
Skybuck's Review of GTC 2014

Discussion in 'Nvidia' started by Skybuck Flying, Mar 25, 2014.

    Hello,

    This is Skybuck's review of the GPU Technology Conference 2014 (GTC 2014),
    held by NVIDIA.

    First I watched a little YouTube video of the Pascal announcement, which
    really excited me... I just couldn't believe it... it's way cool.

    Then I watched the full keynote on the Twitch live stream. At first I
    couldn't find it... but it's at the bottom where it says "watch the replay":

    http://www.gputechconf.com/page/live-stream-source2.html

    I watched it at 360p, which was good enough and fast enough for both video
    and audio, so that's nice... there were some hiccups during download, but
    nothing too major... 480p was maybe a bit too much, I don't know about
    that...

    Anyway... on to the news... hihi:

    The biggest news of GTC 2014 is:

    VISION RECOGNITION.

    Yup... it can all be summed up with just two words, "vision recognition"...
    which is all about software/hardware/computers recognizing the
    environment/objects that they live in/are around.

    I couldn't agree more with the CEO of NVIDIA... "computer vision
    recognition" has huge potential for this world that we live in... to
    automate all kinds of tasks.

    And the good news for NVIDIA is that it requires huge processing power...
    which probably and hopefully can be parallelized by their CUDA chips.
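
    To give an idea of why this maps so well onto the GPU (a minimal sketch of
    my own, not anything NVIDIA showed): a simple image filter, the kind of
    primitive that vision pipelines are built from, parallelizes naturally
    with one CUDA thread per pixel.

        // Sketch: one CUDA thread per pixel applies a 3x3 box blur.
        // Filters like this are basic building blocks of vision
        // pipelines, and they map one-to-one onto GPU threads.
        __global__ void boxBlur3x3(const unsigned char *in, unsigned char *out,
                                   int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            int sum = 0, count = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                        sum += in[ny * width + nx];  // accumulate neighbors
                        count++;
                    }
                }
            out[y * width + x] = (unsigned char)(sum / count);
        }

        // Hypothetical launch for a 1920x1080 grayscale frame (d_in and
        // d_out are device buffers; the names are made up here):
        //   dim3 block(16, 16);
        //   dim3 grid((1920 + 15) / 16, (1080 + 15) / 16);
        //   boxBlur3x3<<<grid, block>>>(d_in, d_out, 1920, 1080);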

    This is somewhat of a turnaround for their company... in the past it wasn't
    so clear what CUDA's potential/capabilities would be beyond the obvious...
    computer graphics, gaming, particle simulation...

    But now computer vision is also part of that... and deep neural networks
    got some attention as well.

    However, the competition in the chip market has not been sitting still. I
    vaguely remember that other chip companies are also exploring building
    neural-network hardware into their chips.

    So NVIDIA can potentially expect some competition from its rivals when it
    comes to neural networks and computer vision.

    For now CUDA is well positioned... it is general purpose and can be used
    for all kinds of applications, including neural networks and computer
    vision.

    Their ode to Blaise Pascal is nice... but a bit risky... some C/C++
    programmers may look down on Pascal... but real programmers do respect
    Pascal as a good teaching language.

    I hope that in the future NVIDIA will honor Pascal even more... by adding a
    Pascal compiler lol :) That would just be cool, and nice for teaching
    programmers how to use these chips.

    Maybe I will also start selling my CUDA framework to Delphi programmers...
    Delphi is a Pascal clone/derivative... so that they can also enjoy CUDA.

    Yes... I think it is time to unleash the power of Delphi unto CUDA/NVIDIA...
    so these coming days/months... I will try to make my CUDA framework
    available... at least for some dollars :)

    Maybe somebody will buy it, or maybe nobody... there is a bit of a
    chicken-and-egg problem. Pascal/Delphi programmers must be able to try it
    out to see if it has any value... otherwise they may not be too interested
    in it... or perhaps they will... time will tell... some demos or freebies
    would be necessary to break through the chicken-and-egg problem.

    Though of course there is also the possibility of simply using CUDA C/C++
    to get some sense or feeling for it... though Pascal/Delphi would still be a
    bit nicer.
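
    For instance, the canonical first CUDA C/C++ program is a vector add...
    here is a minimal sketch of my own of what "getting a feel for it" looks
    like (it uses the unified memory from CUDA 6, which keeps the host code
    short):

        #include <cstdio>
        #include <cuda_runtime.h>

        // Each thread adds one pair of elements.
        __global__ void vecAdd(const float *a, const float *b, float *c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main()
        {
            const int n = 1 << 20;            // about a million elements
            size_t bytes = n * sizeof(float);

            float *a, *b, *c;                 // visible to both CPU and GPU
            cudaMallocManaged(&a, bytes);
            cudaMallocManaged(&b, bytes);
            cudaMallocManaged(&c, bytes);
            for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

            vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
            cudaDeviceSynchronize();

            printf("c[0] = %f\n", c[0]);      // expect 3.0
            cudaFree(a); cudaFree(b); cudaFree(c);
            return 0;
        }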

    Also, my interest is a little bit in Python... some possibilities exist
    there too... and Sikuli, which is also kinda cool... maybe some CUDA code
    could be added to Sikuli to speed up its recognition software.

    I am sure I or other people could do that.
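
    A rough sketch of what such a speedup could look like... this is my own
    guess at the shape of Sikuli-style image search (template matching), not
    Sikuli's actual code:

        // Sketch: template matching by sum of absolute differences (SAD).
        // One CUDA thread scores one candidate position in the screenshot;
        // the position with the lowest score is the best match.
        __global__ void sadMatch(const unsigned char *image, int iw, int ih,
                                 const unsigned char *templ, int tw, int th,
                                 unsigned int *scores)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x > iw - tw || y > ih - th) return;  // stay inside the image

            unsigned int sad = 0;
            for (int ty = 0; ty < th; ty++)
                for (int tx = 0; tx < tw; tx++)
                    sad += abs((int)image[(y + ty) * iw + (x + tx)] -
                               (int)templ[ty * tw + tx]);
            scores[y * (iw - tw + 1) + x] = sad;
        }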

    What was also interesting about GTC 2014, in more of the same vein, was
    Audi and their "Piloted Car" project... a car like Knight Rider or
    something lol... it has some kind of capabilities... auto-parking or
    collision avoidance or warning systems... or something... not exactly sure
    what it can do... but it involved some 3D structure-from-motion
    algorithms/software... analyzing video to reconstruct its 3D geometry,
    which sounds interesting but computationally expensive? ;)

    The funny part was the CEO asking if the chip in the back of the trunk
    could be replaced for "upgrades" = selling more chips = more bling bling =
    more dollar signs in eyes = LOL :) Good for him, good for Audi.

    There was more Iray and such, which was still interesting to see... and
    something about a whale... and water and oceans... that part was a little
    bit boring... it didn't look that good to me... I may have to revisit it in
    high resolution.

    Why the CEO is interested in ocean simulations and ocean wildlife is
    something I don't really understand... it may be computationally
    expensive... but beyond that its use to humanity escapes me for now... it
    may be nice for Hollywood... or "Free Willy" lol... but other than that I
    don't think many people will be watching ocean simulations for fun...
    much... I could be wrong though.

    There was another quite interesting demo... a demo about fluid dynamics...
    "gridless"... it runs outside of a grid... and it looked pretty good/cool.
    The fire section did run in a grid but also looked very good/cool...
    hopefully sometime in the future that could also be gridless, that would be
    cool. Such a thing is obviously useful for gaming.
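
    As far as I understand it, "gridless" means particle-based: the fluid
    state lives on particles instead of grid cells, and one CUDA thread can
    integrate one particle. A much simplified sketch of my own of that idea
    (gravity and a floor bounce only... a real solver would also compute
    pressure and viscosity from neighboring particles):

        // Much simplified "gridless" fluid idea: the fluid is particles,
        // not grid cells. One CUDA thread integrates one particle.
        struct Particle { float3 pos, vel; };

        __global__ void stepParticles(Particle *p, int n, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            p[i].vel.y -= 9.81f * dt;       // gravity
            p[i].pos.x += p[i].vel.x * dt;  // advect the particle
            p[i].pos.y += p[i].vel.y * dt;
            p[i].pos.z += p[i].vel.z * dt;

            if (p[i].pos.y < 0.0f) {        // damped bounce off the floor
                p[i].pos.y = 0.0f;
                p[i].vel.y *= -0.5f;
            }
        }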

    I do see a bit of a danger for the CEO... cars nowadays are less of a
    status symbol for youngsters... focusing too much on that may be a bit of a
    mistake... for now... plenty of cars are being sold... though one never
    knows what the future may bring... maybe some new kind of transportation
    system... so the eggs must be distributed among different baskets.

    Another of those baskets is mobile computing... which is of course very
    popular currently... mobile phones, tablets... their Tegra K1 processor, a
    mobile supercomputer, is very interesting.

    And they also have a development kit available for 192 dollars! That could
    be interesting for mobile developers... it only comes with Linux tools
    though... some Windows developers might be left out in the cold for now.
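
    For what it's worth, on the Linux side the kit should behave like any
    other CUDA target... something along these lines (sm_32 is, as far as I
    know, the Tegra K1 GPU's compute capability; vecadd.cu is a made-up file
    name, e.g. the vector-add sketch from earlier):

        nvcc -arch=sm_32 vecadd.cu -o vecadd
        ./vecadd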

    The last part was kinda strange... every attendee got a Shield... a gaming
    device... I guess perhaps most attendees are reporters and not developers?
    I am not sure about that...

    As a developer I would rather have gotten that development kit lol... hmmm,
    I am gonna google what a Shield costs... Wow, 199 dollars?! Quite expensive.

    I am not sure what kind of processor is in there... probably not a Tegra
    K1! ;) :) But at least it's a nice device... I'd be happy with that too...

    But here is an idea for a future GTC 2015 or GTC 2016, if they wanna give
    away more free stuff:

    Give them a choice, just like a switch in a computer:

    Reporters can choose the Shield; developers can choose the Tegra K1
    development kit, or something else in the future.

    Yeah... then it would be interesting to see what each attendee chooses :)

    Perhaps this is also a little spying trick; then NVIDIA would understand
    better who's in the audience and what their interests are... of course
    their choice might be faked, so how reliable it is remains unknown.

    The CEO looked to be in good health, which is also somewhat reassuring for
    anybody investing in NVIDIA or CUDA technology.

    However, I guess big companies might still be nervous about investing in
    NVIDIA/CUDA technology because of big competitors like AMD and Intel, and
    perhaps others.

    Perhaps it's not a bad idea for NVIDIA to try to integrate their graphics
    chip with an Intel processor.

    Right now the graphics part of Intel's Haswell chip might not be that
    great... it's not really all that programmable.

    Meanwhile AMD's processors/APUs might become more programmable... and now
    they also have Mantle... though DirectX 12 might eclipse that...

    There was no word about NVIDIA and DirectX 12, which is a bit odd...
    perhaps it was mentioned at that other conference... or perhaps it's simply
    too new...

    Maybe next year, or at another conference, NVIDIA could go into DirectX 12
    support for their graphics/gaming... yeah.

    Anyway, the idea of a combined Intel/NVIDIA chip could be interesting...
    and would defend a bit against AMD's APU efforts.

    Then again... perhaps buying two separate chips is better for heat... but
    perhaps worse for performance... not sure.

    Not seeing a CPU part in NVIDIA's GPUs does worry me a bit... that might be
    crucial for getting more performance and more software for CUDA.

    At least some kind of serial part... it doesn't have to be a full-blown CPU
    that can run an operating system... just something that can perform serial
    computations... perhaps that would ease the minds of some software
    developers... that whatever their computations are like... NVIDIA has a
    solution for that.

    Be it serial or parallel.

    A parallel-only course is a bit risky.
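
    For what it's worth, the closest workaround I know of today is to run the
    serial section on the GPU itself with a single-thread launch, so the data
    can stay on the device between the parallel phases... a sketch of my own
    (the running sum is just a stand-in for some inherently serial
    computation):

        // Workaround sketch: run a serial step on the GPU with one thread,
        // so intermediate data never has to round-trip back to the CPU.
        // Slow per-thread, but it avoids host/device copies.
        __global__ void serialRunningSum(float *data, int n)
        {
            float running = 0.0f;           // inherently serial dependency
            for (int i = 0; i < n; i++) {
                running += data[i];
                data[i] = running;
            }
        }

        // Launched with a single thread between parallel kernels:
        //   serialRunningSum<<<1, 1>>>(d_data, n);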

    And with that last conclusion I will end this nice posting.

    Though the stage was a bit dark... perhaps they are a bit in the dark
    ages/depressed... time will tell.

    Oh wait, there is one more thing I must discuss... the VMware thing...
    VMware and GPU and CLOUD... the GPU is going to be virtualized for the
    cloud.

    I am not sure if the GPU is already virtualized on the PC; I haven't tried
    that yet... perhaps I should, or maybe shouldn't... my PC/GPU is probably
    not powerful enough anyway...

    For what I would like to do there might be other solutions, like hacking
    DirectX to reduce graphics. Nonetheless, VMware and GPU virtualization is
    interesting...

    I think I wrote about that last time during GTC 2013.

    Another topic was machine learning... which plugged into computer vision
    algorithms a bit... though these are separate topics... machine learning
    can be used for other stuff as well.

    It's going to be an interesting future... the next few years... more
    automation possibilities... this may create problems for workers... job
    reductions... unemployment... and more demand for computer programmers.

    Therefore the need for a good parallel and/or GPU programming language to
    teach new programmers might increase; hence Pascal for GPU makes sense. I
    hope to see it happen some day :) (lol)

    Though I am pretty sure NVIDIA will try hard to provide frameworks/APIs
    and all kinds of other stuff to make lazy programming possible... just use
    the framework and done... but that creates an entirely different
    problem... learning how to use the frameworks... and perhaps the
    frameworks have limitations... perhaps trying a general approach first
    might be better... instead of making a zillion frameworks... because then
    it's not hardware lock-in... but framework lock-in... which may have the
    same drawbacks, or other drawbacks... nonetheless, expecting programmers
    to create their own frameworks is too much to ask... so eventually they
    might start using those frameworks... but perhaps the teaching programming
    language could plug into those frameworks and use them... and kill two
    birds with one stone...

    Sikuli is a nice example of how a programmer using it might get interested
    in visual algorithms and in trying to speed them up... perhaps something
    like that can be created for visual recognition software as well. Perhaps
    combining video inputs and logic code... and API calls... and writing more
    code around that.

    Bye,
    Skybuck.
     
