
NVIDIA's Tesla

Discussion in 'Video Cards & Monitors' started by stmok, Jun 21, 2007.

  1. stmok

    stmok Member

    Joined:
    Jul 24, 2001
    Messages:
    8,882
    Location:
    Sydney
    I'm not sure if it should be in this section of the forums, so Moderators, feel free to move it to the appropriate section.

    Now, Nvidia's Tesla is the response to AMD's "Stream Processor". In both cases, we use GPUs to accelerate number crunching.

    Official page
    http://www.nvidia.com/object/tesla_computing_solutions.html

    Beyond3D's article on it.
    http://www.nvidia.com/object/tesla_computing_solutions.html

    Official page for Developer tools
    http://developer.nvidia.com/object/cuda.html
    (Tools available for both Windows and Linux)

    Because developing for such a solution is NOT easy, Nvidia has provided info and technical expertise to UIUC (University of Illinois at Urbana-Champaign). All the lectures (MP3 format), slides, etc. are freely available for you to download and read. They can be found here...

    ECE 498 AL : Programming Massively Parallel Processors
    http://courses.ece.uiuc.edu/ece498/al/index.html

    You need at least a G8x GPU (i.e. a GeForce 8xxx) and knowledge of the C programming language to play with CUDA (Compute Unified Device Architecture).
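    To give you a rough idea of what CUDA code looks like, here's a quick vector-add sketch (all names like vecAdd are my own, not from Nvidia's docs) - it's really just plain C plus a couple of GPU keywords and the <<< >>> launch syntax:

    // Rough sketch only: add two float vectors on the GPU with CUDA.
    // Names (vecAdd, N, etc.) are made up for illustration.
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int N = 1 << 20;
        size_t bytes = N * sizeof(float);

        /* Ordinary C on the host side. */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < N; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* Device (GPU) buffers. */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all N elements. */
        int threads = 256;
        int blocks = (N + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, N);

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  /* expect 3.0 */

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

    You build it with something like "nvcc vecadd.cu -o vecadd" (assuming you've installed the CUDA toolkit from the developer page above).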
     
  2. Ajax9000

    Ajax9000 Member

    Joined:
    Sep 5, 2006
    Messages:
    125
    Location:
    Sydney
    Someone over at DailyTech noted that many users of high-performance computers would want double-precision processing -- but the NVIDIA CUDA Release Notes V0.8 state that although CUDA supports the C "double" data type, on G80 GPUs these types get demoted to 32-bit. They then state that NVIDIA GPUs supporting double precision in hardware will become available in late 2007.

    Unless they have quietly added 64b support to a new revision of the G80 (unlikely, and even more unlikely that they wouldn't shout about it), you'd be mad to shell out US$12k now for 32b if 64b is less than 6 months away.
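    To be clear about what that demotion means in practice (a made-up kernel, just my reading of the release notes): you can write "double" today and it will still compile and run on G80, but the arithmetic silently happens at 32-bit precision, so the answers are no better than if you'd written "float".

    /* Hypothetical kernel, only to illustrate the release-note behaviour. */
    __global__ void scale(const double *x, double *y, double k, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = k * x[i];  /* on G80 this multiply is really done at 32-bit (float) precision */
    }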

    Adrian
     
  3. proffesso

    proffesso Member

    Joined:
    Jan 5, 2002
    Messages:
    9,166
    Location:
    Melbourne
    True, but the guys who spend 12k on these things will have another 12k in six months... probably left over from petty cash.
     
  4. doolybird

    doolybird Member

    Joined:
    Sep 14, 2001
    Messages:
    72
    Location:
    Newcastle
    I work in an HPC environment at a uni and, despite the cringes of lots of software developers, a hell of a lot of HPC programming is done in FORTRAN and its modern variants... wish tools like CUDA were available for that language.
     
  5. tornado33

    tornado33 Member

    Joined:
    Mar 31, 2003
    Messages:
    6,992
    Location:
    Newcastle
    They will be making personal supercomputers with the Tesla system.

    link

     
  6. AFireInside

    AFireInside Member

    Joined:
    Apr 29, 2004
    Messages:
    615
    Location:
    Melbourne
    Umm, correct me if I'm wrong, but isn't CUDA Nvidia's version of ATI's Stream? Tesla just seems to be a way to market CUDA to businesses at a price premium, when a high-end gaming card can do the same thing with the CUDA software. Am I wrong?
     
  7. Merudo

    Merudo Member

    Joined:
    Jun 26, 2007
    Messages:
    5,437
    Very interesting stuff.

    This is why I love R&D like this, it can only mean good things :)
     
  8. EraSa

    EraSa Member

    Joined:
    May 17, 2004
    Messages:
    195
    Any actual connection to the great man himself?
    Or just branded Tesla?
     
  9. FatBoyNotSoSlim

    FatBoyNotSoSlim Member

    Joined:
    Sep 9, 2002
    Messages:
    14,099
    Location:
    SE Melbourne
    CUDA-powered x264 encoding or fail... The number of HTPC recordings that I want transcoded, and fast, is growing... :D
     
  10. mitsimonsta

    mitsimonsta Member

    Joined:
    Aug 21, 2006
    Messages:
    10,996
    Location:
    2161 / 2060
    Same end result, but uses two ways to get there.

    ATi uses code programmed in Brook - the Folding@Home GPU2 client with an ATI card uses this. It is easy to program (once you know how), however tweaking/optimising code to get the most out of the hardware is very difficult.

    nVidia uses CUDA, which is basically C with some streaming extensions added. This makes for VERY easy programming, and the resultant code is very good, as C and the C compilers for many platforms have had 30+ years of development behind them. It was a very smart move on nVidia's part to build its GPU streaming environment on C. A rough sketch of what those extensions look like is below.
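    The extras CUDA bolts onto plain C boil down to a handful of keywords and the launch syntax. This is my own illustrative kernel (assumes 256-thread blocks and an input length that's a multiple of 256), not anything from nVidia's samples:

    // The CUDA additions to C, in one small (hypothetical) kernel:
    //   __global__     - function runs on the GPU, callable from the host
    //   __shared__     - per-block scratch memory shared by a block's threads
    //   threadIdx etc. - built-in variables identifying each thread
    //   <<<...>>>      - launch syntax giving the grid and block dimensions
    __global__ void scaleAndSum(const float *in, float *blockSums, float k)
    {
        __shared__ float tile[256];              // one slot per thread in the block
        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x + tid;

        tile[tid] = k * in[i];                   // each thread scales one element
        __syncthreads();                         // wait for the whole block

        // Tree reduction within the block (blockDim.x assumed to be 256)
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (tid < stride)
                tile[tid] += tile[tid + stride];
            __syncthreads();
        }
        if (tid == 0)
            blockSums[blockIdx.x] = tile[0];     // one partial sum per block
    }

    // Host-side launch (everything else is ordinary C):
    //   scaleAndSum<<<numBlocks, 256>>>(d_in, d_blockSums, 2.0f);

    Everything outside those keywords is just C, which is why the learning curve is so much gentler than Brook's.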
     
  11. round

    round (Banned or Deleted)

    Joined:
    Apr 7, 2007
    Messages:
    15,473
    Location:
    /pol/
    Too many platforms = fail.

    It can never be mass market with many, many different processor designs.

    Look at AMD and Intel for example: they both execute the same code, whereas ATI and nVidia each have their own new code.

    Once again, it will only become truly mass market once Intel joins the party and adds GPU-style processing to its processors on-die. As it stands this is a very limited processing feature, just like the physics add-on card idea was.
     
  12. FLB

    FLB Member

    Joined:
    Jan 30, 2003
    Messages:
    3,580
    Location:
    Adelaide
    This is where OpenCL (Open Computing Language) comes in on Apple's side of town, and DirectX 11 Compute Shaders on the MS side of town.
     
