3d computer update -> help

Discussion in 'Programming & Software Development' started by isometrix, Feb 7, 2003.

  1. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,518
    Location:
    Brisbane
    the company i work for models very large specialist buildings and cityscapes, and very accurately (lots of curves!). some of our larger scenes easily push 2 million faces. the "smaller" ones sit around the 100k to 250k face mark.

    we use quadro4 900XGLs for our stuff for three reasons:
    1) they are half the price of a wildcat
    2) they are slightly faster than fireGL X1 cards in shaded mode (and again, cheaper)
    3) driver support and stability is slightly better (subjective)

    it's the classic case of horses for courses. as a student, i don't think you can look past a gf4ti4200 for a great price-to-performance ratio in your 3d apps. again, i've had some not-so-pleasant experiences with professional openGL stuff on ATi, but that may well have changed of late with updated drivers; i wouldn't know.
     
  2. proffesso

    proffesso Member

    Joined:
    Jan 5, 2002
    Messages:
    9,109
    Location:
    Watsonia, Melbourne
    wildcats are old news, they can't keep up with current quadros (even the 7110)

    i've used a lot of cards (firegl4, wildcat2 5000, wildcat 5110, quadro 2 - 4 etc) and the nvidia gets the 'overall' vote every time. the firegl (ati series) cards are faster in wireframe, but only marginally. the wildcats were... a disappointment. same with the firegl 2 and 4.

    also, a quadro 550 is bottom of the pile. a 700 or 750 is the best value.


    Brief 3dlabs history, and why they aren't top of the pile now -

    3dlabs have had their day. we can argue that all you want :) the fact i've chosen other cards over the 2 wildcats i've used is explanation enough.

    people think they kick ass because a few years ago, they did. they were the pinnacle of 3d powerhouses (and cost 6 grand).

    anyone remember the Dynamic Pictures Oxygen 102, 202, 402? the Real3D? Symmetric? AccelGraphics? Elsa (the Gloria series, awesome)? HP (Fx5 / 10)?

    ps, Intergraph used to own the wildcat series btw :p

    3Dlabs did the Permedia, Glint etc - names you should know as well.

    3dlabs had the development money (and purchase cash) to 'continue' other makers' cards. but with the advent of consumer 3d, they had no hope of keeping up with nvidia or ati, and thus were overtaken. it did take a while, as the Quadro 1 - 3 were not as fast as their competitors, but the rate the others were advancing was far beyond what 3dlabs could match.

    the 7110 and 7210 could only just keep up with the Quadro4....the QuadroFX trounces them.

    http://www.amazoninternational.com/html/benchmarks/graphicCards/quadroFX/quadroFX_2000_page1.asp

    i've never owned a 3dlabs card, unless you count the Oxygen 202 from a few years back (when they bought dynamic pictures). but i have been using them day in and day out at various places (from a Vx1 to wildcats).

    3dlabs are the SGI of today... they used to be the only thing to use if you wanted to be truly uber pro... SGI can't even sell a workstation these days because x86 machines are so much faster (due to consumer demand, and thus consumer dev budgets).

    ps, Jacaranda... try dual monitors with your 8500, then it won't be as fast as the 4200 (we benched a friend of mine's, in XSI)

    render time is all well and good, but you won't be playing with advanced shaders with 40 2k maps and blurred reflections on every object when you start out. you'll spend more time learning how to model and animate.


    let the flames begin. i'm only too happy to argue this point these days.
     
  3. Jacaranda

    Jacaranda Member

    Joined:
    Jun 22, 2002
    Messages:
    918
    Location:
    Brisbane
    No argument here, you obviously have more experience than me!
    I'm quite open to being proved wrong, I learn more that way. :)

    The only wildcat I've ever seen in action was hooked up to 5 monitors, and still seemed damn fast and very good looking, which is the reason I assumed they would in general be better than the nvidias.

    From memory I thought the line anti-aliasing in particular looked better than the quadro4 750's that I'm used to.

    However I for one at this point in time can't afford $1300 for a quadro4 750 XGL, so here's a question for you.

    I can get a quadro4 380XL for $388.

    In the same price range I could pick up a r9500 or a ti4200/4400, and then soft-firegl/soft-quadro it.

    For the purposes of modelling and animating in maya, and playing a few games on occasion, which would in your opinion give best bang for buck?

    *sigh* Pity I can't soft-firegl my 8500...

    But hey, I'm a student, I doubt I've got the cash to upgrade anytime this year, so this is mainly hypothetical.
     
  4. proffesso

    proffesso Member

    Joined:
    Jan 5, 2002
    Messages:
    9,109
    Location:
    Watsonia, Melbourne
    i'd get the 4200 and soft-mod it. for that budget you don't need any more.

    the wildcat only drives two monitors, so you must have seen a multi-card setup.
     
  5. isometrix

    isometrix Member

    Joined:
    Jun 27, 2001
    Messages:
    193
    Location:
    Sydney
    well i'm about to get 600 dollars (thanx ebay :) and then another 600 because i'm selling my rig to my bro right now. and probably 200 from work.

    That all comes to 1400, so that's my budget I guess.

    I might sell my bro everything except my hdd (good ol' 13 gig haha, thank god for my fileserver), my monitor of course, and my crappy vibra 128 sound card.

    anyway, that's how things are cracking right now
     
  6. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,518
    Location:
    Brisbane
    HAHAHA! sorry, but the LAST place i would quote is amazon for card reviews. they are the laughing stock of the international 3d community of late with their dodgy reviews. and "SPEC VIEW PREF"? might be nice if they could even spell the name of the benchmark they are using. that, and anyone worth their salt knows specviewperf is as biased as 3dmark.

    i've tested a number of cards, and generally speaking, wildcat cards use far less CPU than nvidia and ATi pro cards. typically, in absolutely massive scenes, they are a lot better performers.

    HOWEVER... you wouldn't see me buying one. why? too bloody expensive. do a price:performance ratio and you just can't beat a quadro4 these days. quadros are also going through a big price drop in the states at the moment, which means we should see the same here soon. with the advent of the quadroFX, the average user who's not modelling scenes for PIXAR movies can happily sit on older-generation hardware for one tenth the cost.
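    for what it's worth, the price:performance argument is just a ratio. here's a tiny sketch of the comparison; the prices and scores below are made-up placeholders for illustration, not real benchmark numbers.

```python
# price:performance sketch -- every figure here is a hypothetical
# placeholder, not a real 2003 price or benchmark result
cards = {
    # name: (price_aud, relative_viewport_score)
    "quadro4 900XGL": (2500.0, 100.0),
    "wildcat 7110":   (5000.0, 110.0),
    "fireGL X1":      (2300.0, 90.0),
}

for name, (price, score) in cards.items():
    # higher score-per-dollar means better value for money
    print(f"{name}: {score / price * 1000:.1f} points per $1000")
```

    on numbers like these, a wildcat that's marginally faster but twice the price loses the value argument badly.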
     
  7. proffesso

    proffesso Member

    Joined:
    Jan 5, 2002
    Messages:
    9,109
    Location:
    Watsonia, Melbourne
    sigh.

    seems we might have to run some of our own benchmarks then.

    SpecViewPerf IS an industry standard, whether you like it or not. (biased or not, the FX still scores about 50% better, if not more, on most tests... that's too big a margin to be down to bias, considering 3dlabs pasted SpecViewPerf all over their pages until the Wildcat4 came out...)

    you're telling me you've never made a spelling mistake? hmm? word isn't going to pick it up on a web page, so you can't do anything there.

    the last wildcat I had here was the 5110. it was on par with the DCC and FireGL 3/4, but a LOT slower in textured views; it only got faster (in wire and shade) after about a mil polys (read below).
    that was in Softimage 3.9.2.2 and XSI 1.5

    know your market. if you're using SolidEdge, Pro/E or CadKey etc, and you might have 3000 nurbs surfaces making up a plane (equalling 3+ mil polys in the viewport), then something with less cpu overhead will matter.

    you're never going to touch 3 mil polys in the viewport in Maya or XSI... if you do, then you're just daft. ILM uses proxies for all its animation, so learn from that.

    for the DCC market, the quadro is the fastest, best way to go. if you still aren't convinced, then wait for the 8210 vs QuadroFX2. you will see what development costs can achieve.

    what cards, and what platforms (and apps) have you tested under? I can probably still benchmark a few cards we have left here. I can test under Maya 4.5 Unlim, Max 5, XSI 3.01 and Soft3d 4.0

    I only use XSI 3.01 and maybe some soft3d 4.0, so I can't really vouch for the others. but we have licences, so testing isn't a problem.

    and any previous versions of those.

    I would love to know where you think good benchmarks can be found (application benchmarks). I don't care if a card can do it all; I want the best card for my app.

    benchmark dual monitors too. cards vary quite a lot there.


    elvis, we might want to take this off-list, as it's getting a bit specific.
     
  8. titan

    titan Member

    Joined:
    Dec 28, 2001
    Messages:
    2,887
    Location:
    Leichhardt, Sydney
    i wouldn't go for a top-range cpu cos it's not necessary. most workstations without dualies use the gfx card for rendering, which works amazingly well on my system of xp1800+, 512 ddr and gf4ti4400
     
  9. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,518
    Location:
    Brisbane
    i work for an architectural company. the primary software we use is bentley microstation and triforma, autocad architectural desktop (ADT) 3.3, discreet 3d studio max 5.1 and autodesk VIZ 4.

    we're a 100% dell office, so all of our workstations are dell precisions typically. we have the option of wildcat, ati fireGL x1 and quadro4 cards. every machine i've purchased in the last 12 months has been quadro based, primarily for the great cost to performance ratio. wildcats are hideously expensive for not much gain, and the ATi cards still haven't quite got their drivers up to spec.

    as for specviewperf... no thanks. it's becoming as corruptible as 3dmark. companies are now realising they can optimise their drivers just for SVP so that when people test them they look great, only to have mediocre performance later on in applications.

    also, one other thing that SVP doesn't handle is 3d studio max's custom driver ability. SVP is openGL-only, and custom drivers within MAX/VIZ can at times give a 25% boost in viewport rendering. most other 3d apps use GL as their primary viewport API. 3dlabs, nvidia and ati all have custom 3dsmax drivers that SVP can never benchmark.

    if i HAD to choose a benchmark, it would probably be spec APC. SAPC runs within an application, which is the ONLY way to judge the true performance of said application. the only gripe i have is that the models used in spec APC are typically much smaller in polycount than the scenes my 3d guys use.

    and speaking of 3d scene size: here's the problem companies like mine face. typically 3d art is just that: art. if something isn't 100% to scale, or 100% accurately modelled, it's not that big a deal, as it's for presentation purposes. the "modelling by proxy" you mention above is proof of that. if it looks right to the eye, then it's good enough for print.

    in architecture we use our 3d models to generate architectural drawings, elevations, sections, and the like. this is a relatively new thing in architecture (compared to 3d in entertainment, games, movies, etc, which has been around commercially for a decade or so), aimed at reducing the time taken to develop architectural information. about 10% of our 3d work is for marketing, print or film. the rest is for information gathering purposes.

    now here's the problem: on a drawing at a scale of 1:1000, a 2mm error means 2 metres in the real world. that's the difference between a skyscraper standing, and a skyscraper that falls down when it fills with its 5000 workers. even 1000 people at 50kg per person is 50 metric tons of mass.
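    to make that arithmetic explicit (a small sketch; the helper names are mine, the numbers are just the ones above):

```python
# scale-error and crowd-mass arithmetic from the post, spelled out
def real_world_error(drawing_error_mm: float, scale: int) -> float:
    """Real-world error in metres for a drawing error at 1:scale."""
    return drawing_error_mm * scale / 1000.0  # mm -> m

def crowd_mass_tonnes(people: int, kg_per_person: float) -> float:
    """Total mass of a crowd in metric tonnes."""
    return people * kg_per_person / 1000.0  # kg -> t

print(real_world_error(2.0, 1000))    # 2mm at 1:1000 -> 2.0 m
print(crowd_mass_tonnes(1000, 50.0))  # 1000 people x 50kg -> 50.0 t
print(crowd_mass_tonnes(5000, 50.0))  # 5000 workers -> 250.0 t
```

    so the full 5000-person load is five times the 50-tonne figure: small drawing errors scale up fast.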

    we work to accuracy. if not, it means people's lives. if a scene needs 2 million polys to be accurate, then we use 2 million polys. i can't really go into detail as to what sort of buildings we design due to some very heavy agreements in my contract, but i can tell you now they're MUCH bigger than 5000 people's worth! :)

    anyways, moving along from that: for autocad we use the AUGI benchmarks for our 2D benchmarking. as yet i don't think there are any 3D benchmarks for microstation or autocad modelling. the problem is that the people using these tools rarely care about, or even understand, what goes on "under the hood" so to speak. and IT guys like myself shudder at the idea of having to learn CAD to work out how to benchmark an application! :)
     
