MSSQL Boxes: Opteron vs Xeon

Discussion in 'Business & Enterprise Computing' started by bsbozzy, Jan 20, 2005.

  1. bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,924
    Location:
    Sydney
    OK, need some sql and smp freaks here.

    I have 2/3 situations here: we need a database server, and to follow our current suit it will be HP branded. Now, there's the DL580 quad Xeon, the DL585 quad Opteron, or the DL760 8-way Xeon.

    I'm looking at either the 580 or the 585; the 760 just doesn't seem to cut it with only PC133 memory vs the PC2100 stuff in the 58x's. So if anyone can give any advice, it would be great.

    Cheers

    Brett
     
  2. flugle

    flugle Member

    Joined:
    Jun 29, 2001
    Messages:
    857
    Location:
    Newcastle, NSW
    I don't see too many 760's going out any more. They only have PC133, but they still have a shitload of memory bandwidth as you use 10 DIMMs per memory kit (OK, well 8, as 2 are for RAID). But it's the CPU scaling past 4 CPUs that tapers off.

    580s or 585s are probably the best bet for a mid-range box, with plenty of memory specced in.

    Depending on the app (if it's MS SQL) you may want to consider an Itanium 2 box. Initially the hardware will be dearer, but if you're paying for SQL Enterprise per-CPU licences (again, I'm making an assumption) you can get away with half as many CPUs for similar performance. You'll save a ton on software licences and still have a top-performing box. Something like an rx2620 with 2x 1.6GHz 6MB cache CPUs is a pretty kick-arse box.
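
    To put rough numbers on the licensing side (the per-CPU price below is a made-up placeholder, not a quote, so plug in your real figure):

        -- Illustrative licence maths only; @per_cpu is an assumed placeholder price
        DECLARE @per_cpu MONEY
        SET @per_cpu = 20000
        SELECT  4 * @per_cpu AS four_way_xeon_licences,    -- four per-CPU licences
                2 * @per_cpu AS two_way_itanium_licences,  -- half the CPUs, half the licences
                2 * @per_cpu AS saving_towards_hardware    -- the difference to spend on the box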
     
    Last edited: Jan 20, 2005
  3. OP
    OP
    bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,924
    Location:
    Sydney
    No Itanium, it will be Xeon/Opteron based (leaning 90% towards Opteron) with Win2k3 ES and SQL 2000 SP3. If SQL 2005 comes out before we get the server, then it will most likely be running SQL 2005. Also, I will be attaching an HP SAN to the server, decked out with hopefully 146GB 15K disks.
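
    Whichever version it ends up on, a quick query from Query Analyzer confirms the edition and service pack level (these SERVERPROPERTY values exist from SQL 2000 onward):

        -- Confirm edition, build and service pack level of the running instance
        SELECT  SERVERPROPERTY('Edition')        AS edition,
                SERVERPROPERTY('ProductVersion') AS build,
                SERVERPROPERTY('ProductLevel')   AS sp_level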
     
  4. stalin

    stalin (Taking a Break)

    Joined:
    Jun 26, 2001
    Messages:
    4,581
    Location:
    On the move
    Opterons in the 585 will be quicker than the same Xeon box. If you REALLY feel an 8-way box is justified, get 2x 585s and cluster them using your nice SAN.
    Note: a single DB will not speed up when clustered over multiple machines, because active-active clustering was removed in SP2 of SQL. But if you build it to suit, half the databases can sit on one server and half on the other, which is effectively like load balancing SQL 2000; when one fails, the cluster resources can happily move over to the remaining server. I think 2005 will support full active-active clustering.
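
    If you do go the cluster route, a quick sanity check from T-SQL tells you whether an instance is clustered (both properties are available on SQL 2000):

        -- Check whether this instance is failover clustered and what name it answers to
        SELECT  SERVERPROPERTY('IsClustered') AS is_clustered,  -- 1 = clustered instance
                SERVERPROPERTY('MachineName') AS server_name    -- virtual server name when clustered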

    As for 146GB disks... I would strongly recommend against them. Get the 72.8GB disks at 15K; disk I/O will be much better. I would be guessing the database won't be that big; if it's a 130GB DB, as an example, it will only be spread across 2, 3 or 4 (dependent on RAID config) 146GB disks. Do that on the 72.8's and you get twice as many spindles working for you. Price/performance, the 72.8GB 15Ks are the best.
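
    Back-of-envelope, assuming a rough rule of thumb of ~180 random IOPS per 15K spindle (an estimate, not a measured figure):

        -- Rough spindle maths for a ~130GB database; 180 IOPS per 15K disk is assumed
        SELECT  4 * 180 AS iops_on_4x146gb,  -- the capacity fits on four big disks
                8 * 180 AS iops_on_8x72gb    -- same capacity, twice the spindles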

    Also, when you're configuring the array on the SAN, don't forget to make the array a DB array; the performance difference due to access patterns, caching etc. is noticeable. Also, when you're partitioning it up, if you're going the cluster route or plan to later, make sure you leave ~1GB (1GB is ample) for a quorum drive. It sucks when there is no spare capacity and you just need a little bit for the quorum... without it you have no cluster.

    PS what interface to the SAN are you using?
     
  5. stalin

    stalin (Taking a Break)

    Joined:
    Jun 26, 2001
    Messages:
    4,581
    Location:
    On the move
    Option 2: if you're not averse to going IBM, an x445 would do you. 4-way Xeon, expandable to 16-way (you buy 4-way at a time). You lose some performance with the CPU scaling (as expected), but if you want one BIG system it's a cheap way to start. Also, it does 32GB DDR for the 4-way.

    It uses X-Architecture, or whatever they call it, to interconnect between the servers. It's pretty funky technology (for Intel-based systems, anyway), and it will interface fine and dandy to an FC-2 or iSCSI SAN.
     
  6. Ranchu

    Ranchu Member

    Joined:
    Jun 27, 2001
    Messages:
    1,131
    Location:
    Brisbane
    I think you should reassess your position on Itanium 2 since nothing else can match the performance it offers for SQL Server.
     
  7. OP
    OP
    bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,924
    Location:
    Sydney
    But that means all the stuff my developers are writing would have to be re-written.

    I have already proposed the 585, and if we need to scale, 2 or 3 585s. Too bad no one has any W2k3 + SQL benchies; well, none that I can find anyway. I have repositioned to 72.8GB 15Ks anyway. So far I have been recommended RAID 10, but that's fodder for a later time. Itanium is definitely a no-go.
     
  8. stalin

    stalin (Taking a Break)

    Joined:
    Jun 26, 2001
    Messages:
    4,581
    Location:
    On the move
    Hmm, benchies... I used to look after about 8 SQL 2000 on W2k3 clusters on an IBM SAN. All ran Xeons due to company policy (a policy of wasting money, IMHO), although I don't touch MSSQL anymore, woo hoo!

    I don't work there anymore, but I might be able to get one of the guys there to do some benchies on a spare cluster or two. What sort of info are you after? What benchies do you want run against it?

    One DB was on 2x IBM x345, dual 3.2GHz Xeon, 4GB DDR, to an IBM FAStT700 SAN array of 9x 76GB 10K FC-2 disks in RAID 5 (F&P access modes).

    The lack of RAM sucked! The 10K disks didn't help, but the lack of RAM compounded the problem, RAID 5 made the disks worse, and the F&P access mode added yet more fuel to the fire. Yet it managed, just. None of that crap was my choice. :tired:
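
    For anyone wondering why RAID 5 made it worse: every small random write on RAID 5 costs four physical I/Os (read data, read parity, write data, write parity) versus two on RAID 10 (one per mirror side). Rough numbers, assuming ~120 random IOPS per 10K spindle (an estimate):

        -- Random-write ceiling of the same shelf under each RAID level (figures assumed)
        SELECT  (9 * 120) / 4 AS raid5_write_iops,   -- nine 10K spindles, 4-IO write penalty
                (8 * 120) / 2 AS raid10_write_iops   -- eight spindles as 4+4 mirrors, 2-IO penalty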

    15K disks, FC-2, RAID 10 over as many disks as your SAN allows. Although inter-enclosure speeds are fast, they seemed slower than within enclosures, so maybe aim for 1 DB per enclosure, with some space for expansion. And Opterons make for a happy DB. Also, you have 64-bit support if you wanted it.
     
  9. stalin

    stalin (Taking a Break)

    Joined:
    Jun 26, 2001
    Messages:
    4,581
    Location:
    On the move
    I was working on a 6-way 700MHz Xeon MP today regarding performance issues... it surprises me more and more how poorly Xeons scale. The 6-way was not a lot quicker than the 4-way in this situation.

    I have no idea why I made this another post. :confused:
     
  10. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    11,683
    Location:
    Canberra
    I'd also recommend an IBM x445. Cheaper than the HP equivalent, and the scalability is brilliant, with 32-way becoming available with the next-gen Xeon MPs.

    We've got several of these at work (16-way / 32GB / SAN), all heavy SQL2000 users. Our smallest database is 20GB (an OLAP Cube), the biggest 1.5TB.
     
  11. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,516
    Location:
    Adelaide, SA
    so many excellent (and kewl) choices

    another vote for the IBM here, although the oppys would be my choice if limited to HP.

    Stalin: that 100MHz shared FSB on the Xeons sure chokes quick, doesn't it?
     
  12. joe_sixpack

    joe_sixpack Member

    Joined:
    Jan 21, 2002
    Messages:
    2,850
    Location:
    Brisbane
    You could also look at some of the Opteron offerings from Sun. They have several 4-way Opteron servers that support Windows 2003 EE. Something like the Sun Fire V40z?
     
  13. OP
    OP
    bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,924
    Location:
    Sydney
    Thanks guys, but my company is purely HP servers only; they started buying about a year ago, and we aren't changing to IBM now. I think a nice MSA1000 full of 72.8GB 15K disks would be sweet to add on top of a DL585 quad Opteron with a minimum of 8GB RAM.
     
  14. FaTs

    FaTs Member

    Joined:
    Aug 25, 2002
    Messages:
    1,455
    Location:
    Bris Vegas...
    Why not go with the new-gen 15Ks and go 146GB?
     
  15. OP
    OP
    bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,924
    Location:
    Sydney
    They will be 15K drives, but the 72.8GBs just seem to have a much faster response time, unless you can prove me wrong.
     
  16. flugle

    flugle Member

    Joined:
    Jun 29, 2001
    Messages:
    857
    Location:
    Newcastle, NSW
    Because the idea is to get as many spindles running at once to increase system throughput. The more disks, the better. The MSA bsbozzy has can handle 42 disks in its max config. It's much better to have 10x 72GB disks than 5x 146GB.

    Frankly, it's sometimes debatable whether even 15K disks are needed. Those HP SAN controllers are pretty smart devices and have half a gig of cache, which can do some pretty smart caching.

    Speaking to a few HP storage engineers, most of them reckon you get better performance by saving the money on 15K vs 10K disks and using the difference to buy even more 10K disks. Naturally, this depends on what sort of data access is being made.
     
    Last edited: Jan 21, 2005
  17. OP
    OP
    bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,924
    Location:
    Sydney
    What about if cost doesn't matter, and it's more about performance than cost?
     
  18. flugle

    flugle Member

    Joined:
    Jun 29, 2001
    Messages:
    857
    Location:
    Newcastle, NSW
    15k still may not necessarily be the best option even if cost doesn't matter.

    It's been a couple of months since I've touched an MSA, but in one of the advanced panels in the ACU you can get details on the cache hits. On an MSA (and even more so on an EVA) the percentage of hits is pretty darned good, and that cache memory is a helluva lot faster than even a 15K disk.

    Of course if cost doesn't matter, then why not do both :)

    Does your MSA just have the 1 onboard shelf, or any other shelves? The standard shelf that comes with the unit is split-bus, and you can attach either another split shelf or 2 single shelves. For best performance, if cost is no object, I'd split those 10 disks, or even more, across as many channels as possible.
     
  19. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    11,683
    Location:
    Canberra

    We make HEAVY use of NetApp storage appliances at work. We want outright performance, so we use 18GB 10K RPM disks, and LOTS of them: 112 to be exact, over two clustered 'heads'. We're just upgrading, because we need more space, again with performance being the requirement.

    112x 72GB 10K RPM drives, spread over two 'heads', again clustered. Not cheap (~$1M), but it keeps the 8 gigabit links going to them busy.
     
  20. stalin

    stalin (Taking a Break)

    Joined:
    Jun 26, 2001
    Messages:
    4,581
    Location:
    On the move
    Did you read my post?

    Get 146GB 15K disks and only allocate small portions of them to the arrays. You get better platter density, high RPM, and lots of spindles because you have lots of disks... but then of course it's a waste of money.
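
    As a sketch of that idea (all figures below are made up for illustration): carve a thin slice off each big disk and you keep the spindle count high while the heads only ever travel a short band of the platter.

        -- Short-stroking sketch; slice size and disk count are assumed for illustration
        SELECT  12 * 146 AS raw_gb,        -- twelve 146GB spindles in the array
                12 * 20  AS allocated_gb   -- a 20GB slice per disk still yields 240GB usable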

    Aeth: Yup, 100MHz doesn't really cut it. Building the replacement cluster now. ;)


    Anyway, as for the IBM/HP... the x445 would still be the best option (ignoring Sun, as I'm not familiar with their hardware anymore). Where I was, there were 4 racks of IBM + 2 racks of HP in the one portion of the server room, and I must say they don't look too bad next to each other. The IBM gear might not fit in HP racks, as it is deeper. But I suppose if you don't want IBM gear, you don't want IBM gear. Get the next best thing, which is the 585.

    Take some photos for us as well. If you post yours up, I will stick up one of what I looked after... it's not classified, so I'm allowed to.
    ;) ;) *cough cough*

    PS: I asked the question before but it's been missed.

    What interface are you using? Redundant?
     
    Last edited: Jan 21, 2005
