Discussion in 'Storage & Backup' started by Adzgibbo, Nov 10, 2009.
What OS?
i thought the intel nics and virtual driver would give it away.
vista. would be on 7 if intel didn't take so long to release an updated proset at the time of 7 going gold. vista has no issues so i keep it, no need to reinstall. i still have games that don't play on 7 as well.
Why would that give it away? Intel NICs work on pretty much every OS. I realise this is all about LANs where most people play games, but I figured you might be sitting there playing Tux Racer or something while everyone hammered your system grabbing every Linux ISO released in the last 10 years.
multiple computers can work well for lan, i find i get more bandwidth from multiple machines.
i have moved from this...
Also, back on topic: i find Corsair 1000Ws work pretty well, though i found i needed 2x 1000W psus for my 44 drives (i also had an i7 and a GTX 280)
I accept your challenge, sir!
i have no issues with games. gtx280 provides plenty of frames for 1920x1200 for most things
yeah but that means double entry fee!
i think the problem is splitting the load across the psu's rails. i know it's going to be a nightmare on my enermax as i reach over 600W, as it's really just 2x 500W psus in a single unit
2nd pic: 'stairway to heaven'? the staggered caddys are like stairs
A drive consuming an average of a few watts will only cost about $7 a year to run, so that won't be nearly enough to cover the premium you pay for the largest capacity drive.
32 x 1.5TB = 48TB, $4480
32 drives x 4W ≈ $224/year
24 x 2TB = 48TB, $5760 ($1280 more)
24 drives x 4W ≈ $168/year ($56 less)
So your payback time would be a couple of decades.
This assumes that electricity pricing isn't going to quadruple in 5 years time, of course... but if it does then you'd probably consider turning off a few drives.
edit: this calc doesn't take into account the extra 8 SATA ports you need with 1.5TB drives, which could significantly reduce the total price gap.
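The payback maths above can be sketched in a few lines. The ~$0.20/kWh rate is an assumption chosen so a 4W average draw costs roughly the $7/year the post quotes:

```python
# Payback calculation for fewer, larger drives (figures from the post above).
KWH_PRICE = 0.20           # $/kWh - assumed electricity rate
HOURS_PER_YEAR = 24 * 365

def yearly_cost(n_drives, watts_each=4.0):
    """Electricity cost per year for n drives averaging watts_each."""
    kwh = n_drives * watts_each * HOURS_PER_YEAR / 1000
    return kwh * KWH_PRICE

setup_a = {"drives": 32, "price": 4480}   # 32 x 1.5TB = 48TB
setup_b = {"drives": 24, "price": 5760}   # 24 x 2TB   = 48TB

extra_upfront = setup_b["price"] - setup_a["price"]
yearly_saving = yearly_cost(setup_a["drives"]) - yearly_cost(setup_b["drives"])

print(f"extra upfront: ${extra_upfront}")                      # $1280
print(f"yearly saving: ${yearly_saving:.0f}")                  # $56
print(f"payback: {extra_upfront / yearly_saving:.0f} years")   # ~23 years
```

Which is where "a couple of decades" comes from, before even counting the 8 extra SATA ports the 1.5TB option needs.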
This is true, but only because I'm getting help from the switch
Ah, you're only running 3 nics.
I got the HP quad PCIe nic.
Also you're running your drives in raid 5. I got the same card but 2x 8x 1.5TBs in raid 0 and can hit 700MB/s.
LOL, and we have the same cpu.
Oh yeah, you need the right switch.
I also have a layer 2 48-port gigabit switch to link agg my quad nic to, works like a treat.
The beauty of this HP quad nic is that it uses 4x intel chips, so it needs the intel proset software, which is great.
When creating a team with either link aggregation or a generic switch team it allows you to add any other nics it finds in the system,
and seeing as I have 4 onboard nics as well I can create an 8Gb/s connection. I have tested this and it's fully working.
it uses TWO intel chipsets.
i bet your onboard nics in the team will disable some of the features that are really required with massive bandwidth though. so if you do add them to the team you'll end up going slow due to packet offloading being disabled.
700? i need pics, mainly due to the controller topping out at 500-550MB/sec read and 400-450 write, and this is from hpt themselves.
even with 3 nics, i'm not hitting 100% network utilisation due to running out of cpu cycles for the virtual network adapter.
There is a pic
My mate has the same card with 16 drives in raid 0 and he hits 1300MB/s.
Yeah, even with just my 4 HP nics i don't use full bandwidth, and adding the 4 onboard makes no diff at all.
CrystalDiskMark, kgo.
i run 16x seagate drives (7200.11 series i think) and it all powers up nicely on an el cheapo psu ($50 for the case and psu)
i have the elcheapo case gutted and all the drives custom mounted in it and my desktop next to it with a cable running between the 2 boxes
the beauty of this setup is i have a switch which powers on the 16 hard disks and a separate one which powers up the desktop, that way i don't have to power up the drives if i don't need them (the system + drives draws a lot of power and makes for a good heater in winter - shitty in summer though)
many controllers let you set up a staggered spin up to avoid a big power draw with all the disks spinning up at once, but i found this was not required for my 16 drives as the elcheapo 400w psu was more than up to the task
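Staggered spin-up matters because the surge is on the 12V rail at power-on, not the steady-state draw. The per-drive figures below are assumed ballpark values for 3.5" 7200rpm drives (check your drive's datasheet), just to show the shape of the calculation:

```python
# Rough 12V spin-up budget for a multi-drive box.
SPINUP_AMPS_12V = 2.5   # assumed peak 12V draw per drive while spinning up
IDLE_AMPS_12V = 0.5     # assumed steady-state 12V draw per drive

def peak_12v_amps(n_drives, stagger_group=None):
    """Peak 12V current. stagger_group = drives spun up per batch
    (None = all at once); worst case is the last batch, with the
    rest already idling."""
    if stagger_group is None:
        return n_drives * SPINUP_AMPS_12V
    spun_up = n_drives - stagger_group
    return stagger_group * SPINUP_AMPS_12V + spun_up * IDLE_AMPS_12V

print(peak_12v_amps(16))                    # all at once: 40.0 A on 12V
print(peak_12v_amps(16, stagger_group=4))   # batches of 4: 16.0 A
```

At ~40A of 12V (~480W) for a simultaneous 16-drive spin-up, a decent 400W unit surviving is plausible but marginal, which is why controllers offer staggering in the first place.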
Here is another option and I feel a better option for the HDD's PSU:
Even if one fails, the other will keep the drives alive. You can then use the smaller puppy to run the rest with redundancy; nothing worse than no go-go at a LAN.
68% efficiency for the ST55GF, ouch! (I don't think it's a typo since the other unit specifies >70%)
Typical loads are never at 100% rated capacity. If you go buy some desktop PSU and it claims 80%+ efficiency, that value is at full tilt, not when it's at idle or low power use. The figure falls off very quickly and is no better than anything else really. It's for this reason you match a supply to the job; you don't run a 1500W unit on a little ION based ITX system, for example.
Besides, with coins being thrown around here, power use is not high on the agenda. 2x 16-port RAID cards and 32x 1.5TB HDDs adds up to a lot:
Seagates = $5800
WD's = $5700
Samsungs = $4500
16 port adaptec = $1000 - 1600 each.
Go cheap on PSU/band-aid solutions = fail.
Edit: I do understand all the claims for efficiencies; I also understand all the things that affect and defeat these claims.
The 80 Plus certification requires a PSU to be more than 80% efficient across a range of loads - typically measured at 20%, 50% and 100% of rated capacity.
Page 9 of this - http://www.energystar.gov/ia/partne...wnloads/computer/Version5.0_Computer_Spec.pdf
Virtually all PSUs really suck efficiency-wise in the low power region. For example, those little Ion systems you mentioned are probably consuming less power than the average desktop PSU is wasting at deliveries in the sub-60 watt region.
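The waste at low load is easy to put a number on. The efficiency figures here are illustrative assumptions (oversized desktop units commonly drop well below their rated efficiency at light loads), not measurements of any particular unit:

```python
# Heat wasted by a PSU at a given delivery and efficiency.
def wasted_watts(delivered, efficiency):
    """Watts drawn from the wall minus watts delivered to the system."""
    return delivered / efficiency - delivered

# A desktop PSU down around 65% efficient at 60W delivery wastes more power
# than a whole Ion box might draw; an 80 Plus unit at 85% wastes a third of that.
print(round(wasted_watts(60, 0.65), 1))   # 32.3 W lost as heat
print(round(wasted_watts(60, 0.85), 1))   # 10.6 W lost as heat
```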