HP ProLiant MicroServer Owners Club! (Attempting to sell here will result in bans)

Discussion in 'Storage & Backup' started by oli, May 10, 2011.

  1. gr8bob

    gr8bob Member

    Joined:
    May 12, 2009
    Messages:
    130
    I think the potential issue would be the spike in power load when all of the drives spin up at once. Unless you can implement staggered spin-up on the drives, it would be good to have some reserve in the power handling.
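
    If the drives and the controller support it, hdparm's power-up-in-standby (PUIS) feature is one way to approximate staggered spin-up. This is only a rough sketch: /dev/sdb is a placeholder for one of the data drives, and PUIS relies on the BIOS/controller issuing a spin-up command later, otherwise the drive may not spin up at all.

    # Check whether the drive advertises the Power-Up In Standby feature
    sudo hdparm -I /dev/sdb | grep -i "power-up in standby"

    # Enable PUIS so the drive waits for a spin-up command instead of
    # spinning up the instant power is applied (hdparm's man page flags this as risky)
    sudo hdparm -s1 /dev/sdb

    # Revert to normal spin-up behaviour
    sudo hdparm -s0 /dev/sdb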
     
  2. OP
    OP
    oli

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,268
    Location:
    The Internet
    What's the timespan? 1? 2? 3 years?
     
  3. neilt0

    neilt0 New Member

    Joined:
    May 26, 2011
    Messages:
    58
    This is true.

    I have 2x 7200rpm drives, 3x 5400rpm drives and 1x 2.5" 5400rpm drive in mine, and the power at boot spikes to over 115W. In normal use it's around 35W.
     
  4. Menthu_Rae

    Menthu_Rae Member

    Joined:
    Mar 19, 2002
    Messages:
    6,874
    Location:
    Northern Beaches, Sydney
    No ATi graphics work "properly" in Linux, which is why everyone goes nVidia :Pirate:

    It's just a shame that nVidia have also started screwing Linux users with things like the lack of Optimus support and whatnot :upset:
     
  5. mikehol

    mikehol Member

    Joined:
    Jun 15, 2006
    Messages:
    58
    Ta. I'd hoped it had got better. There is also Intel, who help with open-source drivers. No VDPAU, but good enough for MythTV and general desktop use.
    I'll keep this machine headless, then.
     
  6. Menthu_Rae

    Menthu_Rae Member

    Joined:
    Mar 19, 2002
    Messages:
    6,874
    Location:
    Northern Beaches, Sydney
    I don't think Intel are very good either, TBH. They take so long to get their shite sorted when they're the ones making the hardware. Delaying drivers for a discrete graphics card is kind of so-so, but when the graphics are integrated onto your CPU it's kind of bloody important to have decent drivers out at launch :rolleyes:

    Personally I'll be putting a GeForce GT520 into each of my boxen. It's sad, because I love ATi and AMD; they gave me my beloved T-Bird 1.33 AXJA, Radeon 9800 XT and Radeon X850XT - trump cards of their day :Pirate:

    Nowadays, however, my whole setup is Intel/nVidia, apart from this new MicroServer and its little AMD Neo. Quite disheartening :(
     
  7. rugger

    rugger Member

    Joined:
    Aug 24, 2003
    Messages:
    661
    Location:
    Perth, WA
    MDADM arrays

    It is becoming clear to me that for Linux users using mdadm to build a RAID 5 array out of these big drives, the default stripe_cache_size is woefully inadequate.

    cat /sys/block/md<devicenumber>/md/stripe_cache_size

    will usually report 256. I strongly recommend increasing this cache to 4096 ... but probably not higher.

    I am seeing a doubling of sequential write speeds with the higher stripe_cache_size, as well as somewhat improved random read/write speeds (30-40%).

    I've tested this using iozone on 1TB and 2TB drives (256k chunk on the 1TB array, 128k chunk on the 2TB array). Improvements were seen on both arrays.

    To increase the cache to 4096 on your Linux RAID array (the value is a count of cache entries, not kilobytes), use the following command:

    echo 4096 > /sys/block/md<devicenumber>/md/stripe_cache_size
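
    One thing to keep in mind: the stripe cache costs RAM - roughly 4KiB per entry per member disk, so 4096 on a 4-drive array is about 64MB. The setting also resets whenever the array is re-assembled, so it needs to be re-applied after a reboot. A minimal sketch, assuming your array is md0 and your distro still runs /etc/rc.local:

    # In /etc/rc.local (or an equivalent boot script): re-apply the larger
    # stripe cache, since it goes back to 256 every time the array is assembled
    echo 4096 > /sys/block/md0/md/stripe_cache_size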
     
    Last edited: May 27, 2011
  8. spite

    spite Member

    Joined:
    Feb 11, 2004
    Messages:
    23
    Location:
    Brisbane
    On the post above, make sure you check the file path you're sending the echo to; it was originally missing an 'l' in 'block'.

    echo 4096 > /sys/block/md<devicenumber>/md/stripe_cache_size

    This made my write speeds significantly faster too ... :D

    121MB/s -> 227MB/s using Linux RAID5 with 5x Hitachi 5K3000 2TB drives
    100MB/s -> 164MB/s using Linux RAID5 with 5x Seagate Barracuda Green 2TB drives (aligned - but now I am wondering, lol).
     
    Last edited: May 27, 2011
  9. rmuser

    rmuser Member

    Joined:
    May 11, 2011
    Messages:
    62
    I'm gonna brute force it.... lol

    Or else I'll just go to MSY and get an nVidia card.
     
  10. stumo

    stumo Member

    Joined:
    May 17, 2011
    Messages:
    85
    Location:
    VIC
    You just need a bigger AC adapter; none of that 12V HDD spin-up/motor power goes through the pico - it comes straight from the 12V AC adapter. You can get 120W adapters off eBay for 30 bucks. I think that's continuous power too, so peak would be higher. The pico would use a maximum of about 40W of that to power the other rails, leaving 80W of 12V for the drives.
     
  11. SaTaN

    SaTaN Member

    Joined:
    Jun 18, 2002
    Messages:
    4,790
    Location:
    Caulfield-ish
    Hmm, my array is messed up apparently. I copied a 3.4GB file from the array to the stock 250GB drive in the ODD port at about 65MB/s, but copying it back was under 40MB/s, even with the above trick.
    My array is 4x 2TB WD Greens in mdadm RAID5... I'm using the entire devices, so there shouldn't be any alignment issues, right?

    Any ideas how to fix this? Do I need to copy all the data off again and rebuild it all?
     
  12. rmuser

    rmuser Member

    Joined:
    May 11, 2011
    Messages:
    62
    Try copying the file to /dev/null so that the ODD port and the HDD on it are not the bottleneck... Have you flashed your BIOS to get full speed on that port? Regardless, your test is not measuring the array's I/O throughput; it is measuring the I/O throughput of the 250GB HDD. /facepalm

    eg.

    cp /mnt/md0/file /dev/null

    for read throughput, and

    dd if=/dev/zero of=/mnt/md0/testfile bs=10M count=300

    ...for a 3GB write test

    Didn't it occur to you that a single HDD would be the bottleneck rather than your RAID5? And yes, you should not have alignment issues if you have used whole devices rather than partitions.

    Edit: please post your results too. I have the same drives (4x2TB WD green) in RAID10 so am interested to compare the RAID5 and RAID10 performance.
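
    One caveat on that dd write test: without a sync the number can be inflated by the page cache. A variant that forces the data onto the disks before dd reports the figure (same hypothetical /mnt/md0 path as above):

    dd if=/dev/zero of=/mnt/md0/testfile bs=10M count=300 conv=fdatasync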
     
    Last edited: May 27, 2011
  13. Blinky

    Blinky Member

    Joined:
    Jul 4, 2001
    Messages:
    3,287
    Location:
    Brisbane
    I've noticed you've said this a couple of times now, and it is not correct for every pico PSU, as mentioned earlier.
     
  14. SaTaN

    SaTaN Member

    Joined:
    Jun 18, 2002
    Messages:
    4,790
    Location:
    Caulfield-ish
    Please reread what I said... WRITE speed to the single ODD drive is double the WRITE speed to the RAID5 array, which is just wrong. You're not suggesting the ODD drive has worse READ performance than its WRITE, are you?
     
  15. rmuser

    rmuser Member

    Joined:
    May 11, 2011
    Messages:
    62
    lol

    What I am saying is that if you copy between the single ODD HDD and the RAID5 in EITHER DIRECTION, the bottleneck will be the ODD HDD. In other words your testing is telling you NOTHING about the peak performance of the RAID5 array.

    Just run the tests I suggested and post the results; then we'll know whether or not the performance of your RAID5 is reasonable.

    You may want to wrap the cp in time, since cp won't report the throughput by itself:

    "time cp /mnt/md0/file /dev/null"
     
    Last edited: May 27, 2011
  16. stumo

    stumo Member

    Joined:
    May 17, 2011
    Messages:
    85
    Location:
    VIC
    It seems hard enough to get this info across as it is, without adding the few wide-input models to the mix. If anyone's using those, I'd hope they know what they're doing.

    The wide-input models are red, I believe, and are only suited to limited applications that don't need much 12V power. The normal picos are yellow.
     
  17. Goonit

    Goonit Member

    Joined:
    Oct 3, 2008
    Messages:
    403
    None of us want to get wide-range PicoPSUs though..?? I am no doubt out of my depth, but what you said earlier hasn't swayed the argument towards keeping the standard PSU at all. Considering most of us would be looking at a PicoPSU fed by the 12V DC output of an AC adaptor, we're all good to go and can save money by dramatically increasing the efficiency of the PSU.

    EDIT: All I see is wattage reduced and efficiency increased. Where's the problem, as long as we get higher-wattage adaptors? e.g. 60W Pico, 120W adaptor.
     
    Last edited: May 27, 2011
  18. rugger

    rugger Member

    Joined:
    Aug 24, 2003
    Messages:
    661
    Location:
    Perth, WA
    While the PicoPSU board itself would see very good efficiency ... the laptop AC-to-12V adaptor is where most of the inefficiency would come from.

    I'm not exactly seeing why the PicoPSU + laptop AC adaptor would be significantly more efficient than the included PSU ... maybe a few watts' difference, but I can't see it recouping its purchase cost any time soon.
     
  19. Blinky

    Blinky Member

    Joined:
    Jul 4, 2001
    Messages:
    3,287
    Location:
    Brisbane
    +1 ~ Thank you for the common sense post.
     
  20. mikehol

    mikehol Member

    Joined:
    Jun 15, 2006
    Messages:
    58
    Not quite true - I just checked the specs.

    Output voltage (V):   5VSB   +3.3   +5    +12   -12
    Output current (A):   1.5    5.0    6.0   4.0   0.05
    Output power (W):     7.5    16.5   30    48    0.6
    Peak current (A):     2.0    7.0    7.0   7.0   0.1

    (Sorry, HTML paste not working on OCAU.)

    The 80W rating includes the 12V supply, which is switched and therefore limited.
    For a server you are not worried about ATX power-off or suspend, so you could bypass the picoPSU for the 12V feed to the hard drives. But normally the 12V does indeed go through the pico, even though it is not regulated there.

    I ordered a 7A (84W) power brick to go with my pico, to give a bit of a safety margin over the 5A units, and to see if it can power my Core 2 desktop as well. (Search for "Delta 12V 7A" on eBay.)
     
    Last edited: May 27, 2011
