Discussion in 'Storage & Backup' started by oli, May 10, 2011.
The PSU isn't a problem for lots of drives:
5 x 3.5" drives
6 x 2.5" drives
Some of you may have read my recent ramblings on the HP P400 controller, with its awkward port locations, fouling cables, wrong SAS connectors, etc.
I have found a solution to the problem of connecting these cables easily to the backplane. A SAS cable manufacturer in the US has come up with a design for me so that the P400 controller can connect directly to the backplane cable. It uses an SFF-8484 plug for the HBA connection, wired to a small PCB with an SFF-8087 socket, much like the socket on the motherboard.
These cost about $40 each with no MOQ. They pretty much designed the unit for me and sent me an engineering diagram, so I'm pretty chuffed that it won't cost too much. I'm getting a shipping quote now; if it's too much, I might try going through my Shipito account.
Anyway, shoot me a PM if anyone is interested in details, or maybe in tacking onto my order, as I will be ordering in the next few days.
Also on the topic of the P400: some of you might be aware of the predicament of fouling SAS cables if you have the DL380 version with rear-facing plugs. I have done a lot of research into this and found that you can remove the rear connectors and purchase a Molex part to fit on the front. This requires soldering, so it would only suit those with soldering skills.
I am bringing in 40 of these, which is the MOQ. They are $4.50 each, and if anyone is hardcore or crazy enough, let me know and I can send you some for what I paid, subject to me verifying that they actually fit and work properly.
All up, a really decent RAID controller costs the following:
HP P400 - ~$72 shipped
SFF-8484 to SFF-8087(f) cable - ~$38 (shipping TBA)
Molex Connector - $4.50
I will have plenty of spare molex connectors.
Just ask if anyone has any questions. I like hacking shit and decided it was a good mission to get the P400 connecting natively into the backplane fanout.
The adapters would also suit others running low-profile cards with SFF-8484 connectors.
On another note, for the ODD port: these guys make a bucketload of converters... might be worth asking them for an SFF-8484 to SFF-8087 adapter as well.
Finally got around to setting up my HP Microserver - awesome little thing for the price. The only downside so far is the buzzing PSU fan; waiting on HP to send a replacement.
My config is 4 x 2TB Hitachi 5K3000s from Scorptec, along with the 8GB of Kingston ECC RAM from the deal.
Having never used Solaris before, I found installing Solaris 11 Express, GNOME, VNC and napp-it not too difficult... however, I'm stuck on getting napp-it to run SMB/CIFS.
I have created a raidz pool with my four drives, and I've reset the root password and toggled the SMB/CIFS server service in napp-it off and on again... but it won't go online. napp-it says "Current state of SMB/CIFS Server: offline" despite several reboots.
Is there an extra step I'm missing to get SMB/CIFS working in napp-it? Or has something stuffed up, so I should do a wipe and reinstall? Thanks in advance.
EDIT: Lol, fail... I figured out the problem - I hadn't created a ZFS folder.
For anyone else - here is an excellent guide on how to set up ZFS with napp-it: http://napp-it.org/doc/downloads/napp-it.pdf
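For anyone hitting the same wall, the missing step boils down to creating a filesystem below the pool and sharing it - napp-it normally does this for you from its "ZFS Folder" menu. From a root shell it's roughly the following (the pool and filesystem names here are examples, not from my setup):

```shell
# SMB shares live on ZFS filesystems, not on the bare pool,
# so create one under the pool first
zfs create tank/media

# Publish the filesystem over SMB/CIFS
zfs set sharesmb=on tank/media

# Make sure the kernel SMB service (and its dependencies) is running
svcadm enable -r smb/server

# Should now report the service as "online"
svcs smb/server
```

Once at least one filesystem is shared, napp-it's "Current state of SMB/CIFS Server" should flip to online.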
I've got a couple of HP NC360Ts, a PCI-E x4 dual gigabit NIC. From the specs I found on HP's support site, it can auto-negotiate a bus width from x1 to x4, and the spec sheets for the Intel chipset state that it can be used in architectures of anywhere between one and four lanes. A single PCI-E 1.x lane can do 2Gbps each way at full duplex, so it should theoretically be possible to run full dual gigabit over PCI-E x1.
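The lane-bandwidth arithmetic above can be sketched as a quick back-of-envelope check (the 2.5 GT/s and 8b/10b figures are from the PCI-E 1.x generation; everything past that is protocol overhead I'm not modelling):

```python
# Back-of-envelope check: can one PCIe 1.x lane feed a dual gigabit NIC?

PCIE1_RAW_GT_PER_S = 2.5       # PCIe 1.x signalling rate per lane (GT/s)
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line coding: 8 data bits per 10 line bits

# Usable bandwidth per direction on a single lane
lane_gbps = PCIE1_RAW_GT_PER_S * ENCODING_EFFICIENCY

# A dual gigabit NIC at full duplex needs 2 x 1 Gbit/s each way
nic_gbps = 2 * 1.0

print(f"x1 lane:  {lane_gbps:.1f} Gbit/s per direction")
print(f"dual GbE: {nic_gbps:.1f} Gbit/s per direction")
# The raw numbers match exactly, so TLP/DLLP packet overhead is what
# decides whether both ports can actually hold line rate at once.
```

Which is exactly why a real benchmark is needed: on paper it's line-ball, and the margin lives or dies on per-packet overhead.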
Having read all that, I decided to take a punt, and I'm going to try to get the NIC working in the PCI-E x1 port over the next couple of days.
Today was just some simple tests, nothing in-depth, just to make sure it was even possible.
The first step was to insulate pin 19 onwards, as those pins carry lanes 1 through 3. That leaves only lane 0 working (just as it would on a PCI-E x1 card):
I plugged it into a spare PC's PCI-E x16 slot, booted it up and got nothing. The motherboard (an all-in-one Gigabyte AMD board) couldn't even see or initialise it.
I decided to plug it into the Microserver anyway to see if it would work, expecting to be disappointed.
To my surprise, I was greeted with this:
logged into the vSphere Client and saw this:
Tomorrow I'll do some proper bandwidth/load testing to see if I can maintain consistent, error-free gigabit LAN on both channels while restricted to a single bus lane. If all goes well, I'll be breaking out the Dremel and shortening the card to PCI-E x1.
Hrm, nice bunch of connectors. I will bring in 8 of these connectors, leaving me with 3 spare. I am confident that they will work. Going to get a shipping quote now.
I'm going to try FreeNAS next; I'm currently running unRAID Basic (the free version, which only handles up to 3 HDDs). I don't have a Mac to test for you, but I have been able to set up iTunes on a Windows PC and connect to the music share on my Microserver without too much trouble.
Has anyone compared an earlier Microserver to the latest ones being sent out?
I got one when they first went on sale through HP at $199, and I've just received a replacement unit from HP.
A couple of differences:
1. The first motherboard was Rev 0A; the new one is Rev 0B.
2. The new caddies have stickers on the front that say "HP NON HOT PLUG HDD".
3. A different power supply fan - the new one is a Delta, the old one was a T&T.
Anyway, just a few minor differences I noticed.
Hah, bet some people rang and complained thinking the drives were hot-swappable... maybe they killed a PSU or something by plugging them all in at once while running.
The problem is, if the BIOS is set to IDE mode, then the drives are NOT hot-swappable. If set to AHCI mode, they are... but doing so isn't supported by HP.
Not sure about the RAID setting.
Anyone who ordered from Cworld will be having delays - see other thread.
Cheapest place to get this card?
$130 ? http://www.digitalcentre.com.au/p/4...eyefinity-displayport-dual-link-dvi-hdcp.html
Might as well get the full 16x card.
I've got two identical cards that look extremely similar to yours - Intel PRO/1000 PT dual GbE NICs. I was thinking about putting them in my Microserver, but I'll have an IBM BR10i in the 16x slot. The other issue is not having a low-profile bracket for the NIC.
I'm not terribly keen on the idea of taking a Dremel/saw to the card - did you try the card without the tape to see what would happen? I can't remember reading much about people's usage of the 1x slot...
That's not the PCI-E 1x version.
It's not a punt, it will work.
Nice pics though, we luv pics
Lifesaver! I have been looking all over for this type of cable!
I sent you a PM.
If you take the tape off and stick it in the x16 slot, it works as normal. It won't physically go into the x1 slot, and I don't want to cut it down without testing it properly first - taping the pins is functionally identical to cutting them off.
Pins 19 through 32 carry lanes 1, 2 and 3. If you ground pin 31, the card knows your system supports x4; if it's not grounded, the card auto-negotiates down to x1. Once it does that, pins 19 to 32 are unused, so it makes no difference whether they exist on the PCB or not.
That's all theoretical though; assuming the standard is adhered to, it all comes down to the card's implementation and architecture. From the looks of it, the HP card works in the HP server, but the HP card in a non-HP motherboard doesn't?
I've got a P400 that will be taking the x16 slot, which means the NIC has to go into the x1 slot. I can use a single-port card easily, but ideally I'd like one port dedicated to WAN and two ports dedicated to LAN with load balancing. The only out-of-the-box solution is the StarTech dual gigabit PCI-E x1 card, but I'm not keen on spending $150 on a NIC just yet.
$30 vs $130 - you can just mod the 16x card to fit a 1x slot. You have four tries to get it right.
Haha the DOF on my macro lens is terrible.
I know it works for video cards, which automatically downshift to match, but I'm still not 100% certain it will support a dual gigabit NIC. It makes sense for them to spec it for an x4 bus, as that gives them a fair bit of headroom; I'm not convinced the same traffic will fit into a single bus lane though. I'll crank up ttcp tonight and get some benchmarks at full x4 vs x1, and if they're similar, out comes the Dremel.
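For anyone wanting to replicate the x4-vs-x1 comparison, it looks roughly like this with the classic ttcp (the hostname and buffer counts are placeholders, and exact flags vary a bit between ttcp builds and nuttcp):

```shell
# On the receiving box: sink incoming pattern data and report throughput
ttcp -r -s

# On the Microserver: transmit -n buffers of -l bytes to the receiver.
# Run once per NIC port with the card taped to x1, then untaped at x4,
# and compare the reported Mbit/s figures.
ttcp -t -s -n 16384 -l 8192 receiver-host
```

Running both ports at once is the interesting case - that's the scenario where a single lane might run out of headroom.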