OpenSolaris/Solaris 11 Express/ESXi : BYO Home NAS for Media and backup images etc

Discussion in 'Storage & Backup' started by davros123, Dec 14, 2009.

  1. Stanza

    Stanza Member

    Joined:
    Jun 27, 2001
    Messages:
    2,875
    Location:
    Adelaide
    As I have said to others,

    Make a pool of one drive only.

    Test

    Destroy pool

    Make pool again with one drive (2nd drive)

    Test

    Destroy pool

    Rinse repeat...

    See if one drive is dog slow compared to the others... And holding up the whole pool.
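
    As a rough sketch of what I mean (the pool name "testpool", the test file size and the c3t2d0 device name are just examples... use whatever device names show up on your box):

    Code:
    # make a throwaway pool on a single drive (example device name)
    zpool create testpool c3t2d0
    # write ~4GB and read it back to gauge the drive's raw speed
    dd if=/dev/zero of=/testpool/testfile bs=1024k count=4096
    dd if=/testpool/testfile of=/dev/null bs=1024k
    # tear it down and repeat with the next drive
    zpool destroy testpool
    
    Then compare the numbers drive by drive.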

    .
     
  2. OP
    OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,878
    Great minds....

    I had this issue with a WD Green and took it back and swapped it out at the store... speed went back to full. The drive did not show any faults in the WD tools....
     
    Last edited: May 23, 2012
  3. Zoiks

    Zoiks Member

    Joined:
    Aug 10, 2003
    Messages:
    3,058
    Location:
    Ashgrove, QLD
    So I have 10 new hard drives sitting in boxes (thanks for the tip on B&H).

    So now the journey begins on migrating the data. Sadly my raid card has not arrived yet (express my arse), but I desperately need new space. SOOO.

    I'm going to migrate from a 4-disk raidz2 to a 3-disk raidz2. This will give me time, so when I get back from work I can move the whole kaboodle.

    Now the Question!

    Is there a simple way to transfer all the data across from the 4x1tb tank to the 3x2tb tank?

    I have a feeling this is what snapshotting or cloning is for, but I'm not certain.

    I.e. can I take a snapshot of the current setup and then clone it over to the new tank? Or is it more like a symlink?
     
  4. Rezin

    Rezin Member

    Joined:
    Oct 27, 2002
    Messages:
    9,488
    http://www.markround.com/archives/38-ZFS-Replication.html

    Code:
    [root@solaris]$ zfs snapshot master/data@1
    [root@solaris]$ zfs send master/data@1 | zfs receive slave/data
    (where master = oldtank and slave = newtank)
     
    Last edited: May 28, 2012
  5. Zoiks

    Zoiks Member

    Joined:
    Aug 10, 2003
    Messages:
    3,058
    Location:
    Ashgrove, QLD
    Yeah, reading more on it now. I can then change the name of the new tank to the same as the old tank and pretend that nothing happened (other than a heap of new space)?
     
  6. Rezin

    Rezin Member

    Joined:
    Oct 27, 2002
    Messages:
    9,488
    Yep. :thumbup:

    Code:
    zpool export newtank
    Code:
    zpool import newtank supertank
    Edit: Oh, wait.. to the name of the oldtank.. I guess you'd have to export oldtank beforehand.
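
    In full it'd be something like this... a rough sketch, assuming the pools really are named oldtank and newtank:

    Code:
    # export both pools first
    zpool export oldtank
    zpool export newtank
    # re-import the new pool under the old pool's name
    zpool import newtank oldtank
    
    If you want the old pool back online as well, a plain "zpool import" with no arguments lists the exported pools (with numeric IDs), and you can import the old one by ID under a different name.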
     
  7. Zoiks

    Zoiks Member

    Joined:
    Aug 10, 2003
    Messages:
    3,058
    Location:
    Ashgrove, QLD
    I'm wondering if I should be using the -R flag. It sounds like a more complete copy or something.

    Watching the gstat of the non -R transfers anyway:
    Code:
    L(q)  ops/s    r/s    kBps   ms/r    w/s    kBps   ms/w  %busy  Name
       0      0      0       0    0.0      0       0    0.0    0.0  gpt/OS
      10    228    228   23425   30.5      0       0    0.0   96.7  gpt/zfs
       1    942      3       2   31.2    939  114934    9.0   93.9  gpt/ND1

    not sure what some of the columns are:
    ms/r
    ms/w

    I guess the main bits are the KBps write and read though.
     
  8. OP
    OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,878
    Hi Zoiks,

    I am on a train so I'm not sure of all the flags... so no help here.

    FWIW, I used the following commands to send my snapshots (incl. the recursive flag) when I did this... IIRC it also takes the snapshots over.

    Code:
    nas@nas:~# zfs snapshot -r cloud@3tbmovesun1408
    nas@nas:~# zfs send -R cloud@3tbmovesun1408 | zfs recv -vFd cloud2
    Also, a trap for new players on this one... disable any automatic snapshot cleanup, because the send will have an issue if it cleans up one of the old snapshots (i.e. it ages out or the space is needed).
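
    A quick way to sanity-check the snapshots actually landed on the destination afterwards (the dataset name here is just from my example above):

    Code:
    nas@nas:~# zfs list -t snapshot -r cloud2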

    let us know how it goes.
     
  9. Zoiks

    Zoiks Member

    Joined:
    Aug 10, 2003
    Messages:
    3,058
    Location:
    Ashgrove, QLD
    Ok, I transferred everything over.
    Process (rough command sketch below):
    1. First added the new hardware and turned on the computer
    2. Then created the new pool
    3. Set the old pool to read-only
    4. Created a snapshot and then sent it to the new pool (with the -R flag)
    5. Exported both pools
    6. Re-imported the new pool under the same name as the old pool
    7. So far everything seems to have worked except for sabnzbd, which dropped the queue. Fixed by resetting the temporary download folder to its correct location.
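
    Roughly, the commands for steps 2-6 were along these lines (a sketch from memory... the pool names oldtank/newtank, the snapshot name, the vdev layout and the device names are all placeholders, and the send/recv follows davros' -R example above):

    Code:
    # 2. create the new pool (layout and device names are examples only)
    zpool create newtank raidz2 c0t1d0 c0t2d0 c0t3d0
    # 3. stop changes landing on the old pool
    zfs set readonly=on oldtank
    # 4. snapshot recursively and send the lot across
    zfs snapshot -r oldtank@migrate
    zfs send -R oldtank@migrate | zfs recv -vFd newtank
    # 5. export both pools
    zpool export oldtank
    zpool export newtank
    # 6. re-import the new pool under the old pool's name
    zpool import newtank oldtank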

    Now I'm going to turn the device into a mirror.

    HOPEFULLY nothing messes up. I'm going to be away from the box for 2 weeks.

    PS. Thanks for the help guys
     
    Last edited: May 29, 2012
  10. OP
    OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,878
    Sweeet. That's oh so much better than copying everything over manually... which sadly is what I used to do before I worked out how brilliant zfs send was... I was scared of it to be honest... did not trust it... sigh... If only I knew then what I know now :)

    Hope everything goes well on the new hardware.

    Cheers.
     
  11. Zoiks

    Zoiks Member

    Joined:
    Aug 10, 2003
    Messages:
    3,058
    Location:
    Ashgrove, QLD
    Cheers. I'm just pissed that I paid for express postage on everything and things have not arrived.

    I'm still curious as to exactly what the -R flag does... but that's a battle for another day.
     
  12. Stanza

    Stanza Member

    Joined:
    Jun 27, 2001
    Messages:
    2,875
    Location:
    Adelaide
    Ok here is a bit of info that might help some...

    Identifying drives a different way.

    A few here have Norco cases, so I am interested in how they appear... as the info below may help you identify drives and their locations in the backplanes (or chassis, if you have one of those, i.e. an expander and something that can enumerate system topology accurately).

    ======

    So here goes.

    With my MSA70s and plain ol' SAS controllers... not SAS2-chipped controllers... identifying drives can be a challenge, for several reasons.

    Reason 1: as I have 68 bays to play with... compared to having just a controller with, say, 4 or 8 ports... picking which drive to add to or remove from a vdev can be rather interesting.

    Before, with just 8 ports on the ol' SAS controller, it was sort of easy... e.g. a drive would be called, say, c3t2d0

    So to find out which drive was which was also easy... as you could enumerate it down sort of simply
    c3t2d0 broke down into
    c3 = controller number 3
    t2 = Target number 2 (or port 2 if you like)
    d0 = Drive 0

    So if I added / removed a drive it was simple to identify which was which

    Now, moving to an expander-based chassis, things move around a bit and are detected differently.

    A drive that was c3t2d0 becomes... (if it is in the 2nd bay) c3t36d0.

    The MSA70 has 25 bays, which seem to get numbered from t35 through to t58 (yeah, I don't know why it starts @ 35 either ??:confused: and also no idea why 35 + 25 somehow = 58 ??)

    Anyways, it can get confusing... and no, I don't know how the second chassis' drive numbering goes when it is daisy-chained... yet.

    But...on with the show


    So to organise it a little more simply

    Enter the lovely command "croinfo"
    cro means

    Chassis
    Receptacle
    Occupant

    DESCRIPTION
    The diskinfo and croinfo utility share the same binary
    executable. At runtime, the utility checks to see how it was
    invoked, and adjusts defaults.

    The croinfo utility allows users to query and display
    specific aspects of a system's configuration. Queries are
    performed against a record-oriented dataset that captures
    the relationship between physical location and various
    aspects of the device currently at that physical location.
    This relationship is expressed in terms of Chassis,
    Receptacle, and Occupant (thus the cro prefix).


    :leet:

    Neato

    So now, instead of using the format command to try and identify a drive, e.g.

    Code:
    root@solaris:~# format
    Searching for disks...done
    
    
    AVAILABLE DISK SELECTIONS:
           0. c3t35d0 <HP-DG072A8B54-HPD7-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@23,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________1/disk
           1. c3t36d0 <HP-DG072A9BB7-HPD0-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@24,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________2/disk
           2. c3t37d0 <HP-DG072A9BB7-HPD0-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@25,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________3/disk
           3. c3t38d0 <HP-DG072A4951-HPD4-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@26,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________4/disk
           4. c3t39d0 <HP-DG072A9BB7-HPD0-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@27,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________5/disk
           5. c3t40d0 <HP-DG072A8B54-HPD7-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@28,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________6/disk
           6. c3t41d0 <HP-DG072A8B54-HPD7-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@29,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________7/disk
           7. c3t42d0 <HP-DG072A8B54-HPD7-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@2a,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________8/disk
           8. c3t58d0 <ATA-ST91000430AS-CC9D-931.51GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@3a,0
              /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________24/disk
           9. c3t43d0 <HP-DG072A8B54-HPD7-68.37GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@2b,0
          10. c3t59d0 <ATA-ST91000430AS-CC9D-931.51GB>
              /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci103c,3229@0/sd@3b,0
          11. c4t0d0 <HP-LOGICAL VOLUME-1.86 cyl 4420 alt 2 hd 255 sec 63>
              /pci@0,0/pci8086,25e5@5/pci1166,103@0/pci103c,3211@8/sd@0,0
    Specify disk (enter its number):
    
    Lovely output isn't it...:sick:

    Now let's run the croinfo command by itself and see what we get instead.

    Code:
    root@solaris:~# croinfo
    D:devchassis-path                                                            t:occupant-type  c:occupant-compdev
    ---------------------------------------------------------------------------  ---------------  ------------------
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________0        -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________1/disk   disk             c3t35d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________2/disk   disk             c3t36d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________3/disk   disk             c3t37d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________4/disk   disk             c3t38d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________5/disk   disk             c3t39d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________6/disk   disk             c3t40d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________7/disk   disk             c3t41d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________8/disk   disk             c3t42d0
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________9        -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________10       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________11       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________12       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________13       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________14       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________15       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________16       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________17       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________18       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________19       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________20       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________21       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________22       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________23       -                -
    /dev/chassis/HP-MSA70.5001438000408100/Bay__________________________24/disk  disk             c3t58d0
    
    
    That's a little better, but now let's tidy it up a little, for even more readability.

    So now we introduce the fmadm command ("fmadm - fault management configuration tool")... yes, it seems a weird tool to use for tidying up a Chassis Receptacle Occupant listing...

    But when you run the fmadm command by itself, we see part of its use is to manage chassis aliases :thumbup:

    Code:
    root@solaris:~# fmadm
    Usage: fmadm [-P prog] [-q] [cmd [args ... ]]
    
        Fault Status and Administration
            fmadm faulty [-afgiprsv] [-n <max_fault>] [-u <uuid>]
                    display list of faulty resources
            fmadm acquit <fmri> [<uuid>] | <label> [<uuid>] | <uuid>
                    acquit resource or acquit case
            fmadm replaced <fmri> | <label>
                    notify fault manager that resource has been replaced
            fmadm repaired <fmri> | <label>
                    notify fault manager that resource has been repaired
    
        Chassis Alias Administration
            fmadm add-alias <product-id>.<chassis-id> <alias-id> ['comment']
                    add alias to /etc/dev/chassis_aliases database
            fmadm remove-alias <alias-id> | <product-id>.<chassis-id>
                    remove mapping from /etc/dev/chassis_aliases database
            fmadm lookup-alias <alias-id> | <product-id>.<chassis-id>
                    lookup mapping in /etc/dev/chassis_aliases database
            fmadm list-alias
                    list current /etc/dev/chassis_aliases database
            fmadm sync-alias
                    verify /etc/dev/chassis_aliases contents and sync
        Caution: Documented Fault Repair Procedures Only...
          Module Administration
            fmadm config
                    display fault manager configuration
            fmadm load <path>
                    load specified fault manager module
            fmadm unload <module>
                    unload specified fault manager module
            fmadm reset [-s serd] <module>
                    reset module or sub-component
          Log Administration
            fmadm rotate <logname>
                    rotate log file
          Fault Administration
            fmadm flush <fmri> ...
                    flush cached state for resource
    
    Which is what we will have a go at now.

    My 1st chassis is identified as /dev/chassis/HP-MSA70.5001438000408100

    So I am going to make it something smaller and more readable / sensible

    Let's say, as I have two MSA70s, I will call one MSA70A and the other MSA70B (makes sense to me) :tongue:

    Code:
    root@solaris:~# fmadm add-alias HP-MSA70.5001438000408100 MSA70A 'MSA70 Top'
    
    Now what's it look like?

    Code:
    root@solaris:~# croinfo
    D:devchassis-path                                         t:occupant-type  c:occupant-compdev
    --------------------------------------------------------  ---------------  ------------------
    /dev/chassis/MSA70A/Bay__________________________0        -                -
    /dev/chassis/MSA70A/Bay__________________________1/disk   disk             c3t35d0
    /dev/chassis/MSA70A/Bay__________________________2/disk   disk             c3t36d0
    /dev/chassis/MSA70A/Bay__________________________3/disk   disk             c3t37d0
    /dev/chassis/MSA70A/Bay__________________________4/disk   disk             c3t38d0
    /dev/chassis/MSA70A/Bay__________________________5/disk   disk             c3t39d0
    /dev/chassis/MSA70A/Bay__________________________6/disk   disk             c3t40d0
    /dev/chassis/MSA70A/Bay__________________________7/disk   disk             c3t41d0
    /dev/chassis/MSA70A/Bay__________________________8/disk   disk             c3t42d0
    /dev/chassis/MSA70A/Bay__________________________9        -                -
    /dev/chassis/MSA70A/Bay__________________________10       -                -
    /dev/chassis/MSA70A/Bay__________________________11       -                -
    /dev/chassis/MSA70A/Bay__________________________12       -                -
    /dev/chassis/MSA70A/Bay__________________________13       -                -
    /dev/chassis/MSA70A/Bay__________________________14       -                -
    /dev/chassis/MSA70A/Bay__________________________15       -                -
    /dev/chassis/MSA70A/Bay__________________________16       -                -
    /dev/chassis/MSA70A/Bay__________________________17       -                -
    /dev/chassis/MSA70A/Bay__________________________18       -                -
    /dev/chassis/MSA70A/Bay__________________________19       -                -
    /dev/chassis/MSA70A/Bay__________________________20       -                -
    /dev/chassis/MSA70A/Bay__________________________21       -                -
    /dev/chassis/MSA70A/Bay__________________________22       -                -
    /dev/chassis/MSA70A/Bay__________________________23       -                -
    /dev/chassis/MSA70A/Bay__________________________24/disk  disk             c3t58d0
    
    That's better... no idea yet how to change the silly HP bay numbering, e.g. get rid of the annoying ______________ bits.

    You will notice my chassis obviously doesn't fully use the SES2 standard... so the bays are a little weird... in that Bay________0 is the
    chassis itself?? ... and Bay___25 is not represented as a bay... just as c3t58d0 outside of a bay??

    So it's not perfect... but it's definitely easier to manage.

    Naturally when you have racks and racks of storage chassis, the above tools would come in very handy.

    You could instead alias
    HP-MSA70.5001438000408100
    to something helpful.

    Let's say it's sitting in Rack #10 and racked at 26RU high.
    We could call / alias it something smarter:

    HPMSA70.Asset#128934 MSA70@RACK10__U26-U27

    Makes finding the right drive in a large data centre quite helpful, eh? :p
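
    By way of example, that's just another add-alias call, something like this (reusing the chassis ID from above... the alias and comment here are made up for illustration):

    Code:
    root@solaris:~# fmadm add-alias HP-MSA70.5001438000408100 RACK10-U26 'MSA70 Asset 128934, Rack 10, U26-U27'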

    BTW,
    /dev/chassis
    should contain your chassis / backplane info.

    A line gets added to
    /etc/dev/chassis_aliases
    when you use fmadm to add an alias.
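
    And to double-check the mapping took, per the usage output above you can list the alias database:

    Code:
    root@solaris:~# fmadm list-alias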

    Now I am off to destroy some more shit for the good of OCAU users:weirdo:

    As I see, /dev/chassis/MSA70A contains folders/directories called:

    Code:
    root@solaris:/dev/chassis/MSA70A# ls
    Bay__________________________0   Bay__________________________15  Bay__________________________21  Bay__________________________6
    Bay__________________________1   Bay__________________________16  Bay__________________________22  Bay__________________________7
    Bay__________________________10  Bay__________________________17  Bay__________________________23  Bay__________________________8
    Bay__________________________11  Bay__________________________18  Bay__________________________24  Bay__________________________9
    Bay__________________________12  Bay__________________________19  Bay__________________________3
    Bay__________________________13  Bay__________________________2   Bay__________________________4
    Bay__________________________14  Bay__________________________20  Bay__________________________5
    root@solaris:/dev/chassis/MSA70A#
    
    Wonder what happens if I rename them:leet:

    Muhahaha
    .
     
    Last edited: May 31, 2012
  13. brayway

    brayway Member

    Joined:
    Nov 29, 2008
    Messages:
    6,715
    Location:
    Dun - New Zealand
    Does anybody know how to get CPU temperature readings, either through napp-it or the command line?
     
  14. OP
    OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,878
    If you have an IPMI interface you might be able to get it that way... or via ipmitool.

    I did have a look at getting it some other way a while back but it did not work, IIRC.
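
    E.g. something along these lines with ipmitool, if the board had a BMC (the address and credentials here are made up):

    Code:
    # against a remote BMC over the LAN
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P password sdr type Temperature
    # or locally, if the ipmi driver is available
    ipmitool sensor | grep -i temp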
     
  15. brayway

    brayway Member

    Joined:
    Nov 29, 2008
    Messages:
    6,715
    Location:
    Dun - New Zealand
    :Paranoid: You lost me :lol:
     
  16. OP
    OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,878
    Yeah... lost myself... in short, not that I know of.
     
  17. Hive

    Hive Member

    Joined:
    Jul 8, 2010
    Messages:
    4,993
    Location:
    ( ͡° ͜ʖ ͡°)
    You don't have an IPMI motherboard so that won't work for you.
     
  18. Stanza

    Stanza Member

    Joined:
    Jun 27, 2001
    Messages:
    2,875
    Location:
    Adelaide
    lm-sensors??

    Maybe it's available for Solaris? Port from Linux?

    .
     
  19. miicah

    miicah Member

    Joined:
    Jun 3, 2010
    Messages:
    5,563
    Location:
    Brisbane, QLD
    Can I team two 3Com network cards under Solaris? Or is it only Sun hardware that can be teamed?

    EDIT: Actually does my router/switch need to support this as well? I'm just looking for higher throughput.
     
  20. Hive

    Hive Member

    Joined:
    Jul 8, 2010
    Messages:
    4,993
    Location:
    ( ͡° ͜ʖ ͡°)
    Higher throughput to one PC or multiple at once?
     
