
OpenSolaris/Solaris 11 Express/ESXi : BYO Home NAS for Media and backup images etc

Discussion in 'Storage & Backup' started by davros123, Dec 14, 2009.

  1. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    Update

    napp-it (current release, Feb 02) is running on Solaris 11.4b (not all functions tested).
    If you want the napp-it wget installer to compile extras such as smartmontools 6.6, you should
    set the beta repository before installing napp-it and install gcc (pkg install gcc-5).

    You need to set up the beta repository first. If you defined it only after a napp-it setup,
    install the storage services manually:
    pkg install --accept --deny-new-be storage/storage-server
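
    A minimal sketch of the order of operations; the beta publisher URI below is an assumption (use the origin from your Solaris 11.4 beta download instructions):
    Code:
    # point the 'solaris' publisher at the beta repository (URI is an assumption)
    pkg set-publisher -G '*' -g https://pkg.oracle.com/solaris/beta solaris

    # install gcc so the installer can compile tools such as smartmontools 6.6
    pkg install gcc-5

    # run the napp-it online installer
    wget -O - www.napp-it.org/nappit | perl

    # only if the beta repository was defined after the napp-it setup:
    pkg install --accept --deny-new-be storage/storage-server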
     
    Last edited: Feb 2, 2018
  2. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
  3. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
  4. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,886
    Cool. Given many people reading this will be using it for a home system, I think it's worth including a bit on workload types and the value/need for an SLOG/L2ARC, as I see so many people who think they will magically make their system faster for home use like streaming movies.
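
    For anyone who does want to experiment, log and cache devices can be added and removed without rebuilding the pool. A minimal sketch with hypothetical pool/device names:
    Code:
    # add a dedicated SLOG (only helps synchronous writes, e.g. NFS/ESXi or databases)
    zpool add tank log c0t5000000000000001d0

    # add an L2ARC read cache (only helps once the working set no longer fits in RAM)
    zpool add tank cache c0t5000000000000002d0

    # both can be removed again at any time
    zpool remove tank c0t5000000000000001d0
    zpool remove tank c0t5000000000000002d0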

    Also, I did not know I could do this! Would have been very handy on the weekend when I wanted to pull some drives and could not remember which was which!
     
  5. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    New in napp-it 18.06 Dev (Apr 11)

    To be prepared for the next OmniOS 151026 stable in May (or the current bloody):
    https://github.com/omniosorg/omnios-build/blob/r151026/doc/ReleaseNotes.md

    Support for vdev removal (new ZFS feature) in menu Pools > Shrink Pool (OmniOS 151025+)
    Support for poolwide checkpoints (new ZFS feature) in menu Snaps > Checkpoint (OmniOS 151025+) and Pool > Import

    Disk Detection: adds ATTO HBAs (ATTO, a media specialist, now supports Illumos)
    Disk Map: correct detection of the HBA even if disks on different LSI mptsas HBAs show the same c-number, e.g. c0t500..
    Disk Map: adds dd disk detection
    Disk Location: dd detection for all disks (a manual sketch of the dd trick is below)
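
    If you just want to find one disk by hand, the same dd trick works from a shell: read from the disk for a while so its activity LED blinks. A sketch, with the device name as an example and slice s0 assumed:
    Code:
    # read ~2 GB from the raw device so the drive's activity LED lights up,
    # making the physical slot easy to spot
    dd if=/dev/rdsk/c0t5000CCA228C1D283d0s0 of=/dev/null bs=1024k count=2000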
     
  6. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    The new OmniOS is the first Open-ZFS storage distribution to include vdev removal (pool shrink).
    Oracle Solaris 11.4 also comes with this feature, but it seems with fewer restrictions.

    Open-ZFS currently cannot remove a basic or mirror vdev when a raidz[1-3] vdev is part of the pool, cannot remove a raidz[1-3] vdev at all, and cannot add a raidz[1-3] vdev after a basic/mirror vdev has been removed, which limits its use cases. Support for removal of raidz[2-3] is expected in Open-ZFS (but not Z1), see Bug #7614: zfs device evacuation/removal - illumos gate - illumos.org

    In Open-ZFS (e.g. OmniOS, the first to include this feature), a removal also requires a re-mapping table with a continuous small RAM need/reservation and a small performance degradation. This is listed in the output of zpool status. A manual zpool remap can reduce this.
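
    A minimal sketch of the removal itself (pool and vdev names are hypothetical):
    Code:
    # remove a top-level basic or mirror vdev; its data is evacuated to the remaining vdevs
    zpool remove tank mirror-1

    # progress and, afterwards, the indirect (re-mapping) vdev show up in the status output
    zpool status tank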

    It seems that Solaris 11.4 does not have these restrictions:
    vdev removal, poolwide checkpoints/snaps or imp... | Oracle Community
     
    Last edited: Apr 12, 2018
  7. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    Oracle Solaris 11.4 Beta Refresh is now available:
    https://community.oracle.com/message/14784683#14784683

    This refresh includes new capabilities and additional bug fixes.
    Some new features in this release:

    • ZFS Device Removal
    • ZFS Scheduled Scrub
    • SMB 3.1.1
     
  8. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    A new snapshot of OpenIndiana Hipster, 2018.04, is available.

    OpenIndiana Hipster is a rolling distribution of the open-source Solaris fork illumos, with a snapshot every 6 months. It comes in three flavours: GUI (with a MATE desktop), Text (very similar to OmniOS, another illumos distribution) and Minimal.

    https://wiki.openindiana.org/oi/2018.04+Release+notes
     
  9. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    OmniOS 151026 stable (May 07, 2018) is out

    Release note: https://github.com/omniosorg/omnios-build/blob/r151026/doc/ReleaseNotes.md
    Download: https://downloads.omniosce.org/media/r151026/

    Main improvements:

    - Protection against the Meltdown Intel CPU vulnerability announced earlier this year
    - Experimental support for bhyve - a fast, lightweight and modern hypervisor
    - Sparse-branded zones, clocking in under 4MB per zone
    - An improved installer which is dramatically faster, making the OmniOS installation procedure
    one of the fastest in the industry. The new installer also provides many more options for
    customising the installed system.
    - A new lightweight default MTA (Dragonfly Mail Agent)
    - Fault management improvements for SSD disks

    ZFS features
    - Improved support for ZFS pool recovery (import good data from a damaged pool)
    - The new zpool remove of top-level vdevs
    - zpool checkpoint (a poolwide checkpoint that makes even a filesystem destroy or a vdev add/remove undoable; see the sketch at the end of this post)
    - Support for raidz2 and raidz3 boot disks

    Hardware
    - Support for the new Broadcom tri-mode HBAs

    napp-it supports OmniOS 151026 from release 18.01 (Apr 02) onwards
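
    A usage sketch for the new checkpoint feature (the pool name is hypothetical):
    Code:
    # create a poolwide checkpoint before a risky change (destroy, vdev add/remove, upgrade)
    zpool checkpoint tank

    # discard the checkpoint once everything looks fine
    zpool checkpoint -d tank

    # or roll the whole pool back to the checkpointed state
    zpool export tank
    zpool import --rewind-to-checkpoint tank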
     
  10. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,886
    Finally got my new disks: 6 x 8TB enterprise Toshis, at a great price too ($265 ea delivered)!

    Fresh napp-it OI install from Gea's ESXi image, config done, and now copying the pool. Should be done in no time at this rate :)

    Code:
                   capacity     operations    bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    cloud1      54.6G  43.4T      0  6.67K  3.99K   847M
    Code:
    AVAILABLE DISK SELECTIONS:
           0. c0t5000CCA228C1D283d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1d283
           1. c0t5000CCA228C1DB52d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1db52
           2. c0t5000CCA228C1DD35d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1dd35
           3. c0t5000CCA228C1E0D3d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1e0d3
           4. c0t5000CCA228C1EA96d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1ea96
           5. c0t5000CCA228C1EB93d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1eb93
           6. c0t5000CCA228C1FA5Ad0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1fa5a
           7. c0t5000CCA228C1FBDEd0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1fbde
           8. c0t5000CCA228C1FF03d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c1ff03
           9. c0t5000CCA228C20D27d0 <ATA-Hitachi HDS5C303-A5C0-2.73TB>
              /scsi_vhci/disk@g5000cca228c20d27
          10. c0t500003977C500F16d0 <ATA-TOSHIBA MG05ACA8-GX0R-7.28TB>
              /scsi_vhci/disk@g500003977c500f16
          11. c0t500003977C500F60d0 <ATA-TOSHIBA MG05ACA8-GX0R-7.28TB>
              /scsi_vhci/disk@g500003977c500f60
          12. c0t500003977C3023D0d0 <ATA-TOSHIBA MG05ACA8-GX0R-7.28TB>
              /scsi_vhci/disk@g500003977c3023d0
          13. c0t500003977C3023D1d0 <ATA-TOSHIBA MG05ACA8-GX0R-7.28TB>
              /scsi_vhci/disk@g500003977c3023d1
          14. c0t500003977C3023D2d0 <ATA-TOSHIBA MG05ACA8-GX0R-7.28TB>
              /scsi_vhci/disk@g500003977c3023d2
          15. c0t500003977C30241Cd0 <ATA-TOSHIBA MG05ACA8-GX0R-7.28TB>
              /scsi_vhci/disk@g500003977c30241c
          16. c34t0d0 <VMware -Virtual disk   -1.0 cyl 5218 alt 2 hd 255 sec 63>
              /pci@0,0/pci15ad,1976@10/sd@0,0
    
     
    
    #zpool create -o version=28 -O version=5 cloud1 raidz2 c0t500003977C500F16d0 c0t500003977C500F60d0 c0t500003977C3023D0d0 c0t500003977C3023D1d0 c0t500003977C3023D2d0 c0t500003977C30241Cd0
    
    # zpool status
      pool: cloud
     state: ONLINE
      scan: scrub repaired 0 in 27h40m with 0 errors on Wed Jul 11 01:25:52 2018
    config:
    
        NAME                       STATE     READ WRITE CKSUM
        cloud                      ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t5000CCA228C1D283d0  ONLINE       0     0     0
            c0t5000CCA228C1DB52d0  ONLINE       0     0     0
            c0t5000CCA228C1DD35d0  ONLINE       0     0     0
            c0t5000CCA228C1E0D3d0  ONLINE       0     0     0
            c0t5000CCA228C1EA96d0  ONLINE       0     0     0
            c0t5000CCA228C1FA5Ad0  ONLINE       0     0     0
            c0t5000CCA228C1FBDEd0  ONLINE       0     0     0
            c0t5000CCA228C1FF03d0  ONLINE       0     0     0
            c0t5000CCA228C1EB93d0  ONLINE       0     0     0
            c0t5000CCA228C20D27d0  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: cloud1
     state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
      scan: none requested
    config:
    
        NAME                       STATE     READ WRITE CKSUM
        cloud1                     ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t500003977C500F16d0  ONLINE       0     0     0
            c0t500003977C500F60d0  ONLINE       0     0     0
            c0t500003977C3023D0d0  ONLINE       0     0     0
            c0t500003977C3023D1d0  ONLINE       0     0     0
            c0t500003977C3023D2d0  ONLINE       0     0     0
            c0t500003977C30241Cd0  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: rpool
     state: ONLINE
      scan: scrub repaired 0 in 0h2m with 0 errors on Sun Sep 10 23:02:29 2017
    config:
    
        NAME         STATE     READ WRITE CKSUM
        rpool        ONLINE       0     0     0
          c34t0d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    
    # zpool list
    NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    cloud   27.2T  19.7T  7.54T  72%  1.00x  ONLINE  -
    cloud1  43.5T  1.37M  43.5T   0%  1.00x  ONLINE  -
    rpool   39.8G  33.9G  5.84G  85%  1.00x  ONLINE  -
    
    #zfs snapshot -r cloud@migrate_to_cloud1_11_7
    #zfs send -R cloud@migrate_to_cloud1_11_7 | zfs receive -F cloud1
    
    iostat

    1531276820    cpu_busy% (now/av10s): 100 / 91

       r/s    w/s     kr/s      kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  device                 last_60rd  last60wr  last60wait  last60w%  last60b%
    1061.8    2.0  41328.1       0.5   0.0   0.9     0.0     0.9   2   23  c0t5000CCA228C20D27d0       1160        14           0         1        33
     983.7    2.0  40373.9       0.5   0.0   1.8     0.0     1.8   0   33  c0t5000CCA228C1EB93d0       1079        13           0         0        40
    1053.8    2.0  39937.1       0.5   0.0   1.0     0.0     1.0   0   25  c0t5000CCA228C1D283d0       1072        14           0         1        37
    1078.8    2.0  40458.5       0.5   0.0   1.0     0.0     0.9   0   24  c0t5000CCA228C1EA96d0       1133        12           0         0        37
    1046.8    2.0  40412.9       0.5   0.0   1.0     0.0     1.0   0   24  c0t5000CCA228C1DB52d0       1138        14           0         0        34
    1088.8    2.0  40914.3       0.5   0.0   0.9     0.0     0.8   0   22  c0t5000CCA228C1FF03d0       1016        12           0         1        56
    1101.8    2.0  40928.3       0.5   0.0   0.9     0.0     0.8   0   26  c0t5000CCA228C1FA5Ad0       1066        13           0         0        39
    1071.8    2.0  40402.4       0.5   0.0   0.9     0.0     0.9   0   23  c0t5000CCA228C1E0D3d0       1095        13           0         0        39
    1092.8    2.0  40992.4       0.5   0.0   1.0     0.0     1.0   0   24  c0t5000CCA228C1FBDEd0       1166        14           0         0        34
    1057.8    2.0  40988.8       0.5   0.0   1.1     0.0     1.0   0   25  c0t5000CCA228C1DD35d0       1148        13           0         0        32
       0.0    0.0      0.0       0.0   0.0   0.0     0.0     0.0   0    0  c34t0d0                        1         0           0         0         0
       0.0    0.0      0.0       0.0   0.0   0.0     0.0     0.0   0    0  c33t0d0                        0         0           0         0         0
       0.0  281.2      0.0  223401.4   0.0   9.4     0.0    33.4   1   98  c0t500003977C30241Cd0          0       249           0         0        55
       0.0  506.4      0.0  241242.6   0.0   9.1     0.0    18.0   1   97  c0t500003977C500F60d0          0       218           0         1        57
       0.0  402.3      0.0  244284.9   0.0   9.1     0.0    22.6   1   98  c0t500003977C3023D0d0          0       236           0         0        54
       0.0  388.3      0.0  231407.4   0.0   9.4     0.0    24.1   1   99  c0t500003977C3023D1d0          0       236           0         0        56
       0.0  291.2      0.0  242271.4   0.0   9.5     0.0    32.7   1  100  c0t500003977C3023D2d0          0       200           0         0        57
       0.0  424.3      0.0  213914.4   0.0   9.2     0.0    21.7   1   97  c0t500003977C500F16d0          0       212           0         0        56
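
    If the old pool keeps changing while this runs, an incremental follow-up send can catch up the difference before the final switch. A minimal sketch, reusing the snapshot name above plus a hypothetical second snapshot:
    Code:
    # take a second recursive snapshot once the initial full send has finished
    zfs snapshot -r cloud@migrate_to_cloud1_final

    # send only the changes between the two snapshots into the new pool
    zfs send -R -i cloud@migrate_to_cloud1_11_7 cloud@migrate_to_cloud1_final | zfs receive -F cloud1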
     
    Last edited: Jul 11, 2018
  11. chook

    chook Member

    Joined:
    Apr 9, 2002
    Messages:
    846
    I think upgrade fever is contagious :).

    I am just about to retire the E3 1230 and X9SCM-F with 32GB of unbuffered ECC RAM and replace it with two E5 2670s on an X9DRX-F with 256GB of buffered ECC RAM. Also replacing the two 240GB SSDs with a 2TB SSD and a 1.2TB NVMe SSD.

    Replacing the six HGST 2TB drives is next. I still have 3TB free in the RAID-Z2 pool but they are getting kind of old.
     
  12. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,886
    Good move with the memory and SSD. I am constantly juggling my 32GB on the same board...to the point where I set up another server (el cheapo) with 32GB as well to spread the load. I was lucky enough to have a mate buy me a 1TB as a gift a while back. Loads of RAM and SSD would be so good for stacking VMs into.

    Temp-wise, the drives are running at about the same temp as the HGSTs. They have been running at 100% for a few hours due to the zfs send to the new pool.

     
  13. chook

    chook Member

    Joined:
    Apr 9, 2002
    Messages:
    846
    Actually, this is probably the right place to ask. As part of the migration I have copied my VMs onto a local drive and, after reinstalling ESXi for the new hardware, was going to migrate them over. I have the HGST drives passed through to Solaris as bare metal. If I just plug the HBA controller into the new server and pass it through, will Solaris just find them again or will I have to be a little clever?

    I found the 32GB to be very constricting, particularly as I am about to install some big software for work. This doubles as my test lab and I was getting tired of starting and stopping VMs.

    The Toshiba drives are actually my drive of choice at the moment. I don't dislike the HGSTs but the Toshibas seem to be better priced even when you don't get a deal like the one you scored.
     
  14. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    If you added the disks in ESXi as physical raw disks (e.g. on the SAS HBA via Edit VM settings > Add hard disk > New raw disk) and you pass this HBA through to Solaris, all disks should be detected without problems and the ZFS pool should be importable.
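
    After moving the HBA, a quick sketch of what the import looks like (the pool name is just an example):
    Code:
    # list pools that are visible on the passed-through disks but not yet imported
    zpool import

    # import by name; -f is only needed if the pool was not exported on the old box
    zpool import -f tank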
     
  15. chook

    chook Member

    Joined:
    Apr 9, 2002
    Messages:
    846
    Perfect. That is what I was hoping. I have put the old ESXi install aside in case something goes wrong and am doing a test install on a USB stick I have lying around to see how it works :).
     
  16. GoofyHSK

    GoofyHSK Bracket Mastah

    Joined:
    Mar 3, 2002
    Messages:
    1,565
    Location:
    Adelaide Hills
    You running a Norco, Davros? (Can't recall, been so long.) Worried about the backplanes handling 3TB+?
     
  17. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,886
    it's a norco but not as you know it :) it's a generic branded norco from anywarepc. norco, but white label.

    I have had no issues with the backplanes at all. steady as a rock. my mate did though; techbuy replaced them all for him.

    no issues with >3TB. i do recall something about it a long long time ago, but not heard anything on that for many many years.
     
  18. evilasdeath

    evilasdeath Member

    Joined:
    Jul 24, 2004
    Messages:
    4,741
    Where did you get that price from? Currently looking at new disks myself to up my array. Tossing up between 8/10/12 TB disks, coming from 4TBs
     
  19. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,886
  20. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    167
    Info

    There is a refresh of the Solaris 11.4 beta
    https://blogs.oracle.com/solaris/oracle-solaris-114-open-beta-refresh-2


    Midnight Commander
    Midnight Commander (mc) is a file browser that allows browsing, copying, moving, viewing and editing files from a text console.

    As mc is not in the current Solaris 11.4 or OmniOS repositories, the fastest way to get it is my online installer:
    wget -O - www.napp-it.org/midnight_commander | perl

    If Midnight Commander does not show correct borders in PuTTY:
    - open the PuTTY settings under Window > Translation and change the remote character set, e.g. from UTF-8 to ISO 8859-1 (Latin-1, West Europe)
    - reconnect
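
    An alternative, if you would rather not change the PuTTY translation, is to tell mc itself to draw its frames with plain ASCII characters:
    Code:
    # -a / --stickchars: use plain ASCII instead of line-drawing characters
    mc -a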
     
    Last edited: Jul 26, 2018
