
OpenSolaris/Solaris 11 Express/ESXi : BYO Home NAS for Media and backup images etc

Discussion in 'Storage & Backup' started by davros123, Dec 14, 2009.

  1. waltermitty

    waltermitty Member

    Joined:
    Feb 19, 2016
    Messages:
    1,427
    Location:
    BRISBANE
    This is such a great milestone for ZFS!
     
  2. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    davros123 likes this.
  3. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    SMB3 (the kernel-based Solarish SMB server) is announced for OmniOS 151032 in November 2019,
    see omniosorg/Lobby

    If you want to try it now, use OpenIndiana (always the newest Illumos) or OmniOS bloody 151031
     
  4. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
  5. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    ZFS Allocation Classes
    This is a new feature in Open-ZFS

    It allows you to add special vdevs or dedup vdevs to a pool.
    Dedup vdevs hold the dedup table, which solves its RAM problem. Special vdevs hold metadata or small I/O. This allows you to create mixed pools of disks and SSD/NVMe where performance-critical data or filesystems can land on the faster vdev(s).

    I have made some performance benchmarks:
    http://napp-it.org/doc/downloads/special-vdev.pdf

    I am really impressed by the results, as this lets you use a slow disk pool and decide per ZFS filesystem, based on the "recsize" vs "special_small_blocks" settings, whether the data of that filesystem lands on the special vdev, e.g. an Intel Optane.
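
    As a minimal sketch from the shell (pool, filesystem and device names are just examples, adapt them to your setup), adding a mirrored special vdev and directing all data of one filesystem onto it could look like this:

    # add a mirrored special vdev (mirror it, it holds pool-critical metadata)
    zpool add tank special mirror c4t0d0 c5t0d0

    # put a whole filesystem on the special vdev by setting
    # special_small_blocks at least as large as the recordsize of that filesystem
    zfs set recordsize=64K tank/vms
    zfs set special_small_blocks=64K tank/vms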
     
  6. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    Update

    Dedup and special vdevs are removable vdevs. This works only when all vdevs in the pool have the same ashift setting, e.g. ashift=12, best for 4k disks.

    At least in current Illumos there is a problem that a pool crashes (gets corrupted) when you try to remove a special vdev from a pool with different ashift settings, e.g. a pool with ashift=12 vdevs and a special vdev with ashift=9. In current napp-it-dev I therefore set ashift=12 instead of "auto" as the default when creating or extending a pool.

    If you want to remove a special or dedup vdev, first check the ashift setting of all vdevs (menu Pool, click on the datapool). I have sent a mail to illumos-dev and hope that this bug is solved prior to the next OmniOS stable.

    If you create or extend a pool, I suggest taking care to use the same ashift. When you try to remove a regular vdev (e.g. basic, mirror) from a pool and the ashift differs, it stops with a message that this cannot be done due to different ashift settings (but no crash like with special vdevs).
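
    As a hedged example from the shell (pool and device names are placeholders), the ashift of every vdev can be checked before a remove, and ashift=12 can be forced when adding a vdev:

    # show the ashift of all vdevs in the pool (from the cached pool config)
    zdb -C tank | grep ashift

    # force ashift=12 when adding a special vdev
    zpool add -o ashift=12 tank special mirror c4t0d0 c5t0d0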
     
  7. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    Last edited: Oct 22, 2019
  8. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    Some more insights about the new ZFS Allocation Classes feature
    http://napp-it.org/doc/downloads/special-vdev.pdf

    1. About Allocation Classes
    2. Performance of a slow diskbased pool
    3. With special vdev (metadata only)
    4. With special vdev (for a single filesystem)
    5. With special vdev (for a single filesystem) and Slog (Optane)
    6. Performance of a fast diskbased pool
    7. Fast diskbased pool vwith special vdev
    8. NVMe Pool vs special vdev (same NVMe)
    9. Compare Results
    10. Conclusion
    11. When is a special vdev helpful
    12. When not
    13. General suggestions
     
  9. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    OmniOS 151032 stable is out
    This is the most feature-rich update for Open-ZFS and OmniOS ever.

    download: https://omniosce.org/download.html

    Release notes

    https://github.com/omniosorg/omnios-build/blob/r151032/doc/ReleaseNotes.md

    Update
    http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf

    New Open-ZFS features:
    - native ZFS encryption
    - raw zfs send of locked and encrypted filesystems
    - sequential/sorted resilver (can massively reduce resilver/scrub time)
    - manual and auto trim for the SSDs/NVMes in a pool
    - Allocation classes for metadata, dedup and small I/O (mixed pool from disk/SSD/NVMe)
    see https://www.napp-it.org/doc/downloads/special-vdev.pdf
    a warning at this point: a zpool remove of a special vdev with a different ashift than the pool crashes Illumos/ZoL
    - force ashift on zpool create/add

    OmniOS related
    - updated NVMe driver (with basic support for NVMe/U.2 hotplug)
    - updates for newer hardware
    - installer supports UEFI boot

    - SMB 3.02 (kernel-based SMB) with many new features
    see https://github.com/illumos/illumos-gate/pulls?q=is:pr+SMB3
    - improvements for LX/Linux zones, newer Linux distributions
    - improvements for Bhyve
    - improvements to the Enlightened Hyper-V drivers for running under Hyper-V or Microsoft Azure.

    Napp-it 19.dev/19.h2.x supports the new features
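
    As a rough sketch of the new ZFS features from the shell (pool and filesystem names are placeholders; exact options may differ slightly between OmniOS and ZoL):

    # native encryption on a new filesystem
    zfs create -o encryption=on -o keyformat=passphrase tank/secure

    # raw send of the encrypted filesystem; it stays encrypted on the target
    # and can be received without loading the key
    zfs snapshot tank/secure@backup1
    zfs send -w tank/secure@backup1 | zfs receive backup/secure

    # manual and automatic trim
    zpool trim tank
    zpool set autotrim=on tank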
     
  10. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    napp-it v20.x

    I have uploaded a preview version of the next napp-it v19.12 (noncommercial home use)
    and napp-it 20.01 pro to support the newest features of Oracle Solaris and especially OmniOS/OpenIndiana.

    - ZFS encryption with web/file-based keys, an http/https keyserver with an HA option,
    keysplit for two locations, automount after reboot and user lock/unlock via SMB
    http://napp-it.org/doc/downloads/zfs_encryption.pdf

    - special vdevs
    https://www.napp-it.org/doc/downloads/special-vdev.pdf

    - trim

    - force ashift when adding a vdev

    - protection against accidentally adding a basic vdev to a pool

    more (all features from 19.dev)
    https://napp-it.org/downloads/changelog_en.html
     
  11. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
  12. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
  13. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
  14. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    I have just added support for Apple Time Machine via SMB into napp-it from 19.12 onward (OmniOS, OI, kernel-based/Solaris SMB server; see menu Services > Bonjour)
     
  15. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
  16. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    OpenIndiana 2020.04 is available

    OpenIndiana is a Solaris fork like OmniOS and based on Illumos. Unlike OmniOS there is no stable/long-term-stable release, no commercial support option and no regular security fixes (OmniOS sometimes gets several per month). OpenIndiana is like a reference installation of Illumos. You get it as a minimal, text (similar to OmniOS) or desktop edition. Every "pkg update" gives the newest state of Illumos, similar to OmniOS bloody.


    OpenIndiana is mostly for a ZFS home system with browser and office apps. For a production ZFS storage system, prefer OmniOS, optionally with a support contract.

    http://docs.openindiana.org/release-notes/2020.04-release-notes/
     
  17. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    I have added a small howto for backup/copy/sync of local files from/to a napp-it ZFS filer and from/to a cloud service (or between cloud services) like (Amazon) S3, Google or Microsoft.

    http://www.napp-it.org/doc/downloads/cloudsync.pdf

    Update:
    I have added configuration info for https, encrypted files and setup for Amazon S3/minIO and Google Drive
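
    If you want to script such a sync yourself from the shell, one option (not necessarily what the howto uses) is rclone; a minimal sketch, assuming a remote named "s3" and a bucket "backup" have already been set up with "rclone config":

    # push a local ZFS filesystem to the cloud bucket
    rclone sync /tank/data s3:backup/data --progress

    # pull it back to a restore location
    rclone sync s3:backup/data /tank/restore --progress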
     
    Last edited: Jun 14, 2020
  18. chook

    chook Member

    Joined:
    Apr 9, 2002
    Messages:
    3,321
    One of my disks is now unavailable so that seemed like a fine and dandy excuse to buy larger disks. There are six in the pool and I have three for now and will get three more a bit later.

    Time to replace some disks in my raidz2 pool. Each disk will be replaced in the same location on the same hardware. Did a bit of research and think I have this sorted but want to bounce it off folks here.

    These are my planned steps:
    1. Identify which disk is the bad one: iostat -En
    2. Match the WWN with the serial number.
    3. Remove the bad disk.
    4. Insert the good disk.
    5. Invoke: zpool replace tank c0t<new WWN>d0
    6. Wait.
    7. Invoke: zpool scrub tank
    8. Wait some more.
    9. Repeat for other disks one at a time.
    Have I misunderstood something or scrambled my brain slightly here?
     
    Last edited: Aug 25, 2020
  19. chook

    chook Member

    Joined:
    Apr 9, 2002
    Messages:
    3,321
    Sorry for the double post but I went ahead and gave it a go (since it is raidz2 and thus I have a backup). This is how to do it when you are using WWN:
    1. Identify which disk is the bad one: iostat -En
    2. Match the WWN with the serial number.
    3. Remove the bad disk.
    4. Insert the good disk.
    5. Invoke: zpool replace tank c0t<old WWN>d0 c0t<new WWN>d0
    6. Wait.
    7. Invoke: zpool scrub tank - Not needed. Thanks gea.
    8. Wait. - As above.
    9. Repeat for other disks one at a time.
    Even though you are putting the new disk into the same physical location you have to use both devices in the command line because they have different WWNs and are not just referenced with what would otherwise be an identical "t1" or similar label.

    Step 7 to scrub (and thus step 8 to wait) is also optional but recommended in case something went wrong while waiting at step 6. - As above.
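
    For reference, the whole sequence from the shell looks roughly like this (the WWNs are placeholders; use zpool status to watch the resilver rather than guessing when it is done):

    iostat -En                                          # list disks with serial numbers
    zpool status tank                                   # confirm which disk is faulted
    zpool replace tank c0t<old WWN>d0 c0t<new WWN>d0    # start the resilver onto the new disk
    zpool status tank                                   # repeat until the resilver has completed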
     
    Last edited: Aug 26, 2020
  20. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    197
    A mapping of disks (WWN, a factory disk number) to disk bays is always recommended; either write it down in a list (the WWN is usually printed on the disk) or, when you use napp-it, create and print out a graphical disk map of your storage case. If you haven't done this, power off and create the list first to avoid wrong replacements.

    If you have enough bays, add the new disk additionally and start a disk replace old > new. This does not reduce the redundancy level like removing a disk to replace it in the same slot does. After the replacement is finished, remove the old disk (many disk controllers support hot add/remove).

    An additional scrub is not needed, as the resilver has already read all data. If an error occurs, ZFS will report it.

    btw
    iostat is more of an inventory of disks and events since the last reboot. Real problems that require a replace are reported by ZFS itself (too many errors, offline, checksum errors). Smartmontools can also give information about a needed or suggested replacement. Current napp-it gives a SMART warning on some critical raw SMART values.
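
    A rough example of checking this from the shell (device names are placeholders; smartctl may need a -d option such as sat or scsi depending on the controller):

    zpool status -v tank                        # ZFS reports real read/write/checksum errors here
    smartctl -a -d sat /dev/rdsk/c0t<WWN>d0     # raw SMART values of a single disk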
     
    chook likes this.
