
OpenSolaris/Solaris 11 Express/ESXi : BYO Home NAS for Media and backup images etc

Discussion in 'Storage & Backup' started by davros123, Dec 14, 2009.

  1. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    AiO with Solaris 11.4 on ESXi 6.7
    As of today, there is no support in the current VMware tools for the Solaris 11.4 final release:
    vmware-tools / open-vm-tools on 11.4b |Oracle Community

    My findings ("just a hack"):

    Installing the VMware vmtools for Solaris from ESXi 6.7
    on a text-only setup of Solaris 11.4 final:

    The installer vmware-install.pl runs on 11.4, but vmtool/bin/vmware-config-tools.pl then fails with the message
    Package "SUNWuiu8" not found

    This check can be skipped by editing vmtool/bin/vmware-config-tools.pl at line 13026 and commenting out the check for SUNWuiu8.

    When you then run vmtool/bin/vmware-config-tools.pl, it hangs due to a missing /usr/bin/isalist.
    I copied isalist over from a Solaris 11.3 system and made it executable; after that vmware-config-tools.pl works. A command-line sketch follows below.
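
    As shell commands, the workaround looks roughly like this (the installer location, the copy of vmware-config-tools.pl you edit, and the source path for isalist are assumptions; check your own paths and the exact line of the SUNWuiu8 check):

        # unpack and run the vmtools installer from the ESXi-provided tools ISO
        tar xzf /cdrom/vmwaretools/vmware-solaris-tools.tar.gz
        cd vmware-tools-distrib && ./vmware-install.pl

        # comment out the SUNWuiu8 package check (around line 13026 in my copy)
        vi vmtool/bin/vmware-config-tools.pl

        # provide the missing isalist binary from a Solaris 11.3 system (source path assumed)
        cp /path/from/solaris113/usr/bin/isalist /usr/bin/isalist
        chmod +x /usr/bin/isalist

        # re-run the configuration script
        vmtool/bin/vmware-config-tools.pl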

    After a reboot I got the message that vmtools are installed, but with a console warning:
    Warning: Signature verification of module /kernel/drv/amd64/vmmemctl failed

    The same verification warning appears for the vmxnet3s driver, and vmxnet3s reports a deprecated "misc/mac".

    Not sure if this is critical

    Both vmxnet3s and a guest restart from ESXi work.

    Gea
     
  2. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    Full OS/ Appliance disaster recovery

    On the current napp-it 18.09dev I have added a new function/menu, System > Recovery,
    with modified replication jobs to make recovery easy. The idea behind it:

    To recover a fully configured appliance from a BE (boot environment):

    1. Backup the current BE: create a replication job (requires 18.09dev) with the current BE as source.

    2. Reinstall the OS and napp-it.

    3. Restore the BE: create a replication job (requires 18.09dev) with the backed-up BE as source and rpool/ROOT as target.

    4. Activate the restored BE and reboot.

    This BE backup/restore can also be done manually via replication and zfs send; a sketch follows below.
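
    A minimal manual sketch with zfs send/receive (the pool, snapshot, and BE names here are examples, not what napp-it generates; you may also need to adjust canmount/mountpoint properties on the received BE):

        # backup: snapshot the boot environment and send it to a backup pool
        zfs snapshot rpool/ROOT/solaris@be-backup
        zfs send rpool/ROOT/solaris@be-backup | zfs receive backup/be-backup

        # restore (after reinstalling OS + napp-it): send it back under rpool/ROOT
        zfs send backup/be-backup@be-backup | zfs receive rpool/ROOT/restored-be

        # activate the restored BE and reboot
        beadm activate restored-be
        reboot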
     
    davros123 likes this.
  3. GoofyHSK

    GoofyHSK Bracket Mastah

    Joined:
    Mar 3, 2002
    Messages:
    1,568
    Location:
    Adelaide Hills
    Looks like another of my Hitachi 5K3000 2TBs is failing.
    What are my options for a (single) replacement? The pool is ashift=9.

    Also, it's been too long since I looked at all this stuff... how many more failures can my pool handle?
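
    For reference, how I'm checking the pool layout and error counters (the pool name "tank" is a stand-in for mine):

        # confirm the pool ashift (9 = 512b sectors, 12 = 4k)
        zdb -C tank | grep ashift

        # per-device read/write/checksum error counters
        zpool status -v tank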
     


  4. HobartTas

    HobartTas Member

    Joined:
    Jun 22, 2006
    Messages:
    1,095
    You have to find another native 512b drive: either another Hitachi like the one you have, or something like my Samsung 1.5TB HD154UI, which is native 512b, as I think the 2TB version HD204UI is as well. I don't think there were any hard drives bigger than 2TB that were native 512b; they were all 4Kn/512e. Even the Samsung drives I've mentioned were replaced pretty quickly with 4Kn versions under similar model numbers, so you may have some difficulty sourcing any replacements.

    Two hard drives are out in RaidZ2-0, one in RaidZ2-2, and one in RaidZ2-1 (going back to two if the repair is successful). If you lose a total of three hard drives in any RaidZ2 group, you automatically lose the entire pool. If you're down two drives in any RaidZ2 group, you're at risk of file loss from any bad blocks elsewhere in a stripe, as you're effectively running RAID 0 on that stripe; the rest of the pool will still be OK even if you do lose some files.

    If I were you, I'd buy a used LTO5 tape drive from eBay at the going rate of about $300-$500, get about two dozen 1.5TB LTO5 tapes at about $30 each, and back up all your stuff pronto, preferably as two separate sets of backups, so you'll need 48 tapes. Alternatively, get another six 10TB hard drives for say 6 x $450 = $2700 and create another 6-drive RaidZ2 pool for a net storage of 40TB, then copy all your existing data over to that.
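
    For the tape route, a bare-bones sketch on Solaris (the /dev/rmt/0 device name is the usual default but an assumption for your box; for 48 tapes you'd want proper backup software rather than plain tar):

        # write a directory tree to tape (the "n" suffix is the no-rewind device)
        tar cvf /dev/rmt/0n /tank/media

        # rewind, then list the tape contents to verify
        mt -f /dev/rmt/0 rewind
        tar tvf /dev/rmt/0n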

    I run a 10-drive RaidZ2, but I wouldn't run your config of three lots of 8 drives for a net storage of 36TB; I'd prefer two lots of 12-drive RaidZ3, also a net 36TB, unless you really need the IOPS of your config. Also, for home use I'd have them as two or three completely separate pools rather than the single pool you currently have, as it's just too monolithic.

    Lastly, if you do recreate the pool, I'd suggest you do so with ashift=12 even on native 512b drives, because then if you get more hard drive failures you can add any 4Kn/512e hard drive and it will be accepted into the pool, which it won't be while the pool is ashift=9.
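
    A sketch of forcing it at creation time (device names are placeholders; the -o ashift property is the OpenZFS/illumos form, while some Solaris builds instead take the sector size from sd.conf overrides):

        zpool create -o ashift=12 tank \
            raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

        # verify
        zdb -C tank | grep ashift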

    Cheers
     
    GoofyHSK likes this.
  5. GoofyHSK

    GoofyHSK Bracket Mastah

    Joined:
    Mar 3, 2002
    Messages:
    1,568
    Location:
    Adelaide Hills
    I don't recall exactly why I split the pools up like that,
    but it was something to do with having 3x SAS cards, and thinking that if a whole backplane fails I'm less likely to lose a whole set.

    Think I will take c3t14d1 out, as the checksum count is now 286K.
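
    Roughly the plan (pool name and replacement device are stand-ins; clearing the counters first shows whether the errors come straight back):

        zpool clear tank c3t14d1             # reset the error counters
        zpool offline tank c3t14d1           # take the suspect drive out of service
        zpool replace tank c3t14d1 c3t20d0   # resilver onto a replacement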
     
  6. HobartTas

    HobartTas Member

    Joined:
    Jun 22, 2006
    Messages:
    1,095
    That seems reasonable, as Sun's Thumper had a config of 48 drives spread over 6 controllers: 8 lots of (I think) 6-drive RaidZ, where each member drive was on a separate controller, so if a controller card failed the pool was still operational. But ZFS just needs ports; they don't have to be tied to any particular HBA or SAS card. For instance, I drive my 10-drive RaidZ2 off 2 SATA ports and the 8 onboard SAS ports, but now that I have a SAS LTO I will reconfigure the array to 3 SATA and 7 SAS ports, as the LTO will need the 8th one.

    It's possible it's the drive, but more likely the backplane or cabling. Can you test the drive in another PC? If you don't have one, export the pool, swap the dodgy drive with a neighbour, import the pool, and see if the errors move to the new position or stay on the same port. If the port is the problem, you just need to leave it unconnected and connect the otherwise-good drive up some other way. The swap test is sketched below.
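
    As commands (pool name assumed; the swap itself is physical):

        zpool export tank
        # physically swap the suspect drive with a neighbouring bay, then:
        zpool import tank
        zpool clear tank          # reset counters so any new errors stand out
        zpool status -v tank      # do the errors follow the drive, or stay on the port?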
     
    GoofyHSK likes this.
  7. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
  8. GoofyHSK

    GoofyHSK Bracket Mastah

    Joined:
    Mar 3, 2002
    Messages:
    1,568
    Location:
    Adelaide Hills
    Attempting to test both drives in another box; not really sure what the best methods are though.

    EDIT: Extra fun: running smartinfo in napp-it on said Solaris server (11.3) causes a kernel panic.
    EDIT2: That was napp-it 16.xf; 18free seems to have fixed it.
     
    Last edited: Oct 25, 2018
  9. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
  10. GoofyHSK

    GoofyHSK Bracket Mastah

    Joined:
    Mar 3, 2002
    Messages:
    1,568
    Location:
    Adelaide Hills
    Looks like I still have to implement that somehow on Solaris 11.3; I got the same kernel panic today (without any manual input from me).

    EDIT: Will start looking at an OmniOS VM instead; the lack of Oracle support access is annoying.
     
    Last edited: Oct 25, 2018
  11. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    Okay, it seems the ahci driver problem (a kernel panic on ESXi 6.7 when using smartmontools on ESXi vdisks) was inherited from OpenSolaris. OmniOS has fixed it; Oracle has not yet.
     
  12. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    Last edited: Nov 6, 2018
  13. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    3,045
    Hi Gea. I have a problem sharing via www using napp-it. I've posted some screen snaps below. I think it should be as simple as turning on Apache and setting the share property... but it's not working. Shares work fine with CIFS or NFS etc., but not www.

    I have used another Apache server on a different machine (192.168.0.4) to manually set up shares, so I know it SHOULD work, and I could manually set up the Apache config, but I want to do it via the GUI. Can you help me debug this?

    [two screenshots attached]
     
  14. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    What happens if you enter http://192.168.0.1?

    This should show you the content of the DocumentRoot folder.
    (napp-it sets this to the shared filesystem, not the absolute ZFS filesystem root /poolname, to avoid access to other filesystems.)
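
    For illustration, this corresponds to a directive along these lines in the generated config (the exact form napp-it writes may differ):

        # serve the shared filesystem, not the pool root
        DocumentRoot "/cloud/testcifs"
        <Directory "/cloud/testcifs">
            Require all granted
        </Directory>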
     
  15. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    3,045
    Hi. At 192.168.0.1, I get the Apache "It works!" page.

    Oh, btw, I am using your OI Hipster release...
    napp-it Appliance: PRO version (If a PRO version expires, functionality is reduced to the unlimited FREE version)

    uptime : 23:08:06 up 51 day(s), 4:22, 1 user, load average: 0.17, 0.09, 0.06
    running on : SunOS nas 5.11 illumos-229852ddf2 i86pc i386 i86pc
    OpenIndiana Hipster 201804 powered by illumos OpenIndiana Project part of The Illumos Foundation C 2010-2018 Use is subject to license terms Assembled 27 April 2018
     
  16. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    Can you enable vhosts in the Apache settings? ZFS filesystems are shared via vhosts, each one on a different port.
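
    For illustration, a per-share vhost looks something like this (the port number and whether _wwwroot is appended are assumptions about the generated config):

        Listen 81
        <VirtualHost *:81>
            DocumentRoot "/cloud/testcifs"
            <Directory "/cloud/testcifs">
                Require all granted
            </Directory>
        </VirtualHost>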
     
  17. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    3,045
    OK, done. But I still get the same 404 at http://192.168.0.1/cloud/testcifs:
    "Not Found
    The requested URL /cloud/testcifs was not found on this server."

    I saw this message pop up when I enabled the www share...
    sh[1]: /cloud/testcifs/_wwwroot/index.php: cannot create [No such file or directory]

    Not sure what that's all about... I left the document root as /cloud/testcifs/ in the dropdown box and did not specify /cloud/testcifs/_wwwroot/.

    Is there a logfile I can look at to get more debug info?
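
    In the meantime I'll watch the logs; I'm assuming OI's Apache 2.4 logs in its usual place:

        # watch the Apache error log while reproducing the 404
        tail -f /var/apache2/2.4/logs/error_log

        # and check the access log for the request itself
        tail /var/apache2/2.4/logs/access_log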
     
    Last edited: Nov 7, 2018
  18. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    Without vhost, the root folder for Apache is the default root from the basic Apache settings. With vhost enabled, the root folder is either your filesystem /cloud/testcifs or /cloud/testcifs/_wwwroot (depending on the setting chosen when you enable www sharing).

    The URL is always http://192.168.0.1; adding a path after it refers to regular folders below that root.
     
  19. OP
    davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    3,045
    OK, so vhost is enabled and I have specified cloud/testcifs. If I navigate to http://192.168.0.1 I should see the shared root of /cloud/testcifs, yeah? But I get this...

    [screenshot attached]

    You're in Germany, yeah... so it's 9:30am there atm. How about we do a screen share and I can show you? Let me know and I can PM you a Zoom session.
     
  20. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    220
    Yes, it should show the content of /cloud/testcifs (optionally _wwwroot below it) and not the default Apache status page.
    Can you first restart Apache, to be sure your changes are active?

    Then check apache.conf (Services > Apache > apache.conf), where the default docroot is set (and currently shown),
    and where extensions are enabled (check that vhost is on: the line "LoadModule vhost_alias_module lib/httpd/mod_vhost_alias.so" must not be commented out).

    Then check Services > Apache > edit includes > httpd_vhosts, where the vhosts and their ip/port/docroot (/cloud/testcifs) are defined. A quick command-line check is sketched below.
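
    From a shell, something like this (the SMF service name and config path are assumptions for an OI install):

        # restart Apache via SMF
        svcadm restart apache24

        # confirm the vhost module is loaded
        grep -n vhost_alias /etc/apache2/2.4/httpd.conf

        # dump the parsed VirtualHost configuration
        apachectl -S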
     
