ESXi 6.7 slow iSCSI and NFS transfer to FreeNAS

Discussion in 'Business & Enterprise Computing' started by Multiplexer, Nov 14, 2018.

  1. Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
Transferring a 9.5GB ISO from the ESXi local datastore to the FreeNAS datastore is slow. Below are the speeds recorded; any idea why?
    • NFS - 28.9MB/sec
    • iSCSI - 45.3MB/sec
    The Setup
• Both the ESXi local datastore and the FreeNAS datastore are each on a single SSD
• The ESXi host has 16GB of RAM and the FreeNAS box has 64GB
• ESXi has a direct Ethernet NIC connection to the FreeNAS box, i.e. no switch in between
A Windows transfer via iSCSI runs at 90MB/sec

    EDIT: added info
     
    Last edited: Nov 14, 2018
  2. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,576
    Location:
    Canberra
Is the Windows test from within VMware (i.e. over a vSwitch)?
     
  3. OP
    OP
    Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
The Windows machine is simply a physical desktop
     
  4. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,295
    Location:
    Canberra
iSCSI can be either SYNC or ASYNC; NFS (as ESXi mounts it) is always SYNC.

SYNC on BSD + ZFS forces disk buffer flushing to occur before the request is returned to the client (the hypervisor).

A quality SLOG can reduce write amplification by removing the ZFS metadata fragmentation problem.

Did you do the copy from the ESXi client, or was it a datastore-to-datastore copy via SSH?

    Code:
    [root@xxxx-xxx02:/vmfs/volumes/xxxx] ls -al
    total 3971088
    drwxr-xr-t    1 root     root          1960 Sep 12 01:19 .
    drwxr-xr-x    1 root     root           512 Nov 14 05:01 ..
    -rw-r--r--    1 root     root     3239035092 Jun 13 02:30 perflog.csv
    
    [root@xxxx-xxx02:/vmfs/volumes/xxxx] time -v cp perflog.csv -f /vmfs/volumes/XXX_XXXX_XXX/
            Command being timed: "cp perflog.csv -f /vmfs/volumes/XXX_XXXX_XXX/"
            User time (seconds): 10.54
            System time (seconds): 0.00
            Percent of CPU this job got: 30%
            Elapsed (wall clock) time (h:mm:ss or m:ss): 0m 34.43s
            Average shared text size (kbytes): 0
            Average unshared data size (kbytes): 0
            Average stack size (kbytes): 0
            Average total size (kbytes): 0
            Maximum resident set size (kbytes): 0
            Average resident set size (kbytes): 0
            Major (requiring I/O) page faults: 0
            Minor (reclaiming a frame) page faults: 0
            Voluntary context switches: 0
            Involuntary context switches: 0
            Swaps: 0
            File system inputs: 0
            File system outputs: 0
            Socket messages sent: 0
            Socket messages received: 0
            Signals delivered: 0
            Page size (bytes): 4096
            Exit status: 0
    
3,088MB in 34 seconds is ~90MB/sec writes. Source: local RAID 10 (6 x 300GB SAS) on ESXi 6.0, gigabit Ethernet to the FreeNAS box: an R610 with 192GB RAM, 2 x 480GB DC S3520 SSDs (over-provisioned to 25GB) in a mirrored SLOG, and 24 x 6TB 7200rpm drives in an 8 x 3-way mirror.
     
    Last edited: Nov 14, 2018
  5. OP
    OP
    Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
Is what you are saying the same as this post I found online? https://www.xigmanas.com/forums/viewtopic.php?t=7936

I mounted the FreeNAS storage in ESXi, opened the datastore browser, and then did a copy from the local storage to the FreeNAS storage
     
  6. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,295
    Location:
    Canberra
no, although yes, but no.

That article is discussing guest-mounted NFS vs hypervisor-mounted NFS; it also touches on ZFS sync.

ZFS sync is different from iSCSI sync and NFS sync. NFS can be mounted async, although not by ESXi. I care not to use iSCSI; LUNs are antiquated.

My file copy was not within a guest: I SSH'd into the hypervisor and copied from a local DS to a FreeNAS NFS DS.

Your issue is fundamentally latency. Although SSDs have great async write capability (thanks to their RAM cache), they can fail to get data onto flash quickly, particularly when the drive needs to erase a block and re-write it on the NAND; that takes time.

ZFS honours the FreeBSD NFS server sync requirement. For example, on ESXi:
• ESXi sends a 64KB write request to FreeNAS over NFS (the NFS DS is mounted SYNC by ESXi)
• the NFS server calls fsync() against the storage request to the ZFS filesystem
• ZFS honours the fsync() requirement and forces a buffer flush from the write cache of the drives (SSD or HDD) to confirm the write request made it to disk, along with the metadata updates (it's atomic)
• once the storage complies with the fsync() request, ZFS returns the write completion to NFS, which tells ESXi the data is written
• 28.9MB/s works out to about 462 x 64k IOPS, i.e. ~2.1ms to completion per request - see the quick check below
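
A quick sanity check of that last bullet; the awk line is just illustrative arithmetic using the figures from this thread:

Code:
# IOPS = throughput / request size; per-op time = 1 / IOPS
awk 'BEGIN { iops = 28.9 * 1024 / 64; printf "%.0f IOPS, %.2f ms per op\n", iops, 1000 / iops }'
# prints: 462 IOPS, 2.16 ms per op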

Using a quality SLOG minimises this time immensely. Without a SLOG, the zpool metadata suffers from increased write-amplification fragmentation and the zpool gets bombarded with small write operations; a SLOG essentially buffers these write requests and bulk-dumps the data to the zpool. Some SSDs are better than others for SLOG duty; Optane is the pinnacle of drives these days.
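
For reference, adding a mirrored SLOG to an existing pool is a one-liner from the FreeNAS shell (the pool and device names here are made up, not from this thread):

Code:
# attach two SSDs as a mirrored log vdev to pool "tank"
zpool add tank log mirror da1 da2
# confirm the log vdev shows up
zpool status tank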
     
    Daemon likes this.
  7. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,576
    Location:
    Canberra
    I love IT. :D

Unbuffered 4k writes are what you want to measure, for the kids playing at home. Also: Power Loss Protection.
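
Something like this fio run is the sort of test I mean, assuming fio is installed and /mnt/tank is your pool (both hypothetical here):

Code:
# random 4k writes with an fsync after every write, for 60 seconds
fio --name=4k-sync --filename=/mnt/tank/fio.test --size=1g \
    --bs=4k --rw=randwrite --ioengine=sync --fsync=1 \
    --runtime=60 --time_based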

    And yes, Optane is a fucking beast. Until you play with pMEM.
     
  8. dave_dave_dave

    dave_dave_dave Member

    Joined:
    Mar 17, 2004
    Messages:
    2,865
    Location:
    Gold Coast
  9. OP
    OP
    Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
So if I am going to keep iSCSI, should I implement a ZIL or L2ARC?
     
  10. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,295
    Location:
    Canberra
    First of all, do you value your data for iSCSI use?
     
  11. OP
    OP
    Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
No, I do not value the data, as I only use it for a home lab. As well as snapshots, I keep a base VM backup. I just want to improve speed because I hate waiting.

I do have another 125GB SSD sitting here doing nothing and could use it as a ZIL or cache device.
     
    Last edited: Nov 17, 2018
  12. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,295
    Location:
    Canberra
Then set sync=disabled on the zpool.
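
From the FreeNAS shell, against a hypothetical pool named "tank":

Code:
# disable sync writes pool-wide; you risk losing in-flight writes on power loss
zfs set sync=disabled tank
# check it took
zfs get sync tank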
     
  13. OP
    OP
    Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
I will give that a try tonight. But if I am to use another SSD just for testing purposes, should I be using it as a ZIL or L2ARC? Any simple article to read?
     
  14. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,576
    Location:
    Canberra
  15. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,295
    Location:
    Canberra
I doubt a separate SLOG will increase performance dramatically over a single-SSD zpool running with sync=disabled.

The filesystem already has an intent log; moving it to its own SSD may even compromise performance if the SLOG is a poor-performing drive.
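
For completeness, the two options are just different vdev types; the pool and device names below are placeholders:

Code:
# SLOG (separate ZIL device): absorbs sync writes
zpool add tank log da2
# L2ARC (cache device): extends the read cache, does nothing for writes
zpool add tank cache da2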
     
  16. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,576
    Location:
    Canberra
Agreed. sync=disabled == "infinite" unbuffered 4k speed - theoretically.

The Nexenta guys swore to me that there were edge cases you could run into with sync=disabled though. I just fixed that problem with remapped-LBA SSDs that have good unbuffered 4k write performance.
     
  17. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,295
    Location:
    Canberra
Result?
     
  18. OP
    OP
    Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
Tried a SLOG; the performance gain was not noticeable, so it's not worth the effort and space for a home lab. I ended up just using iSCSI.
     
  19. v81

    v81 Member

    Joined:
    Jan 31, 2005
    Messages:
    662
    Location:
    SE Vic
FWIW, my enterprise days are long past and I'm just a PC enthusiast again...
I was once messing around with NFS and iSCSI on FreeNAS and NAS4Free and found them slow; I could never figure out why.
Then I used a Synology DiskStation 415+ with 3 x 6TB disks and ended up with the 1GbE link being the bottleneck.
Dunno how or why; I always put it down to some magic configuration that Synology DiskStation Manager uses.
Apparently you can run a knockoff of DiskStation called Xpenology.
     
  20. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,576
    Location:
    Canberra
The other ones I can think of:

Record size of 32-64k
atime = off
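
As one-liners against a hypothetical VM dataset, those would look like:

Code:
# match the typical VM I/O size (only affects newly written blocks)
zfs set recordsize=64k tank/vmstore
# skip access-time updates on reads
zfs set atime=off tank/vmstore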
     
