Discussion in 'Business & Enterprise Computing' started by wwwww, Apr 24, 2020.
Yes, I explored those options. It cannot be on a separate machine.
you're exploring, yet say it cannot be on a different host
a NAS is a different host!
why not? what stops you?
Clever...that means a NAS is not an option.
except that if it's a VM - it's effectively a separate machine.
but what difference does it make VM or different host?
What advantage does it have being inside the same host?
Not every limitation is technical. Understand that it is a workstation, not a server. It sits in an office. I cannot just add additional external hardware without considering aesthetics, physical space, noise, etc.
The idea of requiring 40TB locally in a workstation has some real problems. With 12 rust drives, 4 fast cache drives, plus whatever host drives are on the thing, and an appropriate HBA to mount them, this is no standard "workstation". It's not your pretty Mac sitting on a desk to look good, by any stretch.
A NAS doesn't need to sit on the aesthetic desk; you shove it in a rack away where people don't see it.
If you're content on having this workstation hold all this storage, then do away with the SMR drives and maybe it won't freeze. That's probably going to be the cheapest option.
Not every solution needs to be complex either.
The number of times I see problems overthought and hung up on an implementation issue that would have been easily sorted by a more elegant, simple and easy solution is overwhelming.
The number of times I see a hardware consultant try to recommend hardware for what is fundamentally a software issue is overwhelming.
The freezing issue was caused by tiering. Removing the tiering fixed this problem.
I just passed through the SSDs as a single VROC RAID0 array, HDDs in Storage Spaces Parity.
Data gets written to the SSDs and progressively copied across onto the HDD array and symlinked back.
40TB capacity with a hot spare, 1000MB/s writes limited by the source.
A simple, elegant, software solution.
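The copy-and-symlink flusher described above can be sketched in a few lines. This is a minimal illustration, not the poster's actual script: the function name and the example volume paths are mine, and a real version would also need to skip files still being written.

```python
from pathlib import Path
import shutil

def flush_cache(cache: Path, archive: Path) -> None:
    """Copy each file off the SSD cache volume onto the HDD array,
    then replace the original with a symlink pointing at the copy."""
    for entry in list(cache.rglob("*")):   # snapshot first; we mutate the tree
        if not entry.is_file() or entry.is_symlink():
            continue                       # skip directories and already-flushed files
        dest = archive / entry.relative_to(cache)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(entry, dest)          # data + timestamps onto the HDDs
        entry.unlink()                     # free the SSD space
        entry.symlink_to(dest)             # reads still resolve transparently

# Hypothetical volumes: flush_cache(Path(r"D:\cache"), Path(r"E:\archive"))
```

Note that on Windows, creating symlinks requires either administrator rights or Developer Mode to be enabled.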
Except you have shit hardware and closed source software with no fix.
If you actually want a fix, stop shit posting on OCAU and go ask Microsoft.
If you want me to shill hardware - I can, but trying to add a hypervisor in the way for a performance-based usecase is a fucken stupid idea. Just like having 40TB of space in a desktop.
It seems to frustrate you when people fix software problems with software. I'm guessing you're a hardware consultant/salesperson?
Just because you can't solve a problem doesn't mean others can't. Not every thread is specifically addressed to you. If you see a problem that you can't fix, just don't reply and let other people help out. It's not helpful for you to go all testy and start shitposting whenever you can't solve something.
No thanks. I think the quote, "If your only tool is a hammer, then every problem looks like a nail" is very apt here. I guess you must know your hardware very well, but this was clearly a software issue.
If something is a stupid idea then why would you assume that's what's being done? Unless your intention was to call yourself stupid???
- The hypervisor was not "added" to solve this problem. The VM was what needed the space in the first place. I said this no less than 3 times.
- Also it was not a performance-based usecase. I specifically said it only had one performance requirement which was to write 5TB of sequential data quickly on one-off occasions - something easily met by the fact that the SSD cache was almost as big as the data being copied.
Not actually. But you have a hardware problem because software isn't mature enough to meet your needs on a single host.
I actually know software-defined storage better - tiered parity Storage Spaces aren't supported on a local machine, much less inside a virtual machine. It's an S2D thing.
Storage Spaces is also *incredibly* particular about hardware - hence the whole ready-node thing. Notice that Microsoft doesn't publish an HCL? Wanna bet which drives *absolutely* aren't a supported type? Yeah, that would be SMR drives.
This is that hammer thing again. You mean all the software you know isn't mature enough. It took fewer than 60 lines of code in a 22-year-old language to turn the SSDs into a massive write cache that continually flushes, which solved this problem. I would've liked a cleaner solution like a single software-defined drive, but since Storage Spaces isn't mature enough for that, this does the trick.
So bcache2 and mdadm or zfs could solve your problems.
Storage Spaces *is* mature - it runs the largest cloud platform on the planet.
This is quite a simple solution to engineer and S2D is quite capable of achieving this.
The fact that you have had mouse freezing is usually indicative of high disk active % time.
You have mentioned 40TB of fault-tolerant storage - is this 'real' fault tolerance, as in mirror or mirror + parity?
well I believe you already have your answer: if you're wanting to use S2D and ReFS, you need to be able to de-stage the cache tier to the backing disks and have those operations committed (sync writes)
what CPU have you got within the host? please don't say a hex-core 1.6 GHz?
This is due to the SMR drives taking a performance hit:
see more here:
I'll give you some numbers in a few days as to what you could expect on S2D and PMR drives.
did you remember to over-provision your flash tier by 20%?
The SM953 has a 512/4K/8KB logical block size - did you align the SSD to the Windows block size?
Did you enable the high performance power plan and disable PCI-E ASPM ?
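The alignment question above comes down to simple modular arithmetic: a partition or I/O offset is aligned when it lands exactly on a block boundary. A quick sketch (function names and example offsets are mine, for illustration):

```python
def is_aligned(offset_bytes: int, block_size: int = 4096) -> bool:
    """True if an offset falls exactly on a block boundary."""
    return offset_bytes % block_size == 0

def next_aligned(offset_bytes: int, block_size: int = 4096) -> int:
    """Round an offset up to the next block boundary."""
    return -(-offset_bytes // block_size) * block_size  # ceiling division
```

For example, the modern 1 MiB partition default (1,048,576 bytes) is 4K-aligned, while the old 63-sector offset (32,256 bytes) is not - the classic cause of read-modify-write penalties on 4K-sector media.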
It doesn't need to be highly available, just somewhat resilient to failure and bit errors. The mouse freezing seemed to be an issue with tiering; when I removed tiering from the pool it stopped happening.
Not S2D, it's only a single host. CPUs are 2xXeon 6238T, they're fast enough.
I meant that the SSDs lose their threaded write performance when run in VMs. I believe this is due to the SCSI/ATA controller Hyper-V uses not being able to fully utilise NVMe.
I ended up implementing it without Storage Spaces tiering. I just made a script to copy the data from the SSDs to the HDDs as it's written, then create a symlink to the data.
Did you manually set the write-back cache at the time of creation to > 1GB per drive?
Specifies the size of the write-back cache. The cmdlet creates the write-back cache of the size that you specify when it creates the virtual disk space. The following describes the behavior of this parameter based on the value that you specify:

1. If you do not specify this parameter, the cmdlet sets the value of the WriteCacheSizeDefault property from the storage pool.
2. The default setting of WriteCacheSizeDefault for a storage pool is Auto, which specifies that Windows Server automatically selects the optimal write-back cache size for your configuration. You can change the value of WriteCacheSizeDefault to a concrete value at any time.
3. The Auto setting for WriteCacheSize operates as follows:
   a. If any of the following is true, Auto is set to 1 GB:
      i. The storage pool contains at least N drives with enough capacity and you set the Usage parameter to Journal (N = 1 for simple spaces, N = 2 for two-way mirror and single parity, N = 3 for three-way mirror and dual parity).
      ii. The storage pool contains at least N drives with enough capacity and the media type of the virtual disk is set to SSD (same values of N).
   b. Otherwise, Auto is set to 0 (no log) for simple and mirror spaces, and 32 MB for parity spaces.
4. If you specify Auto or 0 (zero) for this parameter and the storage space is not a parity space, the cmdlet verifies that either 3.a.i or 3.a.ii is true. If neither is true, you cannot set WriteCacheSize to Auto or 0.

The objective of these conditions is to help you avoid scenarios in which you force the creation of a write-back cache in situations that result in slower performance.
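The Auto rules quoted above can be paraphrased as a small decision function. This is a rough executable sketch of the documented behaviour, not the cmdlet's actual implementation - the parameter names are mine, and "enough capacity" checks are omitted:

```python
GB = 1024 ** 3
MB = 1024 ** 2

# Minimum drive count N per resiliency type, from the quoted documentation.
_N = {"simple": 1, "two-way mirror": 2, "single parity": 2,
      "three-way mirror": 3, "dual parity": 3}

def auto_write_cache_size(resiliency: str, journal_drives: int = 0,
                          media_type_ssd: bool = False,
                          ssd_drives: int = 0) -> int:
    """Approximate the 'Auto' WriteCacheSize selection; returns bytes."""
    n = _N[resiliency]
    if journal_drives >= n:                 # 3.a.i: enough Journal-usage drives
        return 1 * GB
    if media_type_ssd and ssd_drives >= n:  # 3.a.ii: virtual disk media type is SSD
        return 1 * GB
    if "parity" in resiliency:              # 3.b: parity spaces keep a small log
        return 32 * MB
    return 0                                # 3.b: no log for simple/mirror spaces
```

Which is why a parity space with no journal or SSD drives quietly lands on a tiny 32 MB log - a plausible reason to set the write-back cache explicitly, as asked above.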
you need 300MB/s sequential writes and you chose Hyper-V Storage Spaces? lol, you need a decent SAN dude for 40TB at 300MB/s... but I'd say the active data set isn't 40TB... work out your active data, then get a cache big enough...
cue the retard who posts something about Synology... yeah they're great... if you're running a cinema at home... it's not a production solution.