Discussion in 'Other Operating Systems' started by elvis, May 11, 2011.
Can you not leverage integrated graphics for the host?
You can if your CPU has integrated graphics, but it's not guaranteed from what I've read (it's dependent on the CPU architecture - the more recent the CPU, the more likely it is to work).
Has anyone played around with bhyve?
I was looking at deploying FreeNAS inside a KVM VM with PCI passthrough of my SAS controller, but it looks like I can run a bunch of Linux VMs on FreeNAS running bhyve instead. This eliminates any possible problems associated with virtualising FreeNAS, but I'm worried about the features and maturity of bhyve. The VMs will be in production, but we're a small outfit, and our type of production is different to a large enterprise's in that it doesn't really matter if things go down for an hour or five.
I've been using oVirt for a long time and I like it, but I'm not using it for anything close to what it's capable of in terms of running a cloud-type setup with multiple hosts (I run it on a single host).
Running bhyve on FreeNAS will allow me to consolidate some hardware, and it looks like it'll run FreeBSD, Scientific Linux and Ubuntu quite happily - and that's all I need it to do. I guess I'll give it a go for non-critical tasks and report back. Stay tuned.
I haven't, but I've been casually following its progress. It's been around a while now (introduced in FreeBSD 10.0, with 10.3 and 11.1 currently the stable releases).
By all accounts it covers off all the basic things you'd want from a virtualisation system, and supports libvirt so it can be managed by all the standard open source tools.
BSD already has its excellent jail system, which should solve any need for BSD-on-BSD. But if you're wanting to virtualise other stuff on BSD and don't have crazy requirements, it seems bhyve is perfectly capable.
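If anyone wants to poke at the libvirt side, the bhyve driver has its own connection URI, so something like this should list your guests (a sketch going off the libvirt docs - you'll need libvirt built with bhyve support):

# connect to the local bhyve driver and list all defined guests
virsh -c bhyve:///system list --all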
Can anyone report on their real-world experience with GPU passthrough under KVM, specifically anything OpenGL-capable with Windows guests?
GPU passthrough presents the GPU to the guest OS as if it were plugged into a physical computer (there is a bit happening in between, but yeah).
All 3D, OpenGL, etc. works as it would on bare metal.
You will need a dedicated graphics card per guest running at the same time to do it.
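On a Linux/KVM host, the rough shape of it (as I understand it, so treat this as a sketch) is: find the card's PCI address, then hand it to the guest as a hostdev - with managed='yes', libvirt rebinds the device to vfio-pci for you at guest start. The 01:00.0 address below is just an example, check yours first:

# find the GPU's PCI address and vendor:device IDs
lspci -nn | grep -i vga

# hostdev stanza for the guest's libvirt XML
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>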
My real world experience is: "Your host does not support GPU passthrough".
You probably need VT-d or whatever.
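If you want to check rather than guess, it's quick on a Linux host - VT-d has to be on in the BIOS and in the kernel (intel_iommu=on on the kernel command line for Intel boxes):

# did the kernel actually bring the IOMMU up?
dmesg -i grep -e DMAR -e IOMMU 2>/dev/null || dmesg | grep -i -e DMAR -e IOMMU

# if this directory is empty, passthrough is a non-starter
ls /sys/kernel/iommu_groups/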
Thanks for your input - much appreciated, as always. To be perfectly honest, I could get away with running jails for 90% of the tasks I want to perform, but it would be nice to have the ability to spin up Linux VMs for whatever reason. I caught myself spinning up a FreeBSD VM on bhyve for some tests the other day, which was a bit of a "WTF am I doing this for?" moment.
I reckon I need to spend some more time with FreeBSD again before I can make a more informed decision. It was my preference over Linux for many years, but I decided to move to CentOS and SL for a few different reasons which aren't so important these days, and aside from pfSense and FreeNAS I'm a bit out of the loop. I didn't even know about bhyve until recently - I think the last FreeBSD installation I used was 6.4!
Nah, I need a host which doesn't have a discrete GPU.
Eh? Discrete GPUs are exactly what you need.
You might have to run the host headless if you only have one though. Makes it harder.
If you have an iGPU and a discrete GPU, that's the easiest: run the host on the iGPU and pass the discrete card through. I got it working as a test with oVirt without even reinstalling the Windows guest (it ran directly from the HDD).
Booted, ran. Then I broke the host config and couldn't be bothered any more.
If you do what I did, be prepared to run fixmbr. That was annoying with only a shitty netbook to download the ISO.
From what I've read, NVIDIA Optimus chips, like my 920M, don't work well with passthrough, not least of all because of drivers.
There's also the minor detail that the machine I would do it on is one where I don't want to give up the "power" of the NVIDIA chip, as it gets used regularly.
Oh dear. Yeah, it'll be easier on a PC.
Eventually, if I ever end up moving back to Australia, or if I decide to get more stuff shipped over to PNG, I will likely get this happening on a dedicated rig. Unless, of course, they make a major breakthrough in passing through the GPU without having to blacklist it first.
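For anyone wondering, the blacklisting in question is the usual vfio dance on a Linux host: have vfio-pci claim the card by ID before nouveau/nvidia can, which is exactly why the host loses the card. A sketch with made-up IDs (pull the real ones from lspci -nn):

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1d01,10de:0fb8   # example GPU + its HDMI audio function, not real IDs
blacklist nouveau

Then rebuild the initramfs and reboot.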
So, whatever issue I had with the performance of VMs under bhyve hasn't reared its head since I first noticed it. I don't know if the problem was disk I/O, network I/O, or something else. Maybe I just hadn't allocated enough memory to the VM for the tasks it had to do when it was first set up. I haven't investigated too closely because the problem hasn't recurred.
Anyway, it looks like FreeNAS 11.1, which should be released late this month or early next, will include the ability to attach a specific network interface to a VM. In my case, I want some VMs to have a tap interface for internal LAN use, and others to have a tap interface on a second physical interface so those VMs can have public IPs. It's doable now, but it's a bit awkward to set up. Not that I'm scared of the command line, but FreeNAS configuration works best from the GUI...
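The by-hand version, for anyone curious, is roughly this (igb0 being my physical NIC - names will differ):

# create a tap for the VM and bridge it onto the physical interface
ifconfig tap1 create
ifconfig bridge1 create
ifconfig bridge1 addm igb0 addm tap1 up

and then you point the VM's network device at tap1.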
Hey, I'm trying to update ESXi 6.0.0 to 6.0 Update 2 from the CLI, but when I attempt it I get the following error:
Failed to remediate the host: (None, "Failed to save Bootcfg to file /altbootbank/boot.cfg: [Errno 30] Read-only file system: '/altbootbank/boot.cfg.tmp'")
I have ESXi currently running from a USB stick - is this why it's complaining about a read-only file system? I'm fairly new to upgrading ESXi, so any assistance would be greatly appreciated.
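For reference, the command I'm attempting is along these lines (profile name copied from the depot listing, so treat it as an example):

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.0.0-20160302001-standard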
Never mind, I sorted it - ended up grabbing the HP ISO and upgrading from that.
I've noticed some interesting behaviour with bhyve. I'm running Scientific Linux 7 on FreeNAS/bhyve and also on CentOS/KVM. The two virtualisation hosts are connected to the same switch on our network.
The SL7 instances on KVM fetch DHCP leases just fine and behave normally. On bhyve, the SL7 instances occasionally lose their DHCP lease, leaving the VM uncontactable and with a high system load. Logging in via VNC and running ifdown/ifup fixes the problem, as does configuring static addressing.
I'm not sure if it's a problem with my pfSense DHCP server, with bhyve and the VirtIO network device I've assigned to the VM, or a configuration issue within SL7. But it's interesting, and something I haven't encountered before.
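For the record, the workaround from the VNC console is nothing fancier than bouncing the interface so a fresh lease gets requested (eth0 being the VirtIO NIC inside SL7):

# bounce the NIC; the DHCP lease is re-requested on the way back up
ifdown eth0 && ifup eth0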
Hmm, I think I'm in the correct thread... I'm setting up pfSense on my Hyper-V host, and I want to use SR-IOV. Yes, I know pfSense works fine with abstracted virtual switches, but this is 2017 and SR-IOV has been a standard for a pretty long time now - it's more efficient, and since pfSense is getting two dedicated hardware ports anyway, I figure giving it the lowest-level access possible is for the best.
The problem is, I can't find anything, anywhere, that comes even remotely close to discussing this, let alone implementing it - not for pfSense, nor even for FreeBSD. Apparently FreeBSD has had SR-IOV support baked in for years, but I can't find more than off-hand references - again, nothing I can actually work with.
So does anyone have any suggestions for how to go about this? Or am I just spitting in the wind here?
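For context, the Hyper-V host side at least looks simple enough from the PowerShell docs - something like this, with my own switch/VM names (untested, obviously):

# create an SR-IOV-capable external switch, then give the VM's NIC an IOV weight
New-VMSwitch -Name "WAN" -NetAdapterName "NIC2" -EnableIov $true
Set-VMNetworkAdapter -VMName "pfSense" -IovWeight 100

It's the FreeBSD/pfSense end - whether the guest will actually drive the VF - that I can find nothing on.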
I've been running virtualised pfSense and FreeNAS for a few years with PCI passthrough of a dual-port NIC (pfSense) and an LSI HBA (FreeNAS), but not on Hyper-V; I use Xen on CentOS. My pfSense box uses a dual NIC so I can physically separate the external and internal networks through the host.
In Xen, SR-IOV is largely useless because you can't live-migrate VMs that use it, and the Xen netback driver for the virtual bridge interface between VMs allows >10 Gbps throughput anyway. I get native performance from my ZFS VM to other VMs with volumes mounted over iSCSI (Windows) or NFS (Linux), although it does burn up a few CPU cores on the hypervisor.
Having said that, I don't know much about Hyper-V. There is Xen PCI backend support for VT-d devices baked into FreeBSD 10 (i.e. current versions of pfSense/FreeNAS), so it all just works - which probably isn't the case for Hyper-V?
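For comparison, the passthrough itself on Xen is a couple of lines per domU config once the devices are assignable (PCI addresses below are from my box - substitute your own from lspci):

# pfSense domU config: hand over both ports of the dual NIC
pci = [ '0000:03:00.0', '0000:03:00.1' ]

with the devices detached from dom0 beforehand via xl pci-assignable-add (or pciback hiding at boot).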