The enormous Virtualisation help thread

Discussion in 'Other Operating Systems' started by elvis, May 11, 2011.

  1. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,737
    Location:
    Canberra
    NFS-based datastores? On your fancy Oracle kit?
     
  2. Ck21

    Ck21 Member

    Joined:
    Jan 2, 2011
    Messages:
    582
    Location:
    SA
    BIOS shows up, can't find a boot device, loads up PXE, then loops back to "can't find boot device".

    The VM was created using the virt-manager GUI frontend. The guides I was going off were using a bash script to set up the VM and start it, and I hate scripts... like, really hate them... so I wanted to see what would happen using a GUI frontend for KVM/QEMU, and as I expected, it failed. I'm assuming that when I get around to writing the script (errr, "copy/paste" lol :D:leet::Pirate::thumbup:) it should boot, and it may or may not bluescreen depending on whether Windows loads the correct driver specified in the kvm migration.reg fix file found all over the net. :Paranoid:
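
    For what it's worth, the scripted setups those guides use boil down to something like the virt-install sketch below. The VM name, memory size and disk path here are made up; the point is that bus=ide and model=e1000 present emulated hardware a migrated Windows install already has drivers for, which is the same problem the migration.reg fix is working around.

        # Rough sketch only - adjust name, sizing and disk path to suit
        virt-install \
            --name win7-migrated \
            --ram 4096 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/win7.img,bus=ide \
            --network bridge=br0,model=e1000 \
            --graphics spice \
            --os-variant win7 \
            --import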
     
  3. Unframed

    Unframed Member

    Joined:
    Mar 30, 2010
    Messages:
    9,121
    Location:
    Hella south west
    I had a lot of woes back in the day with Windows guests on KVM. I got mine working by going through the man page; have a thorough read and you should get there. It may also work to boot from the Windows disc and use bootrec to rebuild your boot loader for EFI or BIOS, depending on your preference.
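
    If you go the rescue-disc route, the usual sequence from the recovery console looks something like this (a sketch only, for a BIOS/MBR install; an EFI install would use bcdboot against the EFI system partition instead):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd
        rem For EFI, after assigning a letter (e.g. S:) to the EFI partition:
        rem bcdboot C:\Windows /s S: /f UEFI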
     
  4. Ck21

    Ck21 Member

    Joined:
    Jan 2, 2011
    Messages:
    582
    Location:
    SA
    I was just looking up some BCDEdit reference material to do exactly that :thumbup:

    Although, reading through a bit of oVirt has got me intrigued. I might try getting it up and running through that first, and if I can't, I'll go back to learning KVM/QEMU.

    The reason being: in KVM/QEMU, all devices in an IOMMU group get assigned to the VM together, so for this to work both of my GPUs need to sit in different groups (different root ports). The plan is to have my Radeon 7750 in the primary slot with Linux using it as the primary display, and pass the GTX 970 through to the VM, outputting via DVI, since my screen supports multiple inputs and switching between them at will via the OSD controls.

    That said, my understanding is that on Sandy/Ivy/Haswell mainstream CPUs, when dual GPUs are used the single x16 PCIe link is literally split in half from the same root port in the PCIe controller, whereas on E5s or any HEDT CPU (sockets 2011/2011-3) the slots are spread out amongst different root ports.

    Some people have got it to work on mainstream i7s (non-K CPUs), but it depends heavily on the motherboard and how the manufacturer wired up the PCIe lanes.
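
    To check whether the two cards actually land in separate groups before committing to anything, a quick sysfs walk like this (a sketch, nothing oVirt- or distro-specific; it assumes the kernel was booted with intel_iommu=on so the groups are populated) lists every IOMMU group and the devices stuck together in it:

        #!/bin/bash
        # List every IOMMU group and the PCI devices that share it
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -n "    "
                lspci -nns "${d##*/}"
            done
        done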
     
  5. elvis (OP)

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    38,442
    Location:
    Brisbane
    Currently, yes. iSCSI in testing, although NFS and QCOW2/QED give us just about everything we need so far.

    The ZS3-2 array also does Fibre Channel exports, if that's your thing. We're moving away from FC altogether (other than for connecting tape backup units, but there is no switching, just a straight connection). We're currently serving multiple 20GbE NFS shares (40GbE next year), all over jumbo frames (4K/8K NFS blocks fit in a single frame/packet), and the performance is amazing.
     
  6. Urbansprawl

    Urbansprawl Member

    Joined:
    May 5, 2003
    Messages:
    565
    Can I ask how many NICs you use per hypervisor, and their rough roles? We've gone to 10GBase-T on Arista (from Juniper), which has lowered costs a lot, but they're still such that eliminating two ports per hypervisor would save a lot of $$.
     
  7. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,737
    Location:
    Canberra
    Why do you need iSCSI or FC at all?
     
  8. elvis (OP)

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    38,442
    Location:
    Brisbane
    We use a mix of Myricom and Intel 10GbE fibre NICs. So far a single NIC per physical node is enough to meet our needs.

    10GbE fibre is standard for us across all of our storage, and about 10% of our workstations. We're in the media biz, so pushing lots of data about is standard fare for us (particularly now that everything we do is UHD or 4K uncompressed). We're sitting on 2.5PB of production storage between our sites, with another 700TB of nearline.

    Anything that's not part of a cluster is virtualised. So that's all of our application servers, license servers, monitoring tools, and other utility stuff. We outsource email to Google Apps (because every man and his dog knows my opinion of email).

    iSCSI can be nice to eliminate a few levels of mess. When you're writing virtual file system -> virtual disk -> QCOW2/QED -> NFS -> file -> file system -> physical disk, you can shave a few layers off by putting iSCSI in the middle, and with it a bunch of fsyncing through multiple caches that doesn't always help things.

    VMware's iSCSI implementation is terrible, so most big VMware users have never seen the difference. Under Linux/KVM/QEMU it can give you some latency improvements over the above scenario if you desperately require them. The old "don't virtualise large databases" rule isn't always a rule once you see how far you can push iSCSI under KVM.

    But again, QCOW2/QED on NFS is fast enough for us so far. Our performance bottlenecks are elsewhere (bare metal clustering). File based containers are also nice and simple, and don't confuse the helpdesk guys as much.

    FC is dead tech IMHO. It's not moving fast enough to compete. 10GbE is dirt cheap, and faster standards (40/56/100GbE) are out at the high end. Our storage pumps out data over 8x 10GbE ports currently (2 heads driving half the storage each, in active/active for failover). We're already pushing the vendor to offer 8x 40GbE by late 2016 (no doubt we'll be expected to deliver in 8K by that point, given the silly hype that the market seems to be chasing).
     
  9. Glide

    Glide Member

    Joined:
    Aug 22, 2002
    Messages:
    1,151
    Location:
    Was: Sydney Now: USA
    Anyone using RHEV, or decided against RHEV in favour of straight-up oVirt?
     
  10. m0n4g3

    m0n4g3 Member

    Joined:
    Aug 5, 2009
    Messages:
    3,643
    Location:
    Perth, WA
    Do you have any recommended NFS tweaks, or are you pretty much running a standard NFS setup?
     
  11. elvis (OP)

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    38,442
    Location:
    Brisbane
    * Better-than-realtime compression (lzo/lzjb/lz4) on your underlying FS
    * noatime everywhere (FS layer, NFS layer)
    * the biggest NFS rsize/wsize you can do (ours are 1MB; see the example mount line below)
    * jumbo frames
    * NFSv3, unless you absolutely positively need NFSv4 for some reason (security compliance, for example).
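
    On the client side, those options end up looking roughly like the fstab line below. This is a sketch only; the server name and export path are made up, and the compression and jumbo-frame pieces live on the filer and the network rather than in the mount options.

        # Hypothetical host/export; 1MB rsize/wsize, noatime, NFSv3 over TCP
        storage01:/export/vmstore  /var/lib/libvirt/images  nfs  vers=3,rsize=1048576,wsize=1048576,noatime,hard,tcp  0 0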

    oVirt, because we're cheap bastards. Would love RHEV, but no budget. :(
     
    Last edited: Dec 8, 2015
  12. m0n4g3

    m0n4g3 Member

    Joined:
    Aug 5, 2009
    Messages:
    3,643
    Location:
    Perth, WA
    Thanks! Pretty much what I normally do now except for the realtime compression and jumbo frames.
     
  13. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,737
    Location:
    Canberra
    Careful with jumbos.

    It just takes one device to fucking ruin your day/week/month, and the problems don't necessarily appear for days/weeks, depending on your network. Most storage vendors flat out don't recommend them anymore, simply because the general performance gain is somewhere between 0-15% and you literally lose the entirety of that in a single outage.

    elvis' usage is *highly* specific and warrants it.
     
  14. elvis (OP)

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    38,442
    Location:
    Brisbane
    The biggest gotcha with jumbo frames is how to traverse internal routers/VLANs.

    Pro tip: check your MSS settings as well as your MTU settings on whatever device sits between your networks.

    I set the MSS to 1460 (1500 minus 40 bytes of IP and TCP headers) on our routers (previously Juniper devices, upgraded to Linux boxes a few years back when they went EOL and the vendor wanted squillions to keep them in support). That sorts us nicely for inter-VLAN and WAN traffic.

    We have MTU 9000 on both our production artist network and our VM storage network. Everything else is MTU 1500.
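
    On a Linux box doing that routing, the two knobs boil down to something like this (a sketch only; the interface name is made up):

        # Hypothetical interface facing the MTU 9000 storage/artist VLAN
        ip link set dev eth1 mtu 9000
        # Clamp TCP MSS on routed traffic so jumbo-frame hosts don't
        # negotiate segments the MTU 1500 side can't carry
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1460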

    But yes, do be careful. Good planning/testing required.
     
  15. m0n4g3

    m0n4g3 Member

    Joined:
    Aug 5, 2009
    Messages:
    3,643
    Location:
    Perth, WA
    Only enabled jumbo frames on my storage network. Everything else is normal since the performance requirements aren't there; I'm just a stickler for making sure my VMs are running in tip-top shape :)
     
  16. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,737
    Location:
    Canberra
    Have you benched your VMs with and without jumbo frames?
     
  17. m0n4g3

    m0n4g3 Member

    Joined:
    Aug 5, 2009
    Messages:
    3,643
    Location:
    Perth, WA
    Unfortunately I have not, but you raise a good point. I shall endeavour to test with jumbos on and off.
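
    A quick way to compare would be an iperf3 pass between a guest and the storage box at each MTU (a sketch; it assumes iperf3 on both ends, with 10.0.0.10 standing in for the storage host):

        # On the storage host
        iperf3 -s
        # From the VM, once with the storage NIC at MTU 1500 and once at 9000
        iperf3 -c 10.0.0.10 -t 30 -P 4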
     
  18. elvis (OP)

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    38,442
    Location:
    Brisbane
    As mentioned, where I work has very specific requirements. Graphing via LibreNMS shows us that our average (and indeed our 95th percentile) frame size across our production network sits around the 6000 byte mark.

    We use a lot of NFSv3, and constantly work on enormous files and media that get dragged from one end of the business to the other through a complex pipeline. Jumbo frames end-to-end are a must for us, and give us a very clear benefit (measured and proven over many petabytes of data transfer).

    For VM users, jumbo frames can be very beneficial on your storage network with either iSCSI or NFS as your backing store. NSanity's comments hold: be careful, and know what you're doing. But if you've got a completely separate storage network and no need to route from it to anywhere else, then you're in a good spot to at least try this out.
     
  19. Primüs

    Primüs Member

    Joined:
    Apr 1, 2003
    Messages:
    3,377
    Location:
    CFS
    I've been using oVirt for a while now. Not many problems; a few things are different from VMware, which is where I came from, but free ESXi doesn't do full clustering and I didn't want to spend the money on proper VMware licensing.
     
  20. m0n4g3

    m0n4g3 Member

    Joined:
    Aug 5, 2009
    Messages:
    3,643
    Location:
    Perth, WA
    Anyone using Proxmox? It seems very straightforward coming from VMware and Hyper-V.

    I use it at home and it's really good IMHO; it probably handles external storage a little better than VMware.
     
