
ESXi 5.0


winstontj

Member
Posts
68
#1
I've got it working. It's still quite buggy and throws LOTS of errors, etc., but it is 100% working.

I'm running 65 VMs for load testing, migration, and failover testing. The only thing I can't do (yet) is Volume License activation/deactivation-type testing, because this W8 build is self-activating.

How I did it:

1. Build the VM in Workstation 8.0 (I edited the config.xml file so that it says "windows 8 x64", etc.)

2. Migrate from Workstation 8 (with VMware Tools installed) over to XenServer

3. Migrate from XenServer over to Xen XCP

4. Migrate from Xen XCP over to vSphere 5 / ESXi 5

No idea why or how it worked, but it did. I had read a few blogs from people saying they got it to migrate directly from XenServer over to ESXi 5, but I couldn't make that happen. I tried building it on XenServer and just installing VMware Tools, but that didn't work; I think it really liked being on a hypervisor where VMware Tools was installed and running.
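For the first step, the guest OS type in a Workstation VM normally lives in the VM's `.vmx` configuration file. A minimal sketch of the relevant keys, assuming that is the file being edited (the exact `guestOS` identifier VMware uses for 64-bit Windows 8 is an assumption here):

```ini
# Fragment of a Workstation 8 .vmx file (virtual hardware version 8).
# "windows8-64" is assumed to be the 64-bit Windows 8 guest identifier;
# a hypervisor that doesn't recognise it may need a fallback like "windows7-64".
virtualHW.version = "8"
guestOS = "windows8-64"
```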

Also, FWIW, I kept EVERYTHING on the same hardware: built on Workstation 8 on a Dell T5500, migrated from Workstation 8 to XenServer on another T5500 (identical RAM, CPU, HDD, etc. — the same type and brand of parts, not just "matching specs"), over to Xen on a third T5500, and finally over to ESXi 5 on a fourth T5500.

I had problems even changing CPU types. I could only get it to work if I kept exactly the same hardware across the board (FWIW that was Xeon X5680 CPUs, Samsung 8 GB ECC DDR3 RAM, and a single old-school 300 GB Raptor, on Dell's T5500 BIOS version A11).

If you PM me, I can build another one on a really small HDD and then host the files (is that legal/legit?) for anyone who wants to try a shortcut.

I'm now able to migrate W8 across heterogeneous HA clusters and resource pools, and even directly from one machine to the next (like a T5500 with a Xeon 5600-series CPU to a T5500 with a 5500-series CPU, or even to a T7400 with dual 5400-series Xeons).

Again, assuming this is legal and legit (I don't see why not), I have a ton of old 40-80 GB HDDs and 32 GB USB sticks. If you want to PayPal me something reasonable, I'm happy to back up a clean VM install and post it to you.


I'm having immense trouble with vMotion: I can't seem to migrate a W8 VM from one HA cluster over to another HA cluster (live, with no interruption to the end user). I have no idea why, but if anyone has some advice I'm all ears.

What I get is essentially a system configuration change and/or driver install that requires a reboot, which is really annoying.

Also, for the guy who asked about >2-socket support in W8: everything blows up and goes straight to hell when I try to migrate the test machines with 4-8 sockets. The VM gets past POST but then BSODs with a random watchdog error and auto-shutdown. IDK if that helps you at all; if you do a physical install it **seems** as though that would work, but YMMV.
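On the socket question: ESXi can present a given vCPU count as fewer sockets with multiple cores each, which keeps the guest within the 2-socket limit of client editions of Windows. A hedged sketch of the relevant `.vmx` keys (`cpuid.coresPerSocket` is a real ESXi setting; the values here are illustrative, not taken from the setup above):

```ini
# Present 8 vCPUs as 2 sockets x 4 cores instead of 8 single-core sockets,
# so a client edition of Windows (2-socket maximum) sees a supported topology.
numvcpus = "8"
cpuid.coresPerSocket = "4"
```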

Happy to answer questions - hopefully some of mine will get answered too.
 

My Computer

System One

  • OS
    XP64, W7x64, W8x64, 2k8r2, Lucid(server) + a few others
    System Manufacturer/Model
    Dell T5500 (4) + T7500 (4)
    CPU
    2x Xeon X5680 or 2x W5580
    Memory
    72gb in the T5500's & 192GB in the T7400's
    Graphics Card(s)
    NVIDIA GeForce 9800's
    Monitor(s) Displays
    EIZO, NEC or Dell (dual or quad on Ergotron)
    Hard Drives
    4x WD1500HLHX in RAID6
    Keyboard
    Bloomberg
    Internet Speed
    Cogent 100mbps + TWC 50/5 backup
    Other Info
    BackBlaze Storage Array, 48x 2TB in RAID6 = 96TB

I offer cheap virtualization solutions, so the machines & HW are usually over the top.

jimbo45

New Member
VIP Member
Guru
Hafnarfjörður IS

Posts
4,373
#2
Hi there
VMware ESXi works well even on a "white box"; however, there is one MAIN prerequisite that typical domestic (i.e., home computer) mobos don't meet: you need a supported LAN card. The bog-standard ones such as Gigabyte's don't work, and your ESXi install will fail at the first hurdle.

A cheapish Intel PRO LAN card will work wonders. For ESXi you should also have a minimum of two LAN ports on your adaptor.

You'll probably need some sort of decent graphics card too: not for ESXi itself, which is managed from remote consoles, but to give your VMs access to proper graphics so you can get things like DVD/movie playback working in them. VMware ESXi is FREE, BTW, and if you set it up right you get the very useful "PCI passthrough" feature, which means the actual PCI card hardware (graphics, etc.) can be passed to the VM as real hardware.

ESXi itself is a tiny OS; you can easily boot it from a 2 GB USB stick. VMs running under ESXi will run at around 95% of native speed; in fact you probably wouldn't be able to tell the difference.

There's a site listing "white box" requirements for ESXi, but if you are going to roll your own, I'd suggest you buy a server-type mobo, or even a cheap server such as a basic Lenovo server, and customise it yourself.


Don't forget you will need an external laptop/computer to control ESXi. Otherwise, running and using VMs is just like running them on VMware Workstation, except they are all running in "background" mode.

Cheers
jimbo
 

My Computer

System One

  • OS
    Linux Centos 7, W8.1, W7, W2K3 Server W10
    Computer type
    PC/Desktop
    Monitor(s) Displays
    1 X LG 40 inch TV
    Hard Drives
    SSD's * 3 (Samsung 840 series) 250 GB
    2 X 3 TB sata
    5 X 1 TB sata
    Internet Speed
0.12 Gb/s (120 Mb/s)

winstontj

Member
Posts
68
#3
You can still access ESXi via the CLI and build a VM that way; then install the vSphere Client, or once you get internet access working you're good to go.
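For reference, a sketch of registering and powering on a VM from the ESXi shell using `vim-cmd` (the datastore path, VM name, and Vmid are hypothetical, and this obviously needs an ESXi host, so it's illustration only):

```shell
# Register an existing .vmx with the host (path is hypothetical)
vim-cmd solo/registervm /vmfs/volumes/datastore1/w8test/w8test.vmx

# List registered VMs and note the Vmid of the new one
vim-cmd vmsvc/getallvms

# Power it on by Vmid (assuming it came back as 10)
vim-cmd vmsvc/power.on 10

# Check its power state
vim-cmd vmsvc/power.getstate 10
```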

All my stuff is fully licensed, not the free trial. The free trial is limited in what you can do (things like device passthrough), and it's only a 60- or 90-day trial.

All of my VMs get the maximum of 128 MB of video RAM, and that's that. They're accessed only via remote desktop and have no device passthrough enabled at all (except in rare circumstances). Device passthrough makes it impossible to use things like vMotion, failover, or high-availability clusters unless all devices are exactly the same. For some applications it's OK, but for others device passthrough is a horrible idea.
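The 128 MB video RAM cap mentioned above maps to a single `.vmx` setting, `svga.vramSize`, which takes a value in bytes (128 MB = 134217728):

```ini
# Cap the VM's video memory at 128 MB (value is in bytes)
svga.vramSize = "134217728"
```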

Have you done the DVD/movie-playback thing? Is there a way to split a dual-DVI video card between multiple VMs?
 


jimbo45

New Member
VIP Member
Guru
Hafnarfjörður IS

Posts
4,373
#4
Hi there
For typical corporate virtual servers, passthrough isn't likely to be a good idea. IMO, ALL servers should be virtual machines these days, but that's a whole other story.

However, for testing different hardware configurations, having PCI passthrough is actually a very good idea. DVD playback works fine on a W7 machine running on VMware Workstation 8 (graphics acceleration needs to be turned on). I'll try converting this VM to an ESXi one this weekend and test it.

Not sure about splitting the dual-DVI video card between multiple VMs. If there's a setting in the card hardware or BIOS itself to operate it as two separate independent devices, then it should work; otherwise I doubt it.

If you don't need to use the VMs at the same time, a KVM switch should work: switch the hardware between the two computers.

You might find a KVM switch that allows concurrent access, but I don't know of any.

Sorry I can't give you more help than that.

Cheers
jimbo
 


winstontj

Member
Posts
68
#5
I was asking more about splitting video cards for the GPU compute functions rather than the video display. I haven't found a way to share the same device between two VMs (pass one device through to two VMs).


I use IP KVMs, so that's not a big deal; between the IP KVMs and the consoles, viewing the VMs is no problem.
 

