Private Cloud Hypervisors

January 16, 2015

Microsoft Windows Server 2012 R2 / System Center 2012 R2 and VMware vSphere 5.5 both offer a wide range of enterprise-grade virtualization features, and they are the two most popular hypervisor platforms for building private clouds. Here we compare the main features of both products and try to work out the best way to use their components according to our requirements.

VMware or Microsoft?

We organized the comparison into the following sections:

  • Licensing
  • Virtualization Scalability
  • VM Portability, High Availability and Disaster Recovery
  • Storage
  • Networking
  • Guest Operating Systems


Licensing: Licensing is a major factor in any software purchasing decision.

Columns: Feature – Microsoft Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions – VMware vSphere 5.5 Enterprise Plus + vCenter Server 5.5 – Notes
# of Physical CPUs per License 2 1 With Microsoft, each Datacenter Edition license provides licensing for up to 2 physical CPUs per Host. Additional licenses can be “stacked” if more than 2 physical CPUs are present. With VMware, a vSphere 5.5 Enterprise Plus license must be purchased for each physical CPU. This difference in CPU licensing is one of the factors that can contribute to increased licensing costs. In addition, a minimum of one license of vCenter Server 5.5 is required for vSphere deployments.
# of Managed OSE’s per License Unlimited Unlimited Both solutions provide the ability to manage an unlimited number of Operating System Environments per licensed Host.
# of Windows Server VM Licenses per Host Unlimited 0 With VMware, Windows Server VM licenses must still be purchased separately. In environments virtualizing Windows Server workloads, this can contribute to a higher overall cost when virtualizing with VMware. VMware does include licenses for an unlimited # of VMs running SUSE Linux Enterprise Server per Host.
Includes Anti-virus / Anti-malware protection Yes - System Center Endpoint Protection agents included for both Host and VMs with System Center 2012 R2 Yes – Includes vShield Endpoint Protection  which deploys as EPSEC thin agent in each VM + separate virtual appliance.
Includes full SQL Database Server licenses for management databases Yes – Includes all needed database server licensing to manage up to 1,000 hosts and 25,000 VMs per management server. No – Must purchase additional database server licenses to scale beyond managing 100 hosts and 3,000 VMs with vCenter Server Appliance. VMware licensing includes an internal vPostgres database that supports managing up to 100 hosts and 3,000 VMs via vCenter Server Appliance. See VMware vSphere 5.5 Configuration Maximums for details.
Includes licensing for Enterprise Operations Monitoring and Management of hosts, guest VMs and application workloads running within VMs. Yes – Included in System Center 2012 R2 No – Operations Monitoring and Management requires separate license for vCenter Operations Manager or upgrade to vSphere with Operations Management
Includes licensing for Private Cloud Management capabilities – pooled resources, self-service, delegation, automation, elasticity, chargeback/showback Yes – Included in System Center 2012 R2 No – Private Cloud Management capabilities require additional cost of VMware vCloud Suite.
Includes management tools for provisioning and managing VDI solutions for virtualized Windows desktops. Yes – Included in the RDS role of Windows Server 2012. No – VDI management requires additional cost of VMware Horizon View.
Includes web-based management console Yes – Included in System Center 2012 App Controller using web browsers supporting Silverlight 5, and free Windows Azure Pack for multi-tenant self-service VM management using web browsers supporting HTML5/JavaScript. Yes – Included in vSphere Web Client using IE 8,9,10, Firefox and Chrome.
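To make the per-CPU licensing difference above concrete, here is a back-of-envelope sketch in Python. The host and CPU counts are hypothetical; the rules (one Datacenter license covers up to 2 physical CPUs and can be stacked, while vSphere Enterprise Plus needs one license per physical CPU) come from the table above:

```python
import math

def datacenter_licenses(cpus_per_host: int, hosts: int) -> int:
    # Each Windows Server Datacenter license covers up to 2 physical CPUs;
    # licenses can be "stacked" on hosts with more than 2 CPUs.
    return math.ceil(cpus_per_host / 2) * hosts

def vsphere_licenses(cpus_per_host: int, hosts: int) -> int:
    # vSphere Enterprise Plus requires one license per physical CPU.
    return cpus_per_host * hosts

hosts, cpus = 8, 4  # hypothetical: eight 4-socket hosts
print(datacenter_licenses(cpus, hosts))  # 16 Datacenter licenses
print(vsphere_licenses(cpus, hosts))     # 32 vSphere licenses
```

Remember that the vSphere total also needs at least one vCenter Server 5.5 license on top of the per-CPU count.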


Virtualization Scalability: Scalability limits will shape your decision.

Columns: Feature – Microsoft Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions – VMware vSphere 5.5 Enterprise Plus + vCenter Server 5.5 – Notes
Maximum # of Logical Processors per Host 320 320 With vSphere 5.5 Enterprise Plus, VMware has “caught up” to Microsoft in terms of Maximum # of Logical Processors supported per Host.
Maximum Physical RAM per Host 4TB 4TB With vSphere 5.5 Enterprise Plus, VMware has “caught up” to Microsoft in terms of Maximum Physical RAM supported per Host.
Maximum Active VMs per Host 1,024 512
Maximum Virtual CPUs per VM 64 64 When using VMware FT, only 1 Virtual CPU per VM can be used.
Hot-Adjust Virtual CPU Resources to VM Yes - Hyper-V provides the ability to increase and decrease Virtual Machine limits for processor resources on running VMs. Yes - Can Hot-Add virtual CPUs for running VMs on selected Guest Operating Systems and adjust Limits/Shares for CPU resources. VMware Hot-Add CPU feature requires supported Guest Operating System. Check VMware Compatibility Guide for details.VMware Hot-Add CPU feature not supported when using VMware FT
Maximum Virtual RAM per VM 1TB 1TB When using VMware FT, only 64GB of Virtual RAM per VM can be used.
Hot-Add Virtual RAM to VM Yes ( Dynamic Memory ) Yes  Requires supported Guest Operating System.
Dynamic Memory Management Yes ( Dynamic Memory ) Yes ( Memory Ballooning ) Note that memory overcommit is not supported for VMs that are configured as an MSCS VM Guest Cluster. VMware vSphere 5.5 also supports another memory technique: Transparent Page Sharing (TPS).  While TPS was useful in the past on legacy server hardware platforms and operating systems, it is no longer effective in many environments due to modern servers and operating systems supporting Large Memory Pages (LMP) for improved memory performance.
Guest NUMA Support Yes Yes NUMA = Non-Uniform Memory Access.  Guest NUMA support is particularly important for scalability when virtualizing large multi-vCPU VMs on Hosts with a large number of physical processors.
Maximum # of physical Hosts per Cluster 64 32
Maximum # of VMs per Cluster 8,000 4,000
Virtual Machine Snapshots Yes - Up to 50 snapshots per VM are supported. Yes - Up to 32 snapshots per VM chain are supported, but VMware recommends only 2-to-3. In addition, VM Snapshots are not supported for VMs using an iSCSI initiator.
Integrated Application Load Balancing for Scaling-Out Application Tiers Yes – via System Center 2012 R2 VMM No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.
Bare metal deployment of new Hypervisor hosts and clusters Yes – via System Center 2012 R2 VMM Yes - VMware Auto Deploy and Host Profiles supports bare metal deployment of new hosts into an existing cluster, but does not support bare metal deployment of new clusters.
Bare metal deployment of new Storage hosts and clusters Yes - via System Center 2012 R2 VMM No
Manage GPU Virtualization for Advanced VDI Graphics Yes – Server GPUs can be virtualized and pooled across VDI VMs via RemoteFX and native VDI management features in RDS role. Yes - via vDGA and vSGA features, but requires separate purchase of VMware Horizon View to manage VDI desktop pools.
Virtualization of USB devices Yes – Client USB devices can be passed to VMs via Remote Desktop connections. Direct  redirection of USB storage from Host possible with Windows-to-Go certified devices.  Direct redirection of other USB devices possible with third-party solutions. Yes – via USB Pass-through support.
Virtualization of Serial Ports Yes - Virtual Machine Serial Ports can be connected to Named Pipes on a host. Named Pipes can then be connected to Physical Serial Ports on a host using the free PipeToCom tool. Live Migration of VMs using virtualized serial ports can be provided via 3rd party software, such as Serial over Ethernet and Network Serial Port, or 3rd party hardware, such as Digi PortServer TS and Lantronix UDS1100. Yes – Virtual Machine Serial Ports can be connected to Named Pipes, Files or Physical Serial Ports on a host. vMotion of VMs using virtualized serial ports can be supported when using 3rd party virtual serial port concentrators, such as Avocent ACS v6000. Note that the ability to perform Virtual Machine Live Migration (or vMotion) for VMs with virtualized serial ports requires a third-party option on both solutions compared.
Minimum Disk Footprint while still providing management of multiple virtualization hosts and guest VMs ~800KB – Micro-kernelized hypervisor (Ring -1); ~5GB – Drivers + Management (Parent Partition – Ring 0 + 3). Microsoft Hyper-V uses a modern micro-kernelized hypervisor architecture, which minimizes the components needed within the hypervisor running in Ring -1, while still providing strong scalability, performance, VM security, Virtual Disk security and broad device driver compatibility. ~155MB – Monolithic hypervisor w/ Drivers (Ring -1 + 0); ~4GB – Management (vCenter Server Appliance – Ring 3). VMware vSphere uses a larger, classic monolithic hypervisor approach, which incorporates additional code, such as device drivers, into the hypervisor. This approach can make device driver compatibility an issue in some cases, but offers increased compatibility with legacy server hardware that does not support Intel-VT / AMD-V hardware-assisted virtualization. Microsoft and VMware each use different approaches for hypervisor architecture, and each offers different advantages as noted here. See “When it comes to hypervisors, does size really matter?” for a more detailed real-world comparison. Patch management frequently comes up when discussing disk footprints; see “Orchestrating Patch Management” for more details on this area.
Boot from Flash Yes - Supported via Windows-to-Go devices. Yes
Boot from SAN Yes - can leverage included iSCSI Target Server or 3rd party iSCSI / FC storage arrays using software or hardware boot providers. Yes – can leverage 3rd party iSCSI / FC storage arrays using software or hardware boot providers.
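The scalability maximums above lend themselves to a quick sanity check. The sketch below (hypothetical helper names; the limits themselves are taken from the table) tests whether a planned cluster fits within each platform's documented maximums:

```python
# Documented maximums from the comparison table above.
LIMITS = {
    "hyperv_2012r2": {"hosts_per_cluster": 64, "vms_per_cluster": 8000,
                      "active_vms_per_host": 1024, "vcpu_per_vm": 64,
                      "vram_gb_per_vm": 1024},
    "vsphere_55":    {"hosts_per_cluster": 32, "vms_per_cluster": 4000,
                      "active_vms_per_host": 512, "vcpu_per_vm": 64,
                      "vram_gb_per_vm": 1024},
}

def fits(platform, hosts, vms_per_host, vcpus, vram_gb):
    # True if the planned design stays within every documented maximum.
    lim = LIMITS[platform]
    return (hosts <= lim["hosts_per_cluster"]
            and vms_per_host <= lim["active_vms_per_host"]
            and hosts * vms_per_host <= lim["vms_per_cluster"]
            and vcpus <= lim["vcpu_per_vm"]
            and vram_gb <= lim["vram_gb_per_vm"])

# Hypothetical design: 48 hosts, 150 VMs/host, 8 vCPUs and 64GB vRAM per VM.
print(fits("hyperv_2012r2", 48, 150, 8, 64))  # True
print(fits("vsphere_55", 48, 150, 8, 64))     # False: 48 hosts exceeds the 32-host cluster limit
```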


VM Portability, High Availability and Disaster Recovery: High availability is one of the core things we need from a private cloud.

Columns: Feature – Microsoft Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions – VMware vSphere 5.5 Enterprise Plus + vCenter Server 5.5 – Notes
Live Migration of running VMs Yes – Unlimited concurrent Live VM Migrations.  Provides flexibility to cap at a maximum limit that is appropriate for your datacenter architecture. Particularly useful when using RDMA-enabled NICs. Yes – but limited to 4 concurrent vMotions per host when using 1GbE network adapters and 8 concurrent vMotions per host when using 10GbE network adapters.
Live Migration of running VMs without shared storage between hosts Yes – Supported via Shared Nothing Live Migration Yes – Supported via Enhanced vMotion.
Live Migration using compression of VM memory state Yes – Supported via Compressed Live Migration, providing up to a 2X increase in Live Migration speeds. No
Live Migration over RDMA-enabled network adapters Yes – Supported via SMB-Direct Live Migration, providing up to a 10X increase in Live Migration speeds and low CPU utilization. No
Live Migration of VMs Clustered with Windows Server Failover Clustering (MSCS Guest Cluster) Yes – by configuring relaxed monitoring of MSCS VM Guest Clusters. No – per the documented vSphere MSCS Setup Limitations
Highly Available VMs Yes – Highly available VMs can be configured on a Hyper-V Host cluster.  If the application running inside the VM is cluster aware, a VM Guest Cluster can also be configured via MSCS for faster application failover times. Yes – Supported by VMware HA, but with the limitations listed above when using MSCS VM Guest Clusters.
Failover Prioritization of Highly Available VMs Yes – Supported by clustered priority settings on each highly available VM. Yes
Affinity Rules for Highly Available VMs Yes – Supported by preferred cluster resource owners and anti-affinity VM placement rules. Yes
Cluster-Aware Updating for Orchestrated Patch Management of Hosts. Yes – Supported via included Cluster-Aware Updating (CAU) role service. Yes – Supported by vSphere 5.5 Update Manager, but if using vCenter Server Appliance, need separate 64-bit Windows OS license for Update Management server.  If supporting more than 5 hosts and 50 VMs, also need a separate SQL database server.
Guest OS Application Monitoring for Highly Available VMs Yes Yes – Provided by vSphere App HA, but limited to only the following applications: Apache Tomcat, IIS, SQL Server, Apache HTTP Server, SharePoint, SpringSource tc Runtime.
VM Guest Clustering via Shared Virtual Hard Disk files Yes – Provided via native Shared VHDX support for VM Guest Clusters Yes – But only Single-Host VM Guest Clustering supported via Shared VMDK files.  For VM Guest Clusters that extend across multiple hosts, must use RDM instead.
Maximum # of Nodes per VM Guest Cluster 64 5 – as documented in VMware Guidelines for Supported MSCS Configurations
Intelligent Placement of new VM workloads Yes – Provided via Intelligent Placement in System Center 2012 R2 Yes – Provided via vSphere DRS, but without ability to intelligently place fault tolerant VMs using VMware FT.
Automated Load Balancing of VM Workloads across Hosts Yes – Provided via Dynamic Optimization in System Center 2012 R2 Yes - Provided via vSphere DRS, but without ability to load-balance VM Guest Clusters using MSCS.
Power Optimization of Hosts when load-balancing VMs Yes – Provided via Power Optimization in System Center 2012 R2 Yes – Provided via Distributed Power Management (DPM) within a vSphere DRS cluster, with the same limitations listed above for Automated Load Balancing.
Fault Tolerant VMs No - The vast majority of application availability needs can be supported via Highly Available VMs and VM Guest Clustering on a more cost-effective and more-flexible basis than software-based fault tolerance solutions.  If required for specific business applications, hardware-based fault tolerance server solutions can be leveraged where needed. Yes – Supported via VMware FT, but there are a large number of limitations when using VMware FT, including no support for the following when using VMware FT: VM Snapshots, Storage vMotion, VM Backups via vSphere Data Protection, Virtual SAN, Multi-vCPU VMs, More than 64GB of vRAM per VM. Software-based fault tolerance solutions, such as VMware FT, generally have significant limitations.  If applications require more comprehensive fault tolerance than provided via Highly Available VMs and VM Guest Clustering, hardware-based fault tolerance server solutions offer an alternative choice without the limits imposed by software-based fault tolerance solutions.
Backup VMs and Applications Yes - Provided via included System Center 2012 R2 Data Protection Manager with support for Disk-to-Disk, Tape and Cloud backups. Yes - Only supports Disk-to-Disk backup of VMs via vSphere Data Protection.  Application-level backup integration requires separately purchased vSphere Data Protection Advanced.
Site-to-Site Asynchronous VM Replication Yes – Provided via Hyper-V Replica with 30-second, 5-minute or 15-minute replication intervals. Minimum RPO = 30 seconds. Hyper-V Replica also supports extended replication across three sites for added protection. Yes – Provided via vSphere Replication with a minimum replication interval of 15 minutes. Minimum RPO = 15 minutes. In the VMware solution, Orchestrated Failover of Site-to-Site replication can be provided via the separately licensed VMware SRM. In the Microsoft solution, Orchestrated Failover of Site-to-Site replication can be provided via included PowerShell at no additional cost. Alternatively, a GUI for orchestrating failover can be provided via the separately licensed Windows Azure HRM service.
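The replication intervals above translate directly into worst-case data loss. A rough back-of-envelope sketch, assuming a hypothetical steady write rate inside the VM; with asynchronous replication, everything written since the last completed cycle is at risk:

```python
def data_at_risk_mb(change_rate_mb_per_s: float, interval_s: int) -> float:
    # Upper bound on data lost in an unplanned failover: everything
    # written since the last completed replication cycle.
    return change_rate_mb_per_s * interval_s

rate = 2.0  # hypothetical average write rate, in MB/s
print(data_at_risk_mb(rate, 30))   # Hyper-V Replica at its 30-second minimum -> 60.0 MB
print(data_at_risk_mb(rate, 900))  # vSphere Replication at its 15-minute minimum -> 1800.0 MB
```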

Storage: Storage capabilities will define the caliber of our system.

Columns: Feature – Microsoft Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions – VMware vSphere 5.5 Enterprise Plus + vCenter Server 5.5 – Notes
Maximum # Virtual SCSI Hard Disks per VM 256 ( Virtual SCSI ) 60 ( PVSCSI ) / 120 ( Virtual SATA )
Maximum Size per Virtual Hard Disk 64TB 62TB vSphere 5.5 support for 62TB VMDK files is limited to VMFS5 and NFS datastores only. In vSphere 5.5, VMFS3 datastores are still limited to 2TB VMDK files. In vSphere 5.5, Hot-Expand, VMware FT, Virtual Flash Read Cache and Virtual SAN are not supported with 62TB VMDK files.
Native 4K Disk Support Yes - Hyper-V provides support for both 512e and 4K large sector-size disks to help ensure compatibility with emerging innovations in storage hardware. No
Boot VM from Virtual SCSI disks Yes ( Generation 2 VMs ) Yes
Hot-Add Virtual SCSI VM Storage for running VMs Yes Yes
Hot-Expand Virtual SCSI Hard Disks for running VMs Yes Yes – but not supported with the new 62TB VMDK files.
Hot-Shrink Virtual SCSI Hard Disks for running VMs Yes No
Storage Quality of Service Yes ( Storage QoS ) Yes ( Storage IO Control ) In VMware vSphere 5.5, Storage IO Control is not supported for RDM disks. In Windows Server 2012 R2, Storage QoS is not supported for Pass-through disks.
Virtual Fibre Channel to VMs Yes ( 4 Virtual FC NPIV ports per VM ) Yes ( 4 Virtual FC NPIV ports per VM ) – but not supported when using VM Guest Clusters with MSCS. vSphere 5.5 Enterprise Plus also includes a software initiator for FCoE support for VMs. While not included inbox in Windows Server 2012 R2, a no-cost ISV solution is available to provide FCoE support for Hyper-V VMs.
Live Migrate Virtual Storage for running VMs Yes - Unlimited concurrent Live Storage migrations. Provides flexibility to cap at a maximum limit that is appropriate for your datacenter architecture. Yes – but only up to 2 concurrent Storage vMotion operations per host / only up to 8 concurrent Storage vMotion operations per datastore.  Storage vMotion is also not supported for MSCS VM Guest Clusters.
Flash-based Read Cache Yes - Using SSDs in Tiered Storage Spaces, limited up to 160 physical disks and 480 TB total capacity. Yes – but only up to 400GB of cache per virtual disk / 2TB cumulative cache per host for all virtual disks. See this article for additional challenges and considerations when implementing Flash-based Read Caching on VMware.
Flash-based Write-back Cache Yes - Using SSDs in Storage Spaces for Write-back Cache. No
SAN-like Storage Virtualization using commodity hard disks. Yes – Included in Windows Server 2012 R2 Storage Spaces. No – VMware provides Virtual SAN, which is included as an experimental feature in vSphere 5.5. You can test and experiment with Virtual SAN, but VMware does not expect it to be used in a production environment.
Automated Tiered Storage between SSD and HDD using commodity hard disks. Yes – Included in Windows Server 2012 R2 Storage Spaces. No – VMware provides Virtual SAN, which is included as an experimental feature in vSphere 5.5. You can test and experiment with Virtual SAN, but VMware does not expect it to be used in a production environment.
Can consume storage via iSCSI, NFS, Fibre Channel and SMB 3.0. Yes Yes – Except no support for SMB 3.0
Can present storage via iSCSI, NFS and SMB 3.0. Yes – Available via included iSCSI Target Server, NFS Server and Scale-out SMB 3.0 Server support. All roles can be clustered for High Availability. No – VMware provides vSphere Storage Appliance as a separately licensed product to deliver the ability to present NFS storage.
Storage Multipathing Yes – via MPIO and SMB Multichannel Yes – via VAMP
SAN Offload Capability Yes – via ODX Yes – via VAAI
Thin Provisioning and Trim Storage Yes – Available via Storage Spaces Thin Provisioning and NTFS Trim Notifications. Yes – but trim operations must be manually processed by running esxcli vmfs unmap command to reclaim disk space.
Storage Encryption Yes – via BitLocker No
Deduplication of storage used by running VMs Yes – Available via included Data Deduplication role service. No
Provision VM Storage based on Storage Classifications Yes – via Storage Classifications in System Center 2012 R2 Yes – via Storage Policies, formerly called Storage Profiles, in vCenter Server 5.5
Dynamically balance and re-balance storage load based on demands Yes – Storage IO load balancing and re-balancing is automatically handled on-demand by both SMB 3.0 Scale Out File Server and Automated Storage Tiers in Storage Spaces. Yes – Performed via Storage DRS, but limited in load-balancing frequency.  The default DRS load-balance interval only runs at 8-hour intervals and can be adjusted to run load-balancing only as often as every 1-hour. Microsoft and VMware use different approaches for storage load balancing.  Microsoft’s approach is to provide granular, on-the-fly load balancing at an IO-level across SSD and HDD for better granularity.  VMware’s approach is to provide storage load balancing at a VM-level and use Storage vMotion to live migrate running VM’s between storage locations periodically in an attempt to distribute storage loads for running VMs.
Integrated Provisioning and Management of Shared Storage Yes - System Center 2012 R2 VMM includes storage provisioning and management of SAN Zoning, LUNS and Clustered Storage Servers. No - Provisioning and management of Shared Storage is available through some 3rd party storage vendors who offer plug-ins to vCenter Server 5.5.
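As a rough illustration of what automated tiering in Storage Spaces does, the toy sketch below (hypothetical extent names and heat values) greedily promotes the most frequently accessed extents to the SSD tier until it is full. Real tiering engines work on scheduled IO statistics rather than a single snapshot:

```python
def tier_extents(extent_heat: dict, ssd_capacity: int):
    # Greedy tiering sketch: the hottest extents (by IO count) go to the
    # SSD tier until it is full; everything else stays on HDD.
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    ssd = set(ranked[:ssd_capacity])
    hdd = set(ranked[ssd_capacity:])
    return ssd, hdd

heat = {"A": 900, "B": 15, "C": 480, "D": 3, "E": 720}  # IO counts per extent
ssd, hdd = tier_extents(heat, ssd_capacity=2)
print(sorted(ssd))  # ['A', 'E'] -> the two hottest extents promoted to SSD
```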

Networking: Networking provides the connectivity and expandability of the system.

Columns: Feature – Microsoft Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions – VMware vSphere 5.5 Enterprise Plus + vCenter Server 5.5 – Notes
Distributed Switches across Hosts Yes – Supported by Logical Switches in System Center 2012 R2 Yes
Extensible Virtual Switches Yes - Several partners offer extensions today, such as Cisco, NEC, Inmon and 5nine. Windows Server 2012 R2 offers new support for co-existence of Network Virtualization and Switch Extensions. Replaceable, not extensible - VMware virtual switch is replaceable, not incrementally extensible with multiple 3rd party solutions concurrently
NIC Teaming Yes – Up to 32 NICs per NIC Team.  Windows Server 2012 R2 provides new Dynamic Load Balancing mode using flowlets to provide efficient load balancing even between a small number of hosts. Yes – Up to 32 NICs per Link Aggregation Group
Private VLANs (PVLAN) Yes Yes
ARP Spoofing Protection Yes No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.
DHCP Snooping Protection Yes No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.
Router Advertisement Guard Protection Yes  No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.
Virtual Port ACLs Yes - Windows Server 2012 R2 adds support for Extended ACLs that include Protocol, Src/Dst Ports, State, Timeout & Isolation ID Yes – via new Traffic Filtering and Marking policies in vSphere 5.5 distributed switches
Trunk Mode to VMs Yes Yes
Port Monitoring Yes Yes
Port Mirroring Yes Yes
Dynamic Virtual Machine Queue Yes Yes
IPsec Task Offload Yes No
Single Root IO Virtualization (SR-IOV) Yes Yes – SR-IOV is supported by vSphere 5.5 Enterprise Plus, but without support for vMotion, Highly Available VMs or VMware FT when using SR-IOV.
Virtual Receive Side Scaling ( Virtual RSS ) Yes Yes ( VMXNet3 )
Network Quality of Service Yes Yes
Network Virtualization / Software-Defined Networking (SDN) Yes – Provided via Hyper-V Network Virtualization based on NVGRE protocol and in-box Site-to-Site NVGRE Gateway. No – Requires additional purchase of VMware NSX
Integrated Network Management of both Virtual and Physical Network components Yes – System Center 2012 R2 VMM supports integrated management of virtual networks, Top-of-Rack (ToR) switches and integrated IP Address Management. No
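To illustrate the idea behind NVGRE-based network virtualization, the toy lookup below (hypothetical addresses and VSIDs) shows how two tenants can reuse the same customer IP: each packet is encapsulated with a 24-bit Virtual Subnet ID and delivered to the provider address of the node hosting the destination VM:

```python
# Toy mapping table in the spirit of NVGRE network virtualization:
# (vsid, customer_ip) -> provider address of the hosting node.
mapping = {
    (5001, "10.0.0.5"): "192.168.1.10",
    (5002, "10.0.0.5"): "192.168.1.22",  # same customer IP, different tenant
}

def route(vsid: int, customer_ip: str) -> str:
    # The VSID keeps overlapping tenant address spaces isolated.
    return mapping[(vsid, customer_ip)]

print(route(5001, "10.0.0.5"))  # 192.168.1.10
print(route(5002, "10.0.0.5"))  # 192.168.1.22
```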


Guest Operating Systems: Guest OS support determines which of your existing workloads each platform can actually host.

For this section, we define Supported Guest Operating Systems as operating systems that are supported by both the virtualization platform vendor and the operating system vendor. Below, we list the most common recent versions of major Windows and Linux operating systems that we have seen used in business environments of all sizes, including SMB, Enterprise and hosting partner organizations. We have included the support status for each operating system, along with relevant notes where helpful.

If you’re looking for the full list of Guest Operating Systems supported by each platform, refer to each vendor’s published guest OS compatibility lists.

Columns: Guest OS – Microsoft Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions – VMware vSphere 5.5 Enterprise Plus + vCenter Server 5.5 – Notes
Windows Server 2012 R2 Yes Yes
Windows 8.1 Yes Yes
Windows Server 2012 Yes Yes
Windows 8 Yes Yes
Windows Server 2008 R2 SP1 Yes Yes
Windows Server 2008 R2 Yes Yes
Windows 7 with SP1 Yes Yes
Windows 7 Yes Yes
Windows Server 2008 SP2 Yes Yes
Windows Home Server 2011 Yes No
Windows Small Business Server 2011 Yes No
Windows Vista with SP2 Yes Yes
Windows Server 2003 R2 SP2 Yes Yes
Windows Server 2003 SP2 Yes Yes
Windows XP with SP3 Yes Yes
Windows XP x64 with SP2 Yes Yes
CentOS 5.7, 5.8, 6.0 – 6.4 Yes Yes
CentOS Desktop 5.7, 5.8, 6.0 – 6.4 Yes Yes
Red Hat Enterprise Linux 5.7, 5.8, 6.0 – 6.4 Yes Yes
Red Hat Enterprise Linux Desktop 5.7, 5.8, 6.0 – 6.4 Yes Yes
SUSE Linux Enterprise Server 11 SP2 & SP3 Yes Yes
SUSE Linux Enterprise Desktop 11 SP2 & SP3 Yes Yes
OpenSUSE 12.1 Yes Yes
Ubuntu 12.04, 12.10, 13.10 Yes Yes – Currently 13.04 in the 13.x distros
Ubuntu Desktop 12.04, 12.10, 13.10 Yes Yes – Currently 13.04 in the 13.x distros
Oracle Linux 6.4 Yes – Oracle has certified its supported products to run on Hyper-V and Windows Azure. Yes – However, per this Oracle article, Oracle has not certified any of its products to run on VMware. Oracle will only provide support for issues that are either known to occur on the native OS, or can be demonstrated not to be a result of running on VMware.
Mac OS X 10.7.x & 10.8.x No Yes – However, see note to the right.  Based on current Apple EULA, this configuration may not be legally permitted in your environment. Note that according to the Apple EULA for Mac OS X, it is not permitted to install Mac OS X on any platform that is not Apple-branded hardware. If you choose to virtualize Mac OS X on non-Apple hardware platforms, it’s my understanding that you’re violating the terms of the Apple EULA.
Sun Solaris 10 No Yes – However, per this Oracle article, Oracle has not certified any of its products to run on VMware. Oracle will only provide support for issues that are either known to occur on the native OS, or can be demonstrated not to be a result of running on VMware.

In terms of Guest Operating System choices … It’s somewhat of a draw in this area, as the best choice for you really depends upon which Guest Operating Systems you are actually using in your environment.

If you are primarily using the latest past few versions of common Windows and Linux operating systems in your shop, either platform probably nicely supports your required mix of Guest Operating Systems.  However, if you’re still using older legacy versions or specialized versions of some operating systems, you may need to more closely review the full compatibility lists for each platform using the links provided above.  When evaluating Guest Operating System support for virtualization platforms, remember to also check with the Operating System vendor to ensure that the OS in question also meets their support and licensing policies.

Managing Heterogeneous Hypervisor Environments

In certain scenarios, you may find that a mix of virtualization platforms is needed to cost-effectively support all the features and Guest Operating Systems you’re looking for. In that case, you’ll be pleased to find that Microsoft System Center 2012 R2 also supports Private Cloud management across heterogeneous hypervisors, including Hyper-V, VMware vSphere and Citrix XenServer.


Source: Microsoft


Another comparison (from VMware’s perspective)

Avoid unnecessary risk and overhead by choosing a robust and production-proven hypervisor as the foundation for your virtualized datacenter. Selecting the right hypervisor is the first step towards success in building a virtual infrastructure.

Comparing Hypervisors

  • Hyper-V and Xen Architectures: Too Much Code
  • Achieve Scalable Performance
  • Virtualization-Aware Networking and Security Solutions

Comparing VMware vSphere and Microsoft Hyper-V

VMware vSphere—the industry’s first x86 “bare-metal” hypervisor—is the most reliable and robust hypervisor. Launched in 2001 and now in its fifth generation, VMware vSphere has been production-proven in tens of thousands of customer deployments all over the world.

Other hypervisors are less mature, unproven in a wide cross-section of production datacenters, and lacking core capabilities needed to deliver the reliability, scalability, and performance that customers require.


So while others try to catch up to VMware in the areas highlighted below, VMware’s continuing enhancements are taking vSphere to the next level of enterprise-class hypervisors needed to deliver the Software-Defined Data Center—further extending VMware’s lead and ensuring that VMware customers obtain unparalleled levels of performance and reliability.

Hypervisor Attributes: VMware vSphere 5.5 vs. Windows Server 2012 with Hyper-V

  • Small Disk Footprint – vSphere: <200MB disk footprint. Hyper-V: >5GB with a Server Core installation; ~10GB with a full Windows Server installation.
  • OS Independence – vSphere: no reliance on a general-purpose operating system. Hyper-V: relies on Windows Server 2012 in the Parent Partition.
  • Hardened Drivers – vSphere: optimized with hardware vendors. Hyper-V: generic Windows drivers.
  • Advanced Memory Management – vSphere: ability to reclaim unused memory, de-duplicate memory pages, compress memory pages, and swap to disk/SSD. Hyper-V: only uses ballooning, which requires special drivers; no Linux, no NUMA (need to check).
  • Advanced CPU Management – vSphere: tuned to support Intel SMT hyper-threading; supports 3D graphics accelerators. Hyper-V: no reliable performance advantage when using hyper-threading.
  • Advanced Storage Management – vSphere: vSphere Virtual Machine File System (VMFS). Hyper-V: lacks an integrated cluster file system.
  • Virtual Security Technology – vSphere: vCloud Networking and Security enables hypervisor-level security introspection. Hyper-V: minimal adoption of its security standard; 3rd party plug-ins required.
  • Flexible Resource Allocation – vSphere: hot add VM vCPUs and memory, VMFS volume grow, hot extend virtual disks, hot add virtual disks. Hyper-V: nothing comparable (need to check).
  • Simplified Patching – vSphere: no unrelated patching; image-based patching with rollback capabilities provides clean and simple host patching. Hyper-V: subject to frequent patching related to the OS; complex patching architecture requires additional effort and complexity.


Architecture: Windows Server 2012 with Hyper-V and Xen (Too Much Code)

A smaller virtualization footprint reduces the attack surface for external threats and can drastically lower the number of patches required, giving you a more reliable product and a more stable datacenter.

As part of VMware’s ongoing focus to advance virtualization reliability, VMware created VMware® ESXi™, the industry’s smallest hypervisor and first complete x86/x64 virtualization architecture with no dependence on a general-purpose operating system. No other virtualization platform can match the compact size of VMware ESXi with its small disk footprint. Removing the patches that would normally need to be applied reduces the security risks associated with a general purpose server operating system. Windows Server 2012 with Hyper-V, Xen, and KVM all have architectures that depend on a general purpose server operating system, linking the reliability of their hypervisors to that of the respective general purpose server operating system.

Microsoft attempted to follow VMware’s lead to reduce the attack surface of its virtualization platform by offering Windows Server Core (a subset of Windows Server 2012) as an alternative parent partition to a full Windows Server 2012 install. However, the disk footprint of Server Core in its virtualization role is still approximately 5 gigabytes (GB). Until Microsoft changes its virtualization architecture to remove its dependency on Windows, it will remain large and vulnerable to Windows patches, updates, and security breaches. All of the proprietary Xen-based and KVM offerings, such as those from Citrix, Oracle, Red Hat, and Novell face similar issues by relying upon general purpose Linux as a core part of their virtualization architectures.

Scalability: Achieve Better Scalability and Performance in your Data Center

The hypervisor plays a key part in delivering scalable virtualization performance. See detailed performance demonstrations and comparisons in the performance section of the VMware website.

We can see that VMware vSphere achieves high-performance throughput in a heavily virtualized environment, even as the number of total supported users and virtual machines per physical host increases.



Better Memory Management for Scalability

In most virtualization scenarios, system memory is the limiting factor controlling the number of virtual machines that can be consolidated onto a single server. By more intelligently managing virtual machine memory use, VMware lets you maximize the number of virtual machines your hardware can support. Of all x86 bare-metal hypervisors, VMware vSphere supports the highest efficiency of memory utilization with minimal performance impact by combining several exclusive technologies.

VMware vSphere uses four techniques for memory management:


Transparent Page Sharing. Think of it as de-duplication for your memory.  During periods of idle CPU activity, ESXi scans memory pages loaded by each virtual machine to find matching pages that can be shared. The memory savings can be substantial, especially when the same operating system or applications are loaded in multiple guests, as is the case with VDI.  Transparent Page Sharing has a negligible effect on performance (sometimes it even improves guest performance) and users can tune ESXi parameters to speed up scanning if desired.  Also, despite claims by our competitors, Transparent Page Sharing will in fact work with large memory pages in guests by breaking those pages into smaller sizes to enable page sharing when the host is under memory pressure.
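As a conceptual sketch (not VMware code), page sharing can be modeled as content-addressed de-duplication: hash each page and back identical pages with a single stored copy. The function name and the SHA-256 shortcut below are illustrative; real ESXi scans opportunistically during idle cycles and verifies candidate matches bit-for-bit before sharing.

```python
import hashlib

PAGE_SIZE = 4096  # bytes per small page

def share_pages(pages):
    """Toy model of transparent page sharing: pages with identical
    content are collapsed onto one physical copy."""
    store = {}     # content hash -> the single shared physical copy
    mapping = []   # per-guest-page reference into the store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in store:
            store[digest] = page      # first copy of this content is kept
        mapping.append(digest)        # duplicates just reference it
    return store, mapping

# Three VMs booted from the same OS image load many identical pages.
pages = [b"\x00" * PAGE_SIZE,
         b"\x00" * PAGE_SIZE,
         b"kernel" + b"\x00" * (PAGE_SIZE - 6)]
store, mapping = share_pages(pages)
print(len(mapping), "guest pages backed by", len(store), "physical pages")
# 3 guest pages backed by 2 physical pages
```

The savings grow with guest homogeneity, which is why VDI farms running one golden image benefit the most.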

Guest Ballooning. This is where ESXi achieves most of its memory reclamation.  When the ESXi hypervisor needs to provide more memory for virtual machines that are just powering on or getting busy, it asks the guest operating systems in other virtual machines to provide memory to a balloon process that runs in the guest as part of the VMware Tools.  ESXi can then loan that “ballooned” memory to the busy VMs.  The beauty of ballooning is that it’s the guest operating system, not ESXi, that decides which processes or cache pages to swap out to free up memory for the balloon.  The guest, whether it’s Windows or Linux, is in a much better position than the ESXi hypervisor to decide which memory regions it can give up without impacting performance of key processes running in the VM.
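A toy model of why ballooning works (the class and method names here are hypothetical, not the real balloon driver API): the guest, which knows its own memory usage, decides what to surrender, and the hypervisor simply collects the result.

```python
class GuestBalloon:
    """Toy model of the balloon driver inside a guest VM."""

    def __init__(self, guest_free_mb):
        self.guest_free_mb = guest_free_mb  # memory the guest can spare
        self.inflated_mb = 0                # memory pinned for the host

    def inflate(self, target_mb):
        # The *guest* OS chooses which of its own pages to free or
        # swap, then pins them; the hypervisor never has to guess
        # which guest memory is least valuable.
        grant = min(target_mb, self.guest_free_mb)
        self.guest_free_mb -= grant
        self.inflated_mb += grant
        return grant  # memory the hypervisor can now loan to busy VMs

idle_vm = GuestBalloon(guest_free_mb=1024)
reclaimed = idle_vm.inflate(target_mb=512)
print(f"hypervisor reclaimed {reclaimed} MB via ballooning")
# hypervisor reclaimed 512 MB via ballooning
```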


Hypervisor Swapping. Any hypervisor that permits memory oversubscription must have a method to cope with periods of extreme pressure on memory resources.  Ballooning is the preferred way to reclaim memory from guests, but in the time it takes for guests to perform the in-guest swapping involved, other guests short on memory would experience freezes, so ESXi employs hypervisor swapping as a fast-acting method of last resort.  With this technique, ESXi swaps its memory pages containing mapped regions of virtual machine memory to disk to free host memory.  Reaching the point where hypervisor swapping is necessary will impact performance, but vSphere supports swapping to increasingly common solid state disks, which testing shows can cut the performance impact of swapping by a factor of five.


Memory Compression. To reduce the impact of hypervisor swapping, vSphere introduced memory compression.  The idea is to delay the need to swap hypervisor pages by compressing the memory pages managed by ESXi – if two pages can be compressed to use only one page of physical RAM, that’s one less page that needs to be swapped.  Because the compression/decompression process is so much faster than disk access, performance is preserved.
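The compression/swap trade-off above can be sketched in a few lines. This is a conceptual model, not ESXi internals; it assumes the documented rule of thumb that a page is kept compressed in RAM only if it shrinks to half a page or less, and otherwise falls back to (much slower) hypervisor swapping.

```python
import os
import zlib

PAGE_SIZE = 4096

def reclaim_page(page):
    """Toy model of ESXi-style memory compression as the step before
    hypervisor swapping: keep the page compressed in RAM if it fits
    in half a page, otherwise swap it to disk."""
    packed = zlib.compress(page)
    if len(packed) <= PAGE_SIZE // 2:
        return ("compressed", packed)  # stays in RAM, fast to restore
    return ("swapped", page)           # last resort: slow disk swap

compressible = b"A" * PAGE_SIZE    # repetitive data compresses well
random_page = os.urandom(PAGE_SIZE)  # dense data won't shrink
print(reclaim_page(compressible)[0])  # compressed
print(reclaim_page(random_page)[0])   # swapped
```

Because decompressing from RAM is orders of magnitude faster than reading a swapped page back from disk, every page that lands in the compression cache is a page whose reclamation stays nearly invisible to the guest.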

With VMware vSphere, virtual machines look and act just like physical machines. Any guest operating systems and any applications or monitoring tools in the virtual machines see a consistent, fixed amount of installed RAM. That ensures that guest software and management tools behave as expected.

If all virtual machines on a host spike at the same time and require all of their memory allocation, VMware DRS can automatically load balance by performing vMotion live migrations of virtual machines to other hosts in the DRS cluster.
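A greatly simplified sketch of that rebalancing idea (real DRS weighs CPU and memory demand, migration cost, and affinity rules; the function below is hypothetical): pick the busiest and least busy hosts, then choose a VM whose live migration narrows the load gap.

```python
def pick_migration(hosts):
    """Greedy sketch of DRS-style rebalancing.
    hosts maps host name -> {vm name: memory demand in GB}."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    busy = max(load, key=load.get)
    idle = min(load, key=load.get)
    gap = load[busy] - load[idle]
    # Moving demand d changes the gap by 2*d, so any VM with
    # 0 < d < gap shrinks the imbalance; prefer the smallest move.
    for vm, demand in sorted(hosts[busy].items(), key=lambda kv: kv[1]):
        if 0 < demand < gap:
            return vm, busy, idle
    return None  # cluster is already as balanced as a single move allows

hosts = {
    "esx-01": {"web1": 8, "web2": 8, "db1": 16},  # 32 GB demand
    "esx-02": {"web3": 8},                        # 8 GB demand
}
print(pick_migration(hosts))
# ('web1', 'esx-01', 'esx-02')
```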

Watch a technical video on: VMware Distributed Resource Scheduler and VMware vSphere

Networking and Securing the Virtual Datacenter

VMware vCloud Networking and Security provides rich networking and security functionality for virtualized compute environments, built using the vCloud Suite. It provides a broad range of services delivered through virtual appliances, such as a virtual firewall, virtual private network (VPN), load balancing, NAT, DHCP and VXLAN-extended networks. These foundational capabilities for the vCloud Suite enhance operational efficiency, improve agility with control and enable extensibility to partner solutions.

vCloud Networking and Security is deployed on top of vSphere Distributed Switch (VDS). VDS enables centralized network provisioning, administration, and monitoring using cluster-level network aggregation for data center access switching. VDS enables individual host-level virtual switches to be abstracted into a single large VDS that spans multiple hosts at the data center level, with vCenter Server™ acting as the control point for all configured VDS instances.

vCloud Networking and Security components:

  • Edge Virtual Appliance – Provides networking and security gateway services, such as firewall, NAT, load balancer, VPN and DHCP. Edge High Availability protects against network, host and software failures.
  • App Firewall – Segments and isolates critical applications within the virtual data center using vNIC-level firewalling.
  • VXLAN – VXLAN, in conjunction with VDS, creates Layer 2 logical networks that are encapsulated in standard Layer 3 IP packets. A large number of isolated Layer 2 VXLAN networks can co-exist on a common Layer 3 infrastructure. These logical networks can span non-contiguous clusters or pods, without the need for VLANs, enabling customers to scale their applications across clusters and pods. VXLAN requires multicast to be turned on in Top of Rack (ToR) switches.
  • Data Security – Scans virtual workloads for sensitive data and reports regulation violations so you can quickly assess the state of compliance with global regulations.
  • vCloud Ecosystem Framework – Integrates partner services at the vNIC, virtual edge or policy management plane through REST APIs.
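The VXLAN encapsulation described above can be made concrete. RFC 7348 defines an 8-byte VXLAN header, carried inside a UDP/IP packet ahead of the inner Layer 2 frame; its 24-bit VXLAN Network Identifier (VNI) is what allows roughly 16 million isolated logical networks on one Layer 3 fabric. A minimal sketch of building that header:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348): one flags byte with
    the 'I' bit set (VNI is valid), three reserved bytes, a 24-bit
    VNI, and a final reserved byte."""
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    flags_and_reserved = 0x08 << 24  # 'I' flag in the top byte
    vni_and_reserved = vni << 8      # VNI occupies bits 31..8
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900
```

On the wire this header sits between the outer UDP header and the encapsulated Ethernet frame, which is why a VXLAN segment can cross any routed Layer 3 boundary.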

Management integration with VMware vCenter Server™ and VMware vCloud Director® reduces the cost and complexity of datacenter operations. With vCloud Networking and Security, enterprises can virtualize business critical applications with confidence, secure VMware View deployments and build secure and agile private clouds.

VMware includes, with select vSphere editions, vShield Endpoint to secure your virtual machine endpoints with highly-efficient anti-virus and data security. vShield Endpoint works in conjunction with industry-leading security vendors allowing their technology – implemented as virtual appliances – to protect endpoint virtual machines without resource-sapping agents. vShield Endpoint applies security at the hypervisor layer using introspection to monitor memory, network traffic and storage in every virtual machine. vShield Endpoint is virtualization-aware security that is more scalable, efficient and simpler to manage than legacy agent-based approaches designed for physical machines.

Source : VMware


In Summary …

As you can see, both Microsoft Windows Server 2012 R2 / System Center 2012 R2 and VMware vSphere 5.5 offer lots of enterprise-grade virtualization features.  Hopefully this comparison was useful to you in more granularly evaluating each platform for your environment.

  • Which virtualization platform scores higher for your needs depends entirely on your core requirements.
  • If you are going to deploy applications on the Windows platform, Windows Server 2012 R2 and System Center 2012 R2 are preferable in terms of both support and cost.
  • For hybrid cloud scenarios, Microsoft offers both options: System Center 2012 R2 with Windows Server 2012 R2 Hyper-V for the private cloud and Microsoft Azure for the public cloud, with 24x7 support. VMware is a strong player, but only for on-premises deployments; for hybrid cloud, a third-party solution is required.

VMware has a very good virtualization platform with excellent add-on monitoring and management suites, while Microsoft has a good virtualization platform with a well-integrated monitoring and management suite.