Monday, April 4, 2011
Microsoft Offers iSCSI Target as a Free Download
Microsoft is making its iSCSI Target software for Windows Server 2008 R2 a free download! Now you can create shared storage for production or test deployments of Hyper-V servers, enabling Live Migration and High Availability, without deploying a dedicated NAS or SAN in your environment. Read more about it here: http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx
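If you want to script the initiator side once a target is up, the built-in iscsicli utility can attach a Hyper-V host to it. A minimal sketch in Python (the portal address and target IQN below are made-up placeholders, and this assumes the Microsoft iSCSI Initiator service is running on the host):

import subprocess

# Placeholder values; substitute your target server's address and the IQN it exposes.
portal = "192.168.1.50"
target_iqn = "iqn.1991-05.com.microsoft:storage01-hyperv-target"

# Register the target portal, then log in to the target so its LUNs appear as local disks.
subprocess.run(["iscsicli", "QAddTargetPortal", portal], check=True)
subprocess.run(["iscsicli", "QLoginTarget", target_iqn], check=True)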
Monday, August 30, 2010
Hyper-V versus VMware: my take on a few topics
A colleague and I were "discussing" the differences between Hyper-V and VMware server virtualization. I think we all know that both Microsoft and VMware have big marketing departments, and both are working overtime to promote their products.
This discussion started with a blast email I sent to my company about Hyper-V taking over at CH2M Hill.
My colleague sent me an article in which VMware argues that Microsoft is misrepresenting Hyper-V's features compared to vSphere/ESX: http://blogs.vmware.com/virtualreality/2010/03/hyper-v-passes-microsofts-checkmarks-exam-isnt-that-always-the-case.html
I read the post and responded with the email below, based on my own experience. (The quoted passages below are taken from the post; they appeared in red in the original email.)
"Unlike vSphere, for example, Hyper-V R2 does not provide VM restart prioritization, which means that there is no easy way for admins to make sure that critical VMs are being restarted first. Incidentally, the lack of VM restart prioritization is one of the reasons why Burton Group stated that Hyper-V R2 is not an enterprise production-ready solution."
This is misinformation. You can control the restart priority of VMs with the automatic start delay setting: give VMs that are not a priority a longer start delay, and set the delay for high-priority VMs to zero. Read more about this debate here: http://www.networkworld.com/news/2010/080910-vmware-microsoft-disaster-recovery.html?fsrc=netflash-rss
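For reference, these per-VM start settings are also visible programmatically. A minimal read-only sketch, assuming the third-party Python wmi package (pip install wmi) and the Hyper-V v1 WMI provider that ships with Server 2008 R2:

import wmi

# Connect to the Hyper-V WMI provider namespace on the local host.
conn = wmi.WMI(namespace=r"root\virtualization")

# Each VM carries a global settings object with its automatic start action and delay.
for settings in conn.Msvm_VirtualSystemGlobalSettingData():
    print(settings.ElementName, settings.AutomaticStartupActionDelay)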
"In addition because Hyper-V R2 lacks memory overcommit (a feature that is totally missing from Microsoft's checklist), it can restart VMs only if the failover server has enough spare memory capacity to accommodate the VMs of the failed host."
The statement above about memory overcommit is comical, since overcommit has been widely reported as a "big win for VMware" ever since Hyper-V entered the market, even though VMware itself has said that memory overcommit is not recommended or supported for production environments. With Server 2008 R2 SP1, Microsoft is implementing Dynamic Memory. This is not memory overcommit but memory over-allocation: the Hyper-V host communicates with the guest OS to determine the guest's memory needs and then allocates more memory to it. The admin configures a minimum (startup) memory and a maximum memory threshold per guest; for example, I can configure a guest to start with 512 MB of memory and grow to a maximum of 4 GB, depending on need. This is fully supported in a production environment. It lets admins configure guests to use physical host memory dynamically based on need, while never exceeding the physical memory available to the host. A memory priority can also be set per guest.
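To make that policy concrete, here is a toy model, entirely my own illustration and not Microsoft's actual algorithm: each guest's grant floats between its startup and maximum values, and the running total never exceeds host physical memory (guest names and sizes are made up):

# Toy illustration of the Dynamic Memory constraint described above; not Microsoft's algorithm.
host_physical_mb = 16384

# (startup_mb, maximum_mb, requested_mb) per guest; requested is what the guest reports needing.
guests = {"web01": (512, 4096, 3000), "sql01": (2048, 8192, 9000), "dc01": (512, 2048, 800)}

available = host_physical_mb
for name, (startup, maximum, requested) in guests.items():
    # Grant what the guest asks for, clamped to [startup, maximum] and to what the host has left.
    grant = min(max(requested, startup), maximum, available)
    available -= grant
    print(name, "granted", grant, "MB;", available, "MB left on host")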
"This checkmark is funny to say the least. If you don't know what the word 'quick' means in Microsoft's marketing jargon (and believe me I have heard illuminating translations of the term from Microsoft's own employees), you'd think that Microsoft has a fast Storage VMotion (possibly faster than VMware's). The reality is that even just talking about Storage VMotion in Hyper-V's case doesn't make sense, because Microsoft's Quick Storage Migration, just like Quick Migration for VMs, cannot migrate VM virtual disks without downtime. VMware Storage VMotion, on the other hand, can migrate virtual disks without any application downtime."
I'm not sure what they mean. In the R2 version with Cluster Shared Volumes, I have performed many Live Migrations that caused no downtime or interruption while moving a guest/VM from one host to another.
"As Microsoft TechNet shows, SCOM is a very complex product that consumes a considerable amount of servers and databases that – opposite to what Microsoft wants people to believe – are neither free nor included in the cost of SMSD licenses."
It is true that SCOM is a paid product and requires additional server resources to implement, but it is not required for Live Migration. And implementing SCOM allows a company to monitor not just the virtual environment but all physical servers as well: a single pane of glass for all monitoring, regardless of the server hardware.
Hyper-V pushes out VMware at Fortune 500 company
Hyper-V is gaining momentum all the time. Here is an article about CH2M Hill's decision to move from VMware ESX to the Microsoft Hyper-V virtualization platform, a major decision point being a cost savings of over three million dollars over the next three to five years.
The article is located here:
http://www.thevarguy.com/2010/08/27/fortune-500-firm-leaving-vmware-for-microsoft-hyper-v/
Wednesday, August 25, 2010
Hyper-V Change DVD ISO Image from within VM
I was directed to this post by a good friend; the author has created a tool that lets a user change the DVD drive's ISO image from within the VM/guest. I have not had a chance to play with it yet, but it looks very cool!
http://sqlblogcasts.com/blogs/simons/archive/2010/08/23/guest-tools-for-hyper-v-change-your-dvd-from-within-the-guest.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:%20SimonsSqlServerStuff%20(SimonS%20SQL%20Server%20Stuff)
Hyper-V R2 Rollup Patch released
A rollup patch for Hyper-V R2 has been released that fixes three different issues: http://support.microsoft.com/default.aspx?scid=kb;en-US;2264080
Thursday, March 18, 2010
SP1 announced for Server 2008 R2 and Windows 7
SP1 plans have been announced for Server 2008 R2 and Windows 7: http://blogs.technet.com/windowsserver/archive/2010/03/18/announcing-windows-server-2008-r2-and-windows-7-service-pack-1.aspx
Two big features that will be included primarily benefit Hyper-V: Dynamic Memory and RemoteFX. RemoteFX allows VMs to deliver richer content and a better visual experience.
Dynamic Memory has been requested since Hyper-V first launched. Microsoft says that with Dynamic Memory, companies can get higher VM density for VDI workstations on Hyper-V hosts.
Labels: Hyper-V R2, Windows 7, Windows Server 2008 R2
Wednesday, February 17, 2010
Server 2008 R2 Hyper-V and Intel Xeon 5500 Series Processor Problem
So I have been helping our internal IT implement Hyper-V to consolidate our server infrastructure. We implemented a two-node Server 2008 R2 Hyper-V cluster on Dell R710 servers. Everything was going pretty well until this past Sunday, when our IT Manager converted the second of two SQL Servers to a VM using the SCVMM 2008 R2 P2V process. The conversion went smoothly. On Monday mornings, all our consultants are required to submit their timesheets for the previous week through a custom application hosted on SharePoint. This puts a bit of a load on the front-end and SQL back-end servers, as everyone is submitting their timesheets while our internal staff are pulling reports and getting invoices ready to send out (we need to get paid for the great work we all do!).
Well, this Monday became challenging, because the entire SharePoint environment, plus Exchange and several other servers, are now hosted as VMs on the cluster. The challenge: our Hyper-V host servers began to crash and reboot, failing the cluster and all VMs running on them. Initially it appeared to be a network problem, so we moved the most recently converted SQL box to a dedicated virtual switch and physical NIC (we have a three-NIC team feeding a single virtual switch for all of the VMs on both hosts), thinking the network errors we were seeing on the hosts were caused by the newly added SQL server. This did not solve the problem. We also saw some storage issues and reviewed everything to make sure it was sound (the hosts use fibre-attached storage; most VMs' OS drives sit on a couple of Cluster Shared Volumes, CSVs, over the fibre connections, with their data drives using pass-through disks).
With the problem still sporadic, we removed the cluster from VMM and rebooted the hosts separately to check things out. That seemed to help for a bit (no issues Tuesday). Today we had crashes again, and our IT Manager went through the crash dumps with a fine-tooth comb and found the culprit: a 0x00000101 CLOCK_WATCHDOG_TIMEOUT error. It didn't take him long to find the KB article documenting the problem: http://support.microsoft.com/kb/975530
There is a known issue with Intel Xeon 5500 (Nehalem) processors. The problem happens sporadically, so identifying it initially is a little tough. There is a hotfix, as well as some workarounds, documented in the KB article.
This problem occurs only with the new Nehalem processors on Server 2008 R2 with the Hyper-V role installed. The problem is documented by Intel as well, here: http://www.intel.com/assets/pdf/specupdate/321324.pdf
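If you want a quick way to flag hosts that might be affected, one approach (my own sketch, assuming the third-party Python wmi package) is to check each processor's name string for the Xeon 5500-series model prefixes:

import wmi

# Inspect each CPU's marketing name for Xeon 5500 (Nehalem-EP) models such as E5520 or X5550.
conn = wmi.WMI()
for cpu in conn.Win32_Processor():
    name = cpu.Name or ""
    if any(prefix in name for prefix in ("E55", "L55", "X55")):
        print("Possibly affected CPU:", name.strip())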
Hope this information helps. I would be very interested to hear who else has experienced this problem.
Tuesday, January 26, 2010
Windows Server 2008 Core Network Bind tool
I have done numerous Server 2008 Core installations, primarily as hosts for Hyper-V virtualization. R2 made some great improvements in manageability on the Core platform, namely the SCONFIG command and the ability to launch the GUI iSCSI Initiator tool by running iscsicpl.
Well, a new tool to help with network adapter bindings has just been released. I have used a previous version of this tool, and it was helpful; I want to see what this one offers. This functionality is really needed! http://code.msdn.microsoft.com/nvspbind
Thank you, KeithMange.
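Since Core has no Network Connections UI, a scripted inventory of the physical adapters helps when matching them up with nvspbind's output. A small sketch, assuming the third-party Python wmi package:

import wmi

# List physical adapters with their connection names so they can be matched to binding output.
conn = wmi.WMI()
for nic in conn.Win32_NetworkAdapter():
    if nic.PhysicalAdapter:
        print(nic.NetConnectionID, "-", nic.Name)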
Sunday, December 13, 2009
Microsoft Hyper-V TechNet Resource Page
A good source for technical information and resources around Hyper-V:
http://technet.microsoft.com/en-us/dd565807.aspx
Saturday, October 17, 2009
SCVMM 2008 R2 Problem with P2V from Dell Physical Server
I had a client that I upgraded to 2008 R2 Hyper-V and SCVMM 2008 R2. When they tried to P2V Dell physical servers, they received BSODs or errors at the Install Integration Services step. We tried over and over and could not successfully P2V the server. We then tested whether a freshly built VM had any problems installing Integration Services; it worked perfectly. So we ended up getting Microsoft Premier Support involved. Several tests and trials later, the problem was solved: it came down to the physical server having been built with the Dell OpenManage startup CD. It seems this build method sets a registry value that conflicts with the Integration Services install. The value in question is:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wdf01000\Group] = Base
Changing the value to "WdfLoadGroup" and rebooting the VM allowed Integration Services to install successfully.
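If you hit this on more than one converted VM, the change scripts easily. A minimal sketch using Python's standard winreg module (run it elevated inside the affected VM, then reboot before retrying the Integration Services install):

import winreg

# Point the Wdf01000 service's load-order group at WdfLoadGroup instead of Base.
key_path = r"SYSTEM\CurrentControlSet\Services\Wdf01000"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Group", 0, winreg.REG_SZ, "WdfLoadGroup")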
Labels: Hyper-V R2, SCVMM 2008 R2, Windows Server 2008 R2
Thursday, September 10, 2009
Windows Server 2008 R2 Hyper-V
I have already done a couple of upgrades of Hyper-V from R1 to R2 and must say that Microsoft has taken a huge leap in its data center virtualization offering! I think most everyone who follows virtualization knows about the BIG improvement, LIVE MIGRATION. But Live Migration is not the only new feature or enhancement in the R2 Hyper-V release. One small feature that might not get a lot of notice, but that I really appreciate, is a check box on the virtual switch that allows you to turn off administrative access on a physical NIC when it is connected to a virtual switch. To me this not only reinforces the best practice of having a separate, dedicated NIC for admin access to the host computer, it really cleans up the network interface connections on the host. With R1, each time you created a virtual switch, a "duplicate" network interface was created, and without proper naming this would result in numerous network interfaces differentiated only by a number appended to the standard name (side note: I ran into this at a client site; they did not properly name the network interfaces, and it made troubleshooting and administration a nightmare). Now checking the check box does not create this second network interface. Nice work, Microsoft.
Another cool feature, and what actually enables Live Migration, is Cluster Shared Volumes. Along with allowing multiple hosts to access a single LUN at the same time, it eliminates the requirement to create an individual LUN for each virtual machine, which provides better utilization of shared storage space. Now you can create a single large LUN, present it to all hosts in the cluster, and place multiple VMs within this LUN, separated by simple folders.
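As a concrete picture: CSV volumes appear on every node under C:\ClusterStorage, so carving out a folder per VM inside the shared LUN is a few lines of script (the volume path and VM names here are made-up examples):

import os

# CSV volumes are mounted under C:\ClusterStorage on every cluster node.
csv_root = r"C:\ClusterStorage\Volume1"

# One folder per VM inside the single large LUN; names are hypothetical.
for vm_name in ["SQL01", "EXCH01", "SP-WFE01"]:
    os.makedirs(os.path.join(csv_root, vm_name), exist_ok=True)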
More to come!