Top 10 Things System Administrators Need to Know About Virtualization

The only constant thing about technology is that it is never constant. The advances that we’ve witnessed in computing over the last couple of decades are staggering. To suggest even a decade ago that a single hard drive could be as big as 2 terabytes (TB) or that a single server could have 1 TB+ of RAM would have gotten you strange looks, possibly even laughs. Today, we are experiencing one of the greatest paradigm shifts that computing has seen yet: virtualization.

We are quickly moving away from a “one operating system per server” model to one that asks “how many operating systems can we place on a single server?” This single but monumental change in datacenter design can be very difficult for system administrators to grasp. For many years, our practice was to buy the biggest servers we could, install as much CPU and memory as possible, and then install our operating system of choice. It didn’t matter that we saw incredibly inefficient use of those expensive resources (usually only 5-10%)…this was the only design we knew. Virtualization technology changes everything we currently know about datacenter management.


By implementing a robust solution, such as the VMware vSphere 4.1 suite, we are able to realize benefits and cost savings that were never thought possible. The purpose of this white paper is to offer a high-level description of the top 10 things about virtualization every system administrator (sysadmin) needs to know, regardless of whether you’re just dipping your toes in the virtualization pool or are already in the deep end. And, in the best David Letterman tradition, we’ll start with number 10. DISCLAIMER: This list is not in a strict order of importance.

It is OK for you to disagree with my order choice, or even the completeness of this listing.

Number 10: It is much easier to test new solutions in your infrastructure

One of the challenges we face as sysadmins is validating changes to our network without disrupting the status quo of the production environment. In order to properly test new software, updates, patches, etc., we usually have to beg and plead with our management to purchase a test and validation lab. The logic is simple: change isn’t always good.

We need an external environment to verify that the new patch or Service Pack we are about to unleash on our company is not going to wreak havoc. Let’s consider this example: You have a web server farm deployed on the Windows Server 2003 platform. A new FastCGI for IIS (Internet Information Services) update has been released, and you’re anxious to implement it in your production environment. Of course, you’re not going to go in guns blazin’ and blindly install the upgrade on your web servers.

You’re going to test the upgrade first to make sure there are no negative consequences. So, in a traditional datacenter, you would have to make sure you have an identical physical environment in your test lab for your web server farm so you could deploy the update. But what if you don’t have an extensive test lab, or even a lab at all? Many small- to medium-sized businesses find themselves in exactly this situation. Without a test lab, your only recourse is to schedule a maintenance window for your production servers and test the update during the planned outage.

How can virtualization fix this required downtime for the production servers? With VMware vSphere 4.x, you can run the Converter plug-in and do a “Physical to Virtual” conversion, also called a “P2V” conversion. This creates an exact copy of the physical server as a virtual machine that you can use for testing at any time of day you see fit. Most importantly, you will be able to easily and accurately test the new software update against an exact copy of the physical server without having to actually touch the production environment.
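To give a feel for how scriptable this copy-for-testing workflow can be, here is a minimal sketch (not the Converter P2V wizard itself) that uses the open-source pyVmomi SDK to clone an already-virtualized production server into a disposable test copy. The vCenter address, credentials, and VM names are placeholders, and error handling is omitted for brevity.

```python
# Minimal sketch: clone a production VM into a disposable test copy with pyVmomi.
# Host name, credentials, and VM names below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
content = si.RetrieveContent()

def find_vm(name):
    """Walk the vCenter inventory and return the first VM whose name matches."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.Destroy()

source_vm = find_vm("web01")        # the production web server
folder = source_vm.parent           # keep the clone alongside the original

# Clone in place (same host/datastore), powered off, so it can be patched safely.
relocate = vim.vm.RelocateSpec()
spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
task = source_vm.CloneVM_Task(folder=folder, name="web01-test", spec=spec)

Disconnect(si)
```

Once the clone exists, the FastCGI update (or any other patch) can be applied and validated against it, and the copy simply deleted afterward.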

Number 9: You can virtualize your user desktops and take back control

For most system administrators I’ve known over the years, end-user support is one of the most dreaded parts of the job. Don’t get me wrong, I’m not saying that end users are bad, but in my experience, this is the single area where most of the unpleasant surprises tend to come from. Once you give a PC or a laptop to a user, you have created, at least in some small part, a potentially dangerous situation. What problems am I talking about?

For one, practically every PC and laptop that we issue to our users comes equipped with removable media drives, such as CD-ROMs/DVD-ROMs, and USB thumb drive slots. While we are certainly able to implement policies preventing many types of applications from being installed, that doesn’t mitigate the risk of the user bringing infected multimedia content from home and spreading it throughout your network. In the end, you can only do so much, in a traditional computing model, to secure your end-users’ computing environment. What we need is a new way of doing things.

Virtualize the desktop! This allows you to maintain the user desktop image completely separate from the physical computer it is accessed on. By doing this, you can better secure the image and provide better performance and availability through more tightly integrated system policies. Of course, there is a balancing act that we must maintain. To successfully virtualize the user desktops, we must ensure a few things:
1. The user experience is not disrupted.
2. We don’t apply a blanket treatment to every user.
3. Users can still tailor their working environments.

By virtualizing the desktops, we can ensure the security, performance, and reliability of the end-user experience without forcing the users to endure a “vanilla” experience.

Number 8: You can provide enhanced support for mobile clients

One of the biggest end-user trends we’ve seen building in recent years is mobility. As a matter of fact, it’s not just that our users have cell phones; these “phones” are more like actual computers that, almost as an afterthought, can also place calls.

These smartphones, such as iPhones and Android devices, allow users unrivaled access to computing power and abilities that were unthinkable even a few years ago. In addition to personal gear, many companies also issue corporate phones, such as Blackberries, that are specifically configured to communicate with local servers. This creates an interesting problem for system administrators: as users become more and more mobile savvy, they want to carry less and less. This means that most users will balk at the idea of carrying their iPhone AND the Blackberry at the same time.

Whereas Blackberries used to be the cell phones with the advanced features, many will now argue that iPhones and Android phones are at least as powerful, if not more so. Fortunately, there are emerging solutions that solve this issue. VMware has developed mobile virtualization software that allows a user to have two completely separate virtual phone instances on the same phone hardware, in exactly the same way we would store multiple virtual machines on a single ESXi host. This is a very cool capability that is sure to ease the burden of many system and security administrators.

Personal equipment in a corporate environment is always a recipe for disaster, from both a security and a liability standpoint. With this technology, however, we can place a secure corporate phone image on the user’s personal phone that functions not just independently of the personal phone image, but concurrently with it (i.e., you can receive calls from either image at the same time). That creates the needed buy-in from the users (and admit it: even though we tend to discount the user’s opinion regarding their use of company resources, it is a necessary component for actual success).

Number 7: You can take advantage of recent trends in virtualizing Unified Communications

One of the big shifts in technology in the last several years has been that server administrators now find themselves administering IP-based telephony PBXs (Private Branch Exchanges). While telephony used to be handled by its own dedicated team of engineers, Cisco’s Unified Communications Manager (CUCM) changed this trend and brought call processing management to a physical server chassis. With the release of CUCM version 8, virtualization is now supported by Cisco, which is something many people have been waiting on for a long time. Of course, there’s a small catch: it’s only supported on the Cisco Unified Computing System (UCS) platform running VMware vSphere software. This means that sysadmins now have yet another nugget of technology they have to become familiar with. Not to fear, though, because once the CUCM server has been installed as a virtual appliance, it looks and behaves exactly as we have been used to on the standalone physical server.

This new virtualized existence allows us to take advantage of the many availability and performance enhancements that VMware is known for, including the ability to vMotion the server to maintain load balancing or perform maintenance, and High Availability to ensure the virtual appliance will be restarted if it crashes.

Number 6: You can better support legacy applications and servers

Having to support extremely old servers with ancient operating systems and applications is something most sysadmins have had to deal with at one point or another in their careers. There can be many reasons for these servers staying around, but in my experience, the most common reasons involve licensing issues or a software vendor that refuses to build a new version to support modern operating systems. These legacy systems end up costing a company a lot of money in specialized support requirements and dedicated maintenance contracts (think elevated prices) that are often the result of having only a single integrator available with the parts in stock to repair the server.

As was mentioned earlier, we can use VMware’s Converter plug-in to do a P2V (Physical-to-Virtual) conversion, which creates an identical copy of the server in the form of a virtual machine. Once the server is in virtual form, hardware is no longer a concern, since the ESXi host’s hypervisor presents virtual drivers to the guest operating system and applications. At this point, we can terminate that inflated maintenance contract, because if you need to perform maintenance on the ESXi host, all you need to do is vMotion the VM to another host (i.e., the VM stays powered on and working the entire time) and power the original host down.
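As a rough illustration of how simple that maintenance step is in practice, the sketch below triggers the same kind of migration (vMotion) through pyVmomi, reusing the connection pattern from the earlier snippet. The VM and host names are placeholders, and a real migration also assumes shared storage and a configured vMotion network.

```python
# Sketch: live-migrate (vMotion) a running VM to another ESXi host so the
# original host can be shut down for maintenance. Assumes `si` and `content`
# were obtained as in the earlier pyVmomi snippet; names are placeholders.
from pyVmomi import vim

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

legacy_vm = find_by_name(vim.VirtualMachine, "legacy-app01")
target_host = find_by_name(vim.HostSystem, "esxi02.example.local")

# The guest OS keeps running throughout; only the underlying host changes.
task = legacy_vm.MigrateVM_Task(
    host=target_host,
    priority=vim.VirtualMachine.MovePriority.defaultPriority)
```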

Number 5: You will experience a significant reduction in the number of physical servers to support

Thinking about the many servers I’ve managed in production networks, I almost placed this topic in the top spot. While it ended up as number five, it is nonetheless a very significant benefit of virtualization to be able to greatly reduce the footprint of your datacenter. The benefits could be talked about for weeks: lower administrative overhead, lower energy costs, lower cabling requirements, etc.

Modern virtualization solutions make use of a “bare metal” operating system called a hypervisor to remove the one-to-one dependence between the physical server and operating system that we’ve become accustomed to. This “abstraction” of the hardware resources (i.e., CPU, memory, networking, and hard drives) from the operating system allows us to place many independent operating systems onto a single physical server. The virtualization market leader by a very large margin is VMware, which is actually the company that figured out how to represent the physical resources in software.

This incredible feat allowed each individual guest operating system to think it is an actual physical computer, using whatever percentage of the resources you assign it. Another interesting side effect of this process is that virtual machines can operate with higher efficiency and performance with fewer resources assigned than an equivalent physical server. As foreign as this sounds, I’ve witnessed many virtual servers with 1 CPU and 2 GB of RAM outperform equivalent physical servers with 4 CPUs and 8 GB of RAM. This allows the collective physical resources of your ESX/ESXi hosts to be stretched across even more concurrent VMs.

Number 4: You will see a much more efficient use of existing resources

One of the real downfalls of traditional datacenters is how incredibly wasteful physical servers are. Back in the day, we had a simple design concept: max it out. If you needed 10 database servers, you bought the biggest and most powerful servers you could find, filled them with all the CPUs and RAM you could fit, and then installed the operating system and applications.

While this model certainly gave you performance to spare, many studies have shown that a normal server utilization rate is around 5 to 10 percent. This means that around 90 percent of the capacity sits there unutilized and wasted. At the same time, these servers require an immense amount of electricity to function, so what you have is an incredibly inefficient and very operationally expensive server. Multiply this effect by the hundreds or even thousands of servers in a datacenter and you have one big mess on your hands.
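A quick back-of-the-envelope calculation makes the waste concrete. The sketch below uses the utilization figures quoted in this paper (5-10 percent before virtualization, and the 70-80 percent discussed in the next paragraph); the 100-server fleet is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope consolidation math using the figures quoted in this paper:
# ~5-10% average utilization on physical servers, ~70-80% once virtualized.
# The 100-server fleet size is an illustrative assumption.
physical_servers = 100
avg_util_before = 0.08      # middle of the 5-10% range
target_util = 0.75          # middle of the 70-80% range

useful_work = physical_servers * avg_util_before   # capacity actually in use today
hosts_needed = useful_work / target_util            # equally sized hosts at the target rate

print(f"Capacity actually in use: {useful_work:.0f} servers' worth")
print(f"Hosts needed at {target_util:.0%} utilization: {hosts_needed:.1f}")
print(f"Implied consolidation ratio: about {physical_servers / hosts_needed:.0f}:1")
```

Even with conservative inputs, the arithmetic points to consolidating roughly nine or ten physical servers onto a single virtualized host.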

With virtualization’s ability to place many independent operating systems on each physical server, we see resource utilization climb back up to around 70 to 80 percent, with electricity requirements falling off by 15 to 20 percent. Needless to say, this is a combination that has saved many companies thousands and even millions of dollars.

Number 3: You will see much greater uptime for critical servers

Over the years, the mark of excellence for datacenters, as far as availability goes, has been the five 9s (99.999%) of uptime.

In order to achieve this uptime, which allows for only about five minutes of outage in an entire year, you have to spend an incredible amount of money. Essentially, you have to purchase two of everything: two servers, two power supplies, two separate circuits, etc. With modern virtualization capabilities, we are able to see improvements in high availability never before considered possible. As the market leader, VMware has many techniques to extend the availability and scalability of small to very large datacenters, but its two primary tools are High Availability and Fault Tolerance.
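The “about five minutes” figure follows directly from the arithmetic; here is a quick sketch:

```python
# Downtime budget per year implied by a given availability level.
minutes_per_year = 365.25 * 24 * 60

for label, availability in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    allowed = minutes_per_year * (1 - availability)
    print(f"{label} ({availability:.3%}): about {allowed:,.1f} minutes of downtime per year")
```

Five 9s works out to roughly 5.3 minutes a year, which is why doubling up on hardware has traditionally been the only way to get there.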

With High Availability (HA), vSphere will automatically restart a virtual machine if it crashes or otherwise becomes unresponsive. So, with HA, we will experience some downtime, but it is nonetheless very impressive that an immediate restart can be executed automatically. In the physical world, I have come into my datacenter in the morning to find a dead server that had been out of commission all night. For many companies, even a minor outage can cost them millions in lost revenue, so this is a vital capability.

For the most critical servers, Fault Tolerance allows an exact copy of the original virtual machine to be created and held in standby as a secondary VM, running in read-only mode alongside the primary VM in full read-write mode. These two VMs communicate with each other at the millisecond level via a Fault Tolerance logging mechanism called vLockstep. If even one second of communication is lost between the two VMs, an automatic failover takes place, and the user will be hard-pressed to ever know it happened.

As a matter of fact, I have tried to watch the failover as it takes place, and even with my full attention on it, it is still very hard to catch, because it happens so fast.

Number 2: You can implement a “greener” infrastructure

As system administrators, we often don’t think about aspects that reach beyond the operation of the datacenter; there’s certainly enough to keep us busy as it is. Nonetheless, traditional datacenters have caused an array of environmental issues that directly affect your company’s bottom line.

According to VMware, “For every server virtualized, customers can save about 7,000 kilowatt hours [of electrical expenses], or four tons of CO2 emissions, every year.” These are significant figures that cannot be ignored, and they also give some insight into why so many CEOs and CIOs have embraced virtualization as the new datacenter design methodology. When we place these kinds of figures against the real utility bills paid at the end of the month, it’s no wonder that many companies are literally saving millions of dollars a year in datacenter operational costs.
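To put VMware’s per-server figure in context, here is a small sketch of what it implies at fleet scale; the number of servers and the electricity rate are illustrative assumptions, not figures from this paper.

```python
# Annual savings implied by VMware's figure of ~7,000 kWh and ~4 tons of CO2
# saved per virtualized server per year. Fleet size and electricity rate are
# illustrative assumptions for this sketch.
servers_virtualized = 500
kwh_per_server = 7_000
co2_tons_per_server = 4
usd_per_kwh = 0.10          # assumed blended commercial electricity rate

kwh_saved = servers_virtualized * kwh_per_server
co2_saved = servers_virtualized * co2_tons_per_server
dollars_saved = kwh_saved * usd_per_kwh

print(f"Electricity saved:     {kwh_saved:,} kWh per year")
print(f"CO2 avoided:           {co2_saved:,} tons per year")
print(f"Utility spend avoided: ${dollars_saved:,.0f} per year")
```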

Besides the obvious impact on corporate pocketbooks, the other consideration here is a global responsibility to take better care of the environment. It’s hard to imagine that the hardware and software inside a typical corporate datacenter would be considered a serious source of pollution. But, given the drastic reduction in emissions and the kilowatt hours of electricity saved through virtualization, the environment is just as big a winner as the bottom line.

Number 1: Virtualization is already becoming the standard for modern infrastructures

And here we have arrived at the number one reason for system administrators to become more familiar with virtualization: it simply has become the standard for how we build and manage modern datacenters. It almost seems a little anticlimactic, but as systems designers and administrators, we have undergone a complete paradigm shift in how we define modern computing. As a benefit of several decades of datacenter operations, we have learned some hard lessons on many fronts: power, efficiency, scalability, availability, and environmental responsibility.

All of these lessons learned have brought us to an entirely new way of computing, and that is to take advantage of virtualization and do much, much more with less. Through virtualization, we are able to drastically reduce the size of our datacenters while increasing the capabilities of our servers at the same time. All the while, we’ve not had to relearn what it means to manage servers. The virtual servers are managed exactly the same way as we used to manage physical servers. As our society becomes more and more mobile and tech savvy, the bar of expectation in our corporate networks becomes higher and higher as well.

Gone are the days when we were OK with the fact that it would take 30 minutes to load an application or all afternoon to execute a program. Today, even very minor interruptions are noticed and cause complaints. The pressure to guarantee consistent uptime, driven mainly by the strict requirements of modern advancements in IP telephony and video, has resulted in a solution that, for the first time ever, can actually provide 100% uptime for critical servers. Yes, virtualization IS the modern standard for serious computing and datacenter management.

Am I saying that every server can be virtualized? Of course not…there will always be at least a couple that are better off left as physical machines. But I can say with an extremely high level of confidence that almost all servers can be virtualized, and as time marches on, we will very soon hit the point where every server exists as a virtual machine.

Summary

So there we have it: the top 10 things that system administrators need to understand and embrace about virtualization.

As I mentioned before, you will likely have a couple of entries you would have ranked or listed differently than we did here, and that’s perfectly OK. The honest truth is there are many great things about virtualization, and we all see those benefits through slightly different lenses. Nonetheless, I know we can all agree that virtualization technology has transformed the very landscape of datacenter design and operation.

References
1. Virtualization and Automation Drive Dynamic Data Centers, http://www.ca.com/files/technologybriefs/dca-manager-tech-brief-us.pdf

2. VMware’s “Green” Virtualization, http://www.pcworld.com/businesscenter/article/145169/vmwares_green_virtualization.html
3. VMware Install, Configure, Manage (ICM) v4.1 authorized curriculum

Learn More

Learn more about how you can improve productivity, enhance efficiency, and sharpen your competitive edge. Check out the following Global Knowledge courses:
VMware vSphere: Fast Track [V4.1]
VMware vSphere: Install, Configure, Manage [V4.1]
VMware View: Desktop Fast Track [V4.5]
VMware View: Install, Configure, Manage [V4.5]

For more information or to register, visit www.globalknowledge.com or call 1-800-COURSES to speak with a sales representative. Our courses and enhanced, hands-on labs and exercises offer practical skills and tips that you can immediately put to use. Our expert instructors draw upon their experiences to help you understand key concepts and how to apply them to your specific work situation. Choose from our more than 1,200 courses, delivered through Classrooms, e-Learning, and On-site sessions, to meet your IT and business training needs.

About the Author

Jeffrey Hall is an Independent Consultant and Instructor.

Jeffrey has more than 15 years of experience designing and administering Security, Unified Communications, Datacenter, and Virtualization solutions for such organizations as the U.S. Army, SBC, AT&T, and Genesis Networks. Additionally, Jeffrey holds the following certifications: VCI, VCP, CCSI, CCNP Security, CCNP Voice, Data Center Support Specialist, CCIP, CCDP, and CCNP. He lives in the Memphis, TN area with his wife, two daughters, and grandson.

Copyright ©2011 Global Knowledge Training LLC. All rights reserved.