A Point of View on Virtualization

There is a difference between 'Virtual' and 'Virtualization'; we often mix the two up, with rather confusing results.

What is Virtual to You?

Virtual is the creation of effective illusions. Virtual LANs are illusory LANs, giving the semblance of network privacy on shared segments. Virtual Reality is simply 'fake real life.' Telepresence is a form of Virtual Reality -- when we espouse 'being here is being there' and enable global communications with a level of interaction and experience that mirrors in-person semantics, we are using an illusion, albeit a digital one. There are Virtual Private Networks - not a fake network, but again the illusion of privacy and segmentation while using shared resources. There are Virtual Applications, where I have an application running on my mobile device, then it pops up on my desk as I come to work, and onto my stadium seat as I grab a beer and a dog. All of these are examples of applying the modifier 'Virtual' to a noun that is generally assumed to be well understood: Networks, LANs, Reality, etc.

For the sake of this discussion, and the general syncing of taxonomy, let's separate the adjective 'Virtual' from the use of the noun 'Virtualization'. The noun 'Virtualization' expresses, in the mind of the general IT populace, something quite different from Telepresence, VLANs, Storage Proxies, and NFS, all of which at some point have claimed to be 'Virtualization'. Let's treat them as implementations of the adjective/modifier 'Virtual', and some of them as early innovations that helped Virtualization become more efficient, or be adopted more quickly. This is not an attempt to denigrate their value or importance, but rather to find clarity and alignment with the broader IT audience, who feel that the term Virtualization has a more singular definition and purpose. Virtualization, to most IT audiences, is all about the Virtual Server or Virtual Machine (used somewhat interchangeably, but we will stick with the Virtual Machine to avoid overlapping definitions). The reason for the singularity of the term Virtualization is that it is about a system and an architecture, derived from the impact of the VM on IT architectures. Virtualization is an architecture. In short, what we used to have to do physically we can now do in software - logically.

Virtualization is about the fundamental shift from hardware definitions of IT systems to the use of software definitions of the self-same architectures.

Virtualization enables the creation of software architectural definitions by putting well-understood wrappers or abstractions between components with a high degree of change, variability, and complexity. Virtualization was fundamentally created when the hypervisor was inserted between the server hardware and the operating system. This abstracted the software identity of workloads and applications from the underlying hardware. It enabled this new 'Virtual Machine', a logical container for an operating system and its applications, to no longer be inextricably bound to a specific piece of server hardware. This sea change in the definition of workload has had three main, immediately visible impacts:

  1. Rather than binding one O/S and application to each server, you can now run multiple VMs on each server. This is usually one of the initial implementations of virtualization.
  2. The Virtual Machine can move. It can move from one physical server to another. If you have enough network capacity between the two servers in question, you can move the workload statefully, without dropping client connections and without changing the addresses in use for global communications (a minimal sketch of such a live move follows this list).
  3. Homogenization of the x86 server. It used to matter a great deal whose server was bought. The vendor chosen at the point of application development became, in most cases, a permanent decision. Because the operating system included the hardware abstraction layer, every device driver had to be chosen carefully, and the entire package was then qualified as a single, monolithic entity along with the application package. In some cases the hardware addresses associated with the server were even used to license the applications. The advent of virtualization meant that these could be much more effectively decoupled - removing the vendor lock-in often enjoyed over application life-cycles.
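
To make the second point concrete, here is a minimal sketch of a stateful live move using the open-source libvirt Python bindings - an illustration of the general technique, not the specific hypervisor tooling referenced in this article; the host and VM names are hypothetical:

    import libvirt

    # Connect to the source and destination hypervisors (hypothetical hosts).
    src = libvirt.open("qemu+ssh://rack1-host07/system")
    dst = libvirt.open("qemu+ssh://rack9-host02/system")

    # Look up the running workload by name (hypothetical VM name).
    dom = src.lookupByName("web-frontend-01")

    # Live-migrate: memory pages are copied while the guest keeps running, so it
    # keeps its MAC/IP addresses, open client connections, and software identity.
    # (Assumes shared storage is reachable from both hosts.)
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST,
                None, None, 0)

    print("running on destination:", bool(dst.lookupByName("web-frontend-01").isActive()))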

The Growth of Virtualization, Breaking your Network

Compounding the value of this abstraction and mobility is the increase in network capacity and processor performance. Gordon Moore observed that transistor density was doubling every two years - an observation later branded as Moore's Law. For a period of time, as transistor density went up, engineers were able to lower the voltages and increase the clock speed of the processor. More transistors meant more memory, more complex instruction sets, and shorter 'distances' between them, so things got done more quickly and the chips could run faster and faster. But at some point in the gigahertz (billion clock cycles per second) range the processor vendors ran head-on into the speed of light. In short, the clock was cycling faster than the ability to signal the output of one instruction from one side of a chip to another. This meant chips took a bit longer to design and make, as companies started 'hand placing' the transistors to ensure the most common outputs were physically close to the next part of the silicon that needed that data. It also meant that we were about at the end of the line on getting faster and faster single-processor chips.

So while we hit the wall on making faster chips, we could still put more transistors on them - that is when we started seeing 'Dual Core' and now 'Quad Core' chips: essentially two, and then four, processors packaged into one physical die and placed into a single socket. Most commercial-grade servers are designed with at least two sockets, a direct data path from one socket to the other, and dedicated memory for each socket.

Every action has an equal and opposite reaction, right? Adding processors to a socket sounds all well and good, but the operating systems and most of the applications were not designed to take advantage of more than one processor. In fact, only a very small subset of all the code developed for the x86 space was designed to be 'parallel' or 'multi-threaded', meaning it could leverage more than one processor concurrently.

The net-net: we have these really fast processors, and most apps can't use more than a quarter of one of them at best - one core of a quad-core chip. Put two sockets together, and you are down to an eighth of the server. Being able to put multiple Virtual Machines on each physical server means you use a greater percentage of the available CPU resources with every VM you put on the server.
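
A back-of-the-envelope sketch of that arithmetic, assuming a single-threaded application and illustrative core counts (quad-core, dual-socket):

    # Illustrative utilization math for a dual-socket, quad-core server.
    cores_per_socket = 4
    sockets = 2
    total_cores = cores_per_socket * sockets            # 8 cores

    # One single-threaded app can use at most one core.
    print(f"One app: ~{1 / total_cores:.0%} of the server")    # the '1/8th' above

    # Stacking single-threaded VMs reclaims the idle cores.
    for vms in (2, 4, 8):
        used = min(vms, total_cores)
        print(f"{vms} VMs: ~{used / total_cores:.0%} utilization")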

Let me give a few examples from our own recent experiences as one of the larger implementers of virtualization technologies:

Recently Cisco acquired WebEx, and with this acquisition gained a new data center in Mountain View, CA. This was a wonderful fringe benefit to our IT department from our entrance into the Software-as-a-Service space, because our data centers on our main San Jose campus were about out of capacity. We had built a new data center in Richardson, Texas, but have been finding to our dismay that application migration of production systems is the 'long pole in the tent.' So having a local facility that was not at max-Q, which we could use to handle some near-term capacity requirements, was a very good thing.

Over the past few weeks our IT department executed a pretty amazing feat: we moved over 150 servers from the production data centers in San Jose to the new WebEx facility 10 miles up the street. While this would traditionally have been a six- to twelve-month project on production servers, with the advent of virtualization it took 8 hours on a sunny Saturday afternoon.

Traditionally we would have had to hire movers, plan long outages, deal with transit-related equipment failures, re-address applications when they were received on the far side, and cross our fingers hoping nothing was too hard-coded into the core applications.

We moved 150+ Virtual Machines in eight hours. This, by the way, is the bandwidth equivalent of every Telepresence system on the Cisco campus communicating non-stop for over 8 hours - about 354 high-definition, real-time video transmissions - being moved from one data center to another. We moved these applications while preserving their addressing, naming, identity, policy, security, and all that intangible, hard-to-see stuff that is the glue between the IT components but is necessary to IT operations. What had been a year-long project now took only eight hours.

The other example I want to bring up is our own opportunity to embrace Virtualization, and to thrive from its implementation while delivering increased customer value, by simply embracing virtualization for our own appliances where it makes sense.

We sell lots of servers; Cisco calls them appliances. They are an integration of x86 hardware, usually generic with no special hardware installed, with Cisco software - sometimes our own O/S, sometimes a hardened open-source or commercial O/S offering. This model of delivering network services has fundamentally worked for our business for the last decade, starting around 1996 with the introduction of the PIX Firewall and the LocalDirector server load balancer. What we gained from this model was quality and engineering predictability. We could control the firmware versions, the memory speeds and types, the network interface card, even the MAC address. In short, we used appliances to reduce the number of variables and to avoid having to build our own O/S with broad support for every variation of hardware - the challenging combinatorial problem we have all seen with driver support.

Cisco ships thousands of servers annually. The question I pose, to us, and to our customers is this: can they be virtual machines? Can we run a Voice-over-IP Call Manager in a Virtual Machine? What about a Network Analysis Module? Netflow Collector? Wireless Controller? WAN Optimization?

Why not? If we can ensure the right resources are available, and we can write our own code to work well with the VM, who is to say it cannot be done?

Cisco gets to reduce the number of servers it buys and sells, packages and ships. We shift the key value from the package to the software itself, and free up engineering resources to focus on delivering that software value.

Customers in turn reduce support costs, operational complexity and the number of x86 system varieties they support. They reduce their power draw, carbon emissions, space consumed, and cooling needed to run our applications. In the end, a win-win for all.

What prevents this? Nick Carr, writing about the legendary inventor Thomas Edison's religious attitude towards small, massively distributed power plants and DC power distribution, observed: "The system that Edison had imagined, and then brought to life, came to be his imagination's cage."

Virtualization's Role in IT Architectures

Today's IT architectures remind me of the Lego block toys of my youth. I would sit for hours and days combing through a footlocker of parts to find the exact piece I needed so I could build what my mind imagined.

I never worked off a plan or architecture - I just sat, dreamed, built, and played. My friend Scott lived across the street from me and we would play a game of 'Crash Up Derby'. We would each build a car, then sit on opposite ends of his very smooth front porch, wind up, and push our cars forward into each other's. My initial incarnations were Lego-esque versions of the chariots in Ben Hur, bristling with spikes and rods and designed to somehow Lego the offending wheeled battering ram into oblivion. After some initial experiences and learning it was time for the second design. On Mark II I designed a slick delta-winged wedge that sat millimeters off the ground. Scott's car would hit it, fly up in the air, and land, shattering into many pieces. Subsequent refinements resulted in a few bruises, cuts, and our collective grounding as we each found out exactly which buttons could be pushed, but that's a different story that is much more like 'Stand By Me' than 'Cisco's Virtualization Story'.

Why wax eloquent on the topic of Lego bricks? Today you buy servers from one company, operating systems from another, applications from a third, networks from a fourth, SANs from maybe a fifth, storage from a sixth, and management from lucky number seven- you get the picture.

IT infrastructure architectures are often predicated on the proverbial 'application thrown over the fence', with IT being told to stand up the infrastructure in weeks. This is a lot like me sitting down, looking into my war chest of Lego bricks, and dreaming up something to build with what I know I have on hand. Most often, there is no plan.

Think of the dots on the top of a Lego brick like a protocol. They enable the interconnection of the different pieces, they interlock, and if done well they hold together acceptably. However, my friend Scott found that the laws of Newton applied to his derby car quite well. When it rode up on the top of my wedge-shaped racer we discovered flight - a parabolic arc two to three feet in height. While Scott's car flew very well, it did not land nearly as adroitly, impacting in a way that caused all of those pieces to break their interconnections with each other and fall apart. Stress an IT implementation built without a solid foundation and see what happens - it looks similar, though at least Scott and I never read about ours in a blog back then.

The last lesson we learned from this journey back to fifth grade is the part I do not like to admit too often - I lost. Scott found the axle rods. These were solid pieces of plastic shaped like a '+' that went through the pieces with holes in them to allow the wheels to spin. The axles were strong. Scott found a way to build, properly, a Lego crash-up derby racer that had no parts to break off and was impervious to my wedge. This stressed our organization a bit, causing some conflict between the two of us - I don't remember much of the rest, but I think we started talking again a day or two later.

Scott evolved his architecture. He took a new piece and extended its role: from an axle that made a wheel turn to the foundation of his Lego car - from the part that provided mobility to the part that provided integrity. Scott found, at age 11, that a good architecture didn't need hundreds of different parts - his was simple, elegant, and successful. Scott FTW.

Virtualization, when coupled to the infrastructure and applications in a sturdy manner, with real integration, can be simple, elegant, and successful. But it also must be implemented with a purpose, with a plan, with an architecture.

What is Cisco doing with and for Virtualization and the Customer Journey

Cisco's virtualization story? The network is now integrated with the virtualization platform. Cisco has moved from the network being an inhibitor to the elegant deployment of virtualization, to making our networks embrace and enable virtualization, to making our networks accelerate virtualization. Are we done? Never. But at least now we have the protocols, the linkages, the connecting bits on top of the Lego bricks, and the smoothly operating axles that provide the foundation that strengthens everything around it.

In virtualization we are not the whole, but we enable the whole to be greater than the sum of its parts and we make everything work better together - that's what networks inherently do.

An architecture that embraces virtualization, with a foundation that accelerates and enables virtualization, is different from what is often deployed today. It's not that what we have been deploying and building is wrong; it's just that the variables have shifted and it is time to look at our architecture differently.

Competitive businesses need to be strong, and nimble. We need to be operationally efficient, and innovative. We need to increase business agility, and corporate productivity.

Virtualization lets us extend the life-cycle of our capital assets, reduce operating costs, altruistically run cleaner and greener, and put more people on automating the business and fewer on manually implementing IT. We move from months to minutes, and from the physical to the logical.

The journey our customers take is not the same as our roadmap - not in the same instance, not in the same order. This was an important learning point we grasped in the last few months of evolving our virtualization vision. In some ways, while we want it to be 'all about the network', we have to realize that 'it's all about the system', and the network is an important part of that system.

As Thomas Edison observed later in his life, writing about the invention of his electrical system: "It was not only necessary that the lamps should give light and the dynamos generate current, but that the lamps must be adapted to the current of the dynamos and the dynamos must be constructed to give the character of current required by the lamps and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine."

Standardize your Infrastructure

The first step then in embracing virtualization must be the standardization of the underlying infrastructure. Consolidate servers, reduce the number of disparate platforms, ease IT's support burden. Bring servers in from branches- use them more efficiently. Centralize storage, manage information and data consistently. Make the investments necessary to allow IT to scale its people resources. Build the infrastructure that will allow IT to implement virtualization successfully in labs and development, then scale it to production.

Production Virtualization

Production implementation of virtualization is where we really start extending the life-cycle of our capital assets. Our data centers themselves are the system. The system view is the building, the cooling, the servers, the storage, the electrical, the flooring, the chillers, the cabling, the racks, the network, the tape archival, the UPS, and the power distribution. The data center is the whole, the sum of the parts. This system is being broken every five years or so because of the same law that increases processor performance -- Moore's Law. If you double the transistor density and do not lower the voltage, you increase the power draw and the heat generation of each processor. We design the DC system to last, but the facilities infrastructure is expected to last fifteen years while the IT assets are only expected to last three to five. The constant refresh of server assets to newer, more powerful processors that draw more power, although delivering more capability and capacity, strains the DC system so that the data center as a whole is obsoleted about every five years. This means a mission-critical facility that costs well over $100 Million is obsoleted about 30% of the way through its depreciation schedule.
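
Two pieces of quick arithmetic behind that claim, as a minimal sketch - the power relation is the standard dynamic-power proportionality (P is roughly proportional to C x V^2 x f), and the lifetimes are the rough figures cited above:

    # Dynamic CPU power scales roughly with capacitance * voltage^2 * frequency.
    # Doubling transistor count (capacitance) at constant voltage and clock
    # roughly doubles the power draw and heat the facility must handle.
    def relative_power(cap, volts, freq):
        return cap * volts ** 2 * freq

    print(relative_power(2.0, 1.0, 1.0) / relative_power(1.0, 1.0, 1.0))   # 2.0x

    # Facility economics from the text: a ~15-year facility overtaken by
    # ~5-year IT refresh cycles is obsoleted about a third of the way
    # through its depreciation schedule.
    facility_life_years, refresh_cycle_years = 15, 5
    print(f"{refresh_cycle_years / facility_life_years:.0%} of depreciation consumed")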

In the move to production virtualization VM usage tends to grow dramatically, and often new servers are brought in with more memory to support increased VM densities. Be certain to optimize each technology area to support Virtualization, not just the resident platforms. In this phase of deployment we look to consolidating the SANs, enabling every server to boot any image. We homogenize the I/O to and from the server just as we homogenize the server to a smaller number of x86 service platforms. We virtualize our core network services like security and load balancing to enable all aspects of workload provisioning to be done in software, not just VM cloning.
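
As a loose illustration of 'provisioning done in software', here is a hypothetical sketch in which the workload definition carries its compute, boot-from-SAN, and network-service requirements as data; the profile fields and placement logic are invented for illustration and are not a Cisco or hypervisor API:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ServiceProfile:
        """Software definition of a workload: compute, storage, and network services."""
        name: str
        vcpus: int
        memory_gb: int
        boot_lun: str                    # boot-from-SAN image, so any server can host it
        vlan: int
        firewall_policy: str
        load_balancer_pool: Optional[str] = None

    @dataclass
    class Host:
        name: str
        free_vcpus: int
        free_memory_gb: int

    def place(profile: ServiceProfile, hosts: List[Host]) -> Host:
        """Pick the first host with enough headroom for the profile."""
        for h in hosts:
            if h.free_vcpus >= profile.vcpus and h.free_memory_gb >= profile.memory_gb:
                return h
        raise RuntimeError(f"no host has capacity for {profile.name}")

    web = ServiceProfile("web-01", vcpus=2, memory_gb=8, boot_lun="lun-0042",
                         vlan=110, firewall_policy="dmz-web", load_balancer_pool="web-pool")
    print(place(web, [Host("esx-a", 1, 4), Host("esx-b", 8, 64)]).name)    # esx-b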

Taking a systems and architectural approach to the deployment of virtualization, and to the data center as a whole, allows the data center to last longer by using the most efficient infrastructure necessary to support the workload requirements within the VMs. It also means that workload may need to be rebalanced across the supporting infrastructure to improve performance.

Dynamic Virtualization: VMs Start Moving

This is when VMs start moving and Virtualization becomes Dynamic. It has caused the biggest change in network architectures in a decade - almost a regression to the world of the large, flat network where address portability is enabled. The view of the business is that the ability to balance workload across multiple machines and move entire applications from one set of devices to another is so valuable that the network architectures must change. The view of the business is correct. Network architectures must change. They have to embrace this move, this newfound workload portability across the racks and rows, even across data centers.

This Dynamic Virtualization and concept of workload portability does change much of what we all have known and applied about networking- but it also changes operational tasks we have all suffered through - change control for one. There are some customers who require over 140 signatures to upgrade the software on a switch, because every app owner must sign off before any shared component is taken out of service. It's a lot easier when all the workload has been moved off the shared component prior to the device ever being taken out of service.

To embrace and accelerate this we have been pioneering new technologies to enable larger flat networks, while maintaining the concepts of scalability, the protocols for reliability and failover, and the common operating models we are used to. We created storage systems that let the workload move without requiring all of the data to directly follow that move. We accept and embrace this move by embedding our own network operating systems into the VM so the network identity and profile is consistently observed as the workload moves.

Once workload is portable the question then simply becomes, how far is it portable? Can I move it from one server to another? Certainly. How about across data centers? Harder. How about across companies? Impossible... today.

Virtualization, Utility, and Clouds: A Summation

Virtualization, as an abstraction, divorces the two halves of IT- Infrastructure and Application. Infrastructure can standardize while Applications can become increasingly feature-rich, unbound from infrastructure dependency. Application teams can focus on improving productivity of line-of-business operations without re-qualifying their applications every time the infrastructure teams need to upgrade performance capabilities or drive a change designed to improve operational efficiency. With abstraction there is an increased symbiosis between these two halves of IT.

While virtualization enables the application teams to focus on improving IT's automation of business process, it also enables the infrastructure teams to drive operational efficiencies much more rapidly than in an organization with fixed system dependencies. This abstraction also means that the applications and their resident operating systems can move, even across organizational boundaries when operating as part of the right system.

When the electrical utility was created in the United States, it was initially envisioned as small, direct-current-powered, private plants deployed for the purpose of optimizing the industrialization that was sweeping all forms of production.

This is strongly similar to the IT architectures of today - deployed by a company for its own purposes, bespoke-tailored to perfectly fit that company's operating requirements: off-the-shelf components, assembled on-site to fit exacting specifications. The creation of a large-scale utility takes advantage of economies of scale and economical transport to provide service to a large and distributed population. The further you can efficiently transport a utility, the more centralized and efficient the aggregate production and monetization of that utility can be.

The network provides the most efficient transport possible for IT services to be delivered, in many cases from anywhere in the world. To optimize further, 'anywhere in the world' is often taken to mean anywhere that has cost-effective labor, power, tax structures, and information-privacy legislation, along with a stable government. Network globalization has created tremendous freedom of choice in where services are delivered from; the next evolution is to enable freedom of choice in who is providing these services.

The system we envision enables workloads to be portable across companies, preserving all state necessary for clients to stay connected and for all aspects of the service to continue uninterrupted - security, load balancing, IP addressing, accounting, QoS policy, storage, encryption, VPNs, all will statefully move with their respective workload - across organizational boundaries. There is a big caveat, though: this works if, and only if, the receiving organization has the capacity to receive the workload and the capability to meet the customer's defined service levels.

This ability to peer workload between organizations is similar to today's ability to peer addressing information and network reachability state between organizations - the root technology that enabled today's Internet. Workload peering will enable enterprises to seamlessly hand off virtualized applications to providers, choosing between them based on who can meet their service levels, capacity requirements, and operating requirements. The providers will capture new customers faster, provision them more quickly, and generate revenue months sooner than today's hosting architectures allow.

Workload will flow from business to provider over networks - the most efficient utility transport medium ever invented, with the broadest reach and the richest service: the 'Ubertility.' The movement of just one workload from one organization to another is an order of magnitude more data movement than a high-definition video stream. The movement of entire application suites, whether for business continuance, capacity on demand, or linear growth economics, will lead to the largest increase in network capacity we have seen in a decade. What can I say, I liked that part of the nineties.
