Monday, September 29, 2008

Virtual Boy - Part 1

Definition of Virtualization in Enterprise Computing

At this time there is no coherent, industry-wide definition of virtualization, nor any accepted method of tracking which environments, applications, and infrastructure are virtualized.

Because there are many technologies, solutions, and even fundamentally different approaches to virtualization (some of which will be reviewed in later documents), one cannot simply identify a product and declare any application using it to be “virtualized”. A higher-level definition is necessary.

To this end and for purposes of this document (and future documents in this series), enterprise virtualization shall be defined as follows:

Virtualization is the provisioning of computing resources for applications or services in a manner abstracted from the hardware that will provide those computing resources.

Typically, multiple virtualized applications may be provisioned on a single hardware instance (server, appliance, mainframe, etc.), or on a cluster or plex of multiple hardware instances behaving as a single instance.

Conversely, the computing resources for a single virtualized application may be spread across multiple aggregated hardware instances, or across a cluster or plex of hardware instances.

Either class of provisioning would be considered virtualized.

Computing resources for an application may be mapped to dedicated hardware; however, through the abstraction of virtualization, it should ideally be possible to change this hardware without rebuilding or reconfiguring the virtualized instance (though some downtime may occur).

THE CHALLENGES OF THE “TRADITIONAL” APPLICATION PROVISIONING MODEL

The traditional model of application provisioning involves defining a maximum resource requirement and projected growth (typically over three to four years) for an application, then provisioning a dedicated server or servers supplying up to 100% more computing resources than that projection (depending on the application tier). Each server requires its own power, cooling, floor space, storage, and supporting infrastructure.
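To make the sizing arithmetic above concrete, here is a minimal sketch of how traditional capacity planning compounds projected growth and headroom. The function name and all input figures are hypothetical illustrations, not data from any real deployment:

```python
# Illustrative sketch of traditional capacity sizing (all figures hypothetical).
def traditional_provisioned_capacity(current_peak, annual_growth, years=4, headroom=1.0):
    """Capacity purchased up front: projected peak after `years` of growth,
    plus a headroom multiplier (1.0 = 100% extra capacity, per the tier)."""
    projected_peak = current_peak * (1 + annual_growth) ** years
    return projected_peak * (1 + headroom)

# An application peaking at 8 CPU cores today, growing 20% per year,
# sized for 4 years with 100% headroom:
capacity = traditional_provisioned_capacity(8, 0.20, years=4, headroom=1.0)
print(round(capacity, 1))  # cores provisioned on day one
```

Under these assumed inputs, roughly four times the current peak is bought and powered from day one, which is exactly the idle capacity the following paragraphs describe.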

Importantly, in this model the end user pays full price for their projected needs, and must pay for a substantial portion (if not all) of the infrastructure costs up front. If the user's business needs change, they must either go back and re-architect and re-implement the solution, or be left paying for excess capacity they are not utilizing.

Additionally, in the traditional provisioning model the enterprise must provision the full computing capacity, floor space, power and cooling, and storage for all projected growth of the application, without regard to actual utilization.

As a result, across the enterprise IT world, average CPU utilization is on the order of 6% to 8%, average memory utilization 18% to 40%, and allocated storage utilization 14% to 40% (these are broad ranges, because multiple conflicting data sources exist, depending on the organization, infrastructure, and measurement method involved).

Compare these estimates to “best practice” utilization goals of 40% to 60% average CPU and memory utilization, and 60% to 80% allocated storage utilization.
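The gap between those two sets of numbers implies a consolidation opportunity, which can be sketched with simple arithmetic. The ranges come from the text; picking ~7% actual CPU utilization and a 50% target as midpoints is my own assumption for illustration:

```python
# Rough consolidation arithmetic (midpoint figures are illustrative assumptions).
def consolidation_ratio(actual_util, target_util):
    """How many similarly loaded hosts one host could absorb if average
    utilization were raised from `actual_util` to `target_util`."""
    return target_util / actual_util

# ~7% actual CPU utilization against a 50% best-practice target:
print(round(consolidation_ratio(0.07, 0.50), 1))  # ≈ 7 hosts' work per host
```

Even this crude ratio suggests an order-of-magnitude reduction in server count, which is where the cost figures in the next paragraphs come from.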

This is harmful both to the end user and to the enterprise as a whole, because underutilized infrastructure represents a large fixed cost as well as a significant allocation of limited facilities resources.

Taken over an entire large enterprise, this inefficiency in resource allocation, implementation, and utilization can add up to hundreds of millions of dollars in wasted time and wasted capacity. Across the entire world of enterprise IT, the total could be hundreds of billions.

Additionally, the significant up-front costs and the three-to-four-year commitment of resources required to implement any solution create an environment hostile to the development of new and innovative solutions. It is extremely difficult to create, develop, and test new technologies (which may, or may not, present viable business solutions when fully developed) if even basic, small-scale experimentation requires a significant dedication of resources.

Finally, provisioning physical infrastructure requires a significant expenditure of time and effort across many groups within an enterprise. Workflow analysis across large enterprise IT shows that up to 160 individuals may be directly involved in provisioning a single hardware instance, and that provisioning may take up to 12 weeks from a project's inception.

THE VIRTUAL COMPUTING MODEL

Virtual computing aims to address the issues raised above by provisioning computing resources for applications in a manner independent of the hardware on which the application will run.

Virtualizing an application allows an end user to specify only the resources they need, and allows the enterprise to allocate only what is required from pools of available computing resources. This can be managed centrally to ensure adequate capacity, performance, availability, and quality of service, while improving average utilization of individual hardware instances from as little as 4% to as much as 60% (average utilization can be pushed above 60%, but doing so runs against capacity-planning best practices).

Additionally, if an application's resource needs shrink or grow, virtualization gives the end user the flexibility to request additional computing resources, or to reduce their computing resources (and thus their cost), without rebuilding and re-provisioning the application.

Critically, this capacity can be provisioned at little incremental cost, with minimal effort and personnel involvement, and far more rapidly (a matter of hours or days) than physical infrastructure.

Presuming efficiency and effective management are maintained in the virtual environment, these efficiencies of process and materiel can reverse the potentially hundreds of billions of dollars wasted under the traditional application provisioning model.

In future posts in this series, I will discuss virtualization technologies and methodologies.