Friday, March 28, 2008

What I actually get paid to write

Earlier this week I wrote a bit about my writing process, and how it differs between my recreational writing (primarily here on this blog), and my professional writing; that is, the writing I get paid for.

At one time, to supplement my normal income, I was a freelance writer, with pieces in several small-circulation national magazines, some good-sized web sites, etc... I've also co-authored several technical books, textbooks, study guides, and the like, and written a number of papers for research groups, think tanks, consultancies, etc...

I stopped doing that after the dot-bomb crash, basically because it wasn't worthwhile to me to continue trying to get paid for it. It wasn't that people weren't willing to pay; it's just that what they were willing to pay wasn't what I was willing to work for.

Of course I haven't stopped writing. For a long time I wrote long form postings in various Usenet communities, and on some mailing lists. Then three years ago, I started writing this blog.

Funny thing though; as much as I write online, the vast majority of my writing isn't available anywhere. If you think I write a lot here, it's NOTHING compared to how much I write for my actual job.

I have mentioned many times before that I am a small business owner. I operate a contract consultancy with two practices. The first practice covers physical security and principal protection, electronic security, and security and defensive training. My other practice is in information systems architecture. On that side, I work with policy and process, information security, high availability, disaster recovery, business continuance, large scale and high performance computing, and large scale and high performance storage.

At one point I was paid mostly to DO things; and my physical presence (and the application thereof) was my primary work product.

Then I got hurt, a lot; and I got fat, a lot.

These days, mostly, I get paid to think; and the tangible work product of that thought ends up as various written materials. Reports, analysis papers, white papers, position papers, statements of work, technical manuals, policy documents, audit reports... a nearly endless stream of different types of documents.

At my current contract, I act as the chief architect, for one of the major divisions, of one of the largest banks in the world. My job is essentially to keep up with all the technologies we operate in the bank, as well as all the emerging technologies that we might be able to use; keep up with the regulatory, audit, compliance, process, and practice requirements for the bank; and then figure out how to effectively use those technologies to meet our business needs, and fulfill our administrative requirements.

Most critically, my job is to figure out how to fail as little as possible; and how to fail and recover gracefully when we do (and we do. Every system fails sometimes).

I take that knowledge and experience, and I consult with the end users to help them understand what solutions may be available to them, help them shape their technical requirements to meet their business requirements, and help them initiate and execute their projects.

I then analyze every project the division undertakes against all those various criteria above, I write up my analyses, and I approve, disapprove, or make changes to projects as is appropriate.

I also act as the representative of the entire enterprise (from an architectural perspective) to my end users; and conversely, I represent my end users' needs to the entire enterprise, to ensure those needs are met in any standard we promulgate.

Finally, I also manage the project teams, lower level architects and engineers, and other associated staff for each project. I have no direct reports, but I do provide technical leadership, mentoring, training, and task management to all the associated personnel.

All of this is done through the vehicle of professional writing.

I read others' professional writing, and hear their presentations; I write my own documents and present them. Decisions concerning millions, and even tens of millions, of dollars are made every week, based on this professional writing.

I would say that on average I have about 20-25 hours of meetings a week, and pretty much all the rest of my time is spent either reading someone's business writing, or writing my own.

Thing is, it's both desperately boring, and quite challenging at the same time. Each type of document is very different. Each audience is different. Each document has a different goal or purpose; and you have to be able to craft your writing to that audience, and that purpose.

The language I write in isn't English, or even "technology"; it's "business", and it's constantly changing. The same set of words might have completely different meanings (and frequently very different political connotations) to two different clients, or even to two different groups within the same client.

So... what it all comes down to is, I wouldn't want to submit my business writing to any magazines; but it is VERY effective at what it is designed to do; and that is to convince people to do things the way I think they should be done.

Which is really what makes it worth it; because if I write a great paper, and all the supporting documentation, and hold the meetings to support it, and we do the right thing; it feels great. I've done my job well, and we're on the right course. If I do all that, and still we do something stupid (and it happens a lot)... well, it's disappointing at the least.

So, I'm going to share with you an example of what I'm talking about.

Below this introductory section is a document I wrote a few weeks ago for work. I'm not going to directly preface it much here, because the paper should tell you what it's about, and why.

What I will share is who the audience was for it, and what the scope of the doc was. This paper went to the chief information officer (CIO), the senior VP for my division, the CIO of the entire bank, the CTO (chief technical officer) of the bank, and the COO (chief operating officer) of the bank.

Important to the scope of the document, all of the members of my audience are in technical management, and all of them except the COO are former senior engineers themselves. The idea here was to scope this paper somewhere in between a basic executive summary (which should be kept to three pages or less), and a real full white paper or architectural analysis (which would easily have run 30 pages or more). It was intended to hit the high points of business and technical merit, and plant a seed and an outline to go deeper with. Essentially, it's a pitch paper to START the process of making a major change in the way we do absolutely everything in information management.

That by the way is damn near the hardest thing in the world to write. You have to be technical, but not too technical. You have to reference business, but not go too deep into it. You have to constantly consider your audience, and adjust your language, and level of detail. In fact in this case it was doubly so, because this is an extremely politically sensitive topic within the organization, as well as an enormously expensive one. It was absolutely critical that I come down on the tech side of things with my explanations, but not "too far"; while providing a strong suggestion of the business advantages, without stepping on any political landmines (especially as regards staffing levels).

Honestly, as pure "writing", it is, objectively, utter shite. It's repetitive, and redundant (ok, bad joke); it uses atrocious sentence structure; it has little coherent narrative structure; it has far too much detail in some sections and not nearly enough in others.

Seriously, it's crap.

The paper violates every principle of good writing, except one: It communicated the intended message, and the importance of that message, with absolute effectiveness. In fact, it drilled the central concept into the readers' brains, front and center, making it impossible for them to get it out again.

The people who read this paper are going to be murmuring the two words of the subject in their sleep for weeks.

I'm going to assume most of my readers are fairly technically literate, and those that aren't have already stopped reading this post anyway (if not, you're probably terribly bored with all this). I'm also going to assume most of you, excepting the IT managers and engineers out there, haven't heard of the subject of this paper; or have at best minimal exposure to it.

If I did my job, the IT managers in the audience should already be thinking about how to do it in their environment (or how their approach would differ etc...); and the rest of you should be able to come out of reading it with a basic understanding and definition of the subject, and of its advantages and disadvantages.

Of course, after reading it, you're also probably going to be sick of those two words; but they're going to stick with you, and when the context is right, they, and this paper, are going to pop into your head.

I like to run the "spouses and secretaries" test on these types of documents. If you can give the thing to your husband or wife (presuming they aren't also an IT manager/architect/engineer), or your secretary, and they can understand the subject, why it's important, and ask relevant questions; you've pretty much done your job.

All of that is the ultimate goal of business writing; and ultimately, that is how business writing must be evaluated...

Oh and I should note, I used the same thought process as I do with most of my other writing. I stuck the main idea up in some corner of my head, let it stew for a while, and when I was ready (the day before the deadline in this case), I just let it flow out onto the page.

In this case, I thought about it for about a week; then I wrote the whole thing in just over two hours, with another hour or so for revising.

Just as writing, I still think it's crap though; and would be embarrassed about it, if it wasn't EXACTLY what I set out to write.

... anyway, uhhhh... enjoy I s'pose (most formatting, and many specifics have been removed to preserve confidentiality):
Utility Computing in the Enterprise


This document is owned by the {edited for confidentiality} team; and has been created to provide an overview of Utility Computing, including advantages, challenges, and potential implementation targets and methods.


Updates and revisions to this document should be directed to {edited for confidentiality}.


Version History:
{edited for confidentiality}

Approval Management:
{edited for confidentiality}


At this time, {edited for confidentiality} is facing a growing problem managing IT infrastructure and facilities. {edited for confidentiality} has thousands of individual infrastructure components spread across multiple lines of business, facilities, and areas of responsibility. Alongside this organizational diversity exists an even more complex diversity of technologies, vendors, revisions, applications, and services.

This diversity of both organization and infrastructure has created challenges in managing and supporting systems, applications and services, managing product lifecycle, managing facilities, and managing and controlling expenditures.

As project complexity continues to increase, so too do the staffing levels required to implement and maintain this infrastructure, the implementation times, and the costs associated with all of the above.

It is generally understood that some means of reducing complexity, and of improving service delivery speed, cost, efficiency, and value to the end user, are critical to the continued viability of infrastructure operations within the bank.


In the world at large, utilities are organizations that provide a valuable service to their customers for a periodic fee, generally based on consumption or capacity (or a combination of both). Electricity, water, and cable, for example, are all utilities we are familiar with.

Utility computing seeks to establish this model both for specific applications and, as a larger goal, for generalized computing.

This model of paying for a service based on capacity and consumption was at one time common in the computing world. For decades, mainframe computing was based on just such a model (and in some ways, and in some organizations, it still is).

Additionally, there are still today some applications within the general world of information technology that are sold and administered in this manner, such as shared hosting services for web sites, web based email, internet based backup, distributed content management (Akamai and the like), and elements of content security (virus scanning, content filtering etc…).

So, this concept is neither unknown, nor unfamiliar in the information technology world. It is however currently uncommon in the domain of generalized computing infrastructure.


The traditional model of general computing infrastructure is a procedurally oriented model, with each procedure having its own associated complication, management, overhead, and timeline:

1. Identify a business need
2. Develop a software solution to address that need
3. Develop a hardware architecture to support the software for the current need
4. Estimate three to four year growth and account for additional infrastructure to support it
5. Allocate infrastructure
6. Implement the infrastructure and application
7. Pay for and manage each of these elements separately

Importantly, in this model the end user is paying full price for their projected needs, and must pay for a substantial portion (if not all) of the infrastructure costs up front. If the user's business needs change, they will either need to go back and re-architect and re-implement the solution, or be left with excess capacity that they are paying for but not utilizing.

As a result, across the entire enterprise we have average CPU utilizations on the order of 6% to 8%, average memory utilization of 18% to 40%, and allocated storage utilization of 14% to 40% (these numbers represent broad ranges because there are multiple conflicting data sources, depending on which organization, infrastructure, and measuring method are involved). Compare these estimates to “best practice” utilization goals of 40% to 60% average CPU and memory utilization, and 60% to 80% allocated storage utilization.

This is harmful both to the end user, and to the enterprise as a whole, because this unutilized infrastructure presents a large fixed cost, as well as a significant allocation of limited facilities resources.

Taken over an entire enterprise, this inefficiency in resource allocation, implementation, and utilization adds up to hundreds of millions of dollars in wasted time, and wasted capacity.
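The scale of that waste can be illustrated with a little arithmetic; a rough sketch only, using assumed midpoints of the utilization ranges quoted above rather than any measured figures:

```python
# Rough illustration of the consolidation headroom implied by the
# utilization figures above. The midpoint numbers are assumptions
# for illustration, not measured data from any real environment.
current_cpu_util = 0.07   # assumed midpoint of the 6%-8% range
target_cpu_util = 0.50    # assumed midpoint of the 40%-60% best-practice goal

# If workloads were consolidated onto shared infrastructure running at
# the target utilization, the same work would need roughly this
# fraction of today's server count:
servers_needed_fraction = current_cpu_util / target_cpu_util
print(f"{servers_needed_fraction:.0%} of current servers")
```

Even allowing for generous overhead, headroom, and workloads that cannot be consolidated, figures like these are what put "hundreds of millions of dollars" within the realm of the plausible.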

Additionally, the significant upfront costs, and the three to four year commitment of resources required to implement any solution, create an environment hostile to the development of new and innovative solutions. It is extremely difficult to create, develop, and test new technologies (which may or may not present viable business solutions when fully developed) if even basic experimentation on a small scale carries significant resource dedication requirements.


Utility computing aims to address the issues raised above, by offering both specific applications, and generalized computing services, as a utility.

In every respect, this lines up with conventional assumptions about utility class services in the wider world.

When you order electrical service for your home, you specify that you want a 120 volt, 200 amp service for the house, 220 volt 80 amp service for the garage etc... Implied in this, is that you expect it to be “always on”; that is to provide 99.999% uptime or better.

You don’t however specify how your power is to be generated, what wires it will be transported on, what models of generator, transformer, and meter you’ll use etc… You also don’t pay individually for power generation, line fees, maintenance on the generators and lines, insurance, and the salaries of the linemen, and powerplant operators. The total cost to you is paid for as one number, every month.

This is the model we want to implement, at least for certain applications and environments, within the enterprise.

This is a fundamental change in how computing resources are allocated, implemented, administered, and paid for, as outlined here. Rather than a seven step process, with each step paid for and managed individually, the entire process is simplified:

1. Build a flexible utility computing infrastructure to address business needs
2. Identify a business need
3. Determine initial computing capacity requirements
4. Request computing resources be allocated to meet these requirements
5. Pay for computing resources on a periodic (monthly, quarterly, annually) basis

Importantly, the group with a business need only pays for the capacity they will utilize. They neither pay for nor manage specific infrastructure or systems, nor do they pay for or manage specific staff to support that infrastructure. If capacity requirements change, they simply request a change in resources, and their associated costs will change during the next billing period.

Critical to this concept is the abstraction of the end user from the infrastructure and its support. In the traditional computing model, the end user pays for specific servers in a specific configuration, and pays for the personnel to support that infrastructure. In the utility computing model, the end user pays for and receives computing capacity, as measured by five criteria:

1. Volume of data
2. Volume of transactions
3. Performance requirements
4. Reliability requirements
5. Specific support requirements (application, platform, revision etc…)
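Those five criteria map naturally onto a simple request record; a minimal Python sketch, in which every field name and value is invented for illustration and mirrors no real provisioning system:

```python
from dataclasses import dataclass, field

@dataclass
class CapacityRequest:
    """One end user's request for utility computing capacity.

    The fields mirror the five billing criteria listed above;
    the names and types are illustrative assumptions.
    """
    data_volume_tb: float        # 1. volume of data
    transactions_per_day: int    # 2. volume of transactions
    performance_class: str       # 3. performance requirements
    reliability_tier: int        # 4. reliability requirements
    support_requirements: list = field(default_factory=list)  # 5. app/platform/revision

# The 2 terabyte, tier 2 database used as an example later in the paper;
# the transaction count is an assumed stand-in for "hundreds of thousands":
request = CapacityRequest(
    data_volume_tb=2.0,
    transactions_per_day=500_000,
    performance_class="high",
    reliability_tier=2,
    support_requirements=["Oracle"],
)
```

The point of the record is what it leaves out: no server models, no data center locations, no staffing. Everything below these five criteria is the utility's problem, not the end user's.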

To address these business needs, the service delivery groups must develop, implement, and maintain a flexible, utility class infrastructure; and structure a service offering, and an associated costing model, around the maintenance of that offering as a utility.

This of course means that all costs, on a total cost accounting basis, must be collected and accounted for in the service costing model; and that capacity, administration, management, and support levels must all be maintained on an ongoing basis, as is expected of any utility.


The goals of utility computing are straightforward:

• Lower total cost of ownership
• Improve efficiency (of all aspects of information infrastructure and management)
• Improve service delivery speed

These are goals which absolutely can be achieved through the use of the utility computing model; and which we believe are best achieved through this model.


Utility computing offers a number of advantages over the traditional computing model, that will help us to achieve our goals:

• Reduced complexity to the end user
• Reduced cost to the end user
• Reduced cost of management of resources
• Lowered barrier to entry for development and innovation
• Improved efficiency of resource allocation
• Improved resource utilization
• Improved control over architecture and infrastructure
• Improved control over lifecycle management
• Improved control over capacity management
• Improved speed of deployment


There are of course a number of challenges presented by utility computing:

• Current funding and billing models are incompatible with utility computing
• Current capacity management models are incompatible with utility computing
• Current resource allocation models are incompatible with utility computing
• Difficulty in total cost accounting (this is a very significant challenge)
• Changing the mental model of our end users (this is also a very significant challenge)
• Development of a utility class service infrastructure
• Current staffing levels and taskings are not conducive to offering a utility class service


Noted specifically as challenges above are the development of a utility class service, and of its supporting infrastructure and processes. Current models for infrastructure development, deployment, and funding are oriented very strongly toward the traditional model.

In order to present a successful utility class service to end users, the following criteria must be met:

1. The service must be highly reliable: Any successful utility class service, must present that service to a reliability standard equivalent to the tier we wish to support. Our recommendation at this time is that we build to a tier 3 standard; but the eventual goal is to build to a tier 2 standard.

2. The service must be highly flexible: Any generalized computing utility class service must be flexible enough to meet all the application needs of our end users. This involves flexibility of processing power, memory, and storage allocation, and a broad base of application support.

Any specific application offered as a utility class service (for example databases or web hosting), must be flexible enough to meet the specific needs of our end users with regard to those applications.

3. The service must be easily and dynamically scalable: We must develop and implement an infrastructure, and processes, which allow us to allocate whatever resources are necessary to accommodate our end users’ needs. To this end, we must be able to increase and decrease our end users’ resource allocations seamlessly and transparently; and then charge appropriately.

4. The service must be easily supportable, maintainable, and manageable: We must develop and implement an infrastructure to standards which our personnel can support, maintain, and manage.

Concurrently we must develop tools to accomplish those tasks; and maintain the appropriate staffing levels and skill sets to do so.

5. The processes must be in place for end users to utilize this service: In order to present a service to our end users, we must develop a process for the users to obtain and pay for service; and a concomitant process to use those payments to support and maintain the service infrastructure.


Integral to the concept of utility class computing is abstraction. The basic principle of computing as a utility is the abstraction of the end user from the supporting infrastructure. Essentially, a utility class service is predicated on the ability to provide continuity of service, and continuity of quality, without regard to the specifics of the underlying infrastructure.

In order to manage this continuity, and the complexity of the supporting infrastructure, a deeper level of abstraction is also warranted. To this end, the traditional infrastructure architecture model must change to support utility computing.

In the traditional model, a specific infrastructure solution is architected on an individual basis, for each application and each end user. In utility computing, a generalized model must be developed, so that end users can easily understand, and request appropriate capacity to support their business needs.

The architecture model most appropriate to computing as a utility, can best be thought of as object oriented.

In the object oriented architecture model, classes are defined to cover services, applications, systems, storage, and other infrastructure and support requirements. Each class has properties, dependencies, inheritances, and requirements that define it, and its relationship to other classes. Each class has subclasses within it, which inherit properties from the class itself.

At the highest level, each end user can define their business needs, and then choose the appropriate solution class to meet those needs.

Let’s use a 2 terabyte database with hundreds of thousands of transactions a day, and a tier 2 reliability requirement as an example.

Under the traditional architecture model, the end user would take this capacity request to their internal application support and architecture teams, who would consult with vendors and supporting groups, and define a suggested hardware solution for the database to run on. They would then go to the infrastructure group to request hardware; to the data center group to request space, power, and networking; to the database group to request software and database administration support; etc…

In the object oriented architecture model, the end user would define their business need, and request capacity from the utility service provider, by selecting an object in the “enterprise class high performance database, tier 2” class, and then selecting 2 terabytes as the database size.

At this point, the end user would be divorced from the architecture and infrastructure. They would be presented with a class of service as requested, pay their monthly (or quarterly or yearly) bill, and not concern themselves with any deeper layer of the architecture or infrastructure.

Of course, underneath that, there is infrastructure to support this.

The “enterprise class high performance database, tier 2” object class that the end user selected is itself a subclass of “databases”, which all have some common properties. Inside the “enterprise class high performance database” object, there are additional subclasses for “tier 1”, “tier 2”, “tier 3”, and “unclassified”, to define reliability requirements. Inside the “tier 2” subclass, there are specific database applications like “Oracle” and “DB2”. Inside the “Oracle” class there are subclasses of “Sun”, “IBM”, and “Linux”. Within the “Sun” subclass, there are specific server platforms like “M5000” and “T5220”, and so forth.

The end user can select as far down the subclasses as they need to descend for their specific application requirements; or, if they don’t have such requirements, they can simply choose “enterprise class high performance database”. Each level of class is clearly defined, with its own properties, and with inheritances from the classes above it.

Classes can also be linked in parallel sets, rather than as subsets. For example, a solution set called “web service middleware” could have parallel object class sets inside it covering “database”, “web server”, “application server”, “ETL”, etc… and each of those classes would have its own subclasses.

This model can be abstracted to as high a level as requirements dictate, or to the smallest and most granular level of configuration, because each class can be specifically defined by its own properties, its inheritances, and its relationships with other classes.
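The hierarchy walked through above can be sketched directly as code; a minimal Python sketch, in which every class name, tier, and platform comes from the illustrative database example in this paper, not from any real service catalog:

```python
class ServiceClass:
    """Base of the catalog: every offering carries capacity information."""
    tier = None        # reliability tier; inherited by subclasses unless overridden
    platform = None    # hardware platform; only the most granular classes set this

    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb

    def describe(self):
        # Summarize this offering from its own properties plus everything
        # inherited from the classes above it.
        parts = [type(self).__name__, f"{self.capacity_tb} TB"]
        if self.tier is not None:
            parts.append(f"tier {self.tier}")
        if self.platform is not None:
            parts.append(self.platform)
        return ", ".join(parts)

class Database(ServiceClass):
    """Properties common to all database offerings live here."""

class EnterpriseHighPerformanceDatabase(Database):
    """The level most end users would stop at."""

class EnterpriseHighPerformanceDatabaseTier2(EnterpriseHighPerformanceDatabase):
    tier = 2   # the reliability requirement; inherited by everything below

class OracleOnSunM5000Tier2(EnterpriseHighPerformanceDatabaseTier2):
    platform = "Sun M5000"   # the most granular refinement an end user can pick

# An end user with no platform-specific needs stops high in the hierarchy...
generic = EnterpriseHighPerformanceDatabaseTier2(capacity_tb=2)
# ...while one with specific application requirements descends further.
specific = OracleOnSunM5000Tier2(capacity_tb=2)
```

The `specific` request still inherits its tier from the class above it, exactly as described; parallel solution sets like “web service middleware” would simply be additional class trees composed side by side.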


Not all applications are suitable for utility class computing. Certain specific requirements (security, compliance and regulation, unusual application support or performance requirements) either rule it out entirely, or present great difficulties. For some applications, it simply may not be the most appropriate solution (for whatever reason).

Conversely, some applications present a far lower barrier to utility computing. Applications with the following characteristics are generally good candidates for utility computing:

• Applications that are easily supported in a virtualized environment
• Applications that are highly parallelized, but do not require isolated environments
• Applications that are explicitly cluster aware, but do not require isolated environments
• Applications that are easily supported by, or explicitly designed for a shared services environment
• Applications that tend to change requirements rapidly (development environments for example)

There exist today in the enterprise numerous applications which meet, or could meet, one or more of those criteria.

In fact, there are already some successful shared service environments, at both the application and the server level, which could relatively easily be transitioned to a utility computing model, or which already implement a similar service model, such as:

1. Shared database services
2. Shared web hosting services
3. Enterprise monitoring services
4. Enterprise authentication services
5. Enterprise Exchange services
6. Virtualization services (VMWare and other virtual server offerings)

In our opinion, the strongest current candidates for transition to a utility class computing service are the virtual services and the shared database services, because both infrastructures have been designed from the beginning to be extensible and supportable as a service rather than as individual servers; the billing and capacity management now in place, however, still use the traditional model.

These two models conflict with each other, and in fact reduce the efficiency of operation for both environments, and complicate their management and maintenance. Transitioning them to a utility model would allow the environment to be managed, maintained, and expanded as is most appropriate to the service being provided. It would also assist in selling the shared solution to end users, as it would change their billing models to match the service model, and “mental model” they wish to present.


As utility computing presents significant challenges to employment within the enterprise, we believe it is best to attempt a phased approach to its introduction. This will allow us to develop the skillsets, staffing, processes, and infrastructure necessary to offer a successful utility computing service.


Phase 1:

1. Identify strong candidates for transition to a utility computing service
2. Select a specific “best prospect” candidate
3. Develop preliminary processes for billing, resource provisioning, support, and other needs
4. Conduct preliminary cost benefit, and risk benefit analyses for the transition to a utility computing service
5. Develop preliminary business case for the transition to a utility computing service
6. Develop a transition plan for the application, infrastructure, administration, and support
7. Approach application owners with the pre-prepared materials laying out all factors of transition
8. Develop success criteria, and the metrics to document them
9. Obtain approval, and executive buy-in from the application owners, and their management, to proceed with phase 2


Phase 2:

1. Revise all plans and documentation above as necessary
2. Approach end users of the application in question, and discuss their needs
3. Discuss the new model, and transition process with end users
4. Iterate through steps 1-3 as necessary
5. Build supporting infrastructure for the application transition
6. Implement new processes necessary to support application transition
7. Execute the transition plan created in phase 1
8. Evaluate the project against the success criteria using collected metrics
9. Conduct “after action reviews” to learn what has worked, what has not, what can be improved etc… and revise all plans, documentation, policies, procedures, and infrastructure as necessary


Phase 3:

1. Iterate through phase 1 and phase 2 for additional applications
2. When a representative critical mass of successful applications has been achieved, develop plans to transition generalized computing services to the utility computing service model
3. Review all applications that have been transitioned to date for lessons learned, and performance against the goals of utility computing
4. Revise all plans and documentation as necessary
5. Execute transition of generalized computing to the utility computing services model


Although the transition to a utility computing based information infrastructure presents significant challenges and up-front costs, the efficiencies gained in speed of deployment, simplicity of project and infrastructure management, reduction in administrative overhead, and efficiency of infrastructure utilization should lower total cost of ownership across the entire information infrastructure, and provide savings that greatly outweigh those costs and challenges.