Hello, I'm Vaclav Vincalek, President of PCIS. We've been fortunate over the years to build strong relationships with customers and partners. One of our friends, a CTO in the technology industry, has graciously agreed to offer insight into questions we want to pose.
(Note: To protect the innocent, our friend has requested to remain anonymous, so we'll simply refer to him as "The CTO".)
Are we talking utility computing, or computing as a utility? Unfortunately, there are numerous definitions for this, which causes confusion. In the early nineties, NETFRAME, a provider of high-end Intel servers based on mainframe hardware architecture, advertised its servers as a "utility", implying that customers expected the same level of reliability from their servers as they do from their power and water providers. This is the first instance in the Intel space that I can recall where data center architecture was described in this manner. In this case, the premise was that IT services should be like a dial tone: always there, always available. To a large extent, this expectation of service delivery remains, even when unstated. Hardware today is very good, and so is software. It's an amusing reality that most of this kit runs perfectly so long as humans don't touch it.
However, this is no longer the prevalent definition, so for the purposes of answering your question, Vaclav, I'll go with the definition I hear most frequently, and that your readers can find on Wikipedia. The following definition is copied from Wikipedia at the time of my response.
Utility computing (also known as cloud computing or on demand computing) is the packaging of computing resources, such as computation and storage, as a metered service similar to a physical public utility (such as water or natural gas). This system has the advantage of a low or no initial cost to acquire hardware; instead, computational resources are essentially rented. Customers with very large computations or a sudden peak in demand can also avoid the delays that would result from physically acquiring and assembling a large number of computers.
Conventional Internet hosting services have the capability to quickly arrange for the rental of individual servers, for example to provision a bank of web servers to accommodate a sudden surge in traffic to a web site.
(Figure caption from the Wikipedia article: Virtual Organizations accessing different and overlapping sets of resources.)
"Utility computing" usually envisions some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing.
The term "grid computing" is often used to describe a particular form of distributed computing, where the supporting nodes are geographically distributed or cross administrative domains. To provide utility computing services, a company can "bundle" the resources of members of the public for sale, who might be paid with a portion of the revenue from clients.
One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes, at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the virtual organization,[Virtual Organization. (n.d.) Retrieved August 14, 2007 from http://en.wikipedia.org/wiki/Virtual_organization] is more decentralized, with organizations buying and selling computing resources as needed or as they go idle. The definition of "utility computing" sometimes may also extend to specialized tasks, such as web services.
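The "metered service" idea at the heart of this definition can be made concrete with a small sketch. The rates and usage figures below are invented purely for illustration; the point is that the customer pays for consumption rather than for hardware, so a quiet month and a peak month produce very different bills from the same infrastructure.

```python
# Hypothetical metered pricing for a utility computing service.
# The rates and usage figures are invented for illustration only.
# Billing is done in whole cents to avoid floating-point drift.

RATE_PER_CPU_HOUR_CENTS = 10   # assumed rate: $0.10 per CPU-hour
RATE_PER_GB_MONTH_CENTS = 15   # assumed rate: $0.15 per GB stored per month

def monthly_bill_cents(cpu_hours, storage_gb):
    """Charge only for what was consumed -- no upfront hardware cost."""
    return (cpu_hours * RATE_PER_CPU_HOUR_CENTS
            + storage_gb * RATE_PER_GB_MONTH_CENTS)

# A quiet month versus the same customer during a sudden peak in demand:
quiet = monthly_bill_cents(100, 50)
surge = monthly_bill_cents(5000, 50)
print(f"quiet month: ${quiet / 100:.2f}, surge month: ${surge / 100:.2f}")
```

The surge is absorbed as a larger bill rather than as a delay while new servers are purchased and assembled, which is exactly the advantage the definition claims.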
If this definition makes sense to you, then what it specifically implies is that the packaging of resources will go in one of two directions at a macroscopic level. In the first scenario, systems are very generic and very simple, acting as hosting centers for specific virtual machines. This is an interesting idea, but problematic because of what's missing, and the key missing piece is management. In a traditional server-based computer room, you can walk up to the server causing strife and physically touch it; it's real and reasonably easy to find. The challenge is then managing it: keeping it up to date, tracking its inventory and licenses, and so on. Virtualization without a management framework doesn't make this easier; in fact, it makes it much harder, because virtual devices are intangible. You cannot reach out and touch them. Virtual servers have all the same problems as physical servers, compounded by that intangibility. Hence, without a comprehensive means to manage these virtual devices, to deploy them, to decommission them, to inventory them and to provide library services for the virtual machines, I wouldn't deploy this in production. All software vendors require license audits. That's hard enough with real iron, but significantly more difficult with a library of virtual servers, particularly those that have been created (and are hence under license) but deactivated. It is utility computing in the spirit of the definition, but without pragmatic and effective management strategies in place, it evokes the haunting guitars of Chris Rea's "The Road to Hell".
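The bookkeeping argued for above can be sketched in miniature. This is not any vendor's product, just an illustrative registry (all names, licenses and states are hypothetical) showing why a deactivated virtual machine must still appear in a license audit:

```python
# Minimal sketch of VM lifecycle bookkeeping: a registry that tracks every
# virtual machine from deployment through decommissioning. All names and
# records here are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    os_license: str
    state: str = "deployed"   # deployed -> deactivated -> decommissioned

@dataclass
class VMRegistry:
    vms: list = field(default_factory=list)

    def deploy(self, name, os_license):
        self.vms.append(VirtualMachine(name, os_license))

    def deactivate(self, name):
        for vm in self.vms:
            if vm.name == name:
                vm.state = "deactivated"

    def license_audit(self):
        # A deactivated image still holds its license -- the case the text
        # flags as easy to lose track of without a registry like this.
        return [vm for vm in self.vms if vm.state != "decommissioned"]

registry = VMRegistry()
registry.deploy("web01", "Windows Server 2003")
registry.deploy("db01", "SQL Server 2005")
registry.deactivate("db01")
print(len(registry.license_audit()))  # both VMs still count for licensing
```

Without something playing this role, the deactivated "db01" simply disappears from view while its license obligation does not.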
The other scenario is the road you offered for consideration: that of highly specialized appliances. Here we are talking about real iron with very specialized software and hardware components that do one thing. In talking with other IT professionals, I see a lot of this. The router is the classic example: it does one thing extraordinarily well. Operating system vendors have tended to make their offerings do "everything"; for small sites this can be cheap and handy, but larger data centers eschew the model or suffer the consequences. Recently I have seen organizations building their own "appliance" offerings, using a Linux distro to build up a dedicated-function device for services such as DNS. This type of solution can be a good idea if the organization or supporting partner has the skills to build, test and maintain the package. Many people are finding that while the vendor-delivered appliances are superb choices, there are cost savings to be had when the solution is built in house or by a local provider.
I see two really strong targets for a solution-provider-built, supported and perhaps remotely operated appliance. The first is a remote backup model using high-speed bandwidth to address the backup problem for small offices, or for SMBs without a formal IT department. You can buy proprietary solutions, but I would want to consider a self-built solution to save money. The second emerging space is search. You can buy Google appliances that provide coordinated search across both internal and external data sources. You could also find a solution-provider-built offering that delivers similar services for lower capital and operational expense.
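The heart of such a self-built remote backup appliance, at its simplest, is deciding what actually needs to cross the wire each night: compare checksums against the previous run and ship only what changed. The sketch below is a hypothetical illustration of that core idea; the file names are invented, and the actual transfer step is left out.

```python
# Sketch of the core of a self-built incremental backup: compare file
# checksums against the last run's manifest and select only what changed.
# File names and contents are hypothetical; the transfer step is omitted.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def files_to_back_up(current: dict, last_manifest: dict) -> list:
    """current maps path -> file bytes; last_manifest maps path -> checksum."""
    changed = []
    for path, data in current.items():
        if last_manifest.get(path) != checksum(data):
            changed.append(path)   # new or modified since the last backup
    return sorted(changed)

# The first run backs up everything; the next run ships only the edited file.
office_files = {"ledger.xls": b"q1 figures", "memo.txt": b"hello"}
manifest = {path: checksum(data) for path, data in office_files.items()}
office_files["memo.txt"] = b"hello, revised"
print(files_to_back_up(office_files, manifest))  # ['memo.txt']
```

Over a high-speed link, shipping only the changed files is what makes nightly off-site backup practical for a small office without an IT department.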
If your readers aren't yet thinking about search in the context of the company as a whole, they should be. Vista, Linux and Mac OS X all have integrated desktop search, which localizes the search experience, but on a per-user basis. The real win is at the company level. Companies like Novell have had network search, called QuickFinder, for years, but never told anyone about it. Microsoft makes search technology, as does IBM. The challenge is getting the right solution provider to put these appliances together and market them effectively. The Google search appliances will always sell, but from a business perspective I look at the ROI numbers, and depending on the size of the company, these solution-provider offerings make a lot of sense.
So in summary Vaclav, I do see utility computing in the context of appliances. I encourage you and your team at PCIS to look at how you might be able to help deliver the right appliances to the marketplace.
Until next time,
Vaclav Vincalek August 14th, 2007 07:00:00 PM