An Over-Simplistic Utility Model

Brynjolfsson, E., P. Hofmann, et al. (2010). "Economic and Business Dimensions: Cloud Computing and Electricity: Beyond the Utility Model." Communications of the ACM 53(5): 32-34.

—-

This paper argues that technical issues associated with innovation, scale, and geography will confront those attempting to capitalise on utility computing. The authors take the utility model of computing (i.e. the idea that cloud computing is analogous to the electricity market) and identify its key challenges.

In particular they identify the following technical challenges:

1)    The pace of innovation of IT – managing this pace of change requires creative expertise and innovation (unlike utilities such as electricity which, they argue, are stable).

2)    The limits of scale – Parallelisable problems are only a subset of problems. The scalability of databases has limits within current architectures, and APIs (e.g. using SQL) are difficult to scale for high-volume transaction systems. Further, large companies can benefit from private clouds, and would see little advantage, and greater risks, in moving to the public cloud.

3)    Latency: The speed of light limits communication, and latency remains a problem. For many applications, performance, convenience and security considerations will demand local provision. [While not mentioned in the article, it is interesting to note that this problem is being attacked by http://www.akamai.com/ who specialise in reducing the problems of network latency through their specialist network.]
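The latency floor is easy to sketch with a back-of-the-envelope calculation. The distances below are rough great-circle figures, and the speed of light in optical fibre is taken as roughly two-thirds of c; all of these are illustrative assumptions rather than measurements:

```python
# Minimum round-trip time imposed by the speed of light in fibre.
# Distances are rough great-circle figures (illustrative assumptions);
# light in fibre travels at roughly 2/3 of c, i.e. ~200 km per millisecond.

C_FIBRE_KM_PER_MS = 200  # ~2/3 of 300,000 km/s, expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time in milliseconds over fibre."""
    return 2 * distance_km / C_FIBRE_KM_PER_MS

for route, km in [("London-Dublin", 460),
                  ("London-New York", 5570),
                  ("London-Sydney", 17000)]:
    print(f"{route}: >= {round_trip_ms(km):.1f} ms")
```

No amount of cloud engineering removes this floor, which is why a chatty application making many round trips to a distant data-centre feels slow however fast the servers are.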

They also identify the following business challenges:

1)    Complementarities and Co-Invention: "Computing is still in the midst of an explosion of innovation and co-invention … Firms that simply replace corporate resources with cloud computing, while changing nothing else, are doomed to miss the full benefits of the new technology" (p34). It is the reinvention of services which is key to the success of the cloud. IT-enabled businesses reshape industries – e.g. Apple quadrupled revenue by moving from perpetual licence to pay-per-use in iTunes, but this demanded tight integration of ERP and billing which would have been difficult within the cloud given their volumes.

2)    Lock-in and Interoperability: Regulation controlled energy monopolies, and electrons are fungible. Yet for computing to operate like electricity would require "radically different management of data than what is on anyone's technology roadmap". Information is not electrons – cloud offerings will not be interchangeable. "Business processes supported by enterprise computing are not motors or light-bulbs".

3) Security – We are not concerned about electrons in the way we are about information: for electricity, no content regulators, laws or audits are needed. New security issues will need to be faced (see Owens (2010) for an interesting debate on security).

—-

Owens, D. (2010). "Securing Elasticity in the Cloud." Communications of the ACM 53(6): 48-51. doi: http://doi.acm.org/10.1145/1743546.1743565

IT departments as “outside”

Joe Peppard, in a recent EJIS paper (Peppard 2007), makes the point that utility computing (along with outsourcing and ASP) is premised on a gap between the IT function and the customer/user. "They assume the user is the consumer of IT services, failing to acknowledge the value derived from IT is often not only co-created but context dependent" (ibid, p 338).

Joe suggests that this is founded upon the ontological position that "IT is an artefact that can be managed", and consequently that the value of IT is in its possession. This leads to the claim that rather than focusing on IT management, we should focus on the delivery of value through IT. This shifts our perspective on the IT function (and on Cloud Computing within the enterprise) from the realm of cost-saving efficiencies (as Carr 2003 might suggest) to a focus on contextual practice – supporting work. As the article argues, we should seek "not to manage IT per se, but to manage to generate value through IT".

Carr's (2003) argument is thus that the IT function is not needed since this is outsourced to the ASP/Cloud provider. But a more subtle point might be that it instead needs to be pervasive – IT installed within business functions (so as to better contextualise Cloud services within business practices). While IT services prior to the Cloud increasingly focused on getting the "plumbing" of the organisation correct (i.e. ensuring email worked, installing ERP, networking the systems), with the use of Cloud services their role must focus on improving the integration of Cloud services into the work practices of users – on both the social and technical practices which can be supported or enhanced through IT.

We remain fixated on the CIO and the IT department as our focus for Cloud Computing. This seems odd, for what if this role of contextualising IT is better suited to users, who are increasingly technologically proficient, particularly around Cloud services such as Salesforce and Gmail? With the Cloud, users are increasingly powerful actors able to engage with, and even procure, IT infrastructure for themselves. How this might influence the role of IT within the enterprise is far from clear, but it will certainly lead to new battles and new challenges.

References:

Carr, N. (2003). "IT Doesn't Matter." Harvard Business Review: 41-49.
Peppard, J. (2007). "The Conundrum of IT Management." European Journal of Information Systems 16: 336-345. doi:10.1057/palgrave.ejis.3000697

Cusumano’s view – Cloud Computing and SaaS as New Computing Platforms.

Cusumano, M. (2010). “Cloud Computing and SaaS as New Computing Platforms.” Communications of the ACM 53(4): 27-29. http://doi.acm.org/10.1145/1721654.1721667
This is an interesting and well-argued analysis of the concept of Cloud and SaaS as a platform. The paper concentrates on lock-in and network effects, and the risk they pose given the dominance of certain players in the market, in particular Salesforce, Microsoft, Amazon and Google.
Direct network effects (the more people who have telephones, the more valuable telephones become) and indirect network effects (the more popular a platform is with developers, the more attractive it becomes to other developers and users) are key to understanding the development of the Cloud. Central to the article's potential importance is its analysis of how integrated web services (and thus integrated software platforms) might create conflicts of interest, network effects and hence risks.
Cusumano's analysis of Microsoft's involvement in the market is compelling (particularly given his history in this area and detailed knowledge of the firm).
I do worry, however, that the paper's exclusive focus on current players (and hence its interest in traditional concerns about network effects and dominance) downplays the key role of integrators and the small standardisation/integration services which are emerging with the aim of reducing the impact of these network effects. Unlike traditional software (where the cost of procurement, installation, commissioning and use is very high), mobility between clouds is easy if the underlying application is Cloud-provider-independent. This means there is considerable pressure from users to develop a cloud-independent service model (since everyone understands the risks of lock-in).
The future might thus be an open-source platform which is wrapped to slot into other cloud platforms… a meta-cloud, perhaps, which acts on behalf of users to enable easy movement between providers. This is something Google is keen to stress at its cloud events.
I look forward to seeing the book on which the article is based.

How Cloud Computing Changes IT Outsourcing — Outsourcing — InformationWeek


This article provides a useful look at the outsourcing relationship and compares it with Cloud contracts. In particular: "Cloud computing blurs the lines between what had been conventional outsourcing and internal operations, and it will test IT's management and control policies". The article points out that companies are not ready for the challenges of Cloud growth, with their survey suggesting only "17% say they directly monitor the performance and uptime of all of their cloud and SaaS applications", with a "shocking 59% relying on their vendors to monitor themselves".

This is indeed shocking. As companies contemplate moving their operations to the Cloud, they are perhaps being lulled into a false sense of security by vendors' promises. But as demand grows these vendors' facilities will be stretched, and their service less certain.

On contracts, the article points out that a cloud computing contract is a hybrid of outsourcing, software and leasing contracts, and represents a major contractual commitment.

Finally, the more obvious points about business strategy are made – pointing out that a cloud provider may be less interested in driving innovation and major technological change, as it is not aligned to a business's core capabilities and objectives.

Cloud Computing Presentation at GridPP Collaboration Meeting

Yesterday I gave an introductory presentation on Cloud Computing from a business perspective to a Grid meeting at Royal Holloway University. The slides are available on the GridPP website here: http://www.gridpp.ac.uk/gridpp24/CloudComputingGridPP24.ppt

Green Computing and the Cloud – SETI@home

Cloud computing hides the environmental impact of computing from the user. When we search using Google our own PC doesn’t suddenly start to cough – the fan doesn’t ramp up, our laptop doesn’t burn through the table. But somewhere in Google processors are using energy to undertake the search. Google is aware of this and tries hard to reduce this cost and its environmental impact.

There is a corollary of this though. When we use peer-to-peer software our processor uses more power and more electricity, but we seldom notice. While perhaps tiny individually, in aggregate this can be significant. And unlike Google, few of us think about it, or try to use renewable energy to reduce the resulting CO2 emissions.

Let me demonstrate with a quick back-of-the-envelope calculation.

SETI@home (the peer-to-peer application searching for ET) has 5.2 million participants and has produced an aggregate two million years of computing time. Taking an example of power usage for a basic computer, the difference between an idle computer and an in-use computer (i.e. one where SETI is doing its processing) is around 20 watts (though perhaps more). Given that two million years works out at 17,520,000,000 hours of computing, that is about 350,400 Megawatt hours, or 350.4 Gigawatt hours.

The UK average consumption of gas and electricity was about 22,338 kWh per household (in 2007). In the ten years since it started, SETI@home has therefore used about as much energy as a town of some 15,000 households would use in gas and electricity in an entire year!

Interestingly, assuming US consumer energy costs (since most participants will be in homes in the US) at about 8c per kWh, this is about $28 million of electricity! The key point is that this is only about 50c per year per participant – scarcely enough to make them change their SETI screensaver, but highly significant in aggregate.
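The whole back-of-the-envelope calculation can be reproduced in a few lines. All the inputs are the rough assumptions used above (20 watts of marginal draw, 22,338 kWh per UK household, 8c per kWh), not measured figures:

```python
# Back-of-envelope check of the SETI@home energy estimate.
# Every input is a rough assumption taken from the text above.

HOURS = 2_000_000 * 8760        # two million years of computing time, in hours
MARGINAL_WATTS = 20             # extra draw of an in-use vs idle PC (assumed)
PARTICIPANTS = 5_200_000
YEARS_RUNNING = 10
UK_HOUSEHOLD_KWH = 22_338       # average annual gas + electricity use (2007)
US_PRICE_PER_KWH = 0.08         # assumed US consumer price, in dollars

kwh = HOURS * MARGINAL_WATTS / 1000           # watt-hours -> kilowatt-hours
gwh = kwh / 1_000_000
households = kwh / UK_HOUSEHOLD_KWH           # equivalent annual household use
cost = kwh * US_PRICE_PER_KWH
per_participant_per_year = cost / PARTICIPANTS / YEARS_RUNNING

print(f"{gwh:.1f} GWh")
print(f"~{households:,.0f} households' annual gas and electricity")
print(f"${cost / 1e6:.1f}m total, ~${per_participant_per_year:.2f} per participant per year")
```

The striking design point is the last line: the cost is invisible at the individual level, which is exactly why nobody thinks about it.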

And SETI@home has yet to discover anything alien!

G-Cloud – A talk by John Suffolk (hosted by Computer Weekly)

A couple of weeks ago I attended a talk by the UK Government's CIO – John Suffolk (see here for more information on his role). At the talk John outlined his idea for a "G-Cloud" (government cloud) with the primary aim of reducing IT costs within government. Central government has around 130 data-centres, and an estimated 9000 server rooms, with local government and quasi-government obviously adding to this figure. Reducing and consolidating these through Cloud Computing would offer significant efficiencies and cost savings. Indeed, given that around 5% of contract costs are spent simply on bidding/procurement, having fewer procurement exercises would automatically save costs.
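The procurement-overhead point is simple arithmetic. Only the 5% bidding/procurement figure comes from the talk; the contract count and value below are invented purely for illustration, and the sketch assumes the cost of running a tender exercise is roughly fixed per exercise:

```python
# Illustrative saving from consolidating procurement exercises.
# Only the 5% overhead figure comes from the talk; the contract value and
# tender counts below are hypothetical, chosen just to show the arithmetic.

AVG_CONTRACT_VALUE = 2_000_000                    # hypothetical, in pounds
BID_COST_PER_TENDER = 0.05 * AVG_CONTRACT_VALUE   # ~5% of contract cost per exercise

def bid_overhead(num_tenders: int) -> float:
    """Total spend on running tender exercises, assumed fixed per exercise."""
    return num_tenders * BID_COST_PER_TENDER

# Consolidating 400 separate tenders into 40 larger ones:
saving = bid_overhead(400) - bid_overhead(40)
print(f"saving: £{saving:,.0f}")
```

The saving comes entirely from running fewer tender exercises, not from the contracts themselves getting cheaper, which is precisely the G-Cloud argument.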

John outlined different “cloud-worlds” which he sees as important opportunities for cost saving through cloud computing in government.

1) “The testing world” – by using cloud computing to provide test-kits and environments it is possible to reduce the huge number of essentially idle servers kept simply for testing. For such servers utilisation is estimated at 7%.

2) “The shared world” – Many of the services offered by government require the same standardised and shared services. While these must be hosted internally they offer savings by using Cloud ideas. http://www.direct.gov for example has two data-centres at present – but could these also be used for similar services in other areas?

3) "Web Services world" – This was less clear in the talk, but centred around the exploitation of cloud offerings through web services. For example, could an "App-Store" be developed to aid government in the simple procurement of tested and assured services? Could such an App-Store provide opportunities for SMEs to sell software into government through easier procurement processes (which currently preclude many SMEs from trying)?

This idea of an App-Store is interesting. It would essentially provide a wrapper around an application to make transparent across government the pricing of an application, the contracting vehicle required to purchase it, the security level it is assured for, and details of who in government is using it. Finally, deployment tools would be included to allow applications to be rolled out simply.

John acknowledged that many details need ironing out, particularly issues of European procurement rules (and the UK's obsession with following them to the letter of the law). While government might like to pay-per-use and contract at crown level (so licences can be moved from department to department rather than creating new purchases), this would be a change in the way software is sold and might affect R&D, licence issues, maintenance etc.

The App-Store would be a means to crack the problem of procurement and the time it takes, and so drive costs down for both sides.

What was clear, however, was the desire to use the cloud for the lower levels of the application stack – to "disintermediate applications" because "we don't care about underlying back-end, only care about the software service". Government can use a common bottom of the stack.

Indeed it was discussed that a standard design for a government desktop-PC might be an “application” within the app-store so centralising this design and saving the huge costs of individual design per department (see http://www.cabinetoffice.gov.uk/media/317444/ict_strategy4.pdf#page=23 for more details).

Finally, the cloud offers government the same opportunities to scale operations to meet demand (for example MyGov pages when new announcements are made, or the Treasury site when the budget is announced). However, this scalable service would also affect costs and might not be justified in the budgeting. While we look to the cloud to stop web-sites going down, there is also a cost to providing such scalable support for the few days a year it is needed – cloud or no cloud.

Thank you to Computer Weekly for inviting me to this event!