Our 2nd Report: Meeting the challenges of cloud computing – Accenture Outlook

http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-Outlook-Meeting-the-challenges-of-cloud-computing.pdf

Our second Accenture report on Cloud Computing is about to be published! As a taster, the link above takes you to a short synopsis (published in the Accenture Outlook Points of View series). I will post a link to the full report when it is out.

At the risk of providing a summary of a summary: this second report builds on our first “Promise of Cloud Computing” report to analyse the challenges faced in a move to the cloud. We identify the following key challenges:

Challenge #1: Safeguarding data security

Challenge #2: Managing the contractual relationship

Challenge #3: Dealing with lock-in

Challenge #4: Managing the cloud

Once you have read the paper I would love to hear your views – please use the add comments link at the bottom of this section (it’s quite small!) or email me directly at w.venters@lse.ac.uk

I would also suggest you review the full report when it is out – much of the important detail is missing from this shorter synopsis.

Cloud Computing – it’s so ‘80s.

For Vint Cerf[1], the father of the internet, Cloud Computing represents a return to the days of the mainframe, when service bureaus rented their machines by the hour to companies who used them for payroll and other similar tasks. Such comparisons focus on the architectural similarities between centralised mainframes and Cloud Computing – cheaply connecting to an expensive resource “as a service” through a network. But cloud is more about providing already low-cost computing at even lower cost, in bulk, through data-centres. A better analogy than the mainframe, then, is the introduction of the humble micro-computer and the revolution it brought to corporate computing in the early 1980s.

When micros were launched, many companies operated using mini or mainframe computers which were cumbersome, expensive and needed specialist IT staff to manage them[1]. Like Cloud Computing today, when compared with these existing computers the new micros offered ease of use, low cost and apparently low risk, which appealed to business executives seeking to cut costs, or to SMEs unable to afford minis or mainframes[2]. Usage exploded: in the period from the launch of the IBM PC in 1981 to 1984 the proportion of companies using PCs increased dramatically from 8% to 100%[3] as the cost and opportunity of the micro became apparent. Again, as with the cloud[4], these micros were marketed directly to business executives rather than IT staff, and were accompanied by a narrative that they would enable companies to dispense with heavy mainframes and the IT department for many tasks – doing them more quickly and more effectively. Surveys from that time suggested accessibility, speed of implementation, response time, independence and self-development were the major advantages of the PC over the mainframe[5] – easily recognisable in the hyperbole surrounding cloud services today. Indeed Nicholas Carr’s more recent pronouncement of the end of corporate IT[6] would probably have resonated well in the early 1980s, when the micro looked set to replace the need for corporate IT. Indeed, in 1980 over half the companies in one sample claimed no IT department involvement in the acquisition of PCs[3].

But problems emerged from the wholesale, uncontrolled adoption of the micro, and by 1984 only 2% of those sampled did not involve the IT department in PC acquisition[3]. The proliferation of PCs meant that in 1980 as many as 32% of IT managers were unable to estimate the proportion of PCs within their company[3], and few could provide any useful support for those who had purchased them.

Micros ultimately proved cheap individually but expensive en masse[2] as their use exploded and new applications for them were discovered. In addition to this increased use, IT professionals worried about the lack of documentation (and thus poor opportunity for maintenance), poor data management strategies, and security issues[7]. New applications proved incompatible with others (“the time-bomb of incompatibility”[2]), and different system platforms (e.g. CP/M, UNIX, MS-DOS, OS/2, Atari, Apple …) led to redundancy and communication difficulties between services, and to the failure of many apparently unstoppable software providers – household names such as Lotus, Digital Research, WordStar, Visi and dBase[8].

Ultimately it was the IT department which brought sense to these machines and began to connect them together for useful work using compatible applications – with the emergence of companies such as Novell and Microsoft to bring order to the chaos[8].

Drawing lessons from this history for Cloud Computing is useful. The strategic involvement of IT services departments is clearly required. Such involvement should focus not on the current cost-saving benefits of the cloud, but on the strategic management of a potentially escalating use of cloud services within the firm. IT services must get involved in the narrative surrounding the cloud – ensuring their message is neither overly negative (and thus appearing to have a vested interest in the status quo) nor overly optimistic, as potential problems exist. Either way, the lessons of the microcomputer are relevant again today. Indeed Keen and Woodman argued in 1984 that companies needed the following four strategies for the micro:

1) “Coordination rather than control of the introduction.

2) Focusing on the longer-term technical architecture for the company’s overall computing resources, with personal computers as one component.

3) Defining codes for good practice that adapt the proven disciplines of the [IT industry] into the new context.

4) Emphasis on systematic business justification, even of the ‘soft’ and unquantifiable benefits that are often a major incentive for and payoff of using personal computers” [2]

It would be wise for companies contemplating a move to the cloud to consider this advice carefully – replacing “personal computer” with “cloud computing” throughout.

(c)2011 Will Venters, London School of Economics. 

[1] P. Ceruzzi, A History of Modern Computing. Cambridge, MA: MIT Press, 2002.

[2] P. G. W. Keen and L. Woodman, “What to do with all those micros: First make them part of the team,” Harvard Business Review, vol. 62, pp. 142-150, 1984.

[3] T. Guimaraes and V. Ramanujam, “Personal Computing Trends and Problems: An Empirical Study,” MIS Quarterly, vol. 10, pp. 179-187, 1986.

[4] M. Benioff and C. Adler, Behind the Cloud: The Untold Story of How Salesforce.com Went from Idea to Billion-Dollar Company and Revolutionized an Industry. San Francisco, CA: Jossey-Bass, 2009.

[5] D. Lee, “Usage Patterns and Sources of Assistance for Personal Computer Users,” MIS Quarterly, vol. 10, pp. 313-325, 1986.

[6] N. Carr, “The End of Corporate Computing,” MIT Sloan Management Review, vol. 46, pp. 67-73, 2005.

[7] D. Benson, “A Field Study of End User Computing: Findings and Issues,” MIS Quarterly, vol. 7, pp. 35-45, 1983.

[8] M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, MA: MIT Press, 2003.

CohesiveFT

This is a company to watch http://www.cohesiveft.com/ – they have two products:

VPN-Cubed provides a virtual network overlaid onto the network of a cloud provider. This enables firms to keep a standard networking layer which stays consistent even if the cloud provider’s underlying network changes (e.g. IP addresses change).

Elastic Server allows real-time assembly and management of software components. This allows the quick creation of easy-to-use applications which can easily be deployed to various cloud services.

However, it is the fact that together these products allow virtual machines and cloud services to be moved between cloud IaaS providers without significant real-time work which is important. If their products live up to this promise then users can move to the cheapest cloud provider with ease, driving costs down to commodity levels… and creating a spot market for cloud.
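To illustrate the overlay idea in the abstract – this is a minimal sketch of my own, not CohesiveFT’s actual implementation – the essence is a stable virtual address space that is re-mapped onto whatever addresses the current provider assigns, so applications never notice the change:

```python
# Minimal sketch of an overlay-address mapping (illustrative only, not VPN-Cubed's real design).
# Applications keep addressing a stable "virtual" IP; a mapping layer resolves it to whatever
# provider-assigned address currently backs it, so provider-side changes stay invisible.

class OverlayNetwork:
    def __init__(self) -> None:
        self._routes: dict[str, str] = {}  # virtual address -> current provider address

    def register(self, virtual_ip: str, provider_ip: str) -> None:
        """Bind a stable overlay address to the provider address currently backing it."""
        self._routes[virtual_ip] = provider_ip

    def resolve(self, virtual_ip: str) -> str:
        """Applications only ever see the virtual address; resolution happens here."""
        return self._routes[virtual_ip]

    def reassign(self, virtual_ip: str, new_provider_ip: str) -> None:
        """Called when the provider (or a move to a new IaaS provider) changes the real IP."""
        self._routes[virtual_ip] = new_provider_ip


if __name__ == "__main__":
    overlay = OverlayNetwork()
    overlay.register("10.0.0.5", "203.0.113.17")
    print(overlay.resolve("10.0.0.5"))   # -> 203.0.113.17

    # The VM is restarted, or migrated to another IaaS provider: only the mapping changes.
    overlay.reassign("10.0.0.5", "198.51.100.42")
    print(overlay.resolve("10.0.0.5"))   # -> 198.51.100.42
```

The design point is simply that the application-facing addresses never change; only the mapping does, which is what makes moving workloads between providers cheap.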

Cloud Will Transform Business As We Know It: The Secret’s In The Source

I am involved in the following seminar based on joint research we are doing on Cloud Computing…

HfS Research, in partnership with the Outsourcing Unit of the London School of Economics, is hosting a special webinar, sponsored by Accenture, on the groundbreaking study of Cloud Business Services that HfS and LSE recently released.

Join HfS Research Founder and CEO Phil Fersht, HfS Managing Director Euan Davis, Leslie Willcocks, Professor of Work, Technology and Globalization at the London School of Economics, and Jimmy Harris, Managing Director of Cloud Services at Accenture.

The issues on the table include:

– The contrasting views and intentions of business and IT executives toward Cloud Business Services

– The impact of Cloud on work culture and delivering competitive advantage

– How both business and IT executives need to tool-up and prepare to adopt Cloud Business Services

– The crucial role service providers need to play as Cloud Business enablers for today’s organizations

https://www2.gotomeeting.com/register/744155947

SLAs and the Cloud – the risks and benefits of multi-tenanted solutions.

Service Level Agreements (SLAs) are difficult to define in the cloud, in part because areas of the infrastructure (in particular the internet connection) are outside the control of either customer or supplier. This leads to the challenge of presenting a contractual agreement for something which is only partly in the supplier’s control. Further, as the infrastructure is shared (multi-tenanted), SLAs are more difficult to provide since they rest on capacity which must be shared.

The client using the cloud thus faces a challenge. Small new cloud SaaS providers, which are growing their business and attracting more clients to their multi-tenanted data-centre, are unlikely to provide as usefully defined an SLA for their services as a data-centre provider which controls all elements of the supplied infrastructure can offer. Why would they? Their business is growing, and an SLA is a huge risk: since the service is multi-tenanted, a breach of one SLA is probably a breach of many – the payout might seem small and poor to the client, but in aggregate it is large for the SaaS provider. Further, with each new customer the demands on the data-centre, and hence the risk, increase. Hence the argument that as SaaS providers become successful, the risk of SLAs being breached might increase.
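A back-of-the-envelope sketch makes the asymmetry concrete (the customer numbers, fees and credit rate below are entirely made up): a credit that looks trivial to each client aggregates into a significant liability for the provider when a single shared outage breaches every tenant’s SLA at once.

```python
# Illustrative only: made-up figures showing why one shared outage is expensive for a
# multi-tenanted SaaS provider even when the per-customer SLA credit looks trivial.

def provider_exposure(customers: int, monthly_fee: float, credit_rate: float) -> tuple[float, float]:
    """Return (credit owed to each customer, total credit owed by the provider) for one shared outage."""
    per_customer_credit = monthly_fee * credit_rate
    return per_customer_credit, per_customer_credit * customers

if __name__ == "__main__":
    # Hypothetical: 5,000 tenants paying £500/month, with an SLA credit of 10% of the monthly fee.
    per_customer, total = provider_exposure(customers=5_000, monthly_fee=500.0, credit_rate=0.10)
    print(f"Credit per customer: £{per_customer:,.2f}")    # £50.00 -- small comfort to the client
    print(f"Provider's total exposure: £{total:,.2f}")     # £250,000.00 -- large for the provider
```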

There is, however, a counter-point to this growth risk: as each new customer begins to use the SaaS they will undertake their own due-diligence checks. Many will attempt to stress-test the SaaS service. Some will want to try to hack the application. As the customer base grows (and moves towards blue-chip clients) the seriousness of this testing will increase – security demands in particular will be tested as bigger and bigger companies consider the service. This presents a considerable opportunity for the individual user, for with each new customer comes the benefit of increased stress-testing of the SaaS platform – and increasing development of skills within the SaaS provider. While the SLA may continue to be poor, the risk of failure of the data-centre may well diminish as the SaaS grows.

To invoke a contract is, in effect, a failure in a relationship – a breakdown in trust. Seldom does the invocation of a contract benefit either party. The aim of an SLA is thus not just to provide a contractual agreement but rather to set out the level of service on which the partnership between customer and supplier is based. In this way an SLA is about the quality expected of the supplier, and with the above model the expected quality may well increase with more customers – not decrease, as is usually envisaged for cloud. SLAs for cloud providers may well be trivial and poor, but the systemic risk of using clouds is not as simple as is often argued. While it is unsurprising that cloud suppliers offer poor SLAs (it is not in their interest to do otherwise), this does not mean that the quality of service is, or will remain, poor.

So what should the client consider in looking at the SLA offering in terms of service quality?

1) How does the Cloud SaaS supplier manage its growth? The growth of a SaaS service means greater demand on the provider’s data-centre, and hence greater risk that the SLAs for its multi-tenanted data-centre will be breached.

2) How open is the Cloud SaaS provider in allowing testing of its services by new customers?

3) How well does the Cloud SaaS provider’s strategic ambition for service quality align with your own desires for service quality?

Obviously these questions are in addition to all the usual SLA questions.

A Cloudy Future for Microsoft?

Friends at www.horsesforsources.com (an influential outsourcing blog) provide a useful analysis of Microsoft’s position in the cloud market. The comments are perhaps more interesting than the piece…

Click here for their article – A Cloudy Future for Microsoft?

Structure 2010 – Marc Benioff – Cloud 2

The key focus of Salesforce.com CEO Marc Benioff’s talk was on identifying the difference between “Cloud 2” and non-cloud marketing efforts that leverage the cloud to sell boxes. He highlighted his three tests for Cloud Computing:

1) Efficiency – any cloud offering should offer 1/10th the cost of existing solutions and thus enable new entrants into marketplaces. For example, he highlighted that only 1,500 Dell servers are used to service the 77,000 users of Salesforce (a rough calculation follows after this list).

2) Economic – that solutions should be economically efficient.

3) Democratic – that they should allow SME businesses to enter the market.
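As a rough check on the efficiency claim in test 1 (the server and user counts are those quoted in the talk; the one-server-per-user baseline is purely a hypothetical comparison):

```python
# Rough arithmetic on the multi-tenancy claim quoted above: ~1,500 servers for ~77,000 users.
servers = 1_500
users = 77_000

print(f"Users sharing each server: {users / servers:.0f}")          # ≈ 51
# Hypothetical single-tenant baseline: one server per user would need 77,000 servers,
# i.e. roughly 51 times the hardware of the multi-tenanted arrangement.
print(f"Hardware saving vs one-server-per-user: {users // servers}x")
```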

Nothing particularly exciting here.

Presentation – SSIT

I hosted a panel on Cloud Computing at the SSIT 10 Global Outsourcing Workshop last week. The rather basic slides are available here: http://www.pegasus.lse.ac.uk/presentations/CloudPanelSSIT.pdf. To see the video of the panel, see http://tinyurl.com/2vmjaqs