The real cost of using the cloud – your help needed for research supported by Rackspace and Intel.

It’s almost a given that cloud technology has the power to change the way organisations operate. Cost efficiency, increased business agility and time-saving are just some of the key associated benefits[1]. As cloud technology has matured, it is likely no longer enough for businesses simply to have cloud platforms in place as part of their operations. Optimisation and continual upgrading of the technology may be just as important over the long term. With that in mind, a central research question remains: how can global businesses maximise their use of the cloud? What are the key ingredients they need to maintain, manage and maximise their usage of cloud?

For instance, do enterprises have the technical expertise to roll out the major cloud projects that will reap the significant efficiencies and savings for their business? How can large enterprises ensure they have the right cloud expertise in place to capitalise on innovations in cloud technology and remain competitive? Finally, what are the cost implications of nurturing in-house cloud expertise vs harnessing those of a managed cloud service provider?

A colleague (Carsten Sorensen) and I are working with Rackspace® on a project (which is also sponsored by Intel®) to find out. But we need some help from IT leaders like you.

How you can help

We’re looking to interview IT decision makers/leaders in some of the UK’s largest enterprises (those with more than 1,000 employees and with a minimum annual turnover of £500m) which use cloud technology in some form, to help guide the insights developed as part of this project.

The interviews will last no more than 30 minutes and take place via telephone. Your participation in the project will also give you early access to the resulting report covering the initial key findings, and we will share subsequent academic articles with you. We follow research ethics guidelines and can ensure anonymity for you and your company (feel free to email confidentially to discuss this).

If this sounds like something you’d like to get involved in, then please email me at w.venters@lse.ac.uk.

Best wishes,

Dr Will Venters,

Dr Carsten Sorensen,

and Dr Florian Allwein.

  1. Venters, W. and E. Whitley, A Critical Review of Cloud Computing: Researching Desires and Realities. Journal of Information Technology, 2012. 27(3): p. 179-197.


Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. With the number of connected devices predicted, by some estimates, to exceed 50bn by 2020, there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things, it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform service to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard has emerged there is considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the standard Android/iPhone platforms, so increasing their value and creating a complex ecosystem of value.
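The telephone intuition above is often formalised as Metcalfe’s law – value proportional to the number of possible pairwise connections. A minimal sketch (the quadratic form is one common formalisation, offered here purely as an illustration, not a claim from the platform literature cited):

```python
def metcalfe_value(n_users: int) -> int:
    # Metcalfe's law: a network of n users supports n*(n-1)/2
    # pairwise connections, so value grows quadratically with adoption.
    return n_users * (n_users - 1) // 2

# One telephone is useless; each additional owner raises every
# existing owner's value as well as their own.
print([metcalfe_value(n) for n in (1, 2, 10, 100)])  # [0, 1, 45, 4950]
```

The super-linear growth is exactly what makes early standard wars decisive: the incumbent’s head start compounds.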

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as its internetness. If IoT evolves like mobile phones this could lock us into the platform, and changing to an alternative platform would thus incur a high material cost (assuming, like mobiles, we are unable to alter the software), since, unlike phones, these devices are not regularly upgraded. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from its platform over the long term. Also, for many IoT devices (and particularly relevant for critical national infrastructure), this distribution of control does not correspond to the distribution of risk, security and liability, which may be significant for IoT devices.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This, however, creates potential instability in the platform – making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that purchasers seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to open source, open platforms are not about the software but about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around that standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible we should seek open platforms and promote the development of standards.

(2) We must demand democratic accountability, and seek to exploit levers which ensure control over our infrastructure is reflective of need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974-2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be The Same Again. Journal of Public Administration, Research, and Theory, 2013.

History repeating: Why cloud computing could revisit the mistakes of the 1980s PC boom | TechRepublic

A conference speech I gave a couple of weeks ago is reported in a nice piece on TechRepublic…

History repeating: Why cloud computing could revisit the mistakes of the 1980s PC boom | TechRepublic.

But if you want to read a more detailed piece on this idea check out the original posting on this blog:

https://utilitycomputing.wordpress.com/2011/04/28/cloud-computing-%E2%80%93-it%E2%80%99s-so-%E2%80%9880s/

Globalization Today “Cloud as Technology – What Kind of Transformation”

Read our latest article on Cloud Technology (based on our earlier Accenture reports) in Globalization Today – http://globalizationtoday.com/february-2012/

(pages 26-33)

Our Fifth Report is out – Management implications of the Cloud

The fifth report in our Cloud Computing series for Accenture has just been published. This report looks at the impact Cloud Computing will have on the management of the IT function, and thus on the skills needed by all involved in the IT industry. The report begins by analysing the impact Cloud might have in comparison to existing outsourcing projects. It then considers the core capabilities which must be retained in a “cloud future”, how these capabilities might be managed, and the role of systems integrators in managing the Cloud.

Please use the comments form to give us feedback!

Cloud and the Future of Business 5 – Management.

Third Report – The Impact of Cloud Computing

The third report in our series for Accenture is now available by clicking the image below:

Cloud and the Future of Business: From Costs to Innovation - Part Three: Impact


In this report we consider the potential short- and long-term impact of Cloud Computing on stakeholders. Using our survey of over 1,000 executives, supported by qualitative interviews with key Cloud stakeholders, we assess this impact on organisational performance, outsourcing and the supply industry in both the short and long term.

The 7 capabilities of Cloud Computing – a review of a recent MISQE article on Cloud

Iyer, B., and Henderson, J. “Preparing for the Future: Understanding the Seven Capabilities of Cloud Computing,” MIS Quarterly Executive (9:2) 2010, pp 117-131.

———————————————

In this article Bala Iyer and John Henderson research vendor offerings on Cloud to identify “seven capabilities” of services that organizations should consider before implementing Cloud. For them, Cloud consists of a stack of IaaS, PaaS and the Application. Interestingly, they include Collaboration within their stack – reflective of their focus on Cloud’s association with mash-ups as the service provided to the user. This seems useful but does confuse things a little – it is not clear to what extent these components of the stack are integrated/exploited within a particular cloud offering.

The seven capabilities are then described:

1) Controlled Interface – the capacity for the integrated infrastructure to be responsive to change. In particular, the capability of APIs to allow the innovation of applications and services on top of the platform – and the demands of the platform owner in managing/controlling that innovation. This seems a very important point, as platform owners’ business models are dependent upon the exploitation of the platform – ranging from an open platform (like MS-Windows), where Microsoft makes money from selling the initial product licence, to closed platforms (like Apple’s iPhone), where money is extracted from application purchases on top.

2) Location Independence – the capacity for services and information assets to be controlled/exploited without reference to their location. This hooks into a range of themes – from the technical architecture of systems and their capacity for integration, to legislative demands for locations and safe-harbouring of information.

3) Sourcing Independence – this is connected with the concern for lock-in and the desire of organisations to move their applications between cloud platforms. They usefully highlight, however, that lock-in should be evaluated within the company firewall as much as outside it. Companies should evaluate their ability to move between any IT sources, and their IT services should be independent of the platform used.

4) Ubiquitous Access – this refers to the ability of a cloud service to be accessed from differing devices and platforms globally. However, they rightly extend this to include access to application programming interfaces, not simply web-site portal pages.

5) Virtual Business Environments – similar to the Virtual Machine, this perspective virtualises and integrates tools which support specific major business capabilities. Another way to look at it is as a suite of cloud services and workflows which allow the realisation of business processes/functions within a cloud-type environment. By considering such VBEs the paper hints at Business Process as a Service and the possibility of cloud services which transcend basic service provision and directly link to business process – allowing the scalability and elasticity of cloud to link to business process innovation.

6) Addressability and Traceability – this calls for the ability to verify the history, location and application of data in the cloud for traceability purposes and compliance issues. I would, however, argue that it is not simply a matter of ensuring traceability – but of being able to manage the traces recorded. Our inherent assumption that traceability is desirable can be incorrect – as Apple is learning through the problems of its desire to trace and record WiFi antenna data on iPhones, or the legal challenge and sentence against Google for its (albeit unintended) recording of users’ WiFi signals within its Google Mapping activity in Europe. Let’s remember that sometimes it is better to forget.

7) Rapid Elasticity – the self-service capability of scaling services up. Here the authors make an interesting point – highlighting the need for elasticity in the IT service AND in the contract. Simply having scalable services with pricing which does not reflect that scalability is challenging.
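To make the elasticity point concrete, here is a toy self-service scaling rule – the function name, target utilisation and fleet limits are all hypothetical, invented purely to illustrate the idea, not drawn from the paper:

```python
import math

def desired_instances(current: int, avg_cpu: float,
                      target_cpu: float = 0.6,
                      min_n: int = 1, max_n: int = 20) -> int:
    # Proportional scaling rule: size the fleet so that average CPU
    # utilisation returns to the target, clamped to fleet limits.
    desired = math.ceil(current * avg_cpu / target_cpu)
    return max(min_n, min(max_n, desired))

desired_instances(4, 0.9)   # load high -> scale out to 6
desired_instances(4, 0.3)   # load low  -> scale in to 2
```

The authors’ contractual point is that pricing needs to track this curve as readily as the infrastructure does – a contract priced for the peak undoes the elasticity of the service.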

These are important dimensions of the cloud – and add to the corpus of our knowledge. What is useful is that they are drawn from an analysis of vendor offerings – and further that they provide a road-map for strategy. I would urge those interested to get hold of the paper, which goes into much more detail on strategic approaches to Cloud and the need for specific IT skills to manage such services. What is particularly refreshing about the article is its focus on mashing together services – treating Cloud as a patchwork of services rather than focusing too heavily on the individual components.

Our 2nd Report: Meeting the challenges of cloud computing – Accenture Outlook

http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-Outlook-Meeting-the-challenges-of-cloud-computing.pdf

Our second Accenture report on Cloud Computing is about to be published! As a taster, the above link takes you to a short synopsis (published in the Accenture Outlook Points of View series). I will post a link to the full report when it is out.

While in danger of providing a summary of a summary, this second report builds on our first “Promise of Cloud Computing” report to analyse the challenges faced in a move to cloud. We identify the following key challenges:

Challenge #1: Safeguarding data security

Challenge #2: Managing the contractual relationship

Challenge #3: Dealing with lock-in

Challenge #4: Managing the cloud

Once you have read the paper I would love to hear your views – please use the add comments link at the bottom of this section (it’s quite small!) or email me directly at w.venters@lse.ac.uk.

I would also suggest you review the whole report when it is out – much of the important detail is missing from this shorter synopsis.

CohesiveFT

This is a company to watch http://www.cohesiveft.com/ – they have two products:

VPN-Cubed provides a virtual network overlay on top of a cloud provider’s network. This enables customers to keep a standard networking layer which remains consistent even if the cloud-provided network changes (e.g. IP address changes).

Elastic Server allows real-time assembly and management of software components. This allows the quick creation of easy-to-use applications which can be easily deployed to various cloud services.

However, it is the fact that together these services allow virtual machines and cloud services to be moved between cloud IaaS providers without significant real-time work which is important. If their products live up to this promise then users can move to the cheapest cloud provider with ease, driving down costs to commodity-supplier levels… and creating a spot market for cloud.
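If portability really did become frictionless, provider selection would collapse into a price comparison. A toy sketch of that spot-market logic (the provider names and prices below are invented for illustration):

```python
# Hypothetical spot prices per instance-hour; illustrative only.
spot_prices = {"provider_a": 0.12, "provider_b": 0.09, "provider_c": 0.11}

def cheapest_provider(prices: dict) -> str:
    # With workloads freely movable between IaaS providers, the
    # rational buyer simply follows the lowest current price.
    return min(prices, key=prices.get)

cheapest_provider(spot_prices)  # "provider_b"
```

It is precisely this one-line decision rule that lock-in normally prevents – which is why tools that remove migration friction matter more than either product in isolation.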

SLAs and the Cloud – the risks and benefits of multi-tenanted solutions.

Service Level Agreements (SLAs) are difficult to define in the cloud, in part because areas of the infrastructure (in particular the internet connection) are outside the control of both customer and supplier. This leads to the challenge of presenting a contractual agreement for something which is only partly in the supplier’s control. Further, as the infrastructure is shared (multi-tenanted), SLAs are more difficult to provide since they rest on capacity which must be shared.

The client using the Cloud is faced with a challenge. Small new cloud SaaS providers, which are increasing their business and attracting more clients to their multi-tenanted data-centre, are unlikely to provide as usefully defined an SLA for their services as a data-centre provider which controls all elements of the supplied infrastructure can offer. Why would they – their business is growing and an SLA is a huge risk (since the service is multi-tenanted, a breach of one SLA is probably a breach of many – the payout might seem small and poor to the client but is large in aggregate for the SaaS provider!). Further, with each new customer the demands on the data-centre, and hence the risk, increase. Hence the argument that as SaaS providers become successful, the risk of SLAs being breached might increase.
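The asymmetry between the client’s payout and the provider’s exposure is simple arithmetic – the tenant count, fee and credit percentage below are invented figures, purely to illustrate the point:

```python
def sla_exposure(n_tenants: int, monthly_fee: float, credit_pct: float):
    # In a multi-tenanted service one outage breaches every tenant's
    # SLA at once: each individual credit is trivial, the sum is not.
    per_client = monthly_fee * credit_pct
    return per_client, per_client * n_tenants

per_client, provider_total = sla_exposure(2000, 100.0, 0.10)
# per_client is poor compensation for an outage (10.0), yet the
# provider's aggregate exposure from the same incident is 20000.0
```

So the same SLA clause that looks worthless to one client looks dangerous to the provider – which is exactly why growing SaaS providers are reluctant to offer strong ones.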

There is, however, a counter-point to this growth risk: as each new customer begins to use the SaaS they will undertake their own due-diligence checks. Many will attempt to stress-test the SaaS service. Some will want to try to hack the application. As the customer base grows (and moves towards blue-chip clients) the seriousness of this testing will increase – security demands in particular will be tested as bigger and bigger companies consider the service. This presents a considerable opportunity for the individual user. For with each new customer comes the benefit of increased stress testing of the SaaS platform – and increasing development of skills within the SaaS provider. While the SLA may continue to be poor, the risk of failure of the data-centre may well diminish as the SaaS grows.

To invoke a contract is, in effect, a failure in a relationship – a breakdown in trust. Seldom does the invocation of a contract benefit either party. The aim of an SLA is thus not just to provide a contractual agreement but rather to set out the level of service on which the partnership between customer and supplier is based. In this way an SLA is about the expected quality demanded of the supplier, and with the above model the expected quality may well increase with more customers – not decrease, as is usually envisaged for cloud. SLAs for cloud providers may well be trivial and poor, but the systemic risk of using clouds is not as simplistic as is often argued. While it is unsurprising that cloud suppliers offer poor SLAs (it is not in their interest to do otherwise), this does not mean that the quality of service is, or will remain, poor.

So what should the client consider in looking at the SLA offering in terms of service quality?

1) How does the Cloud SaaS supplier manage its growth? The growth of a SaaS service means greater demand on the provider’s data-centre – and hence greater risk that the SLAs for its multi-tenanted data-centre will be breached.

2) How open is the Cloud SaaS provider in allowing testing of its services by new customers?

3) How well does the Cloud SaaS provider’s strategic ambition for service quality align with your own desires for service quality?

Obviously these questions are in addition to all the usual SLA questions.