The real cost of using the cloud – your help needed for research supported by Rackspace and Intel.

It’s almost a given that cloud technology has the power to change the way organisations operate. Cost efficiency, increased business agility and time-saving are just some of the key associated benefits[1]. As cloud technology has matured, it is likely no longer enough for businesses simply to have cloud platforms in place as part of their operations. The optimisation and continual upgrading of the technology may be just as important over the long term. With that in mind, a central research question remains: how can global businesses maximise their use of the cloud? What are the key ingredients they need to maintain, manage and maximise their usage of cloud?

For instance, do enterprises have the technical expertise to roll out the major cloud projects that will reap the significant efficiencies and savings for their business? How can large enterprises ensure they have the right cloud expertise in place to capitalise on innovations in cloud technology and remain competitive? Finally, what are the cost implications of nurturing in-house cloud expertise vs harnessing those of a managed cloud service provider?

A colleague (Carsten Sorensen) and I are working with Rackspace® on a project (also sponsored by Intel®) to find out. But we need some help from IT leaders like you.

How you can help

We’re looking to interview IT decision makers/leaders in some of the UK’s largest enterprises (those with more than 1,000 employees and with a minimum annual turnover of £500m) which use cloud technology in some form, to help guide the insights developed as part of this project.

The interviews will be no more than 30 minutes long, conducted via telephone. Your participation in the project will also give you early access to the resulting report covering the initial key findings, and we will share subsequent academic articles with you. We follow research ethics guidelines and can ensure anonymity for you and your company (feel free to email confidentially to discuss this).

If this sounds like something you’d like to get involved in, please email me at w.venters@lse.ac.uk.

Best wishes,

Dr Will Venters,

Dr Carsten Sorensen,

and Dr Florian Allwein.

  1. Venters, W. and E. Whitley, A Critical Review of Cloud Computing: Researching Desires and Realities. Journal of Information Technology, 2012. 27(3): p. 179-197.

(Photo (cc) Damien Pollet with thanks!)

Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. With the number of connected devices predicted, by some estimates, to exceed 50bn by 2020, there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things, it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform service to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard emerged there was considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the dominant Android/iPhone platforms, so increasing their value and creating a complex ecosystem of value.
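The value growth behind network effects can be sketched numerically. The figures below are purely illustrative, following the common Metcalfe-style reasoning that a network’s potential value tracks the number of possible connections between users:

```python
# Illustrative sketch of network effects: the number of possible
# pairwise connections grows as n*(n-1)/2, so "value" (on this rough
# Metcalfe-style reasoning) grows much faster than the user count.

def potential_connections(users: int) -> int:
    """Distinct pairs of users who could interact with each other."""
    return users * (users - 1) // 2

for n in (1, 2, 10, 100):
    print(f"{n:>4} users -> {potential_connections(n):>5} possible connections")
```

One telephone offers zero connections; a hundred offer nearly five thousand. This is why a widely adopted incumbent standard is so hard to displace: a rival starting with few users offers only a tiny fraction of the connective value.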

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as in its internet capability. If IoT evolves like mobile phones this could lock us into the platform, and changing to an alternative platform would thus incur a high material cost (assuming, like mobiles, we are unable to alter the software), since, unlike phones, these devices are not regularly upgraded. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from their platform over the long term. Also, for many IoT devices (and particularly relevantly for critical national infrastructure) this distribution of control does not correspond to the distribution of risk, security and liability, which may be significant for IoT devices.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This, however, creates potential instability in the platform – making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that they seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to open source, open platforms are not about the software but about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around the standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible we should seek open platforms and promote the development of standards.

(2) We must demand democratic accountability, and seek to exploit levers which ensure control over our infrastructure is reflective of need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974-2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be The Same Again. Journal of Public Administration, Research, and Theory, 2013.

Drugs enter the digital age – Details of a research project I’m part of…

A team of us at the LSE have just won £700k to look at the complex digital processes and infrastructures surrounding future medicine delivery. The following is taken from the press release (link below).

The world’s health sector has gone digital, with electronic prescriptions, digitised supply chains and personalised medicine the new buzz words.

Earlier this year, the US biotech company Proteus announced that it had raised US$172 million for its pioneering tablets containing embedded microchips. These swallowable devices collect and report biometric data and can tell if a patient has taken their medication correctly.

In a similar breakthrough, Google has recently announced a prototype contact lens which measures glucose in a user’s tears and communicates this information to a mobile phone so that patients can better manage their medication.

Both innovations illustrate the hybrid devices that medicines have now become – and highlight the cumbersome and mostly paper-based current systems that are still being used to deliver medicines.

Dr Tony Cornford from LSE’s Department of Management hopes to make some headway in this area by spending the next two years exploring digital innovations in how drugs are supplied and used.

A £700,000 grant from Research Councils UK will allow Dr Cornford and a team of co-investigators from LSE, the University of Leeds, UCL, Brunel and the Health Foundation to map emerging new fields, such as electronic prescribing systems, intelligent medicines supply chains, new diagnostic and monitoring procedures, and personalised medicines based on individual genomic profiles.

Read the full article: Drugs enter the digital age – LSE Research highlights.

Cloud Computing powered by dirty energy, report warns | Environment | theguardian.com

This Guardian article touches upon something I have been complaining about for a long time. When you run an application on your laptop which uses lots of power, you feel it – the laptop gets hot and burns your trousers. When you use a cloud service, that power is hidden in a data-centre somewhere else and you will never know the environmental damage caused. While cloud providers often argue their data-centres are cleaner and greener than old ones, the problem is that because of their services we are using them for new things we did not do a few years ago – like social media!

Social media explosion powered by dirty energy, report warns | Environment | theguardian.com.

Simplicity and cloud computing

In my recent co-authored book on cloud computing [1] we argue that one of the primary desires driving the adoption of computing as a service (as opposed to as a product, such as software and hardware organised by the purchaser) was the desire for simplicity. We even adopted the term “Simplicity as a Service” to describe the disentanglement of complexity offered by the new pay-as-you-go computing services associated with cloud computing, through, for example, more standardised contracting. Indeed, one of the primary motivations for many moves to the cloud is to simplify. Yet we stumble quickly upon a problem – while the term simplicity[2] is widely used in relation to cloud computing, we have very little understanding of what this simplicity actually means. Understanding simplicity better may help us better understand our procurement of these types of service.

In this short essay I want to unpick the concept of simplicity, then apply it back to the issue of cloud computing. I consider simplicity from three directions, which I roughly define as Modularic, Aesthetic and Systemic simplicity.

Modularic Simplicity

Is simplicity a concern for simpler mechanisms providing the same service (i.e. a quartz watch is simpler than a Swiss automatic chronograph, yet both tell the time)? To be simpler, must a device have fewer components? Or perhaps simplicity lies in the interrelation between components – the interfaces? If we consider simplicity in these terms we can seek to examine the modularity of objects – understanding how a service is composed of different services, and examining their underlying structures[3]. This is important for cloud computing, in which various technical services are often interconnected to provide a service – Netflix, for example, integrates various Amazon cloud services with its iPad app and with movie content to provide its service. By decoupling services into modules, the complexity of the constellation of modules can perhaps be better understood. Structures such as “hierarchies” are also used to keep things “simple”, and understanding such structures would help.

In this way simplicity is a calculation roughly based on counting components and their interfaces. Yet this seems, well… simplistic! For as Aristotle highlighted, wholes are “more than the sum of their parts” – there is emergence and emergent behaviour. But more than this, there is variation in the simplicity of the components themselves.
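A naive version of this component-and-interface counting can be made concrete. The service constellation below is hypothetical (loosely inspired by the Netflix example), and the scoring rule is deliberately simplistic – exactly the kind of calculation questioned above:

```python
# A toy "modularic simplicity" score: count the components in a
# service constellation plus the interfaces (dependencies) between
# them. The service names here are hypothetical placeholders.

dependencies = {
    "client_app": ["streaming", "catalogue"],  # the app calls two services
    "streaming": ["storage"],
    "catalogue": ["storage"],
    "storage": [],
}

components = len(dependencies)                           # 4 components
interfaces = sum(len(d) for d in dependencies.values())  # 4 interfaces
print(f"complexity score: {components + interfaces}")    # prints 8
```

Such a count treats every component and interface as equal, which is precisely what emergence and the varying simplicity of individual components undermine.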

Aesthetic Simplicity

One problem with “modularic simplicity” is that even the most simple modular objects can vary considerably in their “simplicity”. Take two objects made of clay – a brick and a pottery vase. If both weigh the same they likely contain the same number of atoms. Yet most people would agree the brick is simpler. The vase’s atoms are in a structure which introduces intricacy and difference despite the material itself being identical. Similarly, two apparently similar digital MP3 files – seemingly random series of 0s and 1s – can vary considerably in their simplicity when realised as music: a flute solo versus a prog-rock band.

Simplicity, then, is not inherent in the material, and any attempt to calculate simplicity by counting components and their relationships will be somewhat problematic. What then makes the vase more complex? As humans, perhaps we evaluate simplicity through our interpretation – an aesthetic concept of simplicity. This is certainly the perception of many designers, and reflects the design aspirations of Apple. From their first sales brochure’s proclamation that “Simplicity is the ultimate sophistication” [4], the company has championed the idea that computing should feel “simple” for humans – in particular, that the human should (in the words of their chief designer) “feel we can dominate [physical products]. As you bring order to complexity, you find a way to make the product defer to you. Simplicity… isn’t just visual style,… minimalism or the absence of clutter.” For Apple and their vice-president of design Jonathan Ive, simplicity is about removal of the unessential – and the reassertion of the whole (that is, the form of the final product) over the parts (the components which make up that whole) – but wholly centred on the human user.

This concept is also represented in Ockham’s razor[5] – the assumption that simpler explanations are better, despite the lack of any irrefutable logical principle that this is the case (though they are more easily tested).

A human interpretation is required – cloud computing is considered “simple” in relation to its use in doing something for humans. It can only be evaluated at the level of its use (just as an iPhone is only simple when held in the hand and used – not when taken apart and examined from within, where its myriad complexity becomes evident).

Systemic Simplicity

If modularic simplicity places the “thing” at the centre of simplicity, and if, in contrast, aesthetic simplicity places humans at the heart of defining what is simple, then perhaps we can define simplicity in terms of the interrelationship between things and people – a kind of socio-technical perspective on simplicity? A view of simplicity in terms of the complex social and technical arrangements of life through which we get things done – such as, for example, organisations.

In many ways this simplicity might be defined by its absence – the lack of simplicity of modern organisations and their technical arrangements. The role of managers is thus often seen as seeking to organise things to be “simpler”. Yet most organisations are never simple, and to aspire to make them so may be problematic. Miller[6] argued that “organisations lapse into decline precisely because they amplify and extend a single strength or function while neglecting most others. Ultimately, a rich and complex organisation becomes excessively simple – it turns into a monolithic, narrowly focused version of its former self, converting a formula for success into a path towards failure.”[7] For Miller, simplicity is an overwhelming preoccupation with a single goal, strategic activity, department or world-view – and making things simple by simplifying the organisation is therefore often problematic. This suggests that understanding what can be simplified and what cannot requires a rich appreciation of the complexity of the organisation.

Indeed, the origins of cybernetics[8] and complexity theory highlight that management must meet the complexity of a situation with a similar level of complexity in their response to it[9]. This demands that a manager’s response to organisational complexity cannot simply be simplification of their actions if they cannot similarly understand or simplify the environment within which the organisation resides.
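Ashby’s point can be given a toy illustration (the disturbances and responses below are invented for the purpose): a regulator with fewer distinct responses than the environment has distinct disturbances must leave some disturbances unmanaged.

```python
# Toy illustration of requisite variety: a manager's repertoire of
# responses must match the variety of disturbances in the environment.
# The disturbance and response names here are hypothetical.

disturbances = {"price_shock", "demand_spike", "supplier_failure"}
responses = {
    "price_shock": "hedge",
    "demand_spike": "scale_up",
}  # only two responses for three kinds of disturbance

unmanaged = disturbances - responses.keys()
print(sorted(unmanaged))  # prints ['supplier_failure']
```

Simplifying the response repertoire without simplifying (or understanding) the environment simply grows the unmanaged set.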

Simplifying without this understanding is often what managers seek to do. And in making things simple they often rely on relatively simple models of the organisation to help them make these decisions. Whether it is the organisation chart, the process diagram or the UML model, their attempts to derive simplicity are focused on such simplifications. As Stafford Beer [10] reminded us, managers become bewitched by the paper representations of their organisations as a “surrogate world we manage”, losing contact with the messiness of their world[11] and assuming simplicity in the world rather than seeking to simplify the world.

Beer goes further, highlighting that “if a simple process is applied to complicated data, then only a small portion of that data will be registered, attended to, and made unequivocal. Most of the input will remain untouched and will remain a puzzle”.

This is not to say that we should not attempt to simplify our understanding of organisations into models and representations, but that we must carefully acknowledge these models as “simple”, and ensure that we remain attuned to their alignment with the complexity of that which they represent.

When we buy cloud computing services which aim to change our organisation in some way we must be careful that we are not selecting the computing model based on a simplistic understanding of what the organisation is trying to achieve.

What can management and cloud computing learn from this?

From these three conceptualisations of simplicity we can draw some lessons for organisational managers and for cloud computing:

1) Simplicity isn’t always inherent in devices or technology; it relates to their interpretation and representation. We should seek to model simplicity in ways which reflect this.

2) Simplifying computing systems must be met with an understanding of the level of complexity of the task they are for. Selecting too simple a service is problematic[12].

3) Simplicity does not necessarily mean less complex. Rather, it can relate to the use of the service at the interface being observed. In procuring a service we should be attuned to the lack of simplicity at different levels.

© 2014 W.Venters.

[1] Willcocks, L., W. Venters and E. Whitley (2013). Moving to the Cloud Corporation. Basingstoke, Palgrave Macmillan.

[2] I acknowledge the contribution of PA consulting in raising with me a concern for better understanding simplicity.

[3] Baldwin, C. and K. Clark (2000). Design Rules: The Power of Modularity. Cambridge, MA, MIT Press.

[4] Isaacson, W. (2011). Steve Jobs, Little Brown. Page 343.

[5] http://en.wikipedia.org/wiki/Occam’s_razor

[6] Miller, D. (1993). “The Architecture of Simplicity.” The Academy of Management Review 18(1): 116-138.

[7] Miller, D. (1993). “The Architecture of Simplicity.” The Academy of Management Review 18(1): 116-138.

[8] Ashby, W. R. (1956). An introduction to cybernetics. London, Methuen & Co Ltd. Churchman, C., R. Ackoff and E. Arnoff (1957). Introduction to Operations Research. New York, Wiley.

[9] This is inherent in Ashby’s law of “requisite variety” – though different terms are used.

[10] Beer, S. (1984). “The Viable System Model: Its provenance, development, methodology and pathology.” Journal of the Operational Research Society 35: 7-36.

 

[11] Pickering, A. (2013). Living in the material world. Materiality and Space: Organizations, Artefacts and Practices. F.-X. de Vaujany and N. Mitev, Palgrave Macmillan.

[12] I discuss this in much more detail through the term “Variety” in Venters, W. and E. Whitley (2012). “A Critical Review of Cloud Computing: Researching Desires and Realities.” Journal of Information Technology 27(3): 179-197.

 

Cloud World Forum in June – Book Now

I am booked to speak at the Cloud World Forum in June. The registration pages are now open, so please do book and I will catch you there… It’s an impressive lineup:

2014 Speakers | Cloud World Forum.

Latest article published: Strategic Outsourcing: An International Journal | Cloud Sourcing and Innovation: Slow Train Coming? A Composite Research Study

The latest article from our long-running Cloud Computing research stream has just been published…

Leslie Willcocks, Will Venters, Edgar A. Whitley (2013) “Cloud Sourcing and Innovation: Slow Train Coming? A Composite Research Study”, Strategic Outsourcing: An International Journal, Vol. 6 Iss: 2

ABSTRACT:

Purpose – Although cloud computing has been heralded as driving the innovation agenda, there is growing evidence that cloud is actually a “slow train coming”. The purpose of this paper is to seek to understand the factors that drive and inhibit the adoption of cloud particularly in relation to its use for innovative practices.

Design/methodology/approach – The paper draws on a composite research base including two detailed surveys and interviews with 56 participants in the cloud supply chain undertaken between 2010 and 2013. The insights from this data are presented in relation to a set of antecedents to innovation and a cloud sourcing model of collaborative innovation.

Findings – The paper finds that while some features of cloud computing will hasten the adoption of cloud and its use for innovative purposes by the enterprise, there are also clear challenges that need to be addressed before cloud can be successfully adopted. Interestingly, our analysis highlights that many of these challenges arise from the technological nature of cloud computing itself.

Research limitations/implications – The research highlights a series of factors that need to be better understood for the maximum benefit from cloud computing to be achieved. Further research is needed to assess the best responses to these challenges.

Practical implications – The research suggests that enterprises need to undertake a number of steps for the full benefits of cloud computing to be achieved. It suggests that collaborative innovation is not necessarily an immediate consequence of adopting cloud computing.

Originality/value – The paper draws on an extensive research base to provide empirically informed analysis of the complexities of adopting cloud computing for innovation.

ITOe – Speakers – Nordic Innovation & Agility

I’ll be talking about cloud computing and outsourcing at the Nordic Innovation and Agility forum in Stockholm in April…

ITOe – Speakers – Nordic Innovation & Agility.

The title of my talk will be “The business of cloud computing – innovation and agility” with my focus on the way cloud computing can support innovation and drive agility in businesses. Along the way I will (probably) discuss cloud computing and the Large Hadron Collider, Smart-cities and Big-data – exploring how high capacity and agile computing can support agile business practices and innovation.

I hope you can make it!

 

Forbes has four predictions for 2013… I challenge them all

Over on Forbes, Antonio Piraino makes four predictions for the year ahead:

Cloud Computing: Four Predictions For The Year Ahead – Forbes.

I want to discuss my opinion of each of them.

1) “The cloud wars are (still) rumbling and they’re getting louder”. 

I sort of agree with the sentiment of this: that companies will be looking for value-add from cloud providers rather than simple metrics (such as network, storage or service). However, I completely disagree that a battle will unfold next year – I think this is a growing market and we are seeing clear differentiation between offerings. The giants in this space are, in my opinion, desperately trying to carve out a non-competitive space in the growing cloud market, rather than going head-to-head in the battle the author describes. That way lies only commodity offerings and a race to the bottom. I suspect that differentiation will be a more likely tactic than “war”.

2) “A titanic cloud outage will create a domino effect”. 

The article argues that “As more IT resources are moved to the cloud, the chance of a major outage for a corporate enterprise… becomes exponentially more likely to occur”. Really? How on earth can the increasing outsourcing of services lead to an exponential increase in risk? The risk depends on a number of factors:

1) The capability of the cloud provider to manage the service (not dependent on the number of services managed).

2) The capability of the cloud user to contract effectively for risk (again, not dependent on the number of services outsourced).

3) The multiplexing of services on a single site – this does depend on the number of cloud users, but whether the risk increases is an architectural question. Five companies sharing one building are certainly not at exponentially greater individual risk than if they each had their own building. The analogy of airline accidents versus car accidents comes to mind: when a plane fails it looks catastrophic, but more people die on the roads.

The article goes on to say that “If an unexpected cloud outage were to take place within the context of [financial services trades], the banks would stand to be heavily penalized for incompliance” – I absolutely agree, because if they weren’t adopting defensive approaches to moving to the cloud they would be incompetent. As a recent Dell think-tank I was part of discussed, banks are already moving to the cloud, even for mission-critical activity, but they are working with cloud providers to ensure that they get the same level of protection and assurance as they would in-house. As in any outsourcing relationship, it is incumbent on the purchaser to understand the risk and manage it. Indeed, banks should be evaluating the risk of all their ICT, whether in-house or external – as the recent high-profile failures at NatWest demonstrated, in-house IT can be risky too!

3) A changing role for the CIO. 

Here I agree with the article. Governments will get more involved in regulation relevant to cloud, and this will create new opportunities. Whether CIOs will act as “international power-brokers, ambassadors even diplomats”, as the article suggests, depends on how they move to the cloud – many cloud providers’ intention is to create cloud offerings which do not demand an understanding of international law. I also doubt that the “human-responsibilities will shrink” – this only holds if organisations see cloud as outsourcing rather than opportunity. Many CIOs are probably realising that while they are losing headcount in certain areas (e.g. data-centre administrators), they need skills in new applications only possible with the availability of cloud. How many CIOs imagined managing data-analytics and social-networking specialists a few years ago?

4) Death of the desktop as we know it.

“The expectation is that an employee’s mobile device is now their workspace, and that they are accountable for contributing to work on a virtually full time basis…” I am intrigued by the question of what is going to happen to the desktop PC. I know I use my smartphone and iPad a lot, but usually for new things rather than the same activities I use my laptop for. For example, I annotate PDFs on the train, read meeting notes during meetings, even look at documents in the bath. These are in addition to the use of my desktop PC or laptop (which I use for writing and for the host of applications I require for my work, and for which I require a keyboard and a solid operating system). Yes, I bring my own device to work, but I demand a “desktop”-like environment to run on it (i.e. integrated applications and services), in which case the management of the virtual desktop applications is as complex as that of the physical assets (save the plugging in and purchasing). And the idea that I will use my own device on a “virtually full time basis” is clearly nonsense… health and safety would not allow a person using a screen all day to have a smartphone or tablet as that screen.

I don’t deny that PCs will change, and that the technological environment of many industries is changing. But my question is whether this will increase or decrease the amount of work for the CIO. My earlier post (Cloud Computing: it’s so 1980s) pointed out that the demand for applications within industry has not remained static or decreased – we will only increase our demand for applications. The question then is whether managing them will become easier or more difficult. For me the jury is still out, but if pushed, it is for this reason that I think Windows 8 could be successful in this space.

I believe many of us are waiting for a device which capitalises on the benefits of tablets and smartphones, but which will run the complex ERP and office applications our businesses have come to rely upon. Sure, we could try to make do with an iPad or Android device, but Windows 8 promises the opportunity to use the full, industry-proven applications we already have in a new way. I anticipate seeing lots more of these Windows 8 devices in the next few years – though with many of the applications becoming much lighter on the desktop. After all, the iPad and smartphone demonstrated the importance of locally running apps, not of cloud services… they were just smaller and easier-to-manage applications.

 

 

Latest Article | Interventionist grid development projects: a research framework based on three frames

My latest research article has just been published. This one focuses on grid computing within large projects:

Will Venters, Avgousta Kyriakidou-Zacharoudiou, (2012) “Interventionist grid development projects: a research framework based on three frames“, Information Technology & People, Vol. 25 Iss: 3, pp.300 – 326

Abstract:

Purpose – This paper seeks to consider the collaborative efforts of developing a grid computing infrastructure within problem-focused, distributed and multi-disciplinary projects – which the authors term interventionist grid development projects – involving commercial, academic and public collaborators. Such projects present distinctive challenges which have been neglected by existing e-science research and information systems (IS) literature. The paper aims to define a research framework for understanding and evaluating the social, political and collaborative challenges of such projects.

Design/methodology/approach – The paper develops a research framework which extends Orlikowski and Gash’s concept of technological frames to consider two additional frames specific to such grid projects: bureaucratic frames and collaborator frames. These are used to analyse a case study of a grid development project within healthcare which aimed to deploy a European data-grid of medical images to facilitate collaboration and communication between clinicians across the European Union.

Findings – Grids are shaped to a significant degree by the collaborative practices involved in their construction, and for projects involving commercial and public partners such collaboration is inhibited by the differing interpretive frames adopted by the different relevant groups.

Research limitations/implications – The paper is limited by the nature of the grid development project studied, and the subsequent availability of research subjects.

Practical implications – The paper provides those involved in such projects, or in policy around such grid developments, with a practical framework by which to evaluate collaborations and their impact on the emergent grid. Further, the paper presents lessons for future such Interventionist grid projects.

Originality/value – This is a new area for research, but one which is becoming increasingly important as data-intensive computing begins to emerge as foundational to many collaborative sciences and enterprises. The work builds on significant literature in e-science and IS, drawing it into this new domain. The research framework developed here, drawn from the IS literature, begins a new stream of systems development research with a distinct focus on bureaucracy, collaboration and technology within such interventionist grid development projects.