Clouds and Coffee: User affordance and information infrastructure

Some desktop coffee machines (e.g. figure 1) are now connected to the Internet (Pritchard, 2012). Such devices are enrolled within increasingly complex information infrastructures involving cloud services. This form of entanglement creates mazes of unexpected, heterogeneous opportunities and risks (Latour, 2003), yet users' ability to perceive such opportunity and risk is limited by their lack of visceral understanding of the entanglement. It is this understanding of the cloud by the user which is the focus of this blog posting. Such a coffee maker “calls out” (Gibson, 1979) to users with a simple offering – its ability to make coffee. Its form attests to this function, with buttons for espresso and latte, nozzles for dispensing drinks, and trays to catch the drips. To any user experienced in modern coffee this machine affords (Norman, 1990; Norman, 1999) the provision of coffee in its form and function, keeping its information infrastructure hidden from view – only an engineer can understand that this machine is communicating.

Yet such assemblages of plastic, metal and information technology are “quasi-objects” (Latour, 2003) – complicated cases requiring political assemblies, no longer “matters of fact” but instead “states of affairs” (Latour, 2003). Such a coffee maker is a drinks-dispensing service (representing a service-dominant logic (Vargo & Lusch, 2004; Vargo, 2012)), provided through an assembly of material and immaterial objects whose boundary and ultimate purpose remain unclear. While the device above only communicates about its maintenance, other machines may go further. Such a machine’s user, hankering for an espresso to get him through a boring conference, may be kept unaware that the infrastructure is monitoring his choices to influence global coffee production, to ensure the output is sufficiently tepid and dull to damage his economic productivity, or that the device is recording and transmitting his every word. He may be annoyed to discover his coffee is stronger than his female colleagues’ as gender profiling based on image recognition decides the “right” coffee for him. He may be horrified that the device ceases to work at the very moment of need because of a fault in contract payments within the accounts department – perhaps caused by their tepid, weak coffee.

Similarly, companies involved in providing the coffee and milk for such machines might become enrolled in this reconfiguration (Normann, 2001) of the coffee service, an enrolment which could reconfigure the knowledge asymmetries within the existing market. Suddenly an engineering company which previously made plastic and metal coffee machines is in a position to understand coffee demand better than coffee growers or retailers. The machine itself could negotiate automatically on local markets for its milk provision, compare material prices with similar machines in other markets, and even alter the price of coffee for consumers based on local demand. Through the enrolment of information infrastructures within a coffee service, knowledge of the coffee market shifts.

All this has already happened in the market for music (increasingly controlled by a purveyor of sophisticated walkmen using a cloud service) and more recently ebooks (increasingly controlled by a book retailer and its sophisticated book readers). Now imagine the emergence of the smart city, with huge numbers of devices from street-lights to refrigerators connected to the cloud. How will the users of such smart cities understand what they are interacting with – the quasi-objects they used to consider objects? How will such objects afford their informational uses alongside their more usual functions?

At the centre of this reconfiguration of material objects is a computer system residing in the cloud, aggregating information. It is this aggregation of data from devices which may be central to the lessons of the cloud for smart cities.

(© Will Venters 2012).


Gibson JJ (1979) The Ecological Approach to Visual Perception. Houghton Mifflin, London.

Latour B (2003) Is Re-modernization Occurring-And If So, How to Prove It? Theory, Culture & Society 20(2), 35-48.

Norman D (1990) The Design of Everyday Things. The MIT Press, London.

Norman D (1999) Affordance, Conventions, and Design. Interactions ACM 6(3), 38-43.

Normann R (2001) Reframing Business: When the map changes the landscape. John Wiley & Sons Ltd, Chichester.

Pritchard S (2012) Mobile Comms: Coffee and TV. IT Pro, Dennis Publishing Ltd, London.

Vargo S and Lusch R (2004) Evolving to a New Dominant Logic for Marketing. The Journal of Marketing 68(1), 1-17.

Vargo SL (2012) Service-Dominant Logic: Reflections and Directions. Unpublished PowerPoint, Warwick, UK.

Our Fifth Report is out – Management implications of the Cloud

The fifth report in our Cloud Computing series for Accenture has just been published. This report looks at the impact Cloud Computing will have on the management of the IT function, and thus the skills needed by all involved in the IT industry. The report begins by analysing the impact Cloud might have in comparison to existing outsourcing projects. It then considers the core capabilities which must be retained in a “cloud future”, how these capabilities might be managed, and the role of systems integrators in managing the Cloud.

Please use the comments form to give us feedback!

Cloud and the future of Business 5 – Management.

Cloud Computing – it’s so ‘80s.

For Vint Cerf[1], the father of the internet, Cloud Computing represents a return to the days of the mainframe, when service bureaus rented their machines by the hour to companies which used them for payroll and other similar tasks. Such comparisons focus on the architectural similarities between centralised mainframes and Cloud Computing – cheaply connecting to an expensive resource “as a service” through a network. But cloud is more about taking already low-cost computing and providing it, in bulk through data-centres, at even lower cost. A better analogy than the mainframe, then, is the humble micro-computer and the revolution it brought to corporate computing in the early 1980s.

When micros were launched, many companies operated using mini or mainframe computers which were cumbersome, expensive and needed specialist IT staff to manage them[1]. Like Cloud Computing today, compared with these existing computers the new micros offered ease of use, low cost and apparently low risk, which appealed to business executives seeking to cut costs, and to SMEs unable to afford minis or mainframes[2]. Usage exploded: in the period from the launch of the IBM PC in 1981 to 1984, the proportion of companies using PCs increased dramatically from 8% to 100%[3] as the cost and opportunity of the micro became apparent. Again, as with the cloud[4], these micros were marketed directly to business executives rather than IT staff, and were accompanied by a narrative that they would enable companies to dispense with heavy mainframes and the IT department for many tasks – doing them more quickly and effectively. Surveys from that time suggested accessibility, speed of implementation, response time, independence and self-development were the major advantages of the PC over the mainframe[5] – easily recognisable in the hyperbole surrounding cloud services today. Indeed Nicholas Carr’s pronouncement of “The End of Corporate Computing”[6] would probably have resonated well in the early 1980s, when the micro looked set to replace the need for corporate IT. Indeed in 1980 over half the companies in a sample claimed no IT department involvement in the acquisition of PCs[3].

But problems emerged from the wholesale, uncontrolled adoption of the micro, and by 1984 only 2% of those sampled did not involve the IT department in PC acquisition[3]. The proliferation of PCs meant that in 1980 as many as 32% of IT managers were unable to estimate the number of PCs within their company[3], and few could provide any useful support for those who had purchased them.

Micros ultimately proved cheap individually but expensive en masse[2] as their use exploded and new applications for them were discovered. In addition to this increased use, IT professionals worried about the lack of documentation (and thus poor opportunity for maintenance), poor data management strategies, and security issues[7]. New applications proved incompatible with others (“the time-bomb of incompatibility”[2]), and different system platforms (e.g. CP/M, UNIX, MS-DOS, OS/2, Atari, Apple …) led to redundancy and communication difficulties between services, and to the failure of many apparently unstoppable software providers – household names such as Lotus, Digital Research, WordStar, Visi and dBase[8].

Ultimately it was the IT department which brought sense to these machines, connecting them together for useful work with compatible applications – helped by the emergence of companies such as Novell and Microsoft to bring order to the chaos[8].

Drawing lessons from this history for Cloud Computing is useful. The strategic involvement of IT services departments is clearly required. Such involvement should focus not on the current cost-saving benefits of the cloud, but on the strategic management of a potentially escalating use of Cloud services within the firm. IT services must get involved in the narrative surrounding the cloud – ensuring their message is neither overly negative (and thus appearing to reflect a vested interest in the status quo) nor overly optimistic, as potential problems exist. Either way, the lessons of the microcomputer are relevant again today. Indeed Keen and Woodman argued in 1984 that companies needed the following four strategies for the micro:

1) “Coordination rather than control of the introduction.

2) Focusing on the longer-term technical architecture for the company’s overall computing resources, with personal computers as one component.

3) Defining codes for good practice that adapt the proven disciplines of the [IT industry] into the new context.

4) Emphasis on systematic business justification, even of the ‘soft’ and unquantifiable benefits that are often a major incentive for and payoff of using personal computers” [2]

It would be wise for companies contemplating a move to the cloud to consider this advice carefully – replacing “personal computer” with “cloud computing” throughout.

(c)2011 Will Venters, London School of Economics. 

[1] P. Ceruzzi, A History of Modern Computing. Cambridge, MA: MIT Press, 2002.

[2] P. G. W. Keen and L. Woodman, “What to do with all those micros: First make them part of the team,” Harvard Business Review, vol. 62, pp. 142-150, 1984.

[3] T. Guimaraes and V. Ramanujam, “Personal Computing Trends and Problems: An Empirical Study,” MIS Quarterly, vol. 10, pp. 179-187, 1986.

[4] M. Benioff and C. Adler, Behind the Cloud – The Untold Story of How Salesforce.com Went from Idea to Billion-Dollar Company and Revolutionized an Industry. San Francisco, CA: Jossey-Bass, 2009.

[5] D. Lee, “Usage Patterns and Sources of Assistance for Personal Computer Users,” MIS Quarterly, vol. 10, pp. 313-325, 1986.

[6] N. Carr, “The End of Corporate Computing,” MIT Sloan Management Review, vol. 46, pp. 67-73, 2005.

[7] D. Benson, “A Field Study of End User Computing: Findings and Issues,” MIS Quarterly, vol. 7, pp. 35-45, 1983.

[8] M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, MA: MIT Press, 2003.

Cloud and the Future of Business: From Costs to Innovation

I have not been updating this blog for a while as I have been busy writing commercial papers on Cloud Computing. The first of these, for Accenture, has just been published and is available here

http://www.outsourcingunit.org/publications/cloudPromise.pdf

The report outlines our “Cloud Desires Framework”, in which we aim to explain the technological direction of Cloud in terms of four dimensions of the offerings – Equivalence, Abstraction, Automation and Tailoring.

Equivalence: The desire to provide services which are at least equivalent in quality to that experienced with a locally running service on a PC or server.

Abstraction: The desire to hide unnecessary complexity of the lower levels of the application stack.

Automation: The desire to automatically manage the running of a service.

Tailoring: The desire to tailor the provided service for specific enterprise needs.

(c) Willcocks, Venters, Whitley 2011.

By applying these dimensions to the different types of cloud service (SaaS, PaaS, IaaS and hosted services (often ignored, but crucially cloud-like)) it is possible to distinguish the core benefits of each from the “value-add” differences. Crucially, the framework allows simple comparison between services offered by different companies by focusing on the important desires rather than the unimportant technical differences.
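As a rough illustration of how such a comparison might work in practice (this is my own sketch, not part of the report), each offering can be scored against the four desires and compared dimension by dimension. The services and scores below are entirely hypothetical.

```python
from dataclasses import dataclass

# The four "Cloud Desires" used as comparison dimensions.
DESIRES = ("equivalence", "abstraction", "automation", "tailoring")

@dataclass
class CloudOffering:
    """A cloud service scored 0-5 against each desire (scores are illustrative)."""
    name: str
    model: str  # e.g. "SaaS", "PaaS", "IaaS", "Hosted"
    equivalence: int
    abstraction: int
    automation: int
    tailoring: int

def compare(a: CloudOffering, b: CloudOffering) -> None:
    """Print a dimension-by-dimension comparison of two offerings."""
    print(f"{'desire':<12}{a.name:>15}{b.name:>15}")
    for d in DESIRES:
        print(f"{d:<12}{getattr(a, d):>15}{getattr(b, d):>15}")

# Hypothetical example: a SaaS CRM versus a hosted equivalent.
saas = CloudOffering("SaaS CRM", "SaaS", equivalence=4, abstraction=5, automation=5, tailoring=2)
hosted = CloudOffering("Hosted CRM", "Hosted", equivalence=5, abstraction=2, automation=2, tailoring=4)
compare(saas, hosted)
```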

Take a look at the report – and let me know what you think!

SLAs and the Cloud – the risks and benefits of multi-tenanted solutions.

Service Level Agreements (SLAs) are difficult to define in the cloud, in part because areas of the infrastructure (in particular the internet connection) are outside the scope of either customer or supplier. This leads to the challenge of presenting a contractual agreement for something which is only partly in the supplier’s control. Further, as the infrastructure is shared (multi-tenanted), SLAs are more difficult to provide since they rest on capacity which must be shared.

The client using the Cloud is faced with a challenge. Small new cloud SaaS providers, which are increasing their business and attracting more clients to their multi-tenanted data-centre, are unlikely to provide as usefully defined an SLA for their services as a data-centre provider which controls all elements of the supplied infrastructure can offer. Why would they? Their business is growing and an SLA is a huge risk (since the service is multi-tenanted, a breach of one SLA is probably a breach of many – the payout might seem small and poor to each client but is large for the SaaS provider!). Further, with each new customer the demands on the data-centre, and hence the risk, increase. Hence the argument that as SaaS providers become successful, the risk of SLAs being breached might increase.
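To make that asymmetry concrete, here is a back-of-the-envelope sketch (all figures invented for illustration): a single outage in a multi-tenanted data-centre breaches every tenant’s SLA at once, so a service credit that looks trivial to each client aggregates into a substantial liability for the provider.

```python
# Hypothetical figures: a multi-tenanted SaaS provider offering a typical
# "service credit" SLA (a percentage of the monthly fee refunded per breach).
tenants = 2_000        # customers sharing the same data-centre
monthly_fee = 500.0    # average monthly fee per tenant
credit_rate = 0.10     # 10% service credit for an SLA breach

# One outage breaches the SLA for every tenant simultaneously.
credit_per_tenant = monthly_fee * credit_rate
total_payout = credit_per_tenant * tenants

print(f"Credit per tenant:  {credit_per_tenant:.2f}")   # 50.00 - small for the client
print(f"Provider liability: {total_payout:.2f}")        # 100000.00 - large for the provider
```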

There is, however, a counter-point to this growth risk: as each new customer begins to use the SaaS they will undertake their own due-diligence checks. Many will attempt to stress test the SaaS service. Some will want to try to hack the application. As the customer base grows (and moves towards blue-chip clients) the seriousness of this testing will increase – security demands in particular will be tested as bigger and bigger companies consider the service. This presents a considerable opportunity for the individual user, for with each new customer comes the benefit of increasing stress testing of the SaaS platform – and increasing development of skills within the SaaS provider. While the SLA may continue to be poor, the risk of failure of the data-centre may well diminish as the SaaS grows.

To invoke a contract is, in effect, a failure in a relationship – a breakdown in trust. Seldom does the invocation of a contract benefit either party. The aim of an SLA is thus not just to provide a contractual agreement but rather to set out the level of service on which the partnership between customer and supplier is based. In this way an SLA is about the expected quality demanded of the supplier, and with the above model the expected quality may well increase with more customers – not decrease, as is usually envisaged for cloud. SLAs from cloud providers may well be trivial and poor, but the systemic risk of using clouds is not as simple as is often argued. While it is unsurprising that cloud suppliers offer poor SLAs (it is not in their interest to do otherwise), it does not mean that the quality of service is, or will remain, poor.

So what should the client consider in looking at the SLA offering in terms of service quality?

1) How does the Cloud SaaS supplier manage its growth? The growth of a SaaS service means greater demand on the provider’s multi-tenanted data-centre, and hence greater risk that its SLAs will be breached.

2) How open is the Cloud SaaS provider in allowing testing of its services by new customers?

3) How well does the Cloud SaaS provider’s strategic ambition for service quality align with your own desires for service quality?

Obviously these questions are in addition to all the usual SLA questions.

Lock-ins, SLAs and the Cloud

One of the significant concerns in entering the cloud is the potential for lock-in with a cloud provider (though clearly you otherwise remain locked in with your own IT department as the sole provider).

The cost of moving from one provider to another is a significant obstacle to cloud penetration – if you could change provider easily and painlessly you might be more inclined to take the risk. Various services have emerged to try to attack this problem, CloudSwitch being one which created a considerable buzz at the Structure 2010 conference. Their service aims to provide a software means to transfer enterprise applications from a company’s data centre into the cloud (and between cloud providers). Whether it can live up to expectations we do not yet know, but CloudSwitch is attempting to provide a degree of portability much desired by clients – and probably much feared by cloud service providers, whose business would be reduced to that of utility suppliers if it succeeds.

But this links into another interesting conversation I had with a media executive last week. They mentioned that since cloud virtual machines are so cheap, they often (effectively) host services across a number of suppliers to provide their own redundancy and thus ignore the SLA. If one service goes down they can switch quickly (using load balancers etc.) to another utility supplier. Clearly this only works for commodity IaaS and for relatively simple content distribution (rather than transaction processing), but it is a compelling model: why worry about choosing one cloud provider and being locked in or risking a poor SLA – choose them all.
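A minimal sketch of the approach the executive described (the endpoints and timings below are hypothetical): mirror the same content on several commodity IaaS providers and have a simple health check route requests to whichever copy is currently responding, rather than relying on any single provider’s SLA. In practice this logic would sit in a load balancer or DNS failover service rather than in application code, but the principle is the same.

```python
import urllib.request
import urllib.error

# Hypothetical mirrors of the same static content on different IaaS providers.
PROVIDERS = [
    "https://content.provider-a.example/asset.json",
    "https://content.provider-b.example/asset.json",
    "https://content.provider-c.example/asset.json",
]

def fetch_with_failover(urls, timeout=2.0):
    """Try each provider in turn and return the first successful response body."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except (urllib.error.URLError, OSError):
            continue  # provider down or unreachable - fail over to the next one
    raise RuntimeError("all providers unavailable")

if __name__ == "__main__":
    print(fetch_with_failover(PROVIDERS)[:200])
```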

Structure 2010: Akamai “Doing Terabit Events” (Thanks, World Cup)

Akamai are an interesting company which highlights the problems of latency within the cloud. But also check out the work going on at Stanford on OpenFlow (http://www.openflowswitch.org/), which provides a similar response to latency at the data-centre / campus network level by centralising the control of the network’s routers and switches to best manage the flow of traffic. This work also reduces the complexity of the network and allows more specific and intelligent control of network flows than is achievable with existing routing tables.
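To illustrate the idea of centralised flow control, here is a conceptual sketch in plain Python (not the actual OpenFlow protocol or API): a single controller with a global view computes forwarding decisions and pushes simple match-to-action rules down to the switches, instead of each switch maintaining its own routing table.

```python
# Conceptual sketch of centralised flow control (not the OpenFlow protocol itself).

class Switch:
    """A simple forwarding element: it only applies rules installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination) -> action (output port)

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Packets for unknown destinations are referred to the controller.
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Central controller holding the network-wide view."""
    def __init__(self, switches):
        self.switches = switches

    def handle_unknown_flow(self, switch, dst, chosen_port):
        # In a real controller the port would come from a topology-wide path
        # computation; here it is simply supplied by the caller.
        switch.install_rule(dst, chosen_port)

# Usage: the first packet to an unknown destination is referred to the controller,
# which installs a rule; subsequent packets are forwarded by the switch alone.
s1 = Switch("edge-1")
ctrl = Controller([s1])
print(s1.forward("10.0.0.5"))                       # -> "send-to-controller"
ctrl.handle_unknown_flow(s1, "10.0.0.5", chosen_port=3)
print(s1.forward("10.0.0.5"))                       # -> 3
```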