Cloud Computing – it’s so ‘80s.

For Vint Cerf[1], the father of the internet, Cloud Computing represents a return to the days of the mainframe, when service bureaus rented their machines by the hour to companies who used them for payroll and similar tasks. Such comparisons focus on the architectural similarity between centralised mainframes and the cloud – cheaply connecting to an expensive resource “as a service” through a network. But the cloud is less about renting a scarce, expensive resource than about delivering already low-cost computing, in bulk through data centres, at even lower cost. A better analogy than the mainframe, then, is the humble micro-computer and the revolution it brought to corporate computing in the early 1980s.

When micros were launched, many companies relied on mini or mainframe computers which were cumbersome, expensive and needed specialist IT staff to manage them[1]. Like Cloud Computing today, the new micros offered ease of use, low cost and apparently low risk compared with these existing machines, which appealed to business executives seeking to cut costs and to SMEs unable to afford minis or mainframes[2]. Usage exploded: between the launch of the IBM PC in 1981 and 1984 the proportion of companies using PCs rose dramatically from 8% to 100%[3] as the cost and opportunity of the micro became apparent. Again, as with the cloud[4], micros were marketed directly to business executives rather than IT staff, accompanied by a narrative that they would let companies dispense with heavy mainframes and the IT department for many tasks – doing them quicker and more effectively. Surveys from the time suggested accessibility, speed of implementation, response time, independence and self-development were the major advantages of the PC over the mainframe[5] – easily recognisable in the hyperbole surrounding cloud services today. Nicholas Carr’s current pronouncement of the end of corporate IT[6] would probably have resonated well in the early 1980s, when the micro looked set to replace the need for corporate IT. Indeed, in 1980 over half the companies in one sample claimed no IT department involvement in the acquisition of PCs[3].

But problems emerged from this wholesale, uncontrolled adoption of the micro, and by 1984 only 2% of those sampled did not involve the IT department in PC acquisition[3]. The proliferation of PCs meant that in 1980 as many as 32% of IT managers were unable to estimate the proportion of PCs within their company[3], and few could provide any useful support for those who had purchased them.

Micros ultimately proved cheap individually but expensive en masse[2] as their use exploded and new applications for them were discovered. Beyond this growth in use, IT professionals worried about the lack of documentation (and thus poor opportunity for maintenance), poor data-management strategies, and security issues[7]. New applications proved incompatible with others (“the time-bomb of incompatibility”[2]), and the mix of system platforms (e.g. CP/M, UNIX, MS-DOS, OS/2, Atari, Apple …) led to redundancy, to communication difficulties between services, and to the failure of many apparently unstoppable software providers – household names such as Lotus, Digital Research, WordStar, Visi and dBase[8].

Ultimately it was the IT department which brought sense to these machines, connecting them together for useful work with compatible applications – and companies such as Novell and Microsoft emerged to bring order to the chaos[8].

Drawing lessons from this history for Cloud Computing is useful. The strategic involvement of IT services departments is clearly required. Such involvement should focus not on the immediate cost-saving benefits of the cloud, but on the strategic management of a potentially escalating use of cloud services within the firm. IT services must get involved in the narrative surrounding the cloud – ensuring their message is neither overly negative (and thus appearing to defend a vested interest in the status quo) nor overly optimistic, since potential problems exist. Either way, the lessons of the microcomputer are relevant again today. Indeed, Keen and Woodman argued in 1984 that companies needed four strategies for the micro:

1) “Coordination rather than control of the introduction.

2) Focusing on the longer-term technical architecture for the company’s overall computing resources, with personal computers as one component.

3) Defining codes for good practice that adapt the proven disciplines of the [IT industry] into the new context.

4) Emphasis on systematic business justification, even of the ‘soft’ and unquantifiable benefits that are often a major incentive for and payoff of using personal computers” [2]

It would be wise for companies contemplating a move to the cloud to consider this advice carefully – replacing “personal computer” with “cloud computing” throughout.

(c)2011 Will Venters, London School of Economics. 

[1] P. Ceruzzi, A History of Modern Computing. Cambridge, MA: MIT Press, 2002.

[2] P. G. W. Keen and L. Woodman, “What to do with all those micros: First make them part of the team,” Harvard Business Review, vol. 62, pp. 142-150, 1984.

[3] T. Guimaraes and V. Ramanujam, “Personal Computing Trends and Problems: An Empirical Study,” MIS Quarterly, vol. 10, pp. 179-187, 1986.

[4] M. Benioff and C. Adler, Behind the Cloud: The Untold Story of How Salesforce.com Went from Idea to Billion-Dollar Company and Revolutionized an Industry. San Francisco, CA: Jossey-Bass, 2009.

[5] D. Lee, “Usage Patterns and Sources of Assistance for Personal Computer Users,” MIS Quarterly, vol. 10, pp. 313-325, 1986.

[6] N. Carr, “The End of Corporate Computing,” MIT Sloan Management Review, vol. 46, pp. 67-73, 2005.

[7] D. Benson, “A Field Study of End User Computing: Findings and Issues,” MIS Quarterly, vol. 7, pp. 35-45, 1983.

[8] M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, MA: MIT Press, 2003.

7 thoughts on “Cloud Computing – it’s so ‘80s.”

  1. I think Vint’s analogy is a good one. Most people talk of clouds when what they are actually offering is client-server by a different name. True cloud computing must surely be p2p-based.


    • Thanks for the comment Dave, but why do you see Cloud as concerning P2P? For me such a view focuses too heavily upon technical architecture. As Larry Ellison famously said, it’s all just computers, networks and software. Wherever or however our “servers” or “clients” are connected, they still operate in a similar way.

      What seems different for Cloud Computing, however, is the narrative of cheaper, easier, simpler. This is where I see the mainframe analogy falling down, since it returns to a narrative of computing rather than of service.


  2. Hi Will

    It is certainly true that technical architecture is secondary. My point is that in the oft-used sense of the term “cloud computing” there is actually nothing new: we have had this for over thirty years in the form of client-server architecture (sure, under the covers, p2p works as CS too, but the notion of clients and servers is very fluid). It could therefore be argued that cash-point tills are examples of “cloud computing” too: if so, then fine, but that is less interesting.

    Imagine being able to create your own true cloud that you really do own. Files don’t get sent to a remote, anonymous and unauthorised (by you) server thousands of miles away. A cloud then becomes a community, or mini-internet, totally secured against all those whom you, as owner, do not wish to participate.

    That’s p2p, and I believe it is much newer and more interesting technology. I’d love to know your thoughts on that (offline if you prefer, though happy online too 🙂 )


    • P2P – or Community Cloud, as it is also known (Mell and Grance 2009) – is certainly an interesting model. It fits closely with what we have termed the “Cloud Corporation” in our forthcoming Accenture report (the 4th of the series if memory serves – only two are out at the moment). Obviously such an approach can be successful and robust – SETI@home and BitTorrent are good examples – indeed with BitTorrent that robustness has been a menace to IP holders everywhere!

      I would, however, offer a few words of caution.

      1) Latency: The problems currently dealt with through P2P are trivially parallelisable – storage, simple processing. Similarly the grid computing for the Large Hadron Collider, which I have studied extensively (http://www.pegasus.lse.ac.uk), deals with problems which are easily broken down and distributed, and for which the interconnect between nodes is not so important. For many applications of Cloud Computing ideas, though, latency between nodes matters. For example, if you have a DB server and web servers in Amazon’s cloud you know (as long as you do the setup correctly) that the interconnect will be pretty robust, fast and predictable… mostly because they will be on a local SAN network or similar. With P2P, however, the latency issue might be significant. The peer machine might be crunching something like mad, and its network path might be a messy route. This would make running distributed applications tricky – unless the P2P machines were on the same rack or close by – in which case you lose the whole benefit of the cloud by having to manage a data-centre again. (A minimal latency-probe sketch follows this list.)

      2) Management: For many enterprises the appeal of the cloud is to remove the headache of server management – P2P, however, just distributes it. It could thus mean that you are not simply managing the patching of a single data-centre but the patching of lots of machines in the cloud… Managing the nodes might be a headache.

      3) Liability: Spreading data around like this can be good – but liability for the leaking of that data may be difficult to ascribe. Sure, the data will be encrypted, but this might not wash with legal. For example, if a peer on the network were stolen, who is liable, how is liability apportioned, and whom do we have to inform…
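      To make that latency point concrete, here is a rough, purely illustrative sketch of the kind of check a peer-selection layer might run before placing latency-sensitive work – probing TCP connect round-trip times to candidate nodes. The hostnames and ports are hypothetical placeholders, not real services:

```python
# Sketch: probe TCP connect round-trip times to candidate nodes before
# deciding where to place latency-sensitive work. Hostnames are placeholders.
import socket
import time

CANDIDATES = [
    ("db.example-cloud.net", 5432),   # hypothetical cloud database endpoint
    ("peer1.example-p2p.org", 8333),  # hypothetical P2P peers
    ("peer2.example-p2p.org", 8333),
]

def connect_rtt(host, port, timeout=2.0):
    """Return the TCP connect time in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

for host, port in CANDIDATES:
    rtt = connect_rtt(host, port)
    print(f"{host}:{port} -> " + (f"{rtt:.1f} ms" if rtt is not None else "unreachable"))
```

      In a public cloud the figures would typically be low and stable because the nodes share a local network; across arbitrary peers they can vary wildly, which is the crux of the concern above.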

       

      All this said, though, I think the model is very useful and important, and offers many advantages. Sharing the risk, sharing the love, sharing the cost, and avoiding lock-in (assuming the P2P middleware is open source) would appeal to many small businesses, developing nations and individuals. One can particularly see the benefits in regions without good internet connectivity – military settings, developing nations, remote outposts – where running P2P cloud services would be a huge boon.

       

      —–

      Mell, P. and Grance, T. (2009) The NIST Definition of Cloud Computing. National Institute of Standards and Technology.

       

       


  3. The points (potential pitfalls) you raise do, of course, have parallels with the traditional (client-server) model of today’s (public) cloud solutions. I believe they can be mitigated if implemented correctly.

    Latency. Two excellent examples of p2p applications that do not appear to have latency problems are Skype and TeamViewer. And of course, if information resides on a public cloud and the network dies or their servers get hacked, which has happened a lot recently, then latency becomes a real problem. I recently used a public cloud to share a 70 MB video between two computers physically in front of me: it took 30 minutes and made my internet connection unusable (though I could have throttled this, making the transfer slower). A properly set up p2p application would do this in tens of seconds.
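    As a rough back-of-the-envelope (the throughput figures below are assumptions for illustration, not measurements), the gap between those two routes is easy to see:

```python
# Back-of-the-envelope transfer times for a 70 MB file over different routes.
# All throughput figures below are illustrative assumptions only.
FILE_MB = 70
routes_mbit_per_s = {
    "via a public cloud (effective rate implied by ~30 min)": 0.31,
    "direct p2p over a 100 Mbit/s LAN (~50% efficiency)": 50,
    "direct p2p over a 1 Gbit/s LAN (~50% efficiency)": 500,
}
for route, rate in routes_mbit_per_s.items():
    seconds = FILE_MB * 8 / rate  # megabytes -> megabits, then divide by rate
    print(f"{route}: ~{seconds:.0f} s")
```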

    Management. This is a good one – using public clouds means you are essentially outsourcing the management of that portion of your data, which can be a very good thing. However, it does lead naturally into …

    Liability. Yes indeed, if data resides on a stolen laptop then who is liable? As you say, it can remain encrypted, and if the encryption is good then I suggest this is realistically the best you can do (don’t forget that data in the public cloud is also on someone’s laptop). But who is liable if you store data on a public cloud in another country? To the laws of which country is the data subject? What if data de-duplication is employed and the file you are sharing is also shared by someone else (maybe from yet another country, which prohibits that sharing)? Last year I blogged about data liability in the public cloud and listed many examples of data being stolen from cloud providers. Further, who owns the data in a public cloud? (You would think you do, but small print can be misleading, as Facebook showed last year).

    Of course, we all entrust our data to the cloud anyway in many ways (e.g. internet banking), so it all boils down to the risk you are prepared to take for the convenience offered.


  4. Completely agree with these – good analysis. I think the latency issue is, however, slightly misleading. I worked at CERN and they will not allow Skype on the site because, if used, it would suck up their huge bandwidth for half of Europe’s telephone calls. This is because it capitalises on the quality of point-to-point links in a distributed network. If the links are good then great. With the public cloud the interconnect between the nodes (computers) is usually very fast – they are in the same building – however, as Dave says, you are reliant on the internet connection to that particular site, which could prove a single point of failure. Skype would re-route away from CERN if its connection went, but anyone reliant on CERN machines would have problems unless the internet can re-route.

    The liability/management point is a good one – Dave might also say that P2P has the advantage if you can break the data up in such a way that ONLY the end-user can reconstruct the original – i.e. you only store a few letters from each name on each node (a toy illustration of this splitting idea is sketched below). I don’t entirely agree with the jurisdiction argument though – sure, data in the public cloud sits with a public provider, but their computers are unlikely to get on an aeroplane for a sales presentation in Burma, taking your data with them.
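    Purely as a toy sketch of that splitting idea – a trivial two-share XOR split, not what any particular P2P product actually does, and real systems would want proper secret sharing (e.g. Shamir’s scheme) plus authenticated encryption – something like the following means neither node alone can read the data:

```python
# Toy two-share XOR split: neither share alone reveals the data; both are
# needed to reconstruct it. Illustrative only, not a production scheme.
import os

def split(data: bytes) -> tuple[bytes, bytes]:
    """Split data into two shares to be stored on different nodes."""
    pad = os.urandom(len(data))                      # random one-time pad
    share = bytes(a ^ b for a, b in zip(data, pad))  # data XOR pad
    return pad, share

def reconstruct(pad: bytes, share: bytes) -> bytes:
    """Recombine the two shares; only someone holding both can do this."""
    return bytes(a ^ b for a, b in zip(pad, share))

original = b"Alice Example, Acme Ltd"        # hypothetical record
share_a, share_b = split(original)           # store each share on a different peer
assert reconstruct(share_a, share_b) == original
```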

    All this said, I agree with the final point – this is about effective risk management and understanding the SLA and liability. Those companies which failed when Amazon went down had taken a huge risk – Amazon did not technically breach its SLA (http://blogs.gartner.com/lydia_leong/2011/04/21/amazon-outage-and-the-auto-immune-vulnerabilities-of-resiliency/) – they failed to understand the risks they were taking or were unable to pay to mitigate them (as Netflix had).

    Thanks for the discussion.

