Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. With the number of connected devices predicted, by some estimates, to exceed 50bn by 2020, there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform service to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard emerged there was considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the standard Android/iPhone platforms, so increasing their value and creating a complex ecosystem of value.
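
This growth in value can be sketched with a toy calculation. The pairwise-connections model below (a simplification often attributed to Metcalfe's law) is my illustration, not a figure from the references:

```python
# Toy illustration of network effects: a network's value is often modelled
# as proportional to the number of possible pairwise connections between
# its users, n*(n-1)/2, so each new user adds more value than the last.

def network_value(n: int) -> int:
    """Number of possible pairwise connections between n users."""
    return n * (n - 1) // 2

# One telephone is useless; value grows super-linearly with adoption.
for users in (1, 2, 10, 100):
    print(users, "users ->", network_value(users), "possible connections")
```

The marginal value of the 100th user (99 new connections) far exceeds that of the 3rd (2 new connections) – precisely the dynamic that rewards the incumbent standard and produces lock-in.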

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things, whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as its Internetness. If IoT evolves like mobile phones this could lock us into the platform, and changing to an alternative platform would thus incur a high material cost (assuming, like mobiles, we are unable to alter the software), as, unlike phones, these devices are not regularly upgraded. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from its platform over the long term. Also, for many IoT devices (and particularly relevant for critical national infrastructure), this distribution of control does not correspond to the distribution of risk, security and liability, which may be significant for IoT devices.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This however creates potential instability in the platform – making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that they seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to Open Source, open platforms are not about the software, but about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around that standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible we should seek open platforms and promote the development of standards.

(2)  We must demand democratic accountability, and seek to exploit levers which ensure control over our infrastructure is reflective of need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974-2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be The Same Again. Journal of Public Administration, Research, and Theory, 2013.

Security in the cloud – comments on Athens Cloud Computing Conference


Attending the Cloud Computing Conference in Athens today I was struck by the overarching interest of the audience in security. This is entirely understandable, and security should certainly be a primary concern for IT directors, whose job is to keep the company safe in this dangerous digital world. As fellow speaker Ian Murphy discussed, hacking is available “as a service” today, and for little money hackers can be directed towards any organisation whose security protocols are substandard. This point was reiterated by Amar Singh.

What worries me though is that organisational strategy is not also considered a significant security concern in the face of the cloud. For me, IT directors should be taking a primary position in considering the strategic risks to their organisation from cloud-based services ripping the heart out of their business. Without considering how their business model might be undermined by cloud-based digital services, companies look like the vacuum-valve or cathode-ray-tube manufacturer obsessing about whether their product can be stolen in production and delivery!

My rather random list of possible risks would include:

1) Disintermediation – Don Tapscott discussed many years ago how intermediary businesses can be lost as customers circumvent or replicate them and go direct. Cloud provides the simple tools to create this type of business.

2) Cost Collapse – Many businesses rely on high costs inhibiting entry into their marketplaces. Automation, cloud, data-abundance, and PAYG infrastructure can collapse the cost of entering some of these marketplaces. An example of this is animation, where small studios can now produce full feature-films using cloud rendering services. In the future digital technologies are likely to do the same to many other areas of business which are today considered capital intensive.

3) Globally local – Prior to Uber, most people working in taxi services could not imagine that the value of their business would shift to include services provided from North America. Yet such platforms, by their intensive focus on value creation for users and their creation of brokerage services, radically change the business model. Like eBay, AirBnB, and Booking.com, they create a dual-sided market (see Eisenmann, T. R., G. Parker and M. W. V. Alstyne (2006), “Strategies for Two-Sided Markets,” Harvard Business Review, for more on this type of business model).

4) Service Quality – Many existing companies struggle to respond to customers’ needs. Using cloud services, small businesses can emerge which provide much better ease of use and service by starting with a cloud-only strategy, uninhibited by existing legacy IT.

This is just a rather random list – with time I will try to develop these ideas into something more coherent! I welcome readers’ contributions.

Forbes has four predictions for 2013… I challenge them all

Over on Forbes Antonio Piraino makes four predictions for the year ahead:

Cloud Computing: Four Predictions For The Year Ahead – Forbes.

I want to discuss my opinion of each of them.

1) “The cloud wars are (still) rumbling and they’re getting louder”. 

I sort of agree with the sentiment of this: that companies will be looking for value-add from cloud providers rather than simple metrics (such as network, storage or service). However I completely disagree that a battle will unfold next year – I think this is a growing market and we are seeing clear differentiation between offerings. The giants in this space are, in my opinion, desperately trying to carve out a non-competitive space in the growing cloud market, rather than going head-to-head in the battle the author describes. That way lies only commodity offerings and a race to the bottom. I suspect that differentiation will be a more likely tactic than “war”.

2) “A titanic cloud outage will create a domino effect”. 

The article argues that “As more IT resources are moved to the cloud, the chance of a major outage for a corporate enterprise… becomes exponentially more likely to occur”. Really? How on earth can the increasing outsourcing of services lead to an exponential increase in risk? The risk is dependent upon a number of factors:

1) Capability of the cloud provider to manage the service (not dependent on the number of services managed).

2) Capability of the cloud user to contract effectively for risk (again not dependent on the number of services outsourced).

3) Multiplexing of services on a single site – this is dependent on the number of cloud users; however, it is an architectural issue as to whether the risk increases. It is certainly not the case that five companies sharing one building face exponentially greater individual risk than if they each had their own building. The analogy of airline accidents vs car accidents comes to mind: when a plane fails it looks catastrophic, but more people die on the roads.
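
A back-of-envelope simulation (with purely hypothetical failure probabilities of my own choosing) illustrates the point: sharing a site correlates companies' failures, but does not by itself raise any one company's individual risk:

```python
# Hypothetical numbers: colocating companies in one data centre does not,
# by itself, raise an individual company's outage risk, though it does
# correlate their failures (they go down together rather than separately).
import random

random.seed(42)
p_outage = 0.01           # assumed annual outage probability per site
trials = 100_000

solo_outages = 0          # company with its own dedicated site
shared_outages = 0        # company sharing one site with four others
for _ in range(trials):
    if random.random() < p_outage:   # the company's own site fails
        solo_outages += 1
    if random.random() < p_outage:   # the shared site fails
        shared_outages += 1

# Individual risk is ~p_outage in both scenarios; sharing changes who
# fails *together*, not how often any one company fails.
print(solo_outages / trials, shared_outages / trials)
```

Whether colocation is acceptable is therefore the architectural question (correlated joint failures, blast radius), not a matter of outage risk compounding "exponentially" per tenant.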

The article goes on to say that “If an unexpected cloud outage were to take place within the context of [financial services trades], the banks would stand to be heavily penalized for incompliance” – I absolutely agree – because if they weren’t adopting defensive approaches to moving to the cloud they would be incompetent. As a recent Dell think-tank I was part of discussed, banks are already moving to the cloud, and for mission-critical activity, but they are working with cloud providers to ensure that they are getting the same level of protection and assurance as they would in-house. Like any outsourcing relationship, it is incumbent on the purchaser to understand the risk and manage it. Indeed banks should be evaluating the risk of all their ICT, whether in-house or external – as the recent high-profile failures at NatWest demonstrated, in-house IT can be risky too!

3) A changing role for the CIO. 

Here I agree with the article. Governments will get more involved in regulation relevant to cloud, and this will create new opportunities. Whether CIOs will act as “international power-brokers, ambassadors even diplomats” as the article suggests depends on how they move to the cloud – many cloud providers’ intention is to create cloud offerings which do not demand an understanding of international law. I also doubt that the “human-responsibilities will shrink” – this only holds if organisations see cloud as outsourcing rather than opportunity – many CIOs are probably realising that while they are losing headcount in certain areas (e.g. data-centre administrators) they need skills in new applications only possible with the availability of cloud. How many CIOs imagined managing data-analytics and social-networking specialists a few years ago?

4) Death of the desktop as we know it.

“The expectation is that an employee’s mobile device is now their workspace, and that they are accountable for contributing to work on a virtually full time basis…” I am intrigued by the idea of what is going to happen to the desktop PC. I know I use my smartphone and iPad a lot, but usually for new things rather than the same activities I use my laptop for. For example I annotate PDFs on the train, read meeting notes during meetings, even look at documents in the bath. These are in addition to the use of my desktop PC or laptop (which I use for writing and for the host of applications I require for my work, and for which I require a keyboard and solid operating system). Yes, I bring my own device to work, but I demand a “desktop”-like environment to run on it (i.e. integrated applications and services), in which case the management of the virtual desktop applications is as complex as that of the physical assets (save the plugging in and purchasing). And the idea that I will use my own device on a “virtually full time basis” is clearly nonsense… health and safety would not allow a person using a screen all day to have a smartphone or tablet as that screen.

I don’t deny that PCs will change, and that the technological environment of many industries is changing. But my question is whether this will increase or decrease the amount of work for the CIO. My earlier post (Cloud Computing – it’s so ‘80s) pointed out that the demand for applications within industry has not remained static or decreased – we will only increase our demand for applications. The question is then whether managing them will become easier or more difficult. For me the jury is still out, but it is for this reason that I think Windows 8 could be successful in this space.

I believe many of us are waiting for a device which capitalises on the benefits of tablets and smartphones, but which will run the complex ERP and office applications our businesses have come to rely upon. Sure we could try to make do with an iPad or Android device, but Windows 8 promises the opportunity to use the full industry-proven applications we already have in a new way. I anticipate seeing lots more of these Windows 8 devices in the next few years – though with many of the applications becoming much lighter on the desktop. After all, the iPad and smartphone demonstrated the importance of locally running apps, not of cloud services… they were just smaller and easier-to-manage applications.


Cloud Computing – it’s so ‘80s.

For Vint Cerf[1], the father of the internet, Cloud Computing represents a return to the days of the mainframe, when service-bureaus rented their machines by the hour to companies who used them for payroll and other similar tasks. Such comparisons focus on the architectural similarities between centralised mainframes and Cloud computing – cheaply connecting to an expensive resource “as a service” through a network. But cloud is more about the provision of already low-cost computing (in bulk, through data-centres) at even lower costs. A better analogy than the mainframe, then, is the introduction of the humble micro-computer and the revolution it brought to corporate computing in the early 1980s.

When micros were launched many companies operated using mini or mainframe computers which were cumbersome, expensive and needed specialist IT staff to manage them[1]. Like Cloud Computing today, when compared with these existing computers the new micros offered ease of use, low cost and apparently low risk, which appealed to business executives seeking to cut costs, or SMEs unable to afford minis or mainframes[2]. Usage exploded, and in the period from the launch of the IBM PC in 1981 to 1984 the proportion of companies using PCs increased dramatically from 8% to 100% [3] as the cost and opportunity of the micro became apparent. Again, as with the cloud[4], these micros were marketed directly to business executives rather than IT staff, and were accompanied by a narrative that they would enable companies to dispense with heavy mainframes and the IT department for many tasks – doing them quicker and more effectively. Surveys from that time suggested accessibility, speed of implementation, response-time, independence and self-development were the major advantages of the PC over the mainframe[5] – easily recognisable in the hyperbole surrounding cloud services today. Indeed Nicholas Carr’s pronouncement of the End of Corporate IT[6] would probably have resonated well in the early 1980s, when the micro looked set to replace the need for corporate IT. Indeed in 1980 over half the companies in a sample claimed no IT department involvement in the acquisition of PCs[3].

But problems emerged from the wholesale uncontrolled adoption of the Micro, and by 1984 only 2% of those sampled did not involve the IT department in PC acquisition[3]. The proliferation of PCs meant that in 1980 as many as 32% of IT managers were unable to estimate the proportion of PC within their company[3], and few could provide any useful support for those who had purchased them.

Micros ultimately proved cheap individually but expensive en masse[2] as their use exploded and new applications for them were discovered. In addition to this increased use, IT professionals worried about the lack of documentation (and thus poor opportunity for maintenance), poor data management strategies, and security issues[7]. New applications proved incompatible with others (“the time-bomb of incompatibility”[2]), and different system platforms (e.g. CP/M, UNIX, MS-DOS, OS/2, Atari, Apple…) led to redundancy and communication difficulties between services, and to the failure of many apparently unstoppable software providers – household names such as Lotus, Digital Research, WordStar, Visi and dBase[8].

Ultimately it was the IT department which brought sense to these machines and began to connect them together for useful work using compatible applications – with the emergence of companies such as Novell and Microsoft to bring order to the chaos[8].

Drawing lessons from this history for Cloud Computing is useful. The strategic involvement of IT services departments is clearly required. Such involvement should focus not on the current cost-saving benefits of the cloud, but on the strategic management of a potentially escalating use of Cloud services within the firm. IT services must get involved in the narrative surrounding the cloud – ensuring their message is neither overly negative (and thus appearing to have a vested interest in the status quo) nor overly optimistic, as potential problems exist. Either way, the lessons of the microcomputer are relevant again today. Indeed Keen and Woodman argued in 1984 that companies needed the following four strategies for the micro:

1) “Coordination rather than control of the introduction.

2) Focusing on the longer-term technical architecture for the company’s overall computing resources, with personal computers as one component.

3) Defining codes for good practice that adapt the proven disciplines of the [IT industry] into the new context.

4) Emphasis on systematic business justification, even of the ‘soft’ and unquantifiable benefits that are often a major incentive for and payoff of using personal computers” [2]

It would be wise for companies contemplating a move to the cloud to consider this advice carefully – replacing “personal computer” with “Cloud computing” throughout.

(c)2011 Will Venters, London School of Economics. 

[1] P. Ceruzzi, A History of Modern Computing. Cambridge, MA: MIT Press, 2002.

[2] P. G. W. Keen and L. Woodman, “What to do with all those micros: First make them part of the team,” Harvard Business Review, vol. 62, pp. 142-150, 1984.

[3] T. Guimaraes and V. Ramanujam, “Personal Computing Trends and Problems: An Empirical Study,” MIS Quarterly, vol. 10, pp. 179-187, 1986.

[4] M. Benioff and C. Adler, Behind the Cloud – the untold story of how salesforce.com went from idea to billion-dollar company and revolutionized an industry. San Francisco, CA: Jossey-Bass, 2009.

[5] D. Lee, “Usage Patterns and Sources of Assistance for Personal Computer Users,” MIS Quarterly, vol. 10, pp. 313-325, 1986.

[6] N. Carr, “The End of Corporate Computing,” MIT Sloan Management Review, vol. 46, pp. 67-73, 2005.

[7] D. Benson, “A field study of End User Computing: Findings and Issues,” MIS Quarterly, vol. 7, pp. 35-45, 1983.

[8] M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A history of the software industry. Cambridge, MA: MIT Press, 2003.

Cloud and the Future of Business: From Costs to Innovation

I have not been updating this blog for a while as I have been busy writing commercial papers on Cloud Computing. The first of these, for Accenture, has just been published and is available here:

http://www.outsourcingunit.org/publications/cloudPromise.pdf

The report outlines our “Cloud Desires Framework”, in which we aim to explain the technological direction of Cloud in terms of four dimensions of the offerings – Equivalence, Abstraction, Automation and Tailoring.

Equivalence: The desire to provide services which are at least equivalent in quality to that experienced by a locally running service on a PC or server.

Abstraction: The desire to hide unnecessary complexity of the lower levels of the application stack.

Automation: The desire to automatically manage the running of a service.

Tailoring: The desire to tailor the provided service for specific enterprise needs.

(c) Willcocks, Venters, Whitley 2011.

By considering these dimensions for the different types of cloud service (SaaS, PaaS, IaaS and hosted services (often ignored – but crucially Cloud-like)) it is possible to distinguish the different benefits of each, apart from the “value-add” differences. Crucially, the framework allows simple comparison between services offered by different companies by focusing on the important desires and not the unimportant technical differences.
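
As a minimal sketch of how such a side-by-side comparison might be organised (the ratings below are hypothetical placeholders of my own, not figures from the report):

```python
# Comparing cloud offerings across the four Cloud Desires dimensions.
# The 1-5 scores are purely illustrative, not taken from the report.

DIMENSIONS = ("equivalence", "abstraction", "automation", "tailoring")

offerings = {
    "SaaS":   {"equivalence": 4, "abstraction": 5, "automation": 5, "tailoring": 2},
    "PaaS":   {"equivalence": 4, "abstraction": 4, "automation": 4, "tailoring": 3},
    "IaaS":   {"equivalence": 5, "abstraction": 2, "automation": 3, "tailoring": 4},
    "Hosted": {"equivalence": 5, "abstraction": 1, "automation": 2, "tailoring": 5},
}

def compare(a: str, b: str) -> dict:
    """Per-dimension difference between two offerings (a minus b)."""
    return {d: offerings[a][d] - offerings[b][d] for d in DIMENSIONS}

# e.g. SaaS trades tailoring away for abstraction and automation relative to IaaS.
print(compare("SaaS", "IaaS"))
```

The point of the structure is that comparisons happen dimension by dimension on the desires, rather than on incidental technical differences between vendors.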

Take a look at the report – and let me know what you think!

Accenture Outlook: The coming of the cloud corporation

I have written, with two colleagues, an article for Accenture’s Outlook journal which introduces the idea of the Cloud Corporation:

Accenture Outlook: The coming of the cloud corporation.

The article discusses various trends in outsourcing which will impact upon Cloud (and vice versa).

Cloud computing remains focused on cost cutting achieved through new technology; however, lessons from the past suggest that this is only a minor part of the disruptive innovation which Cloud may offer. In particular we should not ask “what is cloud computing?” but rather “why is cloud computing?” – in essence exploring the pressures on innovation today which resonate with the idea of utility computing.

While the cost saving is an important incremental innovation on existing practices, it is cloud’s potential to allow new forms of organisational collaboration which offers the possibility of radical innovation. Moving the data-centre outside the organisation asks us to evaluate the relationship between the data-centre and the organisation. Is it “ours” to hoard and control, or are parts of it able to be shared, opened, and exploited by others (partners, customers, suppliers etc.)? In turn, does this opening of the relationship between the organisation and its information recast the organisation itself?


Why you can’t move a mainframe with a cloud • The Register

Why you can’t move a mainframe with a cloud • The Register.


This is a detailed technical analysis of the market for mainframes – discussing the infrastructure issues of moving mainframes to cloud, or cloud to mainframes. The issues discussed are somewhat perennial – a “greying workforce”, and the shift to cheaper platforms such as Linux and Java. But as the article attests, it is the sheer reliability and stability of mainframes which keeps them going – something those who proclaim the cloud will prevail must understand and respond to. With guaranteed uptimes measured in years for transaction processing, we cannot yet really envisage the Cloud for the core applications which run our information economy.