Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. By some estimates the number of connected devices will exceed 50bn by 2020, so there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things, it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform service to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard emerged there was considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the standard Android/iPhone platforms, thereby increasing their value and creating a complex ecosystem of value.
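As a rough illustration of why value grows so quickly with users (my own addition, not from the panel discussion – one common formalisation is Metcalfe's law, in which value grows roughly with the number of possible user-to-user connections):

```python
# Rough illustration of network effects: the number of possible
# pairwise connections (and so, loosely, the value of the network)
# grows much faster than the number of users.
def possible_connections(users: int) -> int:
    """Number of distinct user-to-user links in a network of `users` people."""
    return users * (users - 1) // 2

for n in [1, 2, 10, 100, 1000]:
    print(f"{n:>5} users -> {possible_connections(n):>7} possible connections")
# A single telephone is useless (0 connections); doubling the user base
# far more than doubles the number of possible connections.
```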

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as in its “Internetness”. If IoT evolves like mobile phones, this could lock us into the platform, and changing to an alternative platform would thus carry a high material cost (assuming, as with mobiles, we are unable to alter the software), since, unlike phones, these devices are not regularly upgraded. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from their platform over the long term. Also, for many IoT devices (and particularly those relevant to critical national infrastructure) this distribution of control does not correspond to the distribution of risk, security and liability, which may be significant for IoT devices.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This, however, creates potential instability in the platform – making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that purchasers seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to open source, open platforms are not about the software but about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around that standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible we should seek open platforms and promote the development of standards.

(2) We must demand democratic accountability, and seek to exploit levers which ensure that control over our infrastructure reflects need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974-2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be The Same Again. Journal of Public Administration, Research, and Theory, 2013.

I’m presenting at “The Exchange 2013 – Knowledge Peers”


I’m excited to be presenting at “The Exchange 2013 – Knowledge Peers” on the 28th November. Not only is it at the Kia Oval (which I drive past regularly, so I am looking forward to getting the tour inside), but their focus is also on networking with smaller and medium-sized organisations. I am of the opinion that cloud computing will offer more valuable and exciting opportunities for SMEs than for large organisations, so I am looking forward to connecting with many more small organisations at the event.

I hope you can join me there!

Will.

Clouds and Coffee: User affordance and information infrastructure

Some desktop coffee machines (e.g. figure 1) are now connected to the Internet (Pritchard, 2012). Such devices are enrolled within increasingly complex information infrastructures involving cloud services. This form of entanglement creates mazes of unexpected heterogeneous opportunities and risks (Latour, 2003), yet users’ ability to perceive such opportunity and risk is limited by their lack of visceral understanding of such entanglement. It is this understanding of the cloud by the user which is the focus of this blog posting. Such a coffee maker “calls out” (Gibson, 1979) to users with a simple offering – its ability to make coffee. Its form attests to this function, with buttons for espresso and latte, nozzles for dispensing drinks, and trays to catch the drips. To any user experienced in modern coffee this machine affords (Norman, 1990; Norman, 1999) the provision of coffee in its form and function, keeping its information infrastructure hidden from view – only an engineer can understand that this machine is communicating.

Yet such assemblages of plastic, metal and information technology are a “quasi-object” (Latour, 2003) – complicated cases requiring political assemblies, no longer “matters of fact” but instead “states of affairs” (Latour, 2003). Such a coffee maker is a drinks-dispensing service (representing a service-dominant logic (Vargo & Lusch, 2004; Vargo, 2012)), provided through an assembly of material and immaterial objects whose boundary and ultimate purpose remain unclear. While the device above only communicates about its maintenance, other machines may go further. Such a machine’s user, hankering for an espresso to get him through a boring conference, may be kept unaware that the infrastructure is monitoring his choices to influence global coffee production, to ensure the output is sufficiently tepid and dull to damage his economic productivity, or that the device is recording and transmitting his every word. He may be annoyed to discover his coffee is stronger than his female colleagues’ as gender profiling based on image recognition decides the “right” coffee for him. He may be horrified that the device ceases to work at the very moment of need because of a fault in contract payments within the accounts department – perhaps caused by their tepid, weak coffee.

Similarly, companies involved in providing the coffee and milk for such machines might become enrolled in this reconfiguration (Normann, 2001) of the coffee service, an enrolment which could reconfigure the knowledge asymmetries within the existing market. Suddenly an engineering company which previously made plastic and metal coffee machines is in a position to understand coffee demand better than coffee growers or retailers. The machine itself could negotiate automatically on local markets for its milk provision, compare material prices with similar machines in other markets, and even alter the price of coffee for consumers based on local demand. Through the enrolment of information infrastructures within a coffee service, knowledge of the coffee market shifts.

All this has already happened to the market for music (increasingly controlled by a purveyor of sophisticated walkmen using a cloud service) and more recently ebooks (increasingly controlled by a book retailer and their sophisticated book readers). Now imagine the emergence of the smart-city, with huge numbers of devices from street-lights to refrigerators connected to the cloud. How will the users of such smart-cities understand what they are interacting with – the quasi-objects they used to consider objects? How will such objects afford their informational uses alongside their more usual functions?

At the centre of this reconfiguration of material objects is a computer system residing in the cloud aggregating information. It is the aggregation of data from devices which may be central to the lessons of the cloud for SmartCities.

(© Will Venters 2012).

 

 

Gibson JJ (1979) The Ecological Approach to Visual Perception. Houghton Mifflin, London.

Latour B (2003) Is Re-modernization Occurring-And If So, How to Prove It? Theory, Culture & Society 20(2), 35-48.

Norman D (1990) The Design of Everyday Things. The MIT Press, London.

Norman D (1999) Affordance, Conventions, and Design. Interactions ACM 6(3), 38-43.

Normann R (2001) Reframing Business: When the map changes the landscape. John Wiley & Sons Ltd, Chichester.

Pritchard S (2012) Mobile Comms: Coffee and TV. IT Pro, Dennis Publishing Ltd, London.

Vargo S and Lusch R (2004) Evolving to a New Dominant Logic for Marketing. The Journal of Marketing 68(1), 1-17.

Vargo SL (2012) Service-Dominant Logic: Reflections and Directions. Unpublished PowerPoint presentation, Warwick, UK.

Cloud and the Future of Business: From Costs to Innovation

I have not been updating this blog for a while as I have been busy writing commercial papers on cloud computing. The first of these, for Accenture, has just been published and is available here:

http://www.outsourcingunit.org/publications/cloudPromise.pdf

The report outlines our “Cloud Desires Framework”, in which we aim to explain the technological direction of cloud computing in terms of four dimensions of the offerings – Equivalence, Abstraction, Automation and Tailoring.

Equivalence: The desire to provide services which are at least equivalent in quality to those experienced from a locally running service on a PC or server.

Abstraction: The desire to hide unnecessary complexity of the lower levels of the application stack.

Automation: The desire to automatically manage the running of a service.

Tailoring: The desire to tailor the provided service for specific enterprise needs.

(c) Willcocks, Venters, Whitley 2011.

By applying these dimensions to the different types of cloud service (SaaS, PaaS, IaaS and hosted services – often ignored, yet crucially cloud-like) it is possible to distinguish the benefits of each and separate them from “value-add” differences. Crucially, the framework allows simple comparison between services offered by different companies by focusing on the important desires rather than on unimportant technical differences.
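As a purely illustrative sketch (my own, not taken from the report), one way to record such a comparison is as scores along the four dimensions – the services and numbers below are hypothetical:

```python
# Hypothetical sketch: scoring cloud offerings along the four "desires".
# The services and scores below are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class CloudDesiresProfile:
    name: str
    equivalence: int   # 1-5: quality vs. a locally running service
    abstraction: int   # 1-5: how much stack complexity is hidden
    automation: int    # 1-5: how far running the service is automated
    tailoring: int     # 1-5: scope for enterprise-specific tailoring

offerings = [
    CloudDesiresProfile("Hypothetical IaaS", equivalence=4, abstraction=2, automation=3, tailoring=4),
    CloudDesiresProfile("Hypothetical SaaS", equivalence=3, abstraction=5, automation=5, tailoring=2),
]

for o in offerings:
    print(f"{o.name}: E={o.equivalence} A={o.abstraction} Au={o.automation} T={o.tailoring}")
```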

Take a look at the report – and let me know what you think!

Microsoft in Cloud – Bloomberg’s analysis

An interesting analysis of Microsoft’s place in the cloud today, and its change of focus to bring Azure front and centre in its offering.

Microsoft Woos Toyota, Duels Amazon.com in Cloud Bet – Bloomberg.

SLAs and the Cloud – the risks and benefits of multi-tenanted solutions.

Service Level Agreements (SLAs) are difficult to define in the cloud, in part because areas of the infrastructure (in particular the internet connection) are outside the control of either customer or supplier. This leads to the challenge of presenting a contractual agreement for something which is only partly in the supplier’s control. Further, as the infrastructure is shared (multi-tenanted), SLAs are more difficult to provide since they rest on capacity which must be shared.

The client using the cloud is faced with a challenge. Small new cloud SaaS providers, which are increasing their business and attracting more clients to their multi-tenanted data centre, are unlikely to provide a more usefully defined SLA for their services than that offered by a data-centre provider which controls all elements of the supplied infrastructure. Why would they? Their business is growing and an SLA is a huge risk (since the service is multi-tenanted, a breach of one SLA is probably a breach of many – the payout might seem small and poor to each client but is large for the SaaS provider!). Further, with each new customer the demands on the data centre, and hence the risk, increase. Hence the argument that as SaaS providers become successful, the risk of SLAs being breached might increase.
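A back-of-the-envelope sketch (my own illustrative numbers, not drawn from any real SLA) shows why a single multi-tenanted outage that looks like a small service credit to each client can be a large aggregate liability for the provider:

```python
# Illustrative arithmetic only: one outage in a multi-tenanted data centre
# breaches every tenant's SLA at once, so per-client credits add up.
clients = 500                  # hypothetical number of tenants
monthly_fee = 1_000.0          # hypothetical monthly fee per client
service_credit_rate = 0.10     # hypothetical 10% credit for an SLA breach

credit_per_client = monthly_fee * service_credit_rate
total_provider_payout = credit_per_client * clients

print(f"Credit per client:       {credit_per_client:,.2f}")      # trivial to each client
print(f"Total payout by provider: {total_provider_payout:,.2f}")  # significant for the provider
```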

There is, however, a counter-point to this growth risk: as each new customer begins to use the SaaS they will undertake their own due-diligence checks. Many will attempt to stress-test the SaaS service. Some will want to try to hack the application. As the customer base grows (and moves towards blue-chip clients) the seriousness of this testing will increase – security demands in particular will be tested as bigger and bigger companies consider the service. This presents a considerable opportunity for the individual user. For with each new customer comes the benefit of increasing stress-testing of the SaaS platform – and increasing development of skills within the SaaS provider. While the SLA may continue to be poor, the risk of failure of the data centre may well diminish as the SaaS grows.

To invoke a contract is, in effect, a failure in a relationship – a breakdown in trust. Seldom does the invocation of a contract benefit either party. The aim of an SLA is thus not just to provide a contractual agreement but rather to set out the level of service on which the partnership between customer and supplier is based. In this way an SLA is about the quality expected of the supplier, and under the model above the expected quality may well increase with more customers – not decrease, as is usually envisaged for cloud. SLAs from cloud providers may well be trivial and poor, but the systemic risk of using clouds is not as simplistic as is often argued. While it is unsurprising that cloud suppliers offer poor SLAs (it is not in their interest to do otherwise), this does not mean that the quality of service is, or will remain, poor.

So what should the client consider in looking at the SLA offering in terms of service quality?

1) How does the cloud SaaS supplier manage its growth? The growth of a SaaS service means greater demand on the provider’s data centre, and hence greater risk that the SLAs for its multi-tenanted data centre will be breached.

2) How open is the Cloud SaaS provider in allowing testing of its services by new customers?

3) How well does the cloud SaaS provider’s strategic ambition for service quality align with your own desires for service quality?

Obviously these questions are in addition to all the usual SLA questions.

A Cloudy Future for Microsoft?

Friends at www.horsesforsources.com (an influential outsourcing blog) provide a useful analysis of Microsoft’s position in the cloud market. The comments are perhaps more interesting than the piece…

Click here for their article – A Cloudy Future for Microsoft?.

Lock-ins, SLAs and the Cloud

One of the significant concerns in entering the cloud is the potential for lock-in with a cloud provider (though clearly you otherwise remain locked in with your own IT department as the sole provider).

The cost of moving from one provider to another is a significant obstacle to cloud penetration – if you could change provider easily and painlessly you might be more inclined to take the risk. Various services have emerged to try to attack this problem – CloudSwitch being one which created a considerable buzz at the Structure 2010 conference. Their service aims to provide a software means to transfer enterprise applications from a company’s data centre into the cloud (and between cloud providers). Whether it can live up to expectations we do not yet know, but CloudSwitch is attempting to provide a degree of portability much desired by clients – and probably much feared by cloud service providers, whose business would be reduced to that of utility suppliers if such portability succeeds.

But this links into another interesting conversation I was having with a media executive last week. They mentioned that since cloud virtual machines are so cheap, they often (effectively) host services across a number of suppliers to provide their own redundancy, and thus ignore the SLA. If one service goes down they can switch quickly (using load balancers etc.) to another utility supplier. Clearly this only works for commodity IaaS and for relatively simple content distribution (rather than transaction processing), but it is a compelling model… why worry about choosing one cloud provider and being locked in or risking a poor SLA – choose them all.
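A minimal sketch of that “choose them all” approach (the endpoints and health-check logic are hypothetical; a production setup would sit behind a proper load balancer rather than client-side code):

```python
# Minimal sketch of multi-provider redundancy for simple content serving:
# try each hypothetical IaaS endpoint in turn and use the first healthy one.
import urllib.request

MIRRORS = [
    "https://provider-a.example.com/content",   # hypothetical endpoints
    "https://provider-b.example.com/content",
    "https://provider-c.example.com/content",
]

def fetch_from_first_healthy(path: str, timeout: float = 2.0) -> bytes:
    """Return the content from the first provider that responds in time."""
    last_error = None
    for base in MIRRORS:
        try:
            with urllib.request.urlopen(f"{base}/{path}", timeout=timeout) as resp:
                return resp.read()
        except OSError as err:   # provider down or slow: fall through to the next one
            last_error = err
    raise RuntimeError(f"All providers failed; last error: {last_error}")
```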

Cusumano’s view – Cloud Computing and SaaS as New Computing Platforms.

Cusumano, M. (2010). “Cloud Computing and SaaS as New Computing Platforms.” Communications of the ACM 53(4): 27-29. http://doi.acm.org/10.1145/1721654.1721667
This is an interesting and well-argued analysis of the concept of Cloud and SaaS as a platform. The paper concentrates on lock-in and network effects and the risk they pose given the dominance of certain players in the market, in particular Salesforce, Microsoft, Amazon and Google.
Direct network effects (the more telephones people have, the more valuable they become) and indirect network effects (the more popular a platform is with developers, the more attractive that platform becomes for other developers and users) are key to understanding the development of Cloud. Central to the article’s potential importance is its analysis of how integrated web services (and thus integrated software platforms) might create conflicts of interest, network effects and hence risks.
Cusumano’s analysis of Microsoft’s involvement in the market is compelling (particularly given his history in this area and detailed knowledge of the firm).
I do worry, however, that the paper’s exclusive focus on current players (and hence its interest in traditional concerns about network effects and dominance) downplays the key role of integrators and small standardisation/integration services which are emerging with the aim of reducing the impact of these network effects. Unlike traditional software (where the cost of procurement, installation, commissioning and use is very high), mobility between clouds is easy if the underlying application is cloud-provider-independent. This means there is considerable pressure from users to develop a cloud-independent service model (since everyone understands the risks of lock-in).
The future might thus be an open-source platform which is wrapped to slot into other cloud platforms… a meta-cloud, perhaps, which acts on behalf of users to enable easy movement between providers. This is something Google is keen to stress at its cloud events.
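One way to read that “meta-cloud” idea is as a thin provider-agnostic layer within the application itself. A sketch under that assumption (the interface and in-memory stand-in below are hypothetical, not wrappers around any real provider SDK):

```python
# Sketch of a provider-independent abstraction: the application codes against
# a small interface, so swapping provider is a one-line change. The classes
# here are hypothetical stand-ins, not real SDK wrappers.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in implementation; a real deployment would wrap a provider's API."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def application_logic(store: ObjectStore) -> None:
    # Application code never mentions a specific cloud provider.
    store.put("report.txt", b"cloud-provider-independent content")
    print(store.get("report.txt"))

application_logic(InMemoryStore())   # swap in another ObjectStore to change provider
```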
I look forward to seeing the book on which the article is based.