New Publication – Research Policy: The role of Web APIs in digital innovation ecosystems

I’m happy to share that our paper “The value and structuring role of web APIs in digital innovation ecosystems: The case of the online travel ecosystem”, co-authored with Roser Pujadas and Erika Valderrama, has been published in Research Policy. It is open access and freely available via the DOI below. The paper examines the role of interfaces (specifically APIs) within digital ecosystems.

Pujadas, R., Valderrama, E., & Venters, W. (2024). The value and structuring role of web APIs in digital innovation ecosystems: The case of the online travel ecosystem. Research Policy, 53(2), 104931. https://doi.org/10.1016/j.respol.2023.104931

– We show a dynamic ecosystem where decentralized interfaces enable decentralized governance.

– We show Web APIs are easily replicated and so switching costs are relatively low. Thus, they do not easily lock in complementors.

– We show Web APIs create synergistic interdependencies between ecosystem actors which are not only cooperative.

– We show Web APIs create networks of interorganizational systems through which services are co-produced.

– We show Web APIs are important sources of value creation and capture in digital innovation ecosystems.

We do all this through an analysis of 26 years of the online hotel booking ecosystem (1995-2021). Within the paper we present a network analysis that reveals the complexity of actors involved in booking a hotel room today – see the following image for evidence of how complex this hotel booking ecosystem has become!

Some choice quotes from the discussion section:

“Our research uncovers the distinctive structuring role and economic value of web APIs within a digital innovation ecosystem that is decentralized, and not organized around a platform technology as the focal value proposition”

“uncovers a dynamic and competitive digital ecosystem, where web APIs are not centrally controlled, and they are not only developed by incumbents, but also by new entrants offering new services or reintermediating existing ones.”

“the competitive advantage that interfaces provide to a platform or firm does not lie so much in the capacity to lock in complementors, nor even on data collection per se, but upon increasing the capacity to process and analyze data in real time, gaining valuable contextual insights within value-adding services, which can be directly monetized.”

The structuring role of web APIs in digital innovation ecosystems

“interfaces can structure directly competitive relationships within an ecosystem. For instance, by revealing, what we term, surreptitious interfacing through web scraping (e.g. by early metasearchers), we show how interfaces can be imposed upon another actor against their will. Jacobides et al. (2018, p. 2285) define an ecosystem as ‘a group of interacting firms that depend on each other’s activities’ – we might add to this: or exploit each other’s activities.”
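To make this concrete, here is a minimal sketch of the kind of surreptitious interfacing an early metasearcher might have performed by scraping a hotel site that offered no official API. The URL and page structure are hypothetical illustrations, not taken from the paper.

```python
# A minimal sketch of "surreptitious interfacing" via web scraping.
# The URL and page structure are hypothetical illustrations.
import re
import urllib.request

def scrape_room_rates(url: str) -> list[float]:
    """Fetch a hotel page and pull advertised nightly rates out of raw HTML."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8")
    # Naively extract prices such as "$129.00". Unlike a published API,
    # nothing here is sanctioned: every site redesign breaks the scraper.
    return [float(m) for m in re.findall(r"\$(\d+(?:\.\d{2})?)", html)]

# rates = scrape_room_rates("https://www.example-hotel.com/rooms")  # hypothetical
```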

“our research shows an ecosystem without a single orchestrator, and where a wide range of interfaces are designed and controlled by a range of actors, in a highly decentralized manner. Our research thus contributes to ecosystem orchestration and governance theory.”

The strategic value of web APIs

“Web APIs do not enable control over standards. As web APIs draw upon open shared web standards, parsing them is relatively simple and understandable, and they are agnostic to the systems they interface. This makes them relatively easy to imitate and adapt.”

“Web APIs proved ineffective tools to lock in complementors and so to establish leadership. Once an actor uses a web API, the cost of connecting to a different web API that offers the same or similar service is low, thus potentially increasing the power of suppliers and customers (Porter, 2008).”

A “consequence of low specialization costs is that the cost of establishing connections with multiple firms is relatively low…our research provides evidence of large-scale multihoming in an ecosystem built around decentralized web APIs”  
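To illustrate why parsing is simple, switching costs are low, and multihoming is cheap, here is a minimal sketch of generic HTTP/JSON code querying several interchangeable suppliers. The endpoint URLs and response formats are hypothetical.

```python
# Sketch of multihoming across interchangeable web APIs.
# Endpoint URLs and response formats are hypothetical.
import json
import urllib.request

SUPPLIERS = {
    "supplier_a": "https://api.supplier-a.example/availability",
    "supplier_b": "https://api.supplier-b.example/availability",
}

def check_availability(base_url: str, hotel_id: str) -> dict:
    """Generic HTTP + JSON: the same code works against any supplier."""
    with urllib.request.urlopen(f"{base_url}?hotel={hotel_id}") as resp:
        return json.load(resp)

def multihome(hotel_id: str) -> dict:
    # Querying every supplier is just a loop: nothing below is specific
    # to one provider, which is why switching costs stay low.
    return {name: check_availability(url, hotel_id)
            for name, url in SUPPLIERS.items()}
```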

“We see firms constantly adapting, changing their roles, and adding existing services by replicating web APIs, but also offering new web APIs over time. Together, this helps explain the dynamism, growth, and decentralized governance of the ecosystem”

Web APIs in value creation and capture within the digital economy

“web APIs are used by actors within a decentralized ecosystem to interface their information systems and so, to co-produce services and products…the value of web APIs is not only as a design rule…but also as a technology-in-use that enables the interaction of distributed systems.”

“…the value of web APIs is … in facilitating the production of meaningful data… attention should be focused on the exchange of information and integration of digital capabilities through web APIs, and on the real-time production of information and prediction that web APIs enable.”

“An indirect… form of value that web APIs enable is access to potential customers.”

The problem with web APIs, AI and policy

“We also reveal how data analytics and AI are becoming deeply embedded across such decentralized web API-based ecosystems. As AI can benefit from harvesting data from multiple sources so we expect it to become increasingly ingrained. This embedding will make it hard to research and trace AI’s impact within the digital economy – with policy implications for those regulating AI.”

Pujadas, R., Valderrama, E., & Venters, W. (2024). The value and structuring role of web APIs in digital innovation ecosystems: The case of the online travel ecosystem. Research Policy, 53(2), 104931. https://doi.org/10.1016/j.respol.2023.104931

Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. With the number of connected devices predicted, by some estimates, to exceed 50bn by 2020, there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things, it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform service to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard emerged there was considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the standard Android/iPhone platforms, so increasing their value and creating a complex ecosystem of value.
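A toy illustration of this network effect, assuming (as Metcalfe’s law does, though the post does not name it) that a network’s value tracks its possible pairwise connections:

```python
# Toy illustration of network effects: possible pairwise connections
# grow roughly with the square of the number of users.
def pairwise_connections(users: int) -> int:
    return users * (users - 1) // 2

for n in (1, 10, 100, 1000):
    print(n, pairwise_connections(n))
# 1 -> 0 (a single telephone is useless), 10 -> 45,
# 100 -> 4950, 1000 -> 499500
```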

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things, whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as its internetness. If IoT evolves like mobile phones, this could lock us into the platform, and changing to an alternative platform would thus incur a high material cost (assuming, like mobiles, we are unable to alter the software), as, unlike phones, these devices are not regularly upgraded. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from its platform over the long term. Also, for many IoT devices (and particularly relevant for critical national infrastructure), this distribution of control does not correspond to the distribution of risk, security and liability, which may be significant for IoT devices.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This, however, creates potential instability in the platform – making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that buyers seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to open source, open platforms are not about the software – they are about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around that standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible we should seek open platforms and promote the development of standards.

(2) We must demand democratic accountability, and seek to exploit levers that ensure control over our infrastructure reflects need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974-2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be The Same Again. Journal of Public Administration Research and Theory, 2013.

Our Fifth Report is out – Management implications of the Cloud

The fifth report in our Cloud Computing series for Accenture has just been published. This report looks at the impact Cloud Computing will have on the management of the IT function, and thus on the skills needed by all involved in the IT industry. The report begins by analysing the impact Cloud might have in comparison with existing outsourcing projects. It then considers the core capabilities that must be retained in a “cloud future”, how these capabilities might be managed, and the role of systems integrators in managing the Cloud.

Please use the comments form to give us feedback!

Cloud and the future of Business 5 – Management.

CohesiveFT

This is a company to watch – http://www.cohesiveft.com/ – they have two products:

VPN-Cubed provides a virtual network overlaid on the network of a cloud provider. This enables firms to keep a standard networking layer that remains consistent even if the cloud provider’s network changes (e.g. IP address changes).

Elastic Server allows real-time assembly and management of software components. This allows the quick creation of easy-to-use applications that can be easily deployed to various cloud services.

However, it is the fact that together these services allow virtual machines and cloud services to be moved between IaaS providers without significant real-time work that is important. If their products live up to the promise then users can move to the cheapest cloud provider with ease, driving costs down to commodity-supplier levels… and creating the spot market for cloud.
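A toy sketch of that spot-market logic: once portability is frictionless, placement reduces to a price comparison. The providers and prices below are invented.

```python
# Toy spot-market logic: with frictionless portability, placement
# reduces to a price comparison. Providers and prices are invented.
hourly_prices = {
    "provider_a": 0.085,  # $/hour for an equivalent virtual machine
    "provider_b": 0.092,
    "provider_c": 0.078,
}

cheapest = min(hourly_prices, key=hourly_prices.get)
print(f"Move workload to {cheapest} at ${hourly_prices[cheapest]:.3f}/hour")
```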

Microsoft in Cloud – Bloomberg’s analysis

Interesting analysis of Microsoft’s place in the cloud today… and its change of focus to bring Azure front and centre in its offering.

Microsoft Woos Toyota, Duels Amazon.com in Cloud Bet – Bloomberg.

SLAs and the Cloud – the risks and benefits of multi-tenanted solutions

Service Level Agreements (SLAs) are difficult to define in the cloud, in part because areas of the infrastructure (in particular the internet connection) are outside the scope of either customer or supplier. This leads to the challenge of presenting a contractual agreement for something which is only partly in the supplier’s control. Further, as the infrastructure is shared (multi-tenanted), SLAs are more difficult to provide since they rest on capacity which must be shared.

The client using the Cloud is faced with a challenge. Small new cloud SaaS providers, which are increasing their business and attracting more clients to their multi-tenanted data-centre, are unlikely to provide a more usefully defined SLA for their services than a data-centre provider that controls all elements of the supplied infrastructure can offer. Why would they – their business is growing and an SLA is a huge risk (since the service is multi-tenanted, a breach of one SLA is probably a breach of many – the payout might seem small and poor to the client but is large in aggregate for the SaaS provider!). Further, with each new customer the demands on the data-centre, and hence the risk, increase. Hence the argument that as SaaS providers become successful, the risk of SLAs being breached might increase.
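A rough worked example of this multi-tenancy exposure, with invented numbers:

```python
# Invented numbers: why one multi-tenant outage multiplies SLA payouts.
tenants = 1_000
monthly_fee = 500.00        # per tenant, hypothetical
sla_credit_rate = 0.10      # 10% service credit for a breached month

credit_per_tenant = monthly_fee * sla_credit_rate   # $50: trivial to a client
provider_exposure = credit_per_tenant * tenants     # $50,000 from one incident
print(f"Per-client credit:  ${credit_per_tenant:,.2f}")
print(f"Provider exposure:  ${provider_exposure:,.2f}")
```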

There is, however, a counterpoint to this growth risk: as each new customer begins to use the SaaS, they will undertake their own due-diligence checks. Many will attempt to stress-test the SaaS service. Some will want to try to hack the application. As the customer base grows (and moves towards blue-chip clients) the seriousness of this testing will increase – security demands in particular will be tested as bigger and bigger companies consider the service. This presents a considerable opportunity for the individual user. For with each new customer comes the benefit of increased stress testing of the SaaS platform – and increasing development of skills within the SaaS provider. While the SLA may continue to be poor, the risk of failure of the data-centre may well diminish as the SaaS grows.

To invoke a contract is, in effect, a failure in a relationship – a breakdown in trust. Seldom does the invocation of a contract benefit either party. The aim of an SLA is thus not just to provide a contractual agreement but rather to set out the level of service on which the partnership between customer and supplier is based. In this way an SLA is about the expected quality demanded of the supplier, and with the above model the expected quality may well increase with more customers – not decrease, as is usually envisaged for cloud. SLAs from cloud providers may well be trivial and poor, but the systemic risk of using clouds is not as simple as is often argued. While it is unsurprising that cloud suppliers offer poor SLAs (it is not in their interest to do otherwise), it does not mean that the quality of service is, or will remain, poor.

So what should the client consider in looking at the SLA offering in terms of service quality?

1) How does the Cloud SaaS supplier manage its growth? The growth of a SaaS service means greater demand on the provider’s data-centre, and hence greater risk that the SLAs for its multi-tenanted data-centre will be breached.

2) How open is the Cloud SaaS provider in allowing testing of its services by new customers?

3) How well does the Cloud SaaS provider’s strategic ambition for service quality align with your desires for service quality?

Obviously these questions are in addition to all the usual SLA questions.

Lock-ins, SLAs and the Cloud

One of the significant concerns in entering the cloud is the potential for lock-in with a cloud provider (though clearly you otherwise remain locked in with your own IT department as the sole provider).

The cost of moving from one provider to another is a significant obstacle to cloud penetration – if you could change provider easily and painlessly you might be more inclined to take the risk. Various services have emerged to try to attack this problem – CloudSwitch being one which created a considerable buzz at the Structure 2010 conference. Their service aims to provide a software means to transfer enterprise applications from a company’s data centre into the cloud (and between cloud providers). Whether it can live up to expectations we do not yet know, but CloudSwitch is attempting to provide a degree of portability much desired by clients – and probably much feared by cloud service providers, whose business would be reduced to that of utility suppliers if it succeeds.

But this links to another interesting conversation I was having with a media executive last week. They mentioned that since cloud virtual machines are so cheap, they often (effectively) host services across a number of suppliers to provide their own redundancy and thus ignore the SLA. If one service goes down they can switch quickly (using load balancers etc.) to another utility supplier. Clearly this only works for commodity IaaS and for relatively simple content distribution (rather than transaction processing), but it is a compelling model… why worry about choosing one cloud provider and being locked in, or risking a poor SLA – choose them all.
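A minimal sketch of this “choose them all” pattern: mirror the content across providers and serve from whichever responds. The hostnames are hypothetical.

```python
# Sketch of "choose them all": mirror content across providers and
# serve from whichever responds. Hostnames are hypothetical.
import urllib.request

MIRRORS = [
    "https://cdn.provider-a.example/asset.js",
    "https://cdn.provider-b.example/asset.js",
    "https://cdn.provider-c.example/asset.js",
]

def fetch_with_failover(urls: list[str], timeout: float = 2.0) -> bytes:
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            continue  # this provider is down: the next mirror is our "SLA"
    raise RuntimeError("all mirrors unavailable")
```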