Building Mobility-as-a-Service in Berlin: The rhythms of information infrastructure coordination for smart cities

[The following article was jointly written with my PhD student Ayesha Khanna. The article was published today on LSE Business Review http://blogs.lse.ac.uk/businessreview/ and is syndicated here with their agreement]

The 21st century has seen a growing recognition of the importance of cities: not only does over half of humanity live in cities, but cities contribute 60 per cent of global GDP, consume 75 per cent of the world’s resources and generate 75 per cent of its carbon emissions. There is little doubt that the enlarging footprint of cities, given the rapid rate of urbanisation in the developing world, will be where “the battle for sustainability will be won or lost”, and, for those engaged in “smart-cities” initiatives, the key to winning this battle is the use of digital technology to manage resources efficiently. One of the key sectors for such smart-city initiatives is transportation.

Transportation infrastructures today rely heavily on fossil-fuel-powered private cars and on public transport, and the two operate largely independently of each other. Policy makers believe radical innovation in this sector is needed to move it towards a more sustainable system of mobility.

To achieve the goal of sustainable, seamless, and efficient mobility, an infrastructure would be required that would allow residents to move away from private ownership to a combination of car-sharing and public transport. For example, such an intermodal chain of mobility might include taking a rented bicycle to the bus station, a bus to a stop near the office, and then a car-sharing service to the office, covering every step from origin to the last mile. Powered by renewable energy, electric vehicles could make this journey entirely green.

To create such a mobility infrastructure, all the services offered (buses, trains, car-sharing systems, charging stations, and payments) would have to be integrated using digital technology, so that an urban resident could map and take an intermodal journey from her smartphone. This change would transform transportation as we know it today into Mobility-as-a-Service, but it requires considerable innovation in the heterogeneous digital systems (what we might term the information infrastructures) underpinning the physical transportation infrastructure. (For a more detailed account of the idea of information infrastructure see Hanseth, O. and E. Monteiro, 1998.)
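To make the integration idea concrete, here is a minimal sketch (in Python) of how heterogeneous mobility providers might sit behind a single digital interface that a journey-planning app composes into one door-to-door itinerary. The class and method names are purely illustrative assumptions for this post, not BeMobility’s actual systems or APIs.

```python
# A minimal, hypothetical sketch: each provider (bike hire, bus, car sharing)
# exposes the same small interface so one planner can compose an intermodal trip.
# None of these names reflect real BeMobility components.

class BikeShare:
    name = "bike share"
    def next_departure(self, origin, destination):
        return f"bike available now at {origin}"

class Bus:
    name = "bus"
    def next_departure(self, origin, destination):
        return f"bus in 4 minutes from {origin} towards {destination}"

class CarShare:
    name = "car sharing"
    def next_departure(self, origin, destination):
        return f"electric car bookable at {origin}"

def plan_intermodal_journey(legs):
    """Compose one itinerary from heterogeneous services sharing an interface."""
    return [f"{service.name}: {service.next_departure(o, d)}" for service, o, d in legs]

itinerary = plan_intermodal_journey([
    (BikeShare(), "home", "bus station"),
    (Bus(), "bus station", "stop near the office"),
    (CarShare(), "stop near the office", "office"),
])
print("\n".join(itinerary))
```

The point of the sketch is simply that once every service answers the same questions through a shared interface, a single smartphone app (and a single payment step) can treat the whole chain as one journey.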

Framing an Academic Project

Academic research on how such mobility information infrastructures would grow from the constituent disparate systems that currently exist in silos has been nascent, especially on the topic of the coordination efforts required. Part of the reason is that many required elements of such infrastructures do not currently exist, and that cities are only just beginning to prototype them.

In our research, we use a theory of digital infrastructure coordination as a framework to unravel the forces that influence the development of a mobility-focused information infrastructure, extending it to focus particularly on the influence of temporal rhythms within that coordination. Understanding this has important implications for policy makers seeking to better support smart-cities initiatives. Our research took us to Berlin and a project prototyping an integrated sustainable mobility system there.

The BeMobility Case Study

The BeMobility project, which ran from September 2009 to March 2014, was started as part of a concerted effort by the German government to make Germany a market leader and innovator in electric mobility. A public-private partnership between the government and over 30 private-sector and academic stakeholders, its goal was to prototype an integrated mobility services infrastructure that would be efficient, sustainable and seamless for Berlin residents. Germany’s largest railway operator, Deutsche Bahn, was chosen as the lead partner, with the think-do tank InnoZ (an institute focused on future mobility research) as project coordinator and intermediary. Participating organisations ranged from energy providers such as Vattenfall to car manufacturers such as Daimler and researchers from the Technical University of Berlin.

The project, despite facing many challenges, was able to prototype a transportation infrastructure that integrated electric car sharing with Berlin’s existing public transport system. In the second phase, it further integrated this infrastructure with a micro-smart grid, providing insights into how such mobility services could be powered by renewable energy. While the integration effort spanned both hardware and software, our research studied the coordination efforts related to the information infrastructure in particular.

“Integration of all this information is what we now call Mobility-as-a-Service. BeMobility was one of the first projects in the world to attempt to do it.” – Member of BeMobility Project

Findings and Discussion

Our analysis showed that individuals and organisations respond to coordination efforts based on a combination of historical cycles of funding, product development and market structures, and anticipated patterns of technology disruption, innovation plans and consumer behaviour. People’s actions in contributing to an integrated infrastructure are tempered not only by these past and future rhythms, but also by the limits of the technologies they encounter. Some of these limitations are physical, such as the inability to integrate data due to a lack of specific computing interfaces, and some are political, such as blocked access to databases due to concerns about competitive espionage and customer privacy.

Our findings also surfaced the power of the intermediary as coordinator. Contrary to the limited perception of a coordinator as a project manager and accountant for a government-funded project, we saw InnoZ emerge as a key driver of the information infrastructure integration. One of the intermediary’s most powerful tools was its role in mapping future rhythms of technology development, which it achieved by showcasing prototypes of different types of electric vehicles, charging stations, solar panels, and software systems at InnoZ’s campus.

This campus itself acted as a mini-prototype where both hardware and software integration could first be implemented and tested. The ability to physically demonstrate how, for example, the micro-smart grid could connect with the car-sharing system to supply sustainable energy to electric cars both surprised other stakeholders and motivated them to take the imminent possibility of a sustainable mobility infrastructure more seriously.

Ultimately, business stakeholders were especially concerned about the commercial viability of such radical innovation. Here too the intermediary proactively shaped their thinking by conducting its own extensive social science research on the behavioural patterns of current and future users. For example, by showing that young urban residents were more interested in car-sharing than private ownership of cars, InnoZ made a strong case for why an integrated infrastructure could also be a good business investment.

Implications

As more cities experiment with Mobility-as-a-Service, understanding the influence of rhythms on coordinating information infrastructure is helpful for policymakers. Insights that would be useful to policymakers include:

  • Keeping a budget for an innovation lab, where cutting-edge technologies can be tested and integration efforts showcased, will lead to more engagement with stakeholders.
  • Working more closely with the intermediary to conduct social research on the mobility habits of millennial urban dwellers will incentivise stakeholders, as it demonstrates a market for the smart infrastructure.
  • Anticipating the disciplinary inertia imposed by legacy systems and organisational practices, and countering it by including in the working group stakeholders whose temporal rhythms include product cycles more in line with the goals of the integrated infrastructure, will help keep the integration effort on track.

This study also contributes to the academic literature on information infrastructure development by providing insights on the role of time in coordinating integration efforts. It responds to a gap in the understanding of the evolution of large-scale multi-organizational infrastructures, specifically as they relate to mobility.

♣♣♣

Notes:

Will Venters is an Assistant Professor within the Department of Management at the London School of Economics and Political Science. His research focuses on the distributed development of widely distributed computing systems. His recent research has focused on digital infrastructure, cloud computing and knowledge management systems. He has researched various organisations including government-related organisations, the construction industry, telecoms, financial services, health, and the Large Hadron Collider at CERN. He has undertaken consultancy for a wide range of organisations, and has published articles in top journals including the Journal of Management Studies, MIS Quarterly, Information Systems Journal, Journal of Information Technology, and Information Technology and People (where he is also an associate editor). http://www.willventers.com

Ayesha Khanna is a digital technology and product strategy expert advising governments and companies on smart cities, future skills, and fintech. She spent more than a decade on Wall Street advising product innovation teams developing large scale trading, risk management and data analytics systems. Ayesha is CEO of LionLabs, a software engineering and design firm based in Singapore. She has a BA (honors) in Economics from Harvard University, an MS in Operations Research from Columbia University and is completing her PhD on smart city infrastructures at the London School of Economics.

Photo by Mueller felix (CC- thanks)

Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. With the number of connected devices predicted, by some estimates, to exceed 50bn by 2020, there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things, it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard emerges there is considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the standard Android/iPhone platforms, so increasing those platforms’ value and creating a complex ecosystem of value.
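As a rough illustration of why these network effects favour incumbents, the sketch below assumes a Metcalfe’s-law-style valuation, in which a platform’s value grows with the number of possible connections among its users. The formula and the numbers are illustrative assumptions, not figures from the references cited here.

```python
# Illustrative only: value the platform by its possible user-to-user links,
# i.e. n * (n - 1) / 2 (a Metcalfe's-law-style assumption).

def platform_value(users: int, value_per_link: float = 1.0) -> float:
    """Value grows with the number of possible links between users."""
    return value_per_link * users * (users - 1) / 2

for users in (1, 10, 100, 1000):
    print(f"{users:>5} users -> value {platform_value(users):>12,.0f}")

# Doubling the user base roughly quadruples the value, which is why a widely
# adopted incumbent is so hard for a marginally better rival to displace.
print(platform_value(2000) / platform_value(1000))  # ~4
```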

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things, whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as in its Internetness. If IoT evolves like mobile phones this could lock us into the platform, and changing to an alternative platform would thus incur a high material cost (assuming, like mobiles, we are unable to alter the software), since, unlike phones, these devices are not regularly upgraded. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from its platform over the long term. Also, for many IoT devices (and particularly those relevant to critical national infrastructure) this distribution of control does not correspond to the distribution of risk, security and liability, which may be significant.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This, however, creates potential instability in the platform, making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that their owners seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular, controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to open source, open platforms are not about the software but about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around that standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible, we should seek open platforms and promote the development of standards.

(2) We must demand democratic accountability, and seek to exploit levers which ensure control over our infrastructure is reflective of need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974-2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be The Same Again. Journal of Public Administration Research and Theory, 2013.

What is Fog Computing?

I read an interesting article on Fog Computing and thought readers might like a short precis:

Applications such as health monitoring or emergency response require near-instantaneous responses, so the delay caused by contacting, and receiving data from, a cloud data-centre can be highly problematic. Fog computing is a response to this challenge. The basic idea is to shift some of the computing from the data-centre to devices closer to the edge of the network – moving the cloud towards the ground (hence “fog computing”). The computing work is shared between the data-centre and various local IoT devices (e.g. a local router or smart gateway).
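To make this division of labour concrete, here is a minimal sketch of how work might be placed between an edge gateway and the cloud. The latency figures, task names and threshold rule are my own illustrative assumptions, not details from the article.

```python
# Hypothetical placement rule: latency-critical tasks run on a nearby gateway,
# heavy but non-urgent analytics are forwarded to a cloud data-centre.
from dataclasses import dataclass

EDGE_LATENCY_MS = 5      # assumed round trip to a local router / smart gateway
CLOUD_LATENCY_MS = 120   # assumed round trip to a distant data-centre

@dataclass
class Task:
    name: str
    deadline_ms: float   # how quickly a response is needed
    heavy: bool          # needs large-scale storage or batch analytics?

def place(task: Task) -> str:
    """Keep work at the edge when the cloud round trip would miss the deadline."""
    if task.deadline_ms < CLOUD_LATENCY_MS and not task.heavy:
        return "edge"    # the ~EDGE_LATENCY_MS gateway round trip suffices
    return "cloud"

for task in (
    Task("fall-detection alert", deadline_ms=50, heavy=False),
    Task("monthly health-trend analysis", deadline_ms=60_000, heavy=True),
):
    print(f"{task.name}: run at {place(task)}")
```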

“Fog computing is a paradigm for managing a highly distributed and possibly virtualized environment that provides compute and network services between sensors and cloud data-centers” (Dastjerdi et al. 2016)

While cloud computing (using large data-centres) is perfect for the analysis of Big Data “at rest” (i.e. analysing historical trends, where large volumes of data and cheap processing are required), fog computing may be much better for dynamic analysis of “data-in-motion” (data concerning immediate, ongoing actions which require a rapid analytical response). For example, an augmented reality application cannot wait for a distant data-centre to respond when a user’s head is turned. Similarly, safety-critical and business-critical applications, such as remote health-care monitoring or remote diagnostics, cannot rely on the permanent availability of internet connections (as residents of York discovered when floods knocked out their internet for days this year).

Privacy concerns are also relevant. By moving data analysis to the edge of the network (e.g. a device or local mobile phone), which is often owned and controlled by the data source, the user may gain more control over their data. For example, an exercise tracker might aggregate and process its GPS and fitness data on a local mobile phone rather than automatically uploading it to a distant server. It might also undertake data trimming, so reducing the bandwidth demand and the load on the cloud. This is particularly relevant as the number of connected devices rises into the billions. This gain should be balanced against the challenge of managing an increasing number of devices which must be secured to hold sensitive data safely.
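As a minimal sketch of what such edge-side aggregation and data trimming might look like for the exercise-tracker example: raw GPS samples stay on the phone, and only a small summary is uploaded. The field names and the haversine calculation are illustrative choices of mine, not from the article.

```python
# Hypothetical edge-side aggregation: summarise a GPS trace locally and upload
# only the summary, rather than streaming every raw sample to the cloud.
import math

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def summarise_run(samples):
    """Trim a raw (lat, lon, timestamp) trace down to the few numbers worth sending."""
    distance = sum(haversine_km(a[:2], b[:2]) for a, b in zip(samples, samples[1:]))
    return {"distance_km": round(distance, 2),
            "duration_s": samples[-1][2] - samples[0][2]}

trace = [(52.5200, 13.4050, 0), (52.5210, 13.4070, 60), (52.5225, 13.4100, 120)]
summary = summarise_run(trace)   # a handful of bytes instead of the full trace
print(summary)
```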

Another challenge is the climate impact this new architecture poses. While data-centres are increasingly efficient in their processing, and often rely on clean energy sources, moving computing to less efficient devices at the edge of the network might create a problem. We are effectively balancing latency against CO2 production.

For more information see:

Dastjerdi, A. V., Gupta, H., Calheiros, R. N., Ghosh, S. K., and Buyya, R. 2016. “Fog Computing: Principles, Architectures, and Applications,” in Internet of Things: Principles and Paradigms. Elsevier / Morgan Kaufmann. http://www.buyya.com/papers/FogComputing2016.pdf

(Image Ian Furst (cc))