England’s Electronic Prescription Service: Infrastructure in an Institutional Setting

Good friends in Oslo (Margunn Aanestad, Miria Grisot, Ole Hanseth and Polyxeni Vassilakopoulou) have just launched their edited book on Information Infrastructure within European Health Care. The book is open access, meaning you can download it for free here.

Infrastructure Book

Our team’s contribution is chapter 8, which discusses England’s Electronic Prescription Service, which we evaluated for NPfIT over a number of years. This service moved UK GPs away from paper prescriptions (FP10s – the green form) to electronic messages sent directly to the pharmacy. We examine the making of the EPS temporally by looking at: (1) how existing technology (the installed base) and historical actions affected the project; (2) how present practices and the wider NPfIT programme influenced its development; (3) how the desired future, reflected in policy goals and visions, shaped present actions.

To go to our article directly click here.

England’s Electronic Prescription Service

Ralph Hibberd, Tony Cornford, Valentina Lichtner, Will Venters, Nick Barber.

Abstract

We describe the development of the Electronic Prescription Service (EPS), the solution for the electronic transmission of prescriptions adopted by the English NHS for primary care. The chapter is based both on an analysis of data collected as part of a nationally commissioned evaluation of EPS and on reports of contemporary developments in the service. Drawing on the notion of an installed infrastructural base, we illustrate how EPS has been assembled within a rich institutional and organizational context including causal pasts, contemporary practices and policy visions. This process of assembly is traced using three perspectives: as the realization and negotiation of constraints found in the wider NHS context, as a response to inertia arising from limited resources and weak incentive structures, and as a purposive fidelity to the existing institutional cultures of the NHS. The chapter concludes by reflecting on the significance of this analysis for notions of an installed base.

Image (cc) Simon Harrod via Flickr with thanks!

Government as a Platform – an assessment framework

I’m pleased that my paper with Alan Brown, Jerry Fishenden and Mark Thompson has been published in Government Information Quarterly today! The paper draws together our collective work on platforms and government IT to develop an assessment framework for GaaP (Government as a Platform). We then evaluate recent UK government digital projects using the framework.

Cover image Government Information Quarterly

“Appraising the impact and role of platform models and Government as a Platform (GaaP) in UK Government public service reform: Towards a Platform Assessment Framework (PAF)”

Alan Brown, Jerry Fishenden, Mark Thompson, Will Venters

https://doi.org/10.1016/j.giq.2017.03.003

Abstract

The concept of “Government as a Platform” (GaaP) (O’Reilly, 2009) is coined frequently, but interpreted inconsistently: views of GaaP as being solely about technology and the building of technical components ignore GaaP’s radical and disruptive embrace of a new economic and organisational model with the potential to improve the way Government operates – helping resolve the binary political debate about centralised versus localised models of public service delivery. We offer a structured approach to the application of the platforms that underpin GaaP, encompassing not only their technical architecture, but also the other essential aspects of market dynamics and organisational form. Based on a review of information systems platforms literature, we develop a Platform Appraisal Framework (PAF) incorporating the various dimensions that characterise business models based on digital platforms. We propose this PAF as a general contribution to the strategy and audit of platform initiatives and more specifically as an assessment framework to provide consistency of thinking in GaaP initiatives. We demonstrate the utility of our PAF by applying it to UK Government platform initiatives over two distinct periods, 1999–2010 and 2010 to the present day, drawing practical conclusions concerning implementation of platforms within the unique and complex environment of the public sector.

Keywords

  • Platform;
  • Ecosystem;
  • Government as a Platform;
  • GaaP;
  • Digital Government

Image: Maurice via Flickr (CC BY) with thanks!

The Enterprise Kindergarten for our new AI Babies? Digital Leadership Forum.

I am to be part of a panel at the Digital Leadership Forum event today discussing AI and the enterprise. In my opinion, the AI debate has become dominated by the technology itself and the arrival of products sold to enterprises as “AI solutions”, rather than the ecosystems and contexts in which AI algorithms will operate. It is this that I intend to address.

It’s ironic, though, that we should come to see AI in this way – as a kind of “black box” to be purchased and installed. If AI is about “learning” and “intelligence” then surely an enterprise’s “AI baby”, if it is to act sensibly, needs a carefully considered, controlled environment to help it learn? AI technology is about learning – nurturing even – to ensure the results are relevant. With human babies we spend time choosing the books they will learn from, making the nursery safe and secure, and allowing them to experience the world in a controlled manner. But do enterprises think about investing similar effort in the training data for their new AI? And, in particular, in the digital ecosystem (kindergarten) which will provide such data?

Examples of AI success clearly demonstrate such a kindergarten approach. AlphaGo grew up in a world of well-understood problems (Go has logical rules) with data unequivocally relevant to that problem. The team used experts in the game to hone its learning and were on hand to drive its success. Yet many AI solutions are marketed as “plug-and-play”, as though exposing the AI to companies’ messy, often ambiguous, and usually partial data will be fine.

So where should a CxO be spending their time when evaluating enterprise AI? I would argue they should evaluate both the AI product and the organisation’s “AI kindergarten” in which it will grow.

Thinking about this further we might recommend that:

  • CxOs should make sure that the data feeding the AI represents the company’s values and needs and is not biased or partial.
  • Ensure that AI decisions are taken forward in a controlled way and that there is human oversight. Ensure the organisation is comfortable with any AI decisions and that, even when they are wrong (which AI sometimes will be), they do not harm the company.
  • Ensure that the data required to train the AI is available. AI can require a huge amount of data to learn effectively, so it may be uneconomic for a single company to acquire that data (see Uber’s woes in this area).
  • Consider what would happen if the data sources for the AI degraded or changed (for example a sensor broke, a camera was changed, data policy evolved or different types of data emerged). Who would be auditing the AI to ensure it continued to operate as required?
  • Finally, consider that the AI baby will not live alone – it will be “social”. Partners or competitors might employ similar AI which, within the wider marketplace ecosystem, might affect the world in which the AI operates. (See my previous article on potential AI collusion.) Famously, the interacting algorithms of high-frequency traders created significant market turbulence, dubbed the “flash crash”, as traders’ algorithms failed to understand the wider context of other algorithms interacting. Further, as AI often lacks transparency in its decision making, this interacting network of AIs may act unpredictably and in ways poorly understood.
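The auditing point above – who notices when a data source quietly degrades? – can be made concrete. A minimal sketch (the function name, thresholds and readings are all hypothetical, not from any particular product) that flags when recent sensor data drifts away from the baseline the AI was trained on:

```python
import statistics

def drift_alert(baseline, recent, threshold=3.0):
    """Flag when recent readings drift from the training baseline.

    Returns True if the mean of the recent data sits more than
    `threshold` baseline standard deviations from the baseline mean.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > threshold

# A healthy sensor tracks the training baseline; a broken one does not.
baseline = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]
healthy = [20.0, 20.1, 19.9]
broken = [0.0, 0.0, 0.0]   # e.g. a failed sensor reading zero
```

Even a check this crude makes the governance point: the AI itself will happily keep producing answers from the broken feed, so someone (or something) outside it has to be watching.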
Image Kassandra Bay (cc) Thanks

Digital infrastructures in organizational agility – Dr Florian Allwein

It was a great pleasure to see Florian Allwein, my PhD student, successfully defend his PhD today. The thesis has significant lessons for practitioners interested in the role of their digital technology in promoting agility within large organisations.

The abstract of Dr Allwein’s thesis:

Organizational agility has received much attention from practitioners and researchers in Information Systems. Existing research, however, has been criticised for a lack of variety. Moreover, as a consequence of digitalization, information systems are turning from traditional, monolithic systems to open systems defined by characteristics like modularity and generativity. The concept of digital infrastructures captures this shift and stresses the evolving, socio-technical nature of such systems. This thesis sees IT in large companies as digital infrastructures and organizational agility as a performance within them. In order to explain how such infrastructures can support performances of agility, a focus on the interactions between IT, information and the user and design communities within them is proposed. A case study was conducted within Telco, a large telecommunications firm in the United Kingdom. It presents three projects employees regarded as agile. Data was collected through interviews, observations of work practices and documents. A critical realist ontology is applied in order to identify generative mechanisms for agility. The mechanism of agilization – making an organization more agile by cultivating digital infrastructures and minding flows of information to attain an appropriate level of agility – is proposed to explain the interactions between digital infrastructures and performances of agility. It is supported by the related mechanisms of informatization and infrastructuralization. Furthermore, the thesis finds that large organizations do not strive for agility unreservedly, instead aiming for bounded agility in well-defined areas that does not put the business at risk. This thesis contributes to the literature by developing the concept of agility as a performance and illustrating how it aligns with digital infrastructures. 
The proposed mechanisms contribute to an emerging mid-range theory of organizational agility that will also be useful for practitioners. The thesis also contributes clear definitions of the terms “information” and “data” and aligns them to the ontology of critical realism.

(c) Dr Florian Allwein

 

Image: (cc)Erick Pleitez (Thanks)

Anti-competitive Artificial Intelligence (AI) – [FT.com]

Yesterday’s FT provides a fascinating article (available here) on the role algorithms may increasingly play in price-rigging and collusion. While previously humans have colluded to fix prices, today’s profit-maximising algorithms may end up colluding in ways which are hard to detect and difficult to stop. Indeed a recent OECD report states:

“Finding ways to prevent collusion between self-learning algorithms might be one of the biggest challenges that competition law enforcers have ever faced… [Algorithms and Big Data] may pose serious challenges to competition authorities in the future, as it may be very difficult, if not impossible, to prove an intention to co-ordinate prices, at least using current antitrust tools”.

While algorithmic trading has proliferated in financial services (reported in many popular books such as “Dark Pools”), it is their increasing use in consumer marketplaces which concerns the article’s authors – airline booking, hotels, and online retailing.

The problem for regulation is that “All of the economic models are based on human incentives and what we think humans rationally will do” (Terrell McSweeny, US FTC), while an AI algorithm which “learns” that its most profitable course of action is price coordination is poorly represented in our understanding.

“What happens if the machines realise it is in their interest to systematically and quickly raise prices in a co-ordinated way without deviating?” (Terrell McSweeny)

Indeed, we might ask whether an algorithm which uses huge databases of historical demand and supply data, and detailed data on the competitive marketplace, to arrive at its most profitable price in the milliseconds of a webpage loading is acting competitively in keeping with market principles, or against the consumer – who could never undertake similar analysis and therefore faces huge information asymmetry.

An interesting example in the article is an app that tracks petrol pricing: because the app instantly highlights to competitors that a price has been cut (and they can match the cut before demand shifts), it removes the incentive for anyone to discount.
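The petrol-app logic can be sketched as a toy calculation (the prices and volumes are invented for illustration, not from the article): if a rival can match any cut before demand shifts, the discounter never captures extra volume and simply earns less per litre, so nobody cuts.

```python
def profit(price, volume):
    """Revenue from selling `volume` litres at `price` per litre."""
    return price * volume

# Toy market: at equal prices the two stations split demand equally,
# while an *unmatched* discounter would capture extra volume.
no_discount = profit(1.40, 1000)

# Without instant price transparency: undercutting wins volume,
# so there is an incentive to discount.
undercut_unmatched = profit(1.35, 1600)

# With the app: the rival matches within minutes, demand never shifts,
# and both stations simply sell the same volume at the lower price.
undercut_matched = profit(1.35, 1000)
```

Under these toy numbers the unmatched cut beats the status quo, but the instantly matched cut is strictly worse – which is exactly why “perfect information” here kills discounting rather than encouraging it.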

The article even states: “the availability of perfect information, a hallmark of free market theory, might harm rather than empower consumers”

 

(Image (cc) Keith Cooper – thanks)

Professorship in Information Systems at the LSE

It’s exciting that the LSE Department of Management is recruiting another Professor in information systems… For details see…  http://bit.ly/LSEProfIS

“We welcome applicants with a successful research record in areas of digital innovation such as digital platforms, service innovation and e-business, social media and the digital economy, and information infrastructures and digital ecosystems. Scholarship on big data as a key component of digital innovation will be desirable. We expect research that demonstrates strong relevance for understanding the complexity of social or organizational processes and the institutional patterns within which digital innovation is embedded”

Building Mobility-as-a-Service in Berlin: The rhythms of information infrastructure coordination for smart cities

[The following article was jointly written with my PhD Student Ayesha Khanna. The article was published today on LSE Business Review http://blogs.lse.ac.uk/businessreview/ and is syndicated here with their agreement]

The 21st century has seen a growing recognition of the importance of cities in the world: not only does over half of humanity live in cities, but cities contribute 60 per cent of global GDP, consume 75 per cent of the world’s resources and generate 75 per cent of its carbon emissions. There is little doubt that the enlarging footprint of cities, with the rapid rate of urbanization in the developing world, will be where “the battle for sustainability will be won or lost” and, for those engaged in “smart-cities” initiatives, the focus of winning this battle is the use of digital technology to efficiently manage resources. One of the key sectors for such smart-city initiatives is transportation.

Transportation infrastructures today rely heavily on private car ownership, which is powered by fossil fuels, and public transportation, both of which operate independently of each other. Policy makers believe radical innovation in this sector is needed to move it to a more sustainable system of mobility.

To achieve the goal of sustainable, seamless, and efficient mobility, an infrastructure would be required that would allow residents to move away from private ownership to a combination of car-sharing and public transport. For example, such an intermodal chain of mobility might include taking a rented bicycle to the bus station, a bus to a stop near the office, and then a car-sharing service to the office, covering every step from origin to the last mile. Powered by renewable energy, electric vehicles could make this journey entirely green.

In order to create such a mobility infrastructure, all the services offered (buses, trains, car-sharing systems, charging stations, and payments) would have to be integrated using digital technology in order to provide an urban resident with an easy way to map and take an intermodal journey using her smartphone. This change would transform transportation as we know it today into Mobility-as-a-Service, but requires considerable innovation in the various heterogeneous digital computer-based systems (what we might term the information infrastructures) underpinning the physical transportation infrastructure. (For a more detailed account of the ideas of information infrastructure see Hanseth, O. and E. Monteiro, 1998.)

Framing an Academic Project

Academic research on how such mobility information infrastructures would grow from the constituent disparate systems that currently exist in silos has been nascent, especially on the topic of the coordination efforts required. Part of the reason is that many required elements of such infrastructures do not currently exist, and that cities are only just beginning to prototype them.

In our research, we use a theory of digital infrastructure coordination as a framework to unravel the forces that influence the development of a mobility focused information infrastructure, extending it to focus particularly on the influence of temporal rhythms within the coordination. Understanding this has important implications for policy makers seeking to better support smart-cities initiatives. Our research took us to Berlin and a project which was prototyping an integrated sustainable mobility system there.

The BeMobility Case Study

The BeMobility project, which lasted from September 2009 to March 2014, was started as part of a concerted effort by the German government to become a market leader and innovator in electric mobility. A public-private partnership between the government and over 30 private and academic sector stakeholders, the goal of BeMobility was to prototype an integrated mobility services infrastructure that would be efficient, sustainable and seamless for Berlin residents. The largest railways operator Deutsche Bahn was chosen as the lead partner of the project, with the think-do tank InnoZ (an institute focused on future mobility research) as the project coordinator and intermediary. Organizations participating in the project ranged from energy providers like Vattenfall to car manufacturers such as Daimler to technical scientists provided by Technical University of Berlin.

The project, despite facing many challenges, was able to prototype a transportation infrastructure which integrated electric car sharing with Berlin’s existing public transport system. In the second phase of the project, it further integrated this infrastructure with a micro-smart power-grid, providing insights into how such mobility services could be powered by renewable energies. While the integration effort was both at the hardware and software levels, our research studied the coordination efforts related to information infrastructure in particular.

“Integration of all this information is what we now call Mobility-as-a-Service. BeMobility was  one of the first projects in the world to attempt to do it.” – Member of BeMobility Project

Findings and Discussion

Our analysis showed that individuals and organizations respond to coordination efforts based on a combination of historical cycles of funding, product development and market structures, and anticipated patterns of technology disruption, innovation plans and consumer behaviour. Peoples’ actions in contributing to an integrated infrastructure are tempered not only by these past and future rhythms, but also by the limits of the technologies they encounter. Some of these limitations are physical in nature, such as the inability to integrate data due to lack of specific computing interfaces, and some are political, such as blocked access to databases due to concerns about competitive espionage and customer privacy.

Our findings also surfaced the power of the intermediary as coordinator. Contrary to the limited perception of a coordinator as a project manager and accountant for a government funded project, we saw InnoZ emerge as a key driver of the information infrastructure integration. One of the most powerful tools for the intermediary was its role in mapping future rhythms of technology development. It achieved this by showcasing prototypes of different types of electric vehicles, charging stations, solar panels, and software systems, at InnoZ’s campus.

This campus itself acted as a mini-prototype where both hardware and software integration could be first implemented and tested. The ability to physically demonstrate how the micro-smart grid could connect with the car-sharing system to enable sustainable energy for electric cars, for example, both surprised and motivated other stakeholders to take the imminent possibility of a sustainable mobility infrastructure more seriously.

Ultimately, business stakeholders were especially concerned about the commercial viability of such radical innovation. Here too the intermediary proactively shaped their thinking by conducting its own extensive social science research on the behavioural patterns of current and future users. For example, by showing that young urban residents were more interested in car-sharing than private ownership of cars, InnoZ made a strong case for why an integrated infrastructure could also be a good business investment.

Implications

As more cities experiment with Mobility-as-a-Service, understanding the influence of rhythms on coordinating information infrastructure is helpful for policymakers. Insights that would be useful to policymakers include:

  • Keeping a budget for building an innovation lab where cutting edge technologies can be tested and integration efforts can be showcased will lead to more engagement with stakeholders.
  • Working more closely with the intermediary to conduct social research on the mobility habits of millennial urban dwellers will incentivise stakeholders as it will prove a market for the smart infrastructure.
  • Anticipating the disciplinary inertia imposed by legacy systems and organizational practices, and countering it by including in the working group stakeholders whose temporal rhythms include innovative product cycles more in line with the goals of the integrated infrastructure.

This study also contributes to the academic literature on information infrastructure development by providing insights on the role of time in coordinating integration efforts. It responds to a gap in the understanding of the evolution of large-scale multi-organizational infrastructures, specifically as they relate to mobility.

♣♣♣

Notes:

Will Venters is an Assistant Professor within the Department of Management at the London School of Economics and Political Science. His research focuses on the distributed development of widely distributed computing systems. His recent research has focused on digital infrastructure, cloud computing and knowledge management systems. He has researched various organisations including government-related organisations, the construction industry, telecoms, financial services, health, and the Large Hadron Collider at CERN. He has undertaken consultancy for a wide range of organisations, and has published articles in top journals including the Journal of Management Studies, MIS Quarterly, Information Systems Journal, Journal of Information Technology and Information Technology and People (where he is also an associate editor). http://www.willventers.com

Ayesha Khanna is a digital technology and product strategy expert advising governments and companies on smart cities, future skills, and fintech. She spent more than a decade on Wall Street advising product innovation teams developing large scale trading, risk management and data analytics systems. Ayesha is CEO of LionLabs, a software engineering and design firm based in Singapore. She has a BA (honors) in Economics from Harvard University, an MS in Operations Research from Columbia University and is completing her PhD on smart city infrastructures at the London School of Economics.

Photo by Mueller felix (CC- thanks)

Join conversation with Eric Schmidt “From LEO to DeepMind: Britain’s computing pioneers”

Join me in attending a conversation between Eric Schmidt (executive chairman of Alphabet, Google’s parent company) and my colleague Prof Chrisanthi Avgerou on 14 October here at the LSE.

(Note getting a ticket will be difficult – see below for applications)

Click for full details: From LEO to DeepMind: Britain’s computing pioneers

Department of Management and LEO Computers Society public conversation

Date: Friday 14 October 2016
Time:  6.30-7.30pm
Venue: LSE campus, venue TBC to ticketholders
Speaker: Eric Schmidt
Chair: Professor Chrisanthi Avgerou

Five years on from his 2011 MacTaggart lecture in which he traced Britain’s computing heritage and called for the inclusion of computer science (CS) in the National Curriculum, Alphabet executive chairman Eric Schmidt will discuss progress in CS education and digital skills, and the opportunities that flow from the next wave of British computing innovation in machine learning. Join Eric in conversation with Professor Chrisanthi Avgerou.

Eric Schmidt (@ericschmidt) is the executive chairman of Alphabet, responsible for the external matters of all of the holding company’s businesses, including Google Inc., advising their CEOs and leadership on business and policy issues. Eric joined Google in 2001 and helped grow the company from a Silicon Valley startup to a global leader in technology. He served as Google’s Chief Executive Officer from 2001-2011, overseeing the company’s technical and business strategy alongside founders Sergey Brin and Larry Page. Under his leadership Google dramatically scaled its infrastructure and diversified its product offerings while maintaining a strong culture of innovation.

Chrisanthi Avgerou is Professor of Information Systems at LSE’s Department of Management and Programme Director of LSE’s MSc Management, Information Systems and Digital Innovation. She is interested in the relationship of ICT to organisational change and the role of ICT in socio-economic development. She has served in various research and policy committees on information technology and socio-economic development of the International Federation for Information Processing (IFIP) from 1996 until 2012.

The Department of Management (@LSEManagement) is a globally diverse academic community at the heart of the LSE, taking a unique interdisciplinary, academically in-depth approach to the study of management and organisations.

In 1951 J Lyons and Co, an innovative British catering company famous for its teashops, pioneered the world’s first business computer and ran the first practical business application on it. In subsequent years, LEO (Lyons Electronic Office) computers were adopted by a host of blue chip companies at home and abroad. Today, the LEO Computers Society consists of former employees of LEO Computers and its succeeding companies, men and women who have worked with an LEO computer, and anyone who has an interest in the history of the company.

Twitter Hashtag for this event: #LSEcomputer

Ticket Information

This event is free and open to all; however, a ticket is required, and only one ticket per person can be requested.

LSE students and staff are able to collect one ticket per person from the SU shop, located at Lincoln Chambers, 2-4 Portsmouth Street, from 10am on Thursday 6 October. These tickets are available on a first come, first served basis.

Members of the public, LSE alumni, LSE students and LSE staff can request one ticket via the online ticket request form which will be live on this listing from around 6pm on Thursday 6 October until at least 12noon on Friday 7 October. If at 12noon we have received more requests than there are tickets available, the line will be closed, and tickets will be allocated on a random basis to those requests received. If we have received fewer requests than tickets available, the ticket line will stay open until all tickets have been allocated.

 

Hype, Blockchain – and some Inconvenient Truths

Excellent piece on the problems of Blockchain for identity management from Jerry Fishenden… 

“For all the froth and hype about blockchain, you’d think it was going to bring about world peace, and simultaneously solve every problem known to mankind. There’s probably been more tosh written about it over the past year or so than all that previous guff about “big data”. Quite frankly, I’m disappointed blockchain hasn’t defeated ISIL single-handed and rebuilt the Seven Wonders of the Ancient World by now. Come on blockchain, what are you waiting for?!” (Click the link below to read on..)

Source: Hype, Blockchain – and some Inconvenient Truths

What can Artificial Intelligence do for business?

I am joining a panel tomorrow at the AI-Summit in London, focused on practical Artificial Intelligence (AI) for business applications. I am to be asked the question “What can Artificial Intelligence do for business?”, so by way of preparation I thought I should try to answer the question on my blog.

Perhaps we can break the question down – first considering the corollary question of “what can’t AI do for business” even if its cognitive potential matches or exceeds that of a human, then discussing “what can AI do for businesses practically today”.

What would happen if we did succeed in developing AI with significant cognitive potential (of which IBM’s Watson provides a foretaste)? Let’s undertake a thought experiment. Imagine we have AI software (Fred) capable of matching or exceeding human-level intelligence (cognitively defined), but which obviously remains locked inside the prison of its computer body. What would Fred miss that might limit his ability to help the business?

Firstly, much of business is about social relationships – those attending the AI-Summit have decided that attending offers something not available from reading the Internet: perhaps the herd mentality of seeing what others are doing, perhaps subtle clues, perhaps serendipitous conversations, or perhaps the building of trust such that unwritten knowledge is shared. Fred would likely be absent from all this – even given a robotic persona, it is unlikely he would fit in with the subtle social activity needed to navigate the drinks reception.

Second, Fred is necessarily backward looking, gleaning his intelligence and predictive capacity from processing the vast informational traces of human existence available from the past (or present). Yet we humans, and business in general, are forward looking – we live by imagined futures as much as remembered pasts. How well could Fred handle prediction when the world can change in an instant (remember the sad day of 9/11)? Perhaps quicker than us (processing the immediate tweets), but perhaps wrongly – not seeing the mood shifts, changes and immediate actions. Who knows?

My third point is derived from the famous Hawthorne experiments, which showed that human behaviour changes when observed. Embedding Fred into an organisation will change the organisation’s social dynamic and so change the organisation. Perhaps people will stop talking where Fred can hear, or talk differently when they know he is watching. Perhaps they will be more risk averse, worried Fred would question the rationality of their decisions. Perhaps they would be more scientific – seeking to mimic Fred – and lose their aesthetic, intuitive ideas. Perhaps they will find it hard to challenge, debate and argue with Fred – debate that is necessary for businesses to arrive at decisions in the face of uncertainty. Or perhaps Fred will deny the wisdom of the crowd (Surowiecki, 2005) by over-representing one perspective, when the crowd may better reflect humans’ likely future response.

Or perhaps, as Nicholas Carr suggests (Carr, 2014), Fred will prove so useful and intelligent that he dulls our interest in the business, erodes our attentiveness and deskills the CxOs in the organisation – just as it has been suggested flying on autopilot can do to pilots.

Finally (and arguably most importantly, since those who believe in AI will likely dismiss the earlier points as simplistic, arguing that AI will overcome them by brute force of intelligence), Fred's intelligence would be based on data gleaned from a human world, and "raw data is an oxymoron, data are always already cooked and never entirely raw" (Gitelman and Jackson 2013, following Bowker 2005 – cited in Kitchin, 2014). Fred's data is partial: decisions were made as to what was, and wasn't, counted and recorded, and how it was recorded (Bowker & Star, 1999). Our data reflects our social world, and Fred is likely to over-estimate the benign nature of this representation (or its extreme representations) of the data. While IBM's Watson can reflect human knowledge in games such as Jeopardy, its limited ability to question the provenance of data without real human experience may limit its ability to act humanly – and in a world which continues to be dominated by humans this may be a problem. I had the pleasure of attending a talk two weeks ago by Prof Ross Koppel, who discusses this challenge in detail in relation to health-care payments data. AI is founded upon an ontology of scientific rationality – by far the most dominant ontological position today. This position holds that science, and statistical inference from data, presents the truth (a single unassailable truth at that). Such rationality denies human belief, superstition and irrationality – yet these continue to play a part in the way humans act and behave. Perhaps AI needs to explore these philosophical assumptions further, as Winograd and Flores famously did around AI three decades ago (Winograd & Flores, 1986).

In conclusion, when evaluating any new technology's impact on business we should be critical of "solutionism" – the claim that business problems will be solved by one silver bullet. Instead we should evaluate each technology through a range of relevant filters – asking questions about its likely economic, social and political distortions, and from this evaluating how it can truly add value to business. In exploiting AI today, at its most basic, businesses should start by focusing on the low-hanging fruit. AI doesn't have to be that intelligent to provide huge benefits. Consider how Robotic Process Automation can help companies (e.g. O2) deal with their long tail of boring, repetitive processes (Willcocks & Lacity, 2016) – for example "swivel chair" functions where people extract data from one system (e.g. email), undertake simple rule-based processing, then enter the output into a system of record such as an ERP (Willcocks & Lacity, 2016). As such processes involve only a modicum of intelligence, and are repetitive and boring for humans, they offer cost opportunities (see Blue Prism as an example of this type of solution) – particularly as one estimate suggests such automation costs around $7,500 per year per full-time equivalent, compared with around $23,000 per year for an offshore salary (Willcocks and Lacity 2016, quoting Operationalagility.com).
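To make the "swivel chair" idea concrete, here is a minimal sketch of such a rule-based automation in Python. It is purely illustrative – the message format, field names and extraction rules are all assumptions, not any vendor's actual API – but it shows the shape of the work: read semi-structured input from one system, apply fixed rules, and write structured records into a stand-in system of record.

```python
# Hypothetical "swivel chair" automation: extract fields from
# email-like messages using fixed rules, then enter the results
# into a system of record. All names and rules are illustrative.
import re

# Incoming messages, as a bot might pull them from a mailbox.
inbox = [
    {"subject": "Order #1042", "body": "Please ship 3 units of SKU-77 to depot B."},
    {"subject": "Order #1043", "body": "Please ship 10 units of SKU-12 to depot A."},
]

system_of_record = []  # stand-in for an ERP order table


def process(message):
    """Apply simple fixed rules to pull structured fields from free text."""
    order_id = re.search(r"#(\d+)", message["subject"]).group(1)
    qty, sku = re.search(r"(\d+) units of (SKU-\d+)", message["body"]).groups()
    return {"order_id": order_id, "sku": sku, "quantity": int(qty)}


for msg in inbox:
    system_of_record.append(process(msg))

print(system_of_record)
```

The point of the sketch is how little "intelligence" is involved: two regular expressions and a loop replace a person re-keying data between screens, which is exactly why this long tail of repetitive processes is such cheap ground for automation.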

Obviously AI might move up the chain to deal with more significant business process issues – however at each stage we are reminded that CxOs will need leadership, and IT departments will need specific skills, to ensure that the AI makes sensible decisions and reflects business practices. Business Analysts will need to learn about AI so that they can act as sensible teachers – identifying risks the AI is unlikely to notice, and steering it to act sensibly. Finally, as the technology improves, organisational and business sociologists will be needed to wrestle with the challenges identified above.

© Will Venters

Bowker, G., & Star, S. L. (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.

Carr, N. (2014). The Glass Cage: Automation and Us. W. W. Norton & Company.

Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Sage.

Surowiecki, J. (2005). The Wisdom of Crowds. Anchor.

Willcocks, L., & Lacity, M. C. (2016). Service Automation: Robots and the Future of Work. Warwickshire, UK: Steve Brookes Publishing.

Winograd, T., & Flores, F. (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex.

(Image (cc) from Jorge Barba – thanks)