[Academic Call]: AI and the Artificialities of Intelligence.

I am really excited to be a co-chair of the following academic workshop at ESSEC & Université Paris Dauphine-PSL. Please join us if you can!

AI and the Artificialities of Intelligence: What matters in and for organizing?

Call for papers 14th Organizations, Artifacts & Practices (OAP) Workshop #OAP2024

When: June 6th and 7th 2024

Where: Paris (ESSEC & Université Paris Dauphine-PSL). Face-to-face event.

Co-chairs:

Ella Hafermalz (Vrije Universiteit Amsterdam)
François-Xavier de Vaujany (Université Paris Dauphine-PSL)
Aurélie Leclercq-Vandelannoitte (CNRS, LEM, IESEG, Univ. Lille)
Julien Malaurent (ESSEC)
Will Venters (LSE)
Youngjin Yoo (Case Western Reserve University)

This 14th OAP workshop, jointly organized by Université Paris Dauphine-PSL (DRM), ESSEC and ESSEC Metalab, will be an opportunity to revisit the issue of Artificial Intelligence and its relationship with the history, philosophy and politics of management and organization.

Artificial Intelligence now pervades discussions about the future of organizations and societies. AI is expected to bring deep changes to work practices and our ways of living, and utopian and dystopian narratives abound. However, AI is far from being a fleeting trend; rather, it constitutes a collection of techniques with a rich history dating back to the 1950s. AI serves as a broad framework deeply intertwined with ideals of rationalism and representationalism – much like the broader digital landscape it epitomizes. The aspiration in the realm of AI is that self-sufficient techniques will progressively and continuously enhance our comprehension of the world. By means of rules and the use of massive amounts of data, learning capabilities are expected to make AI tools ever more likely to expose and elucidate the underlying realities of the processes they were initially designed to represent. Increasingly, AI transcends its role as an ‘unraveller’ of complexity in the present. It discloses our future – what will happen in the next seconds, days, months, years or centuries. It arguably encompasses the entirety of our potential futures.

As well as having a certain hold on our future(s), these powerful tools are impacting how we think. Our cognition and understanding of the world are dramatically extended, amplified and revolutionized, but also individualized, siloed, and cut off from traditional social processes of interaction and sensemaking. In this vein, the gap between our ways of acting (in an embodied way) and our ways of thinking grows. The dualism at the heart of representationalism, although more and more visual, narrative and corporeal, becomes central and even foundational. Part of our cognition – and our social practice of gaining and sharing knowledge – is delegated to AI.

These artificialities of intelligence (in particular collective intelligence) will be at the heart of this 14th OAP workshop in Paris. Behind and beyond AI as a set of codes, norms, standards and massive uses of data, our intelligence is more and more artificialized. Our collective intelligence relies on a representationalist philosophy which starts from a problem (a request) submitted to a generative AI tool such as Bard or ChatGPT, which then offers a relevant narrative likely to answer brilliantly and confidently. Co-problematization, inquiry, concerns, openness – in short, life – are not at all part of this equation. This artificial organizing process will be central to our discussions.

In particular, we welcome abstracts likely to cover the following topics:

  • Artificialities of intelligence as organization and organizationality;
  • Historical perspectives on digitality and AI;
  • Historical perspectives on calculative techniques, cybernetics, AI and digitality in general, in relation to management and organizationality;
  • Revisiting and problematizing traditional assumptions about knowledge sharing and communities of practice;
  • Ethnographies, collaborative ethnographies and auto-ethnographies about AI in organizations;
  • Pragmatist inquiries about collective intelligence;
  • Critiques of cognitivism in organization studies and management, e.g., strategic management, accounting, marketing, logistics and MIS;
  • Explorations of the relationships between new managerial techniques and AI;
  • Temporal and spatial views about AI and artificialities of intelligence;
  • Phenomenological and post-phenomenological perspectives about AI in organizations;
  • Process perspectives on the artificiality of intelligence;
  • Critical views of AI and the artificialities of intelligence;
  • AI and the metamorphosis of scientific practices;
  • AI and the dynamics of scientific communities and scientific paradigms;
  • AI and its political dimension in organizations.

Of course, our event will also be open to more traditional OAP ontological discussions around the time, space, place and materiality of organizing in a digital era, e.g., papers discussing ontologies, sociomateriality, affordances, spacing, emplacement, atmosphere, events, becoming, practices, flows, moments, existentiality, verticality and instants in the context of our digital world.

Please note that OAP 2024 will include a pre-event, the Dauphine Philosophy Workshop, also hosted by Université Paris Dauphine-PSL on June 6th 2024 and entitled “Beyond judgement and legitimation: reconceptualizing the ontology of institutional dynamics in MOS”.

Those interested in our pre-OAP event and our OAP workshop must submit an extended abstract of no more than 1,000 words to workshopoap@gmail.com. The abstract must outline the applicant’s proposed contribution to the workshop. The proposal must be in .doc/.docx/.rtf format and should contain the author’s/authors’ names as well as their institutional affiliations, email address(es), and postal address(es). Deadline for submissions will be February 3rd, 2024 (midnight CET).

Authors will be notified of the committee’s decision by February 28th, 2024.

Please note that OAP 2024 will take place only onsite this year.

There are no fees associated with attending this workshop.

Organizing committee: Hélène Bussy-Socrate (PSB), François-Xavier de Vaujany (Université Paris Dauphine-PSL, DRM), Albane Grandazzi (GEM), Aurélie Leclercq-Vandelannoitte (CNRS, LEM, IESEG, Univ. Lille), Sébastien Lorenzini (Université Paris Dauphine-PSL, DRM) and Julien Malaurent (ESSEC).

REFERENCES

Aspray, W. (1994). The history of computing within the history of information technology. History and Technology, an International Journal, 11(1), 7-19.

Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3).

Chia, R. (1995). From modern to postmodern organizational analysis. Organization Studies, 16(4), 579-604.

Chia, R. (2002). Essai: Time, duration and simultaneity: Rethinking process and change in organizational analysis. Organization Studies, 23(6), 863-868.

Clemson, B. (1991). Cybernetics: A new management tool (Vol. 4). CRC Press.

de Vaujany, F. X., & Mitev, N. (2017). The post-Macy paradox, information management and organising: Good intentions and a road to hell? Culture and Organization, 23(5), 379-407.

de Vaujany, F.-X. (2022). Apocalypse managériale. Paris: Les Belles Lettres.

Introna, L. D. (1997). Management: and manus. In Management, Information and Power: A narrative of the involved manager (pp. 82-117).

Nascimento, A. M., da Cunha, M. A. V. C., de Souza Meirelles, F., Scornavacca Jr, E., & De Melo, V. V. (2018). A Literature Analysis of Research on Artificial Intelligence in Management Information System (MIS). In AMCIS.

Öztürk, D. (2021). What Does Artificial Intelligence Mean for Organizations? A Systematic Review of Organization Studies Research and a Way Forward. The Impact of Artificial Intelligence on Governance, Economics and Finance, Volume I, 265-289.

Lorino, P. (2018). Pragmatism and organization studies. Oxford University Press.

Pickering, A. (2002). Cybernetics and the mangle: Ashby, Beer and Pask. Social Studies of Science, 32(3), 413-437.

Simpson, B., & Revsbæk, L. (Eds.). (2022). Doing process research in organizations: Noticing differently. Oxford University Press.

Thompson, N. A., & Byrne, O. (2022). Imagining futures: Theorizing the practical knowledge of future-making. Organization Studies, 43(2), 247-268.

Vesa, M., & Tienari, J. (2022). Artificial intelligence and rationalized unaccountability: Ideology of the elites?. Organization, 29(6), 1133-1145.

Wagner, G., Lukyanenko, R., & Paré, G. (2022). Artificial intelligence and the conduct of literature reviews. Journal of Information Technology, 37(2), 209-226.

Yates, J. (1993). Control through communication: The rise of system in American management (Vol. 6). JHU Press.

Understanding AI and Large Language Models: Spiders’ Webs and LSD.

The following light-hearted script was for an evening talk at the London Stock Exchange for the Enterprise Technology Meetup in June 2023. The speech is based on research with Dr Roser Pujadas of UCL and Dr Erika Valderrama of Umeå University in Sweden.

—–

Last Tuesday the news went wild as industry and AI leaders warned that AI might pose an “existential threat” and that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war”[1]. I want to address this important topic, but I want to paint my own picture of what I think is wrong with some of the contemporary focus on AI, and why we need to expand the frame of reference in this debate to think in terms of what I will term “Algorithmic Infrastructure”[2].

But before I do that I want to talk about Spiderman. Who has seen the new Spiderman animated movie? I have no idea why I went to see it, since I don’t like superheroes or animated movies! We had childcare, didn’t want to eat out, so we ended up at the movies – and it beat Fast and Furious 26… Anyway, I took two things from this: the first was that most of the visuals looked like someone animating on LSD, and the second was that everything was connected in some spider’s web of influence and connections. And that’s what I am going to talk about – LSD and spiders’ webs.

LSD (lysergic acid diethylamide) is commonly known to cause hallucinations in humans.

Alongside concerns such as putting huge numbers of people out of work, spoofing identity, and affecting democracy through fake news is the concern that AI will hallucinate and so provide misinformation, and just tell plain falsehoods. But AIs like LLMs haven’t taken LSD – they are just identifying and weighting the erroneous data they were supplied. The problem is that they learn – like a child learns – from their experience of the world. LLMs and reinforcement-learning AI are a kind of modern-day Pinocchio, led astray by each element of language or each photo they experience.

Pinocchio could probably pass the Turing Test, which famously asks “can a machine pass itself off as a human?”

The problem with the Turing Test is that it accepts a fake human – it does not demand humanity or human-level responses. In response, philosopher John Searle’s “Chinese Room Argument” from 1980 argues something different. Imagine yourself in a room alone, following a computer program for responding to Chinese characters slipped under the door. You know nothing of Chinese, and yet by following the program for manipulating the symbols and numerals you send appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly assume you speak Chinese. Your only experience of Chinese is the symbols you receive – is that enough?

Our Pinocchios are just machines locked inside the room of silicon they inhabit. They can only speak Chinese by following rules from the program they were given – in our case, the exposure of Pinocchio’s neural network to the data it was fed in training.
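
To make Searle’s point concrete, here is a toy sketch (the rulebook below is entirely contrived): the program produces fluent-seeming Chinese replies by pure symbol matching, with no understanding anywhere in the loop.

```python
# A contrived rulebook: symbol patterns in, symbol strings out.
RULEBOOK = {
    "你好吗": "我很好，谢谢",    # "How are you?" -> "I am fine, thanks"
    "你会说中文吗": "会一点",    # "Can you speak Chinese?" -> "A little"
}

def room(symbols_under_door: str) -> str:
    # The "person in the room" only matches shapes against rules;
    # no meaning is ever consulted.
    return RULEBOOK.get(symbols_under_door, "请再说一遍")  # "Please repeat"

print(room("你好吗"))  # looks fluent from outside the door
```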

For an LLM, or any ML solution, their “program” is based on the rules embedded in the data they have ingested, compared, quantified and explored within their networks and pathways. LLM Pinocchio is built from documents gleaned from the internet. This is impressive because language is not just words but “a representation of the underlying complexity” of the world, observes Percy Liang, a professor at Stanford University – except where it isn’t, I would argue.

Take the words “love” or “pain” – what do they actually mean? No matter how much you read, only a human can experience these emotions. Can anything other than a human truly understand pain?
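
One way to see this: to a language model, a word is held only as a vector of numbers. A contrived illustration (these toy vectors are made up, not drawn from any real model):

```python
import math

# Toy word vectors (invented for illustration): to the machine,
# a word is only a point in space, not an experience.
EMBEDDINGS = {
    "love": [0.8, 0.1, 0.3],
    "pain": [0.2, 0.9, 0.4],
    "ache": [0.3, 0.8, 0.5],
}

def cosine(a, b):
    # How "similar" two words are to the model: pure geometry.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "pain" sits near "ache" in vector space, but nothing here hurts.
print(round(cosine(EMBEDDINGS["pain"], EMBEDDINGS["ache"]), 3))
```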

Or, another way: as Wittgenstein argued, can a human know what it is to be a lion – and could a lion ever explain that to a human? Can our Pinocchios ever know what it is to be a human?

But worse – how can a non-lion ever truly know whether it has managed to simulate being a lion? How can the LLM police itself when it has never experienced our reality, our lives, our culture, our way of being? It will never be able to know whether it is tripping on an LSD false-world or the real, expressed and experienced world.

If you don’t believe in the partiality of written and recorded data, then think of the following example (sorry about this): visiting the restroom. We all do it, but our LLM Pinocchio will never really know that. Nobody ever does it in books, on TV or in movies (except in comedy), and very seldom in written documents except medical textbooks. Yet we all experience it, we all know about it as an experience, but no LLM will have anything to say on it – except from a medical perspective.

This is sometimes called the frame problem. And it is easy to reveal how much context is involved in language (less so in other forms of data, which nonetheless have similar problems).

Take another example – imagine a man and a woman. The man says “I am leaving you!” The woman asks “Who is she?” You instinctively know what happened, what it means, and where it fits in social convention. LLMs can answer questions within the scope of human imagining and human writing – not from their own logic or understanding. My 1-year-old experiences the world and lives within it (including lots of defecating); an LLM does not.

Pinocchios can learn from high-quality, quantified and clear data (e.g. playing Go or Atari video games) or from poor-quality data (e.g. most data in the real world of business and enterprise). Real-world data, like real-world language, is always culturally situated. Choices are made about what to keep; sensors are designed to capture what we believe should be recorded. For example, in seventeenth-century UK death records (around the time of the plague) you could die of excessive drinking, fainting in the bath, Flox, being Found dead in street, Grief, or HeadAche…

So now we need to think about what world the LLM or AI does live in… and so we turn back to Spiderman – or rather back to the spider’s web of connections in the crazy multiverse the movie depicts.

LLMs and many other generative AIs learn from a spider’s web of data.

At the moment, most people talk about AI and LLMs as a “product” – a thing with which we interact. We need to avoid this firm/product-centric position (Pujadas et al., 2023) and instead think of webs of services within an increasingly complex API-AI economy.

In reality, LLMs, ML and the like are services – with an input (the training data and stream of questions) and an output (answers). This makes them perfectly amenable to integration into the digital infrastructure of cloud-based services which underpins our modern economy. This is where my team’s research is leading.

We talk about Cloud Service Integration as the modern-day enterprise development approach in which these Pinocchios are woven and configured to provide business services through ever more API-connected services. We have seen an explosion of this type of cloud service integration in the last decade, as cloud computing has reduced the latency of API calls such that multiple requests can occur within a normal transaction (e.g. opening a webpage can involve a multitude of API calls to a multitude of different service companies, who themselves call upon multiple APIs). The result is a spider’s web of connected AI-enabled services taking inputs, undertaking complex processing, and providing outputs. Each service, though, is trained on data from that service’s past experience (which may be limited or problematic), and each service’s output drives the nature of the next.
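
As a minimal sketch of what I mean (the endpoints and service names here are hypothetical, purely for illustration), a single business request can quietly fan out across a chain of AI services, each trained on its own historical data:

```python
import requests  # hypothetical endpoints below; illustrative only

def classify(text: str) -> str:
    # First hop: an external ML classification service.
    r = requests.post("https://api.classifier.example.com/v1/classify",
                      json={"text": text}, timeout=2)
    return r.json()["label"]

def summarise(text: str, label: str) -> str:
    # Second hop: an LLM service, which may itself call further
    # services behind its own API.
    r = requests.post("https://api.llm.example.com/v1/summarise",
                      json={"text": text, "domain": label}, timeout=2)
    return r.json()["summary"]

def handle_request(text: str) -> dict:
    # One "transaction" chains two AI services; the first service's
    # output (shaped by its own, possibly problematic, training data)
    # becomes the second service's input.
    label = classify(text)
    return {"label": label, "summary": summarise(text, label)}
```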

So, to end, my worry is not that a rogue AI trips out on LSD… rather that we build an API-AI economy in which it is simply impossible to identify hallucinations, bias and unethical practices within the potentially thousands of different Pinocchios in the spider’s web of connected, interlinked services that forms such algorithmic infrastructure.

Thank you.

© Will Venters, 2023.


[1] Statement on AI Risk | CAIS (safe.ai)

[2] Pujadas, Valderrama and Venters (2023) Forthcoming presentation at the Academy of Management Conference, Boston, USA.

Spiderman image (cc): https://commons.wikimedia.org/wiki/File:Spiderman.JPG by bortescristian used with thanks.

The 5-Es of AI potential: What do executives and investors need to think about when evaluating Artificial Intelligence?

I spent last week in Berlin as part of a small international delegation of AI experts convened by the Konrad-Adenauer Foundation[1]. In meetings with politicians, civil servants and entrepreneurs, over dinners, conferences and a meeting in the Chancellery[2], we discussed in detail the challenges faced in developing AI businesses within Germany.

A strong theme was the difference between AI as a “thing” and AI as a “component”. Within most commercial sales pitches, AI is a “thing” developed by specialist AI businesses to be evaluated for adoption. Attention is focused on what I will term efficacy. Such efficacy aligns with pharmacology’s definition – “the performance of an intervention under ideal and controlled circumstances” – and is contrasted with effectiveness, which “refers to its performance under ‘real-world’ conditions”[3]. AI efficacy is demonstrated through sales-pitch presentations based on specially tagged data or on human-selected data-sets honed for the purpose.

As a “component”, however, AI only comes into being when it is incorporated into real-world, consequential and ever-evolving business processes. To be effective, not just efficacious, AI must bring together real data sources in real-world physical technology (usually involving cloud services, complex networking and physical devices) for consequential action. AI “components” then become part of a complex digital ecosystem within the Niagara-like flow of real businesses, rather than a “thing” isolated from it. Since business processes, data standards, sensors and devices evolve and change, the AI must evolve as well while continuing to meet the needs of this flow.

To be effective (not merely efficacious), AI components must also be the following (see the checklist sketch after this list):

  • efficient in terms of energy and time (providing answers sufficiently quickly as to be useful);
  • economic in terms of cost-benefit for the company (particularly as the cost of human tagging of training data can be extreme);
  • ethical in making correct moral judgements in the face of bias in data and output, ensuring effective oversight of the resultant process, and transparency in the way the algorithm works. For example, recent research shows image-classifier algorithms may work by unexpected means (for example identifying horse pictures from copyright tags, or train types by the rails) – a significant problem when new images are introduced;
  • established in that it will continue to run long-term without disruption to real-world business processes and data-sets.
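
A rough way to picture the framework (this code is my own sketch, not part of any standard): treat the Es as a pre-adoption checklist and gate any move into core processes on all of them.

```python
from dataclasses import dataclass, fields

@dataclass
class FiveEs:
    effective: bool    # performs under real-world, evolving conditions
    efficient: bool    # answers quickly enough, at acceptable energy cost
    economic: bool     # cost-benefit stacks up (incl. data-tagging costs)
    ethical: bool      # bias, oversight and transparency addressed
    established: bool  # runs long-term without disrupting core processes

def ready_for_core_processes(check: FiveEs) -> bool:
    # The post's rule of thumb: only move AI into core business
    # processes once all five Es are achieved.
    return all(getattr(check, f.name) for f in fields(check))
```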

The final two of these Es are particularly important: Ethical because business data is usually far from pure and clean, and will likely include many biases; and Established because any process delay or failure can cause pile-ups and overflow into other processes, and thus cause disaster. In my opinion, only if all these 5 Es are achieved should a business move AI into core business processes.

For business leaders seeking to address these Es, the challenges will lie not in acquiring PhDs in AI algorithms but in (1) hiring skilled business analysts with knowledge of AI’s opportunities but also of real-world IT challenges, (2) hiring skilled Cloud-AI implementers who can ensure these Es are met in a production environment, and (3) appointing AI ethics people focused on ensuring that bias, data-protection law and poor data quality do not lead to poor, ineffective AI. Given the significant competition for AI skills, digital transformation skills and cloud skills[4], this will not be easy.

So while it is fun to see interesting whizz-bang demos of AI products at industry AI conferences like those in Berlin this week, to my mind executives should remain mindful that really harnessing the potential of AI represents a much deeper form of digital transformation. Hopefully my 5 Es will aid those navigating such transformation.

(C) 2019 W. Venters

[1] https://www.kas.de/ also https://www.kas.de/veranstaltungen/detail/-/content/international-perspectives-on-artificial-intelligence

[2] https://en.wikipedia.org/wiki/Federal_Chancellery_(Berlin)

[3] Efficacy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3912314/ Singal AG, Higgins PD, Waljee AK. A primer on effectiveness and efficacy trials. Clin Transl Gastroenterol. 2014;5(1):e45. Published 2014 Jan 2. doi:10.1038/ctg.2013.13. I acknowledge drawing on Peter Checkland’s SSM for 3 Es (Efficacy, Efficiency and Effectiveness) in systems thinking.

[4] Venters, D. W., Sorensen, C., and Rackspace. 2017. “The Cost of Cloud Expertise,” Rackspace and Intel.


Artificial Intelligence and human work.

“The best computer is a man, and it’s the only one that can be mass-produced by unskilled labour.” (Wernher von Braun)

Last night I began to think further about the role of AI and humans in society while attending Future Advocacy’s launch of a report on “Maximising the opportunities and minimising the risks of artificial intelligence in the UK”. While the report is a very useful contribution which I recommend, my friend Rose Luckin[1] (Professor in Education Technology at UCL) rightly criticised its lack of specific focus on improving education, and pointed out that our current education strategy centres on teaching children things computers do really well (basic maths, repetition, remembering things) rather than those AI will struggle with – creativity, critical thinking and so on. This left me wondering what work humans are going to provide, and whether we really understand the skills requirements of a world with AI.

In thinking about this I recalled the quote from Wernher von Braun, the German rocket scientist: “The best computer is a man, and it’s the only one that can be mass-produced by unskilled labour.” Since the onset of the industrial revolution, mechanisation has replaced human skill and, as Prof Murray Shanahan[2] said last night, has already replaced many jobs. After all, only around 5% of us work in agriculture today. It is therefore not a question of whether, but of the degree to which, new AI technology will replace jobs – and of the economic efficiency of that replacement.

There are well-rehearsed arguments about the loss of jobs, and plenty of books have been written on the subject[3]. Some jobs are clearly at risk, such as professional driving in the face of self-driving technology[4]. Other jobs are safer as they involve complex, unusual actions – plumbing, for example, is messy, contingent and complex (and Prof Shanahan argued this might be among the last to go).

What is lacking, however, is a discussion of the new jobs that AI will create. Throughout history, we have underestimated the jobs created by digital technology. In 1943 IBM’s Thomas Watson predicted a worldwide market of five computers, and in the 1980s people laughed at Bill Gates’ vision of a computer in every home. Today we have spending forecasts for IT in the trillions[5]. With Bank of America anticipating that the “robots and AI solutions market will grow to US$153Bn by 2020”[6], it is clear that disruptive innovation (Christensen, 1997) through these advanced algorithms will have a strong impact in creating new, unimagined opportunities.

Since the rise of the industrial revolution we have created new jobs to replace those lost as people stopped working on farms and in factories: our grandparents would hardly imagine so many baristas, chefs, landscape gardeners, software engineers, financiers and marketers within modern society. What is interesting, then, is how AI might enhance and expand existing jobs, and create new ones. For instance, an AI-supported lawyer might handle more cases, reducing the lawyer’s fees while maintaining their wages. This reduction may well mean more people can access the law, rather than reducing the work for lawyers[7]. Similarly, we might imagine interior decorators “virtually” visiting our homes and recommending tasteful designs using AI and online stores. While I, like many others, am not currently prepared to pay designer’s fees for my small London home, if a store offered the service for a low fee I might well jump at the chance – so creating new jobs in this area.

In this way, AI can offer huge efficiency savings which we should not necessarily be scared of. This is not, however, to downplay the risks to society – particularly as the distribution of this value may be inequitable, with low-paid, low-skilled employees most at risk. If, however, we can ensure that those unable to capitalise on this opportunity aren’t left behind, then I am cautiously optimistic. We should also be aware that AI will likely create low-paid, low-skilled jobs as well. Someone will need to hold the 3D camera in my house for the AI designer to work. Someone will need to deliver parcels to my house for Amazon. Someone is needed to service the computers or clean up the data needed by the AI algorithm. And someone will need to make us all great coffee.

I am not trying to present a Utopian vision here – clearly there will be problems. But it is not the end of work either. After all, society has been very good at creating new work that involves sitting in front of computers shuffling files, writing text, and editing spreadsheets and PowerPoints – for people like me. Further, as Wernher von Braun’s quote reminds us, we humans are extremely good value in providing some extremely important intelligent activities: dealing with emotion and having empathy, thinking creatively, interacting with other humans, and understanding our human society and traditions. It will be a very long time, if ever, before any AI can provide such intelligence. The problem is that we often underestimate the importance of these in modern work, downplaying their significance in modern economic enterprise and thus overplaying the value that technocratic, automated AI might provide.

(This blog is an opinion piece based on personal musings rather than report on research)

Christensen, C. M. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Harvard Business Press.

[1] https://iris.ucl.ac.uk/iris/browse/profile?upi=RLUCK37

[2] Prof Shanahan has a new book out which looks interesting: https://mitpress.mit.edu/books/technological-singularity

[3] E.g. The Rise of the Robots (Martin Ford)

[4] This is particularly pertinent for industrial driving such as farming and mining where self-driving technology is arriving already http://www.digitaltrends.com/cool-tech/self-driving-tractors/

[5] http://www.gartner.com/technology/research/it-spending-forecast/

[6]https://www.bofaml.com/content/dam/boamlimages/documents/PDFs/robotics_and_ai_condensed_primer.pdf

[7] For a full analysis of this debate read https://www.amazon.co.uk/Future-Professions-Technology-Transform-Experts/dp/0198713398  or listen to the podcast of their talk at the LSE

Image (cc) Rolf Obermaier – thanks!

Artificial intelligence is hard to see – Medium

A great article discussing the impact of AI on society and the risks involved, in the context of the debate over Nick Ut’s Pulitzer Prize-winning picture being censored by Facebook’s AI systems.

Why we urgently need to measure AI’s societal impacts

Click Here: Artificial intelligence is hard to see – Medium

What can Artificial Intelligence do for business?

I am joining a panel tomorrow at the AI-Summit in London, focused on practical Artificial Intelligence (AI) for business applications. I am to be asked the question “What can Artificial Intelligence do for business?”, so by way of preparation I thought I should try to answer the question on my blog.

Perhaps we can break the question down – first considering the corollary question of “what can’t AI do for business?” even if its cognitive potential matches or exceeds that of a human, then discussing “what can AI do for businesses practically today?”

What would happen if we did succeed in developing AI with significant cognitive potential (of which IBM’s Watson provides a foretaste)? Let’s undertake a thought experiment. Imagine that we have AI software (Fred) which is capable of matching or exceeding human-level intelligence (cognitively defined), but which obviously remains locked inside the prison of its computer body. What would Fred miss that might limit his ability to help the business?

Firstly, much of business is about social relationships – those attending the AI-Summit have decided that being there offers something which reading the Internet does not: perhaps it is the herd mentality of seeing what others are doing, perhaps the subtle clues, perhaps the serendipitous conversations, or perhaps the building of trust such that unwritten knowledge is shared. Fred would likely be absent from this – even if he were given a robotic persona, it is unlikely it would fit in with the subtle social activity needed to navigate the drinks reception.

Second, Fred is necessarily backward-looking, gleaning his intelligence and predictive capacity from processing the vast informational traces of human existence available from the past (or present). Yet we humans, and business in general, are forward-looking – we live by imagined futures as much as remembered pasts. How well could Fred handle prediction when the world can change in an instant (remember the sad day of 9/11)? Perhaps quicker than us (processing the immediate tweets), but perhaps wrongly – not seeing the mood shifts, changes and immediate actions. Who knows?

My third point is derived from the famous Hawthorne experiments, which showed that human behaviour changes when we are observed. Embedding Fred into an organisation will change the organisation’s social dynamic and so change the organisation. Perhaps people will stop talking where Fred can hear, or talk differently when they know he is watching. Perhaps they will be more risk-averse, worried Fred would question the rationality of their decisions. Perhaps they would be more scientific – seeking to mimic Fred – and lose their aesthetic, intuitive ideas? Perhaps they will find it hard to challenge, debate and argue with Fred – debate that is necessary for businesses to arrive at decisions in the face of uncertainty? Or perhaps Fred will deny the wisdom of the crowd (Surowiecki, 2005) by over-representing one perspective, when the crowd may better reflect humans’ likely future response?

Or perhaps, as Nicholas Carr suggests (Carr, 2014), such machines will prove so useful and intelligent that they dull our interest in the business, erode our attentiveness and deskill the CxOs in the organisation – just as it has been suggested flying on autopilot can do to pilots.

Finally (and arguably most importantly, since those who believe in AI will likely dismiss the earlier points as simplistic, arguing that AI will overcome them by brute force of intelligence), Fred’s intelligence would be based on data gleaned from a human world, and “raw data is an oxymoron, data are always already cooked and never entirely raw” (Gitelman and Jackson 2013, following Bowker 2005 – cited in Kitchin, 2014). Fred’s data is partial: decisions were made as to what was, and wasn’t, counted and recorded, and how it was recorded (Bowker & Star, 1999). Our data reflects our social world, and Fred is likely to over-estimate the benign nature of this representation (or its extreme representations) of the data. While IBM’s Watson can reflect human knowledge in games such as Jeopardy, its limited ability to question the provenance of data without real human experience may limit its ability to act humanly – and in a world which continues to be dominated by humans this may be a problem. I had the pleasure of attending a talk two weeks ago by Prof Ross Koppel, who discusses this challenge in detail in relation to health-care payments data. AI is founded upon an ontology of scientific rationality – by far the most dominant ontological position today. This position argues that science, and statistical inference from data, presents the truth (a single unassailable truth at that). Such rationality denies human belief, superstition and irrationality – yet these continue to play a part in the way humans act and behave. Perhaps AI needs to explore these philosophical assumptions further, as Winograd and Flores famously did around AI three decades ago (Winograd & Flores, 1986).

More broadly, when evaluating any new technology’s impact on business, we should try to be critical of “solutionism”, which argues that business problems will be solved by one silver bullet. Instead we should evaluate each technology through a range of relevant filters – asking questions about its likely economic, social and political distortions – and from this evaluate how it can truly add value to business. In exploiting AI today, at its most basic, businesses should start by focusing on the low-hanging fruit. AI doesn’t have to be that intelligent to provide huge benefits. Consider how Robotic Process Automation can help companies (e.g. O2) deal with their long tail of boring, repetitive processes (Willcocks & Lacity, 2016) – for example, “swivel chair” functions where people extract data from one system (e.g. email), undertake simple processes using rules, then enter the output into a system of record such as an ERP (Willcocks & Lacity, 2016). As such processes involve only a modicum of intelligence, and are repetitive and boring for humans, they offer cost opportunities (see Blue Prism as an example of this type of solution) – particularly as one estimate suggests such automation costs around $7,500 per annum per FTE compared to $23k per annum for an offshore salary (Willcocks and Lacity 2016, quoting Operationalagility.com).
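
A minimal sketch of that “swivel chair” pattern (the extraction rule and the ERP hook below are hypothetical stand-ins): rule-friendly cases are processed automatically, and anything unexpected falls back to a human.

```python
import re
from typing import Optional

def extract_order(email_body: str) -> Optional[dict]:
    # Rule-based extraction: pull an order ID and quantity from a
    # semi-structured email - the step a human would do by reading it.
    m = re.search(r"order (\w+): qty (\d+)", email_body, re.IGNORECASE)
    if m is None:
        return None  # unexpected format: leave it for a human
    return {"order_id": m.group(1), "quantity": int(m.group(2))}

def post_to_erp(record: dict) -> None:
    # Stand-in for entering the output into the system of record.
    print(f"ERP <- {record}")

for body in ["Order A17: qty 3", "Please call me about my invoice"]:
    record = extract_order(body)
    if record is not None:
        post_to_erp(record)  # only the routine, rule-friendly cases
```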

Obviously AI might move up the chain to deal with more significant business process issues – however, at each stage we are reminded that CxOs will need leadership, and IT departments will need specific skills, to ensure that the AI makes sensible decisions and reflects business practices. Business analysts will need to learn about AI such that they can act as sensible teachers – identifying risks that AI is unlikely to notice, and steering the AI to act sensibly. Finally, as the technology improves, organisational and business sociologists will be needed to wrestle with the challenges identified above.

© Will Venters

Bowker, G., & Star, S. L. (1999). Sorting things out: Classification and its consequences. Cambridge, MA: MIT Press.

Carr, N. (2014). The Glass Cage: Automation and Us: WW Norton & Company.

Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their consequences: Sage.

Surowiecki, J. (2005). The wisdom of crowds: Anchor.

Willcocks, L., & Lacity, M. C. (2016). Service Automation: Robots and the future of work. Warwickshire, UK: Steve Brookes Publishing.

Winograd, T., & Flores, F. (1986). Understanding computers and cognition. Norwood, NJ: Ablex.

(Image (cc) from Jorge Barba – thanks)