[Academic Call]: AI and the Artificialities of Intelligence.

I am really excited to be a co-chair of the following academic workshop at ESSEC & Université Paris Dauphine-PSL. Please join us if you can!

AI and the Artificialities of Intelligence: What matters in and for organizing?

Call for papers: 14th Organizations, Artifacts & Practices (OAP) Workshop #OAP2024

When: June 6th and 7th 2024

Where: Paris (ESSEC & Université Paris Dauphine-PSL). Face-to-face event.

Co-chairs:

Ella Hafermalz (Vrije Universiteit Amsterdam), François-Xavier de Vaujany (Université Paris Dauphine-PSL), Aurélie Leclercq-Vandelannoitte (CNRS, LEM, IESEG, Univ. Lille), Julien Malaurent (ESSEC), Will Venters (LSE) and Youngjin Yoo (Case Western Reserve University)

This 14th OAP workshop, jointly organized by Université Paris Dauphine-PSL (DRM), ESSEC and the ESSEC Metalab, will be an opportunity to return to the issue of Artificial Intelligence and its relationship with the history, philosophy and politics of management and organization.

Artificial Intelligence now pervades discussions about the future of organizations and societies. AI is expected to bring deep changes to work practices and our ways of living, and utopian and dystopian narratives abound. Yet AI is far from a fleeting trend; rather, it constitutes a collection of techniques with a rich history dating back to the 1950s. AI serves as a broad framework deeply intertwined with ideals of rationalism and representationalism – much like the broader digital landscape it epitomizes. The aspiration in the realm of AI is that self-sufficient techniques will progressively and continuously enhance our comprehension of the world. By means of rules and the use of massive amounts of data, learning capabilities are expected to make AI tools ever more likely to expose and elucidate the underlying realities of the processes they were initially designed to represent. Increasingly, AI transcends its role as an ‘unraveller’ of complexity in the present. It discloses our future – what will happen in the next seconds, days, months, years or centuries. It arguably encompasses the entirety of our potential futures.

As well as having a certain hold on our future(s), these powerful tools are impacting how we think. Our cognition and understanding of the world are dramatically extended, amplified and revolutionized, but also individualized, siloed, and cut off from traditional social processes of interaction and sensemaking. In this vein, the gap between our ways of acting (in an embodied way) and our ways of thinking grows. The dualism at the heart of representationalism, although more and more visual, narrative and corporeal, becomes central and even foundational. Part of our cognition – and our social practice of gaining and sharing knowledge – is delegated to AI.

These artificialities of intelligence (in particular collective intelligence) will be at the heart of this 14th OAP workshop in Paris. Behind and beyond AI as a set of codes, norms, standards and massive uses of data, our intelligence is more and more artificialized. Our collective intelligence relies on a representationalist philosophy which starts from a problem (a request) submitted to generative AI tools such as Bard or ChatGPT, which then offer a relevant narrative likely to answer brilliantly and confidently. Co-problematization, inquiry, concerns, openness – in short, life – are not at all part of this equation. This artificial organizing process will be central in our discussions.

In particular, we welcome abstracts likely to cover the following topics:

  • Artificialities of intelligence as organization and organizationality;
  • Historical perspectives on digitality and AI;
  • Historical perspectives on calculative techniques, cybernetics, AI and digitality in general, in relationship with management and organizationality;
  • Revisiting and problematizing traditional assumptions about knowledge sharing and communities of practice;
  • Ethnographies, collaborative ethnographies and auto-ethnographies about AI in organizations;
  • Pragmatist inquiries about collective intelligence;
  • Critiques of cognitivism in organization studies and management, e.g., strategic management, accounting, marketing, logistics and MIS;
  • Explorations of the relationships between new managerial techniques and AI;
  • Temporal and spatial views about AI and artificialities of intelligence;
  • Phenomenological and post-phenomenological perspectives about AI in organizations;
  • Process perspectives on the artificiality of intelligence;
  • Critical views of AI and the artificialities of intelligence;
  • AI and the metamorphosis of scientific practices;
  • AI and the dynamics of scientific communities and scientific paradigms;
  • AI and its political dimension in organizations.

Of course, our event will also be open to more traditional OAP ontological discussions around the time, space, place and materiality of organizing in a digital era, e.g., papers discussing ontologies, sociomateriality, affordances, spacing, emplacement, atmosphere, events, becoming, practices, flows, moments, existentiality, verticality and instants in the context of our digital world.

Please note that OAP 2024 will include a pre-event, the Dauphine Philosophy Workshop, also hosted by Université Paris Dauphine-PSL on June 6th 2024 and entitled: “Beyond judgement and legitimation: reconceptualizing the ontology of institutional dynamics in MOS”.

Those interested in our pre-OAP event and our OAP workshop must submit an extended abstract of no more than 1,000 words to workshopoap@gmail.com. The abstract must outline the applicant’s proposed contribution to the workshop. The proposal must be in .doc/.docx/.rtf format and should contain the author’s/authors’ names as well as their institutional affiliations, email address(es) and postal address(es). The deadline for submissions is February 3rd, 2024 (midnight CET).

Authors will be notified of the committee’s decision by February 28th, 2024.

Please note that OAP 2024 will take place only onsite this year.

There are no fees associated with attending this workshop.

Organizing committee: Hélène Bussy-Socrate (PSB), François-Xavier de Vaujany (Université Paris Dauphine-PSL, DRM), Albane Grandazzi (GEM), Aurélie Leclercq-Vandelannoitte (CNRS, LEM, IESEG, Univ. Lille), Sébastien Lorenzini (Université Paris Dauphine-PSL, DRM) and Julien Malaurent (ESSEC).

REFERENCES

Aspray, W. (1994). The history of computing within the history of information technology. History and Technology, an International Journal, 11(1), 7-19.

Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3).

Chia, R. (1995). From modern to postmodern organizational analysis. Organization Studies, 16(4), 579-604.

Chia, R. (2002). Essai: Time, duration and simultaneity: Rethinking process and change in organizational analysis. Organization Studies, 23(6), 863-868.

Clemson, B. (1991). Cybernetics: A new management tool (Vol. 4). CRC Press.

de Vaujany, F. X., & Mitev, N. (2017). The post-Macy paradox, information management and organising: Good intentions and a road to hell? Culture and Organization, 23(5), 379-407.

de Vaujany, F. X. (2022). Apocalypse managériale. Paris: Les Belles Lettres.

Introna, L. D. (1997). Management, Information and Power: A Narrative of the Involved Manager. London: Macmillan.

Lorino, P. (2018). Pragmatism and organization studies. Oxford University Press.

Nascimento, A. M., da Cunha, M. A. V. C., de Souza Meirelles, F., Scornavacca Jr, E., & De Melo, V. V. (2018). A Literature Analysis of Research on Artificial Intelligence in Management Information System (MIS). In AMCIS 2018 Proceedings.

Öztürk, D. (2021). What Does Artificial Intelligence Mean for Organizations? A Systematic Review of Organization Studies Research and a Way Forward. The Impact of Artificial Intelligence on Governance, Economics and Finance, Volume I, 265-289.

Pickering, A. (2002). Cybernetics and the mangle: Ashby, Beer and Pask. Social Studies of Science, 32(3), 413-437.

Simpson, B., & Revsbæk, L. (Eds.). (2022). Doing Process Research in Organizations: Noticing Differently. Oxford University Press.

Thompson, N. A., & Byrne, O. (2022). Imagining futures: Theorizing the practical knowledge of future-making. Organization Studies, 43(2), 247-268.

Vesa, M., & Tienari, J. (2022). Artificial intelligence and rationalized unaccountability: Ideology of the elites? Organization, 29(6), 1133-1145.

Wagner, G., Lukyanenko, R., & Paré, G. (2022). Artificial intelligence and the conduct of literature reviews. Journal of Information Technology, 37(2), 209-226.

Yates, J. (1993). Control through communication: The rise of system in American management (Vol. 6). JHU Press.

The 5-Es of AI potential: What do executives and investors need to think about when evaluating Artificial Intelligence?

I spent last week in Berlin as part of a small international delegation of AI experts convened by the Konrad-Adenauer Foundation[1]. In meetings with politicians, civil servants and entrepreneurs, over dinners, conferences and a meeting in the Chancellery[2], we discussed in detail the challenges faced in developing AI businesses within Germany.

A strong theme was the difference between AI as a “thing” and AI as a “component”. Within most commercial sales-pitches AI is a “thing” developed by specialist AI businesses to be evaluated for adoption. Attention is focused on what I will term efficacy. Such efficacy aligns with the pharmacological definition – “the performance of an intervention under ideal and controlled circumstances” – and is contrasted with effectiveness, which “refers to its performance under ‘real-world’ conditions”[3]. AI efficacy is demonstrated through sales-pitch presentations based on specially tagged data or human-selected data-sets honed for the purpose.

As a “component”, however, AI only becomes effective when it is incorporated into real-world, consequential and ever-evolving business processes. To be effective, not just efficacious, AI must bring together real data sources within real-world physical technology (usually involving cloud services, complex networking and physical devices) for consequential action. AI “components” then become part of a complex digital ecosystem within the Niagara-like flow of real business rather than a “thing” isolated from it. Since business processes, data standards, sensors and devices evolve and change, the AI must evolve as well, while continuing to meet the needs of this flow.

To be effective (not just efficacious), AI components must also be:

  • efficient in terms of energy and time (providing answers sufficiently quickly to be useful);
  • economic in terms of cost-benefit for the company (particularly as the cost of human tagging of training data can be extreme);
  • ethical in making correct moral judgements in the face of bias in data and output, ensuring effective oversight of the resultant process, and ensuring transparency in the way the algorithm works. For example, recent research shows image-classifier algorithms may work by unexpected means (for example identifying horse pictures from copyright tags, or train types from the rails), which can prove a significant problem when new images are introduced – a minimal sketch of a check for this follows below;
  • established in that it will continue to run long-term without disruption to real-world business processes and data-sets.
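To make the “ethical” point about spurious features concrete, here is a minimal, hypothetical sketch of an occlusion-style sanity check – not from the Berlin discussions, and the flawed_model below is my own deliberately broken stand-in that keys on a watermark corner, much like the horse-picture classifiers mentioned above:

```python
import numpy as np

def flawed_model(image):
    """Toy 'classifier' that (wrongly) keys on a bright watermark corner."""
    return "horse" if image[-8:, :8].mean() > 0.9 else "not horse"

def occlusion_check(image, patch=8):
    """Flag models whose prediction flips when one small patch is greyed out --
    a warning sign of reliance on a single (possibly spurious) region."""
    baseline = flawed_model(image)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0.0  # grey out one patch
            if flawed_model(masked) != baseline:
                return True   # prediction hangs on this one patch
    return False

img = np.zeros((32, 32))
img[-8:, :8] = 1.0            # the 'copyright tag' watermark
print(flawed_model(img))      # -> horse (for the wrong reason)
print(occlusion_check(img))   # -> True: the check catches it
```

Real tools for this (saliency maps, layer-wise relevance propagation) are more sophisticated, but the logic is the same: probe whether the answer depends on the content or on an artefact of the training data.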

The final two of these Es are particularly important: ethical because business data is usually far from pure and clean, and will likely include many biases; and established because any process delay or failure can cause pile-ups and overflows into other processes, and thus cause disaster. In my opinion, only if all these 5-Es are achieved should a business move AI into core business processes.

For business leaders seeking to address these Es, the challenges will lie not in acquiring PhDs in AI algorithms but instead in (1) hiring skilled business analysts with knowledge of AI’s opportunities but also of real-world IT challenges, (2) hiring skilled cloud-AI implementors who can ensure these Es are met in a production environment, and (3) appointing AI ethics people to ensure that bias, data-protection laws and poor data quality do not lead to poor, ineffective AI. Given the significant competition for AI skills, digital transformation skills and cloud skills [4], this will not be easy.

So while it is fun to see whizz-bang demos of AI products at industry AI conferences like those in Berlin last week, executives should remain mindful that really harnessing the potential of AI represents a much deeper form of digital transformation. Hopefully my 5-Es will aid those navigating such transformation.

(C) 2019 W. Venters

[1] https://www.kas.de/ also https://www.kas.de/veranstaltungen/detail/-/content/international-perspectives-on-artificial-intelligence

[2] https://en.wikipedia.org/wiki/Federal_Chancellery_(Berlin)

[3] Efficacy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3912314/ Singal AG, Higgins PD, Waljee AK. A primer on effectiveness and efficacy trials. Clin Transl Gastroenterol. 2014;5(1):e45. Published 2014 Jan 2. doi:10.1038/ctg.2013.13. I acknowledge drawing on Peter Checkland’s SSM for the 3Es (efficacy, efficiency and effectiveness) in systems thinking.

[4] Venters, W., Sorensen, C., and Rackspace. 2017. “The Cost of Cloud Expertise,” Rackspace and Intel.


The Enterprise Kindergarten for our new AI Babies? Digital Leadership Forum.

I am to be part of a panel at the Digital Leadership Forum event today discussing AI and the enterprise. In my opinion, the AI debate has become dominated by the AI technology itself and the arrival of products sold to the enterprise as “AI solutions”, rather than by the ecosystems and contexts in which AI algorithms will operate. It is to this that I intend to speak.

It’s ironic, though, that we should come to see AI in this way – as a kind of “black box” to be purchased and installed. If AI is about “learning” and “intelligence” then surely an enterprise’s “AI baby”, if it is to act sensibly, needs a carefully considered and controlled environment to help it learn? AI technology is about learning – nurturing, even – to ensure the results are relevant. With human babies we spend time choosing the books they will learn from, making the nursery safe and secure, and allowing them to experience the world in a careful, controlled manner. But do enterprises think about investing similar effort in considering the training data for their new AI? And, in particular, in considering the digital ecosystem (the kindergarten) which will provide such data?

Examples of AI success clearly demonstrate such a kindergarten approach. AlphaGo grew up in a world of well-understood problems (Go has logical rules) with data unequivocally relevant to that problem. The team used experts in the game to hone its learning, and were on hand to drive its success. Yet many AI solutions seem to be marketed as “plug-and-play”, as though exposing the AI to companies’ messy, often ambiguous, and usually partial data will be fine.

So where should a CxO be spending their time when evaluating enterprise AI? I would argue they should seek to evaluate both the AI product and their organisation’s “AI kindergarten” in which that product will grow.

Thinking about this further, we might recommend that:

  • CxOs should make sure that the data feeding the AI represents the company’s values and needs and is not biased or partial.
  • Ensure that AI decisions are taken forward in a controlled way, and that there is human oversight. Ensure the organisation is comfortable with AI decisions and that, even when they are wrong (which AI sometimes will be), they do not harm the company.
  • Ensure that the data required to train the AI is available. As AI can require a huge amount of data to learn effectively, it may be uneconomic for a single company to seek to acquire that data (see Uber’s woes in this regard).
  • Consider what would happen if the data sources for the AI degraded or changed (for example, a sensor broke, a camera was changed, data policy evolved or different types of data emerged). Who would be auditing the AI to ensure it continued to operate as required?
  • Finally, consider that the AI baby will not live alone – it will be “social”. Partners or competitors might employ similar AI which, within the wider marketplace ecosystem, might affect the world in which the AI operates (see my previous article on potential AI collusion). Famously, the interacting algorithms of high-frequency traders created significant market turbulence, dubbed the “flash crash”, when traders’ algorithms failed to understand the wider context of other algorithms interacting. Further, as AI often lacks transparency in its decision-making, this interacting network of AIs may act unpredictably and in ways poorly understood.
Image (cc) Kassandra Bay – thanks!

Artificial Intelligence and human work.

“The best computer is a man, and it’s the only one that can be mass-produced by unskilled labour.” (Wernher von Braun)

Last night I began to think further about the role of AI and humans in society while attending Future Advocacy’s launch of a report on “Maximising the opportunities and minimising the risks of artificial intelligence in the UK”. While it is a very useful contribution which I recommend, my friend Rose Luckin[1] (Professor in Education Technology at UCL) rightly criticised its lack of specific focus on improving education, and pointed out that our current education strategy centres on teaching children things computers do really well (basic maths, repetition, remembering things) rather than the things AI will struggle with – creativity, critical thinking, etc. This left me wondering what work humans are going to provide, and whether we really understand the skills requirements of a world with AI.

In thinking about this I recalled the quote from Wernher von Braun, the German rocket scientist: “The best computer is a man, and it’s the only one that can be mass-produced by unskilled labour.” Since the onset of the industrial revolution, mechanisation has replaced human skill and, as Prof Murray Shanahan[2] said last night, has already replaced many jobs. After all, only around 5% of us work in agriculture today. It is therefore not a question of whether, but of the degree to which, new AI technology will replace jobs – and of the economic efficiency of that replacement.

There are well-rehearsed arguments about the loss of jobs, and plenty of books written on the subject[3]. Some jobs are clearly at risk, such as professional driving in the face of self-driving technology[4]. Other jobs are safer as they involve complex, unusual actions – plumbing, for example, is messy, contingent and complex (and Prof Shanahan argued this might be the last to go).

What is lacking, however, is a discussion of the new jobs that AI will create. Throughout history we have underestimated the jobs created by digital technology. In 1943 IBM’s Thomas Watson reputedly predicted a worldwide market of five computers, and in the 1980s people laughed at Bill Gates’ vision of a computer in every home. Today we have spending forecasts for IT in the trillions[5]. With Bank of America anticipating that the “robots and AI solutions market will grow to US$153Bn by 2020”[6], it is clear that disruptive innovation (Christensen, 1997) through these advanced algorithms will have a strong impact in creating new, unimagined opportunity.

Since the rise of the industrial revolution we have created new jobs to replace those lost as people stopped working on farms and in factories: our grandparents would hardly have imagined so many baristas, chefs, landscape gardeners, software engineers, financiers and marketers within modern society. What is interesting, then, is how AI might enhance and expand existing jobs, and create new ones. For instance, an AI-supported lawyer might handle more cases, reducing the lawyer’s fees while maintaining their wages – a rough illustration of this arithmetic follows below. This reduction may well mean more people can access the law, rather than reducing the work for lawyers[7]. Similarly, we might imagine interior decorators “virtually” visiting our homes and recommending tasteful designs using AI and online stores. While I, like many others, am not currently prepared to pay designers’ fees for my small London home, if a store offered the service for a low fee I might well jump at the chance – so creating new jobs in this area.
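As a rough sketch of that lawyer example – the case counts and fees below are invented for illustration, not drawn from any study – the arithmetic works out as follows:

```python
# Hypothetical numbers: AI triples the lawyer's monthly throughput,
# so each case can be charged at half the fee while income still rises.
cases_before, fee_before = 10, 2000   # 10 cases/month at £2,000 each
cases_after, fee_after = 30, 1000     # AI-assisted: 30 cases/month at £1,000 each

income_before = cases_before * fee_before   # £20,000 per month
income_after = cases_after * fee_after      # £30,000 per month

print(f"Fee per case:  £{fee_before:,} -> £{fee_after:,}")        # halved for clients
print(f"Lawyer income: £{income_before:,} -> £{income_after:,}")  # up 50%
```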

In this way AI can offer huge efficiency savings, which we should not necessarily be scared about. This is not, however, to downplay the risks to society – particularly as the distribution of this value may be inequitable, with low-paid, low-skilled employees most at risk. If, however, we can ensure that those unable to capitalise on this opportunity aren’t left behind, then I am cautiously optimistic. We should also be aware that AI will likely create low-paid, low-skilled jobs as well. Someone will need to hold the 3D camera in my house for the AI designer to work. Someone will need to deliver parcels to my house for Amazon. Someone is needed to service the computers or clean up the data needed by the AI algorithm. And someone will need to make us all great coffee.

I am not trying to present a utopian vision here – clearly there will be problems. But it is not the end of work either. After all, society has been very good at creating new work that involves sitting in front of computers shuffling files, writing text, and editing spreadsheets and PowerPoints – for people like me. Further, as Wernher von Braun’s quote reminds us, we humans are extremely good value in providing some extremely important intelligent activities: dealing with emotion and having empathy, thinking creatively, interacting with other humans, understanding our human society and traditions. It will be a very long time, if ever, before any AI can provide such intelligence. The problem is that we often underestimate the importance of these in modern work, downplaying their significance in the modern economic enterprise and thus overplaying the value that technocratic, automated AI might provide.

(This blog is an opinion piece based on personal musings rather than report on research)

Christensen, C. M. (1997). The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Press.

[1] https://iris.ucl.ac.uk/iris/browse/profile?upi=RLUCK37

[2] Prof Shanahan has a new book out which looks interesting: https://mitpress.mit.edu/books/technological-singularity

[3] E.g. The Rise of the Robots (Martin Ford)

[4] This is particularly pertinent for industrial driving such as farming and mining where self-driving technology is arriving already http://www.digitaltrends.com/cool-tech/self-driving-tractors/

[5] http://www.gartner.com/technology/research/it-spending-forecast/

[6] https://www.bofaml.com/content/dam/boamlimages/documents/PDFs/robotics_and_ai_condensed_primer.pdf

[7] For a full analysis of this debate read https://www.amazon.co.uk/Future-Professions-Technology-Transform-Experts/dp/0198713398  or listen to the podcast of their talk at the LSE

Image (cc) Rolf obermaier – thanks!

Artificial intelligence is hard to see – Medium

A great article discussing the impact of AI on society, and the risks involved, in the context of the debate over Nick Ut’s Pulitzer Prize-winning picture being censored by Facebook’s AI systems.

Why we urgently need to measure AI’s societal impacts

Click Here: Artificial intelligence is hard to see – Medium

What can Artificial Intelligence do for business?

I am joining a panel tomorrow at the AI-Summit in London, focused on practical Artificial Intelligence (AI) for business applications. I am to be asked the question “What can Artificial Intelligence do for business?”, so by way of preparation I thought I should try to answer it on my blog.

Perhaps we can break the question down – first considering the corollary question of “what can’t AI do for business?”, even if its cognitive potential matches or exceeds that of a human, and then discussing “what can AI do for businesses practically today?”.

What would happen if we did succeed in developing AI with significant cognitive potential (of which IBM’s Watson provides a foretaste)? Let’s undertake a thought experiment. Imagine that we have AI software (Fred) which is capable of matching or exceeding human-level intelligence (cognitively defined), but which obviously remains locked inside the prison of its computer body. What would Fred miss that might limit his ability to help the business?

Firstly, much of business is about social relationships – those attending the AI-Summit have decided that something is available there which is not as effective via reading the Internet. Perhaps it is the herd mentality of seeing what others are doing, perhaps it is the subtle clues, perhaps the serendipitous conversations, or perhaps it is about building the trust through which unwritten knowledge is shared. Fred would likely be absent from all this – even if he were given a robotic persona, it is unlikely it would fit in with the subtle social activity needed to navigate the drinks reception.

Second, Fred is necessarily backward-looking, gleaning his intelligence and predictive capacity from processing the vast informational traces of human existence available from the past (or present). Yet we humans, and business in general, are forward-looking – we live by imagined futures as much as remembered pasts. How well could Fred handle prediction when the world can change in an instant (remember the sad day of 9/11)? Perhaps quicker than us (processing the immediate tweets), but perhaps wrongly – not seeing the mood shifts, changes and immediate actions. Who knows?

My third point is derived from the famous Hawthorne experiments, which showed that human behaviour changes when we are observed. Embedding Fred into an organisation will change the organisation’s social dynamic and so change the organisation. Perhaps people will stop talking where Fred can hear, or talk differently when they know he is watching. Perhaps they will be more risk-averse – worried Fred would question the rationality of their decisions. Perhaps they would be more scientific – seeking to mimic Fred – and lose their aesthetic, intuitive ideas? Perhaps they will find it hard to challenge, debate and argue with Fred – debate that is necessary for businesses to arrive at decisions in the face of uncertainty? Or perhaps Fred will deny the wisdom of the crowd (Surowiecki, 2005) by over-representing one perspective, when the crowd may better reflect humans’ likely future response?

Or perhaps, as Nicholas Carr suggests (Carr, 2014), Fred will prove so useful and intelligent that he dulls our interest in the business, erodes our attentiveness and deskills the CxOs in the organisation – just as it has been suggested flying on autopilot can do for pilots.

Finally (and arguably most importantly, since those who believe in AI will likely dismiss the earlier pronouncements as simplistic, arguing AI will overcome them by brute force of intelligence), Fred’s intelligence would be based on data gleaned from a human world, and “raw data is an oxymoron, data are always already cooked and never entirely raw” (Gitelman and Jackson 2013, following Bowker 2005 – cited in Kitchin, 2014). Fred’s data is partial: decisions were made as to what was, and wasn’t, counted and recorded, and how it was recorded (Bowker & Star, 1999). Our data reflects our social world, and Fred is likely to over-estimate the benign nature of this representation (or its extreme representations) of the data. While IBM’s Watson can reflect human knowledge in games such as Jeopardy, its limited ability to question the provenance of data without real human experience may limit its ability to act humanly – and in a world which continues to be dominated by humans this may be a problem. I had the pleasure of attending a talk two weeks ago by Prof Ross Koppel, who discusses this challenge in detail in relation to health-care payments data. AI is founded upon an ontology of scientific rationality – by far the most dominant ontological position today. This position argues that science, and statistical inference from data, presents the truth (a single unassailable truth at that). Such rationality denies human belief, superstition and irrationality – yet these continue to play a part in the way humans act and behave. Perhaps AI needs to explore these philosophical assumptions further, as Winograd and Flores famously did for AI three decades ago (Winograd & Flores, 1986).

Finally, when evaluating any new technology’s impact on business, we should try to be critical of “solutionism”, which argues that business problems will be solved by one silver bullet. Instead we should evaluate each technology through a range of relevant filters – asking questions about its likely economic, social and political distortions – and from this evaluate how it can truly add value to business. In exploiting AI today, at its most basic, businesses should start by focusing on the low-hanging fruit. AI doesn’t have to be that intelligent to provide huge benefits. Consider how Robotic Process Automation (RPA) can help companies (e.g. O2) deal with their long tail of boring, repetitive processes (Willcocks & Lacity, 2016) – for example, “swivel chair” functions where people extract data from one system (e.g. email), undertake simple processes using rules, and then enter the output into a system of record such as an ERP (Willcocks & Lacity, 2016). As such processes involve only a modicum of intelligence, and are repetitive and boring for humans, they offer cost opportunities (see Blue Prism as an example of this type of solution) – particularly as one estimate suggests such automation costs around $7,500 per annum per FTE compared with $23k per annum for an offshore salary (Willcocks and Lacity 2016, quoting Operationalagility.com). A minimal sketch of such a swivel-chair automation follows below.
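To make the swivel-chair pattern concrete, here is a minimal, hypothetical sketch. The email format, the rule, and the post_to_erp stand-in are all my own inventions (not Blue Prism’s API or any vendor’s); real RPA products wrap this read-rule-rekey loop in monitoring, auditing and exception queues.

```python
import re

# Hypothetical email format: "Customer: Acme Amount: 450"
ORDER = re.compile(r"Customer:\s*(?P<customer>\w+)\s+Amount:\s*(?P<amount>\d+)")

def extract_order(email_body):
    """Pull the fields a human would otherwise re-key from the email."""
    match = ORDER.search(email_body)
    return match.groupdict() if match else None

def post_to_erp(record):
    """Stand-in for the write into the system of record (a real bot would call the ERP's API)."""
    print(f"ERP <- {record}")

def process(email_body):
    order = extract_order(email_body)
    if order is None:
        print("Exception queue: unreadable email, route to a human")
    elif int(order["amount"]) > 10_000:
        print("Exception queue: large order, route to a human for approval")
    else:
        post_to_erp(order)

process("Customer: Acme Amount: 450")      # simple case: posted automatically
process("Customer: Acme Amount: 50000")    # rule triggers human approval
process("Please call me about my order")   # unparseable: a human takes over
```

The design point is the modicum-of-intelligence threshold: the bot handles the high-volume trivial cases and deliberately escalates anything outside its rules, rather than guessing.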

Obviously AI might move up the chain to deal with more significant business-process issues – but at each stage we are reminded that CxOs will need leadership, and IT departments will need specific skills, to ensure that the AI makes sensible decisions and reflects business practices. Business analysts will need to learn about AI such that they can act as sensible teachers – identifying risks that AI is unlikely to notice, and steering the AI to act sensibly. Finally, as the technology improves, organisational and business sociologists will be needed to wrestle with the challenges identified above.

© Will Venters

Bowker, G., & Star, S. L. (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.

Carr, N. (2014). The Glass Cage: Automation and Us. WW Norton & Company.

Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Sage.

Surowiecki, J. (2005). The Wisdom of Crowds. Anchor.

Willcocks, L., & Lacity, M. C. (2016). Service Automation: Robots and the Future of Work. Warwickshire, UK: Steve Brookes Publishing.

Winograd, T., & Flores, F. (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex.

(Image (cc) from Jorge Barba – thanks)