The 5-Es of AI potential: What do executives and investors need to think about when evaluating Artificial Intelligence?

I spent last week in Berlin as part of a small international delegation of AI experts convened by the Konrad-Adenauer Foundation[1]. In meetings with politicians, civil servants and entrepreneurs, over dinners, conferences and a meeting in the Chancellery[2], we discussed in detail the challenges faced in developing AI businesses within Germany.

A strong theme was the difference between AI as a “thing” and AI as a “component”. In most commercial sales pitches, AI is a “thing” developed by specialist AI businesses to be evaluated for adoption. Attention is focused on what I will term efficacy. This aligns with the pharmacological definition of efficacy as “the performance of an intervention under ideal and controlled circumstances”, contrasted with effectiveness, which “refers to its performance under ‘real-world’ conditions”[3]. AI efficacy is demonstrated through sales-pitch presentations based on specially tagged data or on human-selected data-sets honed for the purpose.

As a “component”, however, AI only becomes effective when it is incorporated into real-world, consequential and ever-evolving business processes. To be effective, not just efficacious, AI must bring together real data sources and real-world physical technology (usually involving cloud services, complex networking and physical devices) for consequential action. AI “components” then become part of a complex digital ecosystem within the Niagara-like flow of real businesses, rather than a “thing” isolated from it. Since business processes, data standards, sensors and devices all evolve and change, the AI must evolve as well while continuing to meet the needs of this flow.

To be effective (not merely efficacious), AI components must also be:

  • efficient in terms of energy and time (providing answers sufficiently quickly as to be useful),
  • economic in terms of cost-benefit for the company (particularly as the cost of human tagging of training data can be extreme),
  • ethical by making correct moral judgements in the face of bias in data and output, ensuring effective oversight of the resultant process, and providing transparency about the way the algorithm works. For example, recent research shows image-classifier algorithms may work by unexpected means (identifying horse pictures from copyright tags, say, or train types from the rails). This can prove a significant problem when new images are introduced (a minimal sketch of how such spurious cues might be probed follows this list).
  • established in that it will continue to run long-term without disruption for real-world business processes and data-sets.
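To make the transparency concern concrete, here is a minimal sketch, in Python, of one way a spurious cue can be probed: hide the suspect image region and see whether accuracy collapses. The `model`, `images`, `labels` and `box` names are hypothetical placeholders, not any particular product’s API.

```python
# Minimal sketch: probing whether an image classifier leans on a
# spurious cue (e.g. a watermark/copyright-tag region) rather than
# the subject itself. All names here are hypothetical placeholders.
import numpy as np

def mask_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Blank out a suspect region, e.g. where copyright tags sit."""
    x0, y0, x1, y1 = box
    masked = image.copy()
    masked[y0:y1, x0:x1] = 0  # overwrite the region with black pixels
    return masked

def spurious_cue_drop(model, images, labels, box) -> float:
    """Accuracy drop once the suspect region is hidden.

    A large drop suggests the classifier was 'reading' the cue,
    not the horse."""
    preds = model.predict(images)  # assumed predict() -> label array
    masked = np.stack([mask_region(img, box) for img in images])
    preds_masked = model.predict(masked)
    return float(np.mean(preds == labels) - np.mean(preds_masked == labels))
```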

The final two of these Es are particularly important: Ethics, because business data is usually far from pure and clean and will likely include many biases; and Established, because any process delay or failure can cause pile-ups and overflows into other processes, and thus disaster. In my opinion, only if all five Es are achieved should a business move AI into core business processes.

For business leaders seeking to address these Es, the challenge will not be acquiring PhDs in AI algorithms but rather (1) hiring skilled business analysts with knowledge of AI’s opportunities but also of real-world IT challenges, (2) hiring skilled cloud-AI implementers who can ensure these Es are met in a production environment, and (3) appointing AI ethics people to ensure that bias, data-protection laws and poor data quality do not lead to poor, ineffective AI. Given the significant competition for AI skills, digital-transformation skills and cloud skills [4], this will not be easy.

So while it is fun to see whizz-bang demos of AI products at industry AI conferences like those in Berlin this week, to my mind executives should remain mindful that really harnessing the potential of AI represents a much deeper form of digital transformation. Hopefully my 5 Es will aid those navigating such a transformation.

(C) 2019 W. Venters

[1] https://www.kas.de/ also https://www.kas.de/veranstaltungen/detail/-/content/international-perspectives-on-artificial-intelligence

[2] https://en.wikipedia.org/wiki/Federal_Chancellery_(Berlin)

[3] Singal AG, Higgins PD, Waljee AK. A primer on effectiveness and efficacy trials. Clin Transl Gastroenterol. 2014;5(1):e45. doi:10.1038/ctg.2013.13. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3912314/. I acknowledge drawing on Peter Checkland’s SSM for three of these Es (Efficacy, Efficiency and Effectiveness) in systems thinking.

[4] Venters, D. W., Sorensen, C., and Rackspace. 2017. “The Cost of Cloud Expertise,” Rackspace and Intel.

[5] https://en.wikipedia.org/wiki/Industry_4.0

The Enterprise Kindergarten for our new AI Babies? Digital Leadership Forum.

I am to be part of a panel at the Digital Leadership Forum event today discussing AI and the Enterprise. In my opinion, the AI debate has become dominated by AI technology and the arrival of products sold to the enterprise as “AI solutions”, rather than by the ecosystems and contexts in which AI algorithms will operate. It is to this that I intend to speak.

It’s ironic, though, that we should come to see AI in this way – as a kind of “black box” to be purchased and installed. If AI is about “learning” and “intelligence”, then surely an enterprise’s “AI baby”, if it is to act sensibly, needs a carefully considered and carefully controlled environment to help it learn? AI technology is about learning – nurturing, even – to ensure the results are relevant. With human babies we spend time choosing the books they will learn from, making the nursery safe and secure, and letting them experience the world in a controlled manner. But do enterprises think about investing similar effort in the training data for their new AI? And, in particular, in the digital ecosystem (the kindergarten) which will provide such data?
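To make this concrete, below is a minimal sketch, in Python with pandas, of the kind of “kindergarten” checks an enterprise might run on a candidate training set before any model sees it. The thresholds and the label column name are illustrative assumptions only.

```python
# Minimal sketch of pre-training "kindergarten" checks on a training
# set. Thresholds and the label column name are illustrative only.
import pandas as pd

def curation_warnings(df: pd.DataFrame, label_col: str = "label") -> list:
    """Return human-readable warnings about a candidate training set."""
    warnings = []
    # Heavily missing columns point to partial, messy enterprise data.
    for col, frac in df.isna().mean().items():
        if frac > 0.05:
            warnings.append(f"{col}: {frac:.0%} of values missing")
    # A dominant label is a bias red flag for downstream decisions.
    shares = df[label_col].value_counts(normalize=True)
    if shares.max() > 0.9:
        warnings.append(f"label imbalance: '{shares.idxmax()}' is {shares.max():.0%} of rows")
    # Exact duplicates quietly inflate apparent accuracy.
    dupes = df.duplicated().sum()
    if dupes:
        warnings.append(f"{dupes} duplicate rows")
    return warnings
```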

Examples of AI success clearly demonstrate such a kindergarten approach. AlphaGo grew up in a world of well-understood problems (Go has logical rules) with data unequivocally relevant to that problem. The team used experts in the game to hone its learning, and they were on hand to drive its success. Yet many AI solutions seem to be marketed as “plug-and-play”, as though exposing the AI to companies’ messy, often ambiguous, and usually partial data will be fine.

So where should a CxO be spending their time when evaluating enterprise AI? I would argue they should seek to evaluate both the AI product and their organisation’s “AI kindergarten” in which that product will grow.

Thinking about this further, we might recommend that:

  • CxOs should make sure that the data feeding the AI represents the company’s values and needs and is not biased or partial.
  • Ensure that AI decisions are taken forward in a controlled way, and that there is human oversight (see the sketch after this list). Ensure the organisation is comfortable with any AI decisions and that, even when they are wrong (which AI sometimes will be), they do not harm the company.
  • Ensure that the data required to train the AI is available. AI can require a huge amount of data to learn effectively, so it may be uneconomic for a single company to seek to acquire that data (see Uber’s woes in this regard).
  • Consider what would happen if the data sources for the AI degraded or changed (for example, a sensor broke, a camera was changed, data policy evolved or different types of data emerged). Who would be auditing the AI to ensure it continued to operate as required?
  • Finally, consider that the AI baby will not live alone – it will be “social”. Partners or competitors might employ similar AI which, within the wider marketplace ecosystem, might affect the world in which the AI operates (see my previous article on potential AI collusion). Famously, the interacting algorithms of high-frequency traders created significant market turbulence, dubbed the “flash crash”, when traders’ algorithms failed to understand the wider context of other algorithms interacting. Further, as AI often lacks transparency in its decision-making, this interacting network of AIs may act unpredictably and in ways that are poorly understood.
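On the human-oversight point above, here is a minimal sketch, in Python, of oversight implemented as a confidence gate: uncertain or high-impact AI decisions are queued for a person rather than enacted automatically. The thresholds and the shape of the `Decision` record are my assumptions, not any vendor’s design.

```python
# Minimal sketch of human oversight as a confidence gate: uncertain or
# high-impact AI decisions are queued for a person rather than enacted
# automatically. Thresholds and the Decision shape are assumptions.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90   # below this, a human decides
IMPACT_CEILING = 10_000   # above this value at stake, a human always decides

@dataclass
class Decision:
    action: str
    confidence: float  # the model's own certainty, 0..1
    impact: float      # e.g. monetary value affected

def route(decision: Decision, review_queue: list) -> str:
    if decision.confidence < CONFIDENCE_FLOOR or decision.impact > IMPACT_CEILING:
        review_queue.append(decision)   # held for human sign-off
        return "escalated"
    return "auto-approved"

queue = []
print(route(Decision("approve refund", 0.97, 120.0), queue))  # auto-approved
print(route(Decision("approve refund", 0.55, 120.0), queue))  # escalated
```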

Evolving your business alongside cloud services – V3 write-up of my talk at Cloud Expo yesterday

I gave a talk at Cloud Expo at the London ExCeL centre yesterday on the need for a much more dynamic perspective on cloud computing. V3.co.uk have written an article providing an excellent summary of the talk, if you are interested:
http://www.v3.co.uk/v3-uk/news/2454551/enterprises-must-be-ready-to-evolve-alongside-cloud-services

Dr Will Venters, assistant professor of information systems at the London School of Economics, explained that companies integrating cloud services into their IT infrastructure need to establish fluid partnerships with multiple vendors, as opposed to purchasing a static product….

Netskope’s approach to Shadow IT security.

On Wednesday last week I attended “Cloud Expo Europe” at London’s ExCeL centre. One particularly interesting product was Netskope (also a finalist in the UK Cloud Awards), which addresses the challenge of shadow IT – employees’ use of cloud services that are not sanctioned by the corporate IT department.

According to Accenture (2013), “78% of cloud procurement comes from Strategic Business Units (SBUs), and only 28% from centralized IT functions”. Without some form of control, the data-protection and compliance challenges of this can prove huge. Users are also poorly skilled in making rational decisions about the safety of company data, and products like Netskope address this by examining firewall logs or running proxy servers, and by providing an easy interface so IT departments can enforce cloud-access policies. The product analyses users’ access patterns and sends alerts, encrypts content on upload, blocks cloud transactions and quarantines content for review by Legal or IT. It essentially monitors employees and stops them doing anything risky.
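To illustrate the basic mechanism, here is a minimal sketch in Python of the firewall-log approach as I understand it. The catalogue entries, risk ratings and log layout are all illustrative assumptions, not Netskope’s actual data or implementation.

```python
# Minimal sketch of the firewall-log approach: match outbound domains
# against a catalogue of cloud services and their risk ratings. The
# catalogue, risk labels and log layout are illustrative assumptions.
CLOUD_CATALOGUE = {
    "dropbox.com":    {"service": "Dropbox",    "risk": "medium"},
    "wetransfer.com": {"service": "WeTransfer", "risk": "high"},
    "office365.com":  {"service": "Office 365", "risk": "low"},
}

def flag_shadow_it(log_lines, sanctioned=frozenset({"office365.com"})):
    """Yield (user, service, risk) for unsanctioned cloud traffic."""
    for line in log_lines:
        user, domain = line.split()[:2]   # assumed layout: "user domain port ..."
        entry = CLOUD_CATALOGUE.get(domain)
        if entry and domain not in sanctioned:
            yield user, entry["service"], entry["risk"]

for hit in flag_shadow_it(["alice dropbox.com 443", "bob office365.com 443"]):
    print(hit)   # ('alice', 'Dropbox', 'medium')
```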

For me, the real value of this product is its database of different cloud services, with detailed information as to their safety and compliance. The product is, however, also really frustrating. At its heart is the assumption that the job of the IT professional is to monitor, control and police employees. This puts IT in opposition to the other business functions. Why couldn’t this product have instead started from a different assumption – that employees are, mostly, just trying to do their work as efficiently as possible? While a few are bad actors, most are simply ignorant of the risks. Netskope would have been fantastic if it had instead helped reduce this ignorance rather than policing users’ failures. Had it provided an employee portal allowing employees to evaluate cloud services prior to adoption, it would have promoted their effective use and allowed users to make rational decisions about adoption. The IT department would be in a facilitating role rather than a policing role, and employees would feel in control (rather than in fear). Safety would be just the same (with Netskope policing policy), but with users feeling part of that effort. Productivity gains might also be achieved as users would be free to try new, valuable IT services knowing they were doing so safely and with management approval.

This isn’t to criticise Netskope for what it does do, but to call for new approaches to thinking about the role of IT and the CIO in this cloud future.