The Enterprise Kindergarten for our new AI Babies? Digital Leadership Forum.

I am taking part in a panel at the Digital Leadership Forum event today discussing AI and the Enterprise. In my view, the AI debate has become dominated by the technology itself, and by the arrival of products sold to enterprises as “AI solutions”, rather than by the ecosystems and contexts in which AI algorithms will operate. It is this that I intend to talk about.

It’s ironic, though, that we should come to see AI in this way – as a kind of “black box” to be purchased and installed. If AI is about “learning” and “intelligence”, then surely an enterprise’s “AI baby”, if it is to act sensibly, needs a carefully considered and controlled environment to help it learn? AI technology is about learning – nurturing, even – to ensure the results are relevant. With human babies we spend time choosing the books they will learn from, making the nursery safe and secure, and letting them experience the world in a careful, controlled manner. But do enterprises think about investing similar effort in the training data for their new AI? And, in particular, in the digital ecosystem (the kindergarten) which will provide that data?

Examples of AI success clearly demonstrate such a kindergarten approach. AlphaGo grew up in a world of well-understood problems (Go has clear, logical rules) with data unequivocally relevant to those problems. The team used experts in the game to hone its learning, and they were on hand to drive its success. Yet many AI solutions seem to be marketed as “plug-and-play”, as though exposing the AI to companies’ messy, often ambiguous, and usually partial data will be fine.

So where should a CxO be spending their time when evaluating enterprise AI? I would argue they should evaluate both the AI product itself and the organisation’s “AI kindergarten” in which that product will grow.

Thinking about this further, we might recommend the following:

  • CxOs should make sure that the data feeding the AI represents the company’s values and needs, and is neither biased nor partial.
  • Ensure that AI decisions are acted on in a controlled way, with human oversight. Ensure the organisation is comfortable with the AI’s decisions and that, even when they are wrong (as AI sometimes will be), they do not harm the company.
  • Ensure that the data required to train the AI is available. AI can require a huge amount of data to learn effectively, so it may be uneconomic for a single company to acquire that data alone (see Uber’s woes in this regard).
  • Consider what would happen if the data sources for the AI degraded or changed (for example, a sensor broke, a camera was replaced, data policy evolved, or new types of data emerged). Who would audit the AI to ensure it continued to operate as required? (A minimal monitoring sketch follows this list.)
  • Finally, consider that the AI baby will not live alone – it will be “social”. Partners or competitors might employ similar AI which, within the wider marketplace ecosystem, might affect the world in which the AI operates (see my previous article on potential AI collusion). Famously, the interacting algorithms of high-frequency traders created significant market turbulence, dubbed the “flash crash”, when traders’ algorithms failed to understand the wider context of the other algorithms they were interacting with. Further, as AI often lacks transparency in its decision-making, this interacting network of AIs may act unpredictably and in ways that are poorly understood.
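
To make the auditing point concrete, here is a minimal sketch of the kind of input-data monitoring that could flag such degradation: it compares recent feature statistics against a baseline captured at training time and alerts on drift. The feature names, baseline values, and threshold are illustrative assumptions, not a prescription.

```python
# Illustrative sketch: monitor whether the data feeding a deployed model
# still resembles the data it was trained on. Feature names, baseline
# values, and the threshold are illustrative assumptions.

import statistics

# Baseline statistics captured when the model was trained.
TRAINING_BASELINE = {
    "sensor_temperature": {"mean": 21.5, "stdev": 2.0},
    "daily_transactions": {"mean": 840.0, "stdev": 120.0},
}

def check_for_drift(feature, recent_values, z_threshold=3.0):
    """Alert if the recent mean of a feature sits more than z_threshold
    training-time standard deviations away from its baseline."""
    baseline = TRAINING_BASELINE[feature]
    recent_mean = statistics.mean(recent_values)
    z = abs(recent_mean - baseline["mean"]) / baseline["stdev"]
    if z > z_threshold:
        print(f"DRIFT ALERT: {feature} mean {recent_mean:.1f} is "
              f"{z:.1f} training standard deviations from baseline")
    return z

# Example: a replaced sensor now reports Fahrenheit instead of Celsius,
# so readings that once averaged ~21.5 suddenly average ~70.
check_for_drift("sensor_temperature", [70.2, 69.8, 71.0])
```

The point of even so crude a check is organisational rather than technical: someone has to own the baseline, watch the alerts, and decide what “continuing to operate as required” means.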
Image: Kassandra Bay (cc) Kevin Dooley – thanks.

Evolving your business alongside cloud services – V3 writeup of my talk at Cloud Expo yesterday

I gave a talk at Cloud Expo at London’s ExCeL centre yesterday on the need for a much more dynamic perspective on cloud computing. V3.co.uk have written an article providing an excellent summary of the talk, if you are interested:
http://www.v3.co.uk/v3-uk/news/2454551/enterprises-must-be-ready-to-evolve-alongside-cloud-services

Dr Will Venters, assistant professor of information systems at the London School of Economics, explained that companies integrating cloud services into their IT infrastructure need to establish fluid partnerships with multiple vendors, as opposed to purchasing a static product….

Netskope’s approach to Shadow IT security.

On Wednesday last week I attended “Cloud Expo Europe” at London’s ExCeL centre. One particularly interesting product was Netskope (also a finalist in the UK Cloud Awards), which addresses the challenge of shadow IT – employees’ use of cloud services that are not sanctioned by the corporate IT department.

According to Accenture (2013), “78% of cloud procurement comes from Strategic Business Units (SBUs), and only 28% from centralized IT functions”. Without some form of control, the data-protection and compliance challenges of this can prove huge. Users are also poorly skilled in making rational decisions about the safety of company data. Products like Netskope address this by examining firewall logs or running proxy servers, and by providing an easy interface through which IT departments can enforce cloud-access policies. The product analyses users’ access patterns, sends alerts, encrypts content on upload, blocks cloud transactions, and quarantines content for review by Legal or IT. It essentially monitors employees and stops them doing anything risky.
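
To illustrate the general pattern (this is a sketch of the concept, not Netskope’s actual implementation), detecting shadow IT from firewall logs might look something like the following; the log format, service catalogue, and risk ratings are all hypothetical.

```python
# Illustrative sketch of shadow-IT detection from firewall logs.
# Not Netskope's implementation: the log format, catalogue, and
# risk ratings are all hypothetical.

import csv
from collections import Counter

# Hypothetical catalogue: domain -> (service name, risk rating, sanctioned?)
CLOUD_CATALOGUE = {
    "dropbox.com":    ("Dropbox",    "medium", False),
    "box.com":        ("Box",        "low",    True),
    "wetransfer.com": ("WeTransfer", "high",   False),
}

def audit_firewall_log(path):
    """Count requests to known cloud services and flag traffic to
    unsanctioned or high-risk ones."""
    hits = Counter()
    with open(path, newline="") as f:
        # Assumed columns: timestamp, user, destination_domain, bytes_out
        for row in csv.DictReader(f):
            domain = row["destination_domain"].lower()
            if domain in CLOUD_CATALOGUE:
                hits[domain] += 1

    for domain, count in hits.most_common():
        name, risk, sanctioned = CLOUD_CATALOGUE[domain]
        if not sanctioned or risk == "high":
            print(f"ALERT: {count} requests to {name} ({domain}), "
                  f"risk={risk}, sanctioned={sanctioned}")

# Example usage (assumes a CSV export of the firewall log exists):
audit_firewall_log("firewall_log.csv")
```

As the sketch suggests, the hard part is not the log parsing but the curated catalogue of services and their risk ratings – which is exactly where Netskope’s value lies.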

For me, the real value of this product is its database of different cloud services, with detailed information on their safety and compliance. The product is, however, also really frustrating. At its heart is the assumption that the job of the IT professional is to monitor, control, and police employees. This puts IT in opposition to the other business functions. Why couldn’t the product instead have started from a different assumption – that employees are, mostly, just trying to do their work as efficiently as possible? While a few are bad actors, most are simply ignorant of the risks. Netskope would have been fantastic if it helped reduce this ignorance rather than policing users’ failures. Had it provided an employee portal allowing staff to evaluate cloud services prior to adoption (a minimal sketch of such a portal follows), it would have promoted their effective use and allowed users to make rational adoption decisions. The IT department would be in a facilitation role rather than a policing role, and employees would feel in control rather than in fear. Safety would be just the same (with Netskope policing policy), but with users feeling part of that effort. Productivity gains might also follow, as users were freed to try valuable new IT services knowing they were doing so safely and with management approval.
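
The same hypothetical catalogue could just as easily face the employee, answering “is this service safe to use?” before adoption rather than flagging it afterwards. A minimal sketch, with every entry invented for illustration:

```python
# Illustrative sketch of the employee-facing alternative: the same kind
# of service catalogue, queried *before* adoption rather than policed
# afterwards. All entries are invented for illustration.

CLOUD_CATALOGUE = {
    "box.com":        ("Box",        "low",  True),
    "wetransfer.com": ("WeTransfer", "high", False),
}

def evaluate_service(domain):
    """Let an employee check a cloud service before using it."""
    entry = CLOUD_CATALOGUE.get(domain.lower())
    if entry is None:
        return f"{domain}: unknown service, please ask IT to assess it first."
    name, risk, sanctioned = entry
    if sanctioned:
        return f"{name}: approved for company data (risk rating: {risk})."
    return f"{name}: not approved, an approved alternative may exist."

print(evaluate_service("box.com"))         # an approved service
print(evaluate_service("wetransfer.com"))  # an unsanctioned service
```

The enforcement data is identical; only the direction of the interface changes – from policing the user to informing them.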

This isn’t to criticise Netskope for what it does, but to call for new approaches to thinking about the role of IT and the CIO in this cloud future.