Cloud Expertise Report with Rackspace and Intel

For a number of months I’ve been working with Rackspace and my colleague Carsten Sorensen to undertake a study of the impact of skills and expertise on cloud computing. The report, “The Cost of Cloud Expertise”, has just been published here. The headline figure is that $258m is lost each year through a lack of cloud expertise.

Cost of cloud expertise report

In the press release I am quoted as saying; “Put simply, cloud technology is a victim of its own success. As the technology has become ubiquitous among large organizations – and helped them to wrestle back control of sprawling physical IT estates – it has also opened up a huge number of development and innovation opportunities. However, to fully realize these opportunities, organizations need to not only have the right expertise in place now, but also have a cloud skills development strategy to ensure they are constantly evolving their IT workforce and training procedures in parallel with the constantly evolving demands of cloud. Failure to do so will severely impede the future aspirations of businesses in an increasingly competitive digital market.”

The report also explores the requirements for cloud skills, and discusses the strategies businesses can adopt to mitigate the risks of cloud skills shortages:

  • Split the IT function into separate streams – business focused and operation focused.
  • Develop a cloud-skills strategy.
  • Assess the cloud ecosystem and ensure a balanced pool of skills.

Take a look!

https://blog.rackspace.com/258-million-year-cost-enterprises-lack-cloud-computing-expertise-says-rackspace

Some early press coverage below…

Only 29% of IT leaders have the skills needed to fully embrace the cloud TechRepublic Sep 21, 2017
Rackspace asked organization execs around the world about cloud IT — here’s what they found San Antonio Business Journal Sep 21, 2017
Cloud Skill Shortage Costs Large Enterprises $258 Million Each Year: Report Windows IT Pro Sep 21, 2017
Cloud skills shortage holding back some Aussie businesses CIO Australia
Is cloud computing a victim of its own success? Computer Business Review Sep 21, 2017
Two-thirds of businesses losing money over poor cloud skills Cloud Pro
Here’s what’s costing businesses a lot of money London Loves Business
UK organisations lose millions a year due to lack of cloud technology skills Bdaily
Lack of cloud expertise costing companies $258mn per year The Stack
UK businesses losing revenue due to lack of cloud expertise ITProPortal

Platforms for the Internet of Things: Opportunities and Risks

I was chairing a panel at the Internet of Things Expo in London today. One of the points for discussion was the rise of platforms related to the Internet of Things. As the number of connected devices is, by some estimates, predicted to exceed 50bn by 2020, there is considerable desire to control the internet-based platforms upon which these devices will rely. Before we think specifically about platforms for the Internet of Things, it is worth pausing to think about platforms in general.

The idea of platforms is pretty simple – they are something flat we can build upon. In computing terms they are an evolving system of software which provides generativity [1]: the potential to innovate by capitalising on the features of the platform service to provide something more than the sum of its parts. They exhibit the economic concept of network effects [2] – that is, their value increases as the number of users increases. The telephone, for example, was useless when only one person had one, but as the number of users increased so its value increased (owners could call more people). This in turn leads to lock-in effects and potential monopolisation: once a standard emerged there was considerable disincentive for existing users to switch, and, faced with competing standards, users will wisely choose a widely adopted incumbent standard (unless the new standard is considerably better or there are other incentives to switch). These network effects also influence suppliers – app developers focus on developing for the standard Android/iPhone platforms, thereby increasing their value and creating a complex ecosystem of value.
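To make the network-effect intuition concrete, one common formalisation – Metcalfe’s law, which is not cited in the post and is offered here purely as an illustrative assumption – values a network by the number of possible pairwise connections between its users:

```latex
% Metcalfe's law (illustrative assumption; not part of the original post):
% a network's value grows roughly with the number of possible pairwise links.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2}
% e.g. 2 users allow 1 link, 10 users allow 45, 1000 users allow 499,500 -
% so each additional user adds more potential value than the previous one.
```

On this rough view, lock-in follows naturally: leaving a large network means giving up a disproportionately large share of its value.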

Let’s now move to think further about this concept for the Internet of Things. I worry somewhat about the emergence of strong commercial platforms for Internet of Things devices. IoT concerns things, whose value is derived from both their materiality and their internet-capability. When we purchase an “IoT”-enabled street-light (for example) we are making a significant investment in the material streetlight as well as in its Internetness. If IoT evolves like mobile phones this could lock us into the platform, and changing to an alternative platform would thus incur a high material cost (assuming, as with mobiles, we are unable to alter the software), since, unlike phones, these devices are not regularly upgraded or replaced. This demonstrates that platforms concern the distribution of control: the platform provider has a strong incentive to seek to control the owners of the devices, and through this to derive value from its platform over the long term. Also, for many IoT devices (and this is particularly relevant for critical national infrastructure), the distribution of control does not correspond to the distribution of risk, security and liability, which may be significant for such devices.

There is also considerable incentive for platform creators to innovate their platform – developing new features and options to increase its value and so increase the scale and scope of the platform. This, however, creates potential instability in the platform – making evaluation of risk, security and liability over the long term exceedingly difficult. Further, there is an incentive for platform owners to demand evolution from platform users (to drive greater value), potentially making older devices quickly redundant.

For important IoT devices (such as those used by government bodies), we might suggest that they seek to avoid these effects by harnessing open platforms based on collectively shared standards rather than singular controlled software platforms. Open platforms are “freely available, standard definitions of service outcomes, processes, or technology that encourage multiple users to converge on utility consumption of services based on definitions – which in turn encourage suppliers to innovate around these commodities” [3, 4]. In contrast to open source, open platforms are not about the software but about a collective standards-agreement process in which standards are freely shared, allowing collective innovation around that standard. For example, the 230V power supply is a standard around which electricity generators, device manufacturers and consumers coalesce.

What are the lessons here?

(1) Wherever possible we should seek open platforms and promote the development of standards.

(2) We must demand democratic accountability, and seek to exploit levers which ensure control over our infrastructure is reflective of need.

(3) We should seek to understand platforms as dynamic, evolving, self-organising infrastructures, not as static entities.

References

  1. Zittrain, J.L., The Generative Internet. Harvard Law Review, 2006. 119(7): p. 1974–2040.
  2. Gawer, A. and M. Cusumano, Platform Leadership. 2002, Boston, MA: Harvard Business School Press.
  3. Brown, A., J. Fishenden, and M. Thompson, Digitizing Government. 2015.
  4. Fishenden, J. and M. Thompson, Digital Government, Open Architecture, and Innovation: Why Public Sector IT Will Never Be the Same Again. Journal of Public Administration Research and Theory, 2013.

What is Fog Computing?

I read an interesting article on Fog Computing and thought readers might like a short précis:

Applications such as health monitoring or emergency response require near-instantaneous responses, such that the delay caused by contacting and receiving data from a cloud data-centre can be highly problematic. Fog computing is a response to this challenge. The basic idea is to shift some of the computing from the data-centre to devices closer to the edge of the network – moving the cloud down to the ground (hence “fog computing”). The computing work is shared between the data-centre and various local IoT devices (e.g. a local router or smart gateway).
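To make this concrete, here is a minimal sketch – all names, thresholds and behaviour are hypothetical, not taken from the article – of how a smart gateway might split work between the edge and the cloud: latency-critical readings are handled locally, everything else is passed on to the data-centre.

```python
# Illustrative sketch only: a hypothetical smart gateway that handles
# latency-critical readings at the edge and defers the rest to the cloud.
# The threshold, function name and messages are invented for illustration.

CRITICAL_HEART_RATE = 140  # hypothetical alert threshold (beats per minute)


def handle_reading(heart_rate: int) -> str:
    """Decide where a health-monitoring reading should be processed."""
    if heart_rate >= CRITICAL_HEART_RATE:
        # React immediately at the edge; no round trip to a distant data-centre.
        return "local alert raised at the gateway"
    # Non-urgent readings can tolerate cloud latency (e.g. trend analysis).
    return "queued for upload to the cloud data-centre"


if __name__ == "__main__":
    for hr in (72, 150):
        print(hr, "->", handle_reading(hr))
```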

“Fog computing is a paradigm for managing a highly distributed and possibly virtualized environment that provides compute and network services between sensors and cloud data-centers” (Dastjerdi et al. 2016)

While cloud computing (using large data-centres) is perfect for analysing Big Data “at rest” (i.e. analysing historical trends, where large volumes of data and cheap processing are required), fog computing may be much better for dynamic analysis of “data-in-motion” (data concerning immediate ongoing actions which require a rapid analytical response). For example, an augmented-reality application cannot wait for a distant data-centre to respond when a user’s head is turned. Similarly, safety-critical and business-critical applications such as remote health-care monitoring or remote diagnostics cannot rely on the permanent availability of internet connections (as those in York know, after floods knocked out their internet for days this year).

Privacy concerns are also relevant. By moving data analysis to the edge of the network (e.g. a device or local mobile phone), which is often owned and controlled by the data source, the user may have more control over their data. For example, an exercise tracker might aggregate and process its GPS and fitness data on a local mobile phone rather than automatically uploading it to a distant server. It might also undertake data-trimming, so reducing the bandwidth and load on the cloud – particularly relevant as the number of connected devices increases to billions. This gain should be balanced against the challenge of managing an increasing number of devices which must be secured to hold sensitive data safely.
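As a minimal sketch of that exercise-tracker example – again with invented field names and numbers, purely to illustrate local aggregation and data-trimming – the phone might reduce a stream of raw samples to a small summary and upload only that:

```python
# Illustrative sketch only: aggregate raw fitness samples on the phone and
# upload just the summary, so raw GPS/heart-rate points never leave the device.
from statistics import mean


def summarise(samples: list[dict]) -> dict:
    """Trim a batch of raw samples down to a small daily summary."""
    return {
        "total_steps": sum(s["steps"] for s in samples),
        "avg_heart_rate": round(mean(s["heart_rate"] for s in samples), 1),
        "raw_samples_trimmed": len(samples),  # raw points stay on the device
    }


raw = [{"steps": 110, "heart_rate": 81}, {"steps": 95, "heart_rate": 88}]
print(summarise(raw))  # only this small summary would be sent to the cloud
```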

Another challenge is the climate impact this new architecture poses. While data-centres are increasingly efficient in their processing, and often rely on clean-energy sources, moving computing to less efficient devices at the edge of the network might create a problem. We are effectively trading latency against CO2 production.

For more information see:

Dastjerdi, A. V., Gupta, H., Calheiros, R. N., Ghosh, S. K., and Buyya, R. 2016. “Fog Computing: Principles, Architectures, and Applications,” in Internet of Things: Principles and Paradigms. Elsevier / MKP. http://www.buyya.com/papers/FogComputing2016.pdf


Videos on Innovating Information and Digital Infrastructures…

The following link provides access to the panels and videos of the 4th Innovating Information Infrastructure workshop from earlier this year.

I attended the workshop, which was excellent – I can particularly recommend my friends Ole Hanseth’s and Carsten Sorensen’s presentations, which were great.

http://www2.warwick.ac.uk/fac/soc/wbs/subjects/ism/workshop

Enjoy!

 

CWF: Will Venters – EM360 Podcast | Enterprise Management 360°

I was interviewed by Enterprise Management 360 at the Cloud World Forum – the podcast of the interview is now available on their site:

CWF: Will Venters – EM360 Podcast | Enterprise Management 360°.

Double trouble – why cloud is a question of balance | My New Blog on Cloud Pro

I have been invited to blog on Cloud Pro – don’t worry, I will keep posting here as well – but if you want to read my first post, see:

Double trouble – why cloud is a question of balance | Cloud Pro.