I read an interesting article on Fog Computing and thought readers might like a short précis:
Applications such as health monitoring or emergency response require near-instantaneous responses, so the delay in contacting and receiving data from a cloud data-centre can be highly problematic. Fog Computing is a response to this challenge. The basic idea is to shift some of the computing from the data-centre to devices closer to the edge of the network – moving the cloud down to the ground (hence “fog computing”). The computing work is shared between the data-centre and various local IoT devices (e.g. a local router or smart gateway).
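This division of work can be made concrete with a toy sketch. The code below is purely illustrative (not any real fog framework): the task names and latency figures are assumptions, and the placement rule is a simple deadline check against an assumed cloud round-trip time.

```python
# Toy sketch of fog-style task placement: latency-critical work stays on
# a local edge device; looser deadlines can tolerate the data-centre.
# All names and latency figures here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    deadline_ms: float  # how quickly a response is needed


CLOUD_LATENCY_MS = 120  # assumed round-trip to a remote data-centre


def place_task(task: Task) -> str:
    """Run a task on the edge if a cloud round-trip would miss its deadline."""
    if task.deadline_ms < CLOUD_LATENCY_MS:
        return "edge"   # cloud is too slow for this deadline
    return "cloud"      # deadline is loose enough for the data-centre


print(place_task(Task("render-AR-frame", deadline_ms=20)))          # edge
print(place_task(Task("monthly-trend-report", deadline_ms=60000)))  # cloud
```

Real fog platforms would also weigh device capacity, energy and connectivity, but the deadline comparison captures the core latency argument.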
“Fog computing is a paradigm for managing a highly distributed and possibly virtualized environment that provides compute and network services between sensors and cloud data-centers” (Dastjerdi et al. 2016)
While cloud computing (using large data-centres) is well suited to analysis of Big Data “at rest” (i.e. analysing historical trends, where large volumes of data and cheap processing are required), fog computing may be much better for dynamic analysis of “data-in-motion” (data concerning immediate, ongoing actions which require a rapid analytical response). For example, an Augmented Reality application cannot wait for a distant data-centre to respond when a user’s head is turned. Similarly, safety-critical and business-critical applications such as remote health monitoring or remote diagnostics cannot rely on the permanent availability of internet connections (as those in York learned when floods knocked out their internet for days this year).
Privacy concerns are also relevant. By moving data analysis to the edge of the network (e.g. a device or local mobile phone) which is often owned and controlled by the data source, the user may gain more control over their data. For example, an exercise tracker might aggregate and process its GPS and fitness data on a local mobile phone rather than automatically uploading it to a distant server. It might also undertake data-trimming, thereby reducing the bandwidth and load on the cloud. This is particularly relevant as the number of connected devices grows to billions. This gain must be balanced against the challenge of managing an increasing number of devices which must be secured to hold sensitive data safely.
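The data-trimming idea can be sketched in a few lines. This is a hypothetical example, not any real tracker’s API: the sample format and the summary fields are assumptions, but it shows how raw readings can stay on the device while only a small aggregate is uploaded.

```python
# Illustrative sketch (hypothetical, not a real tracker API): aggregate
# fitness samples locally on the phone and upload only a small summary,
# reducing bandwidth to the cloud and keeping raw data on the device.
from statistics import mean


def daily_summary(samples):
    """samples: list of (heart_rate_bpm, distance_m) tuples recorded locally."""
    return {
        "avg_heart_rate": mean(hr for hr, _ in samples),
        "total_distance_m": sum(d for _, d in samples),
        "n_samples": len(samples),
    }


raw = [(72, 10.0), (95, 12.5), (110, 9.0)]  # raw readings stay on the device
upload = daily_summary(raw)                 # only this small dict leaves it
print(upload)
```

Three raw readings become one three-field summary; with samples arriving every few seconds over a day, the bandwidth saving – and the privacy gain from never shipping raw traces – is substantial.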
Another challenge is the climate damage this new architecture poses. While data-centres are increasingly efficient in their processing, and often rely on clean-energy sources, moving computing to less efficient devices at the edge of the network might create a problem. We are effectively trading latency against CO2 production.
For more information see:
Dastjerdi, A. V., Gupta, H., Calheiros, R. N., Ghosh, S. K., and Buyya, R. 2016. “Fog Computing: Principles, Architectures, and Applications,” in Internet of Things: Principles and Paradigms. Elsevier / MKP. http://www.buyya.com/papers/FogComputing2016.pdf
Ten years ago a couple of students (Omer Tariq and Kabir Sehgal) came into my office with the idea of creating an academic journal to publish MSc and PhD students’ essays and articles on Information Systems. Today we have just published our 10th-anniversary edition. I am extremely proud that something I pushed for during the first couple of years continues to thrive on its own, and I congratulate Gizdem Akdur, this year’s editor-in-chief, and her team for their great work and enthusiasm!
This is my editorial from this anniversary edition:
EDITORIAL – From the Faculty Editor
So the iSCHANNEL has made it to 10 years old. We should really celebrate with a cake with candles, but that isn’t really in the spirit of this journal. If we are anything, we are forward-looking. Our place is charting the future, not the past, and our regularly changing authors, reviewers and editors ensure this. Only I – as the so-called Faculty Editor – have remained around to steer the ship (though these days it mostly pilots itself and I simply pen these editorials).
This year’s articles reflect the iSCHANNEL’s forward-looking trend. Big data is reviewed by Maximilian Mende – though, reflecting our teaching here at the LSE, the focus is not on the hyperbole of this new trend, but on the limited rationality available to managers and the imposition of a technical rationality which remains inherently bounded. Also trailblazing is an article by Atta Addo on Bitcoin – that most current of topics – exploring the entanglement of materiality, form and function. Drawing upon Prof. Kallinikos’ work, this article stands back to explore what currency is as a digital artefact of varying form. Similar questions are asked of cars in Tania Moser’s article, which explores ubiquitous computing’s impact on transportation. This includes the famous quote “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” (Weiser, 1991).
What excited me within this issue, however, were two articles which rejected the inherent assumption of this quote, realising that while technology disappears for some, it becomes very much present for those it marginalises. Whether through economics, disability or location, the brave new digital world is a barrier to many. It was therefore pleasing to see articles addressing the obstacles of old age in the adoption of telecare (in an article by Karolina Lukomska), and finally a paper by Matteo Ronzani on digital technology and its role in replicating existing patterns of resource distribution which support global inequality. These are topics of our time and it is wonderful to see this journal tackle them.
I very much wish the iSCHANNEL a productive second decade and hope our readership will continue to benefit from its insights.
Dr. Will Venters
Weiser, M. (1991). The Computer for the 21st Century. Scientific American 265 (3): pp. 94-104.
Big Data – Charm or Seduction [invited article by Mike Cushman].
“The Allure of Big Data”, the 14th Social Study of ICT workshop held at LSE on 25 April 2014, pointed to answers to some questions but left others unaddressed. Two in particular were left hanging: ‘How new is Big Data?’ and ‘What is Big Data?’
How new is Big Data?
Like many themes in the fields of Management and Information Systems, it is both new and not new, and both the ‘gee-whiz’ and ‘we’ve seen it all before’ reflexes are incomplete.
In important respects Big Data is a re-packaging and re-selling of Data Warehousing, Data Mining, Knowledge Management, e-science and many other items from the consultants’ catalogues of past decades. Each of these, especially KM, is itself a re-badging of previous efforts. But to say only that is to miss that the growth of processing power and cheap, ever-cheaper, storage is producing changes in the uses to which accumulations of data can be put. In addition, previous iterations did not have available to them the current quantities of social-media content and the GPS attributes generated by the growth of mobile computing. The development of innovative algorithms to analyse the growing quantity and variety of data affords new possibilities, even if many of them – though far from all – just look like expanded versions of the old routines.
What is Big Data?
Much of the discussion at the workshop was compromised by the lumping of too many distinct phenomena under one heading. Big Data is not one thing, and what follows is a preliminary attempt at a typology of Big Data.
Big Data is the business. Companies like Google and Facebook essentially are their ability to analyse the data provided by their users in return for the free provision of services. Discussions about such companies should lead to discussion about the role of advertising in the economy and society. While newspapers and magazines have always been dependent upon advertising revenue, this revenue is far more central to Google and its peers.
Big Data for marketing. The collection of customer data through loyalty cards allows retailers to design promotions at a national, store and individual customer level and makes CRM systems far more powerful.
Big Data for cost control. The collection of data on every aspect of the business allows the elimination of unnecessary cost, making cost accounting far more effective and supporting lean manufacturing approaches.
Big Data for workforce management. Employers now have access to far more data about employees’ histories and performance. This has led to the spread of both performance related pay and more intrusive disciplinary codes.
Big Data for performance ranking and comparison. It has become accepted that heterogeneous organisations can be listed in meaningful league tables with standardised measures as easily as football teams can. The result of a football match is unambiguous, subject to moderately competent refereeing. The performance of a school, university or hospital is less easily agreed. LSE moves alarmingly up and down national and international rankings according to the measures and their weightings selected by a particular newspaper. Big Data is the key cement of the conceit that these league tables are a sensible activity and that they are sufficiently meaningful to obliterate the harm they do. Because Big Data and the tables are assumed to be necessary, the data must be constructed and collected regardless of cost and disruption, so the Research Excellence Framework is allowed to dominate university life and only education measurable in GCSEs is understood to be a valuable product of school efforts.
Big Data for product development. The collection of data about products in use in industries like motor manufacturing can feed back into product design to eliminate design faults and weaknesses and better meet customer demands.
Big Data for science. The growth in computing capacity is necessary for data-rich experiments like those at CERN, but it also enables the collection of far greater quantities of observational data in both hard sciences like meteorology and in the social sciences, leading to the production of new scientific knowledge.
Big Data for policy development. Policy in areas like housing, transport, education and health has always depended on large data sets like the national census and the General Household Survey (the degree of faithfulness of any particular policy to the data claimed to support it has always been, and will always be, a matter for political argument). Whether the development of bigger data will improve policy development or only intensify the politicisation of data use is a matter for conjecture.
Big Data for surveillance. There has long been a recognition that states collect data on their citizens. Each state loudly announces the data-collection practices of its opponents while, generally, concealing its own. ‘Totalitarian’ states have been more willing to publicise their surveillance in order to intimidate their populations; ‘liberal democracies’ try to minimise knowledge about their own practices, claiming it is only ‘them’ about whom dossiers are compiled – criminals, terrorists, subversives, and paedophiles. The admitted categories have always been elastic according to political priorities, so may also be widened to include such groups as trade unionists; benefit claimants; or immigrants, refugees and aliens. While groups are added, there is great institutional resistance to slimming down the list. Edward Snowden revealed that even ‘liberal democracies’ regard every citizen as potentially hostile and a surveillance subject ‘just in case’.
There are continuing ethical and privacy concerns about Big Data. These are made more complex and irresolvable because Big Data is too often discussed as one thing. Regarding it as many distinct phenomena, each domain having its own ethical and privacy requirements, will allow more clarity.
29 April 2014
Mike Cushman is a retired colleague from the LSE who also specialises in Information Systems and their social and organisational implications.