Latest Article | Interventionist grid development projects: a research framework based on three frames

My latest research article has just been published. This one focuses on Grid computing within large projects:

Will Venters and Avgousta Kyriakidou-Zacharoudiou (2012) “Interventionist grid development projects: a research framework based on three frames”, Information Technology & People, Vol. 25, Iss. 3, pp. 300-326

Abstract:

Purpose – This paper seeks to consider the collaborative efforts of developing a grid computing infrastructure within problem-focused, distributed and multi-disciplinary projects – which the authors term interventionist grid development projects – involving commercial, academic and public collaborators. Such projects present distinctive challenges which have been neglected by existing e-Science research and information systems (IS) literature. The paper aims to define a research framework for understanding and evaluating the social, political and collaborative challenges of such projects.

Design/methodology/approach – The paper develops a research framework which extends Orlikowski and Gash’s concept of technological frames to consider two additional frames specific to such grid projects: bureaucratic frames and collaborator frames. These are used to analyse a case study of a grid development project within healthcare which aimed to deploy a European data-grid of medical images to facilitate collaboration and communication between clinicians across the European Union.

Findings – That grids are shaped to a significant degree by the collaborative practices involved in their construction, and that for projects involving commercial and public partners such collaboration is inhibited by the differing interpretive frames adopted by the different relevant groups.

Research limitations/implications – The paper is limited by the nature of the grid development project studied, and the subsequent availability of research subjects.

Practical implications – The paper provides those involved in such projects, or in policy around such grid developments, with a practical framework by which to evaluate collaborations and their impact on the emergent grid. Further, the paper presents lessons for future such Interventionist grid projects.

Originality/value – This is a new area for research but one which is becoming increasingly important as data-intensive computing begins to emerge as foundational to many collaborative sciences and enterprises. The work builds on significant literature in e-Science and IS, drawing it into this new domain. The research framework developed here, drawn from the IS literature, begins a new stream of systems development research with a distinct focus on bureaucracy, collaboration and technology within such interventionist grid development projects.

Book Chapter Out: The Participatory Cultures Handbook

The Participatory Cultures Handbook (ISBN 9780415506090), edited by Aaron Delwiche and Jennifer Jacobs Henderson – available at Amazon.com.

I co-authored (with Sarah Pearce) a chapter in this book focusing on the culture of particle physicists at CERN as they developed the world’s largest Grid Computing infrastructure for the LHC. The chapter considers the different collaborative and management practices involved in such a large endeavour and offers lessons for others building information infrastructure in a global collaboration.

Pre-Order a copy now!!!

 

Cloud Computing – it’s so ‘80s.

For Vint Cerf[1], the father of the internet, Cloud Computing represents a return to the days of the mainframe, when service bureaus rented their machines by the hour to companies who used them for payroll and other similar tasks. Such comparisons focus on the architectural similarities between centralised mainframes and Cloud Computing – cheaply connecting to an expensive resource “as a service” through a network. But the cloud is really about providing already low-cost computing (albeit in bulk, through data-centres) at even lower cost. A better analogy than the mainframe, then, is the introduction of the humble micro-computer and the revolution it brought to corporate computing in the early 1980s.

When micros were launched, many companies operated using mini or mainframe computers which were cumbersome, expensive and needed specialist IT staff to manage them[1]. Like Cloud Computing today, the new micros offered ease of use, low cost and apparently low risk compared with these existing computers, which appealed to business executives seeking to cut costs and to SMEs unable to afford minis or mainframes[2]. Usage exploded: between the launch of the IBM PC in 1981 and 1984, the proportion of companies using PCs increased dramatically from 8% to 100%[3] as the cost and opportunity of the micro became apparent. Again, as with the cloud[4], these micros were marketed directly to business executives rather than IT staff, and were accompanied by a narrative that they would enable companies to dispense with heavy mainframes and the IT department for many tasks – doing them more quickly and more effectively. Surveys from that time suggested accessibility, speed of implementation, response-time, independence and self-development were the major advantages of the PC over the mainframe[5] – all easily recognisable in the hyperbole surrounding cloud services today. Indeed, Nicholas Carr’s pronouncement of the end of corporate computing[6] would probably have resonated well in the early 1980s, when the micro looked set to replace the need for corporate IT. In 1980 over half the companies in a sample claimed no IT department involvement in the acquisition of PCs[3].

But problems emerged from the wholesale, uncontrolled adoption of the micro, and by 1984 only 2% of those sampled did not involve the IT department in PC acquisition[3]. The proliferation of PCs meant that in 1980 as many as 32% of IT managers were unable to estimate the proportion of PCs within their company[3], and few could provide any useful support for those who had purchased them.

Micros ultimately proved cheap individually but expensive en masse[2] as their use exploded and new applications for them were discovered. In addition to this increased use, IT professionals worried about the lack of documentation (and thus poor opportunity for maintenance), poor data management strategies, and security issues[7]. New applications proved incompatible with others (“the time-bomb of incompatibility”[2]), and different system platforms (e.g. CP/M, UNIX, MS-DOS, OS/2, Atari, Apple …) led to redundancy and communication difficulties between services, and to the failure of many apparently unstoppable software providers – household names such as Lotus, Digital Research, WordStar, Visi and dBase[8].

Ultimately it was the IT department which brought sense to these machines and began to connect them together for useful work using compatible applications – aided by the emergence of companies such as Novell and Microsoft, which brought order to the chaos[8].

Drawing lessons from this history for Cloud Computing is useful. The strategic involvement of IT services departments is clearly required. Such involvement should focus not on the current cost-saving benefits of the cloud, but on the strategic management of a potentially escalating use of Cloud services within the firm. IT services must get involved in the narrative surrounding the cloud – ensuring their message is neither overly negative (and thus appearing to have a vested interest in the status quo) nor overly optimistic, since potential problems exist. Either way, the lessons of the microcomputer are relevant again today. Indeed, Keen and Woodman argued in 1984 that companies needed the following four strategies for the micro:

1) “Coordination rather than control of the introduction.

2) Focusing on the longer-term technical architecture for the company’s overall computing resources, with personal computers as one component.

3) Defining codes for good practice that adapt the proven disciplines of the [IT industry] into the new context.

4) Emphasis on systematic business justification, even of the ‘soft’ and unquantifiable benefits that are often a major incentive for and payoff of using personal computers”[2]

It would be wise for companies contemplating a move to the cloud to consider this advice carefully – replacing “personal computer” with “cloud computing” throughout.

(c)2011 Will Venters, London School of Economics. 

[1] P. Ceruzzi, A History of Modern Computing. Cambridge, MA: MIT Press, 2002.

[2] P. G. W. Keen and L. Woodman, “What to do with all those micros: First make them part of the team,” Harvard Business Review, vol. 62, pp. 142-150, 1984.

[3] T. Guimaraes and V. Ramanujam, “Personal Computing Trends and Problems: An Empirical Study,” MIS Quarterly, vol. 10, pp. 179-187, 1986.

[4] M. Benioff and C. Adler, Behind the Cloud: The Untold Story of How Salesforce.com Went from Idea to Billion-Dollar Company – and Revolutionized an Industry. San Francisco, CA: Jossey-Bass, 2009.

[5] D. Lee, “Usage Patterns and Sources of Assistance for Personal Computer Users,” MIS Quarterly, vol. 10, pp. 313-325, 1986.

[6] N. Carr, “The End of Corporate Computing,” MIT Sloan Management Review, vol. 46, pp. 67-73, 2005.

[7] D. Benson, “A Field Study of End User Computing: Findings and Issues,” MIS Quarterly, vol. 7, pp. 35-45, 1983.

[8] M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, MA: MIT Press, 2003.

Cloud Computing Presentation at GridPP Collaboration Meeting

Yesterday I gave an introductory presentation on Cloud Computing from a business perspective at the GridPP collaboration meeting at Royal Holloway, University of London. The slides are available on the GridPP website here: http://www.gridpp.ac.uk/gridpp24/CloudComputingGridPP24.ppt