edikt Technical Workshop
Date: Wednesday 27 January 2010
Start: 14:00 Finish: 17:00
Venue: Rm 6301, James Clerk Maxwell Building, King's Buildings
Purpose of workshop
The purpose of the meeting is for edikt2 participants to share information about the technical aspects of the various activities being funded through edikt2. The meeting is also open to other interested parties around the University and beyond.
The edikt (eScience Data, Information and Knowledge Transformation) project has been running since May 2002 and is using computational science to extract knowledge from vast datasets and simulation models. edikt is funded by the Scottish Funding Council.
For more information on edikt, see the project web site at http://www.edikt.org.uk.
14:00 Welcome - Mr Terry Sloan (EPCC)
14:05 Data-Intensive Research - Dr Jano van Hemert (National eScience Centre)
14:50 Coffee
15:20 Cloud Computing and the National Grid Service - Dr Steve Thorn (Information Services)
16:05 Prediction Models: Bridging Biology, Chemistry and Machine Learning - Dr Jan Wildenhain (Wellcome Trust Centre for Cell Biology)
16:50 Wrap-up
17:00 Close
Attendance at the workshop is free with no prior registration required. For catering purposes, however, it would be very helpful if you could contact Terry Sloan (email@example.com, 0131 650 5155) beforehand with your name.
ABSTRACTS and SLIDES
Data-Intensive Research Jano van Hemert, School of Informatics
Science is witnessing a data revolution. Data are now created by faster and cheaper physical technologies, software tools and digital collaborations. Examples include satellite networks, simulation models and social network data. To transform these data successfully into information, then into knowledge and finally into wisdom, we need new forms of computational thinking. These may be enabled by building “instruments” that make data comprehensible for the “naked mind”, in a similar fashion to the way in which telescopes reveal the universe to the naked eye. These new instruments must be grounded in well-founded principles to ensure they have the fidelity and capacity to transform complex and large-scale data into comprehensible forms; this demands new data-intensive methods.
“Data-intensive” refers to huge volumes of data, complex patterns of data integration and analysis, and intricate interactions between data and users. Current methods and tools are failing to address data-intensive challenges effectively: they fail for several reasons, all of which are aspects of scalability. I will introduce three main aspects of data-intensive research and show how we are addressing the challenges that arise from the interaction of these aspects. I will draw on results from our interdisciplinary collaborations as examples of solutions to specific challenges that can arise when scaling up intensity.
Cloud Computing and the National Grid Service Steve Thorn, Information Services
The National Grid Service (NGS) is evaluating how it can effectively make use of Cloud Computing. Whilst Cloud technologies open many possibilities, the NGS is particularly interested in practical applications that require dynamic deployment of services. One of the aims of this study is to identify a suitable example and successfully implement it 'in the Cloud'. To achieve this in a cost-effective and controlled way, an open-source Cloud implementation (Eucalyptus) has been deployed at the collaborating sites (Edinburgh and Oxford).
This talk will present the progress so far and describe the technologies used.
Steve's slides ediktJan10-CloudcomputingSThorn.pdf (PDF)
Prediction Models: Bridging Biology, Chemistry and Machine Learning Jan Wildenhain, Wellcome Trust Centre for Cell Biology
Jan's slides ediktJan10-PredictionModelsJWildenhain.pdf (PDF)