October 8, 2012 - October 9, 2012

eScience Workshop 2012

Location: Chicago, IL, U.S.

  • Keynote: Defensible Modeling of the Biosphere

    Drew Purves

    01:03:40

    To manage the planet on which we all depend, we need to predict the future outcome of various options. How would biofuel subsidies affect crop prices, and in turn deforestation? How would CO2 emissions affect climate change, and in turn fire? At present, we cannot make such predictions with any confidence. But, as I’ll show in this talk, a computational approach to environmental science can change that. I’ll explain how we built the first fully data-constrained model of the terrestrial carbon cycle, using Big Data, cloud computing, and machine learning. And I’ll demo similar models for global food production, Amazon deforestation, and bird biodiversity. The prototype tools on which these models have been built—for example, FetchClimate, Filzbach, and WorldWide Telescope—are freely available, and will hopefully allow other scientists to adopt a rigorous approach to modeling the complexities of the biosphere.


    Keynote: Biology: A Move to Dry Labs

    David Heckerman

    00:48:06

    Since its beginning, the wet lab has been the key driver in biological discovery. Recently, however, more and more science is getting done in dry labs, those where only computational analysis is done. The presentation will include examples, ranging from genomics to vaccine design.


    2012 Jim Gray Award / The Possibilities and Pitfalls of Internet-Based Chemical Data

    Antony John Williams and Tony Hey

    01:21:24

    2012 Jim Gray eScience Award Presentation

    At the Microsoft eScience Workshop 2012, Microsoft Research Connections Vice President Tony Hey introduces the Jim Gray eScience Award and announces this year’s winner, Antony John Williams, who delivers the following presentation.

    The Possibilities and Pitfalls of Internet-Based Chemical Data

    In less than a decade, the Internet has provided us access to enormous quantities of chemistry data. Chemists have embraced the web as a rich source of data and knowledge. However, all that glitters is not gold: while online searches can now provide access to information associated with many tens of millions of chemicals and allow us to traverse patents, publications, and public-domain databases, the promise of high-quality data on the web needs to be tempered with caution.

    In recent years, the crowdsourcing approach to developing curated content has been growing. Can such approaches bring to bear the collective wisdom of the crowd to validate and enhance the availability of trusted chemistry data online, or are algorithms likely to be more powerful for validating data? While it is now possible to search the web by using a query form natural to chemists—“structure searching” the web—scientists will increasingly have to accept joint responsibility for the quality of data online for the foreseeable future. Their participation is likely to come through engaging in open science, providing data under open licenses, and offering their skills to the community.
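
    As a rough illustration of the “structure searching” query form mentioned above, the sketch below runs a local substructure match with the open-source RDKit toolkit. The molecules and the SMARTS pattern are arbitrary examples, and real chemistry search engines operate over indexed, web-scale databases rather than a Python dictionary.

```python
# Hedged sketch: a local substructure search with RDKit, illustrating the
# kind of structure-based query chemists use; it does not reflect how any
# particular web-scale chemistry search engine is implemented.
from rdkit import Chem

# A tiny "database" of molecules given as SMILES strings.
database = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "phenol": "Oc1ccccc1",
}

# Query: any molecule containing a benzene ring (SMARTS pattern).
query = Chem.MolFromSmarts("c1ccccc1")

for name, smiles in database.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is not None and mol.HasSubstructMatch(query):
        print(name, "matches the benzene-ring query")
```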

    This presentation provides an overview of the present state of chemistry data online, the challenges and risks of managing and accessing data in the wild, and how an Internet for chemistry continues to expand in scope and possibilities.

  • Panel: Open Data for Open Science—Data Interoperability

    Ilya Zaslavsky, Karen Stocks, Philip Murphy, Robert Gurney, and Yan Xu

    02:04:16

    The goal of cross-domain interoperability is to enable reuse of data and models outside the original context in which these data and models are collected and used and to facilitate analysis and modeling of physical processes that are not confined to disciplinary or jurisdictional boundaries. A new research initiative of the U.S. National Science Foundation, called EarthCube, is developing a roadmap to address challenges of interoperability in the earth sciences and create a blueprint for community-guided cyberinfrastructure accessible to a broad range of geoscience researchers and students.

    The panel discusses this and related initiatives and projects, focusing on challenges of data discovery, interpretation, access, and integration across domain information systems, assessment of their readiness for cross-domain integration, and technologies enabling interoperability in the geosciences.


    Panel: Enabling Multi-Scale Science

    Claudia Bauzer Medeiros, James Hunt, and Roberto Cesar

    00:51:50

    eScience research increasingly involves the need to facilitate multi-scale problem solving that spans wide ranges of space and time scales. It requires collaboration among researchers and practitioners from multiple disciplines, each with their own orientations toward problem identification, solution formulation, and implementation.

    The panel discusses some of the challenges of working in multi-scale scenarios. Panelists present these challenges from two perspectives: applications and computing approaches.

    • The first perspective focuses on issues such as scientific profiles involved, scales considered, data collected and produced, models, and visualization needs.
    • The second viewpoint considers, among other issues, the characteristics of data and storage structures needed to accommodate the wide variety of data scales and formats; language/workflow constructs that may facilitate the specification, execution, and interaction of models; and interface/interaction primitives.

    The Internet of Databases—Generalizing the Archaeo Informatics Approach

    Chris van der Meijden

    00:33:21

    One thing we have learned from our Archaeo-Data-Network is that the meta-information of databases needs to be split into two levels. The first level contains a centralized unique ID and a few standard fields. The second level of meta-information is defined by the archaeo scientist. This can be implemented for any kind of archaeo database, so the network’s extensibility is virtually unlimited. The advantage of this dual-metadata approach is its flexible connectivity, which makes comprehensive data transparently available for general searching and mining. With this approach, huge, rigid archives can be connected to small, flexible databases for scientific analysis in any scientific domain. Combined with simple authorization management for unpublished data, we see in our system the potential to be a general blueprint for an eScience infrastructure, which we call the Internet of Databases.
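
    As a loose sketch of the dual-level approach just described, the fragment below models a record that carries a centralized unique ID plus a few fixed fields on the first level, and scientist-defined metadata on the second. All field names are illustrative assumptions, not the Archaeo-Data-Network’s actual schema.

```python
# Hypothetical sketch of a dual-level metadata record; field names are
# illustrative assumptions, not the Archaeo-Data-Network schema.
from dataclasses import dataclass, field
from typing import Any, Dict
import uuid


@dataclass
class ArchaeoRecord:
    # Level 1: centralized unique ID plus a few standard fields,
    # identical for every connected database.
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))
    title: str = ""
    source_db: str = ""
    published: bool = False

    # Level 2: free-form metadata defined by the individual scientist,
    # so any kind of archaeo database can join the network.
    domain_meta: Dict[str, Any] = field(default_factory=dict)


record = ArchaeoRecord(
    title="Bronze Age settlement survey",
    source_db="excavation-db-munich",
    domain_meta={"period": "Bronze Age", "find_type": "ceramics"},
)
print(record.uid, record.domain_meta["period"])
```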


    Combining Semantic Tagging and Support Vector Machines to Streamline the Analysis of Animal Accelerometry Data

    Nigel Ward

    00:28:54

    Increasingly, animal biologists are taking advantage of low-cost micro-sensor technology by deploying accelerometers to monitor the behaviour and movement of a broad range of species. The result is an avalanche of complex tri-axial accelerometer data streams that capture observations and measurements of a wide range of animal body motion and posture parameters. We present a system that supports storing, visualizing, annotating, and automatically recognizing activities in accelerometer data streams by integrating semantic annotation and visualization services with Support Vector Machine techniques.
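
    As a minimal sketch of the classification step described above, the code below trains a Support Vector Machine on simple per-window statistics of synthetic tri-axial accelerometer data. The features, labels, and window size are stand-ins chosen for illustration and are not the presented system’s actual pipeline.

```python
# Minimal sketch: activity recognition from tri-axial accelerometer windows
# with an SVM; all data here are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(window):
    """Summarize one (n_samples, 3) accelerometer window as mean and std per axis."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic windows for two behaviours, e.g. "resting" vs "active".
windows = rng.normal(size=(200, 50, 3)) + np.repeat([0.0, 2.0], 100)[:, None, None]
labels = np.repeat(["resting", "active"], 100)

X = np.array([window_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```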


    Panel: Handling Big Data for Environmental Informatics / Real-Time Environmental Observation, Modeling, and Decision Support

    Barbara Minsker, Chaowei Yang, David Maidment, Jeff Dozier, Jong Lee, and Ting Ting Zhao

    01:26:36

    Earth observations and other environmental data collection methods help us accumulate terabytes to petabytes of data. This poses a grand challenge for environmental informatics. This session captures the latest developments in Big Data collection, processing, and visualization in several respects.

    With increasing near-real-time availability of embedded and mobile sensors, radar, satellite, and social media, the opportunities to improve understanding, modeling, and management of environmental systems, as well as the built and human systems that interact with them, are immense.


    Active Publications

    Ian Foster and Tanu Malik

    01:11:05

    The eScience domain brings together scientists, experts, and engineers to build comprehensive, large-scale data and computational cyberinfrastructures. The objective is to advance knowledge discovery in the sciences and establish effective channels of communication between the various disciplines. Software, data, workflows, technical reports, and publications are often the modes of this communication. However, these modes of communication are currently disconnected from each other.

    E-publishing is changing the nature of scientific communication through digital publication repositories and libraries. But the larger and more pertinent issue is connecting these as-yet static digital publication repositories to large amounts of computation, data, derived data, and extracted information.


    Machine Assisted Thought

    Michael Kurtz

    00:56:19

    I suggest that there are two distinct branches of eScience, both fundamentally enabled by the explosion of capabilities inherent in the information age. The first concerns the use of numbers, measurements from arrays of sensors, outputs from simulations, and so forth. The techniques of eScience increase our ability to perceive massive amounts of data by factors of billions or trillions. I call this Machine Assisted Perception.

    The second branch of eScience concerns the use of words, the verbal abstractions used by humans to communicate ideas. The new technologies of digital libraries and search engines have already substantially changed the scholarly thought process, and growth in the capabilities of these technologies continues to be rapid. I call this machine/human collaboration Machine Assisted Thought.


    Panel: Cloud Computing—What Do Researchers Want?

    Dennis Gannon, Fabrizio Gagliardi, Marty Humphrey, and Paul Watson

    01:13:40

    Cloud computing for science is seeing take-up in many disciplines, but many researchers are skeptical. In this panel session, we discuss:

    • How researchers are using the cloud today
    • What they want/need for the future
    • Why they might not want to use the cloud

    DemoFest 2012

    Carly Strasser, Dong Xie, Eamonn Maguire, Ian Foster, Jim Pinkelman, Michael Witt, Rob Fatland, Steve Tuecke, Tanu Malik, and Yan Xu

    00:12:45

    At the 2012 eScience Workshop, DemoFest presenters briefly introduce their topics.

    • Layerscape: Tools for Collaborative Analysis of Complex Data

    Presenter: Rob Fatland, Microsoft Research

    • Globus Online: Research Data Management as a Service

    Presenter: Ian Foster, University of Chicago and Argonne National Laboratory

    • The Open-Source ISA Metadata Tracking Framework: from Data Curation and Management at the Source, to the Linked Data Universe

    Presenter: Eamonn Maguire, University of Oxford

    • SOLE: Connecting Publications to Large Online Data Repositories

    Presenter: Tanu Malik, University of Chicago and Argonne National Laboratory

    • DataUp: A Tool for Documenting and Sharing Scientific Tabular Data

    Presenter: Carly Strasser, California Digital Library

    • Databib: An Online Catalog of Research Data Repositories

    Presenter: Michael Witt, Purdue University

    • 12,000 Human Genomes from Raw Sequence to Result, on Windows and Windows Azure

    Presenter: Dong Xie, Oxford University

    • OData and Environmental Informatics

    Presenter: Jim Pinkelman (for Yan Xu), Microsoft Research

  • The Utility of a Human/Computer Learning Network for Improving Biodiversity Conservation and Research

    Carl Lagoze

    00:29:54

    We describe our work to improve the quality and utility of citizen science contributions to eBird, arguably the largest biodiversity data collection project in existence. Citizen science (the use of “human sensors”) is especially important in a number of observation-based fields, such as astronomy, ecology, and ornithology, where the scale and geographic distribution of the phenomena to be observed far exceed the capabilities of the established research community. Our work is based on the notion of a Human/Computer Learning Network, in which the benefits of active learning (in both the machine learning sense and the human learning sense) are cyclically fed back among human and computational participants.
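
    The following sketch illustrates only the machine-learning half of such a feedback loop, under the assumption of a simple uncertainty-sampling strategy: a model trained on reviewed observations flags its least certain records for human review. The data are synthetic, and nothing here reflects eBird’s actual implementation.

```python
# Hedged sketch of an active-learning loop with uncertainty sampling;
# synthetic data only, not the eBird system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                    # e.g. effort, date, habitat covariates
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "species observed" label

labeled = list(range(20))                        # records already reviewed by experts
pool = [i for i in range(500) if i not in labeled]

for _ in range(5):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])[:, 1]
    most_uncertain = np.argsort(np.abs(proba - 0.5))[:10]
    ask = [pool[i] for i in most_uncertain]      # route these records to human reviewers
    labeled += ask                               # reviewers' labels come from y (simulated)
    pool = [i for i in pool if i not in ask]

print("records reviewed after five rounds:", len(labeled))
```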


    Educating Scientists About the Data Life Cycle

    William Michener

    00:27:12

    The research life cycle is well known and consists of an initial idea or question that, if sound, leads to submission and funding of a proposal, implementation of a study and, ideally, to one or many publications that advance the state of knowledge. What is less well understood is how the research life cycle is related to the data life cycle.

    In this presentation, approaches for educating scientists in the eight phases of the data life cycle (planning, data acquisition and organization, quality assurance/quality control, data description, data preservation, data exploration and discovery, data integration, and analysis and visualization) are discussed. Specifically, the design and approaches used for developing learning modules, instructional materials and resources, and an innovative three-week experiential course that enables participants to manage their research data more efficiently and effectively and to compete for research funding are presented.


    Teaching Scientific Data Management in Data Science Education and Workforce Development Programs for Science Communities

    Robert R. Downs

    00:24:35

    The recent popularity of data science has led to increased recognition of the need for education and workforce development in data science. However, definitions of the term data science vary and often focus on techniques for data analytics and visualization, omitting scientific data management and related topics associated with data policy, stewardship, and preservation.

    Scientific data management encompasses a variety of concepts and methods to foster continuing access and long-term stewardship of data for current and future users. Considering the needs for scientific data management knowledge and capabilities to facilitate improved and persistent accessibility and use of scientific data throughout the data lifecycle, instruction on topics in scientific data management is recommended for data science education and workforce development programs for science communities.


    Tools and Techniques for Outreach and Popular Engagement in eScience

    Rafael Santos

    00:29:47

    Public participation in scientific research takes many forms: participation of volunteers in citizen science projects, monitoring of natural resources and phenomena, volunteering of computational resources for distributed data analysis tasks, and so forth.

    In this presentation, we comment on some of the computational tools, techniques, and case studies of applications that enable active public participation in scientific research. Of particular interest are applications that showcase the benefits of letting the public use the professional resources (in other words, the same data and computational resources that the scientists have access to) and return something back to the research behind it, such as applications that go beyond simple publication of scientific data or applications that use novel methods for user engagement. Examples of applications for scientific outreach that use specialized computational tools or techniques, and/or educational approaches, are also discussed.


    Priorities for Data Curation Education: Data Center Partnerships and Long-Tail Science

    Carole Palmer

    00:27:27

    For science to fully exploit digital data in new and innovative ways, research data will need to be collected, curated, and made accessible and usable across domains. The need for workforce development in data curation systems and services has been recognized for many years, and education programs are beginning to mature. But to continue to build strong programs in this emerging field, current data curation practice and research need to underpin the goals for professional education.

    Having established a specialization in data curation in 2006, we have assessed our program’s progress to date and identified areas in need of further development to respond to trends in e-science. Analysis of student placements shows interesting trends in the institutions hiring data curation specialists and the nature of the positions, and evaluation of internships provided in national data centers has suggested important areas for further investment. In addition, our recent research on disciplinary differences in data sharing and the value of long-tail data in the sciences has direct implications for further development of data curation curriculum.


    Big Data Processing on the Cheap

    Joe Hummel

    00:55:59

    Getting started with big data? Generating more and more data without the hardware resources to process it? This session will help newcomers to ‘big data’ get started processing and visualizing their data, without the need for expensive computing resources. While these techniques may not produce lightning-fast results, you can at least get started with your analysis.
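
    One example of the kind of low-cost technique such a session might cover (an assumption on our part, not taken from the talk itself) is streaming a large file in chunks so that the analysis fits in ordinary laptop memory:

```python
# Minimal sketch of "big data processing on the cheap": stream a CSV in
# chunks instead of loading it all at once. File and column names are
# illustrative assumptions.
import numpy as np
import pandas as pd

# Create a synthetic data file so the example runs end to end.
pd.DataFrame({"measurement": np.random.default_rng(0).normal(size=1_000_000)}) \
    .to_csv("observations.csv", index=False)

# Process the file in 100k-row chunks, accumulating only small summaries.
total, count = 0.0, 0
for chunk in pd.read_csv("observations.csv", chunksize=100_000):
    values = chunk["measurement"].dropna()
    total += values.sum()
    count += len(values)

print("mean measurement:", total / count)
```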


    Educating a New Breed of Data Scientists for Scientific Data Management

    Jian Qin

    00:27:21

    Data scientists play active roles in the design and implementation work of four related areas: data architecture, data acquisition, data analysis, and data archiving. While any data- and computing-related academic unit could offer a data science program or curriculum, each has its own flavor: statistics weighs heavily toward data analytics, and computer science toward computational algorithms. The information schools are taking a more holistic approach to educating data scientists. This presentation reports on data science curriculum development and implementation at the Syracuse iSchool, which has been shaped by the quickly changing, data-intensive environment not only of science but also of business and research at large. Research projects that we conducted on scientific data management, with participation from the eScience student fellows, demonstrate the need for and significance of educating a new breed of data scientists who have the knowledge and skills to take on work in the four areas mentioned above.


    Publishing and eScience Panel

    James Frew, Jeff Dozier, Mark Abbott, and Shuichi Iwata

    01:28:22

    Scientific Publishing in a Connected, Mobile World

    Speaker: Mark Abbott

    New tools for content development and new distribution channels create opportunities for the scientific community, opening new venues for collaboration, review, and self-publication. However, publishing is at the heart of the culture of science, and several centuries of experience with publishing in journals will not simply vanish. Issues of peer review, reproducibility, integrity, and scientific context will need to be addressed before these new tools take hold. Open access is but one part of this conversation.

    How to Collaborate with the Crowd: a Method for “Publishing” Ongoing Work

    Speaker: Jeff Dozier

    The typical model for interdisciplinary research starts with a small-group partnership, often with colleagues who have known each other for a while. They learn to articulate problems across disciplinary boundaries and discover shared interests. They successfully seek funding, and work together for several years. This model works, but can be cumbersome. An alternative model is to express a sequence of processes and data that integrate to create a suite of data products, and to identify insertion points where expertise from another perspective might be able to contribute to a better solution.

    When Provenance Gets Real: Implications of Ubiquitous Provenance for Scientific Collaboration and Publishing

    Speaker: James Frew

    We expect (or hope?) that the impending standardization of data models, ontologies, and services for information provenance will make scientific collaboration easier and scientific publishing more transparent. We propose a panel of active producers and users of provenance who will address scenarios such as:

    • “I’m a scientist, and this is what I would really like to tell someone with provenance.”
    • “I’m a scientist, and this is what I wish provenance would tell me when I use your data, join your project, or …”
    • “I build systems that capture and/or manage provenance, and this is what I’ve seen scientists actually do when they create and/or use provenance.”

    Data Journal Challenge for the Fourth Paradigm—Trust through Data on Environmental Studies and Projects

    Speaker: Shuichi Iwata

    At the Graduate School of Project Design, landscapes of recent big data issues that bridge environmental studies and social expectations are reviewed in order to design an e-Journal with data files and models. Data parts are keys that give semantics to original scientific papers, and they also serve as keys for computational models. Structured data with explicit descriptions of their metadata can be managed, and their traceability can be realized systematically, step by step. However, almost all available data are unstructured, fragmented, and contain ambiguities and uncertainties. Balances between data quality and freshness, cost, and coverage are discussed so as to draw a road map for a data journal, referring to two preliminary case studies on materials data and on data from nuclear reactor accidents and problems.


    What Is a Data Scientist?

    Kenji Takeda and Liz Lyon

    00:23:38

    The term data scientist is becoming prevalent in science, engineering, business, and industry. We explore how the term is used in different contexts, segments, and sectors; we examine the different variants, flavors, and interpretations and try to answer the following questions:

    • What does a data scientist really do?
    • What skills does a data scientist need? How do they acquire them?
    • What tools, technologies, and platforms are used by data scientists?
    • How can we build data scientist capacity and capability for the future?

    Informatics, Information Science, Computer Science, and Data Science Curricula

    Geoffrey Fox

    00:27:57

    We describe a possible data science curriculum based on discussions at Indiana University and on experience with our Informatics, Computer Science, and Library and Information Science programs. This leads to an interesting breadth of courses and student interests, which could address the many job opportunities. We suggest a collaboration to build a MOOC (online) offering with one initial target: minority-serving institutions.


    Data Science Curricula at the University of Washington eScience Institute

    Bill Howe

    00:35:14

    The University of Washington eScience Institute is engaged in a number of educational efforts in data science, including certificate programs for professionals, workshops for students in domain science, a new data-oriented introductory programming course, and a data science MOOC to be offered through Coursera in the spring. We consider the tools, techniques, research topics, and skills to be well-aligned with the data-driven discovery emphasis of eScience itself—the only difference is the applications.

    We see several benefits in aligning these two areas. For example, students in science majors who are not pursuing research careers become more marketable. In the other direction, working professionals see opportunities to apply their skills to solve science problems—we have recruited volunteers from industry in this way. In this talk, I’ll discuss these activities, review our curriculum, and describe our next steps.


    Novel Approaches to Data Visualization

    Darren Thompson, Dawn Wright, and George Djorgovski

    01:19:20

    Data Visualization in Virtual Spaces and High Dimensions

    Speaker: George Djorgovski

    Visualization is a bridge between the quantitative content of data and human intuition and understanding. Effective visualization is a critical bottleneck as the complexity and dimensionality of data increase. I will describe some experiments in collaborative, multi-dimensional data visualization in immersive virtual reality.

    CT and Imaging Tools for Windows HPC Clusters and Azure Cloud

    Speaker: Darren Thompson

    Computed Tomography (CT) is a non-destructive imaging technique widely used across many scientific, industrial, and medical fields. It is both computationally and data intensive. Our group within CSIRO has been actively developing X-ray tomography and image processing software and systems for GPU-enabled Windows HPC clusters.

    A key goal of our systems is to provide our “end users”—researchers—with easy access to the tools, computational resources, and data via familiar interfaces and client applications without the need for specialized HPC expertise. We have recently explored the adaptation of our CT-reconstruction code to the Windows Azure cloud platform, for which we have constructed a working “proof-of-concept” system. However, at this stage, several challenges remain to be met in order to make it a truly viable alternative to our HPC cluster solution.
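
    For readers unfamiliar with the technique, the sketch below reconstructs a standard test image by filtered back-projection using scikit-image. It illustrates generic CT reconstruction only and is unrelated to the CSIRO group’s GPU, Windows HPC, or Azure implementations.

```python
# Generic sketch of CT reconstruction by filtered back-projection with
# scikit-image; a standard phantom stands in for real scanner data.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A standard test image stands in for a real X-ray projection data set.
image = rescale(shepp_logan_phantom(), 0.5)

# Forward project to simulate the scanner's sinogram, then reconstruct.
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=angles)
reconstruction = iradon(sinogram, theta=angles)

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print("reconstruction RMS error:", round(float(error), 4))
```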

    Work in Progress Toward Enhancing Multidimensional Visualization with Analytical Workflows

    Speaker: Dawn Wright

    Big Data, particularly from terrestrial sensor networks and ocean observatories, exceeds the processing capacity and speed of conventional database systems and architectures, and requires visualization in three and four dimensions in order to understand the Earth processes at play. Successfully addressing the scientific challenges of Big Data requires integrative and innovative approaches to developing, managing, and visualizing extensive and diverse data sets, but is also critically dependent on effective analytical workflows. This talk will present an emerging agenda and work in progress toward this end at the Environmental Systems Research Institute.


    Panel: Scientific Data: The Current Landscape, Challenges, and Solutions

    Carly Strasser, Chris Mentzel, Dave Vieglais, Jeff Dozier, Stephanie Wright, and William Michener

    01:30:17

    Funders, researchers, and public stakeholders increasingly see the need to better communicate and curate ever expanding bodies of research data. This panel will bring together many of the stakeholders in the scientific data community, including researchers, librarians, and data repositories.

    Before the panel commences, we will provide a brief introduction to scientific data to facilitate discussion. We will describe the current landscape of scientific data and its management, including publication, citation, archiving, and sharing of data. We will also describe existing tools for data management. The panel discussion will focus on identifying gaps and unmet needs in order to help chart a path for future policy, service, and infrastructure development.