July 18–20, 2011

Faculty Summit 2011

Location: Redmond, WA, USA

DemoFest Booths

  • Presenters: Manuel Fahndrich and Francesco Logozzo, Microsoft Research

    CodeContracts is a language-agnostic solution to the problem of expressing contracts. CodeContracts includes an API for authoring contracts that has been part of .NET since version 4.0, a runtime checker to improve testing, and a static checker to validate contracts at compile time. We will show the integration with Visual Studio, where contracts pop up while the code is being authored; the runtime checker, where contracts on example subclasses are automatically inherited; and the static checker, where bugs in the program are spotted at design time.
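
    As a rough illustration of the authoring API, the sketch below shows how a method can state pre- and postconditions with System.Diagnostics.Contracts (the Account class is a hypothetical example, not one of the demo's samples):

        using System.Diagnostics.Contracts;

        public class Account
        {
            public int Balance { get; private set; }

            public void Withdraw(int amount)
            {
                // Preconditions: enforced at runtime by the contract rewriter and checked at compile time by the static checker.
                Contract.Requires(amount > 0);
                Contract.Requires(amount <= Balance);
                // Postcondition: the balance decreases by exactly the withdrawn amount.
                Contract.Ensures(Balance == Contract.OldValue(Balance) - amount);

                Balance -= amount;
            }
        }

    With the static checker enabled, a call site such as account.Withdraw(-5) is flagged at design time, before the code ever runs.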

  • Presenters: Rob DeLine, Microsoft Research; Jens Jacobsen, Microsoft Visual Studio Ultimate

    Debugger Canvas is a new way for developers to debug C# and Visual Basic code. Created as a collaboration among Brown University, Microsoft Research, and Visual Studio, Debugger Canvas is a pan-and-zoom display containing the parts of the code the user has stepped through (by using the debugger) or visited (through navigation commands, such as “go to definition”). Debugger Canvas presents the code the user explores as a call-graph diagram in which each node contains the method’s body in a full-featured editor. We’ll show how Debugger Canvas is useful both for understanding programs and for debugging complex pieces of code.

  • Presenter: Ethan Jackson, Microsoft Research

    FORMULA—Formal Modeling Using Logic Programming and Analysis—is a modern formal specification language targeting model-based development (MBD). It is based on algebraic data types and strongly typed constraint logic programming, which support concise specifications of abstractions and model transformations. Around this core is a set of operators for composing specifications in the style of MBD. A major advantage of FORMULA is its model-finding and design-space exploration facility. FORMULA can be used to construct system models satisfying complex domain constraints. The user inputs a partially specified model, and FORMULA searches the space of completed models until it finds a globally satisfactory design. This process can be repeated to find many globally consistent designs. Variations on this procedure can be used to prove properties of model transformations and to perform bounded symbolic model checking.

  • Presenter: Behrooz Chitsaz, Microsoft Research

    Science increasingly communicates through multimedia, yet multimedia sources have not historically lent themselves to robust search and retrieval with traditional search-engine technology. The Microsoft Research Audio Video Indexing System (MAVIS) is a set of software components that use speech-recognition technology to enable deep search into audio and video for the actual spoken words, whether they are from meetings, presentations, online lectures, or Internet video. The speech-recognition component of MAVIS is integrated with Windows Azure for high scalability, and it uses special techniques to improve the search experience despite speech-recognition inaccuracies. On the front end, MAVIS integrates with Microsoft SQL Server full-text indexing so that, operationally, searching speech content works much like searching text.
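
    Operationally, a search over the recognized speech can look much like an ordinary full-text query. The sketch below is purely illustrative: it assumes a hypothetical Transcripts table with a full-text index on its SpokenText column, not MAVIS's actual schema or API.

        using System;
        using System.Data.SqlClient;

        class SpokenWordSearch
        {
            static void Main()
            {
                // Full-text CONTAINS query against a hypothetical transcript table.
                const string sql =
                    "SELECT MediaId, StartTimeSeconds " +
                    "FROM Transcripts " +
                    "WHERE CONTAINS(SpokenText, @phrase)";

                using (var connection = new SqlConnection("Server=.;Database=MediaIndex;Integrated Security=true"))
                using (var command = new SqlCommand(sql, connection))
                {
                    command.Parameters.AddWithValue("@phrase", "\"machine learning\"");
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Each hit points back into the audio or video at the spoken offset.
                            Console.WriteLine("{0} at {1}s", reader["MediaId"], reader["StartTimeSeconds"]);
                        }
                    }
                }
            }
        }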

  • Presenter: Rick Benge, Microsoft Research

    Microsoft Biology Foundation 2.0 is a bioinformatics tool kit built on Microsoft .NET. One of the tools built on this technology is Genozoom, which addresses an issue facing genome browsers: the inability to navigate data smoothly and quickly from high to low resolutions. Genozoom handles this by using Silverlight and Deep Zoom technologies to make it natural for users to navigate and explore the multidimensional information across a genome. In addition, the genome browser will enable the input of custom data and user annotations.

  • Presenters: Christophe Poulain, Microsoft Research; Joe Pamer, Microsoft Visual Studio

    Try F# enables the Microsoft .NET language F# to be used in an interactive, browser-based environment. Try F# makes F# accessible to users on Windows, Macs, and (soon) Linux, with no installation required. Try F# also includes an online training tool to introduce users to the language. The site serves as a portal for information about the language and its growing community. Try F# was developed by Microsoft Research Connections’ Engineering and Computer Science teams, in collaboration with Microsoft Research Cambridge and the Visual Studio F# development team.

  • Presenter: Joseph M. Joy, Microsoft Research

    Recent advances in visualization technologies have spawned a potent brew of visually rich applications that enable exploration of potentially large, complex data sets. Examples include Gigapan.org, Photosynth.net, PivotViewer, and WorldWide Telescope. At the same time, the narrative remains a dominant form for generating emotionally captivating content, such as movies and novels, and for imparting complex knowledge, such as through textbooks and journals. The Rich Interactive Narratives project aims to combine the compelling and time-tested narrative elements of multimedia storytelling with the information-rich and exploratory nature of the latest generation of information-visualization and exploration technologies. We approach the problem not as a one-off application, Internet site, or proprietary framework, but rather as a data model that transcends any particular platform or technology. This has the potential to enable entirely new ways of creating, transforming, augmenting, and presenting rich interactive content.

  • Presenters: Chris Wendt and Vikram Dendi, Microsoft Research

    Mechanisms for community involvement can improve the quality of automatic translation to a level that satisfies even the most demanding users, and make this a fun, compelling exercise. Cross-language document retrieval and automatic translation, demonstrated here with the example of worldwidescience.org, represent an excellent implementation of multilingual access for the research community.

  • Presenters: Alex Wade, Lee Dirks, Adnan Mahmud, Yunxiao Ma, and Xin Zou, Microsoft Research

    Microsoft Academic Search is a free service developed by Microsoft Research to help users quickly find information about academic researchers and their activities. It serves as a test bed for our object-level vertical-search research in areas such as machine learning, entity extraction, and information retrieval. With Academic Search, it’s easy to find the top researchers, papers, conferences, journals, and organizations in a growing number of research domains. You also can explore a variety of relationships between authors and their papers.

  • Presenters: Steven Johnston, University of Southampton Faculty of Engineering and the Environment; Simon Cox, Microsoft Institute of High Performance Computing at the University of Southampton

    We are using WorldWide Telescope as a data visualizer for both the Clouds in Space and ASTRA projects. We will demonstrate the visualization of satellite trajectories, as well as show high-altitude flight data collected from the ASTRA 7 flight 18 kilometers into the stratosphere. The Clouds in Space project provides a cloud-based plug-in framework for satellite-trajectory propagation and conjunction analysis and is aimed at improving Space Situational Awareness by predicting potential satellite collisions. The ASTRA—Atmospheric Science Through Robotic Aircraft—project demonstrates the use of Windows Azure as a computing resource to complement low-powered, high-altitude scientific instrumentation.

  • Presenter: Judy Qiu, School of Informatics and Computing at Indiana University

    Many data-analytics and scientific-computation algorithms rely on iterative computations, in which each iterative step can be specified as a MapReduce computation. Twister4Azure extends MRRoles4Azure to support such iterative MapReduce executions, drawing lessons from the Java Twister iterative MapReduce framework introduced in Jaliya Ekanayake’s thesis. Iterative extensions include a merge step, in-memory caching of static data between iterations, cache-aware hybrid scheduling using Azure Queues, and a bulletin board. Twister4Azure and MRRoles4Azure offer the familiar MapReduce programming model with fault-tolerance features similar to traditional MapReduce and a decentralized control model without a master node, implying no single point of failure. We test them on data-mining algorithms applied to metagenomics that require parallel linear algebra in their compute-intensive kernels.
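
    Conceptually, the iterative pattern works as in the single-machine sketch below. This is illustrative only; the names and delegates are ours, not the Twister4Azure API, and the real framework distributes the map and reduce tasks across Azure worker roles while keeping static data cached in worker memory.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Generic driver for the iterative MapReduce pattern: map, reduce, then a
        // merge step that produces the dynamic data broadcast to the next iteration.
        static class IterativeMapReduce
        {
            public static TDynamic Run<TStatic, TDynamic>(
                IEnumerable<TStatic> staticData,                       // loop-invariant inputs, cached across iterations
                TDynamic initial,                                      // e.g. initial cluster centers for k-means
                Func<TStatic, TDynamic, KeyValuePair<int, double[]>> map,
                Func<int, IEnumerable<double[]>, double[]> reduce,
                Func<IDictionary<int, double[]>, TDynamic> merge,      // combines all reduce outputs
                Func<TDynamic, TDynamic, bool> converged)
            {
                var cached = staticData.ToList();                      // in-memory cache of static data
                TDynamic current = initial;
                while (true)
                {
                    var reduced = cached
                        .Select(item => map(item, current))
                        .GroupBy(kv => kv.Key, kv => kv.Value)
                        .ToDictionary(g => g.Key, g => reduce(g.Key, g));
                    TDynamic next = merge(reduced);
                    if (converged(current, next)) return next;
                    current = next;                                    // "broadcast" to the next iteration
                }
            }
        }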

  • Presenter: Wenming Ye, Microsoft Developer Platform Evangelism

    Computational fluid-dynamics (CFD) code is used to solve and analyze problems that involve fluid flows. It is a widely used supercomputing application for designing items from rockets to automobiles. We will show an open-source CFD code running transparently on client, cluster, and the Microsoft cloud. We will share our lessons learned for cloud optimization in the areas of model-view-controller architecture, data considerations, and UI design.

  • Presenters: Jaliya Ekanayake, Wei Lu, and Roger Barga, Microsoft Research

    Excel is an established data-collection and data-analysis tool in business, technical computing, and academic research. Excel offers an attractive user interface, easy-to-use data entry, and substantial interactivity for what-if analysis, but data in Excel is not readily discoverable and, hence, does not promote data sharing. Further, Excel does not offer scalable computation for large-scale analytics. Researchers increasingly face a deluge of data, and when working in Excel, they cannot easily invoke analytics to explore data, find related data sets, or invoke external models. We present Excel DataScope, which seamlessly integrates cloud storage and scalable analytics into Excel through a research ribbon. Any analyst can use Excel DataScope to discover and import data from the cloud, invoke cloud-scale data analytics to extract information from large data sets, invoke models, and then store data in the cloud, all through a spreadsheet with which they are already familiar.

  • Presenter: Arkady Retik, Microsoft Education Group

    Visit our booth to explore the new Faculty Connection web-based portal, which provides access to thousands of curriculum resources for teaching and research. Try using our unique Visual Search with Deep Zoom to find and explore content interactively, from slides to software tools. Pick up our latest Curriculum Resource Kits on Cloud Computing and Operating Systems (including Windows source code), a security threat-modeling game, and more.

  • Presenters: Nikolai Tillmann and Michal Moskal, Microsoft Research

    In 2011, more touchscreen-based mobile devices, such as smartphones and tablets, will be sold than desktops, laptops, and netbooks combined. In many cases, incredibly powerful, easy-to-use smartphones are going to be the first computing devices—and, in less developed countries, possibly the only ones—owned by virtually everybody and carried by their owners at all times. Microsoft Research provides a novel application-creation environment that enables anyone to script a smartphone anywhere, without needing a separate PC. This environment enables the development of mobile-device applications that can access your data, your media, and your sensors and help you use cloud services, including storage, computing, and social networks. This typed, structured programming language is built around the idea of using only a touchscreen as the input device to author code. In our vision, the state of the program is distributed automatically between mobile clients and the cloud, with automatic synchronization of data and execution between clients and cloud, liberating the programmer from worrying about—or even having to be aware of—the details.

  • Presenters: Cati Boulanger, Tim Large, Vivek Pradeep, Steven Bathiche, Moshe Lutz, Matheen Siddiqui, and Eli White, Microsoft Applied Sciences Group

    • Flat-panel 3-D display: The display tracks the user and produces an image with both 3-D perspective and 3-D parallax without the user having to wear glasses.
    • Multiview display: Enables different people to watch their own content on the display wherever they are in the room.
    • Looking In: A gesture-controlled microscope that enables the user to look around tiny objects naturally.
    • Next-generation interactive display: Uses a semi-transparent OLED and captures images through the display to enable imaging and gesture recognition.
    • Mayhem: A simple-to-use, event-to-action programming model.
    • Kinect SLAM: The Kinect device is used to build real-time, dense, 3-D models of room-sized environments while undergoing six-degrees-of-freedom handheld motion.
  • Presenters: Hrvoje Benko and Andrew D. Wilson, Microsoft Research

    We will demonstrate the use of 3-D projection, combined with a Kinect depth camera, to capture and display 3-D objects. Any physical object brought into the demo can be digitized instantaneously and viewed in 3-D. For example, we will show a simple modeling application in which complex 3-D models can be constructed with a few wooden blocks by digitizing and adding one block at a time. This setup also can be used in telepresence scenarios, in which what is real on your collaborator’s table is virtual—3-D projected—on yours, and vice versa. We will show how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects. Our demo uses a 3-D projector with active shutter glasses.

  • Presenter: Jun Rekimoto, University of Tokyo

    Lifelogging is an emerging research field concerned with capturing the user’s experience to support human activities. Images, audio, location, and other contextual information can be recorded. But because such recorded lifelog data tend to be huge, a good method for indexing is essential. We focus on the user’s eye information. We are developing a small, wearable eye sensor to recognize a user’s gaze direction and eye movement. Combined with a miniature, eyeglass-mounted camera, it becomes possible to record a user’s view along with gaze information. Various image-processing techniques then can be applied to segment the objects or text at which a user looked. We also propose an ear-worn microprojector-camera system to enable wearable, hands-free interaction. We expect such devices can be used to support people in various contexts, including human memory augmentation and mobile assistance.

  • Presenters: Kerry Hammil and Scarlet Schwiderski-Grosche, Microsoft Research

    Microsoft .NET Gadgeteer is an open-source tool kit for building small electronic devices using the .NET Micro Framework and Visual C# Express, a free C# development environment. Gadgeteer combines the advantages of object-oriented programming and solderless assembly of electronics with a kit of peripherals and support for quick physical form-factor construction using computer-aided design. This powerful combination enables embedded and handheld devices to be iteratively designed, built, and programmed in hours rather than days or weeks. Microsoft .NET Gadgeteer has been developed by Microsoft Research in collaboration with the .NET Micro Framework product team. This session will provide an overview of the various hardware and software elements of the .NET Gadgeteer platform. Attendees will be introduced to the modular electronics system and learn how individual modules can be connected to build sophisticated devices. Attendees also will learn how .NET Gadgeteer devices can be programmed easily using C# and debugged interactively in Visual C# Express. Finally, the session will demonstrate how the tool kit supports the design of custom enclosures for .NET Gadgeteer projects, which can be built on demand by using 3-D-printing technologies.
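
    A rough sketch of the programming model is shown below. The module names and events are illustrative assumptions (real projects are generated from a Visual Studio template, with modules wired up in a graphical designer), so exact identifiers may differ.

        using Microsoft.SPOT;     // .NET Micro Framework diagnostics (Debug.Print)

        public partial class Program
        {
            // button and camera would be module fields declared in the designer-generated partial class.
            // ProgramStarted runs once, after the mainboard and attached modules initialize.
            void ProgramStarted()
            {
                button.ButtonPressed += (sender, state) =>
                {
                    camera.TakePicture();          // capture asynchronously
                };

                camera.PictureCaptured += (sender, picture) =>
                {
                    Debug.Print("Picture captured.");
                };
            }
        }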

  • Presenters: Dun-Yu Hsiao and Seth Cooper, University of Washington

    Foldit is an online game that enables players to compete and collaborate to fold proteins. Until now, players have been able to interact with the proteins only by using the traditional keyboard and mouse. We will showcase our current work integrating Kinect as an input device for Foldit. Our goal is to enable multiple hands to interact simultaneously with proteins, offering a more fun, intuitive game interface that helps players manipulate the protein shape directly in new ways and use gestures to launch optimizations. The demo enables players to use Kinect to play Foldit and test their protein-folding skills.

  • Presenters: Lee Dirks, Donald Brinkman, and Rane Johnson, Microsoft Research; Andy van Dam, Brown University; Jun He, Renmin University of China; Jinwook Seo, Seoul National University

    In addition to a strong focus on eScience, Microsoft Research is also actively investigating the broader concept of eResearch, with an emphasis on Digital Humanities and eHeritage. This booth will offer demonstrations of several compelling new tools: discussion around Project Big Time (an evolution of the ChronoZoom work by Walter Alvarez at the University of California, Berkeley), demos of the Garibaldi/LADS project under Andy van Dam at Brown University, as well as Prof. Jun He’s Project Storyteller (Detecting and Tracking Hot Topics to Enhance Search-Engine Performance) and Prof. Jinwook Seo’s Connecting the Past to the Future (Visualizing and Mapping Textural Land Books). The breadth of these projects shows Microsoft Research’s significant engagement beyond eScience, with the goal of positively affecting productivity and innovation across the entire academy.

  • Presenters: Mike Ortiz and Lu Li, Stanford University

    myScience is a cloud-enhanced, citizen-science application that changes the way observational research is done. Scientists can use myScience to crowdsource their research projects and harness the power of sensors on smartphones, all with the click of a button on our web portal and without having to write a single line of code. Users can download our mobile app to contribute to a variety of science projects and acquire points across the board. Submitted data is aggregated on the cloud and made available to scientists.

  • Presenters: Pratch Piyawongwisal and Sahil Handa, University of Illinois at Urbana-Champaign

    Mapster is a system that enables personalized, localized situational awareness by integrating heterogeneous sensor information. A Windows Phone 7 app gives users the ability to report emergencies to Twitter; these reports are then fetched by the cloud. The map interface on the phone uses the geo-referenced data from the cloud, along with other information such as local crime reports or rainfall data, and enables the user to see a spatiotemporal animation of the changing patterns in the data.

  • See the best student design solutions related to this year’s theme, Get Connected and Stay Connected. The solutions come from the Ontario College of Art and Design in Toronto; Tongji University in Shanghai; University Iuav of Venice, Italy; New York University’s Interactive Telecommunications Program; Universidad Iberoamericana of Mexico; and the University of Washington. Examples of the innovative student design solutions include:

    • In-NEED, a system for managing the community’s response to natural disasters through the use of mobile technologies—from the Ontario College of Art and Design.
    • Walk.It, an online platform that enables anyone to create and share neighborhood maps with the personality and charm of a hand-drawn map from a friend—from New York University.
    • Apart-Together, technology to enable parents who are living and working out of town because of economic necessity to maintain close contact with their children—from Tongji University.
    • Porta Vox, a community-reporting tool that helps track and reduce incidents of crime in urban areas—from Universidad Iberoamericana.
    • Voglia, a connected device designed as a jewelry pendant that allows close bodily communication between couples who are physically apart—from University Iuav of Venice.
    • Origin, which uses data from sensors in mobile devices to model personal context for a more natural user interaction—from the University of Washington.
  • Presenter: Stewart Tansley, Microsoft Research

    The Kinect for Windows SDK beta is a programming toolkit for application developers. It provides the academic and enthusiast communities with easy access to the capabilities offered by the Microsoft Kinect device connected to computers running the Windows 7 operating system.

    The Kinect for Windows SDK beta includes drivers, rich APIs for raw sensor streams and human motion tracking, installation documents, and resource materials. It provides Kinect capabilities to developers who build applications with C++, C#, or Visual Basic by using Microsoft Visual Studio 2010. We are going to showcase Kinect SDK sample applications.
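
    A minimal console sketch of skeletal tracking with the beta's managed API is shown below; the namespace and type names reflect our recollection of the beta SDK and should be checked against the SDK documentation.

        using System;
        using Microsoft.Research.Kinect.Nui;   // managed API shipped with the beta SDK

        class SkeletonDemo
        {
            static void Main()
            {
                var nui = new Runtime();
                // Ask the runtime for skeletal tracking on the attached Kinect sensor.
                nui.Initialize(RuntimeOptions.UseSkeletalTracking);

                nui.SkeletonFrameReady += (sender, e) =>
                {
                    foreach (SkeletonData skeleton in e.SkeletonFrame.Skeletons)
                    {
                        if (skeleton.TrackingState == SkeletonTrackingState.Tracked)
                        {
                            // Report the tracked head joint position in meters.
                            var head = skeleton.Joints[JointID.Head].Position;
                            Console.WriteLine("Head at ({0:F2}, {1:F2}, {2:F2})", head.X, head.Y, head.Z);
                        }
                    }
                };

                Console.ReadLine();     // keep the process alive while frames arrive
                nui.Uninitialize();
            }
        }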

  • Presenter: Genevieve L’Esperance, McGill University in Montreal

    WorldWide Telescope (WWT) is well-known for serving as the virtual telescope for space. It offers unique, 4-D visualizations—3-D plus a time series—and has newly developed APIs that enable visualization of multiple layers of location-based data. WWT also uses Excel to enable researchers to work dynamically with data, then share the data and visualization results. Exemplifying a natural-user-interface approach, navigation is provided through a gestural user interface enabled by Kinect interoperating with a PC application.

DemoFest Map

Microsoft Research DemoFest 2011