09:00 – 09:10 Opening remarks
09:10 – 10:10 Keynote I – Deep Learning in Mobile Systems, Experiences and Pitfalls
Prof. Heather Zheng, University of Chicago
Deep learning (neural networks) is being rapidly adopted by (mobile) researchers and companies to solve a wide range of computational problems. But is it a panacea for all the (traditionally hard) problems? In this talk, I will share experiences from my lab on applying today’s deep learning models to mobile systems design, and discuss vulnerabilities inherent in many existing deep learning models that make them easy to compromise, as well as potential defenses.
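As an illustration of the kind of vulnerability the abstract alludes to, the sketch below shows a minimal FGSM-style adversarial perturbation in PyTorch: a small, crafted change to the input can flip a model’s prediction. This is not from the talk; the model, label, and epsilon are illustrative assumptions.

```python
# Illustrative sketch (not from the talk): a minimal FGSM-style adversarial
# perturbation in PyTorch, showing one way deep models can be compromised
# by a small, crafted change to the input.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small adversarial perturbation (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```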
Bio: Dr. Heather Zheng is the Neubauer Professor of Computer Science at the University of Chicago. She received her PhD in Electrical and Computer Engineering from the University of Maryland, College Park in 1999. She joined the University of Chicago after spending 6 years in industry labs (Bell Labs, NJ and Microsoft Research Asia) and 12 years at the University of California, Santa Barbara. At UChicago, she co-directs the SAND Lab (Systems, Algorithms, Networking and Data). She is an IEEE Fellow, a World Technology Network Fellow, and a recipient of MIT Technology Review’s TR-35 Award (Young Innovators under 35), the Bell Labs President’s Gold Award, and a Google Faculty Award. Her work has been covered by media outlets such as Scientific American, the New York Times, the Boston Globe, the LA Times, and MIT Tech Review. She served as PC co-chair for MobiCom and DySPAN, and is the general co-chair for HotNets 2020. She is on the steering committee of MobiCom and chairs the SIGMOBILE Highlights committee.
10:10 – 10:40 Break
10:40 – 11:40 Session 1 – Cameras are becoming smarter
Networked Cameras Are the New Big Data Clusters
Junchen Jiang, Yuhao Zhou (University of Chicago), Ganesh Ananthanarayanan, Yuanchao Shu (Microsoft Research), Andrew A. Chien (University of Chicago)
Live Video Analytics with FPGA-based Smart Cameras
Shang Wang (University of Electronic Science and Technology of China, Microsoft Research), Chen Zhang, Yuanchao Shu, Yunxin Liu (Microsoft Research)
Space-Time Vehicle Tracking at the Edge of the Network
Zhuangdi Xu, Kishore Ramachandran (Georgia Tech), Sayan Sinha (Indian Institute of Technology Kharagpur)
11:40 – 13:00 Lunch
13:00 – 14:00 Keynote II – 360° and 4K Video Streaming for Mobile Devices
Prof. Lili Qiu, University of Texas at Austin
The popularity of 360° and 4K videos has grown rapidly due to the immersive user experience they provide. 360° videos are displayed as a panorama, and the view automatically adapts to the viewer’s head movement. Existing systems stream 360° videos in the same way as regular videos, transmitting all data of the panoramic view. This is wasteful, since a user only views a small portion of the 360° view. To save bandwidth, recent work proposes tile-based streaming, which divides the panoramic view into multiple smaller tiles and streams only the tiles within the user’s field of view (FoV), predicted from the recent head position. Interestingly, tile-based streaming has so far only been simulated or implemented on desktops. We find that it cannot run in real time even on the latest smartphones (e.g., Samsung S7, Samsung S8, and Huawei Mate 9) due to hardware and software limitations. Moreover, it results in significant video quality degradation due to head movement prediction error, which is hard to avoid. Motivated by these observations, we develop a novel tile-based layered approach to stream 360° content on smartphones that avoids bandwidth wastage while maintaining high video quality.
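The sketch below illustrates the tile-based streaming idea described above: split the panorama into a tile grid, extrapolate the viewer’s head orientation from recent samples, and request only the tiles overlapping the predicted FoV. The grid size, FoV extent, and linear predictor are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of FoV-driven tile selection (assumptions: equirectangular
# frame split into an 8x4 tile grid, 100x90 degree FoV, linear extrapolation
# of yaw/pitch from the last two head-position samples).
TILE_COLS, TILE_ROWS = 8, 4           # tiles across 360 deg yaw, 180 deg pitch
FOV_YAW, FOV_PITCH = 100.0, 90.0      # assumed field-of-view extent in degrees

def predict_view(history):
    """Linearly extrapolate the next (yaw, pitch) from the last two samples."""
    (y0, p0), (y1, p1) = history[-2], history[-1]
    return y1 + (y1 - y0), p1 + (p1 - p0)

def tiles_in_fov(yaw, pitch):
    """Return (col, row) indices of tiles overlapping the predicted FoV."""
    tile_w, tile_h = 360.0 / TILE_COLS, 180.0 / TILE_ROWS
    cols = range(int((yaw - FOV_YAW / 2) // tile_w),
                 int((yaw + FOV_YAW / 2) // tile_w) + 1)
    rows = range(int((pitch + 90 - FOV_PITCH / 2) // tile_h),
                 int((pitch + 90 + FOV_PITCH / 2) // tile_h) + 1)
    # Wrap columns around the 360-degree seam; clamp rows to the grid.
    return {(c % TILE_COLS, min(max(r, 0), TILE_ROWS - 1))
            for c in cols for r in rows}

# Example: the head has drifted from yaw 10 to 20 degrees, so only the tiles
# around the extrapolated yaw of 30 degrees are requested, not the panorama.
print(tiles_in_fov(*predict_view([(10, 0), (20, 0)])))
```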
Next, we explore the feasibility of supporting live 4K video streaming over wireless networks using commodity devices. Coding and streaming live 4K video incurs prohibitive costs on the network and end systems. We propose a novel system, which consists of (i) easy-to-compute layered video coding that seamlessly adapts to unpredictable wireless link fluctuations, (ii) an efficient GPU implementation of video coding on commodity devices, and (iii) effective use of both WiFi and WiGig through delayed video adaptation and smart scheduling. Using real experiments and emulation, we demonstrate the feasibility and effectiveness of our system.
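The sketch below captures the layered-adaptation idea only in broad strokes: send as many video layers as the combined WiFi and WiGig budget allows, keeping the base layer on the more stable WiFi link and pushing enhancement layers over WiGig. The layer bitrates and the scheduling rule are assumptions for illustration, not the system described in the talk.

```python
# Rough sketch (illustrative only; layer bitrates are assumed) of layered
# adaptation across two links: fit as many layers as the total budget allows,
# keep the base layer on WiFi, send enhancement layers over WiGig.
LAYER_MBPS = [8, 20, 40, 60]  # assumed cumulative cost of base + enhancements

def schedule_layers(wifi_mbps, wigig_mbps):
    """Pick the highest layer count that fits, and assign layers to links."""
    budget = wifi_mbps + wigig_mbps
    n_layers = sum(1 for cost in LAYER_MBPS if cost <= budget) or 1
    assignment = {"wifi": [0]}                      # base layer on WiFi
    assignment["wigig"] = list(range(1, n_layers))  # enhancements on WiGig
    return n_layers, assignment

# Example: a WiGig dip from 300 to 20 Mbps sheds enhancement layers
# gracefully instead of stalling the stream.
print(schedule_layers(wifi_mbps=10, wigig_mbps=300))
print(schedule_layers(wifi_mbps=10, wigig_mbps=20))
```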
Bio: Lili Qiu is a Professor in the Computer Science Department at UT Austin. She received her Ph.D. in Computer Science from Cornell University in 2001. She was a researcher at Microsoft Research (Redmond, WA) from 2001 to 2004 and joined UT Austin in 2005. She is an IEEE Fellow, an ACM Fellow, and an ACM Distinguished Scientist. She has also received an NSF CAREER Award, a Google Faculty Research Award, and best paper awards at MobiSys’18 and ICNP’17.
14:00 – 14:30 Break
14:30 – 15:30 Session 2 – ML for videos
Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems
Yoshitomo Matsubara, Sabur Hassan Baidya, Davide Callegaro, Marco Levorato, Sameer Singh (University of California, Irvine)
Cracking open the DNN black-box: Video Analytics with DNNs across the Camera-Cloud Boundary
John Emmons, Sadjad Fouladi (Stanford University), Ganesh Ananthanarayanan (Microsoft Research), Shivaram Venkataraman (University of Wisconsin-Madison), Silvio Savarese, Keith Winstein (Stanford University)
secGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation
Hao Wu, Jinghao Feng, Xuejin Tian, Fengyuan Xu, Sheng Zhong (Nanjing University), Yunxin Liu (Microsoft Research), XiaoFeng Wang (Indiana University Bloomington)
15:00 – 15:30 Break
15:30 – 16:10 Session 3 – Playing nice with the network
Client-side Bandwidth Estimation Technique for Adaptive Streaming of a Browser Based Free-Viewpoint Application
Tilak Varisetty, David Dietrich (Leibniz Universität Hannover)
Sensor Training Data Reduction for Autonomous Vehicles
Matthew Tomei, Alex Schwing (University of Illinois at Urbana-Champaign), Satish Narayanasamy (University of Michigan), Rakesh Kumar (University of Illinois at Urbana-Champaign)
16:10 – 17:00 Poster and demo session