(in conjunction with ACM SIGCOMM 2020)
Paper Submissions Deadline (extended): May 11, 2020
CFP: HotEdgeVideo20.pdf
Past Workshops: HotEdgeVideo 2019
Cameras are everywhere! Analyzing live video from these cameras has great potential to benefit science and society. Enterprises deploy cameras for a wide variety of commercial and security purposes, and consumer devices carry cameras whose owners are increasingly interested in analyzing the live video they capture. We are living in a golden era for computer vision and AI, fueled by game-changing advances in systems infrastructure, breakthroughs in machine learning, and copious training data, all of which have greatly expanded the range of what these systems can do. Live video analytics has the potential to impact a wide range of verticals, including public safety, traffic efficiency, infrastructure planning, entertainment, and home safety.
Analyzing live video streams is arguably the most challenging domain for “systems-for-AI”. Unlike text or numeric processing, video analytics requires higher bandwidth, consumes considerable compute cycles, necessitates richer query semantics, and demands tighter security and privacy guarantees. Video analytics also has a symbiotic relationship with edge compute infrastructure, which places compute resources closer to the data sources (i.e., the cameras). All aspects of video analytics call for a “green-field” design, from the vision algorithms to the systems processing stack, the networking links, and the hybrid edge-cloud infrastructure. Such a holistic design will democratize live video analytics, so that any organization with cameras can obtain value from them.