The 18th International Conference on Mobility, Sensing and Networking (MSN 2022)
Keynote Speech 3: Real-Time AI for Infrastructure-assisted Autonomous Driving
9:00 AM — 9:45 AM HKT
Dec 14 Wed, 8:00 PM — 8:45 PM EST
Autonomous driving will greatly improve the mobility and safety of future transportation. However, recent pilot commercial deployments have raised widespread concerns about the reliability and safety of existing autonomous driving systems. In particular, many recent accidents were caused by delayed or erroneous perception by autonomous vehicles. Despite significant progress in machine learning algorithms and new vehicular sensors, the limited perception capability of a single car remains the major obstacle to large-scale commercial deployment of autonomous driving. An emerging technical paradigm to address this grand challenge is to improve the safety of autonomous vehicles by leveraging intelligent roadside infrastructure, such as lampposts equipped with sensors and compute units. In this talk, I will discuss our recent work on real-time AI technologies for infrastructure-assisted autonomous driving. First, we have developed and deployed the world's largest open smart lamppost testbed on the CUHK campus. Consisting of 25 roadside units equipped with network coding-enabled wireless multi-hop networks and advanced sensors, including LiDAR, mmWave radar, and thermal cameras, the testbed offers real-time services such as target detection and dynamic route planning for autonomous vehicles. Second, I will present RT-mDL, a novel real-time deep learning framework that integrates model compression and real-time scheduling to systematically optimize the concurrent execution of multiple deep learning tasks. RT-mDL enables edge platforms, such as roadside units and connected vehicles, to run multiple deep learning tasks concurrently under limited compute and communication resources. Third, I will present VI-Eye and VIPS, the first systems for real-time 3D fusion of vehicle and infrastructure perception with centimeter accuracy, enabling vehicular perception enhancement, robust object detection and tracking, localization, and navigation.
Lastly, I will discuss milliEye, a new real-time mmWave radar and camera fusion system for robust object detection on edge platforms, which requires only a small amount of labeled image/radar data thanks to its decoupled learning architecture.
Weigang Wu, Sun Yat-sen University, China