Enhancing Privacy: Smart Glasses Could Ditch Cameras for a Century-Old Alternative
Researchers have unveiled a groundbreaking tool named PoseSonic, designed to accurately track the upper-body movements of glasses wearers using sonar. The approach suggests a potential shift from optical cameras to sonar in future smart glasses: sonar-based tracking not only enhances accuracy and privacy but also promises to be cheaper to produce.
Developed by scientists at Cornell University, PoseSonic integrates micro sonar powered by CHIRP technology, a miniature version of the technique used in ocean mapping and submarine tracking. Combined with artificial intelligence (AI), PoseSonic constructs a precise echo profile image of the wearer by emitting and capturing sound waves that are inaudible to the human ear. The research detailing this technology was published on September 27 in the ACM Digital Library.
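As a rough illustration of the kind of signal involved (the sweep band, duration, and sample rate below are assumptions for illustration, not the authors' published parameters), a frequency-modulated chirp in the near-ultrasonic range could be generated along these lines:

```python
import numpy as np
from scipy.signal import chirp

# Illustrative parameters only; the actual PoseSonic signal design may differ.
SAMPLE_RATE = 48_000      # Hz, typical for commodity audio hardware
SWEEP_START = 18_000      # Hz, near the upper edge of human hearing
SWEEP_END = 21_000        # Hz, inaudible to most adults
SWEEP_DURATION = 0.01     # seconds per chirp frame

def make_chirp_frame() -> np.ndarray:
    """Generate one linear frequency-modulated chirp frame."""
    t = np.linspace(0, SWEEP_DURATION, int(SAMPLE_RATE * SWEEP_DURATION), endpoint=False)
    return chirp(t, f0=SWEEP_START, t1=SWEEP_DURATION, f1=SWEEP_END, method="linear")

if __name__ == "__main__":
    frame = make_chirp_frame()
    print(f"{frame.size} samples per chirp frame")
```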
Cheng Zhang, study co-author, assistant professor at Cornell, and director of the Smart Computer Interfaces for Future Interactions (SciFi) Lab, highlighted the immense potential of this technology as a future sensing solution for wearables, especially in everyday settings. In the researchers' view, PoseSonic outshines current camera-based sensing solutions in efficiency, cost, unobtrusiveness, and privacy.
Unlike existing augmented reality (AR) smart glasses that rely on cameras and various wireless technologies, PoseSonic tracks the body acoustically. The system includes microphones, speakers, a microprocessor, a Bluetooth module, a battery, and supporting sensors. The researchers built a functional prototype for under $40, with the potential for further cost reduction at scale.
PoseSonic’s speakers emit inaudible sound waves that bounce off the wearer’s body and return to the microphones. The microprocessor then assembles the echoes into an echo profile image and feeds it into an AI model that estimates the 3D positions of nine body joints. Notably, PoseSonic does not require training data from each new wearer, as the model is trained in advance using video frames as a reference.
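For a concrete picture of that pipeline, here is a minimal sketch, assuming the echo profile is built by cross-correlating each received audio frame with the transmitted chirp and stacking the results over time; the function names and array shapes are illustrative and not taken from the paper:

```python
import numpy as np

def echo_profile(received: np.ndarray, transmitted: np.ndarray) -> np.ndarray:
    """Cross-correlate one received audio frame with the transmitted chirp.

    Peaks in the result correspond to reflections arriving after different
    delays, i.e. from body surfaces at different distances from the glasses.
    Assumes the received frame is at least as long as the transmitted chirp.
    """
    return np.abs(np.correlate(received, transmitted, mode="valid"))

def echo_profile_image(frames: list, transmitted: np.ndarray) -> np.ndarray:
    """Stack per-frame echo profiles over time into a 2D 'image'.

    Rows index successive chirp frames (time); columns index echo delay.
    Assumes all frames have the same length. A model trained against
    reference video could then map an image like this to the 3D positions
    of nine upper-body joints, e.g. an array of shape (9, 3).
    """
    return np.stack([echo_profile(f, transmitted) for f in frames])
```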
Due to its lower power consumption compared with cameras, PoseSonic can run continuously on smart glasses for more than 20 hours. The technology could eventually be integrated into an AR-enabled wearable without adding bulk or compromising the wearer’s comfort.
Moreover, the sonar-based approach enhances privacy: the algorithm processes only the system’s own reflected sound waves to build its picture of the body. That data can be processed locally on the wearer’s smartphone rather than sent to a public cloud server, minimizing the risk of interception.
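A minimal sketch of that on-device flow, assuming a hypothetical local pose model and treating everything below as illustrative rather than the authors' implementation:

```python
import numpy as np

def process_on_device(echo_image: np.ndarray, pose_model) -> np.ndarray:
    """Run pose estimation entirely on the wearer's own hardware.

    The raw echo data never leaves this function; only the derived joint
    coordinates (assumed here to be a (9, 3) array of 3D positions) are
    returned to the calling app, and nothing is uploaded to a cloud server.
    """
    joints = np.asarray(pose_model(echo_image))
    assert joints.shape == (9, 3), "expected nine 3D joint positions"
    return joints
```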
PoseSonic’s potential applications extend beyond basic tracking to practical use cases such as recognizing everyday upper-body activities (eating, drinking, smoking) and monitoring the wearer’s movements during exercise. Future iterations of the technology may give users detailed feedback on their behavior, with insight into body movement during physical activity that goes beyond traditional metrics like step count or calorie consumption.