Technology has permeated many aspects of our lives, yet our interactions with our devices are far from seamless. Google’s Advanced Technology and Products (ATAP) division has revealed its latest Soli radar research, pushing the boundaries of non-verbal human-computer interaction [1]. The technology can read people’s body language and trigger automated tasks in response.
What is Soli Radar?
Soli is a radar platform developed by Google’s ATAP research team in 2015. It is a sensor with embedded radar technology that uses electromagnetic waves to pick up on subtle human body language and movements [2]. It first appeared in Google’s Pixel 4 for at-a-distance gesture detection, and more recently in the Nest Hub smart display for movement and breathing-pattern tracking. It’s worth noting that the sensor’s data is processed locally; the raw data is never sent to the cloud.
How does Soli Radar work?
Soli’s radar emits electromagnetic waves in a broad beam. An object within the beam, such as a human hand, scatters this energy, reflecting a portion back toward the antenna.
Properties of the reflected signal, such as its energy, time delay, and frequency shift, capture rich information about the object’s characteristics and behavior, including its size, shape, orientation, material, distance, and velocity.
By processing temporal variations in these signal properties, Soli can distinguish between complex movements and track how an object within its field moves and behaves.
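The two most intuitive of these relationships can be sketched numerically: time delay maps to distance, and Doppler frequency shift maps to radial velocity. This is an illustrative sketch of standard radar equations, not Soli’s actual signal-processing pipeline; the 60 GHz carrier frequency is the band Soli operates in.

```python
# Illustrative radar arithmetic: distance from round-trip delay,
# radial velocity from Doppler shift. Not Soli's implementation.

C = 3.0e8     # speed of light, m/s
F_TX = 60e9   # carrier frequency (60 GHz band)

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance = (c * round-trip delay) / 2 (the wave travels out and back)."""
    return C * round_trip_delay_s / 2

def velocity_from_doppler(doppler_shift_hz: float) -> float:
    """Radial velocity = (c * Doppler shift) / (2 * carrier frequency)."""
    return C * doppler_shift_hz / (2 * F_TX)

# A hand 0.5 m away returns the echo after ~3.33 nanoseconds:
print(round(range_from_delay(3.33e-9), 2))   # 0.5 (metres)
# A hand closing at 1 m/s shifts a 60 GHz wave by ~400 Hz:
print(velocity_from_doppler(400.0))          # 1.0 (m/s)
```

The tiny delays and frequency shifts involved are why the signal must be measured with such sensitivity: a hand gesture changes the echo by nanoseconds and hundreds of hertz.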
The Proxemics concept
Proxemics is the study of how people use the space around them to mediate social interactions [3]. As you get closer to another person, you expect increased engagement and intimacy. Each Soli-enabled device is given a notion of personal space so that it can react in a more human-like way when a person enters that space. The overlap of personal spaces is a good indicator of whether people will interact or simply pass by.
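The idea can be made concrete with the classic proxemic zones from Edward Hall’s model (intimate, personal, social, public). The sketch below, a toy illustration rather than Google’s implementation, maps a sensed distance to a zone a device could use to scale its engagement.

```python
# Toy mapping of sensed distance to Edward Hall's proxemic zones.
# The zone boundaries are Hall's conventional values in metres.

ZONES = [
    (0.45, "intimate"),
    (1.2, "personal"),
    (3.6, "social"),
    (float("inf"), "public"),
]

def proxemic_zone(distance_m: float) -> str:
    """Return the proxemic zone for a subject at the given distance."""
    for upper_bound, name in ZONES:
        if distance_m < upper_bound:
            return name
    return "public"

print(proxemic_zone(0.9))  # personal
print(proxemic_zone(5.0))  # public
```

A device might, for example, stay dark in the public zone, surface glanceable information in the social zone, and show touch controls only in the personal zone.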
Radar sensors, together with AI, help devices understand the social context around them and act accordingly. Soli also captures the subtle elements of movement and gesture, such as body orientation, the path you might take, and the direction your head is facing [5], aided by machine-learning algorithms that refine the data and allow devices to recognize the social context of their environment [4]. It can tell whether you are approaching the device or just walking past it.
The ultimate goal is for the sensor to anticipate a user’s next move and serve up a corresponding response. Radar is also a more privacy-friendly way to gather spatial data: unlike a camera, it doesn’t capture or store distinguishable images of your face or other identifying features.
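Distinguishing “approaching” from “just passing by” can be framed as a trend test over a short window of distance readings. The heuristic below is a minimal sketch under assumed names and thresholds, not Soli’s API: it flags an approach when the distance shrinks consistently and by a meaningful amount.

```python
# Minimal approach-detection heuristic over a window of radar
# distance readings (assumed names/thresholds, not Soli's API).

from typing import Sequence

def is_approaching(distances_m: Sequence[float],
                   min_total_drop_m: float = 0.3) -> bool:
    """True if the subject's distance shrinks consistently over the window."""
    if len(distances_m) < 2:
        return False
    steps = [b - a for a, b in zip(distances_m, distances_m[1:])]
    total_drop = distances_m[0] - distances_m[-1]
    # Approaching: most steps decrease, and the net drop is meaningful.
    decreasing = sum(1 for step in steps if step < 0)
    return decreasing >= len(steps) * 0.7 and total_drop >= min_total_drop_m

print(is_approaching([2.0, 1.7, 1.4, 1.1]))  # True  (steady approach)
print(is_approaching([2.0, 2.0, 1.9, 2.0]))  # False (hovering / passing by)
```

A real system would fuse this with body orientation and head direction, as the article describes, rather than rely on distance alone.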
Potential Applications on the Roadmap
- Booting up, or pulling up touch controls, as you approach
- Turning on the screen when your head faces it
- A TV that bookmarks where you left off and resumes from that position when you return
- Learning your routines over time and discouraging unhealthy habits
While radar can detect multiple subjects, if individuals stand too close together it senses only an amorphous blob, which confuses the algorithm’s decision-making. There is also a tension between your explicit settings and what the device believes you want. And because Soli’s radar range is limited to about 9 feet [2], multiple Soli devices would need to be installed around a house, and their data would have to be shared to support this kind of anticipation, so the gathered data would eventually percolate to the cloud.