Technology has engulfed many aspects of our lives, yet we still do not interact seamlessly with our devices. Google’s Advanced Technology and Projects division (ATAP) has revealed its latest research on Soli radar, pushing the boundaries of non-verbal human-computer interaction [1]. The technology can read people’s body language and perform automated tasks in response.
What is Soli Radar?
Soli is a radar platform developed by Google’s ATAP research team in 2015. It is a sensor with embedded radar technology that uses electromagnetic waves to pick up on subtle human body language and movements [2]. It first appeared in Google’s Pixel 4 for at-a-distance gesture detection, and more recently in the Nest Hub smart display for tracking movement and breathing patterns. It’s worth noting that data from the sensor is processed locally; the raw data is never sent to the cloud.
How does Soli Radar work?
1. Emits.
Soli’s radar emits electromagnetic waves in a broad beam. An object within the beam, such as a human hand, scatters this energy, reflecting a portion of it back toward the antenna.
2. Reflects.
Properties of the reflected signal, such as energy, time delay, and frequency shift, capture rich information about the object’s characteristics and behaviors, including its size, shape, orientation, material, distance, and velocity.
3. Recognizes.
By processing the temporal variations and other captured characteristics of the reflected signal, Soli can distinguish between complex movements and determine the size, shape, orientation, material, distance, and velocity of objects within its field.
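To make the signal-to-property mapping concrete, here is a minimal Python sketch of the two textbook radar relations at work: round-trip time delay gives distance, and Doppler shift gives radial velocity. The constants and sample values are illustrative only; Soli’s actual processing pipeline is far more sophisticated and model-driven.

```python
# Minimal sketch: recover distance and radial velocity from a radar
# echo's time delay and Doppler shift. Illustrative only; not Soli code.

C = 3.0e8  # speed of light, m/s

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance to the reflector; the wave travels out and back."""
    return C * round_trip_delay_s / 2

def velocity_from_doppler(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial velocity; positive means moving toward the sensor."""
    return doppler_shift_hz * C / (2 * carrier_hz)

# Soli operates in the 60 GHz band; a hand 0.5 m away echoes back
# after roughly 3.3 nanoseconds.
print(range_from_delay(3.33e-9))           # ~0.50 m
print(velocity_from_doppler(400.0, 60e9))  # 1.0 m/s toward the sensor
```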
The Proxemics concept
Proxemics is the study of how people use the space around them to mediate social interactions [3]. As you get closer to another person, you expect increased engagement and intimacy. Each Soli-enabled device is given a notion of personal space so it can react in a more human-like way when a person enters that space. The overlap of personal spaces is a good indicator of whether people will interact or just pass by.
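As an illustration of the proxemics idea, the sketch below buckets a sensed distance into Edward Hall’s commonly cited proxemic zones. The thresholds come from the proxemics literature, and the function is a hypothetical stand-in, not part of any Soli API:

```python
# Illustrative sketch: classify a sensed distance into the classic
# proxemic zones. A real device would tune these thresholds per context.

def proxemic_zone(distance_m: float) -> str:
    if distance_m < 0.45:
        return "intimate"  # touching distance; maximum engagement expected
    if distance_m < 1.2:
        return "personal"  # conversation distance
    if distance_m < 3.6:
        return "social"    # acquaintances, shared rooms
    return "public"        # passers-by; no interaction expected

print(proxemic_zone(0.8))  # -> personal
```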
Radar sensors, together with AI, help devices understand the social context around them and act accordingly. Soli also captures the subtle elements of movement and gesture, such as body orientation, the path you might take, and the direction your head is facing [5], aided by machine learning algorithms that refine the data and allow devices to recognize the social context of their environment [4]. It can tell whether you are approaching the device or just walking past it.
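To give a flavor of how a trajectory can separate “approaching” from “passing by,” here is a hypothetical Python heuristic: a person is approaching when their motion consistently points toward the device. Soli relies on learned models rather than a hand-written rule like this; the sketch exists purely to make the idea tangible.

```python
import math

# Hypothetical heuristic: a person approaches the device (at the origin)
# when their step direction mostly points toward it. Not Soli's method.

def is_approaching(track: list[tuple[float, float]], cos_thresh: float = 0.7) -> bool:
    cosines = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        vx, vy = x1 - x0, y1 - y0  # step direction
        dx, dy = -x0, -y0          # direction from person to device
        norm = math.hypot(vx, vy) * math.hypot(dx, dy)
        if norm > 0:
            cosines.append((vx * dx + vy * dy) / norm)
    return bool(cosines) and sum(cosines) / len(cosines) > cos_thresh

walk_up = [(3.0, 0.5), (2.2, 0.4), (1.4, 0.3), (0.7, 0.2)]
walk_by = [(2.0, -2.0), (2.0, -1.0), (2.0, 0.0), (2.0, 1.0)]
print(is_approaching(walk_up))  # True
print(is_approaching(walk_by))  # False
```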
The ultimate goal is for the sensor to anticipate a user’s next move and serve up a corresponding response. Radar is also a more privacy-friendly way to gather spatial data: unlike a camera, it doesn’t capture or store distinguishable images of your face or other means of identification.
Potential Applications on Roadmap
- Booting up or pulling up touch controls as you approach (a minimal sketch of this kind of presence-driven logic follows this list)
- Turning on the screen when your head is facing it
- Bookmarking where you left off on a TV and resuming from that position when you’re back
- Learning your routines over time and steering you away from unhealthy habits
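The first two behaviors can be pictured as a tiny decision loop over the signals Soli exposes. Everything here, from the `SoliReading` fields to the action strings, is invented for illustration; it is not a real Soli API:

```python
from dataclasses import dataclass

@dataclass
class SoliReading:
    distance_m: float         # distance to the nearest person
    approaching: bool         # trajectory points toward the device
    head_facing_device: bool  # head-orientation estimate

def react(reading: SoliReading) -> str:
    if reading.distance_m > 2.7:      # ~9 ft, the stated radar range
        return "sleep"                # nobody in range
    if reading.approaching:
        return "show touch controls"  # boot up as the user walks over
    if reading.head_facing_device:
        return "turn on screen"       # glance detection
    return "idle"                     # nearby, but not engaging

print(react(SoliReading(1.5, True, False)))  # -> show touch controls
print(react(SoliReading(1.5, False, True)))  # -> turn on screen
```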
Limitations
While radar can detect multiple subjects, individuals standing too close together register as a single amorphous blob, which confuses the algorithm’s decision-making. There is also a tension between your explicit settings and what the device believes you want. Due to Soli’s limited radar range of 9 feet [2], more Soli devices would have to be installed around a house, and their data would have to be shared to compute anticipations, so the gathered data would likely percolate to the cloud eventually.
References
1. https://www.wired.com/story/google-soli-atap-research-2022/
2. https://atap.google.com/soli/technology/
3. https://www.scienceofpeople.com/proxemics/
4. https://singularityhub.com/2022/03/04/googles-new-camera-free-sensor-can-read-and-react-to-your-body-language/
Oh yeah, this is going to cause an incident once this reaches the public eye. Think of the cool headlines like “Google can read your mind!” or “Google is now blasting you and your children with radar waves to please advertisers!” or something like that.
I think this technology is fascinating; it incorporates biology, psychology, computer science, engineering, and so many other fascinating fields of study to push us further toward the cool neon sci-fi futures we’ve been excited about since the ’60s. However, I can’t help but think about the issues this will cause socially. We already had enough people freaking out about 5G and Covid vaccines; what happens when they figure out Google can use a radar to read your emotions and intentions? I understand that radar is not a camera and is much more privacy-friendly (and I agree), but misinformation can reign supreme, and if this causes enough issues, we may find ourselves adding another entry to killedbygoogle.com.
On the bright side, if this does manage to integrate into the modern tech front, and advances enough to be useful to consumers, we could see a new evolution in interactivity beyond pressing buttons on a remote. It’d be what the Xbox Kinect tried to be, and maybe not flop colossally this time.
All in all, fascinating post! Thanks for sharing!
This is super cool!! It’s interesting how body language can be taken into account in such a way, and that it can predict your movements. It is, however, also interesting to see the potential privacy invasions that can arise from this, such as using it to learn people’s daily routines, because in the wrong hands that could be very dangerous (that premise alone could probably make a horror movie LOL). Additionally, as Alexander mentioned, it will be interesting to see how people react when they are already scared of things like 5G. That being said, good job on this post!! Very well written and interesting 🙂
This is a great topic! I love the last application you mentioned about habits. Many apps/features on your phone/computer can track your habits there, but they are limited to just your actions on that device. Given enough time to advance, this could pick up on every single habit you perform. This could help people quit smoking/drugs, or limit procrastination. Like you said, these devices would have to be placed around your house to actually track everything you do. The computation required for all these inputs would be several orders of magnitude greater than what a single machine can handle. The cost of installing all the devices is another drawback. Still a great idea with lots of room for improvement.
Interesting post! There has been a lot of progress in radar technology and recognition in general. One project on GitHub I was looking at was Google MediaPipe, an open-source machine learning project from Google. It is trained to recognize mood based on gestures or facial movements, and it only requires a smartphone camera! It can even detect objects. Very cool projects from Google.
Interesting post! I’m sure this sort of technology will come up in an ethical debate sometime in the near future, if it hasn’t already, along the lines of facial recognition and other topics. It could definitely be used maliciously to track and measure people in spaces, or at least annoyingly, to lure in unsuspecting people with an almost salesman-like advertisement pitch from the machine. But I can also see some positive aspects of this technology, like more personable smart systems at home or in public. Time will tell, I guess!
This kind of technology gives off Terminator vibes: sensing whether a human is being aggressive and requires termination. For me, this ranks alongside deepfakes as a technology that is interesting and new, yet more likely to be used for bad than for good.
This is interesting, and while the potential uses described are far from extreme or Orwellian, I am always wary of technologies that claim to read facial expression or body language. The best example I can think of is Proctorio, which, while it claims to be disability compliant, tracks eye, head, and mouth movements, talking to oneself, pacing, etc. It uses these to judge “suspicion” and notify the instructor if it deems this necessary. However, body language is not uniform among humans, and what may be a “suspicious” eye movement to some could be a facial tic or an eye problem. Even with less important ramifications of the technology, something that follows “typical” body language risks malfunctioning or not working for those who fall outside this norm, with troubling implications for the disability community.
This is such a fascinating advancement in technology. However, it will raise some privacy invasion concerns. As mentioned in the comments above, people are already freaked out about 5G and the Covid vaccine, and this will be added to the list as well. This technology can be very helpful if used appropriately, but after reading all these cyberattack posts every week, I think it will also be used in negative ways to invade privacy and cause devices to malfunction.
This is an interesting technology. I cannot believe our technology has advanced to the point where radar can read shifts in its reflected wavelength to learn our body language. I do believe this research will help make human-computer interaction more seamless and intuitive, but only if Google keeps its initial promise about never sending raw data to the cloud. I could see some potential privacy debate even when the data is processed locally. Privacy and health are some of the most sensitive topics around new technology, and Google needs to be more convincing to its customers.
This is an amazing invention, and it’s great to see the progress we have made in our lives. We could use this technology in various ways that benefit a lot of people; for example, some disabled people could turn on appliances just by going near them. Also, even though there are advantages to this technology, people might find ways to misuse it as well, so we need to take precautions against that too.
This is a very good post. In some ways, the creation of Soli has the potential to provide a future research direction for physical games. But with people’s privacy increasingly at risk, this technology could become a tool for big companies to invade people’s privacy. Nowadays, people’s Internet browsing history is increasingly analyzed by large companies, which try to infer the real state of mind of the users behind the Internet from all the information available. Without a doubt, body language is one of the most important pieces of information. To put it another way, Soli can only do body language analysis now because it is not accurate enough. But in the future, if Soli’s accuracy is high enough, it may be possible to read out people’s facial expressions. At that point, what privacy will we have?
Good post! This technology is extremely interesting. As soon as you mentioned the proxemics concept, I immediately thought of how this tech could be (and most likely will be) used in the future of robotics. The possibilities are endless: imagine a robot that reads your body temperature, and if it senses that it is too high, it lowers the thermostat in your home or goes and fetches you a cold drink. This is only speculation as of now, but the possibilities regarding this technology do look promising!
Interesting concept! Object recognition has long been researched, and a variety of applications have been developed over the past few years. As with every AI technology, accuracy and uncertainty have always been an issue. Many things need to go hand in hand for the system to function properly, such as proper training data, context of usage, a controlled environment, and handling of outliers. But good progress is being made, and this system by Google seems promising, as it addresses the issue of privacy by not processing or storing data remotely. It will gain more traction especially if it is deployed to assist users with preferred services, as mentioned in the post. Over time, the system will improve and people can get personalized assistance. Definitely something to look forward to!
It is interesting to include the video at the end of the blog! My first thought was “ah, now they even track my every movement”. However, your blog later revealed that the data is never sent to the cloud and is stored locally, which contradicted my earlier guess. I wonder if this is Google’s step to address privacy concerns, or if it will be used to collect user data in some way.