At this point, Glass already offers two distinct audio options: its built-in bone conduction transducer and the add-on mono and dual earbuds available in the Glass store. So why would Google be exploring other audio designs for Glass?
Google includes several details in the patent background that suggest its engineers still aren’t happy with how audio works on Google Glass:
- “Further, the trend toward miniaturization of computing hardware, peripherals, as well as sensor, detectors, and image and audio processors, among other technologies, has helped open up a field sometimes referred to as ‘wearable computing.’”
- “…[wearable device speakers] typically generate an angle of audible sound that can be nearly 360 degrees, which can result in the sound being heard by others besides the user of the speaker system.”
- “Unfortunately, on-ear and in-ear headphones can be somewhat bulky and uncomfortable for the user.”
- “However, developments and changes in generating sound that can be heard substantially only by a particular user and that can be easily adjusted for different users are contemplated in the present disclosure.”
If Google decides to incorporate this array of transducers into a future version of Google Glass, users will be able to create a private beam of audio that only they can hear. Best of all, they'll be able to steer that beam at the angle best suited to the shape of their own head and ear.
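The patent doesn't spell out the signal processing, but steering a beam from an array of small transducers is classically done with delay-and-sum beamforming: each element plays the same signal slightly delayed, so the wavefronts reinforce in one chosen direction. Here's a minimal sketch of that idea, assuming a simple linear array; the function names, element count, and spacing are illustrative, not anything from Google's filing:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def steering_delays(n_elements, spacing_m, angle_deg, c=SPEED_OF_SOUND):
    """Per-element delays (seconds) that steer a linear array toward
    angle_deg, measured from broadside (0 = straight ahead)."""
    theta = np.deg2rad(angle_deg)
    return np.arange(n_elements) * spacing_m * np.sin(theta) / c

def delay_and_sum(signal, fs, delays):
    """Apply each delay (rounded to whole samples) to a copy of
    `signal` and sum the copies, normalizing by element count."""
    shifts = np.round(np.asarray(delays) * fs).astype(int)
    shifts -= shifts.min()  # keep all shifts non-negative
    out = np.zeros(len(signal) + shifts.max())
    for s in shifts:
        out[s:s + len(signal)] += signal
    return out / len(shifts)

# Example: 8 tiny transducers, 1 cm apart, beam steered 20° off-axis
fs = 48_000
tone = np.sin(2 * np.pi * 2000 * np.arange(480) / fs)  # 10 ms, 2 kHz
d = steering_delays(8, 0.01, 20.0)
beam = delay_and_sum(tone, fs, d)
```

A user-facing "aim the audio at my ear" control would amount to adjusting `angle_deg` until the beam lands where that person hears it best, which matches the patent's talk of sound "easily adjusted for different users."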
As someone who has both walked around uncomfortably wearing the dual earbuds and had the onboard sound reflect off my skull loudly enough to be heard by those around me, I'm curious to see whether this technology will be used down the road and how effective it might be.
What do you think? Would a steerable beam of audio that only you could hear be a cool way to solve the remaining Google Glass audio issues? Would it help drive adoption of wearable devices, as Google hopes? Lastly, is audio a deal-breaker, or just one of several issues Glass has working against it?