How does it work? Here’s a fantastic infographic about it:

How does the Google Glass work?

2013-Oct-6: Since the Google Glass came out, there has been increasing interest in its potential for healthcare applications.

Most recently we saw this from Philips and Accenture, with BIDMC anesthesiologist Dr. David Feinstein describing the use of Google Glass for anesthesia patient monitoring:

There is also the first application in surgery, by surgeon Rafael Grossman, described in his blog. The simplicity of the setup is extraordinary and goes to show the potential of the Google Glass for telemedicine, especially in a case like one from 2009 in Australia, where a GP, “Dr. Carson, who had no experience with this kind of surgery, had to call a neurosurgeon in Melbourne and have him talk him through the procedure”. How much better would it be if, instead of talking him through it, they could actually see what he was doing and provide even more accurate advice?

Other start-ups are using the Glass to target the documentation burden, e.g. Augmedix. That’s perhaps going to be important, but I’m very curious to see how we’re going to make the leap from keyboards and mice to the Glass.

Then there’s the CPRGlass application (still in development), where the Glass is used to help with resuscitation, making use of the Eulerian Video Magnification algorithm by MIT (which has also been used in the Philips application “Vital Signs Camera”, sadly available only for iPad).

In my opinion, an even more obvious pairing of the Eulerian Video Magnification algorithm with vital signs monitoring would be in the ED, to triage patients. And I guess it could also be useful intra-operatively in determining tissue viability, e.g. after strangulated hernias.
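
To make the idea concrete, here is a minimal sketch (not MIT’s implementation) of how a pulse could be estimated from the kind of subtle color changes that Eulerian Video Magnification amplifies; the frame format, the green-channel choice and the frequency band are assumptions for illustration:

```python
# Minimal pulse-estimation sketch in the spirit of Eulerian video magnification.
# Assumes a sequence of RGB frames (numpy arrays) covering a patch of skin.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(frames, fps):
    """Return an estimated heart rate in beats per minute."""
    # Mean green-channel intensity per frame; blood-volume changes
    # modulate it slightly with every heartbeat.
    signal = np.array([f[:, :, 1].mean() for f in frames], dtype=float)
    signal -= signal.mean()

    # Temporal band-pass around plausible heart rates (0.7-4 Hz, i.e. 42-240 bpm).
    nyquist = fps / 2.0
    b, a = butter(2, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)

    # The dominant frequency of the filtered signal is taken as the pulse.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[spectrum.argmax()] * 60.0
```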

But what would be the most natural pairing of Google Glass in healthcare if not with activities and specialties that are inherently image-centric? Two fields come to mind: dermatology and radiology.

Imagine the non-dermatologist physician who is wearing the Google Glass and has immediate access to the dermatology image databases that are freely available on the web. And it’s not just the synchronous access; it’s also the side-by-side comparison that would enhance the diagnostic yield of any dermatology exam. A significant part of dermatology training is about learning to identify images: the web could provide this benefit to every physician. Alternatively, the Glass could be used as an extension of a teledermatology session, where the non-expert consults in real time with the expert.

Similarly, interpretation of X-rays, CT scans and MRIs would benefit from side-by-side comparison: since the most successful image interpretation algorithm is still the human eye-brain combination, we could ease the task by providing a validated diagnostic image for comparison. And of course, Google Glass can enable tele-ultrasound, where the operator of the probe receives immediate directions, since the distant consultant can track the probe position and the image concurrently and dynamically, which wouldn’t be possible with a static camera. What sets the Google Glass apart from a smartphone/tablet or a camera/computer combination is that with the Glass you can have full tracking of the user’s activity and immediate projection of directions right onto the visual field.
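
To illustrate the “projection of directions onto the visual field” part, here is a toy OpenCV sketch that overlays a remote consultant’s instruction on the wearer’s current camera frame; the coordinates, the hint text, and all of the networking and Glass display plumbing are assumed:

```python
# Toy overlay of a consultant's instruction onto the wearer's camera frame.
import cv2

def overlay_direction(frame, probe_xy, target_xy, hint):
    """Draw an arrow from the current probe position toward the consultant's
    target, plus a short text instruction, and return the annotated frame."""
    annotated = frame.copy()
    cv2.arrowedLine(annotated, probe_xy, target_xy, (0, 255, 0), 3, tipLength=0.2)
    cv2.putText(annotated, hint, (target_xy[0] + 10, target_xy[1]),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    return annotated

# e.g. overlay_direction(frame, (320, 400), (260, 300), "slide cephalad, angle 15 deg")
```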

2014-02-28: A Google Glass app for instant medical diagnostic test results was developed by researchers at UCLA’s Henry Samueli School of Engineering and Applied Science. In particular, the Ozcan Research Group, known for working consistently on mobilizing the lab and releasing it from its current requirements for huge infrastructure, has developed a Glass-based telelab app. It seems to me that the user doesn’t specifically need the Glass; any smartphone camera might do the task, but of course the Glass would make capturing and processing the images much faster. The Ozcan group has produced revolutionary mobile phone-based apps for the lab in the past as well.
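
Purely to illustrate the kind of image analysis a telelab app could run on a test photo (this is not the UCLA group’s method), here is a rough sketch that compares the darkness of a lateral-flow test line to its control line; the line positions and the positivity threshold are made-up values:

```python
# Rough lateral-flow strip reader over a cropped, aligned grayscale photo.
import numpy as np

def read_strip(strip_gray, control_rows=(40, 55), test_rows=(90, 105)):
    """strip_gray: 2D numpy array, background ~white, darker where lines appear."""
    background = np.median(strip_gray)
    control = background - strip_gray[slice(*control_rows), :].mean()
    test = background - strip_gray[slice(*test_rows), :].mean()
    if control <= 0:
        return "invalid"            # no control line, the test did not run
    ratio = max(test, 0.0) / control
    return "positive" if ratio > 0.2 else "negative"
```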

2014-02-28: The title “Touchless navigation comes to Google Glass” is not very informative. After all, touchless navigation exists for all sorts of devices, such as the Kinect. This one, in fact, is not only touchless but also hands-free, as “users can tilt their head to navigate menus and not have to use their hands at all”. A commentator brings up (and tweets) the important point: the benefit for quadriplegics, enabling them to control many different devices. The benefit compared to a simple accelerometer-based control earring would be the integration with a visual controlling interface.
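
A toy sketch of what hands-free, head-tilt navigation can reduce to in code: map pitch and roll readings to menu commands. The thresholds and the sensor inputs are assumptions for illustration, not the actual Glass API:

```python
# Map head orientation (degrees) to simple menu-navigation events.
def tilt_to_command(pitch_deg, roll_deg, threshold=15.0):
    if pitch_deg > threshold:
        return "select"        # nod down to confirm
    if roll_deg > threshold:
        return "next"          # tilt right to move forward in the menu
    if roll_deg < -threshold:
        return "previous"      # tilt left to move back
    return None                # within the dead zone: keep current selection
```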

2014-03-08: In a G+ post, Brian Ahier uploads a photo of the Google Glass being used as a rounding tool at BIDMC.

Google Glass in BIDMC

I would be happy if I had even a simple tablet with such an interface for rounding. Obviously there’s tremendous potential for those willing to explore it.

And again via a post by Brian Ahier, here is probably the first ED test of Google Glass, geared towards enabling tele-ED.

RI Google Glass ED

Seems like the EHR of the future will be multimedia-enhanced, and that’s where machine learning will truly help: store all rashes along with clinical and/or pathological diagnoses; create a classifier; validate it; offer it as a CDS tool.
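
As a sketch of that store/classify/validate pipeline, here is a minimal scikit-learn example over a hypothetical folder of labeled rash images; the dataset layout, image size and model choice are all assumptions, not any shipping CDS tool:

```python
# Minimal "store -> classify -> validate" sketch for labeled rash images.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

def load_dataset(root="rashes/"):
    """Expects rashes/<diagnosis>/<image>.jpg; labels come from folder names."""
    X, y = [], []
    for path in Path(root).glob("*/*.jpg"):
        img = Image.open(path).convert("RGB").resize((64, 64))
        X.append(np.asarray(img, dtype=float).ravel() / 255.0)
        y.append(path.parent.name)
    return np.array(X), np.array(y)

X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)   # create the classifier
print(classification_report(y_test, clf.predict(X_test)))              # validate on held-out images
```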

