Case Study

Operating Room Wearable Proof of Concept

Prototyping a wearable AR training and assistance tool that uses voice commands, gesture controls, and a dynamic HUD to guide surgeons in surgical procedures, coordinating with an integrated surgical workflow and log system.

This is part of the Touch Surgery case study.

Google Glass might have been shelved as an awkward gadget of the mid-2010s, and while its use on the streets inspired privacy concerns and social commentary, there were far more legitimate applications in professional environments. One of these is the Operating Room (from now on, OR).

Initial Research

Normal loupes Photo: Cardiac surgeon wearing loupes, Wikimedia Commons.

In the vast majority of cases, expert surgeons do not need to be told what the next step in a procedure is. Yet in extreme cases (e.g. emergency or combat situations), critical, life-altering procedures can fall outside the particular expertise of a combat surgeon or healthcare practitioner.

Use Case for TS Glass Photo: AI-generated, illustrative.

Back in 2015, Touch Surgery tasked my team with evaluating the feasibility of integrating technology into the above scenarios. Short of travelling to an active war zone, the next best thing was to witness a few surgeries in person.

Discovery and Ideation

User Journey

This illustrated user journey from pre-operation through post-operation phases represents the "happy path" of a surgeon connecting with a surgical workflow system.

User Journey illustrated

When appropriate and necessary, it features touch-less interaction through voice recognition software on wearable devices such as Google Glass, but it is adaptable to, for example, modern Natural Language Processing, now standard in Large Language Models.

The proposed workflow involves Glass-enabled data collection during surgery (including anaesthesia tracking and blood loss monitoring), and ends with data review and database updates.
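The workflow above can be sketched in miniature: a strict, fixed vocabulary of voice commands, each mapped to exactly one logged event, with unrecognised speech ignored rather than guessed at. This is a hypothetical illustration, not the actual Touch Surgery system; the command phrases, event names, and `SurgicalLog` class are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical strict command vocabulary: each phrase maps to exactly one
# event type, avoiding free-form interpretation for life-critical actions.
COMMANDS = {
    "log blood loss": "blood_loss_ml",
    "log anaesthesia": "anaesthesia_event",
    "next step": "advance_step",
}

@dataclass
class SurgicalLog:
    """Append-only log of Glass-captured events during a procedure."""
    entries: list = field(default_factory=list)

    def record(self, utterance: str, value=None):
        """Match an utterance against the strict vocabulary; reject anything else."""
        key = utterance.strip().lower()
        if key not in COMMANDS:
            return None  # unrecognised speech is ignored, never guessed
        entry = {
            "event": COMMANDS[key],
            "value": value,
            "time": datetime.now(timezone.utc).isoformat(),
        }
        self.entries.append(entry)
        return entry

log = SurgicalLog()
log.record("log blood loss", value=150)  # recognised: appended to the log
log.record("pass the scalpel")           # not a command: ignored
```

The design choice here mirrors the research outcome discussed later: a closed command set trades expressiveness for reliability, which matters when a misheard phrase could alter a surgical record.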

Visual Design

glass-ui.jpg

The UI design makes use of the limited screen real estate of the Google Glass device, while keeping high-affordance visual cues that can be recognised at a glance, such as a timeline and a clear visual hierarchy for written instructions.

Research Outcomes

In 2015, Google Glass suffered from limitations of the technological state of the art that have likely since been solved:

  • Short battery autonomy; this is likely surmountable with current battery technology, such as sodium-ion cells.
  • Automatic Speech Recognition (ASR) was nowhere near as reliable as it is today, and NLP will remain preferable for accessibility. However, studies suggest that strict command vocabularies are more reliable for the life-critical actions this case study covers.

These improvements make such professional applications far more practical, and they serve to illustrate the as yet untapped potential of technology in the Operating Room, provided that it does not collide with the established norms and practices, the science, and the art of healthcare professionals.