Ora

Team: Suzanne Choi, Zahin Ali, Jeffrey Chou, Angela Wang

My Role: UX/UI design for the at-home training app, UX research and strategy, visual design

Timeframe: 3 months

[Cover image]

Project background

How was your previous doctor's appointment? In the study Doctor-Patient Communication: A Review, Jennifer Fong-Ha found that only 21% of patients reported satisfactory communication with their physicians. Communication quality is crucial in reducing medical errors because, as Dr. Blau from UCSF put it, "90% of diagnosis comes from the conversation with the patient." In his American Academy of Orthopaedic Surgeons Vice Presidential Address, Terry Canale likewise stressed the importance of a physician's bedside manner: "The patient will never care how much you know, until they know how much you care."

Despite the importance of communication quality in medicine, it is extremely challenging for medical students to build communication skills in medical school due to limited resources and few opportunities to practice and evaluate their performance. To help train future doctors with strong communication skills, project Ora focuses on communicative learning for medical students.


How can we better equip future doctors with communication skills? 

 

Project Outcome: Ora

Project Ora speculates on the future of medical education in the year 2030. Ora is an AI assistant that helps medical students grow their communication skills. It leverages AI's natural language processing capability to help students prepare, practice, and evaluate their communication skills based on real human interactions at the hospital. Ora gives medical students a safe place to reflect on their performance and improve their behavior, so they can become empathetic physicians.


How it works

The Ora system consists of three devices that live in different environments: a smart watch, an ear piece, and a tablet. The smart watch gives the student a brief overview of best practices and communication analysis in the hospital. The ear piece provides in-the-moment hints during conversations and critiques the student's performance afterward. The student's daily medical conversations are captured by the ear piece in the hospital and fed into the Ora system database. The tablet then provides detailed communication analysis, a growth trajectory, learning modules, and conversation simulations enabled by an AI avatar.
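To make this data flow concrete, here is a minimal sketch of how a captured conversation might move from the ear piece into a shared store that the watch and tablet read from. It is illustrative only; the class and field names (ConversationRecord, OraStore, and so on) are assumptions made for this sketch, not part of the actual concept specification.

```python
# Minimal, hypothetical sketch of Ora's device data flow. Class and field names
# (ConversationRecord, OraStore, etc.) are assumptions made for illustration.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ConversationRecord:
    """One de-identified conversation captured by the ear piece."""
    student_id: str
    captured_at: datetime
    transcript: list[str]                  # utterances with patient identifiers removed
    watch_summary: str = ""                # brief critique surfaced on the smart watch
    tablet_analysis: dict = field(default_factory=dict)  # detailed per-segment analysis


class OraStore:
    """Shared database that the smart watch, ear piece, and tablet read from."""

    def __init__(self) -> None:
        self.records: list[ConversationRecord] = []

    def ingest_from_earpiece(self, record: ConversationRecord) -> None:
        # The ear piece pushes each captured conversation into the system.
        self.records.append(record)

    def brief_overview(self) -> str:
        # What the smart watch shows: the most recent short critique.
        return self.records[-1].watch_summary if self.records else "No conversations yet."

    def growth_trajectory(self) -> list[dict]:
        # What the tablet shows: the full history, for trend visualizations.
        return [r.tablet_analysis for r in self.records]
```

The point of the sketch is only the direction of the flow: the ear piece writes, the watch reads a brief summary, and the tablet reads the accumulated history.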


01

Smart Watch

[Smart watch interface animation]

The smart watch provides hands-free access to communication best practices and instant feedback in the hospital.

Prepare

01: Best Practice Checklist
Before any conversation, the student can review communication best practices by interaction type or by the learning modules they completed at home.

Evaluate

04: Conversation Critique
The post-conversation summary clarifies, by category, the specific terms and behaviors that caused conversation breakdowns.


02

In hospital: Ear piece

The ear piece captures the student's daily medical conversations with patient approval and provides communication critiques through Ora's virtual assistant.

Practice (Hints)

03: In-moment conversation protection
During a conversation, the ear piece provides subtle but distinguishable notifications for common pitfalls, allowing in-the-moment correction without major interruption (see the cue-mapping sketch below).

Evaluate

04: Conversation Critique
Initiated by the student anytime during the day, Ora’s virtual agent provides personalized feedback and recommendations based on the student’s accumulated performance.
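As one way to picture what "subtle but distinguishable" cues for different pitfalls could look like, here is a hypothetical mapping of pitfall categories to ear piece and watch signals. Both the categories and the cue choices are illustrative assumptions, not the final interaction design.

```python
# Hypothetical cue mapping for in-the-moment hints; categories and signals are
# illustrative assumptions rather than the final interaction design.
PITFALL_CUES = {
    "medical jargon":       {"earpiece": "single soft tone", "watch": "short pulse"},
    "interrupting patient": {"earpiece": "double soft tone", "watch": "double pulse"},
    "pace too fast":        {"earpiece": "low hum",          "watch": "long pulse"},
}


def cue_for(pitfall: str) -> dict:
    """Return the non-verbal cue pair for a detected pitfall, if one is defined."""
    return PITFALL_CUES.get(pitfall, {"earpiece": "none", "watch": "none"})
```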


03

At-home training app

Evaluate: Growth

05: Conversation Analysis
Upon turning on the tablet, the student first sees visualizations of their communication growth trajectory. This helps the student identify communication strengths and weaknesses and see how their performance has changed over time.

[Dashboard: communication growth visualizations]
 

Evaluate: Conversation

05: Conversation Analysis
Ora provides a detailed analysis of each conversation segment by topic and provides transcripts of the full conversation with the patient's information anonymized. This helps the student understand exactly what went well, what went wrong, and why.

Along with the detailed analysis, Ora also provides recommended best practices and how-to videos to help the student understand how to improve.
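Purely as an illustration, the analysis for one conversation segment might be structured like the record below, pairing each flagged issue with a recommended best practice and a how-to video. The field names and example values are assumptions for this sketch, not Ora's actual schema.

```python
# Illustrative structure for one segment of the conversation analysis;
# field names and example values are assumptions, not Ora's actual schema.
segment_analysis = {
    "segment_topic": "Delivering the diagnosis",
    "transcript": "[patient name removed] ... you have a torn meniscus ...",
    "issues": [
        {"category": "medical jargon", "evidence": "'torn meniscus' left unexplained"},
        {"category": "interruption", "evidence": "patient cut off at 01:42"},
    ],
    "recommendations": [
        {"best_practice": "Explain terms in plain language", "how_to_video": "module_12"},
        {"best_practice": "Pause and invite questions", "how_to_video": "module_07"},
    ],
}
```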

 

Practice: Virtual Agent

06-07: Conversation Redo
The student may redo any conversation segment and get in-the-moment feedback on breakdowns and corrections. A patient avatar removes personal identifiers and protects patient privacy. After the redo with the virtual agent, the student may review their performance for self-reflection and correction.

 

Prepare: Module

08: Learning modules and further practice
Ora leverages existing teaching resources and techniques to provide learning modules for different communication situations. Upon completing a series of learning modules, the student may earn certifications to showcase different skills to future employers. The student may also save flashcards and notes within the Ora platform to support their own learning.


How Ora learns

Ora utilizes natural language processing to analyze and evaluate the student's communication performance. Ora's model is trained on communication scripts and best practices used in medical schools. The corpus grows as more students use Ora, fed by the de-identified communication data captured by the ear piece. With augmented infrastructure and IoT devices in the hospital, Ora's corpus may also include video conversations or patient feedback to improve accuracy and performance.
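The concept assumes NLP models far richer than anything shown here, but as a toy illustration of the kind of signal such a system could extract from a de-identified transcript, the sketch below runs a simple keyword-based check. The term lists, function name, and output fields are all made up for the example.

```python
# Toy, keyword-based stand-in for the NLP analysis the concept assumes.
# Term lists, function name, and output fields are invented for illustration.
JARGON_TERMS = {"myocardial infarction", "idiopathic", "bilateral", "edema"}
EMPATHY_CUES = {"i understand", "that sounds difficult", "take your time"}


def evaluate_utterances(utterances: list[str]) -> dict:
    """Flag unexplained jargon and count empathy cues in a de-identified transcript."""
    text = " ".join(u.lower() for u in utterances)
    jargon_hits = sorted(term for term in JARGON_TERMS if term in text)
    empathy_hits = sum(text.count(cue) for cue in EMPATHY_CUES)
    return {
        "jargon_flagged": jargon_hits,   # candidates for plain-language rewording
        "empathy_cues": empathy_hits,    # positive reinforcement signal
        "needs_review": bool(jargon_hits) and empathy_hits == 0,
    }


# Example: the kind of evidence a critique could be built on.
print(evaluate_utterances([
    "You have bilateral edema, likely idiopathic.",
    "I understand this is a lot to take in.",
]))
```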



[Design process overview]

01

Scoping framework

[Territory map]

First, we created a territory map to better understand the complex healthcare system and identify design opportunities. We put patients at the center of the diagram because of their frequent interactions with every other stakeholder. As our research progressed, we narrowed our focus to medical students and their communication learning. The territory map helped us scope the project and build consensus among team members.


02

Exploratory research

Interviews and Survey

During the exploratory research phase, we conducted 16 interviews and received 11 online survey responses. Interviewees included medical students, doctors, residents, designers in the medical industry, technical specialists, and computer science professors. The interviews and survey helped us better understand the problem space and identify design opportunities.

Literature review and market research

We also conducted an in-depth literature review and market research to understand recent design and technology trends in healthcare. Five primary insights emerged at this stage:

  1. Doctors or patients
    Existing interventions are often designed for either doctors or patients, rarely both

  2. Tech generation gap
    There is a technology generation gap between experienced doctors and new doctors

  3. AR/VR potential
    AR/VR technology is currently limited to surgical training. There is tremendous potential in utilizing AR/VR to facilitate communicative learning

  4. Personalization
    Products enabling personalization, such as providing information catered to specific individual needs and translation for LEP patients, are well received.

  5. No platform for caregivers
    There currently is no formal learning platform designed for caregivers.


Text-based diary study

After defining our target design audience, medical students, we conducted a text-based diary study with eight 3rd- and 4th-year medical students. With the diary study, we wanted to understand medical students' detailed rotation experiences and gather examples of communication challenges. After each study, we summarized the participants' daily experiences to inform our persona building.

Synthesis

[Synthesis process diagram]

We synthesized our exploratory research with a 6-step process that started with a wall of "insights" extracted from primary and secondary research. With the 30+ post-its of insights, we ran affinity mapping and then built design principles from each cluster. The design principles and project objectives that came out of this process helped inform our design direction.

Design principles

  1. Efficiency/ Productivity:
    Support efficient communication between individuals and departments. Utilize AI to maximize productivity.

  2. Humanistic:
    Foster a humanistic learning environment

  3. Career relevance:
    Make learned material applicable to the student's future medical career. Motivate students by providing content relevant to their goals. Incorporate real-life practices into the medical education system.

  4. Individuality:
    Accommodate the individuality of learners.

  5. Visual learning:
    Take advantage of visual learning when appropriate


03

Generative research

Personae development

Based on the information gathered from exploratory research, we created two student personae and two corresponding AI personae. As we constructed the AI personae, we designed sample speech to reflect the learning needs of particular students. Based on the persona study, we started looking further into the needs and challenges of including existing teachers (doctors and residents) in the communication learning framework.

Participatory Workshop

During the ideation process, we went to UPMC (University of Pittsburgh Medical Center) to conduct a participatory workshop with 20 participants. In the workshop, we asked participants to explain specific communication challenges, map the stress and complexity of each step in their scenario, and generate their own solutions to their specific communication challenge using building blocks.

Some of the insights we got from the participatory workshop are:

  1. Confirmation:
    Students desire clear confirmation of their communication behaviors and corrections

  2. Tips for improvement:
    Students struggle with identifying concrete elements to improve from failed experiences

  3. Practice for rare patient situations:
    Practicing for rare patient interactions, such as interacting with limited English proficiency (LEP) patients or adolescents, is particularly useful because those experiences are difficult to gain in real life.

Storyboarding + Speed-dating

Based on the pain points and creative ideas that emerged in the participatory workshop, we went through three sets of design iterations involving six paper storyboards and one "video storyboard".

The video storyboard was useful in helping medical students envision and critique the concept during speed-dating; the feedback became much more concrete and actionable. We then extracted the pros and cons of each concept to iterate on our ideas.

[Storyboard images]


Visual language

Before making mocks, we first gathered examples of existing healthcare products that match the AI personality we defined during the personae development phase. The AI personae helped us search for and gather appropriate inspirational examples, color schemes, motion behaviors, and typeface choices.

For our design system, we decided to portray friendliness and trustworthiness with a blue accent color and a sans-serif typeface (Aperçu Pro). We named the project Ora because it is short for "oral".

[Visual language explorations]

Flow mapping and prototyping

Based on the feedback we received from speed-dating, we started iterating on our concept with flow maps. We also started prototyping the conversation flow for voice notifications, smart watch screens, the at-home training app, and VR simulations. Because many interviewees raised distraction and privacy concerns about AR glasses, we decided to deliver notifications and checklists through the smart watch and ear piece instead.

For the voice interface and smart watch, we had participants read scripts and role-play patients and medical students. We observed how participants responded to different types of feedback (voice/words, voice/sound, watch/vibration) and the level of disruption each caused to the conversation.

For the at-home training app, we wanted to test different types and amounts of information to evaluate the desirability of the functions we could provide. For the conversation redo, we prototyped a Skype-style tablet mode and a VR mode to see which format would better facilitate communication learning.

Communication Redo: VR mode

Communication Redo: Skype-style tablet mode


04

Evaluative Research

With our prototypes, we went to UPMC again to test our concept and usability with eight 3rd- and 4th-year medical students, each of whom participated in an individual 20-minute session.

[Usability testing photos and insight summary]

With Wizard of Oz user testing, we were able to validate many of the problems we had identified and test our assumptions about app sequence, technological preferences, and visualization. We also tested the usability of the app, such as the clarity of calls to action, the logical flow of the sequence, the length and tone of the written language, and hierarchy.

During user testing, we realized that incorporating different learning methodologies, such as positive reinforcement, motivation/incentives, and self-reflection, would greatly help user engagement and learning. For the at-home conversation redo, we initially assumed that users would prefer the VR environment to the screen conversation because we believed VR offers a more immersive experience. However, we learned that users preferred the screen conversation because it provides a unique opportunity to self-reflect on their facial expressions and body language. A majority of users also commented that a screen is more approachable for initiating conversation because of existing conventions such as FaceTime and Skype.

Concept video


Reflection

This project helped me understand how to work with medical stakeholders. Medical stakeholders are often very busy and hard to reach in person, so we leveraged creative research methods, such as the text-based diary study and video speed-dating, to accommodate remote participants.

We spent a lot of time researching AI's capabilities and limitations. In the process, I realized how challenging it is to train an AI model to provide feedback on conversation. Conversations, especially in the medical field, contain many qualitative and contextual aspects that are difficult for a machine to identify and evaluate. However, with ongoing AI research and advancements such as natural language processing and emotion detection, I believe we are not far from machines that understand human conversation well enough to provide useful medical support.