Hi, I am Yibing, an IDM student at NYU.



Chromati: MR Glasses For Hearing Impaired People




This is our inclusive design project, created for an inclusive design challenge curated by Microsoft. The challenge asked us to design a product, service, or solution that solves for exclusion in a deskless workplace.

The Chromati is a pair of glasses we designed to facilitate communication between a hearing-impaired person and those around them through speech and text.



Our initial goal was to design for a surgeon who communicates using American Sign Language (ASL). After extending to a broader user spectrum, our final goal was to design a product that improves overall communication between hearing-impaired people and society, reduces inconvenience, and creates opportunities for them to engage in occupations that previously excluded them.



As the main visual designer and model developer, I designed the 3D model in Maya, built the low-fidelity prototype with one teammate, created all of the visual design, and built the high-fidelity prototype. I also conducted user research with my teammates, including contextual inquiry, user interviews, and user testing.



Sketch (for visual design and high-fi prototype)

Illustrator and Photoshop (for sketches)

Marvel (for low-fi prototype)

Maya (for 3D model)

After Effects (for video editing)














We initially brainstormed by card sorting based on six categories of occupations: Manufacturing, Government, Retail, Hospitality, Construction, and Transportation. We then came up with a variety of occupations within each.

We also sorted different disabilities by which of the five senses they hinder and how that would affect different jobs, and considered what other senses could compensate for the impaired one. We used this diagram to develop our initial idea.







Finally, we chose surgeon as an occupation that excludes people with hearing disabilities. We discussed different options for bridging the communication gap. Once we settled on a solution, we wrote out scenarios and drew sketches of the device.



First, we did secondary research on the exclusion of hearing-impaired people in the medical field.

In the medical industry, communication is seen as one of the most essential factors to success, if not the most essential. It is so important that most people assume those living with disabilities cannot work as medical employees, because their disability would hinder their capabilities.



To gain insight and learn about the disability we were working with, we went to the Center for Hearing and Communication and met with an advisor, who introduced us to hearing aids and to information about people with hearing loss. Most people with hearing loss have not lost their hearing completely; rather, they are unable to hear sounds at certain frequencies. So if a hearing aid is used to shift those frequencies and amplify nearby sound, voices can still be heard. Those who cannot speak are only a small portion of people with hearing loss. The visit dispelled the group's preconceptions about hearing loss and enabled us to refine the design.



We compared our product with other existing technologies. There are two kinds of competitors for the Chromati: the first is the hearing aid, and the second is phone apps. After analyzing their different features, we found that the key to communication is not only understanding what the other party says, but also interpreting body language and facial expressions. The Chromati allows the user to have the full experience of an interaction without any sort of distraction.



Since we aimed to design for the diversity of hearing problems people face across various deskless jobs, we made a persona spectrum based on the Microsoft Inclusive Toolkit manual. It helped us understand who we were designing for throughout the design process.



To further narrow the broad range of ideas from our brainstorming sessions, we created three user personas.

We also created the user flows for the respective personas, to further demonstrate how the Chromati works for these varying users and contexts.








Because the realm of hearing impairment is so vast, we divided our problem statement into three parts. 

1. Frequency

Consonants like “s,” “h,” and “f,” have higher frequencies (1,500 to 6,000 Hz) and are harder to hear. Consonants convey most of the meaning of what we say. Someone who cannot hear high frequency sounds will have a hard time understanding speech and language.



2. Distance

Related to frequency, when a sound travels far to reach a person, much of its high-frequency detail dissipates along the way. Over long distances, obstructions such as land masses, buildings, and even the air in the atmosphere all contribute to squashing those high frequencies.



3. Reliance on Lip Reading

Hearing-impaired people also rely on lip reading a lot. But when they can’t see others’ lip movements, problems arise. Face masks or other physical obstructions make it impossible to interpret via lip reading.
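To make the frequency problem concrete, here is a small illustrative sketch (ours, not part of the project) that mixes a vowel-range tone with a consonant-range tone and passes them through a crude moving-average low-pass filter, standing in for the distance and obstruction losses described above. The high-frequency component, where consonants like "s" and "f" live, is attenuated far more than the low one.

```python
import numpy as np

fs = 16000                           # sample rate, Hz
t = np.arange(fs) / fs               # 1 second of samples
low = np.sin(2 * np.pi * 250 * t)    # vowel-range tone
high = np.sin(2 * np.pi * 4000 * t)  # consonant-range tone
signal = low + high

# An 8-tap moving average acts as a simple low-pass filter.
kernel = np.ones(8) / 8
filtered = np.convolve(signal, kernel, mode="same")

def band_power(x, f, fs):
    """Power of x at frequency f, read from the nearest FFT bin."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return spec[int(round(f * len(x) / fs))] ** 2

# Fraction of each tone's power that survives the filter.
low_kept = band_power(filtered, 250, fs) / band_power(signal, 250, fs)
high_kept = band_power(filtered, 4000, fs) / band_power(signal, 4000, fs)
# The 250 Hz tone survives almost intact; the 4000 Hz tone is nearly gone.
```

This mirrors the experience of high-frequency hearing loss: the overall loudness of speech remains, but the consonant detail that carries most of the meaning does not.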







1. Eye Tracking

This would be used to select words, phrases, and settings in the Chromati interface. 

2. Keyboard with autocomplete phrases 

The user would type out what they would like to say with a virtual keyboard. Hand movements would be registered through the camera on the front of the frames.

3. Text to speech

The words that the user types would be spoken by an AI voice incorporated in the glasses.

4. Speech to text

The Chromati can pick up what other people are saying, and convert their dialogue into screen-based text on the interface.

5. Hand movement tracker

The glasses track hand movement so that the user can select specific words/phrases and select settings in the interface.
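As a concrete illustration of feature 2, here is a minimal sketch of a prefix-based phrase autocomplete. The class and phrase bank are hypothetical stand-ins; the real device would presumably use a learned language model rather than a literal prefix index.

```python
from collections import defaultdict

class PhraseAutocomplete:
    """Tiny prefix index over a bank of common phrases."""

    def __init__(self, phrases):
        self.index = defaultdict(list)
        for phrase in phrases:
            lowered = phrase.lower()
            # Register the phrase under every prefix of its text.
            for i in range(1, len(lowered) + 1):
                self.index[lowered[:i]].append(phrase)

    def suggest(self, prefix, limit=3):
        """Return up to `limit` phrases starting with `prefix`."""
        return self.index[prefix.lower()][:limit]

ac = PhraseAutocomplete([
    "How are you?",
    "How long will it take?",
    "Thank you very much",
])
ac.suggest("how")  # → ["How are you?", "How long will it take?"]
```

Pairing suggestions like these with eye tracking means the user can often select a whole phrase with one glance instead of typing it letter by letter on the virtual keyboard.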



We made a concept map as our information architecture, including all the functions derived from our solutions.



Initial steps to create the interface design involved some brainstorming and sketches, as we had to narrow down the functions that our device would be capable of performing. Here are some initial sketches that I drew.


a) Eye tracking would allow the user to navigate through the interface.

b) How the AR screen would appear through the lens.

c) Sketch made on Illustrator of the frames.

d) Initial sketches for the Chromati.



In order to make our prototype testable for users, we began creating the low-fidelity mockup of the interface design and built a prototype with the various functions that users could navigate.

We made the first mockups of the physical prototype of the Chromati in Maya. The design is intentionally clunky, meant only to show the concept of the glasses' shape and examples of AI phrases.



Our second stage of lo-fi prototyping for the interface focused more on showing the actual interactions. Mockups were made in Axure first; then images and detailed text were added and compiled into a Marvel prototype that shows examples of the interface. We had users test the Marvel app to see how easily they could locate the different functions.


This is our Marvel prototype: https://marvelapp.com/cg935bj/screen/41261215


We also made three videos as demos of the low-fi prototype. The videos used editing and acting to simulate what the glasses would look like in a day-to-day conversation. With these we were able to simulate the two most important parts of the glasses and test them with our users.




After the first round of user testing, we learned that the initial prototype was too confusing, as the symbols were not easy to understand. Moving forward, the design of the interface needed to be cleaned up, and labels on our design needed to indicate when someone is wearing the glasses. Overall, we wanted to improve the ease of use of our device, since its many functionalities can get confusing. Feedback and suggestions from our users included:

• Include more back buttons.

• The ability to change the keyboard should be more visible.

• Having to press the home button twice is tedious.

• The icons did not make any sense. Navigating the interface was a bit confusing.

We improved on our design for the second round of user testing by simplifying the interface: we deleted some screens and got rid of unnecessary features. The users we tested noticed the difference. We got higher ratings on our design in the second round than in the first, and when asked what they liked about our design, most said it was simple and easy to understand.



Based on our user tests and low-fi prototype, we made our hi-fi prototype. Users can use eye tracking or hand movements to select and swipe elements on the interface. It has four modes to choose from depending on the situation: one-on-one mode, small group mode, crowd mode, and phone call mode. On top of that, users can either type the dialogue they wish to say and have it carried by the AI voice, or choose relevant phrases from the AI system's database and have those translated to voice. This function helps users who sign to articulate verbal words, so that even people who are not fluent in sign language can understand them with ease.

For one-on-one mode, we made a single dialog box and a human-recognition symbol to help the user see the text more clearly.

For communication in a small group setting, we simplified the dialogue box to speech bubbles only, so that the user can easily see what each person says.

Phone call mode requires users to connect their phone to the Chromati through the device settings; they can then see real-time text transcribed from the voice on the call.
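The mode descriptions above can be summarized as a small configuration table. This is a hypothetical sketch of how the device might map each mode to display rules; all names here are ours, not from the project.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    ONE_ON_ONE = auto()
    SMALL_GROUP = auto()
    CROWD = auto()
    PHONE_CALL = auto()

@dataclass
class DisplayConfig:
    speaker_marker: bool  # show the human-recognition symbol
    bubble_style: str     # "dialog_box" or "speech_bubble"
    source: str           # "microphones" or "phone_audio"

# One-on-one gets a single dialog box plus the recognition marker;
# group modes use lightweight per-speaker bubbles; phone call mode
# transcribes the paired phone's audio instead of the microphones.
MODE_CONFIG = {
    Mode.ONE_ON_ONE: DisplayConfig(True, "dialog_box", "microphones"),
    Mode.SMALL_GROUP: DisplayConfig(False, "speech_bubble", "microphones"),
    Mode.CROWD: DisplayConfig(False, "speech_bubble", "microphones"),
    Mode.PHONE_CALL: DisplayConfig(False, "dialog_box", "phone_audio"),
}
```

Keeping the per-mode behavior in one table like this makes it easy to see, at a glance, how the four modes differ and where a new mode would slot in.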


For our high fidelity physical prototype of Chromati, we developed a sleeker and more compact version of the model through 3D renderings.

In order to make the Chromati look more discreet, we gave it a sleek, all-black finish. The glasses sit lightly on the user's face, so there is not much physical distraction when wearing them. They were modeled in Maya and Blender so that we could get a better sense of what the glasses would look like. We rendered the hi-fi images in Maya and made a short animation of a 360-degree view of the glasses.

And here are our final demo videos based on our previous personas.



