Through interviews with a participant experiencing red-green color blindness, we uncovered challenges affecting various aspects of daily life. Tasks like identifying wire colors at work, interpreting traffic signals while driving, and performing routine activities like cooking or shopping often require additional effort, reliance on external cues, or alternative strategies. These struggles highlight the broader impact of color vision deficiency on confidence, efficiency, and safety.
Simulate:
Ideate:
Feedback:
Prioritize:
After the interview, we conducted a co-design session to better understand how the solutions could fit into the participant’s daily life. We presented two ideas: a built-in color identification feature and a traffic light recognition tool, using storyboards to simulate real-world scenarios. For example, we demonstrated how the camera feature could identify wire colors or distinguish intersecting metro lines. The participant shared their thoughts on how they might use these features, providing valuable feedback on usability. We refined the design in real time based on their input. Ultimately, the participant chose the camera-based solution for its simplicity and the ability to use it without extra devices, reinforcing its potential for addressing a wide range of color-dependent tasks.
We conducted usability testing with a prototype built in Figma to evaluate how well the design addressed the participant’s challenges. The test scenarios included identifying colors in real time, pinning and tracking multiple colors, and analyzing colors on saved images. These tasks allowed us to observe the participant’s interactions with the design and gather feedback for improvement. For example, the participant found the pinning feature useful but suggested adding clearer visual cues, such as symbols or highlights, to indicate selected colors. They also noted that detected colors often blended into dark or complex backgrounds, making them difficult to distinguish.
The participant highlighted challenges with visibility, interaction feedback, and ease of use during key tasks. These insights guided our refinement process to enhance the overall user experience.
Improved Visibility: Detected colors sometimes blended into dark or complex backgrounds; higher-contrast indicators were needed to make them stand out.
Interaction Feedback: The participant suggested adding animations or sound cues to make interactions feel smoother and more intuitive.
Simplified Interactions: Tapping the camera feed to select a color was effective but would benefit from clearer on-screen instructions or prompts.
Based on usability testing feedback, we refined the prototype to address these challenges and improve usability. One key addition was a colorblind test, introduced after the usability testing phase: it appears when users first launch the ColorSense feature and helps identify their specific degree of color blindness. Based on the results, users can customize their experience by rearranging the priority of colors for detection and toggling features such as black-and-white detection. These personalization options let the tool adapt to individual needs, making it more accessible and effective for a diverse range of users.
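To illustrate how such personalization could be represented, here is a minimal sketch in Python; the class name, defaults, and method are assumptions for illustration, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ColorSenseSettings:
    # Hypothetical settings model: colors listed first are detected
    # and labeled first in the bottom panel.
    color_priority: list = field(
        default_factory=lambda: ["red", "green", "blue", "yellow"]
    )
    black_and_white_detection: bool = False

    def promote(self, color: str) -> None:
        """Move a color to the front of the detection priority list."""
        if color in self.color_priority:
            self.color_priority.remove(color)
            self.color_priority.insert(0, color)

# Example: a user with red-green deficiency prioritizes green
# and enables black-and-white detection.
settings = ColorSenseSettings()
settings.promote("green")
settings.black_and_white_detection = True
```

Storing the priority as an ordered list keeps reordering trivial while leaving room for per-user toggles alongside it.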
This feature enables users to interact seamlessly with complex visuals in real time, such as maps, diagrams, or wires. By pointing the camera at a visual, the feature detects all visible colors and displays them in a bottom panel for easy selection. Users can select colors either from the bottom panel or by tapping directly on sections of the image. When a user taps a specific area, the feature highlights the selected color by adding a shape to that section, ensuring clarity and precision. A smooth animation accompanies each selection, enhancing visual feedback and the interaction experience. Additionally, multiple colors can be pinned simultaneously, offering flexibility for tracking overlapping elements.
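The tap-to-identify and pinning flow described above could be sketched roughly as follows; the palette values, function names, and nearest-color heuristic are illustrative assumptions rather than the prototype’s actual logic:

```python
import math

# Hypothetical named palette; RGB values are illustrative only.
NAMED_COLORS = {
    "red": (220, 20, 60),
    "green": (34, 139, 34),
    "blue": (30, 100, 200),
    "yellow": (250, 220, 40),
    "olive green": (128, 128, 0),
}

def nearest_color_name(rgb):
    """Return the palette name closest (Euclidean RGB distance) to a tapped pixel."""
    return min(NAMED_COLORS, key=lambda name: math.dist(NAMED_COLORS[name], rgb))

pinned = []  # multiple colors can be pinned at once

def pin_color(rgb):
    """Identify the tapped pixel's color and pin it if not already pinned."""
    name = nearest_color_name(rgb)
    if name not in pinned:
        pinned.append(name)
    return name
```

A real implementation would likely use a perceptual color space rather than raw RGB distance, but the structure (identify, then pin) would be similar.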
Users can also analyze saved images in the Photo Gallery (Fig. 5) using ColorSense mode. By tapping on specific areas of a photo, such as the “olive green” line, users can dynamically detect and highlight their chosen colors. As with the live feature, multiple colors can be selected simultaneously, and each selection is accompanied by clear visual feedback and smooth animations to ensure a seamless experience. This functionality extends the versatility of ColorSense, allowing users to revisit and interact with color-coded visuals like maps, diagrams, or charts directly from their saved images. This feature reflects ideas similar to those explored in camera-based color detection tools aimed at improving accessibility and enhancing user interaction (Yoshimoto, Kondo, & Tsuchiya, 2009).
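One plausible way to make tap detection on saved photos robust is to average a small pixel neighborhood around the tap instead of reading a single pixel; the sketch below (the function name and averaging radius are assumptions) illustrates the idea on an image stored as a 2D grid of RGB tuples:

```python
def sample_patch(image, x, y, radius=1):
    """Average RGB over a (2*radius+1)^2 patch centered on the tapped point,
    clipped to the image bounds, to smooth over noise and compression artifacts."""
    h, w = len(image), len(image[0])
    pixels = [
        image[j][i]
        for j in range(max(0, y - radius), min(h, y + radius + 1))
        for i in range(max(0, x - radius), min(w, x + radius + 1))
    ]
    n = len(pixels)
    return tuple(sum(p[k] for p in pixels) // n for k in range(3))

# Example: tapping the center of a uniform olive-green region.
image = [[(128, 128, 0) for _ in range(5)] for _ in range(5)]
tapped_rgb = sample_patch(image, 2, 2)
```

The averaged value would then feed the same nearest-color lookup used by the live camera mode.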
To view the complete functionality of the ColorSense prototype, please refer to the video below.