How do you communicate
when you can’t rely on sound?
The Problem
This project started from a question I came across on Reddit:

This revealed a broader issue:
Multiplayer games rely heavily on voice communication,
limiting participation for Deaf and hard-of-hearing players.

Interviews
To understand this gap, I spoke with Deaf and hard-of-hearing players.
Voice-first communication
leads to exclusion.
Tone is lost in text-based communication.
Existing caption solutions aren’t integrated into gameplay.
Insights
Communication isn’t just what is said; it’s how it’s said.
Urgency
Timing
Intent
Feature 1
I started with captions.
An Emotion-Augmented Caption System that makes tone, urgency, and nuance visible at a glance.
Loudness → size
Intensity → weight
Pacing → spacing
Emotion → color tone
Delivery → motion
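The mapping above can be sketched as a small translation function. This is a minimal illustration, not the project's actual implementation; the feature names, value ranges, and color/motion choices are all assumptions.

```python
# Sketch of the caption-styling mapping: speech features in, visual
# properties out. All names, ranges, and values here are illustrative.
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    loudness: float   # 0.0-1.0, normalized volume
    intensity: float  # 0.0-1.0, emphasis / stress
    pacing: float     # 0.0-1.0, speech rate
    emotion: str      # e.g. "calm", "urgent", "excited"
    delivery: str     # e.g. "steady", "shaky", "clipped"

EMOTION_COLORS = {"calm": "#8ecae6", "urgent": "#e63946", "excited": "#f4a261"}
DELIVERY_MOTION = {"steady": "none", "shaky": "jitter", "clipped": "pulse"}

def caption_style(f: VoiceFeatures) -> dict:
    """Translate speech features into visual caption properties."""
    return {
        "size": round(14 + 10 * f.loudness),        # loudness -> font size (px)
        "weight": 400 + round(300 * f.intensity),   # intensity -> font weight
        "spacing": 0.5 + 1.5 * f.pacing,            # pacing -> letter spacing (px)
        "color": EMOTION_COLORS.get(f.emotion, "#ffffff"),  # emotion -> color tone
        "motion": DELIVERY_MOTION.get(f.delivery, "none"),  # delivery -> motion
    }

# A loud, urgent shout renders as a large, heavy, red, pulsing caption.
shout = VoiceFeatures(loudness=0.9, intensity=0.8, pacing=0.7,
                      emotion="urgent", delivery="clipped")
style = caption_style(shout)
```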
User Testing
One of the biggest design shifts emerged from user testing with Deaf players.
AI emotion labels don’t hold up in gameplay.
From

to
2
Speaker identification should be instant.
That was smart, nice job
Speaker 1:
from
That was smart, nice job
color-coding
avatar/icon
That was smart, nice job

to
3
One system doesn’t fit all players.

Different play styles, different needs
So I introduced a customizable system.
Players can adjust caption intensity, speaker indicators, and placement based on their own needs and play style.

An overlay interface that can be adjusted in real time without leaving gameplay.
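As a rough sketch, the per-player settings could be a small adjustable record. The field names and defaults below are hypothetical, chosen only to mirror the three controls described above.

```python
# Illustrative per-player caption settings; names and defaults are assumptions.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CaptionSettings:
    intensity: float = 0.5             # 0.0 = subtle styling, 1.0 = full emphasis
    speaker_indicator: str = "color"   # "color", "avatar", or "both"
    placement: str = "bottom"          # "bottom", "top", or "near-speaker"

    def adjust(self, **changes) -> "CaptionSettings":
        """Apply a live tweak from the overlay, returning an updated copy."""
        return CaptionSettings(**{**asdict(self), **changes})

default = CaptionSettings()
mine = default.adjust(intensity=0.8, speaker_indicator="avatar")
```

Returning a new copy (rather than mutating in place) keeps a safe default to revert to mid-match.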
Feature 2
Even with a flexible visual system, communication still relied heavily on sight.

In fast-paced gameplay,
this quickly becomes overwhelming…
So I introduced haptic feedback as a complementary communication channel.
This translates critical signals into physical cues, allowing players to notice alerts without looking away and reducing overload by distributing attention across senses.
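The translation step could look something like the sketch below: each critical signal maps to a vibration pattern. The signal names, patterns, and the `rumble(duration_ms, strength)` callback are all assumptions for illustration; real engines expose different haptics APIs.

```python
# Hypothetical routing of critical signals to haptic patterns.
# A pattern is a list of (duration_ms, strength) pulses; strength 0.0 = pause.
HAPTIC_PATTERNS = {
    "enemy_nearby":    [(80, 1.0), (60, 0.0), (80, 1.0)],   # sharp double pulse
    "teammate_ping":   [(120, 0.5)],                         # single soft pulse
    "objective_alert": [(200, 0.8), (100, 0.0), (200, 0.8)], # long paired pulse
}

def play_haptic(signal: str, rumble) -> int:
    """Send the pattern for `signal` to a rumble(duration_ms, strength)
    callback. Returns the number of pulses sent (0 for unknown signals)."""
    pattern = HAPTIC_PATTERNS.get(signal, [])
    for duration_ms, strength in pattern:
        rumble(duration_ms, strength)
    return len(pattern)

# Usage: collect pulses instead of driving real hardware.
sent = []
play_haptic("enemy_nearby", lambda ms, strength: sent.append((ms, strength)))
```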

Feature 3
While communication became perceivable, responding remained limited.
In fast-paced gameplay, players need to react just as quickly as they receive information.
But existing systems rely on simple pings, which are limited in what they can express.

I designed the system along a spectrum of speed and clarity.
Drag UI → instant selection
Distance UI → precise intensity
Chaining UI → rapid combinations
Highlight UI → clear visibility
The interface is intentionally minimal to avoid distracting from gameplay,
prioritizing speed and intuitive expression.
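The spectrum above can be sketched as two small functions: a drag direction picks the signal, drag distance sets its intensity, and rapid successive gestures chain into one compound message. Every name and threshold here is an assumption, not the project's actual logic.

```python
# Illustrative gesture resolution: direction -> signal, distance -> intensity.
def resolve_gesture(direction: str, distance: float) -> dict:
    """Map one drag gesture to a signal with an intensity (0.0-1.0)."""
    signals = {"up": "push", "down": "retreat", "left": "help", "right": "enemy"}
    return {
        "signal": signals.get(direction, "ping"),   # unknown drags fall back to a ping
        "intensity": min(1.0, distance / 100.0),    # longer drag = stronger signal
    }

def chain(gestures: list) -> list:
    """Combine rapid successive gestures into one compound message."""
    return [resolve_gesture(direction, distance) for direction, distance in gestures]
```

Keeping resolution stateless like this is what lets the interface stay minimal: no menus, just motion.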
Iteration
Improving Interaction Visibility
From: blends into gameplay → To: separated from gameplay

Clarifying Active Signals
From: no clear state indication → To: clear active-state feedback

Clarifying Intensity Feedback
From: intensity hard to perceive → To: intensity visible and controllable
Final Prototype
Clearer communication in real time.
Powered by gesture-based selection and combined signals in one motion.
Less noise
Better visual tracking
Stronger contrast
Customize signals
See them in action
Fast, expressive communication in gameplay
System Summary
Together, they form the Social Communication Accessibility Framework.
A multi-modal communication system that translates voice into signals that can be seen, felt, and expressed.

Impact
User testing showed clear improvements in how players perceived and responded to communication.
“The one with the bigger font felt like a clear command, while the smaller one felt more like a suggestion.”
“It would make communication much more accessible and help me stay in sync with teammates without relying on voice.”
“It helps me react faster and understand what’s happening without relying on voice chat.”
And importantly, all six participants said they would use this in a real game.
Designing for real-time interaction required not just adding more signals,
but deciding what actually matters in the moment.
Design Decision
Through testing, I learned that clarity is more important than complexity,
and that users need control over how they receive and express information.
User-Centered

