MOOD

A smart subtitle system for emotional expression in mixed-language communication
MOOD is a real-time subtitle enhancer that detects emotional tone in spoken language and transforms it into visual cues.
It’s built for Deaf users to better understand how something is said, not just what is said—bridging the emotional gap often lost in plain captions.

Category:

UI/UX Design
Interaction Design

Rethinking Subtitles:

Subtitles often lack emotional nuance for Deaf individuals. MOOD restores this by visually conveying tone.

Core Technology:

Using audio analysis and voice-emotion AI, MOOD interprets vocal tone in real time. Subtitles are then visually modified, through color, weight, or animation, to express emotional intent.
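The tone-to-visual step can be pictured as a simple lookup from a detected emotion to a subtitle treatment. This is a minimal sketch under assumed labels, colors, and a confidence threshold; none of these values are MOOD's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SubtitleStyle:
    color: str       # color applied to the caption text
    weight: int      # font weight (400 = normal, 700 = bold)
    animation: str   # named animation cue, e.g. "shake" or "none"

# Hypothetical mapping from a detected emotion to a visual treatment.
EMOTION_STYLES = {
    "anger":   SubtitleStyle(color="#d93025", weight=700, animation="shake"),
    "joy":     SubtitleStyle(color="#f9ab00", weight=600, animation="bounce"),
    "sadness": SubtitleStyle(color="#1a73e8", weight=400, animation="fade"),
    "neutral": SubtitleStyle(color="#ffffff", weight=400, animation="none"),
}

def style_for(emotion: str, confidence: float) -> SubtitleStyle:
    """Fall back to neutral styling when the model is unsure."""
    if confidence < 0.5:
        return EMOTION_STYLES["neutral"]
    return EMOTION_STYLES.get(emotion, EMOTION_STYLES["neutral"])
```

Falling back to neutral below a confidence threshold keeps low-certainty predictions from adding distracting visual noise to the captions.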

How Might We
How might we bridge emotional gaps between Deaf and hearing communities?
How might we make emotions more visible and mutual, beyond words, sounds, or spoken tone?

Features

On the homepage, users can adjust the size of the animated subtitles to make them more comfortable and less distracting during conversations.

Showcase

After the chat, the app shows a text summary. Users can tap parts of the conversation to see emotional feedback and mood changes.

Application

As the AI detects emotional shifts in the speaker’s tone, the curve reacts, rising with intensity and flattening with calm. This helps users spot emotional highs, shifts, or sudden tone changes at a glance, even before reading the subtitles.
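A curve like this can be produced by smoothing the per-frame intensity with an exponential moving average, so it rises during intense speech and settles when the speaker is calm. This is a sketch, not MOOD's actual signal processing; the smoothing factor `alpha` is an assumed parameter.

```python
def smooth_intensity(samples, alpha=0.3):
    """Return EMA-smoothed values in [0, 1] for plotting a mood curve.

    samples: per-frame intensity estimates from the emotion model.
    alpha:   smoothing factor; higher values make the curve react faster.
    """
    curve = []
    value = 0.0
    for s in samples:
        # Blend the new sample with the running value to damp jitter.
        value = alpha * s + (1 - alpha) * value
        curve.append(value)
    return curve
```

Smoothing matters here because raw per-frame emotion scores are noisy; without it, the curve would jump on every frame instead of showing readable rises and falls.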

Thanks for watching
