Leadership briefing prototype

Make Red Cross digital access language-first.

A language-first Red Cross experience has to work in both directions: people should be able to apply to help in the language they use best, and they should be able to receive emergency guidance in that same language. This prototype shows one practical model for expanding both the volunteer application and the Emergency app. AI makes this transition easier than ever by drafting, organizing, and routing translations for human review before they become public safety language.

Selection logic

Use population data first, then add disaster risk.

The first tier should come from ACS language-at-home counts and limited-English indicators. Emergency release priority should then be adjusted by local disaster risk, shelter activity, app subscriptions, and interpreter capacity.

Rank   Language              U.S. speakers at home
1      Spanish               44.8M
2      Chinese               3.7M
3      Tagalog / Filipino    1.9M
4      Vietnamese            1.6M
5      Arabic                1.5M
6      French                1.3M
7      Korean                1.1M
8      Portuguese            1.1M
9      Hindi                 1.1M
10     Haitian Creole        1.0M
11     Russian               1.0M
12-21  German, Telugu, Urdu, Italian, Polish, Bengali, Gujarati, Japanese, Farsi/Persian   0.47M-0.86M
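The two-step selection logic above (population first, then risk adjustment) could be sketched as a simple scoring function. The weights and field names below are illustrative assumptions, not Red Cross data or policy.

```python
# Sketch of the selection logic: rank by ACS speaker counts first,
# then adjust emergency release priority by local signals.
# All weights are illustrative assumptions.

def release_priority(speakers_millions, disaster_risk, shelter_activity,
                     app_subscriptions, interpreter_capacity):
    """Higher score = earlier emergency release. Signal inputs are 0-1."""
    population_score = speakers_millions          # first tier: ACS counts
    risk_adjustment = (0.5 * disaster_risk        # local disaster risk
                       + 0.2 * shelter_activity   # recent shelter load
                       + 0.2 * app_subscriptions  # local app adoption
                       + 0.1 * interpreter_capacity)
    return population_score * (1 + risk_adjustment)

# Example: a 1.6M-speaker language in a high-risk coastal region
score = release_priority(1.6, disaster_risk=0.9, shelter_activity=0.6,
                         app_subscriptions=0.4, interpreter_capacity=0.7)
```

A multiplicative adjustment keeps the population ranking as the baseline while letting local risk move a language up the release queue.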

Two public-facing surfaces

One access model, two different risk levels.

Volunteer intake can accept a broad language list immediately. Emergency alerts should offer the same list, but distinguish reviewed safety language from AI-draft content that needs approval before release.
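The distinction between the two risk levels could be enforced with a gate that only serves reviewed safety language, falling back to labeled English for unapproved drafts. The state names and data shape below are assumptions for illustration.

```python
# Sketch: emergency surfaces serve a translation only when it has been
# reviewed; AI drafts fall back to the English source with a label.
# State names and the phrase structure are illustrative assumptions.

APPROVED = "approved"
AI_DRAFT = "ai_draft"

def emergency_text(phrase, language):
    entry = phrase["translations"].get(language)
    if entry and entry["state"] == APPROVED:
        return entry["text"]
    # Unreviewed draft or missing language: never push it as safety language.
    return phrase["english"] + " [translation pending review]"

phrase = {
    "english": "Evacuate now.",
    "translations": {
        "es": {"text": "Evacúe ahora.", "state": APPROVED},
        "vi": {"text": "Sơ tán ngay.", "state": AI_DRAFT},
    },
}
```

Volunteer intake would skip this gate entirely, since a rough translation of a form field carries far less risk than a pushed alert.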

Phrase review registry

Approve safety language before programming it into the app.

This is an editable in-app review table, not an Excel-dependent CSV. English source phrases stay attached to every translation, and every row carries risk level, approval state, reviewer, date, and notes before release.
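The row contract described above could be modeled as a small record type whose release check requires complete review metadata. Field names are assumptions, not the prototype's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of one registry row: the English source stays attached, and a
# row is only releasable when the review metadata is complete.
@dataclass
class PhraseRow:
    english: str               # source phrase, always attached
    translation: str
    language: str              # e.g. "es", "vi"
    risk_level: str            # e.g. "life-safety", "informational"
    approval_state: str        # e.g. "draft", "approved"
    reviewer: Optional[str] = None
    review_date: Optional[str] = None
    notes: str = ""

    def releasable(self) -> bool:
        return (self.approval_state == "approved"
                and self.reviewer is not None
                and self.review_date is not None)
```

Keeping the check on the row itself means the app can refuse to program an alert from any row that skipped review, regardless of how the row was edited.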


Recommended build path

Do this in tiers instead of waiting for perfection.

1. Add the language selector everywhere: use the same Census/MPI-priority list across volunteer intake and emergency app entry points.
2. Treat emergency copy differently: require source text, confidence score, reviewer, timestamp, and approval state before pushing alerts.
3. Measure gaps by geography: combine ACS language data with disaster response history, shelter activity, and local app subscriptions.
4. Recruit language capacity: capture native-language volunteer skill as structured capacity, not buried text in a notes field.
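The gap measurement in step 3 could start as a simple per-geography score. The sources, weights, and field names below are placeholders for whatever data the teams actually hold.

```python
# Sketch of a per-geography language gap score (step 3): a large
# limited-English population with high disaster exposure and low
# existing coverage signals a big unmet need.
# Weights and inputs are illustrative assumptions.

def language_gap(limited_english_share, disaster_history,
                 shelter_activity, covered_share):
    """All inputs 0-1. Higher result = bigger unmet need."""
    need = limited_english_share * (0.6 * disaster_history
                                    + 0.4 * shelter_activity)
    return need * (1 - covered_share)
```

Ranking counties or chapters by this score would point recruitment (step 4) at the places where added language capacity matters most.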