Voice is the platform's speech-to-text and text-to-speech layer. It's built for community members who rely on spoken language - whether due to visual impairment, limited literacy, physical disability, or personal preference. No one should be excluded from participating because of how they communicate.

Voice converts speech to text in real time and reads platform content aloud. It supports multiple languages spoken in Koreatown with automatic language detection, and integrates with Translate for real-time spoken translation. Voice navigation lets users search, compose messages, and move through the platform hands-free.
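The transcribe-then-detect flow described above can be sketched as follows. This is a minimal illustration, not the platform's actual implementation: the speech-to-text engine is mocked with plain strings, and the language detector is a deliberately simple script-range heuristic (Hangul means Korean, otherwise English) standing in for a real detector that would cover all of Koreatown's primary languages. The names `detect_language` and `route_transcript` are assumptions for this sketch.

```python
def detect_language(text: str) -> str:
    """Guess a transcript's language from its script.

    Illustrative heuristic only: any Hangul syllable
    (U+AC00..U+D7A3) is treated as Korean; everything else
    defaults to English.
    """
    if any("\uac00" <= ch <= "\ud7a3" for ch in text):
        return "ko"
    return "en"

def route_transcript(text: str) -> dict:
    """Tag a transcript with its detected language so downstream
    tools (e.g. Translate) can act on it."""
    return {"text": text, "lang": detect_language(text)}

# Mocked transcripts standing in for real-time speech-to-text output.
for utterance in ["안녕하세요", "Search for events this weekend"]:
    print(route_transcript(utterance))
```

Tagging each transcript with a language code is what lets a single voice pipeline hand Korean speech to Translate while routing English speech straight to search or composition.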

Voice works as a layer across the platform - adding spoken input and output to Chat, Hello, Hi, Doc, Edu, Wiki, Events, Survey, and other tools. It works alongside Access for a complete accessibility experience.

Voice data is processed in real time and discarded after transcription. No audio is stored unless the user explicitly opts in. Users control when voice features are active and can disable them at any time.
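The discard-after-transcription rule can be made concrete with a short sketch. Everything here is hypothetical: `VoiceSession`, `opt_in`, and the stub engine are illustrative names, not the platform's real API. The point is the shape of the guarantee: audio is retained only when the user has explicitly opted in, and otherwise leaves memory as soon as transcription returns.

```python
class VoiceSession:
    """Illustrative session object for the opt-in storage rule."""

    def __init__(self, opt_in: bool = False):
        self.opt_in = opt_in            # storage requires an explicit opt-in
        self.stored_audio: list[bytes] = []

    def transcribe(self, audio: bytes) -> str:
        text = self._run_engine(audio)  # real-time speech-to-text
        if self.opt_in:
            self.stored_audio.append(audio)  # retained only on opt-in
        # Otherwise `audio` simply goes out of scope: nothing is kept.
        return text

    def _run_engine(self, audio: bytes) -> str:
        # Stub engine: a real deployment would call a speech model here.
        return audio.decode("utf-8", errors="replace")

session = VoiceSession()                # default: no storage
session.transcribe(b"turn on voice navigation")
print(session.stored_audio)             # empty: audio was discarded
```

Defaulting `opt_in` to `False` keeps the privacy-preserving path the one users get without taking any action, which matches the stated policy.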

Voice is currently in active development. Advances in speech-to-text and text-to-speech technology have made this tool increasingly feasible. Priorities include core engine integration, multilingual support for Koreatown's primary languages, and accessibility testing with the community members who will benefit most.