February 2026
Adapting the Mimi Neural Codec for Kikuyu Speech
How we fine-tuned Kyutai's Mimi codec on 750+ hours of Kikuyu audio with a custom pitch-preservation loss — the foundation for real-time voice AI.

Real-time voice and text AI for African languages. We build translation, voice chatbots, and text-to-speech models so businesses can serve customers in Kikuyu, Kamba, Luo, and more.
Our translation engine is live with a 758% improvement over zero-shot baselines. Our voice chatbot lets you converse with AI in spoken Kikuyu.
C-elo Labs builds production AI for underserved African languages. We fine-tune open-source models on curated African-language datasets and deploy them as real products.
Our work spans translation (TranslateGemma), voice chat (MMS ASR + TTS), and speech-to-speech AI (Mimi codec), with expansion to Kamba, Luo, Kalenjin, and other low-resource languages on the roadmap.
Watch our AI models translate, speak, and converse in Kikuyu — live demos from our platform.
Production AI tools for African languages — from translation to real-time voice interaction.
English → Kikuyu neural machine translation powered by our fine-tuned TranslateGemma-12B model. Achieves 19.61 BLEU — a 758% improvement over zero-shot.
Speak in Kikuyu and hear AI respond in Kikuyu. Cascaded pipeline: MMS ASR → LLM → MMS TTS, running on serverless GPUs.
Convert Kikuyu text to natural spoken audio using Meta's MMS-TTS model, adapted for Kikuyu phonetics.
Turn-by-turn driving directions with voice guidance in Kikuyu — the first navigation app with voice guidance in an African indigenous language.
End-to-end voice model using the Mimi neural codec — aiming for real-time, full-duplex Kikuyu conversations. Stage 1 codec adaptation complete.
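The voice chatbot above is described as a cascaded pipeline: ASR transcribes Kikuyu speech, an LLM generates a Kikuyu reply, and TTS speaks it back. A minimal sketch of that cascade is below — the stage implementations are stubs for illustration (the lambda bodies and all values are made up), not C-elo's actual serving code; in production each stage would wrap a real model such as an MMS ASR or MMS TTS checkpoint.

```python
# Sketch of a cascaded voice-chat pipeline: ASR -> LLM -> TTS.
# Stage implementations here are hypothetical stubs, not real models.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CascadedVoiceChat:
    asr: Callable[[bytes], str]   # speech audio -> Kikuyu transcript
    llm: Callable[[str], str]     # Kikuyu transcript -> Kikuyu reply
    tts: Callable[[str], bytes]   # Kikuyu reply -> speech audio

    def respond(self, audio_in: bytes) -> bytes:
        """Run one conversational turn through all three stages."""
        transcript = self.asr(audio_in)
        reply = self.llm(transcript)
        return self.tts(reply)


# Stub stages for illustration only (pretend transcription/reply/audio).
pipeline = CascadedVoiceChat(
    asr=lambda audio: "wĩ mwega?",
    llm=lambda text: f"reply to: {text}",
    tts=lambda text: text.encode("utf-8"),
)

audio_out = pipeline.respond(b"\x00\x01")
print(audio_out.decode("utf-8"))  # reply to: wĩ mwega?
```

Structuring the cascade as three swappable callables is what makes this design easy to iterate on: any stage can be replaced (say, a better ASR checkpoint) without touching the others.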
The first navigation app with voice guidance in an African indigenous language. Search, navigate, and hear directions — all in Kikuyu.
Full driving directions with step-by-step maneuvers, route alternatives, and automatic rerouting.
Hear every navigation instruction spoken in natural Gĩkũyũ — powered by C-elo's voice AI.
Optimized for Kenyan roads and locations. More African languages launching soon.
Available soon on Android
Starting with Kikuyu; Luo, Kalenjin, and Kamba coming next.
iOS in development.
Our research focuses on fine-tuning LLMs for low-resource African languages using techniques such as LoRA, QLoRA, and synthetic data generation, plus building speech-to-speech models with the Mimi neural codec.
We collaborate with initiatives like African Next Voices and Google's WAXAL to create robust datasets and push the boundaries of low-resource language AI.
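The core idea behind the LoRA technique mentioned above: instead of updating a full weight matrix W during fine-tuning, train a small low-rank pair (A, B) and add their scaled product back to the frozen W. The toy example below shows only that arithmetic — all shapes and values are invented for illustration, and real fine-tuning would use a library such as peft rather than hand-rolled matrices.

```python
# Toy LoRA arithmetic: effective weight = W + (alpha / r) * (B @ A).
# W stays frozen; only the low-rank factors A and B would be trained.

def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_weight(W, A, B, alpha, r):
    """Merge a rank-r LoRA update into the frozen base weight W."""
    scale = alpha / r
    BA = matmul(B, A)  # (d_out x r) @ (r x d_in) -> d_out x d_in
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 frozen base weight with a rank-1 adapter (r = 1), made-up values.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]        # r x d_in
B = [[0.5], [0.25]]     # d_out x r
W_eff = lora_weight(W, A, B, alpha=2.0, r=1)
print(W_eff)  # [[2.0, 2.0], [0.5, 2.0]]
```

Because A and B together hold far fewer parameters than W, this is what makes fine-tuning large models affordable in low-resource settings; QLoRA pushes the same idea further by keeping the frozen W in 4-bit precision.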
February 2026
How we fine-tuned Kyutai's Mimi codec on 750+ hours of Kikuyu audio with a custom pitch-preservation loss — the foundation for real-time voice AI.
January 2026
How we fine-tuned Google's TranslateGemma-12B for English→Kikuyu translation — achieving a 758% improvement through systematic LoRA tuning and production deployment.
December 2025
We're launching C-elo Labs to build production AI for African languages — starting with Kikuyu translation, voice chat, and text-to-speech.