February 14, 2026
Adapting the Mimi Neural Codec for Kikuyu Speech
How we fine-tuned Kyutai's Mimi codec on 750+ hours of Kikuyu audio with a custom pitch-preservation loss — achieving 0.0710 val loss and 1.1 Hz pitch error.
Research updates, technical deep dives, and insights on building AI for African languages.
January 29, 2026
How we fine-tuned Google's TranslateGemma-12B for English-Kikuyu translation, achieving 19.61 BLEU through iterative experimentation and deployed it on Modal.
January 08, 2026
How do you build a chatbot that speaks Kikuyu? Our research team discusses the technical challenges and our roadmap for voice AI.
December 22, 2025
Our architecture for real-time voice chatbots using the Mimi neural codec, preserving tonal fidelity in Kikuyu.
December 13, 2025
We're launching C-elo Labs to build production AI for African languages — starting with Kikuyu translation, voice chat, and text-to-speech.
December 10, 2025
How we're leveraging the African Next Voices initiative's audio corpus to train robust speech-to-speech models for Kikuyu.