Speech models, federated! (SpeechBrain x Flower)

Daniel J. Beutel
Founder & CEO of Adap
Federating SpeechBrain Using Flower

Speech is an interesting domain for federated learning: the data is often highly privacy-sensitive, there's a lot of data on edge devices, and it's suitable for self-supervised (pre-)training (no labels needed, yay!). So what's the best way to start training federated speech models? Enter SpeechBrain!

SpeechBrain

SpeechBrain is a PyTorch-powered all-in-one toolkit that has taken the speech community by storm. From the SpeechBrain website: "It is designed to be simple, extremely flexible, and user-friendly." Sound good? So how can we federate SpeechBrain training?

Federated speech model training

Luckily, the folks over at SpeechBrain have put together a Jupyter Notebook on federated speech model training via SpeechBrain and Flower. Special thanks to Yan Gao and Titouan Parcollet for creating and open-sourcing this!

Federated Speech Model Training via SpeechBrain and Flower

The nice thing about notebooks is that one can run them directly on Google Colab. It's probably helpful to have a basic understanding of SpeechBrain and Flower to fully grasp what's going on. If you're new to speech and SpeechBrain, you can find their basic tutorials here. And if you're new to Flower, you can find the PyTorch quickstart here.
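To give a rough idea of how the two libraries fit together, here's a minimal sketch (not the notebook's exact code) of wrapping a SpeechBrain `Brain`'s PyTorch modules in a Flower `NumPyClient`. The `brain`, `train_data`, and `valid_data` objects are assumed placeholders for whatever model and datasets you set up with SpeechBrain; see the notebook for the real wiring.

```python
from collections import OrderedDict

import flwr as fl
import torch


class SpeechBrainClient(fl.client.NumPyClient):
    """Sketch of a Flower client that trains a SpeechBrain Brain locally."""

    def __init__(self, brain, train_data, valid_data):
        self.brain = brain            # a speechbrain.core.Brain instance (assumed)
        self.train_data = train_data  # local training set/loader (assumed)
        self.valid_data = valid_data  # local validation set/loader (assumed)

    def get_parameters(self, config=None):
        # SpeechBrain keeps its neural modules in a torch ModuleDict
        return [p.cpu().numpy() for p in self.brain.modules.state_dict().values()]

    def set_parameters(self, parameters):
        keys = self.brain.modules.state_dict().keys()
        state_dict = OrderedDict(
            (k, torch.tensor(v)) for k, v in zip(keys, parameters)
        )
        self.brain.modules.load_state_dict(state_dict, strict=True)

    def fit(self, parameters, config):
        # Receive global weights, run a round of local SpeechBrain training
        self.set_parameters(parameters)
        self.brain.fit(range(1), self.train_data, self.valid_data)
        return self.get_parameters(), len(self.train_data), {}

    def evaluate(self, parameters, config):
        # Evaluate the global weights on the local validation data
        self.set_parameters(parameters)
        loss = self.brain.evaluate(self.valid_data)
        return float(loss), len(self.valid_data), {}


# Connect this client to a running Flower server, e.g.:
# fl.client.start_numpy_client(server_address="127.0.0.1:8080",
#                              client=SpeechBrainClient(brain, train_data, valid_data))
```

The general pattern is the same as in the Flower PyTorch quickstart: the server aggregates NumPy weight arrays, while each client translates them to and from its local model's `state_dict` and runs ordinary SpeechBrain training in between.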