
Announcing Flower 1.8

Javier Fernandez-Marques
Research Scientist at Flower Labs

The Flower Team is excited to announce the release of Flower 1.8 stable and it's packed with updates! Flower is a friendly framework for collaborative AI and data science. It makes novel approaches such as federated learning, federated evaluation, federated analytics, and fleet learning accessible to a wide audience of researchers and engineers.

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Daniel J. Beutel, Daniel Nata Nugraha, Danny, Gustavo Bertoli, Heng Pan, Ikko Eltociear Ashimine, Jack Cook, Javier, Raj Parekh, Robert Steiner, Sebastian van der Voort, Taner Topal, Yan Gao, mohammadnaseri, tabdar-khan.

What's new?

  • Introduce Flower Next high-level API (stable) (#3002, #2934, #2958, #3173, #3174, #2923, #2691, #3079, #2961, #2924, #3166, #3031, #3057, #3000, #3113, #2957, #3183, #3180, #3035, #3189, #3185, #3190, #3191, #3195, #3197)

    The Flower Next high-level API is stable! Flower Next is the future of Flower - all new features (like Flower Mods) will be built on top of it. You can start to migrate your existing projects to Flower Next by using ServerApp and ClientApp (check out quickstart-pytorch or quickstart-tensorflow; a detailed migration guide will follow shortly). Flower Next allows you to run multiple projects concurrently (we call this multi-run) and execute the same project in either simulation environments or deployment environments without having to change a single line of code. The best part? It's fully compatible with existing Flower projects that use Strategy, NumPyClient & co.
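
    As a first impression, here is a minimal sketch of what a Flower Next project can look like (the toy client and all values are illustrative; see the quickstart examples for complete code):

        from flwr.client import ClientApp, NumPyClient
        from flwr.server import ServerApp, ServerConfig
        from flwr.server.strategy import FedAvg

        class FlowerClient(NumPyClient):
            def fit(self, parameters, config):
                # Train the local model here (placeholder: echo the parameters back)
                return parameters, 1, {}

        def client_fn(cid: str):
            # Construct the client for the node identified by `cid`
            return FlowerClient().to_client()

        # The two entry points of a Flower Next project
        client = ClientApp(client_fn=client_fn)
        server = ServerApp(config=ServerConfig(num_rounds=3), strategy=FedAvg())

    The same client and server objects run unchanged in both simulation and deployment environments.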

  • Introduce Flower Next low-level API (preview) (#3062, #3034, #3069)

    In addition to the Flower Next high-level API that uses Strategy, NumPyClient & co, Flower 1.8 also comes with a preview version of the new Flower Next low-level API. The low-level API allows for granular control of every aspect of the learning process by sending/receiving individual messages to/from client nodes. The new ServerApp supports registering a custom main function that allows writing custom training loops for methods like async FL, cyclic training, or federated analytics. The new ClientApp supports registering train, evaluate and query functions that can access the raw message received from the ServerApp. New abstractions like RecordSet, Message and Context further enable sending multiple models, multiple sets of config values and metrics, stateful computations on the client node and implementations of custom SMPC protocols, to name just a few.
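
    Since the low-level API is a preview, exact names and signatures may still change; the following sketch illustrates the general shape of registering a train handler that works on raw messages:

        from flwr.client import ClientApp
        from flwr.common import Context, Message

        app = ClientApp()

        @app.train()
        def train(msg: Message, ctx: Context) -> Message:
            # Inspect the raw RecordSet sent by the ServerApp: it can hold
            # multiple models, config values, and metrics at once
            content = msg.content
            # ... run local training and update `content` accordingly ...
            return msg.create_reply(content=content)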

  • Introduce Flower Mods (preview) (#3054, #2911, #3083)

    Flower Modifiers (we call them Mods) can intercept messages and analyze, edit or handle them directly. Mods can be used to develop pluggable modules that work across different projects. Flower 1.8 already includes mods to log the size of a message and the number of parameters sent over the network, as well as mods for differential privacy with fixed or adaptive clipping, local differential privacy, and the secure aggregation protocols SecAgg and SecAgg+. The Flower Mods API is released as a preview, but researchers can already use it to experiment with arbitrary SMPC protocols.
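
    Conceptually, a mod is just a function that sits between the incoming message and the wrapped handler. A minimal sketch (the logging mod below is illustrative, not one of the built-in mods):

        from flwr.client import ClientApp
        from flwr.common import Context, Message

        def logging_mod(msg: Message, ctx: Context, call_next) -> Message:
            # Runs on every message: inspect it, forward it, return the reply
            print(f"Handling message of type: {msg.metadata.message_type}")
            return call_next(msg, ctx)

        # `client_fn` as in the high-level API sketch above
        app = ClientApp(client_fn=client_fn, mods=[logging_mod])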

  • Fine-tune LLMs with LLM FlowerTune (#3029, #3089, #3092, #3100, #3114, #3162, #3172)

    We are introducing LLM FlowerTune, an introductory example that demonstrates federated LLM fine-tuning of pre-trained Llama2 models on the Alpaca-GPT4 dataset. The example is built to be easily adapted to use different models and/or datasets. Read our blog post LLM FlowerTune: Federated LLM Fine-tuning with Flower for more details.

  • Introduce built-in Differential Privacy (preview) (#2798, #2959, #3038, #3147, #2909, #2893, #2892, #3039, #3074)

    Built-in Differential Privacy is here! Flower supports both central and local differential privacy (DP). Central DP can be configured with either fixed or adaptive clipping. The clipping can happen either on the server-side or the client-side. Local DP does both clipping and noising on the client-side. A new documentation page explains Differential Privacy approaches and a new how-to guide describes how to use the new Differential Privacy components in Flower.
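
    For example, central DP with server-side fixed clipping can be added by wrapping an existing strategy (the values below are illustrative, not recommendations):

        from flwr.server.strategy import (
            DifferentialPrivacyServerSideFixedClipping,
            FedAvg,
        )

        # Wrap any existing strategy to add central DP with fixed clipping
        strategy = DifferentialPrivacyServerSideFixedClipping(
            strategy=FedAvg(),
            noise_multiplier=0.3,    # scale of the Gaussian noise added to the aggregate
            clipping_norm=5.0,       # L2 norm bound applied to each client update
            num_sampled_clients=10,  # clients sampled per round, used to calibrate noise
        )

    For client-side clipping and local DP, the corresponding mods can be added to a ClientApp instead.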

  • Introduce built-in Secure Aggregation (preview) (#3120, #3110, #3108)

    Built-in Secure Aggregation is here! Flower now supports different secure aggregation protocols out-of-the-box. The best part? You can add secure aggregation to your Flower projects with only a few lines of code. In this initial release, we include support for SecAgg and SecAgg+, but more protocols will be implemented shortly. We'll also add detailed docs that explain secure aggregation and how to use it in Flower. You can already check out the new code example that shows how to use Flower to easily combine Federated Learning, Differential Privacy and Secure Aggregation in the same project.
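
    On the client side, opting into SecAgg+ is a one-liner thanks to Flower Mods (a sketch; the server-side counterpart is configured in the ServerApp, as shown in the code example):

        from flwr.client import ClientApp
        from flwr.client.mod import secaggplus_mod

        # `client_fn` as in the high-level API sketch above
        app = ClientApp(client_fn=client_fn, mods=[secaggplus_mod])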

  • Introduce flwr CLI (preview) (#2942, #3055, #3111, #3130, #3136, #3094, #3059, #3049, #3142)

    A new flwr CLI command allows creating new Flower projects (flwr new) and then running them using the Simulation Engine (flwr run).
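
    A typical workflow looks like this (flwr new walks you through a few interactive prompts):

        $ flwr new   # scaffold a new Flower project from a template
        $ flwr run   # run the project using the Simulation Engine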

  • Introduce Flower Next Simulation Engine (#3024, #3061, #2997, #2783, #3184, #3075, #3047, #2998, #3009, #3008)

    The Flower Simulation Engine can now run Flower Next projects. For notebook environments, there's also a new run_simulation function that can run ServerApp and ClientApp.
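
    A rough sketch of how run_simulation ties the two apps together (num_supernodes is illustrative):

        from flwr.simulation import run_simulation

        # `server` and `client` are the ServerApp/ClientApp from the sketches above
        run_simulation(
            server_app=server,
            client_app=client,
            num_supernodes=10,  # number of simulated SuperNodes (client nodes)
        )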

  • Handle SuperNode connection errors (#2969)

    A SuperNode will now try to reconnect indefinitely to the SuperLink in case of connection errors. The arguments --max-retries and --max-wait-time can now be passed to the flower-client-app command: --max-retries defines the number of reconnection attempts the SuperNode makes before giving up, and --max-wait-time defines how long the SuperNode keeps trying to reconnect to the SuperLink before giving up.
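
    Both arguments are passed directly on the command line, for example (the client:app reference and the values are illustrative):

        $ flower-client-app client:app --max-retries 5 --max-wait-time 300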

  • General updates to Flower Baselines (#2904, #2482, #2985, #2968)

    There's a new FedStar baseline. Several other baselines have been updated as well.

  • Improve documentation and translations (#3050, #3044, #3043, #2986, #3041, #3046, #3042, #2978, #2952, #3167, #2953, #3045, #2654, #3082, #2990, #2989)

    As usual, we merged many smaller and larger improvements to the documentation. A special thank you goes to Sebastian van der Voort for landing a big documentation PR!

  • General updates to Flower Examples (#3134, #2996, #2930, #2967, #2467, #2910, #2918, #2773, #3063, #3116, #3117)

    Two new examples show federated training of a Vision Transformer (ViT) and federated learning in a medical context using the popular MONAI library. quickstart-pytorch and quickstart-tensorflow demonstrate the new Flower Next ServerApp and ClientApp. Many other examples received considerable updates as well.

  • General improvements (#3171, #3099, #3003, #3145, #3017, #3085, #3012, #3119, #2991, #2970, #2980, #3086, #2932, #2928, #2941, #2933, #3181, #2973, #2992, #2915, #3040, #3022, #3032, #2902, #2931, #3005, #3132, #3115, #2944, #3064, #3106, #2974, #3178, #2993, #3186, #3091, #3125, #3093, #3013, #3033, #3133, #3068, #2916, #2975, #2984, #2846, #3077, #3143, #2921, #3101, #2927, #2995, #2972, #2912, #3065, #3028, #2922, #2982, #2914, #3179, #3080, #2994, #3187, #2926, #3018, #3144, #3011, #3152, #2836, #2929, #2943, #2955, #2954)

Incompatible changes

None