Changelog
Unreleased
Incompatible changes
Configurable get_parameters (#1242)
The `get_parameters` method now accepts a configuration dictionary, just like `get_properties`, `fit`, and `evaluate`.
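A minimal sketch of what the new signature enables (the `TinyClient` class and the `layer` config key are hypothetical illustrations, not Flower API; a real client would subclass `flwr.client.NumPyClient` and return model weights):

```python
class TinyClient:
    def __init__(self):
        # Stand-in for model parameters (e.g., a list of NumPy ndarrays)
        self.weights = [[0.0, 0.0], [1.0]]

    def get_parameters(self, config):
        # `config` is the new configuration dictionary; a strategy could, for
        # example, request a specific subset of parameters via a custom key.
        if config.get("layer") is not None:
            return [self.weights[config["layer"]]]
        return self.weights

client = TinyClient()
all_params = client.get_parameters({})            # every parameter tensor
subset = client.get_parameters({"layer": 1})      # selected via the config dict
```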
Minor updates
Add secure gRPC connection to the `advanced_tensorflow` code example (#847)
v0.19.0 (2022-05-18)
Flower Baselines (preview): FedOpt, FedBN, FedAvgM (#919, #1127, #914)
The first preview release of Flower Baselines has arrived! We're kickstarting Flower Baselines with implementations of FedOpt (FedYogi, FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how to use Flower Baselines (https://flower.dev/docs/using-baselines.html). With this first preview release we're also inviting the community to contribute their own baselines (https://flower.dev/docs/contributing-baselines.html).

C++ client SDK (preview) and code example (#1111)
Preview support for Flower clients written in C++. The C++ preview includes a Flower client SDK and a quickstart code example that demonstrates a simple C++ client using the SDK.
Add experimental support for Python 3.10 and Python 3.11 (#1135)
Python 3.10 is the latest stable release of Python and Python 3.11 is due to be released in October. This Flower release adds experimental support for both Python versions.
Aggregate custom metrics through user-provided functions (#1144)
Custom metrics (e.g., `accuracy`) can now be aggregated without having to customize the strategy. Built-in strategies support two new arguments, `fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that allow passing custom metric aggregation functions.

User-configurable round timeout (#1162)
A new configuration value allows the round timeout to be set for `start_server` and `start_simulation`. If the `config` dictionary contains a `round_timeout` key (with a `float` value in seconds), the server will wait at least `round_timeout` seconds before it closes the connection.

Enable both federated evaluation and centralized evaluation to be used at the same time in all built-in strategies (#1091)
Built-in strategies can now perform both federated evaluation (i.e., client-side) and centralized evaluation (i.e., server-side) in the same round. Federated evaluation can be disabled by setting `fraction_eval` to `0.0`.

Two new Jupyter Notebook tutorials (#1141)
Two Jupyter Notebook tutorials (compatible with Google Colab) explain basic and intermediate Flower features:
An Introduction to Federated Learning: Open in Colab (https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1-Intro-to-FL-PyTorch.ipynb)
Using Strategies in Federated Learning: Open in Colab (https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2-Strategies-in-FL-PyTorch.ipynb)

New FedAvgM strategy (Federated Averaging with Server Momentum) (#1076)
The new `FedAvgM` strategy implements Federated Averaging with Server Momentum [Hsu et al., 2019].

New advanced PyTorch code example (#1007)
A new code example (`advanced_pytorch`) demonstrates advanced Flower concepts with PyTorch.

New JAX code example (#906, #1143)
A new code example (`jax_from_centralized_to_federated`) shows federated learning with JAX and Flower.

Minor updates
New option to keep Ray running if Ray was already initialized in `start_simulation` (#1177)
Add support for custom `ClientManager` as a `start_simulation` parameter (#1171)
New documentation for implementing strategies (https://flower.dev/docs/implementing-strategies.html) (#1097, #1175)
New mobile-friendly documentation theme (#1174)
Limit version range for (optional) `ray` dependency to include only compatible releases (`>=1.9.2,<1.12.0`) (#1205)
Incompatible changes
Remove deprecated support for Python 3.6 (#871)
Remove deprecated KerasClient (#857)
Remove deprecated no-op extra installs (#973)
Remove deprecated proto fields from `FitRes` and `EvaluateRes` (#869)
Remove deprecated QffedAvg strategy (replaced by QFedAvg) (#1107)
Remove deprecated DefaultStrategy strategy (#1142)
Remove deprecated support for eval_fn accuracy return value (#1142)
Remove deprecated support for passing initial parameters as NumPy ndarrays (#1142)
v0.18.0 (2022-02-28)
What's new?
Improved Virtual Client Engine compatibility with Jupyter Notebook / Google Colab (#866, #872, #833, #1036)
Simulations (using the Virtual Client Engine through `start_simulation`) now work more smoothly on Jupyter Notebooks (incl. Google Colab) after installing Flower with the `simulation` extra (`pip install flwr[simulation]`).

New Jupyter Notebook code example (#833)
A new code example (`quickstart_simulation`) demonstrates Flower simulations using the Virtual Client Engine through Jupyter Notebook (incl. Google Colab).

Client properties (feature preview) (#795)
Clients can implement a new method `get_properties` to enable server-side strategies to query client properties.

Experimental Android support with TFLite (#865)
Android support has finally arrived in `main`! Flower is both client-agnostic and framework-agnostic by design. One can integrate arbitrary client platforms, and with this release, using Flower on Android has become a lot easier.

The example uses TFLite on the client side, along with a new `FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are still experimental, but they are a first step towards a fully-fledged Android SDK and a unified `FedAvg` implementation that integrates the new functionality from `FedAvgAndroid`.

Make gRPC keepalive time user-configurable and decrease default keepalive time (#1069)
The default gRPC keepalive time has been reduced to increase the compatibility of Flower with more cloud environments (for example, Microsoft Azure). Users can configure the keepalive time to customize the gRPC stack based on specific requirements.
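For reference, keepalive behaviour is governed by standard gRPC core channel options; the sketch below shows generic gRPC (not a Flower-specific API), and the values are illustrative only:

```python
# gRPC core channel options that control keepalive behaviour.
# These are standard gRPC channel arguments, not Flower API; the numbers
# below are illustrative, not Flower's defaults.
keepalive_options = [
    ("grpc.keepalive_time_ms", 30_000),          # send a keepalive ping every 30 s
    ("grpc.keepalive_timeout_ms", 10_000),       # wait up to 10 s for a ping ack
    ("grpc.keepalive_permit_without_calls", 1),  # allow pings on idle connections
]

# Such options would be applied when creating a channel, e.g.:
#   import grpc
#   channel = grpc.insecure_channel("localhost:8080", options=keepalive_options)
```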
New differential privacy example using Opacus and PyTorch (#805)
A new code example (`opacus`) demonstrates differentially-private federated learning with Opacus, PyTorch, and Flower.

New Hugging Face Transformers code example (#863)
A new code example (`quickstart_huggingface`) demonstrates usage of Hugging Face Transformers with Flower.

New MLCube code example (#779, #1034, #1065, #1090)
A new code example (`quickstart_mlcube`) demonstrates usage of MLCube with Flower.

SSL-enabled server and client (#842, #844, #845, #847, #993, #994)
SSL enables secure encrypted connections between clients and servers. This release open-sources the Flower secure gRPC implementation to make encrypted communication channels accessible to all Flower users.
Updated FedAdam and FedYogi strategies (#885, #895)
`FedAdam` and `FedYogi` now match the latest version of the Adaptive Federated Optimization paper.

Initialize start_simulation with a list of client IDs (#860)
`start_simulation` can now be called with a list of client IDs (`clients_ids`, type: `List[str]`). Those IDs will be passed to the `client_fn` whenever a client needs to be initialized, which can make it easier to load data partitions that are not accessible through `int` identifiers.

Minor updates
Update `num_examples` calculation in PyTorch code examples (#909)
Expose Flower version through `flwr.__version__` (#952)
`start_server` in `app.py` now returns a `History` object containing metrics from training (#974)
Make `max_workers` (used by `ThreadPoolExecutor`) configurable (#978)
Increase sleep time after server start to three seconds in all code examples (#1086)
Added a new FAQ section to the documentation (#948)
And many more under-the-hood changes, library updates, documentation changes, and tooling improvements!
Incompatible changes
Removed flwr_example and flwr_experimental from release build (#869)
The packages `flwr_example` and `flwr_experimental` have been deprecated since Flower 0.12.0 and they are no longer included in Flower release builds. The associated extras (`baseline`, `examples-pytorch`, `examples-tensorflow`, `http-logger`, `ops`) are now no-ops and will be removed in an upcoming release.
v0.17.0 (2021-09-24)
What's new?
Experimental virtual client engine (#781 #790 #791)
One of Flower's goals is to enable research at scale. This release enables a first (experimental) peek at a major new feature, codenamed the virtual client engine. Virtual clients enable simulations that scale to a (very) large number of clients on a single machine or compute cluster. The easiest way to test the new functionality is to look at the two new code examples called `quickstart_simulation` and `simulation_pytorch`.

The feature is still experimental, so there's no stability guarantee for the API. It's also not quite ready for prime time and comes with a few known caveats. However, those who are curious are encouraged to try it out and share their thoughts.
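A rough sketch of how a simulation is driven: the engine calls a user-provided factory whenever a virtual client is needed. Here `StubClient` stands in for a real `flwr.client.NumPyClient` subclass, and the commented `start_simulation` call uses argument names assumed from this changelog rather than a verified API:

```python
class StubClient:
    """Hypothetical stand-in for a real Flower client."""
    def __init__(self, cid: str):
        self.cid = cid  # the client ID assigned by the virtual client engine

def client_fn(cid: str) -> StubClient:
    # Called by the virtual client engine each time a (virtual) client must be
    # instantiated; clients are created on demand rather than all at once.
    return StubClient(cid)

# With Flower installed, the simulation would be launched roughly like this
# (an assumption based on this changelog, not a definitive signature):
#   import flwr as fl
#   fl.simulation.start_simulation(client_fn=client_fn, num_clients=100)

print(client_fn("7").cid)  # prints "7"
```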
New built-in strategies (#828 #822)
FedYogi - Federated learning strategy using Yogi on the server side. Implementation based on https://arxiv.org/abs/2003.00295
FedAdam - Federated learning strategy using Adam on the server side. Implementation based on https://arxiv.org/abs/2003.00295
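As a reminder, both strategies apply a server-side adaptive update to the aggregated client update; the rule below is sketched from memory of the Adaptive Federated Optimization paper (consult the paper for the authoritative form), with \(\Delta_t\) the aggregated client update, \(\eta\) the server learning rate, and \(\tau\) an adaptivity constant:

```latex
\begin{aligned}
m_t &= \beta_1\, m_{t-1} + (1-\beta_1)\,\Delta_t \\
v_t &= \beta_2\, v_{t-1} + (1-\beta_2)\,\Delta_t^2
      \quad \text{(FedAdam)} \\
v_t &= v_{t-1} - (1-\beta_2)\,\Delta_t^2\,\operatorname{sign}\!\left(v_{t-1} - \Delta_t^2\right)
      \quad \text{(FedYogi)} \\
x_{t+1} &= x_t + \eta\, \frac{m_t}{\sqrt{v_t} + \tau}
\end{aligned}
```

The Yogi variant differs from Adam only in how the second-moment estimate \(v_t\) is updated, which makes its adaptivity less aggressive when the update magnitude changes.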
New PyTorch Lightning code example (#617)
New Variational Auto-Encoder code example (#752)
New scikit-learn code example (#748)
New experimental TensorBoard strategy (#789)
Minor updates
Incompatible changes
Disabled final distributed evaluation (#800)
Prior behaviour was to perform a final round of distributed evaluation on all connected clients, which is often not required (e.g., when using server-side evaluation). The prior behaviour can be enabled by passing `force_final_distributed_eval=True` to `start_server`.

Renamed q-FedAvg strategy (#802)
The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect the notation given in the original paper (q-FFL is the optimization objective, q-FedAvg is the proposed solver). Note that the original (now deprecated) `QffedAvg` class is still available for compatibility reasons (it will be removed in a future release).

Deprecated and renamed code example simulation_pytorch to simulation_pytorch_legacy (#791)
This example has been replaced by a new example. The new example is based on the experimental virtual client engine, which will become the new default way of doing most types of large-scale simulations in Flower. The existing example was kept for reference purposes, but it might be removed in the future.
v0.16.0 (2021-05-11)
What’s new?
New built-in strategies (#549)
(abstract) FedOpt
FedAdagrad
Custom metrics for server and strategies (#717)
The Flower server is now fully task-agnostic: all remaining instances of task-specific metrics (such as `accuracy`) have been replaced by custom metrics dictionaries. Flower 0.15 introduced the capability to pass a dictionary containing custom metrics from client to server. As of this release, custom metrics replace task-specific metrics on the server.

Custom metric dictionaries are now used in two user-facing APIs: they are returned from the Strategy methods `aggregate_fit`/`aggregate_evaluate`, and they enable evaluation functions passed to built-in strategies (via `eval_fn`) to return more than two evaluation metrics. Strategies can even return aggregated metrics dictionaries for the server to keep track of.

Strategy implementations should migrate their `aggregate_fit` and `aggregate_evaluate` methods to the new return type (e.g., by simply returning an empty `{}`), and server-side evaluation functions should migrate from `return loss, accuracy` to `return loss, {"accuracy": accuracy}`.

Flower 0.15-style return types are deprecated (but still supported); compatibility will be removed in a future release.
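A minimal sketch of this migration for a server-side evaluation function (dummy metric values, plain Python for illustration):

```python
def eval_fn_old(weights):
    # Flower 0.15 style (deprecated): a bare (loss, accuracy) pair
    loss, accuracy = 0.42, 0.91
    return loss, accuracy

def eval_fn_new(weights):
    # Flower 0.16 style: a metrics dictionary, which may carry any number
    # of custom metrics, not just accuracy
    loss, accuracy = 0.42, 0.91
    return loss, {"accuracy": accuracy, "f1": 0.89}

loss, metrics = eval_fn_new(None)
print(metrics["accuracy"])  # 0.91
```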
Migration warnings for deprecated functionality (#690)
In earlier versions of Flower, features were often migrated to new APIs while maintaining compatibility with legacy APIs. This release introduces detailed warning messages when usage of deprecated APIs is detected. The new warning messages often provide details on how to migrate to more recent APIs, thus easing the transition from one release to another.
MXNet example and documentation
FedBN implementation in example PyTorch: From Centralized To Federated (#696 #702 #705)
Incompatible changes
Serialization-agnostic server (#721)
The Flower server is now fully serialization-agnostic. Prior usage of class `Weights` (which represents parameters as deserialized NumPy ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). `Parameters` objects are fully serialization-agnostic and represent parameters as byte arrays; the `tensor_type` attribute indicates how these byte arrays should be interpreted (e.g., for serialization/deserialization).

Built-in strategies implement this approach by handling serialization and deserialization to/from `Weights` internally. Custom/3rd-party Strategy implementations should update to the slightly changed Strategy method definitions. Strategy authors can consult PR #721 to see how strategies can easily migrate to the new format.

Deprecated `flwr.server.Server.evaluate`, use `flwr.server.Server.evaluate_round` instead (#717)
v0.15.0 (2021-03-12)
What’s new?
Server-side parameter initialization (#658)
Model parameters can now be initialized on the server side. Server-side parameter initialization works via a new `Strategy` method called `initialize_parameters`.

Built-in strategies support a new constructor argument called `initial_parameters` to set the initial parameters. Built-in strategies will provide these initial parameters to the server on startup and then delete them to free the memory afterwards.

```python
# Create model
model = tf.keras.applications.EfficientNetB0(
    input_shape=(32, 32, 3), weights=None, classes=10
)
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

# Create strategy and initialize parameters on the server side
strategy = fl.server.strategy.FedAvg(
    # ... (other constructor arguments)
    initial_parameters=model.get_weights(),
)

# Start Flower server with the strategy
fl.server.start_server("[::]:8080", config={"num_rounds": 3}, strategy=strategy)
```
If no initial parameters are provided to the strategy, the server will continue to use the current behaviour (namely, it will ask one of the connected clients for its parameters and use these as the initial global parameters).
Deprecations
Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to `flwr.server.strategy.FedAvg`, which is equivalent)
v0.14.0 (2021-02-18)
What’s new?
Generalized Client.fit and Client.evaluate return values (#610 #572 #633)
Clients can now return an additional dictionary mapping `str` keys to values of the following types: `bool`, `bytes`, `float`, `int`, `str`. This means one can return almost arbitrary values from `fit`/`evaluate` and make use of them on the server side!

This improvement also allowed for more consistent return types between `fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, dict)` representing the loss, number of examples, and a dictionary holding arbitrary problem-specific values like accuracy.

In case you wondered: this feature is compatible with existing projects; the additional dictionary return value is optional. New code should however migrate to the new return types to be compatible with upcoming Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, `evaluate`: `float, int, Dict[str, Scalar]`). See the example below for details.

Code example: note the additional dictionary return values in both `FlwrClient.fit` and `FlwrClient.evaluate`:

```python
class FlwrClient(fl.client.NumPyClient):
    def fit(self, parameters, config):
        net.set_parameters(parameters)
        train_loss = train(net, trainloader)
        return net.get_weights(), len(trainloader), {"train_loss": train_loss}

    def evaluate(self, parameters, config):
        net.set_parameters(parameters)
        loss, accuracy, custom_metric = test(net, testloader)
        return loss, len(testloader), {"accuracy": accuracy, "custom_metric": custom_metric}
```
Generalized config argument in Client.fit and Client.evaluate (#595)
The `config` argument used to be of type `Dict[str, str]`, which means that dictionary values were expected to be strings. The new release generalizes this to enable values of the following types: `bool`, `bytes`, `float`, `int`, `str`.

This means one can now pass almost arbitrary values to `fit`/`evaluate` using the `config` dictionary. Yay, no more `str(epochs)` on the server side and `int(config["epochs"])` on the client side!

Code example: note that the `config` dictionary now contains non-`str` values in both `Client.fit` and `Client.evaluate`:

```python
class FlwrClient(fl.client.NumPyClient):
    def fit(self, parameters, config):
        net.set_parameters(parameters)
        epochs: int = config["epochs"]
        train_loss = train(net, trainloader, epochs)
        return net.get_weights(), len(trainloader), {"train_loss": train_loss}

    def evaluate(self, parameters, config):
        net.set_parameters(parameters)
        batch_size: int = config["batch_size"]
        loss, accuracy = test(net, testloader, batch_size)
        return loss, len(testloader), {"accuracy": accuracy}
```
v0.13.0 (2021-01-08)
What’s new?
New example: PyTorch From Centralized To Federated (#549)
Improved documentation
Bugfix:
v0.12.0 (2020-12-07)
Important changes:
v0.11.0 (2020-11-30)
Incompatible changes:
Renamed strategy methods (#486) to unify the naming of Flower's public APIs. Other public methods/functions (e.g., every method in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, which is why we're removing it from the four methods in Strategy. To migrate, rename the following `Strategy` methods accordingly:

on_configure_evaluate => configure_evaluate
on_aggregate_evaluate => aggregate_evaluate
on_configure_fit => configure_fit
on_aggregate_fit => aggregate_fit
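The rename is purely mechanical; a small hypothetical helper (not part of Flower) that captures the mapping, e.g. for auditing a codebase:

```python
# The four Strategy method renames introduced by #486, as a lookup table.
RENAMED_STRATEGY_METHODS = {
    "on_configure_evaluate": "configure_evaluate",
    "on_aggregate_evaluate": "aggregate_evaluate",
    "on_configure_fit": "configure_fit",
    "on_aggregate_fit": "aggregate_fit",
}

def migrate_name(method_name: str) -> str:
    # Return the post-rename name; names that were not renamed pass through.
    return RENAMED_STRATEGY_METHODS.get(method_name, method_name)

print(migrate_name("on_configure_fit"))  # configure_fit
```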
Important changes: