flwr.server.strategy package

Contains the strategy abstraction and different implementations.

Submodules

flwr.server.strategy.aggregate module

Aggregation functions for strategy implementations.

flwr.server.strategy.aggregate.aggregate(results: List[Tuple[List[numpy.ndarray], int]]) → List[numpy.ndarray][source]

Compute weighted average.
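The weighted average computed by `aggregate` can be sketched as follows. The function below mirrors the `(weights, num_examples)` tuple shape of the signature above, but it is an illustration of the math, not the library implementation.

```python
import numpy as np

def weighted_average(results):
    """Layer-wise weighted average of client weights.

    `results` is a list of (weights, num_examples) tuples, as in the
    aggregate() signature above. Sketch only, not the library code.
    """
    total_examples = sum(num for _, num in results)
    # Scale each client's layers by its number of training examples
    scaled = [[layer * num for layer in weights] for weights, num in results]
    # Sum corresponding layers across clients, then normalize
    return [np.sum(layers, axis=0) / total_examples for layers in zip(*scaled)]
```

For example, a client with 3 examples contributes three times the weight of a client with 1 example to every layer.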

flwr.server.strategy.aggregate.aggregate_qffl(weights: List[numpy.ndarray], deltas: List[List[numpy.ndarray]], hs_fll: List[List[numpy.ndarray]]) → List[numpy.ndarray][source]

Compute weighted average based on the Q-FFL paper.
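The q-FFL server update from the paper can be sketched as below: subtract the sum of client deltas, normalized by the sum of the h terms, from the current global weights. This follows the paper's update rule; the exact library code (and the nested shape of `hs_fll`) may differ.

```python
import numpy as np

def qffl_update(weights, deltas, hs):
    """Sketch of the q-FFL server update (Li et al., 2020).

    new_weights = weights - (sum of client deltas) / (sum of h terms).
    Illustration only, not the library implementation.
    """
    denominator = float(np.sum(np.asarray(hs)))
    # Sum the per-client deltas layer by layer
    summed = [np.sum(layers, axis=0) for layers in zip(*deltas)]
    return [w - s / denominator for w, s in zip(weights, summed)]
```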

flwr.server.strategy.aggregate.weighted_loss_avg(results: List[Tuple[int, float, Optional[float]]]) → float[source]

Aggregate evaluation results obtained from multiple clients.
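Given the `(num_examples, loss, optional accuracy)` tuples in the signature above, the weighted loss average amounts to the following sketch (not the library code):

```python
def weighted_loss_sketch(results):
    # results: (num_examples, loss, optional accuracy) tuples, matching
    # the weighted_loss_avg signature above. Sketch of the weighted mean.
    total_examples = sum(num for num, _, _ in results)
    return sum(num * loss for num, loss, _ in results) / total_examples
```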

flwr.server.strategy.default module

Configurable strategy implementation.

class flwr.server.strategy.default.DefaultStrategy(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 2, min_eval_clients: int = 2, min_available_clients: int = 2, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, on_fit_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, accept_failures: bool = True)[source]

Bases: flwr.server.strategy.fedavg.FedAvg

Configurable default strategy.

flwr.server.strategy.fast_and_slow module

Federating: Fast and Slow.

flwr.server.strategy.fault_tolerant_fedavg module

Fault-tolerant variant of FedAvg strategy.

class flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 1, min_eval_clients: int = 1, min_available_clients: int = 1, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, on_fit_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, min_completion_rate_fit: float = 0.5, min_completion_rate_evaluate: float = 0.5)[source]

Bases: flwr.server.strategy.fedavg.FedAvg

Configurable fault-tolerant FedAvg strategy implementation.

on_aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float][source]

Aggregate evaluation losses using weighted average.

on_aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]][source]

Aggregate fit results using weighted average.
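The fault-tolerance behavior hinges on the `min_completion_rate_fit` and `min_completion_rate_evaluate` thresholds above: aggregation proceeds only if enough clients returned results. A sketch of that gate (the exact check in the library may differ):

```python
def completed_enough(num_results, num_failures, min_completion_rate=0.5):
    """Proceed with aggregation only when the fraction of clients that
    returned results meets the threshold. The default mirrors the
    min_completion_rate_* parameters above; sketch only."""
    total = num_results + num_failures
    return total > 0 and num_results / total >= min_completion_rate
```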

flwr.server.strategy.fedavg module

Federated Averaging (FedAvg) [McMahan et al., 2016] strategy.

Paper: https://arxiv.org/abs/1602.05629

class flwr.server.strategy.fedavg.FedAvg(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 2, min_eval_clients: int = 2, min_available_clients: int = 2, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, on_fit_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, accept_failures: bool = True)[source]

Bases: flwr.server.strategy.strategy.Strategy

Configurable FedAvg strategy implementation.

evaluate(weights: List[numpy.ndarray]) → Optional[Tuple[float, float]][source]

Evaluate model weights using an evaluation function (if provided).

num_evaluation_clients(num_available_clients: int) → Tuple[int, int][source]

Use a fraction of available clients for evaluation.

num_fit_clients(num_available_clients: int) → Tuple[int, int][source]

Return the sample size and the required number of available clients.
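One plausible reading of this rule, with defaults mirroring the FedAvg constructor above (the actual implementation may differ in detail): sample a fraction of the available clients, but never fewer than `min_fit_clients`.

```python
def num_fit_clients_sketch(num_available, fraction_fit=0.1,
                           min_fit_clients=2, min_available_clients=2):
    # Sample fraction_fit of the available clients, floored at
    # min_fit_clients. Sketch of the documented rule, not library code.
    sample_size = max(int(num_available * fraction_fit), min_fit_clients)
    return sample_size, min_available_clients
```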

on_aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float][source]

Aggregate evaluation losses using weighted average.

on_aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]][source]

Aggregate fit results using weighted average.

on_conclude_round(rnd: int, loss: Optional[float], acc: Optional[float]) → bool[source]

Always continue training.

on_configure_evaluate(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateIns]][source]

Configure the next round of evaluation.

on_configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]][source]

Configure the next round of training.
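The `on_fit_config_fn` parameter above accepts any `Callable[[int], Dict[str, str]]` that maps a round number to the configuration sent to each client. The keys in this example are illustrative, not prescribed by the API:

```python
def fit_config(rnd):
    """Example on_fit_config_fn: round number -> Dict[str, str].

    Note that all values are strings, per the declared type.
    Key names here are illustrative only.
    """
    return {
        "epoch_global": str(rnd),
        "epochs": "1" if rnd < 3 else "2",  # train longer in later rounds
        "batch_size": "32",
    }
```

An analogous function can be passed as `on_evaluate_config_fn`.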

flwr.server.strategy.fedfs_v0 module

Federating: Fast and Slow (v0).

class flwr.server.strategy.fedfs_v0.FedFSv0(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 1, min_eval_clients: int = 1, min_available_clients: int = 1, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, min_completion_rate_fit: float = 0.5, min_completion_rate_evaluate: float = 0.5, on_fit_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, r_fast: int = 1, r_slow: int = 1, t_fast: int = 10, t_slow: int = 10)[source]

Bases: flwr.server.strategy.fedavg.FedAvg

Strategy implementation which alternates between fast and slow rounds.

on_aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float][source]

Aggregate evaluation losses using weighted average.

on_aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]][source]

Aggregate fit results using weighted average.

on_configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]][source]

Configure the next round of training.
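One way to picture the fast/slow alternation governed by `r_fast`, `r_slow`, `t_fast`, and `t_slow` above: rounds repeat in a cycle of `r_fast` fast rounds (short timeout `t_fast`) followed by `r_slow` slow rounds (timeout `t_slow`). This is an illustration of such a schedule, not the library's exact rule:

```python
def is_fast_round(rnd, r_fast=1, r_slow=1):
    # Repeating cycle of r_fast fast rounds followed by r_slow slow
    # rounds, with rounds numbered from 1. Illustration only.
    return (rnd - 1) % (r_fast + r_slow) < r_fast
```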

flwr.server.strategy.fedfs_v1 module

Federating: Fast and Slow (v1).

class flwr.server.strategy.fedfs_v1.FedFSv1(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 1, min_eval_clients: int = 1, min_available_clients: int = 1, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, min_completion_rate_fit: float = 0.5, min_completion_rate_evaluate: float = 0.5, on_fit_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, dynamic_timeout_percentile: float = 0.8, r_fast: int = 1, r_slow: int = 1, t_max: int = 10, use_past_contributions: bool = False)[source]

Bases: flwr.server.strategy.fedavg.FedAvg

Strategy implementation which alternates between sampling fast and slow clients.

on_aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float][source]

Aggregate evaluation losses using weighted average.

on_aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]][source]

Aggregate fit results using weighted average.

on_configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]][source]

Configure the next round of training.

flwr.server.strategy.qffedavg module

Fair Resource Allocation in Federated Learning [Li et al., 2020] strategy.

Paper: https://openreview.net/pdf?id=ByexElSYDr

class flwr.server.strategy.qffedavg.QffedAvg(q_param: float = 0.2, qffl_learning_rate: float = 0.1, fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 1, min_eval_clients: int = 1, min_available_clients: int = 1, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, on_fit_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, str]]] = None, accept_failures: bool = True)[source]

Bases: flwr.server.strategy.fedavg.FedAvg

Configurable QffedAvg strategy implementation.

evaluate(weights: List[numpy.ndarray]) → Optional[Tuple[float, float]][source]

Evaluate model weights using an evaluation function (if provided).

num_evaluation_clients(num_available_clients: int) → Tuple[int, int][source]

Use a fraction of available clients for evaluation.

num_fit_clients(num_available_clients: int) → Tuple[int, int][source]

Return the sample size and the required number of available clients.

on_aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float][source]

Aggregate evaluation losses using weighted average.

on_aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]][source]

Aggregate fit results using weighted average.

on_conclude_round(rnd: int, loss: Optional[float], acc: Optional[float]) → bool[source]

Always continue training.

on_configure_evaluate(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateIns]][source]

Configure the next round of evaluation.

on_configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]][source]

Configure the next round of training.

flwr.server.strategy.strategy module

Flower server strategy.

class flwr.server.strategy.strategy.Strategy[source]

Bases: abc.ABC

Abstract class to implement custom server strategies.

abstract evaluate(weights: List[numpy.ndarray]) → Optional[Tuple[float, float]][source]

Evaluate the current model weights.

abstract on_aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float][source]

Aggregate evaluation results.

abstract on_aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]][source]

Aggregate training results.

abstract on_conclude_round(rnd: int, loss: Optional[float], acc: Optional[float]) → bool[source]

Conclude the current round of federated learning and decide whether to continue training.

abstract on_configure_evaluate(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateIns]][source]

Configure the next round of evaluation.

abstract on_configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]][source]

Configure the next round of training.

Parameters
  • rnd – Integer. The current round of federated learning.

  • weights – Weights. The current (global) model weights.

  • client_manager – ClientManager. The client manager which knows about all currently connected clients.

Returns

A list of tuples. Each tuple pairs a ClientProxy with the FitIns for that particular ClientProxy. Any ClientProxy not included in this list will not participate in the next round of federated learning.
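The return contract above can be illustrated with plain Python stand-ins. Here `clients` takes the place of the proxies a ClientManager would provide, and the dictionary stands in for FitIns; all names are hypothetical.

```python
import random

def configure_fit_sketch(rnd, weights, clients, sample_size):
    """Illustrative client selection for one training round.

    Pairs each sampled client with the same instructions, mirroring the
    (ClientProxy, FitIns) tuples described above. Names are hypothetical.
    """
    fit_ins = {"weights": weights, "config": {"rnd": str(rnd)}}
    # Clients absent from the returned list sit out this round entirely
    return [(client, fit_ins) for client in random.sample(clients, sample_size)]
```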