API Reference - flwr¶
client¶
Flower Client.
Client¶
class flwr.client.Client¶
Abstract base class for Flower clients.
abstract evaluate(ins: flwr.common.typing.EvaluateIns) → flwr.common.typing.EvaluateRes¶
Evaluate the provided weights using the locally held dataset.
- Parameters
ins (EvaluateIns) – The evaluation instructions containing (global) model parameters received from the server and a dictionary of configuration values used to customize the local evaluation process.
- Returns
The evaluation result containing the loss on the local dataset and other details such as the number of local data examples used for evaluation.
- Return type
EvaluateRes
abstract fit(ins: flwr.common.typing.FitIns) → flwr.common.typing.FitRes¶
Refine the provided weights using the locally held dataset.
- Parameters
ins (FitIns) – The training instructions containing (global) model parameters received from the server and a dictionary of configuration values used to customize the local training process.
- Returns
The training result containing updated parameters and other details such as the number of local training examples used for training.
- Return type
FitRes
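Because Client works directly with the EvaluateIns/EvaluateRes and FitIns/FitRes types, a subclass has to handle parameter (de)serialization itself. The skeleton below is only a structural sketch: the class name MyClient and the method bodies are illustrative placeholders, and the exact fields of the result types depend on the flwr version in use.

    import flwr as fl

    class MyClient(fl.client.Client):
        """Structural sketch only; MyClient is a hypothetical name."""

        def fit(self, ins):
            # ins carries the (global) model parameters and a config dict (see FitIns above).
            # 1. Deserialize ins.parameters into local model weights.
            # 2. Train on the locally held dataset.
            # 3. Return a FitRes with the updated parameters and the number of examples used.
            ...

        def evaluate(self, ins):
            # 1. Deserialize ins.parameters into local model weights.
            # 2. Evaluate on the locally held dataset.
            # 3. Return an EvaluateRes with the loss and the number of examples used.
            ...

In practice, the NumPyClient interface documented below is often more convenient, since it exchanges plain lists of NumPy arrays instead of the serialized types.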
start_client¶
flwr.client.start_client(server_address: str, client: flwr.client.client.Client, grpc_max_message_length: int = 536870912) → None¶
Start a Flower Client which connects to a gRPC server.
- Parameters
server_address – str. The IPv6 address of the server. If the Flower server runs on the same machine on port 8080, then server_address would be “[::]:8080”.
client – flwr.client.Client. An implementation of the abstract base class flwr.client.Client.
grpc_max_message_length – int (default: 536_870_912, this equals 512MB). The maximum length of gRPC messages that can be exchanged with the Flower server. The default should be sufficient for most models. Users who train very large models might need to increase this value. Note that the Flower server needs to be started with the same value (see flwr.server.start_server), otherwise it will not know about the increased limit and block larger messages.
- Returns
None.
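A minimal sketch of calling start_client, assuming a Flower server is already listening on port 8080 on the same machine and reusing the hypothetical MyClient subclass sketched above:

    import flwr as fl

    # MyClient is the hypothetical Client subclass sketched in the Client section above.
    client = MyClient()

    # Connect to the server and participate in the federated learning run.
    fl.client.start_client(server_address="[::]:8080", client=client)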
NumPyClient¶
class flwr.client.NumPyClient¶
Abstract base class for Flower clients using NumPy.
abstract evaluate(parameters: List[numpy.ndarray], config: Dict[str, Union[bool, bytes, float, int, str]]) → Union[Tuple[int, float, float], Tuple[int, float, float, Dict[str, Union[bool, bytes, float, int, str]]]]¶
Evaluate the provided weights using the locally held dataset.
- Parameters
parameters (List[np.ndarray]) – The current (global) model parameters.
config (Dict[str, Scalar]) – Configuration parameters which allow the server to influence evaluation on the client. It can be used to communicate arbitrary values from the server to the client, for example, to influence the number of examples used for evaluation.
- Returns
num_examples (int): The number of examples used for evaluation.
loss (float): The evaluation loss of the model on the local dataset.
accuracy (float, deprecated): The accuracy of the model on the local test dataset.
metrics (Metrics, optional): A dictionary mapping arbitrary string keys to values of type bool, bytes, float, int, or str. Metrics can be used to communicate arbitrary values back to the server.
- Return type
Tuple[int, float, float] or Tuple[int, float, float, Dict[str, Scalar]]
abstract fit(parameters: List[numpy.ndarray], config: Dict[str, Union[bool, bytes, float, int, str]]) → Tuple[List[numpy.ndarray], int]¶
Train the provided parameters using the locally held dataset.
- Parameters
parameters – List[numpy.ndarray]. The current (global) model parameters.
config – Dict[str, Scalar]. Configuration parameters which allow the server to influence training on the client. It can be used to communicate arbitrary values from the server to the client, for example, to set the number of (local) training epochs.
- Returns
Updated parameters and an int representing the number of examples used for training.
- Return type
A tuple containing two elements: List[numpy.ndarray] and int
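A minimal, self-contained sketch of a NumPyClient subclass. The "model" here is a single NumPy array and the training and evaluation logic are placeholders, so the class name and all numbers are illustrative only; depending on the flwr version, additional methods such as get_parameters may also need to be implemented.

    import numpy as np
    import flwr as fl

    class MyNumPyClient(fl.client.NumPyClient):
        """Toy client: the 'model' is a single weight vector, training is faked."""

        def __init__(self):
            self.weights = [np.zeros(10)]

        def fit(self, parameters, config):
            # Adopt the global parameters, then pretend to train locally.
            self.weights = [w + 0.1 for w in parameters]
            num_examples = 100  # number of local training examples (placeholder)
            return self.weights, num_examples

        def evaluate(self, parameters, config):
            # Report a dummy loss and accuracy for the locally held dataset.
            num_examples, loss, accuracy = 100, 0.5, 0.9
            return num_examples, loss, accuracy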
start_numpy_client¶
flwr.client.start_numpy_client(server_address: str, client: flwr.client.numpy_client.NumPyClient, grpc_max_message_length: int = 536870912) → None¶
Start a Flower NumPyClient which connects to a gRPC server.
- Parameters
server_address – str. The IPv6 address of the server. If the Flower server runs on the same machine on port 8080, then server_address would be “[::]:8080”.
client – flwr.client.NumPyClient. An implementation of the abstract base class flwr.client.NumPyClient.
grpc_max_message_length – int (default: 536_870_912, this equals 512MB). The maximum length of gRPC messages that can be exchanged with the Flower server. The default should be sufficient for most models. Users who train very large models might need to increase this value. Note that the Flower server needs to be started with the same value (see flwr.server.start_server), otherwise it will not know about the increased limit and block larger messages.
- Returns
None.
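A sketch of the corresponding call, assuming a Flower server is reachable on port 8080 on the same machine and reusing the hypothetical MyNumPyClient from the sketch above:

    import flwr as fl

    # MyNumPyClient is the hypothetical subclass sketched in the NumPyClient section above.
    fl.client.start_numpy_client(server_address="[::]:8080", client=MyNumPyClient())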
server¶
Flower Server.
server.start_server¶
flwr.server.start_server(server_address: str = '[::]:8080', server: Optional[flwr.server.server.Server] = None, config: Optional[Dict[str, int]] = None, strategy: Optional[flwr.server.strategy.strategy.Strategy] = None, grpc_max_message_length: int = 536870912) → None¶
Start a Flower server using the gRPC transport layer.
- Parameters
server_address – Optional[str] (default: “[::]:8080”). The IPv6 address of the server.
server – Optional[flwr.server.Server] (default: None). An implementation of the abstract base class flwr.server.Server. If no instance is provided, then start_server will create one.
config – Optional[Dict[str, int]] (default: None). The only currently supported value is num_rounds, so a full configuration object instructing the server to perform three rounds of federated learning looks like the following: {“num_rounds”: 3}.
strategy – Optional[flwr.server.Strategy] (default: None). An implementation of the abstract base class flwr.server.Strategy. If no strategy is provided, then start_server will use flwr.server.strategy.FedAvg.
grpc_max_message_length – int (default: 536_870_912, this equals 512MB). The maximum length of gRPC messages that can be exchanged with the Flower clients. The default should be sufficient for most models. Users who train very large models might need to increase this value. Note that the Flower clients need to be started with the same value (see flwr.client.start_client), otherwise clients will not know about the increased limit and block larger messages.
- Returns
None.
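A minimal sketch of starting a server with the defaults documented above (FedAvg strategy) for three rounds:

    import flwr as fl

    # Run three rounds of federated learning with the default FedAvg strategy.
    fl.server.start_server(server_address="[::]:8080", config={"num_rounds": 3})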
server.strategy¶
Contains the strategy abstraction and different implementations.
server.strategy.Strategy¶
class flwr.server.strategy.Strategy¶
Abstract base class for server strategy implementations.
abstract aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float]¶
Aggregate evaluation results.
- Parameters
rnd – int. The current round of federated learning.
results – List[Tuple[ClientProxy, EvaluateRes]]. Successful updates from the previously selected and configured clients. Each pair of (ClientProxy, EvaluateRes) constitutes a successful update from one of the previously selected clients. Note that not all previously selected clients are necessarily included in this list: a client might drop out and not submit a result. For each client that did not submit an update, there should be an Exception in failures.
failures – List[BaseException]. Exceptions that occurred while the server was waiting for client updates.
- Returns
Optional float representing the aggregated evaluation result. Aggregation typically uses some variant of a weighted average.
abstract aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]]¶
Aggregate training results.
- Parameters
rnd – int. The current round of federated learning.
results – List[Tuple[ClientProxy, FitRes]]. Successful updates from the previously selected and configured clients. Each pair of (ClientProxy, FitRes) constitutes a successful update from one of the previously selected clients. Note that not all previously selected clients are necessarily included in this list: a client might drop out and not submit a result. For each client that did not submit an update, there should be an Exception in failures.
failures – List[BaseException]. Exceptions that occurred while the server was waiting for client updates.
- Returns
Optional flwr.common.Weights. If weights are returned, then the server will treat these as the new global model weights (i.e., it will replace the previous weights with the ones returned from this method). If None is returned (e.g., because there were only failures and no viable results), then the server will not update the previous model weights, the updates received in this round are discarded, and the global model weights remain the same.
abstract configure_evaluate(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateIns]]¶
Configure the next round of evaluation.
- Parameters
rnd – Integer. The current round of federated learning.
weights – Weights. The current (global) model weights.
client_manager – ClientManager. The client manager which knows about all currently connected clients.
- Returns
A list of tuples. Each tuple in the list identifies a ClientProxy and the EvaluateIns for this particular ClientProxy. If a particular ClientProxy is not included in this list, it means that this ClientProxy will not participate in the next round of federated evaluation.
abstract configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]]¶
Configure the next round of training.
- Parameters
rnd – Integer. The current round of federated learning.
weights – Weights. The current (global) model weights.
client_manager – ClientManager. The client manager which knows about all currently connected clients.
- Returns
A list of tuples. Each tuple in the list identifies a ClientProxy and the FitIns for this particular ClientProxy. If a particular ClientProxy is not included in this list, it means that this ClientProxy will not participate in the next round of federated learning.
abstract evaluate(weights: List[numpy.ndarray]) → Optional[Tuple[float, float]]¶
Evaluate the current model weights.
This function can be used to perform centralized (i.e., server-side) evaluation of model weights.
- Parameters
weights – Weights. The current (global) model weights.
- Returns
The evaluation result, usually a Tuple containing loss and accuracy.
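Implementing Strategy directly means providing all five abstract methods above. A common shortcut, sketched below under the assumption that only the aggregation step needs to change, is to subclass an existing implementation such as FedAvg (documented next) and override a single method; the class name LoggingFedAvg is illustrative only.

    import flwr as fl

    class LoggingFedAvg(fl.server.strategy.FedAvg):
        """FedAvg variant that logs how many clients reported back each round."""

        def aggregate_fit(self, rnd, results, failures):
            print(f"Round {rnd}: {len(results)} results, {len(failures)} failures")
            # Delegate the actual weighted aggregation to FedAvg.
            return super().aggregate_fit(rnd, results, failures)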
server.strategy.FedAvg¶
class flwr.server.strategy.FedAvg(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 2, min_eval_clients: int = 2, min_available_clients: int = 2, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, on_fit_config_fn: Optional[Callable[[int], Dict[str, Union[bool, bytes, float, int, str]]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, Union[bool, bytes, float, int, str]]]] = None, accept_failures: bool = True)¶
Configurable FedAvg strategy implementation.
__init__(fraction_fit: float = 0.1, fraction_eval: float = 0.1, min_fit_clients: int = 2, min_eval_clients: int = 2, min_available_clients: int = 2, eval_fn: Optional[Callable[[List[numpy.ndarray]], Optional[Tuple[float, float]]]] = None, on_fit_config_fn: Optional[Callable[[int], Dict[str, Union[bool, bytes, float, int, str]]]] = None, on_evaluate_config_fn: Optional[Callable[[int], Dict[str, Union[bool, bytes, float, int, str]]]] = None, accept_failures: bool = True) → None¶
Federated Averaging strategy.
Implementation based on https://arxiv.org/abs/1602.05629
- Parameters
fraction_fit (float, optional) – Fraction of clients used during training. Defaults to 0.1.
fraction_eval (float, optional) – Fraction of clients used during validation. Defaults to 0.1.
min_fit_clients (int, optional) – Minimum number of clients used during training. Defaults to 2.
min_eval_clients (int, optional) – Minimum number of clients used during validation. Defaults to 2.
min_available_clients (int, optional) – Minimum number of total clients in the system. Defaults to 2.
eval_fn (Callable[[Weights], Optional[Tuple[float, float]]], optional) – Function used for validation. Defaults to None.
on_fit_config_fn (Callable[[int], Dict[str, Scalar]], optional) – Function used to configure training. Defaults to None.
on_evaluate_config_fn (Callable[[int], Dict[str, Scalar]], optional) – Function used to configure validation. Defaults to None.
accept_failures (bool, optional) – Whether or not to accept rounds containing failures. Defaults to True.
aggregate_evaluate(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateRes]], failures: List[BaseException]) → Optional[float]¶
Aggregate evaluation losses using weighted average.
aggregate_fit(rnd: int, results: List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitRes]], failures: List[BaseException]) → Optional[List[numpy.ndarray]]¶
Aggregate fit results using weighted average.
configure_evaluate(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.EvaluateIns]]¶
Configure the next round of evaluation.
configure_fit(rnd: int, weights: List[numpy.ndarray], client_manager: flwr.server.client_manager.ClientManager) → List[Tuple[flwr.server.client_proxy.ClientProxy, flwr.common.typing.FitIns]]¶
Configure the next round of training.
evaluate(weights: List[numpy.ndarray]) → Optional[Tuple[float, float]]¶
Evaluate model weights using an evaluation function (if provided).
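A sketch of configuring FedAvg with non-default sampling and passing it to start_server; the fractions and client counts are arbitrary example values:

    import flwr as fl

    # Sample half of the connected clients for training and evaluation,
    # and wait until at least 10 clients are available before starting a round.
    strategy = fl.server.strategy.FedAvg(
        fraction_fit=0.5,
        fraction_eval=0.5,
        min_fit_clients=10,
        min_eval_clients=10,
        min_available_clients=10,
    )

    fl.server.start_server(config={"num_rounds": 3}, strategy=strategy)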