DPFedAvgFixed

class DPFedAvgFixed(strategy: Strategy, num_sampled_clients: int, clip_norm: float, noise_multiplier: float = 1, server_side_noising: bool = True)

Bases: Strategy

Wrapper for configuring a Strategy for DP with Fixed Clipping.

Warning

This class is deprecated and will be removed in a future release.
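
While this wrapper remains available, typical usage is to wrap an existing strategy and pass the result to the server. The following is a minimal sketch; the FedAvg arguments, server address, and round count are placeholder values, not recommendations.

   import flwr as fl
   from flwr.server.strategy import DPFedAvgFixed, FedAvg

   # Wrap a base strategy with fixed-clipping DP before starting the server.
   base_strategy = FedAvg(fraction_fit=0.5, min_fit_clients=10)
   dp_strategy = DPFedAvgFixed(
       strategy=base_strategy,    # strategy being wrapped
       num_sampled_clients=10,    # expected number of clients sampled per round
       clip_norm=1.0,             # fixed L2 threshold applied to client updates
       noise_multiplier=1.0,      # noise stddev relative to the clip norm
       server_side_noising=True,  # add Gaussian noise on the server
   )

   fl.server.start_server(
       server_address="0.0.0.0:8080",
       config=fl.server.ServerConfig(num_rounds=3),
       strategy=dp_strategy,
   )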

Methods

  • aggregate_evaluate(server_round, results, ...) – Aggregate evaluation losses using the given strategy.

  • aggregate_fit(server_round, results, failures) – Aggregate training results using unweighted aggregation.

  • configure_evaluate(server_round, parameters, ...) – Configure the next round of evaluation using the specified strategy.

  • configure_fit(server_round, parameters, ...) – Configure the next round of training incorporating Differential Privacy (DP).

  • evaluate(server_round, parameters) – Evaluate model parameters using an evaluation function from the strategy.

  • initialize_parameters(client_manager) – Initialize global model parameters using the given strategy.

aggregate_evaluate(server_round: int, results: List[Tuple[ClientProxy, EvaluateRes]], failures: List[Tuple[ClientProxy, EvaluateRes] | BaseException]) → Tuple[float | None, Dict[str, bool | bytes | float | int | str]]

Aggregate evaluation losses using the given strategy.

aggregate_fit(server_round: int, results: List[Tuple[ClientProxy, FitRes]], failures: List[Tuple[ClientProxy, FitRes] | BaseException]) → Tuple[Parameters | None, Dict[str, bool | bytes | float | int | str]]

Aggregate training results using unweighted aggregation.
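
For intuition only, the sketch below shows the arithmetic behind unweighted aggregation of clipped updates with server-side Gaussian noising. It is not the wrapper's implementation (which delegates to the wrapped strategy), and the noise-scaling convention shown is an assumption; check the source for the exact scaling.

   import numpy as np

   def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
       # Scale the update so its L2 norm does not exceed clip_norm.
       norm = np.linalg.norm(update)
       return update * min(1.0, clip_norm / (norm + 1e-12))

   def aggregate_with_noise(updates, clip_norm, noise_multiplier):
       clipped = [clip_update(u, clip_norm) for u in updates]
       # Unweighted mean: every client counts equally, regardless of dataset size.
       mean = np.mean(clipped, axis=0)
       # Gaussian noise with stddev proportional to the clip norm and noise
       # multiplier, divided by the number of sampled clients (one common
       # convention, assumed here).
       stddev = noise_multiplier * clip_norm / len(updates)
       return mean + np.random.normal(0.0, stddev, size=mean.shape)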

configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) → List[Tuple[ClientProxy, EvaluateIns]]

Configure the next round of evaluation using the specified strategy.

Parameters:
  • server_round (int) – The current round of federated learning.

  • parameters (Parameters) – The current (global) model parameters.

  • client_manager (ClientManager) – The client manager which holds all currently connected clients.

Returns:

evaluate_configuration – A list of tuples. Each tuple in the list identifies a ClientProxy and the EvaluateIns for this particular ClientProxy. If a particular ClientProxy is not included in this list, it means that this ClientProxy will not participate in the next round of federated evaluation.

Return type:

List[Tuple[ClientProxy, EvaluateIns]]

configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) → List[Tuple[ClientProxy, FitIns]]

Configure the next round of training incorporating Differential Privacy (DP).

The configuration for the next training round includes DP-related information, such as the clip norm and noise stddev.

Parameters:
  • server_round (int) – The current round of federated learning.

  • parameters (Parameters) – The current (global) model parameters.

  • client_manager (ClientManager) – The client manager which holds all currently connected clients.

Returns:

fit_configuration – A list of tuples. Each tuple in the list identifies a ClientProxy and the FitIns for this particular ClientProxy. If a particular ClientProxy is not included in this list, it means that this ClientProxy will not participate in the next round of federated learning.

Return type:

List[Tuple[ClientProxy, FitIns]]
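
On the client side, the DP values injected here arrive through the FitIns config dictionary. The sketch below is hypothetical: the key name "dpfedavg_clip_norm" is an assumption about what the wrapper injects (check the source), and the local training step is a placeholder.

   import flwr as fl
   import numpy as np

   class DpAwareClient(fl.client.NumPyClient):
       def fit(self, parameters, config):
           clip_norm = config.get("dpfedavg_clip_norm")  # assumed key name
           # Placeholder for local training; new_params would normally come
           # from training on the local dataset starting from `parameters`.
           new_params = [p.copy() for p in parameters]
           if clip_norm is not None:
               # Clip the model update (new minus global) to the fixed L2 norm.
               deltas = [n - g for n, g in zip(new_params, parameters)]
               flat = np.concatenate([d.ravel() for d in deltas])
               scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
               new_params = [g + d * scale for g, d in zip(parameters, deltas)]
           # Return num_examples=1, consistent with unweighted aggregation.
           return new_params, 1, {}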

evaluate(server_round: int, parameters: Parameters) → Tuple[float, Dict[str, bool | bytes | float | int | str]] | None

Evaluate model parameters using an evaluation function from the strategy.

initialize_parameters(client_manager: ClientManager) → Parameters | None

Initialize global model parameters using the given strategy.