Identical¶
Identical-gradient Byzantine attacks.
These attacks generate f_real references to the same newly created
Byzantine gradient. The gradient is built from the average honest gradient plus
a scaled attack direction.
Available attack directions are:
Bulyan: unit vector, optionally restricted to one coordinate.
Empire: negative average honest gradient.
Little: coordinate-wise standard deviation of honest gradients.
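The three directions above can be sketched as follows. This is a minimal illustration using plain Python lists in place of torch tensors; the function names (`direction_empire`, `direction_little`, `direction_bulyan`) and the exact normalization are assumptions, not the library's actual API.

```python
import math

def mean(vectors):
    # Coordinate-wise average of the honest gradients.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def direction_empire(grad_honests):
    # Empire: the negated average honest gradient.
    return [-x for x in mean(grad_honests)]

def direction_little(grad_honests):
    # Little: coordinate-wise (population) standard deviation
    # of the honest gradients.
    mu = mean(grad_honests)
    n = len(grad_honests)
    return [math.sqrt(sum((v[i] - mu[i]) ** 2 for v in grad_honests) / n)
            for i in range(len(mu))]

def direction_bulyan(dim, coordinate=None):
    # Bulyan: a unit vector, optionally with all mass on one coordinate.
    if coordinate is not None:
        d = [0.0] * dim
        d[coordinate] = 1.0
        return d
    return [1.0 / math.sqrt(dim)] * dim

grads = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
print(direction_empire(grads))  # [-2.5, -3.5, -4.5]
print(direction_little(grads))  # [1.5, 1.5, 1.5]
```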
Use Case¶
Testing aggregation rules against attacks that submit identical malicious gradients from multiple Byzantine workers.
Properties¶
Identical gradients: All Byzantine workers submit the same gradient.
Direction-based: Attack direction is computed from honest gradients.
Factor optimization: Negative integer factors trigger an automatic line search over the scaling factor.

Returns newly created tensors, does not alias honest input gradients.
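The identical-gradient construction described above can be sketched as follows, again with plain Python lists standing in for torch tensors. The function name `identical_attack` is illustrative; the key point is that all `f_real` entries reference the same newly built gradient, which does not alias any honest input.

```python
def identical_attack(grad_honests, f_real, direction, factor):
    # Build one Byzantine gradient = average honest gradient plus a
    # scaled attack direction, then return f_real references to that
    # same newly created gradient.
    n = len(grad_honests)
    dim = len(grad_honests[0])
    avg = [sum(g[i] for g in grad_honests) / n for i in range(dim)]
    byz = [avg[i] + factor * direction[i] for i in range(dim)]
    return [byz] * f_real  # every entry aliases the same new gradient

grads = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
out = identical_attack(grads, f_real=3, direction=[1.5, 1.5, 1.5], factor=1.0)
print(out[0])            # [4.0, 5.0, 6.0]
print(out[0] is out[1])  # True: identical objects, not merely equal
```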
- param factor:
Attack scaling factor. Positive values use the provided factor directly. Negative integers trigger a line search over -factor evaluations. Defaults to -16.
- type factor:
float or int, optional
- param negative:
Whether to negate the selected factor. Defaults to False.
- type negative:
bool, optional
Example
-------
>>> import torch
>>> from aggregators import average
>>> from attacks import little
>>> grad_honests = [torch.tensor([1., 2., 3.]), torch.tensor([4., 5., 6.])]
>>> byzantine_grads = little(
...     grad_honests=grad_honests,
...     f_decl=1,
...     f_real=1,
...     defense=average,
...     model=None,
...     factor=1.5,
... )
>>> len(byzantine_grads)
1
See also
For a non-finite baseline attack, see NaN.