Native Acceleration¶
The native package provides automated building and loading of C++/CUDA
native PyTorch extensions. There is no separate build step; importing an
aggregator or attack that offers a native variant triggers compilation
transparently.
For the design rationale behind import-time compilation, and why the project trades a separate build system for zero-friction prototyping, see /explanation/native-compilation.
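A minimal sketch of the compile-on-import pattern. The names here are illustrative, not this package's API: `compile_extension` stands in for a real builder such as `torch.utils.cpp_extension.load`, and the fallback mirrors the behavior described above, where the Python variant remains available when native compilation is not possible.

```python
# Illustrative sketch of compile-on-import with a graceful fallback.
# `compile_extension` is a stand-in for a real builder (e.g.
# torch.utils.cpp_extension.load); it is NOT this package's API.

def median_py(xs):
    """Pure-Python reference implementation (odd-length input)."""
    s = sorted(xs)
    return s[len(s) // 2]

def compile_extension(name):
    # A real builder would invoke the C++/CUDA toolchain here;
    # we simulate a machine without one.
    raise RuntimeError("no C++ toolchain available")

try:
    median = compile_extension("so_median")  # would return the compiled module
except RuntimeError:
    median = median_py  # transparent fallback to the Python variant

print(median([5, 1, 3]))  # -> 3
```

Because the fallback happens inside the import, callers never see the difference between a successful native build and the pure-Python path.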
Overview¶
| Component | Description | Environment Control |
|---|---|---|
| Module loader | Scans subdirectories, compiles sources, injects | |
| Dependency resolver | Reads | — |
| CUDA support | Detects | — |
| Thread pool | Shared C++ thread-pool dependency (so_threadpool) | — |
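The dependency-resolution step can be sketched as a topological sort over per-module dependency declarations, so that shared libraries such as so_threadpool are built before the modules that need them. The `DEPS` table and `build_order` helper are hypothetical; only the module names come from this page.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency table; the real package reads this
# information from each native module's metadata.
DEPS = {
    "native-bulyan": ["so_threadpool"],
    "so_threadpool": [],
    "native-krum": [],
}

def build_order(target, deps):
    """Return the modules needed for `target`, dependencies first."""
    ts = TopologicalSorter()
    seen, stack = set(), [target]
    while stack:
        mod = stack.pop()
        if mod in seen:
            continue
        seen.add(mod)
        ts.add(mod, *deps.get(mod, []))   # mod depends on its entries
        stack.extend(deps.get(mod, []))
    return list(ts.static_order())

print(build_order("native-bulyan", DEPS))
# dependencies first: ['so_threadpool', 'native-bulyan']
```

Note that only modules reachable from the target are built; unrelated variants (native-krum above) are left untouched.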
Available Native Variants¶
When compilation succeeds, native variants are registered alongside their
Python counterparts with a native- prefix:
| Name | What is accelerated |
|---|---|
| | Pairwise distance computation and scoring. |
| | Coordinate-wise median. |
| | Multi-Krum selection and median averaging. |
| | Combinatorial subset search. |
Because the high-level API is identical, you can compare the exact same rule in Python and native versions simply by changing the registered name.
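A sketch of what "changing the registered name" looks like in practice. The registry and `aggregate` helper below are hypothetical stand-ins for the package's real registration mechanism; only the native- naming convention is taken from this page, and the Python function doubles as the "native" variant purely for illustration.

```python
# Hypothetical name-based registry for aggregation rules.
AGGREGATORS = {}

def register(name):
    def deco(fn):
        AGGREGATORS[name] = fn
        return fn
    return deco

@register("median")
def median_python(values):
    """Pure-Python coordinate-wise median (scalar case for brevity)."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# A successful native build would register the compiled kernel under
# the prefixed name; here we reuse the Python function as a stand-in.
register("native-median")(median_python)

def aggregate(name, values):
    return AGGREGATORS[name](values)

# Same rule, same inputs, same result -- only the name differs.
assert aggregate("median", [3, 1, 2]) == aggregate("native-median", [3, 1, 2])
```

This is what makes apples-to-apples benchmarking trivial: the calling code is byte-for-byte identical except for the registered name.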
Internal Dependencies¶
Some modules under native/ are shared libraries (so_ prefix) that
provide infrastructure for other native variants rather than exposing a
public API:
so_threadpool — C++ thread pool used by native-bulyan for parallel distance computations.
These libraries are built automatically when required by a dependent module.
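Build-on-demand with memoization can be sketched as follows; the names and the `ensure_built` helper are illustrative, not the package's actual loader.

```python
# Sketch of build-on-demand: a dependent module's shared libraries are
# built (once) before the module itself. Names are illustrative.
DEPS = {"native-bulyan": ["so_threadpool"], "so_threadpool": []}
_cache = {}
built_log = []

def ensure_built(name):
    """Build `name`, building its dependencies first; memoize results."""
    if name not in _cache:
        for dep in DEPS.get(name, []):
            ensure_built(dep)
        built_log.append(name)           # stand-in for the real compile step
        _cache[name] = f"<module {name}>"
    return _cache[name]

ensure_built("native-bulyan")
ensure_built("native-bulyan")            # second call is a cache hit
print(built_log)  # ['so_threadpool', 'native-bulyan'] -- each built once
```

The cache is what makes repeated imports cheap: after the first build, every subsequent request resolves without touching the toolchain.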
API Reference¶
Automated building and loading of native (i.e. C++/CUDA) implementations.