Optimized floating-point neural network inference operators
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
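Frameworks consume XNNPACK through its low-level C operator API: initialize the library once, create an operator, bind shapes and buffers, then run it. The sketch below illustrates that flow with an FP32 element-wise multiply. It is a minimal sketch against the operator API as it existed around the snapshots packaged here; newer XNNPACK releases split setup into separate reshape and setup calls, so treat the exact signatures as assumptions to verify against the packaged headers.

```c
#include <math.h>    /* INFINITY */
#include <stddef.h>
#include <stdio.h>
#include <xnnpack.h>

int main(void) {
  /* One-time initialization: detects CPU features and selects kernels. */
  if (xnn_initialize(/*allocator=*/NULL) != xnn_status_success) {
    fprintf(stderr, "xnn_initialize failed\n");
    return 1;
  }

  /* Create an FP32 element-wise multiply operator with no output clamping. */
  xnn_operator_t mul_op = NULL;
  if (xnn_create_multiply_nd_f32(-INFINITY, INFINITY, /*flags=*/0, &mul_op)
      != xnn_status_success) {
    fprintf(stderr, "operator creation failed\n");
    return 1;
  }

  enum { N = 1024 };
  static float a[N], b[N], out[N];
  for (size_t i = 0; i < N; i++) { a[i] = (float) i; b[i] = 2.0f; }
  const size_t shape[1] = { N };

  /* Bind shapes and buffers; a NULL threadpool runs single-threaded. */
  if (xnn_setup_multiply_nd_f32(mul_op,
                                /*num_input1_dims=*/1, shape,
                                /*num_input2_dims=*/1, shape,
                                a, b, out, /*threadpool=*/NULL)
      != xnn_status_success) {
    fprintf(stderr, "operator setup failed\n");
    return 1;
  }

  /* Execute the operator on the bound buffers. */
  if (xnn_run_operator(mul_op, /*threadpool=*/NULL) != xnn_status_success) {
    fprintf(stderr, "operator execution failed\n");
    return 1;
  }

  printf("out[3] = %f\n", out[3]);  /* expect 6.0 */
  xnn_delete_operator(mul_op);
  return 0;
}
```

In practice, TensorFlow Lite and the other frameworks listed above drive this API from their delegates and backends rather than exposing it to model authors, which is why the package is not aimed at end users.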
Two versions of this package are available:
| Package | xnnpack |
| Version | 0.0-3.08f1489 (package revision 3, built from upstream commit `08f1489`) |
| Channel | guix |
| Home page | https://github.com/google/XNNPACK |
| Source | git checkout of https://github.com/google/XNNPACK at commit `08f1489` |
| Installation command | `guix install xnnpack` |
| Package | xnnpack |
| Version | 0.0-2.51a9875 (package revision 2, built from upstream commit `51a9875`) |
| Channel | guix |
| Home page | https://github.com/google/XNNPACK |
| Source | git checkout of https://github.com/google/XNNPACK at commit `51a9875` |
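`guix install xnnpack` installs whichever version the current guix channel provides. The older 0.0-2.51a9875 snapshot can still be installed by pinning the channel to a revision that carried it, along the lines of `guix time-machine --commit=<older-guix-commit> -- install xnnpack`, where `<older-guix-commit>` is a placeholder for a suitable Guix channel commit (not given on this page).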