Optimized floating-point neural network inference operators
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
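To give a flavor of these low-level primitives, below is a minimal sketch in C of the typical XNNPACK operator lifecycle (initialize, create, set up, run, delete), using the elementwise f32 multiply operator. The prototypes shown follow the single-call setup API of releases around the commits packaged below; newer releases split setup into separate reshape and setup calls, so treat the exact signatures as version-dependent assumptions rather than a stable contract.

```c
/* Sketch of the XNNPACK operator lifecycle using the f32 elementwise
 * multiply operator. Signatures are assumed from releases of roughly
 * the packaged era and may differ in other versions. */
#include <math.h>
#include <stddef.h>
#include <stdio.h>
#include <xnnpack.h>

int main(void) {
  /* One-time library initialization; NULL selects the default allocator. */
  if (xnn_initialize(NULL) != xnn_status_success) {
    fprintf(stderr, "failed to initialize XNNPACK\n");
    return 1;
  }

  float a[4]   = {1.0f, 2.0f, 3.0f, 4.0f};
  float b[4]   = {0.5f, 0.5f, 0.5f, 0.5f};
  float out[4] = {0};

  /* Create a multiply operator with an unbounded output range. */
  xnn_operator_t mul_op = NULL;
  if (xnn_create_multiply_nd_f32(-INFINITY, INFINITY, /*flags=*/0, &mul_op)
      != xnn_status_success) {
    fprintf(stderr, "failed to create multiply operator\n");
    return 1;
  }

  /* Bind shapes and buffers; a NULL threadpool runs single-threaded. */
  const size_t shape[1] = {4};
  xnn_setup_multiply_nd_f32(mul_op, 1, shape, 1, shape, a, b, out,
                            /*threadpool=*/NULL);

  /* Execute, then release the operator and the library. */
  xnn_run_operator(mul_op, /*threadpool=*/NULL);
  xnn_delete_operator(mul_op);
  xnn_deinitialize();

  printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
  return 0;
}
```

In practice, frameworks such as TensorFlow Lite drive this API on the application's behalf, mapping each graph node to an XNNPACK operator and sharing one threadpool across the whole model.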
Two versions of this package are available.
| Package | xnnpack 0.0-4.51a0103 |
| Channel | guix |
| Home page | https://github.com/google/XNNPACK |
| Package | xnnpack 0.0-2.51a9875 |
| Channel | guix |
| Home page | https://github.com/google/XNNPACK |
