Optimized floating-point neural network inference operators

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
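Because XNNPACK is consumed through its C API by frameworks rather than end users, a minimal sketch of the runtime life cycle may help illustrate what "low-level performance primitives" means here. The header name xnnpack.h, xnn_initialize, xnn_status_success, and xnn_deinitialize come from upstream XNNPACK; the rest is a hypothetical skeleton, not a complete inference program.

    #include <stdio.h>
    #include <xnnpack.h>

    int main(void) {
      /* XNNPACK must be initialized once per process before any operator
         is created; NULL selects the default memory allocator. */
      enum xnn_status status = xnn_initialize(NULL);
      if (status != xnn_status_success) {
        fprintf(stderr, "XNNPACK initialization failed\n");
        return 1;
      }

      /* A framework would create, set up, and run operators here
         (e.g. convolution or fully-connected operators). */

      /* Release resources held by the library. */
      xnn_deinitialize();
      return 0;
    }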

Two versions of this package are available.


Package: xnnpack
Version: 0.0-3.08f1489
Channel: guix
Definition: gnu/packages/machine-learning.scm
Home page: https://github.com/google/XNNPACK
Installation command: guix install xnnpack@0.0-3.08f1489

Package: xnnpack
Version: 0.0-2.51a9875
Channel: guix
Definition: gnu/packages/machine-learning.scm
Home page: https://github.com/google/XNNPACK
Installation command: guix install xnnpack@0.0-2.51a9875
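To compile against the library without installing it into a profile, guix shell can provide a temporary environment. The command below is a sketch: gcc-toolchain is a standard Guix package, but the linker flag -lXNNPACK is an assumption based on upstream's CMake target name, and additional libraries (for example pthreadpool) may also need to be linked depending on how the package is built.

    guix shell xnnpack gcc-toolchain -- gcc example.c -lXNNPACK -o example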