Optimized floating-point neural network inference operators

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
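Because XNNPACK is consumed through such frameworks rather than called directly, a typical integration point is the TensorFlow Lite XNNPACK delegate. The sketch below is only illustrative: it assumes the TensorFlow Lite C API and its XNNPACK delegate header are available in the build environment, and the model file name is hypothetical.

/* Illustrative sketch, not part of this package's documentation:
 * route supported floating-point operators through XNNPACK via the
 * TensorFlow Lite C API. */
#include "tensorflow/lite/c/c_api.h"
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"

int main(void) {
  /* Configure the XNNPACK delegate; num_threads sizes its thread pool. */
  TfLiteXNNPackDelegateOptions opts = TfLiteXNNPackDelegateOptionsDefault();
  opts.num_threads = 4;
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&opts);

  /* "model.tflite" is a placeholder path. */
  TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreterOptionsAddDelegate(options, xnnpack);
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);

  TfLiteInterpreterAllocateTensors(interpreter);
  TfLiteInterpreterInvoke(interpreter);

  /* Clean up; the delegate must outlive the interpreter. */
  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  TfLiteXNNPackDelegateDelete(xnnpack);
  return 0;
}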

Two versions of this package are available.


Package: xnnpack
Version: 0.0-3.51a9875 (history)
Channel: guix
Definition: gnu/packages/machine-learning.scm
Home page: https://github.com/google/XNNPACK
Source: source code archival status at Software Heritage
Installation command: guix install xnnpack@0.0-3.51a9875

Package: xnnpack
Version: 0.0-2.ae108ef (history)
Channel: guix
Definition: gnu/packages/machine-learning.scm
Home page: https://github.com/google/XNNPACK
Source: source code archival status at Software Heritage
Installation command: guix install xnnpack@0.0-2.ae108ef