armnn 20.08-10build2 source package in Ubuntu
Changelog
armnn (20.08-10build2) kinetic; urgency=medium

  * No-change rebuild against libflatbuffers2

 -- Steve Langasek <email address hidden>  Sat, 16 Jul 2022 17:52:39 +0000
Upload details
- Uploaded by:
- Steve Langasek
- Uploaded to:
- Kinetic
- Original maintainer:
- Ubuntu Developers
- Architectures:
- amd64 arm64 armhf i386 mipsel mips64el ppc64el
- Section:
- misc
- Urgency:
- Medium Urgency
Downloads
File | Size | SHA-256 Checksum |
---|---|---|
armnn_20.08.orig.tar.xz | 4.3 MiB | e834f4ed5ed138ea6c66ea37ec11208af9803271656be16abd426f74287d1189 |
armnn_20.08-10build2.debian.tar.xz | 19.0 KiB | 29a6498be2de170a4428a7417211f1a4b854fa22d6537e09618b5082f0b34f2a |
armnn_20.08-10build2.dsc | 3.2 KiB | a5a4bd226d9c0e91f15603be5b3ae8fd989fad2d508923580b7df8bd27b9a110 |
Available diffs
- diff from 20.08-10build1 to 20.08-10build2 (556 bytes)
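After downloading, each archive can be verified against the SHA-256 checksums in the table above using `sha256sum` from GNU coreutils; a minimal example for the upstream tarball:

```shell
# Verify the upstream tarball against the checksum published above.
# Run this in the directory containing the downloaded file; sha256sum
# exits non-zero (and prints FAILED) if the file does not match.
echo "e834f4ed5ed138ea6c66ea37ec11208af9803271656be16abd426f74287d1189  armnn_20.08.orig.tar.xz" \
  | sha256sum --check -
```

The same pattern works for the `.debian.tar.xz` and `.dsc` files with their respective checksums.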
Binary packages built by this source
- libarmnn-cpuacc-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs
Arm NN is a set of tools that enables machine learning workloads on
any hardware. It provides a bridge between existing neural network
frameworks and whatever hardware is available and supported. On arm
architectures (arm64 and armhf) it utilizes the Arm Compute Library
to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
possible. On other architectures/hardware it falls back to unoptimised
functions.
.
This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
Arm NN takes networks from these frameworks, translates them
to the internal Arm NN format and then through the Arm Compute Library,
deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
.
This is the dynamically loadable Neon backend package.
- libarmnn-cpuacc-backend22-dbgsym: debug symbols for libarmnn-cpuacc-backend22
- libarmnn-cpuref-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs
This package shares the common Arm NN description of libarmnn-cpuacc-backend22 above.
.
This is the dynamically loadable Reference backend package.
- libarmnn-cpuref-backend22-dbgsym: No summary or description available for libarmnn-cpuref-backend22-dbgsym in ubuntu kinetic.
- libarmnn-dev: No summary or description available for libarmnn-dev in ubuntu kinetic.
- libarmnn-gpuacc-backend22: No summary or description available for libarmnn-gpuacc-backend22 in ubuntu kinetic.
- libarmnn-gpuacc-backend22-dbgsym: debug symbols for libarmnn-gpuacc-backend22
- libarmnn22: No summary or description available for libarmnn22 in ubuntu kinetic.
- libarmnn22-dbgsym: debug symbols for libarmnn22
- libarmnnaclcommon22: Arm NN is an inference engine for CPUs, GPUs and NPUs
This package shares the common Arm NN description of libarmnn-cpuacc-backend22 above.
.
This is the common shared library used by Arm Compute Library backends.
- libarmnnaclcommon22-dbgsym: debug symbols for libarmnnaclcommon22
- libarmnntfliteparser-dev: Arm NN is an inference engine for CPUs, GPUs and NPUs
This package shares the common Arm NN description of libarmnn-cpuacc-backend22 above.
.
This is the development package containing header files.
- libarmnntfliteparser22: No summary or description available for libarmnntfliteparser22 in ubuntu kinetic.
- libarmnntfliteparser22-dbgsym: No summary or description available for libarmnntfliteparser22-dbgsym in ubuntu kinetic.
- python3-pyarmnn: No summary or description available for python3-pyarmnn in ubuntu kinetic.
- python3-pyarmnn-dbgsym: No summary or description available for python3-pyarmnn-dbgsym in ubuntu kinetic.
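On an Ubuntu kinetic system, the binary packages listed above would typically be installed with apt. A sketch (the exact selection depends on whether you need the C++ library, the TensorFlow Lite parser, or the Python bindings):

```shell
# Core shared library plus C++ development headers
sudo apt install libarmnn22 libarmnn-dev

# TensorFlow Lite parser and Python bindings, if needed
sudo apt install libarmnntfliteparser22 libarmnntfliteparser-dev python3-pyarmnn
```

The backend packages (libarmnn-cpuacc-backend22, libarmnn-cpuref-backend22, libarmnn-gpuacc-backend22) are pulled in as needed; the -dbgsym packages are only required for debugging and come from the separate ddebs archive.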