armnn 20.08-10 source package in Ubuntu

Changelog

armnn (20.08-10) unstable; urgency=medium

  * Build with gcc-11 (Closes: #983969)

 -- Wookey <email address hidden>  Mon, 25 Oct 2021 23:56:13 +0100

Upload details

Uploaded by:
Francis Murtagh
Uploaded to:
Sid
Original maintainer:
Francis Murtagh
Architectures:
amd64 arm64 armhf i386 mipsel mips64el ppc64el
Section:
misc
Urgency:
Medium


Downloads

File Size SHA-256 Checksum
armnn_20.08-10.dsc 3.1 KiB 1ae653bf593b4a73d89f7e42db7ba798b15955510a2f6a477f0f84552350ccd3
armnn_20.08.orig.tar.xz 4.3 MiB e834f4ed5ed138ea6c66ea37ec11208af9803271656be16abd426f74287d1189
armnn_20.08-10.debian.tar.xz 18.8 KiB 5e9d3cc85c4e63370deb8dd56606431d326e0c5906a8498ee6f7b648e6b54e86
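
The checksums above can be verified locally with `sha256sum -c`, which reads lines of the form `<hash>  <filename>` and reports OK or FAILED per file. A minimal sketch of the pattern, using a placeholder file in a temporary directory since the actual downloads are not part of this page (in practice you would download the file and paste the checksum from the table above):

```shell
# Sketch: verify a downloaded source file against its published SHA-256.
# A placeholder file stands in for armnn_20.08-10.dsc.
workdir=$(mktemp -d)
cd "$workdir"
printf 'placeholder payload' > armnn_20.08-10.dsc
# Compute the hash here only because this is a stand-in file; normally
# the expected hash comes from the Downloads table, not from the file.
expected=$(sha256sum armnn_20.08-10.dsc | awk '{print $1}')
# Feed "<hash>  <filename>" to sha256sum -c; it prints "<filename>: OK"
# on a match and exits non-zero on a mismatch.
echo "$expected  armnn_20.08-10.dsc" | sha256sum -c -
```

Alternatively, `dget` (from the devscripts package) fetches a .dsc and its referenced files and performs this verification automatically.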

Available diffs

No changes file available.

Binary packages built by this source

libarmnn-cpuacc-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Neon backend package.

libarmnn-cpuacc-backend22-dbgsym: debug symbols for libarmnn-cpuacc-backend22
libarmnn-cpuref-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Reference backend package.

libarmnn-cpuref-backend22-dbgsym: debug symbols for libarmnn-cpuref-backend22
libarmnn-dev: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.

libarmnn-gpuacc-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable CL backend package.

libarmnn-gpuacc-backend22-dbgsym: debug symbols for libarmnn-gpuacc-backend22
libarmnn22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package.

libarmnn22-dbgsym: debug symbols for libarmnn22
libarmnnaclcommon22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the common shared library used by Arm Compute Library backends.

libarmnnaclcommon22-dbgsym: debug symbols for libarmnnaclcommon22
libarmnntfliteparser-dev: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.

libarmnntfliteparser22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package.

libarmnntfliteparser22-dbgsym: debug symbols for libarmnntfliteparser22
python3-pyarmnn: PyArmNN is a Python extension for the Arm NN SDK

 PyArmNN provides an interface similar to the Arm NN C++ API.
 .
 PyArmNN is built around the public headers in the armnn/include folder
 of Arm NN. PyArmNN does not implement any computation kernels itself;
 all operations are delegated to the Arm NN library.

python3-pyarmnn-dbgsym: debug symbols for python3-pyarmnn