gloo-cuda 0.0~git20230519.597accf-2build2 source package in Ubuntu

Changelog

gloo-cuda (0.0~git20230519.597accf-2build2) noble; urgency=medium

  * No-change rebuild for CVE-2024-3094

 -- Steve Langasek <email address hidden>  Sun, 31 Mar 2024 06:00:16 +0000

Upload details

Uploaded by:
Steve Langasek
Uploaded to:
Noble
Original maintainer:
Ubuntu Developers
Architectures:
any
Section:
misc
Urgency:
Medium

Publishing

Series Pocket Component Section
Noble release multiverse misc

Downloads

File Size SHA-256 Checksum
gloo-cuda_0.0~git20230519.597accf.orig.tar.xz 180.8 KiB 68f7bb91c706808d653cb4c9f81537f7c8a732544f76f992bf4e8d7be29c803a
gloo-cuda_0.0~git20230519.597accf-2build2.debian.tar.xz 5.3 KiB 77ab86342718cd14951edbe4ce624d628e7b560f564f6cfe4d411ef05de1ca29
gloo-cuda_0.0~git20230519.597accf-2build2.dsc 2.4 KiB 963f92b00e386905d3afe4f25edcb1e980c1947dd2a98cd7ef6a77a4231671ae

Binary packages built by this source

libgloo-cuda-0: Collective communications library (shared object)

 Gloo is a collective communications library. It comes with a number of
 collective algorithms useful for machine learning applications. These
 include a barrier, broadcast, and allreduce.
 .
 Transport of data between participating machines is abstracted so that
 IP can be used at all times, or InfiniBand (or RoCE) when available.
 If the InfiniBand transport is used, GPUDirect can be used to
 accelerate cross-machine GPU-to-GPU memory transfers.
 .
 Where applicable, algorithms have an implementation that works with system
 memory buffers, and one that works with NVIDIA GPU memory buffers. In the
 latter case, it is not necessary to copy memory between host and device;
 this is taken care of by the algorithm implementations.
 .
 This package ships the shared object for Gloo.
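
 A rough sketch of how the library is typically driven, adapted from
 Gloo's upstream examples: the headers, the gloo::AllreduceRing and
 gloo::rendezvous::Context names, and the connectFullMesh call follow
 Gloo's documented API but may differ between versions, and the "eth0"
 interface and "/tmp" rendezvous path are placeholders.

  // Host-memory allreduce over the TCP transport (sketch).
  #include <memory>
  #include <vector>

  #include "gloo/allreduce_ring.h"
  #include "gloo/rendezvous/context.h"
  #include "gloo/rendezvous/file_store.h"
  #include "gloo/transport/tcp/device.h"

  int main() {
    // Placeholder rank/size; normally taken from the environment.
    const int rank = 0;
    const int size = 2;

    // Pick a transport device: TCP here, ibverbs when available.
    gloo::transport::tcp::attr attr;
    attr.iface = "eth0";  // placeholder interface name
    auto dev = gloo::transport::tcp::CreateDevice(attr);

    // Rendezvous: all participants connect through a shared store.
    auto context = std::make_shared<gloo::rendezvous::Context>(rank, size);
    gloo::rendezvous::FileStore store("/tmp");  // placeholder path
    context->connectFullMesh(store, dev);

    // Ring allreduce over a system-memory buffer.
    std::vector<float> data(4, static_cast<float>(rank));
    std::vector<float*> ptrs = {data.data()};
    gloo::AllreduceRing<float> allreduce(context, ptrs, data.size());
    allreduce.run();  // data now holds the element-wise sum across ranks
    return 0;
  }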

libgloo-cuda-0-dbgsym: debug symbols for libgloo-cuda-0
libgloo-cuda-dev: Collective communications library (development files)

 Gloo is a collective communications library. It comes with a number of
 collective algorithms useful for machine learning applications. These
 include a barrier, broadcast, and allreduce.
 .
 Transport of data between participating machines is abstracted so that
 IP can be used at all times, or InfiniBand (or RoCE) when available.
 If the InfiniBand transport is used, GPUDirect can be used to
 accelerate cross-machine GPU-to-GPU memory transfers.
 .
 Where applicable, algorithms have an implementation that works with system
 memory buffers, and one that works with NVIDIA GPU memory buffers. In the
 latter case, it is not necessary to copy memory between host and device;
 this is taken care of by the algorithm implementations.
 .
 This package ships the development files.
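
 The CUDA-aware variants this package exists for follow the same
 pattern with the GPU-memory algorithm classes; a minimal sketch,
 assuming the gloo/cuda_allreduce_ring.h header and its
 gloo::CudaAllreduceRing class (names follow Gloo's CUDA algorithms
 and may differ between versions), with context setup as in the
 host-memory sketch above:

  // Allreduce directly over an NVIDIA GPU memory buffer (sketch).
  #include <cuda_runtime.h>
  #include <memory>
  #include <vector>

  #include "gloo/cuda_allreduce_ring.h"

  void allreduceOnDevice(const std::shared_ptr<gloo::Context>& context,
                         const int count) {
    // No host staging buffer is needed; the algorithm implementation
    // handles any host/device transfers internally.
    float* devPtr = nullptr;
    cudaMalloc(&devPtr, count * sizeof(float));

    std::vector<float*> ptrs = {devPtr};
    gloo::CudaAllreduceRing<float> allreduce(context, ptrs, count);
    allreduce.run();  // devPtr now holds the sum across all ranks

    cudaFree(devPtr);
  }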