libcache-fastmmap-perl 1.39-1 source package in Ubuntu


libcache-fastmmap-perl (1.39-1) unstable; urgency=low

  [ Ansgar Burchardt ]
  * New upstream release (1.37)
  * Update my email address.
  * Bump Standards-Version to 3.9.2 (no changes).
  * debian/control:
    - Convert Vcs-* fields to git.
  * debian/copyright:
    - Minor changes

  [ Harlan Lieberman-Berg ]
  * New upstream release
  * debian/copyright:
    - Changed primary copyright owner to Opera Software Australia
  * debian/control:
    - Added myself to uploaders

  [ gregor herrmann ]
  * Bump debhelper compatibility level to 8.
 -- Ubuntu Archive Auto-Sync <email address hidden>  Mon, 17 Oct 2011 11:02:50 +0000

Upload details

Uploaded by:
Ubuntu Archive Auto-Sync on 2011-10-17
Uploaded to:
Original maintainer:
Debian Perl Group
Low Urgency


File Size SHA-256 Checksum
libcache-fastmmap-perl_1.39.orig.tar.gz 46.7 KiB 53e37525ce8be08f77fd50530f5e46f09878d851eeddf7af11ca636afe1d0937
libcache-fastmmap-perl_1.39-1.debian.tar.gz 3.3 KiB 59f9c2c1321eb1be0e2668d41a33770a6ad296fa55c2b74e5b45c3cfafdd41fd
libcache-fastmmap-perl_1.39-1.dsc 2.1 KiB 4f5a7ca142413b3c74e914d2e9b5dabc8b7927603a755eb7b52dca699a60f800


Binary packages built by this source

libcache-fastmmap-perl: Perl module providing a mmap'ed cache

 Cache::FastMmap uses the mmap system call to establish an interprocess shared
 memory cache. Its core code is written in C, which gives it a significant
 performance advantage over pure-Perl implementations such as Cache::Mmap. It
 can handle rather large caches without the socket connection and latency of a
 full-fledged database, in cases where long-term persistence is unnecessary.
 Since the algorithm uses a dual-level hashing scheme (one hash finds the
 page, then a second hash within that page finds the slot), most get calls
 execute in constant, O(1), time. The module uses fcntl to handle concurrent
 access, but locks only individual pages to reduce contention. The oldest
 (least recently used) data is evicted from the cache first, making this
 implementation best suited to workloads where old data is unlikely to be
 requested again.
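
 The dual-level hashing and per-page fcntl locking described above can be
 sketched as follows. This is a toy Python illustration of the technique, not
 Cache::FastMmap's actual C implementation: all names, sizes, and the record
 layout are invented, and eviction and collision handling are omitted.

```python
import fcntl
import hashlib
import mmap
import os

# Toy geometry (Cache::FastMmap's real defaults differ).
PAGE_COUNT = 4
PAGE_SIZE = 4096
SLOTS_PER_PAGE = 16
SLOT_SIZE = PAGE_SIZE // SLOTS_PER_PAGE


def _hash32(key: bytes, salt: int) -> int:
    return int.from_bytes(hashlib.sha256(key).digest()[salt:salt + 4], "big")


class MiniMmapCache:
    """Two-level hashed cache over a mmap'ed file with per-page fcntl locks."""

    def __init__(self, path: str):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT)
        os.ftruncate(self.fd, PAGE_COUNT * PAGE_SIZE)
        self.mm = mmap.mmap(self.fd, PAGE_COUNT * PAGE_SIZE)

    def _lock_page(self, key: bytes) -> int:
        # First-level hash picks the page; lock only that page's byte range,
        # so operations on other pages proceed without contention.
        off = (_hash32(key, 0) % PAGE_COUNT) * PAGE_SIZE
        fcntl.lockf(self.fd, fcntl.LOCK_EX, PAGE_SIZE, off)
        return off

    def set(self, key: bytes, value: bytes) -> None:
        off = self._lock_page(key)
        try:
            # Second-level hash picks the slot within the page.
            slot = _hash32(key, 4) % SLOTS_PER_PAGE
            record = (len(key).to_bytes(2, "big")
                      + len(value).to_bytes(2, "big") + key + value)
            assert len(record) <= SLOT_SIZE, "record too large for slot"
            pos = off + slot * SLOT_SIZE
            self.mm[pos:pos + len(record)] = record
        finally:
            fcntl.lockf(self.fd, fcntl.LOCK_UN, PAGE_SIZE, off)

    def get(self, key: bytes):
        off = self._lock_page(key)
        try:
            slot = _hash32(key, 4) % SLOTS_PER_PAGE
            pos = off + slot * SLOT_SIZE
            klen = int.from_bytes(self.mm[pos:pos + 2], "big")
            vlen = int.from_bytes(self.mm[pos + 2:pos + 4], "big")
            if bytes(self.mm[pos + 4:pos + 4 + klen]) != key:
                return None  # empty slot, or another key occupies it
            return bytes(self.mm[pos + 4 + klen:pos + 4 + klen + vlen])
        finally:
            fcntl.lockf(self.fd, fcntl.LOCK_UN, PAGE_SIZE, off)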