r-cran-tokenizers 0.2.3-1 source package in Ubuntu

Changelog

r-cran-tokenizers (0.2.3-1) unstable; urgency=medium

  * New upstream version
  * Standards-Version: 4.6.1 (routine-update)
  * dh-update-R to update Build-Depends (routine-update)
  * Lintian-override for bug #1017966 issue

 -- Andreas Tille <email address hidden>  Mon, 26 Sep 2022 15:30:08 +0200

Upload details

Uploaded by: Debian R Packages Maintainers
Uploaded to: Sid
Original maintainer: Debian R Packages Maintainers
Architectures: any
Section: misc
Urgency: Medium


Downloads

File Size SHA-256 Checksum
r-cran-tokenizers_0.2.3-1.dsc 2.1 KiB ec80b1310511edbed459efed99e94bc1f6d8e231de57533b4573a6cd73b4b283
r-cran-tokenizers_0.2.3.orig.tar.gz 450.4 KiB 626d6b48b79dc4c3c130aebe201aac620f93665e0c5a890c3b6ca25c465f4207
r-cran-tokenizers_0.2.3-1.debian.tar.xz 3.0 KiB 8459602fc6f6ecc7910fd950d48dd2a159a5f8411a5b80001f232334e5b73695
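Downloads like the ones above can be verified with sha256sum's check mode. A minimal sketch of that workflow follows; it uses a stand-in file rather than the actual downloads, which are not assumed to be present. In practice, the filename and expected hash come straight from the table.

```shell
# Sketch of verifying a file against a published SHA-256 checksum,
# as would be done for the .dsc and tarballs listed above.
# Stand-in file; replace with the real download and table hash.
printf 'example payload' > r-cran-tokenizers-demo.txt
expected=$(sha256sum r-cran-tokenizers-demo.txt | cut -d' ' -f1)
# sha256sum -c reads "HASH  FILENAME" lines and reports OK/FAILED.
echo "$expected  r-cran-tokenizers-demo.txt" | sha256sum -c -
```

Note the two spaces between the hash and the filename: that is the format sha256sum itself emits and expects in check mode.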

No changes file available.

Binary packages built by this source

r-cran-tokenizers: GNU R fast, consistent tokenization of natural language text

 Convert natural language text into tokens. Includes tokenizers for
 shingled n-grams, skip n-grams, words, word stems, sentences,
 paragraphs, characters, shingled characters, lines, tweets, Penn
 Treebank, regular expressions, as well as functions for counting
 characters, words, and sentences, and a function for splitting longer
 texts into separate documents, each with the same number of words.
 The tokenizers have a consistent interface, and the package is built
 on the 'stringi' and 'Rcpp' packages for fast yet correct
 tokenization in 'UTF-8'.

r-cran-tokenizers-dbgsym: debug symbols for r-cran-tokenizers