r-cran-tokenizers 0.2.1-3 (s390x binary) in ubuntu jammy

 Convert natural language text into tokens. Includes tokenizers for
 shingled n-grams, skip n-grams, words, word stems, sentences,
 paragraphs, characters, shingled characters, lines, tweets, Penn
 Treebank tokens, and regular-expression matches, as well as functions
 for counting characters, words, and sentences, and a function for
 splitting longer texts into separate documents, each with the same
 number of words. The tokenizers share a consistent interface, and the
 package is built on the 'stringi' and 'Rcpp' packages for fast yet
 correct tokenization of 'UTF-8' text.
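As an illustration of the consistent interface the description mentions, here is a short R sketch using a few of the package's exported functions (tokenize_words, tokenize_ngrams, count_words, chunk_text); the sample corpus is invented for the example, and every tokenizer returns one list element per input document:

```r
library(tokenizers)

# A toy corpus; each element is one document.
docs <- c(doc1 = "The quick brown fox jumps over the lazy dog.",
          doc2 = "Tokenizers return one list element per input document.")

# Word tokenization (lowercased, punctuation stripped by default).
tokenize_words(docs)

# Shingled n-grams: bigrams and trigrams together.
tokenize_ngrams(docs, n = 3, n_min = 2)

# Counting helper: one word count per document.
count_words(docs)

# Split a longer text into chunks of roughly equal word counts.
chunk_text(paste(docs, collapse = " "), chunk_size = 5)
```

Because every tokenizer takes a character vector and returns a list of the same length, the functions can be swapped in and out of a text-processing pipeline without changing the surrounding code.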

Details

Package version: 0.2.1-3
Source: r-cran-tokenizers 0.2.1-3 source package in Ubuntu
Status: Published
Component: universe
Priority: Optional