Saga_tokenizers

Tokenization library for Saga.
This module provides the main tokenization API, modeled on the HuggingFace Tokenizers design. It supports multiple tokenization algorithms (BPE, WordPiece, Unigram, word-level, character-level) as well as text normalization, pre-tokenization, post-processing, and decoding.
Load a pretrained tokenizer:

  let tokenizer = Tokenizer.from_file "tokenizer.json" |> Result.get_ok in
  let encoding = Tokenizer.encode tokenizer "Hello world!" in
  let ids = Encoding.get_ids encoding

Create a BPE tokenizer from scratch:
  let tokenizer =
    Tokenizer.bpe
      ~vocab:[ ("hello", 0); ("world", 1); ("[PAD]", 2) ]
      ~merges:[]
      ()
  in
  let encoding = Tokenizer.encode tokenizer "hello world" in
  let text = Tokenizer.decode tokenizer [ 0; 1 ]

Train a new tokenizer:
  let texts = [ "Hello world"; "How are you?"; "Hello again" ] in
  let tokenizer =
    Tokenizer.train_bpe (`Seq (List.to_seq texts)) ~vocab_size:1000 ()
  in
  Tokenizer.save_pretrained tokenizer ~path:"./my_tokenizer"

Tokenization proceeds through stages:
  1. Text normalization (lowercase, NFD/NFC, accent stripping, etc.).
  2. Pre-tokenization (whitespace splitting, punctuation handling, etc.).
  3. Post-processing (adding CLS/SEP, setting type IDs, etc.).

Each stage is optional and configurable via builder methods; a sketch follows below.

Post-processing patterns are model-specific: CLS at the start, SEP at the end, and type IDs to distinguish paired sequences.
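A minimal sketch of per-stage configuration, assuming builder-style setters. The names Tokenizer.with_normalizer, Tokenizer.with_pre_tokenizer, Normalizers.lowercase, and Pre_tokenizers.whitespace below are illustrative assumptions, not confirmed Saga_tokenizers API; only Tokenizer.bpe and Tokenizer.encode appear elsewhere on this page.

  (* Hypothetical builder names: configure normalization and
     pre-tokenization on a tokenizer built from scratch. *)
  let tokenizer =
    Tokenizer.bpe ~vocab:[ ("hello", 0); ("world", 1) ] ~merges:[] ()
    |> Tokenizer.with_normalizer (Normalizers.lowercase ())        (* assumed *)
    |> Tokenizer.with_pre_tokenizer (Pre_tokenizers.whitespace ()) (* assumed *)
  in
  Tokenizer.encode tokenizer "Hello world"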
type direction = [ `Left | `Right ]

Direction for padding or truncation: `Left (beginning) or `Right (end).
type special = {
  token : string;      (** The token text (e.g., "<pad>", "<unk>"). *)
  single_word : bool;  (** Whether this token must match whole words only. Default: false. *)
  lstrip : bool;       (** Whether to strip whitespace on the left. Default: false. *)
  rstrip : bool;       (** Whether to strip whitespace on the right. Default: false. *)
  normalized : bool;   (** Whether to apply normalization to this token. Default: true for regular tokens, false for special tokens. *)
}

Special token configuration.

Special tokens are not split during tokenization and can be skipped during decoding. Token IDs are assigned automatically when the token is added to the vocabulary. All special token types are uniform: the semantic meaning (pad, unk, bos, etc.) is contextual, not encoded in the type.
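As a concrete illustration, here is a special record for a padding token; the field names and defaults come from the type above, while the function that registers the record with a tokenizer is not shown on this page and is left out.

  (* A [special] record for a pad token. [normalized = false] follows the
     documented default for special tokens; the token's ID is assigned
     automatically when it is added to the vocabulary. *)
  let pad : special =
    {
      token = "[PAD]";
      single_word = false;
      lstrip = false;
      rstrip = false;
      normalized = false;
    }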
Padding length strategy.

type pad_length = [
  | `Batch_longest       (** Pad to the longest sequence in the batch. *)
  | `Fixed of int        (** `Fixed n: pad all sequences to fixed length n. *)
  | `To_multiple of int  (** `To_multiple n: pad to the smallest multiple of n >= the sequence length. *)
]

type padding = {
  length : pad_length;
  direction : direction;
  pad_id : int option;
  pad_type_id : int option;
  pad_token : string option;
}

Padding configuration.
When the optional fields are None, padding falls back to the tokenizer's configured padding token. If the tokenizer has no padding token configured and these fields are None, padding operations raise Invalid_argument.
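For concreteness, a padding value built from the record above; the values are illustrative, with the pad ID reusing the "[PAD]" entry (ID 2) from the BPE vocabulary example earlier on this page. The function that applies this configuration to a tokenizer is not shown here.

  (* Pad each batch to its longest sequence, on the right. With all three
     option fields set explicitly, no tokenizer-level padding token is
     required. *)
  let pad_config : padding =
    {
      length = `Batch_longest;
      direction = `Right;
      pad_id = Some 2;          (* "[PAD]" has ID 2 in the earlier example *)
      pad_type_id = Some 0;
      pad_token = Some "[PAD]";
    }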
Truncation configuration.
Limits sequences to max_length tokens, removing tokens from the specified direction.
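A hedged sketch of a truncation value: the page does not show the record's declaration, so the type name truncation and the field shape below (max_length plus direction) are assumptions read off the description above.

  (* Assumed record shape: keep at most 512 tokens, dropping tokens from
     the left (the beginning) when a sequence is too long. *)
  let trunc : truncation = { max_length = 512; direction = `Left }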
type data = [
  | `Files of string list
  | `Seq of string Seq.t
  | `Iterator of (unit -> string option)
]

Training data source.

`Files paths: read training text from the given files.
`Seq seq: use a sequence of strings.
`Iterator f: pull training data via an iterator (None signals the end).
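For concreteness, the three variants constructed in code. In_channel.input_line is from the OCaml standard library (4.14+) and returns string option, which matches the iterator contract; the train_bpe call mirrors the training example earlier on this page.

  (* Read training text from files on disk. *)
  let from_files = `Files [ "corpus.txt" ]

  (* Wrap an in-memory list of strings as a sequence. *)
  let from_seq = `Seq (List.to_seq [ "Hello world"; "How are you?" ])

  (* Stream lines from a channel; input_line returning None marks the
     end of the training data. *)
  let from_channel ic = `Iterator (fun () -> In_channel.input_line ic)

  let tokenizer = Tokenizer.train_bpe from_files ~vocab_size:1000 ()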