FastBertTokenizer 0.3.29
See the version list below for details.
dotnet add package FastBertTokenizer --version 0.3.29
NuGet\Install-Package FastBertTokenizer -Version 0.3.29
<PackageReference Include="FastBertTokenizer" Version="0.3.29" />
paket add FastBertTokenizer --version 0.3.29
#r "nuget: FastBertTokenizer, 0.3.29"
// Install FastBertTokenizer as a Cake Addin
#addin nuget:?package=FastBertTokenizer&version=0.3.29

// Install FastBertTokenizer as a Cake Tool
#tool nuget:?package=FastBertTokenizer&version=0.3.29
FastBertTokenizer
A fast and memory-efficient library for WordPiece tokenization as it is used by BERT. Tokenization results are tested against the outputs of HuggingFace Transformers' AutoTokenizer.
Serves similar needs as, and was initially inspired by, BERTTokenizers - thanks for the great work.
Features
- same results as HuggingFace Transformers' AutoTokenizer in all relevant cases
- purely managed and dependency-free
- optimized for high performance and low memory usage
Getting started
using FastBertTokenizer;

var tok = new BertTokenizer();
var maxTokensForModel = 512;

// Second argument = convertInputToLowercase; true matches the uncased vocabulary.
await tok.LoadVocabularyAsync("vocab.txt", true); // https://huggingface.co/BAAI/bge-small-en/blob/main/vocab.txt

var text = File.ReadAllText("TextFile.txt");

// Encodes at most maxTokensForModel tokens; longer input is truncated.
var (inputIds, attentionMask, tokenTypeIds) = tok.Tokenize(text, maxTokensForModel);
Console.WriteLine(string.Join(", ", inputIds.ToArray().Select(x => x.ToString())));
Comparison of Tokenization Results to HuggingFace Transformers' AutoTokenizer
For correctness verification, about 10,000 articles of Simple English Wikipedia were tokenized using FastBertTokenizer and HuggingFace's tokenizer with the BAAI bge vocab.txt file. The tokenization results were exactly the same, apart from these two cases:
- Letter (id 6309) contains Assamese characters, many of which are not represented in the vocabulary used. HuggingFace's tokenizer skips exactly one [UNK] token for one of these characters where FastBertTokenizer emits one.
- Avignon (id 30153) has Rhône as the last word before hitting the 512-token limit. If a word cannot be found directly in the vocabulary, FastBertTokenizer tries to tokenize prefixes of the word first, while HuggingFace starts directly with a diacritic-free version of the word (see the sketch below). Thus, FastBertTokenizer's result ends with the token id for r, while HuggingFace (correctly) emits the token id for rhone. This edge case is only relevant
  - for the last word, after which the tokenized output is cut off, and
  - if this last word contains diacritics.
These minor differences are likely irrelevant in most real-world use cases. All of the more than 10,000 other tested articles, including ones with Chinese and Korean characters as well as much less common scripts and right-to-left text, were tokenized exactly the same as by HuggingFace's tokenizer.
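For illustration, here is a minimal sketch, not part of FastBertTokenizer's API, of the kind of diacritic stripping HuggingFace's tokenizer applies before WordPiece matching (NFD normalization, then dropping non-spacing combining marks):

using System.Globalization;
using System.Text;

// Hypothetical helper for illustration: removes diacritics so that
// "Rhône" becomes "Rhone" before lowercasing and WordPiece lookup.
static string StripDiacritics(string text)
{
    var normalized = text.Normalize(NormalizationForm.FormD);
    var sb = new StringBuilder(normalized.Length);
    foreach (var c in normalized)
    {
        // Skip combining marks such as the circumflex in "ô".
        if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
        {
            sb.Append(c);
        }
    }

    return sb.ToString().Normalize(NormalizationForm.FormC);
}

Console.WriteLine(StripDiacritics("Rhône").ToLowerInvariant()); // rhone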
Comparison to BERTTokenizers
- about 1 order of magnitude faster
- allocates more than 1 order of magnitude less memory
- better whitespace handling
- handles unknown characters correctly
- does not throw if text is longer than maximum sequence length
- handles Unicode control chars
- handles other alphabets such as Greek and right-to-left languages
Note that while BERTTokenizers handles token type ids incorrectly, it does support the input of two pieces of text that are tokenized with a separator in between; FastBertTokenizer currently does not support this. The sketch below shows the layout such paired input produces.
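For context, here is a library-agnostic illustration (hand-written ids from the bert-base-uncased vocabulary) of how BERT expects such a text pair to be laid out, with token type ids marking the second segment:

// Illustration only, not produced by FastBertTokenizer:
// [CLS] hello [SEP] world [SEP]  - segment A, then segment B
long[] inputIds     = { 101, 7592, 102, 2088, 102 };
long[] tokenTypeIds = {   0,    0,   0,    1,    1 }; // 1 marks the second segment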
Benchmark
Tokenizing the first 5,000 characters of 10,254 articles of Simple English Wikipedia on a ThinkPad T14s Gen 1 (AMD Ryzen 7 PRO 4750U, 32 GB memory):
Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
---|---|---|---|---|---|---|---|
BERTTokenizers | 4,942.0 ms | 54.79 ms | 48.57 ms | 1001000.0000 | 95000.0000 | 4000.0000 | 5952.43 MB |
FastBertTokenizerAllocating | 529.5 ms | 8.90 ms | 10.59 ms | 61000.0000 | 31000.0000 | 2000.0000 | 350.75 MB |
FastBertTokenizerMemReuse | 404.5 ms | 7.72 ms | 7.22 ms | 68000.0000 | - | - | 136.83 MB |
The FastBertTokenizerMemReuse benchmark writes the results of the tokenization to the same memory area for every input, while FastBertTokenizerAllocating allocates new memory for its return values. See src/Benchmarks for details on how these benchmarks were performed.
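As an illustration of the memory-reuse pattern, the following sketch assumes a Tokenize overload that fills caller-provided buffers instead of allocating new ones; the exact signature may differ between versions, so verify it against the release you use. The corpus directory and file names are placeholders.

using FastBertTokenizer;

var tok = new BertTokenizer();
await tok.LoadVocabularyAsync("vocab.txt", true);

const int maxTokens = 512;

// Allocate the output buffers once and reuse them for every document,
// which is the idea behind the FastBertTokenizerMemReuse benchmark.
var inputIds = new long[maxTokens];
var attentionMask = new long[maxTokens];
var tokenTypeIds = new long[maxTokens];

foreach (var file in Directory.EnumerateFiles("corpus", "*.txt"))
{
    var text = File.ReadAllText(file);

    // Assumed overload that writes into the given buffers; check the exact
    // signature of the FastBertTokenizer version you are using.
    tok.Tokenize(text, inputIds, attentionMask, tokenTypeIds);

    // ... hand the buffers to the model before the next iteration ...
}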
Logo
Created by combining https://icons.getbootstrap.com/icons/cursor-text/ in .NET brand color with https://icons.getbootstrap.com/icons/braces/.
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net6.0 is compatible. net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, and net8.0-windows were computed. |
Dependencies
- net6.0 - No dependencies.
NuGet packages (3)
Showing the top 3 NuGet packages that depend on FastBertTokenizer:
Package | Description |
---|---|
Microsoft.SemanticKernel.Connectors.Onnx | Semantic Kernel connectors for the ONNX runtime. Contains clients for text embedding generation. |
SmartComponents.LocalEmbeddings | Experimental, end-to-end AI features for .NET apps. Docs and info at https://github.com/dotnet-smartcomponents/smartcomponents |
ADCenterSpain.Infrastructure.AI | Common classes for AI development |
GitHub repositories (1)
Showing the top 1 popular GitHub repositories that depend on FastBertTokenizer:
Repository | Description |
---|---|
microsoft/semantic-kernel | Integrate cutting-edge LLM technology quickly and easily into your apps |
Version | Downloads | Last updated |
---|---|---|
1.0.28 | 108,761 | 4/30/2024 |
0.5.18-alpha | 1,053 | 12/21/2023 |
0.4.67 | 56,893 | 12/11/2023 |
0.3.29 | 310 | 9/18/2023 |
0.2.7 | 138 | 9/14/2023 |