9 packages returned for Tags:"Tokenization"

Tokenization of raw text is a standard pre-processing step for many NLP tasks. For English, tokenization usually involves punctuation splitting and separation of some affixes like possessives. Other languages require more extensive token pre-processing, which is usually called segmentation.
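To make that concrete, here is a minimal sketch of the English-style tokenization described above (punctuation splitting plus possessive separation). It is an illustration only, not the API of any package in this list:

using System;
using System.Text.RegularExpressions;

// Minimal illustrative tokenizer: splits punctuation from words and
// separates the possessive "'s". Illustration only, not a package's API.
class TokenizeDemo
{
    static string[] Tokenize(string text)
    {
        // Match, in priority order: a possessive 's, a run of word
        // characters, or a single non-space punctuation character.
        MatchCollection matches = Regex.Matches(text, @"'s\b|\w+|[^\w\s]");
        var tokens = new string[matches.Count];
        for (int i = 0; i < matches.Count; i++)
            tokens[i] = matches[i].Value;
        return tokens;
    }

    static void Main()
    {
        // Prints: John | 's | dog | barks | .
        Console.WriteLine(string.Join(" | ", Tokenize("John's dog barks.")));
    }
}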
TextMatch is a library for searching inside texts using Lucene query expressions. It supports all types of Lucene query expressions, including boolean, wildcard, and fuzzy queries. Options are available for tweaking tokenization, such as case sensitivity and word stemming.
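For orientation, a hypothetical usage sketch follows. The FullTextIndex type and IsMatch method are assumptions made for illustration, not TextMatch's confirmed API; the query strings, however, use standard Lucene syntax:

using System;
using TextMatch; // hypothetical namespace, assumed for illustration

class QueryDemo
{
    static void Main()
    {
        // Standard Lucene query syntax:
        //   "quick AND brown"  - boolean
        //   "qu?ck bro*"       - wildcard
        //   "quikc~"           - fuzzy
        var index = new FullTextIndex("quick AND brow*"); // hypothetical type
        bool hit = index.IsMatch("The quick brown fox");  // hypothetical method
        Console.WriteLine(hit); // True
    }
}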
Lightweight HttpModule, REST API, and managed API for *safe* and highly optimized server-side image processing. Docs: http://imageresizing.net/ Support: http://imageresizing.net/support Works with .NET 4.5.1+, ASP.NET WebForms, MVC 1-4, WebAPI, Routing, IIS 7, 7.5, 8, and 8.1, and nearly all...