Pixelbadger.Toolkit
A CLI toolkit exposing varied functionality organized by topic, including string manipulation, distance calculations, esoteric programming language interpreters, image steganography, web serving, OpenAI integration, and Model Context Protocol (MCP) servers.
Installation
Option 1: Install as .NET Global Tool (Recommended)
Install the tool globally using the NuGet package:
dotnet tool install --global Pixelbadger.Toolkit
Once installed, you can use the pbtk command from anywhere:
pbtk --help
Option 2: Build from Source
Clone the repository and build the project:
git clone https://github.com/pixelbadger/Pixelbadger.Toolkit.git
cd Pixelbadger.Toolkit
dotnet build
Usage
Using the Global Tool (pbtk)
Run commands using the topic-action pattern:
pbtk [topic] [action] [options]
Using from Source
If building from source, use:
dotnet run -- [topic] [action] [options]
Available Topics and Actions
strings
String manipulation utilities.
reverse
Reverses the content of a file.
Usage:
# Using global tool
pbtk strings reverse --in-file <input-file> --out-file <output-file>
# Using from source
dotnet run -- strings reverse --in-file <input-file> --out-file <output-file>
Example:
pbtk strings reverse --in-file hello.txt --out-file hello-reversed.txt
levenshtein-distance
Calculates the Levenshtein distance between two strings or files.
Usage:
pbtk strings levenshtein-distance --string1 <string1> --string2 <string2>
Examples:
# Compare two strings directly
pbtk strings levenshtein-distance --string1 "hello" --string2 "world"
# Compare contents of two files
pbtk strings levenshtein-distance --string1 file1.txt --string2 file2.txt
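Under the hood, Levenshtein distance is the classic dynamic-programming recurrence over insertions, deletions, and substitutions. A minimal Python sketch of the technique (illustrative only, not the toolkit's C# implementation):

```python
def levenshtein(a: str, b: str) -> int:
    # prev[j] holds the edit distance between a[:i-1] and b[:j];
    # each row is built from the previous one, so memory is O(len(b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]
```

For example, `levenshtein("hello", "world")` is 4 (four substitutions, with only the shared `l` unchanged), matching what the command above reports.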
search
Search indexing and querying utilities.
ingest
Ingest content into a Lucene.NET search index with intelligent chunking based on file type.
Usage:
pbtk search ingest --index-path <index-directory> --content-path <content-file>
Examples:
# Ingest a text file into a search index (paragraph chunking)
pbtk search ingest --index-path ./search-index --content-path document.txt
# Ingest markdown content (header-based chunking)
pbtk search ingest --index-path ./docs-index --content-path README.md
Details:
- Markdown files (.md, .markdown): Automatically chunked by headers (# ## ###) for semantic organization
- Other files: Split into paragraphs (separated by double newlines, or single newlines if no double newlines found)
- Each chunk becomes a separate searchable document in the index
- Metadata is stored including source file, chunk/paragraph number, and document ID
- Creates a new index if it doesn't exist, or adds to an existing index
- Markdown chunks preserve header context and hierarchy information
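The paragraph-chunking fallback described above can be sketched in a few lines (a sketch of the described behaviour, not the tool's actual code):

```python
def chunk_paragraphs(text: str) -> list[str]:
    # Prefer blank-line (double-newline) separators; fall back to
    # single newlines when the text has no blank-line breaks at all.
    parts = [p.strip() for p in text.split("\n\n") if p.strip()]
    if len(parts) <= 1 and "\n" in text:
        parts = [line.strip() for line in text.splitlines() if line.strip()]
    return parts
```

Each returned chunk would become one searchable document in the index, tagged with its source file and chunk number.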
query
Perform BM25 similarity search against a Lucene.NET index with optional source ID filtering.
Usage:
pbtk search query --index-path <index-directory> --query <search-terms> [--max-results <number>] [--sourceIds <id1> <id2> ...]
Examples:
# Search for documents containing specific terms
pbtk search query --index-path ./search-index --query "hello world"
# Limit results to 5 documents
pbtk search query --index-path ./docs-index --query "lucene search" --max-results 5
# Complex query with operators
pbtk search query --index-path ./index --query "\"exact phrase\" OR keyword"
# Filter results by source IDs (based on filename without extension)
pbtk search query --index-path ./index --query "search terms" --sourceIds document1 readme
Details:
- Uses BM25 similarity ranking for relevance scoring
- Returns results sorted by relevance score (highest first)
- Supports Lucene query syntax including phrases, boolean operators, wildcards
- Shows source file, paragraph number, relevance score, and content for each result
- Default maximum results is 10
- Optional source ID filtering constrains results to documents from specific files
- Source IDs are derived from filenames (without extension) during ingestion
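For reference, the per-term BM25 contribution that drives the relevance ranking can be sketched with the textbook formula (Lucene's `BM25Similarity` defaults are `k1 = 1.2`, `b = 0.75`; its exact implementation differs in minor details):

```python
import math

def bm25_term(tf: float, df: int, n_docs: int, doc_len: float,
              avg_doc_len: float, k1: float = 1.2, b: float = 0.75) -> float:
    # idf rewards rare terms; the second factor saturates with term
    # frequency and normalises by document length relative to average.
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / (tf + norm)
```

A document's score is the sum of these contributions over the query terms it contains, which is why rare terms and shorter documents tend to rank higher.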
interpreters
Esoteric programming language interpreters.
brainfuck
Executes a Brainfuck program from a file.
Usage:
pbtk interpreters brainfuck --file <program-file>
Example:
pbtk interpreters brainfuck --file hello-world.bf
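A Brainfuck interpreter is small enough to sketch in full. This minimal Python version (illustrative, not the toolkit's implementation) covers all 8 commands with a 30,000-cell wrapping byte tape:

```python
def brainfuck(program: str, stdin: str = "") -> str:
    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    inp = iter(stdin)
    # Pre-match brackets so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">":   ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(next(inp, "\0"))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)
```

For instance, `brainfuck("++[>+++<-]>.")` multiplies 2 by 3 in a loop and prints the character with code 6.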
ook
Executes an Ook program from a file.
Usage:
pbtk interpreters ook --file <program-file>
Example:
pbtk interpreters ook --file hello-world.ook
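Ook programs translate one-to-one into Brainfuck: each of the 8 commands is spelled as an ordered pair of the words "Ook.", "Ook!", and "Ook?". A sketch of the standard mapping (not the toolkit's actual code):

```python
# Standard Ook -> Brainfuck command pairs.
OOK_TO_BF = {
    ("Ook.", "Ook?"): ">", ("Ook?", "Ook."): "<",
    ("Ook.", "Ook."): "+", ("Ook!", "Ook!"): "-",
    ("Ook!", "Ook."): ".", ("Ook.", "Ook!"): ",",
    ("Ook!", "Ook?"): "[", ("Ook?", "Ook!"): "]",
}

def ook_to_brainfuck(source: str) -> str:
    # Read the source as whitespace-separated words, two at a time,
    # and look each pair up in the command table.
    words = source.split()
    return "".join(OOK_TO_BF[(a, b)]
                   for a, b in zip(words[::2], words[1::2]))
```

After translation, the program can run on any Brainfuck interpreter, which is presumably why the two interpreters live side by side here.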
images
Image processing and manipulation utilities.
steganography
Encode or decode hidden messages in images using least significant bit (LSB) steganography.
Usage:
Encoding a message:
pbtk images steganography --mode encode --image <input-image> --message <message> --output <output-image>
Decoding a message:
pbtk images steganography --mode decode --image <encoded-image>
Examples:
# Hide a secret message in an image
pbtk images steganography --mode encode --image photo.jpg --message "This is secret!" --output encoded.png
# Extract the hidden message
pbtk images steganography --mode decode --image encoded.png
web
Web server utilities.
serve-html
Serves a static HTML file via HTTP server.
Usage:
pbtk web serve-html --file <html-file> [--port <port>]
Options:
- --file: Path to the HTML file to serve (required)
- --port: Port to bind the server to (default: 8080)
Examples:
# Serve an HTML file on default port 8080
pbtk web serve-html --file index.html
# Serve on a specific port
pbtk web serve-html --file test.html --port 3000
openai
OpenAI utilities.
chat
Chat with OpenAI models maintaining conversation history.
Usage:
pbtk openai chat --message <message> [--chat-history <history-file>] [--model <model-name>]
Options:
- --message: The message to send to the LLM (required)
- --chat-history: Path to a JSON file containing chat history (optional; created if it doesn't exist)
- --model: The OpenAI model to use (optional, default: gpt-5-nano)
Examples:
# Simple message without history
pbtk openai chat --message "What is the capital of France?"
# Start a conversation with history tracking
pbtk openai chat --message "Hello, my name is Alice" --chat-history ./chat.json
# Continue the conversation (remembers previous context)
pbtk openai chat --message "What's my name?" --chat-history ./chat.json
# Use a specific model
pbtk openai chat --message "Explain quantum computing" --model "gpt-4o-mini"
# Complex conversation with specific model and history
pbtk openai chat --message "Continue our discussion about AI" --chat-history ./ai-chat.json --model "gpt-4o"
Details:
- Requires the OPENAI_API_KEY environment variable to be set
- Chat history is stored in JSON format with role/content pairs
- Maintains full conversation context across multiple interactions
- Supports all OpenAI chat models (gpt-3.5-turbo, gpt-4, gpt-4o, gpt-5-nano, etc.)
- Each conversation turn includes both user message and assistant response
- History files are created automatically if they don't exist
- Compatible with OpenAI API key authentication
translate
Translate text to a target language using OpenAI.
Usage:
pbtk openai translate --text <text-to-translate> --target-language <target-language> [--model <model-name>]
Options:
- --text: The text to translate (required)
- --target-language: The target language to translate to (required)
- --model: The OpenAI model to use (optional, default: gpt-5-nano)
Examples:
# Translate text to Spanish
pbtk openai translate --text "Hello, how are you?" --target-language "Spanish"
# Translate to French using a specific model
pbtk openai translate --text "Good morning" --target-language "French" --model "gpt-4o-mini"
# Translate complex text
pbtk openai translate --text "The weather is beautiful today" --target-language "German"
Environment Setup:
# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key-here"
# Then use the openai commands
pbtk openai chat --message "Hello!"
pbtk openai translate --text "Hello!" --target-language "Spanish"
pbtk openai ocaaar --image-path "./image.jpg"
pbtk openai corpospeak --source "Hello!" --audience "csuite"
ocaaar
Extract text from an image and translate it to pirate speak using OpenAI vision capabilities.
Usage:
pbtk openai ocaaar --image-path <image-file> [--model <model-name>]
Options:
- --image-path: Path to the image file to process (required)
- --model: The OpenAI model to use (optional, default: gpt-5-nano)
Examples:
# Extract text from an image and get pirate translation
pbtk openai ocaaar --image-path poster.jpg
# Use a specific model for better OCR accuracy
pbtk openai ocaaar --image-path document.png --model "gpt-4o"
# Process a screenshot with text
pbtk openai ocaaar --image-path screenshot.png
Details:
- Requires the OPENAI_API_KEY environment variable to be set
- Supports common image formats: JPEG, PNG, GIF, WebP
- Uses OpenAI's vision capabilities for text extraction
- Automatically translates extracted text to pirate dialect
- Returns only the pirate-translated text without additional commentary
- Well suited to humorous OCR processing of signs, documents, or any text-containing images
corpospeak
Rewrite text for enterprise audiences with optional idiolect adaptation using OpenAI.
Usage:
pbtk openai corpospeak --source <source-text> --audience <target-audience> [--user-messages <message1> <message2> ...] [--model <model-name>]
Options:
- --source: The source text to rewrite (required)
- --audience: Target audience, one of: csuite, engineering, product, sales, marketing, operations, finance, legal, hr, customer-success (required)
- --user-messages: Optional user messages to learn writing style from (multiple values allowed)
- --model: The OpenAI model to use (optional, default: gpt-5-nano)
Examples:
# Basic audience conversion
pbtk openai corpospeak --source "API performance is great" --audience "csuite"
# Convert for engineering team
pbtk openai corpospeak --source "New feature deployed" --audience "engineering"
# With idiolect adaptation using user writing examples
pbtk openai corpospeak --source "System upgrade complete" --audience "sales" --user-messages "Hey team!" "Let's crush this quarter!"
# Use specific model for better results
pbtk openai corpospeak --source "Database migration finished" --audience "operations" --model "gpt-4o"
Details:
- Requires the OPENAI_API_KEY environment variable to be set
- Two-stage processing: first converts the text for the target audience, then optionally adapts it to the user's writing style
- Separate chat instances: Audience conversion and idiolect rewrite use independent OpenAI conversations
- Comprehensive audience support: Covers major enterprise tech organization roles
- Robust validation: Validates audience parameters with helpful error messages
- Perfect for adapting technical content for different stakeholders while maintaining accuracy
Supported Audiences:
- csuite/executive: Strategic, business impact focused language
- engineering: Technical precision and implementation details
- product: User impact and feature strategy focus
- sales: Value propositions and competitive advantages
- marketing: Market appeal and customer messaging
- operations: Scalability and reliability emphasis
- finance: Cost implications and ROI focus
- legal: Risk assessment and compliance considerations
- hr: People impact and organizational dynamics
- customer-success: Customer experience and support focus
mcp
Model Context Protocol server utilities for AI integration.
rag-server
Hosts an MCP server that performs BM25 similarity search against a Lucene.NET index, enabling AI assistants to retrieve relevant context from your documents.
Usage:
pbtk mcp rag-server --index-path <index-directory>
Options:
--index-path: Path to the Lucene.NET index directory (required)
Examples:
# Start MCP server with an existing search index
pbtk mcp rag-server --index-path ./search-index
# Use with Claude Desktop or other MCP clients
pbtk mcp rag-server --index-path ./docs-index
Details:
- Communicates via stdin/stdout using JSON-RPC protocol
- Provides a search MCP tool that performs BM25 queries against the index, with configurable result limits and source ID filtering
- Returns formatted search results with relevance scores, source files, paragraph numbers, source IDs, and content
- Supports optional source ID filtering to constrain results to specific documents
- Compatible with MCP clients like Claude Desktop, Continue, and other AI development tools
- Requires an existing Lucene index created with the search ingest command
MCP Tool Parameters:
- query (required): The search query text
- maxResults (optional, default: 5): Maximum number of results to return
- sourceIds (optional): Array of source IDs to filter results to specific documents
Example MCP Tool Usage:
{
"name": "search",
"arguments": {
"query": "programming concepts",
"maxResults": 3,
"sourceIds": ["document1", "readme"]
}
}
Integration Example: First create an index, then start the MCP server:
# 1. Create search index from your documents
pbtk search ingest --index-path ./my-docs --content-path documentation.md
# 2. Start MCP server for AI integration
pbtk mcp rag-server --index-path ./my-docs
Help
Get help for any command by adding --help:
# Using global tool
pbtk --help # General help
pbtk strings reverse --help # Command-specific help
# Using from source
dotnet run -- --help # General help
dotnet run -- strings reverse --help # Command-specific help
Requirements
- .NET 9.0
- SixLabors.ImageSharp (for steganography features)
- OpenAI API key (for OpenAI features)
Technical Details
Steganography Implementation
The steganography feature uses LSB (Least Significant Bit) encoding to hide messages in the RGB color channels of images. Each bit of the message is stored in the least significant bit of the red, green, or blue color values, making the changes imperceptible to the human eye.
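The encode/decode round trip can be sketched in pure Python over raw RGB tuples (illustrative only; the tool itself works on image files via SixLabors.ImageSharp, and the output must be a lossless format like PNG so the low bits survive):

```python
def encode_lsb(pixels: list[tuple[int, int, int]], message: bytes) -> list:
    # Flatten the message to bits (MSB first) and overwrite the least
    # significant bit of each successive colour channel.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    flat = [channel for px in pixels for channel in px]
    if len(bits) > len(flat):
        raise ValueError("message too long for image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

def decode_lsb(pixels: list, n_bytes: int) -> bytes:
    # Read the low bit of each channel back out, 8 bits per byte.
    flat = [channel for px in pixels for channel in px]
    return bytes(
        int("".join(str(b & 1) for b in flat[i:i + 8]), 2)
        for i in range(0, n_bytes * 8, 8))
```

Each channel value changes by at most 1, which is why the embedding is imperceptible.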
Supported Languages
- Brainfuck: A minimalist esoteric programming language with 8 commands
- Ook: A Brainfuck derivative using "Ook.", "Ook!", and "Ook?" syntax, inspired by Terry Pratchett's Discworld orangutans