Mythosia.AI
4.6.2
Package Summary
The Mythosia.AI library provides a unified interface for various AI models with multimodal support, function calling, reasoning streaming, and advanced streaming capabilities.
Supported Providers
- OpenAI — GPT-5.2 / 5.2 Codex / 5.1 / 5 (with reasoning), GPT-4.1, GPT-4o, o3
- Anthropic — Claude Opus 4.6 / 4.5 / 4.1 / 4, Sonnet 4.6 / 4.5 / 4, Haiku 4.5
- Google — Gemini 3 Flash/Pro Preview, Gemini 2.5 Pro/Flash/Flash-Lite
- DeepSeek — Chat and Reasoner models
- xAI — Grok 4, Grok 4.1 Fast, Grok 3, Grok 3 Mini
- Perplexity — Sonar with web search and citations
📚 Documentation
- Basic Usage Guide — Getting started with text queries, streaming, image analysis, and more
- Advanced Features — Function calling, policies, and enhanced streaming
- Release Notes — Full version history and migration guides
Installation
dotnet add package Mythosia.AI
For advanced LINQ operations with streams:
dotnet add package System.Linq.Async
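As a sketch (assuming System.Linq.Async is referenced and that `StreamAsync` returns `IAsyncEnumerable<StreamingContent>` as in the streaming examples later in this README), standard LINQ operators then compose directly with the stream:

```csharp
using System.Linq; // System.Linq.Async extension methods for IAsyncEnumerable<T>

// Collect only the first few text chunks of a streamed response
var firstTextChunks = await service
    .StreamAsync("Summarize the release notes")
    .Where(c => c.Type == StreamingContentType.Text) // skip metadata/function events
    .Take(5)                                         // stop consuming after 5 chunks
    .ToListAsync();
```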
For RAG (Retrieval-Augmented Generation) support:
dotnet add package Mythosia.AI.Rag
This adds .WithRag() to any AIService, enabling document-based context augmentation. See the Mythosia.AI.Rag README for full usage details.
using Mythosia.AI.Rag;
var service = new ClaudeService(apiKey, httpClient)
.WithRag(rag => rag
.AddDocument("manual.txt")
.AddDocument("policy.txt")
);
var response = await service.GetCompletionAsync("What is the refund policy?");
Quick Start
// OpenAI GPT
var gptService = new ChatGptService(apiKey, httpClient);
var response = await gptService.GetCompletionAsync("Hello!");
// Anthropic Claude
var claudeService = new ClaudeService(apiKey, httpClient);
var response = await claudeService.GetCompletionAsync("Hello!");
// Google Gemini
var geminiService = new GeminiService(apiKey, httpClient);
var response = await geminiService.GetCompletionAsync("Hello!");
GPT-5 Family Configuration
GPT-5 family models support type-safe reasoning configuration with per-model enums.
Reasoning Effort (Per-Model Enums)
Each GPT-5 variant has its own enum to ensure only valid options are available at compile time.
var gptService = (ChatGptService)service;
// GPT-5: Gpt5Reasoning (Auto/Minimal/Low/Medium/High)
gptService.WithGpt5Parameters(
reasoningEffort: Gpt5Reasoning.High,
reasoningSummary: ReasoningSummary.Concise);
// GPT-5.1: Gpt5_1Reasoning (Auto/None/Low/Medium/High) + Verbosity
gptService.WithGpt5_1Parameters(
reasoningEffort: Gpt5_1Reasoning.Medium,
verbosity: Verbosity.Low,
reasoningSummary: ReasoningSummary.Concise);
// GPT-5.2: Gpt5_2Reasoning (Auto/None/Low/Medium/High/XHigh) + Verbosity
gptService.WithGpt5_2Parameters(
reasoningEffort: Gpt5_2Reasoning.XHigh,
verbosity: Verbosity.High);
Auto uses the model-appropriate default (e.g., Medium for GPT-5, None for GPT-5.1/5.2, Medium for GPT-5.2 Pro/Codex).
Reasoning Summary
All GPT-5 family models support ReasoningSummary enum (Auto / Concise / Detailed). Set to null to disable.
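For instance (a sketch using the `WithGpt5Parameters` call shown above):

```csharp
// Request a detailed reasoning summary
gptService.WithGpt5Parameters(reasoningSummary: ReasoningSummary.Detailed);

// Pass null to disable reasoning summaries
gptService.WithGpt5Parameters(reasoningSummary: null);
```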
Gemini Configuration
Gemini 3 — ThinkingLevel
var geminiService = new GeminiService(apiKey, httpClient);
geminiService.ChangeModel(AIModel.Gemini3FlashPreview);
// GeminiThinkingLevel enum: Auto / Minimal / Low / Medium / High
geminiService.ThinkingLevel = GeminiThinkingLevel.Low; // Auto = model default (High)
Gemini 2.5 — ThinkingBudget
geminiService.ChangeModel(AIModel.Gemini2_5Pro);
geminiService.ThinkingBudget = 8192; // -1 = dynamic (default), 0 = disable
Grok Configuration
Reasoning Effort
var grokService = new GrokService(apiKey, httpClient);
grokService.ChangeModel(AIModel.Grok3Mini);
// GrokReasoning enum: Off / Low / High
grokService.WithGrokParameters(reasoningEffort: GrokReasoning.High);
Note: Only `grok-3-mini` supports the `reasoning_effort` API parameter. Other Grok models ignore it.
Reasoning Content Streaming
Grok reasoning models (grok-3-mini, grok-4, grok-4-1-fast) stream reasoning_content when reasoning is enabled:
await foreach (var content in grokService.StreamAsync(message, new StreamOptions().WithReasoning()))
{
if (content.Type == StreamingContentType.Reasoning)
Console.Write($"[Think] {content.Content}");
else if (content.Type == StreamingContentType.Text)
Console.Write(content.Content);
}
Function Calling
Quick Start with Functions
// Define a simple function
var service = new ChatGptService(apiKey, httpClient)
.WithFunction(
"get_weather",
"Gets the current weather for a location",
("location", "The city and country", required: true),
(string location) => $"The weather in {location} is sunny, 22°C"
);
// AI will automatically call the function when needed
var response = await service.GetCompletionAsync("What's the weather in Seoul?");
// Output: "The weather in Seoul is currently sunny with a temperature of 22°C."
Attribute-Based Function Registration
public class WeatherService
{
[AiFunction("get_current_weather", "Gets the current weather for a location")]
public string GetWeather(
[AiParameter("The city name", required: true)] string city,
[AiParameter("Temperature unit", required: false)] string unit = "celsius")
{
// Your implementation
return $"Weather in {city}: 22°{unit[0]}";
}
}
// Register all functions from a class
var weatherService = new WeatherService();
var service = new ChatGptService(apiKey, httpClient)
.WithFunctions(weatherService);
Advanced Function Builder
var service = new ChatGptService(apiKey, httpClient)
.WithFunction(FunctionBuilder.Create("calculate")
.WithDescription("Performs mathematical calculations")
.AddParameter("expression", "string", "The math expression", required: true)
.AddParameter("precision", "integer", "Decimal places", required: false, defaultValue: 2)
.WithHandler(async (args) =>
{
var expr = args["expression"].ToString();
var precision = Convert.ToInt32(args.GetValueOrDefault("precision", 2));
// Calculate and return result
return await CalculateAsync(expr, precision);
})
.Build());
Multiple Functions with Different Types
var service = new ChatGptService(apiKey, httpClient)
// Parameterless function
.WithFunction(
"get_time",
"Gets the current time",
() => DateTime.Now.ToString("HH:mm:ss")
)
// Two-parameter function
.WithFunction(
"add_numbers",
"Adds two numbers",
("a", "First number", true),
("b", "Second number", true),
(double a, double b) => $"The sum is {a + b}"
)
// Async function
.WithFunctionAsync(
"fetch_data",
"Fetches data from API",
("endpoint", "API endpoint", true),
async (string endpoint) => await httpClient.GetStringAsync(endpoint)
);
// The AI will automatically use the appropriate functions
var response = await service.GetCompletionAsync(
"What time is it? Also, what's 15 plus 27?"
);
Function Calling Policies
// Pre-defined policies
service.DefaultPolicy = FunctionCallingPolicy.Fast; // 30s timeout, 10 rounds
service.DefaultPolicy = FunctionCallingPolicy.Complex; // 300s timeout, 50 rounds
service.DefaultPolicy = FunctionCallingPolicy.Vision; // 200s timeout, for image analysis
// Custom policy
service.DefaultPolicy = new FunctionCallingPolicy
{
MaxRounds = 25,
TimeoutSeconds = 120,
MaxConcurrency = 5,
EnableLogging = true // Enable debug output
};
// Per-request policy override
var response = await service
.WithPolicy(FunctionCallingPolicy.Fast)
.GetCompletionAsync("Complex task requiring functions");
// Inline policy configuration
var response = await service
.BeginMessage()
.AddText("Analyze this data")
.WithMaxRounds(5)
.WithTimeout(60)
.SendAsync();
Function Calling with Streaming
// Stream with function calling support
await foreach (var content in service.StreamAsync(
"What's the weather in Seoul and calculate 15% tip on $85",
StreamOptions.WithFunctions))
{
if (content.Type == StreamingContentType.FunctionCall)
{
Console.WriteLine($"Calling function: {content.Metadata["function_name"]}");
}
else if (content.Type == StreamingContentType.FunctionResult)
{
Console.WriteLine($"Function completed: {content.Metadata["status"]}");
}
else if (content.Type == StreamingContentType.Text)
{
Console.Write(content.Content);
}
}
Disabling Functions Temporarily
// Disable functions for a single request
var response = await service
.WithoutFunctions()
.GetCompletionAsync("Don't use any functions for this");
// Or use the async helper
var response = await service.AskWithoutFunctionsAsync(
"Process this without calling functions"
);
Structured Output
Deserialize LLM responses directly into C# POCOs with automatic JSON recovery.
Basic Usage
// Define your POCO
public class WeatherResponse
{
public string City { get; set; }
public double Temperature { get; set; }
public string Condition { get; set; }
}
// Get typed result — schema is auto-generated and sent to the LLM
var result = await service.GetCompletionAsync<WeatherResponse>(
"What's the weather in Seoul?");
Console.WriteLine($"{result.City}: {result.Temperature}°C, {result.Condition}");
Auto-Recovery Retry
When the LLM returns invalid JSON, a correction prompt is automatically sent asking the model to fix its output. This is not a network retry — it's an output quality/format correction loop.
// Configure service-level retry count (default: 2)
service.StructuredOutputMaxRetries = 3;
// On final failure, StructuredOutputException is thrown with rich diagnostics:
// - FirstRawResponse, LastRawResponse
// - ParseError, AttemptCount, SchemaJson, TargetTypeName
Per-Call Structured Output Policy
Override retry behavior for a single request without changing service defaults:
// Custom policy — applies only to this call, then auto-cleared
var result = await service
.WithStructuredOutputPolicy(new StructuredOutputPolicy { MaxRepairAttempts = 5 })
.GetCompletionAsync<MyDto>(prompt);
// Preset: no retry (1 attempt only)
var result = await service
.WithNoRetryStructuredOutput()
.GetCompletionAsync<MyDto>(prompt);
// Preset: strict mode (up to 3 retries = 4 total attempts)
var result = await service
.WithStrictStructuredOutput()
.GetCompletionAsync<MyDto>(prompt);
| Preset | MaxRepairAttempts | Description |
|---|---|---|
| Default | null (service default) | Uses `StructuredOutputMaxRetries` |
| NoRetry | 0 | Single attempt, no retry |
| Strict | 3 | Up to 3 correction retries |
Streaming Structured Output
Stream text chunks in real-time to the UI while getting a final deserialized object with auto-repair:
var run = service.BeginStream(prompt)
.WithStructuredOutput(new StructuredOutputPolicy { MaxRepairAttempts = 2 })
.As<MyDto>();
// Optional: observe chunks in real-time
await foreach (var chunk in run.Stream(cancellationToken))
{
Console.Write(chunk); // UI display
}
// Final deserialized result (waits for stream + parse/repair)
MyDto dto = await run.Result;
- `Result` works without `Stream()` — just `await run.Result` internally consumes the stream and parses
- `Stream()` is single-use — a second call throws `InvalidOperationException`
- `Result` waits for stream completion — even if awaited mid-stream, it won't resolve early
- Repair retries are non-streaming — correction prompts use `GetCompletionAsync()` for efficiency
Collection Support (List<T>, T[])
Both GetCompletionAsync<T>() and streaming support collection types — no wrapper DTO needed:
// Non-streaming: get a list directly
var items = await service.GetCompletionAsync<List<ItemDto>>(
"Extract all entities from this document...");
// Streaming: observe chunks + get list result
var run = service.BeginStream(prompt).As<List<ItemDto>>();
await foreach (var chunk in run.Stream()) Console.Write(chunk);
List<ItemDto> items = await run.Result;
List<T>, T[], IReadOnlyList<T> are all supported. JSON array schema is auto-generated from the element type.
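For example (a sketch; `ItemDto` is the element type from the snippet above):

```csharp
// Array and read-only list targets use the same auto-generated array schema
ItemDto[] asArray = await service.GetCompletionAsync<ItemDto[]>(prompt);
IReadOnlyList<ItemDto> asReadOnly =
    await service.GetCompletionAsync<IReadOnlyList<ItemDto>>(prompt);
```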
Conversation Summary Policy
Automatically summarize old conversation messages when the conversation exceeds a configured threshold. The summary is stored and injected into the system message on each subsequent LLM request.
Configuration
// Token-based: summarize when total tokens exceed 3000, keep recent ~1000 tokens
service.ConversationPolicy = SummaryConversationPolicy.ByToken(
triggerTokens: 3000,
keepRecentTokens: 1000
);
// Message-count-based: summarize when messages exceed 20, keep last 5
service.ConversationPolicy = SummaryConversationPolicy.ByMessage(
triggerCount: 20,
keepRecentCount: 5
);
// Combined (OR condition): triggers when either threshold is exceeded
service.ConversationPolicy = SummaryConversationPolicy.ByBoth(
triggerTokens: 3000,
triggerCount: 20
);
Usage
// Just use as normal — summarization happens automatically
service.ConversationPolicy = SummaryConversationPolicy.ByMessage(triggerCount: 20, keepRecentCount: 5);
var response = await service.GetCompletionAsync("Continue our conversation...");
// When message count exceeds 20, old messages are summarized automatically
Session Persistence
// Save summary for later
string saved = service.ConversationPolicy.CurrentSummary;
// Restore in a new session
service.ConversationPolicy.LoadSummary(saved);
Key Design Decisions
- StatelessMode protection — Summary LLM calls use `StatelessMode = true` to prevent polluting the main conversation history
- Backward compatible — `ConversationPolicy` defaults to `null`; existing behavior is unchanged
- Provider-agnostic — Works with all providers (OpenAI, Claude, Gemini, Grok, DeepSeek, Perplexity)
- Incremental summarization — When re-summarizing, the existing summary is included as context for the new summary
Enhanced Streaming
Stream Options
// Text only - fastest, no overhead
await foreach (var chunk in service.StreamAsync("Hello", StreamOptions.TextOnlyOptions))
{
Console.Write(chunk.Content);
}
// With metadata - includes model info, timestamps, etc.
await foreach (var content in service.StreamAsync("Hello", StreamOptions.FullOptions))
{
if (content.Metadata != null)
{
Console.WriteLine($"Model: {content.Metadata["model"]}");
}
Console.Write(content.Content);
}
// Custom options
var options = new StreamOptions()
.WithMetadata(true)
.WithFunctionCalls(true)
.WithTokenInfo(false)
.AsTextOnly(false);
await foreach (var content in service.StreamAsync("Query", options))
{
// Process based on content.Type
switch (content.Type)
{
case StreamingContentType.Text:
Console.Write(content.Content);
break;
case StreamingContentType.FunctionCall:
Console.WriteLine($"Calling: {content.Metadata["function_name"]}");
break;
case StreamingContentType.Completion:
Console.WriteLine($"Total length: {content.Metadata["total_length"]}");
break;
}
}
Reasoning Streaming
GPT-5, Gemini 3, and Grok reasoning models support streaming reasoning (thinking) content.
await foreach (var content in service.StreamAsync(message, new StreamOptions().WithReasoning()))
{
if (content.Type == StreamingContentType.Reasoning)
Console.WriteLine($"[Thinking] {content.Content}");
else if (content.Type == StreamingContentType.Text)
Console.Write(content.Content);
}
Service Support
| Service | Function Calling | Streaming | Reasoning | Notes |
|---|---|---|---|---|
| OpenAI GPT-5.2 / 5.2 Pro / 5.2 Codex | ✅ | ✅ | ✅ | Per-model reasoning enums + verbosity |
| OpenAI GPT-5.1 | ✅ | ✅ | ✅ | Reasoning + verbosity control |
| OpenAI GPT-5 / Mini / Nano | ✅ | ✅ | ✅ | Reasoning streaming + summary |
| OpenAI GPT-4.1 / GPT-4o | ✅ | ✅ | — | Full function support |
| OpenAI o3 / o3-pro | ✅ | ✅ | ✅ | Advanced reasoning |
| Claude Opus 4.6 / 4.5 / 4.1 / 4 | ✅ | ✅ | ✅ | Extended thinking + tool use |
| Claude Sonnet 4.6 / 4.5 / 4 | ✅ | ✅ | ✅ | Extended thinking + tool use |
| Claude Haiku 4.5 | ✅ | ✅ | ✅ | Extended thinking + tool use |
| Gemini 3 Flash/Pro | ✅ | ✅ | ✅ | ThinkingLevel + thought signatures |
| Gemini 2.5 Pro/Flash | ✅ | ✅ | ✅ | ThinkingBudget control |
| xAI Grok 4 / 4.1 Fast / 3 / 3 Mini | ✅ | ✅ | ✅ | GrokReasoning effort + reasoning streaming |
| DeepSeek | ❌ | ✅ | ✅ | Reasoner model streaming |
| Perplexity | ❌ | ✅ | — | Web search + citations |
Complete Examples
Building a Weather Assistant
public class WeatherAssistant
{
private readonly ChatGptService _service;
private readonly HttpClient _httpClient;
public WeatherAssistant(string apiKey)
{
_httpClient = new HttpClient();
_service = new ChatGptService(apiKey, _httpClient)
.WithSystemMessage("You are a helpful weather assistant.")
.WithFunction(
"get_weather",
"Gets current weather for a city",
("city", "City name", true),
GetWeatherData
)
.WithFunction(
"get_forecast",
"Gets weather forecast",
("city", "City name", true),
("days", "Number of days", false),
GetForecast
);
// Configure function calling behavior
_service.DefaultPolicy = new FunctionCallingPolicy
{
MaxRounds = 10,
TimeoutSeconds = 30,
EnableLogging = true
};
}
private string GetWeatherData(string city)
{
// In real implementation, call weather API
return $"{{\"city\":\"{city}\",\"temp\":22,\"condition\":\"sunny\"}}";
}
private string GetForecast(string city, int days = 3)
{
// In real implementation, call forecast API
return $"{{\"city\":\"{city}\",\"forecast\":\"{days} days of sun\"}}";
}
public async Task<string> AskAsync(string question)
{
return await _service.GetCompletionAsync(question);
}
public async IAsyncEnumerable<string> StreamAsync(string question)
{
await foreach (var content in _service.StreamAsync(question))
{
if (content.Type == StreamingContentType.Text && content.Content != null)
{
yield return content.Content;
}
}
}
}
// Usage
var assistant = new WeatherAssistant(apiKey);
// Functions are called automatically
var response = await assistant.AskAsync("What's the weather in Tokyo?");
// AI calls get_weather("Tokyo") and responds naturally
// Streaming also supports functions
await foreach (var chunk in assistant.StreamAsync(
"Compare weather in Seoul and Tokyo for the next 5 days"))
{
Console.Write(chunk);
}
Math Tutor with Step-by-Step Solutions
var mathTutor = new ChatGptService(apiKey, httpClient)
.WithSystemMessage("You are a math tutor. Always explain your reasoning.")
.WithFunction(
"calculate",
"Performs calculations",
("expression", "Math expression", true),
(string expr) => {
// Using a math expression evaluator
var result = EvaluateExpression(expr);
return $"Result: {result}";
}
)
.WithFunction(
"solve_equation",
"Solves equations step by step",
("equation", "Equation to solve", true),
(string equation) => {
var steps = SolveWithSteps(equation);
return JsonSerializer.Serialize(steps);
}
);
// The AI will use functions and explain the process
var response = await mathTutor.GetCompletionAsync(
"Solve the equation 2x + 5 = 13 and verify the answer"
);
// Output includes step-by-step solution with verification
Migration Guides
For detailed migration instructions, see the Release Notes.
Best Practices
Function Design: Keep functions focused and simple. Complex logic should be broken into multiple functions.
Error Handling: Functions should return meaningful error messages that the AI can understand.
Performance: Use appropriate policies for your use case (Fast for simple tasks, Complex for detailed analysis).
Streaming: Use `TextOnlyOptions` for best performance when metadata isn't needed.
Testing: Test function calling with various prompts to ensure robust behavior.
Troubleshooting
Q: Functions aren't being called when expected?
- Ensure functions are registered with clear, descriptive names and descriptions
- Check that `EnableFunctions` is true on the service
- Verify the model supports function calling (see the Service Support table above)
Q: Function calling is too slow?
- Adjust the policy timeout: `service.DefaultPolicy.TimeoutSeconds = 30`
- Use `FunctionCallingPolicy.Fast` for simple operations
- Consider using streaming for better perceived performance
Q: How to debug function execution?
- Enable logging: `service.DefaultPolicy.EnableLogging = true`
- Check the console output for round-by-round execution details
- Use `StreamOptions.FullOptions` to see function call metadata
Q: Can I use functions with streaming?
- Yes! Functions work seamlessly with streaming
- Use `StreamOptions.WithFunctions` to see function execution in real-time
| Product | Compatible and computed target framework versions |
|---|---|
| .NET | net5.0 through net10.0 were computed, along with their platform-specific variants (android, browser, ios, maccatalyst, macos, tvos, windows). |
| .NET Core | netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
| .NET Standard | netstandard2.1 is compatible. |
| MonoAndroid | monoandroid was computed. |
| MonoMac | monomac was computed. |
| MonoTouch | monotouch was computed. |
| Tizen | tizen60 was computed. |
| Xamarin.iOS | xamarinios was computed. |
| Xamarin.Mac | xamarinmac was computed. |
| Xamarin.TVOS | xamarintvos was computed. |
| Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies (.NET Standard 2.1)
- Azure.AI.OpenAI (>= 2.1.0)
- Mythosia (>= 1.4.0)
- Newtonsoft.Json (>= 13.0.4)
- NJsonSchema (>= 11.5.2)
- System.Threading.Channels (>= 10.0.3)
- TiktokenSharp (>= 1.2.1)
NuGet packages (1)
Showing the top NuGet package that depends on Mythosia.AI:
| Package | Description |
|---|---|
| Mythosia.AI.Rag | RAG (Retrieval Augmented Generation) orchestration for Mythosia.AI. Includes RagPipeline, text splitters, context builder, and OpenAI embedding provider. |
Version History
| Version | Downloads | Last Updated |
|---|---|---|
| 4.6.2 | 65 | 2/27/2026 |
| 4.6.1 | 53 | 2/27/2026 |
| 4.6.0 | 74 | 2/26/2026 |
| 4.5.0 | 71 | 2/26/2026 |
| 4.4.0 | 73 | 2/25/2026 |
| 4.3.0 | 96 | 2/24/2026 |
| 4.2.0 | 86 | 2/22/2026 |
| 4.1.0 | 95 | 2/15/2026 |
| 4.0.1 | 92 | 2/14/2026 |
| 4.0.0 | 96 | 2/13/2026 |
| 3.2.0 | 92 | 2/12/2026 |
| 3.1.0 | 95 | 2/10/2026 |
| 3.0.3 | 247 | 9/7/2025 |
| 3.0.2 | 142 | 9/6/2025 |
| 3.0.1 | 211 | 9/1/2025 |
| 3.0.0 | 233 | 8/28/2025 |
| 2.2.1 | 197 | 8/18/2025 |
| 2.2.0 | 198 | 8/8/2025 |
| 2.1.0 | 162 | 7/18/2025 |
| 2.0.1 | 191 | 7/17/2025 |