Mythosia.AI
3.2.0
See the version list below for details.
dotnet add package Mythosia.AI --version 3.2.0
NuGet\Install-Package Mythosia.AI -Version 3.2.0
<PackageReference Include="Mythosia.AI" Version="3.2.0" />
<PackageVersion Include="Mythosia.AI" Version="3.2.0" />
<PackageReference Include="Mythosia.AI" />
paket add Mythosia.AI --version 3.2.0
#r "nuget: Mythosia.AI, 3.2.0"
#:package Mythosia.AI@3.2.0
#addin nuget:?package=Mythosia.AI&version=3.2.0
#tool nuget:?package=Mythosia.AI&version=3.2.0
Mythosia.AI
Package Summary
The Mythosia.AI library provides a unified interface for AI models from OpenAI (GPT-5.2/5.1/5, GPT-4o), Anthropic (Claude 3/4), Google (Gemini), DeepSeek, and Perplexity (Sonar), with multimodal support, function calling, reasoning streaming, and advanced streaming capabilities.
📚 Documentation
- Basic Usage Guide - Getting started with text queries, streaming, image analysis, and more
- Advanced Features - Function calling, policies, and enhanced streaming (v3.0.0)
- Release Notes - Full version history
🚀 What's New in v3.2.0
GPT-5.1 / GPT-5.2 Model Support 🧠
- GPT-5.1: Reasoning model with effort levels (none/low/medium/high) and text verbosity (low/medium/high)
- GPT-5.2: Best model for complex, coding, and agentic tasks with effort levels (none/low/medium/high/xhigh)
- GPT-5.2 Pro: High-compute model for tough problems (medium/high/xhigh)
- WithGpt5_1Parameters(): Configure reasoning effort, verbosity, and reasoning summary for GPT-5.1
- WithGpt5_2Parameters(): Configure reasoning effort, verbosity, and reasoning summary for GPT-5.2
- Reasoning Summary: All GPT-5 family models now support a configurable reasoning summary (auto/concise/detailed/disabled)
Model Updates
- ✅ gpt-5.1 (full support)
- ✅ gpt-5.2, gpt-5.2-pro (full support)
- ✅ gpt-5, gpt-5-mini, gpt-5-nano (full support)
- ⏸️ gpt-5-pro (temporarily suspended)
- 🗑️ Removed deprecated: o3-mini, claude-3-5-sonnet-20241022
Quick Example
// GPT-5.2 with reasoning effort and verbosity
var gptService = (ChatGptService)service;
gptService.WithGpt5_2Parameters(reasoningEffort: "high", verbosity: "high");
var response = await gptService.GetCompletionAsync("Solve: 15 * 17");
// GPT-5.1 with custom configuration
gptService.WithGpt5_1Parameters(reasoningEffort: "medium", verbosity: "low", reasoningSummary: "concise");
var response2 = await gptService.GetCompletionAsync("Explain quantum computing");
// GPT-5 reasoning streaming
var options = new StreamOptions().WithReasoning().WithMetadata();
await foreach (var content in service.StreamAsync("Solve: 15 * 17", options))
{
if (content.Type == StreamingContentType.Reasoning)
Console.Write($"[Thinking] {content.Content}");
else if (content.Type == StreamingContentType.Text)
Console.Write(content.Content);
}
Installation
dotnet add package Mythosia.AI
For advanced LINQ operations with streams:
dotnet add package System.Linq.Async
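For example, the async LINQ operators compose directly with the streaming API (a minimal sketch, assuming StreamAsync returns IAsyncEnumerable&lt;StreamingContent&gt; as in the examples below):

```csharp
using System;
using System.Linq; // async LINQ operators come from the System.Linq.Async package

// Take only the first few text chunks of a streamed response.
var chunks = await service.StreamAsync("Write a haiku", StreamOptions.TextOnlyOptions)
    .Where(c => c.Type == StreamingContentType.Text) // ignore non-text content
    .Select(c => c.Content)
    .Take(5)                                         // stop after five chunks
    .ToListAsync();

Console.WriteLine(string.Concat(chunks));
```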
Function Calling (New in v3.0.0)
Quick Start with Functions
// Define a simple function
var service = new ChatGptService(apiKey, httpClient)
.WithFunction(
"get_weather",
"Gets the current weather for a location",
("location", "The city and country", required: true),
(string location) => $"The weather in {location} is sunny, 22°C"
);
// AI will automatically call the function when needed
var response = await service.GetCompletionAsync("What's the weather in Seoul?");
// Output: "The weather in Seoul is currently sunny with a temperature of 22°C."
Attribute-Based Function Registration
public class WeatherService
{
[AiFunction("get_current_weather", "Gets the current weather for a location")]
public string GetWeather(
[AiParameter("The city name", required: true)] string city,
[AiParameter("Temperature unit", required: false)] string unit = "celsius")
{
// Your implementation
        return $"Weather in {city}: 22°{char.ToUpper(unit[0])}";
}
}
// Register all functions from a class
var weatherService = new WeatherService();
var service = new ChatGptService(apiKey, httpClient)
.WithFunctions(weatherService);
Advanced Function Builder
var service = new ChatGptService(apiKey, httpClient)
.WithFunction(FunctionBuilder.Create("calculate")
.WithDescription("Performs mathematical calculations")
.AddParameter("expression", "string", "The math expression", required: true)
.AddParameter("precision", "integer", "Decimal places", required: false, defaultValue: 2)
.WithHandler(async (args) =>
{
var expr = args["expression"].ToString();
var precision = Convert.ToInt32(args.GetValueOrDefault("precision", 2));
// Calculate and return result
return await CalculateAsync(expr, precision);
})
.Build());
Multiple Functions with Different Types
var service = new ChatGptService(apiKey, httpClient)
// Parameterless function
.WithFunction(
"get_time",
"Gets the current time",
() => DateTime.Now.ToString("HH:mm:ss")
)
// Two-parameter function
.WithFunction(
"add_numbers",
"Adds two numbers",
("a", "First number", true),
("b", "Second number", true),
(double a, double b) => $"The sum is {a + b}"
)
// Async function
.WithFunctionAsync(
"fetch_data",
"Fetches data from API",
("endpoint", "API endpoint", true),
async (string endpoint) => await httpClient.GetStringAsync(endpoint)
);
// The AI will automatically use the appropriate functions
var response = await service.GetCompletionAsync(
"What time is it? Also, what's 15 plus 27?"
);
Function Calling Policies
// Pre-defined policies
service.DefaultPolicy = FunctionCallingPolicy.Fast; // 30s timeout, 10 rounds
service.DefaultPolicy = FunctionCallingPolicy.Complex; // 300s timeout, 50 rounds
service.DefaultPolicy = FunctionCallingPolicy.Vision; // 200s timeout, for image analysis
// Custom policy
service.DefaultPolicy = new FunctionCallingPolicy
{
MaxRounds = 25,
TimeoutSeconds = 120,
MaxConcurrency = 5,
EnableLogging = true // Enable debug output
};
// Per-request policy override
var response = await service
.WithPolicy(FunctionCallingPolicy.Fast)
.GetCompletionAsync("Complex task requiring functions");
// Inline policy configuration
var response = await service
.BeginMessage()
.AddText("Analyze this data")
.WithMaxRounds(5)
.WithTimeout(60)
.SendAsync();
Function Calling with Streaming
// Stream with function calling support
await foreach (var content in service.StreamAsync(
"What's the weather in Seoul and calculate 15% tip on $85",
StreamOptions.WithFunctions))
{
if (content.Type == StreamingContentType.FunctionCall)
{
Console.WriteLine($"Calling function: {content.Metadata["function_name"]}");
}
else if (content.Type == StreamingContentType.FunctionResult)
{
Console.WriteLine($"Function completed: {content.Metadata["status"]}");
}
else if (content.Type == StreamingContentType.Text)
{
Console.Write(content.Content);
}
}
Disabling Functions Temporarily
// Disable functions for a single request
var response = await service
.WithoutFunctions()
.GetCompletionAsync("Don't use any functions for this");
// Or use the async helper
var response = await service.AskWithoutFunctionsAsync(
"Process this without calling functions"
);
Enhanced Streaming (v3.0.0)
Stream Options
// Text only - fastest, no overhead
await foreach (var chunk in service.StreamAsync("Hello", StreamOptions.TextOnlyOptions))
{
Console.Write(chunk.Content);
}
// With metadata - includes model info, timestamps, etc.
await foreach (var content in service.StreamAsync("Hello", StreamOptions.FullOptions))
{
if (content.Metadata != null)
{
Console.WriteLine($"Model: {content.Metadata["model"]}");
}
Console.Write(content.Content);
}
// Custom options
var options = new StreamOptions()
.WithMetadata(true)
.WithFunctionCalls(true)
.WithTokenInfo(false)
.AsTextOnly(false);
await foreach (var content in service.StreamAsync("Query", options))
{
// Process based on content.Type
switch (content.Type)
{
case StreamingContentType.Text:
Console.Write(content.Content);
break;
case StreamingContentType.FunctionCall:
Console.WriteLine($"Calling: {content.Metadata["function_name"]}");
break;
case StreamingContentType.Completion:
Console.WriteLine($"Total length: {content.Metadata["total_length"]}");
break;
}
}
Service-Specific Function Support
| Service | Function Calling | Streaming Functions | Notes |
|---|---|---|---|
| OpenAI GPT-5.2 / 5.2 Pro | ✅ Full | ✅ Full | Best for complex, coding, and agentic tasks |
| OpenAI GPT-5.1 | ✅ Full | ✅ Full | Reasoning with verbosity control |
| OpenAI GPT-5 / Mini / Nano | ✅ Full | ✅ Full | Reasoning streaming + summary support |
| OpenAI GPT-4o | ✅ Full | ✅ Full | Best support, all features |
| OpenAI GPT-4.1 | ✅ Full | ✅ Full | Full function support |
| OpenAI o3 | ✅ Full | ✅ Full | Advanced reasoning with functions |
| Claude 3/4 | ✅ Full | ✅ Full | Tool use via native API |
| Gemini | 🔜 Coming Soon | 🔜 Coming Soon | Support planned for future update |
| DeepSeek | ❌ | ❌ | Not yet available |
| Perplexity | ❌ | ❌ | Web search focused |
Complete Examples
Building a Weather Assistant
public class WeatherAssistant
{
private readonly ChatGptService _service;
private readonly HttpClient _httpClient;
public WeatherAssistant(string apiKey)
{
_httpClient = new HttpClient();
_service = new ChatGptService(apiKey, _httpClient)
.WithSystemMessage("You are a helpful weather assistant.")
.WithFunction(
"get_weather",
"Gets current weather for a city",
("city", "City name", true),
GetWeatherData
)
.WithFunction(
"get_forecast",
"Gets weather forecast",
("city", "City name", true),
("days", "Number of days", false),
GetForecast
);
// Configure function calling behavior
_service.DefaultPolicy = new FunctionCallingPolicy
{
MaxRounds = 10,
TimeoutSeconds = 30,
EnableLogging = true
};
}
private string GetWeatherData(string city)
{
// In real implementation, call weather API
return $"{{\"city\":\"{city}\",\"temp\":22,\"condition\":\"sunny\"}}";
}
private string GetForecast(string city, int days = 3)
{
// In real implementation, call forecast API
return $"{{\"city\":\"{city}\",\"forecast\":\"{days} days of sun\"}}";
}
public async Task<string> AskAsync(string question)
{
return await _service.GetCompletionAsync(question);
}
public async IAsyncEnumerable<string> StreamAsync(string question)
{
await foreach (var content in _service.StreamAsync(question))
{
if (content.Type == StreamingContentType.Text && content.Content != null)
{
yield return content.Content;
}
}
}
}
// Usage
var assistant = new WeatherAssistant(apiKey);
// Functions are called automatically
var response = await assistant.AskAsync("What's the weather in Tokyo?");
// AI calls get_weather("Tokyo") and responds naturally
// Streaming also supports functions
await foreach (var chunk in assistant.StreamAsync(
"Compare weather in Seoul and Tokyo for the next 5 days"))
{
Console.Write(chunk);
}
Math Tutor with Step-by-Step Solutions
var mathTutor = new ChatGptService(apiKey, httpClient)
.WithSystemMessage("You are a math tutor. Always explain your reasoning.")
.WithFunction(
"calculate",
"Performs calculations",
("expression", "Math expression", true),
(string expr) => {
// Using a math expression evaluator
var result = EvaluateExpression(expr);
return $"Result: {result}";
}
)
.WithFunction(
"solve_equation",
"Solves equations step by step",
("equation", "Equation to solve", true),
(string equation) => {
var steps = SolveWithSteps(equation);
return JsonSerializer.Serialize(steps);
}
);
// The AI will use functions and explain the process
var response = await mathTutor.GetCompletionAsync(
"Solve the equation 2x + 5 = 13 and verify the answer"
);
// Output includes step-by-step solution with verification
Migration Guide from v2.x to v3.0.0
Function Calling (New Feature)
// v3.0.0 - Functions are now supported!
var service = new ChatGptService(apiKey, httpClient)
.WithFunction("my_function", "Description",
("param", "Param description", true),
(string param) => $"Result: {param}");
// AI will automatically use functions when appropriate
var response = await service.GetCompletionAsync("Use my function");
Streaming Changes
// v2.x - Returns string chunks
await foreach (var chunk in service.StreamAsync("Hello"))
{
Console.Write(chunk); // chunk is string
}
// v3.0.0 - Can return StreamingContent with metadata
await foreach (var content in service.StreamAsync("Hello", StreamOptions.FullOptions))
{
Console.Write(content.Content); // Access text via .Content
var metadata = content.Metadata; // Access metadata
}
// For backward compatibility, default behavior unchanged
await foreach (var chunk in service.StreamAsync("Hello"))
{
Console.Write(chunk); // Still works, chunk is string
}
Policy System (New)
// v3.0.0 - Control function execution behavior
service.DefaultPolicy = FunctionCallingPolicy.Fast;
// Per-request override
await service
.WithTimeout(60)
.WithMaxRounds(5)
.GetCompletionAsync("Complex task");
Best Practices
- Function Design: Keep functions focused and simple. Break complex logic into multiple functions.
- Error Handling: Functions should return meaningful error messages that the AI can understand.
- Performance: Use the appropriate policy for your use case (Fast for simple tasks, Complex for detailed analysis).
- Streaming: Use TextOnlyOptions for best performance when metadata isn't needed.
- Testing: Test function calling with various prompts to ensure robust behavior.
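For the error-handling point, one illustrative sketch (the function name and lookup are hypothetical, not part of the library): return failures as plain, descriptive strings so the model can explain the problem or recover from it.

```csharp
var service = new ChatGptService(apiKey, httpClient)
    .WithFunction(
        "get_stock_price",                          // hypothetical example function
        "Gets the latest price for a stock ticker",
        ("ticker", "Stock ticker symbol", true),
        (string ticker) =>
        {
            if (string.IsNullOrWhiteSpace(ticker))
                return "Error: no ticker was provided; ask the user for a valid symbol.";
            // A real implementation would call a market-data API here.
            return $"{ticker.ToUpperInvariant()}: 101.25 USD";
        }
    );
```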
Troubleshooting
Q: Functions aren't being called when expected?
- Ensure functions are registered with clear, descriptive names and descriptions
- Check that EnableFunctions is true on the ChatBlock
- Verify that the model supports function calling (e.g., GPT-4o, GPT-5 family, Claude 3/4; see the support table above)
Q: Function calling is too slow?
- Adjust the policy timeout: service.DefaultPolicy.TimeoutSeconds = 30
- Use FunctionCallingPolicy.Fast for simple operations
- Consider using streaming for better perceived performance
Q: How to debug function execution?
- Enable logging: service.DefaultPolicy.EnableLogging = true
- Check the console output for round-by-round execution details
- Use StreamOptions.FullOptions to see function call metadata
Q: Can I use functions with streaming?
- Yes! Functions work seamlessly with streaming in v3.0.0
- Use StreamOptions.WithFunctions to see function execution in real-time
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 was computed. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 was computed. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
| .NET Core | netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
| .NET Standard | netstandard2.1 is compatible. |
| MonoAndroid | monoandroid was computed. |
| MonoMac | monomac was computed. |
| MonoTouch | monotouch was computed. |
| Tizen | tizen60 was computed. |
| Xamarin.iOS | xamarinios was computed. |
| Xamarin.Mac | xamarinmac was computed. |
| Xamarin.TVOS | xamarintvos was computed. |
| Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies (.NET Standard 2.1):
- Azure.AI.OpenAI (>= 2.1.0)
- Mythosia (>= 1.4.0)
- Newtonsoft.Json (>= 13.0.3)
- System.Threading.Channels (>= 9.0.7)
- TiktokenSharp (>= 1.1.6)
NuGet packages (1)
Showing the top 1 NuGet packages that depend on Mythosia.AI:
- Mythosia.AI.Rag: RAG (Retrieval Augmented Generation) orchestration for Mythosia.AI. Includes RagPipeline, text splitters, context builder, and OpenAI embedding provider.
GitHub repositories
This package is not used by any popular GitHub repositories.
| Version | Downloads | Last Updated |
|---|---|---|
| 4.6.2 | 66 | 2/27/2026 |
| 4.6.1 | 54 | 2/27/2026 |
| 4.6.0 | 74 | 2/26/2026 |
| 4.5.0 | 73 | 2/26/2026 |
| 4.4.0 | 73 | 2/25/2026 |
| 4.3.0 | 96 | 2/24/2026 |
| 4.2.0 | 86 | 2/22/2026 |
| 4.1.0 | 95 | 2/15/2026 |
| 4.0.1 | 92 | 2/14/2026 |
| 4.0.0 | 96 | 2/13/2026 |
| 3.2.0 | 92 | 2/12/2026 |
| 3.1.0 | 95 | 2/10/2026 |
| 3.0.3 | 247 | 9/7/2025 |
| 3.0.2 | 143 | 9/6/2025 |
| 3.0.1 | 211 | 9/1/2025 |
| 3.0.0 | 233 | 8/28/2025 |
| 2.2.1 | 197 | 8/18/2025 |
| 2.2.0 | 198 | 8/8/2025 |
| 2.1.0 | 162 | 7/18/2025 |
| 2.0.1 | 191 | 7/17/2025 |