Santel.Redis.TypedKeys
1.1.1
Typed, discoverable Redis keys for .NET 9. Focus on developer ergonomics: concise key definitions, optional in‑memory caching, and lightweight pub/sub notifications – all on top of StackExchange.Redis.
- .NET: 9
- Redis client: StackExchange.Redis 2.x
- Package: Santel.Redis.TypedKeys
Highlights
- Strongly-typed wrappers for simple keys and hash maps: `RedisKey<T>`, `RedisHashKey<T>`
- Prefixed string keys stored as separate keys: `RedisPrefixedKeys<T>` (format: `FullName:field`)
- One central context (`RedisDBContextModule`) where you declare all keys
- Optional per-key/per-field in-memory cache with easy invalidation
- Built-in lightweight pub/sub notifications for cross-process cache invalidation
- Opt-in custom serialization per key
- Pluggable key naming via a `nameGeneratorStrategy` delegate
- NEW: Chunked operations for large datasets (`ReadInChunks`, `WriteInChunks`, `RemoveInChunks`)
- NEW: Memory usage tracking with a `GetSize()` method for all key types
- NEW: Enhanced cache invalidation methods (single, bulk, and full)
- NEW: TTL (Time-To-Live) support for `RedisPrefixedKeys<T>`: automatic key expiration
- Helpers: hash paging, DB size, bulk write with chunking, soft safety limits
Install
dotnet add package Santel.Redis.TypedKeys
Requirements
- .NET 9
- A running Redis server
Quick Start
- Define your context (a class inheriting `RedisDBContextModule`) and declare your keys. You can omit constructors entirely; DI will initialize the context automatically:
using Santel.Redis.TypedKeys;
public class AppRedisContext : RedisDBContextModule
{
public RedisKey<string> AppVersion { get; set; } = new(0);
public RedisHashKey<UserProfile> Users { get; set; } = new(1);
public RedisHashKey<Invoice> Invoices { get; set; } = new(2);
public RedisPrefixedKeys<UserProfile> UserById { get; set; } = new(3);
}
public record UserProfile(int Id, string Name);
public record Invoice(string Id, decimal Amount);
Note: Do not call any Init methods. RedisDBContextModule automatically initializes all declared RedisKey<T>, RedisHashKey<T>, and RedisPrefixedKeys<T> via reflection when the context instance is constructed by DI.
- Register with DI
using Microsoft.Extensions.DependencyInjection;
using Santel.Redis.TypedKeys;
using StackExchange.Redis;
var services = new ServiceCollection();
services.AddSingleton<IConnectionMultiplexer>(sp =>
ConnectionMultiplexer.Connect("localhost:6379"));
services.AddLogging();
// Registers your derived context via generic extension.
services.AddRedisDBContext<AppRedisContext>(
keepDataInMemory: true,
nameGeneratorStrategy: name => $"Prod_{name}",
channelName: "Prod");
- Use it
var sp = services.BuildServiceProvider();
var ctx = sp.GetRequiredService<AppRedisContext>();
ctx.AppVersion.Write("1.5.0");
var version = ctx.AppVersion.Read();
ctx.Users.Write("42", new UserProfile(42, "Alice"));
var alice = ctx.Users.Read("42");
// Prefixed keys with automatic expiration (TTL)
await ctx.UserById.WriteAsync("42", new UserProfile(42, "Alice"), expiry: TimeSpan.FromMinutes(30));
var byId = await ctx.UserById.ReadAsync("42");
// Write session data that expires in 1 hour
await ctx.UserById.WriteAsync("session_123", sessionUser, TimeSpan.FromHours(1));
Key Naming & Pub/Sub
- Naming: by default, key name = `PropertyName`.
- If you supply `nameGeneratorStrategy`, it receives `PropertyName` and returns the final Redis key name. Examples:
  - Prefix per environment: `name => $"Prod_{name}"`
  - Kebab-case: `name => Regex.Replace(name, "([a-z])([A-Z])", "$1-$2").ToLowerInvariant()`
  - Tenant-scoped: `name => $"{tenantId}:{name}"`
- Publish channel: controlled by `channelName`.
- Publish payloads:
  - `RedisKey<T>`: `KeyName`
  - `RedisHashKey<T>` single field: `HashName|{field}`
  - `RedisHashKey<T>` publish-all: `HashName|all`
  - `RedisPrefixedKeys<T>`: follows the same pattern as hash
Subscribe example:
var sub = readerMux.GetSubscriber();
await sub.SubscribeAsync("Prod", (ch, msg) =>
{
var text = (string)msg;
if (text.EndsWith("|all"))
{
// Invalidate entire cache for that name (hash or prefixed)
}
else if (text.Contains('|'))
{
var parts = text.Split('|'); // parts[0] = name, parts[1] = field
// Invalidate a single field cache
}
else
{
// Simple key invalidation
}
});
Note: Publishing is performed via the write multiplexer; subscribing can use the read multiplexer.
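Putting the payload formats and the invalidation API together, a subscriber can keep local caches coherent. The sketch below assumes `ctx` is the `AppRedisContext` from the Quick Start, the channel name is `"Prod"`, and that simple keys expose the same `FullName` property the paging example uses for hashes:

```csharp
// Sketch: route pub/sub messages to the matching InvalidateCache calls.
// Assumes ctx (AppRedisContext), readerMux, and channelName "Prod" from above.
var sub = readerMux.GetSubscriber();
await sub.SubscribeAsync("Prod", (ch, msg) =>
{
    var text = (string)msg!;
    if (text == ctx.Users.FullName + "|all")
    {
        ctx.Users.InvalidateCache();                 // whole-hash invalidation
    }
    else if (text.StartsWith(ctx.Users.FullName + "|"))
    {
        var field = text.Split('|')[1];
        ctx.Users.InvalidateCache(field);            // single-field invalidation
    }
    else if (text == ctx.AppVersion.FullName)
    {
        ctx.AppVersion.InvalidateCache();            // simple-key invalidation
    }
});
```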
Ctors and DI
You can optionally define your own constructors in the derived context (e.g., to do extra wiring), but it is not required. The DI extension supports automatic initialization and will provide the connections and options. No manual Init calls are needed.
Previously documented constructor overloads are still supported when present on your derived context:
- `(IConnectionMultiplexer mux, bool keepDataInMemory, ILogger logger, Func<string,string>? nameGeneratorStrategy, string? channelName)`
- `(IConnectionMultiplexer write, IConnectionMultiplexer read, bool keepDataInMemory, ILogger logger, Func<string,string>? nameGeneratorStrategy, string? channelName)`
If you omit constructors, the base parameterless ctor is used and the context is initialized by the framework during activation.
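For illustration, a derived context that defines the documented two-multiplexer overload might look like the sketch below. This assumes the base class exposes a matching constructor (the overloads above suggest it does); if yours does not, omit the constructor and let DI initialize the context:

```csharp
// Sketch: a derived context using the documented write/read-multiplexer ctor.
// Only needed for extra wiring; otherwise omit constructors entirely.
public class SplitConnectionContext : RedisDBContextModule
{
    public RedisHashKey<UserProfile> Users { get; set; } = new(1);

    public SplitConnectionContext(
        IConnectionMultiplexer write,
        IConnectionMultiplexer read,
        bool keepDataInMemory,
        ILogger logger,
        Func<string, string>? nameGeneratorStrategy,
        string? channelName)
        : base(write, read, keepDataInMemory, logger, nameGeneratorStrategy, channelName)
    {
        // extra wiring here, e.g. logging or warm-up reads
    }
}
```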
Caching & Invalidation
- `RedisKey<T>`: caches the last `RedisDataWrapper<T>` read or written
- `RedisHashKey<T>`: caches individual field wrappers on demand
- `RedisPrefixedKeys<T>`: caches individual field wrappers on demand
Invalidation methods:
// RedisKey<T>
ctx.AppVersion.InvalidateCache(); // Clear cache for the key
// RedisHashKey<T>
ctx.Users.InvalidateCache("42"); // Clear cache for single field
ctx.Users.InvalidateCache(new[] {"1","2"}); // Clear cache for multiple fields
ctx.Users.InvalidateCache(); // Clear entire hash cache
// RedisPrefixedKeys<T>
ctx.UserById.InvalidateCache("42");
ctx.UserById.InvalidateCache(new[] {"1","2"});
ctx.UserById.InvalidateCache();
// Legacy methods (still supported)
ctx.AppVersion.ForceToReFetch();
ctx.Users.ForceToReFetch("42");
ctx.Users.ForceToReFetchAll();
API Cheatsheet (most used)
RedisKey<T>
- Construction in context: `public RedisKey<T> SomeKey { get; set; } = new(dbIndex);`
- Write: `Write(T value)` / `Task WriteAsync(T value)`
- Read: `T? Read(bool force = false)` / `Task<T?> ReadAsync(bool force = false)`
- Read full wrapper (timestamps): `RedisDataWrapper<T>? ReadFull()`
- Exists: `bool Exists()`
- Remove: `bool Remove()` / `Task<bool> RemoveAsync()`
- Memory size: `long GetSize()` (returns memory usage in bytes)
- Cache control: `InvalidateCache()` / `ForceToReFetch()`
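A short sketch tying these members together, using the `ctx.AppVersion` key from the Quick Start (the members shown are those listed above; no Redis server semantics beyond that are assumed):

```csharp
// Sketch: typical RedisKey<T> lifecycle.
ctx.AppVersion.Write("2.0.0");

var wrapper = ctx.AppVersion.ReadFull();      // wrapper with value + timestamps
if (ctx.AppVersion.Exists())
{
    Console.WriteLine($"Size: {ctx.AppVersion.GetSize()} bytes");
}

ctx.AppVersion.InvalidateCache();             // next Read() goes to Redis
var fresh = ctx.AppVersion.Read(force: true); // or bypass the cache explicitly
await ctx.AppVersion.RemoveAsync();           // delete the key
```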
RedisHashKey<T>
- Construction: `public RedisHashKey<T> SomeHash { get; set; } = new(dbIndex, serialize?, deSerialize?);`
- Write single: `Write(string field, T value)` / `Task<bool> WriteAsync(string field, T value)`
- Write bulk: `Write(IDictionary<string,T> data)` / `Task<bool> WriteAsync(IDictionary<string,T> data)`
- Write chunked: `WriteInChunks(IDictionary<string,T> data, int chunkSize = 1000)` / `Task<bool> WriteInChunksAsync(...)`
- Read single: `T? Read(string field, bool force = false)` / `Task<T?> ReadAsync(string field, bool force = false)`
- Read multi: `Dictionary<string,T>? Read(IEnumerable<string> fields, bool force = false)` / async variant
- Read chunked: `ReadInChunks(IEnumerable<string> keys, int chunkSize = 1000, bool force = false)` / async variant
- Get all keys: `RedisValue[] GetAllKeys()` / `Task<RedisValue[]> GetAllKeysAsync()`
- Remove: `Remove(string key)` / `RemoveAsync(string key)` / multi-field overload
- Remove chunked: `RemoveInChunks(IEnumerable<string> keys, int chunkSize = 1000)` / async variant
- Remove whole hash: `Task<bool> RemoveAsync()`
- Memory size: `long GetSize()` (returns hash memory usage in bytes)
- Cache control: `InvalidateCache(string key)` / `InvalidateCache(IEnumerable<string> keys)` / `InvalidateCache()`
- Indexer: `T? this[string key]` (read via indexer syntax)
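A brief sketch of the less common members above (indexer, key enumeration, multi-field read), again using the Quick Start's `ctx.Users`:

```csharp
// Sketch: indexer and enumeration members of RedisHashKey<T>.
var alice = ctx.Users["42"];                   // indexer read (cache-aware)

RedisValue[] fields = await ctx.Users.GetAllKeysAsync();
Console.WriteLine($"Hash has {fields.Length} fields");

var some = ctx.Users.Read(new[] { "1", "2" }); // multi-field read
await ctx.Users.RemoveAsync("42");             // remove a single field
```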
RedisPrefixedKeys<T>
- Construction: `public RedisPrefixedKeys<T> SomeGroup { get; set; } = new(dbIndex);`
- Write single: `Write(string field, T value, TimeSpan? expiry = null)` / `Task<bool> WriteAsync(string field, T value, TimeSpan? expiry = null)`
- Write bulk: `Write(IDictionary<string,T> data, TimeSpan? expiry = null)` / `Task<bool> WriteAsync(IDictionary<string,T> data, TimeSpan? expiry = null)`
- Write chunked: `WriteInChunks(IDictionary<string,T> data, int chunkSize = 1000, TimeSpan? expiry = null)` / async variant
- Read single: `T? Read(string field, bool force = false)` / `Task<T?> ReadAsync(string field, bool force = false)`
- Read multi: `Dictionary<string,T>? Read(IEnumerable<string> fields, bool force = false)` / async variant
- Read chunked: `ReadInChunks(IEnumerable<string> keys, int chunkSize = 1000, bool force = false)` / async variant
- Remove: `Remove(string key)` / `RemoveAsync(string key)` / multi-field overload
- Remove chunked: `RemoveInChunks(IEnumerable<string> keys, int chunkSize = 1000)` / async variant
- Memory size: `long GetSize()` (returns total memory usage of all prefixed keys in bytes; uses SCAN)
- Cache control: `InvalidateCache(string key)` / `InvalidateCache(IEnumerable<string> keys)` / `InvalidateCache()`
- TTL support: all write methods accept an optional `TimeSpan? expiry` parameter for automatic key expiration
Context helpers
- `Task<long> GetDbSize(int database)`
- `Task<(List<string>? Keys, long Total)> GetHashKeysByPage(int database, string hashKey, int pageNumber = 1, int pageSize = 10)`
- `Task<string?> GetValues(int database, string key)` (reads the raw string value of a simple key)
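The first and third helpers can be used as sketched below (the paging helper is shown in the next example). This assumes the Quick Start context, and that simple keys expose the same `FullName` property the paging example uses for hashes:

```csharp
// Sketch: non-paging context helpers.
long dbSize = await ctx.GetDbSize(1);          // key count in DB 1
Console.WriteLine($"DB 1 holds {dbSize} keys");

// Raw string read of a simple key by its generated Redis name:
string? raw = await ctx.GetValues(0, ctx.AppVersion.FullName);
```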
Paging Example (Hash fields)
var (fields, total) = await ctx.GetHashKeysByPage(
database: 1,
hashKey: ctx.Users.FullName, // underlying redis key
pageNumber: 2,
pageSize: 25);
Chunked Operations (NEW)
When working with large datasets, use chunked methods to avoid blocking Redis and prevent timeouts:
Write in chunks
var manyUsers = new Dictionary<string, UserProfile>();
for (int i = 0; i < 10000; i++)
    manyUsers[$"{i}"] = new UserProfile(i, $"User{i}");
// Hash: Write in chunks of 500
await ctx.Users.WriteInChunksAsync(manyUsers, chunkSize: 500);
// Prefixed keys: Write in chunks
await ctx.UserById.WriteInChunksAsync(manyUsers, chunkSize: 500);
Read in chunks
var userIds = Enumerable.Range(0, 10000).Select(i => i.ToString()).ToList();
// Read 10,000 users in chunks of 500
var users = await ctx.Users.ReadInChunksAsync(userIds, chunkSize: 500);
Console.WriteLine($"Loaded {users?.Count} users");
Remove in chunks
var idsToRemove = Enumerable.Range(0, 10000).Select(i => i.ToString());
// Remove in chunks
await ctx.Users.RemoveInChunksAsync(idsToRemove, chunkSize: 500);
Benefits:
- Prevents Redis from blocking on large operations
- Reduces memory pressure
- Avoids network timeouts
- Production-safe for datasets with thousands of items
Time-To-Live (TTL) Support for RedisPrefixedKeys (NEW)
RedisPrefixedKeys<T> now supports automatic key expiration via TTL. All write methods accept an optional TimeSpan? expiry parameter:
// Write with 5-minute TTL
await ctx.UserById.WriteAsync("42", userData, expiry: TimeSpan.FromMinutes(5));
// Write session data with 1-hour expiration
await ctx.UserById.WriteAsync("session123", sessionData, TimeSpan.FromHours(1));
// Bulk write with TTL
var tempData = new Dictionary<string, UserProfile>
{
["temp1"] = user1,
["temp2"] = user2
};
await ctx.UserById.WriteAsync(tempData, expiry: TimeSpan.FromMinutes(15));
// Chunked write with TTL for large datasets
var manyTempUsers = new Dictionary<string, UserProfile>();
for (int i = 0; i < 10000; i++)
manyTempUsers[$"temp{i}"] = new UserProfile(i, $"TempUser{i}");
await ctx.UserById.WriteInChunksAsync(manyTempUsers, chunkSize: 500, expiry: TimeSpan.FromHours(2));
Use cases for TTL:
- Session data: Automatically expire user sessions after inactivity
- Cache entries: Implement time-based cache invalidation
- Temporary tokens: Store verification codes, password reset tokens
- Rate limiting: Track API calls that reset after a time window
- Temporary data: Store processing results that don't need permanent storage
Notes:
- TTL is set at the time of writing; updating a key resets its expiration
- If `expiry` is `null`, keys persist indefinitely (default behavior)
- The in-memory cache does not automatically clear when Redis keys expire; use pub/sub or explicit invalidation
- Consider combining TTL with cache invalidation strategies for consistency
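One hedged way to approximate cache/TTL consistency in-process is to schedule a local invalidation alongside the TTL'd write. The timer here is purely an illustration, not a library feature, and `userData` stands in for any `UserProfile` instance:

```csharp
// Sketch: pair a TTL'd write with a scheduled local cache invalidation so the
// in-memory entry does not outlive the Redis key. Illustrative only.
var ttl = TimeSpan.FromMinutes(5);
await ctx.UserById.WriteAsync("42", userData, expiry: ttl);

// Fire-and-forget local invalidation roughly when the Redis key expires.
_ = Task.Delay(ttl).ContinueWith(_ => ctx.UserById.InvalidateCache("42"));
```

For cross-process consistency, prefer the built-in pub/sub notifications over local timers.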
Memory Usage Tracking (NEW)
Track Redis memory usage for monitoring and optimization:
// Get size of a simple key
var versionSize = ctx.AppVersion.GetSize();
Console.WriteLine($"AppVersion: {versionSize} bytes");
// Get size of an entire hash
var usersSize = ctx.Users.GetSize();
Console.WriteLine($"Users hash: {usersSize} bytes");
// Get total size of all prefixed keys (uses SCAN - production safe)
var prefixedSize = ctx.UserById.GetSize();
Console.WriteLine($"All UserById entries: {prefixedSize} bytes");
// Monitor all keys
Console.WriteLine("Memory Usage Summary:");
Console.WriteLine($" AppVersion: {ctx.AppVersion.GetSize()} bytes");
Console.WriteLine($" Users: {ctx.Users.GetSize()} bytes");
Console.WriteLine($"  UserById: {ctx.UserById.GetSize()} bytes");
Notes:
- Uses the Redis `MEMORY USAGE` command
- Returns `0` if the command is not supported or disabled
- For `RedisPrefixedKeys<T>`, scans all matching keys using production-safe `SCAN` (not `KEYS`)
- Useful for monitoring, capacity planning, and cost optimization
Custom Serialization
You can override serialization per key to integrate any serializer. The library always wraps your data inside RedisDataWrapper<T> for timestamps/metadata.
public RedisHashKey<Invoice> Invoices { get; set; } = new(2,
serialize: inv => JsonSerializer.Serialize(inv),
deSerialize: s => JsonSerializer.Deserialize<Invoice>(s)!);
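Any serializer with string in/out works in these delegates. As one sketch, Newtonsoft.Json (already a package dependency) could be plugged in with custom settings; the class and `Settings` field below are illustrative, not part of the library:

```csharp
// Sketch: per-key Newtonsoft.Json serialization with snake_case payloads.
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class SerializationContext : RedisDBContextModule
{
    private static readonly JsonSerializerSettings Settings = new()
    {
        ContractResolver = new DefaultContractResolver
        {
            NamingStrategy = new SnakeCaseNamingStrategy()
        }
    };

    public RedisHashKey<Invoice> Invoices { get; set; } = new(2,
        serialize: inv => JsonConvert.SerializeObject(inv, Settings),
        deSerialize: s => JsonConvert.DeserializeObject<Invoice>(s, Settings)!);
}
```

Keep the serializer stable per key: data written with one format cannot be read back with another.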
Dependency Injection
A generic DI extension is provided:
services.AddRedisDBContext<AppRedisContext>(
keepDataInMemory: true,
nameGeneratorStrategy: name => $"Prod_{name}", // becomes final Redis key (e.g., Prod_Users)
channelName: "Prod"); // pub/sub channel name (omit/empty to disable publishing)
The factory tries these constructors in order:
(IConnectionMultiplexer mux, bool keepDataInMemory, ILogger logger, Func<string,string>? nameGeneratorStrategy, string? channelName)(IConnectionMultiplexer write, IConnectionMultiplexer read, bool keepDataInMemory, ILogger logger, Func<string,string>? nameGeneratorStrategy, string? channelName)
Best Practices
- Use a separate read multiplexer pointing at a replica if you have heavy read traffic.
- Keep `channelName` consistent per environment/tenant to avoid cross-talk.
- Use the `InvalidateCache()` methods after receiving pub/sub messages to keep caches coherent.
- Prefer async methods for high-throughput paths.
- Use chunked operations (`ReadInChunks`, `WriteInChunks`, `RemoveInChunks`) for datasets with 1000+ items.
- Monitor memory usage with `GetSize()` for capacity planning and cost optimization.
- Set an appropriate `chunkSize` for your data size (the default of 1000 works well for most cases).
- `force` parameter: pass `true` to bypass the cache and always read from Redis.
- Use TTL for temporary data: set expiration times on session data, temporary tokens, and cache entries to prevent memory bloat and ensure automatic cleanup.
Thread Safety & Concurrency
Important: The library's key types (RedisKey<T>, RedisHashKey<T>, RedisPrefixedKeys<T>) are not thread-safe for certain operations. Here's what you need to know:
In-Memory Cache Considerations
- The in-memory cache (when `keepDataInMemory: true`) uses internal dictionaries that are not protected by locks.
- Concurrent reads and writes to the same key/field from multiple threads can lead to race conditions, cache corruption, or incorrect data being returned.
- Write operations (e.g., `Write`, `WriteAsync`) update both Redis and the local cache without synchronization.
- Cache invalidation operations modify the internal cache dictionary without thread-safe guards.
Blocking Concerns
- Synchronous methods (e.g., `Write()`, `Read()`) perform blocking I/O to Redis, which can degrade performance under high concurrency.
- Blocking calls on thread pool threads (e.g., inside ASP.NET Core request handlers) can lead to thread pool starvation and increased latency.
- Chunked operations process data sequentially and block for the duration of all chunks.
Recommendations
- Prefer async methods (`WriteAsync`, `ReadAsync`, etc.) in concurrent scenarios to avoid blocking threads.
- Avoid concurrent access to the same key from multiple threads. If unavoidable, implement your own locking mechanism:

private readonly SemaphoreSlim _userLock = new(1, 1);

await _userLock.WaitAsync();
try
{
    await ctx.Users.WriteAsync("42", userData);
}
finally
{
    _userLock.Release();
}

- Consider disabling in-memory caching (`keepDataInMemory: false`) if you have heavy concurrent write traffic to the same keys.
- Use separate key instances per tenant/scope if possible to reduce contention.
- For high-concurrency scenarios, consider using Redis as the single source of truth and avoid relying on local caching.
What IS Thread-Safe
- StackExchange.Redis connections (`IConnectionMultiplexer`) are thread-safe and designed for concurrent use.
- Redis operations themselves are atomic at the Redis server level.
- Reading different keys/fields concurrently is generally safe, as each maintains separate cache entries.
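When many call sites may write the same field concurrently, a single shared semaphore serializes everything; a per-key lock map keeps unrelated keys independent. The `KeyLocks` helper below is a sketch of that pattern and is not part of the library:

```csharp
// Sketch: per-key locks so concurrent writers to the SAME field serialize,
// while writers to different fields proceed in parallel. Hypothetical helper.
using System.Collections.Concurrent;

public static class KeyLocks
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();

    public static SemaphoreSlim For(string key) =>
        Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
}

// Usage:
var gate = KeyLocks.For("Users:42");
await gate.WaitAsync();
try
{
    await ctx.Users.WriteAsync("42", userData);
}
finally
{
    gate.Release();
}
```

Note the locks are never removed from the dictionary in this sketch; for unbounded key spaces, add an eviction strategy.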
Summary
This library prioritizes simplicity and performance for typical CRUD scenarios. If your application requires heavy concurrent access to the same keys with in-memory caching enabled, you should implement application-level synchronization or disable caching for those keys.
Troubleshooting
- No pub/sub events? Ensure `channelName` was provided and the publisher uses the write connection.
- Seeing stale data? Verify the `keepDataInMemory` settings and that your subscribers invalidate caches.
- Timeouts on bulk writes? Lower `maxChunkSizeInBytes`.
- DB size returns 0? Some Redis providers disable commands (e.g., `DBSIZE`).
Versioning
- Target framework: .NET 9
- Redis client: StackExchange.Redis 2.7.x
License
MIT
Contributing
Issues and PRs are welcome.
TODO
Done ✅
- Chunked operations for large datasets ✅ Implemented `ReadInChunks`, `WriteInChunks`, `RemoveInChunks`
- Memory usage tracking ✅ Added `GetSize()` method for all key types
- Enhanced cache invalidation ✅ Single, bulk, and full invalidation methods
- TTL (Time-To-Live) support for `RedisPrefixedKeys<T>` ✅ All write methods support an optional expiry parameter
- Thread safety documentation ✅ Added comprehensive thread safety and concurrency section
Planned
- Auto-invalidate cache when you receive publish messages
- Custom DataWrapper options (e.g., include/exclude timestamps)
Dependencies (net9.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.0)
- Newtonsoft.Json (>= 13.0.3)
- StackExchange.Redis (>= 2.7.33)