Documentation
Overview ¶
Package obcache provides a high-performance, thread-safe, in-memory cache with TTL support, multiple eviction strategies (LRU/LFU/FIFO), function memoization, and hooks for observability.
obcache is designed for high-throughput applications requiring fast, reliable caching with comprehensive observability and flexible configuration options. It supports both direct cache operations and transparent function memoization through its Wrap functionality.
Key Features ¶
- Thread-safe concurrent access with minimal lock contention
- Time-to-live (TTL) expiration with automatic cleanup
- Multiple eviction strategies: LRU, LFU, and FIFO
- Function memoization with customizable key generation
- Context-aware hooks for monitoring cache operations
- Built-in statistics and performance monitoring
- Redis backend support for distributed caching
- Compression support for large values (gzip/deflate)
- Prometheus metrics integration
- Singleflight pattern to prevent cache stampedes
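The singleflight behavior in the last bullet can be sketched in plain Go: concurrent lookups for the same missing key are collapsed into one load. This is a conceptual illustration, not obcache's internal code; `group` and `call` are hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// group deduplicates concurrent loads of the same key: the first caller
// runs fn, and every later caller for that key waits and shares its result.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	wg  sync.WaitGroup
	val any
}

func (g *group) do(key string, fn func() any) any {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // another goroutine is already loading this key
		return c.val
	}
	c := &call{}
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only the first caller pays for the load
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

func main() {
	g := &group{calls: map[string]*call{}}
	release := make(chan struct{})
	var loads int32
	results := make(chan any, 10)

	for i := 0; i < 10; i++ {
		go func() {
			results <- g.do("user:123", func() any {
				atomic.AddInt32(&loads, 1)
				<-release // hold the load open while callers pile up
				return "loaded"
			})
		}()
		time.Sleep(5 * time.Millisecond) // let each caller reach do first
	}
	close(release)
	for i := 0; i < 10; i++ {
		<-results
	}
	fmt.Println("underlying loads:", atomic.LoadInt32(&loads))
}
```

All ten callers receive the same value while the underlying load runs only once, which is what prevents a stampede against the backing store.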
Basic Usage ¶
Create a cache and perform basic operations:
cache, err := obcache.New(obcache.NewDefaultConfig())
if err != nil {
    log.Fatal(err)
}

// Store a value with 1-hour TTL
err = cache.Set("user:123", userData, time.Hour)
if err != nil {
    log.Printf("Failed to set cache: %v", err)
}

// Retrieve a value
value, found := cache.Get("user:123")
if found {
    user := value.(UserData)
    fmt.Printf("Found user: %+v\n", user)
}

// Check statistics
stats := cache.Stats()
fmt.Printf("Hit rate: %.2f%%\n", stats.HitRate())
Function Memoization ¶
Cache expensive function calls automatically:
// Original expensive function
func fetchUser(userID int) (*User, error) {
    // Expensive database query
    return queryDatabase(userID)
}

// Wrap with caching
cache, _ := obcache.New(obcache.NewDefaultConfig())
cachedFetchUser := obcache.Wrap(cache, fetchUser, obcache.WithTTL(5*time.Minute))

// Use exactly like the original function - caching is transparent
user1, err := cachedFetchUser(123) // Database query
user2, err := cachedFetchUser(123) // Cache hit
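Under the hood, Wrap-style memoization amounts to a keyed lookup placed in front of the function. A minimal sketch with Go generics (conceptual only; `memoize` is a hypothetical stand-in, not obcache's implementation, which also handles TTLs and eviction):

```go
package main

import (
	"fmt"
	"sync"
)

// memoize returns a cached version of fn: the first call for a given
// argument invokes fn, subsequent calls return the stored result.
func memoize[T comparable, R any](fn func(T) (R, error)) func(T) (R, error) {
	var mu sync.Mutex
	cache := map[T]R{}
	return func(arg T) (R, error) {
		mu.Lock()
		if r, ok := cache[arg]; ok {
			mu.Unlock()
			return r, nil // cache hit
		}
		mu.Unlock()
		r, err := fn(arg)
		if err != nil {
			return r, err // errors are not cached in this sketch
		}
		mu.Lock()
		cache[arg] = r
		mu.Unlock()
		return r, nil
	}
}

var dbCalls int

func fetchUser(id int) (string, error) {
	dbCalls++ // stands in for an expensive database query
	return fmt.Sprintf("user-%d", id), nil
}

func main() {
	cached := memoize(fetchUser)
	u1, _ := cached(123) // invokes fetchUser
	u2, _ := cached(123) // served from the cache
	fmt.Println(u1, u2, "db calls:", dbCalls) // user-123 user-123 db calls: 1
}
```

The wrapped function keeps the original signature, which is what makes the caching transparent to callers.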
Configuration ¶
Customize cache behavior with fluent configuration:
config := obcache.NewDefaultConfig().
    WithMaxEntries(10000).
    WithDefaultTTL(30*time.Minute).
    WithCleanupInterval(5*time.Minute).
    WithEvictionType(eviction.LFU) // Use LFU instead of default LRU

cache, err := obcache.New(config)
Eviction Strategies ¶
Choose the eviction strategy that best fits your use case:
import "github.com/1mb-dev/obcache-go/v2/internal/eviction"

// LRU (Least Recently Used) - Default
// Evicts items that haven't been accessed recently
config := obcache.NewDefaultConfig().WithEvictionType(eviction.LRU)

// LFU (Least Frequently Used)
// Evicts items with the lowest access count
config := obcache.NewDefaultConfig().WithEvictionType(eviction.LFU)

// FIFO (First In, First Out)
// Evicts oldest items regardless of access patterns
config := obcache.NewDefaultConfig().WithEvictionType(eviction.FIFO)
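The strategies differ only in which entry is discarded once the cache is full. A minimal sketch of the default LRU policy using the standard library (conceptual; obcache's internal eviction code may differ):

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a fixed-capacity cache that evicts the least recently used key.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element in order
}

type pair struct {
	key string
	val any
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lru) Get(key string) (any, bool) {
	e, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(e) // touching a key makes it most recently used
	return e.Value.(pair).val, true
}

func (c *lru) Set(key string, val any) {
	if e, ok := c.items[key]; ok {
		e.Value = pair{key, val}
		c.order.MoveToFront(e)
		return
	}
	if c.order.Len() >= c.cap {
		oldest := c.order.Back() // least recently used entry
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(pair).key)
	}
	c.items[key] = c.order.PushFront(pair{key, val})
}

func main() {
	c := newLRU(2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a")    // "a" is now most recently used
	c.Set("c", 3) // capacity reached: evicts "b", not "a"
	_, okA := c.Get("a")
	_, okB := c.Get("b")
	fmt.Println("a present:", okA, "b present:", okB) // a present: true b present: false
}
```

An LFU variant would track an access counter per entry instead of recency order, and FIFO would simply drop the oldest insertion without updating order on Get.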
Context-Aware Hooks ¶
Monitor cache operations with context-aware hooks:
hooks := &obcache.Hooks{}

// Hook on cache hits
hooks.AddOnHit(func(ctx context.Context, key string, value any) {
    log.Printf("Cache hit: %s", key)
    metrics.IncrementCounter("cache.hits")
})

// Hook on cache misses
hooks.AddOnMiss(func(ctx context.Context, key string) {
    log.Printf("Cache miss: %s", key)
    metrics.IncrementCounter("cache.misses")
})

// Hook on evictions
hooks.AddOnEvict(func(ctx context.Context, key string, value any, reason obcache.EvictReason) {
    log.Printf("Evicted: %s (reason: %s)", key, reason)
})

// Hook on manual invalidations
hooks.AddOnInvalidate(func(ctx context.Context, key string) {
    log.Printf("Invalidated: %s", key)
})

cache, _ := obcache.New(obcache.NewDefaultConfig().WithHooks(hooks))
Context Propagation ¶
Use context-aware methods for timeouts and tracing:
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()

// Set with context
err := cache.SetContext(ctx, "key", "value", time.Hour)

// Get with context
value, found := cache.GetContext(ctx, "key")
Redis Backend ¶
Use Redis for distributed caching:
config := obcache.NewRedisConfig("localhost:6379").
    WithDefaultTTL(time.Hour)

// Customize Redis key prefix
config.Redis.KeyPrefix = "myapp:"

cache, err := obcache.New(config)
// All operations now use Redis instead of local memory
Compression ¶
Enable compression for large values:
import "github.com/1mb-dev/obcache-go/v2/pkg/compression"

config := obcache.NewDefaultConfig().
    WithCompression(&compression.Config{
        Enabled:   true,
        Algorithm: compression.CompressorGzip,
        MinSize:   1024, // Only compress values > 1KB
    })

cache, err := obcache.New(config)
Metrics Integration ¶
Export metrics to Prometheus:
import (
    "github.com/1mb-dev/obcache-go/v2/pkg/metrics"
    "github.com/prometheus/client_golang/prometheus"
)

// Create Prometheus exporter
promConfig := &metrics.PrometheusConfig{
    Registry: prometheus.DefaultRegisterer,
}
metricsConfig := metrics.NewDefaultConfig()
exporter, _ := metrics.NewPrometheusExporter(metricsConfig, promConfig)

// Configure cache with metrics
config := obcache.NewDefaultConfig().
    WithMetrics(&obcache.MetricsConfig{
        Exporter:  exporter,
        Enabled:   true,
        CacheName: "user-cache",
    })

cache, _ := obcache.New(config)
You can also implement custom exporters by implementing the metrics.Exporter interface.
Performance Considerations ¶
- Use appropriate cache sizes based on available memory
- Set reasonable TTL values to balance freshness with performance
- Consider using Redis backend for multi-instance deployments
- Enable compression for large values to reduce memory usage
- Use hooks judiciously to avoid performance overhead
- Monitor hit rates and adjust cache policies accordingly
- Choose eviction strategy based on access patterns:
- LRU: Good for temporal locality (recently used data)
- LFU: Good for frequency patterns (popular items)
- FIFO: Simple, predictable, good for time-series data
Thread Safety ¶
All cache operations are thread-safe and can be called concurrently from multiple goroutines without additional synchronization. The cache uses fine-grained locking and atomic operations to minimize contention.
Error Handling ¶
The cache is designed to degrade gracefully:
- Set operations may fail due to capacity or backend issues
- Get operations never fail - they return (nil, false) for missing/error cases
- Hook execution errors are logged but don't affect cache operations
- Backend connectivity issues fall back to cache misses where possible
Best Practices ¶
- Use meaningful cache keys with consistent naming patterns
- Set appropriate TTL values based on data freshness requirements
- Monitor cache performance using built-in statistics
- Use function wrapping for transparent caching of expensive operations
- Implement proper error handling for critical cache operations
- Use hooks for observability and debugging, not business logic
- Test cache behavior under various load conditions
- Choose the right eviction strategy for your access patterns
Examples ¶
See the examples directory for complete, runnable examples including:
- Basic cache usage patterns
- Redis integration
- Prometheus metrics collection
- Compression usage
- Web framework integration (Gin)
For more detailed documentation and examples, visit: https://github.com/1mb-dev/obcache-go/v2
Index ¶
- Constants
- func DefaultKeyFunc(args []any) string
- func SimpleKeyFunc(args []any) string
- func ValidateWrappableFunction(fn any) error
- func Wrap[T any](cache *Cache, fn T, options ...WrapOption) T
- func WrapFunc0[R any](cache *Cache, fn func() R, options ...WrapOption) func() R
- func WrapFunc0WithError[R any](cache *Cache, fn func() (R, error), options ...WrapOption) func() (R, error)
- func WrapFunc1[T any, R any](cache *Cache, fn func(T) R, options ...WrapOption) func(T) R
- func WrapFunc1WithError[T, R any](cache *Cache, fn func(T) (R, error), options ...WrapOption) func(T) (R, error)
- func WrapFunc2[T1, T2, R any](cache *Cache, fn func(T1, T2) R, options ...WrapOption) func(T1, T2) R
- func WrapFunc2WithError[T1, T2, R any](cache *Cache, fn func(T1, T2) (R, error), options ...WrapOption) func(T1, T2) (R, error)
- func WrapSimple[T any, R any](cache *Cache, fn func(T) R, options ...WrapOption) func(T) R
- func WrapWithError[T any, R any](cache *Cache, fn func(T) (R, error), options ...WrapOption) func(T) (R, error)
- type Cache
- func (c *Cache) Cleanup() int
- func (c *Cache) Clear() error
- func (c *Cache) Close() error
- func (c *Cache) DebugHandler() http.Handler
- func (c *Cache) Delete(key string) error
- func (c *Cache) Get(key string) (any, bool)
- func (c *Cache) GetContext(ctx context.Context, key string) (any, bool)
- func (c *Cache) Has(key string) bool
- func (c *Cache) Keys() []string
- func (c *Cache) Len() int
- func (c *Cache) NewDebugServer(addr string) *http.Server
- func (c *Cache) Put(key string, value any) error
- func (c *Cache) Set(key string, value any, ttl time.Duration) error
- func (c *Cache) SetContext(ctx context.Context, key string, value any, ttl time.Duration) error
- func (c *Cache) Stats() *Stats
- func (c *Cache) TTL(key string) (time.Duration, bool)
- type Config
- func (c *Config) WithCleanupInterval(interval time.Duration) *Config
- func (c *Config) WithCompression(compressionConfig *compression.Config) *Config
- func (c *Config) WithDefaultTTL(ttl time.Duration) *Config
- func (c *Config) WithEvictionType(evictionType eviction.EvictionType) *Config
- func (c *Config) WithHooks(hooks *Hooks) *Config
- func (c *Config) WithKeyGenFunc(fn KeyGenFunc) *Config
- func (c *Config) WithMaxEntries(maxEntries int) *Config
- func (c *Config) WithMetrics(metricsConfig *MetricsConfig) *Config
- func (c *Config) WithRedis(redisConfig *RedisConfig) *Config
- type DebugConfig
- type DebugKey
- type DebugResponse
- type DebugStats
- type EvictReason
- type Hook
- type HookOption
- type Hooks
- func (h *Hooks) AddOnEvict(fn func(ctx context.Context, key string, value any, reason EvictReason), ...)
- func (h *Hooks) AddOnHit(fn func(ctx context.Context, key string, value any), opts ...HookOption)
- func (h *Hooks) AddOnInvalidate(fn func(ctx context.Context, key string), opts ...HookOption)
- func (h *Hooks) AddOnMiss(fn func(ctx context.Context, key string), opts ...HookOption)
- type KeyGenFunc
- type MetricsConfig
- type RedisConfig
- type Stats
- type StoreType
- type WrapOption
- type WrapOptions
Constants ¶
const (
    // TestTTL is the standard TTL used in test cases
    TestTTL = time.Hour
    // TestShortTTL is used for tests that need quick expiration
    TestShortTTL = 10 * time.Millisecond
    // TestSlowOperation simulates slow operations in benchmarks
    TestSlowOperation = 100 * time.Millisecond
    // TestMetricsReportInterval for fast metrics reporting in tests
    TestMetricsReportInterval = 30 * time.Millisecond
    // ExampleTTL for documentation examples
    ExampleTTL = 30 * time.Minute
    // ExampleShortTTL for quick examples
    ExampleShortTTL = 10 * time.Minute
)
Test and example constants, defined once for consistent usage across tests and documentation examples.
Variables ¶
This section is empty.
Functions ¶
func DefaultKeyFunc ¶
DefaultKeyFunc generates cache keys from function arguments using a hash-based approach. It handles most common Go types and provides stable key generation.
func SimpleKeyFunc ¶
SimpleKeyFunc generates simple cache keys by joining string representations. This is faster but may produce collisions for complex types.
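The collision trade-off can be illustrated: naive string joining loses argument boundaries, while hashing a structured encoding preserves them (`simpleKey` and `hashKey` are illustrative sketches, not the library's exact algorithms):

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"strings"
)

// simpleKey joins argument strings: fast, but ("ab","c") and ("a","bc") collide.
func simpleKey(args []any) string {
	parts := make([]string, len(args))
	for i, a := range args {
		parts[i] = fmt.Sprint(a)
	}
	return strings.Join(parts, "")
}

// hashKey hashes a JSON encoding, which preserves argument boundaries.
func hashKey(args []any) string {
	b, _ := json.Marshal(args)
	sum := sha256.Sum256(b)
	return fmt.Sprintf("%x", sum[:8])
}

func main() {
	a := []any{"ab", "c"}
	b := []any{"a", "bc"}
	fmt.Println("simple collides:", simpleKey(a) == simpleKey(b)) // true
	fmt.Println("hash collides:", hashKey(a) == hashKey(b))       // false
}
```

A separator between joined parts reduces but does not eliminate collisions (the separator can appear inside an argument), which is why a structure-preserving encoding plus a hash is the safer default.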
func ValidateWrappableFunction ¶
ValidateWrappableFunction checks if a function can be wrapped. This is useful for providing better error messages at runtime.
func Wrap ¶
func Wrap[T any](cache *Cache, fn T, options ...WrapOption) T
Wrap wraps any function with caching using Go generics. T must be a function type.
func WrapFunc0 ¶
func WrapFunc0[R any](cache *Cache, fn func() R, options ...WrapOption) func() R
WrapFunc0 wraps a function with no arguments
func WrapFunc0WithError ¶
func WrapFunc0WithError[R any](cache *Cache, fn func() (R, error), options ...WrapOption) func() (R, error)
WrapFunc0WithError wraps a function with no arguments that returns an error
func WrapFunc1 ¶
func WrapFunc1[T any, R any](cache *Cache, fn func(T) R, options ...WrapOption) func(T) R
WrapFunc1 wraps a function with one argument
func WrapFunc1WithError ¶
func WrapFunc1WithError[T, R any](cache *Cache, fn func(T) (R, error), options ...WrapOption) func(T) (R, error)
WrapFunc1WithError wraps a function with one argument that returns an error
func WrapFunc2 ¶
func WrapFunc2[T1, T2, R any](cache *Cache, fn func(T1, T2) R, options ...WrapOption) func(T1, T2) R
WrapFunc2 wraps a function with two arguments
func WrapFunc2WithError ¶
func WrapFunc2WithError[T1, T2, R any](cache *Cache, fn func(T1, T2) (R, error), options ...WrapOption) func(T1, T2) (R, error)
WrapFunc2WithError wraps a function with two arguments that returns an error
func WrapSimple ¶
func WrapSimple[T any, R any](cache *Cache, fn func(T) R, options ...WrapOption) func(T) R
WrapSimple is a convenience function for wrapping simple functions without error returns. It is a specialized version that's easier to use for simple cases.
func WrapWithError ¶
func WrapWithError[T any, R any](cache *Cache, fn func(T) (R, error), options ...WrapOption) func(T) (R, error)
WrapWithError is a convenience function for wrapping functions that return (T, error)
Types ¶
type Cache ¶
type Cache struct {
// contains filtered or unexported fields
}
Cache is the main cache implementation with LRU and TTL support
func NewSimple ¶
NewSimple creates a simple cache with minimal configuration. This is well suited to most use cases where you just need basic caching.
func (*Cache) DebugHandler ¶
DebugHandler returns an HTTP handler that provides cache debug information. The handler supports the following endpoints:
- GET /stats - Returns only cache statistics (no keys)
- GET /keys - Returns statistics and all cache keys with metadata
- GET / - Returns statistics and all cache keys with metadata (same as /keys)
func (*Cache) Get ¶
Get retrieves a value from the cache by key. For context-aware operations, use GetContext instead.
func (*Cache) GetContext ¶
GetContext retrieves a value from the cache by key with context support. The context can be used for cancellation, timeouts, and trace propagation.
func (*Cache) NewDebugServer ¶
NewDebugServer creates a new HTTP server with cache debug endpoints. The server serves the following routes:
- GET /stats - Cache statistics only
- GET /keys - Cache statistics and keys
- GET / - Cache statistics and keys (default)
func (*Cache) Set ¶
Set stores a value in the cache with the specified key and TTL. For context-aware operations, use SetContext instead.
func (*Cache) SetContext ¶
SetContext stores a value in the cache with context support. The context can be used for cancellation, timeouts, and trace propagation.
type Config ¶
type Config struct {
// StoreType determines which backend store to use
// Default: StoreTypeMemory
StoreType StoreType
// MaxEntries sets the maximum number of entries in the cache (LRU)
// Only applies to memory store
// Default: 1000
MaxEntries int
// DefaultTTL sets the default time-to-live for cache entries
// Default: 5 minutes
DefaultTTL time.Duration
// CleanupInterval sets how often expired entries are cleaned up
// Only applies to memory store (Redis handles TTL automatically)
// Default: 1 minute
CleanupInterval time.Duration
// EvictionType sets the eviction strategy for memory store
// Only applies to memory store
// Default: LRU
EvictionType eviction.EvictionType
// KeyGenFunc defines a custom key generation function
// If nil, DefaultKeyFunc will be used
KeyGenFunc KeyGenFunc
// Hooks defines event callbacks for cache operations
Hooks *Hooks
// Redis holds Redis-specific configuration
// Only used when StoreType is StoreTypeRedis
Redis *RedisConfig
// Metrics holds metrics exporter configuration
// If nil, no metrics will be exported
Metrics *MetricsConfig
// Compression holds compression configuration
// If nil, compression will be disabled
Compression *compression.Config
}
Config defines the configuration options for a Cache instance
func NewDefaultConfig ¶
func NewDefaultConfig() *Config
NewDefaultConfig returns a Config with sensible defaults for memory storage
func NewRedisConfig ¶
NewRedisConfig returns a Config configured for Redis storage
func NewRedisConfigWithClient ¶
NewRedisConfigWithClient returns a Config configured for Redis with a pre-configured client
func NewSimpleConfig ¶
NewSimpleConfig returns a Config optimized for simple key-value caching, requiring minimal configuration for most use cases.
func (*Config) WithCleanupInterval ¶
WithCleanupInterval sets the cleanup interval for expired entries
func (*Config) WithCompression ¶
func (c *Config) WithCompression(compressionConfig *compression.Config) *Config
WithCompression configures cache compression
func (*Config) WithDefaultTTL ¶
WithDefaultTTL sets the default TTL for cache entries
func (*Config) WithEvictionType ¶
func (c *Config) WithEvictionType(evictionType eviction.EvictionType) *Config
WithEvictionType sets the eviction strategy for memory store
func (*Config) WithKeyGenFunc ¶
func (c *Config) WithKeyGenFunc(fn KeyGenFunc) *Config
WithKeyGenFunc sets a custom key generation function
func (*Config) WithMaxEntries ¶
WithMaxEntries sets the maximum number of cache entries
func (*Config) WithMetrics ¶
func (c *Config) WithMetrics(metricsConfig *MetricsConfig) *Config
WithMetrics configures cache metrics export
func (*Config) WithRedis ¶
func (c *Config) WithRedis(redisConfig *RedisConfig) *Config
WithRedis configures the cache to use Redis storage
type DebugConfig ¶
type DebugConfig struct {
MaxEntries int `json:"maxEntries"`
DefaultTTL time.Duration `json:"defaultTTL"`
CleanupInterval time.Duration `json:"cleanupInterval"`
}
DebugConfig represents cache configuration in the debug response
type DebugKey ¶
type DebugKey struct {
Key string `json:"key"`
Value any `json:"value,omitempty"`
ExpiresAt *time.Time `json:"expiresAt,omitempty"`
CreatedAt time.Time `json:"createdAt"`
Age string `json:"age"`
TTL string `json:"ttl,omitempty"`
}
DebugKey represents a cache key with its metadata
type DebugResponse ¶
type DebugResponse struct {
Stats *DebugStats `json:"stats"`
Keys []DebugKey `json:"keys,omitempty"`
}
DebugResponse represents the JSON response structure for debug endpoints
type DebugStats ¶
type DebugStats struct {
Hits int64 `json:"hits"`
Misses int64 `json:"misses"`
Evictions int64 `json:"evictions"`
Invalidations int64 `json:"invalidations"`
KeyCount int64 `json:"keyCount"`
InFlight int64 `json:"inFlight"`
HitRate float64 `json:"hitRate"`
Total int64 `json:"total"`
Config *DebugConfig `json:"config"`
}
DebugStats represents cache statistics in the debug response
type EvictReason ¶
type EvictReason int
EvictReason indicates why a cache entry was evicted
const (
    // EvictReasonLRU indicates the entry was evicted due to LRU policy
    EvictReasonLRU EvictReason = iota
    // EvictReasonTTL indicates the entry was evicted due to TTL expiration
    EvictReasonTTL
    // EvictReasonCapacity indicates the entry was evicted due to capacity limits
    EvictReasonCapacity
)
func (EvictReason) String ¶
func (r EvictReason) String() string
type Hook ¶
type Hook struct {
// Priority determines execution order (higher values execute first)
// Default: 0 (execution order not guaranteed for hooks with same priority)
Priority int
// Condition optionally filters hook execution
// If nil, hook always executes
// If returns false, hook is skipped
Condition func(ctx context.Context, key string) bool
// Handler is the actual hook function
// Set exactly one of: OnHit, OnMiss, OnEvict, OnInvalidate
OnHit func(ctx context.Context, key string, value any)
OnMiss func(ctx context.Context, key string)
OnEvict func(ctx context.Context, key string, value any, reason EvictReason)
OnInvalidate func(ctx context.Context, key string)
}
Hook defines a cache operation hook with optional priority and condition
type HookOption ¶
type HookOption func(*Hook)
HookOption configures a hook
func WithCondition ¶
func WithCondition(condition func(ctx context.Context, key string) bool) HookOption
WithCondition sets a condition that must be true for the hook to execute
func WithPriority ¶
func WithPriority(priority int) HookOption
WithPriority sets the hook execution priority (higher values execute first)
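Priority and condition compose straightforwardly: run hooks in descending priority order and skip any whose condition rejects the key. A minimal dispatcher sketch (conceptual, not obcache's internals):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type hook struct {
	priority  int
	condition func(key string) bool // nil means "always run"
	onHit     func(key string)
}

// dispatch runs hooks in descending priority order, skipping those
// whose condition rejects the key.
func dispatch(hooks []hook, key string) {
	sort.SliceStable(hooks, func(i, j int) bool {
		return hooks[i].priority > hooks[j].priority // higher priority first
	})
	for _, h := range hooks {
		if h.condition != nil && !h.condition(key) {
			continue
		}
		h.onHit(key)
	}
}

func main() {
	hooks := []hook{
		{priority: 0, onHit: func(k string) { fmt.Println("log:", k) }},
		{priority: 10, onHit: func(k string) { fmt.Println("metrics:", k) }},
		{
			priority:  5,
			condition: func(k string) bool { return strings.HasPrefix(k, "user:") },
			onHit:     func(k string) { fmt.Println("audit:", k) },
		},
	}
	dispatch(hooks, "session:42") // audit hook is skipped by its condition
}
```

A stable sort keeps registration order for hooks that share a priority, which matches the documented "execution order not guaranteed for hooks with same priority" caveat only loosely; relying on registration order at equal priority is best avoided.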
type Hooks ¶
type Hooks struct {
// contains filtered or unexported fields
}
Hooks contains all registered cache event hooks
func (*Hooks) AddOnEvict ¶
func (h *Hooks) AddOnEvict(fn func(ctx context.Context, key string, value any, reason EvictReason), opts ...HookOption)
AddOnEvict registers a hook that executes when entries are evicted
func (*Hooks) AddOnInvalidate ¶
func (h *Hooks) AddOnInvalidate(fn func(ctx context.Context, key string), opts ...HookOption)
AddOnInvalidate registers a hook that executes when entries are invalidated
type KeyGenFunc ¶
KeyGenFunc defines a function that generates cache keys from function arguments
type MetricsConfig ¶
type MetricsConfig struct {
// Exporter is the metrics exporter to use
Exporter metrics.Exporter
// Enabled determines whether metrics collection is enabled
Enabled bool
// CacheName is the name label applied to all metrics for this cache instance
CacheName string
// ReportingInterval determines how often to export stats automatically
// Set to 0 to disable automatic reporting
ReportingInterval time.Duration
// Labels are additional labels applied to all metrics
Labels metrics.Labels
}
MetricsConfig holds metrics exporter configuration
type RedisConfig ¶
type RedisConfig struct {
// Client is a pre-configured Redis client
// If nil, a new client will be created using Addr, Password, DB
Client redis.Cmdable
// Addr is the Redis server address (host:port)
// Only used if Client is nil
Addr string
// Password for Redis authentication
// Only used if Client is nil
Password string
// DB is the Redis database number to use
// Only used if Client is nil
DB int
// KeyPrefix is prepended to all cache keys
// Default: "obcache:"
KeyPrefix string
}
RedisConfig holds Redis-specific configuration
type Stats ¶
type Stats struct {
// contains filtered or unexported fields
}
Stats holds cache performance statistics
func (*Stats) Invalidations ¶
Invalidations returns the number of manually invalidated entries
type WrapOption ¶
type WrapOption func(*WrapOptions)
WrapOption is a function that configures WrapOptions
func WithErrorCaching ¶
func WithErrorCaching() WrapOption
WithErrorCaching enables caching of errors with the same TTL as successful results
func WithErrorTTL ¶
func WithErrorTTL(ttl time.Duration) WrapOption
WithErrorTTL enables error caching with a specific TTL
func WithKeyFunc ¶
func WithKeyFunc(keyFunc KeyGenFunc) WrapOption
WithKeyFunc sets a custom key generation function for the wrapped function
func WithTTL ¶
func WithTTL(ttl time.Duration) WrapOption
WithTTL sets a custom TTL for the wrapped function
func WithoutCache ¶
func WithoutCache() WrapOption
WithoutCache disables caching for the wrapped function
type WrapOptions ¶
type WrapOptions struct {
// TTL overrides the default TTL for this wrapped function
TTL time.Duration
// KeyFunc overrides the default key generation function
KeyFunc KeyGenFunc
// DisableCache disables caching for this function (useful for testing)
DisableCache bool
// CacheErrors controls whether errors should be cached
// When true, errors will be cached with the specified ErrorTTL
CacheErrors bool
// ErrorTTL is the TTL for cached errors (defaults to TTL if not set)
ErrorTTL time.Duration
}
WrapOptions holds configuration options for function wrapping