# Caching
SOM provides optional request-scoped caching for repository Read operations. The cache is completely opt-in and requires explicit cleanup.
## Overview
* Request-scoped caching for `Read` operations
* Two modes: lazy (on-demand) and eager (pre-load all)
* Thread-safe via `sync.RWMutex`
* Each model type has an isolated cache
* The cleanup function must be called when done (typically via `defer`)
* No automatic invalidation on `Update`/`Delete`
## API Reference
### WithCache
Creates a cache for the specified model type. Returns a context with caching enabled and a cleanup function that must be called.
```go
func WithCache[T Model](ctx context.Context, opts ...CacheOption) (context.Context, func())
```

The cleanup function removes the cache from the global store and marks it as cleaned. After cleanup, any `Read` using that context returns `ErrCacheAlreadyCleaned`.
### Options

Configure cache behavior using functional options. Per the rest of this document, the configurable behaviors are the cache mode (lazy or eager), the eager-mode record limit `MaxSize` (default 1000), and an optional `TTL` for expiry.
### Errors

Two sentinel errors can be returned by cached reads:

* `ErrCacheAlreadyCleaned`: the context's cache has already been cleaned up
* `ErrCacheSizeLimitExceeded`: in eager mode, the record count exceeds `MaxSize`
## Usage Examples
### Basic Lazy Cache
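The original example was not preserved; the following is a minimal sketch of the default lazy mode. The `som` package name and the `model.User` / `client.User().Read` identifiers are illustrative assumptions, not confirmed API:

```go
// Names below (som, model.User, client.User().Read) are illustrative.
ctx, cleanup := som.WithCache[model.User](ctx) // lazy mode is the default
defer cleanup()                                // required: see Cleanup Lifecycle

user, exists, err := som.Client.User().Read(ctx, id) // first read queries the database
// ...
same, _, _ := som.Client.User().Read(ctx, id) // same ID: served from cache (same pointer)
```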
### Explicit Lazy Mode
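A sketch of requesting lazy mode explicitly, assuming a functional option along the lines of `som.CacheLazy()` (a hypothetical name; check the actual option list):

```go
// som.CacheLazy() is a hypothetical option name making the default explicit.
ctx, cleanup := som.WithCache[model.User](ctx, som.CacheLazy())
defer cleanup()
```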
### Eager Cache
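A sketch of eager mode, assuming a hypothetical `som.CacheEager()` option:

```go
// som.CacheEager() is a hypothetical option name.
ctx, cleanup := som.WithCache[model.User](ctx, som.CacheEager())
defer cleanup()

// The first Read loads ALL User records into the cache;
// later reads never query the database (misses return (nil, false, nil)).
user, exists, err := som.Client.User().Read(ctx, id)
```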
### Eager Cache with MaxSize
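A sketch of raising the eager-mode size limit, assuming hypothetical `som.CacheEager()` and `som.CacheMaxSize()` option names:

```go
// Option names are hypothetical.
ctx, cleanup := som.WithCache[model.User](ctx,
	som.CacheEager(),
	som.CacheMaxSize(5000), // raise the default limit of 1000
)
defer cleanup()

_, _, err := som.Client.User().Read(ctx, id)
if errors.Is(err, som.ErrCacheSizeLimitExceeded) {
	// the table holds more than 5000 records; the eager load was refused
}
```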
### Cache with TTL (Lazy Mode)
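A sketch of a lazy cache with expiry, assuming a hypothetical `som.CacheTTL()` option:

```go
// som.CacheTTL() is a hypothetical option name.
ctx, cleanup := som.WithCache[model.User](ctx, som.CacheTTL(30*time.Second))
defer cleanup()

// Entries older than the TTL are re-fetched from the database on next access.
user, exists, err := som.Client.User().Read(ctx, id)
```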
### Eager Cache with TTL (Auto-Refresh)
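A sketch combining eager mode with a TTL, again using hypothetical option names:

```go
// Option names are hypothetical.
ctx, cleanup := som.WithCache[model.User](ctx,
	som.CacheEager(),
	som.CacheTTL(time.Minute),
)
defer cleanup()

// After the TTL elapses, the next Read reloads the whole table
// (MaxSize is re-checked and may yield ErrCacheSizeLimitExceeded).
user, exists, err := som.Client.User().Read(ctx, id)
```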
### Multiple Models
Caches are isolated per model type:
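A sketch of stacking caches for two model types on one context (identifiers are illustrative):

```go
// model.User and model.Group are illustrative model names.
ctx, cleanupUsers := som.WithCache[model.User](ctx)
defer cleanupUsers()

ctx, cleanupGroups := som.WithCache[model.Group](ctx)
defer cleanupGroups()

// User reads use the User cache; Group reads use the Group cache.
```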
### Creating New Cache After Cleanup
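A sketch of starting over after cleanup. Because the cleaned-up context permanently returns `ErrCacheAlreadyCleaned`, a fresh cache should be derived from the base context (identifiers illustrative):

```go
ctx1, cleanup1 := som.WithCache[model.User](baseCtx)
// ... use ctx1 ...
cleanup1()

// ctx1 is now unusable for cached reads; derive a new cache from the base context.
ctx2, cleanup2 := som.WithCache[model.User](baseCtx)
defer cleanup2()
```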
### Error After Cleanup
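A sketch of the error a read produces once cleanup has run (identifiers illustrative):

```go
ctx, cleanup := som.WithCache[model.User](ctx)
cleanup() // cleaned up immediately, for demonstration

_, _, err := som.Client.User().Read(ctx, id)
// errors.Is(err, som.ErrCacheAlreadyCleaned) == true
```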
## Behavior Details
### Lazy Mode (Default)
* The first `Read()` for an ID queries the database and stores the result in the cache
* Subsequent `Read()` calls for the same ID return the cached pointer
* Cache misses always query the database
* Records created or updated after cache creation are fetched and cached on first access
### Eager Mode
* The first `Read()` call loads ALL records from the table into the cache
* Before loading, the record count is checked against `MaxSize` (default 1000)
* If the count exceeds `MaxSize`, `ErrCacheSizeLimitExceeded` is returned
* After the initial load, all reads only check the cache
* Cache misses return `(nil, false, nil)` without querying the database
* Records created after the cache load are not visible through the cached context
* With TTL enabled, the entire cache expires and is refreshed automatically on the next access
* During a TTL refresh, `MaxSize` is re-checked; `ErrCacheSizeLimitExceeded` is returned if exceeded
### Cleanup Lifecycle
* `WithCache` generates a unique cache ID and stores it in the context
* The actual cache data is stored in a global map keyed by this ID
* Calling cleanup removes the cache from the global map and marks the ID as dropped
* Any subsequent `Read` using that context checks whether the ID is dropped and, if so, returns `ErrCacheAlreadyCleaned`
## When to Use
### Lazy Cache
Best for:
* Request handlers that may read the same record multiple times
* Graph traversals that might revisit nodes
* Operations where you don't know which records will be accessed
### Eager Cache
Best for:
* Batch operations that need most or all records from a table
* Small reference tables (roles, categories, settings)
* Reports or exports that iterate over all records
### When Not to Use
* Long-running processes (the cache may become stale)
* Write-heavy operations (the cache doesn't auto-invalidate)
* Tables with many records when eager caching (memory concerns)
## Important Notes
* **Cleanup is Required**: Always call the cleanup function, typically via `defer`. Failing to do so leaves stale entries in the global cache store.
* **No Automatic Invalidation**: If you `Update` or `Delete` a record, the cache is not automatically updated. Create a new cache after writes to see fresh data.
* **Per-Model Isolation**: Each model type has its own cache. Caching `Group` records has no effect on `User` reads.
* **Context Carries ID, Not Data**: The context stores a cache ID and options. The actual cache data lives in a global map, enabling cleanup from anywhere.
* **Thread Safety**: The cache uses `sync.RWMutex` and is safe for concurrent reads and writes.
* **Pointer Identity**: Repeated reads of the same ID return the same pointer, which can be useful for equality checks but means mutations affect all references.