
Caching

SOM provides optional request-scoped caching for repository Read operations. The cache is completely opt-in and requires explicit cleanup.

Overview

  • Request-scoped caching for Read operations

  • Two modes: lazy (on-demand) and eager (pre-load all)

  • Thread-safe via sync.RWMutex

  • Each model type has isolated cache

  • Cleanup function must be called when done (typically via defer)

  • No automatic invalidation on Update/Delete

API Reference

WithCache

Creates a cache for the specified model type. Returns a context with caching enabled and a cleanup function that must be called.

func WithCache[T Model](ctx context.Context, opts ...CacheOption) (context.Context, func())

The cleanup function removes the cache from the global store and marks it as cleaned. After cleanup, any Read using that context returns ErrCacheAlreadyCleaned.

Options

Configure cache behavior using functional options:
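The signature of WithCache implies the common Go functional-options pattern. The sketch below illustrates that pattern with stdlib code only; the option names (CacheModeEager, CacheMaxSize, CacheTTL) and the config fields are illustrative assumptions, not som's actual identifiers — only the MaxSize default of 1000 is taken from this document:

```go
package main

import (
	"fmt"
	"time"
)

// cacheConfig holds the settings an option can modify.
// Field names are assumptions for illustration; the documented
// default for the eager-mode size limit is 1000.
type cacheConfig struct {
	eager   bool
	maxSize int
	ttl     time.Duration
}

// CacheOption mutates the config, following the functional-options pattern.
type CacheOption func(*cacheConfig)

// CacheModeEager (hypothetical name) switches from lazy to eager loading.
func CacheModeEager() CacheOption {
	return func(c *cacheConfig) { c.eager = true }
}

// CacheMaxSize (hypothetical name) overrides the eager-mode size limit.
func CacheMaxSize(n int) CacheOption {
	return func(c *cacheConfig) { c.maxSize = n }
}

// CacheTTL (hypothetical name) sets an expiry for cached entries.
func CacheTTL(d time.Duration) CacheOption {
	return func(c *cacheConfig) { c.ttl = d }
}

// newCacheConfig applies the options over the defaults.
func newCacheConfig(opts ...CacheOption) cacheConfig {
	cfg := cacheConfig{maxSize: 1000} // documented default
	for _, opt := range opts {
		opt(&cfg)
	}
	return cfg
}

func main() {
	cfg := newCacheConfig(CacheModeEager(), CacheMaxSize(500), CacheTTL(time.Minute))
	fmt.Println(cfg.eager, cfg.maxSize, cfg.ttl) // prints "true 500 1m0s"
}
```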

Errors

The cache can return two sentinel errors:

  • ErrCacheAlreadyCleaned — returned by Read when the cleanup function for that context has already been called

  • ErrCacheSizeLimitExceeded — returned in eager mode when the record count exceeds MaxSize

Usage Examples

Basic Lazy Cache

Explicit Lazy Mode

Eager Cache

Eager Cache with MaxSize

Cache with TTL (Lazy Mode)

Eager Cache with TTL (Auto-Refresh)

Multiple Models

Caches are isolated per model type:
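Per-type isolation can be sketched as a registry keyed by the concrete model type, so each type gets its own map. Everything below is a stdlib stand-in for illustration, not som's implementation:

```go
package main

import (
	"fmt"
	"reflect"
	"sync"
)

// caches maps a concrete model type to its own cache instance,
// so caching one model type never touches another.
var (
	mu     sync.Mutex
	caches = map[reflect.Type]map[string]any{}
)

// cacheFor returns the cache that belongs to model type T,
// creating it on first use.
func cacheFor[T any]() map[string]any {
	t := reflect.TypeOf((*T)(nil)).Elem()
	mu.Lock()
	defer mu.Unlock()
	c, ok := caches[t]
	if !ok {
		c = map[string]any{}
		caches[t] = c
	}
	return c
}

type User struct{ Name string }
type Group struct{ Name string }

func main() {
	cacheFor[User]()["u1"] = &User{Name: "alice"}
	// The Group cache is a separate map: caching Group records
	// has no effect on User reads, and vice versa.
	fmt.Println(len(cacheFor[User]()), len(cacheFor[Group]())) // prints "1 0"
}
```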

Creating New Cache After Cleanup

Error After Cleanup

Behavior Details

Lazy Mode (Default)

  1. First Read() for an ID queries the database and stores the result in the cache

  2. Subsequent Read() calls for the same ID return the cached pointer

  3. Cache misses always query the database

  4. Records created/updated after cache creation are fetched and cached on first access
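The lazy behavior described above can be sketched with a read-through map guarded by a mutex. This is a stdlib illustration of the documented semantics (the query counter is added only to make the single round-trip visible), not som's code:

```go
package main

import (
	"fmt"
	"sync"
)

type User struct {
	ID   string
	Name string
}

// lazyCache fills itself one record at a time as IDs are read.
type lazyCache struct {
	mu      sync.RWMutex
	entries map[string]*User
	queries int // counts simulated database round-trips, for demonstration
}

// Read returns the cached pointer if present; on a miss it
// "queries the database" (simulated here) and caches the result.
func (c *lazyCache) Read(id string) (*User, bool) {
	c.mu.RLock()
	u, ok := c.entries[id]
	c.mu.RUnlock()
	if ok {
		return u, true
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if u, ok := c.entries[id]; ok { // re-check after upgrading the lock
		return u, true
	}
	c.queries++
	u = &User{ID: id, Name: "user-" + id} // stand-in for a DB query
	c.entries[id] = u
	return u, true
}

func main() {
	c := &lazyCache{entries: map[string]*User{}}
	a, _ := c.Read("1")
	b, _ := c.Read("1")            // served from cache: no second query
	fmt.Println(a == b, c.queries) // prints "true 1"
}
```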

Eager Mode

  1. First Read() call loads ALL records from the table into the cache

  2. Before loading, checks record count against MaxSize (default 1000)

  3. If count exceeds MaxSize, returns ErrCacheSizeLimitExceeded

  4. After initial load, all reads only check the cache

  5. Cache misses return (nil, false, nil) without querying the database

  6. Records created after cache load are not visible through the cached context

  7. With TTL enabled, the entire cache expires and refreshes automatically on next access

  8. During TTL refresh, MaxSize is re-checked; returns ErrCacheSizeLimitExceeded if exceeded
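Steps 1–5 above can be sketched as a load-once cache that enforces the size limit before loading and never queries after that. A stdlib illustration of the documented semantics (the slice stands in for the database table), not som's code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrCacheSizeLimitExceeded = errors.New("cache size limit exceeded")

type User struct {
	ID   string
	Name string
}

// eagerCache loads the whole table on first Read and never queries again.
type eagerCache struct {
	mu      sync.Mutex
	maxSize int
	loaded  bool
	entries map[string]*User
	table   []*User // stand-in for the database table
}

// Read loads all records on first use (checking MaxSize first);
// afterwards a miss returns (nil, false, nil) without hitting the database.
func (c *eagerCache) Read(id string) (*User, bool, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.loaded {
		if len(c.table) > c.maxSize {
			return nil, false, ErrCacheSizeLimitExceeded
		}
		c.entries = make(map[string]*User, len(c.table))
		for _, u := range c.table {
			c.entries[u.ID] = u
		}
		c.loaded = true
	}
	u, ok := c.entries[id]
	return u, ok, nil
}

func main() {
	c := &eagerCache{maxSize: 1000, table: []*User{{ID: "1", Name: "alice"}}}
	u, ok, _ := c.Read("1")
	fmt.Println(u.Name, ok) // prints "alice true"
	_, ok, err := c.Read("missing")
	fmt.Println(ok, err) // prints "false <nil>"
}
```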

Cleanup Lifecycle

  1. WithCache generates a unique cache ID and stores it in the context

  2. Actual cache data is stored in a global map keyed by this ID

  3. Calling cleanup removes the cache from the global map and marks the ID as dropped

  4. Any subsequent Read using that context checks if the ID is dropped and returns ErrCacheAlreadyCleaned

When to Use

Lazy Cache

Best for:

  • Request handlers that may read the same record multiple times

  • Graph traversals that might revisit nodes

  • Operations where you don't know which records will be accessed

Eager Cache

Best for:

  • Batch operations that need most/all records from a table

  • Small reference tables (roles, categories, settings)

  • Reports or exports that iterate over all records

When Not to Use

  • Long-running processes (cache may become stale)

  • Write-heavy operations (cache doesn't auto-invalidate)

  • Tables with many records and eager caching (memory concerns)

Important Notes

  1. Cleanup is Required: Always call the cleanup function, typically via defer. Failing to do so leaves stale entries in the global cache store.

  2. No Automatic Invalidation: If you Update or Delete a record, the cache is not automatically updated. Create a new cache after writes to see fresh data.

  3. Per-Model Isolation: Each model type has its own cache. Caching Group records has no effect on User reads.

  4. Context Carries ID, Not Data: The context stores a cache ID and options. The actual cache data is stored in a global map, enabling cleanup from anywhere.

  5. Thread Safety: The cache uses sync.RWMutex and is safe for concurrent reads and writes.

  6. Pointer Identity: Repeated reads of the same ID return the same pointer, which can be useful for equality checks but means mutations affect all references.
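The pointer-identity note (6) can be demonstrated directly: because repeated reads hand back the same pointer, a mutation through one reference is visible to every later read. A minimal stdlib stand-in, not som's code:

```go
package main

import "fmt"

type User struct {
	ID   string
	Name string
}

// cache is a minimal stand-in: every Read of the same ID hands back
// the same *User stored at first access.
var cache = map[string]*User{}

func Read(id string) *User {
	if u, ok := cache[id]; ok {
		return u
	}
	u := &User{ID: id, Name: "original"} // stand-in for a DB query
	cache[id] = u
	return u
}

func main() {
	first := Read("1")
	first.Name = "mutated" // writes through the shared pointer

	second := Read("1")
	fmt.Println(first == second, second.Name) // prints "true mutated"
}
```

If callers need an independent copy to mutate, they should dereference and copy the struct rather than write through the cached pointer.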
