Following up on my previous post on Session State, I want to cover a few conceptual ways to think about caches.
Cached items can be placed in session. That may be the easiest approach, but it will soon expose limitations. For instance, session may be serialized as one big blob. If so, you can't have multiple concurrent threads populating the same cache. Imagine that at login you want a background thread to asynchronously look up data and populate the cache.
The key point with a cache is that cached entries need to be independent. A session may be fine to store as a single serialized object graph, but cache keys can be primed by multiple concurrent threads, so a single graph would require locking or risk overwriting parts of the session. If cache entries live deep in a shared graph, you're almost certain to get collisions and overwritten data.
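To make the collision risk concrete, here is a minimal sketch contrasting the two layouts. It uses a plain dict as a stand-in for a cache store, and the names (`blob_store`, `flat_store`, `blob_write`, `flat_write`) are illustrative assumptions, not a prescribed API. The blob version does a read-modify-write of the whole serialized graph, which is exactly where concurrent threads can clobber each other; the flat version gives each entry its own key.

```python
import json
import threading

blob_store = {"session": json.dumps({})}   # whole session as one serialized graph
flat_store = {}                            # one independent key per cache entry

def blob_write(field, value):
    # Read-modify-write on the whole blob: two threads interleaving here
    # can each read the same snapshot and then overwrite each other's write.
    graph = json.loads(blob_store["session"])
    graph[field] = value
    blob_store["session"] = json.dumps(graph)

def flat_write(key, value):
    # Independent keys: each write touches only its own entry,
    # so concurrent threads cannot clobber each other's data.
    flat_store[key] = value

threads = (
    [threading.Thread(target=blob_write, args=(f"item{i}", i)) for i in range(50)]
    + [threading.Thread(target=flat_write, args=(f"item{i}", i)) for i in range(50)]
)
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(flat_store))                          # always 50
print(len(json.loads(blob_store["session"])))   # may be less than 50 if writes interleaved
```

The flat store reliably ends up with all 50 entries; the blob may silently drop some, depending on how the threads interleave. Avoiding that with a single graph means taking a lock around every update, which serializes all your cache writers.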
IMHO, the most important thing is this: cache keys should be deterministic. For instance, if I want to cache all of a user's future trips (and that is a slow call), I want to be able to look in the cache without first looking up the cache key in some other store. I want to say, "Hey, given user 12345, look in a known place in the cache (such as 'U12345:futureTrips') to see if some other thread already populated the value." This does mean your application logic has to account for more uniquely addressable cache locations, but the extra bookkeeping is well worth the flexibility it buys. For instance, it allows background threads to populate that cache item without overwriting anything else.