What if, no matter how you try to simplify, your aggregate root is pretty darn big? Writing application services to handle these large entities is a challenge. We run into this all the time with scientific computing. The object representing a simulation to run is typically quite complex. Imagine describing the geology of the Gulf of Mexico. No way around it: it’s going to be split across 30 database tables and reference some pretty heavy inputs. Even if you decide to be clever and keep it all in one loadable JSON object, for instance, it’s just heavy.
Such large objects present us with a dilemma we haven’t always handled well. If we want to be able to reason about them clearly, separate data access from domain logic, and maintain important invariants, we’d rather write domain logic that assumes the entire entity is in memory for us to reference.
In this only slightly contrived example, the end user is adding some information about the water depth in the Gulf of Mexico 15 million years ago (15 Ma, for megaannum). Because the simulation runs in discrete time steps, we need to make sure that there is a time step introduced at 15 Ma that will use this new value.
void Handle(AddPaleoWaterDepthCommand c)
{
    var s = LoadSimulation(c.SimulationId);
    s.WaterDepthSeries.Add(c.Age, c.WaterDepth);
    s.RecalculateTimeSteps();
    SaveSimulation(s);
}
But lots of inputs can introduce time steps: major depositional events, salt movements, temperature histories, etc. So even though it's just one WaterDepthSeries, we can only RecalculateTimeSteps if we have all that other data in memory. (In a well-factored domain object, by the way, just modifying the water depth series would automatically adjust all the time series, because you'd be using clever Value Objects, but it's easier to see the point by writing the code this way.)
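To make that concrete, here is a minimal sketch of what a RecalculateTimeSteps like this might look like. The member names (Ages, DepositionalEvents, SaltMovements, TemperatureHistories, TimeSteps) are invented for illustration, not anything from a real simulator:

void RecalculateTimeSteps()
{
    // Every input that can introduce a time step contributes its ages; the simulator
    // needs a step at each one, which is why the whole aggregate has to be in memory.
    var ages = new SortedSet<double>();
    ages.UnionWith(WaterDepthSeries.Ages);
    ages.UnionWith(DepositionalEvents.Select(e => e.Age));
    ages.UnionWith(SaltMovements.Select(m => m.Age));
    ages.UnionWith(TemperatureHistories.SelectMany(h => h.Ages));
    TimeSteps = ages.ToList();
}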
But hold on. What if the user just wants to change the name of her simulation to “Low Heat Flow Case #1 <01> Thursday Best (FINAL 2)”?
void Handle(ChangeProjectNameCommand c)
{
    var s = LoadSimulation(c.SimulationId);
    s.Name = c.NewName;
    SaveSimulation(s);
}
Now loading hundreds of rows from dozens of tables, along with potentially megabytes of ancillary data (like gridded inputs), just seems pig-headed. Just to change a name?
So what happens in projects like this? I’ll list two approaches; there are surely more.
Nothing is inherently wrong with either of these approaches, but they can have trouble scaling. On a recent project we decided to tackle this problem in a way appropriate for our real load. We wanted fast updates and to have the whole model in memory at once, but not to pay the cost of loading that whole model every time we wanted to do anything.
Our approach was simple: first we mediated all updates to the entity in question through a pattern we (unimaginatively) called EntityUpdater. Instead of loading and saving the object ourselves in the application service, we just told the entity updater what we wanted to do with the object and let it worry about how to do it.
void Handle(AddPaleoWaterDepthCommand c)
{
    _entityUpdater.Update(c.SimulationId, s =>
    {
        s.WaterDepthSeries.Add(c.Age, c.WaterDepth);
        s.RecalculateTimeSteps();
    });
}
The first naive implementation of course just loads the whole simulation, calls the update function, and saves the simulation.
interface IEntityUpdater<TEntity, TEntityId>
{
    void Update(TEntityId id, Action<TEntity> updateAction);
}

class NaiveEntityUpdater<TEntity, TEntityId> : IEntityUpdater<TEntity, TEntityId>
{
    public void Update(TEntityId id, Action<TEntity> updateAction)
    {
        // Load, mutate, save: exactly what the application service used to do inline.
        // LoadEntity and SaveEntity stand in for whatever persistence code the handlers called before.
        TEntity e = LoadEntity(id);
        updateAction(e);
        SaveEntity(e);
    }
}
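Wiring it in can be a one-liner. Here is a sketch assuming a container such as Microsoft.Extensions.DependencyInjection; the choice of container is incidental, nothing about the pattern depends on it:

// Application services depend only on IEntityUpdater<,>, so swapping in a cleverer
// strategy later (like the caching one below) is a one-line change.
services.AddSingleton(typeof(IEntityUpdater<,>), typeof(NaiveEntityUpdater<,>));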
For small entities, you register this strategy as the updater and you're off and away. But for big ones, how can we be cleverer? Our realization was that most of the time only one person was editing any given model. There might be several users on the system, but nearly always they were each working on their own project. There was a fair amount of collaboration in reading the data, but not much in editing it. We also had a limited number of servers serving those users. The problem was speed and complexity, not scaling "out" to many users.
As an aside: lots of programmers get very excited reading the most recent missive from Facebook about how they handle one billion status updates every day and forget that their scientific or line-of-business applications have much more limited contention. Let's keep it simple, shall we?
We decided to just cache the object. Cache? Sure, why not? Holding a "hot" copy of the model saves a lot of messing around. Every time an application service needs to call a method on it, it just does so. The save just emits the events or row updates necessary. What's faster than a really clever, really fast database read that you spend a month tuning? Why, not talking to your database at all!
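One way a save like that stays cheap, sketched with hypothetical names (DequeueUncommittedChanges and _changeStore are illustrative, not from any particular framework): the entity collects its own changes as it is mutated, and the save appends just those rather than rewriting thirty tables.

void SaveSimulation(Simulation s)
{
    // Persist only what changed since the last save.
    foreach (var change in s.DequeueUncommittedChanges())
        _changeStore.Append(s.Id, change);
}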
But let’s face it: caching is pretty dangerous. What happens if my copy of the data is out of date? What if someone is editing it on another server? How do two application services edit it at once? In a case where you have high contention on these business objects, or you have lots of servers that would have to agree on proper updates, this is a dumb strategy. But it can be simple and fast if the conditions are right.
First: how does the cache work? We should only cache a copy of the data if it's clean: any failure in an application service should throw away the cached copy so that the next operation gets a fresh one.
class CachingEntityUpdater<TEntity, TEntityId> : IEntityUpdater<TEntity, TEntityId>
    where TEntity : class
{
    public void Update(TEntityId id, Action<TEntity> updateAction)
    {
        // Use the hot copy if we have one; otherwise load it once and keep it.
        // _cache is any in-process cache keyed by entity ID.
        TEntity e = _cache.Get(id);
        if (e == null)
        {
            e = LoadEntity(id);
            _cache.Add(id, e);
        }

        try
        {
            updateAction(e);
            SaveEntity(e);
        }
        catch
        {
            // The in-memory copy may now be dirty; evict it so the next operation reloads.
            _cache.Remove(id);
            throw;
        }
    }
}
Well that was easy enough. While the first load is expensive, later operations absolutely sizzle: sub-millisecond domain logic is achievable in memory, with your SaveEntity call the only true cost. First load still getting you down? Watch the user browsing the application and preemptively load stuff in a background service.
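For instance, a tiny Preload method on the CachingEntityUpdater (a sketch; the notification name in the usage comment is made up) lets a background handler warm the cache while the user is still clicking around:

public void Preload(TEntityId id)
{
    // Best-effort warm-up. Assumes the cache tolerates a concurrent add of the same key,
    // or that this runs under the same per-ID lock discussed below.
    if (_cache.Get(id) == null)
        _cache.Add(id, LoadEntity(id));
}

// e.g. from a handler that notices the user opening a project:
// Task.Run(() => _entityUpdater.Preload(notification.SimulationId));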
Only masochists write thread-safe domain logic, so you really need to serialize access to the object in case two people happen to want to update the same object. (Recall: in our domain that's rare, so a few milliseconds of contention between a small handful of users per model is the worst case, and that's absolutely acceptable.) You could do it with a single lock, or you could keep a little table of locks by ID so that you can tolerate lots of users. Implementing that is an exercise for the reader, but quite straightforward. (Note: this is just to keep updates in the same process in a shared-memory threading model from stomping on each other. It's not to handle concurrent updates by multiple systems or users on multiple servers. We'll get to that.)
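For what it's worth, here is one shape that little table of locks might take, sketched under the same single-process assumption:

using System;
using System.Collections.Concurrent;

class EntityLockTable<TEntityId>
{
    private readonly ConcurrentDictionary<TEntityId, object> _gates =
        new ConcurrentDictionary<TEntityId, object>();

    // Runs the body while holding a gate unique to this entity ID, so two threads editing
    // the same entity serialize, while edits to different entities proceed in parallel.
    public void WithLock(TEntityId id, Action body)
    {
        var gate = _gates.GetOrAdd(id, _ => new object());
        lock (gate)
        {
            body();
        }
    }
}

// CachingEntityUpdater.Update would then wrap its whole load/update/save sequence in
// _lockTable.WithLock(id, () => { ... }).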
So, what's not to like? You've got disgustingly fast access to domain objects, and you have the peace of mind and ease of modeling that comes from the assumption that the whole domain object is in memory at once. Shoot, you could even expose the cache of domain objects for services that are reading data!
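Concretely, that could be as small as a read method on the same updater (a sketch, not something the pattern requires):

public TEntity Read(TEntityId id)
{
    // Readers get the hot copy when there is one and fall back to a normal load when not.
    var e = _cache.Get(id);
    if (e == null)
    {
        e = LoadEntity(id);
        _cache.Add(id, e);
    }
    return e;
}

Handing out the live object does mean readers see it as it is being edited; whether that matters is a judgment call for your domain.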
There are two more gotchas that were straightforward to get around in our environment but that bear thinking about if you're trying this at home.
Cache invalidation really is the hardest problem to solve. But there are a few approaches simple enough that the programmability and speed payoff we're after justifies the complexity of managing cache coherence.
This pattern is useful for getting high speed out of apparently stateless application services over lightly contended but heavy domain objects that have properly enforced transactional boundaries. That's a lot of qualification, so be sure it's all true! But the smiles on our users' faces when they can update these objects at interactive speeds are worth it.
And perhaps most importantly — and easy to forget after all this talk of caches — the real winners are the programmers who get to make a dramatically simplifying assumption when writing their domain logic.