IOC containers elicit strong emotions from programmers, ranging from reverence to disdain. It's fashionable today to deride them for being overcomplicated. But for classic .NET web applications, we believe they're quite useful, so long as you don't go overboard with them. The usual argument for using one is that it lets most of your components stay ignorant of the concrete implementation of whichever service interface they're consuming, so you can be "decoupled" or "give yourself flexibility to change later". How does that really look in practice? This blog series details a fun example from real life.
In a seismic interpretation app at one of our clients, once the user had verified the parameters they liked, they'd kick off a batch job which would run the seismic calculation across the entire (very large) seismic volume, saving the result to disk. Since this job could take up to 45 minutes, it was run by a separate compute cluster; jobs were both submitted and monitored via a central Zookeeper instance. Submitting a job was easy enough: just create a node in Zookeeper detailing the job requirements and the compute server would pick it up. Monitoring progress involved making use of Zookeeper's watch functionality, in this case from a .NET web application. We've written some articles here about how to consume a changing Zookeeper node as a simple IObservable<Byte[]>, and all you need is an instance of IZookeeperClient. (Managing IObservable<T> streams across the client/server boundary is an interesting topic we'll tackle another day.)
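For reference, here is a minimal sketch of the client abstraction we're assuming throughout this series. The real IZookeeperClient has more members than this, and treating it as disposable is our assumption (old instances do get torn down later in this post).

// A minimal sketch of the assumed client abstraction; the real interface
// has more members, and IDisposable is our assumption here.
public interface IZookeeperClient : IDisposable
{
    // Emits the node's current data each time the watched node changes.
    IObservable<byte[]> Observe(string path);
}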
So our application service looked something like this:
// this handler is instantiated once for each user request
public class ObserveJobStatusHandler : IObservableQuery
{
    // the IOC container will supply one of these when we're constructed
    public ObserveJobStatusHandler(IZookeeperClient client)
    {
        _client = client;
    }

    public IObservable<JobStatusResult> Handle(JobStatusRequest req)
    {
        string pathToMonitor = "/jobs/status/" + req.JobId;
        return _client.Observe(pathToMonitor)
                      .Select(JobStatusResult.FromBytes);
    }

    private readonly IZookeeperClient _client;
}
You'll notice this handler only has to worry about two very closely related things: figuring out which path to monitor based on the ID of the job requested, and deserializing the byte array we get back from Zookeeper. It's not really too worried about where the IZookeeperClient came from.
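In use, a caller just hands the handler a request and subscribes to the resulting stream. A quick sketch, assuming JobStatusRequest exposes a settable string JobId and that we already have a handler instance (normally the container supplies it):

// sketch only: assumes JobStatusRequest has a settable string JobId
var statuses = handler.Handle(new JobStatusRequest { JobId = "1234" });

// each change to the job's Zookeeper node surfaces here as a JobStatusResult
IDisposable subscription = statuses.Subscribe(status => Console.WriteLine(status));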
But in our IOC registration code, we do have to decide where that client comes from. Should we instantiate a new one on each user request? Or make a singleton? Or have a pool of them? The obvious answer was a singleton, i.e. use one for the entire life of the web application and share it among all users. The client object is quite heavy: it creates a permanent background thread for communicating with Zookeeper, holds a live socket open, and responds to heartbeat messages. So that's how we implemented it. It was fast and easy, and the IOC took care of all the composition issues. And all was well.
public void Install(IWindsorContainer container, IConfigurationStore store)
{
    container.Register(
        Component.For<IZookeeperClient>()
                 .ImplementedBy<ZookeeperClient>()   // the concrete client type (name illustrative)
                 .LifeStyle.Singleton);
}
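This Install method lives on a Windsor installer (IWindsorInstaller); the installer class name below is ours. Wiring it up at application start, and the effect of the Singleton lifestyle, look roughly like this:

// at application startup; ZookeeperInstaller is our (illustrative) installer class
var container = new WindsorContainer();
container.Install(new ZookeeperInstaller());

// Singleton lifestyle: every resolution hands back the same shared client
var first = container.Resolve<IZookeeperClient>();
var second = container.Resolve<IZookeeperClient>();
// first and second refer to the same instance: one client for the whole application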
Except all was not well! After we read the Zookeeper manual a little more closely, we discovered that the Zookeeper client (at that time) had a potential resource leak when observing nodes, as we were doing for job status. We faced a cruel choice: host a singleton Zookeeper client and hope it didn’t leak too many resources, or spin up a new client on every request and run the much bigger risk of being unresponsive, unduly taxing the ZK cluster, and wasting lots of CPU. Who wants to make an impossible choice? Not Captain Kirk, and not us.
The best choice when dealing with a large service object that can become unstable over time is simply to recycle it periodically, perhaps once an hour or so. We decided to take an approach similar to what IIS does when it recycles ASP.NET worker processes: start a new one when the old one's time is up, route new requests to the new instance, and let the old instance finish serving its existing requests.
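To make the idea concrete, here's a minimal sketch of that recycling pattern in isolation, outside any container. It is not the Windsor lifestyle we actually built (that's part 2), and the grace-period disposal is simplified (it waits a fixed interval rather than tracking in-flight requests), but it shows the shape: hand out the current instance, build a fresh one when its hour is up, and retire the old one gently.

using System;
using System.Threading.Tasks;

// Illustrative only: the core "recycle every hour" idea, independent of Windsor.
public sealed class RecyclingHolder<T> where T : class, IDisposable
{
    private readonly Func<T> _factory;
    private readonly TimeSpan _maxAge;
    private readonly TimeSpan _gracePeriod;
    private readonly object _gate = new object();
    private T _current;
    private DateTime _createdUtc;

    public RecyclingHolder(Func<T> factory, TimeSpan maxAge, TimeSpan gracePeriod)
    {
        _factory = factory;
        _maxAge = maxAge;
        _gracePeriod = gracePeriod;
    }

    // New requests always get the current instance. When its time is up we
    // create a fresh one, and dispose the retired one after a grace period
    // so requests already holding it can finish.
    public T Get()
    {
        lock (_gate)
        {
            if (_current == null || DateTime.UtcNow - _createdUtc > _maxAge)
            {
                var retired = _current;
                _current = _factory();
                _createdUtc = DateTime.UtcNow;

                if (retired != null)
                    Task.Delay(_gracePeriod).ContinueWith(_ => retired.Dispose());
            }
            return _current;
        }
    }
}

A holder like this, handed a factory for the Zookeeper client, an hour of maximum age, and a few minutes of grace, is roughly the behaviour we wanted the container to provide for us.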
But we didn't want the service handler itself to worry about all these details; in fact we didn't want to trouble the guy who was writing that part of the web application, since he was busy enough on a completely new task! The answer was to keep the registration just as it was, but plug in a custom lifestyle manager: one that behaves like a singleton, except that it creates a fresh instance every hour and kills off the old one.
We just exchanged this line
.LifeStyle.Singleton
with this one
.LifeStyle.Custom()
The registration code barely changed, and the usage in the many handlers relying on IZookeeperClient didn't change at all. We didn't even have to grep for all the usages. No more resource leaks.
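Spelled out in full, the changed registration reads roughly as follows; RecyclingSingletonLifestyle is just a stand-in name here for the custom lifestyle manager we walk through in part 2:

container.Register(
    Component.For<IZookeeperClient>()
             .ImplementedBy<ZookeeperClient>()                  // concrete client type, as before
             .LifeStyle.Custom<RecyclingSingletonLifestyle>()); // stand-in name; see part 2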
This is the real strength of having centralized and abstracted factories, whether you implement them with IOC containers, hand-written “wire up” methods, curried functions, or whatever else. The users of the abstraction can continue worrying about their concerns (message serialization, path selection), while the providers of the abstraction can worry about theirs (resource leaks).
This lifecycle wasn't built directly into our IOC of choice, Castle Windsor, so we built one ourselves. As an aside, if you want your container to do clever things like this, you should choose a clever container. In general, being useful and configurable is far more important than being fast when choosing an IOC container. You'd never know this from the litany of "container X vs. container Y" blog articles that simply natter on about how fast they can resolve the same trivial reference 400,000 times.
In part 2 we’ll look at how we constructed this custom lifestyle.