Note: This was originally posted on my old blog at the EMC Consulting Blogs site.
This is part 3 in a series of posts on optimisation work we carried out on my current project, www.fancydressoutfitters.co.uk – an ASP.NET MVC web site built using S#arp Architecture, NHibernate, the Spark view engine and Solr. Please see part 1 and part 2 for the background.
I doubt many people will disagree with me when I say that one sure-fire way of making a piece of code run faster and consume less of your precious processing power is to do less work in that piece of code. Optimising your code is part of the answer, but however quickly your pages run when it’s just you on the site, you will struggle to serve large numbers of users unless you have a good caching strategy in place.
There are a number of places that caching can be applied in an application built using S#arp Architecture. Here are a few:
- You can use the NHibernate second-level cache to store the results of database queries;
- You can apply caching wherever it’s appropriate in your model layer;
- You can cache the results of controller actions; and
- You can apply output caching to the rendered views (to be covered in part 4).
However, the whole thing is something of a balancing act. You have to understand how often things are going to change and weigh up the benefits of longer caching durations against the need to reflect updates in a timely fashion. You need to understand how your platform architecture affects your caching strategy – for example, what happens if you have multiple web servers, each maintaining its own local cache? You also need to make sure that once you’ve decided on an approach, people stick to it. I’ve never been a big fan of using the standard HTTP Cache directly – it requires you to specify the caching interval every time you add an object to the cache. Even if you implement some constants to hold the intervals, you still run the risk of people choosing the wrong one.
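To illustrate the problem, this is what direct use of the HTTP cache looks like – every single insert forces the caller to pick an expiration policy:

```csharp
// Direct use of the ASP.NET cache (System.Web / System.Web.Caching):
// the caller has to choose a duration on every insert, which is
// exactly the decision we want to take away from day-to-day coding.
HttpRuntime.Cache.Insert(
    "product-123",                                  // key
    product,                                        // value
    null,                                           // no cache dependency
    DateTime.UtcNow.AddMinutes(10),                 // absolute expiration, chosen per call
    System.Web.Caching.Cache.NoSlidingExpiration);  // no sliding expiration
```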
How we approach caching
Both my current S#arp Architecture-based project and my previous WebForms one make extensive use of Dependency Injection to decouple our assemblies and increase testability. It follows that we needed to create an abstraction for the cache anyway, so we took the opportunity to kill two birds with one stone and introduce a more convention-based approach to the cache. James has recently blogged about the benefits of removing decisions from our day-to-day coding activities, and this is another example. In our abstraction, we treat the cache as a set of named areas and assign cache duration to those areas instead of to the individual objects. A few examples of the named areas we use are:
- Content – medium lifespan, intended for content pulled from the CMS;
- Resources – very long lifespan, containing values from our SQL-based resource provider (used instead of the standard RESX file-based provider); and
- Site Furniture – long lifespan, containing less frequently changing data used in the site header and footer.
(Note: If this approach interests you at all, let me know – the specifics of the interface and implementation are too much detail to include here, but I can cover them in a separate post if there is interest.)
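To give a flavour of the idea without going into that detail, a minimal sketch of such an abstraction might look like this – the names here are illustrative, not our actual interface:

```csharp
// Illustrative sketch only – not our production interface.
// The named area, rather than the caller, determines the cache duration.
public interface INamedCache
{
    // Store a value in a named area; the area's configured lifespan applies.
    void Put(string areaName, string key, object value);

    // Returns null on a cache miss.
    object Get(string areaName, string key);
}

// Durations are assigned per area in configuration, e.g.
//   Content       -> medium lifespan
//   Resources     -> very long lifespan
//   SiteFurniture -> long lifespan
```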
A nice bonus from using this abstraction on the current project was that it allowed us to use the current CTP of Velocity, Microsoft’s new distributed cache. The abstraction – together with our use of the Windsor IoC container – provided us with a handy safety net: if at any point we found an issue with using Velocity, switching back to the standard HTTP cache would be a simple configuration change. As I’ll cover in a future post, our soak testing showed that Velocity is fast and stable, but even post go-live it would still be a simple matter to make the switch if necessary. One of the tenets of lean development is that you should delay decision making until the last responsible moment – for us, the last responsible moment for deciding on a caching solution could be deferred until go-live, by which point we’d been using Velocity for around 5 months.
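By way of illustration, the swap might be as simple as this using Windsor’s fluent registration – VelocityCache and HttpRuntimeCache are hypothetical implementations of the cache abstraction sketched above:

```csharp
// Requires Castle.MicroKernel.Registration and System.Configuration.
// Pick the cache implementation at startup based on a config setting,
// so falling back from Velocity to the HTTP cache is a one-line change.
bool useVelocity = bool.Parse(
    ConfigurationManager.AppSettings["cache.useVelocity"] ?? "false");

if (useVelocity)
{
    container.Register(Component.For<INamedCache>().ImplementedBy<VelocityCache>());
}
else
{
    container.Register(Component.For<INamedCache>().ImplementedBy<HttpRuntimeCache>());
}
```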
Applying caching in the controllers
Our first area of focus was to look at caching the results of controller actions. One of the great things about ASP.NET MVC is that request handling is so much simpler than in the WebForms world (check this out for a pretty picture of the request handling pipeline). It’s far easier to understand where you do and don’t need to apply caching in your application code, and we realised that the most effective thing to do was to put the majority of ours in the controllers.
Howard posted a diagram of our architecture on his recent blog post about AutoMapper. From this you might be able to infer that our controllers tend to follow a pretty simple pattern:
- Input view model is populated from request data by a model binder.
- Input view model is mapped to domain objects.
- Application services layer is used to do work with those domain objects.
- Resulting domain objects are translated back into a view model that can be passed to the view.
- Appropriate ActionResult is returned.
This is the most complex scenario – not all steps are required for every action.
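In code, the full pattern looks roughly like this – a sketch with hypothetical type and service names, where the numbered comments match the steps above:

```csharp
// Sketch only – SearchInputModel, SearchCriteria, SearchResults and
// searchService are hypothetical names for illustration.
public ActionResult Search(SearchInputModel input)                      // 1. bound from the request
{
    var criteria = Mapper.Map<SearchInputModel, SearchCriteria>(input); // 2. map to domain objects
    var results = this.searchService.Find(criteria);                    // 3. application services layer
    var model = Mapper.Map<SearchResults, SearchViewModel>(results);    // 4. map back to a view model
    return View(model);                                                 // 5. return the ActionResult
}
```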
In an ideal world we’d be caching the ActionResults returned by the controllers but this isn’t something we can do because they aren’t serializable and therefore can’t be cached in Velocity. We therefore have to settle for caching the ViewModel for each action, which gives us this pattern:
```csharp
public ActionResult Index()
{
    return View(Views.Index, MasterPages.Default, this.IndexInner());
}

[Cached]
private MyViewModel IndexInner()
{
    // Build and return the view model; the [Cached] aspect stores the
    // result, so repeat calls skip this work entirely.
    return new MyViewModel();
}
```
The [Cached] attribute is actually a PostSharp aspect that caches the result of the method call – I have both good intentions and half-finished blog posts on this subject.
If the action method takes parameters, these are passed straight through to the ActionInner method – we do the minimum amount of work possible in the action method itself.
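Until those posts materialise, here is a simplified sketch of the shape of such an aspect. Exact signatures vary between PostSharp versions, real key generation needs more care, and CacheResolver.Current is a hypothetical way of reaching the cache abstraction from inside an aspect:

```csharp
// Simplified sketch – not our production aspect.
[Serializable]
public class CachedAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Key on the declaring type, method name and argument values,
        // so parameters passed through from the action vary the key.
        var keyBuilder = new StringBuilder(args.Method.DeclaringType.FullName)
            .Append('.')
            .Append(args.Method.Name);
        foreach (object argument in args.Arguments)
        {
            keyBuilder.Append(':').Append(argument);
        }
        string key = keyBuilder.ToString();

        object cached = CacheResolver.Current.Get("Content", key);
        if (cached != null)
        {
            args.ReturnValue = cached;               // serve from the cache...
            args.FlowBehavior = FlowBehavior.Return; // ...and skip the method body
        }
        else
        {
            args.MethodExecutionTag = key;           // remember the key for OnSuccess
        }
    }

    public override void OnSuccess(MethodExecutionArgs args)
    {
        var key = args.MethodExecutionTag as string;
        if (key != null)
        {
            CacheResolver.Current.Put("Content", key, args.ReturnValue);
        }
    }
}
```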
Dealing with changing content in the CMS
In an ideal world, we would set most of the caches to have an extremely long life, then implement a mechanism to allow individual caches to be flushed as and when required. Anyone who has ever used Microsoft Commerce Server will be familiar with the SiteCacheRefresh handler, which allows caches to be cleared in response to a web request. However, we encountered an issue here: the current release of Velocity (CTP3) does not support enumeration or flushing of named caches – these operations are limited to cache regions. The downside with cache regions is that they are tied to a single server, which pretty much removes the benefits of using a distributed cache.
As I’ve previously mentioned, our system uses N2 for Content Management and the majority of our pages are represented in the CMS. We therefore cached the ViewModel for each page using a key based on the page’s unique ID and its last updated date. This works well because it means that pages can be cached indefinitely, yet as soon as someone makes a change to a page, a new version will be created and cached. Obviously the old version will linger in the background – the downside to this approach is that you may well fill up the cache more quickly and require Velocity to evict more frequently.
This approach isn’t perfect – it doesn’t cover the product pages, and doesn’t take care of changes in the site hierarchy that would mean we need to regenerate the navigation. As a result, we’ve implemented an additional hack: all cache keys used for content pages include the last updated time of a single page within the site. Touching that page will cause every cache key used at the controller level to change. I don’t particularly like it, but it does work.
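To make that concrete, the key for a content page ends up looking something like this – a sketch in which the property names are approximate and touchstonePage stands in for that single page:

```csharp
// Per-page invalidation comes from the page's own ID and last-updated
// stamp; the "touchstone" page's stamp changes every key at once when
// that page is touched. Property names are approximate, not N2-exact.
string cacheKey = string.Format(
    "page:{0}:{1:yyyyMMddHHmmss}:{2:yyyyMMddHHmmss}",
    page.ID,
    page.Updated,
    touchstonePage.Updated);
```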
The longer term plan would be to look at this again as and when Velocity moves on. With any luck, the next CTP (now overdue) will make some changes in this area. The ideal plan would be to:
- Hook into N2’s persistence mechanism – from reading the documentation and looking at the code, it should be possible to receive a notification when things change. This could then be used to remove stale objects from the cache, or flush an entire named area as required.
- Implement a restricted controller that allows clearing of individual caches, in the style of the CS SiteCacheRefresh handler mentioned above (see the sketch below). This would be useful if we needed to clear a cache in response to an external action – such as an ETL process pulling data from a back office system.
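That second item might look something like this – a hypothetical, locked-down controller, where FlushArea is assumed to exist on the cache abstraction:

```csharp
// Hypothetical sketch of a restricted cache-flushing controller, in
// the spirit of Commerce Server's SiteCacheRefresh handler.
[Authorize(Roles = "CacheAdministrator")]
public class CacheController : Controller
{
    private readonly INamedCache cache;

    public CacheController(INamedCache cache)
    {
        this.cache = cache;
    }

    // e.g. GET /Cache/Flush?area=Content after an ETL run completes
    public ActionResult Flush(string area)
    {
        this.cache.FlushArea(area); // FlushArea assumed on the abstraction
        return Content("Flushed cache area: " + area);
    }
}
```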
Caching elsewhere in the application
Once we’d implemented the caching in the controllers and configured the NHibernate second-level cache in both our custom code and N2, there were very few places in our model layer that we needed to apply caching. Once again we hit the point of diminishing returns – we could spend time profiling the application and identifying any remaining bottlenecks, or we could move on and look at what we expected to be the final big win: output caching, which I’ll be talking about in the next post of this series.
A final word about caching
On a lot of projects I’ve worked on, caching is something that tends to be implemented at the end. There are advantages to this – it can help to avoid premature optimisation, and you will (hopefully) have a good understanding of the moving parts in your codebase. However, there is one major downside that anyone who’s done this will be familiar with: once you check in the changes, every single bug that is raised for the application will be blamed on your caching. Accept it, live with it, and be prepared to be known as a) grumpy, and b) easy to wind up. Just take satisfaction in the knowledge that when you get it right, you will be setting yourself up for success when you finally put your site live.
As always, I’d appreciate feedback on these posts – either by leaving a comment, dropping me an email or via Twitter.
Tagged: asp.net, asp.net mvc, performance, s#arp architecture
