So we have a new Node app that we're developing. We have a fairly involved CI process that deploys to AWS Elastic Beanstalk, and one of its steps is load testing against our staging environment.
Midway through development the load test started failing. It hasn't been the most stable of build steps, so it got ignored for a few days (bad, I know). But eventually I decided it was legitimately failing, so I investigated.
It turned out to be a pretty simple issue. A bit of profiling showed that hbs.js was taking the largest share of the time rendering the templates: every request to the origin required the templates to be compiled.
I wanted to add a simple caching mechanism that would cache the rendered response for a configurable amount of time.
This is the module we created.
```coffeescript
cache = {}

# Convert a duration in seconds into an absolute expiry timestamp.
expiry = (duration) -> Date.now() + (duration * 1000)

module.exports = (action, duration) ->
  (request, response) ->
    # Key the cache on the path, ignoring the query string.
    key = request.url.split('?')[0]
    render = response.render

    # Cache hit: send the stored HTML straight back.
    if cache[key] and cache[key].expires > Date.now()
      return response.send cache[key].html

    # Cache miss: intercept render, store the HTML, then send it.
    response.render = (view, viewData) ->
      render.call response, view, viewData, (err, html) ->
        cache[key] =
          html: html
          expires: expiry duration
        return response.send html

    action request, response
```
We cache the HTML result of the rendering in a simple object keyed by request path, along with an expiry time. On a cache hit we simply return the stored HTML; otherwise we render and re-cache.
```coffeescript
module.exports =
  register: (app) ->
    app.get '/', cache home.action, 30
```
This is how we register the cache. We apply it at route registration so that we can choose exactly which actions to cache, and we pass the duration in seconds.
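Because the wrapper is applied per route, caching stays opt-in. A minimal sketch of that composition (the `app` stub, the `about` route, and the pass-through `cache` factory are all assumptions standing in for a real Express app and the middleware above):

```javascript
// Stub app that just records registered routes; a real Express app
// would dispatch incoming requests to these handlers.
const routes = {};
const app = { get: (path, handler) => { routes[path] = handler; } };

// Hypothetical route handlers.
const home = { action: (req, res) => res.send('home') };
const about = { action: (req, res) => res.send('about') };

// Simplified pass-through standing in for the caching factory above.
const cache = (action, duration) => (req, res) => action(req, res);

// Only the homepage is cached; /about renders on every request.
const register = (app) => {
  app.get('/', cache(home.action, 30));
  app.get('/about', about.action);
};
register(app);
```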
The load test results before and after the change:
Before, as it started to fail, the results were more spread out; the fastest bucket, 21ms, accounted for 59% of requests.
After these changes, 49% of requests took 1ms and 48% took 10ms. A huge improvement, and far more consistent.