You have put your puppet master behind Passenger and that lasted for some time. But now that a good number of hosts check in to your master regularly, performance has started to become an issue. What can be done about it?

There are multiple answers. The first and simplest would be to put more space between your puppet runs. If you're currently running puppet every 30 minutes on each client, you could stretch that period to 45 minutes or 1 hour.
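If your agents run as daemons, that interval lives in puppet.conf. A minimal sketch (the value is an example, expressed in seconds):

```ini
# /etc/puppet/puppet.conf
[agent]
# run every 45 minutes instead of the 30-minute default
runinterval = 2700
```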

It's also possible to distribute runs via a central scheduler that ensures every node gets a run before starting over from the beginning. One such solution is the "puppetcommander.rb" script that R.I. Pienaar wrote, which uses mcollective as the run scheduler.

Another method (that you can use in conjunction with the above) is to cache the responses that come from the puppet master.

Caching

Since puppet clients poll the master through a RESTful API (that is, a set of HTTP requests on URLs that classify the different calls in an easy-to-understand manner), more and more people have started placing a web caching layer between the master and the clients. Caching HTTP responses is by now a well-mastered practice, and plenty of software offers it.
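To give an idea of what those requests look like, here is an illustrative sketch of the kind of URLs a client hits during a run. The hostname, node name and file path are made up, and real requests authenticate with the client's SSL certificate, which is omitted here:

```shell
# fetch the compiled catalog for a node
curl -k https://puppetmaster:8140/production/catalog/web01.example.com

# fetch metadata for a file served from a module
curl -k https://puppetmaster:8140/production/file_metadata/modules/ntp/files/ntp.conf
```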

First, here's an interesting article about using mod_mem_cache with apache + passenger to directly cache responses:

http://paperairoplane.net/?p=380

The interesting thing in this article is that Oliver took the time to gather before/after statistics in order to compare the gains the technique brings. Its main disadvantage is that the module keeps a separate cache for each apache process, so it doesn't leverage the full potential of caching. Its main advantage is that most people already use apache + passenger with their puppet master, and adding caching is as simple as enabling the module and adding a couple of config lines.
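A sketch of what those config lines can look like, assuming Apache 2.2 with mod_cache available; the sizes and the cached URL prefix are example values, not the article's exact settings:

```apache
LoadModule cache_module     modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

<IfModule mod_mem_cache.c>
    # cache responses served under the "production" environment URLs
    CacheEnable mem /production
    MCacheSize 81920              # total cache size, in KBytes
    MCacheMaxObjectCount 10000    # how many objects to keep at most
    MCacheMaxObjectSize 1048576   # largest cacheable object, in bytes
</IfModule>
```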

Others have put nginx as a caching reverse proxy in front of the master:

http://www.masterzen.fr/2010/03/21/more-puppet-offloading/

The advantage with nginx is that the cache is global: once a URL has been requested, all subsequent requests for it are served from the cache.
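A minimal sketch of such a setup, assuming the real master has been moved to port 8141 on the same host; the cache path, zone name, sizes and TTL are all assumptions, and client-certificate handling is omitted for brevity:

```nginx
proxy_cache_path /var/cache/nginx/puppet levels=1:2
                 keys_zone=puppet:10m max_size=1g;

server {
    listen 8140 ssl;
    ssl_certificate     /etc/nginx/ssl/master.pem;
    ssl_certificate_key /etc/nginx/ssl/master.key;

    location / {
        proxy_pass https://127.0.0.1:8141;   # the real puppet master
        proxy_cache puppet;
        proxy_cache_valid 200 30m;           # keep successful answers 30 minutes
    }
}
```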

I've also heard of people using Varnish and Squid as the caching layer, but I can't find good articles about setting things up for caching puppet specifically.

Beware of retaining your cache for too long!

When setting up caching in front of your puppet master, make sure not to set up indefinite caching. Doing so would mean that changes to your manifests or config files would never show up on clients!

I've seen two approaches here:

  • Setting caching without any time limit, but adding a post-commit hook on the master that triggers a purge of the cache.
  • Setting a reasonable expiry time, long enough that similar requests stay in the cache for other clients to reuse.

The first approach has the benefit of making your changes immediately available, but requires that your puppet master (or the server on which you commit your manifests, which could be a different machine) has access to clear the cache.
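Such a purge hook can be very small. A sketch, assuming the cache is a directory on disk that the hook can wipe; the path is an assumption, adjust it to wherever your proxy keeps its cache:

```shell
#!/bin/sh
# Hypothetical git post-commit hook (e.g. .git/hooks/post-commit on the
# machine holding the manifests): wipe the proxy cache so the next client
# request reaches the master and gets the fresh manifests.
CACHE_DIR="${CACHE_DIR:-/var/cache/nginx/puppet}"

# :? guards against an empty variable wiping the filesystem root
rm -rf "${CACHE_DIR:?}"/*
```

If the proxy runs on another machine, the same one-liner can be wrapped in an ssh call instead.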

The second one better isolates access requirements between the different servers, but means your changes only become visible after the expiry time you configured.

The article above describing caching with mod_mem_cache reports roughly 60% cache hits for the request type with the largest number of hits. I'd be curious to see some numbers from engines that use a global cache, like nginx, varnish and squid.