This post is for anyone interested in systems automation, devops, and related subjects. My name is Joe, and I would like to introduce some of the work I’ve done to help support development and operations here at Paytm Labs. I am in the process of releasing all of the components as open projects on GitHub (all but two parts are already there, and they will follow soon).

The name of the project is “masala”. I picked the name because we use Chef as a provisioner, so it had to be something food-related, and masala adds spice to our recipes.

Masala is several things:

  • A project structure that adds some higher-level configuration for Chef and Packer
  • A CLI wrapper that simplifies invoking those tools
  • A set of wrapper recipes for various roles, including a base; most existing roles are data-related (Cassandra, Kafka, Spark, the ELK stack, etc.)
  • A way to define “clusters” of coordinated systems to be pushed to live servers
  • A specification format for building “fully baked” images of various types with Packer, similar in spirit to Netflix’s Aminator or buri
  • Support for incremental image building, including installation of a base OS where possible (AMI/Docker builds use vendor-provided images as a source, while Vagrant/QEMU builds start from install media)
  • No reliance on chef-server
  • A foundation for even higher-level tools (e.g., infrastructure provisioning)
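To make the image-building side concrete, here is a minimal sketch of the kind of Packer template that produces a “fully baked” AMI. The region, source AMI, cookbook path, and run list are placeholders for illustration, not masala’s actual configuration:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-00000000",
      "instance_type": "t2.medium",
      "ssh_username": "centos",
      "ami_name": "masala-base-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "chef-solo",
      "cookbook_paths": ["cookbooks"],
      "run_list": ["recipe[masala_base::default]"]
    }
  ]
}
```

The same provisioner section can be reused across builder types, which is what makes the AMI/Docker/Vagrant/QEMU targets above practical to maintain side by side.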

Some of the goals at the outset were:

  • Support the building of ‘fully baked’ images, as needed in autoscaling scenarios
  • Support pushing to live nodes as well
  • Require no additional proprietary products
  • Avoid the chef-server pull model, pushing via chef-provisioning instead
  • Leverage existing Chef recipes where possible
  • Provide role-based wrapper recipes and matching push recipes
  • Support both cloud and bare-metal setups
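The push model in chef-provisioning boils down to declaring machine resources against existing hosts. This is only a hedged sketch of that idea: the driver, host details, and recipe names are illustrative placeholders, not masala’s actual push recipes:

```ruby
# Sketch of a chef-provisioning push; driver and names are illustrative.
require 'chef/provisioning/ssh_driver'

with_driver 'ssh'

# Converge the same hypothetical wrapper recipe onto three existing hosts.
1.upto(3) do |i|
  machine "cassandra-#{i}" do
    recipe 'my_cassandra_wrapper'   # placeholder wrapper recipe
    attribute %w[cassandra cluster_name], 'demo'
    machine_options transport_options: {
      host: "10.0.0.#{i}", username: 'deploy'
    }
  end
end
```

Because the driving workstation holds the cookbooks and pushes the converge out, no chef-server is involved, which is what keeps the tool usable from higher-level provisioning code.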

The end state is that we can provision a whole cluster of any of several kinds by defining it in a relatively simple JSON file and feeding that into a masala CLI command, or define a base image specification and generate that image with a different CLI command.
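As a hypothetical illustration of that “relatively simple JSON file” (the actual schema lives in the push cookbook, so the key names here are invented), a three-node cluster definition might look like:

```json
{
  "cluster_name": "analytics-cass",
  "attributes": {
    "cassandra": { "seeds": ["10.0.0.10", "10.0.0.11"] }
  },
  "nodes": [
    { "hostname": "cass-1", "ip": "10.0.0.10" },
    { "hostname": "cass-2", "ip": "10.0.0.11" },
    { "hostname": "cass-3", "ip": "10.0.0.12" }
  ]
}
```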

The masala base and wrapper recipes are all open projects. The base recipe is probably the most complex of them: it brings in a number of other recipes to ensure consistent system configuration, and provides flexible, attribute-based configuration for everything from sysctls to ulimit settings. Many of the wrappers depend on functionality brought in via the base.
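A wrapper recipe in this style is typically a thin layer over the base plus an existing community cookbook. This is a hedged sketch; the cookbook and attribute names are placeholders rather than masala’s actual code:

```ruby
# Hypothetical wrapper recipe; names are illustrative.
include_recipe 'masala_base::default'     # consistent base system setup

# Let wrapper-level attributes override the community cookbook's defaults.
node.default['cassandra']['config']['cluster_name'] =
  node['my_wrapper']['cluster_name']

include_recipe 'cassandra::default'       # existing community recipe
```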

Some of the wrapper recipes are very simple, mostly bringing in our base and calling an external recipe. Even these, however, have fairly complex Test Kitchen configurations so that non-trivial setups can be evaluated. As an example, a one-node test of Cassandra is hardly indicative of what will happen once you try to deploy more than that. The wrapper test kitchens generally implement a minimum of three nodes, if for no other reason than to catch and explore the configuration variants needed for multi-node setups.

The exploration done in the test kitchens tends to influence the development of the push recipes. The simplest inputs to the push recipes contain little more than a node list (hostnames and IPs) and a cluster-level attribute scope common to those nodes. The more complex push recipes add per-node flags or attribute sections. Kafka is probably the most complex of these today, as we allow ZooKeeper, broker, mirror, and Confluent schema-registry/proxy roles to be specified on any given node. All of these carry fairly specific per-node configuration data in addition to the flags that toggle each feature on or off for a node.
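To illustrate the per-node flag idea (again with invented key names, not the cookbook’s actual schema), a Kafka cluster spec in this spirit might look like:

```json
{
  "cluster_name": "events",
  "attributes": { "kafka": { "log_dirs": ["/data/kafka"] } },
  "nodes": [
    { "hostname": "kafka-1", "ip": "10.0.1.10",
      "zookeeper": true, "broker": true, "broker_id": 1 },
    { "hostname": "kafka-2", "ip": "10.0.1.11",
      "broker": true, "mirror": true, "broker_id": 2 },
    { "hostname": "kafka-3", "ip": "10.0.1.12",
      "broker": true, "schema_registry": true, "broker_id": 3 }
  ]
}
```

The cluster-level attribute scope applies to every node, while each node entry toggles features on and supplies whatever per-node data those features need.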

That said, the configurations we feed to the push recipes aim to be as simple and terse as possible, with reasonable defaults and the ability to override where needed. The push cookbook is one of the two items that is not yet an open project, and the main thing holding up open sourcing the entire codebase. There is some clean-up I want to do first: some variation in the JSON structure has crept in between the roles, and I want it expressed more consistently before putting it out there, so that I don’t break things for people who adopt it once it is available.
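One way to keep that JSON structure consistent across roles is a small structural check in the tooling. This is only a sketch, assuming a hypothetical schema with top-level `cluster_name` and `nodes` keys; it is not part of masala itself:

```python
# Sketch: minimal structural check for a hypothetical cluster spec schema.
def validate_cluster(spec):
    """Return a list of problems found in a cluster spec dict."""
    problems = []
    for key in ("cluster_name", "nodes"):
        if key not in spec:
            problems.append(f"missing top-level key: {key}")
    for i, node in enumerate(spec.get("nodes", [])):
        for key in ("hostname", "ip"):
            if key not in node:
                problems.append(f"node {i}: missing {key}")
    return problems

spec = {"cluster_name": "demo",
        "nodes": [{"hostname": "a", "ip": "10.0.0.1"}, {"ip": "10.0.0.2"}]}
print(validate_cluster(spec))  # → ['node 1: missing hostname']
```

Running a check like this before a push catches schema drift between roles early, instead of mid-converge on a live node.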

The last project is actually ready to be opened, but it depends on that set of push recipes. It is the CLI and “project structure” that sits at the top, bringing together all of the components that make up masala.

We’ve used masala to deploy multi-region Cassandra clusters, Kafka clusters with remote mirroring, auto-scaling Spark clusters in AWS, and much more.

My current focus is on improving masala’s ability to target Docker images as an output, so that we can use the generated containers in DC/OS, and on continuing to work toward fully opening the entire set of tools on GitHub.

I’ll follow up with another post once we’ve hit that milestone and share some practical examples of using it. Until then, let me know if you have any questions. You can reach me at or follow me on Twitter.