Building the common ground – a basic iBox

In the previous post we introduced the building blocks for the iBox, which can also be used for local development. As an example, we brought up the integration of CentOS 7 into our environment. So how does this work?

Background

We run a couple of services and they all run on one or multiple systems. So if we talk about systems in our setup, we talk about many virtual machines, each fulfilling some kind of role or – as we call it – type. VMs give us, for example, a way of separation and isolation, as we don’t want to keep mailboxes alongside user-maintained CMSs, which tend to be outdated and hacked/defaced/… from time to time. But it also eases management, as VMs are much easier to handle than bare-metal systems – even if it’s just to avoid the usually long BIOS roundtrip time.

Among all these different types, you usually have a set of configurations that needs to be applied to all systems: things like sshd configuration, firewall, NTP, repositories and so on. In the end this means that a VM consists of two parts: a base system – equal among all systems – and the additional software/configuration representing the specific type of workload.

This common ground is often the first part that we need to adapt – and hence implement – for a new major release of a distribution, as the new version might use newer or other software, tends to solve problems differently, and sometimes even lets us drop a few workarounds that we had put in place on earlier versions.

Getting a box with the basic OS configuration

If we look at how our configuration management is set up, this means that if we start a box without a type, but still run our configuration management on top of it, we simply get our common ground. And getting such a box is as simple as described in the README of the ibox_base repository:

  1. Install Vagrant
  2. Import a stemcell from our stemcell repository
  3. Check out the repository and its submodules
  4. Run vagrant up
  5. Profit
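
In terms of commands, these steps boil down to something like the following sketch. Note that the box name and the URLs here are placeholders for illustration; the README of the ibox_base repository lists the actual ones.

# import a stemcell as a vagrant box (name and URL are examples)
vagrant box add ibox-centos7 https://example.org/stemcells/ibox-centos7.box

# check out the repository including its submodules
git clone --recursive https://example.org/git/ibox_base.git
cd ibox_base

# boot the VM and let it apply the base configuration
vagrant up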

After running vagrant up, Vagrant will start a VM and – once it has access to it – begin to apply our base configuration that every system should have. We haven’t yet ported every detail of our base configuration to the ibox modules, but the most important things are there; the rest will be added as we move on.

Hiera

Nearly all aspects of our setup can be tuned using hiera, and the same applies to the local development environment you get with the iBox.

There is a basic configuration that defines a sane hierarchy for a development environment (our production environment has a few more levels, but the idea should be clear), which is used while running puppet. The existing default.yaml (in manifests/hieradata/default.yaml) contains everything that makes up the default configuration of the puppet modules we use.
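
To illustrate the idea, a minimal hiera configuration for such a development hierarchy could look like the following sketch – the actual levels in the shipped configuration may differ:

:backends:
  - yaml
:yaml:
  :datadir: manifests/hieradata
:hierarchy:
  - vagrant
  - "%{::operatingsystem}"
  - default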

Furthermore, we ship a sample vagrant.yaml file (located at manifests/hieradata/vagrant.yaml.sample), which shows you the common configuration options that we have ported to iBox so far. Simply copy it to vagrant.yaml and start editing whatever you would like to try out:
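
That is, from the root of the checkout:

cp manifests/hieradata/vagrant.yaml.sample manifests/hieradata/vagrant.yaml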

Types

A basic configuration of the operating system is usually nothing spectacular. More interesting are the applications that you run on top of it.

As we move further with our adoption of CentOS 7, we have to verify the configuration of our types on the new major release, which makes it a good point to also release our so far secret – but still shareable – parts as iBox building blocks.

As an example we can take the dbserver type, representing database server nodes running MariaDB, PostgreSQL, or both:

# enables the type dbserver
ibox::types: ['dbserver']

# configure a postgres admin user called test2 (superuser role)
ib_postgres::server::admin_users:
  'test2': {}
# configure a postgres database called test and
# a role test owning that database
ib_postgres::server::default_databases:
  'test': {}

# set up a MySQL user having all privileges on all databases
ib_mysql::server::admin_users:
  'user1': {}
# set up two MariaDB databases and, for each of them, a user
# with the same name and full privileges on that database
ib_mysql::server::default_databases:
  'testdb1': {}
  'testdb2': {}

Having the above in the manifests/hieradata/vagrant.yaml file, a simple vagrant up will give you an iBox with two fully configured database servers (e.g. including backup) and a few resources (like databases) set up.
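
To poke around in the result, you could for example log into the box and list the databases – a quick sketch, assuming the modules leave local superuser access in place:

vagrant ssh
# list the PostgreSQL databases; 'test' should show up
sudo -u postgres psql -l
# list the MariaDB databases; 'testdb1' and 'testdb2' should show up
sudo mysql -e 'SHOW DATABASES;'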

We will talk about types and further ports to the iBox as we move forward with releasing the parts.

Introducing iBox – stemcells

As a lot of people might know: we have been happy puppet users @ immerda for ages. And since the very beginning, we have tried to publish as much of our puppet work as possible and to collaborate with others on improving these puppet modules, making them available and (most importantly!) useful for others as well.

Background

As (for obvious reasons) we can’t publish the whole configuration of our infrastructure, we went with a dual-module setup: public and private (site-specific) modules. While this was fine in the beginning and at least let us publish the common ground for our infrastructure, it was very hard to see how, in the end, all these public modules build up our infrastructure.

Within the last 6 years puppet and its ecosystem have developed and improved heavily, and have made a lot of things easier. Drastic changes within the language (e.g. 2.7 to 3.0) made it hard to keep things updated and to stay on top of the current development while still running a platform in production. But once you have made the big step, a lot of things become way easier. In particular, some of these changes made it easier to share more of our infrastructure and especially to publish how we use our public modules.

Furthermore, it has become more and more important for us to be able to quickly try out something new – to develop and integrate a new feature into our infrastructure – while not being too disruptive. In the end our users – and we as well – use our infrastructure daily for our digital communication, and too much disruption would just bury us in (unnecessary) work.

Additionally, more and more people became interested in how we set things up and would like to help out and contribute to our infrastructure in one way or another. So we had a new goal: make more of our internal (so far secret) sauce publicly available, so that people can inspect it, fix it, contribute to it or even just use it to run their very own version of our infrastructure.

iBox

This led us to kick off (another) new internal project, in which we refactor our current puppet code base in such a way that we can publish more parts of our infrastructure. In the end this could mean that anyone could use our codebase to run her very own instance of our infrastructure. And the basis for that would be something that we named: iBox.

It is a very ambitious initiative and we are far away from the final goal. However, we see it more as a project that will accompany us over the next years while we improve the infrastructure further. We don’t plan to dedicate too much concentrated effort to it; rather, it should grow as we grow and move on. In the past few months we started working on it and are now at a point where we are able to publish the first, but still very basic, parts of what we envision. And given that CentOS 7 was recently released, the integration of the new version of our most-used distribution was also a good starting point for our effort. We would have had to integrate the new version into our infrastructure anyway, so why not kick off the iBox initiative and use it to develop the CentOS 7 integration? Eat your own dog food! 😉

This and a few more upcoming blog posts will introduce you (and also some members of our collective) to what we have developed so far, how we built it and how you can use it. It is far from being fully useful yet, but it is a start and we expect to grow and integrate more and more things as we move on with our infrastructure. We don’t have a concrete timeline or roadmap (we luckily never have!), but don’t expect it to just die after the few things that we publish.

stemcells

If we look at all the prerequisites one needs to be able to run the code for our infrastructure, it starts with a very simple thing: a server, or at least a virtual machine (VM). You need an operating system on which you can run puppet to set up all the parts that build up an infrastructure. Luckily, with today’s availability of virtualization it is very easy to run VMs on a local desktop computer, and in recent years a lot of nice tools have appeared that address all the boring and cumbersome steps. One of them is Vagrant, which makes it easy to set up and run development VMs on your local desktop. More about that later.

Vagrant is based on the idea that you have a set of base images, which you use as a starting point to apply further modifications using a configuration management tool or good old simple scripts. There are plenty of boxes available, but projects usually establish their own process and guidelines for how a box should look (e.g. filesystem-wise). So did we, and this is what we refer to as a stemcell. Once you have such a stemcell for the different virtualization providers, you can publish them somewhere and people can consume them for their Vagrant setups without redoing the OS installation on their own. Building such a stemcell is no magic and there are different tools available that make this process very easy: e.g. Veewee or Packer.

We first went with Veewee, but given that Packer is more or less its successor and makes a few things much easier, we ported things over to Packer. We already have our own internal kickstart server (called iBoot), which makes it very easy for us to kickstart (or preseed) new VMs or physical machines based on such a file, so that each installed machine looks the same. Hence, it was easy to just adapt these existing kickstart files for a setup with Packer. This is what the ibox-stemcells repository contains.

packer

Packer works in such a way that you write a template which describes how your images (in our terms: stemcells) should be built on the different virtualization platforms. For each of them, you describe a builder. Packer also comes with an internal HTTP server that can be used to serve the kickstart files or anything else that should be available at installation time. Furthermore, Packer downloads your (net-)install images, boots the new VM from them, enters the right boot parameters, kicks off the installation and so on.
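
A heavily trimmed sketch of what such a template could look like for a CentOS 7 stemcell built with the qemu builder – the URL, checksum, credentials and kickstart file name are placeholders, not the values from our repository:

{
  "builders": [{
    "type": "qemu",
    "iso_url": "http://mirror.centos.org/centos/7/os/x86_64/images/boot.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "PLACEHOLDER",
    "http_directory": "kickstart",
    "boot_command": [
      "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/centos7.ks<enter>"
    ],
    "ssh_username": "root",
    "ssh_password": "PLACEHOLDER",
    "ssh_wait_timeout": "30m",
    "shutdown_command": "shutdown -P now"
  }]
}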

Once the installation is done, Packer applies a set of defined provisioners that can do further modifications to the installed system and could even kick off a configuration management tool. However, as we just want a very basic image – matching what we have right after the plain OS installation – we only apply a few things through a provisioner in the form of a postinstall script. The basis should be as clean as possible, so that we can develop our puppet modules on top of it using Vagrant. This process will be part of a next blog post. Using Packer, we now have 2 CentOS 7 stemcells available: one for KVM (qemu) and another one for VirtualBox, which should make it very easy for most people to use them.
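
The provisioner and packaging part of the template stays minimal accordingly; something along these lines, where the script name is illustrative:

{
  "provisioners": [{
    "type": "shell",
    "script": "scripts/postinstall.sh"
  }],
  "post-processors": [{
    "type": "vagrant",
    "output": "ibox-centos7-{{ .Provider }}.box"
  }]
}

The vagrant post-processor is what turns the raw image into a box file that can then be imported with vagrant box add.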

There is also already a builder for a VMware stemcell in place. However, the author failed to get a usable VMware installation running in an appropriate timeframe, so this is left as an exercise for interested people.

It should also be very easy to create stemcells for Debian, previous CentOS versions or any other interesting distribution. This should make it really easy not only to develop our manifests on CentOS 7, but also to add support for other distributions.