Puppet (and everything) over Tor

Mostly, when we talk about Tor, we just talk about websites. But what about other traffic and tools? What about Puppet or Icinga? If you run a Puppet server and would like to hide where it is hosted and/or which nodes connect to it, you may want to serve those services over an onion service.

This article is based on some research into how a Puppet server and an Icinga parent can be hidden. Only the corresponding traffic should be routed through Tor; the rest of the traffic shouldn’t go through Tor.

Normally Tor provides a local socks proxy, and each application with support for socks proxies can be configured to run over Tor. If your application supports socks proxies, use the socks proxy. If your application only supports http proxies, there are tools, like polipo, which can translate from http to socks. Puppet can use http proxies and that’s fine. But first of all, the polipo website states that polipo isn’t maintained anymore, and second, not every application supports proxies at all.

The goal is to implement a transparent proxy for every kind of (tcp) traffic. This example is built with CentOS 7 and/or 8, but it should be possible on every kind of Unix. For the firewall part – it’s essential to configure the firewall correctly – this example uses nftables. The same is also possible with iptables.

Server side

The server side is easy; it’s the same as any other onion service. Tor runs on the server and creates an onion service which forwards the traffic to a local port.
Configure Tor (/etc/tor/torrc):

# Puppet hidden service
HiddenServiceDir /var/lib/tor/puppet
HiddenServicePort 8140

# Icinga hidden service
HiddenServiceDir /var/lib/tor/icinga2
HiddenServicePort 5665

The onion address can be found in the corresponding hostname file, e.g. /var/lib/tor/puppet/hostname. Puppet and Icinga will have different hostnames. As this article isn’t about how to create onion services, this setup is enough. If you’d like to dig deeper into onion services, check out the Tor documentation.
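For example, after a restart of the Tor service (the address shown here is the one used throughout this article):

$ systemctl restart tor
$ cat /var/lib/tor/puppet/hostname
kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion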

Agent side

The interesting part is the agent side. For a transparent proxy we need multiple parts to work together. First of all, the client will establish a connection to an fqdn, so we need a dns response for that fqdn. As the fqdn is an onion address, we need a dns server which is aware of onion addresses and will return an ip address for it. Second of all, the connection will then be established directly to an ip address and port; the firewall needs to capture that and redirect it to the Tor service.

DNS

The Tor service can provide a dns server (DNSPort) and answer queries. As the Tor service is aware of onion addresses, it can return an ip address for an onion address (AutomapHostsOnResolve).

DNSPort 53
AutomapHostsOnResolve 1

After a restart of the Tor service, it will listen on 127.0.0.1:53 (udp only).
If not specified otherwise, the ip range which is used for onion addresses is 127.192.0.0/10 and [FE80::]/10. That’s fine if everything is on one node; if not, it’s possible to specify the address range with VirtualAddrNetworkIPv4 and VirtualAddrNetworkIPv6.
The /etc/resolv.conf must be adapted to this, and the only nameserver should be 127.0.0.1, as it’s the only one which can serve the onion top level domain too.
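A minimal sketch of the resulting /etc/resolv.conf, assuming everything runs on one node:

$ cat /etc/resolv.conf
nameserver 127.0.0.1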

Transparent proxy

The Tor service can create a port which serves as a transparent proxy (TransPort).

TransPort 9051

After a restart of the service it will listen on 127.0.0.1:9051 tcp. To redirect all our traffic to the Tor service, some firewall rules are required.

$ cat /etc/sysconfig/nftables.conf
table ip nat {
    chain output {
        type nat hook output priority -100; policy accept;
        meta l4proto tcp ip daddr 127.192.0.0/10 redirect to :9051
    }
}

This will redirect all traffic which has a destination within the VirtualAddrNetworkIPv4 network to the transparent proxy of the local Tor service.
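As mentioned at the beginning, the same is possible with iptables; a rough equivalent of the nft rule above, assuming the default virtual address range and the TransPort from above, could look like this:

iptables -t nat -A OUTPUT -p tcp -d 127.192.0.0/10 -j REDIRECT --to-ports 9051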

Testing

Let’s do some tests first:

$ dig +short @127.0.0.1 kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion
127.255.217.241
$ dig +short @127.0.0.1 AAAA kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion
fe96:9715:fe63:392b:bea0:67db:78f8:2434
$ dig +short @127.0.0.1 www.immerda.ch
199.58.80.177

We see that the dns server responds with an A and AAAA record inside of the network which is specified with VirtualAddrNetworkIPv?. Every other dns query is answered too. That’s great, but what about our traffic which should be redirected to the transparent proxy?

$ nmap -sT -p8140 kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion
Starting Nmap 7.70 ( https://nmap.org ) at 2020-03-22 20:05 UTC
Nmap scan report for kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion (127.230.205.159)
Host is up (0.0034s latency).
Other addresses for kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion (not scanned): fea3:1af6:48c2:a7f5:f5ef:bd1d:accc:f623

PORT     STATE SERVICE
8140/tcp open  puppet

Applications

As Tor now provides a transparent proxy, it’s really easy to set up applications on top of it. Let’s have a look at the two examples, Puppet and Icinga.

Puppet

The Puppet server needs a certificate which also includes the onion address as an x509v3 dns alternative name. Have a look at the dns_alt_names configuration in the Puppet documentation.
For an already existing Puppet server, the host certificate has to be removed and regenerated. There is no need to replace the Puppet CA.
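A minimal sketch of the corresponding setting on the server side (puppet.example.com is only a placeholder for the server’s regular certname; the onion address is the one from the hostname file above):

$ cat /etc/puppetlabs/puppet/puppet.conf
[main]
    # puppet.example.com stands in for the server's regular certname
    certname = puppet.example.com
    dns_alt_names = puppet.example.com,kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion
...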
On the agent side, we have to specify the onion server address:

$ cat /etc/puppetlabs/puppet/puppet.conf
[main]
    certname = agent.example.com
    server = kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion
...

Puppet will now use a connection over Tor. On the server side, the certificate of the agent will still include the certname (agent.example.com).
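Assuming the puppet binary is in the PATH, a normal agent run should now establish the connection transparently through the onion service:

$ puppet agent --test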

Icinga

This article will only explain some parts of the whole Icinga2 configuration; the complete Icinga configuration is too much and out of scope.
On the monitoring parent node, create the zones and endpoints:

$ cat /etc/icinga2/zones.conf
object Endpoint "monitoring.example.com" {
}
object Zone "monitoring.example.com" {
    endpoints = [ "monitoring.example.com" ]
}

object Endpoint "agent.example.com" {
}
object Zone "agent.example.com" {
    endpoints = [ "agent.example.com" ]
    parent = "monitoring.example.com"
}

On the agent node, create the same zones and endpoints:

$ cat /etc/icinga2/zones.conf
object Endpoint "monitoring.example.com" {
    host = "kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion"
    port = 5665
}
object Zone "monitoring.example.com" {
    endpoints = [ "monitoring.example.com" ]
}


object Endpoint "agent.example.com" {
}
object Zone "agent.example.com" {
    endpoints = [ "agent.example.com" ]
    parent = "monitoring.example.com"
}

All done. Icinga will now communicate over Tor.
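Analogous to the nmap test above, you can verify from the agent that the Icinga port of the parent is reachable through the transparent proxy:

$ nmap -sT -p5665 kqcbbtuyhuaagan3pbm22knapempzb5ia2wnmdqnhg7rrfwx775cqmad.onion

If everything is wired up correctly, nmap should report 5665/tcp as open, just like it did for the Puppet port.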

Conclusion

It’s relatively simple to create a transparent Tor proxy, but it’s not really well documented on the internet. There are blog articles which describe parts of it, and every configuration option is well documented in the man page. A negative point is that the dns server of the Tor service will not serve every dns record type. Perhaps it would be better to create an onion router which serves a whole network as a transparent proxy and to configure your main dns server to stub-resolve onion addresses via this onion router.

GitLab CI with podman

We have known GitLab CI with docker runners for quite a while now, but what about GitLab CI with podman? Podman is the next generation container tool under Linux; it can start docker containers within user space, so no root privileges are required. With RHEL 8 there is no docker runtime available at the moment, but Red Hat supports podman. But how can we integrate that with GitLab CI? The GitLab CI runner has native support (called executors) for docker, shell, …, but there is no native support for podman. There are two possibilities: using the shell executor or using the custom executor. With the shell executor, you have to ensure that every project starts podman, and only podman. So let’s try the custom executor.

GitLab CI runner with custom executor

Let’s start building a GitLab CI custom executor with podman on a RHEL/CentOS 7 or 8 with a really basic container. First, install the gitlab-runner Go binary and create a user with a home directory under which gitlab-runner will run later.
For this example we assume there is a unix user called gitlab-runner with the home directory /home/gitlab-runner. This user is able to run podman. Let’s try that:

sudo -u gitlab-runner podman run -it --rm \
    registry.code.immerda.ch/immerda/container-images/base/fedora:30 \
    bash

Next, let’s make a systemd service for the GitLab runner (/etc/systemd/system/gitlab-runner.service):

[Unit]
Description=GitLab Runner
After=syslog.target network.target
ConditionFileIsExecutable=/usr/local/bin/gitlab-runner

[Service]
User=gitlab-runner
Group=gitlab-runner
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/local/bin/gitlab-runner run --working-directory /home/gitlab-runner
Restart=always
RestartSec=120

[Install]
WantedBy=multi-user.target
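After creating the unit file, reload systemd and enable and start the service:

systemctl daemon-reload
systemctl enable gitlab-runner.service
systemctl start gitlab-runner.service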

Now, let’s register a runner to a GitLab instance.

sudo -u gitlab-runner gitlab-runner register \
    --url https://code.immerda.ch/ \
    --registration-token $GITLAB_REGISTRATION_TOKEN \
    --name "Podman fedora runner" \
    --executor custom \
    --builds-dir /home/user \
    --cache-dir /home/user/cache \
    --custom-prepare-exec "/home/gitlab-runner/fedora/prepare.sh" \
    --custom-run-exec "/home/gitlab-runner/fedora/run.sh" \
    --custom-cleanup-exec "/home/gitlab-runner/fedora/cleanup.sh"
  • --builds-dir: The build directory within the container.
  • --cache-dir: The cache directory within the container.
  • --custom-prepare-exec: Prepare the container before each job.
  • --custom-run-exec: Pass the .gitlab-ci.yml script items to the container.
  • --custom-cleanup-exec: Clean up all left-overs after each job.

There are three scripts referenced at this point. Those scripts will be executed for each job (a CI/CD pipeline can contain multiple jobs, e.g. build, test, deploy). The whole magic happens within those scripts. The output of those scripts is always shown in the GitLab job, so for debugging it’s possible to do a set -x.
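Registering writes these options into the runner’s config.toml; with the setup above it ends up in /home/gitlab-runner/.gitlab-runner/config.toml. A sketch of the relevant part:

$ cat /home/gitlab-runner/.gitlab-runner/config.toml
[[runners]]
  name = "Podman fedora runner"
  url = "https://code.immerda.ch/"
  # the runner token issued by GitLab is omitted here
  executor = "custom"
  builds_dir = "/home/user"
  cache_dir = "/home/user/cache"
  [runners.custom]
    prepare_exec = "/home/gitlab-runner/fedora/prepare.sh"
    run_exec = "/home/gitlab-runner/fedora/run.sh"
    cleanup_exec = "/home/gitlab-runner/fedora/cleanup.sh"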

Scripts

Every job will run all of the referenced scripts. First, let’s have a look at some variables we need in all scripts. Let’s create a file /home/gitlab-runner/fedora/base.sh:

CONTAINER_ID="runner-$CUSTOM_ENV_CI_RUNNER_ID-project-$CUSTOM_ENV_CI_PROJECT_ID-concurrent-$CUSTOM_ENV_CI_CONCURRENT_PROJECT_ID-$CUSTOM_ENV_CI_JOB_ID"
IMAGE="registry.code.immerda.ch/immerda/container-images/base/fedora:30"
CACHE_DIR="$(dirname "${BASH_SOURCE[0]}")/../_cache/runner-$CUSTOM_ENV_CI_RUNNER_ID-project-$CUSTOM_ENV_CI_PROJECT_ID-concurrent-$CUSTOM_ENV_CI_CONCURRENT_PROJECT_ID-pipeline-$CUSTOM_ENV_CI_PIPELINE_ID"
  • CONTAINER_ID: Name of the container.
  • IMAGE: Image to use for the container.
  • CACHE_DIR: The cache directory on the host system.

Prepare script

The prepare executable (/home/gitlab-runner/fedora/prepare.sh) will

  • pull the image from the registry
  • start a container
  • install the dependencies (curl, git, gitlab-runner)
#!/usr/bin/env bash

currentDir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
source ${currentDir}/base.sh

set -eo pipefail

# trap any error, and mark it as a system failure.
trap "exit $SYSTEM_FAILURE_EXIT_CODE" ERR

start_container() {
    if podman inspect "$CONTAINER_ID" >/dev/null 2>&1; then
        echo 'Found old container, deleting'
        podman kill "$CONTAINER_ID"
        podman rm "$CONTAINER_ID"
    fi

    # Container image is hardcoded at the moment, since Custom executor
    # does not provide the value of `image`. See
    # https://gitlab.com/gitlab-org/gitlab-runner/issues/4357 for
    # details.
    mkdir -p "$CACHE_DIR"
    podman pull "$IMAGE"
    podman run \
        --detach \
        --interactive \
        --tty \
        --name "$CONTAINER_ID" \
        --volume "$CACHE_DIR":"/home/user/cache" \
        "$IMAGE"
}

install_dependencies() {
    podman exec -u 0 "$CONTAINER_ID" sh -c "dnf install -y git curl"

    # Install the gitlab-runner binary since we need it for cache/artifacts.
    podman exec -u 0 "$CONTAINER_ID" sh -c "curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64"
    podman exec -u 0 "$CONTAINER_ID" sh -c "chmod +x /usr/local/bin/gitlab-runner"
}

echo "Running in $CONTAINER_ID"

start_container
install_dependencies

Run script

The run executable (/home/gitlab-runner/fedora/run.sh) will run the commands defined in the .gitlab-ci.yml within the container.

#!/usr/bin/env bash

currentDir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
source ${currentDir}/base.sh

podman exec "$CONTAINER_ID" /bin/bash < "$1"
if [ $? -ne 0 ]; then
    # Exit using the variable, to mark the build as failed in GitLab
    # CI.
    exit $BUILD_FAILURE_EXIT_CODE
fi

Cleanup script

And finally, the cleanup executable (/home/gitlab-runner/fedora/cleanup.sh) will clean up after every job.

#!/usr/bin/env bash

currentDir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
source ${currentDir}/base.sh

echo "Deleting container $CONTAINER_ID"

podman kill "$CONTAINER_ID"
podman rm "$CONTAINER_ID"
exit 0

The script above doesn’t clean up the cache. The reason is that we might need the cache during the next job or during the next pipeline. So an additional cleanup on the host system is needed to purge the cache after a while.
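How you purge the cache is up to you; as a purely hypothetical sketch, a daily cron job on the host could remove per-pipeline cache directories that haven’t been used for a week (the path matches the CACHE_DIR layout from base.sh):

find /home/gitlab-runner/_cache -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +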

Last but not least

This is just a quick howto. If you want to implement this, there is a lot of room for improvement. It should just explain how the custom executor can be used and how to use podman for the GitLab CI runner. At the moment there is no support for the image keyword (see #4357).

The state of Forward Secrecy in OpenSSL

It could be possible that your SSL services are not providing forward secrecy and you haven’t noticed yet!

Many SSL ciphers provide forward secrecy by using ephemeral Diffie-Hellman (EDH) keys. This means that for every SSL session a temporary encryption key is negotiated and the normal key is only used for verifying authenticity. As the OpenSSL documentation states:

“By generating a temporary DH key inside the server application that is lost when the application is left, it becomes impossible for an attacker to decrypt past sessions, even if he gets hold of the normal (certified) key, as this key was only used for signing.”

Although ciphers using EDH will most probably be available in your setup, often they are disabled because the application fails to provide DH params to OpenSSL. Since it is costly to generate those parameters – which are needed to negotiate a DH key exchange – OpenSSL suggests creating them when an application is installed.

Many applications will not do this, but rather let the user generate and include the parameters in the configuration manually. Since (i) most administrators are not aware of this problem, (ii) those applications do not yield any warnings if the parameters are missing, and (iii) OpenSSL silently disables ciphers with unsatisfied requirements, forward secrecy is not available in many SSL connections.

Update: Also see Bernat’s blog for a nice roundup on the cryptographic background of perfect forward secrecy and the new, faster elliptic curve implementations.

Verify your Setup

Try to open an SSL session to your service (https, imap, smtp, jabber, irc, …) with:

openssl s_client -port <port> -host <yourdomain.tld>

This will show you the details of the SSL session, and you can verify whether the used cipher includes EDH:

New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA

or not:

New, TLSv1/SSLv3, Cipher is AES256-SHA
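For a quick non-interactive check, for example across several hosts, you can feed s_client from /dev/null and grep for the cipher line; a small sketch for an https service on port 443:

openssl s_client -connect yourdomain.tld:443 </dev/null 2>/dev/null | grep 'Cipher is'

The output should contain one of the two lines shown above; for protocols like smtp or imap you additionally need the appropriate -starttls option.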

Fix your Setup

Applications which we found to work with EDH ciphers are Apache and Dovecot.

Update: Applications which we found to not support EDH out of the box are: squid, exim, courier

In most applications you can configure a dhparams variable somewhere. The dhparams can be generated with the following command:

openssl dhparam -out dhparams.pem 2048

We already fixed the problem in the following services:

Squid (reverse proxy)

In /etc/squid/include.d/https_port add dhparams=/path/dhparams.pem to every line

Exim

In /etc/exim.conf add the line tls_dhparam = /path/dhparams.pem

Fix the general Problem

This problem has two main reasons:

  1. Applications do not check whether the requirements of the user-selected ciphers are satisfied. The requirements are listed in the OpenSSL documentation. Or they could just always generate dhparams when they are installed, since EDH ciphers should be preferred anyway.
  2. The OpenSSL API does not provide any means to verify the state of the configuration. There is no function to check if cipher requirements are met and the SSL_CTX setup is consistent. As long as at least a single cipher (even the least secure) in the acceptable ciphers list can be initialized OpenSSL will not complain to the application.

If you find any application which exhibits this problem, please file a bug report and convince the maintainers to at least generate a warning to the user and state the consequences in the documentation.

If you are a developer of an application which uses OpenSSL please consider shipping install scripts that generate dhparams or generate them on the fly if they are missing. Please do not just let OpenSSL silently disable a key feature of SSL.