LVM cache create + resize

LVM allows you to add a caching layer: your actual LV resides on spinning (slow) disks, while a secondary LV on faster storage caches your most frequent reads and the writes. From an end-user perspective the details are transparent: you see one single block device. For a good overview and introduction see the following blog post: Using LVM cache for storage tiering

In our case mainly the writeback cache mode is interesting, and we are adding a RAID 1 of SSDs/NVMes to the VG of the spinning disks (usually RAID 10).

Create

Once some fast PV has been added to the VG, we can start caching individual LVs within that VG:

lvcreate --type cache --cachemode writeback --name slowlv_cache --size 100G storage/slowlv /dev/md3

Where:

  • slowlv_cache: the name of the caching LV
  • --size: the size of the cache
  • storage/slowlv: the VG and the slow LV to cache
  • /dev/md3: the SSD/NVMe PV in the VG storage

Resize

You cannot directly resize a cached LV; instead you need to uncache, resize and then add the caching LV again. When uncaching, the data that has not yet been written through gets synced down to the slow disks. This might take a while depending on your cache size and the speed of the slow disks.


lvconvert --uncache /dev/storage/slowlv
lvresize -L +200G /dev/storage/slowlv
lvcreate --type cache --cachemode writeback --name slowlv_cache --size 100G storage/slowlv /dev/md3

The last command is exactly the same one we initially used to create the cache.
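
To see how the cache is doing (and roughly how much dirty data an uncache would have to flush), lvs can report the cache counters. A quick sketch; field availability varies with your LVM version, see lvs -o help:

lvs -a storage
lvs -o+cache_total_blocks,cache_used_blocks,cache_dirty_blocks storage/slowlv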

Intercept traffic sent over a socket

What is the easiest way to intercept traffic sent over a UNIX Socket?

In general a socket is just a file, so you can use strace on any program and capture what it writes there. BUT the data you capture that way won’t be easily processable by something like Wireshark.
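
For a quick look that is often enough, though. A sketch, assuming you know the server’s PID and that it uses plain read/write on the socket (adjust the syscall list otherwise):

strace -f -s 4096 -e trace=read,write -p <server-pid>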

The other way is to set up a socat chain, which proxies the data through a local TCP port where you can then capture it. You do that by pointing either the clients or the server to another socket to write to or read from.

Let’s assume clients usually connect to /tmp/socket and that the clients are the side we can more easily repoint.

(terminal 1)$ socat -t100 -d -d -x -v UNIX-LISTEN:/tmp/socket.proxy,mode=777,reuseaddr,fork \
TCP-CONNECT:127.0.0.1:9000
(terminal 2)$ socat -t100 -d -d -x -v TCP-LISTEN:9000,fork,reuseaddr  UNIX-CONNECT:/tmp/socket
(terminal 3)$ tcpdump -w /tmp/data.pcap -i lo -nn port 9000

Now reconnect the client to /tmp/socket.proxy and tcpdump will record the traffic flowing over the socket as normal TCP packets.
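
If it’s the server side that is easier to touch, the same chain works in reverse: renaming a unix socket file keeps the existing listener reachable under the new name, so you can move the real socket aside and let socat take over the original path. A sketch:

(terminal 1)$ mv /tmp/socket /tmp/socket.orig
(terminal 1)$ socat -t100 -d -d -x -v UNIX-LISTEN:/tmp/socket,mode=777,reuseaddr,fork \
TCP-CONNECT:127.0.0.1:9000
(terminal 2)$ socat -t100 -d -d -x -v TCP-LISTEN:9000,fork,reuseaddr UNIX-CONNECT:/tmp/socket.orig
(terminal 3)$ tcpdump -w /tmp/data.pcap -i lo -nn port 9000

No client needs to be reconfigured in this variant.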

GitLab CI with podman

We have known GitLab CI with docker runners for quite a while now, but what about GitLab CI with podman? Podman is the next-generation container tool under Linux; it can start containers in user space, so no root privileges are required. With RHEL 8 there is no docker runtime available at the moment, but Red Hat supports podman. But how can we integrate that with GitLab CI? The GitLab CI runner has native support (called executors) for docker, shell, …, but there is no native support for podman. There are two possibilities: using the shell executor or using the custom executor. With the shell executor, you would have to ensure that every project starts podman, and only podman. So let’s try the custom executor.

GitLab CI runner with custom executor

Let’s start building a GitLab CI custom executor with podman on a RHEL/CentOS 7 or 8 with a really basic container. First, install the gitlab-runner Go binary and create a user with a home directory under which gitlab-runner should run later.
For this example we assume there is a unix user called gitlab-runner with the home directory /home/gitlab-runner. This user is able to run podman. Let’s try that:

sudo -u gitlab-runner podman run -it --rm \
    registry.code.immerda.ch/immerda/container-images/base/fedora:30 \
    bash

Next, let’s make a systemd service for the GitLab runner (/etc/systemd/system/gitlab-runner.service):

[Unit]
Description=GitLab Runner
After=syslog.target network.target
ConditionFileIsExecutable=/usr/local/bin/gitlab-runner

[Service]
User=gitlab-runner
Group=gitlab-runner
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/local/bin/gitlab-runner run --working-directory /home/gitlab-runner
Restart=always
RestartSec=120

[Install]
WantedBy=multi-user.target
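
Then make systemd pick up the new unit and, once the runner is registered (next step), enable and start it:

sudo systemctl daemon-reload
sudo systemctl enable --now gitlab-runner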

Now, let’s register a runner to a GitLab instance.

sudo -u gitlab-runner gitlab-runner register \
    --url https://code.immerda.ch/ \
    --registration-token $GITLAB_REGISTRATION_TOKEN \
    --name "Podman fedora runner" \
    --executor custom \
    --builds-dir /home/user \
    --cache-dir /home/user/cache \
    --custom-prepare-exec "/home/gitlab-runner/fedora/prepare.sh" \
    --custom-run-exec "/home/gitlab-runner/fedora/run.sh" \
    --custom-cleanup-exec "/home/gitlab-runner/fedora/cleanup.sh"
  • --builds-dir: The build directory within the container.
  • --cache-dir: The cache directory within the container.
  • --custom-prepare-exec: Prepare the container before each job.
  • --custom-run-exec: Pass the .gitlab-ci.yml script items to the container.
  • --custom-cleanup-exec: Clean up all left-overs after each job.

There are three scripts referenced at this point. Those scripts will be executed for each job (a CI/CD pipeline can contain multiple jobs, e.g. build, test, deploy). The whole magic happens within those scripts. The output of those scripts is always shown in the GitLab job, so for debugging it’s possible to do a set -x.
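
For reference, the script items come straight from the project’s .gitlab-ci.yml; a hypothetical minimal job, whose lines end up being piped into the container by the run script below, looks like this:

test:
  script:
    - cat /etc/os-release
    - git --version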

Scripts

Every job will run all the referenced scripts. First, have a look at some variables we need in all of the scripts. Let’s create a file /home/gitlab-runner/fedora/base.sh:

CONTAINER_ID="runner-$CUSTOM_ENV_CI_RUNNER_ID-project-$CUSTOM_ENV_CI_PROJECT_ID-concurrent-$CUSTOM_ENV_CI_CONCURRENT_PROJECT_ID-$CUSTOM_ENV_CI_JOB_ID"
IMAGE="registry.code.immerda.ch/immerda/container-images/base/fedora:30"
CACHE_DIR="$(dirname "${BASH_SOURCE[0]}")/../_cache/runner-$CUSTOM_ENV_CI_RUNNER_ID-project-$CUSTOM_ENV_CI_PROJECT_ID-concurrent-$CUSTOM_ENV_CI_CONCURRENT_PROJECT_ID-pipeline-$CUSTOM_ENV_CI_PIPELINE_ID"
  • CONTAINER_ID: Name of the container.
  • IMAGE: Image to use for the container.
  • CACHE_DIR: The cache directory on the host system.

Prepare script

The prepare executable (/home/gitlab-runner/fedora/prepare.sh) will

  • pull the image from the registry
  • start a container
  • install the dependencies (curl, git, gitlab-runner)
#!/usr/bin/env bash

currentDir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
source "${currentDir}/base.sh"

set -eo pipefail

# trap any error, and mark it as a system failure.
trap "exit $SYSTEM_FAILURE_EXIT_CODE" ERR

start_container() {
    if podman inspect "$CONTAINER_ID" >/dev/null 2>&1; then
        echo 'Found old container, deleting'
        podman kill "$CONTAINER_ID"
        podman rm "$CONTAINER_ID"
    fi

    # Container image is hardcoded at the moment, since the custom executor
    # does not provide the value of `image`. See
    # https://gitlab.com/gitlab-org/gitlab-runner/issues/4357 for
    # details.
    mkdir -p "$CACHE_DIR"
    podman pull "$IMAGE"
    podman run \
        --detach \
        --interactive \
        --tty \
        --name "$CONTAINER_ID" \
        --volume "$CACHE_DIR":"/home/user/cache" \
        "$IMAGE"
}

install_dependencies() {
    podman exec -u 0 "$CONTAINER_ID" sh -c "dnf install -y git curl"

    # Install the gitlab-runner binary since we need it for cache/artifacts.
    podman exec -u 0 "$CONTAINER_ID" sh -c "curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64"
    podman exec -u 0 "$CONTAINER_ID" sh -c "chmod +x /usr/local/bin/gitlab-runner"
}

echo "Running in $CONTAINER_ID"

start_container
install_dependencies

Run script

The run executable (/home/gitlab-runner/fedora/run.sh) runs the commands defined in .gitlab-ci.yml within the container. GitLab calls it with the path to a generated script as the first argument, which we pipe into a shell inside the container:

#!/usr/bin/env bash

currentDir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
source "${currentDir}/base.sh"

podman exec "$CONTAINER_ID" /bin/bash < "$1"
if [ $? -ne 0 ]; then
    # Exit using the variable, to mark the build as failed in GitLab
    # CI.
    exit $BUILD_FAILURE_EXIT_CODE
fi

Cleanup script

And finally the cleanup executable (/home/gitlab-runner/fedora/cleanup.sh) will cleanup after every job.

#!/usr/bin/env bash

currentDir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
source "${currentDir}/base.sh"

echo "Deleting container $CONTAINER_ID"

podman kill "$CONTAINER_ID"
podman rm "$CONTAINER_ID"
exit 0

The script above doesn’t clean up the cache. The reason is that the cache may still be needed by the next job or the next pipeline. So an additional cleanup on the host system is needed to purge stale caches after a while.
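
For example, a daily cron job on the host could purge cache directories untouched for a week. A sketch, assuming the _cache location from base.sh above:

find /home/gitlab-runner/_cache -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +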

Last but not least

This is just a quick howto. If you want to implement this, there is plenty of room for improvement. It should just explain how the custom executor can be used and how podman can be used for a GitLab CI runner. At the moment there is no support for the image keyword (see #4357).

Ehlo Onion

Email transport security in 2016, it’s still a thing! The last mile is fortified: no reasonable provider accepts plaintext SMTP, POP or IMAP from a client. But what about transport between servers? It’s still opportunistic, downgradeable, interceptable and correlatable. It’s time to put some more band-aids on this wound!

SMTP delivery to Tor Hidden Services

Moving forward, our MX is reachable at ysp4gfuhnmj6b4mb.onion:25. We are happy to accept mail for all our domains (i.e. all domains where mail.immerda.ch is the MX) there! Do you want to know how to do that? Or even how to make your own system reachable through tor? There is a tutorial for Exim at the bottom of this post and there is another one for Postfix. But wait, there is more!

Ehlo

So how about everyone on the internet does this? How about using the following format to publish Onion Service MX records in DNS:

_onion-mx._tcp.immerda.ch. 3600 IN SRV 0 5 25 ysp4gfuhnmj6b4mb.onion.
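
A sending MTA could then discover the onion MX with an ordinary SRV lookup:

dig +short SRV _onion-mx._tcp.immerda.ch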

Fair enough, DNS can be spoofed. But when it works we still get all the other benefits: we make correlation much harder and reduce the metadata leakage. And we exclude the attacker who can only tamper with the SMTP session but not with the DNS query.

At the moment this is just a proposal but we are eager to collaborate on this if you reach out to us!

Howto? (Exim)

Now let’s see how we can configure Exim to send mail to a tor Onion Service for a manually curated set of domains.

There is previous work on this, but since the Exim 4.87 release an easier approach is possible. Here is a high-level overview of what we need:

  1. A static mapping between email domain and MX onion address
  2. A router to prepare the delivery using Tor’s AutomapHostsOnResolve feature: the router performs a programmatic DNS lookup against the tor daemon. The returned IP is mapped back to the correct onion address by tor.
  3. A transport sending emails via the tor socks proxy using the above IP as destination.

Here’s how we adjusted our exim setup for outgoing mail (and you should be able to do it in a similar way):

  • First create the mapping of recipient domains to onion addresses:

/etc/exim/onionrelay.txt

immerda.ch ysp4gfuhnmj6b4mb.onion
lists.immerda.ch ysp4gfuhnmj6b4mb.onion
  • Then convert it to cdb for faster lookups:

    cdb -m -c -t /tmp/onionrelay.tmp /etc/exim/onionrelay.cdb /etc/exim/onionrelay.txt

  • Install and configure Tor for Onion Service DNS mapping and have the local daemon running:

/etc/torrc

AutomapHostsOnResolve 1
DNSPort 5300
DNSListenAddress 127.0.0.1
...
  • Configure Exim:

/etc/exim/conf.d/perl

perl_startup = do '/etc/exim/perl-routines.pl'
perl_at_start

/etc/exim/perl-routines.pl

use Net::DNS::Resolver;
sub onionLookup {
  my $hostname = shift;
  my $res = Net::DNS::Resolver->new(nameservers => [qw(127.0.0.1)],);
  $res->port(5300);
  my $query = $res->search($hostname);
  # guard against failed lookups, $query may be undef
  return 'no_such_host' unless $query;
  foreach my $rr ($query->answer) {
    next unless $rr->type eq "A";
    return $rr->address;
  }
  return 'no_such_host';
}

/etc/exim/conf.d/domainlists

ONION_RELAYDB=/etc/exim/onionrelay.cdb
domainlist onion_relays     = cdb;ONION_RELAYDB
...

/etc/exim/conf.d/router

# send things over tor where we have an entry for it
onionrelays:
  driver    = manualroute
  domains   = +onion_relays
  transport = onion_relay
  # get the automap IP for the onion address from the tor daemon
  route_data = ${perl{onionLookup}{${lookup{$domain}cdb{ONION_RELAYDB}}}}
  no_more
...

/etc/exim/conf.d/transports

onion_relay:
  driver = smtp
  socks_proxy = 127.0.0.1 port=9050
...
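
Before pointing real traffic at this, both lookup stages can be tested by hand. A sketch: the first command asks the tor DNSPort to resolve the onion name (expect an automapped IP from tor’s virtual address range), the second queries the cdb mapping (assuming the tinycdb tools used above):

dig +short @127.0.0.1 -p 5300 ysp4gfuhnmj6b4mb.onion
cdb -q /etc/exim/onionrelay.cdb immerda.ch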

Running an Onion Service MX

To receive mail all you need to do is set up a Tor Onion Service (there are plenty of tutorials out there) which listens on port 25 and publish the address to the world.

We strongly advise to run this Onion Service on a separate VM and internally forward to your MX, to avoid running an open relay.

You could also configure the Onion Service directly on the MX, but then you need to be extra careful since connections will appear to come from 127.0.0.1. Most mail servers treat localhost in a privileged way, and you want to avoid that. Possible workarounds are to either locally map to a different port or bind the tor daemon to another IP (e.g. 127.0.0.2).
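
As a sketch, the torrc on such a separate VM would just forward the onion port to the internal MX (10.0.0.25 is a made-up internal address here):

HiddenServiceDir /var/lib/tor/onion-mx
HiddenServicePort 25 10.0.0.25:25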