LVM allows you to add a caching layer: your actual LV resides on spinning (slow) disks, while a secondary LV on fast storage caches your most frequently read data and the writes. From an end-user perspective the details are transparent: you still see a single block device. For a good overview and introduction see the following blog post: Using LVM cache for storage tiering

In our case mainly the writeback cache is interesting, and we add a RAID 1 of SSDs/NVMes as an additional PV to the VG that holds the spinning disks (usually RAID 10).
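Adding the fast PV could look roughly like this; the NVMe device names are only examples, and /dev/md3 matches the PV used in the commands below:

# create a RAID 1 out of the two fast disks (device names are examples)
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# turn it into a PV and add it to the existing VG "storage"
pvcreate /dev/md3
vgextend storage /dev/md3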

Create

Once some fast PV has been added to the VG, we can start caching individual LVs within that VG:

lvcreate --type cache --cachemode writeback --name slowlv_cache --size 100G storage/slowlv /dev/md3

Where:

  • slowlv_cache – the name of the caching LV
  • --size 100G – the size of the cache
  • storage/slowlv – the VG and the slow LV to cache
  • /dev/md3 – the SSD/NVMe PV in the VG storage
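
To check that the cache is attached and in the expected mode, the lvm2 reporting fields can be queried; cache_mode and cache_dirty_blocks should be available as lvs report fields in recent lvm2 versions, but check your version if they are rejected:

# -a also shows the hidden cache pool sub-LVs
lvs -a -o +devices,cache_mode,cache_dirty_blocks storage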

Resize

You cannot directly resize a cached LV; instead you need to uncache it, resize it and then add the caching LV again. When uncaching, the data that has not yet been written through gets synced down to the slow disks. This might take a while, depending on your cache size and the speed of the slow disks.


lvconvert --uncache /dev/storage/slowlv
lvresize -L +200G /dev/storage/slowlv
lvcreate --type cache --cachemode writeback --name slowlv_cache --size 100G storage/slowlv /dev/md3

The last command is exactly the same one we initially used to create the cache.
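
Note that --uncache removes the caching LV once the flush is done (lvconvert --splitcache would keep it), which is why the cache has to be recreated from scratch. lvconvert reports the flush progress on the terminal; it can also be watched from a second terminal, for example (assuming cache_dirty_blocks is available as an lvs report field in your lvm2 version):

# dirty blocks should count down to zero while the cache is flushed
watch -n5 lvs -a -o +cache_dirty_blocks storage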