Temporarily postpone upgrading OpenZFS to 2.2.0 – 2.2.1 when upgrading to Proxmox 8.1

Add the following APT pinning rule to pin Proxmox to the 2.1.x branch of OpenZFS.


cat << EOF > /etc/apt/preferences.d/zfs
# Hold back on upgrading to OpenZFS 2.2.* due to a bug currently being investigated in 2.2.0*, 2.2.1* – 2023-11-25
Package: zfs-initramfs zfs-zed zfsutils-linux libzfs4linux libzpool5linux
Pin: version 2.1.13*
Pin-Priority: 900
EOF
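
You can check that the pin took effect with apt-cache policy; the candidate version should stay on the 2.1.13 branch:

# The candidate version should remain 2.1.13-*
apt-cache policy zfsutils-linux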


Remember to remove the rule once the issues reported (and suspected) in OpenZFS releases 2.2.0 – 2.2.1 are confirmed resolved by Proxmox.
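
When the time comes, removing the pin is just deleting the file and refreshing the package index:

rm /etc/apt/preferences.d/zfs
apt update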

Edit 2023-12-01: A fix has been released. https://www.phoronix.com/news/OpenZFS-2.2.2-Released

EVPN Learning Resources – WIP

Table of Contents

RFCs
  Drafts
YouTube
  Playlists
Routing Daemons
  Linux Hypervisors
  Commercial Vendors
Blog Posts
Side notes
  VXLAN Packet
  PBB Packet Format

Working with TC on Linux systems – dasblinkenlichten

With that out of the way – I wanted to spend some time in this post talking about the command line tool found on Linux systems called tc. We’ve talked about tc before when we discussed creating some network/traffic simulated topologies and it worked awesome for that use case. If you recall from that earlier post, tc is short for Traffic Control and allows users to configure qdiscs. A qdisc is short for Queuing Discipline. I like to think of it as manipulating the Linux kernel’s packet scheduler.

Working with TC on Linux systems
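
As a quick, minimal illustration (not taken from the linked post; eth0 is a placeholder interface), a netem qdisc can add artificial delay to egress traffic:

# Attach a netem qdisc that delays all egress packets on eth0 by 100 ms
sudo tc qdisc add dev eth0 root netem delay 100ms
# Inspect the qdiscs currently attached to eth0
tc qdisc show dev eth0
# Remove the qdisc again, restoring the default scheduler
sudo tc qdisc del dev eth0 root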

MariaDB Galera Cluster on Ubuntu 18.04

Install required packages

  • dist-upgrade: Optional! Installs upgrades that may add or remove packages, such as new Linux kernel versions.
  • ufw: Tool for easier administration of firewall rules.
  • mariadb-server, mariadb-client, galera-3, rsync: Required for running the Galera Cluster.
sudo apt-get update && \
sudo apt-get upgrade -y && \
sudo apt-get dist-upgrade -y && \
sudo apt-get autoremove -y && \
sudo apt-get install mariadb-server mariadb-client galera-3 rsync -y && \
sudo apt-get install ufw -y

Optional packages

If you want to be able to tell on your switch/router which server has which hostname, you can install lldpd and snmpd to do remote monitoring of the hosts.

sudo apt-get install lldpd snmpd -y
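
Once lldpd is running, you can verify from the host side that it sees the switch (lldpcli ships with lldpd):

# Show LLDP neighbors discovered on this host
lldpcli show neighbors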

Configuring the Cluster nodes

Stop the MariaDB service on all hosts!

sudo service mysql stop

Open up the following ports between the hosts: 3306 (MySQL client connections), 4567 (Galera replication traffic, TCP and UDP), 4568 (incremental state transfer), and 4444 (state snapshot transfer).

sudo ufw allow proto tcp from 192.168.56.0/29 to 192.168.56.0/29 port 3306,4444,4567:4568
sudo ufw allow proto udp from 192.168.56.0/29 to 192.168.56.0/29 port 4567

Note: Substitute the subnet above (192.168.56.0/29) with the subnet your MariaDB Galera hosts are located in!
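
If ufw is not already active, allow SSH before enabling it so you do not lock yourself out, then verify the rules:

# Allow SSH before enabling the firewall
sudo ufw allow OpenSSH
# Enable ufw (if not already active) and list the rules
sudo ufw enable
sudo ufw status numbered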

On the FIRST host

All hosts must have the same configuration for the Galera cluster to work.

MariaDB reads its configuration from the /etc/mysql/ directory. Additional configuration files ending in .cnf can be placed in /etc/mysql/conf.d/; they are loaded in addition to the main MariaDB configuration files.

sudo nano /etc/mysql/conf.d/galera.cnf
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
# Name of the cluster. MUST be identical on all hosts.
wsrep_cluster_name="random_cluster_name"
# wsrep_cluster_address: both IP and DNS names
# of the cluster hosts can be used.
wsrep_cluster_address="gcomm://node1,node2,node3"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
# The local host's IP address.
wsrep_node_address="192.168.56.[2|3|4]"
# The local host's hostname.
wsrep_node_name="node[1|2|3]"

Additional hosts

Do the same as above, but remember to edit wsrep_node_address and wsrep_node_name!
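
A minimal way to distribute the file, assuming root SSH access between the nodes and the node names used above (adjust to your environment):

# Copy the config from the first host to the others...
scp /etc/mysql/conf.d/galera.cnf node2:/etc/mysql/conf.d/galera.cnf
scp /etc/mysql/conf.d/galera.cnf node3:/etc/mysql/conf.d/galera.cnf
# ...then edit wsrep_node_address and wsrep_node_name on each host.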

Setting up Galera

On the FIRST host do:

sudo galera_new_cluster

This HAS TO BE DONE so that when the MariaDB server is started on the additional hosts, they have an existing, already configured and running cluster node to connect to.

You can verify the number of cluster members by running

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

each time you start up a new cluster node.

Output
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+

Next

Bring up host no. 2 and verify the number of cluster members.
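
Start the MariaDB service on host no. 2; it will connect to the node brought up with galera_new_cluster and sync:

sudo service mysql start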

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
Output
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+

Next

Bring up host no. 3 and verify the number of cluster members.
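
As before, start the MariaDB service on host no. 3 first:

sudo service mysql start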

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
Output
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

Debian maintenance user

If your system uses the Debian maintenance user (see /etc/mysql/debian.cnf), you will need to make sure all hosts in the cluster are configured with the same credentials, as the credentials from the first cluster host will be synced to the additional hosts joining the Galera cluster.

[client]
host     = localhost
user     = debian-sys-maint
password = 03P8rdlknkXr1upf
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = debian-sys-maint
password = 03P8rdlknkXr1upf
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr
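
One way to keep them identical is to copy the first node's file to the others, assuming root SSH access between the nodes (a sketch; adjust to your setup):

# Overwrite the maintenance-user config on the other nodes with node1's copy
scp /etc/mysql/debian.cnf node2:/etc/mysql/debian.cnf
scp /etc/mysql/debian.cnf node3:/etc/mysql/debian.cnf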

Verifying replication works

First node

Create a test database and insert some data.

mysql -u root -p -e 'CREATE DATABASE playground;
CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

Second node

mysql -u root -p -e 'SELECT * FROM playground.equipment;'
Output
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+

Insert some more data.

mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

Third node

Verify the data created on node2 exists in the database on node3.

mysql -u root -p -e 'SELECT * FROM playground.equipment;'
Output
+----+-------+-------+--------+
| id | type  | quant | color  |
+----+-------+-------+--------+
|  1 | slide |     2 | blue   |
|  2 | swing |    10 | yellow |
+----+-------+-------+--------+

Add an additional row to the database.

mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

First node

Verify the data created on node3 exists on node1.

mysql -u root -p -e 'SELECT * FROM playground.equipment;'
Output
+----+--------+-------+--------+
| id | type   | quant | color  |
+----+--------+-------+--------+
|  1 | slide  |     2 | blue   |
|  2 | swing  |    10 | yellow |
|  3 | seesaw |     3 | green  |
+----+--------+-------+--------+

Conclusion

If all is well, you should now have a working three-host MariaDB Galera Cluster.

Notes to remember

  1. Traffic between the cluster hosts is not encrypted, so either put them in a private subnet or enable encryption for cluster member traffic (see the sketch after this list).
  2. There are other state snapshot transfer (SST) methods available apart from rsync, e.g. xtrabackup. Always look at your options.
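
For point 1, a minimal sketch of enabling encryption for Galera replication traffic via the provider options, assuming you have already created certificates at the placeholder paths below (the same certificates on all nodes):

# In /etc/mysql/conf.d/galera.cnf on every node (placeholder cert paths):
wsrep_provider_options="socket.ssl_key=/etc/mysql/ssl/server-key.pem;socket.ssl_cert=/etc/mysql/ssl/server-cert.pem;socket.ssl_ca=/etc/mysql/ssl/ca-cert.pem"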