MariaDB MaxScale — High Availability – Part 1

Part 1 — MaxScale Cooperative Monitoring

This two-part series breaks the process down into easy, logical steps.


MaxScale High Availability

MaxScale is an excellent tool for ensuring high availability in your database layer. However, it is crucial that your MaxScale server does not itself become a Single Point of Failure (SPOF). To prevent this, it is recommended to run two or more MaxScale servers and to configure your application, via a supported MariaDB connector, to fail over between them, removing the need for a proxy or any other traffic distribution layer in front of MaxScale. This provides added protection and keeps your database highly available even if one of the MaxScale servers goes down.

MaxScale Cooperative Monitoring

For this to work, we will be using the MaxScale Cooperative Monitoring functionality. Faisal Saeed has written a great blog post here explaining how it works. Cooperative Monitoring allows multiple MaxScale servers to manage the same MariaDB backend servers without conflict. The MaxScale servers can be in different data centers, as long as they can communicate with each other. If you enable Cooperative Monitoring, you must do so on all MaxScale servers within the cluster.

If you do not have a MaxScale installation with a set of backend database servers configured and working, then please start here. This blog will be adding a second MaxScale server to this previously configured environment.

Requirements

To complete this blog, you will need a server with network connectivity and external internet access. Make sure the relevant firewall ports are open. For testing purposes, high-specification servers are not required; you can use local virtual machines instead. In this guide, we will be using DigitalOcean Droplets with 2 vCPUs and 4GB RAM. All commands will be executed as the root user on the servers, which run the Rocky Linux 9 operating system. However, you can also use other flavours of Linux, such as CentOS.

MaxScale Installation

Follow the MaxScale installation instructions in Part 1 of my Replica Rebuild blog.

MaxScale Configuration

We already have one MaxScale server configured, managing a backend topology of three asynchronous replication servers.

To join the additional MaxScale server to the cluster, we first need to configure SSH keys so that the MaxScale server can manage the underlying database servers:

Bash
mkdir -p /etc/maxscale/.ssh
ssh-keygen -N '' -t rsa -b 4096 -f /etc/maxscale/.ssh/id_rsa
chown -R maxscale:maxscale /etc/maxscale/.ssh
chmod 700 /etc/maxscale/.ssh
chmod 600 /etc/maxscale/.ssh/id_rsa.pub
chmod 600 /etc/maxscale/.ssh/id_rsa

Also on the MaxScale server, make sure the known_hosts file exists:

Bash
mkdir -p /home/maxscale/.ssh
touch /home/maxscale/.ssh/known_hosts
chown maxscale /home/maxscale/.ssh/known_hosts
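
The monitor configuration shown later in this post disables host key checking (ssh_check_host_key=false), but if you prefer to verify host keys you can pre-populate this file with ssh-keyscan. This is only a sketch; the 10.106.0.2 address comes from the example setup used in this series, so repeat the command for each of your database servers:

Bash
# Hash and append the database server's host key to the maxscale user's known_hosts
ssh-keyscan -H 10.106.0.2 >> /home/maxscale/.ssh/known_hosts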

Now, on the MaxScale server, display the contents of the public key we just generated:

Bash
cat /etc/maxscale/.ssh/id_rsa.pub

You need to very carefully paste this output from the MaxScale server into the authorized_keys file of the SSH user (maxscaleUser in this example) on each database server, adding it on a new line if the file already has content.

Bash
vi /home/maxscaleUser/.ssh/authorized_keys
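
Before moving on, it is worth confirming that the maxscale user can actually reach a database server over SSH with this key. The command below is only a sketch: maxscaleUser and the 10.106.0.2 address come from the example setup in this series, so substitute your own SSH user and host. If the key exchange works, the command prints the database server's hostname without asking for a password.

Bash
# Run on the new MaxScale server as root; executes ssh as the maxscale user
sudo -u maxscale ssh -i /etc/maxscale/.ssh/id_rsa \
  -o UserKnownHostsFile=/home/maxscale/.ssh/known_hosts \
  -o StrictHostKeyChecking=accept-new \
  maxscaleUser@10.106.0.2 hostname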

Once this file is saved on each database server, we can move on to the MaxScale configuration itself, but there is one check to make first.

Before continuing

At this point, it is a good idea to make sure that your original MaxScale server is reporting the Master as the same server that is configured in its maxscale.cnf file.

My configuration shows the IP address of Server 1:

Bash
grep -i address /etc/maxscale.cnf

I am going to use the switchover command to move my Master server back to the originally configured server. The arguments are the monitor name, the server to promote, and the current Master:

Bash
maxctrl call command mariadbmon switchover Server-Monitor server1 server3

We can check that the Master server is now back in its originally configured location.
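
One way to do this is with maxctrl; the output lists each backend server along with its current state, and server1 should now show Master, Running:

Bash
maxctrl list servers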

If we do not do this, we will get an error on the new MaxScale server, because the server configured as the Master will be in a read-only state.

Continuing on MaxScale Server 2…

Once you are sure your MaxScale Server 1 is configured correctly, we can configure MaxScale on Server 2. We need to create a /etc/maxscale.cnf file; there will already be one in place from the installation, and I like to move this out of the way first:

Bash
mv /etc/maxscale.cnf /etc/maxscale.cnf.OLD

Using your favourite editor, create the file:

Bash
vi /etc/maxscale.cnf

and insert some very basic MaxScale configuration. The config_sync settings here must be identical to those on the first MaxScale server:

INI
[maxscale]
threads=auto
config_sync_cluster="Server-Monitor"
config_sync_user=config_sync_user
config_sync_password=aBcd123_
admin_secure_gui=false
admin_host=0.0.0.0

# Server definitions
[server1]
type=server
address=10.106.0.2
port=3306
protocol=MariaDBBackend

# Monitor for the servers
[Server-Monitor]
type=monitor
module=mariadbmon
servers=server1
user=monitor_user
password=aBcd123_
replication_user=replication_user
replication_password=aBcd123_
monitor_interval=2000ms
ssh_user=maxscaleUser
ssh_keyfile=/etc/maxscale/.ssh/id_rsa
ssh_check_host_key=false
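
One thing worth double-checking at this stage: cooperative monitoring itself is controlled by the monitor's cooperative_monitoring_locks parameter. If your first MaxScale server has this set (as described in Faisal's post), the same value must be used on every MaxScale server in the cluster. The snippet below is only an illustration; majority_of_all is one of the supported values, not a recommendation for your environment:

INI
# In the [Server-Monitor] section, identical on every MaxScale server in the cluster
cooperative_monitoring_locks=majority_of_all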

At this point, you can restart the MaxScale service, and if everything has gone to plan, it will restart without an error:

Bash
systemctl restart maxscale

MaxScale should start with no errors.
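
If you want to confirm this beyond the restart itself, the service status and the MaxScale log are the first places to look, and maxctrl show maxscale reports the overall MaxScale state (on recent versions this includes the configuration sync status):

Bash
systemctl status maxscale
tail -n 20 /var/log/maxscale/maxscale.log
maxctrl show maxscale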

Move on to Part 2 of this mini-series to test the installation.

Part 1 | Part 2

Kester Riley

Kester Riley is a Senior Solutions Engineer who leverages his website to establish his brand and build strong business relationships. Through his blog posts, Kester shares his expertise as a consultant, mentor, trainer, and presenter, providing innovative ideas and code examples to empower ambitious professionals.

