
MySQL master-slave replication with docker


This post “MySQL master-slave replication with docker” was updated by: Syed Umar Bukhari on August 26, 2021

MySQL replication is a process that automatically copies a database from one MySQL instance to another. In this post, we will look at master-slave replication, the most popular way to replicate SQL databases, specifically MySQL; a single master server can feed any number of slave servers. We will use docker and docker-compose to build the replication setup. The host machine for this experiment runs Ubuntu 18.04, but since docker abstracts away most of the host OS, you can follow along on Windows, CentOS, macOS, and so on. Before beginning the process, it is assumed you have already installed docker and docker-compose on your machine. If you haven't, please do so before proceeding.
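A quick way to confirm both tools are installed (the exact version strings will differ on your machine):

docker --version
docker-compose --version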

On a side note, let's look at a few reasons why MySQL is extremely popular as a database system. It is open source and has strong support from both the community and big tech companies. Using SQL commands, MySQL can create, query, and manage databases efficiently, which makes it a convenient choice for database designers building websites and applications. In addition, one of its most common uses is powering WordPress (WP) websites.

By following this post, you should be able to make your own database (DB) system much more resilient. Finally, if you want one takeaway on why MySQL is important, know this: it has a gentle learning curve, it's free, and it can scale to huge systems with billions of rows.

Create docker compose file

The docker compose file lets us define a multi-container environment, including the variables each container needs, in a single YML file. Let's create a docker compose file for replication as shown below:

version: '3'
services:
  mysql-master:
    image: percona:ps-8.0
    container_name: mysql-master
    restart: unless-stopped
    env_file: ./master/.env.master
    cap_add:
      - all
    volumes:
      - ./master/data:/var/lib/mysql
      - ./master/my.cnf:/etc/my.cnf
    environment:
      - TZ=${TZ}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      default:
        aliases:
          - mysql

  mysql-slave:
    image: percona:ps-8.0
    container_name: mysql-slave
    restart: unless-stopped
    env_file: ./slave/.env.slave
    cap_add:
      - all
    volumes:
      - ./slave/data:/var/lib/mysql
      - ./slave/my.cnf:/etc/my.cnf
    environment:
      - TZ=${TZ}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      default:
        aliases:
          - mysql

In the docker compose file above, each container gets its own environment: the MySQL master container loads an env file named .env.master, and the slave loads .env.slave. Note that docker-compose substitutes ${VAR} references from the shell or a top-level .env file, while env_file injects variables straight into the container. To keep the configuration for each container separate, let's create two folders, master and slave, using the "mkdir" command.

mkdir master && mkdir slave

We have created new folders to separate the master and slave files. Next, we will create two new files, .env.master and .env.slave, to use later.

touch master/.env.master && touch slave/.env.slave

Configuring the ENV file for MySQL master-slave replication with docker

The env file contains variables that are crucial for the container's creation in docker compose, and it gives us a convenient place to store configuration values. We will create one env file each for the master and the slave. Here we edit the files with the "vi" command; you can use any text editor on Linux or Windows, such as Visual Studio Code or Atom. The password values below are placeholders; choose your own strong passwords.

vi master/.env.master
### WORKSPACE #############################################
TZ=UTC

#MYSQL_DATABASE=master
MYSQL_USER=master
MYSQL_PASSWORD=<set-a-strong-password>
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=Mastermaster123

Create the .env.slave file for the slave server.

vi slave/.env.slave
### WORKSPACE #############################################
TZ=UTC

#MYSQL_DATABASE=slave
MYSQL_USER=slave
MYSQL_PASSWORD=<set-a-strong-password>
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=slaveslave123

Below is an explanation of the variables contained in the env files and the role each one performs:

TZ is the time zone that will apply inside the container.
MYSQL_DATABASE is the name of a database that will be created automatically.
MYSQL_USER is the user account used to connect to the database we create.
MYSQL_PASSWORD is the password of that user; make it a strong one.
MYSQL_PORT is the port the MySQL server listens on.
MYSQL_ROOT_PASSWORD is the password of the root user, which has access to all MySQL databases; to be safe, use a mix of letters, numbers, and symbols.
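As a quick sanity check, once the containers are built later in this post, you can confirm these variables landed inside a container with a one-liner like this (run from the directory holding the compose file):

docker-compose exec mysql-master env | grep MYSQL_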

Create a my.cnf file for the master database (this is the ./master/my.cnf file mounted in the compose file above).

[mysqladmin]
user=master
[mysqld]
skip_name_resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
port=3306
tmpdir=/opt/bitnami/mysql/tmp
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
max_allowed_packet=16M
bind_address=0.0.0.0
log_error=/opt/bitnami/mysql/logs/mysqld.log
character_set_server=utf8
collation_server=utf8_general_ci
plugin_dir=/opt/bitnami/mysql/lib/plugin
server-id=1
binlog_format=ROW
log-bin

[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default_character_set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin

[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
!include /opt/bitnami/mysql/conf/bitnami/my_custom.cnf

Make a my.cnf file for the slave server as well (./slave/my.cnf). It is almost identical to the master's; the important difference is the server-id, which must be unique for each server.

[mysqladmin]
user=slave

[mysqld]
skip_name_resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
port=3306
tmpdir=/opt/bitnami/mysql/tmp
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
max_allowed_packet=16M
bind_address=0.0.0.0
log_error=/opt/bitnami/mysql/logs/mysqld.log
character_set_server=utf8
collation_server=utf8_general_ci
plugin_dir=/opt/bitnami/mysql/lib/plugin
server-id=2
binlog_format=ROW
log-bin

[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default_character_set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin

[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
!include /opt/bitnami/mysql/conf/bitnami/my_custom.cnf

With everything in place, let's review the directory layout and understand the usage of the master and slave folders and their files.
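Based on the files created above, the project layout should look roughly like this (the data directories appear after the containers first start):

.
├── docker-compose.yml
├── master
│   ├── .env.master
│   ├── data
│   └── my.cnf
└── slave
    ├── .env.slave
    ├── data
    └── my.cnf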


The data folder persists the MySQL data files from inside the container onto the host.
The my.cnf file carries the MySQL server configuration for each instance.

Building a container for MySQL master-slave replication with Docker

Let's build the MySQL master and slave containers from the docker compose configuration. Make sure everything is in place, then build the containers with the command "docker-compose up -d".

docker-compose up -d

Wait for the container build process to finish successfully. After that, check the running containers with the command "docker-compose ps".

docker-compose ps

Replication of MySQL master-slave with Docker

Now that the containers are running properly, let's begin the replication process.

Enter the master container with "docker-compose exec"; from inside it, we will configure replication using MySQL commands.

docker-compose exec mysql-master bash

Now let's log in to MySQL as the root user we configured above (use the MYSQL_ROOT_PASSWORD from .env.master).

mysql -u root -p

Create a dedicated MySQL user for replication.

mysql> CREATE USER 'replication'@'%' IDENTIFIED WITH mysql_native_password BY 'Slaverepl123';

Grant the replication user the REPLICATION SLAVE privilege so slaves can connect with it.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';

Now, let's verify that the grant was applied successfully.

mysql> show grants for 'replication'@'%';

The output should list the REPLICATION SLAVE privilege for the user.

After that, we check the binary log status of the MySQL master with the following command; note the File and Position values in the output, as the slave will need them:

mysql> SHOW MASTER STATUS\G
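The output looks roughly like this; your file name and position will differ, so note down your own values:

*************************** 1. row ***************************
             File: 87e8982d00d1-bin.000004
         Position: 349
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)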

The configuration on the master is now complete, so we continue with the configuration on the slave. Log in to the slave container using the "docker-compose exec" command.

docker-compose exec mysql-slave bash

After that, log in to the MySQL slave server as root to run the following MySQL commands.

mysql -u root -p

Execute this SQL statement to make the MySQL slave join the master. Use your own File and Position values from SHOW MASTER STATUS; the values below are from our run and will differ from yours.

CHANGE MASTER TO
MASTER_HOST='mysql-master',
MASTER_USER='replication',
MASTER_PASSWORD='Slaverepl123',
MASTER_LOG_FILE='87e8982d00d1-bin.000004',
MASTER_LOG_POS=349;

The command to join the master from the slave has executed successfully.

Let's start the slave threads. (On MySQL and Percona Server 8.0.22 and later, START REPLICA is the preferred spelling; START SLAVE still works as an alias.)

START SLAVE;

After completing all the steps, recheck your work to ensure nothing was missed. Then check the status of replication on the slave server; Slave_IO_Running and Slave_SQL_Running should both say Yes.

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: mysql-master
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: 87e8982d00d1-bin.000005
          Read_Master_Log_Pos: 156
               Relay_Log_File: ba7af6f52d85-relay-bin.000002
                Relay_Log_Pos: 331
        Relay_Master_Log_File: 87e8982d00d1-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 156
              Relay_Log_Space: 547
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 5166800b-f068-11eb-abf5-0242ac150002
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.01 sec)

Make sure all databases are running smoothly by listing them on the MySQL master.

mysql> show databases;

If the master database is working fine, run the same check on the slave.

mysql> show databases;

After that, let's create a database on the master to test whether replication is working properly.

mysql> create database replicate_db;

Let's check on the slave whether the database created on the master was replicated automatically.

mysql> show databases;
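To test replication beyond database creation, you can also replicate a table and a row; a quick sketch, where the table name t1 is just an example:

mysql> USE replicate_db;
mysql> CREATE TABLE t1 (id INT PRIMARY KEY, note VARCHAR(50));
mysql> INSERT INTO t1 VALUES (1, 'replicated row');

Running SELECT * FROM replicate_db.t1; on the slave should then return the inserted row.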

MySQL master-slave replication on the Docker machine is now successfully set up.

Conclusion

In this post, we finished configuring MySQL master-slave replication. A master-slave setup can be used in production and for building applications, though we must be careful when setting up replication on production systems. If you need additional slave servers, each one has to be configured manually; this lets you scale reads horizontally. Moreover, you can back up MySQL into file storage; Sesamedisk provides storage for business and personal needs, with data encryption and point-in-time recovery.

Additionally, you can find the docker config on GitHub.

If you like this article, please add our blog to your bookmarks; we have lots of tech articles for you to study and understand. Lastly, I would encourage you to read this post about WebRTC and Jitsi.




Fixing jitsi recording unavailable on docker


This post “Fixing jitsi recording unavailable on docker” was updated by: Syed Umar Bukhari on October 5, 2021

Many organizations and engineers have been creating video conferencing apps during the COVID-19 pandemic for online learning, webinars, and talking to loved ones; this is where Jitsi comes in. Jitsi is a free video conferencing alternative to Zoom, and it is also open source!

Why Should You Use A Video Conferencing App?

Conferencing apps have gained a major role lately; they have become something one cannot do without, especially during this pandemic. With these apps, you can meet, share tutorials, conduct business meetings, record your work plans, and more.

Also, the recording feature will always be important in building a video conferencing tool.

How To Enable Recording On Jitsi?

As a first step, let's check the configuration on the server so that Jitsi can create recordings during a conference.

Firstly, enable Jibri, the component that handles recording on Jitsi. If you don't know how to install Jitsi on Docker yet, please read our article on Jitsi with Docker.

If recording is unavailable on Jitsi, it is worth searching GitHub for similar Jitsi recording failures. To observe what happens to the Jibri container, check the container logs on Jibri with:

docker-compose logs container_name

docker-compose -f docker-compose.yml -f jibri.yml logs jibri

As you can see, Jibri has a problem with the container. After this, check the container processes with a command that is useful for viewing all running containers:

docker-compose ps

docker-compose -f docker-compose.yml -f jibri.yml ps

As you can see, the Jibri container is not running properly, since the container keeps restarting. If you see similar results, keep reading to understand and fix this error.

Fixing The Jibri Container

The first step in fixing the Jibri container is checking the status of the alsa-loopback module on the server; Jibri uses this module to capture audio for recordings. Check the alsa-loopback module with this command:

arecord

arecord -L

Checking The Kernel

Since the alsa-loopback module isn't working properly, let's check the kernel. The goal is to find out whether a generic kernel is installed.

uname -r

It looks like the generic kernel is already installed on the server. So instead, try to enable the alsa-loopback module.

Activate it with this:

sudo modprobe snd-aloop
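To confirm the kernel module actually loaded, you can also grep the loaded module list; this is an optional check alongside arecord:

lsmod | grep snd_aloop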

After enabling the alsa-loopback module with modprobe, let’s check the server to see if the alsa-loopback module is active. To check the status, use this command:

arecord -L

This lists the capture devices, including the loopback driver, which is now active on the server.

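When the module is active, the listing should include Loopback capture devices, roughly like this (exact entries vary by system):

null
    Discard all samples (playback) or generate zero samples (capture)
hw:CARD=Loopback,DEV=0
    Loopback, Loopback PCM
    Direct hardware device without any conversions
hw:CARD=Loopback,DEV=1
    Loopback, Loopback PCM
    Direct hardware device without any conversions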

You can also make the module load automatically without running modprobe again, so it stays active after the server reboots. Adding snd-aloop to the /etc/modules file with the "echo and tee" commands makes this easy:

echo snd-aloop | sudo tee -a /etc/modules

Note: This issue with the alsa-loopback module is common, as some cloud providers do not provide generic kernels capable of meeting Jitsi's requirements. In our case, the generic kernel was already installed, but the alsa-loopback module was not active.

Configuration of Jibri

After that, review the config in jibri.yml, the file used to create the Jibri container.

version: '3'

services:
    jibri:
        image: jitsi/jibri:latest
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri:/config:Z
            - /dev/shm:/dev/shm
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - PUBLIC_URL
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ
        depends_on:
            - jicofo
        networks:
            meet.busanid.dev:

The file looks all set; the only change needed is the Docker network name so it matches your setup. The Docker network provides container-to-container connectivity, here named after your domain.
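Note that a network referenced by a service must also be declared once at the top level of the merged compose config (the base docker-compose.yml in the stock Docker Jitsi setup). A minimal sketch, assuming the network name used above:

networks:
    meet.busanid.dev: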

Checking the ENV File

The env file stores variable declarations that will eventually be loaded into the containers.

These files are very important for configuring the environment in Jitsi, and you can make as many configurations in them as you see fit. Keep in mind that these settings are read by docker-compose and applied to the containers.

For example, there are many variables to configure, such as public url, port, ssl, jvb, Jibri, etc.

For this part of the tutorial, configure the env file so that recordings work in Docker. Edit the env configuration of Docker Jitsi and enable the REST API on JVB:

# A comma separated list of APIs to enable when the JVB is started [default: none]
# See https://github.com/jitsi/jitsi-videobridge/blob/master/doc/rest.md for more information
JVB_ENABLE_APIS=rest,colibri

This API helps the recording run on Docker Jitsi. You also need to enable recordings in Jitsi itself; to do so, uncomment the ENABLE_RECORDING variable:

# Enable recording
ENABLE_RECORDING=1

Setting ENABLE_RECORDING=1 enables the recording feature on the Jitsi server.

This will bring up the recording menu when the moderator starts the meeting. Don’t forget to edit the XMPP domain name if you are using a different docker network than the default!

# XMPP domain for the jibri recorder
XMPP_RECORDER_DOMAIN=recorder.meet.busanid.dev

# XMPP recorder user for Jibri client connections
JIBRI_RECORDER_USER=recorder

# Directory for recordings inside Jibri container
JIBRI_RECORDING_DIR=/config/recordings

# The finalizing script. Will run after recording is complete
JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh

# XMPP user for Jibri client connections
JIBRI_XMPP_USER=jibri

# MUC name for the Jibri pool
JIBRI_BREWERY_MUC=jibribrewery

# MUC connection timeout
JIBRI_PENDING_TIMEOUT=90

# When jibri gets a request to start a service for a room, the room
# jid will look like: roomname@muc_domain
# We'll build the url for the call by transforming that into:
# https://xmpp_domain/subdomain/roomName
# So if there are any prefixes in the jid (like jitsi meet, which
# has its participants join a muc at conference.xmpp_domain) then
# list that prefix here so it can be stripped out to generate
# the call url correctly
JIBRI_STRIP_DOMAIN_JID=muc

That looks good, right? Congratulations! You have now successfully configured the env file. What does that mean? The recordings can be used now!

Next, it's time to build the Docker Jitsi containers using this command:

docker compose up -d

docker-compose -f docker-compose.yml -f jibri.yml up -d

Wait until the process of building the Jitsi container is complete; when finished, re-check all the services running on the container with this command:

docker compose ps

docker-compose -f docker-compose.yml -f jibri.yml ps

Since the Jibri container is running fine, let's look at the logs on the container with the following command:

docker compose logs

docker-compose -f docker-compose.yml -f jibri.yml logs jibri

Everything looks ready to proceed to the final step: Jitsi recording.

Jitsi Recording of a Conference

You should try conferencing using the Jitsi server to ensure everything is in working order.


Recording on Jitsi is running well on our end; we’re assuming it’s the same for you. If you face any errors, please drop them in the comment section below!

Accessing the Recording Files

To access the video files from a recording, go to:

~/.jitsi-meet-cfg/jibri/recordings
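For example, you can list that directory from the host; each recording session typically ends up in its own subdirectory (names will differ):

ls ~/.jitsi-meet-cfg/jibri/recordings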

Alternatively, you can customize the storage location for videos in the Jibri config (the JIBRI_RECORDING_DIR variable shown earlier).


If you followed the post, your recordings should be working well on the server. Otherwise, let us know below what errors you are facing so we can help you fix them. At this point, moderators can start recordings when a conference begins.

Hope you liked this article and that it helped you fix the Jibri Docker "Recording Unavailable" error. Hit the like button if you learned something new and re-blog the post if your friends might find it useful.

Don't forget to read more of our articles, such as How To Run Jitsi With Docker?, and other tech-savvy pieces on topics like MySQL databases and Python integration of CRUD operations. Stay safe, stay healthy, and keep coming back to our blog for more amazing content in the future.
