
MySQL master-slave replication with Docker


MySQL replication is a process that automatically copies databases from one MySQL instance to another. In this post we will set up master-to-slave replication, the most popular way to replicate MySQL; a single master can also replicate to multiple slave servers. We will use Docker Compose to build the replication setup, so the stack for this post is MySQL master-slave replication with Docker and docker-compose. The host OS for this lab is Ubuntu 18.04 with Docker and Docker Compose installed, but since Docker abstracts away most of the host OS, you can follow along on Windows, CentOS, or macOS. I assume you have already installed Docker and Docker Compose on your machine.

As a side note, MySQL is a very popular database system: it is open source and has strong support from both the community and big companies. Using SQL commands, MySQL can create and query the databases behind websites and applications; one of the most common uses is powering WordPress (WP) sites. By following this post you should be able to make your database system much more resilient. Finally, if you want a summary of why MySQL matters: it has a low learning curve, it is free, and it can scale to very big systems with billions of rows.

Creating the Docker Compose file

A Docker Compose file makes it easy to set variables on a container and connect containers together in one environment. Here we use Docker Compose to run a multi-container environment based on a definition in a YAML file. Let's create the Compose file for replication as shown below.

version: '3'
services:
  mysql-master:
    image: percona:ps-8.0
    container_name: mysql-master
    restart: unless-stopped
    env_file: ./master/.env.master
    cap_add:
      - all
    volumes:
      - ./master/data:/var/lib/mysql
      - ./master/my.cnf:/etc/my.cnf
    environment:
      - TZ=${TZ}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      default:
        aliases:
          - mysql

  mysql-slave:
    image: percona:ps-8.0
    container_name: mysql-slave
    restart: unless-stopped
    env_file: ./slave/.env.slave
    cap_add:
      - all
    volumes:
      - ./slave/data:/var/lib/mysql
      - ./slave/my.cnf:/etc/my.cnf
    environment:
      - TZ=${TZ}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      default:
        aliases:
          - mysql

In the Compose file above each service loads its own environment; the mysql-master container uses an env file named .env.master. To keep the configurations separate, let's create a master and a slave folder, one per container, using the “mkdir” command.

mkdir master && mkdir slave

We have created folders to separate the master and the slave. Next we create the two files, .env.master and .env.slave, that the Compose file references as env files.

touch master/.env.master && touch slave/.env.slave

Configuring the env files for MySQL master-slave replication with Docker

The env files contain variables that are essential when Docker Compose creates the containers; an env file gives us a convenient place to store this information. We will create one env file for the master and one for the slave. Here we edit the files with “vi”, but you can use any other text editor on Linux or the operating system you are using.

vi master/.env.master
### WORKSPACE #############################################
TZ=UTC

#MYSQL_DATABASE=master
MYSQL_USER=master
[email protected]
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=Mastermaster123

Create .env.slave for the slave as well.

vi slave/.env.slave
### WORKSPACE #############################################
TZ=UTC

#MYSQL_DATABASE=slave
MYSQL_USER=slave
[email protected]
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=slaveslave123

Below is an explanation of the variables contained in the env files:
TZ is the timezone that will be applied to the container.
MYSQL_DATABASE is the name of a database to create automatically (commented out here).
MYSQL_USER is the user created for authenticating to the database.
MYSQL_PASSWORD is the password of that user; create a strong one.
MYSQL_PORT is the port the MySQL service runs on.
MYSQL_ROOT_PASSWORD is the password for the root user, which can access all MySQL databases; use a combination of letters, numbers, and symbols to keep it safe.
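It is easy to typo a variable name in an env file, so a quick sanity check before starting the containers can save a debugging round. The following is a minimal sketch; the check_required_vars helper and the sample file path are our own, not part of MySQL or Docker.

```shell
#!/bin/sh
# Minimal sanity check: make sure the required variables are defined
# in an env file before starting the containers.
check_required_vars() {
    env_file="$1"
    missing=0
    for var in TZ MYSQL_USER MYSQL_PASSWORD MYSQL_PORT MYSQL_ROOT_PASSWORD; do
        # match VAR=value lines; commented-out entries do not count
        grep -q "^${var}=" "$env_file" || { echo "missing: $var"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "all required variables present in $env_file"
}

# demo against a sample file; point it at master/.env.master in a real setup
cat > /tmp/.env.sample <<'EOF'
TZ=UTC
MYSQL_USER=master
MYSQL_PASSWORD=changeme
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=changeme-too
EOF
check_required_vars /tmp/.env.sample
```

Run it against both env files before bringing the stack up; a missing variable here would otherwise surface as a confusing container startup failure.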

Create the my.cnf file for the master (master/my.cnf). Note that the paths below follow a Bitnami-style layout; adjust basedir, socket, and log paths to match the image you are using if needed.

[mysqladmin]
user=master

[mysqld]
skip_name_resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
port=3306
tmpdir=/opt/bitnami/mysql/tmp
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
max_allowed_packet=16M
bind_address=0.0.0.0
log_error=/opt/bitnami/mysql/logs/mysqld.log
character_set_server=utf8
collation_server=utf8_general_ci
plugin_dir=/opt/bitnami/mysql/lib/plugin
server-id=1
binlog_format=ROW
log-bin

[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default_character_set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin

[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
!include /opt/bitnami/mysql/conf/bitnami/my_custom.cnf

And create my.cnf for the slave (slave/my.cnf) too; note the different server-id.

[mysqladmin]
user=master

[mysqld]
skip_name_resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
port=3306
tmpdir=/opt/bitnami/mysql/tmp
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
max_allowed_packet=16M
bind_address=0.0.0.0
log_error=/opt/bitnami/mysql/logs/mysqld.log
character_set_server=utf8
collation_server=utf8_general_ci
plugin_dir=/opt/bitnami/mysql/lib/plugin
server-id=2
binlog_format=ROW
log-bin

[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default_character_set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin

[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid_file=/opt/bitnami/mysql/tmp/mysqld.pid
!include /opt/bitnami/mysql/conf/bitnami/my_custom.cnf
The project layout now looks like this:

.
├── docker-compose.yml
├── master
│   ├── .env.master
│   ├── data
│   └── my.cnf
└── slave
    ├── .env.slave
    ├── data
    └── my.cnf

The data folder stores the container's database files on the host, and the my.cnf file holds the MySQL configuration for each node.
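Most of the my.cnf above is general server tuning; only a handful of lines actually drive replication. As a distilled view, using the same values as in this post:

```ini
[mysqld]
# must be unique per server: 1 on the master, 2 on the slave
server-id=1
# write data changes to the binary log so slaves can replay them
log-bin
# row-based format replicates the changed rows themselves
binlog_format=ROW
```

If replication misbehaves, these three settings are the first things to double-check on both nodes.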

Building the containers for MySQL master-slave replication with Docker

With the Compose configuration ready, let's build the MySQL master and slave containers. Make sure you have everything in place, then start the containers with the command “docker-compose up -d”.

docker-compose up -d

The containers are now being built; wait for them to be created successfully. Once they are up, check the processes with the command “docker-compose ps”.

docker-compose ps

Configuring MySQL master-slave replication with Docker

Both containers are now running properly, so let's start the master-slave replication configuration.

We will enter the master container to configure replication with MySQL commands. Enter the container with “docker-compose exec container bash”.

docker-compose exec mysql-master bash

Let's log in as the root user on MySQL.

mysql -u root -p

Create a MySQL user for replication.

mysql> CREATE USER 'replication'@'%' IDENTIFIED WITH mysql_native_password BY 'Slaverepl123';

Grant the user the REPLICATION SLAVE privilege so it can be used for replication.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';

Let's verify the grants for the replication user on the master.

mysql> SHOW GRANTS FOR 'replication'@'%';

Now let's check the binary log status on the master with the command below; note the File and Position values, since we will need them on the slave.

mysql> SHOW MASTER STATUS\G

The configuration on the master is complete, so let's continue with the configuration on the slave. Log into the container using the “docker-compose exec” command.

docker-compose exec mysql-slave bash

Log into MySQL on the slave to run the replication commands.

mysql -u root -p

Execute this SQL statement to point the slave at the master. Replace the MASTER_LOG_FILE and MASTER_LOG_POS values with the ones shown by SHOW MASTER STATUS on your master.

CHANGE MASTER TO
MASTER_HOST='mysql-master',
MASTER_USER='replication',
MASTER_PASSWORD='Slaverepl123',
MASTER_LOG_FILE='87e8982d00d1-bin.000004',
MASTER_LOG_POS=349;
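Copying the file and position by hand is error-prone, so it is worth knowing they can be pulled out of the SHOW MASTER STATUS output with a little awk. A small sketch; the sample output below is illustrative, not taken from a real server.

```shell
#!/bin/sh
# Sketch: extract the binlog file and position from the whitespace-
# separated output of: mysql -u root -p -e 'SHOW MASTER STATUS'
# (header row plus one data row). The sample below is illustrative only.
sample_output='File Position Binlog_Do_DB Binlog_Ignore_DB Executed_Gtid_Set
87e8982d00d1-bin.000004 349'

# skip the header row, then read columns 1 and 2
log_file=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $1}')
log_pos=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $2}')
echo "MASTER_LOG_FILE='$log_file', MASTER_LOG_POS=$log_pos"
# prints: MASTER_LOG_FILE='87e8982d00d1-bin.000004', MASTER_LOG_POS=349
```

The two values can then be substituted straight into the CHANGE MASTER TO statement.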

Looks good; the CHANGE MASTER TO statement on the slave completed successfully.

Let’s start the slave on mysql.

START SLAVE;

When all the steps have been done (make sure none were missed), check the status of replication on the slave.

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: mysql-master
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: 87e8982d00d1-bin.000005
          Read_Master_Log_Pos: 156
               Relay_Log_File: ba7af6f52d85-relay-bin.000002
                Relay_Log_Pos: 331
        Relay_Master_Log_File: 87e8982d00d1-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 156
              Relay_Log_Space: 547
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: 5166800b-f068-11eb-abf5-0242ac150002
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.01 sec)

Both Slave_IO_Running and Slave_SQL_Running show Yes, so replication is healthy. To make sure all databases are replicating well, let's first check the databases on the master.


The master databases look fine; let's check the slave.


Next, let's create a database on the master to test that replication is working properly.

mysql> create database replicate_db;

Let's check on the slave whether the database created on the master was replicated automatically.

mysql> show databases;
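A quick way to confirm that master and slave agree is to diff their database lists. A sketch with sample lists; on a real setup each list would come from `mysql -h <host> -uroot -p -N -e 'SHOW DATABASES' | sort`.

```shell
#!/bin/sh
# Sketch: compare database lists from master and slave.
# Sample lists are used here for illustration.
printf 'mysql\nperformance_schema\nreplicate_db\nsys\n' > /tmp/dbs.master
printf 'mysql\nperformance_schema\nreplicate_db\nsys\n' > /tmp/dbs.slave

if diff -q /tmp/dbs.master /tmp/dbs.slave >/dev/null; then
    echo "database lists match"
else
    echo "lists differ:"
    diff /tmp/dbs.master /tmp/dbs.slave
fi
```

This kind of check is handy as a periodic smoke test after the initial setup.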

MySQL master-slave replication on Docker is now successfully set up. You can also find my Docker config on my GitHub.

Conclusion

In this post we finished configuring MySQL master-slave replication. A master-slave setup can be used in production when building applications, but we must be careful when configuring replication for production. Someday you may need to add more slave or MySQL nodes; in that case you have to repeat the slave configuration manually for each new node. This allows you to horizontally scale the readers. You can also back up MySQL to file storage; sesamedisk provides various storage options for business and personal needs, and we secure your data with encryption and point-in-time recovery. If you liked this article, please add our blog to your bookmarks; we have lots of tech articles for you to study. I would also encourage you to read my post about WebRTC Jitsi.



Fixing “recording unavailable” on Jitsi with Docker


Many of us built video conferencing applications during the pandemic, for online learning, webinars, and greeting friends. Many companies are competing to build video conferencing applications, and that is where Jitsi recording comes in. Conferencing has become a major need that cannot be missed; people meet, explain how to do things, record their work plans, or record their conversations when the conference starts.

The recording feature is very important in a video conference; making recording work during conferences is something that needs to be considered when building a video conference application.

How to enable recording on Jitsi?

A recording is an important digital artifact of an online meeting. With it, we can review what was discussed, for instance when some people could not attend the meeting, and it reminds us when a conference took place and what its theme was.

Enabling recording at meetings is worth considering, because what happens in a conference often has to be documented.

As a first step, we will check some configuration on the server so that Jitsi can record during a conference. We need to enable Jibri, the component that handles recording in Jitsi. If you don't know how to install Jitsi on Docker, you can read the article on Jitsi with Docker.

When recording is unavailable on Jitsi, you can check GitHub for reports of recording failures. The first thing to do is look at the logs of the Jibri container with “docker-compose logs container_name”.

docker-compose -f docker-compose.yml -f jibri.yml logs jibri

Jibri seems to have a problem; let's check the container processes.

docker-compose -f docker-compose.yml -f jibri.yml ps

The command “docker-compose -f docker-compose.yml -f jibri.yml ps” shows all running container processes. It looks like the Jibri container is not running properly; it keeps restarting. Let's check what the problem is on the server.

Our first step is to check the ALSA loopback module on the server; Jibri uses this module to capture audio so recordings work properly. Check the module with the “arecord” command.

arecord -L

Checking the kernel

That's right: the ALSA loopback module isn't working properly on the server. Let's check the kernel first with “uname -r”.

uname -r

It looks like the generic kernel is already installed on the server, so let's try to enable the ALSA loopback module with the “modprobe” command.

sudo modprobe snd-aloop

We have enabled the ALSA loopback module with modprobe. Let's verify it is active with the arecord command, which lists the loopback capture devices now available on the server.

arecord -L

We can make the ALSA loopback module load permanently, without running modprobe again after a server reboot, by adding snd-aloop to the /etc/modules file with the “echo” and “tee” commands.

echo snd-aloop | tee -a /etc/modules
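The plain echo | tee approach appends a new line every time it is run; a guarded version only adds the entry if it is not already present. A small sketch, using a scratch file so it can be run safely; on a real server the target would be /etc/modules (run with sudo).

```shell
#!/bin/sh
# Sketch: append snd-aloop to a modules file only if it is not already
# there. Uses a scratch file for the demo; on a real server the target
# would be /etc/modules.
MODULES_FILE=/tmp/modules.demo
printf 'loop\n' > "$MODULES_FILE"   # pretend the file already has an entry

add_module() {
    # -x matches the whole line, -F treats the pattern literally
    grep -qxF "$1" "$MODULES_FILE" || echo "$1" >> "$MODULES_FILE"
}

add_module snd-aloop
add_module snd-aloop    # second call is a no-op
cat "$MODULES_FILE"
```

This keeps /etc/modules clean if the provisioning step runs more than once.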

Next, let's check the configuration in jibri.yml, the file used to create the Jibri container.

version: '3'

services:
    jibri:
        image: jitsi/jibri:latest
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri:/config:Z
            - /dev/shm:/dev/shm
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - PUBLIC_URL
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ
        depends_on:
            - jicofo
        networks:
            meet.busanid.dev:

The jibri.yml file looks good. In it we changed the Docker network to use our own network; Docker networks let containers connect to each other, and we can use our own domain as the network name.

Checking the env file

The env file stores variables that are eventually loaded into the containers; it is where most Jitsi configuration lives. Docker Compose reads these settings and applies them to the containers. There are many variables to configure, such as the public URL, ports, SSL, JVB, Jibri, and more.

In this step we configure the env file so recording works in Docker. Looking at the env configuration in Docker Jitsi, the first thing we need is to enable the REST API on the JVB:

# A comma separated list of APIs to enable when the JVB is started [default: none]
# See https://github.com/jitsi/jitsi-videobridge/blob/master/doc/rest.md for more information
JVB_ENABLE_APIS=rest,colibri

This API helps recording run on Docker Jitsi. Next we need to enable the recording variable by removing the # in front of ENABLE_RECORDING=1.

# Enable recording
ENABLE_RECORDING=1

Setting ENABLE_RECORDING=1 enables the recording feature on the server; this brings up the recording menu when the moderator starts a meeting. Don't forget to edit the XMPP domain names below if you are using a Docker network other than the default.
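Uncommenting the variable can also be scripted, which is handy when provisioning several servers. A sketch against a scratch copy of the env file; the real file in a Docker Jitsi checkout is usually just .env.

```shell
#!/bin/sh
# Sketch: enable recording by removing the leading '#' from the
# ENABLE_RECORDING line. The demo runs against a scratch file.
ENV_FILE=/tmp/jitsi.env.demo
cat > "$ENV_FILE" <<'EOF'
# Enable recording
#ENABLE_RECORDING=1
EOF

# uncomment only the exact ENABLE_RECORDING=1 line
sed -i 's/^#ENABLE_RECORDING=1/ENABLE_RECORDING=1/' "$ENV_FILE"
grep '^ENABLE_RECORDING=' "$ENV_FILE"
# prints: ENABLE_RECORDING=1
```

The anchored pattern leaves the "# Enable recording" comment line untouched.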

# XMPP domain for the jibri recorder
XMPP_RECORDER_DOMAIN=recorder.meet.busanid.dev

# XMPP recorder user for Jibri client connections
JIBRI_RECORDER_USER=recorder

# Directory for recordings inside Jibri container
JIBRI_RECORDING_DIR=/config/recordings

# The finalizing script. Will run after recording is complete
JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh

# XMPP user for Jibri client connections
JIBRI_XMPP_USER=jibri

# MUC name for the Jibri pool
JIBRI_BREWERY_MUC=jibribrewery

# MUC connection timeout
JIBRI_PENDING_TIMEOUT=90

# When jibri gets a request to start a service for a room, the room
# jid will look like: roomName@muc.xmpp_domain
# We'll build the url for the call by transforming that into:
# https://xmpp_domain/subdomain/roomName
# So if there are any prefixes in the jid (like jitsi meet, which
# has its participants join a muc at conference.xmpp_domain) then
# list that prefix here so it can be stripped out to generate
# the call url correctly
JIBRI_STRIP_DOMAIN_JID=muc

Looks good; we have configured the env file so recording can be used. Let's build the Docker Jitsi containers with the command “docker-compose up -d”.

docker-compose -f docker-compose.yml -f jibri.yml up -d

Wait until the Jitsi containers finish building, then check that all services are running with the command “docker-compose ps”.

docker-compose -f docker-compose.yml -f jibri.yml ps

The jibri container is running fine, let’s look at the logs on the container with command “docker compose logs”.

docker-compose -f docker-compose.yml -f jibri.yml logs jibri

Let's try Jitsi recording in a conference

The container logs show Jibri running well. Let's try a conference using the Jitsi server.


Recording on Jitsi is now running well; the recording files can be found in ~/.jitsi-meet-cfg/jibri/recordings, or you can configure Jitsi to use a custom directory.


If you followed this post, recording should now be working on your server, and moderators can record when starting a conference. This issue with the alsa-loopback module is common, especially since some cloud providers do not ship generic kernels capable of accommodating Jitsi's requirements. In our case the kernel had already been changed to generic, but the alsa-loopback module was not yet active, which can make it confusing to figure out what is going on. Don't forget to read our other interesting articles, or choose our cloud storage products for your data storage. Have a nice day.
