Datalbi  
Created on 07/09/2021 by Laurent Bourcier

Setting up a MySQL 8.0 InnoDB Cluster with MySQL Shell on Debian

Contents
1. Introduction
2. Installing MySQL Shell
3. Installing an InnoDB cluster
4. Installing MySQL Router
5. High availability test

Introduction

InnoDB Cluster, introduced in MySQL 5.7, combines several software components:

- MySQL Server instances replicating through the Group Replication plugin;
- MySQL Shell, to configure and administer the cluster;
- MySQL Router, to route client connections transparently to the current primary.

Together, these components form an InnoDB cluster.

Installing MySQL Shell

MySQL Shell is an administration tool that can be installed either on a server dedicated to administration or on each MySQL server. Here, it will be installed on a dedicated server.

This tool makes it easy to build an InnoDB cluster out of standalone, non-clustered instances.

Installing MySQL Shell on Debian goes as follows:

# The MySQL repository must be configured
# grep "mysql-tools" /etc/apt/sources.list.d/mysql.list
deb http://repo.mysql.com/apt/debian/ buster mysql-tools

# apt-get update
# apt-get install mysql-shell

# dpkg -l | grep mysql-shell
ii  mysql-shell:amd64   8.0.26-1debian10   amd64   MySQL Shell (part of MySQL Server) 8.0

To use the login path (also called the MySQL Shell keystore), MySQL Client must be installed. Without it, MySQL Shell will not save any password.

# The MySQL repository must be configured
# grep "mysql-8.0" /etc/apt/sources.list.d/mysql.list
deb http://repo.mysql.com/apt/debian/ buster mysql-8.0

# apt-get update
# apt-get install mysql-client

# dpkg -l | grep mysql-client
ii  mysql-client        8.0.26-1debian10   amd64   MySQL Client meta package depending on latest version

# Check that the login path is available
# mysqlsh
JS > shell.listCredentialHelpers()
[
"login-path"
]
JS > \q

# Test a connection:
# mysqlsh root@mysql01
Please provide the password for 'root@mysql01': ****
Save password for 'root@mysql01'? [Y]es/[N]o/Ne[v]er (default No): Y
JS > \q

# Check the login path that was created (/root/.mylogin.cnf)
# mysql_config_editor print --all
[root@mysql01]
user = "root"
password = *****
host = "mysql01"

Installing an InnoDB cluster

Creating a 3-node InnoDB cluster (the minimum for quorum) is fairly straightforward thanks to the administration commands exposed by the JavaScript interface of MySQL Shell.
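The 3-node minimum follows the usual majority rule of Group Replication: a group of n members keeps quorum as long as a majority is still alive, so it tolerates floor((n - 1) / 2) simultaneous failures. A minimal sketch:

```python
# Majority-based failure tolerance of a Group Replication group:
# a group of n members keeps quorum while a majority is alive,
# so it tolerates floor((n - 1) / 2) simultaneous failures.
def tolerated_failures(n: int) -> int:
    return (n - 1) // 2

# 3 nodes is the minimum to survive one failure, which is exactly
# what cluster.status() reports once three members are ONLINE.
for n in (1, 2, 3, 5):
    print(n, tolerated_failures(n))
```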

A few prerequisites are needed, though:

- three MySQL 8.0 instances (here mysql01, mysql02 and mysql03), reachable from one another by hostname;
- an administrative account on each instance, usable remotely by MySQL Shell (created below).

Minimal instance configuration:

# On mysql01, mysql02, mysql03
# cat /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
log-error       = /var/log/mysql/error.log

Creating the 'root'@'%' account with the privileges required to create a cluster:

# On mysql01, mysql02, mysql03
mysql> CREATE USER 'root'@'%' IDENTIFIED WITH MYSQL_NATIVE_PASSWORD BY '****';
mysql> GRANT CLONE_ADMIN, CONNECTION_ADMIN, CREATE USER, EXECUTE, FILE, GROUP_REPLICATION_ADMIN, PERSIST_RO_VARIABLES_ADMIN, PROCESS, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, REPLICATION_APPLIER, REPLICATION_SLAVE_ADMIN, ROLE_ADMIN, SELECT, SHUTDOWN, SYSTEM_VARIABLES_ADMIN ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> GRANT DELETE, INSERT, UPDATE ON mysql.* TO 'root'@'%' WITH GRANT OPTION;
mysql> GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata.* TO 'root'@'%' WITH GRANT OPTION;
mysql> GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'root'@'%' WITH GRANT OPTION;
mysql> GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'root'@'%' WITH GRANT OPTION;

We now configure the first server (mysql01) with MySQL Shell:

# On the admin server
# mysqlsh
JS > dba.configureInstance('root@mysql01');

    Configuring MySQL instance at mysql01:3306 for use in an InnoDB cluster...
    
    This instance reports its own address as mysql01:3306
    Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
    
    applierWorkerThreads will be set to the default value of 4.
    
    NOTE: Some configuration options need to be fixed:
    +----------------------------------------+---------------+----------------+--------------------------------------------------+
    | Variable                               | Current Value | Required Value | Note                                             |
    +----------------------------------------+---------------+----------------+--------------------------------------------------+
    | binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
    | enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
    | gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
    | replica_parallel_type                  | DATABASE      | LOGICAL_CLOCK  | Update the server variable                       |
    | replica_preserve_commit_order          | OFF           | ON             | Update the server variable                       |
    | server_id                              | 1             | &lt;unique ID&gt;    | Update read-only variable and restart the server |
    +----------------------------------------+---------------+----------------+--------------------------------------------------+
    
    Some variables need to be changed, but cannot be done dynamically on the server.
    Do you want to perform the required configuration changes? [y/n]: Y
    Do you want to restart the instance after configuring it? [y/n]: Y
    Configuring instance...
    The instance 'mysql01:3306' was configured to be used in an InnoDB cluster.
    Restarting MySQL...
    NOTE: MySQL server at mysql01:3306 was restarted.

    
JS > dba.checkInstanceConfiguration('root@mysql01');

    Validating MySQL instance at mysql01:3306 for use in an InnoDB cluster...
    
    This instance reports its own address as mysql01:3306
    Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
    
    Checking whether existing tables comply with Group Replication requirements...
    No incompatible tables detected
    
    Checking instance configuration...
    Instance configuration is compatible with InnoDB cluster
    
    The instance 'mysql01:3306' is valid to be used in an InnoDB cluster.
    
    {
        "status": "ok"
    }

The instance settings were persisted to the file /var/lib/mysql/mysqld-auto.cnf.

We can now create the InnoDB cluster.

# mysqlsh root@mysql01
JS > var cluster = dba.createCluster('cluster_test')

    A new InnoDB cluster will be created on instance 'mysql01:3306'.
    
    Validating instance configuration at mysql01:3306...
    
    This instance reports its own address as mysql01:3306
    
    Instance configuration is suitable.
    NOTE: Group Replication will communicate with other members using 'mysql01:33061'. Use the localAddress option to override.
    
    Creating InnoDB cluster 'cluster_test' on 'mysql01:3306'...
    
    Adding Seed Instance...
    Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
    At least 3 instances are needed for the cluster to be able to withstand up to
    one server failure.

    
JS > cluster.status()
{
    "clusterName": "cluster_test",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql01:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql01:3306": {
                "address": "mysql01:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.26"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql01:3306"
}

For the internal operation of the cluster, a MySQL account has been created.

It is named "mysql_innodb_cluster_{server_id}", where server_id is unique to each instance.

# On mysql01
# mysql
mysql> select user, host from mysql.user;
+---------------------------------+-----------+
| user                            | host      |
+---------------------------------+-----------+
| mysql_innodb_cluster_1354288827 | %         |
| root                            | %         |
| mysql.infoschema                | localhost |
| mysql.session                   | localhost |
| mysql.sys                       | localhost |
| root                            | localhost |
+---------------------------------+-----------+

mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|  1354288827 |
+-------------+

mysql> show grants for 'mysql_innodb_cluster_1354288827'@'%';
+-------------------------------------------------------------------------+
| Grants for mysql_innodb_cluster_1354288827@%                            |
+-------------------------------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO `mysql_innodb_cluster_1354288827`@`%` |
| GRANT BACKUP_ADMIN ON *.* TO `mysql_innodb_cluster_1354288827`@`%`      |
+-------------------------------------------------------------------------+
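The naming scheme shown above can be reproduced directly: the internal account name is simply the prefix mysql_innodb_cluster_ followed by the member's server_id (a sketch for illustration only; the real accounts are created by MySQL Shell):

```python
# Internal recovery account name used by InnoDB Cluster:
# "mysql_innodb_cluster_" followed by the member's server_id.
def recovery_account(server_id: int) -> str:
    return f"mysql_innodb_cluster_{server_id}"

# Matches the account listed in mysql.user for server_id 1354288827.
print(recovery_account(1354288827))
```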

Adding the mysql02 instance to the cluster is done through MySQL Shell:

JS > dba.configureInstance('root@mysql02')

    Configuring MySQL instance at mysql02:3306 for use in an InnoDB cluster...
    
    This instance reports its own address as mysql02:3306
    Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
    
    applierWorkerThreads will be set to the default value of 4.
    
    NOTE: Some configuration options need to be fixed:
    +----------------------------------------+---------------+----------------+--------------------------------------------------+
    | Variable                               | Current Value | Required Value | Note                                             |
    +----------------------------------------+---------------+----------------+--------------------------------------------------+
    | binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
    | enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
    | gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
    | replica_parallel_type                  | DATABASE      | LOGICAL_CLOCK  | Update the server variable                       |
    | replica_preserve_commit_order          | OFF           | ON             | Update the server variable                       |
    | server_id                              | 1             | &lt;unique ID&gt;    | Update read-only variable and restart the server |
    +----------------------------------------+---------------+----------------+--------------------------------------------------+
    
    Some variables need to be changed, but cannot be done dynamically on the server.
    Do you want to perform the required configuration changes? [y/n]: y
    Do you want to restart the instance after configuring it? [y/n]: y
    Configuring instance...
    The instance 'mysql02:3306' was configured to be used in an InnoDB cluster.
    Restarting MySQL...
    NOTE: MySQL server at mysql02:3306 was restarted.


JS > cluster.addInstance('root@mysql02')

    NOTE: The target instance 'mysql02:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
    The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'mysql02:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
    
    The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
    
    
    Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
    Validating instance configuration at mysql02:3306...
    
    This instance reports its own address as mysql02:3306
    
    Instance configuration is suitable.
    NOTE: Group Replication will communicate with other members using 'mysql02:33061'. Use the localAddress option to override.
    
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.
    
    Adding instance to the cluster...
    
    Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
    Clone based state recovery is now in progress.
    
    NOTE: A server restart is expected to happen as part of the clone process. If the
    server does not support the RESTART command or does not come back after a
    while, you may need to manually start it back.
    
    * Waiting for clone to finish...
    NOTE: mysql02:3306 is being cloned from mysql01:3306
    ** Stage DROP DATA: Completed
    ** Clone Transfer
        FILE COPY  ############################################################  100%  Completed
        PAGE COPY  ############################################################  100%  Completed
        REDO COPY  ############################################################  100%  Completed
    
    NOTE: mysql02:3306 is shutting down...
    
    * Waiting for server restart... ready
    * mysql02:3306 has restarted, waiting for clone to finish...
    ** Stage RESTART: Completed
    * Clone process has finished: 72.20 MB transferred in about 1 second (~72.20 MB/s)
    
    State recovery already finished for 'mysql02:3306'
    
    The instance 'mysql02:3306' was successfully added to the cluster.

Note in passing that the data is copied through the "Clone" plugin, a new feature of MySQL 8.0.

Adding the third node follows the same pattern:

JS > dba.configureInstance('root@mysql03')
JS > cluster.addInstance('root@mysql03')

The cluster status can be queried from JavaScript:

JS > cluster.status()
{
    "clusterName": "cluster_test",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql01:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql01:3306": {
                "address": "mysql01:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.26"
            },
            "mysql02:3306": {
                "address": "mysql02:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.26"
            },
            "mysql03:3306": {
                "address": "mysql03:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.26"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql01:3306"
}

The Group Replication status can also be checked through the standard system views:

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 33c6d17d-0cf3-11ec-ba63-080027a13959 | mysql02     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | 3591cd5b-0cf3-11ec-9851-0800278ede4e | mysql03     |        3306 | ONLINE       | SECONDARY   | 8.0.26         |
| group_replication_applier | 9ce9103a-0cf2-11ec-b027-080027df3a71 | mysql01     |        3306 | ONLINE       | PRIMARY     | 8.0.26         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

Note that 3 accounts have been created for the internal operation of the cluster.

mysql> select user, host from mysql.user;
+---------------------------------+-----------+
| user                            | host      |
+---------------------------------+-----------+
| mysql_innodb_cluster_1354288827 | %         |
| mysql_innodb_cluster_2293538393 | %         |
| mysql_innodb_cluster_3149079511 | %         |
| root                            | %         |
| mysql.infoschema                | localhost |
| mysql.session                   | localhost |
| mysql.sys                       | localhost |
| root                            | localhost |
+---------------------------------+-----------+

Installing MySQL Router

MySQL Router is installed on the application server. It runs under a dedicated mysqlrouter account, not as root.

# Check that the repository is present
# grep mysql-tools /etc/apt/sources.list.d/mysql.list
deb http://repo.mysql.com/apt/debian/ buster mysql-tools

# apt-get update

# apt-get install mysql-router

# dpkg -l | grep mysql-router
ii  mysql-router              8.0.26-1debian10    amd64     MySQL Router Metapackage
ii  mysql-router-community    8.0.26-1debian10    amd64     MySQL Router

First, a router configuration must be created; it will be placed in a dedicated directory (here /opt/mysqlrouter).

# systemctl stop mysqlrouter

# mysqlrouter --user=mysqlrouter --bootstrap root@mysql01:3306 --directory /opt/mysqlrouter --account router_user --account-create always
    Please enter MySQL password for router_user:
    Executing statements failed with: 'Error executing MySQL query "INSERT INTO mysql_innodb_cluster_metadata.v2_routers        (address, product_name, router_name) VALUES ('ansible.datalbi.com', 'MySQL Router', '')": The MySQL server is running with the --super-read-only option so it cannot execute this statement (1290)' (1290), trying to connect to another node
    Fetching Cluster Members
    disconnecting from mysql-server
    trying to connect to mysql-server at mysql02:3306
    - Creating account(s) (only those that are needed, if any)
    - Verifying account (using it to run SQL queries that would be run by Router)
    - Storing account in keyring
    - Adjusting permissions of generated files
    - Creating configuration /opt/myrouter/mysqlrouter.conf
    
    # MySQL Router configured for the InnoDB Cluster 'cluster_test'
    
    After this MySQL Router has been started with the generated configuration
    
        $ mysqlrouter -c /opt/myrouter/mysqlrouter.conf
    
    the cluster 'cluster_test' can be reached by connecting to:
    
    ## MySQL Classic protocol
    
    - Read/Write Connections: localhost:6446, /opt/myrouter/mysql.sock
    - Read/Only Connections:  localhost:6447, /opt/myrouter/mysqlro.sock
    
    ## MySQL X protocol
    
    - Read/Write Connections: localhost:6448, /opt/myrouter/mysqlx.sock
    - Read/Only Connections:  localhost:6449, /opt/myrouter/mysqlxro.sock
    
# ls -l /opt/mysqlrouter
    total 28
    drwx------ 2 myrouter myrouter 4096 sept.  7 14:28 data
    drwx------ 2 myrouter myrouter 4096 sept.  7 14:28 log
    srwxrwxrwx 1 myrouter myrouter    0 sept.  7 14:32 mysqlro.sock
    -rw------- 1 myrouter myrouter 2011 sept.  7 14:28 mysqlrouter.conf
    -rw------- 1 myrouter myrouter   84 sept.  7 14:28 mysqlrouter.key
    srwxrwxrwx 1 myrouter myrouter    0 sept.  7 14:32 mysql.sock
    srwxrwxrwx 1 myrouter myrouter    0 sept.  7 14:32 mysqlxro.sock
    srwxrwxrwx 1 myrouter myrouter    0 sept.  7 14:32 mysqlx.sock
    drwx------ 2 myrouter myrouter 4096 sept.  7 14:28 run
    -rwx------ 1 myrouter myrouter  289 sept.  7 14:28 start.sh
    -rwx------ 1 myrouter myrouter  149 sept.  7 14:28 stop.sh
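For reference, the generated mysqlrouter.conf contains one [routing:...] section per port listed above. The fragment below is an illustrative sketch only; the actual section names, ports and destinations come from the bootstrap, not from this example:

```ini
# Illustrative excerpt of a bootstrap-generated mysqlrouter.conf
# (section name and values are examples, not the actual generated file)
[routing:cluster_test_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://cluster_test/?role=PRIMARY
routing_strategy=first-available
protocol=classic
```

The destinations URI points at the metadata cache rather than at a fixed host list, which is what lets the router follow the primary after a failover.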

To start the router through systemd:

- the service must run as the router account (User= directive in the unit file);
- /etc/mysqlrouter/mysqlrouter.conf must point to the configuration generated by the bootstrap.

Hence the following actions:
# grep -i user /lib/systemd/system/mysqlrouter.service
WantedBy=multi-user.target
User=myrouter

# rm /etc/mysqlrouter/mysqlrouter.conf
# ln -s /opt/mysqlrouter/mysqlrouter.conf /etc/mysqlrouter/mysqlrouter.conf

# systemctl start mysqlrouter

# ps -ef | grep mysqlrouter
myrouter  1420     1  1 14:32 ?        00:08:34 /usr/bin/mysqlrouter

For any startup problem, check the journal with "journalctl -f".

High availability test

On the application server where MySQL Router and MySQL Client are installed, we test a read/write connection on port 6446.

We then stop a node and run the SELECT again to check that the connection still works.

# On the MySQL Router server
# mysql -u router_user -h ansible -P 6446 -p
Enter password:

mysql> select @@hostname, @@port;
+------------+--------+
| @@hostname | @@port |
+------------+--------+
| mysql01    |   3306 |
+------------+--------+
1 row in set (0,00 sec)

# On mysql01: stop the primary node
# systemctl stop mysql

# On the MySQL Router server, run the query again in the mysql session left open
mysql> select @@hostname, @@port;
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
Connection id:    66
Current database: *** NONE ***

+------------+--------+
| @@hostname | @@port |
+------------+--------+
| mysql02    |   3306 |
+------------+--------+
1 row in set (0,01 sec)

This test clearly demonstrates the automatic failover of the connection to the new primary server.