Deploying with RPM

To deploy the Standard Edition of PolarDB-X (the centralized form) via RPM, you first need the corresponding RPM package. You can compile and generate this RPM package yourself, or download a prebuilt one (choose the x86 or ARM RPM according to your hardware).

The steps to compile and generate the RPM are provided below. If you have already downloaded the RPM package, skip this step and install the RPM directly.

Compiling RPM from Source

The compilation environment may vary slightly depending on the operating system, but the resulting RPM package is universal.

Installing Compilation Dependencies

For CentOS 7

# Install necessary dependencies
yum remove -y cmake

yum install -y git make bison libarchive ncurses-devel libaio-devel cmake3 mysql rpm-build zlib-devel openssl-devel centos-release-scl

ln -s /usr/bin/cmake3 /usr/bin/cmake

yum install -y devtoolset-7-gcc devtoolset-7-gcc-c++ devtoolset-7-binutils

echo "source /opt/rh/devtoolset-7/enable" | sudo tee -a /etc/profile
source /etc/profile
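
You can verify that the newer toolchain is active; devtoolset-7 provides GCC 7, so the following should report a 7.x version:

gcc --version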

For CentOS 8

# Install necessary dependencies
yum install -y git make bison libarchive ncurses-devel libaio-devel cmake3 mysql rpm-build zlib-devel

yum install -y libtirpc-devel dnf-plugins-core 

yum config-manager --set-enabled PowerTools

yum groupinstall -y "Development Tools"

yum install -y gcc gcc-c++

Compiling the RPM

# Clone the repository
git clone https://github.com/polardb/polardbx-engine.git --depth 1

# Compile the rpm
cd polardbx-engine/rpm && rpmbuild -bb t-polardbx-engine.spec

The compiled RPM will be located in /root/rpmbuild/RPMS/x86_64/ by default.
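
You can list the build output to confirm (on ARM machines, rpmbuild places the package under /root/rpmbuild/RPMS/aarch64/ instead; the exact file name depends on the version you built):

ls /root/rpmbuild/RPMS/x86_64/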

Installing the RPM

yum install -y <your downloaded or compiled rpm>

After installation, the binary files will be located in /opt/polardbx-engine/bin.
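
A quick sanity check is to confirm the package is installed and the binaries are in place (the package name pattern below is an assumption based on the spec file name):

rpm -qa | grep -i polardbx
ls /opt/polardbx-engine/bin    # or /opt/polardbx_engine/bin, depending on how the RPM lays out its files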

Starting the DN (Data Node)

Create a polarx user (any other non-root user also works), prepare a my.cnf file (see the reference template at the end of this article) and the data directories, and you are ready to start. If you modify my.cnf, adjust the directories below accordingly.

# Create and switch to the polarx user
useradd -ms /bin/bash polarx
echo "polarx:polarx" | chpasswd
echo "polarx    ALL=(ALL)    NOPASSWD: ALL" >> /etc/sudoers
su - polarx
# Create necessary directories
mkdir polardbx-engine
cd polardbx-engine && mkdir log mysql run data tmp
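
# Place your my.cnf here before initializing (see the reference template at
# the end of this article); the commands below assume it sits in the current
# directory. The source path in the cp command is only a placeholder.
cp /path/to/my.cnf ./my.cnf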

# Initialize
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --initialize-insecure
# Start
/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf &

After a short wait, you can log in to the database. If using the my.cnf template from this article, log in with mysql -h127.0.0.1 -P4886 -uroot.
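
For a quick check that the server is serving queries (port and passwordless root login as set up by the template and --initialize-insecure):

mysql -h127.0.0.1 -P4886 -uroot -e "SELECT VERSION();"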

High-Availability Deployment

If all went well, you now have a working single-node deployment and are familiar with how to deploy the PolarDB-X engine. Next, let's set up a complete centralized cluster on three machines to test its high-availability failover capability.

Let's assume the IP addresses of our three machines are as follows:

192.168.6.183
192.168.6.184
192.168.6.185

Following the steps above, install the RPM on all three machines and prepare the my.cnf file and directories on each (if any step fails, thoroughly clean these directories and start over; a reset example follows the start commands below). Then start the servers on the three machines as shown below:

# Execute on 192.168.6.183
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@1' \
--initialize-insecure

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@1' \
&

# Execute on 192.168.6.184
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@2' \
--initialize-insecure

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@2' \
&

# Execute on 192.168.6.185
/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@3' \
--initialize-insecure

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
--cluster-info='192.168.6.183:14886;192.168.6.184:14886;192.168.6.185:14886@3' \
&

Note that we modified the cluster-info setting at startup. Its format is [host1]:[port1];[host2]:[port2];[host3]:[port3]@[idx], where [idx] is the only part that differs across machines: it indicates which [host]:[port] entry in the list is the local node. Adjust this configuration to the actual IP addresses of your machines.
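
If initialization or startup fails on any node, clean its directories completely before retrying. A minimal reset, assuming the directory layout created earlier, looks like this:

# Run as the polarx user on the affected node
cd /home/polarx/polardbx-engine
rm -rf log mysql run data tmp
mkdir log mysql run data tmp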

After waiting briefly, connect to each database instance to check the status of the cluster by executing:

SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL\G
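
To inspect all three nodes in one pass, a small shell loop also works (port and passwordless root login are taken from the my.cnf template; adjust the IPs to your environment):

for h in 192.168.6.183 192.168.6.184 192.168.6.185; do
  mysql -h"$h" -P4886 -uroot \
    -e "SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL\G"
done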

We will see that one of the machines is the Leader:

          SERVER_ID: 1
       CURRENT_TERM: 20
     CURRENT_LEADER: 192.168.6.183:14886
       COMMIT_INDEX: 1
      LAST_LOG_TERM: 20
     LAST_LOG_INDEX: 1
               ROLE: Leader
          VOTED_FOR: 1
   LAST_APPLY_INDEX: 0
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal

The other two machines are Followers:

          SERVER_ID: 2
       CURRENT_TERM: 20
     CURRENT_LEADER: 192.168.6.183:14886
       COMMIT_INDEX: 1
      LAST_LOG_TERM: 20
     LAST_LOG_INDEX: 1
               ROLE: Follower
          VOTED_FOR: 1
   LAST_APPLY_INDEX: 1
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal

In the PolarDB-X engine cluster, only the Leader node can write data. Let's create a database and a table on the Leader and insert some simple data:

CREATE DATABASE db1;
USE db1;
CREATE TABLE tb1 (id int);
INSERT INTO tb1 VALUES (0), (1), (2);

Then, we can query the data on the Follower nodes.
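
For example, reading the table back from one of the Followers (192.168.6.184 here, per the IPs used above):

mysql -h192.168.6.184 -P4886 -uroot -e "SELECT * FROM db1.tb1;"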

We can also check the status of the cluster on the Leader by running:

SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;

The result looks like this:

***************** 1. row *****************
      SERVER_ID: 1
        IP_PORT: 192.168.6.183:14886
    MATCH_INDEX: 4
     NEXT_INDEX: 0
           ROLE: Leader
      HAS_VOTED: Yes
     FORCE_SYNC: No
ELECTION_WEIGHT: 5
 LEARNER_SOURCE: 0
  APPLIED_INDEX: 4
     PIPELINING: No
   SEND_APPLIED: No
***************** 2. row *****************
      SERVER_ID: 2
        IP_PORT: 192.168.6.184:14886
    MATCH_INDEX: 4
     NEXT_INDEX: 5
           ROLE: Follower
      HAS_VOTED: Yes
     FORCE_SYNC: No
ELECTION_WEIGHT: 5
 LEARNER_SOURCE: 0
  APPLIED_INDEX: 4
     PIPELINING: Yes
   SEND_APPLIED: No
***************** 3. row *****************
      SERVER_ID: 3
        IP_PORT: 192.168.6.185:14886
    MATCH_INDEX: 4
     NEXT_INDEX: 5
           ROLE: Follower
      HAS_VOTED: No
     FORCE_SYNC: No
ELECTION_WEIGHT: 5
 LEARNER_SOURCE: 0
  APPLIED_INDEX: 4
     PIPELINING: Yes
   SEND_APPLIED: No

All APPLIED_INDEX values are 4, which indicates that the data is currently completely consistent across the three nodes.

Next, we kill the mysqld process on the Leader node (192.168.6.183) with kill -9 to trigger the election of a new Leader:

kill -9 $(pgrep -x mysqld)

After the old Leader is killed, mysqld_safe will immediately restart the mysqld process.
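
To watch the new Leader emerge, you can repeatedly query any surviving node, for example (IP and port as configured above):

mysql -h192.168.6.184 -P4886 -uroot \
  -e "SELECT CURRENT_LEADER, ROLE FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL\G"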

We can then observe that the Leader has switched to the node 192.168.6.185:

          SERVER_ID: 3
       CURRENT_TERM: 21
     CURRENT_LEADER: 192.168.6.185:14886
       COMMIT_INDEX: 5
      LAST_LOG_TERM: 21
     LAST_LOG_INDEX: 5
               ROLE: Leader
          VOTED_FOR: 3
   LAST_APPLY_INDEX: 4
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal

Through the steps above, we have deployed a three-node PolarDB-X engine cluster across three machines and performed a simple verification. It is also possible to deploy all three nodes on a single machine, as long as each node uses its own my.cnf with a distinct port, data directory, and other parameters, as well as distinct ports in cluster-info, as sketched below.
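
As a rough sketch (the ports and directories below are illustrative, not a tested layout), node 1 of a single-machine three-node cluster might use an my.cnf fragment like this, with nodes 2 and 3 using their own values:

# Node 1 of a single-machine cluster (nodes 2 and 3 each get their own copy
# with a different port, their own directories, and @2 / @3 at the end of
# cluster_info)
port         = 4886
datadir      = /home/polarx/node1/data
tmpdir       = /home/polarx/node1/tmp
log_error    = /home/polarx/node1/log/alert.log
cluster_info = 127.0.0.1:14886;127.0.0.1:15886;127.0.0.1:16886@1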

Finally, the process above is intended only for trying out and testing PolarDB-X; do not use it directly in production. For production, deployment with Kubernetes (K8s) is recommended. If you must deploy with RPM in production, the application itself has to track Leader switches and use the correct connection string to reach the current Leader.

Reference Template for my.cnf

This template is intended for functional verification and testing only; adjust the parameters to your actual environment. For more parameters, refer to the complete parameter template.

[mysqld]
basedir = /opt/polardbx-engine
log_error_verbosity = 2
default_authentication_plugin = mysql_native_password
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-binlog
binlog_format = row
binlog_row_image = FULL
master_info_repository = TABLE
relay_log_info_repository = TABLE

# change me if needed
datadir = /home/polarx/polardbx-engine/data
tmpdir = /home/polarx/polardbx-engine/tmp
socket = /home/polarx/polardbx-engine/tmp.mysql.sock
log_error = /home/polarx/polardbx-engine/log/alert.log
port = 4886
cluster_id = 1234
cluster_info = 127.0.0.1:14886@1

[mysqld_safe]
pid_file = /home/polarx/polardbx-engine/run/mysql.pid
