System Configuration

Before deploying PolarDB-X, you need to initialize the server configuration and install the required software and tools. We recommend using the automation tool Ansible for these tasks. If you use another bulk execution tool, you can still follow the commands in this document to complete the deployment.

BIOS Settings

The BIOS settings interface varies significantly across CPU platforms and server vendors. Before deploying the database, consult the server vendor's documentation and verify that the following BIOS parameters are set correctly:

- Channel Interleaving: ENABLE. Allows CPU dies to interleave the use of multiple memory channels, enhancing memory bandwidth.
- Die Interleaving: DISABLE. Each CPU die uses its own memory channels, reducing memory access latency through the OS-level NUMA affinity allocation strategy.
- VT-d / IOMMU: DISABLE. Disables BIOS-level I/O virtualization support.
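After the servers boot, you can verify the resulting NUMA topology from the OS side; with die interleaving disabled, each CPU die typically appears as its own NUMA node. A quick check, assuming util-linux's lscpu is available:

```shell
# Show the NUMA topology produced by the BIOS settings.
# With Die Interleaving DISABLED you would typically see one NUMA
# node per CPU die; with it enabled, fewer (often a single) node.
lscpu | grep -i 'numa'
```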

Installing the Ansible Tool

First, choose any server in your environment as the deployment machine (the ops node). Ansible only needs to be installed on this node. After passwordless access is established from ops to the other servers, all deployment tasks can be run from ops.

Unless specified otherwise, the following commands are executed on the ops node.

Install Tool

yum install ansible python-netaddr -y

Configuring the Server List

Ansible organizes servers into groups by role through a server list configuration file (INI format). If you plan to deploy a separate Docker/Kubernetes environment for PolarDB-X, it is recommended to deploy a private Docker image registry on one of the servers (usually the deployment machine, ops). Edit the server list file:

vi $HOME/all.ini

For example:

- 192.168.1.101: ops, Docker Registry
- 192.168.1.102: PolarDB-X Compute Node
- 192.168.1.103: PolarDB-X Compute Node
- 192.168.1.104: PolarDB-X Compute Node
- 192.168.1.105: PolarDB-X Storage Node
- 192.168.1.106: PolarDB-X Storage Node
- 192.168.1.107: PolarDB-X Storage Node

The corresponding host INI file is as follows:

[all]
192.168.1.101  # ops

[cn]
192.168.1.102
192.168.1.103
192.168.1.104

[dn]
192.168.1.105
192.168.1.106
192.168.1.107

[all:vars]
registry=192.168.1.101

For convenience, it is recommended to store the configuration file path in an environment variable:

export ini_file=$HOME/all.ini
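Before moving on, you can sanity-check the inventory grouping without invoking Ansible at all. The snippet below is an illustrative sketch that builds a throwaway copy of the file and extracts the hosts of one INI section with awk:

```shell
# Build a throwaway inventory with the same shape as $HOME/all.ini
ini=$(mktemp)
cat > "$ini" <<'EOF'
[all]
192.168.1.101

[cn]
192.168.1.102
192.168.1.103

[dn]
192.168.1.105
EOF

# Print the hosts of the [cn] group: collect lines after the [cn]
# header, stopping at the next [section] header.
awk '/^\[cn\]$/{f=1;next} /^\[/{f=0} f && NF {print $1}' "$ini"
rm -f "$ini"
```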

Server Passwordless Access

After installing Ansible on ops, use the following script to establish passwordless access from ops to all servers:

ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa <<<y

echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config

ansible -i ${ini_file} all -m authorized_key -a " user=root key=\"{{ lookup('file', '/root/.ssh/id_rsa.pub') }} \"  " -u root --become-method=sudo --ask-become-pass --become -k
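For reference, the ssh-keygen flags used above are: -q (quiet), -t rsa (key type), -N '' (empty passphrase), and -f (key file path); the <<<y herestring answers the overwrite prompt if a key already exists. A disposable illustration in a temporary directory:

```shell
# Generate a throwaway RSA key pair non-interactively, then inspect it.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmp/id_rsa"
ls "$tmp"            # id_rsa  id_rsa.pub
rm -rf "$tmp"
```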

Check Effectiveness

Use the following command to verify that passwordless access works on all servers and that Ansible is functioning properly:

ansible -i ${ini_file} all -m shell -a " uptime "

Configure System Parameters

Configure Time

Set the time zone and clock on all servers in batch. For production deployments, it is recommended to configure an NTP service to keep the clocks synchronized.

ansible -i ${ini_file} all -m shell -a " timedatectl set-timezone Asia/Shanghai "
ansible -i ${ini_file} all -m shell -a " date -s  '`date '+%Y-%m-%d %H:%M:%S'`' "

Afterward, use the following command to check the server's clock:

ansible -i ${ini_file} all -m shell -a " date '+%D %T.%6N' "

Configure /etc/hosts

If you are installing a private Docker image registry, add the registry domain to each server's /etc/hosts file:

ansible -i ${ini_file} all -m shell -a  " sed -i '/registry/d' /etc/hosts "
ansible -i ${ini_file} all -m shell -a  " echo '{{ registry }}    registry' >> /etc/hosts "

Configure limits.conf

Edit the limits.conf file:

vi $HOME/limits.conf

Enter the following content:

# End of file
root soft nofile 655350
root hard nofile 655350

* soft nofile 655350
* hard nofile 655350
* soft nproc 655350
* hard nproc 655350

admin soft nofile 655350
admin hard nofile 655350
admin soft nproc 655350
admin hard nproc 655350

Update the server's limits.conf configuration file:

ansible -i ${ini_file} all -m synchronize -a " src=$HOME/limits.conf dest=/etc/security/limits.conf "

Configure sysctl.conf

Edit the sysctl.conf file:

vi $HOME/sysctl.conf

Enter the following content:

vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120

# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2

# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq=1

net.core.somaxconn = 256
net.core.wmem_max = 262144

net.ipv4.tcp_keepalive_time = 20
net.ipv4.tcp_keepalive_probes = 60
net.ipv4.tcp_keepalive_intvl = 3

net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 15

#perf
kernel.perf_event_paranoid = 1

fs.aio-max-nr = 1048576

Update the server's sysctl.conf configuration file:

ansible -i ${ini_file} all -m synchronize -a " src=$HOME/sysctl.conf dest=/etc/sysctl.conf "

Load the latest configuration on the server:

ansible -i ${ini_file} all -m shell -a " sysctl -p /etc/sysctl.conf "

Disable Firewall

Disable the firewall on all servers:

ansible -i ${ini_file} all -m shell -a " systemctl disable firewalld "
ansible -i ${ini_file} all -m shell -a " systemctl stop firewalld "

Check if the firewall is disabled:

ansible -i ${ini_file} all -m shell -a " systemctl status firewalld | grep Active "

Disable SELinux

Edit the selinux file:

vi $HOME/selinux

Enter the following content:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Update the server's configuration file:

ansible -i ${ini_file} all -m synchronize -a " src=$HOME/selinux dest=/etc/selinux/config "

Make the configuration effective:

ansible -i ${ini_file} all -m shell -a " setenforce 0 "

Check if the configuration is effective:

ansible -i ${ini_file} all -m shell -a " sestatus "

Disable Swap Partition

Run the following command to disable the Linux swap partition:

ansible -i ${ini_file} all -m shell -a " swapoff -a "
ansible -i ${ini_file} all -m shell -a " sed -i '/=SWAP/d' /etc/fstab "

Check if it is effective:

ansible -i ${ini_file} all -m shell -a " free -m | grep Swap "

Disk Mounting

Data Volume Requirements

PolarDB-X recommends a multi-instance-per-machine deployment mode, using Docker or Kubernetes pod volumes for file mapping. Each node's volume is prefixed with the corresponding Docker ID to form a two-level directory, which is then mapped onto multiple mount points.

All subdirectories live under the /polarx mount point:

- /polarx/data-log: log directory. Host symlink: /data-log (k8s deployment), $HOME/.pxd/data/polarx-log/ (PXD deployment).
- /polarx/filestream: intermediate temporary files, such as backup reconstruction. Host symlink: /filestream (k8s); not used by PXD.
- /polarx/docker: Docker runtime files. Host symlink: /var/lib/docker (both deployments).
- /polarx/kubelet: k8s root directory. Host symlink: /var/lib/kubelet (k8s); not used by PXD.
- /polarx/data (recommended to mount separately): main space for data storage, involves random I/O. Host symlink: /data (k8s), $HOME/.pxd/data/polarx/ (PXD).

Check the server's disk information using the fdisk command:

ansible -i ${ini_file} all -m shell -a " fdisk -l | grep Disk "

Single Data Disk

If the server has only one data disk, for example, /dev/nvme0n1, you can format and mount it using the following basic commands:

Risk Warning: All current data on the disk /dev/nvme0n1 will be lost.

# Unmount the data disk
umount /dev/nvme0n1

# Create an EXT4 file system and set the label to polarx
mkfs.ext4 /dev/nvme0n1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -q -L polarx -J size=4000

# Create the mounting directory
mkdir -p /polarx

# Add mounting information to /etc/fstab
echo "LABEL=polarx /polarx     ext4        defaults,noatime,data=writeback,nodiratime,nodelalloc,barrier=0    0 0" >> /etc/fstab

# Mount the data disk
mount -a

If all servers have data disks with the same name, we recommend using Ansible commands for batch operations:

Risk Warning: All current data on the /dev/nvme0n1 disk of all servers will be lost.

# Batch mount /dev/nvme0n1
ansible -i ${ini_file} all -m shell -a " mkfs.ext4 /dev/nvme0n1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -q -L polarx -J size=4000 "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx "
ansible -i ${ini_file} all -m shell -a " echo 'LABEL=polarx /polarx     ext4        defaults,noatime,data=writeback,nodiratime,nodelalloc,barrier=0    0 0' >> /etc/fstab "
ansible -i ${ini_file} all -m shell -a " mount -a "

Afterward, check the disk mounting using the df command:

ansible -i ${ini_file} all -m shell -a " df -lh | grep polarx "

If you choose Kubernetes deployment, execute the following commands to create symbolic links for the corresponding directories:

ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/kubelet"
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/kubelet /var/lib/kubelet "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/docker "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/docker /var/lib/docker "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data-log "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data-log /data-log "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/filestream "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/filestream /filestream "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data /data "

If you choose PXD deployment, execute the following commands to create symbolic links for the corresponding directories:

ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/docker "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/docker /var/lib/docker "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data-log "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data-log $HOME/.pxd/data/polarx-log/ "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data $HOME/.pxd/data/polarx/ "

Multiple Disks

If the server has multiple SSD data disks, it is recommended to split them into 1 + N. The first disk is mounted at the /polarx directory and stores PolarDB-X log-related files; the remaining N disks are combined with LVM into a logical volume mounted at the /polarx/data directory, which stores PolarDB-X core data.

The following steps show how to combine multiple data disks into a logical volume with LVM. First, install the LVM components on the servers:

ansible -i $ini_file all -m shell -a "yum install lvm2 -y"

Create the LVM logical volume using the script:

vi $HOME/create_polarx_lvm.sh

Copy the script content:

#!/bin/sh
#****************************************************************#
# ScriptName: create_polarx_lvm.sh
# Author: polardb-x@alibaba-inc.com
# Create Date: 2020-08-04 08:42
# Modify Date: 2021-05-25 17:20
#***************************************************************#

function disk_part(){
    set -e
    if [ $# -le 1 ]
    then
        echo "disk_part argument error"
        exit -1
    fi
    action=$1
    disk_device_list=(`echo $*`)

    echo $disk_device_list
    unset disk_device_list[0]

    echo $action
    echo ${disk_device_list[*]}
    len=`echo ${#disk_device_list[@]}`
    echo "start remove origin partition  "
    for dev in  ${disk_device_list[@]}
    do
        `parted -s ${dev} rm 1` || true
        dd if=/dev/zero of=${dev}  count=100000 bs=512
    done

    sed  -i  "/flash/d" /etc/fstab

    if [ x${1} == x"split" ]
    then
        echo "split disk "
        echo ${disk_device_list}
        vgcreate -s 32 vgpolarx ${disk_device_list[*]}
        lvcreate -A y -I 128K -l 100%FREE  -i ${#disk_device_list[@]} -n polarx vgpolarx
        mkfs.ext4 /dev/vgpolarx/polarx -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -q -L polarx -J size=4000
        sed  -i  "/polarx/d" /etc/fstab
        mkdir -p /polarx
    opt="defaults,noatime,data=writeback,nodiratime,nodelalloc,barrier=0"
        echo "LABEL=polarx /polarx     ext4        ${opt}    0 0" >> /etc/fstab
        mount -a
    else
        echo "unkonw action "
    fi
}

function format_nvme_mysql(){

    disk_device_list=(`ls -l /dev/|grep -v ^l|awk '{print $NF}'|grep -E "^nvme[0-9]{1,2}n1$|^df[a-z]$|^os[a-z]$"`)
    if [ 0 -lt ${#disk_device_list[@]}  ]
    then
        echo "check success"
        echo "start umount partition "
        partition_list=`df |grep -E "dev\/os|flash" |awk -F ' ' '{print $1}'`
        for partition in ${partition_list[@]}
        do
            echo $partition
            umount $partition
        done

    else
        echo "check flash fail"
        #exit -1
    fi

    full_disk_device_list=()
    for i in ${!disk_device_list[@]}
    do
      echo ${i}
      full_disk_device_list[${i}]=/dev/${disk_device_list[${i}]}
    done
    echo ${full_disk_device_list[@]}
    disk_part split ${full_disk_device_list[@]}
}

if [ ! -d "/polarx" ]; then
    umount /dev/vgpolarx/polarx
    vgremove -f vgpolarx
    dmsetup --force --retry --deferred remove vgpolarx-polarx
    format_nvme_mysql
else
   echo "the lvm exists."
fi
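The device discovery in format_nvme_mysql hinges on its grep -E pattern, which matches whole-disk NVMe names (nvme0n1 through nvme99n1) and legacy dfX/osX flash devices, but not partitions or second namespaces. You can exercise the pattern in isolation before trusting the script with your disks:

```shell
pattern='^nvme[0-9]{1,2}n1$|^df[a-z]$|^os[a-z]$'

# Whole disks match...
printf '%s\n' nvme0n1 nvme12n1 dfa osb | grep -E "$pattern"
# ...partitions and second namespaces do not (grep exits non-zero)
printf '%s\n' nvme0n1p1 nvme0n2 sda | grep -E "$pattern" || echo 'no match'
```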

Copy the script to the $HOME directory on the server and execute it:

Risk Warning: All current data on the disks with names matching /dev/nvme* of all servers will be lost.

# LVM Disk Mounting
ansible -i ${ini_file} all -m synchronize -a " src=$HOME/create_polarx_lvm.sh dest=/tmp/create_polarx_lvm.sh "
ansible -i ${ini_file} all -m shell -a " sh /tmp/create_polarx_lvm.sh "

Afterward, check the disk mounting using the df command:

ansible -i ${ini_file} all -m shell -a " df -lh | grep polarx "

If you choose K8s deployment, execute the following commands to create symbolic links for the corresponding directories:

ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/kubelet"
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/kubelet /var/lib/kubelet "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/docker "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/docker /var/lib/docker "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data-log "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data-log /data-log "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/filestream "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/filestream /filestream "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data /data "

If you choose PXD deployment, execute the following commands to create symbolic links for the corresponding directories:

ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/docker "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/docker /var/lib/docker "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data-log "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data-log $HOME/.pxd/data/polarx-log/ "
ansible -i ${ini_file} all -m shell -a " mkdir -p /polarx/data "
ansible -i ${ini_file} all -m shell -a " ln -s /polarx/data $HOME/.pxd/data/polarx/ "

Common Tools

PolarDB-X is compatible with the MySQL protocol, so it is recommended to install the MySQL client to access the database:

ansible -i ${ini_file} all -m shell -a " yum install mysql -y "

Install the recommended monitoring tools on all server nodes (iostat is provided by the sysstat package):

ansible -i ${ini_file} all -m shell -a " yum install dstat iostat htop -y "

results matching ""

    No results matching ""