Hardware and Software Configuration

PolarDB-X is a cloud-native distributed database independently developed by Alibaba. It supports deployment on both Intel and ARM CPU architectures as well as mainstream virtualized environments, and its operating system support covers the major Linux distributions.

PolarDB-X is composed of four core components: Compute Node (CN), Data Node (DN), Global Metadata Service (GMS), and Change Data Capture (CDC).

A PolarDB-X cluster can be deployed with either PXD or Kubernetes. PXD deployment requires a preparation host, while Kubernetes deployment requires a K8s Master machine; choose whichever method suits your environment. The Kubernetes method additionally supports monitoring and log collection, so the two methods have different resource requirements.

The table below provides the minimum recommended resource requirements for each component in a PolarDB-X standard cluster:

| Component Type | Component | CPU | Memory | Disk | Network Card | Minimum Quantity |
| --- | --- | --- | --- | --- | --- | --- |
| Core Components | CN | 2 cores | 8 GB+ | SSD, 200 GB+ | 10 Gigabit Ethernet | 2 |
| Core Components | DN | 2 cores | 8 GB+ | SSD, 1 TB+ (two disks recommended) | 10 Gigabit Ethernet | 2.5 (see explanation below) |
| Core Components | GMS | 2 cores | 8 GB+ | SSD, 200 GB+ | 10 Gigabit Ethernet | 2.5 (see explanation below) |
| Core Components | CDC | 2 cores | 8 GB+ | SSD, 200 GB+ | 10 Gigabit Ethernet | 2 (optional) |
| K8s-related | K8s Master | 4 cores | 8 GB+ | SSD, 1 TB+ | 10 Gigabit Ethernet | 1 (3 recommended) |
| K8s-related | Monitoring | 4 cores | 8 GB+ | SSD, 1 TB+ | 1 Gigabit Ethernet | 1 (optional) |
| K8s-related | Log | 4 cores | 8 GB+ | SSD, 1 TB+ | 10 Gigabit Ethernet | 1 (optional) |
| PXD-related | Deployment Host | 2 cores | 8 GB+ | No requirements | No requirements | 1 |

Explanation:

  • The table provides the minimum recommended resource configurations for PolarDB-X components. In actual production environments, CN, DN, GMS, and CDC can be deployed on the same server. For higher availability, it is recommended to deploy components separately.
  • Explanation of the 2.5× resources for GMS and DN: GMS and DN are high-reliability storage services based on the Paxos protocol, so each GMS (DN) consists of nodes in three roles: Leader, Follower, and Logger. The Leader and Follower have identical resource requirements so that service quality is maintained after an HA switchover. The Logger node only stores logs and does not replay binlogs; 2 cores and 4 GB are enough for most scenarios. A GMS (DN) therefore requires 2.5 times the single-node resources: 2 for the Leader and Follower, plus 0.5 for the Logger.
  • Data Node (DN) is responsible for actual data storage and is recommended to use SSD disks of 1 TB or more. For performance and reliability, it is recommended to use two SSD disks to store data and logs separately.
  • For Kubernetes deployment:
    • In actual production environments, for high availability, it is strongly recommended to deploy K8s Master on three separate servers.
    • The K8s Master node, monitoring, and log collection components should be deployed separately from the core components (CN, DN, GMS, CDC) to ensure stability.
  • For PXD deployment:
    • The deployment host has low resource requirements; any machine already hosting other components can serve this role.

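The 2.5× accounting described above is straightforward to verify. The following Python sketch (the helper name is hypothetical, and the 0.5× Logger factor follows the accounting convention in the explanation rather than a literal node spec) totals the resources for one Paxos group:

```python
# Sketch: estimate total resources for one GMS/DN Paxos group
# (Leader + Follower + Logger) as described above.
# Assumption: the Logger is counted as roughly 0.5x of a full node.

def paxos_group_resources(cpu_cores: float, memory_gb: float):
    """Return (total_cores, total_memory_gb) for a three-role group."""
    leader = follower = (cpu_cores, memory_gb)      # full spec each
    logger = (cpu_cores / 2, memory_gb / 2)          # log-only node, ~0.5x
    total_cores = leader[0] + follower[0] + logger[0]
    total_mem = leader[1] + follower[1] + logger[1]
    return total_cores, total_mem

# Minimum DN spec from the table: 2 cores, 8 GB
cores, mem = paxos_group_resources(2, 8)
print(cores, mem)  # 5.0 20.0 -> exactly 2.5x the single-node spec
```

This is why the "Minimum Quantity" column lists 2.5 rather than 3 for GMS and DN: the third node exists for quorum, but only consumes about half the resources.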
Operating System and Architecture Requirements

PolarDB-X supports deployment on the following Linux distributions and architectures.

| Operating System | Version | Architecture |
| --- | --- | --- |
| AliOS | 7.2 and above | x86_64, ARM64 |
| AnolisOS | 7.9, or 8.6 and above | x86_64, ARM64 |
| UOS | V20 | x86_64, ARM64 |
| KylinOS | V10 | x86_64, ARM64 |
| Kylinsec | V3 | x86_64, ARM64 |
| Inspur KOS | V5 | x86_64, ARM64 |
| CentOS | 7.2 and above | x86_64, ARM64 |
| Red Hat Enterprise Linux | 7.2 and above | x86_64, ARM64 |

If you run PolarDB-X in a production environment, we suggest deploying with higher specifications than the minimum configurations recommended above. Below are two recommended topologies and component distributions based on K8s deployment, covering the kernel (1 GMS, 2 CN, 2 DN, 2 CDC) plus the monitoring and log collection components. If you deploy with PXD, omit the server requirements for the K8s-related components.

General Deployment Topology

In general, GMS and DN nodes are high-availability storage based on X-Paxos and can be co-deployed on the same three servers. CN and CDC nodes can be co-deployed on the same two servers. For high availability of the K8s services themselves, the K8s Master should be deployed on three separate servers. Monitoring and log collection are not on the critical path and can share a single server.

| Component | Server Specifications | Quantity | Additional Requirements |
| --- | --- | --- | --- |
| GMS & DN | 16 cores, 128 GB | 3 | SSD, 1 TB+ (two disks recommended) |
| CN & CDC | 16 cores, 64 GB | 2 | SSD, 200 GB+ |
| K8s Master | 4 cores, 8 GB | 3 | SSD, 1 TB+ |
| Monitoring & Log | 8 cores, 16 GB | 1 | SSD, 1 TB+ |

High-Reliability Deployment Topology

If your production environment requires higher performance and reliability, we recommend placing each component on separate servers. Below are the deployment topology and server requirements for the high-reliability mode.

| Component | Server Specifications | Quantity | Additional Requirements |
| --- | --- | --- | --- |
| GMS | 8 cores, 32 GB | 3 | SSD, 200 GB+ |
| DN | 16 cores, 128 GB | 3 | SSD, 1 TB+ (two disks recommended) |
| CN | 16 cores, 64 GB | 2 | SSD, 200 GB+ |
| CDC | 8 cores, 32 GB | 2 | SSD, 200 GB+ |
| K8s Master | 4 cores, 8 GB | 3 | SSD, 1 TB+ |
| Monitoring & Log | 8 cores, 16 GB | 2 | SSD, 1 TB+ |
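For capacity planning, it can help to total the high-reliability topology above. A minimal Python sketch (the specs and counts are copied directly from the table; the dictionary layout is just one way to organize them):

```python
# Sketch: total servers, CPU cores, and memory for the high-reliability
# topology. Values: (cores_per_server, memory_gb_per_server, server_count).

topology = {
    "GMS":              (8, 32, 3),
    "DN":               (16, 128, 3),
    "CN":               (16, 64, 2),
    "CDC":              (8, 32, 2),
    "K8s Master":       (4, 8, 3),
    "Monitoring & Log": (8, 16, 2),
}

servers = sum(n for _, _, n in topology.values())
cores = sum(c * n for c, _, n in topology.values())
memory_gb = sum(m * n for _, m, n in topology.values())
print(servers, cores, memory_gb)  # 15 148 728
```

So the high-reliability layout needs 15 servers totaling 148 cores and 728 GB of memory, a useful sanity check before ordering hardware.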
