
Deployment Goals

  • Install the Keeper cluster
  • Install the CK (ClickHouse) cluster, using ClickHouse Keeper as the coordination service, laid out as 2 shards × 2 replicas across four nodes

Environment

Software List

| # | Package | Version | RPM File | MD5 | Download URL | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | clickhouse-common-static | 23.8.14.6 | clickhouse-common-static-23.8.14.6.x86_64.rpm | 0f20e54ddea3575ff345f54343e50cf0 | https://packages.clickhouse.com/rpm/lts/clickhouse-common-static-23.8.14.6.x86_64.rpm | Compiled ClickHouse binaries |
| 2 | clickhouse-server | 23.8.14.6 | clickhouse-server-23.8.14.6.x86_64.rpm | 83656b0c7e6fc0877ad930fcb822a187 | https://packages.clickhouse.com/rpm/lts/clickhouse-server-23.8.14.6.x86_64.rpm | Creates the clickhouse-server symlink and installs the default server configuration/service |
| 3 | clickhouse-client | 23.8.14.6 | clickhouse-client-23.8.14.6.x86_64.rpm | 8f73d1d4f38037f24abf9840f23e44e3 | https://packages.clickhouse.com/rpm/lts/clickhouse-client-23.8.14.6.x86_64.rpm | Creates the clickhouse-client symlink and installs the client configuration files |
| 4 | clickhouse-keeper | 23.8.14.6 | clickhouse-keeper-23.8.14.6.x86_64.rpm | 6b5c9183a5bf3b348a820d22d5a5b4db | https://packages.clickhouse.com/rpm/lts/clickhouse-keeper-23.8.14.6.x86_64.rpm | Creates the clickhouse-keeper symlink and installs its configuration files |

Hardware List

In production, the Keeper (ZK) nodes and the CK nodes should run on separate machines.

| IP | Hostname | OS Version | Role |
| --- | --- | --- | --- |
| 192.168.107.8 | ck01 | CentOS 7.6.1810 | CK |
| 192.168.107.9 | ck02 | CentOS 7.6.1810 | CK |
| 192.168.107.10 | ck03 | CentOS 7.6.1810 | CK |
| 192.168.107.11 | ck04 | CentOS 7.6.1810 | CK |
| 192.168.107.12 | keeper01 | CentOS 7.6.1810 | Keeper |
| 192.168.107.13 | keeper02 | CentOS 7.6.1810 | Keeper |
| 192.168.107.14 | keeper03 | CentOS 7.6.1810 | Keeper |

Assume the data disk on every machine is mounted at /data01.

Installation Steps

All steps are performed as the root user.

Preparation

Run on every host:

```shell
# Raise the system nofile and nproc limits
echo "Raising nofile and nproc limits: start"
cp /etc/security/limits.conf /etc/security/limits.conf.bak
echo '*   soft    nofile   165535' >> /etc/security/limits.conf
echo '*   hard    nofile   165535' >> /etc/security/limits.conf
echo '*   soft    nproc    65535' >> /etc/security/limits.conf
echo '*   hard    nproc    65535' >> /etc/security/limits.conf
echo '*   soft    memlock  unlimited' >> /etc/security/limits.conf
echo '*   hard    memlock  unlimited' >> /etc/security/limits.conf
echo 'clickhouse   soft    memlock  unlimited' >> /etc/security/limits.conf
echo 'clickhouse   hard    memlock  unlimited' >> /etc/security/limits.conf
echo "Raising nofile and nproc limits: done"

# Check whether the limit took effect. The default is 1024; it should now show the
# value set above. (If it has not taken effect, start a new login session or reboot;
# the reboot can be deferred until all of "Preparation" is finished.)
ulimit -n

# Tune VM parameters
echo "Tuning VM parameters: start"
sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" >> /etc/sysctl.conf   # append; '>' would overwrite the whole file
echo "vm.swappiness=1" >> /etc/sysctl.conf
sysctl -p
echo "Tuning VM parameters: done"

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux (takes effect after reboot)
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable transparent huge pages
echo 'never' | tee /sys/kernel/mm/transparent_hugepage/enabled
```
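After a reboot (where needed), a quick sanity check confirms the settings stuck; a minimal sketch using only the values configured above:

```shell
# Expect 165535
ulimit -n
# Expect 262144 and 1, respectively
sysctl -n vm.max_map_count
sysctl -n vm.swappiness
# Expect Disabled (or Permissive) after reboot
getenforce
# Expect the brackets around "never": always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/enabled
```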

Install Keeper

Run on all three machines: keeper01, keeper02, keeper03.

Configure Host Aliases

```shell
echo '192.168.107.12 keeper01' >> /etc/hosts
echo '192.168.107.13 keeper02' >> /etc/hosts
echo '192.168.107.14 keeper03' >> /etc/hosts

# Verify
tail /etc/hosts
ping -c 2 keeper01
ping -c 2 keeper02
ping -c 2 keeper03
```

Run the Installation

```shell
rpm -ivh clickhouse-keeper-23.8.14.6.x86_64.rpm
```
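A quick confirmation that the package is in place:

```shell
# Expect clickhouse-keeper at version 23.8.14.6
rpm -qi clickhouse-keeper
```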

Configuration (some steps differ per host)

Run on all of keeper01/02/03:

```shell
# Back up the stock configuration
cp /etc/clickhouse-keeper/keeper_config.xml /root/keeper_config.xml

# Server id 1 is keeper01 on every node
sed -i "s#<hostname>localhost</hostname>#<hostname>keeper01</hostname>#g" /etc/clickhouse-keeper/keeper_config.xml

# Add servers 2 and 3 to <raft_configuration> (inserted at line 59 of the stock config)
sed -i '59i\
    \
                <server>\
                    <id>2</id>\
                    <hostname>keeper02</hostname>\
                    <port>9234</port>\
                </server>\
                <server>\
                    <id>3</id>\
                    <hostname>keeper03</hostname>\
                    <port>9234</port>\
                </server>\
' /etc/clickhouse-keeper/keeper_config.xml

# Remove the <openSSL> section (TLS is not used in this deployment)
sed -i '/<openSSL>/,/<\/openSSL>/d' /etc/clickhouse-keeper/keeper_config.xml

# Listen on all interfaces, for both clients and Raft peers (inserted at line 29 of the stock config)
sed -i '29i\
    <listen_host>0.0.0.0</listen_host>\
    <interserver_listen_host>0.0.0.0</interserver_listen_host>\
' /etc/clickhouse-keeper/keeper_config.xml
```
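Before moving on, it helps to eyeball the result; a minimal check that the Raft membership and listen settings landed where expected:

```shell
# Expect keeper01/keeper02/keeper03 with ids 1/2/3, port 9234, and the two listen_host lines
grep -E '<server_id>|<id>|<hostname>|<port>|listen_host' /etc/clickhouse-keeper/keeper_config.xml
```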

Run on keeper01:

```shell
# No-op: server_id stays 1 on keeper01; shown only for symmetry with the other nodes
sed -i "s#<server_id>1</server_id>#<server_id>1</server_id>#g" /etc/clickhouse-keeper/keeper_config.xml
```

Run on keeper02:

```shell
sed -i "s#<server_id>1</server_id>#<server_id>2</server_id>#g" /etc/clickhouse-keeper/keeper_config.xml
```

Run on keeper03:

```shell
sed -i "s#<server_id>1</server_id>#<server_id>3</server_id>#g" /etc/clickhouse-keeper/keeper_config.xml
```

Start the Service

Run on all three Keeper nodes:

```shell
systemctl start clickhouse-keeper.service
```

Verify

Run on all three Keeper nodes:

```shell
# Expect: active (running)
systemctl status clickhouse-keeper.service

# Expect: zk_server_state reports leader on exactly one node and follower on the other two
# (query the local instance on each node)
echo mntr | nc localhost 9181
```
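The same membership check can also be driven from any single host; a small loop over the `mntr` four-letter command:

```shell
# Expect one leader and two followers across the three nodes
for h in keeper01 keeper02 keeper03; do
    echo -n "$h: "
    echo mntr | nc "$h" 9181 | grep zk_server_state
done
```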

Recovering from a Failed Install (optional)

If verification fails, treat the installation as failed: remove everything, then install and configure again.

```shell
# Stop the service & uninstall
systemctl stop clickhouse-keeper.service
rpm -qa | grep click
rpm -e clickhouse-keeper
rpm -qa | grep click

# These deletions must not be skipped; otherwise the reinstalled node still comes up standalone
rm -rf /etc/clickhouse-keeper
rm -rf /var/lib/clickhouse/
rm -rf /var/log/clickhouse-keeper

# Reinstall
rpm -ivh clickhouse-keeper-23.8.14.6.x86_64.rpm

# Then repeat the configuration steps above
```

Install CK

Run on ck01/ck02/ck03/ck04; all four hosts have their data disk mounted at /data01.

Configure Host Aliases

```shell
echo '192.168.107.8 ck01' >> /etc/hosts
echo '192.168.107.9 ck02' >> /etc/hosts
echo '192.168.107.10 ck03' >> /etc/hosts
echo '192.168.107.11 ck04' >> /etc/hosts
echo '192.168.107.12 keeper01' >> /etc/hosts
echo '192.168.107.13 keeper02' >> /etc/hosts
echo '192.168.107.14 keeper03' >> /etc/hosts

# Verify
tail /etc/hosts
ping -c 2 ck01
ping -c 2 ck02
ping -c 2 ck03
ping -c 2 ck04
ping -c 2 keeper01
ping -c 2 keeper02
ping -c 2 keeper03
```

Run the Installation

```shell
# clickhouse-common-static-dbg-23.8.14.6.x86_64.rpm (debug symbols) is optional
rpm -ivh ./clickhouse-common-static-23.8.14.6.x86_64.rpm \
         ./clickhouse-server-23.8.14.6.x86_64.rpm \
         ./clickhouse-client-23.8.14.6.x86_64.rpm
# When the server package prompts for the default user's password, enter: p@$$W0rd
```
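With the standard packaging, the password entered here is stored as a SHA-256 hash in a users.d override; a quick way to confirm it was recorded (the file name below is the packaging default, not something configured in this guide):

```shell
# Expect a <password_sha256_hex> entry for the default user
cat /etc/clickhouse-server/users.d/default-password.xml
```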

Relocate the Data Directories

Map the default locations onto the /data01 data disk via symlinks:

```shell
mkdir -p /data01/clickhouse-server/data/
mv /var/lib/clickhouse /data01/clickhouse-server/data/
ln -s /data01/clickhouse-server/data/clickhouse /var/lib/clickhouse
chown -R clickhouse:clickhouse /data01/clickhouse-server/data/clickhouse

mkdir -p /data01/clickhouse-server/log/
mv /var/log/clickhouse-server/ /data01/clickhouse-server/log/
ln -s /data01/clickhouse-server/log/clickhouse-server /var/log/clickhouse-server
chown -R clickhouse:clickhouse /data01/clickhouse-server/log/
```
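A short check that the relocation worked, before the first start:

```shell
# Both links should resolve into /data01 and be owned by clickhouse
ls -ld /var/lib/clickhouse /var/log/clickhouse-server
readlink -f /var/lib/clickhouse /var/log/clickhouse-server
```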

Edit the Configuration

This covers general settings plus the shard and replica layout; the configuration file is /etc/clickhouse-server/config.xml. (A quick sanity check for the edited XML is sketched at the end of this section.)

  • Change the log level `<level>trace</level>` to `<level>warning</level>`.

  • Uncomment `<listen_host>0.0.0.0</listen_host>` so the CK service accepts connections from all hosts.

  • Uncomment `<timezone>UTC</timezone>` and change it to `<timezone>Asia/Shanghai</timezone>`.

  • In `<remote_servers>`, comment out `<test_shard_localhost>` and the other sample shards, then add the following:

    ```xml
            <cluster1>
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>ck01</host>
                        <port>9000</port>
                        <user>default</user>
                        <password>p@$$W0rd</password>
                    </replica>
                    <replica>
                        <host>ck03</host>
                        <port>9000</port>
                        <user>default</user>
                        <password>p@$$W0rd</password>
                    </replica>
                </shard>
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>ck02</host>
                        <port>9000</port>
                        <user>default</user>
                        <password>p@$$W0rd</password>
                    </replica>
                    <replica>
                        <host>ck04</host>
                        <port>9000</port>
                        <user>default</user>
                        <password>p@$$W0rd</password>
                    </replica>
                </shard>
            </cluster1>
    ```
  • Configure the `<zookeeper>` section to point at the Keeper nodes:

    ```xml
              <zookeeper>
                  <node>
                      <host>keeper01</host>
                      <port>9181</port>
                  </node>
                  <node>
                      <host>keeper02</host>
                      <port>9181</port>
                  </node>
                  <node>
                      <host>keeper03</host>
                      <port>9181</port>
                  </node>
              </zookeeper>
    ```
  • (Differs per host) Add the macro definitions:

    • ck01: search for the keyword `macros` and change it to:

      ```xml
          <macros>
              <shard>01</shard>
              <replica>ck01</replica>
          </macros>
      ```
    • ck02: search for the keyword `macros` and change it to:

      ```xml
          <macros>
              <shard>02</shard>
              <replica>ck02</replica>
          </macros>
      ```
    • ck03: search for the keyword `macros` and change it to:

      ```xml
          <macros>
              <shard>01</shard>
              <replica>ck03</replica>
          </macros>
      ```
    • ck04: search for the keyword `macros` and change it to:

      ```xml
          <macros>
              <shard>02</shard>
              <replica>ck04</replica>
          </macros>
      ```
  • Other config.xml parameter tuning:

    Reference: https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#background_schedule_pool_size

    • Size of the background thread pool used for merges:

      ```xml
      <background_pool_size>128</background_pool_size>
      ```
    • Maximum number of threads for lightweight periodic operations (replicated-table housekeeping, Kafka streaming, DNS cache updates):

      ```xml
      <background_schedule_pool_size>128</background_schedule_pool_size>
      ```
    • Maximum number of threads used for distributed sends:

      ```xml
      <background_distributed_schedule_pool_size>128</background_distributed_schedule_pool_size>
      ```
    • Enable Prometheus monitoring:

      In /etc/clickhouse-server/config.xml, search for the keyword `prometheus` and remove the `<!--` and `-->` comment markers around the `<prometheus>` block. No restart is needed; the change takes effect immediately.

      Verify: the curl below should print metrics output. Then register the CK hosts with Prometheus:

      ```shell
      curl http://127.0.0.1:9363/metrics

      # On the Prometheus server: register the CK hosts as a scrape job
      cat << 'EOF' >> /etc/prometheus/prometheus.yml
        - job_name: 'ck-monitor'
          static_configs:
            - targets: ['192.168.107.8:9363','192.168.107.9:9363','192.168.107.10:9363','192.168.107.11:9363']
      EOF

      tail /etc/prometheus/prometheus.yml
      ```
  • Other users.xml parameter tuning:

    Reference: https://clickhouse.com/docs/en/operations/settings/query-complexity

    • A complete reference:

      ```xml
      <clickhouse>
          <profiles>
              <default>
                  <receive_timeout>1800</receive_timeout>
                  <send_timeout>1800</send_timeout>
                  <max_memory_usage>100G</max_memory_usage>
                  <max_bytes_before_external_group_by>50G</max_bytes_before_external_group_by>
              </default>
              <readonly>
                  <readonly>1</readonly>
              </readonly>
          </profiles>
          <users>
              <default>
                  <password></password>
                  <networks>
                      <ip>::/0</ip>
                  </networks>
                  <profile>default</profile>
                  <quota>default</quota>
                  <access_management>1</access_management>
                  <named_collection_control>1</named_collection_control>
                  <show_named_collections>1</show_named_collections>
                  <show_named_collections_secrets>1</show_named_collections_secrets>
              </default>
          </users>
          <quotas>
              <default>
                  <interval>
                      <duration>3600</duration>
                      <queries>0</queries>
                      <errors>0</errors>
                      <result_rows>0</result_rows>
                      <read_rows>0</read_rows>
                      <execution_time>0</execution_time>
                  </interval>
              </default>
          </quotas>
      </clickhouse>
      ```
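Before the first start, it is worth confirming that the edited files are still well-formed XML; a minimal sanity check, assuming xmllint (from libxml2) is installed:

```shell
# No output means the file parses cleanly
xmllint --noout /etc/clickhouse-server/config.xml
xmllint --noout /etc/clickhouse-server/users.xml
```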

Start the Service

```shell
systemctl start clickhouse-server
```

Verify

  • Check the service status:

    ```shell
    systemctl status clickhouse-server
    ```
  • Verify with clickhouse-client:

    ```shell
    clickhouse-client -m --password 'p@$$W0rd'

    # Exit with Ctrl + D
    ```
    • Verify the shards:

      ```sql
      SELECT cluster, host_name FROM system.clusters;
      ```

      Expect cluster1 to be present.

    • Verify the macros:

      ```sql
      SELECT * FROM system.macros;
      ```

      Expect shard = 01 on ck01/ck03 and shard = 02 on ck02/ck04, with replica equal to each host's own name.

    • Verify replication:

      ```sql
      ---- Test automatic database creation
      -- Run the CREATE DATABASE statement on ck01
      CREATE DATABASE IF NOT EXISTS cluster_db ON CLUSTER cluster1;
      -- Check the remaining CK hosts: a database named cluster_db should have been created automatically on ck02/ck03/ck04
      show databases;

      ---- Test automatic table creation
      -- Run the CREATE TABLE statement on ck01
      CREATE TABLE cluster_db.test_1_local ON CLUSTER cluster1(
          ID String,
          URL String,
          EventTime Date
      )ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test_1_local', '{replica}')
      ORDER BY ID;
      -- Check the remaining CK hosts: a table named test_1_local should have been created automatically on ck02/ck03/ck04
      use cluster_db;
      show tables;

      ---- Test replication within a shard on INSERT
      -- Run the INSERT statement on ck01
      INSERT INTO TABLE cluster_db.test_1_local VALUES('A001', 'www.baidu.com', '2019-05-10');
      -- Check the other hosts: the row should also appear on ck03 (the other replica of shard 01); ck02/ck04 belong to shard 02 and stay empty
      select * from cluster_db.test_1_local;
      ```
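To read both shards as one logical table, a Distributed table can be layered over the local replicated tables; a sketch run from any CK host, where the table name `test_1_all` is illustrative rather than part of the original setup:

```shell
# Create the Distributed table on every node in cluster1, sharding inserts by rand()
clickhouse-client --password 'p@$$W0rd' --query "
    CREATE TABLE cluster_db.test_1_all ON CLUSTER cluster1
    AS cluster_db.test_1_local
    ENGINE = Distributed(cluster1, cluster_db, test_1_local, rand())"

# Reads fan out to one replica of each shard; expect the row inserted above
clickhouse-client --password 'p@$$W0rd' --query "SELECT hostName(), * FROM cluster_db.test_1_all"
```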

Other CK Ecosystem Components

Install Prometheus Monitoring

See: ClickHouse Installation (ZooKeeper edition)

Install Lighthouse

See: ClickHouse Installation (ZooKeeper edition)
