23. ClickHouse Distributed Cluster Deployment

23.1. Cluster Deployment

23.1.1. Preparation

Node planning:

Hostname       IP address        Shard     Replica
clickhouse1    192.168.106.103   shard1    replica 1
clickhouse2    192.168.106.104   shard1    replica 2
clickhouse3    192.168.106.105   shard2    replica 1
clickhouse4    192.168.106.106   shard2    replica 2

The plan uses 4 nodes: 2 shards with 2 replicas each. The replicas of shard 1 live on hosts clickhouse1 and clickhouse2, and the replicas of shard 2 live on hosts clickhouse3 and clickhouse4.

Operating system preparation:
(1) Change the hostname
Set the hostname of each node to match the plan above.
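On a systemd-based system this can be done with hostnamectl, for example:

hostnamectl set-hostname clickhouse1    # on 192.168.106.103; use clickhouse2/3/4 on the other nodes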

(2) Disable the firewall and SELinux.

Disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl is-enabled firewalld.service

SELinux configuration:
vim /etc/sysconfig/selinux
Change SELINUX=enforcing to SELINUX=disabled
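
The change in /etc/sysconfig/selinux only takes effect after a reboot; to turn SELinux off for the current session as well, setenforce can be used:

setenforce 0    # put SELinux into permissive mode immediately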

Check the SELinux status:
[root@localhost etc]# getenforce
Disabled
[root@localhost etc]#

(3) Configure /etc/hosts
The hostname-to-IP mappings must be configured in /etc/hosts on every node; otherwise data cannot be synchronized between replicas.

[root@clickhouse1 ~]# cat /etc/hosts
192.168.106.103    clickhouse1
192.168.106.104    clickhouse2
192.168.106.105    clickhouse3
192.168.106.106    clickhouse4
[root@clickhouse1 ~]#

23.1.2. Install and configure ZooKeeper
Download ZooKeeper; version 3.4.5 or later is required.
Copy zoo_sample.cfg in the conf directory and name the copy zoo.cfg.
Environment variables:

export ZOOKEEPER_HOME=/root/apache-zookeeper-3.6.2-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin

ZooKeeper configuration (zoo.cfg):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/root/apache-zookeeper-3.6.2-bin/data
dataLogDir=/root/apache-zookeeper-3.6.2-bin/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
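
The zoo.cfg above runs a single standalone ZooKeeper node, which is a single point of failure for the cluster. If an ensemble is preferred, each ZooKeeper host would additionally list the members in zoo.cfg and carry a matching myid file in dataDir, for example (the host addresses here simply reuse nodes from the plan above):

server.1=192.168.106.103:2888:3888
server.2=192.168.106.104:2888:3888
server.3=192.168.106.105:2888:3888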

The dataDir and dataLogDir directories configured above must be created manually.
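For example (paths taken from the zoo.cfg above):

mkdir -p /root/apache-zookeeper-3.6.2-bin/data
mkdir -p /root/apache-zookeeper-3.6.2-bin/log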

Then start ZooKeeper. Start command:

$ZOOKEEPER_HOME/bin/zkServer.sh start
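
The server state can be verified with the bundled status command; a standalone node should report "Mode: standalone":

$ZOOKEEPER_HOME/bin/zkServer.sh status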

23.1.3. Install ClickHouse on all hosts
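The installation itself is not detailed here. As one option on CentOS/RHEL hosts, the official RPM repository can be used (a sketch; the repository URL and package names follow the upstream documentation and may differ in your environment):

yum install -y yum-utils
yum-config-manager --add-repo https://packages.clickhouse.com/rpm/clickhouse.repo
yum install -y clickhouse-server clickhouse-client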

23.1.4. Modify the ClickHouse network configuration
Edit the configuration file /etc/clickhouse-server/config.xml on each node.
Uncomment the following listen_host entries and adjust them:

<listen_host>::1</listen_host>
<listen_host>0.0.0.0</listen_host>

On clickhouse1:

<listen_host>::1</listen_host>
<listen_host>192.168.106.103</listen_host>

On clickhouse2:

<listen_host>::1</listen_host>
<listen_host>192.168.106.104</listen_host>

On clickhouse3:

<listen_host>::1</listen_host>
<listen_host>192.168.106.105</listen_host>

On clickhouse4:

<listen_host>::1</listen_host>
<listen_host>192.168.106.106</listen_host>

23.1.5. Add the configuration file /etc/metrika.xml
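ClickHouse reads this file through the incl attributes in /etc/clickhouse-server/config.xml (remote_servers, zookeeper, macros); /etc/metrika.xml is the conventional default include path. If config.xml in your version does not already reference it, the include can be declared explicitly, roughly as follows (a sketch; compare with the elements actually present in your config.xml):

<include_from>/etc/metrika.xml</include_from>
<remote_servers incl="clickhouse_remote_servers" />
<zookeeper incl="zookeeper-servers" optional="true" />
<macros incl="macros" optional="true" />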

Configuration on clickhouse1:

<?xml version="1.0" encoding="utf-8"?>

<yandex> 
  <clickhouse_remote_servers> 
    <mycluster> 
      <shard> 
        <internal_replication>true</internal_replication>  
        <replica> 
          <host>192.168.106.103</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.104</host>  
          <port>9000</port> 
        </replica> 
      </shard>  
      <shard> 
        <internal_replication>true</internal_replication> 
        <replica> 
          <host>192.168.106.105</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.106</host>  
          <port>9000</port> 
        </replica> 
      </shard> 
    </mycluster> 
  </clickhouse_remote_servers>  
  <zookeeper-servers> 
    <node index="1"> 
      <host>192.168.106.103</host>  
      <port>2181</port> 
    </node> 
  </zookeeper-servers>  
  <macros> 
    <layer>01</layer>  
    <shard>01</shard>  
    <replica>192.168.106.103</replica> 
  </macros>
</yandex>

Configuration on clickhouse2:

<?xml version="1.0" encoding="utf-8"?>

<yandex> 
  <clickhouse_remote_servers> 
    <mycluster> 
      <shard> 
        <internal_replication>true</internal_replication>  
        <replica> 
          <host>192.168.106.103</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.104</host>  
          <port>9000</port> 
        </replica> 
      </shard>  
      <shard> 
        <internal_replication>true</internal_replication> 
        <replica> 
          <host>192.168.106.105</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.106</host>  
          <port>9000</port> 
        </replica> 
      </shard> 
    </mycluster> 
  </clickhouse_remote_servers>  
  <zookeeper-servers> 
    <node index="1"> 
      <host>192.168.106.103</host>  
      <port>2181</port> 
    </node> 
  </zookeeper-servers>  
  <macros> 
    <layer>01</layer>  
    <shard>01</shard>  
    <replica>192.168.106.104</replica>
  </macros>
</yandex>

Configuration on clickhouse3:

<?xml version="1.0" encoding="utf-8"?>

<yandex> 
  <clickhouse_remote_servers> 
    <mycluster> 
      <shard> 
        <internal_replication>true</internal_replication>  
        <replica> 
          <host>192.168.106.103</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.104</host>  
          <port>9000</port> 
        </replica> 
      </shard>  
      <shard> 
        <internal_replication>true</internal_replication> 
        <replica> 
          <host>192.168.106.105</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.106</host>  
          <port>9000</port> 
        </replica> 
      </shard> 
    </mycluster> 
  </clickhouse_remote_servers>  
  <zookeeper-servers> 
    <node index="1"> 
      <host>192.168.106.103</host>  
      <port>2181</port> 
    </node> 
  </zookeeper-servers>  
  <macros> 
    <layer>01</layer>  
    <shard>02</shard>  
    <replica>192.168.106.105</replica> 
  </macros>
</yandex>

Configuration on clickhouse4:

<?xml version="1.0" encoding="utf-8"?>

<yandex> 
  <clickhouse_remote_servers> 
    <mycluster> 
      <shard> 
        <internal_replication>true</internal_replication>  
        <replica> 
          <host>192.168.106.103</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.104</host>  
          <port>9000</port> 
        </replica> 
      </shard>  
      <shard> 
        <internal_replication>true</internal_replication> 
        <replica> 
          <host>192.168.106.105</host>  
          <port>9000</port> 
        </replica>  
        <replica> 
          <host>192.168.106.106</host>  
          <port>9000</port> 
        </replica> 
      </shard> 
    </mycluster> 
  </clickhouse_remote_servers>  
  <zookeeper-servers> 
    <node index="1"> 
      <host>192.168.106.103</host>  
      <port>2181</port> 
    </node> 
  </zookeeper-servers>  
  <macros> 
    <layer>01</layer>  
    <shard>02</shard>  
    <replica>192.168.106.106</replica> 
  </macros>  
</yandex>

The following part of the configuration is different on each node:

<macros> 
    <layer>01</layer>  
    <shard>01</shard>  
    <replica>192.168.106.103</replica> 
</macros>

layer is the layer (tier) identifier; shard is the shard number, assigned from 1 in the order the shards are configured, so 01 here means the first shard; replica is the replica identifier, set here to the IP address of the local machine. Together these three macros uniquely identify one replica of one shard: on clickhouse1 they denote the replica of shard 01 in layer 01 with replica identifier 192.168.106.103.

After the configuration is complete, start (or restart) the ClickHouse service on each node:
systemctl restart clickhouse-server
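
Each node can then be given a quick sanity check with the ClickHouse client, connecting to the address configured in listen_host (adjust the host per node):

clickhouse-client --host 192.168.106.103 --query "SELECT version()"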

23.1.6. Check the system tables

clickhouse1 :) select * from system.clusters where cluster='mycluster';

SELECT *
FROM system.clusters
WHERE cluster = 'mycluster'

┌─cluster───┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────┬─host_address────┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ mycluster │         1 │            1 │           1 │ 192.168.106.103 │ 192.168.106.103 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ mycluster │         1 │            1 │           2 │ 192.168.106.104 │ 192.168.106.104 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ mycluster │         2 │            1 │           1 │ 192.168.106.105 │ 192.168.106.105 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ mycluster │         2 │            1 │           2 │ 192.168.106.106 │ 192.168.106.106 │ 9000 │        0 │ default │                  │            0 │                       0 │
└───────────┴───────────┴──────────────┴─────────────┴─────────────────┴─────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

4 rows in set. Elapsed: 0.005 sec. 

clickhouse1 :)
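
With all four replicas visible in system.clusters, the macros defined in /etc/metrika.xml can be used when creating replicated tables. A minimal sketch (the table and column names here are made up for illustration):

CREATE TABLE test_local ON CLUSTER mycluster
(
    id UInt32,
    name String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/test_local', '{replica}')
ORDER BY id;

CREATE TABLE test_all ON CLUSTER mycluster AS test_local
ENGINE = Distributed(mycluster, default, test_local, rand());

Writes within each shard are replicated between its two replicas through ZooKeeper, and queries against the Distributed table test_all fan out across both shards.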
