Deploying ClickHouse on vSphere Kubernetes Service

Introduction

ClickHouse is an open-source, column-oriented database known for its blazing-fast performance in real-time analytics and data warehousing workloads. Deploying ClickHouse in a Kubernetes environment enables flexibility, scalability, and easier lifecycle management.

In this post, we’ll walk through how to deploy ClickHouse natively on a vSphere Kubernetes Service (VKS) cluster.

Why Run ClickHouse on vSphere Kubernetes Service?

vSphere Kubernetes Service integrates Kubernetes directly into the vSphere platform. It allows developers and operators to run modern workloads alongside traditional VMs with consistent security and networking policies.

Running ClickHouse on VKS offers several benefits:

  • Simplified Operations: Unified management of Kubernetes and VMs via vCenter.
  • High Performance: ClickHouse benefits from VCF's optimized storage and networking stack.
  • Scalability: Easily scale ClickHouse pods and replicas using Kubernetes constructs.
  • Security & Isolation: vSphere namespaces provide workload-level segmentation and governance.

Creating a VKS Cluster

  • Connect to the vSphere Supervisor and deploy a VKS cluster with at least three worker nodes. Each worker node should have at least 4 vCPUs and 8 GB of RAM for this deployment.
root@image-builder:~# vcf context create --endpoint 172.16.22.13 --username administrator@vsphere.local --insecure-skip-tls-verify
? Provide a name for the context: supervisor
[i] Auth type vSphere SSO detected. Proceeding for authentication...
Provide Password:
Logged in successfully.
You have access to the following contexts:
supervisor
supervisor:clickhouse-databas
supervisor:clickhouse-ns
supervisor:svc-cci-ns-domain-c10
supervisor:svc-harbor-domain-c10
supervisor:svc-tkg-domain-c10
supervisor:svc-velero-domain-c10
If the namespace context you wish to use is not in this list, you may need to
refresh the context again, or contact your cluster administrator.
To change context, use `vcf context use <context_name>`
[ok] successfully created context: supervisor
[ok] successfully created context: supervisor:svc-velero-domain-c10
[ok] successfully created context: supervisor:svc-cci-ns-domain-c10
[ok] successfully created context: supervisor:svc-harbor-domain-c10
[ok] successfully created context: supervisor:clickhouse-database
[ok] successfully created context: supervisor:clickhouse-ns
[ok] successfully created context: supervisor:svc-tkg-domain-c10
root@image-builder:~# vcf context list
NAME CURRENT TYPE
clickhouse-db false kubernetes
clickhouse-db:clickhouse-database-cluster false kubernetes
supervisor false kubernetes
supervisor:clickhouse-database false kubernetes
supervisor:clickhouse-ns false kubernetes
supervisor:svc-cci-ns-domain-c10 false kubernetes
supervisor:svc-harbor-domain-c10 false kubernetes
supervisor:svc-tkg-domain-c10 false kubernetes
supervisor:svc-velero-domain-c10 false kubernetes
[i] Use '--wide' to view additional columns.
root@image-builder:~# vcf context use supervisor:clickhouse-ns
[ok] Token is still active. Skipped the token refresh for context "supervisor:clickhouse-ns"
[i] Successfully activated context 'supervisor:clickhouse-ns' (Type: kubernetes)
[i] Fetching recommended plugins for active context 'supervisor:clickhouse-ns'...
[ok] All recommended plugins are already installed and up-to-date.
root@image-builder:~# kubectl apply -f /home/pj/clickhouse-cluster-1.yaml
cluster.cluster.x-k8s.io/clickhouse-cl-1 created
  • The configuration YAML used to create the VKS cluster is shown below:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: clickhouse-cl
  namespace: clickhouse-ns
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.156.0/20
    services:
      cidrBlocks:
        - 10.96.0.0/12
    serviceDomain: cluster.local
  topology:
    class: builtin-generic-v3.4.0
    version: v1.33.3---vmware.1-fips-vkr.1
    variables:
      - name: vsphereOptions
        value:
          persistentVolumes:
            defaultStorageClass: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
      - name: kubernetes
        value:
          certificateRotation:
            enabled: true
            renewalDaysBeforeExpiry: 90
          security:
            podSecurityStandard:
              audit: restricted
              auditVersion: latest
              enforce: privileged
              enforceVersion: latest
              warn: privileged
              warnVersion: latest
      - name: osConfiguration
        value:
          ntp:
            servers:
              - 172.16.9.1
      - name: vmClass
        value: guaranteed-small
      - name: storageClass
        value: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
    controlPlane:
      replicas: 3
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu,content-library=cl-65959d00ab4790f1c,os-version=24.04
    workers:
      machineDeployments:
        - class: node-pool
          name: clickhouse-cluster-nodepool-so2l
          replicas: 3
          metadata:
            annotations:
              run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu,content-library=cl-65959d00ab4790f1c,os-version=24.04
          variables:
            overrides:
              - name: vmClass
                value: guaranteed-large
              - name: volumes
                value:
                  - name: vol-5lo8
                    mountPath: /var/lib/containerd
                    storageClass: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
                    capacity: 50Gi
                  - name: vol-diqs
                    mountPath: /var/lib/kubelet
                    storageClass: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
                    capacity: 50Gi
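As a quick sanity check (not part of the deployment itself), the pod and service CIDRs in the manifest must not overlap, or cluster networking will misroute traffic. A short Python sketch using the standard ipaddress module can verify this:

```python
import ipaddress

# CIDRs copied from the cluster manifest above.
# strict=False because 192.168.156.0/20 has host bits set;
# the containing /20 network is 192.168.144.0/20.
pods = ipaddress.ip_network("192.168.156.0/20", strict=False)
services = ipaddress.ip_network("10.96.0.0/12")

# Overlapping pod/service ranges would break cluster routing.
print(pods.overlaps(services))  # False -> the ranges are disjoint
print(pods.num_addresses)       # 4096 addresses in a /20
```

This is purely an illustrative pre-flight check; the Supervisor performs its own validation when the Cluster object is applied.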
  • Wait for the cluster creation to finish and verify the cluster status
root@image-builder:~# kubectl get clusters
NAME CLUSTERCLASS PHASE AGE VERSION
clickhouse-cl builtin-generic-v3.4.0 Provisioned 16m v1.33.3+vmware.1-fips
root@image-builder:~# kubectl describe cluster clickhouse-cl
Name: clickhouse-cl
Namespace: clickhouse-ns
Labels: cluster.x-k8s.io/cluster-name=clickhouse-cl
run.tanzu.vmware.com/tkr=v1.33.3---vmware.1-fips-vkr.1
topology.cluster.x-k8s.io/owned=
... (output truncated) ...
Control Plane:
Available Replicas: 3
Desired Replicas: 3
Ready Replicas: 3
Replicas: 3
Up To Date Replicas: 3
Workers:
Available Replicas: 3
Desired Replicas: 3
Ready Replicas: 3
Replicas: 3
Up To Date Replicas: 3
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pending 16m cluster-controller Cluster clickhouse-cl is Pending
Normal TopologyCreate 16m topology/cluster-controller Created VSphereCluster "clickhouse-ns/clickhouse-cl-j7l5d"
Normal TopologyCreate 16m topology/cluster-controller Created VSphereMachineTemplate "clickhouse-ns/clickhouse-cl-8h4sz"
Normal TopologyCreate 16m topology/cluster-controller Created KubeadmControlPlane "clickhouse-ns/clickhouse-cl-8pmq4"
Normal TopologyUpdate 16m topology/cluster-controller Updated Cluster "clickhouse-ns/clickhouse-cl"
Normal TopologyCreate 16m topology/cluster-controller Created VSphereMachineTemplate "clickhouse-ns/clickhouse-cl-clickhouse-cluster-nodepool-so2l-q5c8s"
Normal Provisioning 16m (x2 over 16m) cluster-controller Cluster clickhouse-cl is Provisioning
Normal TopologyCreate 16m topology/cluster-controller Created KubeadmConfigTemplate "clickhouse-ns/clickhouse-cl-clickhouse-cluster-nodepool-so2l-7x477"
Normal TopologyCreate 16m topology/cluster-controller Created MachineDeployment "clickhouse-ns/clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm"
Normal TopologyUpdate 16m topology/cluster-controller Updated KubeadmControlPlane "clickhouse-ns/clickhouse-cl-8pmq4"
Normal InfrastructureReady 13m (x2 over 13m) cluster-controller Cluster clickhouse-cl InfrastructureReady is now True
Normal Provisioned 13m (x2 over 13m) cluster-controller Cluster clickhouse-cl is Provisioned
Normal ControlPlaneReady 7m17s cluster-controller Cluster clickhouse-cl ControlPlaneReady is now True
  • Log in to the VKS cluster and create a new namespace for the ClickHouse database deployment
root@image-builder:~# vcf context create --endpoint 172.16.22.13 --username administrator@vsphere.local --insecure-skip-tls-verify --workload-cluster-name clickhouse-cl --workload-cluster-namespace clickhouse-ns
? Provide a name for the context: clickhouse
Provide Password:
[i] Logging in to Kubernetes cluster (clickhouse-cl-) (clickhouse-ns)
[i] Successfully logged in to Kubernetes cluster 172.16.22.15
You have access to the following contexts:
clickhouse
clickhouse:clickhouse-cl
If the namespace context you wish to use is not in this list, you may need to
refresh the context again, or contact your cluster administrator.
To change context, use `vcf context use <context_name>`
[ok] successfully created context: clickhouse
[ok] successfully created context: clickhouse:clickhouse-cl
root@image-builder:~# vcf context use clickhouse:clickhouse-cl
[ok] Token is still active. Skipped the token refresh for context "clickhouse:clickhouse-cl"
[i] Successfully activated context 'clickhouse:clickhouse-cl' (Type: kubernetes)
[i] Fetching recommended plugins for active context 'clickhouse:clickhouse-cl'...
[ok] No recommended plugins found.
root@image-builder:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
clickhouse-cl-8pmq4-bclh7 Ready control-plane 6m7s v1.33.3+vmware.1-fips 172.16.24.18 <none> Ubuntu 24.04.3 LTS 6.8.0-79-generic containerd://2.0.6+vmware.1-fips
clickhouse-cl-8pmq4-dqqxn Ready control-plane 12m v1.33.3+vmware.1-fips 172.16.24.20 <none> Ubuntu 24.04.3 LTS 6.8.0-79-generic containerd://2.0.6+vmware.1-fips
clickhouse-cl-8pmq4-hmlwd Ready control-plane 4m5s v1.33.3+vmware.1-fips 172.16.24.31 <none> Ubuntu 24.04.3 LTS 6.8.0-79-generic containerd://2.0.6+vmware.1-fips
clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm-hvs4smq5 Ready <none> 9m4s v1.33.3+vmware.1-fips 172.16.24.23 <none> Ubuntu 24.04.3 LTS 6.8.0-79-generic containerd://2.0.6+vmware.1-fips
clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm-hvsfb4j9 Ready <none> 8m10s v1.33.3+vmware.1-fips 172.16.24.24 <none> Ubuntu 24.04.3 LTS 6.8.0-79-generic containerd://2.0.6+vmware.1-fips
clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm-hvsxllk4 Ready <none> 8m24s v1.33.3+vmware.1-fips 172.16.24.30 <none> Ubuntu 24.04.3 LTS 6.8.0-79-generic containerd://2.0.6+vmware.1-fips
root@image-builder:~# kubectl create ns clickhouse
namespace/clickhouse created

Deployment of ClickHouse Operator

The first step is to deploy the ClickHouse operator, which simplifies deploying and managing ClickHouse on VKS.

  • Installing the ClickHouse operator using Helm is the preferred approach. First, add helm.altinity.com to the Helm repositories:
root@image-builder:~# helm repo add altinity https://helm.altinity.com
"altinity" has been added to your repositories
root@image-builder:~# helm repo list
NAME URL
altinity https://helm.altinity.com
root@image-builder:~# helm repo update altinity
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "altinity" chart repository
Update Complete. ⎈Happy Helming!⎈
  • Install the ClickHouse operator on the VKS cluster
root@image-builder:~# helm upgrade --install clickhouse-operator altinity/altinity-clickhouse-operator --version 0.25.4 --namespace clickhouse
Release "clickhouse-operator" does not exist. Installing it now.
NAME: clickhouse-operator
LAST DEPLOYED: Sun Oct 5 09:00:29 2025
NAMESPACE: clickhouse
STATUS: deployed
REVISION: 1
TEST SUITE: None
  • Wait a couple of minutes for the operator deployment to finish, then verify its status

root@image-builder:~# kubectl get deployment.apps -n clickhouse
NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
clickhouse-operator-altinity-clickhouse-operator   1/1     1            1           26s
root@image-builder:~# kubectl get pods -n clickhouse
NAME                                                              READY   STATUS    RESTARTS   AGE
clickhouse-operator-altinity-clickhouse-operator-7d9cf7b78fnpw2   2/2     Running   0          27s

Deployment of ClickHouse Cluster

  • To deploy ClickHouse on VKS, use the installation manifest below. It creates a ClickHouse database cluster with 3 shards and 3 replicas (9 pods in total).
root@image-builder:~# cat /home/pj/clickhouse-cluster-database.yaml
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: clickhouse-demo
  namespace: clickhouse
spec:
  configuration:
    clusters:
      - name: cluster1
        layout:
          shardsCount: 3
          replicasCount: 3
        templates:
          podTemplate: clickhouse-pod-template
          volumeClaimTemplate: clickhouse-storage-template
    users:
      default/networks/ip:
        - "0.0.0.0/0"
  templates:
    podTemplates:
      - name: clickhouse-pod-template
        spec:
          containers:
            - name: clickhouse
              image: altinity/clickhouse-server:25.3.6.10034.altinitystable
              ports:
                - containerPort: 8123
                - containerPort: 9000
    volumeClaimTemplates:
      - name: clickhouse-storage-template
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 20Gi
          storageClassName: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
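With shardsCount: 3 and replicasCount: 3, the operator creates one StatefulSet (and hence one pod) per shard/replica pair. The naming pattern, chi-<installation>-<cluster>-<shard>-<replica>, is inferred from the operator's output, so treat this short Python sketch of it as illustrative:

```python
# Illustrative sketch of the Altinity operator's naming pattern,
# chi-<installation>-<cluster>-<shard>-<replica>, as observed in
# its output; the values come from the manifest above.
chi_name, cluster = "clickhouse-demo", "cluster1"
shards, replicas = 3, 3

# Each StatefulSet runs a single pod, hence the trailing "-0" ordinal.
pods = [
    f"chi-{chi_name}-{cluster}-{shard}-{replica}-0"
    for shard in range(shards)
    for replica in range(replicas)
]

print(len(pods))  # 9 pods: 3 shards x 3 replicas
print(pods[0])    # chi-clickhouse-demo-cluster1-0-0-0
```

This is why nine chi-clickhouse-demo-cluster1-*-* pods, services, and StatefulSets appear once the installation completes.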
  • Deploy the ClickHouse Cluster using the above installation manifest and wait for the installation to finish successfully.
root@image-builder:~# kubectl apply -f /home/pj/clickhouse-cluster-database.yaml -n clickhouse
clickhouseinstallation.clickhouse.altinity.com/clickhouse-demo created
root@image-builder:~# kubectl get clickhouseinstallation -n clickhouse
NAME CLUSTERS HOSTS STATUS HOSTS-COMPLETED AGE SUSPEND
clickhouse-demo 1 9 Completed 3m14s
root@image-builder:~# kubectl get all -n clickhouse
NAME READY STATUS RESTARTS AGE
pod/chi-clickhouse-demo-cluster1-0-0-0 1/1 Running 0 3m4s
pod/chi-clickhouse-demo-cluster1-0-1-0 1/1 Running 0 2m9s
pod/chi-clickhouse-demo-cluster1-0-2-0 1/1 Running 0 73s
pod/chi-clickhouse-demo-cluster1-1-0-0 1/1 Running 0 3m5s
pod/chi-clickhouse-demo-cluster1-1-1-0 1/1 Running 0 119s
pod/chi-clickhouse-demo-cluster1-1-2-0 1/1 Running 0 78s
pod/chi-clickhouse-demo-cluster1-2-0-0 1/1 Running 0 3m4s
pod/chi-clickhouse-demo-cluster1-2-1-0 1/1 Running 0 2m14s
pod/chi-clickhouse-demo-cluster1-2-2-0 1/1 Running 0 87s
pod/clickhouse-operator-altinity-clickhouse-operator-7d9cf7b78p56tk 2/2 Running 0 25m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/chi-clickhouse-demo-cluster1-0-0 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 3m6s
service/chi-clickhouse-demo-cluster1-0-1 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 2m9s
service/chi-clickhouse-demo-cluster1-0-2 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 74s
service/chi-clickhouse-demo-cluster1-1-0 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 3m7s
service/chi-clickhouse-demo-cluster1-1-1 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 2m2s
service/chi-clickhouse-demo-cluster1-1-2 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 81s
service/chi-clickhouse-demo-cluster1-2-0 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 3m6s
service/chi-clickhouse-demo-cluster1-2-1 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 2m15s
service/chi-clickhouse-demo-cluster1-2-2 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 89s
service/clickhouse-clickhouse-demo ClusterIP None <none> 8123/TCP,9000/TCP 2m11s
service/clickhouse-operator-altinity-clickhouse-operator-metrics ClusterIP 10.111.189.190 <none> 8888/TCP,9999/TCP 25m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/clickhouse-operator-altinity-clickhouse-operator 1/1 1 1 25m
NAME DESIRED CURRENT READY AGE
replicaset.apps/clickhouse-operator-altinity-clickhouse-operator-7d9cf7b789 1 1 1 25m
NAME READY AGE
statefulset.apps/chi-clickhouse-demo-cluster1-0-0 1/1 3m4s
statefulset.apps/chi-clickhouse-demo-cluster1-0-1 1/1 2m9s
statefulset.apps/chi-clickhouse-demo-cluster1-0-2 1/1 73s
statefulset.apps/chi-clickhouse-demo-cluster1-1-0 1/1 3m5s
statefulset.apps/chi-clickhouse-demo-cluster1-1-1 1/1 119s
statefulset.apps/chi-clickhouse-demo-cluster1-1-2 1/1 78s
statefulset.apps/chi-clickhouse-demo-cluster1-2-0 1/1 3m5s
statefulset.apps/chi-clickhouse-demo-cluster1-2-1 1/1 2m14s
statefulset.apps/chi-clickhouse-demo-cluster1-2-2 1/1 87s

Accessing ClickHouse Database

  • Connect to the database locally and run a simple query to view its system data
root@image-builder:~# kubectl get pods -n clickhouse
NAME READY STATUS RESTARTS AGE
chi-clickhouse-demo-cluster1-0-0-0 1/1 Running 0 7m17s
chi-clickhouse-demo-cluster1-0-1-0 1/1 Running 0 6m22s
chi-clickhouse-demo-cluster1-0-2-0 1/1 Running 0 5m26s
chi-clickhouse-demo-cluster1-1-0-0 1/1 Running 0 7m18s
chi-clickhouse-demo-cluster1-1-1-0 1/1 Running 0 6m12s
chi-clickhouse-demo-cluster1-1-2-0 1/1 Running 0 5m31s
chi-clickhouse-demo-cluster1-2-0-0 1/1 Running 0 7m17s
chi-clickhouse-demo-cluster1-2-1-0 1/1 Running 0 6m27s
chi-clickhouse-demo-cluster1-2-2-0 1/1 Running 0 5m40s
clickhouse-operator-altinity-clickhouse-operator-7d9cf7b78p56tk 2/2 Running 0 29m
root@image-builder:~# kubectl exec -it chi-clickhouse-demo-cluster1-0-0-0 -n clickhouse -- clickhouse-client
ClickHouse client version 25.3.6.10034.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 25.3.6.
Warnings:
* Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. You can enable it using `echo 1 > /proc/sys/kernel/task_delayacct` or by using sysctl.
chi-clickhouse-demo-cluster1-0-0-0.chi-clickhouse-demo-cluster1-0-0.clickhouse.svc.cluster.local :) SELECT
cluster,
host_name,
port
FROM system.clusters
SELECT
cluster,
host_name,
port
FROM system.clusters
Query id: 451bd370-1fa2-4055-8f29-854465957478
┌─cluster────────┬─host_name────────────────────────┬─port─┐
 1. │ all-clusters   │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
 2. │ all-clusters   │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
 3. │ all-clusters   │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
 4. │ all-clusters   │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
 5. │ all-clusters   │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
 6. │ all-clusters   │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
 7. │ all-clusters   │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
 8. │ all-clusters   │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
 9. │ all-clusters   │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
10. │ all-replicated │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
11. │ all-replicated │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
12. │ all-replicated │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
13. │ all-replicated │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
14. │ all-replicated │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
15. │ all-replicated │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
16. │ all-replicated │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
17. │ all-replicated │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
18. │ all-replicated │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
19. │ all-sharded    │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
20. │ all-sharded    │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
21. │ all-sharded    │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
22. │ all-sharded    │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
23. │ all-sharded    │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
24. │ all-sharded    │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
25. │ all-sharded    │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
26. │ all-sharded    │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
27. │ all-sharded    │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
28. │ cluster1       │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
29. │ cluster1       │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
30. │ cluster1       │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
31. │ cluster1       │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
32. │ cluster1       │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
33. │ cluster1       │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
34. │ cluster1       │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
35. │ cluster1       │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
36. │ cluster1       │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
37. │ default        │ localhost                        │ 9000 │
└────────────────┴──────────────────────────────────┴──────┘
37 rows in set. Elapsed: 0.001 sec.
chi-clickhouse-demo-cluster1-0-0-0.chi-clickhouse-demo-cluster1-0-0.clickhouse.svc.cluster.local :) exit
  • Expose the ClickHouse database via a LoadBalancer service
root@image-builder:~# cat /home/pj/clickhouse-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: clickhouse-lb
  namespace: clickhouse
spec:
  type: LoadBalancer
  selector:
    clickhouse.altinity.com/app: chop
  ports:
    - name: tcp-clickhouse
      protocol: TCP
      port: 9000        # Native ClickHouse TCP port
      targetPort: 9000
    - name: http-clickhouse
      protocol: TCP
      port: 8123        # HTTP interface port
      targetPort: 8123
root@image-builder:~# kubectl apply -f /home/pj/clickhouse-lb.yaml
service/clickhouse-lb configured
root@image-builder:~# kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clickhouse chi-clickhouse-demo-cluster1-0-0 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 28m
clickhouse chi-clickhouse-demo-cluster1-0-1 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 27m
clickhouse chi-clickhouse-demo-cluster1-0-2 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 26m
clickhouse chi-clickhouse-demo-cluster1-1-0 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 28m
clickhouse chi-clickhouse-demo-cluster1-1-1 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 27m
clickhouse chi-clickhouse-demo-cluster1-1-2 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 26m
clickhouse chi-clickhouse-demo-cluster1-2-0 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 28m
clickhouse chi-clickhouse-demo-cluster1-2-1 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 27m
clickhouse chi-clickhouse-demo-cluster1-2-2 ClusterIP None <none> 9000/TCP,8123/TCP,9009/TCP 27m
clickhouse clickhouse-clickhouse-demo ClusterIP None <none> 8123/TCP,9000/TCP 27m
clickhouse clickhouse-lb LoadBalancer 10.106.248.120 172.16.22.16 9000:31427/TCP,8123:31067/TCP 8m55s
clickhouse clickhouse-operator-altinity-clickhouse-operator-metrics ClusterIP 10.111.189.190 <none> 8888/TCP,9999/TCP 51m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 134m
default supervisor ClusterIP None <none> 6443/TCP 134m
kube-system antrea ClusterIP 10.109.250.236 <none> 443/TCP 134m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 134m
kube-system metrics-server ClusterIP 10.103.209.47 <none> 443/TCP 134m
tkg-system packaging-api ClusterIP 10.108.126.139 <none> 443/TCP,8080/TCP 134m
vmware-system-csi vsphere-csi-controller ClusterIP 10.100.1.129 <none> 2112/TCP,2113/TCP 134m
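Besides the native TCP port 9000, the load balancer also exposes ClickHouse's HTTP interface on port 8123. A minimal Python sketch of how a client could build a query URL against the external IP from the service listing above (the helper name is ours; only the URL format is ClickHouse's):

```python
from urllib.parse import urlencode

# Hypothetical helper: build a ClickHouse HTTP-interface URL.
# 172.16.22.16 is the LoadBalancer external IP from the service
# listing above; 8123 is ClickHouse's standard HTTP port.
def clickhouse_http_url(host: str, query: str, port: int = 8123) -> str:
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

url = clickhouse_http_url("172.16.22.16", "SELECT version()")
print(url)  # http://172.16.22.16:8123/?query=SELECT+version%28%29
```

In the lab, fetching such a URL (for example with urllib.request.urlopen) would return the query result over HTTP, which is handy for BI tools and scripts that cannot speak the native protocol.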
  • Install the ClickHouse client on a machine and connect to the ClickHouse server
root@image-builder:~# curl https://clickhouse.com/ | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2911 0 2911 0 0 3606 0 --:--:-- --:--:-- --:--:-- 3607
Will download https://builds.clickhouse.com/master/amd64/clickhouse into clickhouse
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 148M 100 148M 0 0 14.1M 0 0:00:10 0:00:10 --:--:-- 19.2M
Successfully downloaded the ClickHouse binary, you can run it as:
./clickhouse
You can also install it:
sudo ./clickhouse install
root@image-builder:~# sudo ./clickhouse install
Decompressing the binary......
Copying ClickHouse binary to /usr/bin/clickhouse.new
Renaming /usr/bin/clickhouse.new to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-server to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-client to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-local to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-benchmark to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-obfuscator to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-git-import to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-compressor to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-format to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-extract-from-config to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-keeper to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-keeper-converter to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-disks to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-chdig to /usr/bin/clickhouse.
Creating symlink /usr/bin/chdig to /usr/bin/clickhouse.
Creating symlink /usr/bin/ch to /usr/bin/clickhouse.
Creating symlink /usr/bin/chl to /usr/bin/clickhouse.
Creating symlink /usr/bin/chc to /usr/bin/clickhouse.
Creating clickhouse group if it does not exist.
groupadd -r clickhouse
Creating clickhouse user if it does not exist.
useradd -r --shell /bin/false --home-dir /nonexistent -g clickhouse clickhouse
Will set ulimits for clickhouse user in /etc/security/limits.d/clickhouse.conf.
Creating config directory /etc/clickhouse-server.
Creating config directory /etc/clickhouse-server/config.d that is used for tweaks of main server configuration.
Creating config directory /etc/clickhouse-server/users.d that is used for tweaks of users configuration.
Data path configuration override is saved to file /etc/clickhouse-server/config.d/data-paths.xml.
Log path configuration override is saved to file /etc/clickhouse-server/config.d/logger.xml.
User directory path configuration override is saved to file /etc/clickhouse-server/config.d/user-directories.xml.
OpenSSL path configuration override is saved to file /etc/clickhouse-server/config.d/openssl.xml.
Creating log directory /var/log/clickhouse-server.
Creating data directory /var/lib/clickhouse.
Creating pid directory /var/run/clickhouse-server.
chown -R clickhouse:clickhouse '/var/log/clickhouse-server'
chown -R clickhouse:clickhouse '/var/run/clickhouse-server'
chown clickhouse:clickhouse '/var/lib/clickhouse'
Set up the password for the default user:
Password for the default user is saved in file /etc/clickhouse-server/users.d/default-password.xml.
Setting capabilities for clickhouse binary. This is optional.
Allow server to accept connections from the network (default is localhost only), [y/N]: y
The choice is saved in file /etc/clickhouse-server/config.d/listen.xml.
chown -R clickhouse:clickhouse '/etc/clickhouse-server'
ClickHouse has been successfully installed.
Start clickhouse-server with:
sudo clickhouse start
Start clickhouse-client with:
clickhouse-client --password
root@image-builder:~# clickhouse-client --host 172.16.22.16
ClickHouse client version 25.10.1.1677 (official build).
Connecting to 172.16.22.16:9000 as user default.
Connected to ClickHouse server version 25.3.6.
ClickHouse server version is older than ClickHouse client. It may indicate that the server is out of date and can be upgraded.
Warnings:
* Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. You can enable it using `echo 1 > /proc/sys/kernel/task_delayacct` or by using sysctl.
chi-clickhouse-demo-cluster1-0-0-0.chi-clickhouse-demo-cluster1-0-0.clickhouse.svc.cluster.local :)

Finally, we have a running ClickHouse cluster on a vSphere Kubernetes Service cluster, combining VCF's reliable infrastructure with Kubernetes flexibility. With ClickHouse on VKS, you can easily connect BI tools, handle large datasets, and deliver real-time insights within your existing VMware ecosystem.

Disclaimer: All posts, contents, and examples are for educational purposes in lab environments only and do not constitute professional advice. No warranty is implied or given. All information, contents, and opinions are my own and do not reflect the opinions of my employer.

