Introduction
ClickHouse is an open-source, column-oriented database known for its blazing-fast performance in real-time analytics and data warehousing workloads. Deploying ClickHouse on a Kubernetes environment enables flexibility, scalability, and easier lifecycle management.
In this post, we’ll walk through how to deploy ClickHouse natively on a vSphere Kubernetes Service (VKS) cluster.
Why Run ClickHouse on vSphere Kubernetes Service?
vSphere Kubernetes Service integrates Kubernetes directly into the vSphere platform. It allows developers and operators to run modern workloads alongside traditional VMs with consistent security and networking policies.
Running ClickHouse on VKS offers several benefits:
- Simplified Operations: Unified management of Kubernetes and VMs via vCenter.
- High Performance: ClickHouse benefits from VCF's optimized storage and networking stack.
- Scalability: Easily scale ClickHouse pods and replicas using Kubernetes constructs.
- Security & Isolation: vSphere namespaces guarantee workload-level segmentation and governance.
Creating a VKS Cluster
- Connect to the vSphere Supervisor and deploy a VKS cluster with at least 3 worker nodes. For this deployment, each worker node should have 4 vCPU and 8 GB RAM.
root@image-builder:~# vcf context create --endpoint 172.16.22.13 --username administrator@vsphere.local --insecure-skip-tls-verify
? Provide a name for the context: supervisor
[i] Auth type vSphere SSO detected. Proceeding for authentication...
Provide Password:
Logged in successfully.
You have access to the following contexts:
  supervisor
  supervisor:clickhouse-database
  supervisor:clickhouse-ns
  supervisor:svc-cci-ns-domain-c10
  supervisor:svc-harbor-domain-c10
  supervisor:svc-tkg-domain-c10
  supervisor:svc-velero-domain-c10
If the namespace context you wish to use is not in this list, you may need to
refresh the context again, or contact your cluster administrator.
To change context, use `vcf context use <context_name>`
[ok] successfully created context: supervisor
[ok] successfully created context: supervisor:svc-velero-domain-c10
[ok] successfully created context: supervisor:svc-cci-ns-domain-c10
[ok] successfully created context: supervisor:svc-harbor-domain-c10
[ok] successfully created context: supervisor:clickhouse-database
[ok] successfully created context: supervisor:clickhouse-ns
[ok] successfully created context: supervisor:svc-tkg-domain-c10
root@image-builder:~# vcf context list
  NAME                                       CURRENT  TYPE
  clickhouse-db                              false    kubernetes
  clickhouse-db:clickhouse-database-cluster  false    kubernetes
  supervisor                                 false    kubernetes
  supervisor:clickhouse-database             false    kubernetes
  supervisor:clickhouse-ns                   false    kubernetes
  supervisor:svc-cci-ns-domain-c10           false    kubernetes
  supervisor:svc-harbor-domain-c10           false    kubernetes
  supervisor:svc-tkg-domain-c10              false    kubernetes
  supervisor:svc-velero-domain-c10           false    kubernetes
[i] Use '--wide' to view additional columns.
root@image-builder:~# vcf context use supervisor:clickhouse-ns
[ok] Token is still active. Skipped the token refresh for context "supervisor:clickhouse-ns"
[i] Successfully activated context 'supervisor:clickhouse-ns' (Type: kubernetes)
[i] Fetching recommended plugins for active context 'supervisor:clickhouse-ns'...
[ok] All recommended plugins are already installed and up-to-date.
root@image-builder:~# kubectl apply -f /home/pj/clickhouse-cluster-1.yaml
cluster.cluster.x-k8s.io/clickhouse-cl-1 created
- The configuration YAML used to create the VKS cluster is below:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: clickhouse-cl
  namespace: clickhouse-ns
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.156.0/20
    services:
      cidrBlocks:
      - 10.96.0.0/12
    serviceDomain: cluster.local
  topology:
    class: builtin-generic-v3.4.0
    version: v1.33.3---vmware.1-fips-vkr.1
    variables:
    - name: vsphereOptions
      value:
        persistentVolumes:
          defaultStorageClass: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
    - name: kubernetes
      value:
        certificateRotation:
          enabled: true
          renewalDaysBeforeExpiry: 90
        security:
          podSecurityStandard:
            audit: restricted
            auditVersion: latest
            enforce: privileged
            enforceVersion: latest
            warn: privileged
            warnVersion: latest
    - name: osConfiguration
      value:
        ntp:
          servers:
          - 172.16.9.1
    - name: vmClass
      value: guaranteed-small
    - name: storageClass
      value: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
    controlPlane:
      replicas: 3
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu,content-library=cl-65959d00ab4790f1c,os-version=24.04
    workers:
      machineDeployments:
      - class: node-pool
        name: clickhouse-cluster-nodepool-so2l
        replicas: 3
        metadata:
          annotations:
            run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu,content-library=cl-65959d00ab4790f1c,os-version=24.04
        variables:
          overrides:
          - name: vmClass
            value: guaranteed-large
          - name: volumes
            value:
            - name: vol-5lo8
              mountPath: /var/lib/containerd
              storageClass: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
              capacity: 50Gi
            - name: vol-diqs
              mountPath: /var/lib/kubelet
              storageClass: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
              capacity: 50Gi
- Wait for the cluster creation to finish and verify the cluster status
root@image-builder:~# kubectl get clusters
NAME            CLUSTERCLASS             PHASE         AGE   VERSION
clickhouse-cl   builtin-generic-v3.4.0   Provisioned   16m   v1.33.3+vmware.1-fips
root@image-builder:~# kubectl describe cluster clickhouse-cl
Name:         clickhouse-cl
Namespace:    clickhouse-ns
Labels:       cluster.x-k8s.io/cluster-name=clickhouse-cl
              run.tanzu.vmware.com/tkr=v1.33.3---vmware.1-fips-vkr.1
              topology.cluster.x-k8s.io/owned=
...
  Control Plane:
    Available Replicas:   3
    Desired Replicas:     3
    Ready Replicas:       3
    Replicas:             3
    Up To Date Replicas:  3
  Workers:
    Available Replicas:   3
    Desired Replicas:     3
    Ready Replicas:       3
    Replicas:             3
    Up To Date Replicas:  3
Events:
  Type    Reason               Age                From                         Message
  ----    ------               ----               ----                         -------
  Normal  Pending              16m                cluster-controller           Cluster clickhouse-cl is Pending
  Normal  TopologyCreate       16m                topology/cluster-controller  Created VSphereCluster "clickhouse-ns/clickhouse-cl-j7l5d"
  Normal  TopologyCreate       16m                topology/cluster-controller  Created VSphereMachineTemplate "clickhouse-ns/clickhouse-cl-8h4sz"
  Normal  TopologyCreate       16m                topology/cluster-controller  Created KubeadmControlPlane "clickhouse-ns/clickhouse-cl-8pmq4"
  Normal  TopologyUpdate       16m                topology/cluster-controller  Updated Cluster "clickhouse-ns/clickhouse-cl"
  Normal  TopologyCreate       16m                topology/cluster-controller  Created VSphereMachineTemplate "clickhouse-ns/clickhouse-cl-clickhouse-cluster-nodepool-so2l-q5c8s"
  Normal  Provisioning         16m (x2 over 16m)  cluster-controller           Cluster clickhouse-cl is Provisioning
  Normal  TopologyCreate       16m                topology/cluster-controller  Created KubeadmConfigTemplate "clickhouse-ns/clickhouse-cl-clickhouse-cluster-nodepool-so2l-7x477"
  Normal  TopologyCreate       16m                topology/cluster-controller  Created MachineDeployment "clickhouse-ns/clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm"
  Normal  TopologyUpdate       16m                topology/cluster-controller  Updated KubeadmControlPlane "clickhouse-ns/clickhouse-cl-8pmq4"
  Normal  InfrastructureReady  13m (x2 over 13m)  cluster-controller           Cluster clickhouse-cl InfrastructureReady is now True
  Normal  Provisioned          13m (x2 over 13m)  cluster-controller           Cluster clickhouse-cl is Provisioned
  Normal  ControlPlaneReady    7m17s              cluster-controller           Cluster clickhouse-cl ControlPlaneReady is now True
- Log in to the VKS cluster and create a new namespace for the ClickHouse database deployment
root@image-builder:~# vcf context create --endpoint 172.16.22.13 --username administrator@vsphere.local --insecure-skip-tls-verify --workload-cluster-name clickhouse-cl --workload-cluster-namespace clickhouse-ns
? Provide a name for the context: clickhouse
Provide Password:
[i] Logging in to Kubernetes cluster (clickhouse-cl-) (clickhouse-ns)
[i] Successfully logged in to Kubernetes cluster 172.16.22.15
You have access to the following contexts:
  clickhouse
  clickhouse:clickhouse-cl
If the namespace context you wish to use is not in this list, you may need to
refresh the context again, or contact your cluster administrator.
To change context, use `vcf context use <context_name>`
[ok] successfully created context: clickhouse
[ok] successfully created context: clickhouse:clickhouse-cl
root@image-builder:~# vcf context use clickhouse:clickhouse-cl
[ok] Token is still active. Skipped the token refresh for context "clickhouse:clickhouse-cl"
[i] Successfully activated context 'clickhouse:clickhouse-cl' (Type: kubernetes)
[i] Fetching recommended plugins for active context 'clickhouse:clickhouse-cl'...
[ok] No recommended plugins found.
root@image-builder:~# kubectl get nodes -o wide
NAME                                                            STATUS   ROLES           AGE     VERSION                 INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
clickhouse-cl-8pmq4-bclh7                                       Ready    control-plane   6m7s    v1.33.3+vmware.1-fips   172.16.24.18   <none>        Ubuntu 24.04.3 LTS   6.8.0-79-generic   containerd://2.0.6+vmware.1-fips
clickhouse-cl-8pmq4-dqqxn                                       Ready    control-plane   12m     v1.33.3+vmware.1-fips   172.16.24.20   <none>        Ubuntu 24.04.3 LTS   6.8.0-79-generic   containerd://2.0.6+vmware.1-fips
clickhouse-cl-8pmq4-hmlwd                                       Ready    control-plane   4m5s    v1.33.3+vmware.1-fips   172.16.24.31   <none>        Ubuntu 24.04.3 LTS   6.8.0-79-generic   containerd://2.0.6+vmware.1-fips
clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm-hvs4smq5   Ready    <none>          9m4s    v1.33.3+vmware.1-fips   172.16.24.23   <none>        Ubuntu 24.04.3 LTS   6.8.0-79-generic   containerd://2.0.6+vmware.1-fips
clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm-hvsfb4j9   Ready    <none>          8m10s   v1.33.3+vmware.1-fips   172.16.24.24   <none>        Ubuntu 24.04.3 LTS   6.8.0-79-generic   containerd://2.0.6+vmware.1-fips
clickhouse-cl-clickhouse-cluster-nodepool-so2l-f5zcm-hvsxllk4   Ready    <none>          8m24s   v1.33.3+vmware.1-fips   172.16.24.30   <none>        Ubuntu 24.04.3 LTS   6.8.0-79-generic   containerd://2.0.6+vmware.1-fips
root@image-builder:~# kubectl create ns clickhouse
namespace/clickhouse created
Deployment of ClickHouse Operator
The first step is to deploy the ClickHouse Operator, which simplifies the deployment of ClickHouse on VKS.
- Installing the ClickHouse operator using Helm is the preferred approach. For this, add helm.altinity.com to the Helm repositories
root@image-builder:~# helm repo add altinity https://helm.altinity.com
"altinity" has been added to your repositories
root@image-builder:~# helm repo list
NAME      URL
altinity  https://helm.altinity.com
root@image-builder:~# helm repo update altinity
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "altinity" chart repository
Update Complete. ⎈Happy Helming!⎈
- Install the ClickHouse operator on the VKS cluster
root@image-builder:~# helm upgrade --install clickhouse-operator altinity/altinity-clickhouse-operator --version 0.25.4 --namespace clickhouse
Release "clickhouse-operator" does not exist. Installing it now.
NAME: clickhouse-operator
LAST DEPLOYED: Sun Oct  5 09:00:29 2025
NAMESPACE: clickhouse
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Wait a couple of minutes for the operator deployment to finish, then verify its status
root@image-builder:~# kubectl get deployment.apps -n clickhouse
NAME READY UP-TO-DATE AVAILABLE AGE
clickhouse-operator-altinity-clickhouse-operator 1/1 1 1 26s
root@image-builder:~# kubectl get pods -n clickhouse
NAME READY STATUS RESTARTS AGE
clickhouse-operator-altinity-clickhouse-operator-7d9cf7b78fnpw2 2/2 Running 0 27s
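Along with the deployment, the operator registers the custom resource definitions that the next step relies on. As a quick sanity check (this is an optional verification step, not part of the original walkthrough), you can confirm the CRDs are present before creating an installation:

```shell
# The Altinity operator installs CRDs under the clickhouse.altinity.com group,
# including the ClickHouseInstallation (chi) resource used in the next section.
kubectl get crd | grep clickhouse.altinity.com
kubectl explain clickhouseinstallation --recursive=false
```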
Deployment of ClickHouse Cluster
- To deploy ClickHouse on VKS, I would use the installation manifest below. It creates a ClickHouse database cluster with 3 shards and 3 replicas.
root@image-builder:~# cat /home/pj/clickhouse-cluster-database.yaml
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: clickhouse-demo
  namespace: clickhouse
spec:
  configuration:
    clusters:
    - name: cluster1
      layout:
        shardsCount: 3
        replicasCount: 3
      templates:
        podTemplate: clickhouse-pod-template
        volumeClaimTemplate: clickhouse-storage-template
    users:
      default/networks/ip:
      - "0.0.0.0/0"
  templates:
    podTemplates:
    - name: clickhouse-pod-template
      spec:
        containers:
        - name: clickhouse
          image: altinity/clickhouse-server:25.3.6.10034.altinitystable
          ports:
          - containerPort: 8123
          - containerPort: 9000
    volumeClaimTemplates:
    - name: clickhouse-storage-template
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi
        storageClassName: thanos-vcf-cl01-optimal-datastore-default-policy-raid1
- Deploy the ClickHouse cluster using the installation manifest above and wait for the installation to finish successfully.
root@image-builder:~# kubectl apply -f /home/pj/clickhouse-cluster-database.yaml -n clickhouse
clickhouseinstallation.clickhouse.altinity.com/clickhouse-demo created
root@image-builder:~# kubectl get clickhouseinstallation -n clickhouse
NAME              CLUSTERS   HOSTS   STATUS      HOSTS-COMPLETED   AGE     SUSPEND
clickhouse-demo   1          9       Completed                     3m14s
root@image-builder:~# kubectl get all -n clickhouse
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/chi-clickhouse-demo-cluster1-0-0-0                                1/1     Running   0          3m4s
pod/chi-clickhouse-demo-cluster1-0-1-0                                1/1     Running   0          2m9s
pod/chi-clickhouse-demo-cluster1-0-2-0                                1/1     Running   0          73s
pod/chi-clickhouse-demo-cluster1-1-0-0                                1/1     Running   0          3m5s
pod/chi-clickhouse-demo-cluster1-1-1-0                                1/1     Running   0          119s
pod/chi-clickhouse-demo-cluster1-1-2-0                                1/1     Running   0          78s
pod/chi-clickhouse-demo-cluster1-2-0-0                                1/1     Running   0          3m4s
pod/chi-clickhouse-demo-cluster1-2-1-0                                1/1     Running   0          2m14s
pod/chi-clickhouse-demo-cluster1-2-2-0                                1/1     Running   0          87s
pod/clickhouse-operator-altinity-clickhouse-operator-7d9cf7b78p56tk   2/2     Running   0          25m

NAME                                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/chi-clickhouse-demo-cluster1-0-0                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   3m6s
service/chi-clickhouse-demo-cluster1-0-1                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   2m9s
service/chi-clickhouse-demo-cluster1-0-2                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   74s
service/chi-clickhouse-demo-cluster1-1-0                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   3m7s
service/chi-clickhouse-demo-cluster1-1-1                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   2m2s
service/chi-clickhouse-demo-cluster1-1-2                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   81s
service/chi-clickhouse-demo-cluster1-2-0                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   3m6s
service/chi-clickhouse-demo-cluster1-2-1                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   2m15s
service/chi-clickhouse-demo-cluster1-2-2                           ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP   89s
service/clickhouse-clickhouse-demo                                 ClusterIP   None             <none>        8123/TCP,9000/TCP            2m11s
service/clickhouse-operator-altinity-clickhouse-operator-metrics   ClusterIP   10.111.189.190   <none>        8888/TCP,9999/TCP            25m

NAME                                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/clickhouse-operator-altinity-clickhouse-operator   1/1     1            1           25m

NAME                                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-altinity-clickhouse-operator-7d9cf7b789   1         1         1       25m

NAME                                                READY   AGE
statefulset.apps/chi-clickhouse-demo-cluster1-0-0   1/1     3m4s
statefulset.apps/chi-clickhouse-demo-cluster1-0-1   1/1     2m9s
statefulset.apps/chi-clickhouse-demo-cluster1-0-2   1/1     73s
statefulset.apps/chi-clickhouse-demo-cluster1-1-0   1/1     3m5s
statefulset.apps/chi-clickhouse-demo-cluster1-1-1   1/1     119s
statefulset.apps/chi-clickhouse-demo-cluster1-1-2   1/1     78s
statefulset.apps/chi-clickhouse-demo-cluster1-2-0   1/1     3m5s
statefulset.apps/chi-clickhouse-demo-cluster1-2-1   1/1     2m14s
statefulset.apps/chi-clickhouse-demo-cluster1-2-2   1/1     87s
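Each of the nine replica pods is backed by a 20Gi PersistentVolume provisioned through the vSphere CSI driver from the storage class named in the manifest. As an optional check (not shown in the original transcript), you can confirm the claims were bound:

```shell
# Every replica StatefulSet should have one Bound 20Gi PVC
# created from the thanos-vcf-cl01-... storage class.
kubectl get pvc -n clickhouse
kubectl get pv | grep clickhouse
```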
Accessing ClickHouse Database
- Connect to the database from inside a pod and run a simple query to view its system data
root@image-builder:~# kubectl get pods -n clickhouse
NAME                                                              READY   STATUS    RESTARTS   AGE
chi-clickhouse-demo-cluster1-0-0-0                                1/1     Running   0          7m17s
chi-clickhouse-demo-cluster1-0-1-0                                1/1     Running   0          6m22s
chi-clickhouse-demo-cluster1-0-2-0                                1/1     Running   0          5m26s
chi-clickhouse-demo-cluster1-1-0-0                                1/1     Running   0          7m18s
chi-clickhouse-demo-cluster1-1-1-0                                1/1     Running   0          6m12s
chi-clickhouse-demo-cluster1-1-2-0                                1/1     Running   0          5m31s
chi-clickhouse-demo-cluster1-2-0-0                                1/1     Running   0          7m17s
chi-clickhouse-demo-cluster1-2-1-0                                1/1     Running   0          6m27s
chi-clickhouse-demo-cluster1-2-2-0                                1/1     Running   0          5m40s
clickhouse-operator-altinity-clickhouse-operator-7d9cf7b78p56tk   2/2     Running   0          29m
root@image-builder:~# kubectl exec -it chi-clickhouse-demo-cluster1-0-0-0 -n clickhouse -- clickhouse-client
ClickHouse client version 25.3.6.10034.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 25.3.6.

Warnings:
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. You can enable it using `echo 1 > /proc/sys/kernel/task_delayacct` or by using sysctl.

chi-clickhouse-demo-cluster1-0-0-0.chi-clickhouse-demo-cluster1-0-0.clickhouse.svc.cluster.local :) SELECT cluster, host_name, port FROM system.clusters

SELECT
    cluster,
    host_name,
    port
FROM system.clusters

Query id: 451bd370-1fa2-4055-8f29-854465957478

    ┌─cluster────────┬─host_name────────────────────────┬─port─┐
 1. │ all-clusters   │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
 2. │ all-clusters   │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
 3. │ all-clusters   │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
 4. │ all-clusters   │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
 5. │ all-clusters   │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
 6. │ all-clusters   │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
 7. │ all-clusters   │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
 8. │ all-clusters   │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
 9. │ all-clusters   │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
10. │ all-replicated │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
11. │ all-replicated │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
12. │ all-replicated │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
13. │ all-replicated │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
14. │ all-replicated │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
15. │ all-replicated │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
16. │ all-replicated │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
17. │ all-replicated │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
18. │ all-replicated │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
19. │ all-sharded    │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
20. │ all-sharded    │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
21. │ all-sharded    │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
22. │ all-sharded    │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
23. │ all-sharded    │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
24. │ all-sharded    │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
25. │ all-sharded    │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
26. │ all-sharded    │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
27. │ all-sharded    │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
28. │ cluster1       │ chi-clickhouse-demo-cluster1-0-0 │ 9000 │
29. │ cluster1       │ chi-clickhouse-demo-cluster1-0-1 │ 9000 │
30. │ cluster1       │ chi-clickhouse-demo-cluster1-0-2 │ 9000 │
31. │ cluster1       │ chi-clickhouse-demo-cluster1-1-0 │ 9000 │
32. │ cluster1       │ chi-clickhouse-demo-cluster1-1-1 │ 9000 │
33. │ cluster1       │ chi-clickhouse-demo-cluster1-1-2 │ 9000 │
34. │ cluster1       │ chi-clickhouse-demo-cluster1-2-0 │ 9000 │
35. │ cluster1       │ chi-clickhouse-demo-cluster1-2-1 │ 9000 │
36. │ cluster1       │ chi-clickhouse-demo-cluster1-2-2 │ 9000 │
37. │ default        │ localhost                        │ 9000 │
    └────────────────┴──────────────────────────────────┴──────┘

37 rows in set. Elapsed: 0.001 sec.

chi-clickhouse-demo-cluster1-0-0-0.chi-clickhouse-demo-cluster1-0-0.clickhouse.svc.cluster.local :) exit
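With the cluster topology visible in system.clusters, you could go one step further and create a sharded table to exercise it. The sketch below is illustrative only (the table and database names are invented, not part of this walkthrough); the `ON CLUSTER cluster1` name comes from the installation manifest above. Note that distributed DDL and ReplicatedMergeTree both require ClickHouse Keeper or ZooKeeper, which the minimal manifest in this post does not configure, so this assumes Keeper is available to the installation.

```sql
-- Create a local table on every node of cluster1 (illustrative names)
CREATE TABLE events_local ON CLUSTER cluster1
(
    event_date Date,
    event_type String,
    value      UInt64
)
ENGINE = MergeTree
ORDER BY (event_date, event_type);

-- A Distributed table that fans reads and writes out across all 3 shards
CREATE TABLE events ON CLUSTER cluster1 AS events_local
ENGINE = Distributed(cluster1, default, events_local, rand());

INSERT INTO events VALUES ('2025-10-05', 'page_view', 1);
SELECT count() FROM events;
```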
- Expose the ClickHouse Database via Load Balancer
root@image-builder:~# cat /home/pj/clickhouse-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: clickhouse-lb
  namespace: clickhouse
spec:
  type: LoadBalancer
  selector:
    clickhouse.altinity.com/app: chop
  ports:
  - name: tcp-clickhouse
    protocol: TCP
    port: 9000        # Native ClickHouse TCP port
    targetPort: 9000
  - name: http-clickhouse
    protocol: TCP
    port: 8123        # HTTP interface port
    targetPort: 8123
root@image-builder:~# kubectl apply -f /home/pj/clickhouse-lb.yaml
service/clickhouse-lb configured
root@image-builder:~# kubectl get svc -A
NAMESPACE           NAME                                                       TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                         AGE
clickhouse          chi-clickhouse-demo-cluster1-0-0                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      28m
clickhouse          chi-clickhouse-demo-cluster1-0-1                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      27m
clickhouse          chi-clickhouse-demo-cluster1-0-2                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      26m
clickhouse          chi-clickhouse-demo-cluster1-1-0                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      28m
clickhouse          chi-clickhouse-demo-cluster1-1-1                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      27m
clickhouse          chi-clickhouse-demo-cluster1-1-2                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      26m
clickhouse          chi-clickhouse-demo-cluster1-2-0                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      28m
clickhouse          chi-clickhouse-demo-cluster1-2-1                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      27m
clickhouse          chi-clickhouse-demo-cluster1-2-2                           ClusterIP      None             <none>         9000/TCP,8123/TCP,9009/TCP      27m
clickhouse          clickhouse-clickhouse-demo                                 ClusterIP      None             <none>         8123/TCP,9000/TCP               27m
clickhouse          clickhouse-lb                                              LoadBalancer   10.106.248.120   172.16.22.16   9000:31427/TCP,8123:31067/TCP   8m55s
clickhouse          clickhouse-operator-altinity-clickhouse-operator-metrics   ClusterIP      10.111.189.190   <none>         8888/TCP,9999/TCP               51m
default             kubernetes                                                 ClusterIP      10.96.0.1        <none>         443/TCP                         134m
default             supervisor                                                 ClusterIP      None             <none>         6443/TCP                        134m
kube-system         antrea                                                     ClusterIP      10.109.250.236   <none>         443/TCP                         134m
kube-system         kube-dns                                                   ClusterIP      10.96.0.10       <none>         53/UDP,53/TCP,9153/TCP          134m
kube-system         metrics-server                                             ClusterIP      10.103.209.47    <none>         443/TCP                         134m
tkg-system          packaging-api                                              ClusterIP      10.108.126.139   <none>         443/TCP,8080/TCP                134m
vmware-system-csi   vsphere-csi-controller                                     ClusterIP      10.100.1.129     <none>         2112/TCP,2113/TCP               134m
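Once the LoadBalancer service has an external IP, a quick way to smoke-test connectivity from outside the cluster is ClickHouse's HTTP interface on port 8123, before installing any client. A sketch, using the external IP from the output above and the default user:

```shell
# Liveness check against the HTTP interface through the load balancer
curl "http://172.16.22.16:8123/ping"

# Run a query over HTTP as the default user
curl "http://172.16.22.16:8123/" --data-binary "SELECT version()"
```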
- Install the ClickHouse client on a machine and connect to the ClickHouse database server
root@image-builder:~# curl https://clickhouse.com/ | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2911    0  2911    0     0   3606      0 --:--:-- --:--:-- --:--:--  3607
Will download https://builds.clickhouse.com/master/amd64/clickhouse into clickhouse
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  148M  100  148M    0     0  14.1M      0  0:00:10  0:00:10 --:--:-- 19.2M
Successfully downloaded the ClickHouse binary, you can run it as: ./clickhouse
You can also install it:
sudo ./clickhouse install
root@image-builder:~# sudo ./clickhouse install
Decompressing the binary...
...
Copying ClickHouse binary to /usr/bin/clickhouse.new
Renaming /usr/bin/clickhouse.new to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-server to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-client to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-local to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-benchmark to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-obfuscator to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-git-import to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-compressor to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-format to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-extract-from-config to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-keeper to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-keeper-converter to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-disks to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-chdig to /usr/bin/clickhouse.
Creating symlink /usr/bin/chdig to /usr/bin/clickhouse.
Creating symlink /usr/bin/ch to /usr/bin/clickhouse.
Creating symlink /usr/bin/chl to /usr/bin/clickhouse.
Creating symlink /usr/bin/chc to /usr/bin/clickhouse.
Creating clickhouse group if it does not exist.
 groupadd -r clickhouse
Creating clickhouse user if it does not exist.
 useradd -r --shell /bin/false --home-dir /nonexistent -g clickhouse clickhouse
Will set ulimits for clickhouse user in /etc/security/limits.d/clickhouse.conf.
Creating config directory /etc/clickhouse-server.
Creating config directory /etc/clickhouse-server/config.d that is used for tweaks of main server configuration.
Creating config directory /etc/clickhouse-server/users.d that is used for tweaks of users configuration.
Data path configuration override is saved to file /etc/clickhouse-server/config.d/data-paths.xml.
Log path configuration override is saved to file /etc/clickhouse-server/config.d/logger.xml.
User directory path configuration override is saved to file /etc/clickhouse-server/config.d/user-directories.xml.
OpenSSL path configuration override is saved to file /etc/clickhouse-server/config.d/openssl.xml.
Creating log directory /var/log/clickhouse-server.
Creating data directory /var/lib/clickhouse.
Creating pid directory /var/run/clickhouse-server.
 chown -R clickhouse:clickhouse '/var/log/clickhouse-server'
 chown -R clickhouse:clickhouse '/var/run/clickhouse-server'
 chown clickhouse:clickhouse '/var/lib/clickhouse'
Set up the password for the default user:
Password for the default user is saved in file /etc/clickhouse-server/users.d/default-password.xml.
Setting capabilities for clickhouse binary. This is optional.
Allow server to accept connections from the network (default is localhost only), [y/N]: y
The choice is saved in file /etc/clickhouse-server/config.d/listen.xml.
 chown -R clickhouse:clickhouse '/etc/clickhouse-server'
ClickHouse has been successfully installed.
Start clickhouse-server with:
 sudo clickhouse start
Start clickhouse-client with:
 clickhouse-client --password
root@image-builder:~# clickhouse-client --host 172.16.22.16
ClickHouse client version 25.10.1.1677 (official build).
Connecting to 172.16.22.16:9000 as user default.
Connected to ClickHouse server version 25.3.6.
ClickHouse server version is older than ClickHouse client. It may indicate that the server is out of date and can be upgraded.

Warnings:
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. You can enable it using `echo 1 > /proc/sys/kernel/task_delayacct` or by using sysctl.

chi-clickhouse-demo-cluster1-0-0-0.chi-clickhouse-demo-cluster1-0-0.clickhouse.svc.cluster.local :)
Finally, we have a running ClickHouse cluster on a vSphere Kubernetes Service cluster, combining VCF's reliable infrastructure with Kubernetes flexibility. With ClickHouse on VKS, you can easily connect BI tools, handle large datasets, and deliver real-time insights within your existing VMware ecosystem.
Disclaimer: All posts, contents, and examples are for educational purposes in lab environments only and do not constitute professional advice. No warranty is implied or given. The user accepts that all information, contents, and opinions are my own. They do not reflect the opinions of my employer.

