In order to have a working Multi-CassKop operator, we need at least two k8s clusters: k8s-cluster-1 and k8s-cluster-2
- k8s >=v1.15 installed on each site, with kubectl configured to access both of them
- The pods of each site must be able to reach pods on the other sites. This is outside the scope of Multi-CassKop and can
be achieved by different solutions, such as:
  - in our on-premise cluster, we leverage a Calico routable IP pool to make this possible
  - this can also be done using a service mesh such as Istio
  - there may be other solutions as well
- having CassKop installed (with its ConfigMap) in each namespace, see CassKop installation
- having External-DNS with RFC2136 installed in each namespace to manage your DNS sub-zone, see Install external dns
- You need to create secrets from the targeted k8s clusters in the current k8s cluster (see Bootstrap)
- You may need to create network policies for Multi-CassKop inter-site communications with the k8s APIs, if you use them.
We have only tested this configuration with a Calico routable IP pool and External-DNS with the RFC2136 configuration.
Multi-CassKop will be deployed in k8s-cluster-1; change your kubectl context to point to this cluster.
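For example, switching and verifying the context can be done as follows (the context name `k8s-cluster-1` is an assumption; use whatever name your kubeconfig defines):

```shell
# Point kubectl at the cluster where Multi-CassKop will run
kubectl config use-context k8s-cluster-1
# Verify the active context
kubectl config current-context
```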
To allow our Multi-CassKop controller to access k8s-cluster-2 from k8s-cluster-1, we are going to use kubemcsa from Admiralty to export the secret from k8s-cluster-2 to k8s-cluster-1.
This will create, in the current k8s cluster (k8s-cluster-1), the secret associated with the cassandra-operator service account of the cassandra-e2e namespace in k8s-cluster-2. /!\ The secret will be created with the name k8s-cluster2, and this name must be used when starting Multi-CassKop and in the MultiCassKop CRD definition, see below.
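A sketch of the export, assuming `kubemcsa` is on your PATH and your kubeconfig contexts are named `k8s-cluster-1` and `k8s-cluster-2` (check the kubemcsa help for the exact flags of your version):

```shell
# Export the cassandra-operator service-account secret from k8s-cluster-2
# and create it in k8s-cluster-1 under the name k8s-cluster2
kubemcsa export --context k8s-cluster-2 --namespace cassandra-e2e \
  cassandra-operator --as k8s-cluster2 \
  | kubectl --context k8s-cluster-1 apply -f -
```

The `--as k8s-cluster2` name is the one that must be reused in the Multi-CassKop parameters and CRD.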
CassKop must be deployed on each targeted Kubernetes cluster.
External-DNS must be installed in each Kubernetes cluster. Configure your External-DNS with custom values pointing to your zone and deploy it in your namespace.
Proceed with the Multi-CassKop installation only when the prerequisites are fulfilled.
Deployment with Helm. Multi-CassKop and CassKop share the same GitHub/Helm repo and semantic versioning.
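As an illustration, External-DNS can be deployed with the Bitnami chart and the RFC2136 provider; the zone, server and key values below are placeholders you must replace, and the exact value keys should be checked against the chart you use:

```shell
# Illustrative only: External-DNS with the RFC2136 provider
helm install external-dns bitnami/external-dns \
  --namespace cassandra-e2e \
  --set provider=rfc2136 \
  --set rfc2136.host=<dns-server> \
  --set rfc2136.zone=<your-sub-zone> \
  --set txtOwnerId=k8s-cluster-1 \
  --set domainFilters={<your-sub-zone>}
```

Repeat this in each cluster, changing `txtOwnerId` per site so the clusters do not overwrite each other's records.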
When starting Multi-CassKop, we need to provide some parameters:
- k8s.local is the name of the k8s cluster we refer to when talking to this cluster.
- k8s.remote is a list of the other Kubernetes clusters we want to connect to.
The names used here must match the names used in the MultiCassKop CRD definition, and the names in
k8s.remote must match the names of the secrets exported with the kubemcsa command.
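Putting these parameters together, a Helm install might look like the following sketch (the repo URL, chart name and cluster names are assumptions; check the CassKop project documentation for the current values):

```shell
# Add the CassKop chart repository (URL is an assumption)
helm repo add orange-incubator https://Orange-OpenSource.github.io/casskop/charts
# Install Multi-CassKop in k8s-cluster-1
helm install multi-casskop orange-incubator/multi-casskop \
  --set k8s.local=k8s-cluster-1 \
  --set k8s.remote={k8s-cluster2}  # must match the kubemcsa-exported secret name
```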
You can then deploy a MultiCassKop CRD instance.
You can create the cluster with the following example multi-casskop/config/samples/multi-casskop.yaml file:
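An abridged sketch of what such a file can look like is shown below; the field layout (a shared `base` template plus per-site `override` sections keyed by the cluster names) follows the sample shipped with Multi-CassKop, but treat the exact schema and values as assumptions and refer to the sample file itself:

```yaml
apiVersion: db.orange.com/v1alpha1
kind: MultiCasskop
metadata:
  name: multi-casskop-e2e
spec:
  # CassandraCluster template shared by every site
  base:
    apiVersion: db.orange.com/v1alpha1
    kind: CassandraCluster
    metadata:
      name: cassandra-e2e
    spec:
      nodesPerRacks: 1
  # Per-site overrides, keyed by the names given to Multi-CassKop
  override:
    k8s-cluster-1:
      spec:
        pod:
          annotations:
            cni.projectcalico.org/ipv4pools: '["routable-pool-1"]'
    k8s-cluster2:
      spec:
        pod:
          annotations:
            cni.projectcalico.org/ipv4pools: '["routable-pool-2"]'
```

Note that the override keys must match the `k8s.local`/`k8s.remote` names given at startup.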
This is the sequence of operations:
- MultiCassKop first creates the CassandraCluster in k8s-cluster-1.
- The local CassKop then starts creating the associated Cassandra cluster.
- When CassKop has created its Cassandra cluster, it updates the CassandraCluster object's status with phase=Running, meaning that all is OK.
- MultiCassKop then creates the other CassandraCluster in k8s-cluster-2.
- The local CassKop there starts creating the associated Cassandra cluster.
- Thanks to the routable seed-list configured with External-DNS names, the Cassandra pods start by connecting to the already existing Cassandra nodes of k8s-cluster-1, so as to form a single Cassandra ring.
As a result, we can see that each cluster has the required pods.
If we exec into one of the created pods, we can see that nodetool sees the pods of both clusters:
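For instance (the pod name below is an assumption based on CassKop's `<cluster>-<dc>-<rack>-<ordinal>` naming convention; use one of your actual pod names):

```shell
# Run nodetool inside a Cassandra pod; the UN (Up/Normal) lines should
# list nodes belonging to the datacenters of both k8s clusters
kubectl exec -it cassandra-e2e-dc1-rack1-0 -- nodetool status
```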