The following diagram depicts the deployment architecture of Custos. The Custos services are deployed on a three-node Kubernetes (K8s) cluster and exposed to external traffic via a K8s Ingress Controller. The Ingress Controller is fronted by an Nginx reverse proxy, and the whole cluster is protected by a firewall.
The following sections describe a fresh deployment of the Custos services on a K8s cluster.
Set up K8s Cluster
Prerequisites
- Three Ubuntu VMs
- SSH keys set up on the local machine and the remote Ubuntu VMs [1]
- Ansible set up on the local machine [2]
- A basic understanding of Docker and its concepts
Step 1
We need to create an Ansible playbook to execute the K8s deployment setup. Hence, create a local working directory and, inside it, a hosts file that contains the IPs of the master and worker nodes of the K8s cluster.
[masters]
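The inventory file is truncated above; a complete hosts file might look like the following sketch, where the group names follow the prose above and the IP addresses are placeholders for your own nodes:

```ini
[masters]
master ansible_host=203.0.113.10 ansible_user=root

[workers]
worker1 ansible_host=203.0.113.11 ansible_user=root
worker2 ansible_host=203.0.113.12 ansible_user=root
```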
Step 2
Next, configure access privileges for non-root users. To this end, create a file with the following content.
- hosts: all
This will create a non-root user "ubuntu" on the remote servers and configure access to them using SSH public keys. Next, execute the above script using
ansible-playbook -i hosts config.yml
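The full config.yml is not reproduced above; a minimal sketch of such a playbook, assuming the user name "ubuntu" from the surrounding text and a local public key at ~/.ssh/id_rsa.pub, is:

```yaml
- hosts: all
  become: yes
  tasks:
    - name: create the non-root user
      user:
        name: ubuntu
        append: yes
        groups: sudo
        shell: /bin/bash

    - name: allow passwordless sudo for the user
      lineinfile:
        path: /etc/sudoers
        line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: set up the authorized key from the local machine
      authorized_key:
        user: ubuntu
        # Assumption: the local public key lives at the default path
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```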
Step 3
Next, install the K8s dependencies using the following Ansible script. This installs Docker, the apt transport packages, kubelet, kubeadm, and kubectl on the Ubuntu VMs.
- hosts: all
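The playbook is truncated above; a sketch of such a dependency playbook, assuming the standard Ubuntu packages and the upstream Kubernetes apt repository, might look like:

```yaml
- hosts: all
  become: yes
  tasks:
    - name: install Docker and transport packages
      apt:
        name: [docker.io, apt-transport-https, curl]
        state: present
        update_cache: yes

    - name: add the Kubernetes apt signing key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: add the Kubernetes apt repository
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present

    - name: install kubelet, kubeadm, and kubectl
      apt:
        name: [kubelet, kubeadm, kubectl]
        state: present
        update_cache: yes
```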
Step 4
Next, we can set up the master node using the following script.
- hosts: masters
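The script is truncated above; a sketch of a master-setup playbook, assuming the non-root user "ubuntu" created earlier (the pod-network CIDR and the Flannel network add-on are assumptions):

```yaml
- hosts: masters
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_init.log
      args:
        chdir: $HOME
        creates: cluster_init.log

    - name: create the .kube directory for the non-root user
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: "0755"

    - name: copy admin.conf to the user's kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu

    - name: install the Flannel pod network
      become_user: ubuntu
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network.log
      args:
        chdir: $HOME
        creates: pod_network.log
```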
Step 5
Next, we can set up the worker nodes using the following script.
- hosts: masters
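The script is truncated above; a sketch of the worker-join playbook is below. It first retrieves the join command from the master (which is why the first play targets the masters group) and then runs it on each worker:

```yaml
- hosts: masters
  become: yes
  gather_facts: false
  tasks:
    - name: get the join command from the master
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: record the join command as a fact
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: workers
  become: yes
  tasks:
    - name: join the worker node to the cluster
      shell: "{{ hostvars[groups['masters'][0]].join_command }} >> node_joined.log"
      args:
        chdir: $HOME
        creates: node_joined.log
```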
Successful execution of the above scripts will create the K8s cluster.
Install Helm
We need to install Helm to deploy the Custos services on the K8s cluster. Currently, we are using version 2 [3]. Helm v2 requires the Tiller service to be deployed on the K8s cluster in order to work, whereas v3 accesses the cluster directly.
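Assuming Helm v2, the installation might look like the following sketch (the exact Helm version and the Tiller service-account name are assumptions; see [3] for the full procedure):

```shell
# Install the Helm v2 client (version number is an assumption)
curl -LO https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
tar -zxvf helm-v2.16.9-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# Create a service account and cluster role binding for Tiller
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin --serviceaccount=kube-system:tiller

# Deploy Tiller into the cluster
helm init --service-account tiller
```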
Install Dependencies and Third Party Services
The following services and components are required as dependencies by the Custos services and the K8s cluster.
Install Nginx Ingress Controller
The Nginx ingress controller is the entry point of the K8s cluster. The open-source Nginx controller distributes traffic among services in its own namespace and is restricted to that namespace. Hence, we require separate ingress controllers for the Custos, Linkerd, Keycloak, and Vault namespaces. We have used the Helm chart [4] to deploy the ingress controllers. By default, the ingress controller requires auto-provisioning of a load balancer; deployments that do not support load balancers should use the NodePort service type or a proxy to bridge the external environment and the cluster.
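For example, a namespace-scoped controller for the custos namespace might be installed as follows (the release name, ingress class, and service type are illustrative, not the exact values used):

```shell
# One controller per namespace, each watching only its own namespace
helm install stable/nginx-ingress --name custos-ingress --namespace custos \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=custos \
  --set controller.ingressClass=custos-nginx \
  --set controller.service.type=NodePort
```

Repeating this per namespace (linkerd, keycloak, vault) with a distinct ingress class keeps each controller restricted to its own space, as described above.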
Install MySQL Database
A MySQL database is required by the Custos services to store data. We have used the bitnami/mysql chart [5]. This creates one master pod and two slave pods. All services connect to the master pod for reading and writing data; the slave pods act as replicas. Moreover, PersistentVolumes (PVs) should be created for data persistence. The following configuration creates PVs on the K8s cluster; before creating the PVs, storage mount points should be created on each node.
PV configuration
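The PV configuration is not reproduced above; a sketch of one such local PV, assuming the /bitnami/mysql mount point mentioned earlier (capacity, storage class, and node name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /bitnami/mysql          # the pre-created mount point on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1         # placeholder node name
```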
Install Cert-Manager
The Cert-Manager component is used for the automatic renewal of Let's Encrypt certificates [6]. Following the referenced instructions installs Cert-Manager into the K8s cluster. Next, a Certificate Issuer needs to be configured; for that, we have used the following configuration.
apiVersion: cert-manager.io/v1alpha2
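The issuer configuration is truncated above; a sketch of a ClusterIssuer consistent with the HTTP challenge described here (the issuer name, contact email, and ingress class are placeholders):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.org          # placeholder: use a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx              # must match the ingress controller's class
```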
According to the above configuration, Let's Encrypt will use the HTTP-01 challenge. So port 80 should be open and traffic should be directed to the K8s cluster. If NodePort is used, ensure that traffic is reachable from outside through the standard ports (configure a reverse proxy).
Install Logging Stack
We have used Fluentd, Elasticsearch, and Kibana [7] to pull logs from all nodes. Fluentd collects the logs from each node and stores them in Elasticsearch; Kibana points to Elasticsearch as a dashboard and fetches the logs collected by Fluentd. This logging stack is installed in the kube-logging namespace.
Install Linkerd Service Mesh
We are using the Linkerd service mesh [8] to enable TLS for internal inter-service communication and as a dashboard for the services. The following configuration will expose the Linkerd dashboard to the outside environment.
apiVersion: v1
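The configuration is truncated above; one way to expose the dashboard is sketched below as a NodePort service targeting the linkerd-web component (the namespace, selector labels, and port are assumptions about a stock Linkerd 2 install):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: linkerd-dashboard
  namespace: linkerd
spec:
  type: NodePort
  selector:
    linkerd.io/control-plane-component: web   # the dashboard (web) pods
  ports:
    - name: http
      port: 8084
      targetPort: 8084
```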
Install Keycloak
Keycloak [9] is used as our main IdP implementation. The following diagram depicts the architecture of the Keycloak deployment; the Keycloak pods are connected to a PostgreSQL database [10] for persistence.
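An illustrative install of the codecentric Keycloak chart [9] pointed at an existing PostgreSQL service (the release names, hostname, and value keys are assumptions about the chart versions in use):

```shell
helm repo add codecentric https://codecentric.github.io/helm-charts

# Use an external PostgreSQL (e.g. from the bitnami chart [10]) for persistence
helm install codecentric/keycloak --name keycloak --namespace keycloak \
  --set keycloak.persistence.deployPostgres=false \
  --set keycloak.persistence.dbVendor=postgres \
  --set keycloak.persistence.dbHost=postgresql.keycloak.svc.cluster.local
```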
Install Vault
HashiCorp Vault is used as our secret storage; it ensures the security of secrets. The following diagram depicts the architecture of the Vault deployment.
We have used [11,12,13,14] for deployment instructions.
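As a rough sketch, a Vault HA deployment backed by Consul could be installed as follows (the repository, release names, and values are assumptions; follow [11,12,13,14] for the actual procedure, including unsealing and initialization):

```shell
helm repo add hashicorp https://helm.releases.hashicorp.com

# Consul provides the HA storage backend for Vault
helm install hashicorp/consul --name consul --namespace vault

# Vault in HA mode on top of Consul; must be initialized and unsealed afterwards
helm install hashicorp/vault --name vault --namespace vault \
  --set server.ha.enabled=true
```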
Install Custos Services
The Custos source code is openly hosted on GitHub [16]. Active development takes place on the development branch, and the master branch contains the latest released source code.
Prerequisites
- Install Java 11 and Maven 3.6.x
Checkout source code
git clone https://github.com/apache/airavata-custos.git
Build Source code
mvn clean install
This will build the source code and create the Docker images, the Helm charts to be deployed on the K8s cluster, and the Java artifacts. The Helm charts are created under the "target/helm" path. To publish the Docker images to the Docker repository, use the following command.
mvn dockerfile:push
Use the following commands to install a Custos service using its Helm chart.
helm install --name service_name --namespace custos chart_name
To upgrade an existing service
helm upgrade service_name chart_name --namespace custos
Troubleshooting
The main areas that you might need to troubleshoot are the Custos services and the databases. Troubleshooting the Custos services is straightforward: check the logs from the Kibana dashboard for the Custos namespace, or log directly into the pod of the relevant service and check its console logs.
Troubleshooting Databases
First, check the logs from the Kibana dashboard, or log into the master and slave pods and check the console logs; any errors will most probably be printed there.
Steps to replace database or migrate a database
- First, log in to the slave pod and stop the slave replication thread using
mysql > STOP SLAVE;
- Make a backup of the existing databases located at the DB mount point (e.g., /bitnami/mysql).
- Remove the old data.
- Install a new database.
- Create the databases required by the Custos services.
- Reinstall the Custos core services; this will create the relevant tables in the databases.
- Log in to the master pod into which the previous data should be imported and follow these steps:
mysql > ALTER TABLE table_name DISCARD TABLESPACE;
- Copy the ".ibd" files residing in the previous backup into the new data folders and change their ownership using "chown -R user:group *.ibd"
mysql > ALTER TABLE table_name IMPORT TABLESPACE;
Successful execution of the above steps should replace the previous data.
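The steps above can be sketched end to end for a single table (the database name, table name, paths, and ownership are placeholders):

```shell
# On the slave pod: stop replication before touching any data
mysql -u root -p -e "STOP SLAVE;"

# On the master pod: discard the empty tablespace of the re-created table
mysql -u root -p custos_db -e "ALTER TABLE table_name DISCARD TABLESPACE;"

# Copy the backed-up .ibd file into the new data directory and fix ownership
cp /backup/custos_db/table_name.ibd /bitnami/mysql/data/custos_db/
chown -R mysql:mysql /bitnami/mysql/data/custos_db/

# Import the tablespace so MySQL picks up the restored data
mysql -u root -p custos_db -e "ALTER TABLE table_name IMPORT TABLESPACE;"
```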
References
[3] https://devopscube.com/install-configure-helm-kubernetes/
[4] https://github.com/helm/charts/tree/master/stable/nginx-ingress
[5] https://github.com/bitnami/charts/tree/master/bitnami/mysql
[6] https://cert-manager.io/docs/installation/kubernetes/#note-helm-v2
[7] https://mherman.org/blog/logging-in-kubernetes-with-elasticsearch-Kibana-fluentd/
[8] https://linkerd.io/2/tasks/install-helm/
[9] https://github.com/codecentric/helm-charts
[10] https://github.com/bitnami/charts/tree/master/bitnami/postgresql
[11] https://medium.com/velotio-perspectives/a-practical-guide-to-hashicorp-consul-part-1-5ee778a7fcf4
[12] https://testdriven.io/blog/running-vault-and-consul-on-kubernetes/
[13] https://learn.hashicorp.com/vault/operations/ops-vault-ha-consul