Note

Audience: Sys Admin

Using kubectl#

Setup kubectl#

Use kubeconfig to set up your configuration, and then create a context for this project (assuming your short login is jdoe):

kubectl config set-context strass-prod --cluster=k8sprod-02 --user=jdoe@k8sprod-02 --namespace strass-prod
kubectl config set-context strass-dev --cluster=k8sdev-01 --user=jdoe@k8sdev-01 --namespace strass-dev
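
To verify that both contexts were created, you can list them (a standard kubectl command; the exact output depends on your kubeconfig):

kubectl config get-contexts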

Get the logs from a pod, kill it#

Use the appropriate context:

kubectl config use-context strass-dev
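
To confirm which context is active before running further commands (standard kubectl):

kubectl config current-context
# should print: strass-dev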

Get the running pods

kubectl get po

You get

NAME                                                          READY   STATUS    RESTARTS   AGE
doc-pod-master                                                1/1     Running   3          6d16h
k8sdev-01-strass-n62wqs-webhost-deployment-7465f45464-92rqh   2/2     Running   2          5d23h

The *doc-pod* pod runs the documentation adapted to the strass instance. The *webhost-deployment* pod runs the web application.

To store the pod identifier in an environment variable, use these commands.

CI_COMMIT_REF_SLUG=master
BACKEND_POD=$(kubectl get po -l branch=branch-${CI_COMMIT_REF_SLUG},role=front --output jsonpath='{.items[0].metadata.name}')
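
Before using the variable, you may want to check that a pod actually matched the selector; this small bash sketch aborts with an error message if BACKEND_POD is empty:

# fail early if no pod matched the label selector
echo "${BACKEND_POD:?no pod matched branch=branch-${CI_COMMIT_REF_SLUG},role=front}"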

Get its log#

To get the logs, run the following command; do not forget to specify which container you want the logs from.

kubectl logs k8sdev-01-strass-n62wqs-webhost-deployment-7465f45464-92rqh --container django-container
# you can also use the name we stored in an env variable
kubectl logs ${BACKEND_POD} --container django-container

You get:

Copy static files to shared directory
cp -rf /code/.static/* /code/.static.shared
Applying database migrations
No changes detected
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, live_settings, sessions, strass_app
Running migrations:
  No migrations to apply.
[2023-09-14 07:35:23 +0000] [1] [INFO] Starting gunicorn 21.2.0
[2023-09-14 07:35:23 +0000] [1] [INFO] Listening at: http://0.0.0.0:8086 (1)
[2023-09-14 07:35:23 +0000] [1] [INFO] Using worker: sync
[2023-09-14 07:35:23 +0000] [16] [INFO] Booting worker with pid: 16
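
If you prefer to watch the logs live instead of a one-shot dump, you can follow them and limit the history with standard kubectl flags:

# follow the logs, starting from the last 100 lines
kubectl logs -f --tail=100 ${BACKEND_POD} --container django-container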

Restart the app, i.e., kill the pod#

By deleting the web pod, you force the deployment to create a new one, which effectively restarts the application.

kubectl delete po ${BACKEND_POD}
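
You can then watch the deployment bring up the replacement pod, reusing the label selector from above (pod names will differ from the example output):

kubectl get po -l branch=branch-${CI_COMMIT_REF_SLUG},role=front -w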