2. Lab 2: Kubernetes on Docker Desktop
2.2. Check the installation
If we have other Kubernetes installations on our machine besides Docker Desktop (see some alternatives), it may be necessary to switch the context if we experience an error like "Unable to connect to the server: dial tcp: i/o timeout".
To switch the context to docker-desktop we can use the following command:
$ kubectl config use-context docker-desktop
Switched to context "docker-desktop".
We can get the current context with the following:
$ kubectl config current-context
docker-desktop
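Both commands read and update the local kubeconfig file (by default ~/.kube/config). As a rough sketch of the relevant part of that file (entries abridged; apart from docker-desktop, the names are illustrative):

```yaml
apiVersion: v1
kind: Config
# The context kubectl currently targets
current-context: docker-desktop
contexts:
  - name: docker-desktop
    context:
      cluster: docker-desktop
      user: docker-desktop
  - name: minikube            # hypothetical second installation
    context:
      cluster: minikube
      user: minikube
```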
So now we can check our kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
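If we only care about the version numbers, the GitVersion field can be pulled out of that output with a small pipeline. A sketch, using a sample line copied (and trimmed) from the output above:

```shell
# Sample line in the format printed by `kubectl version` (trimmed from above)
line='Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df"}'

# Extract the quoted value of the GitVersion field
printf '%s\n' "$line" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p'
```

This prints v1.19.3.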
Let us get some information on the cluster:
$ kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Let us check out the nodes in the cluster:
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   41m   v1.19.3
3. Lab 3: Starting, accessing and stopping the Kubernetes Dashboard
3.1. Starting
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
$ kubectl get pods --all-namespaces
$ kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
3.2. Accessing
$ kubectl proxy &
The dashboard is then reachable in a browser at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
To generate the required token to access the dashboard, type the following command:
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
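This pipeline has two stages: the inner command finds the name of the deployment-controller token secret, and the outer awk keeps only the value on the token: line. The second stage can be tried in isolation on a fabricated sample of `kubectl describe secret` output (the token value below is a made-up placeholder):

```shell
# Fabricated excerpt of `kubectl describe secret` output; the token is a placeholder
sample='Name:         deployment-controller-token-abcde
Namespace:    kube-system
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      PLACEHOLDER-TOKEN-VALUE'

# Print the second column of the line whose first field is exactly "token:"
printf '%s\n' "$sample" | awk '$1=="token:"{print $2}'
```

This prints PLACEHOLDER-TOKEN-VALUE; on a real cluster it prints the bearer token to paste into the dashboard login form.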
3.3. Stopping
Stop the dashboard:
$ kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
Verify that it is stopped:
$ kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kubernetes-dashboard | grep dashboard
$ kubectl get namespaces
Stop the kubectl proxy execution by typing:
$ kill $(ps -ef | grep 'kubectl proxy' | head -1 | awk '{print $2}')
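One caveat with this pattern: the grep in the pipeline also matches its own process in the ps listing, so head -1 relies on the real kubectl proxy line appearing first, which ps does not guarantee. Where available, `pkill -f 'kubectl proxy'` is a more robust one-liner. The PID extraction itself can be exercised on a fabricated `ps -ef` sample (PIDs are made up):

```shell
# Fabricated `ps -ef` output: the real proxy process plus the grep matching itself
sample='user  4242     1  0 10:00 ttys000    0:00.10 kubectl proxy
user  4311  4242  0 10:01 ttys000    0:00.00 grep kubectl proxy'

# Same extraction as above: first matching line, second column (the PID)
printf '%s\n' "$sample" | grep 'kubectl proxy' | head -1 | awk '{print $2}'
```

This prints 4242, the PID that kill receives.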
4. Lab 4: Installing Kafka on Kubernetes through Strimzi
4.1. Creating a kafka namespace
$ kubectl create namespace kafka
namespace/kafka created
4.2. Applying Strimzi installation file
$ kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-topic-operator-delegation created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created
deployment.apps/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created
clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created
clusterrole.rbac.authorization.k8s.io/strimzi-topic-operator created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-client-delegation created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-client created
serviceaccount/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects2is.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created
configmap/strimzi-cluster-operator created
4.3. Provision the Apache Kafka cluster
$ kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
kafka.kafka.strimzi.io/my-cluster created
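The applied file defines a Kafka custom resource that the Strimzi operator turns into broker and ZooKeeper pods. As a rough, abridged sketch of what a single-node persistent example typically contains (field names follow the Strimzi schema, but API versions and exact values vary across Strimzi releases):

```yaml
apiVersion: kafka.strimzi.io/v1beta2   # may be v1beta1 on older Strimzi releases
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1                        # single broker
    listeners:
      - name: plain
        port: 9092                     # the port used by the clients below
        type: internal
        tls: false
    storage:
      type: persistent-claim           # data survives pod restarts
      size: 100Gi
      deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
```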
Wait while Kubernetes starts the required pods, services and so on:
$ kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
kafka.kafka.strimzi.io/my-cluster condition met
4.4. Send and receive messages
Start a consumer:
$ kubectl -n kafka run kafka-consumer -ti \
    --image=quay.io/strimzi/kafka:0.21.1-kafka-2.7.0 --rm=true \
    --restart=Never -- \
    bin/kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic --from-beginning
On a different terminal, start a producer:
$ kubectl -n kafka run kafka-producer -ti \
    --image=quay.io/strimzi/kafka:0.21.1-kafka-2.7.0 \
    --rm=true --restart=Never -- \
    bin/kafka-console-producer.sh \
    --broker-list my-cluster-kafka-bootstrap:9092 \
    --topic my-topic
Type some messages:
If you don't see a command prompt, try pressing enter.
>a
[2021-02-09 13:24:27,844] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2021-02-09 13:24:27,956] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 4 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2021-02-09 13:24:28,071] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 5 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
>bb
>c
>
The initial LEADER_NOT_AVAILABLE warnings are expected: they appear while my-topic is being auto-created on first use.
Note the output appearing on the consumer terminal:
If you don't see a command prompt, try pressing enter.
a
bb
c