I found a demo that builds a multi-node cluster with KIND (Kubernetes IN Docker) and observes Cilium, so I decided to try it out.
"Cilium Grafana Observability Demo"
https://github.com/isovalent/cilium-grafana-observability-demo
git clone https://github.com/isovalent/cilium-grafana-observability-demo.git
The OSS used here is exactly the kind of software I have been interested in lately. There are a lot of moving parts, so my plan is to get it running first and then study how it is put together.
Environment: Ubuntu 20.04 (a physical PC)
The installation did not work by simply following the demo as written, so some trial and error was needed.
(When a command timed out without returning, I extended the timeout (helm); when that still failed, I changed package versions.)
Installing docker, kubectl, kind, yq, helm, and cilium-cli (the parts that look up versions and verify checksums are for reference):
sudo apt-get install docker-ce docker-ce-cli docker-compose
sudo apt-get install containerd.io
sudo systemctl restart docker
curl -L -s https://dl.k8s.io/release/stable.txt
curl -LO https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl
(echo "$(cat kubectl.sha256) kubectl" | sha256sum --check)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O yq
(wget https://github.com/mikefarah/yq/releases/download/v4.34.1/yq_linux_amd64 -O yq)
sudo snap install helm --classic
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
docker is a server; the rest are client tools.
Being able to run a multi-node Kubernetes cluster on docker is very convenient!
(helm is the package manager for Kubernetes.)
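To make sure each tool went in correctly, the versions can be checked as below (kind and yq also need a chmod +x and a move onto the PATH first):
docker version
kubectl version --client
kind version
yq --version
helm version
cilium version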
It had been a while since I last touched Cilium, so I also reviewed my earlier BPF posts:
https://decode.red/blog/tag/bpf/
Rewriting kind-config.yaml (needed because kubelet failed to start otherwise):
image: kindest/node:v1.24.15@sha256:7db4f8bea3e14b82d12e044e25e34bd53754b7f2b0e9d56df21774e6f66a70ab
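For reference, this is roughly what the relevant part of kind-config.yaml looks like after the change (a sketch based on the demo repository: besides pinning the node image, the point is that kind's default CNI and kube-proxy are turned off, because Cilium takes over both roles):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.24.15@sha256:7db4f8bea3e14b82d12e044e25e34bd53754b7f2b0e9d56df21774e6f66a70ab
networking:
  disableDefaultCNI: true   # Cilium is installed as the CNI later
  kubeProxyMode: none       # required for kubeProxyReplacement=strict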
Creating the cluster
kind create cluster --name cidemo --config kind-config.yaml -v10
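A quick sanity check that the cluster came up (the same commands discussed again at the end of this post):
kind get clusters
kubectl get nodes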
Adding the chart repositories
helm repo add cilium https://helm.cilium.io
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo add minio https://operator.min.io
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add strimzi https://strimzi.io/charts
helm repo add elastic https://helm.elastic.co
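After adding the repositories, the local chart index can be refreshed with the usual:
helm repo update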
Installing the Prometheus-operator CRDs
helm template kube-prometheus prometheus-community/kube-prometheus-stack --include-crds \
  | yq 'select(.kind == "CustomResourceDefinition") * {"metadata": {"annotations": {"meta.helm.sh/release-name": "kube-prometheus", "meta.helm.sh/release-namespace": "monitoring"}}}' \
  | kubectl create -f -
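The yq expression extracts only the CustomResourceDefinition manifests from the rendered chart and merges in the helm ownership annotations, so that the later kube-prometheus-stack install can adopt the CRDs. Whether they were created can be checked with:
kubectl get crd | grep monitoring.coreos.com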
Creating the namespace
kubectl create ns monitoring
Installing Cilium
# masterIP is needed for kubeProxyReplacement
MASTER_IP="$(docker inspect cidemo-control-plane | jq '.[0].NetworkSettings.Networks.kind.IPAddress' -r)"
helm upgrade cilium cilium/cilium \
  --version 1.13.4 \
  --install \
  --wait \
  --namespace kube-system \
  --values helm/cilium-values.yaml \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost="${MASTER_IP}" \
  --set k8sServicePort=6443
Checking the status
kubectl get pods -n kube-system
cilium status –wait
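Because kubeProxyReplacement=strict was set, it can also be confirmed from the agent itself (the cilium binary inside the agent pod reports this in its status output):
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement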
Installing Ingress
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --install \
  --wait \
  --namespace ingress-nginx --create-namespace \
  --version 4.1.4 \
  --values helm/ingress-nginx-values.yaml
Installing the OpenTelemetry operator and collector (I initially forgot this step and ran it after the UI was already up)
helm upgrade opentelemetry-operator open-telemetry/opentelemetry-operator \
  --install \
  --wait \
  --namespace opentelemetry-operator --create-namespace \
  --version 0.15.0 \
  -f helm/opentelemetry-operator-values.yaml
kubectl apply -n opentelemetry-operator -f manifests/otel-collector.yaml
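Whether the operator actually created the collector can be checked via its pods and the OpenTelemetryCollector custom resource:
kubectl get pods -n opentelemetry-operator
kubectl get opentelemetrycollectors -n opentelemetry-operator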
Installing Tempo
helm upgrade tempo grafana/tempo \
  --install \
  --wait \
  --namespace tempo --create-namespace \
  --version 0.16.2 \
  -f helm/tempo-values.yaml
Prometheus & Grafanaインストール
helm upgrade kube-prometheus prometheus-community/kube-prometheus-stack \
  --install \
  --wait \
  --namespace monitoring --create-namespace \
  --version 40.3.1 \
  --values helm/prometheus-values.yaml
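Grafana sits behind the kube-prometheus-grafana service (see the service list below); one simple way to reach it without going through the ingress is a port-forward (3000 is an arbitrary local port):
kubectl -n monitoring port-forward svc/kube-prometheus-grafana 3000:80
Then open http://localhost:3000 in a browser.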
Installing jobs-app
helm dep build ./helm/jobs-app
helm upgrade jobs-app ./helm/jobs-app \
  --install \
  --wait \
  --create-namespace \
  --namespace tenant-jobs \
  -f helm/jobs-app-values.yaml
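Checking that the demo application started:
kubectl get pods -n tenant-jobs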
Checking the CiliumNetworkPolicy resources
kubectl get ciliumnetworkpolicy -n tenant-jobs -o yaml
(The retrieved content is at the end of this post.)
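These policies are what give Hubble its DNS and HTTP (L7) visibility. With the hubble CLI installed and the relay port-forwarded (e.g. cilium hubble port-forward), the resulting flows can be watched like this (a sketch):
hubble observe --namespace tenant-jobs --protocol http
hubble observe --namespace tenant-jobs --protocol dns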
Checking the pods, services, the Grafana screens, and so on
n@n:~/docker/cilium-grafana-observability-demo$ ku get service -A
NAMESPACE       NAME                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
default         kubernetes                                 ClusterIP   10.96.0.1       <none>        443/TCP                        50m
ingress-nginx   ingress-nginx-controller                   NodePort    10.96.252.29    <none>        80:32177/TCP,443:32628/TCP     37m
ingress-nginx   ingress-nginx-controller-admission         ClusterIP   10.96.68.161    <none>        443/TCP                        37m
kube-system     cilium-agent                               ClusterIP   None            <none>        9962/TCP,9964/TCP              40m
kube-system     cilium-operator                            ClusterIP   None            <none>        9963/TCP                       40m
kube-system     hubble-metrics                             ClusterIP   None            <none>        9965/TCP                       40m
kube-system     hubble-peer                                ClusterIP   10.96.2.83      <none>        443/TCP                        40m
kube-system     hubble-relay                               ClusterIP   10.96.233.230   <none>        80/TCP                         40m
kube-system     hubble-relay-metrics                       ClusterIP   None            <none>        9966/TCP                       40m
kube-system     kube-dns                                   ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP         50m
kube-system     prometheus-k8s-coredns                     ClusterIP   None            <none>        9153/TCP                       29m
kube-system     prometheus-k8s-kube-controller-manager     ClusterIP   None            <none>        10257/TCP                      29m
kube-system     prometheus-k8s-kube-etcd                   ClusterIP   None            <none>        2381/TCP                       29m
kube-system     prometheus-k8s-kube-proxy                  ClusterIP   None            <none>        10249/TCP                      29m
kube-system     prometheus-k8s-kube-scheduler              ClusterIP   None            <none>        10259/TCP                      29m
kube-system     prometheus-k8s-kubelet                     ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   29m
monitoring      alertmanager-operated                      ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP     29m
monitoring      kube-prometheus-grafana                    ClusterIP   10.96.71.238    <none>        80/TCP                         29m
monitoring      kube-prometheus-kube-state-metrics         ClusterIP   10.96.6.128     <none>        8080/TCP                       29m
monitoring      kube-prometheus-prometheus-node-exporter   ClusterIP   10.96.112.110   <none>        9100/TCP                       29m
monitoring      prometheus-k8s-alertmanager                ClusterIP   10.96.139.60    <none>        9093/TCP                       29m
monitoring      prometheus-k8s-operator                    ClusterIP   10.96.207.25    <none>        443/TCP                        29m
monitoring      prometheus-k8s-prometheus                  ClusterIP   10.96.37.140    <none>        9090/TCP                       29m
monitoring      prometheus-operated                        ClusterIP   None            <none>        9090/TCP                       29m
tempo           tempo                                      ClusterIP   10.96.4.139     <none>        3100/TCP,16687/TCP,16686/TCP,6831/UDP,6832/UDP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP   31m
tenant-jobs     coreapi                                    ClusterIP   10.96.156.219   <none>        9080/TCP                       21m
tenant-jobs     elasticsearch-master                       ClusterIP   10.96.184.167   <none>        9200/TCP,9300/TCP              21m
tenant-jobs     elasticsearch-master-headless              ClusterIP   None            <none>        9200/TCP,9300/TCP              21m
tenant-jobs     jobposting                                 ClusterIP   10.96.249.219   <none>        9080/TCP                       21m
tenant-jobs     jobs-app-kafka-bootstrap                   ClusterIP   10.96.248.50    <none>        9091/TCP,9092/TCP,9093/TCP     11m
tenant-jobs     jobs-app-kafka-brokers                     ClusterIP   None            <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   11m
tenant-jobs     jobs-app-zookeeper-client                  ClusterIP   10.96.87.11     <none>        2181/TCP                       20m
tenant-jobs     jobs-app-zookeeper-nodes                   ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP     20m
tenant-jobs     loader                                     ClusterIP   10.96.219.145   <none>        50051/TCP                      21m
tenant-jobs     recruiter                                  ClusterIP   10.96.175.66    <none>        9080/TCP                       21m
n@n:~/docker/cilium-grafana-observability-demo$ ku get deploy -A
NAMESPACE            NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx        ingress-nginx-controller             1/1     1            1           37m
kube-system          cilium-operator                      1/1     1            1           41m
kube-system          coredns                              2/2     2            2           50m
kube-system          hubble-relay                         1/1     1            1           41m
local-path-storage   local-path-provisioner               1/1     1            1           50m
monitoring           kube-prometheus-grafana              1/1     1            1           30m
monitoring           kube-prometheus-kube-state-metrics   1/1     1            1           30m
monitoring           prometheus-k8s-operator              1/1     1            1           30m
tenant-jobs          coreapi                              1/1     1            1           22m
tenant-jobs          crawler                              1/1     1            1           22m
tenant-jobs          jobposting                           1/1     1            1           22m
tenant-jobs          jobs-app-entity-operator             1/1     1            1           11m
tenant-jobs          loader                               1/1     1            1           22m
tenant-jobs          recruiter                            1/1     1            1           22m
tenant-jobs          resumes                              1/1     1            1           22m
tenant-jobs          strimzi-cluster-operator             1/1     1            1           22m
The following service and deployment were confirmed later:
NAMESPACE                NAME                                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
opentelemetry-operator   opentelemetry-operator-controller-manager-metrics-service   ClusterIP   10.96.200.221   <none>        8443/TCP,8080/TCP   8m30s
NAMESPACE                NAME                                                         READY   UP-TO-DATE   AVAILABLE   AGE
opentelemetry-operator   opentelemetry-operator-controller-manager                    1/1     1            1           8m35s
I also tried launching the UI tool, as shown below.
Installing hubble-ui
helm upgrade cilium cilium/cilium --version 1.13.4 --namespace kube-system --reuse-values --set hubble.relay.enabled=true --set hubble.ui.enabled=true
Launching hubble-ui
cilium hubble ui --port-forward 8080
Because everything can be built so easily from YAML files, I figured the best way to grasp what is connected to what is to get it running first and then inspect the various states, which is how I proceeded this far.
(docker ps) and (kubectl get node) show only cidemo-control-plane, and (kind get clusters) shows only the single cluster cidemo, so management stays very tidy.
(The cluster name defaults to "kind", so I deliberately gave it one.)
The ps command lists a great many processes; running it both inside and outside a pod makes it tangible that a container is nothing more than a process in a different namespace (see the sketch below).
Trying out all sorts of commands like this really deepens your understanding.
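For example (a sketch; cidemo-control-plane is the kind node container, and the in-pod ps only works if the image ships it):
# on the host: workload processes are all visible inside the node container
docker exec cidemo-control-plane ps aux
# each container gets its own PID namespace on the node
docker exec cidemo-control-plane lsns -t pid
# the same kind of listing from inside a pod, where the app is PID 1
kubectl -n tenant-jobs exec deploy/coreapi -- ps aux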
—————————-
CiliumNetworkPolicy
apiVersion: v1
items:
- apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    annotations:
      meta.helm.sh/release-name: jobs-app
      meta.helm.sh/release-namespace: tenant-jobs
    creationTimestamp: "2023-06-20T00:53:51Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: allow-all-within-namespace
    namespace: tenant-jobs
    resourceVersion: "4082"
    uid: 6d0c67d9-7583-463f-aa27-b8c180baae3c
  spec:
    description: Allow all within namespace
    egress:
    - toEndpoints:
      - {}
    endpointSelector: {}
    ingress:
    - fromEndpoints:
      - {}
- apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    annotations:
      meta.helm.sh/release-name: jobs-app
      meta.helm.sh/release-namespace: tenant-jobs
    creationTimestamp: "2023-06-20T00:53:51Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: dns-visibility
    namespace: tenant-jobs
    resourceVersion: "4083"
    uid: 8dcb43e8-3f80-4da2-a321-94438cb6a7a3
  spec:
    egress:
    - toEndpoints:
      - matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s:k8s-app: kube-dns
      toPorts:
      - ports:
        - port: "53"
          protocol: ANY
        rules:
          dns:
          - matchPattern: '*'
    - toFQDNs:
      - matchPattern: '*'
    - toEntities:
      - all
    endpointSelector:
      matchLabels: {}
- apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    annotations:
      meta.helm.sh/release-name: jobs-app
      meta.helm.sh/release-namespace: tenant-jobs
    creationTimestamp: "2023-06-20T00:53:51Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: l7-egress-visibility
    namespace: tenant-jobs
    resourceVersion: "4080"
    uid: 8a65a463-671f-4fc5-8e8f-14b7fc043e0a
  spec:
    description: L7 policy
    egress:
    - toEntities:
      - world
      toPorts:
      - ports:
        - port: "80"
          protocol: TCP
        rules:
          http:
          - {}
    endpointSelector: {}
- apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    annotations:
      meta.helm.sh/release-name: jobs-app
      meta.helm.sh/release-namespace: tenant-jobs
    creationTimestamp: "2023-06-20T00:53:51Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: l7-ingress-visibility
    namespace: tenant-jobs
    resourceVersion: "4084"
    uid: 100693e8-d2c5-4886-a9c0-7300e539d5f8
  spec:
    description: L7 policy
    endpointSelector: {}
    ingress:
    - toPorts:
      - ports:
        - port: "9080"
          protocol: TCP
        - port: "50051"
          protocol: TCP
        - port: "9200"
          protocol: TCP
        rules:
          http:
          - {}
kind: List
metadata:
  resourceVersion: ""