
[CKA] Udemy Practice Test Solutions - Troubleshooting

기록하는 백앤드개발자 2024. 1. 28. 12:05

[kubernetes] Index of Kubernetes-related posts

ㅁ Introduction

ㅇ Study notes on the Udemy practice tests for TROUBLESHOOTING.

 

ㅁ APPLICATION FAILURE

ㅇ Fix the wrong service name

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: alpha
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql
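
A quick check after the rename, assuming the web app looks the service up by the name mysql-service in the alpha namespace:

# The service should now exist under the expected name
$ kubectl -n alpha get svc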

 

ㅇ Fix the targetPort of mysql-service

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: beta
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql
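
A quick way to confirm the port wiring after the edit (a sketch; assumes the mysql pod listens on 3306):

$ kubectl -n beta describe svc mysql-service | grep -iE 'port|endpoints'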

 

ㅇ Fix the selector label of mysql-service

# Before the fix: the service selector does not match the pod label
$ kubectl -n gamma describe svc mysql-service | grep -i selector
Selector:          name=sql00001

# After the fix
$ kubectl -n gamma describe svc mysql-service | grep -i selector
Selector:          name=mysql

# edit svc
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: gamma
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql
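
Once the selector matches the pod label, the service should pick up the pod as an endpoint (sketch):

$ kubectl -n gamma get ep mysql-service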

 

ㅇ Fix the MySQL user info in the webapp-mysql deployment

spec:
      containers:
      - env:
        - name: DB_Host
          value: mysql-service
        - name: DB_User
          value: root
        - name: DB_Password
          value: paswrd
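
The env values live in the deployment spec, so editing it triggers a new rollout. A minimal sketch, assuming the deployment is named webapp-mysql in the default namespace:

$ kubectl edit deploy webapp-mysql        # fix DB_User, save, and a new pod rolls out
$ kubectl rollout status deploy webapp-mysql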

 

ㅇ Fix the MySQL password and the webapp user info

 

# Change nodePort to 30081
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  namespace: zeta
spec:
  ports:
  - nodePort: 30081
    port: 8080
    targetPort: 8080
  selector:
    name: webapp-mysql
  type: NodePort

# webapp deploy DB_User
spec:
    containers:
    - env:
      - name: DB_Host
        value: mysql-service
      - name: DB_User
        value: root
      - name: DB_Password
        value: paswrd

# mysql pod password
spec:
  containers:
  - env:
    - name: MYSQL_ROOT_PASSWORD
      value: paswrd
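
A bare Pod's env cannot be patched in place, so it has to be re-created. A sketch, assuming the pod is named mysql and sits in the zeta namespace:

$ kubectl -n zeta get pod mysql -o yaml > mysql.yaml
# fix MYSQL_ROOT_PASSWORD in mysql.yaml, then
$ kubectl -n zeta replace --force -f mysql.yaml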

 

ㅁ Control Plane Failure

ㅇ Analyze why the application does not start

# Check the overall state of resources in every namespace
$ k get all -A
ㄴ The scheduler turns out to be failing

# Inspect the scheduler pod details
$ k -n kube-system describe po kube-scheduler-controlplane
.............
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Warning  Failed   13m (x4 over 14m)     kubelet  Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "kube-schedulerrrr": executable file not found in $PATH: unknown
.............

# Fix the kube-scheduler-controlplane command
# Trial and error: kubectl edit cannot fix this, because the scheduler is a static pod.
# /etc/kubernetes/manifests/kube-scheduler.yaml has to be edited instead.
.............
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    ....
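
After saving the manifest, the kubelet re-creates the static pod on its own; it can take a minute to settle (sketch):

$ watch kubectl -n kube-system get pod kube-scheduler-controlplane
# once the scheduler is Running, any Pending pods should get scheduled
$ kubectl get pods -A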

 

 

ㅇ Pod scale-up issue

 ㄴ Fixed the kube-controller-manager config issue (see the sketch below)

 ㄴ kubectl scale deploy app --replicas=2 --current-replicas=2
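
A minimal sketch of the fix, assuming the kube-controller-manager static pod was crashing because of a broken option in its manifest (the exact broken flag is an assumption):

# find out why the controller manager is failing
$ kubectl -n kube-system logs kube-controller-manager-controlplane
# fix the broken option, e.g. a --kubeconfig pointing at a non-existent file
$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# once it is Running again, the scale request goes through
$ kubectl scale deploy app --replicas=2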

 

ㅇ kube-controller-manager ca.crt file path issue

 ㄴ The hostPath in the manifest pointed at the wrong path (see the sketch below)

 ㄴ I had backed up the yaml by appending .bak, but the kubelet picks up every file in the manifests directory, not just those ending in .yaml, so my changes were not being applied.
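
A sketch of what the fix looks like in the manifest; the volume name and the broken path are assumptions:

# /etc/kubernetes/manifests/kube-controller-manager.yaml
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki        # was pointing at a directory that does not exist
      type: DirectoryOrCreate
    name: k8s-certs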

 

ㅁ Worker Node Failure

# Check node status
$ kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
controlplane   Ready      control-plane   5m43s   v1.26.0
node01         NotReady   <none>          5m7s    v1.26.0

# Check kubelet and containerd status
$ systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-12-29 14:50:26 EST; 11min ago
       Docs: https://containerd.io
   Main PID: 1058 (containerd)
      Tasks: 65
     Memory: 181.2M
     CGroup: /system.slice/containerd.service
     
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: inactive (dead) since Thu 2022-12-29 14:51:58 EST; 7min ago
       Docs: https://kubernetes.io/docs/home/
    Process: 2085 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTR>
   Main PID: 2085 (code=exited, status=0/SUCCESS)
   

# Start kubelet
$ systemctl start kubelet

root@node01:~> systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Thu 2022-12-29 15:00:26 EST; 3s ago
       Docs: https://kubernetes.io/docs/home/
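
With kubelet running again, node01 should go back to Ready after a few seconds:

$ kubectl get nodes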

 

ㅇ Analyze the cluster error

# Check the kubelet log on node01
$ journalctl -u kubelet.service
.
May 30 13:08:20 node01 kubelet[4554]: E0530 13:08:20.141826    4554 run.go:74] "command failed" err="failed to construct kubelet dependencies: unable to load client CA file /etc/kubernetes/pki/WRONG-CA-FILE.crt: open /etc/kubernetes/pki/WRONG-CA-FILE.crt: no such file or directory"
.

# Fix /var/lib/kubelet/config.yaml: point clientCAFile at the real CA certificate
x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt    # was /etc/kubernetes/pki/WRONG-CA-FILE.crt
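
The kubelet reads config.yaml only at startup, so it has to be restarted after the edit:

$ systemctl restart kubelet
$ systemctl status kubelet        # should be active (running) again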

 

 

ㅇ node01 -> control plane communication issue

# Check the cluster server address
$ kubectl config view 
.
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://controlplane:6443
  name: kubernetes
.

# Check the kubelet log
$ journalctl -u kubelet 
.
May 30 13:43:55 node01 kubelet[8858]: E0530 13:43:55.004939    8858 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://controlplane:6553/api/v1/nodes?fieldSelector=metadata.name%3Dnode01&limit=500&resourceVersion=0": 
dial tcp 192.24.132.5:6553: connect: connection refused
.

# Fix the kubelet kubeconfig: the server port was 6553, it must be 6443
$ vi /etc/kubernetes/kubelet.conf
.
- cluster:
    certificate-authority-data: .
    server: https://controlplane:6443
.

# kubelet restart
$ systemctl restart kubelet

 

ㅁ Network

ㅇ Pods fail to run because Weave is not installed

 ㄴ weave install: https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-installation

 

Do the services in triton namespace have a valid endpoint? If they do, check the kube-proxy and the weave logs.
Does the cluster have a Network Addon installed?

Install Weave using the link: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network


curl -L https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s-1.11.yaml | kubectl apply -f -
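
After applying the manifest, it is worth checking that the weave-net pods come up and the stuck pods in the triton namespace start running (sketch; name=weave-net is the label the upstream DaemonSet uses):

$ kubectl -n kube-system get pods -l name=weave-net
$ kubectl -n triton get pods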

 

ㅇ kube-proxy issue


# kube-proxy log
$  k logs -n kube-system pods/kube-proxy-zk9xz 
E0128 05:37:09.535039       1 run.go:74] "command failed" err="failed complete: open /var/lib/kube-proxy/configuration.conf: no such file or directory"

# The correct config file name is in the kube-proxy ConfigMap (config.conf)
$  k -n kube-system describe configmaps kube-proxy 
Name:         kube-proxy
Namespace:    kube-system
Labels:       app=kube-proxy
Annotations:  kubeadm.kubernetes.io/component-config.hash: sha256:c15650807a67e3988e859d6c4e9d56e3a39f279034149529187be619e5647ea0

Data
====
config.conf:
----

# Fix the --config path in the kube-proxy DaemonSet
$ k -n kube-system edit daemonsets.apps kube-proxy 
.
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
.
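
Editing the DaemonSet rolls out a new kube-proxy pod; a quick verification sketch:

$ kubectl -n kube-system rollout status ds/kube-proxy
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy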