피터의 개발이야기
[CKA] Udemy Practice Problem Walkthrough - Cluster Maintenance
ㅁ Introduction
ㅇ Study notes from the Udemy CKA practice tests on Cluster Maintenance.
ㅁ OS Upgrade
# drain: evict the pods and mark the node unschedulable
k drain node01 --ignore-daemonsets
# uncordon: mark the node schedulable again
k uncordon node01
# a drain fails when bare pods (with no controller) would be lost:
node/node01 already cordoned
error: unable to drain node "node01" due to error:cannot delete Pods declare no controller (use --force to override): default/hr-app, continuing command...
There are pending nodes to be drained:
node01
cannot delete Pods declare no controller (use --force to override): default/hr-app
# cordon: marks the node Unschedulable; no new pods are scheduled onto it
k cordon node01
# check that the node STATUS shows SchedulingDisabled
k get no
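When a bare pod such as default/hr-app blocks the drain (as in the error above), the override hinted at in the message can be sketched as follows; note that force-evicted bare pods are deleted outright, not rescheduled elsewhere:

```sh
# --force deletes pods that have no controller; they will NOT be recreated
k drain node01 --ignore-daemonsets --force
```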
ㅁ Cluster Upgrade
ㄴ This page explains how to upgrade a kubeadm-created Kubernetes cluster from version 1.28.x to 1.29.x, and from 1.29.x to 1.29.y (where y > x).
# check the current cluster version
$ k version --short
Client Version: v1.26.0
Kustomize Version: v4.5.7
Server Version: v1.26.0
# draw up the upgrade plan with kubeadm
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.26.0
[upgrade/versions] kubeadm version: v1.26.0
I0124 21:51:22.244223 20453 version.go:256] remote version is much newer: v1.29.1; falling back to: stable-1.26
[upgrade/versions] Target version: v1.26.13
[upgrade/versions] Latest version in the v1.26 series: v1.26.13
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 2 x v1.26.0 v1.26.13
Upgrade to the latest version in the v1.26 series:
COMPONENT CURRENT TARGET
kube-apiserver v1.26.0 v1.26.13
kube-controller-manager v1.26.0 v1.26.13
kube-scheduler v1.26.0 v1.26.13
kube-proxy v1.26.0 v1.26.13
CoreDNS v1.9.3 v1.9.3
etcd 3.5.6-0 3.5.6-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.26.13
Note: Before you can perform this upgrade, you have to update kubeadm to v1.26.13.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
# start the controlplane upgrade
$ apt update
# upgrade the kubeadm package
$ apt-get install kubeadm=1.27.0-00
# apply the controlplane upgrade
$ kubeadm upgrade apply v1.27.0
# upgrade the kubelet package
$ apt-get install kubelet=1.27.0-00
# daemon restart
$ systemctl daemon-reload
$ systemctl restart kubelet
# drain
$ k drain node01 --ignore-daemonsets
# verify the drain: make sure all pods have moved off node01
$ k get po -o wide
# move to the worker node
$ ssh node01
# upgrade kubeadm to 1.27.0-00
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.27.0-00 && \
apt-mark hold kubeadm
# upgrade kubelet and kubectl on the worker node
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.27.0-00 kubectl=1.27.0-00 && \
apt-mark hold kubelet kubectl
# daemon restart
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# go back to the controlplane node
exit
# uncordon
k uncordon node01
# check that the node is Ready again
k get no
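After uncordoning, the kubelet version on each node can be double-checked; a sketch using custom-columns (the column names NAME/VERSION are arbitrary labels):

```sh
# confirm both nodes now report the upgraded kubelet version
k get no -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
```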
ㅁ Backup and Restore Methods
# which image does etcd run? (jq one-liner)
$ k -n kube-system get po etcd-controlplane -o json | jq .spec.containers[0].image
# which URLs does etcd listen on?
$ k -n kube-system get po etcd-controlplane -o json | jq .spec.containers[0].command | grep listen
"--listen-client-urls=https://127.0.0.1:2379,https://192.10.146.9:2379", <====
"--listen-metrics-urls=http://127.0.0.1:2381",
"--listen-peer-urls=https://192.10.146.9:2380",
# which certificate and CA files does etcd use?
$ k -n kube-system get po etcd-controlplane -o json | jq .spec.containers[0].command | grep crt
"--cert-file=/etc/kubernetes/pki/etcd/server.crt",
"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt",
"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt",
"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt" <====
# ETCD backup: dump the full etcd command to collect the flag values
$ k -n kube-system get po etcd-controlplane -o json | jq .spec.containers[0].command
[
"etcd",
"--advertise-client-urls=https://192.10.146.9:2379",
"--cert-file=/etc/kubernetes/pki/etcd/server.crt",
"--client-cert-auth=true",
"--data-dir=/var/lib/etcd",
"--experimental-initial-corrupt-check=true",
"--experimental-watch-progress-notify-interval=5s",
"--initial-advertise-peer-urls=https://192.10.146.9:2380",
"--initial-cluster=controlplane=https://192.10.146.9:2380",
"--key-file=/etc/kubernetes/pki/etcd/server.key",
"--listen-client-urls=https://127.0.0.1:2379,https://192.10.146.9:2379",
"--listen-metrics-urls=http://127.0.0.1:2381",
"--listen-peer-urls=https://192.10.146.9:2380",
"--name=controlplane",
"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt",
"--peer-client-cert-auth=true",
"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key",
"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt",
"--snapshot-count=10000",
"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"
]
# build the backup script (flag values taken from the etcd command above;
# inline comments after a trailing backslash would break the command, so the
# mapping is noted up front)
$ cat bak.sh
# endpoints <- --listen-client-urls
# cacert    <- --trusted-ca-file
# cert      <- --cert-file
# key       <- --key-file
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/snapshot-pre-boot.db
# run it
$ sh bak.sh
Snapshot saved at /opt/snapshot-pre-boot.db
# check the snapshot file
$ ll /opt/snapshot-pre-boot.db
-rw-r--r-- 1 root root 2232352 Jan 25 01:11 /opt/snapshot-pre-boot.db
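Beyond checking that the file exists, the snapshot itself can be inspected; a sketch using etcdctl's status subcommand, which prints the hash, revision, total key count, and size:

```sh
ETCDCTL_API=3 etcdctl snapshot status /opt/snapshot-pre-boot.db --write-out=table
```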
# restore the snapshot into a new data dir
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
snapshot restore /opt/snapshot-pre-boot.db
# ETCD Pod hostPath update: point the hostPath at the restored data dir.
# Only the volumes hostPath changes; the container mountPath stays
# /var/lib/etcd, matching etcd's --data-dir flag.
$ vi /etc/kubernetes/manifests/etcd.yaml
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup   # <===
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
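Once the manifest is saved, the kubelet notices the change and recreates the etcd static pod from the restored data dir; a sketch for watching the rollout (it can take a minute or two, and the other controlplane pods restart as well):

```sh
# wait for etcd-controlplane to come back up
watch kubectl -n kube-system get po
# confirm the etcd-data volume now points at the restored dir
kubectl -n kube-system describe po etcd-controlplane | grep -A3 'etcd-data'
```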