[EBS] Creating an EKS cluster, deploying MongoDB, and migrating EBS volumes from gp2 to gp3

기록하는 백앤드개발자 2022. 9. 20. 01:19

 

ㅁ Overview

 ㅇ Documents the process of building a test environment to change the MongoDB volume from gp2 to gp3.

 ㅇ Without installing the CSI driver on EKS, the volume is upgraded to gp3 from the AWS Console and volume access is then verified.

 

ㅁ Creating and verifying an EKS cluster with eksctl

$ eksctl create cluster --name k8s-peter --region ap-northeast-2 --version 1.20 --nodegroup-name work-nodes --nodes 1 --nodes-min 1 --nodes-max 3 --node-type t3.medium --node-volume-size=20 --with-oidc --ssh-access --ssh-public-key aws-login-key --managed
2022-09-18 20:43:31 [ℹ]  eksctl version 0.106.0
2022-09-18 20:43:31 [ℹ]  using region ap-northeast-2
2022-09-18 20:43:32 [ℹ]  setting availability zones to [ap-northeast-2a ap-northeast-2b ap-northeast-2c]
2022-09-18 20:43:32 [ℹ]  subnets for ap-northeast-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-09-18 20:43:32 [ℹ]  subnets for ap-northeast-2b - public:192.168.32.0/19 private:192.168.128.0/19
2022-09-18 20:43:32 [ℹ]  subnets for ap-northeast-2c - public:192.168.64.0/19 private:192.168.160.0/19
2022-09-18 20:43:32 [ℹ]  nodegroup "work-nodes" will use "" [AmazonLinux2/1.20]
2022-09-18 20:43:32 [ℹ]  using EC2 key pair %!q(*string=<nil>)
2022-09-18 20:43:32 [ℹ]  using Kubernetes version 1.20
2022-09-18 20:43:32 [ℹ]  creating EKS cluster "k8s-peter" in "ap-northeast-2" region with managed nodes
2022-09-18 20:43:32 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-09-18 20:43:32 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-2 --cluster=k8s-peter'
2022-09-18 20:43:32 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "k8s-peter" in "ap-northeast-2"
2022-09-18 20:43:32 [ℹ]  CloudWatch logging will not be enabled for cluster "k8s-peter" in "ap-northeast-2"
2022-09-18 20:43:32 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-2 --cluster=k8s-peter'
2022-09-18 20:43:32 [ℹ]
2 sequential tasks: { create cluster control plane "k8s-peter",
    2 sequential sub-tasks: {
        4 sequential sub-tasks: {
            wait for control plane to become ready,
            associate IAM OIDC provider,
            2 sequential sub-tasks: {
                create IAM role for serviceaccount "kube-system/aws-node",
                create serviceaccount "kube-system/aws-node",
            },
            restart daemonset "kube-system/aws-node",
        },
        create managed nodegroup "work-nodes",
    }
}
2022-09-18 20:43:32 [ℹ]  building cluster stack "eksctl-k8s-peter-cluster"
2022-09-18 20:43:32 [ℹ]  deploying stack "eksctl-k8s-peter-cluster"
2022-09-18 20:44:02 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:44:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:45:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:46:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:47:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:48:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:49:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:50:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:51:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:52:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:53:32 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:54:33 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:55:33 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-cluster"
2022-09-18 20:57:34 [ℹ]  building iamserviceaccount stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 20:57:34 [ℹ]  deploying stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 20:57:34 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 20:58:04 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 20:58:55 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 20:58:55 [ℹ]  serviceaccount "kube-system/aws-node" already exists
2022-09-18 20:58:55 [ℹ]  updated serviceaccount "kube-system/aws-node"
2022-09-18 20:58:55 [ℹ]  daemonset "kube-system/aws-node" restarted
2022-09-18 20:58:55 [ℹ]  building managed nodegroup stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 20:58:55 [ℹ]  deploying stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 20:58:55 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 20:59:25 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 21:00:01 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 21:00:52 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 21:01:56 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 21:01:56 [ℹ]  waiting for the control plane availability...
2022-09-18 21:01:57 [!]  failed to determine authenticator version, leaving API version as default v1alpha1: failed to parse versions: unable to parse first version "": strconv.ParseUint: parsing "": invalid syntax
2022-09-18 21:01:57 [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2022-09-18 21:01:57 [ℹ]  no tasks
2022-09-18 21:01:57 [✔]  all EKS cluster resources for "k8s-peter" have been created
2022-09-18 21:01:57 [ℹ]  nodegroup "work-nodes" has 1 node(s)
2022-09-18 21:01:57 [ℹ]  node "ip-192-168-70-154.ap-northeast-2.compute.internal" is ready
2022-09-18 21:01:57 [ℹ]  waiting for at least 1 node(s) to become ready in "work-nodes"
2022-09-18 21:01:57 [ℹ]  nodegroup "work-nodes" has 1 node(s)
2022-09-18 21:01:57 [ℹ]  node "ip-192-168-70-154.ap-northeast-2.compute.internal" is ready
2022-09-18 21:01:58 [ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2022-09-18 21:01:58 [✔]  EKS cluster "k8s-peter" in "ap-northeast-2" region is ready

 ㅇ The last log line, EKS cluster "k8s-peter" in "ap-northeast-2" region is ready, confirms that the EKS cluster was created.
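 ㅇ As that log also suggests, a quick sanity check with kubectl (a minimal check; output omitted here) confirms the worker node has registered and is Ready:

# list the worker nodes registered with the cluster
kubectl get nodes -o wide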

 

[ec2-user@ip-172-31-43-214 ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-lrbzd            1/1     Running   0          7m5s
kube-system   coredns-df796cc4d-bhgth   1/1     Running   0          17m
kube-system   coredns-df796cc4d-k9nxw   1/1     Running   0          17m
kube-system   kube-proxy-dsf7j          1/1     Running   0          7m5s
[ec2-user@ip-172-31-43-214 ~]$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   18m
[ec2-user@ip-172-31-43-214 ~]$ kubectl get all --all-namespaces
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   pod/aws-node-lrbzd            1/1     Running   0          8m23s
kube-system   pod/coredns-df796cc4d-bhgth   1/1     Running   0          18m
kube-system   pod/coredns-df796cc4d-k9nxw   1/1     Running   0          18m
kube-system   pod/kube-proxy-dsf7j          1/1     Running   0          8m23s

NAMESPACE     NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         19m
kube-system   service/kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   18m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/aws-node     1         1         1       1            1           <none>          18m
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          18m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           18m

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-df796cc4d   2         2         2       18m
[ec2-user@ip-172-31-43-214 ~]$

 ㅇ The basic system components can be checked with kubectl get pods --all-namespaces.

 

 

ㅁ Setting up MongoDB on Kubernetes

 ㅇ MongoDB was deployed by following an earlier post that documented the MongoDB setup process.

# create
[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$  kubectl apply -f .
deployment.apps/mongo-client created
deployment.apps/mongo created
service/mongo-nodeport-svc created
persistentvolume/mongo-data-pv created
persistentvolumeclaim/mongo-data created
secret/mongo-creds created

# verify the created resources
[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get all
NAME                                READY   STATUS    RESTARTS   AGE
pod/mongo-76f9996ff5-hfvfv          0/1     Running   4          6m49s
pod/mongo-client-6f6b5c779b-rv6qd   1/1     Running   0          6m49s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/kubernetes           ClusterIP   10.100.0.1       <none>        443/TCP           35m
service/mongo-nodeport-svc   NodePort    10.100.199.240   <none>        27017:32000/TCP   6m49s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongo          0/1     1            0           6m49s
deployment.apps/mongo-client   1/1     1            1           6m49s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/mongo-76f9996ff5          1         1         0       6m49s
replicaset.apps/mongo-client-6f6b5c779b   1         1         1       6m49s

 

ㅁ Understanding the relationship between AWS volumes, PVs, and PVCs

 ㅇ First, an AWS volume is created, and a PV representing it is registered with the Kubernetes API server.

 ㅇ A PV specifies its capacity and the access modes it supports; a PVC is then created that states the minimum capacity and access mode it requires.

 ㅇ The PVC is submitted to the Kubernetes API server, and Kubernetes finds a suitable PV and binds the volume to the claim.

 ㅇ The bound PVC can then be used as one of the volumes inside a Pod.

 ㅇ In other words, things are created in the order AWS volume -> PV -> PVC; a minimal sketch of this flow follows below.
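 ㅇ For illustration only, a static-provisioning sketch of that chain (all names and the volume ID here are placeholders, not the manifests used in this post):

# illustrative static PV + PVC (sketch; assumes an EBS volume created beforehand)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    fsType: ext4
    volumeID: vol-xxxxxxxxxxxxxxxxx   # placeholder: the pre-created EBS volume ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  storageClassName: ""                # empty class so the claim binds to the pre-created PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

 ㅇ Note that a PVC without an explicit storageClassName falls back to the default StorageClass (gp2 here) and is dynamically provisioned instead, which is why, in the outputs below, the pre-created mongo-data-pv stays Available while the mongo-data claim ends up bound to a dynamically provisioned gp2 volume.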

 

ㅁ Checking the AWS volume

 ㅇ The volume itself must exist first.

 ㅇ The volume created by Kubernetes can be seen under EC2 > Volumes in the AWS Console.

 ㅇ Its volume type is gp2, the default.

 

ㅁ Checking the StorageClass

# check the storage classes
[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get storageclasses.storage.k8s.io
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  53m

 ㅇ The gp2 StorageClass (SC) is the class used when provisioning volumes for PersistentVolumeClaims (PVCs).
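 ㅇ The gp2 class's provisioner, parameters, and whether it is the cluster default can be inspected with the following (a quick check):

# show details of the default gp2 StorageClass
kubectl describe storageclass gp2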

 

ㅁ Checking the PersistentVolume (PV)

# check the persistent volumes
[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get persistentvolume
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON   AGE
mongo-data-pv                              1Gi        RWO            Retain           Available                                                23m
pvc-9ac37449-f7a2-4d26-8dc0-4463b866b67f   1Gi        RWO            Delete           Bound       default/mongo-data   gp2                     23m

# inspect the PV in detail
[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get pv pvc-9ac37449-f7a2-4d26-8dc0-4463b866b67f -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  creationTimestamp: "2022-09-18T12:19:00Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: ap-northeast-2
    failure-domain.beta.kubernetes.io/zone: ap-northeast-2c
  name: pvc-9ac37449-f7a2-4d26-8dc0-4463b866b67f
  resourceVersion: "4281"
  uid: 086b8332-c459-4e1a-ba89-e898d242a529
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://ap-northeast-2c/vol-0202ac8ec2be8af4a
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mongo-data
    namespace: default
    resourceVersion: "4262"
    uid: 9ac37449-f7a2-4d26-8dc0-4463b866b67f
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - ap-northeast-2c
        - key: failure-domain.beta.kubernetes.io/region
          operator: In
          values:
          - ap-northeast-2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
  volumeMode: Filesystem
status:
  phase: Bound

 ㅇ The spec.awsElasticBlockStore section (fsType: ext4, volumeID: aws://ap-northeast-2c/vol-0202ac8ec2be8af4a) shows that the AWS volume resides in the ap-northeast-2c availability zone.

 ㅇ It also confirms that this EBS volume is connected to the PV.

 

[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get pv pvc-9ac37449-f7a2-4d26-8dc0-4463b866b67f  -o jsonpath='{.spec.awsElasticBlockStore.volumeID}'
aws://ap-northeast-2c/vol-0202ac8ec2be8af4a

 ㅇ The same value can be retrieved more concisely with -o jsonpath='{.spec.awsElasticBlockStore.volumeID}'.
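 ㅇ If the AWS CLI is available, the extracted volume ID can be fed back to AWS to confirm the volume type from the command line (a sketch, assuming credentials for this account):

# check the EBS volume type behind the PV
aws ec2 describe-volumes \
  --volume-ids vol-0202ac8ec2be8af4a \
  --query 'Volumes[0].VolumeType' \
  --output text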

 

ㅁ Checking the PersistentVolumeClaim (PVC)

[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get persistentvolumeclaims -o wide
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
mongo-data   Bound    pvc-9ac37449-f7a2-4d26-8dc0-4463b866b67f   1Gi        RWO            gp2            62m   Filesystem

[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get persistentvolumeclaims mongo-data -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mongo-data","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    volume.kubernetes.io/selected-node: ip-192-168-70-154.ap-northeast-2.compute.internal
  creationTimestamp: "2022-09-18T12:18:54Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: mongo-data
  namespace: default
  resourceVersion: "4283"
  uid: 9ac37449-f7a2-4d26-8dc0-4463b866b67f
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp2
  volumeMode: Filesystem
  volumeName: pvc-9ac37449-f7a2-4d26-8dc0-4463b866b67f
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

 ㅇ spec.volumeName shows that the PVC is bound to the PV.

 

ㅁ Checking the Pod mount

[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ kubectl get pods mongo-76f9996ff5-hfvfv -o yaml
apiVersion: v1
kind: Pod
metadata:
~~ omitted ~~
spec:
  containers:
  - args:
    ~~ omitted ~~
    volumeMounts:
    - mountPath: /data/db
      name: mongo-data-dir
  ~~ omitted ~~
  volumes:
  - name: mongo-data-dir
    persistentVolumeClaim:
      claimName: mongo-data

 ㅇ This confirms that the AWS volume is mounted into the Pod via the PV and PVC.
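 ㅇ As an extra check, the mount can also be inspected from inside the running pod (a sketch; the pod name is the one from the outputs above and may differ in your cluster):

# confirm that /data/db is backed by the attached EBS volume
kubectl exec -it mongo-76f9996ff5-hfvfv -- df -h /data/db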

 

ㅁ Creating a gp3 StorageClass

# gp3 StorageClass manifest
[ec2-user@ip-172-31-43-214 gp3Test]$ cat storageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
  fsType: ext4
volumeBindingMode: Immediate

# apply
[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl apply -f storageClass.yaml
storageclass.storage.k8s.io/gp3 created

# verify the StorageClass was created
[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  151m
gp3             kubernetes.io/aws-ebs   Delete          Immediate              false                  9m33s

 ㅇ The gp3 StorageClass (SC) has been created.

 

ㅁ Creating a gp3 PVC

# gp3 PVC manifest
[ec2-user@ip-172-31-43-214 gp3Test]$ cat testgp3.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testgp3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3
  
# apply
[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl apply -f testgp3.yaml
persistentvolumeclaim/testgp3 created

# verify creation
[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-data   Bound     pvc-e075d518-f556-4101-b254-84292bd1c2f3   1Gi        RWO            gp2            144m
testgp3      Pending                                                                        gp3            9s

# analyze the root cause
[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl describe persistentvolumeclaims testgp3
Name:          testgp3
Namespace:     default
StorageClass:  gp3
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  8s (x4 over 50s)  persistentvolume-controller  Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"

 ㅇ Provisioning the gp3 volume on AWS failed.

 ㅇ The error was: Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3".

 

ㅁ Troubleshooting

https://github.com/aws/containers-roadmap/issues/1187

 

[EKS] [request]: enable support for provisioning gp3 volumes · Issue #1187 · aws/containers-roadmap (github.com)

 ㅇ According to this issue, the EBS CSI driver must be installed in order to use gp3.
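 ㅇ For reference, once the EBS CSI driver is installed, a gp3 StorageClass would point at the CSI provisioner rather than the in-tree one (a sketch only; this is not applied in this post because the driver is not installed here):

# gp3 StorageClass for the EBS CSI driver (illustrative)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-csi
provisioner: ebs.csi.aws.com          # CSI provisioner instead of kubernetes.io/aws-ebs
parameters:
  type: gp3
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer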

 

 

 ㅇ The official Amazon EBS CSI driver documentation also recommends that, if the EKS cluster is 1.22 or earlier and the CSI driver is not currently installed, the driver be installed on the cluster before updating it to 1.23.

 

ㅁ Checking for the CSI driver

[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl api-resources | grep "storage.k8s.io/v1"
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

 ㅇ This confirms that the Amazon EBS CSI-related Kubernetes resource types are registered with the Kubernetes API server.
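 ㅇ Since the CSIDriver resource type is registered, any installed CSI drivers can also be listed directly (the EBS driver, if installed, would appear as ebs.csi.aws.com):

# list CSI drivers registered with the cluster
kubectl get csidriver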

 

[ec2-user@ip-172-31-43-214 gp3Test]$ kubectl get po -n kube-system -l 'app in (ebs-csi-controller,ebs-csi-node,snapshot-controller)'
No resources found in kube-system namespace.

 ㅇ The driver pods were checked, but none were running.

 

ㅁ Finding a way to move to gp3 without upgrading EKS

 ㅇ A way had to be found to use gp3 without installing the CSI driver on the current EKS cluster.

 ㅇ So the gp2 volume was changed to gp3 directly in the AWS Console.

 

ㅁ Changing the AWS volume to gp3
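 ㅇ The change was made in the EC2 console by selecting the volume and modifying its type; the equivalent AWS CLI call would be roughly the following sketch (the volume ID below is from the earlier output and stands in for whichever gp2 volume is being changed):

# modify the existing EBS volume type in place (no detach required)
aws ec2 modify-volume \
  --volume-id vol-0202ac8ec2be8af4a \
  --volume-type gp3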

 

 ㅇ The AWS volume type was confirmed to have changed to gp3.

 ㅇ As a limitation, the type is now gp3 on the AWS side, but Kubernetes still shows gp2 (the PV and StorageClass metadata are unchanged).

 ㅇ Even so, the storage kept working normally.

 ㅇ The improved IOPS that gp3 offers over gp2 still needed additional testing, as sketched below.
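 ㅇ As a rough shape for such a test (a sketch only, not run in this post): fio could be pointed at the PVC-backed path, assuming fio is available inside the container or a dedicated test pod is used:

# rough random-read IOPS test against the mounted volume (illustrative; tune size/runtime)
kubectl exec -it mongo-76f9996ff5-hfvfv -- \
  fio --name=randread --filename=/data/db/fio-testfile \
      --rw=randread --bs=4k --size=512m --ioengine=libaio \
      --iodepth=32 --runtime=60 --time_based --direct=1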

 

ㅁ Deleting the EKS cluster

[ec2-user@ip-172-31-43-214 kubernetes-mongodb]$ eksctl delete cluster --name k8s-peter
2022-09-18 23:02:36 [ℹ]  deleting EKS cluster "k8s-peter"
2022-09-18 23:02:36 [ℹ]  will drain 0 unmanaged nodegroup(s) in cluster "k8s-peter"
2022-09-18 23:02:36 [ℹ]  starting parallel draining, max in-flight of 1
2022-09-18 23:02:36 [ℹ]  deleted 0 Fargate profile(s)
2022-09-18 23:02:36 [✔]  kubeconfig has been updated
2022-09-18 23:02:36 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2022-09-18 23:02:40 [ℹ]
3 sequential tasks: { delete nodegroup "work-nodes",
    2 sequential sub-tasks: {
        2 sequential sub-tasks: {
            delete IAM role for serviceaccount "kube-system/aws-node",
            delete serviceaccount "kube-system/aws-node",
        },
        delete IAM OIDC provider,
    }, delete cluster control plane "k8s-peter" [async]
}
2022-09-18 23:02:40 [ℹ]  will delete stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:02:40 [ℹ]  waiting for stack "eksctl-k8s-peter-nodegroup-work-nodes" to get deleted
2022-09-18 23:02:40 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:03:10 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:04:00 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:04:30 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:05:17 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:06:42 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-nodegroup-work-nodes"
2022-09-18 23:06:44 [ℹ]  will delete stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 23:06:44 [ℹ]  waiting for stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node" to get deleted
2022-09-18 23:06:45 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 23:07:15 [ℹ]  waiting for CloudFormation stack "eksctl-k8s-peter-addon-iamserviceaccount-kube-system-aws-node"
2022-09-18 23:07:15 [ℹ]  deleted serviceaccount "kube-system/aws-node"
2022-09-18 23:07:16 [ℹ]  will delete stack "eksctl-k8s-peter-cluster"
2022-09-18 23:07:17 [✔]  all cluster resources were deleted

 

ㅁ Related links

 ㅇ Kubernetes volume concepts: 쿠버네티스 볼륨(Volume) 개념 정리 (blog.eunsukim.me)

 ㅇ Migrating Amazon EKS clusters from gp2 to gp3 EBS volumes (aws.amazon.com)

 ㅇ Amazon EBS CSI driver - Amazon EKS (docs.aws.amazon.com)

 ㅇ How to provision a gp3 volume using StorageClass (pet2cattle.com)
