1. Cluster Autoscaler (CA) Setup and Operation Check
This lab runs in the Chapter 5 Amazon EKS one-click deployment environment.
If you have newly deployed the infrastructure, run the basic setup commands below before proceeding.
Basic setup commands
1.1. CA Environment Setup
Checking and adjusting the Auto Scaling Group (ASG)
aws autoscaling describe-auto-scaling-groups \
--query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]" \
--output table
# Check the current ASG information
export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups \
--query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='myeks']].AutoScalingGroupName" \
--output text); echo $ASG_NAME
# Declare the ASG name variable
aws autoscaling update-auto-scaling-group \
--auto-scaling-group-name ${ASG_NAME} \
--min-size 3 \
--max-size 9 \
--desired-capacity 3
# Change the ASG max size to 9
Installing and verifying the CA
curl -s -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
sed -i "s/<YOUR CLUSTER NAME>/$CLUSTER_NAME/g" \
cluster-autoscaler-autodiscover.yaml
cat cluster-autoscaler-autodiscover.yaml | yh
# Download the CA install manifest and substitute the cluster name
kubectl apply -f cluster-autoscaler-autodiscover.yaml
# Deploy the CA
kubectl get pod -n kube-system | grep cluster-autoscaler
# Check the CA
1.2. Verifying CA Operation
Launching EKS Node Viewer
wget -O eks-node-viewer https://github.com/awslabs/eks-node-viewer/releases/download/v0.7.1/eks-node-viewer_Linux_x86_64
chmod +x eks-node-viewer
sudo mv -v eks-node-viewer /usr/local/bin
# [New terminal] Set up eks-node-viewer
eks-node-viewer
Deploying test resources
cat <<EoF> nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-to-scaleout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx-to-scaleout
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
EoF
cat nginx.yaml | yh
# Create and review the test Deployment manifest
kubectl apply -f nginx.yaml
# Deploy the test Deployment
kubectl get deployment/nginx-to-scaleout
# Check the test Deployment
Verifying node scale-out
kubectl scale --replicas=15 deployment/nginx-to-scaleout && date
# Scale replicas to 15 (observe scale-out)
aws autoscaling describe-auto-scaling-groups \
--query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]" \
--output table
# Check the current ASG information
Verifying node scale-in
kubectl delete -f nginx.yaml && date
# Delete the test Deployment (observe scale-in; takes 10+ minutes)
aws autoscaling describe-auto-scaling-groups \
--query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]" \
--output table
# Check the current ASG information
1.3. Cleaning Up the CA Lab Resources
Deleting the lab resources
kubectl delete -f cluster-autoscaler-autodiscover.yaml
# Delete the CA
2. Karpenter Environment Setup
2.1. Installing and Verifying Karpenter
Setting and checking environment variables
export OIDC_ARN=$(aws iam list-open-id-connect-providers \
--query 'OpenIDConnectProviderList[*].Arn' \
--output text)
echo "export OIDC_ARN=$OIDC_ARN" >> /etc/profile; echo $OIDC_ARN
export OIDC_URL=${OIDC_ARN#*oidc-provider/}
echo "export OIDC_URL=$OIDC_URL" >> /etc/profile; echo $OIDC_URL
export AWS_PARTITION="aws"
echo "export AWS_PARTITION=$AWS_PARTITION" >> /etc/profile; echo $AWS_PARTITION
export KARPENTER_VERSION="1.5.0"
echo "export KARPENTER_VERSION=$KARPENTER_VERSION" >> /etc/profile; echo $KARPENTER_VERSION
export K8S_VERSION=1.32
echo "export K8S_VERSION=$K8S_VERSION" >> /etc/profile; echo $K8S_VERSION
export KARPENTER_NS="kube-system"
echo "export KARPENTER_NS=$KARPENTER_NS" >> /etc/profile; echo $KARPENTER_NS
export ARM_AMI_ID="$(aws ssm get-parameter \
--name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-arm64/recommended/image_id \
--query Parameter.Value \
--output text)"
echo "export ARM_AMI_ID=$ARM_AMI_ID" >> /etc/profile; echo $ARM_AMI_ID
export AMD_AMI_ID="$(aws ssm get-parameter \
--name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2/recommended/image_id \
--query Parameter.Value \
--output text)"
echo "export AMD_AMI_ID=$AMD_AMI_ID" >> /etc/profile; echo $AMD_AMI_ID
export ALIAS_VERSION="$(aws ssm get-parameter \
--name "/aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2023/x86_64/standard/recommended/image_id" \
--query Parameter.Value | xargs aws ec2 describe-images \
--query 'Images[0].Name' \
--image-ids | sed -r 's/^.*(v[[:digit:]]+).*$/\1/')"
echo "export ALIAS_VERSION=$ALIAS_VERSION" >> /etc/profile; echo $ALIAS_VERSION
# Declare and verify the variables for installing Karpenter
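The OIDC_URL value above is derived from OIDC_ARN with shell parameter expansion. A quick local check with a dummy ARN (the account ID and provider ID here are placeholders, not real values):

```shell
# Dummy ARN for illustration only
OIDC_ARN="arn:aws:iam::123456789012:oidc-provider/oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE"
# '#*oidc-provider/' strips the shortest prefix ending in 'oidc-provider/'
OIDC_URL=${OIDC_ARN#*oidc-provider/}
echo "$OIDC_URL"   # → oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE
```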
Checking the service-linked role for EC2 Spot Fleet
aws iam create-service-linked-role \
--aws-service-name spot.amazonaws.com || true
# Ensure the service-linked role for EC2 Spot Fleet exists (create if missing)
Note:
If you see a message like the following, it is expected:
An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation...
Karpenter Node - Creating the IAM Policy and IAM Role
cat << EOF > node-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Create node-trust-policy.json
aws iam create-role \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
--assume-role-policy-document file://node-trust-policy.json
# Create the KarpenterNodeRole
aws iam attach-role-policy \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
--policy-arn arn:${AWS_PARTITION}:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
--policy-arn arn:${AWS_PARTITION}:iam::aws:policy/AmazonEKS_CNI_Policy
aws iam attach-role-policy \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
--policy-arn arn:${AWS_PARTITION}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
aws iam attach-role-policy \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
--policy-arn arn:${AWS_PARTITION}:iam::aws:policy/AmazonSSMManagedInstanceCore
# Attach the IAM policies to the KarpenterNodeRole
aws iam create-instance-profile \
--instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
aws iam add-role-to-instance-profile \
--instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}"
# Create the IAM EC2 instance profile
Karpenter Controller - Creating the IAM Policy and IAM Role
cat << EOF > controller-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:${AWS_PARTITION}:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_URL}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_URL}:aud": "sts.amazonaws.com",
          "${OIDC_URL}:sub": "system:serviceaccount:${KARPENTER_NS}:karpenter"
        }
      }
    }
  ]
}
EOF
cat controller-trust-policy.json | jq
# Create controller-trust-policy.json
aws iam create-role \
--role-name KarpenterControllerRole-${CLUSTER_NAME} \
--assume-role-policy-document file://controller-trust-policy.json
# Create the KarpenterControllerRole
cat << EOF > controller-policy.json
{
  "Statement": [
    {
      "Action": [
        "ssm:GetParameter",
        "ec2:DescribeImages",
        "ec2:RunInstances",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeInstanceTypeOfferings",
        "ec2:DeleteLaunchTemplate",
        "ec2:CreateTags",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateFleet",
        "ec2:DescribeSpotPriceHistory",
        "pricing:GetProducts"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Sid": "Karpenter"
    },
    {
      "Action": "ec2:TerminateInstances",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/karpenter.sh/nodepool": "*"
        }
      },
      "Effect": "Allow",
      "Resource": "*",
      "Sid": "ConditionalEC2Termination"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:${AWS_PARTITION}:iam::${ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}",
      "Sid": "PassNodeIAMRole"
    },
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:${AWS_PARTITION}:eks:${AWS_DEFAULT_REGION}:${ACCOUNT_ID}:cluster/${CLUSTER_NAME}",
      "Sid": "EKSClusterEndpointLookup"
    },
    {
      "Sid": "AllowScopedInstanceProfileCreationActions",
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "iam:CreateInstanceProfile"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
          "aws:RequestTag/topology.kubernetes.io/region": "${AWS_DEFAULT_REGION}"
        },
        "StringLike": {
          "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedInstanceProfileTagActions",
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "iam:TagInstanceProfile"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
          "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_DEFAULT_REGION}",
          "aws:RequestTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
          "aws:RequestTag/topology.kubernetes.io/region": "${AWS_DEFAULT_REGION}"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*",
          "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedInstanceProfileActions",
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "iam:AddRoleToInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteInstanceProfile"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
          "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_DEFAULT_REGION}"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowInstanceProfileReadActions",
      "Effect": "Allow",
      "Resource": "*",
      "Action": "iam:GetInstanceProfile"
    }
  ],
  "Version": "2012-10-17"
}
EOF
cat controller-policy.json | jq
# Create controller-policy.json
aws iam create-policy \
--policy-name KarpenterControllerPolicy-${CLUSTER_NAME} \
--policy-document file://controller-policy.json
# Create the KarpenterControllerPolicy
aws iam attach-role-policy \
--role-name KarpenterControllerRole-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}
# Attach the IAM policy to the KarpenterControllerRole
Verifying the created IAM Role and IAM Policy
aws iam list-policies \
--query 'Policies[?contains(PolicyName, `Karpenter`)].PolicyName' \
--output text
aws iam list-roles \
--query 'Roles[?contains(RoleName, `Karpenter`)].RoleName' \
--output text
# Check the Karpenter-related IAM policies and roles
Adding tags to the node group subnets
for NODEGROUP in $(aws eks list-nodegroups --cluster-name ${CLUSTER_NAME} --query 'nodegroups' --output text); do
echo "NodeGroup: $NODEGROUP"
SUBNET_IDS=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} --nodegroup-name $NODEGROUP --query 'nodegroup.subnets' --output text)
for SUBNET_ID in $SUBNET_IDS; do
echo "Subnet: $SUBNET_ID"
aws ec2 describe-tags --filters "Name=resource-id,Values=$SUBNET_ID" --output text
done
done
# Check the tags on the node group subnets
for NODEGROUP in $(aws eks list-nodegroups --cluster-name ${CLUSTER_NAME} \
--query 'nodegroups' --output text); do aws ec2 create-tags \
--tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
--resources $(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name $NODEGROUP --query 'nodegroup.subnets' --output text )
done
# Add the karpenter.sh/discovery tag to the node group subnets
for NODEGROUP in $(aws eks list-nodegroups --cluster-name ${CLUSTER_NAME} --query 'nodegroups' --output text); do
echo "NodeGroup: $NODEGROUP"
SUBNET_IDS=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} --nodegroup-name $NODEGROUP --query 'nodegroup.subnets' --output text)
for SUBNET_ID in $SUBNET_IDS; do
echo "Subnet: $SUBNET_ID"
aws ec2 describe-tags --filters "Name=resource-id,Values=$SUBNET_ID" --output text | grep karpenter
done
done
# Check the tags on the node group subnets (karpenter filter)
Tagging the security group
SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
--output text); echo $SECURITY_GROUPS
# Declare the security group variable
aws ec2 create-tags \
--tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
--resources ${SECURITY_GROUPS}
# Add the tag to the security group
Updating aws-auth permissions
kubectl get configmap aws-auth \
-n kube-system \
-o yaml > aws-auth.yaml
cat aws-auth.yaml | yh
# Save the current aws-auth ConfigMap settings
cat aws-auth.yaml | awk '
/mapRoles:/ {
print;
print " - groups:";
print " - system:bootstrappers";
print " - system:nodes";
print " rolearn: arn:'"$AWS_PARTITION"':iam::'"$ACCOUNT_ID"':role/KarpenterNodeRole-'"$CLUSTER_NAME"'";
print " username: system:node:{{EC2PrivateDNSName}}";
next
}
{ print }
' > aws-auth-patched.yaml
cat aws-auth-patched.yaml | yh
# Add the entry to aws-auth-patched.yaml and review
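For reference, the awk script above should have inserted an entry of the following shape under mapRoles. This is an illustrative fragment only; the ARN is a placeholder assembled from your AWS_PARTITION, ACCOUNT_ID, and CLUSTER_NAME values.

```yaml
# Illustrative mapRoles entry in aws-auth-patched.yaml (placeholder ARN)
- groups:
    - system:bootstrappers
    - system:nodes
  rolearn: arn:aws:iam::111122223333:role/KarpenterNodeRole-myeks
  username: system:node:{{EC2PrivateDNSName}}
```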
kubectl apply -f aws-auth-patched.yaml -n kube-system
# Update the aws-auth ConfigMap
Saving the Karpenter installation template
helm registry logout public.ecr.aws
# Log out of the public ECR
helm template karpenter oci://public.ecr.aws/karpenter/karpenter \
--version "${KARPENTER_VERSION}" \
--namespace "${KARPENTER_NS}" \
--set "settings.clusterName=${CLUSTER_NAME}" \
--set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:${AWS_PARTITION}:iam::${ACCOUNT_ID}:role/KarpenterControllerRole-${CLUSTER_NAME}" \
--set controller.resources.requests.cpu=1 \
--set controller.resources.requests.memory=1Gi \
--set controller.resources.limits.cpu=1 \
--set controller.resources.limits.memory=1Gi > karpenter.yaml
# Render the Karpenter installation manifest with helm template
Setting node affinity
NODEGROUP=$(aws eks list-nodegroups \
--cluster-name ${CLUSTER_NAME} \
--query 'nodegroups[0]' \
--output text); echo $NODEGROUP
# Define the node group name variable
cat karpenter.yaml | awk -v NG="$NODEGROUP" '
/- key: karpenter\.sh\/nodepool/ {
print; getline; print;
print " - key: eks.amazonaws.com/nodegroup";
print " operator: In";
print " values:";
print " - " NG;
next;
}
{ print }
' > karpenter-patched.yaml
cat karpenter-patched.yaml | yh
# Add the node affinity settings and save as karpenter-patched.yaml
Updating dnsPolicy
cat karpenter-patched.yaml | grep dnsPolicy
sed -i 's/dnsPolicy: ClusterFirst/dnsPolicy: Default/' karpenter-patched.yaml
cat karpenter-patched.yaml | grep dnsPolicy
Warning:
Karpenter uses the ClusterFirst DNS policy by default.
However, when it must start before the DNS service pods, set the DNS policy to Default so it does not depend on the cluster DNS.
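After the sed edit, the controller Deployment's pod spec in karpenter-patched.yaml should carry the new policy. An illustrative fragment of the relevant field:

```yaml
# Pod spec of the karpenter Deployment after the change
spec:
  dnsPolicy: Default   # was ClusterFirst; avoids depending on cluster DNS
```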
Installing Karpenter
kubectl create -f \
"https://raw.githubusercontent.com/aws/karpenter-provider-aws/v${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_nodepools.yaml"
kubectl create -f \
"https://raw.githubusercontent.com/aws/karpenter-provider-aws/v${KARPENTER_VERSION}/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml"
kubectl create -f \
"https://raw.githubusercontent.com/aws/karpenter-provider-aws/v${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_nodeclaims.yaml"
kubectl apply -f karpenter-patched.yaml
# Create the CRDs and deploy Karpenter
Note:
From Karpenter 0.32.0 onward, three CRDs are used: nodepools, nodeclaims, and ec2nodeclasses.
Verifying the Karpenter installation
kubectl get-all -n kube-system -l app.kubernetes.io/name=karpenter
# Verify the Karpenter installation
kubectl get all -n kube-system -l app.kubernetes.io/name=karpenter
kubectl get crd | grep karpenter
Creating the Grafana dashboard
echo -e "Grafana Web URL = https://grafana.$MyDomain"
# Check the Grafana web URL (default credentials: admin / prom-operator)
• Dashboard → New → Import → paste the JSON below → Load → Import
grafana dashboard json - Karpenter
2.2. Creating and Verifying a Karpenter NodePool
Creating the Karpenter NodePool and EC2NodeClass
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  instanceProfile: "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
  amiSelectorTerms:
    - alias: "al2023@${ALIAS_VERSION}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
Checking the NodePool and EC2NodeClass
kubectl get nodepool,ec2nodeclass
# Check the nodepool and ec2nodeclass
kubectl describe nodepool/default
# Detailed info for nodepool/default
kubectl describe ec2nodeclass/default
# Detailed info for ec2nodeclass/default
3. Verifying Karpenter Operation
3.1. Creating a Test Deployment
Deploying the Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 500m
          securityContext:
            allowPrivilegeEscalation: false
EOF
# Deploy the test Deployment (replicas: 0, CPU request: 500m)
kubectl get deploy
# Check the test Deployment
3.2. Verifying Karpenter Scaling
Verifying scale-out
kubectl scale deployment inflate --replicas 5
# Update replicas (0 -> 5)
kubectl logs -f -n kube-system \
-l app.kubernetes.io/name=karpenter \
-c controller
# Check the logs
kubectl logs -n kube-system \
-l app.kubernetes.io/name=karpenter \
-c controller | grep 'launched nodeclaim' | jq '.'
kubectl get nodeclaims
# Check the nodeclaims
kubectl describe nodeclaims
# Check the nodeclaim details
kubectl get node \
--label-columns=eks.amazonaws.com/capacityType,karpenter.sh/capacity-type,node.kubernetes.io/instance-type
# Confirm the spot instances
Verifying scale-in
kubectl scale deployment inflate --replicas 0; date
# Update replicas (5 -> 0) and record the time
kubectl logs -n kube-system \
-l app.kubernetes.io/name=karpenter \
-c controller | jq '.'
# Check the logs
kubectl get nodeclaims
# Check the nodeclaims
Deleting the existing Karpenter NodePool and EC2NodeClass
kubectl delete nodepool,ec2nodeclass default
# Delete the existing nodepool and ec2nodeclass
3.3. Verifying Karpenter Disruption
Creating a new Karpenter NodePool and EC2NodeClass
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - on-demand
        - key: node.kubernetes.io/instance-type
          operator: In
          values:
            - c5.large
            - m5.large
            - m5.xlarge
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  instanceProfile: "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
  amiSelectorTerms:
    - alias: "al2023@${ALIAS_VERSION}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
# Create the new NodePool and EC2NodeClass
Verifying scale-out
kubectl scale deployment inflate --replicas 15
# Update replicas (0 -> 15)
kubectl logs -f -n kube-system \
-l app.kubernetes.io/name=karpenter \
-c controller \
| jq 'select(.message == "launched nodeclaim")'
# Check the logs
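The jq filter above keeps only records whose message field matches exactly. A quick offline illustration with fabricated log lines in the same JSON shape (these field values are assumptions for the demo, not actual Karpenter output):

```shell
# Two fake JSON log lines; only the first survives the filter
printf '%s\n' \
  '{"level":"INFO","message":"launched nodeclaim","name":"default-abc12"}' \
  '{"level":"INFO","message":"registered nodeclaim","name":"default-abc12"}' \
| jq -c 'select(.message == "launched nodeclaim")'
```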
kubectl get nodeclaims
# Check the nodeclaims
Verifying consolidation
kubectl scale deployment inflate --replicas 5
# Update replicas (15 -> 5)
kubectl logs -f -n kube-system \
-l app.kubernetes.io/name=karpenter \
-c controller \
| jq 'select(.message == "disrupting node(s)")'
# Check the logs
Deleting resources
kubectl delete deployment inflate
# Delete the test Deployment
kubectl logs -f -n kube-system \
-l app.kubernetes.io/name=karpenter \
-c controller \
| jq 'select(.message == "disrupting node(s)")'
# Check the logs
kubectl delete nodepool,ec2nodeclass default
# Delete the nodepool and ec2nodeclass
4. Cleaning Up the Lab Environment
The Chapter 5 Karpenter lab is finished, so delete the entire lab environment.
Deleting the Karpenter node & controller IAM resources
aws iam detach-role-policy \
--role-name KarpenterNodeRole-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam detach-role-policy \
--role-name KarpenterNodeRole-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
aws iam detach-role-policy \
--role-name KarpenterNodeRole-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
aws iam detach-role-policy \
--role-name KarpenterNodeRole-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
aws iam detach-role-policy \
--role-name KarpenterControllerRole-${CLUSTER_NAME} \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}
# Detach the IAM policies
aws iam delete-policy \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}
# Delete the customer-managed IAM policy
aws iam remove-role-from-instance-profile \
--instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" \
--role-name "KarpenterNodeRole-${CLUSTER_NAME}"
aws iam delete-instance-profile \
--instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
# Delete the IAM instance profile
aws iam delete-role \
--role-name KarpenterNodeRole-${CLUSTER_NAME}
aws iam delete-role \
--role-name KarpenterControllerRole-${CLUSTER_NAME}
# Delete the IAM roles
Deleting the Prometheus stack
helm uninstall -n kube-system kube-prometheus-stack
# Delete kube-prometheus-stack
watch -d 'kubectl get pod,ing -n kube-system | grep prometheus'
helm uninstall -n kube-system kube-ops-view
# Delete kube-ops-view
Deleting Karpenter resources
kubectl delete -f karpenter.yaml
# Delete the Karpenter resources
Deleting the Amazon EKS one-click deployment
eksctl delete cluster --name $CLUSTER_NAME \
&& aws cloudformation delete-stack --stack-name $CLUSTER_NAME
# Delete the Amazon EKS one-click deployment
Warning:
Deleting the Amazon EKS one-click deployment takes about 15 minutes.
Keep the SSH session connected until the deletion completes.
Warning:
If the CloudFormation stack does not delete, manually delete the VPC (myeks-VPC) and then delete the CloudFormation stack again.
Deleting Amazon Route 53 records and the hosted zone
1) Services > Route 53 > Hosted zones > select the domain
• Select the target records > 'Delete record' button > 'Delete' button
2) If you no longer plan to use the domain, delete the hosted zone
• Confirm that only the NS and SOA records remain
• 'Delete zone' button > 'Delete' button
This concludes the Chapter 5 Karpenter lab.
Great work :)