25.09.17
Measure maximum throughput between different nodes' IP network ranges with multiple iperf runs
Try removing Cilium auto-route: check whether static routing takes over, and confirm no error logs appear
Pre-create headless services as global: service.cilium.io/global-sync-endpoint-slices: "true"
Cilium 1.17.6 vs 1.18.1: BGP
• Compare as a 2×2 matrix:
  a) veth + Big TCP OFF
  b) veth + Big TCP ON
  c) netkit + Big TCP OFF
  d) netkit + Big TCP ON
• Measure under identical conditions:
  ◦ STREAM: -t TCP_STREAM -l 60 -- -m 131072 -M 131072 -O THROUGHPUT
  ◦ RR: -t TCP_RR -l 60 -- -r 80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
• Record node state alongside each run:
  ◦ Offloads: ethtool -k <iface> | egrep 'gro|gso|tso'
  ◦ MTU consistency: ip link show <iface> / cilium status --verbose | grep -i mtu
  ◦ Retransmission counters: nstat -az | egrep 'Tcp(Ext)?(Retrans|Timeouts|InSegs|OutSegs)'
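The 2×2 matrix above can be driven by a small loop. A sketch only, under assumptions: the Helm values match the upgrade commands used later in this note, NETSERVER_IP is the netperf-server pod IP (a placeholder here), and with DRY_RUN=1 (the default) the commands are printed instead of executed.

```shell
#!/usr/bin/env bash
# Sketch: iterate the 2x2 matrix (datapath x BIG TCP) and print (or run)
# the reconfigure + measurement commands for each case.
plan_matrix() {
  for datapath in veth netkit; do
    for bigtcp in false true; do
      echo "=== case: datapath=${datapath} bigtcp=${bigtcp} ==="
      local cmds=(
        "helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set bpf.datapathMode=${datapath} --set enableIPv4BIGTCP=${bigtcp}"
        "kubectl -n kube-system rollout restart ds/cilium"
        "kubectl exec deploy/netperf-client -- netperf -H ${NETSERVER_IP:-SERVER_POD_IP} -t TCP_STREAM -l 60 -- -m 131072 -M 131072 -O THROUGHPUT"
        "kubectl exec deploy/netperf-client -- netperf -H ${NETSERVER_IP:-SERVER_POD_IP} -t TCP_RR -l 60 -- -r 80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT"
      )
      local c
      for c in "${cmds[@]}"; do
        # DRY_RUN=1 (default): only echo the command; set DRY_RUN=0 to execute.
        if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $c"; else eval "$c"; fi
      done
    done
  done
}
plan_matrix
```

Note the pod IP rather than the Service name is used for netperf: the data connection uses ephemeral ports that the ClusterIP Service does not expose.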
Parameters:
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instances
Type: AWS::EC2::KeyPair::KeyName
ConstraintDescription: must be the name of an existing EC2 KeyPair.
SgIngressSshCidr:
Description: The IP address range that can be used to communicate to the EC2 instances
Type: String
MinLength: '9'
MaxLength: '18'
Default: 0.0.0.0/0
AllowedPattern: (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})
ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
InstanceTypeCTR:
Description: Instance Type
Type: String
Default: t3.medium
InstanceTypeWK:
Description: Instance Type
Type: String
Default: t3.medium
Arch:
Type: String
Default: amd64
AllowedValues: [amd64, arm64]
Mappings:
AmiPath:
amd64:
Path: "/aws/service/canonical/ubuntu/server/24.04/stable/current/amd64/hvm/ebs-gp3/ami-id"
arm64:
Path: "/aws/service/canonical/ubuntu/server/24.04/stable/current/arm64/hvm/ebs-gp3/ami-id"
Resources:
# VPC
MYVPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 192.168.0.0/16
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: Name
Value: !Sub MY-VPC
# PublicSubnets
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
AvailabilityZone: ap-northeast-2c
CidrBlock: 192.168.56.0/24
VpcId: !Ref MYVPC
MapPublicIpOnLaunch: true
Tags:
- Key: Name
Value: !Sub MY-PublicSubnet1
InternetGateway:
Type: AWS::EC2::InternetGateway
VPCGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref InternetGateway
VpcId: !Ref MYVPC
PublicSubnetRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref MYVPC
Tags:
- Key: Name
Value: !Sub MY-PublicSubnetRouteTable
PublicSubnetRoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref PublicSubnetRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnet1RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PublicSubnet1
RouteTableId: !Ref PublicSubnetRouteTable
# Hosts
MYEC2SG:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: eksctl-host Security Group
VpcId: !Ref MYVPC
Tags:
- Key: Name
Value: !Sub MY-HOST-SG
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref SgIngressSshCidr
SecurityGroupEgress:
- IpProtocol: -1
CidrIp: 0.0.0.0/0
AllowAllWithinSG:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !Ref MYEC2SG
IpProtocol: -1
SourceSecurityGroupId: !Ref MYEC2SG
Description: Allow all traffic within the same SG
CTREC2:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref InstanceTypeCTR
ImageId:
Fn::Sub:
- "{{resolve:ssm:${AmiParam}}}"
- { AmiParam: !FindInMap [AmiPath, !Ref Arch, Path] }
KeyName: !Ref KeyName
Tags:
- Key: Name
Value: !Sub k8s-ctr
SourceDestCheck: false
NetworkInterfaces:
- DeviceIndex: 0
SubnetId: !Ref PublicSubnet1
GroupSet:
- !Ref MYEC2SG
AssociatePublicIpAddress: true
PrivateIpAddress: 192.168.56.30
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeType: gp3
VolumeSize: 20
DeleteOnTermination: true
UserData:
Fn::Base64: |
#!/bin/bash
hostnamectl --static set-hostname "k8s-ctr"
# Add /etc/hosts entries
cat <<EOF >> /etc/hosts
192.168.56.30 k8s-ctr
192.168.56.31 k8s-worker1
192.168.56.32 k8s-worker2
EOF
# Config convenience
echo 'alias vi=vim' >> /etc/profile
echo "sudo su -" >> /home/ubuntu/.bashrc
# Change timezone (/etc/sysconfig/clock is a RHEL path; on Ubuntu the symlink below is sufficient)
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# Disable ufw and AppArmor
systemctl stop ufw && systemctl disable ufw >/dev/null 2>&1
systemctl stop apparmor && systemctl disable apparmor >/dev/null 2>&1
# Default Package
apt update -qq >/dev/null 2>&1
apt-get install apt-transport-https ca-certificates curl gpg -y -qq >/dev/null 2>&1
# Download the public signing key for the Kubernetes package repositories.
mkdir -p -m 755 /etc/apt/keyrings
K8SMMV=1.33
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/ /" >> /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# packets traversing the bridge are processed by iptables for filtering
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
# enable br_netfilter for iptables
modprobe br_netfilter
modprobe overlay
echo "br_netfilter" >> /etc/modules-load.d/k8s.conf
echo "overlay" >> /etc/modules-load.d/k8s.conf
# apt list -a kubelet ; apt list -a containerd.io
apt update >/dev/null 2>&1
apt-get install -y kubelet=1.33.2-1.1 kubectl=1.33.2-1.1 kubeadm=1.33.2-1.1 containerd.io=1.7.27-1 >/dev/null 2>&1
apt-mark hold kubelet kubeadm kubectl >/dev/null 2>&1
# containerd configure to default and cgroup managed by systemd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# avoid WARN&ERRO(default endpoints) when crictl run
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
# ready to install for k8s
systemctl restart containerd && systemctl enable containerd
systemctl enable --now kubelet
# Install Packages & Helm
apt-get install -y bridge-utils sshpass net-tools conntrack ngrep tcpdump ipset arping wireguard jq tree bash-completion unzip kubecolor >/dev/null 2>&1
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash >/dev/null 2>&1
# Initialize the cluster with kubeadm
kubeadm init \
--token 123456.1234567890123456 \
--token-ttl 0 \
--skip-phases=addon/kube-proxy \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.56.30 \
--cri-socket=unix:///run/containerd/containerd.sock \
--upload-certs \
--v=5
# Set up kubeconfig
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown root:root /root/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
sleep 30
# Register kubectl aliases and completion
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'source <(kubeadm completion bash)' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'alias kc=kubecolor' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile
# Install Kubectx & Kubens
git clone https://github.com/ahmetb/kubectx /opt/kubectx >/dev/null 2>&1
ln -s /opt/kubectx/kubens /usr/local/bin/kubens
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
# Install kube-ps1 & set PS1
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1 >/dev/null 2>&1
cat <<"EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
kubectl config rename-context "kubernetes-admin@kubernetes" "HomeLab" >/dev/null 2>&1
# Wait for kube-apiserver to become ready (loop until it responds)
until kubectl --request-timeout=5s get nodes >/dev/null 2>&1; do
sleep 2
done
# Install Cilium via Helm
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
--version 1.18.1 \
--namespace kube-system \
--set k8sServiceHost=192.168.56.30 \
--set k8sServicePort=6443 \
--set routingMode=native \
--set ipv4NativeRoutingCIDR=10.244.0.0/16 \
--set bpf.masquerade=true \
--set ipv4.enabled=true \
--set enableIPv4BIGTCP=true \
--set kubeProxyReplacement=true
# Install the Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all \
https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz
tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz
W1EC2:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref InstanceTypeWK
ImageId:
Fn::Sub:
- "{{resolve:ssm:${AmiParam}}}"
- { AmiParam: !FindInMap [AmiPath, !Ref Arch, Path] }
KeyName: !Ref KeyName
Tags:
- Key: Name
Value: !Sub k8s-worker1
SourceDestCheck: false
NetworkInterfaces:
- DeviceIndex: 0
SubnetId: !Ref PublicSubnet1
GroupSet:
- !Ref MYEC2SG
AssociatePublicIpAddress: true
PrivateIpAddress: 192.168.56.31
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeType: gp3
VolumeSize: 20
DeleteOnTermination: true
UserData:
Fn::Base64:
!Sub |
#!/bin/bash
hostnamectl --static set-hostname "k8s-worker1"
# Add /etc/hosts entries
cat <<EOF >> /etc/hosts
192.168.56.30 k8s-ctr
192.168.56.31 k8s-worker1
192.168.56.32 k8s-worker2
EOF
# Config convenience
echo 'alias vi=vim' >> /etc/profile
echo "sudo su -" >> /home/ubuntu/.bashrc
# Change timezone (/etc/sysconfig/clock is a RHEL path; on Ubuntu the symlink below is sufficient)
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# Disable ufw and AppArmor
systemctl stop ufw && systemctl disable ufw >/dev/null 2>&1
systemctl stop apparmor && systemctl disable apparmor >/dev/null 2>&1
# Default Package
apt update -qq >/dev/null 2>&1
apt-get install apt-transport-https ca-certificates curl gpg -y -qq >/dev/null 2>&1
# Download the public signing key for the Kubernetes package repositories.
mkdir -p -m 755 /etc/apt/keyrings
K8SMMV=1.33
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/ /" >> /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# packets traversing the bridge are processed by iptables for filtering
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
# enable br_netfilter for iptables
modprobe br_netfilter
modprobe overlay
echo "br_netfilter" >> /etc/modules-load.d/k8s.conf
echo "overlay" >> /etc/modules-load.d/k8s.conf
# apt list -a kubelet ; apt list -a containerd.io
apt update >/dev/null 2>&1
apt-get install -y kubelet=1.33.2-1.1 kubectl=1.33.2-1.1 kubeadm=1.33.2-1.1 containerd.io=1.7.27-1 >/dev/null 2>&1
apt-mark hold kubelet kubeadm kubectl >/dev/null 2>&1
# containerd configure to default and cgroup managed by systemd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# avoid WARN&ERRO(default endpoints) when crictl run
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
# ready to install for k8s
systemctl restart containerd && systemctl enable containerd
systemctl enable --now kubelet
# Install Packages & Helm
apt-get install -y bridge-utils sshpass net-tools conntrack ngrep tcpdump ipset arping wireguard jq tree bash-completion unzip kubecolor >/dev/null 2>&1
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash >/dev/null 2>&1
# Join the K8s cluster
kubeadm join \
--token 123456.1234567890123456 \
--discovery-token-unsafe-skip-ca-verification \
"192.168.56.30:6443"
W2EC2:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref InstanceTypeWK
ImageId:
Fn::Sub:
- "{{resolve:ssm:${AmiParam}}}"
- { AmiParam: !FindInMap [AmiPath, !Ref Arch, Path] }
KeyName: !Ref KeyName
Tags:
- Key: Name
Value: !Sub k8s-worker2
SourceDestCheck: false
NetworkInterfaces:
- DeviceIndex: 0
SubnetId: !Ref PublicSubnet1
GroupSet:
- !Ref MYEC2SG
AssociatePublicIpAddress: true
PrivateIpAddress: 192.168.56.32
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeType: gp3
VolumeSize: 20
DeleteOnTermination: true
UserData:
Fn::Base64:
!Sub |
#!/bin/bash
hostnamectl --static set-hostname "k8s-worker2"
# Add /etc/hosts entries
cat <<EOF >> /etc/hosts
192.168.56.30 k8s-ctr
192.168.56.31 k8s-worker1
192.168.56.32 k8s-worker2
EOF
# Config convenience
echo 'alias vi=vim' >> /etc/profile
echo "sudo su -" >> /home/ubuntu/.bashrc
# Change timezone (/etc/sysconfig/clock is a RHEL path; on Ubuntu the symlink below is sufficient)
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# Disable ufw and AppArmor
systemctl stop ufw && systemctl disable ufw >/dev/null 2>&1
systemctl stop apparmor && systemctl disable apparmor >/dev/null 2>&1
# Default Package
apt update -qq >/dev/null 2>&1
apt-get install apt-transport-https ca-certificates curl gpg -y -qq >/dev/null 2>&1
# Download the public signing key for the Kubernetes package repositories.
mkdir -p -m 755 /etc/apt/keyrings
K8SMMV=1.33
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/ /" >> /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# packets traversing the bridge are processed by iptables for filtering
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
# enable br_netfilter for iptables
modprobe br_netfilter
modprobe overlay
echo "br_netfilter" >> /etc/modules-load.d/k8s.conf
echo "overlay" >> /etc/modules-load.d/k8s.conf
# apt list -a kubelet ; apt list -a containerd.io
apt update >/dev/null 2>&1
apt-get install -y kubelet=1.33.2-1.1 kubectl=1.33.2-1.1 kubeadm=1.33.2-1.1 containerd.io=1.7.27-1 >/dev/null 2>&1
apt-mark hold kubelet kubeadm kubectl >/dev/null 2>&1
# containerd configure to default and cgroup managed by systemd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# avoid WARN&ERRO(default endpoints) when crictl run
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
# ready to install for k8s
systemctl restart containerd && systemctl enable containerd
systemctl enable --now kubelet
# Install Packages & Helm
apt-get install -y bridge-utils sshpass net-tools conntrack ngrep tcpdump ipset arping wireguard jq tree bash-completion unzip kubecolor >/dev/null 2>&1
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash >/dev/null 2>&1
# Join the K8s cluster
kubeadm join \
--token 123456.1234567890123456 \
--discovery-token-unsafe-skip-ca-verification \
"192.168.56.30:6443"
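A hedged way to launch the template above with the AWS CLI. Assumptions: the stack name, key pair name, and template file name are placeholders, not from the original note; the region must be ap-northeast-2 since the subnet hard-codes the ap-northeast-2c AZ; cidr_ok mirrors the template's AllowedPattern for SgIngressSshCidr so a typo fails before CloudFormation does.

```shell
# Sketch: deploy the CloudFormation template above. Nothing runs unless
# RUN_DEPLOY=1; all names below are placeholders.
cidr_ok() {  # mirrors the template's AllowedPattern for SgIngressSshCidr
  echo "$1" | grep -Eq '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}$'
}
if [ "${RUN_DEPLOY:-0}" = 1 ]; then
  MYIP_CIDR="$(curl -s ipinfo.io/ip)/32"   # restrict SSH to your own IP
  cidr_ok "$MYIP_CIDR" || { echo "bad CIDR: $MYIP_CIDR" >&2; exit 1; }
  aws cloudformation deploy \
    --region ap-northeast-2 \
    --stack-name cilium-netkit-lab \
    --template-file lab.yaml \
    --parameter-overrides "KeyName=my-keypair" "SgIngressSshCidr=$MYIP_CIDR"
fi
```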
• netperf (amd64)
apiVersion: apps/v1
kind: Deployment
metadata:
name: netperf-server
spec:
replicas: 1
selector:
matchLabels:
app: netperf-server
template:
metadata:
labels:
app: netperf-server
spec:
containers:
- name: netperf-server
image: networkstatic/netperf
command: ["/bin/sh", "-c"]
args: ["netserver -D -4 -p 12865 && sleep infinity"]
ports:
- containerPort: 12865
---
apiVersion: v1
kind: Service
metadata:
name: netperf-server
spec:
selector:
app: netperf-server
ports:
- name: tcp
port: 12865
protocol: TCP
targetPort: 12865
type: ClusterIP
---
# netperf-client
apiVersion: apps/v1
kind: Deployment
metadata:
name: netperf-client
spec:
replicas: 1
selector:
matchLabels:
app: netperf-client
template:
metadata:
labels:
app: netperf-client
spec:
containers:
- name: netperf-client
image: networkstatic/netperf
command: ["/bin/sh", "-c"]
args: ["sleep infinity"]
• arm64: tomopiro/netperf
helm upgrade cilium cilium/cilium -n kube-system \
--set enableIPv4BIGTCP=true
1. Constraints and verification
• Kernel version: Linux kernel 6.3 or later is required (IPv4 BIG TCP is only supported on 6.3+)
• netkit hooks into eBPF to minimize the overhead of crossing the Pod network namespace boundary
• The goal is to give Pods network performance nearly identical to the host namespace
• It effectively replaces veth and is supported on Linux kernel 6.8 and later
• Combined with BPF host routing, it delivers high-performance L3 networking
• It is also compatible with newer networking features such as BIG TCP
uname -r
--
6.8.0-71-generic
ethtool -k eth0 | grep -E 'tso|gso|gro'
--
tx-gso-robust: off [fixed]
tx-gso-partial: off [fixed]
tx-gso-list: off [fixed]
rx-gro-hw: off [fixed]
rx-gro-list: off
rx-udp-gro-forwarding: off
• gso/gro are pinned to off [fixed] on this NIC
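These kernel-version preconditions can be checked in one short script. A sketch under assumptions: kernel_at_least is a hypothetical helper written here (not a standard tool), the interface defaults to eth0, and the observable sign that BIG TCP is active is gso_max_size/gro_max_size on the device exceeding the legacy 65536 limit.

```shell
# Sketch: verify netkit / IPv4 BIG TCP kernel preconditions on a node.
# kernel_at_least MAJ.MIN [uname-r-string] -> exit 0 if the kernel is new enough
kernel_at_least() {
  local want="$1" have="${2:-$(uname -r)}"
  local wmaj="${want%%.*}" wmin="${want#*.}"
  local hmaj="${have%%.*}" hrest="${have#*.}" hmin
  hmin="${hrest%%[.-]*}"   # take the minor number before the next '.' or '-'
  [ "$hmaj" -gt "$wmaj" ] || { [ "$hmaj" -eq "$wmaj" ] && [ "$hmin" -ge "$wmin" ]; }
}
kernel_at_least 6.3 && echo "kernel OK for IPv4 BIG TCP"
kernel_at_least 6.8 && echo "kernel OK for netkit"
# With BIG TCP enabled, gso_max_size/gro_max_size on the device should
# exceed 65536:
ip -d link show dev "${IFACE:-eth0}" 2>/dev/null | grep -o 'g[sr]o_max_size [0-9]*'
```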
2. Baseline environment test
Datapath mode info
kubectl exec -it ds/cilium -n kube-system -- cilium status --verbose | grep 'Device Mode'
--
Device Mode: veth
cilium config view | grep datapath
--
datapath-mode veth
Test deployments
# Deploy
cat << EOF > iperf3-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: iperf3-server
spec:
selector:
matchLabels:
app: iperf3-server
replicas: 1
template:
metadata:
labels:
app: iperf3-server
spec:
containers:
- name: iperf3-server
image: networkstatic/iperf3
args: ["-s"]
ports:
- containerPort: 5201
---
apiVersion: v1
kind: Service
metadata:
name: iperf3-server
spec:
selector:
app: iperf3-server
ports:
- name: tcp-service
protocol: TCP
port: 5201
targetPort: 5201
- name: udp-service
protocol: UDP
port: 5201
targetPort: 5201
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: iperf3-client
spec:
selector:
matchLabels:
app: iperf3-client
replicas: 1
template:
metadata:
labels:
app: iperf3-client
spec:
containers:
- name: iperf3-client
image: networkstatic/iperf3
command: ["sleep"]
args: ["infinity"]
EOF
kubectl apply -f iperf3-deploy.yaml
# Check which nodes the server and client pods landed on
kubectl get deploy,svc,pod -owide
# Check the server pod log: it listens on the default port 5201
kubectl logs -l app=iperf3-server -f
Place server and client on different nodes
kubectl patch deploy/iperf3-server -p '{
"spec":{"template":{"spec":{"nodeSelector":{
"kubernetes.io/hostname":"k8s-worker1"
}}}}}'
kubectl patch deploy/netperf-server -p '{
"spec":{"template":{"spec":{"nodeSelector":{
"kubernetes.io/hostname":"k8s-worker1"
}}}}}'
kubectl patch deploy/iperf3-client -p '{
"spec":{"template":{"spec":{"nodeSelector":{
"kubernetes.io/hostname":"k8s-worker2"
}}}}}'
kubectl patch deploy/netperf-client -p '{
"spec":{"template":{"spec":{"nodeSelector":{
"kubernetes.io/hostname":"k8s-worker2"
}}}}}'
kubectl patch deploy/iperf3-client -p '{
"spec":{"template":{"spec":{"containers":[{"name":"iperf3-client","resources":{"requests":{"cpu":"100m"},"limits":{"cpu":"200m"
}}}]}}}}'
kubectl get pod -l app=iperf3-client -o jsonpath="{..resources}"
kubectl rollout status deploy/iperf3-server
kubectl rollout status deploy/iperf3-client
kubectl rollout status deploy/netperf-server
kubectl rollout status deploy/netperf-client
kubectl get pod -owide
Measurement methods
A) TCP on port 5201, 5-second run
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 5
B) UDP at a 20 Gbit/s offered rate (-u -b 20G)
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G
C) TCP, bidirectional mode (--bidir)
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 --bidir
D) TCP parallel streams: -P (number of parallel client streams to run)
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 5 -P 2
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 -P 8
netperf test
NETSER=172.20.1.100   # netperf-server pod IP (kubectl get pod -l app=netperf-server -owide)
kubectl exec -it deploy/netperf-client \
-- netperf -H $NETSER -p 12865 -t TCP_RR \
-- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT,CONFIDENCE_INTERVAL
--
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.20.1.100 () port 0 AF_INET : demo : first burst 0
Minimum       90th          99th          Throughput
Latency       Percentile    Percentile
Microseconds  Latency       Latency
              Microseconds  Microseconds
1118          3122          4177          578.13
• Minimum latency: the fastest response among all measured requests; represents the best-case network condition.
• P90 latency: 90% of all requests completed within this time; represents typical response speed, excluding the slowest 10%.
• P99 latency: 99% of all requests completed within this time; captures near-worst-case latency excluding the extreme 1%, and is key for spotting latency variance and tail behavior.
• Throughput: for TCP_RR, netperf reports this in transactions per second, not Mbps.
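To make the percentile columns concrete, the nearest-rank method below shows how a P90/P99 value is read off a sorted latency sample. The sample numbers are made up for illustration; netperf computes these internally over all transactions in the run.

```shell
# Illustration: nearest-rank percentile over a latency sample (microseconds).
percentile() {  # usage: percentile P v1 v2 ...
  local p="$1"; shift
  printf '%s\n' "$@" | sort -n | awk -v p="$p" '
    { v[NR] = $1 }
    END { r = int(p/100 * NR + 0.999999); if (r < 1) r = 1; print v[r] }'
}
samples="1118 1200 1350 1500 1700 2000 2500 3122 3600 4177"
echo "P90 = $(percentile 90 $samples) us"   # 9th of 10 sorted values: 3600
echo "P99 = $(percentile 99 $samples) us"   # 10th of 10 sorted values: 4177
```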
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G
--
Connecting to host iperf3-server, port 5201
[ 5] local 172.20.2.205 port 49046 connected to 10.96.78.141 port 5201
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.00 sec 208 MBytes 1.74 Gbits/sec 150344
[ 5] 1.00-2.00 sec 209 MBytes 1.76 Gbits/sec 151683
[ 5] 2.00-3.00 sec 210 MBytes 1.76 Gbits/sec 151820
[ 5] 3.00-4.00 sec 209 MBytes 1.75 Gbits/sec 151321
[ 5] 4.00-5.00 sec 209 MBytes 1.75 Gbits/sec 151123
[ 5] 5.00-6.00 sec 210 MBytes 1.76 Gbits/sec 152005
[ 5] 6.00-7.00 sec 211 MBytes 1.77 Gbits/sec 152766
[ 5] 7.00-8.00 sec 210 MBytes 1.76 Gbits/sec 152296
[ 5] 8.00-9.00 sec 209 MBytes 1.76 Gbits/sec 151616
[ 5] 9.00-10.00 sec 209 MBytes 1.75 Gbits/sec 151448
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 2.04 GBytes 1.76 Gbits/sec 0.000 ms 0/1516422 (0%) sender
[ 5] 0.00-10.08 sec 438 MBytes 365 Mbits/sec 0.006 ms 1198897/1516361 (79%) receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 --bidir
--
Connecting to host iperf3-server, port 5201
[ 5] local 172.20.2.205 port 44832 connected to 10.96.78.141 port 5201
[ 7] local 172.20.2.205 port 44840 connected to 10.96.78.141 port 5201
[ ID][Role] Interval Transfer Bitrate Retr Cwnd
[ 5][TX-C] 0.00-1.00 sec 20.2 MBytes 170 Mbits/sec 0 919 KBytes
[ 7][RX-C] 0.00-1.00 sec 61.9 MBytes 519 Mbits/sec
[ 5][TX-C] 1.00-2.00 sec 15.0 MBytes 126 Mbits/sec 0 1.64 MBytes
[ 7][RX-C] 1.00-2.00 sec 44.8 MBytes 376 Mbits/sec
[ 5][TX-C] 2.00-3.00 sec 23.8 MBytes 199 Mbits/sec 0 2.72 MBytes
[ 7][RX-C] 2.00-3.00 sec 42.1 MBytes 353 Mbits/sec
[ 5][TX-C] 3.00-4.00 sec 27.5 MBytes 231 Mbits/sec 0 3.00 MBytes
[ 7][RX-C] 3.00-4.00 sec 40.6 MBytes 341 Mbits/sec
[ 5][TX-C] 4.00-5.00 sec 28.8 MBytes 241 Mbits/sec 0 3.16 MBytes
[ 7][RX-C] 4.00-5.00 sec 39.8 MBytes 334 Mbits/sec
[ 5][TX-C] 5.00-6.00 sec 27.5 MBytes 231 Mbits/sec 0 3.16 MBytes
[ 7][RX-C] 5.00-6.00 sec 40.3 MBytes 338 Mbits/sec
[ 5][TX-C] 6.00-7.00 sec 28.8 MBytes 241 Mbits/sec 0 3.16 MBytes
[ 7][RX-C] 6.00-7.00 sec 40.7 MBytes 341 Mbits/sec
[ 5][TX-C] 7.00-8.00 sec 27.5 MBytes 231 Mbits/sec 0 3.16 MBytes
[ 7][RX-C] 7.00-8.00 sec 40.5 MBytes 340 Mbits/sec
[ 5][TX-C] 8.00-9.00 sec 28.8 MBytes 241 Mbits/sec 0 3.16 MBytes
[ 7][RX-C] 8.00-9.00 sec 40.3 MBytes 337 Mbits/sec
[ 5][TX-C] 9.00-10.00 sec 28.8 MBytes 241 Mbits/sec 0 3.16 MBytes
[ 7][RX-C] 9.00-10.00 sec 40.2 MBytes 337 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.00 sec 256 MBytes 215 Mbits/sec 0 sender
[ 5][TX-C] 0.00-10.00 sec 255 MBytes 214 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 435 MBytes 365 Mbits/sec 0 sender
[ 7][RX-C] 0.00-10.00 sec 431 MBytes 362 Mbits/sec receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 -P 8
--
Connecting to host iperf3-server, port 5201
[ 5] local 172.20.2.205 port 49406 connected to 10.96.78.141 port 5201
[ 7] local 172.20.2.205 port 49420 connected to 10.96.78.141 port 5201
[ 9] local 172.20.2.205 port 49430 connected to 10.96.78.141 port 5201
[ 11] local 172.20.2.205 port 49436 connected to 10.96.78.141 port 5201
[ 13] local 172.20.2.205 port 49448 connected to 10.96.78.141 port 5201
[ 15] local 172.20.2.205 port 49456 connected to 10.96.78.141 port 5201
[ 17] local 172.20.2.205 port 49466 connected to 10.96.78.141 port 5201
[ 19] local 172.20.2.205 port 49478 connected to 10.96.78.141 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 11.8 MBytes 99.2 Mbits/sec 210 390 KBytes
[ 7] 0.00-1.00 sec 8.78 MBytes 73.6 Mbits/sec 159 303 KBytes
[ 9] 0.00-1.00 sec 11.8 MBytes 99.2 Mbits/sec 174 392 KBytes
[ 11] 0.00-1.00 sec 11.6 MBytes 97.5 Mbits/sec 151 390 KBytes
[ 13] 0.00-1.00 sec 10.9 MBytes 91.4 Mbits/sec 0 577 KBytes
[ 15] 0.00-1.00 sec 11.9 MBytes 99.6 Mbits/sec 10 386 KBytes
[ 17] 0.00-1.00 sec 6.53 MBytes 54.7 Mbits/sec 78 208 KBytes
[ 19] 0.00-1.00 sec 5.28 MBytes 44.3 Mbits/sec 87 175 KBytes
[SUM] 0.00-1.00 sec 78.6 MBytes 659 Mbits/sec 869
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 1.00-2.00 sec 5.59 MBytes 46.9 Mbits/sec 105 311 KBytes
[ 7] 1.00-2.00 sec 5.34 MBytes 44.8 Mbits/sec 106 243 KBytes
[ 9] 1.00-2.00 sec 5.59 MBytes 46.9 Mbits/sec 158 310 KBytes
[ 11] 1.00-2.00 sec 5.59 MBytes 46.9 Mbits/sec 138 308 KBytes
[ 13] 1.00-2.00 sec 13.6 MBytes 114 Mbits/sec 140 690 KBytes
[ 15] 1.00-2.00 sec 7.88 MBytes 66.1 Mbits/sec 0 451 KBytes
[ 17] 1.00-2.00 sec 3.73 MBytes 31.3 Mbits/sec 0 249 KBytes
[ 19] 1.00-2.00 sec 3.79 MBytes 31.8 Mbits/sec 0 212 KBytes
[SUM] 1.00-2.00 sec 51.1 MBytes 429 Mbits/sec 647
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 9.00-10.00 sec 5.59 MBytes 46.9 Mbits/sec 0 338 KBytes
[ 7] 9.00-10.00 sec 5.28 MBytes 44.3 Mbits/sec 0 290 KBytes
[ 9] 9.00-10.00 sec 5.59 MBytes 46.9 Mbits/sec 0 341 KBytes
[ 11] 9.00-10.00 sec 6.71 MBytes 56.3 Mbits/sec 0 423 KBytes
[ 13] 9.00-10.00 sec 11.2 MBytes 94.4 Mbits/sec 0 672 KBytes
[ 15] 9.00-10.00 sec 7.83 MBytes 65.7 Mbits/sec 0 448 KBytes
[ 17] 9.00-10.00 sec 4.41 MBytes 37.0 Mbits/sec 0 296 KBytes
[ 19] 9.00-10.00 sec 4.47 MBytes 37.5 Mbits/sec 0 266 KBytes
[SUM] 9.00-10.00 sec 51.1 MBytes 429 Mbits/sec 0
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 62.2 MBytes 52.2 Mbits/sec 340 sender
[ 5] 0.00-10.06 sec 59.7 MBytes 49.8 Mbits/sec receiver
[ 7] 0.00-10.00 sec 51.8 MBytes 43.4 Mbits/sec 275 sender
[ 7] 0.00-10.06 sec 49.9 MBytes 41.7 Mbits/sec receiver
[ 9] 0.00-10.00 sec 63.4 MBytes 53.2 Mbits/sec 334 sender
[ 9] 0.00-10.06 sec 61.3 MBytes 51.1 Mbits/sec receiver
[ 11] 0.00-10.00 sec 75.4 MBytes 63.2 Mbits/sec 289 sender
[ 11] 0.00-10.06 sec 73.2 MBytes 61.1 Mbits/sec receiver
[ 13] 0.00-10.00 sec 123 MBytes 103 Mbits/sec 339 sender
[ 13] 0.00-10.06 sec 121 MBytes 101 Mbits/sec receiver
[ 15] 0.00-10.00 sec 82.5 MBytes 69.2 Mbits/sec 81 sender
[ 15] 0.00-10.06 sec 79.7 MBytes 66.5 Mbits/sec receiver
[ 17] 0.00-10.00 sec 48.9 MBytes 41.0 Mbits/sec 86 sender
[ 17] 0.00-10.06 sec 48.0 MBytes 40.0 Mbits/sec receiver
[ 19] 0.00-10.00 sec 42.2 MBytes 35.4 Mbits/sec 218 sender
[ 19] 0.00-10.06 sec 41.1 MBytes 34.3 Mbits/sec receiver
[SUM] 0.00-10.00 sec 550 MBytes 461 Mbits/sec 1962 sender
[SUM] 0.00-10.06 sec 533 MBytes 445 Mbits/sec receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 30 -P 16
--
...
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 112 MBytes 31.2 Mbits/sec 612 sender
[ 5] 0.00-30.05 sec 109 MBytes 30.5 Mbits/sec receiver
[ 7] 0.00-30.00 sec 65.3 MBytes 18.2 Mbits/sec 267 sender
[ 7] 0.00-30.05 sec 64.4 MBytes 18.0 Mbits/sec receiver
[ 9] 0.00-30.00 sec 160 MBytes 44.7 Mbits/sec 899 sender
[ 9] 0.00-30.05 sec 157 MBytes 43.9 Mbits/sec receiver
[ 11] 0.00-30.00 sec 102 MBytes 28.4 Mbits/sec 483 sender
[ 11] 0.00-30.05 sec 100 MBytes 28.0 Mbits/sec receiver
[ 13] 0.00-30.00 sec 98.7 MBytes 27.6 Mbits/sec 636 sender
[ 13] 0.00-30.05 sec 96.7 MBytes 27.0 Mbits/sec receiver
[ 15] 0.00-30.00 sec 112 MBytes 31.3 Mbits/sec 628 sender
[ 15] 0.00-30.05 sec 110 MBytes 30.8 Mbits/sec receiver
[ 17] 0.00-30.00 sec 73.0 MBytes 20.4 Mbits/sec 366 sender
[ 17] 0.00-30.05 sec 72.3 MBytes 20.2 Mbits/sec receiver
[ 19] 0.00-30.00 sec 101 MBytes 28.4 Mbits/sec 599 sender
[ 19] 0.00-30.05 sec 101 MBytes 28.1 Mbits/sec receiver
[ 21] 0.00-30.00 sec 67.0 MBytes 18.7 Mbits/sec 381 sender
[ 21] 0.00-30.05 sec 66.2 MBytes 18.5 Mbits/sec receiver
[ 23] 0.00-30.00 sec 80.6 MBytes 22.5 Mbits/sec 474 sender
[ 23] 0.00-30.05 sec 79.8 MBytes 22.3 Mbits/sec receiver
[ 25] 0.00-30.00 sec 74.5 MBytes 20.8 Mbits/sec 496 sender
[ 25] 0.00-30.05 sec 73.6 MBytes 20.5 Mbits/sec receiver
[ 27] 0.00-30.00 sec 93.9 MBytes 26.3 Mbits/sec 231 sender
[ 27] 0.00-30.05 sec 92.8 MBytes 25.9 Mbits/sec receiver
[ 29] 0.00-30.00 sec 117 MBytes 32.7 Mbits/sec 420 sender
[ 29] 0.00-30.05 sec 115 MBytes 32.1 Mbits/sec receiver
[ 31] 0.00-30.00 sec 102 MBytes 28.4 Mbits/sec 396 sender
[ 31] 0.00-30.05 sec 101 MBytes 28.1 Mbits/sec receiver
[ 33] 0.00-30.00 sec 75.3 MBytes 21.1 Mbits/sec 274 sender
[ 33] 0.00-30.05 sec 74.6 MBytes 20.8 Mbits/sec receiver
[ 35] 0.00-30.00 sec 115 MBytes 32.1 Mbits/sec 245 sender
[ 35] 0.00-30.05 sec 114 MBytes 31.8 Mbits/sec receiver
[SUM] 0.00-30.00 sec 1.51 GBytes 433 Mbits/sec 7407 sender
[SUM] 0.00-30.05 sec 1.49 GBytes 427 Mbits/sec receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G -t 30
--
...
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-30.00 sec 6.07 GBytes 1.74 Gbits/sec 0.000 ms 0/4501942 (0%) sender
[ 5] 0.00-30.08 sec 1.26 GBytes 361 Mbits/sec 0.006 ms 3564365/4501884 (79%) receiver
iperf Done.
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-30.00 sec 1.17 GBytes 336 Mbits/sec 0.000 ms 0/869853 (0%) sender
[ 5] 0.00-30.01 sec 1.12 GBytes 321 Mbits/sec 0.006 ms 38905/869783 (4.5%) receiver
• Same node
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G -t 10
--
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 4.33 GBytes 3.72 Gbits/sec 0.000 ms 0/3209682 (0%) sender
[ 5] 0.00-10.00 sec 4.25 GBytes 3.65 Gbits/sec 0.003 ms 55205/3209682 (1.7%) receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 -P 16
[SUM] 0.00-10.00 sec 88.9 GBytes 76.3 Gbits/sec 135 sender
[SUM] 0.00-10.00 sec 88.9 GBytes 76.3 Gbits/sec receiver
iperf Done.
3. Enable netkit & Big TCP
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set bpf.datapathMode=netkit
kubectl -n kube-system rollout restart deploy/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set enableIPv4BIGTCP=true
kubectl -n kube-system rollout restart deploy/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
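Whether BIG TCP actually took effect can be checked on the node: with it enabled, the device's GSO/GRO limits should exceed the 65536 default (commonly 131072 or 196608). A sketch; the interface name `ens5` and the sample values are assumptions, not taken from this cluster:

```shell
# On the node (interface name is an assumption; substitute your own):
#   ip -d link show dev ens5 | grep -oE '(gso|gro)_max_size [0-9]+'
# The same filter applied to a captured sample line:
sample='mtu 9001 ... gso_max_size 196608 gso_max_segs 65535 gro_max_size 196608'
echo "$sample" | grep -oE '(gso|gro)_max_size [0-9]+'
# prints:
# gso_max_size 196608
# gro_max_size 196608
```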
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set enableIPv4BIGTCP=false
kubectl -n kube-system rollout restart deploy/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
cilium config set enable-ipv4-big-tcp true
--
✨ Patching ConfigMap cilium-config with enable-ipv4-big-tcp=true...
♻️ Restarted Cilium pods
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set installNoConntrackIptablesRules=true
kubectl -n kube-system rollout restart deploy/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set bpf.distributedLRU.enabled=true \
--set bpf.mapDynamicSizeRatio=0.08
kubectl -n kube-system rollout restart deploy/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
Datapath mode info
kubectl exec -it ds/cilium -n kube-system -- cilium status --verbose | grep 'Device Mode'
--
Device Mode: netkit
cilium config view | grep datapath
--
datapath-mode netkit
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set bpf.datapathMode=veth
kubectl -n kube-system rollout restart deploy/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
kubectl exec -it deploy/netperf-client \
-- netperf -H 10.0.2.144 -p 12865 -t TCP_RR \
-- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
--
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.144 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
272 705 1390 1830.94
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G
--
Connecting to host iperf3-server, port 5201
[ 5] local 172.20.2.205 port 48727 connected to 10.96.78.141 port 5201
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-1.00 sec 217 MBytes 1.82 Gbits/sec 157165
[ 5] 1.00-2.00 sec 218 MBytes 1.83 Gbits/sec 158101
[ 5] 2.00-3.00 sec 220 MBytes 1.85 Gbits/sec 159461
[ 5] 3.00-4.00 sec 219 MBytes 1.84 Gbits/sec 158513
[ 5] 4.00-5.00 sec 219 MBytes 1.84 Gbits/sec 158507
[ 5] 5.00-6.00 sec 217 MBytes 1.82 Gbits/sec 157072
[ 5] 6.00-7.00 sec 219 MBytes 1.83 Gbits/sec 158351
[ 5] 7.00-8.00 sec 221 MBytes 1.85 Gbits/sec 159927
[ 5] 8.00-9.00 sec 222 MBytes 1.86 Gbits/sec 160475
[ 5] 9.00-10.00 sec 220 MBytes 1.84 Gbits/sec 159080
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 2.14 GBytes 1.84 Gbits/sec 0.000 ms 0/1586652 (0%) sender
[ 5] 0.00-10.08 sec 475 MBytes 396 Mbits/sec 0.006 ms 1242493/1586601 (78%) receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 --bidir
--
Connecting to host iperf3-server, port 5201
[ 5] local 172.20.2.205 port 39588 connected to 10.96.78.141 port 5201
[ 7] local 172.20.2.205 port 39604 connected to 10.96.78.141 port 5201
[ ID][Role] Interval Transfer Bitrate Retr Cwnd
[ 5][TX-C] 0.00-1.00 sec 59.6 MBytes 500 Mbits/sec 0 3.13 MBytes
[ 7][RX-C] 0.00-1.00 sec 20.4 MBytes 171 Mbits/sec
[ 5][TX-C] 1.00-2.00 sec 43.8 MBytes 367 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 1.00-2.00 sec 18.8 MBytes 157 Mbits/sec
[ 5][TX-C] 2.00-3.00 sec 41.2 MBytes 346 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 2.00-3.00 sec 26.3 MBytes 220 Mbits/sec
[ 5][TX-C] 3.00-4.00 sec 40.0 MBytes 336 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 3.00-4.00 sec 28.5 MBytes 239 Mbits/sec
[ 5][TX-C] 4.00-5.00 sec 40.0 MBytes 336 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 4.00-5.00 sec 28.2 MBytes 237 Mbits/sec
[ 5][TX-C] 5.00-6.00 sec 41.2 MBytes 346 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 5.00-6.00 sec 28.2 MBytes 237 Mbits/sec
[ 5][TX-C] 6.00-7.00 sec 40.0 MBytes 336 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 6.00-7.00 sec 28.5 MBytes 239 Mbits/sec
[ 5][TX-C] 7.00-8.00 sec 40.0 MBytes 335 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 7.00-8.00 sec 28.8 MBytes 241 Mbits/sec
[ 5][TX-C] 8.00-9.00 sec 40.0 MBytes 336 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 8.00-9.00 sec 28.9 MBytes 242 Mbits/sec
[ 5][TX-C] 9.00-10.00 sec 40.0 MBytes 335 Mbits/sec 0 3.91 MBytes
[ 7][RX-C] 9.00-10.00 sec 28.5 MBytes 239 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.00 sec 426 MBytes 357 Mbits/sec 0 sender
[ 5][TX-C] 0.00-10.05 sec 426 MBytes 355 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 268 MBytes 225 Mbits/sec 0 sender
[ 7][RX-C] 0.00-10.05 sec 265 MBytes 221 Mbits/sec receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 -P 8
--
Connecting to host iperf3-server, port 5201
[ 5] local 172.20.2.205 port 39208 connected to 10.96.78.141 port 5201
[ 7] local 172.20.2.205 port 39210 connected to 10.96.78.141 port 5201
[ 9] local 172.20.2.205 port 39222 connected to 10.96.78.141 port 5201
[ 11] local 172.20.2.205 port 39232 connected to 10.96.78.141 port 5201
[ 13] local 172.20.2.205 port 39234 connected to 10.96.78.141 port 5201
[ 15] local 172.20.2.205 port 39240 connected to 10.96.78.141 port 5201
[ 17] local 172.20.2.205 port 39242 connected to 10.96.78.141 port 5201
[ 19] local 172.20.2.205 port 39244 connected to 10.96.78.141 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 10.4 MBytes 87.0 Mbits/sec 59 362 KBytes
[ 7] 0.00-1.00 sec 8.16 MBytes 68.4 Mbits/sec 154 277 KBytes
[ 9] 0.00-1.00 sec 7.62 MBytes 63.9 Mbits/sec 137 276 KBytes
[ 11] 0.00-1.00 sec 9.51 MBytes 79.8 Mbits/sec 173 349 KBytes
[ 13] 0.00-1.00 sec 9.63 MBytes 80.8 Mbits/sec 198 355 KBytes
[ 15] 0.00-1.00 sec 10.6 MBytes 89.1 Mbits/sec 90 358 KBytes
[ 17] 0.00-1.00 sec 10.8 MBytes 90.7 Mbits/sec 0 581 KBytes
[ 19] 0.00-1.00 sec 8.45 MBytes 70.9 Mbits/sec 0 454 KBytes
[SUM] 0.00-1.00 sec 75.2 MBytes 631 Mbits/sec 811
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 1.00-2.00 sec 5.24 MBytes 44.0 Mbits/sec 50 283 KBytes
[ 7] 1.00-2.00 sec 5.72 MBytes 48.0 Mbits/sec 0 327 KBytes
[ 9] 1.00-2.00 sec 6.46 MBytes 54.2 Mbits/sec 0 324 KBytes
[ 11] 1.00-2.00 sec 5.28 MBytes 44.3 Mbits/sec 23 272 KBytes
[ 13] 1.00-2.00 sec 6.09 MBytes 51.1 Mbits/sec 112 274 KBytes
[ 15] 1.00-2.00 sec 5.22 MBytes 43.8 Mbits/sec 116 279 KBytes
[ 17] 1.00-2.00 sec 11.1 MBytes 92.8 Mbits/sec 46 529 KBytes
[ 19] 1.00-2.00 sec 8.39 MBytes 70.4 Mbits/sec 190 414 KBytes
[SUM] 1.00-2.00 sec 53.5 MBytes 448 Mbits/sec 537
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 9.00-10.00 sec 5.28 MBytes 44.3 Mbits/sec 104 209 KBytes
[ 7] 9.00-10.00 sec 6.77 MBytes 56.8 Mbits/sec 61 291 KBytes
[ 9] 9.00-10.00 sec 7.52 MBytes 63.1 Mbits/sec 0 407 KBytes
[ 11] 9.00-10.00 sec 5.16 MBytes 43.3 Mbits/sec 0 294 KBytes
[ 13] 9.00-10.00 sec 5.03 MBytes 42.2 Mbits/sec 39 208 KBytes
[ 15] 9.00-10.00 sec 5.28 MBytes 44.3 Mbits/sec 93 209 KBytes
[ 17] 9.00-10.00 sec 11.2 MBytes 94.4 Mbits/sec 39 479 KBytes
[ 19] 9.00-10.00 sec 7.39 MBytes 62.0 Mbits/sec 129 297 KBytes
[SUM] 9.00-10.00 sec 53.7 MBytes 450 Mbits/sec 465
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 52.6 MBytes 44.1 Mbits/sec 253 sender
[ 5] 0.00-10.04 sec 50.1 MBytes 41.8 Mbits/sec receiver
[ 7] 0.00-10.00 sec 68.5 MBytes 57.5 Mbits/sec 215 sender
[ 7] 0.00-10.04 sec 66.6 MBytes 55.6 Mbits/sec receiver
[ 9] 0.00-10.00 sec 69.3 MBytes 58.1 Mbits/sec 137 sender
[ 9] 0.00-10.04 sec 67.7 MBytes 56.6 Mbits/sec receiver
[ 11] 0.00-10.00 sec 52.9 MBytes 44.4 Mbits/sec 274 sender
[ 11] 0.00-10.04 sec 51.1 MBytes 42.7 Mbits/sec receiver
[ 13] 0.00-10.00 sec 52.7 MBytes 44.2 Mbits/sec 461 sender
[ 13] 0.00-10.04 sec 50.7 MBytes 42.4 Mbits/sec receiver
[ 15] 0.00-10.00 sec 54.3 MBytes 45.6 Mbits/sec 353 sender
[ 15] 0.00-10.04 sec 51.5 MBytes 43.0 Mbits/sec receiver
[ 17] 0.00-10.00 sec 113 MBytes 94.9 Mbits/sec 85 sender
[ 17] 0.00-10.04 sec 111 MBytes 92.5 Mbits/sec receiver
[ 19] 0.00-10.00 sec 74.9 MBytes 62.9 Mbits/sec 503 sender
[ 19] 0.00-10.04 sec 72.2 MBytes 60.4 Mbits/sec receiver
[SUM] 0.00-10.00 sec 538 MBytes 452 Mbits/sec 2281 sender
[SUM] 0.00-10.04 sec 521 MBytes 435 Mbits/sec receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 30 -P 16
--
...
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 87.6 MBytes 24.5 Mbits/sec 528 sender
[ 5] 0.00-30.05 sec 86.8 MBytes 24.2 Mbits/sec receiver
[ 7] 0.00-30.00 sec 67.8 MBytes 19.0 Mbits/sec 384 sender
[ 7] 0.00-30.05 sec 67.0 MBytes 18.7 Mbits/sec receiver
[ 9] 0.00-30.00 sec 69.6 MBytes 19.5 Mbits/sec 378 sender
[ 9] 0.00-30.05 sec 69.0 MBytes 19.3 Mbits/sec receiver
[ 11] 0.00-30.00 sec 75.2 MBytes 21.0 Mbits/sec 467 sender
[ 11] 0.00-30.05 sec 74.2 MBytes 20.7 Mbits/sec receiver
[ 13] 0.00-30.00 sec 66.2 MBytes 18.5 Mbits/sec 459 sender
[ 13] 0.00-30.05 sec 65.4 MBytes 18.3 Mbits/sec receiver
[ 15] 0.00-30.00 sec 72.2 MBytes 20.2 Mbits/sec 417 sender
[ 15] 0.00-30.05 sec 71.3 MBytes 19.9 Mbits/sec receiver
[ 17] 0.00-30.00 sec 85.4 MBytes 23.9 Mbits/sec 414 sender
[ 17] 0.00-30.05 sec 84.6 MBytes 23.6 Mbits/sec receiver
[ 19] 0.00-30.00 sec 54.0 MBytes 15.1 Mbits/sec 269 sender
[ 19] 0.00-30.05 sec 53.1 MBytes 14.8 Mbits/sec receiver
[ 21] 0.00-30.00 sec 138 MBytes 38.7 Mbits/sec 608 sender
[ 21] 0.00-30.05 sec 137 MBytes 38.3 Mbits/sec receiver
[ 23] 0.00-30.00 sec 118 MBytes 32.9 Mbits/sec 644 sender
[ 23] 0.00-30.05 sec 116 MBytes 32.3 Mbits/sec receiver
[ 25] 0.00-30.00 sec 88.3 MBytes 24.7 Mbits/sec 470 sender
[ 25] 0.00-30.05 sec 86.6 MBytes 24.2 Mbits/sec receiver
[ 27] 0.00-30.00 sec 64.1 MBytes 17.9 Mbits/sec 397 sender
[ 27] 0.00-30.05 sec 63.4 MBytes 17.7 Mbits/sec receiver
[ 29] 0.00-30.00 sec 177 MBytes 49.4 Mbits/sec 911 sender
[ 29] 0.00-30.05 sec 174 MBytes 48.7 Mbits/sec receiver
[ 31] 0.00-30.00 sec 168 MBytes 47.0 Mbits/sec 630 sender
[ 31] 0.00-30.05 sec 165 MBytes 46.1 Mbits/sec receiver
[ 33] 0.00-30.00 sec 107 MBytes 29.8 Mbits/sec 515 sender
[ 33] 0.00-30.05 sec 105 MBytes 29.4 Mbits/sec receiver
[ 35] 0.00-30.00 sec 122 MBytes 34.1 Mbits/sec 348 sender
[ 35] 0.00-30.05 sec 121 MBytes 33.8 Mbits/sec receiver
[SUM] 0.00-30.00 sec 1.52 GBytes 436 Mbits/sec 7839 sender
[SUM] 0.00-30.05 sec 1.50 GBytes 430 Mbits/sec receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G -t 30
--
...
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-30.00 sec 6.14 GBytes 1.76 Gbits/sec 0.000 ms 0/4554980 (0%) sender
[ 5] 0.00-30.08 sec 1.28 GBytes 366 Mbits/sec 0.006 ms 3604510/4554948 (79%) receiver
iperf Done.
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-30.00 sec 1.31 GBytes 375 Mbits/sec 0.000 ms 0/970390 (0%) sender
[ 5] 0.00-30.02 sec 1.01 GBytes 290 Mbits/sec 0.626 ms 216628/968353 (22%) receiver
•
Same node
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G -t 10
--
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 4.47 GBytes 3.84 Gbits/sec 0.000 ms 0/3315187 (0%) sender
[ 5] 0.00-10.00 sec 4.33 GBytes 3.72 Gbits/sec 0.003 ms 103453/3315187 (3.1%) receiver
iperf Done.
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 -P 16
[SUM] 0.00-10.00 sec 89.1 GBytes 76.5 Gbits/sec 0 sender
[SUM] 0.00-10.00 sec 89.1 GBytes 76.5 Gbits/sec receiver
iperf Done.
Community and official resources on Cilium netkit:
1. Cilium official docs - Tuning Guide (covers netkit)
https://docs.cilium.io/en/latest/operations/performance/tuning.html
Performance tuning and netkit enablement guide
2. Isovalent blog - Introducing Cilium netkit
https://isovalent.com/blog/post/cilium-netkit-a-new-container-networking-paradigm-for-the-ai-era/
netkit-based high-performance container networking
3. Cilium 1.16 release news including the netkit feature
https://isovalent.com/blog/post/cilium-1-16/
netkit launch and performance improvements
4. GitHub issue - Cilium netkit and performance discussion
https://github.com/cilium/cilium/issues/34543
Practical issues and discussion around netkit, BBR, and the bandwidth manager
5. YouTube - eCHO Episode 140: Cilium and netkit
https://www.youtube.com/watch?v=hldsOlLCO_Y
Cilium netkit performance demo and walkthrough
6. LinkedIn post - Cilium netkit
https://www.linkedin.com/posts/raphink_cilium-netkit-the-final-frontier-in-container-activity-7217148701306167296-SUmD
Recent netkit performance and community news
7. FOSDEM 2025 talk - An introduction to netkit
https://fosdem.org/2025/schedule/event/fosdem-2025-4045-an-introduction-to-netkit-the-bpf-programmable-network-device/
netkit technical overview and performance slides
helm install cilium cilium/cilium --version 1.18.1 --namespace kube-system \
--set k8sServiceHost=192.168.56.30 --set k8sServicePort=6443 \
--set kubeProxyReplacement=true \
--set routingMode=native \
--set autoDirectNodeRoutes=true \
--set ipam.mode="cluster-pool" \
--set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16 \
--set endpointRoutes.enabled=true \
--set installNoConntrackIptablesRules=true \
--set bpf.masquerade=true \
--set ipv6.enabled=false
helm install cilium cilium/cilium --version 1.18.1 \
--namespace kube-system \
--set k8sServiceHost=192.168.56.30 \
--set k8sServicePort=6443 \
--set routingMode=native \
--set bpf.masquerade=true \
--set ipv4.enabled=true \
--set enableIPv4BIGTCP=true \
--set kubeProxyReplacement=true \
--set ipv4NativeRoutingCIDR=10.244.0.0/16
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl exec -it deploy/netperf-client \
-- netperf -H $NETSER -p 12865 -t TCP_RR \
-- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.137 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
246 647 1456 1970.23
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~#
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl exec -it deploy/netperf-client -- netperf -H $NETSER -p 12865 -t TCP_RR -- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.137 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
247 626 1248 2042.89
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl exec -it deploy/netperf-client -- netperf -H $NETSER -p 12865 -t TCP_RR -- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
E0917 23:56:57.128276 20441 websocket.go:297] Unknown stream id 1, discarding message
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.137 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
249 791 3980 1572.92
kubectl exec -it deploy/netperf-client -- netperf -H $NETSER -p 12865 -t TCP_RR -- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.137 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
237 649 1504 2001.37
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl exec -it deploy/netperf-client -- netperf -H $NETSER -p 12865 -t TCP_RR -- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.137 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
263 653 1386 1956.45
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl exec -it deploy/netperf-client -- netperf -H $NETSER -p 12865 -t TCP_RR -- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.137 () port 0 AF_INET : demo : first burst 0
Minimum 90th 99th Throughput
Latency Percentile Percentile
Microseconds Latency Latency
Microseconds Microseconds
237 599 1157 2121.63
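The repeated manual RR runs above can be scripted so the run-to-run spread of P99 is easier to see. A sketch assuming the same `$NETSER` and `netperf-client` deployment from this session:

```shell
# Repeat the TCP_RR test and keep only the numeric result rows
# (the kubectl loop is commented out because it needs the live cluster):
# for i in 1 2 3 4 5; do
#   kubectl exec deploy/netperf-client -- netperf -H "$NETSER" -p 12865 -t TCP_RR \
#     -- -r80000:80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT | tail -1
# done
# Each row is "min p90 p99 throughput"; e.g. the last run above:
echo '237          599          1157         2121.63' | awk '{printf "P99 %s us, %s tps\n", $3, $4}'
# prints: P99 1157 us, 2121.63 tps
```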