Kubernetes: switching kube-proxy to IPVS mode
Check the current kube-proxy mode
List the running kube-proxy pods:
```
root@k8s-master01 ~ # kubectl get pods -n kube-system | grep proxy
kube-proxy-9ssmd   1/1     Running   0          16m
kube-proxy-prs8j   1/1     Running   0          16m
kube-proxy-tp9vf   1/1     Running   0          16m
kube-proxy-x8xtr   1/1     Running   0          16m
```
Pick any one of the pods and check its logs:
```
root@k8s-master01 ~ # kubectl logs -n kube-system kube-proxy-9ssmd
W0106 09:16:24.395860       1 proxier.go:649] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0106 09:16:24.492916       1 node.go:136] Successfully retrieved node IP: 10.176.57.151
I0106 09:16:24.492952       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.176.57.151), assume IPv4 operation
I0106 09:16:24.539546       1 server_others.go:258] Unknown proxy mode "", assuming iptables proxy
W0106 09:16:24.539853       1 proxier.go:434] Using iptables Proxier
I0106 09:16:24.540022       1 server.go:650] Version: v1.19.16
I0106 09:16:24.540311       1 conntrack.go:52] Setting nf_conntrack_max to 1310720
I0106 09:16:24.540512       1 config.go:315] Starting service config controller
I0106 09:16:24.540521       1 shared_informer.go:240] Waiting for caches to sync for service config
I0106 09:16:24.540543       1 config.go:224] Starting endpoint slice config controller
I0106 09:16:24.540550       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0106 09:16:24.640635       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0106 09:16:24.640647       1 shared_informer.go:247] Caches are synced for service config
```
As the logs show, kube-proxy is currently running in its default iptables mode.
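Instead of reading logs, you can also query the mode directly: kube-proxy serves a `/proxyMode` endpoint on its metrics port (10249 by default; port and address assumed from the default configuration). A quick sketch:

```shell
# Ask kube-proxy for its active mode; run this on any cluster node.
# Prints "iptables" before the change and "ipvs" afterwards.
curl -s http://127.0.0.1:10249/proxyMode

# Alternatively, grep the proxier line out of every kube-proxy pod's log
# (the k8s-app=kube-proxy label is what kubeadm applies by default):
for p in $(kubectl get pods -n kube-system -l k8s-app=kube-proxy -o name); do
  kubectl logs -n kube-system "$p" | grep -m1 'Proxier'
done
```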
Switch the proxy mode to IPVS
First, make sure the IPVS kernel modules are loaded:
```
root@k8s-master01 ~ # lsmod | grep ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  584
ip_vs                 172032  590  ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  6    xt_conntrack,nf_nat,ipt_MASQUERADE,xt_nat,nf_conntrack_netlink,ip_vs
nf_defrag_ipv6         20480  2    nf_conntrack,ip_vs
libcrc32c              16384  5    nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
```
If the IPVS modules are not loaded, run the following:
```
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
```
```
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
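The `/etc/sysconfig/modules/` mechanism above is specific to RHEL/CentOS. On Debian/Ubuntu-family nodes (an assumption; adapt to your distro), the equivalent persistent setup goes through `systemd-modules-load`, which reads one module name per line from `/etc/modules-load.d/`:

```shell
# Persist the IPVS modules across reboots on systemd distros without
# /etc/sysconfig/modules (file name ipvs.conf is arbitrary):
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load them now without rebooting:
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack
```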
Edit the kube-proxy ConfigMap:

```
kubectl edit configmap kube-proxy -n kube-system
```
```yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"    # change this line; it was originally empty
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https:
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  annotations:
    kubeadm.kubernetes.io/component-config.hash: sha256:2c22cc51db35cb01f36fd732e29451af4c46e2b59dcadaea2cd8579efe0d1937
  creationTimestamp: "2020-12-30T09:15:14Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "213915859"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy
  uid: 21a0178f-0bea-4256-a710-2382c18c9232
```
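If you want to script this change instead of editing interactively, a non-interactive sketch (a plain `sed` substitution; it assumes `mode: ""` appears exactly once in the ConfigMap, as in the default kubeadm setup) is:

```shell
# Rewrite mode: "" to mode: "ipvs" and re-apply the ConfigMap.
kubectl get configmap kube-proxy -n kube-system -o yaml \
  | sed -e 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -
```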
Restart kube-proxy
```
kubectl rollout restart daemonset kube-proxy -n kube-system
```
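Once the rollout finishes, it is worth verifying the switch actually took effect. A sketch (the `k8s-app=kube-proxy` label is kubeadm's default, and `ipvsadm` is only available if installed on the node):

```shell
# Wait for all kube-proxy pods to be replaced:
kubectl rollout status daemonset kube-proxy -n kube-system
# New pods should now log "Using ipvs Proxier" instead of iptables:
kubectl logs -n kube-system \
  "$(kubectl get pods -n kube-system -l k8s-app=kube-proxy -o name | head -n1)" \
  | grep 'Proxier'
# On a node with ipvsadm installed, the virtual server table should list
# entries for your cluster's Services:
ipvsadm -Ln
```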