I set up a Kubernetes cluster with Kind (one control-plane node and two worker nodes) and enabled Cilium's eBPF kube-proxy replacement. I deployed an nginx application and exposed it externally via a NodePort service. When I curl any node's IP address plus the NodePort from inside one of the nodes, it works fine. However, when I run the same curl from the host machine that runs the Kind cluster, targeting any node IP and the NodePort, I cannot reach the service, even though the host can successfully ping all the nodes. Here are the steps I followed.
System Environment
cat /etc/os-release
NAME="openEuler"
VERSION="24.03 (LTS-SP1)"
ID="openEuler"
VERSION_ID="24.03"
PRETTY_NAME="openEuler 24.03 (LTS-SP1)"
ANSI_COLOR="0;31"
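Since the eBPF kube-proxy replacement runs in the host kernel (which the kind node containers share), the kernel version is relevant too; it matches the KERNEL-VERSION column in the node listing further down:
uname -r
6.6.0-72.0.0.76.oe2403sp1.x86_64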
Kind
kind: Cluster
name: fz
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /etc/systemd/system/containerd.service.d
    hostPath: /etc/systemd/system/containerd.service.d
  - containerPath: /opt/cni/bin
    hostPath: /opt/cni/bin
  - containerPath: /etc/containerd
    hostPath: /data/k8s-manifests/containerd
- role: worker
  extraMounts:
  - containerPath: /etc/systemd/system/containerd.service.d
    hostPath: /etc/systemd/system/containerd.service.d
  - containerPath: /opt/cni/bin
    hostPath: /opt/cni/bin
  - containerPath: /etc/containerd
    hostPath: /data/k8s-manifests/containerd
- role: worker
  extraMounts:
  - containerPath: /etc/systemd/system/containerd.service.d
    hostPath: /etc/systemd/system/containerd.service.d
  - containerPath: /opt/cni/bin
    hostPath: /opt/cni/bin
  - containerPath: /etc/containerd
    hostPath: /data/k8s-manifests/containerd
networking:
  podSubnet: "172.31.0.0/16"
  serviceSubnet: "10.0.0.0/12"
  kubeProxyMode: ipvs
  ipFamily: ipv4
  disableDefaultCNI: true
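For reference, the cluster was created from this config in the usual way (the file name here is illustrative):
kind create cluster --config kind-config.yaml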
Kubernetes Cluster
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fz-control-plane Ready control-plane 190d v1.32.2 10.4.1.237 <none> Debian GNU/Linux 12 (bookworm) 6.6.0-72.0.0.76.oe2403sp1.x86_64 containerd://2.0.3
fz-worker Ready <none> 190d v1.32.2 10.4.1.239 <none> Debian GNU/Linux 12 (bookworm) 6.6.0-72.0.0.76.oe2403sp1.x86_64 containerd://2.0.3
fz-worker2 Ready <none> 190d v1.32.2 10.4.1.235 <none> Debian GNU/Linux 12 (bookworm) 6.6.0-72.0.0.76.oe2403sp1.x86_64 containerd://2.0.3
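The INTERNAL-IP values are the addresses of the kind node containers on their Docker network, which is why the host can reach them directly. They can be cross-checked from the host, for example:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fz-worker
# should print the INTERNAL-IP shown above, i.e. 10.4.1.239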
Cilium
cilium status --verbose -n cilium-system
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods: 7/7 managed by Cilium
Helm chart version: 1.18.3
Image versions cilium quay.io/cilium/cilium:v1.18.3@sha256:5649db451c88d928ea585514746d50d91e6210801b300c897283ea319d68de15: 3
cilium-envoy quay.io/cilium/cilium-envoy:v1.34.10-1761014632-c360e8557eb41011dfb5210f8fb53fed6c0b3222@sha256:ca76eb4e9812d114c7f43215a742c00b8bf41200992af0d21b5561d46156fd15: 3
cilium-operator quay.io/cilium/operator-generic:v1.18.3@sha256:b5a0138e1a38e4437c5215257ff4e35373619501f4877dbaf92c89ecfad81797: 1
hubble-relay quay.io/cilium/hubble-relay:v1.18.3@sha256:e53e00c47fe4ffb9c086bad0c1c77f23cb968be4385881160683d9e15aa34dc3: 1
hubble-ui quay.io/cilium/hubble-ui-backend:v0.13.3@sha256:db1454e45dc39ca41fbf7cad31eec95d99e5b9949c39daaad0fa81ef29d56953: 1
hubble-ui quay.io/cilium/hubble-ui:v0.13.3@sha256:661d5de7050182d495c6497ff0b007a7a1e379648e60830dd68c4d78ae21761d: 1
kubectl get cm cilium-config -n cilium-system -o yaml | grep replacement
kube-proxy-replacement: "true"
kube-proxy-replacement-healthz-bind-address: ""
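Because the eBPF NodePort implementation is attached to specific network devices, it may also be worth confirming which devices the agent picked up. A possible check (assuming the in-pod debug binary is cilium-dbg, as in recent Cilium releases):
kubectl -n cilium-system exec ds/cilium -c cilium-agent -- cilium-dbg status --verbose | grep -iE 'kubeproxyreplacement|devices'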
App
kubectl get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/test-nginx1-59c6dfb7cf-2bqcb 1/1 Running 0 42h 172.31.1.111 fz-worker <none> <none>
pod/test-nginx2-86d4748dc4-8fzdl 1/1 Running 0 42h 172.31.2.180 fz-worker2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 190d <none>
service/test-nginx1 NodePort 10.6.146.165 <none> 80:31880/TCP 5d17h app=test-nginx1
service/test-nginx2 NodePort 10.14.187.66 <none> 80:30516/TCP 5d17h app=test-nginx2
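For completeness, the first NodePort service looks roughly like this (a sketch reconstructed from the listing above; the actual manifest may differ):
apiVersion: v1
kind: Service
metadata:
  name: test-nginx1
spec:
  type: NodePort
  selector:
    app: test-nginx1
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31880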
On all nodes, I ran:
# curl -I 10.4.1.239:31880
HTTP/1.1 200 OK
Server: nginx/1.22.0
Date: Thu, 20 Nov 2025 02:27:05 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 25 May 2022 10:01:40 GMT
Connection: keep-alive
ETag: "628dfe84-267"
Accept-Ranges: bytes
# curl -I 10.4.1.235:31880
HTTP/1.1 200 OK
Server: nginx/1.22.0
Date: Thu, 20 Nov 2025 02:27:48 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 25 May 2022 10:01:40 GMT
Connection: keep-alive
ETag: "628dfe84-267"
Accept-Ranges: bytes
# curl -I 10.4.1.237:31880
HTTP/1.1 200 OK
Server: nginx/1.22.0
Date: Thu, 20 Nov 2025 02:29:28 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 25 May 2022 10:01:40 GMT
Connection: keep-alive
ETag: "628dfe84-267"
Accept-Ranges: bytes
From the host machine, however, the same request times out:
# curl -I 10.4.1.235:31880
curl: (28) Failed to connect to 10.4.1.235 port 31880 after 132931 ms: Couldn't connect to server
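To narrow down where the connection dies, the next step would be to capture traffic on a node while repeating the curl from the host (assuming tcpdump is available, or can be installed, in the kind node container):
docker exec -it fz-worker tcpdump -ni eth0 tcp port 31880
If the SYNs from the host show up here but never get a reply, the drop happens inside the node (e.g. in the eBPF datapath); if nothing arrives at all, it is a host-side routing or firewall issue.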