
I'm stuck trying to call from one pod to another through its Service when both pods are on the same node.
To explain the setup:
- node1 – service1a (pod1A), service1b (pod1B)
- node2 – service2a (pod2A)
When testing (see the reproduction sketch after this list):
- PING pod1A -> pod1B OK
- PING pod1A -> pod2A OK
- CURL pod1A -> service2a OK
- CURL pod1A -> service1b TIMEOUT
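For reference, the checks above can be reproduced roughly like this from inside pod1A (a sketch only; it assumes ping and curl are available in the image, uses the pod IPs and service names from the listings further down, and the DNS names are placeholders to adjust to your real service names):

kubectl -n dev exec -it <pod1A-pod-name> -- ping -c 3 10.244.2.63        # pod1B, same node: OK
kubectl -n dev exec -it <pod1A-pod-name> -- ping -c 3 10.244.3.104       # pod2A, other node: OK
kubectl -n dev exec -it <pod1A-pod-name> -- curl -v --max-time 5 http://service2a.dev.svc.cluster.local:8080        # OK
kubectl -n dev exec -it <pod1A-pod-name> -- curl -v --max-time 5 telnet://service1b.dev.svc.cluster.local:5432      # times out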
I have spent several days changing various parts of the configuration and searching the internet for the same problem, but without success.
The only matching problem I found is here: https://stackoverflow.com/questions/64073696/pods-running-on-the-same-node-cant-access-to-each-other-through-service, but there it was solved by switching from IPVS to IPTABLES. I am already using iptables.
PODS (names have been changed):
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dev pod/pod1B-5d595bf69-8fgsm 1/1 Running 0 16h 10.244.2.63 kube-node2 <none> <none>
dev pod/pod1A-database-596f76d8b5-6bdqv 1/1 Running 0 16h 10.244.2.65 kube-node2 <none> <none>
dev pod/pod2A-dbb8fd4d-xv54n 1/1 Running 1 15h 10.244.3.104 kube-node3 <none> <none>
kube-system pod/coredns-6d4b75cb6d-6b2cn 1/1 Running 4 (50d ago) 292d 10.244.0.10 kube-master <none> <none>
kube-system pod/coredns-6d4b75cb6d-6m7q2 1/1 Running 4 (50d ago) 292d 10.244.0.11 kube-master <none> <none>
kube-system pod/etcd-kube-master 1/1 Running 2 (50d ago) 50d 172.31.42.90 kube-master <none> <none>
kube-system pod/kube-apiserver-kube-master 1/1 Running 2 (50d ago) 50d 172.31.42.90 kube-master <none> <none>
kube-system pod/kube-controller-manager-kube-master 1/1 Running 1 (50d ago) 50d 172.31.42.90 kube-master <none> <none>
kube-system pod/kube-flannel-ds-bwkjg 1/1 Running 0 62s 172.31.45.210 kube-node3 <none> <none>
kube-system pod/kube-flannel-ds-g9v9m 1/1 Running 0 66s 172.31.42.90 kube-master <none> <none>
kube-system pod/kube-flannel-ds-hljj5 1/1 Running 0 30s 172.31.42.77 kube-node2 <none> <none>
kube-system pod/kube-flannel-ds-k4zfw 1/1 Running 0 65s 172.31.43.77 kube-node1 <none> <none>
kube-system pod/kube-proxy-68k5n 1/1 Running 0 35m 172.31.45.210 kube-node3 <none> <none>
kube-system pod/kube-proxy-lb6s9 1/1 Running 0 35m 172.31.42.90 kube-master <none> <none>
kube-system pod/kube-proxy-vggwk 1/1 Running 0 35m 172.31.43.77 kube-node1 <none> <none>
kube-system pod/kube-proxy-wxwd7 1/1 Running 0 34m 172.31.42.77 kube-node2 <none> <none>
kube-system pod/kube-scheduler-kube-master 1/1 Running 1 (50d ago) 50d 172.31.42.90 kube-master <none> <none>
kube-system pod/metrics-server-55d58b59c9-569p5 1/1 Running 0 15h 10.244.2.69 kube-node2 <none> <none>
kubernetes-dashboard pod/dashboard-metrics-scraper-8c47d4b5d-2vxfj 1/1 Running 0 16h 10.244.2.64 kube-node2 <none> <none>
SERVICES (names have been changed):
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 292d <none>
dev service/pod1B(service1b) ClusterIP 10.102.52.69 <none> 5432/TCP 42h app=database
dev service/pod2A(service2a) ClusterIP 10.105.208.135 <none> 8080/TCP 42h app=keycloak
dev service/pod1A(service1a) ClusterIP 10.111.140.245 <none> 5432/TCP 42h app=keycloak-database
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 292d k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 10.111.227.187 <none> 443/TCP 285d k8s-app=metrics-server
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.110.143.2 <none> 8000/TCP 247d k8s-app=dashboard-metrics-scraper
kube-proxy is running in IPTABLES mode and the services are ClusterIP services (I also use about 2–3 NodePorts, but I don't think that is the problem).
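To double-check which mode kube-proxy is actually running in, one option (a sketch; the exact log wording varies by version, and the pod name is a placeholder) is to grep the kube-proxy ConfigMap and the kube-proxy logs:

kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E 'mode:|masqueradeAll'   # mode: "" means the default, i.e. iptables on Linux
kubectl -n kube-system logs <kube-proxy-pod> | grep -i proxier                            # e.g. "Using iptables Proxier"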
kube-proxy DaemonSet settings (kube-proxy ConfigMap):
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.10.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocal:
      bridgeInterface: ""
      interfaceNamePrefix: ""
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      forwardHealthCheckVip: false
      networkName: ""
      rootHnsEndpointName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://172.31.42.90:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  annotations:
    kubeadm.kubernetes.io/component-config.hash: sha256:ebcfa3923c1228031a5b824f2edca518edc4bd49fd07cedeffa371084cba342b
  creationTimestamp: "2022-07-03T19:28:14Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "40174591"
  uid: cfadfa22-ed43-4a3f-9897-25f605ebb8b9
Flannel DaemonSet settings (flannel ConfigMap):
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"cni-conf.json":"{\n \"name\": \"cbr0\",\n \"cniVersion\": \"0.3.1\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDe$
  creationTimestamp: "2022-07-03T20:34:32Z"
  labels:
    app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-system
  resourceVersion: "40178136"
  uid: ccb81719-7013-4f0b-8c20-ab9d5c3d5f8e
IPTABLES, kube-system:
iptables -L -t nat | grep kube-system
KUBE-MARK-MASQ all -- ip-10-244-2-69.us-east-2.compute.internal anywhere /* kube-system/metrics-server:https */
DNAT tcp -- anywhere anywhere /* kube-system/metrics-server:https */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-0-11.us-east-2.compute.internal anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-0-10.us-east-2.compute.internal anywhere /* kube-system/kube-dns:metrics */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:metrics */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-0-10.us-east-2.compute.internal anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-0-10.us-east-2.compute.internal anywhere /* kube-system/kube-dns:dns */
DNAT udp -- anywhere anywhere /* kube-system/kube-dns:dns */ udp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-0-11.us-east-2.compute.internal anywhere /* kube-system/kube-dns:dns */
DNAT udp -- anywhere anywhere /* kube-system/kube-dns:dns */ udp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-0-11.us-east-2.compute.internal anywhere /* kube-system/kube-dns:metrics */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:metrics */ tcp DNAT [unsupported revision]
KUBE-SVC-TCOU7JCQXEZGVUNU udp -- anywhere ip-10-96-0-10.us-east-2.compute.internal /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SVC-JD5MR3NA4I4DYORP tcp -- anywhere ip-10-96-0-10.us-east-2.compute.internal /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SVC-Z4ANX4WAEWEBLCTM tcp -- anywhere ip-10-111-227-187.us-east-2.compute.internal /* kube-system/metrics-server:https cluster IP */ tcp dpt:https
KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- anywhere ip-10-96-0-10.us-east-2.compute.internal /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-MARK-MASQ tcp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-96-0-10.us-east-2.compute.internal /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-SEP-OP4AXEAS4OXHBEQX all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp -> 10.244.0.10:53 */ statistic mode random probability 0.50000000000
KUBE-SEP-A7YQ4MY4TZII3JTK all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp -> 10.244.0.11:53 */
KUBE-MARK-MASQ tcp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-96-0-10.us-east-2.compute.internal /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SEP-HJ7EWOW62IX6GL6R all -- anywhere anywhere /* kube-system/kube-dns:metrics -> 10.244.0.10:9153 */ statistic mode random probability 0.50000000000
KUBE-SEP-ZJHOSXJEKQGYJUBC all -- anywhere anywhere /* kube-system/kube-dns:metrics -> 10.244.0.11:9153 */
KUBE-MARK-MASQ udp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-96-0-10.us-east-2.compute.internal /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SEP-R7EMXN5TTQQVP4UW all -- anywhere anywhere /* kube-system/kube-dns:dns -> 10.244.0.10:53 */ statistic mode random probability 0.50000000000
KUBE-SEP-VR6VIIG2A6524KLY all -- anywhere anywhere /* kube-system/kube-dns:dns -> 10.244.0.11:53 */
KUBE-MARK-MASQ tcp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-111-227-187.us-east-2.compute.internal /* kube-system/metrics-server:https cluster IP */ tcp dpt:https
KUBE-SEP-6BOUBB2FEQTN2GDB all -- anywhere anywhere /* kube-system/metrics-server:https -> 10.244.2.69:4443 */
iptables -L -t nat | grep dev
KUBE-MARK-MASQ all -- ip-10-244-3-104.us-east-2.compute.internal anywhere /* dev/pod2A:pod2A */
DNAT tcp -- anywhere anywhere /* dev/pod2A:pod2A */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-2-63.us-east-2.compute.internal anywhere /* dev/pod1B:pod1B */
DNAT tcp -- anywhere anywhere /* dev/pod1B:pod1B */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ all -- ip-10-244-2-65.us-east-2.compute.internal anywhere /* dev/pod1A:pod1A */
DNAT tcp -- anywhere anywhere /* dev/pod1A:pod1A */ tcp DNAT [unsupported revision]
KUBE-SVC-MI7BJVF4L3EWWCLA tcp -- anywhere ip-10-105-208-135.us-east-2.compute.internal /* dev/pod2A:pod2A cluster IP */ tcp dpt:http-alt
KUBE-SVC-S2FASJERAWCYNV26 tcp -- anywhere ip-10-111-140-245.us-east-2.compute.internal /* dev/pod1A:pod1A cluster IP */ tcp dpt:postgresql
KUBE-SVC-5JHIIG3NJGZTIC4I tcp -- anywhere ip-10-102-52-69.us-east-2.compute.internal /* dev/pod1B:pod1B cluster IP */ tcp dpt:postgresql
KUBE-MARK-MASQ tcp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-102-52-69.us-east-2.compute.internal /* dev/pod1B:pod1B cluster IP */ tcp dpt:postgresql
KUBE-SEP-FOQDGOYPAUSJGXYE all -- anywhere anywhere /* dev/pod1B:pod1B -> 10.244.2.63:5432 */
KUBE-MARK-MASQ tcp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-105-208-135.us-east-2.compute.internal /* dev/pod2A:pod2A cluster IP */ tcp dpt:http-alt
KUBE-SEP-AWG5CHYLOHV7OCEH all -- anywhere anywhere /* dev/pod2A:pod2A -> 10.244.3.104:8080 */
KUBE-MARK-MASQ tcp -- !ip-10-244-0-0.us-east-2.compute.internal/16 ip-10-111-140-245.us-east-2.compute.internal /* dev/pod1A:pod1A cluster IP */ tcp dpt:postgresql
KUBE-SEP-YDMKERDDJRZDWWVM all -- anywhere anywhere /* dev/pod1A:pod1A -> 10.244.2.65:5432 */
Can anyone help me figure out why I can't connect from a pod to a service on the same node?
Answer 1
Solved, but maybe someone else will find the solution here useful. It was several problems at once:
- We had different versions of the server, client, and kubelets on the workers (1.24.3, 1.24.5, … even 1.26.x).
- The kube-proxy masqueradeAll setting was not correct for version 1.26.
After upgrading all control planes and nodes to the same version, in our case with kubeadm from 1.24.x -> 1.25.9 -> 1.26.4, plus upgrading the Ubuntu OS (apt upgrade), the cluster is stable again and all nodes joined correctly (kubectl get nodes). The upgrade procedure is sketched below.
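For completeness, each version hop followed the usual kubeadm upgrade procedure; a rough sketch for one hop (package versions and the apt repo setup are assumptions for Ubuntu, adjust to your distro, and repeat per minor version):

# control plane, one hop at a time (e.g. to v1.25.9)
apt-get update && apt-get install -y kubeadm=1.25.9-00
kubeadm upgrade plan
kubeadm upgrade apply v1.25.9
apt-get install -y kubelet=1.25.9-00 kubectl=1.25.9-00
systemctl daemon-reload && systemctl restart kubelet

# each worker
kubectl drain <node> --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=1.25.9-00
kubeadm upgrade node
apt-get install -y kubelet=1.25.9-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node>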
The last change was needed because of the 1.25 -> 1.26 upgrade and the discussion on GitHub; see mweissdigchg's answer: "... had the same problems after upgrading from v1.25 to v1.26.
... turned out that the kube-proxy iptables configuration masqueradeAll: true made our services unreachable from pods on other nodes. It seems that the default configuration changed from masqueradeAll: false to masqueradeAll: true. ..."
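Based on that, a minimal sketch of the corresponding fix (assuming, as the quoted comment suggests, that masqueradeAll should be explicitly false, matching the ConfigMap shown in the question):

kubectl -n kube-system edit configmap kube-proxy
#   in data.config.conf, under iptables:, set
#     masqueradeAll: false
kubectl -n kube-system rollout restart daemonset kube-proxy
# verify the setting afterwards:
kubectl -n kube-system get configmap kube-proxy -o yaml | grep masqueradeAll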