Kubernetes v1.19.13: kube-apiserver keeps switching between different etcd databases

I upgraded a Kubernetes cluster (3 master servers, 3 etcd database nodes) from 1.18 to v1.19.13, and etcd to 3.4.13. Since the upgrade the API servers have not been stable: they keep switching to a different etcd server, and because of that the cluster does not work properly. The cluster runs on CentOS Stream 8. It worked fine before the upgrade; I only started seeing this problem afterwards.

Any help resolving this issue? Is there a known problem with this version?
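
For reference, the API servers reach etcd through the standard kube-apiserver flags, roughly as sketched below; the certificate paths are placeholders rather than the actual values from this cluster. The --etcd-servers flag lists all three members, which matches the endpoints that appear in the log lines below.

# kube-apiserver \
#   --etcd-servers=https://0.0.0.01:2379,https://0.0.0.02:2379,https://0.0.0.03:2379 \
#   --etcd-cafile=/path/to/etcd/ca.crt \
#   --etcd-certfile=/path/to/apiserver-etcd-client.crt \
#   --etcd-keyfile=/path/to/apiserver-etcd-client.key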

API server logs

I0731 00:54:39.498953       1 client.go:360] parsed scheme: "passthrough"
I0731 00:54:39.499025       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.02:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:54:39.499035       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:54:40.241615       1 client.go:360] parsed scheme: "passthrough"
I0731 00:54:40.241681       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.01:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:54:40.241691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:54:45.348969       1 client.go:360] parsed scheme: "passthrough"
I0731 00:54:45.349030       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.03:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:54:45.349040       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:55:16.460379       1 client.go:360] parsed scheme: "passthrough"
I0731 00:55:16.460428       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.01:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:55:16.460439       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:55:17.461906       1 client.go:360] parsed scheme: "passthrough"
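
To get a feel for how often the client re-resolves and switches endpoints, the balancer messages can be counted straight from the API server log (the file name below is only a placeholder for wherever your kube-apiserver log ends up):

# grep -c 'ClientConn switching balancer' kube-apiserver.log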

etcd looks healthy

# /opt/bin/etcdctl.sh   version
etcdctl version: 3.4.13
API version: 3.4

# /opt/bin/etcdctl.sh  endpoint health
127.0.0.1:2379 is healthy: successfully committed proposal: took = 9.739533ms
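
The health check above only queries the local endpoint. With the same wrapper script, --cluster asks every member from the member list instead of just 127.0.0.1 (shown here as an illustration, not output from my cluster):

# /opt/bin/etcdctl.sh  endpoint health --cluster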



# /opt/bin/etcdctl.sh  check perf
 60 / 60 Boooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo! 100.00% 1m0s
PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.042491s
PASS: Stddev is 0.001743s
PASS

# /opt/bin/etcdctl.sh  endpoint status --cluster -w table
+-----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|       ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://0.0.0.02:2379 | 15cd65a732ebd5d8 |  3.4.13 |   26 MB |     false |      false |      9305 |   19813854 |           19813854 |        |
| https://0.0.0.03:2379 | add66a254676e690 |  3.4.13 |   26 MB |      true |      false |      9305 |   19813854 |           19813854 |        |
| https://0.0.0.01:2379 | e2811ed02ce71623 |  3.4.13 |   26 MB |     false |      false |      9305 |   19813854 |           19813854 |        |
+-----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
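
For completeness, any active etcd alarms (for example NOSPACE) can be listed with the same wrapper; this is the standard etcdctl subcommand, included here only as a pointer, not output I collected:

# /opt/bin/etcdctl.sh  alarm list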
