
I have an EKS cluster with the following configuration:
2 VPCs: 1 prod, 1 staging
Each VPC has 3 subnets: 1 public and 2 private
Each VPC has 1 internet gateway and 1 NAT gateway
The private subnets are connected to the NAT gateway through a route table association.
I have an EKS cluster and an AWS managed node group.
The node group is assigned to the private subnets.
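For context, this is roughly how I have been double-checking which Availability Zones the node-group subnets actually span (a minimal boto3 sketch; the cluster and node group names are placeholders):

```python
import boto3

# Placeholder names: adjust to the actual cluster / node group.
CLUSTER = "prod-cluster"
NODEGROUP = "prod-nodegroup"

eks = boto3.client("eks", region_name="eu-north-1")
ec2 = boto3.client("ec2", region_name="eu-north-1")

# Subnets the managed node group was created with.
ng = eks.describe_nodegroup(clusterName=CLUSTER, nodegroupName=NODEGROUP)
subnet_ids = ng["nodegroup"]["subnets"]

# Map each subnet to its Availability Zone.
for subnet in ec2.describe_subnets(SubnetIds=subnet_ids)["Subnets"]:
    print(subnet["SubnetId"], subnet["AvailabilityZone"])
```

Both prod and staging node groups report subnets in more than one zone this way.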
I installed an ingress-nginx controller, which creates a Network Load Balancer.
This Network Load Balancer works perfectly in staging, but not in production.
The Network Load Balancers for both VPCs are created in zone eu-north-1a.
The target instance for the staging load balancer is created in zone eu-north-1a, while the one for the prod load balancer is created in zone eu-north-1b and returns the following error:
```
Targets are not within enabled Availability Zones
Some targets are not receiving traffic because they are in Zones that are not enabled for your load balancer.
Unused target zones
eu-north-1b
To resolve
There are two options:
Enable these Zones on the load balancer by visiting the load balancer detail page and adding subnets in these Zones. View load balancer
Or, deregister targets that are in these Zones. View targets in unused Zones
```
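To make the mismatch concrete, this is the kind of check I have been running against the prod NLB (a rough boto3 sketch; the load balancer name is a placeholder, since the real one is auto-generated), comparing the zones enabled on the load balancer with the zones of the registered instances:

```python
import boto3

REGION = "eu-north-1"
elbv2 = boto3.client("elbv2", region_name=REGION)
ec2 = boto3.client("ec2", region_name=REGION)

# Placeholder: the auto-generated name of the NLB that ingress-nginx created in prod.
lb = elbv2.describe_load_balancers(Names=["prod-nlb-name"])["LoadBalancers"][0]
print("Zones enabled on the NLB:", [az["ZoneName"] for az in lb["AvailabilityZones"]])

# Zones of the instances actually registered behind that NLB.
for tg in elbv2.describe_target_groups(LoadBalancerArn=lb["LoadBalancerArn"])["TargetGroups"]:
    health = elbv2.describe_target_health(TargetGroupArn=tg["TargetGroupArn"])
    for desc in health["TargetHealthDescriptions"]:
        instance_id = desc["Target"]["Id"]
        reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
        zone = reservations[0]["Instances"][0]["Placement"]["AvailabilityZone"]
        print(instance_id, zone, desc["TargetHealth"]["State"],
              desc["TargetHealth"].get("Reason", ""))
```

In prod this shows only eu-north-1a enabled on the NLB while the registered instance sits in eu-north-1b, which matches the console message above.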
So the staging and prod clusters are identical: the subnets and the ingress-nginx configs are the same, everything matches. But I can't figure out why prod is failing while staging is not. What could I be missing?
The values file for the ingress:
```
ingress-nginx:
  controller:
    replicaCount: 2
    resources:
      limits:
        memory: 300Mi
      requests:
        memory: 256Mi
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
      externalTrafficPolicy: Local
      healthCheckNodePort: 30254
    stats:
      enabled: true
    config:
      client-max-body-size: "25m"
      http-redirect-code: "301"
      proxy-buffer-size: 128k
      proxy-buffers: 4 256k
      proxy-connect-timeout: "600"
      proxy-read-timeout: "600"
      ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
      ssl-protocols: "TLSv1.2 TLSv1.3"
      # use-forwarded-headers: "true"
      # use-proxy-protocol: "true"
      # compute-full-forwarded-for: "true"
      # enable-real-ip: "true"
      # forwarded-for-header: X-Forwarded-For
      log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $upstream_http_resonseHeaderName $ssl_protocol $ssl_cipher'
  rbac:
    create: true
  serviceAccount:
    create: true
```
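One thing I'm still checking is the subnet auto-discovery side: as far as I understand, the controller picks the subnets for the load balancer from the kubernetes.io/role/* and kubernetes.io/cluster/&lt;cluster-name&gt; tags, so the zones it enables depend on which subnets carry those tags. A rough boto3 sketch of what I'm using to list the discoverable subnets per VPC (the VPC ID and cluster name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-north-1")

# Placeholders: the prod VPC and the prod cluster name.
VPC_ID = "vpc-0123456789abcdef0"
CLUSTER = "prod-cluster"

# Subnets tagged for internet-facing load balancers in this VPC.
# (Internal load balancers look for kubernetes.io/role/internal-elb instead.)
resp = ec2.describe_subnets(
    Filters=[
        {"Name": "vpc-id", "Values": [VPC_ID]},
        {"Name": "tag:kubernetes.io/role/elb", "Values": ["1"]},
    ]
)
for subnet in resp["Subnets"]:
    tags = {t["Key"]: t["Value"] for t in subnet.get("Tags", [])}
    print(subnet["SubnetId"], subnet["AvailabilityZone"],
          tags.get(f"kubernetes.io/cluster/{CLUSTER}"))
```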