Istio CNI blocks traffic in application init containers

After installing Istio CNI for ambient mesh, I ran into the problem of Istio CNI blocking traffic in the application's init containers. I am familiar with the workaround proposed in the documentation (https://istio.io/latest/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers), but those instructions did not help me; the plugin ignores them.
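
For reference, the documented workaround boils down to two options: annotate the pod so the init container's destination is excluded from traffic redirection, or run the init container as UID 1337, whose traffic the redirection rules skip. A minimal sketch of both applied to a bare pod (the pod name is hypothetical; 172.20.0.1 is my cluster's API server IP):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
  namespace: test
  annotations:
    # exclude the API server from outbound redirection
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.20.0.1/32"
spec:
  initContainers:
    - name: init
      image: bitnami/kubectl
      command: ['sh', '-c', 'kubectl version']
      securityContext:
        runAsUser: 1337   # traffic from this UID is not captured
  containers:
    - name: app
      image: nginx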

I tried to run the Redis Helm chart with an init container using the following values file.

helm install redis bitnami/redis --version 18.2.0 --namespace test --values <(cat redis-values.yaml)

redis-values.yaml:

image:
  registry: quay.io
  repository: opstree/redis
  tag: latest

master:
  livenessProbe:
    enabled: true
    initialDelaySeconds: 40
    periodSeconds: 15
    timeoutSeconds: 15
    successThreshold: 1
    failureThreshold: 5

  podSecurityContext:
    enabled: true
    fsGroup: 1337

  podAnnotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    traffic.sidecar.istio.io/includeOutboundIPRanges: "*"
    traffic.sidecar.istio.io/includeInboundPorts: "*"
    traffic.sidecar.istio.io/excludeOutboundPorts: "443"
    traffic.sidecar.istio.io/excludeInboundPorts: "443"
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.20.0.1/32"

  initContainers:
    - name: init-celery-workers-restart
      image: bitnami/kubectl
      command: ['sh', '-c', 'sleep 30; kubectl delete pods -n test -l app.kubernetes.io/component=celery-worker']
      securityContext:
        runAsUser: 1337

  resources:
    requests:
      cpu: 80m
      memory: 60Mi
    limits:
      cpu: 80m
      memory: 60Mi
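
One way to confirm that the annotations and the runAsUser/fsGroup settings survive chart templating (rather than being silently dropped by the chart) is to render the manifests locally:

helm template redis bitnami/redis --version 18.2.0 --namespace test \
  --values redis-values.yaml \
  | grep -E 'excludeOutboundIPRanges|runAsUser|fsGroup'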

When the pod starts, I get the following logs in the init container output. At the same time, istio-cni, ztunnel and istiod show the logs below.

"contenedor de inicio"

E0118 09:57:46.188460       8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
E0118 09:57:56.195553       8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
E0118 09:58:06.202729       8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
Unable to connect to the server: EOF
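
The EOF (rather than a timeout or a refused connection) suggests the connection is cut at the redirection layer, not by the API server. As a debugging step of my own (assuming shell access to the node running redis-master-0; this is not from the Istio docs), the ambient redirection state the CNI logs refer to can be inspected directly:

# on the node running redis-master-0
ipset list                 # should contain the pod IP added by istio-cni
ip route show table 100    # the per-pod route logged by the CNI plugin
ip rule show               # rules steering captured traffic into table 100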

"istio cni"

info    cni istio-cni ambient cmdAdd podName: redis-master-0 podIPs: [{IP:10.0.95.14 Mask:ffffffff}]
info    cni Adding pod 'redis-master-0/test' (a9573a59-6adc-48e8-9d41-a9bd08fdc40c) to ipset
info    cni Adding route for redis-master-0/test: [table 100 10.0.95.14/32 via 192.168.126.2 dev istioin src 10.0.81.204]

"túnel"

INFO xds{id=2}: ztunnel::xds::client: received response type_url="type.googleapis.com/istio.workload.Address" size=1
INFO xds{id=2}: ztunnel::xds::client: received response type_url="type.googleapis.com/istio.workload.Address" size=1
WARN outbound{id=b395fe07254d28f5ec84ffc0850994eb}: ztunnel::proxy::outbound: failed dur=96.464µs err=unknown source: 10.0.82.219
WARN outbound{id=a1c3ba7180c313fa148f1899baa136e4}: ztunnel::proxy::outbound: failed dur=87.892µs err=unknown source: 10.0.82.219
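
The "unknown source" warning means ztunnel received redirected traffic from an IP it has no workload entry for; note that 10.0.82.219 is not the pod IP the CNI plugin registered (10.0.95.14). To see which workloads ztunnel actually knows about, its local admin endpoint can be dumped (assuming the admin server listens on localhost:15000 and exposes /config_dump, as in recent ztunnel builds):

kubectl -n istio-system port-forward ds/ztunnel 15000:15000 &
curl -s localhost:15000/config_dump | grep -B2 -A5 '10.0.82.219'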

"istiodo"

info    ads Push debounce stable[44] 3 for config ServiceEntry/pslabs/redis-master.pslabs.svc.cluster.local and 1 more configs: 100.244777ms since last change, 103.240829ms since last push, full=true
info    ads XDS: Pushing Services:95 ConnectedEndpoints:6 Version:2024-01-18T10:05:13Z/37
info    validationController    Not ready to switch validation to fail-closed: dummy invalid config not rejected
info    validationController    validatingwebhookconfiguration istio-validator-istio-system (failurePolicy=Ignore, resourceVersion=14938048) is up-to-date. No change required.
error   controllers error handling istio-validator-istio-system, retrying (retry count: 1004): webhook is not ready, retry  controller=validation

I also tried using a ServiceEntry and a DestinationRule to let traffic to the Kubernetes API service bypass the Istio mesh.

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: k8s-api
  namespace: test
spec:
  hosts:
    - kubernetes.default.svc.cluster.local
  addresses:
    - 172.20.0.1
  endpoints:
    - address: 172.20.0.1
  exportTo:
    - "*"
  location: MESH_EXTERNAL
  resolution: STATIC
  ports:
    - number: 443
      name: https-k8s
      protocol: HTTPS
---

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: k8s-destrule
  namespace: test
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
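
Since ServiceEntry and DestinationRule target the sidecar data plane, they may simply not influence ztunnel's capture decision in ambient mode. For completeness, one more knob (an assumption on my part, not something the docs promise for this case) would be opting the pod out of the ambient data plane entirely via a pod label, e.g. in the values file:

master:
  podLabels:
    # take this pod out of ambient capture altogether (hypothetical workaround)
    istio.io/dataplane-mode: none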
