
After installing Istio CNI for the ambient mesh, I ran into the problem of Istio CNI blocking traffic from application init containers. I am aware of the workaround suggested in the documentation for this issue (https://istio.io/latest/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers), but those instructions did not help me; the plugin simply ignores them.
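For reference, these are the two mitigations that page describes, as I understand it: run the init container with the proxy's UID 1337, or exclude the destination from traffic redirection via pod annotations. Both are reflected in the values file further below (172.20.0.1 is my cluster's kube-apiserver ClusterIP):

# Option 1: run the init container with the proxy's UID so its traffic is not redirected
securityContext:
  runAsUser: 1337

# Option 2: exclude the kube-apiserver ClusterIP / port from outbound redirection
podAnnotations:
  traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.20.0.1/32"
  traffic.sidecar.istio.io/excludeOutboundPorts: "443"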
I tried running the Redis Helm chart with an init container, using the following values file:
helm install redis bitnami/redis --version 18.2.0 --namespace test --values <(cat redis-values.yaml)
<redis-values.yaml>
image:
  registry: quay.io
  repository: opstree/redis
  tag: latest
master:
  livenessProbe:
    enabled: true
    initialDelaySeconds: 40
    periodSeconds: 15
    timeoutSeconds: 15
    successThreshold: 1
    failureThreshold: 5
  podSecurityContext:
    enabled: true
    fsGroup: 1337
  podAnnotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    traffic.sidecar.istio.io/includeOutboundIPRanges: "*"
    traffic.sidecar.istio.io/includeInboundPorts: "*"
    traffic.sidecar.istio.io/excludeOutboundPorts: "443"
    traffic.sidecar.istio.io/excludeInboundPorts: "443"
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.20.0.1/32"
  initContainers:
    - name: init-celery-workers-restart
      image: bitnami/kubectl
      command: ['sh', '-c', 'sleep 30; kubectl delete pods -n test -l app.kubernetes.io/component=celery-worker']
      securityContext:
        runAsUser: 1337
      resources:
        requests:
          cpu: 80m
          memory: 60Mi
        limits:
          cpu: 80m
          memory: 60Mi
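To verify that the chart actually propagates these settings onto the pod spec (rather than silently dropping them), a check like the following can be used; the pod name redis-master-0 comes from the CNI logs further below:

kubectl get pod redis-master-0 -n test -o jsonpath='{.metadata.annotations}'
kubectl get pod redis-master-0 -n test -o jsonpath='{.spec.initContainers[*].securityContext}'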
When the pod starts, I see these logs in the init container's output; at the same time, istio-cni, ztunnel and istiod show the following logs.
"컨테이너 초기화"
E0118 09:57:46.188460 8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
E0118 09:57:56.195553 8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
E0118 09:58:06.202729 8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
Unable to connect to the server: EOF
"이스티오 CNI"
info cni istio-cni ambient cmdAdd podName: redis-master-0 podIPs: [{IP:10.0.95.14 Mask:ffffffff}]
info cni Adding pod 'redis-master-0/test' (a9573a59-6adc-48e8-9d41-a9bd08fdc40c) to ipset
info cni Adding route for redis-master-0/test: [table 100 10.0.95.14/32 via 192.168.126.2 dev istioin src 10.0.81.204]
"즈터널"
INFO xds{id=2}: ztunnel::xds::client: received response type_url="type.googleapis.com/istio.workload.Address" size=1
INFO xds{id=2}: ztunnel::xds::client: received response type_url="type.googleapis.com/istio.workload.Address" size=1
WARN outbound{id=b395fe07254d28f5ec84ffc0850994eb}: ztunnel::proxy::outbound: failed dur=96.464µs err=unknown source: 10.0.82.219
WARN outbound{id=a1c3ba7180c313fa148f1899baa136e4}: ztunnel::proxy::outbound: failed dur=87.892µs err=unknown source: 10.0.82.219
"이스티오드"
info ads Push debounce stable[44] 3 for config ServiceEntry/pslabs/redis-master.pslabs.svc.cluster.local and 1 more configs: 100.244777ms since last change, 103.240829ms since last push, full=true
info ads XDS: Pushing Services:95 ConnectedEndpoints:6 Version:2024-01-18T10:05:13Z/37
info validationController Not ready to switch validation to fail-closed: dummy invalid config not rejected
info validationController validatingwebhookconfiguration istio-validator-istio-system (failurePolicy=Ignore, resourceVersion=14938048) is up-to-date. No change required.
error controllers error handling istio-validator-istio-system, retrying (retry count: 1004): webhook is not ready, retry controller=validation
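The "unknown source: 10.0.82.219" warnings from ztunnel look like it does not recognize the source workload at all. To see which pod owns that IP, a plain pod listing can be grepped for the address (nothing Istio-specific):

kubectl get pods -A -o wide | grep 10.0.82.219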
I also tried to allow traffic to the Kubernetes API service to bypass the Istio mesh by using a ServiceEntry and a DestinationRule:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: k8s-api
  namespace: test
spec:
  hosts:
    - kubernetes.default.svc.cluster.local
  addresses:
    - 172.20.0.1
  endpoints:
    - address: 172.20.0.1
  exportTo:
    - "*"
  location: MESH_EXTERNAL
  resolution: STATIC
  ports:
    - number: 443
      name: https-k8s
      protocol: HTTPS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: k8s-destrule
  namespace: test
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
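For completeness, this is roughly how the two manifests are applied and checked (the file names are just placeholders for the YAML above):

kubectl apply -n test -f k8s-api-serviceentry.yaml -f k8s-destrule.yaml
kubectl get serviceentry,destinationrule -n test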