
I'm trying to build the following platform on Google Cloud:
2 private (VPC-native) GKE clusters, in 2 different VPCs; to provide internet access, each VPC has a Cloud NAT configured.
What I need is for the 2 GKE clusters to talk to each other, but after peering the VPCs I only get pod-to-pod communication, not pod -> Service or pod -> internal load balancer.
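For reference, a minimal sketch of how the peering was set up (names are illustrative and the exact flags are an assumption; route exchange is automatic, as the peering listing below shows):

$ gcloud compute networks peerings create Shrek01-Shrek02-peering \
    --network=Shrek01 --peer-network=Shrek02 --auto-create-routes
$ gcloud compute networks peerings create Shrek02-Shrek01-peering \
    --network=Shrek02 --peer-network=Shrek01 --auto-create-routes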
Clusters:
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
Shrek01 asia-east1-a 1.16.8-gke.15 <none> g1-small 1.16.8-gke.15 3 RUNNING
Shrek02 asia-east2-a 1.15.9-gke.24 <none> g1-small 1.15.9-gke.24 3 RUNNING
VPCs:
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
Shrek01 CUSTOM REGIONAL
Shrek02 CUSTOM REGIONAL
Subnets:
NAME REGION NETWORK RANGE
Shrek01 asia-east1 Shrek01 192.168.13.0/24
Shrek02 asia-east2 Shrek02 192.168.14.0/24
Peerings:
NAME NETWORK PEER_PROJECT PEER_NETWORK AUTO_CREATE_ROUTES STATE STATE_DETAILS
Shrek01-Shrek01-peering Shrek01 pocprod2-2019001 Shrek02 True ACTIVE [2020-05-16T14:29:57.864-07:00]: Connected.
Shrek02-Shrek01-peering Shrek02 pocprod2-2019001 Shrek01 True ACTIVE [2020-05-16T14:29:57.864-07:00]: Connected.
Firewall rules:
- "Shrek01-peering-ingress"
  {
    "allowed": [
      {
        "IPProtocol": "all"
      }
    ],
    "creationTimestamp": "2020-05-16T16:05:14.829-07:00",
    "description": "",
    "direction": "INGRESS",
    "disabled": false,
    "id": "6807007164648771397",
    "kind": "compute#firewall",
    "logConfig": {
      "enable": false
    },
    "name": "peering-ingress",
    "network": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/networks/Shrek01",
    "priority": 1000,
    "selfLink": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/firewalls/peering-ingress",
    "sourceRanges": [
      "192.168.14.0/24",
      "10.113.64.0/19",
      "10.213.64.0/19"
    ]
  }
- "Shrek02-peering-ingress"
  {
    "allowed": [
      {
        "IPProtocol": "all"
      }
    ],
    "creationTimestamp": "2020-05-16T16:24:28.545-07:00",
    "description": "",
    "direction": "INGRESS",
    "disabled": false,
    "id": "7130188648920500419",
    "kind": "compute#firewall",
    "logConfig": {
      "enable": false
    },
    "name": "Shrek02-peering-ingress",
    "network": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/networks/Shrek02",
    "priority": 1000,
    "selfLink": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/firewalls/Shrek02-peering-ingress",
    "sourceRanges": [
      "192.168.13.0/24",
      "10.113.32.0/19",
      "10.213.32.0/19"
    ]
  }
k8s cluster Shrek01:
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.213.32.1 <none> 443/TCP 85m <none>
nginx LoadBalancer 10.213.60.14 192.168.13.7 80:32612/TCP 92s app=nginx
nginx-cip ClusterIP 10.213.34.24 <none> 80/TCP 93s app=nginx
nginx-np NodePort 10.213.35.31 <none> 80:30444/TCP 92s app=nginx
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-64b4f9bb85-9sjcp 1/1 Running 0 3m34s 10.113.34.11 gke-Shrek01-default-pool-f9ecbfcc-dz9z <none> <none>
nginx-64b4f9bb85-l2bzd 1/1 Running 0 3m34s 10.113.32.5 gke-Shrek01-default-pool-f9ecbfcc-pdll <none> <none>
nginx-64b4f9bb85-xd7kw 1/1 Running 0 3m34s 10.113.33.9 gke-Shrek01-default-pool-f9ecbfcc-v67d <none> <none>
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-Shrek01-default-pool-f9ecbfcc-dz9z Ready <none> 89m v1.16.8-gke.15 192.168.13.4 Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-Shrek01-default-pool-f9ecbfcc-pdll Ready <none> 89m v1.16.8-gke.15 192.168.13.2 Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-Shrek01-default-pool-f9ecbfcc-v67d Ready <none> 89m v1.16.8-gke.15 192.168.13.3 Container-Optimized OS from Google 4.19.109+ docker://19.3.1
- Checks from a pod in Shrek02:
root@nginx-5c66c56f55-8jwv2:/# echo ${MY_POD_IP}
10.113.66.9
# internal load balancer
root@nginx-5c66c56f55-8jwv2:/# nc -vz 192.168.13.7 80
192.168.13.7: inverse host lookup failed: Unknown host
(UNKNOWN) [192.168.13.7] 80 (?) : Connection timed out
# internal load balancer's ClusterIP
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.213.60.14 80
10.213.60.14: inverse host lookup failed: Unknown host
(UNKNOWN) [10.213.60.14] 80 (?) : Connection timed out
# ClusterIP
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.213.34.24 80
10.213.34.24: inverse host lookup failed: Unknown host
(UNKNOWN) [10.213.34.24] 80 (?) : Connection timed out
# NodePort
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.213.35.31 80
10.213.35.31: inverse host lookup failed: Unknown host
(UNKNOWN) [10.213.35.31] 80 (?) : Connection timed out
# Pod IP
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.113.34.11 80
10.113.34.11: inverse host lookup failed: Unknown host
(UNKNOWN) [10.113.34.11] 80 (?) open
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.113.32.5 80
10.113.32.5: inverse host lookup failed: Unknown host
(UNKNOWN) [10.113.32.5] 80 (?) open
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.113.33.9 80
10.113.33.9: inverse host lookup failed: Unknown host
(UNKNOWN) [10.113.33.9] 80 (?) open
Did I miss a step? I can't find the error.
1 Answer:
- I was able to get pod-to-pod, pod-to-NodePort, and pod-to-internal-load-balancer connections working in a similar environment; here is the caveat:
An internal load balancer is only reachable over VPC peering under these circumstances:
- The client virtual machine (VM) instances in the peered network are located in the same region as your internal load balancer.
- You configure global access. With global access configured, client VM instances from any region of the peered VPC network can reach your internal TCP/UDP load balancer. Global access is not supported for internal HTTP(S) load balancing.
To use an internal load balancer over VPC peering from a different region, you need global access, and you have two options:
- If you know the name of the internal load balancer's forwarding rule, you can use the following command:
$ gcloud compute forwarding-rules update <LB_NAME> \
--region=<REGION> \
--allow-global-access
- And verify it with:
gcloud compute forwarding-rules describe <LB_NAME> \
--region=<REGION> \
--format="get(name,region,allowGlobalAccess)"
- Another easy way to change it, if you have only a few internal LBs in the region, is through the Load balancing page in the GCP Console:
- In the Name column, click the internal TCP/UDP load balancer of the cluster's region (after clicking it you will see the subnet name, as in my example below):
- Then click EDIT.
- Click Frontend configuration.
- Click the pencil icon to edit.
- Under Global access, select Enable.
- Click Done.
- Click Update to apply the change to the rule.
- Wait until the rule finishes being applied.
- After changing this, I was able to reach the internal LB in Shrek01 from Shrek02 (I'll give the example below).
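Alternatively, on GKE 1.16+ you can declare global access on the Service itself with the networking.gke.io/internal-load-balancer-allow-global-access annotation, so the setting isn't lost if the forwarding rule gets recreated. A minimal sketch, assuming the nginx Service from your question:

# Annotate the existing internal LB Service; GKE reconciles the
# forwarding rule with global access enabled:
$ kubectl annotate service nginx \
    networking.gke.io/internal-load-balancer-allow-global-access="true" \
    --overwrite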
NOTE:
ClusterIP: exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster, so you will never be routed to it from outside, peering or not.
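Putting the caveat and the note together, this is what each of your targets should give you from a pod in Shrek02 (addresses taken from your listings; the node IP + NodePort case is demonstrated in the reproduction below):

# pod IP: routed by VPC peering, works as-is
$ nc -vz 10.113.34.11 80
# node IP + NodePort: the node subnet is routed by peering, works as-is
$ nc -vz 192.168.13.4 30444
# internal LB IP: works only from the same region, or with global access
$ nc -vz 192.168.13.7 80
# ClusterIP (of any Service type): never reachable from outside the cluster
$ nc -vz 10.213.34.24 80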
Reproduction:
- I created two VPCs following your parameters:
$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
shrek01 europe-west1-b 1.16.8-gke.15 XX.XXX.XX.XXX g1-small 1.16.8-gke.15 3 RUNNING
shrek02 europe-west2-b 1.15.9-gke.24 XXX.XXX.XX.XXX g1-small 1.15.9-gke.24 3 RUNNING
$ gcloud compute networks subnets list
NAME REGION NETWORK RANGE
shrek01 europe-west1 shrek01 192.168.13.0/24
shrek02 europe-west2 shrek02 192.168.14.0/24
$ gcloud compute networks peerings list-routes sh1-sh2 --network=shrek01 --region europe-west1 --direction=INCOMING
DEST_RANGE TYPE NEXT_HOP_REGION PRIORITY STATUS
192.168.14.0/24 SUBNET_PEERING_ROUTE europe-west2 1000 accepted
10.229.0.0/20 SUBNET_PEERING_ROUTE europe-west2 1000 accepted
10.36.0.0/14 SUBNET_PEERING_ROUTE europe-west2 1000 accepted
$ gcloud compute networks peerings list-routes sh2-sh1 --network=shrek02 --region europe-west2 --direction=INCOMING
DEST_RANGE TYPE NEXT_HOP_REGION PRIORITY STATUS
192.168.13.0/24 SUBNET_PEERING_ROUTE europe-west1 1000 accepted
10.154.0.0/20 SUBNET_PEERING_ROUTE europe-west1 1000 accepted
10.24.0.0/14 SUBNET_PEERING_ROUTE europe-west1 1000 accepted
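One more thing worth checking: peering exchanges routes but not firewall rules, so each VPC needs its own ingress rule for the peer's subnet and pod ranges (as in your Shrek01/Shrek02 peering-ingress rules). A sketch with the ranges from this reproduction; allowing all protocols is an assumption you may want to narrow:

$ gcloud compute firewall-rules create shrek01-peering-ingress \
    --network=shrek01 --direction=INGRESS --allow=all \
    --source-ranges=192.168.14.0/24,10.36.0.0/14,10.229.0.0/20
$ gcloud compute firewall-rules create shrek02-peering-ingress \
    --network=shrek02 --direction=INGRESS --allow=all \
    --source-ranges=192.168.13.0/24,10.24.0.0/14,10.154.0.0/20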
After making sure my nodes could ping each other across the VPCs, I tested the ingress and the connections with the following YAMLs.
hello-1.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-1
  template:
    metadata:
      labels:
        app: hello-1
    spec:
      containers:
      - name: hello-1
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-1-svc
spec:
  type: NodePort
  selector:
    app: hello-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
hello-2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-2
  template:
    metadata:
      labels:
        app: hello-2
    spec:
      containers:
      - name: hello-2
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-2-svc
spec:
  type: NodePort
  selector:
    app: hello-2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
hello-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-1-svc
          servicePort: 80
      - path: /v2
        backend:
          serviceName: hello-2-svc
          servicePort: 80
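I applied the manifests in shrek01 the usual way; note that the ingress-nginx controller's own Service has to be an internal load balancer for its address to land in the subnet range, as 192.168.13.5 does below. A sketch of both steps, assuming the stock ingress-nginx install (the annotation may require the Service's load balancer to be re-provisioned to take effect):

$ kubectl apply -f hello-1.yaml -f hello-2.yaml -f hello-ingress.yaml
# Ask GKE for an internal TCP/UDP LB instead of an external one:
$ kubectl annotate service ingress-nginx-controller -n ingress-nginx \
    cloud.google.com/load-balancer-type="Internal" --overwrite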
- Take a look at the pod names and IPs, the node IPs, and the load balancer/NodePort ports.
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-1-84d5994678-dx8dv 1/1 Running 0 140m 10.24.0.9 gke-shrek01-default-pool-5ffc38d7-bz35 <none> <none>
hello-1-84d5994678-t74mn 1/1 Running 0 14m 10.24.1.3 gke-shrek01-default-pool-5ffc38d7-70sk <none> <none>
hello-1-84d5994678-zq7t2 1/1 Running 0 14m 10.24.2.9 gke-shrek01-default-pool-5ffc38d7-zfj6 <none> <none>
hello-2-5c4f554ccc-b8j6f 1/1 Running 0 140m 10.24.0.10 gke-shrek01-default-pool-5ffc38d7-bz35 <none> <none>
hello-2-5c4f554ccc-km4ph 1/1 Running 0 13m 10.24.1.4 gke-shrek01-default-pool-5ffc38d7-70sk <none> <none>
hello-2-5c4f554ccc-z4f6n 1/1 Running 0 13m 10.24.2.10 gke-shrek01-default-pool-5ffc38d7-zfj6 <none> <none>
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-1-svc NodePort 10.154.13.186 <none> 80:32186/TCP 140m
hello-2-svc NodePort 10.154.4.214 <none> 80:32450/TCP 140m
$ kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.154.10.104 192.168.13.5 80:30112/TCP,443:32156/TCP 4h20m
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
hello-ingress * 192.168.13.5 80 98m
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-shrek01-default-pool-5ffc38d7-70sk Ready <none> 2d19h v1.16.8-gke.15 192.168.13.3 XX.XXX.XX.XXX Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-shrek01-default-pool-5ffc38d7-bz35 Ready <none> 2d19h v1.16.8-gke.15 192.168.13.2 XXX.XXX.XX.XXX Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-shrek01-default-pool-5ffc38d7-zfj6 Ready <none> 2d19h v1.16.8-gke.15 192.168.13.4 XX.XXX.X.XXX Container-Optimized OS from Google 4.19.109+ docker://19.3.1
Now I'll connect to the shrek02 cluster, create a pod, and install curl:
project@cloudshell:~$ kubectl run ubuntu --image=ubuntu -it -- /bin/bash
root@ubuntu:/# apt update
root@ubuntu:/# apt install curl
root@ubuntu:/# exit
project@cloudshell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ubuntu 1/1 Running 1 2m51s 10.36.1.6 gke-shrek02-default-pool-a7a08ac8-0lrz <none> <none>
- You can see that we are in shrek02; now let's test the connection to shrek01's resources. Keep in mind that kube-dns records are only available inside each cluster, so we'll connect using the IPs:
project@cloudshell:~$ kubectl exec -it ubuntu -- /bin/bash
###Hello-1 POD:
root@ubuntu:/# curl 10.24.0.9:8080
Hello, world!
Version: 1.0.0
Hostname: hello-1-84d5994678-dx8dv
###Hello-2 POD:
root@ubuntu:/# curl 10.24.1.4:8080
Hello, world!
Version: 2.0.0
Hostname: hello-2-5c4f554ccc-km4ph
### HELLO-1-SVC USING NODE IP + NODEPORT:
root@ubuntu:/# curl 192.168.13.3:32186
Hello, world!
Version: 1.0.0
Hostname: hello-1-84d5994678-t74mn
### HELLO-2-SVC USING ANOTHER NODE IP + NODEPORT:
root@ubuntu:/# curl 192.168.13.2:32450
Hello, world!
Version: 2.0.0
Hostname: hello-2-5c4f554ccc-km4ph
### NOW LET'S TEST OUR INGRESS which routes "/" to hello-1 and "/v2" to hello-2:
root@ubuntu:/# curl 192.168.13.5/
Hello, world!
Version: 1.0.0
Hostname: hello-1-84d5994678-dx8dv
root@ubuntu:/# curl 192.168.13.5/v2
Hello, world!
Version: 2.0.0
Hostname: hello-2-5c4f554ccc-b8j6f
I hope this helps you troubleshoot your environment; if you have any questions, let me know in the comments.