Nginx reverse proxy loads different sites on refresh

I want to host multiple websites on a single server behind an nginx reverse proxy, following this tutorial: https://www.datanovia.com/en/lessons/wie-hosten-sie-mehrere-https-websites-auf-einem-server/

The nginx proxy and each website are started separately with Docker. But every time I refresh one of the websites, the content of a different website is loaded. For example:

  • Loading websiteone.tk for the first time shows the content of website ONE.

  • Refreshing websiteone.tk shows the content of website TWO.

  • Refreshing websiteone.tk again shows the content of website THREE.

  • Loading websitetwo.tk for the first time shows the content of website TWO.

  • Refreshing websitetwo.tk shows the content of website THREE.

I am a beginner with both nginx and Docker, so I cannot tell whether the problem is on the nginx side or the Docker side. Could someone please give me some advice? Thank you.

The nginx-proxy default.conf is:

map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
server_names_hash_bucket_size 128;
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 '"$upstream_addr"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
error_log /dev/stderr;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        server_tokens off;
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        server_tokens off;
        listen 443 ssl http2;
        access_log /var/log/nginx/access.log vhost;
        return 503;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/default.crt;
        ssl_certificate_key /etc/nginx/certs/default.key;
}


# websiteone.tk
upstream websiteone.tk {
## Can be connected with "nginx-proxy" network
# websiteonetk_my-app_1
server 192.168.32.8:80;
}

server {
        server_name websiteone.tk;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location ^~ /.well-known/acme-challenge/ {
                auth_basic off;
                auth_request off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
                      }
        location / {
                return 301 https://$host$request_uri;
        }
}
server {
        server_name websiteone.tk;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/websiteone.tk.crt;
        ssl_certificate_key /etc/nginx/certs/websiteone.tk.key;
        ssl_dhparam /etc/nginx/certs/websiteone.tk.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/websiteone.tk.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        
        location / {
                        proxy_pass http://websiteone.tk;
        }
}


# websitetwo.tk
upstream websitetwo.tk {
## Can be connected with "nginx-proxy" network
# websitetwotk_my-app_1
server 192.168.32.13:80;
}

server {
        server_name websitetwo.tk;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location ^~ /.well-known/acme-challenge/ {
                auth_basic off;
                auth_request off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}
server {
        server_name websitetwo.tk;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/websitetwo.tk.crt;
        ssl_certificate_key /etc/nginx/certs/websitetwo.tk.key;
        ssl_dhparam /etc/nginx/certs/websitetwo.tk.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/websitetwo.tk.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        
        location / {
                        proxy_pass http://websitetwo.tk;
        }
}

# websitethree.tk
upstream websitethree.tk {
## Can be connected with "nginx-proxy" network
# websitethreetk_my-app_1
server 192.168.32.3:80;
}
server {
        server_name websitethree.tk;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location ^~ /.well-known/acme-challenge/ {
                auth_basic off;
                auth_request off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}

server {
        server_name websitethree.tk;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/websitethree.tk.crt;
        ssl_certificate_key /etc/nginx/certs/websitethree.tk.key;
        ssl_dhparam /etc/nginx/certs/websitethree.tk.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/websitethree.tk.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                        proxy_pass http://websitethree.tk;
        }
}

The docker-compose file for the nginx proxy is:

version: '3.6'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro

  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
networks:
  default:
    external:
      name: nginx-proxy

The nginx default.conf for one of the websites is:

server {
    root /application2;
    index index.php;

    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}

The docker-compose.yml for one of the websites is shown below.

The working directory of website one is /application1, the working directory of website two is /application2, and so on.

version: '3.1'
services:
    my-app:
        image: 'nginx:alpine'
        volumes:
            - '.:/application2'
            - './phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf'
        restart: always
        environment:
            - VIRTUAL_HOST=websitetwo.tk
            - VIRTUAL_PORT=80
            - LETSENCRYPT_HOST=websitetwo.tk
        expose:
            - 80
    mailhog:
        image: 'mailhog/mailhog:latest'
        ports:
            - '21001:8025'

    php-fpm:
        build: phpdocker/php-fpm
        working_dir: /application2
        volumes:
            - '.:/application2'
            - './phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/8.1/fpm/conf.d/99-overrides.ini'
networks:
    default:
        external:
            name: nginx-proxy

Answer 1

I figured out the answer myself. In case anyone runs into the same situation: an independent network has to be set up in each website's docker-compose.yml.

First, I change the name of the nginx-proxy network from 'default' to 'proxy'. Then each website uses its own independent network (I named it 'app') to link the services used inside that stack. The site's nginx service (my-app) must also be attached to the proxy network.

Website docker-compose.yml:

version: '3.1'
services:
    my-app:
        networks: 
            - app
            - proxy
        image: 'nginx:alpine'
        volumes:
            - '.:/application2'
            - './phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf'
        restart: always
        environment:
            - VIRTUAL_HOST=websitetwo.tk
            - VIRTUAL_PORT=80
            - LETSENCRYPT_HOST=websitetwo.tk
        expose:
            - 80
    mailhog:
        networks: 
            - app
        image: 'mailhog/mailhog:latest'
        ports:
            - '21001:8025'

    php-fpm:
        networks: 
            - app
        build: phpdocker/php-fpm
        working_dir: /application2
        volumes:
            - '.:/application2'
            - './phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/8.1/fpm/conf.d/99-overrides.ini'
networks:
    proxy:
        external:
            name: nginx-proxy
    app:
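
The same pattern applies to the other sites. As a rough sketch for websiteone.tk (assuming its original compose file mirrors the one above, just with /application1 and its own domain; mailhog omitted for brevity), it ends up looking like this:

version: '3.1'
services:
    my-app:
        networks:
            - app       # private network for this site's own services
            - proxy     # shared network so nginx-proxy can reach this container
        image: 'nginx:alpine'
        volumes:
            - '.:/application1'
            - './phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf'
        restart: always
        environment:
            - VIRTUAL_HOST=websiteone.tk
            - VIRTUAL_PORT=80
            - LETSENCRYPT_HOST=websiteone.tk
        expose:
            - 80
    php-fpm:
        networks:
            - app
        build: phpdocker/php-fpm
        working_dir: /application1
        volumes:
            - '.:/application1'
            - './phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/8.1/fpm/conf.d/99-overrides.ini'
networks:
    proxy:
        external:
            name: nginx-proxy
    app:

Because the app network is not marked external, each compose project creates its own copy of it (prefixed with the project name), so the backends of the different sites stay isolated from one another; only the external nginx-proxy network is shared with the reverse proxy.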
