You'd want one nginx instance for each site (e.g. forums, main site, rbots), each running on its own port. Each site gets its own docker-compose.yml so that it can be run independently (or moved to a different server on its own if needed).
In the docker-compose.yml for each of these, you'd bind these ports to the Docker network only. Run `ip addr` and find the IP of the `docker0` interface, then bind to that IP. In my case it's 172.17.0.1.
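If you'd rather grab that address in a script than eyeball the `ip addr` output, something like this works on a typical Linux host with iproute2 (`docker0` is Docker's default bridge name; adjust if yours differs):

```shell
# Print the IPv4 address of the docker0 bridge, e.g. 172.17.0.1.
# The awk filter picks the "inet" line and strips the /16 prefix length.
ip -4 addr show docker0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}'
```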
So for example, for forums.massassi.net you'd have:
Code:
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "127.0.0.1:8001:80"
      - "172.17.0.1:8001:80"
    volumes:
      - ./app:/app
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
    restart: always
  php:
    image: php:8-fpm
    volumes:
      - ./app:/app
    restart: always
By binding to 127.0.0.1 and 172.17.0.1, the site can only be accessed from another container on the same Docker network, or via 127.0.0.1 when running it on your development machine.
Then another site, e.g. www.massassi.net would have a similar config:
Code:
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "127.0.0.1:8002:80"
      - "172.17.0.1:8002:80"
    volumes:
      - ./app:/app
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
    restart: always
  php:
    image: php:8-fpm
    volumes:
      - ./app:/app
    restart: always
Each site would bind itself to a different port. Then your reverse proxy would listen on ports 80/443 on the server's IP and forward requests to the relevant port and handle certificates for all sites:
Code:
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    restart: always
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    extra_hosts:
      - "host.docker.internal:host-gateway"
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
    restart: always
A couple of things here: I've changed the nginx command so it reloads every 6 hours (picking up renewed certificates), and the certbot entrypoint runs `certbot renew` every 12 hours. The doubled `$$` is how you escape a literal `$` in docker-compose.yml.
The `extra_hosts` entry, `host.docker.internal:host-gateway`, allows a Docker container to connect to ports on the host using the hostname `host.docker.internal`.
To register certificates, start the containers and run `docker exec -it certbot certbot certonly -d x.massassi.net`; when prompted, choose webroot and set the webroot directory to /var/www/certbot (nginx has its own entry for this directory).
Finally, your nginx.conf for the proxy will handle all the sites, mapping to the relevant ports and handling the SSL certificates:
Code:
## Listen for requests to siteN.com/.well-known/acme-challenge and serve them
## from /var/www/certbot, which is shared by both nginx and certbot
server {
    listen 80;
    server_name site1.com site2.com site3.com www.site1.com www.site2.com www.site3.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    ## Any request to port 80 outside of the acme-challenge directory is
    ## forwarded to the HTTPS version of the site and page requested
    location / {
        return 301 https://$host$request_uri;
    }
}

## Then create a reverse proxy for each site.
## Forward non-www to www if desired (or the other way around, as required).
## In this instance www.site1.com is forwarded to site1.com, keeping the
## requested path via $request_uri.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.site1.com;

    ssl_certificate /etc/letsencrypt/live/site1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        return 301 https://site1.com$request_uri;
    }
}

## A proxy for site1.com
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name site1.com;

    ssl_certificate /etc/letsencrypt/live/site1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://host.docker.internal:8001;
    }
}

## And a proxy for site2.com
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name site2.com;

    ssl_certificate /etc/letsencrypt/live/site2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site2.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://host.docker.internal:8002;
    }
}
This uses host.docker.internal to map to the port bound in the docker-compose.yml for each site above.
For development, you can just copy the folder containing the website and run it locally without worrying that your nginx.conf requires SSL; when it's live on the web, visitors come through the proxy, which adds SSL.
You do need to be careful not to bind the ports to all interfaces, otherwise someone could go to http://[ip]:8001 and bypass the proxy and, by extension, SSL. It can also cause duplicate-content issues if it's public facing. That's why I bound the ports to specific IP addresses above.
You can also do this by referencing an external network in docker-compose.yml and putting all the sites on the same network rather than on different ports, though that makes it slightly less portable: you have to manually create the network on every machine you want to run the site on (or keep different docker-compose.yml files for live and development, which isn't great either).
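For reference, the external-network variant of a per-site compose file would look something like this sketch (the network name `proxy-net` is made up; you'd create it once per host with `docker network create proxy-net`, and the proxy's compose file would join the same network and `proxy_pass` to the service name instead of a host port):

```yaml
# Per-site docker-compose.yml: no published ports at all; the site is only
# reachable over the shared external network, by its service name.
version: '3'
services:
  web:
    image: nginx:latest
    volumes:
      - ./app:/app
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
    restart: always
    networks:
      - proxy-net

networks:
  proxy-net:
    external: true   # must already exist: docker network create proxy-net
```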