
This is the static archive of the Massassi Forums. The forums are closed indefinitely. Thanks for all the memories!

You can also download Super Old Archived Message Boards from when Massassi first started.

"View" counts are as of the day the forums were archived, and will no longer increase.

Anyone here really familiar with Docker?
2021-10-05, 11:09 AM #1
I use docker at work to package up a couple of programs (and their dependencies). I can write Dockerfiles and build locally. I am missing a piece, however: when I have a locally-built package/container, I need to get it up to a server. At work, this is handled by the Ops team. But for the New Massassi I need to learn how to do it.

Does anyone here have experience with docker and actual deployments of the containers?

I read that Docker Hub has a registry, but I have to pay for it. I read that both Amazon and Google have platforms to run docker containers but it's pretty expensive in comparison to the linode-based hosting I use now. Linode has kubernetes available for not too expensive but I haven't used that either, and it seems overly complicated for what I need (just run a single container for the dynamic parts of massassi -- although I guess at some point in the future I may create a docker container for the forums as well). It also seems weird to upload to some registry (that may or may not be public?) only to pull it down to the server; like, can't I just go directly somehow?
2021-10-05, 3:30 PM #2
You can host the image on Docker Hub or whatever, but I wouldn't bother.

Just copy the Dockerfile to the server and build/run it there the same way you do locally. You can do that on any server with Docker installed (and the service running). The only difference is that on a server you will want to use `docker run --restart always [image]` (the `--restart` flag goes before the image name). That way, if the main process in the container (defined in the entrypoint) crashes, the docker daemon is restarted, or the server is rebooted, the container will be restarted automatically.
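As a rough sketch, the whole "build on the server" deploy could look like this (server name, paths, and image/container names are all placeholders):

```
# Copy the build context (Dockerfile + app code) to the server
scp -r ./myapp user@server:~/myapp

# On the server: build the image, then run it with a restart policy.
# Note: docker options go before the image name; anything after the
# image name is passed as arguments to the container's entrypoint.
ssh user@server
cd ~/myapp
docker build -t myapp:latest .
docker run -d --restart always --name myapp -p 8000:8000 myapp:latest
```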

Personally, I prefer using docker compose: copy docker-compose.yml and any Dockerfiles it references to the server, then run `docker-compose up -d` (or `docker compose up -d` if your server is running a Docker version that supports the Compose v2 spec).
TheJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJkWho
SaysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSaysNiTh
eJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSa
ysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJ
k
WhoSaysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSays
N
iTheJkWhoSaysNiTheJkWhoSaysNiTheJkWhoSaysNiTheJkW
2021-10-05, 7:08 PM #3
That actually helps a lot, I appreciate it! Right now it's just the one container. I've been toying with the idea of putting the SQL server (Postgres) into its own container as well, but I'm not sure whether having the db in a container, referencing some external filesystem mount, has any negative performance implications. I'll look into docker compose; for some reason I thought it was discontinued.
2021-10-07, 9:28 AM #4
Yeah, the messaging around docker-compose is confusing. The standalone tool, `docker-compose`, is being deprecated, but the functionality it provides is being merged into docker itself. For 99% of cases this just means replacing `docker-compose up` with `docker compose up`. And for the short-to-mid term, `docker-compose` will just alias `docker compose` on most systems, so it won't even be a noticeable change.

You probably do want to put each service in its own container. In theory, unless you're on a Windows or macOS server (and why you would commit such an atrocity is beyond me anyway), the I/O penalty of a docker volume is basically negligible (see here: https://www.percona.com/blog/2016/02/11/measuring-docker-io-overhead/ ).

By default, however, you'd connect to your database over TCP between containers, which is marginally slower than using a socket (though I'd wager unnoticeable outside of artificial benchmarks). You can make the socket available to all your containers, though, by mounting the folder that contains the socket as a volume in both containers (e.g. the PHP container and the postgres container) and pointing both at the same socket.
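A minimal sketch of that shared-socket setup, assuming Postgres's default socket directory (`/var/run/postgresql`) and made-up service names:

```
version: '3'
services:
    db:
        image: postgres:14
        volumes:
            # postgres writes its unix socket to /var/run/postgresql by default
            - sockets:/var/run/postgresql
            - dbdata:/var/lib/postgresql/data
        restart: always
    php:
        image: php:8-fpm
        volumes:
            # same named volume, so this container sees the same socket file
            - sockets:/var/run/postgresql
        restart: always
volumes:
    sockets:
    dbdata:
```

The app then connects to the socket path instead of a TCP host/port.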

I wrote an article here which shows how to set up docker-compose: https://www.sitepoint.com/docker-php-development-environment/ it's aimed at people below your skill level but some of the sample config might be useful.
2021-10-07, 8:14 PM #5
Thanks again, I'll read through it! The only container I have so far is the python/django/gunicorn application server. It does the dynamic stuff. I was planning to run nginx natively to serve the static stuff. The last piece is the database server which I was planning to just install directly as well, but it seems like it's easy enough to make a container for it (backed by a data volume so I don't lose data when the container stops). I'll try all that out and see how it goes. Thanks again!
2021-10-09, 5:20 AM #6
Quote:
I was planning to run nginx natively to serve the static stuff.


I'd recommend just putting that in docker as well. If everything is in docker, and all related/interconnected services are described in the same docker-compose.yml then you can copy/paste the entire folder to a new server, and run `docker compose up` and everything will work exactly as it was on the last server. Outside of docker, you'd have to configure NGINX manually on each server, while keeping in mind that other websites might also be configured in nginx. With docker, each website can have its own nginx instance with its own nginx config and never tread on each other's toes while being entirely independently portable.
2021-10-09, 11:45 AM #7
Alright, I'll give it a try.
2021-10-13, 9:04 AM #8
Originally posted by Ni:
I'd recommend just putting that in docker as well. If everything is in docker, and all related/interconnected services are described in the same docker-compose.yml then you can copy/paste the entire folder to a new server, and run `docker compose up` and everything will work exactly as it was on the last server. Outside of docker, you'd have to configure NGINX manually on each server, while keeping in mind that other websites might also be configured in nginx. With docker, each website can have its own nginx instance with its own nginx config and never tread on each other's toes while being entirely independently portable.


How does that work with multiple websites on the same host? Don't you need some webserver anyways as a proxy if you don't have enough IPs for every application?
Sorry for the lousy German
2021-10-13, 11:27 AM #9
Quote:
How does that work with multiple websites on the same host? Don't you need some webserver anyways as a proxy if you don't have enough IPs for every application?


Yeah, you can set up nginx as a reverse proxy. For example, one dockerized nginx instance that accepts the connections and then maps them to the other instances. E.g. you run one website on 127.0.0.1:8080 and one on 127.0.0.1:8081, then set up nginx using server_name blocks to forward requests to the relevant local port.

I benchmarked the overhead of this once and the results were so close as to be meaningless (sometimes the proxied version even came out ahead) [edit: I stuck the code up here: https://github.com/TRPB/reverseproxy-benchmark , run `docker-compose up` then `php response-time-test.php`; it'll make 500 requests to the proxied server and to the one accessed directly, and compare the total times of the two].

With this approach, you can use the nginx config for the proxy to handle SSL for all your sites, then when you develop locally you just use the non-ssl container that's bound to port 8080 (or whatever) on the server. Then you don't have to worry about certbot or SSL when developing the site locally.
2021-10-13, 5:32 PM #10
So in my case I will have nginx serving static assets and also acting as a reverse proxy for the dynamic stuff. You are suggesting putting nginx in a container as well. So that means that certbot or whoever has to run inside that container as well right? Or if not, how would you recommend handling that? I think I have about 6 or 7 certs for various massassi subdomains.
2021-10-14, 9:29 AM #11
You'd want one nginx instance for each site (e.g. forums, main site, rbots) and each would run on its own port. Each site will have its own docker-compose.yml so that it can be run independently (or moved to a different server on its own if needed).

In the docker-compose.yml for each of these you'd bind these to the docker network only. Run `ip addr` and find the IP of `docker0`, then bind to that IP. In my case it's 172.17.0.1.
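For reference, finding the docker0 address might look like this (the exact address will vary per machine):

```
# Show the host-side address of the default docker bridge
ip addr show docker0 | grep 'inet '
```

On a default install this typically prints something in the 172.17.0.0/16 range.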

So for example, for forums.massassi.net you'd have:


Code:
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "127.0.0.1:8001:80"
            - "172.17.0.1:8001:80"
        volumes:
            - ./app:/app
            - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
        restart: always
    php:
        image: php:8-fpm
        volumes:
            - ./app:/app
        restart: always


By binding to 127.0.0.1 and 172.17.0.1 it can only be accessed from another container on the same network, or via 127.0.0.1 when running it on your development machine.


Then another site, e.g. www.massassi.net would have a similar config:


Code:
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "127.0.0.1:8002:80"
            - "172.17.0.1:8002:80"
        volumes:
            - ./app:/app
            - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
        restart: always
    php:
        image: php:8-fpm
        volumes:
            - ./app:/app
        restart: always



Each site would bind itself to a different port. Then your reverse proxy would listen on ports 80/443 on the server's IP and forward requests to the relevant port and handle certificates for all sites:

Code:
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "80:80"
            - "443:443"
        restart: always
        volumes:
            - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
            - ./certbot/conf:/etc/letsencrypt
            - ./certbot/www:/var/www/certbot
        command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
        extra_hosts:
            - "host.docker.internal:host-gateway"
    certbot:
        image: certbot/certbot
        container_name: certbot
        volumes:
            - ./certbot/conf:/etc/letsencrypt
            - ./certbot/www:/var/www/certbot
        entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
        restart: always


A couple of things to note here: I've changed the commands so that nginx reloads every 6 hours (to pick up renewed certificates) and certbot runs `certbot renew` every 12 hours.

And `extra_hosts: - "host.docker.internal:host-gateway"` allows a docker container to connect to ports on the host machine using the hostname host.docker.internal.


To register certificates you'd start the container and run `docker exec -it certbot certbot certonly -d x.massassi.net`, and when asked use webroot and set the webroot directory to /var/www/certbot (nginx will have its own entry for this directory).

Finally, your nginx.conf for the proxy will handle all the sites, mapping to the relevant ports and handling the SSL certificates:




Code:
## Listen to requests in siteN.com/.well-known/acme-challenge and point it to the /var/www/certbot, shared by both nginx and certbot
server {
    listen 80;

    server_name site1.com site2.com site3.com www.site1.com www.site2.com www.site3.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    ## Any request to port 80 outside of the acme-challenge directory should be forwarded to the HTTPS version of the site and page requested
    location / {
       return 301 https://$host$request_uri;
    }
}


## Then create a reverse proxy for each site


## Forward non-www to www if desired (or the other way around, as required)
## In this instance www.site1.com is forwarded to site1.com

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name www.site1.com;

    ssl_certificate /etc/letsencrypt/live/site1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;


    location / {
        return 301 https://site1.com;
    }
}


## A proxy for site1.com 

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name site1.com;


    ssl_certificate /etc/letsencrypt/live/site1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://host.docker.internal:8001;
    }
}

## And a proxy for site2.com


server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name site2.com;

    ssl_certificate /etc/letsencrypt/live/site2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site2.com/privkey.pem;

    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://host.docker.internal:8002;
    }
}


This uses host.docker.internal to map to the port bound in the docker-compose.yml for each site above.

For development, you can just copy the folder containing the website and run it locally without worrying that your nginx.conf requires SSL, when it's live on the web, visitors come through the proxy which adds SSL.

You do need to be careful not to bind the ports directly, otherwise someone could go to http://[ip]:8001 and bypass the proxy, and by extension SSL. It can also cause duplicate-content issues if it's public facing. That's why I bound the ports to specific IP addresses above.
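One way to sanity-check the bindings on the server (assuming `ss` from iproute2 is available; the port number is just the example from above):

```
# List listening TCP sockets with their bind addresses.
# The per-site nginx instances should show 127.0.0.1:8001 and
# 172.17.0.1:8001 -- never 0.0.0.0:8001, which would be world-reachable.
ss -tln | grep ':8001'
```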


You can also do this by referencing an external network in docker-compose.yml and putting all the sites on the same network rather than different ports, though that makes it slightly less portable as you need to manually create the network on every machine you want to run the site (or have different docker-compose.yml files for live and development, which isn't great either).
2021-10-14, 10:17 AM #12
I think I'm understanding what you're saying, but isn't that a whole bunch of layers of nginx when really only one is needed? You're saying have a separate instance of nginx inside every site's container, but then also have a main/proxy nginx that forwards requests to the individual site containers. It seems like overkill?

Code:
browser -> massassi.net proxy nginx -> massassi.net app nginx -> gunicorn -> actual python code
browser -> massassi.net proxy nginx -> forums.massassi.net app nginx -> actual php code


Etc. Am I understanding it correctly?
2021-10-14, 12:25 PM #13
I'm not familiar with gunicorn, but if it has its own HTTP server then you don't need to put a second nginx instance in front of it; just have it listen on its own port and have the reverse proxy forward connections to it.

Performance-wise, the overhead is slightly more memory, because you're running one web server instance per site plus one for the proxy (though nginx runs multiple worker processes anyway, and in this setup each instance does less and uses less memory than a monolithic one would, because fewer rules are configured).

Yes, you could do this with a single nginx instance, but then running one of the sites elsewhere (e.g. your development machine) is more difficult because you'd need the same configuration there, minus the configuration for all the other sites and the SSL certificates. If one of the sites makes heavy use of `.htaccess` and only works in Apache, you can have that one site on Apache and the others on nginx. You might have a site using node.js which creates its own server and isn't based on nginx at all, or one site which only works on PHP 5 and another using PHP 7. While all that is possible without docker, it's a lot easier with every site in its own container with its own software stack.

By having a reverse proxy, you can host different sites on different technology stacks without worrying about them all being bound to port 80/443. Of course, if your server has multiple IPs you can skip that step and just bind each site to its own IP address.

It might be overkill in this instance, but it creates a nice workflow because each site is packaged up as its own application. You could, for example, swap out nginx for Apache on one site along with changing the code, push the whole application to the server, and everything still works.
2021-10-14, 1:11 PM #14
That makes sense, I think the trade-off is worth it. Yeah gunicorn is a web server for serving python apps. So yes, skipping nginx in this case makes sense, but then I need another container for just the static assets. I'll think about how I want to configure it. I really really appreciate all the info, this is super useful.

At work in a couple of my projects I'm using docker because the servers all run ancient centos and we need way more modern stuff (to run headless chrome and a bunch of nodejs crap); but once a container is built it gets replicated/installed hundreds or thousands of times. So it's the same container over and over vs. a bunch of different container types.
2021-12-01, 10:52 PM #15
Ni, I would like to thank you again for all your help. I struggled a lot with this docker stuff but I think I have the meat of it at this point. I still need to work through the certificate stuff, but I'm going to wait until I add "services" for the forums and tacc. This is what I have so far:

https://github.com/saberworks/massassi/blob/main/docker-compose.yml

I had absolutely no luck using the docker internal ip for proxy_pass and database connections. I found that if I use the "service name" as the hostname to connect to, it seemed to always use the docker internal network. I verified that I can't access the service via the ip/port unless I'm logged into a docker container on the same docker network (like, accessing from browser on my workstation directly didn't work at all). I think this is actually a bit more readable and a bit more portable.
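For anyone following along: the service-name approach works because compose puts every service in a docker-compose.yml on a shared network with an internal DNS entry for each service name. A sketch, using the service names from this thread (the build path and port are assumptions):

```
version: '3'
services:
    massassi-web:
        image: nginx:latest
        ports:
            - "127.0.0.1:8001:80"
        # Inside this container, nginx can proxy_pass to
        # http://massassi-django:8000 -- "massassi-django" resolves via
        # docker's internal DNS on the compose network.
        restart: always
    massassi-django:
        build: ./massassi-django
        # No ports published: only reachable from other services
        # on the same compose network.
        restart: always
```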

The nginx conf stuff was also a pain. For the main massassi site, most things are served from the "massassi-static" container, but then the dynamic stuff is served from the "massassi-django" container. In order to keep the "apps" standalone, I had to have a working config for dynamic in the massassi-django container and a working one for static in the massassi-static container. This is the main conf file:

https://github.com/saberworks/massassi/blob/main/massassi-web/nginx.conf

I'm curious what you think of it. It first tries "massassi-static" and if that gives a 404, it passes the request to "massassi-django". It seems to be working fine. Prior to your suggestions for having an individual web server for each service, I had a single nginx serving all static content and dynamic content (well, proxying back to that django/gunicorn app for the dynamic stuff). The old conf file looked like this:

Code:
server {
    listen 80;
    server_name localhost;

    error_page 404 /404.html;
    error_page 500 /500.html;

    location ~ tutorial_print.shtml$ {
        alias /home/brian/code/massassi.net/output/tutorial_print.html;
    }

    location ~ /levels/files/screenshots {
        proxy_pass http://localhost:8000;
    }
    
    location ~ /levels/files/thumbnails {
        proxy_pass http://localhost:8000;
    }

    location ~ /cgi-bin/screenshot.cgi {
        proxy_pass http://localhost:8000;
    }

    location ~ ^/(static|media)/ {
        root /home/brian/code/m2/massassi-django;
    }

    location ~ ^/(sotd|admin|account|levels|lotw|holiday) {
        proxy_pass http://localhost:8000;
    }

    # annoying root-level news stuff
    location ~ ^/(news_archive.html|news_search.html|news_archive_.*.html) {
        proxy_pass http://localhost:8000;
    }

    # exact '/' pass to "news" app
    location = / {
        proxy_pass http://localhost:8000;
    }

    # catch-all, everything else
    location / {
        root /home/brian/code/massassi.net/output;
    }
}


So you see I had to specify every single url pattern that should be served by python/django and then proxy that back. Your way seems so much simpler.
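For reference, the static-first / 404-fallback pattern described above can be sketched in nginx like this (upstream names taken from the posts; the port and internal details of the real config are assumptions):

```
server {
    listen 80;

    location / {
        # Let this server handle upstream error codes instead of
        # passing them straight through to the client
        proxy_intercept_errors on;
        # On a 404 from the static container, retry via the named location
        error_page 404 = @django;
        proxy_pass http://massassi-static:80;
    }

    location @django {
        proxy_set_header Host $host;
        proxy_pass http://massassi-django:8000;
    }
}
```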
2021-12-04, 2:03 PM #16
I understand none of this.
COUCHMAN IS BACK BABY
2021-12-06, 9:38 PM #17
Sorry I just realized my links weren't working in my previous post because my repo was set to private. I just made it public so they should work now.
2021-12-06, 11:31 PM #18
And here are all the repos that will power the new massassi. It feels really weird posting up the source code.

https://github.com/saberworks/massassi-django
https://github.com/saberworks/massassi-static
https://github.com/saberworks/massassi.net
2021-12-11, 7:56 PM #19
Ni, I got the SSL working on my dev server but ran into something I was curious about:

Code:
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;


The server failed to start when these lines were present. I couldn't find the referenced files in the certbot dirs at all after I activated two domains. So I commented them out in nginx.conf and everything seems fine. Did I do something wrong?
