The migration is automatic. Existing users will experience a brief
period of downtime while the playbook runs to completion, but don't
need to do anything manually.
This change was prompted by https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2535
While my statements there ("Traefik is a shared component among
sibling/related playbooks and should retain its global
non-matrix-prefixed name and path") do make sense, there's another point
of view as well.
With the addition of docker-socket-proxy support in bf2b540807,
we potentially introduced another non-`matrix-`-prefixed systemd service
and global path (`/devture-container-socket-proxy`). Things would have
started to get messy.
Traefik always being called `devture-traefik.service` and using the `/devture-traefik` path
has the following downsides:
- different playbooks may unintentionally write to the same place,
before you disable the Traefik role in some of them.
If each playbook manages its own installation, no such silent file
conflicts arise; instead, the overlap surfaces loudly when one of them
starts its Traefik service and fails because the ports are already in use
- the data is scattered: backing up `/matrix` is no longer enough when
some of it lives in `/devture-traefik` or `/devture-container-socket-proxy` as well;
similarly, deleting `/matrix` is no longer enough to clean up
For this reason, the Traefik instance managed by this playbook
will now be called `matrix-traefik` and live under `/matrix/traefik`.
This also makes it obvious to users running multiple playbooks which
Traefik instance (powered by which playbook) is the active one.
Previously, you'd look at `devture-traefik.service` and wonder which
role was managing it.
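For illustration, the pinned-down identity looks roughly like this (a
minimal sketch; the exact variable names belong to the Traefik role's
defaults and may differ):

```yaml
# Sketch only: assumed variable names along the lines of what the
# Traefik role exposes; the playbook would set these in its group vars.
devture_traefik_identifier: matrix-traefik
devture_traefik_base_path: /matrix/traefik
```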
* healthchecks.io integration
* mutex on forwarding messages into thread
* fix in prefixes handling
* send error messages as thread reply when possible
We don't need these 2 roughly-the-same settings related to the
traefik-certs-dumper role.
For Traefik itself, having both a role toggle and a service toggle
makes sense, because it's a component shared by the various related
playbooks, and they could step on each other's toes if the role is
enabled but Traefik is disabled (in that case, uninstall tasks will
run).
As for the Traefik certs dumper, the other related playbooks don't
include it, so there's no conflict. Even if they did use it, each
would use its own instance (a different
`devture_traefik_certs_dumper_identifier`), so there wouldn't be a
conflict and uninstall tasks could run without any danger.
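To illustrate the no-conflict claim, a hypothetical sketch of two
playbooks each running their own instance side by side (the values
here are made up for the example):

```yaml
# Hypothetical vars: distinct identifiers yield separate systemd
# services and data paths, so one playbook's uninstall tasks can
# never touch the other playbook's instance.

# In this (Matrix) playbook:
devture_traefik_certs_dumper_identifier: matrix-traefik-certs-dumper

# In some sibling playbook:
# devture_traefik_certs_dumper_identifier: other-traefik-certs-dumper
```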
This makes it consistent with the rest of the playbook:
- there's a default configuration, with various variables controlling
its settings
- there's also an `_extension_yaml` variable, which lets you override it
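A minimal sketch of the pattern, assuming the usual naming convention
for the ntfy role discussed below (check the role's defaults for the
exact variable name):

```yaml
# Assumed variable name, following the playbook's naming convention.
ntfy_configuration_extension_yaml: |
  # Anything valid in ntfy's server.yml can go here; it gets merged
  # over the configuration that the role's default variables generate.
  log-level: debug
```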
The newly extracted role also has native Traefik support,
so we no longer need to rely on `matrix-nginx-proxy` for
reverse-proxying to Ntfy.
The new role uses port `80` inside the container (not `8080`, like
before), because that's the default assumption of the officially
published container image. Using a custom port (like `8080`) means the
default healthcheck command (which hardcodes port `80`) doesn't work.
Rather than fiddle with overriding the healthcheck command, we've
decided to stick to the default port. This only affects the
inside-the-container port, not any external ports.
The new role also supports adding the network ranges of the container's
multiple additional networks as "exempt hosts". Previously, only one
network's address range was added to "exempt hosts".
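For reference, the effect on ntfy's generated server configuration is
roughly this (option name per ntfy's documentation; the network ranges
are example values, as the role computes the real ones):

```yaml
# Previously only one container network's range landed here;
# now the ranges of all additional container networks do.
visitor-request-limit-exempt-hosts: "172.20.0.0/16,172.21.0.0/16"
```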
Previously, it had to go through matrix-nginx-proxy. It's now exposed
to Traefik directly via container labels.
Serving at a path other than `/` doesn't work well yet.
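For reference, exposing a container via labels looks roughly like this
(standard Traefik v2 label syntax; the router name, entrypoint and
hostname are illustrative, not the role's exact values):

```yaml
# Illustrative Traefik v2 labels, as attached to the container:
traefik.enable: "true"
traefik.http.routers.my-service.rule: "Host(`service.example.com`)"
traefik.http.routers.my-service.entrypoints: "web-secure"
traefik.http.services.my-service.loadbalancer.server.port: "8080"
```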
We were mounting our own configuration to
`/usr/share/nginx/html/config.json`, which is a symlink to
`/tmp/config.json`. So we were effectively mounting our file to
`/tmp/config.json`.
When starting:
- if Hydrogen sees a `CONFIG_OVERRIDE` environment variable,
it will try to save it into our read-only config file and fail.
- if Hydrogen doesn't see a `CONFIG_OVERRIDE` environment variable (the
path we go through, because we don't pass such a variable),
it will try to copy its bundled configuration (`/config.json.bundled`)
to `/tmp/config.json`. Because our configuration already sits at that
path (mounted read-only), the copy fails.
In both cases, it will fail with:
> cp: can't create '/tmp/config.json': File exists
Source: 3720de36bb/docker/dynamic-config.sh
We work around this by mounting our configuration on top of the bundled
one (`/config.json.bundled`). We then let Hydrogen's startup script copy
it to `/tmp/config.json` (a tmpfs we've mounted into the container) and use it from there.
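In container terms, the workaround amounts to something like this (a
compose-style sketch for illustration; the playbook wires up the same
mounts through its systemd service definition, and the host path is an
example):

```yaml
services:
  hydrogen:
    volumes:
      # Our config shadows the image's bundled one, read-only.
      - /matrix/hydrogen/config.json:/config.json.bundled:ro
    tmpfs:
      # Writable scratch space; Hydrogen's startup script copies
      # /config.json.bundled to /tmp/config.json and serves it from there.
      - /tmp
```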
* fix: only add Element-related entries to client well-known if Element is enabled
* Fix matrix-base/defaults/main.yml syntax
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>