# PostgreSQL maintenance

This document shows you how to perform various maintenance tasks related to the Postgres database server used by Matrix.

Table of contents:

- [Getting a database terminal](#getting-a-database-terminal), for when you wish to execute SQL queries
- [Vacuuming PostgreSQL](#vacuuming-postgresql), for when you wish to run a Postgres [VACUUM](https://www.postgresql.org/docs/current/sql-vacuum.html) (optimizing disk space)
- [Backing up PostgreSQL](#backing-up-postgresql), for when you wish to make a backup
- [Upgrading PostgreSQL](#upgrading-postgresql), for upgrading to new major versions of PostgreSQL. Such **manual upgrades are sometimes required**.
- [Tuning PostgreSQL](#tuning-postgresql) to make it run faster

## Getting a database terminal

You can use the `/matrix/postgres/bin/cli` tool to get interactive terminal access ([psql](https://www.postgresql.org/docs/11/app-psql.html)) to the PostgreSQL server.

If you are using an [external Postgres server](configuring-playbook-external-postgres.md), the above tool will not be available.

By default, this tool puts you in the `matrix` database, which contains nothing.

To see the available databases, run `\list` (or just `\l`).

To change to another database (for example `synapse`), run `\connect synapse` (or just `\c synapse`).

You can then proceed to write queries. Example: `SELECT COUNT(*) FROM users;`

**Be careful**. Modifying the database directly (especially as services are running) is dangerous and may lead to irreversible database corruption.

When in doubt, consider [making a backup](#backing-up-postgresql).
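
If you just need a single read-only query, you don't have to use the interactive terminal at all. Here is a minimal sketch of a non-interactive invocation, modeled on the backup command shown in [Backing up PostgreSQL](#backing-up-postgresql) below; it assumes the `matrix-postgres` container and `env-postgres-psql` credentials file created by this playbook, and that `psql` is available at `/usr/local/bin/psql` inside the container:

```bash
# Count the registered users in the synapse database with a single query.
# Container name, credentials file and psql path are assumptions taken from
# the backup example below -- adjust them if your setup differs.
/usr/bin/docker exec \
--env-file=/matrix/postgres/env-postgres-psql \
matrix-postgres \
/usr/local/bin/psql -h matrix-postgres -d synapse -c 'SELECT COUNT(*) FROM users;'
```
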
## Vacuuming PostgreSQL

Deleting lots of data from Postgres does not make it release disk space until you perform a [`VACUUM` operation](https://www.postgresql.org/docs/current/sql-vacuum.html).

You can run different `VACUUM` operations via the playbook, with the default preset being `vacuum-complete`:

- (default) `vacuum-complete`: stops all services temporarily and runs `VACUUM FULL VERBOSE ANALYZE`
- `vacuum-full`: stops all services temporarily and runs `VACUUM FULL VERBOSE`
- `vacuum`: runs `VACUUM VERBOSE` without stopping any services
- `vacuum-analyze`: runs `VACUUM VERBOSE ANALYZE` without stopping any services
- `analyze`: runs `ANALYZE VERBOSE` without stopping any services (this is just [ANALYZE](https://www.postgresql.org/docs/current/sql-analyze.html) without doing a vacuum, so it's faster)

**Note**: for the `vacuum-complete` and `vacuum-full` presets, you'll need plenty of available disk space in your Postgres data directory (usually `/matrix/postgres/data`). These presets also stop all services (e.g. Synapse, etc.) while the vacuum operation is running.
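
Before running one of these disk-hungry presets, it's a good idea to check how much free space the filesystem holding the Postgres data directory has, for example:

```bash
# Show free disk space for the filesystem that contains the Postgres data directory
df -h /matrix/postgres/data
```
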
Example playbook invocations:

- `just run-tags run-postgres-vacuum`: runs the default `vacuum-complete` preset and restarts all services
- `just run-tags run-postgres-vacuum -e postgres_vacuum_preset=analyze`: runs the `analyze` preset with all services remaining operational at all times
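
If you prefer invoking the playbook directly instead of via `just`, the equivalent commands look roughly like this (a sketch, assuming the usual `setup.yml` playbook and `inventory/hosts` inventory paths):

```bash
# Default vacuum-complete preset (stops services while VACUUM FULL VERBOSE ANALYZE runs)
ansible-playbook -i inventory/hosts setup.yml --tags=run-postgres-vacuum

# analyze preset, keeping all services running
ansible-playbook -i inventory/hosts setup.yml --tags=run-postgres-vacuum --extra-vars="postgres_vacuum_preset=analyze"
```
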
## Backing up PostgreSQL

To automatically make Postgres database backups on a fixed schedule, see [Setting up postgres backup](configuring-playbook-postgres-backup.md).

To make a one-off backup of the current PostgreSQL database, make sure it's running and then execute a command like this on the server:

```bash
/usr/bin/docker exec \
--env-file=/matrix/postgres/env-postgres-psql \
matrix-postgres \
/usr/local/bin/pg_dumpall -h matrix-postgres \
| gzip -c \
> /matrix/postgres.sql.gz
```
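
To quickly sanity-check the resulting dump (the path below matches the example above), you can peek at its contents and check its size:

```bash
# Confirm the compressed dump decompresses and starts with SQL statements
zcat /matrix/postgres.sql.gz | head -n 20

# See how large the dump file is
du -h /matrix/postgres.sql.gz
```
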
If you are using an [external Postgres server](configuring-playbook-external-postgres.md), the above command will not work, because neither the credentials file (`/matrix/postgres/env-postgres-psql`) nor the `matrix-postgres` container is available.

Restoring a backup made this way can be done by [importing it](importing-postgres.md).

## Upgrading PostgreSQL

Unless you are using an [external Postgres server](configuring-playbook-external-postgres.md), this playbook initially installs Postgres for you.

Once installed, the playbook attempts to preserve the Postgres version it starts with. This is because newer Postgres versions cannot start with data generated by older Postgres versions. Upgrades must be performed manually.

This playbook can upgrade your existing Postgres setup with the following command:

```sh
just run-tags upgrade-postgres
```

**Warning: If you're using Borg Backup, keep in mind that there is no official Postgres 16 support yet.**

**The old Postgres data directory is backed up** automatically, by renaming it to `/matrix/postgres/data-auto-upgrade-backup`.

To use a different path for this backup, pass some extra flags to the command above, like this: `--extra-vars="postgres_auto_upgrade_backup_data_path=/another/disk/matrix-postgres-before-upgrade"`

The auto-upgrade-backup directory stays around forever, until you **manually decide to delete it**.

As part of the upgrade, the database is dumped to `/tmp`, an upgraded and empty Postgres server is started, and then the dump is restored into the new server. To use a different directory for the dump, pass some extra flags to the command above, like this: `--extra-vars="postgres_dump_dir=/directory/to/dump/here"`

To save disk space in `/tmp`, the dump file is gzipped on the fly at the expense of CPU usage. If you have plenty of space in `/tmp` and would rather avoid gzipping, you can explicitly pass a dump filename which doesn't end in `.gz`. Example: `--extra-vars="postgres_dump_name=matrix-postgres-dump.sql"`

**All databases, roles, etc. on the Postgres server are migrated**.
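
Putting the above together, a hypothetical invocation that places both the old-data backup and the dump on another disk might look like this (the `/another/disk/...` paths are placeholders; adjust them to your setup):

```sh
# Hypothetical example -- both paths below are placeholders.
just run-tags upgrade-postgres \
  --extra-vars="postgres_auto_upgrade_backup_data_path=/another/disk/matrix-postgres-before-upgrade" \
  --extra-vars="postgres_dump_dir=/another/disk/postgres-dump"
```
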
## Tuning PostgreSQL

PostgreSQL can be [tuned](https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server) to make it run faster. This is done by passing extra arguments to the Postgres process.

The [Postgres Ansible role](https://github.com/devture/com.devture.ansible.role.postgres) **already does some tuning by default**, which matches the [tuning logic](https://github.com/le0pard/pgtune/blob/master/src/features/configuration/configurationSlice.js) done by websites like https://pgtune.leopard.in.ua/. You can manually influence some of the tuning variables. These parameters (variables) are injected via the `devture_postgres_postgres_process_extra_arguments_auto` variable.

Most users should be fine with the automatically-done tuning. However, you may wish to:

- **adjust the automatically-determined tuning parameters manually**: change the values for the tuning variables defined in the Postgres role's [default configuration file](https://github.com/devture/com.devture.ansible.role.postgres/blob/main/defaults/main.yml) (see `devture_postgres_max_connections`, `devture_postgres_data_storage`, etc.). These variables are ultimately passed to Postgres via the `devture_postgres_postgres_process_extra_arguments_auto` variable
- **turn automatically-performed tuning off**: override it like this: `devture_postgres_postgres_process_extra_arguments_auto: []`
- **add additional tuning parameters**: define your additional Postgres configuration parameters in `devture_postgres_postgres_process_extra_arguments_custom`. See `devture_postgres_postgres_process_extra_arguments_auto` defined in the Postgres role's [default configuration file](https://github.com/devture/com.devture.ansible.role.postgres/blob/main/defaults/main.yml) for inspiration. A configuration sketch follows below.
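
As a sketch, such overrides would go into your `vars.yml` configuration file. The variable names below are the ones referenced above; the parameter values are made-up examples, and the exact argument format should be checked against what the Postgres role's [default configuration file](https://github.com/devture/com.devture.ansible.role.postgres/blob/main/defaults/main.yml) uses for `devture_postgres_postgres_process_extra_arguments_auto`:

```yaml
# Turn the automatically-performed tuning off entirely (sketch):
devture_postgres_postgres_process_extra_arguments_auto: []

# ..or keep the automatic tuning and add extra parameters on top.
# The values below are examples only; mirror the argument format used in the
# role's defaults for devture_postgres_postgres_process_extra_arguments_auto.
devture_postgres_postgres_process_extra_arguments_custom:
  - "-c max_connections=200"
  - "-c shared_buffers=1GB"
```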