Merge branch 'release-v1.120' into matrix-org-hotfixes

Olivier 'reivilibre' 2024-11-20 15:16:26 +00:00
commit f4bbc74f44
53 changed files with 991 additions and 318 deletions


@@ -91,10 +91,19 @@ jobs:
           rm -rf /tmp/.buildx-cache
           mv /tmp/.buildx-cache-new /tmp/.buildx-cache
+      - name: Artifact name
+        id: artifact-name
+        # We can't have colons in the upload name of the artifact, so we convert
+        # e.g. `debian:sid` to `sid`.
+        env:
+          DISTRO: ${{ matrix.distro }}
+        run: |
+          echo "ARTIFACT_NAME=${DISTRO#*:}" >> "$GITHUB_OUTPUT"
       - name: Upload debs as artifacts
         uses: actions/upload-artifact@v4
         with:
-          name: debs
+          name: debs-${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
           path: debs/*

   build-wheels:
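The `${DISTRO#*:}` expansion strips the shortest prefix ending in `:`. A rough Python equivalent, for illustration only (the `artifact_name` helper is hypothetical, not part of the workflow):

```python
# Rough Python equivalent of the shell expansion ${DISTRO#*:} used above:
# drop everything up to and including the first colon, if any.
def artifact_name(distro: str) -> str:
    return distro.split(":", 1)[-1]

assert artifact_name("debian:sid") == "sid"
assert artifact_name("sid") == "sid"  # no colon: value is left unchanged
```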
@@ -102,7 +111,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-22.04, macos-12]
+        os: [ubuntu-22.04, macos-13]
         arch: [x86_64, aarch64]
         # is_pr is a flag used to exclude certain jobs from the matrix on PRs.
         # It is not read by the rest of the workflow.
@@ -112,9 +121,9 @@ jobs:
       exclude:
         # Don't build macos wheels on PR CI.
         - is_pr: true
-          os: "macos-12"
+          os: "macos-13"
         # Don't build aarch64 wheels on mac.
-        - os: "macos-12"
+        - os: "macos-13"
          arch: aarch64
         # Don't build aarch64 wheels on PR CI.
         - is_pr: true
@@ -196,17 +205,18 @@ jobs:
       - name: Download all workflow run artifacts
         uses: actions/download-artifact@v4
       - name: Build a tarball for the debs
-        run: tar -cvJf debs.tar.xz debs
+        # We need to merge all the debs uploads into one folder, then compress
+        # that.
+        run: |
+          mkdir debs
+          mv debs*/* debs/
+          tar -cvJf debs.tar.xz debs
       - name: Attach to release
-        uses: softprops/action-gh-release@a929a66f232c1b11af63782948aa2210f981808a  # PR#109
+        uses: softprops/action-gh-release@v2
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           files: |
             Sdist/*
-            Wheel/*
+            Wheel*/*
             debs.tar.xz
+          # if it's not already published, keep the release as a draft.
+          draft: true
+          # mark it as a prerelease if the tag contains 'rc'.
+          prerelease: ${{ contains(github.ref, 'rc') }}
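With `actions/download-artifact@v4` and no `name` filter, each artifact is unpacked into its own directory named after the artifact (here `debs-sid/`, `debs-bookworm/`, and so on, per the naming step above), which is why the per-distro uploads have to be merged before compressing. A minimal Python sketch of that merge step, assuming such directories exist in the working directory:

```python
# Illustration of the `mkdir debs && mv debs*/* debs/` merge step above.
from pathlib import Path
import shutil

Path("debs").mkdir(exist_ok=True)
for artifact_dir in Path(".").glob("debs-*"):
    if not artifact_dir.is_dir():
        continue
    for deb in artifact_dir.iterdir():
        # Collect every .deb from the per-distro artifact directories
        # into a single debs/ folder for the tarball.
        shutil.move(str(deb), "debs/")
```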


@@ -1,11 +1,66 @@
-# Synapse 1.119.0rc1 (2024-11-11)
+# Synapse 1.120.0rc1 (2024-11-20)
+
+This release enables the enforcement of authenticated media by default, with exemptions for media that is already present in the
+homeserver's media store.
+
+Most homeservers operating in the public federation will not be impacted by this change, given that
+the large homeserver `matrix.org` enabled this in September 2024 and therefore most clients and servers
+will already have updated as a result.
+
+Some server administrators may still wish to disable this enforcement for the time being, in the interest of compatibility with older clients
+and older federated homeservers.
+
+See the [upgrade notes](https://element-hq.github.io/synapse/v1.120/upgrade.html#authenticated-media-is-now-enforced-by-default) for more information.
+
+### Features
+
+- Enforce authenticated media by default. Administrators can revert this by configuring `enable_authenticated_media` to `false`. In a future release of Synapse, this option will be removed and become always-on. ([\#17889](https://github.com/element-hq/synapse/issues/17889))
+- Add a one-off task to delete old One-Time Keys, to guard against us having old OTKs in the database that the client has long forgotten about. ([\#17934](https://github.com/element-hq/synapse/issues/17934))
+
+### Improved Documentation
+
+- Clarify the semantics of the `enable_authenticated_media` configuration option. ([\#17913](https://github.com/element-hq/synapse/issues/17913))
+- Add documentation about backing up Synapse. ([\#17931](https://github.com/element-hq/synapse/issues/17931))
+
+### Deprecations and Removals
+
+- Remove support for [MSC3886: Simple client rendezvous capability](https://github.com/matrix-org/matrix-spec-proposals/pull/3886), which has been superseded by [MSC4108](https://github.com/matrix-org/matrix-spec-proposals/pull/4108) and therefore closed. ([\#17638](https://github.com/element-hq/synapse/issues/17638))
+
+### Internal Changes
+
+- Addressed some typos in docs and returned error message for unknown MXC ID. ([\#17865](https://github.com/element-hq/synapse/issues/17865))
+- Unpin the upload release GHA action. ([\#17923](https://github.com/element-hq/synapse/issues/17923))
+- Bump macos version used to build wheels during release, as current version used is end-of-life. ([\#17924](https://github.com/element-hq/synapse/issues/17924))
+- Move server event filtering logic to rust. ([\#17928](https://github.com/element-hq/synapse/issues/17928))
+- Support new package name of PyPI package `python-multipart` 0.0.13 so that distro packagers do not need to work around name conflict with PyPI package `multipart`. ([\#17932](https://github.com/element-hq/synapse/issues/17932))
+- Speed up slow initial sliding syncs on large servers. ([\#17946](https://github.com/element-hq/synapse/issues/17946))
+
+### Updates to locked dependencies
+
+* Bump anyhow from 1.0.92 to 1.0.93. ([\#17920](https://github.com/element-hq/synapse/issues/17920))
+* Bump bleach from 6.1.0 to 6.2.0. ([\#17918](https://github.com/element-hq/synapse/issues/17918))
+* Bump immutabledict from 4.2.0 to 4.2.1. ([\#17941](https://github.com/element-hq/synapse/issues/17941))
+* Bump packaging from 24.1 to 24.2. ([\#17940](https://github.com/element-hq/synapse/issues/17940))
+* Bump phonenumbers from 8.13.49 to 8.13.50. ([\#17942](https://github.com/element-hq/synapse/issues/17942))
+* Bump pygithub from 2.4.0 to 2.5.0. ([\#17917](https://github.com/element-hq/synapse/issues/17917))
+* Bump ruff from 0.7.2 to 0.7.3. ([\#17919](https://github.com/element-hq/synapse/issues/17919))
+* Bump serde from 1.0.214 to 1.0.215. ([\#17938](https://github.com/element-hq/synapse/issues/17938))
+
+# Synapse 1.119.0 (2024-11-13)
+
+No significant changes since 1.119.0rc2.
+
 ### Python 3.8 support dropped
 
-Python 3.8 is no longer supported by Synapse. The minimum supported Python version is now 3.9.
+Python 3.8 is [end-of-life](https://devguide.python.org/versions/) and is no longer supported by Synapse. The minimum supported Python version is now 3.9.
 
 If you are running Synapse with Python 3.8, please upgrade to Python 3.9 (or greater) before upgrading Synapse.
 
+# Synapse 1.119.0rc2 (2024-11-11)
+
+Note that due to packaging issues there was no v1.119.0rc1.
+
 ### Features
 
 - Support [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151)'s stable report room API. ([\#17374](https://github.com/element-hq/synapse/issues/17374))
@@ -37,6 +92,7 @@ If you are running Synapse with Python 3.8, please upgrade to Python 3.9 (or gre
 - Update version constraint to allow the latest poetry-core 1.9.1. ([\#17902](https://github.com/element-hq/synapse/pull/17902))
 - Update the portdb CI to use Python 3.13 and Postgres 17 as latest dependencies. ([\#17909](https://github.com/element-hq/synapse/pull/17909))
 - Add an index to `current_state_delta_stream` table. ([\#17912](https://github.com/element-hq/synapse/issues/17912))
+- Fix building and attaching release artifacts during the release process. ([\#17921](https://github.com/element-hq/synapse/issues/17921))
 
 ### Updates to locked dependencies

Cargo.lock (generated, 12 changed lines)

@@ -13,9 +13,9 @@ dependencies = [
 
 [[package]]
 name = "anyhow"
-version = "1.0.92"
+version = "1.0.93"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "74f37166d7d48a0284b99dd824694c26119c700b53bf0d1540cdb147dbdaaf13"
+checksum = "4c95c10ba0b00a02636238b814946408b1322d5ac4760326e6fb8ec956d85775"
 
 [[package]]
 name = "arc-swap"
@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 
 [[package]]
 name = "serde"
-version = "1.0.214"
+version = "1.0.215"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f55c3193aca71c12ad7890f1785d2b73e1b9f63a0bbc353c08ef26fe03fc56b5"
+checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f"
 dependencies = [
  "serde_derive",
 ]
 
 [[package]]
 name = "serde_derive"
-version = "1.0.214"
+version = "1.0.215"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "de523f781f095e28fa605cdce0f8307e451cc0fd14e2eb4cd2e98a355b147766"
+checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0"
 dependencies = [
  "proc-macro2",
  "quote",

debian/changelog (vendored, 18 changed lines)

@@ -1,3 +1,21 @@
+matrix-synapse-py3 (1.120.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.120.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 20 Nov 2024 15:02:21 +0000
+
+matrix-synapse-py3 (1.119.0) stable; urgency=medium
+
+  * New Synapse release 1.119.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 13 Nov 2024 13:57:51 +0000
+
+matrix-synapse-py3 (1.119.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.119.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Mon, 11 Nov 2024 14:33:02 +0000
+
 matrix-synapse-py3 (1.119.0~rc1) stable; urgency=medium
 
   * New Synapse release 1.119.0rc1.


@@ -54,6 +54,7 @@
   - [Using `synctl` with Workers](synctl_workers.md)
   - [Systemd](systemd-with-workers/README.md)
 - [Administration](usage/administration/README.md)
+  - [Backups](usage/administration/backups.md)
   - [Admin API](usage/administration/admin_api/README.md)
     - [Account Validity](admin_api/account_validity.md)
     - [Background Updates](usage/administration/admin_api/background_updates.md)


@@ -100,6 +100,10 @@ database:
     keepalives_count: 3
 ```
 
+## Backups
+
+Don't forget to [back up](./usage/administration/backups.md#database) your database!
+
 ## Tuning Postgres
 
 The default settings should be fine for most deployments. For larger


@@ -656,6 +656,10 @@ This also requires the optional `lxml` python dependency to be installed. This
 in turn requires the `libxml2` library to be available - on Debian/Ubuntu this
 means `apt-get install libxml2-dev`, or equivalent for your OS.
 
+### Backups
+
+Don't forget to take [backups](../usage/administration/backups.md) of your new server!
+
 ### Troubleshooting Installation
 
 `pip` seems to leak *lots* of memory during installation. For instance, a Linux


@@ -117,6 +117,40 @@ each upgrade are complete before moving on to the next upgrade, to avoid
 stacking them up. You can monitor the currently running background updates with
 [the Admin API](usage/administration/admin_api/background_updates.html#status).
 
+# Upgrading to v1.120.0
+
+## Removal of experimental MSC3886 feature
+
+[MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886)
+has been closed (and will not enter the Matrix spec). As such, we are
+removing the experimental support for it in this release.
+
+The `experimental_features.msc3886_endpoint` configuration option has
+been removed.
+
+## Authenticated media is now enforced by default
+
+The [`enable_authenticated_media`] configuration option now defaults to true.
+
+This means that clients and remote (federated) homeservers now need to use
+the authenticated media endpoints in order to download media from your
+homeserver.
+
+As an exception, existing media that was stored on the server prior to
+this option changing to `true` will still be accessible over the
+unauthenticated endpoints.
+
+The matrix.org homeserver has already been running with this option enabled
+since September 2024, so most common clients and homeservers should already
+be compatible.
+
+With that said, administrators who wish to disable this feature for broader
+compatibility can still do so by manually configuring
+`enable_authenticated_media: False`.
+
+[`enable_authenticated_media`]: usage/configuration/config_documentation.md#enable_authenticated_media
+
 # Upgrading to v1.119.0
 
 ## Minimum supported Python version


@ -0,0 +1,125 @@
# How to back up a Synapse homeserver
It is critical to maintain good backups of your server, to guard against
hardware failure as well as potential corruption due to bugs or administrator
error.
This page documents the things you will need to consider backing up as part of
a Synapse installation.
## Configuration files
Keep a copy of your configuration file (`homeserver.yaml`), as well as any
auxiliary config files it refers to, such as the
[`log_config`](../configuration/config_documentation.md#log_config) file and
[`app_service_config_files`](../configuration/config_documentation.md#app_service_config_files).
Often, all such config files will be kept in a single directory such as
`/etc/synapse`, which will make this easier.
## Server signing key
Your server has a [signing
key](../configuration/config_documentation.md#signing_key_path) which it uses
to sign events and outgoing federation requests. It is easiest to back it up
with your configuration files, but an alternative is to have Synapse create a
new signing key if you have to restore.
If you do decide to replace the signing key, you should add the old *public*
key to
[`old_signing_keys`](../configuration/config_documentation.md#old_signing_keys).
## Database
Synapse's support for SQLite is only suitable for testing purposes, so for the
purposes of this document, we'll assume you are using
[PostgreSQL](../../postgres.md).
A full discussion of backup strategies for PostgreSQL is out of scope for this
document; see the [PostgreSQL
documentation](https://www.postgresql.org/docs/current/backup.html) for
detailed information.
### Synapse-specific details
* Be very careful not to restore into a database that already has tables
present. At best, this will error; at worst, it will lead to subtle database
inconsistencies.
* The `e2e_one_time_keys_json` table should **not** be backed up, or if it is
backed up, should be
[`TRUNCATE`d](https://www.postgresql.org/docs/current/sql-truncate.html)
after restoring the database before Synapse is started.
[Background: restoring the database to an older backup can cause
used one-time-keys to be re-issued, causing subsequent [message decryption
errors](https://github.com/element-hq/element-meta/issues/2155). Clearing
all one-time-keys from the database ensures that this cannot happen, and
will prompt clients to generate and upload new one-time-keys.]
### Quick and easy database backup and restore
Typically, the easiest solution is to use `pg_dump` to take a copy of the whole
database. We recommend `pg_dump`'s custom dump format, as it produces
significantly smaller backup files.
```shell
sudo -u postgres pg_dump -Fc --exclude-table-data e2e_one_time_keys_json synapse > synapse.dump
```
There is no need to stop Postgres or Synapse while `pg_dump` is running: it
will take a consistent snapshot of the database.
To restore, you will need to recreate the database as described in [Using
Postgres](../../postgres.md#set-up-database),
then load the dump into it with `pg_restore`:
```shell
sudo -u postgres createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
sudo -u postgres pg_restore -d synapse < synapse.dump
```
(If you forgot to exclude `e2e_one_time_keys_json` during `pg_dump`, remember
to connect to the new database and `TRUNCATE e2e_one_time_keys_json;` before
starting Synapse.)
To reiterate: do **not** restore a dump over an existing database.
Again, if you plan to run your homeserver at any sort of production level, we
recommend studying the PostgreSQL documentation on backup options.
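If you prefer to script the post-restore cleanup, here is a minimal Python sketch using `psycopg2` (the connection string is a placeholder for your own setup; the `TRUNCATE` itself is the documented step above):

```python
# Clear one-time-keys after restoring a dump, as described above.
import psycopg2

conn = psycopg2.connect("dbname=synapse user=synapse_user")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Prevents re-issuing already-used OTKs after restoring an older backup.
    cur.execute("TRUNCATE e2e_one_time_keys_json;")
conn.close()
```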
## Media store
Synapse keeps a copy of media uploaded by users, including avatars and message
attachments, in its [Media
store](../configuration/config_documentation.md#media-store).
It is a directory on the local disk, containing the following directories:
* `local_content`: this is content uploaded by your local users. As a general
rule, you should back this up: it may represent the only copy of those
media files anywhere in the federation, and if they are lost, users will
see errors when viewing user or room avatars, and messages with attachments.
* `local_thumbnails`: "thumbnails" of images uploaded by your users. If
[`dynamic_thumbnails`](../configuration/config_documentation.md#dynamic_thumbnails)
is enabled, these will be regenerated if they are removed from the disk, and
there is therefore no need to back them up.
If `dynamic_thumbnails` is *not* enabled (the default): although this can
theoretically be regenerated from `local_content`, there is no tooling to do
so. We recommend that these are backed up too.
* `remote_content`: this is a cache of content that was uploaded by a user on
another server, and has since been requested by a user on your own server.
Typically there is no need to back up this directory: if a file in this directory
is removed, Synapse will attempt to fetch it again from the remote
server.
* `remote_thumbnails`: thumbnails of images uploaded by users on other
servers. As with `remote_content`, there is normally no need to back this
up.
* `url_cache`, `url_cache_thumbnails`: temporary caches of files downloaded
by the [URL previews](../../setup/installation.md#url-previews) feature.
These do not need to be backed up.


@@ -1887,12 +1887,33 @@ Config options related to Synapse's media store.
 When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
 unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
-after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
-this will change to true in a future Synapse release.
+after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to true (previously false). In a future release of Synapse, this option will be removed and become always-on.
+
+In all cases, authenticated requests to download media will succeed, but for unauthenticated requests, this
+case-by-case breakdown describes whether media downloads are permitted:
+
+* `enable_authenticated_media = False`:
+  * unauthenticated client or homeserver requesting local media: allowed
+  * unauthenticated client or homeserver requesting remote media: allowed as long as the media is in the cache,
+    or as long as the remote homeserver does not require authentication to retrieve the media
+* `enable_authenticated_media = True`:
+  * unauthenticated client or homeserver requesting local media:
+    allowed if the media was stored on the server whilst `enable_authenticated_media` was `False` (or in a previous Synapse version where this option did not exist);
+    otherwise denied.
+  * unauthenticated client or homeserver requesting remote media: the same as for local media;
+    allowed if the media was stored on the server whilst `enable_authenticated_media` was `False` (or in a previous Synapse version where this option did not exist);
+    otherwise denied.
+
+It is especially notable that media downloaded before this option existed (in older Synapse versions), or whilst this option was set to `False`,
+will perpetually be available over the legacy, unauthenticated endpoint, even after this option is set to `True`.
+This is for backwards compatibility with older clients and homeservers that do not yet support requesting authenticated media;
+those older clients or homeservers will not be cut off from media they can already see.
+
+_Changed in Synapse 1.120:_ This option now defaults to `True` when not set, whereas before this version it defaulted to `False`.
 
 Example configuration:
 ```yaml
-enable_authenticated_media: true
+enable_authenticated_media: false
 ```
 ---
 ### `enable_media_repo`
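As an illustration only (this helper is hypothetical, not Synapse's implementation), the case-by-case breakdown above amounts to:

```python
# Hypothetical encoding of the unauthenticated-download rules described above.
def unauthenticated_download_allowed(
    enable_authenticated_media: bool,
    stored_while_option_was_false: bool,
) -> bool:
    if not enable_authenticated_media:
        # Local media is allowed; remote media is allowed if cached or if the
        # remote homeserver serves it unauthenticated.
        return True
    # Media stored while the option was False (or before it existed)
    # remains reachable over the legacy endpoints; everything else is denied.
    return stored_while_option_was_false
```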
@@ -3108,6 +3129,15 @@ it was last used.
 It is possible to build an entry from an old `signing.key` file using the
 `export_signing_key` script which is provided with synapse.
 
+If you have lost the private key file, you can ask another server you trust to
+tell you the public keys it has seen from your server. To fetch the keys from
+`matrix.org`, try something like:
+
+```
+curl https://matrix-federation.matrix.org/_matrix/key/v2/query/myserver.example.com |
+    jq '.server_keys | map(.verify_keys) | add'
+```
+
 Example configuration:
 ```yaml
 old_signing_keys:


@@ -56,24 +56,6 @@
         "type": "github"
       }
     },
-    "flake-utils_2": {
-      "inputs": {
-        "systems": "systems_2"
-      },
-      "locked": {
-        "lastModified": 1681202837,
-        "narHash": "sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=",
-        "owner": "numtide",
-        "repo": "flake-utils",
-        "rev": "cfacdce06f30d2b68473a46042957675eebb3401",
-        "type": "github"
-      },
-      "original": {
-        "owner": "numtide",
-        "repo": "flake-utils",
-        "type": "github"
-      }
-    },
     "gitignore": {
       "inputs": {
         "nixpkgs": [
@@ -202,11 +184,11 @@
     },
     "nixpkgs_3": {
       "locked": {
-        "lastModified": 1681358109,
-        "narHash": "sha256-eKyxW4OohHQx9Urxi7TQlFBTDWII+F+x2hklDOQPB50=",
+        "lastModified": 1728538411,
+        "narHash": "sha256-f0SBJz1eZ2yOuKUr5CA9BHULGXVSn6miBuUWdTyhUhU=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "96ba1c52e54e74c3197f4d43026b3f3d92e83ff9",
+        "rev": "b69de56fac8c2b6f8fd27f2eca01dcda8e0a4221",
         "type": "github"
       },
       "original": {
@@ -249,20 +231,19 @@
       "devenv": "devenv",
       "nixpkgs": "nixpkgs_2",
       "rust-overlay": "rust-overlay",
-      "systems": "systems_3"
+      "systems": "systems_2"
     }
   },
   "rust-overlay": {
     "inputs": {
-      "flake-utils": "flake-utils_2",
       "nixpkgs": "nixpkgs_3"
     },
     "locked": {
-      "lastModified": 1693966243,
-      "narHash": "sha256-a2CA1aMIPE67JWSVIGoGtD3EGlFdK9+OlJQs0FOWCKY=",
+      "lastModified": 1731897198,
+      "narHash": "sha256-Ou7vLETSKwmE/HRQz4cImXXJBr/k9gp4J4z/PF8LzTE=",
       "owner": "oxalica",
       "repo": "rust-overlay",
-      "rev": "a8b4bb4cbb744baaabc3e69099f352f99164e2c1",
+      "rev": "0be641045af6d8666c11c2c40e45ffc9667839b5",
       "type": "github"
     },
     "original": {
@@ -300,21 +281,6 @@
       "repo": "default",
       "type": "github"
     }
-    },
-    "systems_3": {
-      "locked": {
-        "lastModified": 1681028828,
-        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-        "owner": "nix-systems",
-        "repo": "default",
-        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nix-systems",
-        "repo": "default",
-        "type": "github"
-      }
     }
   },
   "root": "root",


@@ -82,7 +82,7 @@
             #
             # NOTE: We currently need to set the Rust version unnecessarily high
             # in order to work around https://github.com/matrix-org/synapse/issues/15939
-            (rust-bin.stable."1.71.1".default.override {
+            (rust-bin.stable."1.82.0".default.override {
               # Additionally install the "rust-src" extension to allow diving into the
               # Rust source code in an IDE (rust-analyzer will also make use of it).
               extensions = [ "rust-src" ];
@@ -205,7 +205,7 @@
           # corresponding Nix packages on https://search.nixos.org/packages.
           #
           # This was done until `./install-deps.pl --dryrun` produced no output.
-          env.PERL5LIB = "${with pkgs.perl536Packages; makePerlPath [
+          env.PERL5LIB = "${with pkgs.perl538Packages; makePerlPath [
             DBI
             ClassMethodModifiers
             CryptEd25519

poetry.lock (generated, 79 changed lines)

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.8.4 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
 
 [[package]]
 name = "annotated-types"
@@ -104,21 +104,20 @@ typecheck = ["mypy"]
 
 [[package]]
 name = "bleach"
-version = "6.1.0"
+version = "6.2.0"
 description = "An easy safelist-based HTML-sanitizing tool."
 optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
 files = [
-    {file = "bleach-6.1.0-py3-none-any.whl", hash = "sha256:3225f354cfc436b9789c66c4ee030194bee0568fbf9cbdad3bc8b5c26c5f12b6"},
-    {file = "bleach-6.1.0.tar.gz", hash = "sha256:0a31f1837963c41d46bbf1331b8778e1308ea0791db03cc4e7357b97cf42a8fe"},
+    {file = "bleach-6.2.0-py3-none-any.whl", hash = "sha256:117d9c6097a7c3d22fd578fcd8d35ff1e125df6736f554da4e432fdd63f31e5e"},
+    {file = "bleach-6.2.0.tar.gz", hash = "sha256:123e894118b8a599fd80d3ec1a6d4cc7ce4e5882b1317a7e1ba69b56e95f991f"},
 ]
 
 [package.dependencies]
-six = ">=1.9.0"
 webencodings = "*"
 
 [package.extras]
-css = ["tinycss2 (>=1.1.0,<1.3)"]
+css = ["tinycss2 (>=1.1.0,<1.5)"]
 
 [[package]]
 name = "canonicaljson"
@@ -725,13 +724,13 @@ files = [
 
 [[package]]
 name = "immutabledict"
-version = "4.2.0"
+version = "4.2.1"
 description = "Immutable wrapper around dictionaries (a fork of frozendict)"
 optional = false
-python-versions = ">=3.8,<4.0"
+python-versions = ">=3.8"
 files = [
-    {file = "immutabledict-4.2.0-py3-none-any.whl", hash = "sha256:d728b2c2410d698d95e6200237feb50a695584d20289ad3379a439aa3d90baba"},
-    {file = "immutabledict-4.2.0.tar.gz", hash = "sha256:e003fd81aad2377a5a758bf7e1086cf3b70b63e9a5cc2f46bce8d0a2b4727c5f"},
+    {file = "immutabledict-4.2.1-py3-none-any.whl", hash = "sha256:c56a26ced38c236f79e74af3ccce53772827cef5c3bce7cab33ff2060f756373"},
+    {file = "immutabledict-4.2.1.tar.gz", hash = "sha256:d91017248981c72eb66c8ff9834e99c2f53562346f23e7f51e7a5ebcf66a3bcc"},
 ]
 
 [[package]]
@@ -1419,13 +1418,13 @@ tests = ["Sphinx", "doubles", "flake8", "flake8-quotes", "gevent", "mock", "pyte
 
 [[package]]
 name = "packaging"
-version = "24.1"
+version = "24.2"
 description = "Core utilities for Python packages"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"},
-    {file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
+    {file = "packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759"},
+    {file = "packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f"},
 ]
 
 [[package]]
@@ -1444,13 +1443,13 @@ dev = ["jinja2"]
 
 [[package]]
 name = "phonenumbers"
-version = "8.13.49"
+version = "8.13.50"
 description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
 optional = false
 python-versions = "*"
 files = [
-    {file = "phonenumbers-8.13.49-py2.py3-none-any.whl", hash = "sha256:e17140955ab3d8f9580727372ea64c5ada5327932d6021ef6fd203c3db8c8139"},
-    {file = "phonenumbers-8.13.49.tar.gz", hash = "sha256:e608ccb61f0bd42e6db1d2c421f7c22186b88f494870bf40aa31d1a2718ab0ae"},
+    {file = "phonenumbers-8.13.50-py2.py3-none-any.whl", hash = "sha256:bb95dbc0d9979c51f7ad94bcd780784938958861fbb4b75a2fe39ccd3d58954a"},
+    {file = "phonenumbers-8.13.50.tar.gz", hash = "sha256:e05ac6fb7b98c6d719a87ea895b9fc153673b4a51f455ec9afaf557ef4629da6"},
 ]
 
 [[package]]
@@ -1785,13 +1784,13 @@ typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
 
 [[package]]
 name = "pygithub"
-version = "2.4.0"
+version = "2.5.0"
 description = "Use the full Github API v3"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "PyGithub-2.4.0-py3-none-any.whl", hash = "sha256:81935aa4bdc939fba98fee1cb47422c09157c56a27966476ff92775602b9ee24"},
-    {file = "pygithub-2.4.0.tar.gz", hash = "sha256:6601e22627e87bac192f1e2e39c6e6f69a43152cfb8f307cee575879320b3051"},
+    {file = "PyGithub-2.5.0-py3-none-any.whl", hash = "sha256:b0b635999a658ab8e08720bdd3318893ff20e2275f6446fcf35bf3f44f2c0fd2"},
+    {file = "pygithub-2.5.0.tar.gz", hash = "sha256:e1613ac508a9be710920d26eb18b1905ebd9926aa49398e88151c1b526aad3cf"},
 ]
 
 [package.dependencies]
@@ -2257,29 +2256,29 @@ files = [
 
 [[package]]
 name = "ruff"
-version = "0.7.2"
+version = "0.7.3"
 description = "An extremely fast Python linter and code formatter, written in Rust."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "ruff-0.7.2-py3-none-linux_armv6l.whl", hash = "sha256:b73f873b5f52092e63ed540adefc3c36f1f803790ecf2590e1df8bf0a9f72cb8"},
-    {file = "ruff-0.7.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:5b813ef26db1015953daf476202585512afd6a6862a02cde63f3bafb53d0b2d4"},
-    {file = "ruff-0.7.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:853277dbd9675810c6826dad7a428d52a11760744508340e66bf46f8be9701d9"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21aae53ab1490a52bf4e3bf520c10ce120987b047c494cacf4edad0ba0888da2"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ccc7e0fc6e0cb3168443eeadb6445285abaae75142ee22b2b72c27d790ab60ba"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fd77877a4e43b3a98e5ef4715ba3862105e299af0c48942cc6d51ba3d97dc859"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:e00163fb897d35523c70d71a46fbaa43bf7bf9af0f4534c53ea5b96b2e03397b"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f3c54b538633482dc342e9b634d91168fe8cc56b30a4b4f99287f4e339103e88"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7b792468e9804a204be221b14257566669d1db5c00d6bb335996e5cd7004ba80"},
-    {file = "ruff-0.7.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dba53ed84ac19ae4bfb4ea4bf0172550a2285fa27fbb13e3746f04c80f7fa088"},
-    {file = "ruff-0.7.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:b19fafe261bf741bca2764c14cbb4ee1819b67adb63ebc2db6401dcd652e3748"},
-    {file = "ruff-0.7.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:28bd8220f4d8f79d590db9e2f6a0674f75ddbc3847277dd44ac1f8d30684b828"},
-    {file = "ruff-0.7.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:9fd67094e77efbea932e62b5d2483006154794040abb3a5072e659096415ae1e"},
-    {file = "ruff-0.7.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:576305393998b7bd6c46018f8104ea3a9cb3fa7908c21d8580e3274a3b04b691"},
-    {file = "ruff-0.7.2-py3-none-win32.whl", hash = "sha256:fa993cfc9f0ff11187e82de874dfc3611df80852540331bc85c75809c93253a8"},
-    {file = "ruff-0.7.2-py3-none-win_amd64.whl", hash = "sha256:dd8800cbe0254e06b8fec585e97554047fb82c894973f7ff18558eee33d1cb88"},
-    {file = "ruff-0.7.2-py3-none-win_arm64.whl", hash = "sha256:bb8368cd45bba3f57bb29cbb8d64b4a33f8415d0149d2655c5c8539452ce7760"},
-    {file = "ruff-0.7.2.tar.gz", hash = "sha256:2b14e77293380e475b4e3a7a368e14549288ed2931fce259a6f99978669e844f"},
+    {file = "ruff-0.7.3-py3-none-linux_armv6l.whl", hash = "sha256:34f2339dc22687ec7e7002792d1f50712bf84a13d5152e75712ac08be565d344"},
+    {file = "ruff-0.7.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:fb397332a1879b9764a3455a0bb1087bda876c2db8aca3a3cbb67b3dbce8cda0"},
+    {file = "ruff-0.7.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:37d0b619546103274e7f62643d14e1adcbccb242efda4e4bdb9544d7764782e9"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d59f0c3ee4d1a6787614e7135b72e21024875266101142a09a61439cb6e38a5"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:44eb93c2499a169d49fafd07bc62ac89b1bc800b197e50ff4633aed212569299"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6d0242ce53f3a576c35ee32d907475a8d569944c0407f91d207c8af5be5dae4e"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:6b6224af8b5e09772c2ecb8dc9f3f344c1aa48201c7f07e7315367f6dd90ac29"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c50f95a82b94421c964fae4c27c0242890a20fe67d203d127e84fbb8013855f5"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7f3eff9961b5d2644bcf1616c606e93baa2d6b349e8aa8b035f654df252c8c67"},
+    {file = "ruff-0.7.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8963cab06d130c4df2fd52c84e9f10d297826d2e8169ae0c798b6221be1d1d2"},
+    {file = "ruff-0.7.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:61b46049d6edc0e4317fb14b33bd693245281a3007288b68a3f5b74a22a0746d"},
+    {file = "ruff-0.7.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:10ebce7696afe4644e8c1a23b3cf8c0f2193a310c18387c06e583ae9ef284de2"},
+    {file = "ruff-0.7.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:3f36d56326b3aef8eeee150b700e519880d1aab92f471eefdef656fd57492aa2"},
+    {file = "ruff-0.7.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:5d024301109a0007b78d57ab0ba190087b43dce852e552734ebf0b0b85e4fb16"},
+    {file = "ruff-0.7.3-py3-none-win32.whl", hash = "sha256:4ba81a5f0c5478aa61674c5a2194de8b02652f17addf8dfc40c8937e6e7d79fc"},
+    {file = "ruff-0.7.3-py3-none-win_amd64.whl", hash = "sha256:588a9ff2fecf01025ed065fe28809cd5a53b43505f48b69a1ac7707b1b7e4088"},
+    {file = "ruff-0.7.3-py3-none-win_arm64.whl", hash = "sha256:1713e2c5545863cdbfe2cbce21f69ffaf37b813bfd1fb3b90dc9a6f1963f5a8c"},
+    {file = "ruff-0.7.3.tar.gz", hash = "sha256:e1d1ba2e40b6e71a61b063354d04be669ab0d39c352461f3d789cac68b54a313"},
 ]
 
 [[package]]
@@ -3102,4 +3101,4 @@ user-search = ["pyicu"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.9.0"
-content-hash = "0cd942a5193d01cbcef135a0bebd3fa0f12f7dbc63899d6f1c301e0649e9d902"
+content-hash = "d71159b19349fdc0b7cd8e06e8c8778b603fc37b941c6df34ddc31746783d94d"


@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
 
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.119.0rc1"
+version = "1.120.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -320,7 +320,7 @@ all = [
 # failing on new releases. Keeping lower bounds loose here means that dependabot
 # can bump versions without having to update the content-hash in the lockfile.
 # This helps prevents merge conflicts when running a batch of dependabot updates.
-ruff = "0.7.2"
+ruff = "0.7.3"
 
 # Type checking only works with the pydantic.v1 compat module from pydantic v2
 pydantic = "^2"

rust/src/events/filter.rs (new file, 107 lines)

@ -0,0 +1,107 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2024 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use std::collections::HashMap;
use pyo3::{exceptions::PyValueError, pyfunction, PyResult};
use crate::{
identifier::UserID,
matrix_const::{
HISTORY_VISIBILITY_INVITED, HISTORY_VISIBILITY_JOINED, MEMBERSHIP_INVITE, MEMBERSHIP_JOIN,
},
};
#[pyfunction(name = "event_visible_to_server")]
pub fn event_visible_to_server_py(
sender: String,
target_server_name: String,
history_visibility: String,
erased_senders: HashMap<String, bool>,
partial_state_invisible: bool,
memberships: Vec<(String, String)>, // (state_key, membership)
) -> PyResult<bool> {
event_visible_to_server(
sender,
target_server_name,
history_visibility,
erased_senders,
partial_state_invisible,
memberships,
)
.map_err(|e| PyValueError::new_err(format!("{e}")))
}
/// Return whether the target server is allowed to see the event.
///
/// For a fully stated room, the target server is allowed to see an event E if:
/// - the state at E has world readable or shared history vis, OR
/// - the state at E says that the target server is in the room.
///
/// For a partially stated room, the target server is allowed to see E if:
/// - E was created by this homeserver, AND:
/// - the partial state at E has world readable or shared history vis, OR
/// - the partial state at E says that the target server is in the room.
pub fn event_visible_to_server(
sender: String,
target_server_name: String,
history_visibility: String,
erased_senders: HashMap<String, bool>,
partial_state_invisible: bool,
memberships: Vec<(String, String)>, // (state_key, membership)
) -> anyhow::Result<bool> {
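// Events from an erased (deactivated) sender are never shared with other servers.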
if let Some(&erased) = erased_senders.get(&sender) {
if erased {
return Ok(false);
}
}
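// In a partially-stated room we may not know the true state at this event, so the
// caller flags events that cannot safely be shown to the target server yet.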
if partial_state_invisible {
return Ok(false);
}
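// With `shared` or `world_readable` history visibility, the event is visible to
// every server regardless of membership.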
if history_visibility != HISTORY_VISIBILITY_INVITED
&& history_visibility != HISTORY_VISIBILITY_JOINED
{
return Ok(true);
}
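// Otherwise, check the target server's membership at this event: the caller passes
// only membership entries whose state_key belongs to the target server.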
let mut visible = false;
for (state_key, membership) in memberships {
let state_key = UserID::try_from(state_key.as_ref())
.map_err(|e| anyhow::anyhow!(format!("invalid user_id ({state_key}): {e}")))?;
if state_key.server_name() != target_server_name {
return Err(anyhow::anyhow!(
"state_key.server_name ({}) does not match target_server_name ({target_server_name})",
state_key.server_name()
));
}
match membership.as_str() {
MEMBERSHIP_INVITE => {
if history_visibility == HISTORY_VISIBILITY_INVITED {
visible = true;
break;
}
}
MEMBERSHIP_JOIN => {
visible = true;
break;
}
_ => continue,
}
}
Ok(visible)
}


@@ -22,15 +22,17 @@
 use pyo3::{
     types::{PyAnyMethods, PyModule, PyModuleMethods},
-    Bound, PyResult, Python,
+    wrap_pyfunction, Bound, PyResult, Python,
 };
 
+pub mod filter;
 mod internal_metadata;
 
 /// Called when registering modules with python.
 pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
     let child_module = PyModule::new_bound(py, "events")?;
     child_module.add_class::<internal_metadata::EventInternalMetadata>()?;
+    child_module.add_function(wrap_pyfunction!(filter::event_visible_to_server_py, m)?)?;
 
     m.add_submodule(&child_module)?;
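Once registered, the function is callable from Python as `event_visible_to_server` on the compiled `events` submodule. A minimal usage sketch (the argument values are illustrative; the import path assumes the built `synapse.synapse_rust` extension):

```python
# Sketch of calling the new Rust filtering helper from Python.
from synapse.synapse_rust.events import event_visible_to_server

visible = event_visible_to_server(
    "@alice:example.com",           # sender
    "remote.example.org",           # target_server_name
    "shared",                       # history_visibility
    {"@alice:example.com": False},  # erased_senders
    False,                          # partial_state_invisible
    [],                             # (state_key, membership) pairs for the target server
)
assert visible  # shared history visibility is visible to any server
```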

rust/src/identifier.rs (new file, 86 lines)

@ -0,0 +1,86 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2024 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
//! # Matrix Identifiers
//!
//! This module contains definitions and utilities for working with matrix identifiers.
use std::{fmt, ops::Deref};
/// Errors that can occur when parsing a matrix identifier.
#[derive(Clone, Debug, PartialEq)]
pub enum IdentifierError {
IncorrectSigil,
MissingColon,
}
impl fmt::Display for IdentifierError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
/// A Matrix user_id.
#[derive(Clone, Debug, PartialEq)]
pub struct UserID(String);
impl UserID {
/// Returns the `localpart` of the user_id.
pub fn localpart(&self) -> &str {
&self[1..self.colon_pos()]
}
/// Returns the `server_name` / `domain` of the user_id.
pub fn server_name(&self) -> &str {
&self[self.colon_pos() + 1..]
}
/// Returns the position of the ':' inside of the user_id.
/// Used when splitting the user_id into its respective parts.
fn colon_pos(&self) -> usize {
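// Cannot panic: `TryFrom<&str>` rejects any user_id without a ':'.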
self.find(':').unwrap()
}
}
impl TryFrom<&str> for UserID {
type Error = IdentifierError;
/// Will try creating a `UserID` from the provided `&str`.
/// Can fail if the user_id is incorrectly formatted.
fn try_from(s: &str) -> Result<Self, Self::Error> {
if !s.starts_with('@') {
return Err(IdentifierError::IncorrectSigil);
}
if s.find(':').is_none() {
return Err(IdentifierError::MissingColon);
}
Ok(UserID(s.to_string()))
}
}
impl Deref for UserID {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl fmt::Display for UserID {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}


@@ -6,6 +6,8 @@ pub mod acl;
 pub mod errors;
 pub mod events;
 pub mod http;
+pub mod identifier;
+pub mod matrix_const;
 pub mod push;
 pub mod rendezvous;

rust/src/matrix_const.rs (new file, 28 lines)

@ -0,0 +1,28 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2024 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
//! # Matrix Constants
//!
//! This module contains definitions for constant values described by the matrix specification.
pub const HISTORY_VISIBILITY_WORLD_READABLE: &str = "world_readable";
pub const HISTORY_VISIBILITY_SHARED: &str = "shared";
pub const HISTORY_VISIBILITY_INVITED: &str = "invited";
pub const HISTORY_VISIBILITY_JOINED: &str = "joined";
pub const MEMBERSHIP_BAN: &str = "ban";
pub const MEMBERSHIP_LEAVE: &str = "leave";
pub const MEMBERSHIP_KNOCK: &str = "knock";
pub const MEMBERSHIP_INVITE: &str = "invite";
pub const MEMBERSHIP_JOIN: &str = "join";


@@ -23,7 +23,6 @@ use anyhow::bail;
 use anyhow::Context;
 use anyhow::Error;
 use lazy_static::lazy_static;
-use regex;
 use regex::Regex;
 use regex::RegexBuilder;


@@ -88,6 +88,7 @@ from synapse.storage.databases.main.relations import RelationsWorkerStore
 from synapse.storage.databases.main.room import RoomBackgroundUpdateStore
 from synapse.storage.databases.main.roommember import RoomMemberBackgroundUpdateStore
 from synapse.storage.databases.main.search import SearchBackgroundUpdateStore
+from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
 from synapse.storage.databases.main.state import MainStateBackgroundUpdateStore
 from synapse.storage.databases.main.stats import StatsStore
 from synapse.storage.databases.main.user_directory import (
@@ -255,6 +256,7 @@ class Store(
     ReceiptsBackgroundUpdateStore,
     RelationsWorkerStore,
     EventFederationWorkerStore,
+    SlidingSyncStore,
 ):
     def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
         return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)


@@ -365,11 +365,6 @@ class ExperimentalConfig(Config):
         # MSC3874: Filtering /messages with rel_types / not_rel_types.
         self.msc3874_enabled: bool = experimental.get("msc3874_enabled", False)
 
-        # MSC3886: Simple client rendezvous capability
-        self.msc3886_endpoint: Optional[str] = experimental.get(
-            "msc3886_endpoint", None
-        )
-
         # MSC3890: Remotely silence local notifications
         # Note: This option requires "experimental_features.msc3391_enabled" to be
         # set to "true", in order to communicate account data deletions to clients.


@@ -272,9 +272,7 @@ class ContentRepositoryConfig(Config):
             remote_media_lifetime
         )
 
-        self.enable_authenticated_media = config.get(
-            "enable_authenticated_media", False
-        )
+        self.enable_authenticated_media = config.get("enable_authenticated_media", True)
 
     def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
         assert data_dir_path is not None


@@ -215,9 +215,6 @@ class HttpListenerConfig:
     additional_resources: Dict[str, dict] = attr.Factory(dict)
     tag: Optional[str] = None
     request_id_header: Optional[str] = None
-    # If true, the listener will return CORS response headers compatible with MSC3886:
-    # https://github.com/matrix-org/matrix-spec-proposals/pull/3886
-    experimental_cors_msc3886: bool = False
 
 
 @attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -1004,7 +1001,6 @@ def parse_listener_def(num: int, listener: Any) -> ListenerConfig:
         additional_resources=listener.get("additional_resources", {}),
         tag=listener.get("tag"),
         request_id_header=listener.get("request_id_header"),
-        experimental_cors_msc3886=listener.get("experimental_cors_msc3886", False),
     )
 
     if socket_path:


@@ -39,6 +39,8 @@ from synapse.replication.http.devices import ReplicationUploadKeysForUserRestSer
 from synapse.types import (
     JsonDict,
     JsonMapping,
+    ScheduledTask,
+    TaskStatus,
     UserID,
     get_domain_from_id,
     get_verify_key_from_cross_signing_key,
@@ -70,6 +72,7 @@ class E2eKeysHandler:
         self.is_mine = hs.is_mine
         self.clock = hs.get_clock()
         self._worker_lock_handler = hs.get_worker_locks_handler()
+        self._task_scheduler = hs.get_task_scheduler()
 
         federation_registry = hs.get_federation_registry()
 
@@ -116,6 +119,10 @@
             hs.config.experimental.msc3984_appservice_key_query
         )
 
+        self._task_scheduler.register_action(
+            self._delete_old_one_time_keys_task, "delete_old_otks"
+        )
+
     @trace
     @cancellable
     async def query_devices(
@@ -1574,6 +1581,45 @@
                 return True
         return False
 
+    async def _delete_old_one_time_keys_task(
+        self, task: ScheduledTask
+    ) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
+        """Scheduler task to delete old one time keys.
+
+        Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
+        that it could still have old OTKs that the client has dropped. This task is scheduled exactly once
+        by a database schema delta file, and it clears out old one-time-keys that look like they came from libolm.
+        """
+        last_user = task.result.get("from_user", "") if task.result else ""
+        while True:
+            # We process users in batches of 100
+            users, rowcount = await self.store.delete_old_otks_for_next_user_batch(
+                last_user, 100
+            )
+            if len(users) == 0:
+                # We're done!
+                return TaskStatus.COMPLETE, None, None
+
+            logger.debug(
+                "Deleted %i old one-time-keys for users '%s'..'%s'",
+                rowcount,
+                users[0],
+                users[-1],
+            )
+            last_user = users[-1]
+
+            # Store our progress
+            await self._task_scheduler.update_task(
+                task.id, result={"from_user": last_user}
+            )
+
+            # Sleep a little before doing the next user.
+            #
+            # matrix.org has about 15M users in the e2e_one_time_keys_json table
+            # (comprising 20M devices). We want this to take about a week, so we need
+            # to do about one batch of 100 users every 4 seconds.
+            await self.clock.sleep(4)
+
 
 def _check_cross_signing_key(
     key: JsonDict, user_id: str, key_type: str, signing_key: Optional[VerifyKey] = None


@@ -36,7 +36,6 @@ from typing import (
 )
 
 import attr
-import multipart
 import treq
 from canonicaljson import encode_canonical_json
 from netaddr import AddrFormatError, IPAddress, IPSet
@@ -93,6 +92,20 @@ from synapse.util.async_helpers import timeout_deferred
 if TYPE_CHECKING:
     from synapse.server import HomeServer
 
+# Support both import names for the `python-multipart` (PyPI) library,
+# which renamed its package name from `multipart` to `python_multipart`
+# in 0.0.13 (though supports the old import name for compatibility).
+# Note that the `multipart` package name conflicts with `multipart` (PyPI)
+# so we should prefer importing from `python_multipart` when possible.
+try:
+    from python_multipart import MultipartParser
+
+    if TYPE_CHECKING:
+        from python_multipart import multipart
+except ImportError:
+    from multipart import MultipartParser  # type: ignore[no-redef]
+
+
 logger = logging.getLogger(__name__)
 
 outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
@@ -1039,7 +1052,7 @@ class _MultipartParserProtocol(protocol.Protocol):
         self.deferred = deferred
         self.boundary = boundary
         self.max_length = max_length
-        self.parser: Optional[multipart.MultipartParser] = None
+        self.parser: Optional[MultipartParser] = None
         self.multipart_response = MultipartResponse()
         self.has_redirect = False
         self.in_json = False
@@ -1097,12 +1110,12 @@
                 self.deferred.errback()
             self.file_length += end - start
 
-        callbacks: "multipart.multipart.MultipartCallbacks" = {
+        callbacks: "multipart.MultipartCallbacks" = {
             "on_header_field": on_header_field,
             "on_header_value": on_header_value,
            "on_part_data": on_part_data,
         }
-        self.parser = multipart.MultipartParser(self.boundary, callbacks)
+        self.parser = MultipartParser(self.boundary, callbacks)
 
         self.total_length += len(incoming_data)
         if self.max_length is not None and self.total_length >= self.max_length:

View file

@@ -921,15 +921,6 @@ def set_cors_headers(request: "SynapseRequest") -> None:
b"Access-Control-Expose-Headers", b"Access-Control-Expose-Headers",
b"Synapse-Trace-Id, Server, ETag", b"Synapse-Trace-Id, Server, ETag",
) )
elif request.experimental_cors_msc3886:
request.setHeader(
b"Access-Control-Allow-Headers",
b"X-Requested-With, Content-Type, Authorization, Date, If-Match, If-None-Match",
)
request.setHeader(
b"Access-Control-Expose-Headers",
b"ETag, Location, X-Max-Bytes",
)
else: else:
request.setHeader( request.setHeader(
b"Access-Control-Allow-Headers", b"Access-Control-Allow-Headers",

View file

@@ -94,7 +94,6 @@ class SynapseRequest(Request):
self.reactor = site.reactor self.reactor = site.reactor
self._channel = channel # this is used by the tests self._channel = channel # this is used by the tests
self.start_time = 0.0 self.start_time = 0.0
self.experimental_cors_msc3886 = site.experimental_cors_msc3886
# The requester, if authenticated. For federation requests this is the # The requester, if authenticated. For federation requests this is the
# server name, for client requests this is the Requester object. # server name, for client requests this is the Requester object.
@@ -666,10 +665,6 @@ class SynapseSite(ProxySite):
request_id_header = config.http_options.request_id_header request_id_header = config.http_options.request_id_header
self.experimental_cors_msc3886: bool = (
config.http_options.experimental_cors_msc3886
)
def request_factory(channel: HTTPChannel, queued: bool) -> Request: def request_factory(channel: HTTPChannel, queued: bool) -> Request:
return request_class( return request_class(
channel, channel,

View file

@@ -259,7 +259,7 @@ class MediaRepository:
""" """
media = await self.store.get_local_media(media_id) media = await self.store.get_local_media(media_id)
if media is None: if media is None:
raise SynapseError(404, "Unknow media ID", errcode=Codes.NOT_FOUND) raise NotFoundError("Unknown media ID")
if media.user_id != auth_user.to_string(): if media.user_id != auth_user.to_string():
raise SynapseError( raise SynapseError(

View file

@@ -34,51 +34,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
# n.b [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) has now been closed.
# However, we want to keep this implementation around for some time.
# TODO: define an end-of-life date for this implementation.
class MSC3886RendezvousServlet(RestServlet):
"""
This is a placeholder implementation of [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886)
simple client rendezvous capability that is used by the "Sign in with QR" functionality.
This implementation only serves as a 307 redirect to a configured server rather than being a full implementation.
A module that implements the full functionality is available at: https://pypi.org/project/matrix-http-rendezvous-synapse/.
Request:
POST /rendezvous HTTP/1.1
Content-Type: ...
...
Response:
HTTP/1.1 307
Location: <configured endpoint>
"""
PATTERNS = client_patterns(
"/org.matrix.msc3886/rendezvous$", releases=[], v1=False, unstable=True
)
def __init__(self, hs: "HomeServer"):
super().__init__()
redirection_target: Optional[str] = hs.config.experimental.msc3886_endpoint
assert (
redirection_target is not None
), "Servlet is only registered if there is a redirection target"
self.endpoint = redirection_target.encode("utf-8")
async def on_POST(self, request: SynapseRequest) -> None:
respond_with_redirect(
request, self.endpoint, statusCode=TEMPORARY_REDIRECT, cors=True
)
# PUT, GET and DELETE are not implemented as they should be fulfilled by the redirect target.
class MSC4108DelegationRendezvousServlet(RestServlet): class MSC4108DelegationRendezvousServlet(RestServlet):
PATTERNS = client_patterns( PATTERNS = client_patterns(
"/org.matrix.msc4108/rendezvous$", releases=[], v1=False, unstable=True "/org.matrix.msc4108/rendezvous$", releases=[], v1=False, unstable=True
@@ -114,9 +69,6 @@ class MSC4108RendezvousServlet(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None: def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
if hs.config.experimental.msc3886_endpoint is not None:
MSC3886RendezvousServlet(hs).register(http_server)
if hs.config.experimental.msc4108_enabled: if hs.config.experimental.msc4108_enabled:
MSC4108RendezvousServlet(hs).register(http_server) MSC4108RendezvousServlet(hs).register(http_server)

View file

@@ -149,9 +149,6 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc3881": msc3881_enabled, "org.matrix.msc3881": msc3881_enabled,
# Adds support for filtering /messages by event relation. # Adds support for filtering /messages by event relation.
"org.matrix.msc3874": self.config.experimental.msc3874_enabled, "org.matrix.msc3874": self.config.experimental.msc3874_enabled,
# Adds support for simple HTTP rendezvous as per MSC3886
"org.matrix.msc3886": self.config.experimental.msc3886_endpoint
is not None,
# Adds support for relation-based redactions as per MSC3912. # Adds support for relation-based redactions as per MSC3912.
"org.matrix.msc3912": self.config.experimental.msc3912_enabled, "org.matrix.msc3912": self.config.experimental.msc3912_enabled,
# Whether recursively provide relations is supported. # Whether recursively provide relations is supported.

View file

@@ -94,7 +94,7 @@ class BaseUploadServlet(RestServlet):
# if headers.hasHeader(b"Content-Disposition"): # if headers.hasHeader(b"Content-Disposition"):
# disposition = headers.getRawHeaders(b"Content-Disposition")[0] # disposition = headers.getRawHeaders(b"Content-Disposition")[0]
# TODO(markjh): parse content-dispostion # TODO(markjh): parse content-disposition
return content_length, upload_name, media_type return content_length, upload_name, media_type

View file

@@ -1453,6 +1453,54 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
impl, impl,
) )
async def delete_old_otks_for_next_user_batch(
self, after_user_id: str, number_of_users: int
) -> Tuple[List[str], int]:
"""Deletes old OTKs belonging to the next batch of users
Returns:
`(users, rows)`, where:
* `users` is the user IDs of the updated users. An empty list if we are done.
* `rows` is the number of deleted rows
"""
def impl(txn: LoggingTransaction) -> Tuple[List[str], int]:
# Find a batch of users
txn.execute(
"""
SELECT DISTINCT(user_id) FROM e2e_one_time_keys_json
WHERE user_id > ?
ORDER BY user_id
LIMIT ?
""",
(after_user_id, number_of_users),
)
users = [row[0] for row in txn.fetchall()]
if len(users) == 0:
return users, 0
# Delete any old OTKs belonging to those users.
#
# We only actually consider OTKs whose key ID is 6 characters long. These
# keys were likely made by libolm rather than Vodozemac; libolm only kept
# 100 private OTKs, so was far more vulnerable than Vodozemac to throwing
# away keys prematurely.
clause, args = make_in_list_sql_clause(
txn.database_engine, "user_id", users
)
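# Only delete keys uploaded more than a week ago: the cutoff below is
# bound to the `ts_added_ms < ?` placeholder in the DELETE statement.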
sql = f"""
DELETE FROM e2e_one_time_keys_json
WHERE {clause} AND ts_added_ms < ? AND length(key_id) = 6
"""
args.append(self._clock.time_msec() - (7 * 24 * 3600 * 1000))
txn.execute(sql, args)
return users, txn.rowcount
return await self.db_pool.runInteraction(
"delete_old_otks_for_next_user_batch", impl
)
class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore): class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
def __init__( def __init__(

View file

@@ -21,7 +21,11 @@ import attr
from synapse.api.errors import SlidingSyncUnknownPosition from synapse.api.errors import SlidingSyncUnknownPosition
from synapse.logging.opentracing import log_kv from synapse.logging.opentracing import log_kv
from synapse.storage._base import SQLBaseStore, db_to_json from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import LoggingTransaction from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
)
from synapse.types import MultiWriterStreamToken, RoomStreamToken from synapse.types import MultiWriterStreamToken, RoomStreamToken
from synapse.types.handlers.sliding_sync import ( from synapse.types.handlers.sliding_sync import (
HaveSentRoom, HaveSentRoom,
@@ -35,12 +39,28 @@ from synapse.util import json_encoder
from synapse.util.caches.descriptors import cached from synapse.util.caches.descriptors import cached
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main import DataStore from synapse.storage.databases.main import DataStore
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
class SlidingSyncStore(SQLBaseStore): class SlidingSyncStore(SQLBaseStore):
def __init__(
self,
database: DatabasePool,
db_conn: LoggingDatabaseConnection,
hs: "HomeServer",
):
super().__init__(database, db_conn, hs)
self.db_pool.updates.register_background_index_update(
update_name="sliding_sync_connection_room_configs_required_state_id_idx",
index_name="sliding_sync_connection_room_configs_required_state_id_idx",
table="sliding_sync_connection_room_configs",
columns=("required_state_id",),
)
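# For reference, the background update registered above amounts to an index
# of this shape (a sketch; the exact DDL is generated by the background
# updater, which builds indexes without blocking writes where it can, e.g.
# CONCURRENTLY on Postgres):
#
#     CREATE INDEX sliding_sync_connection_room_configs_required_state_id_idx
#         ON sliding_sync_connection_room_configs (required_state_id);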
async def get_latest_bump_stamp_for_room( async def get_latest_bump_stamp_for_room(
self, self,
room_id: str, room_id: str,

View file

@@ -0,0 +1,19 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
-- that it could still have old OTKs that the client has dropped.
--
-- We create a scheduled task which will drop old OTKs, to flush them out.
INSERT INTO scheduled_tasks(id, action, status, timestamp)
VALUES ('delete_old_otks_task', 'delete_old_otks', 'scheduled', extract(epoch from current_timestamp) * 1000);

View file

@@ -0,0 +1,19 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
-- that it could still have old OTKs that the client has dropped.
--
-- We create a scheduled task which will drop old OTKs, to flush them out.
INSERT INTO scheduled_tasks(id, action, status, timestamp)
VALUES ('delete_old_otks_task', 'delete_old_otks', 'scheduled', strftime('%s', 'now') * 1000);
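-- (This delta is the SQLite twin of the preceding Postgres one:
-- `strftime('%s', 'now')` and `extract(epoch from current_timestamp)` both
-- produce seconds since the epoch, multiplied by 1000 to match the
-- millisecond timestamps used by the task scheduler.)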

View file

@@ -0,0 +1,20 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Add an index on sliding_sync_connection_room_configs(required_state_id), so
-- that when we delete entries in `sliding_sync_connection_required_state` it's
-- efficient for Postgres to check they've been deleted from
-- `sliding_sync_connection_room_configs` too
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8805, 'sliding_sync_connection_room_configs_required_state_id_idx', '{}');

View file

@@ -10,7 +10,7 @@
# See the GNU Affero General Public License for more details: # See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>. # <https://www.gnu.org/licenses/agpl-3.0.html>.
from typing import Optional from typing import List, Mapping, Optional, Tuple
from synapse.types import JsonDict from synapse.types import JsonDict
@@ -105,3 +105,29 @@ class EventInternalMetadata:
def is_notifiable(self) -> bool: def is_notifiable(self) -> bool:
"""Whether this event can trigger a push notification""" """Whether this event can trigger a push notification"""
def event_visible_to_server(
sender: str,
target_server_name: str,
history_visibility: str,
erased_senders: Mapping[str, bool],
partial_state_invisible: bool,
memberships: List[Tuple[str, str]],
) -> bool:
"""Determine whether the server is allowed to see the unredacted event.
Args:
sender: The sender of the event.
target_server_name: The server we want to send the event to.
history_visibility: The history_visibility value at the event.
erased_senders: A mapping of users and whether they have requested erasure. If a
user is not in the map, it is treated as though they haven't requested erasure.
partial_state_invisible: Whether the event should be treated as invisible due to
the partial state status of the room.
memberships: A list of membership state information at the event for users
matching the `target_server_name`. Each list item must contain a tuple of
(state_key, membership).
Returns:
Whether the server is allowed to see the unredacted event.
"""

View file

@@ -27,7 +27,6 @@ from typing import (
Final, Final,
FrozenSet, FrozenSet,
List, List,
Mapping,
Optional, Optional,
Sequence, Sequence,
Set, Set,
@@ -48,6 +47,7 @@ from synapse.events.utils import clone_event, prune_event
from synapse.logging.opentracing import trace from synapse.logging.opentracing import trace
from synapse.storage.controllers import StorageControllers from synapse.storage.controllers import StorageControllers
from synapse.storage.databases.main import DataStore from synapse.storage.databases.main import DataStore
from synapse.synapse_rust.events import event_visible_to_server
from synapse.types import RetentionPolicy, StateMap, StrCollection, get_domain_from_id from synapse.types import RetentionPolicy, StateMap, StrCollection, get_domain_from_id
from synapse.types.state import StateFilter from synapse.types.state import StateFilter
from synapse.util import Clock from synapse.util import Clock
@@ -628,17 +628,6 @@ async def filter_events_for_server(
"""Filter a list of events based on whether the target server is allowed to """Filter a list of events based on whether the target server is allowed to
see them. see them.
For a fully stated room, the target server is allowed to see an event E if:
- the state at E has world readable or shared history vis, OR
- the state at E says that the target server is in the room.
For a partially stated room, the target server is allowed to see E if:
- E was created by this homeserver, AND:
- the partial state at E has world readable or shared history vis, OR
- the partial state at E says that the target server is in the room.
TODO: state before or state after?
Args: Args:
storage storage
target_server_name target_server_name
@@ -655,35 +644,6 @@
The filtered events. The filtered events.
""" """
def is_sender_erased(event: EventBase, erased_senders: Mapping[str, bool]) -> bool:
if erased_senders and erased_senders[event.sender]:
logger.info("Sender of %s has been erased, redacting", event.event_id)
return True
return False
def check_event_is_visible(
visibility: str, memberships: StateMap[EventBase]
) -> bool:
if visibility not in (HistoryVisibility.INVITED, HistoryVisibility.JOINED):
return True
# We now loop through all membership events looking for
# membership states for the requesting server to determine
# if the server is either in the room or has been invited
# into the room.
for ev in memberships.values():
assert get_domain_from_id(ev.state_key) == target_server_name
memtype = ev.membership
if memtype == Membership.JOIN:
return True
elif memtype == Membership.INVITE:
if visibility == HistoryVisibility.INVITED:
return True
# server has no users in the room: redact
return False
if filter_out_erased_senders: if filter_out_erased_senders:
erased_senders = await storage.main.are_users_erased(e.sender for e in events) erased_senders = await storage.main.are_users_erased(e.sender for e in events)
else: else:
@@ -726,20 +686,16 @@
target_server_name, target_server_name,
) )
def include_event_in_output(e: EventBase) -> bool:
erased = is_sender_erased(e, erased_senders)
visible = check_event_is_visible(
event_to_history_vis[e.event_id], event_to_memberships.get(e.event_id, {})
)
if e.event_id in partial_state_invisible_event_ids:
visible = False
return visible and not erased
to_return = [] to_return = []
for e in events: for e in events:
if include_event_in_output(e): if event_visible_to_server(
sender=e.sender,
target_server_name=target_server_name,
history_visibility=event_to_history_vis[e.event_id],
erased_senders=erased_senders,
partial_state_invisible=e.event_id in partial_state_invisible_event_ids,
memberships=list(event_to_memberships.get(e.event_id, {}).values()),
):
to_return.append(e) to_return.append(e)
elif redact: elif redact:
to_return.append(prune_event(e)) to_return.append(prune_event(e))
@@ -796,7 +752,7 @@ async def _event_to_history_vis(
async def _event_to_memberships( async def _event_to_memberships(
storage: StorageControllers, events: Collection[EventBase], server_name: str storage: StorageControllers, events: Collection[EventBase], server_name: str
) -> Dict[str, StateMap[EventBase]]: ) -> Dict[str, StateMap[Tuple[str, str]]]:
"""Get the remote membership list at each of the given events """Get the remote membership list at each of the given events
Returns a map from event id to state map, which will contain only membership events Returns a map from event id to state map, which will contain only membership events
@@ -849,7 +805,7 @@ async def _event_to_memberships(
return { return {
e_id: { e_id: {
key: event_map[inner_e_id] key: (event_map[inner_e_id].state_key, event_map[inner_e_id].membership)
for key, inner_e_id in key_to_eid.items() for key, inner_e_id in key_to_eid.items()
if inner_e_id in event_map if inner_e_id in event_map
} }

View file

@@ -19,6 +19,7 @@
# [This file includes modifications made by New Vector Limited] # [This file includes modifications made by New Vector Limited]
# #
# #
import time
from typing import Dict, Iterable from typing import Dict, Iterable
from unittest import mock from unittest import mock
@@ -1826,3 +1827,72 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
) )
self.assertIs(exists, True) self.assertIs(exists, True)
self.assertIs(replaceable_without_uia, False) self.assertIs(replaceable_without_uia, False)
def test_delete_old_one_time_keys(self) -> None:
"""Test the db migration that clears out old OTKs"""
# We upload two sets of keys, one just over a week ago, and one just less than
# a week ago. Each batch contains some keys that match the deletion pattern
# (key IDs of 6 chars), and some that do not.
#
# Finally, set the scheduled task going, and check what gets deleted.
user_id = "@user000:" + self.hs.hostname
device_id = "xyz"
# The scheduled task should be for "now" in real, wallclock time, so
# set the test reactor to just over a week ago.
self.reactor.advance(time.time() - 7.5 * 24 * 3600)
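# (reactor.advance() moves the fake clock forward by the given number of
# seconds; the test reactor starts near an epoch of zero, so this leaves
# "reactor time" at roughly wallclock-now minus 7.5 days.)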
# Upload some keys
self.get_success(
self.handler.upload_keys_for_user(
user_id,
device_id,
{
"one_time_keys": {
# some keys to delete
"alg1:AAAAAA": "key1",
"alg2:AAAAAB": {"key": "key2", "signatures": {"k1": "sig1"}},
# A key to *not* delete
"alg2:AAAAAAAAAA": {"key": "key3"},
}
},
)
)
# A day passes
self.reactor.advance(24 * 3600)
# Upload some more keys
self.get_success(
self.handler.upload_keys_for_user(
user_id,
device_id,
{
"one_time_keys": {
# some keys which match the pattern
"alg1:BAAAAA": "key1",
"alg2:BAAAAB": {"key": "key2", "signatures": {"k1": "sig1"}},
# A key to *not* delete
"alg2:BAAAAAAAAA": {"key": "key3"},
}
},
)
)
# The rest of the week passes, which should set the scheduled task going.
self.reactor.advance(6.5 * 24 * 3600)
# Check what we're left with in the database
remaining_key_ids = {
row[0]
for row in self.get_success(
self.handler.store.db_pool.simple_select_list(
"e2e_one_time_keys_json", None, ["key_id"]
)
)
}
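# The first batch is now over a week old, so its 6-character key IDs
# ("AAAAAA", "AAAAAB") have been deleted; "AAAAAAAAAA" survives because
# its key ID is 10 characters. The second batch is under a week old, so
# all of it survives.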
self.assertEqual(
remaining_key_ids, {"AAAAAAAAAA", "BAAAAA", "BAAAAB", "BAAAAAAAAA"}
)

View file

@@ -164,7 +164,6 @@ class TerseJsonTestCase(LoggerCleanupMixin, TestCase):
site.site_tag = "test-site" site.site_tag = "test-site"
site.server_version_string = "Server v1" site.server_version_string = "Server v1"
site.reactor = Mock() site.reactor = Mock()
site.experimental_cors_msc3886 = False
request = SynapseRequest( request = SynapseRequest(
cast(HTTPChannel, FakeChannel(site, self.reactor)), site cast(HTTPChannel, FakeChannel(site, self.reactor)), site
) )

View file

@@ -419,6 +419,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
return channel return channel
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_handle_missing_content_type(self) -> None: def test_handle_missing_content_type(self) -> None:
channel = self._req( channel = self._req(
b"attachment; filename=out" + self.test_image.extension, b"attachment; filename=out" + self.test_image.extension,
@@ -430,6 +435,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
headers.getRawHeaders(b"Content-Type"), [b"application/octet-stream"] headers.getRawHeaders(b"Content-Type"), [b"application/octet-stream"]
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_disposition_filename_ascii(self) -> None: def test_disposition_filename_ascii(self) -> None:
""" """
If the filename is filename=<ascii> then Synapse will decode it as an If the filename is filename=<ascii> then Synapse will decode it as an
@@ -450,6 +460,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
], ],
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_disposition_filenamestar_utf8escaped(self) -> None: def test_disposition_filenamestar_utf8escaped(self) -> None:
""" """
If the filename is filename=*utf8''<utf8 escaped> then Synapse will If the filename is filename=*utf8''<utf8 escaped> then Synapse will
@@ -475,6 +490,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
], ],
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_disposition_none(self) -> None: def test_disposition_none(self) -> None:
""" """
If there is no filename, Content-Disposition should only If there is no filename, Content-Disposition should only
@@ -491,6 +511,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
[b"inline" if self.test_image.is_inline else b"attachment"], [b"inline" if self.test_image.is_inline else b"attachment"],
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_thumbnail_crop(self) -> None: def test_thumbnail_crop(self) -> None:
"""Test that a cropped remote thumbnail is available.""" """Test that a cropped remote thumbnail is available."""
self._test_thumbnail( self._test_thumbnail(
@@ -500,6 +525,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
unable_to_thumbnail=self.test_image.unable_to_thumbnail, unable_to_thumbnail=self.test_image.unable_to_thumbnail,
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_thumbnail_scale(self) -> None: def test_thumbnail_scale(self) -> None:
"""Test that a scaled remote thumbnail is available.""" """Test that a scaled remote thumbnail is available."""
self._test_thumbnail( self._test_thumbnail(
@@ -509,6 +539,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
unable_to_thumbnail=self.test_image.unable_to_thumbnail, unable_to_thumbnail=self.test_image.unable_to_thumbnail,
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_invalid_type(self) -> None: def test_invalid_type(self) -> None:
"""An invalid thumbnail type is never available.""" """An invalid thumbnail type is never available."""
self._test_thumbnail( self._test_thumbnail(
@@ -519,7 +554,10 @@ class MediaRepoTests(unittest.HomeserverTestCase):
) )
@unittest.override_config( @unittest.override_config(
{"thumbnail_sizes": [{"width": 32, "height": 32, "method": "scale"}]} {
"thumbnail_sizes": [{"width": 32, "height": 32, "method": "scale"}],
"enable_authenticated_media": False,
},
) )
def test_no_thumbnail_crop(self) -> None: def test_no_thumbnail_crop(self) -> None:
""" """
@@ -533,7 +571,10 @@ class MediaRepoTests(unittest.HomeserverTestCase):
) )
@unittest.override_config( @unittest.override_config(
{"thumbnail_sizes": [{"width": 32, "height": 32, "method": "crop"}]} {
"thumbnail_sizes": [{"width": 32, "height": 32, "method": "crop"}],
"enable_authenticated_media": False,
}
) )
def test_no_thumbnail_scale(self) -> None: def test_no_thumbnail_scale(self) -> None:
""" """
@@ -546,6 +587,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
unable_to_thumbnail=self.test_image.unable_to_thumbnail, unable_to_thumbnail=self.test_image.unable_to_thumbnail,
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_thumbnail_repeated_thumbnail(self) -> None: def test_thumbnail_repeated_thumbnail(self) -> None:
"""Test that fetching the same thumbnail works, and deleting the on disk """Test that fetching the same thumbnail works, and deleting the on disk
thumbnail regenerates it. thumbnail regenerates it.
@@ -720,6 +766,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
) )
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_x_robots_tag_header(self) -> None: def test_x_robots_tag_header(self) -> None:
""" """
Tests that the `X-Robots-Tag` header is present, which informs web crawlers Tests that the `X-Robots-Tag` header is present, which informs web crawlers
@@ -733,6 +784,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
[b"noindex, nofollow, noarchive, noimageindex"], [b"noindex, nofollow, noarchive, noimageindex"],
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_cross_origin_resource_policy_header(self) -> None: def test_cross_origin_resource_policy_header(self) -> None:
""" """
Test that the Cross-Origin-Resource-Policy header is set to "cross-origin" Test that the Cross-Origin-Resource-Policy header is set to "cross-origin"
@@ -747,6 +803,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
[b"cross-origin"], [b"cross-origin"],
) )
@unittest.override_config(
{
"enable_authenticated_media": False,
}
)
def test_unknown_v3_endpoint(self) -> None: def test_unknown_v3_endpoint(self) -> None:
""" """
If the v3 endpoint fails, try the r0 one. If the v3 endpoint fails, try the r0 one.
@@ -985,6 +1046,11 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
d.callback(52428800) d.callback(52428800)
return d return d
@override_config(
{
"enable_authenticated_media": False,
}
)
@patch( @patch(
"synapse.http.matrixfederationclient.read_body_with_max_size", "synapse.http.matrixfederationclient.read_body_with_max_size",
read_body_with_max_size_30MiB, read_body_with_max_size_30MiB,
@@ -1060,6 +1126,7 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
{ {
"remote_media_download_per_second": "50M", "remote_media_download_per_second": "50M",
"remote_media_download_burst_count": "50M", "remote_media_download_burst_count": "50M",
"enable_authenticated_media": False,
} }
) )
@patch( @patch(
@@ -1119,7 +1186,12 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
) )
assert channel.code == 200 assert channel.code == 200
@override_config({"remote_media_download_burst_count": "87M"}) @override_config(
{
"remote_media_download_burst_count": "87M",
"enable_authenticated_media": False,
}
)
@patch( @patch(
"synapse.http.matrixfederationclient.read_body_with_max_size", "synapse.http.matrixfederationclient.read_body_with_max_size",
read_body_with_max_size_30MiB, read_body_with_max_size_30MiB,
@@ -1159,7 +1231,7 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
) )
assert channel2.code == 429 assert channel2.code == 429
@override_config({"max_upload_size": "29M"}) @override_config({"max_upload_size": "29M", "enable_authenticated_media": False})
@patch( @patch(
"synapse.http.matrixfederationclient.read_body_with_max_size", "synapse.http.matrixfederationclient.read_body_with_max_size",
read_body_with_max_size_30MiB, read_body_with_max_size_30MiB,

View file

@@ -40,6 +40,7 @@ from tests.http import (
from tests.replication._base import BaseMultiWorkerStreamTestCase from tests.replication._base import BaseMultiWorkerStreamTestCase
from tests.server import FakeChannel, FakeTransport, make_request from tests.server import FakeChannel, FakeTransport, make_request
from tests.test_utils import SMALL_PNG from tests.test_utils import SMALL_PNG
from tests.unittest import override_config
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -148,6 +149,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
return channel, request return channel, request
@override_config({"enable_authenticated_media": False})
def test_basic(self) -> None: def test_basic(self) -> None:
"""Test basic fetching of remote media from a single worker.""" """Test basic fetching of remote media from a single worker."""
hs1 = self.make_worker_hs("synapse.app.generic_worker") hs1 = self.make_worker_hs("synapse.app.generic_worker")
@@ -164,6 +166,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
self.assertEqual(channel.code, 200) self.assertEqual(channel.code, 200)
self.assertEqual(channel.result["body"], b"Hello!") self.assertEqual(channel.result["body"], b"Hello!")
@override_config({"enable_authenticated_media": False})
def test_download_simple_file_race(self) -> None: def test_download_simple_file_race(self) -> None:
"""Test that fetching remote media from two different processes at the """Test that fetching remote media from two different processes at the
same time works. same time works.
@@ -203,6 +206,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
# We expect only one new file to have been persisted. # We expect only one new file to have been persisted.
self.assertEqual(start_count + 1, self._count_remote_media()) self.assertEqual(start_count + 1, self._count_remote_media())
@override_config({"enable_authenticated_media": False})
def test_download_image_race(self) -> None: def test_download_image_race(self) -> None:
"""Test that fetching remote *images* from two different processes at """Test that fetching remote *images* from two different processes at
the same time works. the same time works.

View file

@@ -30,7 +30,7 @@ from twisted.web.resource import Resource
import synapse.rest.admin import synapse.rest.admin
from synapse.http.server import JsonResource from synapse.http.server import JsonResource
from synapse.rest.admin import VersionServlet from synapse.rest.admin import VersionServlet
from synapse.rest.client import login, room from synapse.rest.client import login, media, room
from synapse.server import HomeServer from synapse.server import HomeServer
from synapse.util import Clock from synapse.util import Clock
@@ -60,6 +60,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
synapse.rest.admin.register_servlets, synapse.rest.admin.register_servlets,
synapse.rest.admin.register_servlets_for_media_repo, synapse.rest.admin.register_servlets_for_media_repo,
login.register_servlets, login.register_servlets,
media.register_servlets,
room.register_servlets, room.register_servlets,
] ]
@@ -74,7 +75,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
"""Ensure a piece of media is quarantined when trying to access it.""" """Ensure a piece of media is quarantined when trying to access it."""
channel = self.make_request( channel = self.make_request(
"GET", "GET",
f"/_matrix/media/v3/download/{server_and_media_id}", f"/_matrix/client/v1/media/download/{server_and_media_id}",
shorthand=False, shorthand=False,
access_token=admin_user_tok, access_token=admin_user_tok,
) )
@@ -131,7 +132,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
# Attempt to access the media # Attempt to access the media
channel = self.make_request( channel = self.make_request(
"GET", "GET",
f"/_matrix/media/v3/download/{server_name_and_media_id}", f"/_matrix/client/v1/media/download/{server_name_and_media_id}",
shorthand=False, shorthand=False,
access_token=non_admin_user_tok, access_token=non_admin_user_tok,
) )
@@ -295,7 +296,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
# Attempt to access each piece of media # Attempt to access each piece of media
channel = self.make_request( channel = self.make_request(
"GET", "GET",
f"/_matrix/media/v3/download/{server_and_media_id_2}", f"/_matrix/client/v1/media/download/{server_and_media_id_2}",
shorthand=False, shorthand=False,
access_token=non_admin_user_tok, access_token=non_admin_user_tok,
) )

View file

@@ -96,7 +96,7 @@ class FederationTestCase(unittest.HomeserverTestCase):
self.assertEqual(400, channel.code, msg=channel.json_body) self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"]) self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# unkown order_by # unknown order_by
channel = self.make_request( channel = self.make_request(
"GET", "GET",
self.url + "?order_by=bar", self.url + "?order_by=bar",

View file

@@ -36,6 +36,7 @@ from synapse.util import Clock
from tests import unittest from tests import unittest
from tests.test_utils import SMALL_PNG from tests.test_utils import SMALL_PNG
from tests.unittest import override_config
VALID_TIMESTAMP = 1609459200000 # 2021-01-01 in milliseconds VALID_TIMESTAMP = 1609459200000 # 2021-01-01 in milliseconds
INVALID_TIMESTAMP_IN_S = 1893456000 # 2030-01-01 in seconds INVALID_TIMESTAMP_IN_S = 1893456000 # 2030-01-01 in seconds
@@ -126,6 +127,7 @@ class DeleteMediaByIDTestCase(_AdminMediaTests):
self.assertEqual(400, channel.code, msg=channel.json_body) self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual("Can only delete local media", channel.json_body["error"]) self.assertEqual("Can only delete local media", channel.json_body["error"])
@override_config({"enable_authenticated_media": False})
def test_delete_media(self) -> None: def test_delete_media(self) -> None:
""" """
Tests that deleting media completes successfully Tests that deleting media completes successfully
@@ -371,6 +373,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):
self._access_media(server_and_media_id, False) self._access_media(server_and_media_id, False)
@override_config({"enable_authenticated_media": False})
def test_keep_media_by_date(self) -> None: def test_keep_media_by_date(self) -> None:
""" """
Tests that media is not deleted if it is newer than `before_ts` Tests that media is not deleted if it is newer than `before_ts`
@@ -408,6 +411,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):
self._access_media(server_and_media_id, False) self._access_media(server_and_media_id, False)
@override_config({"enable_authenticated_media": False})
def test_keep_media_by_size(self) -> None: def test_keep_media_by_size(self) -> None:
""" """
Tests that media is not deleted if its size is smaller than or equal Tests that media is not deleted if its size is smaller than or equal
@@ -443,6 +447,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):
self._access_media(server_and_media_id, False) self._access_media(server_and_media_id, False)
@override_config({"enable_authenticated_media": False})
def test_keep_media_by_user_avatar(self) -> None: def test_keep_media_by_user_avatar(self) -> None:
""" """
Tests that we do not delete media if it is used as a user avatar Tests that we do not delete media if it is used as a user avatar
@@ -487,6 +492,7 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):
self._access_media(server_and_media_id, False) self._access_media(server_and_media_id, False)
@override_config({"enable_authenticated_media": False})
def test_keep_media_by_room_avatar(self) -> None: def test_keep_media_by_room_avatar(self) -> None:
""" """
Tests that we do not delete media if it is used as a room avatar Tests that we do not delete media if it is used as a room avatar

View file

@@ -82,7 +82,7 @@ class UserMediaStatisticsTestCase(unittest.HomeserverTestCase):
""" """
If parameters are invalid, an error is returned. If parameters are invalid, an error is returned.
""" """
# unkown order_by # unknown order_by
channel = self.make_request( channel = self.make_request(
"GET", "GET",
self.url + "?order_by=bar", self.url + "?order_by=bar",

View file

@@ -45,6 +45,7 @@ from synapse.rest.client import (
devices, devices,
login, login,
logout, logout,
media,
profile, profile,
register, register,
room, room,
@@ -719,7 +720,7 @@ class UsersListTestCase(unittest.HomeserverTestCase):
self.assertEqual(400, channel.code, msg=channel.json_body) self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"]) self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# unkown order_by # unknown order_by
channel = self.make_request( channel = self.make_request(
"GET", "GET",
self.url + "?order_by=bar", self.url + "?order_by=bar",
@@ -3517,6 +3518,7 @@ class UserMediaRestTestCase(unittest.HomeserverTestCase):
servlets = [ servlets = [
synapse.rest.admin.register_servlets, synapse.rest.admin.register_servlets,
login.register_servlets, login.register_servlets,
media.register_servlets,
] ]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None: def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
@@ -3696,7 +3698,7 @@ class UserMediaRestTestCase(unittest.HomeserverTestCase):
@parameterized.expand(["GET", "DELETE"]) @parameterized.expand(["GET", "DELETE"])
def test_invalid_parameter(self, method: str) -> None: def test_invalid_parameter(self, method: str) -> None:
"""If parameters are invalid, an error is returned.""" """If parameters are invalid, an error is returned."""
# unkown order_by # unknown order_by
channel = self.make_request( channel = self.make_request(
method, method,
self.url + "?order_by=bar", self.url + "?order_by=bar",
@@ -4023,7 +4025,7 @@ class UserMediaRestTestCase(unittest.HomeserverTestCase):
# Try to access a media and to create `last_access_ts` # Try to access a media and to create `last_access_ts`
channel = self.make_request( channel = self.make_request(
"GET", "GET",
f"/_matrix/media/v3/download/{server_and_media_id}", f"/_matrix/client/v1/media/download/{server_and_media_id}",
shorthand=False, shorthand=False,
access_token=user_token, access_token=user_token,
) )

View file

@@ -34,7 +34,6 @@ from tests import unittest
from tests.unittest import override_config from tests.unittest import override_config
from tests.utils import HAS_AUTHLIB from tests.utils import HAS_AUTHLIB
msc3886_endpoint = "/_matrix/client/unstable/org.matrix.msc3886/rendezvous"
msc4108_endpoint = "/_matrix/client/unstable/org.matrix.msc4108/rendezvous" msc4108_endpoint = "/_matrix/client/unstable/org.matrix.msc4108/rendezvous"
@@ -54,17 +53,9 @@
} }
def test_disabled(self) -> None: def test_disabled(self) -> None:
channel = self.make_request("POST", msc3886_endpoint, {}, access_token=None)
self.assertEqual(channel.code, 404)
channel = self.make_request("POST", msc4108_endpoint, {}, access_token=None) channel = self.make_request("POST", msc4108_endpoint, {}, access_token=None)
self.assertEqual(channel.code, 404) self.assertEqual(channel.code, 404)
@override_config({"experimental_features": {"msc3886_endpoint": "/asd"}})
def test_msc3886_redirect(self) -> None:
channel = self.make_request("POST", msc3886_endpoint, {}, access_token=None)
self.assertEqual(channel.code, 307)
self.assertEqual(channel.headers.getRawHeaders("Location"), ["/asd"])
@unittest.skip_unless(HAS_AUTHLIB, "requires authlib") @unittest.skip_unless(HAS_AUTHLIB, "requires authlib")
@override_config( @override_config(
{ {

View file

@@ -91,7 +91,8 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
{ {
# Disable downloads from a domain we won't be requesting downloads from. # Disable downloads from a domain we won't be requesting downloads from.
# This proves we haven't broken anything. # This proves we haven't broken anything.
"prevent_media_downloads_from": ["not-listed.com"] "prevent_media_downloads_from": ["not-listed.com"],
"enable_authenticated_media": False,
} }
) )
def test_remote_media_normally_unblocked(self) -> None: def test_remote_media_normally_unblocked(self) -> None:
@@ -132,6 +133,7 @@
# This proves we haven't broken anything. # This proves we haven't broken anything.
"prevent_media_downloads_from": ["not-listed.com"], "prevent_media_downloads_from": ["not-listed.com"],
"dynamic_thumbnails": True, "dynamic_thumbnails": True,
"enable_authenticated_media": False,
} }
) )
def test_remote_media_thumbnail_normally_unblocked(self) -> None: def test_remote_media_thumbnail_normally_unblocked(self) -> None:

View file

@@ -42,6 +42,7 @@ from synapse.util.stringutils import parse_and_validate_mxc_uri
from tests import unittest from tests import unittest
from tests.server import FakeTransport from tests.server import FakeTransport
from tests.test_utils import SMALL_PNG from tests.test_utils import SMALL_PNG
from tests.unittest import override_config
try: try:
import lxml import lxml
@@ -1259,6 +1260,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
self.assertIsNone(_port) self.assertIsNone(_port)
return host, media_id return host, media_id
@override_config({"enable_authenticated_media": False})
def test_storage_providers_exclude_files(self) -> None: def test_storage_providers_exclude_files(self) -> None:
"""Test that files are not stored in or fetched from storage providers.""" """Test that files are not stored in or fetched from storage providers."""
host, media_id = self._download_image() host, media_id = self._download_image()
@@ -1301,6 +1303,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
"URL cache file was unexpectedly retrieved from a storage provider", "URL cache file was unexpectedly retrieved from a storage provider",
) )
@override_config({"enable_authenticated_media": False})
def test_storage_providers_exclude_thumbnails(self) -> None: def test_storage_providers_exclude_thumbnails(self) -> None:
"""Test that thumbnails are not stored in or fetched from storage providers.""" """Test that thumbnails are not stored in or fetched from storage providers."""
host, media_id = self._download_image() host, media_id = self._download_image()

View file

@@ -343,7 +343,6 @@ class FakeSite:
self, self,
resource: IResource, resource: IResource,
reactor: IReactorTime, reactor: IReactorTime,
experimental_cors_msc3886: bool = False,
): ):
""" """
@@ -352,7 +351,6 @@
""" """
self._resource = resource self._resource = resource
self.reactor = reactor self.reactor = reactor
self.experimental_cors_msc3886 = experimental_cors_msc3886
def getResourceFor(self, request: Request) -> IResource: def getResourceFor(self, request: Request) -> IResource:
return self._resource return self._resource

View file

@@ -233,9 +233,7 @@ class OptionsResourceTests(unittest.TestCase):
self.resource = OptionsResource() self.resource = OptionsResource()
self.resource.putChild(b"res", DummyResource()) self.resource.putChild(b"res", DummyResource())
def _make_request( def _make_request(self, method: bytes, path: bytes) -> FakeChannel:
self, method: bytes, path: bytes, experimental_cors_msc3886: bool = False
) -> FakeChannel:
"""Create a request from the method/path and return a channel with the response.""" """Create a request from the method/path and return a channel with the response."""
# Create a site and query for the resource. # Create a site and query for the resource.
site = SynapseSite( site = SynapseSite(
@@ -246,7 +244,6 @@
{ {
"type": "http", "type": "http",
"port": 0, "port": 0,
"experimental_cors_msc3886": experimental_cors_msc3886,
}, },
), ),
self.resource, self.resource,
@@ -283,32 +280,6 @@
[b"Synapse-Trace-Id, Server"], [b"Synapse-Trace-Id, Server"],
) )
def _check_cors_msc3886_headers(self, channel: FakeChannel) -> None:
# Ensure the correct CORS headers have been added
# as per https://github.com/matrix-org/matrix-spec-proposals/blob/hughns/simple-rendezvous-capability/proposals/3886-simple-rendezvous-capability.md#cors
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Origin"),
[b"*"],
"has correct CORS Origin header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Methods"),
[b"GET, HEAD, POST, PUT, DELETE, OPTIONS"], # HEAD isn't in the spec
"has correct CORS Methods header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Headers"),
[
b"X-Requested-With, Content-Type, Authorization, Date, If-Match, If-None-Match"
],
"has correct CORS Headers header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Expose-Headers"),
[b"ETag, Location, X-Max-Bytes"],
"has correct CORS Expose Headers header",
)
def test_unknown_options_request(self) -> None: def test_unknown_options_request(self) -> None:
"""An OPTIONS requests to an unknown URL still returns 204 No Content.""" """An OPTIONS requests to an unknown URL still returns 204 No Content."""
channel = self._make_request(b"OPTIONS", b"/foo/") channel = self._make_request(b"OPTIONS", b"/foo/")
@@ -325,16 +296,6 @@
self._check_cors_standard_headers(channel) self._check_cors_standard_headers(channel)
def test_known_options_request_msc3886(self) -> None:
"""An OPTIONS requests to an known URL still returns 204 No Content."""
channel = self._make_request(
b"OPTIONS", b"/res/", experimental_cors_msc3886=True
)
self.assertEqual(channel.code, 204)
self.assertNotIn("body", channel.result)
self._check_cors_msc3886_headers(channel)
def test_unknown_request(self) -> None: def test_unknown_request(self) -> None:
"""A non-OPTIONS request to an unknown URL should 404.""" """A non-OPTIONS request to an unknown URL should 404."""
channel = self._make_request(b"GET", b"/foo/") channel = self._make_request(b"GET", b"/foo/")