Merge remote-tracking branch 'origin/develop' into erikj/faster_auth_chains

This commit is contained in:
Erik Johnston 2024-06-26 13:15:14 +01:00
commit dac74db74e
236 changed files with 13562 additions and 3436 deletions

View file

@@ -2,4 +2,4 @@
(using a matrix.org account if necessary). We do not use GitHub issues for
support.
-**If you want to report a security issue** please see https://matrix.org/security-disclosure-policy/
+**If you want to report a security issue** please see https://element.io/security/security-disclosure-policy

View file

@@ -7,7 +7,7 @@ body:
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**, please ask in **[#synapse:matrix.org](https://matrix.to/#/#synapse:matrix.org)** (using a matrix.org account if necessary).
-If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
+If you want to report a security issue, please see https://element.io/security/security-disclosure-policy
This is a bug report form. By following the instructions below and completing the sections with your information, you will help us to get all the necessary data to fix your issue.

View file

@@ -72,7 +72,7 @@ jobs:
      - name: Build and push all platforms
        id: build-and-push
-        uses: docker/build-push-action@v5
+        uses: docker/build-push-action@v6
        with:
          push: true
          labels: |

View file

@@ -14,7 +14,7 @@ jobs:
      # There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
      # (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
      - name: 📥 Download artifact
-        uses: dawidd6/action-download-artifact@09f2f74827fd3a8607589e5ad7f9398816f540fe # v3.1.4
+        uses: dawidd6/action-download-artifact@bf251b5aa9c2f7eeb574a96ee720e24f801b7c11 # v6
        with:
          workflow: docs-pr.yaml
          run_id: ${{ github.event.workflow_run.id }}

View file

@@ -102,7 +102,7 @@ jobs:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
-        os: [ubuntu-20.04, macos-11]
+        os: [ubuntu-20.04, macos-12]
        arch: [x86_64, aarch64]
        # is_pr is a flag used to exclude certain jobs from the matrix on PRs.
        # It is not read by the rest of the workflow.
@@ -112,9 +112,9 @@
        exclude:
          # Don't build macos wheels on PR CI.
          - is_pr: true
-            os: "macos-11"
+            os: "macos-12"
          # Don't build aarch64 wheels on mac.
-          - os: "macos-11"
+          - os: "macos-12"
            arch: aarch64
          # Don't build aarch64 wheels on PR CI.
          - is_pr: true
@@ -130,7 +130,7 @@
          python-version: "3.x"
      - name: Install cibuildwheel
-        run: python -m pip install cibuildwheel==2.16.2
+        run: python -m pip install cibuildwheel==2.19.1
      - name: Set up QEMU to emulate aarch64
        if: matrix.arch == 'aarch64'

View file

@@ -479,6 +479,9 @@ jobs:
      volumes:
        - ${{ github.workspace }}:/src
      env:
+        # If this is a pull request to a release branch, use that branch as the default branch for sytest, else use develop.
+        # This works because the release script always creates a branch on the sytest repo with the same name as the release branch.
+        SYTEST_DEFAULT_BRANCH: ${{ startsWith(github.base_ref, 'release-') && github.base_ref || 'develop' }}
        SYTEST_BRANCH: ${{ github.head_ref }}
        POSTGRES: ${{ matrix.job.postgres && 1}}
        MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') || '' }}
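For clarity, the `&& ... ||` expression above acts as a conditional. A rough Python rendering of the same selection logic (illustrative only; the real selection happens in the GitHub Actions expression):

```python
def sytest_default_branch(base_ref: str) -> str:
    # Mirrors `startsWith(github.base_ref, 'release-') && github.base_ref || 'develop'`:
    # PRs against a release branch test against the matching sytest branch,
    # everything else falls back to develop.
    if base_ref.startswith("release-"):
        return base_ref
    return "develop"


assert sytest_default_branch("release-v1.109") == "release-v1.109"
assert sytest_default_branch("develop") == "develop"
```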

View file

@ -1,3 +1,148 @@
# Synapse 1.109.0 (2024-06-18)
### Internal Changes
- Fix the building of binary wheels for macOS by switching to macOS 12 CI runners. ([\#17319](https://github.com/element-hq/synapse/issues/17319))
# Synapse 1.109.0rc3 (2024-06-17)
### Bugfixes
- When rolling back to a previous Synapse version and then forwards again to this release, don't require server operators to manually run SQL. ([\#17305](https://github.com/element-hq/synapse/issues/17305), [\#17309](https://github.com/element-hq/synapse/issues/17309))
### Internal Changes
- Use the release branch for sytest in release-branch PRs. ([\#17306](https://github.com/element-hq/synapse/issues/17306))
# Synapse 1.109.0rc2 (2024-06-11)
### Bugfixes
- Fix bug where one-time-keys were not always included in `/sync` response when using workers. Introduced in v1.109.0rc1. ([\#17275](https://github.com/element-hq/synapse/issues/17275))
- Fix bug where `/sync` could get stuck due to edge case in device lists handling. Introduced in v1.109.0rc1. ([\#17292](https://github.com/element-hq/synapse/issues/17292))
# Synapse 1.109.0rc1 (2024-06-04)
### Features
- Add the ability to auto-accept invites on the behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details. ([\#17147](https://github.com/element-hq/synapse/issues/17147))
- Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for to-device messages and device encryption info. ([\#17167](https://github.com/element-hq/synapse/issues/17167))
- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/issues/3916) by adding unstable media endpoints to `/_matrix/client`. ([\#17213](https://github.com/element-hq/synapse/issues/17213))
- Add logging to tasks managed by the task scheduler, showing CPU and database usage. ([\#17219](https://github.com/element-hq/synapse/issues/17219))
### Bugfixes
- Fix deduplicating of membership events to not create unused state groups. ([\#17164](https://github.com/element-hq/synapse/issues/17164))
- Fix bug where duplicate events could be sent down sync when using workers that are overloaded. ([\#17215](https://github.com/element-hq/synapse/issues/17215))
- Ignore attempts to send to-device messages to bad users, to avoid log spam when we try to connect to the bad server. ([\#17240](https://github.com/element-hq/synapse/issues/17240))
- Fix handling of duplicate concurrent uploading of device one-time-keys. ([\#17241](https://github.com/element-hq/synapse/issues/17241))
- Fix reporting of default tags to Sentry, such as worker name. Broke in v1.108.0. ([\#17251](https://github.com/element-hq/synapse/issues/17251))
- Fix bug where typing updates would not be sent when using workers after a restart. ([\#17252](https://github.com/element-hq/synapse/issues/17252))
### Improved Documentation
- Update the LemonLDAP documentation to say that claims should be explicitly included in the returned `id_token`, as Synapse won't request them. ([\#17204](https://github.com/element-hq/synapse/issues/17204))
### Internal Changes
- Improve DB usage when fetching related events. ([\#17083](https://github.com/element-hq/synapse/issues/17083))
- Log exceptions when failing to auto-join new user according to the `auto_join_rooms` option. ([\#17176](https://github.com/element-hq/synapse/issues/17176))
- Reduce work of calculating outbound device lists updates. ([\#17211](https://github.com/element-hq/synapse/issues/17211))
- Improve performance of calculating device lists changes in `/sync`. ([\#17216](https://github.com/element-hq/synapse/issues/17216))
- Move towards using `MultiWriterIdGenerator` everywhere. ([\#17226](https://github.com/element-hq/synapse/issues/17226))
- Replaces all usages of `StreamIdGenerator` with `MultiWriterIdGenerator`. ([\#17229](https://github.com/element-hq/synapse/issues/17229))
- Change the `allow_unsafe_locale` config option to also apply when setting up new databases. ([\#17238](https://github.com/element-hq/synapse/issues/17238))
- Fix errors in logs about closing incorrect logging contexts when media gets rejected by a module. ([\#17239](https://github.com/element-hq/synapse/issues/17239), [\#17246](https://github.com/element-hq/synapse/issues/17246))
- Clean out invalid destinations from `device_federation_outbox` table. ([\#17242](https://github.com/element-hq/synapse/issues/17242))
- Stop logging errors when receiving invalid User IDs in key query requests. ([\#17250](https://github.com/element-hq/synapse/issues/17250))
### Updates to locked dependencies
* Bump anyhow from 1.0.83 to 1.0.86. ([\#17220](https://github.com/element-hq/synapse/issues/17220))
* Bump bcrypt from 4.1.2 to 4.1.3. ([\#17224](https://github.com/element-hq/synapse/issues/17224))
* Bump lxml from 5.2.1 to 5.2.2. ([\#17261](https://github.com/element-hq/synapse/issues/17261))
* Bump mypy-zope from 1.0.3 to 1.0.4. ([\#17262](https://github.com/element-hq/synapse/issues/17262))
* Bump phonenumbers from 8.13.35 to 8.13.37. ([\#17235](https://github.com/element-hq/synapse/issues/17235))
* Bump prometheus-client from 0.19.0 to 0.20.0. ([\#17233](https://github.com/element-hq/synapse/issues/17233))
* Bump pyasn1 from 0.5.1 to 0.6.0. ([\#17223](https://github.com/element-hq/synapse/issues/17223))
* Bump pyicu from 2.13 to 2.13.1. ([\#17236](https://github.com/element-hq/synapse/issues/17236))
* Bump pyopenssl from 24.0.0 to 24.1.0. ([\#17234](https://github.com/element-hq/synapse/issues/17234))
* Bump serde from 1.0.201 to 1.0.202. ([\#17221](https://github.com/element-hq/synapse/issues/17221))
* Bump serde from 1.0.202 to 1.0.203. ([\#17232](https://github.com/element-hq/synapse/issues/17232))
* Bump twine from 5.0.0 to 5.1.0. ([\#17225](https://github.com/element-hq/synapse/issues/17225))
* Bump types-psycopg2 from 2.9.21.20240311 to 2.9.21.20240417. ([\#17222](https://github.com/element-hq/synapse/issues/17222))
* Bump types-pyopenssl from 24.0.0.20240311 to 24.1.0.20240425. ([\#17260](https://github.com/element-hq/synapse/issues/17260))
# Synapse 1.108.0 (2024-05-28)
No significant changes since 1.108.0rc1.
# Synapse 1.108.0rc1 (2024-05-21)
### Features
- Add a feature that allows clients to query the configured federation whitelist. Disabled by default. ([\#16848](https://github.com/element-hq/synapse/issues/16848), [\#17199](https://github.com/element-hq/synapse/issues/17199))
- Add the ability to allow numeric user IDs with a specific prefix when in the CAS flow. Contributed by Aurélien Grimpard. ([\#17098](https://github.com/element-hq/synapse/issues/17098))
### Bugfixes
- Fix bug where push rules would be empty in `/sync` for some accounts. Introduced in v1.93.0. ([\#17142](https://github.com/element-hq/synapse/issues/17142))
- Add support for optional whitespace around the Federation API's `Authorization` header's parameter commas. ([\#17145](https://github.com/element-hq/synapse/issues/17145))
- Fix bug where disabling room publication prevented public rooms being created on workers. ([\#17177](https://github.com/element-hq/synapse/issues/17177), [\#17184](https://github.com/element-hq/synapse/issues/17184))
### Improved Documentation
- Document [`/v1/make_knock`](https://spec.matrix.org/v1.10/server-server-api/#get_matrixfederationv1make_knockroomiduserid) and [`/v1/send_knock/`](https://spec.matrix.org/v1.10/server-server-api/#put_matrixfederationv1send_knockroomideventid) federation endpoints as worker-compatible. ([\#17058](https://github.com/element-hq/synapse/issues/17058))
- Update User Admin API with note about prefixing OIDC external_id providers. ([\#17139](https://github.com/element-hq/synapse/issues/17139))
- Clarify the state of the created room when using the `autocreate_auto_join_room_preset` config option. ([\#17150](https://github.com/element-hq/synapse/issues/17150))
- Update the Admin FAQ with the current libjemalloc version for latest Debian stable. Additionally update the name of the "push_rules" stream in the Workers documentation. ([\#17171](https://github.com/element-hq/synapse/issues/17171))
### Internal Changes
- Add note to reflect that [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) is closed but will remain supported for some time. ([\#17151](https://github.com/element-hq/synapse/issues/17151))
- Update dependency PyO3 to 0.21. ([\#17162](https://github.com/element-hq/synapse/issues/17162))
- Fixes linter errors found in PR #17147. ([\#17166](https://github.com/element-hq/synapse/issues/17166))
- Bump black from 24.2.0 to 24.4.2. ([\#17170](https://github.com/element-hq/synapse/issues/17170))
- Cache literal sync filter validation for performance. ([\#17186](https://github.com/element-hq/synapse/issues/17186))
- Improve performance by fixing a reactor pause. ([\#17192](https://github.com/element-hq/synapse/issues/17192))
- Route `/make_knock` and `/send_knock` federation APIs to the federation reader worker in Complement test runs. ([\#17195](https://github.com/element-hq/synapse/issues/17195))
- Prepare sync handler to be able to return different sync responses (`SyncVersion`). ([\#17200](https://github.com/element-hq/synapse/issues/17200))
- Organize the sync cache key parameter outside of the sync config (separate concerns). ([\#17201](https://github.com/element-hq/synapse/issues/17201))
- Refactor `SyncResultBuilder` assembly to its own function. ([\#17202](https://github.com/element-hq/synapse/issues/17202))
- Rename to be obvious: `joined_rooms` -> `joined_room_ids`. ([\#17203](https://github.com/element-hq/synapse/issues/17203), [\#17208](https://github.com/element-hq/synapse/issues/17208))
- Add a short pause when rate-limiting a request. ([\#17210](https://github.com/element-hq/synapse/issues/17210))
### Updates to locked dependencies
* Bump cryptography from 42.0.5 to 42.0.7. ([\#17180](https://github.com/element-hq/synapse/issues/17180))
* Bump gitpython from 3.1.41 to 3.1.43. ([\#17181](https://github.com/element-hq/synapse/issues/17181))
* Bump immutabledict from 4.1.0 to 4.2.0. ([\#17179](https://github.com/element-hq/synapse/issues/17179))
* Bump sentry-sdk from 1.40.3 to 2.1.1. ([\#17178](https://github.com/element-hq/synapse/issues/17178))
* Bump serde from 1.0.200 to 1.0.201. ([\#17183](https://github.com/element-hq/synapse/issues/17183))
* Bump serde_json from 1.0.116 to 1.0.117. ([\#17182](https://github.com/element-hq/synapse/issues/17182))
Synapse 1.107.0 (2024-05-14)
============================
No significant changes since 1.107.0rc1.
# Synapse 1.107.0rc1 (2024-05-07)
### Features

24
Cargo.lock generated
View file

@@ -13,9 +13,9 @@ dependencies = [

[[package]]
name = "anyhow"
-version = "1.0.83"
+version = "1.0.86"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "25bdb32cbbdce2b519a9cd7df3a678443100e265d5e25ca763b7572a5104f5f3"
+checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da"

[[package]]
name = "arc-swap"
@@ -212,9 +212,9 @@ dependencies = [

[[package]]
name = "lazy_static"
-version = "1.4.0"
+version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
+checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"

[[package]]
name = "libc"
@@ -444,9 +444,9 @@ dependencies = [

[[package]]
name = "regex"
-version = "1.10.4"
+version = "1.10.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c117dbdfde9c8308975b6a18d71f3f385c89461f7b3fb054288ecf2a2058ba4c"
+checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
dependencies = [
 "aho-corasick",
 "memchr",
@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

[[package]]
name = "serde"
-version = "1.0.200"
+version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ddc6f9cc94d67c0e21aaf7eda3a010fd3af78ebf6e096aa6e2e13c79749cce4f"
+checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
dependencies = [
 "serde_derive",
]

[[package]]
name = "serde_derive"
-version = "1.0.200"
+version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "856f046b9400cee3c8c94ed572ecdb752444c24528c035cd35882aad6f492bcb"
+checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
dependencies = [
 "proc-macro2",
 "quote",
@@ -505,9 +505,9 @@ dependencies = [

[[package]]
name = "serde_json"
-version = "1.0.116"
+version = "1.0.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3e17db7126d17feb94eb3fad46bf1a96b034e8aacbc2e775fe81505f8b0b2813"
+checksum = "455182ea6142b14f93f4bc5320a2b31c1f266b66a4a5c858b013302a5d8cbfc3"
dependencies = [
 "itoa",
 "ryu",

View file

@@ -1,21 +1,34 @@
-=========================================================================
-Synapse |support| |development| |documentation| |license| |pypi| |python|
-=========================================================================
+.. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
+   :height: 60px
+
+**Element Synapse - Matrix homeserver implementation**

-Synapse is an open-source `Matrix <https://matrix.org/>`_ homeserver written and
-maintained by the Matrix.org Foundation. We began rapid development in 2014,
-reaching v1.0.0 in 2019. Development on Synapse and the Matrix protocol itself continues
-in earnest today.
+|support| |development| |documentation| |license| |pypi| |python|

-Briefly, Matrix is an open standard for communications on the internet, supporting
-federation, encryption and VoIP. Matrix.org has more to say about the `goals of the
-Matrix project <https://matrix.org/docs/guides/introduction>`_, and the `formal specification
-<https://spec.matrix.org/>`_ describes the technical details.
+Synapse is an open source `Matrix <https://matrix.org>`_ homeserver
+implementation, written and maintained by `Element <https://element.io>`_.
+`Matrix <https://github.com/matrix-org>`_ is the open standard for
+secure and interoperable real time communications. You can directly run
+and manage the source code in this repository, available under an AGPL
+license. There is no support provided from Element unless you have a
+subscription.
+
+Subscription alternative
+========================
+
+Alternatively, for those that need an enterprise-ready solution, Element
+Server Suite (ESS) is `available as a subscription <https://element.io/pricing>`_.
+ESS builds on Synapse to offer a complete Matrix-based backend including the full
+`Admin Console product <https://element.io/enterprise-functionality/admin-console>`_,
+giving admins the power to easily manage an organization-wide
+deployment. It includes advanced identity management, auditing,
+moderation and data retention options as well as Long Term Support and
+SLAs. ESS can be used to support any Matrix-based frontend client.

.. contents::

-Installing and configuration
-============================
+🛠️ Installing and configuration
+===============================

The Synapse documentation describes `how to install Synapse <https://element-hq.github.io/synapse/latest/setup/installation.html>`_. We recommend using
`Docker images <https://element-hq.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org

@@ -105,8 +118,8 @@ Following this advice ensures that even if an XSS is found in Synapse, the
impact to other applications will be minimal.

-Testing a new installation
-==========================
+🧪 Testing a new installation
+============================

The easiest way to try out your new Synapse installation is by connecting to it
from a web client.

@@ -159,8 +172,20 @@ the form of::
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.

-Troubleshooting and support
-===========================
+🎯 Troubleshooting and support
+=============================
+
+🚀 Professional support
+----------------------
+
+Enterprise quality support for Synapse including SLAs is available as part of an
+`Element Server Suite (ESS) <https://element.io/pricing>` subscription.
+
+If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`
+and access the `knowledge base <https://ems-docs.element.io>`.
+
+🤝 Community support
+-------------------

The `Admin FAQ <https://element-hq.github.io/synapse/latest/usage/administration/admin_faq.html>`_
includes tips on dealing with some common problems. For more details, see

@@ -176,8 +201,8 @@ issues for support requests, only for bug reports and feature requests.
.. |docs| replace:: ``docs``
.. _docs: docs

-Identity Servers
-================
+🪪 Identity Servers
+==================

Identity servers have the job of mapping email addresses and other 3rd Party
IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs

@@ -206,8 +231,8 @@ an email address with your account, or send an invite to another user via their
email address.

-Development
-===========
+🛠️ Development
+==============

We welcome contributions to Synapse from the community!
The best place to get started is our

@@ -225,8 +250,8 @@ Alongside all that, join our developer community on Matrix:
`#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!

-.. |support| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
-   :alt: (get support on #synapse:matrix.org)
+.. |support| image:: https://img.shields.io/badge/matrix-community%20support-success
+   :alt: (get community support in #synapse:matrix.org)
   :target: https://matrix.to/#/#synapse:matrix.org

.. |development| image:: https://img.shields.io/matrix/synapse-dev:matrix.org?label=development&logo=matrix

View file

@ -1 +0,0 @@
Update User Admin API with note about prefixing OIDC external_id providers.

View file

@ -1 +0,0 @@
Add support for optional whitespace around the Federation API's `Authorization` header's parameter commas.

View file

@ -1 +0,0 @@
Clarify the state of the created room when using the `autocreate_auto_join_room_preset` config option.

View file

@ -1 +0,0 @@
Add note to reflect that [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) is closed but support will remain for some time.

View file

@ -1 +0,0 @@
Update dependency PyO3 to 0.21.

View file

@ -0,0 +1 @@
Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

1
changelog.d/17198.misc Normal file
View file

@ -0,0 +1 @@
Remove unused `expire_access_token` option in the Synapse Docker config file. Contributed by @AaronDewes.

1
changelog.d/17254.bugfix Normal file
View file

@ -0,0 +1 @@
Fix searching for users with their exact localpart whose ID includes a hyphen.

View file

@ -0,0 +1 @@
Add support for [MSC3823](https://github.com/matrix-org/matrix-spec-proposals/pull/3823) - Account suspension.

View file

@ -0,0 +1 @@
Improve ratelimiting in Synapse (#17256).

1
changelog.d/17265.misc Normal file
View file

@ -0,0 +1 @@
Use fully-qualified `PersistedEventPosition` when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.

1
changelog.d/17266.misc Normal file
View file

@ -0,0 +1 @@
Add debug logging for when room keys are uploaded, including whether they are replacing other room keys.

View file

@ -0,0 +1 @@
Add support for the unstable [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151) report room API.

1
changelog.d/17271.misc Normal file
View file

@ -0,0 +1 @@
Handle OTK uploads off master.

1
changelog.d/17272.bugfix Normal file
View file

@ -0,0 +1 @@
Fix wrong retention policy being used when filtering events.

1
changelog.d/17273.misc Normal file
View file

@ -0,0 +1 @@
Don't try and resync devices for remote users whose servers are marked as down.

1
changelog.d/17275.bugfix Normal file
View file

@ -0,0 +1 @@
Fix bug where OTKs were not always included in `/sync` response when using workers.

View file

@ -0,0 +1 @@
Filter for public and empty rooms added to Admin-API [List Room API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#list-room-api).

View file

@ -0,0 +1 @@
Add `is_dm` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

1
changelog.d/17279.misc Normal file
View file

@ -0,0 +1 @@
Re-organize Pydantic models and types used in handlers.

View file

@ -0,0 +1 @@
Add `is_encrypted` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

View file

@ -0,0 +1 @@
Include user membership in events served to clients, per MSC4115.

1
changelog.d/17283.bugfix Normal file
View file

@ -0,0 +1 @@
Fix a long-standing bug where an invalid 'from' parameter to [`/notifications`](https://spec.matrix.org/v1.10/client-server-api/#get_matrixclientv3notifications) would result in an Internal Server Error.

View file

@ -0,0 +1 @@
Do not require user-interactive authentication for uploading cross-signing keys for the first time, per MSC3967.

View file

@ -0,0 +1 @@
Add `stream_ordering` sort to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

View file

@ -0,0 +1,2 @@
`register_new_matrix_user` now supports a --password-file flag, which
is useful for scripting.

1
changelog.d/17295.bugfix Normal file
View file

@ -0,0 +1 @@
Fix edge case in `/sync` returning the wrong state when using sharded event persisters.

View file

@ -0,0 +1 @@
Add support for the unstable [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151) report room API.

1
changelog.d/17297.misc Normal file
View file

@ -0,0 +1 @@
Bump `mypy` from 1.8.0 to 1.9.0.

1
changelog.d/17300.misc Normal file
View file

@ -0,0 +1 @@
Expose the worker instance that persisted the event on `event.internal_metadata.instance_name`.

1
changelog.d/17301.bugfix Normal file
View file

@ -0,0 +1 @@
Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

View file

@ -0,0 +1,2 @@
`register_new_matrix_user` now supports a --exists-ok flag to allow registration of users that already exist in the database.
This is useful for scripts that bootstrap user accounts with initial passwords.

1
changelog.d/17308.doc Normal file
View file

@ -0,0 +1 @@
Add missing quotes for example for `exclude_rooms_from_sync`.

View file

@ -0,0 +1 @@
Add support for via query parameter from MSC4156.

1
changelog.d/17324.misc Normal file
View file

@ -0,0 +1 @@
Update the README with Element branding, improve headers and fix the #synapse:matrix.org support room link rendering.

1
changelog.d/17325.misc Normal file
View file

@ -0,0 +1 @@
This is a changelog so tests will run.

1
changelog.d/17329.doc Normal file
View file

@ -0,0 +1 @@
Update header in the README to visually fix the auto-generated table of contents.

1
changelog.d/17331.misc Normal file
View file

@ -0,0 +1 @@
Change path of the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync implementation to `/org.matrix.simplified_msc3575/sync` since our simplified API is slightly incompatible with what's in the current MSC.

1
changelog.d/17333.misc Normal file
View file

@ -0,0 +1 @@
Handle device lists notifications for large accounts more efficiently in worker mode.

View file

@ -0,0 +1 @@
Add `is_invite` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

1
changelog.d/17336.bugfix Normal file
View file

@ -0,0 +1 @@
Fix email notification subject when invited to a space.

1
changelog.d/17338.misc Normal file
View file

@ -0,0 +1 @@
Do not block event sending/receiving while calculating large event auth chains.

1
changelog.d/17339.misc Normal file
View file

@ -0,0 +1 @@
Tidy up `parse_integer` docs and call sites to reflect the fact that they require non-negative integers by default, and bring `parse_integer_from_args` default in alignment. Contributed by Denis Kasak (@dkasak).

1
changelog.d/17341.doc Normal file
View file

@ -0,0 +1 @@
Fix stale references to the Foundation's Security Disclosure Policy.

1
changelog.d/17347.doc Normal file
View file

@ -0,0 +1 @@
Add default values for `rc_invites.per_issuer` to docs.

1
changelog.d/17348.doc Normal file
View file

@ -0,0 +1 @@
Fix an error in the docs for `search_all_users` parameter under `user_directory`.

View file

@ -0,0 +1,2 @@
Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
by adding a federation /download endpoint.

1
changelog.d/17358.misc Normal file
View file

@ -0,0 +1 @@
Handle device lists notifications for large accounts more efficiently in worker mode.

48
debian/changelog vendored
View file

@ -1,3 +1,51 @@
matrix-synapse-py3 (1.109.0+nmu1) UNRELEASED; urgency=medium
* `register_new_matrix_user` now supports a --password-file and a --exists-ok flag.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jun 2024 13:29:36 +0100
matrix-synapse-py3 (1.109.0) stable; urgency=medium
* New synapse release 1.109.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jun 2024 09:45:15 +0000
matrix-synapse-py3 (1.109.0~rc3) stable; urgency=medium
* New synapse release 1.109.0rc3.
-- Synapse Packaging team <packages@matrix.org> Mon, 17 Jun 2024 12:05:24 +0000
matrix-synapse-py3 (1.109.0~rc2) stable; urgency=medium
* New synapse release 1.109.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Jun 2024 13:20:17 +0000
matrix-synapse-py3 (1.109.0~rc1) stable; urgency=medium
* New Synapse release 1.109.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Jun 2024 09:42:46 +0100
matrix-synapse-py3 (1.108.0) stable; urgency=medium
* New Synapse release 1.108.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 May 2024 11:54:22 +0100
matrix-synapse-py3 (1.108.0~rc1) stable; urgency=medium
* New Synapse release 1.108.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 May 2024 10:54:13 +0100
matrix-synapse-py3 (1.107.0) stable; urgency=medium
* New Synapse release 1.107.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 May 2024 14:15:34 +0100
matrix-synapse-py3 (1.107.0~rc1) stable; urgency=medium
* New Synapse release 1.107.0rc1.

View file

@@ -31,8 +31,12 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
  Local part of the new user. Will prompt if omitted.

* `-p`, `--password`:
-  New password for user. Will prompt if omitted. Supplying the password
-  on the command line is not recommended. Use the STDIN instead.
+  New password for user. Will prompt if this option and `--password-file` are omitted.
+  Supplying the password on the command line is not recommended.
+
+* `--password-file`:
+  File containing the new password for user. If set, overrides `--password`.
+  This is a more secure alternative to specifying the password on the command line.

* `-a`, `--admin`:
  Register new user as an admin. Will prompt if omitted.
@@ -44,6 +48,9 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
  Shared secret as defined in server config file. This is an optional
  parameter as it can be also supplied via the YAML file.

+* `--exists-ok`:
+  Do not fail if the user already exists. The user account will not be updated in this case.
+
* `server_url`:
  URL of the home server. Defaults to 'https://localhost:8448'.

View file

@@ -105,8 +105,6 @@ experimental_features:
  # Expose a room summary for public rooms
  msc3266_enabled: true

-  msc4115_membership_on_events: true
-
server_notices:
  system_mxid_localpart: _server
  system_mxid_display_name: "Server Alert"

View file

@@ -176,7 +176,6 @@ app_service_config_files:
{% endif %}

macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
-expire_access_token: False

## Signing Keys ##

View file

@@ -211,6 +211,8 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
        "^/_matrix/federation/(v1|v2)/make_leave/",
        "^/_matrix/federation/(v1|v2)/send_join/",
        "^/_matrix/federation/(v1|v2)/send_leave/",
+        "^/_matrix/federation/v1/make_knock/",
+        "^/_matrix/federation/v1/send_knock/",
        "^/_matrix/federation/(v1|v2)/invite/",
        "^/_matrix/federation/(v1|v2)/query_auth/",
        "^/_matrix/federation/(v1|v2)/event_auth/",

View file

@@ -36,6 +36,10 @@ The following query parameters are available:
  - the room's name,
  - the local part of the room's canonical alias, or
  - the complete (local and server part) room's id (case sensitive).
+* `public_rooms` - Optional flag to filter public rooms. If `true`, only public rooms are queried. If `false`, public rooms are excluded from
+  the query. When the flag is absent (the default), **both** public and non-public rooms are included in the search results.
+* `empty_rooms` - Optional flag to filter empty rooms. A room is empty if joined_members is zero. If `true`, only empty rooms are queried. If `false`, empty rooms are excluded from
+  the query. When the flag is absent (the default), **both** empty and non-empty rooms are included in the search results.

Defaults to no filtering.
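As a sketch of how the new filters could be used against the List Room Admin API (the homeserver URL and admin token below are placeholders):

```python
import requests

# Illustrative only: base URL and access token are placeholders.
resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/rooms",
    headers={"Authorization": "Bearer <admin_access_token>"},
    params={
        "public_rooms": "false",  # exclude public rooms
        "empty_rooms": "true",    # only rooms with zero joined members
    },
    timeout=10,
)
resp.raise_for_status()
for room in resp.json()["rooms"]:
    print(room["room_id"], room["joined_members"])
```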

View file

@@ -525,6 +525,8 @@ oidc_providers:
  (`Options > Security > ID Token signature algorithm` and `Options > Security >
  Access Token signature algorithm`)
- Scopes: OpenID, Email and Profile
+- Force claims into `id_token`
+  (`Options > Advanced > Force claims to be returned in ID Token`)
- Allowed redirection addresses for login (`Options > Basic > Allowed
  redirection addresses for login` ) :
  `[synapse public baseurl]/_synapse/client/oidc/callback`

View file

@@ -242,12 +242,11 @@ host all all ::1/128 ident

### Fixing incorrect `COLLATE` or `CTYPE`

-Synapse will refuse to set up a new database if it has the wrong values of
-`COLLATE` and `CTYPE` set. Synapse will also refuse to start an existing database with incorrect values
-of `COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the
-`database` section of the config, is set to true. Using different locales can cause issues if the locale library is updated from
-underneath the database, or if a different version of the locale is used on any
-replicas.
+Synapse will refuse to start when using a database with incorrect values of
+`COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the
+`database` section of the config, is set to true. Using different locales can
+cause issues if the locale library is updated from underneath the database, or
+if a different version of the locale is used on any replicas.

If you have a database with an unsafe locale, the safest way to fix the issue is to dump the database and recreate it with
the correct locale parameter (as shown above). It is also possible to change the
@@ -256,13 +255,3 @@ however extreme care must be taken to avoid database corruption.

Note that the above may fail with an error about duplicate rows if corruption
has already occurred, and such duplicate rows will need to be manually removed.
-
-### Fixing inconsistent sequences error
-
-Synapse uses Postgres sequences to generate IDs for various tables. A sequence
-and associated table can get out of sync if, for example, Synapse has been
-downgraded and then upgraded again.
-
-To fix the issue shut down Synapse (including any and all workers) and run the
-SQL command included in the error message. Once done Synapse should start
-successfully.

View file

@@ -250,10 +250,10 @@ Using [libjemalloc](https://jemalloc.net) can also yield a significant
improvement in overall memory use, and especially in terms of giving back
RAM to the OS. To use it, the library must simply be put in the
LD_PRELOAD environment variable when launching Synapse. On Debian, this
-can be done by installing the `libjemalloc1` package and adding this
+can be done by installing the `libjemalloc2` package and adding this
line to `/etc/default/matrix-synapse`:

-    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
+    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

This made a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.

View file

@@ -1232,6 +1232,31 @@ federation_domain_whitelist:
  - syd.example.com
```
---
### `federation_whitelist_endpoint_enabled`
Enables an endpoint for fetching the federation whitelist config.
The request method and path is `GET /_synapse/client/v1/config/federation_whitelist`, and the
response format is:
```json
{
"whitelist_enabled": true, // Whether the federation whitelist is being enforced
"whitelist": [ // Which server names are allowed by the whitelist
"example.com"
]
}
```
If `whitelist_enabled` is `false` then the server is permitted to federate with all others.
The endpoint requires authentication.
Example configuration:
```yaml
federation_whitelist_endpoint_enabled: true
```
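A minimal sketch of querying the new endpoint; the homeserver URL and access token are placeholders, and the response keys are the ones documented above:

```python
import requests

# Illustrative only: URL and token are placeholders.
resp = requests.get(
    "https://synapse.example.com/_synapse/client/v1/config/federation_whitelist",
    headers={"Authorization": "Bearer <access_token>"},
    timeout=10,
)
resp.raise_for_status()
config = resp.json()
if config["whitelist_enabled"]:
    print("Federation restricted to:", ", ".join(config["whitelist"]))
else:
    print("Federation is not restricted by a whitelist.")
```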
---
### `federation_metrics_domains`

Report prometheus metrics on the age of PDUs being sent to and received from
@@ -1734,8 +1759,9 @@ rc_3pid_validation:
### `rc_invites`

This option sets ratelimiting how often invites can be sent in a room or to a
-specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
-`per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
+specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10`,
+`per_user` defaults to `per_second: 0.003`, `burst_count: 5`, and `per_issuer`
+defaults to `per_second: 0.3`, `burst_count: 10`.

Client requests that invite user(s) when [creating a
room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
@@ -1921,6 +1947,24 @@ Example configuration:
max_image_pixels: 35M
```
---
### `remote_media_download_burst_count`
Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads are penalized - if the bucket is full, ie a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second` which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
Example configuration:
```yaml
remote_media_download_burst_count: 200M
```
---
### `remote_media_download_per_second`
Works in conjunction with `remote_media_download_burst_count` to ratelimit remote media downloads - this configuration option determines the rate at which the "bucket" (see above) leaks in bytes per second. As requests are made to download remote media, the size of those requests in bytes is added to the bucket, and once the bucket has reached its capacity, no more requests will be allowed until a number of bytes has "drained" from the bucket. This setting determines the rate at which bytes drain from the bucket, with the practical effect that the larger the number, the faster the bucket leaks, allowing for more bytes downloaded over a shorter period of time. Defaults to 87KiB per second. See also `remote_media_download_burst_count`.
Example configuration:
```yaml
remote_media_download_per_second: 40K
```
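To make the leaky-bucket behaviour concrete, a simplified, illustrative model follows; it is not Synapse's implementation, only a demonstration of how the burst size and drain rate interact:

```python
import time


class LeakyBucket:
    """Toy model of the ratelimiter described above: `capacity` plays the role
    of remote_media_download_burst_count (bytes) and `leak_rate` the role of
    remote_media_download_per_second (bytes drained per second)."""

    def __init__(self, capacity: float, leak_rate: float) -> None:
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0
        self.last = time.monotonic()

    def allow(self, nbytes: float) -> bool:
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last request.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + nbytes > self.capacity:
            return False  # bucket full: deny the download for now
        self.level += nbytes
        return True


# Roughly the documented defaults: 500 MiB burst, ~87 KiB/s drain.
bucket = LeakyBucket(capacity=500 * 1024 * 1024, leak_rate=87 * 1024)
print(bucket.allow(10 * 1024 * 1024))  # True: a 10 MiB download fits in the bucket
```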
---
### `prevent_media_downloads_from`

A list of domains to never download media from. Media from these
@@ -2675,7 +2719,7 @@ Example configuration:
session_lifetime: 24h
```
---
-### `refresh_access_token_lifetime`
+### `refreshable_access_token_lifetime`

Time that an access token remains valid for, if the session is using refresh tokens.
@@ -3533,6 +3577,15 @@ Has the following sub-options:
  users. This allows the CAS SSO flow to be limited to sign in only, rather than
  automatically registering users that have a valid SSO login but do not have
  a pre-registered account. Defaults to true.
* `allow_numeric_ids`: set to 'true' to allow numeric user IDs (default false).
This allows CAS SSO flow to provide user IDs composed of numbers only.
These identifiers will be prefixed by the letter "u" by default.
The prefix can be configured using the "numeric_ids_prefix" option.
Be careful to choose the prefix correctly to avoid any possible conflicts
(e.g. user 1234 becomes u1234 when a user u1234 already exists).
* `numeric_ids_prefix`: the prefix you wish to add in front of a numeric user ID
when the "allow_numeric_ids" option is set to "true".
By default, the prefix is the letter "u" and only alphanumeric characters are allowed.
*Added in Synapse 1.93.0.*

@@ -3547,6 +3600,8 @@ cas_config:
userGroup: "staff"
department: None
enable_registration: true
allow_numeric_ids: true
numeric_ids_prefix: "numericuser"
```
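A tiny illustration of the mapping these two options describe; this is a hypothetical helper for explanation, not Synapse's actual code:

```python
def map_cas_localpart(
    username: str, allow_numeric_ids: bool, numeric_ids_prefix: str = "u"
) -> str:
    # Numeric-only CAS usernames get the configured prefix; everything else
    # passes through unchanged.
    if allow_numeric_ids and username.isdigit():
        return numeric_ids_prefix + username
    return username


assert map_cas_localpart("1234", allow_numeric_ids=True) == "u1234"
assert map_cas_localpart("alice", allow_numeric_ids=True) == "alice"
```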
---
### `sso`
@@ -3752,7 +3807,8 @@ This setting defines options related to the user directory.
This option has the following sub-options:
* `enabled`: Defines whether users can search the user directory. If false then
  empty responses are returned to all queries. Defaults to true.
-* `search_all_users`: Defines whether to search all users visible to your HS at the time the search is performed. If set to true, will return all users who share a room with the user from the homeserver.
+* `search_all_users`: Defines whether to search all users visible to your homeserver at the time the search is performed.
+  If set to true, will return all users known to the homeserver matching the search query.
  If false, search results will only contain users
  visible in public rooms and users sharing a room with the requester.
  Defaults to false.
@@ -4096,7 +4152,7 @@ By default, no room is excluded.

Example configuration:
```yaml
exclude_rooms_from_sync:
-  - !foo:example.com
+  - "!foo:example.com"
```
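The quotes matter because a bare `!` introduces a YAML tag rather than a string. A small PyYAML illustration (assumes the `yaml` package is available):

```python
import yaml

# Quoted: parses as a plain string room ID.
print(yaml.safe_load('exclude_rooms_from_sync:\n  - "!foo:example.com"'))
# -> {'exclude_rooms_from_sync': ['!foo:example.com']}

# Unquoted: "!foo:example.com" is read as a YAML tag, so safe_load rejects it.
try:
    yaml.safe_load("exclude_rooms_from_sync:\n  - !foo:example.com")
except yaml.YAMLError as exc:
    print("parse error:", type(exc).__name__)
```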
---

@@ -4559,3 +4615,32 @@ background_updates:
min_batch_size: 10
default_batch_size: 50
```
---
## Auto Accept Invites
Configuration settings related to automatically accepting invites.
---
### `auto_accept_invites`
Automatically accepting invites controls whether users are presented with an invite request or if they
are instead automatically joined to a room when receiving an invite. Set the `enabled` sub-option to true to
enable auto-accepting invites. Defaults to false.
This setting has the following sub-options:
* `enabled`: Whether to run the auto-accept invites logic. Defaults to false.
* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
for direct messages. Defaults to false.
* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".
NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join
events could be generated from a single invite.
Example configuration:
```yaml
auto_accept_invites:
enabled: true
only_for_direct_messages: true
only_from_local_users: true
worker_to_run_on: "worker_1"
```

View file

@@ -62,6 +62,6 @@ following documentation:

## Reporting a security vulnerability

-If you've found a security issue in Synapse or any other Matrix.org Foundation
-project, please report it to us in accordance with our [Security Disclosure
-Policy](https://www.matrix.org/security-disclosure-policy/). Thank you!
+If you've found a security issue in Synapse or any other Element project,
+please report it to us in accordance with our [Security Disclosure
+Policy](https://element.io/security/security-disclosure-policy). Thank you!

View file

@@ -211,6 +211,8 @@ information.
    ^/_matrix/federation/v1/make_leave/
    ^/_matrix/federation/(v1|v2)/send_join/
    ^/_matrix/federation/(v1|v2)/send_leave/
+    ^/_matrix/federation/v1/make_knock/
+    ^/_matrix/federation/v1/send_knock/
    ^/_matrix/federation/(v1|v2)/invite/
    ^/_matrix/federation/v1/event_auth/
    ^/_matrix/federation/v1/timestamp_to_event/
@@ -535,7 +537,7 @@ the stream writer for the `presence` stream:

##### The `push_rules` stream

The following endpoints should be routed directly to the worker configured as
-the stream writer for the `push` stream:
+the stream writer for the `push_rules` stream:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/

845
poetry.lock generated

File diff suppressed because it is too large.

View file

@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
-version = "1.107.0rc1"
+version = "1.109.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -200,10 +200,8 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
-# We use `ParamSpec` and `Concatenate`, which were added in `typing-extensions` 3.10.0.0.
-# Additionally we need https://github.com/python/typing/pull/817 to allow types to be
-# generic over ParamSpecs.
-typing-extensions = ">=3.10.0.1"
+# We use `Self`, which was added in `typing-extensions` 4.0.
+typing-extensions = ">=4.0"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
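For context, `Self` is the kind of annotation the new lower bound enables; a minimal sketch (not Synapse code):

```python
from typing import Dict

from typing_extensions import Self  # requires typing-extensions >= 4.0


class Builder:
    """Toy example showing the `Self` return annotation."""

    def __init__(self) -> None:
        self.fields: Dict[str, str] = {}

    def set(self, key: str, value: str) -> Self:
        # `Self` keeps the concrete subclass type when calls are chained.
        self.fields[key] = value
        return self
```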

View file

@@ -204,6 +204,8 @@ pub struct EventInternalMetadata {
    /// The stream ordering of this event. None, until it has been persisted.
    #[pyo3(get, set)]
    stream_ordering: Option<NonZeroI64>,
+    #[pyo3(get, set)]
+    instance_name: Option<String>,

    /// whether this event is an outlier (ie, whether we have the state at that
    /// point in the DAG)
@@ -232,6 +234,7 @@ impl EventInternalMetadata {
        Ok(EventInternalMetadata {
            data,
            stream_ordering: None,
+            instance_name: None,
            outlier: false,
        })
    }
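The new field pairs with the `PersistedEventPosition` work mentioned in the changelog entries above. A rough Python sketch of how the two exposed attributes could be combined; the import path and constructor shape are assumptions for illustration, not the actual Synapse code:

```python
from synapse.types import PersistedEventPosition  # assumed import path


def position_for_event(event) -> PersistedEventPosition:
    """Illustrative only: build a fully-qualified position from the worker
    instance name and stream ordering exposed on EventInternalMetadata."""
    assert event.internal_metadata.stream_ordering is not None
    return PersistedEventPosition(
        instance_name=event.internal_metadata.instance_name,
        stream=event.internal_metadata.stream_ordering,
    )
```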

View file

@@ -223,7 +223,6 @@ test_packages=(
    ./tests/msc3930
    ./tests/msc3902
    ./tests/msc3967
-    ./tests/msc4115
)
# Enable dirty runs, so tests will reuse the same container where possible. # Enable dirty runs, so tests will reuse the same container where possible.

View file

@ -52,6 +52,7 @@ def request_registration(
user_type: Optional[str] = None, user_type: Optional[str] = None,
_print: Callable[[str], None] = print, _print: Callable[[str], None] = print,
exit: Callable[[int], None] = sys.exit, exit: Callable[[int], None] = sys.exit,
exists_ok: bool = False,
) -> None: ) -> None:
url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),) url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)
@ -97,6 +98,10 @@ def request_registration(
r = requests.post(url, json=data) r = requests.post(url, json=data)
if r.status_code != 200: if r.status_code != 200:
response = r.json()
if exists_ok and response["errcode"] == "M_USER_IN_USE":
_print("User already exists. Skipping.")
return
_print("ERROR! Received %d %s" % (r.status_code, r.reason)) _print("ERROR! Received %d %s" % (r.status_code, r.reason))
if 400 <= r.status_code < 500: if 400 <= r.status_code < 500:
try: try:
@ -115,6 +120,7 @@ def register_new_user(
shared_secret: str, shared_secret: str,
admin: Optional[bool], admin: Optional[bool],
user_type: Optional[str], user_type: Optional[str],
exists_ok: bool = False,
) -> None: ) -> None:
if not user: if not user:
try: try:
@ -154,7 +160,13 @@ def register_new_user(
admin = False admin = False
request_registration( request_registration(
user, password, server_location, shared_secret, bool(admin), user_type user,
password,
server_location,
shared_secret,
bool(admin),
user_type,
exists_ok=exists_ok,
) )
@ -174,10 +186,22 @@ def main() -> None:
help="Local part of the new user. Will prompt if omitted.", help="Local part of the new user. Will prompt if omitted.",
) )
parser.add_argument( parser.add_argument(
"--exists-ok",
action="store_true",
help="Do not fail if user already exists.",
)
password_group = parser.add_mutually_exclusive_group()
password_group.add_argument(
"-p", "-p",
"--password", "--password",
default=None, default=None,
help="New password for user. Will prompt if omitted.", help="New password for user. Will prompt for a password if "
"this flag and `--password-file` are both omitted.",
)
password_group.add_argument(
"--password-file",
default=None,
help="File containing the new password for user. If set, will override `--password`.",
) )
parser.add_argument( parser.add_argument(
"-t", "-t",
@ -185,6 +209,7 @@ def main() -> None:
default=None, default=None,
help="User type as specified in synapse.api.constants.UserTypes", help="User type as specified in synapse.api.constants.UserTypes",
) )
admin_group = parser.add_mutually_exclusive_group() admin_group = parser.add_mutually_exclusive_group()
admin_group.add_argument( admin_group.add_argument(
"-a", "-a",
@ -247,6 +272,11 @@ def main() -> None:
print(_NO_SHARED_SECRET_OPTS_ERROR, file=sys.stderr) print(_NO_SHARED_SECRET_OPTS_ERROR, file=sys.stderr)
sys.exit(1) sys.exit(1)
if args.password_file:
password = _read_file(args.password_file, "password-file").strip()
else:
password = args.password
if args.server_url: if args.server_url:
server_url = args.server_url server_url = args.server_url
elif config is not None: elif config is not None:
@ -270,7 +300,13 @@ def main() -> None:
admin = args.admin admin = args.admin
register_new_user( register_new_user(
args.user, args.password, server_url, secret, admin, args.user_type args.user,
password,
server_url,
secret,
admin,
args.user_type,
exists_ok=args.exists_ok,
) )
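For reference, a minimal sketch of driving the updated helper from Python; the module path and server details are assumptions, and the same behaviour is exposed on the command line by the new --password-file and --exists-ok flags.

    from synapse._scripts.register_new_matrix_user import request_registration  # path assumed

    request_registration(
        "alice",                          # user
        "wonderland",                     # password (or read from a password file)
        "http://localhost:8008",          # placeholder homeserver URL
        "REGISTRATION_SHARED_SECRET",     # placeholder shared secret
        False,                            # admin
        user_type=None,
        exists_ok=True,  # with the new flag, an existing @alice is not treated as an error
    )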

View file

@ -777,22 +777,74 @@ class Porter:
await self._setup_events_stream_seqs() await self._setup_events_stream_seqs()
await self._setup_sequence( await self._setup_sequence(
"un_partial_stated_event_stream_sequence", "un_partial_stated_event_stream_sequence",
("un_partial_stated_event_stream",), [("un_partial_stated_event_stream", "stream_id")],
) )
await self._setup_sequence( await self._setup_sequence(
"device_inbox_sequence", ("device_inbox", "device_federation_outbox") "device_inbox_sequence",
[
("device_inbox", "stream_id"),
("device_federation_outbox", "stream_id"),
],
) )
await self._setup_sequence( await self._setup_sequence(
"account_data_sequence", "account_data_sequence",
("room_account_data", "room_tags_revisions", "account_data"), [
("room_account_data", "stream_id"),
("room_tags_revisions", "stream_id"),
("account_data", "stream_id"),
],
)
await self._setup_sequence(
"receipts_sequence",
[
("receipts_linearized", "stream_id"),
],
)
await self._setup_sequence(
"presence_stream_sequence",
[
("presence_stream", "stream_id"),
],
) )
await self._setup_sequence("receipts_sequence", ("receipts_linearized",))
await self._setup_sequence("presence_stream_sequence", ("presence_stream",))
await self._setup_auth_chain_sequence() await self._setup_auth_chain_sequence()
await self._setup_sequence( await self._setup_sequence(
"application_services_txn_id_seq", "application_services_txn_id_seq",
("application_services_txns",), [
"txn_id", (
"application_services_txns",
"txn_id",
)
],
)
await self._setup_sequence(
"device_lists_sequence",
[
("device_lists_stream", "stream_id"),
("user_signature_stream", "stream_id"),
("device_lists_outbound_pokes", "stream_id"),
("device_lists_changes_in_room", "stream_id"),
("device_lists_remote_pending", "stream_id"),
("device_lists_changes_converted_stream_position", "stream_id"),
],
)
await self._setup_sequence(
"e2e_cross_signing_keys_sequence",
[
("e2e_cross_signing_keys", "stream_id"),
],
)
await self._setup_sequence(
"push_rules_stream_sequence",
[
("push_rules_stream", "stream_id"),
],
)
await self._setup_sequence(
"pushers_sequence",
[
("pushers", "id"),
("deleted_pushers", "stream_id"),
],
) )
# Step 3. Get tables. # Step 3. Get tables.
@ -1101,12 +1153,11 @@ class Porter:
async def _setup_sequence( async def _setup_sequence(
self, self,
sequence_name: str, sequence_name: str,
stream_id_tables: Iterable[str], stream_id_tables: Iterable[Tuple[str, str]],
column_name: str = "stream_id",
) -> None: ) -> None:
"""Set a sequence to the correct value.""" """Set a sequence to the correct value."""
current_stream_ids = [] current_stream_ids = []
for stream_id_table in stream_id_tables: for stream_id_table, column_name in stream_id_tables:
max_stream_id = cast( max_stream_id = cast(
int, int,
await self.sqlite_store.db_pool.simple_select_one_onecol( await self.sqlite_store.db_pool.simple_select_one_onecol(
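A rough sketch of what the (table, column) pairs are used for, assuming the usual Postgres setval() approach rather than the porter's exact SQL:

    from typing import Iterable, Tuple

    def sequence_setup_sql(
        sequence_name: str, stream_id_tables: Iterable[Tuple[str, str]]
    ) -> str:
        # Take the largest value seen in any of the listed columns and advance
        # the sequence past it.
        max_exprs = ", ".join(
            f"(SELECT COALESCE(MAX({column}), 0) FROM {table})"
            for table, column in stream_id_tables
        )
        return f"SELECT setval('{sequence_name}', GREATEST({max_exprs}, 1) + 1)"

    print(sequence_setup_sql(
        "device_inbox_sequence",
        [("device_inbox", "stream_id"), ("device_federation_outbox", "stream_id")],
    ))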

View file

@ -50,7 +50,7 @@ class Membership:
KNOCK: Final = "knock" KNOCK: Final = "knock"
LEAVE: Final = "leave" LEAVE: Final = "leave"
BAN: Final = "ban" BAN: Final = "ban"
LIST: Final = (INVITE, JOIN, KNOCK, LEAVE, BAN) LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}
class PresenceState: class PresenceState:
@ -238,7 +238,7 @@ class EventUnsignedContentFields:
"""Fields found inside the 'unsigned' data on events""" """Fields found inside the 'unsigned' data on events"""
# Requesting user's membership, per MSC4115 # Requesting user's membership, per MSC4115
MSC4115_MEMBERSHIP: Final = "io.element.msc4115.membership" MEMBERSHIP: Final = "membership"
class RoomTypes: class RoomTypes:

View file

@ -316,6 +316,10 @@ class Ratelimiter:
) )
if not allowed: if not allowed:
# We pause for a bit here to stop clients from "tight-looping" on
# retrying their request.
await self.clock.sleep(0.5)
raise LimitExceededError( raise LimitExceededError(
limiter_name=self._limiter_name, limiter_name=self._limiter_name,
retry_after_ms=int(1000 * (time_allowed - time_now_s)), retry_after_ms=int(1000 * (time_allowed - time_now_s)),
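A self-contained sketch of the pause-before-rejecting pattern, using plain asyncio in place of Synapse's Clock (the exception class is a stand-in):

    import asyncio

    class LimitExceeded(Exception):
        pass

    async def enforce(allowed: bool) -> None:
        if not allowed:
            # Pause briefly so a client that retries immediately cannot
            # tight-loop against the rate limiter.
            await asyncio.sleep(0.5)
            raise LimitExceeded()

    asyncio.run(enforce(True))  # an allowed request passes straight through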

View file

@ -68,6 +68,7 @@ from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig
from synapse.crypto import context_factory from synapse.crypto import context_factory
from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.events.presence_router import load_legacy_presence_router from synapse.events.presence_router import load_legacy_presence_router
from synapse.handlers.auth import load_legacy_password_auth_providers from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseSite from synapse.http.site import SynapseSite
@ -582,6 +583,11 @@ async def start(hs: "HomeServer") -> None:
m = module(config, module_api) m = module(config, module_api)
logger.info("Loaded module %s", m) logger.info("Loaded module %s", m)
if hs.config.auto_accept_invites.enabled:
# Start the local auto_accept_invites module.
m = InviteAutoAccepter(hs.config.auto_accept_invites, module_api)
logger.info("Loaded local module %s", m)
load_legacy_spam_checkers(hs) load_legacy_spam_checkers(hs)
load_legacy_third_party_event_rules(hs) load_legacy_third_party_event_rules(hs)
load_legacy_presence_router(hs) load_legacy_presence_router(hs)
@ -675,17 +681,17 @@ def setup_sentry(hs: "HomeServer") -> None:
) )
# We set some default tags that give some context to this instance # We set some default tags that give some context to this instance
with sentry_sdk.configure_scope() as scope: global_scope = sentry_sdk.Scope.get_global_scope()
scope.set_tag("matrix_server_name", hs.config.server.server_name) global_scope.set_tag("matrix_server_name", hs.config.server.server_name)
app = ( app = (
hs.config.worker.worker_app hs.config.worker.worker_app
if hs.config.worker.worker_app if hs.config.worker.worker_app
else "synapse.app.homeserver" else "synapse.app.homeserver"
) )
name = hs.get_instance_name() name = hs.get_instance_name()
scope.set_tag("worker_app", app) global_scope.set_tag("worker_app", app)
scope.set_tag("worker_name", name) global_scope.set_tag("worker_name", name)
def setup_sdnotify(hs: "HomeServer") -> None: def setup_sdnotify(hs: "HomeServer") -> None:

View file

@ -23,6 +23,7 @@ from synapse.config import ( # noqa: F401
api, api,
appservice, appservice,
auth, auth,
auto_accept_invites,
background_updates, background_updates,
cache, cache,
captcha, captcha,
@ -120,6 +121,7 @@ class RootConfig:
federation: federation.FederationConfig federation: federation.FederationConfig
retention: retention.RetentionConfig retention: retention.RetentionConfig
background_updates: background_updates.BackgroundUpdateConfig background_updates: background_updates.BackgroundUpdateConfig
auto_accept_invites: auto_accept_invites.AutoAcceptInvitesConfig
config_classes: List[Type["Config"]] = ... config_classes: List[Type["Config"]] = ...
config_files: List[str] config_files: List[str]

View file

@ -0,0 +1,43 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Any
from synapse.types import JsonDict
from ._base import Config
class AutoAcceptInvitesConfig(Config):
section = "auto_accept_invites"
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
auto_accept_invites_config = config.get("auto_accept_invites") or {}
self.enabled = auto_accept_invites_config.get("enabled", False)
self.accept_invites_only_for_direct_messages = auto_accept_invites_config.get(
"only_for_direct_messages", False
)
self.accept_invites_only_from_local_users = auto_accept_invites_config.get(
"only_from_local_users", False
)
self.worker_to_run_on = auto_accept_invites_config.get("worker_to_run_on")
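The keys read here correspond to a homeserver.yaml section shaped roughly as follows, shown as the parsed dict with illustrative values:

    auto_accept_invites_section = {
        "auto_accept_invites": {
            "enabled": True,                   # turn the module on
            "only_for_direct_messages": True,  # only auto-join DM invites
            "only_from_local_users": False,    # also accept invites from remote users
            "worker_to_run_on": None,          # or the name of a specific worker
        }
    }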

View file

@ -66,6 +66,17 @@ class CasConfig(Config):
self.cas_enable_registration = cas_config.get("enable_registration", True) self.cas_enable_registration = cas_config.get("enable_registration", True)
self.cas_allow_numeric_ids = cas_config.get("allow_numeric_ids")
self.cas_numeric_ids_prefix = cas_config.get("numeric_ids_prefix")
if (
self.cas_numeric_ids_prefix is not None
and self.cas_numeric_ids_prefix.isalnum() is False
):
raise ConfigError(
"Only alphanumeric characters are allowed for numeric IDs prefix",
("cas_config", "numeric_ids_prefix"),
)
self.idp_name = cas_config.get("idp_name", "CAS") self.idp_name = cas_config.get("idp_name", "CAS")
self.idp_icon = cas_config.get("idp_icon") self.idp_icon = cas_config.get("idp_icon")
self.idp_brand = cas_config.get("idp_brand") self.idp_brand = cas_config.get("idp_brand")
@ -77,6 +88,8 @@ class CasConfig(Config):
self.cas_displayname_attribute = None self.cas_displayname_attribute = None
self.cas_required_attributes = [] self.cas_required_attributes = []
self.cas_enable_registration = False self.cas_enable_registration = False
self.cas_allow_numeric_ids = False
self.cas_numeric_ids_prefix = "u"
# CAS uses a legacy required attributes mapping, not the one provided by # CAS uses a legacy required attributes mapping, not the one provided by

View file

@ -332,6 +332,9 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data. # MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False) self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3575 (Sliding Sync API endpoints)
self.msc3575_enabled: bool = experimental.get("msc3575_enabled", False)
# MSC3773: Thread notifications # MSC3773: Thread notifications
self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False) self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)
@ -390,9 +393,6 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data. # MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False) self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3967: Do not require UIA when first uploading cross signing keys
self.msc3967_enabled = experimental.get("msc3967_enabled", False)
# MSC3861: Matrix architecture change to delegate authentication via OIDC # MSC3861: Matrix architecture change to delegate authentication via OIDC
try: try:
self.msc3861 = MSC3861(**experimental.get("msc3861", {})) self.msc3861 = MSC3861(**experimental.get("msc3861", {}))
@ -433,6 +433,16 @@ class ExperimentalConfig(Config):
("experimental", "msc4108_delegation_endpoint"), ("experimental", "msc4108_delegation_endpoint"),
) )
self.msc4115_membership_on_events = experimental.get( self.msc3823_account_suspension = experimental.get(
"msc4115_membership_on_events", False "msc3823_account_suspension", False
) )
self.msc3916_authenticated_media_enabled = experimental.get(
"msc3916_authenticated_media_enabled", False
)
# MSC4151: Report room API (Client-Server API)
self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
# MSC4156: Migrate server_name to via
self.msc4156_enabled: bool = experimental.get("msc4156_enabled", False)
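For illustration, the new experimental flags read above as a parsed config fragment; each defaults to False when omitted and the values here are examples only:

    experimental_features = {
        "msc3575_enabled": True,                      # Sliding Sync API endpoints
        "msc3823_account_suspension": True,
        "msc3916_authenticated_media_enabled": True,  # authenticated federation media
        "msc4151_enabled": True,                      # Report room API
        "msc4156_enabled": True,                      # migrate server_name to via
    }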

View file

@ -42,6 +42,10 @@ class FederationConfig(Config):
for domain in federation_domain_whitelist: for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True self.federation_domain_whitelist[domain] = True
self.federation_whitelist_endpoint_enabled = config.get(
"federation_whitelist_endpoint_enabled", False
)
federation_metrics_domains = config.get("federation_metrics_domains") or [] federation_metrics_domains = config.get("federation_metrics_domains") or []
validate_config( validate_config(
_METRICS_FOR_DOMAINS_SCHEMA, _METRICS_FOR_DOMAINS_SCHEMA,

View file

@ -23,6 +23,7 @@ from .account_validity import AccountValidityConfig
from .api import ApiConfig from .api import ApiConfig
from .appservice import AppServiceConfig from .appservice import AppServiceConfig
from .auth import AuthConfig from .auth import AuthConfig
from .auto_accept_invites import AutoAcceptInvitesConfig
from .background_updates import BackgroundUpdateConfig from .background_updates import BackgroundUpdateConfig
from .cache import CacheConfig from .cache import CacheConfig
from .captcha import CaptchaConfig from .captcha import CaptchaConfig
@ -105,4 +106,5 @@ class HomeServerConfig(RootConfig):
RedisConfig, RedisConfig,
ExperimentalConfig, ExperimentalConfig,
BackgroundUpdateConfig, BackgroundUpdateConfig,
AutoAcceptInvitesConfig,
] ]

View file

@ -218,3 +218,13 @@ class RatelimitConfig(Config):
"rc_media_create", "rc_media_create",
defaults={"per_second": 10, "burst_count": 50}, defaults={"per_second": 10, "burst_count": 50},
) )
self.remote_media_downloads = RatelimitSettings(
key="rc_remote_media_downloads",
per_second=self.parse_size(
config.get("remote_media_download_per_second", "87K")
),
burst_count=self.parse_size(
config.get("remote_media_download_burst_count", "500M")
),
)
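The matching config keys as a parsed fragment; the defaults mirror the values above and are run through parse_size, so the K and M suffixes are byte multipliers (assumed here: 1K = 1024 bytes):

    ratelimit_fragment = {
        "remote_media_download_per_second": "87K",    # sustained download budget
        "remote_media_download_burst_count": "500M",  # burst budget
    }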

View file

@ -0,0 +1,196 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from http import HTTPStatus
from typing import Any, Dict, Tuple
from synapse.api.constants import AccountDataTypes, EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.config.auto_accept_invites import AutoAcceptInvitesConfig
from synapse.module_api import EventBase, ModuleApi, run_as_background_process
logger = logging.getLogger(__name__)
class InviteAutoAccepter:
def __init__(self, config: AutoAcceptInvitesConfig, api: ModuleApi):
# Keep a reference to the Module API.
self._api = api
self._config = config
if not self._config.enabled:
return
should_run_on_this_worker = config.worker_to_run_on == self._api.worker_name
if not should_run_on_this_worker:
logger.info(
"Not accepting invites on this worker (configured: %r, here: %r)",
config.worker_to_run_on,
self._api.worker_name,
)
return
logger.info(
"Accepting invites on this worker (here: %r)", self._api.worker_name
)
# Register the callback.
self._api.register_third_party_rules_callbacks(
on_new_event=self.on_new_event,
)
async def on_new_event(self, event: EventBase, *args: Any) -> None:
"""Listens for new events, and if the event is an invite for a local user then
automatically accepts it.
Args:
event: The incoming event.
"""
# Check if the event is an invite for a local user.
is_invite_for_local_user = (
event.type == EventTypes.Member
and event.is_state()
and event.membership == Membership.INVITE
and self._api.is_mine(event.state_key)
)
# Only accept invites for direct messages if the configuration mandates it.
is_direct_message = event.content.get("is_direct", False)
is_allowed_by_direct_message_rules = (
not self._config.accept_invites_only_for_direct_messages
or is_direct_message is True
)
# Only accept invites from remote users if the configuration mandates it.
is_from_local_user = self._api.is_mine(event.sender)
is_allowed_by_local_user_rules = (
not self._config.accept_invites_only_from_local_users
or is_from_local_user is True
)
if (
is_invite_for_local_user
and is_allowed_by_direct_message_rules
and is_allowed_by_local_user_rules
):
# Make the user join the room. We run this as a background process to circumvent a race condition
# that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
run_as_background_process(
"retry_make_join",
self._retry_make_join,
event.state_key,
event.state_key,
event.room_id,
"join",
bg_start_span=False,
)
if is_direct_message:
# Mark this room as a direct message!
await self._mark_room_as_direct_message(
event.state_key, event.sender, event.room_id
)
async def _mark_room_as_direct_message(
self, user_id: str, dm_user_id: str, room_id: str
) -> None:
"""
Marks a room (`room_id`) as a direct message with the counterparty `dm_user_id`
from the perspective of the user `user_id`.
Args:
user_id: the user for whom the membership is changing
dm_user_id: the user performing the membership change
room_id: room id of the room the user is invited to
"""
# This is a dict of User IDs to tuples of Room IDs
# (get_global will return a frozendict of tuples as it freezes the data,
# but we should accept either frozen or unfrozen variants.)
# Be careful: we convert the outer frozendict into a dict here,
# but the contents of the dict are still frozen (tuples in lieu of lists,
# etc.)
dm_map: Dict[str, Tuple[str, ...]] = dict(
await self._api.account_data_manager.get_global(
user_id, AccountDataTypes.DIRECT
)
or {}
)
if dm_user_id not in dm_map:
dm_map[dm_user_id] = (room_id,)
else:
dm_rooms_for_user = dm_map[dm_user_id]
assert isinstance(dm_rooms_for_user, (tuple, list))
dm_map[dm_user_id] = tuple(dm_rooms_for_user) + (room_id,)
await self._api.account_data_manager.put_global(
user_id, AccountDataTypes.DIRECT, dm_map
)
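The account data maintained by the helper above is the m.direct map from counterparty user ID to room IDs; a hypothetical before/after with made-up IDs:

    dm_map = {"@alice:example.org": ("!existing:example.org",)}

    # After auto-accepting @alice's invite to !new:example.org the entry grows:
    dm_map["@alice:example.org"] = tuple(dm_map["@alice:example.org"]) + ("!new:example.org",)
    assert dm_map == {"@alice:example.org": ("!existing:example.org", "!new:example.org")}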
async def _retry_make_join(
self, sender: str, target: str, room_id: str, new_membership: str
) -> None:
"""
A function to retry sending the `make_join` request with an increasing backoff. This is
implemented to work around a race condition when receiving invites over federation.
Args:
sender: the user performing the membership change
target: the user for whom the membership is changing
room_id: room id of the room to join

new_membership: the type of membership event (in this case will be "join")
"""
sleep = 0
retries = 0
join_event = None
while retries < 5:
try:
await self._api.sleep(sleep)
join_event = await self._api.update_room_membership(
sender=sender,
target=target,
room_id=room_id,
new_membership=new_membership,
)
except SynapseError as e:
if e.code == HTTPStatus.FORBIDDEN:
logger.debug(
f"Update_room_membership was forbidden. This can sometimes be expected for remote invites. Exception: {e}"
)
else:
logger.warn(
f"Update_room_membership raised the following unexpected (SynapseError) exception: {e}"
)
except Exception as e:
logger.warn(
f"Update_room_membership raised the following unexpected exception: {e}"
)
sleep = 2**retries
retries += 1
if join_event is not None:
break
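The sleep = 2**retries update gives the following delays before each of the five attempts:

    delays = [0] + [2**n for n in range(4)]
    assert delays == [0, 1, 2, 4, 8]  # seconds slept before attempts 1..5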

View file

@ -90,6 +90,7 @@ def prune_event(event: EventBase) -> EventBase:
pruned_event.internal_metadata.stream_ordering = ( pruned_event.internal_metadata.stream_ordering = (
event.internal_metadata.stream_ordering event.internal_metadata.stream_ordering
) )
pruned_event.internal_metadata.instance_name = event.internal_metadata.instance_name
pruned_event.internal_metadata.outlier = event.internal_metadata.outlier pruned_event.internal_metadata.outlier = event.internal_metadata.outlier
# Mark the event as redacted # Mark the event as redacted
@ -116,6 +117,7 @@ def clone_event(event: EventBase) -> EventBase:
new_event.internal_metadata.stream_ordering = ( new_event.internal_metadata.stream_ordering = (
event.internal_metadata.stream_ordering event.internal_metadata.stream_ordering
) )
new_event.internal_metadata.instance_name = event.internal_metadata.instance_name
new_event.internal_metadata.outlier = event.internal_metadata.outlier new_event.internal_metadata.outlier = event.internal_metadata.outlier
return new_event return new_event

View file

@ -47,9 +47,9 @@ from synapse.events.utils import (
validate_canonicaljson, validate_canonicaljson,
) )
from synapse.http.servlet import validate_json_object from synapse.http.servlet import validate_json_object
from synapse.rest.models import RequestBodyModel
from synapse.storage.controllers.state import server_acl_evaluator_from_event from synapse.storage.controllers.state import server_acl_evaluator_from_event
from synapse.types import EventID, JsonDict, RoomID, StrCollection, UserID from synapse.types import EventID, JsonDict, RoomID, StrCollection, UserID
from synapse.types.rest import RequestBodyModel
class EventValidator: class EventValidator:

View file

@ -56,6 +56,7 @@ from synapse.api.errors import (
SynapseError, SynapseError,
UnsupportedRoomVersionError, UnsupportedRoomVersionError,
) )
from synapse.api.ratelimiting import Ratelimiter
from synapse.api.room_versions import ( from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS, KNOWN_ROOM_VERSIONS,
EventFormatVersions, EventFormatVersions,
@ -1877,6 +1878,8 @@ class FederationClient(FederationBase):
output_stream: BinaryIO, output_stream: BinaryIO,
max_size: int, max_size: int,
max_timeout_ms: int, max_timeout_ms: int,
download_ratelimiter: Ratelimiter,
ip_address: str,
) -> Tuple[int, Dict[bytes, List[bytes]]]: ) -> Tuple[int, Dict[bytes, List[bytes]]]:
try: try:
return await self.transport_layer.download_media_v3( return await self.transport_layer.download_media_v3(
@ -1885,6 +1888,8 @@ class FederationClient(FederationBase):
output_stream=output_stream, output_stream=output_stream,
max_size=max_size, max_size=max_size,
max_timeout_ms=max_timeout_ms, max_timeout_ms=max_timeout_ms,
download_ratelimiter=download_ratelimiter,
ip_address=ip_address,
) )
except HttpResponseException as e: except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint, # If an error is received that is due to an unrecognised endpoint,
@ -1905,6 +1910,8 @@ class FederationClient(FederationBase):
output_stream=output_stream, output_stream=output_stream,
max_size=max_size, max_size=max_size,
max_timeout_ms=max_timeout_ms, max_timeout_ms=max_timeout_ms,
download_ratelimiter=download_ratelimiter,
ip_address=ip_address,
) )

View file

@ -674,7 +674,7 @@ class FederationServer(FederationBase):
# This is in addition to the HS-level rate limiting applied by # This is in addition to the HS-level rate limiting applied by
# BaseFederationServlet. # BaseFederationServlet.
# type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?) # type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?)
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type] await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
requester=None, requester=None,
key=room_id, key=room_id,
update=False, update=False,
@ -717,7 +717,7 @@ class FederationServer(FederationBase):
SynapseTags.SEND_JOIN_RESPONSE_IS_PARTIAL_STATE, SynapseTags.SEND_JOIN_RESPONSE_IS_PARTIAL_STATE,
caller_supports_partial_state, caller_supports_partial_state,
) )
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type] await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
requester=None, requester=None,
key=room_id, key=room_id,
update=False, update=False,

View file

@ -43,6 +43,7 @@ import ijson
from synapse.api.constants import Direction, Membership from synapse.api.constants import Direction, Membership
from synapse.api.errors import Codes, HttpResponseException, SynapseError from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.api.room_versions import RoomVersion from synapse.api.room_versions import RoomVersion
from synapse.api.urls import ( from synapse.api.urls import (
FEDERATION_UNSTABLE_PREFIX, FEDERATION_UNSTABLE_PREFIX,
@ -819,6 +820,8 @@ class TransportLayerClient:
output_stream: BinaryIO, output_stream: BinaryIO,
max_size: int, max_size: int,
max_timeout_ms: int, max_timeout_ms: int,
download_ratelimiter: Ratelimiter,
ip_address: str,
) -> Tuple[int, Dict[bytes, List[bytes]]]: ) -> Tuple[int, Dict[bytes, List[bytes]]]:
path = f"/_matrix/media/r0/download/{destination}/{media_id}" path = f"/_matrix/media/r0/download/{destination}/{media_id}"
@ -834,6 +837,8 @@ class TransportLayerClient:
"allow_remote": "false", "allow_remote": "false",
"timeout_ms": str(max_timeout_ms), "timeout_ms": str(max_timeout_ms),
}, },
download_ratelimiter=download_ratelimiter,
ip_address=ip_address,
) )
async def download_media_v3( async def download_media_v3(
@ -843,6 +848,8 @@ class TransportLayerClient:
output_stream: BinaryIO, output_stream: BinaryIO,
max_size: int, max_size: int,
max_timeout_ms: int, max_timeout_ms: int,
download_ratelimiter: Ratelimiter,
ip_address: str,
) -> Tuple[int, Dict[bytes, List[bytes]]]: ) -> Tuple[int, Dict[bytes, List[bytes]]]:
path = f"/_matrix/media/v3/download/{destination}/{media_id}" path = f"/_matrix/media/v3/download/{destination}/{media_id}"
@ -862,6 +869,8 @@ class TransportLayerClient:
"allow_redirect": "true", "allow_redirect": "true",
}, },
follow_redirects=True, follow_redirects=True,
download_ratelimiter=download_ratelimiter,
ip_address=ip_address,
) )

View file

@ -33,6 +33,7 @@ from synapse.federation.transport.server.federation import (
FEDERATION_SERVLET_CLASSES, FEDERATION_SERVLET_CLASSES,
FederationAccountStatusServlet, FederationAccountStatusServlet,
FederationUnstableClientKeysClaimServlet, FederationUnstableClientKeysClaimServlet,
FederationUnstableMediaDownloadServlet,
) )
from synapse.http.server import HttpServer, JsonResource from synapse.http.server import HttpServer, JsonResource
from synapse.http.servlet import ( from synapse.http.servlet import (
@ -315,6 +316,13 @@ def register_servlets(
): ):
continue continue
if servletclass == FederationUnstableMediaDownloadServlet:
if (
not hs.config.server.enable_media_repo
or not hs.config.experimental.msc3916_authenticated_media_enabled
):
continue
servletclass( servletclass(
hs=hs, hs=hs,
authenticator=authenticator, authenticator=authenticator,

View file

@ -360,13 +360,29 @@ class BaseFederationServlet:
"request" "request"
) )
return None return None
if (
func.__self__.__class__.__name__ # type: ignore
== "FederationUnstableMediaDownloadServlet"
):
response = await func(
origin, content, request, *args, **kwargs
)
else:
response = await func(
origin, content, request.args, *args, **kwargs
)
else:
if (
func.__self__.__class__.__name__ # type: ignore
== "FederationUnstableMediaDownloadServlet"
):
response = await func(
origin, content, request, *args, **kwargs
)
else:
response = await func( response = await func(
origin, content, request.args, *args, **kwargs origin, content, request.args, *args, **kwargs
) )
else:
response = await func(
origin, content, request.args, *args, **kwargs
)
finally: finally:
# if we used the origin's context as the parent, add a new span using # if we used the origin's context as the parent, add a new span using
# the servlet span as a parent, so that we have a link # the servlet span as a parent, so that we have a link

View file

@ -44,10 +44,13 @@ from synapse.federation.transport.server._base import (
) )
from synapse.http.servlet import ( from synapse.http.servlet import (
parse_boolean_from_args, parse_boolean_from_args,
parse_integer,
parse_integer_from_args, parse_integer_from_args,
parse_string_from_args, parse_string_from_args,
parse_strings_from_args, parse_strings_from_args,
) )
from synapse.http.site import SynapseRequest
from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
from synapse.types import JsonDict from synapse.types import JsonDict
from synapse.util import SYNAPSE_VERSION from synapse.util import SYNAPSE_VERSION
from synapse.util.ratelimitutils import FederationRateLimiter from synapse.util.ratelimitutils import FederationRateLimiter
@ -787,6 +790,43 @@ class FederationAccountStatusServlet(BaseFederationServerServlet):
return 200, {"account_statuses": statuses, "failures": failures} return 200, {"account_statuses": statuses, "failures": failures}
class FederationUnstableMediaDownloadServlet(BaseFederationServerServlet):
"""
Implementation of the new federation media `/download` endpoint outlined in MSC3916. Returns
a multipart/mixed response consisting of a JSON object and the requested media
item. This endpoint only returns local media.
"""
PATH = "/media/download/(?P<media_id>[^/]*)"
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3916"
RATELIMIT = True
def __init__(
self,
hs: "HomeServer",
ratelimiter: FederationRateLimiter,
authenticator: Authenticator,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.media_repo = self.hs.get_media_repository()
async def on_GET(
self,
origin: Optional[str],
content: Literal[None],
request: SynapseRequest,
media_id: str,
) -> None:
max_timeout_ms = parse_integer(
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
)
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
await self.media_repo.get_local_media(
request, media_id, None, max_timeout_ms, federation=True
)
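Putting PREFIX and PATH together, the servlet answers requests shaped roughly like this; the value of the unstable federation prefix is assumed:

    FEDERATION_UNSTABLE_PREFIX = "/_matrix/federation/unstable"  # assumed value
    media_id = "abcDEF123"  # hypothetical media ID
    url = (
        f"{FEDERATION_UNSTABLE_PREFIX}/org.matrix.msc3916"
        f"/media/download/{media_id}?timeout_ms=20000"
    )
    print(url)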
FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = ( FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationSendServlet, FederationSendServlet,
FederationEventServlet, FederationEventServlet,
@ -818,4 +858,5 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationV1SendKnockServlet, FederationV1SendKnockServlet,
FederationMakeKnockServlet, FederationMakeKnockServlet,
FederationAccountStatusServlet, FederationAccountStatusServlet,
FederationUnstableMediaDownloadServlet,
) )

View file

@ -42,7 +42,6 @@ class AdminHandler:
self._device_handler = hs.get_device_handler() self._device_handler = hs.get_device_handler()
self._storage_controllers = hs.get_storage_controllers() self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state self._state_storage_controller = self._storage_controllers.state
self._hs_config = hs.config
self._msc3866_enabled = hs.config.experimental.msc3866.enabled self._msc3866_enabled = hs.config.experimental.msc3866.enabled
async def get_whois(self, user: UserID) -> JsonMapping: async def get_whois(self, user: UserID) -> JsonMapping:
@ -126,13 +125,7 @@ class AdminHandler:
# Get all rooms the user is in or has been in # Get all rooms the user is in or has been in
rooms = await self._store.get_rooms_for_local_user_where_membership_is( rooms = await self._store.get_rooms_for_local_user_where_membership_is(
user_id, user_id,
membership_list=( membership_list=Membership.LIST,
Membership.JOIN,
Membership.LEAVE,
Membership.BAN,
Membership.INVITE,
Membership.KNOCK,
),
) )
# We only try and fetch events for rooms the user has been in. If # We only try and fetch events for rooms the user has been in. If
@ -179,7 +172,7 @@ class AdminHandler:
if room.membership == Membership.JOIN: if room.membership == Membership.JOIN:
stream_ordering = self._store.get_room_max_stream_ordering() stream_ordering = self._store.get_room_max_stream_ordering()
else: else:
stream_ordering = room.stream_ordering stream_ordering = room.event_pos.stream
from_key = RoomStreamToken(topological=0, stream=0) from_key = RoomStreamToken(topological=0, stream=0)
to_key = RoomStreamToken(stream=stream_ordering) to_key = RoomStreamToken(stream=stream_ordering)
@ -221,7 +214,6 @@ class AdminHandler:
self._storage_controllers, self._storage_controllers,
user_id, user_id,
events, events,
msc4115_membership_on_events=self._hs_config.experimental.msc4115_membership_on_events,
) )
writer.write_events(room_id, events) writer.write_events(room_id, events)

View file

@ -78,6 +78,8 @@ class CasHandler:
self._cas_displayname_attribute = hs.config.cas.cas_displayname_attribute self._cas_displayname_attribute = hs.config.cas.cas_displayname_attribute
self._cas_required_attributes = hs.config.cas.cas_required_attributes self._cas_required_attributes = hs.config.cas.cas_required_attributes
self._cas_enable_registration = hs.config.cas.cas_enable_registration self._cas_enable_registration = hs.config.cas.cas_enable_registration
self._cas_allow_numeric_ids = hs.config.cas.cas_allow_numeric_ids
self._cas_numeric_ids_prefix = hs.config.cas.cas_numeric_ids_prefix
self._http_client = hs.get_proxied_http_client() self._http_client = hs.get_proxied_http_client()
@ -188,6 +190,9 @@ class CasHandler:
for child in root[0]: for child in root[0]:
if child.tag.endswith("user"): if child.tag.endswith("user"):
user = child.text user = child.text
# If numeric user IDs are allowed and the username is numeric, add the prefix so Synapse can handle it
if self._cas_allow_numeric_ids and user is not None and user.isdigit():
user = f"{self._cas_numeric_ids_prefix}{user}"
if child.tag.endswith("attributes"): if child.tag.endswith("attributes"):
for attribute in child: for attribute in child:
# ElementTree library expands the namespace in # ElementTree library expands the namespace in
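A standalone sketch of the numeric-ID mapping added above; the function name and defaults are assumptions:

    def map_cas_username(user: str, allow_numeric_ids: bool, prefix: str = "u") -> str:
        # When the option is enabled, a purely numeric CAS username gets the
        # configured alphanumeric prefix prepended before Synapse uses it.
        if allow_numeric_ids and user.isdigit():
            return f"{prefix}{user}"
        return user

    assert map_cas_username("1234", allow_numeric_ids=True) == "u1234"
    assert map_cas_username("alice", allow_numeric_ids=True) == "alice"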

View file

@ -159,20 +159,32 @@ class DeviceWorkerHandler:
@cancellable @cancellable
async def get_device_changes_in_shared_rooms( async def get_device_changes_in_shared_rooms(
self, user_id: str, room_ids: StrCollection, from_token: StreamToken self,
user_id: str,
room_ids: StrCollection,
from_token: StreamToken,
now_token: Optional[StreamToken] = None,
) -> Set[str]: ) -> Set[str]:
"""Get the set of users whose devices have changed who share a room with """Get the set of users whose devices have changed who share a room with
the given user. the given user.
""" """
now_device_lists_key = self.store.get_device_stream_token()
if now_token:
now_device_lists_key = now_token.device_list_key
changed_users = await self.store.get_device_list_changes_in_rooms( changed_users = await self.store.get_device_list_changes_in_rooms(
room_ids, from_token.device_list_key room_ids,
from_token.device_list_key,
now_device_lists_key,
) )
if changed_users is not None: if changed_users is not None:
# We also check if the given user has changed their device. If # We also check if the given user has changed their device. If
# they're in no rooms then the above query won't include them. # they're in no rooms then the above query won't include them.
changed = await self.store.get_users_whose_devices_changed( changed = await self.store.get_users_whose_devices_changed(
from_token.device_list_key, [user_id] from_token.device_list_key,
[user_id],
to_key=now_device_lists_key,
) )
changed_users.update(changed) changed_users.update(changed)
return changed_users return changed_users
@ -190,7 +202,9 @@ class DeviceWorkerHandler:
tracked_users.add(user_id) tracked_users.add(user_id)
changed = await self.store.get_users_whose_devices_changed( changed = await self.store.get_users_whose_devices_changed(
from_token.device_list_key, tracked_users from_token.device_list_key,
tracked_users,
to_key=now_device_lists_key,
) )
return changed return changed
@ -892,6 +906,13 @@ class DeviceHandler(DeviceWorkerHandler):
context=opentracing_context, context=opentracing_context,
) )
await self.store.mark_redundant_device_lists_pokes(
user_id=user_id,
device_id=device_id,
room_id=room_id,
converted_upto_stream_id=stream_id,
)
# Notify replication that we've updated the device list stream. # Notify replication that we've updated the device list stream.
self.notifier.notify_replication() self.notifier.notify_replication()

View file

@ -236,6 +236,13 @@ class DeviceMessageHandler:
local_messages = {} local_messages = {}
remote_messages: Dict[str, Dict[str, Dict[str, JsonDict]]] = {} remote_messages: Dict[str, Dict[str, Dict[str, JsonDict]]] = {}
for user_id, by_device in messages.items(): for user_id, by_device in messages.items():
if not UserID.is_valid(user_id):
logger.warning(
"Ignoring attempt to send device message to invalid user: %r",
user_id,
)
continue
# add an opentracing log entry for each message # add an opentracing log entry for each message
for device_id, message_content in by_device.items(): for device_id, message_content in by_device.items():
log_kv( log_kv(

View file

@ -35,6 +35,7 @@ from synapse.api.errors import CodeMessageException, Codes, NotFoundError, Synap
from synapse.handlers.device import DeviceHandler from synapse.handlers.device import DeviceHandler
from synapse.logging.context import make_deferred_yieldable, run_in_background from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
from synapse.types import ( from synapse.types import (
JsonDict, JsonDict,
JsonMapping, JsonMapping,
@ -45,7 +46,10 @@ from synapse.types import (
from synapse.util import json_decoder from synapse.util import json_decoder
from synapse.util.async_helpers import Linearizer, concurrently_execute from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.cancellation import cancellable from synapse.util.cancellation import cancellable
from synapse.util.retryutils import NotRetryingDestination from synapse.util.retryutils import (
NotRetryingDestination,
filter_destinations_by_retry_limiter,
)
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@ -53,6 +57,9 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
class E2eKeysHandler: class E2eKeysHandler:
def __init__(self, hs: "HomeServer"): def __init__(self, hs: "HomeServer"):
self.config = hs.config self.config = hs.config
@ -62,6 +69,7 @@ class E2eKeysHandler:
self._appservice_handler = hs.get_application_service_handler() self._appservice_handler = hs.get_application_service_handler()
self.is_mine = hs.is_mine self.is_mine = hs.is_mine
self.clock = hs.get_clock() self.clock = hs.get_clock()
self._worker_lock_handler = hs.get_worker_locks_handler()
federation_registry = hs.get_federation_registry() federation_registry = hs.get_federation_registry()
@ -82,6 +90,12 @@ class E2eKeysHandler:
edu_updater.incoming_signing_key_update, edu_updater.incoming_signing_key_update,
) )
self.device_key_uploader = self.upload_device_keys_for_user
else:
self.device_key_uploader = (
ReplicationUploadKeysForUserRestServlet.make_client(hs)
)
# doesn't really work as part of the generic query API, because the # doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the # query request requires an object POST, but we abuse the
# "query handler" interface. # "query handler" interface.
@ -145,6 +159,11 @@ class E2eKeysHandler:
remote_queries = {} remote_queries = {}
for user_id, device_ids in device_keys_query.items(): for user_id, device_ids in device_keys_query.items():
if not UserID.is_valid(user_id):
# Ignore invalid user IDs, which is the same behaviour as if
# the user existed but had no keys.
continue
# we use UserID.from_string to catch invalid user ids # we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)): if self.is_mine(UserID.from_string(user_id)):
local_query[user_id] = device_ids local_query[user_id] = device_ids
@ -259,10 +278,8 @@ class E2eKeysHandler:
"%d destinations to query devices for", len(remote_queries_not_in_cache) "%d destinations to query devices for", len(remote_queries_not_in_cache)
) )
async def _query( async def _query(destination: str) -> None:
destination_queries: Tuple[str, Dict[str, Iterable[str]]] queries = remote_queries_not_in_cache[destination]
) -> None:
destination, queries = destination_queries
return await self._query_devices_for_destination( return await self._query_devices_for_destination(
results, results,
cross_signing_keys, cross_signing_keys,
@ -272,9 +289,20 @@ class E2eKeysHandler:
timeout, timeout,
) )
# Only try and fetch keys for destinations that are not marked as
# down.
filtered_destinations = await filter_destinations_by_retry_limiter(
remote_queries_not_in_cache.keys(),
self.clock,
self.store,
# Let's give an arbitrary grace period for those hosts that are
# only recently down
retry_due_within_ms=60 * 1000,
)
await concurrently_execute( await concurrently_execute(
_query, _query,
remote_queries_not_in_cache.items(), filtered_destinations,
10, 10,
delay_cancellation=True, delay_cancellation=True,
) )
@ -775,36 +803,17 @@ class E2eKeysHandler:
"one_time_keys": A mapping from algorithm to number of keys for that "one_time_keys": A mapping from algorithm to number of keys for that
algorithm, including those previously persisted. algorithm, including those previously persisted.
""" """
# This can only be called from the main process.
assert isinstance(self.device_handler, DeviceHandler)
time_now = self.clock.time_msec() time_now = self.clock.time_msec()
# TODO: Validate the JSON to make sure it has the right keys. # TODO: Validate the JSON to make sure it has the right keys.
device_keys = keys.get("device_keys", None) device_keys = keys.get("device_keys", None)
if device_keys: if device_keys:
logger.info( await self.device_key_uploader(
"Updating device_keys for device %r for user %s at %d", user_id=user_id,
device_id, device_id=device_id,
user_id, keys={"device_keys": device_keys},
time_now,
) )
log_kv(
{
"message": "Updating device_keys for user.",
"user_id": user_id,
"device_id": device_id,
}
)
# TODO: Sign the JSON with the server key
changed = await self.store.set_e2e_device_keys(
user_id, device_id, time_now, device_keys
)
if changed:
# Only notify about device updates *if* the keys actually changed
await self.device_handler.notify_device_update(user_id, [device_id])
else:
log_kv({"message": "Not updating device_keys for user", "user_id": user_id})
one_time_keys = keys.get("one_time_keys", None) one_time_keys = keys.get("one_time_keys", None)
if one_time_keys: if one_time_keys:
log_kv( log_kv(
@ -840,6 +849,49 @@ class E2eKeysHandler:
{"message": "Did not update fallback_keys", "reason": "no keys given"} {"message": "Did not update fallback_keys", "reason": "no keys given"}
) )
result = await self.store.count_e2e_one_time_keys(user_id, device_id)
set_tag("one_time_key_counts", str(result))
return {"one_time_key_counts": result}
@tag_args
async def upload_device_keys_for_user(
self, user_id: str, device_id: str, keys: JsonDict
) -> None:
"""
Args:
user_id: user whose keys are being uploaded.
device_id: device whose keys are being uploaded.
keys: a dict containing the `device_keys` of a /keys/upload request.
"""
# This can only be called from the main process.
assert isinstance(self.device_handler, DeviceHandler)
time_now = self.clock.time_msec()
device_keys = keys["device_keys"]
logger.info(
"Updating device_keys for device %r for user %s at %d",
device_id,
user_id,
time_now,
)
log_kv(
{
"message": "Updating device_keys for user.",
"user_id": user_id,
"device_id": device_id,
}
)
# TODO: Sign the JSON with the server key
changed = await self.store.set_e2e_device_keys(
user_id, device_id, time_now, device_keys
)
if changed:
# Only notify about device updates *if* the keys actually changed
await self.device_handler.notify_device_update(user_id, [device_id])
# the device should have been registered already, but it may have been # the device should have been registered already, but it may have been
# deleted due to a race with a DELETE request. Or we may be using an # deleted due to a race with a DELETE request. Or we may be using an
# old access_token without an associated device_id. Either way, we # old access_token without an associated device_id. Either way, we
@ -847,53 +899,56 @@ class E2eKeysHandler:
# keys without a corresponding device. # keys without a corresponding device.
await self.device_handler.check_device_registered(user_id, device_id) await self.device_handler.check_device_registered(user_id, device_id)
result = await self.store.count_e2e_one_time_keys(user_id, device_id)
set_tag("one_time_key_counts", str(result))
return {"one_time_key_counts": result}
async def _upload_one_time_keys_for_user( async def _upload_one_time_keys_for_user(
self, user_id: str, device_id: str, time_now: int, one_time_keys: JsonDict self, user_id: str, device_id: str, time_now: int, one_time_keys: JsonDict
) -> None: ) -> None:
logger.info( # We take out a lock so that we don't have to worry about a client
"Adding one_time_keys %r for device %r for user %r at %d", # sending duplicate requests.
one_time_keys.keys(), lock_key = f"{user_id}_{device_id}"
device_id, async with self._worker_lock_handler.acquire_lock(
user_id, ONE_TIME_KEY_UPLOAD, lock_key
time_now, ):
) logger.info(
"Adding one_time_keys %r for device %r for user %r at %d",
one_time_keys.keys(),
device_id,
user_id,
time_now,
)
# make a list of (alg, id, key) tuples # make a list of (alg, id, key) tuples
key_list = [] key_list = []
for key_id, key_obj in one_time_keys.items(): for key_id, key_obj in one_time_keys.items():
algorithm, key_id = key_id.split(":") algorithm, key_id = key_id.split(":")
key_list.append((algorithm, key_id, key_obj)) key_list.append((algorithm, key_id, key_obj))
# First we check if we have already persisted any of the keys. # First we check if we have already persisted any of the keys.
existing_key_map = await self.store.get_e2e_one_time_keys( existing_key_map = await self.store.get_e2e_one_time_keys(
user_id, device_id, [k_id for _, k_id, _ in key_list] user_id, device_id, [k_id for _, k_id, _ in key_list]
) )
new_keys = [] # Keys that we need to insert. (alg, id, json) tuples. new_keys = [] # Keys that we need to insert. (alg, id, json) tuples.
for algorithm, key_id, key in key_list: for algorithm, key_id, key in key_list:
ex_json = existing_key_map.get((algorithm, key_id), None) ex_json = existing_key_map.get((algorithm, key_id), None)
if ex_json: if ex_json:
if not _one_time_keys_match(ex_json, key): if not _one_time_keys_match(ex_json, key):
raise SynapseError( raise SynapseError(
400, 400,
( (
"One time key %s:%s already exists. " "One time key %s:%s already exists. "
"Old key: %s; new key: %r" "Old key: %s; new key: %r"
)
% (algorithm, key_id, ex_json, key),
) )
% (algorithm, key_id, ex_json, key), else:
new_keys.append(
(algorithm, key_id, encode_canonical_json(key).decode("ascii"))
) )
else:
new_keys.append(
(algorithm, key_id, encode_canonical_json(key).decode("ascii"))
)
log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys}) log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
await self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys) await self.store.add_e2e_one_time_keys(
user_id, device_id, time_now, new_keys
)
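A self-contained sketch of the per-device serialisation idea, keyed the same way but using an in-process asyncio lock in place of Synapse's cross-worker lock handler:

    import asyncio
    from collections import defaultdict
    from typing import Dict

    ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
    _locks: Dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

    async def upload_one_time_keys(user_id: str, device_id: str, keys: dict) -> None:
        lock_key = f"{user_id}_{device_id}"  # same key shape as the handler above
        async with _locks[lock_key]:
            # ...check which keys already exist, then insert only the new ones
            await asyncio.sleep(0)  # placeholder for the database work

    asyncio.run(upload_one_time_keys("@alice:example.org", "DEVICEID", {}))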
async def upload_signing_keys_for_user( async def upload_signing_keys_for_user(
self, user_id: str, keys: JsonDict self, user_id: str, keys: JsonDict

View file

@ -247,6 +247,12 @@ class E2eRoomKeysHandler:
if current_room_key: if current_room_key:
if self._should_replace_room_key(current_room_key, room_key): if self._should_replace_room_key(current_room_key, room_key):
log_kv({"message": "Replacing room key."}) log_kv({"message": "Replacing room key."})
logger.debug(
"Replacing room key. room=%s session=%s user=%s",
room_id,
session_id,
user_id,
)
# updates are done one at a time in the DB, so send # updates are done one at a time in the DB, so send
# updates right away rather than batching them up, # updates right away rather than batching them up,
# like we do with the inserts # like we do with the inserts
@ -256,6 +262,12 @@ class E2eRoomKeysHandler:
changed = True changed = True
else: else:
log_kv({"message": "Not replacing room_key."}) log_kv({"message": "Not replacing room_key."})
logger.debug(
"Not replacing room key. room=%s session=%s user=%s",
room_id,
session_id,
user_id,
)
else: else:
log_kv( log_kv(
{ {
@ -265,6 +277,12 @@ class E2eRoomKeysHandler:
} }
) )
log_kv({"message": "Replacing room key."}) log_kv({"message": "Replacing room key."})
logger.debug(
"Inserting new room key. room=%s session=%s user=%s",
room_id,
session_id,
user_id,
)
to_insert.append((room_id, session_id, room_key)) to_insert.append((room_id, session_id, room_key))
changed = True changed = True

Some files were not shown because too many files have changed in this diff