Mirror of https://github.com/element-hq/synapse.git (synced 2024-11-26 19:47:05 +03:00)

Commit a90f3d4ae2: Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data

74 changed files with 1948 additions and 1051 deletions
.github/workflows/docker.yml (vendored): 2 changes

@@ -30,7 +30,7 @@ jobs:
         run: docker buildx inspect

       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.5.0
+        uses: sigstore/cosign-installer@v3.6.0

       - name: Checkout repository
         uses: actions/checkout@v4
CHANGES.md: 51 changes

@@ -1,3 +1,54 @@
+# Synapse 1.113.0 (2024-08-13)
+
+No significant changes since 1.113.0rc1.
+
+
+
+
+# Synapse 1.113.0rc1 (2024-08-06)
+
+### Features
+
+- Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17447](https://github.com/element-hq/synapse/issues/17447))
+- Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17477](https://github.com/element-hq/synapse/issues/17477))
+- Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17489](https://github.com/element-hq/synapse/issues/17489))
+- Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17505](https://github.com/element-hq/synapse/issues/17505))
+
+### Bugfixes
+
+- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering. ([\#17450](https://github.com/element-hq/synapse/issues/17450))
+- Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients. ([\#17499](https://github.com/element-hq/synapse/issues/17499))
+
+### Improved Documentation
+
+- Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example. ([\#17476](https://github.com/element-hq/synapse/issues/17476))
+
+### Internal Changes
+
+- Change sliding sync to use their own token format in preparation for storing per-connection state. ([\#17452](https://github.com/element-hq/synapse/issues/17452))
+- Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint. ([\#17478](https://github.com/element-hq/synapse/issues/17478))
+- Do not send down empty room entries down experimental sliding sync endpoint. ([\#17479](https://github.com/element-hq/synapse/issues/17479))
+- Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`. ([\#17481](https://github.com/element-hq/synapse/issues/17481), [\#17482](https://github.com/element-hq/synapse/issues/17482))
+- Add some opentracing tags and logging to the experimental sliding sync implementation. ([\#17501](https://github.com/element-hq/synapse/issues/17501))
+- Split and move Sliding Sync tests so we have some more sane test file sizes. ([\#17504](https://github.com/element-hq/synapse/issues/17504))
+- Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents. ([\#17507](https://github.com/element-hq/synapse/issues/17507))
+- Easier to understand `timeline` assertions in Sliding Sync tests. ([\#17511](https://github.com/element-hq/synapse/issues/17511))
+- Reset the sliding sync connection if we don't recognize the per-connection state position. ([\#17529](https://github.com/element-hq/synapse/issues/17529))
+
+
+
+### Updates to locked dependencies
+
+* Bump bcrypt from 4.1.3 to 4.2.0. ([\#17495](https://github.com/element-hq/synapse/issues/17495))
+* Bump black from 24.4.2 to 24.8.0. ([\#17522](https://github.com/element-hq/synapse/issues/17522))
+* Bump phonenumbers from 8.13.39 to 8.13.42. ([\#17521](https://github.com/element-hq/synapse/issues/17521))
+* Bump ruff from 0.5.4 to 0.5.5. ([\#17494](https://github.com/element-hq/synapse/issues/17494))
+* Bump serde_json from 1.0.120 to 1.0.121. ([\#17493](https://github.com/element-hq/synapse/issues/17493))
+* Bump serde_json from 1.0.121 to 1.0.122. ([\#17525](https://github.com/element-hq/synapse/issues/17525))
+* Bump towncrier from 23.11.0 to 24.7.1. ([\#17523](https://github.com/element-hq/synapse/issues/17523))
+* Bump types-pyopenssl from 24.1.0.20240425 to 24.1.0.20240722. ([\#17496](https://github.com/element-hq/synapse/issues/17496))
+* Bump types-setuptools from 70.1.0.20240627 to 71.1.0.20240726. ([\#17497](https://github.com/element-hq/synapse/issues/17497))
+
 # Synapse 1.112.0 (2024-07-30)

 This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
Cargo.lock (generated): 20 changes

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"

 [[package]]
 name = "bytes"
-version = "1.6.1"
+version = "1.7.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a12916984aab3fa6e39d655a33e09c0071eb36d6ab3aea5c2d78551f1df6d952"
+checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"

 [[package]]
 name = "cfg-if"

@@ -444,9 +444,9 @@ dependencies = [

 [[package]]
 name = "regex"
-version = "1.10.5"
+version = "1.10.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
+checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
 dependencies = [
  "aho-corasick",
  "memchr",

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

 [[package]]
 name = "serde"
-version = "1.0.204"
+version = "1.0.206"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
+checksum = "5b3e4cd94123dd520a128bcd11e34d9e9e423e7e3e50425cb1b4b1e3549d0284"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.204"
+version = "1.0.206"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
+checksum = "fabfb6138d2383ea8208cf98ccf69cdfb1aff4088460681d84189aa259762f97"
 dependencies = [
  "proc-macro2",
  "quote",

@@ -505,9 +505,9 @@ dependencies = [

 [[package]]
 name = "serde_json"
-version = "1.0.121"
+version = "1.0.124"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4ab380d7d9f22ef3f21ad3e6c1ebe8e4fc7a2000ccba2e4d71fc96f15b2cb609"
+checksum = "66ad62847a56b3dba58cc891acd13884b9c61138d330c0d7b6181713d4fce38d"
 dependencies = [
  "itoa",
  "memchr",
Deleted changelog.d entries:

@@ -1 +0,0 @@
-Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering.

@@ -1 +0,0 @@
-Change sliding sync to use their own token format in preparation for storing per-connection state.

@@ -1 +0,0 @@
-Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example.

@@ -1 +0,0 @@
-Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint.

@@ -1 +0,0 @@
-Do not send down empty room entries down experimental sliding sync endpoint.

@@ -1 +0,0 @@
-Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`.

@@ -1 +0,0 @@
-Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`.

changelog.d/17483.bugfix (new file): 1 change

@@ -0,0 +1 @@
+Start handlers for new media endpoints when media resource configured.

Deleted changelog.d entries:

@@ -1 +0,0 @@
-Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients.

@@ -1 +0,0 @@
-Add some opentracing tags and logging to the experimental sliding sync implementation.

@@ -1 +0,0 @@
-Split and move Sliding Sync tests so we have some more sane test file sizes.

@@ -1 +0,0 @@
-Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents.

changelog.d/17510.bugfix (new file): 1 change

@@ -0,0 +1 @@
+Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

Deleted changelog.d entry:

@@ -1 +0,0 @@
-Easier to understand `timeline` assertions in Sliding Sync tests.

changelog.d/17514.misc (new file): 1 change

@@ -0,0 +1 @@
+Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

changelog.d/17515.doc (new file): 3 changes

@@ -0,0 +1,3 @@
+Clarify default behaviour of the
+[`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites)
+option.

changelog.d/17531.misc (new file): 1 change

@@ -0,0 +1 @@
+Fixup comment in sliding sync implementation.

changelog.d/17535.bugfix (new file): 1 change

@@ -0,0 +1 @@
+Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately.

changelog.d/17536.misc (new file): 1 change

@@ -0,0 +1 @@
+Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`.

changelog.d/17537.misc (new file): 1 change

@@ -0,0 +1 @@
+Fix performance of device lists in `/key/changes` and sliding sync.

changelog.d/17538.bugfix (new file): 1 change

@@ -0,0 +1 @@
+Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

changelog.d/17542.misc (new file): 1 change

@@ -0,0 +1 @@
+Bump setuptools from 67.6.0 to 72.1.0.

changelog.d/17557.misc (new file): 1 change

@@ -0,0 +1 @@
+Add a utility function for generating random event IDs.

changelog.d/17558.misc (new file): 1 change

@@ -0,0 +1 @@
+Speed up responding to media requests.

changelog.d/17559.doc (new file): 1 change

@@ -0,0 +1 @@
+Improve docstrings for profile methods.

changelog.d/17561.misc (new file): 1 change

@@ -0,0 +1 @@
+Speed up responding to media requests.

changelog.d/17563.misc (new file): 1 change

@@ -0,0 +1 @@
+Reduce log spam of multipart files.

changelog.d/17564.misc (new file): 1 change

@@ -0,0 +1 @@
+Speed up responding to media requests.
debian/changelog (vendored): 12 changes

@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.113.0) stable; urgency=medium
+
+  * New Synapse release 1.113.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 13 Aug 2024 14:36:56 +0100
+
+matrix-synapse-py3 (1.113.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.113.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 06 Aug 2024 12:23:23 +0100
+
 matrix-synapse-py3 (1.112.0) stable; urgency=medium

   * New Synapse release 1.112.0.
Documentation changes:

@@ -21,8 +21,10 @@ incrementing integer, but backfilled events start with `stream_ordering=-1` and

 ---

-- `/sync` returns things in the order they arrive at the server (`stream_ordering`).
-- `/messages` (and `/backfill` in the federation API) return them in the order determined by the event graph `(topological_ordering, stream_ordering)`.
+- Incremental `/sync?since=xxx` returns things in the order they arrive at the server
+  (`stream_ordering`).
+- Initial `/sync`, `/messages` (and `/backfill` in the federation API) return them in
+  the order determined by the event graph `(topological_ordering, stream_ordering)`.

 The general idea is that, if you're following a room in real-time (i.e.
 `/sync`), you probably want to see the messages as they arrive at your server,
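To make the two orderings in the documentation change above concrete, here is a small illustrative Python sketch (not Synapse code; the event values and the simplified `Event` fields are assumptions for the example) showing how the same events sort under `stream_ordering` alone versus `(topological_ordering, stream_ordering)`:

```python
# Illustrative only: demonstrates the two orderings described in the docs hunk
# above. The events below are made up; real Synapse events carry much more state.
from dataclasses import dataclass


@dataclass
class Event:
    event_id: str
    topological_ordering: int  # position (depth) in the event graph
    stream_ordering: int       # arrival order at this server; backfilled events are negative


events = [
    Event("$backfilled", topological_ordering=1, stream_ordering=-2),
    Event("$early", topological_ordering=2, stream_ordering=5),
    Event("$late_arrival", topological_ordering=3, stream_ordering=9),
    Event("$recent", topological_ordering=4, stream_ordering=7),
]

# Incremental /sync?since=xxx: order of arrival at the server.
incremental_sync_order = sorted(events, key=lambda e: e.stream_ordering)

# Initial /sync, /messages, /backfill: event-graph order, with stream_ordering
# breaking ties between events at the same depth.
topological_order = sorted(
    events, key=lambda e: (e.topological_ordering, e.stream_ordering)
)

print([e.event_id for e in incremental_sync_order])
# ['$backfilled', '$early', '$recent', '$late_arrival']
print([e.event_id for e in topological_order])
# ['$backfilled', '$early', '$late_arrival', '$recent']
```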
@@ -4685,7 +4685,9 @@ This setting has the following sub-options:
 * `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
    for direct messages. Defaults to false.
 * `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
-* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".
+* `worker_to_run_on`: Which worker to run this module on. This must match
+  the "worker_name". If not set or `null`, invites will be accepted on the
+  main process.

 NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
 The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join
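For reference, a hedged `homeserver.yaml` sketch of how the `auto_accept_invites` sub-options documented above fit together (the worker name is illustrative, and the linked config documentation remains the authoritative source):

```yaml
# Illustrative snippet; the values and the worker name below are examples only.
auto_accept_invites:
  enabled: true
  only_for_direct_messages: true
  only_from_local_users: true
  # Must match a "worker_name"; if not set or null, invites are accepted
  # on the main process.
  worker_to_run_on: "invite_worker1"
```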
poetry.lock (generated): 395 changes

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.

 [[package]]
 name = "annotated-types"
@@ -107,33 +107,33 @@ typecheck = ["mypy"]

 [[package]]
 name = "black"
-version = "24.4.2"
+version = "24.8.0"
 description = "The uncompromising code formatter."
 optional = false
 python-versions = ">=3.8"
 files = [
     [22 wheel/sdist hash entries for black 24.4.2 replaced with the corresponding 24.8.0 entries]
 ]

 [package.dependencies]
@@ -998,153 +998,149 @@ pyasn1 = ">=0.4.6"

 [[package]]
 name = "lxml"
-version = "5.2.2"
+version = "5.3.0"
 description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
 optional = true
 python-versions = ">=3.6"
 files = [
     [wheel/sdist hash entries for lxml 5.2.2 replaced with the corresponding 5.3.0 entries]
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b0c7a688944891086ba192e21c5229dea54382f4836a209ff8d0a660fac06be"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:747a3d3e98e24597981ca0be0fd922aebd471fa99d0043a3842d00cdcad7ad6a"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:86a6b24b19eaebc448dc56b87c4865527855145d851f9fc3891673ff97950540"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b11a5d918a6216e521c715b02749240fb07ae5a1fefd4b7bf12f833bc8b4fe70"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68b87753c784d6acb8a25b05cb526c3406913c9d988d51f80adecc2b0775d6aa"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:109fa6fede314cc50eed29e6e56c540075e63d922455346f11e4d7a036d2b8cf"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:02ced472497b8362c8e902ade23e3300479f4f43e45f4105c85ef43b8db85229"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:6b038cc86b285e4f9fea2ba5ee76e89f21ed1ea898e287dc277a25884f3a7dfe"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:7437237c6a66b7ca341e868cda48be24b8701862757426852c9b3186de1da8a2"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:7f41026c1d64043a36fda21d64c5026762d53a77043e73e94b71f0521939cc71"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:482c2f67761868f0108b1743098640fbb2a28a8e15bf3f47ada9fa59d9fe08c3"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:1483fd3358963cc5c1c9b122c80606a3a79ee0875bcac0204149fa09d6ff2727"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:2dec2d1130a9cda5b904696cec33b2cfb451304ba9081eeda7f90f724097300a"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-win32.whl", hash = "sha256:a0eabd0a81625049c5df745209dc7fcef6e2aea7793e5f003ba363610aa0a3ff"},
|
||||
{file = "lxml-5.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:89e043f1d9d341c52bf2af6d02e6adde62e0a46e6755d5eb60dc6e4f0b8aeca2"},
|
||||
{file = "lxml-5.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7b1cd427cb0d5f7393c31b7496419da594fe600e6fdc4b105a54f82405e6626c"},
|
||||
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51806cfe0279e06ed8500ce19479d757db42a30fd509940b1701be9c86a5ff9a"},
|
||||
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ee70d08fd60c9565ba8190f41a46a54096afa0eeb8f76bd66f2c25d3b1b83005"},
|
||||
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:8dc2c0395bea8254d8daebc76dcf8eb3a95ec2a46fa6fae5eaccee366bfe02ce"},
|
||||
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:6ba0d3dcac281aad8a0e5b14c7ed6f9fa89c8612b47939fc94f80b16e2e9bc83"},
|
||||
{file = "lxml-5.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:6e91cf736959057f7aac7adfc83481e03615a8e8dd5758aa1d95ea69e8931dba"},
|
||||
{file = "lxml-5.3.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:94d6c3782907b5e40e21cadf94b13b0842ac421192f26b84c45f13f3c9d5dc27"},
|
||||
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c300306673aa0f3ed5ed9372b21867690a17dba38c68c44b287437c362ce486b"},
|
||||
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d9b952e07aed35fe2e1a7ad26e929595412db48535921c5013edc8aa4a35ce"},
|
||||
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:01220dca0d066d1349bd6a1726856a78f7929f3878f7e2ee83c296c69495309e"},
|
||||
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:2d9b8d9177afaef80c53c0a9e30fa252ff3036fb1c6494d427c066a4ce6a282f"},
|
||||
{file = "lxml-5.3.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:20094fc3f21ea0a8669dc4c61ed7fa8263bd37d97d93b90f28fc613371e7a875"},
|
||||
{file = "lxml-5.3.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ace2c2326a319a0bb8a8b0e5b570c764962e95818de9f259ce814ee666603f19"},
|
||||
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:92e67a0be1639c251d21e35fe74df6bcc40cba445c2cda7c4a967656733249e2"},
|
||||
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd5350b55f9fecddc51385463a4f67a5da829bc741e38cf689f38ec9023f54ab"},
|
||||
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:4c1fefd7e3d00921c44dc9ca80a775af49698bbfd92ea84498e56acffd4c5469"},
|
||||
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:71a8dd38fbd2f2319136d4ae855a7078c69c9a38ae06e0c17c73fd70fc6caad8"},
|
||||
{file = "lxml-5.3.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:97acf1e1fd66ab53dacd2c35b319d7e548380c2e9e8c54525c6e76d21b1ae3b1"},
|
||||
{file = "lxml-5.3.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:68934b242c51eb02907c5b81d138cb977b2129a0a75a8f8b60b01cb8586c7b21"},
|
||||
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b710bc2b8292966b23a6a0121f7a6c51d45d2347edcc75f016ac123b8054d3f2"},
|
||||
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18feb4b93302091b1541221196a2155aa296c363fd233814fa11e181adebc52f"},
|
||||
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3eb44520c4724c2e1a57c0af33a379eee41792595023f367ba3952a2d96c2aab"},
|
||||
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:609251a0ca4770e5a8768ff902aa02bf636339c5a93f9349b48eb1f606f7f3e9"},
|
||||
{file = "lxml-5.3.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:516f491c834eb320d6c843156440fe7fc0d50b33e44387fcec5b02f0bc118a4c"},
|
||||
{file = "lxml-5.3.0.tar.gz", hash = "sha256:4e109ca30d1edec1ac60cdbe341905dc3b8f55b16855e03a54aaf59e51ec8c6f"},
|
||||
]

[package.extras]

@ -1152,7 +1148,7 @@ cssselect = ["cssselect (>=0.7)"]
html-clean = ["lxml-html-clean"]
html5 = ["html5lib"]
htmlsoup = ["BeautifulSoup4"]
source = ["Cython (>=3.0.10)"]
source = ["Cython (>=3.0.11)"]

[[package]]
name = "lxml-stubs"

@ -1516,13 +1512,13 @@ files = [

[[package]]
name = "phonenumbers"
version = "8.13.39"
version = "8.13.43"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.39-py2.py3-none-any.whl", hash = "sha256:3ad2d086fa71e7eef409001b9195ac54bebb0c6e3e752209b558ca192c9229a0"},
{file = "phonenumbers-8.13.39.tar.gz", hash = "sha256:db7ca4970d206b2056231105300753b1a5b229f43416f8c2b3010e63fbb68d77"},
{file = "phonenumbers-8.13.43-py2.py3-none-any.whl", hash = "sha256:339e521403fe4dd9c664dbbeb2fe434f9ea5c81e54c0fdfadbaeb53b26a76c27"},
{file = "phonenumbers-8.13.43.tar.gz", hash = "sha256:35b904e4a79226eee027fbb467a9aa6f1ab9ffc3c09c91bf14b885c154936726"},
]

[[package]]
@ -2103,7 +2099,6 @@ files = [
|
|||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
|
||||
|
@ -2111,16 +2106,8 @@ files = [
|
|||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
|
||||
|
@ -2137,7 +2124,6 @@ files = [
|
|||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
|
||||
|
@ -2145,7 +2131,6 @@ files = [
|
|||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
|
||||
{file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
|
||||
|
@ -2418,13 +2403,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]

[[package]]
name = "sentry-sdk"
version = "2.10.0"
version = "2.12.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
files = [
{file = "sentry_sdk-2.10.0-py2.py3-none-any.whl", hash = "sha256:87b3d413c87d8e7f816cc9334bff255a83d8b577db2b22042651c30c19c09190"},
{file = "sentry_sdk-2.10.0.tar.gz", hash = "sha256:545fcc6e36c335faa6d6cda84669b6e17025f31efbf3b2211ec14efe008b75d1"},
{file = "sentry_sdk-2.12.0-py2.py3-none-any.whl", hash = "sha256:7a8d5163d2ba5c5f4464628c6b68f85e86972f7c636acc78aed45c61b98b7a5e"},
{file = "sentry_sdk-2.12.0.tar.gz", hash = "sha256:8763840497b817d44c49b3fe3f5f7388d083f2337ffedf008b2cdb63b5c86dc6"},
]

[package.dependencies]
@ -2454,7 +2439,7 @@ langchain = ["langchain (>=0.0.210)"]
|
|||
loguru = ["loguru (>=0.5)"]
|
||||
openai = ["openai (>=1.0.0)", "tiktoken (>=0.3.0)"]
|
||||
opentelemetry = ["opentelemetry-distro (>=0.35b0)"]
|
||||
opentelemetry-experimental = ["opentelemetry-instrumentation-aio-pika (==0.46b0)", "opentelemetry-instrumentation-aiohttp-client (==0.46b0)", "opentelemetry-instrumentation-aiopg (==0.46b0)", "opentelemetry-instrumentation-asgi (==0.46b0)", "opentelemetry-instrumentation-asyncio (==0.46b0)", "opentelemetry-instrumentation-asyncpg (==0.46b0)", "opentelemetry-instrumentation-aws-lambda (==0.46b0)", "opentelemetry-instrumentation-boto (==0.46b0)", "opentelemetry-instrumentation-boto3sqs (==0.46b0)", "opentelemetry-instrumentation-botocore (==0.46b0)", "opentelemetry-instrumentation-cassandra (==0.46b0)", "opentelemetry-instrumentation-celery (==0.46b0)", "opentelemetry-instrumentation-confluent-kafka (==0.46b0)", "opentelemetry-instrumentation-dbapi (==0.46b0)", "opentelemetry-instrumentation-django (==0.46b0)", "opentelemetry-instrumentation-elasticsearch (==0.46b0)", "opentelemetry-instrumentation-falcon (==0.46b0)", "opentelemetry-instrumentation-fastapi (==0.46b0)", "opentelemetry-instrumentation-flask (==0.46b0)", "opentelemetry-instrumentation-grpc (==0.46b0)", "opentelemetry-instrumentation-httpx (==0.46b0)", "opentelemetry-instrumentation-jinja2 (==0.46b0)", "opentelemetry-instrumentation-kafka-python (==0.46b0)", "opentelemetry-instrumentation-logging (==0.46b0)", "opentelemetry-instrumentation-mysql (==0.46b0)", "opentelemetry-instrumentation-mysqlclient (==0.46b0)", "opentelemetry-instrumentation-pika (==0.46b0)", "opentelemetry-instrumentation-psycopg (==0.46b0)", "opentelemetry-instrumentation-psycopg2 (==0.46b0)", "opentelemetry-instrumentation-pymemcache (==0.46b0)", "opentelemetry-instrumentation-pymongo (==0.46b0)", "opentelemetry-instrumentation-pymysql (==0.46b0)", "opentelemetry-instrumentation-pyramid (==0.46b0)", "opentelemetry-instrumentation-redis (==0.46b0)", "opentelemetry-instrumentation-remoulade (==0.46b0)", "opentelemetry-instrumentation-requests (==0.46b0)", "opentelemetry-instrumentation-sklearn (==0.46b0)", "opentelemetry-instrumentation-sqlalchemy (==0.46b0)", "opentelemetry-instrumentation-sqlite3 (==0.46b0)", "opentelemetry-instrumentation-starlette (==0.46b0)", "opentelemetry-instrumentation-system-metrics (==0.46b0)", "opentelemetry-instrumentation-threading (==0.46b0)", "opentelemetry-instrumentation-tornado (==0.46b0)", "opentelemetry-instrumentation-tortoiseorm (==0.46b0)", "opentelemetry-instrumentation-urllib (==0.46b0)", "opentelemetry-instrumentation-urllib3 (==0.46b0)", "opentelemetry-instrumentation-wsgi (==0.46b0)"]
|
||||
opentelemetry-experimental = ["opentelemetry-distro"]
|
||||
pure-eval = ["asttokens", "executing", "pure-eval"]
|
||||
pymongo = ["pymongo (>=3.1)"]
|
||||
pyspark = ["pyspark (>=2.4.4)"]
|
||||
|
@ -2492,19 +2477,19 @@ tests = ["coverage[toml] (>=5.0.2)", "pytest"]

[[package]]
name = "setuptools"
version = "67.6.0"
version = "72.1.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "setuptools-67.6.0-py3-none-any.whl", hash = "sha256:b78aaa36f6b90a074c1fa651168723acbf45d14cb1196b6f02c0fd07f17623b2"},
{file = "setuptools-67.6.0.tar.gz", hash = "sha256:2ee892cd5f29f3373097f5a814697e397cf3ce313616df0af11231e2ad118077"},
{file = "setuptools-72.1.0-py3-none-any.whl", hash = "sha256:5a03e1860cf56bb6ef48ce186b0e557fdba433237481a9a625176c2831be15d1"},
{file = "setuptools-72.1.0.tar.gz", hash = "sha256:8d243eff56d095e5817f796ede6ae32941278f542e0f941867cc05ae52b162ec"},
]

[package.extras]
|
||||
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
|
||||
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
|
||||
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
|
||||
core = ["importlib-metadata (>=6)", "importlib-resources (>=5.10.2)", "jaraco.text (>=3.7)", "more-itertools (>=8.8)", "ordered-set (>=3.1.1)", "packaging (>=24)", "platformdirs (>=2.6.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"]
|
||||
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
|
||||
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.11.*)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
|
||||
|
||||
[[package]]
|
||||
name = "setuptools-rust"
|
||||
|
@ -2649,24 +2634,24 @@ files = [

[[package]]
name = "towncrier"
version = "23.11.0"
version = "24.7.1"
description = "Building newsfiles for your project."
optional = false
python-versions = ">=3.8"
files = [
{file = "towncrier-23.11.0-py3-none-any.whl", hash = "sha256:2e519ca619426d189e3c98c99558fe8be50c9ced13ea1fc20a4a353a95d2ded7"},
{file = "towncrier-23.11.0.tar.gz", hash = "sha256:13937c247e3f8ae20ac44d895cf5f96a60ad46cfdcc1671759530d7837d9ee5d"},
{file = "towncrier-24.7.1-py3-none-any.whl", hash = "sha256:685e2a94335b5dc47537b4d3b449a25b18571ea85b07dcf6e8df31ba40f692dd"},
{file = "towncrier-24.7.1.tar.gz", hash = "sha256:57a057faedabcadf1a62f6f9bad726ae566c1f31a411338ddb8316993f583b3d"},
]

[package.dependencies]
click = "*"
importlib-metadata = {version = ">=4.6", markers = "python_version < \"3.10\""}
importlib-resources = {version = ">=5", markers = "python_version < \"3.10\""}
incremental = "*"
jinja2 = "*"
tomli = {version = "*", markers = "python_version < \"3.11\""}

[package.extras]
dev = ["furo", "packaging", "sphinx (>=5)", "twisted"]
dev = ["furo (>=2024.05.06)", "nox", "packaging", "sphinx (>=5)", "twisted"]

[[package]]
name = "treq"
@ -2890,24 +2875,24 @@ types-cffi = "*"

[[package]]
name = "types-pyyaml"
version = "6.0.12.20240311"
version = "6.0.12.20240808"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-PyYAML-6.0.12.20240311.tar.gz", hash = "sha256:a9e0f0f88dc835739b0c1ca51ee90d04ca2a897a71af79de9aec5f38cb0a5342"},
{file = "types_PyYAML-6.0.12.20240311-py3-none-any.whl", hash = "sha256:b845b06a1c7e54b8e5b4c683043de0d9caf205e7434b3edc678ff2411979b8f6"},
{file = "types-PyYAML-6.0.12.20240808.tar.gz", hash = "sha256:b8f76ddbd7f65440a8bda5526a9607e4c7a322dc2f8e1a8c405644f9a6f4b9af"},
{file = "types_PyYAML-6.0.12.20240808-py3-none-any.whl", hash = "sha256:deda34c5c655265fc517b546c902aa6eed2ef8d3e921e4765fe606fe2afe8d35"},
]

[[package]]
name = "types-requests"
version = "2.31.0.20240406"
version = "2.32.0.20240712"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-requests-2.31.0.20240406.tar.gz", hash = "sha256:4428df33c5503945c74b3f42e82b181e86ec7b724620419a2966e2de604ce1a1"},
{file = "types_requests-2.31.0.20240406-py3-none-any.whl", hash = "sha256:6216cdac377c6b9a040ac1c0404f7284bd13199c0e1bb235f4324627e8898cf5"},
{file = "types-requests-2.32.0.20240712.tar.gz", hash = "sha256:90c079ff05e549f6bf50e02e910210b98b8ff1ebdd18e19c873cd237737c1358"},
{file = "types_requests-2.32.0.20240712-py3-none-any.whl", hash = "sha256:f754283e152c752e46e70942fa2a146b5bc70393522257bb85bd1ef7e019dcc3"},
]

[package.dependencies]
@ -3196,4 +3181,4 @@ user-search = ["pyicu"]

[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "5f458ce53b7469844af2e0c5a9c5ef720736de5f080c4eb8d3a0e60286424f44"
content-hash = "c165cdc1f6612c9f1b5bfd8063c23e2d595d717dd8ac1a468519e902be2cdf93"
@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.112.0"
version = "1.113.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@ -43,7 +43,7 @@ import argparse
import base64
import json
import sys
from typing import Any, Dict, Optional, Tuple
from typing import Any, Dict, Mapping, Optional, Tuple, Union
from urllib import parse as urlparse

import requests

@ -75,7 +75,7 @@ def encode_canonical_json(value: object) -> bytes:
        value,
        # Encode code-points outside of ASCII as UTF-8 rather than \u escapes
        ensure_ascii=False,
        # Remove unecessary white space.
        # Remove unnecessary white space.
        separators=(",", ":"),
        # Sort the keys of dictionaries.
        sort_keys=True,
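For reference, the canonical-JSON settings shown above (compact separators, sorted keys, raw UTF-8 rather than \u escapes) behave like this minimal standalone example:

import json

blob = {"b": 1, "a": "café"}
canonical = json.dumps(blob, ensure_ascii=False, separators=(",", ":"), sort_keys=True)
assert canonical == '{"a":"café","b":1}'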
@ -298,12 +298,23 @@ class MatrixConnectionAdapter(HTTPAdapter):
|
|||
|
||||
return super().send(request, *args, **kwargs)
|
||||
|
||||
def get_connection(
|
||||
self, url: str, proxies: Optional[Dict[str, str]] = None
|
||||
def get_connection_with_tls_context(
|
||||
self,
|
||||
request: PreparedRequest,
|
||||
verify: Optional[Union[bool, str]],
|
||||
proxies: Optional[Mapping[str, str]] = None,
|
||||
cert: Optional[Union[Tuple[str, str], str]] = None,
|
||||
) -> HTTPConnectionPool:
|
||||
# overrides the get_connection() method in the base class
|
||||
parsed = urlparse.urlsplit(url)
|
||||
(host, port, ssl_server_name) = self._lookup(parsed.netloc)
|
||||
# overrides the get_connection_with_tls_context() method in the base class
|
||||
parsed = urlparse.urlsplit(request.url)
|
||||
|
||||
# Extract the server name from the request URL, and ensure it's a str.
|
||||
hostname = parsed.netloc
|
||||
if isinstance(hostname, bytes):
|
||||
hostname = hostname.decode("utf-8")
|
||||
assert isinstance(hostname, str)
|
||||
|
||||
(host, port, ssl_server_name) = self._lookup(hostname)
|
||||
print(
|
||||
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
|
||||
)
|
||||
|
|
|
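For context, the hunk above moves the script from the old get_connection() hook to get_connection_with_tls_context(), which newer versions of requests call with the prepared request rather than a bare URL. A rough sketch of that hook's shape follows; the ExampleAdapter name and the plain pool lookup are illustrative assumptions (the real script additionally resolves the Matrix server name and SNI via its _lookup() helper, and TLS verification details are omitted here).

from typing import Mapping, Optional, Tuple, Union
from urllib import parse as urlparse

from requests import PreparedRequest
from requests.adapters import HTTPAdapter
from urllib3.connectionpool import HTTPConnectionPool


class ExampleAdapter(HTTPAdapter):
    def get_connection_with_tls_context(
        self,
        request: PreparedRequest,
        verify: Optional[Union[bool, str]],
        proxies: Optional[Mapping[str, str]] = None,
        cert: Optional[Union[Tuple[str, str], str]] = None,
    ) -> HTTPConnectionPool:
        # Resolve host/port from the prepared request rather than a URL string.
        # `verify` and `cert` are ignored in this sketch.
        parsed = urlparse.urlsplit(request.url)
        host = parsed.hostname or ""
        port = parsed.port or (443 if parsed.scheme == "https" else 80)
        return self.poolmanager.connection_from_host(
            host, port=port, scheme=parsed.scheme or "https"
        )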
@ -128,6 +128,10 @@ class Codes(str, Enum):
    # MSC2677
    DUPLICATE_ANNOTATION = "M_DUPLICATE_ANNOTATION"

    # MSC3575 we are telling the client they need to expire their sliding sync
    # connection.
    UNKNOWN_POS = "M_UNKNOWN_POS"


class CodeMessageException(RuntimeError):
    """An exception with integer code, a message string attributes and optional headers.

@ -847,3 +851,17 @@ class PartialStateConflictError(SynapseError):
            msg=PartialStateConflictError.message(),
            errcode=Codes.UNKNOWN,
        )


class SlidingSyncUnknownPosition(SynapseError):
    """An error that Synapse can return to signal to the client to expire their
    sliding sync connection (i.e. send a new request without a `?since=`
    param).
    """

    def __init__(self) -> None:
        super().__init__(
            HTTPStatus.BAD_REQUEST,
            msg="Unknown position",
            errcode=Codes.UNKNOWN_POS,
        )
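On the client side, M_UNKNOWN_POS is a signal to drop the stored position and restart the sliding sync connection. A hedged sketch of that handling; the endpoint path, request shape, and function name are placeholders, not part of this change:

from typing import Optional

import requests

def sliding_sync(base_url: str, token: str, body: dict, pos: Optional[str]) -> dict:
    params = {"pos": pos} if pos is not None else {}
    resp = requests.post(
        f"{base_url}/_matrix/client/unstable/org.matrix.simplified_msc3575/sync",
        params=params,
        json=body,
        headers={"Authorization": f"Bearer {token}"},
    )
    if (
        pos is not None
        and resp.status_code == 400
        and resp.json().get("errcode") == "M_UNKNOWN_POS"
    ):
        # The server expired the connection: retry once from scratch, without a position.
        return sliding_sync(base_url, token, body, pos=None)
    resp.raise_for_status()
    return resp.json()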
@ -206,6 +206,21 @@ class GenericWorkerServer(HomeServer):
|
|||
"/_synapse/admin": admin_resource,
|
||||
}
|
||||
)
|
||||
|
||||
if "federation" not in res.names:
|
||||
# Only load the federation media resource separately if federation
|
||||
# resource is not specified since federation resource includes media
|
||||
# resource.
|
||||
resources[FEDERATION_PREFIX] = TransportLayerServer(
|
||||
self, servlet_groups=["media"]
|
||||
)
|
||||
if "client" not in res.names:
|
||||
# Only load the client media resource separately if client
|
||||
# resource is not specified since client resource includes media
|
||||
# resource.
|
||||
resources[CLIENT_API_PREFIX] = ClientRestResource(
|
||||
self, servlet_groups=["media"]
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
"A 'media' listener is configured but the media"
|
||||
|
|
|
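A minimal sketch of the registration rule above, with generic names rather than Synapse's real resource classes: the standalone media servlets are only mounted when the parent resource that would already include them is absent.

from typing import Dict, Set

def media_resources_to_register(enabled: Set[str]) -> Dict[str, str]:
    resources: Dict[str, str] = {}
    if "media" in enabled:
        if "federation" not in enabled:
            resources["/_matrix/federation"] = "federation media servlets"
        if "client" not in enabled:
            resources["/_matrix/client"] = "client media servlets"
    return resources

assert media_resources_to_register({"media"}) == {
    "/_matrix/federation": "federation media servlets",
    "/_matrix/client": "client media servlets",
}
assert media_resources_to_register({"media", "federation", "client"}) == {}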
@ -101,6 +101,12 @@ class SynapseHomeServer(HomeServer):
|
|||
# Skip loading openid resource if federation is defined
|
||||
# since federation resource will include openid
|
||||
continue
|
||||
if name == "media" and (
|
||||
"federation" in res.names or "client" in res.names
|
||||
):
|
||||
# Skip loading media resource if federation or client are defined
|
||||
# since federation & client resources will include media
|
||||
continue
|
||||
if name == "health":
|
||||
# Skip loading, health resource is always included
|
||||
continue
|
||||
|
@ -231,6 +237,14 @@ class SynapseHomeServer(HomeServer):
|
|||
"'media' resource conflicts with enable_media_repo=False"
|
||||
)
|
||||
|
||||
if name == "media":
|
||||
resources[FEDERATION_PREFIX] = TransportLayerServer(
|
||||
self, servlet_groups=["media"]
|
||||
)
|
||||
resources[CLIENT_API_PREFIX] = ClientRestResource(
|
||||
self, servlet_groups=["media"]
|
||||
)
|
||||
|
||||
if name in ["keys", "federation"]:
|
||||
resources[SERVER_KEY_PREFIX] = KeyResource(self)
@ -271,6 +271,10 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
    "federation": FEDERATION_SERVLET_CLASSES,
    "room_list": (PublicRoomList,),
    "openid": (OpenIdUserInfo,),
    "media": (
        FederationMediaDownloadServlet,
        FederationMediaThumbnailServlet,
    ),
}
@ -912,6 +912,4 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
    FederationV1SendKnockServlet,
    FederationMakeKnockServlet,
    FederationAccountStatusServlet,
    FederationMediaDownloadServlet,
    FederationMediaThumbnailServlet,
)
@ -197,8 +197,14 @@ class AdminHandler:
            # events that we have and then filtering, this isn't the most
            # efficient method perhaps but it does guarantee we get everything.
            while True:
                events, _ = await self._store.paginate_room_events(
                    room_id, from_key, to_key, limit=100, direction=Direction.FORWARDS
                events, _ = (
                    await self._store.paginate_room_events_by_topological_ordering(
                        room_id=room_id,
                        from_key=from_key,
                        to_key=to_key,
                        limit=100,
                        direction=Direction.FORWARDS,
                    )
                )
                if not events:
                    break
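The loop above follows the usual (events, next_key) pagination contract. A simplified, hedged sketch of that pattern, assuming the returned key can be fed back as the next from_key; `paginate` stands in for paginate_room_events_by_topological_ordering and is not a real import:

from typing import Any, Callable, List

async def collect_all_events(
    paginate: Callable[..., Any],
    room_id: str,
    from_key: Any,
    to_key: Any,
) -> List[Any]:
    collected: List[Any] = []
    while True:
        events, next_key = await paginate(
            room_id=room_id,
            from_key=from_key,
            to_key=to_key,
            limit=100,
        )
        if not events:
            break
        collected.extend(events)
        from_key = next_key  # resume from where the previous page ended
    return collected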
@ -20,10 +20,20 @@
|
|||
#
|
||||
#
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Set, Tuple
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
AbstractSet,
|
||||
Dict,
|
||||
Iterable,
|
||||
List,
|
||||
Mapping,
|
||||
Optional,
|
||||
Set,
|
||||
Tuple,
|
||||
)
|
||||
|
||||
from synapse.api import errors
|
||||
from synapse.api.constants import EduTypes, EventTypes
|
||||
from synapse.api.constants import EduTypes, EventTypes, Membership
|
||||
from synapse.api.errors import (
|
||||
Codes,
|
||||
FederationDeniedError,
|
||||
|
@ -38,6 +48,7 @@ from synapse.metrics.background_process_metrics import (
|
|||
wrap_as_background_process,
|
||||
)
|
||||
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
|
||||
from synapse.storage.databases.main.state_deltas import StateDelta
|
||||
from synapse.types import (
|
||||
DeviceListUpdates,
|
||||
JsonDict,
|
||||
|
@ -222,129 +233,115 @@ class DeviceWorkerHandler:
|
|||
|
||||
set_tag("user_id", user_id)
|
||||
set_tag("from_token", str(from_token))
|
||||
now_room_key = self.store.get_room_max_token()
|
||||
|
||||
room_ids = await self.store.get_rooms_for_user(user_id)
|
||||
now_token = self._event_sources.get_current_token()
|
||||
|
||||
changed = await self.get_device_changes_in_shared_rooms(
|
||||
user_id, room_ids, from_token
|
||||
# We need to work out all the different membership changes for the user
|
||||
# and users they share a room with, to pass to
|
||||
# `generate_sync_entry_for_device_list`. See its docstring for details
|
||||
# on the data required.
|
||||
|
||||
joined_room_ids = await self.store.get_rooms_for_user(user_id)
|
||||
|
||||
# Get the set of rooms that the user has joined/left
|
||||
membership_changes = (
|
||||
await self.store.get_current_state_delta_membership_changes_for_user(
|
||||
user_id, from_key=from_token.room_key, to_key=now_token.room_key
|
||||
)
|
||||
)
|
||||
|
||||
# Then work out if any users have since joined
|
||||
rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
|
||||
# Check for newly joined or left rooms. We need to make sure that we add
|
||||
# to newly joined in the case membership goes from join -> leave -> join
|
||||
# again.
|
||||
newly_joined_rooms: Set[str] = set()
|
||||
newly_left_rooms: Set[str] = set()
|
||||
for change in membership_changes:
|
||||
# We check for changes in "joinedness", i.e. if the membership has
|
||||
# changed to or from JOIN.
|
||||
if change.membership == Membership.JOIN:
|
||||
if change.prev_membership != Membership.JOIN:
|
||||
newly_joined_rooms.add(change.room_id)
|
||||
newly_left_rooms.discard(change.room_id)
|
||||
elif change.prev_membership == Membership.JOIN:
|
||||
newly_joined_rooms.discard(change.room_id)
|
||||
newly_left_rooms.add(change.room_id)
|
||||
|
||||
member_events = await self.store.get_membership_changes_for_user(
|
||||
user_id, from_token.room_key, now_room_key
|
||||
# We now work out if any other users have since joined or left the rooms
|
||||
# the user is currently in. First we filter out rooms that we know
|
||||
# haven't changed recently.
|
||||
rooms_changed = self.store.get_rooms_that_changed(
|
||||
joined_room_ids, from_token.room_key
|
||||
)
|
||||
rooms_changed.update(event.room_id for event in member_events)
|
||||
|
||||
stream_ordering = from_token.room_key.stream
|
||||
|
||||
possibly_changed = set(changed)
|
||||
possibly_left = set()
|
||||
# List of membership changes per room
|
||||
room_to_deltas: Dict[str, List[StateDelta]] = {}
|
||||
# The set of event IDs of membership events (so we can fetch their
|
||||
# associated membership).
|
||||
memberships_to_fetch: Set[str] = set()
|
||||
for room_id in rooms_changed:
|
||||
# Check if the forward extremities have changed. If not then we know
|
||||
# the current state won't have changed, and so we can skip this room.
|
||||
try:
|
||||
if not await self.store.have_room_forward_extremities_changed_since(
|
||||
room_id, stream_ordering
|
||||
):
|
||||
continue
|
||||
except errors.StoreError:
|
||||
pass
|
||||
|
||||
current_state_ids = await self._state_storage.get_current_state_ids(
|
||||
room_id, await_full_state=False
|
||||
# TODO: Only pull out membership events?
|
||||
state_changes = await self.store.get_current_state_deltas_for_room(
|
||||
room_id, from_token=from_token.room_key, to_token=now_token.room_key
|
||||
)
|
||||
|
||||
# The user may have left the room
|
||||
# TODO: Check if they actually did or if we were just invited.
|
||||
if room_id not in room_ids:
|
||||
for etype, state_key in current_state_ids.keys():
|
||||
if etype != EventTypes.Member:
|
||||
continue
|
||||
possibly_left.add(state_key)
|
||||
continue
|
||||
|
||||
# Fetch the current state at the time.
|
||||
try:
|
||||
event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
|
||||
room_id, stream_ordering=stream_ordering
|
||||
)
|
||||
except errors.StoreError:
|
||||
# we have purged the stream_ordering index since the stream
|
||||
# ordering: treat it the same as a new room
|
||||
event_ids = []
|
||||
|
||||
# special-case for an empty prev state: include all members
|
||||
# in the changed list
|
||||
if not event_ids:
|
||||
log_kv(
|
||||
{"event": "encountered empty previous state", "room_id": room_id}
|
||||
)
|
||||
for etype, state_key in current_state_ids.keys():
|
||||
if etype != EventTypes.Member:
|
||||
continue
|
||||
possibly_changed.add(state_key)
|
||||
continue
|
||||
|
||||
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
|
||||
if not current_member_id:
|
||||
continue
|
||||
|
||||
# mapping from event_id -> state_dict
|
||||
prev_state_ids = await self._state_storage.get_state_ids_for_events(
|
||||
event_ids,
|
||||
await_full_state=False,
|
||||
)
|
||||
|
||||
# Check if we've joined the room? If so we just blindly add all the users to
|
||||
# the "possibly changed" users.
|
||||
for state_dict in prev_state_ids.values():
|
||||
member_event = state_dict.get((EventTypes.Member, user_id), None)
|
||||
if not member_event or member_event != current_member_id:
|
||||
for etype, state_key in current_state_ids.keys():
|
||||
if etype != EventTypes.Member:
|
||||
continue
|
||||
possibly_changed.add(state_key)
|
||||
break
|
||||
|
||||
# If there has been any change in membership, include them in the
|
||||
# possibly changed list. We'll check if they are joined below,
|
||||
# and we're not toooo worried about spuriously adding users.
|
||||
for key, event_id in current_state_ids.items():
|
||||
etype, state_key = key
|
||||
if etype != EventTypes.Member:
|
||||
for delta in state_changes:
|
||||
if delta.event_type != EventTypes.Member:
|
||||
continue
|
||||
|
||||
# check if this member has changed since any of the extremities
|
||||
# at the stream_ordering, and add them to the list if so.
|
||||
for state_dict in prev_state_ids.values():
|
||||
prev_event_id = state_dict.get(key, None)
|
||||
if not prev_event_id or prev_event_id != event_id:
|
||||
if state_key != user_id:
|
||||
possibly_changed.add(state_key)
|
||||
break
|
||||
room_to_deltas.setdefault(room_id, []).append(delta)
|
||||
if delta.event_id:
|
||||
memberships_to_fetch.add(delta.event_id)
|
||||
if delta.prev_event_id:
|
||||
memberships_to_fetch.add(delta.prev_event_id)
|
||||
|
||||
if possibly_changed or possibly_left:
|
||||
possibly_joined = possibly_changed
|
||||
possibly_left = possibly_changed | possibly_left
|
||||
# Fetch all the memberships for the membership events
|
||||
event_id_to_memberships = await self.store.get_membership_from_event_ids(
|
||||
memberships_to_fetch
|
||||
)
|
||||
|
||||
# Double check if we still share rooms with the given user.
|
||||
users_rooms = await self.store.get_rooms_for_users(possibly_left)
|
||||
for changed_user_id, entries in users_rooms.items():
|
||||
if any(rid in room_ids for rid in entries):
|
||||
possibly_left.discard(changed_user_id)
|
||||
else:
|
||||
possibly_joined.discard(changed_user_id)
|
||||
joined_invited_knocked = (
|
||||
Membership.JOIN,
|
||||
Membership.INVITE,
|
||||
Membership.KNOCK,
|
||||
)
|
||||
|
||||
else:
|
||||
possibly_joined = set()
|
||||
possibly_left = set()
|
||||
# We now want to find any user that have newly joined/invited/knocked,
|
||||
# or newly left, similarly to above.
|
||||
newly_joined_or_invited_or_knocked_users: Set[str] = set()
|
||||
newly_left_users: Set[str] = set()
|
||||
for _, deltas in room_to_deltas.items():
|
||||
for delta in deltas:
|
||||
# Get the prev/new memberships for the delta
|
||||
new_membership = None
|
||||
prev_membership = None
|
||||
if delta.event_id:
|
||||
m = event_id_to_memberships.get(delta.event_id)
|
||||
if m is not None:
|
||||
new_membership = m.membership
|
||||
if delta.prev_event_id:
|
||||
m = event_id_to_memberships.get(delta.prev_event_id)
|
||||
if m is not None:
|
||||
prev_membership = m.membership
|
||||
|
||||
device_list_updates = DeviceListUpdates(
|
||||
changed=possibly_joined,
|
||||
left=possibly_left,
|
||||
# Check if a user has newly joined/invited/knocked, or left.
|
||||
if new_membership in joined_invited_knocked:
|
||||
if prev_membership not in joined_invited_knocked:
|
||||
newly_joined_or_invited_or_knocked_users.add(delta.state_key)
|
||||
newly_left_users.discard(delta.state_key)
|
||||
elif prev_membership in joined_invited_knocked:
|
||||
newly_joined_or_invited_or_knocked_users.discard(delta.state_key)
|
||||
newly_left_users.add(delta.state_key)
|
||||
|
||||
# Now we actually calculate the device list entry with the information
|
||||
# calculated above.
|
||||
device_list_updates = await self.generate_sync_entry_for_device_list(
|
||||
user_id=user_id,
|
||||
since_token=from_token,
|
||||
now_token=now_token,
|
||||
joined_room_ids=joined_room_ids,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
newly_left_users=newly_left_users,
|
||||
)
|
||||
|
||||
log_kv(
|
||||
|
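The join/leave bookkeeping earlier in this hunk (handling join -> leave -> join by processing membership changes in order) can be illustrated with a small self-contained sketch; `Change` and the literal membership strings are stand-ins for the real delta objects and Membership constants:

from typing import Iterable, NamedTuple, Optional, Set, Tuple

class Change(NamedTuple):
    room_id: str
    membership: str
    prev_membership: Optional[str]

def split_newly_joined_left(changes: Iterable[Change]) -> Tuple[Set[str], Set[str]]:
    newly_joined: Set[str] = set()
    newly_left: Set[str] = set()
    for change in changes:
        # Only transitions into or out of "join" count as a change in joinedness.
        if change.membership == "join":
            if change.prev_membership != "join":
                newly_joined.add(change.room_id)
                newly_left.discard(change.room_id)
        elif change.prev_membership == "join":
            newly_joined.discard(change.room_id)
            newly_left.add(change.room_id)
    return newly_joined, newly_left

# join -> leave -> join ends up newly joined, not newly left.
assert split_newly_joined_left(
    [
        Change("!room", "join", None),
        Change("!room", "leave", "join"),
        Change("!room", "join", "leave"),
    ]
) == ({"!room"}, set())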
@ -356,6 +353,88 @@ class DeviceWorkerHandler:
|
|||
|
||||
return device_list_updates
|
||||
|
||||
@measure_func("_generate_sync_entry_for_device_list")
|
||||
async def generate_sync_entry_for_device_list(
|
||||
self,
|
||||
user_id: str,
|
||||
since_token: StreamToken,
|
||||
now_token: StreamToken,
|
||||
joined_room_ids: AbstractSet[str],
|
||||
newly_joined_rooms: AbstractSet[str],
|
||||
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
|
||||
newly_left_rooms: AbstractSet[str],
|
||||
newly_left_users: AbstractSet[str],
|
||||
) -> DeviceListUpdates:
|
||||
"""Generate the DeviceListUpdates section of sync
|
||||
|
||||
Args:
|
||||
user_id: The user for whom to generate the device list updates.
joined_room_ids: Set of rooms the user is currently joined to.
|
||||
newly_joined_rooms: Set of rooms user has joined since previous sync
|
||||
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
|
||||
been invited to a room or are knocking on a room since
|
||||
previous sync.
|
||||
newly_left_rooms: Set of rooms user has left since previous sync
|
||||
newly_left_users: Set of users that have left a room we're in since
|
||||
previous sync
|
||||
"""
|
||||
# Take a copy since these fields will be mutated later.
|
||||
newly_joined_or_invited_or_knocked_users = set(
|
||||
newly_joined_or_invited_or_knocked_users
|
||||
)
|
||||
newly_left_users = set(newly_left_users)
|
||||
|
||||
# We want to figure out what user IDs the client should refetch
|
||||
# device keys for, and which users we aren't going to track changes
|
||||
# for anymore.
|
||||
#
|
||||
# For the first step we check:
|
||||
# a. if any users we share a room with have updated their devices,
|
||||
# and
|
||||
# b. we also check if we've joined any new rooms, or if a user has
|
||||
# joined a room we're in.
|
||||
#
|
||||
# For the second step we just find any users we no longer share a
|
||||
# room with by looking at all users that have left a room plus users
|
||||
# that were in a room we've left.
|
||||
|
||||
users_that_have_changed = set()
|
||||
|
||||
# Step 1a, check for changes in devices of users we share a room
|
||||
# with
|
||||
users_that_have_changed = await self.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
joined_room_ids,
|
||||
from_token=since_token,
|
||||
now_token=now_token,
|
||||
)
|
||||
|
||||
# Step 1b, check for newly joined rooms
|
||||
for room_id in newly_joined_rooms:
|
||||
joined_users = await self.store.get_users_in_room(room_id)
|
||||
newly_joined_or_invited_or_knocked_users.update(joined_users)
|
||||
|
||||
# TODO: Check that these users are actually new, i.e. either they
|
||||
# weren't in the previous sync *or* they left and rejoined.
|
||||
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
|
||||
|
||||
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
|
||||
user_id, since_token.device_list_key
|
||||
)
|
||||
users_that_have_changed.update(user_signatures_changed)
|
||||
|
||||
# Now find users that we no longer track
|
||||
for room_id in newly_left_rooms:
|
||||
left_users = await self.store.get_users_in_room(room_id)
|
||||
newly_left_users.update(left_users)
|
||||
|
||||
# Remove any users that we still share a room with.
|
||||
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
|
||||
for user_id, entries in left_users_rooms.items():
|
||||
if any(rid in joined_room_ids for rid in entries):
|
||||
newly_left_users.discard(user_id)
|
||||
|
||||
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
|
||||
|
||||
async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
|
||||
if not self.hs.is_mine(UserID.from_string(user_id)):
|
||||
raise SynapseError(400, "User is not hosted on this homeserver")
|
||||
|
|
|
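Step 2 of the algorithm above only reports a user as "left" if no shared room remains once the newly-left rooms are taken into account. A toy illustration of that filter (names and data are made up):

from typing import Dict, Set

def users_no_longer_shared(
    candidate_left_users: Set[str],
    rooms_per_user: Dict[str, Set[str]],
    our_joined_rooms: Set[str],
) -> Set[str]:
    # A user only counts as "left" if we no longer share any room with them.
    return {
        user
        for user in candidate_left_users
        if not (rooms_per_user.get(user, set()) & our_joined_rooms)
    }

assert users_no_longer_shared(
    {"@bob:one.example", "@carol:two.example"},
    {"@bob:one.example": {"!shared"}, "@carol:two.example": {"!other"}},
    {"!shared"},
) == {"@carol:two.example"}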
@ -507,13 +507,15 @@ class PaginationHandler:
|
|||
|
||||
# Initially fetch the events from the database. With any luck, we can return
|
||||
# these without blocking on backfill (handled below).
|
||||
events, next_key = await self.store.paginate_room_events(
|
||||
room_id=room_id,
|
||||
from_key=from_token.room_key,
|
||||
to_key=to_room_key,
|
||||
direction=pagin_config.direction,
|
||||
limit=pagin_config.limit,
|
||||
event_filter=event_filter,
|
||||
events, next_key = (
|
||||
await self.store.paginate_room_events_by_topological_ordering(
|
||||
room_id=room_id,
|
||||
from_key=from_token.room_key,
|
||||
to_key=to_room_key,
|
||||
direction=pagin_config.direction,
|
||||
limit=pagin_config.limit,
|
||||
event_filter=event_filter,
|
||||
)
|
||||
)
|
||||
|
||||
if pagin_config.direction == Direction.BACKWARDS:
|
||||
|
@ -582,13 +584,15 @@ class PaginationHandler:
|
|||
# If we did backfill something, refetch the events from the database to
|
||||
# catch anything new that might have been added since we last fetched.
|
||||
if did_backfill:
|
||||
events, next_key = await self.store.paginate_room_events(
|
||||
room_id=room_id,
|
||||
from_key=from_token.room_key,
|
||||
to_key=to_room_key,
|
||||
direction=pagin_config.direction,
|
||||
limit=pagin_config.limit,
|
||||
event_filter=event_filter,
|
||||
events, next_key = (
|
||||
await self.store.paginate_room_events_by_topological_ordering(
|
||||
room_id=room_id,
|
||||
from_key=from_token.room_key,
|
||||
to_key=to_room_key,
|
||||
direction=pagin_config.direction,
|
||||
limit=pagin_config.limit,
|
||||
event_filter=event_filter,
|
||||
)
|
||||
)
|
||||
else:
|
||||
# Otherwise, we can backfill in the background for eventual
@ -74,6 +74,17 @@ class ProfileHandler:
|
|||
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
|
||||
|
||||
async def get_profile(self, user_id: str, ignore_backoff: bool = True) -> JsonDict:
|
||||
"""
|
||||
Get a user's profile as a JSON dictionary.
|
||||
|
||||
Args:
|
||||
user_id: The user to fetch the profile of.
|
||||
ignore_backoff: True to ignore backoff when fetching over federation.
|
||||
|
||||
Returns:
|
||||
A JSON dictionary. For local queries this will include the displayname and avatar_url
|
||||
fields. For remote queries it may contain arbitrary information.
|
||||
"""
|
||||
target_user = UserID.from_string(user_id)
|
||||
|
||||
if self.hs.is_mine(target_user):
|
||||
|
@ -107,6 +118,15 @@ class ProfileHandler:
|
|||
raise e.to_synapse_error()
|
||||
|
||||
async def get_displayname(self, target_user: UserID) -> Optional[str]:
|
||||
"""
|
||||
Fetch a user's display name from their profile.
|
||||
|
||||
Args:
|
||||
target_user: The user to fetch the display name of.
|
||||
|
||||
Returns:
|
||||
The user's display name or None if unset.
|
||||
"""
|
||||
if self.hs.is_mine(target_user):
|
||||
try:
|
||||
displayname = await self.store.get_profile_displayname(target_user)
|
||||
|
@ -203,6 +223,15 @@ class ProfileHandler:
|
|||
await self._update_join_states(requester, target_user)
|
||||
|
||||
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
|
||||
"""
|
||||
Fetch a user's avatar URL from their profile.
|
||||
|
||||
Args:
|
||||
target_user: The user to fetch the avatar URL of.
|
||||
|
||||
Returns:
|
||||
The user's avatar URL or None if unset.
|
||||
"""
|
||||
if self.hs.is_mine(target_user):
|
||||
try:
|
||||
avatar_url = await self.store.get_profile_avatar_url(target_user)
|
||||
|
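For reference, these profile lookups back the stable client-server profile endpoint; a minimal client-side counterpart (the homeserver URL and user ID are placeholders):

import requests

resp = requests.get(
    "https://matrix.example.com/_matrix/client/v3/profile/@alice:example.com"
)
resp.raise_for_status()
profile = resp.json()  # e.g. {"displayname": "Alice", "avatar_url": "mxc://..."}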
@ -403,6 +432,12 @@ class ProfileHandler:
|
|||
async def _update_join_states(
|
||||
self, requester: Requester, target_user: UserID
|
||||
) -> None:
|
||||
"""
|
||||
Update the membership events of each room the user is joined to with the
|
||||
new profile information.
|
||||
|
||||
Note that this stomps over any custom display name or avatar URL in member events.
|
||||
"""
|
||||
if not self.hs.is_mine(target_user):
|
||||
return
|
||||
|
||||
|
|
|
@ -1750,7 +1750,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
            from_key=from_key,
            to_key=to_key,
            limit=limit or 10,
            order="ASC",
            direction=Direction.FORWARDS,
        )

        events = list(room_events)
@ -24,6 +24,7 @@ from itertools import chain
|
|||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Any,
|
||||
Callable,
|
||||
Dict,
|
||||
Final,
|
||||
List,
|
||||
|
@ -47,16 +48,27 @@ from synapse.api.constants import (
|
|||
EventTypes,
|
||||
Membership,
|
||||
)
|
||||
from synapse.api.errors import SlidingSyncUnknownPosition
|
||||
from synapse.events import EventBase, StrippedStateEvent
|
||||
from synapse.events.utils import parse_stripped_state_event, strip_event
|
||||
from synapse.handlers.relations import BundledAggregations
|
||||
from synapse.logging.opentracing import log_kv, start_active_span, tag_args, trace
|
||||
from synapse.logging.opentracing import (
|
||||
SynapseTags,
|
||||
log_kv,
|
||||
set_tag,
|
||||
start_active_span,
|
||||
tag_args,
|
||||
trace,
|
||||
)
|
||||
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
|
||||
from synapse.storage.databases.main.state import (
|
||||
ROOM_UNKNOWN_SENTINEL,
|
||||
Sentinel as StateSentinel,
|
||||
)
|
||||
from synapse.storage.databases.main.stream import CurrentStateDeltaMembership
|
||||
from synapse.storage.databases.main.stream import (
|
||||
CurrentStateDeltaMembership,
|
||||
PaginateFunction,
|
||||
)
|
||||
from synapse.storage.roommember import MemberSummary
|
||||
from synapse.types import (
|
||||
DeviceListUpdates,
|
||||
|
@ -348,6 +360,73 @@ class RoomSyncConfig:
|
|||
else:
|
||||
self.required_state_map[state_type].add(state_key)
|
||||
|
||||
def must_await_full_state(
|
||||
self,
|
||||
is_mine_id: Callable[[str], bool],
|
||||
) -> bool:
|
||||
"""
|
||||
Check whether the `required_state` being requested can be completely satisfied
even with partial state; if so, we don't need to `await_full_state` before we
can return it.
|
||||
|
||||
Also see `StateFilter.must_await_full_state(...)` for comparison
|
||||
|
||||
Partially-stated rooms should have all state events except for remote membership
|
||||
events so if we require a remote membership event anywhere, then we need to
|
||||
return `True` (requires full state).
|
||||
|
||||
Args:
|
||||
is_mine_id: a callable which confirms if a given state_key matches a mxid
|
||||
of a local user
|
||||
"""
|
||||
wildcard_state_keys = self.required_state_map.get(StateValues.WILDCARD)
|
||||
# Requesting *all* state in the room so we have to wait
|
||||
if (
|
||||
wildcard_state_keys is not None
|
||||
and StateValues.WILDCARD in wildcard_state_keys
|
||||
):
|
||||
return True
|
||||
|
||||
# If the wildcards don't refer to remote user IDs, then we don't need to wait
|
||||
# for full state.
|
||||
if wildcard_state_keys is not None:
|
||||
for possible_user_id in wildcard_state_keys:
|
||||
if not possible_user_id[0].startswith(UserID.SIGIL):
|
||||
# Not a user ID
|
||||
continue
|
||||
|
||||
localpart_hostname = possible_user_id.split(":", 1)
|
||||
if len(localpart_hostname) < 2:
|
||||
# Not a user ID
|
||||
continue
|
||||
|
||||
if not is_mine_id(possible_user_id):
|
||||
return True
|
||||
|
||||
membership_state_keys = self.required_state_map.get(EventTypes.Member)
|
||||
# We aren't requesting any membership events at all so the partial state will
|
||||
# cover us.
|
||||
if membership_state_keys is None:
|
||||
return False
|
||||
|
||||
# If we're requesting entirely local users, the partial state will cover us.
|
||||
for user_id in membership_state_keys:
|
||||
if user_id == StateValues.ME:
|
||||
continue
|
||||
# We're lazy-loading membership so we can just return the state we have.
|
||||
# Lazy-loading means we include membership for any event `sender` in the
|
||||
# timeline but since we had to auth those timeline events, we will have the
|
||||
# membership state for them (including from remote senders).
|
||||
elif user_id == StateValues.LAZY:
|
||||
continue
|
||||
elif user_id == StateValues.WILDCARD:
|
||||
return False
|
||||
elif not is_mine_id(user_id):
|
||||
return True
|
||||
|
||||
# Local users only so the partial state will cover us.
|
||||
return False
|
||||
|
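A reduced restatement of the rule above, for illustration only: wildcard handling is elided, "$ME" and "$LAZY" are the special state-key values referenced in the method, and the homeserver check is a toy:

from typing import Callable, Dict, Set

def must_await_full_state_simplified(
    required_state_map: Dict[str, Set[str]],
    is_mine_id: Callable[[str], bool],
) -> bool:
    membership_state_keys = required_state_map.get("m.room.member")
    if membership_state_keys is None:
        # No membership events requested: partial state is enough.
        return False
    for state_key in membership_state_keys:
        if state_key in ("$ME", "$LAZY"):
            continue
        if not is_mine_id(state_key):
            # A remote user's membership is required: wait for full state.
            return True
    return False

def is_mine_id(user_id: str) -> bool:  # toy homeserver check for the example
    return user_id.endswith(":my.server")

assert not must_await_full_state_simplified({"m.room.member": {"@alice:my.server"}}, is_mine_id)
assert must_await_full_state_simplified({"m.room.member": {"@bob:other.server"}}, is_mine_id)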
||||
|
||||
class StateValues:
|
||||
"""
|
||||
|
@ -377,6 +456,7 @@ class SlidingSyncHandler:
|
|||
self.device_handler = hs.get_device_handler()
|
||||
self.push_rules_handler = hs.get_push_rules_handler()
|
||||
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
|
||||
self.is_mine_id = hs.is_mine_id
|
||||
|
||||
self.connection_store = SlidingSyncConnectionStore()
|
||||
|
||||
|
@ -484,6 +564,22 @@ class SlidingSyncHandler:
|
|||
# See https://github.com/matrix-org/matrix-doc/issues/1144
|
||||
raise NotImplementedError()
|
||||
|
||||
if from_token:
|
||||
# Check that we recognize the connection position, if not tell the
|
||||
# clients that they need to start again.
|
||||
#
|
||||
# If we don't do this and the client asks for the full range of
|
||||
# rooms, we end up sending down all rooms and their state from
|
||||
# scratch (which can be very slow). By expiring the connection we
|
||||
# allow the client a chance to do an initial request with a smaller
|
||||
# range of rooms to get them some results sooner but will end up
|
||||
# taking the same amount of time (more with round-trips and
|
||||
# re-processing) in the end to get everything again.
|
||||
if not await self.connection_store.is_valid_token(
|
||||
sync_config, from_token.connection_position
|
||||
):
|
||||
raise SlidingSyncUnknownPosition()
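
# (Context: per MSC3575, a client that receives an "unknown position" style error
# is expected to restart the connection by retrying the request without a `pos`,
# i.e. as an initial request. The exact wire format of the error aside, the key
# point is that expiring a connection here is always safe, just potentially slow
# for the client.)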
|
||||
|
||||
await self.connection_store.mark_token_seen(
|
||||
sync_config=sync_config,
|
||||
from_token=from_token,
|
||||
|
@ -509,126 +605,158 @@ class SlidingSyncHandler:
|
|||
lists: Dict[str, SlidingSyncResult.SlidingWindowList] = {}
|
||||
# Keep track of the rooms that we can display and need to fetch more info about
|
||||
relevant_room_map: Dict[str, RoomSyncConfig] = {}
|
||||
# The set of room IDs of all rooms that could appear in any list. These
|
||||
# include rooms that are outside the list ranges.
|
||||
all_rooms: Set[str] = set()
|
||||
if has_lists and sync_config.lists is not None:
|
||||
sync_room_map = await self.filter_rooms_relevant_for_sync(
|
||||
user=sync_config.user,
|
||||
room_membership_for_user_map=room_membership_for_user_map,
|
||||
)
|
||||
|
||||
for list_key, list_config in sync_config.lists.items():
|
||||
# Apply filters
|
||||
filtered_sync_room_map = sync_room_map
|
||||
if list_config.filters is not None:
|
||||
filtered_sync_room_map = await self.filter_rooms(
|
||||
sync_config.user, sync_room_map, list_config.filters, to_token
|
||||
)
|
||||
|
||||
# Sort the list
|
||||
sorted_room_info = await self.sort_rooms(
|
||||
filtered_sync_room_map, to_token
|
||||
with start_active_span("assemble_sliding_window_lists"):
|
||||
sync_room_map = await self.filter_rooms_relevant_for_sync(
|
||||
user=sync_config.user,
|
||||
room_membership_for_user_map=room_membership_for_user_map,
|
||||
)
|
||||
|
||||
# Find which rooms are partially stated and may need to be filtered out
|
||||
# depending on the `required_state` requested (see below).
|
||||
partial_state_room_map = await self.store.is_partial_state_room_batched(
|
||||
filtered_sync_room_map.keys()
|
||||
)
|
||||
|
||||
# Since creating the `RoomSyncConfig` takes some work, let's just do it
|
||||
# once and make a copy whenever we need it.
|
||||
room_sync_config = RoomSyncConfig.from_room_config(list_config)
|
||||
membership_state_keys = room_sync_config.required_state_map.get(
|
||||
EventTypes.Member
|
||||
)
|
||||
# Also see `StateFilter.must_await_full_state(...)` for comparison
|
||||
lazy_loading = (
|
||||
membership_state_keys is not None
|
||||
and StateValues.LAZY in membership_state_keys
|
||||
)
|
||||
|
||||
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
|
||||
if list_config.ranges:
|
||||
for range in list_config.ranges:
|
||||
room_ids_in_list: List[str] = []
|
||||
|
||||
# We're going to loop through the sorted list of rooms starting
|
||||
# at the range start index and keep adding rooms until we fill
|
||||
# up the range or run out of rooms.
|
||||
#
|
||||
# Both sides of range are inclusive so we `+ 1`
|
||||
max_num_rooms = range[1] - range[0] + 1
|
||||
for room_membership in sorted_room_info[range[0] :]:
|
||||
room_id = room_membership.room_id
|
||||
|
||||
if len(room_ids_in_list) >= max_num_rooms:
|
||||
break
|
||||
|
||||
# Exclude partially-stated rooms unless the `required_state`
|
||||
# only has `["m.room.member", "$LAZY"]` for membership
|
||||
# (lazy-loading room members).
|
||||
if partial_state_room_map.get(room_id) and not lazy_loading:
|
||||
continue
|
||||
|
||||
# Take the superset of the `RoomSyncConfig` for each room.
|
||||
#
|
||||
# Update our `relevant_room_map` with the room we're going
|
||||
# to display and need to fetch more info about.
|
||||
existing_room_sync_config = relevant_room_map.get(room_id)
|
||||
if existing_room_sync_config is not None:
|
||||
existing_room_sync_config.combine_room_sync_config(
|
||||
room_sync_config
|
||||
)
|
||||
else:
|
||||
# Make a copy so if we modify it later, it doesn't
|
||||
# affect all references.
|
||||
relevant_room_map[room_id] = (
|
||||
room_sync_config.deep_copy()
|
||||
)
|
||||
|
||||
room_ids_in_list.append(room_id)
|
||||
|
||||
ops.append(
|
||||
SlidingSyncResult.SlidingWindowList.Operation(
|
||||
op=OperationType.SYNC,
|
||||
range=range,
|
||||
room_ids=room_ids_in_list,
|
||||
)
|
||||
for list_key, list_config in sync_config.lists.items():
|
||||
# Apply filters
|
||||
filtered_sync_room_map = sync_room_map
|
||||
if list_config.filters is not None:
|
||||
filtered_sync_room_map = await self.filter_rooms(
|
||||
sync_config.user,
|
||||
sync_room_map,
|
||||
list_config.filters,
|
||||
to_token,
|
||||
)
|
||||
|
||||
lists[list_key] = SlidingSyncResult.SlidingWindowList(
|
||||
count=len(sorted_room_info),
|
||||
ops=ops,
|
||||
)
|
||||
# Find which rooms are partially stated and may need to be filtered out
|
||||
# depending on the `required_state` requested (see below).
|
||||
partial_state_room_map = (
|
||||
await self.store.is_partial_state_room_batched(
|
||||
filtered_sync_room_map.keys()
|
||||
)
|
||||
)
|
||||
|
||||
# Since creating the `RoomSyncConfig` takes some work, let's just do it
|
||||
# once and make a copy whenever we need it.
|
||||
room_sync_config = RoomSyncConfig.from_room_config(list_config)
|
||||
|
||||
# Exclude partially-stated rooms if we must wait for the room to be
|
||||
# fully-stated
|
||||
if room_sync_config.must_await_full_state(self.is_mine_id):
|
||||
filtered_sync_room_map = {
|
||||
room_id: room
|
||||
for room_id, room in filtered_sync_room_map.items()
|
||||
if not partial_state_room_map.get(room_id)
|
||||
}
|
||||
|
||||
all_rooms.update(filtered_sync_room_map)
|
||||
|
||||
# Sort the list
|
||||
sorted_room_info = await self.sort_rooms(
|
||||
filtered_sync_room_map, to_token
|
||||
)
|
||||
|
||||
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
|
||||
if list_config.ranges:
|
||||
for range in list_config.ranges:
|
||||
room_ids_in_list: List[str] = []
|
||||
|
||||
# We're going to loop through the sorted list of rooms starting
|
||||
# at the range start index and keep adding rooms until we fill
|
||||
# up the range or run out of rooms.
|
||||
#
|
||||
# Both sides of range are inclusive so we `+ 1`
|
||||
max_num_rooms = range[1] - range[0] + 1
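# e.g. a requested range of [0, 19] is inclusive at both ends and yields 20 rooms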
|
||||
for room_membership in sorted_room_info[range[0] :]:
|
||||
room_id = room_membership.room_id
|
||||
|
||||
if len(room_ids_in_list) >= max_num_rooms:
|
||||
break
|
||||
|
||||
# Take the superset of the `RoomSyncConfig` for each room.
|
||||
#
|
||||
# Update our `relevant_room_map` with the room we're going
|
||||
# to display and need to fetch more info about.
|
||||
existing_room_sync_config = relevant_room_map.get(
|
||||
room_id
|
||||
)
|
||||
if existing_room_sync_config is not None:
|
||||
existing_room_sync_config.combine_room_sync_config(
|
||||
room_sync_config
|
||||
)
|
||||
else:
|
||||
# Make a copy so if we modify it later, it doesn't
|
||||
# affect all references.
|
||||
relevant_room_map[room_id] = (
|
||||
room_sync_config.deep_copy()
|
||||
)
|
||||
|
||||
room_ids_in_list.append(room_id)
|
||||
|
||||
ops.append(
|
||||
SlidingSyncResult.SlidingWindowList.Operation(
|
||||
op=OperationType.SYNC,
|
||||
range=range,
|
||||
room_ids=room_ids_in_list,
|
||||
)
|
||||
)
|
||||
|
||||
lists[list_key] = SlidingSyncResult.SlidingWindowList(
|
||||
count=len(sorted_room_info),
|
||||
ops=ops,
|
||||
)
|
||||
|
||||
# Handle room subscriptions
|
||||
if has_room_subscriptions and sync_config.room_subscriptions is not None:
|
||||
for room_id, room_subscription in sync_config.room_subscriptions.items():
|
||||
room_membership_for_user_at_to_token = (
|
||||
await self.check_room_subscription_allowed_for_user(
|
||||
room_id=room_id,
|
||||
room_membership_for_user_map=room_membership_for_user_map,
|
||||
to_token=to_token,
|
||||
with start_active_span("assemble_room_subscriptions"):
|
||||
# Find which rooms are partially stated and may need to be filtered out
|
||||
# depending on the `required_state` requested (see below).
|
||||
partial_state_room_map = await self.store.is_partial_state_room_batched(
|
||||
sync_config.room_subscriptions.keys()
|
||||
)
|
||||
|
||||
for (
|
||||
room_id,
|
||||
room_subscription,
|
||||
) in sync_config.room_subscriptions.items():
|
||||
room_membership_for_user_at_to_token = (
|
||||
await self.check_room_subscription_allowed_for_user(
|
||||
room_id=room_id,
|
||||
room_membership_for_user_map=room_membership_for_user_map,
|
||||
to_token=to_token,
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
# Skip this room if the user isn't allowed to see it
|
||||
if not room_membership_for_user_at_to_token:
|
||||
continue
|
||||
# Skip this room if the user isn't allowed to see it
|
||||
if not room_membership_for_user_at_to_token:
|
||||
continue
|
||||
|
||||
room_membership_for_user_map[room_id] = (
|
||||
room_membership_for_user_at_to_token
|
||||
)
|
||||
all_rooms.add(room_id)
|
||||
|
||||
# Take the superset of the `RoomSyncConfig` for each room.
|
||||
#
|
||||
# Update our `relevant_room_map` with the room we're going to display
|
||||
# and need to fetch more info about.
|
||||
room_sync_config = RoomSyncConfig.from_room_config(room_subscription)
|
||||
existing_room_sync_config = relevant_room_map.get(room_id)
|
||||
if existing_room_sync_config is not None:
|
||||
existing_room_sync_config.combine_room_sync_config(room_sync_config)
|
||||
else:
|
||||
relevant_room_map[room_id] = room_sync_config
|
||||
room_membership_for_user_map[room_id] = (
|
||||
room_membership_for_user_at_to_token
|
||||
)
|
||||
|
||||
# Take the superset of the `RoomSyncConfig` for each room.
|
||||
room_sync_config = RoomSyncConfig.from_room_config(
|
||||
room_subscription
|
||||
)
|
||||
|
||||
# Exclude partially-stated rooms if we must wait for the room to be
|
||||
# fully-stated
|
||||
if room_sync_config.must_await_full_state(self.is_mine_id):
|
||||
if partial_state_room_map.get(room_id):
|
||||
continue
|
||||
|
||||
all_rooms.add(room_id)
|
||||
|
||||
# Update our `relevant_room_map` with the room we're going to display
|
||||
# and need to fetch more info about.
|
||||
existing_room_sync_config = relevant_room_map.get(room_id)
|
||||
if existing_room_sync_config is not None:
|
||||
existing_room_sync_config.combine_room_sync_config(
|
||||
room_sync_config
|
||||
)
|
||||
else:
|
||||
relevant_room_map[room_id] = room_sync_config
|
||||
|
||||
# Fetch room data
|
||||
rooms: Dict[str, SlidingSyncResult.RoomResult] = {}
|
||||
|
@ -637,48 +765,49 @@ class SlidingSyncHandler:
|
|||
# previously.
|
||||
# Keep track of the rooms that we're going to display and need to fetch more info about
|
||||
relevant_rooms_to_send_map = relevant_room_map
|
||||
if from_token:
|
||||
rooms_should_send = set()
|
||||
with start_active_span("filter_relevant_rooms_to_send"):
|
||||
if from_token:
|
||||
rooms_should_send = set()
|
||||
|
||||
# First we check if there are rooms that match a list/room
|
||||
# subscription and have updates we need to send (i.e. either because
|
||||
# we haven't sent the room down, or we have but there are missing
|
||||
# updates).
|
||||
for room_id in relevant_room_map:
|
||||
status = await self.connection_store.have_sent_room(
|
||||
sync_config,
|
||||
from_token.connection_position,
|
||||
room_id,
|
||||
# First we check if there are rooms that match a list/room
|
||||
# subscription and have updates we need to send (i.e. either because
|
||||
# we haven't sent the room down, or we have but there are missing
|
||||
# updates).
|
||||
for room_id in relevant_room_map:
|
||||
status = await self.connection_store.have_sent_room(
|
||||
sync_config,
|
||||
from_token.connection_position,
|
||||
room_id,
|
||||
)
|
||||
if (
|
||||
# The room was never sent down before so the client needs to know
|
||||
# about it regardless of any updates.
|
||||
status.status == HaveSentRoomFlag.NEVER
|
||||
# `PREVIOUSLY` literally means the "room was sent down before *AND*
|
||||
# there are updates we haven't sent down" so we already know this
|
||||
# room has updates.
|
||||
or status.status == HaveSentRoomFlag.PREVIOUSLY
|
||||
):
|
||||
rooms_should_send.add(room_id)
|
||||
elif status.status == HaveSentRoomFlag.LIVE:
|
||||
# We know that we've sent all updates up until `from_token`,
|
||||
# so we just need to check if there have been updates since
|
||||
# then.
|
||||
pass
|
||||
else:
|
||||
assert_never(status.status)
|
||||
|
||||
# We only need to check for new events since any state changes
|
||||
# will also come down as new events.
|
||||
rooms_that_have_updates = self.store.get_rooms_that_might_have_updates(
|
||||
relevant_room_map.keys(), from_token.stream_token.room_key
|
||||
)
|
||||
if (
|
||||
# The room was never sent down before so the client needs to know
|
||||
# about it regardless of any updates.
|
||||
status.status == HaveSentRoomFlag.NEVER
|
||||
# `PREVIOUSLY` literally means the "room was sent down before *AND*
|
||||
# there are updates we haven't sent down" so we already know this
|
||||
# room has updates.
|
||||
or status.status == HaveSentRoomFlag.PREVIOUSLY
|
||||
):
|
||||
rooms_should_send.add(room_id)
|
||||
elif status.status == HaveSentRoomFlag.LIVE:
|
||||
# We know that we've sent all updates up until `from_token`,
|
||||
# so we just need to check if there have been updates since
|
||||
# then.
|
||||
pass
|
||||
else:
|
||||
assert_never(status.status)
|
||||
|
||||
# We only need to check for new events since any state changes
|
||||
# will also come down as new events.
|
||||
rooms_that_have_updates = self.store.get_rooms_that_might_have_updates(
|
||||
relevant_room_map.keys(), from_token.stream_token.room_key
|
||||
)
|
||||
rooms_should_send.update(rooms_that_have_updates)
|
||||
relevant_rooms_to_send_map = {
|
||||
room_id: room_sync_config
|
||||
for room_id, room_sync_config in relevant_room_map.items()
|
||||
if room_id in rooms_should_send
|
||||
}
|
||||
rooms_should_send.update(rooms_that_have_updates)
|
||||
relevant_rooms_to_send_map = {
|
||||
room_id: room_sync_config
|
||||
for room_id, room_sync_config in relevant_room_map.items()
|
||||
if room_id in rooms_should_send
|
||||
}
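
# A compact sketch (illustrative only, not the real enum/types) of the per-room
# decision the loop above encodes, assuming a `status` string of
# "never" / "previously" / "live" and a precomputed set of rooms that have new
# events since the `from_token`:
def _should_send_room(status: str, room_id: str, rooms_with_updates: set) -> bool:
    if status == "never":
        return True  # never sent down this connection, so the client must hear about it
    if status == "previously":
        return True  # sent before *and* we know there are updates it hasn't seen
    if status == "live":
        return room_id in rooms_with_updates  # up to date, so only send if something is new
    raise AssertionError(f"unknown status {status!r}")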
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
|
@ -717,12 +846,40 @@ class SlidingSyncHandler:
|
|||
)
|
||||
|
||||
if has_lists or has_room_subscriptions:
|
||||
# We now calculate if any rooms outside the range have had updates,
|
||||
# which we are not sending down.
|
||||
#
|
||||
# We *must* record rooms that have had updates, but it is also fine
|
||||
# to record rooms as having updates even if there might not actually
|
||||
# be anything new for the user (e.g. due to event filters, events
|
||||
# having happened after the user left, etc).
|
||||
unsent_room_ids = []
|
||||
if from_token:
|
||||
# The set of rooms that the client (may) care about, but aren't
|
||||
# in any list range (or subscribed to).
|
||||
missing_rooms = all_rooms - relevant_room_map.keys()
|
||||
|
||||
# We now just go and try fetching any events in the above rooms
|
||||
# to see if anything has happened since the `from_token`.
|
||||
#
|
||||
# TODO: Replace this with something faster. When we land the
|
||||
# sliding sync tables that record the most recent event
|
||||
# positions we can use that.
|
||||
missing_event_map_by_room = (
|
||||
await self.store.get_room_events_stream_for_rooms(
|
||||
room_ids=missing_rooms,
|
||||
from_key=to_token.room_key,
|
||||
to_key=from_token.stream_token.room_key,
|
||||
limit=1,
|
||||
)
|
||||
)
|
||||
unsent_room_ids = list(missing_event_map_by_room)
|
||||
|
||||
connection_position = await self.connection_store.record_rooms(
|
||||
sync_config=sync_config,
|
||||
from_token=from_token,
|
||||
sent_room_ids=relevant_rooms_to_send_map.keys(),
|
||||
# TODO: We need to calculate which rooms have had updates since the `from_token` but were not included in the `sent_room_ids`
|
||||
unsent_room_ids=[],
|
||||
unsent_room_ids=unsent_room_ids,
|
||||
)
|
||||
elif from_token:
|
||||
connection_position = from_token.connection_position
|
||||
|
@ -730,13 +887,20 @@ class SlidingSyncHandler:
|
|||
# Initial sync without a `from_token` starts at `0`
|
||||
connection_position = 0
|
||||
|
||||
return SlidingSyncResult(
|
||||
sliding_sync_result = SlidingSyncResult(
|
||||
next_pos=SlidingSyncStreamToken(to_token, connection_position),
|
||||
lists=lists,
|
||||
rooms=rooms,
|
||||
extensions=extensions,
|
||||
)
|
||||
|
||||
# Make it easy to find traces for syncs that aren't empty
|
||||
set_tag(SynapseTags.RESULT_PREFIX + "result", bool(sliding_sync_result))
|
||||
set_tag(SynapseTags.FUNC_ARG_PREFIX + "sync_config.user", user_id)
|
||||
|
||||
return sliding_sync_result
|
||||
|
||||
@trace
|
||||
async def get_room_membership_for_user_at_to_token(
|
||||
self,
|
||||
user: UserID,
|
||||
|
@ -1075,6 +1239,7 @@ class SlidingSyncHandler:
|
|||
|
||||
return sync_room_id_set
|
||||
|
||||
@trace
|
||||
async def filter_rooms_relevant_for_sync(
|
||||
self,
|
||||
user: UserID,
|
||||
|
@ -1185,6 +1350,7 @@ class SlidingSyncHandler:
|
|||
|
||||
# return None
|
||||
|
||||
@trace
|
||||
async def _bulk_get_stripped_state_for_rooms_from_sync_room_map(
|
||||
self,
|
||||
room_ids: StrCollection,
|
||||
|
@ -1275,6 +1441,7 @@ class SlidingSyncHandler:
|
|||
|
||||
return room_id_to_stripped_state_map
|
||||
|
||||
@trace
|
||||
async def _bulk_get_partial_current_state_content_for_rooms(
|
||||
self,
|
||||
content_type: Literal[
|
||||
|
@ -1474,125 +1641,132 @@ class SlidingSyncHandler:
|
|||
|
||||
# Filter for Direct-Message (DM) rooms
|
||||
if filters.is_dm is not None:
|
||||
if filters.is_dm:
|
||||
# Only DM rooms please
|
||||
filtered_room_id_set = {
|
||||
room_id
|
||||
for room_id in filtered_room_id_set
|
||||
if sync_room_map[room_id].is_dm
|
||||
}
|
||||
else:
|
||||
# Only non-DM rooms please
|
||||
filtered_room_id_set = {
|
||||
room_id
|
||||
for room_id in filtered_room_id_set
|
||||
if not sync_room_map[room_id].is_dm
|
||||
}
|
||||
with start_active_span("filters.is_dm"):
|
||||
if filters.is_dm:
|
||||
# Only DM rooms please
|
||||
filtered_room_id_set = {
|
||||
room_id
|
||||
for room_id in filtered_room_id_set
|
||||
if sync_room_map[room_id].is_dm
|
||||
}
|
||||
else:
|
||||
# Only non-DM rooms please
|
||||
filtered_room_id_set = {
|
||||
room_id
|
||||
for room_id in filtered_room_id_set
|
||||
if not sync_room_map[room_id].is_dm
|
||||
}
|
||||
|
||||
if filters.spaces is not None:
|
||||
raise NotImplementedError()
|
||||
with start_active_span("filters.spaces"):
|
||||
raise NotImplementedError()
|
||||
|
||||
# Filter for encrypted rooms
|
||||
if filters.is_encrypted is not None:
|
||||
room_id_to_encryption = (
|
||||
await self._bulk_get_partial_current_state_content_for_rooms(
|
||||
content_type="room_encryption",
|
||||
room_ids=filtered_room_id_set,
|
||||
to_token=to_token,
|
||||
sync_room_map=sync_room_map,
|
||||
room_id_to_stripped_state_map=room_id_to_stripped_state_map,
|
||||
with start_active_span("filters.is_encrypted"):
|
||||
room_id_to_encryption = (
|
||||
await self._bulk_get_partial_current_state_content_for_rooms(
|
||||
content_type="room_encryption",
|
||||
room_ids=filtered_room_id_set,
|
||||
to_token=to_token,
|
||||
sync_room_map=sync_room_map,
|
||||
room_id_to_stripped_state_map=room_id_to_stripped_state_map,
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for room_id in filtered_room_id_set.copy():
|
||||
encryption = room_id_to_encryption.get(room_id, ROOM_UNKNOWN_SENTINEL)
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for room_id in filtered_room_id_set.copy():
|
||||
encryption = room_id_to_encryption.get(
|
||||
room_id, ROOM_UNKNOWN_SENTINEL
|
||||
)
|
||||
|
||||
# Just remove rooms if we can't determine their encryption status
|
||||
if encryption is ROOM_UNKNOWN_SENTINEL:
|
||||
filtered_room_id_set.remove(room_id)
|
||||
continue
|
||||
# Just remove rooms if we can't determine their encryption status
|
||||
if encryption is ROOM_UNKNOWN_SENTINEL:
|
||||
filtered_room_id_set.remove(room_id)
|
||||
continue
|
||||
|
||||
# If we're looking for encrypted rooms, filter out rooms that are not
|
||||
# encrypted and vice versa
|
||||
is_encrypted = encryption is not None
|
||||
if (filters.is_encrypted and not is_encrypted) or (
|
||||
not filters.is_encrypted and is_encrypted
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
# If we're looking for encrypted rooms, filter out rooms that are not
|
||||
# encrypted and vice versa
|
||||
is_encrypted = encryption is not None
|
||||
if (filters.is_encrypted and not is_encrypted) or (
|
||||
not filters.is_encrypted and is_encrypted
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
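# (Equivalently: a room is dropped whenever `filters.is_encrypted != is_encrypted`,
# i.e. whenever the requested and actual encryption status disagree.)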
|
||||
|
||||
# Filter for rooms that the user has been invited to
|
||||
if filters.is_invite is not None:
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for room_id in filtered_room_id_set.copy():
|
||||
room_for_user = sync_room_map[room_id]
|
||||
# If we're looking for invite rooms, filter out rooms that the user is
|
||||
# not invited to and vice versa
|
||||
if (
|
||||
filters.is_invite and room_for_user.membership != Membership.INVITE
|
||||
) or (
|
||||
not filters.is_invite
|
||||
and room_for_user.membership == Membership.INVITE
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
with start_active_span("filters.is_invite"):
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for room_id in filtered_room_id_set.copy():
|
||||
room_for_user = sync_room_map[room_id]
|
||||
# If we're looking for invite rooms, filter out rooms that the user is
|
||||
# not invited to and vice versa
|
||||
if (
|
||||
filters.is_invite
|
||||
and room_for_user.membership != Membership.INVITE
|
||||
) or (
|
||||
not filters.is_invite
|
||||
and room_for_user.membership == Membership.INVITE
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
|
||||
# Filter by room type (space vs room, etc). A room must match one of the types
|
||||
# provided in the list. `None` is a valid type for rooms which do not have a
|
||||
# room type.
|
||||
if filters.room_types is not None or filters.not_room_types is not None:
|
||||
room_id_to_type = (
|
||||
await self._bulk_get_partial_current_state_content_for_rooms(
|
||||
content_type="room_type",
|
||||
room_ids=filtered_room_id_set,
|
||||
to_token=to_token,
|
||||
sync_room_map=sync_room_map,
|
||||
room_id_to_stripped_state_map=room_id_to_stripped_state_map,
|
||||
with start_active_span("filters.room_types"):
|
||||
room_id_to_type = (
|
||||
await self._bulk_get_partial_current_state_content_for_rooms(
|
||||
content_type="room_type",
|
||||
room_ids=filtered_room_id_set,
|
||||
to_token=to_token,
|
||||
sync_room_map=sync_room_map,
|
||||
room_id_to_stripped_state_map=room_id_to_stripped_state_map,
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for room_id in filtered_room_id_set.copy():
|
||||
room_type = room_id_to_type.get(room_id, ROOM_UNKNOWN_SENTINEL)
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for room_id in filtered_room_id_set.copy():
|
||||
room_type = room_id_to_type.get(room_id, ROOM_UNKNOWN_SENTINEL)
|
||||
|
||||
# Just remove rooms if we can't determine their type
|
||||
if room_type is ROOM_UNKNOWN_SENTINEL:
|
||||
filtered_room_id_set.remove(room_id)
|
||||
continue
|
||||
# Just remove rooms if we can't determine their type
|
||||
if room_type is ROOM_UNKNOWN_SENTINEL:
|
||||
filtered_room_id_set.remove(room_id)
|
||||
continue
|
||||
|
||||
if (
|
||||
filters.room_types is not None
|
||||
and room_type not in filters.room_types
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
if (
|
||||
filters.room_types is not None
|
||||
and room_type not in filters.room_types
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
|
||||
if (
|
||||
filters.not_room_types is not None
|
||||
and room_type in filters.not_room_types
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
if (
|
||||
filters.not_room_types is not None
|
||||
and room_type in filters.not_room_types
|
||||
):
|
||||
filtered_room_id_set.remove(room_id)
|
||||
|
||||
if filters.room_name_like is not None:
|
||||
# TODO: The room name is a bit more sensitive to leak than the
|
||||
# create/encryption event. Maybe we should consider a better way to fetch
|
||||
# historical state before implementing this.
|
||||
#
|
||||
# room_id_to_create_content = await self._bulk_get_partial_current_state_content_for_rooms(
|
||||
# content_type="room_name",
|
||||
# room_ids=filtered_room_id_set,
|
||||
# to_token=to_token,
|
||||
# sync_room_map=sync_room_map,
|
||||
# room_id_to_stripped_state_map=room_id_to_stripped_state_map,
|
||||
# )
|
||||
raise NotImplementedError()
|
||||
with start_active_span("filters.room_name_like"):
|
||||
# TODO: The room name is a bit more sensitive to leak than the
|
||||
# create/encryption event. Maybe we should consider a better way to fetch
|
||||
# historical state before implementing this.
|
||||
#
|
||||
# room_id_to_create_content = await self._bulk_get_partial_current_state_content_for_rooms(
|
||||
# content_type="room_name",
|
||||
# room_ids=filtered_room_id_set,
|
||||
# to_token=to_token,
|
||||
# sync_room_map=sync_room_map,
|
||||
# room_id_to_stripped_state_map=room_id_to_stripped_state_map,
|
||||
# )
|
||||
raise NotImplementedError()
|
||||
|
||||
if filters.tags is not None:
|
||||
raise NotImplementedError()
|
||||
|
||||
if filters.not_tags is not None:
|
||||
raise NotImplementedError()
|
||||
if filters.tags is not None or filters.not_tags is not None:
|
||||
with start_active_span("filters.tags"):
|
||||
raise NotImplementedError()
|
||||
|
||||
# Assemble a new sync room map but only with the `filtered_room_id_set`
|
||||
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set}
|
||||
|
@ -1654,6 +1828,7 @@ class SlidingSyncHandler:
|
|||
reverse=True,
|
||||
)
|
||||
|
||||
@trace
|
||||
async def get_current_state_ids_at(
|
||||
self,
|
||||
room_id: str,
|
||||
|
@ -1718,6 +1893,7 @@ class SlidingSyncHandler:
|
|||
|
||||
return state_ids
|
||||
|
||||
@trace
|
||||
async def get_current_state_at(
|
||||
self,
|
||||
room_id: str,
|
||||
|
@ -1779,15 +1955,27 @@ class SlidingSyncHandler:
|
|||
"""
|
||||
user = sync_config.user
|
||||
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "membership",
|
||||
room_membership_for_user_at_to_token.membership,
|
||||
)
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "timeline_limit",
|
||||
room_sync_config.timeline_limit,
|
||||
)
|
||||
|
||||
# Determine whether we should limit the timeline to the token range.
|
||||
#
|
||||
# We should return historical messages (before token range) in the
|
||||
# following cases because we want clients to be able to show a basic
|
||||
# screen of information:
|
||||
#
|
||||
# - Initial sync (because no `from_token` to limit us anyway)
|
||||
# - When users `newly_joined`
|
||||
# - For an incremental sync where we haven't sent it down this
|
||||
# connection before
|
||||
#
|
||||
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
|
||||
from_bound = None
|
||||
initial = True
|
||||
if from_token and not room_membership_for_user_at_to_token.newly_joined:
|
||||
|
@ -1848,7 +2036,36 @@ class SlidingSyncHandler:
|
|||
room_membership_for_user_at_to_token.event_pos.to_room_stream_token()
|
||||
)
|
||||
|
||||
timeline_events, new_room_key = await self.store.paginate_room_events(
|
||||
# For initial `/sync` (and other historical scenarios mentioned above), we
# want to view a historical section of the timeline, so we fetch events by
# `topological_ordering` (the best representation of the room DAG as others were
# seeing it at the time). This also aligns with the order that `/messages`
# returns events in.
#
# For incremental `/sync`, we want to get all updates for rooms since
# the last `/sync` (regardless of whether those updates arrived late or happened
# a while ago in the past), so we fetch events by `stream_ordering` (the
# order they were received by the server).
|
||||
#
|
||||
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
|
||||
#
|
||||
# FIXME: Using workaround for mypy,
|
||||
# https://github.com/python/mypy/issues/10740#issuecomment-1997047277 and
|
||||
# https://github.com/python/mypy/issues/17479
|
||||
paginate_room_events_by_topological_ordering: PaginateFunction = (
|
||||
self.store.paginate_room_events_by_topological_ordering
|
||||
)
|
||||
paginate_room_events_by_stream_ordering: PaginateFunction = (
|
||||
self.store.paginate_room_events_by_stream_ordering
|
||||
)
|
||||
pagination_method: PaginateFunction = (
|
||||
# Use `topological_ordering` for historical events
|
||||
paginate_room_events_by_topological_ordering
|
||||
if from_bound is None
|
||||
# Use `stream_ordering` for updates
|
||||
else paginate_room_events_by_stream_ordering
|
||||
)
|
||||
timeline_events, new_room_key = await pagination_method(
|
||||
room_id=room_id,
|
||||
# The bounds are reversed so we can paginate backwards
|
||||
# (from newer to older events) starting at to_bound.
|
||||
|
@ -1859,7 +2076,6 @@ class SlidingSyncHandler:
|
|||
# We add one so we can determine if there are enough events to saturate
|
||||
# the limit or not (see `limited`)
|
||||
limit=room_sync_config.timeline_limit + 1,
|
||||
event_filter=None,
|
||||
)
|
||||
|
||||
# We want to return the events in ascending order (the last event is the
|
||||
|
@ -2046,6 +2262,10 @@ class SlidingSyncHandler:
|
|||
if StateValues.WILDCARD in room_sync_config.required_state_map.get(
|
||||
StateValues.WILDCARD, set()
|
||||
):
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "required_state_wildcard",
|
||||
True,
|
||||
)
|
||||
required_state_filter = StateFilter.all()
|
||||
# TODO: `StateFilter` currently doesn't support wildcard event types. We're
|
||||
# currently working around this by returning all state to the client but it
|
||||
|
@ -2055,6 +2275,10 @@ class SlidingSyncHandler:
|
|||
room_sync_config.required_state_map.get(StateValues.WILDCARD)
|
||||
is not None
|
||||
):
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "required_state_wildcard_event_type",
|
||||
True,
|
||||
)
|
||||
required_state_filter = StateFilter.all()
|
||||
else:
|
||||
required_state_types: List[Tuple[str, Optional[str]]] = []
|
||||
|
@ -2062,8 +2286,12 @@ class SlidingSyncHandler:
|
|||
state_type,
|
||||
state_key_set,
|
||||
) in room_sync_config.required_state_map.items():
|
||||
num_wild_state_keys = 0
|
||||
lazy_load_room_members = False
|
||||
num_others = 0
|
||||
for state_key in state_key_set:
|
||||
if state_key == StateValues.WILDCARD:
|
||||
num_wild_state_keys += 1
|
||||
# `None` is a wildcard in the `StateFilter`
|
||||
required_state_types.append((state_type, None))
|
||||
# We need to fetch all relevant people when we're lazy-loading membership
|
||||
|
@ -2071,6 +2299,7 @@ class SlidingSyncHandler:
|
|||
state_type == EventTypes.Member
|
||||
and state_key == StateValues.LAZY
|
||||
):
|
||||
lazy_load_room_members = True
|
||||
# Everyone in the timeline is relevant
|
||||
timeline_membership: Set[str] = set()
|
||||
if timeline_events is not None:
|
||||
|
@ -2085,10 +2314,26 @@ class SlidingSyncHandler:
|
|||
# FIXME: We probably also care about invite, ban, kick, targets, etc
|
||||
# but the spec only mentions "senders".
|
||||
elif state_key == StateValues.ME:
|
||||
num_others += 1
|
||||
required_state_types.append((state_type, user.to_string()))
|
||||
else:
|
||||
num_others += 1
|
||||
required_state_types.append((state_type, state_key))
|
||||
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX
|
||||
+ "required_state_wildcard_state_key_count",
|
||||
num_wild_state_keys,
|
||||
)
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "required_state_lazy",
|
||||
lazy_load_room_members,
|
||||
)
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "required_state_other_count",
|
||||
num_others,
|
||||
)
|
||||
|
||||
required_state_filter = StateFilter.from_types(required_state_types)
|
||||
|
||||
# We need this base set of info for the response so let's just fetch it along
|
||||
|
@ -2186,6 +2431,8 @@ class SlidingSyncHandler:
|
|||
if new_bump_event_pos.stream > 0:
|
||||
bump_stamp = new_bump_event_pos.stream
|
||||
|
||||
set_tag(SynapseTags.RESULT_PREFIX + "initial", initial)
|
||||
|
||||
return SlidingSyncResult.RoomResult(
|
||||
name=room_name,
|
||||
avatar=room_avatar,
|
||||
|
@ -2816,6 +3063,16 @@ class SlidingSyncConnectionStore:
|
|||
attr.Factory(dict)
|
||||
)
|
||||
|
||||
async def is_valid_token(
|
||||
self, sync_config: SlidingSyncConfig, connection_token: int
|
||||
) -> bool:
|
||||
"""Return whether the connection token is valid/recognized"""
|
||||
if connection_token == 0:
|
||||
return True
|
||||
|
||||
conn_key = self._get_connection_key(sync_config)
|
||||
return connection_token in self._connections.get(conn_key, {})
|
||||
|
||||
async def have_sent_room(
|
||||
self, sync_config: SlidingSyncConfig, connection_token: int, room_id: str
|
||||
) -> HaveSentRoom:
|
||||
|
@ -2831,6 +3088,7 @@ class SlidingSyncConnectionStore:
|
|||
|
||||
return room_status
|
||||
|
||||
@trace
|
||||
async def record_rooms(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
|
@ -2906,6 +3164,7 @@ class SlidingSyncConnectionStore:
|
|||
|
||||
return new_store_token
|
||||
|
||||
@trace
|
||||
async def mark_token_seen(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
|
|
|
@ -43,6 +43,7 @@ from prometheus_client import Counter
|
|||
|
||||
from synapse.api.constants import (
|
||||
AccountDataTypes,
|
||||
Direction,
|
||||
EventContentFields,
|
||||
EventTypes,
|
||||
JoinRules,
|
||||
|
@ -64,6 +65,7 @@ from synapse.logging.opentracing import (
|
|||
)
|
||||
from synapse.storage.databases.main.event_push_actions import RoomNotifCounts
|
||||
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
|
||||
from synapse.storage.databases.main.stream import PaginateFunction
|
||||
from synapse.storage.roommember import MemberSummary
|
||||
from synapse.types import (
|
||||
DeviceListUpdates,
|
||||
|
@ -84,7 +86,7 @@ from synapse.util.async_helpers import concurrently_execute
|
|||
from synapse.util.caches.expiringcache import ExpiringCache
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
|
||||
from synapse.util.metrics import Measure, measure_func
|
||||
from synapse.util.metrics import Measure
|
||||
from synapse.visibility import filter_events_for_client
|
||||
|
||||
if TYPE_CHECKING:
|
||||
|
@ -879,22 +881,49 @@ class SyncHandler:
|
|||
since_key = since_token.room_key
|
||||
|
||||
while limited and len(recents) < timeline_limit and max_repeat:
|
||||
# If we have a since_key then we are trying to get any events
|
||||
# that have happened since `since_key` up to `end_key`, so we
|
||||
# can just use `get_room_events_stream_for_room`.
|
||||
# Otherwise, we want to return the last N events in the room
|
||||
# in topological ordering.
|
||||
if since_key:
|
||||
events, end_key = await self.store.get_room_events_stream_for_room(
|
||||
room_id,
|
||||
limit=load_limit + 1,
|
||||
from_key=since_key,
|
||||
to_key=end_key,
|
||||
)
|
||||
else:
|
||||
events, end_key = await self.store.get_recent_events_for_room(
|
||||
room_id, limit=load_limit + 1, end_token=end_key
|
||||
)
|
||||
# For initial `/sync`, we want to view a historical section of the
# timeline, so we fetch events by `topological_ordering` (the best
# representation of the room DAG as others were seeing it at the time).
# This also aligns with the order that `/messages` returns events in.
#
# For incremental `/sync`, we want to get all updates for rooms since
# the last `/sync` (regardless of whether those updates arrived late or
# happened a while ago in the past), so we fetch events by `stream_ordering`
# (the order they were received by the server).
|
||||
#
|
||||
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
|
||||
#
|
||||
# FIXME: Using workaround for mypy,
|
||||
# https://github.com/python/mypy/issues/10740#issuecomment-1997047277 and
|
||||
# https://github.com/python/mypy/issues/17479
|
||||
paginate_room_events_by_topological_ordering: PaginateFunction = (
|
||||
self.store.paginate_room_events_by_topological_ordering
|
||||
)
|
||||
paginate_room_events_by_stream_ordering: PaginateFunction = (
|
||||
self.store.paginate_room_events_by_stream_ordering
|
||||
)
|
||||
pagination_method: PaginateFunction = (
|
||||
# Use `topological_ordering` for historical events
|
||||
paginate_room_events_by_topological_ordering
|
||||
if since_key is None
|
||||
# Use `stream_ordering` for updates
|
||||
else paginate_room_events_by_stream_ordering
|
||||
)
|
||||
events, end_key = await pagination_method(
|
||||
room_id=room_id,
|
||||
# The bounds are reversed so we can paginate backwards
|
||||
# (from newer to older events) starting at to_bound.
|
||||
# This ensures we fill the `limit` with the newest events first,
|
||||
from_key=end_key,
|
||||
to_key=since_key,
|
||||
direction=Direction.BACKWARDS,
|
||||
# We add one so we can determine if there are enough events to saturate
|
||||
# the limit or not (see `limited`)
|
||||
limit=load_limit + 1,
|
||||
)
|
||||
# We want to return the events in ascending order (the last event is the
|
||||
# most recent).
|
||||
events.reverse()
|
||||
|
||||
log_kv({"loaded_recents": len(events)})
|
||||
|
||||
|
@ -1750,8 +1779,15 @@ class SyncHandler:
|
|||
)
|
||||
|
||||
if include_device_list_updates:
|
||||
device_lists = await self._generate_sync_entry_for_device_list(
|
||||
sync_result_builder,
|
||||
# include_device_list_updates can only be True if we have a
|
||||
# since token.
|
||||
assert since_token is not None
|
||||
|
||||
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
|
||||
user_id=user_id,
|
||||
since_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
joined_room_ids=sync_result_builder.joined_room_ids,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
|
@ -1863,8 +1899,14 @@ class SyncHandler:
|
|||
newly_left_users,
|
||||
) = sync_result_builder.calculate_user_changes()
|
||||
|
||||
device_lists = await self._generate_sync_entry_for_device_list(
|
||||
sync_result_builder,
|
||||
# include_device_list_updates can only be True if we have a
|
||||
# since token.
|
||||
assert since_token is not None
|
||||
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
|
||||
user_id=user_id,
|
||||
since_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
joined_room_ids=sync_result_builder.joined_room_ids,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
|
@ -2041,94 +2083,6 @@ class SyncHandler:
|
|||
|
||||
return sync_result_builder
|
||||
|
||||
@measure_func("_generate_sync_entry_for_device_list")
|
||||
async def _generate_sync_entry_for_device_list(
|
||||
self,
|
||||
sync_result_builder: "SyncResultBuilder",
|
||||
newly_joined_rooms: AbstractSet[str],
|
||||
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
|
||||
newly_left_rooms: AbstractSet[str],
|
||||
newly_left_users: AbstractSet[str],
|
||||
) -> DeviceListUpdates:
|
||||
"""Generate the DeviceListUpdates section of sync
|
||||
|
||||
Args:
|
||||
sync_result_builder
|
||||
newly_joined_rooms: Set of rooms user has joined since previous sync
|
||||
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
|
||||
been invited to a room or are knocking on a room since
|
||||
previous sync.
|
||||
newly_left_rooms: Set of rooms user has left since previous sync
|
||||
newly_left_users: Set of users that have left a room we're in since
|
||||
previous sync
|
||||
"""
|
||||
|
||||
user_id = sync_result_builder.sync_config.user.to_string()
|
||||
since_token = sync_result_builder.since_token
|
||||
assert since_token is not None
|
||||
|
||||
# Take a copy since these fields will be mutated later.
|
||||
newly_joined_or_invited_or_knocked_users = set(
|
||||
newly_joined_or_invited_or_knocked_users
|
||||
)
|
||||
newly_left_users = set(newly_left_users)
|
||||
|
||||
# We want to figure out what user IDs the client should refetch
|
||||
# device keys for, and which users we aren't going to track changes
|
||||
# for anymore.
|
||||
#
|
||||
# For the first step we check:
|
||||
# a. if any users we share a room with have updated their devices,
|
||||
# and
|
||||
# b. we also check if we've joined any new rooms, or if a user has
|
||||
# joined a room we're in.
|
||||
#
|
||||
# For the second step we just find any users we no longer share a
|
||||
# room with by looking at all users that have left a room plus users
|
||||
# that were in a room we've left.
|
||||
|
||||
users_that_have_changed = set()
|
||||
|
||||
joined_room_ids = sync_result_builder.joined_room_ids
|
||||
|
||||
# Step 1a, check for changes in devices of users we share a room
|
||||
# with
|
||||
users_that_have_changed = (
|
||||
await self._device_handler.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
joined_room_ids,
|
||||
from_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
)
|
||||
)
|
||||
|
||||
# Step 1b, check for newly joined rooms
|
||||
for room_id in newly_joined_rooms:
|
||||
joined_users = await self.store.get_users_in_room(room_id)
|
||||
newly_joined_or_invited_or_knocked_users.update(joined_users)
|
||||
|
||||
# TODO: Check that these users are actually new, i.e. either they
|
||||
# weren't in the previous sync *or* they left and rejoined.
|
||||
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
|
||||
|
||||
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
|
||||
user_id, since_token.device_list_key
|
||||
)
|
||||
users_that_have_changed.update(user_signatures_changed)
|
||||
|
||||
# Now find users that we no longer track
|
||||
for room_id in newly_left_rooms:
|
||||
left_users = await self.store.get_users_in_room(room_id)
|
||||
newly_left_users.update(left_users)
|
||||
|
||||
# Remove any users that we still share a room with.
|
||||
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
|
||||
for user_id, entries in left_users_rooms.items():
|
||||
if any(rid in joined_room_ids for rid in entries):
|
||||
newly_left_users.discard(user_id)
|
||||
|
||||
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
|
||||
|
||||
@trace
|
||||
async def _generate_sync_entry_for_to_device(
|
||||
self, sync_result_builder: "SyncResultBuilder"
|
||||
|
@ -2641,9 +2595,10 @@ class SyncHandler:
|
|||
# a "gap" in the timeline, as described by the spec for /sync.
|
||||
room_to_events = await self.store.get_room_events_stream_for_rooms(
|
||||
room_ids=sync_result_builder.joined_room_ids,
|
||||
from_key=since_token.room_key,
|
||||
to_key=now_token.room_key,
|
||||
from_key=now_token.room_key,
|
||||
to_key=since_token.room_key,
|
||||
limit=timeline_limit + 1,
|
||||
direction=Direction.BACKWARDS,
|
||||
)
|
||||
|
||||
# We loop through all room ids, even if there are no new events, in case
|
||||
|
@ -2654,6 +2609,9 @@ class SyncHandler:
|
|||
newly_joined = room_id in newly_joined_rooms
|
||||
if room_entry:
|
||||
events, start_key = room_entry
|
||||
# We want to return the events in ascending order (the last event is the
|
||||
# most recent).
|
||||
events.reverse()
|
||||
|
||||
prev_batch_token = now_token.copy_and_replace(
|
||||
StreamKeyType.ROOM, start_key
|
||||
|
|
|
@ -1088,7 +1088,6 @@ class _MultipartParserProtocol(protocol.Protocol):
|
|||
return
|
||||
# otherwise we are in the file part
|
||||
else:
|
||||
logger.info("Writing multipart file data to stream")
|
||||
try:
|
||||
self.stream.write(data[start:end])
|
||||
except Exception as e:
|
||||
|
|
|
@ -74,7 +74,6 @@ from synapse.api.errors import (
|
|||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
|
||||
from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
|
||||
from synapse.types import ISynapseReactor
|
||||
from synapse.util import json_encoder
|
||||
from synapse.util.caches import intern_dict
|
||||
from synapse.util.cancellation import is_function_cancellable
|
||||
|
@ -869,8 +868,7 @@ async def _async_write_json_to_request_in_thread(
|
|||
|
||||
with start_active_span("encode_json_response"):
|
||||
span = active_span()
|
||||
reactor: ISynapseReactor = request.reactor # type: ignore
|
||||
json_str = await defer_to_thread(reactor, encode, span)
|
||||
json_str = await defer_to_thread(request.reactor, encode, span)
|
||||
|
||||
_write_bytes_to_request(request, json_str)
|
||||
|
||||
|
|
|
@ -658,7 +658,7 @@ class SynapseSite(ProxySite):
|
|||
)
|
||||
|
||||
self.site_tag = site_tag
|
||||
self.reactor = reactor
|
||||
self.reactor: ISynapseReactor = reactor
|
||||
|
||||
assert config.http_options is not None
|
||||
proxied = config.http_options.x_forwarded
|
||||
|
|
|
@ -22,12 +22,14 @@
|
|||
|
||||
import logging
|
||||
import os
|
||||
import threading
|
||||
import urllib
|
||||
from abc import ABC, abstractmethod
|
||||
from types import TracebackType
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Awaitable,
|
||||
BinaryIO,
|
||||
Dict,
|
||||
Generator,
|
||||
List,
|
||||
|
@ -37,19 +39,27 @@ from typing import (
|
|||
)
|
||||
|
||||
import attr
|
||||
from zope.interface import implementer
|
||||
|
||||
from twisted.internet import interfaces
|
||||
from twisted.internet.defer import Deferred
|
||||
from twisted.internet.interfaces import IConsumer
|
||||
from twisted.protocols.basic import FileSender
|
||||
from twisted.python.failure import Failure
|
||||
from twisted.web.server import Request
|
||||
|
||||
from synapse.api.errors import Codes, cs_error
|
||||
from synapse.http.server import finish_request, respond_with_json
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.logging.context import (
|
||||
defer_to_threadpool,
|
||||
make_deferred_yieldable,
|
||||
run_in_background,
|
||||
)
|
||||
from synapse.util import Clock
|
||||
from synapse.util.stringutils import is_ascii
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.databases.main.media_repository import LocalMedia
|
||||
|
||||
|
||||
|
@ -122,6 +132,7 @@ def respond_404(request: SynapseRequest) -> None:
|
|||
|
||||
|
||||
async def respond_with_file(
|
||||
hs: "HomeServer",
|
||||
request: SynapseRequest,
|
||||
media_type: str,
|
||||
file_path: str,
|
||||
|
@ -138,7 +149,7 @@ async def respond_with_file(
|
|||
add_file_headers(request, media_type, file_size, upload_name)
|
||||
|
||||
with open(file_path, "rb") as f:
|
||||
await make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
|
||||
await ThreadedFileSender(hs).beginFileTransfer(f, request)
|
||||
|
||||
finish_request(request)
|
||||
else:
|
||||
|
@ -601,3 +612,141 @@ def _parseparam(s: bytes) -> Generator[bytes, None, None]:
|
|||
f = s[:end]
|
||||
yield f.strip()
|
||||
s = s[end:]
|
||||
|
||||
|
||||
@implementer(interfaces.IPushProducer)
|
||||
class ThreadedFileSender:
|
||||
"""
|
||||
A producer that sends the contents of a file to a consumer, reading from the
|
||||
file on a thread.
|
||||
|
||||
This works by spawning a loop in a threadpool that repeatedly reads from the
|
||||
file and sends it to the consumer. The main thread communicates with the
|
||||
loop via two `threading.Event`s, which control when to start/pause reading
|
||||
and when to terminate.
|
||||
"""
|
||||
|
||||
# How much data to read in one go.
|
||||
CHUNK_SIZE = 2**14
|
||||
|
||||
# How long we wait for the consumer to be ready again before aborting the
|
||||
# read.
|
||||
TIMEOUT_SECONDS = 90.0
|
||||
|
||||
def __init__(self, hs: "HomeServer") -> None:
|
||||
self.reactor = hs.get_reactor()
|
||||
self.thread_pool = hs.get_media_sender_thread_pool()
|
||||
|
||||
self.file: Optional[BinaryIO] = None
|
||||
self.deferred: "Deferred[None]" = Deferred()
|
||||
self.consumer: Optional[interfaces.IConsumer] = None
|
||||
|
||||
# Signals if the thread should keep reading/sending data. Set means
|
||||
# continue, clear means pause.
|
||||
self.wakeup_event = threading.Event()
|
||||
|
||||
# Signals if the thread should terminate, e.g. because the consumer has
|
||||
# gone away. Both this and `wakeup_event` should be set to terminate the
|
||||
# loop (otherwise the thread will block on `wakeup_event`).
|
||||
self.stop_event = threading.Event()
|
||||
|
||||
def beginFileTransfer(
|
||||
self, file: BinaryIO, consumer: interfaces.IConsumer
|
||||
) -> "Deferred[None]":
|
||||
"""
|
||||
Begin transferring a file
|
||||
"""
|
||||
self.file = file
|
||||
self.consumer = consumer
|
||||
|
||||
self.consumer.registerProducer(self, True)
|
||||
|
||||
# We set the wakeup signal as we should start producing immediately.
|
||||
self.wakeup_event.set()
|
||||
run_in_background(
|
||||
defer_to_threadpool,
|
||||
self.reactor,
|
||||
self.thread_pool,
|
||||
self._on_thread_read_loop,
|
||||
)
|
||||
|
||||
return make_deferred_yieldable(self.deferred)
|
||||
|
||||
def resumeProducing(self) -> None:
|
||||
"""interfaces.IPushProducer"""
|
||||
self.wakeup_event.set()
|
||||
|
||||
def pauseProducing(self) -> None:
|
||||
"""interfaces.IPushProducer"""
|
||||
self.wakeup_event.clear()
|
||||
|
||||
def stopProducing(self) -> None:
|
||||
"""interfaces.IPushProducer"""
|
||||
|
||||
# Unregister the consumer so we don't try and interact with it again.
|
||||
self.consumer = None
|
||||
|
||||
# Terminate the thread loop.
|
||||
self.wakeup_event.set()
|
||||
self.stop_event.set()
|
||||
|
||||
if not self.deferred.called:
|
||||
self.deferred.errback(Exception("Consumer asked us to stop producing"))
|
||||
|
||||
def _on_thread_read_loop(self) -> None:
|
||||
"""This is the loop that happens on a thread."""
|
||||
|
||||
try:
|
||||
while not self.stop_event.is_set():
|
||||
# We wait until the consumer is ready for more data (or until we
# should abort)
|
||||
if not self.wakeup_event.is_set():
|
||||
ret = self.wakeup_event.wait(self.TIMEOUT_SECONDS)
|
||||
if not ret:
|
||||
raise Exception("Timed out waiting to resume")
|
||||
|
||||
# Check if we were woken up so that we abort the download
|
||||
if self.stop_event.is_set():
|
||||
return
|
||||
|
||||
# The file should always have been set before we get here.
|
||||
assert self.file is not None
|
||||
|
||||
chunk = self.file.read(self.CHUNK_SIZE)
|
||||
if not chunk:
|
||||
return
|
||||
|
||||
self.reactor.callFromThread(self._write, chunk)
|
||||
|
||||
except Exception:
|
||||
self.reactor.callFromThread(self._error, Failure())
|
||||
finally:
|
||||
self.reactor.callFromThread(self._finish)
|
||||
|
||||
def _write(self, chunk: bytes) -> None:
|
||||
"""Called from the thread to write a chunk of data"""
|
||||
if self.consumer:
|
||||
self.consumer.write(chunk)
|
||||
|
||||
def _error(self, failure: Failure) -> None:
|
||||
"""Called from the thread when there was a fatal error"""
|
||||
if self.consumer:
|
||||
self.consumer.unregisterProducer()
|
||||
self.consumer = None
|
||||
|
||||
if not self.deferred.called:
|
||||
self.deferred.errback(failure)
|
||||
|
||||
def _finish(self) -> None:
|
||||
"""Called from the thread when it finishes (either on success or
|
||||
failure)."""
|
||||
if self.file:
|
||||
self.file.close()
|
||||
self.file = None
|
||||
|
||||
if self.consumer:
|
||||
self.consumer.unregisterProducer()
|
||||
self.consumer = None
|
||||
|
||||
if not self.deferred.called:
|
||||
self.deferred.callback(None)
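
# Rough usage sketch, mirroring `respond_with_file` above (assumes `hs`, an open
# binary file, and any IConsumer such as a twisted.web request):
#
#     sender = ThreadedFileSender(hs)
#     with open(file_path, "rb") as f:
#         await sender.beginFileTransfer(f, request)
#
# The awaited Deferred resolves once the file is exhausted, and errbacks if the
# consumer stops the transfer or the read loop times out waiting to resume.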
|
||||
|
|
|
@ -49,15 +49,11 @@ from zope.interface import implementer
|
|||
from twisted.internet import interfaces
|
||||
from twisted.internet.defer import Deferred
|
||||
from twisted.internet.interfaces import IConsumer
|
||||
from twisted.protocols.basic import FileSender
|
||||
|
||||
from synapse.api.errors import NotFoundError
|
||||
from synapse.logging.context import (
|
||||
defer_to_thread,
|
||||
make_deferred_yieldable,
|
||||
run_in_background,
|
||||
)
|
||||
from synapse.logging.context import defer_to_thread, run_in_background
|
||||
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
|
||||
from synapse.media._base import ThreadedFileSender
|
||||
from synapse.util import Clock
|
||||
from synapse.util.file_consumer import BackgroundFileConsumer
|
||||
|
||||
|
@ -213,7 +209,7 @@ class MediaStorage:
|
|||
local_path = os.path.join(self.local_media_directory, path)
|
||||
if os.path.exists(local_path):
|
||||
logger.debug("responding with local file %s", local_path)
|
||||
return FileResponder(open(local_path, "rb"))
|
||||
return FileResponder(self.hs, open(local_path, "rb"))
|
||||
logger.debug("local file %s did not exist", local_path)
|
||||
|
||||
for provider in self.storage_providers:
|
||||
|
@ -336,13 +332,12 @@ class FileResponder(Responder):
|
|||
is closed when finished streaming.
|
||||
"""
|
||||
|
||||
def __init__(self, open_file: IO):
|
||||
def __init__(self, hs: "HomeServer", open_file: BinaryIO):
|
||||
self.hs = hs
|
||||
self.open_file = open_file
|
||||
|
||||
def write_to_consumer(self, consumer: IConsumer) -> Deferred:
|
||||
return make_deferred_yieldable(
|
||||
FileSender().beginFileTransfer(self.open_file, consumer)
|
||||
)
|
||||
return ThreadedFileSender(self.hs).beginFileTransfer(self.open_file, consumer)
|
||||
|
||||
def __exit__(
|
||||
self,
|
||||
|
|
|
@ -145,6 +145,7 @@ class FileStorageProviderBackend(StorageProvider):
|
|||
|
||||
def __init__(self, hs: "HomeServer", config: str):
|
||||
self.hs = hs
|
||||
self.reactor = hs.get_reactor()
|
||||
self.cache_directory = hs.config.media.media_store_path
|
||||
self.base_directory = config
|
||||
|
||||
|
@ -165,7 +166,7 @@ class FileStorageProviderBackend(StorageProvider):
|
|||
shutil_copyfile: Callable[[str, str], str] = shutil.copyfile
|
||||
with start_active_span("shutil_copyfile"):
|
||||
await defer_to_thread(
|
||||
self.hs.get_reactor(),
|
||||
self.reactor,
|
||||
shutil_copyfile,
|
||||
primary_fname,
|
||||
backup_fname,
|
||||
|
@ -177,7 +178,7 @@ class FileStorageProviderBackend(StorageProvider):
|
|||
|
||||
backup_fname = os.path.join(self.base_directory, path)
|
||||
if os.path.isfile(backup_fname):
|
||||
return FileResponder(open(backup_fname, "rb"))
|
||||
return FileResponder(self.hs, open(backup_fname, "rb"))
|
||||
|
||||
return None
|
||||
|
||||
|
|
|
@@ -259,6 +259,7 @@ class ThumbnailProvider:
media_storage: MediaStorage,
):
self.hs = hs
self.reactor = hs.get_reactor()
self.media_repo = media_repo
self.media_storage = media_storage
self.store = hs.get_datastores().main

@@ -373,11 +374,11 @@ class ThumbnailProvider:
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
FileResponder(open(file_path, "rb")),
FileResponder(self.hs, open(file_path, "rb")),
media_info,
)
else:
await respond_with_file(request, desired_type, file_path)
await respond_with_file(self.hs, request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")

@@ -455,7 +456,7 @@ class ThumbnailProvider:
)

if file_path:
await respond_with_file(request, desired_type, file_path)
await respond_with_file(self.hs, request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")
@@ -18,7 +18,8 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Callable
import logging
from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple

from synapse.http.server import HttpServer, JsonResource
from synapse.rest import admin

@@ -67,11 +68,64 @@ from synapse.rest.client import (
voip,
)

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
from synapse.server import HomeServer

RegisterServletsFunc = Callable[["HomeServer", HttpServer], None]

CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
versions.register_servlets,
initial_sync.register_servlets,
room.register_deprecated_servlets,
events.register_servlets,
room.register_servlets,
login.register_servlets,
profile.register_servlets,
presence.register_servlets,
directory.register_servlets,
voip.register_servlets,
pusher.register_servlets,
push_rule.register_servlets,
logout.register_servlets,
sync.register_servlets,
filter.register_servlets,
account.register_servlets,
register.register_servlets,
auth.register_servlets,
receipts.register_servlets,
read_marker.register_servlets,
room_keys.register_servlets,
keys.register_servlets,
tokenrefresh.register_servlets,
tags.register_servlets,
account_data.register_servlets,
reporting.register_servlets,
openid.register_servlets,
notifications.register_servlets,
devices.register_servlets,
thirdparty.register_servlets,
sendtodevice.register_servlets,
user_directory.register_servlets,
room_upgrade_rest_servlet.register_servlets,
capabilities.register_servlets,
account_validity.register_servlets,
relations.register_servlets,
password_policy.register_servlets,
knock.register_servlets,
appservice_ping.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
)

SERVLET_GROUPS: Dict[str, Iterable[RegisterServletsFunc]] = {
"client": CLIENT_SERVLET_FUNCTIONS,
}


class ClientRestResource(JsonResource):
"""Matrix Client API REST resource.

@@ -83,80 +137,56 @@ class ClientRestResource(JsonResource):
* etc
"""

def __init__(self, hs: "HomeServer"):
def __init__(self, hs: "HomeServer", servlet_groups: Optional[List[str]] = None):
JsonResource.__init__(self, hs, canonical_json=False)
self.register_servlets(self, hs)
if hs.config.media.can_load_media_repo:
# This import is here to prevent a circular import failure
from synapse.rest.client import media

SERVLET_GROUPS["media"] = (media.register_servlets,)
self.register_servlets(self, hs, servlet_groups)

@staticmethod
def register_servlets(client_resource: HttpServer, hs: "HomeServer") -> None:
def register_servlets(
client_resource: HttpServer,
hs: "HomeServer",
servlet_groups: Optional[Iterable[str]] = None,
) -> None:
# Some servlets are only registered on the main process (and not worker
# processes).
is_main_process = hs.config.worker.worker_app is None

versions.register_servlets(hs, client_resource)
if not servlet_groups:
servlet_groups = SERVLET_GROUPS.keys()

# Deprecated in r0
initial_sync.register_servlets(hs, client_resource)
room.register_deprecated_servlets(hs, client_resource)
for servlet_group in servlet_groups:
# Fail on unknown servlet groups.
if servlet_group not in SERVLET_GROUPS:
if servlet_group == "media":
logger.warn(
"media.can_load_media_repo needs to be configured for the media servlet to be available"
)
raise RuntimeError(
f"Attempting to register unknown client servlet: '{servlet_group}'"
)

# Partially deprecated in r0
events.register_servlets(hs, client_resource)
for servletfunc in SERVLET_GROUPS[servlet_group]:
if not is_main_process and servletfunc in [
pusher.register_servlets,
logout.register_servlets,
auth.register_servlets,
tokenrefresh.register_servlets,
reporting.register_servlets,
openid.register_servlets,
thirdparty.register_servlets,
room_upgrade_rest_servlet.register_servlets,
account_validity.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
]:
continue

room.register_servlets(hs, client_resource)
login.register_servlets(hs, client_resource)
profile.register_servlets(hs, client_resource)
presence.register_servlets(hs, client_resource)
directory.register_servlets(hs, client_resource)
voip.register_servlets(hs, client_resource)
if is_main_process:
pusher.register_servlets(hs, client_resource)
push_rule.register_servlets(hs, client_resource)
if is_main_process:
logout.register_servlets(hs, client_resource)
sync.register_servlets(hs, client_resource)
filter.register_servlets(hs, client_resource)
account.register_servlets(hs, client_resource)
register.register_servlets(hs, client_resource)
if is_main_process:
auth.register_servlets(hs, client_resource)
receipts.register_servlets(hs, client_resource)
read_marker.register_servlets(hs, client_resource)
room_keys.register_servlets(hs, client_resource)
keys.register_servlets(hs, client_resource)
if is_main_process:
tokenrefresh.register_servlets(hs, client_resource)
tags.register_servlets(hs, client_resource)
account_data.register_servlets(hs, client_resource)
if is_main_process:
reporting.register_servlets(hs, client_resource)
openid.register_servlets(hs, client_resource)
notifications.register_servlets(hs, client_resource)
devices.register_servlets(hs, client_resource)
if is_main_process:
thirdparty.register_servlets(hs, client_resource)
sendtodevice.register_servlets(hs, client_resource)
user_directory.register_servlets(hs, client_resource)
if is_main_process:
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
capabilities.register_servlets(hs, client_resource)
if is_main_process:
account_validity.register_servlets(hs, client_resource)
relations.register_servlets(hs, client_resource)
password_policy.register_servlets(hs, client_resource)
knock.register_servlets(hs, client_resource)
appservice_ping.register_servlets(hs, client_resource)
if hs.config.media.can_load_media_repo:
from synapse.rest.client import media

media.register_servlets(hs, client_resource)

# moving to /_synapse/admin
if is_main_process:
admin.register_servlets_for_client_rest_resource(hs, client_resource)

# unstable
if is_main_process:
mutual_rooms.register_servlets(hs, client_resource)
login_token_request.register_servlets(hs, client_resource)
rendezvous.register_servlets(hs, client_resource)
auth_issuer.register_servlets(hs, client_resource)
servletfunc(hs, client_resource)
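The rewritten `register_servlets` above drives registration from the `SERVLET_GROUPS` table instead of one long sequence of calls, and lets callers opt into a subset of groups. A short, hedged usage sketch of what that enables:

```python
# Sketch: building the client REST resource with an explicit servlet group.
# "client" is the only group always present; "media" is added only when the
# media repository can be loaded on this process.
resource = ClientRestResource(hs, servlet_groups=["client"])

# Passing an unknown group name now fails loudly at startup rather than
# silently registering nothing:
# ClientRestResource(hs, servlet_groups=["not-a-real-group"])  # -> RuntimeError
```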
@@ -67,7 +67,8 @@ from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, Requester, StreamToken, ThirdPartyInstanceID, UserID
from synapse.types.state import StateFilter
from synapse.util.cancellation import cancellable
from synapse.util.stringutils import parse_and_validate_server_name, random_string
from synapse.util.events import generate_fake_event_id
from synapse.util.stringutils import parse_and_validate_server_name

if TYPE_CHECKING:
from synapse.server import HomeServer

@@ -325,7 +326,7 @@ class RoomStateEventRestServlet(RestServlet):
)
event_id = event.event_id
except ShadowBanError:
event_id = "$" + random_string(43)
event_id = generate_fake_event_id()

set_tag("event_id", event_id)
ret = {"event_id": event_id}

@@ -377,7 +378,7 @@ class RoomSendEventRestServlet(TransactionRestServlet):
)
event_id = event.event_id
except ShadowBanError:
event_id = "$" + random_string(43)
event_id = generate_fake_event_id()

set_tag("event_id", event_id)
return 200, {"event_id": event_id}

@@ -1193,7 +1194,7 @@ class RoomRedactEventRestServlet(TransactionRestServlet):

event_id = event.event_id
except ShadowBanError:
event_id = "$" + random_string(43)
event_id = generate_fake_event_id()

set_tag("event_id", event_id)
return 200, {"event_id": event_id}
@@ -899,6 +899,9 @@ class SlidingSyncRestServlet(RestServlet):
body = parse_and_validate_json_object_from_request(request, SlidingSyncBody)

# Tag and log useful data to differentiate requests.
set_tag(
"sliding_sync.sync_type", "initial" if from_token is None else "incremental"
)
set_tag("sliding_sync.conn_id", body.conn_id or "")
log_kv(
{

@@ -912,6 +915,12 @@ class SlidingSyncRestServlet(RestServlet):
"sliding_sync.room_subscriptions": list(
(body.room_subscriptions or {}).keys()
),
# We also include the number of room subscriptions because logs are
# limited to 1024 characters and the large room ID list above can be cut
# off.
"sliding_sync.num_room_subscriptions": len(
(body.room_subscriptions or {}).keys()
),
}
)
@@ -34,6 +34,7 @@ from typing_extensions import TypeAlias

from twisted.internet.interfaces import IOpenSSLContextFactory
from twisted.internet.tcp import Port
from twisted.python.threadpool import ThreadPool
from twisted.web.iweb import IPolicyForHTTPS
from twisted.web.resource import Resource

@@ -941,3 +942,21 @@ class HomeServer(metaclass=abc.ABCMeta):
@cache_in_self
def get_task_scheduler(self) -> TaskScheduler:
return TaskScheduler(self)

@cache_in_self
def get_media_sender_thread_pool(self) -> ThreadPool:
"""Fetch the threadpool used to read files when responding to media
download requests."""

# We can choose a large threadpool size as these threads predominately
# do IO rather than CPU work.
media_threadpool = ThreadPool(
name="media_threadpool", minthreads=1, maxthreads=50
)

media_threadpool.start()
self.get_reactor().addSystemEventTrigger(
"during", "shutdown", media_threadpool.stop
)

return media_threadpool
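`get_media_sender_thread_pool` gives media code a pool sized for IO-bound work that is started lazily and stopped at reactor shutdown. A small sketch of how a caller might push a blocking read onto it; the helper name is hypothetical and not part of this commit:

```python
# Sketch: run a blocking file read on the media threadpool so the reactor
# thread stays free. `hs` is a HomeServer as in the hunk above.
from twisted.internet import threads


def read_chunk_off_reactor(hs, open_file, chunk_size: int = 64 * 1024):
    # deferToThreadPool(reactor, threadpool, fn, *args) runs `fn` on the pool
    # and fires the returned Deferred with its result on the reactor thread.
    return threads.deferToThreadPool(
        hs.get_reactor(),
        hs.get_media_sender_thread_pool(),
        open_file.read,
        chunk_size,
    )
```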
@@ -144,6 +144,16 @@ class ProfileWorkerStore(SQLBaseStore):
return 50

async def get_profileinfo(self, user_id: UserID) -> ProfileInfo:
"""
Fetch the display name and avatar URL of a user.

Args:
user_id: The user ID to fetch the profile for.

Returns:
The user's display name and avatar URL. Values may be null if unset
or if the user doesn't exist.
"""
profile = await self.db_pool.simple_select_one(
table="profiles",
keyvalues={"full_user_id": user_id.to_string()},

@@ -158,6 +168,15 @@ class ProfileWorkerStore(SQLBaseStore):
return ProfileInfo(avatar_url=profile[1], display_name=profile[0])

async def get_profile_displayname(self, user_id: UserID) -> Optional[str]:
"""
Fetch the display name of a user.

Args:
user_id: The user to get the display name for.

Raises:
404 if the user does not exist.
"""
return await self.db_pool.simple_select_one_onecol(
table="profiles",
keyvalues={"full_user_id": user_id.to_string()},

@@ -166,6 +185,15 @@ class ProfileWorkerStore(SQLBaseStore):
)

async def get_profile_avatar_url(self, user_id: UserID) -> Optional[str]:
"""
Fetch the avatar URL of a user.

Args:
user_id: The user to get the avatar URL for.

Raises:
404 if the user does not exist.
"""
return await self.db_pool.simple_select_one_onecol(
table="profiles",
keyvalues={"full_user_id": user_id.to_string()},

@@ -174,6 +202,12 @@ class ProfileWorkerStore(SQLBaseStore):
)

async def create_profile(self, user_id: UserID) -> None:
"""
Create a blank profile for a user.

Args:
user_id: The user to create the profile for.
"""
user_localpart = user_id.localpart
await self.db_pool.simple_insert(
table="profiles",
@@ -39,6 +39,7 @@ from typing import (
import attr

from synapse.api.constants import EventTypes, Membership
from synapse.logging.opentracing import trace
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause

@@ -422,6 +423,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
return invite
return None

@trace
async def get_rooms_for_local_user_where_membership_is(
self,
user_id: str,
@@ -24,6 +24,7 @@ from typing import List, Optional, Tuple

import attr

from synapse.logging.opentracing import trace
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import LoggingTransaction
from synapse.storage.databases.main.stream import _filter_results_by_stream

@@ -159,11 +160,17 @@ class StateDeltasStore(SQLBaseStore):
self._get_max_stream_id_in_current_state_deltas_txn,
)

@trace
async def get_current_state_deltas_for_room(
self, room_id: str, from_token: RoomStreamToken, to_token: RoomStreamToken
) -> List[StateDelta]:
"""Get the state deltas between two tokens."""

if not self._curr_state_delta_stream_cache.has_entity_changed(
room_id, from_token.stream
):
return []

def get_current_state_deltas_for_room_txn(
txn: LoggingTransaction,
) -> List[StateDelta]:
@@ -51,6 +51,7 @@ from typing import (
Iterable,
List,
Optional,
Protocol,
Set,
Tuple,
cast,

@@ -59,7 +60,7 @@ from typing import (

import attr
from immutabledict import immutabledict
from typing_extensions import Literal
from typing_extensions import Literal, assert_never

from twisted.internet import defer

@@ -67,7 +68,7 @@ from synapse.api.constants import Direction, EventTypes, Membership
from synapse.api.filtering import Filter
from synapse.events import EventBase
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import tag_args, trace
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import (
DatabasePool,

@@ -97,6 +98,18 @@ _STREAM_TOKEN = "stream"
_TOPOLOGICAL_TOKEN = "topological"


class PaginateFunction(Protocol):
async def __call__(
self,
*,
room_id: str,
from_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = 0,
) -> Tuple[List[EventBase], RoomStreamToken]: ...


# Used as return values for pagination APIs
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _EventDictReturn:

@@ -280,7 +293,7 @@ def generate_pagination_bounds(


def generate_next_token(
direction: Direction, last_topo_ordering: int, last_stream_ordering: int
direction: Direction, last_topo_ordering: Optional[int], last_stream_ordering: int
) -> RoomStreamToken:
"""
Generate the next room stream token based on the currently returned data.

@@ -447,7 +460,6 @@ def _filter_results_by_stream(
The `instance_name` arg is optional to handle historic rows, and is
interpreted as if it was "master".
"""

if instance_name is None:
instance_name = "master"

@@ -660,33 +672,43 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):

async def get_room_events_stream_for_rooms(
self,
*,
room_ids: Collection[str],
from_key: RoomStreamToken,
to_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = 0,
order: str = "DESC",
) -> Dict[str, Tuple[List[EventBase], RoomStreamToken]]:
"""Get new room events in stream ordering since `from_key`.

Args:
room_ids
from_key: Token from which no events are returned before
to_key: Token from which no events are returned after. (This
is typically the current stream token)
from_key: The token to stream from (starting point and heading in the given
direction)
to_key: The token representing the end stream position (end point)
limit: Maximum number of events to return
order: Either "DESC" or "ASC". Determines which events are
returned when the result is limited. If "DESC" then the most
recent `limit` events are returned, otherwise returns the
oldest `limit` events.
direction: Indicates whether we are paginating forwards or backwards
from `from_key`.

Returns:
A map from room id to a tuple containing:
- list of recent events in the room
- stream ordering key for the start of the chunk of events returned.

When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
"""
room_ids = self._events_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
if direction == Direction.FORWARDS:
room_ids = self._events_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
elif direction == Direction.BACKWARDS:
if to_key is not None:
room_ids = self._events_stream_cache.get_entities_changed(
room_ids, to_key.stream
)
else:
assert_never(direction)

if not room_ids:
return {}

@@ -698,12 +720,12 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
defer.gatherResults(
[
run_in_background(
self.get_room_events_stream_for_room,
room_id,
from_key,
to_key,
limit,
order=order,
self.paginate_room_events_by_stream_ordering,
room_id=room_id,
from_key=from_key,
to_key=to_key,
direction=direction,
limit=limit,
)
for room_id in rm_ids
],

@@ -727,69 +749,122 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
if self._events_stream_cache.has_entity_changed(room_id, from_id)
}

async def get_room_events_stream_for_room(
async def paginate_room_events_by_stream_ordering(
self,
*,
room_id: str,
from_key: RoomStreamToken,
to_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = 0,
order: str = "DESC",
) -> Tuple[List[EventBase], RoomStreamToken]:
"""Get new room events in stream ordering since `from_key`.
"""
Paginate events by `stream_ordering` in the room from the `from_key` in the
given `direction` to the `to_key` or `limit`.

Args:
room_id
from_key: Token from which no events are returned before
to_key: Token from which no events are returned after. (This
is typically the current stream token)
from_key: The token to stream from (starting point and heading in the given
direction)
to_key: The token representing the end stream position (end point)
direction: Indicates whether we are paginating forwards or backwards
from `from_key`.
limit: Maximum number of events to return
order: Either "DESC" or "ASC". Determines which events are
returned when the result is limited. If "DESC" then the most
recent `limit` events are returned, otherwise returns the
oldest `limit` events.

Returns:
The list of events (in ascending stream order) and the token from the start
of the chunk of events returned.
"""
if from_key == to_key:
return [], from_key
The results as a list of events and a token that points to the end
of the result set. If no events are returned then the end of the
stream has been reached (i.e. there are no events between `from_key`
and `to_key`).

has_changed = self._events_stream_cache.has_entity_changed(
room_id, from_key.stream
)
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
"""

# FIXME: When going forwards, we should enforce that the `to_key` is not `None`
# because we always need an upper bound when querying the events stream (as
# otherwise we'll potentially pick up events that are not fully persisted).

# We should only be working with `stream_ordering` tokens here
assert from_key is None or from_key.topological is None
assert to_key is None or to_key.topological is None

# We can bail early if we're looking forwards, and our `to_key` is already
# before our `from_key`.
if (
direction == Direction.FORWARDS
and to_key is not None
and to_key.is_before_or_eq(from_key)
):
# Token selection matches what we do below if there are no rows
return [], to_key if to_key else from_key
# Or vice-versa, if we're looking backwards and our `from_key` is already before
# our `to_key`.
elif (
direction == Direction.BACKWARDS
and to_key is not None
and from_key.is_before_or_eq(to_key)
):
# Token selection matches what we do below if there are no rows
return [], to_key if to_key else from_key

# We can do a quick sanity check to see if any events have been sent in the room
# since the earlier token.
has_changed = True
if direction == Direction.FORWARDS:
has_changed = self._events_stream_cache.has_entity_changed(
room_id, from_key.stream
)
elif direction == Direction.BACKWARDS:
if to_key is not None:
has_changed = self._events_stream_cache.has_entity_changed(
room_id, to_key.stream
)
else:
assert_never(direction)

if not has_changed:
return [], from_key
# Token selection matches what we do below if there are no rows
return [], to_key if to_key else from_key

order, from_bound, to_bound = generate_pagination_bounds(
direction, from_key, to_key
)

bounds = generate_pagination_where_clause(
direction=direction,
# The empty string will shortcut downstream code to only use the
# `stream_ordering` column
column_names=("", "stream_ordering"),
from_token=from_bound,
to_token=to_bound,
engine=self.database_engine,
)

def f(txn: LoggingTransaction) -> List[_EventDictReturn]:
# To handle tokens with a non-empty instance_map we fetch more
# results than necessary and then filter down
min_from_id = from_key.stream
max_to_id = to_key.get_max_stream_pos()

sql = """
SELECT event_id, instance_name, topological_ordering, stream_ordering
sql = f"""
SELECT event_id, instance_name, stream_ordering
FROM events
WHERE
room_id = ?
AND not outlier
AND stream_ordering > ? AND stream_ordering <= ?
ORDER BY stream_ordering %s LIMIT ?
""" % (
order,
)
txn.execute(sql, (room_id, min_from_id, max_to_id, 2 * limit))
AND {bounds}
ORDER BY stream_ordering {order} LIMIT ?
"""
txn.execute(sql, (room_id, 2 * limit))

rows = [
_EventDictReturn(event_id, None, stream_ordering)
for event_id, instance_name, topological_ordering, stream_ordering in txn
if _filter_results(
from_key,
to_key,
instance_name,
topological_ordering,
stream_ordering,
for event_id, instance_name, stream_ordering in txn
if _filter_results_by_stream(
lower_token=(
to_key if direction == Direction.BACKWARDS else from_key
),
upper_token=(
from_key if direction == Direction.BACKWARDS else to_key
),
instance_name=instance_name,
stream_ordering=stream_ordering,
)
][:limit]
return rows

@@ -800,18 +875,20 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
[r.event_id for r in rows], get_prev_content=True
)

if order.lower() == "desc":
ret.reverse()

if rows:
key = RoomStreamToken(stream=min(r.stream_ordering for r in rows))
next_key = generate_next_token(
direction=direction,
last_topo_ordering=None,
last_stream_ordering=rows[-1].stream_ordering,
)
else:
# Assume we didn't get anything because there was nothing to
# get.
key = from_key
# TODO (erikj): We should work out what to do here instead. (same as
# `_paginate_room_events_by_topological_ordering_txn(...)`)
next_key = to_key if to_key else from_key

return ret, key
return ret, next_key

@trace
async def get_current_state_delta_membership_changes_for_user(
self,
user_id: str,

@@ -1117,7 +1194,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):

rows, token = await self.db_pool.runInteraction(
"get_recent_event_ids_for_room",
self._paginate_room_events_txn,
self._paginate_room_events_by_topological_ordering_txn,
room_id,
from_token=end_token,
limit=limit,

@@ -1186,6 +1263,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):

return None

@trace
async def get_last_event_pos_in_room_before_stream_ordering(
self,
room_id: str,

@@ -1622,7 +1700,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
topological=topological_ordering, stream=stream_ordering
)

rows, start_token = self._paginate_room_events_txn(
rows, start_token = self._paginate_room_events_by_topological_ordering_txn(
txn,
room_id,
before_token,

@@ -1632,7 +1710,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
)
events_before = [r.event_id for r in rows]

rows, end_token = self._paginate_room_events_txn(
rows, end_token = self._paginate_room_events_by_topological_ordering_txn(
txn,
room_id,
after_token,

@@ -1795,14 +1873,14 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
def has_room_changed_since(self, room_id: str, stream_id: int) -> bool:
return self._events_stream_cache.has_entity_changed(room_id, stream_id)

def _paginate_room_events_txn(
def _paginate_room_events_by_topological_ordering_txn(
self,
txn: LoggingTransaction,
room_id: str,
from_token: RoomStreamToken,
to_token: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = -1,
limit: int = 0,
event_filter: Optional[Filter] = None,
) -> Tuple[List[_EventDictReturn], RoomStreamToken]:
"""Returns list of events before or after a given token.

@@ -1824,6 +1902,24 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
been reached (i.e. there are no events between `from_token` and
`to_token`), or `limit` is zero.
"""
# We can bail early if we're looking forwards, and our `to_key` is already
# before our `from_token`.
if (
direction == Direction.FORWARDS
and to_token is not None
and to_token.is_before_or_eq(from_token)
):
# Token selection matches what we do below if there are no rows
return [], to_token if to_token else from_token
# Or vice-versa, if we're looking backwards and our `from_token` is already before
# our `to_token`.
elif (
direction == Direction.BACKWARDS
and to_token is not None
and from_token.is_before_or_eq(to_token)
):
# Token selection matches what we do below if there are no rows
return [], to_token if to_token else from_token

args: List[Any] = [room_id]

@@ -1908,7 +2004,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
"bounds": bounds,
"order": order,
}

txn.execute(sql, args)

# Filter the result set.

@@ -1940,27 +2035,30 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return rows, next_token

@trace
async def paginate_room_events(
@tag_args
async def paginate_room_events_by_topological_ordering(
self,
*,
room_id: str,
from_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = -1,
limit: int = 0,
event_filter: Optional[Filter] = None,
) -> Tuple[List[EventBase], RoomStreamToken]:
"""Returns list of events before or after a given token.

When Direction.FORWARDS: from_key < x <= to_key
When Direction.BACKWARDS: from_key >= x > to_key
"""
Paginate events by `topological_ordering` (tie-break with `stream_ordering`) in
the room from the `from_key` in the given `direction` to the `to_key` or
`limit`.

Args:
room_id
from_key: The token used to stream from
to_key: A token which if given limits the results to only those before
from_key: The token to stream from (starting point and heading in the given
direction)
to_key: The token representing the end stream position (end point)
direction: Indicates whether we are paginating forwards or backwards
from `from_key`.
limit: The maximum number of events to return.
limit: Maximum number of events to return
event_filter: If provided filters the events to those that match the filter.

Returns:

@@ -1968,8 +2066,18 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
of the result set. If no events are returned then the end of the
stream has been reached (i.e. there are no events between `from_key`
and `to_key`).

When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
"""

# FIXME: When going forwards, we should enforce that the `to_key` is not `None`
# because we always need an upper bound when querying the events stream (as
# otherwise we'll potentially pick up events that are not fully persisted).

# We have these checks outside of the transaction function (txn) to save getting
# a DB connection and switching threads if we don't need to.
#
# We can bail early if we're looking forwards, and our `to_key` is already
# before our `from_key`.
if (

@@ -1992,8 +2100,8 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return [], to_key if to_key else from_key

rows, token = await self.db_pool.runInteraction(
"paginate_room_events",
self._paginate_room_events_txn,
"paginate_room_events_by_topological_ordering",
self._paginate_room_events_by_topological_ordering_txn,
room_id,
from_key,
to_key,

@@ -2105,6 +2213,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):

return None

@trace
def get_rooms_that_might_have_updates(
self, room_ids: StrCollection, from_token: RoomStreamToken
) -> StrCollection:
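Both pagination helpers in the hunks above document the same token contract: going forwards returns events with `from_key < x <= to_key` (ascending order), going backwards returns `to_key < x <= from_key` (descending order). A toy predicate, purely illustrative and using bare integers in place of `RoomStreamToken` stream positions, makes the bounds explicit:

```python
# Illustrative only: which stream positions a pagination call may return for
# each direction. Direction comes from synapse.api.constants as in the hunks
# above; the integer arguments stand in for token stream fields.
from synapse.api.constants import Direction


def in_pagination_range(
    direction: Direction, from_stream: int, to_stream: int, x: int
) -> bool:
    if direction == Direction.FORWARDS:
        # from_key < x <= to_key
        return from_stream < x <= to_stream
    else:
        # Direction.BACKWARDS: from_key >= x > to_key
        return to_stream < x <= from_stream
```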
29 synapse/util/events.py Normal file

@@ -0,0 +1,29 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#

from synapse.util.stringutils import random_string


def generate_fake_event_id() -> str:
"""
Generate an event ID from random ASCII characters.

This is primarily useful for generating fake event IDs in response to
requests from shadow-banned users.

Returns:
A string intended to look like an event ID, but with no actual meaning.
"""
return "$" + random_string(43)
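The new helper centralises the fake-ID trick the room servlets were each doing inline. A hedged usage sketch mirroring those call sites; `handler_call` is a stand-in for the event-creation coroutine, not a real method name:

```python
# Sketch of the shadow-ban pattern used by the room servlets: if sending
# raises ShadowBanError, pretend it worked and return a meaningless event ID.
from synapse.api.errors import ShadowBanError
from synapse.util.events import generate_fake_event_id


async def send_or_fake(handler_call) -> str:
    try:
        event = await handler_call()
        return event.event_id
    except ShadowBanError:
        return generate_fake_event_id()
```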
@@ -44,7 +44,7 @@ from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.storage.databases.main.events_worker import EventCacheEntry
from synapse.util import Clock
from synapse.util.stringutils import random_string
from synapse.util.events import generate_fake_event_id

from tests import unittest
from tests.test_utils import event_injection

@@ -52,10 +52,6 @@ from tests.test_utils import event_injection
logger = logging.getLogger(__name__)


def generate_fake_event_id() -> str:
return "$fake_" + random_string(43)


class FederationTestCase(unittest.FederatingHomeserverTestCase):
servlets = [
admin.register_servlets,
@@ -21,8 +21,6 @@ import synapse.rest.admin
from synapse.api.constants import EventTypes
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
from synapse.types import SlidingSyncStreamToken
from synapse.types.handlers import SlidingSyncConfig
from synapse.util import Clock

from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase

@@ -130,7 +128,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
self.helper.send(room_id1, "msg", tok=user1_tok)

timeline_limit = 5
conn_id = "conn_id"
sync_body = {
"lists": {
"foo-list": {

@@ -170,40 +167,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
)

# FIXME: This is a hack to record that the first room wasn't sent down
# sync, as we don't implement that currently.
sliding_sync_handler = self.hs.get_sliding_sync_handler()
requester = self.get_success(
self.hs.get_auth().get_user_by_access_token(user1_tok)
)
sync_config = SlidingSyncConfig(
user=requester.user,
requester=requester,
conn_id=conn_id,
)

parsed_initial_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, initial_from_token)
)
connection_position = self.get_success(
sliding_sync_handler.connection_store.record_rooms(
sync_config,
parsed_initial_from_token,
sent_room_ids=[],
unsent_room_ids=[room_id1],
)
)

# FIXME: Now fix up `from_token` with new connect position above.
parsed_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, from_token)
)
parsed_from_token = SlidingSyncStreamToken(
stream_token=parsed_from_token.stream_token,
connection_position=connection_position,
)
from_token = self.get_success(parsed_from_token.to_string(self.store))

# We now send another event to room1, so we should sync all the missing events.
resp = self.helper.send(room_id1, "msg2", tok=user1_tok)
expected_events.append(resp["event_id"])

@@ -238,7 +201,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):

self.helper.send(room_id1, "msg", tok=user1_tok)

conn_id = "conn_id"
sync_body = {
"lists": {
"foo-list": {

@@ -279,40 +241,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
)

# FIXME: This is a hack to record that the first room wasn't sent down
# sync, as we don't implement that currently.
sliding_sync_handler = self.hs.get_sliding_sync_handler()
requester = self.get_success(
self.hs.get_auth().get_user_by_access_token(user1_tok)
)
sync_config = SlidingSyncConfig(
user=requester.user,
requester=requester,
conn_id=conn_id,
)

parsed_initial_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, initial_from_token)
)
connection_position = self.get_success(
sliding_sync_handler.connection_store.record_rooms(
sync_config,
parsed_initial_from_token,
sent_room_ids=[],
unsent_room_ids=[room_id1],
)
)

# FIXME: Now fix up `from_token` with new connect position above.
parsed_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, from_token)
)
parsed_from_token = SlidingSyncStreamToken(
stream_token=parsed_from_token.stream_token,
connection_position=connection_position,
)
from_token = self.get_success(parsed_from_token.to_string(self.store))

# We now send another event to room1, so we should sync all the missing state.
self.helper.send(room_id1, "msg", tok=user1_tok)
@@ -161,10 +161,10 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
self.assertIsNone(response_body["rooms"][room_id1].get("required_state"))
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))

def test_rooms_required_state_incremental_sync_restart(self) -> None:
def test_rooms_incremental_sync_restart(self) -> None:
"""
Test `rooms.required_state` returns requested state events in the room during an
incremental sync, after a restart (and so the in memory caches are reset).
Test that after a restart (and so the in memory caches are reset) that
we correctly return an `M_UNKNOWN_POS`
"""

user1_id = self.register_user("user1", "pass")

@@ -195,22 +195,16 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
self.hs.get_sliding_sync_handler().connection_store._connections.clear()

# Make the Sliding Sync request
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)

# If the cache has been cleared then we do expect the state to come down
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
channel = self.make_request(
method="POST",
path=self.sync_endpoint + f"?pos={from_token}",
content=sync_body,
access_token=user1_tok,
)

self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.RoomHistoryVisibility, "")],
},
exact=True,
self.assertEqual(channel.code, 400, channel.json_body)
self.assertEqual(
channel.json_body["errcode"], "M_UNKNOWN_POS", channel.json_body
)
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))

def test_rooms_required_state_wildcard(self) -> None:
"""

@@ -637,8 +631,7 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):

def test_rooms_required_state_partial_state(self) -> None:
"""
Test partially-stated room are excluded unless `rooms.required_state` is
lazy-loading room members.
Test partially-stated room are excluded if they require full state.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")

@@ -655,59 +648,195 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
mark_event_as_partial_state(self.hs, join_response2["event_id"], room_id2)
)

# Make the Sliding Sync request (NOT lazy-loading room members)
# Make the Sliding Sync request with examples where `must_await_full_state()` is
# `False`
sync_body = {
"lists": {
"foo-list": {
"no-state-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 0,
},
"other-state-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
],
"timeline_limit": 0,
},
"lazy-load-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"local-members-only-list": {
"ranges": [[0, 1]],
"required_state": [
# Own user ID
[EventTypes.Member, user1_id],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"me-list": {
"ranges": [[0, 1]],
"required_state": [
# Own user ID
[EventTypes.Member, StateValues.ME],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"wildcard-type-local-state-key-list": {
"ranges": [[0, 1]],
"required_state": [
["*", user1_id],
# Not a user ID
["*", "foobarbaz"],
# Not a user ID
["*", "foo.bar.baz"],
# Not a user ID
["*", "@foo"],
],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)

# The list should include both rooms now because we don't need full state
for list_key in response_body["lists"].keys():
self.assertIncludes(
set(response_body["lists"][list_key]["ops"][0]["room_ids"]),
{room_id2, room_id1},
exact=True,
message=f"Expected all rooms to show up for list_key={list_key}. Response "
+ str(response_body["lists"][list_key]),
)

# Take each of the list variants and apply them to room subscriptions to make
# sure the same rules apply
for list_key in sync_body["lists"].keys():
sync_body_for_subscriptions = {
"room_subscriptions": {
room_id1: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
room_id2: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body_for_subscriptions, tok=user1_tok)

self.assertIncludes(
set(response_body["rooms"].keys()),
{room_id2, room_id1},
exact=True,
message=f"Expected all rooms to show up for test_key={list_key}.",
)

# =====================================================================

# Make the Sliding Sync request with examples where `must_await_full_state()` is
# `True`
sync_body = {
"lists": {
"wildcard-list": {
"ranges": [[0, 1]],
"required_state": [
["*", "*"],
],
"timeline_limit": 0,
},
"wildcard-type-remote-state-key-list": {
"ranges": [[0, 1]],
"required_state": [
["*", "@some:remote"],
# Not a user ID
["*", "foobarbaz"],
# Not a user ID
["*", "foo.bar.baz"],
# Not a user ID
["*", "@foo"],
],
"timeline_limit": 0,
},
"remote-member-list": {
"ranges": [[0, 1]],
"required_state": [
# Own user ID
[EventTypes.Member, user1_id],
# Remote member
[EventTypes.Member, "@some:remote"],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"lazy-but-remote-member-list": {
"ranges": [[0, 1]],
"required_state": [
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
# Remote member
[EventTypes.Member, "@some:remote"],
],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)

# Make sure the list includes room1 but room2 is excluded because it's still
# partially-stated
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id1],
}
],
response_body["lists"]["foo-list"],
)
for list_key in response_body["lists"].keys():
self.assertIncludes(
set(response_body["lists"][list_key]["ops"][0]["room_ids"]),
{room_id1},
exact=True,
message=f"Expected only fully-stated rooms to show up for list_key={list_key}. Response "
+ str(response_body["lists"][list_key]),
)

# Make the Sliding Sync request (with lazy-loading room members)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
],
"timeline_limit": 0,
},
# Take each of the list variants and apply them to room subscriptions to make
# sure the same rules apply
for list_key in sync_body["lists"].keys():
sync_body_for_subscriptions = {
"room_subscriptions": {
room_id1: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
room_id2: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
}
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
response_body, _ = self.do_sync(sync_body_for_subscriptions, tok=user1_tok)

# The list should include both rooms now because we're lazy-loading room members
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id2, room_id1],
}
],
response_body["lists"]["foo-list"],
)
self.assertIncludes(
set(response_body["rooms"].keys()),
{room_id1},
exact=True,
message=f"Expected only fully-stated rooms to show up for test_key={list_key}.",
)
@@ -1166,6 +1166,12 @@ def setup_test_homeserver(

hs.get_auth_handler().validate_hash = validate_hash # type: ignore[assignment]

# We need to replace the media threadpool with the fake test threadpool.
def thread_pool() -> threadpool.ThreadPool:
return reactor.getThreadPool()

hs.get_media_sender_thread_pool = thread_pool # type: ignore[method-assign]

# Load any configured modules into the homeserver
module_api = hs.get_module_api()
for module, module_config in hs.config.modules.loaded_modules:
@@ -148,7 +148,7 @@ class PaginationTestCase(HomeserverTestCase):
"""Make a request to /messages with a filter, returns the chunk of events."""

events, next_key = self.get_success(
self.hs.get_datastores().main.paginate_room_events(
self.hs.get_datastores().main.paginate_room_events_by_topological_ordering(
room_id=self.room_id,
from_key=self.from_token.room_key,
to_key=None,