Mirror of https://github.com/element-hq/synapse.git (synced 2024-12-20 19:10:45 +03:00)

Merge remote-tracking branch 'origin/develop' into shhs

Commit a1b8767da8: 153 changed files with 1446 additions and 1558 deletions
CHANGES.md (82 changes)

@@ -1,3 +1,85 @@
Synapse 0.99.4rc1 (2019-05-13)
==============================

Features
--------

- Add systemd-python to the optional dependencies to enable logging to the systemd journal. Install with `pip install matrix-synapse[systemd]`. ([\#4339](https://github.com/matrix-org/synapse/issues/4339))
- Add a default .m.rule.tombstone push rule. ([\#4867](https://github.com/matrix-org/synapse/issues/4867))
- Add the ability for password provider modules to bind email addresses to users upon registration. ([\#4947](https://github.com/matrix-org/synapse/issues/4947))
- Implementation of [MSC1711](https://github.com/matrix-org/matrix-doc/pull/1711), including config options for requiring valid TLS certificates for federation traffic, the ability to disable TLS validation for specific domains, and the ability to specify your own list of CA certificates. ([\#4967](https://github.com/matrix-org/synapse/issues/4967))
- Remove presence list support as per MSC1819. ([\#4989](https://github.com/matrix-org/synapse/issues/4989))
- Reduce CPU usage when starting pushers during start-up. ([\#4991](https://github.com/matrix-org/synapse/issues/4991))
- Add a delete group admin API. ([\#5002](https://github.com/matrix-org/synapse/issues/5002))
- Add a config option to block users from looking up 3PIDs. ([\#5010](https://github.com/matrix-org/synapse/issues/5010))
- Add context to phonehome stats. ([\#5020](https://github.com/matrix-org/synapse/issues/5020))
- Configure the example systemd units to have a log identifier of `matrix-synapse` instead of the executable name, `python`. Contributed by Christoph Müller. ([\#5023](https://github.com/matrix-org/synapse/issues/5023))
- Add time-based account expiration. ([\#5027](https://github.com/matrix-org/synapse/issues/5027), [\#5047](https://github.com/matrix-org/synapse/issues/5047), [\#5073](https://github.com/matrix-org/synapse/issues/5073), [\#5116](https://github.com/matrix-org/synapse/issues/5116))
- Add support for handling the /versions, /voip and /push_rules client endpoints in the client_reader worker. ([\#5063](https://github.com/matrix-org/synapse/issues/5063), [\#5065](https://github.com/matrix-org/synapse/issues/5065), [\#5070](https://github.com/matrix-org/synapse/issues/5070))
- Add a configuration option to require authentication on the /publicRooms and /profile endpoints. ([\#5083](https://github.com/matrix-org/synapse/issues/5083))
- Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards compatibility, for now.) ([\#5119](https://github.com/matrix-org/synapse/issues/5119))
- Implement an admin API for sending server notices. Many thanks to @krombel, who provided a foundation for this work. ([\#5121](https://github.com/matrix-org/synapse/issues/5121), [\#5142](https://github.com/matrix-org/synapse/issues/5142))


Bugfixes
--------

- Avoid redundant URL encoding of the redirect URL for SSO login in the fallback login page. Fixes a regression introduced in [#4220](https://github.com/matrix-org/synapse/pull/4220). Contributed by Marcel Fabian Krüger ("[zaugin](https://github.com/zauguin)"). ([\#4555](https://github.com/matrix-org/synapse/issues/4555))
- Fix a bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server. ([\#4942](https://github.com/matrix-org/synapse/issues/4942), [\#5103](https://github.com/matrix-org/synapse/issues/5103))
- Fix a sync bug which made accepting invites unreliable in worker-mode synapses. ([\#4955](https://github.com/matrix-org/synapse/issues/4955), [\#4956](https://github.com/matrix-org/synapse/issues/4956))
- start.sh: fix the --no-rate-limit option for messages and make it bypass the rate limit on registration and login too. ([\#4981](https://github.com/matrix-org/synapse/issues/4981))
- Transfer related groups on room upgrade. ([\#4990](https://github.com/matrix-org/synapse/issues/4990))
- Prevent the ability to kick users from a room they aren't in. ([\#4999](https://github.com/matrix-org/synapse/issues/4999))
- Fix issue #4596 so that the synapse_port_db script works with the --curses option on Python 3. Contributed by Anders Jensen-Waud <anders@jensenwaud.com>. ([\#5003](https://github.com/matrix-org/synapse/issues/5003))
- Clients timing out/disappearing while downloading from the media repository will no longer log a spurious "Producer was not unregistered" message. ([\#5009](https://github.com/matrix-org/synapse/issues/5009))
- Fix "cannot import name execute_batch" error with postgres. ([\#5032](https://github.com/matrix-org/synapse/issues/5032))
- Fix disappearing exceptions in the manhole. ([\#5035](https://github.com/matrix-org/synapse/issues/5035))
- Work around a bug in Twisted where attempting too many concurrent DNS requests could cause it to hang due to running out of file descriptors. ([\#5037](https://github.com/matrix-org/synapse/issues/5037))
- Make sure we're not registering the same 3PID twice on registration. ([\#5071](https://github.com/matrix-org/synapse/issues/5071))
- Don't crash on lack of expiry templates. ([\#5077](https://github.com/matrix-org/synapse/issues/5077))
- Fix the rate limiting on third party invites. ([\#5104](https://github.com/matrix-org/synapse/issues/5104))
- Add some missing limitations to room alias creation. ([\#5124](https://github.com/matrix-org/synapse/issues/5124), [\#5128](https://github.com/matrix-org/synapse/issues/5128))
- Limit the number of EDUs in transactions to 100, as expected by Synapse. Thanks to @superboum for this work! ([\#5138](https://github.com/matrix-org/synapse/issues/5138))
- Fix bogus imports in unit tests. ([\#5154](https://github.com/matrix-org/synapse/issues/5154))


Internal Changes
----------------

- Add a test to verify the threepid auth check added in #4435. ([\#4474](https://github.com/matrix-org/synapse/issues/4474))
- Fix/improve some docstrings in the replication code. ([\#4949](https://github.com/matrix-org/synapse/issues/4949))
- Split synapse.replication.tcp.streams into smaller files. ([\#4953](https://github.com/matrix-org/synapse/issues/4953))
- Refactor replication row generation/parsing. ([\#4954](https://github.com/matrix-org/synapse/issues/4954))
- Run `black` to clean up formatting on `synapse/storage/roommember.py` and `synapse/storage/events.py`. ([\#4959](https://github.com/matrix-org/synapse/issues/4959))
- Remove the log line for passwords passed via the admin API. ([\#4965](https://github.com/matrix-org/synapse/issues/4965))
- Fix a typo in the TLS filenames in docker/README.md. Also add the '-p' command-line option to the 'docker run' example. Contributed by Jurrie Overgoor. ([\#4968](https://github.com/matrix-org/synapse/issues/4968))
- Refactor room version definitions. ([\#4969](https://github.com/matrix-org/synapse/issues/4969))
- Reduce the log level of .well-known/matrix/client responses. ([\#4972](https://github.com/matrix-org/synapse/issues/4972))
- Add `config.signing_key_path` that can be read by the `synapse.config` utility. ([\#4974](https://github.com/matrix-org/synapse/issues/4974))
- Track which identity server is used when binding a threepid, and use that for unbinding, as per MSC1915. ([\#4982](https://github.com/matrix-org/synapse/issues/4982))
- Rewrite KeyringTestCase as a HomeserverTestCase. ([\#4985](https://github.com/matrix-org/synapse/issues/4985))
- README updates: corrected the default POSTGRES_USER; added a port-forwarding hint in the TLS section. ([\#4987](https://github.com/matrix-org/synapse/issues/4987))
- Remove a number of unused tables from the database schema. ([\#4992](https://github.com/matrix-org/synapse/issues/4992), [\#5028](https://github.com/matrix-org/synapse/issues/5028), [\#5033](https://github.com/matrix-org/synapse/issues/5033))
- Run `black` on the remainder of `synapse/storage/`. ([\#4996](https://github.com/matrix-org/synapse/issues/4996))
- Fix grammar in get_current_users_in_room and give it a docstring. ([\#4998](https://github.com/matrix-org/synapse/issues/4998))
- Clean up some code in the server-key Keyring. ([\#5001](https://github.com/matrix-org/synapse/issues/5001))
- Convert the SYNAPSE_NO_TLS Docker variable to a boolean for user friendliness. Contributed by Gabriel Eckerson. ([\#5005](https://github.com/matrix-org/synapse/issues/5005))
- Refactor synapse.storage._base._simple_select_list_paginate. ([\#5007](https://github.com/matrix-org/synapse/issues/5007))
- Store the notary server name correctly in server_keys_json. ([\#5024](https://github.com/matrix-org/synapse/issues/5024))
- Rewrite Datastore.get_server_verify_keys to reduce the number of database transactions. ([\#5030](https://github.com/matrix-org/synapse/issues/5030))
- Remove an extraneous period from copyright headers. ([\#5046](https://github.com/matrix-org/synapse/issues/5046))
- Update documentation for where to get Synapse packages. ([\#5067](https://github.com/matrix-org/synapse/issues/5067))
- Add workarounds for pep-517 install errors. ([\#5098](https://github.com/matrix-org/synapse/issues/5098))
- Improve logging when event-signature checks fail. ([\#5100](https://github.com/matrix-org/synapse/issues/5100))
- Factor out an "assert_requester_is_admin" function. ([\#5120](https://github.com/matrix-org/synapse/issues/5120))
- Remove the requirement to authenticate for /admin/server_version. ([\#5122](https://github.com/matrix-org/synapse/issues/5122))
- Prevent an exception from being raised in an IResolutionReceiver, and use a more generic error message for blacklisted URL previews. ([\#5155](https://github.com/matrix-org/synapse/issues/5155))
- Run `black` on the tests directory. ([\#5170](https://github.com/matrix-org/synapse/issues/5170))
- Fix CI after the new release of isort. ([\#5179](https://github.com/matrix-org/synapse/issues/5179))


Synapse 0.99.3.2 (2019-05-03)
=============================
INSTALL.md (21 changes)

@@ -257,9 +257,8 @@ https://github.com/spantaleev/matrix-docker-ansible-deploy
 #### Matrix.org packages
 
 Matrix.org provides Debian/Ubuntu packages of the latest stable version of
-Synapse via https://packages.matrix.org/debian/. To use them:
-
-For Debian 9 (Stretch), Ubuntu 16.04 (Xenial), and later:
+Synapse via https://packages.matrix.org/debian/. They are available for Debian
+9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:
 
 ```
 sudo apt install -y lsb-release wget apt-transport-https

@@ -270,19 +269,6 @@ sudo apt update
 sudo apt install matrix-synapse-py3
 ```
-
-For Debian 8 (Jessie):
-
-```
-sudo apt install -y lsb-release wget apt-transport-https
-sudo wget -O /etc/apt/trusted.gpg.d/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
-echo "deb [signed-by=5586CCC0CBBBEFC7A25811ADF473DD4473365DE1] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
-    sudo tee /etc/apt/sources.list.d/matrix-org.list
-sudo apt update
-sudo apt install matrix-synapse-py3
-```
-
-The fingerprint of the repository signing key is AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058.
 
 **Note**: if you followed a previous version of these instructions which
 recommended using `apt-key add` to add an old key from
 `https://matrix.org/packages/debian/`, you should note that this key has been

@@ -290,6 +276,9 @@ revoked. You should remove the old key with `sudo apt-key remove
 C35EB17E1EAE708E6603A9B3AD0592FE47F0DF61`, and follow the above instructions to
 update your configuration.
 
+The fingerprint of the repository signing key (as shown by `gpg
+/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
+`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
 
 #### Downstream Debian/Ubuntu packages
(Deleted `changelog.d` entry files follow; their one-line contents duplicate the CHANGES.md items above and are omitted here.)
changelog.d/5043.feature (new file)

@@ -0,0 +1 @@
+Add ability to blacklist IP ranges for the federation client.
(Further deleted `changelog.d` entry files, likewise duplicating CHANGES.md items above, are omitted here.)
changelog.d/5171.misc (new file)

@@ -0,0 +1 @@
+Update tests to consistently be configured via the same code that is used when loading from configuration files.
@@ -12,6 +12,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.%i --config-path=/
 ExecReload=/bin/kill -HUP $MAINPID
 Restart=always
 RestartSec=3
+SyslogIdentifier=matrix-synapse-%i
 
 [Install]
 WantedBy=matrix-synapse.service
@@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
 ExecReload=/bin/kill -HUP $MAINPID
 Restart=always
 RestartSec=3
+SyslogIdentifier=matrix-synapse
 
 [Install]
 WantedBy=matrix.target
@@ -22,10 +22,10 @@ Group=nogroup
 
 WorkingDirectory=/opt/synapse
 ExecStart=/opt/synapse/env/bin/python -m synapse.app.homeserver --config-path=/opt/synapse/homeserver.yaml
+SyslogIdentifier=matrix-synapse
 
 # adjust the cache factor if necessary
 # Environment=SYNAPSE_CACHE_FACTOR=2.0
 
 [Install]
 WantedBy=multi-user.target
debian/changelog (7 changes, vendored)
@@ -1,3 +1,10 @@
+matrix-synapse-py3 (0.99.3.2+nmu1) UNRELEASED; urgency=medium
+
+  [ Christoph Müller ]
+  * Configure the systemd units to have a log identifier of `matrix-synapse`
+
+ -- Christoph Müller <iblzm@hotmail.de>  Wed, 17 Apr 2019 16:17:32 +0200
+
 matrix-synapse-py3 (0.99.3.2) stable; urgency=medium
 
   * New synapse release 0.99.3.2.
debian/matrix-synapse.service (1 change, vendored)
@@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
 ExecReload=/bin/kill -HUP $MAINPID
 Restart=always
 RestartSec=3
+SyslogIdentifier=matrix-synapse
 
 [Install]
 WantedBy=multi-user.target
@@ -48,7 +48,10 @@ How to monitor Synapse metrics using Prometheus
   - job_name: "synapse"
     metrics_path: "/_synapse/metrics"
     static_configs:
-      - targets: ["my.server.here:9092"]
+      - targets: ["my.server.here:port"]
+
+where ``my.server.here`` is the IP address of Synapse, and ``port`` is the listener port
+configured with the ``metrics`` resource.
 
 If your prometheus is older than 1.5.2, you will need to replace
 ``static_configs`` in the above with ``target_groups``.
@@ -69,6 +69,7 @@ Let's assume that we expect clients to connect to our server at
     SSLEngine on
     ServerName matrix.example.com;
 
+    AllowEncodedSlashes NoDecode
     ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
     ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
 </VirtualHost>

@@ -77,6 +78,7 @@ Let's assume that we expect clients to connect to our server at
     SSLEngine on
     ServerName example.com;
 
+    AllowEncodedSlashes NoDecode
     ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
     ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
 </VirtualHost>
@@ -115,6 +115,24 @@ pid_file: DATADIR/homeserver.pid
 # - nyc.example.com
 # - syd.example.com
 
+# Prevent federation requests from being sent to the following
+# blacklist IP address CIDR ranges. If this option is not specified, or
+# specified with an empty list, no ip range blacklist will be enforced.
+#
+# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
+# listed here, since they correspond to unroutable addresses.)
+#
+federation_ip_range_blacklist:
+  - '127.0.0.0/8'
+  - '10.0.0.0/8'
+  - '172.16.0.0/12'
+  - '192.168.0.0/16'
+  - '100.64.0.0/10'
+  - '169.254.0.0/16'
+  - '::1/128'
+  - 'fe80::/64'
+  - 'fc00::/7'
 
 # List of ports that Synapse should listen on, their purpose and their
 # configuration.
 #
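To illustrate what this option does, the configured CIDR ranges can be checked against a resolved destination address before a federation request is made. Synapse itself builds a `netaddr.IPSet` from this list; the sketch below uses the stdlib `ipaddress` module instead, so the module choice and the `is_blacklisted` helper are illustrative assumptions, not Synapse's actual API.

```python
import ipaddress

# The CIDR ranges from the sample config above, plus 0.0.0.0 and ::,
# which are always blacklisted even when not listed explicitly.
BLACKLIST = [ipaddress.ip_network(cidr) for cidr in [
    "127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
    "100.64.0.0/10", "169.254.0.0/16", "::1/128", "fe80::/64", "fc00::/7",
    "0.0.0.0/32", "::/128",
]]

def is_blacklisted(addr: str) -> bool:
    """Return True if the resolved address falls inside a blacklisted range.

    Membership tests against a network of the other IP version simply
    return False, so mixing IPv4 and IPv6 ranges in one list is fine.
    """
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLACKLIST)

print(is_blacklisted("192.168.1.10"))  # True: inside 192.168.0.0/16
print(is_blacklisted("8.8.8.8"))       # False: publicly routable
```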
@@ -27,4 +27,4 @@ try:
 except ImportError:
     pass
 
-__version__ = "0.99.3.2"
+__version__ = "0.99.4rc1"
@@ -17,6 +17,8 @@
 import logging
 import os.path
 
+from netaddr import IPSet
+
 from synapse.http.endpoint import parse_and_validate_server_name
 from synapse.python_dependencies import DependencyException, check_requirements
 

@@ -137,6 +139,24 @@ class ServerConfig(Config):
         for domain in federation_domain_whitelist:
             self.federation_domain_whitelist[domain] = True
 
+        self.federation_ip_range_blacklist = config.get(
+            "federation_ip_range_blacklist", [],
+        )
+
+        # Attempt to create an IPSet from the given ranges
+        try:
+            self.federation_ip_range_blacklist = IPSet(
+                self.federation_ip_range_blacklist
+            )
+
+            # Always blacklist 0.0.0.0, ::
+            self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
+        except Exception as e:
+            raise ConfigError(
+                "Invalid range(s) provided in "
+                "federation_ip_range_blacklist: %s" % e
+            )
+
         if self.public_baseurl is not None:
             if self.public_baseurl[-1] != '/':
                 self.public_baseurl += '/'
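A minimal sketch of the validation behaviour in this hunk: bad CIDR input is turned into a `ConfigError`, and the unroutable addresses are always appended. The real code uses `netaddr.IPSet`; this sketch substitutes the stdlib `ipaddress` module and a locally stubbed `ConfigError`, so both are assumptions for illustration.

```python
import ipaddress

class ConfigError(Exception):
    """Stand-in stub for Synapse's ConfigError, for this sketch only."""

def parse_ip_range_blacklist(ranges):
    # Build one network object per configured CIDR range; invalid input
    # surfaces as a ConfigError, mirroring the try/except above.
    try:
        nets = [ipaddress.ip_network(r) for r in ranges]
        # Always blacklist 0.0.0.0 and ::.
        nets += [ipaddress.ip_network("0.0.0.0/32"),
                 ipaddress.ip_network("::/128")]
    except ValueError as e:
        raise ConfigError(
            "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
        )
    return nets

print(len(parse_ip_range_blacklist(["10.0.0.0/8"])))  # 3: one range + the two defaults
```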
@@ -386,6 +406,24 @@ class ServerConfig(Config):
 # - nyc.example.com
 # - syd.example.com
 
+# Prevent federation requests from being sent to the following
+# blacklist IP address CIDR ranges. If this option is not specified, or
+# specified with an empty list, no ip range blacklist will be enforced.
+#
+# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
+# listed here, since they correspond to unroutable addresses.)
+#
+federation_ip_range_blacklist:
+  - '127.0.0.0/8'
+  - '10.0.0.0/8'
+  - '172.16.0.0/12'
+  - '192.168.0.0/16'
+  - '100.64.0.0/10'
+  - '169.254.0.0/16'
+  - '::1/128'
+  - 'fe80::/64'
+  - 'fc00::/7'
 
 # List of ports that Synapse should listen on, their purpose and their
 # configuration.
 #
@@ -33,12 +33,14 @@ from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage import UserPresenceState
 from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
 
+# This is defined in the Matrix spec and enforced by the receiver.
+MAX_EDUS_PER_TRANSACTION = 100
+
 logger = logging.getLogger(__name__)
 
 
 sent_edus_counter = Counter(
-    "synapse_federation_client_sent_edus",
-    "Total number of EDUs successfully sent",
+    "synapse_federation_client_sent_edus", "Total number of EDUs successfully sent"
 )
 
 sent_edus_by_type = Counter(
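The new `MAX_EDUS_PER_TRANSACTION` constant caps how many EDUs the sender packs into a single federation transaction. A toy sketch of that capping follows; the real `PerDestinationQueue` also interleaves PDUs, presence, and receipts, so `batch_edus` is an illustrative helper, not Synapse's actual method.

```python
# Limit defined in the Matrix spec and enforced by the receiving server.
MAX_EDUS_PER_TRANSACTION = 100

def batch_edus(pending_edus):
    """Yield chunks of at most MAX_EDUS_PER_TRANSACTION queued EDUs,
    one chunk per outgoing transaction."""
    for i in range(0, len(pending_edus), MAX_EDUS_PER_TRANSACTION):
        yield pending_edus[i:i + MAX_EDUS_PER_TRANSACTION]

# 250 queued EDUs fit in three transactions: 100 + 100 + 50.
sizes = [len(batch) for batch in batch_edus(list(range(250)))]
print(sizes)  # [100, 100, 50]
```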
@@ -58,6 +60,7 @@ class PerDestinationQueue(object):
         destination (str): the server_name of the destination that we are managing
             transmission for.
     """
 
     def __init__(self, hs, transaction_manager, destination):
         self._server_name = hs.hostname
         self._clock = hs.get_clock()
@@ -68,17 +71,17 @@ class PerDestinationQueue(object):
         self.transmission_loop_running = False
 
         # a list of tuples of (pending pdu, order)
         self._pending_pdus = []  # type: list[tuple[EventBase, int]]
         self._pending_edus = []  # type: list[Edu]
 
         # Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
         # based on their key (e.g. typing events by room_id)
         # Map of (edu_type, key) -> Edu
         self._pending_edus_keyed = {}  # type: dict[tuple[str, str], Edu]
 
         # Map of user_id -> UserPresenceState of pending presence to be sent to this
         # destination
         self._pending_presence = {}  # type: dict[str, UserPresenceState]
 
         # room_id -> receipt_type -> user_id -> receipt_dict
         self._pending_rrs = {}
@ -120,9 +123,7 @@ class PerDestinationQueue(object):
|
||||||
Args:
|
Args:
|
||||||
states (iterable[UserPresenceState]): presence to send
|
states (iterable[UserPresenceState]): presence to send
|
||||||
"""
|
"""
|
||||||
self._pending_presence.update({
|
self._pending_presence.update({state.user_id: state for state in states})
|
||||||
state.user_id: state for state in states
|
|
||||||
})
|
|
||||||
self.attempt_new_transaction()
|
self.attempt_new_transaction()
|
||||||
|
|
||||||
def queue_read_receipt(self, receipt):
|
def queue_read_receipt(self, receipt):
|
||||||
|
@ -132,14 +133,9 @@ class PerDestinationQueue(object):
|
||||||
Args:
|
Args:
|
||||||
receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
|
receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
|
||||||
"""
|
"""
|
||||||
self._pending_rrs.setdefault(
|
self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
|
||||||
receipt.room_id, {},
|
|
||||||
).setdefault(
|
|
||||||
receipt.receipt_type, {}
|
receipt.receipt_type, {}
|
||||||
)[receipt.user_id] = {
|
)[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
|
||||||
"event_ids": receipt.event_ids,
|
|
||||||
"data": receipt.data,
|
|
||||||
}
|
|
||||||
|
|
||||||
def flush_read_receipts_for_room(self, room_id):
|
def flush_read_receipts_for_room(self, room_id):
|
||||||
# if we don't have any read-receipts for this room, it may be that we've already
|
# if we don't have any read-receipts for this room, it may be that we've already
|
||||||
|
@ -170,10 +166,7 @@ class PerDestinationQueue(object):
|
||||||
# request at which point pending_pdus just keeps growing.
|
# request at which point pending_pdus just keeps growing.
|
||||||
# we need application-layer timeouts of some flavour of these
|
# we need application-layer timeouts of some flavour of these
|
||||||
# requests
|
# requests
|
||||||
logger.debug(
|
logger.debug("TX [%s] Transaction already in progress", self._destination)
|
||||||
"TX [%s] Transaction already in progress",
|
|
||||||
self._destination
|
|
||||||
)
|
|
||||||
return
|
return
|
||||||
|
|
||||||
logger.debug("TX [%s] Starting transaction loop", self._destination)
|
logger.debug("TX [%s] Starting transaction loop", self._destination)
|
||||||
|
@ -197,7 +190,8 @@ class PerDestinationQueue(object):
|
||||||
pending_pdus = []
|
pending_pdus = []
|
||||||
while True:
|
while True:
|
||||||
device_message_edus, device_stream_id, dev_list_id = (
|
device_message_edus, device_stream_id, dev_list_id = (
|
||||||
yield self._get_new_device_messages()
|
# We have to keep 2 free slots for presence and rr_edus
|
||||||
|
yield self._get_new_device_messages(MAX_EDUS_PER_TRANSACTION - 2)
|
||||||
)
|
)
|
||||||
|
|
||||||
# BEGIN CRITICAL SECTION
|
# BEGIN CRITICAL SECTION
|
||||||
|
@@ -216,19 +210,9 @@ class PerDestinationQueue(object):
 
                 pending_edus = []
 
-                pending_edus.extend(self._get_rr_edus(force_flush=False))
-
                 # We can only include at most 100 EDUs per transactions
-                pending_edus.extend(self._pop_pending_edus(100 - len(pending_edus)))
-
-                pending_edus.extend(
-                    self._pending_edus_keyed.values()
-                )
-
-                self._pending_edus_keyed = {}
-
-                pending_edus.extend(device_message_edus)
+                # rr_edus and pending_presence take at most one slot each
+                pending_edus.extend(self._get_rr_edus(force_flush=False))
 
                 pending_presence = self._pending_presence
                 self._pending_presence = {}
                 if pending_presence:
@@ -248,9 +232,23 @@ class PerDestinationQueue(object):
                     )
                 )
 
+                pending_edus.extend(device_message_edus)
+                pending_edus.extend(
+                    self._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
+                )
+                while (
+                    len(pending_edus) < MAX_EDUS_PER_TRANSACTION
+                    and self._pending_edus_keyed
+                ):
+                    _, val = self._pending_edus_keyed.popitem()
+                    pending_edus.append(val)
+
                 if pending_pdus:
-                    logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
-                                 self._destination, len(pending_pdus))
+                    logger.debug(
+                        "TX [%s] len(pending_pdus_by_dest[dest]) = %d",
+                        self._destination,
+                        len(pending_pdus),
+                    )
 
                 if not pending_pdus and not pending_edus:
                     logger.debug("TX [%s] Nothing to send", self._destination)
@@ -259,7 +257,7 @@ class PerDestinationQueue(object):
 
                 # if we've decided to send a transaction anyway, and we have room, we
                 # may as well send any pending RRs
-                if len(pending_edus) < 100:
+                if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
                     pending_edus.extend(self._get_rr_edus(force_flush=True))
 
                 # END CRITICAL SECTION
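The batching logic in the hunks above fills a transaction from fixed-cost sources first, then drains keyed EDUs with `dict.popitem()` until the per-transaction cap is reached. A standalone sketch of that pattern follows; the names (`build_batch`, `MAX_EDUS`, the argument names) are illustrative, not Synapse's API:

```python
MAX_EDUS = 100  # per-transaction cap, as in the Matrix spec

def build_batch(rr_edus, device_edus, queued_edus, keyed_edus):
    """Assemble at most MAX_EDUS EDUs for one transaction (illustrative sketch)."""
    pending = []
    pending.extend(rr_edus)
    pending.extend(device_edus)
    # Take as many plain queued EDUs as still fit.
    pending.extend(queued_edus[: MAX_EDUS - len(pending)])
    # Drain keyed EDUs (latest-wins per key) until the cap is hit.
    while len(pending) < MAX_EDUS and keyed_edus:
        _, val = keyed_edus.popitem()
        pending.append(val)
    return pending

batch = build_batch(
    rr_edus=["rr"],
    device_edus=["dev"] * 2,
    queued_edus=["edu"] * 200,
    keyed_edus={("m.typing", "!room"): "typing"},
)
assert len(batch) == 100
```

Draining the keyed-EDU map last means a flood of plain EDUs can no longer starve typing or receipt updates of their slots, which is the point of the reordering in the diff.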
@@ -303,22 +301,25 @@ class PerDestinationQueue(object):
         except HttpResponseException as e:
             logger.warning(
                 "TX [%s] Received %d response to transaction: %s",
-                self._destination, e.code, e,
+                self._destination,
+                e.code,
+                e,
             )
         except RequestSendFailed as e:
-            logger.warning("TX [%s] Failed to send transaction: %s", self._destination, e)
+            logger.warning(
+                "TX [%s] Failed to send transaction: %s", self._destination, e
+            )
 
             for p, _ in pending_pdus:
-                logger.info("Failed to send event %s to %s", p.event_id,
-                            self._destination)
+                logger.info(
+                    "Failed to send event %s to %s", p.event_id, self._destination
+                )
         except Exception:
-            logger.exception(
-                "TX [%s] Failed to send transaction",
-                self._destination,
-            )
+            logger.exception("TX [%s] Failed to send transaction", self._destination)
             for p, _ in pending_pdus:
-                logger.info("Failed to send event %s to %s", p.event_id,
-                            self._destination)
+                logger.info(
+                    "Failed to send event %s to %s", p.event_id, self._destination
+                )
         finally:
             # We want to be *very* sure we clear this after we stop processing
             self.transmission_loop_running = False
@@ -346,27 +347,13 @@ class PerDestinationQueue(object):
         return pending_edus
 
     @defer.inlineCallbacks
-    def _get_new_device_messages(self):
-        last_device_stream_id = self._last_device_stream_id
-        to_device_stream_id = self._store.get_to_device_stream_token()
-        contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
-            self._destination, last_device_stream_id, to_device_stream_id
-        )
-        edus = [
-            Edu(
-                origin=self._server_name,
-                destination=self._destination,
-                edu_type="m.direct_to_device",
-                content=content,
-            )
-            for content in contents
-        ]
-
+    def _get_new_device_messages(self, limit):
         last_device_list = self._last_device_list_stream_id
+        # Will return at most 20 entries
         now_stream_id, results = yield self._store.get_devices_by_remote(
             self._destination, last_device_list
         )
-        edus.extend(
+        edus = [
             Edu(
                 origin=self._server_name,
                 destination=self._destination,
@@ -374,5 +361,26 @@ class PerDestinationQueue(object):
                 content=content,
             )
             for content in results
+        ]
+
+        assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
+
+        last_device_stream_id = self._last_device_stream_id
+        to_device_stream_id = self._store.get_to_device_stream_token()
+        contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
+            self._destination,
+            last_device_stream_id,
+            to_device_stream_id,
+            limit - len(edus),
+        )
+        edus.extend(
+            Edu(
+                origin=self._server_name,
+                destination=self._destination,
+                edu_type="m.direct_to_device",
+                content=content,
+            )
+            for content in contents
+        )
 
         defer.returnValue((edus, stream_id, now_stream_id))
@@ -90,9 +90,32 @@ class IPBlacklistingResolver(object):
     def resolveHostName(self, recv, hostname, portNumber=0):
 
         r = recv()
-        d = defer.Deferred()
         addresses = []
 
+        def _callback():
+            r.resolutionBegan(None)
+
+            has_bad_ip = False
+            for i in addresses:
+                ip_address = IPAddress(i.host)
+
+                if check_against_blacklist(
+                    ip_address, self._ip_whitelist, self._ip_blacklist
+                ):
+                    logger.info(
+                        "Dropped %s from DNS resolution to %s due to blacklist" %
+                        (ip_address, hostname)
+                    )
+                    has_bad_ip = True
+
+            # if we have a blacklisted IP, we'd like to raise an error to block the
+            # request, but all we can really do from here is claim that there were no
+            # valid results.
+            if not has_bad_ip:
+                for i in addresses:
+                    r.addressResolved(i)
+            r.resolutionComplete()
+
         @provider(IResolutionReceiver)
         class EndpointReceiver(object):
             @staticmethod
@@ -101,34 +124,16 @@ class IPBlacklistingResolver(object):
 
             @staticmethod
             def addressResolved(address):
-                ip_address = IPAddress(address.host)
-
-                if check_against_blacklist(
-                    ip_address, self._ip_whitelist, self._ip_blacklist
-                ):
-                    logger.info(
-                        "Dropped %s from DNS resolution to %s" % (ip_address, hostname)
-                    )
-                    raise SynapseError(403, "IP address blocked by IP blacklist entry")
-
                 addresses.append(address)
 
             @staticmethod
             def resolutionComplete():
-                d.callback(addresses)
+                _callback()
 
         self._reactor.nameResolver.resolveHostName(
             EndpointReceiver, hostname, portNumber=portNumber
         )
 
-        def _callback(addrs):
-            r.resolutionBegan(None)
-            for i in addrs:
-                r.addressResolved(i)
-            r.resolutionComplete()
-
-        d.addCallback(_callback)
-
         return r
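The resolver change above collects every resolved address first, then either reports all of them or none: a single blacklisted IP in the answer suppresses the whole result, which is what defeats DNS-rebinding tricks that mix a public address with an internal one. A standalone sketch of that filter step, using the stdlib `ipaddress` module in place of Synapse's netaddr-based `check_against_blacklist` (the function name and example networks are illustrative):

```python
import ipaddress

# Example blacklist; Synapse reads this from federation_ip_range_blacklist.
BLACKLIST = [ipaddress.ip_network(n) for n in ("127.0.0.0/8", "10.0.0.0/8", "::1/128")]

def filter_resolution(addresses):
    """Return all addresses, or none if any resolved IP is blacklisted."""
    has_bad_ip = any(
        any(ipaddress.ip_address(a) in net for net in BLACKLIST) for a in addresses
    )
    # Reporting nothing (rather than dropping only the bad entries) means an
    # attacker cannot smuggle an internal IP alongside a public one.
    return [] if has_bad_ip else list(addresses)

assert filter_resolution(["93.184.216.34"]) == ["93.184.216.34"]
# One internal IP poisons the whole answer:
assert filter_resolution(["93.184.216.34", "10.0.0.5"]) == []
```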
@@ -160,7 +165,8 @@ class BlacklistingAgentWrapper(Agent):
             ip_address, self._ip_whitelist, self._ip_blacklist
         ):
             logger.info(
-                "Blocking access to %s because of blacklist" % (ip_address,)
+                "Blocking access to %s due to blacklist" %
+                (ip_address,)
             )
             e = SynapseError(403, "IP address blocked by IP blacklist entry")
             return defer.fail(Failure(e))
@@ -258,9 +264,6 @@ class SimpleHttpClient(object):
             uri (str): URI to query.
             data (bytes): Data to send in the request body, if applicable.
             headers (t.w.http_headers.Headers): Request headers.
-
-        Raises:
-            SynapseError: If the IP is blacklisted.
         """
         # A small wrapper around self.agent.request() so we can easily attach
         # counters to it
@@ -27,9 +27,11 @@ import treq
 from canonicaljson import encode_canonical_json
 from prometheus_client import Counter
 from signedjson.sign import sign_json
+from zope.interface import implementer
 
 from twisted.internet import defer, protocol
 from twisted.internet.error import DNSLookupError
+from twisted.internet.interfaces import IReactorPluggableNameResolver
 from twisted.internet.task import _EPSILON, Cooperator
 from twisted.web._newclient import ResponseDone
 from twisted.web.http_headers import Headers
@@ -44,6 +46,7 @@ from synapse.api.errors import (
     SynapseError,
 )
 from synapse.http import QuieterFileBodyProducer
+from synapse.http.client import BlacklistingAgentWrapper, IPBlacklistingResolver
 from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
 from synapse.util.async_helpers import timeout_deferred
 from synapse.util.logcontext import make_deferred_yieldable
@@ -172,19 +175,44 @@ class MatrixFederationHttpClient(object):
         self.hs = hs
         self.signing_key = hs.config.signing_key[0]
         self.server_name = hs.hostname
-        reactor = hs.get_reactor()
+
+        real_reactor = hs.get_reactor()
+
+        # We need to use a DNS resolver which filters out blacklisted IP
+        # addresses, to prevent DNS rebinding.
+        nameResolver = IPBlacklistingResolver(
+            real_reactor, None, hs.config.federation_ip_range_blacklist,
+        )
+
+        @implementer(IReactorPluggableNameResolver)
+        class Reactor(object):
+            def __getattr__(_self, attr):
+                if attr == "nameResolver":
+                    return nameResolver
+                else:
+                    return getattr(real_reactor, attr)
+
+        self.reactor = Reactor()
 
         self.agent = MatrixFederationAgent(
-            hs.get_reactor(),
+            self.reactor,
             tls_client_options_factory,
         )
+
+        # Use a BlacklistingAgentWrapper to prevent circumventing the IP
+        # blacklist via IP literals in server names
+        self.agent = BlacklistingAgentWrapper(
+            self.agent, self.reactor,
+            ip_blacklist=hs.config.federation_ip_range_blacklist,
+        )
+
         self.clock = hs.get_clock()
         self._store = hs.get_datastore()
         self.version_string_bytes = hs.version_string.encode('ascii')
         self.default_timeout = 60
 
         def schedule(x):
-            reactor.callLater(_EPSILON, x)
+            self.reactor.callLater(_EPSILON, x)
 
         self._cooperator = Cooperator(scheduler=schedule)
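The `Reactor` class introduced above is a plain attribute-forwarding proxy: `__getattr__` intercepts one attribute (`nameResolver`) and delegates every other lookup to the wrapped reactor, so the blacklisting resolver is swapped in without reimplementing the reactor interface. A minimal standalone sketch of the pattern, with illustrative class and attribute names rather than Twisted's real interfaces:

```python
class RealReactor(object):
    """Stand-in for the object being wrapped."""

    nameResolver = "system resolver"

    def call_later(self, delay, fn):
        return "scheduled %s in %ss" % (fn.__name__, delay)

def make_proxy(real, resolver):
    class Proxy(object):
        def __getattr__(_self, attr):
            # __getattr__ is only consulted for attributes not found on the
            # proxy itself, so every lookup funnels through here.
            if attr == "nameResolver":
                return resolver
            return getattr(real, attr)
    return Proxy()

proxy = make_proxy(RealReactor(), "filtering resolver")
assert proxy.nameResolver == "filtering resolver"
assert proxy.call_later(5, print) == "scheduled print in 5s"
```

The design choice here is composition over subclassing: the wrapped object's concrete type never matters, only the attributes callers actually touch.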
@@ -370,7 +398,7 @@ class MatrixFederationHttpClient(object):
                 request_deferred = timeout_deferred(
                     request_deferred,
                     timeout=_sec_timeout,
-                    reactor=self.hs.get_reactor(),
+                    reactor=self.reactor,
                 )
 
                 response = yield request_deferred
@@ -397,7 +425,7 @@ class MatrixFederationHttpClient(object):
                 d = timeout_deferred(
                     d,
                     timeout=_sec_timeout,
-                    reactor=self.hs.get_reactor(),
+                    reactor=self.reactor,
                 )
 
                 try:
@@ -586,7 +614,7 @@ class MatrixFederationHttpClient(object):
         )
 
         body = yield _handle_json_response(
-            self.hs.get_reactor(), self.default_timeout, request, response,
+            self.reactor, self.default_timeout, request, response,
         )
 
         defer.returnValue(body)
@@ -645,7 +673,7 @@ class MatrixFederationHttpClient(object):
             _sec_timeout = self.default_timeout
 
         body = yield _handle_json_response(
-            self.hs.get_reactor(), _sec_timeout, request, response,
+            self.reactor, _sec_timeout, request, response,
         )
         defer.returnValue(body)
 
@@ -704,7 +732,7 @@ class MatrixFederationHttpClient(object):
         )
 
         body = yield _handle_json_response(
-            self.hs.get_reactor(), self.default_timeout, request, response,
+            self.reactor, self.default_timeout, request, response,
         )
 
         defer.returnValue(body)
@@ -753,7 +781,7 @@ class MatrixFederationHttpClient(object):
         )
 
         body = yield _handle_json_response(
-            self.hs.get_reactor(), self.default_timeout, request, response,
+            self.reactor, self.default_timeout, request, response,
         )
         defer.returnValue(body)
 
@@ -801,7 +829,7 @@ class MatrixFederationHttpClient(object):
 
         try:
             d = _readBodyToFile(response, output_stream, max_size)
-            d.addTimeout(self.default_timeout, self.hs.get_reactor())
+            d.addTimeout(self.default_timeout, self.reactor)
             length = yield make_deferred_yieldable(d)
         except Exception as e:
             logger.warn(
@@ -31,6 +31,7 @@ from six.moves import urllib_parse as urlparse
 from canonicaljson import json
 
 from twisted.internet import defer
+from twisted.internet.error import DNSLookupError
 from twisted.web.resource import Resource
 from twisted.web.server import NOT_DONE_YET
 
@@ -328,9 +329,18 @@ class PreviewUrlResource(Resource):
             # handler will return a SynapseError to the client instead of
             # blank data or a 500.
             raise
+        except DNSLookupError:
+            # DNS lookup returned no results
+            # Note: This will also be the case if one of the resolved IP
+            # addresses is blacklisted
+            raise SynapseError(
+                502, "DNS resolution failure during URL preview generation",
+                Codes.UNKNOWN
+            )
         except Exception as e:
             # FIXME: pass through 404s and other error messages nicely
             logger.warn("Error downloading %s: %r", url, e)
 
             raise SynapseError(
                 500, "Failed to download content: %s" % (
                     traceback.format_exception_only(sys.exc_info()[0], e),
@@ -108,6 +108,7 @@ class FileStorageProviderBackend(StorageProvider):
     """
 
     def __init__(self, hs, config):
+        self.hs = hs
         self.cache_directory = hs.config.media_store_path
         self.base_directory = config
 
@@ -18,7 +18,6 @@ import synapse.server_notices.server_notices_sender
 import synapse.state
 import synapse.storage
 
-
 class HomeServer(object):
     @property
     def config(self) -> synapse.config.homeserver.HomeServerConfig:
@@ -118,7 +118,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
         defer.returnValue(count)
 
     def get_new_device_msgs_for_remote(
-        self, destination, last_stream_id, current_stream_id, limit=100
+        self, destination, last_stream_id, current_stream_id, limit
     ):
         """
         Args:
@@ -109,7 @@ class FilteringTestCase(unittest.TestCase):
                 "event_format": "client",
                 "event_fields": ["type", "content", "sender"],
             },
 
-
             # a single backslash should be permitted (though it is debatable whether
             # it should be permitted before anything other than `.`, and what that
             # actually means)
@@ -10,19 +10,19 @@ class TestRatelimiter(unittest.TestCase):
             key="test_id", time_now_s=0, rate_hz=0.1, burst_count=1
         )
         self.assertTrue(allowed)
-        self.assertEquals(10., time_allowed)
+        self.assertEquals(10.0, time_allowed)
 
         allowed, time_allowed = limiter.can_do_action(
             key="test_id", time_now_s=5, rate_hz=0.1, burst_count=1
         )
         self.assertFalse(allowed)
-        self.assertEquals(10., time_allowed)
+        self.assertEquals(10.0, time_allowed)
 
         allowed, time_allowed = limiter.can_do_action(
             key="test_id", time_now_s=10, rate_hz=0.1, burst_count=1
         )
         self.assertTrue(allowed)
-        self.assertEquals(20., time_allowed)
+        self.assertEquals(20.0, time_allowed)
 
     def test_pruning(self):
         limiter = Ratelimiter()
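The test above pins down the `Ratelimiter` contract: `can_do_action` returns an `(allowed, time_allowed)` pair, where `time_allowed` is when the next action becomes possible (`1 / rate_hz` seconds after an allowed action). A simplified per-key sketch that reproduces those three assertions, assuming `burst_count=1` and leaving out the token accounting and pruning of Synapse's real implementation:

```python
class SimpleRatelimiter(object):
    """Illustrative limiter matching the (allowed, time_allowed) shape above.

    Assumes burst_count=1; the real Ratelimiter also tracks a per-key token
    count so bursts can exceed one action.
    """

    def __init__(self):
        self._next_allowed = {}  # key -> earliest time the next action may run

    def can_do_action(self, key, time_now_s, rate_hz):
        next_allowed = self._next_allowed.get(key, 0)
        if time_now_s >= next_allowed:
            # Allowed: the next action becomes possible one period later.
            self._next_allowed[key] = time_now_s + 1.0 / rate_hz
            return True, self._next_allowed[key]
        return False, next_allowed

limiter = SimpleRatelimiter()
assert limiter.can_do_action("test_id", 0, 0.1) == (True, 10.0)
assert limiter.can_do_action("test_id", 5, 0.1) == (False, 10.0)
assert limiter.can_do_action("test_id", 10, 0.1) == (True, 20.0)
```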
@ -25,16 +25,18 @@ from tests.unittest import HomeserverTestCase
|
||||||
class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
||||||
def make_homeserver(self, reactor, clock):
|
def make_homeserver(self, reactor, clock):
|
||||||
hs = self.setup_test_homeserver(
|
hs = self.setup_test_homeserver(
|
||||||
http_client=None, homeserverToUse=FederationReaderServer,
|
http_client=None, homeserverToUse=FederationReaderServer
|
||||||
)
|
)
|
||||||
return hs
|
return hs
|
||||||
|
|
||||||
@parameterized.expand([
|
@parameterized.expand(
|
||||||
(["federation"], "auth_fail"),
|
[
|
||||||
([], "no_resource"),
|
(["federation"], "auth_fail"),
|
||||||
(["openid", "federation"], "auth_fail"),
|
([], "no_resource"),
|
||||||
(["openid"], "auth_fail"),
|
(["openid", "federation"], "auth_fail"),
|
||||||
])
|
(["openid"], "auth_fail"),
|
||||||
|
]
|
||||||
|
)
|
||||||
def test_openid_listener(self, names, expectation):
|
def test_openid_listener(self, names, expectation):
|
||||||
"""
|
"""
|
||||||
Test different openid listener configurations.
|
Test different openid listener configurations.
|
||||||
|
@ -53,17 +55,14 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
||||||
# Grab the resource from the site that was told to listen
|
# Grab the resource from the site that was told to listen
|
||||||
site = self.reactor.tcpServers[0][1]
|
site = self.reactor.tcpServers[0][1]
|
||||||
try:
|
try:
|
||||||
self.resource = (
|
self.resource = site.resource.children[b"_matrix"].children[b"federation"]
|
||||||
site.resource.children[b"_matrix"].children[b"federation"]
|
|
||||||
)
|
|
||||||
except KeyError:
|
except KeyError:
|
||||||
if expectation == "no_resource":
|
if expectation == "no_resource":
|
||||||
return
|
return
|
||||||
raise
|
raise
|
||||||
|
|
||||||
request, channel = self.make_request(
|
request, channel = self.make_request(
|
||||||
"GET",
|
"GET", "/_matrix/federation/v1/openid/userinfo"
|
||||||
"/_matrix/federation/v1/openid/userinfo",
|
|
||||||
)
|
)
|
||||||
self.render(request)
|
self.render(request)
|
||||||
|
|
||||||
|
@ -74,16 +73,18 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
||||||
class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
|
class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
|
||||||
def make_homeserver(self, reactor, clock):
|
def make_homeserver(self, reactor, clock):
|
||||||
hs = self.setup_test_homeserver(
|
hs = self.setup_test_homeserver(
|
||||||
http_client=None, homeserverToUse=SynapseHomeServer,
|
http_client=None, homeserverToUse=SynapseHomeServer
|
||||||
)
|
)
|
||||||
return hs
|
return hs
|
||||||
|
|
||||||
@parameterized.expand([
|
@parameterized.expand(
|
||||||
(["federation"], "auth_fail"),
|
[
|
||||||
([], "no_resource"),
|
(["federation"], "auth_fail"),
|
||||||
(["openid", "federation"], "auth_fail"),
|
([], "no_resource"),
|
||||||
(["openid"], "auth_fail"),
|
(["openid", "federation"], "auth_fail"),
|
||||||
])
|
(["openid"], "auth_fail"),
|
||||||
|
]
|
||||||
|
)
|
||||||
def test_openid_listener(self, names, expectation):
|
def test_openid_listener(self, names, expectation):
|
||||||
"""
|
"""
|
||||||
Test different openid listener configurations.
|
Test different openid listener configurations.
|
||||||
|
@ -102,17 +103,14 @@ class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
|
||||||
# Grab the resource from the site that was told to listen
|
# Grab the resource from the site that was told to listen
|
||||||
site = self.reactor.tcpServers[0][1]
|
site = self.reactor.tcpServers[0][1]
|
||||||
try:
|
try:
|
||||||
self.resource = (
|
self.resource = site.resource.children[b"_matrix"].children[b"federation"]
|
||||||
site.resource.children[b"_matrix"].children[b"federation"]
|
|
||||||
)
|
|
||||||
except KeyError:
|
except KeyError:
|
||||||
if expectation == "no_resource":
|
if expectation == "no_resource":
|
||||||
return
|
return
|
||||||
raise
|
raise
|
||||||
|
|
||||||
request, channel = self.make_request(
|
request, channel = self.make_request(
|
||||||
"GET",
|
"GET", "/_matrix/federation/v1/openid/userinfo"
|
||||||
"/_matrix/federation/v1/openid/userinfo",
|
|
||||||
)
|
)
|
||||||
self.render(request)
|
self.render(request)
|
||||||
|
|
||||||
|
|
|
@ -45,13 +45,7 @@ class ConfigGenerationTestCase(unittest.TestCase):
|
||||||
)
|
)
|
||||||
|
|
||||||
self.assertSetEqual(
|
self.assertSetEqual(
|
||||||
set(
|
set(["homeserver.yaml", "lemurs.win.log.config", "lemurs.win.signing.key"]),
|
||||||
[
|
|
||||||
"homeserver.yaml",
|
|
||||||
"lemurs.win.log.config",
|
|
||||||
"lemurs.win.signing.key",
|
|
||||||
]
|
|
||||||
),
|
|
||||||
set(os.listdir(self.dir)),
|
set(os.listdir(self.dir)),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
|
@@ -22,7 +22,8 @@ from tests import unittest

 class RoomDirectoryConfigTestCase(unittest.TestCase):
     def test_alias_creation_acl(self):
-        config = yaml.safe_load("""
+        config = yaml.safe_load(
+            """
         alias_creation_rules:
         - user_id: "*bob*"
           alias: "*"
@@ -38,43 +39,49 @@ class RoomDirectoryConfigTestCase(unittest.TestCase):
           action: "allow"

         room_list_publication_rules: []
-        """)
+        """
+        )

         rd_config = RoomDirectoryConfig()
         rd_config.read_config(config)

-        self.assertFalse(rd_config.is_alias_creation_allowed(
-            user_id="@bob:example.com",
-            room_id="!test",
-            alias="#test:example.com",
-        ))
+        self.assertFalse(
+            rd_config.is_alias_creation_allowed(
+                user_id="@bob:example.com", room_id="!test", alias="#test:example.com"
+            )
+        )

-        self.assertTrue(rd_config.is_alias_creation_allowed(
-            user_id="@test:example.com",
-            room_id="!test",
-            alias="#unofficial_st:example.com",
-        ))
+        self.assertTrue(
+            rd_config.is_alias_creation_allowed(
+                user_id="@test:example.com",
+                room_id="!test",
+                alias="#unofficial_st:example.com",
+            )
+        )

-        self.assertTrue(rd_config.is_alias_creation_allowed(
-            user_id="@foobar:example.com",
-            room_id="!test",
-            alias="#test:example.com",
-        ))
+        self.assertTrue(
+            rd_config.is_alias_creation_allowed(
+                user_id="@foobar:example.com",
+                room_id="!test",
+                alias="#test:example.com",
+            )
+        )

-        self.assertTrue(rd_config.is_alias_creation_allowed(
-            user_id="@gah:example.com",
-            room_id="!test",
-            alias="#goo:example.com",
-        ))
+        self.assertTrue(
+            rd_config.is_alias_creation_allowed(
+                user_id="@gah:example.com", room_id="!test", alias="#goo:example.com"
+            )
+        )

-        self.assertFalse(rd_config.is_alias_creation_allowed(
-            user_id="@test:example.com",
-            room_id="!test",
-            alias="#test:example.com",
-        ))
+        self.assertFalse(
+            rd_config.is_alias_creation_allowed(
+                user_id="@test:example.com", room_id="!test", alias="#test:example.com"
+            )
+        )

     def test_room_publish_acl(self):
-        config = yaml.safe_load("""
+        config = yaml.safe_load(
+            """
         alias_creation_rules: []

         room_list_publication_rules:
@@ -92,55 +99,66 @@ class RoomDirectoryConfigTestCase(unittest.TestCase):
           action: "allow"
         - room_id: "!test-deny"
           action: "deny"
-        """)
+        """
+        )

         rd_config = RoomDirectoryConfig()
         rd_config.read_config(config)

-        self.assertFalse(rd_config.is_publishing_room_allowed(
-            user_id="@bob:example.com",
-            room_id="!test",
-            aliases=["#test:example.com"],
-        ))
+        self.assertFalse(
+            rd_config.is_publishing_room_allowed(
+                user_id="@bob:example.com",
+                room_id="!test",
+                aliases=["#test:example.com"],
+            )
+        )

-        self.assertTrue(rd_config.is_publishing_room_allowed(
-            user_id="@test:example.com",
-            room_id="!test",
-            aliases=["#unofficial_st:example.com"],
-        ))
+        self.assertTrue(
+            rd_config.is_publishing_room_allowed(
+                user_id="@test:example.com",
+                room_id="!test",
+                aliases=["#unofficial_st:example.com"],
+            )
+        )

-        self.assertTrue(rd_config.is_publishing_room_allowed(
-            user_id="@foobar:example.com",
-            room_id="!test",
-            aliases=[],
-        ))
+        self.assertTrue(
+            rd_config.is_publishing_room_allowed(
+                user_id="@foobar:example.com", room_id="!test", aliases=[]
+            )
+        )

-        self.assertTrue(rd_config.is_publishing_room_allowed(
-            user_id="@gah:example.com",
-            room_id="!test",
-            aliases=["#goo:example.com"],
-        ))
+        self.assertTrue(
+            rd_config.is_publishing_room_allowed(
+                user_id="@gah:example.com",
+                room_id="!test",
+                aliases=["#goo:example.com"],
+            )
+        )

-        self.assertFalse(rd_config.is_publishing_room_allowed(
-            user_id="@test:example.com",
-            room_id="!test",
-            aliases=["#test:example.com"],
-        ))
+        self.assertFalse(
+            rd_config.is_publishing_room_allowed(
+                user_id="@test:example.com",
+                room_id="!test",
+                aliases=["#test:example.com"],
+            )
+        )

-        self.assertTrue(rd_config.is_publishing_room_allowed(
-            user_id="@foobar:example.com",
-            room_id="!test-deny",
-            aliases=[],
-        ))
+        self.assertTrue(
+            rd_config.is_publishing_room_allowed(
+                user_id="@foobar:example.com", room_id="!test-deny", aliases=[]
+            )
+        )

-        self.assertFalse(rd_config.is_publishing_room_allowed(
-            user_id="@gah:example.com",
-            room_id="!test-deny",
-            aliases=[],
-        ))
+        self.assertFalse(
+            rd_config.is_publishing_room_allowed(
+                user_id="@gah:example.com", room_id="!test-deny", aliases=[]
+            )
+        )

-        self.assertTrue(rd_config.is_publishing_room_allowed(
-            user_id="@test:example.com",
-            room_id="!test",
-            aliases=["#unofficial_st:example.com", "#blah:example.com"],
-        ))
+        self.assertTrue(
+            rd_config.is_publishing_room_allowed(
+                user_id="@test:example.com",
+                room_id="!test",
+                aliases=["#unofficial_st:example.com", "#blah:example.com"],
+            )
+        )
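The alias and room-list rules exercised above match user IDs and aliases against glob patterns and apply the first matching rule's action. A rough, self-contained sketch of that matching using `fnmatch` — the two rules here are our simplification, not the full config from the test:

```python
from fnmatch import fnmatchcase

# Hypothetical rule list: deny anyone matching *bob*, allow unofficial aliases.
rules = [
    {"user_id": "*bob*", "alias": "*", "action": "deny"},
    {"user_id": "*", "alias": "#unofficial_*", "action": "allow"},
]

def is_alias_creation_allowed(user_id, alias):
    # First matching rule wins; no match means deny (our assumption).
    for rule in rules:
        if fnmatchcase(user_id, rule["user_id"]) and fnmatchcase(alias, rule["alias"]):
            return rule["action"] == "allow"
    return False

print(is_alias_creation_allowed("@bob:example.com", "#test:example.com"))
```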
@@ -19,7 +19,6 @@ from tests import unittest


 class ServerConfigTestCase(unittest.TestCase):
-
     def test_is_threepid_reserved(self):
         user1 = {'medium': 'email', 'address': 'user1@example.com'}
         user2 = {'medium': 'email', 'address': 'user2@example.com'}
@@ -26,7 +26,6 @@ class TestConfig(TlsConfig):


 class TLSConfigTests(TestCase):
-
     def test_warn_self_signed(self):
         """
         Synapse will give a warning when it loads a self-signed certificate.
@@ -34,7 +33,8 @@ class TLSConfigTests(TestCase):
         config_dir = self.mktemp()
         os.mkdir(config_dir)
         with open(os.path.join(config_dir, "cert.pem"), 'w') as f:
-            f.write("""-----BEGIN CERTIFICATE-----
+            f.write(
+                """-----BEGIN CERTIFICATE-----
 MIID6DCCAtACAws9CjANBgkqhkiG9w0BAQUFADCBtzELMAkGA1UEBhMCVFIxDzAN
 BgNVBAgMBsOHb3J1bTEUMBIGA1UEBwwLQmHFn21ha8OnxLExEjAQBgNVBAMMCWxv
 Y2FsaG9zdDEcMBoGA1UECgwTVHdpc3RlZCBNYXRyaXggTGFiczEkMCIGA1UECwwb
@@ -56,11 +56,12 @@ I8OtG1xGwcok53lyDuuUUDexnK4O5BkjKiVlNPg4HPim5Kuj2hRNFfNt/F2BVIlj
 iZupikC5MT1LQaRwidkSNxCku1TfAyueiBwhLnFwTmIGNnhuDCutEVAD9kFmcJN2
 SznugAcPk4doX2+rL+ila+ThqgPzIkwTUHtnmjI0TI6xsDUlXz5S3UyudrE2Qsfz
 s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
------END CERTIFICATE-----""")
+-----END CERTIFICATE-----"""
+            )

         config = {
             "tls_certificate_path": os.path.join(config_dir, "cert.pem"),
-            "tls_fingerprints": []
+            "tls_fingerprints": [],
         }

         t = TestConfig()
@@ -75,5 +76,5 @@ s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
                 "Self-signed TLS certificates will not be accepted by "
                 "Synapse 1.0. Please either provide a valid certificate, "
                 "or use Synapse's ACME support to provision one."
-            )
+            ),
         )
@@ -169,7 +169,7 @@ class KeyringTestCase(unittest.HomeserverTestCase):
         self.http_client.post_json.return_value = defer.Deferred()

         res_deferreds_2 = kr.verify_json_objects_for_server(
-            [("server10", json1, )]
+            [("server10", json1)]
         )
         res_deferreds_2[0].addBoth(self.check_context, None)
         yield logcontext.make_deferred_yieldable(res_deferreds_2[0])
@@ -345,6 +345,7 @@ def _verify_json_for_server(keyring, server_name, json_object):
    """thin wrapper around verify_json_for_server which makes sure it is wrapped
    with the patched defer.inlineCallbacks.
    """
+
    @defer.inlineCallbacks
    def v():
        rv1 = yield keyring.verify_json_for_server(server_name, json_object)
@@ -33,11 +33,15 @@ class FederationSenderTestCases(HomeserverTestCase):
         mock_state_handler = self.hs.get_state_handler()
         mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]

-        mock_send_transaction = self.hs.get_federation_transport_client().send_transaction
+        mock_send_transaction = (
+            self.hs.get_federation_transport_client().send_transaction
+        )
         mock_send_transaction.return_value = defer.succeed({})

         sender = self.hs.get_federation_sender()
-        receipt = ReadReceipt("room_id", "m.read", "user_id", ["event_id"], {"ts": 1234})
+        receipt = ReadReceipt(
+            "room_id", "m.read", "user_id", ["event_id"], {"ts": 1234}
+        )
         self.successResultOf(sender.send_read_receipt(receipt))

         self.pump()
@@ -46,21 +50,24 @@ class FederationSenderTestCases(HomeserverTestCase):
         mock_send_transaction.assert_called_once()
         json_cb = mock_send_transaction.call_args[0][1]
         data = json_cb()
-        self.assertEqual(data['edus'], [
-            {
-                'edu_type': 'm.receipt',
-                'content': {
-                    'room_id': {
-                        'm.read': {
-                            'user_id': {
-                                'event_ids': ['event_id'],
-                                'data': {'ts': 1234},
-                            },
-                        },
-                    },
-                },
-            },
-        ])
+        self.assertEqual(
+            data['edus'],
+            [
+                {
+                    'edu_type': 'm.receipt',
+                    'content': {
+                        'room_id': {
+                            'm.read': {
+                                'user_id': {
+                                    'event_ids': ['event_id'],
+                                    'data': {'ts': 1234},
+                                }
+                            }
+                        }
+                    },
+                }
+            ],
+        )

     def test_send_receipts_with_backoff(self):
         """Send two receipts in quick succession; the second should be flushed, but
@@ -68,11 +75,15 @@ class FederationSenderTestCases(HomeserverTestCase):
         mock_state_handler = self.hs.get_state_handler()
         mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]

-        mock_send_transaction = self.hs.get_federation_transport_client().send_transaction
+        mock_send_transaction = (
+            self.hs.get_federation_transport_client().send_transaction
+        )
         mock_send_transaction.return_value = defer.succeed({})

         sender = self.hs.get_federation_sender()
-        receipt = ReadReceipt("room_id", "m.read", "user_id", ["event_id"], {"ts": 1234})
+        receipt = ReadReceipt(
+            "room_id", "m.read", "user_id", ["event_id"], {"ts": 1234}
+        )
         self.successResultOf(sender.send_read_receipt(receipt))

         self.pump()
@@ -81,25 +92,30 @@ class FederationSenderTestCases(HomeserverTestCase):
         mock_send_transaction.assert_called_once()
         json_cb = mock_send_transaction.call_args[0][1]
         data = json_cb()
-        self.assertEqual(data['edus'], [
-            {
-                'edu_type': 'm.receipt',
-                'content': {
-                    'room_id': {
-                        'm.read': {
-                            'user_id': {
-                                'event_ids': ['event_id'],
-                                'data': {'ts': 1234},
-                            },
-                        },
-                    },
-                },
-            },
-        ])
+        self.assertEqual(
+            data['edus'],
+            [
+                {
+                    'edu_type': 'm.receipt',
+                    'content': {
+                        'room_id': {
+                            'm.read': {
+                                'user_id': {
+                                    'event_ids': ['event_id'],
+                                    'data': {'ts': 1234},
+                                }
+                            }
+                        }
+                    },
+                }
+            ],
+        )
         mock_send_transaction.reset_mock()

         # send the second RR
-        receipt = ReadReceipt("room_id", "m.read", "user_id", ["other_id"], {"ts": 1234})
+        receipt = ReadReceipt(
+            "room_id", "m.read", "user_id", ["other_id"], {"ts": 1234}
+        )
         self.successResultOf(sender.send_read_receipt(receipt))
         self.pump()
         mock_send_transaction.assert_not_called()
@@ -111,18 +127,21 @@ class FederationSenderTestCases(HomeserverTestCase):
         mock_send_transaction.assert_called_once()
         json_cb = mock_send_transaction.call_args[0][1]
         data = json_cb()
-        self.assertEqual(data['edus'], [
-            {
-                'edu_type': 'm.receipt',
-                'content': {
-                    'room_id': {
-                        'm.read': {
-                            'user_id': {
-                                'event_ids': ['other_id'],
-                                'data': {'ts': 1234},
-                            },
-                        },
-                    },
-                },
-            },
-        ])
+        self.assertEqual(
+            data['edus'],
+            [
+                {
+                    'edu_type': 'm.receipt',
+                    'content': {
+                        'room_id': {
+                            'm.read': {
+                                'user_id': {
+                                    'event_ids': ['other_id'],
+                                    'data': {'ts': 1234},
+                                }
+                            }
+                        }
+                    },
+                }
+            ],
+        )
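The nested dict these federation-sender assertions walk is the `m.receipt` EDU payload, keyed room → receipt type → user. A minimal sketch of that shape, with the field values copied from the hunks above (the navigation helper is ours):

```python
# Shape of the m.receipt EDU asserted in the tests above:
# content is keyed by room ID, then receipt type, then user ID.
edu = {
    'edu_type': 'm.receipt',
    'content': {
        'room_id': {
            'm.read': {
                'user_id': {'event_ids': ['event_id'], 'data': {'ts': 1234}}
            }
        }
    },
}

# Navigate room -> receipt type -> user to recover the acknowledged events.
events = edu['content']['room_id']['m.read']['user_id']['event_ids']
print(events)
```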
@@ -115,11 +115,7 @@ class TestCreateAliasACL(unittest.HomeserverTestCase):
         # We cheekily override the config to add custom alias creation rules
         config = {}
         config["alias_creation_rules"] = [
-            {
-                "user_id": "*",
-                "alias": "#unofficial_*",
-                "action": "allow",
-            }
+            {"user_id": "*", "alias": "#unofficial_*", "action": "allow"}
         ]
         config["room_list_publication_rules"] = []

@@ -162,9 +158,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
         room_id = self.helper.create_room_as(self.user_id)

         request, channel = self.make_request(
-            "PUT",
-            b"directory/list/room/%s" % (room_id.encode('ascii'),),
-            b'{}',
+            "PUT", b"directory/list/room/%s" % (room_id.encode('ascii'),), b'{}'
         )
         self.render(request)
         self.assertEquals(200, channel.code, channel.result)
@@ -179,10 +173,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
         self.directory_handler.enable_room_list_search = True

         # Room list is enabled so we should get some results
-        request, channel = self.make_request(
-            "GET",
-            b"publicRooms",
-        )
+        request, channel = self.make_request("GET", b"publicRooms")
         self.render(request)
         self.assertEquals(200, channel.code, channel.result)
         self.assertTrue(len(channel.json_body["chunk"]) > 0)
@@ -191,10 +182,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
         self.directory_handler.enable_room_list_search = False

         # Room list disabled so we should get no results
-        request, channel = self.make_request(
-            "GET",
-            b"publicRooms",
-        )
+        request, channel = self.make_request("GET", b"publicRooms")
         self.render(request)
         self.assertEquals(200, channel.code, channel.result)
         self.assertTrue(len(channel.json_body["chunk"]) == 0)
@@ -202,9 +190,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
         # Room list disabled so we shouldn't be allowed to publish rooms
         room_id = self.helper.create_room_as(self.user_id)
         request, channel = self.make_request(
-            "PUT",
-            b"directory/list/room/%s" % (room_id.encode('ascii'),),
-            b'{}',
+            "PUT", b"directory/list/room/%s" % (room_id.encode('ascii'),), b'{}'
         )
         self.render(request)
         self.assertEquals(403, channel.code, channel.result)
@@ -36,7 +36,7 @@ room_keys = {
                     "first_message_index": 1,
                     "forwarded_count": 1,
                     "is_verified": False,
-                    "session_data": "SSBBTSBBIEZJU0gK"
+                    "session_data": "SSBBTSBBIEZJU0gK",
                 }
             }
         }
@@ -47,15 +47,13 @@ room_keys = {
 class E2eRoomKeysHandlerTestCase(unittest.TestCase):
     def __init__(self, *args, **kwargs):
         super(E2eRoomKeysHandlerTestCase, self).__init__(*args, **kwargs)
         self.hs = None  # type: synapse.server.HomeServer
         self.handler = None  # type: synapse.handlers.e2e_keys.E2eRoomKeysHandler

     @defer.inlineCallbacks
     def setUp(self):
         self.hs = yield utils.setup_test_homeserver(
-            self.addCleanup,
-            handlers=None,
-            replication_layer=mock.Mock(),
+            self.addCleanup, handlers=None, replication_layer=mock.Mock()
         )
         self.handler = synapse.handlers.e2e_room_keys.E2eRoomKeysHandler(self.hs)
         self.local_user = "@boris:" + self.hs.hostname
@@ -88,67 +86,86 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
     def test_create_version(self):
         """Check that we can create and then retrieve versions.
         """
-        res = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        res = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(res, "1")

         # check we can retrieve it as the current version
         res = yield self.handler.get_version_info(self.local_user)
-        self.assertDictEqual(res, {
-            "version": "1",
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        self.assertDictEqual(
+            res,
+            {
+                "version": "1",
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "first_version_auth_data",
+            },
+        )

         # check we can retrieve it as a specific version
         res = yield self.handler.get_version_info(self.local_user, "1")
-        self.assertDictEqual(res, {
-            "version": "1",
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        self.assertDictEqual(
+            res,
+            {
+                "version": "1",
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "first_version_auth_data",
+            },
+        )

         # upload a new one...
-        res = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "second_version_auth_data",
-        })
+        res = yield self.handler.create_version(
+            self.local_user,
+            {
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "second_version_auth_data",
+            },
+        )
         self.assertEqual(res, "2")

         # check we can retrieve it as the current version
         res = yield self.handler.get_version_info(self.local_user)
-        self.assertDictEqual(res, {
-            "version": "2",
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "second_version_auth_data",
-        })
+        self.assertDictEqual(
+            res,
+            {
+                "version": "2",
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "second_version_auth_data",
+            },
+        )

     @defer.inlineCallbacks
     def test_update_version(self):
         """Check that we can update versions.
         """
-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(version, "1")

-        res = yield self.handler.update_version(self.local_user, version, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "revised_first_version_auth_data",
-            "version": version
-        })
+        res = yield self.handler.update_version(
+            self.local_user,
+            version,
+            {
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "revised_first_version_auth_data",
+                "version": version,
+            },
+        )
         self.assertDictEqual(res, {})

         # check we can retrieve it as the current version
         res = yield self.handler.get_version_info(self.local_user)
-        self.assertDictEqual(res, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "revised_first_version_auth_data",
-            "version": version
-        })
+        self.assertDictEqual(
+            res,
+            {
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "revised_first_version_auth_data",
+                "version": version,
+            },
+        )

     @defer.inlineCallbacks
     def test_update_missing_version(self):
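The version lifecycle these tests exercise — `create_version` returns "1", a second create returns "2", and `get_version_info` with no version returns the latest — can be sketched with a toy in-memory store. `ToyBackupStore` is our stand-in for illustration, not Synapse's handler:

```python
class ToyBackupStore:
    """In-memory stand-in for the backup-version bookkeeping under test."""

    def __init__(self):
        self.versions = {}

    def create_version(self, info):
        # Versions are stringified counters, matching the "1"/"2" the tests expect.
        version = str(len(self.versions) + 1)
        self.versions[version] = dict(info, version=version)
        return version

    def get_version_info(self, version=None):
        # No argument means "current" (i.e. the most recently created) version.
        if version is None:
            version = str(len(self.versions))
        return self.versions[version]

store = ToyBackupStore()
first = store.create_version({"algorithm": "m.megolm_backup.v1"})
second = store.create_version({"algorithm": "m.megolm_backup.v1"})
print(first, second, store.get_version_info()["version"])
```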
@@ -156,11 +173,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         """
         res = None
         try:
-            yield self.handler.update_version(self.local_user, "1", {
-                "algorithm": "m.megolm_backup.v1",
-                "auth_data": "revised_first_version_auth_data",
-                "version": "1"
-            })
+            yield self.handler.update_version(
+                self.local_user,
+                "1",
+                {
+                    "algorithm": "m.megolm_backup.v1",
+                    "auth_data": "revised_first_version_auth_data",
+                    "version": "1",
+                },
+            )
         except errors.SynapseError as e:
             res = e.code
         self.assertEqual(res, 404)
@@ -170,29 +191,37 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         """Check that we get a 400 if the version in the body is missing or
         doesn't match
         """
-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(version, "1")

         res = None
         try:
-            yield self.handler.update_version(self.local_user, version, {
-                "algorithm": "m.megolm_backup.v1",
-                "auth_data": "revised_first_version_auth_data"
-            })
+            yield self.handler.update_version(
+                self.local_user,
+                version,
+                {
+                    "algorithm": "m.megolm_backup.v1",
+                    "auth_data": "revised_first_version_auth_data",
+                },
+            )
         except errors.SynapseError as e:
             res = e.code
         self.assertEqual(res, 400)

         res = None
         try:
-            yield self.handler.update_version(self.local_user, version, {
-                "algorithm": "m.megolm_backup.v1",
-                "auth_data": "revised_first_version_auth_data",
-                "version": "incorrect"
-            })
+            yield self.handler.update_version(
+                self.local_user,
+                version,
+                {
+                    "algorithm": "m.megolm_backup.v1",
+                    "auth_data": "revised_first_version_auth_data",
+                    "version": "incorrect",
+                },
+            )
         except errors.SynapseError as e:
             res = e.code
         self.assertEqual(res, 400)
@@ -223,10 +252,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
     def test_delete_version(self):
         """Check that we can create and then delete versions.
         """
-        res = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        res = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(res, "1")

         # check we can delete it
@@ -255,16 +284,14 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
     def test_get_missing_room_keys(self):
         """Check we get an empty response from an empty backup
         """
-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(version, "1")

         res = yield self.handler.get_room_keys(self.local_user, version)
-        self.assertDictEqual(res, {
-            "rooms": {}
-        })
+        self.assertDictEqual(res, {"rooms": {}})

         # TODO: test the locking semantics when uploading room_keys,
         # although this is probably best done in sytest
@@ -275,7 +302,9 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         """
         res = None
         try:
-            yield self.handler.upload_room_keys(self.local_user, "no_version", room_keys)
+            yield self.handler.upload_room_keys(
+                self.local_user, "no_version", room_keys
+            )
         except errors.SynapseError as e:
             res = e.code
         self.assertEqual(res, 404)
@@ -285,10 +314,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         """Check that we get a 404 on uploading keys when an nonexistent version
         is specified
         """
-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(version, "1")

         res = None
@@ -304,16 +333,19 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
     def test_upload_room_keys_wrong_version(self):
         """Check that we get a 403 on uploading keys for an old version
         """
-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(version, "1")

-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "second_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {
+                "algorithm": "m.megolm_backup.v1",
+                "auth_data": "second_version_auth_data",
+            },
+        )
         self.assertEqual(version, "2")

         res = None
@@ -327,10 +359,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
     def test_upload_room_keys_insert(self):
         """Check that we can insert and retrieve keys for a session
         """
-        version = yield self.handler.create_version(self.local_user, {
-            "algorithm": "m.megolm_backup.v1",
-            "auth_data": "first_version_auth_data",
-        })
+        version = yield self.handler.create_version(
+            self.local_user,
+            {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
+        )
         self.assertEqual(version, "1")

         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -340,18 +372,13 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):

         # check getting room_keys for a given room
         res = yield self.handler.get_room_keys(
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org"
|
|
||||||
)
|
)
|
||||||
self.assertDictEqual(res, room_keys)
|
self.assertDictEqual(res, room_keys)
|
||||||
|
|
||||||
# check getting room_keys for a given session_id
|
# check getting room_keys for a given session_id
|
||||||
res = yield self.handler.get_room_keys(
|
res = yield self.handler.get_room_keys(
|
||||||
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org",
|
|
||||||
session_id="c0ff33",
|
|
||||||
)
|
)
|
||||||
self.assertDictEqual(res, room_keys)
|
self.assertDictEqual(res, room_keys)
|
||||||
|
|
||||||
|
@ -359,10 +386,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||||
def test_upload_room_keys_merge(self):
|
def test_upload_room_keys_merge(self):
|
||||||
"""Check that we can upload a new room_key for an existing session and
|
"""Check that we can upload a new room_key for an existing session and
|
||||||
have it correctly merged"""
|
have it correctly merged"""
|
||||||
version = yield self.handler.create_version(self.local_user, {
|
version = yield self.handler.create_version(
|
||||||
"algorithm": "m.megolm_backup.v1",
|
self.local_user,
|
||||||
"auth_data": "first_version_auth_data",
|
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||||
})
|
)
|
||||||
self.assertEqual(version, "1")
|
self.assertEqual(version, "1")
|
||||||
|
|
||||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||||
|
@ -378,7 +405,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||||
self.assertEqual(
|
self.assertEqual(
|
||||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
||||||
"SSBBTSBBIEZJU0gK"
|
"SSBBTSBBIEZJU0gK",
|
||||||
)
|
)
|
||||||
|
|
||||||
# test that marking the session as verified however /does/ replace it
|
# test that marking the session as verified however /does/ replace it
|
||||||
|
@ -387,8 +414,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||||
|
|
||||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||||
self.assertEqual(
|
self.assertEqual(
|
||||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'], "new"
|
||||||
"new"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# test that a session with a higher forwarded_count doesn't replace one
|
# test that a session with a higher forwarded_count doesn't replace one
|
||||||
|
@ -399,8 +425,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||||
|
|
||||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||||
self.assertEqual(
|
self.assertEqual(
|
||||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'], "new"
|
||||||
"new"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# TODO: check edge cases as well as the common variations here
|
# TODO: check edge cases as well as the common variations here
|
||||||
|
@ -409,56 +434,36 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||||
def test_delete_room_keys(self):
|
def test_delete_room_keys(self):
|
||||||
"""Check that we can insert and delete keys for a session
|
"""Check that we can insert and delete keys for a session
|
||||||
"""
|
"""
|
||||||
version = yield self.handler.create_version(self.local_user, {
|
version = yield self.handler.create_version(
|
||||||
"algorithm": "m.megolm_backup.v1",
|
self.local_user,
|
||||||
"auth_data": "first_version_auth_data",
|
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||||
})
|
)
|
||||||
self.assertEqual(version, "1")
|
self.assertEqual(version, "1")
|
||||||
|
|
||||||
# check for bulk-delete
|
# check for bulk-delete
|
||||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||||
yield self.handler.delete_room_keys(self.local_user, version)
|
yield self.handler.delete_room_keys(self.local_user, version)
|
||||||
res = yield self.handler.get_room_keys(
|
res = yield self.handler.get_room_keys(
|
||||||
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org",
|
|
||||||
session_id="c0ff33",
|
|
||||||
)
|
)
|
||||||
self.assertDictEqual(res, {
|
self.assertDictEqual(res, {"rooms": {}})
|
||||||
"rooms": {}
|
|
||||||
})
|
|
||||||
|
|
||||||
# check for bulk-delete per room
|
# check for bulk-delete per room
|
||||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||||
yield self.handler.delete_room_keys(
|
yield self.handler.delete_room_keys(
|
||||||
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org",
|
|
||||||
)
|
)
|
||||||
res = yield self.handler.get_room_keys(
|
res = yield self.handler.get_room_keys(
|
||||||
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org",
|
|
||||||
session_id="c0ff33",
|
|
||||||
)
|
)
|
||||||
self.assertDictEqual(res, {
|
self.assertDictEqual(res, {"rooms": {}})
|
||||||
"rooms": {}
|
|
||||||
})
|
|
||||||
|
|
||||||
# check for bulk-delete per session
|
# check for bulk-delete per session
|
||||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||||
yield self.handler.delete_room_keys(
|
yield self.handler.delete_room_keys(
|
||||||
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org",
|
|
||||||
session_id="c0ff33",
|
|
||||||
)
|
)
|
||||||
res = yield self.handler.get_room_keys(
|
res = yield self.handler.get_room_keys(
|
||||||
self.local_user,
|
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||||
version,
|
|
||||||
room_id="!abc:matrix.org",
|
|
||||||
session_id="c0ff33",
|
|
||||||
)
|
)
|
||||||
self.assertDictEqual(res, {
|
self.assertDictEqual(res, {"rooms": {}})
|
||||||
"rooms": {}
|
|
||||||
})
|
|
||||||
|
|
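The diff above is a purely mechanical reformat; the behaviour under test (upload keys, fetch them filtered by room or session, bulk-delete at each granularity) is unchanged. As an illustrative sketch of that behaviour, here is a hypothetical, minimal in-memory store; `RoomKeysStore` and its method names are assumptions for illustration only, not Synapse's actual asynchronous, database-backed `E2eRoomKeysHandler`:

```python
# Hypothetical in-memory sketch of the room-keys backup behaviour the
# tests exercise: merge-on-upload, filtered get, and bulk delete.
class RoomKeysStore:
    def __init__(self):
        # room_id -> {"sessions": {session_id: session_data}}
        self.rooms = {}

    def upload(self, room_keys):
        # Merge uploaded sessions into the store, room by room.
        for room_id, room in room_keys["rooms"].items():
            sessions = self.rooms.setdefault(room_id, {"sessions": {}})["sessions"]
            sessions.update(room["sessions"])

    def get(self, room_id=None, session_id=None):
        # Narrow the result to one room, and optionally one session;
        # an empty result is {"rooms": {}}, as asserted in the tests.
        rooms = self.rooms
        if room_id is not None:
            rooms = {room_id: rooms[room_id]} if room_id in rooms else {}
            if session_id is not None and rooms:
                sessions = rooms[room_id]["sessions"]
                rooms = (
                    {room_id: {"sessions": {session_id: sessions[session_id]}}}
                    if session_id in sessions
                    else {}
                )
        return {"rooms": rooms}

    def delete(self, room_id=None, session_id=None):
        # Bulk-delete everything, one room, or one session.
        if room_id is None:
            self.rooms.clear()
        elif session_id is None:
            self.rooms.pop(room_id, None)
        else:
            self.rooms.get(room_id, {"sessions": {}})["sessions"].pop(session_id, None)


store = RoomKeysStore()
keys = {"rooms": {"!abc:matrix.org": {"sessions": {"c0ff33": {"session_data": "x"}}}}}
store.upload(keys)
print(store.get(room_id="!abc:matrix.org", session_id="c0ff33") == keys)  # True
store.delete()
print(store.get() == {"rooms": {}})  # True
```

The real handler also versions backups and scopes every call to a user, which this sketch omits for brevity.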