Merge branch 'master' of github.com:matrix-org/synapse into dinsic

Commit 9bf49abc07 by Erik Johnston, 2019-04-05 14:10:16 +01:00
116 changed files with 1033 additions and 537 deletions

View file

@@ -1,3 +1,102 @@
Synapse 0.99.3 (2019-04-01)
===========================

No significant changes.


Synapse 0.99.3rc1 (2019-03-27)
==============================

Features
--------

- The user directory has been rewritten to make it faster, with less chance of falling behind on a large server. ([\#4537](https://github.com/matrix-org/synapse/issues/4537), [\#4846](https://github.com/matrix-org/synapse/issues/4846), [\#4864](https://github.com/matrix-org/synapse/issues/4864), [\#4887](https://github.com/matrix-org/synapse/issues/4887), [\#4900](https://github.com/matrix-org/synapse/issues/4900), [\#4944](https://github.com/matrix-org/synapse/issues/4944))
- Add configurable rate limiting to the /register endpoint. ([\#4735](https://github.com/matrix-org/synapse/issues/4735), [\#4804](https://github.com/matrix-org/synapse/issues/4804))
- Move server key queries to federation reader. ([\#4757](https://github.com/matrix-org/synapse/issues/4757))
- Add support for /account/3pid REST endpoint to client_reader worker. ([\#4759](https://github.com/matrix-org/synapse/issues/4759))
- Add an endpoint to the admin API for querying the server version. Contributed by Joseph Weston. ([\#4772](https://github.com/matrix-org/synapse/issues/4772))
- Include a default configuration file in the 'docs' directory. ([\#4791](https://github.com/matrix-org/synapse/issues/4791), [\#4801](https://github.com/matrix-org/synapse/issues/4801))
- Synapse is now permissive about trailing slashes on some of its federation endpoints, allowing zero or more to be present. ([\#4793](https://github.com/matrix-org/synapse/issues/4793))
- Add support for /keys/query and /keys/changes REST endpoints to client_reader worker. ([\#4796](https://github.com/matrix-org/synapse/issues/4796))
- Add checks to incoming events over federation for events evading auth (aka "soft fail"). ([\#4814](https://github.com/matrix-org/synapse/issues/4814))
- Add configurable rate limiting to the /login endpoint. ([\#4821](https://github.com/matrix-org/synapse/issues/4821), [\#4865](https://github.com/matrix-org/synapse/issues/4865))
- Remove trailing slashes from certain outbound federation requests. Retry if receiving a 404. Context: #3622. ([\#4840](https://github.com/matrix-org/synapse/issues/4840))
- Allow passing --daemonize flags to workers in the same way as with master. ([\#4853](https://github.com/matrix-org/synapse/issues/4853))
- Batch up outgoing read-receipts to reduce federation traffic. ([\#4890](https://github.com/matrix-org/synapse/issues/4890), [\#4927](https://github.com/matrix-org/synapse/issues/4927))
- Add option to disable searching the user directory. ([\#4895](https://github.com/matrix-org/synapse/issues/4895))
- Add option to disable searching of local and remote public room lists. ([\#4896](https://github.com/matrix-org/synapse/issues/4896))
- Add ability for password providers to login/register a user via 3PID (email, phone). ([\#4931](https://github.com/matrix-org/synapse/issues/4931))

Bugfixes
--------

- Fix a bug where media with spaces in the name would get a corrupted name. ([\#2090](https://github.com/matrix-org/synapse/issues/2090))
- Fix attempting to paginate in rooms where server cannot see any events, to avoid unnecessarily pulling in lots of redacted events. ([\#4699](https://github.com/matrix-org/synapse/issues/4699))
- 'event_id' is now a required parameter in federated state requests, as per the matrix spec. ([\#4740](https://github.com/matrix-org/synapse/issues/4740))
- Fix tightloop over connecting to replication server. ([\#4749](https://github.com/matrix-org/synapse/issues/4749))
- Fix parsing of Content-Disposition headers on remote media requests and URL previews. ([\#4763](https://github.com/matrix-org/synapse/issues/4763))
- Fix incorrect log about not persisting duplicate state event. ([\#4776](https://github.com/matrix-org/synapse/issues/4776))
- Fix v4v6 option in HAProxy example config. Contributed by Flakebi. ([\#4790](https://github.com/matrix-org/synapse/issues/4790))
- Handle batch updates in worker replication protocol. ([\#4792](https://github.com/matrix-org/synapse/issues/4792))
- Fix bug where we didn't correctly throttle sending of USER_IP commands over replication. ([\#4818](https://github.com/matrix-org/synapse/issues/4818))
- Fix potential race in handling missing updates in device list updates. ([\#4829](https://github.com/matrix-org/synapse/issues/4829))
- Fix bug where synapse expected an un-specced `prev_state` field on state events. ([\#4837](https://github.com/matrix-org/synapse/issues/4837))
- Transfer a user's notification settings (push rules) on room upgrade. ([\#4838](https://github.com/matrix-org/synapse/issues/4838))
- fix test_auto_create_auto_join_where_no_consent. ([\#4886](https://github.com/matrix-org/synapse/issues/4886))
- Fix a bug where hs_disabled_message was sometimes not correctly enforced. ([\#4888](https://github.com/matrix-org/synapse/issues/4888))
- Fix bug in shutdown room admin API where it would fail if a user in the room hadn't consented to the privacy policy. ([\#4904](https://github.com/matrix-org/synapse/issues/4904))
- Fix bug where blocked world-readable rooms were still peekable. ([\#4908](https://github.com/matrix-org/synapse/issues/4908))

Internal Changes
----------------

- Add a systemd setup that supports synapse workers. Contributed by Luca Corbatto. ([\#4662](https://github.com/matrix-org/synapse/issues/4662))
- Change from TravisCI to Buildkite for CI. ([\#4752](https://github.com/matrix-org/synapse/issues/4752))
- When presence is disabled don't send over replication. ([\#4757](https://github.com/matrix-org/synapse/issues/4757))
- Minor docstring fixes for MatrixFederationAgent. ([\#4765](https://github.com/matrix-org/synapse/issues/4765))
- Optimise EDU transmission for the federation_sender worker. ([\#4770](https://github.com/matrix-org/synapse/issues/4770))
- Update test_typing to use HomeserverTestCase. ([\#4771](https://github.com/matrix-org/synapse/issues/4771))
- Update URLs for riot.im icons and logos in the default notification templates. ([\#4779](https://github.com/matrix-org/synapse/issues/4779))
- Removed unnecessary $ from some federation endpoint path regexes. ([\#4794](https://github.com/matrix-org/synapse/issues/4794))
- Remove link to deleted title in README. ([\#4795](https://github.com/matrix-org/synapse/issues/4795))
- Clean up read-receipt handling. ([\#4797](https://github.com/matrix-org/synapse/issues/4797))
- Add some debug about processing read receipts. ([\#4798](https://github.com/matrix-org/synapse/issues/4798))
- Clean up some replication code. ([\#4799](https://github.com/matrix-org/synapse/issues/4799))
- Add some docstrings. ([\#4815](https://github.com/matrix-org/synapse/issues/4815))
- Add debug logger to try and track down #4422. ([\#4816](https://github.com/matrix-org/synapse/issues/4816))
- Make shutdown API send explanation message to room after users have been forced joined. ([\#4817](https://github.com/matrix-org/synapse/issues/4817))
- Update example_log_config.yaml. ([\#4820](https://github.com/matrix-org/synapse/issues/4820))
- Document the `generate` option for the docker image. ([\#4824](https://github.com/matrix-org/synapse/issues/4824))
- Fix check-newsfragment for debian-only changes. ([\#4825](https://github.com/matrix-org/synapse/issues/4825))
- Add some debug logging for device list updates to help with #4828. ([\#4828](https://github.com/matrix-org/synapse/issues/4828))
- Improve federation documentation, specifically .well-known support. Many thanks to @vaab. ([\#4832](https://github.com/matrix-org/synapse/issues/4832))
- Disable captcha registration by default in unit tests. ([\#4839](https://github.com/matrix-org/synapse/issues/4839))
- Add stuff back to the .gitignore. ([\#4843](https://github.com/matrix-org/synapse/issues/4843))
- Clarify what registration_shared_secret allows for. ([\#4844](https://github.com/matrix-org/synapse/issues/4844))
- Correctly log expected errors when fetching server keys. ([\#4847](https://github.com/matrix-org/synapse/issues/4847))
- Update install docs to explicitly state a full-chain (not just the top-level) TLS certificate must be provided to Synapse. This caused some people's Synapse ports to appear correct in a browser but still (rightfully so) upset the federation tester. ([\#4849](https://github.com/matrix-org/synapse/issues/4849))
- Move client read-receipt processing to federation sender worker. ([\#4852](https://github.com/matrix-org/synapse/issues/4852))
- Refactor federation TransactionQueue. ([\#4855](https://github.com/matrix-org/synapse/issues/4855))
- Comment out most options in the generated config. ([\#4863](https://github.com/matrix-org/synapse/issues/4863))
- Fix yaml library warnings by using safe_load. ([\#4869](https://github.com/matrix-org/synapse/issues/4869))
- Update Apache setup to remove location syntax. Thanks to @cwmke! ([\#4870](https://github.com/matrix-org/synapse/issues/4870))
- Reinstate test case that runs unit tests against oldest supported dependencies. ([\#4879](https://github.com/matrix-org/synapse/issues/4879))
- Update link to federation docs. ([\#4881](https://github.com/matrix-org/synapse/issues/4881))
- fix test_auto_create_auto_join_where_no_consent. ([\#4886](https://github.com/matrix-org/synapse/issues/4886))
- Use a regular HomeServerConfig object for unit tests rather than a Mock. ([\#4889](https://github.com/matrix-org/synapse/issues/4889))
- Add some notes about tuning postgres for larger deployments. ([\#4895](https://github.com/matrix-org/synapse/issues/4895))
- Add a config option for torture-testing worker replication. ([\#4902](https://github.com/matrix-org/synapse/issues/4902))
- Log requests which are simulated by the unit tests. ([\#4905](https://github.com/matrix-org/synapse/issues/4905))
- Allow newsfragments to end with exclamation marks. Exciting! ([\#4912](https://github.com/matrix-org/synapse/issues/4912))
- Refactor some more tests to use HomeserverTestCase. ([\#4913](https://github.com/matrix-org/synapse/issues/4913))
- Refactor out the state deltas portion of the user directory store and handler. ([\#4917](https://github.com/matrix-org/synapse/issues/4917))
- Fix nginx example in ACME doc. ([\#4923](https://github.com/matrix-org/synapse/issues/4923))
- Use an explicit dbname for postgres connections in the tests. ([\#4928](https://github.com/matrix-org/synapse/issues/4928))
- Fix `ClientReplicationStreamProtocol.__str__()`. ([\#4929](https://github.com/matrix-org/synapse/issues/4929))

Synapse 0.99.2 (2019-03-01)
===========================

View file

@@ -384,7 +384,7 @@ To configure Synapse to expose an HTTPS port, you will need to edit
`cert.pem`).

For those of you upgrading your TLS certificate in readiness for Synapse 1.0,
-please take a look at `our guide <docs/MSC1711_certificates_FAQ.md#configuring-certificates-for-compatibility-with-synapse-100>`_.
+please take a look at [our guide](docs/MSC1711_certificates_FAQ.md#configuring-certificates-for-compatibility-with-synapse-100).

## Registering a user

(The individual changelog.d newsfragment files were deleted in this commit; each removed file contained the corresponding one-line entry now listed in the 0.99.3rc1 changelog above.)

debian/changelog
View file

@@ -1,8 +1,12 @@
-matrix-synapse-py3 (0.99.3) UNRELEASED; urgency=medium
+matrix-synapse-py3 (0.99.3) stable; urgency=medium

  [ Richard van der Hoff ]
  * Fix warning during preconfiguration. (Fixes: #4819)

- -- Richard van der Hoff <richard@matrix.org>  Thu, 07 Mar 2019 07:17:00 +0000
+  [ Synapse Packaging team ]
+  * New synapse release 0.99.3.
+
+ -- Synapse Packaging team <packages@matrix.org>  Mon, 01 Apr 2019 12:48:21 +0000

matrix-synapse-py3 (0.99.2) stable; urgency=medium

View file

@@ -67,7 +67,7 @@ For nginx users, add the following line to your existing `server` block:
```
location /.well-known/acme-challenge {
-    proxy_pass http://localhost:8009/;
+    proxy_pass http://localhost:8009;
}
```

View file

@@ -75,6 +75,20 @@ Password auth provider classes may optionally provide the following methods.
   result from the ``/login`` call (including ``access_token``, ``device_id``,
   etc.)
``someprovider.check_3pid_auth``\(*medium*, *address*, *password*)
This method, if implemented, is called when a user attempts to register or
log in with a third party identifier, such as email. It is passed the
medium (ex. "email"), an address (ex. "jdoe@example.com") and the user's
password.
The method should return a Twisted ``Deferred`` object, which resolves to
a ``str`` containing the user's (canonical) User ID if authentication was
successful, and ``None`` if not.
As with ``check_auth``, the ``Deferred`` may alternatively resolve to a
``(user_id, callback)`` tuple.
``someprovider.check_password``\(*user_id*, *password*)

   This method provides a simpler interface than ``get_supported_login_types``
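To make the new hook concrete, here is a minimal, illustrative password provider that implements ``check_3pid_auth`` against a hard-coded credential table. The class name, the toy backend, and the ``example.com`` domain are assumptions made for this sketch; they are not part of Synapse or of this commit.

```python
from twisted.internet import defer

# Toy credential store standing in for a real external backend.
_FAKE_BACKEND = {"jdoe@example.com": "correct horse battery staple"}


class ExamplePasswordProvider(object):
    def __init__(self, config, account_handler):
        self.account_handler = account_handler

    @staticmethod
    def parse_config(config):
        return config

    def check_3pid_auth(self, medium, address, password):
        # A Deferred resolving to None tells Synapse this provider could not
        # authenticate the user, so other providers / the local DB are tried.
        if medium != "email" or _FAKE_BACKEND.get(address) != password:
            return defer.succeed(None)

        # On success, resolve to the canonical Matrix user ID.
        localpart = address.split("@", 1)[0]
        return defer.succeed("@%s:example.com" % (localpart,))
```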

View file

@@ -69,20 +69,16 @@ Let's assume that we expect clients to connect to our server at
      SSLEngine on
      ServerName matrix.example.com;

-      <Location /_matrix>
-          ProxyPass http://127.0.0.1:8008/_matrix nocanon
-          ProxyPassReverse http://127.0.0.1:8008/_matrix
-      </Location>
+      ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+      ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
  </VirtualHost>

  <VirtualHost *:8448>
      SSLEngine on
      ServerName example.com;

-      <Location /_matrix>
-          ProxyPass http://127.0.0.1:8008/_matrix nocanon
-          ProxyPassReverse http://127.0.0.1:8008/_matrix
-      </Location>
+      ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+      ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
  </VirtualHost>

* HAProxy::

View file

@@ -31,8 +31,8 @@ echo
# check that any new newsfiles on this branch end with a full stop.
for f in `git diff --name-only FETCH_HEAD... -- changelog.d`; do
    lastchar=`tr -d '\n' < $f | tail -c 1`
-    if [ $lastchar != '.' ]; then
-        echo -e "\e[31mERROR: newsfragment $f does not end with a '.'\e[39m" >&2
+    if [ $lastchar != '.' -a $lastchar != '!' ]; then
+        echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
        exit 1
    fi
done

View file

@@ -76,7 +76,7 @@ def rows_v2(server, json):

def main():
-    config = yaml.load(open(sys.argv[1]))
+    config = yaml.safe_load(open(sys.argv[1]))
    valid_until = int(time.time() / (3600 * 24)) * 1000 * 3600 * 24

    server_name = config["server_name"]
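For context (not part of the diff): the switch to `yaml.safe_load` here and in the config loaders below addresses PyYAML's warning about calling `load` without an explicit `Loader`, and restricts parsing to plain YAML types. A small sketch of the difference:

```python
import yaml

# safe_load only builds plain Python types (dicts, lists, strings, numbers, ...).
config = yaml.safe_load("server_name: example.com\npid_file: /homeserver.pid\n")
print(config["server_name"])  # -> example.com

# A document such as "!!python/object/apply:os.system ['echo boom']" would be
# constructed by yaml.load() under PyYAML's legacy default Loader, whereas
# safe_load rejects it with yaml.constructor.ConstructorError.
```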

View file

@@ -27,4 +27,4 @@ try:
except ImportError:
    pass

-__version__ = "0.99.2"
+__version__ = "0.99.3"

View file

@@ -137,7 +137,7 @@ class Config(object):
    @staticmethod
    def read_config_file(file_path):
        with open(file_path) as file_stream:
-            return yaml.load(file_stream)
+            return yaml.safe_load(file_stream)

    def invoke_all(self, name, *args, **kargs):
        results = []
@@ -318,7 +318,7 @@ class Config(object):
                )
                config_file.write(config_str)

-                config = yaml.load(config_str)
+                config = yaml.safe_load(config_str)
                obj.invoke_all("generate_files", config)

                print(
@@ -390,7 +390,7 @@ class Config(object):
                    server_name=server_name,
                    generate_secrets=False,
                )
-                config = yaml.load(config_string)
+                config = yaml.safe_load(config_string)
                config.pop("log_config")
                config.update(specified_config)

View file

@@ -68,7 +68,7 @@ def load_appservices(hostname, config_files):
        try:
            with open(config_file, 'r') as f:
                appservice = _load_appservice(
-                    hostname, yaml.load(f), config_file
+                    hostname, yaml.safe_load(f), config_file
                )
                if appservice.id in seen_ids:
                    raise ConfigError(

View file

@@ -195,7 +195,7 @@ def setup_logging(config, use_worker_options=False):
    else:
        def load_log_config():
            with open(log_config, 'r') as f:
-                logging.config.dictConfig(yaml.load(f))
+                logging.config.dictConfig(yaml.safe_load(f))

        def sighup(*args):
            # it might be better to use a file watcher or something for this.

View file

@@ -51,9 +51,10 @@ class TransportLayerClient(object):
        logger.debug("get_room_state dest=%s, room=%s",
                     destination, room_id)

-        path = _create_v1_path("/state/%s/", room_id)
+        path = _create_v1_path("/state/%s", room_id)
        return self.client.get_json(
            destination, path=path, args={"event_id": event_id},
            try_trailing_slash_on_400=True,
        )

    @log_function
@@ -73,9 +74,10 @@ class TransportLayerClient(object):
        logger.debug("get_room_state_ids dest=%s, room=%s",
                     destination, room_id)

-        path = _create_v1_path("/state_ids/%s/", room_id)
+        path = _create_v1_path("/state_ids/%s", room_id)
        return self.client.get_json(
            destination, path=path, args={"event_id": event_id},
            try_trailing_slash_on_400=True,
        )

    @log_function
@@ -95,8 +97,11 @@ class TransportLayerClient(object):
        logger.debug("get_pdu dest=%s, event_id=%s",
                     destination, event_id)

-        path = _create_v1_path("/event/%s/", event_id)
-        return self.client.get_json(destination, path=path, timeout=timeout)
+        path = _create_v1_path("/event/%s", event_id)
+        return self.client.get_json(
+            destination, path=path, timeout=timeout,
+            try_trailing_slash_on_400=True,
+        )

    @log_function
    def backfill(self, destination, room_id, event_tuples, limit):
@@ -121,7 +126,7 @@ class TransportLayerClient(object):
            # TODO: raise?
            return

-        path = _create_v1_path("/backfill/%s/", room_id)
+        path = _create_v1_path("/backfill/%s", room_id)

        args = {
            "v": event_tuples,
@@ -132,6 +137,7 @@ class TransportLayerClient(object):
            destination,
            path=path,
            args=args,
            try_trailing_slash_on_400=True,
        )

    @defer.inlineCallbacks
@@ -167,7 +173,7 @@ class TransportLayerClient(object):
        # generated by the json_data_callback.
        json_data = transaction.get_dict()

-        path = _create_v1_path("/send/%s/", transaction.transaction_id)
+        path = _create_v1_path("/send/%s", transaction.transaction_id)

        response = yield self.client.put_json(
            transaction.destination,
@@ -176,6 +182,7 @@ class TransportLayerClient(object):
            json_data_callback=json_data_callback,
            long_retries=True,
            backoff_on_404=True,  # If we get a 404 the other side has gone
            try_trailing_slash_on_400=True,
        )
        defer.returnValue(response)

@@ -959,7 +966,7 @@ def _create_v1_path(path, *args):
    Example:

-        _create_v1_path("/event/%s/", event_id)
+        _create_v1_path("/event/%s", event_id)

    Args:
        path (str): String template for the path
@@ -980,7 +987,7 @@ def _create_v2_path(path, *args):
    Example:

-        _create_v2_path("/event/%s/", event_id)
+        _create_v2_path("/event/%s", event_id)

    Args:
        path (str): String template for the path

View file

@@ -312,7 +312,7 @@ class BaseFederationServlet(object):

class FederationSendServlet(BaseFederationServlet):
-    PATH = "/send/(?P<transaction_id>[^/]*)/"
+    PATH = "/send/(?P<transaction_id>[^/]*)/?"

    def __init__(self, handler, server_name, **kwargs):
        super(FederationSendServlet, self).__init__(
@@ -378,7 +378,7 @@ class FederationSendServlet(BaseFederationServlet):

class FederationEventServlet(BaseFederationServlet):
-    PATH = "/event/(?P<event_id>[^/]*)/"
+    PATH = "/event/(?P<event_id>[^/]*)/?"

    # This is when someone asks for a data item for a given server data_id pair.
    def on_GET(self, origin, content, query, event_id):
@@ -386,7 +386,7 @@ class FederationEventServlet(BaseFederationServlet):

class FederationStateServlet(BaseFederationServlet):
-    PATH = "/state/(?P<context>[^/]*)/"
+    PATH = "/state/(?P<context>[^/]*)/?"

    # This is when someone asks for all data for a given context.
    def on_GET(self, origin, content, query, context):
@@ -398,7 +398,7 @@ class FederationStateServlet(BaseFederationServlet):

class FederationStateIdsServlet(BaseFederationServlet):
-    PATH = "/state_ids/(?P<room_id>[^/]*)/"
+    PATH = "/state_ids/(?P<room_id>[^/]*)/?"

    def on_GET(self, origin, content, query, room_id):
        return self.handler.on_state_ids_request(
@@ -409,7 +409,7 @@ class FederationStateIdsServlet(BaseFederationServlet):

class FederationBackfillServlet(BaseFederationServlet):
-    PATH = "/backfill/(?P<context>[^/]*)/"
+    PATH = "/backfill/(?P<context>[^/]*)/?"

    def on_GET(self, origin, content, query, context):
        versions = [x.decode('ascii') for x in query[b"v"]]
@@ -1080,7 +1080,7 @@ class FederationGroupsCategoriesServlet(BaseFederationServlet):
    """Get all categories for a group
    """
    PATH = (
-        "/groups/(?P<group_id>[^/]*)/categories/"
+        "/groups/(?P<group_id>[^/]*)/categories/?"
    )

    @defer.inlineCallbacks
@@ -1150,7 +1150,7 @@ class FederationGroupsRolesServlet(BaseFederationServlet):
    """Get roles in a group
    """
    PATH = (
-        "/groups/(?P<group_id>[^/]*)/roles/"
+        "/groups/(?P<group_id>[^/]*)/roles/?"
    )

    @defer.inlineCallbacks

View file

@@ -745,6 +745,42 @@ class AuthHandler(BaseHandler):
                errcode=Codes.FORBIDDEN
            )

    @defer.inlineCallbacks
    def check_password_provider_3pid(self, medium, address, password):
        """Check if a password provider is able to validate a thirdparty login

        Args:
            medium (str): The medium of the 3pid (ex. email).
            address (str): The address of the 3pid (ex. jdoe@example.com).
            password (str): The password of the user.

        Returns:
            Deferred[(str|None, func|None)]: A tuple of `(user_id,
            callback)`. If authentication is successful, `user_id` is a `str`
            containing the authenticated, canonical user ID. `callback` is
            then either a function to be later run after the server has
            completed login/registration, or `None`. If authentication was
            unsuccessful, `user_id` and `callback` are both `None`.
        """
        for provider in self.password_providers:
            if hasattr(provider, "check_3pid_auth"):
                # This function is able to return a deferred that either
                # resolves None, meaning authentication failure, or upon
                # success, to a str (which is the user_id) or a tuple of
                # (user_id, callback_func), where callback_func should be run
                # after we've finished everything else
                result = yield provider.check_3pid_auth(
                    medium, address, password,
                )
                if result:
                    # Check if the return value is a str or a tuple
                    if isinstance(result, str):
                        # If it's a str, set callback function to None
                        result = (result, None)
                    defer.returnValue(result)

        defer.returnValue((None, None))

    @defer.inlineCallbacks
    def _check_local_password(self, user_id, password):
        """Authenticate a user against the local password database.
@@ -756,7 +792,8 @@ class AuthHandler(BaseHandler):
            user_id (unicode): complete @user:id
            password (unicode): the provided password
        Returns:
-            (unicode) the canonical_user_id, or None if unknown user / bad password
+            Deferred[unicode] the canonical_user_id, or Deferred[None] if
+                unknown user/bad password
        Raises:
            LimitExceededError if the ratelimiter's login requests count for this

View file

@@ -19,7 +19,7 @@ import random
from twisted.internet import defer

from synapse.api.constants import EventTypes, Membership
-from synapse.api.errors import AuthError
+from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.events.utils import serialize_event
from synapse.types import UserID
@@ -61,6 +61,11 @@ class EventStreamHandler(BaseHandler):
        If `only_keys` is not None, events from keys will be sent down.
        """

        if room_id:
            blocked = yield self.store.is_room_blocked(room_id)
            if blocked:
                raise SynapseError(403, "This room has been blocked on this server")

        # send any outstanding server notices to the user.
        yield self._server_notices_sender.on_user_syncing(auth_user_id)

View file

@@ -18,7 +18,7 @@ import logging
from twisted.internet import defer

from synapse.api.constants import EventTypes, Membership
-from synapse.api.errors import AuthError, Codes
+from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.handlers.presence import format_user_presence_state
@@ -262,6 +262,10 @@ class InitialSyncHandler(BaseHandler):
            A JSON serialisable dict with the snapshot of the room.
        """

        blocked = yield self.store.is_room_blocked(room_id)
        if blocked:
            raise SynapseError(403, "This room has been blocked on this server")

        user_id = requester.user.to_string()

        membership, member_event_id = yield self._check_in_room_or_world_readable(

View file

@@ -233,8 +233,14 @@ class BaseProfileHandler(BaseHandler):

    @defer.inlineCallbacks
    def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
-        """target_user is the UserID whose displayname is to be changed;
-        requester is the authenticated user attempting to make this change."""
+        """Set the displayname of a user
+
+        Args:
+            target_user (UserID): the user whose displayname is to be changed.
+            requester (Requester): The user attempting to make this change.
+            new_displayname (str): The displayname to give this user.
+            by_admin (bool): Whether this change was made by an administrator.
+        """
        if not self.hs.is_mine(target_user):
            raise SynapseError(400, "User is not hosted on this Home Server")

View file

@@ -172,7 +172,7 @@ class RegistrationHandler(BaseHandler):
                api.constants.UserTypes, or None for a normal user.
            default_display_name (unicode|None): if set, the new user's displayname
                will be set to this. Defaults to 'localpart'.
-            address (str|None): the IP address used to perform the regitration.
+            address (str|None): the IP address used to perform the registration.
        Returns:
            A tuple of (user_id, access_token).
        Raises:
@@ -719,7 +719,7 @@ class RegistrationHandler(BaseHandler):
            admin (boolean): is an admin user?
            user_type (str|None): type of user. One of the values from
                api.constants.UserTypes, or None for a normal user.
-            address (str|None): the IP address used to perform the regitration.
+            address (str|None): the IP address used to perform the registration.
        Returns:
            Deferred
@@ -817,9 +817,9 @@ class RegistrationHandler(BaseHandler):
            access_token (str|None): The access token of the newly logged in
                device, or None if `inhibit_login` enabled.
            bind_email (bool): Whether to bind the email with the identity
-                server
+                server.
            bind_msisdn (bool): Whether to bind the msisdn with the identity
-                server
+                server.
        """
        if self.hs.config.worker_app:
            yield self._post_registration_client(
@@ -861,7 +861,7 @@ class RegistrationHandler(BaseHandler):
        """A user consented to the terms on registration

        Args:
-            user_id (str): The user ID that consented
+            user_id (str): The user ID that consented.
            consent_version (str): version of the policy the user has
                consented to.
        """

View file

@@ -0,0 +1,70 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)


class StateDeltasHandler(object):

    def __init__(self, hs):
        self.store = hs.get_datastore()

    @defer.inlineCallbacks
    def _get_key_change(self, prev_event_id, event_id, key_name, public_value):
        """Given two events check if the `key_name` field in content changed
        from not matching `public_value` to doing so.

        For example, check if `history_visibility` (`key_name`) changed from
        `shared` to `world_readable` (`public_value`).

        Returns:
            None if the field in the events either both match `public_value`
            or if neither do, i.e. there has been no change.
            True if it didnt match `public_value` but now does
            False if it did match `public_value` but now doesn't
        """
        prev_event = None
        event = None
        if prev_event_id:
            prev_event = yield self.store.get_event(prev_event_id, allow_none=True)

        if event_id:
            event = yield self.store.get_event(event_id, allow_none=True)

        if not event and not prev_event:
            logger.debug("Neither event exists: %r %r", prev_event_id, event_id)
            defer.returnValue(None)

        prev_value = None
        value = None

        if prev_event:
            prev_value = prev_event.content.get(key_name)

        if event:
            value = event.content.get(key_name)

        logger.debug("prev_value: %r -> value: %r", prev_value, value)

        if value == public_value and prev_value != public_value:
            defer.returnValue(True)
        elif value != public_value and prev_value == public_value:
            defer.returnValue(False)
        else:
            defer.returnValue(None)
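As a reading aid (not part of the commit), the truth table `_get_key_change` implements can be sketched with plain dicts standing in for the stored events:

```python
def key_change(prev_content, content, key_name, public_value):
    # Mirrors the comparison above, minus the event lookups.
    prev_value = prev_content.get(key_name) if prev_content else None
    value = content.get(key_name) if content else None
    if value == public_value and prev_value != public_value:
        return True    # e.g. history_visibility went shared -> world_readable
    if value != public_value and prev_value == public_value:
        return False   # e.g. world_readable -> shared
    return None        # no change in either direction


assert key_change(
    {"history_visibility": "shared"},
    {"history_visibility": "world_readable"},
    "history_visibility", "world_readable",
) is True
```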

View file

@@ -21,6 +21,7 @@ from twisted.internet import defer
import synapse.metrics
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.handlers.state_deltas import StateDeltasHandler
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.roommember import ProfileInfo
from synapse.types import get_localpart_from_id
@@ -29,7 +30,7 @@ from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)

-class UserDirectoryHandler(object):
+class UserDirectoryHandler(StateDeltasHandler):
    """Handles querying of and keeping updated the user_directory.

    N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY
@@ -41,6 +42,8 @@ class UserDirectoryHandler(object):
    """

    def __init__(self, hs):
        super(UserDirectoryHandler, self).__init__(hs)

        self.store = hs.get_datastore()
        self.state = hs.get_state_handler()
        self.server_name = hs.hostname
@@ -360,7 +363,7 @@ class UserDirectoryHandler(object):
    @defer.inlineCallbacks
    def _handle_remove_user(self, room_id, user_id):
-        """Called when we might need to remove user to directory
+        """Called when we might need to remove user from directory

        Args:
            room_id (str): room_id that user left or stopped being public that
@@ -402,47 +405,3 @@
        if prev_name != new_name or prev_avatar != new_avatar:
            yield self.store.update_profile_in_user_dir(user_id, new_name, new_avatar)

(The `_get_key_change` helper that previously followed here was removed; it is identical to the method now defined on `StateDeltasHandler` in the new `synapse/handlers/state_deltas.py` file shown above.)

View file

@@ -188,6 +188,58 @@ class MatrixFederationHttpClient(object):
        self._cooperator = Cooperator(scheduler=schedule)

    @defer.inlineCallbacks
    def _send_request_with_optional_trailing_slash(
        self,
        request,
        try_trailing_slash_on_400=False,
        **send_request_args
    ):
        """Wrapper for _send_request which can optionally retry the request
        upon receiving a combination of a 400 HTTP response code and a
        'M_UNRECOGNIZED' errcode. This is a workaround for Synapse <= v0.99.3
        due to #3622.

        Args:
            request (MatrixFederationRequest): details of request to be sent
            try_trailing_slash_on_400 (bool): Whether on receiving a 400
                'M_UNRECOGNIZED' from the server to retry the request with a
                trailing slash appended to the request path.
            send_request_args (Dict): A dictionary of arguments to pass to
                `_send_request()`.

        Raises:
            HttpResponseException: If we get an HTTP response code >= 300
                (except 429).

        Returns:
            Deferred[Dict]: Parsed JSON response body.
        """
        try:
            response = yield self._send_request(
                request, **send_request_args
            )
        except HttpResponseException as e:
            # Received an HTTP error > 300. Check if it meets the requirements
            # to retry with a trailing slash
            if not try_trailing_slash_on_400:
                raise

            if e.code != 400 or e.to_synapse_error().errcode != "M_UNRECOGNIZED":
                raise

            # Retry with a trailing slash if we received a 400 with
            # 'M_UNRECOGNIZED' which some endpoints can return when omitting a
            # trailing slash on Synapse <= v0.99.3.
            logger.info("Retrying request with trailing slash")
            request.path += "/"

            response = yield self._send_request(
                request, **send_request_args
            )

        defer.returnValue(response)

    @defer.inlineCallbacks
    def _send_request(
        self,
@@ -196,7 +248,7 @@ class MatrixFederationHttpClient(object):
        timeout=None,
        long_retries=False,
        ignore_backoff=False,
-        backoff_on_404=False
+        backoff_on_404=False,
    ):
        """
        Sends a request to the given server.
@@ -473,7 +525,8 @@ class MatrixFederationHttpClient(object):
                 json_data_callback=None,
                 long_retries=False, timeout=None,
                 ignore_backoff=False,
-                 backoff_on_404=False):
+                 backoff_on_404=False,
+                 try_trailing_slash_on_400=False):
        """ Sends the specifed json data using PUT

        Args:
@@ -493,7 +546,12 @@ class MatrixFederationHttpClient(object):
                and try the request anyway.
            backoff_on_404 (bool): True if we should count a 404 response as
                a failure of the server (and should therefore back off future
-                requests)
+                requests).
            try_trailing_slash_on_400 (bool): True if on a 400 M_UNRECOGNIZED
                response we should try appending a trailing slash to the end
                of the request. Workaround for #3622 in Synapse <= v0.99.3. This
                will be attempted before backing off if backing off has been
                enabled.

        Returns:
            Deferred[dict|list]: Succeeds when we get a 2xx HTTP response. The
@@ -509,7 +567,6 @@ class MatrixFederationHttpClient(object):
            RequestSendFailed: If there were problems connecting to the
                remote, due to e.g. DNS failures, connection timeouts etc.
        """
        request = MatrixFederationRequest(
            method="PUT",
            destination=destination,
@@ -519,17 +576,19 @@ class MatrixFederationHttpClient(object):
            json=data,
        )

-        response = yield self._send_request(
+        response = yield self._send_request_with_optional_trailing_slash(
            request,
            try_trailing_slash_on_400,
            backoff_on_404=backoff_on_404,
            ignore_backoff=ignore_backoff,
            long_retries=long_retries,
            timeout=timeout,
-            ignore_backoff=ignore_backoff,
-            backoff_on_404=backoff_on_404,
        )

        body = yield _handle_json_response(
            self.hs.get_reactor(), self.default_timeout, request, response,
        )

        defer.returnValue(body)

    @defer.inlineCallbacks
@@ -592,7 +651,8 @@ class MatrixFederationHttpClient(object):
    @defer.inlineCallbacks
    def get_json(self, destination, path, args=None, retry_on_dns_fail=True,
-                 timeout=None, ignore_backoff=False):
+                 timeout=None, ignore_backoff=False,
+                 try_trailing_slash_on_400=False):
        """ GETs some json from the given host homeserver and path

        Args:
@@ -606,6 +666,9 @@ class MatrixFederationHttpClient(object):
                be retried.
            ignore_backoff (bool): true to ignore the historical backoff data
                and try the request anyway.
            try_trailing_slash_on_400 (bool): True if on a 400 M_UNRECOGNIZED
                response we should try appending a trailing slash to the end of
                the request. Workaround for #3622 in Synapse <= v0.99.3.

        Returns:
            Deferred[dict|list]: Succeeds when we get a 2xx HTTP response. The
            result will be the decoded JSON body.
@@ -631,16 +694,19 @@ class MatrixFederationHttpClient(object):
            query=args,
        )

-        response = yield self._send_request(
+        response = yield self._send_request_with_optional_trailing_slash(
            request,
            try_trailing_slash_on_400,
            backoff_on_404=False,
            ignore_backoff=ignore_backoff,
            retry_on_dns_fail=retry_on_dns_fail,
            timeout=timeout,
-            ignore_backoff=ignore_backoff,
        )

        body = yield _handle_json_response(
            self.hs.get_reactor(), self.default_timeout, request, response,
        )

        defer.returnValue(body)

    @defer.inlineCallbacks

View file

@@ -73,14 +73,26 @@ class ModuleApi(object):
        """
        return self._auth_handler.check_user_exists(user_id)

-    def register(self, localpart):
-        """Registers a new user with given localpart
+    @defer.inlineCallbacks
+    def register(self, localpart, displayname=None):
+        """Registers a new user with given localpart and optional
+        displayname.
+
+        Args:
+            localpart (str): The localpart of the new user.
+            displayname (str|None): The displayname of the new user. If None,
+                the user's displayname will default to `localpart`.

        Returns:
            Deferred: a 2-tuple of (user_id, access_token)
        """
+        # Register the user
        reg = self.hs.get_registration_handler()
-        return reg.register(localpart=localpart)
+        user_id, access_token = yield reg.register(
+            localpart=localpart, default_display_name=displayname,
+        )
+
+        defer.returnValue((user_id, access_token))

    @defer.inlineCallbacks
    def invalidate_access_token(self, access_token):
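A usage sketch (hypothetical module code, not from this commit): a pluggable module holding a `ModuleApi` instance, such as a password provider's `account_handler`, can now set a display name at registration time:

```python
from twisted.internet import defer


@defer.inlineCallbacks
def ensure_demo_user(module_api):
    # `module_api` is the ModuleApi instance handed to pluggable modules.
    user_id, access_token = yield module_api.register(
        "jdoe", displayname="Jane Doe",
    )
    defer.returnValue((user_id, access_token))
```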

View file

@@ -223,14 +223,25 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
            return

        # Now lets try and call on_<CMD_NAME> function
-        try:
-            run_as_background_process(
-                "replication-" + cmd.get_logcontext_id(),
-                getattr(self, "on_%s" % (cmd_name,)),
-                cmd,
-            )
-        except Exception:
-            logger.exception("[%s] Failed to handle line: %r", self.id(), line)
+        run_as_background_process(
+            "replication-" + cmd.get_logcontext_id(),
+            self.handle_command,
+            cmd,
+        )
+
+    def handle_command(self, cmd):
+        """Handle a command we have received over the replication stream.
+
+        By default delegates to on_<COMMAND>
+
+        Args:
+            cmd (synapse.replication.tcp.commands.Command): received command
+
+        Returns:
+            Deferred
+        """
+        handler = getattr(self, "on_%s" % (cmd.NAME,))
+        return handler(cmd)

    def close(self):
        logger.warn("[%s] Closing connection", self.id())
@@ -364,8 +375,11 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
            self.transport.unregisterProducer()

    def __str__(self):
+        addr = None
+        if self.transport:
+            addr = str(self.transport.getPeer())
        return "ReplicationConnection<name=%s,conn_id=%s,addr=%s>" % (
-            self.name, self.conn_id, self.addr,
+            self.name, self.conn_id, addr,
        )

    def id(self):
@@ -381,12 +395,11 @@ class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol):
    VALID_INBOUND_COMMANDS = VALID_CLIENT_COMMANDS
    VALID_OUTBOUND_COMMANDS = VALID_SERVER_COMMANDS

-    def __init__(self, server_name, clock, streamer, addr):
+    def __init__(self, server_name, clock, streamer):
        BaseReplicationStreamProtocol.__init__(self, clock)  # Old style class

        self.server_name = server_name
        self.streamer = streamer
-        self.addr = addr

        # The streams the client has subscribed to and is up to date with
        self.replication_streams = set()

View file

@@ -57,7 +57,6 @@ class ReplicationStreamProtocolFactory(Factory):
            self.server_name,
            self.clock,
            self.streamer,
-            addr
        )

View file

@@ -23,7 +23,7 @@ Each stream is defined by the following information:
    current_token: The function that returns the current token for the stream
    update_function: The function that returns a list of updates between two tokens
"""
import itertools
import logging
from collections import namedtuple
@@ -195,8 +195,8 @@ class Stream(object):
                limit=MAX_EVENTS_BEHIND + 1,
            )

-            if len(rows) >= MAX_EVENTS_BEHIND:
-                raise Exception("stream %s has fallen behind" % (self.NAME))
+            # never turn more than MAX_EVENTS_BEHIND + 1 into updates.
+            rows = itertools.islice(rows, MAX_EVENTS_BEHIND + 1)
        else:
            rows = yield self.update_function(
                from_token, current_token,
@@ -204,6 +204,11 @@ class Stream(object):
        updates = [(row[0], self.ROW_TYPE(*row[1:])) for row in rows]

        # check we didn't get more rows than the limit.
        # doing it like this allows the update_function to be a generator.
        if self._LIMITED and len(updates) >= MAX_EVENTS_BEHIND:
            raise Exception("stream %s has fallen behind" % (self.NAME))

        defer.returnValue((updates, current_token))

    def current_token(self):

View file

@ -201,6 +201,24 @@ class LoginRestServlet(ClientV1RestServlet):
# We store all email addreses as lowercase in the DB. # We store all email addreses as lowercase in the DB.
# (See add_threepid in synapse/handlers/auth.py) # (See add_threepid in synapse/handlers/auth.py)
address = address.lower() address = address.lower()
# Check for login providers that support 3pid login types
canonical_user_id, callback_3pid = (
yield self.auth_handler.check_password_provider_3pid(
medium,
address,
login_submission["password"],
)
)
if canonical_user_id:
# Authentication through password provider and 3pid succeeded
result = yield self._register_device_with_callback(
canonical_user_id, login_submission, callback_3pid,
)
defer.returnValue(result)
# No password providers were able to handle this 3pid
# Check local store
user_id = yield self.hs.get_datastore().get_user_id_by_threepid( user_id = yield self.hs.get_datastore().get_user_id_by_threepid(
medium, address, medium, address,
) )
@@ -223,20 +241,43 @@ class LoginRestServlet(ClientV1RestServlet):
         if "user" not in identifier:
             raise SynapseError(400, "User identifier is missing 'user' key")

-        auth_handler = self.auth_handler
-        canonical_user_id, callback = yield auth_handler.validate_login(
+        canonical_user_id, callback = yield self.auth_handler.validate_login(
             identifier["user"],
             login_submission,
         )

+        result = yield self._register_device_with_callback(
+            canonical_user_id, login_submission, callback,
+        )
+        defer.returnValue(result)
+
+    @defer.inlineCallbacks
+    def _register_device_with_callback(
+        self,
+        user_id,
+        login_submission,
+        callback=None,
+    ):
+        """ Registers a device with a given user_id. Optionally run a callback
+        function after registration has completed.
+
+        Args:
+            user_id (str): ID of the user to register.
+            login_submission (dict): Dictionary of login information.
+            callback (func|None): Callback function to run after registration.
+
+        Returns:
+            result (Dict[str,str]): Dictionary of account information after
+                successful registration.
+        """
         device_id = login_submission.get("device_id")
         initial_display_name = login_submission.get("initial_device_display_name")
         device_id, access_token = yield self.registration_handler.register_device(
-            canonical_user_id, device_id, initial_display_name,
+            user_id, device_id, initial_display_name,
         )

         result = {
-            "user_id": canonical_user_id,
+            "user_id": user_id,
             "access_token": access_token,
             "home_server": self.hs.hostname,
             "device_id": device_id,

View file

@@ -301,7 +301,9 @@ class ReceiptsWorkerStore(SQLBaseStore):
             args.append(limit)
             txn.execute(sql, args)
-            return txn.fetchall()
+            return (
+                r[0:5] + (json.loads(r[5]), ) for r in txn
+            )

         return self.runInteraction(
             "get_all_updated_receipts", get_all_updated_receipts_txn
         )
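
Instead of fetchall(), the transaction function now returns a generator that decodes the JSON "data" column as the rows are consumed. A small self-contained illustration of the idea using sqlite3 and a cut-down three-column table (the real query selects six columns and parses the sixth):

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE receipts_linearized (stream_id INTEGER, room_id TEXT, data TEXT)"
)
conn.execute(
    "INSERT INTO receipts_linearized VALUES (1, '!room:example.org', ?)",
    (json.dumps({"ts": 1234}),),
)


def get_all_updated_receipts_txn(txn):
    txn.execute("SELECT stream_id, room_id, data FROM receipts_linearized")
    # Decode the JSON column lazily; callers iterate over plain tuples whose
    # last element is already parsed.
    return (
        row[0:2] + (json.loads(row[2]),) for row in txn
    )


for row in get_all_updated_receipts_txn(conn.cursor()):
    print(row)  # (1, '!room:example.org', {'ts': 1234})

One caveat with this pattern in general: the generator has to be consumed while the underlying cursor is still valid, so the caller should turn it into a list (or iterate it) straight away.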

View file

@@ -0,0 +1,74 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018 Vector Creations Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+from synapse.storage._base import SQLBaseStore
+
+logger = logging.getLogger(__name__)
+
+
+class StateDeltasStore(SQLBaseStore):
+
+    def get_current_state_deltas(self, prev_stream_id):
+        prev_stream_id = int(prev_stream_id)
+        if not self._curr_state_delta_stream_cache.has_any_entity_changed(prev_stream_id):
+            return []
+
+        def get_current_state_deltas_txn(txn):
+            # First we calculate the max stream id that will give us less than
+            # N results.
+            # We arbitarily limit to 100 stream_id entries to ensure we don't
+            # select toooo many.
+            sql = """
+                SELECT stream_id, count(*)
+                FROM current_state_delta_stream
+                WHERE stream_id > ?
+                GROUP BY stream_id
+                ORDER BY stream_id ASC
+                LIMIT 100
+            """
+            txn.execute(sql, (prev_stream_id,))
+
+            total = 0
+            max_stream_id = prev_stream_id
+            for max_stream_id, count in txn:
+                total += count
+                if total > 100:
+                    # We arbitarily limit to 100 entries to ensure we don't
+                    # select toooo many.
+                    break
+
+            # Now actually get the deltas
+            sql = """
+                SELECT stream_id, room_id, type, state_key, event_id, prev_event_id
+                FROM current_state_delta_stream
+                WHERE ? < stream_id AND stream_id <= ?
+                ORDER BY stream_id ASC
+            """
+            txn.execute(sql, (prev_stream_id, max_stream_id,))
+            return self.cursor_to_dict(txn)
+
+        return self.runInteraction(
+            "get_current_state_deltas", get_current_state_deltas_txn
+        )
+
+    def get_max_stream_id_in_current_state_deltas(self):
+        return self._simple_select_one_onecol(
+            table="current_state_delta_stream",
+            keyvalues={},
+            retcol="COALESCE(MAX(stream_id), -1)",
+            desc="get_max_stream_id_in_current_state_deltas",
+        )
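
The two queries in get_current_state_deltas implement a bounded catch-up: the first walks stream_ids forward until roughly 100 rows have been accounted for, and the second fetches everything up to that boundary, so a single stream_id is never split across batches (at the cost of occasionally returning a little more than 100 rows). A standalone sketch of the same two-step pattern against sqlite3, with made-up data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE current_state_delta_stream (stream_id INTEGER, room_id TEXT)"
)
conn.executemany(
    "INSERT INTO current_state_delta_stream VALUES (?, ?)",
    [(s, "!room%d:example.org" % i) for s in range(1, 6) for i in range(40)],
)


def get_current_state_deltas(conn, prev_stream_id, max_rows=100):
    # Step 1: find the largest stream_id we can include, stopping once
    # max_rows would be exceeded; whole stream_ids stay together.
    txn = conn.execute(
        """
        SELECT stream_id, count(*) FROM current_state_delta_stream
        WHERE stream_id > ? GROUP BY stream_id ORDER BY stream_id ASC LIMIT 100
        """,
        (prev_stream_id,),
    )
    total = 0
    max_stream_id = prev_stream_id
    for max_stream_id, count in txn:
        total += count
        if total > max_rows:
            break

    # Step 2: fetch every delta up to and including that boundary.
    return conn.execute(
        """
        SELECT stream_id, room_id FROM current_state_delta_stream
        WHERE ? < stream_id AND stream_id <= ? ORDER BY stream_id ASC
        """,
        (prev_stream_id, max_stream_id),
    ).fetchall()


rows = get_current_state_deltas(conn, 0)
print(len(rows), max(r[0] for r in rows))  # 120 3: stops once the 100-row budget is crossed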

View file

@@ -22,6 +22,7 @@ from synapse.api.constants import EventTypes, JoinRules
 from synapse.storage.background_updates import BackgroundUpdateStore
 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
 from synapse.storage.state import StateFilter
+from synapse.storage.state_deltas import StateDeltasStore
 from synapse.types import get_domain_from_id, get_localpart_from_id
 from synapse.util.caches.descriptors import cached
@@ -31,7 +32,7 @@ logger = logging.getLogger(__name__)
 TEMP_TABLE = "_temp_populate_user_directory"


-class UserDirectoryStore(BackgroundUpdateStore):
+class UserDirectoryStore(StateDeltasStore, BackgroundUpdateStore):

     # How many records do we calculate before sending it to
     # add_users_who_share_private_rooms?
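
With the delta helpers moved into StateDeltasStore (the new file above), the user directory store simply mixes that class in next to BackgroundUpdateStore, so the same queries can later be reused by other stores without duplication. A toy illustration of that mixin-style composition (the method bodies here are placeholders, not the real stores):

class StateDeltasStore(object):
    # Shared query helpers live in the mixin...
    def get_current_state_deltas(self, prev_stream_id):
        return "deltas after %d" % (prev_stream_id,)


class BackgroundUpdateStore(object):
    # ...and the background-update machinery in another base class.
    def register_background_update_handler(self, name, handler):
        print("registered %s" % (name,))


class UserDirectoryStore(StateDeltasStore, BackgroundUpdateStore):
    # Inherits both sets of methods; no query code is copied into this class.
    pass


store = UserDirectoryStore()
store.register_background_update_handler("populate_user_directory", None)
print(store.get_current_state_deltas(7))  # deltas after 7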
@@ -134,7 +135,12 @@ class UserDirectoryStore(BackgroundUpdateStore):
     @defer.inlineCallbacks
     def _populate_user_directory_process_rooms(self, progress, batch_size):
+        """
+        Args:
+            progress (dict)
+            batch_size (int): Maximum number of state events to process
+                per cycle.
+        """
         state = self.hs.get_state_handler()

         # If we don't have progress filed, delete everything.
@@ -142,13 +148,14 @@ class UserDirectoryStore(BackgroundUpdateStore):
             yield self.delete_all_from_user_dir()

         def _get_next_batch(txn):
+            # Only fetch 250 rooms, so we don't fetch too many at once, even
+            # if those 250 rooms have less than batch_size state events.
             sql = """
-                SELECT room_id FROM %s
+                SELECT room_id, events FROM %s
                 ORDER BY events DESC
-                LIMIT %s
+                LIMIT 250
             """ % (
                 TEMP_TABLE + "_rooms",
-                str(batch_size),
             )
             txn.execute(sql)
             rooms_to_work_on = txn.fetchall()
@@ -156,8 +163,6 @@ class UserDirectoryStore(BackgroundUpdateStore):
             if not rooms_to_work_on:
                 return None

-            rooms_to_work_on = [x[0] for x in rooms_to_work_on]
-
             # Get how many are left to process, so we can give status on how
             # far we are in processing
             txn.execute("SELECT COUNT(*) FROM " + TEMP_TABLE + "_rooms")
@@ -179,7 +184,9 @@ class UserDirectoryStore(BackgroundUpdateStore):
             % (len(rooms_to_work_on), progress["remaining"])
         )

-        for room_id in rooms_to_work_on:
+        processed_event_count = 0
+
+        for room_id, event_count in rooms_to_work_on:
             is_in_room = yield self.is_host_joined(room_id, self.server_name)

             if is_in_room:
@@ -246,7 +253,13 @@ class UserDirectoryStore(BackgroundUpdateStore):
                 progress,
             )

-            defer.returnValue(len(rooms_to_work_on))
+            processed_event_count += event_count
+
+            if processed_event_count > batch_size:
+                # Don't process any more rooms, we've hit our batch size.
+                defer.returnValue(processed_event_count)
+
+        defer.returnValue(processed_event_count)

     @defer.inlineCallbacks
     def _populate_user_directory_process_users(self, progress, batch_size):
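
Taken together, the user-directory hunks above change the unit of batching: rather than processing batch_size rooms per pass, the background update now fetches up to 250 rooms along with their event counts and stops a pass once the summed counts exceed batch_size, so one enormous room cannot silently blow the batch budget. A simplified, runnable sketch of that loop (process_in_event_batches and handle_room are illustrative names, not Synapse APIs):

def process_in_event_batches(rooms_to_work_on, batch_size, handle_room):
    # rooms_to_work_on mirrors the rows fetched above: up to 250
    # (room_id, event_count) pairs, biggest rooms first.
    processed_event_count = 0
    for room_id, event_count in rooms_to_work_on:
        handle_room(room_id)
        processed_event_count += event_count
        if processed_event_count > batch_size:
            # Hit the batch budget: stop here and let the next pass resume.
            break
    return processed_event_count


done = process_in_event_batches(
    [("!big:example.org", 900), ("!small:example.org", 12)],
    batch_size=500,
    handle_room=lambda room_id: print("indexing", room_id),
)
print(done)  # 900 -- only the big room fitted into this pass

Returning the number of processed events (rather than len(rooms_to_work_on)) also keeps the background-update scheduler's pacing roughly proportional to the real work done.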
@@ -488,16 +501,6 @@ class UserDirectoryStore(BackgroundUpdateStore):
         defer.returnValue(user_ids)

-    @defer.inlineCallbacks
-    def get_all_local_users(self):
-        """Get all local users
-        """
-        sql = """
-            SELECT name FROM users
-        """
-        rows = yield self._execute("get_all_local_users", None, sql)
-        defer.returnValue([name for name, in rows])
-
     def add_users_who_share_private_room(self, room_id, user_id_tuples):
         """Insert entries into the users_who_share_private_rooms table. The first
         user should be a local user.
@@ -675,59 +678,6 @@ class UserDirectoryStore(BackgroundUpdateStore):
             desc="update_user_directory_stream_pos",
         )

-    def get_current_state_deltas(self, prev_stream_id):
-        prev_stream_id = int(prev_stream_id)
-        if not self._curr_state_delta_stream_cache.has_any_entity_changed(
-            prev_stream_id
-        ):
-            return []
-
-        def get_current_state_deltas_txn(txn):
-            # First we calculate the max stream id that will give us less than
-            # N results.
-            # We arbitarily limit to 100 stream_id entries to ensure we don't
-            # select toooo many.
-            sql = """
-                SELECT stream_id, count(*)
-                FROM current_state_delta_stream
-                WHERE stream_id > ?
-                GROUP BY stream_id
-                ORDER BY stream_id ASC
-                LIMIT 100
-            """
-            txn.execute(sql, (prev_stream_id,))
-
-            total = 0
-            max_stream_id = prev_stream_id
-            for max_stream_id, count in txn:
-                total += count
-                if total > 100:
-                    # We arbitarily limit to 100 entries to ensure we don't
-                    # select toooo many.
-                    break
-
-            # Now actually get the deltas
-            sql = """
-                SELECT stream_id, room_id, type, state_key, event_id, prev_event_id
-                FROM current_state_delta_stream
-                WHERE ? < stream_id AND stream_id <= ?
-                ORDER BY stream_id ASC
-            """
-            txn.execute(sql, (prev_stream_id, max_stream_id))
-            return self.cursor_to_dict(txn)
-
-        return self.runInteraction(
-            "get_current_state_deltas", get_current_state_deltas_txn
-        )
-
-    def get_max_stream_id_in_current_state_deltas(self):
-        return self._simple_select_one_onecol(
-            table="current_state_delta_stream",
-            keyvalues={},
-            retcol="COALESCE(MAX(stream_id), -1)",
-            desc="get_max_stream_id_in_current_state_deltas",
-        )
-
     @defer.inlineCallbacks
     def search_user_dir(self, user_id, search_term, limit):
         """Searches for users in directory

Some files were not shown because too many files have changed in this diff.