A captcha can be enabled on your homeserver to help prevent bots from registering accounts. Synapse currently uses Google's reCAPTCHA service, which requires API keys from Google.
Add the public hostname of your server, as set in public_baseurl in homeserver.yaml, to
the list of authorized domains. If you have not set public_baseurl, use server_name.
Copy your site key and secret key and add them to your homeserver.yaml configuration file:
+recaptcha_public_key: YOUR_SITE_KEY
+recaptcha_private_key: YOUR_SECRET_KEY
+
+enable_registration_captcha: true
+
+The reCAPTCHA API requires that the IP address of the user who solved the
+CAPTCHA is sent. If the client is connecting through a proxy or load balancer,
+it may be required to use the X-Forwarded-For
(XFF) header instead of the origin
+IP address. This can be configured using the x_forwarded
directive in the
+listeners section of the homeserver.yaml
configuration file.
Note: This API is disabled when MSC3861 is enabled. See #15582
+This API allows a server administrator to manage the validity of an account. To
+use it, you must enable the account validity feature (under
+account_validity
) in Synapse's configuration.
To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
This API extends the validity of an account by as much time as configured in the
+period
parameter from the account_validity
configuration.
The API is:
+POST /_synapse/admin/v1/account_validity/validity
+
+with the following body:
+{
+ "user_id": "<user ID for the account to renew>",
+ "expiration_ts": 0,
+ "enable_renewal_emails": true
+}
+
+expiration_ts
is an optional parameter and overrides the expiration date,
+which otherwise defaults to now + validity period.
enable_renewal_emails
is also an optional parameter and enables/disables
+sending renewal emails to the user. Defaults to true.
The API returns with the new expiration date for this account, as a timestamp in +milliseconds since epoch:
+{
+ "expiration_ts": 0
+}
+
+
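As an illustration of the renewal call above, here is a minimal Python sketch using the requests library; the homeserver URL, admin access token and user ID are placeholder assumptions, not values from this documentation:

import requests

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token

# Extend the validity of a local account, using the configured default period.
resp = requests.post(
    f"{SYNAPSE_URL}/_synapse/admin/v1/account_validity/validity",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={
        "user_id": "@alice:example.com",   # account to renew (placeholder)
        "enable_renewal_emails": True,
    },
)
resp.raise_for_status()
print("new expiration_ts:", resp.json()["expiration_ts"])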
+ This API returns information about reported events.
+To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
The API is:
+GET /_synapse/admin/v1/event_reports?from=0&limit=10
+
+It returns a JSON body like the following:
+{
+ "event_reports": [
+ {
+ "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
+ "id": 2,
+ "reason": "foo",
+ "score": -100,
+ "received_ts": 1570897107409,
+ "canonical_alias": "#alias1:matrix.org",
+ "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
+ "name": "Matrix HQ",
+ "sender": "@foobar:matrix.org",
+ "user_id": "@foo:matrix.org"
+ },
+ {
+ "event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
+ "id": 3,
+ "reason": "bar",
+ "score": -100,
+ "received_ts": 1598889612059,
+ "canonical_alias": "#alias2:matrix.org",
+ "room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
+ "name": "Your room name here",
+ "sender": "@foobar:matrix.org",
+ "user_id": "@bar:matrix.org"
+ }
+ ],
+ "next_token": 2,
+ "total": 4
+}
+
+To paginate, check for next_token
and if present, call the endpoint again with from
+set to the value of next_token
. This will return a new page.
If the endpoint does not return a next_token
then there are no more reports to
+paginate through.
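As a sketch of that pagination loop (the homeserver URL and admin access token below are placeholder assumptions), using Python and the requests library:

import requests

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token

# Walk every page of event reports by following next_token until it disappears.
reports, params = [], {"limit": 100, "from": 0}
while True:
    resp = requests.get(
        f"{SYNAPSE_URL}/_synapse/admin/v1/event_reports",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        params=params,
    )
    resp.raise_for_status()
    page = resp.json()
    reports.extend(page["event_reports"])
    if "next_token" not in page:
        break
    params["from"] = page["next_token"]

print(f"collected {len(reports)} reports")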
URL parameters:
- limit: integer - Is optional but is used for pagination, denoting the maximum number
  of items to return in this call. Defaults to 100.
- from: integer - Is optional but used for pagination, denoting the offset in the
  returned results. This should be treated as an opaque value and not explicitly set to
  anything other than the return value of next_token from a previous call. Defaults to 0.
- dir: string - Direction of event report order. Whether to fetch the most recent
  first (b) or the oldest first (f). Defaults to b.
- user_id: string - Is optional and filters to only return users with user IDs that
  contain this value. This is the user who reported the event and wrote the reason.
- room_id: string - Is optional and filters to only return rooms with room IDs that
  contain this value.

Response
+The following fields are returned in the JSON response body:
+id
: integer - ID of event report.received_ts
: integer - The timestamp (in milliseconds since the unix epoch) when this
+report was sent.room_id
: string - The ID of the room in which the event being reported is located.name
: string - The name of the room.event_id
: string - The ID of the reported event.user_id
: string - This is the user who reported the event and wrote the reason.reason
: string - Comment made by the user_id
in this report. May be blank or null
.score
: integer - Content is reported based upon a negative score, where -100 is
+"most offensive" and 0 is "inoffensive". May be null
.sender
: string - This is the ID of the user who sent the original message/event that
+was reported.canonical_alias
: string - The canonical alias of the room. null
if the room does not
+have a canonical alias set.next_token
: integer - Indication for pagination. See above.total
: integer - Total number of event reports related to the query
+(user_id
and room_id
).
This API returns information about a specific event report.
The API is:
+GET /_synapse/admin/v1/event_reports/<report_id>
+
+It returns a JSON body like the following:
+{
+ "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
+ "event_json": {
+ "auth_events": [
+ "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
+ "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
+ ],
+ "content": {
+ "body": "matrix.org: This Week in Matrix",
+ "format": "org.matrix.custom.html",
+ "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
+ "msgtype": "m.notice"
+ },
+ "depth": 546,
+ "hashes": {
+ "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
+ },
+ "origin": "matrix.org",
+ "origin_server_ts": 1592291711430,
+ "prev_events": [
+ "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
+ ],
+ "prev_state": [],
+ "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
+ "sender": "@foobar:matrix.org",
+ "signatures": {
+ "matrix.org": {
+ "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
+ }
+ },
+ "type": "m.room.message",
+ "unsigned": {
+ "age_ts": 1592291711430
+ }
+ },
+ "id": <report_id>,
+ "reason": "foo",
+ "score": -100,
+ "received_ts": 1570897107409,
+ "canonical_alias": "#alias1:matrix.org",
+ "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
+ "name": "Matrix HQ",
+ "sender": "@foobar:matrix.org",
+ "user_id": "@foo:matrix.org"
+}
+
+URL parameters:
+report_id
: string - The ID of the event report.Response
+The following fields are returned in the JSON response body:
+id
: integer - ID of event report.received_ts
: integer - The timestamp (in milliseconds since the unix epoch) when this
+report was sent.room_id
: string - The ID of the room in which the event being reported is located.name
: string - The name of the room.event_id
: string - The ID of the reported event.user_id
: string - This is the user who reported the event and wrote the reason.reason
: string - Comment made by the user_id
in this report. May be blank.score
: integer - Content is reported based upon a negative score, where -100 is
+"most offensive" and 0 is "inoffensive".sender
: string - This is the ID of the user who sent the original message/event that
+was reported.canonical_alias
: string - The canonical alias of the room. null
if the room does not
+have a canonical alias set.event_json
: object - Details of the original event that was reported.This API deletes a specific event report. If the request is successful, the response body +will be an empty JSON object.
The API is:
+DELETE /_synapse/admin/v1/event_reports/<report_id>
+
+URL parameters:
+report_id
: string - The ID of the event report.This API allows a server administrator to enable or disable some experimental features on a per-user +basis. The currently supported features are:
+To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
This API allows a server administrator to enable experimental features for a given user. The request must +provide a body containing the user id and listing the features to enable/disable in the following format:
+{
+ "features": {
+ "msc3026":true,
+ "msc3881":true
+ }
+}
+
+where true is used to enable the feature, and false is used to disable the feature.
+The API is:
+PUT /_synapse/admin/v1/experimental_features/<user_id>
+
+To list which features are enabled/disabled for a given user send a request to the following API:
+GET /_synapse/admin/v1/experimental_features/<user_id>
+
+It will return a list of possible features and indicate whether they are enabled or disabled for the +user like so:
+{
+ "features": {
+ "msc3026": true,
+ "msc3881": false,
+ "msc3967": false
+ }
+}
+
+
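A minimal Python sketch of setting and then reading back these per-user flags; the homeserver URL, admin token and user ID are placeholder assumptions:

import requests

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token
USER_ID = "@alice:example.com"              # placeholder local user

headers = {"Authorization": f"Bearer {ADMIN_TOKEN}"}
base = f"{SYNAPSE_URL}/_synapse/admin/v1/experimental_features/{USER_ID}"

# Enable one feature and disable another for this user.
requests.put(
    base, headers=headers,
    json={"features": {"msc3026": True, "msc3881": False}},
).raise_for_status()

# Read back the per-user feature flags.
resp = requests.get(base, headers=headers)
resp.raise_for_status()
print(resp.json()["features"])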
+ These APIs allow extracting media information from the homeserver.
+Details about the format of the media_id
and storage of the media in the file system
+are documented under media repository.
To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
This API gets a list of known media in a room. +However, it only shows media from unencrypted events or rooms.
+The API is:
+GET /_synapse/admin/v1/room/<room_id>/media
+
+The API returns a JSON body like the following:
+{
+ "local": [
+ "mxc://localhost/xwvutsrqponmlkjihgfedcba",
+ "mxc://localhost/abcdefghijklmnopqrstuvwx"
+ ],
+ "remote": [
+ "mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
+ "mxc://matrix.org/abcdefghijklmnopqrstuvwx"
+ ]
+}
+
+Listing all media that has been uploaded by a local user can be achieved through +the use of the +List media uploaded by a user +Admin API.
+Quarantining media means that it is marked as inaccessible by users. It applies +to any local media, and any locally-cached copies of remote media.
+The media file itself (and any thumbnails) is not deleted from the server.
+This API quarantines a single piece of local or remote media.
+Request:
+POST /_synapse/admin/v1/media/quarantine/<server_name>/<media_id>
+
+{}
+
+Where server_name
is in the form of example.org
, and media_id
is in the
+form of abcdefg12345...
.
Response:
+{}
+
+This API removes a single piece of local or remote media from quarantine.
+Request:
+POST /_synapse/admin/v1/media/unquarantine/<server_name>/<media_id>
+
+{}
+
+Where server_name
is in the form of example.org
, and media_id
is in the
+form of abcdefg12345...
.
Response:
+{}
+
+This API quarantines all local and remote media in a room.
+Request:
+POST /_synapse/admin/v1/room/<room_id>/media/quarantine
+
+{}
+
+Where room_id
is in the form of !roomid12345:example.org
.
Response:
+{
+ "num_quarantined": 10
+}
+
+The following fields are returned in the JSON response body:
+num_quarantined
: integer - The number of media items successfully quarantinedNote that there is a legacy endpoint, POST /_synapse/admin/v1/quarantine_media/<room_id>
, that operates the same.
+However, it is deprecated and may be removed in a future release.
This API quarantines all local media that a local user has uploaded. That is to say, if +you would like to quarantine media uploaded by a user on a remote homeserver, you should +instead use one of the other APIs.
+Request:
+POST /_synapse/admin/v1/user/<user_id>/media/quarantine
+
+{}
+
+URL Parameters
+user_id
: string - User ID in the form of @bob:example.org
Response:
+{
+ "num_quarantined": 10
+}
+
+The following fields are returned in the JSON response body:
+num_quarantined
: integer - The number of media items successfully quarantinedThis API protects a single piece of local media from being quarantined using the +above APIs. This is useful for sticker packs and other shared media which you do +not want to get quarantined, especially when +quarantining media in a room.
+Request:
+POST /_synapse/admin/v1/media/protect/<media_id>
+
+{}
+
+Where media_id
is in the form of abcdefg12345...
.
Response:
+{}
+
+This API reverts the protection of a media.
+Request:
+POST /_synapse/admin/v1/media/unprotect/<media_id>
+
+{}
+
+Where media_id
is in the form of abcdefg12345...
.
Response:
+{}
+
+This API deletes the local media from the disk of your own server. +This includes any local thumbnails and copies of media downloaded from +remote homeservers. +This API will not affect media that has been uploaded to external +media repositories (e.g https://github.com/turt2live/matrix-media-repo/). +See also Purge Remote Media API.
+Delete a specific media_id
.
Request:
+DELETE /_synapse/admin/v1/media/<server_name>/<media_id>
+
+{}
+
+URL Parameters
+server_name
: string - The name of your local server (e.g matrix.org
)media_id
: string - The ID of the media (e.g abcdefghijklmnopqrstuvwx
)Response:
+{
+ "deleted_media": [
+ "abcdefghijklmnopqrstuvwx"
+ ],
+ "total": 1
+}
+
+The following fields are returned in the JSON response body:
+deleted_media
: an array of strings - List of deleted media_id
total
: integer - Total number of deleted media_id
Request:
+POST /_synapse/admin/v1/media/delete?before_ts=<before_ts>
+
+{}
+
+Deprecated in Synapse v1.78.0: This API is available at the deprecated endpoint:
+POST /_synapse/admin/v1/media/<server_name>/delete?before_ts=<before_ts>
+
+{}
+
+URL Parameters
+server_name
: string - The name of your local server (e.g matrix.org
). Deprecated in Synapse v1.78.0.before_ts
: string representing a positive integer - Unix timestamp in milliseconds.
+Files that were last used before this timestamp will be deleted. It is the timestamp of
+last access, not the timestamp when the file was created.size_gt
: Optional - string representing a positive integer - Size of the media in bytes.
+Files that are larger will be deleted. Defaults to 0
.keep_profiles
: Optional - string representing a boolean - Switch to also delete files
+that are still used in image data (e.g user profile, room avatar).
+If false
these files will be deleted. Defaults to true
.Response:
+{
+ "deleted_media": [
+ "abcdefghijklmnopqrstuvwx",
+ "abcdefghijklmnopqrstuvwz"
+ ],
+ "total": 2
+}
+
+The following fields are returned in the JSON response body:
+deleted_media
: an array of strings - List of deleted media_id
total
: integer - Total number of deleted media_id
You can find details of how to delete multiple media uploaded by a user in +User Admin API.
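As an illustration of the delete-by-date call above, a Python sketch that deletes local media not accessed in the last 30 days; the homeserver URL, admin token and cutoff are assumptions:

import time
import requests

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token

# Delete local media last accessed more than 30 days ago, keeping media that is
# still used in profiles or room avatars (keep_profiles defaults to true anyway).
cutoff_ms = int((time.time() - 30 * 24 * 3600) * 1000)
resp = requests.post(
    f"{SYNAPSE_URL}/_synapse/admin/v1/media/delete",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    params={"before_ts": cutoff_ms, "keep_profiles": "true"},
    json={},
)
resp.raise_for_status()
print("deleted:", resp.json()["total"])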
+The purge remote media API allows server admins to purge old cached remote media.
+The API is:
+POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
+
+{}
+
+URL Parameters
+before_ts
: string representing a positive integer - Unix timestamp in milliseconds.
+All cached media that was last accessed before this timestamp will be removed.Response:
+{
+ "deleted": 10
+}
+
+The following fields are returned in the JSON response body:
+deleted
: integer - The number of media items successfully deletedIf the user re-requests purged remote media, synapse will re-request the media +from the originating server.
+ +The purge history API allows server admins to purge historic events from their +database, reclaiming disk space.
+Depending on the amount of history being purged a call to the API may take +several minutes or longer. During this period users will not be able to +paginate further back in the room from the point being purged from.
+Note that Synapse requires at least one message in each room, so it will never +delete the last message in a room.
+To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
The API is:
+POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]
+
+By default, events sent by local users are not deleted, as they may represent +the only copies of this content in existence. (Events sent by remote users are +deleted.)
+Room state data (such as joins, leaves, topic) is always preserved.
+To delete local message events as well, set delete_local_events
in the body:
{
+ "delete_local_events": true
+}
+
+The caller must specify the point in the room to purge up to. This can be
+specified by including an event_id in the URI, or by setting a
+purge_up_to_event_id
or purge_up_to_ts
in the request body. If an event
+id is given, that event (and others at the same graph depth) will be retained.
+If purge_up_to_ts
is given, it should be a timestamp since the unix epoch,
+in milliseconds.
The API starts the purge running, and returns immediately with a JSON body with +a purge id:
+{
+ "purge_id": "<opaque id>"
+}
+
It is possible to poll for updates on recent purges with a second API:
+GET /_synapse/admin/v1/purge_history_status/<purge_id>
+
+This API returns a JSON body like the following:
+{
+ "status": "active"
+}
+
+The status will be one of active
, complete
, or failed
.
If status
is failed
there will be a string error
with the error message.
To reclaim the disk space and return it to the operating system, you need to run
+VACUUM FULL;
on the database.
Note: This API is disabled when MSC3861 is enabled. See #15582
+This API allows for the creation of users in an administrative and +non-interactive way. This is generally used for bootstrapping a Synapse +instance with administrator accounts.
+To authenticate yourself to the server, you will need both the shared secret
+(registration_shared_secret
+in the homeserver configuration), and a one-time nonce. If the registration
+shared secret is not configured, this API is not enabled.
To fetch the nonce, you need to request one from the API:
+> GET /_synapse/admin/v1/register
+
+< {"nonce": "thisisanonce"}
+
+Once you have the nonce, you can make a POST
to the same URL with a JSON
+body containing the nonce, username, password, whether they are an admin
+(optional, False by default), and a HMAC digest of the content. Also you can
+set the displayname (optional, username
by default).
As an example:
+> POST /_synapse/admin/v1/register
+> {
+ "nonce": "thisisanonce",
+ "username": "pepper_roni",
+ "displayname": "Pepper Roni",
+ "password": "pizza",
+ "admin": true,
+ "mac": "mac_digest_here"
+ }
+
+< {
+ "access_token": "token_here",
+ "user_id": "@pepper_roni:localhost",
+ "home_server": "test",
+ "device_id": "device_id_here"
+ }
+
+The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being +the shared secret and the content being the nonce, user, password, either the +string "admin" or "notadmin", and optionally the user_type +each separated by NULs.
+Here is an easy way to generate the HMAC digest if you have Bash and OpenSSL:
+# Update these values and then paste this code block into a bash terminal
+nonce='thisisanonce'
+username='pepper_roni'
+password='pizza'
+admin='admin'
+secret='shared_secret'
+
+printf '%s\0%s\0%s\0%s' "$nonce" "$username" "$password" "$admin" |
+ openssl sha1 -hmac "$secret" |
+ awk '{print $2}'
+
+For an example of generation in Python:
+import hmac, hashlib
+
+def generate_mac(nonce, user, password, admin=False, user_type=None):
+
+ mac = hmac.new(
+ key=shared_secret,
+ digestmod=hashlib.sha1,
+ )
+
+ mac.update(nonce.encode('utf8'))
+ mac.update(b"\x00")
+ mac.update(user.encode('utf8'))
+ mac.update(b"\x00")
+ mac.update(password.encode('utf8'))
+ mac.update(b"\x00")
+ mac.update(b"admin" if admin else b"notadmin")
+ if user_type:
+ mac.update(b"\x00")
+ mac.update(user_type.encode('utf8'))
+
+ return mac.hexdigest()
+
+
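Building on the generate_mac() snippet above, here is a rough end-to-end sketch in Python with requests. The homeserver URL and shared secret are placeholders, and shared_secret must be defined (as a bytes object matching registration_shared_secret in homeserver.yaml) before generate_mac() is called, since the function refers to it:

import requests

SYNAPSE_URL = "https://matrix.example.com"          # assumed homeserver base URL
shared_secret = b"<registration_shared_secret>"     # must match homeserver.yaml

# 1. Fetch a one-time nonce.
nonce = requests.get(f"{SYNAPSE_URL}/_synapse/admin/v1/register").json()["nonce"]

# 2. Compute the HMAC with generate_mac() from the snippet above.
mac = generate_mac(nonce, "pepper_roni", "pizza", admin=True)

# 3. Register the account.
resp = requests.post(
    f"{SYNAPSE_URL}/_synapse/admin/v1/register",
    json={
        "nonce": nonce,
        "username": "pepper_roni",
        "displayname": "Pepper Roni",
        "password": "pizza",
        "admin": True,
        "mac": mac,
    },
)
resp.raise_for_status()
print(resp.json()["user_id"])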
+ This API allows an administrator to join a user account with a given user_id
+to a room with a given room_id_or_alias
. You can only modify the membership of
+local users. The server administrator must be in the room and have permission to
+invite users.
To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
The following parameters are available:
+user_id
- Fully qualified user: for example, @user:server.com
.room_id_or_alias
- The room identifier or alias to join: for example,
+!636q39766251:server.com
.POST /_synapse/admin/v1/join/<room_id_or_alias>
+
+{
+ "user_id": "@user:server.com"
+}
+
+Response:
+{
+ "room_id": "!636q39766251:server.com"
+}
+
+
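A minimal Python sketch of the join call above; the homeserver URL, admin token, room alias and user ID are placeholder assumptions:

import requests
from urllib.parse import quote

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token

resp = requests.post(
    f"{SYNAPSE_URL}/_synapse/admin/v1/join/{quote('#roomalias:example.com')}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"user_id": "@user:example.com"},  # local user to join (placeholder)
)
resp.raise_for_status()
print("joined room:", resp.json()["room_id"])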
+ The List Room admin API allows server admins to get a list of rooms on their +server. There are various parameters available that allow for filtering and +sorting the returned list. This API supports pagination.
+To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
Parameters
+The following query parameters are available:
+from
- Offset in the returned list. Defaults to 0
.
limit
- Maximum amount of rooms to return. Defaults to 100
.
order_by
- The method in which to sort the returned list of rooms. Valid values are:
alphabetical
- Same as name
. This is deprecated.size
- Same as joined_members
. This is deprecated.name
- Rooms are ordered alphabetically by room name. This is the default.canonical_alias
- Rooms are ordered alphabetically by main alias address of the room.joined_members
- Rooms are ordered by the number of members. Largest to smallest.joined_local_members
- Rooms are ordered by the number of local members. Largest to smallest.version
- Rooms are ordered by room version. Largest to smallest.creator
- Rooms are ordered alphabetically by creator of the room.encryption
- Rooms are ordered alphabetically by the end-to-end encryption algorithm.federatable
- Rooms are ordered by whether the room is federatable.public
- Rooms are ordered by visibility in room list.join_rules
- Rooms are ordered alphabetically by join rules of the room.guest_access
- Rooms are ordered alphabetically by guest access option of the room.history_visibility
- Rooms are ordered alphabetically by visibility of history of the room.state_events
- Rooms are ordered by number of state events. Largest to smallest.dir
- Direction of room order. Either f
for forwards or b
for backwards. Setting
+this value to b
will reverse the above sort order. Defaults to f
.
search_term
- Filter rooms by their room name, canonical alias and room id.
+Specifically, rooms are selected if the search term is contained in
public_rooms
- Optional flag to filter public rooms. If true
, only public rooms are queried. If false
, public rooms are excluded from
+the query. When the flag is absent (the default), both public and non-public rooms are included in the search results.
empty_rooms
- Optional flag to filter empty rooms. A room is empty if joined_members is zero. If true
, only empty rooms are queried. If false
, empty rooms are excluded from
+the query. When the flag is absent (the default), both empty and non-empty rooms are included in the search results.
Defaults to no filtering.
+Response
+The following fields are possible in the JSON response body:
+rooms
- An array of objects, each containing information about a room.
+room_id
- The ID of the room.name
- The name of the room.canonical_alias
- The canonical (main) alias address of the room.joined_members
- How many users are currently in the room.joined_local_members
- How many local users are currently in the room.version
- The version of the room as a string.creator
- The user_id
of the room creator.encryption
- Algorithm of end-to-end encryption of messages. Is null
if encryption is not active.federatable
- Whether users on other servers can join this room.public
- Whether the room is visible in room directory.join_rules
- The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].guest_access
- Whether guests can join the room. One of: ["can_join", "forbidden"].history_visibility
- Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].state_events
- Total number of state_events of a room. Complexity of the room.room_type
- The type of the room taken from the room's creation event; for example "m.space" if the room is a space. If the room does not define a type, the value will be null
.offset
- The current pagination offset in rooms. This parameter should be
+used instead of next_token
for room offset as next_token
is
+not intended to be parsed.total_rooms
- The total number of rooms this query can return. Using this
+and offset
, you have enough information to know the current
+progression through the list.next_batch
- If this field is present, we know that there are potentially
+more rooms on the server that did not all fit into this response.
+We can use next_batch
to get the "next page" of results. To do
+so, simply repeat your request, setting the from
parameter to
+the value of next_batch
.prev_batch
- If this field is present, it is possible to paginate backwards.
+Use prev_batch
for the from
value in the next request to
+get the "previous page" of results.The API is:
+A standard request with no filtering:
+GET /_synapse/admin/v1/rooms
+
+A response body like the following is returned:
+{
+ "rooms": [
+ {
+ "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
+ "name": "Matrix HQ",
+ "canonical_alias": "#matrix:matrix.org",
+ "joined_members": 8326,
+ "joined_local_members": 2,
+ "version": "1",
+ "creator": "@foo:matrix.org",
+ "encryption": null,
+ "federatable": true,
+ "public": true,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 93534,
+ "room_type": "m.space"
+ },
+ ... (8 hidden items) ...
+ {
+ "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
+ "name": "This Week In Matrix (TWIM)",
+ "canonical_alias": "#twim:matrix.org",
+ "joined_members": 314,
+ "joined_local_members": 20,
+ "version": "4",
+ "creator": "@foo:matrix.org",
+ "encryption": "m.megolm.v1.aes-sha2",
+ "federatable": true,
+ "public": false,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 8345,
+ "room_type": null
+ }
+ ],
+ "offset": 0,
+ "total_rooms": 10
+}
+
+Filtering by room name:
+GET /_synapse/admin/v1/rooms?search_term=TWIM
+
+A response body like the following is returned:
+{
+ "rooms": [
+ {
+ "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
+ "name": "This Week In Matrix (TWIM)",
+ "canonical_alias": "#twim:matrix.org",
+ "joined_members": 314,
+ "joined_local_members": 20,
+ "version": "4",
+ "creator": "@foo:matrix.org",
+ "encryption": "m.megolm.v1.aes-sha2",
+ "federatable": true,
+ "public": false,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 8,
+ "room_type": null
+ }
+ ],
+ "offset": 0,
+ "total_rooms": 1
+}
+
+Paginating through a list of rooms:
+GET /_synapse/admin/v1/rooms?order_by=size
+
+A response body like the following is returned:
+{
+ "rooms": [
+ {
+ "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
+ "name": "Matrix HQ",
+ "canonical_alias": "#matrix:matrix.org",
+ "joined_members": 8326,
+ "joined_local_members": 2,
+ "version": "1",
+ "creator": "@foo:matrix.org",
+ "encryption": null,
+ "federatable": true,
+ "public": true,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 93534,
+ "room_type": null
+ },
+ ... (98 hidden items) ...
+ {
+ "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
+ "name": "This Week In Matrix (TWIM)",
+ "canonical_alias": "#twim:matrix.org",
+ "joined_members": 314,
+ "joined_local_members": 20,
+ "version": "4",
+ "creator": "@foo:matrix.org",
+ "encryption": "m.megolm.v1.aes-sha2",
+ "federatable": true,
+ "public": false,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 8345,
+ "room_type": "m.space"
+ }
+ ],
+ "offset": 0,
+ "total_rooms": 150,
+ "next_token": 100
+}
+
+The presence of the next_token
parameter tells us that there are more rooms
+than returned in this request, and we need to make another request to get them.
+To get the next batch of room results, we repeat our request, setting the from
+parameter to the value of next_token
.
GET /_synapse/admin/v1/rooms?order_by=size&from=100
+
+A response body like the following is returned:
+{
+ "rooms": [
+ {
+ "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
+ "name": "Music Theory",
+ "canonical_alias": "#musictheory:matrix.org",
+ "joined_members": 127,
+ "joined_local_members": 2,
+ "version": "1",
+ "creator": "@foo:matrix.org",
+ "encryption": null,
+ "federatable": true,
+ "public": true,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 93534,
+ "room_type": "m.space"
+
+ },
+ ... (48 hidden items) ...
+ {
+ "room_id": "!twcBhHVdZlQWuuxBhN:termina.org.uk",
+ "name": "weechat-matrix",
+ "canonical_alias": "#weechat-matrix:termina.org.uk",
+ "joined_members": 137,
+ "joined_local_members": 20,
+ "version": "4",
+ "creator": "@foo:termina.org.uk",
+ "encryption": null,
+ "federatable": true,
+ "public": true,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 8345,
+ "room_type": null
+
+ }
+ ],
+ "offset": 100,
+ "prev_batch": 0,
+ "total_rooms": 150
+}
+
+Once the next_token
parameter is no longer present, we know we've reached the
+end of the list.
The Room Details admin API allows server admins to get all details of a room.
+The following fields are possible in the JSON response body:
+room_id
- The ID of the room.name
- The name of the room.topic
- The topic of the room.avatar
- The mxc
URI to the avatar of the room.canonical_alias
- The canonical (main) alias address of the room.joined_members
- How many users are currently in the room.joined_local_members
- How many local users are currently in the room.joined_local_devices
- How many local devices are currently in the room.version
- The version of the room as a string.creator
- The user_id
of the room creator.encryption
- Algorithm of end-to-end encryption of messages. Is null
if encryption is not active.federatable
- Whether users on other servers can join this room.public
- Whether the room is visible in room directory.join_rules
- The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].guest_access
- Whether guests can join the room. One of: ["can_join", "forbidden"].history_visibility
- Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].state_events
- Total number of state_events of a room. Complexity of the room.room_type
- The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
+If the room does not define a type, the value will be null
.forgotten
- Whether all local users have
+forgotten the room.The API is:
+GET /_synapse/admin/v1/rooms/<room_id>
+
+A response body like the following is returned:
+{
+ "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
+ "name": "Music Theory",
+ "avatar": "mxc://matrix.org/AQDaVFlbkQoErdOgqWRgiGSV",
+ "topic": "Theory, Composition, Notation, Analysis",
+ "canonical_alias": "#musictheory:matrix.org",
+ "joined_members": 127,
+ "joined_local_members": 2,
+ "joined_local_devices": 2,
+ "version": "1",
+ "creator": "@foo:matrix.org",
+ "encryption": null,
+ "federatable": true,
+ "public": true,
+ "join_rules": "invite",
+ "guest_access": null,
+ "history_visibility": "shared",
+ "state_events": 93534,
+ "room_type": "m.space",
+ "forgotten": false
+}
+
+Changed in Synapse 1.66: Added the forgotten
key to the response body.
The Room Members admin API allows server admins to get a list of all members of a room.
+The response includes the following fields:
+members
- A list of all the members that are present in the room, represented by their ids.total
- Total number of members in the room.The API is:
+GET /_synapse/admin/v1/rooms/<room_id>/members
+
+A response body like the following is returned:
+{
+ "members": [
+ "@foo:matrix.org",
+ "@bar:matrix.org",
+ "@foobar:matrix.org"
+ ],
+ "total": 3
+}
+
+The Room State admin API allows server admins to get a list of all state events in a room.
+The response includes the following fields:
+state
- The current state of the room at the time of request.The API is:
+GET /_synapse/admin/v1/rooms/<room_id>/state
+
+A response body like the following is returned:
+{
+ "state": [
+ {"type": "m.room.create", "state_key": "", "etc": true},
+ {"type": "m.room.power_levels", "state_key": "", "etc": true},
+ {"type": "m.room.name", "state_key": "", "etc": true}
+ ]
+}
+
+The Room Messages admin API allows server admins to get all messages +sent to a room in a given timeframe. There are various parameters available +that allow for filtering and ordering the returned list. This API supports pagination.
+To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
This endpoint mirrors the Matrix Spec defined Messages API.
+The API is:
+GET /_synapse/admin/v1/rooms/<room_id>/messages
+
+Parameters
+The following path parameters are required:
+room_id
- The ID of the room you wish you fetch messages from.The following query parameters are available:
+from
(required) - The token to start returning events from. This token can be obtained from a prev_batch
+or next_batch token returned by the /sync endpoint, or from an end token returned by a previous request to this endpoint.to
- The token to stop returning events at.limit
- The maximum number of events to return. Defaults to 10
.filter
- A JSON RoomEventFilter to filter returned events with.dir
- The direction to return events from. Either f
for forwards or b
for backwards. Setting
+this value to b
will reverse the above sort order. Defaults to f
.Response
+The following fields are possible in the JSON response body:
+chunk
- A list of room events. The order depends on the dir parameter.
+Note that an empty chunk does not necessarily imply that no more events are available. Clients should continue to paginate until no end property is returned.end
- A token corresponding to the end of chunk. This token can be passed back to this endpoint to request further events.
+If no further events are available, this property is omitted from the response.start
- A token corresponding to the start of chunk.state
- A list of state events relevant to showing the chunk.Example
+For more details on each chunk, read the Matrix specification.
+{
+ "chunk": [
+ {
+ "content": {
+ "body": "This is an example text message",
+ "format": "org.matrix.custom.html",
+ "formatted_body": "<b>This is an example text message</b>",
+ "msgtype": "m.text"
+ },
+ "event_id": "$143273582443PhrSn:example.org",
+ "origin_server_ts": 1432735824653,
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "type": "m.room.message",
+ "unsigned": {
+ "age": 1234
+ }
+ },
+ {
+ "content": {
+ "name": "The room name"
+ },
+ "event_id": "$143273582443PhrSn:example.org",
+ "origin_server_ts": 1432735824653,
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "state_key": "",
+ "type": "m.room.name",
+ "unsigned": {
+ "age": 1234
+ }
+ },
+ {
+ "content": {
+ "body": "Gangnam Style",
+ "info": {
+ "duration": 2140786,
+ "h": 320,
+ "mimetype": "video/mp4",
+ "size": 1563685,
+ "thumbnail_info": {
+ "h": 300,
+ "mimetype": "image/jpeg",
+ "size": 46144,
+ "w": 300
+ },
+ "thumbnail_url": "mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe",
+ "w": 480
+ },
+ "msgtype": "m.video",
+ "url": "mxc://example.org/a526eYUSFFxlgbQYZmo442"
+ },
+ "event_id": "$143273582443PhrSn:example.org",
+ "origin_server_ts": 1432735824653,
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "type": "m.room.message",
+ "unsigned": {
+ "age": 1234
+ }
+ }
+ ],
+ "end": "t47409-4357353_219380_26003_2265",
+ "start": "t47429-4392820_219380_26003_2265"
+}
+
+The Room Timestamp to Event API endpoint fetches the event_id
of the closest event to the given
+timestamp (ts
query parameter) in the given direction (dir
query parameter).
Useful for cases like jump to date so you can start paginating messages from +a given date in the archive.
+The API is:
+ GET /_synapse/admin/v1/rooms/<room_id>/timestamp_to_event
+
+Parameters
+The following path parameters are required:
+room_id
- The ID of the room you wish to check.The following query parameters are available:
+ts
- a timestamp in milliseconds where we will find the closest event in
+the given direction.dir
- can be f
or b
to indicate forwards and backwards in time from the
+given timestamp. Defaults to f
.Response
+event_id
- The event ID closest to the given timestamp.origin_server_ts
- The timestamp of the event in milliseconds since the Unix epoch.The Block Room admin API allows server admins to block and unblock rooms, +and query to see if a given room is blocked. +This API can be used to pre-emptively block a room, even if it's unknown to this +homeserver. Users will be prevented from joining a blocked room.
+The API is:
+PUT /_synapse/admin/v1/rooms/<room_id>/block
+
+with a body of:
+{
+ "block": true
+}
+
+A response body like the following is returned:
+{
+ "block": true
+}
+
+Parameters
+The following parameters should be set in the URL:
+room_id
- The ID of the room.The following JSON body parameters are available:
+block
- If true
the room will be blocked and if false
the room will be unblocked.Response
+The following fields are possible in the JSON response body:
+block
- A boolean. true
if the room is blocked, otherwise false
The API is:
+GET /_synapse/admin/v1/rooms/<room_id>/block
+
+A response body like the following is returned:
+{
+ "block": true,
+ "user_id": "<user_id>"
+}
+
+Parameters
+The following parameters should be set in the URL:
+room_id
- The ID of the room.Response
+The following fields are possible in the JSON response body:
+block
- A boolean. true
if the room is blocked, otherwise false
user_id
- An optional string. If the room is blocked (block
is true
) shows
the user who has added the room to the blocking list. Otherwise it is not displayed.
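A short Python sketch covering both the block call and the status query above; the homeserver URL, admin token and room ID are placeholder assumptions:

import requests
from urllib.parse import quote

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token
ROOM_ID = "!roomid12345:example.org"        # placeholder room ID

headers = {"Authorization": f"Bearer {ADMIN_TOKEN}"}
url = f"{SYNAPSE_URL}/_synapse/admin/v1/rooms/{quote(ROOM_ID)}/block"

# Block the room (use {"block": False} to unblock again).
requests.put(url, headers=headers, json={"block": True}).raise_for_status()

# Check whether the room is currently blocked, and by whom.
status = requests.get(url, headers=headers).json()
print(status.get("block"), status.get("user_id"))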
+Shuts down a room. Moves all local users and room aliases automatically to a
+new room if new_room_user_id
is set. Otherwise local users only
+leave the room without any information.
The new room will be created with the user specified by the new_room_user_id
parameter
+as room administrator and will contain a message explaining what happened. Users invited
+to the new room will have power level -10
by default, and thus be unable to speak.
If block
is true
, users will be prevented from joining the old room.
+This option can in Version 1 also be used to pre-emptively
+block a room, even if it's unknown to this homeserver. In this case, the room will be
+blocked, and no further action will be taken. If block
is false
, attempting to
+delete an unknown room is invalid and will be rejected as a bad request.
This API will remove all trace of the old room from your database after removing
+all local users. If purge
is true
(the default), all traces of the old room will
+be removed from your database after removing all local users. If you do not want
+this to happen, set purge
to false
.
+Depending on the amount of history being purged, a call to the API may take
+several minutes or longer.
The local server will only have the power to move local user and room aliases to +the new room. Users on other servers will be unaffected.
+This version works synchronously. That means you only get the response once the server has +finished the action, which may take a long time. If you request the same action +a second time, and the server has not finished the first one, the second request will block. +This is fixed in version 2 of this API. The parameters are the same in both APIs. +This API will become deprecated in the future.
+The API is:
+DELETE /_synapse/admin/v1/rooms/<room_id>
+
+with a body of:
+{
+ "new_room_user_id": "@someuser:example.com",
+ "room_name": "Content Violation Notification",
+ "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
+ "block": true,
+ "purge": true
+}
+
+A response body like the following is returned:
+{
+ "kicked_users": [
+ "@foobar:example.com"
+ ],
+ "failed_to_kick_users": [],
+ "local_aliases": [
+ "#badroom:example.com",
+ "#evilsaloon:example.com"
+ ],
+ "new_room_id": "!newroomid:example.com"
+}
+
+The parameters and response values have the same format as +version 2 of the API.
+Note: This API is new, experimental and "subject to change".
+This version works asynchronously, meaning you get the response from server immediately +while the server works on that task in background. You can then request the status of the action +to check if it has completed.
+The API is:
+DELETE /_synapse/admin/v2/rooms/<room_id>
+
+with a body of:
+{
+ "new_room_user_id": "@someuser:example.com",
+ "room_name": "Content Violation Notification",
+ "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
+ "block": true,
+ "purge": true
+}
+
+The API starts the shut down and purge running, and returns immediately with a JSON body with +a purge id:
+{
+ "delete_id": "<opaque id>"
+}
+
+Parameters
+The following parameters should be set in the URL:
+room_id
- The ID of the room.The following JSON body parameters are available:
+new_room_user_id
- Optional. If set, a new room will be created with this user ID
+as the creator and admin, and all users in the old room will be moved into that
+room. If not set, no new room will be created and the users will just be removed
+from the old room. The user ID must be on the local server, but does not necessarily
+have to belong to a registered user.room_name
- Optional. A string representing the name of the room that new users will be
+invited to. Defaults to Content Violation Notification
message
- Optional. A string containing the first message that will be sent as
+new_room_user_id
in the new room. Ideally this will clearly convey why the
+original room was shut down. Defaults to Sharing illegal content on this server is not permitted and rooms in violation will be blocked.
block
- Optional. If set to true
, this room will be added to a blocking list,
+preventing future attempts to join the room. Rooms can be blocked
+even if they're not yet known to the homeserver (only with
+Version 1 of the API). Defaults to false
.purge
- Optional. If set to true
, it will remove all traces of the room from your database.
+Defaults to true
.force_purge
- Optional, and ignored unless purge
is true
. If set to true
, it
+will force a purge to go ahead even if there are local users still in the room. Do not
+use this unless a regular purge
operation fails, as it could leave those users'
+clients in a confused state.The JSON body must not be empty. The body must be at least {}
.
Note: This API is new, experimental and "subject to change".
+It is possible to query the status of the background task for deleting rooms. +The status can be queried up to 24 hours after completion of the task, +or until Synapse is restarted (whichever happens first).
+room_id
With this API you can get the status of all active deletion tasks, and all those completed in the last 24h,
+for the given room_id
.
The API is:
+GET /_synapse/admin/v2/rooms/<room_id>/delete_status
+
+A response body like the following is returned:
+{
+ "results": [
+ {
+ "delete_id": "delete_id1",
+ "status": "failed",
+ "error": "error message",
+ "shutdown_room": {
+ "kicked_users": [],
+ "failed_to_kick_users": [],
+ "local_aliases": [],
+ "new_room_id": null
+ }
+ }, {
+ "delete_id": "delete_id2",
+ "status": "purging",
+ "shutdown_room": {
+ "kicked_users": [
+ "@foobar:example.com"
+ ],
+ "failed_to_kick_users": [],
+ "local_aliases": [
+ "#badroom:example.com",
+ "#evilsaloon:example.com"
+ ],
+ "new_room_id": "!newroomid:example.com"
+ }
+ }
+ ]
+}
+
+Parameters
+The following parameters should be set in the URL:
+room_id
- The ID of the room.delete_id
With this API you can get the status of one specific task by delete_id
.
The API is:
+GET /_synapse/admin/v2/rooms/delete_status/<delete_id>
+
+A response body like the following is returned:
+{
+ "status": "purging",
+ "shutdown_room": {
+ "kicked_users": [
+ "@foobar:example.com"
+ ],
+ "failed_to_kick_users": [],
+ "local_aliases": [
+ "#badroom:example.com",
+ "#evilsaloon:example.com"
+ ],
+ "new_room_id": "!newroomid:example.com"
+ }
+}
+
+Parameters
+The following parameters should be set in the URL:
+delete_id
- The ID for this delete.The following fields are returned in the JSON response body:
+results
- An array of objects, each containing information about one task.
+This field is omitted from the result when you query by delete_id
.
+Task objects contain the following fields:
+delete_id
- The ID for this purge if you query by room_id
.status
- The status will be one of:
+shutting_down
- The process is removing users from the room.purging
- The process is purging the room and event data from database.complete
- The process has completed successfully.failed
- The process is aborted, an error has occurred.error
- A string that shows an error message if status
is failed
.
+Otherwise this field is hidden.shutdown_room
- An object containing information about the result of shutting down the room.
+Note: The result is shown after removing the room members.
+The delete process can still be running. Please pay attention to the status
.
+kicked_users
- An array of users (user_id
) that were kicked.failed_to_kick_users
- An array of users (user_id
) that that were not kicked.local_aliases
- An array of strings representing the local aliases that were
+migrated from the old room to the new.new_room_id
- A string representing the room ID of the new room, or null
if
+no such room was created.Note: This guide may be outdated by the time you read it. By nature of room deletions being performed at the database level, +the structure can and does change without notice.
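Combining the asynchronous Version 2 delete with the status polling described above, a rough Python sketch (homeserver URL, admin token and room ID are placeholders):

import time
import requests
from urllib.parse import quote

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token
ROOM_ID = "!badroom:example.org"            # placeholder room ID
headers = {"Authorization": f"Bearer {ADMIN_TOKEN}"}

# Start the asynchronous shutdown/purge.
resp = requests.delete(
    f"{SYNAPSE_URL}/_synapse/admin/v2/rooms/{quote(ROOM_ID)}",
    headers=headers,
    json={"block": True, "purge": True},
)
resp.raise_for_status()
delete_id = resp.json()["delete_id"]

# Poll until the task leaves the shutting_down/purging states.
while True:
    status = requests.get(
        f"{SYNAPSE_URL}/_synapse/admin/v2/rooms/delete_status/{delete_id}",
        headers=headers,
    ).json()["status"]
    if status in ("complete", "failed"):
        print("deletion finished:", status)
        break
    time.sleep(5)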
+First, it's important to understand that a room deletion is very destructive. Undoing a deletion is not as simple as pretending it +never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible +to recover at all:
+With all that being said, if you still want to try and recover the room:
+If the room was block
ed, you must unblock it on your server. This can be
+accomplished as follows:
DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';
+BEGIN; DELETE ...;
, verify you got 1 result, then COMMIT;
.This step is unnecessary if block
was not set.
Any room aliases on your server that pointed to the deleted room may have +been deleted, or redirected to the Content Violation room. These will need +to be restored manually.
+Users on your server that were in the deleted room will have been kicked +from the room. Consider whether you want to update their membership +(possibly via the Edit Room Membership API) or let +them handle rejoining themselves.
+If new_room_user_id
was given, a 'Content Violation' will have been
+created. Consider whether you want to delete that room.
Grants another user the highest power available to a local user who is in the room. +If the user is not in the room, and it is not publicly joinable, then invite the user.
+By default the server admin (the caller) is granted power, but another user can +optionally be specified, e.g.:
+POST /_synapse/admin/v1/rooms/<room_id_or_alias>/make_room_admin
+{
+ "user_id": "@foo:example.com"
+}
+
+Enables querying and deleting forward extremities from rooms. When a lot of forward +extremities accumulate in a room, performance can become degraded. For details, see +#1760.
+To check the status of forward extremities for a room:
+GET /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
+
+A response as follows will be returned:
+{
+ "count": 1,
+ "results": [
+ {
+ "event_id": "$M5SP266vsnxctfwFgFLNceaCo3ujhRtg_NiiHabcdefgh",
+ "state_group": 439,
+ "depth": 123,
+ "received_ts": 1611263016761
+ }
+ ]
+}
+
+WARNING: Please ensure you know what you're doing and have read +the related issue #1760. +Under no situations should this API be executed as an automated maintenance task!
+If a room has lots of forward extremities, the extra can be +deleted as follows:
+DELETE /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
+
+A response as follows will be returned, indicating the amount of forward extremities +that were deleted.
+{
+ "deleted": 1
+}
+
+This API lets a client find the context of an event. This is designed primarily to investigate abuse reports.
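For illustration only, a Python sketch that checks the number of forward extremities and deletes the extras only if there are many; the URL, token, room ID and threshold are assumptions, and the warning above still applies:

import requests
from urllib.parse import quote

SYNAPSE_URL = "https://matrix.example.com"  # assumed homeserver base URL
ADMIN_TOKEN = "<admin access token>"        # assumed server admin token
ROOM_ID = "!roomid12345:example.org"        # placeholder room ID

headers = {"Authorization": f"Bearer {ADMIN_TOKEN}"}
url = f"{SYNAPSE_URL}/_synapse/admin/v1/rooms/{quote(ROOM_ID)}/forward_extremities"

count = requests.get(url, headers=headers).json()["count"]
print("forward extremities:", count)

# Only delete after reading issue #1760 and deciding it is really necessary.
if count > 10:
    deleted = requests.delete(url, headers=headers).json()["deleted"]
    print("deleted extremities:", deleted)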
+GET /_synapse/admin/v1/rooms/<room_id>/context/<event_id>
+
+This API mimmicks GET /_matrix/client/r0/rooms/{roomId}/context/{eventId}. Please refer to the link for all details on parameters and reseponse.
+Example response:
+{
+ "end": "t29-57_2_0_2",
+ "events_after": [
+ {
+ "content": {
+ "body": "This is an example text message",
+ "msgtype": "m.text",
+ "format": "org.matrix.custom.html",
+ "formatted_body": "<b>This is an example text message</b>"
+ },
+ "type": "m.room.message",
+ "event_id": "$143273582443PhrSn:example.org",
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "origin_server_ts": 1432735824653,
+ "unsigned": {
+ "age": 1234
+ }
+ }
+ ],
+ "event": {
+ "content": {
+ "body": "filename.jpg",
+ "info": {
+ "h": 398,
+ "w": 394,
+ "mimetype": "image/jpeg",
+ "size": 31037
+ },
+ "url": "mxc://example.org/JWEIFJgwEIhweiWJE",
+ "msgtype": "m.image"
+ },
+ "type": "m.room.message",
+ "event_id": "$f3h4d129462ha:example.com",
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "origin_server_ts": 1432735824653,
+ "unsigned": {
+ "age": 1234
+ }
+ },
+ "events_before": [
+ {
+ "content": {
+ "body": "something-important.doc",
+ "filename": "something-important.doc",
+ "info": {
+ "mimetype": "application/msword",
+ "size": 46144
+ },
+ "msgtype": "m.file",
+ "url": "mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe"
+ },
+ "type": "m.room.message",
+ "event_id": "$143273582443PhrSn:example.org",
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "origin_server_ts": 1432735824653,
+ "unsigned": {
+ "age": 1234
+ }
+ }
+ ],
+ "start": "t27-54_2_0_2",
+ "state": [
+ {
+ "content": {
+ "creator": "@example:example.org",
+ "room_version": "1",
+ "m.federate": true,
+ "predecessor": {
+ "event_id": "$something:example.org",
+ "room_id": "!oldroom:example.org"
+ }
+ },
+ "type": "m.room.create",
+ "event_id": "$143273582443PhrSn:example.org",
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "origin_server_ts": 1432735824653,
+ "unsigned": {
+ "age": 1234
+ },
+ "state_key": ""
+ },
+ {
+ "content": {
+ "membership": "join",
+ "avatar_url": "mxc://example.org/SEsfnsuifSDFSSEF",
+ "displayname": "Alice Margatroid"
+ },
+ "type": "m.room.member",
+ "event_id": "$143273582443PhrSn:example.org",
+ "room_id": "!636q39766251:example.com",
+ "sender": "@example:example.org",
+ "origin_server_ts": 1432735824653,
+ "unsigned": {
+ "age": 1234
+ },
+ "state_key": "@alice:example.org"
+ }
+ ]
+}
+
+
+ The API to send notices is as follows:
+POST /_synapse/admin/v1/send_server_notice
+
+or:
+PUT /_synapse/admin/v1/send_server_notice/{txnId}
+
+You will need to authenticate with an access token for an admin user.
+When using the PUT
form, retransmissions with the same transaction ID will be
+ignored in the same way as with PUT /_matrix/client/r0/rooms/{roomId}/send/{eventType}/{txnId}
.
The request body should look something like the following:
+{
+ "user_id": "@target_user:server_name",
+ "content": {
+ "msgtype": "m.text",
+ "body": "This is my message"
+ }
+}
+
+You can optionally include the following additional parameters:
+type
: the type of event. Defaults to m.room.message
.state_key
: Setting this will result in a state event being sent.Once the notice has been sent, the API will return the following response:
+{
+ "event_id": "<event_id>"
+}
+
+Note that server notices must be enabled in homeserver.yaml
before this API
+can be used. See the server notices documentation for more information.
Returns information about all local media usage of users. Gives the +possibility to filter them by time and user.
+To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
The API is:
+GET /_synapse/admin/v1/statistics/users/media
+
+A response body like the following is returned:
+{
+ "users": [
+ {
+ "displayname": "foo_user_0",
+ "media_count": 2,
+ "media_length": 134,
+ "user_id": "@foo_user_0:test"
+ },
+ {
+ "displayname": "foo_user_1",
+ "media_count": 2,
+ "media_length": 134,
+ "user_id": "@foo_user_1:test"
+ }
+ ],
+ "next_token": 3,
+ "total": 10
+}
+
+To paginate, check for next_token
and if present, call the endpoint
+again with from
set to the value of next_token
. This will return a new page.
If the endpoint does not return a next_token
then there are no more
+reports to paginate through.
Parameters
+The following parameters should be set in the URL:
+limit
: string representing a positive integer - Is optional but is
+used for pagination, denoting the maximum number of items to return
+in this call. Defaults to 100
.from
: string representing a positive integer - Is optional but used for pagination,
+denoting the offset in the returned results. This should be treated as an opaque value
+and not explicitly set to anything other than the return value of next_token
from a
+previous call. Defaults to 0
.order_by
- string - The method in which to sort the returned list of users. Valid values are:
+user_id
- Users are ordered alphabetically by user_id
. This is the default.displayname
- Users are ordered alphabetically by displayname
.media_length
- Users are ordered by the total size of uploaded media in bytes.
+Smallest to largest.media_count
- Users are ordered by number of uploaded media. Smallest to largest.from_ts
- string representing a positive integer - Considers only
+files created at this timestamp or later. Unix timestamp in ms.until_ts
- string representing a positive integer - Considers only
+files created at this timestamp or earlier. Unix timestamp in ms.search_term
- string - Filter users by their user ID localpart or displayname.
+The search term can be found in any part of the string.
+Defaults to no filtering.dir
- string - Direction of order. Either f
for forwards or b
for backwards.
+Setting this value to b
will reverse the above sort order. Defaults to f
.Response
+The following fields are returned in the JSON response body:
+users
- An array of objects, each containing information
+about the user and their local media. Objects contain the following fields:
+displayname
- string - Displayname of this user.media_count
- integer - Number of uploaded media by this user.media_length
- integer - Size of uploaded media in bytes by this user.user_id
- string - Fully-qualified user ID (ex. @user:server.com
).next_token
- integer - Opaque value used for pagination. See above.total
- integer - Total number of users after filtering.Returns the 10 largest rooms and an estimate of how much space in the database +they are taking.
+This does not include the size of any associated media associated with the room.
+Returns an error on SQLite.
+Note: This uses the planner statistics from PostgreSQL to do the estimates, +which means that the returned information can vary widely from reality. However, +it should be enough to get a rough idea of where database disk space is going.
+The API is:
+GET /_synapse/admin/v1/statistics/database/rooms
+
+A response body like the following is returned:
+{
+ "rooms": [
+ {
+ "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
+ "estimated_size": 47325417353
+ }
+ ]
+}
+
+Response
+The following fields are returned in the JSON response body:
+rooms
- An array of objects, sorted by largest room first. Objects contain
+the following fields:
+room_id
- string - The room ID.estimated_size
- integer - Estimated disk space used in bytes by the room
+in the database.Added in Synapse 1.83.0
+ +To use it, you will need to authenticate by providing an access_token
+for a server admin: see Admin API.
This API returns information about a specific user account.
+The api is:
+GET /_synapse/admin/v2/users/<user_id>
+
+It returns a JSON body like the following:
+{
+ "name": "@user:example.com",
+ "displayname": "User", // can be null if not set
+ "threepids": [
+ {
+ "medium": "email",
+ "address": "<user_mail_1>",
+ "added_at": 1586458409743,
+ "validated_at": 1586458409743
+ },
+ {
+ "medium": "email",
+ "address": "<user_mail_2>",
+ "added_at": 1586458409743,
+ "validated_at": 1586458409743
+ }
+ ],
+ "avatar_url": "<avatar_url>", // can be null if not set
+ "is_guest": 0,
+ "admin": 0,
+ "deactivated": 0,
+ "erased": false,
+ "shadow_banned": 0,
+ "creation_ts": 1560432506,
+ "appservice_id": null,
+ "consent_server_notice_sent": null,
+ "consent_version": null,
+ "consent_ts": null,
+ "external_ids": [
+ {
+ "auth_provider": "<provider1>",
+ "external_id": "<user_id_provider_1>"
+ },
+ {
+ "auth_provider": "<provider2>",
+ "external_id": "<user_id_provider_2>"
+ }
+ ],
+ "user_type": null,
+ "locked": false
+}
+
+URL parameters:
+user_id
: fully-qualified user id: for example, @user:server.com
.This API allows an administrator to create or modify a user account with a
+specific user_id
.
This api is:
+PUT /_synapse/admin/v2/users/<user_id>
+
+with a body of:
+{
+ "password": "user_password",
+ "logout_devices": false,
+ "displayname": "Alice Marigold",
+ "avatar_url": "mxc://example.com/abcde12345",
+ "threepids": [
+ {
+ "medium": "email",
+ "address": "alice@example.com"
+ },
+ {
+ "medium": "email",
+ "address": "alice@domain.org"
+ }
+ ],
+ "external_ids": [
+ {
+ "auth_provider": "example",
+ "external_id": "12345"
+ },
+ {
+ "auth_provider": "example2",
+ "external_id": "abc54321"
+ }
+ ],
+ "admin": false,
+ "deactivated": false,
+ "user_type": null,
+ "locked": false
+}
+
+Returns HTTP status code:
+201
- When a new user object was created.200
- When a user was modified.URL parameters:
+user_id
- A fully-qualified user id. For example, @user:server.com
.Body parameters:
+password
- string, optional. If provided, the user's password is updated and all
+devices are logged out, unless logout_devices
is set to false
.
logout_devices
- bool, optional, defaults to true
. If set to false
, devices aren't
+logged out even when password
is provided.
displayname
- string, optional. If set to an empty string (""
), the user's display name
+will be removed.
avatar_url
- string, optional. Must be a
+MXC URI.
+If set to an empty string (""
), the user's avatar is removed.
threepids
- array, optional. If provided, the user's third-party IDs (email, msisdn) are
+entirely replaced with the given list. Each item in the array is an object with the following
+fields:
medium
- string, required. The type of third-party ID, either email
or msisdn
(phone number).address
- string, required. The third-party ID itself, e.g. alice@example.com
for email
or
+447470274584
(for a phone number with country code "44") and 19254857364
(for a phone number
+with country code "1") for msisdn
.
+Note: If a threepid is removed from a user via this option, Synapse will also attempt to remove
+that threepid from any identity servers it is aware has a binding for it.external_ids
- array, optional. Allow setting the identifier of the external identity
+provider for SSO (Single sign-on). More details are in the configuration manual under the
+sections sso and oidc_providers.
auth_provider
- string, required. The unique, internal ID of the external identity provider.
+The same as idp_id
from the homeserver configuration. If using OIDC, this value should be prefixed
+with oidc-
. Note that no error is raised if the provided value is not in the homeserver configuration.external_id
- string, required. An identifier for the user in the external identity provider.
+When the user logs in to the identity provider, this must be the unique ID that they map to.admin
- bool, optional, defaults to false
. Whether the user is a homeserver administrator,
+granting them access to the Admin API, among other things.
deactivated
- bool, optional. If unspecified, deactivation state will be left unchanged.
Note: the password field must also be set if you are reactivating a previously deactivated user (setting deactivated to false) and users are allowed to set passwords on this homeserver (i.e. password_config.localdb_enabled is not set to false)
.
+Users' passwords are wiped upon account deactivation, hence the need to set a new one here.Note: a user cannot be erased with this API. For more details on +deactivating and erasing users see Deactivate Account.
+locked
- bool, optional. If unspecified, locked state will be left unchanged.
user_type
- string or null, optional. If not provided, the user type will
+not be changed. If null
is given, the user type will be cleared.
+Other allowed options are: bot
and support
.
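As a usage illustration for the endpoint above, here is a minimal Python sketch (using the requests library; the homeserver URL, admin token and account details are placeholders rather than values taken from this documentation):

import requests

BASE_URL = "https://homeserver.example.com"   # placeholder
HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder
user_id = "@alice:homeserver.example.com"     # placeholder

body = {
    "displayname": "Alice Marigold",
    "admin": False,
    # password is optional; omit it to leave the password unchanged.
    "password": "a-strong-password",
    "logout_devices": False,
}

resp = requests.put(
    f"{BASE_URL}/_synapse/admin/v2/users/{user_id}",
    headers=HEADERS,
    json=body,
)
resp.raise_for_status()
# 201 means a new account was created, 200 means an existing one was modified.
print(resp.status_code)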
This API returns all local user accounts. +By default, the response is ordered by ascending user ID.
+GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
+
+A response body like the following is returned:
+{
+ "users": [
+ {
+ "name": "<user_id1>",
+ "is_guest": 0,
+ "admin": 0,
+ "user_type": null,
+ "deactivated": 0,
+ "erased": false,
+ "shadow_banned": 0,
+ "displayname": "<User One>",
+ "avatar_url": null,
+ "creation_ts": 1560432668000,
+ "locked": false
+ }, {
+ "name": "<user_id2>",
+ "is_guest": 0,
+ "admin": 1,
+ "user_type": null,
+ "deactivated": 0,
+ "erased": false,
+ "shadow_banned": 0,
+ "displayname": "<User Two>",
+ "avatar_url": "<avatar_url>",
+ "creation_ts": 1561550621000,
+ "locked": false
+ }
+ ],
+ "next_token": "100",
+ "total": 200
+}
+
+To paginate, check for next_token
and if present, call the endpoint again
+with from
set to the value of next_token
. This will return a new page.
If the endpoint does not return a next_token
then there are no more users
+to paginate through.
Parameters
+The following parameters should be set in the URL:
+user_id
- Is optional and filters to only return users with user IDs
+that contain this value. This parameter is ignored when using the name
parameter.
name
- Is optional and filters to only return users with user ID localparts
+or displaynames that contain this value.
guests
- string representing a bool - Is optional and if false
will exclude guest users.
+Defaults to true
to include guest users. This parameter is not supported when MSC3861 is enabled. See #15582
admins
- Optional flag to filter admins. If true
, only admins are queried. If false
, admins are excluded from
+the query. When the flag is absent (the default), both admins and non-admins are included in the search results.
deactivated
- string representing a bool - Is optional and if true
will include deactivated users.
+Defaults to false
to exclude deactivated users.
limit
- string representing a positive integer - Is optional but is used for pagination,
+denoting the maximum number of items to return in this call. Defaults to 100
.
from
- string representing a positive integer - Is optional but used for pagination,
+denoting the offset in the returned results. This should be treated as an opaque value and
+not explicitly set to anything other than the return value of next_token
from a previous call.
+Defaults to 0
.
order_by
- The method by which to sort the returned list of users.
+If the ordered field has duplicates, the second order is always by ascending name
,
+which guarantees a stable ordering. Valid values are:
name
- Users are ordered alphabetically by name
. This is the default.is_guest
- Users are ordered by is_guest
status.admin
- Users are ordered by admin
status.user_type
- Users are ordered alphabetically by user_type
.deactivated
- Users are ordered by deactivated
status.shadow_banned
- Users are ordered by shadow_banned
status.displayname
- Users are ordered alphabetically by displayname
.avatar_url
- Users are ordered alphabetically by avatar URL.creation_ts
- Users are ordered by when the user was created, in ms.last_seen_ts
- Users are ordered by when the user was last seen, in ms.dir
- Direction of sort order. Either f
for forwards or b
for backwards.
+Setting this value to b
will reverse the above sort order. Defaults to f
.
not_user_type
- Exclude certain user types, such as bot users, from the request.
+Can be provided multiple times. Possible values are bot
, support
or "empty string".
+"empty string" here means to exclude users without a type.
locked
- string representing a bool - Is optional and if true
will include locked users.
+Defaults to false
to exclude locked users. Note: Introduced in v1.93.
Caution. The database only has indexes on the columns name
and creation_ts
.
+This means that if a different sort order is used (is_guest
, admin
,
+user_type
, deactivated
, shadow_banned
, avatar_url
or displayname
),
+this can cause a large load on the database, especially for large environments.
Response
+The following fields are returned in the JSON response body:
+users
- An array of objects, each containing information about an user.
+User objects contain the following fields:
name
- string - Fully-qualified user ID (ex. @user:server.com
).is_guest
- bool - Status if that user is a guest account.admin
- bool - Status if that user is a server administrator.user_type
- string - Type of the user. Normal users are type None
.
+This allows user type specific behaviour. There are also types support
and bot
.deactivated
- bool - Status if that user has been marked as deactivated.erased
- bool - Status if that user has been marked as erased.shadow_banned
- bool - Status if that user has been marked as shadow banned.displayname
- string - The user's display name if they have set one.avatar_url
- string - The user's avatar URL if they have set one.creation_ts
- integer - The user's creation timestamp in ms.last_seen_ts
- integer - The user's last activity timestamp in ms.locked
- bool - Status if that user has been marked as locked. Note: Introduced in v1.93.next_token
: string representing a positive integer - Indication for pagination. See above.
total
- integer - Total number of media.
Added in Synapse 1.93: the locked
query parameter and response field.
This API returns all local user accounts (see v2). In contrast to v2, the query parameter deactivated
is handled differently.
GET /_synapse/admin/v3/users
+
+Parameters
+deactivated
- Optional flag to filter deactivated users. If true
, only deactivated users are returned.
+If false
, deactivated users are excluded from the query. When the flag is absent (the default),
+users are not filtered by deactivation status.This API returns information about the active sessions for a specific user.
+The endpoints are:
+GET /_synapse/admin/v1/whois/<user_id>
+
+and:
+GET /_matrix/client/r0/admin/whois/<userId>
+
+See also: Client Server +API Whois.
+It returns a JSON body like the following:
+{
+ "user_id": "<user_id>",
+ "devices": {
+ "": {
+ "sessions": [
+ {
+ "connections": [
+ {
+ "ip": "1.2.3.4",
+ "last_seen": 1417222374433,
+ "user_agent": "Mozilla/5.0 ..."
+ },
+ {
+ "ip": "1.2.3.10",
+ "last_seen": 1417222374500,
+ "user_agent": "Dalvik/2.1.0 ..."
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+
+last_seen
is measured in milliseconds since the Unix epoch.
This API deactivates an account. It removes active access tokens, resets the +password, and deletes third-party IDs (to prevent the user requesting a +password reset).
+It can also mark the user as GDPR-erased. This means messages sent by the +user will still be visible by anyone that was in the room when these messages +were sent, but hidden from users joining the room afterwards.
+The api is:
+POST /_synapse/admin/v1/deactivate/<user_id>
+
+with a body of:
+{
+ "erase": true
+}
+
+The erase parameter is optional and defaults to false
.
+An empty body may be passed for backwards compatibility.
The following actions are performed when deactivating an user:
+The following additional actions are performed during deactivation if erase
+is set to true
:
The following actions are NOT performed. The list may be incomplete.
+Note: This API is disabled when MSC3861 is enabled. See #15582
+Changes the password of another user. This will automatically log the user out of all their devices.
+The api is:
+POST /_synapse/admin/v1/reset_password/<user_id>
+
+with a body of:
+{
+ "new_password": "<secret>",
+ "logout_devices": true
+}
+
+The parameter new_password
is required.
+The parameter logout_devices
is optional and defaults to true
.
Note: This API is disabled when MSC3861 is enabled. See #15582
+The api is:
+GET /_synapse/admin/v1/users/<user_id>/admin
+
+A response body like the following is returned:
+{
+ "admin": true
+}
+
+Note: This API is disabled when MSC3861 is enabled. See #15582
+Note that you cannot demote yourself.
+The api is:
+PUT /_synapse/admin/v1/users/<user_id>/admin
+
+with a body of:
+{
+ "admin": true
+}
+
+Gets a list of all room_id
that a specific user_id
is a member of.
The API is:
+GET /_synapse/admin/v1/users/<user_id>/joined_rooms
+
+A response body like the following is returned:
+ {
+ "joined_rooms": [
+ "!DuGcnbhHGaSZQoNQR:matrix.org",
+ "!ZtSaPCawyWtxfWiIy:matrix.org"
+ ],
+ "total": 2
+ }
+
+The server returns the list of rooms of which the user and the server +are member. If the user is local, all the rooms of which the user is +member are returned.
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.Response
+The following fields are returned in the JSON response body:
+joined_rooms
- An array of room_id
.total
- Number of rooms.Gets information about account data for a specific user_id
.
The API is:
+GET /_synapse/admin/v1/users/<user_id>/accountdata
+
+A response body like the following is returned:
+{
+ "account_data": {
+ "global": {
+ "m.secret_storage.key.LmIGHTg5W": {
+ "algorithm": "m.secret_storage.v1.aes-hmac-sha2",
+ "iv": "fwjNZatxg==",
+ "mac": "eWh9kNnLWZUNOgnc="
+ },
+ "im.vector.hide_profile": {
+ "hide_profile": true
+ },
+ "org.matrix.preview_urls": {
+ "disable": false
+ },
+ "im.vector.riot.breadcrumb_rooms": {
+ "rooms": [
+ "!LxcBDAsDUVAfJDEo:matrix.org",
+ "!MAhRxqasbItjOqxu:matrix.org"
+ ]
+ },
+ "m.accepted_terms": {
+ "accepted": [
+ "https://example.org/somewhere/privacy-1.2-en.html",
+ "https://example.org/somewhere/terms-2.0-en.html"
+ ]
+ },
+ "im.vector.setting.breadcrumbs": {
+ "recent_rooms": [
+ "!MAhRxqasbItqxuEt:matrix.org",
+ "!ZtSaPCawyWtxiImy:matrix.org"
+ ]
+ }
+ },
+ "rooms": {
+ "!GUdfZSHUJibpiVqHYd:matrix.org": {
+ "m.fully_read": {
+ "event_id": "$156334540fYIhZ:matrix.org"
+ }
+ },
+ "!tOZwOOiqwCYQkLhV:matrix.org": {
+ "m.fully_read": {
+ "event_id": "$xjsIyp4_NaVl2yPvIZs_k1Jl8tsC_Sp23wjqXPno"
+ }
+ }
+ }
+ }
+}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.Response
+The following fields are returned in the JSON response body:
+account_data
- A map containing the account data for the user
+global
- A map containing the global account data for the userrooms
- A map containing the account data per room for the userGets a list of all local media that a specific user_id
has created.
+These are media that the user has uploaded themselves
+(local media), as well as
+URL preview images requested by the user if the
+feature is enabled.
By default, the response is ordered by descending creation date and ascending media ID.
+The newest media is on top. You can change the order with parameters
+order_by
and dir
.
The API is:
+GET /_synapse/admin/v1/users/<user_id>/media
+
+A response body like the following is returned:
+{
+ "media": [
+ {
+ "created_ts": 100400,
+ "last_access_ts": null,
+ "media_id": "qXhyRzulkwLsNHTbpHreuEgo",
+ "media_length": 67,
+ "media_type": "image/png",
+ "quarantined_by": null,
+ "safe_from_quarantine": false,
+ "upload_name": "test1.png"
+ },
+ {
+ "created_ts": 200400,
+ "last_access_ts": null,
+ "media_id": "FHfiSnzoINDatrXHQIXBtahw",
+ "media_length": 67,
+ "media_type": "image/png",
+ "quarantined_by": null,
+ "safe_from_quarantine": false,
+ "upload_name": "test2.png"
+ },
+ {
+ "created_ts": 300400,
+ "last_access_ts": 300700,
+ "media_id": "BzYNLRUgGHphBkdKGbzXwbjX",
+ "media_length": 1337,
+ "media_type": "application/octet-stream",
+ "quarantined_by": null,
+ "safe_from_quarantine": false,
+ "upload_name": null
+ }
+ ],
+ "next_token": 3,
+ "total": 2
+}
+
+To paginate, check for next_token
and if present, call the endpoint again
+with from
set to the value of next_token
. This will return a new page.
If the endpoint does not return a next_token
then there are no more
media to paginate through.
Parameters
+The following parameters should be set in the URL:
+user_id
- string - fully qualified: for example, @user:server.com
.
limit
: string representing a positive integer - Is optional but is used for pagination,
+denoting the maximum number of items to return in this call. Defaults to 100
.
from
: string representing a positive integer - Is optional but used for pagination,
+denoting the offset in the returned results. This should be treated as an opaque value and
+not explicitly set to anything other than the return value of next_token
from a previous call.
+Defaults to 0
.
order_by
- The method by which to sort the returned list of media.
+If the ordered field has duplicates, the second order is always by ascending media_id
,
+which guarantees a stable ordering. Valid values are:
media_id
- Media are ordered alphabetically by media_id
.upload_name
- Media are ordered alphabetically by name the media was uploaded with.created_ts
- Media are ordered by when the content was uploaded in ms.
+Smallest to largest. This is the default.last_access_ts
- Media are ordered by when the content was last accessed in ms.
+Smallest to largest.media_length
- Media are ordered by length of the media in bytes.
+Smallest to largest.media_type
- Media are ordered alphabetically by MIME-type.quarantined_by
- Media are ordered alphabetically by the user ID that
+initiated the quarantine request for this media.safe_from_quarantine
- Media are ordered by the status if this media is safe
+from quarantining.dir
- Direction of media order. Either f
for forwards or b
for backwards.
+Setting this value to b
will reverse the above sort order. Defaults to f
.
If neither order_by
nor dir
is set, the default order is newest media on top
+(corresponds to order_by
= created_ts
and dir
= b
).
Caution. The database only has indexes on the columns media_id
,
+user_id
and created_ts
. This means that if a different sort order is used
+(upload_name
, last_access_ts
, media_length
, media_type
,
+quarantined_by
or safe_from_quarantine
), this can cause a large load on the
+database, especially for large environments.
Response
+The following fields are returned in the JSON response body:
+media
- An array of objects, each containing information about a media.
+Media objects contain the following fields:
+created_ts
- integer - Timestamp when the content was uploaded in ms.last_access_ts
- integer or null - Timestamp when the content was last accessed in ms.
+Null if there was no access, yet.media_id
- string - The id used to refer to the media. Details about the format
+are documented under
+media repository.media_length
- integer - Length of the media in bytes.media_type
- string - The MIME-type of the media.quarantined_by
- string or null - The user ID that initiated the quarantine request
+for this media. Null if not quarantined.safe_from_quarantine
- bool - Status if this media is safe from quarantining.upload_name
- string or null - The name the media was uploaded with. Null if not provided during upload.next_token
: integer - Indication for pagination. See above.total
- integer - Total number of media.This API deletes the local media from the disk of your own server
+that a specific user_id
has created. This includes any local thumbnails.
This API will not affect media that has been uploaded to external +media repositories (e.g https://github.com/turt2live/matrix-media-repo/).
+By default, the API deletes media ordered by descending creation date and ascending media ID.
+The newest media is deleted first. You can change the order with parameters
+order_by
and dir
. If no limit
is set the API deletes 100
files per request.
The API is:
+DELETE /_synapse/admin/v1/users/<user_id>/media
+
+A response body like the following is returned:
+{
+ "deleted_media": [
+ "abcdefghijklmnopqrstuvwx"
+ ],
+ "total": 1
+}
+
+The following fields are returned in the JSON response body:
+deleted_media
: an array of strings - List of deleted media_id
total
: integer - Total number of deleted media_id
Note: There is no next_token
. This is not useful for deleting media, because
+after deleting media the remaining media have a new order.
Parameters
+This API has the same parameters as +List media uploaded by a user. +With the parameters you can for example limit the number of files to delete at once or +delete largest/smallest or newest/oldest files first.
+Note: This API is disabled when MSC3861 is enabled. See #15582
+Get an access token that can be used to authenticate as that user. Useful for +when admins wish to do actions on behalf of a user.
+The API is:
+POST /_synapse/admin/v1/users/<user_id>/login
+{}
+
+An optional valid_until_ms
field can be specified in the request body as an
+integer timestamp that specifies when the token should expire. By default tokens
+do not expire. Note that this API does not allow a user to login as themselves
+(to create more tokens).
A response body like the following is returned:
+{
+ "access_token": "<opaque_access_token_string>"
+}
+
+This API does not generate a new device for the user, and so will not appear
+their /devices
list, and in general the target user should not be able to
+tell they have been logged in as.
To expire the token call the standard /logout
API with the token.
Note: The token will expire if the admin user calls /logout/all
from any
+of their devices, but the token will not expire if the target user does the
+same.
This endpoint is not intended for server administrator usage; +we describe it here for completeness.
+This API temporarily permits a user to replace their master cross-signing key +without going through +user-interactive authentication (UIA). +This is useful when Synapse has delegated its authentication to the +Matrix Authentication Service; +as Synapse cannot perform UIA is not possible in these circumstances.
+The API is
+POST /_synapse/admin/v1/users/<user_id>/_allow_cross_signing_replacement_without_uia
+{}
+
+If the user does not exist, or does exist but has no master cross-signing key,
+this will return with status code 404 Not Found
.
Otherwise, a response body like the following is returned, with status 200 OK
:
{
+ "updatable_without_uia_before_ms": 1234567890
+}
+
+The response body is a JSON object with a single field:
+updatable_without_uia_before_ms
: integer. The timestamp in milliseconds
+before which the user is permitted to replace their cross-signing key without
+going through UIA.Added in Synapse 1.97.0.
+Gets information about all devices for a specific user_id
.
The API is:
+GET /_synapse/admin/v2/users/<user_id>/devices
+
+A response body like the following is returned:
+{
+ "devices": [
+ {
+ "device_id": "QBUAZIFURK",
+ "display_name": "android",
+ "last_seen_ip": "1.2.3.4",
+ "last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
+ "last_seen_ts": 1474491775024,
+ "user_id": "<user_id>"
+ },
+ {
+ "device_id": "AUIECTSRND",
+ "display_name": "ios",
+ "last_seen_ip": "1.2.3.5",
+ "last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
+ "last_seen_ts": 1474491775025,
+ "user_id": "<user_id>"
+ }
+ ],
+ "total": 2
+}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.Response
+The following fields are returned in the JSON response body:
+devices
- An array of objects, each containing information about a device.
+Device objects contain the following fields:
device_id
- Identifier of device.display_name
- Display name set by the user for this device.
+Absent if no name has been set.last_seen_ip
- The IP address where this device was last seen.
+(May be a few minutes out of date, for efficiency reasons).last_seen_user_agent
- The user agent of the device when it was last seen.
+(May be a few minutes out of date, for efficiency reasons).last_seen_ts
- The timestamp (in milliseconds since the unix epoch) when this
+devices was last seen. (May be a few minutes out of date, for efficiency reasons).user_id
- Owner of device.total
- Total number of user's devices.
Creates a new device for a specific user_id
and device_id
. Does nothing if the device_id
+exists already.
The API is:
+POST /_synapse/admin/v2/users/<user_id>/devices
+
+{
+ "device_id": "QBUAZIFURK"
+}
+
+An empty JSON dict is returned.
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.The following fields are required in the JSON request body:
+device_id
- The device ID to create.Deletes the given devices for a specific user_id
, and invalidates
+any access token associated with them.
The API is:
+POST /_synapse/admin/v2/users/<user_id>/delete_devices
+
+{
+ "devices": [
+ "QBUAZIFURK",
+ "AUIECTSRND"
+ ]
+}
+
+An empty JSON dict is returned.
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.The following fields are required in the JSON request body:
+devices
- The list of device IDs to delete.Gets information on a single device, by device_id
for a specific user_id
.
The API is:
+GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+A response body like the following is returned:
+{
+ "device_id": "<device_id>",
+ "display_name": "android",
+ "last_seen_ip": "1.2.3.4",
+ "last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
+ "last_seen_ts": 1474491775024,
+ "user_id": "<user_id>"
+}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.device_id
- The device to retrieve.Response
+The following fields are returned in the JSON response body:
+device_id
- Identifier of device.display_name
- Display name set by the user for this device.
+Absent if no name has been set.last_seen_ip
- The IP address where this device was last seen.
+(May be a few minutes out of date, for efficiency reasons).
+last_seen_user_agent
- The user agent of the device when it was last seen.
+(May be a few minutes out of date, for efficiency reasons).last_seen_ts
- The timestamp (in milliseconds since the unix epoch) when this
+devices was last seen. (May be a few minutes out of date, for efficiency reasons).user_id
- Owner of device.Updates the metadata on the given device_id
for a specific user_id
.
The API is:
+PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+{
+ "display_name": "My other phone"
+}
+
+An empty JSON dict is returned.
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.device_id
- The device to update.The following fields are required in the JSON request body:
+display_name
- The new display name for this device. If not given,
+the display name is unchanged.Deletes the given device_id
for a specific user_id
,
+and invalidates any access token associated with it.
The API is:
+DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
+
+{}
+
+An empty JSON dict is returned.
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.device_id
- The device to delete.Gets information about all pushers for a specific user_id
.
The API is:
+GET /_synapse/admin/v1/users/<user_id>/pushers
+
+A response body like the following is returned:
+{
+ "pushers": [
+ {
+ "app_display_name":"HTTP Push Notifications",
+ "app_id":"m.http",
+ "data": {
+ "url":"example.com"
+ },
+ "device_display_name":"pushy push",
+ "kind":"http",
+ "lang":"None",
+ "profile_tag":"",
+ "pushkey":"a@example.com"
+ }
+ ],
+ "total": 1
+}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- fully qualified: for example, @user:server.com
.Response
+The following fields are returned in the JSON response body:
+pushers
- An array containing the current pushers for the user
app_display_name
- string - A string that will allow the user to identify
+what application owns this pusher.
app_id
- string - This is a reverse-DNS style identifier for the application.
+Max length, 64 chars.
data
- A dictionary of information for the pusher implementation itself.
url
- string - Required if kind
is http
. The URL to use to send
+notifications to.
format
- string - The format to use when sending notifications to the
+Push Gateway.
device_display_name
- string - A string that will allow the user to identify
+what device owns this pusher.
profile_tag
- string - This string determines which set of device specific rules
+this pusher executes.
kind
- string - The kind of pusher. "http" is a pusher that sends HTTP pokes.
lang
- string - The preferred language for receiving notifications
+(e.g. 'en' or 'en-US')
profile_tag
- string - This string determines which set of device specific rules
+this pusher executes.
pushkey
- string - This is a unique identifier for this pusher.
+Max length, 512 bytes.
total
- integer - Number of pushers.
See also the +Client-Server API Spec on pushers.
+Shadow-banning is a useful tool for moderating malicious or egregiously abusive users. +A shadow-banned users receives successful responses to their client-server API requests, +but the events are not propagated into rooms. This can be an effective tool as it +(hopefully) takes longer for the user to realise they are being moderated before +pivoting to another account.
+Shadow-banning a user should be used as a tool of last resort and may lead to confusing +or broken behaviour for the client. A shadow-banned user will not receive any +notification and it is generally more appropriate to ban or kick abusive users. +A shadow-banned user will be unable to contact anyone on the server.
+To shadow-ban a user the API is:
+POST /_synapse/admin/v1/users/<user_id>/shadow_ban
+
+To un-shadow-ban a user the API is:
+DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
+
+An empty JSON dict is returned in both cases.
+Parameters
+The following parameters should be set in the URL:
+user_id
- The fully qualified MXID: for example, @user:server.com
. The user must
+be local.This API allows to override or disable ratelimiting for a specific user. +There are specific APIs to set, get and delete a ratelimit.
+The API is:
+GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
+
+A response body like the following is returned:
+{
+ "messages_per_second": 0,
+ "burst_count": 0
+}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- The fully qualified MXID: for example, @user:server.com
. The user must
+be local.Response
+The following fields are returned in the JSON response body:
+messages_per_second
- integer - The number of actions that can
+be performed in a second. 0
mean that ratelimiting is disabled for this user.burst_count
- integer - How many actions that can be performed before
+being limited.If no custom ratelimit is set, an empty JSON dict is returned.
+{}
+
+The API is:
+POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
+
+A response body like the following is returned:
+{
+ "messages_per_second": 0,
+ "burst_count": 0
+}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- The fully qualified MXID: for example, @user:server.com
. The user must
+be local.Body parameters:
+messages_per_second
- positive integer, optional. The number of actions that can
+be performed in a second. Defaults to 0
.burst_count
- positive integer, optional. How many actions that can be performed
+before being limited. Defaults to 0
.To disable users' ratelimit set both values to 0
.
Response
+The following fields are returned in the JSON response body:
+messages_per_second
- integer - The number of actions that can
+be performed in a second.burst_count
- integer - How many actions that can be performed before
+being limited.The API is:
+DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
+
+An empty JSON dict is returned.
+{}
+
+Parameters
+The following parameters should be set in the URL:
+user_id
- The fully qualified MXID: for example, @user:server.com
. The user must
+be local.Checks to see if a username is available, and valid, for the server. See the client-server +API +for more information.
+This endpoint will work even if registration is disabled on the server, unlike
+/_matrix/client/r0/register/available
.
The API is:
+GET /_synapse/admin/v1/username_available?username=$localpart
+
+The request and response format is the same as the +/_matrix/client/r0/register/available API.
+The API is:
+GET /_synapse/admin/v1/auth_providers/$provider/users/$external_id
+
+When a user matched the given ID for the given provider, an HTTP code 200
with a response body like the following is returned:
{
+ "user_id": "@hello:example.org"
+}
+
+Parameters
+The following parameters should be set in the URL:
+provider
- The ID of the authentication provider, as advertised by the GET /_matrix/client/v3/login
API in the m.login.sso
authentication method.external_id
- The user ID from the authentication provider. Usually corresponds to the sub
claim for OIDC providers, or to the uid
attestation for SAML2 providers.The external_id
may have characters that are not URL-safe (typically /
, :
or @
), so it is advised to URL-encode those parameters.
Errors
+Returns a 404
HTTP status code if no user was found, with a response body like this:
{
+ "errcode":"M_NOT_FOUND",
+ "error":"User not found"
+}
+
+Added in Synapse 1.68.0.
+The API is:
+GET /_synapse/admin/v1/threepid/$medium/users/$address
+
+When a user matched the given address for the given medium, an HTTP code 200
with a response body like the following is returned:
{
+ "user_id": "@hello:example.org"
+}
+
+Parameters
+The following parameters should be set in the URL:
+medium
- Kind of third-party ID, either email
or msisdn
.address
- Value of the third-party ID.The address
may have characters that are not URL-safe, so it is advised to URL-encode those parameters.
Errors
+Returns a 404
HTTP status code if no user was found, with a response body like this:
{
+ "errcode":"M_NOT_FOUND",
+ "error":"User not found"
+}
+
+Added in Synapse 1.72.0.
+ +This API returns the running Synapse version. +This is useful when a Synapse instance +is behind a proxy that does not forward the 'Server' header (which also +contains Synapse version information).
+The api is:
+GET /_synapse/admin/v1/server_version
+
+It returns a JSON body like the following:
+{
+ "server_version": "0.99.2rc1 (b=develop, abcdef123)"
+}
+
+Changed in Synapse 1.94.0: The python_version
key was removed from the
+response body.
The registration of new application services depends on the homeserver used.
+In synapse, you need to create a new configuration file for your AS and add it
+to the list specified under the app_service_config_files
config
+option in your synapse config.
For example:
+app_service_config_files:
+- /home/matrix/.synapse/<your-AS>.yaml
+
+The format of the AS configuration file is as follows:
+id: <your-AS-id>
+url: <base url of AS>
+as_token: <token AS will add to requests to HS>
+hs_token: <token HS will add to requests to AS>
+sender_localpart: <localpart of AS user>
+namespaces:
+ users: # List of users we're interested in
+ - exclusive: <bool>
+ regex: <regex>
+ group_id: <group>
+ - ...
+ aliases: [] # List of aliases we're interested in
+ rooms: [] # List of room ids we're interested in
+
+exclusive
: If enabled, only this application service is allowed to register users in its namespace(s).
+group_id
: All users of this application service are dynamically joined to this group. This is useful for e.g user organisation or flairs.
See the spec for further details on how application services work.
+ +The auth chain difference algorithm is used by V2 state resolution, where a +naive implementation can be a significant source of CPU and DB usage.
+A state set is a set of state events; e.g. the input of a state resolution +algorithm is a collection of state sets.
+The auth chain of a set of events are all the events' auth events and their +auth events, recursively (i.e. the events reachable by walking the graph induced +by an event's auth events links).
+The auth chain difference of a collection of state sets is the union minus the +intersection of the sets of auth chains corresponding to the state sets, i.e an +event is in the auth chain difference if it is reachable by walking the auth +event graph from at least one of the state sets but not from all of the state +sets.
+A way of calculating the auth chain difference without calculating the full auth +chains for each state set is to do a parallel breadth first walk (ordered by +depth) of each state set's auth chain. By tracking which events are reachable +from each state set we can finish early if every pending event is reachable from +every state set.
+This can work well for state sets that have a small auth chain difference, but +can be very inefficient for larger differences. However, this algorithm is still +used if we don't have a chain cover index for the room (e.g. because we're in +the process of indexing it).
+Synapse computes auth chain differences by pre-computing a "chain cover" index
+for the auth chain in a room, allowing us to efficiently make reachability queries
+like "is event A
in the auth chain of event B
?". We could do this with an index
+that tracks all pairs (A, B)
such that A
is in the auth chain of B
. However, this
+would be prohibitively large, scaling poorly as the room accumulates more state
+events.
Instead, we break down the graph into chains. A chain is a subset of a DAG
+with the following property: for any pair of events E
and F
in the chain,
+the chain contains a path E -> F
or a path F -> E
. This forces a chain to be
+linear (without forks), e.g. E -> F -> G -> ... -> H
. Each event in the chain
+is given a sequence number local to that chain. The oldest event E
in the
+chain has sequence number 1. If E
has a child F
in the chain, then F
has
+sequence number 2. If E
has a grandchild G
in the chain, then G
has
+sequence number 3; and so on.
Synapse ensures that each persisted event belongs to exactly one chain, and +tracks how the chains are connected to one another. This allows us to +efficiently answer reachability queries. Doing so uses less storage than +tracking reachability on an event-by-event basis, particularly when we have +fewer and longer chains. See
+++Jagadish, H. (1990). A compression technique to materialize transitive closure. +ACM Transactions on Database Systems (TODS), 15*(4)*, 558-598.
+
for the original idea or
+++Y. Chen, Y. Chen, An efficient algorithm for answering graph +reachability queries, +in: 2008 IEEE 24th International Conference on Data Engineering, April 2008, +pp. 893–902. (PDF available via Google Scholar.)
+
for a more modern take.
+In practical terms, the chain cover assigns every event a
+chain ID and sequence number (e.g. (5,3)
), and maintains a map of links
+between events in chains (e.g. (5,3) -> (2,4)
) such that A
is reachable by B
+(i.e. A
is in the auth chain of B
) if and only if either:
A
and B
have the same chain ID and A
's sequence number is less than B
's
+sequence number; orL
between B
's chain ID and A
's chain ID such that
+L.start_seq_no
<= B.seq_no
and A.seq_no
<= L.end_seq_no
.There are actually two potential implementations, one where we store links from
+each chain to every other reachable chain (the transitive closure of the links
+graph), and one where we remove redundant links (the transitive reduction of the
+links graph) e.g. if we have chains C3 -> C2 -> C1
then the link C3 -> C1
+would not be stored. Synapse uses the former implementation so that it doesn't
+need to recurse to test reachability between chains. This trades-off extra storage
+in order to save CPU cycles and DB queries.
An example auth graph would look like the following, where chains have been
+formed based on type/state_key and are denoted by colour and are labelled with
+(chain ID, sequence number)
. Links are denoted by the arrows (links in grey
+are those that would be remove in the second implementation described above).
Note that we don't include all links between events and their auth events, as +most of those links would be redundant. For example, all events point to the +create event, but each chain only needs the one link from it's base to the +create event.
+This index can be used to calculate the auth chain difference of the state sets +by looking at the chain ID and sequence numbers reachable from each state set:
+Note that steps 2 is effectively calculating the auth chain for each state set +(in terms of chain IDs and sequence numbers), and step 3 is calculating the +difference between the union and intersection of the auth chains.
+For example, given the above graph, we can calculate the difference between +state sets consisting of:
+S1
: Alice's invite (4,1)
and Bob's second join (2,2)
; andS2
: Alice's second join (4,3)
and Bob's first join (2,1)
.Using the index we see that the following auth chains are reachable from each +state set:
+S1
: (1,1)
, (2,2)
, (3,1)
& (4,1)
S2
: (1,1)
, (2,1)
, (3,2)
& (4,3)
And so, for each the ranges that are in the auth chain difference:
+(1, 2]
(i.e. just 2
), as 1
is reachable by all state
+sets and the maximum reachable is 2
(corresponding to Bob's second join).(1, 2]
(corresponding to the second power
+level).(1, 3]
(corresponding to both of Alice's joins).So the final result is: Bob's second join (2,2)
, the second power level
+(3,2)
and both of Alice's joins (4,2)
& (4,3)
.
The Synapse codebase uses a number of code formatting tools in order to +quickly and automatically check for formatting (and sometimes logical) +errors in code.
+The necessary tools are:
+See the contributing guide for instructions +on how to install the above tools and run the linters.
+It's worth noting that modern IDEs and text editors can run these tools
+automatically on save. It may be worth looking into whether this
+functionality is supported in your editor for a more convenient
+development workflow. It is not, however, recommended to run mypy
+on save as it takes a while and can be very resource intensive.
CamelCase
for class and type namesfunction_names
and variable_names
.Imports should be sorted by isort
as described above.
Prefer to import classes and functions rather than packages or +modules.
+Example:
+from synapse.types import UserID
+...
+user_id = UserID(local, server)
+
+is preferred over:
+from synapse import types
+...
+user_id = types.UserID(local, server)
+
+(or any other variant).
+This goes against the advice in the Google style guide, but it +means that errors in the name are caught early (at import time).
+Avoid wildcard imports (from synapse.types import *
) and
+relative imports (from .types import UserID
).
When adding a configuration option to the code, if several settings are grouped into a single dict, ensure that your code
+correctly handles the top-level option being set to None
(as it will be if no sub-options are enabled).
The configuration manual acts as a +reference to Synapse's configuration options for server administrators. +Remember that many readers will be unfamiliar with YAML and server +administration in general, so it is important that when you add +a configuration option the documentation be as easy to understand as possible, which +includes following a consistent format.
+Some guidelines follow:
+Each option should be listed in the config manual with the following format:
+The name of the option, prefixed by ###
.
A comment which describes the default behaviour (i.e. what +happens if the setting is omitted), as well as what the effect +will be if the setting is changed.
+An example setting, using backticks to define the code block
+For boolean (on/off) options, convention is that this example +should be the opposite to the default. For other options, the example should give +some non-default value which is likely to be useful to the reader.
+There should be a horizontal rule between each option, which can be achieved by adding ---
before and
+after the option.
true
and false
are spelt thus (as opposed to True
, etc.)
Example:
+modules
Use the module
sub-option to add a module under modules
to extend functionality.
+The module
setting then has a sub-option, config
, which can be used to define some configuration
+for the module
.
Defaults to none.
+Example configuration:
+modules:
+ - module: my_super_module.MySuperClass
+ config:
+ do_thing: true
+ - module: my_other_super_module.SomeClass
+ config: {}
+
+Note that the sample configuration is generated from the synapse code
+and is maintained by a script, scripts-dev/generate_sample_config.sh
.
+Making sure that the output from this script matches the desired format
+is left as an exercise for the reader!
Synapse 0.30 introduces support for tracking whether users have agreed to the +terms and conditions set by the administrator of a server - and blocking access +to the server until they have.
+There are several parts to this functionality; each requires some specific
+configuration in homeserver.yaml
to be enabled.
Note that various parts of the configuration and this document refer to the +"privacy policy": agreement with a privacy policy is one particular use of this +feature, but of course administrators can specify other terms and conditions +unrelated to "privacy" per se.
+Synapse can be configured to serve the user a simple policy form with an +"accept" button. Clicking "Accept" records the user's acceptance in the +database and shows a success page.
+To enable this, first create templates for the policy and success pages. +These should be stored on the local filesystem.
+These templates use the Jinja2 templating language, +and docs/privacy_policy_templates +gives examples of the sort of thing that can be done.
+Note that the templates must be stored under a name giving the language of the
+template - currently this must always be en
(for "English");
+internationalisation support is intended for the future.
The template for the policy itself should be versioned and named according to
+the version: for example 1.0.html
. The version of the policy which the user
+has agreed to is stored in the database.
Once the templates are in place, make the following changes to homeserver.yaml
:
Add a user_consent
section, which should look like:
user_consent:
+ template_dir: privacy_policy_templates
+ version: 1.0
+
+template_dir
points to the directory containing the policy
+templates. version
defines the version of the policy which will be served
+to the user. In the example above, Synapse will serve
+privacy_policy_templates/en/1.0.html
.
Add a form_secret
setting at the top level:
form_secret: "<unique secret>"
+
+This should be set to an arbitrary secret string (try pwgen -y 30
to
+generate suitable secrets).
More on what this is used for below.
+Add consent
wherever the client
resource is currently enabled in the
+listeners
configuration. For example:
listeners:
+ - port: 8008
+ resources:
+ - names:
+ - client
+ - consent
+
+Finally, ensure that jinja2
is installed. If you are using a virtualenv, this
+should be a matter of pip install Jinja2
. On debian, try apt-get install python-jinja2
.
Once this is complete, and the server has been restarted, try visiting
+https://<server>/_matrix/consent
. If correctly configured, this should give
+an error "Missing string query parameter 'u'". It is now possible to manually
+construct URIs where users can give their consent.
Add the following to your configuration:
+user_consent:
+ require_at_registration: true
+ policy_name: "Privacy Policy" # or whatever you'd like to call the policy
+
+In your consent templates, make use of the public_version
variable to
+see if an unauthenticated user is viewing the page. This is typically
+wrapped around the form that would be used to actually agree to the document:
{% if not public_version %}
+ <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
+ <form method="post" action="consent">
+ <input type="hidden" name="v" value="{{version}}"/>
+ <input type="hidden" name="u" value="{{user}}"/>
+ <input type="hidden" name="h" value="{{userhmac}}"/>
+ <input type="submit" value="Sure thing!"/>
+ </form>
+{% endif %}
+
+Restart Synapse to apply the changes.
+Visiting https://<server>/_matrix/consent
should now give you a view of the privacy
+document. This is what users will be able to see when registering for accounts.
It may be useful to manually construct the "consent URI" for a given user - for
+instance, in order to send them an email asking them to consent. To do this,
+take the base https://<server>/_matrix/consent
URL and add the following
+query parameters:
u
: the user id of the user. This can either be a full MXID
+(@user:server.com
) or just the localpart (user
).
h
: hex-encoded HMAC-SHA256 of u
using the form_secret
as a key. It is
+possible to calculate this on the commandline with something like:
echo -n '<user>' | openssl sha256 -hmac '<form_secret>'
+
+This should result in a URI which looks something like:
+https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...
.
Note that not providing a u
parameter will be interpreted as wanting to view
+the document from an unauthenticated perspective, such as prior to registration.
+Therefore, the h
parameter is not required in this scenario. To enable this
+behaviour, set require_at_registration
to true
in your user_consent
config.
It is possible to configure Synapse to send a server +notice to anybody who has not yet agreed to the current +version of the policy. To do so:
+ensure that the consent resource is configured, as in the previous section
+ensure that server notices are configured, as in the server notice documentation.
+Add server_notice_content
under user_consent
in homeserver.yaml
. For
+example:
user_consent:
+ server_notice_content:
+ msgtype: m.text
+ body: >-
+ Please give your consent to the privacy policy at %(consent_uri)s.
+
+Synapse automatically replaces the placeholder %(consent_uri)s
with the
+consent uri for that user.
ensure that public_baseurl
is set in homeserver.yaml
, and gives the base
+URI that clients use to connect to the server. (It is used to construct
+consent_uri
in the server notice.)
Synapse can be configured to block any attempts to join rooms or send messages +until the user has given their agreement to the policy. (Joining the server +notices room is exempted from this).
+To enable this, add block_events_error
under user_consent
. For example:
user_consent:
+ block_events_error: >-
+ You can't send any messages until you consent to the privacy policy at
+ %(consent_uri)s.
+
+Synapse automatically replaces the placeholder %(consent_uri)s
with the
+consent uri for that user.
ensure that public_baseurl
is set in homeserver.yaml
, and gives the base
+URI that clients use to connect to the server. (It is used to construct
+consent_uri
in the error.)
In the following documentation, we use the term server_name
to refer to that setting
+in your homeserver configuration file. It appears at the ends of user ids, and tells
+other homeservers where they can find your server.
By default, other homeservers will expect to be able to reach yours via
+your server_name
, on port 8448. For example, if you set your server_name
+to example.com
(so that your user names look like @user:example.com
),
+other servers will try to connect to yours at https://example.com:8448/
.
Delegation is a Matrix feature allowing a homeserver admin to retain a
+server_name
of example.com
so that user IDs, room aliases, etc continue
+to look like *:example.com
, whilst having federation traffic routed
+to a different server and/or port (e.g. synapse.example.com:443
).
To use this method, you need to be able to configure the server at
+https://<server_name>
to serve a file at
+https://<server_name>/.well-known/matrix/server
. There are two ways to do this, shown below.
Note that the .well-known
file is hosted on the default port for https
(port 443).
For maximum flexibility, you need to configure an external server such as nginx, Apache
+or HAProxy to serve the https://<server_name>/.well-known/matrix/server
file. Setting
+up such a server is out of the scope of this documentation, but note that it is often
+possible to configure your reverse proxy for this.
The URL https://<server_name>/.well-known/matrix/server
should be configured
+return a JSON structure containing the key m.server
like this:
{
+ "m.server": "<synapse.server.name>[:<yourport>]"
+}
+
+In our example (where we want federation traffic to be routed to
+https://synapse.example.com
, on port 443), this would mean that
+https://example.com/.well-known/matrix/server
should return:
{
+ "m.server": "synapse.example.com:443"
+}
+
+Note, specifying a port is optional. If no port is specified, then it defaults +to 8448.
+.well-known/matrix/server
file with SynapseIf you are able to set up your domain so that https://<server_name>
is routed to
+Synapse (i.e., the only change needed is to direct federation traffic to port 443
+instead of port 8448), then it is possible to configure Synapse to serve a suitable
+.well-known/matrix/server
file. To do so, add the following to your homeserver.yaml
+file:
serve_server_wellknown: true
+
+Note: this only works if https://<server_name>
is routed to Synapse, so is
+generally not suitable if Synapse is hosted at a subdomain such as
+https://synapse.example.com
.
It is also possible to do delegation using a SRV DNS record. However, that is generally
+not recommended, as it can be difficult to configure the TLS certificates correctly in
+this case, and it offers little advantage over .well-known
delegation.
Please keep in mind that server delegation is a function of server-server communication,
+and as such using SRV DNS records will not cover use cases involving client-server comms.
+This means setting global client settings (such as a Jitsi endpoint, or disabling
+creating new rooms as encrypted by default, etc) will still require that you serve a file
+from the https://<server_name>/.well-known/
endpoints defined in the spec! If you are
+considering using SRV DNS delegation to avoid serving files from this endpoint, consider
+the impact that you will not be able to change those client-based default values globally,
+and will be relegated to the featureset of the configuration of each individual client.
However, if you really need it, you can find some documentation on what such a +record should look like and how Synapse will use it in the Matrix +specification.
+If your homeserver's APIs are accessible on the default federation port (8448)
+and the domain your server_name
points to, you do not need any delegation.
For instance, if you registered example.com
and pointed its DNS A record at a
+fresh server, you could install Synapse on that host, giving it a server_name
+of example.com
, and once a reverse proxy has been set up to proxy all requests
+sent to the port 8448
and serve TLS certificates for example.com
, you
+wouldn't need any delegation set up.
However, if your homeserver's APIs aren't accessible on port 8448 and on the
+domain server_name
points to, you will need to let other servers know how to
+find it using delegation.
Generally, using a reverse proxy for both the federation and client traffic is a good +idea, since it saves handling TLS traffic in Synapse. See +the reverse proxy documentation for information on setting up a +reverse proxy.
+ +Synapse has a number of platform dependencies, including Python, Rust, +PostgreSQL and SQLite. This document outlines the policy towards which versions +we support, and when we drop support for versions in the future.
+Synapse follows the upstream support life cycles for Python and PostgreSQL, +i.e. when a version reaches End of Life Synapse will withdraw support for that +version in future releases.
+Details on the upstream support life cycles for Python and PostgreSQL are +documented at https://endoflife.date/python and +https://endoflife.date/postgresql.
+A Rust compiler is required to build Synapse from source. For any given release +the minimum required version may be bumped up to a recent Rust version, and so +people building from source should ensure they can fetch recent versions of Rust +(e.g. by using rustup).
+The oldest supported version of SQLite is the version +provided by +Debian oldstable.
+It is important for system admins to have a clear understanding of the platform +requirements of Synapse and its deprecation policies so that they can +effectively plan upgrading their infrastructure ahead of time. This is +especially important in contexts where upgrading the infrastructure requires +auditing and approval from a security team, or where otherwise upgrading is a +long process.
+By following the upstream support life cycles Synapse can ensure that its +dependencies continue to get security patches, while not requiring system admins +to constantly update their platform dependencies to the latest versions.
+For Rust, the situation is a bit different given that a) the Rust foundation does not generally support older Rust versions, and b) the library ecosystem generally bumps its minimum supported Rust version frequently. In general, the Synapse team will try to avoid updating the dependency on Rust to the absolute latest version, but introducing a formal policy is hard given the constraints of the ecosystem.
+On a similar note, SQLite does not generally have a concept of "supported
+release"; bugfixes are published for the latest minor release only. We chose to
+track Debian's oldstable as this is relatively conservative, predictably updated
+and is consistent with the .deb
packages released by Matrix.org.
The django-mama-cas project is an +easy to run CAS implementation built on top of Django.
+python3 -m venv <your virtualenv>
source /path/to/your/virtualenv/bin/activate
python -m pip install "django<3" "django-mama-cas==2.4.0"
+
+django-admin startproject cas_test .
+
+python manage.py migrate
python manage.py createsuperuser
+
+python manage.py runserver
+
+You should now have a Django project configured to serve CAS authentication with +a single user created.
+homeserver.yaml
to enable CAS and point it to your locally
+running Django test server:
+cas_config:
+ enabled: true
+ server_url: "http://localhost:8000"
+ service_url: "http://localhost:8081"
+ #displayname_attribute: name
+ #required_attributes:
+ # name: value
+
+Note that the above configuration assumes the homeserver is running on port 8081 +and that the CAS server is on port 8000, both on localhost.
+Then in Element:
+createsuperuser
.If you want to repeat this process you'll need to manually logout first:
+This document aims to get you started with contributing to Synapse!
+Everyone is welcome to contribute code to +Synapse, provided that they are willing +to license their contributions to Element under a Contributor License +Agreement (CLA). This ensures that +their contribution will be made available under an OSI-approved open-source +license, currently Affero General Public License v3 (AGPLv3).
+Please see the +Element blog post +for the full rationale.
+If you are running Windows, the Windows Subsystem for Linux (WSL) is strongly +recommended for development. More information about WSL can be found at +https://docs.microsoft.com/en-us/windows/wsl/install. Running Synapse natively +on Windows is not officially supported.
+The code of Synapse is written in Python 3. To do pretty much anything, you'll need a recent version of Python 3. Your Python also needs support for virtual environments. This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running sudo apt install python3-venv
should be enough.
A recent version of the Rust compiler is needed to build the native modules. The +easiest way of installing the latest version is to use rustup.
+Synapse can connect to PostgreSQL via the psycopg2 Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with sudo apt install libpq-dev
.
Synapse has an optional, improved user search with better Unicode support. For that you need the development package of libicu
. On Debian or Ubuntu Linux, this can be installed with sudo apt install libicu-dev
.
The source code of Synapse is hosted on GitHub. You will also need a recent version of git.
+For some tests, you will need a recent version of Docker.
+The preferred and easiest way to contribute changes is to fork the relevant +project on GitHub, and then create a pull request to ask us to pull your +changes into our repo.
+Please base your changes on the develop
branch.
git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
+git checkout develop
+
+If you need help getting started with git, this is beyond the scope of the document, but you +can find many good git tutorials on the web.
+Before installing the Python dependencies, make sure you have installed a recent version +of Rust (see the "What do I need?" section above). The easiest way of installing the +latest version is to use rustup.
+Synapse uses the poetry project to manage its dependencies
+and development environment. Once you have installed Python 3 and added the
+source, you should install poetry
.
+Of their installation methods, we recommend
+installing poetry
using pipx
,
pip install --user pipx
+pipx install poetry
+
+but see poetry's installation instructions +for other installation methods.
+Developing Synapse requires Poetry version 1.3.2 or later.
+Next, open a terminal and install dependencies as follows:
+cd path/where/you/have/cloned/the/repository
+poetry install --extras all
+
+This will install the runtime and developer dependencies for the project. Be sure to check
+that the poetry install
step completed cleanly.
For OSX users, be sure to set PKG_CONFIG_PATH
to support icu4c
. Run brew info icu4c
for more details.
To start a local instance of Synapse in the locked poetry environment, create a config file:
+cp docs/sample_config.yaml homeserver.yaml
+cp docs/sample_log_config.yaml log_config.yaml
+
+Now edit homeserver.yaml
, things you might want to change include:
server_name
log_config
to point to the log config you just copiedregistration_shared_secret
so you can use register_new_matrix_user
command.And then run Synapse with the following command:
+poetry run python -m synapse.app.homeserver -c homeserver.yaml
+
+If you get an error like the following:
+importlib.metadata.PackageNotFoundError: matrix-synapse
+
+this probably indicates that the poetry install
step did not complete cleanly - go back and
+resolve any issues and re-run until successful.
Join our developer community on Matrix: #synapse-dev:matrix.org!
+Fix your favorite problem or perhaps find a Good First Issue +to work on.
+There is a growing amount of documentation located in the
+docs
+directory, with a rendered version available online.
+This documentation is intended primarily for sysadmins running their
+own Synapse instance, as well as developers interacting externally with
+Synapse.
+docs/development
+exists primarily to house documentation for
+Synapse developers.
+docs/admin_api
houses documentation
+regarding Synapse's Admin API, which is used mostly by sysadmins and external
+service developers.
Synapse's code style is documented here. Please follow +it, including the conventions for configuration +options and documentation.
+We welcome improvements and additions to our documentation itself! When
+writing new pages, please
+build docs
to a book
+to check that your contributions render correctly. The docs are written in
+GitHub-Flavoured Markdown.
When changes are made to any Rust code then you must call either poetry install
+or maturin develop
(if installed) to rebuild the Rust code. Using maturin
+is quicker than poetry install
, so is recommended when making frequent
+changes to the Rust code.
While you're developing and before submitting a patch, you'll +want to test your code.
+The linters look at your code and do two things:
+The linter takes no time at all to run as soon as you've downloaded the dependencies.
+poetry run ./scripts-dev/lint.sh
+
+Note that this script will modify your files to fix styling errors. +Make sure that you have saved all your files.
+If you wish to restrict the linters to only the files changed since the last commit +(much faster!), you can instead run:
+poetry run ./scripts-dev/lint.sh -d
+
+Or if you know exactly which files you wish to lint, you can instead run:
+poetry run ./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
+
+The unit tests run parts of Synapse, including your changes, to see if anything +was broken. They are slower than the linters but will typically catch more errors.
+poetry run trial tests
+
+You can run unit tests in parallel by specifying -jX
argument to trial
where X
is the number of parallel runners you want. To use 4 cpu cores, you would run them like:
poetry run trial -j4 tests
+
+If you wish to only run some unit tests, you may specify
+another module instead of tests
- or a test class or a method:
poetry run trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
+
+If your tests fail, you may wish to look at the logs (the default log level is ERROR
):
less _trial_temp/test.log
+
+To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL
:
SYNAPSE_TEST_LOG_LEVEL=DEBUG poetry run trial tests
+
+By default, tests will use an in-memory SQLite database for test data. For additional
+help with debugging, one can use an on-disk SQLite database file instead, in order to
+review database state during and after running tests. This can be done by setting
+the SYNAPSE_TEST_PERSIST_SQLITE_DB
environment variable. Doing so will cause the
+database state to be stored in a file named test.db
under the trial process'
+working directory. Typically, this ends up being _trial_temp/test.db
. For example:
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 poetry run trial tests
+
+The database file can then be inspected with:
+sqlite3 _trial_temp/test.db
+
+Note that the database file is cleared at the beginning of each test run. Thus it +will always only contain the data generated by the last run test. Though generally +when debugging, one is only running a single test anyway.
+Invoking trial
as above will use an in-memory SQLite database. This is great for
+quick development and testing. However, we recommend using a PostgreSQL database
+in production (and indeed, we have some code paths specific to each database).
+This means that we need to run our unit tests against PostgreSQL too. Our CI does
+this automatically for pull requests and release candidates, but it's sometimes
+useful to reproduce this locally.
The easiest way to do so is to run Postgres via a docker container. In one +terminal:
+docker run --rm -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres -p 5432:5432 postgres:14
+
+If you see an error like
+docker: Error response from daemon: driver failed programming external connectivity on endpoint nice_ride (b57bbe2e251b70015518d00c9981e8cb8346b5c785250341a6c53e3c899875f1): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use.
+
+then something is already bound to port 5432. You're probably already running postgres locally.
+Once you have a postgres server running, invoke trial
in a second terminal:
SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_HOST=127.0.0.1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests
+
+If you have postgres already installed on your system, you can run trial
with the
+following environment variables matching your configuration:
SYNAPSE_POSTGRES
to anything nonemptySYNAPSE_POSTGRES_HOST
(optional if it's the default: UNIX socket)SYNAPSE_POSTGRES_PORT
(optional if it's the default: 5432)SYNAPSE_POSTGRES_USER
(optional if using a UNIX socket)SYNAPSE_POSTGRES_PASSWORD
(optional if using a UNIX socket)For example:
+export SYNAPSE_POSTGRES=1
+export SYNAPSE_POSTGRES_HOST=localhost
+export SYNAPSE_POSTGRES_USER=postgres
+export SYNAPSE_POSTGRES_PASSWORD=mydevenvpassword
+trial
+
+You don't need to specify the host, user, port or password if your Postgres
+server is set to authenticate you over the UNIX socket (i.e. if the psql
command
+works without further arguments).
Your Postgres account needs to be able to create databases; see the postgres
+docs for ALTER ROLE
.
The integration tests are a more comprehensive suite of tests. They +run a full version of Synapse, including your changes, to check if +anything was broken. They are slower than the unit tests but will +typically catch more errors.
+The following command will let you run the integration test with the most common +configuration:
+$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:focal
+
+(Note that the paths must be full paths! You could also write $(realpath relative/path)
if needed.)
This configuration should generally cover your needs.
+-e POSTGRES=1 -e MULTI_POSTGRES=1
environment flags.-e WORKERS=1 -e REDIS=1
environment flags (in addition to the Postgres flags).For more details about other configurations, see the Docker-specific documentation in the SyTest repo.
+Complement is a suite of black box tests that can be run on any homeserver implementation. It can also be thought of as end-to-end (e2e) tests.
+It's often nice to develop on Synapse and write Complement tests at the same time. +Here is how to run your local Synapse checkout against your local Complement checkout.
+(checkout complement
alongside your synapse
checkout)
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh
+
+To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:
+COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
+
+To run a specific test, you can specify the whole name structure:
+COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages/parallel/Historical_events_resolve_in_the_correct_order
+
+The above will run a monolithic (single-process) Synapse with SQLite as the database. For other configurations, try:
+POSTGRES=1
as an environment variable to use the Postgres database instead.WORKERS=1
as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
+WORKERS=1
, optionally set WORKER_TYPES=
to declare which worker
+types you wish to test. A simple comma-delimited string containing the worker types
+defined from the WORKERS_CONFIG
template in
+here.
+A safe example would be WORKER_TYPES="federation_inbound, federation_sender, synchrotron"
.
+See the worker documentation for additional information on workers.ASYNCIO_REACTOR=1
as an environment variable to use the Twisted asyncio reactor instead of the default one.PODMAN=1
will use the podman container runtime, instead of docker.UNIX_SOCKETS=1
will utilise Unix socket functionality for Synapse, Redis, and Postgres(when applicable).To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL
, e.g:
SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
+
+gotestfmt
If you want to format the output of the tests the same way as it looks in CI, +install gotestfmt.
+You can then use this incantation to format the tests appropriately:
+COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -json | gotestfmt -hide successful-tests
+
+(Remove -hide successful-tests
if you don't want to hide successful tests.)
If you're curious what the database looks like after you run some tests, here are some steps to get you going in Synapse:
+defer deployment.Destroy(t)
and replace with defer time.Sleep(2 * time.Hour)
to keep the homeserver running after the tests completedocker ps -f name=complement_
(this will filter for just the Compelement related Docker containers)docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash
apt-get update && apt-get install -y sqlite3
sqlite3
and open the database .open /conf/homeserver.db
(this db path comes from the Synapse homeserver.yaml)Once you're happy with your patch, it's time to prepare a Pull Request.
+To prepare a Pull Request, please:
+git push
your commit to your fork of Synapse;All changes, even minor ones, need a corresponding changelog / newsfragment +entry. These are managed by Towncrier.
+To create a changelog entry, make a new file in the changelog.d
directory named
+in the format of PRnumber.type
. The type can be one of the following:
feature
bugfix
docker
(for updates to the Docker image)doc
(for updates to the documentation)removal
(also used for deprecations)misc
(for internal-only changes)This file will become part of our changelog at the next +release, so the content of the file should be a short description of your +change in the same style as the rest of the changelog. The file can contain Markdown +formatting, and must end with a full stop (.) or an exclamation mark (!) for +consistency.
+Adding credits to the changelog is encouraged, we value your +contributions and would like to have you shouted out in the release notes!
+For example, a fix in PR #1234 would have its changelog entry in
+changelog.d/1234.bugfix
, and contain content like:
++The security levels of Florbs are now validated when received +via the
+/federation/florb
endpoint. Contributed by Jane Matrix.
If there are multiple pull requests involved in a single bugfix/feature/etc,
+then the content for each changelog.d
file should be the same. Towncrier will
+merge the matching files together into a single changelog entry when we come to
+release.
Obviously, you don't know if you should call your newsfile
+1234.bugfix
or 5678.bugfix
until you create the PR, which leads to a
+chicken-and-egg problem.
There are two options for solving this:
+Open the PR without a changelog file, see what number you got, and then +add the changelog file to your branch, or:
+Look at the list of all +issues/PRs, add one to the +highest number you see, and quickly open the PR before somebody else claims +your number.
+This +script +might be helpful if you find yourself doing this a lot.
+Sorry, we know it's a bit fiddly, but it's really helpful for us when we come +to put together a release!
+Changes which affect the debian packaging files (in debian
) are an
+exception to the rule that all changes require a changelog.d
file.
In this case, you will need to add an entry to the debian changelog for the +next release. For this, run the following command:
+dch
+
+This will make up a new version number (if there isn't already an unreleased +version in flight), and open an editor where you can add a new changelog entry. +(Our release process will ensure that the version number and maintainer name is +corrected for the release.)
+If your change affects both the debian packaging and files outside the debian +directory, you will need both a regular newsfragment and an entry in the +debian changelog. (Though typically such changes should be submitted as two +separate pull requests.)
+After you make a PR a comment from @CLAassistant will appear asking you to sign +the CLA. +This will link a page to allow you to confirm that you have read and agreed to +the CLA by signing in with GitHub.
+Alternatively, you can sign off before opening a PR by going to +https://cla-assistant.io/element-hq/synapse.
+We accept contributions under a legally identifiable name, such as +your name on government documentation or common-law names (names +claimed by legitimate usage or repute). Unfortunately, we cannot +accept anonymous contributions at this time.
+Once the Pull Request is opened, you will see a few things:
+From this point, you should:
+Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!
+By now, you know the drill!
+There are some notes for those with commit access to the project on how we +manage git here.
+That's it! Matrix is a very open and collaborative project as you might expect +given our obsession with open communication. If we're going to successfully +matrix together all the fragmented communication technologies out there we are +reliant on contributions and collaboration from the community to do so. So +please get involved - and we hope you have as much fun hacking on Matrix as we +do!
+ +Synapse's database schema is stored in the synapse.storage.schema
module.
Synapse supports splitting its datastore across multiple physical databases (which can +be useful for large installations), and the schema files are therefore split according +to the logical database they apply to.
+At the time of writing, the following "logical" databases are supported:
+state
- used to store Matrix room state (more specifically, state_groups
,
+their relationships and contents).main
- stores everything else.Additionally, the common
directory contains schema files for tables which must be
+present on all physical databases.
Synapse manages its database schema via "schema versions". These are mainly used to +help avoid confusion if the Synapse codebase is rolled back after the database is +updated. They work as follows:
+The Synapse codebase defines a constant synapse.storage.schema.SCHEMA_VERSION
+which represents the expectations made about the database by that version. For
+example, as of Synapse v1.36, this is 59
.
The database stores a "compatibility version" in
+schema_compat_version.compat_version
which defines the SCHEMA_VERSION
of the
+oldest version of Synapse which will work with the database. On startup, if
+compat_version
is found to be newer than SCHEMA_VERSION
, Synapse will refuse to
+start.
Synapse automatically updates this field from
+synapse.storage.schema.SCHEMA_COMPAT_VERSION
.
Whenever a backwards-incompatible change is made to the database format (normally
+via a delta
file), synapse.storage.schema.SCHEMA_COMPAT_VERSION
is also updated
+so that administrators can not accidentally roll back to a too-old version of Synapse.
Generally, the goal is to maintain compatibility with at least one or two previous +releases of Synapse, so any substantial change tends to require multiple releases and a +bit of forward-planning to get right.
+As a worked example: we want to remove the room_stats_historical
table. Here is how it
+might pan out.
Replace any code that reads from room_stats_historical
with alternative
+implementations, but keep writing to it in case of rollback to an earlier version.
+Also, increase synapse.storage.schema.SCHEMA_VERSION
. In this
+instance, there is no existing code which reads from room_stats_historical
, so
+our starting point is:
v1.36.0: SCHEMA_VERSION=59
, SCHEMA_COMPAT_VERSION=59
Next (say in Synapse v1.37.0): remove the code that writes to
+room_stats_historical
, but don’t yet remove the table in case of rollback to
+v1.36.0. Again, we increase synapse.storage.schema.SCHEMA_VERSION
, but
+because we have not broken compatibility with v1.36, we do not yet update
+SCHEMA_COMPAT_VERSION
. We now have:
v1.37.0: SCHEMA_VERSION=60
, SCHEMA_COMPAT_VERSION=59
.
Later (say in Synapse v1.38.0): we can remove the table altogether. This will
+break compatibility with v1.36.0, so we must update SCHEMA_COMPAT_VERSION
accordingly.
+There is no need to update synapse.storage.schema.SCHEMA_VERSION
, since there is no
+change to the Synapse codebase here. So we end up with:
v1.38.0: SCHEMA_VERSION=60
, SCHEMA_COMPAT_VERSION=60
.
If in doubt about whether to update SCHEMA_VERSION
or not, it is generally best to
+lean towards doing so.
In the full_schemas
directories, only the most recently-numbered snapshot is used
+(54
at the time of writing). Older snapshots (eg, 16
) are present for historical
+reference only.
If you want to recreate these schemas, they need to be made from a database that +has had all background updates run.
+To do so, use scripts-dev/make_full_schema.sh
. This will produce new
+full.sql.postgres
and full.sql.sqlite
files.
Ensure postgres is installed, then run:
+./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
+
+NB at the time of writing, this script predates the split into separate state
/main
+databases so will require updates to handle that correctly.
Delta files define the steps required to upgrade the database from an earlier version. +They can be written as either a file containing a series of SQL statements, or a Python +module.
+Synapse remembers which delta files it has applied to a database (they are stored in the
+applied_schema_deltas
table) and will not re-apply them (even if a given file is
+subsequently updated).
Delta files should be placed in a directory named synapse/storage/schema/<database>/delta/<version>/
.
+They are applied in alphanumeric order, so by convention the first two characters
+of the filename should be an integer such as 01
, to put the file in the right order.
These should be named *.sql
, or — for changes which should only be applied for a
+given database engine — *.sql.posgres
or *.sql.sqlite
. For example, a delta which
+adds a new column to the foo
table might be called 01add_bar_to_foo.sql
.
Note that our SQL parser is a bit simple - it understands comments (--
and /*...*/
),
+but complex statements which require a ;
in the middle of them (such as CREATE TRIGGER
) are beyond it and you'll have to use a Python delta file.
For more flexibility, a delta file can take the form of a python module. These should
+be named *.py
. Note that database-engine-specific modules are not supported here –
+instead you can write if isinstance(database_engine, PostgresEngine)
or similar.
A Python delta module should define either or both of the following functions:
+import synapse.config.homeserver
+import synapse.storage.engines
+import synapse.storage.types
+
+
+def run_create(
+ cur: synapse.storage.types.Cursor,
+ database_engine: synapse.storage.engines.BaseDatabaseEngine,
+) -> None:
+ """Called whenever an existing or new database is to be upgraded"""
+ ...
+
+def run_upgrade(
+ cur: synapse.storage.types.Cursor,
+ database_engine: synapse.storage.engines.BaseDatabaseEngine,
+ config: synapse.config.homeserver.HomeServerConfig,
+) -> None:
+ """Called whenever an existing database is to be upgraded."""
+ ...
+
+It is sometimes appropriate to perform database migrations as part of a background +process (instead of blocking Synapse until the migration is done). In particular, +this is useful for migrating data when adding new columns or tables.
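As a rough example of how such a module might be filled in (the foo table, its updated_ts column, and the trigger names below are invented for illustration), a delta that needs engine-specific SQL can branch on the engine type as mentioned above:
# Illustrative Python delta; the foo table, its updated_ts column and the
# trigger names are invented for this example.
import synapse.storage.engines
import synapse.storage.types


def run_create(
    cur: synapse.storage.types.Cursor,
    database_engine: synapse.storage.engines.BaseDatabaseEngine,
) -> None:
    if isinstance(database_engine, synapse.storage.engines.PostgresEngine):
        # Postgres trigger functions contain ';' inside the statement, which the
        # simple .sql delta parser cannot cope with - hence the Python delta.
        cur.execute(
            """
            CREATE OR REPLACE FUNCTION bump_foo_updated_ts() RETURNS trigger AS $$
            BEGIN
                NEW.updated_ts := (extract(epoch FROM now()) * 1000)::bigint;
                RETURN NEW;
            END;
            $$ LANGUAGE plpgsql
            """
        )
        cur.execute(
            "CREATE TRIGGER foo_bump_updated_ts BEFORE UPDATE ON foo "
            "FOR EACH ROW EXECUTE PROCEDURE bump_foo_updated_ts()"
        )
    else:
        # SQLite has its own trigger syntax.
        cur.execute(
            """
            CREATE TRIGGER foo_bump_updated_ts AFTER UPDATE ON foo
            BEGIN
                UPDATE foo SET updated_ts = CAST(strftime('%s', 'now') AS INTEGER) * 1000
                WHERE rowid = NEW.rowid;
            END
            """
        )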
+Pending background updates stored in the background_updates
table and are denoted
+by a unique name, the current status (stored in JSON), and some dependency information:
A new background updates needs to be added to the background_updates
table:
INSERT INTO background_updates (ordering, update_name, depends_on, progress_json) VALUES
+ (7706, 'my_background_update', 'a_previous_background_update' '{}');
+
+And then needs an associated handler in the appropriate datastore:
+self.db_pool.updates.register_background_update_handler(
+ "my_background_update",
+ update_handler=self._my_background_update,
+)
+
+There are a few types of updates that can be performed, see the BackgroundUpdater
:
register_background_update_handler
: A generic handler for custom SQLregister_background_index_update
: Create an index in the backgroundregister_background_validate_constraint
: Validate a constraint in the background
+(PostgreSQL-only)register_background_validate_constraint_and_delete_rows
: Similar to
+register_background_validate_constraint
, but deletes rows which don't fit
+the constraint.For register_background_update_handler
, the generic handler must track progress
+and then finalize the background update:
async def _my_background_update(self, progress: JsonDict, batch_size: int) -> int:
+ def _do_something(txn: LoggingTransaction) -> int:
+ ...
+ self.db_pool.updates._background_update_progress_txn(
+ txn, "my_background_update", {"last_processed": last_processed}
+ )
+ return last_processed - prev_last_processed
+
+ num_processed = await self.db_pool.runInteraction("_do_something", _do_something)
+ await self.db_pool.updates._end_background_update("my_background_update")
+
+ return num_processed
+
+Synapse will attempt to rate-limit how often background updates are run via the +given batch-size and the returned number of processed entries (and how long the +function took to run). See +background update controller callbacks.
+Boolean columns require special treatment, since SQLite treats booleans the +same as integers.
+Any new boolean column must be added to the BOOLEAN_COLUMNS
list in
+synapse/_scripts/synapse_port_db.py
. This tells the port script to cast
+the integer value from SQLite to a boolean before writing the value to the
+postgres database.
event_id
global uniquenessevent_id
's can be considered globally unique although there has been a lot of
+debate on this topic in places like
+MSC2779 and
+MSC2848 which
+has no resolution yet (as of 2022-09-01). There are several places in Synapse
+and even in the Matrix APIs like GET /_matrix/federation/v1/event/{eventId}
+where we assume that event IDs are globally unique.
When scoping event_id
in a database schema, it is often nice to accompany it
+with room_id
(PRIMARY KEY (room_id, event_id)
and a FOREIGN KEY(room_id) REFERENCES rooms(room_id)
) which makes flexible lookups easy. For example it
+makes it very easy to find and clean up everything in a room when it needs to be
+purged (no need to use sub-select
query or join from the events
table).
A note on collisions: In room versions 1
and 2
it's possible to end up with
+two events with the same event_id
(in the same or different rooms). After room
+version 3
, that can only happen with a hash collision, which we basically hope
+will never happen (SHA256 has a massive big key space).
Some migrations need to be performed gradually. A prime example of this is anything
+which would need to do a large table scan — including adding columns, indices or
+NOT NULL
constraints to non-empty tables — such a migration should be done as a
+background update where possible, at least on Postgres.
+We can afford to be more relaxed about SQLite databases since they are usually
+used on smaller deployments and SQLite does not support the same concurrent
+DDL operations as Postgres.
We also typically insist on having at least one Synapse version's worth of +backwards compatibility, so that administrators can roll back Synapse if an upgrade +did not go smoothly.
+This sometimes results in having to plan a migration across multiple versions +of Synapse.
+This section includes an example and may include more in the future.
+NOT NULL
constraintsThis example illustrates how you would introduce a new column, write data into it +based on data from an old column and then drop the old column.
+We are aiming for semantic equivalence to:
+ALTER TABLE mytable ADD COLUMN new_column INTEGER;
+UPDATE mytable SET new_column = old_column * 100;
+ALTER TABLE mytable ALTER COLUMN new_column ADD CONSTRAINT NOT NULL;
+ALTER TABLE mytable DROP COLUMN old_column;
+
+N
SCHEMA_VERSION = S
+SCHEMA_COMPAT_VERSION = ... # unimportant at this stage
+
+Invariants:
+old_column
is read by Synapse and written to by Synapse.N + 1
SCHEMA_VERSION = S + 1
+SCHEMA_COMPAT_VERSION = ... # unimportant at this stage
+
+Changes:
+ALTER TABLE mytable ADD COLUMN new_column INTEGER;
+
+Invariants:
+old_column
is read by Synapse and written to by Synapse.new_column
is written to by Synapse.Notes:
+new_column
can't have a NOT NULL NOT VALID
constraint yet, because the previous Synapse version did not write to the new column (since we haven't bumped the SCHEMA_COMPAT_VERSION
yet, we still need to be compatible with the previous version).N + 2
SCHEMA_VERSION = S + 2
+SCHEMA_COMPAT_VERSION = S + 1 # this signals that we can't roll back to a time before new_column existed
+
+Changes:
+NOT VALID
constraint to ensure new rows are compliant. SQLite does not have such a construct, but it would be unnecessary anyway since there is no way to concurrently perform this migration on SQLite.
+ALTER TABLE mytable ADD CONSTRAINT CHECK new_column_not_null (new_column IS NOT NULL) NOT VALID;
+
+UPDATE mytable SET new_column = old_column * 100 WHERE 0 < mytable_id AND mytable_id <= 5;
+
+This background update is technically pointless on SQLite, but you must schedule it anyway so that the portdb
script to migrate to Postgres still works.VALIDATE CONSTRAINT
on Postgres to turn the NOT VALID
constraint into a valid one.
+ALTER TABLE mytable VALIDATE CONSTRAINT new_column_not_null;
+
+This will take some time but does NOT hold an exclusive lock over the table.Invariants:
+old_column
is read by Synapse and written to by Synapse.new_column
is written to by Synapse and new rows always have a non-NULL
value in this field.Notes:
+CHECK (new_column IS NOT NULL)
to a NOT NULL
constraint free of charge in Postgres by adding the NOT NULL
constraint and then dropping the CHECK
constraint, because Postgres can statically verify that the NOT NULL
constraint is implied by the CHECK
constraint without performing a table scan.N + 2
redundant by moving the background update to N + 1
and delaying adding the NOT NULL
constraint to N + 3
, but that would mean the constraint would always be validated in the foreground in N + 3
. Whereas if the N + 2
step is kept, the migration in N + 3
would be fast in the happy case.N + 3
SCHEMA_VERSION = S + 3
+SCHEMA_COMPAT_VERSION = S + 1 # we can't roll back to a time before new_column existed
+
+Changes:
+new_column
in case the background update had not completed. Additionally, VALIDATE CONSTRAINT
to make the check fully valid.
+-- you ideally want an index on `new_column` or e.g. `(new_column) WHERE new_column IS NULL` first, or perhaps you can find a way to skip this if the `NOT NULL` constraint has already been validated.
+UPDATE mytable SET new_column = old_column * 100 WHERE new_column IS NULL;
+
+-- this is a no-op if it already ran as part of the background update
+ALTER TABLE mytable VALIDATE CONSTRAINT new_column_not_null;
+
+new_column
as NOT NULL
and populate any outstanding NULL
values at the same time.
+Unfortunately, you can't drop old_column
yet because it must be present for compatibility with the Postgres schema, as needed by portdb
.
+(Otherwise you could do this all in one go with SQLite!)Invariants:
+old_column
is written to by Synapse (but no longer read by Synapse!).new_column
is read by Synapse and written to by Synapse. Moreover, all rows have a non-NULL
value in this field, as guaranteed by a schema constraint.Notes:
+old_column
yet, or even stop writing to it, because that would break a rollback to the previous version of Synapse.new_column
being populated. The remaining steps are only motivated by the wish to clean-up old columns.N + 4
SCHEMA_VERSION = S + 4
+SCHEMA_COMPAT_VERSION = S + 3 # we can't roll back to a time before new_column was entirely non-NULL
+
+Invariants:
+old_column
exists but is not written to or read from by Synapse.new_column
is read by Synapse and written to by Synapse. Moreover, all rows have a non-NULL
value in this field, as guaranteed by a schema constraint.Notes:
+old_column
yet because that would break a rollback to the previous version of Synapse. S + 3
.N + 5
SCHEMA_VERSION = S + 5
+SCHEMA_COMPAT_VERSION = S + 4 # we can't roll back to a time before old_column was no longer being touched
+
+Changes:
+ALTER TABLE mytable DROP COLUMN old_column;
+
+DO NOT USE THESE DEMO SERVERS IN PRODUCTION
+Requires you to have a Synapse development environment setup.
+The demo setup allows running three federation Synapse servers, with server
+names localhost:8480
, localhost:8481
, and localhost:8482
.
You can access them via any Matrix client over HTTP at localhost:8080
,
+localhost:8081
, and localhost:8082
or over HTTPS at localhost:8480
,
+localhost:8481
, and localhost:8482
.
To enable the servers to communicate, self-signed SSL certificates are generated +and the servers are configured in a highly insecure way, including:
+The servers are configured to store their data under demo/8080
, demo/8081
, and
+demo/8082
. This includes configuration, logs, SQLite databases, and media.
Note that when joining a public room on a different homeserver via "#foo:bar.net", +then you are (in the current implementation) joining a room with room_id "foo". +This means that it won't work if your homeserver already has a room with that +name.
+There's three main scripts with straightforward purposes:
+start.sh
will start the Synapse servers, generating any missing configuration.
+--no-rate-limit
to "disable" rate limits
+(they actually still exist, but are very high).stop.sh
will stop the Synapse servers.clean.sh
will delete the configuration, databases, log files, etc.To start a completely new set of servers, run:
+./demo/stop.sh; ./demo/clean.sh && ./demo/start.sh
+
+
+ This is a quick cheat sheet for developers on how to use poetry
.
See the contributing guide.
+Developers should use Poetry 1.3.2 or higher. If you encounter problems related +to poetry, please double-check your poetry version.
+Synapse uses a variety of third-party Python packages to function as a homeserver.
+Some of these are direct dependencies, listed in pyproject.toml
under the
+[tool.poetry.dependencies]
section. The rest are transitive dependencies (the
+things that our direct dependencies themselves depend on, and so on recursively.)
We maintain a locked list of all our dependencies (transitive included) so that
+we can track exactly which version of each dependency appears in a given release.
+See here
+for discussion of why we wanted this for Synapse. We chose to use
+poetry
to manage this locked list; see
+this comment
+for the reasoning.
The locked dependencies get included in our "self-contained" releases: namely, +our docker images and our debian packages. We also use the locked dependencies +in development and our continuous integration.
+Separately, our "broad" dependencies—the version ranges specified in
+pyproject.toml
—are included as metadata in our "sdists" and "wheels" uploaded
+to PyPI. Installing from PyPI or from
+the Synapse source tree directly will not use the locked dependencies; instead,
+they'll pull in the latest version of each package available at install time.
An example may help. We have a broad dependency on
+phonenumbers
, as declared in
+this snippet from pyproject.toml as of Synapse 1.57:
[tool.poetry.dependencies]
+# ...
+phonenumbers = ">=8.2.0"
+
+In our lockfile this is +pinned +to version 8.12.44, even though +newer versions are available.
+[[package]]
+name = "phonenumbers"
+version = "8.12.44"
+description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
+category = "main"
+optional = false
+python-versions = "*"
+
+The lockfile also includes a
+cryptographic checksum
+of the sdists and wheels provided for this version of phonenumbers
.
[metadata.files]
+# ...
+phonenumbers = [
+ {file = "phonenumbers-8.12.44-py2.py3-none-any.whl", hash = "sha256:cc1299cf37b309ecab6214297663ab86cb3d64ae37fd5b88e904fe7983a874a6"},
+ {file = "phonenumbers-8.12.44.tar.gz", hash = "sha256:26cfd0257d1704fe2f88caff2caabb70d16a877b1e65b6aae51f9fbbe10aa8ce"},
+]
+
+We can see this pinned version inside the docker image for that release:
+$ docker pull vectorim/synapse:v1.97.0
+...
+$ docker run --entrypoint pip vectorim/synapse:v1.97.0 show phonenumbers
+Name: phonenumbers
+Version: 8.12.44
+Summary: Python version of Google's common library for parsing, formatting, storing and validating international phone numbers.
+Home-page: https://github.com/daviddrysdale/python-phonenumbers
+Author: David Drysdale
+Author-email: dmd@lurklurk.org
+License: Apache License 2.0
+Location: /usr/local/lib/python3.9/site-packages
+Requires:
+Required-by: matrix-synapse
+
+Whereas the wheel metadata just contains the broad dependencies:
+$ cd /tmp
+$ wget https://files.pythonhosted.org/packages/ca/5e/d722d572cc5b3092402b783d6b7185901b444427633bd8a6b00ea0dd41b7/matrix_synapse-1.57.0rc1-py3-none-any.whl
+...
+$ unzip -c matrix_synapse-1.57.0rc1-py3-none-any.whl matrix_synapse-1.57.0rc1.dist-info/METADATA | grep phonenumbers
+Requires-Dist: phonenumbers (>=8.2.0)
+
+direnv
is a tool for activating environments in your
+shell inside a given directory. Its support for poetry is unofficial (a
+community wiki recipe only), but works solidly in our experience. We thoroughly
+recommend it for daily use. To use it:
direnv
- it's likely
+packaged for your system already.~/.config/direnv/direnvrc
(or more generally $XDG_CONFIG_HOME/direnv/direnvrc
).echo layout poetry > .envrc
..envrc
configuration and project.
+Then formally confirm this to direnv
by running direnv allow
.Then whenever you navigate to the synapse checkout, you should be able to run
+e.g. mypy
instead of poetry run mypy
; python
instead of
+poetry run python
; and your shell commands will automatically run in the
+context of poetry's venv, without having to run poetry shell
beforehand.
poetry install --all-extras --sync
+
+# Stop the current virtualenv if active
+$ deactivate
+
+# Remove all of the files from the current environment.
+# Don't worry, even though it says "all", this will only
+# remove the Poetry virtualenvs for the current project.
+$ poetry env remove --all
+
+# Reactivate Poetry shell to create the virtualenv again
+$ poetry shell
+# Install everything again
+$ poetry install --extras all
+
+poetry
virtualenv?Use poetry run cmd args
when you need the python virtualenv context.
+To avoid typing poetry run
all the time, you can run poetry shell
+to start a new shell in the poetry virtualenv context. Within poetry shell
,
+python
, pip
, mypy
, trial
, etc. are all run inside the project virtualenv
+and isolated from the rest o the system.
Roughly speaking, the translation from a traditional virtualenv is:
+env/bin/activate
-> poetry shell
, anddeactivate
-> close the terminal (Ctrl-D, exit
, etc.)See also the direnv recommendation above, which makes poetry run
and
+poetry shell
unnecessary.
poetry
virtualenv?Some suggestions:
+# Current env only
+poetry env info
+# All envs: this allows you to have e.g. a poetry managed venv for Python 3.7,
+# and another for Python 3.10.
+poetry env list --full-path
+poetry run pip list
+
+Note that poetry show
describes the abstract lock file rather than your
+on-disk environment. With that said, poetry show --tree
can sometimes be
+useful.
Either:
+pyproject.toml
; then poetry lock --no-update
; or elsepoetry add packagename
. See poetry add --help
; note the --dev
,
+--extras
and --optional
flags in particular.Include the updated pyproject.toml
and poetry.lock
files in your commit.
This is not done often and is untested, but
+poetry remove packagename
+
+ought to do the trick. Alternatively, manually update pyproject.toml
and
+poetry lock --no-update
. Include the updated pyproject.toml
and poetry.lock
+files in your commit.
Best done by manually editing pyproject.toml
, then poetry lock --no-update
.
+Include the updated pyproject.toml
and poetry.lock
in your commit.
Use
+poetry update packagename
+
+to use the latest version of packagename
in the locked environment, without
+affecting the broad dependencies listed in the wheel.
There doesn't seem to be a way to do this whilst locking a specific version of
+packagename
. We can workaround this (crudely) as follows:
poetry add packagename==1.2.3
+# This should update pyproject.lock.
+
+# Now undo the changes to pyproject.toml. For example
+# git restore pyproject.toml
+
+# Get poetry to recompute the content-hash of pyproject.toml without changing
+# the locked package versions.
+poetry lock --no-update
+
+Either way, include the updated poetry.lock
file in your commit.
requirements.txt
file?poetry export --extras all
+
+Be wary of bugs in poetry export
and pip install -r requirements.txt
.
I usually use
+poetry run pip install build && poetry run python -m build
+
+because build
is a standardish tool which
+doesn't require poetry. (It's what we use in CI too). However, you could try
+poetry build
too.
Synapse uses Dependabot to keep the poetry.lock
and Cargo.lock
file
+up-to-date with the latest releases of our dependencies. The changelog check is
+omitted for Dependabot PRs; the release script will include them in the
+changelog.
When reviewing a dependabot PR, ensure that:
+In particular, any updates to the type hints (usually packages which start with types-
)
+should be safe to merge if linting passes.
poetry --version
.The minimum version of poetry supported by Synapse is 1.3.2.
+It can also be useful to check the version of poetry-core
in use. If you've
+installed poetry
with pipx
, try pipx runpip poetry list | grep poetry-core
.
poetry cache clear --all pypi
.Poetry caches a bunch of information about packages that isn't readily available
+from PyPI. (This is what makes poetry seem slow when doing the first
+poetry install
.) Try poetry cache list
and poetry cache clear --all <name of cache>
to see if that fixes things.
Delete the matrix_synapse.egg-info/
directory from the root of your Synapse
+install.
This stores some cached information about dependencies and often conflicts with +letting Poetry do the right thing.
+--verbose
or --dry-run
arguments.Sometimes useful to see what poetry's internal logic is.
+ +It can be desirable to implement "experimental" features which are disabled by +default and must be explicitly enabled via the Synapse configuration. This is +applicable for features which:
+Note that this only really applies to features which are expected to be desirable +to a broad audience. The module infrastructure should +instead be investigated for non-standard features.
+Guarding experimental features behind configuration flags should help with some +of the following scenarios:
+Experimental configuration flags should be disabled by default (requiring Synapse +administrators to explicitly opt-in), although there are situations where it makes +sense (from a product point-of-view) to enable features by default. This is +expected and not an issue.
+It is not a requirement for experimental features to be behind a configuration flag, +but one should be used if unsure.
+New experimental configuration flags should be added under the experimental
+configuration key (see the synapse.config.experimental
file) and either explain
+(briefly) what is being enabled, or include the MSC number.
In an ideal world, our git commit history would be a linear progression of
+commits each of which contains a single change building on what came
+before. Here, by way of an arbitrary example, is the top of git log --graph b2dba0607
:
Note how the commit comment explains clearly what is changing and why. Also +note the absence of merge commits, as well as the absence of commits called +things like (to pick a few culprits): +“pep8”, “fix broken +test”, +“oops”, +“typo”, or “Who's +the president?”.
+There are a number of reasons why keeping a clean commit history is a good +thing:
+From time to time, after a change lands, it turns out to be necessary to +revert it, or to backport it to a release branch. Those operations are +much easier when the change is contained in a single commit.
+Similarly, it's much easier to answer questions like “is the fix for
+/publicRooms
on the release branch?” if that change consists of a single
+commit.
Likewise: “what has changed on this branch in the last week?” is much +clearer without merges and “pep8” commits everywhere.
+Sometimes we need to figure out where a bug got introduced, or some
+behaviour changed. One way of doing that is with git bisect
: pick an
+arbitrary commit between the known good point and the known bad point, and
+see how the code behaves. However, that strategy fails if the commit you
+chose is the middle of someone's epic branch in which they broke the world
+before putting it back together again.
One counterargument is that it is sometimes useful to see how a PR evolved as +it went through review cycles. This is true, but that information is always +available via the GitHub UI (or via the little-known refs/pull +namespace).
+Of course, in reality, things are more complicated than that. We have release
+branches as well as develop
and master
, and we deliberately merge changes
+between them. Bugs often slip through and have to be fixed later. That's all
+fine: this not a cast-iron rule which must be obeyed, but an ideal to aim
+towards.
Ok, so that's what we'd like to achieve. How do we achieve it?
+The TL;DR is: when you come to merge a pull request, you probably want to +“squash and merge”:
+.
+(This applies whether you are merging your own PR, or that of another +contributor.)
+“Squash and merge”1 takes all of the changes in the
+PR, and bundles them into a single commit. GitHub gives you the opportunity to
+edit the commit message before you confirm, and normally you should do so,
+because the default will be useless (again: * woops typo
is not a useful
+thing to keep in the historical record).
The main problem with this approach comes when you have a series of pull +requests which build on top of one another: as soon as you squash-merge the +first PR, you'll end up with a stack of conflicts to resolve in all of the +others. In general, it's best to avoid this situation in the first place by +trying not to have multiple related PRs in flight at the same time. Still, +sometimes that's not possible and doing a regular merge is the lesser evil.
+Another occasion in which a regular merge makes more sense is a PR where you've +deliberately created a series of commits each of which makes sense in its own +right. For example: a PR which gradually propagates a refactoring operation +through the codebase, or a +PR which is the culmination of several other +PRs. In this case the ability +to figure out when a particular change/bug was introduced could be very useful.
+Ultimately: this is not a hard-and-fast-rule. If in doubt, ask yourself “do +each of the commits I am about to merge make sense in their own right”, but +remember that we're just doing our best to balance “keeping the commit history +clean” with other factors.
+A lot +of +words have been +written in the past about git branching models (no really, a +lot). I tend to +think the whole thing is overblown. Fundamentally, it's not that +complicated. Here's how we do it.
+Let's start with a picture:
+ +It looks complicated, but it's really not. There's one basic rule: anyone is +free to merge from any more-stable branch to any less-stable branch at +any time2. (The principle behind this is that if a +change is good enough for the more-stable branch, then it's also good enough go +put in a less-stable branch.)
+Meanwhile, merging (or squashing, as per the above) from a less-stable to a +more-stable branch is a deliberate action in which you want to publish a change +or a set of changes to (some subset of) the world: for example, this happens +when a PR is landed, or as part of our release process.
+So, what counts as a more- or less-stable branch? A little reflection will show +that our active branches are ordered thus, from more-stable to less-stable:
+master
(tracks our last release).release-vX.Y
(the branch where we prepare the next release)3.develop
(our "mainline" branch containing our bleeding-edge).The corollary is: if you have a bugfix that needs to land in both
+release-vX.Y
and develop
, then you should base your PR on
+release-vX.Y
, get it merged there, and then merge from release-vX.Y
to
+develop
. (If a fix lands in develop
and we later need it in a
+release-branch, we can of course cherry-pick it, but landing it in the release
+branch first helps reduce the chance of annoying conflicts.)
[1]: “Squash and merge” is GitHub's term for this +operation. Given that there is no merge involved, I'm not convinced it's the +most intuitive name. ^
+[2]: Well, anyone with commit access.^
+[3]: Very, very occasionally (I think this has happened once in
+the history of Synapse), we've had two releases in flight at once. Obviously,
+release-v1.2
is more-stable than release-v1.3
. ^
This section covers implementation documentation for various parts of Synapse.
+If a developer is planning to make a change to a feature of Synapse, it can be useful for +general documentation of how that feature is implemented to be available. This saves the +developer time in place of needing to understand how the feature works by reading the +code.
+Documentation that would be more useful for the perspective of a system administrator, +rather than a developer who's intending to change to code, should instead be placed +under the Usage section of the documentation.
+ +Releases of Synapse follow a two week release cycle with new releases usually +occurring on Tuesdays:
+N - 1
is released.N
release candidate 1 is released.N
release candidates 2+ are released, if bugs are found.N
is released.Note that this schedule might be modified depending on the availability of the +Synapse team, e.g. releases may be skipped to avoid holidays.
+Release announcements can be found in the +release category of the Matrix blog.
+If a bug is found after release that is deemed severe enough (by a combination +of the impacted users and the impact on those users) then a bugfix release may +be issued. This may be at any point in the release cycle.
+Security will sometimes be backported to the previous version and released +immediately before the next release candidate. An example of this might be:
+Depending on the impact and complexity of security fixes, multiple fixes might +be held to be released together.
+In some cases, a pre-disclosure of a security release will be issued as a notice +to Synapse operators that there is an upcoming security release. These can be +found in the security category of the Matrix blog.
+ +The Synapse team works off a shared review queue -- any new pull requests for +Synapse (or related projects) has a review requested from the entire team. Team +members should process this queue using the following rules:
+For the latter two categories above, older pull requests should be prioritised.
+It is explicit that there is no priority given to pull requests from the team +(vs from the community). If a pull request requires a quick turn around, please +explicitly communicate this via #synapse-dev:matrix.org +or as a comment on the pull request.
+Once an initial review has been completed and the author has made additional changes, +follow-up reviews should go back to the same reviewer. This helps build a shared +context and conversation between author and reviewer.
+As a team we aim to keep the number of inflight pull requests to a minimum to ensure +that ongoing work is finished before starting new work.
+To communicate to the rest of the team the status of each pull request, team +members should do the following:
+If you are unsure about a particular part of the pull request (or are not confident +in your understanding of part of the code) then ask questions or request review +from the team again. When requesting review from the team be sure to leave a comment +with the rationale on why you're putting it back in the queue.
+ +The word "edge" comes from graph theory lingo. An edge is just a connection
+between two events. In Synapse, we connect events by specifying their
+prev_events
. A subsequent event points back at a previous event.
A (oldest) <---- B <---- C (most recent)
+
+Events are normally sorted by (topological_ordering, stream_ordering)
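For instance, a toy sketch of event C from the diagram above (simplified; real Matrix events carry many more fields such as auth_events, hashes and signatures) shows how the edge is expressed:
# Simplified sketch of an event and its backward edge; not a complete Matrix event.
event_c = {
    "event_id": "$C",
    "room_id": "!room:example.com",
    "type": "m.room.message",
    "depth": 3,                # topological_ordering
    "prev_events": ["$B"],     # the "edge" pointing back at the previous event
}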
where
+topological_ordering
is just depth
. In other words, we first sort by depth
+and then tie-break based on stream_ordering
. depth
is incremented as new
+messages are added to the DAG. Normally, stream_ordering
is an auto
+incrementing integer, but backfilled events start with stream_ordering=-1
and decrement.
/sync
returns things in the order they arrive at the server (stream_ordering
)./messages
(and /backfill
in the federation API) return them in the order determined by the event graph (topological_ordering, stream_ordering)
.The general idea is that, if you're following a room in real-time (i.e.
+/sync
), you probably want to see the messages as they arrive at your server,
+rather than skipping any that arrived late; whereas if you're looking at a
+historical section of timeline (i.e. /messages
), you want to see the best
+representation of the state of the room as others were seeing it at the time.
We mark an event as an outlier
when we haven't figured out the state for the
+room at that point in the DAG yet. They are "floating" events that we haven't
+yet correlated to the DAG.
Outliers typically arise when we fetch the auth chain or state for a given
+event. When that happens, we just grab the events in the state/auth chain,
+without calculating the state at those events, or backfilling their
+prev_events
. Since we don't have the state at any events fetched in that
+way, we mark them as outliers.
So, typically, we won't have the prev_events
of an outlier
in the database,
+(though it's entirely possible that we might have them for some other
+reason). Other things that make outliers different from regular events:
We don't have state for them, so there should be no entry in
+event_to_state_groups
for an outlier. (In practice this isn't always
+the case, though I'm not sure why: see https://github.com/matrix-org/synapse/issues/12201).
We don't record entries for them in the event_edges
,
+event_forward_extremeties
or event_backward_extremities
tables.
Since outliers are not tied into the DAG, they do not normally form part of the
+timeline sent down to clients via /sync
or /messages
; however there is an
+exception:
A special case of outlier events are some membership events for federated rooms +that we aren't full members of. For example:
+In all the above cases, we don't have the state for the room, which is why they
+are treated as outliers. They are a bit special though, in that they are
+proactively sent to clients via /sync
.
Most-recent-in-time events in the DAG which are not referenced by any other
+events' prev_events
yet. (In this definition, outliers, rejected events, and
+soft-failed events don't count.)
The forward extremities of a room (or at least, a subset of them, if there are
+more than ten) are used as the prev_events
when the next event is sent.
The "current state" of a room (ie: the state which would be used if we +generated a new event) is, therefore, the resolution of the room states +at each of the forward extremities.
+The current marker of where we have backfilled up to and will generally be the
+prev_events
of the oldest-in-time events we have in the DAG. This gives a starting point when
+backfilling history.
Note that, unlike forward extremities, we typically don't have any backward +extremity events themselves in the database - or, if we do, they will be "outliers" (see +above). Either way, we don't expect to have the room state at a backward extremity.
+When we persist a non-outlier event, if it was previously a backward extremity,
+we clear it as a backward extremity and set all of its prev_events
as the new
+backward extremities if they aren't already persisted as non-outliers. This
+therefore keeps the backward extremities up-to-date.
For every non-outlier event we need to know the state at that event. Instead of
+storing the full state for each event in the DB (i.e. a event_id -> state
+mapping), which is very space inefficient when state doesn't change, we
+instead assign each different set of state a "state group" and then have
+mappings of event_id -> state_group
and state_group -> state
.
TODO: state_group_edges
is a further optimization...
+notes from @Azrenbeth, https://pastebin.com/seUGVGeT
https://fujifish.github.io/samling/samling.html (https://github.com/fujifish/samling) is a great resource for being able to tinker with the +SAML options within Synapse without needing to deploy and configure a complicated software stack.
+To make Synapse (and therefore Element) use it:
+samling.xml
next to your homeserver.yaml
with
+the XML from step 2 as the contents.homeserver.yaml
to include:
+saml2_config:
+ sp_config:
+ allow_unknown_attributes: true # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
+ metadata:
+ local: ["samling.xml"]
+
+homeserver.yaml
has a setting for public_baseurl
:
+public_baseurl: http://localhost:8080/
+
+apt-get install xmlsec1
and pip install --upgrade --force 'pysaml2>=4.5.0'
to ensure
+the dependencies are installed and ready to go.Then in Element:
+public_baseurl
above.uid=your_localpart
.
+The response must also be signed.If you try and repeat this process, you may be automatically logged in using the information you
+gave previously. To fix this, open your developer console (F12
or Ctrl+Shift+I
) while on the
+samling page and clear the site data. In Chrome, this will be a button on the Application tab.
Sometimes, requests take a long time to service and clients disconnect
+before Synapse produces a response. To avoid wasting resources, Synapse
+can cancel request processing for select endpoints marked with the
+@cancellable
decorator.
Synapse makes use of Twisted's Deferred.cancel()
feature to make
+cancellation work. The @cancellable
decorator does nothing by itself
+and merely acts as a flag, signalling to developers and other code alike
+that a method can be cancelled.
async
functions in its call
+tree handle cancellation correctly. See
+Handling cancellation correctly
+for a list of things to look out for.@cancellable
decorator to the on_GET/POST/PUT/DELETE
+method. It's not recommended to make non-GET
methods cancellable,
+since cancellation midway through some database updates is less
+likely to be handled correctly.There are two stages to cancellation: downward propagation of a
+cancel()
call, followed by upwards propagation of a CancelledError
+out of a blocked await
.
+Both Twisted and asyncio have a cancellation mechanism.
Method | Exception | Exception inherits from | |
---|---|---|---|
Twisted | Deferred.cancel() | twisted.internet.defer.CancelledError | Exception (!) |
asyncio | Task.cancel() | asyncio.CancelledError | BaseException |
When Synapse starts handling a request, it runs the async method
+responsible for handling it using defer.ensureDeferred
, which returns
+a Deferred
. For example:
def do_something() -> Deferred[None]:
+ ...
+
+@cancellable
+async def on_GET() -> Tuple[int, JsonDict]:
+ d = make_deferred_yieldable(do_something())
+ await d
+ return 200, {}
+
+request = defer.ensureDeferred(on_GET())
+
+When a client disconnects early, Synapse checks for the presence of the
+@cancellable
decorator on on_GET
. Since on_GET
is cancellable,
+Deferred.cancel()
is called on the Deferred
from
+defer.ensureDeferred
, ie. request
. Twisted knows which Deferred
+request
is waiting on and passes the cancel()
call on to d
.
The Deferred
being waited on, d
, may have its own handling for
+cancel()
and pass the call on to other Deferred
s.
Eventually, a Deferred
handles the cancel()
call by resolving itself
+with a CancelledError
.
The CancelledError
gets raised out of the await
and bubbles up, as
+per normal Python exception handling.
In general, when writing code that might be subject to cancellation, two +things must be considered:
+CancelledError
s raised out of await
s.Deferred
s being cancel()
ed.Examples of code that handles cancellation incorrectly include:
+try-except
blocks which swallow CancelledError
s.Deferred
, which may be cancelled, between
+multiple requests.Some common patterns are listed below in more detail.
+async
function callsMost functions in Synapse are relatively straightforward from a
+cancellation standpoint: they don't do anything with Deferred
s and
+purely call and await
other async
functions.
An async
function handles cancellation correctly if its own code
+handles cancellation correctly and all the async function it calls
+handle cancellation correctly. For example:
async def do_two_things() -> None:
+ check_something()
+ await do_something()
+ await do_something_else()
+
+do_two_things
handles cancellation correctly if do_something
and
+do_something_else
handle cancellation correctly.
That is, when checking whether a function handles cancellation
+correctly, its implementation and all its async
function calls need to
+be checked, recursively.
As check_something
is not async
, it does not need to be checked.
Because Twisted's CancelledError
s are Exception
s, it's easy to
+accidentally catch and suppress them. Care must be taken to ensure that
+CancelledError
s are allowed to propagate upwards.
+ Bad: +
+ |
+
+ Good: +
+ |
+
+ OK: +
+ |
+
+ Good: +
+ |
+
defer.gatherResults
produces a Deferred
which:
cancel()
calls to every Deferred
being waited on.FirstError
.Together, this means that CancelledError
s will be wrapped in
+a FirstError
unless unwrapped. Such FirstError
s are liable to be
+swallowed, so they must be unwrapped.
+ Bad: +
+ |
+
+ Good: +
+ |
+
Deferred
sIf a function creates a Deferred
, the effect of cancelling it must be considered. Deferred
s that get shared are likely to have unintended behaviour when cancelled.
+ Bad: +
+ |
+
+ Good: +
+ |
+
+ | +
+ Good: +
+ |
+
Some async
functions may kick off some async
processing which is
+intentionally protected from cancellation, by stop_cancellation
or
+other means. If the async
processing inherits the logcontext of the
+request which initiated it, care must be taken to ensure that the
+logcontext is not finished before the async
processing completes.
+ Bad: +
+ |
+
+ Good: +
+ |
+
+ OK: +
+ |
++ | +
This is a work-in-progress set of notes with two goals:
+See also MSC3902.
+The key idea is described by MSC3706. This allows servers to
+request a lightweight response to the federation /send_join
endpoint.
+This is called a faster join, also known as a partial join. In these
+notes we'll usually use the word "partial" as it matches the database schema.
The response to a partial join consists of
+J
the requested join event J,
a list of the servers in the room (according to the state before J),
a subset of the state of the room before J,
the full auth chain of that state subset.
+partial_state_rooms
. It also marks the join event J
as "partially stated",
+meaning that we have neither received nor computed the full state before/after
+J
. This is done by adding a row to partial_state_events
.
matrix=> \d partial_state_events
+Table "matrix.partial_state_events"
+ Column │ Type │ Collation │ Nullable │ Default
+══════════╪══════╪═══════════╪══════════╪═════════
+ room_id │ text │ │ not null │
+ event_id │ text │ │ not null │
+
+matrix=> \d partial_state_rooms
+ Table "matrix.partial_state_rooms"
+ Column │ Type │ Collation │ Nullable │ Default
+════════════════════════╪════════╪═══════════╪══════════╪═════════
+ room_id │ text │ │ not null │
+ device_lists_stream_id │ bigint │ │ not null │ 0
+ join_event_id │ text │ │ │
+ joined_via │ text │ │ │
+
+matrix=> \d partial_state_rooms_servers
+ Table "matrix.partial_state_rooms_servers"
+ Column │ Type │ Collation │ Nullable │ Default
+═════════════╪══════╪═══════════╪══════════╪═════════
+ room_id │ text │ │ not null │
+ server_name │ text │ │ not null │
+
+Indices, foreign-keys and check constraints are omitted for brevity.
+While partially joined to a room, Synapse receives events E
from remote
+homeservers as normal, and can create events at the request of its local users.
+However, we run into trouble when we enforce the checks on an event.
+++
+- Is a valid event, otherwise it is dropped. For an event to be valid, it +must contain a room_id, and it must comply with the event format of that +room version.
+- Passes signature checks, otherwise it is dropped.
+- Passes hash checks, otherwise it is redacted before being processed further.
+- Passes authorization rules based on the event’s auth events, otherwise it +is rejected.
+- Passes authorization rules based on the state before the event, otherwise +it is rejected.
+- Passes authorization rules based on the current state of the room, +otherwise it is “soft failed”.
+
We can enforce checks 1--4 without any problems.
+But we cannot enforce checks 5 or 6 with complete certainty, since Synapse does
+not know the full state before E
, nor that of the room.
Instead, we make a best-effort approximation. +While the room is considered partially joined, Synapse tracks the "partial +state" before events. +This works in a similar way as regular state:
+J
is that given to us by the partial join response.E
is the resolution of the partial states
+after each of E
's prev_event
s.E
is rejected or a message event, the partial state after E
is the
+partial state before E
.E
is the partial state before E
, plus
+E
itself.More concisely, partial state propagates just like full state; the only
+difference is that we "seed" it with an incomplete initial state.
+Synapse records that we have only calculated partial state for this event with
+a row in partial_state_events
.
While the room remains partially stated, check 5 on incoming events to that +room becomes:
++++
+- Passes authorization rules based on the resolution between the partial +state before
+E
andE
's auth events. If the event fails to pass +authorization rules, it is rejected.
Additionally, check 6 is deleted: no soft-failures are enforced.
+While partially joined, the current partial state of the room is defined as the +resolution across the partial states after all forward extremities in the room.
+Remark. Events with partial state are not considered +outliers.
+Using partial state means the auth checks can fail in a few different ways1.
+Is this exhaustive?
+(Note that the discrepancies described in the last two bullets are user-visible.)
+This means that we have to be very careful when we want to lookup pieces of room +state in a partially-joined room. Our approximation of the state may be +incorrect or missing. But we can make some educated guesses. If
+then we proceed as normal, and let the resync process fix up any mistakes (see +below).
+When is our partial state likely to be correct?
+In short, we deem it acceptable to trust the partial state for non-membership +and local membership events. For remote membership events, we wait for the +resync to complete, at which point we have the full state of the room and can +proceed as normal.
+The partial-state approximation is only a temporary affair. In the background,
+synapse beings a "resync" process. This is a continuous loop, starting at the
+partial join event and proceeding downwards through the event graph. For each
+E
seen in the room since partial join, Synapse will fetch
E
, via
+/state_ids
;E
, included in the /state_ids
+response; andThis means Synapse has (or can compute) the full state before E
, which allows
+Synapse to properly authorise or reject E
. At this point ,the event
+is considered to have "full state" rather than "partial state". We record this
+by removing E
from the partial_state_events
table.
[TODO: Does Synapse persist a new state group for the full state
+before E
, or do we alter the (partial-)state group in-place? Are state groups
+ever marked as partially-stated? ]
This scheme means it is possible for us to have accepted and sent an event to +clients, only to reject it during the resync. From a client's perspective, the +effect is similar to a retroactive +state change due to state resolution---i.e. a "state reset".2
+Clients should refresh caches to detect such a change. Rumour has it that +sliding sync will fix this.
+When all events since the join J
have been fully-stated, the room resync
+process is complete. We record this by removing the room from
+partial_state_rooms
.
For the time being, the resync process happens on the master worker.
+A new replication stream un_partial_stated_room
is added. Whenever a resync
+completes and a partial-state room becomes fully stated, a new message is sent
+into that stream containing the room ID.
++NB. The notes below are rough. Some of them are hidden under
+<details>
+disclosures because they have yet to be implemented in mainline Synapse.
When sending out messages during a partial join, we assume our partial state is +accurate and proceed as normal. For this to have any hope of succeeding at all, +our partial state must contain an entry for each of the (type, state key) pairs +specified by the auth rules:
+m.room.create
m.room.join_rules
m.room.power_levels
m.room.third_party_invite
m.room.member
The first four of these should be present in the state before J
that is given
+to us in the partial join response; only membership events are omitted. In order
+for us to consider the user joined, we must have their membership event. That
+means the only possible omission is the target's membership in an invite, kick
+or ban.
The worst possibility is that we locally invite someone who is banned according to +the full state, because we lack their ban in our current partial state. The rest +of the federation---at least, those who are fully joined---should correctly +enforce the membership transition constraints. So any the erroneous invite should be ignored by fully-joined +homeservers and resolved by the resync for partially-joined homeservers.
+In more generality, there are two problems we're worrying about here:
+However we expect such problems to be unlikely in practise, because
+TODO: needs prose fleshing out.
+Normally: send out in a fed txn to all HSes in the room. +We only know that some HSes were in the room at some point. Wat do. +Send it out to the list of servers from the first join. +TODO what do we do here if we have full state? +If the prev event was created by us, we can risk sending it to the wrong HS. (Motivation: privacy concern of the content. Not such a big deal for a public room or an encrypted room. But non-encrypted invite-only...) +But don't want to send out sensitive data in other HS's events in this way.
+Suppose we discover after resync that we shouldn't have sent out one our events (not a prev_event) to a target HS. Not much we can do. +What about if we didn't send them an event but shouldn't've? +E.g. what if someone joined from a new HS shortly after you did? We wouldn't talk to them. +Could imagine sending out the "Missed" events after the resync but... painful to work out what they should have seen if they joined/left. +Instead, just send them the latest event (if they're still in the room after resync) and let them backfill.(?)
+NB. Not yet implemented.
+TODO: needs prose fleshing out. Liase with Matthieu. Explain why /send_join +(Rich was surprised we didn't just create it locally. Answer: to try and avoid +a join which then gets rejected after resync.)
+We don't know for sure that any join we create would be accepted. +E.g. the joined user might have been banned; the join rules might have changed in a way that we didn't realise... some way in which the partial state was mistaken. +Instead, do another partial make-join/send-join handshake to confirm that the join works.
+NB. Not yet implemented.
+When you're fully joined to a room, to have U
leave a room their homeserver
+needs to
U
which will be accepted by other homeservers,
+andU
out to the homeservers in the federation.When is a leave event accepted? See +v10 auth rules:
++++
+- If type is m.room.member: [...] +> +> 5. If membership is leave: +> +> 1. If the sender matches state_key, allow if and only if that user’s current membership state is invite, join, or knock. +2. [...]
+
I think this means that (well-formed!) self-leaves are governed entirely by
+4.5.1. This means that if we correctly calculate state which says that U
is
+invited, joined or knocked and include it in the leave's auth events, our event
+is accepted by checks 4 and 5 on incoming events.
+++
+- Passes authorization rules based on the event’s auth events, otherwise +> it is rejected.
+- Passes authorization rules based on the state before the event, otherwise +> it is rejected.
+
The only way to fail check 6 is if the receiving server's current state of the
+room says that U
is banned, has left, or has no membership event. But this is
+fine: the receiving server already thinks that U
isn't in the room.
+++
+- Passes authorization rules based on the current state of the room, +> otherwise it is “soft failed”.
+
For the second point (publishing the leave event), the best thing we can do is +to is publish to all HSes we know to be currently in the room. If they miss that +event, they might send us traffic in the room that we don't care about. This is +a problem with leaving after a "full" join; we don't seek to fix this with +partial joins.
+(With that said: there's nothing machine-readable in the /send response. I don't +think we can deduce "destination has left the room" from a failure to /send an +event into that room?)
+We can create leave events and can choose what gets included in our auth events, +so we can be sure that we pass check 4 on incoming events. For check 5, we might +have an incorrect view of the state before an event. +The only way we might erroneously think a leave is valid is if
+U
joined, invited or knocked, butU
banned, left or not present,in which case the leave doesn't make anything worse: other HSes already consider +us as not in the room, and will continue to do so after seeing the leave.
+The remaining obstacle is then: can we safely broadcast the leave event? We may
+miss servers or incorrectly think that a server is in the room. Or the
+destination server may be offline and miss the transaction containing our leave
+event.This should self-heal when they see an event whose prev_events
descends
+from our leave.
Another option we considered was to use federation /send_leave
to ask a
+fully-joined server to send out the event on our behalf. But that introduces
+complexity without much benefit. Besides, as Rich put it,
++sending out leaves is pretty best-effort currently
+
so this is probably good enough as-is.
+TODO: what cleanup is necessary? Is it all just nice-to-have to save unused +work?
+Synapse has a concept of "streams", which are roughly described in id_generators.py
.
+Generally speaking, streams are a series of notifications that something in Synapse's database has changed that the application might need to respond to.
+For example:
See synapse.replication.tcp.streams
for the full list of streams.
It is very helpful to understand the streams mechanism when working on any part of Synapse that needs to respond to changes—especially if those changes are made by different workers.
+To that end, let's describe streams formally, paraphrasing from the docstring of AbstractStreamIdGenerator
.
A stream is an append-only log T1, T2, ..., Tn, ...
of facts1 which grows over time.
+Only "writers" can add facts to a stream, and there may be multiple writers.
Each fact has an ID, called its "stream ID". +Readers should only process facts in ascending stream ID order.
+Roughly speaking, each stream is backed by a database table.
+It should have a stream_id
(or similar) bigint column holding stream IDs, plus additional columns as necessary to describe the fact.
+Typically, a fact is expressed with a single row in its backing table.2
+Within a stream, no two facts may have the same stream_id.
++Aside. Some additional notes on streams' backing tables.
++
+- Rich would like to ditch the backing tables.
+- The backing tables may have other uses. +> For example, the events table serves backs the events stream, and is read when processing new events. +> But old rows are read from the table all the time, whenever Synapse needs to lookup some facts about an event.
+- Rich suspects that sometimes the stream is backed by multiple tables, so the stream proper is the union of those tables.
+
Stream writers can "reserve" a stream ID, and then later mark it as having being completed. +Stream writers need to track the completion of each stream fact. +In the happy case, completion means a fact has been written to the stream table. +But unhappy cases (e.g. transaction rollback due to an error) also count as completion. +Once completed, the rows written with that stream ID are fixed, and no new rows +will be inserted with that ID.
+For any given stream reader (including writers themselves), we may define a per-writer current stream ID:
+++A current stream ID for a writer W is the largest stream ID such that +all transactions added by W with equal or smaller ID have completed.
+
Similarly, there is a "linear" notion of current stream ID:
+++A "linear" current stream ID is the largest stream ID such that +all facts (added by any writer) with equal or smaller ID have completed.
+
Because different stream readers A and B learn about new facts at different times, A and B may disagree about current stream IDs. +Put differently: we should think of stream readers as being independent of each other, proceeding through a stream of facts at different rates.
+The above definition does not give a unique current stream ID, in fact there can +be a range of current stream IDs. Synapse uses both the minimum and maximum IDs +for different purposes. Most often the maximum is used, as its generally +beneficial for workers to advance their IDs as soon as possible. However, the +minimum is used in situations where e.g. another worker is going to wait until +the stream advances past a position.
+NB. For both senses of "current", that if a writer opens a transaction that never completes, the current stream ID will never advance beyond that writer's last written stream ID.
+For single-writer streams, the per-writer current ID and the linear current ID are the same. +Both senses of current ID are monotonic, but they may "skip" or jump over IDs because facts complete out of order.
+Example. +Consider a single-writer stream which is initially at ID 1.
+Action | Current stream ID | Notes |
---|---|---|
1 | ||
Reserve 2 | 1 | |
Reserve 3 | 1 | |
Complete 3 | 1 | current ID unchanged, waiting for 2 to complete |
Complete 2 | 3 | current ID jumps from 1 -> 3 |
Reserve 4 | 3 | |
Reserve 5 | 3 | |
Reserve 6 | 3 | |
Complete 5 | 3 | |
Complete 4 | 5 | current ID jumps 3->5, even though 6 is pending |
Complete 6 | 6 |
There are two ways to view a multi-writer stream.
+The single stream (option 2) is conceptually simpler, and easier to represent (a single stream id). +However, it requires each reader to know about the entire set of writers, to ensures that readers don't erroneously advance their current stream position too early and miss a fact from an unknown writer. +In contrast, multiple parallel streams (option 1) are more complex, requiring more state to represent (map from writer to stream id). +The payoff for doing so is that readers can "peek" ahead to facts that completed on one writer no matter the state of the others, reducing latency.
+Note that a multi-writer stream can be viewed in both ways. +For example, the events stream is treated as multiple single-writer streams (option 1) by the sync handler, so that events are sent to clients as soon as possible. +But the background process that works through events treats them as a single linear stream.
+Another useful example is the cache invalidation stream. +The facts this stream holds are instructions to "you should now invalidate these cache entries". +We only ever treat this as a multiple single-writer streams as there is no important ordering between cache invalidations. +(Invalidations are self-contained facts; and the invalidations commute/are idempotent).
+Writers need to track:
+At startup,
+To reserve a stream ID, call nextval
on the appropriate postgres sequence.
To write a fact to the stream: insert the appropriate rows to the appropriate backing table.
+To complete a fact, first remove it from your map of facts currently awaiting completion.
+Then, if no earlier fact is awaiting completion, the writer can advance its current position in that stream.
+Upon doing so it should emit an RDATA
message3, once for every fact between the old and the new stream ID.
Readers need to track the current position of every writer.
+At startup, they can find this by contacting each writer with a REPLICATE
message,
+requesting that all writers reply describing their current position in their streams.
+Writers reply with a POSITION
message.
To learn about new facts, readers should listen for RDATA
messages and process them to respond to the new fact.
+The RDATA
itself is not a self-contained representation of the fact;
+readers will have to query the stream tables for the full details.
+Readers must also advance their record of the writer's current position for that stream.
In a nutshell: we have an append-only log with a "buffer/scratchpad" at the end where we have to wait for the sequence to be linear and contiguous.
+we use the word fact here for two reasons. +Firstly, the word "event" is already heavily overloaded (PDUs, EDUs, account data, ...) and we don't need to make that worse. +Secondly, "fact" emphasises that the things we append to a stream cannot change after the fact.
+A fact might be expressed with 0 rows, e.g. if we opened a transaction to persist an event, but failed and rolled the transaction back before marking the fact as completed. +In principle a fact might be expressed with 2 or more rows; if so, each of those rows should share the fact's stream ID.
+This communication used to happen directly with the writers over TCP; +nowadays it's done via Redis's Pubsub.
+Federation is the process by which users on different servers can participate +in the same room. For this to work, those other servers must be able to contact +yours to send messages.
+The server_name
configured in the Synapse configuration file (often
+homeserver.yaml
) defines how resources (users, rooms, etc.) will be
+identified (eg: @user:example.com
, #room:example.com
). By default,
+it is also the domain that other servers will use to try to reach your
+server (via port 8448). This is easy to set up and will work provided
+you set the server_name
to match your machine's public DNS hostname.
For this default configuration to work, you will need to listen for TLS +connections on port 8448. The preferred way to do that is by using a +reverse proxy: see the reverse proxy documentation for instructions +on how to correctly set one up.
+In some cases you might not want to run Synapse on the machine that has
+the server_name
as its public DNS hostname, or you might want federation
+traffic to use a different port than 8448. For example, you might want to
+have your user names look like @user:example.com
, but you want to run
+Synapse on synapse.example.com
on port 443. This can be done using
+delegation, which allows an admin to control where federation traffic should
+be sent. See the delegation documentation for instructions on how to set this up.
Once federation has been configured, you should be able to join a room over
+federation. A good place to start is #synapse:matrix.org
- a room for
+Synapse admins.
You can use the federation tester
+to check if your homeserver is configured correctly. Alternatively try the
+JSON API used by the federation tester.
+Note that you'll have to modify this URL to replace DOMAIN
with your
+server_name
. Hitting the API directly provides extra detail.
The typical failure mode for federation is that when the server tries to join +a room, it is rejected with "401: Unauthorized". Generally this means that other +servers in the room could not access yours. (Joining a room over federation is +a complicated dance which requires connections in both directions).
+Another common problem is that people on other servers can't join rooms that +you invite them to. This can be caused by an incorrectly-configured reverse +proxy: see the reverse proxy documentation for instructions on how +to correctly configure a reverse proxy.
+HTTP 308 Permanent Redirect
redirects are not followed: Due to missing features
+in the HTTP library used by Synapse, 308 redirects are currently not followed by
+federating servers, which can cause M_UNKNOWN
or 401 Unauthorized
errors. This
+may affect users who are redirecting apex-to-www (e.g. example.com
-> www.example.com
),
+and especially users of the Kubernetes Nginx Ingress module, which uses 308 redirect
+codes by default. For those Kubernetes users, this Stackoverflow post
+might be helpful. For other users, switching to a 301 Moved Permanently
code may be
+an option. 308 redirect codes will be supported properly in a future
+release of Synapse.
If you want to get up and running quickly with a trio of homeservers in a
+private federation, there is a script in the demo
directory. This is mainly
+useful just for development purposes. See
+demo scripts.
Welcome to the documentation repository for Synapse, a +Matrix homeserver implementation developed by Element.
+This documentation covers topics for installation, configuration and +maintenance of your Synapse process:
+Learn how to install and +configure your own instance, perhaps with Single +Sign-On.
+See how to upgrade between Synapse versions.
+Administer your instance using the Admin +API, installing pluggable +modules, or by accessing the manhole.
+Learn how to read log lines, configure +logging or set up structured +logging.
+Scale Synapse through additional worker processes.
+Set up monitoring and metrics to keep an eye on your +Synapse instance's performance.
+Contributions are welcome! Synapse is primarily written in +Python. As a developer, you may be interested in the +following documentation:
+Read the Contributing Guide. It is meant +to walk new contributors through the process of developing and submitting a +change to the Synapse codebase (which is hosted on +GitHub).
+Set up your development +environment, then learn +how to lint and +test your code.
+Look at the issue tracker for +bugs to fix or features to add. If you're new, it may be best to start with +those labeled good first +issue.
+Understand how Synapse is +built, how to migrate +database schemas, learn about +federation and how to set up a local +federation for development.
+We like to keep our git
history clean. Learn how to
+do so!
And finally, contribute to this documentation! The source for which is +located here.
+If you've found a security issue in Synapse or any other Element project, +please report it to us in accordance with our Security Disclosure +Policy. Thank you!
+ +