<!DOCTYPE HTML>
<html lang="en" class="sidebar-visible no-js light">
<head>
    <!-- Book generated using mdBook -->
    <meta charset="UTF-8">
    <title>Synapse</title>
    <meta name="robots" content="noindex" />
    <!-- Custom HTML head -->
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta name="theme-color" content="#ffffff" />
    <link rel="icon" href="favicon.svg">
    <link rel="shortcut icon" href="favicon.png">
    <link rel="stylesheet" href="css/variables.css">
    <link rel="stylesheet" href="css/general.css">
    <link rel="stylesheet" href="css/chrome.css">
    <link rel="stylesheet" href="css/print.css" media="print">
    <!-- Fonts -->
    <link rel="stylesheet" href="FontAwesome/css/font-awesome.css">
    <link rel="stylesheet" href="fonts/fonts.css">
    <!-- Highlight.js Stylesheets -->
    <link rel="stylesheet" href="highlight.css">
    <link rel="stylesheet" href="tomorrow-night.css">
    <link rel="stylesheet" href="ayu-highlight.css">
    <!-- Custom theme stylesheets -->
    <link rel="stylesheet" href="docs/website_files/table-of-contents.css">
    <link rel="stylesheet" href="docs/website_files/remove-nav-buttons.css">
    <link rel="stylesheet" href="docs/website_files/indent-section-headers.css">
</head>
<body>
    <!-- Provide site root to javascript -->
    <script type="text/javascript">
        var path_to_root = "";
        var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "navy" : "light";
    </script>
    <!-- Work around some values being stored in localStorage wrapped in quotes -->
    <script type="text/javascript">
        try {
            var theme = localStorage.getItem('mdbook-theme');
            var sidebar = localStorage.getItem('mdbook-sidebar');
            if (theme.startsWith('"') && theme.endsWith('"')) {
                localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
            }
            if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
                localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
            }
        } catch (e) { }
    </script>
    <!-- Set the theme before any content is loaded, prevents flash -->
    <script type="text/javascript">
        var theme;
        try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
        if (theme === null || theme === undefined) { theme = default_theme; }
        var html = document.querySelector('html');
        html.classList.remove('no-js')
        html.classList.remove('light')
        html.classList.add(theme);
        html.classList.add('js');
    </script>
    <!-- Hide / unhide sidebar before it is displayed -->
    <script type="text/javascript">
        var html = document.querySelector('html');
        var sidebar = 'hidden';
        if (document.body.clientWidth >= 1080) {
            try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
            sidebar = sidebar || 'visible';
        }
        html.classList.remove('sidebar-visible');
        html.classList.add("sidebar-" + sidebar);
    </script>
    <nav id="sidebar" class="sidebar" aria-label="Table of contents">
        <div class="sidebar-scrollbox">
<ol class="chapter"><li class="chapter-item expanded affix "><li class="part-title">Introduction</li><li class="chapter-item expanded "><a href="welcome_and_overview.html">Welcome and Overview</a></li><li class="chapter-item expanded affix "><li class="part-title">Setup</li><li class="chapter-item expanded "><a href="setup/installation.html">Installation</a></li><li class="chapter-item expanded "><a href="postgres.html">Using Postgres</a></li><li class="chapter-item expanded "><a href="reverse_proxy.html">Configuring a Reverse Proxy</a></li><li class="chapter-item expanded "><a href="turn-howto.html">Configuring a Turn Server</a></li><li class="chapter-item expanded "><a href="delegate.html">Delegation</a></li><li class="chapter-item expanded affix "><li class="part-title">Upgrading</li><li class="chapter-item expanded "><a href="upgrading/index.html">Upgrading between Synapse Versions</a></li><li class="chapter-item expanded "><a href="MSC1711_certificates_FAQ.html">Upgrading from pre-Synapse 1.0</a></li><li class="chapter-item expanded affix "><li class="part-title">Usage</li><li class="chapter-item expanded "><a href="federate.html">Federation</a></li><li class="chapter-item expanded "><a href="usage/configuration/index.html">Configuration</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="usage/configuration/homeserver_sample_config.html">Homeserver Sample Config File</a></li><li class="chapter-item expanded "><a href="usage/configuration/logging_sample_config.html">Logging Sample Config File</a></li><li class="chapter-item expanded "><a href="structured_logging.html">Structured Logging</a></li><li class="chapter-item expanded "><a href="usage/configuration/user_authentication/index.html">User Authentication</a></li><li><ol class="section"><li class="chapter-item expanded "><div>Single-Sign On</div></li><li><ol class="section"><li class="chapter-item expanded "><a href="openid.html">OpenID Connect</a></li><li class="chapter-item expanded "><div>SAML</div></li><li class="chapter-item expanded "><div>CAS</div></li><li class="chapter-item expanded "><a href="sso_mapping_providers.html">SSO Mapping Providers</a></li></ol></li><li class="chapter-item expanded "><a href="password_auth_providers.html">Password Auth Providers</a></li><li class="chapter-item expanded "><a href="jwt.html">JSON Web Tokens</a></li></ol></li><li class="chapter-item expanded "><a href="CAPTCHA_SETUP.html">Registration Captcha</a></li><li class="chapter-item expanded "><a href="application_services.html">Application Services</a></li><li class="chapter-item expanded "><a href="server_notices.html">Server Notices</a></li><li class="chapter-item expanded "><a href="consent_tracking.html">Consent Tracking</a></li><li class="chapter-item expanded "><a href="url_previews.html">URL Previews</a></li><li class="chapter-item expanded "><a href="user_directory.html">User Directory</a></li><li class="chapter-item expanded "><a href="message_retention_policies.html">Message Retention Policies</a></li><li class="chapter-item expanded "><div>Pluggable Modules</div></li><li><ol class="section"><li class="chapter-item expanded "><div>Third Party Rules</div></li><li class="chapter-item expanded "><a href="spam_checker.html">Spam Checker</a></li><li class="chapter-item expanded "><a href="presence_router_module.html">Presence Router</a></li><li class="chapter-item expanded "><div>Media Storage Providers</div></li></ol></li><li class="chapter-item expanded "><a href="workers.html">Workers</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="synctl_workers.html">Using synctl with Workers</a></li><li class="chapter-item expanded "><a href="systemd-with-workers/index.html">Systemd</a></li></ol></li></ol></li><li class="chapter-item expanded "><a href="usage/administration/index.html">Administration</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="usage/administration/admin_api/index.html">Admin API</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="admin_a
        </div>
        <div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
    </nav>
    <div id="page-wrapper" class="page-wrapper">
        <div class="page">
            <div id="menu-bar-hover-placeholder"></div>
            <div id="menu-bar" class="menu-bar sticky bordered">
                <div class="left-buttons">
                    <button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
                        <i class="fa fa-bars"></i>
                    </button>
                    <button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
                        <i class="fa fa-paint-brush"></i>
                    </button>
                    <ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
                        <li role="none"><button role="menuitem" class="theme" id="light">Light (default)</button></li>
                        <li role="none"><button role="menuitem" class="theme" id="rust">Rust</button></li>
                        <li role="none"><button role="menuitem" class="theme" id="coal">Coal</button></li>
                        <li role="none"><button role="menuitem" class="theme" id="navy">Navy</button></li>
                        <li role="none"><button role="menuitem" class="theme" id="ayu">Ayu</button></li>
                    </ul>
                    <button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar">
                        <i class="fa fa-search"></i>
                    </button>
                </div>
                <h1 class="menu-title">Synapse</h1>
                <div class="right-buttons">
                    <a href="print.html" title="Print this book" aria-label="Print this book">
                        <i id="print-button" class="fa fa-print"></i>
                    </a>
                    <a href="https://github.com/matrix-org/synapse" title="Git repository" aria-label="Git repository">
                        <i id="git-repository-button" class="fa fa-github"></i>
                    </a>
                </div>
            </div>
            <div id="search-wrapper" class="hidden">
                <form id="searchbar-outer" class="searchbar-outer">
                    <input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header">
                </form>
                <div id="searchresults-outer" class="searchresults-outer hidden">
                    <div id="searchresults-header" class="searchresults-header"></div>
                    <ul id="searchresults">
                    </ul>
                </div>
            </div>
            <!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
            <script type="text/javascript">
                document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
                document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
                Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
                    link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
                });
            </script>
            <div id="content" class="content">
                <main>
                    <!-- Page table of contents -->
                    <div class="sidetoc">
                        <nav class="pagetoc"></nav>
                    </div>
                    <div id="chapter_begin" style="break-before: page; page-break-before: always;"></div><h1 id="introduction"><a class="header" href="#introduction">Introduction</a></h1>
<p>Welcome to the documentation repository for Synapse, the reference
<a href="https://matrix.org">Matrix</a> homeserver implementation.</p>
<div id="chapter_begin" style="break-before: page; page-break-before: always;"></div><!--
Include the contents of INSTALL.md from the project root without moving it, which may
break links around the internet. Additionally, note that SUMMARY.md is unable to
directly link to content outside of the docs/ directory. So we use this file as a
redirection.
-->
<h1 id="installation-instructions"><a class="header" href="#installation-instructions">Installation Instructions</a></h1>
<p>There are 3 steps to follow under <strong>Installation Instructions</strong>.</p>
<ul>
<li><a href="setup/installation.html#installation-instructions">Installation Instructions</a>
<ul>
<li><a href="setup/installation.html#choosing-your-server-name">Choosing your server name</a></li>
<li><a href="setup/installation.html#installing-synapse">Installing Synapse</a>
<ul>
<li><a href="setup/installation.html#installing-from-source">Installing from source</a>
<ul>
<li><a href="setup/installation.html#platform-specific-prerequisites">Platform-specific prerequisites</a>
<ul>
<li><a href="setup/installation.html#debianubunturaspbian">Debian/Ubuntu/Raspbian</a></li>
<li><a href="setup/installation.html#archlinux">ArchLinux</a></li>
<li><a href="setup/installation.html#centosfedora">CentOS/Fedora</a></li>
<li><a href="setup/installation.html#macos">macOS</a></li>
<li><a href="setup/installation.html#opensuse">OpenSUSE</a></li>
<li><a href="setup/installation.html#openbsd">OpenBSD</a></li>
<li><a href="setup/installation.html#windows">Windows</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="setup/installation.html#prebuilt-packages">Prebuilt packages</a>
<ul>
<li><a href="setup/installation.html#docker-images-and-ansible-playbooks">Docker images and Ansible playbooks</a></li>
<li><a href="setup/installation.html#debianubuntu">Debian/Ubuntu</a>
<ul>
<li><a href="setup/installation.html#matrixorg-packages">Matrix.org packages</a></li>
<li><a href="setup/installation.html#downstream-debian-packages">Downstream Debian packages</a></li>
<li><a href="setup/installation.html#downstream-ubuntu-packages">Downstream Ubuntu packages</a></li>
</ul>
</li>
<li><a href="setup/installation.html#fedora">Fedora</a></li>
<li><a href="setup/installation.html#opensuse-1">OpenSUSE</a></li>
<li><a href="setup/installation.html#suse-linux-enterprise-server">SUSE Linux Enterprise Server</a></li>
<li><a href="setup/installation.html#archlinux-1">ArchLinux</a></li>
<li><a href="setup/installation.html#void-linux">Void Linux</a></li>
<li><a href="setup/installation.html#freebsd">FreeBSD</a></li>
<li><a href="setup/installation.html#openbsd-1">OpenBSD</a></li>
<li><a href="setup/installation.html#nixos">NixOS</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="setup/installation.html#setting-up-synapse">Setting up Synapse</a>
<ul>
<li><a href="setup/installation.html#using-postgresql">Using PostgreSQL</a></li>
<li><a href="setup/installation.html#tls-certificates">TLS certificates</a></li>
<li><a href="setup/installation.html#client-well-known-uri">Client Well-Known URI</a></li>
<li><a href="setup/installation.html#email">Email</a></li>
<li><a href="setup/installation.html#registering-a-user">Registering a user</a></li>
<li><a href="setup/installation.html#setting-up-a-turn-server">Setting up a TURN server</a></li>
<li><a href="setup/installation.html#url-previews">URL previews</a></li>
<li><a href="setup/installation.html#troubleshooting-installation">Troubleshooting Installation</a></li>
</ul>
</li>
</ul>
</li>
</ul>
<h2 id="choosing-your-server-name"><a class="header" href="#choosing-your-server-name">Choosing your server name</a></h2>
<p>It is important to choose the name for your server before you install Synapse,
because it cannot be changed later.</p>
<p>The server name determines the &quot;domain&quot; part of user-ids for users on your
server: these will all be of the format <code>@user:my.domain.name</code>. It also
determines how other matrix servers will reach yours for federation.</p>
<p>For a test configuration, set this to the hostname of your server. For a more
production-ready setup, you will probably want to specify your domain
(<code>example.com</code>) rather than a matrix-specific hostname here (in the same way
that your email address is probably <code>user@example.com</code> rather than
<code>user@email.example.com</code>) - but doing so may require more advanced setup: see
<a href="setup/docs/federate.html">Setting up Federation</a>.</p>
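<p>To illustrate what that advanced setup involves: when your server name is a bare domain, other homeservers may look up a <code>/.well-known/matrix/server</code> document on that domain to find the actual host to talk to. The sketch below only simulates that lookup locally; <code>example.com</code> and the delegated host are placeholder names, not real servers:</p>
<pre><code class="language-sh"># Simulate the body a server could publish at https://example.com/.well-known/matrix/server
echo '{"m.server": "matrix.example.com:443"}' > well_known.json
# Extract the delegated host, as a federating peer would:
python3 -c "import json; print(json.load(open('well_known.json'))['m.server'])"
</code></pre>
<p>The real procedure, including reverse-proxy considerations, is covered in the federation documentation linked above.</p>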
<h2 id="installing-synapse"><a class="header" href="#installing-synapse">Installing Synapse</a></h2>
<h3 id="installing-from-source"><a class="header" href="#installing-from-source">Installing from source</a></h3>
<p>(Prebuilt packages are available for some platforms - see <a href="setup/installation.html#prebuilt-packages">Prebuilt packages</a>.)</p>
<p>When installing from source please make sure that the <a href="setup/installation.html#platform-specific-prerequisites">Platform-specific prerequisites</a> are already installed.</p>
<p>System requirements:</p>
<ul>
<li>POSIX-compliant system (tested on Linux &amp; OS X)</li>
<li>Python 3.5.2 or later, up to Python 3.9.</li>
<li>At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org</li>
</ul>
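<p>A quick way to confirm that your interpreter meets the Python requirement above (a convenience check, not part of the official instructions):</p>
<pre><code class="language-sh"># Prints True when the interpreter is at least Python 3.5.2:
python3 -c 'import sys; print(sys.version_info >= (3, 5, 2))'
</code></pre>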
<p>To install the Synapse homeserver run:</p>
<pre><code class="language-sh">mkdir -p ~/synapse
virtualenv -p python3 ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install matrix-synapse
</code></pre>
<p>This will download Synapse from <a href="https://pypi.org/project/matrix-synapse">PyPI</a>
and install it, along with the python libraries it uses, into a virtual environment
under <code>~/synapse/env</code>. Feel free to pick a different directory if you
prefer.</p>
<p>This Synapse installation can then be later upgraded by using pip again with the
update flag:</p>
<pre><code class="language-sh">source ~/synapse/env/bin/activate
pip install -U matrix-synapse
</code></pre>
<p>Before you can start Synapse, you will need to generate a configuration
file. To do this, run (in your virtualenv, as before):</p>
<pre><code class="language-sh">cd ~/synapse
python -m synapse.app.homeserver \
    --server-name my.domain.name \
    --config-path homeserver.yaml \
    --generate-config \
    --report-stats=[yes|no]
</code></pre>
<p>... substituting an appropriate value for <code>--server-name</code>.</p>
<p>This command will generate a config file that you can then customise, and it will
also generate a set of keys for you. These keys allow your homeserver to
identify itself to other homeservers, so don't lose or delete them. It would be
wise to back them up somewhere safe. (If, for whatever reason, you do need to
change your homeserver's keys, you may find that other homeservers have the
old key cached. If you update the signing key, you should change the name of the
key in the <code>&lt;server name&gt;.signing.key</code> file (the second word) to something
different. See the <a href="https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys">spec</a> for more information on key management.)</p>
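<p>As a purely illustrative sketch of the key-name edit described above - the filename, key id (<code>a_fake1</code>) and key bytes here are invented, not a real Synapse key:</p>
<pre><code class="language-sh"># A signing key file holds one line: algorithm, key id, base64 key material.
printf 'ed25519 a_fake1 aBcDeFgH0123456789base64seed\n' > my.domain.name.signing.key
# Changing the second word (the key id) distinguishes the new key from the cached one:
awk '{ $2 = "a_fake2"; print }' my.domain.name.signing.key > tmp.key
mv tmp.key my.domain.name.signing.key
cat my.domain.name.signing.key
</code></pre>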
<p>To actually run your new homeserver, pick a working directory for Synapse to
run (e.g. <code>~/synapse</code>), and:</p>
<pre><code class="language-sh">cd ~/synapse
source env/bin/activate
synctl start
</code></pre>
<h4 id="platform-specific-prerequisites"><a class="header" href="#platform-specific-prerequisites">Platform-specific prerequisites</a></h4>
<p>Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions.</p>
<h5 id="debianubunturaspbian"><a class="header" href="#debianubunturaspbian">Debian/Ubuntu/Raspbian</a></h5>
<p>Installing prerequisites on Ubuntu or Debian:</p>
<pre><code class="language-sh">sudo apt install build-essential python3-dev libffi-dev \
                 python3-pip python3-setuptools sqlite3 \
                 libssl-dev virtualenv libjpeg-dev libxslt1-dev
</code></pre>
<h5 id="archlinux"><a class="header" href="#archlinux">ArchLinux</a></h5>
<p>Installing prerequisites on ArchLinux:</p>
<pre><code class="language-sh">sudo pacman -S base-devel python python-pip \
               python-setuptools python-virtualenv sqlite3
</code></pre>
<h5 id="centosfedora"><a class="header" href="#centosfedora">CentOS/Fedora</a></h5>
<p>Installing prerequisites on CentOS or Fedora Linux:</p>
<pre><code class="language-sh">sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
                 libwebp-devel libxml2-devel libxslt-devel libpq-devel \
                 python3-virtualenv libffi-devel openssl-devel python3-devel
sudo dnf groupinstall &quot;Development Tools&quot;
</code></pre>
<h5 id="macos"><a class="header" href="#macos">macOS</a></h5>
<p>Installing prerequisites on macOS:</p>
<pre><code class="language-sh">xcode-select --install
sudo easy_install pip
sudo pip install virtualenv
brew install pkg-config libffi
</code></pre>
<p>On macOS Catalina (10.15) you may need to explicitly install OpenSSL
via brew and inform <code>pip</code> about it so that <code>psycopg2</code> builds:</p>
<pre><code class="language-sh">brew install openssl@1.1
export LDFLAGS=&quot;-L/usr/local/opt/openssl/lib&quot;
export CPPFLAGS=&quot;-I/usr/local/opt/openssl/include&quot;
</code></pre>
<h5 id="opensuse"><a class="header" href="#opensuse">OpenSUSE</a></h5>
<p>Installing prerequisites on openSUSE:</p>
<pre><code class="language-sh">sudo zypper in -t pattern devel_basis
sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
               python-devel libffi-devel libopenssl-devel libjpeg62-devel
</code></pre>
<h5 id="openbsd"><a class="header" href="#openbsd">OpenBSD</a></h5>
<p>A port of Synapse is available under <code>net/synapse</code>. The filesystem
underlying the homeserver directory (defaults to <code>/var/synapse</code>) has to be
mounted with <code>wxallowed</code> (cf. <code>mount(8)</code>), so creating a separate filesystem
and mounting it to <code>/var/synapse</code> should be taken into consideration.</p>
<p>To build Synapse's Python dependency, the <code>WRKOBJDIR</code>
(cf. <code>bsd.port.mk(5)</code>) used for building Python also needs to be on a filesystem
mounted with <code>wxallowed</code> (cf. <code>mount(8)</code>).</p>
<p>Creating a <code>WRKOBJDIR</code> for building Python under <code>/usr/local</code> (which on a
default OpenBSD installation is mounted with <code>wxallowed</code>):</p>
<pre><code class="language-sh">doas mkdir /usr/local/pobj_wxallowed
</code></pre>
<p>Assuming <code>PORTS_PRIVSEP=Yes</code> (cf. <code>bsd.port.mk(5)</code>) and <code>SUDO=doas</code> are
configured in <code>/etc/mk.conf</code>:</p>
<pre><code class="language-sh">doas chown _pbuild:_pbuild /usr/local/pobj_wxallowed
</code></pre>
<p>Setting the <code>WRKOBJDIR</code> for building Python:</p>
<pre><code class="language-sh">echo WRKOBJDIR_lang/python/3.7=/usr/local/pobj_wxallowed \\nWRKOBJDIR_lang/python/2.7=/usr/local/pobj_wxallowed &gt;&gt; /etc/mk.conf
</code></pre>
<p>Building Synapse:</p>
<pre><code class="language-sh">cd /usr/ports/net/synapse
make install
</code></pre>
<h5 id="windows"><a class="header" href="#windows">Windows</a></h5>
<p>If you wish to run or develop Synapse on Windows, the Windows Subsystem For
Linux provides a Linux environment on Windows 10 which is capable of using the
Debian, Fedora, or source installation methods. More information about WSL can
be found at <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">https://docs.microsoft.com/en-us/windows/wsl/install-win10</a> for
Windows 10 and <a href="https://docs.microsoft.com/en-us/windows/wsl/install-on-server">https://docs.microsoft.com/en-us/windows/wsl/install-on-server</a>
for Windows Server.</p>
<h3 id="prebuilt-packages"><a class="header" href="#prebuilt-packages">Prebuilt packages</a></h3>
<p>As an alternative to installing from source, prebuilt packages are available
for a number of platforms.</p>
<h4 id="docker-images-and-ansible-playbooks"><a class="header" href="#docker-images-and-ansible-playbooks">Docker images and Ansible playbooks</a></h4>
<p>There is an official synapse image available at
<a href="https://hub.docker.com/r/matrixdotorg/synapse">https://hub.docker.com/r/matrixdotorg/synapse</a> which can be used with
the docker-compose file available at <a href="setup/contrib/docker">contrib/docker</a>. Further
information on this including configuration options is available in the README
on hub.docker.com.</p>
<p>Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
<a href="https://hub.docker.com/r/avhost/docker-matrix/tags/">https://hub.docker.com/r/avhost/docker-matrix/tags/</a></p>
<p>Slavi Pantaleev has created an Ansible playbook,
which installs the official Docker image of Matrix Synapse
along with many other Matrix-related services (Postgres database, Element, coturn,
ma1sd, SSL support, etc.).
For more details, see
<a href="https://github.com/spantaleev/matrix-docker-ansible-deploy">https://github.com/spantaleev/matrix-docker-ansible-deploy</a></p>
<h4 id="debianubuntu"><a class="header" href="#debianubuntu">Debian/Ubuntu</a></h4>
<h5 id="matrixorg-packages"><a class="header" href="#matrixorg-packages">Matrix.org packages</a></h5>
<p>Matrix.org provides Debian/Ubuntu packages of the latest stable version of
Synapse via <a href="https://packages.matrix.org/debian/">https://packages.matrix.org/debian/</a>. They are available for Debian
9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:</p>
<pre><code class="language-sh">sudo apt install -y lsb-release wget apt-transport-https
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo &quot;deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main&quot; |
    sudo tee /etc/apt/sources.list.d/matrix-org.list
sudo apt update
sudo apt install matrix-synapse-py3
</code></pre>
<p><strong>Note</strong>: if you followed a previous version of these instructions which
recommended using <code>apt-key add</code> to add an old key from
<code>https://matrix.org/packages/debian/</code>, you should note that this key has been
revoked. You should remove the old key with <code>sudo apt-key remove C35EB17E1EAE708E6603A9B3AD0592FE47F0DF61</code>, and follow the above instructions to
update your configuration.</p>
<p>The fingerprint of the repository signing key (as shown by <code>gpg /usr/share/keyrings/matrix-org-archive-keyring.gpg</code>) is
<code>AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058</code>.</p>
<h5 id="downstream-debian-packages"><a class="header" href="#downstream-debian-packages">Downstream Debian packages</a></h5>
<p>We do not recommend using the packages from the default Debian <code>buster</code>
repository at this time, as they are old and suffer from known security
vulnerabilities. You can install the latest version of Synapse from
<a href="setup/installation.html#matrixorg-packages">our repository</a> or from <code>buster-backports</code>. Please
see the <a href="https://backports.debian.org/Instructions/">Debian documentation</a>
for information on how to use backports.</p>
<p>If you are using Debian <code>sid</code> or testing, Synapse is available in the default
repositories and it should be possible to install it simply with:</p>
<pre><code class="language-sh">sudo apt install matrix-synapse
</code></pre>
<h5 id="downstream-ubuntu-packages"><a class="header" href="#downstream-ubuntu-packages">Downstream Ubuntu packages</a></h5>
<p>We do not recommend using the packages in the default Ubuntu repository
at this time, as they are old and suffer from known security vulnerabilities.
The latest version of Synapse can be installed from <a href="setup/installation.html#matrixorg-packages">our repository</a>.</p>
<h4 id="fedora"><a class="header" href="#fedora">Fedora</a></h4>
<p>Synapse is in the Fedora repositories as <code>matrix-synapse</code>:</p>
<pre><code class="language-sh">sudo dnf install matrix-synapse
</code></pre>
<p>Oleg Girko provides Fedora RPMs at
<a href="https://obs.infoserver.lv/project/monitor/matrix-synapse">https://obs.infoserver.lv/project/monitor/matrix-synapse</a></p>
<h4 id="opensuse-1"><a class="header" href="#opensuse-1">OpenSUSE</a></h4>
<p>Synapse is in the OpenSUSE repositories as <code>matrix-synapse</code>:</p>
<pre><code class="language-sh">sudo zypper install matrix-synapse
</code></pre>
<h4 id="suse-linux-enterprise-server"><a class="header" href="#suse-linux-enterprise-server">SUSE Linux Enterprise Server</a></h4>
<p>Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at
<a href="https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/">https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/</a></p>
<h4 id="archlinux-1"><a class="header" href="#archlinux-1">ArchLinux</a></h4>
<p>The quickest way to get up and running with ArchLinux is probably with the community package
<a href="https://www.archlinux.org/packages/community/any/matrix-synapse/">https://www.archlinux.org/packages/community/any/matrix-synapse/</a>, which should pull in most of
the necessary dependencies.</p>
<p>pip may be outdated (6.0.7-1 and needs to be upgraded to 6.0.8-1):</p>
<pre><code class="language-sh">sudo pip install --upgrade pip
</code></pre>
<p>If you encounter an error with lib bcrypt causing a Wrong ELF Class:
ELFCLASS32 error (on x64 systems), you may need to reinstall py-bcrypt to correctly
compile it under the right architecture. (This should not be needed if
installing under virtualenv):</p>
<pre><code class="language-sh">sudo pip uninstall py-bcrypt
sudo pip install py-bcrypt
</code></pre>
<h4 id="void-linux"><a class="header" href="#void-linux">Void Linux</a></h4>
<p>Synapse can be found in the void repositories as 'synapse':</p>
<pre><code class="language-sh">xbps-install -Su
xbps-install -S synapse
</code></pre>
<h4 id="freebsd"><a class="header" href="#freebsd">FreeBSD</a></h4>
<p>Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:</p>
<ul>
<li>Ports: <code>cd /usr/ports/net-im/py-matrix-synapse &amp;&amp; make install clean</code></li>
<li>Packages: <code>pkg install py37-matrix-synapse</code></li>
</ul>
< h4 id = "openbsd-1" > < a class = "header" href = "#openbsd-1" > OpenBSD< / a > < / h4 >
< p > As of OpenBSD 6.7, Synapse is available as a pre-compiled binary. The filesystem
underlying the homeserver directory (defaults to < code > /var/synapse< / code > ) has to be
mounted with < code > wxallowed< / code > (cf. < code > mount(8)< / code > ), so you should consider creating a
separate filesystem and mounting it at < code > /var/synapse< / code > .< / p >
< p > Installing Synapse:< / p >
< pre > < code class = "language-sh" > doas pkg_add synapse
< / code > < / pre >
< h4 id = "nixos" > < a class = "header" href = "#nixos" > NixOS< / a > < / h4 >
< p > Robin Lambertz has packaged Synapse for NixOS at:
< a href = "https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix" > https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix< / a > < / p >
< h2 id = "setting-up-synapse" > < a class = "header" href = "#setting-up-synapse" > Setting up Synapse< / a > < / h2 >
< p > Once you have installed synapse as above, you will need to configure it.< / p >
< h3 id = "using-postgresql" > < a class = "header" href = "#using-postgresql" > Using PostgreSQL< / a > < / h3 >
< p > By default Synapse uses an < a href = "https://sqlite.org/" > SQLite< / a > database and in doing so trades
performance for convenience. Almost all installations should opt to use < a href = "https://www.postgresql.org" > PostgreSQL< / a >
instead. Advantages include:< / p >
< ul >
< li > significant performance improvements due to the superior threading and
caching model, and a smarter query optimiser< / li >
< li > allowing the DB to be run on separate hardware< / li >
< / ul >
< p > For information on how to install and use PostgreSQL in Synapse, please see
< a href = "setup/docs/postgres.html" > docs/postgres.md< / a > < / p >
< p > SQLite is only acceptable for testing purposes; it should not be used in
a production server. Synapse will perform poorly when using
SQLite, especially when participating in large rooms.< / p >
< h3 id = "tls-certificates" > < a class = "header" href = "#tls-certificates" > TLS certificates< / a > < / h3 >
< p > The default configuration exposes a single HTTP port on the local
interface: < code > http://localhost:8008< / code > . It is suitable for local testing,
but for any practical use, you will need Synapse's APIs to be served
over HTTPS.< / p >
< p > The recommended way to do so is to set up a reverse proxy on port
< code > 8448< / code > . You can find documentation on doing so in
< a href = "setup/docs/reverse_proxy.html" > docs/reverse_proxy.md< / a > .< / p >
< p > Alternatively, you can configure Synapse to expose an HTTPS port. To do
so, you will need to edit < code > homeserver.yaml< / code > , as follows:< / p >
< ul >
< li > First, under the < code > listeners< / code > section, uncomment the configuration for the
TLS-enabled listener. (Remove the hash sign (< code > #< / code > ) at the start of
each line). The relevant lines are like this:< / li >
< / ul >
< pre > < code class = "language-yaml" > - port: 8448
type: http
tls: true
resources:
- names: [client, federation]
< / code > < / pre >
< ul >
< li >
< p > You will also need to uncomment the < code > tls_certificate_path< / code > and
< code > tls_private_key_path< / code > lines under the < code > TLS< / code > section. You will need to manage
provisioning of these certificates yourself: Synapse has built-in ACME
support, but the ACMEv1 protocol it implements is deprecated, not
allowed by Let's Encrypt for new sites, and will break for existing sites in
late 2020. See < a href = "setup/docs/ACME.html" > ACME.md< / a > .< / p >
< p > If you are using your own certificate, be sure to use a < code > .pem< / code > file that
includes the full certificate chain including any intermediate certificates
(for instance, if using certbot, use < code > fullchain.pem< / code > as your certificate, not
< code > cert.pem< / code > ).< / p >
< / li >
< / ul >
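For reference, once uncommented, those two lines might look like the following. The paths shown are illustrative placeholders (here, where certbot puts a certificate for `matrix.example.com`); point them at your own certificate chain and private key:

```yaml
tls_certificate_path: "/etc/letsencrypt/live/matrix.example.com/fullchain.pem"
tls_private_key_path: "/etc/letsencrypt/live/matrix.example.com/privkey.pem"
```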
< p > For a more detailed guide to configuring your server for federation, see
< a href = "setup/docs/federate.html" > federate.md< / a > .< / p >
< h3 id = "client-well-known-uri" > < a class = "header" href = "#client-well-known-uri" > Client Well-Known URI< / a > < / h3 >
< p > Setting up the client Well-Known URI is optional but if you set it up, it will
allow users to enter their full username (e.g. < code > @user:< server_name> < / code > ) into clients
which support well-known lookup to automatically configure the homeserver and
identity server URLs. This is useful so that users don't have to memorize or think
about the actual homeserver URL you are using.< / p >
< p > The URL < code > https://< server_name> /.well-known/matrix/client< / code > should return JSON in
the following format.< / p >
< pre > < code class = "language-json" > {
" m.homeserver" : {
" base_url" : " https://< matrix.example.com> "
}
}
< / code > < / pre >
< p > It can optionally contain identity server information as well.< / p >
< pre > < code class = "language-json" > {
" m.homeserver" : {
" base_url" : " https://< matrix.example.com> "
},
" m.identity_server" : {
" base_url" : " https://< identity.example.com> "
}
}
< / code > < / pre >
< p > To work in browser based clients, the file must be served with the appropriate
Cross-Origin Resource Sharing (CORS) headers. A recommended value would be
< code > Access-Control-Allow-Origin: *< / code > which would allow all browser based clients to
view it.< / p >
< p > In nginx this would be something like:< / p >
< pre > < code class = "language-nginx" > location /.well-known/matrix/client {
return 200 '{" m.homeserver" : {" base_url" : " https://< matrix.example.com> " }}';
default_type application/json;
add_header Access-Control-Allow-Origin *;
}
< / code > < / pre >
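Before deploying, it can be useful to sanity-check the JSON document itself. The following is an illustrative sketch, not an official tool; it encodes the shape described above (a required `m.homeserver.base_url`, an optional `m.identity_server`):

```python
import json

def validate_client_well_known(raw: str) -> str:
    """Parse a /.well-known/matrix/client document and return the
    homeserver base_url, raising ValueError if the shape is wrong."""
    doc = json.loads(raw)
    homeserver = doc.get("m.homeserver")
    if not isinstance(homeserver, dict) or "base_url" not in homeserver:
        raise ValueError("m.homeserver.base_url is required")
    base_url = homeserver["base_url"]
    if not base_url.startswith("https://"):
        raise ValueError("base_url should be an https:// URL")
    # m.identity_server is optional, but if present must have a base_url too
    identity = doc.get("m.identity_server")
    if identity is not None and "base_url" not in identity:
        raise ValueError("m.identity_server requires a base_url")
    return base_url

print(validate_client_well_known(
    '{"m.homeserver": {"base_url": "https://matrix.example.com"}}'
))
```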
< p > You should also ensure the < code > public_baseurl< / code > option in < code > homeserver.yaml< / code > is set
correctly. < code > public_baseurl< / code > should be set to the URL that clients will use to
connect to your server. This is the same URL you put for the < code > m.homeserver< / code >
< code > base_url< / code > above.< / p >
< pre > < code class = "language-yaml" > public_baseurl: " https://< matrix.example.com> "
< / code > < / pre >
< h3 id = "email" > < a class = "header" href = "#email" > Email< / a > < / h3 >
< p > It is desirable for Synapse to be able to send email. This allows
Synapse to send password reset emails, send verification emails when an email
address is added to a user's account, and send email notifications to users
when they receive new messages.< / p >
< p > To configure an SMTP server for Synapse, modify the configuration section
headed < code > email< / code > , and be sure to have at least the < code > smtp_host< / code > , < code > smtp_port< / code >
and < code > notif_from< / code > fields filled out. You may also need to set < code > smtp_user< / code > ,
< code > smtp_pass< / code > , and < code > require_transport_security< / code > .< / p >
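A minimal `email` section might look like the following sketch. The hostname, credentials and addresses are placeholders; substitute your own SMTP provider's details:

```yaml
email:
  smtp_host: mail.example.com
  smtp_port: 587
  smtp_user: "synapse@example.com"
  smtp_pass: "secret"
  require_transport_security: true
  notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
```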
< p > If email is not configured, password reset, registration and notifications via
email will be disabled.< / p >
< h3 id = "registering-a-user" > < a class = "header" href = "#registering-a-user" > Registering a user< / a > < / h3 >
< p > The easiest way to create a new user is to do so from a client like < a href = "https://element.io/" > Element< / a > .< / p >
< p > Alternatively, you can do so from the command line. This can be done as follows:< / p >
< ol >
< li > If synapse was installed via pip, activate the virtualenv as follows (if Synapse was
installed via a prebuilt package, < code > register_new_matrix_user< / code > should already be
on the search path):
< pre > < code class = "language-sh" > cd ~/synapse
source env/bin/activate
synctl start # if not already running
< / code > < / pre >
< / li >
< li > Run the following command:
< pre > < code class = "language-sh" > register_new_matrix_user -c homeserver.yaml http://localhost:8008
< / code > < / pre >
< / li >
< / ol >
< p > This will prompt you to add details for the new user, and will then connect to
the running Synapse to create the new user. For example:< / p >
< pre > < code > New user localpart: erikj
Password:
Confirm password:
Make admin [no]:
Success!
< / code > < / pre >
< p > This process uses a setting < code > registration_shared_secret< / code > in
< code > homeserver.yaml< / code > , which is shared between Synapse itself and the
< code > register_new_matrix_user< / code > script. It doesn't matter what it is (a random
value is generated by < code > --generate-config< / code > ), but it should be kept secret, as
anyone with knowledge of it can register users, including admin accounts,
on your server even if < code > enable_registration< / code > is < code > false< / code > .< / p >
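Under the hood, `register_new_matrix_user` proves knowledge of the shared secret by sending an HMAC to the registration endpoint rather than the secret itself. The sketch below illustrates that kind of computation; treat the exact field layout and separators as an assumption, and use the bundled script for real registrations:

```python
import hashlib
import hmac

def registration_mac(shared_secret: str, nonce: str, user: str,
                     password: str, admin: bool) -> str:
    """Compute an HMAC-SHA1 over the registration fields, keyed by the
    shared secret. The field order and NUL separators shown here are an
    illustrative assumption, not a definitive protocol description."""
    mac = hmac.new(shared_secret.encode("utf8"), digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf8"))
    mac.update(b"\x00")
    mac.update(user.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()
```

Because only the HMAC travels over the wire, an eavesdropper cannot recover the shared secret from a captured registration request.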
< h3 id = "setting-up-a-turn-server" > < a class = "header" href = "#setting-up-a-turn-server" > Setting up a TURN server< / a > < / h3 >
< p > For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See < a href = "setup/docs/turn-howto.html" > docs/turn-howto.md< / a > for details.< / p >
< h3 id = "url-previews" > < a class = "header" href = "#url-previews" > URL previews< / a > < / h3 >
< p > Synapse includes support for previewing URLs, which is disabled by default. To
turn it on you must set the < code > url_preview_enabled: True< / code > config parameter
and explicitly specify the IP ranges that Synapse is not allowed to spider for
previewing in the < code > url_preview_ip_range_blacklist< / code > configuration parameter.
This is critical from a security perspective to stop arbitrary Matrix users
spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.< / p >
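A sketch of the relevant configuration, blacklisting loopback and the RFC1918 private ranges as recommended (the exact list you need depends on your network):

```yaml
url_preview_enabled: true
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
```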
< p > This also requires the optional < code > lxml< / code > python dependency to be installed. This
in turn requires the < code > libxml2< / code > library to be available - on Debian/Ubuntu this
means < code > apt-get install libxml2-dev< / code > , or equivalent for your OS.< / p >
< h3 id = "troubleshooting-installation" > < a class = "header" href = "#troubleshooting-installation" > Troubleshooting Installation< / a > < / h3 >
< p > < code > pip< / code > seems to leak < em > lots< / em > of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.:< / p >
< pre > < code class = "language-sh" > pip install twisted
< / code > < / pre >
< p > If you have any other problems, feel free to ask in
< a href = "https://matrix.to/#/#synapse:matrix.org" > #synapse:matrix.org< / a > .< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "using-postgres" > < a class = "header" href = "#using-postgres" > Using Postgres< / a > < / h1 >
< p > Synapse supports PostgreSQL versions 9.6 or later.< / p >
< h2 id = "install-postgres-client-libraries" > < a class = "header" href = "#install-postgres-client-libraries" > Install postgres client libraries< / a > < / h2 >
< p > Synapse will require the python postgres client library in order to
connect to a postgres database.< / p >
< ul >
< li >
< p > If you are using the < a href = "../INSTALL.html#matrixorg-packages" > matrix.org debian/ubuntu
packages< / a > , the necessary python
library will already be installed, but you will need to ensure the
low-level postgres library is installed, which you can do with
< code > apt install libpq5< / code > .< / p >
< / li >
< li >
< p > For other pre-built packages, please consult the documentation from
the relevant package.< / p >
< / li >
< li >
< p > If you installed synapse < a href = "../INSTALL.html#installing-from-source" > in a
virtualenv< / a > , you can install
the library with:< / p >
< pre > < code > ~/synapse/env/bin/pip install " matrix-synapse[postgres]"
< / code > < / pre >
< p > (substituting the path to your virtualenv for < code > ~/synapse/env< / code > , if
you used a different path). You will require the postgres
development files. These are in the < code > libpq-dev< / code > package on
Debian-derived distributions.< / p >
< / li >
< / ul >
< h2 id = "set-up-database" > < a class = "header" href = "#set-up-database" > Set up database< / a > < / h2 >
< p > Assuming your PostgreSQL database user is called < code > postgres< / code > , first authenticate as the database user with:< / p >
< pre > < code > su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
< / code > < / pre >
< p > Then, create a postgres user and a database with:< / p >
< pre > < code > # this will prompt for a password for the new user
createuser --pwprompt synapse_user
createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
< / code > < / pre >
< p > The above will create a user called < code > synapse_user< / code > , and a database called
< code > synapse< / code > .< / p >
< p > Note that the PostgreSQL database < em > must< / em > have the correct encoding set
(as shown above), otherwise it will not be able to store UTF8 strings.< / p >
< p > You may need to enable password authentication so < code > synapse_user< / code > can
connect to the database. See
< a href = "https://www.postgresql.org/docs/current/auth-pg-hba-conf.html" > https://www.postgresql.org/docs/current/auth-pg-hba-conf.html< / a > .< / p >
< h2 id = "synapse-config" > < a class = "header" href = "#synapse-config" > Synapse config< / a > < / h2 >
< p > When you are ready to start using PostgreSQL, edit the < code > database< / code >
section in your config file to match the following lines:< / p >
< pre > < code class = "language-yaml" > database:
name: psycopg2
args:
user: < user>
password: < pass>
database: < db>
host: < host>
cp_min: 5
cp_max: 10
< / code > < / pre >
< p > All keys and values in < code > args< / code > are passed to the < code > psycopg2.connect(..)< / code >
function, except keys beginning with < code > cp_< / code > , which are consumed by the
twisted adbapi connection pool. See the < a href = "https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS" > libpq
documentation< / a >
for a list of options which can be passed.< / p >
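The split described above, where `cp_`-prefixed keys go to the connection pool and everything else to `psycopg2.connect()`, can be illustrated with a small sketch (illustrative only; Synapse performs this internally):

```python
def split_db_args(args: dict) -> tuple[dict, dict]:
    """Separate a Synapse database `args` mapping into psycopg2
    connection keywords and adbapi pool options (the `cp_` keys)."""
    connect_kwargs = {k: v for k, v in args.items() if not k.startswith("cp_")}
    pool_opts = {k: v for k, v in args.items() if k.startswith("cp_")}
    return connect_kwargs, pool_opts

# Mirroring the example config above
connect_kwargs, pool_opts = split_db_args({
    "user": "synapse_user", "database": "synapse",
    "host": "localhost", "cp_min": 5, "cp_max": 10,
})
```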
< p > You should consider tuning the < code > args.keepalives_*< / code > options if there is any danger of
the connection between your homeserver and database dropping, otherwise Synapse
may block for an extended period while it waits for a response from the
database server. Example values might be:< / p >
< pre > < code class = "language-yaml" > database:
args:
# ... as above
# seconds of inactivity after which TCP should send a keepalive message to the server
keepalives_idle: 10
# the number of seconds after which a TCP keepalive message that is not
# acknowledged by the server should be retransmitted
keepalives_interval: 10
# the number of TCP keepalives that can be lost before the client's connection
# to the server is considered dead
keepalives_count: 3
< / code > < / pre >
< h2 id = "tuning-postgres" > < a class = "header" href = "#tuning-postgres" > Tuning Postgres< / a > < / h2 >
< p > The default settings should be fine for most deployments. For larger
scale deployments tuning some of the settings is recommended, details of
which can be found at
< a href = "https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server" > https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server< / a > .< / p >
< p > In particular, we've found tuning the following values helpful for
performance:< / p >
< ul >
< li > < code > shared_buffers< / code > < / li >
< li > < code > effective_cache_size< / code > < / li >
< li > < code > work_mem< / code > < / li >
< li > < code > maintenance_work_mem< / code > < / li >
< li > < code > autovacuum_work_mem< / code > < / li >
< / ul >
< p > Note that the appropriate values for those fields depend on the amount
of free memory the database host has available.< / p >
< h2 id = "porting-from-sqlite" > < a class = "header" href = "#porting-from-sqlite" > Porting from SQLite< / a > < / h2 >
< h3 id = "overview" > < a class = "header" href = "#overview" > Overview< / a > < / h3 >
< p > The script < code > synapse_port_db< / code > allows porting an existing synapse server
backed by SQLite to using PostgreSQL. This is done as a two-phase
process:< / p >
< ol >
< li > Copy the existing SQLite database to a separate location and run
the port script against that offline database.< / li >
< li > Shut down the server. Rerun the port script to port any data that
has come in since taking the first snapshot. Restart the server against
the PostgreSQL database.< / li >
< / ol >
< p > The port script is designed to be run repeatedly against newer snapshots
of the SQLite database file. This makes it safe to repeat step 1 if
there was a delay between taking the previous snapshot and being ready
to do step 2.< / p >
< p > It is safe to kill the port script at any time and restart it.< / p >
< p > Note that the database may take up significantly more space on disk
(25% to 100% more) after porting to Postgres.< / p >
< h3 id = "using-the-port-script" > < a class = "header" href = "#using-the-port-script" > Using the port script< / a > < / h3 >
< p > Firstly, shut down the currently running synapse server and copy its
database file (typically < code > homeserver.db< / code > ) to another location. Once the
copy is complete, restart synapse. For instance:< / p >
< pre > < code > ./synctl stop
cp homeserver.db homeserver.db.snapshot
./synctl start
< / code > < / pre >
< p > Copy the old config file into a new config file:< / p >
< pre > < code > cp homeserver.yaml homeserver-postgres.yaml
< / code > < / pre >
< p > Edit the database section as described in the section < em > Synapse config< / em >
above. Then, with the SQLite snapshot located at < code > homeserver.db.snapshot< / code > ,
simply run:< / p >
< pre > < code > synapse_port_db --sqlite-database homeserver.db.snapshot \
--postgres-config homeserver-postgres.yaml
< / code > < / pre >
< p > The flag < code > --curses< / code > displays a coloured curses progress UI.< / p >
< p > If the script took a long time to complete, or time has otherwise passed
since the original snapshot was taken, repeat the previous steps with a
newer snapshot.< / p >
< p > To complete the conversion shut down the synapse server and run the port
script one last time, e.g. if the SQLite database is at < code > homeserver.db< / code >
run:< / p >
< pre > < code > synapse_port_db --sqlite-database homeserver.db \
--postgres-config homeserver-postgres.yaml
< / code > < / pre >
< p > Once that has completed, change the synapse config to point at the
PostgreSQL database configuration file < code > homeserver-postgres.yaml< / code > :< / p >
< pre > < code > ./synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml
mv homeserver-postgres.yaml homeserver.yaml
./synctl start
< / code > < / pre >
< p > Synapse should now be running against PostgreSQL.< / p >
< h2 id = "troubleshooting" > < a class = "header" href = "#troubleshooting" > Troubleshooting< / a > < / h2 >
< h3 id = "alternative-auth-methods" > < a class = "header" href = "#alternative-auth-methods" > Alternative auth methods< / a > < / h3 >
< p > If you get an error along the lines of < code > FATAL: Ident authentication failed for user " synapse_user" < / code > , you may need to use an authentication method other than
< code > ident< / code > :< / p >
< ul >
< li >
< p > If the < code > synapse_user< / code > user has a password, add the password to the < code > database:< / code >
section of < code > homeserver.yaml< / code > . Then add the following to < code > pg_hba.conf< / code > :< / p >
< pre > < code > host synapse synapse_user ::1/128 md5 # or `scram-sha-256` instead of `md5` if you use that
< / code > < / pre >
< / li >
< li >
< p > If the < code > synapse_user< / code > user does not have a password, then a password doesn't
have to be added to < code > homeserver.yaml< / code > . But the following does need to be added
to < code > pg_hba.conf< / code > :< / p >
< pre > < code > host synapse synapse_user ::1/128 trust
< / code > < / pre >
< / li >
< / ul >
< p > Note that line order matters in < code > pg_hba.conf< / code > , so make sure that if you do add a
new line, it is inserted before:< / p >
< pre > < code > host all all ::1/128 ident
< / code > < / pre >
< h3 id = "fixing-incorrect-collate-or-ctype" > < a class = "header" href = "#fixing-incorrect-collate-or-ctype" > Fixing incorrect < code > COLLATE< / code > or < code > CTYPE< / code > < / a > < / h3 >
< p > Synapse will refuse to set up a new database if it has the wrong values of
< code > COLLATE< / code > and < code > CTYPE< / code > set, and will log warnings on existing databases. Using
different locales can cause issues if the locale library is updated from
underneath the database, or if a different version of the locale is used on any
replicas.< / p >
< p > The safest way to fix the issue is to dump the database and recreate it with
the correct locale parameter (as shown above). It is also possible to change the
parameters on a live database and run a < code > REINDEX< / code > on the entire database,
however extreme care must be taken to avoid database corruption.< / p >
< p > Note that the above may fail with an error about duplicate rows if corruption
has already occurred, and such duplicate rows will need to be manually removed.< / p >
< h3 id = "fixing-inconsistent-sequences-error" > < a class = "header" href = "#fixing-inconsistent-sequences-error" > Fixing inconsistent sequences error< / a > < / h3 >
< p > Synapse uses Postgres sequences to generate IDs for various tables. A sequence
and associated table can get out of sync if, for example, Synapse has been
downgraded and then upgraded again.< / p >
< p > To fix the issue, shut down Synapse (including any and all workers) and run the
SQL command included in the error message. Once done, Synapse should start
successfully.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "using-a-reverse-proxy-with-synapse" > < a class = "header" href = "#using-a-reverse-proxy-with-synapse" > Using a reverse proxy with Synapse< / a > < / h1 >
< p > It is recommended to put a reverse proxy such as
< a href = "https://nginx.org/en/docs/http/ngx_http_proxy_module.html" > nginx< / a > ,
< a href = "https://httpd.apache.org/docs/current/mod/mod_proxy_http.html" > Apache< / a > ,
< a href = "https://caddyserver.com/docs/quick-starts/reverse-proxy" > Caddy< / a > ,
< a href = "https://www.haproxy.org/" > HAProxy< / a > or
< a href = "https://man.openbsd.org/relayd.8" > relayd< / a > in front of Synapse. One advantage
of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.< / p >
< p > You should configure your reverse proxy to forward requests to < code > /_matrix< / code > or
< code > /_synapse/client< / code > to Synapse, and have it set the < code > X-Forwarded-For< / code > and
< code > X-Forwarded-Proto< / code > request headers.< / p >
< p > You should remember that Matrix clients and other Matrix servers do not
necessarily need to connect to your server via the same server name or
port. Indeed, clients will use port 443 by default, whereas servers default to
port 8448. Where these are different, we refer to the 'client port' and the
'federation port'. See < a href = "https://matrix.org/docs/spec/server_server/latest#resolving-server-names" > the Matrix
specification< / a >
for more details of the algorithm used for federation connections, and
< a href = "delegate.html" > delegate.md< / a > for instructions on setting up delegation.< / p >
< p > < strong > NOTE< / strong > : Your reverse proxy must not < code > canonicalise< / code > or < code > normalise< / code >
the requested URI in any way (for example, by decoding < code > %xx< / code > escapes).
Beware that Apache < em > will< / em > canonicalise URIs unless you specify
< code > nocanon< / code > .< / p >
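To see why this matters: Matrix identifiers can contain characters that clients must percent-encode into request paths. A proxy that decodes those escapes changes the URI that Synapse receives. A small illustration using Python's urllib (the room alias is made up):

```python
from urllib.parse import quote, unquote

# A room alias as a client would encode it into a request path segment
alias = "#room:example.com"
encoded = quote(alias, safe="")
print(encoded)  # %23room%3Aexample.com

# A canonicalising proxy effectively applies unquote() before forwarding,
# so the backend would see a different path than the client sent
assert unquote(encoded) == alias
assert encoded != alias
```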
< p > Let's assume that we expect clients to connect to our server at
< code > https://matrix.example.com< / code > , and other servers to connect at
< code > https://example.com:8448< / code > . The following sections detail the configuration of
the reverse proxy and the homeserver.< / p >
< h2 id = "reverse-proxy-configuration-examples" > < a class = "header" href = "#reverse-proxy-configuration-examples" > Reverse-proxy configuration examples< / a > < / h2 >
< p > < strong > NOTE< / strong > : You only need one of these.< / p >
< h3 id = "nginx" > < a class = "header" href = "#nginx" > nginx< / a > < / h3 >
< pre > < code > server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
# For the federation port
listen 8448 ssl http2 default_server;
listen [::]:8448 ssl http2 default_server;
server_name matrix.example.com;
location ~* ^(\/_matrix|\/_synapse\/client) {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 50M;
}
}
< / code > < / pre >
< p > < strong > NOTE< / strong > : Do not add a path after the port in < code > proxy_pass< / code > , otherwise nginx will
canonicalise/normalise the URI.< / p >
< h3 id = "caddy-1" > < a class = "header" href = "#caddy-1" > Caddy 1< / a > < / h3 >
< pre > < code > matrix.example.com {
proxy /_matrix http://localhost:8008 {
transparent
}
proxy /_synapse/client http://localhost:8008 {
transparent
}
}
example.com:8448 {
proxy / http://localhost:8008 {
transparent
}
}
< / code > < / pre >
< h3 id = "caddy-2" > < a class = "header" href = "#caddy-2" > Caddy 2< / a > < / h3 >
< pre > < code > matrix.example.com {
reverse_proxy /_matrix/* http://localhost:8008
reverse_proxy /_synapse/client/* http://localhost:8008
}
example.com:8448 {
reverse_proxy http://localhost:8008
}
< / code > < / pre >
< h3 id = "apache" > < a class = "header" href = "#apache" > Apache< / a > < / h3 >
< pre > < code > < VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com
RequestHeader set " X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
ProxyPreserveHost on
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
ProxyPassReverse /_synapse/client http://127.0.0.1:8008/_synapse/client
< /VirtualHost>
< VirtualHost *:8448>
SSLEngine on
ServerName example.com
RequestHeader set " X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
< /VirtualHost>
< / code > < / pre >
< p > < strong > NOTE< / strong > : ensure the < code > nocanon< / code > options are included.< / p >
< p > < strong > NOTE 2< / strong > : It appears that Synapse is currently incompatible with the ModSecurity module for Apache (< code > mod_security2< / code > ). If you need it enabled for other services on your web server, you can disable it for Synapse's two VirtualHosts by including the following lines before each of the two < code > < /VirtualHost> < / code > above:< / p >
< pre > < code > < IfModule security2_module>
SecRuleEngine off
< /IfModule>
< / code > < / pre >
< p > < strong > NOTE 3< / strong > : Missing < code > ProxyPreserveHost on< / code > can lead to a redirect loop.< / p >
< h3 id = "haproxy" > < a class = "header" href = "#haproxy" > HAProxy< / a > < / h3 >
< pre > < code > frontend https
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-For %[src]
# Matrix client traffic
acl matrix-host hdr(host) -i matrix.example.com
acl matrix-path path_beg /_matrix
acl matrix-path path_beg /_synapse/client
use_backend matrix if matrix-host matrix-path
frontend matrix-federation
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-For %[src]
default_backend matrix
backend matrix
server matrix 127.0.0.1:8008
< / code > < / pre >
< h3 id = "relayd" > < a class = "header" href = "#relayd" > Relayd< / a > < / h3 >
< pre > < code > table < webserver> { 127.0.0.1 }
table < matrixserver> { 127.0.0.1 }
http protocol " https" {
tls { no tlsv1.0, ciphers " HIGH" }
tls keypair " example.com"
match header set " X-Forwarded-For" value " $REMOTE_ADDR"
match header set " X-Forwarded-Proto" value " https"
# set CORS header for .well-known/matrix/server, .well-known/matrix/client
# httpd does not support setting headers, so do it here
match request path " /.well-known/matrix/*" tag " matrix-cors"
match response tagged " matrix-cors" header set " Access-Control-Allow-Origin" value " *"
pass quick path " /_matrix/*" forward to < matrixserver>
pass quick path " /_synapse/client/*" forward to < matrixserver>
# pass on non-matrix traffic to webserver
pass forward to < webserver>
}
relay " https_traffic" {
listen on egress port 443 tls
protocol " https"
forward to < matrixserver> port 8008 check tcp
forward to < webserver> port 8080 check tcp
}
http protocol " matrix" {
tls { no tlsv1.0, ciphers " HIGH" }
tls keypair " example.com"
block
pass quick path " /_matrix/*" forward to < matrixserver>
pass quick path " /_synapse/client/*" forward to < matrixserver>
}
relay " matrix_federation" {
listen on egress port 8448 tls
protocol " matrix"
forward to < matrixserver> port 8008 check tcp
}
< / code > < / pre >
< h2 id = "homeserver-configuration" > < a class = "header" href = "#homeserver-configuration" > Homeserver Configuration< / a > < / h2 >
< p > You will also want to set < code > bind_addresses: ['127.0.0.1']< / code > and
< code > x_forwarded: true< / code > for port 8008 in < code > homeserver.yaml< / code > to ensure that
client IP addresses are recorded correctly.< / p >
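Put together, the relevant part of `homeserver.yaml` might look like this sketch (merge it into your existing `listeners` section rather than copying it verbatim):

```yaml
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['127.0.0.1']
    resources:
      - names: [client, federation]
```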
< p > Having done so, you can then use < code > https://matrix.example.com< / code > (instead
of < code > https://matrix.example.com:8448< / code > ) as the " Custom server" when
connecting to Synapse from a client.< / p >
< h2 id = "health-check-endpoint" > < a class = "header" href = "#health-check-endpoint" > Health check endpoint< / a > < / h2 >
< p > Synapse exposes a health check endpoint for use by reverse proxies.
Each configured HTTP listener has a < code > /health< / code > endpoint which always returns
200 OK (and doesn't get logged).< / p >
< h2 id = "synapse-administration-endpoints" > < a class = "header" href = "#synapse-administration-endpoints" > Synapse administration endpoints< / a > < / h2 >
< p > Endpoints for administering your Synapse instance are placed under
< code > /_synapse/admin< / code > . These require authentication through an access token of an
admin user. However, as access to these endpoints grants the caller a lot of power,
we do not recommend exposing them to the public internet without good reason.< / p >
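One way to follow this advice with nginx is to restrict the admin prefix to local clients. This is a sketch, assuming the nginx configuration shown earlier; adjust the allowed addresses to match where your administration tooling actually runs:

```nginx
location /_synapse/admin {
    allow 127.0.0.1;
    allow ::1;
    deny all;
    proxy_pass http://localhost:8008;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```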
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "overview-1" > < a class = "header" href = "#overview-1" > Overview< / a > < / h1 >
< p > This document explains how to enable VoIP relaying on your Home Server with
TURN.< / p >
< p > The synapse Matrix Home Server supports integration with a TURN server via the
< a href = "https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00" > TURN server REST API< / a > . This
allows the Home Server to generate credentials that are valid for use on the
TURN server through the use of a secret shared between the Home Server and the
TURN server.< / p >
< p > The following sections describe how to install < a href = "https://github.com/coturn/coturn" > coturn< / a > (which implements the TURN REST API) and integrate it with synapse.< / p >
< h2 id = "requirements" > < a class = "header" href = "#requirements" > Requirements< / a > < / h2 >
< p > For TURN relaying with < code > coturn< / code > to work, it must be hosted on a server/endpoint with a public IP.< / p >
< p > Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
and to often not work.< / p >
< h2 id = "coturn-setup" > < a class = "header" href = "#coturn-setup" > < code > coturn< / code > setup< / a > < / h2 >
< h3 id = "initial-installation" > < a class = "header" href = "#initial-installation" > Initial installation< / a > < / h3 >
< p > The TURN daemon < code > coturn< / code > is available from a variety of sources such as native package managers, or installation from source.< / p >
< h4 id = "debian-installation" > < a class = "header" href = "#debian-installation" > Debian installation< / a > < / h4 >
< p > Just install the Debian package:< / p >
< pre > < code class = "language-sh" > apt install coturn
< / code > < / pre >
< p > This will install and start a systemd service called < code > coturn< / code > .< / p >
< h4 id = "source-installation" > < a class = "header" href = "#source-installation" > Source installation< / a > < / h4 >
< ol >
< li >
< p > Download the < a href = "https://github.com/coturn/coturn/releases/latest" > latest release< / a > from github. Unpack it and < code > cd< / code > into the directory.< / p >
< / li >
< li >
< p > Configure it:< / p >
< pre > < code > ./configure
< / code > < / pre >
< p > You may need to install < code > libevent2< / code > : if so, you should do so in
the way recommended by your operating system. You can ignore
warnings about lack of database support: a database is unnecessary
for this purpose.< / p >
< / li >
< li >
< p > Build and install it:< / p >
< pre > < code > make
make install
< / code > < / pre >
< / li >
< / ol >
< h3 id = "configuration" > < a class = "header" href = "#configuration" > Configuration< / a > < / h3 >
< ol >
< li >
< p > Create or edit the config file in < code > /etc/turnserver.conf< / code > . The relevant
lines, with example values, are:< / p >
< pre > < code > use-auth-secret
static-auth-secret=[your secret key here]
realm=turn.myserver.org
< / code > < / pre >
< p > See < code > turnserver.conf< / code > for explanations of the options. One way to generate
the < code > static-auth-secret< / code > is with < code > pwgen< / code > :< / p >
< pre > < code > pwgen -s 64 1
< / code > < / pre >
< p > A < code > realm< / code > must be specified, but its value is somewhat arbitrary. (It is
sent to clients as part of the authentication flow.) It is conventional to
set it to be your server name.< / p >
< / li >
< li >
< p > You will most likely want to configure coturn to write logs somewhere. The
easiest way is normally to send them to the syslog:< / p >
< pre > < code > syslog
< / code > < / pre >
< p > (in which case, the logs will be available via < code > journalctl -u coturn< / code > on a
systemd system). Alternatively, coturn can be configured to write to a
logfile - check the example config file supplied with coturn.< / p >
< / li >
< li >
< p > Consider your security settings. TURN lets users request a relay which will
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point:< / p >
< pre > < code > # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# special case the turn server itself so that client-> TURN-> TURN-> client flows work
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
< / code > < / pre >
< / li >
< li >
< p > Also consider supporting TLS/DTLS. To do this, add the following settings
to < code > turnserver.conf< / code > :< / p >
< pre > < code > # TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem
# TLS private key file
pkey=/path/to/privkey.pem
< / code > < / pre >
< p > In this case, replace the < code > turn:< / code > schemes in the < code > turn_uri< / code > settings below
with < code > turns:< / code > .< / p >
< p > We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.< / p >
< / li >
< li >
< p > Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (by default 3478 and 5349 for TURN
traffic; remember to allow both TCP and UDP on these, plus ports 49152-65535
for the UDP relay).< / p >
< / li >
< li >
< p > We do not recommend running a TURN server behind NAT, and are not aware of
anyone doing so successfully.< / p >
< p > If you want to try it anyway, you will at least need to tell coturn its
external IP address:< / p >
< pre > < code > external-ip=192.88.99.1
< / code > < / pre >
< p > ... and your NAT gateway must forward all of the relayed ports directly
(eg, port 56789 on the external IP must always be forwarded to port
56789 on the internal IP).< / p >
< p > If you get this working, let us know!< / p >
< / li >
< li >
< p > (Re)start the turn server:< / p >
< ul >
< li >
< p > If you used the Debian package (or have set up a systemd unit yourself):< / p >
< pre > < code class = "language-sh" > systemctl restart coturn
< / code > < / pre >
< / li >
< li >
< p > If you installed from source:< / p >
< pre > < code class = "language-sh" > bin/turnserver -o
< / code > < / pre >
< / li >
< / ul >
< / li >
< / ol >
< h2 id = "synapse-setup" > < a class = "header" href = "#synapse-setup" > Synapse setup< / a > < / h2 >
< p > Your home server configuration file needs the following extra keys:< / p >
< ol >
< li > " < code > turn_uris< / code > " : This needs to be a yaml list of public-facing URIs
for your TURN server to be given out to your clients. Add separate
entries for each transport your TURN server supports.< / li >
< li > " < code > turn_shared_secret< / code > " : This is the secret shared between your
Home server and your TURN server, so you should set it to the same
string you used in turnserver.conf.< / li >
< li > " < code > turn_user_lifetime< / code > " : This is the amount of time credentials
generated by your Home Server are valid for (in milliseconds).
Shorter times offer less potential for abuse at the expense of
increased traffic between web clients and your home server to
refresh credentials. The TURN REST API specification recommends
one day (86400000).< / li >
< li > " < code > turn_allow_guests< / code > " : Whether to allow guest users to use the
TURN server. This is enabled by default, as otherwise VoIP will
not work reliably for guests. However, it does introduce a
security risk as it lets guests connect to arbitrary endpoints
without having gone through a CAPTCHA or similar to register a
real account.< / li >
< / ol >
< p > As an example, here is the relevant section of the config file for < code > matrix.org< / code > . The
< code > turn_uris< / code > are appropriate for TURN servers listening on the default ports, with no TLS.< / p >
< pre > < code > turn_uris: [ " turn:turn.matrix.org?transport=udp" , " turn:turn.matrix.org?transport=tcp" ]
turn_shared_secret: " n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons"
turn_user_lifetime: 86400000
turn_allow_guests: True
< / code > < / pre >
< p > After updating the homeserver configuration, you must restart synapse:< / p >
< ul >
< li > If you use synctl:
< pre > < code class = "language-sh" > cd /where/you/run/synapse
./synctl restart
< / code > < / pre >
< / li >
< li > If you use systemd:
< pre > < code > systemctl restart matrix-synapse.service
< / code > < / pre >
< / li >
< / ul >
< p > ... and then reload any clients (or wait an hour for them to refresh their
settings).< / p >
< h2 id = "troubleshooting-1" > < a class = "header" href = "#troubleshooting-1" > Troubleshooting< / a > < / h2 >
< p > The normal symptoms of a misconfigured TURN server are that calls between
devices on different networks ring, but get stuck at " call
connecting" . Unfortunately, troubleshooting this can be tricky.< / p >
< p > Here are a few things to try:< / p >
< ul >
< li >
< p > Check that your TURN server is not behind NAT. As above, we're not aware of
anyone who has successfully set this up.< / p >
< / li >
< li >
< p > Check that you have opened your firewall to allow TCP and UDP traffic to the
TURN ports (normally 3478 and 5349).< / p >
< / li >
< li >
< p > Check that you have opened your firewall to allow UDP traffic to the UDP
relay ports (49152-65535 by default).< / p >
< / li >
< li >
< p > Some WebRTC implementations (notably, that of Google Chrome) appear to get
confused by TURN servers which are reachable over IPv6 (this appears to be
an unexpected side-effect of its handling of multiple IP addresses as
defined by
< a href = "https://tools.ietf.org/html/draft-ietf-rtcweb-ip-handling-12" > < code > draft-ietf-rtcweb-ip-handling< / code > < / a > ).< / p >
< p > Try removing any AAAA records for your TURN server, so that it is only
reachable over IPv4.< / p >
< / li >
< li >
< p > Enable more verbose logging in coturn via the < code > verbose< / code > setting:< / p >
< pre > < code > verbose
< / code > < / pre >
< p > ... and then see if there are any clues in its logs.< / p >
< / li >
< li >
< p > If you are using a browser-based client under Chrome, check
< code > chrome://webrtc-internals/< / code > for insights into the internals of the
negotiation. On Firefox, check the " Connection Log" on < code > about:webrtc< / code > .< / p >
< p > (Understanding the output is beyond the scope of this document!)< / p >
< / li >
< li >
< p > You can test your Matrix homeserver TURN setup with https://test.voip.librepush.net/.
Note that this test is not fully reliable yet, so don't be discouraged if
the test fails.
The source of the tester is available < a href = "https://github.com/matrix-org/voip-tester" > on github< / a > , where you can file bug reports.< / p >
< / li >
< li >
< p > There is a WebRTC test tool at
https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/. To
use it, you will need a username/password for your TURN server. You can
either:< / p >
< ul >
< li >
< p > look for the < code > GET /_matrix/client/r0/voip/turnServer< / code > request made by a
matrix client to your homeserver in your browser's network inspector. In
the response you should see < code > username< / code > and < code > password< / code > . Or:< / p >
< / li >
< li >
< p > Use the following shell commands:< / p >
< pre > < code class = "language-sh" > secret=staticAuthSecretHere
u=$((`date +%s` + 3600)):test
p=$(echo -n $u | openssl dgst -hmac $secret -sha1 -binary | base64)
echo -e " username: $u\npassword: $p"
< / code > < / pre >
< p > Or:< / p >
< / li >
< li >
< p > Temporarily configure coturn to accept a static username/password. To do
this, comment out < code > use-auth-secret< / code > and < code > static-auth-secret< / code > and add the
following:< / p >
< pre > < code > lt-cred-mech
user=username:password
< / code > < / pre >
< p > < strong > Note< / strong > : these settings will not take effect unless < code > use-auth-secret< / code >
and < code > static-auth-secret< / code > are disabled.< / p >
< p > Restart coturn after changing the configuration file.< / p >
< p > Remember to restore the original settings to go back to testing with
Matrix clients!< / p >
< / li >
< / ul >
< p > If the TURN server is working correctly, you should see at least one < code > relay< / code >
entry in the results.< / p >
< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "delegation" > < a class = "header" href = "#delegation" > Delegation< / a > < / h1 >
< p > By default, other homeservers will expect to be able to reach yours via
your < code > server_name< / code > , on port 8448. For example, if you set your < code > server_name< / code >
to < code > example.com< / code > (so that your user names look like < code > @user:example.com< / code > ),
other servers will try to connect to yours at < code > https://example.com:8448/< / code > .< / p >
< p > Delegation is a Matrix feature allowing a homeserver admin to retain a
< code > server_name< / code > of < code > example.com< / code > so that user IDs, room aliases, etc continue
to look like < code > *:example.com< / code > , whilst having federation traffic routed
to a different server and/or port (e.g. < code > synapse.example.com:443< / code > ).< / p >
< h2 id = "well-known-delegation" > < a class = "header" href = "#well-known-delegation" > .well-known delegation< / a > < / h2 >
< p > To use this method, you need to be able to alter the
< code > server_name< / code > 's https server to serve the < code > /.well-known/matrix/server< / code >
URL. Having an active server (with a valid TLS certificate) serving your
< code > server_name< / code > domain is out of the scope of this documentation.< / p >
< p > The URL < code > https://< server_name> /.well-known/matrix/server< / code > should
return a JSON structure containing the key < code > m.server< / code > like so:< / p >
< pre > < code class = "language-json" > {
" m.server" : " < synapse.server.name> [:< yourport> ]"
}
< / code > < / pre >
< p > In our example, this would mean that the URL < code > https://example.com/.well-known/matrix/server< / code >
should return:< / p >
< pre > < code class = "language-json" > {
" m.server" : " synapse.example.com:443"
}
< / code > < / pre >
< p > Note, specifying a port is optional. If no port is specified, then it defaults
to 8448.< / p >
< p > With .well-known delegation, federating servers will check for a valid TLS
certificate for the delegated hostname (in our example: < code > synapse.example.com< / code > ).< / p >
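< p > The way a federating server interprets this response can be sketched as a small Python helper (hypothetical, for illustration only): if < code > m.server< / code > carries no explicit port, the default of 8448 applies.< / p >

```python
import json


def parse_well_known(body: str) -> tuple[str, int]:
    """Extract (host, port) from a /.well-known/matrix/server body.

    IPv6 literals and the subsequent SRV fallback are ignored in
    this sketch.
    """
    target = json.loads(body)["m.server"]
    host, sep, port = target.rpartition(":")
    if sep and port.isdigit():
        return host, int(port)
    # No explicit port: federation defaults to 8448.
    return target, 8448


print(parse_well_known('{"m.server": "synapse.example.com:443"}'))
# ('synapse.example.com', 443)
print(parse_well_known('{"m.server": "example.org"}'))
# ('example.org', 8448)
```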
< h2 id = "srv-dns-record-delegation" > < a class = "header" href = "#srv-dns-record-delegation" > SRV DNS record delegation< / a > < / h2 >
< p > It is also possible to do delegation using a SRV DNS record. However, that is
considered an advanced topic since it's a bit complex to set up, and < code > .well-known< / code >
delegation is already enough in most cases.< / p >
< p > However, if you really need it, you can find documentation on what such a
record should look like and how Synapse will use it in < a href = "https://matrix.org/docs/spec/server_server/latest#resolving-server-names" > the Matrix
specification< / a > .< / p >
< h2 id = "delegation-faq" > < a class = "header" href = "#delegation-faq" > Delegation FAQ< / a > < / h2 >
< h3 id = "when-do-i-need-delegation" > < a class = "header" href = "#when-do-i-need-delegation" > When do I need delegation?< / a > < / h3 >
< p > If your homeserver's APIs are accessible on the default federation port (8448)
and the domain your < code > server_name< / code > points to, you do not need any delegation.< / p >
< p > For instance, if you registered < code > example.com< / code > and pointed its DNS A record at a
fresh server, you could install Synapse on that host, giving it a < code > server_name< / code >
of < code > example.com< / code > , and once a reverse proxy has been set up to proxy all requests
sent to the port < code > 8448< / code > and serve TLS certificates for < code > example.com< / code > , you
wouldn't need any delegation set up.< / p >
< p > < strong > However< / strong > , if your homeserver's APIs aren't accessible on port 8448 and on the
domain < code > server_name< / code > points to, you will need to let other servers know how to
find it using delegation.< / p >
< h3 id = "do-you-still-recommend-against-using-a-reverse-proxy-on-the-federation-port" > < a class = "header" href = "#do-you-still-recommend-against-using-a-reverse-proxy-on-the-federation-port" > Do you still recommend against using a reverse proxy on the federation port?< / a > < / h3 >
< p > We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.< / p >
< p > See < a href = "reverse_proxy.html" > reverse_proxy.md< / a > for information on setting up a
reverse proxy.< / p >
< h3 id = "do-i-still-need-to-give-my-tls-certificates-to-synapse-if-i-am-using-a-reverse-proxy" > < a class = "header" href = "#do-i-still-need-to-give-my-tls-certificates-to-synapse-if-i-am-using-a-reverse-proxy" > Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?< / a > < / h3 >
< p > This is no longer necessary. If you are using a reverse proxy for all of your
TLS traffic, then you can set < code > no_tls: True< / code > in the Synapse config.< / p >
< p > In that case, the only reason Synapse needs the certificate is to populate a legacy
< code > tls_fingerprints< / code > field in the federation API. This is ignored by Synapse 0.99.0
and later, and the only time pre-0.99 Synapses will check it is when attempting to
fetch the server keys - and generally this is delegated via < code > matrix.org< / code > , which
is running a modern version of Synapse.< / p >
< h3 id = "do-i-need-the-same-certificate-for-the-client-and-federation-port" > < a class = "header" href = "#do-i-need-the-same-certificate-for-the-client-and-federation-port" > Do I need the same certificate for the client and federation port?< / a > < / h3 >
< p > No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > <!--
Include the contents of UPGRADE.rst from the project root without moving it, which may
break links around the internet. Additionally, note that SUMMARY.md is unable to
directly link to content outside of the docs/ directory. So we use this file as a
redirection.
-->
< h1 id = "upgrading-synapse" > < a class = "header" href = "#upgrading-synapse" > Upgrading Synapse< / a > < / h1 >
< p > Before upgrading, check whether any special steps are required to upgrade from the
version you currently have installed to the current version of Synapse. The extra
instructions that may be required are listed later in this document.< / p >
< ul >
< li >
< p > Check that your versions of Python and PostgreSQL are still supported.< / p >
< p > Synapse follows upstream lifecycles for < a href = "https://devguide.python.org/devcycle/#end-of-life-branches" > Python< / a > and < a href = "https://www.postgresql.org/support/versioning/" > PostgreSQL< / a > , and
removes support for versions which are no longer maintained.< / p >
< p > The website https://endoflife.date also offers convenient summaries.< / p >
< / li >
< li >
< p > If Synapse was installed using < a href = "INSTALL.md#prebuilt-packages" > prebuilt packages< / a > , you will need to follow the normal process
for upgrading those packages.< / p >
< / li >
< li >
< p > If Synapse was installed from source, then:< / p >
< ol >
< li >
< p > Activate the virtualenv before upgrading. For example, if Synapse is
installed in a virtualenv in < code > ~/synapse/env< / code > then run:< / p >
< pre > < code class = "language-sh" > source ~/synapse/env/bin/activate
< / code > < / pre >
< / li >
< li >
< p > If Synapse was installed using pip then upgrade to the latest version by
running:< / p >
< pre > < code class = "language-sh" > pip install --upgrade matrix-synapse
< / code > < / pre >
< p > If Synapse was installed using git then upgrade to the latest version by
running:< / p >
< pre > < code class = "language-sh" > git pull
pip install --upgrade .
< / code > < / pre >
< / li >
< li >
< p > Restart Synapse:< / p >
< pre > < code class = "language-sh" > ./synctl restart
< / code > < / pre >
< / li >
< / ol >
< / li >
< / ul >
< p > To check whether your update was successful, you can check the running server
version with:< / p >
< pre > < code class = "language-sh" > # you may need to replace 'localhost:8008' if synapse is not configured
# to listen on port 8008.
curl http://localhost:8008/_synapse/admin/v1/server_version
< / code > < / pre >
< h2 id = "rolling-back-to-older-versions" > < a class = "header" href = "#rolling-back-to-older-versions" > Rolling back to older versions< / a > < / h2 >
< p > Rolling back to previous releases can be difficult, due to database schema
changes between releases. Where we have been able to test the rollback process,
this will be noted below.< / p >
< p > In general, you will need to undo any changes made during the upgrade process,
for example:< / p >
< ul >
< li >
< p > pip:< / p >
< pre > < code class = "language-sh" > source env/bin/activate
# replace `1.3.0` accordingly:
pip install matrix-synapse==1.3.0
< / code > < / pre >
< / li >
< li >
< p > Debian:< / p >
< pre > < code class = "language-sh" > # replace `1.3.0` and `stretch` accordingly:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
< / code > < / pre >
< / li >
< / ul >
< h1 id = "upgrading-to-v1340" > < a class = "header" href = "#upgrading-to-v1340" > Upgrading to v1.34.0< / a > < / h1 >
< h2 id = "room_invite_state_types-configuration-setting" > < a class = "header" href = "#room_invite_state_types-configuration-setting" > < code > room_invite_state_types< / code > configuration setting< / a > < / h2 >
< p > The < code > room_invite_state_types< / code > configuration setting has been deprecated and
replaced with < code > room_prejoin_state< / code > . See the < a href = "https://github.com/matrix-org/synapse/blob/v1.34.0/docs/sample_config.yaml#L1515" > sample configuration file< / a > .< / p >
< p > If you have set < code > room_invite_state_types< / code > to the default value you should simply
remove it from your configuration file. The default value used to be:< / p >
< pre > < code class = "language-yaml" > room_invite_state_types:
   - " m.room.join_rules"
   - " m.room.canonical_alias"
   - " m.room.avatar"
   - " m.room.encryption"
   - " m.room.name"
< / code > < / pre >
< p > If you have customised this value, you should remove < code > room_invite_state_types< / code > and
configure < code > room_prejoin_state< / code > instead.< / p >
< h1 id = "upgrading-to-v1330" > < a class = "header" href = "#upgrading-to-v1330" > Upgrading to v1.33.0< / a > < / h1 >
< h2 id = "account-validity-html-templates-can-now-display-a-users-expiration-date" > < a class = "header" href = "#account-validity-html-templates-can-now-display-a-users-expiration-date" > Account Validity HTML templates can now display a user's expiration date< / a > < / h2 >
< p > This may affect you if you have enabled the account validity feature, and have made use of a
custom HTML template specified by the < code > account_validity.template_dir< / code > or < code > account_validity.account_renewed_html_path< / code >
Synapse config options.< / p >
< p > The template can now accept an < code > expiration_ts< / code > variable, which represents the unix timestamp in milliseconds of the
future date until which the account has been renewed. See the
< a href = "https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_renewed.html" > default template< / a >
for an example of usage.< / p >
< p > Also note that a new HTML template, < code > account_previously_renewed.html< / code > , has been added. This is shown to users
when they attempt to renew their account with a valid renewal token that has already been used before. The default
template contents can be found
< a href = "https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_previously_renewed.html" > here< / a > ,
and can also accept an < code > expiration_ts< / code > variable. This template replaces the error message users would previously see
upon attempting to use a valid renewal token more than once.< / p >
< h1 id = "upgrading-to-v1320" > < a class = "header" href = "#upgrading-to-v1320" > Upgrading to v1.32.0< / a > < / h1 >
< h2 id = "regression-causing-connected-prometheus-instances-to-become-overwhelmed" > < a class = "header" href = "#regression-causing-connected-prometheus-instances-to-become-overwhelmed" > Regression causing connected Prometheus instances to become overwhelmed< / a > < / h2 >
< p > This release introduces < a href = "https://github.com/matrix-org/synapse/issues/9853" > a regression< / a >
that can overwhelm connected Prometheus instances. This issue is not present in
Synapse v1.32.0rc1.< / p >
< p > If you have been affected, please downgrade to 1.31.0. You then may need to
remove excess writeahead logs in order for Prometheus to recover. Instructions
for doing so are provided
< a href = "https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183" > here< / a > .< / p >
< h2 id = "dropping-support-for-old-python-postgres-and-sqlite-versions" > < a class = "header" href = "#dropping-support-for-old-python-postgres-and-sqlite-versions" > Dropping support for old Python, Postgres and SQLite versions< / a > < / h2 >
< p > In line with our < a href = "https://github.com/matrix-org/synapse/blob/release-v1.32.0/docs/deprecation_policy.md" > deprecation policy< / a > ,
we've dropped support for Python 3.5 and PostgreSQL 9.5, as they are no longer supported upstream.< / p >
< p > This release of Synapse requires Python 3.6+ and PostgreSQL 9.6+ or SQLite 3.22+.< / p >
< h2 id = "removal-of-old-list-accounts-admin-api" > < a class = "header" href = "#removal-of-old-list-accounts-admin-api" > Removal of old List Accounts Admin API< / a > < / h2 >
< p > The deprecated v1 " list accounts" admin API (< code > GET /_synapse/admin/v1/users/< user_id> < / code > ) has been removed in this version.< / p >
< p > The < a href = "https://github.com/matrix-org/synapse/blob/master/docs/admin_api/user_admin_api.rst#list-accounts" > v2 list accounts API< / a >
has been available since Synapse 1.7.0 (2019-12-13), and is accessible under < code > GET /_synapse/admin/v2/users< / code > .< / p >
< p > The deprecation of the old endpoint was announced with Synapse 1.28.0 (released on 2021-02-25).< / p >
< h2 id = "application-services-must-use-type-mloginapplication_service-when-registering-users" > < a class = "header" href = "#application-services-must-use-type-mloginapplication_service-when-registering-users" > Application Services must use type < code > m.login.application_service< / code > when registering users< / a > < / h2 >
< p > In compliance with the
< a href = "https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions" > Application Service spec< / a > ,
Application Services are now required to use the < code > m.login.application_service< / code > type when registering users via the
< code > /_matrix/client/r0/register< / code > endpoint. This behaviour was deprecated in Synapse v1.30.0.< / p >
< p > Please ensure your Application Services are up to date.< / p >
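< p > For illustration, such a registration request now looks like the following (the username and token are placeholders):< / p >

```
POST /_matrix/client/r0/register?access_token=<as_token>
Content-Type: application/json

{
  "type": "m.login.application_service",
  "username": "_example_bot"
}
```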
< h1 id = "upgrading-to-v1290" > < a class = "header" href = "#upgrading-to-v1290" > Upgrading to v1.29.0< / a > < / h1 >
< h2 id = "requirement-for-x-forwarded-proto-header" > < a class = "header" href = "#requirement-for-x-forwarded-proto-header" > Requirement for X-Forwarded-Proto header< / a > < / h2 >
< p > When using Synapse with a reverse proxy (in particular, when using the
< code > x_forwarded< / code > option on an HTTP listener), Synapse now expects to receive an
< code > X-Forwarded-Proto< / code > header on incoming HTTP requests. If it is not set, Synapse
will log a warning on each received request.< / p >
< p > To avoid the warning, administrators using a reverse proxy should ensure that
the reverse proxy sets < code > X-Forwarded-Proto< / code > header to < code > https< / code > or < code > http< / code > to
indicate the protocol used by the client.< / p >
< p > Synapse also requires the < code > Host< / code > header to be preserved.< / p >
< p > See the < a href = "docs/reverse_proxy.md" > reverse proxy documentation< / a > , where the
example configurations have been updated to show how to set these headers.< / p >
< p > (Users of < a href = "https://caddyserver.com/" > Caddy< / a > are unaffected, since we believe it
sets < code > X-Forwarded-Proto< / code > by default.)< / p >
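< p > As a sketch for nginx (other reverse proxies have equivalent directives), the required headers can be set like so:< / p >

```nginx
location /_matrix {
    proxy_pass http://localhost:8008;
    # Preserve the original Host header and tell Synapse which
    # protocol the client used.
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```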
< h1 id = "upgrading-to-v1270" > < a class = "header" href = "#upgrading-to-v1270" > Upgrading to v1.27.0< / a > < / h1 >
< h2 id = "changes-to-callback-uri-for-oauth2--openid-connect-and-saml2" > < a class = "header" href = "#changes-to-callback-uri-for-oauth2--openid-connect-and-saml2" > Changes to callback URI for OAuth2 / OpenID Connect and SAML2< / a > < / h2 >
< p > This version changes the URI used for callbacks from OAuth2 and SAML2 identity providers:< / p >
< ul >
< li >
< p > If your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
provider, you will need to add < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code >
to the list of permitted " redirect URIs" at the identity provider.< / p >
< p > See < a href = "docs/openid.md" > docs/openid.md< / a > for more information on setting up OpenID
Connect.< / p >
< / li >
< li >
< p > If your server is configured for single sign-on via a SAML2 identity provider, you will
need to add < code > [synapse public baseurl]/_synapse/client/saml2/authn_response< / code > as a permitted
" ACS location" (also known as " allowed callback URLs" ) at the identity provider.< / p >
< p > The " Issuer" in the " AuthnRequest" to the SAML2 identity provider is also updated to
< code > [synapse public baseurl]/_synapse/client/saml2/metadata.xml< / code > . If your SAML2 identity
provider uses this property to validate or otherwise identify Synapse, its configuration
will need to be updated to use the new URL. Alternatively you could create a new, separate
" EntityDescriptor" in your SAML2 identity provider with the new URLs and leave the URLs in
the existing " EntityDescriptor" as they were.< / p >
< / li >
< / ul >
< h2 id = "changes-to-html-templates" > < a class = "header" href = "#changes-to-html-templates" > Changes to HTML templates< / a > < / h2 >
< p > The HTML templates for SSO and email notifications now have < a href = "https://jinja.palletsprojects.com/en/2.11.x/api/#autoescaping" > Jinja2's autoescape< / a >
enabled for files ending in < code > .html< / code > , < code > .htm< / code > , and < code > .xml< / code > . If you have customised
these templates and see issues when viewing them you might need to update them.
It is expected that most configurations will need no changes.< / p >
< p > If you have customised the < em > names< / em > of these templates, it is recommended
to verify that they end in < code > .html< / code > to ensure autoescape is enabled.< / p >
< p > The above applies to the following templates:< / p >
< ul >
< li > < code > add_threepid.html< / code > < / li >
< li > < code > add_threepid_failure.html< / code > < / li >
< li > < code > add_threepid_success.html< / code > < / li >
< li > < code > notice_expiry.html< / code > < / li >
< li > < code > notif_mail.html< / code > (which, by default, includes < code > room.html< / code > and < code > notif.html< / code > )< / li >
< li > < code > password_reset.html< / code > < / li >
< li > < code > password_reset_confirmation.html< / code > < / li >
< li > < code > password_reset_failure.html< / code > < / li >
< li > < code > password_reset_success.html< / code > < / li >
< li > < code > registration.html< / code > < / li >
< li > < code > registration_failure.html< / code > < / li >
< li > < code > registration_success.html< / code > < / li >
< li > < code > sso_account_deactivated.html< / code > < / li >
< li > < code > sso_auth_bad_user.html< / code > < / li >
< li > < code > sso_auth_confirm.html< / code > < / li >
< li > < code > sso_auth_success.html< / code > < / li >
< li > < code > sso_error.html< / code > < / li >
< li > < code > sso_login_idp_picker.html< / code > < / li >
< li > < code > sso_redirect_confirm.html< / code > < / li >
< / ul >
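< p > To illustrate what autoescaping changes, here is a minimal sketch using the standard library's < code > html.escape< / code > as a stand-in for Jinja2's behaviour: a template variable interpolated into an < code > .html< / code > template is now HTML-escaped rather than inserted verbatim.< / p >

```python
import html

def render_greeting(display_name: str) -> str:
    # With autoescape on, {{ display_name }} in an .html template is
    # escaped before interpolation, defusing any embedded markup.
    return "<p>Hello, {}!</p>".format(html.escape(display_name))
```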
< h1 id = "upgrading-to-v1260" > < a class = "header" href = "#upgrading-to-v1260" > Upgrading to v1.26.0< / a > < / h1 >
< h2 id = "rolling-back-to-v1250-after-a-failed-upgrade" > < a class = "header" href = "#rolling-back-to-v1250-after-a-failed-upgrade" > Rolling back to v1.25.0 after a failed upgrade< / a > < / h2 >
< p > v1.26.0 includes a lot of large changes. If something problematic occurs, you
may want to roll back to a previous version of Synapse. Because v1.26.0 also
includes a new database schema version, reverting that version is also required
alongside the generic rollback instructions mentioned above. In short, to roll
back to v1.25.0 you need to:< / p >
< ol >
< li >
< p > Stop the server< / p >
< / li >
< li >
< p > Decrease the schema version in the database:< / p >
< pre > < code class = "language-sql" > UPDATE schema_version SET version = 58;
< / code > < / pre >
< / li >
< li >
< p > Delete the ignored users & chain cover data:< / p >
< pre > < code class = "language-sql" > DROP TABLE IF EXISTS ignored_users;
UPDATE rooms SET has_auth_chain_index = false;
< / code > < / pre >
< p > For PostgreSQL run:< / p >
< pre > < code class = "language-sql" > TRUNCATE event_auth_chain_links;
TRUNCATE event_auth_chains;
< / code > < / pre >
< p > For SQLite run:< / p >
< pre > < code class = "language-sql" > DELETE FROM event_auth_chain_links;
DELETE FROM event_auth_chains;
< / code > < / pre >
< / li >
< li >
< p > Mark the deltas as not run (so they will re-run on upgrade).< / p >
< pre > < code class = "language-sql" > DELETE FROM applied_schema_deltas WHERE version = 59 AND file = '59/01ignored_user.py';
DELETE FROM applied_schema_deltas WHERE version = 59 AND file = '59/06chain_cover_index.sql';
< / code > < / pre >
< / li >
< li >
< p > Downgrade Synapse by following the instructions for your installation method
in the "Rolling back to older versions" section above.< / p >
< / li >
< / ol >
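< p > The database steps above can be rehearsed end-to-end against a throwaway SQLite database. This is a sketch with heavily simplified stand-in schemas, not the real Synapse tables:< / p >

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Minimal stand-ins for the real tables (schemas heavily simplified).
cur.executescript("""
    CREATE TABLE schema_version (version INTEGER);
    INSERT INTO schema_version VALUES (59);
    CREATE TABLE ignored_users (ignorer_user_id TEXT, ignored_user_id TEXT);
    CREATE TABLE rooms (room_id TEXT, has_auth_chain_index BOOLEAN);
    INSERT INTO rooms VALUES ('!room:example.com', 1);
    CREATE TABLE event_auth_chain_links (origin_chain_id INTEGER);
    CREATE TABLE event_auth_chains (chain_id INTEGER);
    CREATE TABLE applied_schema_deltas (version INTEGER, file TEXT);
    INSERT INTO applied_schema_deltas VALUES (59, '59/01ignored_user.py');
    INSERT INTO applied_schema_deltas VALUES (59, '59/06chain_cover_index.sql');
""")

# Step 2: decrease the schema version.
cur.execute("UPDATE schema_version SET version = 58")

# Step 3: delete the ignored users and chain cover data (SQLite variant;
# SQLite stores booleans as 0/1 and uses DELETE rather than TRUNCATE).
cur.executescript("""
    DROP TABLE IF EXISTS ignored_users;
    UPDATE rooms SET has_auth_chain_index = 0;
    DELETE FROM event_auth_chain_links;
    DELETE FROM event_auth_chains;
""")

# Step 4: mark the deltas as not run so they re-run on upgrade.
cur.execute("DELETE FROM applied_schema_deltas WHERE version = 59")

version = cur.execute("SELECT version FROM schema_version").fetchone()[0]
```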
< h1 id = "upgrading-to-v1250" > < a class = "header" href = "#upgrading-to-v1250" > Upgrading to v1.25.0< / a > < / h1 >
< h2 id = "last-release-supporting-python-35" > < a class = "header" href = "#last-release-supporting-python-35" > Last release supporting Python 3.5< / a > < / h2 >
< p > This is the last release of Synapse which guarantees support for Python 3.5,
which passed its upstream End of Life date several months ago.< / p >
< p > We will attempt to maintain support through March 2021, but without guarantees.< / p >
< p > In the future, Synapse will follow upstream schedules for ending support of
older versions of Python and PostgreSQL. Please upgrade to at least Python 3.6
and PostgreSQL 9.6 as soon as possible.< / p >
< h2 id = "blacklisting-ip-ranges" > < a class = "header" href = "#blacklisting-ip-ranges" > Blacklisting IP ranges< / a > < / h2 >
< p > Synapse v1.25.0 includes new settings, < code > ip_range_blacklist< / code > and
< code > ip_range_whitelist< / code > , for controlling outgoing requests from Synapse for federation,
identity servers, push, and for checking key validity for third-party invite events.
The previous setting, < code > federation_ip_range_blacklist< / code > , is deprecated. The new
< code > ip_range_blacklist< / code > defaults to private IP ranges if it is not defined.< / p >
< p > If you have never customised < code > federation_ip_range_blacklist< / code > it is recommended
that you remove that setting.< / p >
< p > If you have customised < code > federation_ip_range_blacklist< / code > you should update the
setting name to < code > ip_range_blacklist< / code > .< / p >
< p > If you have a custom push server that is reached via private IP space you may
need to customise < code > ip_range_blacklist< / code > or < code > ip_range_whitelist< / code > .< / p >
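< p > The semantics of the two settings can be sketched with the standard library's < code > ipaddress< / code > module. This illustrates the blacklist/whitelist interaction only, not Synapse's implementation, and the ranges shown are just a subset of the defaults:< / p >

```python
import ipaddress

# A subset of the default private ranges, for illustration.
IP_RANGE_BLACKLIST = [
    ipaddress.ip_network(r)
    for r in ("127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]
# A hypothetical whitelist entry for a push server on private IP space.
IP_RANGE_WHITELIST = [ipaddress.ip_network("192.168.1.42/32")]

def is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    # The whitelist punches holes in the blacklist.
    if any(addr in net for net in IP_RANGE_WHITELIST):
        return False
    return any(addr in net for net in IP_RANGE_BLACKLIST)
```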
< h1 id = "upgrading-to-v1240" > < a class = "header" href = "#upgrading-to-v1240" > Upgrading to v1.24.0< / a > < / h1 >
< h2 id = "custom-openid-connect-mapping-provider-breaking-change" > < a class = "header" href = "#custom-openid-connect-mapping-provider-breaking-change" > Custom OpenID Connect mapping provider breaking change< / a > < / h2 >
< p > This release allows the OpenID Connect mapping provider to perform normalisation
of the localpart of the Matrix ID. This allows for the mapping provider to
specify different algorithms, instead of the < a href = "https://matrix.org/docs/spec/appendices#mapping-from-other-character-sets" > default way< / a > .< / p >
< p > If your Synapse configuration uses a custom mapping provider
(< code > oidc_config.user_mapping_provider.module< / code > is specified and not equal to
< code > synapse.handlers.oidc_handler.JinjaOidcMappingProvider< / code > ) then you < em > must< / em > ensure
that < code > map_user_attributes< / code > of the mapping provider performs some normalisation
of the < code > localpart< / code > returned. To match previous behaviour you can use the
< code > map_username_to_mxid_localpart< / code > function provided by Synapse. An example is
shown below:< / p >
< pre > < code class = "language-python" > from synapse.types import map_username_to_mxid_localpart

class MyMappingProvider:
    def map_user_attributes(self, userinfo, token):
        # ... your custom logic ...
        sso_user_id = ...
        localpart = map_username_to_mxid_localpart(sso_user_id)
        return {"localpart": localpart}
< / code > < / pre >
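< p > For reference, here is a rough approximation of what such normalisation does: lower-casing ASCII letters and hex-encoding bytes outside the localpart alphabet, in the spirit of the default mapping linked above. This sketch is < em > not< / em > the actual < code > map_username_to_mxid_localpart< / code > :< / p >

```python
def map_username_to_localpart(username: str) -> str:
    # Rough approximation only: lower-case A-Z, keep bytes in the
    # localpart alphabet, and encode everything else as '=' + hex.
    allowed = set("abcdefghijklmnopqrstuvwxyz0123456789._-/")
    out = []
    for byte in username.encode("utf-8"):
        char = chr(byte)
        if "A" <= char <= "Z":
            out.append(char.lower())
        elif char in allowed:
            out.append(char)
        else:
            out.append("=%02x" % byte)
    return "".join(out)
```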
< h2 id = "removal-historical-synapse-admin-api" > < a class = "header" href = "#removal-historical-synapse-admin-api" > Removal historical Synapse Admin API< / a > < / h2 >
< p > Historically, the Synapse Admin API has been accessible under:< / p >
< ul >
< li > < code > /_matrix/client/api/v1/admin< / code > < / li >
< li > < code > /_matrix/client/unstable/admin< / code > < / li >
< li > < code > /_matrix/client/r0/admin< / code > < / li >
< li > < code > /_synapse/admin/v1< / code > < / li >
< / ul >
< p > The endpoints with < code > /_matrix/client/*< / code > prefixes have been removed as of v1.24.0.
The Admin API is now only accessible under:< / p >
< ul >
< li > < code > /_synapse/admin/v1< / code > < / li >
< / ul >
< p > The only exception is the < code > /admin/whois< / code > endpoint, which is
< a href = "https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid" > also available via the client-server API< / a > .< / p >
< p > The deprecation of the old endpoints was announced with Synapse 1.20.0 (released
on 2020-09-22) and makes it easier for homeserver admins to lock down external
access to the Admin API endpoints.< / p >
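< p > Scripts that still call the removed prefixes can be updated mechanically. A hypothetical helper:< / p >

```python
# The removed pre-v1.24.0 Admin API prefixes, and their replacement.
OLD_PREFIXES = (
    "/_matrix/client/api/v1/admin",
    "/_matrix/client/unstable/admin",
    "/_matrix/client/r0/admin",
)
NEW_PREFIX = "/_synapse/admin/v1"

def migrate_admin_path(path: str) -> str:
    # Rewrite an old Admin API path to the new prefix; leave others alone.
    for old in OLD_PREFIXES:
        if path.startswith(old):
            return NEW_PREFIX + path[len(old):]
    return path
```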
< h1 id = "upgrading-to-v1230" > < a class = "header" href = "#upgrading-to-v1230" > Upgrading to v1.23.0< / a > < / h1 >
< h2 id = "structured-logging-configuration-breaking-changes" > < a class = "header" href = "#structured-logging-configuration-breaking-changes" > Structured logging configuration breaking changes< / a > < / h2 >
< p > This release deprecates use of the < code > structured: true< / code > logging configuration for
structured logging. If your logging configuration contains < code > structured: true< / code >
then it should be modified based on the < a href = "https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md" > structured logging documentation< / a > .< / p >
< p > The < code > structured< / code > and < code > drains< / code > logging options are now deprecated and should
be replaced by standard logging configuration of < code > handlers< / code > and < code > formatters< / code > .< / p >
< p > A future release of Synapse will make using < code > structured: true< / code > an error.< / p >
< h1 id = "upgrading-to-v1220" > < a class = "header" href = "#upgrading-to-v1220" > Upgrading to v1.22.0< / a > < / h1 >
< h2 id = "thirdpartyeventrules-breaking-changes" > < a class = "header" href = "#thirdpartyeventrules-breaking-changes" > ThirdPartyEventRules breaking changes< / a > < / h2 >
< p > This release introduces a backwards-incompatible change to modules making use of
< code > ThirdPartyEventRules< / code > in Synapse. If you make use of a module defined under the
< code > third_party_event_rules< / code > config option, please make sure it is updated to handle
the below change:< / p >
< p > The < code > http_client< / code > argument is no longer passed to modules as they are initialised. Instead,
modules are expected to make use of the < code > http_client< / code > property on the < code > ModuleApi< / code > class.
Modules are now passed a < code > module_api< / code > argument during initialisation, which is an instance of
< code > ModuleApi< / code > . < code > ModuleApi< / code > instances have a < code > http_client< / code > property which acts the same as
the < code > http_client< / code > argument previously passed to < code > ThirdPartyEventRules< / code > modules.< / p >
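< p > A minimal sketch of the required module change, with stand-in classes replacing the real < code > ModuleApi< / code > and HTTP client:< / p >

```python
class FakeHttpClient:
    # Stand-in for Synapse's HTTP client; illustration only.
    async def get_json(self, url):
        return {"fetched": url}

class FakeModuleApi:
    # Stand-in for synapse.module_api.ModuleApi, which exposes the
    # http_client property that modules should now use.
    def __init__(self):
        self.http_client = FakeHttpClient()

class MyThirdPartyEventRules:
    # Before v1.22.0 the second argument was http_client itself;
    # from v1.22.0 modules receive a ModuleApi instance instead.
    def __init__(self, config, module_api):
        self.config = config
        self.http_client = module_api.http_client

rules = MyThirdPartyEventRules(config={}, module_api=FakeModuleApi())
```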
< h1 id = "upgrading-to-v1210" > < a class = "header" href = "#upgrading-to-v1210" > Upgrading to v1.21.0< / a > < / h1 >
< h2 id = "forwarding-_synapseclient-through-your-reverse-proxy" > < a class = "header" href = "#forwarding-_synapseclient-through-your-reverse-proxy" > Forwarding < code > /_synapse/client< / code > through your reverse proxy< / a > < / h2 >
< p > The < a href = "https://github.com/matrix-org/synapse/blob/develop/docs/reverse_proxy.md" > reverse proxy documentation< / a > has been updated
to include reverse proxy directives for < code > /_synapse/client/*< / code > endpoints. As the user password
reset flow now uses endpoints under this prefix, < strong > you must update your reverse proxy
configurations for user password reset to work< / strong > .< / p >
< p > Additionally, note that the < a href = "https://github.com/matrix-org/synapse/blob/develop/docs/workers.md" > Synapse worker documentation< / a > has been updated to
state that the < code > /_synapse/client/password_reset/email/submit_token< / code > endpoint can be handled
by all workers. If you make use of Synapse's worker feature, please update your reverse proxy
configuration to reflect this change.< / p >
< h2 id = "new-html-templates" > < a class = "header" href = "#new-html-templates" > New HTML templates< / a > < / h2 >
< p > A new HTML template,
< a href = "https://github.com/matrix-org/synapse/blob/develop/synapse/res/templates/password_reset_confirmation.html" > password_reset_confirmation.html< / a > ,
has been added to the < code > synapse/res/templates< / code > directory. If you are using a
custom template directory, you may want to copy the template over and modify it.< / p >
< p > Note that as of v1.20.0, templates do not need to be included in custom template
directories for Synapse to start. The default templates will be used if a custom
template cannot be found.< / p >
< p > This page will appear to the user after clicking a password reset link that has
been emailed to them.< / p >
< p > To complete password reset, the page must include a way to make a < code > POST< / code >
request to
< code > /_synapse/client/password_reset/{medium}/submit_token< / code >
with the query parameters from the original link, presented as a URL-encoded form. See the file
itself for more details.< / p >
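< p > A sketch of what the page needs to do, using a hypothetical link: parse the query parameters out of the emailed URL and re-submit them as a URL-encoded form body:< / p >

```python
from urllib.parse import parse_qsl, urlencode, urlparse

# A hypothetical password-reset link as emailed to the user.
link = ("https://matrix.example.com/_synapse/client/password_reset/email/"
        "submit_token?token=abc123&client_secret=s3cret&sid=42")

parsed = urlparse(link)
params = dict(parse_qsl(parsed.query))

# The page re-submits the same parameters to the same endpoint as a
# URL-encoded form body via POST.
post_body = urlencode(params)
```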
< h2 id = "updated-single-sign-on-html-templates" > < a class = "header" href = "#updated-single-sign-on-html-templates" > Updated Single Sign-on HTML Templates< / a > < / h2 >
< p > The < code > saml_error.html< / code > template was removed from Synapse and replaced with the
< code > sso_error.html< / code > template. If your Synapse is configured to use SAML and a
custom < code > sso_redirect_confirm_template_dir< / code > configuration then any customisations
of the < code > saml_error.html< / code > template will need to be merged into the < code > sso_error.html< / code >
template. These templates are similar, but the parameters are slightly different:< / p >
< ul >
< li > The < code > msg< / code > parameter should be renamed to < code > error_description< / code > .< / li >
< li > There is no longer a < code > code< / code > parameter for the response code.< / li >
< li > A string < code > error< / code > parameter is available that includes a short hint of why a
user is seeing the error page.< / li >
< / ul >
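< p > Customisations can be migrated by renaming the context parameters accordingly. A hypothetical helper mirroring the list above (the < code > "unknown_error"< / code > default is illustrative only):< / p >

```python
def upgrade_saml_error_context(ctx: dict) -> dict:
    # Map a saml_error.html context onto sso_error.html's parameters.
    new_ctx = dict(ctx)
    if "msg" in new_ctx:
        # The msg parameter is renamed to error_description.
        new_ctx["error_description"] = new_ctx.pop("msg")
    # There is no longer a code parameter for the response code.
    new_ctx.pop("code", None)
    # A short machine-readable error hint is now available.
    new_ctx.setdefault("error", "unknown_error")
    return new_ctx
```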
< h1 id = "upgrading-to-v1180" > < a class = "header" href = "#upgrading-to-v1180" > Upgrading to v1.18.0< / a > < / h1 >
< h2 id = "docker--py3-suffix-will-be-removed-in-future-versions" > < a class = "header" href = "#docker--py3-suffix-will-be-removed-in-future-versions" > Docker < code > -py3< / code > suffix will be removed in future versions< / a > < / h2 >
< p > From 10th August 2020, we will no longer publish Docker images with the < code > -py3< / code > tag suffix. The images tagged with the < code > -py3< / code > suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.< / p >
< p > On 10th August, we will remove the < code > latest-py3< / code > tag. Existing per-release tags (such as < code > v1.18.0-py3< / code > ) will not be removed, but no new < code > -py3< / code > tags will be added.< / p >
< p > Scripts relying on the < code > -py3< / code > suffix will need to be updated.< / p >
< h2 id = "redis-replication-is-now-recommended-in-lieu-of-tcp-replication" > < a class = "header" href = "#redis-replication-is-now-recommended-in-lieu-of-tcp-replication" > Redis replication is now recommended in lieu of TCP replication< / a > < / h2 >
< p > When setting up worker processes, we now recommend the use of a Redis server for replication. < strong > The old direct TCP connection method is deprecated and will be removed in a future release.< / strong >
See < a href = "docs/workers.md" > docs/workers.md< / a > for more details.< / p >
< h1 id = "upgrading-to-v1140" > < a class = "header" href = "#upgrading-to-v1140" > Upgrading to v1.14.0< / a > < / h1 >
< p > This version includes a database update which is run as part of the upgrade,
and which may take a couple of minutes in the case of a large server. Synapse
will not respond to HTTP requests while this update is taking place.< / p >
< h1 id = "upgrading-to-v1130" > < a class = "header" href = "#upgrading-to-v1130" > Upgrading to v1.13.0< / a > < / h1 >
< h2 id = "incorrect-database-migration-in-old-synapse-versions" > < a class = "header" href = "#incorrect-database-migration-in-old-synapse-versions" > Incorrect database migration in old synapse versions< / a > < / h2 >
< p > A bug was introduced in Synapse 1.4.0 which could cause the room directory to
be incomplete or empty if Synapse was upgraded directly from v1.2.1 or
earlier, to versions between v1.4.0 and v1.12.x.< / p >
< p > This will < em > not< / em > be a problem for Synapse installations which were:< / p >
< ul >
< li > created at v1.4.0 or later,< / li >
< li > upgraded via v1.3.x, or< / li >
< li > upgraded straight from v1.2.1 or earlier to v1.13.0 or later.< / li >
< / ul >
< p > If completeness of the room directory is a concern, installations which are
affected can be repaired as follows:< / p >
< ol >
< li >
< p > Run the following SQL from a < code > psql< / code > or < code > sqlite3< / code > console:< / p >
< pre > < code class = "language-sql" > INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
    ('populate_stats_process_rooms', '{}', 'current_state_events_membership');

INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
    ('populate_stats_process_users', '{}', 'populate_stats_process_rooms');
< / code > < / pre >
< / li >
< li >
< p > Restart Synapse.< / p >
< / li >
< / ol >
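< p > The repair can be rehearsed against a throwaway SQLite database (with a simplified stand-in schema):< / p >

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE background_updates (
    update_name TEXT, progress_json TEXT, depends_on TEXT)""")

# The two repair rows from step 1; Synapse picks them up after restart.
cur.execute("""INSERT INTO background_updates
    (update_name, progress_json, depends_on) VALUES
    ('populate_stats_process_rooms', '{}', 'current_state_events_membership')""")
cur.execute("""INSERT INTO background_updates
    (update_name, progress_json, depends_on) VALUES
    ('populate_stats_process_users', '{}', 'populate_stats_process_rooms')""")

rows = cur.execute("SELECT update_name FROM background_updates").fetchall()
```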
< h2 id = "new-single-sign-on-html-templates" > < a class = "header" href = "#new-single-sign-on-html-templates" > New Single Sign-on HTML Templates< / a > < / h2 >
< p > New templates (< code > sso_auth_confirm.html< / code > , < code > sso_auth_success.html< / code > , and
< code > sso_account_deactivated.html< / code > ) were added to Synapse. If your Synapse is
configured to use SSO and a custom < code > sso_redirect_confirm_template_dir< / code >
configuration then these templates will need to be copied from
< code > synapse/res/templates< / code > into that directory.< / p >
< h2 id = "synapse-sso-plugins-method-deprecation" > < a class = "header" href = "#synapse-sso-plugins-method-deprecation" > Synapse SSO Plugins Method Deprecation< / a > < / h2 >
< p > Plugins using the < code > complete_sso_login< / code > method of
< code > synapse.module_api.ModuleApi< / code > should update to using the async/await
version < code > complete_sso_login_async< / code > which includes additional checks. The
non-async version is considered deprecated.< / p >
< h2 id = "rolling-back-to-v1124-after-a-failed-upgrade" > < a class = "header" href = "#rolling-back-to-v1124-after-a-failed-upgrade" > Rolling back to v1.12.4 after a failed upgrade< / a > < / h2 >
< p > v1.13.0 includes a lot of large changes. If something problematic occurs, you
may want to roll back to a previous version of Synapse. Because v1.13.0 also
includes a new database schema version, reverting that version is also required
alongside the generic rollback instructions mentioned above. In short, to roll
back to v1.12.4 you need to:< / p >
< ol >
< li >
< p > Stop the server< / p >
< / li >
< li >
< p > Decrease the schema version in the database:< / p >
< pre > < code class = "language-sql" > UPDATE schema_version SET version = 57;
< / code > < / pre >
< / li >
< li >
< p > Downgrade Synapse by following the instructions for your installation method
in the "Rolling back to older versions" section above.< / p >
< / li >
< / ol >
< h1 id = "upgrading-to-v1120" > < a class = "header" href = "#upgrading-to-v1120" > Upgrading to v1.12.0< / a > < / h1 >
< p > This version includes a database update which is run as part of the upgrade,
and which may take some time (several hours in the case of a large
server). Synapse will not respond to HTTP requests while this update is taking
place.< / p >
< p > This is only likely to be a problem in the case of a server which is
participating in many rooms.< / p >
< ol start = "0" >
< li >
< p > As with all upgrades, it is recommended that you have a recent backup of
your database which can be used for recovery in the event of any problems.< / p >
< / li >
< li >
< p > As an initial check to see if you will be affected, you can try running the
following query from the < code > psql< / code > or < code > sqlite3< / code > console. It is safe to run it
while Synapse is still running.< / p >
< pre > < code class = "language-sql" > SELECT MAX(q.v) FROM (
  SELECT (
    SELECT ej.json AS v
    FROM state_events se INNER JOIN event_json ej USING (event_id)
    WHERE se.room_id=rooms.room_id AND se.type='m.room.create' AND se.state_key=''
    LIMIT 1
  ) FROM rooms WHERE rooms.room_version IS NULL
) q;
< / code > < / pre >
< p > This query will take about the same amount of time as the upgrade process: i.e.,
if it takes 5 minutes, then it is likely that Synapse will be unresponsive for
5 minutes during the upgrade.< / p >
< p > If you consider an outage of this duration to be acceptable, no further
action is necessary and you can simply start Synapse 1.12.0.< / p >
< p > If you would prefer to reduce the downtime, continue with the steps below.< / p >
< / li >
< li >
< p > The easiest workaround for this issue is to manually
create a new index before upgrading. On PostgreSQL, this can be done as follows:< / p >
< pre > < code class = "language-sql" > CREATE INDEX CONCURRENTLY tmp_upgrade_1_12_0_index
    ON state_events(room_id) WHERE type = 'm.room.create';
< / code > < / pre >
< p > The above query may take some time, but is also safe to run while Synapse is
running.< / p >
< p > We assume that no SQLite users have databases large enough to be
affected. If you < em > are< / em > affected, you can run a similar query, omitting the
< code > CONCURRENTLY< / code > keyword. Note however that this operation may in itself cause
Synapse to stop running for some time. Synapse admins are reminded that
< a href = "https://github.com/matrix-org/synapse/blob/master/README.rst#using-postgresql" > SQLite is not recommended for use outside a test environment< / a > .< / p >
< / li >
< li >
< p > Once the index has been created, the < code > SELECT< / code > query in step 1 above should
complete quickly. It is therefore safe to upgrade to Synapse 1.12.0.< / p >
< / li >
< li >
< p > Once Synapse 1.12.0 has successfully started and is responding to HTTP
requests, the temporary index can be removed:< / p >
< pre > < code class = "language-sql" > DROP INDEX tmp_upgrade_1_12_0_index;
< / code > < / pre >
< / li >
< / ol >
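< p > The whole workaround can be rehearsed on a throwaway SQLite database (SQLite also supports the partial index, just without < code > CONCURRENTLY< / code > ):< / p >

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE state_events (room_id TEXT, type TEXT)")
cur.execute("INSERT INTO state_events VALUES ('!a:example.com', 'm.room.create')")

# Step 2: create the temporary partial index (on PostgreSQL you would
# add CONCURRENTLY; SQLite has no such keyword).
cur.execute("""CREATE INDEX tmp_upgrade_1_12_0_index
               ON state_events(room_id) WHERE type = 'm.room.create'""")

# Step 4: once the upgrade has completed, drop it again.
cur.execute("DROP INDEX tmp_upgrade_1_12_0_index")
```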
< h1 id = "upgrading-to-v1100" > < a class = "header" href = "#upgrading-to-v1100" > Upgrading to v1.10.0< / a > < / h1 >
< p > Synapse will now log a warning on start up if used with a PostgreSQL database
that has a non-recommended locale set.< / p >
< p > See < a href = "docs/postgres.md" > docs/postgres.md< / a > for details.< / p >
< h1 id = "upgrading-to-v180" > < a class = "header" href = "#upgrading-to-v180" > Upgrading to v1.8.0< / a > < / h1 >
< p > Specifying a < code > log_file< / code > config option will now cause Synapse to refuse to
start, and should be replaced with the < code > log_config< / code > option. Support for
the < code > log_file< / code > option was removed in v1.3.0 and has since had no effect.< / p >
< h1 id = "upgrading-to-v170" > < a class = "header" href = "#upgrading-to-v170" > Upgrading to v1.7.0< / a > < / h1 >
< p > In an attempt to configure Synapse in a privacy preserving way, the default
behaviours of < code > allow_public_rooms_without_auth< / code > and
< code > allow_public_rooms_over_federation< / code > have been inverted. This means that by
default, only authenticated users querying the Client/Server API will be able
to query the room directory, and relatedly that the server will not share
room directory information with other servers over federation.< / p >
< p > If your installation does not explicitly set these settings one way or the other
and you want either setting to be < code > true< / code > then it will be necessary to update
your homeserver configuration file accordingly.< / p >
< p > For more details on the surrounding context see our < a href = "https://matrix.org/blog/2019/11/09/avoiding-unwelcome-visitors-on-private-matrix-servers" > explainer< / a > .< / p >
< h1 id = "upgrading-to-v150" > < a class = "header" href = "#upgrading-to-v150" > Upgrading to v1.5.0< / a > < / h1 >
< p > This release includes a database migration which may take several minutes to
complete if there are a large number (more than a million or so) of entries in
the < code > devices< / code > table. This is only likely to be a problem on very large
installations.< / p >
< h1 id = "upgrading-to-v140" > < a class = "header" href = "#upgrading-to-v140" > Upgrading to v1.4.0< / a > < / h1 >
< h2 id = "new-custom-templates" > < a class = "header" href = "#new-custom-templates" > New custom templates< / a > < / h2 >
< p > If you have configured a custom template directory with the
< code > email.template_dir< / code > option, be aware that there are new templates regarding
registration and threepid management (see below) that must be included.< / p >
< ul >
< li > < code > registration.html< / code > and < code > registration.txt< / code > < / li >
< li > < code > registration_success.html< / code > and < code > registration_failure.html< / code > < / li >
< li > < code > add_threepid.html< / code > and < code > add_threepid.txt< / code > < / li >
< li > < code > add_threepid_failure.html< / code > and < code > add_threepid_success.html< / code > < / li >
< / ul >
< p > Synapse will expect these files to exist inside the configured template
directory, and < strong > will fail to start< / strong > if they are absent.
To view the default templates, see < a href = "https://github.com/matrix-org/synapse/tree/master/synapse/res/templates" > synapse/res/templates< / a > .< / p >
< h2 id = "3pid-verification-changes" > < a class = "header" href = "#3pid-verification-changes" > 3pid verification changes< / a > < / h2 >
< p > < strong > Note: As of this release, users will be unable to add phone numbers or email
addresses to their accounts, without changes to the Synapse configuration. This
includes adding an email address during registration.< / strong > < / p >
< p > It is possible for a user to associate an email address or phone number
with their account, for a number of reasons:< / p >
< ul >
< li > for use when logging in, as an alternative to the user id.< / li >
< li > in the case of email, as an alternative contact to help with account recovery.< / li >
< li > in the case of email, to receive notifications of missed messages.< / li >
< / ul >
< p > Before an email address or phone number can be added to a user's account,
or before such an address is used to carry out a password-reset, Synapse must
confirm the operation with the owner of the email address or phone number.
It does this by sending an email or text giving the user a link or token to confirm
receipt. This process is known as '3pid verification'. ('3pid', or 'threepid',
stands for third-party identifier, and we use it to refer to external
identifiers such as email addresses and phone numbers.)< / p >
< p > Previous versions of Synapse delegated the task of 3pid verification to an
identity server by default. In most cases this server is < code > vector.im< / code > or
< code > matrix.org< / code > .< / p >
< p > In Synapse 1.4.0, for security and privacy reasons, the homeserver will no
longer delegate this task to an identity server by default. Instead,
the server administrator will need to explicitly decide how they would like the
verification messages to be sent.< / p >
< p > In the medium term, the < code > vector.im< / code > and < code > matrix.org< / code > identity servers will
disable support for delegated 3pid verification entirely. However, in order to
ease the transition, they will retain the capability for a limited
period. Delegated email verification will be disabled on Monday 2nd December
2019 (giving roughly 2 months' notice). Disabling delegated SMS verification
will follow some time after that once SMS verification support lands in
Synapse.< / p >
< p > Once delegated 3pid verification support has been disabled in the < code > vector.im< / code > and
< code > matrix.org< / code > identity servers, all Synapse versions that depend on those
instances will be unable to verify email and phone numbers through them. There
are no imminent plans to remove delegated 3pid verification from Sydent
generally. (Sydent is the identity server project that backs the < code > vector.im< / code > and
< code > matrix.org< / code > instances).< / p >
< h3 id = "email" > < a class = "header" href = "#email" > Email< / a > < / h3 >
< p > Following upgrade, to continue verifying email (e.g. as part of the
registration process), admins can either:< / p >
< ul >
< li > Configure Synapse to use an email server.< / li >
< li > Run or choose an identity server which allows delegated email verification
and delegate to it.< / li >
< / ul >
< h4 id = "configure-smtp-in-synapse" > < a class = "header" href = "#configure-smtp-in-synapse" > Configure SMTP in Synapse< / a > < / h4 >
< p > To configure an SMTP server for Synapse, modify the configuration section
headed < code > email< / code > , and be sure to have at least the < code > smtp_host< / code > , < code > smtp_port< / code >
and < code > notif_from< / code > fields filled out. You may also need to set
< code > smtp_user< / code > , < code > smtp_pass< / code > , and < code > require_transport_security< / code > .< / p >
< p > See the < a href = "docs/sample_config.yaml" > sample configuration file< / a > for more details
on these settings.< / p >
< h4 id = "delegate-email-to-an-identity-server" > < a class = "header" href = "#delegate-email-to-an-identity-server" > Delegate email to an identity server< / a > < / h4 >
< p > Some admins will wish to continue using email verification as part of the
registration process, but will not immediately have an appropriate SMTP server
at hand.< / p >
< p > To this end, we will continue to support email verification delegation via the
< code > vector.im< / code > and < code > matrix.org< / code > identity servers for two months. Support for
delegated email verification will be disabled on Monday 2nd December.< / p >
< p > The < code > account_threepid_delegates< / code > dictionary defines whether the homeserver
should delegate an external server (typically an < a href = "https://matrix.org/docs/spec/identity_service/r0.2.1" > identity server< / a > ) to handle sending
confirmation messages via email and SMS.< / p >
< p > So to delegate email verification, in < code > homeserver.yaml< / code > , set
< code > account_threepid_delegates.email< / code > to the base URL of an identity server. For
example:< / p >
< pre > < code class = "language-yaml" > account_threepid_delegates:
    email: https://example.com     # Delegate email sending to example.com
< / code > < / pre >
< p > Note that < code > account_threepid_delegates.email< / code > replaces the deprecated
< code > email.trust_identity_server_for_password_resets< / code > : if
< code > email.trust_identity_server_for_password_resets< / code > is set to < code > true< / code > , and
< code > account_threepid_delegates.email< / code > is not set, then the first entry in
< code > trusted_third_party_id_servers< / code > will be used as the
< code > account_threepid_delegate< / code > for email. This is to ensure compatibility with
existing Synapse installs that set up external server handling for these tasks
before v1.4.0. If < code > email.trust_identity_server_for_password_resets< / code > is
< code > true< / code > and no trusted identity server domains are configured, Synapse will
report an error and refuse to start.< / p >
< p > If < code > email.trust_identity_server_for_password_resets< / code > is < code > false< / code > or absent
and no < code > email< / code > delegate is configured in < code > account_threepid_delegates< / code > ,
then Synapse will send email verification messages itself, using the configured
SMTP server (see above).< / p >
< h3 id = "phone-numbers" > < a class = "header" href = "#phone-numbers" > Phone numbers< / a > < / h3 >
< p > Synapse does not support phone-number verification itself, so the only way to
maintain the ability for users to add phone numbers to their accounts will be
by continuing to delegate phone number verification to the < code > matrix.org< / code > and
< code > vector.im< / code > identity servers (or another identity server that supports SMS
sending).< / p >
< p > The < code > account_threepid_delegates< / code > dictionary defines whether the homeserver
should delegate an external server (typically an < a href = "https://matrix.org/docs/spec/identity_service/r0.2.1" > identity server< / a > ) to handle sending
confirmation messages via email and SMS.< / p >
< p > So to delegate phone number verification, in < code > homeserver.yaml< / code > , set
< code > account_threepid_delegates.msisdn< / code > to the base URL of an identity
server. For example:< / p >
< pre > < code class = "language-yaml" > account_threepid_delegates:
    msisdn: https://example.com     # Delegate SMS sending to example.com
< / code > < / pre >
< p > The < code > matrix.org< / code > and < code > vector.im< / code > identity servers will continue to support
delegated phone number verification via SMS until such time as it is possible
for admins to configure their servers to perform phone number verification
directly. More details will follow in a future release.< / p >
< h2 id = "rolling-back-to-v131" > < a class = "header" href = "#rolling-back-to-v131" > Rolling back to v1.3.1< / a > < / h2 >
< p > If you encounter problems with v1.4.0, it should be possible to roll back to
v1.3.1, subject to the following:< / p >
< ul >
< li >
< p > The 'room statistics' engine was heavily reworked in this release (see
< a href = "https://github.com/matrix-org/synapse/pull/5971" > #5971< / a > ), including
significant changes to the database schema, which are not easily
reverted. This will cause the room statistics engine to stop updating when
you downgrade.< / p >
< p > The room statistics are essentially unused in v1.3.1 (in future versions of
Synapse, they will be used to populate the room directory), so there should
be no loss of functionality. However, the statistics engine will write errors
to the logs, which can be avoided by setting the following in
< code > homeserver.yaml< / code > :< / p >
< pre > < code class = "language-yaml" > stats:
  enabled: false
< / code > < / pre >
< p > Don't forget to re-enable it when you upgrade again, in preparation for its
use in the room directory!< / p >
< / li >
< / ul >
< h1 id = "upgrading-to-v120" > < a class = "header" href = "#upgrading-to-v120" > Upgrading to v1.2.0< / a > < / h1 >
< p > Some counter metrics have been renamed, with the old names deprecated. See
< a href = "docs/metrics-howto.md#renaming-of-metrics--deprecation-of-old-names-in-12" > the metrics documentation< / a >
for details.< / p >
< h1 id = "upgrading-to-v110" > < a class = "header" href = "#upgrading-to-v110" > Upgrading to v1.1.0< / a > < / h1 >
< p > Synapse v1.1.0 removes support for older Python and PostgreSQL versions, as
outlined in < a href = "https://matrix.org/blog/2019/04/08/synapse-deprecating-postgres-9-4-and-python-2-x" > our deprecation notice< / a > .< / p >
< h2 id = "minimum-python-version" > < a class = "header" href = "#minimum-python-version" > Minimum Python Version< / a > < / h2 >
< p > Synapse v1.1.0 has a minimum Python requirement of Python 3.5. Python 3.6 or
Python 3.7 are recommended as they have improved internal string handling,
significantly reducing memory usage.< / p >
< p > If you use current versions of the Matrix.org-distributed Debian packages or
Docker images, action is not required.< / p >
< p > If you install Synapse in a Python virtual environment, please see " Upgrading to
v0.34.0" for notes on setting up a new virtualenv under Python 3.< / p >
< h2 id = "minimum-postgresql-version" > < a class = "header" href = "#minimum-postgresql-version" > Minimum PostgreSQL Version< / a > < / h2 >
< p > If using PostgreSQL under Synapse, you will need to use PostgreSQL 9.5 or above.
Please see the
< a href = "https://www.postgresql.org/docs/11/upgrading.html" > PostgreSQL documentation< / a >
for more details on upgrading your database.< / p >
< h1 id = "upgrading-to-v10" > < a class = "header" href = "#upgrading-to-v10" > Upgrading to v1.0< / a > < / h1 >
< h2 id = "validation-of-tls-certificates" > < a class = "header" href = "#validation-of-tls-certificates" > Validation of TLS certificates< / a > < / h2 >
< p > Synapse v1.0 is the first release to enforce
validation of TLS certificates for the federation API. It is therefore
essential that your certificates are correctly configured. See the < a href = "docs/MSC1711_certificates_FAQ.md" > FAQ< / a > for more information.< / p >
< p > Note, v1.0 installations will also no longer be able to federate with servers
that have not correctly configured their certificates.< / p >
< p > In rare cases, it may be desirable to disable certificate checking: for
example, it might be essential to be able to federate with a given legacy
server in a closed federation. This can be done in one of two ways:< / p >
< ul >
< li > Configure the global switch < code > federation_verify_certificates< / code > to < code > false< / code > .< / li >
< li > Configure a whitelist of server domains to trust via < code > federation_certificate_verification_whitelist< / code > .< / li >
< / ul >
< p > See the < a href = "docs/sample_config.yaml" > sample configuration file< / a >
for more details on these settings.< / p >
< h2 id = "email-1" > < a class = "header" href = "#email-1" > Email< / a > < / h2 >
< p > When a user requests a password reset, Synapse will send an email to the
user to confirm the request.< / p >
< p > Previous versions of Synapse delegated the job of sending this email to an
identity server. If the identity server was somehow malicious or became
compromised, it would be theoretically possible to hijack an account through
this means.< / p >
< p > Therefore, by default, Synapse v1.0 will send the confirmation email itself. If
Synapse is not configured with an SMTP server, password reset via email will be
disabled.< / p >
< p > To configure an SMTP server for Synapse, modify the configuration section
headed < code > email< / code > , and be sure to have at least the < code > smtp_host< / code > , < code > smtp_port< / code >
and < code > notif_from< / code > fields filled out. You may also need to set < code > smtp_user< / code > ,
< code > smtp_pass< / code > , and < code > require_transport_security< / code > .< / p >
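Putting those fields together, a minimal < code > email< / code > section might look like the following. All hostnames and credentials here are placeholders for illustration, not defaults:

```yaml
# Illustrative values only; substitute your own mail server details.
email:
  smtp_host: mail.example.com
  smtp_port: 587
  smtp_user: "synapse"
  smtp_pass: "secret"
  require_transport_security: true
  notif_from: "Your homeserver <noreply@example.com>"
```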
< p > If you are absolutely certain that you wish to continue using an identity
server for password resets, set < code > trust_identity_server_for_password_resets< / code > to < code > true< / code > .< / p >
< p > See the < a href = "docs/sample_config.yaml" > sample configuration file< / a >
for more details on these settings.< / p >
< h2 id = "new-email-templates" > < a class = "header" href = "#new-email-templates" > New email templates< / a > < / h2 >
< p > Some new templates have been added to the default template directory for the purpose of the
homeserver sending its own password reset emails. If you have configured a custom
< code > template_dir< / code > in your Synapse config, these files will need to be added.< / p >
< p > < code > password_reset.html< / code > and < code > password_reset.txt< / code > are HTML and plain text templates
respectively that contain the contents of what will be emailed to the user upon attempting to
reset their password via email. < code > password_reset_success.html< / code > and
< code > password_reset_failure.html< / code > are HTML files whose content (assuming no redirect
URL is set) will be shown to the user after they click the link in the email sent
to them.< / p >
< h1 id = "upgrading-to-v0990" > < a class = "header" href = "#upgrading-to-v0990" > Upgrading to v0.99.0< / a > < / h1 >
< p > Please be aware that, before Synapse v1.0 is released around March 2019, you
will need to replace any self-signed certificates with those verified by a
root CA. Information on how to do so can be found at < a href = "docs/ACME.md" > the ACME docs< / a > .< / p >
< p > For more information on configuring TLS certificates see the < a href = "docs/MSC1711_certificates_FAQ.md" > FAQ< / a > .< / p >
< h1 id = "upgrading-to-v0340" > < a class = "header" href = "#upgrading-to-v0340" > Upgrading to v0.34.0< / a > < / h1 >
< ol >
< li >
< p > This release is the first to fully support Python 3. Synapse will now run on
Python versions 3.5 or 3.6 (as well as 2.7). We recommend switching to
Python 3, as it has been shown to give performance improvements.< / p >
< p > For users who have installed Synapse into a virtualenv, we recommend doing
this by creating a new virtualenv. For example:< / p >
< pre > < code > virtualenv -p python3 ~/synapse/env3
source ~/synapse/env3/bin/activate
pip install matrix-synapse
< / code > < / pre >
< p > You can then start synapse as normal, having activated the new virtualenv:< / p >
< pre > < code > cd ~/synapse
source env3/bin/activate
synctl start
< / code > < / pre >
< p > Users who have installed from distribution packages should see the relevant
package documentation. See below for notes on Debian packages.< / p >
< ul >
< li >
< p > When upgrading to Python 3, you < strong > must< / strong > make sure that your log files are
configured as UTF-8, by adding < code > encoding: utf8< / code > to the
< code > RotatingFileHandler< / code > configuration (if you have one) in your
< code > < server> .log.config< / code > file. For example, if your < code > log.config< / code > file
contains:< / p >
< pre > < code class = "language-yaml" > handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: homeserver.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]
< / code > < / pre >
< p > Then you should update this to be:< / p >
< pre > < code class = "language-yaml" > handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: homeserver.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
    encoding: utf8
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]
< / code > < / pre >
< p > There is no need to revert this change if downgrading to Python 2.< / p >
< / li >
< / ul >
< p > We are also making available Debian packages which will run Synapse on
Python 3. You can switch to these packages with < code > apt-get install matrix-synapse-py3< / code > ; however, please read < a href = "https://github.com/matrix-org/synapse/blob/release-v0.34.0/debian/NEWS" > debian/NEWS< / a >
before doing so. The existing < code > matrix-synapse< / code > packages will continue to
use Python 2 for the time being.< / p >
< / li >
< li >
< p > This release removes < code > riot.im< / code > from the default list of trusted
identity servers.< / p >
< p > If < code > riot.im< / code > is in your homeserver's list of
< code > trusted_third_party_id_servers< / code > , you should remove it. It was added in
case a hypothetical future identity server was put there. If you don't
remove it, users may be unable to deactivate their accounts.< / p >
< / li >
< li >
< p > This release no longer installs the (unmaintained) Matrix Console web client
as part of the default installation. It is possible to re-enable it by
installing it separately and setting the < code > web_client_location< / code > config
option, but please consider switching to another client.< / p >
< / li >
< / ol >
< h1 id = "upgrading-to-v0337" > < a class = "header" href = "#upgrading-to-v0337" > Upgrading to v0.33.7< / a > < / h1 >
< p > This release removes the example email notification templates from
< code > res/templates< / code > (they are now internal to the python package). This should
only affect you if you (a) deploy your Synapse instance from a git checkout or
a github snapshot URL, and (b) have email notifications enabled.< / p >
< p > If you have email notifications enabled, you should ensure that
< code > email.template_dir< / code > is either configured to point at a directory where you
have installed customised templates, or leave it unset to use the default
templates.< / p >
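As an illustration, pointing Synapse at a directory of customised templates looks like this (the path is hypothetical):

```yaml
email:
  # Hypothetical path; Synapse will look here for password_reset.html etc.
  template_dir: /etc/synapse/custom_templates
```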
< h1 id = "upgrading-to-v0273" > < a class = "header" href = "#upgrading-to-v0273" > Upgrading to v0.27.3< / a > < / h1 >
< p > This release expands the anonymous usage stats sent if the opt-in
< code > report_stats< / code > configuration is set to < code > true< / code > . We now capture RSS memory
and CPU use at a very coarse level. This requires administrators to install
the optional < code > psutil< / code > python module.< / p >
< p > We would appreciate it if you could assist by ensuring this module is available
and < code > report_stats< / code > is enabled. This will let us see if performance changes to
Synapse are having an impact on the general community.< / p >
< h1 id = "upgrading-to-v0150" > < a class = "header" href = "#upgrading-to-v0150" > Upgrading to v0.15.0< / a > < / h1 >
< p > If you want to use the new URL previewing API (/_matrix/media/r0/preview_url)
then you have to explicitly enable it in the config and update your
dependencies. See README.rst for details.< / p >
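As a sketch, enabling the previewer involves a fragment like the following; check README.rst for the authoritative option names, and note that the IP blacklist is there to stop the previewer fetching internal addresses:

```yaml
# Illustrative fragment for enabling URL previews.
url_preview_enabled: true
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
```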
< h1 id = "upgrading-to-v0110" > < a class = "header" href = "#upgrading-to-v0110" > Upgrading to v0.11.0< / a > < / h1 >
< p > This release includes the option to send anonymous usage stats to matrix.org,
and requires that administrators explicitly opt in or out by setting the
< code > report_stats< / code > option to either < code > true< / code > or < code > false< / code > .< / p >
< p > We would really appreciate it if you could help our project out by reporting
anonymized usage statistics from your homeserver. Only very basic aggregate
data (e.g. number of users) will be reported, but it helps us to track the
growth of the Matrix community, and helps us to make Matrix a success, as well
as to convince other networks that they should peer with us.< / p >
< h1 id = "upgrading-to-v090" > < a class = "header" href = "#upgrading-to-v090" > Upgrading to v0.9.0< / a > < / h1 >
< p > Application services have had a breaking API change in this version.< / p >
< p > They can no longer register themselves with a home server using the AS HTTP API. This
decision was made because a compromised application service with free rein to register
any regex in effect grants full read/write access to the home server if a regex of < code > .*< / code >
is used. An attack where a compromised AS re-registers itself with < code > .*< / code > was deemed too
big of a security risk to ignore, and so the ability to register with the HS remotely has
been removed.< / p >
< p > It has been replaced by specifying a list of application service registrations in
< code > homeserver.yaml< / code > :< / p >
< pre > < code class = "language-yaml" > app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
< / code > < / pre >
< p > Where < code > registration-01.yaml< / code > looks like:< / p >
< pre > < code class = "language-yaml" > url: < String >   # e.g. "https://my.application.service.com"
as_token: < String >
hs_token: < String >
sender_localpart: < String >   # This is a new field which denotes the user_id localpart when using the AS token
namespaces:
  users:
    - exclusive: < Boolean >
      regex: < String >   # e.g. "@prefix_.*"
  aliases:
    - exclusive: < Boolean >
      regex: < String >
  rooms:
    - exclusive: < Boolean >
      regex: < String >
< / code > < / pre >
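A filled-in registration file following that schema might look like this; the URL, tokens, and names here are placeholders for illustration:

```yaml
# Hypothetical registration; generate long random strings for the tokens.
url: "https://my.application.service.com"
as_token: "REPLACE_WITH_LONG_RANDOM_STRING"
hs_token: "REPLACE_WITH_ANOTHER_LONG_RANDOM_STRING"
sender_localpart: "my_bridge"
namespaces:
  users:
    - exclusive: true
      regex: "@prefix_.*"
  aliases: []
  rooms: []
```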
< h1 id = "upgrading-to-v080" > < a class = "header" href = "#upgrading-to-v080" > Upgrading to v0.8.0< / a > < / h1 >
< p > Servers which use captchas will need to add their public key to
< code > static/client/register/register_config.js< / code > :< / p >
< pre > < code > window.matrixRegistrationConfig = {
recaptcha_public_key: " YOUR_PUBLIC_KEY"
};
< / code > < / pre >
< p > This is required in order to support registration fallback (typically used on
mobile devices).< / p >
< h1 id = "upgrading-to-v070" > < a class = "header" href = "#upgrading-to-v070" > Upgrading to v0.7.0< / a > < / h1 >
< p > New dependencies are:< / p >
< ul >
< li > pydenticon< / li >
< li > simplejson< / li >
< li > syutil< / li >
< li > matrix-angular-sdk< / li >
< / ul >
< p > To pull in these dependencies in a virtual env, run:< / p >
< pre > < code > python synapse/python_dependencies.py | xargs -n 1 pip install
< / code > < / pre >
< h1 id = "upgrading-to-v060" > < a class = "header" href = "#upgrading-to-v060" > Upgrading to v0.6.0< / a > < / h1 >
< p > To pull in new dependencies, run:< / p >
< pre > < code > python setup.py develop --user
< / code > < / pre >
< p > This update includes a change to the database schema. To upgrade you first need
to upgrade the database by running:< / p >
< pre > < code > python scripts/upgrade_db_to_v0.6.0.py < db> < server_name> < signing_key>
< / code > < / pre >
< p > Where < code > < db> < / code > is the location of the database, < code > < server_name> < / code > is the
server name as specified in the synapse configuration, and < code > < signing_key> < / code > is
the location of the signing key as specified in the synapse configuration.< / p >
< p > This may take some time to complete. Failures of signatures and content hashes
can safely be ignored.< / p >
< h1 id = "upgrading-to-v051" > < a class = "header" href = "#upgrading-to-v051" > Upgrading to v0.5.1< / a > < / h1 >
< p > Depending on precisely when you installed v0.5.0 you may have ended up with
a stale release of the reference matrix webclient installed as a python module.
To uninstall it and ensure you are depending on the latest module, please run:< / p >
< pre > < code > $ pip uninstall syweb
< / code > < / pre >
< h1 id = "upgrading-to-v050" > < a class = "header" href = "#upgrading-to-v050" > Upgrading to v0.5.0< / a > < / h1 >
< p > The webclient has been split out into a separate repository/package in this
release. Before you restart your homeserver you will need to pull in the
webclient package by running:< / p >
< pre > < code > python setup.py develop --user
< / code > < / pre >
< p > This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.< / p >
< p > The script "database-prepare-for-0.5.0.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, the
rooms the home server was a member of, and room alias mappings.< / p >
< p > If you would like to keep your history, please take a copy of your database
file and ask for help in #matrix:matrix.org. The upgrade process is,
unfortunately, non-trivial and requires human intervention to resolve any
resulting conflicts during the upgrade process.< / p >
< p > Before running the command, the homeserver should first be completely
shut down. To run the script, simply specify the location of the database, e.g.:< / p >
< pre > < code > ./scripts/database-prepare-for-0.5.0.sh "homeserver.db"
< / code > < / pre >
< p > Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.< / p >
< p > On startup of the new version, users can rejoin remote rooms either via room
aliases or by being re-invited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in, the local HS will
automatically rejoin the room.< / p >
< h1 id = "upgrading-to-v040" > < a class = "header" href = "#upgrading-to-v040" > Upgrading to v0.4.0< / a > < / h1 >
< p > This release needs an updated syutil version. Run:< / p >
< pre > < code > python setup.py develop
< / code > < / pre >
< p > You will also need to upgrade your configuration as the signing key format has
changed. Run:< / p >
< pre > < code > python -m synapse.app.homeserver --config-path < CONFIG> --generate-config
< / code > < / pre >
< h1 id = "upgrading-to-v030" > < a class = "header" href = "#upgrading-to-v030" > Upgrading to v0.3.0< / a > < / h1 >
< p > The registration API now closely matches the login API. This introduces a bit
more back-and-forth between the HS and the client, but this improves
the overall flexibility of the API. You can now GET on /register to retrieve a list
of valid registration flows. Upon choosing one, they are submitted in the same
way as login, e.g.:< / p >
< pre > < code class = "language-json" > {
  "type": "m.login.password",
  "user": "foo",
  "password": "bar"
}
< / code > < / pre >
< p > The default HS supports 2 flows, with and without Identity Server email
authentication. Enabling captcha on the HS will add in an extra step to all
flows: < code > m.login.recaptcha< / code > which must be completed before you can transition
to the next stage. There is a new login type, < code > m.login.email.identity< / code > , which
contains the < code > threepidCreds< / code > key that was previously sent in the original
register request. For more information on this, see the specification.< / p >
< h2 id = "web-client" > < a class = "header" href = "#web-client" > Web Client< / a > < / h2 >
< p > The VoIP specification has changed between v0.2.0 and v0.3.0. Users should
refresh any browser tabs to get the latest web client code. Users on
v0.2.0 of the web client will not be able to call those on v0.3.0 and
vice versa.< / p >
< h1 id = "upgrading-to-v020" > < a class = "header" href = "#upgrading-to-v020" > Upgrading to v0.2.0< / a > < / h1 >
< p > The home server now requires setting up of SSL config before it can run. To
automatically generate default config use:< / p >
< pre > < code > $ python synapse/app/homeserver.py \
--server-name machine.my.domain.name \
--bind-port 8448 \
--config-path homeserver.config \
--generate-config
< / code > < / pre >
< p > This config can be edited if desired, for example to specify a different SSL
certificate to use. Once done you can run the home server using:< / p >
< pre > < code > $ python synapse/app/homeserver.py --config-path homeserver.config
< / code > < / pre >
< p > See the README.rst for more information.< / p >
< p > Also note that some config options have been renamed, including:< / p >
< ul >
< li > " host" to " server-name" < / li >
< li > " database" to " database-path" < / li >
< li > " port" to " bind-port" and " unsecure-port" < / li >
< / ul >
< h1 id = "upgrading-to-v001" > < a class = "header" href = "#upgrading-to-v001" > Upgrading to v0.0.1< / a > < / h1 >
< p > This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.< / p >
< p > The script " database-prepare-for-0.0.1.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.< / p >
< p > Before running the command, the homeserver should first be completely
shut down. To run the script, simply specify the location of the database, e.g.:< / p >
< pre > < code > ./scripts/database-prepare-for-0.0.1.sh "homeserver.db"
< / code > < / pre >
< p > Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.< / p >
< p > On startup of the new version, users can rejoin remote rooms either via room
aliases or by being re-invited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in, the local HS will
automatically rejoin the room.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "msc1711-certificates-faq" > < a class = "header" href = "#msc1711-certificates-faq" > MSC1711 Certificates FAQ< / a > < / h1 >
< h2 id = "historical-note" > < a class = "header" href = "#historical-note" > Historical Note< / a > < / h2 >
< p > This document was originally written to guide server admins through the upgrade
path towards Synapse 1.0. Specifically,
< a href = "https://github.com/matrix-org/matrix-doc/blob/master/proposals/1711-x509-for-federation.md" > MSC1711< / a >
required that all servers present valid TLS certificates on their federation
API. Admins were encouraged to achieve compliance from version 0.99.0 (released
in February 2019) ahead of version 1.0 (released June 2019) enforcing the
certificate checks.< / p >
< p > Much of what follows is now outdated since most admins will have already
upgraded, however it may be of use to those with old installs returning to the
project.< / p >
< p > If you are setting up a server from scratch you almost certainly should look at
the < a href = "../INSTALL.html" > installation guide< / a > instead.< / p >
< h2 id = "introduction-1" > < a class = "header" href = "#introduction-1" > Introduction< / a > < / h2 >
< p > The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It
supports the r0.1 release of the server to server specification, but is
compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well
as post-r0.1 behaviour, in order to allow for a smooth upgrade across the
federation.< / p >
< p > The most important thing to know is that Synapse 1.0.0 will require a valid TLS
certificate on federation endpoints. Self signed certificates will not be
sufficient.< / p >
< p > Synapse 0.99.0 makes it easy to configure TLS certificates and will
interoperate both with > = 1.0.0 servers and with existing servers yet to
upgrade.< / p >
< p > < strong > It is critical that all admins upgrade to 0.99.0 and configure a valid TLS
certificate.< / strong > Admins will have 1 month to do so, after which 1.0.0 will be
released and those servers without a valid certificate will no longer be able
to federate with > = 1.0.0 servers.< / p >
< p > Full details on how to carry out this configuration change are given
< a href = "MSC1711_certificates_FAQ.html#configuring-certificates-for-compatibility-with-synapse-100" > below< / a > . A
timeline and some frequently asked questions are also given below.< / p >
< p > For more details and context on the release of the r0.1 Server/Server API and
imminent Matrix 1.0 release, you can also see our
< a href = "https://matrix.org/blog/2019/02/04/matrix-at-fosdem-2019/" > main talk from FOSDEM 2019< / a > .< / p >
< h2 id = "contents" > < a class = "header" href = "#contents" > Contents< / a > < / h2 >
< ul >
< li > Timeline< / li >
< li > Configuring certificates for compatibility with Synapse 1.0< / li >
< li > FAQ
< ul >
< li > Synapse 0.99.0 has just been released, what do I need to do right now?< / li >
< li > How do I upgrade?< / li >
< li > What will happen if I do not set up a valid federation certificate
immediately?< / li >
< li > What will happen if I do nothing at all?< / li >
< li > When do I need a SRV record or .well-known URI?< / li >
< li > Can I still use an SRV record?< / li >
< li > I have created a .well-known URI. Do I still need an SRV record?< / li >
< li > It used to work just fine, why are you breaking everything?< / li >
< li > Can I manage my own certificates rather than having Synapse renew
certificates itself?< / li >
< li > Do you still recommend against using a reverse proxy on the federation port?< / li >
< li > Do I still need to give my TLS certificates to Synapse if I am using a
reverse proxy?< / li >
< li > Do I need the same certificate for the client and federation port?< / li >
< li > How do I tell Synapse to reload my keys/certificates after I replace them?< / li >
< / ul >
< / li >
< / ul >
< h2 id = "timeline" > < a class = "header" href = "#timeline" > Timeline< / a > < / h2 >
< p > < strong > 5th Feb 2019 - Synapse 0.99.0 is released.< / strong > < / p >
< p > All server admins are encouraged to upgrade.< / p >
< p > 0.99.0:< / p >
< ul >
< li >
< p > provides support for ACME to make setting up Let's Encrypt certs easy, as
well as .well-known support.< / p >
< / li >
< li >
< p > does not enforce that a valid CA cert is present on the federation API, but
rather makes it easy to set one up.< / p >
< / li >
< li >
< p > provides support for .well-known< / p >
< / li >
< / ul >
< p > Admins should upgrade and configure a valid CA cert. Homeservers that require a
.well-known entry (see below) should retain their SRV record and use it
alongside their .well-known record.< / p >
< p > < strong > 10th June 2019 - Synapse 1.0.0 is released< / strong > < / p >
< p > 1.0.0 is scheduled for release on 10th June. In
accordance with the < a href = "https://matrix.org/docs/spec/server_server/r0.1.0.html" > S2S spec< / a >
1.0.0 will enforce certificate validity. This means that any homeserver without a
valid certificate after this point will no longer be able to federate with
1.0.0 servers.< / p >
< h2 id = "configuring-certificates-for-compatibility-with-synapse-100" > < a class = "header" href = "#configuring-certificates-for-compatibility-with-synapse-100" > Configuring certificates for compatibility with Synapse 1.0.0< / a > < / h2 >
< h3 id = "if-you-do-not-currently-have-an-srv-record" > < a class = "header" href = "#if-you-do-not-currently-have-an-srv-record" > If you do not currently have an SRV record< / a > < / h3 >
< p > In this case, your < code > server_name< / code > points to the host where your Synapse is
running. There is no need to create a < code > .well-known< / code > URI or an SRV record, but
you will need to give Synapse a valid, signed, certificate.< / p >
< p > The easiest way to do that is with Synapse's built-in ACME (Let's Encrypt)
support. Full details are in < a href = "./ACME.html" > ACME.md< / a > but, in a nutshell:< / p >
< ol >
< li > Allow Synapse to listen on port 80 with < code > authbind< / code > , or forward it from a
reverse proxy.< / li >
< li > Enable acme support in < code > homeserver.yaml< / code > .< / li >
< li > Move your old certificates out of the way.< / li >
< li > Restart Synapse.< / li >
< / ol >
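The ACME-related part of < code > homeserver.yaml< / code > from step 2 looked roughly like the following in the 0.99.x era; treat this as a sketch and see ACME.md for the authoritative options:

```yaml
# Illustrative 0.99.x-era fragment; see ACME.md for the full option set.
acme:
  enabled: true
  port: 8009   # internal port; forward the public port 80 to it
```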
< h3 id = "if-you-do-have-an-srv-record-currently" > < a class = "header" href = "#if-you-do-have-an-srv-record-currently" > If you do have an SRV record currently< / a > < / h3 >
< p > If you are using an SRV record, your matrix domain (< code > server_name< / code > ) may not
point to the same host that your Synapse is running on (the 'target
domain'). (If it does, you can follow the recommendation above; otherwise, read
on.)< / p >
< p > Let's assume that your < code > server_name< / code > is < code > example.com< / code > , and your Synapse is
hosted at a target domain of < code > customer.example.net< / code > . Currently you should have
an SRV record which looks like:< / p >
< pre > < code > _matrix._tcp.example.com. IN SRV 10 5 8000 customer.example.net.
< / code > < / pre >
< p > In this situation, you have three choices for how to proceed:< / p >
< h4 id = "option-1-give-synapse-a-certificate-for-your-matrix-domain" > < a class = "header" href = "#option-1-give-synapse-a-certificate-for-your-matrix-domain" > Option 1: give Synapse a certificate for your matrix domain< / a > < / h4 >
< p > Synapse 1.0 will expect your server to present a TLS certificate for your
< code > server_name< / code > (< code > example.com< / code > in the above example). You can achieve this by
doing one of the following:< / p >
< ul >
< li >
< p > Acquire a certificate for the < code > server_name< / code > yourself (for example, using
< code > certbot< / code > ), and give it and the key to Synapse via < code > tls_certificate_path< / code >
and < code > tls_private_key_path< / code > , or:< / p >
< / li >
< li >
< p > Use Synapse's < a href = "./ACME.html" > ACME support< / a > , and forward port 80 on the
< code > server_name< / code > domain to your Synapse instance.< / p >
< / li >
< / ul >
< h4 id = "option-2-run-synapse-behind-a-reverse-proxy" > < a class = "header" href = "#option-2-run-synapse-behind-a-reverse-proxy" > Option 2: run Synapse behind a reverse proxy< / a > < / h4 >
< p > If you have an existing reverse proxy set up with correct TLS certificates for
your domain, you can simply route all traffic through the reverse proxy by
updating the SRV record appropriately (or removing it, if the proxy listens on
8448).< / p >
< p > See < a href = "reverse_proxy.html" > reverse_proxy.md< / a > for information on setting up a
reverse proxy.< / p >
< h4 id = "option-3-add-a-well-known-file-to-delegate-your-matrix-traffic" > < a class = "header" href = "#option-3-add-a-well-known-file-to-delegate-your-matrix-traffic" > Option 3: add a .well-known file to delegate your matrix traffic< / a > < / h4 >
< p > This will allow you to keep Synapse on a separate domain, without having to
give it a certificate for the matrix domain.< / p >
< p > You can do this with a < code > .well-known< / code > file as follows:< / p >
< ol >
< li >
< p > Keep the SRV record in place - it is needed for backwards compatibility
with Synapse 0.34 and earlier.< / p >
< / li >
< li >
< p > Give Synapse a certificate corresponding to the target domain
(< code > customer.example.net< / code > in the above example). You can either use Synapse's
built-in < a href = "./ACME.html" > ACME support< / a > for this (via the < code > domain< / code > parameter in
the < code > acme< / code > section), or acquire a certificate yourself and give it to
Synapse via < code > tls_certificate_path< / code > and < code > tls_private_key_path< / code > .< / p >
< / li >
< li >
< p > Restart Synapse to ensure the new certificate is loaded.< / p >
< / li >
< li >
< p > Arrange for a < code > .well-known< / code > file at
< code > https://<server_name>/.well-known/matrix/server< / code > with contents:< / p >
< pre > < code class = "language-json" > {"m.server": "<target server name>"}
< / code > < / pre >
< p > where the target server name is resolved as usual (i.e. SRV lookup, falling
back to talking to port 8448).< / p >
< p > In the above example, where synapse is listening on port 8000,
< code > https://example.com/.well-known/matrix/server< / code > should have < code > m.server< / code > set to one of:< / p >
< ol >
< li >
< p > < code > customer.example.net< / code > ─ with an SRV record on
< code > _matrix._tcp.customer.example.net< / code > pointing to port 8000, or:< / p >
< / li >
< li >
< p > < code > customer.example.net< / code > ─ updating synapse to listen on the default port
8448, or:< / p >
< / li >
< li >
< p > < code > customer.example.net:8000< / code > ─ ensuring that if there is a reverse proxy
on < code > customer.example.net:8000< / code > it correctly handles HTTP requests with
Host header set to < code > customer.example.net:8000< / code > .< / p >
< / li >
< / ol >
< / li >
< / ol >
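< p > To sanity-check the file you serve, a short script can validate the JSON body and split out the delegated host and port. This is a hypothetical helper (stdlib only, and it ignores IPv6 literal edge cases), not part of Synapse:< / p >

```python
import json

def parse_well_known(body: str):
    """Parse a /.well-known/matrix/server body and return (host, port).

    port is None when the delegated name carries no explicit port, in
    which case the usual resolution (SRV lookup, then port 8448) applies.
    """
    data = json.loads(body)
    target = data["m.server"]
    if ":" in target:
        host, _, port = target.rpartition(":")
        return host, int(port)
    return target, None

# The delegated-with-port form from the example above:
print(parse_well_known('{"m.server": "customer.example.net:8000"}'))
# → ('customer.example.net', 8000)
```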
< h2 id = "faq" > < a class = "header" href = "#faq" > FAQ< / a > < / h2 >
< h3 id = "synapse-0990-has-just-been-released-what-do-i-need-to-do-right-now" > < a class = "header" href = "#synapse-0990-has-just-been-released-what-do-i-need-to-do-right-now" > Synapse 0.99.0 has just been released, what do I need to do right now?< / a > < / h3 >
< p > Upgrade as soon as you can in preparation for Synapse 1.0.0, and update your
TLS certificates as < a href = "MSC1711_certificates_FAQ.html#configuring-certificates-for-compatibility-with-synapse-100" > above< / a > .< / p >
< h3 id = "what-will-happen-if-i-do-not-set-up-a-valid-federation-certificate-immediately" > < a class = "header" href = "#what-will-happen-if-i-do-not-set-up-a-valid-federation-certificate-immediately" > What will happen if I do not set up a valid federation certificate immediately?< / a > < / h3 >
< p > Nothing initially, but once 1.0.0 is in the wild your server will no longer be
able to federate with 1.0.0 servers.< / p >
< h3 id = "what-will-happen-if-i-do-nothing-at-all" > < a class = "header" href = "#what-will-happen-if-i-do-nothing-at-all" > What will happen if I do nothing at all?< / a > < / h3 >
< p > If the admin takes no action at all, and remains on Synapse < 0.99.0, then the
homeserver will be unable to federate with those who have implemented
.well-known. Then, as above, once the one-month upgrade window has expired the
homeserver will not be able to federate with any Synapse >= 1.0.0.< / p >
< h3 id = "when-do-i-need-a-srv-record-or-well-known-uri" > < a class = "header" href = "#when-do-i-need-a-srv-record-or-well-known-uri" > When do I need an SRV record or .well-known URI?< / a > < / h3 >
< p > If your homeserver listens on the default federation port (8448), and your
< code > server_name< / code > points to the host that your homeserver runs on, you do not need an
SRV record or < code > .well-known/matrix/server< / code > URI.< / p >
< p > For instance, if you registered < code > example.com< / code > and pointed its DNS A record at a
fresh Upcloud VPS or similar, you could install Synapse 0.99 on that host,
giving it a < code > server_name< / code > of < code > example.com< / code > , and it would automatically obtain a
valid TLS certificate for you via Let's Encrypt, and no SRV record or
< code > .well-known< / code > URI would be needed.< / p >
< p > This is the common case, although you can add an SRV record or
< code > .well-known/matrix/server< / code > URI for completeness if you wish.< / p >
< p > < strong > However< / strong > , if your server does not listen on port 8448, or if your < code > server_name< / code >
does not point to the host that your homeserver runs on, you will need to let
other servers know how to find it.< / p >
< p > In this case, you should see < a href = "MSC1711_certificates_FAQ.html#if-you-do-have-an-srv-record-currently" > "If you do have an SRV record
currently"< / a > above.< / p >
< h3 id = "can-i-still-use-an-srv-record" > < a class = "header" href = "#can-i-still-use-an-srv-record" > Can I still use an SRV record?< / a > < / h3 >
< p > Firstly, if you didn't need an SRV record before (because your server is
listening on port 8448 of your server_name), you certainly don't need one now:
the defaults are still the same.< / p >
< p > If you previously had an SRV record, you can keep using it provided you are
able to give Synapse a TLS certificate corresponding to your server name. For
example, suppose you had the following SRV record, which directs matrix traffic
for example.com to matrix.example.com:443:< / p >
< pre > < code > _matrix._tcp.example.com. IN SRV 10 5 443 matrix.example.com
< / code > < / pre >
< p > In this case, Synapse must be given a certificate for example.com - or be
configured to acquire one from Let's Encrypt.< / p >
< p > If you are unable to give Synapse a certificate for your server_name, you will
also need to use a .well-known URI instead. However, see also "I have created a
.well-known URI. Do I still need an SRV record?".< / p >
< h3 id = "i-have-created-a-well-known-uri-do-i-still-need-an-srv-record" > < a class = "header" href = "#i-have-created-a-well-known-uri-do-i-still-need-an-srv-record" > I have created a .well-known URI. Do I still need an SRV record?< / a > < / h3 >
< p > As of Synapse 0.99, Synapse will first check for the existence of a < code > .well-known< / code >
URI and follow any delegation it suggests. It will only then check for the
existence of an SRV record.< / p >
< p > That means that the SRV record will often be redundant. However, you should
remember that there may still be older versions of Synapse in the federation
which do not understand < code > .well-known< / code > URIs, so if you removed your SRV record you
would no longer be able to federate with them.< / p >
< p > It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
earlier will follow the SRV record (and not care about the invalid
certificate). Synapse 0.99 and later will follow the .well-known URI, with the
correct certificate chain.< / p >
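< p > The precedence described above (.well-known first, then SRV, then the default port 8448) can be sketched as follows. This is a deliberately simplified model of server discovery, not Synapse's actual code; < code > fetch_well_known< / code > and < code > lookup_srv< / code > are hypothetical stand-ins for the real network lookups, and details such as caching and IP literals are omitted:< / p >

```python
def resolve_federation_target(server_name, fetch_well_known, lookup_srv):
    """Sketch of the Synapse >= 0.99 server discovery order.

    fetch_well_known(name) -> delegated "host[:port]" string, or None
    lookup_srv(name)       -> (host, port) tuple, or None
    """
    # Step 1: a .well-known delegation, if present, replaces the server name.
    target = fetch_well_known(server_name) or server_name
    if ":" in target:
        # An explicit port in the delegated name short-circuits SRV lookup.
        host, _, port = target.rpartition(":")
        return host, int(port)
    # Step 2: SRV record on the (possibly delegated) name.
    srv = lookup_srv(target)
    if srv is not None:
        return srv
    # Step 3: fall back to the default federation port.
    return target, 8448
```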
< h3 id = "it-used-to-work-just-fine-why-are-you-breaking-everything" > < a class = "header" href = "#it-used-to-work-just-fine-why-are-you-breaking-everything" > It used to work just fine, why are you breaking everything?< / a > < / h3 >
< p > We have always wanted Matrix servers to be as easy to set up as possible, and
so back when we started federation in 2014 we didn't want admins to have to go
through the cumbersome process of buying a valid TLS certificate to run a
server. This was before Let's Encrypt came along and made getting a free and
valid TLS certificate straightforward. So instead, we adopted a system based on
< a href = "https://en.wikipedia.org/wiki/Convergence_(SSL)" > Perspectives< / a > : an approach
where you check a set of "notary servers" (in practice, homeservers) to vouch
for the validity of a certificate rather than having it signed by a CA. As long
as enough different notaries agree on the certificate's validity, then it is
trusted.< / p >
< p > However, in practice this has never worked properly. Most people only use the
default notary server (matrix.org), leading to inadvertent centralisation which
we want to eliminate. Meanwhile, we never implemented the full consensus
algorithm to query the servers participating in a room to determine consensus
on whether a given certificate is valid. This is fiddly to get right
(especially in face of sybil attacks), and we found ourselves questioning
whether it was worth the effort to finish the work and commit to maintaining a
secure certificate validation system as opposed to focusing on core Matrix
development.< / p >
< p > Meanwhile, Let's Encrypt came along in 2016, and put the final nail in the
coffin of the Perspectives project (which was already pretty dead). So, the
Spec Core Team decided that a better approach would be to mandate valid TLS
certificates for federation alongside the rest of the Web. More details can be
found in
< a href = "https://github.com/matrix-org/matrix-doc/blob/master/proposals/1711-x509-for-federation.md#background-the-failure-of-the-perspectives-approach" > MSC1711< / a > .< / p >
< p > This results in a breaking change, which is disruptive, but absolutely critical
for the security model. However, the existence of Let's Encrypt as a trivial
way to replace the old self-signed certificates with valid CA-signed ones helps
smooth things over massively, especially as Synapse can now automate Let's
Encrypt certificate generation if needed.< / p >
< h3 id = "can-i-manage-my-own-certificates-rather-than-having-synapse-renew-certificates-itself" > < a class = "header" href = "#can-i-manage-my-own-certificates-rather-than-having-synapse-renew-certificates-itself" > Can I manage my own certificates rather than having Synapse renew certificates itself?< / a > < / h3 >
< p > Yes, you are welcome to manage your certificates yourself. Synapse will only
attempt to obtain certificates from Let's Encrypt if you configure it to do
so. The only requirement is that a valid TLS certificate is present for the
federation endpoints.< / p >
< h3 id = "do-you-still-recommend-against-using-a-reverse-proxy-on-the-federation-port-1" > < a class = "header" href = "#do-you-still-recommend-against-using-a-reverse-proxy-on-the-federation-port-1" > Do you still recommend against using a reverse proxy on the federation port?< / a > < / h3 >
< p > We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.< / p >
< p > See < a href = "reverse_proxy.html" > reverse_proxy.md< / a > for information on setting up a
reverse proxy.< / p >
< h3 id = "do-i-still-need-to-give-my-tls-certificates-to-synapse-if-i-am-using-a-reverse-proxy-1" > < a class = "header" href = "#do-i-still-need-to-give-my-tls-certificates-to-synapse-if-i-am-using-a-reverse-proxy-1" > Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?< / a > < / h3 >
< p > Practically speaking, this is no longer necessary.< / p >
< p > If you are using a reverse proxy for all of your TLS traffic, then you can set
< code > no_tls: True< / code > . In that case, the only reason Synapse needs the certificate is
to populate a legacy 'tls_fingerprints' field in the federation API. This is
ignored by Synapse 0.99.0 and later, and the only time pre-0.99 Synapses will
check it is when attempting to fetch the server keys - and generally this is
delegated via < code > matrix.org< / code > , which is on 0.99.0.< / p >
< p > However, there is a bug in Synapse 0.99.0
< a href = "https://github.com/matrix-org/synapse/issues/4554" > 4554< / a > which prevents
Synapse from starting if you do not give it a TLS certificate. To work around
this, you can give it any TLS certificate at all. This will be fixed soon.< / p >
< h3 id = "do-i-need-the-same-certificate-for-the-client-and-federation-port-1" > < a class = "header" href = "#do-i-need-the-same-certificate-for-the-client-and-federation-port-1" > Do I need the same certificate for the client and federation port?< / a > < / h3 >
< p > No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy. However, Synapse will use the
same certificate on any ports where TLS is configured.< / p >
< h3 id = "how-do-i-tell-synapse-to-reload-my-keyscertificates-after-i-replace-them" > < a class = "header" href = "#how-do-i-tell-synapse-to-reload-my-keyscertificates-after-i-replace-them" > How do I tell Synapse to reload my keys/certificates after I replace them?< / a > < / h3 >
< p > Synapse will reload the keys and certificates when it receives a SIGHUP - for
example < code > kill -HUP $(cat homeserver.pid)< / code > . Alternatively, simply restart
Synapse, though this will result in downtime while it restarts.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "setting-up-federation" > < a class = "header" href = "#setting-up-federation" > Setting up federation< / a > < / h1 >
< p > Federation is the process by which users on different servers can participate
in the same room. For this to work, those other servers must be able to contact
yours to send messages.< / p >
< p > The < code > server_name< / code > configured in the Synapse configuration file (often
< code > homeserver.yaml< / code > ) defines how resources (users, rooms, etc.) will be
identified (eg: < code > @user:example.com< / code > , < code > #room:example.com< / code > ). By default,
it is also the domain that other servers will use to try to reach your
server (via port 8448). This is easy to set up and will work provided
you set the < code > server_name< / code > to match your machine's public DNS hostname.< / p >
< p > For this default configuration to work, you will need to listen for TLS
connections on port 8448. The preferred way to do that is by using a
reverse proxy: see < a href = "reverse_proxy.html" > reverse_proxy.md< / a > for instructions
on how to correctly set one up.< / p >
< p > In some cases you might not want to run Synapse on the machine that has
the < code > server_name< / code > as its public DNS hostname, or you might want federation
traffic to use a different port than 8448. For example, you might want to
have your user names look like < code > @user:example.com< / code > , but you want to run
Synapse on < code > synapse.example.com< / code > on port 443. This can be done using
delegation, which allows an admin to control where federation traffic should
be sent. See < a href = "delegate.html" > delegate.md< / a > for instructions on how to set this up.< / p >
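< p > For the example above, delegating via the < code > .well-known< / code > method would mean serving this document at < code > https://example.com/.well-known/matrix/server< / code > (a sketch; see < a href = "delegate.html" > delegate.md< / a > for the full details and the SRV alternative):< / p >

```json
{ "m.server": "synapse.example.com:443" }
```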
< p > Once federation has been configured, you should be able to join a room over
federation. A good place to start is < code > #synapse:matrix.org< / code > - a room for
Synapse admins.< / p >
< h2 id = "troubleshooting-2" > < a class = "header" href = "#troubleshooting-2" > Troubleshooting< / a > < / h2 >
< p > You can use the < a href = "https://matrix.org/federationtester" > federation tester< / a >
to check if your homeserver is configured correctly. Alternatively try the
< a href = "https://matrix.org/federationtester/api/report?server_name=DOMAIN" > JSON API used by the federation tester< / a > .
Note that you'll have to modify this URL to replace < code > DOMAIN< / code > with your
< code > server_name< / code > . Hitting the API directly provides extra detail.< / p >
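< p > If you are scripting checks, the tester's API URL can be built from your < code > server_name< / code > like so (stdlib only; < code > example.com< / code > is a placeholder):< / p >

```python
from urllib.parse import quote

def federation_report_url(server_name: str) -> str:
    # Substitute the server_name into the federation tester's JSON API URL,
    # percent-encoding it in case it carries an explicit port.
    return ("https://matrix.org/federationtester/api/report?server_name="
            + quote(server_name, safe=""))

print(federation_report_url("example.com"))
# → https://matrix.org/federationtester/api/report?server_name=example.com
```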
< p > The typical failure mode for federation is that when the server tries to join
a room, it is rejected with "401: Unauthorized". Generally this means that other
servers in the room could not access yours. (Joining a room over federation is
a complicated dance which requires connections in both directions).< / p >
< p > Another common problem is that people on other servers can't join rooms that
you invite them to. This can be caused by an incorrectly-configured reverse
proxy: see < a href = "reverse_proxy.html" > reverse_proxy.md< / a > for instructions on how to correctly
configure a reverse proxy.< / p >
< h3 id = "known-issues" > < a class = "header" href = "#known-issues" > Known issues< / a > < / h3 >
< p > < strong > HTTP < code > 308 Permanent Redirect< / code > redirects are not followed< / strong > : Due to missing features
in the HTTP library used by Synapse, 308 redirects are currently not followed by
federating servers, which can cause < code > M_UNKNOWN< / code > or < code > 401 Unauthorized< / code > errors. This
may affect users who are redirecting apex-to-www (e.g. < code > example.com< / code > -> < code > www.example.com< / code > ),
and especially users of the Kubernetes < em > Nginx Ingress< / em > module, which uses 308 redirect
codes by default. For those Kubernetes users, < a href = "https://stackoverflow.com/a/52617528/5096871" > this Stack Overflow post< / a >
might be helpful. For other users, switching to a < code > 301 Moved Permanently< / code > code may be
an option. 308 redirect codes will be supported properly in a future
release of Synapse.< / p >
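< p > For instance, with nginx an apex-to-www redirect that stays federation-friendly could use an explicit 301. This is a sketch under assumed names, with the TLS certificate directives omitted for brevity:< / p >

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity.

    # Use 301 rather than a 308 permanent redirect so that federating
    # Synapse servers can follow it.
    return 301 https://www.example.com$request_uri;
}
```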
< h2 id = "running-a-demo-federation-of-synapses" > < a class = "header" href = "#running-a-demo-federation-of-synapses" > Running a demo federation of Synapses< / a > < / h2 >
< p > If you want to get up and running quickly with a trio of homeservers in a
private federation, there is a script in the < code > demo< / code > directory. This is mainly
useful just for development purposes. See < a href = "https://github.com/matrix-org/synapse/tree/develop/demo/" > demo/README< / a > .< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "configuration-1" > < a class = "header" href = "#configuration-1" > Configuration< / a > < / h1 >
< p > This section contains information on tweaking Synapse via the various options in the configuration file. A configuration
file should have been generated when you < a href = "usage/configuration/../../setup/installation.html" > installed Synapse< / a > .< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "homeserver-sample-configuration-file" > < a class = "header" href = "#homeserver-sample-configuration-file" > Homeserver Sample Configuration File< / a > < / h1 >
< p > Below is a sample homeserver configuration file. The homeserver configuration file
can be tweaked to change the behaviour of your homeserver. A restart of the server is
generally required to apply any changes made to this file.< / p >
< p > Note that the contents below are < em > not< / em > intended to be copied and used as the basis for
a real homeserver.yaml. Instead, if you are starting from scratch, please generate
a fresh config using Synapse by following the instructions in
< a href = "usage/configuration/../../setup/installation.html" > Installation< / a > .< / p >
< pre > < code class = "language-yaml" > # This file is maintained as an up-to-date snapshot of the default
# homeserver.yaml configuration generated by Synapse.
#
# It is intended to act as a reference for the default configuration,
# helping admins keep track of new options and other changes, and compare
# their configs with the current default. As such, many of the actual
# config values shown are placeholders.
#
# It is *not* intended to be copied and used as the basis for a real
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.
# Configuration options that take a time period can be set using a number
# followed by a letter. Letters have the following meanings:
# s = second
# m = minute
# h = hour
# d = day
# w = week
# y = year
# For example, setting redaction_retention_period: 5m would remove redacted
# messages from the database after 5 minutes, rather than 5 months.
################################################################################
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
## Server ##
# The public-facing domain of the server
#
# The server_name name will appear at the end of usernames and room addresses
# created on this server. For example if the server_name was example.com,
# usernames on this server would be in the format @user:example.com
#
# In most cases you should avoid using a matrix specific subdomain such as
# matrix.example.com or synapse.example.com as the server_name for the same
# reasons you wouldn't use user@email.example.com as your email address.
# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
# for information on how to host Synapse on a subdomain while preserving
# a clean server_name.
#
# The server_name cannot be changed later so it is important to
# configure this correctly before you start Synapse. It should be all
# lowercase and may contain an explicit port.
# Examples: matrix.org, localhost:8080
#
server_name: "SERVERNAME"
# When running as a daemon, the file to store the pid in
#
pid_file: DATADIR/homeserver.pid
# The absolute URL to the web client which /_matrix/client will redirect
# to if 'webclient' is configured under the 'listeners' configuration.
#
# This option can be also set to the filesystem path to the web client
# which will be served at /_matrix/client/ if 'webclient' is configured
# under the 'listeners' configuration, however this is a security risk:
# https://github.com/matrix-org/synapse#security-note
#
#web_client_location: https://riot.example.com/
# The public-facing base URL that clients use to access this Homeserver (not
# including _matrix/...). This is the same URL a user might enter into the
# 'Custom Homeserver URL' field on their client. If you use Synapse with a
# reverse proxy, this should be the URL to reach Synapse via the proxy.
# Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
# 'listeners' below).
#
#public_baseurl: https://example.com/
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
#
#soft_file_limit: 0
# Presence tracking allows users to see the state (e.g online/offline)
# of other local and remote users.
#
presence:
# Uncomment to disable presence tracking on this homeserver. This option
# replaces the previous top-level 'use_presence' option.
#
#enabled: false
# Presence routers are third-party modules that can specify additional logic
# to where presence updates from users are routed.
#
presence_router:
# The custom module's class. Uncomment to use a custom presence router module.
#
#module: "my_custom_router.PresenceRouter"
# Configuration options of the custom module. Refer to your module's
# documentation for available options.
#
#config:
# example_option: 'something'
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, unless allow_profile_lookup_over_federation is set to false.
#
#require_auth_for_profile_requests: true
# Uncomment to require a user to share a room with another user in order
# to retrieve their profile information. Only checked on Client-Server
# requests. Profile requests from other servers should be checked by the
# requesting server. Defaults to 'false'.
#
#limit_profile_requests_to_users_who_share_rooms: true
# Uncomment to prevent a user's profile data from being retrieved and
# displayed in a room until they have joined it. By default, a user's
# profile data is included in an invite event, regardless of the values
# of the above two settings, and whether or not the users share a server.
# Defaults to 'true'.
#
#include_profile_data_on_invite: false
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
#
#allow_public_rooms_without_auth: true
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
#
#allow_public_rooms_over_federation: true
# The default room version for newly created rooms.
#
# Known room versions are listed here:
# https://matrix.org/docs/spec/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".
#
#default_room_version: "6"
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#
#gc_thresholds: [700, 10, 10]
# The minimum time in seconds between each GC for a generation, regardless of
# the GC thresholds. This ensures that we don't do GC too frequently.
#
# A value of `[1s, 10s, 30s]` indicates that a second must pass between consecutive
# generation 0 GCs, etc.
#
# Defaults to `[1s, 10s, 30s]`.
#
#gc_min_interval: [0.5s, 30s, 1m]
# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is 100. -1 means no upper limit.
#
# Uncomment the following to increase the limit to 5000.
#
#filter_timeline_limit: 5000
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
#
#block_non_admin_invites: true
# Room searching
#
# If disabled, new messages will not be indexed for searching and users
# will receive errors when searching for messages. Defaults to enabled.
#
#enable_search: false
# Prevent outgoing requests from being sent to the following blacklisted IP address
# CIDR ranges. If this option is not specified then it defaults to private IP
# address ranges (see the example below).
#
# The blacklist applies to the outbound requests for federation, identity servers,
# push servers, and for checking key validity for third-party invite events.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This option replaces federation_ip_range_blacklist in Synapse v1.25.0.
#
#ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16'
# - '192.88.99.0/24'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
# - '203.0.113.0/24'
# - '224.0.0.0/4'
# - '::1/128'
# - 'fe80::/10'
# - 'fc00::/7'
# - '2001:db8::/32'
# - 'ff00::/8'
# - 'fec0::/10'
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
#ip_range_whitelist:
# - '192.168.1.1'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
# Options for each listener include:
#
# port: the TCP port to bind to
#
# bind_addresses: a list of local addresses to listen on. The default is
# 'all local interfaces'.
#
# type: the type of listener. Normally 'http', but other valid options are:
# 'manhole' (see docs/manhole.md),
# 'metrics' (see docs/metrics-howto.md),
# 'replication' (see docs/workers.md).
#
# tls: set to true to enable TLS for this listener. Will use the TLS
# key/cert specified in tls_private_key_path / tls_certificate_path.
#
# x_forwarded: Only valid for an 'http' listener. Set to true to use the
# X-Forwarded-For header as the client IP. Useful when Synapse is
# behind a reverse-proxy.
#
# resources: Only valid for an 'http' listener. A list of resources to host
# on this port. Options for each resource are:
#
# names: a list of names of HTTP resources. See below for a list of
# valid resource names.
#
# compress: set to true to enable HTTP compression for this resource.
#
# additional_resources: Only valid for an 'http' listener. A map of
# additional endpoints which should be loaded via dynamic modules.
#
# Valid resource names are:
#
# client: the client-server API (/_matrix/client), and the synapse admin
# API (/_synapse/admin). Also implies 'media' and 'static'.
#
# consent: user consent forms (/_matrix/consent). See
# docs/consent_tracking.md.
#
# federation: the server-server API (/_matrix/federation). Also implies
# 'media', 'keys', 'openid'
#
# keys: the key discovery API (/_matrix/keys).
#
# media: the media API (/_matrix/media).
#
# metrics: the metrics interface. See docs/metrics-howto.md.
#
# openid: OpenID authentication.
#
# replication: the HTTP replication API (/_synapse/replication). See
# docs/workers.md.
#
# static: static resources under synapse/static (/_matrix/static). (Mostly
# useful for 'fallback authentication'.)
#
# webclient: A web client. Requires web_client_location to be set.
#
listeners:
# TLS-enabled listener: for when matrix traffic is sent directly to synapse.
#
# Disabled by default. To enable it, uncomment the following. (Note that you
# will also need to give Synapse a TLS key and certificate: see the TLS section
# below.)
#
#- port: 8448
#  type: http
#  tls: true
#  resources:
#    - names: [client, federation]
# Unsecure HTTP listener: for when matrix traffic passes through a reverse proxy
# that unwraps TLS.
#
# If you plan to use a reverse proxy, please see
# https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
#
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['::1', '127.0.0.1']
    resources:
      - names: [client, federation]
        compress: false
# example additional_resources:
#
#additional_resources:
#  "/_matrix/my/custom/endpoint":
#    module: my_module.CustomRequestHandler
#    config: {}
# Turn on the twisted ssh manhole service on localhost on the given
# port.
#
#- port: 9000
#  bind_addresses: ['::1', '127.0.0.1']
#  type: manhole
# Forward extremities can build up in a room due to networking delays between
# homeservers. Once this happens in a large room, calculation of the state of
# that room can become quite expensive. To mitigate this, once the number of
# forward extremities reaches a given threshold, Synapse will send an
# org.matrix.dummy_event event, which will reduce the forward extremities
# in the room.
#
# This setting defines the threshold (i.e. number of forward extremities in the
# room) at which dummy events are sent. The default value is 10.
#
#dummy_events_threshold: 5
## Homeserver blocking ##
# How to reach the server admin, used in ResourceLimitError
#
#admin_contact: 'mailto:admin@server.com'
# Global blocking
#
#hs_disabled: false
#hs_disabled_message: 'Human readable reason for why the HS is blocked'
# Monthly Active User Blocking
#
# Used in cases where the admin or server owner wants to limit to the
# number of monthly active users.
#
# 'limit_usage_by_mau' disables/enables monthly active user blocking. When
# enabled and a limit is reached the server returns a 'ResourceLimitError'
# with error type Codes.RESOURCE_LIMIT_EXCEEDED
#
# 'max_mau_value' is the hard limit of monthly active users above which
# the server will start blocking user actions.
#
# 'mau_trial_days' is a means to add a grace period for active users. It
# means that users must be active for this number of days before they
# can be considered active and guards against the case where lots of users
# sign up in a short space of time never to return after their initial
# session.
#
# 'mau_limit_alerting' is a means of limiting client side alerting
# should the mau limit be reached. This is useful for small instances
# where the admin has 5 mau seats (say) for 5 specific people and no
# interest in increasing the mau limit further. Defaults to True, which
# means that alerting is enabled
#
#limit_usage_by_mau: false
#max_mau_value: 50
#mau_trial_days: 2
#mau_limit_alerting: false
# If enabled, the metrics for the number of monthly active users will
# be populated, however no one will be limited. If limit_usage_by_mau
# is true, this is implied to be true.
#
#mau_stats_only: false
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
#
#mau_limit_reserved_threepids:
# - medium: 'email'
# address: 'reserved_user@example.com'
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained homeserver settings
#
# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
#
# Room complexity is an arbitrary measure based on factors such as the number of
# users in the room.
#
limit_remote_rooms:
# Uncomment to enable room complexity checking.
#
#enabled: true
# The limit above which rooms cannot be joined. The default is 1.0.
#
#complexity: 0.5
# Override the error which is returned when the room is too complex.
#
#complexity_error: "This room is too complex."
# Allow server admins to join complex rooms. Default is false.
#
#admins_can_join: true
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
#require_membership_for_aliases: false
# Whether to allow per-room membership profiles through the sending of membership
# events with profile information that differs from the target's global profile.
# Defaults to 'true'.
#
#allow_per_room_profiles: false
# How long to keep redacted events in unredacted form in the database. After
# this period redacted events get replaced with their redacted form in the DB.
#
# Defaults to `7d`. Set to `null` to disable.
#
#redaction_retention_period: 28d
# How long to track users' last seen time and IPs in the database.
#
# Defaults to `28d`. Set to `null` to disable clearing out of old rows.
#
#user_ips_max_age: 14d
# Message retention policy at the server level.
#
# Room admins and mods can define a retention period for their rooms using the
# 'm.room.retention' state event, and server admins can cap this period by setting
# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
#
# If this feature is enabled, Synapse will regularly look for and purge events
# which are older than the room's maximum retention period. Synapse will also
# filter events received over federation so that events that should have been
# purged are ignored and not stored again.
#
retention:
# The message retention policies feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# Default retention policy. If set, Synapse will apply it to rooms that lack the
# 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
# matter much because Synapse doesn't take it into account yet.
#
#default_policy:
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, and the state of a room contains a
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
# to these limits when running purge jobs.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
# Server admins can define the settings of the background jobs purging the
# events whose lifetime has expired under the 'purge_jobs' section.
#
# If no configuration is provided, a single job will be set up to delete expired
# events in every room daily.
#
# Each job's configuration defines which range of message lifetimes the job
# takes care of. For example, if 'shortest_max_lifetime' is '2d' and
# 'longest_max_lifetime' is '3d', the job will handle purging expired events in
# rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
# lower than or equal to 3 days. Both the minimum and the maximum value of a
# range are optional, e.g. a job with no 'shortest_max_lifetime' and a
# 'longest_max_lifetime' of '3d' will handle every room with a retention policy
# whose 'max_lifetime' is lower than or equal to three days.
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a more frequent basis than for the rest of the rooms
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
# If any purge job is configured, it is strongly recommended to have at least
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
# set, or one job without 'shortest_max_lifetime' and one job without
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
# room's policy to these values is done after the policies are retrieved from
# Synapse's database (which is done using the range specified in a purge job's
# configuration).
#
#purge_jobs:
# - longest_max_lifetime: 3d
# interval: 12h
# - shortest_max_lifetime: 3d
# interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak
# information about whether an e-mail address is in use or not on this
# homeserver.
# Note that for some endpoints the error situation is the e-mail already being
# used, and for others it is the e-mail not being in use.
# If this option is enabled, instead of returning an error, these endpoints will
# act as if no error happened and return a fake session ID ('sid') to clients.
#
#request_token_inhibit_3pid_errors: true
# A list of domains that the domain portion of 'next_link' parameters
# must match.
#
# This parameter is optionally provided by clients while requesting
# validation of an email or phone number, and maps to a link that
# users will be automatically redirected to after validation
# succeeds. Clients can make use of this parameter to aid the validation
# process.
#
# The whitelist is applied whether the homeserver or an
# identity server is handling validation.
#
# The default value is no whitelist functionality; all domains are
# allowed. Setting this value to an empty list will instead disallow
# all domains.
#
#next_link_domain_whitelist: ["matrix.org"]
## TLS ##
# PEM-encoded X509 certificate for TLS.
# This certificate, as of Synapse 1.0, will need to be a valid and verifiable
# certificate, signed by a recognised Certificate Authority.
#
# See 'ACME support' below to enable auto-provisioning this certificate via
# Let's Encrypt.
#
# If supplying your own, be sure to use a `.pem` file that includes the
# full certificate chain including any intermediate certificates (for
# instance, if using certbot, use `fullchain.pem` as your certificate,
# not `cert.pem`).
#
#tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"
# PEM-encoded private key for TLS
#
#tls_private_key_path: "CONFDIR/SERVERNAME.tls.key"
# Whether to verify TLS server certificates for outbound federation requests.
#
# Defaults to `true`. To disable certificate verification, uncomment the
# following line.
#
#federation_verify_certificates: false
# The minimum TLS version that will be used for outbound federation requests.
#
# Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
# that setting this value higher than `1.2` will prevent federation to most
# of the public Matrix network: only configure it to `1.3` if you have an
# entirely private federation setup and you can ensure TLS 1.3 support.
#
#federation_client_minimum_tls_version: 1.2
# Skip federation certificate verification on the following whitelist
# of domains.
#
# This setting should only be used in very specific cases, such as
# federation over Tor hidden services and similar. For private networks
# of homeservers, you likely want to use a private CA instead.
#
# Only effective if federation_verify_certificates is `true`.
#
#federation_certificate_verification_whitelist:
# - lon.example.com
# - *.domain.com
# - *.onion
# List of custom certificate authorities for federation traffic.
#
# This setting should only normally be used within a private network of
# homeservers.
#
# Note that this list will replace those that are provided by your
# operating environment. Certificates must be in PEM format.
#
#federation_custom_ca_list:
# - myCA1.pem
# - myCA2.pem
# - myCA3.pem
# ACME support: This will configure Synapse to request a valid TLS certificate
# for your configured `server_name` via Let's Encrypt.
#
# Note that ACME v1 is now deprecated, and Synapse currently doesn't support
# ACME v2. This means that this feature currently won't work with installs set
# up after November 2019. For more info, and alternative solutions, see
# https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
#
# Note that provisioning a certificate in this way requires port 80 to be
# routed to Synapse so that it can complete the http-01 ACME challenge.
# By default, if you enable ACME support, Synapse will attempt to listen on
# port 80 for incoming http-01 challenges - however, this will likely fail
# with 'Permission denied' or a similar error.
#
# There are a couple of potential solutions to this:
#
# * If you already have an Apache, Nginx, or similar listening on port 80,
# you can configure Synapse to use an alternate port, and have your web
# server forward the requests. For example, assuming you set 'port: 8009'
# below, on Apache, you would write:
#
# ProxyPass /.well-known/acme-challenge http://localhost:8009/.well-known/acme-challenge
#
# * Alternatively, you can use something like `authbind` to give Synapse
# permission to listen on port 80.
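#
# For nginx users, an equivalent forwarding rule (an illustrative sketch, not
# part of the upstream sample config, again assuming 'port: 8009' below)
# could be:
#
#     location /.well-known/acme-challenge {
#         proxy_pass http://localhost:8009;
#     }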
#
acme:
    # ACME support is disabled by default. Set this to `true` and uncomment
    # tls_certificate_path and tls_private_key_path above to enable it.
    #
    enabled: false
    # Endpoint to use to request certificates. If you only want to test,
    # use Let's Encrypt's staging url:
    # https://acme-staging.api.letsencrypt.org/directory
    #
    #url: https://acme-v01.api.letsencrypt.org/directory
    # Port number to listen on for the HTTP-01 challenge. Change this if
    # you are forwarding connections through Apache/Nginx/etc.
    #
    port: 80
    # Local addresses to listen on for incoming connections.
    # Again, you may want to change this if you are forwarding connections
    # through Apache/Nginx/etc.
    #
    bind_addresses: ['::', '0.0.0.0']
    # How many days remaining on a certificate before it is renewed.
    #
    reprovision_threshold: 30
    # The domain that the certificate should be for. Normally this
    # should be the same as your Matrix domain (i.e., 'server_name'), but,
    # by putting a file at 'https://<server_name>/.well-known/matrix/server',
    # you can delegate incoming traffic to another server. If you do that,
    # you should give the target of the delegation here.
    #
    # For example: if your 'server_name' is 'example.com', but
    # 'https://example.com/.well-known/matrix/server' delegates to
    # 'matrix.example.com', you should put 'matrix.example.com' here.
    #
    # If not set, defaults to your 'server_name'.
    #
    domain: matrix.example.com
    # File to use for the account key. This will be generated if it doesn't
    # exist.
    #
    # If unspecified, we will use CONFDIR/client.key.
    #
    account_key_file: DATADIR/acme_account.key
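# For reference, the '/.well-known/matrix/server' delegation file mentioned
# above is a small JSON document; an illustrative example (per the Matrix
# server discovery specification) would be:
#
#     {"m.server": "matrix.example.com:443"}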
## Federation ##
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound
# and outbound federation, though be aware that any delay can be due to problems
# at either end or with the intermediate network.
#
# By default, no domains are monitored in this way.
#
#federation_metrics_domains:
# - matrix.org
# - example.com
# Uncomment to disable profile lookup over federation. By default, the
# Federation API allows other homeservers to obtain profile data of any user
# on this homeserver. Defaults to 'true'.
#
#allow_profile_lookup_over_federation: false
# Uncomment to disable device display name lookup over federation. By default, the
# Federation API allows other homeservers to obtain device display names of any user
# on this homeserver. Defaults to 'true'.
#
#allow_device_name_lookup_over_federation: false
## Caching ##
# Caching can be configured through the following options.
#
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
#
#event_cache_size: 10K
caches:
  # Controls the global cache factor, which is the default cache factor
  # for all caches if a specific factor for that cache is not otherwise
  # set.
  #
  # This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
  # variable. Setting by environment variable takes priority over
  # setting through the config file.
  #
  # Defaults to 0.5, which will halve the size of all caches.
  #
  #global_factor: 1.0
  # A dictionary of cache name to cache factor for that individual
  # cache. Overrides the global cache factor for a given cache.
  #
  # These can also be set through environment variables comprised
  # of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
  # letters and underscores. Setting by environment variable
  # takes priority over setting through the config file.
  # Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
  #
  # Some caches have '*' and other characters that are not
  # alphanumeric or underscores. These caches can be named with or
  # without the special characters stripped. For example, to specify
  # the cache factor for `*stateGroupCache*` via an environment
  # variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
  #
  per_cache_factors:
    #get_users_who_share_room_with_user: 2.0
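# As an illustration (not part of the upstream sample config), the same
# factors could instead be supplied via environment variables before starting
# Synapse:
#
#     export SYNAPSE_CACHE_FACTOR=1.0
#     export SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0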
## Database ##
# The 'database' setting defines the database that synapse uses to store all of
# its data.
#
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
# * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
# * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
# * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
#
#
# Example SQLite configuration:
#
#database:
# name: sqlite3
# args:
# database: /path/to/homeserver.db
#
#
# Example Postgres configuration:
#
#database:
# name: psycopg2
# args:
# user: synapse_user
# password: secretpassword
# database: synapse
# host: localhost
# port: 5432
# cp_min: 5
# cp_max: 10
#
# For more information on using Synapse with Postgres, see `docs/postgres.md`.
#
database:
  name: sqlite3
  args:
    database: DATADIR/homeserver.db
## Logging ##
# A yaml python logging config file as described by
# https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
#
log_config: "CONFDIR/SERVERNAME.log.config"
## Ratelimiting ##
# Ratelimiting settings for client actions (registration, login, messaging).
#
# Each ratelimiting configuration is made of two parameters:
# - per_second: number of requests a client can send per second.
# - burst_count: number of requests a client can send before being throttled.
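#
# As a worked example: with rc_message's defaults below (per_second: 0.2,
# burst_count: 10), a client can send 10 requests back-to-back and is then
# throttled to roughly one request every 5 seconds (1 / 0.2).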
#
# Synapse currently uses the following configurations:
# - one for messages that ratelimits sending based on the account the client
# is using
# - one for registration that ratelimits registration requests based on the
# client's IP address.
# - one for login that ratelimits login requests based on the client's IP
# address.
# - one for login that ratelimits login requests based on the account the
# client is attempting to log into.
# - one for login that ratelimits login requests based on the account the
# client is attempting to log into, based on the amount of failed login
# attempts for this account.
# - one for ratelimiting redactions by room admins. If this is not explicitly
# set then it uses the same ratelimiting as per rc_message. This is useful
# to allow room admins to deal with abuse quickly.
# - two for ratelimiting number of rooms a user can join, "local" for when
# users are joining rooms the server is already in (this is cheap) vs
# "remote" for when users are trying to join rooms not on the server (which
# can be more expensive)
# - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
# - two for ratelimiting how often invites can be sent in a room or to a
# specific user.
#
# The defaults are as shown below.
#
#rc_message:
# per_second: 0.2
# burst_count: 10
#
#rc_registration:
# per_second: 0.17
# burst_count: 3
#
#rc_login:
# address:
# per_second: 0.17
# burst_count: 3
# account:
# per_second: 0.17
# burst_count: 3
# failed_attempts:
# per_second: 0.17
# burst_count: 3
#
#rc_admin_redaction:
# per_second: 1
# burst_count: 50
#
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 10
# remote:
# per_second: 0.01
# burst_count: 10
#
#rc_3pid_validation:
# per_second: 0.003
# burst_count: 5
#
#rc_invites:
# per_room:
# per_second: 0.3
# burst_count: 10
# per_user:
# per_second: 0.003
# burst_count: 5
# Ratelimiting settings for incoming federation
#
# The rc_federation configuration is made up of the following settings:
# - window_size: window size in milliseconds
# - sleep_limit: number of federation requests from a single server in
# a window before the server will delay processing the request.
# - sleep_delay: duration in milliseconds by which to delay processing events
# from remote servers if they go over the sleep limit.
# - reject_limit: maximum number of concurrent federation requests
# allowed from a single server
# - concurrent: number of federation requests to concurrently process
# from a single server
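#
# As a worked example, using the defaults below: once a remote server has
# made more than 10 requests within a 1000ms window, its further requests are
# delayed by 500ms each, and requests are rejected outright once 50 are
# outstanding from that server.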
#
# The defaults are as shown below.
#
#rc_federation:
# window_size: 1000
# sleep_limit: 10
# sleep_delay: 500
# reject_limit: 50
# concurrent: 3
# Target outgoing federation transaction frequency for sending read-receipts,
# per-room.
#
# If we end up trying to send out more read-receipts, they will get buffered up
# into fewer transactions.
#
#federation_rr_transactions_per_room_per_second: 50
## Media Store ##
# Enable the media store service in the Synapse master. Uncomment the
# following if you are using a separate media store worker.
#
#enable_media_repo: false
# Directory where uploaded images and attachments are stored.
#
media_store_path: "DATADIR/media_store"
# Media storage providers allow media to be stored in different
# locations.
#
#media_storage_providers:
# - module: file_system
# # Whether to store newly uploaded local files
# store_local: false
# # Whether to store newly downloaded remote files
# store_remote: false
# # Whether to wait for successful storage for local uploads
# store_synchronous: false
# config:
# directory: /mnt/some/other/directory
# The largest allowed upload size in bytes
#
# If you are using a reverse proxy you may also need to set this value in
# your reverse proxy's config. Notably Nginx has a small max body size by default.
# See https://matrix-org.github.io/synapse/develop/reverse_proxy.html.
#
#max_upload_size: 50M
# Maximum number of pixels that will be thumbnailed
#
#max_image_pixels: 32M
# Whether to generate new thumbnails on the fly to precisely match
# the resolution requested by the client. If true then whenever
# a new resolution is requested by the client the server will
# generate a new thumbnail. If false the server will pick a thumbnail
# from a precalculated list.
#
#dynamic_thumbnails: false
# List of thumbnails to precalculate when an image is uploaded.
#
#thumbnail_sizes:
# - width: 32
# height: 32
# method: crop
# - width: 96
# height: 96
# method: crop
# - width: 320
# height: 240
# method: scale
# - width: 640
# height: 480
# method: scale
# - width: 800
# height: 600
# method: scale
# Is the preview URL API enabled?
#
# 'false' by default: uncomment the following to enable it (and specify a
# url_preview_ip_range_blacklist blacklist).
#
#url_preview_enabled: true
# List of IP address CIDR ranges that the URL preview spider is denied
# from accessing. There are no defaults: you must explicitly
# specify a list for URL previewing to work. You should specify any
# internal services in your network that you do not want synapse to try
# to connect to, otherwise anyone in any Matrix room could cause your
# synapse to issue arbitrary GET requests to your internal services,
# causing serious security issues.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This must be specified if url_preview_enabled is set. It is recommended that
# you uncomment the following list as a starting point.
#
#url_preview_ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16'
# - '192.88.99.0/24'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
# - '203.0.113.0/24'
# - '224.0.0.0/4'
# - '::1/128'
# - 'fe80::/10'
# - 'fc00::/7'
# - '2001:db8::/32'
# - 'ff00::/8'
# - 'fec0::/10'
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.
# This is useful for specifying exceptions to wide-ranging blacklisted
# target IP ranges - e.g. for enabling URL previews for a specific private
# website only visible in your network.
#
#url_preview_ip_range_whitelist:
# - '192.168.1.1'
# Optional list of URL matches that the URL preview spider is
# denied from accessing. You should use url_preview_ip_range_blacklist
# in preference to this, otherwise someone could define a public DNS
# entry that points to a private IP address and circumvent the blacklist.
# This is more useful if you know there is an entire shape of URL that
# you will never want synapse to try to spider.
#
# Each list entry is a dictionary of url component attributes as returned
# by urlparse.urlsplit as applied to the absolute form of the URL. See
# https://docs.python.org/2/library/urlparse.html#urlparse.urlsplit
# The values of the dictionary are treated as a filename match pattern
# applied to that component of URLs, unless they start with a ^ in which
# case they are treated as a regular expression match. If all the
# specified component matches for a given list item succeed, the URL is
# blacklisted.
#
#url_preview_url_blacklist:
# # blacklist any URL with a username in its URI
# - username: '*'
#
# # blacklist all *.google.com URLs
# - netloc: 'google.com'
# - netloc: '*.google.com'
#
# # blacklist all plain HTTP URLs
# - scheme: 'http'
#
# # blacklist http(s)://www.acme.com/foo
# - netloc: 'www.acme.com'
# path: '/foo'
#
# # blacklist any URL with a literal IPv4 address
# - netloc: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
# The largest allowed URL preview spidering size in bytes
#
#max_spider_size: 10M
# A list of values for the Accept-Language HTTP header used when
# downloading webpages during URL preview generation. This allows
# Synapse to specify the preferred languages that URL previews should
# be in when communicating with remote servers.
#
# Each value is an IETF language tag; a 2-3 letter identifier for a
# language, optionally followed by subtags separated by '-', specifying
# a country or region variant.
#
# Multiple values can be provided, and a weight can be added to each by
# using quality value syntax (;q=). '*' translates to any language.
#
# Defaults to "en".
#
# Example:
#
# url_preview_accept_language:
# - en-GB
# - en-US;q=0.9
# - fr;q=0.8
# - *;q=0.7
#
url_preview_accept_language:
# - en
## Captcha ##
# See docs/CAPTCHA_SETUP.md for full details of configuring this.
# This homeserver's ReCAPTCHA public key. Must be specified if
# enable_registration_captcha is enabled.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This homeserver's ReCAPTCHA private key. Must be specified if
# enable_registration_captcha is enabled.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"
# Uncomment to enable ReCaptcha checks when registering, preventing signup
# unless a captcha is answered. Requires a valid ReCaptcha
# public/private key. Defaults to 'false'.
#
#enable_registration_captcha: true
# The API endpoint to use for verifying m.login.recaptcha responses.
# Defaults to "https://www.recaptcha.net/recaptcha/api/siteverify".
#
#recaptcha_siteverify_api: "https://my.recaptcha.site"
## TURN ##
# The public URIs of the TURN server to give to clients
#
#turn_uris: []
# The shared secret used to compute passwords for the TURN server
#
#turn_shared_secret: "YOUR_SHARED_SECRET"
# The username and password if the TURN server needs them and
# does not use a token
#
#turn_username: "TURNSERVER_USERNAME"
#turn_password: "TURNSERVER_PASSWORD"
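# For background (an illustrative note, not part of the upstream sample
# config): when turn_shared_secret is used, time-limited credentials are
# derived in the style of the TURN REST API memo, roughly:
#
#     username = "<expiry-timestamp>:<matrix-user-id>"
#     password = base64(HMAC-SHA1(turn_shared_secret, username))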
# How long generated TURN credentials last
#
#turn_user_lifetime: 1h
# Whether guests should be allowed to use the TURN server.
# This defaults to True, otherwise VoIP will be unreliable for guests.
# However, it does introduce a slight security risk as it allows users to
# connect to arbitrary endpoints without having first signed up for a
# valid account (e.g. by passing a CAPTCHA).
#
#turn_allow_guests: true
## Registration ##
#
# Registration can be rate-limited using the parameters in the "Ratelimiting"
# section of this file.
# Enable registration for new users.
#
#enable_registration: false
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to users who have already logged in.
#
# By default, this is infinite.
#
#session_lifetime: 24h
# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
# - email
# - msisdn
# Explicitly disable asking for MSISDNs from the registration
# flow (overrides registrations_require_3pid if MSISDNs are set as required)
#
#disable_msisdn_registration: true
# Mandate that users are only allowed to associate certain formats of
# 3PIDs with accounts on this server.
#
#allowed_local_3pids:
# - medium: email
# pattern: '^[^@]+@matrix\.org$'
# - medium: email
# pattern: '^[^@]+@vector\.im$'
# - medium: msisdn
# pattern: '\+44'
# Enable 3PIDs lookup requests to identity servers from this server.
#
#enable_3pid_lookup: true
# If set, allows registration of standard or admin accounts by anyone who
# has the shared secret, even if registration is otherwise disabled.
#
#registration_shared_secret: <PRIVATE STRING>
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number is 12 (which equates to 2^12 rounds).
# N.B. that increasing this will exponentially increase the time required
# to register or login - e.g. 24 => 2^24 rounds which will take > 20 mins.
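#
# As a worked example: each extra round doubles the work, so raising
# bcrypt_rounds from 12 (2^12 rounds) to 14 (2^14 rounds) makes each password
# hash roughly four times slower.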
#
#bcrypt_rounds: 12
# Allows users to register as guests without a password/email/etc, and
# participate in rooms hosted on this server which have been made
# accessible to anonymous users.
#
#allow_guest_access: false
# The identity server which we suggest that clients should use when users log
# in on this server.
#
# (By default, no suggestion is made, so it is left up to the client.
# This setting is ignored unless public_baseurl is also set.)
#
#default_identity_server: https://matrix.org
# Handle threepid (email/phone etc) registration and password resets through a set of
# *trusted* identity servers. Note that this allows the configured identity server to
# reset passwords for accounts!
#
# Be aware that if `email` is not set, and SMTP options have not been
# configured in the email config block, registration and user password resets via
# email will be globally disabled.
#
# Additionally, if `msisdn` is not set, registration and password resets via msisdn
# will be disabled regardless, and users will not be able to associate an msisdn
# identifier to their account. This is due to Synapse currently not supporting
# any method of sending SMS messages on its own.
#
# To enable using an identity server for operations regarding a particular third-party
# identifier type, set the value to the URL of that identity server as shown in the
# examples below.
#
# Servers handling these requests must answer the `/requestToken` endpoints defined
# by the Matrix Identity Service API specification:
# https://matrix.org/docs/spec/identity_service/latest
#
# If a delegate is specified, the config option public_baseurl must also be filled out.
#
account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Whether users are allowed to change their displayname after it has
# been initially set. Useful when provisioning users based on the
# contents of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_displayname: false
# Whether users are allowed to change their avatar after it has been
# initially set. Useful when provisioning users based on the contents
# of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_avatar_url: false
# Whether users can change the 3PIDs associated with their accounts
# (email address and msisdn).
#
# Defaults to 'true'
#
#enable_3pid_changes: false
# Users who register on this homeserver will automatically be joined
# to these rooms.
#
# By default, any room aliases included in this list will be created
# as a publicly joinable room when the first user registers for the
# homeserver. This behaviour can be customised with the settings below.
# If the room already exists, make certain it is a publicly joinable
# room. The join rule of the room must be set to 'public'.
#
#auto_join_rooms:
# - " #example:example.com"
# Where auto_join_rooms are specified, setting this flag ensures that
# the rooms exist by creating them when the first user on the
# homeserver registers.
#
# By default the auto-created rooms are publicly joinable from any federated
# server. Use the autocreate_auto_join_rooms_federated and
# autocreate_auto_join_room_preset settings below to customise this behaviour.
#
# Setting to false means that if the rooms are not manually created,
# users cannot be auto-joined since they do not exist.
#
# Defaults to true. Uncomment the following line to disable automatically
# creating auto-join rooms.
#
#autocreate_auto_join_rooms: false
# Whether the auto_join_rooms that are auto-created are available via
# federation. Only has an effect if autocreate_auto_join_rooms is true.
#
# Note that whether a room is federated cannot be modified after
# creation.
#
# Defaults to true: the room will be joinable from other servers.
# Uncomment the following to prevent users from other homeservers from
# joining these rooms.
#
#autocreate_auto_join_rooms_federated: false
# The room preset to use when auto-creating one of auto_join_rooms. Only has an
# effect if autocreate_auto_join_rooms is true.
#
# This can be one of "public_chat", "private_chat", or "trusted_private_chat".
# If a value of "private_chat" or "trusted_private_chat" is used then
# auto_join_mxid_localpart must also be configured.
#
# Defaults to "public_chat", meaning that the room is joinable by anyone, including
# federated servers if autocreate_auto_join_rooms_federated is true (the default).
# Uncomment the following to require an invitation to join these rooms.
#
#autocreate_auto_join_room_preset: private_chat
# The local part of the user id which is used to create auto_join_rooms if
# autocreate_auto_join_rooms is true. If this is not provided then the
# initial user account that registers will be used to create the rooms.
#
# The user id is also used to invite new users to any auto-join rooms which
# are set to invite-only.
#
# It *must* be configured if autocreate_auto_join_room_preset is set to
# "private_chat" or "trusted_private_chat".
#
# Note that this must be specified in order for new users to be correctly
# invited to any auto-join rooms which have been set to invite-only (either
# at the time of creation or subsequently).
#
# Note that, if the room already exists, this user must be joined and
# have the appropriate permissions to invite new members.
#
#auto_join_mxid_localpart: system
# When auto_join_rooms is specified, setting this flag to false prevents
# guest accounts from being automatically joined to the rooms.
#
# Defaults to true.
#
#auto_join_rooms_for_guests: false
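# For illustration only: a sketch combining the auto-join settings above to
# create a private, invite-only auto-join room (the room alias and localpart
# below are placeholders):
#
#auto_join_rooms:
#  - "#lobby:example.com"
#autocreate_auto_join_rooms: true
#autocreate_auto_join_rooms_federated: false
#autocreate_auto_join_room_preset: private_chat
#auto_join_mxid_localpart: system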
## Account Validity ##
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
# This means that, if a validity period is set and Synapse is restarted, an
# expiration date is derived from the current validity period; if the validity
# period later changes and Synapse is restarted again, existing users'
# expiration dates won't be updated unless their accounts are manually renewed.
# Each expiration date is randomly selected within the range
# [now + period - d ; now + period], where d is equal to 10% of the
# validity period.
#
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
# The currently available templates are:
#
# * account_renewed.html: Displayed to the user after they have successfully
# renewed their account.
#
# * account_previously_renewed.html: Displayed to the user if they attempt to
# renew their account with a token that is valid, but that has already
# been used. In this case the account is not renewed again.
#
# * invalid_token.html: Displayed to the user when they try to renew an account
# with an unknown or invalid renewal token.
#
# See https://github.com/matrix-org/synapse/tree/master/synapse/res/templates for
# default template contents.
#
# The file name of some of these templates can be configured below for legacy
# reasons.
#
#template_dir: "res/templates"
# A custom file name for the 'account_renewed.html' template.
#
# If not set, the file is assumed to be named "account_renewed.html".
#
#account_renewed_html_path: "account_renewed.html"

# A custom file name for the 'invalid_token.html' template.
#
# If not set, the file is assumed to be named "invalid_token.html".
#
#invalid_token_html_path: "invalid_token.html"
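# For illustration only: a sketch enabling account validity with a six-week
# validity period and renewal emails sent one week before expiry (the 'email'
# and 'public_baseurl' settings must also be configured):
#
#account_validity:
#  enabled: true
#  period: 6w
#  renew_at: 1w
#  renew_email_subject: "Renew your %(app)s account"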
## Metrics ##
# Enable collection and rendering of performance metrics
#
#enable_metrics: false
# Enable sentry integration
# NOTE: While attempts are made to ensure that the logs don't contain
# any sensitive information, this cannot be guaranteed. By enabling
# this option the sentry server may therefore receive sensitive
# information, and it in turn may then disseminate sensitive information
# through insecure notification channels if so configured.
#
#sentry:
# dsn: "..."
# Flags to enable Prometheus metrics which are not suitable to be
# enabled by default, either for performance reasons or limited use.
#
metrics_flags:
# Publish synapse_federation_known_servers, a gauge of the number of
# servers this homeserver knows about, including itself. May cause
# performance problems on large homeservers.
#
#known_servers: true
# Whether or not to report anonymized homeserver usage statistics.
#
#report_stats: true|false
# The endpoint to report the anonymized homeserver usage statistics to.
# Defaults to https://matrix.org/report-usage-stats/push
#
#report_stats_endpoint: https://example.com/report-usage-stats/push
## API Configuration ##
# Controls for the state that is shared with users who receive an invite
# to a room
#
room_prejoin_state:
# By default, the following state event types are shared with users who
# receive invites to the room:
#
# - m.room.join_rules
# - m.room.canonical_alias
# - m.room.avatar
# - m.room.encryption
# - m.room.name
# - m.room.create
#
# Uncomment the following to disable these defaults (so that only the event
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
#
#disable_default_event_types: true
# Additional state event types to share with users when they are invited
# to a room.
#
# By default, this list is empty (so only the default event types are shared).
#
#additional_event_types:
# - org.example.custom.event.type
# A list of application service config files to use
#
#app_service_config_files:
# - app_service_1.yaml
# - app_service_2.yaml
# Uncomment to enable tracking of application service IP addresses. Implicitly
# enables MAU tracking for application service users.
#
#track_appservice_user_ips: true
# a secret which is used to sign access tokens. If none is specified,
# the registration_shared_secret is used, if one is given; otherwise,
# a secret key is derived from the signing key.
#
#macaroon_secret_key: <PRIVATE STRING>
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
#
#form_secret: <PRIVATE STRING>
## Signing Keys ##
# Path to the signing key to sign messages with
#
signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# The keys that the server used to sign messages with but won't use
# to sign new messages.
#
old_signing_keys:
# For each key, `key` should be the base64-encoded public key, and
# `expired_ts` should be the time (in milliseconds since the unix epoch) that
# it was last used.
#
# It is possible to build an entry from an old signing.key file using the
# `export_signing_key` script which is provided with synapse.
#
# For example:
#
#"ed25519:id": { key: "base64string", expired_ts: 123456789123 }
# How long key response published by this server is valid for.
# Used to set the valid_until_ts in /key/v2 APIs.
# Determines how quickly servers will query to check which keys
# are still valid.
#
#key_refresh_interval: 1d
# The trusted servers to download signing keys from.
#
# When we need to fetch a signing key, each server is tried in parallel.
#
# Normally, the connection to the key server is validated via TLS certificates.
# Additional security can be provided by configuring `verify_keys`, which
# will make synapse check that the response is signed by that key.
#
# This setting supersedes an older setting named `perspectives`. The old format
# is still supported for backwards-compatibility, but it is deprecated.
#
# 'trusted_key_servers' defaults to matrix.org, but using it will generate a
# warning on start-up. To suppress this warning, set
# 'suppress_key_server_warning' to true.
#
# Options for each entry in the list include:
#
# server_name: the name of the server. required.
#
# verify_keys: an optional map from key id to base64-encoded public key.
# If specified, we will check that the response is signed by at least
# one of the given keys.
#
# accept_keys_insecurely: a boolean. Normally, if `verify_keys` is unset,
# and federation_verify_certificates is not `true`, synapse will refuse
# to start, because this would allow anyone who can spoof DNS responses
# to masquerade as the trusted key server. If you know what you are doing
# and are sure that your network environment provides a secure connection
# to the key server, you can set this to `true` to override this
# behaviour.
#
# An example configuration might look like:
#
#trusted_key_servers:
#  - server_name: "my_trusted_server.example.com"
#    verify_keys:
#      "ed25519:auto": "abcdefghijklmnopqrstuvwxyzabcdefghijklmopqr"
#  - server_name: "my_other_trusted_server.example.com"
#
trusted_key_servers:
  - server_name: "matrix.org"
# Uncomment the following to disable the warning that is emitted when the
# trusted_key_servers include 'matrix.org'. See above.
#
#suppress_key_server_warning: true
# The signing keys to use when acting as a trusted key server. If not specified
# defaults to the server signing key.
#
# Can contain multiple keys, one per line.
#
#key_server_signing_keys_path: " key_server_signing_keys.key"
## Single sign-on integration ##
# The following settings can be used to make Synapse use a single sign-on
# provider for authentication, instead of its internal password database.
#
# You will probably also want to set the following options to `false` to
# disable the regular login/registration flows:
# * enable_registration
# * password_config.enabled
#
# You will also want to investigate the settings under the "sso" configuration
# section below.
# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
# enable SAML login.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to
# use to configure your SAML IdP with. Alternatively, you can manually configure
# the IdP to use an ACS location of
# https://<server>:<port>/_synapse/client/saml2/authn_response.
#
saml2_config:
# `sp_config` is the configuration for the pysaml2 Service Provider.
# See pysaml2 docs for format of config.
#
# Default values will be used for the 'entityid' and 'service' settings,
# so it is not normally necessary to specify them unless you need to
# override them.
#
  sp_config:
# Point this to the IdP's metadata. You must provide either a local
# file via the `local` attribute or (preferably) a URL via the
# `remote` attribute.
#
#metadata:
#  local: ["saml2/idp.xml"]
#  remote:
#    - url: https://our_idp/metadata.xml
# Allowed clock difference in seconds between the homeserver and IdP.
#
# Uncomment the below to increase the accepted time difference from 0 to 3 seconds.
#
#accepted_time_diff: 3
# By default, the user has to go to our login page first. If you'd like
# to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# 'service.sp' section:
#
#service:
#  sp:
#    allow_unsolicited: true
# The examples below are just used to generate our metadata xml, and you
# may well not need them, depending on your setup. Alternatively you
# may need a whole lot more detail - see the pysaml2 docs!
#description: ["My awesome SP", "en"]
#name: ["Test SP", "en"]

#ui_info:
#  display_name:
#    - lang: en
#      text: "Display Name is the descriptive name of your service."
#  description:
#    - lang: en
#      text: "Description should be a short paragraph explaining the purpose of the service."
#  information_url:
#    - lang: en
#      text: "https://example.com/terms-of-service"
#  privacy_statement_url:
#    - lang: en
#      text: "https://example.com/privacy-policy"
#  keywords:
#    - lang: en
#      text: ["Matrix", "Element"]
#  logo:
#    - lang: en
#      text: "https://example.com/logo.svg"
#      width: "200"
#      height: "80"

#organization:
#  name: Example com
#  display_name:
#    - ["Example co", "en"]
#  url: "http://example.com"

#contact_person:
#  - given_name: Bob
#    sur_name: "the Sysadmin"
#    email_address: ["admin@example.com"]
#    contact_type: technical
# Instead of putting the config inline as above, you can specify a
# separate pysaml2 configuration file:
#
#config_path: "CONFDIR/sp_conf.py"
# The lifetime of a SAML session. This defines how long a user has to
# complete the authentication process, if allow_unsolicited is unset.
# The default is 15 minutes.
#
#saml_session_lifetime: 5m
# An external module can be provided here as a custom solution to
# mapping attributes returned from a saml provider onto a matrix user.
#
  user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
#
#module: mapping_provider.SamlMappingProvider
# Custom configuration values for the module. Below options are
# intended for the built-in provider, they should be changed if
# using a custom module. This section will be passed as a Python
# dictionary to the module's `parse_config` method.
#
    config:
# The SAML attribute (after mapping via the attribute maps) to use
# to derive the Matrix ID from. 'uid' by default.
#
# Note: This used to be configured by the
# saml2_config.mxid_source_attribute option. If that is still
# defined, its value will be used instead.
#
#mxid_source_attribute: displayName
# The mapping system to use for mapping the saml attribute onto a
# matrix ID.
#
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with
# '.').
# The default is 'hexencode'.
#
# Note: This used to be configured by the
# saml2_config.mxid_mapping option. If that is still defined, its
# value will be used instead.
#
#mxid_mapping: dotreplace
# In previous versions of synapse, the mapping from SAML attribute to
# MXID was always calculated dynamically rather than stored in a
# table. For backwards-compatibility, we will look for user_ids
# matching such a pattern before creating a new account.
#
# This setting controls the SAML attribute which will be used for this
# backwards-compatibility lookup. Typically it should be 'uid', but if
# the attribute maps are changed, it may be necessary to change it.
#
# The default is 'uid'.
#
#grandfathered_mxid_source_attribute: upn
# It is possible to configure Synapse to only allow logins if SAML attributes
# match particular values. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted.
#
#attribute_requirements:
#  - attribute: userGroup
#    value: "staff"
#  - attribute: department
#    value: "sales"
# If the metadata XML contains multiple IdP entities then the `idp_entityid`
# option must be set to the entity to redirect users to.
#
# Most deployments only have a single IdP entity and so should omit this
# option.
#
#idp_entityid: 'https://our_idp/entityid'
# List of OpenID Connect (OIDC) / OAuth 2.0 identity providers, for registration
# and login.
#
# Options for each entry include:
#
# idp_id: a unique identifier for this identity provider. Used internally
# by Synapse; should be a single word such as 'github'.
#
# Note that, if this is changed, users authenticating via that provider
# will no longer be recognised as the same user!
#
# (Use "oidc" here if you are migrating from an old "oidc_config"
# configuration.)
#
# idp_name: A user-facing name for this identity provider, which is used to
# offer the user a choice of login mechanisms.
#
# idp_icon: An optional icon for this identity provider, which is presented
# by clients and Synapse's own IdP picker page. If given, must be an
# MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to
# obtain such an MXC URI is to upload an image to an (unencrypted) room
# and then copy the "url" from the source of the event.)
#
# idp_brand: An optional brand for this identity provider, allowing clients
# to style the login flow according to the identity provider in question.
# See the spec for possible options here.
#
# discover: set to 'false' to disable the use of the OIDC discovery mechanism
# to discover endpoints. Defaults to true.
#
# issuer: Required. The OIDC issuer. Used to validate tokens and (if discovery
# is enabled) to discover the provider's endpoints.
#
# client_id: Required. oauth2 client id to use.
#
# client_secret: oauth2 client secret to use. May be omitted if
# client_secret_jwt_key is given, or if client_auth_method is 'none'.
#
# client_secret_jwt_key: Alternative to client_secret: details of a key used
# to create a JSON Web Token to be used as an OAuth2 client secret. If
# given, must be a dictionary with the following properties:
#
# key: a pem-encoded signing key. Must be a suitable key for the
# algorithm specified. Required unless 'key_file' is given.
#
# key_file: the path to file containing a pem-encoded signing key file.
# Required unless 'key' is given.
#
# jwt_header: a dictionary giving properties to include in the JWT
# header. Must include the key 'alg', giving the algorithm used to
# sign the JWT, such as "ES256", using the JWA identifiers in
# RFC7518.
#
# jwt_payload: an optional dictionary giving properties to include in
# the JWT payload. Normally this should include an 'iss' key.
#
# client_auth_method: auth method to use when exchanging the token. Valid
# values are 'client_secret_basic' (default), 'client_secret_post' and
# 'none'.
#
# scopes: list of scopes to request. This should normally include the "openid"
# scope. Defaults to ["openid"].
#
# authorization_endpoint: the oauth2 authorization endpoint. Required if
# provider discovery is disabled.
#
# token_endpoint: the oauth2 token endpoint. Required if provider discovery is
# disabled.
#
# userinfo_endpoint: the OIDC userinfo endpoint. Required if discovery is
# disabled and the 'openid' scope is not requested.
#
# jwks_uri: URI where to fetch the JWKS. Required if discovery is disabled and
# the 'openid' scope is used.
#
# skip_verification: set to 'true' to skip metadata verification. Use this if
# you are connecting to a provider that is not OpenID Connect compliant.
# Defaults to false. Avoid this in production.
#
# user_profile_method: Whether to fetch the user profile from the userinfo
# endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
# included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
# userinfo endpoint.
#
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to
# match a pre-existing account instead of failing. This could be used if
# switching from password logins to OIDC. Defaults to false.
#
# user_mapping_provider: Configuration for how attributes returned from a OIDC
# provider are mapped onto a matrix user. This setting has the following
# sub-properties:
#
# module: The class name of a custom mapping module. Default is
# 'synapse.handlers.oidc.JinjaOidcMappingProvider'.
# See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
# for information on implementing a custom mapping provider.
#
# config: Configuration for the mapping provider module. This section will
# be passed as a Python dictionary to the user mapping provider
# module's `parse_config` method.
#
# For the default provider, the following settings are available:
#
# subject_claim: name of the claim containing a unique identifier
# for the user. Defaults to 'sub', which OpenID Connect
# compliant providers should provide.
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.
#
# email_template: Jinja2 template for the email address of the user.
# If unset, no email address will be added to the account.
#
# extra_attributes: a map of Jinja2 templates for extra attributes
# to send back to the client during login.
# Note that these are non-standard and clients will ignore them
# without modifications.
#
# When rendering, the Jinja2 templates are given a 'user' variable,
# which is set to the claims returned by the UserInfo Endpoint and/or
# in the ID Token.
#
# It is possible to configure Synapse to only allow logins if certain attributes
# match particular values in the OIDC userinfo. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted. Additional attributes can be added to
# userinfo by expanding the `scopes` section of the OIDC config to retrieve
# additional information from the OIDC provider.
#
# If the OIDC claim is a list, then the attribute must match any value in the list.
# Otherwise, it must exactly match the value of the claim. Using the example
# below, the `family_name` claim MUST be "Stephensson", but the `groups`
# claim MUST contain "admin".
#
# attribute_requirements:
#   - attribute: family_name
#     value: "Stephensson"
#   - attribute: groups
#     value: "admin"
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
# for information on how to configure these options.
#
# For backwards compatibility, it is also possible to configure a single OIDC
# provider via an 'oidc_config' setting. This is now deprecated and admins are
# advised to migrate to the 'oidc_providers' format. (When doing that migration,
# use 'oidc' for the idp_id to ensure that existing users continue to be
# recognised.)
#
oidc_providers:
# Generic example
#
#- idp_id: my_idp
#  idp_name: "My OpenID provider"
#  idp_icon: "mxc://example.com/mediaid"
#  discover: false
#  issuer: "https://accounts.example.com/"
#  client_id: "provided-by-your-issuer"
#  client_secret: "provided-by-your-issuer"
#  client_auth_method: client_secret_post
#  scopes: ["openid", "profile"]
#  authorization_endpoint: "https://accounts.example.com/oauth2/auth"
#  token_endpoint: "https://accounts.example.com/oauth2/token"
#  userinfo_endpoint: "https://accounts.example.com/userinfo"
#  jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
#  skip_verification: true
#  user_mapping_provider:
#    config:
#      subject_claim: "id"
#      localpart_template: "{{ user.login }}"
#      display_name_template: "{{ user.name }}"
#      email_template: "{{ user.email }}"
#  attribute_requirements:
#    - attribute: userGroup
#      value: "synapseUsers"
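# For illustration only: a sketch of a provider entry that authenticates with
# client_secret_jwt_key instead of a plain client_secret (the key path and
# all other values below are placeholders):
#
#- idp_id: my_jwt_idp
#  idp_name: "My JWT-authenticated provider"
#  issuer: "https://accounts.example.org/"
#  client_id: "provided-by-your-issuer"
#  client_secret_jwt_key:
#    key_file: "/path/to/key.pem"
#    jwt_header:
#      alg: "ES256"
#    jwt_payload:
#      iss: "provided-by-your-issuer"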
# Enable Central Authentication Service (CAS) for registration and login.
#
cas_config:
# Uncomment the following to enable authorization against a CAS server.
# Defaults to false.
#
#enabled: true
# The URL of the CAS authorization endpoint.
#
#server_url: "https://cas-server.com"
# The attribute of the CAS response to use as the display name.
#
# If unset, no displayname will be set.
#
#displayname_attribute: name
# It is possible to configure Synapse to only allow logins if CAS attributes
# match particular values. All of the keys in the mapping below must exist
# and the values must match the given value. Alternately if the given value
# is None then any value is allowed (the attribute just must exist).
# All of the listed attributes must match for the login to be permitted.
#
#required_attributes:
#  userGroup: "staff"
#  department: None
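# For illustration only: a sketch of a complete cas_config (the server URL
# and attribute values are placeholders):
#
#cas_config:
#  enabled: true
#  server_url: "https://cas.example.com/cas"
#  displayname_attribute: name
#  required_attributes:
#    userGroup: "staff"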
# Additional settings to use with single-sign on systems such as OpenID Connect,
# SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
# have to confirm giving access to their account to the URL. Any client
# whose URL starts with an entry in the following list will not be subject
# to an additional confirmation step after the SSO login is completed.
#
# WARNING: An entry such as "https://my.client" is insecure, because it
# will also match "https://my.client.evil.site", exposing your users to
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# If public_baseurl is set, then the login fallback page (used by clients
# that don't natively support the required login flows) is whitelisted in
# addition to any URLs in this list.
#
# By default, this list is empty.
#
#client_whitelist:
# - https://riot.im/develop
# - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page to prompt the user to choose an Identity Provider during
# login: 'sso_login_idp_picker.html'.
#
# This is only used if multiple SSO Identity Providers are configured.
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL that the user will be redirected to after
# login.
#
# * server_name: the homeserver's name.
#
# * providers: a list of available Identity Providers. Each element is
# an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# The rendered HTML page should contain a form which submits its results
# back as a GET request, with the following query parameters:
#
# * redirectUrl: the client redirect URI (ie, the `redirect_url` passed
# to the template)
#
# * idp: the 'idp_id' of the chosen IDP.
#
# * HTML page to prompt new users to enter a userid and confirm other
# details: 'sso_auth_account_details.html'. This is only shown if the
# SSO implementation (with any user_mapping_provider) does not return
# a localpart.
#
# When rendering, this template is given the following variables:
#
# * server_name: the homeserver's name.
#
# * idp: details of the SSO Identity Provider that the user logged in
# with: an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# * user_attributes: an object containing details about the user that
# we received from the IdP. May have the following attributes:
#
# * display_name: the user's display_name
# * emails: a list of email addresses
#
# The template should render a form which submits the following fields:
#
# * username: the localpart of the user's chosen user id
#
# * HTML page allowing the user to consent to the server's terms and
# conditions. This is only shown for new users, and only if
# `user_consent.require_at_registration` is set.
#
# When rendering, this template is given the following variables:
#
# * server_name: the homeserver's name.
#
# * user_id: the user's proposed Matrix ID.
#
# * user_profile.display_name: the user's proposed display name, if any.
#
# * consent_version: the version of the terms that the user will be
# shown
#
# * terms_url: a link to the page showing the terms.
#
# The template should render a form which submits the following fields:
#
# * accepted_version: the version of the terms accepted by the user
# (ie, 'consent_version' from the input variables).
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given the following variables:
#
# * redirect_url: the URL the user is about to be redirected to.
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to.
#
# * server_name: the homeserver's name.
#
# * new_user: a boolean indicating whether this is the user's first time
# logging in.
#
# * user_id: the user's matrix ID.
#
# * user_profile.avatar_url: an MXC URI for the user's avatar, if any.
# None if the user has not set an avatar.
#
# * user_profile.display_name: the user's display name. None if the user
# has not set a display name.
#
# * HTML page which notifies the user that they are authenticating to confirm
# an operation on their account during the user interactive authentication
# process: 'sso_auth_confirm.html'.
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL the user is about to be redirected to.
#
# * description: the operation which the user is being asked to confirm
#
# * idp: details of the Identity Provider that we will use to confirm
# the user's identity: an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# * HTML page shown after a successful user interactive authentication session:
# 'sso_auth_success.html'.
#
# Note that this page must include the JavaScript which notifies of a successful authentication
# (see https://matrix.org/docs/spec/client_server/r0.6.0#fallback).
#
# This template has no additional variables.
#
# * HTML page shown after a user-interactive authentication session which
# does not map correctly onto the expected user: 'sso_auth_bad_user.html'.
#
# When rendering, this template is given the following variables:
# * server_name: the homeserver's name.
# * user_id_to_verify: the MXID of the user that we are trying to
# validate.
#
# * HTML page shown during single sign-on if a deactivated user (according to Synapse's database)
# attempts to login: 'sso_account_deactivated.html'.
#
# This template has no additional variables.
#
# * HTML page to display to users if something goes wrong during the
# OpenID Connect authentication process: 'sso_error.html'.
#
# When rendering, this template is given two variables:
# * error: the technical name of the error
# * error_description: a human-readable message for the error
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
# JSON web token integration. The following settings can be used to make
# Synapse use JSON web tokens for authentication, instead of its internal
# password database.
#
# Each JSON Web Token needs to contain a "sub" (subject) claim, which is
# used as the localpart of the mxid.
#
# Additionally, the expiration time ("exp"), not before time ("nbf"),
# and issued at ("iat") claims are validated if present.
#
# Note that this is a non-standard login type and client support is
# expected to be non-existent.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
#
#jwt_config:
# Uncomment the following to enable authorization using JSON web
# tokens. Defaults to false.
#
#enabled: true
# This is either the private shared secret or the public key used to
# decode the contents of the JSON web token.
#
# Required if 'enabled' is true.
#
#secret: "provided-by-your-issuer"
# The algorithm used to sign the JSON web token.
#
# Supported algorithms are listed at
# https://pyjwt.readthedocs.io/en/latest/algorithms.html
#
# Required if 'enabled' is true.
#
#algorithm: "provided-by-your-issuer"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
# validated for all JSON web tokens.
#
#issuer: "provided-by-your-issuer"
# A list of audiences to validate the "aud" claim against.
#
# Optional, if provided the "aud" claim will be required and
# validated for all JSON web tokens.
#
# Note that if the "aud" claim is included in a JSON web token then
# validation will fail without configuring audiences.
#
#audiences:
# - "provided-by-your-issuer"
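#
# As an assembled sketch (all values are placeholders; check your issuer's
# documentation for the real ones), a configuration for an issuer signing
# tokens with HS256 might look like:
#
#jwt_config:
#  enabled: true
#  secret: "provided-by-your-issuer"
#  algorithm: "HS256"
#  issuer: "https://issuer.example.com"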
password_config:
# Uncomment to disable password login
#
#enabled: false
# Uncomment to disable authentication against the local password
# database. This is ignored if `enabled` is false, and is only useful
# if you have other password_providers.
#
#localdb_enabled: false
# Uncomment and change to a secret random string for extra security.
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#
#pepper: "EVEN_MORE_SECRET"
# Define and enforce a password policy. Each parameter is optional.
# This is an implementation of MSC2000.
#
policy:
# Whether to enforce the password policy.
# Defaults to 'false'.
#
#enabled: true
# Minimum accepted length for a password.
# Defaults to 0.
#
#minimum_length: 15
# Whether a password must contain at least one digit.
# Defaults to 'false'.
#
#require_digit: true
# Whether a password must contain at least one symbol.
# A symbol is any character that's not a number or a letter.
# Defaults to 'false'.
#
#require_symbol: true
# Whether a password must contain at least one lowercase letter.
# Defaults to 'false'.
#
#require_lowercase: true
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true
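#
# As a sketch, uncommenting the following lines together would enforce
# passwords of at least 12 characters containing at least one digit and
# one symbol:
#
#enabled: true
#minimum_length: 12
#require_digit: true
#require_symbol: true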
ui_auth:
# The amount of time to allow a user-interactive authentication session
# to be active.
#
# This defaults to 0, meaning the user is queried for their credentials
# before every action, but this can be overridden to allow a single
# validation to be re-used. This weakens the protections afforded by
# the user-interactive authentication process, by allowing for multiple
# (and potentially different) operations to use the same validation session.
#
# Uncomment below to allow for credential validation to last for 15
# seconds.
#
#session_timeout: "15s"
# Configuration for sending emails from Synapse.
#
email:
# The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
#
#smtp_host: mail.server
# The port on the mail server for outgoing SMTP. Defaults to 25.
#
#smtp_port: 587
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
#smtp_user: "exampleusername"
#smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to
# TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
# Synapse will refuse to connect unless the server supports STARTTLS.
#
#require_transport_security: true
# notif_from defines the "From" address to use when sending emails.
# It must be set if email sending is enabled.
#
# The placeholder '%(app)s' will be replaced by the application name,
# which is normally 'app_name' (below), but may be overridden by the
# Matrix client application.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name defines the default value for '%(app)s' in notif_from and email
# subjects. It defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
# Uncomment the following to enable sending emails for messages that the user
# has missed. Disabled by default.
#
#enable_notifs: true
# Uncomment the following to disable automatic subscription to email
# notifications for new users. Enabled by default.
#
#notif_for_new_users: false
# Custom URL for client links within the email notifications. By default
# links will be based on "https://matrix.to".
#
# (This setting used to be called riot_base_url; the old name is still
# supported for backwards-compatibility but is now deprecated.)
#
#client_base_url: "http://localhost/riot"
# Configure the time that a validation email will expire after sending.
# Defaults to 1h.
#
#validation_token_lifetime: 15m
# The web client location to direct users to during an invite. This is passed
# to the identity server as the org.matrix.web_client_location key. Defaults
# to unset, giving no guidance to the identity server.
#
#invite_client_location: https://app.element.io
# Directory in which Synapse will try to find the template files below.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory:
#
# * The contents of email notifications of missed events: 'notif_mail.html' and
# 'notif_mail.txt'.
#
# * The contents of account expiry notice emails: 'notice_expiry.html' and
# 'notice_expiry.txt'.
#
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * An HTML page that a user will see when they follow the link in the password
# reset email. The user will be asked to confirm the action before their
# password is reset: 'password_reset_confirmation.html'
#
# * HTML pages for success and failure that a user will see when they confirm
# the password reset flow using the page above: 'password_reset_success.html'
# and 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent during registration:
# 'registration_success.html' and 'registration_failure.html'
#
# * The contents of address verification emails sent when an address is added
# to a Matrix account: 'add_threepid.html' and 'add_threepid.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent when an address is added
# to a Matrix account: 'add_threepid_success.html' and
# 'add_threepid_failure.html'
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
# Subjects to use when sending emails from Synapse.
#
# The placeholder '%(app)s' will be replaced with the value of the 'app_name'
# setting above, or by a value dictated by the Matrix client application.
#
# If a subject isn't overridden in this configuration file, the default value
# shown in its example below will be used.
#
#subjects:
# Subjects for notification emails.
#
# On top of the '%(app)s' placeholder, these can use the following
# placeholders:
#
# * '%(person)s', which will be replaced by the display name of the user(s)
# that sent the message(s), e.g. "Alice and Bob".
# * '%(room)s', which will be replaced by the name of the room the
# message(s) have been sent to, e.g. "My super room".
#
# See the example provided for each setting to see which placeholders can be
# used and how to use them.
#
# Subject to use to notify about one message from one or more user(s) in a
# room which has a name.
#message_from_person_in_room: "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room..."
#
# Subject to use to notify about one message from one or more user(s) in a
# room which doesn't have a name.
#message_from_person: "[%(app)s] You have a message on %(app)s from %(person)s..."
#
# Subject to use to notify about multiple messages from one or more users in
# a room which doesn't have a name.
#messages_from_person: "[%(app)s] You have messages on %(app)s from %(person)s..."
#
# Subject to use to notify about multiple messages in a room which has a
# name.
#messages_in_room: "[%(app)s] You have messages on %(app)s in the %(room)s room..."
#
# Subject to use to notify about multiple messages in multiple rooms.
#messages_in_room_and_others: "[%(app)s] You have messages on %(app)s in the %(room)s room and others..."
#
# Subject to use to notify about multiple messages from multiple persons in
# multiple rooms. This is similar to the setting above except it's used when
# the room in which the notification was triggered has no name.
#messages_from_person_and_others: "[%(app)s] You have messages on %(app)s from %(person)s and others..."
#
# Subject to use to notify about an invite to a room which has a name.
#invite_from_person_to_room: "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s..."
#
# Subject to use to notify about an invite to a room which doesn't have a
# name.
#invite_from_person: "[%(app)s] %(person)s has invited you to chat on %(app)s..."
# Subjects for emails related to account administration.
#
# On top of the '%(app)s' placeholder, these can also use the
# '%(server_name)s' placeholder, which will be replaced by the value of the
# 'server_name' setting in your Synapse configuration.
#
# Subject to use when sending a password reset email.
#password_reset: "[%(server_name)s] Password reset"
#
# Subject to use when sending a verification email to assert an address's
# ownership.
#email_validation: "[%(server_name)s] Validate your email"
# Password providers allow homeserver administrators to integrate
# their Synapse installation with existing authentication methods
# e.g. LDAP, external tokens, etc.
#
# For more information and known implementations, please see
# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
#
# Note: instances wishing to use SAML or CAS authentication should
# instead use the `saml2_config` or `cas_config` options,
# respectively.
#
password_providers:
# # Example config for an LDAP auth provider
# - module: "ldap_auth_provider.LdapAuthProvider"
# config:
# enabled: true
# uri: "ldap://ldap.example.com:389"
# start_tls: true
# base: "ou=users,dc=example,dc=com"
# attributes:
# uid: "cn"
# mail: "email"
# name: "givenName"
# #bind_dn:
# #bind_password:
# #filter: "(objectClass=posixAccount)"
## Push ##
push:
# Clients requesting push notifications can either have the body of
# the message sent in the notification poke along with other details
# like the sender, or just the event ID and room ID (`event_id_only`).
# If clients choose the former, this option controls whether the
# notification request includes the content of the event (other details
# like the sender are still included). For `event_id_only` push, it
# has no effect.
#
# For modern Android devices the notification content will still appear
# because it is loaded by the app. iPhones, however, will show a
# notification saying only that a message arrived and who it came from.
#
# The default value is "true" to include message details. Uncomment to only
# include the event ID and room ID in push notification payloads.
#
#include_content: false
# When a push notification is received, an unread count is also sent.
# This number can either be calculated as the number of unread messages
# for the user, or the number of *rooms* the user has unread messages in.
#
# The default value is "true", meaning push clients will see the number of
# rooms with unread messages in them. Uncomment to instead send the number
# of unread messages.
#
#group_unread_count_by_room: false
# Spam checkers are third-party modules that can block specific actions
# of local users, such as creating rooms and registering undesirable
# usernames, as well as remote users by redacting incoming events.
#
spam_checker:
#- module: "my_custom_project.SuperSpamChecker"
# config:
# example_option: 'things'
#- module: "some_other_project.BadEventStopper"
# config:
# example_stop_events_from: ['@bad:example.com']
## Rooms ##
# Controls whether locally-created rooms should be end-to-end encrypted by
# default.
#
# Possible options are "all", "invite", and "off". They are defined as:
#
# * "all": any locally-created room
# * "invite": any room created with the "private_chat" or "trusted_private_chat"
# room creation presets
# * "off": this option has no effect
#
# The default value is "off".
#
# Note that this option will only affect rooms created after it is set. It
# will also not affect rooms created by other servers.
#
#encryption_enabled_by_default_for_room_type: invite
# Uncomment to allow non-server-admin users to create groups on this server
#
#enable_group_creation: true
# If enabled, non server admins can only create groups with local parts
# starting with this prefix
#
#group_creation_prefix: "unofficial_"
# User Directory configuration
#
user_directory:
# Defines whether users can search the user directory. If false then
# empty responses are returned to all queries. Defaults to true.
#
# Uncomment to disable the user directory.
#
#enabled: false
# Defines whether to search all users visible to your HS when searching
# the user directory, rather than limiting to users visible in public
# rooms. Defaults to false.
#
# If you set it to true, you'll have to rebuild the user_directory search
# indexes, see:
# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.
#
#search_all_users: true
# Defines whether to prefer local users in search query results.
# If True, local users are more likely to appear above remote users
# when searching the user directory. Defaults to false.
#
# Uncomment to prefer local over remote users in user directory search
# results.
#
#prefer_local_users: true
# User Consent configuration
#
# for detailed instructions, see
# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
#
# Parts of this section are required if enabling the 'consent' resource under
# 'listeners', in particular 'template_dir' and 'version'.
#
# 'template_dir' gives the location of the templates for the HTML forms.
# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
# and each language directory should contain the policy document (named as
# '<version>.html') and a success page (success.html).
#
# 'version' specifies the 'current' version of the policy document. It defines
# the version to be served by the consent resource if there is no 'v'
# parameter.
#
# 'server_notice_content', if enabled, will send a user a "Server Notice"
# asking them to consent to the privacy policy. The 'server_notices' section
# must also be configured for this to work. Notices will *not* be sent to
# guest users unless 'send_server_notice_to_guests' is set to true.
#
# 'block_events_error', if set, will block any attempts to send events
# until the user consents to the privacy policy. The value of the setting is
# used as the text of the error.
#
# 'require_at_registration', if enabled, will add a step to the registration
# process, similar to how captcha works. Users will be required to accept the
# policy before their account is created.
#
# 'policy_name' is the display name of the policy users will see when registering
# for an account. Has no effect unless `require_at_registration` is enabled.
# Defaults to "Privacy Policy".
#
#user_consent:
# template_dir: res/templates/privacy
# version: 1.0
# server_notice_content:
# msgtype: m.text
# body: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# send_server_notice_to_guests: true
# block_events_error: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# require_at_registration: false
# policy_name: Privacy Policy
#
# Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md.
#
stats:
# Uncomment the following to disable room and user statistics. Note that doing
# so may cause certain features (such as the room directory) not to work
# correctly.
#
#enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
# Server Notices room configuration
#
# Uncomment this section to enable a room which can be used to send notices
# from the server to users. It is a special room which cannot be left; notices
# come from a special "notices" user id.
#
# If you uncomment this section, you *must* define the system_mxid_localpart
# setting, which defines the id of the user which will be used to send the
# notices.
#
# It's also possible to override the room name, the display name of the
# "notices" user, and the avatar for the user.
#
#server_notices:
# system_mxid_localpart: notices
# system_mxid_display_name: "Server Notices"
# system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
# room_name: "Server Notices"
# Uncomment to disable searching the public room list. When disabled,
# searches of local and remote room lists by local and remote users are
# blocked, and an empty list is returned for all queries.
#
#enable_room_list_search: false
# The `alias_creation` option controls who's allowed to create aliases
# on this server.
#
# The format of this option is a list of rules that contain globs that
# match against user_id, room_id and the new alias (fully qualified with
# server name). The action in the first rule that matches is taken,
# which can currently either be "allow" or "deny".
#
# Missing user_id/room_id/alias fields default to "*".
#
# If no rules match the request is denied. An empty list means no one
# can create aliases.
#
# Options for the rules include:
#
# user_id: Matches against the creator of the alias
# alias: Matches against the alias being created
# room_id: Matches against the room ID the alias is being pointed at
# action: Whether to "allow" or "deny" the request if the rule matches
#
# The default is:
#
#alias_creation_rules:
# - user_id: "*"
# alias: "*"
# room_id: "*"
# action: allow
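#
# As an illustrative sketch (the MXID is hypothetical), the following would
# prevent one user from creating aliases while still allowing everyone
# else; the first matching rule wins:
#
#alias_creation_rules:
#  - user_id: "@spammer:example.com"
#    action: deny
#  - user_id: "*"
#    action: allow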
# The `room_list_publication_rules` option controls who can publish and
# which rooms can be published in the public room list.
#
# The format of this option is the same as that for
# `alias_creation_rules`.
#
# If the room has one or more aliases associated with it, only one of
# the aliases needs to match the alias rule. If there are no aliases
# then only rules with `alias: *` match.
#
# If no rules match the request is denied. An empty list means no one
# can publish rooms.
#
# Options for the rules include:
#
# user_id: Matches against the creator of the alias
# room_id: Matches against the room ID being published
# alias: Matches against any current local or canonical aliases
# associated with the room
# action: Whether to "allow" or "deny" the request if the rule matches
#
# The default is:
#
#room_list_publication_rules:
# - user_id: "*"
# alias: "*"
# room_id: "*"
# action: allow
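#
# As an illustrative sketch (the MXID is hypothetical), the following would
# allow only a single trusted user to publish rooms; all other requests
# match no rule and are therefore denied:
#
#room_list_publication_rules:
#  - user_id: "@admin:example.com"
#    action: allow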
# Server admins can define a Python module that implements extra rules for
# allowing or denying incoming events. In order to work, this module needs to
# override the methods defined in synapse/events/third_party_rules.py.
#
# This feature is designed to be used in closed federations only, where each
# participating server enforces the same rules.
#
#third_party_event_rules:
# module: "my_custom_project.SuperRulesSet"
# config:
# example_option: 'things'
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other service which supports opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers with which we wish to exchange span contexts and span baggage.
# See docs/opentracing.rst.
#
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"
# A list of the matrix IDs of users whose requests will always be traced,
# even if the tracing system would otherwise drop the traces due to
# probabilistic sampling.
#
# By default, the list is empty.
#
#force_tracing_for_users:
# - "@user1:server_name"
# - "@user2:server_name"
# Jaeger can be configured to sample traces at different rates.
# All configuration options provided by Jaeger can be set here.
# Jaeger's configuration is mostly related to trace sampling which
# is documented here:
# https://www.jaegertracing.io/docs/latest/sampling/.
#
#jaeger_config:
# sampler:
# type: const
# param: 1
# logging:
# false
## Workers ##
# Disables sending of outbound federation transactions on the main process.
# Uncomment if using a federation sender worker.
#
#send_federation: false
# It is possible to run multiple federation sender workers, in which case the
# work is balanced across them.
#
# This configuration must be shared between all federation sender workers, and if
# changed all federation sender workers must be stopped at the same time and then
# started, to ensure that all instances are running with the same config (otherwise
# events may be dropped).
#
#federation_sender_instances:
# - federation_sender1
# When using workers this should be a map from `worker_name` to the
# HTTP replication listener of the worker, if configured.
#
#instance_map:
# worker1:
# host: localhost
# port: 8034
# Experimental: When using workers you can define which workers should
# handle event persistence and typing notifications. Any worker
# specified here must also be in the `instance_map`.
#
#stream_writers:
# events: worker1
# typing: worker1
# The worker that is used to run background tasks (e.g. cleaning up expired
# data). If not provided this defaults to the main process.
#
#run_background_tasks_on: worker1
# A shared secret used by the replication APIs to authenticate HTTP requests
# from workers.
#
# By default this is unused and traffic is not authenticated.
#
#worker_replication_secret: ""
# Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration).
#
redis:
# Uncomment the below to enable Redis support.
#
#enabled: true
# Optional host and port to use to connect to redis. Defaults to
# localhost and 6379
#
#host: localhost
#port: 6379
# Optional password if configured on the Redis instance
#
#password: <secret_password>
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "logging-sample-configuration-file" > < a class = "header" href = "#logging-sample-configuration-file" > Logging Sample Configuration File< / a > < / h1 >
< p > Below is a sample logging configuration file. This file can be tweaked to control how your
homeserver will output logs. A restart of the server is generally required to apply any
changes made to this file.< / p >
< p > Note that the contents below are < em > not< / em > intended to be copied and used as the basis for
a real homeserver.yaml. Instead, if you are starting from scratch, please generate
a fresh config using Synapse by following the instructions in
< a href = "usage/configuration/../../setup/installation.html" > Installation< / a > .< / p >
< pre > < code class = "language-yaml" > # Log configuration for Synapse.
#
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# Synapse also supports structured logging for machine readable logs which can
# be ingested by ELK stacks. See [2] for details.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
handlers:
file:
class: logging.handlers.TimedRotatingFileHandler
formatter: precise
filename: /var/log/matrix-synapse/homeserver.log
when: midnight
backupCount: 3 # Does not include the current log file.
encoding: utf8
# Default to buffering writes to log file for efficiency. This means that
# there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
# performance, at the expense of it taking longer for log lines to
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.
console:
class: logging.StreamHandler
formatter: precise
loggers:
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: INFO
twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false
root:
level: INFO
# Write logs to the `buffer` handler, which will buffer them together in memory,
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuration for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]
disable_existing_loggers: false
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "structured-logging" > < a class = "header" href = "#structured-logging" > Structured Logging< / a > < / h1 >
< p > A structured logging system can be useful when your logs are destined for a
machine to parse and process. By maintaining its machine-readable characteristics,
it enables more efficient searching and aggregations when consumed by software
such as the "ELK stack".< / p >
< p > Synapse's structured logging system is configured via the file that Synapse's
< code > log_config< / code > config option points to. The file should include a formatter which
uses the < code > synapse.logging.TerseJsonFormatter< / code > class included with Synapse and a
handler which uses the above formatter.< / p >
< p > There is also a < code > synapse.logging.JsonFormatter< / code > option which does not include
a timestamp in the resulting JSON. This is useful if the log ingester adds its
own timestamp.< / p >
< p > A structured logging configuration looks similar to the following:< / p >
< pre > < code class = "language-yaml" > version: 1
formatters:
structured:
class: synapse.logging.TerseJsonFormatter
handlers:
file:
class: logging.handlers.TimedRotatingFileHandler
formatter: structured
filename: /path/to/my/logs/homeserver.log
when: midnight
backupCount: 3 # Does not include the current log file.
encoding: utf8
loggers:
synapse:
level: INFO
handlers: [remote]
synapse.storage.SQL:
level: WARNING
< / code > < / pre >
< p > The above logging config will set Synapse as 'INFO' logging level by default,
with the SQL layer at 'WARNING', and will log to a file, stored as JSON.< / p >
< p > It is also possible to configure Synapse to log to a remote endpoint by using the
< code > synapse.logging.RemoteHandler< / code > class included with Synapse. It takes the
following arguments:< / p >
< ul >
< li > < code > host< / code > : Hostname or IP address of the log aggregator.< / li >
< li > < code > port< / code > : Numerical port to contact on the host.< / li >
< li > < code > maximum_buffer< / code > : (Optional, defaults to 1000) The maximum buffer size to allow.< / li >
< / ul >
< p > A remote structured logging configuration looks similar to the following:< / p >
< pre > < code class = "language-yaml" > version: 1
formatters:
structured:
class: synapse.logging.TerseJsonFormatter
handlers:
remote:
class: synapse.logging.RemoteHandler
formatter: structured
host: 10.1.2.3
port: 9999
loggers:
synapse:
level: INFO
handlers: [remote]
synapse.storage.SQL:
level: WARNING
< / code > < / pre >
< p > The above logging config will set Synapse as 'INFO' logging level by default,
with the SQL layer at 'WARNING', and will log JSON formatted messages to a
remote endpoint at 10.1.2.3:9999.< / p >
< h2 id = "upgrading-from-legacy-structured-logging-configuration" > < a class = "header" href = "#upgrading-from-legacy-structured-logging-configuration" > Upgrading from legacy structured logging configuration< / a > < / h2 >
< p > Versions of Synapse prior to v1.23.0 included a custom structured logging
configuration which is deprecated. It used a < code > structured: true< / code > flag and
configured < code > drains< / code > instead of < code > handlers< / code > and < code > formatters< / code > .< / p >
< p > Synapse currently automatically converts the old configuration to the new
configuration, but this will be removed in a future version of Synapse. The
following reference can be used to update your configuration. Based on the drain
< code > type< / code > , we can pick a new handler:< / p >
< ol >
< li > For a type of < code > console< / code > , < code > console_json< / code > , or < code > console_json_terse< / code > : a handler
with a class of < code > logging.StreamHandler< / code > and a < code > stream< / code > of < code > ext://sys.stdout< / code >
or < code > ext://sys.stderr< / code > should be used.< / li >
< li > For a type of < code > file< / code > or < code > file_json< / code > : a handler of < code > logging.FileHandler< / code > with
a location of the file path should be used.< / li >
< li > For a type of < code > network_json_terse< / code > : a handler of < code > synapse.logging.RemoteHandler< / code >
with the host and port should be used.< / li >
< / ol >
< p > Then based on the drain < code > type< / code > we can pick a new formatter:< / p >
< ol >
< li > For a type of < code > console< / code > or < code > file< / code > no formatter is necessary.< / li >
< li > For a type of < code > console_json< / code > or < code > file_json< / code > : a formatter of
< code > synapse.logging.JsonFormatter< / code > should be used.< / li >
< li > For a type of < code > console_json_terse< / code > or < code > network_json_terse< / code > : a formatter of
< code > synapse.logging.TerseJsonFormatter< / code > should be used.< / li >
< / ol >
< p > Each new handler and formatter should be added to the logging configuration
and then assigned to either a logger or the root logger.< / p >
< p > An example legacy configuration:< / p >
< pre > < code class = "language-yaml" > structured: true
loggers:
synapse:
level: INFO
synapse.storage.SQL:
level: WARNING
drains:
console:
type: console
location: stdout
file:
type: file_json
location: homeserver.log
< / code > < / pre >
< p > Would be converted into a new configuration:< / p >
< pre > < code class = "language-yaml" > version: 1
formatters:
json:
class: synapse.logging.JsonFormatter
handlers:
console:
class: logging.StreamHandler
location: ext://sys.stdout
file:
class: logging.FileHandler
formatter: json
filename: homeserver.log
loggers:
synapse:
level: INFO
handlers: [console, file]
synapse.storage.SQL:
level: WARNING
< / code > < / pre >
< p > The new logging configuration is a bit more verbose, but significantly more
flexible. It allows for configurations that were not previously possible, such as
sending plain logs over the network, or using different handlers for different
modules.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "user-authentication" > < a class = "header" href = "#user-authentication" > User Authentication< / a > < / h1 >
< p > Synapse supports multiple methods of authenticating users, either out-of-the-box or through custom pluggable
authentication modules.< / p >
< p > Included in Synapse is support for authenticating users via:< / p >
< ul >
< li > A username and password.< / li >
< li > An email address and password.< / li >
< li > Single Sign-On through the SAML, OpenID Connect or CAS protocols.< / li >
< li > JSON Web Tokens.< / li >
< li > An administrator's shared secret.< / li >
< / ul >
< p > Synapse can additionally be extended to support custom authentication schemes through optional " password auth provider"
modules.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "configuring-synapse-to-authenticate-against-an-openid-connect-provider" > < a class = "header" href = "#configuring-synapse-to-authenticate-against-an-openid-connect-provider" > Configuring Synapse to authenticate against an OpenID Connect provider< / a > < / h1 >
< p > Synapse can be configured to use an OpenID Connect Provider (OP) for
authentication, instead of its own local password database.< / p >
< p > Any OP should work with Synapse, as long as it supports the authorization code
flow. There are a few options for that:< / p >
< ul >
< li >
< p > start a local OP. Synapse has been tested with < a href = "https://www.ory.sh/docs/hydra/" > Hydra< / a > and
< a href = "https://github.com/dexidp/dex" > Dex< / a > . Note that for an OP to work, it should be served under a
secure (HTTPS) origin. A certificate signed with a self-signed, locally
trusted CA should work. In that case, start Synapse with a < code > SSL_CERT_FILE< / code >
environment variable set to the path of the CA.< / p >
< / li >
< li >
< p > set up a SaaS OP, like < a href = "https://developers.google.com/identity/protocols/oauth2/openid-connect" > Google< / a > , < a href = "https://auth0.com/" > Auth0< / a > or
< a href = "https://www.okta.com/" > Okta< / a > . Synapse has been tested with Auth0 and Google.< / p >
< / li >
< / ul >
< p > It may also be possible to use other OAuth2 providers which provide the
< a href = "https://tools.ietf.org/html/rfc6749#section-4.1" > authorization code grant type< / a > ,
such as < a href = "https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps" > GitHub< / a > .< / p >
< h2 id = "preparing-synapse" > < a class = "header" href = "#preparing-synapse" > Preparing Synapse< / a > < / h2 >
< p > The OpenID integration in Synapse uses the
< a href = "https://pypi.org/project/Authlib/" > < code > authlib< / code > < / a > library, which must be installed
as follows:< / p >
< ul >
< li >
< p > The relevant libraries are included in the Docker images and Debian packages
provided by < code > matrix.org< / code > so no further action is needed.< / p >
< / li >
< li >
< p > If you installed Synapse into a virtualenv, run < code > /path/to/env/bin/pip install matrix-synapse[oidc]< / code > to install the necessary dependencies.< / p >
< / li >
< li >
< p > For other installation mechanisms, see the documentation provided by the
maintainer.< / p >
< / li >
< / ul >
< p > To enable the OpenID integration, you should then add a section to the < code > oidc_providers< / code >
setting in your configuration file (or uncomment one of the existing examples).
See < a href = "./sample_config.yaml" > sample_config.yaml< / a > for some sample settings, as well as
the text below for example configurations for specific providers.< / p >
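< p > As a rough starting point, a provider entry in < code > oidc_providers< / code > generally
takes the shape below. Every value here is a placeholder to be replaced with
details from your own provider; the per-provider sections that follow give
concrete values:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
  - idp_id: myprovider          # a unique, stable identifier for this provider
    idp_name: "My Provider"     # the name shown to users on the login page
    issuer: "https://accounts.example.com/"
    client_id: "your-client-id"
    client_secret: "your-client-secret"
    scopes: ["openid", "profile"]
    user_mapping_provider:
      config:
        localpart_template: "{{ user.preferred_username }}"
        display_name_template: "{{ user.name }}"
< / code > < / pre >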
< h2 id = "sample-configs" > < a class = "header" href = "#sample-configs" > Sample configs< / a > < / h2 >
< p > Here are a few configs for providers that should work with Synapse.< / p >
< h3 id = "microsoft-azure-active-directory" > < a class = "header" href = "#microsoft-azure-active-directory" > Microsoft Azure Active Directory< / a > < / h3 >
< p > Azure AD can act as an OpenID Connect Provider. Register a new application under
< em > App registrations< / em > in the Azure AD management console. The redirect URI for your
application should point to your Matrix server:
< code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > < / p >
< p > Go to < em > Certificates & secrets< / em > and register a new client secret. Make note of your
Directory (tenant) ID as it will be used in the Azure links.
Edit your Synapse config file and change the < code > oidc_config< / code > section:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: microsoft
idp_name: Microsoft
issuer: " https://login.microsoftonline.com/< tenant id> /v2.0"
client_id: " < client id> "
client_secret: " < client secret> "
scopes: [" openid" , " profile" ]
authorization_endpoint: " https://login.microsoftonline.com/< tenant id> /oauth2/v2.0/authorize"
token_endpoint: " https://login.microsoftonline.com/< tenant id> /oauth2/v2.0/token"
userinfo_endpoint: " https://graph.microsoft.com/oidc/userinfo"
user_mapping_provider:
config:
localpart_template: " {{ user.preferred_username.split('@')[0] }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< h3 id = "a-hrefhttpsgithubcomdexidpdexdexa" > < a class = "header" href = "#a-hrefhttpsgithubcomdexidpdexdexa" > < a href = "https://github.com/dexidp/dex" > Dex< / a > < / a > < / h3 >
< p > < a href = "https://github.com/dexidp/dex" > Dex< / a > is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider with an
external database, it can be configured with static passwords in a config file.< / p >
< p > Follow the < a href = "https://dexidp.io/docs/getting-started/" > Getting Started guide< / a >
to install Dex.< / p >
< p > Edit < code > examples/config-dev.yaml< / code > config file from the Dex repo to add a client:< / p >
< pre > < code class = "language-yaml" > staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse public baseurl]/_synapse/client/oidc/callback'
name: 'Synapse'
< / code > < / pre >
< p > Run with < code > dex serve examples/config-dev.yaml< / code > .< / p >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: dex
idp_name: " My Dex server"
skip_verification: true # This is needed as Dex is served on an insecure endpoint
issuer: " http://127.0.0.1:5556/dex"
client_id: " synapse"
client_secret: " secret"
scopes: [" openid" , " profile" ]
user_mapping_provider:
config:
localpart_template: " {{ user.name }}"
display_name_template: " {{ user.name|capitalize }}"
< / code > < / pre >
< h3 id = "a-hrefhttpswwwkeycloakorgdocslatestserver_adminsso-protocolskeycloaka" > < a class = "header" href = "#a-hrefhttpswwwkeycloakorgdocslatestserver_adminsso-protocolskeycloaka" > < a href = "https://www.keycloak.org/docs/latest/server_admin/#sso-protocols" > Keycloak< / a > < / a > < / h3 >
< p > < a href = "https://www.keycloak.org/docs/latest/server_admin/#sso-protocols" > Keycloak< / a > is an open-source IdP maintained by Red Hat.< / p >
< p > Follow the < a href = "https://www.keycloak.org/getting-started" > Getting Started Guide< / a > to install Keycloak and set up a realm.< / p >
< ol >
< li >
< p > Click < code > Clients< / code > in the sidebar and click < code > Create< / code > < / p >
< / li >
< li >
< p > Fill in the fields as below:< / p >
< / li >
< / ol >
< table > < thead > < tr > < th > Field< / th > < th > Value< / th > < / tr > < / thead > < tbody >
< tr > < td > Client ID< / td > < td > < code > synapse< / code > < / td > < / tr >
< tr > < td > Client Protocol< / td > < td > < code > openid-connect< / code > < / td > < / tr >
< / tbody > < / table >
< ol start = "3" >
< li > Click < code > Save< / code > < / li >
< li > Fill in the fields as below:< / li >
< / ol >
< table > < thead > < tr > < th > Field< / th > < th > Value< / th > < / tr > < / thead > < tbody >
< tr > < td > Client ID< / td > < td > < code > synapse< / code > < / td > < / tr >
< tr > < td > Enabled< / td > < td > < code > On< / code > < / td > < / tr >
< tr > < td > Client Protocol< / td > < td > < code > openid-connect< / code > < / td > < / tr >
< tr > < td > Access Type< / td > < td > < code > confidential< / code > < / td > < / tr >
< tr > < td > Valid Redirect URIs< / td > < td > < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > < / td > < / tr >
< / tbody > < / table >
< ol start = "5" >
< li > Click < code > Save< / code > < / li >
< li > On the Credentials tab, update the fields:< / li >
< / ol >
< table > < thead > < tr > < th > Field< / th > < th > Value< / th > < / tr > < / thead > < tbody >
< tr > < td > Client Authenticator< / td > < td > < code > Client ID and Secret< / code > < / td > < / tr >
< / tbody > < / table >
< ol start = "7" >
< li > Click < code > Regenerate Secret< / code > < / li >
< li > Copy Secret< / li >
< / ol >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: keycloak
idp_name: " My KeyCloak server"
issuer: " https://127.0.0.1:8443/auth/realms/{realm_name}"
client_id: " synapse"
client_secret: " copy secret generated from above"
scopes: [" openid" , " profile" ]
user_mapping_provider:
config:
localpart_template: " {{ user.preferred_username }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< h3 id = "a-hrefhttpsauth0comauth0a" > < a class = "header" href = "#a-hrefhttpsauth0comauth0a" > < a href = "https://auth0.com/" > Auth0< / a > < / a > < / h3 >
< ol >
< li >
< p > Create a regular web application for Synapse< / p >
< / li >
< li >
< p > Set the Allowed Callback URLs to < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > < / p >
< / li >
< li >
< p > Add a rule to add the < code > preferred_username< / code > claim.< / p >
< details >
< summary > Code sample< / summary >
< pre > < code class = "language-js" > function addPersistenceAttribute(user, context, callback) {
user.user_metadata = user.user_metadata || {};
user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
context.idToken.preferred_username = user.user_metadata.preferred_username;
auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
.then(function(){
callback(null, user, context);
})
.catch(function(err){
callback(err);
});
}
< / code > < / pre >
< / details >
< / li >
< / ol >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: auth0
idp_name: Auth0
issuer: " https://your-tier.eu.auth0.com/" # TO BE FILLED
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
scopes: [" openid" , " profile" ]
user_mapping_provider:
config:
localpart_template: " {{ user.preferred_username }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< h3 id = "github" > < a class = "header" href = "#github" > GitHub< / a > < / h3 >
< p > GitHub is a bit special as it is not an OpenID Connect compliant provider, but
just a regular OAuth2 provider.< / p >
< p > The < a href = "https://developer.github.com/v3/users/#get-the-authenticated-user" > < code > /user< / code > API endpoint< / a >
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a < code > sub< / code > property, an alternative < code > subject_claim< / code > has to be set.< / p >
< ol >
< li > Create a new OAuth application: https://github.com/settings/applications/new.< / li >
< li > Set the callback URL to < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > .< / li >
< / ol >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: github
idp_name: Github
idp_brand: " github" # optional: styling hint for clients
discover: false
issuer: " https://github.com/"
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
authorization_endpoint: " https://github.com/login/oauth/authorize"
token_endpoint: " https://github.com/login/oauth/access_token"
userinfo_endpoint: " https://api.github.com/user"
scopes: [" read:user" ]
user_mapping_provider:
config:
subject_claim: " id"
localpart_template: " {{ user.login }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< h3 id = "a-hrefhttpsdevelopersgooglecomidentityprotocolsoauth2openid-connectgooglea" > < a class = "header" href = "#a-hrefhttpsdevelopersgooglecomidentityprotocolsoauth2openid-connectgooglea" > < a href = "https://developers.google.com/identity/protocols/oauth2/openid-connect" > Google< / a > < / a > < / h3 >
< ol >
< li > Set up a project in the Google API Console (see
https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).< / li >
< li > add an " OAuth Client ID" for a Web Application under " Credentials" .< / li >
< li > Copy the Client ID and Client Secret, and add the following to your synapse config:
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: google
idp_name: Google
idp_brand: " google" # optional: styling hint for clients
issuer: " https://accounts.google.com/"
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
scopes: [" openid" , " profile" ]
user_mapping_provider:
config:
localpart_template: " {{ user.given_name|lower }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< / li >
< li > Back in the Google console, add this Authorized redirect URI: < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > .< / li >
< / ol >
< h3 id = "twitch" > < a class = "header" href = "#twitch" > Twitch< / a > < / h3 >
< ol >
< li > Set up a developer account on < a href = "https://dev.twitch.tv/" > Twitch< / a > < / li >
< li > Obtain the OAuth 2.0 credentials by < a href = "https://dev.twitch.tv/console/apps/" > creating an app< / a > < / li >
< li > Add this OAuth Redirect URL: < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > < / li >
< / ol >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: twitch
idp_name: Twitch
issuer: " https://id.twitch.tv/oauth2/"
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
client_auth_method: " client_secret_post"
user_mapping_provider:
config:
localpart_template: " {{ user.preferred_username }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< h3 id = "gitlab" > < a class = "header" href = "#gitlab" > GitLab< / a > < / h3 >
< ol >
< li > Create a < a href = "https://gitlab.com/profile/applications" > new application< / a > .< / li >
< li > Add the < code > read_user< / code > and < code > openid< / code > scopes.< / li >
< li > Add this Callback URL: < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > < / li >
< / ol >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: gitlab
idp_name: Gitlab
idp_brand: " gitlab" # optional: styling hint for clients
issuer: " https://gitlab.com/"
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
client_auth_method: " client_secret_post"
scopes: [" openid" , " read_user" ]
user_profile_method: " userinfo_endpoint"
user_mapping_provider:
config:
localpart_template: '{{ user.nickname }}'
display_name_template: '{{ user.name }}'
< / code > < / pre >
< h3 id = "facebook" > < a class = "header" href = "#facebook" > Facebook< / a > < / h3 >
< p > Like GitHub, Facebook provides a custom OAuth2 API rather than an OIDC-compliant
one, so it requires a little more configuration.< / p >
< ol start = "0" >
< li > You will need a Facebook developer account. You can register for one
< a href = "https://developers.facebook.com/async/registration/" > here< / a > .< / li >
< li > On the < a href = "https://developers.facebook.com/apps/" > apps< / a > page of the developer
console, " Create App" , and choose " Build Connected Experiences" .< / li >
< li > Once the app is created, add " Facebook Login" and choose " Web" . You don't
need to go through the whole form here.< / li >
< li > In the left-hand menu, open " Products" /" Facebook Login" /" Settings" .
< ul >
< li > Add < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > as an OAuth Redirect
URL.< / li >
< / ul >
< / li >
< li > In the left-hand menu, open " Settings/Basic" . Here you can copy the " App ID"
and " App Secret" for use below.< / li >
< / ol >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > - idp_id: facebook
idp_name: Facebook
idp_brand: " facebook" # optional: styling hint for clients
discover: false
issuer: " https://facebook.com"
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
scopes: [" openid" , " email" ]
authorization_endpoint: https://facebook.com/dialog/oauth
token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
user_profile_method: " userinfo_endpoint"
userinfo_endpoint: " https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
user_mapping_provider:
config:
subject_claim: " id"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< p > Relevant documents:< / p >
< ul >
< li > https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow< / li >
< li > Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/< / li >
< li > Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user< / li >
< / ul >
< h3 id = "gitea" > < a class = "header" href = "#gitea" > Gitea< / a > < / h3 >
< p > Gitea, like GitHub, is not an OpenID Connect provider, but just an OAuth2 provider.< / p >
< p > The < a href = "https://try.gitea.io/api/swagger#/user/userGetCurrent" > < code > /user< / code > API endpoint< / a >
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a < code > sub< / code > property, an alternative < code > subject_claim< / code > has to be set.< / p >
< ol >
< li > Create a new application.< / li >
< li > Add this Callback URL: < code > [synapse public baseurl]/_synapse/client/oidc/callback< / code > < / li >
< / ol >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: gitea
idp_name: Gitea
discover: false
issuer: " https://your-gitea.com/"
client_id: " your-client-id" # TO BE FILLED
client_secret: " your-client-secret" # TO BE FILLED
client_auth_method: client_secret_post
scopes: [] # Gitea doesn't support Scopes
authorization_endpoint: " https://your-gitea.com/login/oauth/authorize"
token_endpoint: " https://your-gitea.com/login/oauth/access_token"
userinfo_endpoint: " https://your-gitea.com/api/v1/user"
user_mapping_provider:
config:
subject_claim: " id"
localpart_template: " {{ user.login }}"
display_name_template: " {{ user.full_name }}"
< / code > < / pre >
< h3 id = "xwiki" > < a class = "header" href = "#xwiki" > XWiki< / a > < / h3 >
< p > Install < a href = "https://extensions.xwiki.org/xwiki/bin/view/Extension/OpenID%20Connect/OpenID%20Connect%20Provider/" > OpenID Connect Provider< / a > extension in your < a href = "https://www.xwiki.org" > XWiki< / a > instance.< / p >
< p > Synapse config:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
- idp_id: xwiki
idp_name: " XWiki"
issuer: " https://myxwikihost/xwiki/oidc/"
client_id: " your-client-id" # TO BE FILLED
client_auth_method: none
scopes: [" openid" , " profile" ]
user_profile_method: " userinfo_endpoint"
user_mapping_provider:
config:
localpart_template: " {{ user.preferred_username }}"
display_name_template: " {{ user.name }}"
< / code > < / pre >
< h2 id = "apple" > < a class = "header" href = "#apple" > Apple< / a > < / h2 >
< p > Configuring " Sign in with Apple" (SiWA) requires an Apple Developer account.< / p >
< p > You will need to create a new " Services ID" for SiWA, and create and download a
private key with " SiWA" enabled.< / p >
< p > As well as the private key file, you will need:< / p >
< ul >
< li > Client ID: the " identifier" you gave the " Services ID" < / li >
< li > Team ID: a 10-character ID associated with your developer account.< / li >
< li > Key ID: the 10-character identifier for the key.< / li >
< / ul >
< p > https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
documentation on setting up SiWA.< / p >
< p > The synapse config will look like this:< / p >
< pre > < code class = "language-yaml" > - idp_id: apple
idp_name: Apple
issuer: " https://appleid.apple.com"
client_id: " your-client-id" # Set to the " identifier" for your " ServicesID"
client_auth_method: " client_secret_post"
client_secret_jwt_key:
key_file: " /path/to/AuthKey_KEYIDCODE.p8" # point to your key file
jwt_header:
alg: ES256
kid: " KEYIDCODE" # Set to the 10-char Key ID
jwt_payload:
iss: TEAMIDCODE # Set to the 10-char Team ID
scopes: [" name" , " email" , " openid" ]
authorization_endpoint: https://appleid.apple.com/auth/authorize?response_mode=form_post
user_mapping_provider:
config:
email_template: " {{ user.email }}"
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "sso-mapping-providers" > < a class = "header" href = "#sso-mapping-providers" > SSO Mapping Providers< / a > < / h1 >
< p > A mapping provider is a Python class (loaded via a Python module) that
works out how to map attributes of an SSO response to Matrix-specific
user attributes. Details such as the user ID localpart, displayname, and even
avatar URLs can all be mapped from attributes returned by the SSO service.< / p >
< p > As an example, an SSO service may return the email address
"john.smith@example.com" for a user, whereas Synapse will need to figure out how
to turn that into a displayname when creating a Matrix user for this individual.
It may choose < code > John Smith< / code > , or < code > Smith, John [Example.com]< / code > or any number of
variations. As each Synapse configuration may want something different, this is
where SSO mapping providers come into play.< / p >
< p > SSO mapping providers are currently supported for OpenID and SAML SSO
configurations. Please see the details below for how to implement your own.< / p >
< p > It is up to the mapping provider whether the user should be assigned a predefined
Matrix ID based on the SSO attributes, or if the user should be allowed to
choose their own username.< / p >
< p > In the first case - where users are automatically allocated a Matrix ID - it is
the responsibility of the mapping provider to normalise the SSO attributes and
map them to a valid Matrix ID. The < a href = "https://matrix.org/docs/spec/appendices#user-identifiers" > specification for Matrix
IDs< / a > has some
information about what is considered valid.< / p >
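< p > As an illustration of such normalisation, the sketch below lowercases an SSO
attribute and simply drops any character outside an approximation of the
spec's allowed set. This is not the algorithm Synapse itself uses (the spec's
canonical mapping is more involved, encoding disallowed characters rather than
dropping them); it only shows the kind of work a mapping provider must do:< / p >

```python
import re

# Approximation of the characters permitted in a Matrix user ID localpart:
# lowercase a-z, digits, and . _ = - /
DISALLOWED = re.compile(r"[^a-z0-9._=/-]")

def normalise_localpart(attribute: str) -> str:
    """Turn an SSO attribute (e.g. an email address) into a plausible
    Matrix localpart. Simplified illustration only: disallowed
    characters are dropped rather than encoded."""
    lowered = attribute.lower()
    return DISALLOWED.sub("", lowered)

print(normalise_localpart("John.Smith@example.com"))  # john.smithexample.com
```

A real provider would also need to handle the case where normalisation produces
an empty string.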
< p > If the mapping provider does not assign a Matrix ID, then Synapse will
automatically serve an HTML page allowing the user to pick their own username.< / p >
< p > External mapping providers are provided to Synapse in the form of an external
Python module. You can retrieve this module from < a href = "https://pypi.org" > PyPI< / a > or elsewhere,
but it must be importable via Synapse (e.g. it must be in the same virtualenv
as Synapse). The Synapse config is then modified to point to the mapping provider
(and optionally provide additional configuration for it).< / p >
< h2 id = "openid-mapping-providers" > < a class = "header" href = "#openid-mapping-providers" > OpenID Mapping Providers< / a > < / h2 >
< p > The OpenID mapping provider can be customized by editing the
< code > oidc_config.user_mapping_provider.module< / code > config option.< / p >
< p > < code > oidc_config.user_mapping_provider.config< / code > allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built in to Synapse. If using a custom module, you should
comment these options out and use those specified by the module instead.< / p >
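< p > For example, pointing Synapse at a custom module might look like the
following, where the module path and option name are hypothetical placeholders:< / p >
< pre > < code class = "language-yaml" > oidc_providers:
  - idp_id: myprovider
    # ... other provider options ...
    user_mapping_provider:
      module: my_package.MyMappingProvider
      config:
        # options defined by my_package.MyMappingProvider, if any
        some_option: some_value
< / code > < / pre >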
< h3 id = "building-a-custom-openid-mapping-provider" > < a class = "header" href = "#building-a-custom-openid-mapping-provider" > Building a Custom OpenID Mapping Provider< / a > < / h3 >
< p > A custom mapping provider must specify the following methods:< / p >
< ul >
< li > < code > __init__(self, parsed_config)< / code >
< ul >
< li > Arguments:
< ul >
< li > < code > parsed_config< / code > - A configuration object that is the return value of the
< code > parse_config< / code > method. You should set any configuration options needed by
the module here.< / li >
< / ul >
< / li >
< / ul >
< / li >
< li > < code > parse_config(config)< / code >
< ul >
< li > This method should have the < code > @staticmethod< / code > decoration.< / li >
< li > Arguments:
< ul >
< li > < code > config< / code > - A < code > dict< / code > representing the parsed content of the
< code > oidc_config.user_mapping_provider.config< / code > homeserver config option.
Runs on homeserver startup. Providers should extract and validate
any option values they need here.< / li >
< / ul >
< / li >
< li > Whatever is returned will be passed back to the user mapping provider module's
< code > __init__< / code > method during construction.< / li >
< / ul >
< / li >
< li > < code > get_remote_user_id(self, userinfo)< / code >
< ul >
< li > Arguments:
< ul >
< li > < code > userinfo< / code > - An < code > authlib.oidc.core.claims.UserInfo< / code > object to extract user
information from.< / li >
< / ul >
< / li >
< li > This method must return a string, which is the unique, immutable identifier
for the user. Commonly the < code > sub< / code > claim of the response.< / li >
< / ul >
< / li >
< li > < code > map_user_attributes(self, userinfo, token, failures)< / code >
< ul >
< li > This method must be async.< / li >
< li > Arguments:
< ul >
< li > < code > userinfo< / code > - An < code > authlib.oidc.core.claims.UserInfo< / code > object to extract user
information from.< / li >
< li > < code > token< / code > - A dictionary which includes information necessary to make
further requests to the OpenID provider.< / li >
< li > < code > failures< / code > - An < code > int< / code > that represents the number of times the returned
mxid localpart mapping has failed. This should be used
to create a deduplicated mxid localpart which should be
returned instead. For example, if this method returns
< code > john.doe< / code > as the value of < code > localpart< / code > in the returned
dict, and that is already taken on the homeserver, this
method will be called again with the same parameters but
with < code > failures=1< / code > . The method should then return a different
< code > localpart< / code > value, such as < code > john.doe1< / code > .< / li >
< / ul >
< / li >
< li > Returns a dictionary with two keys:
< ul >
< li > < code > localpart< / code > : A string, used to generate the Matrix ID. If this is
< code > None< / code > , the user is prompted to pick their own username. This is only used
during a user's first login. Once a localpart has been associated with a
remote user ID (see < code > get_remote_user_id< / code > ) it cannot be updated.< / li >
< li > < code > displayname< / code > : An optional string, the display name for the user.< / li >
< / ul >
< / li >
< / ul >
< / li >
< li > < code > get_extra_attributes(self, userinfo, token)< / code >
< ul >
< li >
< p > This method must be async.< / p >
< / li >
< li >
< p > Arguments:< / p >
< ul >
< li > < code > userinfo< / code > - An < code > authlib.oidc.core.claims.UserInfo< / code > object to extract user
information from.< / li >
< li > < code > token< / code > - A dictionary which includes information necessary to make
further requests to the OpenID provider.< / li >
< / ul >
< / li >
< li >
< p > Returns a dictionary that is suitable to be serialized to JSON. This
will be returned as part of the response during a successful login.< / p >
< p > Note that care should be taken to not overwrite any of the parameters
usually returned as part of the < a href = "https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login" > login response< / a > .< / p >
< / li >
< / ul >
< / li >
< / ul >
< h3 id = "default-openid-mapping-provider" > < a class = "header" href = "#default-openid-mapping-provider" > Default OpenID Mapping Provider< / a > < / h3 >
< p > Synapse has a built-in OpenID mapping provider if a custom provider isn't
specified in the config. It is located at
< a href = "https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/oidc.py" > < code > synapse.handlers.oidc.JinjaOidcMappingProvider< / code > < / a > .< / p >
< h2 id = "saml-mapping-providers" > < a class = "header" href = "#saml-mapping-providers" > SAML Mapping Providers< / a > < / h2 >
< p > The SAML mapping provider can be customized by editing the
< code > saml2_config.user_mapping_provider.module< / code > config option.< / p >
< p > < code > saml2_config.user_mapping_provider.config< / code > allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built in to Synapse. If using a custom module, you should
comment these options out and use those specified by the module instead.< / p >
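< p > For example (the module path and option name below are hypothetical
placeholders):< / p >
< pre > < code class = "language-yaml" > saml2_config:
  user_mapping_provider:
    module: my_package.MySamlMappingProvider
    config:
      # options defined by my_package.MySamlMappingProvider, if any
      some_option: some_value
< / code > < / pre >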
< h3 id = "building-a-custom-saml-mapping-provider" > < a class = "header" href = "#building-a-custom-saml-mapping-provider" > Building a Custom SAML Mapping Provider< / a > < / h3 >
< p > A custom mapping provider must specify the following methods:< / p >
< ul >
< li > < code > __init__(self, parsed_config, module_api)< / code >
< ul >
< li > Arguments:
< ul >
< li > < code > parsed_config< / code > - A configuration object that is the return value of the
< code > parse_config< / code > method. You should set any configuration options needed by
the module here.< / li >
< li > < code > module_api< / code > - a < code > synapse.module_api.ModuleApi< / code > object which provides the
stable API available for extension modules.< / li >
< / ul >
< / li >
< / ul >
< / li >
< li > < code > parse_config(config)< / code >
< ul >
< li > This method should have the < code > @staticmethod< / code > decoration.< / li >
< li > Arguments:
< ul >
< li > < code > config< / code > - A < code > dict< / code > representing the parsed content of the
< code > saml_config.user_mapping_provider.config< / code > homeserver config option.
Runs on homeserver startup. Providers should extract and validate
any option values they need here.< / li >
< / ul >
< / li >
< li > Whatever is returned will be passed back to the user mapping provider module's
< code > __init__< / code > method during construction.< / li >
< / ul >
< / li >
< li > < code > get_saml_attributes(config)< / code >
< ul >
< li > This method should have the < code > @staticmethod< / code > decoration.< / li >
< li > Arguments:
< ul >
< li > < code > config< / code > - An object resulting from a call to < code > parse_config< / code > .< / li >
< / ul >
< / li >
< li > Returns a tuple of two sets. The first set contains the SAML auth
response attributes that are required for the module to function, whereas
the second set consists of those attributes which can be used if available,
but are not necessary.< / li >
< / ul >
< / li >
< li > < code > get_remote_user_id(self, saml_response, client_redirect_url)< / code >
< ul >
< li > Arguments:
< ul >
< li > < code > saml_response< / code > - A < code > saml2.response.AuthnResponse< / code > object to extract user
information from.< / li >
< li > < code > client_redirect_url< / code > - A string, the URL that the client will be
redirected to.< / li >
< / ul >
< / li >
< li > This method must return a string, which is the unique, immutable identifier
for the user. Commonly the < code > uid< / code > claim of the response.< / li >
< / ul >
< / li >
< li > < code > saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)< / code >
< ul >
< li >
< p > Arguments:< / p >
< ul >
< li > < code > saml_response< / code > - A < code > saml2.response.AuthnResponse< / code > object to extract user
information from.< / li >
< li > < code > failures< / code > - An < code > int< / code > that represents the number of times the returned
mxid localpart mapping has failed. This should be used
to create a deduplicated mxid localpart which should be
returned instead. For example, if this method returns
< code > john.doe< / code > as the value of < code > mxid_localpart< / code > in the returned
dict, and that is already taken on the homeserver, this
method will be called again with the same parameters but
with failures=1. The method should then return a different
< code > mxid_localpart< / code > value, such as < code > john.doe1< / code > .< / li >
< li > < code > client_redirect_url< / code > - A string, the URL that the client will be
redirected to.< / li >
< / ul >
< / li >
< li >
< p > This method must return a dictionary, which will then be used by Synapse
to build a new user. The following keys are allowed:< / p >
< ul >
< li > < code > mxid_localpart< / code > - A string, the mxid localpart of the new user. If this is
< code > None< / code > , the user is prompted to pick their own username. This is only used
during a user's first login. Once a localpart has been associated with a
remote user ID (see < code > get_remote_user_id< / code > ) it cannot be updated.< / li >
< li > < code > displayname< / code > - The displayname of the new user. If not provided, will default to
the value of < code > mxid_localpart< / code > .< / li >
< li > < code > emails< / code > - A list of emails for the new user. If not provided, will
default to an empty list.< / li >
< / ul >
< p > Alternatively it can raise a < code > synapse.api.errors.RedirectException< / code > to
redirect the user to another page. This is useful to prompt the user for
additional information, e.g. if you want them to provide their own username.
It is the responsibility of the mapping provider to either redirect back
to < code > client_redirect_url< / code > (including any additional information) or to
complete registration using methods from the < code > ModuleApi< / code > .< / p >
< / li >
< / ul >
< / li >
< / ul >
< h3 id = "default-saml-mapping-provider" > < a class = "header" href = "#default-saml-mapping-provider" > Default SAML Mapping Provider< / a > < / h3 >
< p > Synapse has a built-in SAML mapping provider, which is used if a custom
provider isn't specified in the config. It is located at
< a href = "https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/saml.py" > < code > synapse.handlers.saml.DefaultSamlMappingProvider< / code > < / a > .< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "password-auth-provider-modules" > < a class = "header" href = "#password-auth-provider-modules" > Password auth provider modules< / a > < / h1 >
< p > Password auth providers offer a way for server administrators to
integrate their Synapse installation with an existing authentication
system.< / p >
< p > A password auth provider is a Python class which is dynamically loaded
into Synapse, and provides a number of methods by which it can integrate
with the authentication system.< / p >
< p > This document serves as a reference for those looking to implement their
own password auth providers. Additionally, here is a list of known
password auth provider module implementations:< / p >
< ul >
< li > < a href = "https://github.com/matrix-org/matrix-synapse-ldap3/" > matrix-synapse-ldap3< / a > < / li >
< li > < a href = "https://github.com/devture/matrix-synapse-shared-secret-auth" > matrix-synapse-shared-secret-auth< / a > < / li >
< li > < a href = "https://github.com/ma1uta/matrix-synapse-rest-password-provider" > matrix-synapse-rest-password-provider< / a > < / li >
< / ul >
< h2 id = "required-methods" > < a class = "header" href = "#required-methods" > Required methods< / a > < / h2 >
< p > Password auth provider classes must provide the following methods:< / p >
< ul >
< li >
< p > < code > parse_config(config)< / code >
This method is passed the < code > config< / code > object for this module from the
homeserver configuration file.< / p >
< p > It should perform any appropriate sanity checks on the provided
configuration, and return an object which is then passed into
< code > __init__< / code > .< / p >
< p > This method should have the < code > @staticmethod< / code > decoration.< / p >
< / li >
< li >
< p > < code > __init__(self, config, account_handler)< / code > < / p >
< p > The constructor is passed the config object returned by
< code > parse_config< / code > , and a < code > synapse.module_api.ModuleApi< / code > object which
allows the password provider to check if accounts exist and/or create
new ones.< / p >
< / li >
< / ul >
< h2 id = "optional-methods" > < a class = "header" href = "#optional-methods" > Optional methods< / a > < / h2 >
< p > Password auth provider classes may optionally provide the following methods:< / p >
< ul >
< li >
< p > < code > get_db_schema_files(self)< / code > < / p >
< p > This method, if implemented, should return an Iterable of
< code > (name, stream)< / code > pairs of database schema files. Each file is applied
in turn at initialisation, and a record is then made in the database
so that it is not re-applied on the next start.< / p >
< / li >
< li >
< p > < code > get_supported_login_types(self)< / code > < / p >
< p > This method, if implemented, should return a < code > dict< / code > mapping from a
login type identifier (such as < code > m.login.password< / code > ) to an iterable
giving the fields which must be provided by the user in the submission
to < a href = "https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login" > the < code > /login< / code > API< / a > .
These fields are passed in the < code > login_dict< / code > dictionary to < code > check_auth< / code > .< / p >
< p > For example, if a password auth provider wants to implement a custom
login type of < code > com.example.custom_login< / code > , where the client is expected
to pass the fields < code > secret1< / code > and < code > secret2< / code > , the provider should
implement this method and return the following dict:< / p >
< pre > < code class = "language-python" > {" com.example.custom_login" : (" secret1" , " secret2" )}
< / code > < / pre >
< / li >
< li >
< p > < code > check_auth(self, username, login_type, login_dict)< / code > < / p >
< p > This method does the real work. If implemented, it
will be called for each login attempt where the login type matches one
of the keys returned by < code > get_supported_login_types< / code > .< / p >
< p > It is passed the (possibly unqualified) < code > user< / code > field provided by the client,
the login type, and a dictionary of login secrets passed by the
client.< / p >
< p > The method should return an < code > Awaitable< / code > object, which resolves
to the canonical < code > @localpart:domain< / code > user ID if authentication is
successful, and < code > None< / code > if not.< / p >
< p > Alternatively, the < code > Awaitable< / code > can resolve to a < code > (str, func)< / code > tuple, in
which case the second field is a callback which will be called with
the result from the < code > /login< / code > call (including < code > access_token< / code > ,
< code > device_id< / code > , etc.)< / p >
< / li >
< li >
< p > < code > check_3pid_auth(self, medium, address, password)< / code > < / p >
< p > This method, if implemented, is called when a user attempts to
register or log in with a third party identifier, such as email. It is
passed the medium (ex. " email" ), an address (ex.
" < a href = "mailto:jdoe@example.com" > jdoe@example.com< / a > " ) and the user's password.< / p >
< p > The method should return an < code > Awaitable< / code > object, which resolves
to a < code > str< / code > containing the user's (canonical) User id if
authentication was successful, and < code > None< / code > if not.< / p >
< p > As with < code > check_auth< / code > , the < code > Awaitable< / code > may alternatively resolve to a
< code > (user_id, callback)< / code > tuple.< / p >
< / li >
< li >
< p > < code > check_password(self, user_id, password)< / code > < / p >
< p > This method provides a simpler interface than
< code > get_supported_login_types< / code > and < code > check_auth< / code > for password auth
providers that just want to provide a mechanism for validating
< code > m.login.password< / code > logins.< / p >
< p > If implemented, it will be called to check logins with an
< code > m.login.password< / code > login type. It is passed a qualified
< code > @localpart:domain< / code > user id, and the password provided by the user.< / p >
< p > The method should return an < code > Awaitable< / code > object, which resolves
to < code > True< / code > if authentication is successful, and < code > False< / code > if not.< / p >
< / li >
< li >
< p > < code > on_logged_out(self, user_id, device_id, access_token)< / code > < / p >
< p > This method, if implemented, is called when a user logs out. It is
passed the qualified user ID, the ID of the deactivated device (if
any: access tokens are occasionally created without an associated
device ID), and the (now deactivated) access token.< / p >
< p > It may return an < code > Awaitable< / code > object; the logout request will
wait for the < code > Awaitable< / code > to complete, but the result is ignored.< / p >
< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "jwt-login-type" > < a class = "header" href = "#jwt-login-type" > JWT Login Type< / a > < / h1 >
< p > Synapse comes with a non-standard login type to support
< a href = "https://en.wikipedia.org/wiki/JSON_Web_Token" > JSON Web Tokens< / a > . In general the
documentation for
< a href = "https://matrix.org/docs/spec/client_server/r0.6.1#login" > the login endpoint< / a >
is still valid (and the mechanism works similarly to the
< a href = "https://matrix.org/docs/spec/client_server/r0.6.1#token-based" > token based login< / a > ).< / p >
< p > To log in using a JSON Web Token, clients should submit a < code > /login< / code > request as
follows:< / p >
< pre > < code class = "language-json" > {
" type" : " org.matrix.login.jwt" ,
" token" : " < jwt> "
}
< / code > < / pre >
< p > Note that the login type of < code > m.login.jwt< / code > is supported, but is deprecated. This
will be removed in a future version of Synapse.< / p >
< p > The < code > token< / code > field should include the JSON web token with the following claims:< / p >
< ul >
< li > The < code > sub< / code > (subject) claim is required and should encode the local part of the
user ID.< / li >
< li > The expiration time (< code > exp< / code > ), not before time (< code > nbf< / code > ), and issued at (< code > iat< / code > )
claims are optional, but validated if present.< / li >
< li > The issuer (< code > iss< / code > ) claim is optional, but required and validated if configured.< / li >
< li > The audience (< code > aud< / code > ) claim is optional, but required and validated if configured.
Providing the audience claim when not configured will cause validation to fail.< / li >
< / ul >
< p > In the case that the token is not valid, the homeserver must respond with
< code > 403 Forbidden< / code > and an error code of < code > M_FORBIDDEN< / code > .< / p >
< p > As with other login types, there are additional fields (e.g. < code > device_id< / code > and
< code > initial_device_display_name< / code > ) which can be included in the above request.< / p >
< h2 id = "preparing-synapse-1" > < a class = "header" href = "#preparing-synapse-1" > Preparing Synapse< / a > < / h2 >
< p > The JSON Web Token integration in Synapse uses the
< a href = "https://pypi.org/project/pyjwt/" > < code > PyJWT< / code > < / a > library, which must be installed
as follows:< / p >
< ul >
< li >
< p > The relevant libraries are included in the Docker images and Debian packages
provided by < code > matrix.org< / code > so no further action is needed.< / p >
< / li >
< li >
< p > If you installed Synapse into a virtualenv, run < code > /path/to/env/bin/pip install synapse[pyjwt]< / code > to install the necessary dependencies.< / p >
< / li >
< li >
< p > For other installation mechanisms, see the documentation provided by the
maintainer.< / p >
< / li >
< / ul >
< p > To enable the JSON web token integration, you should then add a < code > jwt_config< / code > section
to your configuration file (or uncomment the < code > enabled: true< / code > line in the
existing section). See < a href = "./sample_config.yaml" > sample_config.yaml< / a > for some
sample settings.< / p >
< h2 id = "how-to-test-jwt-as-a-developer" > < a class = "header" href = "#how-to-test-jwt-as-a-developer" > How to test JWT as a developer< / a > < / h2 >
< p > Although JSON Web Tokens are typically generated from an external server, the
examples below use < a href = "https://pyjwt.readthedocs.io/en/latest/" > PyJWT< / a > directly.< / p >
< ol >
< li >
< p > Configure Synapse with JWT logins. Note that this example uses a pre-shared
secret and an algorithm of HS256:< / p >
< pre > < code class = "language-yaml" > jwt_config:
enabled: true
secret: " my-secret-token"
algorithm: " HS256"
< / code > < / pre >
< / li >
< li >
< p > Generate a JSON web token:< / p >
< pre > < code class = "language-bash" > $ pyjwt --key=my-secret-token --alg=HS256 encode sub=test-user
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc
< / code > < / pre >
< / li >
< li >
< p > Query for the login types and ensure < code > org.matrix.login.jwt< / code > is there:< / p >
< pre > < code class = "language-bash" > curl http://localhost:8080/_matrix/client/r0/login
< / code > < / pre >
< / li >
< li >
< p > Log in using the generated JSON web token from above:< / p >
< pre > < code class = "language-bash" > $ curl http://localhost:8080/_matrix/client/r0/login -X POST \
--data '{" type" :" org.matrix.login.jwt" ," token" :" eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc" }'
{
" access_token" : " < access token> " ,
" device_id" : " ACBDEFGHI" ,
" home_server" : " localhost:8080" ,
" user_id" : " @test-user:localhost:8480"
}
< / code > < / pre >
< / li >
< / ol >
< p > You should now be able to use the returned access token to query the client API.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "overview-2" > < a class = "header" href = "#overview-2" > Overview< / a > < / h1 >
< p > A captcha can be enabled on your homeserver to help prevent bots from registering
accounts. Synapse currently uses Google's reCAPTCHA service which requires API keys
from Google.< / p >
< h2 id = "getting-api-keys" > < a class = "header" href = "#getting-api-keys" > Getting API keys< / a > < / h2 >
< ol >
< li > Create a new site at < a href = "https://www.google.com/recaptcha/admin/create" > https://www.google.com/recaptcha/admin/create< / a > < / li >
< li > Set the label to anything you want< / li >
< li > Set the type to reCAPTCHA v2 using the " I'm not a robot" Checkbox option.
This is the only type of captcha that works with Synapse.< / li >
< li > Add the public hostname for your server, as set in < code > public_baseurl< / code >
in < code > homeserver.yaml< / code > , to the list of authorized domains. If you have not set
< code > public_baseurl< / code > , use < code > server_name< / code > .< / li >
< li > Agree to the terms of service and submit.< / li >
< li > Copy your site key and secret key and add them to your < code > homeserver.yaml< / code >
configuration file
< pre > < code > recaptcha_public_key: YOUR_SITE_KEY
recaptcha_private_key: YOUR_SECRET_KEY
< / code > < / pre >
< / li >
< li > Enable the CAPTCHA for new registrations
< pre > < code > enable_registration_captcha: true
< / code > < / pre >
< / li >
< li > Go to the settings page for the CAPTCHA you just created< / li >
< li > Uncheck the " Verify the origin of reCAPTCHA solutions" checkbox so that the
captcha can be displayed in any client. If you do not disable this option then you
must specify the domains of every client that is allowed to display the CAPTCHA.< / li >
< / ol >
< h2 id = "configuring-ip-used-for-auth" > < a class = "header" href = "#configuring-ip-used-for-auth" > Configuring IP used for auth< / a > < / h2 >
< p > The reCAPTCHA API requires that the IP address of the user who solved the
CAPTCHA is sent. If the client is connecting through a proxy or load balancer,
it may be required to use the < code > X-Forwarded-For< / code > (XFF) header instead of the origin
IP address. This can be configured using the < code > x_forwarded< / code > directive in the
listeners section of the < code > homeserver.yaml< / code > configuration file.< / p >
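< p > For example, a listener behind a reverse proxy would enable it like this
(the port and resource names here follow the standard sample config):< / p >

```yaml
listeners:
  - port: 8008
    type: http
    tls: false
    x_forwarded: true   # trust X-Forwarded-For from the proxy
    resources:
      - names: [client, federation]
```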
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "registering-an-application-service" > < a class = "header" href = "#registering-an-application-service" > Registering an Application Service< / a > < / h1 >
< p > The registration of new application services depends on the homeserver used.
In synapse, you need to create a new configuration file for your AS and add it
to the list specified under the < code > app_service_config_files< / code > config
option in your synapse config.< / p >
< p > For example:< / p >
< pre > < code class = "language-yaml" > app_service_config_files:
- /home/matrix/.synapse/< your-AS> .yaml
< / code > < / pre >
< p > The format of the AS configuration file is as follows:< / p >
< pre > < code class = "language-yaml" > url: < base url of AS>
as_token: < token AS will add to requests to HS>
hs_token: < token HS will add to requests to AS>
sender_localpart: < localpart of AS user>
namespaces:
users: # List of users we're interested in
- exclusive: < bool>
regex: < regex>
group_id: < group>
- ...
aliases: [] # List of aliases we're interested in
rooms: [] # List of room ids we're interested in
< / code > < / pre >
< ul >
< li > < code > exclusive< / code > : If enabled, only this application service is allowed to register users in its namespace(s).< / li >
< li > < code > group_id< / code > : All users of this application service are dynamically joined to this group. This is useful for e.g. user organisation or flairs.< / li >
< / ul >
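< p > A filled-in registration might look like the following. Every value here is a
hypothetical example; generate your own random tokens for < code > as_token< / code > and
< code > hs_token< / code > .< / p >

```yaml
# Hypothetical registration for an example bridge
url: http://localhost:9000
as_token: 30a1b2c3d4e5f60718293a4b5c6d7e8f
hs_token: 8f7e6d5c4b3a2918073f6e5d4c3b2a10
sender_localpart: examplebridge
namespaces:
  users:
    - exclusive: true
      regex: "@example_.*:example.com"
  aliases: []
  rooms: []
```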
< p > See the < a href = "https://matrix.org/docs/spec/application_service/unstable.html" > spec< / a > for further details on how application services work.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "server-notices" > < a class = "header" href = "#server-notices" > Server Notices< / a > < / h1 >
< p > 'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.< / p >
< p > They are used as part of communicating the server policies (see
< a href = "consent_tracking.html" > consent_tracking.md< / a > ), however the intention is that
they may also find a use for features such as " Message of the day" .< / p >
< p > This is a feature specific to Synapse, but it uses standard Matrix
communication mechanisms, so should work with any Matrix client.< / p >
< h2 id = "user-experience" > < a class = "header" href = "#user-experience" > User experience< / a > < / h2 >
< p > When the user is first sent a server notice, they will get an invitation to a
room (typically called 'Server Notices', though this is configurable in
< code > homeserver.yaml< / code > ). They will be < strong > unable to reject< / strong > this invitation -
attempts to do so will receive an error.< / p >
< p > Once they accept the invitation, they will see the notice message in the room
history; it will appear to have come from the 'server notices user' (see
below).< / p >
< p > The user is prevented from sending any messages in this room by the power
levels.< / p >
< p > Having joined the room, the user can leave the room if they want. Subsequent
server notices will then cause a new room to be created.< / p >
< h2 id = "synapse-configuration" > < a class = "header" href = "#synapse-configuration" > Synapse configuration< / a > < / h2 >
< p > Server notices come from a specific user id on the server. Server
administrators are free to choose the user id - something like < code > server< / code > is
suggested, meaning the notices will come from
< code > @server:< your_server_name> < / code > . Once the Server Notices user is configured, that
user id becomes a special, privileged user, so administrators should ensure
that < strong > it is not already allocated< / strong > .< / p >
< p > In order to support server notices, it is necessary to add some configuration
to the < code > homeserver.yaml< / code > file. In particular, you should add a < code > server_notices< / code >
section, which should look like this:< / p >
< pre > < code class = "language-yaml" > server_notices:
system_mxid_localpart: server
system_mxid_display_name: " Server Notices"
system_mxid_avatar_url: " mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
room_name: " Server Notices"
< / code > < / pre >
< p > The only compulsory setting is < code > system_mxid_localpart< / code > , which defines the user
id of the Server Notices user, as above. < code > room_name< / code > defines the name of the
room which will be created.< / p >
< p > < code > system_mxid_display_name< / code > and < code > system_mxid_avatar_url< / code > can be used to set the
displayname and avatar of the Server Notices user.< / p >
< h2 id = "sending-notices" > < a class = "header" href = "#sending-notices" > Sending notices< / a > < / h2 >
< p > To send server notices to users you can use the
< a href = "admin_api/server_notices.html" > admin_api< / a > .< / p >
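< p > That API takes the target user and the notice content as an event-like body;
for example (user ID and message are placeholders):< / p >

```
POST /_synapse/admin/v1/send_server_notice

{
    "user_id": "@user:example.com",
    "content": {
        "msgtype": "m.text",
        "body": "This is my message"
    }
}
```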
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "support-in-synapse-for-tracking-agreement-to-server-terms-and-conditions" > < a class = "header" href = "#support-in-synapse-for-tracking-agreement-to-server-terms-and-conditions" > Support in Synapse for tracking agreement to server terms and conditions< / a > < / h1 >
< p > Synapse 0.30 introduces support for tracking whether users have agreed to the
terms and conditions set by the administrator of a server - and blocking access
to the server until they have.< / p >
< p > There are several parts to this functionality; each requires some specific
configuration in < code > homeserver.yaml< / code > to be enabled.< / p >
< p > Note that various parts of the configuration and this document refer to the
" privacy policy" : agreement with a privacy policy is one particular use of this
feature, but of course administrators can specify other terms and conditions
unrelated to " privacy" per se.< / p >
< h2 id = "collecting-policy-agreement-from-a-user" > < a class = "header" href = "#collecting-policy-agreement-from-a-user" > Collecting policy agreement from a user< / a > < / h2 >
< p > Synapse can be configured to serve the user a simple policy form with an
" accept" button. Clicking " Accept" records the user's acceptance in the
database and shows a success page.< / p >
< p > To enable this, first create templates for the policy and success pages.
These should be stored on the local filesystem.< / p >
< p > These templates use the < a href = "http://jinja.pocoo.org" > Jinja2< / a > templating language,
and < a href = "https://github.com/matrix-org/synapse/tree/develop/docs/privacy_policy_templates/" > docs/privacy_policy_templates< / a >
gives examples of the sort of thing that can be done.< / p >
< p > Note that the templates must be stored under a name giving the language of the
template - currently this must always be < code > en< / code > (for " English" );
internationalisation support is intended for the future.< / p >
< p > The template for the policy itself should be versioned and named according to
the version: for example < code > 1.0.html< / code > . The version of the policy which the user
has agreed to is stored in the database.< / p >
< p > Once the templates are in place, make the following changes to < code > homeserver.yaml< / code > :< / p >
< ol >
< li >
< p > Add a < code > user_consent< / code > section, which should look like:< / p >
< pre > < code class = "language-yaml" > user_consent:
template_dir: privacy_policy_templates
version: 1.0
< / code > < / pre >
< p > < code > template_dir< / code > points to the directory containing the policy
templates. < code > version< / code > defines the version of the policy which will be served
to the user. In the example above, Synapse will serve
< code > privacy_policy_templates/en/1.0.html< / code > .< / p >
< / li >
< li >
< p > Add a < code > form_secret< / code > setting at the top level:< / p >
< pre > < code class = "language-yaml" > form_secret: " < unique secret> "
< / code > < / pre >
< p > This should be set to an arbitrary secret string (try < code > pwgen -y 30< / code > to
generate suitable secrets).< / p >
< p > More on what this is used for below.< / p >
< / li >
< li >
< p > Add < code > consent< / code > wherever the < code > client< / code > resource is currently enabled in the
< code > listeners< / code > configuration. For example:< / p >
< pre > < code class = "language-yaml" > listeners:
- port: 8008
resources:
- names:
- client
- consent
< / code > < / pre >
< / li >
< / ol >
< p > Finally, ensure that < code > jinja2< / code > is installed. If you are using a virtualenv, this
should be a matter of < code > pip install Jinja2< / code > . On Debian, try < code > apt-get install python-jinja2< / code > .< / p >
< p > Once this is complete, and the server has been restarted, try visiting
< code > https://< server> /_matrix/consent< / code > . If correctly configured, this should give
an error " Missing string query parameter 'u'" . It is now possible to manually
construct URIs where users can give their consent.< / p >
< h3 id = "enabling-consent-tracking-at-registration" > < a class = "header" href = "#enabling-consent-tracking-at-registration" > Enabling consent tracking at registration< / a > < / h3 >
< ol >
< li >
< p > Add the following to your configuration:< / p >
< pre > < code class = "language-yaml" > user_consent:
require_at_registration: true
policy_name: " Privacy Policy" # or whatever you'd like to call the policy
< / code > < / pre >
< / li >
< li >
< p > In your consent templates, make use of the < code > public_version< / code > variable to
see if an unauthenticated user is viewing the page. This is typically
wrapped around the form that would be used to actually agree to the document:< / p >
< pre > < code > {% if not public_version %}
< !-- The variables used here are only provided when the 'u' param is given to the homeserver -->
< form method=" post" action=" consent" >
< input type=" hidden" name=" v" value=" {{version}}" />
< input type=" hidden" name=" u" value=" {{user}}" />
< input type=" hidden" name=" h" value=" {{userhmac}}" />
< input type=" submit" value=" Sure thing!" />
< /form>
{% endif %}
< / code > < / pre >
< / li >
< li >
< p > Restart Synapse to apply the changes.< / p >
< / li >
< / ol >
< p > Visiting < code > https://< server> /_matrix/consent< / code > should now give you a view of the privacy
document. This is what users will be able to see when registering for accounts.< / p >
< h3 id = "constructing-the-consent-uri" > < a class = "header" href = "#constructing-the-consent-uri" > Constructing the consent URI< / a > < / h3 >
< p > It may be useful to manually construct the " consent URI" for a given user - for
instance, in order to send them an email asking them to consent. To do this,
take the base < code > https://< server> /_matrix/consent< / code > URL and add the following
query parameters:< / p >
< ul >
< li >
< p > < code > u< / code > : the user id of the user. This can either be a full MXID
(< code > @user:server.com< / code > ) or just the localpart (< code > user< / code > ).< / p >
< / li >
< li >
< p > < code > h< / code > : hex-encoded HMAC-SHA256 of < code > u< / code > using the < code > form_secret< / code > as a key. It is
possible to calculate this on the command line with something like:< / p >
< pre > < code class = "language-bash" > echo -n '< user> ' | openssl sha256 -hmac '< form_secret> '
< / code > < / pre >
< p > This should result in a URI which looks something like:
< code > https://< server> /_matrix/consent?u=< user> & h=68a152465a4d...< / code > .< / p >
< / li >
< / ul >
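< p > The same URI can be built in Python using only the standard library; the
< code > consent_uri< / code > helper name is ours, but the HMAC it computes matches the
< code > openssl sha256 -hmac< / code > invocation above.< / p >

```python
import hashlib
import hmac
from urllib.parse import quote


def consent_uri(base_url: str, form_secret: str, user: str) -> str:
    # h is the hex-encoded HMAC-SHA256 of the 'u' parameter, keyed with
    # form_secret.
    userhmac = hmac.new(
        form_secret.encode("utf8"), user.encode("utf8"), hashlib.sha256
    ).hexdigest()
    return f"{base_url}/_matrix/consent?u={quote(user)}&h={userhmac}"
```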
< p > Note that not providing a < code > u< / code > parameter will be interpreted as wanting to view
the document from an unauthenticated perspective, such as prior to registration.
Therefore, the < code > h< / code > parameter is not required in this scenario. To enable this
behaviour, set < code > require_at_registration< / code > to < code > true< / code > in your < code > user_consent< / code > config.< / p >
< h2 id = "sending-users-a-server-notice-asking-them-to-agree-to-the-policy" > < a class = "header" href = "#sending-users-a-server-notice-asking-them-to-agree-to-the-policy" > Sending users a server notice asking them to agree to the policy< / a > < / h2 >
< p > It is possible to configure Synapse to send a < a href = "server_notices.html" > server
notice< / a > to anybody who has not yet agreed to the current
version of the policy. To do so:< / p >
< ul >
< li >
< p > ensure that the consent resource is configured, as in the previous section< / p >
< / li >
< li >
< p > ensure that server notices are configured, as in < a href = "server_notices.html" > server_notices.md< / a > .< / p >
< / li >
< li >
< p > Add < code > server_notice_content< / code > under < code > user_consent< / code > in < code > homeserver.yaml< / code > . For
example:< / p >
< pre > < code class = "language-yaml" > user_consent:
server_notice_content:
msgtype: m.text
body: > -
Please give your consent to the privacy policy at %(consent_uri)s.
< / code > < / pre >
< p > Synapse automatically replaces the placeholder < code > %(consent_uri)s< / code > with the
consent uri for that user.< / p >
< / li >
< li >
< p > ensure that < code > public_baseurl< / code > is set in < code > homeserver.yaml< / code > , and gives the base
URI that clients use to connect to the server. (It is used to construct
< code > consent_uri< / code > in the server notice.)< / p >
< / li >
< / ul >
< h2 id = "blocking-users-from-using-the-server-until-they-agree-to-the-policy" > < a class = "header" href = "#blocking-users-from-using-the-server-until-they-agree-to-the-policy" > Blocking users from using the server until they agree to the policy< / a > < / h2 >
< p > Synapse can be configured to block any attempts to join rooms or send messages
until the user has given their agreement to the policy. (Joining the server
notices room is exempted from this).< / p >
< p > To enable this, add < code > block_events_error< / code > under < code > user_consent< / code > . For example:< / p >
< pre > < code class = "language-yaml" > user_consent:
  block_events_error: >-
    You can't send any messages until you consent to the privacy policy at
    %(consent_uri)s.
< / code > < / pre >
< p > Synapse automatically replaces the placeholder < code > %(consent_uri)s< / code > with the
consent uri for that user.< / p >
< p > Ensure that < code > public_baseurl< / code > is set in < code > homeserver.yaml< / code > , and gives the base
URI that clients use to connect to the server. (It is used to construct
< code > consent_uri< / code > in the error.)< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "url-previews-1" > < a class = "header" href = "#url-previews-1" > URL Previews< / a > < / h1 >
< p > Design notes on a URL previewing service for Matrix:< / p >
< p > Options are:< / p >
< ol >
< li > Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.< / li >
< / ol >
< ul >
< li > Pros:
< ul >
< li > Decouples the implementation entirely from Synapse.< / li >
< li > Uses existing Matrix events & content repo to store the metadata.< / li >
< / ul >
< / li >
< li > Cons:
< ul >
< li > Which AS should provide this service for a room, and why should you trust it?< / li >
< li > Doesn't work well with E2E; you'd have to cut the AS into every room< / li >
< li > the AS would end up subscribing to every room anyway.< / li >
< / ul >
< / li >
< / ul >
< ol start = "2" >
< li > Have a generic preview API (nothing to do with Matrix) that provides a previewing service:< / li >
< / ol >
< ul >
< li > Pros:
< ul >
< li > Simple and flexible; can be used by any clients at any point< / li >
< / ul >
< / li >
< li > Cons:
< ul >
< li > If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI< / li >
< li > We need somewhere to store the URL metadata rather than just using Matrix itself< / li >
< li > We can't piggyback on matrix to distribute the metadata between HSes.< / li >
< / ul >
< / li >
< / ul >
< ol start = "3" >
< li > Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.< / li >
< / ol >
< ul >
< li > Pros:
< ul >
< li > Works transparently for all clients< / li >
< li > Piggy-backs nicely on using Matrix for distributing the metadata.< / li >
< li > No confusion as to which AS is responsible< / li >
< / ul >
< / li >
< li > Cons:
< ul >
< li > Doesn't work with E2E< / li >
< li > We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server.< / li >
< / ul >
< / li >
< / ul >
< ol start = "4" >
< li > Make the sending client use the preview API and insert the event itself when successful.< / li >
< / ol >
< ul >
< li > Pros:
< ul >
< li > Works well with E2E< / li >
< li > No custom server functionality< / li >
< li > Lets the client customise the preview that they send (like on FB)< / li >
< / ul >
< / li >
< li > Cons:
< ul >
< li > Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.< / li >
< / ul >
< / li >
< / ul >
< ol start = "5" >
< li > Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.< / li >
< / ol >
< p > The best solution is probably a combination of options 2 and 4.< / p >
< ul >
< li > Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending)< / li >
< li > Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one)< / li >
< / ul >
< p > This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?"< / p >
< p > This is tantamount also to senders calculating their own thumbnails for sending in advance of the main content - we are trusting the sender not to lie about the content in the thumbnail. Whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.< / p >
< p > However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable.< / p >
< p > As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed.< / p >
< h2 id = "api" > < a class = "header" href = "#api" > API< / a > < / h2 >
< pre > < code > GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
  "og:type": "article",
  "og:url": "https://twitter.com/matrixdotorg/status/684074366691356672",
  "og:title": "Matrix on Twitter",
  "og:image": "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
  "og:description": "“Synapse 0.12 is out! Lots of polishing, performance & bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
  "og:site_name": "Twitter"
}
< / code > < / pre >
< ul >
< li > Downloads the URL
< ul >
< li > If HTML, just stores it in RAM and parses it for OG meta tags
< ul >
< li > Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs.< / li >
< / ul >
< / li >
< li > If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents.< / li >
< li > Otherwise, don't bother downloading further.< / li >
< / ul >
< / li >
< / ul >
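< p > The HTML branch above boils down to scraping Open Graph < code > meta< / code > tags. A minimal standard-library sketch (the class and function names are illustrative; a real implementation would also handle encodings, size limits and rewriting media URLs to mxc:// URIs):< / p >

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects og:* properties from meta tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]

def parse_og(html: str) -> dict:
    # Feed the page through the parser and return the collected og:* tags
    parser = OGParser()
    parser.feed(html)
    return parser.og
```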
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "user-directory-api-implementation" > < a class = "header" href = "#user-directory-api-implementation" > User Directory API Implementation< / a > < / h1 >
< p > The user directory is currently maintained based on the 'visible' users
on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.< / p >
< p > The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, the solution for
now is to execute the SQL < a href = "https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/main/delta/53/user_dir_populate.sql" > here< / a >
and then restart Synapse. This should then start a background task to
flush the current tables and regenerate the directory.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "message-retention-policies" > < a class = "header" href = "#message-retention-policies" > Message retention policies< / a > < / h1 >
< p > Synapse admins can enable support for message retention policies on
their homeserver. Message retention policies exist at a room level,
follow the semantics described in
< a href = "https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md" > MSC1763< / a > ,
and allow server and room admins to configure how long messages should
be kept in a homeserver's database before being purged from it.
< strong > Please note that, as this feature isn't part of the Matrix
specification yet, this implementation is to be considered as
experimental.< / strong > < / p >
< p > A message retention policy is mainly defined by its < code > max_lifetime< / code >
parameter, which defines how long a message can be kept around after
it was sent to the room. If a room doesn't have a message retention
policy, and there's no default one for a given server, then no message
sent in that room is ever purged on that server.< / p >
< p > MSC1763 also specifies semantics for a < code > min_lifetime< / code > parameter which
defines the amount of time after which an event < em > can< / em > get purged (after
it was sent to the room), but Synapse doesn't currently support it
beyond registering it.< / p >
< p > Both < code > max_lifetime< / code > and < code > min_lifetime< / code > are optional parameters.< / p >
< p > Note that message retention policies don't apply to state events.< / p >
< p > Once an event reaches its expiry date (defined as the time it was sent
plus the value for < code > max_lifetime< / code > in the room), two things happen:< / p >
< ul >
< li > Synapse stops serving the event to clients via any endpoint.< / li >
< li > The message gets picked up by the next purge job (see the "Purge jobs"
section) and is removed from Synapse's database.< / li >
< / ul >
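< p > The expiry date is simple arithmetic on the event's < code > origin_server_ts< / code > ; a sketch (all values in milliseconds):< / p >

```python
def is_expired(origin_server_ts: int, max_lifetime_ms: int, now_ms: int) -> bool:
    # An event expires max_lifetime milliseconds after it was sent;
    # once expired it is hidden from clients even before it is purged.
    return now_ms > origin_server_ts + max_lifetime_ms
```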
< p > Since purge jobs don't run continuously, this means that an event might
stay in a server's database for longer than the value for < code > max_lifetime< / code >
in the room would allow, though hidden from clients.< / p >
< p > Similarly, if a server (with support for message retention policies
enabled) receives from another server an event that should have been
purged according to its room's policy, then the receiving server will
process and store that event until it's picked up by the next purge job,
though it will always hide it from clients.< / p >
< p > Synapse requires at least one message in each room, so it will never
delete the last message in a room. It will, however, hide it from
clients.< / p >
< h2 id = "server-configuration" > < a class = "header" href = "#server-configuration" > Server configuration< / a > < / h2 >
< p > Support for this feature can be enabled and configured in the
< code > retention< / code > section of the Synapse configuration file (see the
< a href = "https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518" > sample file< / a > ).< / p >
< p > To enable support for message retention policies, set the setting
< code > enabled< / code > in this section to < code > true< / code > .< / p >
< h3 id = "default-policy" > < a class = "header" href = "#default-policy" > Default policy< / a > < / h3 >
< p > A default message retention policy is a policy defined in Synapse's
configuration that is used by Synapse for every room that doesn't have a
message retention policy configured in its state. This allows server
admins to ensure that messages are never kept indefinitely in a server's
database. < / p >
< p > A default policy can be defined as such, in the < code > retention< / code > section of
the configuration file:< / p >
< pre > < code class = "language-yaml" > default_policy:
  min_lifetime: 1d
  max_lifetime: 1y
< / code > < / pre >
< p > Here, < code > min_lifetime< / code > and < code > max_lifetime< / code > have the same meaning and level
of support as previously described. They can be expressed either as a
duration (using the units < code > s< / code > (seconds), < code > m< / code > (minutes), < code > h< / code > (hours),
< code > d< / code > (days), < code > w< / code > (weeks) and < code > y< / code > (years)) or as a number of milliseconds.< / p >
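< p > As an illustration of the unit table above, a toy converter from duration values to milliseconds (this is not Synapse's actual parser; a year is taken as 365 days):< / p >

```python
UNITS_MS = {
    "s": 1000,
    "m": 60 * 1000,
    "h": 60 * 60 * 1000,
    "d": 24 * 60 * 60 * 1000,
    "w": 7 * 24 * 60 * 60 * 1000,
    "y": 365 * 24 * 60 * 60 * 1000,
}

def duration_to_ms(value) -> int:
    # A bare number is already a number of milliseconds.
    if isinstance(value, int):
        return value
    # Otherwise split "1d" into the number and its unit suffix.
    number, unit = int(value[:-1]), value[-1]
    return number * UNITS_MS[unit]
```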
< h3 id = "purge-jobs" > < a class = "header" href = "#purge-jobs" > Purge jobs< / a > < / h3 >
< p > Purge jobs are the jobs that Synapse runs in the background to purge
expired events from the database. They are only run if support for
message retention policies is enabled in the server's configuration. If
no configuration for purge jobs is configured by the server admin,
Synapse will use a default configuration, which is described in the
< a href = "https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518" > sample configuration file< / a > .< / p >
< p > Some server admins might want a finer control on when events are removed
depending on an event's room's policy. This can be done by setting the
< code > purge_jobs< / code > sub-section in the < code > retention< / code > section of the configuration
file. An example of such configuration could be:< / p >
< pre > < code class = "language-yaml" > purge_jobs:
  - longest_max_lifetime: 3d
    interval: 12h
  - shortest_max_lifetime: 3d
    longest_max_lifetime: 1w
    interval: 1d
  - shortest_max_lifetime: 1w
    interval: 2d
< / code > < / pre >
< p > In this example, we define three jobs:< / p >
< ul >
< li > one that runs twice a day (every 12 hours) and purges events in rooms
whose policy's < code > max_lifetime< / code > is lower than or equal to 3 days.< / li >
< li > one that runs once a day and purges events in rooms whose policy's
< code > max_lifetime< / code > is between 3 days and a week.< / li >
< li > one that runs once every 2 days and purges events in rooms whose
policy's < code > max_lifetime< / code > is greater than a week.< / li >
< / ul >
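< p > The way a room's policy falls to one of the jobs above can be sketched as follows. The bounds are given in milliseconds for brevity, and treating the lower bound as exclusive and the upper bound as inclusive is an assumption chosen to match the descriptions above, not a statement about Synapse's exact boundary handling:< / p >

```python
DAY_MS = 24 * 60 * 60 * 1000
WEEK_MS = 7 * DAY_MS

# (shortest_max_lifetime, longest_max_lifetime, interval); None = unbounded
JOBS = [
    (None, 3 * DAY_MS, "12h"),
    (3 * DAY_MS, WEEK_MS, "1d"),
    (WEEK_MS, None, "2d"),
]

def job_for(max_lifetime_ms: int):
    # Return the interval of the job whose window contains this policy.
    for shortest, longest, interval in JOBS:
        if shortest is not None and max_lifetime_ms <= shortest:
            continue
        if longest is not None and max_lifetime_ms > longest:
            continue
        return interval
    return None
```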
< p > Note that this example is tailored to show different configurations and
includes slightly more jobs than are likely necessary (in practice, a
server admin would probably prefer to replace the last two jobs with one
that runs once a day and handles rooms whose policy's
< code > max_lifetime< / code > is greater than 3 days).< / p >
< p > Keep in mind when configuring these jobs that a purge job can become
quite heavy on the server if it targets many rooms, so prefer several
jobs with a low interval that each target a limited set of rooms. Also
make sure to include a job with no < code > shortest_max_lifetime< / code > and one
with no < code > longest_max_lifetime< / code > to ensure your configuration handles
every policy.< / p >
< p > As previously mentioned in this documentation, while a purge job that
runs e.g. every day means that an expired event might stay in the
database for up to a day after its expiry, Synapse hides expired events
from clients as soon as they expire, so the event is not visible to
local users between its expiry date and the moment it gets purged from
the server's database.< / p >
< h3 id = "lifetime-limits" > < a class = "header" href = "#lifetime-limits" > Lifetime limits< / a > < / h3 >
< p > Server admins can set limits on the values of < code > max_lifetime< / code > to use when
purging old events in a room. These limits can be defined as such in the
< code > retention< / code > section of the configuration file:< / p >
< pre > < code class = "language-yaml" > allowed_lifetime_min: 1d
allowed_lifetime_max: 1y
< / code > < / pre >
< p > The limits are considered when running purge jobs. If necessary, the
effective value of < code > max_lifetime< / code > will be brought between
< code > allowed_lifetime_min< / code > and < code > allowed_lifetime_max< / code > (inclusive).
This means that, if the value of < code > max_lifetime< / code > defined in the room's state
is lower than < code > allowed_lifetime_min< / code > , the value of < code > allowed_lifetime_min< / code >
will be used instead. Likewise, if the value of < code > max_lifetime< / code > is higher
than < code > allowed_lifetime_max< / code > , the value of < code > allowed_lifetime_max< / code > will be
used instead.< / p >
< p > In the example above, we ensure Synapse never deletes events that are less
than one day old, and that it always deletes events that are over a year
old.< / p >
< p > If a default policy is set, and its < code > max_lifetime< / code > value is lower than
< code > allowed_lifetime_min< / code > or higher than < code > allowed_lifetime_max< / code > , the same
process applies.< / p >
< p > Both parameters are optional; if one is omitted Synapse won't use it to
adjust the effective value of < code > max_lifetime< / code > .< / p >
< p > Like other settings in this section, these parameters can be expressed
either as a duration or as a number of milliseconds.< / p >
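< p > The clamping described in this section can be sketched as follows (values in milliseconds; < code > None< / code > stands for an omitted limit):< / p >

```python
def effective_max_lifetime(max_lifetime_ms, allowed_min_ms=None, allowed_max_ms=None):
    # Bring the room's max_lifetime into the [allowed_min, allowed_max] range;
    # an omitted limit leaves that side of the value untouched.
    if allowed_min_ms is not None:
        max_lifetime_ms = max(max_lifetime_ms, allowed_min_ms)
    if allowed_max_ms is not None:
        max_lifetime_ms = min(max_lifetime_ms, allowed_max_ms)
    return max_lifetime_ms
```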
< h2 id = "room-configuration" > < a class = "header" href = "#room-configuration" > Room configuration< / a > < / h2 >
< p > To configure a room's message retention policy, a room's admin or
moderator needs to send a state event in that room with the type
< code > m.room.retention< / code > and the following content:< / p >
< pre > < code class = "language-json" > {
  "max_lifetime": ...
}
< / code > < / pre >
< p > In this event's content, the < code > max_lifetime< / code > parameter has the same
meaning as previously described, and needs to be expressed in
milliseconds. The event's content can also include a < code > min_lifetime< / code >
parameter, which has the same meaning and limited support as previously
described.< / p >
< p > Note that, of all the servers in a room, only those with support for
message retention policies will actually remove expired events. This
support is currently not enabled by default in Synapse.< / p >
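< p > For example, a one-week policy can be set with a plain < code > PUT< / code > to the standard client-server state-event endpoint, < code > /_matrix/client/r0/rooms/{roomId}/state/m.room.retention< / code > . A sketch that builds such a request (the helper name is illustrative; actually sending it with an HTTP client and access token is omitted):< / p >

```python
from urllib.parse import quote

def retention_state_request(base_url: str, room_id: str, max_lifetime_ms: int):
    # PUT /_matrix/client/r0/rooms/{roomId}/state/m.room.retention
    url = "%s/_matrix/client/r0/rooms/%s/state/m.room.retention" % (
        base_url,
        quote(room_id, safe=""),
    )
    # max_lifetime must be expressed in milliseconds in the event content
    content = {"max_lifetime": max_lifetime_ms}
    return url, content
```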
< h2 id = "note-on-reclaiming-disk-space" > < a class = "header" href = "#note-on-reclaiming-disk-space" > Note on reclaiming disk space< / a > < / h2 >
< p > While purge jobs actually delete data from the database, the disk space
used by the database might not decrease immediately on the database's
host. However, even though the database engine won't free up the disk
space, it will start writing new data into where the purged data was.< / p >
< p > If you want to reclaim the freed disk space anyway and return it to the
operating system, the server admin needs to run < code > VACUUM FULL;< / code > (or
< code > VACUUM;< / code > for SQLite databases) on Synapse's database (see the related
< a href = "https://www.postgresql.org/docs/current/sql-vacuum.html" > PostgreSQL documentation< / a > ).< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "handling-spam-in-synapse" > < a class = "header" href = "#handling-spam-in-synapse" > Handling spam in Synapse< / a > < / h1 >
< p > Synapse has support to customize spam checking behavior. It can plug into a
variety of events and affect how they are presented to users on your homeserver.< / p >
< p > The spam checking behavior is implemented as a Python class, which must be
able to be imported by the running Synapse.< / p >
< h2 id = "python-spam-checker-class" > < a class = "header" href = "#python-spam-checker-class" > Python spam checker class< / a > < / h2 >
< p > The Python class is instantiated with two objects:< / p >
< ul >
< li > Any configuration (see below).< / li >
< li > An instance of < code > synapse.module_api.ModuleApi< / code > .< / li >
< / ul >
< p > It then implements methods which return a boolean to alter behavior in Synapse.
All the methods must be defined.< / p >
< p > There's a generic method for checking every event (< code > check_event_for_spam< / code > ), as
well as some specific methods:< / p >
< ul >
< li > < code > user_may_invite< / code > < / li >
< li > < code > user_may_create_room< / code > < / li >
< li > < code > user_may_create_room_alias< / code > < / li >
< li > < code > user_may_publish_room< / code > < / li >
< li > < code > check_username_for_spam< / code > < / li >
< li > < code > check_registration_for_spam< / code > < / li >
< li > < code > check_media_file_for_spam< / code > < / li >
< / ul >
< p > The details of each of these methods (as well as their inputs and outputs)
are documented in the < code > synapse.events.spamcheck.SpamChecker< / code > class.< / p >
< p > The < code > ModuleApi< / code > class provides a way for the custom spam checker class to
call back into the homeserver internals.< / p >
< p > Additionally, a < code > parse_config< / code > method is mandatory and receives the plugin config
dictionary. After parsing, it must return an object which will be
passed to < code > __init__< / code > later.< / p >
< h3 id = "example" > < a class = "header" href = "#example" > Example< / a > < / h3 >
< pre > < code class = "language-python" > from synapse.spam_checker_api import RegistrationBehaviour

class ExampleSpamChecker:
    def __init__(self, config, api):
        self.config = config
        self.api = api

    @staticmethod
    def parse_config(config):
        return config

    async def check_event_for_spam(self, foo):
        return False  # allow all events

    async def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True  # allow all invites

    async def user_may_create_room(self, userid):
        return True  # allow all room creations

    async def user_may_create_room_alias(self, userid, room_alias):
        return True  # allow all room aliases

    async def user_may_publish_room(self, userid, room_id):
        return True  # allow publishing of all rooms

    async def check_username_for_spam(self, user_profile):
        return False  # allow all usernames

    async def check_registration_for_spam(
        self,
        email_threepid,
        username,
        request_info,
        auth_provider_id,
    ):
        return RegistrationBehaviour.ALLOW  # allow all registrations

    async def check_media_file_for_spam(self, file_wrapper, file_info):
        return False  # allow all media
< / code > < / pre >
< h2 id = "configuration-2" > < a class = "header" href = "#configuration-2" > Configuration< / a > < / h2 >
< p > Modify the < code > spam_checker< / code > section of your < code > homeserver.yaml< / code > in the following
manner:< / p >
< p > Create a list entry with the keys < code > module< / code > and < code > config< / code > .< / p >
< ul >
< li >
< p > < code > module< / code > should point to the fully qualified Python class that implements your
custom logic, e.g. < code > my_module.ExampleSpamChecker< / code > .< / p >
< / li >
< li >
< p > < code > config< / code > is a dictionary that gets passed to the spam checker class.< / p >
< / li >
< / ul >
< h3 id = "example-1" > < a class = "header" href = "#example-1" > Example< / a > < / h3 >
< p > This section might look like:< / p >
< pre > < code class = "language-yaml" > spam_checker:
  - module: my_module.ExampleSpamChecker
    config:
      # Enable or disable a specific option in ExampleSpamChecker.
      my_custom_option: true
< / code > < / pre >
< p > More spam checkers can be added in tandem by appending more items to the list. An
action is blocked when at least one of the configured spam checkers flags it.< / p >
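< p > The blocking rule above can be sketched as: run every configured checker in turn, and treat the action as spam as soon as one flags it (the checker classes here are illustrative stand-ins, not Synapse internals):< / p >

```python
import asyncio

async def check_event_for_spam(checkers, event) -> bool:
    # An event is blocked as soon as any configured checker flags it.
    for checker in checkers:
        if await checker.check_event_for_spam(event):
            return True
    return False

class AllowAll:
    async def check_event_for_spam(self, event):
        return False

class DenyAll:
    async def check_event_for_spam(self, event):
        return True
```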
< h2 id = "examples" > < a class = "header" href = "#examples" > Examples< / a > < / h2 >
< p > The < a href = "https://github.com/matrix-org/mjolnir" > Mjolnir< / a > project is a full fledged
example using the Synapse spam checking API, including a bot for dynamic
configuration.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "presence-router-module" > < a class = "header" href = "#presence-router-module" > Presence Router Module< / a > < / h1 >
< p > Synapse supports configuring a module that can specify additional users
(local or remote) that should receive certain presence updates from local
users.< / p >
< p > Note that routing presence via Application Service transactions is not
currently supported.< / p >
< p > The presence routing module is implemented as a Python class, which will
be imported by the running Synapse.< / p >
< h2 id = "python-presence-router-class" > < a class = "header" href = "#python-presence-router-class" > Python Presence Router Class< / a > < / h2 >
< p > The Python class is instantiated with two objects:< / p >
< ul >
< li > A configuration object of some type (see below).< / li >
< li > An instance of < code > synapse.module_api.ModuleApi< / code > .< / li >
< / ul >
< p > It then implements methods related to presence routing.< / p >
< p > Note that one method of < code > ModuleApi< / code > that may be useful is:< / p >
< pre > < code class = "language-python" > async def ModuleApi.send_local_online_presence_to(users: Iterable[str]) -> None
< / code > < / pre >
< p > which can be given a list of local or remote MXIDs to broadcast known, online user
presence to (for those users that the receiving user is considered interested in).
It does not include state for users who are currently offline, and it can only be
called on workers that support sending federation. Additionally, this method must
only be called from the process that has been configured to write to the
< a href = "workers.html#stream-writers" > presence stream< / a > .
By default, this is the main process, but another worker can be configured to do
so.< / p >
< h3 id = "module-structure" > < a class = "header" href = "#module-structure" > Module structure< / a > < / h3 >
< p > Below is a list of possible methods that can be implemented, and whether they are
required.< / p >
< h4 id = "parse_config" > < a class = "header" href = "#parse_config" > < code > parse_config< / code > < / a > < / h4 >
< pre > < code class = "language-python" > def parse_config(config_dict: dict) -> Any
< / code > < / pre >
< p > < strong > Required.< / strong > A static method that is passed a dictionary of config options, and
should return a validated config object. This method is described further in
< a href = "presence_router_module.html#configuration" > Configuration< / a > .< / p >
< h4 id = "get_users_for_states" > < a class = "header" href = "#get_users_for_states" > < code > get_users_for_states< / code > < / a > < / h4 >
< pre > < code class = "language-python" > async def get_users_for_states(
    self,
    state_updates: Iterable[UserPresenceState],
) -> Dict[str, Set[UserPresenceState]]:
< / code > < / pre >
< p > < strong > Required.< / strong > An asynchronous method that is passed an iterable of user presence
state. This method can determine whether a given presence update should be sent to certain
users. It does this by returning a dictionary with keys representing local or remote
Matrix User IDs, and values being a python set
of < code > synapse.handlers.presence.UserPresenceState< / code > instances.< / p >
< p > Synapse will then attempt to send the specified presence updates to each user when
possible.< / p >
< h4 id = "get_interested_users" > < a class = "header" href = "#get_interested_users" > < code > get_interested_users< / code > < / a > < / h4 >
< pre > < code class = "language-python" > async def get_interested_users(self, user_id: str) -> Union[Set[str], str]
< / code > < / pre >
< p > < strong > Required.< / strong > An asynchronous method that is passed a single Matrix User ID. This
method is expected to return the users that the passed in user may be interested in the
presence of. Returned users may be local or remote. The presence routed as a result of
what this method returns is sent in addition to the updates already sent between users
that share a room together. Presence updates are deduplicated.< / p >
< p > This method should return a python set of Matrix User IDs, or the object
< code > synapse.events.presence_router.PresenceRouter.ALL_USERS< / code > to indicate that the passed
user should receive presence information for < em > all< / em > known users.< / p >
< p > For clarity, if the user < code > @alice:example.org< / code > is passed to this method, and the Set
< code > {" @bob:example.com" , " @charlie:somewhere.org" }< / code > is returned, this signifies that Alice
should receive presence updates sent by Bob and Charlie, regardless of whether these
users share a room.< / p >
< h3 id = "example-2" > < a class = "header" href = "#example-2" > Example< / a > < / h3 >
< p > Below is an example implementation of a presence router class.< / p >
< pre > < code class = "language-python" > from typing import Dict, Iterable, List, Set, Union

from synapse.events.presence_router import PresenceRouter
from synapse.handlers.presence import UserPresenceState
from synapse.module_api import ModuleApi

class PresenceRouterConfig:
    def __init__(self):
        # Config options with their defaults
        # A list of users to always send all user presence updates to
        self.always_send_to_users = []  # type: List[str]

        # A list of users to ignore presence updates for. Does not affect
        # shared-room presence relationships
        self.blacklisted_users = []  # type: List[str]

class ExamplePresenceRouter:
    """An example implementation of synapse.presence_router.PresenceRouter.
    Supports routing all presence to a configured set of users, or a subset
    of presence from certain users to members of certain rooms.

    Args:
        config: A configuration object.
        module_api: An instance of Synapse's ModuleApi.
    """
    def __init__(self, config: PresenceRouterConfig, module_api: ModuleApi):
        self._config = config
        self._module_api = module_api

    @staticmethod
    def parse_config(config_dict: dict) -> PresenceRouterConfig:
        """Parse a configuration dictionary from the homeserver config, do
        some validation and return a typed PresenceRouterConfig.

        Args:
            config_dict: The configuration dictionary.

        Returns:
            A validated config object.
        """
        # Initialise a typed config object
        config = PresenceRouterConfig()
        always_send_to_users = config_dict.get("always_send_to_users")
        blacklisted_users = config_dict.get("blacklisted_users")

        # Do some validation of config options... otherwise raise a
        # synapse.config.ConfigError.
        config.always_send_to_users = always_send_to_users
        config.blacklisted_users = blacklisted_users

        return config

    async def get_users_for_states(
        self,
        state_updates: Iterable[UserPresenceState],
    ) -> Dict[str, Set[UserPresenceState]]:
        """Given an iterable of user presence updates, determine where each one
        needs to go. Returned results will not affect presence updates that are
        sent between users who share a room.

        Args:
            state_updates: An iterable of user presence state updates.

        Returns:
            A dictionary of user_id -> set of UserPresenceState that the user should
            receive.
        """
        destination_users = {}  # type: Dict[str, Set[UserPresenceState]]

        # Ignore any updates for blacklisted users
        desired_updates = set()
        for update in state_updates:
            if update.state_key not in self._config.blacklisted_users:
                desired_updates.add(update)

        # Send all presence updates to specific users
        for user_id in self._config.always_send_to_users:
            destination_users[user_id] = desired_updates

        return destination_users

    async def get_interested_users(
        self,
        user_id: str,
    ) -> Union[Set[str], PresenceRouter.ALL_USERS]:
        """
        Retrieve a list of users that `user_id` is interested in receiving the
        presence of. This will be in addition to those they share a room with.
        Optionally, the object PresenceRouter.ALL_USERS can be returned to indicate
        that this user should receive all incoming local and remote presence updates.

        Note that this method will only be called for local users.

        Args:
            user_id: A user requesting presence updates.

        Returns:
            A set of user IDs to return additional presence updates for, or
            PresenceRouter.ALL_USERS to return presence updates for all other users.
        """
        if user_id in self._config.always_send_to_users:
            return PresenceRouter.ALL_USERS

        return set()
< / code > < / pre >
< h4 id = "a-note-on-get_users_for_states-and-get_interested_users" > < a class = "header" href = "#a-note-on-get_users_for_states-and-get_interested_users" > A note on < code > get_users_for_states< / code > and < code > get_interested_users< / code > < / a > < / h4 >
< p > Both of these methods are effectively two different sides of the same coin. The logic
regarding which users should receive updates for other users should be the same
between them.< / p >
< p > < code > get_users_for_states< / code > is called when presence updates come in from either federation
or local users, and is used to either direct local presence to remote users, or to
wake up the sync streams of local users to collect remote presence.< / p >
< p > In contrast, < code > get_interested_users< / code > is used to determine the users that presence should
be fetched for when a local user is syncing. This presence is then retrieved, before
being fed through < code > get_users_for_states< / code > once again, with only the syncing user's
routing information pulled from the resulting dictionary.< / p >
< p > Their routing logic should thus line up, else you may run into unintended behaviour.< / p >
< h2 id = "configuration-3" > < a class = "header" href = "#configuration-3" > Configuration< / a > < / h2 >
< p > Once you've crafted your module and installed it into the same Python environment as
Synapse, amend your homeserver config file with the following.< / p >
< pre > < code class = "language-yaml" > presence:
routing_module:
module: my_module.ExamplePresenceRouter
config:
    # Any configuration options for your module. The below is an example
    # of setting options for ExamplePresenceRouter.
    always_send_to_users: ["@presence_gobbler:example.org"]
    blacklisted_users:
      - "@alice:example.com"
      - "@bob:example.com"
...
< / code > < / pre >
< p > The contents of < code > config< / code > will be passed as a Python dictionary to the static
< code > parse_config< / code > method of your class. The object returned by this method will
then be passed to the < code > __init__< / code > method of your module as < code > config< / code > .< / p >
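< p > As a rough sketch of that flow (the config class and attribute names here are illustrative assumptions, not part of Synapse's API), < code > parse_config< / code > converts the raw dictionary into a config object:< / p >
< pre > < code class = "language-python" > from typing import Any, Dict, List, Set


class ExamplePresenceRouterConfig:
    """Illustrative config object; names are assumptions for this sketch."""

    def __init__(self) -> None:
        self.always_send_to_users: List[str] = []
        self.blacklisted_users: Set[str] = set()


def parse_config(config_dict: Dict[str, Any]) -> ExamplePresenceRouterConfig:
    # Convert the YAML-derived dict into a typed config object,
    # defaulting to empty collections when options are omitted.
    config = ExamplePresenceRouterConfig()
    config.always_send_to_users = list(config_dict.get("always_send_to_users", []))
    config.blacklisted_users = set(config_dict.get("blacklisted_users", []))
    return config
< / code > < / pre >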
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "scaling-synapse-via-workers" > < a class = "header" href = "#scaling-synapse-via-workers" > Scaling synapse via workers< / a > < / h1 >
< p > For small instances it is recommended to run Synapse in the default monolith mode.
For larger instances where performance is a concern it can be helpful to split
out functionality into multiple separate python processes. These processes are
called 'workers', and are (eventually) intended to scale horizontally
independently.< / p >
< p > Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However, we are
documenting it here to help admins needing a highly scalable Synapse instance
similar to the one running < code > matrix.org< / code > .< / p >
< p > All processes continue to share the same database instance, and as such,
workers only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.< / p >
< p > See also < a href = "https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability" > Matrix.org blog post< / a >
for a higher level overview.< / p >
< h2 id = "main-processworker-communication" > < a class = "header" href = "#main-processworker-communication" > Main process/worker communication< / a > < / h2 >
< p > The processes communicate with each other via a Synapse-specific protocol called
'replication' (analogous to MySQL- or Postgres-style database replication) which
feeds streams of newly written data between processes so they can be kept in
sync with the database state.< / p >
< p > When configured to do so, Synapse uses a
< a href = "https://redis.io/topics/pubsub" > Redis pub/sub channel< / a > to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait
for a reply ─ such as sending an event.< / p >
< p > Redis support was added in v1.13.0, and it became the recommended method in
v1.18.0, replacing the old direct TCP connections to the main process (which
are deprecated as of v1.18.0). With Redis, rather than all the workers
connecting to the main process, all the workers and the main process connect
to Redis, which relays replication commands between processes. This can give a
significant CPU saving on the main process and will be a prerequisite for
upcoming performance improvements.< / p >
< p > If Redis support is enabled Synapse will use it as a shared cache, as well as a
pub/sub mechanism.< / p >
< p > See the < a href = "workers.html#architectural-diagram" > Architectural diagram< / a > section at the end for
a visualisation of what this looks like.< / p >
< h2 id = "setting-up-workers" > < a class = "header" href = "#setting-up-workers" > Setting up workers< / a > < / h2 >
< p > A Redis server is required to manage the communication between the processes.
The Redis server should be installed following the normal procedure for your
distribution (e.g. < code > apt install redis-server< / code > on Debian). It is safe to use an
existing Redis deployment if you have one.< / p >
< p > Once installed, check that Redis is running and accessible from the host running
Synapse, for example by executing < code > echo PING | nc -q1 localhost 6379< / code > and seeing
a response of < code > +PONG< / code > .< / p >
< p > The appropriate dependencies must also be installed for Synapse. If using a
virtualenv, these can be installed with:< / p >
< pre > < code class = "language-sh" > pip install "matrix-synapse[redis]"
< / code > < / pre >
< p > Note that these dependencies are included when synapse is installed with < code > pip install matrix-synapse[all]< / code > . They are also included in the debian packages from
< code > matrix.org< / code > and in the docker images at
https://hub.docker.com/r/matrixdotorg/synapse/.< / p >
< p > To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. See
< a href = "reverse_proxy.html" > reverse_proxy.md< / a > for information on setting up a reverse
proxy.< / p >
< p > When using workers, each worker process has its own configuration file which
contains settings specific to that worker, such as the HTTP listener that it
provides (if any), logging configuration, etc.< / p >
< p > Normally, the worker processes are configured to read from a shared
configuration file as well as the worker-specific configuration files. This
makes it easier to keep common configuration settings synchronised across all
the processes.< / p >
< p > The main process is somewhat special in this respect: it does not normally
need its own configuration file and can take all of its configuration from the
shared configuration file.< / p >
< h3 id = "shared-configuration" > < a class = "header" href = "#shared-configuration" > Shared configuration< / a > < / h3 >
< p > Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an "HTTP replication
listener" for the main process; and secondly, you need to enable redis-based
replication. Optionally, a shared secret can be used to authenticate HTTP
traffic between workers. For example:< / p >
< pre > < code class = "language-yaml" > # extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
# The HTTP replication port
- port: 9093
bind_address: '127.0.0.1'
type: http
resources:
- names: [replication]
# Add a random shared secret to authenticate traffic.
worker_replication_secret: ""
redis:
enabled: true
< / code > < / pre >
< p > See the sample config for the full documentation of each option.< / p >
< p > Under < strong > no circumstances< / strong > should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.< / p >
< h3 id = "worker-configuration" > < a class = "header" href = "#worker-configuration" > Worker configuration< / a > < / h3 >
< p > In the config file for each worker, you must specify the type of worker
application (< code > worker_app< / code > ), and you should specify a unique name for the worker
(< code > worker_name< / code > ). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. < code > worker_replication_host< / code > should specify the host of
the main synapse and < code > worker_replication_http_port< / code > should point to the HTTP
replication port. If the worker will handle HTTP requests then the
< code > worker_listeners< / code > option should be set with a < code > http< / code > listener, in the same way
as the < code > listeners< / code > option in the shared config.< / p >
< p > For example:< / p >
< pre > < code class = "language-yaml" > worker_app: synapse.app.generic_worker
worker_name: worker1
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 8083
resources:
- names:
- client
- federation
worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
< / code > < / pre >
< p > ...is a full configuration for a generic worker instance, which will expose a
plain HTTP endpoint on port 8083 separately serving various endpoints, e.g.
< code > /sync< / code > , which are listed below.< / p >
< p > Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (< code > localhost:8083< / code > in the above example).< / p >
< h3 id = "running-synapse-with-workers" > < a class = "header" href = "#running-synapse-with-workers" > Running Synapse with workers< / a > < / h3 >
< p > Finally, you need to start your worker processes. This can be done with either
< code > synctl< / code > or your distribution's preferred service manager such as < code > systemd< / code > . We
recommend the use of < code > systemd< / code > where available: for information on setting up
< code > systemd< / code > to start synapse workers, see
< a href = "systemd-with-workers" > systemd-with-workers< / a > . To use < code > synctl< / code > , see
< a href = "synctl_workers.html" > synctl_workers.md< / a > .< / p >
< h2 id = "available-worker-applications" > < a class = "header" href = "#available-worker-applications" > Available worker applications< / a > < / h2 >
< h3 id = "synapseappgeneric_worker" > < a class = "header" href = "#synapseappgeneric_worker" > < code > synapse.app.generic_worker< / code > < / a > < / h3 >
< p > This worker can handle API requests matching the following regular
expressions:< / p >
< pre > < code > # Sync requests
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
# Federation requests
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
^/_matrix/federation/v1/backfill/
^/_matrix/federation/v1/get_missing_events/
^/_matrix/federation/v1/publicRooms
^/_matrix/federation/v1/query/
^/_matrix/federation/v1/make_join/
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/v1/send_join/
^/_matrix/federation/v2/send_join/
^/_matrix/federation/v1/send_leave/
^/_matrix/federation/v2/send_leave/
^/_matrix/federation/v1/invite/
^/_matrix/federation/v2/invite/
^/_matrix/federation/v1/query_auth/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/user/devices/
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
# Inbound federation transaction request
^/_matrix/federation/v1/send/
# Client API requests
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/devices$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
^/_matrix/client/(api/v1|r0|unstable)/search$
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
< / code > < / pre >
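< p > As a quick illustration of how this routing table behaves, the following sketch checks a request path against a small subset of the patterns above:< / p >
< pre > < code class = "language-python" > import re

# A small subset of the generic_worker patterns listed above.
GENERIC_WORKER_PATTERNS = [
    r"^/_matrix/client/(v2_alpha|r0)/sync$",
    r"^/_matrix/client/(api/v1|r0|unstable)/login$",
    r"^/_matrix/federation/v1/send/",
]


def handled_by_generic_worker(path: str) -> bool:
    # re.match anchors at the start of the string, which is what the
    # leading `^` in each pattern expects.
    return any(re.match(pattern, path) for pattern in GENERIC_WORKER_PATTERNS)
< / code > < / pre >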
< p > Additionally, the following REST endpoints can be handled for GET requests:< / p >
< pre > < code > ^/_matrix/federation/v1/groups/
< / code > < / pre >
< p > Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:< / p >
< pre > < code > ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
< / code > < / pre >
< p > Additionally, the following endpoints should be included if Synapse is configured
to use SSO (you only need to include the ones for whichever SSO provider you're
using):< / p >
< pre > < code > # for all SSO providers
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
^/_synapse/client/pick_idp$
^/_synapse/client/pick_username
^/_synapse/client/new_user_consent$
^/_synapse/client/sso_register$
# OpenID Connect requests.
^/_synapse/client/oidc/callback$
# SAML requests.
^/_synapse/client/saml2/authn_response$
# CAS requests.
^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
< / code > < / pre >
< p > Ensure that all SSO logins go to a single process.
For the problems that arise when multiple workers handle the SSO endpoints, see
< a href = "https://github.com/matrix-org/synapse/issues/7530" > #7530< / a > and
< a href = "https://github.com/matrix-org/synapse/issues/9427" > #9427< / a > .< / p >
< p > Note that an HTTP listener with < code > client< / code > and < code > federation< / code > resources must be
configured in the < code > worker_listeners< / code > option in the worker config.< / p >
< h4 id = "load-balancing" > < a class = "header" href = "#load-balancing" > Load balancing< / a > < / h4 >
< p > It is possible to run multiple instances of this worker app, with incoming requests
being load-balanced between them by the reverse-proxy. However, different endpoints
have different characteristics and so admins
may wish to run multiple groups of workers handling different endpoints so that
load balancing can be done in different ways.< / p >
< p > For < code > /sync< / code > and < code > /initialSync< / code > requests it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting a
user ID from the access token or < code > Authorization< / code > header is currently left as an
exercise for the reader. Admins may additionally wish to separate out < code > /sync< / code >
requests that have a < code > since< / code > query parameter from those that don't (and
< code > /initialSync< / code > requests). Requests without a < code > since< / code > parameter are known as
"initial syncs"; they happen when a user logs in on a new device and can be
< em > very< / em > resource intensive, so isolating them will stop them from interfering
with other users' ongoing syncs.< / p >
< p > Federation and client requests can be balanced via simple round robin.< / p >
< p > The inbound federation transaction request < code > ^/_matrix/federation/v1/send/< / code >
should be balanced by source IP so that transactions from the same remote server
go to the same process.< / p >
< p > Registration/login requests can be handled separately purely to help ensure that
unexpected load doesn't affect new logins and sign ups.< / p >
< p > Finally, event sending requests can be balanced by the room ID in the URI (or
the full URI, or even just round robin); the room ID is the path component after
< code > /rooms/< / code > . If there is a large bridge connected that is sending or may send lots
of events, then a dedicated set of workers can be provisioned to limit the
effects of bursts of events from that bridge on events sent by normal users.< / p >
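< p > As an illustrative sketch (real deployments typically do this with reverse-proxy rules, and the path component may be URL-encoded), the room ID can be extracted from an event-sending URI for consistent hashing like so:< / p >
< pre > < code class = "language-python" > import re
from typing import Optional

# Matches the room ID path component after /rooms/ in event-sending URIs.
ROOM_ID_RE = re.compile(r"^/_matrix/client/(?:api/v1|r0|unstable)/rooms/([^/]+)/")


def room_id_for_uri(uri: str) -> Optional[str]:
    # Returns the room ID to hash on, or None if the URI has no room component.
    match = ROOM_ID_RE.match(uri)
    return match.group(1) if match else None
< / code > < / pre >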
< h4 id = "stream-writers" > < a class = "header" href = "#stream-writers" > Stream writers< / a > < / h4 >
< p > Additionally, there is < em > experimental< / em > support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)< / p >
< p > Currently supported streams are < code > events< / code > and < code > typing< / code > .< / p >
< p > To enable this, the worker must have an HTTP replication listener configured,
have a < code > worker_name< / code > and be listed in the < code > instance_map< / code > config. For example, to
move event persistence off to a dedicated worker, the shared configuration would
include:< / p >
< pre > < code class = "language-yaml" > instance_map:
event_persister1:
host: localhost
port: 8034
stream_writers:
events: event_persister1
< / code > < / pre >
< p > The < code > events< / code > stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you < em > must< / em > restart all worker
instances when adding or removing event persisters. An example < code > stream_writers< / code >
configuration with multiple writers:< / p >
< pre > < code class = "language-yaml" > stream_writers:
events:
- event_persister1
- event_persister2
< / code > < / pre >
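< p > Putting the two pieces together, each sharded writer must also appear in the < code > instance_map< / code > ; a combined shared-configuration fragment might look like this (hosts and ports are illustrative):< / p >
< pre > < code class = "language-yaml" > instance_map:
  event_persister1:
    host: localhost
    port: 8034
  event_persister2:
    host: localhost
    port: 8035

stream_writers:
  events:
    - event_persister1
    - event_persister2
< / code > < / pre >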
< h4 id = "background-tasks" > < a class = "header" href = "#background-tasks" > Background tasks< / a > < / h4 >
< p > There is also < em > experimental< / em > support for moving background tasks to a separate
worker. Background tasks are run periodically or started via replication. Exactly
which tasks are configured to run depends on your Synapse configuration (e.g. if
stats is enabled).< / p >
< p > To enable this, the worker must have a < code > worker_name< / code > and can be configured to run
background tasks. For example, to move background tasks to a dedicated worker,
the shared configuration would include:< / p >
< pre > < code class = "language-yaml" > run_background_tasks_on: background_worker
< / code > < / pre >
< p > You might also wish to investigate the < code > update_user_directory< / code > and
< code > media_instance_running_background_jobs< / code > settings.< / p >
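< p > The dedicated worker itself is a < code > generic_worker< / code > whose < code > worker_name< / code > matches the shared-config setting above; a minimal worker file might look like the following sketch (the name and paths are illustrative):< / p >
< pre > < code class = "language-yaml" > worker_app: synapse.app.generic_worker
worker_name: background_worker

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_log_config: /home/matrix/synapse/config/background_worker_log_config.yaml
< / code > < / pre >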
< h3 id = "synapseapppusher" > < a class = "header" href = "#synapseapppusher" > < code > synapse.app.pusher< / code > < / a > < / h3 >
< p > Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set < code > start_pushers: False< / code > in the
shared configuration file to stop the main synapse sending push notifications.< / p >
< p > To run multiple instances at once the < code > pusher_instances< / code > option should list all
pusher instances by their worker name, e.g.:< / p >
< pre > < code class = "language-yaml" > pusher_instances:
- pusher_worker1
- pusher_worker2
< / code > < / pre >
< h3 id = "synapseappappservice" > < a class = "header" href = "#synapseappappservice" > < code > synapse.app.appservice< / code > < / a > < / h3 >
< p > Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set < code > notify_appservices: False< / code > in the
shared configuration file to stop the main synapse sending appservice notifications.< / p >
< p > Note this worker cannot be load-balanced: only one instance should be active.< / p >
< h3 id = "synapseappfederation_sender" > < a class = "header" href = "#synapseappfederation_sender" > < code > synapse.app.federation_sender< / code > < / a > < / h3 >
< p > Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set < code > send_federation: False< / code > in the
shared configuration file to stop the main synapse sending this traffic.< / p >
< p > If running multiple federation senders then you must list each
instance in the < code > federation_sender_instances< / code > option by their < code > worker_name< / code > .
All instances must be stopped and started when adding or removing instances.
For example:< / p >
< pre > < code class = "language-yaml" > federation_sender_instances:
- federation_sender1
- federation_sender2
< / code > < / pre >
< h3 id = "synapseappmedia_repository" > < a class = "header" href = "#synapseappmedia_repository" > < code > synapse.app.media_repository< / code > < / a > < / h3 >
< p > Handles the media repository. It can handle all endpoints starting with:< / p >
< pre > < code > /_matrix/media/
< / code > < / pre >
< p > ... and the following regular expressions matching media-specific administration APIs:< / p >
< pre > < code > ^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media.*$
^/_synapse/admin/v1/user/.*/media.*$
^/_synapse/admin/v1/media/.*$
^/_synapse/admin/v1/quarantine_media/.*$
< / code > < / pre >
< p > You should also set < code > enable_media_repo: False< / code > in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.< / p >
< p > In the < code > media_repository< / code > worker configuration file, configure the http listener to
expose the < code > media< / code > resource. For example:< / p >
< pre > < code class = "language-yaml" > worker_listeners:
- type: http
port: 8085
resources:
- names:
- media
< / code > < / pre >
< p > Note that if running multiple media repositories they must be on the same server
and you must configure a single instance to run the background tasks, e.g.:< / p >
< pre > < code class = "language-yaml" > media_instance_running_background_jobs: "media-repository-1"
< / code > < / pre >
< p > Note that if a reverse proxy is used, then < code > /_matrix/media/< / code > must be routed for both inbound client and federation requests (if they are handled separately).< / p >
< h3 id = "synapseappuser_dir" > < a class = "header" href = "#synapseappuser_dir" > < code > synapse.app.user_dir< / code > < / a > < / h3 >
< p > Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:< / p >
< pre > < code > ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
< / code > < / pre >
< p > When using this worker you must also set < code > update_user_directory: False< / code > in the
shared configuration file to stop the main synapse running background
jobs related to updating the user directory.< / p >
< h3 id = "synapseappfrontend_proxy" > < a class = "header" href = "#synapseappfrontend_proxy" > < code > synapse.app.frontend_proxy< / code > < / a > < / h3 >
< p > Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions:< / p >
< pre > < code > ^/_matrix/client/(api/v1|r0|unstable)/keys/upload
< / code > < / pre >
< p > If < code > use_presence< / code > is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions:< / p >
< pre > < code > ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
< / code > < / pre >
< p > This "stub" presence handler will pass through < code > GET< / code > requests but make the
< code > PUT< / code > effectively a no-op.< / p >
< p > It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the < code > worker_main_http_uri< / code > setting in the < code > frontend_proxy< / code > worker configuration
file. For example:< / p >
< pre > < code > worker_main_http_uri: http://127.0.0.1:8008
< / code > < / pre >
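< p > A complete < code > frontend_proxy< / code > worker file might therefore look like the following sketch (the worker name, port and paths are illustrative):< / p >
< pre > < code class = "language-yaml" > worker_app: synapse.app.frontend_proxy
worker_name: frontend_proxy1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_main_http_uri: http://127.0.0.1:8008

worker_listeners:
  - type: http
    port: 8084
    resources:
      - names: [client]

worker_log_config: /home/matrix/synapse/config/frontend_proxy1_log_config.yaml
< / code > < / pre >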
< h3 id = "historical-apps" > < a class = "header" href = "#historical-apps" > Historical apps< / a > < / h3 >
< p > < em > Note:< / em > Historically there used to be more apps, however they have been
amalgamated into a single < code > synapse.app.generic_worker< / code > app. The remaining apps
are ones that do specific processing unrelated to requests, e.g. the < code > pusher< / code >
that handles sending out push notifications for new events. The intention is for
all these to be folded into the < code > generic_worker< / code > app and to use config to define
which processes handle the various processing such as push notifications.< / p >
< h2 id = "migration-from-old-config" > < a class = "header" href = "#migration-from-old-config" > Migration from old config< / a > < / h2 >
< p > There are two main independent changes that have been made: introducing Redis
support and merging apps into < code > synapse.app.generic_worker< / code > . Both these changes
are backwards compatible and so no changes to the config are required, however
server admins are encouraged to plan to migrate to Redis as the old style direct
TCP replication config is deprecated.< / p >
< p > To migrate to Redis add the < code > redis< / code > config as above, and optionally remove the
TCP < code > replication< / code > listener from master and < code > worker_replication_port< / code > from worker
config.< / p >
< p > To migrate apps to use < code > synapse.app.generic_worker< / code > simply update the
< code > worker_app< / code > option in the worker configs, and wherever the workers are started
(e.g. in systemd service files, but not required for synctl).< / p >
< h2 id = "architectural-diagram" > < a class = "header" href = "#architectural-diagram" > Architectural diagram< / a > < / h2 >
< p > The following shows an example setup using Redis and a reverse proxy:< / p >
< pre > < code > Clients & Federation
|
v
+-----------+
| |
| Reverse |
| Proxy |
| |
+-----------+
| | |
| | | HTTP requests
+-------------------+ | +-----------+
| +---+ |
| | |
v v v
+--------------+ +--------------+ +--------------+ +--------------+
| Main | | Generic | | Generic | | Event |
| Process | | Worker 1 | | Worker 2 | | Persister |
+--------------+ +--------------+ +--------------+ +--------------+
^ ^ | ^ | | ^ | ^ ^
| | | | | | | | | |
| | | | | HTTP | | | | |
              |   +----------+<--|---|---------+   |     |        |   |
              |                  |   +-------------|-->+----------+   |
| | | |
| | | |
v v v v
====================================================================
Redis pub/sub channel
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h3 id = "using-synctl-with-workers" > < a class = "header" href = "#using-synctl-with-workers" > Using synctl with workers< / a > < / h3 >
< p > If you want to use < code > synctl< / code > to manage your synapse processes, you will need to
create an additional configuration file for the main synapse process. That
configuration should look like this:< / p >
< pre > < code class = "language-yaml" > worker_app: synapse.app.homeserver
< / code > < / pre >
< p > Additionally, each worker app must be configured with the name of a "pid file",
to which it will write its process ID when it starts. For example, for a
synchrotron, you might write:< / p >
< pre > < code class = "language-yaml" > worker_pid_file: /home/matrix/synapse/worker1.pid
< / code > < / pre >
< p > Finally, to actually run your worker-based synapse, you must pass synctl the < code > -a< / code >
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:< / p >
< pre > < code > synctl -a $CONFIG/workers start
< / code > < / pre >
< p > Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.< / p >
< p > To manipulate a specific worker, you pass the < code > -w< / code > option to < code > synctl< / code > :< / p >
< pre > < code > synctl -w $CONFIG/workers/worker1.yaml restart
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "setting-up-synapse-with-workers-and-systemd" > < a class = "header" href = "#setting-up-synapse-with-workers-and-systemd" > Setting up Synapse with Workers and Systemd< / a > < / h1 >
< p > This is a setup for managing synapse with systemd, including support for
managing workers. It provides a < code > matrix-synapse< / code > service for the master, as
well as a < code > matrix-synapse-worker@< / code > service template for any workers you
require. Additionally, to group the required services, it sets up a
< code > matrix-synapse.target< / code > .< / p >
< p > See the folder < a href = "https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/system/" > system< / a >
for the systemd unit files.< / p >
< p > The folder < a href = "https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/workers/" > workers< / a >
contains an example configuration for the < code > federation_reader< / code > worker.< / p >
< h2 id = "synapse-configuration-files" > < a class = "header" href = "#synapse-configuration-files" > Synapse configuration files< / a > < / h2 >
< p > See < a href = "systemd-with-workers/../workers.html" > workers.md< / a > for information on how to set up the
configuration files and reverse-proxy correctly. You can find an example worker
config in the < a href = "https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/workers/" > workers< / a >
folder.< / p >
< p > Systemd manages daemonization itself, so ensure that none of the configuration
files set either < code > daemonize< / code > or < code > worker_daemonize< / code > .< / p >
< p > The config files of all workers are expected to be located in
< code > /etc/matrix-synapse/workers< / code > . If you want to use a different location, edit
the provided < code > *.service< / code > files accordingly.< / p >
< p > There is no need for a separate configuration file for the master process.< / p >
< h2 id = "set-up" > < a class = "header" href = "#set-up" > Set up< / a > < / h2 >
< ol >
< li > Adjust synapse configuration files as above.< / li >
< li > Copy the < code > *.service< / code > and < code > *.target< / code > files in < a href = "https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/system/" > system< / a >
to < code > /etc/systemd/system< / code > .< / li >
< li > Run < code > systemctl daemon-reload< / code > to tell systemd to load the new unit files.< / li >
< li > Run < code > systemctl enable matrix-synapse.service< / code > . This will configure the
synapse master process to be started as part of the < code > matrix-synapse.target< / code >
target.< / li >
< li > For each worker process to be enabled, run < code > systemctl enable matrix-synapse-worker@< worker_name> .service< / code > . For each < code > < worker_name> < / code > , there
should be a corresponding configuration file,
< code > /etc/matrix-synapse/workers/< worker_name> .yaml< / code > .< / li >
< li > Start all the synapse processes with < code > systemctl start matrix-synapse.target< / code > .< / li >
< li > Tell systemd to start synapse on boot with < code > systemctl enable matrix-synapse.target< / code > .< / li >
< / ol >
< h2 id = "usage" > < a class = "header" href = "#usage" > Usage< / a > < / h2 >
< p > Once the services are correctly set up, you can use the following commands
to manage your synapse installation:< / p >
<pre><code class="language-sh"># Restart Synapse master and all workers
systemctl restart matrix-synapse.target
# Stop Synapse and all workers
systemctl stop matrix-synapse.target
# Restart the master alone
systemctl restart matrix-synapse.service
# Restart a specific worker (e.g. federation_reader); the master is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service
# Add a new worker (assuming all configs are set up already)
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.target
</code></pre>
< h2 id = "hardening" > < a class = "header" href = "#hardening" > Hardening< / a > < / h2 >
<p><strong>Optional:</strong> If further hardening is desired, copy
<code>contrib/systemd/override-hardened.conf</code> from this repository to
<code>/etc/systemd/system/matrix-synapse.service.d/override-hardened.conf</code> (the
directory may have to be created). It enables certain sandboxing features in
systemd to further secure the synapse service. Read the comments in the file to
understand what the override is doing. To apply the same hardening options to
any worker processes, copy the same file to
<code>/etc/systemd/system/matrix-synapse-worker@.service.d/override-hardened-worker.conf</code>
(this directory may also have to be created).</p>
<p>Once these files have been copied to their appropriate locations, simply reload
systemd's manager config files and restart all Synapse services to apply the
hardening options. They will automatically be applied at every restart as long
as the override files are present at the specified locations.</p>
<pre><code class="language-sh">systemctl daemon-reload
# Restart services
systemctl restart matrix-synapse.target
</code></pre>
<p>To see their effect, run <code>systemd-analyze security matrix-synapse.service</code>
before and after applying the hardening options and compare the results.</p>
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "administration" > < a class = "header" href = "#administration" > Administration< / a > < / h1 >
< p > This section contains information on managing your Synapse homeserver. This includes:< / p >
< ul >
< li > Managing users, rooms and media via the Admin API.< / li >
< li > Setting up metrics and monitoring to give you insight into your homeserver's health.< / li >
< li > Configuring structured logging.< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "the-admin-api" > < a class = "header" href = "#the-admin-api" > The Admin API< / a > < / h1 >
< h2 id = "authenticate-as-a-server-admin" > < a class = "header" href = "#authenticate-as-a-server-admin" > Authenticate as a server admin< / a > < / h2 >
<p>Many of the API calls in the Admin API will require an <code>access_token</code> for a
server admin. (Note that a server admin is distinct from a room admin.)</p>
< p > A user can be marked as a server admin by updating the database directly, e.g.:< / p >
<pre><code class="language-sql">UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
</code></pre>
< p > A new server admin user can also be created using the < code > register_new_matrix_user< / code >
command. This is a script that is located in the < code > scripts/< / code > directory, or possibly
already on your < code > $PATH< / code > depending on how Synapse was installed.< / p >
< p > Finding your user's < code > access_token< / code > is client-dependent, but will usually be shown in the client's settings.< / p >
< h2 id = "making-an-admin-api-request" > < a class = "header" href = "#making-an-admin-api-request" > Making an Admin API request< / a > < / h2 >
< p > Once you have your < code > access_token< / code > , you will need to authenticate each request to an Admin API endpoint by
providing the token as either a query parameter or a request header. To add it as a request header in cURL:< / p >
<pre><code class="language-sh">curl --header "Authorization: Bearer &lt;access_token&gt;" &lt;the_rest_of_your_API_request&gt;
</code></pre>
< p > For more details on access tokens in Matrix, please refer to the complete
< a href = "https://matrix.org/docs/spec/client_server/r0.6.1#using-access-tokens" > matrix spec documentation< / a > .< / p >
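<p>For scripting, the same authenticated request can be sketched in Python. This is
a minimal sketch using only the standard library; the homeserver URL, token and
helper name below are placeholders of our own, not part of Synapse:</p>
<pre><code class="language-python">import json
import urllib.request

def admin_request(base_url, access_token, path, method="GET", body=None):
    # Build (but do not send) an authenticated Admin API request.
    # The token is passed as an Authorization header, as shown above.
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        base_url + path,
        data=data,
        method=method,
        headers={"Authorization": "Bearer " + access_token},
    )

# Placeholder homeserver and token; send with urllib.request.urlopen(req).
req = admin_request("https://matrix.example.org", "YOUR_TOKEN",
                    "/_synapse/admin/v1/event_reports")
</code></pre>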
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "account-validity-api" > < a class = "header" href = "#account-validity-api" > Account validity API< / a > < / h1 >
< p > This API allows a server administrator to manage the validity of an account. To
use it, you must enable the account validity feature (under
< code > account_validity< / code > ) in Synapse's configuration.< / p >
< h2 id = "renew-account" > < a class = "header" href = "#renew-account" > Renew account< / a > < / h2 >
< p > This API extends the validity of an account by as much time as configured in the
< code > period< / code > parameter from the < code > account_validity< / code > configuration.< / p >
< p > The API is:< / p >
<pre><code>POST /_synapse/admin/v1/account_validity/validity
</code></pre>
< p > with the following body:< / p >
<pre><code class="language-json">{
    "user_id": "&lt;user ID for the account to renew&gt;",
    "expiration_ts": 0,
    "enable_renewal_emails": true
}
</code></pre>
< p > < code > expiration_ts< / code > is an optional parameter and overrides the expiration date,
which otherwise defaults to now + validity period.< / p >
< p > < code > enable_renewal_emails< / code > is also an optional parameter and enables/disables
sending renewal emails to the user. Defaults to true.< / p >
< p > The API returns with the new expiration date for this account, as a timestamp in
milliseconds since epoch:< / p >
<pre><code class="language-json">{
    "expiration_ts": 0
}
</code></pre>
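<p>A short Python sketch of building the request body (the helper name is ours, not
part of Synapse):</p>
<pre><code class="language-python">import json
import time

def renew_body(user_id, expiration_ts=None, enable_renewal_emails=True):
    # Body for POST /_synapse/admin/v1/account_validity/validity.
    # Omit expiration_ts to let the server default to now + validity period.
    body = {"user_id": user_id, "enable_renewal_emails": enable_renewal_emails}
    if expiration_ts is not None:
        body["expiration_ts"] = expiration_ts
    return json.dumps(body)

# Hypothetical account, renewed until 30 days from now (ms since epoch):
in_30_days = int(time.time() * 1000) + 30 * 24 * 3600 * 1000
print(renew_body("@foo:bar.com", expiration_ts=in_30_days))
</code></pre>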
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "delete-a-local-group" > < a class = "header" href = "#delete-a-local-group" > Delete a local group< / a > < / h1 >
< p > This API lets a server admin delete a local group. Doing so will kick all
users out of the group so that their clients will correctly handle the group
being deleted.< / p >
< p > The API is:< / p >
<pre><code>POST /_synapse/admin/v1/delete_group/&lt;group_id&gt;
</code></pre>
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "show-reported-events" > < a class = "header" href = "#show-reported-events" > Show reported events< / a > < / h1 >
< p > This API returns information about reported events.< / p >
<p>The API is:</p>
<pre><code>GET /_synapse/admin/v1/event_reports?from=0&amp;limit=10
</code></pre>
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > It returns a JSON body like the following:< / p >
<pre><code class="language-json">{
    "event_reports": [
        {
            "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
            "id": 2,
            "reason": "foo",
            "score": -100,
            "received_ts": 1570897107409,
            "canonical_alias": "#alias1:matrix.org",
            "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
            "name": "Matrix HQ",
            "sender": "@foobar:matrix.org",
            "user_id": "@foo:matrix.org"
        },
        {
            "event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
            "id": 3,
            "reason": "bar",
            "score": -100,
            "received_ts": 1598889612059,
            "canonical_alias": "#alias2:matrix.org",
            "room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
            "name": "Your room name here",
            "sender": "@foobar:matrix.org",
            "user_id": "@bar:matrix.org"
        }
    ],
    "next_token": 2,
    "total": 4
}
</code></pre>
< p > To paginate, check for < code > next_token< / code > and if present, call the endpoint again with < code > from< / code >
set to the value of < code > next_token< / code > . This will return a new page.< / p >
< p > If the endpoint does not return a < code > next_token< / code > then there are no more reports to
paginate through.< / p >
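<p>The pagination loop can be sketched as follows; <code>fetch_page</code> is a
stand-in of our own for the HTTP call, so only the loop logic is shown:</p>
<pre><code class="language-python">def all_event_reports(fetch_page):
    # fetch_page(offset) stands in for the HTTP call to the endpoint
    # above with from=offset; it returns the decoded JSON body.
    reports, offset = [], 0
    while True:
        page = fetch_page(offset)
        reports.extend(page["event_reports"])
        if "next_token" not in page:
            return reports
        offset = page["next_token"]

# Two fake pages for illustration (no server involved):
pages = {
    0: {"event_reports": [{"id": 2}, {"id": 3}], "next_token": 2, "total": 3},
    2: {"event_reports": [{"id": 5}], "total": 3},
}
print(len(all_event_reports(pages.get)))  # 3 reports across both pages
</code></pre>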
< p > < strong > URL parameters:< / strong > < / p >
<ul>
<li><code>limit</code>: integer - Optional. The maximum number of items to return in this
call, used for pagination. Defaults to <code>100</code>.</li>
<li><code>from</code>: integer - Optional. The offset in the returned results, used for
pagination. This should be treated as an opaque value and not explicitly set to
anything other than the return value of <code>next_token</code> from a previous call.
Defaults to <code>0</code>.</li>
<li><code>dir</code>: string - Direction of event report order: most recent first
(<code>b</code>) or oldest first (<code>f</code>). Defaults to <code>b</code>.</li>
<li><code>user_id</code>: string - Optional. Filters to only return reports made by users
whose user ID contains this value. This is the user who reported the event and
wrote the reason.</li>
<li><code>room_id</code>: string - Optional. Filters to only return reports for rooms whose
room ID contains this value.</li>
</ul>
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > id< / code > : integer - ID of event report.< / li >
< li > < code > received_ts< / code > : integer - The timestamp (in milliseconds since the unix epoch) when this
report was sent.< / li >
< li > < code > room_id< / code > : string - The ID of the room in which the event being reported is located.< / li >
< li > < code > name< / code > : string - The name of the room.< / li >
< li > < code > event_id< / code > : string - The ID of the reported event.< / li >
< li > < code > user_id< / code > : string - This is the user who reported the event and wrote the reason.< / li >
< li > < code > reason< / code > : string - Comment made by the < code > user_id< / code > in this report. May be blank or < code > null< / code > .< / li >
< li > < code > score< / code > : integer - Content is reported based upon a negative score, where -100 is
" most offensive" and 0 is " inoffensive" . May be < code > null< / code > .< / li >
< li > < code > sender< / code > : string - This is the ID of the user who sent the original message/event that
was reported.< / li >
< li > < code > canonical_alias< / code > : string - The canonical alias of the room. < code > null< / code > if the room does not
have a canonical alias set.< / li >
< li > < code > next_token< / code > : integer - Indication for pagination. See above.< / li >
< li > < code > total< / code > : integer - Total number of event reports related to the query
(< code > user_id< / code > and < code > room_id< / code > ).< / li >
< / ul >
< h1 id = "show-details-of-a-specific-event-report" > < a class = "header" href = "#show-details-of-a-specific-event-report" > Show details of a specific event report< / a > < / h1 >
< p > This API returns information about a specific event report.< / p >
<p>The API is:</p>
<pre><code>GET /_synapse/admin/v1/event_reports/&lt;report_id&gt;
</code></pre>
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > It returns a JSON body like the following:< / p >
<pre><code class="language-jsonc">{
    "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
    "event_json": {
        "auth_events": [
            "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
            "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
        ],
        "content": {
            "body": "matrix.org: This Week in Matrix",
            "format": "org.matrix.custom.html",
            "formatted_body": "&lt;strong&gt;matrix.org&lt;/strong&gt;:&lt;br&gt;&lt;a href=\"https://matrix.org/blog/\"&gt;&lt;strong&gt;This Week in Matrix&lt;/strong&gt;&lt;/a&gt;",
            "msgtype": "m.notice"
        },
        "depth": 546,
        "hashes": {
            "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
        },
        "origin": "matrix.org",
        "origin_server_ts": 1592291711430,
        "prev_events": [
            "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
        ],
        "prev_state": [],
        "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
        "sender": "@foobar:matrix.org",
        "signatures": {
            "matrix.org": {
                "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
            }
        },
        "type": "m.room.message",
        "unsigned": {
            "age_ts": 1592291711430
        }
    },
    "id": &lt;report_id&gt;,
    "reason": "foo",
    "score": -100,
    "received_ts": 1570897107409,
    "canonical_alias": "#alias1:matrix.org",
    "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
    "name": "Matrix HQ",
    "sender": "@foobar:matrix.org",
    "user_id": "@foo:matrix.org"
}
</code></pre>
< p > < strong > URL parameters:< / strong > < / p >
< ul >
< li > < code > report_id< / code > : string - The ID of the event report.< / li >
< / ul >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > id< / code > : integer - ID of event report.< / li >
< li > < code > received_ts< / code > : integer - The timestamp (in milliseconds since the unix epoch) when this
report was sent.< / li >
< li > < code > room_id< / code > : string - The ID of the room in which the event being reported is located.< / li >
< li > < code > name< / code > : string - The name of the room.< / li >
< li > < code > event_id< / code > : string - The ID of the reported event.< / li >
< li > < code > user_id< / code > : string - This is the user who reported the event and wrote the reason.< / li >
< li > < code > reason< / code > : string - Comment made by the < code > user_id< / code > in this report. May be blank.< / li >
< li > < code > score< / code > : integer - Content is reported based upon a negative score, where -100 is
" most offensive" and 0 is " inoffensive" .< / li >
< li > < code > sender< / code > : string - This is the ID of the user who sent the original message/event that
was reported.< / li >
< li > < code > canonical_alias< / code > : string - The canonical alias of the room. < code > null< / code > if the room does not
have a canonical alias set.< / li >
< li > < code > event_json< / code > : object - Details of the original event that was reported.< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "contents-1" > < a class = "header" href = "#contents-1" > Contents< / a > < / h1 >
< ul >
< li > < a href = "admin_api/media_admin_api.html#querying-media" > Querying media< / a >
< ul >
< li > < a href = "admin_api/media_admin_api.html#list-all-media-in-a-room" > List all media in a room< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#list-all-media-uploaded-by-a-user" > List all media uploaded by a user< / a > < / li >
< / ul >
< / li >
< li > < a href = "admin_api/media_admin_api.html#quarantine-media" > Quarantine media< / a >
< ul >
< li > < a href = "admin_api/media_admin_api.html#quarantining-media-by-id" > Quarantining media by ID< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#remove-media-from-quarantine-by-id" > Remove media from quarantine by ID< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#quarantining-media-in-a-room" > Quarantining media in a room< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#quarantining-all-media-of-a-user" > Quarantining all media of a user< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#protecting-media-from-being-quarantined" > Protecting media from being quarantined< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#unprotecting-media-from-being-quarantined" > Unprotecting media from being quarantined< / a > < / li >
< / ul >
< / li >
< li > < a href = "admin_api/media_admin_api.html#delete-local-media" > Delete local media< / a >
< ul >
< li > < a href = "admin_api/media_admin_api.html#delete-a-specific-local-media" > Delete a specific local media< / a > < / li >
< li > < a href = "admin_api/media_admin_api.html#delete-local-media-by-date-or-size" > Delete local media by date or size< / a > < / li >
< / ul >
< / li >
< li > < a href = "admin_api/media_admin_api.html#purge-remote-media-api" > Purge Remote Media API< / a > < / li >
< / ul >
< h1 id = "querying-media" > < a class = "header" href = "#querying-media" > Querying media< / a > < / h1 >
< p > These APIs allow extracting media information from the homeserver.< / p >
< h2 id = "list-all-media-in-a-room" > < a class = "header" href = "#list-all-media-in-a-room" > List all media in a room< / a > < / h2 >
< p > This API gets a list of known media in a room.
However, it only shows media from unencrypted events or rooms.< / p >
< p > The API is:< / p >
<pre><code>GET /_synapse/admin/v1/room/&lt;room_id&gt;/media
</code></pre>
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > The API returns a JSON body like the following:< / p >
<pre><code class="language-json">{
    "local": [
        "mxc://localhost/xwvutsrqponmlkjihgfedcba",
        "mxc://localhost/abcdefghijklmnopqrstuvwx"
    ],
    "remote": [
        "mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
        "mxc://matrix.org/abcdefghijklmnopqrstuvwx"
    ]
}
</code></pre>
< h2 id = "list-all-media-uploaded-by-a-user" > < a class = "header" href = "#list-all-media-uploaded-by-a-user" > List all media uploaded by a user< / a > < / h2 >
< p > Listing all media that has been uploaded by a local user can be achieved through
the use of the < a href = "admin_api/user_admin_api.rst#list-media-of-a-user" > List media of a user< / a >
Admin API.< / p >
< h1 id = "quarantine-media" > < a class = "header" href = "#quarantine-media" > Quarantine media< / a > < / h1 >
<p>Quarantining media means that it is marked as inaccessible to users. It applies
to any local media, and any locally-cached copies of remote media.</p>
<p>The media file itself (and any thumbnails) is not deleted from the server.</p>
< h2 id = "quarantining-media-by-id" > < a class = "header" href = "#quarantining-media-by-id" > Quarantining media by ID< / a > < / h2 >
< p > This API quarantines a single piece of local or remote media.< / p >
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/media/quarantine/&lt;server_name&gt;/&lt;media_id&gt;
{}
</code></pre>
<p>Where <code>server_name</code> is in the form of <code>example.org</code>, and <code>media_id</code> is in the
form of <code>abcdefg12345...</code>.</p>
<p>Response:</p>
<pre><code class="language-json">{}
</code></pre>
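<p>Since quarantining by ID needs the media's origin server and ID separately, it
can help to split an <code>mxc://</code> URI (as returned by the room media API)
into the two parts. A small Python sketch, with a helper name of our own:</p>
<pre><code class="language-python">from urllib.parse import urlparse

def quarantine_path(mxc_uri):
    # mxc URIs take the form mxc://server_name/media_id, which maps
    # directly onto the endpoint's two path components.
    parsed = urlparse(mxc_uri)
    if parsed.scheme != "mxc":
        raise ValueError("not an mxc URI: " + mxc_uri)
    media_id = parsed.path.lstrip("/")
    return "/_synapse/admin/v1/media/quarantine/%s/%s" % (parsed.netloc, media_id)

print(quarantine_path("mxc://matrix.org/abcdefghijklmnopqrstuvwx"))
</code></pre>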
< h2 id = "remove-media-from-quarantine-by-id" > < a class = "header" href = "#remove-media-from-quarantine-by-id" > Remove media from quarantine by ID< / a > < / h2 >
< p > This API removes a single piece of local or remote media from quarantine.< / p >
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/media/unquarantine/&lt;server_name&gt;/&lt;media_id&gt;
{}
</code></pre>
<p>Where <code>server_name</code> is in the form of <code>example.org</code>, and <code>media_id</code> is in the
form of <code>abcdefg12345...</code>.</p>
<p>Response:</p>
<pre><code class="language-json">{}
</code></pre>
< h2 id = "quarantining-media-in-a-room" > < a class = "header" href = "#quarantining-media-in-a-room" > Quarantining media in a room< / a > < / h2 >
< p > This API quarantines all local and remote media in a room.< / p >
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/room/&lt;room_id&gt;/media/quarantine
{}
</code></pre>
<p>Where <code>room_id</code> is in the form of <code>!roomid12345:example.org</code>.</p>
<p>Response:</p>
<pre><code class="language-json">{
    "num_quarantined": 10
}
</code></pre>
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > num_quarantined< / code > : integer - The number of media items successfully quarantined< / li >
< / ul >
<p>Note that there is a legacy endpoint, <code>POST /_synapse/admin/v1/quarantine_media/&lt;room_id&gt;</code>, that operates the same.
However, it is deprecated and may be removed in a future release.</p>
< h2 id = "quarantining-all-media-of-a-user" > < a class = "header" href = "#quarantining-all-media-of-a-user" > Quarantining all media of a user< / a > < / h2 >
< p > This API quarantines all < em > local< / em > media that a < em > local< / em > user has uploaded. That is to say, if
you would like to quarantine media uploaded by a user on a remote homeserver, you should
instead use one of the other APIs.< / p >
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/user/&lt;user_id&gt;/media/quarantine
{}
</code></pre>
<p>URL Parameters</p>
<ul>
<li><code>user_id</code>: string - User ID in the form of <code>@bob:example.org</code></li>
</ul>
<p>Response:</p>
<pre><code class="language-json">{
    "num_quarantined": 10
}
</code></pre>
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > num_quarantined< / code > : integer - The number of media items successfully quarantined< / li >
< / ul >
< h2 id = "protecting-media-from-being-quarantined" > < a class = "header" href = "#protecting-media-from-being-quarantined" > Protecting media from being quarantined< / a > < / h2 >
< p > This API protects a single piece of local media from being quarantined using the
above APIs. This is useful for sticker packs and other shared media which you do
not want to get quarantined, especially when
< a href = "admin_api/media_admin_api.html#quarantining-media-in-a-room" > quarantining media in a room< / a > .< / p >
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/media/protect/&lt;media_id&gt;
{}
</code></pre>
<p>Where <code>media_id</code> is in the form of <code>abcdefg12345...</code>.</p>
<p>Response:</p>
<pre><code class="language-json">{}
</code></pre>
< h2 id = "unprotecting-media-from-being-quarantined" > < a class = "header" href = "#unprotecting-media-from-being-quarantined" > Unprotecting media from being quarantined< / a > < / h2 >
<p>This API reverts the protection of a media item.</p>
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/media/unprotect/&lt;media_id&gt;
{}
</code></pre>
<p>Where <code>media_id</code> is in the form of <code>abcdefg12345...</code>.</p>
<p>Response:</p>
<pre><code class="language-json">{}
</code></pre>
< h1 id = "delete-local-media" > < a class = "header" href = "#delete-local-media" > Delete local media< / a > < / h1 >
<p>This API deletes the <em>local</em> media from the disk of your own server.
This includes any local thumbnails and copies of media downloaded from
remote homeservers.
This API will not affect media that has been uploaded to external
media repositories (e.g. https://github.com/turt2live/matrix-media-repo/).
See also <a href="admin_api/media_admin_api.html#purge-remote-media-api">Purge Remote Media API</a>.</p>
< h2 id = "delete-a-specific-local-media" > < a class = "header" href = "#delete-a-specific-local-media" > Delete a specific local media< / a > < / h2 >
< p > Delete a specific < code > media_id< / code > .< / p >
<p>Request:</p>
<pre><code>DELETE /_synapse/admin/v1/media/&lt;server_name&gt;/&lt;media_id&gt;
{}
</code></pre>
<p>URL Parameters</p>
<ul>
<li><code>server_name</code>: string - The name of your local server (e.g. <code>matrix.org</code>)</li>
<li><code>media_id</code>: string - The ID of the media (e.g. <code>abcdefghijklmnopqrstuvwx</code>)</li>
</ul>
<p>Response:</p>
<pre><code class="language-json">{
    "deleted_media": [
        "abcdefghijklmnopqrstuvwx"
    ],
    "total": 1
}
</code></pre>
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > deleted_media< / code > : an array of strings - List of deleted < code > media_id< / code > < / li >
< li > < code > total< / code > : integer - Total number of deleted < code > media_id< / code > < / li >
< / ul >
< h2 id = "delete-local-media-by-date-or-size" > < a class = "header" href = "#delete-local-media-by-date-or-size" > Delete local media by date or size< / a > < / h2 >
<p>Request:</p>
<pre><code>POST /_synapse/admin/v1/media/&lt;server_name&gt;/delete?before_ts=&lt;before_ts&gt;
{}
</code></pre>
<p>URL Parameters</p>
<ul>
<li><code>server_name</code>: string - The name of your local server (e.g. <code>matrix.org</code>).</li>
<li><code>before_ts</code>: string representing a positive integer - Unix timestamp in ms.
Files that were last used before this timestamp will be deleted. This is the
timestamp of last access, not the timestamp of creation.</li>
<li><code>size_gt</code>: Optional - string representing a positive integer - Size of the media in bytes.
Files that are larger will be deleted. Defaults to <code>0</code>.</li>
<li><code>keep_profiles</code>: Optional - string representing a boolean - Whether to keep files
that are still used in image data (e.g. user profiles, room avatars).
If <code>false</code> these files will also be deleted. Defaults to <code>true</code>.</li>
</ul>
< p > Response:< / p >
<pre><code class="language-json">{
    "deleted_media": [
        "abcdefghijklmnopqrstuvwx",
        "abcdefghijklmnopqrstuvwz"
    ],
    "total": 2
}
</code></pre>
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > deleted_media< / code > : an array of strings - List of deleted < code > media_id< / code > < / li >
< li > < code > total< / code > : integer - Total number of deleted < code > media_id< / code > < / li >
< / ul >
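<p>Assembling the query string for this endpoint can be sketched in Python (the
helper name is ours and the values below are made up):</p>
<pre><code class="language-python">from urllib.parse import urlencode

def delete_media_query(before_ts, size_gt=None, keep_profiles=None):
    # before_ts: unix timestamp in ms (last access, not creation).
    # size_gt / keep_profiles are optional, as described above.
    params = {"before_ts": before_ts}
    if size_gt is not None:
        params["size_gt"] = size_gt
    if keep_profiles is not None:
        params["keep_profiles"] = "true" if keep_profiles else "false"
    return urlencode(params)

# e.g. everything unused since this timestamp and larger than 64 MiB:
print(delete_media_query(1615338352000, size_gt=64 * 1024 * 1024))
</code></pre>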
< h1 id = "purge-remote-media-api" > < a class = "header" href = "#purge-remote-media-api" > Purge Remote Media API< / a > < / h1 >
< p > The purge remote media API allows server admins to purge old cached remote media.< / p >
< p > The API is:< / p >
<pre><code>POST /_synapse/admin/v1/purge_media_cache?before_ts=&lt;unix_timestamp_in_ms&gt;
{}
</code></pre>
<p>URL Parameters</p>
<ul>
<li><code>before_ts</code>: string representing a positive integer - Unix timestamp in ms.
All cached media that was last accessed before this timestamp will be removed.</li>
</ul>
< p > Response:< / p >
<pre><code class="language-json">{
    "deleted": 10
}
</code></pre>
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > deleted< / code > : integer - The number of media items successfully deleted< / li >
< / ul >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > If the user re-requests purged remote media, synapse will re-request the media
from the originating server.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "purge-history-api" > < a class = "header" href = "#purge-history-api" > Purge History API< / a > < / h1 >
< p > The purge history API allows server admins to purge historic events from their
database, reclaiming disk space.< / p >
< p > Depending on the amount of history being purged a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.< / p >
< p > Note that Synapse requires at least one message in each room, so it will never
delete the last message in a room.< / p >
< p > The API is:< / p >
<pre><code>POST /_synapse/admin/v1/purge_history/&lt;room_id&gt;[/&lt;event_id&gt;]
</code></pre>
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
deleted.)< / p >
< p > Room state data (such as joins, leaves, topic) is always preserved.< / p >
< p > To delete local message events as well, set < code > delete_local_events< / code > in the body:< / p >
<pre><code class="language-json">{
    "delete_local_events": true
}
</code></pre>
< p > The caller must specify the point in the room to purge up to. This can be
specified by including an event_id in the URI, or by setting a
< code > purge_up_to_event_id< / code > or < code > purge_up_to_ts< / code > in the request body. If an event
id is given, that event (and others at the same graph depth) will be retained.
If < code > purge_up_to_ts< / code > is given, it should be a timestamp since the unix epoch,
in milliseconds.< / p >
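<p>A sketch of assembling the request body in Python (the helper name is ours;
exactly one purge point is assumed here, though the event id may instead go in
the URI):</p>
<pre><code class="language-python">import json

def purge_body(purge_up_to_event_id=None, purge_up_to_ts=None,
               delete_local_events=False):
    # purge_up_to_ts is milliseconds since the unix epoch.
    if (purge_up_to_event_id is None) == (purge_up_to_ts is None):
        raise ValueError("specify exactly one purge point")
    body = {"delete_local_events": delete_local_events}
    if purge_up_to_event_id is not None:
        body["purge_up_to_event_id"] = purge_up_to_event_id
    else:
        body["purge_up_to_ts"] = purge_up_to_ts
    return json.dumps(body)

print(purge_body(purge_up_to_ts=1592291711430, delete_local_events=True))
</code></pre>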
< p > The API starts the purge running, and returns immediately with a JSON body with
a purge id:< / p >
<pre><code class="language-json">{
    "purge_id": "&lt;opaque id&gt;"
}
</code></pre>
< h2 id = "purge-status-query" > < a class = "header" href = "#purge-status-query" > Purge status query< / a > < / h2 >
<p>It is possible to poll for updates on recent purges with a second API:</p>
<pre><code>GET /_synapse/admin/v1/purge_history_status/&lt;purge_id&gt;
</code></pre>
< p > Again, you will need to authenticate by providing an < code > access_token< / code > for a
server admin.< / p >
< p > This API returns a JSON body like the following:< / p >
<pre><code class="language-json">{
    "status": "active"
}
</code></pre>
< p > The status will be one of < code > active< / code > , < code > complete< / code > , or < code > failed< / code > .< / p >
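<p>Polling until the purge finishes can be sketched like this; <code>get_status</code>
is a stand-in of our own for the HTTP call, so no server is needed:</p>
<pre><code class="language-python">import time

def wait_for_purge(get_status, poll_seconds=0):
    # get_status() stands in for the status endpoint above and returns
    # one of "active", "complete" or "failed".
    while True:
        status = get_status()
        if status in ("complete", "failed"):
            return status
        time.sleep(poll_seconds)

# Simulated status sequence for illustration:
statuses = iter(["active", "active", "complete"])
print(wait_for_purge(lambda: next(statuses)))  # complete
</code></pre>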
< h2 id = "reclaim-disk-space-postgres" > < a class = "header" href = "#reclaim-disk-space-postgres" > Reclaim disk space (Postgres)< / a > < / h2 >
< p > To reclaim the disk space and return it to the operating system, you need to run
< code > VACUUM FULL;< / code > on the database.< / p >
< p > < a href = "https://www.postgresql.org/docs/current/sql-vacuum.html" > https://www.postgresql.org/docs/current/sql-vacuum.html< / a > < / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "deprecated-purge-room-api" > < a class = "header" href = "#deprecated-purge-room-api" > Deprecated: Purge room API< / a > < / h1 >
< p > < strong > The old Purge room API is deprecated and will be removed in a future release.
See the new < a href = "admin_api/rooms.html#delete-room-api" > Delete Room API< / a > for more details.< / strong > < / p >
< p > This API will remove all trace of a room from your database.< / p >
< p > All local users must have left the room before it can be removed.< / p >
< p > The API is:< / p >
< pre > < code > POST /_synapse/admin/v1/purge_room
{
    "room_id": "!room:id"
}
< / code > < / pre >
< p > You must authenticate using the access token of an admin user.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "shared-secret-registration" > < a class = "header" href = "#shared-secret-registration" > Shared-Secret Registration< / a > < / h1 >
< p > This API allows for the creation of users in an administrative and
non-interactive way. This is generally used for bootstrapping a Synapse
instance with administrator accounts.< / p >
< p > To authenticate yourself to the server, you will need both the shared secret
(< code > registration_shared_secret< / code > in the homeserver configuration), and a
one-time nonce. If the registration shared secret is not configured, this API
is not enabled.< / p >
< p > To fetch the nonce, you need to request one from the API:< / p >
< pre > < code > > GET /_synapse/admin/v1/register
< {"nonce": "thisisanonce"}
< / code > < / pre >
< p > Once you have the nonce, you can make a < code > POST< / code > to the same URL with a JSON
body containing the nonce, username, password, whether they are an admin
(optional, < code > false< / code > by default), and an HMAC digest of the content. You can also
set the displayname (optional; defaults to the < code > username< / code > ).< / p >
< p > As an example:< / p >
< pre > < code > > POST /_synapse/admin/v1/register
> {
    "nonce": "thisisanonce",
    "username": "pepper_roni",
    "displayname": "Pepper Roni",
    "password": "pizza",
    "admin": true,
    "mac": "mac_digest_here"
}
< {
    "access_token": "token_here",
    "user_id": "@pepper_roni:localhost",
    "home_server": "test",
    "device_id": "device_id_here"
}
< / code > < / pre >
< p > The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
the shared secret and the content being the nonce, user, password, either the
string " admin" or " notadmin" , and optionally the user_type
each separated by NULs. For an example of generation in Python:< / p >
< pre > < code class = "language-python" > import hmac, hashlib
def generate_mac(nonce, user, password, admin=False, user_type=None):
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(nonce.encode('utf8'))
mac.update(b" \x00" )
mac.update(user.encode('utf8'))
mac.update(b" \x00" )
mac.update(password.encode('utf8'))
mac.update(b" \x00" )
mac.update(b" admin" if admin else b" notadmin" )
if user_type:
mac.update(b" \x00" )
mac.update(user_type.encode('utf8'))
return mac.hexdigest()
< / code > < / pre >
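< p > A self-contained version of the MAC computation can be exercised like this (the secret here is a made-up placeholder, so the resulting digest is only illustrative):< / p >

```python
import hmac
import hashlib

# Placeholder secret; in practice use the registration_shared_secret from
# your homeserver configuration, encoded to bytes.
shared_secret = b"example_shared_secret"

def generate_mac(nonce, user, password, admin=False, user_type=None):
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf8"))
    mac.update(b"\x00")
    mac.update(user.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    if user_type:
        mac.update(b"\x00")
        mac.update(user_type.encode("utf8"))
    return mac.hexdigest()

# Matches the example request shown above.
mac_digest = generate_mac("thisisanonce", "pepper_roni", "pizza", admin=True)
```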
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "edit-room-membership-api" > < a class = "header" href = "#edit-room-membership-api" > Edit Room Membership API< / a > < / h1 >
< p > This API allows an administrator to join a user account with a given < code > user_id< / code >
to a room with a given < code > room_id_or_alias< / code > . You can only modify the membership of
local users. The server administrator must be in the room and have permission to
invite users.< / p >
< h2 id = "parameters" > < a class = "header" href = "#parameters" > Parameters< / a > < / h2 >
< p > The following parameters are available:< / p >
< ul >
< li > < code > user_id< / code > - Fully qualified user: for example, < code > @user:server.com< / code > .< / li >
< li > < code > room_id_or_alias< / code > - The room identifier or alias to join: for example,
< code > !636q39766251:server.com< / code > .< / li >
< / ul >
< h2 id = "usage-1" > < a class = "header" href = "#usage-1" > Usage< / a > < / h2 >
< pre > < code > POST /_synapse/admin/v1/join/<room_id_or_alias>
{
    "user_id": "@user:server.com"
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > Response:< / p >
< pre > < code > {
    "room_id": "!636q39766251:server.com"
}
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "contents-2" > < a class = "header" href = "#contents-2" > Contents< / a > < / h1 >
< ul >
< li > < a href = "admin_api/rooms.html#list-room-api" > List Room API< / a >
< ul >
< li > < a href = "admin_api/rooms.html#parameters" > Parameters< / a > < / li >
< li > < a href = "admin_api/rooms.html#usage" > Usage< / a > < / li >
< / ul >
< / li >
< li > < a href = "admin_api/rooms.html#room-details-api" > Room Details API< / a > < / li >
< li > < a href = "admin_api/rooms.html#room-members-api" > Room Members API< / a > < / li >
< li > < a href = "admin_api/rooms.html#room-state-api" > Room State API< / a > < / li >
< li > < a href = "admin_api/rooms.html#delete-room-api" > Delete Room API< / a >
< ul >
< li > < a href = "admin_api/rooms.html#parameters-1" > Parameters< / a > < / li >
< li > < a href = "admin_api/rooms.html#response" > Response< / a > < / li >
< li > < a href = "admin_api/rooms.html#undoing-room-shutdowns" > Undoing room shutdowns< / a > < / li >
< / ul >
< / li >
< li > < a href = "admin_api/rooms.html#make-room-admin-api" > Make Room Admin API< / a > < / li >
< li > < a href = "admin_api/rooms.html#forward-extremities-admin-api" > Forward Extremities Admin API< / a > < / li >
< li > < a href = "admin_api/rooms.html#event-context-api" > Event Context API< / a > < / li >
< / ul >
< h1 id = "list-room-api" > < a class = "header" href = "#list-room-api" > List Room API< / a > < / h1 >
< p > The List Room admin API allows server admins to get a list of rooms on their
server. There are various parameters available that allow for filtering and
sorting the returned list. This API supports pagination.< / p >
< h2 id = "parameters-1" > < a class = "header" href = "#parameters-1" > Parameters< / a > < / h2 >
< p > The following query parameters are available:< / p >
< ul >
< li > < code > from< / code > - Offset in the returned list. Defaults to < code > 0< / code > .< / li >
< li > < code > limit< / code > - Maximum number of rooms to return. Defaults to < code > 100< / code > .< / li >
< li > < code > order_by< / code > - The method in which to sort the returned list of rooms. Valid values are:
< ul >
< li > < code > alphabetical< / code > - Same as < code > name< / code > . This is deprecated.< / li >
< li > < code > size< / code > - Same as < code > joined_members< / code > . This is deprecated.< / li >
< li > < code > name< / code > - Rooms are ordered alphabetically by room name. This is the default.< / li >
< li > < code > canonical_alias< / code > - Rooms are ordered alphabetically by main alias address of the room.< / li >
< li > < code > joined_members< / code > - Rooms are ordered by the number of members. Largest to smallest.< / li >
< li > < code > joined_local_members< / code > - Rooms are ordered by the number of local members. Largest to smallest.< / li >
< li > < code > version< / code > - Rooms are ordered by room version. Largest to smallest.< / li >
< li > < code > creator< / code > - Rooms are ordered alphabetically by creator of the room.< / li >
< li > < code > encryption< / code > - Rooms are ordered alphabetically by the end-to-end encryption algorithm.< / li >
< li > < code > federatable< / code > - Rooms are ordered by whether the room is federatable.< / li >
< li > < code > public< / code > - Rooms are ordered by visibility in room list.< / li >
< li > < code > join_rules< / code > - Rooms are ordered alphabetically by join rules of the room.< / li >
< li > < code > guest_access< / code > - Rooms are ordered alphabetically by guest access option of the room.< / li >
< li > < code > history_visibility< / code > - Rooms are ordered alphabetically by visibility of history of the room.< / li >
< li > < code > state_events< / code > - Rooms are ordered by number of state events. Largest to smallest.< / li >
< / ul >
< / li >
< li > < code > dir< / code > - Direction of room order. Either < code > f< / code > for forwards or < code > b< / code > for backwards. Setting
this value to < code > b< / code > will reverse the above sort order. Defaults to < code > f< / code > .< / li >
< li > < code > search_term< / code > - Filter rooms by their room name. Search term can be contained in any
part of the room name. Defaults to no filtering.< / li >
< / ul >
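< p > The query parameters above can be assembled into a request URL like this (a sketch; the base URL is a placeholder):< / p >

```python
from urllib.parse import urlencode

def rooms_url(base_url, from_=0, limit=100, order_by="name", direction="f", search_term=None):
    # Build the List Room API URL from the documented query parameters.
    params = {"from": from_, "limit": limit, "order_by": order_by, "dir": direction}
    if search_term is not None:
        params["search_term"] = search_term
    return f"{base_url}/_synapse/admin/v1/rooms?{urlencode(params)}"
```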
< p > The following fields are possible in the JSON response body:< / p >
< ul >
< li > < code > rooms< / code > - An array of objects, each containing information about a room.
< ul >
< li > Room objects contain the following fields:
< ul >
< li > < code > room_id< / code > - The ID of the room.< / li >
< li > < code > name< / code > - The name of the room.< / li >
< li > < code > canonical_alias< / code > - The canonical (main) alias address of the room.< / li >
< li > < code > joined_members< / code > - How many users are currently in the room.< / li >
< li > < code > joined_local_members< / code > - How many local users are currently in the room.< / li >
< li > < code > version< / code > - The version of the room as a string.< / li >
< li > < code > creator< / code > - The < code > user_id< / code > of the room creator.< / li >
< li > < code > encryption< / code > - Algorithm of end-to-end encryption of messages. Is < code > null< / code > if encryption is not active.< / li >
< li > < code > federatable< / code > - Whether users on other servers can join this room.< / li >
< li > < code > public< / code > - Whether the room is visible in room directory.< / li >
< li > < code > join_rules< / code > - The type of rules used for users wishing to join this room. One of: [" public" , " knock" , " invite" , " private" ].< / li >
< li > < code > guest_access< / code > - Whether guests can join the room. One of: [" can_join" , " forbidden" ].< / li >
< li > < code > history_visibility< / code > - Who can see the room history. One of: [" invited" , " joined" , " shared" , " world_readable" ].< / li >
< li > < code > state_events< / code > - Total number of state_events of a room. Complexity of the room.< / li >
< / ul >
< / li >
< / ul >
< / li >
< li > < code > offset< / code > - The current pagination offset in rooms. This parameter should be
used instead of < code > next_token< / code > for room offset as < code > next_token< / code > is
not intended to be parsed.< / li >
< li > < code > total_rooms< / code > - The total number of rooms this query can return. Using this
and < code > offset< / code > , you have enough information to know the current
progression through the list.< / li >
< li > < code > next_batch< / code > - If this field is present, we know that there are potentially
more rooms on the server that did not all fit into this response.
We can use < code > next_batch< / code > to get the " next page" of results. To do
so, simply repeat your request, setting the < code > from< / code > parameter to
the value of < code > next_batch< / code > .< / li >
< li > < code > prev_batch< / code > - If this field is present, it is possible to paginate backwards.
Use < code > prev_batch< / code > for the < code > from< / code > value in the next request to
get the " previous page" of results.< / li >
< / ul >
< h2 id = "usage-2" > < a class = "header" href = "#usage-2" > Usage< / a > < / h2 >
< p > A standard request with no filtering:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-jsonc" > {
" rooms" : [
{
" room_id" : " !OGEhHVWSdvArJzumhm:matrix.org" ,
" name" : " Matrix HQ" ,
" canonical_alias" : " #matrix:matrix.org" ,
" joined_members" : 8326,
" joined_local_members" : 2,
" version" : " 1" ,
" creator" : " @foo:matrix.org" ,
" encryption" : null,
" federatable" : true,
" public" : true,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 93534
},
... (8 hidden items) ...
{
" room_id" : " !xYvNcQPhnkrdUmYczI:matrix.org" ,
" name" : " This Week In Matrix (TWIM)" ,
" canonical_alias" : " #twim:matrix.org" ,
" joined_members" : 314,
" joined_local_members" : 20,
" version" : " 4" ,
" creator" : " @foo:matrix.org" ,
" encryption" : " m.megolm.v1.aes-sha2" ,
" federatable" : true,
" public" : false,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 8345
}
],
" offset" : 0,
" total_rooms" : 10
}
< / code > < / pre >
< p > Filtering by room name:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms?search_term=TWIM
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-json" > {
" rooms" : [
{
" room_id" : " !xYvNcQPhnkrdUmYczI:matrix.org" ,
" name" : " This Week In Matrix (TWIM)" ,
" canonical_alias" : " #twim:matrix.org" ,
" joined_members" : 314,
" joined_local_members" : 20,
" version" : " 4" ,
" creator" : " @foo:matrix.org" ,
" encryption" : " m.megolm.v1.aes-sha2" ,
" federatable" : true,
" public" : false,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 8
}
],
" offset" : 0,
" total_rooms" : 1
}
< / code > < / pre >
< p > Paginating through a list of rooms:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms?order_by=size
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-jsonc" > {
" rooms" : [
{
" room_id" : " !OGEhHVWSdvArJzumhm:matrix.org" ,
" name" : " Matrix HQ" ,
" canonical_alias" : " #matrix:matrix.org" ,
" joined_members" : 8326,
" joined_local_members" : 2,
" version" : " 1" ,
" creator" : " @foo:matrix.org" ,
" encryption" : null,
" federatable" : true,
" public" : true,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 93534
},
... (98 hidden items) ...
{
" room_id" : " !xYvNcQPhnkrdUmYczI:matrix.org" ,
" name" : " This Week In Matrix (TWIM)" ,
" canonical_alias" : " #twim:matrix.org" ,
" joined_members" : 314,
" joined_local_members" : 20,
" version" : " 4" ,
" creator" : " @foo:matrix.org" ,
" encryption" : " m.megolm.v1.aes-sha2" ,
" federatable" : true,
" public" : false,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 8345
}
],
" offset" : 0,
" total_rooms" : 150
" next_token" : 100
}
< / code > < / pre >
< p > The presence of the < code > next_token< / code > parameter tells us that there are more rooms
than returned in this request, and we need to make another request to get them.
To get the next batch of room results, we repeat our request, setting the < code > from< / code >
parameter to the value of < code > next_token< / code > .< / p >
< pre > < code > GET /_synapse/admin/v1/rooms?order_by=size&from=100
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-jsonc" > {
" rooms" : [
{
" room_id" : " !mscvqgqpHYjBGDxNym:matrix.org" ,
" name" : " Music Theory" ,
" canonical_alias" : " #musictheory:matrix.org" ,
" joined_members" : 127,
" joined_local_members" : 2,
" version" : " 1" ,
" creator" : " @foo:matrix.org" ,
" encryption" : null,
" federatable" : true,
" public" : true,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 93534
},
... (48 hidden items) ...
{
" room_id" : " !twcBhHVdZlQWuuxBhN:termina.org.uk" ,
" name" : " weechat-matrix" ,
" canonical_alias" : " #weechat-matrix:termina.org.uk" ,
" joined_members" : 137,
" joined_local_members" : 20,
" version" : " 4" ,
" creator" : " @foo:termina.org.uk" ,
" encryption" : null,
" federatable" : true,
" public" : true,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 8345
}
],
" offset" : 100,
" prev_batch" : 0,
" total_rooms" : 150
}
< / code > < / pre >
< p > Once the < code > next_token< / code > parameter is no longer present, we know we've reached the
end of the list.< / p >
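< p > The pagination scheme above can be captured in a small helper (a sketch; the page-fetching callable is injected, so the loop works with any HTTP client):< / p >

```python
def all_rooms(fetch_page):
    # fetch_page(from_) must return one parsed JSON response from the
    # List Room API. Rooms are yielded page by page, following next_batch
    # until it is absent.
    from_ = 0
    while True:
        page = fetch_page(from_)
        yield from page["rooms"]
        if "next_batch" not in page:
            return
        from_ = page["next_batch"]
```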
< h1 id = "room-details-api" > < a class = "header" href = "#room-details-api" > Room Details API< / a > < / h1 >
< p > The Room Details admin API allows server admins to get all details of a room.< / p >
< p > The following fields are possible in the JSON response body:< / p >
< ul >
< li > < code > room_id< / code > - The ID of the room.< / li >
< li > < code > name< / code > - The name of the room.< / li >
< li > < code > topic< / code > - The topic of the room.< / li >
< li > < code > avatar< / code > - The < code > mxc< / code > URI to the avatar of the room.< / li >
< li > < code > canonical_alias< / code > - The canonical (main) alias address of the room.< / li >
< li > < code > joined_members< / code > - How many users are currently in the room.< / li >
< li > < code > joined_local_members< / code > - How many local users are currently in the room.< / li >
< li > < code > joined_local_devices< / code > - How many local devices are currently in the room.< / li >
< li > < code > version< / code > - The version of the room as a string.< / li >
< li > < code > creator< / code > - The < code > user_id< / code > of the room creator.< / li >
< li > < code > encryption< / code > - Algorithm of end-to-end encryption of messages. Is < code > null< / code > if encryption is not active.< / li >
< li > < code > federatable< / code > - Whether users on other servers can join this room.< / li >
< li > < code > public< / code > - Whether the room is visible in room directory.< / li >
< li > < code > join_rules< / code > - The type of rules used for users wishing to join this room. One of: [" public" , " knock" , " invite" , " private" ].< / li >
< li > < code > guest_access< / code > - Whether guests can join the room. One of: [" can_join" , " forbidden" ].< / li >
< li > < code > history_visibility< / code > - Who can see the room history. One of: [" invited" , " joined" , " shared" , " world_readable" ].< / li >
< li > < code > state_events< / code > - Total number of state_events of a room. Complexity of the room.< / li >
< / ul >
< h2 id = "usage-3" > < a class = "header" href = "#usage-3" > Usage< / a > < / h2 >
< p > A standard request:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms/< room_id>
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-json" > {
" room_id" : " !mscvqgqpHYjBGDxNym:matrix.org" ,
" name" : " Music Theory" ,
" avatar" : " mxc://matrix.org/AQDaVFlbkQoErdOgqWRgiGSV" ,
" topic" : " Theory, Composition, Notation, Analysis" ,
" canonical_alias" : " #musictheory:matrix.org" ,
" joined_members" : 127,
" joined_local_members" : 2,
" joined_local_devices" : 2,
" version" : " 1" ,
" creator" : " @foo:matrix.org" ,
" encryption" : null,
" federatable" : true,
" public" : true,
" join_rules" : " invite" ,
" guest_access" : null,
" history_visibility" : " shared" ,
" state_events" : 93534
}
< / code > < / pre >
< h1 id = "room-members-api" > < a class = "header" href = "#room-members-api" > Room Members API< / a > < / h1 >
< p > The Room Members admin API allows server admins to get a list of all members of a room.< / p >
< p > The response includes the following fields:< / p >
< ul >
< li > < code > members< / code > - A list of all the members that are present in the room, represented by their IDs.< / li >
< li > < code > total< / code > - Total number of members in the room.< / li >
< / ul >
< h2 id = "usage-4" > < a class = "header" href = "#usage-4" > Usage< / a > < / h2 >
< p > A standard request:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms/< room_id> /members
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-json" > {
" members" : [
" @foo:matrix.org" ,
" @bar:matrix.org" ,
" @foobar:matrix.org"
],
" total" : 3
}
< / code > < / pre >
< h1 id = "room-state-api" > < a class = "header" href = "#room-state-api" > Room State API< / a > < / h1 >
< p > The Room State admin API allows server admins to get a list of all state events in a room.< / p >
< p > The response includes the following fields:< / p >
< ul >
< li > < code > state< / code > - The current state of the room at the time of request.< / li >
< / ul >
< h2 id = "usage-5" > < a class = "header" href = "#usage-5" > Usage< / a > < / h2 >
< p > A standard request:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms/< room_id> /state
{}
< / code > < / pre >
< p > Response:< / p >
< pre > < code class = "language-json" > {
" state" : [
{" type" : " m.room.create" , " state_key" : " " , " etc" : true},
{" type" : " m.room.power_levels" , " state_key" : " " , " etc" : true},
{" type" : " m.room.name" , " state_key" : " " , " etc" : true}
]
}
< / code > < / pre >
< h1 id = "delete-room-api" > < a class = "header" href = "#delete-room-api" > Delete Room API< / a > < / h1 >
< p > The Delete Room admin API allows server admins to remove rooms from the server
and optionally block them.< / p >
< p > Shuts down a room. Moves all local users and room aliases automatically to a
new room if < code > new_room_user_id< / code > is set. Otherwise, local users simply
leave the room and are not moved anywhere.< / p >
< p > The new room will be created with the user specified by the < code > new_room_user_id< / code > parameter
as room administrator and will contain a message explaining what happened. Users invited
to the new room will have power level < code > -10< / code > by default, and thus be unable to speak.< / p >
< p > If < code > block< / code > is set to < code > true< / code > , it prevents new joins to the old room.< / p >
< p > If < code > purge< / code > is < code > true< / code > (the default), all traces of the old room will be
removed from your database once all local users have been removed. If you do not want
this to happen, set < code > purge< / code > to < code > false< / code > .
Depending on the amount of history being purged, a call to the API may take
several minutes or longer.< / p >
< p > The local server will only have the power to move local users and room aliases to
the new room. Users on other servers will be unaffected.< / p >
< p > The API is:< / p >
< pre > < code > DELETE /_synapse/admin/v1/rooms/< room_id>
< / code > < / pre >
< p > with a body of:< / p >
< pre > < code class = "language-json" > {
" new_room_user_id" : " @someuser:example.com" ,
" room_name" : " Content Violation Notification" ,
" message" : " Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service." ,
" block" : true,
" purge" : true
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" kicked_users" : [
" @foobar:example.com"
],
" failed_to_kick_users" : [],
" local_aliases" : [
" #badroom:example.com" ,
" #evilsaloon:example.com"
],
" new_room_id" : " !newroomid:example.com"
}
< / code > < / pre >
< h2 id = "parameters-2" > < a class = "header" href = "#parameters-2" > Parameters< / a > < / h2 >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > room_id< / code > - The ID of the room.< / li >
< / ul >
< p > The following JSON body parameters are available:< / p >
< ul >
< li > < code > new_room_user_id< / code > - Optional. If set, a new room will be created with this user ID
as the creator and admin, and all users in the old room will be moved into that
room. If not set, no new room will be created and the users will just be removed
from the old room. The user ID must be on the local server, but does not necessarily
have to belong to a registered user.< / li >
< li > < code > room_name< / code > - Optional. A string representing the name of the room that new users will be
invited to. Defaults to < code > Content Violation Notification< / code > .< / li >
< li > < code > message< / code > - Optional. A string containing the first message that will be sent as
< code > new_room_user_id< / code > in the new room. Ideally this will clearly convey why the
original room was shut down. Defaults to < code > Sharing illegal content on this server is not permitted and rooms in violation will be blocked.< / code > < / li >
< li > < code > block< / code > - Optional. If set to < code > true< / code > , this room will be added to a blocking list, preventing
future attempts to join the room. Defaults to < code > false< / code > .< / li >
< li > < code > purge< / code > - Optional. If set to < code > true< / code > , it will remove all traces of the room from your database.
Defaults to < code > true< / code > .< / li >
< li > < code > force_purge< / code > - Optional, and ignored unless < code > purge< / code > is < code > true< / code > . If set to < code > true< / code > , it
will force a purge to go ahead even if there are local users still in the room. Do not
use this unless a regular < code > purge< / code > operation fails, as it could leave those users'
clients in a confused state.< / li >
< / ul >
< p > The JSON body must not be empty; it must contain at least < code > {}< / code > .< / p >
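< p > Since the body must be at least < code > {}< / code > , serialising only the parameters you actually pass can be sketched with a small helper (parameter validation is left out):< / p >

```python
import json

def shutdown_body(**params):
    # Accepts new_room_user_id, room_name, message, block, purge, force_purge.
    # Always returns valid JSON, at minimum "{}".
    return json.dumps(params or {})
```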
< h2 id = "response" > < a class = "header" href = "#response" > Response< / a > < / h2 >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > kicked_users< / code > - An array of users (< code > user_id< / code > ) that were kicked.< / li >
< li > < code > failed_to_kick_users< / code > - An array of users (< code > user_id< / code > ) that were not kicked.< / li >
< li > < code > local_aliases< / code > - An array of strings representing the local aliases that were migrated from
the old room to the new.< / li >
< li > < code > new_room_id< / code > - A string representing the room ID of the new room.< / li >
< / ul >
< h2 id = "undoing-room-shutdowns" > < a class = "header" href = "#undoing-room-shutdowns" > Undoing room shutdowns< / a > < / h2 >
< p > < em > Note< / em > : This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
the structure can and does change without notice.< / p >
< p > First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:< / p >
< ul >
< li > If the room was invite-only, your users will need to be re-invited.< / li >
< li > If the room no longer has any members at all, it'll be impossible to rejoin.< / li >
< li > The first user to rejoin will have to do so via an alias on a different server.< / li >
< / ul >
< p > With all that being said, if you still want to try and recover the room:< / p >
< ol >
< li > For safety reasons, shut down Synapse.< / li >
< li > In the database, run < code > DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';< / code >
< ul >
< li > For caution: it's recommended to run this in a transaction: < code > BEGIN; DELETE ...;< / code > , verify you got 1 result, then < code > COMMIT;< / code > .< / li >
< li > The room ID is the same one supplied to the shutdown room API, not the Content Violation room.< / li >
< / ul >
< / li >
< li > Restart Synapse.< / li >
< / ol >
< p > You will have to manually handle, if you so choose, the following:< / p >
< ul >
< li > Aliases that would have been redirected to the Content Violation room.< / li >
< li > Users that would have been booted from the room (and will have been force-joined to the Content Violation room).< / li >
< li > Removal of the Content Violation room if desired.< / li >
< / ul >
< h2 id = "deprecated-endpoint" > < a class = "header" href = "#deprecated-endpoint" > Deprecated endpoint< / a > < / h2 >
< p > The previous, deprecated API will be removed in a future release. It was:< / p >
< pre > < code > POST /_synapse/admin/v1/rooms/< room_id> /delete
< / code > < / pre >
< p > It behaves in the same way as the current endpoint, differing only in the path and the HTTP method.< / p >
< h1 id = "make-room-admin-api" > < a class = "header" href = "#make-room-admin-api" > Make Room Admin API< / a > < / h1 >
< p > Grants another user the highest power available to a local user who is in the room.
If the user is not in the room, and the room is not publicly joinable, the user will be invited first.< / p >
< p > By default the server admin (the caller) is granted power, but another user can
optionally be specified, e.g.:< / p >
< pre > < code > POST /_synapse/admin/v1/rooms/<room_id_or_alias>/make_room_admin
{
    "user_id": "@foo:example.com"
}
< / code > < / pre >
< h1 id = "forward-extremities-admin-api" > < a class = "header" href = "#forward-extremities-admin-api" > Forward Extremities Admin API< / a > < / h1 >
< p > Enables querying and deleting forward extremities from rooms. When a lot of forward
extremities accumulate in a room, performance can become degraded. For details, see
< a href = "https://github.com/matrix-org/synapse/issues/1760" > #1760< / a > .< / p >
< h2 id = "check-for-forward-extremities" > < a class = "header" href = "#check-for-forward-extremities" > Check for forward extremities< / a > < / h2 >
< p > To check the status of forward extremities for a room:< / p >
< pre > < code > GET /_synapse/admin/v1/rooms/< room_id_or_alias> /forward_extremities
< / code > < / pre >
< p > A response as follows will be returned:< / p >
< pre > < code class = "language-json" > {
" count" : 1,
" results" : [
{
" event_id" : " $M5SP266vsnxctfwFgFLNceaCo3ujhRtg_NiiHabcdefgh" ,
" state_group" : 439,
" depth" : 123,
" received_ts" : 1611263016761
}
]
}
< / code > < / pre >
< h2 id = "deleting-forward-extremities" > < a class = "header" href = "#deleting-forward-extremities" > Deleting forward extremities< / a > < / h2 >
< p > < strong > WARNING< / strong > : Please ensure you know what you're doing and have read
the related issue < a href = "https://github.com/matrix-org/synapse/issues/1760" > #1760< / a > .
Under no situations should this API be executed as an automated maintenance task!< / p >
< p > If a room has lots of forward extremities, the extra ones can be
deleted as follows:< / p >
< pre > < code > DELETE /_synapse/admin/v1/rooms/< room_id_or_alias> /forward_extremities
< / code > < / pre >
< p > A response as follows will be returned, indicating the number of forward extremities
that were deleted:< / p >
< pre > < code class = "language-json" > {
" deleted" : 1
}
< / code > < / pre >
< h1 id = "event-context-api" > < a class = "header" href = "#event-context-api" > Event Context API< / a > < / h1 >
< p > This API lets a client find the context of an event. This is designed primarily to investigate abuse reports.< / p >
< pre > < code > GET /_synapse/admin/v1/rooms/< room_id> /context/< event_id>
< / code > < / pre >
<p>This API mimics <a href="https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-rooms-roomid-context-eventid">GET /_matrix/client/r0/rooms/{roomId}/context/{eventId}</a>. Please refer to the link for full details on parameters and response.</p>
< p > Example response:< / p >
< pre > < code class = "language-json" > {
" end" : " t29-57_2_0_2" ,
" events_after" : [
{
" content" : {
" body" : " This is an example text message" ,
" msgtype" : " m.text" ,
" format" : " org.matrix.custom.html" ,
" formatted_body" : " < b> This is an example text message< /b> "
},
" type" : " m.room.message" ,
" event_id" : " $143273582443PhrSn:example.org" ,
" room_id" : " !636q39766251:example.com" ,
" sender" : " @example:example.org" ,
" origin_server_ts" : 1432735824653,
" unsigned" : {
" age" : 1234
}
}
],
" event" : {
" content" : {
" body" : " filename.jpg" ,
" info" : {
" h" : 398,
" w" : 394,
" mimetype" : " image/jpeg" ,
" size" : 31037
},
" url" : " mxc://example.org/JWEIFJgwEIhweiWJE" ,
" msgtype" : " m.image"
},
" type" : " m.room.message" ,
" event_id" : " $f3h4d129462ha:example.com" ,
" room_id" : " !636q39766251:example.com" ,
" sender" : " @example:example.org" ,
" origin_server_ts" : 1432735824653,
" unsigned" : {
" age" : 1234
}
},
" events_before" : [
{
" content" : {
" body" : " something-important.doc" ,
" filename" : " something-important.doc" ,
" info" : {
" mimetype" : " application/msword" ,
" size" : 46144
},
" msgtype" : " m.file" ,
" url" : " mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe"
},
" type" : " m.room.message" ,
" event_id" : " $143273582443PhrSn:example.org" ,
" room_id" : " !636q39766251:example.com" ,
" sender" : " @example:example.org" ,
" origin_server_ts" : 1432735824653,
" unsigned" : {
" age" : 1234
}
}
],
" start" : " t27-54_2_0_2" ,
" state" : [
{
" content" : {
" creator" : " @example:example.org" ,
" room_version" : " 1" ,
" m.federate" : true,
" predecessor" : {
" event_id" : " $something:example.org" ,
" room_id" : " !oldroom:example.org"
}
},
" type" : " m.room.create" ,
" event_id" : " $143273582443PhrSn:example.org" ,
" room_id" : " !636q39766251:example.com" ,
" sender" : " @example:example.org" ,
" origin_server_ts" : 1432735824653,
" unsigned" : {
" age" : 1234
},
" state_key" : " "
},
{
" content" : {
" membership" : " join" ,
" avatar_url" : " mxc://example.org/SEsfnsuifSDFSSEF" ,
" displayname" : " Alice Margatroid"
},
" type" : " m.room.member" ,
" event_id" : " $143273582443PhrSn:example.org" ,
" room_id" : " !636q39766251:example.com" ,
" sender" : " @example:example.org" ,
" origin_server_ts" : 1432735824653,
" unsigned" : {
" age" : 1234
},
" state_key" : " @alice:example.org"
}
]
}
< / code > < / pre >
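<p>When triaging an abuse report, the interesting parts of the response are usually the pivotal event and how much surrounding context came back. A small sketch of pulling those out of a response body shaped like the example above (the helper is illustrative):</p>

```python
def summarize_context(ctx):
    """Count surrounding events and pull the pivotal event's type
    from a /context response body."""
    return {
        "event_type": ctx["event"]["type"],
        "before": len(ctx.get("events_before", [])),
        "after": len(ctx.get("events_after", [])),
        "state_events": len(ctx.get("state", [])),
    }
```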
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "server-notices-1" > < a class = "header" href = "#server-notices-1" > Server Notices< / a > < / h1 >
< p > The API to send notices is as follows:< / p >
< pre > < code > POST /_synapse/admin/v1/send_server_notice
< / code > < / pre >
< p > or:< / p >
< pre > < code > PUT /_synapse/admin/v1/send_server_notice/{txnId}
< / code > < / pre >
< p > You will need to authenticate with an access token for an admin user.< / p >
< p > When using the < code > PUT< / code > form, retransmissions with the same transaction ID will be
ignored in the same way as with < code > PUT /_matrix/client/r0/rooms/{roomId}/send/{eventType}/{txnId}< / code > .< / p >
< p > The request body should look something like the following:< / p >
< pre > < code class = "language-json" > {
" user_id" : " @target_user:server_name" ,
" content" : {
" msgtype" : " m.text" ,
" body" : " This is my message"
}
}
< / code > < / pre >
< p > You can optionally include the following additional parameters:< / p >
< ul >
< li > < code > type< / code > : the type of event. Defaults to < code > m.room.message< / code > .< / li >
< li > < code > state_key< / code > : Setting this will result in a state event being sent.< / li >
< / ul >
< p > Once the notice has been sent, the API will return the following response:< / p >
< pre > < code class = "language-json" > {
" event_id" : " < event_id> "
}
< / code > < / pre >
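<p>The steps above can be sketched as a small helper that picks the right method and URL depending on whether a transaction ID is supplied (function name and base URL are illustrative; the payload shape matches the example body above):</p>

```python
from urllib.parse import quote

def server_notice_request(user_id, body, txn_id=None,
                          base="https://synapse.example.com"):
    """Return (method, url, json_body) for the send_server_notice API.
    Supplying a txn_id switches to the idempotent PUT form."""
    payload = {"user_id": user_id,
               "content": {"msgtype": "m.text", "body": body}}
    if txn_id is None:
        return ("POST", f"{base}/_synapse/admin/v1/send_server_notice", payload)
    return ("PUT",
            f"{base}/_synapse/admin/v1/send_server_notice/{quote(txn_id, safe='')}",
            payload)
```

<p>Using the <code>PUT</code> form with a stable transaction ID makes retries safe, since retransmissions are deduplicated server-side.</p>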
< p > Note that server notices must be enabled in < code > homeserver.yaml< / code > before this API
can be used. See < a href = "admin_api/../server_notices.html" > server_notices.md< / a > for more information.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "deprecated-shutdown-room-api" > < a class = "header" href = "#deprecated-shutdown-room-api" > Deprecated: Shutdown room API< / a > < / h1 >
< p > < strong > The old Shutdown room API is deprecated and will be removed in a future release.
See the new < a href = "admin_api/rooms.html#delete-room-api" > Delete Room API< / a > for more details.< / strong > < / p >
< p > Shuts down a room, preventing new joins and moves local users and room aliases automatically
to a new room. The new room will be created with the user specified by the
< code > new_room_user_id< / code > parameter as room administrator and will contain a message
explaining what happened. Users invited to the new room will have power level
-10 by default, and thus be unable to speak. The old room's power levels will be changed to
disallow any further invites or joins.< / p >
< p > The local server will only have the power to move local user and room aliases to
the new room. Users on other servers will be unaffected.< / p >
< h2 id = "api-1" > < a class = "header" href = "#api-1" > API< / a > < / h2 >
< p > You will need to authenticate with an access token for an admin user.< / p >
< h3 id = "url" > < a class = "header" href = "#url" > URL< / a > < / h3 >
< p > < code > POST /_synapse/admin/v1/shutdown_room/{room_id}< / code > < / p >
< h3 id = "url-parameters" > < a class = "header" href = "#url-parameters" > URL Parameters< / a > < / h3 >
< ul >
< li > < code > room_id< / code > - The ID of the room (e.g < code > !someroom:example.com< / code > )< / li >
< / ul >
< h3 id = "json-body-parameters" > < a class = "header" href = "#json-body-parameters" > JSON Body Parameters< / a > < / h3 >
< ul >
< li > < code > new_room_user_id< / code > - Required. A string representing the user ID of the user that will admin
the new room that all users in the old room will be moved to.< / li >
< li > < code > room_name< / code > - Optional. A string representing the name of the room that new users will be
invited to.< / li >
< li > < code > message< / code > - Optional. A string containing the first message that will be sent as
< code > new_room_user_id< / code > in the new room. Ideally this will clearly convey why the
original room was shut down.< / li >
< / ul >
<p>If not specified, the default value of <code>room_name</code> is "Content Violation
Notification". The default value of <code>message</code> is "Sharing illegal content on
this server is not permitted and rooms in violation will be blocked."</p>
< h3 id = "response-parameters" > < a class = "header" href = "#response-parameters" > Response Parameters< / a > < / h3 >
< ul >
< li > < code > kicked_users< / code > - An integer number representing the number of users that
were kicked.< / li >
< li > < code > failed_to_kick_users< / code > - An integer number representing the number of users
that were not kicked.< / li >
< li > < code > local_aliases< / code > - An array of strings representing the local aliases that were migrated from
the old room to the new.< / li >
< li > < code > new_room_id< / code > - A string representing the room ID of the new room.< / li >
< / ul >
< h2 id = "example-3" > < a class = "header" href = "#example-3" > Example< / a > < / h2 >
< p > Request:< / p >
< pre > < code > POST /_synapse/admin/v1/shutdown_room/!somebadroom%3Aexample.com
{
" new_room_user_id" : " @someuser:example.com" ,
" room_name" : " Content Violation Notification" ,
" message" : " Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service."
}
< / code > < / pre >
< p > Response:< / p >
<pre><code>{
    "kicked_users": 5,
    "failed_to_kick_users": 0,
    "local_aliases": ["#badroom:example.com", "#evilsaloon:example.com"],
    "new_room_id": "!newroomid:example.com"
}
</code></pre>
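<p>A quick way to sanity-check the outcome is to reduce the response fields to a one-line report (the helper is illustrative):</p>

```python
def shutdown_summary(resp):
    """Render the shutdown_room response counts as a short report line."""
    total = resp["kicked_users"] + resp["failed_to_kick_users"]
    return (f"moved {resp['kicked_users']}/{total} users and "
            f"{len(resp['local_aliases'])} aliases to {resp['new_room_id']}")
```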
< h2 id = "undoing-room-shutdowns-1" > < a class = "header" href = "#undoing-room-shutdowns-1" > Undoing room shutdowns< / a > < / h2 >
< p > < em > Note< / em > : This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
the structure can and does change without notice.< / p >
< p > First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:< / p >
< ul >
< li > If the room was invite-only, your users will need to be re-invited.< / li >
< li > If the room no longer has any members at all, it'll be impossible to rejoin.< / li >
< li > The first user to rejoin will have to do so via an alias on a different server.< / li >
< / ul >
< p > With all that being said, if you still want to try and recover the room:< / p >
< ol >
< li > For safety reasons, shut down Synapse.< / li >
< li > In the database, run < code > DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';< / code >
< ul >
<li>As a precaution, it's recommended to run this in a transaction: <code>BEGIN; DELETE ...;</code>, verify you got exactly one result, then <code>COMMIT;</code>.</li>
< li > The room ID is the same one supplied to the shutdown room API, not the Content Violation room.< / li >
< / ul >
< / li >
< li > Restart Synapse.< / li >
< / ol >
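<p>Production deployments usually run Synapse on PostgreSQL; the following <code>sqlite3</code> sketch only illustrates the "verify exactly one row was affected, then commit" pattern from step 2 (the table name matches the query above, everything else is illustrative):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the Synapse database
conn.execute("CREATE TABLE blocked_rooms (room_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO blocked_rooms VALUES ('!example:example.org')")
conn.commit()

# The sqlite3 driver opens a transaction implicitly before the DELETE,
# mirroring the BEGIN; DELETE ...; COMMIT pattern recommended above.
cur = conn.execute("DELETE FROM blocked_rooms WHERE room_id = ?",
                   ("!example:example.org",))
if cur.rowcount == 1:   # exactly the room that was passed to the shutdown API
    conn.commit()
else:                   # unexpected match count: keep the block in place
    conn.rollback()

remaining = conn.execute("SELECT COUNT(*) FROM blocked_rooms").fetchone()[0]
```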
< p > You will have to manually handle, if you so choose, the following:< / p >
< ul >
< li > Aliases that would have been redirected to the Content Violation room.< / li >
< li > Users that would have been booted from the room (and will have been force-joined to the Content Violation room).< / li >
< li > Removal of the Content Violation room if desired.< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "users-media-usage-statistics" > < a class = "header" href = "#users-media-usage-statistics" > Users' media usage statistics< / a > < / h1 >
<p>Returns information about the local media usage of all users, with the
ability to filter by time and user.</p>
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v1/statistics/users/media
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code >
for a server admin: see < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > .< / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" users" : [
{
" displayname" : " foo_user_0" ,
" media_count" : 2,
" media_length" : 134,
" user_id" : " @foo_user_0:test"
},
{
" displayname" : " foo_user_1" ,
" media_count" : 2,
" media_length" : 134,
" user_id" : " @foo_user_1:test"
}
],
" next_token" : 3,
" total" : 10
}
< / code > < / pre >
< p > To paginate, check for < code > next_token< / code > and if present, call the endpoint
again with < code > from< / code > set to the value of < code > next_token< / code > . This will return a new page.< / p >
< p > If the endpoint does not return a < code > next_token< / code > then there are no more
reports to paginate through.< / p >
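<p>The pagination scheme above can be sketched as a loop that keeps passing <code>next_token</code> back as <code>from</code> until the field disappears (the fetcher is injected here so the sketch stays self-contained; in practice it would be an HTTP GET):</p>

```python
def fetch_all(fetch_page):
    """Follow next_token until the server stops returning one.
    `fetch_page(from_)` stands in for a GET with `from` set."""
    results, from_ = [], 0
    while True:
        page = fetch_page(from_)
        results.extend(page["users"])
        if "next_token" not in page:
            return results
        from_ = page["next_token"]

# Simulated two-page response, shaped like the example above.
pages = {0: {"users": [{"user_id": "@a:test"}], "next_token": 1, "total": 2},
         1: {"users": [{"user_id": "@b:test"}], "total": 2}}
users = fetch_all(lambda f: pages[f])
```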
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > limit< / code > : string representing a positive integer - Is optional but is
used for pagination, denoting the maximum number of items to return
in this call. Defaults to < code > 100< / code > .< / li >
< li > < code > from< / code > : string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value
and not explicitly set to anything other than the return value of < code > next_token< / code > from a
previous call. Defaults to < code > 0< / code > .< / li >
< li > < code > order_by< / code > - string - The method in which to sort the returned list of users. Valid values are:
< ul >
< li > < code > user_id< / code > - Users are ordered alphabetically by < code > user_id< / code > . This is the default.< / li >
< li > < code > displayname< / code > - Users are ordered alphabetically by < code > displayname< / code > .< / li >
< li > < code > media_length< / code > - Users are ordered by the total size of uploaded media in bytes.
Smallest to largest.< / li >
< li > < code > media_count< / code > - Users are ordered by number of uploaded media. Smallest to largest.< / li >
< / ul >
< / li >
< li > < code > from_ts< / code > - string representing a positive integer - Considers only
files created at this timestamp or later. Unix timestamp in ms.< / li >
< li > < code > until_ts< / code > - string representing a positive integer - Considers only
files created at this timestamp or earlier. Unix timestamp in ms.< / li >
< li > < code > search_term< / code > - string - Filter users by their user ID localpart < strong > or< / strong > displayname.
The search term can be found in any part of the string.
Defaults to no filtering.< / li >
< li > < code > dir< / code > - string - Direction of order. Either < code > f< / code > for forwards or < code > b< / code > for backwards.
Setting this value to < code > b< / code > will reverse the above sort order. Defaults to < code > f< / code > .< / li >
< / ul >
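<p>The parameters above can be assembled into a query string like so (the helper is illustrative; note that <code>from_ts</code>/<code>until_ts</code> are Unix timestamps in milliseconds, sent as strings):</p>

```python
from urllib.parse import urlencode

def media_stats_query(limit=100, from_=0, order_by="user_id",
                      from_ts=None, until_ts=None, search_term=None, dir="f"):
    """Assemble the query string for the media statistics endpoint."""
    params = {"limit": limit, "from": from_, "order_by": order_by, "dir": dir}
    if from_ts is not None:
        params["from_ts"] = from_ts
    if until_ts is not None:
        params["until_ts"] = until_ts
    if search_term:
        params["search_term"] = search_term
    return "/_synapse/admin/v1/statistics/users/media?" + urlencode(params)
```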
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > users< / code > - An array of objects, each containing information
about the user and their local media. Objects contain the following fields:
< ul >
< li > < code > displayname< / code > - string - Displayname of this user.< / li >
< li > < code > media_count< / code > - integer - Number of uploaded media by this user.< / li >
< li > < code > media_length< / code > - integer - Size of uploaded media in bytes by this user.< / li >
< li > < code > user_id< / code > - string - Fully-qualified user ID (ex. < code > @user:server.com< / code > ).< / li >
< / ul >
< / li >
< li > < code > next_token< / code > - integer - Opaque value used for pagination. See above.< / li >
< li > < code > total< / code > - integer - Total number of users after filtering.< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "user-admin-api" > < a class = "header" href = "#user-admin-api" > User Admin API< / a > < / h1 >
< h2 id = "query-user-account" > < a class = "header" href = "#query-user-account" > Query User Account< / a > < / h2 >
< p > This API returns information about a specific user account.< / p >
< p > The api is:< / p >
< pre > < code > GET /_synapse/admin/v2/users/< user_id>
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > It returns a JSON body like the following:< / p >
< pre > < code class = "language-json" > {
" displayname" : " User" ,
" threepids" : [
{
" medium" : " email" ,
" address" : " < user_mail_1> "
},
{
" medium" : " email" ,
" address" : " < user_mail_2> "
}
],
" avatar_url" : " < avatar_url> " ,
" admin" : 0,
" deactivated" : 0,
" shadow_banned" : 0,
" password_hash" : " $2b$12$p9B4GkqYdRTPGD" ,
" creation_ts" : 1560432506,
" appservice_id" : null,
" consent_server_notice_sent" : null,
" consent_version" : null
}
< / code > < / pre >
< p > URL parameters:< / p >
< ul >
< li > < code > user_id< / code > : fully-qualified user id: for example, < code > @user:server.com< / code > .< / li >
< / ul >
< h2 id = "create-or-modify-account" > < a class = "header" href = "#create-or-modify-account" > Create or modify Account< / a > < / h2 >
< p > This API allows an administrator to create or modify a user account with a
specific < code > user_id< / code > .< / p >
< p > This api is:< / p >
< pre > < code > PUT /_synapse/admin/v2/users/< user_id>
< / code > < / pre >
< p > with a body of:< / p >
< pre > < code class = "language-json" > {
" password" : " user_password" ,
" displayname" : " User" ,
" threepids" : [
{
" medium" : " email" ,
" address" : " < user_mail_1> "
},
{
" medium" : " email" ,
" address" : " < user_mail_2> "
}
],
" avatar_url" : " < avatar_url> " ,
" admin" : false,
" deactivated" : false
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > URL parameters:< / p >
< ul >
< li > < code > user_id< / code > : fully-qualified user id: for example, < code > @user:server.com< / code > .< / li >
< / ul >
< p > Body parameters:< / p >
< ul >
< li >
< p > < code > password< / code > , optional. If provided, the user's password is updated and all
devices are logged out.< / p >
< / li >
< li >
< p > < code > displayname< / code > , optional, defaults to the value of < code > user_id< / code > .< / p >
< / li >
< li >
< p > < code > threepids< / code > , optional, allows setting the third-party IDs (email, msisdn)
belonging to a user.< / p >
< / li >
< li >
< p > < code > avatar_url< / code > , optional, must be a
< a href = "https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris" > MXC URI< / a > .< / p >
< / li >
< li >
< p > < code > admin< / code > , optional, defaults to < code > false< / code > .< / p >
< / li >
< li >
< p > < code > deactivated< / code > , optional. If unspecified, deactivation state will be left
unchanged on existing accounts and set to < code > false< / code > for new accounts.
A user cannot be erased by deactivating with this API. For details on
deactivating users see < a href = "admin_api/user_admin_api.html#deactivate-account" > Deactivate Account< / a > .< / p >
< / li >
< / ul >
< p > If the user already exists then optional parameters default to the current value.< / p >
<p>In order to re-activate an account, <code>deactivated</code> must be set to <code>false</code>. If
users do not log in via single-sign-on, a new <code>password</code> must be provided.</p>
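<p>A minimal sketch of building a re-activation body for this endpoint (the helper is illustrative; only non-SSO users need the fresh password):</p>

```python
def reactivation_body(password=None):
    """Body for PUT /_synapse/admin/v2/users/<user_id> that re-activates
    an account; non-SSO users also need a new password."""
    body = {"deactivated": False}
    if password is not None:
        body["password"] = password
    return body
```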
< h2 id = "list-accounts" > < a class = "header" href = "#list-accounts" > List Accounts< / a > < / h2 >
< p > This API returns all local user accounts.
By default, the response is ordered by ascending user ID.< / p >
< pre > < code > GET /_synapse/admin/v2/users?from=0& limit=10& guests=false
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" users" : [
{
" name" : " < user_id1> " ,
" is_guest" : 0,
" admin" : 0,
" user_type" : null,
" deactivated" : 0,
" shadow_banned" : 0,
" displayname" : " < User One> " ,
" avatar_url" : null
}, {
" name" : " < user_id2> " ,
" is_guest" : 0,
" admin" : 1,
" user_type" : null,
" deactivated" : 0,
" shadow_banned" : 0,
" displayname" : " < User Two> " ,
" avatar_url" : " < avatar_url> "
}
],
" next_token" : " 100" ,
" total" : 200
}
< / code > < / pre >
< p > To paginate, check for < code > next_token< / code > and if present, call the endpoint again
with < code > from< / code > set to the value of < code > next_token< / code > . This will return a new page.< / p >
< p > If the endpoint does not return a < code > next_token< / code > then there are no more users
to paginate through.< / p >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li >
< p > < code > user_id< / code > - Is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the < code > name< / code > parameter.< / p >
< / li >
< li >
< p > < code > name< / code > - Is optional and filters to only return users with user ID localparts
< strong > or< / strong > displaynames that contain this value.< / p >
< / li >
< li >
< p > < code > guests< / code > - string representing a bool - Is optional and if < code > false< / code > will < strong > exclude< / strong > guest users.
Defaults to < code > true< / code > to include guest users.< / p >
< / li >
< li >
< p > < code > deactivated< / code > - string representing a bool - Is optional and if < code > true< / code > will < strong > include< / strong > deactivated users.
Defaults to < code > false< / code > to exclude deactivated users.< / p >
< / li >
< li >
< p > < code > limit< / code > - string representing a positive integer - Is optional but is used for pagination,
denoting the maximum number of items to return in this call. Defaults to < code > 100< / code > .< / p >
< / li >
< li >
< p > < code > from< / code > - string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of < code > next_token< / code > from a previous call.
Defaults to < code > 0< / code > .< / p >
< / li >
< li >
< p > < code > order_by< / code > - The method by which to sort the returned list of users.
If the ordered field has duplicates, the second order is always by ascending < code > name< / code > ,
which guarantees a stable ordering. Valid values are:< / p >
< ul >
< li > < code > name< / code > - Users are ordered alphabetically by < code > name< / code > . This is the default.< / li >
< li > < code > is_guest< / code > - Users are ordered by < code > is_guest< / code > status.< / li >
< li > < code > admin< / code > - Users are ordered by < code > admin< / code > status.< / li >
< li > < code > user_type< / code > - Users are ordered alphabetically by < code > user_type< / code > .< / li >
< li > < code > deactivated< / code > - Users are ordered by < code > deactivated< / code > status.< / li >
< li > < code > shadow_banned< / code > - Users are ordered by < code > shadow_banned< / code > status.< / li >
< li > < code > displayname< / code > - Users are ordered alphabetically by < code > displayname< / code > .< / li >
< li > < code > avatar_url< / code > - Users are ordered alphabetically by avatar URL.< / li >
< / ul >
< / li >
< li >
< p > < code > dir< / code > - Direction of media order. Either < code > f< / code > for forwards or < code > b< / code > for backwards.
Setting this value to < code > b< / code > will reverse the above sort order. Defaults to < code > f< / code > .< / p >
< / li >
< / ul >
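<p>Note that <code>guests</code> and <code>deactivated</code> are booleans sent as the strings <code>true</code>/<code>false</code>. A small sketch of assembling the query (the helper is illustrative):</p>

```python
from urllib.parse import urlencode

def list_accounts_query(from_=0, limit=10, guests=True, deactivated=False):
    """Build the query string for GET /_synapse/admin/v2/users;
    booleans are encoded as the strings "true"/"false"."""
    params = {"from": from_, "limit": limit,
              "guests": "true" if guests else "false",
              "deactivated": "true" if deactivated else "false"}
    return "/_synapse/admin/v2/users?" + urlencode(params)
```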
< p > Caution. The database only has indexes on the columns < code > name< / code > and < code > created_ts< / code > .
This means that if a different sort order is used (< code > is_guest< / code > , < code > admin< / code > ,
< code > user_type< / code > , < code > deactivated< / code > , < code > shadow_banned< / code > , < code > avatar_url< / code > or < code > displayname< / code > ),
this can cause a large load on the database, especially for large environments.< / p >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li >
<p><code>users</code> - An array of objects, each containing information about a user.
User objects contain the following fields:</p>
< ul >
< li > < code > name< / code > - string - Fully-qualified user ID (ex. < code > @user:server.com< / code > ).< / li >
<li><code>is_guest</code> - bool - Whether the user is a guest account.</li>
<li><code>admin</code> - bool - Whether the user is a server administrator.</li>
<li><code>user_type</code> - string - Type of the user. Normal users are type <code>None</code>.
This allows user-type-specific behaviour. There are also types <code>support</code> and <code>bot</code>.</li>
<li><code>deactivated</code> - bool - Whether the user has been marked as deactivated.</li>
<li><code>shadow_banned</code> - bool - Whether the user has been marked as shadow-banned.</li>
< li > < code > displayname< / code > - string - The user's display name if they have set one.< / li >
< li > < code > avatar_url< / code > - string - The user's avatar URL if they have set one.< / li >
< / ul >
< / li >
< li >
< p > < code > next_token< / code > : string representing a positive integer - Indication for pagination. See above.< / p >
< / li >
< li >
<p><code>total</code> - integer - Total number of users.</p>
< / li >
< / ul >
< h2 id = "query-current-sessions-for-a-user" > < a class = "header" href = "#query-current-sessions-for-a-user" > Query current sessions for a user< / a > < / h2 >
< p > This API returns information about the active sessions for a specific user.< / p >
< p > The endpoints are:< / p >
< pre > < code > GET /_synapse/admin/v1/whois/< user_id>
< / code > < / pre >
< p > and:< / p >
< pre > < code > GET /_matrix/client/r0/admin/whois/< userId>
< / code > < / pre >
< p > See also: < a href = "https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid" > Client Server
API Whois< / a > .< / p >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > It returns a JSON body like the following:< / p >
< pre > < code class = "language-json" > {
" user_id" : " < user_id> " ,
" devices" : {
" " : {
" sessions" : [
{
" connections" : [
{
" ip" : " 1.2.3.4" ,
" last_seen" : 1417222374433,
" user_agent" : " Mozilla/5.0 ..."
},
{
" ip" : " 1.2.3.10" ,
" last_seen" : 1417222374500,
" user_agent" : " Dalvik/2.1.0 ..."
}
]
}
]
}
}
}
< / code > < / pre >
< p > < code > last_seen< / code > is measured in milliseconds since the Unix epoch.< / p >
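<p>The millisecond timestamps are easy to misread as seconds; a sketch of flattening the nested response into readable (IP, UTC time) pairs (the helper is illustrative):</p>

```python
from datetime import datetime, timezone

def last_connections(whois):
    """Flatten a whois response into (ip, UTC datetime) pairs,
    converting last_seen from milliseconds to a datetime."""
    out = []
    for device in whois["devices"].values():
        for session in device["sessions"]:
            for conn in session["connections"]:
                ts = datetime.fromtimestamp(conn["last_seen"] / 1000,
                                            tz=timezone.utc)
                out.append((conn["ip"], ts))
    return out
```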
< h2 id = "deactivate-account" > < a class = "header" href = "#deactivate-account" > Deactivate Account< / a > < / h2 >
< p > This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).< / p >
<p>It can also mark the user as GDPR-erased. This means messages sent by the
user will still be visible to anyone who was in the room when these messages
were sent, but hidden from users joining the room afterwards.</p>
< p > The api is:< / p >
< pre > < code > POST /_synapse/admin/v1/deactivate/< user_id>
< / code > < / pre >
< p > with a body of:< / p >
< pre > < code class = "language-json" > {
" erase" : true
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > The erase parameter is optional and defaults to < code > false< / code > .
An empty body may be passed for backwards compatibility.< / p >
<p>The following actions are performed when deactivating a user:</p>
< ul >
<li>Try to unbind 3PIDs from the identity server</li>
< li > Remove all 3PIDs from the homeserver< / li >
< li > Delete all devices and E2EE keys< / li >
< li > Delete all access tokens< / li >
< li > Delete the password hash< / li >
<li>Remove the user from all rooms of which they are a member</li>
< li > Remove the user from the user directory< / li >
< li > Reject all pending invites< / li >
< li > Remove all account validity information related to the user< / li >
< / ul >
< p > The following additional actions are performed during deactivation if < code > erase< / code >
is set to < code > true< / code > :< / p >
< ul >
< li > Remove the user's display name< / li >
< li > Remove the user's avatar URL< / li >
< li > Mark the user as erased< / li >
< / ul >
< h2 id = "reset-password" > < a class = "header" href = "#reset-password" > Reset password< / a > < / h2 >
< p > Changes the password of another user. This will automatically log the user out of all their devices.< / p >
< p > The api is:< / p >
< pre > < code > POST /_synapse/admin/v1/reset_password/< user_id>
< / code > < / pre >
< p > with a body of:< / p >
< pre > < code class = "language-json" > {
" new_password" : " < secret> " ,
" logout_devices" : true
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > The parameter < code > new_password< / code > is required.
The parameter < code > logout_devices< / code > is optional and defaults to < code > true< / code > .< / p >
< h2 id = "get-whether-a-user-is-a-server-administrator-or-not" > < a class = "header" href = "#get-whether-a-user-is-a-server-administrator-or-not" > Get whether a user is a server administrator or not< / a > < / h2 >
< p > The api is:< / p >
< pre > < code > GET /_synapse/admin/v1/users/< user_id> /admin
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" admin" : true
}
< / code > < / pre >
< h2 id = "change-whether-a-user-is-a-server-administrator-or-not" > < a class = "header" href = "#change-whether-a-user-is-a-server-administrator-or-not" > Change whether a user is a server administrator or not< / a > < / h2 >
< p > Note that you cannot demote yourself.< / p >
< p > The api is:< / p >
< pre > < code > PUT /_synapse/admin/v1/users/< user_id> /admin
< / code > < / pre >
< p > with a body of:< / p >
< pre > < code class = "language-json" > {
" admin" : true
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< h2 id = "list-room-memberships-of-a-user" > < a class = "header" href = "#list-room-memberships-of-a-user" > List room memberships of a user< / a > < / h2 >
<p>Gets a list of all <code>room_id</code>s that a specific <code>user_id</code> is a member of.</p>
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v1/users/< user_id> /joined_rooms
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" joined_rooms" : [
" !DuGcnbhHGaSZQoNQR:matrix.org" ,
" !ZtSaPCawyWtxfWiIy:matrix.org"
],
" total" : 2
}
< / code > < / pre >
< p > For a remote user, the server returns only the rooms of which both the user
and the server are members. If the user is local, all rooms of which the user
is a member are returned.< / p >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< / ul >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > joined_rooms< / code > - An array of < code > room_id< / code > .< / li >
< li > < code > total< / code > - Number of rooms.< / li >
< / ul >
< h2 id = "list-media-of-a-user" > < a class = "header" href = "#list-media-of-a-user" > List media of a user< / a > < / h2 >
< p > Gets a list of all local media that a specific < code > user_id< / code > has created.
By default, the response is ordered by descending creation date and ascending media ID.
The newest media is on top. You can change the order with parameters
< code > order_by< / code > and < code > dir< / code > .< / p >
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v1/users/< user_id> /media
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" media" : [
{
" created_ts" : 100400,
" last_access_ts" : null,
" media_id" : " qXhyRzulkwLsNHTbpHreuEgo" ,
" media_length" : 67,
" media_type" : " image/png" ,
" quarantined_by" : null,
" safe_from_quarantine" : false,
" upload_name" : " test1.png"
},
{
" created_ts" : 200400,
" last_access_ts" : null,
" media_id" : " FHfiSnzoINDatrXHQIXBtahw" ,
" media_length" : 67,
" media_type" : " image/png" ,
" quarantined_by" : null,
" safe_from_quarantine" : false,
" upload_name" : " test2.png"
}
],
" next_token" : 3,
" total" : 2
}
< / code > < / pre >
< p > To paginate, check for < code > next_token< / code > and if present, call the endpoint again
with < code > from< / code > set to the value of < code > next_token< / code > . This will return a new page.< / p >
< p > If the endpoint does not return a < code > next_token< / code > then there are no more
media to paginate through.< / p >
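< p > The pagination scheme can be sketched as a loop. Here it is written against a stand-in < code > fetch_page< / code > callable rather than a real HTTP client, so only the token-following logic described above is shown:< / p >

```python
def iter_all_media(fetch_page):
    # fetch_page(from_token) returns the decoded JSON body of one call to
    # GET .../users/<user_id>/media?from=<from_token>
    from_token = 0
    while True:
        page = fetch_page(from_token)
        yield from page["media"]
        if "next_token" not in page:
            break  # no next_token: nothing left to paginate through
        # next_token is opaque; feed it straight back as `from`.
        from_token = page["next_token"]
```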
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li >
< p > < code > user_id< / code > - string - fully qualified: for example, < code > @user:server.com< / code > .< / p >
< / li >
< li >
< p > < code > limit< / code > : string representing a positive integer - Is optional but is used for pagination,
denoting the maximum number of items to return in this call. Defaults to < code > 100< / code > .< / p >
< / li >
< li >
< p > < code > from< / code > : string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of < code > next_token< / code > from a previous call.
Defaults to < code > 0< / code > .< / p >
< / li >
< li >
< p > < code > order_by< / code > - The method by which to sort the returned list of media.
If the ordered field has duplicates, the second order is always by ascending < code > media_id< / code > ,
which guarantees a stable ordering. Valid values are:< / p >
< ul >
< li > < code > media_id< / code > - Media are ordered alphabetically by < code > media_id< / code > .< / li >
< li > < code > upload_name< / code > - Media are ordered alphabetically by the name the media was uploaded with.< / li >
< li > < code > created_ts< / code > - Media are ordered by when the content was uploaded in ms.
Smallest to largest. This is the default.< / li >
< li > < code > last_access_ts< / code > - Media are ordered by when the content was last accessed in ms.
Smallest to largest.< / li >
< li > < code > media_length< / code > - Media are ordered by length of the media in bytes.
Smallest to largest.< / li >
< li > < code > media_type< / code > - Media are ordered alphabetically by MIME-type.< / li >
< li > < code > quarantined_by< / code > - Media are ordered alphabetically by the user ID that
initiated the quarantine request for this media.< / li >
< li > < code > safe_from_quarantine< / code > - Media are ordered by whether they are marked as safe
from quarantining.< / li >
< / ul >
< / li >
< li >
< p > < code > dir< / code > - Direction of media order. Either < code > f< / code > for forwards or < code > b< / code > for backwards.
Setting this value to < code > b< / code > will reverse the above sort order. Defaults to < code > f< / code > .< / p >
< / li >
< / ul >
< p > If neither < code > order_by< / code > nor < code > dir< / code > is set, the default order is newest media on top
(corresponds to < code > order_by< / code > = < code > created_ts< / code > and < code > dir< / code > = < code > b< / code > ).< / p >
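< p > Combined, these parameters form an ordinary query string. As an illustrative (assumed) combination, fetching the 50 largest media items of a user:< / p >

```python
from urllib.parse import urlencode

# order_by/dir/limit as documented above; values here are an example only.
params = {"order_by": "media_length", "dir": "b", "limit": 50}
query = urlencode(params)
url = f"/_synapse/admin/v1/users/<user_id>/media?{query}"
```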
< p > Caution. The database only has indexes on the columns < code > media_id< / code > ,
< code > user_id< / code > and < code > created_ts< / code > . This means that if a different sort order is used
(< code > upload_name< / code > , < code > last_access_ts< / code > , < code > media_length< / code > , < code > media_type< / code > ,
< code > quarantined_by< / code > or < code > safe_from_quarantine< / code > ), this can cause a large load on the
database, especially for large environments.< / p >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li >
< p > < code > media< / code > - An array of objects, each containing information about a media item.
Media objects contain the following fields:< / p >
< ul >
< li >
< p > < code > created_ts< / code > - integer - Timestamp when the content was uploaded in ms.< / p >
< / li >
< li >
< p > < code > last_access_ts< / code > - integer - Timestamp when the content was last accessed in ms.< / p >
< / li >
< li >
< p > < code > media_id< / code > - string - The id used to refer to the media.< / p >
< / li >
< li >
< p > < code > media_length< / code > - integer - Length of the media in bytes.< / p >
< / li >
< li >
< p > < code > media_type< / code > - string - The MIME-type of the media.< / p >
< / li >
< li >
< p > < code > quarantined_by< / code > - string - The user ID that initiated the quarantine request
for this media.< / p >
< / li >
< li >
< p > < code > safe_from_quarantine< / code > - bool - Whether this media is marked as safe from quarantining.< / p >
< / li >
< li >
< p > < code > upload_name< / code > - string - The name the media was uploaded with.< / p >
< / li >
< / ul >
< / li >
< li >
< p > < code > next_token< / code > : integer - Indication for pagination. See above.< / p >
< / li >
< li >
< p > < code > total< / code > - integer - Total number of media.< / p >
< / li >
< / ul >
< h2 id = "login-as-a-user" > < a class = "header" href = "#login-as-a-user" > Login as a user< / a > < / h2 >
< p > Get an access token that can be used to authenticate as that user. This is useful
when admins wish to perform actions on behalf of a user.< / p >
< p > The API is:< / p >
< pre > < code > POST /_synapse/admin/v1/users/< user_id> /login
{}
< / code > < / pre >
< p > An optional < code > valid_until_ms< / code > field can be specified in the request body as an
integer timestamp that specifies when the token should expire. By default tokens
do not expire.< / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" access_token" : " < opaque_access_token_string> "
}
< / code > < / pre >
< p > This API does < em > not< / em > generate a new device for the user, so the token will not appear
in their < code > /devices< / code > list, and in general the target user should not be able to
tell that someone has logged in as them.< / p >
< p > To expire the token, call the standard < code > /logout< / code > API with the token.< / p >
< p > Note: The token will expire if the < em > admin< / em > user calls < code > /logout/all< / code > from any
of their devices, but the token will < em > not< / em > expire if the target user does the
same.< / p >
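< p > Since < code > valid_until_ms< / code > is an absolute timestamp in milliseconds, a caller typically derives it from the current time. A small sketch (the one-hour lifetime is an arbitrary example):< / p >

```python
import time


def valid_until_ms(now_ms, ttl_hours):
    # Absolute expiry time: current time plus the desired lifetime, in ms.
    return now_ms + ttl_hours * 3600 * 1000


# Request body for a token that expires one hour from now:
body = {"valid_until_ms": valid_until_ms(int(time.time() * 1000), 1)}
```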
< h2 id = "user-devices" > < a class = "header" href = "#user-devices" > User devices< / a > < / h2 >
< h3 id = "list-all-devices" > < a class = "header" href = "#list-all-devices" > List all devices< / a > < / h3 >
< p > Gets information about all devices for a specific < code > user_id< / code > .< / p >
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v2/users/< user_id> /devices
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" devices" : [
{
" device_id" : " QBUAZIFURK" ,
" display_name" : " android" ,
" last_seen_ip" : " 1.2.3.4" ,
" last_seen_ts" : 1474491775024,
" user_id" : " < user_id> "
},
{
" device_id" : " AUIECTSRND" ,
" display_name" : " ios" ,
" last_seen_ip" : " 1.2.3.5" ,
" last_seen_ts" : 1474491775025,
" user_id" : " < user_id> "
}
],
" total" : 2
}
< / code > < / pre >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< / ul >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li >
< p > < code > devices< / code > - An array of objects, each containing information about a device.
Device objects contain the following fields:< / p >
< ul >
< li > < code > device_id< / code > - Identifier of device.< / li >
< li > < code > display_name< / code > - Display name set by the user for this device.
Absent if no name has been set.< / li >
< li > < code > last_seen_ip< / code > - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).< / li >
< li > < code > last_seen_ts< / code > - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).< / li >
< li > < code > user_id< / code > - Owner of device.< / li >
< / ul >
< / li >
< li >
< p > < code > total< / code > - Total number of user's devices.< / p >
< / li >
< / ul >
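< p > As an example of consuming this response, the following hypothetical helper picks out devices that have not been seen for a given number of days, e.g. as candidates for the delete-devices endpoint:< / p >

```python
def stale_devices(devices, now_ms, max_age_days):
    # `devices` is the array from the response; `last_seen_ts` may be
    # absent or null for devices that have never been seen.
    cutoff = now_ms - max_age_days * 24 * 3600 * 1000
    return [
        d["device_id"]
        for d in devices
        if d.get("last_seen_ts") is not None and d["last_seen_ts"] < cutoff
    ]
```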
< h3 id = "delete-multiple-devices" > < a class = "header" href = "#delete-multiple-devices" > Delete multiple devices< / a > < / h3 >
< p > Deletes the given devices for a specific < code > user_id< / code > , and invalidates
any access token associated with them.< / p >
< p > The API is:< / p >
< pre > < code > POST /_synapse/admin/v2/users/< user_id> /delete_devices
{
" devices" : [
" QBUAZIFURK" ,
" AUIECTSRND"
]
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > An empty JSON dict is returned.< / p >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< / ul >
< p > The following fields are required in the JSON request body:< / p >
< ul >
< li > < code > devices< / code > - The list of device IDs to delete.< / li >
< / ul >
< h3 id = "show-a-device" > < a class = "header" href = "#show-a-device" > Show a device< / a > < / h3 >
< p > Gets information on a single device, by < code > device_id< / code > for a specific < code > user_id< / code > .< / p >
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v2/users/< user_id> /devices/< device_id>
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" device_id" : " < device_id> " ,
" display_name" : " android" ,
" last_seen_ip" : " 1.2.3.4" ,
" last_seen_ts" : 1474491775024,
" user_id" : " < user_id> "
}
< / code > < / pre >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< li > < code > device_id< / code > - The device to retrieve.< / li >
< / ul >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > device_id< / code > - Identifier of device.< / li >
< li > < code > display_name< / code > - Display name set by the user for this device.
Absent if no name has been set.< / li >
< li > < code > last_seen_ip< / code > - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).< / li >
< li > < code > last_seen_ts< / code > - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).< / li >
< li > < code > user_id< / code > - Owner of device.< / li >
< / ul >
< h3 id = "update-a-device" > < a class = "header" href = "#update-a-device" > Update a device< / a > < / h3 >
< p > Updates the metadata on the given < code > device_id< / code > for a specific < code > user_id< / code > .< / p >
< p > The API is:< / p >
< pre > < code > PUT /_synapse/admin/v2/users/< user_id> /devices/< device_id>
{
" display_name" : " My other phone"
}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > An empty JSON dict is returned.< / p >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< li > < code > device_id< / code > - The device to update.< / li >
< / ul >
< p > The following fields are required in the JSON request body:< / p >
< ul >
< li > < code > display_name< / code > - The new display name for this device. If not given,
the display name is unchanged.< / li >
< / ul >
< h3 id = "delete-a-device" > < a class = "header" href = "#delete-a-device" > Delete a device< / a > < / h3 >
< p > Deletes the given < code > device_id< / code > for a specific < code > user_id< / code > ,
and invalidates any access token associated with it.< / p >
< p > The API is:< / p >
< pre > < code > DELETE /_synapse/admin/v2/users/< user_id> /devices/< device_id>
{}
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > An empty JSON dict is returned.< / p >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< li > < code > device_id< / code > - The device to delete.< / li >
< / ul >
< h2 id = "list-all-pushers" > < a class = "header" href = "#list-all-pushers" > List all pushers< / a > < / h2 >
< p > Gets information about all pushers for a specific < code > user_id< / code > .< / p >
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v1/users/< user_id> /pushers
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" pushers" : [
{
" app_display_name" :" HTTP Push Notifications" ,
" app_id" :" m.http" ,
" data" : {
" url" :" example.com"
},
" device_display_name" :" pushy push" ,
" kind" :" http" ,
" lang" :" None" ,
" profile_tag" :" " ,
" pushkey" :" a@example.com"
}
],
" total" : 1
}
< / code > < / pre >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - fully qualified: for example, < code > @user:server.com< / code > .< / li >
< / ul >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li >
< p > < code > pushers< / code > - An array containing the current pushers for the user< / p >
< ul >
< li >
< p > < code > app_display_name< / code > - string - A string that will allow the user to identify
what application owns this pusher.< / p >
< / li >
< li >
< p > < code > app_id< / code > - string - This is a reverse-DNS style identifier for the application.
Max length, 64 chars.< / p >
< / li >
< li >
< p > < code > data< / code > - A dictionary of information for the pusher implementation itself.< / p >
< ul >
< li >
< p > < code > url< / code > - string - Required if < code > kind< / code > is < code > http< / code > . The URL to use to send
notifications to.< / p >
< / li >
< li >
< p > < code > format< / code > - string - The format to use when sending notifications to the
Push Gateway.< / p >
< / li >
< / ul >
< / li >
< li >
< p > < code > device_display_name< / code > - string - A string that will allow the user to identify
what device owns this pusher.< / p >
< / li >
< li >
< p > < code > profile_tag< / code > - string - This string determines which set of device specific rules
this pusher executes.< / p >
< / li >
< li >
< p > < code > kind< / code > - string - The kind of pusher. " http" is a pusher that sends HTTP pokes.< / p >
< / li >
< li >
< p > < code > lang< / code > - string - The preferred language for receiving notifications
(e.g. 'en' or 'en-US')< / p >
< / li >
< li >
< p > < code > pushkey< / code > - string - This is a unique identifier for this pusher.
Max length, 512 bytes.< / p >
< / li >
< / ul >
< / li >
< li >
< p > < code > total< / code > - integer - Number of pushers.< / p >
< / li >
< / ul >
< p > See also the
< a href = "https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers" > Client-Server API Spec on pushers< / a > .< / p >
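< p > For instance, this hypothetical snippet extracts the push gateway URLs from the response, which is handy when auditing where a user's notifications are being sent:< / p >

```python
def push_gateway_urls(pushers):
    # Only HTTP pushers carry a target URL in their `data` dictionary.
    return [p["data"]["url"] for p in pushers if p.get("kind") == "http"]
```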
< h2 id = "shadow-banning-users" > < a class = "header" href = "#shadow-banning-users" > Shadow-banning users< / a > < / h2 >
< p > Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
A shadow-banned user receives successful responses to their client-server API requests,
but the events are not propagated into rooms. This can be an effective tool as it
(hopefully) takes longer for the user to realise they are being moderated before
pivoting to another account.< / p >
< p > Shadow-banning a user should be used as a tool of last resort and may lead to confusing
or broken behaviour for the client. A shadow-banned user will not receive any
notification and it is generally more appropriate to ban or kick abusive users.
A shadow-banned user will be unable to contact anyone on the server.< / p >
< p > The API is:< / p >
< pre > < code > POST /_synapse/admin/v1/users/< user_id> /shadow_ban
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > An empty JSON dict is returned.< / p >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - The fully qualified MXID: for example, < code > @user:server.com< / code > . The user must
be local.< / li >
< / ul >
< h2 id = "override-ratelimiting-for-users" > < a class = "header" href = "#override-ratelimiting-for-users" > Override ratelimiting for users< / a > < / h2 >
< p > This API allows you to override or disable ratelimiting for a specific user.
There are separate endpoints to set, get and delete a ratelimit.< / p >
< h3 id = "get-status-of-ratelimit" > < a class = "header" href = "#get-status-of-ratelimit" > Get status of ratelimit< / a > < / h3 >
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v1/users/< user_id> /override_ratelimit
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" messages_per_second" : 0,
" burst_count" : 0
}
< / code > < / pre >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - The fully qualified MXID: for example, < code > @user:server.com< / code > . The user must
be local.< / li >
< / ul >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > messages_per_second< / code > - integer - The number of actions that can
be performed in a second. < code > 0< / code > means that ratelimiting is disabled for this user.< / li >
< li > < code > burst_count< / code > - integer - How many actions can be performed before
being limited.< / li >
< / ul >
< p > If < strong > no< / strong > custom ratelimit is set, an empty JSON dict is returned.< / p >
< pre > < code class = "language-json" > {}
< / code > < / pre >
< h3 id = "set-ratelimit" > < a class = "header" href = "#set-ratelimit" > Set ratelimit< / a > < / h3 >
< p > The API is:< / p >
< pre > < code > POST /_synapse/admin/v1/users/< user_id> /override_ratelimit
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > A response body like the following is returned:< / p >
< pre > < code class = "language-json" > {
" messages_per_second" : 0,
" burst_count" : 0
}
< / code > < / pre >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - The fully qualified MXID: for example, < code > @user:server.com< / code > . The user must
be local.< / li >
< / ul >
< p > Body parameters:< / p >
< ul >
< li > < code > messages_per_second< / code > - positive integer, optional. The number of actions that can
be performed in a second. Defaults to < code > 0< / code > .< / li >
< li > < code > burst_count< / code > - positive integer, optional. How many actions can be performed
before being limited. Defaults to < code > 0< / code > .< / li >
< / ul >
< p > To disable ratelimiting for the user, set both values to < code > 0< / code > .< / p >
< p > < strong > Response< / strong > < / p >
< p > The following fields are returned in the JSON response body:< / p >
< ul >
< li > < code > messages_per_second< / code > - integer - The number of actions that can
be performed in a second.< / li >
< li > < code > burst_count< / code > - integer - How many actions can be performed before
being limited.< / li >
< / ul >
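< p > To illustrate how the two values interact, here is a toy token-bucket model: the bucket holds at most < code > burst_count< / code > tokens and refills at < code > messages_per_second< / code > . This is only a sketch of the general rate/burst idea, and it deliberately does not model the special case where < code > 0< / code > disables ratelimiting entirely.< / p >

```python
class TokenBucket:
    """Toy rate/burst model (positive values only; not Synapse's implementation)."""

    def __init__(self, messages_per_second, burst_count):
        self.rate = messages_per_second
        self.capacity = burst_count
        self.tokens = float(burst_count)
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

< p > With < code > messages_per_second=1< / code > and < code > burst_count=2< / code > , two actions succeed back-to-back, a third is limited, and one more is allowed after a second has passed.< / p >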
< h3 id = "delete-ratelimit" > < a class = "header" href = "#delete-ratelimit" > Delete ratelimit< / a > < / h3 >
< p > The API is:< / p >
< pre > < code > DELETE /_synapse/admin/v1/users/< user_id> /override_ratelimit
< / code > < / pre >
< p > To use it, you will need to authenticate by providing an < code > access_token< / code > for a
server admin: < a href = "admin_api/../usage/administration/admin_api" > Admin API< / a > < / p >
< p > An empty JSON dict is returned.< / p >
< pre > < code class = "language-json" > {}
< / code > < / pre >
< p > < strong > Parameters< / strong > < / p >
< p > The following parameters should be set in the URL:< / p >
< ul >
< li > < code > user_id< / code > - The fully qualified MXID: for example, < code > @user:server.com< / code > . The user must
be local.< / li >
< / ul >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "version-api" > < a class = "header" href = "#version-api" > Version API< / a > < / h1 >
< p > This API returns the running Synapse version and the Python version
on which Synapse is being run. This is useful when a Synapse instance
is behind a proxy that does not forward the 'Server' header (which also
contains Synapse version information).< / p >
< p > The API is:< / p >
< pre > < code > GET /_synapse/admin/v1/server_version
< / code > < / pre >
< p > It returns a JSON body like the following:< / p >
< pre > < code class = "language-json" > {
" server_version" : " 0.99.2rc1 (b=develop, abcdef123)" ,
" python_version" : " 3.6.8"
}
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "using-the-synapse-manhole" > < a class = "header" href = "#using-the-synapse-manhole" > Using the synapse manhole< / a > < / h1 >
< p > The " manhole" allows server administrators to access a Python shell on a running
Synapse installation. This is a very powerful mechanism for administration and
debugging.< / p >
< p > < strong > < em > Security Warning< / em > < / strong > < / p >
< p > Note that this will give administrative access to synapse to < strong > all users< / strong > with
shell access to the server. It should therefore < strong > not< / strong > be enabled in
environments where untrusted users have shell access.< / p >
< hr / >
< p > To enable it, first uncomment the < code > manhole< / code > listener configuration in
< code > homeserver.yaml< / code > . The configuration is slightly different if you're using docker.< / p >
< h4 id = "docker-config" > < a class = "header" href = "#docker-config" > Docker config< / a > < / h4 >
< p > If you are using Docker, set < code > bind_addresses< / code > to < code > ['0.0.0.0']< / code > as shown:< / p >
< pre > < code class = "language-yaml" > listeners:
- port: 9000
bind_addresses: ['0.0.0.0']
type: manhole
< / code > < / pre >
< p > When using < code > docker run< / code > to start the server, you will then need to change the command to the following to include the
< code > manhole< / code > port forwarding. The < code > -p 127.0.0.1:9000:9000< / code > below is important: it
ensures that access to the < code > manhole< / code > is only possible for local users.< / p >
< pre > < code class = "language-bash" > docker run -d --name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-p 8008:8008 \
-p 127.0.0.1:9000:9000 \
matrixdotorg/synapse:latest
< / code > < / pre >
< h4 id = "native-config" > < a class = "header" href = "#native-config" > Native config< / a > < / h4 >
< p > If you are not using docker, set < code > bind_addresses< / code > to < code > ['::1', '127.0.0.1']< / code > as shown.
The < code > bind_addresses< / code > in the example below is important: it ensures that access to the
< code > manhole< / code > is only possible for local users.< / p >
< pre > < code class = "language-yaml" > listeners:
- port: 9000
bind_addresses: ['::1', '127.0.0.1']
type: manhole
< / code > < / pre >
< h4 id = "accessing-synapse-manhole" > < a class = "header" href = "#accessing-synapse-manhole" > Accessing synapse manhole< / a > < / h4 >
< p > Then restart synapse, and point an ssh client at port 9000 on localhost, using
the username < code > matrix< / code > :< / p >
< pre > < code class = "language-bash" > ssh -p9000 matrix@localhost
< / code > < / pre >
< p > The password is < code > rabbithole< / code > .< / p >
< p > This gives a Python REPL in which < code > hs< / code > gives access to the
< code > synapse.server.HomeServer< / code > object - which in turn gives access to many other
parts of the process.< / p >
< p > Note that any call which returns a coroutine will need to be wrapped in < code > ensureDeferred< / code > .< / p >
< p > As a simple example, retrieving an event from the database:< / p >
< pre > < code class = "language-pycon" > > > > from twisted.internet import defer
> > > defer.ensureDeferred(hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org'))
< Deferred at 0x7ff253fc6998 current result: < FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''> >
< / code > < / pre >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "how-to-monitor-synapse-metrics-using-prometheus" > < a class = "header" href = "#how-to-monitor-synapse-metrics-using-prometheus" > How to monitor Synapse metrics using Prometheus< / a > < / h1 >
< ol >
< li >
< p > Install Prometheus:< / p >
< p > Follow instructions at
< a href = "http://prometheus.io/docs/introduction/install/" > http://prometheus.io/docs/introduction/install/< / a > < / p >
< / li >
< li >
< p > Enable Synapse metrics:< / p >
< p > There are two methods of enabling metrics in Synapse.< / p >
< p > The first serves the metrics as a part of the usual web server and
can be enabled by adding the " metrics" resource to the existing
listener as follows:< / p >
< pre > < code class = "language-yaml" > resources:
- names:
- client
- metrics
< / code > < / pre >
< p > This provides a simple way of adding metrics to your Synapse
installation, and serves them under < code > /_synapse/metrics< / code > . If you do not
wish your metrics to be publicly exposed, you will need to either
filter them out at your load balancer, or use the second method.< / p >
< p > The second method runs the metrics server on a different port, in a
different thread to Synapse. This can make it more resilient when heavy
load would otherwise prevent metrics from being retrieved, and makes it
easier to expose the metrics only to internal networks. The served
metrics are available over HTTP only, at < code > /_synapse/metrics< / code > .< / p >
< p > Add a new listener to homeserver.yaml:< / p >
< pre > < code class = "language-yaml" > listeners:
- type: metrics
port: 9000
bind_addresses:
- '0.0.0.0'
< / code > < / pre >
< p > For both options, you will need to ensure that < code > enable_metrics< / code > is
set to < code > True< / code > .< / p >
< / li >
< li >
< p > Restart Synapse.< / p >
< / li >
< li >
< p > Add a Prometheus target for Synapse.< / p >
< p > The target needs to set < code > metrics_path< / code > to a non-default value (under
< code > scrape_configs< / code > ):< / p >
< pre > < code class = "language-yaml" > - job_name: "synapse"
  scrape_interval: 15s
  metrics_path: "/_synapse/metrics"
  static_configs:
    - targets: ["my.server.here:port"]
< / code > < / pre >
< p > where < code > my.server.here< / code > is the IP address of Synapse, and < code > port< / code > is
the listener port configured with the < code > metrics< / code > resource.< / p >
< p > If your Prometheus is older than 1.5.2, you will need to replace
< code > static_configs< / code > in the above with < code > target_groups< / code > .< / p >
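< p > On those older Prometheus versions, the equivalent scrape config would
look roughly like this (illustrative):< / p >
< pre > < code class = "language-yaml" > - job_name: "synapse"
  metrics_path: "/_synapse/metrics"
  target_groups:
    - targets: ["my.server.here:port"]
< / code > < / pre >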
< / li >
< li >
< p > Restart Prometheus.< / p >
< / li >
< li >
< p > Consider using the < a href = "https://github.com/matrix-org/synapse/tree/master/contrib/grafana/" > Grafana dashboard< / a >
and the required < a href = "https://github.com/matrix-org/synapse/tree/master/contrib/prometheus/" > recording rules< / a > .< / p >
< / li >
< / ol >
< h2 id = "monitoring-workers" > < a class = "header" href = "#monitoring-workers" > Monitoring workers< / a > < / h2 >
< p > To monitor a Synapse installation using < a href = "workers.html" > workers< / a > ,
every worker needs to be monitored independently, in addition to
the main homeserver process. This is because workers don't send
their metrics to the main homeserver process, but expose them
directly (if they are configured to do so).< / p >
< p > To allow collecting metrics from a worker, you need to add a
< code > metrics< / code > listener to its configuration, by adding the following
under < code > worker_listeners< / code > :< / p >
< pre > < code class = "language-yaml" > - type: metrics
  bind_address: ''
  port: 9101
< / code > < / pre >
< p > The < code > bind_address< / code > and < code > port< / code > parameters should be set so that
the resulting listener can be reached by Prometheus, and so that they
don't clash with an existing worker.
With this example, the worker's metrics would then be available
on < code > http://127.0.0.1:9101< / code > .< / p >
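< p > The metrics are served in the Prometheus text exposition format, so a quick
sanity check is to fetch the endpoint (for the example above,
< code > curl http://127.0.0.1:9101/_synapse/metrics< / code > ) and inspect the samples. As an
illustration of what consumes that format, here is a minimal, hypothetical parser
sketch over made-up sample lines (not real Synapse output):< / p >
< pre > < code class = "language-python" > # Minimal sketch of parsing the Prometheus text exposition format.
# The sample lines below are made up for illustration.
sample = """\
# TYPE synapse_http_server_response_count counter
synapse_http_server_response_count{method="GET",code="200"} 1024.0
process_cpu_seconds_total 12.5
"""

def parse_metrics(text):
    """Return {metric_name: {label_string: value}}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:
            name, labels = name_part.split("{", 1)
            labels = labels.rstrip("}")
        else:
            name, labels = name_part, ""
        metrics.setdefault(name, {})[labels] = float(value)
    return metrics

parsed = parse_metrics(sample)
< / code > < / pre >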
< p > Example Prometheus target for Synapse with workers:< / p >
< pre > < code class = "language-yaml" > - job_name: "synapse"
  scrape_interval: 15s
  metrics_path: "/_synapse/metrics"
  static_configs:
    - targets: ["my.server.here:port"]
      labels:
        instance: "my.server"
        job: "master"
        index: 1
    - targets: ["my.workerserver.here:port"]
      labels:
        instance: "my.server"
        job: "generic_worker"
        index: 1
    - targets: ["my.workerserver.here:port"]
      labels:
        instance: "my.server"
        job: "generic_worker"
        index: 2
    - targets: ["my.workerserver.here:port"]
      labels:
        instance: "my.server"
        job: "media_repository"
        index: 1
< / code > < / pre >
< p > The labels (< code > instance< / code > , < code > job< / code > , < code > index< / code > ) can be defined as anything;
they are used to group graphs in Grafana.< / p >
< h2 id = "renaming-of-metrics--deprecation-of-old-names-in-12" > < a class = "header" href = "#renaming-of-metrics--deprecation-of-old-names-in-12" > Renaming of metrics & deprecation of old names in 1.2< / a > < / h2 >
< p > Synapse 1.2 updates the Prometheus metrics to match the naming
convention of the upstream < code > prometheus_client< / code > . The old names are
considered deprecated and will be removed in a future version of
Synapse.< / p >
< table > < thead > < tr > < th > New Name< / th > < th > Old Name< / th > < / tr > < / thead > < tbody >
< tr > < td > python_gc_objects_collected_total< / td > < td > python_gc_objects_collected< / td > < / tr >
< tr > < td > python_gc_objects_uncollectable_total< / td > < td > python_gc_objects_uncollectable< / td > < / tr >
< tr > < td > python_gc_collections_total< / td > < td > python_gc_collections< / td > < / tr >
< tr > < td > process_cpu_seconds_total< / td > < td > process_cpu_seconds< / td > < / tr >
< tr > < td > synapse_federation_client_sent_transactions_total< / td > < td > synapse_federation_client_sent_transactions< / td > < / tr >
< tr > < td > synapse_federation_client_events_processed_total< / td > < td > synapse_federation_client_events_processed< / td > < / tr >
< tr > < td > synapse_event_processing_loop_count_total< / td > < td > synapse_event_processing_loop_count< / td > < / tr >
< tr > < td > synapse_event_processing_loop_room_count_total< / td > < td > synapse_event_processing_loop_room_count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_count_total< / td > < td > synapse_util_metrics_block_count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_time_seconds_total< / td > < td > synapse_util_metrics_block_time_seconds< / td > < / tr >
< tr > < td > synapse_util_metrics_block_ru_utime_seconds_total< / td > < td > synapse_util_metrics_block_ru_utime_seconds< / td > < / tr >
< tr > < td > synapse_util_metrics_block_ru_stime_seconds_total< / td > < td > synapse_util_metrics_block_ru_stime_seconds< / td > < / tr >
< tr > < td > synapse_util_metrics_block_db_txn_count_total< / td > < td > synapse_util_metrics_block_db_txn_count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_db_txn_duration_seconds_total< / td > < td > synapse_util_metrics_block_db_txn_duration_seconds< / td > < / tr >
< tr > < td > synapse_util_metrics_block_db_sched_duration_seconds_total< / td > < td > synapse_util_metrics_block_db_sched_duration_seconds< / td > < / tr >
< tr > < td > synapse_background_process_start_count_total< / td > < td > synapse_background_process_start_count< / td > < / tr >
< tr > < td > synapse_background_process_ru_utime_seconds_total< / td > < td > synapse_background_process_ru_utime_seconds< / td > < / tr >
< tr > < td > synapse_background_process_ru_stime_seconds_total< / td > < td > synapse_background_process_ru_stime_seconds< / td > < / tr >
< tr > < td > synapse_background_process_db_txn_count_total< / td > < td > synapse_background_process_db_txn_count< / td > < / tr >
< tr > < td > synapse_background_process_db_txn_duration_seconds_total< / td > < td > synapse_background_process_db_txn_duration_seconds< / td > < / tr >
< tr > < td > synapse_background_process_db_sched_duration_seconds_total< / td > < td > synapse_background_process_db_sched_duration_seconds< / td > < / tr >
< tr > < td > synapse_storage_events_persisted_events_total< / td > < td > synapse_storage_events_persisted_events< / td > < / tr >
< tr > < td > synapse_storage_events_persisted_events_sep_total< / td > < td > synapse_storage_events_persisted_events_sep< / td > < / tr >
< tr > < td > synapse_storage_events_state_delta_total< / td > < td > synapse_storage_events_state_delta< / td > < / tr >
< tr > < td > synapse_storage_events_state_delta_single_event_total< / td > < td > synapse_storage_events_state_delta_single_event< / td > < / tr >
< tr > < td > synapse_storage_events_state_delta_reuse_delta_total< / td > < td > synapse_storage_events_state_delta_reuse_delta< / td > < / tr >
< tr > < td > synapse_federation_server_received_pdus_total< / td > < td > synapse_federation_server_received_pdus< / td > < / tr >
< tr > < td > synapse_federation_server_received_edus_total< / td > < td > synapse_federation_server_received_edus< / td > < / tr >
< tr > < td > synapse_handler_presence_notified_presence_total< / td > < td > synapse_handler_presence_notified_presence< / td > < / tr >
< tr > < td > synapse_handler_presence_federation_presence_out_total< / td > < td > synapse_handler_presence_federation_presence_out< / td > < / tr >
< tr > < td > synapse_handler_presence_presence_updates_total< / td > < td > synapse_handler_presence_presence_updates< / td > < / tr >
< tr > < td > synapse_handler_presence_timers_fired_total< / td > < td > synapse_handler_presence_timers_fired< / td > < / tr >
< tr > < td > synapse_handler_presence_federation_presence_total< / td > < td > synapse_handler_presence_federation_presence< / td > < / tr >
< tr > < td > synapse_handler_presence_bump_active_time_total< / td > < td > synapse_handler_presence_bump_active_time< / td > < / tr >
< tr > < td > synapse_federation_client_sent_edus_total< / td > < td > synapse_federation_client_sent_edus< / td > < / tr >
< tr > < td > synapse_federation_client_sent_pdu_destinations_count_total< / td > < td > synapse_federation_client_sent_pdu_destinations:count< / td > < / tr >
< tr > < td > synapse_federation_client_sent_pdu_destinations_total< / td > < td > synapse_federation_client_sent_pdu_destinations:total< / td > < / tr >
< tr > < td > synapse_handlers_appservice_events_processed_total< / td > < td > synapse_handlers_appservice_events_processed< / td > < / tr >
< tr > < td > synapse_notifier_notified_events_total< / td > < td > synapse_notifier_notified_events< / td > < / tr >
< tr > < td > synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total< / td > < td > synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter< / td > < / tr >
< tr > < td > synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total< / td > < td > synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter< / td > < / tr >
< tr > < td > synapse_http_httppusher_http_pushes_processed_total< / td > < td > synapse_http_httppusher_http_pushes_processed< / td > < / tr >
< tr > < td > synapse_http_httppusher_http_pushes_failed_total< / td > < td > synapse_http_httppusher_http_pushes_failed< / td > < / tr >
< tr > < td > synapse_http_httppusher_badge_updates_processed_total< / td > < td > synapse_http_httppusher_badge_updates_processed< / td > < / tr >
< tr > < td > synapse_http_httppusher_badge_updates_failed_total< / td > < td > synapse_http_httppusher_badge_updates_failed< / td > < / tr >
< / tbody > < / table >
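< p > If you have recording rules or dashboards that still use the old names, the
renaming can be scripted with a simple old-to-new mapping. A sketch (the mapping
shows just a few entries from the table above, not the full list; extend it as
needed):< / p >
< pre > < code class = "language-python" > import re

# A few old -> new renames from the table above (not the full list).
RENAMES = {
    "python_gc_collections": "python_gc_collections_total",
    "process_cpu_seconds": "process_cpu_seconds_total",
    "synapse_federation_server_received_pdus": "synapse_federation_server_received_pdus_total",
}

def migrate_expr(expr):
    """Rewrite old metric names in a PromQL expression to the 1.2 names."""
    for old, new in RENAMES.items():
        # \b stops a match inside an already-renamed *_total metric,
        # since '_' counts as a word character.
        expr = re.sub(r"\b%s\b" % re.escape(old), new, expr)
    return expr
< / code > < / pre >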
< h2 id = "removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310" > < a class = "header" href = "#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310" > Removal of deprecated metrics & time based counters becoming histograms in 0.31.0< / a > < / h2 >
< p > The duplicated metrics deprecated in Synapse 0.27.0 have been removed.< / p >
< p > All time duration-based metrics have been changed to report seconds. This
affects:< / p >
< table > < thead > < tr > < th > msec -> sec metrics< / th > < / tr > < / thead > < tbody >
< tr > < td > python_gc_time< / td > < / tr >
< tr > < td > python_twisted_reactor_tick_time< / td > < / tr >
< tr > < td > synapse_storage_query_time< / td > < / tr >
< tr > < td > synapse_storage_schedule_time< / td > < / tr >
< tr > < td > synapse_storage_transaction_time< / td > < / tr >
< / tbody > < / table >
< p > Several metrics have been changed to be histograms, which sort entries
into buckets and allow better analysis. The following metrics are now
histograms:< / p >
< table > < thead > < tr > < th > Altered metrics< / th > < / tr > < / thead > < tbody >
< tr > < td > python_gc_time< / td > < / tr >
< tr > < td > python_twisted_reactor_pending_calls< / td > < / tr >
< tr > < td > python_twisted_reactor_tick_time< / td > < / tr >
< tr > < td > synapse_http_server_response_time_seconds< / td > < / tr >
< tr > < td > synapse_storage_query_time< / td > < / tr >
< tr > < td > synapse_storage_schedule_time< / td > < / tr >
< tr > < td > synapse_storage_transaction_time< / td > < / tr >
< / tbody > < / table >
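< p > A Prometheus histogram exposes cumulative per-bucket counts (a
< code > _bucket< / code > series with an < code > le< / code > upper-bound label), from which a quantile is
estimated by linear interpolation; this is what PromQL's
< code > histogram_quantile()< / code > does. A sketch of that estimation, over made-up
bucket data:< / p >
< pre > < code class = "language-python" > # Estimate a quantile from cumulative histogram buckets, in the spirit
# of PromQL's histogram_quantile(). The bucket data is made up.
BUCKETS = [(0.005, 10), (0.01, 55), (0.05, 90), (0.1, 98), (float("inf"), 100)]

def estimate_quantile(q, buckets):
    """buckets: sorted (upper_bound, cumulative_count) pairs."""
    total = buckets[-1][1]
    rank = q * total
    prev_le, prev_count = 0.0, 0
    for le, count in buckets:
        if count >= rank:
            if le == float("inf"):
                return prev_le  # quantile falls in the overflow bucket
            # Linear interpolation within the bucket.
            return prev_le + (le - prev_le) * (rank - prev_count) / (count - prev_count)
        prev_le, prev_count = le, count
    return prev_le
< / code > < / pre >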
< h2 id = "block-and-response-metrics-renamed-for-0270" > < a class = "header" href = "#block-and-response-metrics-renamed-for-0270" > Block and response metrics renamed for 0.27.0< / a > < / h2 >
< p > Synapse 0.27.0 begins the process of rationalising the duplicate
< code > *:count< / code > metrics reported for the resource tracking for code blocks and
HTTP requests.< / p >
< p > At the same time, the corresponding < code > *:total< / code > metrics are being renamed,
as the < code > :total< / code > suffix no longer makes sense in the absence of a
corresponding < code > :count< / code > metric.< / p >
< p > To enable a graceful migration path, this release just adds new names
for the metrics being renamed. A future release will remove the old
ones.< / p >
< p > The following table shows the new metrics, and the old metrics which
they are replacing.< / p >
< table > < thead > < tr > < th > New name< / th > < th > Old name< / th > < / tr > < / thead > < tbody >
< tr > < td > synapse_util_metrics_block_count< / td > < td > synapse_util_metrics_block_timer:count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_count< / td > < td > synapse_util_metrics_block_ru_utime:count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_count< / td > < td > synapse_util_metrics_block_ru_stime:count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_count< / td > < td > synapse_util_metrics_block_db_txn_count:count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_count< / td > < td > synapse_util_metrics_block_db_txn_duration:count< / td > < / tr >
< tr > < td > synapse_util_metrics_block_time_seconds< / td > < td > synapse_util_metrics_block_timer:total< / td > < / tr >
< tr > < td > synapse_util_metrics_block_ru_utime_seconds< / td > < td > synapse_util_metrics_block_ru_utime:total< / td > < / tr >
< tr > < td > synapse_util_metrics_block_ru_stime_seconds< / td > < td > synapse_util_metrics_block_ru_stime:total< / td > < / tr >
< tr > < td > synapse_util_metrics_block_db_txn_count< / td > < td > synapse_util_metrics_block_db_txn_count:total< / td > < / tr >
< tr > < td > synapse_util_metrics_block_db_txn_duration_seconds< / td > < td > synapse_util_metrics_block_db_txn_duration:total< / td > < / tr >
< tr > < td > synapse_http_server_response_count< / td > < td > synapse_http_server_requests< / td > < / tr >
< tr > < td > synapse_http_server_response_count< / td > < td > synapse_http_server_response_time:count< / td > < / tr >
< tr > < td > synapse_http_server_response_count< / td > < td > synapse_http_server_response_ru_utime:count< / td > < / tr >
< tr > < td > synapse_http_server_response_count< / td > < td > synapse_http_server_response_ru_stime:count< / td > < / tr >
< tr > < td > synapse_http_server_response_count< / td > < td > synapse_http_server_response_db_txn_count:count< / td > < / tr >
< tr > < td > synapse_http_server_response_count< / td > < td > synapse_http_server_response_db_txn_duration:count< / td > < / tr >
< tr > < td > synapse_http_server_response_time_seconds< / td > < td > synapse_http_server_response_time:total< / td > < / tr >
< tr > < td > synapse_http_server_response_ru_utime_seconds< / td > < td > synapse_http_server_response_ru_utime:total< / td > < / tr >
< tr > < td > synapse_http_server_response_ru_stime_seconds< / td > < td > synapse_http_server_response_ru_stime:total< / td > < / tr >
< tr > < td > synapse_http_server_response_db_txn_count< / td > < td > synapse_http_server_response_db_txn_count:total< / td > < / tr >
< tr > < td > synapse_http_server_response_db_txn_duration_seconds< / td > < td > synapse_http_server_response_db_txn_duration:total< / td > < / tr >
< / tbody > < / table >
< h2 id = "standard-metric-names" > < a class = "header" href = "#standard-metric-names" > Standard Metric Names< / a > < / h2 >
< p > As of Synapse version 0.18.2, the format of the process-wide metrics has
been changed to fit Prometheus standard naming conventions. Additionally,
the units have been changed from milliseconds to seconds.< / p >
< table > < thead > < tr > < th > New name< / th > < th > Old name< / th > < / tr > < / thead > < tbody >
< tr > < td > process_cpu_user_seconds_total< / td > < td > process_resource_utime / 1000< / td > < / tr >
< tr > < td > process_cpu_system_seconds_total< / td > < td > process_resource_stime / 1000< / td > < / tr >
< tr > < td > process_open_fds (no 'type' label)< / td > < td > process_fds< / td > < / tr >
< / tbody > < / table >
< p > The python-specific counts of garbage collector performance have been
renamed.< / p >
< table > < thead > < tr > < th > New name< / th > < th > Old name< / th > < / tr > < / thead > < tbody >
< tr > < td > python_gc_time< / td > < td > reactor_gc_time< / td > < / tr >
< tr > < td > python_gc_unreachable_total< / td > < td > reactor_gc_unreachable< / td > < / tr >
< tr > < td > python_gc_counts< / td > < td > reactor_gc_counts< / td > < / tr >
< / tbody > < / table >
< p > The twisted-specific reactor metrics have been renamed.< / p >
< table > < thead > < tr > < th > New name< / th > < th > Old name< / th > < / tr > < / thead > < tbody >
< tr > < td > python_twisted_reactor_pending_calls< / td > < td > reactor_pending_calls< / td > < / tr >
< tr > < td > python_twisted_reactor_tick_time< / td > < td > reactor_tick_time< / td > < / tr >
< / tbody > < / table >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "request-log-format" > < a class = "header" href = "#request-log-format" > Request log format< / a > < / h1 >
< p > HTTP request logs are written by synapse (see < a href = "usage/administration/../synapse/http/site.py" > < code > site.py< / code > < / a > for details).< / p >
< p > See the following for how to decode the dense data available from the default logging configuration.< / p >
< pre > < code > 2020-10-01 12:00:00,000 - synapse.access.http.8008 - 311 - INFO - PUT-1000- 192.168.0.1 - 8008 - {another-matrix-server.com} Processed request: 0.100sec/-0.000sec (0.000sec, 0.000sec) (0.001sec/0.090sec/3) 11B !200 "PUT /_matrix/federation/v1/send/1600000000000 HTTP/1.1" "Synapse/1.20.1" [0 dbevts]
-AAAAAAAAAAAAAAAAAAAAA- -BBBBBBBBBBBBBBBBBBBBBB- -C- -DD- -EEEEEE- -FFFFFFFFF- -GG- -HHHHHHHHHHHHHHHHHHHHHHH- -IIIIII- -JJJJJJJ- -KKKKKK-, -LLLLLL- -MMMMMMM- -NNNNNN- O -P- -QQ- -RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR- -SSSSSSSSSSSS- -TTTTTT-
< / code > < / pre >
< table > < thead > < tr > < th > Part< / th > < th > Explanation< / th > < / tr > < / thead > < tbody >
< tr > < td > AAAA< / td > < td > Timestamp request was logged (not received)< / td > < / tr >
< tr > < td > BBBB< / td > < td > Logger name (< code > synapse.access.(http\|https).< tag> < / code > , where 'tag' is defined in the < code > listeners< / code > config section, normally the port)< / td > < / tr >
< tr > < td > CCCC< / td > < td > Line number in code< / td > < / tr >
< tr > < td > DDDD< / td > < td > Log Level< / td > < / tr >
< tr > < td > EEEE< / td > < td > Request Identifier (This identifier is shared by related log lines)< / td > < / tr >
< tr > < td > FFFF< / td > < td > Source IP (Or X-Forwarded-For if enabled)< / td > < / tr >
< tr > < td > GGGG< / td > < td > Server Port< / td > < / tr >
< tr > < td > HHHH< / td > < td > Federated Server or Local User making request (blank if unauthenticated or not supplied)< / td > < / tr >
< tr > < td > IIII< / td > < td > Total Time to process the request< / td > < / tr >
< tr > < td > JJJJ< / td > < td > Time to send response over network once generated (this may be negative if the socket is closed before the response is generated)< / td > < / tr >
< tr > < td > KKKK< / td > < td > Userland CPU time< / td > < / tr >
< tr > < td > LLLL< / td > < td > System CPU time< / td > < / tr >
< tr > < td > MMMM< / td > < td > Total time waiting for a free DB connection from the pool across all parallel DB work from this request< / td > < / tr >
< tr > < td > NNNN< / td > < td > Total time waiting for response to DB queries across all parallel DB work from this request< / td > < / tr >
< tr > < td > OOOO< / td > < td > Count of DB transactions performed< / td > < / tr >
< tr > < td > PPPP< / td > < td > Response body size< / td > < / tr >
< tr > < td > QQQQ< / td > < td > Response status code (prefixed with ! if the socket was closed before the response was generated)< / td > < / tr >
< tr > < td > RRRR< / td > < td > Request< / td > < / tr >
< tr > < td > SSSS< / td > < td > User-agent< / td > < / tr >
< tr > < td > TTTT< / td > < td > Events fetched from DB to service this request (note that this does not include events fetched from the cache)< / td > < / tr >
< / tbody > < / table >
< p > MMMM / NNNN can be greater than IIII if there are multiple slow database queries
running in parallel.< / p >
< p > Some actions can result in multiple identical http requests, which will return
the same data, but only the first request will report time/transactions in
< code > KKKK< / code > /< code > LLLL< / code > /< code > MMMM< / code > /< code > NNNN< / code > /< code > OOOO< / code > - the others will be awaiting the first query to return a
response and will simultaneously return with the first request, but with very
small processing times.< / p >
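< p > For ad-hoc log analysis, the fields after "Processed request:" can be picked
apart with a regular expression. A sketch (the pattern follows the table above
and is illustrative; adjust it to your own log format):< / p >
< pre > < code class = "language-python" > import re

# Sample line from the default logging configuration (see above).
LINE = ('2020-10-01 12:00:00,000 - synapse.access.http.8008 - 311 - INFO '
        '- PUT-1000- 192.168.0.1 - 8008 - {another-matrix-server.com} '
        'Processed request: 0.100sec/-0.000sec (0.000sec, 0.000sec) '
        '(0.001sec/0.090sec/3) 11B !200 '
        '"PUT /_matrix/federation/v1/send/1600000000000 HTTP/1.1" '
        '"Synapse/1.20.1" [0 dbevts]')

# Groups correspond to parts I..T in the table above.
TAIL = re.compile(
    r'Processed request: ([\d.-]+)sec/([\d.-]+)sec '        # I total, J send
    r'\(([\d.-]+)sec, ([\d.-]+)sec\) '                      # K utime, L stime
    r'\(([\d.-]+)sec/([\d.-]+)sec/(\d+)\) '                 # M db wait, N db query, O txns
    r'(\d+)B (!?\d+) "([^"]*)" "([^"]*)" \[(\d+) dbevts\]'  # P size, Q status, R request, S user-agent, T events
)

(total, send, utime, stime, db_wait, db_query,
 db_txns, size, status, request, user_agent, dbevts) = TAIL.search(LINE).groups()
< / code > < / pre >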
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > <!--
Include the contents of CONTRIBUTING.md from the project root (where GitHub likes it
to be)
-->
< h1 id = "contributing" > < a class = "header" href = "#contributing" > Contributing< / a > < / h1 >
< p > Welcome to Synapse!< / p >
< p > This document aims to get you started with contributing to this repo! < / p >
< ul >
< li > < a href = "development/contributing_guide.html#1-who-can-contribute-to-synapse" > 1. Who can contribute to Synapse?< / a > < / li >
< li > < a href = "development/contributing_guide.html#2-what-do-i-need" > 2. What do I need?< / a > < / li >
< li > < a href = "development/contributing_guide.html#3-get-the-source" > 3. Get the source.< / a > < / li >
< li > < a href = "development/contributing_guide.html#4-install-the-dependencies" > 4. Install the dependencies< / a >
< ul >
< li > < a href = "development/contributing_guide.html#under-unix-macos-linux-bsd-" > Under Unix (macOS, Linux, BSD, ...)< / a > < / li >
< li > < a href = "development/contributing_guide.html#under-windows" > Under Windows< / a > < / li >
< / ul >
< / li >
< li > < a href = "development/contributing_guide.html#5-get-in-touch" > 5. Get in touch.< / a > < / li >
< li > < a href = "development/contributing_guide.html#6-pick-an-issue" > 6. Pick an issue.< / a > < / li >
< li > < a href = "development/contributing_guide.html#7-turn-coffee-and-documentation-into-code-and-documentation" > 7. Turn coffee and documentation into code and documentation!< / a > < / li >
< li > < a href = "development/contributing_guide.html#8-test-test-test" > 8. Test, test, test!< / a >
< ul >
< li > < a href = "development/contributing_guide.html#run-the-linters" > Run the linters.< / a > < / li >
< li > < a href = "development/contributing_guide.html#run-the-unit-tests" > Run the unit tests.< / a > < / li >
< li > < a href = "development/contributing_guide.html#run-the-integration-tests" > Run the integration tests.< / a > < / li >
< / ul >
< / li >
< li > < a href = "development/contributing_guide.html#9-submit-your-patch" > 9. Submit your patch.< / a >
< ul >
< li > < a href = "development/contributing_guide.html#changelog" > Changelog< / a >
< ul >
< li > < a href = "development/contributing_guide.html#how-do-i-know-what-to-call-the-changelog-file-before-i-create-the-pr" > How do I know what to call the changelog file before I create the PR?< / a > < / li >
< li > < a href = "development/contributing_guide.html#debian-changelog" > Debian changelog< / a > < / li >
< / ul >
< / li >
< li > < a href = "development/contributing_guide.html#sign-off" > Sign off< / a > < / li >
< / ul >
< / li >
< li > < a href = "development/contributing_guide.html#10-turn-feedback-into-better-code" > 10. Turn feedback into better code.< / a > < / li >
< li > < a href = "development/contributing_guide.html#11-find-a-new-issue" > 11. Find a new issue.< / a > < / li >
< li > < a href = "development/contributing_guide.html#notes-for-maintainers-on-merging-prs-etc" > Notes for maintainers on merging PRs etc< / a > < / li >
< li > < a href = "development/contributing_guide.html#conclusion" > Conclusion< / a > < / li >
< / ul >
< h1 id = "1-who-can-contribute-to-synapse" > < a class = "header" href = "#1-who-can-contribute-to-synapse" > 1. Who can contribute to Synapse?< / a > < / h1 >
< p > Everyone is welcome to contribute code to < a href = "https://github.com/matrix-org" > matrix.org
projects< / a > , provided that they are willing to
license their contributions under the same license as the project itself. We
follow a simple 'inbound=outbound' model for contributions: the act of
submitting an 'inbound' contribution means that the contributor agrees to
license the code under the same terms as the project's overall 'outbound'
license - in our case, this is almost always Apache Software License v2 (see
< a href = "development/LICENSE" > LICENSE< / a > ).< / p >
< h1 id = "2-what-do-i-need" > < a class = "header" href = "#2-what-do-i-need" > 2. What do I need?< / a > < / h1 >
< p > The code of Synapse is written in Python 3. To do pretty much anything, you'll need < a href = "https://wiki.python.org/moin/BeginnersGuide/Download" > a recent version of Python 3< / a > .< / p >
< p > The source code of Synapse is hosted on GitHub. You will also need < a href = "https://github.com/git-guides/install-git" > a recent version of git< / a > .< / p >
< p > For some tests, you will need < a href = "https://docs.docker.com/get-docker/" > a recent version of Docker< / a > .< / p >
< h1 id = "3-get-the-source" > < a class = "header" href = "#3-get-the-source" > 3. Get the source.< / a > < / h1 >
< p > The preferred and easiest way to contribute changes is to fork the relevant
project on GitHub, and then < a href = "https://help.github.com/articles/using-pull-requests/" > create a pull request< / a > to ask us to pull your
changes into our repo.< / p >
< p > Please base your changes on the < code > develop< / code > branch.< / p >
< pre > < code class = "language-sh" > git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
cd synapse
git checkout develop
< / code > < / pre >
< p > If you need help getting started with git, this is beyond the scope of the document, but you
can find many good git tutorials on the web.< / p >
< h1 id = "4-install-the-dependencies" > < a class = "header" href = "#4-install-the-dependencies" > 4. Install the dependencies< / a > < / h1 >
< h2 id = "under-unix-macos-linux-bsd-" > < a class = "header" href = "#under-unix-macos-linux-bsd-" > Under Unix (macOS, Linux, BSD, ...)< / a > < / h2 >
< p > Once you have installed Python 3 and obtained the source, open a terminal and
set up a < em > virtualenv< / em > , as follows:< / p >
< pre > < code class = "language-sh" > cd path/where/you/have/cloned/the/repository
python3 -m venv ./env
source ./env/bin/activate
pip install -e ".[all,lint,mypy,test]"
pip install tox
< / code > < / pre >
< p > This will install the developer dependencies for the project.< / p >
< h2 id = "under-windows" > < a class = "header" href = "#under-windows" > Under Windows< / a > < / h2 >
< p > TBD< / p >
< h1 id = "5-get-in-touch" > < a class = "header" href = "#5-get-in-touch" > 5. Get in touch.< / a > < / h1 >
< p > Join our developer community on Matrix: #synapse-dev:matrix.org!< / p >
< h1 id = "6-pick-an-issue" > < a class = "header" href = "#6-pick-an-issue" > 6. Pick an issue.< / a > < / h1 >
< p > Fix your favorite problem or perhaps find a < a href = "https://github.com/matrix-org/synapse/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22" > Good First Issue< / a >
to work on.< / p >
< h1 id = "7-turn-coffee-and-documentation-into-code-and-documentation" > < a class = "header" href = "#7-turn-coffee-and-documentation-into-code-and-documentation" > 7. Turn coffee and documentation into code and documentation!< / a > < / h1 >
< p > Synapse's code style is documented < a href = "development/docs/code_style.html" > here< / a > . Please follow
it, including the conventions for the < a href = "development/docs/code_style.html#configuration-file-format" > sample configuration
file< / a > .< / p >
< p > There is a growing amount of documentation located in the < a href = "development/docs" > docs< / a >
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. < a href = "development/docs/dev" > docs/dev< / a > exists primarily to house documentation for
Synapse developers. < a href = "development/docs/admin_api" > docs/admin_api< / a > houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.< / p >
< p > If you add new files to either of these folders, please use < a href = "https://guides.github.com/features/mastering-markdown/" > GitHub-Flavoured
Markdown< / a > .< / p >
< p > Some documentation also exists in < a href = "https://github.com/matrix-org/synapse/wiki" > Synapse's GitHub
Wiki< / a > , although this is primarily
contributed to by community authors.< / p >
< h1 id = "8-test-test-test" > < a class = "header" href = "#8-test-test-test" > 8. Test, test, test!< / a > < / h1 >
< p > < a name = "test-test-test" > < / a > < / p >
< p > While you're developing and before submitting a patch, you'll
want to test your code.< / p >
< h2 id = "run-the-linters" > < a class = "header" href = "#run-the-linters" > Run the linters.< / a > < / h2 >
< p > The linters look at your code and do two things:< / p >
< ul >
< li > ensure that your code follows the coding style adopted by the project;< / li >
< li > catch a number of errors in your code.< / li >
< / ul >
< p > They're pretty fast, don't hesitate!< / p >
< pre > < code class = "language-sh" > source ./env/bin/activate
./scripts-dev/lint.sh
< / code > < / pre >
< p > Note that this script < em > will modify your files< / em > to fix styling errors.
Make sure that you have saved all your files.< / p >
< p > If you wish to restrict the linters to only the files changed since the last commit
(much faster!), you can instead run:< / p >
< pre > < code class = "language-sh" > source ./env/bin/activate
./scripts-dev/lint.sh -d
< / code > < / pre >
< p > Or if you know exactly which files you wish to lint, you can instead run:< / p >
< pre > < code class = "language-sh" > source ./env/bin/activate
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
< / code > < / pre >
< h2 id = "run-the-unit-tests" > < a class = "header" href = "#run-the-unit-tests" > Run the unit tests.< / a > < / h2 >
< p > The unit tests run parts of Synapse, including your changes, to see if anything
was broken. They are slower than the linters but will typically catch more errors.< / p >
< pre > < code class = "language-sh" > source ./env/bin/activate
trial tests
< / code > < / pre >
< p > If you wish to only run < em > some< / em > unit tests, you may specify
another module instead of < code > tests< / code > - or a test class or a method:< / p >
< pre > < code class = "language-sh" > source ./env/bin/activate
trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
< / code > < / pre >
< p > If your tests fail, you may wish to look at the logs (the default log level is < code > ERROR< / code > ):< / p >
< pre > < code class = "language-sh" > less _trial_temp/test.log
< / code > < / pre >
< p > To increase the log level for the tests, set < code > SYNAPSE_TEST_LOG_LEVEL< / code > :< / p >
< pre > < code class = "language-sh" > SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
< / code > < / pre >
< h2 id = "run-the-integration-tests" > < a class = "header" href = "#run-the-integration-tests" > Run the integration tests.< / a > < / h2 >
< p > The integration tests are a more comprehensive suite of tests. They
run a full version of Synapse, including your changes, to check if
anything was broken. They are slower than the unit tests but will
typically catch more errors.< / p >
< p > The following command will let you run the integration test with the most common
configuration:< / p >
<pre><code class="language-sh">docker run --rm -it -v /path/where/you/have/cloned/the/repository:/src:ro -v /path/to/where/you/want/logs:/logs matrixdotorg/sytest-synapse:py37
</code></pre>
< p > This configuration should generally cover your needs. For more details about other configurations, see < a href = "https://github.com/matrix-org/sytest/blob/develop/docker/README.md" > documentation in the SyTest repo< / a > .< / p >
< h1 id = "9-submit-your-patch" > < a class = "header" href = "#9-submit-your-patch" > 9. Submit your patch.< / a > < / h1 >
< p > Once you're happy with your patch, it's time to prepare a Pull Request.< / p >
< p > To prepare a Pull Request, please:< / p >
< ol >
< li > verify that < a href = "development/contributing_guide.html#test-test-test" > all the tests pass< / a > , including the coding style;< / li >
< li > < a href = "development/contributing_guide.html#sign-off" > sign off< / a > your contribution;< / li >
< li > < code > git push< / code > your commit to your fork of Synapse;< / li >
< li > on GitHub, < a href = "https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request" > create the Pull Request< / a > ;< / li >
< li > add a < a href = "development/contributing_guide.html#changelog" > changelog entry< / a > and push it to your Pull Request;< / li >
< li > for most contributors, that's all - however, if you are a member of the organization < code > matrix-org< / code > , on GitHub, please request a review from < code > matrix.org / Synapse Core< / code > .< / li >
< / ol >
< h2 id = "changelog" > < a class = "header" href = "#changelog" > Changelog< / a > < / h2 >
< p > All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by < a href = "https://github.com/hawkowl/towncrier" > Towncrier< / a > .< / p >
< p > To create a changelog entry, make a new file in the < code > changelog.d< / code > directory named
in the format of < code > PRnumber.type< / code > . The type can be one of the following:< / p >
< ul >
< li > < code > feature< / code > < / li >
< li > < code > bugfix< / code > < / li >
< li > < code > docker< / code > (for updates to the Docker image)< / li >
< li > < code > doc< / code > (for updates to the documentation)< / li >
< li > < code > removal< / code > (also used for deprecations)< / li >
< li > < code > misc< / code > (for internal-only changes)< / li >
< / ul >
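<p>As an illustrative sketch (the helper and its name are hypothetical, not part of Synapse), the naming scheme above can be expressed as:</p>
<pre><code class="language-python"># Hypothetical helper: build a changelog.d path from a PR number and type.
VALID_TYPES = {"feature", "bugfix", "docker", "doc", "removal", "misc"}

def changelog_filename(pr_number: int, change_type: str) -> str:
    if change_type not in VALID_TYPES:
        raise ValueError("unknown changelog type: %s" % change_type)
    return "changelog.d/%d.%s" % (pr_number, change_type)
</code></pre>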
< p > This file will become part of our < a href = "https://github.com/matrix-org/synapse/blob/master/CHANGES.md" > changelog< / a > at the next
release, so the content of the file should be a short description of your
change in the same style as the rest of the changelog. The file can contain Markdown
formatting, and should end with a full stop (.) or an exclamation mark (!) for
consistency.< / p >
<p>Adding credits to the changelog is encouraged: we value your
contributions and would like to have you shouted out in the release notes!</p>
< p > For example, a fix in PR #1234 would have its changelog entry in
< code > changelog.d/1234.bugfix< / code > , and contain content like:< / p >
< blockquote >
< p > The security levels of Florbs are now validated when received
via the < code > /federation/florb< / code > endpoint. Contributed by Jane Matrix.< / p >
< / blockquote >
< p > If there are multiple pull requests involved in a single bugfix/feature/etc,
then the content for each < code > changelog.d< / code > file should be the same. Towncrier will
merge the matching files together into a single changelog entry when we come to
release.< / p >
< h3 id = "how-do-i-know-what-to-call-the-changelog-file-before-i-create-the-pr" > < a class = "header" href = "#how-do-i-know-what-to-call-the-changelog-file-before-i-create-the-pr" > How do I know what to call the changelog file before I create the PR?< / a > < / h3 >
< p > Obviously, you don't know if you should call your newsfile
< code > 1234.bugfix< / code > or < code > 5678.bugfix< / code > until you create the PR, which leads to a
chicken-and-egg problem.< / p >
< p > There are two options for solving this:< / p >
< ol >
< li >
< p > Open the PR without a changelog file, see what number you got, and < em > then< / em >
add the changelog file to your branch (see < a href = "development/contributing_guide.html#updating-your-pull-request" > Updating your pull
request< / a > ), or:< / p >
< / li >
< li >
< p > Look at the < a href = "https://github.com/matrix-org/synapse/issues?q=" > list of all
issues/PRs< / a > , add one to the
highest number you see, and quickly open the PR before somebody else claims
your number.< / p >
< p > < a href = "https://github.com/richvdh/scripts/blob/master/next_github_number.sh" > This
script< / a >
might be helpful if you find yourself doing this a lot.< / p >
< / li >
< / ol >
< p > Sorry, we know it's a bit fiddly, but it's < em > really< / em > helpful for us when we come
to put together a release!< / p >
< h3 id = "debian-changelog" > < a class = "header" href = "#debian-changelog" > Debian changelog< / a > < / h3 >
< p > Changes which affect the debian packaging files (in < code > debian< / code > ) are an
exception to the rule that all changes require a < code > changelog.d< / code > file.< / p >
< p > In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command:< / p >
< pre > < code > dch
< / code > < / pre >
< p > This will make up a new version number (if there isn't already an unreleased
version in flight), and open an editor where you can add a new changelog entry.
(Our release process will ensure that the version number and maintainer name is
corrected for the release.)< / p >
< p > If your change affects both the debian packaging < em > and< / em > files outside the debian
directory, you will need both a regular newsfragment < em > and< / em > an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)< / p >
< h2 id = "sign-off" > < a class = "header" href = "#sign-off" > Sign off< / a > < / h2 >
< p > In order to have a concrete record that your contribution is intentional
and you agree to license it under the same terms as the project's license, we've adopted the
same lightweight approach that the Linux Kernel
<a href="https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin">submitting patches process</a>,
< a href = "https://github.com/docker/docker/blob/master/CONTRIBUTING.md" > Docker< / a > , and many other
projects use: the DCO (Developer Certificate of Origin:
http://developercertificate.org/). This is a simple declaration that you wrote
the contribution or otherwise have the right to contribute it to Matrix:< / p >
< pre > < code > Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
< / code > < / pre >
< p > If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment:< / p >
<pre><code>Signed-off-by: Your Name <your@email.example.org>
< / code > < / pre >
< p > We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.< / p >
< p > Git allows you to add this signoff automatically when using the < code > -s< / code >
flag to < code > git commit< / code > , which uses the name and email set in your
< code > user.name< / code > and < code > user.email< / code > git configs.< / p >
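<p>For illustration (a sketch of the trailer format only, not Synapse code), the line that <code>git commit -s</code> appends is simply:</p>
<pre><code class="language-python"># Hypothetical sketch: the trailer `git commit -s` appends, built from the
# `user.name` and `user.email` git configs.
def signoff_trailer(name: str, email: str) -> str:
    return "Signed-off-by: %s <%s>" % (name, email)
</code></pre>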
< h1 id = "10-turn-feedback-into-better-code" > < a class = "header" href = "#10-turn-feedback-into-better-code" > 10. Turn feedback into better code.< / a > < / h1 >
< p > Once the Pull Request is opened, you will see a few things:< / p >
< ol >
< li > our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;< / li >
< li > one or more of the developers will take a look at your Pull Request and offer feedback.< / li >
< / ol >
< p > From this point, you should:< / p >
< ol >
< li > Look at the results of the CI pipeline.
< ul >
< li > If there is any error, fix the error.< / li >
< / ul >
< / li >
< li > If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.< / li >
< li > Create a new commit with the changes.
< ul >
< li > Please do NOT overwrite the history. New commits make the reviewer's life easier.< / li >
<li>Push these commits to your Pull Request.</li>
< / ul >
< / li >
< li > Back to 1.< / li >
< / ol >
< p > Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!< / p >
< h1 id = "11-find-a-new-issue" > < a class = "header" href = "#11-find-a-new-issue" > 11. Find a new issue.< / a > < / h1 >
< p > By now, you know the drill!< / p >
< h1 id = "notes-for-maintainers-on-merging-prs-etc" > < a class = "header" href = "#notes-for-maintainers-on-merging-prs-etc" > Notes for maintainers on merging PRs etc< / a > < / h1 >
< p > There are some notes for those with commit access to the project on how we
manage git < a href = "development/docs/dev/git.html" > here< / a > .< / p >
< h1 id = "conclusion" > < a class = "header" href = "#conclusion" > Conclusion< / a > < / h1 >
< p > That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "code-style" > < a class = "header" href = "#code-style" > Code Style< / a > < / h1 >
< h2 id = "formatting-tools" > < a class = "header" href = "#formatting-tools" > Formatting tools< / a > < / h2 >
< p > The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical)
errors in code.< / p >
< p > The necessary tools are detailed below.< / p >
< p > First install them with:< / p >
<pre><code>pip install -e ".[lint,mypy]"
< / code > < / pre >
< ul >
< li >
< p > < strong > black< / strong > < / p >
<p>The Synapse codebase uses <a href="https://pypi.org/project/black/">black</a>
as an opinionated code formatter, ensuring all committed code is
properly formatted.</p>
< p > Have < code > black< / code > auto-format your code (it shouldn't change any
functionality) with:< / p >
<pre><code>black . --exclude="\.tox|build|env"
< / code > < / pre >
< / li >
< li >
< p > < strong > flake8< / strong > < / p >
< p > < code > flake8< / code > is a code checking tool. We require code to pass < code > flake8< / code >
before being merged into the codebase.< / p >
< p > Check all application and test code with:< / p >
< pre > < code > flake8 synapse tests
< / code > < / pre >
< / li >
< li >
< p > < strong > isort< / strong > < / p >
< p > < code > isort< / code > ensures imports are nicely formatted, and can suggest and
auto-fix issues such as double-importing.< / p >
< p > Auto-fix imports with:< / p >
< pre > < code > isort -rc synapse tests
< / code > < / pre >
< p > < code > -rc< / code > means to recursively search the given directories.< / p >
< / li >
< / ul >
< p > It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient
development workflow. It is not, however, recommended to run < code > flake8< / code > on
save as it takes a while and is very resource intensive.< / p >
< h2 id = "general-rules" > < a class = "header" href = "#general-rules" > General rules< / a > < / h2 >
< ul >
< li > < strong > Naming< / strong > :
< ul >
<li>Use camel case for class and type names.</li>
< li > Use underscores for functions and variables.< / li >
< / ul >
< / li >
< li > < strong > Docstrings< / strong > : should follow the < a href = "https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings" > google code
style< / a > .
See the
< a href = "http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html" > examples< / a >
in the sphinx documentation.< / li >
< li > < strong > Imports< / strong > :
< ul >
< li >
< p > Imports should be sorted by < code > isort< / code > as described above.< / p >
< / li >
< li >
< p > Prefer to import classes and functions rather than packages or
modules.< / p >
< p > Example:< / p >
< pre > < code > from synapse.types import UserID
...
user_id = UserID(local, server)
< / code > < / pre >
< p > is preferred over:< / p >
< pre > < code > from synapse import types
...
user_id = types.UserID(local, server)
< / code > < / pre >
< p > (or any other variant).< / p >
< p > This goes against the advice in the Google style guide, but it
means that errors in the name are caught early (at import time).< / p >
< / li >
< li >
< p > Avoid wildcard imports (< code > from synapse.types import *< / code > ) and
relative imports (< code > from .types import UserID< / code > ).< / p >
< / li >
< / ul >
< / li >
< / ul >
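<p>Putting the naming and docstring rules together, a short example (the class and function names here are hypothetical) might look like:</p>
<pre><code class="language-python">class RoomCounter:  # camel case for class and type names
    """Tracks the number of members in a room.

    Args:
        room_id: The ID of the room being tracked.
    """

    def __init__(self, room_id: str) -> None:
        self.room_id = room_id  # underscores for variables
        self.member_count = 0


def update_member_count(counter: RoomCounter, delta: int) -> int:
    """Applies a change to a counter, following the Google docstring style.

    Args:
        counter: The counter to update.
        delta: The change to apply.

    Returns:
        The new member count.
    """
    counter.member_count += delta
    return counter.member_count
</code></pre>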
< h2 id = "configuration-file-format" > < a class = "header" href = "#configuration-file-format" > Configuration file format< / a > < / h2 >
< p > The < a href = "./sample_config.yaml" > sample configuration file< / a > acts as a
reference to Synapse's configuration options for server administrators.
Remember that many readers will be unfamiliar with YAML and server
administration in general, so it is important that the file be as
easy to understand as possible, which includes following a consistent
format.< / p >
< p > Some guidelines follow:< / p >
< ul >
< li >
< p > Sections should be separated with a heading consisting of a single
line prefixed and suffixed with < code > ##< / code > . There should be < strong > two< / strong > blank
lines before the section header, and < strong > one< / strong > after.< / p >
< / li >
< li >
< p > Each option should be listed in the file with the following format:< / p >
< ul >
< li >
< p > A comment describing the setting. Each line of this comment
should be prefixed with a hash (< code > #< / code > ) and a space.< / p >
< p > The comment should describe the default behaviour (ie, what
happens if the setting is omitted), as well as what the effect
will be if the setting is changed.< / p >
<p>Often, the comment ends with something like "uncomment the
following to &lt;do action&gt;".</p>
< / li >
< li >
< p > A line consisting of only < code > #< / code > .< / p >
< / li >
< li >
< p > A commented-out example setting, prefixed with only < code > #< / code > .< / p >
<p>For boolean (on/off) options, convention is that this example
should be the <em>opposite</em> to the default (so the comment will end
with "Uncomment the following to enable [or disable]
&lt;feature&gt;.") For other options, the example should give some
non-default value which is likely to be useful to the reader.</p>
< / li >
< / ul >
< / li >
< li >
< p > There should be a blank line between each option.< / p >
< / li >
< li >
< p > Where several settings are grouped into a single dict, < em > avoid< / em > the
convention where the whole block is commented out, resulting in
comment lines starting < code > # #< / code > , as this is hard to read and confusing
to edit. Instead, leave the top-level config option uncommented, and
follow the conventions above for sub-options. Ensure that your code
correctly handles the top-level option being set to < code > None< / code > (as it
will be if no sub-options are enabled).< / p >
< / li >
< li >
< p > Lines should be wrapped at 80 characters.< / p >
< / li >
< li >
< p > Use two-space indents.< / p >
< / li >
< li >
< p > < code > true< / code > and < code > false< / code > are spelt thus (as opposed to < code > True< / code > , etc.)< / p >
< / li >
< li >
< p > Use single quotes (< code > '< / code > ) rather than double-quotes (< code > " < / code > ) or backticks
(< code > `< / code > ) to refer to configuration options.< / p >
< / li >
< / ul >
< p > Example:< / p >
< pre > < code > ## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobnicator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
< / code > < / pre >
< p > Note that the sample configuration is generated from the synapse code
and is maintained by a script, < code > scripts-dev/generate_sample_config< / code > .
Making sure that the output from this script matches the desired format
is left as an exercise for the reader!< / p >
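<p>The per-option layout described above can be sketched with a small renderer. This is not the actual <code>generate_sample_config</code> script, just a hypothetical illustration of the format:</p>
<pre><code class="language-python">def render_option(description: str, example: str) -> str:
    """Renders one config option in the documented style: a '# '-prefixed
    description, a line containing only '#', then a commented-out example."""
    lines = ["# " + line for line in description.splitlines()]
    lines.append("#")
    lines.append("#" + example)
    return "\n".join(lines)
</code></pre>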
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "some-notes-on-how-we-use-git" > < a class = "header" href = "#some-notes-on-how-we-use-git" > Some notes on how we use git< / a > < / h1 >
< h2 id = "on-keeping-the-commit-history-clean" > < a class = "header" href = "#on-keeping-the-commit-history-clean" > On keeping the commit history clean< / a > < / h2 >
< p > In an ideal world, our git commit history would be a linear progression of
commits each of which contains a single change building on what came
before. Here, by way of an arbitrary example, is the top of < code > git log --graph b2dba0607< / code > :< / p >
< img src = "dev/git/clean.png" alt = "clean git graph" width = "500px" >
< p > Note how the commit comment explains clearly what is changing and why. Also
note the < em > absence< / em > of merge commits, as well as the absence of commits called
things like (to pick a few culprits):
< a href = "https://github.com/matrix-org/synapse/commit/84691da6c" > “pep8”< / a > , < a href = "https://github.com/matrix-org/synapse/commit/474810d9d" > “fix broken
test”< / a > ,
< a href = "https://github.com/matrix-org/synapse/commit/c9d72e457" > “oops”< / a > ,
< a href = "https://github.com/matrix-org/synapse/commit/836358823" > “typo”< / a > , or < a href = "https://github.com/matrix-org/synapse/commit/707374d5d" > “Who's
the president?”< / a > .< / p >
< p > There are a number of reasons why keeping a clean commit history is a good
thing:< / p >
< ul >
< li >
< p > From time to time, after a change lands, it turns out to be necessary to
revert it, or to backport it to a release branch. Those operations are
< em > much< / em > easier when the change is contained in a single commit.< / p >
< / li >
< li >
< p > Similarly, it's much easier to answer questions like “is the fix for
< code > /publicRooms< / code > on the release branch?” if that change consists of a single
commit.< / p >
< / li >
< li >
< p > Likewise: “what has changed on this branch in the last week?” is much
clearer without merges and “pep8” commits everywhere.< / p >
< / li >
< li >
< p > Sometimes we need to figure out where a bug got introduced, or some
behaviour changed. One way of doing that is with < code > git bisect< / code > : pick an
arbitrary commit between the known good point and the known bad point, and
see how the code behaves. However, that strategy fails if the commit you
chose is the middle of someone's epic branch in which they broke the world
before putting it back together again.< / p >
< / li >
< / ul >
< p > One counterargument is that it is sometimes useful to see how a PR evolved as
it went through review cycles. This is true, but that information is always
available via the GitHub UI (or via the little-known < a href = "https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally" > refs/pull
namespace< / a > ).< / p >
< p > Of course, in reality, things are more complicated than that. We have release
branches as well as < code > develop< / code > and < code > master< / code > , and we deliberately merge changes
between them. Bugs often slip through and have to be fixed later. That's all
fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
towards.< / p >
< h2 id = "merges-squashes-rebases-wtf" > < a class = "header" href = "#merges-squashes-rebases-wtf" > Merges, squashes, rebases: wtf?< / a > < / h2 >
< p > Ok, so that's what we'd like to achieve. How do we achieve it?< / p >
< p > The TL;DR is: when you come to merge a pull request, you < em > probably< / em > want to
“squash and merge”:< / p >
< p > < img src = "dev/git/squash.png" alt = "squash and merge" / > .< / p >
< p > (This applies whether you are merging your own PR, or that of another
contributor.)< / p >
< p > “Squash and merge”< sup id = "a1" > < a href = "dev/git.html#f1" > 1< / a > < / sup > takes all of the changes in the
PR, and bundles them into a single commit. GitHub gives you the opportunity to
edit the commit message before you confirm, and normally you should do so,
because the default will be useless (again: < code > * woops typo< / code > is not a useful
thing to keep in the historical record).< / p >
< p > The main problem with this approach comes when you have a series of pull
requests which build on top of one another: as soon as you squash-merge the
first PR, you'll end up with a stack of conflicts to resolve in all of the
others. In general, it's best to avoid this situation in the first place by
trying not to have multiple related PRs in flight at the same time. Still,
sometimes that's not possible and doing a regular merge is the lesser evil.< / p >
< p > Another occasion in which a regular merge makes more sense is a PR where you've
deliberately created a series of commits each of which makes sense in its own
right. For example: < a href = "https://github.com/matrix-org/synapse/pull/6837" > a PR which gradually propagates a refactoring operation
through the codebase< / a > , or < a href = "https://github.com/matrix-org/synapse/pull/5987" > a
PR which is the culmination of several other
PRs< / a > . In this case the ability
to figure out when a particular change/bug was introduced could be very useful.< / p >
<p>Ultimately: <strong>this is not a hard-and-fast-rule</strong>. If in doubt, ask yourself “does
each of the commits I am about to merge make sense in its own right?”, but
remember that we're just doing our best to balance “keeping the commit history
clean” with other factors.< / p >
< h2 id = "git-branching-model" > < a class = "header" href = "#git-branching-model" > Git branching model< / a > < / h2 >
< p > A < a href = "https://nvie.com/posts/a-successful-git-branching-model/" > lot< / a >
< a href = "http://scottchacon.com/2011/08/31/github-flow.html" > of< / a >
< a href = "https://www.endoflineblog.com/gitflow-considered-harmful" > words< / a > have been
written in the past about git branching models (no really, < a href = "https://martinfowler.com/articles/branching-patterns.html" > a
lot< / a > ). I tend to
think the whole thing is overblown. Fundamentally, it's not that
complicated. Here's how we do it.< / p >
< p > Let's start with a picture:< / p >
< p > < img src = "dev/git/branches.jpg" alt = "branching model" / > < / p >
< p > It looks complicated, but it's really not. There's one basic rule: < em > anyone< / em > is
free to merge from < em > any< / em > more-stable branch to < em > any< / em > less-stable branch at
<em>any</em> time<sup id="a2"><a href="dev/git.html#f2">2</a></sup>. (The principle behind this is that if a
change is good enough for the more-stable branch, then it's also good enough to
put in a less-stable branch.)</p>
< p > Meanwhile, merging (or squashing, as per the above) from a less-stable to a
more-stable branch is a deliberate action in which you want to publish a change
or a set of changes to (some subset of) the world: for example, this happens
when a PR is landed, or as part of our release process.< / p >
< p > So, what counts as a more- or less-stable branch? A little reflection will show
that our active branches are ordered thus, from more-stable to less-stable:< / p >
< ul >
< li > < code > master< / code > (tracks our last release).< / li >
< li > < code > release-vX.Y< / code > (the branch where we prepare the next release)< sup
id="a3">< a href = "dev/git.html#f3" > 3< / a > < / sup > .< / li >
< li > PR branches which are targeting the release.< / li >
<li><code>develop</code> (our "mainline" branch containing our bleeding-edge).</li>
< li > regular PR branches.< / li >
< / ul >
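<p>The ordering above can be illustrated with a toy sketch (the branch names are placeholders from the list, not a real API):</p>
<pre><code class="language-python"># Toy sketch of the stability ordering above, most-stable first.
STABILITY = ["master", "release-vX.Y", "release PR branch", "develop", "PR branch"]

def merge_freely_allowed(src: str, dst: str) -> bool:
    """Anyone may merge from a more-stable branch into a less-stable one."""
    return STABILITY.index(src) < STABILITY.index(dst)
</code></pre>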
< p > The corollary is: if you have a bugfix that needs to land in both
< code > release-vX.Y< / code > < em > and< / em > < code > develop< / code > , then you should base your PR on
< code > release-vX.Y< / code > , get it merged there, and then merge from < code > release-vX.Y< / code > to
< code > develop< / code > . (If a fix lands in < code > develop< / code > and we later need it in a
release-branch, we can of course cherry-pick it, but landing it in the release
branch first helps reduce the chance of annoying conflicts.)< / p >
< hr / >
< p > < b id = "f1" > [1]< / b > : “Squash and merge” is GitHub's term for this
operation. Given that there is no merge involved, I'm not convinced it's the
most intuitive name. < a href = "dev/git.html#a1" > ^< / a > < / p >
< p > < b id = "f2" > [2]< / b > : Well, anyone with commit access.< a href = "dev/git.html#a2" > ^< / a > < / p >
< p > < b id = "f3" > [3]< / b > : Very, very occasionally (I think this has happened once in
the history of Synapse), we've had two releases in flight at once. Obviously,
< code > release-v1.2< / code > is more-stable than < code > release-v1.3< / code > . < a href = "dev/git.html#a3" > ^< / a > < / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "opentracing" > < a class = "header" href = "#opentracing" > OpenTracing< / a > < / h1 >
< h2 id = "background" > < a class = "header" href = "#background" > Background< / a > < / h2 >
< p > OpenTracing is a semi-standard being adopted by a number of distributed
tracing platforms. It is a common API for facilitating vendor-agnostic
tracing instrumentation. That is, we can use the OpenTracing API and
select one of a number of tracer implementations to do the heavy lifting
in the background. Our currently selected implementation is Jaeger.</p>
< p > OpenTracing is a tool which gives an insight into the causal
relationship of work done in and between servers. The servers each track
events and report them to a centralised server - in Synapse's case:
Jaeger. The basic unit used to represent events is the span. The span
roughly represents a single piece of work that was done and the time at
which it occurred. A span can have child spans, meaning that the work of
the child had to be completed for the parent span to complete, or it can
have follow-on spans which represent work that is undertaken as a result
of the parent but is not depended on by the parent in order to
finish.</p>
<p>Since this is undertaken in a distributed environment, a request to
another server, such as an RPC or a simple GET, can be considered a span
(a unit of work) for the local server. This causal link is what
OpenTracing aims to capture and visualise. In order to do this, metadata
about the local server's span, i.e. the 'span context', needs to be
included with the request to the remote.</p>
< p > It is up to the remote server to decide what it does with the spans it
creates. This is called the sampling policy and it can be configured
through Jaeger's settings.< / p >
< p > For OpenTracing concepts see
< a href = "https://opentracing.io/docs/overview/what-is-tracing/" > https://opentracing.io/docs/overview/what-is-tracing/< / a > .< / p >
< p > For more information about Jaeger's implementation see
< a href = "https://www.jaegertracing.io/docs/" > https://www.jaegertracing.io/docs/< / a > < / p >
< h2 id = "setting-up-opentracing" > < a class = "header" href = "#setting-up-opentracing" > Setting up OpenTracing< / a > < / h2 >
< p > To receive OpenTracing spans, start up a Jaeger server. This can be done
using docker like so:< / p >
< pre > < code class = "language-sh" > docker run -d --name jaeger \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
jaegertracing/all-in-one:1
< / code > < / pre >
< p > Latest documentation is probably at
https://www.jaegertracing.io/docs/latest/getting-started.< / p >
< h2 id = "enable-opentracing-in-synapse" > < a class = "header" href = "#enable-opentracing-in-synapse" > Enable OpenTracing in Synapse< / a > < / h2 >
< p > OpenTracing is not enabled by default. It must be enabled in the
homeserver config by uncommenting the config options under < code > opentracing< / code >
as shown in the < a href = "./sample_config.yaml" > sample config< / a > . For example:< / p >
< pre > < code class = "language-yaml" > opentracing:
enabled: true
homeserver_whitelist:
- " mytrustedhomeserver.org"
- " *.myotherhomeservers.com"
< / code > < / pre >
< h2 id = "homeserver-whitelisting" > < a class = "header" href = "#homeserver-whitelisting" > Homeserver whitelisting< / a > < / h2 >
<p>The homeserver whitelist is configured using regular expressions. A list
of regular expressions can be given and their union will be compared
when propagating any span contexts to another homeserver.</p>
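<p>As a sketch of how such a union of patterns might be checked (a hypothetical helper, not Synapse's actual implementation):</p>
<pre><code class="language-python">import re

# Hypothetical helper: match a destination homeserver against the union of
# the configured whitelist patterns.
def destination_whitelisted(destination: str, patterns) -> bool:
    combined = "|".join("(?:%s)" % p for p in patterns)
    return re.fullmatch(combined, destination) is not None
</code></pre>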
<p>Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque IDs, it can lead
to two problems, namely:</p>
< ul >
< li > If the span context is marked as sampled by the sending homeserver
the receiver will sample it. Therefore two homeservers with wildly
different sampling policies could incur higher sampling counts than
intended.< / li >
< li > Sending servers can attach arbitrary data to spans, known as
'baggage'. For safety this has been disabled in Synapse but that
doesn't prevent another server sending you baggage which will be
logged to OpenTracing's logs.< / li >
< / ul >
< h2 id = "configuring-jaeger" > < a class = "header" href = "#configuring-jaeger" > Configuring Jaeger< / a > < / h2 >
< p > Sampling strategies can be set as in this document:
< a href = "https://www.jaegertracing.io/docs/latest/sampling/" > https://www.jaegertracing.io/docs/latest/sampling/< / a > .< / p >
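< p > On the Synapse side, the sampler is configured under < code > jaeger_config< / code > in the < code > opentracing< / code > section; check the sample config for the exact options available. For example, a constant sampler that records every trace might look like:< / p >

```yaml
opentracing:
  enabled: true
  jaeger_config:
    sampler:
      type: const
      param: 1
```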
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "synapse-database-schema-files" > < a class = "header" href = "#synapse-database-schema-files" > Synapse database schema files< / a > < / h1 >
< p > Synapse's database schema is stored in the < code > synapse.storage.schema< / code > module.< / p >
< h2 id = "logical-databases" > < a class = "header" href = "#logical-databases" > Logical databases< / a > < / h2 >
< p > Synapse supports splitting its datastore across multiple physical databases (which can
be useful for large installations), and the schema files are therefore split according
to the logical database they apply to.< / p >
< p > At the time of writing, the following " logical" databases are supported:< / p >
< ul >
< li > < code > state< / code > - used to store Matrix room state (more specifically, < code > state_groups< / code > ,
their relationships and contents).< / li >
< li > < code > main< / code > - stores everything else.< / li >
< / ul >
< p > Additionally, the < code > common< / code > directory contains schema files for tables which must be
present on < em > all< / em > physical databases.< / p >
< h2 id = "synapse-schema-versions" > < a class = "header" href = "#synapse-schema-versions" > Synapse schema versions< / a > < / h2 >
< p > Synapse manages its database schema via " schema versions" . These are mainly used to
help avoid confusion if the Synapse codebase is rolled back after the database is
updated. They work as follows:< / p >
< ul >
< li >
< p > The Synapse codebase defines a constant < code > synapse.storage.schema.SCHEMA_VERSION< / code >
which represents the expectations made about the database by that version. For
example, as of Synapse v1.36, this is < code > 59< / code > .< / p >
< / li >
< li >
< p > The database stores a " compatibility version" in
< code > schema_compat_version.compat_version< / code > which defines the < code > SCHEMA_VERSION< / code > of the
oldest version of Synapse which will work with the database. On startup, if
< code > compat_version< / code > is found to be newer than < code > SCHEMA_VERSION< / code > , Synapse will refuse to
start.< / p >
< p > Synapse automatically updates this field from
< code > synapse.storage.schema.SCHEMA_COMPAT_VERSION< / code > .< / p >
< / li >
< li >
< p > Whenever a backwards-incompatible change is made to the database format (normally
via a < code > delta< / code > file), < code > synapse.storage.schema.SCHEMA_COMPAT_VERSION< / code > is also updated
so that administrators cannot accidentally roll back to a too-old version of Synapse.< / p >
< / li >
< / ul >
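< p > The startup check implied by the rules above can be sketched as follows (a minimal illustration, not Synapse's actual code):< / p >

```python
# A minimal sketch (not Synapse's actual code) of the startup check.
SCHEMA_VERSION = 59         # expectations of this codebase (as of v1.36)
SCHEMA_COMPAT_VERSION = 59  # oldest codebase the resulting database supports

def check_compat(db_compat_version: int) -> None:
    # Refuse to start if the database was upgraded by a newer Synapse
    # than this codebase supports.
    if db_compat_version > SCHEMA_VERSION:
        raise RuntimeError(
            "Database schema compat version %d is newer than supported "
            "SCHEMA_VERSION %d" % (db_compat_version, SCHEMA_VERSION)
        )

check_compat(59)  # fine: the database is compatible with this codebase
```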
< p > Generally, the goal is to maintain compatibility with at least one or two previous
releases of Synapse, so any substantial change tends to require multiple releases and a
bit of forward-planning to get right.< / p >
< p > As a worked example: we want to remove the < code > room_stats_historical< / code > table. Here is how it
might pan out.< / p >
< ol >
< li >
< p > Replace any code that < em > reads< / em > from < code > room_stats_historical< / code > with alternative
implementations, but keep writing to it in case of rollback to an earlier version.
Also, increase < code > synapse.storage.schema.SCHEMA_VERSION< / code > . In this
instance, there is no existing code which reads from < code > room_stats_historical< / code > , so
our starting point is:< / p >
< p > v1.36.0: < code > SCHEMA_VERSION=59< / code > , < code > SCHEMA_COMPAT_VERSION=59< / code > < / p >
< / li >
< li >
< p > Next (say in Synapse v1.37.0): remove the code that < em > writes< / em > to
< code > room_stats_historical< / code > , but don’ t yet remove the table in case of rollback to
v1.36.0. Again, we increase < code > synapse.storage.schema.SCHEMA_VERSION< / code > , but
because we have not broken compatibility with v1.36, we do not yet update
< code > SCHEMA_COMPAT_VERSION< / code > . We now have:< / p >
< p > v1.37.0: < code > SCHEMA_VERSION=60< / code > , < code > SCHEMA_COMPAT_VERSION=59< / code > .< / p >
< / li >
< li >
< p > Later (say in Synapse v1.38.0): we can remove the table altogether. This will
break compatibility with v1.36.0, so we must update < code > SCHEMA_COMPAT_VERSION< / code > accordingly.
There is no need to update < code > synapse.storage.schema.SCHEMA_VERSION< / code > , since there is no
change to the Synapse codebase here. So we end up with:< / p >
< p > v1.38.0: < code > SCHEMA_VERSION=60< / code > , < code > SCHEMA_COMPAT_VERSION=60< / code > .< / p >
< / li >
< / ol >
< p > If in doubt about whether to update < code > SCHEMA_VERSION< / code > or not, it is generally best to
lean towards doing so.< / p >
< h2 id = "full-schema-dumps" > < a class = "header" href = "#full-schema-dumps" > Full schema dumps< / a > < / h2 >
< p > In the < code > full_schemas< / code > directories, only the most recently-numbered snapshot is used
(< code > 54< / code > at the time of writing). Older snapshots (eg, < code > 16< / code > ) are present for historical
reference only.< / p >
< h3 id = "building-full-schema-dumps" > < a class = "header" href = "#building-full-schema-dumps" > Building full schema dumps< / a > < / h3 >
< p > If you want to recreate these schemas, they need to be made from a database that
has had all background updates run.< / p >
< p > To do so, use < code > scripts-dev/make_full_schema.sh< / code > . This will produce new
< code > full.sql.postgres< / code > and < code > full.sql.sqlite< / code > files.< / p >
< p > Ensure postgres is installed, then run:< / p >
< pre > < code > ./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
< / code > < / pre >
< p > NB at the time of writing, this script predates the split into separate < code > state< / code > /< code > main< / code >
databases, so it will require updates to handle that correctly.< / p >
< h2 id = "boolean-columns" > < a class = "header" href = "#boolean-columns" > Boolean columns< / a > < / h2 >
< p > Boolean columns require special treatment, since SQLite treats booleans the
same as integers.< / p >
< p > There are three separate aspects to this:< / p >
< ul >
< li >
< p > Any new boolean column must be added to the < code > BOOLEAN_COLUMNS< / code > list in
< code > scripts/synapse_port_db< / code > . This tells the port script to cast the integer
value from SQLite to a boolean before writing the value to the postgres
database.< / p >
< / li >
< li >
< p > Before SQLite 3.23, < code > TRUE< / code > and < code > FALSE< / code > were not recognised as constants by
SQLite, and the < code > IS [NOT] TRUE< / code > /< code > IS [NOT] FALSE< / code > operators were not
supported. This makes it necessary to avoid using < code > TRUE< / code > and < code > FALSE< / code >
constants in SQL commands.< / p >
< p > For example, to insert a < code > TRUE< / code > value into the database, write:< / p >
< pre > < code class = "language-python" > txn.execute("INSERT INTO tbl(col) VALUES (?)", (True,))
< / code > < / pre >
< / li >
< li >
< p > Default values for new boolean columns present a particular
difficulty. Generally it is best to create separate schema files for
Postgres and SQLite. For example:< / p >
< pre > < code class = "language-sql" > -- in 00delta.sql.postgres:
ALTER TABLE tbl ADD COLUMN col BOOLEAN DEFAULT FALSE;
< / code > < / pre >
< pre > < code class = "language-sql" > -- in 00delta.sql.sqlite:
ALTER TABLE tbl ADD COLUMN col BOOLEAN DEFAULT 0;
< / code > < / pre >
< p > Note that there is a particularly insidious failure mode here: the Postgres
flavour will be accepted by SQLite 3.22, but will give a column whose
default value is the < strong > string< / strong > < code > " FALSE" < / code > - which, when cast back to a boolean
in Python, evaluates to < code > True< / code > .< / p >
< / li >
< / ul >
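< p > The cast performed for < code > BOOLEAN_COLUMNS< / code > can be sketched like this (hypothetical names; the real list lives in < code > scripts/synapse_port_db< / code >):< / p >

```python
# An illustrative sketch (hypothetical names) of the cast the port
# script performs: SQLite returns 0/1 integers, which must become real
# booleans before being written to the postgres database.
BOOLEAN_COLUMNS = {"tbl": ["col"]}

def cast_booleans(table, column_names, row):
    bool_cols = BOOLEAN_COLUMNS.get(table, ())
    return [
        bool(value) if name in bool_cols else value
        for name, value in zip(column_names, row)
    ]

assert cast_booleans("tbl", ["id", "col"], [5, 1]) == [5, True]
assert cast_booleans("other_tbl", ["id"], [5]) == [5]
```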
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "log-contexts" > < a class = "header" href = "#log-contexts" > Log Contexts< / a > < / h1 >
< p > To help track the processing of individual requests, synapse uses a
'< code > log context< / code > ' to track which request it is handling at any given
moment. This is done via a thread-local variable; a < code > logging.Filter< / code > is
then used to fish the information back out of the thread-local variable
and add it to each log record.< / p >
< p > Logcontexts are also used for CPU and database accounting, so that we
can track which requests were responsible for high CPU use or database
activity.< / p >
< p > The < code > synapse.logging.context< / code > module provides facilities for managing
the current log context (as well as providing the < code > LoggingContextFilter< / code >
class).< / p >
< p > Deferreds make the whole thing complicated, so this document describes
how it all works, and how to write code which follows the rules.< / p >
< h2 id = "logcontexts-without-deferreds" > < a class = "header" href = "#logcontexts-without-deferreds" > Logcontexts without Deferreds< / a > < / h2 >
< p > In the absence of any Deferred voodoo, things are simple enough. As with
any code of this nature, the rule is that our function should leave
things as it found them:< / p >
< pre > < code class = "language-python" > from synapse.logging import context # omitted from future snippets
def handle_request(request_id):
request_context = context.LoggingContext()
calling_context = context.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
context.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id
< / code > < / pre >
< p > LoggingContext implements the context management methods, so the above
can be written much more succinctly as:< / p >
< pre > < code class = "language-python" > def handle_request(request_id):
with context.LoggingContext() as request_context:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
def do_request_handling():
logger.debug("phew")
< / code > < / pre >
< h2 id = "using-logcontexts-with-deferreds" > < a class = "header" href = "#using-logcontexts-with-deferreds" > Using logcontexts with Deferreds< / a > < / h2 >
< p > Deferreds --- and in particular, < code > defer.inlineCallbacks< / code > --- break the
linear flow of code so that there is no longer a single entry point
where we should set the logcontext and a single exit point where we
should remove it.< / p >
< p > Consider the example above, where < code > do_request_handling< / code > needs to do some
blocking operation, and returns a deferred:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def handle_request(request_id):
with context.LoggingContext() as request_context:
request_context.request = request_id
yield do_request_handling()
logger.debug(" finished" )
< / code > < / pre >
< p > In the above flow:< / p >
< ul >
< li > The logcontext is set< / li >
< li > < code > do_request_handling< / code > is called, and returns a deferred< / li >
< li > < code > handle_request< / code > yields the deferred< / li >
< li > The < code > inlineCallbacks< / code > wrapper of < code > handle_request< / code > returns a deferred< / li >
< / ul >
< p > So we have stopped processing the request (and will probably go on to
start processing the next), without clearing the logcontext.< / p >
< p > To circumvent this problem, synapse code assumes that, wherever you have
a deferred, you will want to yield on it. To that end, wherever
functions return a deferred, we adopt the following conventions:< / p >
< p > < strong > Rules for functions returning deferreds:< / strong > < / p >
< blockquote >
< ul >
< li > If the deferred is already complete, the function returns with the
same logcontext it started with.< / li >
< li > If the deferred is incomplete, the function clears the logcontext
before returning; when the deferred completes, it restores the
logcontext before running any callbacks.< / li >
< / ul >
< / blockquote >
< p > That sounds complicated, but actually it means a lot of code (including
the example above) " just works" . There are two cases:< / p >
< ul >
< li >
< p > If < code > do_request_handling< / code > returns a completed deferred, then the
logcontext will still be in place. In this case, execution will
continue immediately after the < code > yield< / code > ; the " finished" line will
be logged against the right context, and the < code > with< / code > block restores
the original context before we return to the caller.< / p >
< / li >
< li >
< p > If the returned deferred is incomplete, < code > do_request_handling< / code > clears
the logcontext before returning. The logcontext is therefore clear
when < code > handle_request< / code > yields the deferred. At that point, the
< code > inlineCallbacks< / code > wrapper adds a callback to the deferred, and
returns another (incomplete) deferred to the caller, and it is safe
to begin processing the next request.< / p >
< p > Once < code > do_request_handling< / code > 's deferred completes, it will reinstate
the logcontext, before running the callback added by the
< code > inlineCallbacks< / code > wrapper. That callback runs the second half of
< code > handle_request< / code > , so again the " finished" line will be logged
against the right context, and the < code > with< / code > block restores the
original context.< / p >
< / li >
< / ul >
< p > As an aside, it's worth noting that < code > handle_request< / code > follows our rules
- though that only matters if the caller has its own logcontext which it
cares about.< / p >
< p > The following sections describe pitfalls and helpful patterns when
implementing these rules.< / p >
< h2 id = "always-yield-your-deferreds" > < a class = "header" href = "#always-yield-your-deferreds" > Always yield your deferreds< / a > < / h2 >
< p > Whenever you get a deferred back from a function, you should < code > yield< / code > on
it as soon as possible. (Returning it directly to your caller is ok too,
if you're not doing < code > inlineCallbacks< / code > .) Do not pass go; do not do any
logging; do not call any other functions.< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def fun():
logger.debug("starting")
yield do_some_stuff() # just like this
d = more_stuff()
result = yield d # also fine, of course
return result
def nonInlineCallbacksFun():
logger.debug("just a wrapper really")
return do_some_stuff() # this is ok too - the caller will yield on
# it anyway.
< / code > < / pre >
< p > Provided this pattern is followed all the way back up the call chain
to where the logcontext was set, this will make things work out ok:
provided < code > do_some_stuff< / code > and < code > more_stuff< / code > follow the rules above, then
so will < code > fun< / code > (as wrapped by < code > inlineCallbacks< / code > ) and
< code > nonInlineCallbacksFun< / code > .< / p >
< p > It's all too easy to forget to < code > yield< / code > : for instance if we forgot that
< code > do_some_stuff< / code > returned a deferred, we might plough on regardless. This
leads to a mess; it will probably work itself out eventually, but not
before a load of stuff has been logged against the wrong context.
(Normally, other things will break, more obviously, if you forget to
< code > yield< / code > , so this tends not to be a major problem in practice.)< / p >
< p > Of course sometimes you need to do something a bit fancier with your
Deferreds - not all code follows the linear A-then-B-then-C pattern.
Notes on implementing more complex patterns are in later sections.< / p >
< h2 id = "where-you-create-a-new-deferred-make-it-follow-the-rules" > < a class = "header" href = "#where-you-create-a-new-deferred-make-it-follow-the-rules" > Where you create a new Deferred, make it follow the rules< / a > < / h2 >
< p > Most of the time, a Deferred comes from another synapse function.
Sometimes, though, we need to make up a new Deferred, or we get a
Deferred back from external code. We need to make it follow our rules.< / p >
< p > The easy way to do it is with a combination of < code > defer.inlineCallbacks< / code > ,
and < code > context.PreserveLoggingContext< / code > . Suppose we want to implement
< code > sleep< / code > , which returns a deferred which will run its callbacks after a
given number of seconds. That might look like:< / p >
< pre > < code class = "language-python" > # not a logcontext-rules-compliant function
def get_sleep_deferred(seconds):
d = defer.Deferred()
reactor.callLater(seconds, d.callback, None)
return d
< / code > < / pre >
< p > That doesn't follow the rules, but we can fix it by wrapping it with
< code > PreserveLoggingContext< / code > and < code > yield< / code > ing on it:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def sleep(seconds):
with PreserveLoggingContext():
yield get_sleep_deferred(seconds)
< / code > < / pre >
< p > This technique works equally for external functions which return
deferreds, or deferreds we have made ourselves.< / p >
< p > You can also use < code > context.make_deferred_yieldable< / code > , which just does the
boilerplate for you, so the above could be written:< / p >
< pre > < code class = "language-python" > def sleep(seconds):
return context.make_deferred_yieldable(get_sleep_deferred(seconds))
< / code > < / pre >
< h2 id = "fire-and-forget" > < a class = "header" href = "#fire-and-forget" > Fire-and-forget< / a > < / h2 >
< p > Sometimes you want to fire off a chain of execution, but not wait for
its result. That might look a bit like this:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
# *don't* do this
background_operation()
logger.debug("Request handling complete")
@defer.inlineCallbacks
def background_operation():
yield first_background_step()
logger.debug("Completed first step")
yield second_background_step()
logger.debug("Completed second step")
< / code > < / pre >
< p > The above code does a couple of steps in the background after
< code > do_request_handling< / code > has finished. The log lines are still logged
against the < code > request_context< / code > logcontext, which may or may not be
desirable. There are two big problems with the above, however. The first
problem is that, if < code > background_operation< / code > returns an incomplete
Deferred, it will expect its caller to < code > yield< / code > immediately, so will have
cleared the logcontext. In this example, that means that 'Request
handling complete' will be logged without any context.< / p >
< p > The second problem, which is potentially even worse, is that when the
Deferred returned by < code > background_operation< / code > completes, it will restore
the original logcontext. There is nothing waiting on that Deferred, so
the logcontext will leak into the reactor and possibly get attached to
some arbitrary future operation.< / p >
< p > There are two potential solutions to this.< / p >
< p > One option is to surround the call to < code > background_operation< / code > with a
< code > PreserveLoggingContext< / code > call. That will reset the logcontext before
starting < code > background_operation< / code > (so the context restored when the
deferred completes will be the empty logcontext), and will restore the
current logcontext before continuing the foreground process:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
# start background_operation off in the empty logcontext, to
# avoid leaking the current context into the reactor.
with PreserveLoggingContext():
background_operation()
# this will now be logged against the request context
logger.debug("Request handling complete")
< / code > < / pre >
< p > Obviously that option means that the operations done in
< code > background_operation< / code > would not be logged against a logcontext
(though that might be fixed by setting a different logcontext via a
< code > with LoggingContext(...)< / code > in < code > background_operation< / code > ).< / p >
< p > The second option is to use < code > context.run_in_background< / code > , which wraps a
function so that it doesn't reset the logcontext even when it returns
an incomplete deferred, and adds a callback to the returned deferred to
reset the logcontext. In other words, it turns a function that follows
the Synapse rules about logcontexts and Deferreds into one which behaves
more like an external function --- the opposite operation to that
described in the previous section. It can be used like this:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
context.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")
< / code > < / pre >
< h2 id = "passing-synapse-deferreds-into-third-party-functions" > < a class = "header" href = "#passing-synapse-deferreds-into-third-party-functions" > Passing synapse deferreds into third-party functions< / a > < / h2 >
< p > A typical example of this is where we want to collect together two or
more deferreds via < code > defer.gatherResults< / code > :< / p >
< pre > < code class = "language-python" > d1 = operation1()
d2 = operation2()
d3 = defer.gatherResults([d1, d2])
< / code > < / pre >
< p > This is really a variation of the fire-and-forget problem above, in that
we are firing off < code > d1< / code > and < code > d2< / code > without yielding on them. The difference
is that we now have third-party code attached to their callbacks. In any
case, either technique given in the < a href = "log_contexts.html#fire-and-forget" > Fire-and-forget< / a >
section will work.< / p >
< p > Of course, the new Deferred returned by < code > gatherResults< / code > needs to be
wrapped in order to make it follow the logcontext rules before we can
yield it, as described in < a href = "log_contexts.html#where-you-create-a-new-deferred-make-it-follow-the-rules" > Where you create a new Deferred, make it
follow the
rules< / a > .< / p >
< p > So, option one: reset the logcontext before starting the operations to
be gathered:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def do_request_handling():
with PreserveLoggingContext():
d1 = operation1()
d2 = operation2()
result = yield defer.gatherResults([d1, d2])
< / code > < / pre >
< p > In this case particularly, though, option two, of using
< code > context.preserve_fn< / code > , almost certainly makes more sense, so that
< code > operation1< / code > and < code > operation2< / code > are both logged against the original
logcontext. This looks like:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def do_request_handling():
d1 = context.preserve_fn(operation1)()
d2 = context.preserve_fn(operation2)()
with PreserveLoggingContext():
result = yield defer.gatherResults([d1, d2])
< / code > < / pre >
< h2 id = "was-all-this-really-necessary" > < a class = "header" href = "#was-all-this-really-necessary" > Was all this really necessary?< / a > < / h2 >
< p > The conventions used work fine for a linear flow where everything
happens in series via < code > defer.inlineCallbacks< / code > and < code > yield< / code > , but are
certainly tricky to follow for any more exotic flows. It's hard not to
wonder if we could have done something else.< / p >
< p > We're not going to rewrite Synapse now, so the following is entirely of
academic interest, but I'd like to record some thoughts on an
alternative approach.< / p >
< p > I briefly prototyped some code following an alternative set of rules. I
think it would work, but I certainly didn't get as far as thinking how
it would interact with concepts as complicated as the cache descriptors.< / p >
< p > My alternative rules were:< / p >
< ul >
< li > functions always preserve the logcontext of their caller, whether or
not they are returning a Deferred.< / li >
< li > Deferreds returned by synapse functions run their callbacks in the
same context as the function was originally called in.< / li >
< / ul >
< p > The main point of this scheme is that everywhere that sets the
logcontext is responsible for clearing it before returning control to
the reactor.< / p >
< p > So, for example, if you were the function which started a
< code > with LoggingContext< / code > block, you wouldn't < code > yield< / code > within it --- instead
you'd start off the background process, and then leave the < code > with< / code > block
to wait for it:< / p >
< pre > < code class = "language-python" > def handle_request(request_id):
with context.LoggingContext() as request_context:
request_context.request = request_id
d = do_request_handling()
def cb(r):
logger.debug("finished")
d.addCallback(cb)
return d
< / code > < / pre >
< p > (in general, mixing < code > with LoggingContext< / code > blocks and
< code > defer.inlineCallbacks< / code > in the same function leads to slightly
counter-intuitive code, under this scheme).< / p >
< p > Because we leave the original < code > with< / code > block as soon as the Deferred is
returned (as opposed to waiting for it to be resolved, as we do today),
the logcontext is cleared before control passes back to the reactor; so
if there is some code within < code > do_request_handling< / code > which needs to wait
for a Deferred to complete, there is no need for it to worry about
clearing the logcontext before doing so:< / p >
< pre > < code class = "language-python" > def handle_request():
r = do_some_stuff()
r.addCallback(do_some_more_stuff)
return r
< / code > < / pre >
< p > --- and provided < code > do_some_stuff< / code > follows the rules of returning a
Deferred which runs its callbacks in the original logcontext, all is
happy.< / p >
< p > The business of a Deferred which runs its callbacks in the original
logcontext isn't hard to achieve --- we have it today, in the shape of
< code > context._PreservingContextDeferred< / code > :< / p >
< pre > < code class = "language-python" > def do_some_stuff():
deferred = do_some_io()
pcd = _PreservingContextDeferred(LoggingContext.current_context())
deferred.chainDeferred(pcd)
return pcd
< / code > < / pre >
< p > It turns out that, thanks to the way that Deferreds chain together, we
automatically get the property of a context-preserving deferred with
< code > defer.inlineCallbacks< / code > , provided the final Deferred the function
< code > yields< / code > on has that property. So we can just write:< / p >
< pre > < code class = "language-python" > @defer.inlineCallbacks
def handle_request():
yield do_some_stuff()
yield do_some_more_stuff()
< / code > < / pre >
< p > To conclude: I think this scheme would have worked equally well, with
less danger of messing it up, and probably made some more esoteric code
easier to write. But again --- changing the conventions of the entire
Synapse codebase is not a sensible option for the marginal improvement
offered.< / p >
< h2 id = "a-note-on-garbage-collection-of-deferred-chains" > < a class = "header" href = "#a-note-on-garbage-collection-of-deferred-chains" > A note on garbage-collection of Deferred chains< / a > < / h2 >
< p > It turns out that our logcontext rules do not play nicely with Deferred
chains which get orphaned and garbage-collected.< / p >
< p > Imagine we have some code that looks like this:< / p >
< pre > < code class = "language-python" > listener_queue = []
def on_something_interesting():
for d in listener_queue:
d.callback("foo")
@defer.inlineCallbacks
def await_something_interesting():
new_deferred = defer.Deferred()
listener_queue.append(new_deferred)
with PreserveLoggingContext():
yield new_deferred
< / code > < / pre >
< p > Obviously, the idea here is that we have a bunch of things which are
waiting for an event. (It's just an example of the problem here, but a
relatively common one.)< / p >
< p > Now let's imagine two further things happen. First of all, whatever was
waiting for the interesting thing goes away. (Perhaps the request times
out, or something < em > even more< / em > interesting happens.)< / p >
< p > Secondly, let's suppose that we decide that the interesting thing is
never going to happen, and we reset the listener queue:< / p >
< pre > < code class = "language-python" > def reset_listener_queue():
listener_queue.clear()
< / code > < / pre >
< p > So, both ends of the deferred chain have now dropped their references,
and the deferred chain is now orphaned, and will be garbage-collected at
some point. Note that < code > await_something_interesting< / code > is a generator
function, and when Python garbage-collects generator functions, it gives
them a chance to clean up by making the < code > yield< / code > raise a < code > GeneratorExit< / code >
exception. In our case, that means that the < code > __exit__< / code > handler of
< code > PreserveLoggingContext< / code > will carefully restore the request context, but
there is now nothing waiting for its return, so the request context is
never cleared.< / p >
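< p > This failure mode can be demonstrated in miniature with a plain generator and a stand-in context manager (illustrative only):< / p >

```python
import gc

cleanup_log = []

class ContextRestorer:
    # Stands in for PreserveLoggingContext: __exit__ "restores" a context.
    def __enter__(self):
        return self
    def __exit__(self, *exc_info):
        cleanup_log.append("restored")
        return False

def await_something():
    with ContextRestorer():
        yield  # parked here, like 'yield new_deferred'

g = await_something()
next(g)       # advance the generator to the yield
del g         # drop the only reference; Python closes the generator,
gc.collect()  # raising GeneratorExit at the yield and running __exit__
assert cleanup_log == ["restored"]
```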
< p > To reiterate, this problem only arises when < em > both< / em > ends of a deferred
chain are dropped. Dropping the reference to a deferred you're
supposed to be calling is probably bad practice, so this doesn't
actually happen too much. Unfortunately, when it does happen, it will
lead to leaked logcontexts which are incredibly hard to track down.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "replication-architecture" > < a class = "header" href = "#replication-architecture" > Replication Architecture< / a > < / h1 >
< h2 id = "motivation" > < a class = "header" href = "#motivation" > Motivation< / a > < / h2 >
< p > We'd like to be able to split some of the work that synapse does into
multiple python processes. In theory multiple synapse processes could
share a single postgresql database and we'd scale up by running more
synapse processes. However much of synapse assumes that only one process
is interacting with the database, both for assigning unique identifiers
when inserting into tables, notifying components about new updates, and
for invalidating its caches.< / p >
< p > So running multiple copies of the current code isn't an option. One way
to run multiple processes would be to have a single writer process and
multiple reader processes connected to the same database. In order to do
this we'd need a way for the reader process to invalidate its in-memory
caches when an update happens on the writer. One way to do this is for
the writer to present an append-only log of updates which the readers
can consume to invalidate their caches and to push updates to listening
clients or pushers.< / p >
< p > Synapse already stores much of its data as an append-only log so that it
can correctly respond to < code > /sync< / code > requests so the amount of code changes
needed to expose the append-only log to the readers should be fairly
minimal.< / p >
< h2 id = "architecture" > < a class = "header" href = "#architecture" > Architecture< / a > < / h2 >
< h3 id = "the-replication-protocol" > < a class = "header" href = "#the-replication-protocol" > The Replication Protocol< / a > < / h3 >
< p > See < a href = "tcp_replication.html" > tcp_replication.md< / a > < / p >
< h3 id = "the-slaved-datastore" > < a class = "header" href = "#the-slaved-datastore" > The Slaved DataStore< / a > < / h3 >
< p > There are read-only versions of the synapse storage layer in
< code > synapse/replication/slave/storage< / code > that use the response of the
replication API to invalidate their caches.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "tcp-replication" > < a class = "header" href = "#tcp-replication" > TCP Replication< / a > < / h1 >
< h2 id = "motivation-1" > < a class = "header" href = "#motivation-1" > Motivation< / a > < / h2 >
< p > Previously the workers used an HTTP long poll mechanism to get updates
from the master, which had the problem of causing a lot of duplicate
work on the server. This TCP protocol replaces those APIs with the aim
of increased efficiency.< / p >
< h2 id = "overview-3" > < a class = "header" href = "#overview-3" > Overview< / a > < / h2 >
< p > The protocol is based on fire-and-forget, line-based commands. An
example flow would be (where '> ' indicates master to worker and
'< ' worker to master flows):< / p >
< pre > < code > > SERVER example.com
< REPLICATE
> POSITION events master 53 53
> RDATA events master 54 [" $foo1:bar.com" , ...]
> RDATA events master 55 [" $foo4:bar.com" , ...]
< / code > < / pre >
< p > The example shows the server accepting a new connection and sending its identity
with the < code > SERVER< / code > command, followed by the client asking to replicate with
the < code > REPLICATE< / code > command and the server responding with the position of all
streams. The server then periodically sends < code > RDATA< / code > commands
which have the format < code > RDATA < stream_name> < instance_name> < token> < row> < / code > , where
the format of < code > < row> < / code > is defined by the individual streams. The
< code > < instance_name> < / code > is the name of the Synapse process that generated the data
(usually " master" ).< / p >
< p > Error reporting happens by either the client or server sending an ERROR
command, and usually the connection will be closed.< / p >
< p > Since the protocol is a simple line-based one, it's possible to manually
connect to the server using a tool like netcat. A few things should be
noted when manually using the protocol:< / p >
< ul >
< li > The federation stream is only available if federation sending has
been disabled on the main process.< / li >
< li > The server will only time out connections that have sent a < code > PING< / code >
command. If a ping is sent then the connection will be closed if no
further commands are received within 15s. Both the client and
server protocol implementations will send an initial PING on
connection and ensure at least one command every 5s is sent (not
necessarily < code > PING< / code > ).< / li >
< li > < code > RDATA< / code > commands < em > usually< / em > include a numeric token, however if the
stream has multiple rows to replicate per token the server will send
multiple < code > RDATA< / code > commands, with all but the last having a token of
< code > batch< / code > . See the documentation on < code > commands.RdataCommand< / code > for
further details.< / li >
< / ul >
< h2 id = "architecture-1" > < a class = "header" href = "#architecture-1" > Architecture< / a > < / h2 >
< p > The basic structure of the protocol is line based, where the initial
word of each line specifies the command. The rest of the line is parsed
based on the command. For example, the RDATA command is defined as:< / p >
< pre > < code > RDATA < stream_name> < instance_name> < token> < row_json>
< / code > < / pre >
< p > (Note that < row_json> may contain spaces, but cannot contain
newlines.)< / p >
< p > Blank lines are ignored.< / p >
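< p > As a rough illustration (this is a sketch, not Synapse's actual parser), a line can be handled by splitting off the first word to select the command, and then parsing the rest per-command. Because the row JSON may contain spaces, only the first three fields of an < code > RDATA< / code > body are split on spaces:< / p >
< pre > < code class = "language-python" > import json

def parse_line(line):
    # The first word of the line selects the command.
    command, _, rest = line.strip().partition(" ")
    return command, rest

def parse_rdata(rest):
    # The row JSON may contain spaces but not newlines, so split off
    # only the first three space-separated fields and parse the rest
    # as JSON.
    stream_name, instance_name, token, row_json = rest.split(" ", 3)
    return stream_name, instance_name, token, json.loads(row_json)

command, rest = parse_line('RDATA events master 54 ["$foo1:bar.com", null]')
stream_name, instance_name, token, row = parse_rdata(rest)
# command == "RDATA", stream_name == "events", token == "54"
< / code > < / pre >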
< h3 id = "keep-alives" > < a class = "header" href = "#keep-alives" > Keep alives< / a > < / h3 >
< p > Both sides are expected to send at least one command every 5s or so, and
should send a < code > PING< / code > command if necessary. If either side does not receive
a command within e.g. 15s then the connection should be closed.< / p >
< p > Because the server may be connected to manually using e.g. netcat, the
timeouts aren't enabled until an initial < code > PING< / code > command is seen. Both
the client and server implementations below send a < code > PING< / code > command
immediately on connection to ensure the timeouts are enabled.< / p >
< p > This ensures that both sides can quickly realize that the TCP connection
has gone away and handle the situation appropriately.< / p >
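< p > The keep-alive bookkeeping described above can be sketched as follows. The class and method names are made up for illustration (they are not Synapse's), and the clock is injectable so the behaviour can be exercised without waiting:< / p >
< pre > < code class = "language-python" > import time

SEND_INTERVAL = 5.0      # send something at least this often
RECEIVE_TIMEOUT = 15.0   # close if nothing received for this long

class KeepAlive:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_sent = now()
        self.last_received = now()

    def on_send(self):
        self.last_sent = self.now()

    def on_receive(self):
        self.last_received = self.now()

    def should_ping(self):
        # Send a PING if we haven't sent any command recently.
        return self.now() - self.last_sent >= SEND_INTERVAL

    def should_close(self):
        # Close the connection if the peer has gone quiet.
        return self.now() - self.last_received >= RECEIVE_TIMEOUT
< / code > < / pre >
< p > A real implementation would additionally suppress the timeout until the initial < code > PING< / code > has been seen, so that manual netcat sessions are not disconnected.< / p >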
< h3 id = "start-up" > < a class = "header" href = "#start-up" > Start up< / a > < / h3 >
< p > When a new connection is made, the server:< / p >
< ul >
< li > Sends a < code > SERVER< / code > command, which includes the identity of the server,
allowing the client to detect if it's connected to the expected
server< / li >
< li > Sends a < code > PING< / code > command as above, to enable the client to time out
connections promptly.< / li >
< / ul >
< p > The client:< / p >
< ul >
< li > Sends a < code > NAME< / code > command, allowing the server to associate a human
friendly name with the connection. This is optional.< / li >
< li > Sends a < code > PING< / code > as above< / li >
< li > Sends a < code > REPLICATE< / code > to get the current position of all streams.< / li >
< li > On receipt of a < code > SERVER< / code > command, checks that the server name
matches the expected server name.< / li >
< / ul >
< h3 id = "error-handling" > < a class = "header" href = "#error-handling" > Error handling< / a > < / h3 >
< p > If either side detects an error it can send an < code > ERROR< / code > command and close
the connection.< / p >
< p > If the client side loses the connection to the server it should
reconnect, following the steps above.< / p >
< h3 id = "congestion" > < a class = "header" href = "#congestion" > Congestion< / a > < / h3 >
< p > If the server sends messages faster than the client can consume them the
server will first buffer a (fairly large) number of commands and then
disconnect the client. This ensures that we don't queue up an unbounded
number of commands in memory and gives us a potential opportunity to
squawk loudly. When/if the client recovers it can reconnect to the
server and ask for missed messages.< / p >
< h3 id = "reliability" > < a class = "header" href = "#reliability" > Reliability< / a > < / h3 >
< p > In general the replication stream should be considered an unreliable
transport since e.g. commands are not resent if the connection
disappears.< / p >
< p > The exception to that is the replication streams, i.e. RDATA commands,
since these include tokens which can be used to restart the stream on
connection errors.< / p >
< p > The client should keep track of the token in the last RDATA command
received for each stream so that on reconnection it can start streaming
from the correct place. Note: not all RDATA have valid tokens due to
batching. See < code > RdataCommand< / code > for more details.< / p >
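< p > That client-side bookkeeping can be sketched like this (the names are hypothetical, not Synapse's): store the last numeric token seen per stream, and deliberately ignore the < code > batch< / code > pseudo-token, which does not represent a resumable position:< / p >
< pre > < code class = "language-python" > class StreamPositions:
    """Tracks the last valid token seen per stream (illustrative only)."""

    def __init__(self):
        self._positions = {}  # stream_name -> last numeric token

    def on_rdata(self, stream_name, token):
        # Batched rows carry the pseudo-token "batch" and must not
        # advance the stored stream position.
        if token != "batch":
            self._positions[stream_name] = int(token)

    def resume_token(self, stream_name):
        # Token to resume from on reconnection (None if never seen).
        return self._positions.get(stream_name)

positions = StreamPositions()
positions.on_rdata("events", "54")
positions.on_rdata("caches", "batch")
# positions.resume_token("events") == 54
< / code > < / pre >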
< h3 id = "example-4" > < a class = "header" href = "#example-4" > Example< / a > < / h3 >
< p > An example interaction is shown below. Each line is prefixed with '> '
or '< ' to indicate which side is sending, these are < em > not< / em > included on
the wire:< / p >
< pre > < code > * connection established *
> SERVER localhost:8823
> PING 1490197665618
< NAME synapse.app.appservice
< PING 1490197665618
< REPLICATE
> POSITION events master 1 1
> POSITION backfill master 1 1
> POSITION caches master 1 1
> RDATA caches master 2 [" get_user_by_id" ,[" @01register-user:localhost:8823" ],1490197670513]
> RDATA events master 14 [" $149019767112vOHxz:localhost:8823" ,
" !AFDCvgApUmpdfVjIXm:localhost:8823" ," m.room.guest_access" ," " ,null]
< PING 1490197675618
> ERROR server stopping
* connection closed by server *
< / code > < / pre >
< p > The < code > POSITION< / code > command sent by the server is used to set the client's
position without needing to send data with the < code > RDATA< / code > command.< / p >
< p > An example of a batched set of < code > RDATA< / code > is:< / p >
< pre > < code > > RDATA caches master batch [" get_user_by_id" ,[" @test:localhost:8823" ],1490197670513]
> RDATA caches master batch [" get_user_by_id" ,[" @test2:localhost:8823" ],1490197670513]
> RDATA caches master batch [" get_user_by_id" ,[" @test3:localhost:8823" ],1490197670513]
> RDATA caches master 54 [" get_user_by_id" ,[" @test4:localhost:8823" ],1490197670513]
< / code > < / pre >
< p > In this case the client shouldn't advance its caches stream token until it
sees the last < code > RDATA< / code > .< / p >
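< p > One way a client might handle this, sketched with hypothetical names: buffer rows while the token is < code > batch< / code > , and only hand back the accumulated rows (and the token to advance to) when the final row with a numeric token arrives:< / p >
< pre > < code class = "language-python" > class BatchBuffer:
    """Accumulates batched RDATA rows for a stream (illustrative only)."""

    def __init__(self):
        self.pending = []

    def on_rdata(self, token, row):
        self.pending.append(row)
        if token == "batch":
            # More rows are coming for this token; don't advance yet.
            return None
        # Final row of the batch: return the token and all rows.
        rows, self.pending = self.pending, []
        return int(token), rows

buf = BatchBuffer()
buf.on_rdata("batch", ["get_user_by_id", ["@test:localhost:8823"], 1490197670513])
buf.on_rdata("batch", ["get_user_by_id", ["@test2:localhost:8823"], 1490197670513])
result = buf.on_rdata("54", ["get_user_by_id", ["@test4:localhost:8823"], 1490197670513])
# result is (54, [...all three rows...])
< / code > < / pre >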
< h3 id = "list-of-commands" > < a class = "header" href = "#list-of-commands" > List of commands< / a > < / h3 >
< p > The list of valid commands, with which side can send it: server (S) or
client (C):< / p >
< h4 id = "server-s" > < a class = "header" href = "#server-s" > SERVER (S)< / a > < / h4 >
< p > Sent at the start to identify which server the client is talking to< / p >
< h4 id = "rdata-s" > < a class = "header" href = "#rdata-s" > RDATA (S)< / a > < / h4 >
< p > A single update in a stream< / p >
< h4 id = "position-s" > < a class = "header" href = "#position-s" > POSITION (S)< / a > < / h4 >
< p > On receipt of a POSITION command clients should check if they have missed any
updates, and if so then fetch them out of band. Sent in response to a
REPLICATE command (but can happen at any time).< / p >
< p > The POSITION command includes the source of the stream. Currently all streams
are written by a single process (usually " master" ). If fetching missing
updates via HTTP API, rather than via the DB, then processes should make the
request to the appropriate process.< / p >
< p > Two positions are included, the " new" position and the last position sent respectively.
This allows servers to tell instances that the positions have advanced but no
data has been written, without clients needlessly checking to see if they
have missed any updates.< / p >
< h4 id = "error-s-c" > < a class = "header" href = "#error-s-c" > ERROR (S, C)< / a > < / h4 >
< p > There was an error< / p >
< h4 id = "ping-s-c" > < a class = "header" href = "#ping-s-c" > PING (S, C)< / a > < / h4 >
< p > Sent periodically to ensure the connection is still alive< / p >
< h4 id = "name-c" > < a class = "header" href = "#name-c" > NAME (C)< / a > < / h4 >
< p > Sent at the start by client to inform the server who they are< / p >
< h4 id = "replicate-c" > < a class = "header" href = "#replicate-c" > REPLICATE (C)< / a > < / h4 >
< p > Asks the server for the current position of all streams.< / p >
< h4 id = "user_sync-c" > < a class = "header" href = "#user_sync-c" > USER_SYNC (C)< / a > < / h4 >
< p > A user has started or stopped syncing on this process.< / p >
< h4 id = "clear_user_sync-c" > < a class = "header" href = "#clear_user_sync-c" > CLEAR_USER_SYNC (C)< / a > < / h4 >
< p > The server should clear all associated user sync data from the worker.< / p >
< p > This is used when a worker is shutting down.< / p >
< h4 id = "federation_ack-c" > < a class = "header" href = "#federation_ack-c" > FEDERATION_ACK (C)< / a > < / h4 >
< p > Acknowledge receipt of some federation data< / p >
< h3 id = "remote_server_up-s-c" > < a class = "header" href = "#remote_server_up-s-c" > REMOTE_SERVER_UP (S, C)< / a > < / h3 >
< p > Inform other processes that a remote server may have come back online.< / p >
< p > See < code > synapse/replication/tcp/commands.py< / code > for a detailed description and
the format of each command.< / p >
< h3 id = "cache-invalidation-stream" > < a class = "header" href = "#cache-invalidation-stream" > Cache Invalidation Stream< / a > < / h3 >
< p > The cache invalidation stream is used to inform workers when they need
to invalidate any of their caches in the data store. This is done by
streaming all cache invalidations done on master down to the workers,
assuming that any caches on the workers also exist on the master.< / p >
< p > Each individual cache invalidation results in a row being sent down
replication, which includes the cache name (the name of the function)
and the key to invalidate. For example:< / p >
< pre > < code > > RDATA caches master 550953771 [" get_user_by_id" , [" @bob:example.com" ], 1550574873251]
< / code > < / pre >
< p > Alternatively, an entire cache can be invalidated by sending down a < code > null< / code >
instead of the key. For example:< / p >
< pre > < code > > RDATA caches master 550953772 [" get_user_by_id" , null, 1550574873252]
< / code > < / pre >
< p > However, there are times when a number of caches need to be invalidated
at the same time with the same key. To reduce traffic we batch those
invalidations into a single poke by defining a special cache name that
workers understand as an instruction to invalidate the correct set of caches.< / p >
< p > Currently the special cache names are declared in
< code > synapse/storage/_base.py< / code > and are:< / p >
< ol >
< li > < code > cs_cache_fake< / code > ─ invalidates caches that depend on the current
state< / li >
< / ol >
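< p > Put together, a worker-side handler for a row on the caches stream might look like the following sketch. The cache objects and names here are hypothetical stand-ins (plain dicts keyed by argument tuples); Synapse's real caches are more involved:< / p >
< pre > < code class = "language-python" > def handle_cache_invalidation(caches, cache_name, key):
    """Apply one row from the caches stream to this worker's caches."""
    cache = caches.get(cache_name)
    if cache is None:
        # This worker doesn't have that cache; nothing to do.
        return
    if key is None:
        # A null key means the entire cache is invalidated.
        cache.clear()
    else:
        # Otherwise invalidate the single entry for that key.
        cache.pop(tuple(key), None)

caches = {"get_user_by_id": {("@bob:example.com",): "cached value"}}
handle_cache_invalidation(caches, "get_user_by_id", ["@bob:example.com"])
# the "get_user_by_id" cache is now empty
< / code > < / pre >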
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "internal-documentation" > < a class = "header" href = "#internal-documentation" > Internal Documentation< / a > < / h1 >
< p > This section covers implementation documentation for various parts of Synapse.< / p >
< p > If a developer is planning to make a change to a feature of Synapse, it can be useful
to have general documentation available describing how that feature is implemented.
This saves the developer from having to work out how the feature works by reading the
code.< / p >
< p > Documentation that would be more useful from the perspective of a system administrator,
rather than a developer who intends to change the code, should instead be placed
under the Usage section of the documentation.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "how-to-test-saml-as-a-developer-without-a-server" > < a class = "header" href = "#how-to-test-saml-as-a-developer-without-a-server" > How to test SAML as a developer without a server< / a > < / h1 >
< p > https://capriza.github.io/samling/samling.html (https://github.com/capriza/samling) is a great
resource for being able to tinker with the SAML options within Synapse without needing to
deploy and configure a complicated software stack.< / p >
< p > To make Synapse (and therefore Riot) use it:< / p >
< ol >
< li > Use the samling.html URL above or deploy your own and visit the IdP Metadata tab.< / li >
< li > Copy the XML to your clipboard.< / li >
< li > On your Synapse server, create a new file < code > samling.xml< / code > next to your < code > homeserver.yaml< / code > with
the XML from step 2 as the contents.< / li >
< li > Edit your < code > homeserver.yaml< / code > to include:
< pre > < code class = "language-yaml" > saml2_config:
sp_config:
allow_unknown_attributes: true # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
metadata:
local: [" samling.xml" ]
< / code > < / pre >
< / li >
< li > Ensure that your < code > homeserver.yaml< / code > has a setting for < code > public_baseurl< / code > :
< pre > < code class = "language-yaml" > public_baseurl: http://localhost:8080/
< / code > < / pre >
< / li >
< li > Run < code > apt-get install xmlsec1< / code > and < code > pip install --upgrade --force 'pysaml2> =4.5.0'< / code > to ensure
the dependencies are installed and ready to go.< / li >
< li > Restart Synapse.< / li >
< / ol >
< p > Then in Riot:< / p >
< ol >
< li > Visit the login page with a Riot pointing at your homeserver.< / li >
< li > Click the Single Sign-On button.< / li >
< li > On the samling page, enter a Name Identifier and add a SAML Attribute for < code > uid=your_localpart< / code > .
The response must also be signed.< / li >
< li > Click " Next" .< / li >
< li > Click " Post Response" (change nothing).< / li >
< li > You should be logged in.< / li >
< / ol >
< p > If you try and repeat this process, you may be automatically logged in using the information you
gave previously. To fix this, open your developer console (< code > F12< / code > or < code > Ctrl+Shift+I< / code > ) while on the
samling page and clear the site data. In Chrome, this will be a button on the Application tab.< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "how-to-test-cas-as-a-developer-without-a-server" > < a class = "header" href = "#how-to-test-cas-as-a-developer-without-a-server" > How to test CAS as a developer without a server< / a > < / h1 >
< p > The < a href = "https://github.com/jbittel/django-mama-cas" > django-mama-cas< / a > project is an
easy to run CAS implementation built on top of Django.< / p >
< h2 id = "prerequisites" > < a class = "header" href = "#prerequisites" > Prerequisites< / a > < / h2 >
< ol >
< li > Create a new virtualenv: < code > python3 -m venv < your virtualenv> < / code > < / li >
< li > Activate your virtualenv: < code > source /path/to/your/virtualenv/bin/activate< / code > < / li >
< li > Install Django and django-mama-cas:
< pre > < code > python -m pip install " django< 3" " django-mama-cas==2.4.0"
< / code > < / pre >
< / li >
< li > Create a Django project in the current directory:
< pre > < code > django-admin startproject cas_test .
< / code > < / pre >
< / li >
< li > Follow the < a href = "https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring" > install directions< / a > for django-mama-cas< / li >
< li > Set up the SQLite database: < code > python manage.py migrate< / code > < / li >
< li > Create a user:
< pre > < code > python manage.py createsuperuser
< / code > < / pre >
< ol >
< li > Use whatever you want as the username and password.< / li >
< li > Leave the other fields blank.< / li >
< / ol >
< / li >
< li > Use the built-in Django test server to serve the CAS endpoints on port 8000:
< pre > < code > python manage.py runserver
< / code > < / pre >
< / li >
< / ol >
< p > You should now have a Django project configured to serve CAS authentication with
a single user created.< / p >
< h2 id = "configure-synapse-and-element-to-use-cas" > < a class = "header" href = "#configure-synapse-and-element-to-use-cas" > Configure Synapse (and Element) to use CAS< / a > < / h2 >
< ol >
< li > Modify your < code > homeserver.yaml< / code > to enable CAS and point it to your locally
running Django test server:
< pre > < code class = "language-yaml" > cas_config:
enabled: true
server_url: " http://localhost:8000"
service_url: " http://localhost:8081"
#displayname_attribute: name
#required_attributes:
# name: value
< / code > < / pre >
< / li >
< li > Restart Synapse.< / li >
< / ol >
< p > Note that the above configuration assumes the homeserver is running on port 8081
and that the CAS server is on port 8000, both on localhost.< / p >
< h2 id = "testing-the-configuration" > < a class = "header" href = "#testing-the-configuration" > Testing the configuration< / a > < / h2 >
< p > Then in Element:< / p >
< ol >
< li > Visit the login page with an Element pointing at your homeserver.< / li >
< li > Click the Single Sign-On button.< / li >
< li > Login using the credentials created with < code > createsuperuser< / code > .< / li >
< li > You should be logged in.< / li >
< / ol >
< p > If you want to repeat this process you'll need to manually logout first:< / p >
< ol >
< li > http://localhost:8000/admin/< / li >
< li > Click " logout" in the top right.< / li >
< / ol >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "auth-chain-difference-algorithm" > < a class = "header" href = "#auth-chain-difference-algorithm" > Auth Chain Difference Algorithm< / a > < / h1 >
< p > The auth chain difference algorithm is used by V2 state resolution, where a
naive implementation can be a significant source of CPU and DB usage.< / p >
< h3 id = "definitions" > < a class = "header" href = "#definitions" > Definitions< / a > < / h3 >
< p > A < em > state set< / em > is a set of state events; e.g. the input of a state resolution
algorithm is a collection of state sets.< / p >
< p > The < em > auth chain< / em > of a set of events is all of the events' auth events and < em > their< / em >
auth events, recursively (i.e. the events reachable by walking the graph induced
by an event's auth event links).< / p >
< p > The < em > auth chain difference< / em > of a collection of state sets is the union minus the
intersection of the sets of auth chains corresponding to the state sets, i.e. an
event is in the auth chain difference if it is reachable by walking the auth
event graph from at least one of the state sets but not from < em > all< / em > of the state
sets.< / p >
< h2 id = "breadth-first-walk-algorithm" > < a class = "header" href = "#breadth-first-walk-algorithm" > Breadth First Walk Algorithm< / a > < / h2 >
< p > A way of calculating the auth chain difference without calculating the full auth
chains for each state set is to do a parallel breadth first walk (ordered by
depth) of each state set's auth chain. By tracking which events are reachable
from each state set we can finish early if every pending event is reachable from
every state set.< / p >
< p > This can work well for state sets that have a small auth chain difference, but
can be very inefficient for larger differences. However, this algorithm is still
used if we don't have a chain cover index for the room (e.g. because we're in
the process of indexing it).< / p >
< h2 id = "chain-cover-index" > < a class = "header" href = "#chain-cover-index" > Chain Cover Index< / a > < / h2 >
< p > Synapse computes auth chain differences by pre-computing a " chain cover" index
for the auth chain in a room, allowing efficient reachability queries like " is
event A in the auth chain of event B" . This is done by assigning every event a
< em > chain ID< / em > and < em > sequence number< / em > (e.g. < code > (5,3)< / code > ), and having a map of < em > links< / em >
between chains (e.g. < code > (5,3) -> (2,4)< / code > ) such that < code > A< / code > is reachable from < code > B< / code > (i.e. < code > A< / code >
is in the auth chain of < code > B< / code > ) if and only if either:< / p >
< ol >
< li > A and B have the same chain ID and < code > A< / code > 's sequence number is less than < code > B< / code > 's
sequence number; or< / li >
< li > there is a link < code > L< / code > between < code > B< / code > 's chain ID and < code > A< / code > 's chain ID such that
< code > L.start_seq_no< / code > < = < code > B.seq_no< / code > and < code > A.seq_no< / code > < = < code > L.end_seq_no< / code > .< / li >
< / ol >
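< p > These two rules translate directly into code. The following sketch assumes a < code > links< / code > structure mapping a (from-chain, to-chain) pair to its link endpoints; this layout is illustrative, not Synapse's actual schema:< / p >
< pre > < code class = "language-python" > def is_reachable(a, b, links):
    """Return True if event A (chain, seq) is in the auth chain of B."""
    a_chain, a_seq = a
    b_chain, b_seq = b
    # Rule 1: same chain, and A has a lower sequence number than B.
    if a_chain == b_chain:
        return a_seq < b_seq
    # Rule 2: a link L from B's chain to A's chain exists with
    # L.start_seq_no <= B.seq_no and A.seq_no <= L.end_seq_no.
    for start_seq, end_seq in links.get((b_chain, a_chain), []):
        if start_seq <= b_seq and a_seq <= end_seq:
            return True
    return False

# The link (5,3) -> (2,4) from the text, stored as endpoints (3, 4)
# on the chain pair (5, 2):
links = {(5, 2): [(3, 4)]}
# (2,4) is reachable from (5,3) via that link, and (5,1) is reachable
# from (5,3) via rule 1.
< / code > < / pre >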
< p > There are actually two potential implementations, one where we store links from
each chain to every other reachable chain (the transitive closure of the links
graph), and one where we remove redundant links (the transitive reduction of the
links graph) e.g. if we have chains < code > C3 -> C2 -> C1< / code > then the link < code > C3 -> C1< / code >
would not be stored. Synapse uses the former implementation so that it doesn't
need to recurse to test reachability between chains.< / p >
< h3 id = "example-5" > < a class = "header" href = "#example-5" > Example< / a > < / h3 >
< p > An example auth graph would look like the following, where chains have been
formed based on type/state_key, are denoted by colour, and are labelled with
< code > (chain ID, sequence number)< / code > . Links are denoted by the arrows (links in grey
are those that would be removed in the second implementation described above).< / p >
< p > < img src = "auth_chain_diff.dot.png" alt = "Example" / > < / p >
< p > Note that we don't include all links between events and their auth events, as
most of those links would be redundant. For example, all events point to the
create event, but each chain only needs the one link from its base to the
create event.< / p >
< h2 id = "using-the-index" > < a class = "header" href = "#using-the-index" > Using the Index< / a > < / h2 >
< p > This index can be used to calculate the auth chain difference of the state sets
by looking at the chain ID and sequence numbers reachable from each state set:< / p >
< ol >
< li > For every state set, look up the chain ID/sequence numbers of each state event< / li >
< li > Use the index to find all chains and the maximum sequence number reachable
from each state set.< / li >
< li > The auth chain difference is then all events in each chain that have sequence
numbers between the maximum sequence number reachable from < em > any< / em > state set and
the minimum reachable by < em > all< / em > state sets (if any).< / li >
< / ol >
< p > Note that step 2 is effectively calculating the auth chain for each state set
(in terms of chain IDs and sequence numbers), and step 3 is calculating the
difference between the union and intersection of the auth chains.< / p >
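< p > Steps 2 and 3 can be sketched as follows. The input is, for each state set, a map from chain ID to the maximum sequence number reachable from that set (with an unreachable chain treated as 0); the function names and structures are illustrative only:< / p >
< pre > < code class = "language-python" > def auth_chain_difference(reachable_per_set):
    """Compute the difference as half-open (low, high] ranges per chain.

    `reachable_per_set` is a list of dicts, one per state set, mapping
    chain ID to the maximum sequence number reachable from that set.
    """
    all_chains = set()
    for reach in reachable_per_set:
        all_chains.update(reach)

    diff = {}
    for chain in all_chains:
        seqs = [reach.get(chain, 0) for reach in reachable_per_set]
        # Minimum reachable by *all* sets, maximum reachable by *any*.
        low, high = min(seqs), max(seqs)
        if low < high:
            diff[chain] = (low, high)  # events with low < seq <= high
    return diff

# The reachable chains for S1 and S2 from the worked example below:
s1 = {1: 1, 2: 2, 3: 1, 4: 1}
s2 = {1: 1, 2: 1, 3: 2, 4: 3}
# auth_chain_difference([s1, s2]) == {2: (1, 2), 3: (1, 2), 4: (1, 3)}
< / code > < / pre >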
< h3 id = "worked-example" > < a class = "header" href = "#worked-example" > Worked Example< / a > < / h3 >
< p > For example, given the above graph, we can calculate the difference between
state sets consisting of:< / p >
< ol >
< li > < code > S1< / code > : Alice's invite < code > (4,1)< / code > and Bob's second join < code > (2,2)< / code > ; and< / li >
< li > < code > S2< / code > : Alice's second join < code > (4,3)< / code > and Bob's first join < code > (2,1)< / code > .< / li >
< / ol >
< p > Using the index we see that the following auth chains are reachable from each
state set:< / p >
< ol >
< li > < code > S1< / code > : < code > (1,1)< / code > , < code > (2,2)< / code > , < code > (3,1)< / code > & < code > (4,1)< / code > < / li >
< li > < code > S2< / code > : < code > (1,1)< / code > , < code > (2,1)< / code > , < code > (3,2)< / code > & < code > (4,3)< / code > < / li >
< / ol >
< p > And so, for each chain, the ranges that are in the auth chain difference:< / p >
< ol >
< li > Chain 1: None, (since everything can reach the create event).< / li >
< li > Chain 2: The range < code > (1, 2]< / code > (i.e. just < code > 2< / code > ), as < code > 1< / code > is reachable by all state
sets and the maximum reachable is < code > 2< / code > (corresponding to Bob's second join).< / li >
< li > Chain 3: Similarly the range < code > (1, 2]< / code > (corresponding to the second power
level).< / li >
< li > Chain 4: The range < code > (1, 3]< / code > (corresponding to both of Alice's joins).< / li >
< / ol >
< p > So the final result is: Bob's second join < code > (2,2)< / code > , the second power level
< code > (3,2)< / code > and both of Alice's joins < code > (4,2)< / code > & < code > (4,3)< / code > .< / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "media-repository" > < a class = "header" href = "#media-repository" > Media Repository< / a > < / h1 >
< p > < em > Synapse implementation-specific details for the media repository< / em > < / p >
< p > The media repository is where attachments and avatar photos are stored.
It stores attachment content and thumbnails for media uploaded by local users.
It caches attachment content and thumbnails for media uploaded by remote users.< / p >
< h2 id = "storage" > < a class = "header" href = "#storage" > Storage< / a > < / h2 >
< p > Each item of media is assigned a < code > media_id< / code > when it is uploaded.
The < code > media_id< / code > is a randomly chosen, URL safe 24 character string.< / p >
< p > Metadata such as the MIME type, upload time and length are stored in the
sqlite3 database indexed by < code > media_id< / code > .< / p >
< p > Content is stored on the filesystem under a < code > " local_content" < / code > directory.< / p >
< p > Thumbnails are stored under a < code > " local_thumbnails" < / code > directory.< / p >
< p > The item with < code > media_id< / code > < code > " aabbccccccccdddddddddddd" < / code > is stored under
< code > " local_content/aa/bb/ccccccccdddddddddddd" < / code > . Its thumbnail with width
< code > 128< / code > and height < code > 96< / code > and type < code > " image/jpeg" < / code > is stored under
< code > " local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg" < / code > < / p >
< p > Remote content is cached under < code > " remote_content" < / code > directory. Each item of
remote content is assigned a local < code > " filesystem_id" < / code > to ensure that the
directory structure < code > " remote_content/server_name/aa/bb/ccccccccdddddddddddd" < / code >
is appropriate. Thumbnails for remote content are stored under
< code > " remote_thumbnails/server_name/..." < / code > < / p >
< div id = "chapter_begin" style = "break-before: page; page-break-before: always;" > < / div > < h1 id = "room-and-user-statistics" > < a class = "header" href = "#room-and-user-statistics" > Room and User Statistics< / a > < / h1 >
< p > Synapse maintains room and user statistics (as well as a cache of room state),
in various tables. These can be used for administrative purposes but are also
used when generating the public room directory.< / p >
< h1 id = "synapse-developer-documentation" > < a class = "header" href = "#synapse-developer-documentation" > Synapse Developer Documentation< / a > < / h1 >
< h2 id = "high-level-concepts" > < a class = "header" href = "#high-level-concepts" > High-Level Concepts< / a > < / h2 >
< h3 id = "definitions-1" > < a class = "header" href = "#definitions-1" > Definitions< / a > < / h3 >
< ul >
< li > < strong > subject< / strong > : Something we are tracking stats about – currently a room or user.< / li >
< li > < strong > current row< / strong > : An entry for a subject in the appropriate current statistics
table. Each subject can have only one.< / li >
< li > < strong > historical row< / strong > : An entry for a subject in the appropriate historical
statistics table. Each subject can have any number of these.< / li >
< / ul >
< h3 id = "overview-4" > < a class = "header" href = "#overview-4" > Overview< / a > < / h3 >
< p > Stats are maintained as time series. There are two kinds of column:< / p >
< ul >
< li > absolute columns – where the value is correct for the time given by < code > end_ts< / code >
in the stats row. (Imagine a line graph for these values)
< ul >
< li > They can also be thought of as 'gauges' in Prometheus, if you are familiar.< / li >
< / ul >
< / li >
< li > per-slice columns – where the value corresponds to how many of the occurrences
occurred within the time slice given by < code > (end_ts − bucket_size)…end_ts< / code >
or < code > start_ts…end_ts< / code > . (Imagine a histogram for these values)< / li >
< / ul >
< p > Stats are maintained in two tables (for each type): current and historical.< / p >
< p > Current stats correspond to the present values. Each subject can only have one
entry.< / p >
< p > Historical stats correspond to values in the past. Subjects may have multiple
entries.< / p >
< h2 id = "concepts-around-the-management-of-stats" > < a class = "header" href = "#concepts-around-the-management-of-stats" > Concepts around the management of stats< / a > < / h2 >
< h3 id = "current-rows" > < a class = "header" href = "#current-rows" > Current rows< / a > < / h3 >
< p > Current rows contain the most up-to-date statistics for a room.
They only contain absolute columns.< / p >
< h3 id = "historical-rows" > < a class = "header" href = "#historical-rows" > Historical rows< / a > < / h3 >
< p > Historical rows can always be considered to be valid for the time slice and
end time specified.< / p >
<ul>
<li>historical rows will not exist for every time slice – they will be omitted
if there were no changes. In this case, the following assumptions can be
made to interpolate/recreate missing rows:
<ul>
<li>absolute fields have the same values as in the preceding row</li>
<li>per-slice fields are zero (<code>0</code>)</li>
</ul>
</li>
<li>historical rows will not be retained forever – rows older than a configurable
time will be purged.</li>
</ul>
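<p>Those interpolation assumptions can be sketched as a small helper (the field names are hypothetical; this is not code from Synapse):</p>

```python
def interpolate_missing_row(previous_row, absolute_fields, per_slice_fields):
    """Recreate a historical row that was omitted because nothing changed:
    absolute fields carry forward the preceding row's values, and
    per-slice fields are zero. Illustrative sketch only."""
    row = {field: previous_row[field] for field in absolute_fields}
    row.update({field: 0 for field in per_slice_fields})
    return row

previous = {"current_members": 5, "new_messages": 7}
missing = interpolate_missing_row(
    previous, ["current_members"], ["new_messages"]
)
print(missing)  # {'current_members': 5, 'new_messages': 0}
```
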
<h4 id="purge"><a class="header" href="#purge">Purge</a></h4>
<p>The purging of historical rows is not yet implemented.</p>
<div id="chapter_begin" style="break-before: page; page-break-before: always;"></div><h1 id="deprecation-policy-for-platform-dependencies"><a class="header" href="#deprecation-policy-for-platform-dependencies">Deprecation Policy for Platform Dependencies</a></h1>
<p>Synapse has a number of platform dependencies, including Python and PostgreSQL.
This document outlines our policy on which versions we support, and when support
for a version will be dropped in the future.</p>
<h2 id="policy"><a class="header" href="#policy">Policy</a></h2>
<p>Synapse follows the upstream support life cycles for Python and PostgreSQL,
i.e. when a version reaches End of Life, Synapse will withdraw support for that
version in future releases.</p>
<p>Details on the upstream support life cycles for Python and PostgreSQL are
documented at https://endoflife.date/python and
https://endoflife.date/postgresql.</p>
< h2 id = "context" > < a class = "header" href = "#context" > Context< / a > < / h2 >
< p > It is important for system admins to have a clear understanding of the platform
requirements of Synapse and its deprecation policies so that they can
effectively plan upgrading their infrastructure ahead of time. This is
especially important in contexts where upgrading the infrastructure requires
auditing and approval from a security team, or where otherwise upgrading is a
long process.< / p >
< p > By following the upstream support life cycles Synapse can ensure that its
dependencies continue to get security patches, while not requiring system admins
to constantly update their platform dependencies to the latest versions.< / p >
</main>
<nav class="nav-wrapper" aria-label="Page navigation">
<!-- Mobile navigation buttons -->
<div style="clear: both"></div>
</nav>
</div>
</div>
<nav class="nav-wide-wrapper" aria-label="Page navigation">
</nav>
</div>
<script type="text/javascript">
window.playground_copyable = true;
</script>
<script src="elasticlunr.min.js" type="text/javascript" charset="utf-8"></script>
<script src="mark.min.js" type="text/javascript" charset="utf-8"></script>
<script src="searcher.js" type="text/javascript" charset="utf-8"></script>
<script src="clipboard.min.js" type="text/javascript" charset="utf-8"></script>
<script src="highlight.js" type="text/javascript" charset="utf-8"></script>
<script src="book.js" type="text/javascript" charset="utf-8"></script>
<!-- Custom JS scripts -->
<script type="text/javascript" src="docs/website_files/table-of-contents.js"></script>
<script type="text/javascript">
window.addEventListener('load', function() {
    window.setTimeout(window.print, 100);
});
</script>
</body>
</html>