Mirror of https://github.com/element-hq/synapse.git (synced 2024-11-22 17:46:08 +03:00)

commit 31a049eb69

    Merge branch 'develop' into room-initial-sync

    Conflicts:
        synapse/handlers/message.py

197 changed files with 4504 additions and 67061 deletions
.gitignore (vendored): 12 changes

@@ -1,6 +1,7 @@
 *.pyc
 .*.swp
 
+.DS_Store
 _trial_temp/
 logs/
 dbs/
@@ -11,6 +12,14 @@ docs/build/
 
 cmdclient_config.json
 homeserver*.db
+homeserver*.log
+homeserver*.pid
+homeserver*.yaml
+
+*.signing.key
+*.tls.crt
+*.tls.dh
+*.tls.key
 
 .coverage
 htmlcov
@@ -25,6 +34,7 @@ graph/*.png
 graph/*.dot
 
 **/webclient/config.js
-webclient/test/environment-protractor.js
+**/webclient/test/coverage/
+**/webclient/test/environment-protractor.js
 
 uploads
README.rst: 25 changes

@@ -53,7 +53,7 @@ To get up and running:
 config file: ``./synctl start`` will give you instructions on how to do this.
 For this purpose, you can use 'localhost' or your hostname as a server name.
 Once you've done so, running ``./synctl start`` again will start your private
-home sserver. You will find a webclient running at http://localhost:8008.
+home server. You will find a webclient running at http://localhost:8008.
 Please use a recent Chrome or Firefox for now (or Safari if you don't need
 VoIP support).
 
@@ -131,17 +131,20 @@ header files for python C extensions.
 
 Installing prerequisites on Ubuntu::
 
-    $ sudo apt-get install build-essential python2.7-dev libffi-dev
+    $ sudo apt-get install build-essential python2.7-dev libffi-dev \
+                           python-pip python-setuptools
 
 Installing prerequisites on Mac OS X::
 
     $ xcode-select --install
 
-Synapse uses NaCl (http://nacl.cr.yp.to/) for encryption and digital
-signatures. Unfortunately PyNACL currently has a few issues
+Synapse uses NaCl (http://nacl.cr.yp.to/) for encryption and digital signatures.
+Unfortunately PyNACL currently has a few issues
 (https://github.com/pyca/pynacl/issues/53) and
 (https://github.com/pyca/pynacl/issues/79) that mean it may not install
-correctly. To fix try re-installing from PyPI or directly from (https://github.com/pyca/pynacl)::
+correctly, causing all tests to fail with errors about missing "sodium.h". To
+fix try re-installing from PyPI or directly from
+(https://github.com/pyca/pynacl)::
 
     $ # Install from PyPI
     $ pip install --user --upgrade --force pynacl
@@ -158,9 +161,21 @@ To install the synapse homeserver run::
 This installs synapse, along with the libraries it uses, into
 ``$HOME/.local/lib/``.
 
+To actually run your new homeserver, pick a working directory for Synapse to run (e.g. ``~/.synapse``), and::
+
+    $ mkdir ~/.synapse
+    $ cd ~/.synapse
+    $ synctl start
+
 Homeserver Development
 ======================
 
+To check out a homeserver for development, clone the git repo into a working
+directory of your choice::
+
+    $ git clone https://github.com/matrix-org/synapse.git
+    $ cd synapse
+
 The homeserver has a number of external dependencies, that are easiest
 to install by making setup.py do so, in --user mode::
@@ -32,7 +32,7 @@ for port in 8080 8081 8082; do
         -D --pid-file "$DIR/$port.pid" \
         --manhole $((port + 1000)) \
         --tls-dh-params-path "demo/demo.tls.dh" \
-        $PARAMS
+        $PARAMS $SYNAPSE_PARAMS
 
     python -m synapse.app.homeserver \
         --config-path "demo/etc/$port.config" \
@@ -1,3 +1,9 @@
+.. WARNING::
+  These architecture notes are spectacularly old, and date back to when Synapse
+  was just federation code in isolation. This should be merged into the main
+  spec.
+
+
 = Server to Server =
 
 == Server to Server Stack ==
docs/architecture.rst (new file): 68 lines

@@ -0,0 +1,68 @@
Synapse Architecture
====================

As of the end of Oct 2014, Synapse's overall architecture looks like::

        synapse
        .-----------------------------------------------------.
        |                          Notifier                   |
        |                            ^  |                     |
        |                            |  |                     |
        |                  .------------|------.              |
        |                  | handlers/  |      |              |
        |                  |            v      |              |
        |                  | Event*Handler <--------> rest/* <=> Client
        |                  | Rooms*Handler     |              |
  HSes <=> federation/* <==> FederationHandler |              |
        |      |           | PresenceHandler   |              |
        |      |           | TypingHandler     |              |
        |      |           '-------------------'              |
        |      |                 |       |                    |
        |      |              state/*    |                    |
        |      |                 |       |                    |
        |      |                 v       v                    |
        |      `--------------> storage/*                     |
        |                          |                          |
        '--------------------------|--------------------------'
                                   v
                                .----.
                                | DB |
                                '----'

* Handlers: business logic of synapse itself. Follows a set contract of
  BaseHandler (a sketch of this contract follows the list):

  - BaseHandler gives us onNewRoomEvent which: (TODO: flesh this out and make
    it less cryptic):

    + handle_state(event)
    + auth(event)
    + persist_event(event)
    + notify notifier or federation(event)

  - PresenceHandler: use distributor to get EDUs out of Federation. Very
    lightweight logic built on the distributor
  - TypingHandler: use distributor to get EDUs out of Federation. Very
    lightweight logic built on the distributor
  - EventsHandler: handles the events stream...
  - FederationHandler: gets PDU from Federation Layer; turns it into an event;
    follows basehandler functionality.
  - RoomsHandler: does all the room logic, including members - lots of classes
    in RoomsHandler.
  - ProfileHandler: talks to the storage to store/retrieve profile info.

* EventFactory: generates events of particular event types.
* Notifier: Backs the events handler
* REST: Interfaces handlers and events to the outside world via HTTP/JSON.
  Converts events back and forth from JSON.
* Federation: holds the HTTP client & server to talk to other servers. Does
  replication to make sure there's nothing missing in the graph. Handles
  reliability. Handles txns.
* Distributor: generic event bus. Used for presence & typing only currently.
  Notifier could be implemented using Distributor - so far we are only using
  it for things which actually /require/ dynamic pluggability, however, as it
  can obfuscate the actual flow of control.
* Auth: helper singleton to say whether a given event is allowed to do a given
  thing (TODO: put this on the diagram)
* State: helper singleton: does state conflict resolution. You give it an
  event and it tells you if it actually updates the state or not, and
  annotates the event up properly and handles merge conflict resolution.
* Storage: abstracts the storage engine.
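The BaseHandler contract above is terse, so here is a minimal sketch of the
onNewRoomEvent pipeline it describes: state handling, auth, persistence, then
notification. This is an illustration rather than Synapse's actual code; all
class and method names below are assumed for the example::

    # Illustrative sketch only -- not Synapse's real classes or signatures.
    class BaseHandler(object):
        def __init__(self, state, auth, storage, notifier, federation):
            self.state = state            # state/*: conflict resolution
            self.auth = auth              # permission checks
            self.storage = storage        # storage/*: persistence layer
            self.notifier = notifier      # wakes up local event streams
            self.federation = federation  # federation/*: talks to other HSes

        def on_new_room_event(self, event):
            # 1. resolve state: does this event update room state, and how?
            self.state.handle_state(event)
            # 2. auth: is the sender allowed to do this?
            self.auth.check(event)
            # 3. persist the event to the database
            self.storage.persist_event(event)
            # 4. notify local clients via the Notifier, and remote HSes via
            #    federation (assumed here for locally-originated events)
            self.notifier.on_new_room_event(event)
            self.federation.handle_new_event(event)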
File diff suppressed because it is too large
@@ -1,149 +0,0 @@
API Efficiency
==============

A simple implementation of presence messaging has the ability to cause a large
amount of Internet traffic relating to presence updates. In order to minimise
the impact of such a feature, the following observations can be made:

* There is no point in a Home Server polling status for peers in a user's
  presence list if the user has no clients connected that care about it.

* It is highly likely that most presence subscriptions will be symmetric - a
  given user watching another is likely to in turn be watched by that user.

* It is likely that most subscription pairings will be between users who share
  at least one Room in common, and so their Home Servers are actively
  exchanging message PDUs or transactions relating to that Room.

* Presence update messages do not need realtime guarantees. It is acceptable
  to delay delivery of updates for some small amount of time (10 seconds to a
  minute).

The general model of presence information is that of a HS registering its
interest in receiving presence status updates from other HSes, which then
promise to send them when required. Rather than actively polling for the
current state all the time, HSes can rely on their relative stability to only
push updates when required.

A Home Server should not rely on the longterm validity of this presence
information, however, as this would not cover such cases as a user's server
crashing and thus failing to inform their peers that users it used to host are
no longer available online. Therefore, each promise of future updates should
carry with it a timeout value (whether explicit in the message, or implicit as
some defined default in the protocol), after which the receiving HS should
consider the information potentially stale and request it again.

However, because of the likelihood that two home servers are exchanging
messages relating to chat traffic in a room common to both of them, the
ongoing receipt of these messages can be taken by each server as an implicit
notification that the sending server is still up and running, and therefore
that no status changes have happened; because if they had, the server would
have sent them. A second, larger timeout should be applied to this implicit
inference, however, to protect against implementation bugs or other reasons
that the presence state cache may become invalid; eventually the HS should
re-enquire the current state of users and update them with its own.

The following workflows can therefore be used to handle presence updates:

1. When a user first appears online their HS sends a message to each other HS
   containing at least one user to be watched; each message carrying both a
   notification of the sender's new online status, and a request to obtain and
   watch the target users' presence information. This message implicitly
   promises the sending HS will now push updates to the target HSes.

2. The target HSes then respond with a single message each, containing the
   current status of the requested user(s). These messages too implicitly
   promise the target HSes will themselves push updates to the sending HS.

   As these messages arrive at the sending user's HS they can be pushed to the
   user's client(s), possibly batched again to ensure not too many small
   messages which add extra protocol overheads.

   At this point, all the user's clients now have the current presence status
   information for this moment in time, and the servers have promised to send
   each other updates in future.

3. The HS maintains two watchdog timers per peer HS it is exchanging presence
   information with. The first timer should have a relatively small expiry
   (perhaps 1 minute), and the second timer should have a much longer time
   (perhaps 1 hour).

4. Any time any kind of message is received from a peer HS, the short-term
   presence timer associated with it is reset.

5. Whenever either of these timers expires, an HS should push a status
   reminder to the target HS whose timer has now expired, and request again
   from that server the status of the subscribed users.

6. On receipt of one of these presence status reminders, an HS can reset both
   of its presence watchdog timers.

To avoid bursts of traffic, implementations should attempt to stagger the
expiry of the longer-term watchdog timers for different peer HSes.
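As an illustration of steps 3-6, here is a hedged sketch of the per-peer
watchdog pair using plain threading.Timer; Synapse itself is Twisted-based,
and every name below is invented for the example::

    import random
    import threading

    SHORT_EXPIRY = 60       # ~1 minute; reset by any inbound traffic (step 4)
    LONG_EXPIRY = 60 * 60   # ~1 hour; hard refresh of cached presence state

    class PeerPresenceWatchdog(object):
        """One instance per peer HS we exchange presence with (step 3)."""

        def __init__(self, peer, send_status_reminder):
            self.peer = peer
            self.send_status_reminder = send_status_reminder
            self.short = None
            self.long = None
            self.reset_both()

        def _start(self, delay):
            timer = threading.Timer(delay, self._expired)
            timer.daemon = True
            timer.start()
            return timer

        def _expired(self):
            # Step 5: push a status reminder and re-request subscribed users.
            self.send_status_reminder(self.peer)

        def on_message_received(self):
            # Step 4: any traffic from the peer resets the short-term timer.
            if self.short is not None:
                self.short.cancel()
            self.short = self._start(SHORT_EXPIRY)

        def reset_both(self):
            # Step 6: a received status reminder resets both timers.  The
            # long timer is jittered to stagger expiry across peers.
            for timer in (self.short, self.long):
                if timer is not None:
                    timer.cancel()
            self.short = self._start(SHORT_EXPIRY)
            self.long = self._start(LONG_EXPIRY * random.uniform(0.9, 1.1))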
When individual users actively change their status (either by explicit
requests from clients, or inferred changes due to idle timers or client
timeouts), the HS should batch up any status changes for some reasonable
amount of time (10 seconds to a minute). This allows for reduced protocol
overheads in the case of multiple messages needing to be sent to the same peer
HS; as is the likely scenario in many cases, such as a given human user having
multiple user accounts.


API Requirements
================

The data model presented here puts the following requirements on the APIs:

Client-Server
-------------

Requests that a client can make to its Home Server:

* get/set current presence state
  Basic enumeration + ability to set a custom piece of text

* report per-device idle time
  After some (configurable?) idle time the device should send a single message
  to set the idle duration. The HS can then infer a "start of idle" instant
  and use that to keep the device idleness up to date. At some later point the
  device can cancel this idleness.

* report per-device type
  Inform the server that this device is a "mobile" device, or perhaps some
  other to-be-defined category of reduced capability that could be presented
  to other users.

* start/stop presence polling for my presence list
  It is likely that these messages could be implicitly inferred by other
  messages, though having explicit control is always useful.

* get my presence list
  [implicit poll start?]
  It is possible that the HS doesn't yet have current presence information
  when the client requests this. There should be a "don't know" type too.

* add/remove a user to my presence list

Server-Server
-------------

Requests that Home Servers make to others:

* request permission to add a user to presence list

* allow/deny a request to add to a presence list

* perform a combined presence state push and subscription request (illustrated
  below)
  For each sending user ID, the message contains their new status.
  For each receiving user ID, the message should contain an indication of
  whether the sending server is also interested in receiving status from that
  user; either as an immediate update response now, or as a promise to send
  future updates.
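Purely as an illustration of that combined push-and-subscribe request - this
document defines no wire format, so every field name below is invented - such
a message might carry::

    # Hypothetical shape only; no wire format is defined in this document.
    combined_push_and_subscribe = {
        # for each sending user ID, their new status
        "push": [
            {"user_id": "@alice:sending.hs", "state": "online"},
        ],
        # for each receiving user ID, whether the sending server also wants
        # status back: an immediate response now, or future pushed updates
        "poll": [
            {"user_id": "@bob:target.hs", "respond_now": True},
        ],
    }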
Server to Client
----------------

[[TODO(paul): There also needs to be some way for a user's HS to push status
updates of the presence list to clients, but the general server-client event
model currently lacks a space to do that.]]
@@ -1,232 +0,0 @@
========
Profiles
========

A description of Synapse user profile metadata support.


Overview
========

Internally within Synapse users are referred to by an opaque ID, which
consists of some opaque localpart combined with the domain name of their home
server. Obviously this does not yield a very nice user experience; users would
like to see readable names for other users that are in some way meaningful to
them. Additionally, users like to be able to publish "profile" details to
inform other users of other information about them.

It is also conceivable that since we are attempting to provide a
worldwide-applicable messaging system, users may wish to present different
subsets of information in their profile to different other people, from a
privacy and permissions perspective.

A Profile consists of a display name, an (optional?) avatar picture, and a set
of other metadata fields that the user may wish to publish (email address,
phone numbers, website URLs, etc...). We put no requirements on the display
name other than it being a valid Unicode string. Since it is likely that users
will end up having multiple accounts (perhaps by necessity of being hosted in
multiple places, perhaps by choice of wanting multiple distinct identities),
it would be useful that a metadata field type exists that can refer to another
Synapse User ID, so that clients and HSes can make use of this information.

Metadata Fields
---------------

[[TODO(paul): Likely this list is incomplete; more fields can be defined as we
think of them. At the very least, any sort of supported ID for the 3rd Party
ID servers should be accounted for here.]]

* Synapse Directory Server username(s)

* Email address

* Phone number - classify "home"/"work"/"mobile"/custom?

* Twitter/Facebook/Google+/... social networks

* Location - keep this deliberately vague to allow people to choose how
  granular it is

* "Bio" information - date of birth, etc...

* Synapse User ID of another account

* Web URL

* Freeform description text


Visibility Permissions
======================

A home server implementation could offer the ability to set permissions on
limited visibility of those fields. When another user requests access to the
target user's profile, their own identity should form part of that request.
The HS implementation can then decide which fields to make available to the
requestor.

A particular detail of implementation could allow the user to create one or
more ACLs; where each list is granted permission to see a given set of
non-public fields (compare to Google+ Circles) and contains a set of other
people allowed to use it. By giving these ACLs strong identities within the
HS, they can be referenced in communications with it, granting other users who
encounter these the "ACL Token" to use the details in that ACL.

If we further allow an ACL Token to be present on Room join requests or stored
by 3PID servers, then users of these ACLs gain the extra convenience of not
having to manually curate people in the access list; anyone in the room or
with knowledge of the 3rd Party ID is automatically granted access. Every HS
and client implementation would have to be aware of the existence of these ACL
Tokens, and include them in requests if present, but not every HS
implementation needs to actually provide the full permissions model. This can
be used as a distinguishing feature among competing implementations. However,
servers MUST NOT serve profile information from a cache if there is a chance
that its limited understanding could lead to information leakage.


Client Concerns of Multiple Accounts
====================================

Because a given person may want to have multiple Synapse User accounts, client
implementations should allow the use of multiple accounts simultaneously
(especially in the field of mobile phone clients, which generally don't
support running distinct instances of the same application). Where features
like address books, presence lists or rooms are presented, the client UI
should remember to make distinct which user account is in use for each.


Directory Servers
=================

Directory Servers can provide a forward mapping from human-readable names to
User IDs. These can provide a service similar to giving domain-namespaced
names for Rooms; in this case they can provide a way for a user to reference
their User ID in some external form (e.g. that can be printed on a business
card).

The format for a Synapse user name will consist of a localpart specific to the
directory server, and the domain name of that directory server:

  @localname:some.domain.name

The localname is separated from the domain name using a colon, so as to ensure
the localname can still contain periods, as users may want this for similarity
to email addresses or the like, which typically can contain them. The format
is also visually quite distinct from email addresses, phone numbers, etc... so
hopefully reasonably "self-describing" when written on e.g. a business card
without surrounding context.
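A tiny illustrative parser for this form (not part of Synapse; the function
name is invented) shows the key point - split on the first colon only, so the
localname may itself contain periods::

    def parse_user_name(name):
        """Split '@localname:some.domain.name' into (localname, domain)."""
        if not name.startswith("@"):
            raise ValueError("user names start with '@'")
        localname, sep, domain = name[1:].partition(":")
        if not (sep and localname and domain):
            raise ValueError("expected @localname:domain.name")
        return localname, domain

    # parse_user_name("@john.doe:some.domain.name")
    # -> ("john.doe", "some.domain.name")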
[[TODO(paul): we might have to think about this one - too close to email?
Twitter? Also it suggests a format scheme for room names of
#localname:domain.name, which I quite like]]

Directory server administrators should be able to make some kind of policy
decision on how these are allocated. Servers within some "closed" domain (such
as company-specific ones) may wish to verify the validity of a mapping using
their own internal mechanisms; "public" naming servers can operate on a FCFS
basis. There are overlapping concerns here with the idea of the 3rd party
identity servers as well, though in this specific case we are creating a new
namespace to allocate names into.

It would also be nice from a user experience perspective if the profile that a
given name links to can also declare that name as part of its metadata.
Furthermore, from a security and consistency perspective it would be nice if
each end (the directory server and the user's home server) checked the
validity of the mapping in some way. This needs investigation from a security
perspective to ensure against spoofing.

One such model may be that the user starts by declaring their intent to use a
given user name link to their home server, which then contacts the directory
service. At some point later (maybe immediately for "public open FCFS
servers", maybe after some kind of human intervention for verification) the DS
decides to honour this link, and includes it in its served output. It should
also tell the HS of this fact, so that the HS can present this as fact when
requested for the profile information. For efficiency, it may further wish to
provide the HS with a cryptographically-signed certificate as proof, so the HS
serving the profile can provide that too when asked, saving requesting HSes
from constantly having to contact the DS to verify this mapping. (Note: This
is similar to the security model often applied in DNS to verify PTR <-> A
bidirectional mappings).


Identity Servers
================

The identity servers should support the concept of a 3PID being able to store
an ACL Token as well as the main User ID. It is, however, beyond scope to do
any kind of verification that any third-party IDs that the profile is claiming
match up to the 3PID mappings.


User Interface and Expectations Concerns
========================================

Given the weak "security" of some parts of this model as compared to what
users might expect, some care should be taken on how it is presented to users,
specifically in the naming or other wording of user interface components.

Most notably, mere knowledge of an ACL Pointer is enough to read the
information stored in it. It is possible that Home or Identity Servers could
leak this information, allowing others to see it. This is a
security-vs-convenience balancing choice on behalf of the user who would
choose, or not, to make use of such a feature to publish their information.

Additionally, unless some form of strong end-to-end user-based encryption is
used, a user of ACLs for information privacy has to trust other home servers
not to lie about the identity of the user requesting access to the Profile.


API Requirements
================

The data model presented here puts the following requirements on the APIs:

Client-Server
-------------

Requests that a client can make to its Home Server:

* get/set my Display Name
  This should return/take a simple "text/plain" field

* get/set my Avatar URL
  The avatar image data itself is not stored by this API; we'll just store a
  URL to let the clients fetch it. Optionally HSes could integrate this with
  their generic content attachment storage service, allowing a user to upload
  their profile Avatar and update the URL to point to it.

* get/add/remove my metadata fields
  Also we need to actually define types of metadata

* get another user's Display Name / Avatar / metadata fields

[[TODO(paul): At some later stage we should consider the API for:

* get/set ACL permissions on my metadata fields

* manage my ACL tokens
]]

Server-Server
-------------

Requests that Home Servers make to others:

* get a user's Display Name / Avatar

* get a user's full profile - name/avatar + MD fields
  This request must allow for specifying the User ID of the requesting user,
  for permissions purposes. It also needs to take into account any ACL Tokens
  the requestor has.

* push a change of Display Name to observers (overlaps with the presence API)

Room Event PDU Types
--------------------

Events that are pushed from Home Servers to other Home Servers or clients.

* user Display Name change

* user Avatar change
  [[TODO(paul): should the avatar image itself be stored in all the room
  histories? maybe this event should just be a hint to clients that they
  should re-fetch the avatar image]]
@@ -1,64 +0,0 @@
PUT /send/abc/ HTTP/1.1
Host: ...
Content-Length: ...
Content-Type: application/json

{
    "origin": "localhost:5000",
    "pdus": [
        {
            "content": {},
            "context": "tng",
            "depth": 12,
            "is_state": false,
            "origin": "localhost:5000",
            "pdu_id": 1404381396854,
            "pdu_type": "feedback",
            "prev_pdus": [
                [
                    "1404381395883",
                    "localhost:6000"
                ]
            ],
            "ts": 1404381427581
        }
    ],
    "prev_ids": [
        "1404381396852"
    ],
    "ts": 1404381427823
}

HTTP/1.1 200 OK
...

======================================

GET /pull/-1/ HTTP/1.1
Host: ...
Content-Length: 0

HTTP/1.1 200 OK
Content-Length: ...
Content-Type: application/json

{
    origin: ...,
    prev_ids: ...,
    data: [
        {
            data_id: ...,
            prev_pdus: [...],
            depth: ...,
            ts: ...,
            context: ...,
            origin: ...,
            content: {
                ...
            }
        },
        ...,
    ]
}
@@ -1,113 +0,0 @@
==================
Room Join Workflow
==================

An outline of the workflows required when a user joins a room.

Discovery
=========

To join a room, a user has to discover the room by some mechanism in order to
obtain the (opaque) Room ID and a candidate list of likely home servers that
contain it.

Sending an Invitation
---------------------

The most direct way a user discovers the existence of a room is from an
invitation from some other user who is a member of that room.

The inviter's HS sets the membership status of the invitee to "invited" in the
"m.members" state key by sending a state update PDU. The HS then broadcasts
this PDU among the existing members in the usual way. An invitation message is
also sent to the invited user, containing the Room ID and the PDU ID of this
invitation state change and potentially a list of some other home servers to
use to accept the invite. The user's client can then choose to display it in
some way to alert the user.

[[TODO(paul): At present, no API has been designed or described to actually
send that invite to the invited user. Likely it will be some facet of the
larger user-user API required for presence, profile management, etc...]]

Directory Service
-----------------

Alternatively, the user may discover the channel via a directory service;
either by performing a name lookup, or some kind of browse or search activity.
However this is performed, the end result is that the user's home server
requests the Room ID and candidate list from the directory service.

[[TODO(paul): At present, no API has been designed or described for this
directory service]]


Joining
=======

Once the ID and home servers are obtained, the user can then actually join the
room.

Accepting an Invite
-------------------

If a user has received and accepted an invitation to join a room, the
invitee's home server can now send an invite acceptance message to a chosen
candidate server from the list given in the invitation, citing also the PDU ID
of the invitation as "proof" of their invite. (This is required because, due
to late message propagation, it could be the case that the acceptance is
received before the invite by some servers). If this message is allowed by the
candidate server, it generates a new PDU that updates the invitee's membership
status to "joined", referring back to the acceptance PDU, and broadcasts that
as a state change in the usual way. The newly-invited user is now a full
member of the room, and state propagation proceeds as usual.

Joining a Public Room
---------------------

If a user has discovered the existence of a room they wish to join but does
not have an active invitation, they can request to join it directly by sending
a join message to a candidate server on the list provided by the directory
service. As this list may be out of date, the HS should be prepared to retry
other candidates if the chosen one is no longer aware of the room, because it
has no users as members in it (see the sketch below).

Once a candidate server that is aware of the room has been found, it can
broadcast an update PDU to add the member into the "m.members" key, setting
their state directly to "joined" (i.e. bypassing the two-phase invite
semantics), remembering to include the new user's HS in that list.
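A hedged sketch of that retry loop (names invented; send_join stands in for
whatever transport actually delivers the join message)::

    class RoomUnknownError(Exception):
        """The candidate server is no longer aware of the room."""

    def join_via_candidates(room_id, candidate_servers, send_join):
        """Try each candidate in turn; the directory's list may be stale."""
        for server in candidate_servers:
            try:
                # Assumed to deliver the join message to `server` and return
                # once the membership update PDU has been broadcast.
                return send_join(server, room_id)
            except RoomUnknownError:
                continue  # stale candidate; try the next one
        raise RoomUnknownError("no candidate server still knows %s" % room_id)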
Knocking on a Semi-Public Room
------------------------------

If a user requests to join a room but the join mode of the room is "knock",
the join is not immediately allowed. Instead, if the user wishes to proceed,
they can instead post a "knock" message, which informs other members of the
room that the would-be joiner wishes to become a member and sets their
membership value to "knocked". If any of them wish to accept this, they can
then send an invitation in the usual way described above. Knowing that the
user has already knocked and expressed an interest in joining, the invited
user's home server should immediately accept that invitation on the user's
behalf, and go on to join the room in the usual way.

[[NOTE(Erik): Though this may confuse users who expect 'X has joined' to
actually be a user initiated action, i.e. they may expect that 'X' is actually
looking at synapse right now?]]

[[NOTE(paul): Yes, a fair point; maybe we should suggest HSes don't do that,
and just offer an invite to the user as normal]]

Private and Non-Existent Rooms
------------------------------

If a user requests to join a room but the room is either unknown by the home
server receiving the request, or is known but the join mode is "invite" and
the user has not been invited, the server must respond that the room does not
exist. This is to prevent leaking information about the existence and identity
of private rooms.


Outstanding Questions
=====================

* Do invitations or knocks time out and expire at some point? If so when?
  Time is hard in distributed systems.
@@ -1,274 +0,0 @@
===========
Rooms Model
===========

A description of the general data model used to implement Rooms, and the
user-level visible effects and implications.


Overview
========

"Rooms" in Synapse are shared messaging channels over which all the
participant users can exchange messages. Rooms have an opaque persistent
identity, a globally-replicated set of state (consisting principally of a
membership set of users, and other management and miscellaneous metadata), and
a message history.


Room Identity and Naming
========================

Rooms can be arbitrarily created by any user on any home server; at which
point the home server will sign the message that creates the channel, and the
fingerprint of this signature becomes the strong persistent identity of the
room. This now identifies the room to any home server in the network
regardless of its original origin. This allows the identity of the room to
outlive any particular server. Subject to appropriate permissions [to be
discussed later], any current member of a room can invite others to join it,
can post messages that become part of its history, and can change the
persistent state of the room (including its current set of permissions).

Home servers can provide a directory service, allowing a lookup from a
convenient human-readable form of room label to a room ID. This mapping is
scoped to the particular home server domain and so simply represents that
server administrator's opinion of what room should take that label; it does
not have to be globally replicated and does not form part of the stored state
of that room.

This room name takes the form

  #localname:some.domain.name

for similarity and consistency with user names on directories.

To join a room (and therefore to be allowed to inspect past history, post new
messages to it, and read its state), a user must become aware of the room's
fingerprint ID. There are two mechanisms to allow this:

* An invite message from someone else in the room

* A referral from a room directory service

As room IDs are opaque and ephemeral, they can serve as a mechanism to create
"ad-hoc" rooms deliberately unnamed, for small group-chats or even private
one-to-one message exchange.


Stored State and Permissions
============================

Every room has a globally-replicated set of stored state. This state is a set
of key/value or key/subkey/value pairs. The value of every (sub)key is a
JSON-representable object. The main key of a piece of stored state establishes
its meaning; some keys store sub-keys to allow a sub-structure within them
[more detail below]. Some keys have special meaning to Synapse, as they relate
to management details of the room itself, storing such details as user
membership, and permissions of users to alter the state of the room itself.
Other keys may store information to present to users, which the system does
not directly rely on. The key space itself is namespaced, allowing 3rd party
extensions, subject to suitable permission.

Permission management is based on the concept of "power-levels". Every user
within a room has an integer assigned, being their "power-level" within that
room. Along with its actual data value, each key (or subkey) also stores the
minimum power-level a user must have in order to write to that key, the
power-level of the last user who actually did write to it, and the PDU ID of
that state change.

To be accepted as valid, a change must NOT (a sketch of these checks follows
the list):

* Be made by a user having a power-level lower than required to write to the
  state key

* Alter the required power-level for that state key to a value higher than the
  user has

* Increase that user's own power-level

* Grant any other user a power-level higher than the level of the user making
  the change
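A minimal sketch of those four checks as code - illustrative names only; this
is not Synapse's actual auth logic, and the shape of ``change`` is assumed for
the example::

    def change_is_valid(sender_level, required_level, change):
        """Apply the four rules above to a proposed state change.

        `change` is assumed to carry: sender, new_required_level (or None),
        and granted_levels (dict mapping user ID -> proposed power-level).
        """
        # Rule 1: sender must meet the key's current write requirement.
        if sender_level < required_level:
            return False
        # Rule 2: may not raise the key's required level above the sender's.
        if (change.new_required_level is not None
                and change.new_required_level > sender_level):
            return False
        for user, new_level in change.granted_levels.items():
            # Rule 3: may not increase the sender's own power-level.
            if user == change.sender and new_level > sender_level:
                return False
            # Rule 4: may not grant another user a level above the sender's.
            if user != change.sender and new_level > sender_level:
                return False
        return True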
[[TODO(paul): consider if relaxations should be allowed; e.g. is the current
outright-winner allowed to raise their own level, to allow for "inflation"?]]


Room State Keys
===============

[[TODO(paul): if this list gets too big it might become necessary to move it
into its own doc]]

The following keys have special semantics or meaning to Synapse itself:

m.member (has subkeys)
  Stores a sub-key for every Synapse User ID which is currently a member of
  this room. Its value gives the membership type ("knocked", "invited",
  "joined").

m.power_levels
  Stores a mapping from Synapse User IDs to their power-level in the room. If
  they are not present in this mapping, the default applies.

  The reason to store this as a single value rather than a value with subkeys
  is that updates to it are atomic; allowing a number of colliding-edit
  problems to be avoided.

m.default_level
  Gives the default power-level for members of the room that do not have one
  specified in their membership key.

m.invite_level
  If set, gives the minimum power-level required for members to invite others
  to join, or to accept knock requests from non-members requesting access. If
  absent, then invites are not allowed. An invitation involves setting their
  membership type to "invited", in addition to sending the invite message.

m.join_rules
  Encodes the rules on how non-members can join the room. Has the following
  possibilities:

  "public" - a non-member can join the room directly

  "knock" - a non-member cannot join the room, but can post a single "knock"
  message requesting access, which existing members may approve or deny

  "invite" - non-members cannot join the room without an invite from an
  existing member

  "private" - nobody who is not in the 'may_join' list or already a member
  may join by any mechanism

  In any of the first three modes, existing members with sufficient
  permission can send invites to non-members if allowed by the
  "m.invite_level" key. A "private" room is not allowed to have the
  "m.invite_level" set.

  A client may use the value of this key to hint at the user interface
  expectations to provide; in particular, a private chat with one other user
  might warrant specific handling in the client.

m.may_join
  A list of User IDs that are always allowed to join the room, regardless of
  any of the prevailing join rules and invite levels. These apply even to
  private rooms. These are stored in a single list with normal
  update-powerlevel permissions applied; users cannot arbitrarily remove
  themselves from the list.

m.add_state_level
  The power-level required for a user to be able to add new state keys.

m.public_history
  If set and true, anyone can request the history of the room, without
  needing to be a member of the room.

m.archive_servers
  For "public" rooms with public history, gives a list of home servers that
  should be included in message distribution to the room, even if no users on
  that server are present. These ensure that a public room can still persist
  even if no users are currently members of it. This list should be consulted
  by the directory servers as the candidate list they respond with.

The following keys are provided by Synapse for user benefit, but their value
is not otherwise used by Synapse.

m.name
  Stores a short human-readable name for the room, such that clients can
  display it to a user to assist in identifying which room is which.

  This name specifically is not the strong ID used by the message transport
  system to refer to the room, because it may be changed from time to time.

m.topic
  Stores the current human-readable topic


Room Creation Templates
=======================

A client (or maybe home server?) could offer a few templates for the creation
of new rooms. For example, for a simple private one-to-one chat the channel
could assign the creator a power-level of 1, requiring a level of 1 to invite,
and needing an invite before members can join. An invite is then sent to the
other party, and if accepted and the other user joins, the creator's
power-level can now be reduced to 0. This now leaves a room with two
participants in it being unable to add more.


Rooms that Continue History
===========================

An option that could be considered for room creation, is that when a new room
is created the creator could specify a PDU ID into an existing room, as the
history continuation point. This would be stored as an extra piece of
meta-data on the initial PDU of the room's creation. (It does not appear in
the normal previous PDU linkage).

This would allow users in rooms to "fork" a room, if it is considered that the
conversations in the room no longer fit its original purpose, and wish to
diverge. Existing permissions on the original room would continue to apply of
course, for viewing that history. If both rooms are considered "public" we
might also want to define a message to post into the original room to
represent this fork point, and give a reference to the new room.


User Direct Message Rooms
=========================

There is no need to build a mechanism for directly sending messages between
users, because a room can handle this ability. To allow direct user-to-user
chat messaging we simply need to be able to create rooms with a specific set
of permissions to allow this direct messaging.

Between any given pair of user IDs that wish to exchange private messages,
there will exist a single shared Room, created lazily by either side. These
rooms will need a certain amount of special handling in both home servers and
display on clients, but as much as possible should be treated by the lower
layers of code the same as other rooms.

Specially, a client would likely offer a special menu choice associated with
another user (in room member lists, presence list, etc..) as "direct chat".
That would perform all the necessary steps to create the private chat room.
Receiving clients should display these in a special way too as the room name
is not important; instead it should distinguish them on the Display Name of
the other party.

Home Servers will need a client-API option to request setting up a new
user-user chat room, which will then need special handling within the server.
It will create a new room with the following:

  m.member: the proposing user
  m.join_rules: "private"
  m.may_join: both users
  m.power_levels: empty
  m.default_level: 0
  m.add_state_level: 0
  m.public_history: False

Having created the room, it can send an invite message to the other user in
the normal way - the room permissions state that no users can be set to the
invited state, but because they're in the may_join list they'd be allowed to
join anyway.

In this arrangement there is now a room which both users may join but neither
has the power to invite any others. Both users now have the confidence that
(at least within the messaging system itself) their messages remain private
and cannot later be provably leaked to a third party. They can freely set the
topic or name if they choose and add or edit any other state of the room. The
update powerlevel of each of these fixed properties should be 1, to lock out
the users from being able to alter them.


Anti-Glare
==========

There exists the possibility of a race condition if two users who have no
chat history with each other simultaneously create a room and invite the
other to it. This is called a "glare" situation. There are two possible ideas
for how to resolve this:

* Each Home Server should persist the mapping of (user ID pair) to room ID,
  so that duplicate requests can be suppressed. On receipt of a room creation
  request that the HS thinks there already exists a room for, the invitation
  to join can be rejected if:

  a) the HS believes the sending user is already a member of the room (and
     maybe their HS has forgotten this fact), or

  b) the proposed room has a lexicographically-higher ID than the existing
     room (to resolve true race condition conflicts)

* The room ID for a private 1:1 chat has a special form, determined by
  concatenating the User IDs of both members in a deterministic order, such
  that it doesn't matter which side creates it first; the HSes can just
  ignore (or merge?) received PDUs that create the room twice (sketched
  below).
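The second idea is easy to illustrate - the "pm:" prefix and separator below
are invented for the sketch, as no actual ID format is defined here::

    def direct_chat_room_id(user_a, user_b):
        """Deterministic 1:1 room ID: both sides compute the same value."""
        first, second = sorted([user_a, user_b])
        return "pm:%s|%s" % (first, second)

    # direct_chat_room_id("@bob:b.example", "@alice:a.example")
    #   == direct_chat_room_id("@alice:a.example", "@bob:b.example")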
@@ -1,86 +0,0 @@
===========
Terminology
===========

A list of definitions of specific terminology used among these documents.
These terms were originally taken from the server-server documentation, and
may not currently match the exact meanings used in other places; though as a
medium-term goal we should encourage the unification of this terminology.


Terms
=====

Backfilling:
  The process of synchronising historic state from one home server to
  another, to backfill the event storage so that scrollback can be presented
  to the client(s). (Formerly, and confusingly, called 'pagination')

Context:
  A single human-level entity of interest (currently, a chat room)

EDU (Ephemeral Data Unit):
  A message that relates directly to a given pair of home servers that are
  exchanging it. EDUs are short-lived messages that relate only to one single
  pair of servers; they are not persisted for a long time and are not
  forwarded on to other servers. Because of this, they have no internal ID nor
  previous EDUs reference chain.

Event:
  A record of activity that records a single thing that happened to a context
  (currently, a chat room). These are the "chat messages" that Synapse makes
  available.
  [[NOTE(paul): The current server-server implementation calls these simply
  "messages" but the term is too ambiguous here; I've called them Events]]

PDU (Persistent Data Unit):
  A message that relates to a single context, irrespective of the server that
  is communicating it. PDUs either encode a single Event, or a single State
  change. A PDU is referred to by its PDU ID; the pair of its origin server
  and local reference from that server.

PDU ID:
  The pair of PDU Origin and PDU Reference, that together globally uniquely
  refers to a specific PDU.

PDU Origin:
  The name of the origin server that generated a given PDU. This may not be
  the server from which it has been received, due to the way they are copied
  around from server to server. The origin always records the original server
  that created it.

PDU Reference:
  A local ID used to refer to a specific PDU from a given origin server. These
  references are opaque at the protocol level, but may optionally have some
  structured meaning within a given origin server or implementation.

Presence:
  The concept of whether a user is currently online, how available they
  declare they are, and so on. See also: doc/model/presence

Profile:
  A set of metadata about a user, such as a display name, provided for the
  benefit of other users. See also: doc/model/profiles

Room ID:
  An opaque string (of as-yet undecided format) that identifies a particular
  room and is used in PDUs referring to it.

Room Alias:
  A human-readable string of the form #name:some.domain that users can use as
  a pointer to identify a room; a Directory Server will map this to its Room
  ID

State:
  A set of metadata maintained about a Context, which is replicated among the
  servers in addition to the history of Events.

User ID:
  A string of the form @localpart:domain.name that identifies a user for
  wire-protocol purposes. The localpart is meaningless outside of a
  particular home server. This takes a human-readable form that end-users can
  use directly if they so wish, avoiding the 3PIDs.

Transaction:
  A message which relates to the communication between a given pair of
  servers. A transaction contains possibly-empty lists of PDUs and EDUs.
@ -1,108 +0,0 @@
======================
Third Party Identities
======================

A description of how email addresses, mobile phone numbers and other third
party identifiers can be used to authenticate and discover users in Matrix.


Overview
========

New users need to authenticate their account. An email or SMS text message can
be a convenient form of authentication. Users already have email addresses
and phone numbers for contacts in their address book. They want to communicate
with those contacts in Matrix without manually exchanging a Matrix User ID with
them.

Third Party IDs
---------------

[[TODO(markjh): Describe the format of a 3PID]]


Third Party ID Associations
---------------------------

An Association is a binding between a Matrix User ID and a Third Party ID
(3PID). Each 3PID can be associated with one Matrix User ID at a time.

[[TODO(markjh): JSON format of the association.]]

Verification
------------

An Association must be verified by a trusted Verification Server. Email
addresses and phone numbers can be verified by sending a token to the address,
which a client can supply to the verifier to confirm ownership.

An email Verification Server may be capable of verifying all email 3PIDs or may
be restricted to verifying addresses for a particular domain. A phone number
Verification Server may be capable of verifying all phone numbers or may be
restricted to verifying numbers for a given country or phone prefix.

Verification Servers fulfil a similar role to Certificate Authorities in PKI,
so a similar level of vetting should be required before clients trust their
signatures.

A Verification Server may wish to check for existing Associations for a 3PID
before creating a new Association.

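As a rough illustration of the token step described above, the following
Python sketch generates and checks a verification token. The function names
and the in-memory store are hypothetical, since this document does not yet
specify a wire format; only the overall shape (send a token to the address,
have the client echo it back) follows the text::

  import binascii
  import hmac
  import os
  import time

  # Hypothetical in-memory token store: address -> (token, expiry timestamp).
  pending_tokens = {}

  def start_email_verification(address, send_email):
      # Generate a short-lived random token and mail it to the address.
      token = binascii.hexlify(os.urandom(16)).decode("ascii")
      pending_tokens[address] = (token, time.time() + 3600)
      send_email(address, "Your verification token is: " + token)

  def confirm_email_verification(address, submitted_token):
      # Check the token the client supplied against the one we sent.
      token, expires = pending_tokens.get(address, (None, 0))
      if token is None or time.time() > expires:
          return False
      # Constant-time comparison avoids leaking the token via timing.
      return hmac.compare_digest(token, submitted_token)

On success the Verification Server would sign the resulting Association, whose
signature clients can then check as discussed under Discovery below.
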
Discovery
---------

Users can discover Associations using a trusted Identity Server. Each
Association will be signed by the Identity Server. An Identity Server may store
the entire space of Associations or may delegate to other Identity Servers when
looking up Associations.

Each Association returned from an Identity Server must be signed by a
Verification Server. Clients should check these signatures, as sketched below.

Identity Servers fulfil a similar role to DNS servers.

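As a sketch of the client-side signature check, the snippet below verifies a
returned Association with PyNaCl's ed25519 implementation. The response
layout, the canonical encoding (compact JSON with sorted keys) and the way
the Verification Server's key is obtained are all assumptions, since the JSON
format is still a TODO above::

  import base64
  import json

  from nacl.exceptions import BadSignatureError
  from nacl.signing import VerifyKey

  def unbase64(value):
      # Tolerate base64 with the padding stripped.
      return base64.b64decode(value + "=" * (-len(value) % 4))

  def check_association(association, signature_b64, verify_key_b64):
      # Re-encode the association the same way the Verification Server is
      # assumed to have signed it: compact JSON with sorted keys.
      message = json.dumps(association, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
      try:
          VerifyKey(unbase64(verify_key_b64)).verify(
              message, unbase64(signature_b64))
          return True
      except BadSignatureError:
          return False
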
Privacy
-------

A User may publish the association between their phone number and Matrix User
ID on the Identity Server without publishing the number in their Profile hosted
on their Home Server.

Identity Servers should refrain from publishing reverse mappings and should
take steps, such as rate limiting, to prevent attackers from enumerating the
space of mappings; one possible approach is sketched below.

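One way to realise the rate limiting suggested above is a simple per-requester
token bucket; this sketch is illustrative only and not part of any specified
behaviour::

  import time

  class LookupRateLimiter(object):
      """Allow `rate` lookups per second per requester, with bursts."""

      def __init__(self, rate=1.0, burst=10.0):
          self.rate = rate
          self.burst = burst
          self.buckets = {}  # requester -> (tokens, last seen timestamp)

      def allow(self, requester):
          tokens, last = self.buckets.get(requester, (self.burst, time.time()))
          now = time.time()
          # Refill the bucket in proportion to the time elapsed.
          tokens = min(self.burst, tokens + (now - last) * self.rate)
          allowed = tokens >= 1.0
          self.buckets[requester] = (tokens - 1.0 if allowed else tokens, now)
          return allowed
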
Federation
==========

Delegation
----------

Verification Servers could delegate signing to another server by issuing a
certificate to that server allowing it to verify and sign a subset of 3PIDs on
its behalf. It would be necessary to provide a language for describing which
subset of 3PIDs that server had authority to validate. Alternatively, it could
delegate the verification step to another server but sign the resulting
association itself.

The 3PID space will have a hierarchical structure like DNS, so Identity Servers
can delegate lookups to other servers. An Identity Server should be prepared
to host or delegate any valid association within the subset of the 3PIDs it is
responsible for.

Multiple Root Verification Servers
----------------------------------

There can be multiple root Verification Servers, and an Association could be
signed by multiple servers if different clients trust different subsets of
the verification servers.

Multiple Root Identity Servers
------------------------------

There can be multiple root Identity Servers. Clients will add each
Association to all root Identity Servers.

[[TODO(markjh): Describe how clients find the list of root Identity Servers]]

@ -1,5 +0,0 @@
To get this running:

ln -s ../swagger_matrix
python -m SimpleHTTPServer

Go to http://localhost:8000/swagger.html
38
docs/client-server/web/files/backbone-min.js
vendored
@ -1,38 +0,0 @@
// Backbone.js 0.9.2

// (c) 2010-2012 Jeremy Ashkenas, DocumentCloud Inc.
// Backbone may be freely distributed under the MIT license.
// For all details and documentation:
// http://backbonejs.org
File diff suppressed because one or more lines are too long
@ -1,16 +0,0 @@
/* latin */
@font-face {
  font-family: 'Droid Sans';
  font-style: normal;
  font-weight: 400;
  src: local('Droid Sans'), local('DroidSans'), url(http://fonts.gstatic.com/s/droidsans/v5/s-BiyweUPV0v-yRb-cjciPk_vArhqVIZ0nv9q090hN8.woff2) format('woff2');
  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2212, U+2215, U+E0FF, U+EFFD, U+F000;
}
/* latin */
@font-face {
  font-family: 'Droid Sans';
  font-style: normal;
  font-weight: 700;
  src: local('Droid Sans Bold'), local('DroidSans-Bold'), url(http://fonts.gstatic.com/s/droidsans/v5/EFpQQyG9GqCrobXxL-KRMYWiMMZ7xLd792ULpGE4W_Y.woff2) format('woff2');
  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2212, U+2215, U+E0FF, U+EFFD, U+F000;
}
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -1,18 +0,0 @@
/*
 * jQuery BBQ: Back Button & Query Library - v1.2.1 - 2/17/2010
 * http://benalman.com/projects/jquery-bbq-plugin/
 *
 * Copyright (c) 2010 "Cowboy" Ben Alman
 * Dual licensed under the MIT and GPL licenses.
 * http://benalman.com/about/license/
 */
File diff suppressed because one or more lines are too long
/*
 * jQuery hashchange event - v1.2 - 2/11/2010
 * http://benalman.com/projects/jquery-hashchange-plugin/
 *
 * Copyright (c) 2010 "Cowboy" Ben Alman
 * Dual licensed under the MIT and GPL licenses.
 * http://benalman.com/about/license/
 */
File diff suppressed because one or more lines are too long
@ -1 +0,0 @@
(function(b){b.fn.slideto=function(a){a=b.extend({slide_duration:"slow",highlight_duration:3E3,highlight:true,highlight_color:"#FFFF99"},a);return this.each(function(){obj=b(this);b("body").animate({scrollTop:obj.offset().top},a.slide_duration,function(){a.highlight&&b.ui.version&&obj.effect("highlight",{color:a.highlight_color},a.highlight_duration)})})}})(jQuery);
@ -1,8 +0,0 @@
/*
jQuery Wiggle
Author: WonderGroup, Jordan Thomas
URL: http://labs.wondergroup.com/demos/mini-ui/index.html
License: MIT (http://en.wikipedia.org/wiki/MIT_License)
*/
jQuery.fn.wiggle=function(o){var d={speed:50,wiggles:3,travel:5,callback:null};var o=jQuery.extend(d,o);return this.each(function(){var cache=this;var wrap=jQuery(this).wrap('<div class="wiggle-wrap"></div>').css("position","relative");var calls=0;for(i=1;i<=o.wiggles;i++){jQuery(this).animate({left:"-="+o.travel},o.speed).animate({left:"+="+o.travel*2},o.speed*2).animate({left:"-="+o.travel},o.speed,function(){calls++;if(jQuery(cache).parent().hasClass('wiggle-wrap')){jQuery(cache).parent().replaceWith(cache);}
if(calls==o.wiggles&&jQuery.isFunction(o.callback)){o.callback();}});}});};
@ -1,125 +0,0 @@
/* http://meyerweb.com/eric/tools/css/reset/ v2.0 | 20110126 */
html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed,
figure, figcaption, footer, header, hgroup,
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
  margin: 0;
  padding: 0;
  border: 0;
  font-size: 100%;
  font: inherit;
  vertical-align: baseline;
}
/* HTML5 display-role reset for older browsers */
article, aside, details, figcaption, figure,
footer, header, hgroup, menu, nav, section {
  display: block;
}
body {
  line-height: 1;
}
ol, ul {
  list-style: none;
}
blockquote, q {
  quotes: none;
}
blockquote:before, blockquote:after,
q:before, q:after {
  content: '';
  content: none;
}
table {
  border-collapse: collapse;
  border-spacing: 0;
}
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -1,211 +0,0 @@
var appName;
var popupMask;
var popupDialog;
var clientId;
var realm;

function handleLogin() {
  var scopes = [];

  if(window.swaggerUi.api.authSchemes
    && window.swaggerUi.api.authSchemes.oauth2
    && window.swaggerUi.api.authSchemes.oauth2.scopes) {
    scopes = window.swaggerUi.api.authSchemes.oauth2.scopes;
  }

  if(window.swaggerUi.api
    && window.swaggerUi.api.info) {
    appName = window.swaggerUi.api.info.title;
  }

  if(popupDialog.length > 0)
    popupDialog = popupDialog.last();
  else {
    popupDialog = $(
      [
        '<div class="api-popup-dialog">',
        '<div class="api-popup-title">Select OAuth2.0 Scopes</div>',
        '<div class="api-popup-content">',
        '<p>Scopes are used to grant an application different levels of access to data on behalf of the end user. Each API may declare one or more scopes.',
        '<a href="#">Learn how to use</a>',
        '</p>',
        '<p><strong>' + appName + '</strong> API requires the following scopes. Select which ones you want to grant to Swagger UI.</p>',
        '<ul class="api-popup-scopes">',
        '</ul>',
        '<p class="error-msg"></p>',
        '<div class="api-popup-actions"><button class="api-popup-authbtn api-button green" type="button">Authorize</button><button class="api-popup-cancel api-button gray" type="button">Cancel</button></div>',
        '</div>',
        '</div>'].join(''));
    $(document.body).append(popupDialog);

    // Build the list of scope checkboxes from the API's declared scopes.
    var popup = popupDialog.find('ul.api-popup-scopes').empty();
    for (var i = 0; i < scopes.length; i++) {
      var scope = scopes[i];
      var str = '<li><input type="checkbox" id="scope_' + i + '" scope="' + scope.scope + '"/>' + '<label for="scope_' + i + '">' + scope.scope;
      if (scope.description) {
        str += '<br/><span class="api-scope-desc">' + scope.description + '</span>';
      }
      str += '</label></li>';
      popup.append(str);
    }
  }

  // Centre the dialog in the current viewport.
  var $win = $(window),
    dw = $win.width(),
    dh = $win.height(),
    st = $win.scrollTop(),
    dlgWd = popupDialog.outerWidth(),
    dlgHt = popupDialog.outerHeight(),
    top = (dh - dlgHt) / 2 + st,
    left = (dw - dlgWd) / 2;

  popupDialog.css({
    top: (top < 0 ? 0 : top) + 'px',
    left: (left < 0 ? 0 : left) + 'px'
  });

  popupDialog.find('button.api-popup-cancel').click(function() {
    popupMask.hide();
    popupDialog.hide();
  });
  popupDialog.find('button.api-popup-authbtn').click(function() {
    popupMask.hide();
    popupDialog.hide();

    var authSchemes = window.swaggerUi.api.authSchemes;
    var host = window.location;
    var redirectUrl = host.protocol + '//' + host.host + "/o2c.html";
    var url = null;

    // Find an implicit grant among the declared auth schemes.
    var p = window.swaggerUi.api.authSchemes;
    for (var key in p) {
      if (p.hasOwnProperty(key)) {
        var o = p[key].grantTypes;
        for (var t in o) {
          if (o.hasOwnProperty(t) && t === 'implicit') {
            var dets = o[t];
            url = dets.loginEndpoint.url + "?response_type=token";
            window.swaggerUi.tokenName = dets.tokenName;
          }
        }
      }
    }
    // Collect the scopes the user ticked.
    var scopes = [];
    var o = $('.api-popup-scopes').find('input:checked');

    for (var k = 0; k < o.length; k++) {
      scopes.push($(o[k]).attr("scope"));
    }

    window.enabledScopes = scopes;

    url += '&redirect_uri=' + encodeURIComponent(redirectUrl);
    url += '&realm=' + encodeURIComponent(realm);
    url += '&client_id=' + encodeURIComponent(clientId);
    url += '&scope=' + encodeURIComponent(scopes);

    window.open(url);
  });

  popupMask.show();
  popupDialog.show();
  return;
}

function handleLogout() {
  for (var key in window.authorizations.authz) {
    window.authorizations.remove(key);
  }
  window.enabledScopes = null;
  $('.api-ic.ic-on').addClass('ic-off');
  $('.api-ic.ic-on').removeClass('ic-on');

  // set the info box
  $('.api-ic.ic-warning').addClass('ic-error');
  $('.api-ic.ic-warning').removeClass('ic-warning');
}

function initOAuth(opts) {
  var o = (opts || {});
  var errors = [];

  appName = (o.appName || errors.push("missing appName"));
  popupMask = (o.popupMask || $('#api-common-mask'));
  popupDialog = (o.popupDialog || $('.api-popup-dialog'));
  clientId = (o.clientId || errors.push("missing client id"));
  realm = (o.realm || errors.push("missing realm"));

  if (errors.length > 0) {
    log("auth unable to initialize oauth: " + errors);
    return;
  }

  $('pre code').each(function(i, e) { hljs.highlightBlock(e) });
  $('.api-ic').click(function(s) {
    if ($(s.target).hasClass('ic-off'))
      handleLogin();
    else {
      handleLogout();
    }
    return false;
  });
}

function onOAuthComplete(token) {
  if (token) {
    if (token.error) {
      var checkbox = $('input[type=checkbox],.secured');
      checkbox.each(function(pos) {
        checkbox[pos].checked = false;
      });
      alert(token.error);
    }
    else {
      var b = token[window.swaggerUi.tokenName];
      if (b) {
        // if all roles are satisfied
        var o = null;
        $.each($('.auth #api_information_panel'), function(k, v) {
          var children = v;
          if (children && children.childNodes) {
            var requiredScopes = [];
            $.each((children.childNodes), function(k1, v1) {
              var inner = v1.innerHTML;
              if (inner)
                requiredScopes.push(inner);
            });
            var diff = [];
            for (var i = 0; i < requiredScopes.length; i++) {
              var s = requiredScopes[i];
              if (window.enabledScopes && window.enabledScopes.indexOf(s) == -1) {
                diff.push(s);
              }
            }
            if (diff.length > 0) {
              o = v.parentNode;
              $(o.parentNode).find('.api-ic.ic-on').addClass('ic-off');
              $(o.parentNode).find('.api-ic.ic-on').removeClass('ic-on');

              // sorry, not all scopes are satisfied
              $(o).find('.api-ic').addClass('ic-warning');
              $(o).find('.api-ic').removeClass('ic-error');
            }
            else {
              o = v.parentNode;
              $(o.parentNode).find('.api-ic.ic-off').addClass('ic-on');
              $(o.parentNode).find('.api-ic.ic-off').removeClass('ic-off');

              // all scopes are satisfied
              $(o).find('.api-ic').addClass('ic-info');
              $(o).find('.api-ic').removeClass('ic-warning');
              $(o).find('.api-ic').removeClass('ic-error');
            }
          }
        });

        window.authorizations.add("oauth2", new ApiKeyAuthorization("Authorization", "Bearer " + b, "header"));
      }
    }
  }
}
File diff suppressed because it is too large
File diff suppressed because it is too large
32
docs/client-server/web/files/underscore-min.js
vendored
@ -1,32 +0,0 @@
// Underscore.js 1.3.3
// (c) 2009-2012 Jeremy Ashkenas, DocumentCloud Inc.
// Underscore is freely distributable under the MIT license.
// Portions of Underscore are inspired or borrowed from Prototype,
// Oliver Steele's Functional, and John Resig's Micro-Templating.
// For all details and documentation:
// http://documentcloud.github.com/underscore
File diff suppressed because one or more lines are too long
@ -1,78 +0,0 @@
<!DOCTYPE html>
<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Matrix Client-Server API Documentation</title>
<link href="./files/css" rel="stylesheet" type="text/css">
<link href="./files/reset.css" media="screen" rel="stylesheet" type="text/css">
<link href="./files/screen.css" media="screen" rel="stylesheet" type="text/css">
<link href="./files/reset.css" media="print" rel="stylesheet" type="text/css">
<link href="./files/screen.css" media="print" rel="stylesheet" type="text/css">
<script type="text/javascript" src="./files/shred.bundle.js"></script>
<script src="./files/jquery-1.8.0.min.js" type="text/javascript"></script>
<script src="./files/jquery.slideto.min.js" type="text/javascript"></script>
<script src="./files/jquery.wiggle.min.js" type="text/javascript"></script>
<script src="./files/jquery.ba-bbq.min.js" type="text/javascript"></script>
<script src="./files/handlebars-1.0.0.js" type="text/javascript"></script>
<script src="./files/underscore-min.js" type="text/javascript"></script>
<script src="./files/backbone-min.js" type="text/javascript"></script>
<script src="./files/swagger.js" type="text/javascript"></script>
<script src="./files/swagger-ui.js" type="text/javascript"></script>
<script src="./files/highlight.7.3.pack.js" type="text/javascript"></script>

<!-- enabling this will enable oauth2 implicit scope support -->
<script src="./files/swagger-oauth.js" type="text/javascript"></script>

<script type="text/javascript">
$(function () {
  window.swaggerUi = new SwaggerUi({
    url: "http://localhost:8000/swagger_matrix/api-docs",
    dom_id: "swagger-ui-container",
    supportedSubmitMethods: ['get', 'post', 'put', 'delete'],
    onComplete: function(swaggerApi, swaggerUi) {
      log("Loaded SwaggerUI");

      if (typeof initOAuth == "function") {
        initOAuth({
          clientId: "your-client-id",
          realm: "your-realms",
          appName: "your-app-name"
        });
      }
      $('pre code').each(function(i, e) {
        hljs.highlightBlock(e)
      });
    },
    onFailure: function(data) {
      log("Unable to Load SwaggerUI");
    },
    docExpansion: "none"
  });

  $('#input_apiKey').change(function() {
    var key = $('#input_apiKey')[0].value;
    log("key: " + key);
    if (key && key.trim() != "") {
      log("added key " + key);
      window.authorizations.add("key", new ApiKeyAuthorization("access_token", key, "query"));
    }
  });
  window.swaggerUi.load();
});
</script>
</head>

<body class="swagger-section">
<div id="header">
  <div class="swagger-ui-wrap">
    <a id="logo" href="http://swagger.wordnik.com/">swagger</a>
    <form id="api_selector">
      <div class="input"><input placeholder="http://example.com/api" id="input_baseUrl" name="baseUrl" type="text"></div>
      <div class="input"><input placeholder="access_token" id="input_apiKey" name="apiKey" type="text"></div>
    </form>
  </div>
</div>

<div id="message-bar" class="swagger-ui-wrap message-fail">Can't read from server. It may not have the appropriate access-control-origin settings.</div>
<div id="swagger-ui-container" class="swagger-ui-wrap"></div>

</body></html>
@ -1,59 +0,0 @@

Transaction
===========

Required keys:

============ =================== ===============================================
Key          Type                Description
============ =================== ===============================================
origin       String              DNS name of homeserver making this transaction.
ts           Integer             Timestamp in milliseconds on originating
                                 homeserver when this transaction started.
previous_ids List of Strings     List of transactions that were sent immediately
                                 prior to this transaction.
pdus         List of Objects     List of updates contained in this transaction.
============ =================== ===============================================

|
|
||||||
|
|
||||||
PDU
|
|
||||||
===
|
|
||||||
|
|
||||||
Required keys:
|
|
||||||
|
|
||||||
============ ================== ================================================
|
|
||||||
Key Type Description
|
|
||||||
============ ================== ================================================
|
|
||||||
context String Event context identifier
|
|
||||||
origin String DNS name of homeserver that created this PDU.
|
|
||||||
pdu_id String Unique identifier for PDU within the context for
|
|
||||||
the originating homeserver.
|
|
||||||
ts Integer Timestamp in milliseconds on originating
|
|
||||||
homeserver when this PDU was created.
|
|
||||||
pdu_type String PDU event type.
|
|
||||||
prev_pdus List of Pairs The originating homeserver and PDU ids of the
|
|
||||||
of Strings most recent PDUs the homeserver was aware of for
|
|
||||||
this context when it made this PDU.
|
|
||||||
depth Integer The maximum depth of the previous PDUs plus one.
|
|
||||||
============ ================== ================================================
|
|
||||||
|
|
||||||
Keys for state updates:
|
|
||||||
|
|
||||||
================== ============ ================================================
|
|
||||||
Key Type Description
|
|
||||||
================== ============ ================================================
|
|
||||||
is_state Boolean True if this PDU is updating state.
|
|
||||||
state_key String Optional key identifying the updated state within
|
|
||||||
the context.
|
|
||||||
power_level Integer The asserted power level of the user performing
|
|
||||||
the update.
|
|
||||||
min_update Integer The required power level needed to replace this
|
|
||||||
update.
|
|
||||||
prev_state_id String The homeserver of the update this replaces
|
|
||||||
prev_state_origin String The PDU id of the update this replaces.
|
|
||||||
user String The user updating the state.
|
|
||||||
================== ============ ================================================
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@@ -1,151 +0,0 @@

Signing JSON
============

JSON is signed by encoding the JSON object without the ``signatures`` or
``meta`` keys using a canonical encoding. The JSON bytes are then signed using
the signature algorithm, and the signature is encoded using base64 with the
padding stripped. The resulting base64 signature is added to an object under
the *signing key identifier*, which is added to the ``signatures`` object under
the name of the server signing it, which is added back to the original JSON
object along with the ``meta`` object.

The *signing key identifier* is the concatenation of the *signing algorithm*
and a *key version*. The *signing algorithm* identifies the algorithm used to
sign the JSON. The currently supported value for *signing algorithm* is
``ed25519`` as implemented by NaCl (http://nacl.cr.yp.to/). The *key version*
is used to distinguish between different signing keys used by the same entity.

The ``meta`` object and the ``signatures`` object are not covered by the
signature. Therefore intermediate servers can add metadata such as timestamps
and additional signatures.

::

  {
     "name": "example.org",
     "signing_keys": {
       "ed25519:1": "XSl0kuyvrXNj6A+7/tkrB9sxSbRi08Of5uRhxOqZtEQ"
     },
     "meta": {
        "retrieved_ts_ms": 922834800000
     },
     "signatures": {
        "example.org": {
           "ed25519:1": "s76RUgajp8w172am0zQb/iPTHsRnb4SkrzGoeCOSFfcBY2V/1c8QfrmdXHpvnc2jK5BD1WiJIxiMW95fMjK7Bw"
        }
     }
  }

::

  def sign_json(json_object, signing_key, signing_name):
      signatures = json_object.pop("signatures", {})
      meta = json_object.pop("meta", None)

      signed = signing_key.sign(encode_canonical_json(json_object))
      signature_base64 = encode_base64(signed.signature)

      key_id = "%s:%s" % (signing_key.alg, signing_key.version)
      signatures.setdefault(signing_name, {})[key_id] = signature_base64

      json_object["signatures"] = signatures
      if meta is not None:
          json_object["meta"] = meta

      return json_object
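For example, signing a small object with the function above might look like
this (``signing_key`` is assumed to be a NaCl ed25519 key wrapper carrying
``alg`` and ``version`` attributes, as ``sign_json`` expects):

::

  signed = sign_json({"name": "example.org"}, signing_key, "example.org")
  # "signed" now carries a "signatures" entry shaped like the example above:
  # {"name": ..., "signatures": {"example.org": {"ed25519:1": "..."}}}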
Checking for a Signature
------------------------

To check if an entity has signed a JSON object a server does the following
(a sketch of the procedure follows the list):

1. Checks if the ``signatures`` object contains an entry with the name of the
   entity. If the entry is missing then the check fails.
2. Removes any *signing key identifiers* from the entry with algorithms it
   doesn't understand. If there are no *signing key identifiers* left then the
   check fails.
3. Looks up *verification keys* for the remaining *signing key identifiers*
   either from a local cache or by consulting a trusted key server. If it
   cannot find a *verification key* then the check fails.
4. Decodes the base64 encoded signature bytes. If base64 decoding fails then
   the check fails.
5. Checks the signature bytes using the *verification key*. If this fails then
   the check fails. Otherwise the check succeeds.
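A minimal sketch of steps 1, 4 and 5 for a single known key. ``decode_base64``
is assumed to be the inverse of the ``encode_base64`` helper used above, and
``verify_key`` a NaCl ``VerifyKey``-style object whose ``verify`` method
raises on a bad signature; none of these names are mandated by the spec:

::

  def check_signature(json_object, entity_name, key_id, verify_key):
      entry = json_object.get("signatures", {}).get(entity_name)
      if not entry or key_id not in entry:
          raise Exception("Missing signature")          # steps 1-2

      signature = decode_base64(entry[key_id])          # step 4

      checked = dict(json_object)
      checked.pop("signatures", None)
      checked.pop("meta", None)
      verify_key.verify(
          encode_canonical_json(checked), signature     # step 5
      )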
Canonical JSON
--------------

The canonical JSON encoding for a value is the shortest UTF-8 JSON encoding
with dictionary keys lexicographically sorted by unicode codepoint. Numbers in
the JSON value must be integers in the range [-(2**53)+1, (2**53)-1].

::

  import json

  def canonical_json(value):
      return json.dumps(
          value,
          ensure_ascii=False,
          separators=(',', ':'),
          sort_keys=True,
      ).encode("UTF-8")
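For instance, in Python 2 the helper above sorts keys and strips all
insignificant whitespace:

::

  >>> canonical_json({"b": 2, "a": 1})
  '{"a":1,"b":2}'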
Grammar
+++++++

Adapted from the grammar in http://tools.ietf.org/html/rfc7159, removing
insignificant whitespace, fractions, exponents and redundant character
escapes:

::

 value     = false / null / true / object / array / number / string
 false     = %x66.61.6c.73.65
 null      = %x6e.75.6c.6c
 true      = %x74.72.75.65
 object    = %x7B [ member *( %x2C member ) ] %x7D
 member    = string %x3A value
 array     = %x5B [ value *( %x2C value ) ] %x5D
 number    = [ %x2D ] int
 int       = %x30 / ( %x31-39 *digit )
 digit     = %x30-39
 string    = %x22 *char %x22
 char      = unescaped / %x5C escaped
 unescaped = %x20-21 / %x23-5B / %x5D-10FFFF
 escaped   = %x22 ; "    quotation mark  U+0022
           / %x5C ; \    reverse solidus U+005C
           / %x62 ; b    backspace       U+0008
           / %x66 ; f    form feed       U+000C
           / %x6E ; n    line feed       U+000A
           / %x72 ; r    carriage return U+000D
           / %x74 ; t    tab             U+0009
           / %x75.30.30.30 (%x30-37 / %x62 / %x65-66) ; u000X
           / %x75.30.30.31 (%x30-39 / %x61-66)        ; u001X

Signing Events
==============

Signing events is a more complicated process, since servers can choose to
redact non-essential event contents. Before signing, the event is encoded as
Canonical JSON and hashed using SHA-256. The resulting hash is then stored in
the event JSON in a ``hash`` object under a ``sha256`` key. Then all
non-essential keys are stripped from the event object, and the resulting
object, which includes the ``hash`` key, is signed using the JSON signing
algorithm.

Servers can then transmit the entire event or the event with the
non-essential keys removed. Receiving servers can then check the entire
event, if it is present, by computing the SHA-256 of the event excluding the
``hash`` object, or by using the ``hash`` object included in the event if
keys have been redacted.
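A sketch of the signing half of this, reusing ``sign_json`` and
``canonical_json`` from above; ``strip_non_essential_keys`` is a hypothetical
stand-in for the redaction rules, which are not specified here:

::

  import hashlib

  def hash_and_sign_event(event, signing_key, signing_name):
      # Hash the event as it stands, excluding any existing hash or
      # signature keys, and record the hash on the event itself.
      content = dict(event)
      content.pop("hash", None)
      content.pop("signatures", None)
      content.pop("meta", None)
      digest = hashlib.sha256(canonical_json(content)).digest()
      event["hash"] = {"sha256": encode_base64(digest)}

      # Sign the stripped event; the "hash" key survives stripping, so the
      # signature still covers the full content indirectly.
      redacted = strip_non_essential_keys(event)  # hypothetical helper
      return sign_json(redacted, signing_key, signing_name)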
New hash functions can be introduced by adding additional keys to the ``hash``
object. Since the ``hash`` object cannot be redacted, a server shouldn't allow
too many hashes to be listed, otherwise a server might embed illicit data
within the ``hash`` object. For similar reasons a server shouldn't allow hash
values that are too long.

[[TODO(markjh): We might want to specify a maximum number of keys for the
``hash`` and we might want to specify the maximum output size of a hash]]

[[TODO(markjh): We might want to allow the server to omit the output of well
known hash functions like SHA-256 when none of the keys have been redacted]]
@@ -1,231 +0,0 @@
===========================
Matrix Server-to-Server API
===========================

A description of the protocol used to communicate between Matrix home
servers; also known as Federation.


Overview
========

The server-server API is a mechanism by which two home servers can exchange
Matrix event messages, both as a real-time push of current events, and as a
historic fetching mechanism to synchronise past history for clients to view.
It uses HTTP connections between each pair of servers involved as the
underlying transport. Messages are exchanged between servers in real-time by
active pushing from each server's HTTP client into the server of the other.
Queries to fetch historic data for the purpose of back-filling scrollback
buffers and the like can also be performed.

::

    { Matrix clients }                         { Matrix clients }
        ^        |                                 ^        |
        | events |                                 | events |
        |        V                                 |        V
    +------------------+                       +------------------+
    |                  |---------( HTTP )----->|                  |
    |   Home Server    |                       |   Home Server    |
    |                  |<--------( HTTP )------|                  |
    +------------------+                       +------------------+

There are three main kinds of communication that occur between home servers:

 * Queries
   These are single request/response interactions between a given pair of
   servers, initiated by one side sending an HTTP request to obtain some
   information, and responded to by the other. They are not persisted and
   contain no long-term significant history. They simply request a snapshot
   state at the instant the query is made.

 * EDUs - Ephemeral Data Units
   These are notifications of events that are pushed from one home server to
   another. They are not persisted and contain no long-term significant
   history, nor does the receiving home server have to reply to them.

 * PDUs - Persisted Data Units
   These are notifications of events that are broadcast from one home server
   to any others that are interested in the same "context" (namely, a Room
   ID). They are persisted to long-term storage and form the record of
   history for that context.

Where Queries are presented directly across the HTTP connection as GET
requests to specific URLs, EDUs and PDUs are further wrapped in an envelope
called a Transaction, which is transferred from the origin to the destination
home server using a PUT request.


Transactions and EDUs/PDUs
==========================

The transfer of EDUs and PDUs between home servers is performed by an
exchange of Transaction messages, which are encoded as JSON objects with a
dict as the top-level element, passed over an HTTP PUT request. A Transaction
is meaningful only to the pair of home servers that exchanged it; they are
not globally-meaningful.

Each transaction has an opaque ID and timestamp (UNIX epoch time in
milliseconds) generated by its origin server, an origin and destination
server name, a list of "previous IDs", and a list of PDUs - the actual
message payload that the Transaction carries.

::

  {"transaction_id":"916d630ea616342b42e98a3be0b74113",
   "ts":1404835423000,
   "origin":"red",
   "destination":"blue",
   "prev_ids":["e1da392e61898be4d2009b9fecce5325"],
   "pdus":[...],
   "edus":[...]}

The "previous IDs" field will contain a list of previous transaction IDs that
the origin server has sent to this destination. Its purpose is to act as a
sequence checking mechanism - the destination server can check whether it has
successfully received that Transaction, or ask for a retransmission if not.

The "pdus" field of a transaction is a list, containing zero or more PDUs.[*]
Each PDU is itself a dict containing a number of keys, the exact details of
which will vary depending on the type of PDU. Similarly, the "edus" field is
another list containing the EDUs. This key may be entirely absent if there
are no EDUs to transfer.

(* Normally the PDU list will be non-empty, but the server should cope with
receiving an "empty" transaction, as this is useful for informing peers of
other transaction IDs they should be aware of. This effectively acts as a
push mechanism to encourage peers to continue to replicate content.)
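For illustration only, an origin server's sender might assemble and deliver a
Transaction along these lines (``http_put`` and the queue arguments are
assumptions for the sketch, not part of the protocol):

::

  import json
  import time

  def send_transaction(origin, destination, txn_id, prev_ids,
                       pdus, edus, http_put):
      transaction = {
          "transaction_id": txn_id,
          "ts": int(time.time() * 1000),
          "origin": origin,
          "destination": destination,
          "prev_ids": prev_ids,   # transaction IDs previously sent there
          "pdus": pdus,           # may be empty, as noted above
      }
      if edus:
          transaction["edus"] = edus  # omitted entirely when empty

      path = "/_matrix/federation/v1/send/%s/" % (txn_id,)
      return http_put(destination, path, json.dumps(transaction))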
All PDUs have an ID, a context, a declaration of their type, a list of other
PDU IDs that have been seen recently on that context (regardless of which
origin sent them), and a nested content field containing the actual event
content.

[[TODO(paul): Update this structure so that 'pdu_id' is a two-element
[origin,ref] pair like the prev_pdus are]]

::

  {"pdu_id":"a4ecee13e2accdadf56c1025af232176",
   "context":"#example.green",
   "origin":"green",
   "ts":1404838188000,
   "pdu_type":"m.text",
   "prev_pdus":[["blue","99d16afbc857975916f1d73e49e52b65"]],
   "content":...
   "is_state":false}

In contrast to the transaction layer, it is important to note that the
prev_pdus field of a PDU refers to PDUs that any origin server has sent,
rather than previous IDs that this origin has sent. This list may refer to
other PDUs sent by the same origin as the current one, or other origins.

Because of the distributed nature of participants in a Matrix conversation,
it is impossible to establish a globally-consistent total ordering on the
events. However, by annotating each outbound PDU at its origin with IDs of
other PDUs it has received, a partial ordering can be constructed allowing
causality relationships to be preserved. A client can then display these
messages to the end-user in some order consistent with their content and
ensure that no message that is semantically in reply to an earlier one is
ever displayed before it.
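One way a client could derive such an order is a topological sort over the
prev_pdus references - a sketch, assuming prev_pdus entries are
[origin, pdu_id] pairs as in the example above, with concurrent PDUs ordered
by timestamp:

::

  def causal_order(pdus):
      by_ref = {(p["origin"], p["pdu_id"]): p for p in pdus}
      children = {ref: [] for ref in by_ref}
      indegree = {ref: 0 for ref in by_ref}
      for ref, pdu in by_ref.items():
          for parent in pdu.get("prev_pdus", []):
              parent = (parent[0], parent[1])
              if parent in by_ref:
                  children[parent].append(ref)
                  indegree[ref] += 1

      # Kahn's algorithm: repeatedly emit a PDU all of whose known ancestors
      # have been emitted, breaking ties between concurrent PDUs by timestamp.
      ready = sorted((r for r in by_ref if indegree[r] == 0),
                     key=lambda r: by_ref[r]["ts"])
      ordered = []
      while ready:
          ref = ready.pop(0)
          ordered.append(by_ref[ref])
          for child in sorted(children[ref], key=lambda r: by_ref[r]["ts"]):
              indegree[child] -= 1
              if indegree[child] == 0:
                  ready.append(child)
      return ordered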
PDUs fall into two main categories: those that deliver Events, and those that
synchronise State. For PDUs that relate to State synchronisation, additional
keys exist to support this:

::

  {...,
   "is_state":true,
   "state_key":TODO
   "power_level":TODO
   "prev_state_id":TODO
   "prev_state_origin":TODO}

[[TODO(paul): At this point we should probably have a long description of how
State management works, with descriptions of clobbering rules, power levels,
etc etc... But some of that detail is rather up-in-the-air, on the
whiteboard, and so on. This part needs refining. And writing in its own
document as the details relate to the server/system as a whole, not
specifically to server-server federation.]]

EDUs, by comparison to PDUs, do not have an ID, a context, or a list of
"previous" IDs. The only mandatory fields for these are the type, origin and
destination home server names, and the actual nested content.

::

  {"edu_type":"m.presence",
   "origin":"blue",
   "destination":"orange",
   "content":...}


Protocol URLs
=============

All these URLs are namespaced within a prefix of::

  /_matrix/federation/v1/...

For active pushing of messages representing live activity "as it happens"::

  PUT .../send/:transaction_id/
    Body: JSON encoding of a single Transaction

    Response: [[TODO(paul): I don't actually understand what
    ReplicationLayer.on_transaction() is doing here, so I'm not sure what the
    response ought to be]]

The transaction_id path argument will override any ID given in the JSON body.
The destination name will be set to that of the receiving server itself. Each
embedded PDU in the transaction body will be processed.


To fetch a particular PDU::

  GET .../pdu/:origin/:pdu_id/

    Response: JSON encoding of a single Transaction containing one PDU

Retrieves a given PDU from the server. The response will contain a single new
Transaction, inside which will be the requested PDU.


To fetch all the state of a given context::

  GET .../state/:context/

    Response: JSON encoding of a single Transaction containing multiple PDUs

Retrieves a snapshot of the entire current state of the given context. The
response will contain a single Transaction, inside which will be a list of
PDUs that encode the state.


To backfill events on a given context::

  GET .../backfill/:context/
    Query args: v, limit

    Response: JSON encoding of a single Transaction containing multiple PDUs

Retrieves a sliding-window history of previous PDUs that occurred on the
given context. Starting from the PDU ID(s) given in the "v" argument, the
PDUs that preceded it are retrieved, up to a total number given by the
"limit" argument. These are then returned in a new Transaction containing
all of the PDUs.


To stream all the events::

  GET .../pull/
    Query args: origin, v

    Response: JSON encoding of a single Transaction consisting of multiple
    PDUs

Retrieves all of the transactions later than any version given by the "v"
arguments. [[TODO(paul): I'm not sure what the "origin" argument does because
I think at some point in the code it's got swapped around.]]


To make a query::

  GET .../query/:query_type
    Query args: as specified by the individual query types

    Response: JSON encoding of a response object

Performs a single query request on the receiving home server. The Query Type
part of the path specifies the kind of query being made, and its query
arguments have a meaning specific to that kind of query. The response is a
JSON-encoded object whose meaning also depends on the kind of query.
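Taken together, the read-side endpoints above suggest a client along these
lines (a sketch only; ``http_get`` is an assumed helper that performs the
request against the destination server and returns the parsed JSON body):

::

  PREFIX = "/_matrix/federation/v1"

  def get_pdu(http_get, destination, origin, pdu_id):
      return http_get(destination, "%s/pdu/%s/%s/" % (PREFIX, origin, pdu_id))

  def get_state(http_get, destination, context):
      return http_get(destination, "%s/state/%s/" % (PREFIX, context))

  def backfill(http_get, destination, context, versions, limit):
      args = "&".join(["v=%s" % (v,) for v in versions])
      return http_get(
          destination,
          "%s/backfill/%s/?%s&limit=%d" % (PREFIX, context, args, limit),
      )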
@@ -1,11 +0,0 @@
Versioning is, like, hard for backfilling backwards because of the number of Home Servers involved.

The way we solve this is by doing versioning as an acyclic directed graph of PDUs. For backfilling purposes, this is done on a per context basis.
When we send a PDU we include all PDUs that have been received for that context that haven't been subsequently listed in a later PDU. The trivial case is a simple list of PDUs, e.g. A <- B <- C. However, if two servers send out a PDU at the same time, both B and C would point at A - a later PDU would then list both B and C.

Problems with opaque version strings:
    - How do you do clustering without mandating that a cluster can only have one transaction in flight to a given remote home server at a time?
      If you have multiple transactions sent at once, then you might drop one transaction, receive another with a version that is later than the dropped transaction, at which point ARGH WE LOST A TRANSACTION.
    - How do you do backfilling? A version string defines a point in a stream w.r.t. a single home server, not a point in the context.

We only need to store the ends of the directed graph; we DO NOT need to do the whole one-table-of-nodes-and-one-of-edges thing.
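A sketch of the bookkeeping this implies (illustrative, not actual Synapse
code): per context, keep only the unreferenced "ends" of the graph, retiring
each end as soon as a later PDU lists it.

::

  class ContextDag(object):
      def __init__(self):
          self.ends = set()  # (origin, pdu_id) pairs nothing refers to yet

      def on_pdu(self, origin, pdu_id, prev_pdus):
          # Anything the new PDU references is no longer an "end".
          for parent in prev_pdus:
              self.ends.discard(tuple(parent))
          self.ends.add((origin, pdu_id))

      def prev_pdus_for_new_pdu(self):
          # Stamp every current "end" onto the next PDU we originate.
          return [list(ref) for ref in self.ends]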
1
docs/sphinx/README.rst
Normal file

@@ -0,0 +1 @@
TODO: how (if at all) is this actually maintained?

47
scripts/check_event_hash.py
Normal file

@@ -0,0 +1,47 @@
from synapse.crypto.event_signing import *
from syutil.base64util import encode_base64

import argparse
import hashlib
import logging
import sys
import json


class dictobj(dict):
    def __init__(self, *args, **kargs):
        dict.__init__(self, *args, **kargs)
        self.__dict__ = self

    def get_dict(self):
        return dict(self)

    def get_full_dict(self):
        return dict(self)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("input_json", nargs="?", type=argparse.FileType('r'),
                        default=sys.stdin)
    args = parser.parse_args()
    logging.basicConfig()

    event_json = dictobj(json.load(args.input_json))

    algorithms = {
        "sha256": hashlib.sha256,
    }

    for alg_name in event_json.hashes:
        if check_event_content_hash(event_json, algorithms[alg_name]):
            print "PASS content hash %s" % (alg_name,)
        else:
            print "FAIL content hash %s" % (alg_name,)

    for algorithm in algorithms.values():
        name, h_bytes = compute_event_reference_hash(event_json, algorithm)
        print "Reference hash %s: %s" % (name, encode_base64(h_bytes))


if __name__ == "__main__":
    main()

73
scripts/check_signature.py
Normal file

@@ -0,0 +1,73 @@
from syutil.crypto.jsonsign import verify_signed_json
from syutil.crypto.signing_key import (
    decode_verify_key_bytes, write_signing_keys
)
from syutil.base64util import decode_base64

import urllib2
import json
import sys
import dns.resolver
import pprint
import argparse
import logging


def get_targets(server_name):
    if ":" in server_name:
        target, port = server_name.split(":")
        yield (target, int(port))
        return
    try:
        answers = dns.resolver.query("_matrix._tcp." + server_name, "SRV")
        for srv in answers:
            yield (srv.target, srv.port)
    except dns.resolver.NXDOMAIN:
        yield (server_name, 8480)


def get_server_keys(server_name, target, port):
    url = "https://%s:%i/_matrix/key/v1" % (target, port)
    keys = json.load(urllib2.urlopen(url))
    verify_keys = {}
    for key_id, key_base64 in keys["verify_keys"].items():
        verify_key = decode_verify_key_bytes(key_id, decode_base64(key_base64))
        verify_signed_json(keys, server_name, verify_key)
        verify_keys[key_id] = verify_key
    return verify_keys


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("signature_name")
    parser.add_argument("input_json", nargs="?", type=argparse.FileType('r'),
                        default=sys.stdin)

    args = parser.parse_args()
    logging.basicConfig()

    server_name = args.signature_name
    keys = {}
    for target, port in get_targets(server_name):
        try:
            keys = get_server_keys(server_name, target, port)
            print "Using keys from https://%s:%s/_matrix/key/v1" % (target, port)
            write_signing_keys(sys.stdout, keys.values())
            break
        except:
            logging.exception("Error talking to %s:%s", target, port)

    json_to_check = json.load(args.input_json)
    print "Checking JSON:"
    for key_id in json_to_check["signatures"][args.signature_name]:
        try:
            key = keys[key_id]
            verify_signed_json(json_to_check, args.signature_name, key)
            print "PASS %s" % (key_id,)
        except:
            logging.exception("Check for key %s failed" % (key_id,))
            print "FAIL %s" % (key_id,)


if __name__ == '__main__':
    main()

69
scripts/hash_history.py
Normal file

@@ -0,0 +1,69 @@
from synapse.storage.pdu import PduStore
from synapse.storage.signatures import SignatureStore
from synapse.storage._base import SQLBaseStore
from synapse.federation.units import Pdu
from synapse.crypto.event_signing import (
    add_event_pdu_content_hash, compute_pdu_event_reference_hash
)
from synapse.api.events.utils import prune_pdu
from syutil.base64util import encode_base64, decode_base64
from syutil.jsonutil import encode_canonical_json
import sqlite3
import sys


class Store(object):
    _get_pdu_tuples = PduStore.__dict__["_get_pdu_tuples"]
    _get_pdu_content_hashes_txn = SignatureStore.__dict__["_get_pdu_content_hashes_txn"]
    _get_prev_pdu_hashes_txn = SignatureStore.__dict__["_get_prev_pdu_hashes_txn"]
    _get_pdu_origin_signatures_txn = SignatureStore.__dict__["_get_pdu_origin_signatures_txn"]
    _store_pdu_content_hash_txn = SignatureStore.__dict__["_store_pdu_content_hash_txn"]
    _store_pdu_reference_hash_txn = SignatureStore.__dict__["_store_pdu_reference_hash_txn"]
    _store_prev_pdu_hash_txn = SignatureStore.__dict__["_store_prev_pdu_hash_txn"]
    _simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"]


store = Store()


def select_pdus(cursor):
    cursor.execute(
        "SELECT pdu_id, origin FROM pdus ORDER BY depth ASC"
    )

    ids = cursor.fetchall()

    pdu_tuples = store._get_pdu_tuples(cursor, ids)

    pdus = [Pdu.from_pdu_tuple(p) for p in pdu_tuples]

    reference_hashes = {}

    for pdu in pdus:
        try:
            if pdu.prev_pdus:
                print "PROCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus
                for pdu_id, origin, hashes in pdu.prev_pdus:
                    ref_alg, ref_hsh = reference_hashes[(pdu_id, origin)]
                    hashes[ref_alg] = encode_base64(ref_hsh)
                    store._store_prev_pdu_hash_txn(cursor, pdu.pdu_id, pdu.origin, pdu_id, origin, ref_alg, ref_hsh)
                print "SUCCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus
            pdu = add_event_pdu_content_hash(pdu)
            ref_alg, ref_hsh = compute_pdu_event_reference_hash(pdu)
            reference_hashes[(pdu.pdu_id, pdu.origin)] = (ref_alg, ref_hsh)
            store._store_pdu_reference_hash_txn(cursor, pdu.pdu_id, pdu.origin, ref_alg, ref_hsh)

            for alg, hsh_base64 in pdu.hashes.items():
                print alg, hsh_base64
                store._store_pdu_content_hash_txn(cursor, pdu.pdu_id, pdu.origin, alg, decode_base64(hsh_base64))

        except:
            print "FAILED_", pdu.pdu_id, pdu.origin, pdu.prev_pdus


def main():
    conn = sqlite3.connect(sys.argv[1])
    cursor = conn.cursor()
    select_pdus(cursor)
    conn.commit()


if __name__ == '__main__':
    main()
8
setup.py

@@ -26,12 +26,13 @@ def read(fname):
     return open(os.path.join(os.path.dirname(__file__), fname)).read()

 setup(
-    name="SynapseHomeServer",
-    version="0.0.1",
+    name="synapse",
+    version=read("VERSION"),
     packages=find_packages(exclude=["tests", "tests.*"]),
     description="Reference Synapse Home Server",
     install_requires=[
         "syutil==0.0.2",
+        "syweb==0.0.1",
         "Twisted>=14.0.0",
         "service_identity>=1.0.0",
         "pyopenssl>=0.14",
@@ -44,6 +45,7 @@ setup(
     dependency_links=[
         "https://github.com/matrix-org/syutil/tarball/v0.0.2#egg=syutil-0.0.2",
         "https://github.com/pyca/pynacl/tarball/52dbe2dc33f1#egg=pynacl-0.3.0",
+        "https://github.com/matrix-org/matrix-angular-sdk/tarball/master/#egg=syweb-0.0.1",
     ],
     setup_requires=[
         "setuptools_trial",
@@ -52,9 +54,11 @@ setup(
         "mock"
     ],
     include_package_data=True,
+    zip_safe=False,
     long_description=read("README.rst"),
     entry_points="""
     [console_scripts]
+    synctl=synapse.app.synctl:main
     synapse-homeserver=synapse.app.homeserver:run
     """
 )
@@ -21,8 +21,10 @@ from synapse.api.constants import Membership, JoinRules
 from synapse.api.errors import AuthError, StoreError, Codes, SynapseError
 from synapse.api.events.room import (
     RoomMemberEvent, RoomPowerLevelsEvent, RoomRedactionEvent,
+    RoomJoinRulesEvent, RoomCreateEvent,
 )
 from synapse.util.logutils import log_function
+from syutil.base64util import encode_base64

 import logging
@@ -34,9 +36,9 @@ class Auth(object):
     def __init__(self, hs):
         self.hs = hs
         self.store = hs.get_datastore()
+        self.state = hs.get_state_handler()

-    @defer.inlineCallbacks
-    def check(self, event, snapshot, raises=False):
+    def check(self, event, raises=False):
         """ Checks if this event is correctly authed.

         Returns:
@@ -47,43 +49,51 @@ class Auth(object):
         """
         try:
             if hasattr(event, "room_id"):
-                is_state = hasattr(event, "state_key")
+                if event.old_state_events is None:
+                    # Oh, we don't know what the state of the room was, so we
+                    # are trusting that this is allowed (at least for now)
+                    logger.warn("Trusting event: %s", event.event_id)
+                    return True
+
+                if hasattr(event, "outlier") and event.outlier is True:
+                    # TODO (erikj): Auth for outliers is done differently.
+                    return True
+
+                if event.type == RoomCreateEvent.TYPE:
+                    # FIXME
+                    return True

                 if event.type == RoomMemberEvent.TYPE:
-                    yield self._can_replace_state(event)
-                    allowed = yield self.is_membership_change_allowed(event)
-                    defer.returnValue(allowed)
-                    return
+                    allowed = self.is_membership_change_allowed(event)
+                    if allowed:
+                        logger.debug("Allowing! %s", event)
+                    else:
+                        logger.debug("Denying! %s", event)
+                    return allowed

-                self._check_joined_room(
-                    member=snapshot.membership_state,
-                    user_id=snapshot.user_id,
-                    room_id=snapshot.room_id,
-                )
-
-                if is_state:
-                    # TODO (erikj): This really only should be called for *new*
-                    # state
-                    yield self._can_add_state(event)
-                    yield self._can_replace_state(event)
-                else:
-                    yield self._can_send_event(event)
+                self.check_event_sender_in_room(event)
+                self._can_send_event(event)

                 if event.type == RoomPowerLevelsEvent.TYPE:
-                    yield self._check_power_levels(event)
+                    self._check_power_levels(event)

                 if event.type == RoomRedactionEvent.TYPE:
-                    yield self._check_redaction(event)
+                    self._check_redaction(event)

-                defer.returnValue(True)
+                logger.debug("Allowing! %s", event)
+                return True
             else:
                 raise AuthError(500, "Unknown event: %s" % event)
         except AuthError as e:
-            logger.info("Event auth check failed on event %s with msg: %s",
-                        event, e.msg)
+            logger.info(
+                "Event auth check failed on event %s with msg: %s",
+                event, e.msg
+            )
+            logger.info("Denying! %s", event)
             if raises:
-                raise e
-            defer.returnValue(False)
+                raise
+            return False

     @defer.inlineCallbacks
     def check_joined_room(self, room_id, user_id):
@@ -98,45 +108,92 @@ class Auth(object):
             pass
         defer.returnValue(None)

+    @defer.inlineCallbacks
+    def check_host_in_room(self, room_id, host):
+        curr_state = yield self.state.get_current_state(room_id)
+
+        for event in curr_state:
+            if event.type == RoomMemberEvent.TYPE:
+                try:
+                    if self.hs.parse_userid(event.state_key).domain != host:
+                        continue
+                except:
+                    logger.warn("state_key not user_id: %s", event.state_key)
+                    continue
+
+                if event.content["membership"] == Membership.JOIN:
+                    defer.returnValue(True)
+
+        defer.returnValue(False)
+
+    def check_event_sender_in_room(self, event):
+        key = (RoomMemberEvent.TYPE, event.user_id, )
+        member_event = event.state_events.get(key)
+
+        return self._check_joined_room(
+            member_event,
+            event.user_id,
+            event.room_id
+        )
+
     def _check_joined_room(self, member, user_id, room_id):
         if not member or member.membership != Membership.JOIN:
             raise AuthError(403, "User %s not in room %s (%s)" % (
                 user_id, room_id, repr(member)
             ))

-    @defer.inlineCallbacks
+    @log_function
     def is_membership_change_allowed(self, event):
         target_user_id = event.state_key

-        # does this room even exist
-        room = yield self.store.get_room(event.room_id)
-        if not room:
-            raise AuthError(403, "Room does not exist")
-
         # get info about the caller
-        try:
-            caller = yield self.store.get_room_member(
-                user_id=event.user_id,
-                room_id=event.room_id)
-        except:
-            caller = None
-        caller_in_room = caller and caller.membership == "join"
+        key = (RoomMemberEvent.TYPE, event.user_id, )
+        caller = event.old_state_events.get(key)
+
+        caller_in_room = caller and caller.membership == Membership.JOIN
+        caller_invited = caller and caller.membership == Membership.INVITE

         # get info about the target
-        try:
-            target = yield self.store.get_room_member(
-                user_id=target_user_id,
-                room_id=event.room_id)
-        except:
-            target = None
-        target_in_room = target and target.membership == "join"
+        key = (RoomMemberEvent.TYPE, target_user_id, )
+        target = event.old_state_events.get(key)
+
+        target_in_room = target and target.membership == Membership.JOIN

         membership = event.content["membership"]

-        join_rule = yield self.store.get_room_join_rule(event.room_id)
-        if not join_rule:
+        key = (RoomJoinRulesEvent.TYPE, "", )
+        join_rule_event = event.old_state_events.get(key)
+        if join_rule_event:
+            join_rule = join_rule_event.content.get(
+                "join_rule", JoinRules.INVITE
+            )
+        else:
             join_rule = JoinRules.INVITE

+        user_level = self._get_power_level_from_event_state(
+            event,
+            event.user_id,
+        )
+
+        ban_level, kick_level, redact_level = (
+            self._get_ops_level_from_event_state(
+                event
+            )
+        )
+
+        logger.debug(
+            "is_membership_change_allowed: %s",
+            {
+                "caller_in_room": caller_in_room,
+                "caller_invited": caller_invited,
+                "target_in_room": target_in_room,
+                "membership": membership,
+                "join_rule": join_rule,
+                "target_user_id": target_user_id,
+                "event.user_id": event.user_id,
+            }
+        )
+
         if Membership.INVITE == membership:
             # TODO (erikj): We should probably handle this more intelligently
             # PRIVATE join rules.
@@ -153,13 +210,10 @@ class Auth(object):
             # joined: It's a NOOP
             if event.user_id != target_user_id:
                 raise AuthError(403, "Cannot force another user to join.")
-            elif join_rule == JoinRules.PUBLIC or room.is_public:
+            elif join_rule == JoinRules.PUBLIC:
                 pass
             elif join_rule == JoinRules.INVITE:
-                if (
-                    not caller or caller.membership not in
-                    [Membership.INVITE, Membership.JOIN]
-                ):
+                if not caller_in_room and not caller_invited:
                     raise AuthError(403, "You are not invited to this room.")
             else:
                 # TODO (erikj): may_join list
@@ -171,29 +225,16 @@ class Auth(object):
             if not caller_in_room:  # trying to leave a room you aren't joined
                 raise AuthError(403, "You are not in room %s." % event.room_id)
             elif target_user_id != event.user_id:
-                user_level = yield self.store.get_power_level(
-                    event.room_id,
-                    event.user_id,
-                )
-                _, kick_level, _ = yield self.store.get_ops_levels(event.room_id)
-
                 if kick_level:
                     kick_level = int(kick_level)
                 else:
-                    kick_level = 50
+                    kick_level = 50  # FIXME (erikj): What should we do here?

                 if user_level < kick_level:
                     raise AuthError(
                         403, "You cannot kick user %s." % target_user_id
                     )
         elif Membership.BAN == membership:
-            user_level = yield self.store.get_power_level(
-                event.room_id,
-                event.user_id,
-            )
-
-            ban_level, _, _ = yield self.store.get_ops_levels(event.room_id)
-
             if ban_level:
                 ban_level = int(ban_level)
             else:
@@ -204,7 +245,30 @@ class Auth(object):
         else:
             raise AuthError(500, "Unknown membership %s" % membership)

-        defer.returnValue(True)
+        return True
+
+    def _get_power_level_from_event_state(self, event, user_id):
+        key = (RoomPowerLevelsEvent.TYPE, "", )
+        power_level_event = event.old_state_events.get(key)
+        level = None
+        if power_level_event:
+            level = power_level_event.content.get("users", {}).get(user_id)
+            if not level:
+                level = power_level_event.content.get("users_default", 0)
+
+        return level
+
+    def _get_ops_level_from_event_state(self, event):
+        key = (RoomPowerLevelsEvent.TYPE, "", )
+        power_level_event = event.old_state_events.get(key)
+
+        if power_level_event:
+            return (
+                power_level_event.content.get("ban", 50),
+                power_level_event.content.get("kick", 50),
+                power_level_event.content.get("redact", 50),
+            )
+        return None, None, None,

     @defer.inlineCallbacks
     def get_user_by_req(self, request):
@@ -229,7 +293,7 @@ class Auth(object):
             default=[""]
         )[0]
         if user and access_token and ip_addr:
-            self.store.insert_client_ip(
+            yield self.store.insert_client_ip(
                 user=user,
                 access_token=access_token,
                 device_id=user_info["device_id"],
@@ -273,17 +337,81 @@ class Auth(object):
         return self.store.is_server_admin(user)

     @defer.inlineCallbacks
+    def add_auth_events(self, event):
+        if event.type == RoomCreateEvent.TYPE:
+            event.auth_events = []
+            return
+
+        auth_events = []
+
+        key = (RoomPowerLevelsEvent.TYPE, "", )
+        power_level_event = event.old_state_events.get(key)
+
+        if power_level_event:
+            auth_events.append(power_level_event.event_id)
+
+        key = (RoomJoinRulesEvent.TYPE, "", )
+        join_rule_event = event.old_state_events.get(key)
+
+        key = (RoomMemberEvent.TYPE, event.user_id, )
+        member_event = event.old_state_events.get(key)
+
+        if join_rule_event:
+            join_rule = join_rule_event.content.get("join_rule")
+            is_public = join_rule == JoinRules.PUBLIC if join_rule else False
+        else:
+            is_public = False
+
+        if event.type == RoomMemberEvent.TYPE:
+            e_type = event.content["membership"]
+            if e_type in [Membership.JOIN, Membership.INVITE]:
+                if join_rule_event:
+                    auth_events.append(join_rule_event.event_id)
+
+            if member_event and not is_public:
+                auth_events.append(member_event.event_id)
+        elif member_event:
+            if member_event.content["membership"] == Membership.JOIN:
+                auth_events.append(member_event.event_id)
+
+        hashes = yield self.store.get_event_reference_hashes(
+            auth_events
+        )
+        hashes = [
+            {
+                k: encode_base64(v) for k, v in h.items()
+                if k == "sha256"
+            }
+            for h in hashes
+        ]
+        event.auth_events = zip(auth_events, hashes)
+
     @log_function
     def _can_send_event(self, event):
-        send_level = yield self.store.get_send_event_level(event.room_id)
+        key = (RoomPowerLevelsEvent.TYPE, "", )
+        send_level_event = event.old_state_events.get(key)
+        send_level = None
+        if send_level_event:
+            send_level = send_level_event.content.get("events", {}).get(
+                event.type
+            )
+            if not send_level:
+                if hasattr(event, "state_key"):
+                    send_level = send_level_event.content.get(
+                        "state_default", 50
+                    )
+                else:
+                    send_level = send_level_event.content.get(
+                        "events_default", 0
+                    )

         if send_level:
             send_level = int(send_level)
         else:
             send_level = 0

-        user_level = yield self.store.get_power_level(
-            event.room_id,
+        user_level = self._get_power_level_from_event_state(
+            event,
             event.user_id,
         )
@@ -294,84 +422,22 @@ class Auth(object):

         if user_level < send_level:
             raise AuthError(
-                403, "You don't have permission to post to the room"
+                403,
+                "You don't have permission to post that to the room. " +
+                "user_level (%d) < send_level (%d)" % (user_level, send_level)
             )

-        defer.returnValue(True)
+        return True

-    @defer.inlineCallbacks
-    def _can_add_state(self, event):
-        add_level = yield self.store.get_add_state_level(event.room_id)
-
-        if not add_level:
-            defer.returnValue(True)
-
-        add_level = int(add_level)
-
-        user_level = yield self.store.get_power_level(
-            event.room_id,
-            event.user_id,
-        )
-
-        user_level = int(user_level)
-
-        if user_level < add_level:
-            raise AuthError(
-                403, "You don't have permission to add state to the room"
-            )
-
-        defer.returnValue(True)
-
-    @defer.inlineCallbacks
-    def _can_replace_state(self, event):
-        current_state = yield self.store.get_current_state(
-            event.room_id,
-            event.type,
-            event.state_key,
-        )
-
-        if current_state:
-            current_state = current_state[0]
-
-        user_level = yield self.store.get_power_level(
-            event.room_id,
-            event.user_id,
-        )
-
-        if user_level:
-            user_level = int(user_level)
-        else:
-            user_level = 0
-
-        logger.debug(
-            "Checking power level for %s, %s", event.user_id, user_level
-        )
-        if current_state and hasattr(current_state, "required_power_level"):
-            req = current_state.required_power_level
-
-            logger.debug("Checked power level for %s, %s", event.user_id, req)
-            if user_level < req:
-                raise AuthError(
-                    403,
-                    "You don't have permission to change that state"
-                )
-
-    @defer.inlineCallbacks
     def _check_redaction(self, event):
-        user_level = yield self.store.get_power_level(
-            event.room_id,
+        user_level = self._get_power_level_from_event_state(
+            event,
             event.user_id,
         )

-        if user_level:
-            user_level = int(user_level)
-        else:
-            user_level = 0
-
-        _, _, redact_level = yield self.store.get_ops_levels(event.room_id)
-
-        if not redact_level:
-            redact_level = 50
+        _, _, redact_level = self._get_ops_level_from_event_state(
+            event
+        )

         if user_level < redact_level:
             raise AuthError(
@@ -379,16 +445,10 @@ class Auth(object):
                 "You don't have permission to redact events"
             )

-    @defer.inlineCallbacks
     def _check_power_levels(self, event):
-        for k, v in event.content.items():
-            if k == "default":
-                continue
-
-            # FIXME (erikj): We don't want hsob_Ts in content.
-            if k == "hsob_ts":
-                continue
-
+        user_list = event.content.get("users", {})
+        # Validate users
+        for k, v in user_list.items():
             try:
                 self.hs.parse_userid(k)
             except:
@@ -399,80 +459,68 @@ class Auth(object):
             except:
                 raise SynapseError(400, "Not a valid power level: %s" % (v,))

-        current_state = yield self.store.get_current_state(
-            event.room_id,
-            event.type,
-            event.state_key,
-        )
+        key = (event.type, event.state_key, )
+        current_state = event.old_state_events.get(key)

         if not current_state:
             return
-        else:
-            current_state = current_state[0]

-        user_level = yield self.store.get_power_level(
-            event.room_id,
+        user_level = self._get_power_level_from_event_state(
+            event,
             event.user_id,
         )

-        if user_level:
-            user_level = int(user_level)
-        else:
-            user_level = 0
+        # Check other levels:
+        levels_to_check = [
+            ("users_default", []),
+            ("events_default", []),
+            ("ban", []),
+            ("redact", []),
+            ("kick", []),
+        ]

-        old_list = current_state.content
+        old_list = current_state.content.get("users")
+        for user in set(old_list.keys() + user_list.keys()):
+            levels_to_check.append(
+                (user, ["users"])
+            )

-        # FIXME (erikj)
-        old_people = {k: v for k, v in old_list.items() if k.startswith("@")}
-        new_people = {
-            k: v for k, v in event.content.items()
-            if k.startswith("@")
-        }
+        old_list = current_state.content.get("events")
+        new_list = event.content.get("events")
+        for ev_id in set(old_list.keys() + new_list.keys()):
+            levels_to_check.append(
+                (ev_id, ["events"])
+            )

-        removed = set(old_people.keys()) - set(new_people.keys())
-        added = set(new_people.keys()) - set(old_people.keys())
-        same = set(old_people.keys()) & set(new_people.keys())
+        old_state = current_state.content
+        new_state = event.content

-        for r in removed:
-            if int(old_list[r]) > user_level:
-                raise AuthError(
-                    403,
-                    "You don't have permission to remove user: %s" % (r, )
-                )
+        for level_to_check, dir in levels_to_check:
+            old_loc = old_state
+            for d in dir:
+                old_loc = old_loc.get(d, {})

-        for n in added:
-            if int(event.content[n]) > user_level:
-                raise AuthError(
-                    403,
-                    "You don't have permission to add ops level greater "
-                    "than your own"
-                )
+            new_loc = new_state
+            for d in dir:
+                new_loc = new_loc.get(d, {})

-        for s in same:
-            if int(event.content[s]) != int(old_list[s]):
-                if int(event.content[s]) > user_level:
-                    raise AuthError(
-                        403,
-                        "You don't have permission to add ops level greater "
-                        "than your own"
-                    )
+            if level_to_check in old_loc:
+                old_level = int(old_loc[level_to_check])
+            else:
+                old_level = None

-        if "default" in old_list:
-            old_default = int(old_list["default"])
+            if level_to_check in new_loc:
+                new_level = int(new_loc[level_to_check])
+            else:
+                new_level = None

-            if old_default > user_level:
-                raise AuthError(
-                    403,
-                    "You don't have permission to add ops level greater than "
-                    "your own"
-                )
+            if new_level is not None and old_level is not None:
+                if new_level == old_level:
+                    continue

-        if "default" in event.content:
-            new_default = int(event.content["default"])
-
-            if new_default > user_level:
-                raise AuthError(
-                    403,
-                    "You don't have permission to add ops level greater "
-                    "than your own"
-                )
+            if old_level > user_level or new_level > user_level:
+                raise AuthError(
+                    403,
+                    "You don't have permission to add ops level greater "
+                    "than your own"
+                )
@@ -158,3 +158,37 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
     for key, value in kwargs.iteritems():
         err[key] = value
     return err
+
+
+class FederationError(RuntimeError):
+    """ This class is used to inform remote home servers about erroneous
+    PDUs they sent us.
+
+    FATAL: The remote server could not interpret the source event.
+        (e.g., it was missing a required field)
+    ERROR: The remote server interpreted the event, but it failed some other
+        check (e.g. auth)
+    WARN: The remote server accepted the event, but believes some part of it
+        is wrong (e.g., it referred to an invalid event)
+    """
+
+    def __init__(self, level, code, reason, affected, source=None):
+        if level not in ["FATAL", "ERROR", "WARN"]:
+            raise ValueError("Level is not valid: %s" % (level,))
+        self.level = level
+        self.code = code
+        self.reason = reason
+        self.affected = affected
+        self.source = source
+
+        msg = "%s %s: %s" % (level, code, reason,)
+
+        super(FederationError, self).__init__(msg)
+
+    def get_dict(self):
+        return {
+            "level": self.level,
+            "code": self.code,
+            "reason": self.reason,
+            "affected": self.affected,
+            "source": self.source if self.source else self.affected,
+        }
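For reference, constructing one of these errors and serialising it for the
remote server; the event ID here is illustrative::

    err = FederationError(
        level="ERROR",
        code=403,
        reason="Failed auth check",
        affected="some_event_id",
    )
    err.get_dict()
    # {'level': 'ERROR', 'code': 403, 'reason': 'Failed auth check',
    #  'affected': 'some_event_id', 'source': 'some_event_id'}

Note that ``source`` falls back to ``affected`` when no separate source event
is given.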
@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from synapse.api.errors import SynapseError, Codes
 from synapse.util.jsonobject import JsonEncodedObject
 
 
@@ -56,22 +55,26 @@ class SynapseEvent(JsonEncodedObject):
         "user_id",  # sender/initiator
         "content",  # HTTP body, JSON
         "state_key",
-        "required_power_level",
         "age_ts",
         "prev_content",
-        "prev_state",
+        "replaces_state",
         "redacted_because",
+        "origin_server_ts",
     ]
 
     internal_keys = [
         "is_state",
-        "prev_events",
         "depth",
         "destinations",
        "origin",
         "outlier",
-        "power_level",
         "redacted",
+        "prev_events",
+        "hashes",
+        "signatures",
+        "prev_state",
+        "auth_events",
+        "state_hash",
     ]
 
     required_keys = [
@@ -82,8 +85,8 @@ class SynapseEvent(JsonEncodedObject):
 
     def __init__(self, raises=True, **kwargs):
         super(SynapseEvent, self).__init__(**kwargs)
-        if "content" in kwargs:
-            self.check_json(self.content, raises=raises)
+        # if "content" in kwargs:
+        #     self.check_json(self.content, raises=raises)
 
     def get_content_template(self):
         """ Retrieve the JSON template for this event as a dict.
@@ -114,65 +117,11 @@ class SynapseEvent(JsonEncodedObject):
         """
         raise NotImplementedError("get_content_template not implemented.")
 
-    def check_json(self, content, raises=True):
-        """Checks the given JSON content abides by the rules of the template.
-
-        Args:
-            content : A JSON object to check.
-            raises: True to raise a SynapseError if the check fails.
-        Returns:
-            True if the content passes the template. Returns False if the check
-            fails and raises=False.
-        Raises:
-            SynapseError if the check fails and raises=True.
-        """
-        # recursively call to inspect each layer
-        err_msg = self._check_json(content, self.get_content_template())
-        if err_msg:
-            if raises:
-                raise SynapseError(400, err_msg, Codes.BAD_JSON)
-            else:
-                return False
-        else:
-            return True
-
-    def _check_json(self, content, template):
-        """Check content and template matches.
-
-        If the template is a dict, each key in the dict will be validated with
-        the content, else it will just compare the types of content and
-        template. This basic type check is required because this function will
-        be recursively called and could be called with just strs or ints.
-
-        Args:
-            content: The content to validate.
-            template: The validation template.
-        Returns:
-            str: An error message if the validation fails, else None.
-        """
-        if type(content) != type(template):
-            return "Mismatched types: %s" % template
-
-        if type(template) == dict:
-            for key in template:
-                if key not in content:
-                    return "Missing %s key" % key
-
-                if type(content[key]) != type(template[key]):
-                    return "Key %s is of the wrong type (got %s, want %s)" % (
-                        key, type(content[key]), type(template[key]))
-
-                if type(content[key]) == dict:
-                    # we must go deeper
-                    msg = self._check_json(content[key], template[key])
-                    if msg:
-                        return msg
-                elif type(content[key]) == list:
-                    # make sure each item type in content matches the template
-                    for entry in content[key]:
-                        msg = self._check_json(entry, template[key][0])
-                        if msg:
-                            return msg
+    def get_pdu_json(self):
+        pdu_json = self.get_full_dict()
+        pdu_json.pop("destination", None)
+        pdu_json.pop("outlier", None)
+        return pdu_json
 
 
 class SynapseStateEvent(SynapseEvent):
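The new ``get_pdu_json`` simply takes the event's full dict and strips fields
that are only meaningful on the local server; a rough standalone equivalent,
with an illustrative name::

    def pdu_json_from_event_dict(event_dict):
        # "destination" and "outlier" never go over federation
        pdu_json = dict(event_dict)
        pdu_json.pop("destination", None)
        pdu_json.pop("outlier", None)
        return pdu_json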
@@ -16,11 +16,13 @@
 from synapse.api.events.room import (
     RoomTopicEvent, MessageEvent, RoomMemberEvent, FeedbackEvent,
     InviteJoinEvent, RoomConfigEvent, RoomNameEvent, GenericEvent,
-    RoomPowerLevelsEvent, RoomJoinRulesEvent, RoomOpsPowerLevelsEvent,
-    RoomCreateEvent, RoomAddStateLevelEvent, RoomSendEventLevelEvent,
+    RoomPowerLevelsEvent, RoomJoinRulesEvent,
+    RoomCreateEvent,
     RoomRedactionEvent,
 )
 
+from synapse.types import EventID
+
 from synapse.util.stringutils import random_string
 
 
@@ -37,9 +39,6 @@ class EventFactory(object):
         RoomPowerLevelsEvent,
         RoomJoinRulesEvent,
         RoomCreateEvent,
-        RoomAddStateLevelEvent,
-        RoomSendEventLevelEvent,
-        RoomOpsPowerLevelsEvent,
         RoomRedactionEvent,
     ]
 
@@ -51,12 +50,26 @@ class EventFactory(object):
         self.clock = hs.get_clock()
         self.hs = hs
 
+        self.event_id_count = 0
+
+    def create_event_id(self):
+        i = str(self.event_id_count)
+        self.event_id_count += 1
+
+        local_part = str(int(self.clock.time())) + i + random_string(5)
+
+        e_id = EventID.create_local(local_part, self.hs)
+
+        return e_id.to_string()
+
     def create_event(self, etype=None, **kwargs):
         kwargs["type"] = etype
         if "event_id" not in kwargs:
-            kwargs["event_id"] = "%s@%s" % (
-                random_string(10), self.hs.hostname
-            )
+            kwargs["event_id"] = self.create_event_id()
+            kwargs["origin"] = self.hs.hostname
+        else:
+            ev_id = self.hs.parse_eventid(kwargs["event_id"])
+            kwargs["origin"] = ev_id.domain
 
         if "origin_server_ts" not in kwargs:
             kwargs["origin_server_ts"] = int(self.clock.time_msec())
@@ -154,27 +154,6 @@ class RoomPowerLevelsEvent(SynapseStateEvent):
         return {}
 
 
-class RoomAddStateLevelEvent(SynapseStateEvent):
-    TYPE = "m.room.add_state_level"
-
-    def get_content_template(self):
-        return {}
-
-
-class RoomSendEventLevelEvent(SynapseStateEvent):
-    TYPE = "m.room.send_event_level"
-
-    def get_content_template(self):
-        return {}
-
-
-class RoomOpsPowerLevelsEvent(SynapseStateEvent):
-    TYPE = "m.room.ops_levels"
-
-    def get_content_template(self):
-        return {}
-
-
 class RoomAliasesEvent(SynapseStateEvent):
     TYPE = "m.room.aliases"
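The new ``create_event_id`` builds the local part of the ID from a timestamp,
a per-process counter, and a short random suffix. A self-contained sketch of
the same scheme, with illustrative names::

    import random
    import string
    import time

    def make_local_part(counter):
        suffix = "".join(random.choice(string.ascii_letters) for _ in range(5))
        return str(int(time.time())) + str(counter) + suffix

    make_local_part(0)  # e.g. "1415000000" + "0" + "AbCdE"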
@@ -15,21 +15,36 @@
 
 from .room import (
     RoomMemberEvent, RoomJoinRulesEvent, RoomPowerLevelsEvent,
-    RoomAddStateLevelEvent, RoomSendEventLevelEvent, RoomOpsPowerLevelsEvent,
     RoomAliasesEvent, RoomCreateEvent,
 )
 
 
 def prune_event(event):
-    """ Prunes the given event of all keys we don't know about or think could
-    potentially be dodgy.
+    """ Returns a pruned version of the given event, which removes all keys we
+    don't know about or think could potentially be dodgy.
 
     This is used when we "redact" an event. We want to remove all fields that
     the user has specified, but we do want to keep necessary information like
     type, state_key etc.
     """
+    event_type = event.type
 
-    # Remove all extraneous fields.
-    event.unrecognized_keys = {}
+    allowed_keys = [
+        "event_id",
+        "user_id",
+        "room_id",
+        "hashes",
+        "signatures",
+        "content",
+        "type",
+        "state_key",
+        "depth",
+        "prev_events",
+        "prev_state",
+        "auth_events",
+        "origin",
+        "origin_server_ts",
+    ]
 
     new_content = {}
 
@@ -38,27 +53,33 @@ def prune_event(event):
         if field in event.content:
             new_content[field] = event.content[field]
 
-    if event.type == RoomMemberEvent.TYPE:
+    if event_type == RoomMemberEvent.TYPE:
         add_fields("membership")
-    elif event.type == RoomCreateEvent.TYPE:
+    elif event_type == RoomCreateEvent.TYPE:
         add_fields("creator")
-    elif event.type == RoomJoinRulesEvent.TYPE:
+    elif event_type == RoomJoinRulesEvent.TYPE:
         add_fields("join_rule")
-    elif event.type == RoomPowerLevelsEvent.TYPE:
-        # TODO: Actually check these are valid user_ids etc.
-        add_fields("default")
-        for k, v in event.content.items():
-            if k.startswith("@") and isinstance(v, (int, long)):
-                new_content[k] = v
-    elif event.type == RoomAddStateLevelEvent.TYPE:
-        add_fields("level")
-    elif event.type == RoomSendEventLevelEvent.TYPE:
-        add_fields("level")
-    elif event.type == RoomOpsPowerLevelsEvent.TYPE:
-        add_fields("kick_level", "ban_level", "redact_level")
-    elif event.type == RoomAliasesEvent.TYPE:
+    elif event_type == RoomPowerLevelsEvent.TYPE:
+        add_fields(
+            "users",
+            "users_default",
+            "events",
+            "events_default",
+            "state_default",
+            "ban",
+            "kick",
+            "redact",
+        )
+    elif event_type == RoomAliasesEvent.TYPE:
         add_fields("aliases")
 
-    event.content = new_content
+    allowed_fields = {
+        k: v
+        for k, v in event.get_full_dict().items()
+        if k in allowed_keys
+    }
 
-    return event
+    allowed_fields["content"] = new_content
+
+    return type(event)(**allowed_fields)
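The reworked ``prune_event`` builds a fresh event from a whitelist rather than
mutating the original in place. The core move, sketched standalone with
illustrative names::

    def prune_dict(full_dict, allowed_keys, new_content):
        # keep only whitelisted top-level keys, then swap in the
        # already-filtered content dict
        allowed = {k: v for k, v in full_dict.items() if k in allowed_keys}
        allowed["content"] = new_content
        return allowed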
87 synapse/api/events/validator.py Normal file
@@ -0,0 +1,87 @@
+# -*- coding: utf-8 -*-
+# Copyright 2014 OpenMarket Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from synapse.api.errors import SynapseError, Codes
+
+
+class EventValidator(object):
+    def __init__(self, hs):
+        pass
+
+    def validate(self, event):
+        """Checks the given JSON content abides by the rules of the template.
+
+        Args:
+            content : A JSON object to check.
+            raises: True to raise a SynapseError if the check fails.
+        Returns:
+            True if the content passes the template. Returns False if the check
+            fails and raises=False.
+        Raises:
+            SynapseError if the check fails and raises=True.
+        """
+        # recursively call to inspect each layer
+        err_msg = self._check_json_template(
+            event.content,
+            event.get_content_template()
+        )
+        if err_msg:
+            raise SynapseError(400, err_msg, Codes.BAD_JSON)
+        else:
+            return True
+
+    def _check_json_template(self, content, template):
+        """Check content and template matches.
+
+        If the template is a dict, each key in the dict will be validated with
+        the content, else it will just compare the types of content and
+        template. This basic type check is required because this function will
+        be recursively called and could be called with just strs or ints.
+
+        Args:
+            content: The content to validate.
+            template: The validation template.
+        Returns:
+            str: An error message if the validation fails, else None.
+        """
+        if type(content) != type(template):
+            return "Mismatched types: %s" % template
+
+        if type(template) == dict:
+            for key in template:
+                if key not in content:
+                    return "Missing %s key" % key
+
+                if type(content[key]) != type(template[key]):
+                    return "Key %s is of the wrong type (got %s, want %s)" % (
+                        key, type(content[key]), type(template[key]))
+
+                if type(content[key]) == dict:
+                    # we must go deeper
+                    msg = self._check_json_template(
+                        content[key],
+                        template[key]
+                    )
+                    if msg:
+                        return msg
+                elif type(content[key]) == list:
+                    # make sure each item type in content matches the template
+                    for entry in content[key]:
+                        msg = self._check_json_template(
+                            entry,
+                            template[key][0]
+                        )
+                        if msg:
+                            return msg
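The template check compares types recursively, so content passes only if every
templated key exists with a matching type. For example, with an illustrative
template and content (error strings are approximate)::

    validator = EventValidator(None)
    template = {"name": u"string goes here"}

    validator._check_json_template({"name": u"My Room"}, template)  # None: ok
    validator._check_json_template({"name": 42}, template)
    # "Key name is of the wrong type (got <type 'int'>, want <type 'unicode'>)"
    validator._check_json_template({}, template)
    # "Missing name key"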
@@ -33,6 +33,7 @@ from synapse.api.urls import (
 )
 from synapse.config.homeserver import HomeServerConfig
 from synapse.crypto import context_factory
+from synapse.util.logcontext import LoggingContext
 
 from daemonize import Daemonize
 import twisted.manhole.telnet
@@ -236,14 +237,17 @@ def setup():
         f.namespace['hs'] = hs
         reactor.listenTCP(config.manhole, f, interface='127.0.0.1')
 
-    hs.start_listening(config.bind_port, config.unsecure_port)
+    bind_port = config.bind_port
+    if config.no_tls:
+        bind_port = None
+    hs.start_listening(bind_port, config.unsecure_port)
 
     if config.daemonize:
         print config.pid_file
         daemon = Daemonize(
             app="synapse-homeserver",
             pid=config.pid_file,
-            action=reactor.run,
+            action=run,
             auto_close_fds=False,
             verbose=True,
             logger=logger,
@@ -253,6 +257,13 @@ def setup():
     else:
         reactor.run()
 
+def run():
+    with LoggingContext("run"):
+        reactor.run()
+
+def main():
+    with LoggingContext("main"):
+        setup()
+
 if __name__ == '__main__':
-    setup()
+    main()
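Daemonize's ``action`` is now the ``run`` wrapper rather than ``reactor.run``
directly, so the daemonised process enters a logging context before the
reactor starts. The shape of that wrapper, stated generically (``context`` is
a stand-in for ``LoggingContext``)::

    def run_in_context(context, start_reactor):
        # everything logged inside the loop inherits the named context
        with context("run"):
            start_reactor()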
66 synapse/app/synctl.py Executable file
@@ -0,0 +1,66 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2014 OpenMarket Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import os
+import subprocess
+import signal
+
+SYNAPSE = ["python", "-m", "synapse.app.homeserver"]
+
+CONFIGFILE="homeserver.yaml"
+PIDFILE="homeserver.pid"
+
+GREEN="\x1b[1;32m"
+NORMAL="\x1b[m"
+
+def start():
+    if not os.path.exists(CONFIGFILE):
+        sys.stderr.write(
+            "No config file found\n"
+            "To generate a config file, run '%s -c %s --generate-config"
+            " --server-name=<server name>'\n" % (
+                " ".join(SYNAPSE), CONFIGFILE
+            )
+        )
+        sys.exit(1)
+    print "Starting ...",
+    args = SYNAPSE
+    args.extend(["--daemonize", "-c", CONFIGFILE, "--pid-file", PIDFILE])
+    subprocess.check_call(args)
+    print GREEN + "started" + NORMAL
+
+def stop():
+    if os.path.exists(PIDFILE):
+        pid = int(open(PIDFILE).read())
+        os.kill(pid, signal.SIGTERM)
+        print GREEN + "stopped" + NORMAL
+
+def main():
+    action = sys.argv[1] if sys.argv[1:] else "usage"
+    if action == "start":
+        start()
+    elif action == "stop":
+        stop()
+    elif action == "restart":
+        start()
+        stop()
+    else:
+        sys.stderr.write("Usage: %s [start|stop|restart]\n" % (sys.argv[0],))
+        sys.exit(1)
+
+if __name__=='__main__':
+    main()
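Note that the ``restart`` branch as committed calls ``start()`` before
``stop()``; the stop-then-start ordering the command name implies would look
like::

    elif action == "restart":
        # stop the running daemon first, then start a fresh one
        stop()
        start()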
@@ -36,7 +36,10 @@ class Config(object):
         if file_path is None:
             raise ConfigError(
                 "Missing config for %s."
-                " Try running again with --generate-config"
+                " You must specify a path for the config file. You can "
+                "do this with the -c or --config-path option. "
+                "Adding --generate-config along with --server-name "
+                "<server name> will generate a config file at the given path."
                 % (config_name,)
             )
         if not os.path.exists(file_path):
@@ -14,7 +14,7 @@
 # limitations under the License.
 
 from ._base import Config
+from synapse.util.logcontext import LoggingContextFilter
 from twisted.python.log import PythonLoggingObserver
 import logging
 import logging.config
@@ -46,7 +46,8 @@ class LoggingConfig(Config):
 
     def setup_logging(self):
         log_format = (
-            '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s'
+            "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
+            " - %(message)s"
         )
         if self.log_config is None:
 
@@ -54,13 +55,20 @@ class LoggingConfig(Config):
             if self.verbosity:
                 level = logging.DEBUG
 
             # FIXME: we need a logging.WARN for a -q quiet option
+            logger = logging.getLogger('')
+            logger.setLevel(level)
+            formatter = logging.Formatter(log_format)
+            if self.log_file:
+                handler = logging.FileHandler(self.log_file)
+            else:
+                handler = logging.StreamHandler()
+            handler.setFormatter(formatter)
 
-            logging.basicConfig(
-                level=level,
-                filename=self.log_file,
-                format=log_format
-            )
+            handler.addFilter(LoggingContextFilter(request=""))
+
+            logger.addHandler(handler)
+            logger.info("Test")
         else:
             logging.config.fileConfig(self.log_config)
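The new format string references ``%(request)s``, which plain log records do
not carry; ``LoggingContextFilter`` exists to guarantee that attribute. A
stdlib-only sketch of a filter with that behaviour (the class name here is
illustrative, not Synapse's implementation)::

    import logging

    class RequestDefaultFilter(logging.Filter):
        def __init__(self, request=""):
            logging.Filter.__init__(self)
            self.request = request

        def filter(self, record):
            # ensure %(request)s can always be formatted
            if not hasattr(record, "request"):
                record.request = self.request
            return True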
@@ -30,6 +30,7 @@ class ServerConfig(Config):
         self.pid_file = self.abspath(args.pid_file)
         self.webclient = True
         self.manhole = args.manhole
+        self.no_tls = args.no_tls
 
         if not args.content_addr:
             host = args.server_name
@@ -67,6 +68,8 @@ class ServerConfig(Config):
         server_group.add_argument("--content-addr", default=None,
                                   help="The host and scheme to use for the "
                                   "content repository")
+        server_group.add_argument("--no-tls", action='store_true',
+                                  help="Don't bind to the https port.")
 
     def read_signing_key(self, signing_key_path):
         signing_keys = self.read_file(signing_key_path, "signing_key")
108 synapse/crypto/event_signing.py Normal file
@@ -0,0 +1,108 @@
+# -*- coding: utf-8 -*-
+
+# Copyright 2014 OpenMarket Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from synapse.api.events.utils import prune_event
+from syutil.jsonutil import encode_canonical_json
+from syutil.base64util import encode_base64, decode_base64
+from syutil.crypto.jsonsign import sign_json
+from synapse.api.errors import SynapseError, Codes
+
+import hashlib
+import logging
+
+logger = logging.getLogger(__name__)
+
+
+def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
+    """Check whether the hash for this PDU matches the contents"""
+    computed_hash = _compute_content_hash(event, hash_algorithm)
+    logging.debug("Expecting hash: %s", encode_base64(computed_hash.digest()))
+    if computed_hash.name not in event.hashes:
+        raise SynapseError(
+            400,
+            "Algorithm %s not in hashes %s" % (
+                computed_hash.name, list(event.hashes),
+            ),
+            Codes.UNAUTHORIZED,
+        )
+    message_hash_base64 = event.hashes[computed_hash.name]
+    try:
+        message_hash_bytes = decode_base64(message_hash_base64)
+    except:
+        raise SynapseError(
+            400,
+            "Invalid base64: %s" % (message_hash_base64,),
+            Codes.UNAUTHORIZED,
+        )
+    return message_hash_bytes == computed_hash.digest()
+
+
+def _compute_content_hash(event, hash_algorithm):
+    event_json = event.get_pdu_json()
+    event_json.pop("age_ts", None)
+    event_json.pop("unsigned", None)
+    event_json.pop("signatures", None)
+    event_json.pop("hashes", None)
+    event_json.pop("outlier", None)
+    event_json.pop("destinations", None)
+    event_json_bytes = encode_canonical_json(event_json)
+    return hash_algorithm(event_json_bytes)
+
+
+def compute_event_reference_hash(event, hash_algorithm=hashlib.sha256):
+    tmp_event = prune_event(event)
+    event_json = tmp_event.get_pdu_json()
+    event_json.pop("signatures", None)
+    event_json.pop("age_ts", None)
+    event_json.pop("unsigned", None)
+    event_json_bytes = encode_canonical_json(event_json)
+    hashed = hash_algorithm(event_json_bytes)
+    return (hashed.name, hashed.digest())
+
+
+def compute_event_signature(event, signature_name, signing_key):
+    tmp_event = prune_event(event)
+    redact_json = tmp_event.get_pdu_json()
+    redact_json.pop("age_ts", None)
+    redact_json.pop("unsigned", None)
+    logger.debug("Signing event: %s", redact_json)
+    redact_json = sign_json(redact_json, signature_name, signing_key)
+    return redact_json["signatures"]
+
+
+def add_hashes_and_signatures(event, signature_name, signing_key,
+                              hash_algorithm=hashlib.sha256):
+    if hasattr(event, "old_state_events"):
+        state_json_bytes = encode_canonical_json(
+            [e.event_id for e in event.old_state_events.values()]
+        )
+        hashed = hash_algorithm(state_json_bytes)
+        event.state_hash = {
+            hashed.name: encode_base64(hashed.digest())
+        }
+
+    hashed = _compute_content_hash(event, hash_algorithm=hash_algorithm)
+
+    if not hasattr(event, "hashes"):
+        event.hashes = {}
+    event.hashes[hashed.name] = encode_base64(hashed.digest())
+
+    event.signatures = compute_event_signature(
+        event,
+        signature_name=signature_name,
+        signing_key=signing_key,
+    )
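A self-contained sketch of the content-hash idea above, using only the stdlib;
``encode_canonical_json`` is approximated here with sorted-key,
minimal-separator JSON, so the digests will not match Synapse's exactly::

    import hashlib
    import json

    def content_hash(event_json):
        # strip the transient / self-referential fields before hashing
        scrubbed = dict(event_json)
        for key in ("age_ts", "unsigned", "signatures", "hashes",
                    "outlier", "destinations"):
            scrubbed.pop(key, None)
        canonical = json.dumps(scrubbed, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()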
@@ -18,6 +18,7 @@ from twisted.web.http import HTTPClient
 from twisted.internet.protocol import Factory
 from twisted.internet import defer, reactor
 from synapse.http.endpoint import matrix_endpoint
+from synapse.util.logcontext import PreserveLoggingContext
 import json
 import logging
 
@@ -36,10 +37,11 @@ def fetch_server_key(server_name, ssl_context_factory):
 
     for i in range(5):
         try:
-            protocol = yield endpoint.connect(factory)
-            server_response, server_certificate = yield protocol.remote_key
-            defer.returnValue((server_response, server_certificate))
-            return
+            with PreserveLoggingContext():
+                protocol = yield endpoint.connect(factory)
+                server_response, server_certificate = yield protocol.remote_key
+                defer.returnValue((server_response, server_certificate))
+                return
         except Exception as e:
             logger.exception(e)
     raise IOError("Cannot get key for %s" % server_name)
@@ -1,102 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2014 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .units import Pdu
-
-import copy
-
-
-def decode_event_id(event_id, server_name):
-    parts = event_id.split("@")
-    if len(parts) < 2:
-        return (event_id, server_name)
-    else:
-        return (parts[0], "".join(parts[1:]))
-
-
-def encode_event_id(pdu_id, origin):
-    return "%s@%s" % (pdu_id, origin)
-
-
-class PduCodec(object):
-
-    def __init__(self, hs):
-        self.server_name = hs.hostname
-        self.event_factory = hs.get_event_factory()
-        self.clock = hs.get_clock()
-
-    def event_from_pdu(self, pdu):
-        kwargs = {}
-
-        kwargs["event_id"] = encode_event_id(pdu.pdu_id, pdu.origin)
-        kwargs["room_id"] = pdu.context
-        kwargs["etype"] = pdu.pdu_type
-        kwargs["prev_events"] = [
-            encode_event_id(p[0], p[1]) for p in pdu.prev_pdus
-        ]
-
-        if hasattr(pdu, "prev_state_id") and hasattr(pdu, "prev_state_origin"):
-            kwargs["prev_state"] = encode_event_id(
-                pdu.prev_state_id, pdu.prev_state_origin
-            )
-
-        kwargs.update({
-            k: v
-            for k, v in pdu.get_full_dict().items()
-            if k not in [
-                "pdu_id",
-                "context",
-                "pdu_type",
-                "prev_pdus",
-                "prev_state_id",
-                "prev_state_origin",
-            ]
-        })
-
-        return self.event_factory.create_event(**kwargs)
-
-    def pdu_from_event(self, event):
-        d = event.get_full_dict()
-
-        d["pdu_id"], d["origin"] = decode_event_id(
-            event.event_id, self.server_name
-        )
-        d["context"] = event.room_id
-        d["pdu_type"] = event.type
-
-        if hasattr(event, "prev_events"):
-            d["prev_pdus"] = [
-                decode_event_id(e, self.server_name)
-                for e in event.prev_events
-            ]
-
-        if hasattr(event, "prev_state"):
-            d["prev_state_id"], d["prev_state_origin"] = (
-                decode_event_id(event.prev_state, self.server_name)
-            )
-
-        if hasattr(event, "state_key"):
-            d["is_state"] = True
-
-        kwargs = copy.deepcopy(event.unrecognized_keys)
-        kwargs.update({
-            k: v for k, v in d.items()
-            if k not in ["event_id", "room_id", "type", "prev_events"]
-        })
-
-        if "origin_server_ts" not in kwargs:
-            kwargs["origin_server_ts"] = int(self.clock.time_msec())
-
-        return Pdu(**kwargs)
@@ -21,8 +21,6 @@ These actions are mostly only used by the :py:mod:`.replication` module.
 
 from twisted.internet import defer
 
-from .units import Pdu
-
 from synapse.util.logutils import log_function
 
 import json
@@ -32,76 +30,6 @@ import logging
 logger = logging.getLogger(__name__)
 
 
-class PduActions(object):
-    """ Defines persistence actions that relate to handling PDUs.
-    """
-
-    def __init__(self, datastore):
-        self.store = datastore
-
-    @log_function
-    def mark_as_processed(self, pdu):
-        """ Persist the fact that we have fully processed the given `Pdu`
-
-        Returns:
-            Deferred
-        """
-        return self.store.mark_pdu_as_processed(pdu.pdu_id, pdu.origin)
-
-    @defer.inlineCallbacks
-    @log_function
-    def after_transaction(self, transaction_id, destination, origin):
-        """ Returns all `Pdu`s that we sent to the given remote home server
-        after a given transaction id.
-
-        Returns:
-            Deferred: Results in a list of `Pdu`s
-        """
-        results = yield self.store.get_pdus_after_transaction(
-            transaction_id,
-            destination
-        )
-
-        defer.returnValue([Pdu.from_pdu_tuple(p) for p in results])
-
-    @defer.inlineCallbacks
-    @log_function
-    def get_all_pdus_from_context(self, context):
-        results = yield self.store.get_all_pdus_from_context(context)
-        defer.returnValue([Pdu.from_pdu_tuple(p) for p in results])
-
-    @defer.inlineCallbacks
-    @log_function
-    def backfill(self, context, pdu_list, limit):
-        """ For a given list of PDU id and origins return the proceeding
-        `limit` `Pdu`s in the given `context`.
-
-        Returns:
-            Deferred: Results in a list of `Pdu`s.
-        """
-        results = yield self.store.get_backfill(
-            context, pdu_list, limit
-        )
-
-        defer.returnValue([Pdu.from_pdu_tuple(p) for p in results])
-
-    @log_function
-    def is_new(self, pdu):
-        """ When we receive a `Pdu` from a remote home server, we want to
-        figure out whether it is `new`, i.e. it is not some historic PDU that
-        we haven't seen simply because we haven't backfilled back that far.
-
-        Returns:
-            Deferred: Results in a `bool`
-        """
-        return self.store.is_pdu_new(
-            pdu_id=pdu.pdu_id,
-            origin=pdu.origin,
-            context=pdu.context,
-            depth=pdu.depth
-        )
-
-
 class TransactionActions(object):
     """ Defines persistence actions that relate to handling Transactions.
     """
@@ -158,7 +86,6 @@ class TransactionActions(object):
             transaction.transaction_id,
             transaction.destination,
             transaction.origin_server_ts,
-            [(p["pdu_id"], p["origin"]) for p in transaction.pdus]
         )
 
     @log_function
@@ -19,9 +19,9 @@ a given transport.
 
 from twisted.internet import defer
 
-from .units import Transaction, Pdu, Edu
+from .units import Transaction, Edu
 
-from .persistence import PduActions, TransactionActions
+from .persistence import TransactionActions
 
 from synapse.util.logutils import log_function
 
@@ -57,7 +57,7 @@ class ReplicationLayer(object):
         self.transport_layer.register_request_handler(self)
 
         self.store = hs.get_datastore()
-        self.pdu_actions = PduActions(self.store)
+        # self.pdu_actions = PduActions(self.store)
         self.transaction_actions = TransactionActions(self.store)
 
         self._transaction_queue = _TransactionQueue(
@@ -72,6 +72,8 @@ class ReplicationLayer(object):
 
         self._clock = hs.get_clock()
 
+        self.event_factory = hs.get_event_factory()
+
     def set_handler(self, handler):
         """Sets the handler that the replication layer will use to communicate
         receipt of new PDUs from other home servers. The required methods are
@@ -81,7 +83,7 @@ class ReplicationLayer(object):
 
     def register_edu_handler(self, edu_type, handler):
         if edu_type in self.edu_handlers:
-            raise KeyError("Already have an EDU handler for %s" % (edu_type))
+            raise KeyError("Already have an EDU handler for %s" % (edu_type,))
 
         self.edu_handlers[edu_type] = handler
 
@@ -102,24 +104,17 @@ class ReplicationLayer(object):
             object to encode as JSON.
         """
         if query_type in self.query_handlers:
-            raise KeyError("Already have a Query handler for %s" % (query_type))
+            raise KeyError(
+                "Already have a Query handler for %s" % (query_type,)
+            )
 
         self.query_handlers[query_type] = handler
 
-    @defer.inlineCallbacks
     @log_function
     def send_pdu(self, pdu):
         """Informs the replication layer about a new PDU generated within the
         home server that should be transmitted to others.
 
-        This will fill out various attributes on the PDU object, e.g. the
-        `prev_pdus` key.
-
-        *Note:* The home server should always call `send_pdu` even if it knows
-        that it does not need to be replicated to other home servers. This is
-        in case e.g. someone else joins via a remote home server and then
-        backfills.
-
         TODO: Figure out when we should actually resolve the deferred.
 
         Args:
@@ -132,18 +127,15 @@ class ReplicationLayer(object):
         order = self._order
         self._order += 1
 
-        logger.debug("[%s] Persisting PDU", pdu.pdu_id)
-
-        # Save *before* trying to send
-        yield self.store.persist_event(pdu=pdu)
-
-        logger.debug("[%s] Persisted PDU", pdu.pdu_id)
-        logger.debug("[%s] transaction_layer.enqueue_pdu... ", pdu.pdu_id)
+        logger.debug("[%s] transaction_layer.enqueue_pdu... ", pdu.event_id)
 
         # TODO, add errback, etc.
         self._transaction_queue.enqueue_pdu(pdu, order)
 
-        logger.debug("[%s] transaction_layer.enqueue_pdu... done", pdu.pdu_id)
+        logger.debug(
+            "[%s] transaction_layer.enqueue_pdu... done",
+            pdu.event_id
+        )
 
     @log_function
     def send_edu(self, destination, edu_type, content):
@@ -158,6 +150,11 @@ class ReplicationLayer(object):
         self._transaction_queue.enqueue_edu(edu)
         return defer.succeed(None)
 
+    @log_function
+    def send_failure(self, failure, destination):
+        self._transaction_queue.enqueue_failure(failure, destination)
+        return defer.succeed(None)
+
     @log_function
     def make_query(self, destination, query_type, args,
                    retry_on_dns_fail=True):
@@ -181,7 +178,7 @@ class ReplicationLayer(object):
 
     @defer.inlineCallbacks
     @log_function
-    def backfill(self, dest, context, limit):
+    def backfill(self, dest, context, limit, extremities):
         """Requests some more historic PDUs for the given context from the
         given destination server.
 
@@ -189,12 +186,12 @@ class ReplicationLayer(object):
             dest (str): The remote home server to ask.
             context (str): The context to backfill.
             limit (int): The maximum number of PDUs to return.
+            extremities (list): List of PDU id and origins of the first pdus
+                we have seen from the context
 
         Returns:
             Deferred: Results in the received PDUs.
         """
-        extremities = yield self.store.get_oldest_pdus_in_context(context)
-
         logger.debug("backfill extrem=%s", extremities)
 
         # If there are no extremeties then we've (probably) reached the start.
@@ -208,15 +205,18 @@ class ReplicationLayer(object):
 
         transaction = Transaction(**transaction_data)
 
-        pdus = [Pdu(outlier=False, **p) for p in transaction.pdus]
+        pdus = [
+            self.event_from_pdu_json(p, outlier=False)
+            for p in transaction.pdus
+        ]
         for pdu in pdus:
-            yield self._handle_new_pdu(pdu, backfilled=True)
+            yield self._handle_new_pdu(dest, pdu, backfilled=True)
 
         defer.returnValue(pdus)
 
     @defer.inlineCallbacks
     @log_function
-    def get_pdu(self, destination, pdu_origin, pdu_id, outlier=False):
+    def get_pdu(self, destination, event_id, outlier=False):
         """Requests the PDU with given origin and ID from the remote home
         server.
 
@@ -225,7 +225,7 @@ class ReplicationLayer(object):
         Args:
             destination (str): Which home server to query
             pdu_origin (str): The home server that originally sent the pdu.
-            pdu_id (str)
+            event_id (str)
             outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
                 it's from an arbitary point in the context as opposed to part
                 of the current block of PDUs. Defaults to `False`
@@ -234,23 +234,27 @@ class ReplicationLayer(object):
             Deferred: Results in the requested PDU.
         """
 
-        transaction_data = yield self.transport_layer.get_pdu(
-            destination, pdu_origin, pdu_id)
+        transaction_data = yield self.transport_layer.get_event(
+            destination, event_id
+        )
 
         transaction = Transaction(**transaction_data)
 
-        pdu_list = [Pdu(outlier=outlier, **p) for p in transaction.pdus]
+        pdu_list = [
+            self.event_from_pdu_json(p, outlier=outlier)
+            for p in transaction.pdus
+        ]
 
         pdu = None
         if pdu_list:
             pdu = pdu_list[0]
-            yield self._handle_new_pdu(pdu)
+            yield self._handle_new_pdu(destination, pdu)
 
         defer.returnValue(pdu)
 
     @defer.inlineCallbacks
     @log_function
-    def get_state_for_context(self, destination, context):
+    def get_state_for_context(self, destination, context, event_id=None):
         """Requests all of the `current` state PDUs for a given context from
         a remote home server.
 
@@ -263,29 +267,25 @@ class ReplicationLayer(object):
         """
 
         transaction_data = yield self.transport_layer.get_context_state(
-            destination, context)
+            destination,
+            context,
+            event_id=event_id,
+        )
 
         transaction = Transaction(**transaction_data)
-
-        pdus = [Pdu(outlier=True, **p) for p in transaction.pdus]
-        for pdu in pdus:
-            yield self._handle_new_pdu(pdu)
+        pdus = [
+            self.event_from_pdu_json(p, outlier=True)
+            for p in transaction.pdus
+        ]
 
         defer.returnValue(pdus)
 
     @defer.inlineCallbacks
     @log_function
-    def on_context_pdus_request(self, context):
-        pdus = yield self.pdu_actions.get_all_pdus_from_context(
-            context
+    def on_backfill_request(self, origin, context, versions, limit):
+        pdus = yield self.handler.on_backfill_request(
+            origin, context, versions, limit
         )
-        defer.returnValue((200, self._transaction_from_pdus(pdus).get_dict()))
-
-    @defer.inlineCallbacks
-    @log_function
-    def on_backfill_request(self, context, versions, limit):
-
-        pdus = yield self.pdu_actions.backfill(context, versions, limit)
 
         defer.returnValue((200, self._transaction_from_pdus(pdus).get_dict()))
 
@@ -295,11 +295,17 @@ class ReplicationLayer(object):
         transaction = Transaction(**transaction_data)
 
         for p in transaction.pdus:
+            if "unsigned" in p:
+                unsigned = p["unsigned"]
+                if "age" in unsigned:
+                    p["age"] = unsigned["age"]
             if "age" in p:
                 p["age_ts"] = int(self._clock.time_msec()) - int(p["age"])
                 del p["age"]
 
-        pdu_list = [Pdu(**p) for p in transaction.pdus]
+        pdu_list = [
+            self.event_from_pdu_json(p) for p in transaction.pdus
+        ]
 
         logger.debug("[%s] Got transaction", transaction.transaction_id)
 
@@ -315,11 +321,15 @@ class ReplicationLayer(object):
 
         dl = []
         for pdu in pdu_list:
-            dl.append(self._handle_new_pdu(pdu))
+            dl.append(self._handle_new_pdu(transaction.origin, pdu))
 
         if hasattr(transaction, "edus"):
             for edu in [Edu(**x) for x in transaction.edus]:
-                self.received_edu(transaction.origin, edu.edu_type, edu.content)
+                self.received_edu(
+                    transaction.origin,
+                    edu.edu_type,
+                    edu.content
+                )
 
         results = yield defer.DeferredList(dl)
 
@@ -347,20 +357,22 @@ class ReplicationLayer(object):
 
     @defer.inlineCallbacks
     @log_function
-    def on_context_state_request(self, context):
-        results = yield self.store.get_current_state_for_context(
-            context
-        )
-
-        logger.debug("Context returning %d results", len(results))
-
-        pdus = [Pdu.from_pdu_tuple(p) for p in results]
+    def on_context_state_request(self, origin, context, event_id):
+        if event_id:
+            pdus = yield self.handler.get_state_for_pdu(
+                origin,
+                context,
+                event_id,
+            )
+        else:
+            raise NotImplementedError("Specify an event")
+
         defer.returnValue((200, self._transaction_from_pdus(pdus).get_dict()))
 
     @defer.inlineCallbacks
     @log_function
-    def on_pdu_request(self, pdu_origin, pdu_id):
-        pdu = yield self._get_persisted_pdu(pdu_id, pdu_origin)
+    def on_pdu_request(self, origin, event_id):
+        pdu = yield self._get_persisted_pdu(origin, event_id)
 
         if pdu:
             defer.returnValue(
@@ -372,20 +384,7 @@ class ReplicationLayer(object):
     @defer.inlineCallbacks
     @log_function
     def on_pull_request(self, origin, versions):
-        transaction_id = max([int(v) for v in versions])
-
-        response = yield self.pdu_actions.after_transaction(
-            transaction_id,
-            origin,
-            self.server_name
-        )
-
-        if not response:
-            response = []
-
-        defer.returnValue(
-            (200, self._transaction_from_pdus(response).get_dict())
-        )
+        raise NotImplementedError("Pull transacions not implemented")
 
     @defer.inlineCallbacks
     def on_query_request(self, query_type, args):
@@ -393,95 +392,199 @@ class ReplicationLayer(object):
             response = yield self.query_handlers[query_type](args)
             defer.returnValue((200, response))
         else:
-            defer.returnValue((404, "No handler for Query type '%s'"
-                              % (query_type)
-                              ))
+            defer.returnValue(
+                (404, "No handler for Query type '%s'" % (query_type, ))
+            )
 
     @defer.inlineCallbacks
+    def on_make_join_request(self, context, user_id):
+        pdu = yield self.handler.on_make_join_request(context, user_id)
+        defer.returnValue({
+            "event": pdu.get_pdu_json(),
+        })
+
+    @defer.inlineCallbacks
+    def on_invite_request(self, origin, content):
+        pdu = self.event_from_pdu_json(content)
+        ret_pdu = yield self.handler.on_invite_request(origin, pdu)
+        defer.returnValue(
+            (
+                200,
+                {
+                    "event": ret_pdu.get_pdu_json(),
+                }
+            )
+        )
+
+    @defer.inlineCallbacks
+    def on_send_join_request(self, origin, content):
+        pdu = self.event_from_pdu_json(content)
+        res_pdus = yield self.handler.on_send_join_request(origin, pdu)
+
+        defer.returnValue((200, {
+            "state": [p.get_pdu_json() for p in res_pdus["state"]],
+            "auth_chain": [p.get_pdu_json() for p in res_pdus["auth_chain"]],
+        }))
+
+    @defer.inlineCallbacks
+    def on_event_auth(self, origin, context, event_id):
+        auth_pdus = yield self.handler.on_event_auth(event_id)
+        defer.returnValue(
+            (
+                200,
+                {
+                    "auth_chain": [a.get_pdu_json() for a in auth_pdus],
+                }
+            )
+        )
+
+    @defer.inlineCallbacks
+    def make_join(self, destination, context, user_id):
+        ret = yield self.transport_layer.make_join(
+            destination=destination,
+            context=context,
+            user_id=user_id,
+        )
+
+        pdu_dict = ret["event"]
+
+        logger.debug("Got response to make_join: %s", pdu_dict)
+
+        defer.returnValue(self.event_from_pdu_json(pdu_dict))
+
+    @defer.inlineCallbacks
+    def send_join(self, destination, pdu):
+        _, content = yield self.transport_layer.send_join(
+            destination,
+            pdu.room_id,
+            pdu.event_id,
+            pdu.get_pdu_json(),
+        )
+
+        logger.debug("Got content: %s", content)
+
+        state = [
+            self.event_from_pdu_json(p, outlier=True)
+            for p in content.get("state", [])
+        ]
+
+        # FIXME: We probably want to do something with the auth_chain given
+        # to us
+
+        # auth_chain = [
+        #     Pdu(outlier=True, **p) for p in content.get("auth_chain", [])
+        # ]
+
+        defer.returnValue(state)
+
+    @defer.inlineCallbacks
+    def send_invite(self, destination, context, event_id, pdu):
+        code, content = yield self.transport_layer.send_invite(
+            destination=destination,
+            context=context,
+            event_id=event_id,
+            content=pdu.get_pdu_json(),
+        )
+
+        pdu_dict = content["event"]
+
+        logger.debug("Got response to send_invite: %s", pdu_dict)
+
+        defer.returnValue(self.event_from_pdu_json(pdu_dict))
+
     @log_function
-    def _get_persisted_pdu(self, pdu_id, pdu_origin):
+    def _get_persisted_pdu(self, origin, event_id):
         """ Get a PDU from the database with given origin and id.
 
         Returns:
             Deferred: Results in a `Pdu`.
         """
-        pdu_tuple = yield self.store.get_pdu(pdu_id, pdu_origin)
-
-        defer.returnValue(Pdu.from_pdu_tuple(pdu_tuple))
+        return self.handler.get_persisted_pdu(origin, event_id)
 
     def _transaction_from_pdus(self, pdu_list):
         """Returns a new Transaction containing the given PDUs suitable for
         transmission.
         """
-        pdus = [p.get_dict() for p in pdu_list]
+        pdus = [p.get_pdu_json() for p in pdu_list]
+        time_now = self._clock.time_msec()
         for p in pdus:
-            if "age_ts" in pdus:
-                p["age"] = int(self.clock.time_msec()) - p["age_ts"]
+            if "age_ts" in p:
+                age = time_now - p["age_ts"]
+                p.setdefault("unsigned", {})["age"] = int(age)
+                del p["age_ts"]
         return Transaction(
             origin=self.server_name,
             pdus=pdus,
-            origin_server_ts=int(self._clock.time_msec()),
+            origin_server_ts=int(time_now),
             destination=None,
         )
 
     @defer.inlineCallbacks
     @log_function
-    def _handle_new_pdu(self, pdu, backfilled=False):
+    def _handle_new_pdu(self, origin, pdu, backfilled=False):
         # We reprocess pdus when we have seen them only as outliers
-        existing = yield self._get_persisted_pdu(pdu.pdu_id, pdu.origin)
+        existing = yield self._get_persisted_pdu(origin, pdu.event_id)
 
         if existing and (not existing.outlier or pdu.outlier):
-            logger.debug("Already seen pdu %s %s", pdu.pdu_id, pdu.origin)
+            logger.debug("Already seen pdu %s", pdu.event_id)
             defer.returnValue({})
             return
 
+        state = None
+
         # Get missing pdus if necessary.
-        is_new = yield self.pdu_actions.is_new(pdu)
-        if is_new and not pdu.outlier:
+        if not pdu.outlier:
             # We only backfill backwards to the min depth.
-            min_depth = yield self.store.get_min_depth_for_context(pdu.context)
+            min_depth = yield self.handler.get_min_depth_for_context(
+                pdu.room_id
+            )
 
             if min_depth and pdu.depth > min_depth:
-                for pdu_id, origin in pdu.prev_pdus:
-                    exists = yield self._get_persisted_pdu(pdu_id, origin)
+                for event_id, hashes in pdu.prev_events:
+                    exists = yield self._get_persisted_pdu(origin, event_id)
 
                     if not exists:
-                        logger.debug("Requesting pdu %s %s", pdu_id, origin)
+                        logger.debug("Requesting pdu %s", event_id)
 
                         try:
                             yield self.get_pdu(
                                 pdu.origin,
-                                pdu_id=pdu_id,
-                                pdu_origin=origin
+                                event_id=event_id,
                             )
-                            logger.debug("Processed pdu %s %s", pdu_id, origin)
+                            logger.debug("Processed pdu %s", event_id)
                         except:
                             # TODO(erikj): Do some more intelligent retries.
|
# TODO(erikj): Do some more intelligent retries.
|
||||||
logger.exception("Failed to get PDU")
|
logger.exception("Failed to get PDU")
|
||||||
|
else:
|
||||||
# Persist the Pdu, but don't mark it as processed yet.
|
# We need to get the state at this event, since we have reached
|
||||||
yield self.store.persist_event(pdu=pdu)
|
# a backward extremity edge.
|
||||||
|
state = yield self.get_state_for_context(
|
||||||
|
origin, pdu.room_id, pdu.event_id,
|
||||||
|
)
|
||||||
|
|
||||||
if not backfilled:
|
if not backfilled:
|
||||||
ret = yield self.handler.on_receive_pdu(pdu, backfilled=backfilled)
|
ret = yield self.handler.on_receive_pdu(
|
||||||
|
pdu,
|
||||||
|
backfilled=backfilled,
|
||||||
|
state=state,
|
||||||
|
)
|
||||||
else:
|
else:
|
||||||
ret = None
|
ret = None
|
||||||
|
|
||||||
yield self.pdu_actions.mark_as_processed(pdu)
|
# yield self.pdu_actions.mark_as_processed(pdu)
|
||||||
|
|
||||||
defer.returnValue(ret)
|
defer.returnValue(ret)
|
||||||
|
|
||||||
def __str__(self):
|
def __str__(self):
|
||||||
return "<ReplicationLayer(%s)>" % self.server_name
|
return "<ReplicationLayer(%s)>" % self.server_name
|
||||||
|
|
||||||
|
def event_from_pdu_json(self, pdu_json, outlier=False):
|
||||||
class ReplicationHandler(object):
|
#TODO: Check we have all the PDU keys here
|
||||||
"""This defines the methods that the :py:class:`.ReplicationLayer` will
|
pdu_json.setdefault("hashes", {})
|
||||||
use to communicate with the rest of the home server.
|
pdu_json.setdefault("signatures", {})
|
||||||
"""
|
return self.event_factory.create_event(
|
||||||
def on_receive_pdu(self, pdu):
|
pdu_json["type"], outlier=outlier, **pdu_json
|
||||||
raise NotImplementedError("on_receive_pdu")
|
)
|
||||||
|
|
||||||
|
|
||||||
class _TransactionQueue(object):
|
class _TransactionQueue(object):
|
||||||
|
@ -509,6 +612,9 @@ class _TransactionQueue(object):
|
||||||
# destination -> list of tuple(edu, deferred)
|
# destination -> list of tuple(edu, deferred)
|
||||||
self.pending_edus_by_dest = {}
|
self.pending_edus_by_dest = {}
|
||||||
|
|
||||||
|
# destination -> list of tuple(failure, deferred)
|
||||||
|
self.pending_failures_by_dest = {}
|
||||||
|
|
||||||
# HACK to get unique tx id
|
# HACK to get unique tx id
|
||||||
self._next_txn_id = int(self._clock.time_msec())
|
self._next_txn_id = int(self._clock.time_msec())
|
||||||
|
|
||||||
|
@ -561,6 +667,18 @@ class _TransactionQueue(object):
|
||||||
|
|
||||||
return deferred
|
return deferred
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def enqueue_failure(self, failure, destination):
|
||||||
|
deferred = defer.Deferred()
|
||||||
|
|
||||||
|
self.pending_failures_by_dest.setdefault(
|
||||||
|
destination, []
|
||||||
|
).append(
|
||||||
|
(failure, deferred)
|
||||||
|
)
|
||||||
|
|
||||||
|
yield deferred
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
@log_function
|
@log_function
|
||||||
def _attempt_new_transaction(self, destination):
|
def _attempt_new_transaction(self, destination):
|
||||||
|
@ -570,8 +688,9 @@ class _TransactionQueue(object):
|
||||||
# list of (pending_pdu, deferred, order)
|
# list of (pending_pdu, deferred, order)
|
||||||
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
|
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
|
||||||
pending_edus = self.pending_edus_by_dest.pop(destination, [])
|
pending_edus = self.pending_edus_by_dest.pop(destination, [])
|
||||||
|
pending_failures = self.pending_failures_by_dest.pop(destination, [])
|
||||||
|
|
||||||
if not pending_pdus and not pending_edus:
|
if not pending_pdus and not pending_edus and not pending_failures:
|
||||||
return
|
return
|
||||||
|
|
||||||
logger.debug("TX [%s] Attempting new transaction", destination)
|
logger.debug("TX [%s] Attempting new transaction", destination)
|
||||||
|
@ -581,7 +700,11 @@ class _TransactionQueue(object):
|
||||||
|
|
||||||
pdus = [x[0] for x in pending_pdus]
|
pdus = [x[0] for x in pending_pdus]
|
||||||
edus = [x[0] for x in pending_edus]
|
edus = [x[0] for x in pending_edus]
|
||||||
deferreds = [x[1] for x in pending_pdus + pending_edus]
|
failures = [x[0].get_dict() for x in pending_failures]
|
||||||
|
deferreds = [
|
||||||
|
x[1]
|
||||||
|
for x in pending_pdus + pending_edus + pending_failures
|
||||||
|
]
|
||||||
|
|
||||||
try:
|
try:
|
||||||
self.pending_transactions[destination] = 1
|
self.pending_transactions[destination] = 1
|
||||||
|
@ -589,12 +712,13 @@ class _TransactionQueue(object):
|
||||||
logger.debug("TX [%s] Persisting transaction...", destination)
|
logger.debug("TX [%s] Persisting transaction...", destination)
|
||||||
|
|
||||||
transaction = Transaction.create_new(
|
transaction = Transaction.create_new(
|
||||||
origin_server_ts=self._clock.time_msec(),
|
origin_server_ts=int(self._clock.time_msec()),
|
||||||
transaction_id=str(self._next_txn_id),
|
transaction_id=str(self._next_txn_id),
|
||||||
origin=self.server_name,
|
origin=self.server_name,
|
||||||
destination=destination,
|
destination=destination,
|
||||||
pdus=pdus,
|
pdus=pdus,
|
||||||
edus=edus,
|
edus=edus,
|
||||||
|
pdu_failures=failures,
|
||||||
)
|
)
|
||||||
|
|
||||||
self._next_txn_id += 1
|
self._next_txn_id += 1
|
||||||
|
@ -614,7 +738,9 @@ class _TransactionQueue(object):
|
||||||
if "pdus" in data:
|
if "pdus" in data:
|
||||||
for p in data["pdus"]:
|
for p in data["pdus"]:
|
||||||
if "age_ts" in p:
|
if "age_ts" in p:
|
||||||
p["age"] = now - int(p["age_ts"])
|
unsigned = p.setdefault("unsigned", {})
|
||||||
|
unsigned["age"] = now - int(p["age_ts"])
|
||||||
|
del p["age_ts"]
|
||||||
return data
|
return data
|
||||||
|
|
||||||
code, response = yield self.transport_layer.send_transaction(
|
code, response = yield self.transport_layer.send_transaction(
|
||||||
|
|
|
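
The `age_ts`/`age` handling above is subtle: the absolute timestamp is swapped for a relative age under `unsigned` at the moment the transaction is built, so the receiving server never has to trust the sender's clock. A minimal standalone sketch of the same bookkeeping (function name hypothetical, not part of this change)::

    import time

    def with_relative_age(pdu_json, now_ms=None):
        # Swap the absolute "age_ts" (ms since epoch) for a relative "age"
        # under "unsigned", mirroring _transaction_from_pdus above.
        p = dict(pdu_json)
        if now_ms is None:
            now_ms = int(time.time() * 1000)
        if "age_ts" in p:
            p.setdefault("unsigned", {})["age"] = int(now_ms - p["age_ts"])
            del p["age_ts"]
        return p
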
@@ -72,7 +72,7 @@ class TransportLayer(object):
         self.received_handler = None

     @log_function
-    def get_context_state(self, destination, context):
+    def get_context_state(self, destination, context, event_id=None):
         """ Requests all state for a given context (i.e. room) from the
         given server.

@@ -89,54 +89,62 @@ class TransportLayer(object):

         subpath = "/state/%s/" % context

-        return self._do_request_for_transaction(destination, subpath)
+        args = {}
+        if event_id:
+            args["event_id"] = event_id
+
+        return self._do_request_for_transaction(
+            destination, subpath, args=args
+        )

     @log_function
-    def get_pdu(self, destination, pdu_origin, pdu_id):
+    def get_event(self, destination, event_id):
         """ Requests the pdu with give id and origin from the given server.

         Args:
             destination (str): The host name of the remote home server we want
                 to get the state from.
-            pdu_origin (str): The home server which created the PDU.
-            pdu_id (str): The id of the PDU being requested.
+            event_id (str): The id of the event being requested.

         Returns:
             Deferred: Results in a dict received from the remote homeserver.
         """
-        logger.debug("get_pdu dest=%s, pdu_origin=%s, pdu_id=%s",
-                     destination, pdu_origin, pdu_id)
+        logger.debug("get_pdu dest=%s, event_id=%s",
+                     destination, event_id)

-        subpath = "/pdu/%s/%s/" % (pdu_origin, pdu_id)
+        subpath = "/event/%s/" % (event_id, )

         return self._do_request_for_transaction(destination, subpath)

     @log_function
-    def backfill(self, dest, context, pdu_tuples, limit):
+    def backfill(self, dest, context, event_tuples, limit):
         """ Requests `limit` previous PDUs in a given context before list of
         PDUs.

         Args:
             dest (str)
             context (str)
-            pdu_tuples (list)
+            event_tuples (list)
             limt (int)

         Returns:
             Deferred: Results in a dict received from the remote homeserver.
         """
         logger.debug(
-            "backfill dest=%s, context=%s, pdu_tuples=%s, limit=%s",
-            dest, context, repr(pdu_tuples), str(limit)
+            "backfill dest=%s, context=%s, event_tuples=%s, limit=%s",
+            dest, context, repr(event_tuples), str(limit)
         )

-        if not pdu_tuples:
+        if not event_tuples:
+            # TODO: raise?
             return

-        subpath = "/backfill/%s/" % context
+        subpath = "/backfill/%s/" % (context,)

-        args = {"v": ["%s,%s" % (i, o) for i, o in pdu_tuples]}
-        args["limit"] = limit
+        args = {
+            "v": event_tuples,
+            "limit": limit,
+        }

         return self._do_request_for_transaction(
             dest,

@@ -197,6 +205,72 @@ class TransportLayer(object):

         defer.returnValue(response)

+    @defer.inlineCallbacks
+    @log_function
+    def make_join(self, destination, context, user_id, retry_on_dns_fail=True):
+        path = PREFIX + "/make_join/%s/%s" % (context, user_id,)
+
+        response = yield self.client.get_json(
+            destination=destination,
+            path=path,
+            retry_on_dns_fail=retry_on_dns_fail,
+        )
+
+        defer.returnValue(response)
+
+    @defer.inlineCallbacks
+    @log_function
+    def send_join(self, destination, context, event_id, content):
+        path = PREFIX + "/send_join/%s/%s" % (
+            context,
+            event_id,
+        )
+
+        code, content = yield self.client.put_json(
+            destination=destination,
+            path=path,
+            data=content,
+        )
+
+        if not 200 <= code < 300:
+            raise RuntimeError("Got %d from send_join", code)
+
+        defer.returnValue(json.loads(content))
+
+    @defer.inlineCallbacks
+    @log_function
+    def send_invite(self, destination, context, event_id, content):
+        path = PREFIX + "/invite/%s/%s" % (
+            context,
+            event_id,
+        )
+
+        code, content = yield self.client.put_json(
+            destination=destination,
+            path=path,
+            data=content,
+        )
+
+        if not 200 <= code < 300:
+            raise RuntimeError("Got %d from send_invite", code)
+
+        defer.returnValue(json.loads(content))
+
+    @defer.inlineCallbacks
+    @log_function
+    def get_event_auth(self, destination, context, event_id):
+        path = PREFIX + "/event_auth/%s/%s" % (
+            context,
+            event_id,
+        )
+
+        response = yield self.client.get_json(
+            destination=destination,
+            path=path,
+        )
+
+        defer.returnValue(response)
+
     @defer.inlineCallbacks
     def _authenticate_request(self, request):
         json_request = {

@@ -210,7 +284,7 @@ class TransportLayer(object):
         origin = None

         if request.method == "PUT":
-            #TODO: Handle other method types? other content types?
+            # TODO: Handle other method types? other content types?
             try:
                 content_bytes = request.content.read()
                 content = json.loads(content_bytes)

@@ -222,11 +296,13 @@ class TransportLayer(object):
             try:
                 params = auth.split(" ")[1].split(",")
                 param_dict = dict(kv.split("=") for kv in params)

                 def strip_quotes(value):
                     if value.startswith("\""):
                         return value[1:-1]
                     else:
                         return value

                 origin = strip_quotes(param_dict["origin"])
                 key = strip_quotes(param_dict["key"])
                 sig = strip_quotes(param_dict["sig"])

@@ -247,7 +323,7 @@ class TransportLayer(object):
             if auth.startswith("X-Matrix"):
                 (origin, key, sig) = parse_auth_header(auth)
                 json_request["origin"] = origin
-                json_request["signatures"].setdefault(origin,{})[key] = sig
+                json_request["signatures"].setdefault(origin, {})[key] = sig

         if not json_request["signatures"]:
             raise SynapseError(

@@ -313,10 +389,10 @@ class TransportLayer(object):
         # data_id pair.
         self.server.register_path(
             "GET",
-            re.compile("^" + PREFIX + "/pdu/([^/]*)/([^/]*)/$"),
+            re.compile("^" + PREFIX + "/event/([^/]*)/$"),
             self._with_authentication(
-                lambda origin, content, query, pdu_origin, pdu_id:
-                handler.on_pdu_request(pdu_origin, pdu_id)
+                lambda origin, content, query, event_id:
+                handler.on_pdu_request(origin, event_id)
             )
         )

@@ -326,7 +402,11 @@ class TransportLayer(object):
             re.compile("^" + PREFIX + "/state/([^/]*)/$"),
             self._with_authentication(
                 lambda origin, content, query, context:
-                handler.on_context_state_request(context)
+                handler.on_context_state_request(
+                    origin,
+                    context,
+                    query.get("event_id", [None])[0],
+                )
             )
         )

@@ -336,20 +416,11 @@ class TransportLayer(object):
             self._with_authentication(
                 lambda origin, content, query, context:
                 self._on_backfill_request(
-                    context, query["v"], query["limit"]
+                    origin, context, query["v"], query["limit"]
                 )
             )
         )

-        self.server.register_path(
-            "GET",
-            re.compile("^" + PREFIX + "/context/([^/]*)/$"),
-            self._with_authentication(
-                lambda origin, content, query, context:
-                handler.on_context_pdus_request(context)
-            )
-        )

         # This is when we receive a server-server Query
         self.server.register_path(
             "GET",

@@ -362,6 +433,50 @@ class TransportLayer(object):
             )
         )

+        self.server.register_path(
+            "GET",
+            re.compile("^" + PREFIX + "/make_join/([^/]*)/([^/]*)$"),
+            self._with_authentication(
+                lambda origin, content, query, context, user_id:
+                self._on_make_join_request(
+                    origin, content, query, context, user_id
+                )
+            )
+        )
+
+        self.server.register_path(
+            "GET",
+            re.compile("^" + PREFIX + "/event_auth/([^/]*)/([^/]*)$"),
+            self._with_authentication(
+                lambda origin, content, query, context, event_id:
+                handler.on_event_auth(
+                    origin, context, event_id,
+                )
+            )
+        )
+
+        self.server.register_path(
+            "PUT",
+            re.compile("^" + PREFIX + "/send_join/([^/]*)/([^/]*)$"),
+            self._with_authentication(
+                lambda origin, content, query, context, event_id:
+                self._on_send_join_request(
+                    origin, content, query,
+                )
+            )
+        )
+
+        self.server.register_path(
+            "PUT",
+            re.compile("^" + PREFIX + "/invite/([^/]*)/([^/]*)$"),
+            self._with_authentication(
+                lambda origin, content, query, context, event_id:
+                self._on_invite_request(
+                    origin, content, query,
+                )
+            )
+        )
+
     @defer.inlineCallbacks
     @log_function
     def _on_send_request(self, origin, content, query, transaction_id):

@@ -402,7 +517,8 @@ class TransportLayer(object):
             return

         try:
-            code, response = yield self.received_handler.on_incoming_transaction(
+            handler = self.received_handler
+            code, response = yield handler.on_incoming_transaction(
                 transaction_data
             )
         except:

@@ -440,7 +556,7 @@ class TransportLayer(object):
         defer.returnValue(data)

     @log_function
-    def _on_backfill_request(self, context, v_list, limits):
+    def _on_backfill_request(self, origin, context, v_list, limits):
         if not limits:
             return defer.succeed(
                 (400, {"error": "Did not include limit param"})

@@ -448,124 +564,34 @@ class TransportLayer(object):

         limit = int(limits[-1])

-        versions = [v.split(",", 1) for v in v_list]
+        versions = v_list

         return self.request_handler.on_backfill_request(
-            context, versions, limit)
+            origin, context, versions, limit
+        )

+    @defer.inlineCallbacks
+    @log_function
+    def _on_make_join_request(self, origin, content, query, context, user_id):
+        content = yield self.request_handler.on_make_join_request(
+            context, user_id,
+        )
+        defer.returnValue((200, content))
+
+    @defer.inlineCallbacks
+    @log_function
+    def _on_send_join_request(self, origin, content, query):
+        content = yield self.request_handler.on_send_join_request(
+            origin, content,
+        )
+        defer.returnValue((200, content))
+
+    @defer.inlineCallbacks
+    @log_function
+    def _on_invite_request(self, origin, content, query):
+        content = yield self.request_handler.on_invite_request(
+            origin, content,
+        )
+        defer.returnValue((200, content))

-class TransportReceivedHandler(object):
-    """ Callbacks used when we receive a transaction
-    """
-    def on_incoming_transaction(self, transaction):
-        """ Called on PUT /send/<transaction_id>, or on response to a request
-        that we sent (e.g. a backfill request)
-
-        Args:
-            transaction (synapse.transaction.Transaction): The transaction that
-                was sent to us.
-
-        Returns:
-            twisted.internet.defer.Deferred: A deferred that gets fired when
-                the transaction has finished being processed.
-
-            The result should be a tuple in the form of
-            `(response_code, respond_body)`, where `response_body` is a python
-            dict that will get serialized to JSON.
-
-            On errors, the dict should have an `error` key with a brief message
-            of what went wrong.
-        """
-        pass
-
-
-class TransportRequestHandler(object):
-    """ Handlers used when someone want's data from us
-    """
-    def on_pull_request(self, versions):
-        """ Called on GET /pull/?v=...
-
-        This is hit when a remote home server wants to get all data
-        after a given transaction. Mainly used when a home server comes back
-        online and wants to get everything it has missed.
-
-        Args:
-            versions (list): A list of transaction_ids that should be used to
-                determine what PDUs the remote side have not yet seen.
-
-        Returns:
-            Deferred: Resultsin a tuple in the form of
-            `(response_code, respond_body)`, where `response_body` is a python
-            dict that will get serialized to JSON.
-
-            On errors, the dict should have an `error` key with a brief message
-            of what went wrong.
-        """
-        pass
-
-    def on_pdu_request(self, pdu_origin, pdu_id):
-        """ Called on GET /pdu/<pdu_origin>/<pdu_id>/
-
-        Someone wants a particular PDU. This PDU may or may not have originated
-        from us.
-
-        Args:
-            pdu_origin (str)
-            pdu_id (str)
-
-        Returns:
-            Deferred: Resultsin a tuple in the form of
-            `(response_code, respond_body)`, where `response_body` is a python
-            dict that will get serialized to JSON.
-
-            On errors, the dict should have an `error` key with a brief message
-            of what went wrong.
-        """
-        pass
-
-    def on_context_state_request(self, context):
-        """ Called on GET /state/<context>/
-
-        Gets hit when someone wants all the *current* state for a given
-        contexts.
-
-        Args:
-            context (str): The name of the context that we're interested in.
-
-        Returns:
-            twisted.internet.defer.Deferred: A deferred that gets fired when
-                the transaction has finished being processed.
-
-            The result should be a tuple in the form of
-            `(response_code, respond_body)`, where `response_body` is a python
-            dict that will get serialized to JSON.
-
-            On errors, the dict should have an `error` key with a brief message
-            of what went wrong.
-        """
-        pass
-
-    def on_backfill_request(self, context, versions, limit):
-        """ Called on GET /backfill/<context>/?v=...&limit=...
-
-        Gets hit when we want to backfill backwards on a given context from
-        the given point.
-
-        Args:
-            context (str): The context to backfill
-            versions (list): A list of 2-tuples representing where to backfill
-                from, in the form `(pdu_id, origin)`
-            limit (int): How many pdus to return.
-
-        Returns:
-            Deferred: Results in a tuple in the form of
-            `(response_code, respond_body)`, where `response_body` is a python
-            dict that will get serialized to JSON.
-
-            On errors, the dict should have an `error` key with a brief message
-            of what went wrong.
-        """
-        pass
-
-    def on_query_request(self):
-        """ Called on a GET /query/<query_type> request. """
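
For reference, `_authenticate_request` above expects an `Authorization` header of the form `X-Matrix origin=...,key="ed25519:1",sig="..."`. A minimal sketch of that parsing step in isolation (function name hypothetical; note it splits on the first `=` only, a guard the handler's `kv.split("=")` version does not have)::

    def parse_x_matrix_header(header):
        # e.g. 'X-Matrix origin=foo.org,key="ed25519:1",sig="base64..."'
        params = header.split(" ")[1].split(",")
        # Split on the first "=" only, since base64 sigs may end in "=".
        param_dict = dict(kv.split("=", 1) for kv in params)

        def strip_quotes(value):
            return value[1:-1] if value.startswith("\"") else value

        return (
            strip_quotes(param_dict["origin"]),
            strip_quotes(param_dict["key"]),
            strip_quotes(param_dict["sig"]),
        )
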
@@ -20,126 +20,11 @@ server protocol.
 from synapse.util.jsonobject import JsonEncodedObject

 import logging
-import json
-import copy


 logger = logging.getLogger(__name__)


-class Pdu(JsonEncodedObject):
-    """ A Pdu represents a piece of data sent from a server and is associated
-    with a context.
-
-    A Pdu can be classified as "state". For a given context, we can efficiently
-    retrieve all state pdu's that haven't been clobbered. Clobbering is done
-    via a unique constraint on the tuple (context, pdu_type, state_key). A pdu
-    is a state pdu if `is_state` is True.
-
-    Example pdu::
-
-        {
-            "pdu_id": "78c",
-            "origin_server_ts": 1404835423000,
-            "origin": "bar",
-            "prev_ids": [
-                ["23b", "foo"],
-                ["56a", "bar"],
-            ],
-            "content": { ... },
-        }
-
-    """
-
-    valid_keys = [
-        "pdu_id",
-        "context",
-        "origin",
-        "origin_server_ts",
-        "pdu_type",
-        "destinations",
-        "transaction_id",
-        "prev_pdus",
-        "depth",
-        "content",
-        "outlier",
-        "is_state",  # Below this are keys valid only for State Pdus.
-        "state_key",
-        "power_level",
-        "prev_state_id",
-        "prev_state_origin",
-        "required_power_level",
-        "user_id",
-    ]
-
-    internal_keys = [
-        "destinations",
-        "transaction_id",
-        "outlier",
-    ]
-
-    required_keys = [
-        "pdu_id",
-        "context",
-        "origin",
-        "origin_server_ts",
-        "pdu_type",
-        "content",
-    ]
-
-    # TODO: We need to make this properly load content rather than
-    # just leaving it as a dict. (OR DO WE?!)
-
-    def __init__(self, destinations=[], is_state=False, prev_pdus=[],
-                 outlier=False, **kwargs):
-        if is_state:
-            for required_key in ["state_key"]:
-                if required_key not in kwargs:
-                    raise RuntimeError("Key %s is required" % required_key)
-
-        super(Pdu, self).__init__(
-            destinations=destinations,
-            is_state=is_state,
-            prev_pdus=prev_pdus,
-            outlier=outlier,
-            **kwargs
-        )
-
-    @classmethod
-    def from_pdu_tuple(cls, pdu_tuple):
-        """ Converts a PduTuple to a Pdu
-
-        Args:
-            pdu_tuple (synapse.persistence.transactions.PduTuple): The tuple to
-                convert
-
-        Returns:
-            Pdu
-        """
-        if pdu_tuple:
-            d = copy.copy(pdu_tuple.pdu_entry._asdict())
-            d["origin_server_ts"] = d.pop("ts")
-
-            d["content"] = json.loads(d["content_json"])
-            del d["content_json"]
-
-            args = {f: d[f] for f in cls.valid_keys if f in d}
-            if "unrecognized_keys" in d and d["unrecognized_keys"]:
-                args.update(json.loads(d["unrecognized_keys"]))
-
-            return Pdu(
-                prev_pdus=pdu_tuple.prev_pdu_list,
-                **args
-            )
-        else:
-            return None
-
-    def __str__(self):
-        return "(%s, %s)" % (self.__class__.__name__, repr(self.__dict__))
-
-    def __repr__(self):
-        return "<%s, %s>" % (self.__class__.__name__, repr(self.__dict__))
-
-
 class Edu(JsonEncodedObject):
     """ An Edu represents a piece of data sent from one homeserver to another.

@@ -160,11 +45,10 @@ class Edu(JsonEncodedObject):
         "edu_type",
     ]

-    # TODO: SYN-103: Remove "origin" and "destination" keys.
-    # internal_keys = [
-    #     "origin",
-    #     "destination",
-    # ]
+    internal_keys = [
+        "origin",
+        "destination",
+    ]


 class Transaction(JsonEncodedObject):

@@ -193,6 +77,7 @@ class Transaction(JsonEncodedObject):
         "edus",
         "transaction_id",
         "destination",
+        "pdu_failures",
     ]

     internal_keys = [

@@ -229,7 +114,9 @@ class Transaction(JsonEncodedObject):
         transaction_id and origin_server_ts keys.
         """
         if "origin_server_ts" not in kwargs:
-            raise KeyError("Require 'origin_server_ts' to construct a Transaction")
+            raise KeyError(
+                "Require 'origin_server_ts' to construct a Transaction"
+            )
         if "transaction_id" not in kwargs:
             raise KeyError(
                 "Require 'transaction_id' to construct a Transaction"

@@ -238,9 +125,6 @@ class Transaction(JsonEncodedObject):
         for p in pdus:
             p.transaction_id = kwargs["transaction_id"]

-        kwargs["pdus"] = [p.get_dict() for p in pdus]
+        kwargs["pdus"] = [p.get_pdu_json() for p in pdus]

         return Transaction(**kwargs)
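
Taken together, the `Transaction.valid_keys` change above means a transaction on the wire can now carry a `pdu_failures` list alongside `pdus` and `edus`. An illustrative shape (values hypothetical)::

    transaction_json = {
        "origin": "example.org",            # sending server
        "origin_server_ts": 1404835423000,  # int, ms since epoch
        "transaction_id": "1404835423000",
        "destination": "remote.example",    # receiving server
        "pdus": [],          # each entry from event.get_pdu_json()
        "edus": [],          # ephemeral, non-persisted units
        "pdu_failures": [],  # each entry from failure.get_dict()
    }
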
@@ -14,7 +14,18 @@
 # limitations under the License.

 from twisted.internet import defer

 from synapse.api.errors import LimitExceededError
+from synapse.util.async import run_on_reactor
+from synapse.crypto.event_signing import add_hashes_and_signatures
+from synapse.api.events.room import RoomMemberEvent
+from synapse.api.constants import Membership
+
+import logging
+
+
+logger = logging.getLogger(__name__)


 class BaseHandler(object):

@@ -30,6 +41,9 @@ class BaseHandler(object):
         self.clock = hs.get_clock()
         self.hs = hs

+        self.signing_key = hs.config.signing_key[0]
+        self.server_name = hs.hostname
+
     def ratelimit(self, user_id):
         time_now = self.clock.time()
         allowed, time_allowed = self.ratelimiter.send_message(

@@ -44,16 +58,58 @@ class BaseHandler(object):

     @defer.inlineCallbacks
     def _on_new_room_event(self, event, snapshot, extra_destinations=[],
-                           extra_users=[]):
+                           extra_users=[], suppress_auth=False,
+                           do_invite_host=None):
+        yield run_on_reactor()

         snapshot.fill_out_prev_events(event)

+        yield self.state_handler.annotate_event_with_state(event)
+
+        yield self.auth.add_auth_events(event)
+
+        logger.debug("Signing event...")
+
+        add_hashes_and_signatures(
+            event, self.server_name, self.signing_key
+        )
+
+        logger.debug("Signed event.")
+
+        if not suppress_auth:
+            logger.debug("Authing...")
+            self.auth.check(event, raises=True)
+            logger.debug("Authed")
+        else:
+            logger.debug("Suppressed auth.")
+
+        if do_invite_host:
+            federation_handler = self.hs.get_handlers().federation_handler
+            invite_event = yield federation_handler.send_invite(
+                do_invite_host,
+                event
+            )
+
+            # FIXME: We need to check if the remote changed anything else
+            event.signatures = invite_event.signatures
+
         yield self.store.persist_event(event)

         destinations = set(extra_destinations)
         # Send a PDU to all hosts who have joined the room.
-        destinations.update((yield self.store.get_joined_hosts_for_room(
-            event.room_id
-        )))
+
+        for k, s in event.state_events.items():
+            try:
+                if k[0] == RoomMemberEvent.TYPE:
+                    if s.content["membership"] == Membership.JOIN:
+                        destinations.add(
+                            self.hs.parse_userid(s.state_key).domain
+                        )
+            except:
+                logger.warn(
+                    "Failed to get destination from event %s", s.event_id
+                )

         event.destinations = list(destinations)

         self.notifier.on_new_room_event(event, extra_users=extra_users)
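
The destination loop added to `_on_new_room_event` above replaces a storage lookup with a walk over the event's own annotated state. A simplified sketch of the same calculation, assuming plain event objects with a `content` dict (names hypothetical; "m.room.member" and "join" are the standard Matrix constants behind `RoomMemberEvent.TYPE` and `Membership.JOIN`)::

    def joined_remote_domains(state_events, parse_userid):
        # Collect the domain of every joined member found in the room state,
        # skipping malformed member events just as the handler does.
        destinations = set()
        for (etype, state_key), s in state_events.items():
            try:
                if etype == "m.room.member":
                    if s.content["membership"] == "join":
                        destinations.add(parse_userid(state_key).domain)
            except Exception:
                pass
        return destinations
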
@@ -147,10 +147,8 @@ class DirectoryHandler(BaseHandler):
             content={"aliases": aliases},
         )

-        snapshot = yield self.store.snapshot_room(
-            room_id=room_id,
-            user_id=user_id,
-        )
+        snapshot = yield self.store.snapshot_room(event)

-        yield self.state_handler.handle_new_event(event, snapshot)
-        yield self._on_new_room_event(event, snapshot, extra_users=[user_id])
+        yield self._on_new_room_event(
+            event, snapshot, extra_users=[user_id], suppress_auth=True
+        )
@@ -17,13 +17,18 @@

 from ._base import BaseHandler

-from synapse.api.events.room import InviteJoinEvent, RoomMemberEvent
+from synapse.api.events.utils import prune_event
+from synapse.api.errors import AuthError, FederationError, SynapseError
+from synapse.api.events.room import RoomMemberEvent
 from synapse.api.constants import Membership
 from synapse.util.logutils import log_function
-from synapse.federation.pdu_codec import PduCodec
-from synapse.api.errors import SynapseError
+from synapse.util.async import run_on_reactor
+from synapse.crypto.event_signing import (
+    compute_event_signature, check_event_content_hash
+)
+from syutil.jsonutil import encode_canonical_json

-from twisted.internet import defer, reactor
+from twisted.internet import defer

 import logging

@@ -38,6 +43,8 @@ class FederationHandler(BaseHandler):
     of the home server (including auth and state conflict resoultion)
     b) converting events that were produced by local clients that may need
        to be sent to remote home servers.
+    c) doing the necessary dances to invite remote users and join remote
+       rooms.
     """

     def __init__(self, hs):

@@ -55,12 +62,14 @@ class FederationHandler(BaseHandler):
         self.state_handler = hs.get_state_handler()
         # self.auth_handler = gs.get_auth_handler()
         self.server_name = hs.hostname
+        self.keyring = hs.get_keyring()

         self.lock_manager = hs.get_room_lock_manager()

         self.replication_layer.set_handler(self)

-        self.pdu_codec = PduCodec(hs)
+        # When joining a room we need to queue any events for that room up
+        self.room_queues = {}

     @log_function
     @defer.inlineCallbacks

@@ -78,7 +87,9 @@ class FederationHandler(BaseHandler):
         processing.
         """

-        pdu = self.pdu_codec.pdu_from_event(event)
+        yield run_on_reactor()
+
+        pdu = event

         if not hasattr(pdu, "destinations") or not pdu.destinations:
             pdu.destinations = []

@@ -87,97 +98,113 @@ class FederationHandler(BaseHandler):

     @log_function
     @defer.inlineCallbacks
-    def on_receive_pdu(self, pdu, backfilled):
+    def on_receive_pdu(self, pdu, backfilled, state=None):
         """ Called by the ReplicationLayer when we have a new pdu. We need to
-        do auth checks and put it throught the StateHandler.
+        do auth checks and put it through the StateHandler.
         """
-        event = self.pdu_codec.event_from_pdu(pdu)
+        event = pdu

         logger.debug("Got event: %s", event.event_id)

-        with (yield self.lock_manager.lock(pdu.context)):
-            if event.is_state and not backfilled:
-                is_new_state = yield self.state_handler.handle_new_state(
-                    pdu
-                )
-            else:
-                is_new_state = False
+        # If we are currently in the process of joining this room, then we
+        # queue up events for later processing.
+        if event.room_id in self.room_queues:
+            self.room_queues[event.room_id].append(pdu)
+            return
+
+        logger.debug("Processing event: %s", event.event_id)
+
+        redacted_event = prune_event(event)
+
+        redacted_pdu_json = redacted_event.get_pdu_json()
+        try:
+            yield self.keyring.verify_json_for_server(
+                event.origin, redacted_pdu_json
+            )
+        except SynapseError as e:
+            logger.warn("Signature check failed for %s redacted to %s",
+                        encode_canonical_json(pdu.get_pdu_json()),
+                        encode_canonical_json(redacted_pdu_json),
+                        )
+            raise FederationError(
+                "ERROR",
+                e.code,
+                e.msg,
+                affected=event.event_id,
+            )
+
+        if not check_event_content_hash(event):
+            logger.warn(
+                "Event content has been tampered, redacting %s, %s",
+                event.event_id, encode_canonical_json(event.get_full_dict())
+            )
+            event = redacted_event
+
+        is_new_state = yield self.state_handler.annotate_event_with_state(
+            event,
+            old_state=state
+        )
+
+        logger.debug("Event: %s", event)
+
+        try:
+            self.auth.check(event, raises=True)
+        except AuthError as e:
+            raise FederationError(
+                "ERROR",
+                e.code,
+                e.msg,
+                affected=event.event_id,
+            )
+
+        is_new_state = is_new_state and not backfilled

         # TODO: Implement something in federation that allows us to
         # respond to PDU.

-        target_is_mine = False
-        if hasattr(event, "target_host"):
-            target_is_mine = event.target_host == self.hs.hostname
-
-        if event.type == InviteJoinEvent.TYPE:
-            if not target_is_mine:
-                logger.debug("Ignoring invite/join event %s", event)
-                return
-
-            # If we receive an invite/join event then we need to join the
-            # sender to the given room.
-            # TODO: We should probably auth this or some such
-            content = event.content
-            content.update({"membership": Membership.JOIN})
-            new_event = self.event_factory.create_event(
-                etype=RoomMemberEvent.TYPE,
-                state_key=event.user_id,
-                room_id=event.room_id,
-                user_id=event.user_id,
-                membership=Membership.JOIN,
-                content=content
-            )
-
-            yield self.hs.get_handlers().room_member_handler.change_membership(
-                new_event,
-                do_auth=False,
-            )
-
-        else:
-            with (yield self.room_lock.lock(event.room_id)):
-                yield self.store.persist_event(
-                    event,
-                    backfilled,
-                    is_new_state=is_new_state
-                )
-
-            room = yield self.store.get_room(event.room_id)
-
-            if not room:
-                # Huh, let's try and get the current state
-                try:
-                    yield self.replication_layer.get_state_for_context(
-                        event.origin, event.room_id
-                    )
-
-                    hosts = yield self.store.get_joined_hosts_for_room(
-                        event.room_id
-                    )
-                    if self.hs.hostname in hosts:
-                        try:
-                            yield self.store.store_room(
-                                room_id=event.room_id,
-                                room_creator_user_id="",
-                                is_public=False,
-                            )
-                        except:
-                            pass
-                except:
-                    logger.exception(
-                        "Failed to get current state for room %s",
-                        event.room_id
-                    )
-
-            if not backfilled:
-                extra_users = []
-                if event.type == RoomMemberEvent.TYPE:
-                    target_user_id = event.state_key
-                    target_user = self.hs.parse_userid(target_user_id)
-                    extra_users.append(target_user)
-
-                yield self.notifier.on_new_room_event(
-                    event, extra_users=extra_users
-                )
+        yield self.store.persist_event(
+            event,
+            backfilled,
+            is_new_state=is_new_state
+        )
+
+        room = yield self.store.get_room(event.room_id)
+
+        if not room:
+            # Huh, let's try and get the current state
+            try:
+                yield self.replication_layer.get_state_for_context(
+                    event.origin, event.room_id, event.event_id,
+                )
+
+                hosts = yield self.store.get_joined_hosts_for_room(
+                    event.room_id
+                )
+                if self.hs.hostname in hosts:
+                    try:
+                        yield self.store.store_room(
+                            room_id=event.room_id,
+                            room_creator_user_id="",
+                            is_public=False,
+                        )
+                    except:
+                        pass
+            except:
+                logger.exception(
+                    "Failed to get current state for room %s",
+                    event.room_id
+                )
+
+        if not backfilled:
+            extra_users = []
+            if event.type == RoomMemberEvent.TYPE:
+                target_user_id = event.state_key
+                target_user = self.hs.parse_userid(target_user_id)
+                extra_users.append(target_user)
+
+            yield self.notifier.on_new_room_event(
+                event, extra_users=extra_users
+            )

         if event.type == RoomMemberEvent.TYPE:
             if event.membership == Membership.JOIN:
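
The signature and hash checks added to `on_receive_pdu` above follow a redact-then-verify pattern: signatures cover the redacted form of an event, while the content hash covers the full event, so a tampered event can be downgraded to its redacted copy rather than rejected outright. A condensed sketch of that flow, using only the helpers this change itself imports (function name hypothetical)::

    from twisted.internet import defer

    from synapse.api.events.utils import prune_event
    from synapse.crypto.event_signing import check_event_content_hash

    @defer.inlineCallbacks
    def checked_incoming_event(keyring, event):
        # Signatures are computed over the redacted event, so prune first.
        redacted = prune_event(event)
        yield keyring.verify_json_for_server(
            event.origin, redacted.get_pdu_json()
        )
        # The content hash covers the full event; on a mismatch, fall back
        # to the redacted copy instead of dropping the event.
        if not check_event_content_hash(event):
            defer.returnValue(redacted)
        defer.returnValue(event)
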
|
@ -189,79 +216,349 @@ class FederationHandler(BaseHandler):
|
||||||
@log_function
|
@log_function
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def backfill(self, dest, room_id, limit):
|
def backfill(self, dest, room_id, limit):
|
||||||
pdus = yield self.replication_layer.backfill(dest, room_id, limit)
|
""" Trigger a backfill request to `dest` for the given `room_id`
|
||||||
|
"""
|
||||||
|
extremities = yield self.store.get_oldest_events_in_room(room_id)
|
||||||
|
|
||||||
|
pdus = yield self.replication_layer.backfill(
|
||||||
|
dest,
|
||||||
|
room_id,
|
||||||
|
limit,
|
||||||
|
extremities=extremities,
|
||||||
|
)
|
||||||
|
|
||||||
events = []
|
events = []
|
||||||
|
|
||||||
for pdu in pdus:
|
for pdu in pdus:
|
||||||
event = self.pdu_codec.event_from_pdu(pdu)
|
event = pdu
|
||||||
|
|
||||||
|
# FIXME (erikj): Not sure this actually works :/
|
||||||
|
yield self.state_handler.annotate_event_with_state(event)
|
||||||
|
|
||||||
events.append(event)
|
events.append(event)
|
||||||
|
|
||||||
yield self.store.persist_event(event, backfilled=True)
|
yield self.store.persist_event(event, backfilled=True)
|
||||||
|
|
||||||
defer.returnValue(events)
|
defer.returnValue(events)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def send_invite(self, target_host, event):
|
||||||
|
""" Sends the invite to the remote server for signing.
|
||||||
|
|
||||||
|
Invites must be signed by the invitee's server before distribution.
|
||||||
|
"""
|
||||||
|
pdu = yield self.replication_layer.send_invite(
|
||||||
|
destination=target_host,
|
||||||
|
context=event.room_id,
|
||||||
|
event_id=event.event_id,
|
||||||
|
pdu=event
|
||||||
|
)
|
||||||
|
|
||||||
|
defer.returnValue(pdu)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def on_event_auth(self, event_id):
|
||||||
|
auth = yield self.store.get_auth_chain(event_id)
|
||||||
|
defer.returnValue([e for e in auth])
|
||||||
|
|
||||||
@log_function
|
@log_function
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def do_invite_join(self, target_host, room_id, joinee, content, snapshot):
|
def do_invite_join(self, target_host, room_id, joinee, content, snapshot):
|
||||||
|
""" Attempts to join the `joinee` to the room `room_id` via the
|
||||||
|
server `target_host`.
|
||||||
|
|
||||||
hosts = yield self.store.get_joined_hosts_for_room(room_id)
|
This first triggers a /make_join/ request that returns a partial
|
||||||
if self.hs.hostname in hosts:
|
event that we can fill out and sign. This is then sent to the
|
||||||
# We are already in the room.
|
remote server via /send_join/ which responds with the state at that
|
||||||
logger.debug("We're already in the room apparently")
|
event and the auth_chains.
|
||||||
defer.returnValue(False)
|
|
||||||
|
|
||||||
# First get current state to see if we are already joined.
|
We suspend processing of any received events from this room until we
|
||||||
try:
|
have finished processing the join.
|
||||||
yield self.replication_layer.get_state_for_context(
|
"""
|
||||||
target_host, room_id
|
pdu = yield self.replication_layer.make_join(
|
||||||
)
|
target_host,
|
||||||
|
room_id,
|
||||||
hosts = yield self.store.get_joined_hosts_for_room(room_id)
|
joinee
|
||||||
if self.hs.hostname in hosts:
|
|
||||||
# Oh, we were actually in the room already.
|
|
||||||
logger.debug("We're already in the room apparently")
|
|
||||||
defer.returnValue(False)
|
|
||||||
except Exception:
|
|
||||||
logger.exception("Failed to get current state")
|
|
||||||
|
|
||||||
new_event = self.event_factory.create_event(
|
|
||||||
etype=InviteJoinEvent.TYPE,
|
|
||||||
target_host=target_host,
|
|
||||||
room_id=room_id,
|
|
||||||
user_id=joinee,
|
|
||||||
content=content
|
|
||||||
)
|
)
|
||||||
|
|
||||||
new_event.destinations = [target_host]
|
logger.debug("Got response to make_join: %s", pdu)
|
||||||
|
|
||||||
snapshot.fill_out_prev_events(new_event)
|
event = pdu
|
||||||
yield self.handle_new_event(new_event, snapshot)
|
|
||||||
|
|
||||||
# TODO (erikj): Time out here.
|
# We should assert some things.
|
||||||
d = defer.Deferred()
|
assert(event.type == RoomMemberEvent.TYPE)
|
||||||
self.waiting_for_join_list.setdefault((joinee, room_id), []).append(d)
|
assert(event.user_id == joinee)
|
||||||
reactor.callLater(10, d.cancel)
|
assert(event.state_key == joinee)
|
||||||
|
assert(event.room_id == room_id)
|
||||||
|
|
||||||
|
event.outlier = False
|
||||||
|
|
||||||
|
self.room_queues[room_id] = []
|
||||||
|
|
||||||
try:
|
try:
|
||||||
yield d
|
event.event_id = self.event_factory.create_event_id()
|
||||||
except defer.CancelledError:
|
event.content = content
|
||||||
raise SynapseError(500, "Unable to join remote room")
|
|
||||||
|
|
||||||
try:
|
state = yield self.replication_layer.send_join(
|
||||||
yield self.store.store_room(
|
target_host,
|
||||||
room_id=room_id,
|
event
|
||||||
room_creator_user_id="",
|
|
||||||
is_public=False
|
|
||||||
)
|
)
|
||||||
except:
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
logger.debug("do_invite_join state: %s", state)
|
||||||
|
|
||||||
|
yield self.state_handler.annotate_event_with_state(
|
||||||
|
event,
|
||||||
|
old_state=state
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.debug("do_invite_join event: %s", event)
|
||||||
|
|
||||||
|
try:
|
||||||
|
yield self.store.store_room(
|
||||||
|
room_id=room_id,
|
||||||
|
room_creator_user_id="",
|
||||||
|
is_public=False
|
||||||
|
)
|
||||||
|
except:
|
||||||
|
# FIXME
|
||||||
|
pass
|
||||||
|
|
||||||
|
for e in state:
|
||||||
|
# FIXME: Auth these.
|
||||||
|
e.outlier = True
|
||||||
|
|
||||||
|
yield self.state_handler.annotate_event_with_state(
|
||||||
|
e,
|
||||||
|
)
|
||||||
|
|
||||||
|
yield self.store.persist_event(
|
||||||
|
e,
|
||||||
|
backfilled=False,
|
||||||
|
is_new_state=True
|
||||||
|
)
|
||||||
|
|
||||||
|
yield self.store.persist_event(
|
||||||
|
event,
|
||||||
|
backfilled=False,
|
||||||
|
is_new_state=True
|
||||||
|
)
|
||||||
|
finally:
|
||||||
|
room_queue = self.room_queues[room_id]
|
||||||
|
del self.room_queues[room_id]
|
||||||
|
|
||||||
|
for p in room_queue:
|
||||||
|
try:
|
||||||
|
yield self.on_receive_pdu(p, backfilled=False)
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
|
||||||
defer.returnValue(True)
|
defer.returnValue(True)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
@log_function
|
||||||
|
def on_make_join_request(self, context, user_id):
|
||||||
|
""" We've received a /make_join/ request, so we create a partial
|
||||||
|
join event for the room and return that. We don *not* persist or
|
||||||
|
process it until the other server has signed it and sent it back.
|
||||||
|
"""
|
||||||
|
event = self.event_factory.create_event(
|
||||||
|
etype=RoomMemberEvent.TYPE,
|
||||||
|
content={"membership": Membership.JOIN},
|
||||||
|
room_id=context,
|
||||||
|
user_id=user_id,
|
||||||
|
state_key=user_id,
|
||||||
|
)
|
||||||
|
|
||||||
|
snapshot = yield self.store.snapshot_room(event)
|
||||||
|
snapshot.fill_out_prev_events(event)
|
||||||
|
|
||||||
|
yield self.state_handler.annotate_event_with_state(event)
|
||||||
|
yield self.auth.add_auth_events(event)
|
||||||
|
self.auth.check(event, raises=True)
|
||||||
|
|
||||||
|
pdu = event
|
||||||
|
|
||||||
|
defer.returnValue(pdu)
|
||||||
|
|
||||||
|
+    @defer.inlineCallbacks
+    @log_function
+    def on_send_join_request(self, origin, pdu):
+        """ We have received a join event for a room. Fully process it and
+        respond with the current state and auth chains.
+        """
+        event = pdu
+
+        event.outlier = False
+
+        is_new_state = yield self.state_handler.annotate_event_with_state(event)
+        self.auth.check(event, raises=True)
+
+        # FIXME (erikj): All this is duplicated above :(
+
+        yield self.store.persist_event(
+            event,
+            backfilled=False,
+            is_new_state=is_new_state
+        )
+
+        extra_users = []
+        if event.type == RoomMemberEvent.TYPE:
+            target_user_id = event.state_key
+            target_user = self.hs.parse_userid(target_user_id)
+            extra_users.append(target_user)
+
+        yield self.notifier.on_new_room_event(
+            event, extra_users=extra_users
+        )
+
+        if event.type == RoomMemberEvent.TYPE:
+            if event.membership == Membership.JOIN:
+                user = self.hs.parse_userid(event.state_key)
+                self.distributor.fire(
+                    "user_joined_room", user=user, room_id=event.room_id
+                )
+
+        new_pdu = event
+
+        destinations = set()
+
+        for k, s in event.state_events.items():
+            try:
+                if k[0] == RoomMemberEvent.TYPE:
+                    if s.content["membership"] == Membership.JOIN:
+                        destinations.add(
+                            self.hs.parse_userid(s.state_key).domain
+                        )
+            except:
+                logger.warn(
+                    "Failed to get destination from event %s", s.event_id
+                )
+
+        new_pdu.destinations = list(destinations)
+
+        yield self.replication_layer.send_pdu(new_pdu)
+
+        auth_chain = yield self.store.get_auth_chain(event.event_id)
+
+        defer.returnValue({
+            "state": event.state_events.values(),
+            "auth_chain": auth_chain,
+        })
+
+    @defer.inlineCallbacks
+    def on_invite_request(self, origin, pdu):
+        """ We've got an invite event. Process and persist it. Sign it.
+
+        Respond with the now signed event.
+        """
+        event = pdu
+
+        event.outlier = True
+
+        event.signatures.update(
+            compute_event_signature(
+                event,
+                self.hs.hostname,
+                self.hs.config.signing_key[0]
+            )
+        )
+
+        yield self.state_handler.annotate_event_with_state(event)
+
+        yield self.store.persist_event(
+            event,
+            backfilled=False,
+        )
+
+        target_user = self.hs.parse_userid(event.state_key)
+        yield self.notifier.on_new_room_event(
+            event, extra_users=[target_user],
+        )
+
+        defer.returnValue(event)
+
+    @defer.inlineCallbacks
+    def get_state_for_pdu(self, origin, room_id, event_id):
+        yield run_on_reactor()
+
+        in_room = yield self.auth.check_host_in_room(room_id, origin)
+        if not in_room:
+            raise AuthError(403, "Host not in room.")
+
+        state_groups = yield self.store.get_state_groups(
+            [event_id]
+        )
+
+        if state_groups:
+            _, state = state_groups.items().pop()
+            results = {
+                (e.type, e.state_key): e for e in state
+            }
+
+            event = yield self.store.get_event(event_id)
+            if hasattr(event, "state_key"):
+                # Get previous state
+                if hasattr(event, "replaces_state") and event.replaces_state:
+                    prev_event = yield self.store.get_event(
+                        event.replaces_state
+                    )
+                    results[(event.type, event.state_key)] = prev_event
+                else:
+                    del results[(event.type, event.state_key)]
+
+            defer.returnValue(results.values())
+        else:
+            defer.returnValue([])
+
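``get_state_for_pdu`` answers with the state *at* the requested event, so the
event's own (type, state_key) entry is swapped for the event it replaced, or
dropped entirely if it replaced nothing. A small self-contained illustration
of that substitution::

    class FakeEvent(object):
        # Stand-in for a state event; only the fields used below.
        def __init__(self, type, state_key, replaces_state=None):
            self.type = type
            self.state_key = state_key
            self.replaces_state = replaces_state

    e1 = FakeEvent("m.room.topic", "")
    e2 = FakeEvent("m.room.topic", "", replaces_state="$e1")

    # State *at* e2 must not include e2 itself: swap in the event it
    # replaced, or drop the entry if it replaced nothing.
    results = {(e2.type, e2.state_key): e2}
    if e2.replaces_state:
        results[(e2.type, e2.state_key)] = e1
    else:
        del results[(e2.type, e2.state_key)]
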
+    @defer.inlineCallbacks
+    @log_function
+    def on_backfill_request(self, origin, context, pdu_list, limit):
+        in_room = yield self.auth.check_host_in_room(context, origin)
+        if not in_room:
+            raise AuthError(403, "Host not in room.")
+
+        events = yield self.store.get_backfill_events(
+            context,
+            pdu_list,
+            limit
+        )
+
+        defer.returnValue(events)
+
+    @defer.inlineCallbacks
+    @log_function
+    def get_persisted_pdu(self, origin, event_id):
+        """ Get a PDU from the database with given origin and id.
+
+        Returns:
+            Deferred: Results in a `Pdu`.
+        """
+        event = yield self.store.get_event(
+            event_id,
+            allow_none=True,
+        )
+
+        if event:
+            in_room = yield self.auth.check_host_in_room(
+                event.room_id,
+                origin
+            )
+            if not in_room:
+                raise AuthError(403, "Host not in room.")
+
+            defer.returnValue(event)
+        else:
+            defer.returnValue(None)
+
+    @log_function
+    def get_min_depth_for_context(self, context):
+        return self.store.get_min_depth(context)
+
     @log_function
     def _on_user_joined(self, user, room_id):
-        waiters = self.waiting_for_join_list.get((user.to_string(), room_id), [])
+        waiters = self.waiting_for_join_list.get(
+            (user.to_string(), room_id),
+            []
+        )
         while waiters:
             waiters.pop().callback(None)

@@ -16,7 +16,6 @@
 from twisted.internet import defer

 from synapse.api.constants import Membership
-from synapse.api.events.room import RoomTopicEvent
 from synapse.api.errors import RoomError
 from synapse.streams.config import PaginationConfig
 from ._base import BaseHandler

@@ -26,7 +25,6 @@ import logging
 logger = logging.getLogger(__name__)


-
 class MessageHandler(BaseHandler):

     def __init__(self, hs):

@@ -59,7 +57,8 @@ class MessageHandler(BaseHandler):
         #     user_id=sender_id
         # )

-        # TODO (erikj): Once we work out the correct c-s api we need to think on how to do this.
+        # TODO (erikj): Once we work out the correct c-s api we need to think
+        # on how to do this.

         defer.returnValue(None)

@@ -81,12 +80,11 @@ class MessageHandler(BaseHandler):
         user = self.hs.parse_userid(event.user_id)
         assert user.is_mine, "User must be our own: %s" % (user,)

-        snapshot = yield self.store.snapshot_room(event.room_id, event.user_id)
+        snapshot = yield self.store.snapshot_room(event)

-        if not suppress_auth:
-            yield self.auth.check(event, snapshot, raises=True)
-
-        yield self._on_new_room_event(event, snapshot)
+        yield self._on_new_room_event(
+            event, snapshot, suppress_auth=suppress_auth
+        )

         self.hs.get_handlers().presence_handler.bump_presence_active_time(
             user

@@ -111,7 +109,9 @@ class MessageHandler(BaseHandler):
         data_source = self.hs.get_event_sources().sources["room"]

         if not pagin_config.from_token:
-            pagin_config.from_token = yield self.hs.get_event_sources().get_current_token()
+            pagin_config.from_token = (
+                yield self.hs.get_event_sources().get_current_token()
+            )

         user = self.hs.parse_userid(user_id)

@@ -142,66 +142,27 @@ class MessageHandler(BaseHandler):
             SynapseError if something went wrong.
         """

-        snapshot = yield self.store.snapshot_room(
-            event.room_id,
-            event.user_id,
-            state_type=event.type,
-            state_key=event.state_key,
-        )
-
-        yield self.auth.check(event, snapshot, raises=True)
-
-        yield self.state_handler.handle_new_event(event, snapshot)
+        snapshot = yield self.store.snapshot_room(event)

         yield self._on_new_room_event(event, snapshot)

     @defer.inlineCallbacks
     def get_room_data(self, user_id=None, room_id=None,
-                      event_type=None, state_key="",
-                      public_room_rules=[],
-                      private_room_rules=["join"]):
+                      event_type=None, state_key=""):
         """ Get data from a room.

         Args:
             event : The room path event
-            public_room_rules : A list of membership states the user can be in,
-            in order to read this data IN A PUBLIC ROOM. An empty list means
-            'any state'.
-            private_room_rules : A list of membership states the user can be
-            in, in order to read this data IN A PRIVATE ROOM. An empty list
-            means 'any state'.
         Returns:
             The path data content.
         Raises:
             SynapseError if something went wrong.
         """
-        if event_type == RoomTopicEvent.TYPE:
-            # anyone invited/joined can read the topic
-            private_room_rules = ["invite", "join"]
-
-        # does this room exist
-        room = yield self.store.get_room(room_id)
-        if not room:
-            raise RoomError(403, "Room does not exist.")
-
-        # does this user exist in this room
-        member = yield self.store.get_room_member(
-            room_id=room_id,
-            user_id="" if not user_id else user_id)
-
-        member_state = member.membership if member else None
-
-        if room.is_public and public_room_rules:
-            # make sure the user meets public room rules
-            if member_state not in public_room_rules:
-                raise RoomError(403, "Member does not meet public room rules.")
-        elif not room.is_public and private_room_rules:
-            # make sure the user meets private room rules
-            if member_state not in private_room_rules:
-                raise RoomError(
-                    403, "Member does not meet private room rules.")
-
-        data = yield self.store.get_current_state(
+        have_joined = yield self.auth.check_joined_room(room_id, user_id)
+        if not have_joined:
+            raise RoomError(403, "User not in room.")
+
+        data = yield self.state_handler.get_current_state(
             room_id, event_type, state_key
         )
         defer.returnValue(data)

@@ -219,9 +180,7 @@ class MessageHandler(BaseHandler):

     @defer.inlineCallbacks
     def send_feedback(self, event):
-        snapshot = yield self.store.snapshot_room(event.room_id, event.user_id)
-
-        yield self.auth.check(event, snapshot, raises=True)
+        snapshot = yield self.store.snapshot_room(event)

         # store message in db
         yield self._on_new_room_event(event, snapshot)

@@ -239,7 +198,7 @@ class MessageHandler(BaseHandler):
         yield self.auth.check_joined_room(room_id, user_id)

         # TODO: This is duplicating logic from snapshot_all_rooms
-        current_state = yield self.store.get_current_state(room_id)
+        current_state = yield self.state_handler.get_current_state(room_id)
         defer.returnValue([self.hs.serialize_event(c) for c in current_state])

     @defer.inlineCallbacks

@@ -289,8 +248,10 @@ class MessageHandler(BaseHandler):
             d = {
                 "room_id": event.room_id,
                 "membership": event.membership,
-                "visibility": ("public" if event.room_id in
-                               public_room_ids else "private"),
+                "visibility": (
+                    "public" if event.room_id in public_room_ids
+                    else "private"
+                ),
             }

             if event.membership == Membership.INVITE:

@@ -316,10 +277,12 @@ class MessageHandler(BaseHandler):
                 "end": end_token.to_string(),
             }

-            current_state = yield self.store.get_current_state(
+            current_state = yield self.state_handler.get_current_state(
                 event.room_id
             )
-            d["state"] = [self.hs.serialize_event(c) for c in current_state]
+            d["state"] = [
+                self.hs.serialize_event(c) for c in current_state
+            ]
         except:
             logger.exception("Failed to get snapshot")
@@ -17,7 +17,6 @@ from twisted.internet import defer

 from synapse.api.errors import SynapseError, AuthError, CodeMessageException
 from synapse.api.constants import Membership
-from synapse.api.events.room import RoomMemberEvent

 from ._base import BaseHandler

@@ -153,10 +152,13 @@ class ProfileHandler(BaseHandler):
         if not user.is_mine:
             defer.returnValue(None)

-        (displayname, avatar_url) = yield defer.gatherResults([
-            self.store.get_profile_displayname(user.localpart),
-            self.store.get_profile_avatar_url(user.localpart),
-        ])
+        (displayname, avatar_url) = yield defer.gatherResults(
+            [
+                self.store.get_profile_displayname(user.localpart),
+                self.store.get_profile_avatar_url(user.localpart),
+            ],
+            consumeErrors=True
+        )

         state["displayname"] = displayname
         state["avatar_url"] = avatar_url

@@ -196,10 +198,7 @@ class ProfileHandler(BaseHandler):
         )

         for j in joins:
-            snapshot = yield self.store.snapshot_room(
-                j.room_id, j.state_key, RoomMemberEvent.TYPE,
-                j.state_key
-            )
+            snapshot = yield self.store.snapshot_room(j)

             content = {
                 "membership": j.content["membership"],

@@ -218,5 +217,6 @@ class ProfileHandler(BaseHandler):
                 user_id=j.state_key,
             )

-            yield self.state_handler.handle_new_event(new_event, snapshot)
-            yield self._on_new_room_event(new_event, snapshot)
+            yield self._on_new_room_event(
+                new_event, snapshot, suppress_auth=True
+            )
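The ``consumeErrors=True`` added to ``gatherResults`` above changes how a
failure is reported, not what is returned: the error is consumed from the
failing deferred so it surfaces exactly once, via the gathered result. A
minimal runnable demonstration of that standard Twisted behaviour::

    from twisted.internet import defer

    def fetch_both():
        d1 = defer.succeed("Display Name")
        d2 = defer.fail(RuntimeError("no avatar"))

        # With consumeErrors=True the failure is consumed from d2, so it
        # is reported exactly once -- via the gathered deferred, wrapped
        # in defer.FirstError -- instead of also triggering an
        # "Unhandled error in Deferred" at garbage-collection time.
        d = defer.gatherResults([d1, d2], consumeErrors=True)
        d.addErrback(lambda f: f.trap(defer.FirstError))
        return d
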
@@ -21,10 +21,10 @@ from synapse.api.constants import Membership, JoinRules
 from synapse.api.errors import StoreError, SynapseError
 from synapse.api.events.room import (
     RoomMemberEvent, RoomCreateEvent, RoomPowerLevelsEvent,
-    RoomJoinRulesEvent, RoomAddStateLevelEvent, RoomTopicEvent,
-    RoomSendEventLevelEvent, RoomOpsPowerLevelsEvent, RoomNameEvent,
+    RoomTopicEvent, RoomNameEvent, RoomJoinRulesEvent,
 )
 from synapse.util import stringutils
+from synapse.util.async import run_on_reactor
 from ._base import BaseHandler

 import logging

@@ -111,26 +111,15 @@ class RoomCreationHandler(BaseHandler):
             user, room_id, is_public=is_public
         )

-        if room_alias:
-            directory_handler = self.hs.get_handlers().directory_handler
-            yield directory_handler.create_association(
-                user_id=user_id,
-                room_id=room_id,
-                room_alias=room_alias,
-                servers=[self.hs.hostname],
-            )
-
         @defer.inlineCallbacks
         def handle_event(event):
-            snapshot = yield self.store.snapshot_room(
-                room_id=room_id,
-                user_id=user_id,
-            )
+            snapshot = yield self.store.snapshot_room(event)

             logger.debug("Event: %s", event)

-            yield self.state_handler.handle_new_event(event, snapshot)
-            yield self._on_new_room_event(event, snapshot, extra_users=[user])
+            yield self._on_new_room_event(
+                event, snapshot, extra_users=[user], suppress_auth=True
+            )

         for event in creation_events:
             yield handle_event(event)

@@ -141,7 +130,6 @@ class RoomCreationHandler(BaseHandler):
                 etype=RoomNameEvent.TYPE,
                 room_id=room_id,
                 user_id=user_id,
-                required_power_level=50,
                 content={"name": name},
             )

@@ -153,7 +141,6 @@ class RoomCreationHandler(BaseHandler):
                 etype=RoomTopicEvent.TYPE,
                 room_id=room_id,
                 user_id=user_id,
-                required_power_level=50,
                 content={"topic": topic},
             )

@@ -188,9 +175,18 @@ class RoomCreationHandler(BaseHandler):
             join_event,
             do_auth=False
         )

         result = {"room_id": room_id}

         if room_alias:
             result["room_alias"] = room_alias.to_string()
+            directory_handler = self.hs.get_handlers().directory_handler
+            yield directory_handler.create_association(
+                user_id=user_id,
+                room_id=room_id,
+                room_alias=room_alias,
+                servers=[self.hs.hostname],
+            )

         defer.returnValue(result)

@@ -198,7 +194,6 @@ class RoomCreationHandler(BaseHandler):
         event_keys = {
             "room_id": room_id,
             "user_id": creator.to_string(),
-            "required_power_level": 100,
         }

         def create(etype, **content):

@@ -215,7 +210,21 @@ class RoomCreationHandler(BaseHandler):

         power_levels_event = self.event_factory.create_event(
             etype=RoomPowerLevelsEvent.TYPE,
-            content={creator.to_string(): 100, "default": 0},
+            content={
+                "users": {
+                    creator.to_string(): 100,
+                },
+                "users_default": 0,
+                "events": {
+                    RoomNameEvent.TYPE: 100,
+                    RoomPowerLevelsEvent.TYPE: 100,
+                },
+                "events_default": 0,
+                "state_default": 50,
+                "ban": 50,
+                "kick": 50,
+                "redact": 50
+            },
             **event_keys
         )
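The power-levels event now carries one structured content dict (per-user
levels plus per-event-type requirements and defaults) instead of a flat
user-to-level map and separate level events. Assuming the keys shown above,
a hedged sketch of how a permission check could read such content; this is
illustrative, not Synapse's actual auth code::

    def user_level(content, user_id):
        # Fall back to "users_default" when the user has no explicit entry.
        return content.get("users", {}).get(
            user_id, content.get("users_default", 0)
        )

    def required_level(content, event_type, is_state):
        # Per-type override first, then the state/message default.
        events = content.get("events", {})
        if event_type in events:
            return events[event_type]
        if is_state:
            return content.get("state_default", 50)
        return content.get("events_default", 0)

    def may_send(content, user_id, event_type, is_state=False):
        return user_level(content, user_id) >= required_level(
            content, event_type, is_state
        )
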
@@ -225,30 +234,10 @@ class RoomCreationHandler(BaseHandler):
             join_rule=join_rule,
         )

-        add_state_event = create(
-            etype=RoomAddStateLevelEvent.TYPE,
-            level=100,
-        )
-
-        send_event = create(
-            etype=RoomSendEventLevelEvent.TYPE,
-            level=0,
-        )
-
-        ops = create(
-            etype=RoomOpsPowerLevelsEvent.TYPE,
-            ban_level=50,
-            kick_level=50,
-            redact_level=50,
-        )
-
         return [
             creation_event,
             power_levels_event,
             join_rules_event,
-            add_state_event,
-            send_event,
-            ops,
         ]

@@ -363,10 +352,8 @@ class RoomMemberHandler(BaseHandler):
         """
         target_user_id = event.state_key

-        snapshot = yield self.store.snapshot_room(
-            event.room_id, event.user_id,
-            RoomMemberEvent.TYPE, target_user_id
-        )
+        snapshot = yield self.store.snapshot_room(event)
         ## TODO(markjh): get prev state from snapshot.
         prev_state = yield self.store.get_room_member(
             target_user_id, event.room_id

@@ -375,13 +362,6 @@ class RoomMemberHandler(BaseHandler):
         if prev_state:
             event.content["prev"] = prev_state.membership

-        # if prev_state and prev_state.membership == event.membership:
-        #     # treat this event as a NOOP.
-        #     if do_auth:  # This is mainly to fix a unit test.
-        #         yield self.auth.check(event, raises=True)
-        #     defer.returnValue({})
-        #     return
-
         room_id = event.room_id

         # If we're trying to join a room then we have to do this differently

@@ -391,29 +371,17 @@ class RoomMemberHandler(BaseHandler):
             yield self._do_join(event, snapshot, do_auth=do_auth)
         else:
             # This is not a JOIN, so we can handle it normally.
-            if do_auth:
-                yield self.auth.check(event, snapshot, raises=True)
-
-            # If we're banning someone, set a req power level
-            if event.membership == Membership.BAN:
-                if not hasattr(event, "required_power_level") or event.required_power_level is None:
-                    # Add some default required_power_level
-                    user_level = yield self.store.get_power_level(
-                        event.room_id,
-                        event.user_id,
-                    )
-                    event.required_power_level = user_level
-
             if prev_state and prev_state.membership == event.membership:
                 # double same action, treat this event as a NOOP.
                 defer.returnValue({})
                 return

-            yield self.state_handler.handle_new_event(event, snapshot)
             yield self._do_local_membership_update(
                 event,
                 membership=event.content["membership"],
                 snapshot=snapshot,
+                do_auth=do_auth,
             )

         defer.returnValue({"room_id": room_id})

@@ -443,10 +411,7 @@ class RoomMemberHandler(BaseHandler):
             content=content,
         )

-        snapshot = yield self.store.snapshot_room(
-            room_id, joinee.to_string(), RoomMemberEvent.TYPE,
-            joinee.to_string()
-        )
+        snapshot = yield self.store.snapshot_room(new_event)

         yield self._do_join(new_event, snapshot, room_host=host, do_auth=True)

@@ -468,9 +433,12 @@ class RoomMemberHandler(BaseHandler):
         # that we are allowed to join when we decide whether or not we
         # need to do the invite/join dance.

-        hosts = yield self.store.get_joined_hosts_for_room(room_id)
-
-        if self.hs.hostname in hosts:
+        is_host_in_room = yield self.auth.check_host_in_room(
+            event.room_id,
+            self.hs.hostname
+        )
+
+        if is_host_in_room:
             should_do_dance = False
         elif room_host:
             should_do_dance = True

@@ -502,14 +470,11 @@ class RoomMemberHandler(BaseHandler):
         if not have_joined:
             logger.debug("Doing normal join")

-            if do_auth:
-                yield self.auth.check(event, snapshot, raises=True)
-
-            yield self.state_handler.handle_new_event(event, snapshot)
             yield self._do_local_membership_update(
                 event,
                 membership=event.content["membership"],
                 snapshot=snapshot,
+                do_auth=do_auth,
             )

         user = self.hs.parse_userid(event.user_id)

@@ -553,26 +518,29 @@ class RoomMemberHandler(BaseHandler):

         defer.returnValue([r.room_id for r in rooms])

-    def _do_local_membership_update(self, event, membership, snapshot):
-        destinations = []
-
+    @defer.inlineCallbacks
+    def _do_local_membership_update(self, event, membership, snapshot,
+                                    do_auth):
+        yield run_on_reactor()
+
         # If we're inviting someone, then we should also send it to that
         # HS.
         target_user_id = event.state_key
         target_user = self.hs.parse_userid(target_user_id)
-        if membership == Membership.INVITE:
-            host = target_user.domain
-            destinations.append(host)
-
-        # Always include target domain
-        host = target_user.domain
-        destinations.append(host)
-
-        return self._on_new_room_event(
-            event, snapshot, extra_destinations=destinations,
-            extra_users=[target_user]
+        if membership == Membership.INVITE and not target_user.is_mine:
+            do_invite_host = target_user.domain
+        else:
+            do_invite_host = None
+
+        yield self._on_new_room_event(
+            event,
+            snapshot,
+            extra_users=[target_user],
+            suppress_auth=(not do_auth),
+            do_invite_host=do_invite_host,
         )


 class RoomListHandler(BaseHandler):

     @defer.inlineCallbacks
@@ -23,6 +23,7 @@ from twisted.web.http_headers import Headers

 from synapse.http.endpoint import matrix_endpoint
 from synapse.util.async import sleep
+from synapse.util.logcontext import PreserveLoggingContext

 from syutil.jsonutil import encode_canonical_json

@@ -108,16 +109,17 @@ class BaseHttpClient(object):
         producer = body_callback(method, url_bytes, headers_dict)

         try:
-            response = yield self.agent.request(
-                destination,
-                endpoint,
-                method,
-                path_bytes,
-                param_bytes,
-                query_bytes,
-                Headers(headers_dict),
-                producer
-            )
+            with PreserveLoggingContext():
+                response = yield self.agent.request(
+                    destination,
+                    endpoint,
+                    method,
+                    path_bytes,
+                    param_bytes,
+                    query_bytes,
+                    Headers(headers_dict),
+                    producer
+                )

             logger.debug("Got response to %s", method)
             break
@@ -129,6 +129,14 @@ class ContentRepoResource(resource.Resource):
         logger.info("Sending file %s", file_path)
         f = open(file_path, 'rb')
         request.setHeader('Content-Type', content_type)

+        # cache for at least a day.
+        # XXX: we might want to turn this off for data we don't want to recommend
+        # caching as it's sensitive or private - or at least select private.
+        # don't bother setting Expires as all our matrix clients are smart enough to
+        # be happy with Cache-Control (right?)
+        request.setHeader('Cache-Control', 'public,max-age=86400,s-maxage=86400')
+
         d = FileSender().beginFileTransfer(f, request)

         # after the file has been sent, clean up and finish the request
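A sketch of the variant the XXX comment contemplates, gating the shared-cache
directive on a hypothetical ``is_private`` flag; the resource above always
serves the public form, so this is an assumption-level illustration only::

    def set_cache_headers(request, is_private):
        # `is_private` is a hypothetical flag; nothing in the resource
        # above actually computes it.
        if is_private:
            # Usable by the requesting client only; shared caches must
            # not store it (so no s-maxage).
            request.setHeader('Cache-Control', 'private,max-age=86400')
        else:
            request.setHeader(
                'Cache-Control', 'public,max-age=86400,s-maxage=86400'
            )
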
@@ -20,6 +20,7 @@ from syutil.jsonutil import (
 from synapse.api.errors import (
     cs_exception, SynapseError, CodeMessageException
 )
+from synapse.util.logcontext import LoggingContext

 from twisted.internet import defer, reactor
 from twisted.web import server, resource

@@ -88,9 +89,19 @@ class JsonResource(HttpServer, resource.Resource):
     def render(self, request):
         """ This gets called by twisted every time someone sends us a request.
         """
-        self._async_render(request)
+        self._async_render_with_logging_context(request)
         return server.NOT_DONE_YET

+    _request_id = 0
+
+    @defer.inlineCallbacks
+    def _async_render_with_logging_context(self, request):
+        request_id = "%s-%s" % (request.method, JsonResource._request_id)
+        JsonResource._request_id += 1
+        with LoggingContext(request_id) as request_context:
+            request_context.request = request_id
+            yield self._async_render(request)
+
     @defer.inlineCallbacks
     def _async_render(self, request):
         """ This gets called by twisted every time someone sends us a request.
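With this change every request is rendered inside a ``LoggingContext`` named
after the request, so log lines can be attributed to the request that
produced them. The diff does not show ``LoggingContext`` itself; a rough
equivalent of the pattern built on a plain thread-local, offered as an
assumption-level sketch rather than Synapse's implementation::

    import logging
    import threading

    _ctx = threading.local()

    class RequestFilter(logging.Filter):
        def filter(self, record):
            # Stamp every record with the active request id (or "-").
            record.request = getattr(_ctx, "request", "-")
            return True

    def render_with_context(request_id, render):
        _ctx.request = request_id
        try:
            render()  # anything logged in here carries request_id
        finally:
            _ctx.request = None
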
@@ -18,6 +18,11 @@ from synapse.api.urls import CLIENT_PREFIX
 from synapse.rest.transactions import HttpTransactionStore
 import re

+import logging
+
+
+logger = logging.getLogger(__name__)
+

 def client_path_pattern(path_regex):
     """Creates a regex compiled client path with the correct client path

@@ -62,6 +67,8 @@ class RestServlet(object):
         self.auth = hs.get_auth()
         self.txns = HttpTransactionStore()

+        self.validator = hs.get_event_validator()
+
     def register(self, http_server):
         """ Register this servlet with the given HTTP server. """
         if hasattr(self, "PATTERN"):
@@ -20,6 +20,12 @@ from synapse.api.errors import SynapseError
 from synapse.streams.config import PaginationConfig
 from synapse.rest.base import RestServlet, client_path_pattern

+import logging
+
+
+logger = logging.getLogger(__name__)
+

 class EventStreamRestServlet(RestServlet):
     PATTERN = client_path_pattern("/events$")

@@ -29,18 +35,22 @@ class EventStreamRestServlet(RestServlet):
     @defer.inlineCallbacks
     def on_GET(self, request):
         auth_user = yield self.auth.get_user_by_req(request)
-
-        handler = self.handlers.event_stream_handler
-        pagin_config = PaginationConfig.from_request(request)
-        timeout = EventStreamRestServlet.DEFAULT_LONGPOLL_TIME_MS
-        if "timeout" in request.args:
-            try:
-                timeout = int(request.args["timeout"][0])
-            except ValueError:
-                raise SynapseError(400, "timeout must be in milliseconds.")
-
-        chunk = yield handler.get_stream(auth_user.to_string(), pagin_config,
-                                         timeout=timeout)
+        try:
+            handler = self.handlers.event_stream_handler
+            pagin_config = PaginationConfig.from_request(request)
+            timeout = EventStreamRestServlet.DEFAULT_LONGPOLL_TIME_MS
+            if "timeout" in request.args:
+                try:
+                    timeout = int(request.args["timeout"][0])
+                except ValueError:
+                    raise SynapseError(400, "timeout must be in milliseconds.")
+
+            chunk = yield handler.get_stream(
+                auth_user.to_string(), pagin_config, timeout=timeout
+            )
+        except:
+            logger.exception("Event stream failed")
+            raise

         defer.returnValue((200, chunk))
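The stream servlet reads an optional ``timeout`` query argument, in
milliseconds, to support long-polling. A sketch of the client side of that
loop; the ``from``/``access_token`` parameter names follow the client API of
the period and should be treated as illustrative::

    import json
    import urllib
    import urllib2

    def poll_events(base_url, access_token, from_token=None,
                    timeout_ms=30000):
        params = {"access_token": access_token, "timeout": timeout_ms}
        if from_token is not None:
            params["from"] = from_token
        url = "%s/events?%s" % (base_url, urllib.urlencode(params))
        chunk = json.loads(urllib2.urlopen(url).read())
        # The "end" token of one response seeds the "from" of the next.
        return chunk["chunk"], chunk["end"]
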
@@ -138,7 +138,7 @@ class RoomStateEventRestServlet(RestServlet):
             raise SynapseError(
                 404, "Event not found.", errcode=Codes.NOT_FOUND
             )
-        defer.returnValue((200, data[0].get_dict()["content"]))
+        defer.returnValue((200, data.get_dict()["content"]))

     @defer.inlineCallbacks
     def on_PUT(self, request, room_id, event_type, state_key):

@@ -154,6 +154,9 @@ class RoomStateEventRestServlet(RestServlet):
             user_id=user.to_string(),
             state_key=urllib.unquote(state_key)
         )

+        self.validator.validate(event)
+
         if event_type == RoomMemberEvent.TYPE:
             # membership events are special
             handler = self.handlers.room_member_handler

@@ -188,6 +191,8 @@ class RoomSendEventRestServlet(RestServlet):
             content=content
         )

+        self.validator.validate(event)
+
         msg_handler = self.handlers.message_handler
         yield msg_handler.send_message(event)

@@ -253,6 +258,9 @@ class JoinRoomAliasServlet(RestServlet):
             user_id=user.to_string(),
             state_key=user.to_string()
         )

+        self.validator.validate(event)
+
         handler = self.handlers.room_member_handler
         yield handler.change_membership(event)
         defer.returnValue((200, {}))

@@ -409,6 +417,9 @@ class RoomMembershipRestServlet(RestServlet):
             user_id=user.to_string(),
             state_key=state_key
         )

+        self.validator.validate(event)
+
         handler = self.handlers.room_member_handler
         yield handler.change_membership(event)
         defer.returnValue((200, {}))

@@ -446,6 +457,8 @@ class RoomRedactEventRestServlet(RestServlet):
             redacts=urllib.unquote(event_id),
         )

+        self.validator.validate(event)
+
         msg_handler = self.handlers.message_handler
         yield msg_handler.send_message(event)
@@ -22,13 +22,14 @@
 from synapse.federation import initialize_http_replication
 from synapse.api.events import serialize_event
 from synapse.api.events.factory import EventFactory
+from synapse.api.events.validator import EventValidator
 from synapse.notifier import Notifier
 from synapse.api.auth import Auth
 from synapse.handlers import Handlers
 from synapse.rest import RestServletFactory
 from synapse.state import StateHandler
 from synapse.storage import DataStore
-from synapse.types import UserID, RoomAlias, RoomID
+from synapse.types import UserID, RoomAlias, RoomID, EventID
 from synapse.util import Clock
 from synapse.util.distributor import Distributor
 from synapse.util.lockutils import LockManager

@@ -80,6 +81,7 @@ class BaseHomeServer(object):
         'event_sources',
         'ratelimiter',
         'keyring',
+        'event_validator',
     ]

     def __init__(self, hostname, **kwargs):

@@ -143,6 +145,11 @@ class BaseHomeServer(object):
         object."""
         return RoomID.from_string(s, hs=self)

+    def parse_eventid(self, s):
+        """Parse the string given by 's' as an Event ID and return an
+        EventID object."""
+        return EventID.from_string(s, hs=self)
+
     def serialize_event(self, e):
         return serialize_event(self, e)

@@ -218,6 +225,9 @@ class HomeServer(BaseHomeServer):
     def build_keyring(self):
         return Keyring(self)

+    def build_event_validator(self):
+        return EventValidator(self)
+
     def register_servlets(self):
         """ Register all servlets associated with this HomeServer.
         """

360
synapse/state.py
@@ -16,11 +16,13 @@

 from twisted.internet import defer

-from synapse.federation.pdu_codec import encode_event_id, decode_event_id
 from synapse.util.logutils import log_function
+from synapse.util.async import run_on_reactor
+from synapse.api.events.room import RoomPowerLevelsEvent

 from collections import namedtuple

+import copy
 import logging
 import hashlib
@@ -35,230 +37,204 @@ KeyStateTuple = namedtuple("KeyStateTuple", ("context", "type", "state_key"))


 class StateHandler(object):
-    """ Repsonsible for doing state conflict resolution.
+    """ Responsible for doing state conflict resolution.
     """

     def __init__(self, hs):
         self.store = hs.get_datastore()
-        self._replication = hs.get_replication_layer()
-        self.server_name = hs.hostname

     @defer.inlineCallbacks
     @log_function
-    def handle_new_event(self, event, snapshot):
-        """ Given an event this works out if a) we have sufficient power level
-        to update the state and b) works out what the prev_state should be.
-
-        Returns:
-            Deferred: Resolved with a boolean indicating if we succesfully
-            updated the state.
-
-        Raised:
-            AuthError
-        """
-        # This needs to be done in a transaction.
-
-        if not hasattr(event, "state_key"):
-            return
-
-        key = KeyStateTuple(
-            event.room_id,
-            event.type,
-            _get_state_key_from_event(event)
-        )
-
-        # Now I need to fill out the prev state and work out if it has auth
-        # (w.r.t. to power levels)
-
-        snapshot.fill_out_prev_events(event)
-
-        event.prev_events = [
-            e for e in event.prev_events if e != event.event_id
-        ]
-
-        current_state = snapshot.prev_state_pdu
-
-        if current_state:
-            event.prev_state = encode_event_id(
-                current_state.pdu_id, current_state.origin
-            )
-
-        # TODO check current_state to see if the min power level is less
-        # than the power level of the user
-        # power_level = self._get_power_level_for_event(event)
-
-        pdu_id, origin = decode_event_id(event.event_id, self.server_name)
-
-        yield self.store.update_current_state(
-            pdu_id=pdu_id,
-            origin=origin,
-            context=key.context,
-            pdu_type=key.type,
-            state_key=key.state_key
-        )
-
-        defer.returnValue(True)
+    def annotate_event_with_state(self, event, old_state=None):
+        """ Annotates the event with the current state events as of that
+        event.
+
+        This method adds three new attributes to the event:
+            * `state_events`: The state up to and including the event.
+              Encoded as a dict mapping tuple (type, state_key) -> event.
+            * `old_state_events`: The state up to, but excluding, the event.
+              Encoded similarly as `state_events`.
+            * `state_group`: If there is an existing state group that can be
+              used, then return that. Otherwise return `None`. See state
+              storage for more information.
+
+        If the argument `old_state` is given (in the form of a list of
+        events), then they are used as the values for `old_state_events` and
+        the value for `state_events` is generated from it. `state_group` is
+        set to None.
+
+        This needs to be called before persisting the event.
+        """
+        yield run_on_reactor()
+
+        if old_state:
+            event.state_group = None
+            event.old_state_events = {
+                (s.type, s.state_key): s for s in old_state
+            }
+            event.state_events = event.old_state_events
+
+            if hasattr(event, "state_key"):
+                event.state_events[(event.type, event.state_key)] = event
+
+            defer.returnValue(False)
+            return
+
+        if hasattr(event, "outlier") and event.outlier:
+            event.state_group = None
+            event.old_state_events = None
+            event.state_events = {}
+            defer.returnValue(False)
+            return
+
+        ids = [e for e, _ in event.prev_events]
+
+        ret = yield self.resolve_state_groups(ids)
+        state_group, new_state = ret
+
+        event.old_state_events = copy.deepcopy(new_state)
+
+        if hasattr(event, "state_key"):
+            key = (event.type, event.state_key)
+            if key in new_state:
+                event.replaces_state = new_state[key].event_id
+            new_state[key] = event
+        elif state_group:
+            event.state_group = state_group
+            event.state_events = new_state
+            defer.returnValue(False)
+
+        event.state_group = None
+        event.state_events = new_state
+
+        defer.returnValue(hasattr(event, "state_key"))
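
For state events, the contract in the docstring above boils down to a pair
of post-conditions. A small sketch of what a caller may rely on after
annotation (the helper is illustrative, not part of the handler)::

    def check_annotation(event):
        # Post-conditions per the docstring, for a state event.
        key = (event.type, event.state_key)
        assert event.state_events[key] is event    # includes the event
        prev = event.old_state_events.get(key)     # state just before it
        return prev, event.state_group            # group may be None
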
+    @defer.inlineCallbacks
+    def get_current_state(self, room_id, event_type=None, state_key=""):
+        """ Returns the current state for the room as a list. This is done
+        by calling `get_latest_events_in_room` to get the leading edges of
+        the event graph and then resolving any of the state conflicts.
+
+        This is equivalent to getting the state of an event that we were to
+        send next, before receiving any new events.
+
+        If `event_type` is specified, then the method returns only the one
+        event (or None) with that `event_type` and `state_key`.
+        """
+        events = yield self.store.get_latest_events_in_room(room_id)
+
+        event_ids = [
+            e_id
+            for e_id, _, _ in events
+        ]
+
+        res = yield self.resolve_state_groups(event_ids)
+
+        if event_type:
+            defer.returnValue(res[1].get((event_type, state_key)))
+            return
+
+        defer.returnValue(res[1].values())

-    @defer.inlineCallbacks
-    @log_function
-    def handle_new_state(self, new_pdu):
-        """ Apply conflict resolution to `new_pdu`.
-
-        This should be called on every new state pdu, regardless of whether or
-        not there is a conflict.
-
-        This function is safe against the race of it getting called with two
-        `PDU`s trying to update the same state.
-        """
-
-        # This needs to be done in a transaction.
-
-        is_new = yield self._handle_new_state(new_pdu)
-
-        logger.debug("is_new: %s %s %s", is_new, new_pdu.pdu_id, new_pdu.origin)
-
-        if is_new:
-            yield self.store.update_current_state(
-                pdu_id=new_pdu.pdu_id,
-                origin=new_pdu.origin,
-                context=new_pdu.context,
-                pdu_type=new_pdu.pdu_type,
-                state_key=new_pdu.state_key
-            )
-
-        defer.returnValue(is_new)
-
-    def _get_power_level_for_event(self, event):
-        # return self._persistence.get_power_level_for_user(event.room_id,
-        #                                                   event.sender)
-        return event.power_level
-
-    @defer.inlineCallbacks
-    @log_function
-    def _handle_new_state(self, new_pdu):
-        tree, missing_branch = yield self.store.get_unresolved_state_tree(
-            new_pdu
-        )
-        new_branch, current_branch = tree
-
-        logger.debug(
-            "_handle_new_state new=%s, current=%s",
-            new_branch, current_branch
-        )
-
-        if missing_branch is not None:
-            # We're missing some PDUs. Fetch them.
-            # TODO (erikj): Limit this.
-            missing_prev = tree[missing_branch][-1]
-
-            pdu_id = missing_prev.prev_state_id
-            origin = missing_prev.prev_state_origin
-
-            is_missing = yield self.store.get_pdu(pdu_id, origin) is None
-            if not is_missing:
-                raise Exception("Conflict resolution failed")
-
-            yield self._replication.get_pdu(
-                destination=missing_prev.origin,
-                pdu_origin=origin,
-                pdu_id=pdu_id,
-                outlier=True
-            )
-
-            updated_current = yield self._handle_new_state(new_pdu)
-            defer.returnValue(updated_current)
-
-        if not current_branch:
-            # There is no current state
-            defer.returnValue(True)
-            return
-        n = new_branch[-1]
-        c = current_branch[-1]
-
-        common_ancestor = n.pdu_id == c.pdu_id and n.origin == c.origin
-
-        if common_ancestor:
-            # We found a common ancestor!
-
-            if len(current_branch) == 1:
-                # This is a direct clobber so we can just...
-                defer.returnValue(True)
-
-        else:
-            # We didn't find a common ancestor. This is probably fine.
-            pass
-
-        result = yield self._do_conflict_res(
-            new_branch, current_branch, common_ancestor
-        )
-        defer.returnValue(result)
-
-    @defer.inlineCallbacks
-    def _do_conflict_res(self, new_branch, current_branch, common_ancestor):
-        conflict_res = [
-            self._do_power_level_conflict_res,
-            self._do_chain_length_conflict_res,
-            self._do_hash_conflict_res,
-        ]
-
-        for algo in conflict_res:
-            new_res, curr_res = yield defer.maybeDeferred(
-                algo,
-                new_branch, current_branch, common_ancestor
-            )
-
-            if new_res < curr_res:
-                defer.returnValue(False)
-            elif new_res > curr_res:
-                defer.returnValue(True)
-
-        raise Exception("Conflict resolution failed.")
+    @defer.inlineCallbacks
+    @log_function
+    def resolve_state_groups(self, event_ids):
+        """ Given a list of event_ids this method fetches the state at each
+        event, resolves conflicts between them and returns them.
+
+        Return format is a tuple: (`state_group`, `state_events`), where the
+        first is the name of a state group if one and only one is involved,
+        otherwise `None`.
+        """
+        state_groups = yield self.store.get_state_groups(
+            event_ids
+        )
+
+        group_names = set(state_groups.keys())
+        if len(group_names) == 1:
+            name, state_list = state_groups.items().pop()
+            state = {
+                (e.type, e.state_key): e
+                for e in state_list
+            }
+            defer.returnValue((name, state))
+
+        state = {}
+        for group, g_state in state_groups.items():
+            for s in g_state:
+                state.setdefault(
+                    (s.type, s.state_key),
+                    {}
+                )[s.event_id] = s
+
+        unconflicted_state = {
+            k: v.values()[0] for k, v in state.items()
+            if len(v.values()) == 1
+        }
+
+        conflicted_state = {
+            k: v.values()
+            for k, v in state.items()
+            if len(v.values()) > 1
+        }
+
+        try:
+            new_state = {}
+            new_state.update(unconflicted_state)
+            for key, events in conflicted_state.items():
+                new_state[key] = self._resolve_state_events(events)
+        except:
+            logger.exception("Failed to resolve state")
+            raise
+
+        defer.returnValue((None, new_state))
+
+    def _get_power_level_from_event_state(self, event, user_id):
+        if hasattr(event, "old_state_events") and event.old_state_events:
+            key = (RoomPowerLevelsEvent.TYPE, "", )
+            power_level_event = event.old_state_events.get(key)
+            level = None
+            if power_level_event:
+                level = power_level_event.content.get("users", {}).get(
+                    user_id
+                )
+                if not level:
+                    level = power_level_event.content.get("users_default", 0)
+
+            return level
+        else:
+            return 0
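
``resolve_state_groups`` first buckets every candidate by (type, state_key)
and then splits the buckets into unconflicted entries (exactly one
candidate) and conflicts (several). A runnable miniature of that partition,
with strings standing in for events::

    # Candidates per (type, state_key), keyed by event_id.
    state = {
        ("m.room.topic", ""): {"$a": "A"},            # single candidate
        ("m.room.name", ""): {"$b": "B", "$c": "C"},  # conflict
    }

    unconflicted = {
        k: list(v.values())[0] for k, v in state.items() if len(v) == 1
    }
    conflicted = {
        k: list(v.values()) for k, v in state.items() if len(v) > 1
    }

    assert unconflicted == {("m.room.topic", ""): "A"}
    assert sorted(conflicted[("m.room.name", "")]) == ["B", "C"]
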
-    @defer.inlineCallbacks
-    def _do_power_level_conflict_res(self, new_branch, current_branch,
-                                     common_ancestor):
-        new_powers_deferreds = []
-        for e in new_branch[:-1] if common_ancestor else new_branch:
-            if hasattr(e, "user_id"):
-                new_powers_deferreds.append(
-                    self.store.get_power_level(e.context, e.user_id)
-                )
-
-        current_powers_deferreds = []
-        for e in current_branch[:-1] if common_ancestor else current_branch:
-            if hasattr(e, "user_id"):
-                current_powers_deferreds.append(
-                    self.store.get_power_level(e.context, e.user_id)
-                )
-
-        new_powers = yield defer.gatherResults(
-            new_powers_deferreds,
-            consumeErrors=True
-        )
-
-        current_powers = yield defer.gatherResults(
-            current_powers_deferreds,
-            consumeErrors=True
-        )
-
-        max_power_new = max(new_powers)
-        max_power_current = max(current_powers)
-
-        defer.returnValue(
-            (max_power_new, max_power_current)
-        )
-
-    def _do_chain_length_conflict_res(self, new_branch, current_branch,
-                                      common_ancestor):
-        return (len(new_branch), len(current_branch))
-
-    def _do_hash_conflict_res(self, new_branch, current_branch,
-                              common_ancestor):
-        new_str = "".join([p.pdu_id + p.origin for p in new_branch])
-        c_str = "".join([p.pdu_id + p.origin for p in current_branch])
-
-        return (
-            hashlib.sha1(new_str).hexdigest(),
-            hashlib.sha1(c_str).hexdigest()
-        )
+    @log_function
+    def _resolve_state_events(self, events):
+        curr_events = events
+
+        new_powers = [
+            self._get_power_level_from_event_state(e, e.user_id)
+            for e in curr_events
+        ]
+
+        new_powers = [
+            int(p) if p else 0 for p in new_powers
+        ]
+
+        max_power = max(new_powers)
+
+        curr_events = [
+            z[0] for z in zip(curr_events, new_powers)
+            if z[1] == max_power
+        ]
+
+        if not curr_events:
+            raise RuntimeError("Max didn't get a max?")
+        elif len(curr_events) == 1:
+            return curr_events[0]
+
+        # TODO: For now, just choose the one with the largest event_id.
+        return (
+            sorted(
+                curr_events,
+                key=lambda e: hashlib.sha1(
+                    e.event_id + e.user_id + e.room_id + e.type
+                ).hexdigest()
+            )[0]
+        )
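
``_resolve_state_events`` keeps the candidates whose senders have the
highest power level and then, per the TODO above, breaks any remaining tie
by sorting on a SHA-1 digest, so every server independently picks the same
winner. A runnable miniature (written for modern Python, hence the
``.encode()``)::

    import hashlib

    candidates = [
        {"event_id": "$1", "user_id": "@a:x", "room_id": "!r:x",
         "type": "m.room.name", "power": 50},
        {"event_id": "$2", "user_id": "@b:x", "room_id": "!r:x",
         "type": "m.room.name", "power": 50},
    ]

    max_power = max(e["power"] for e in candidates)
    top = [e for e in candidates if e["power"] == max_power]

    # Deterministic tie-break: every server sorts the survivors by the
    # same hash, so every server picks the same winner without any
    # further coordination.
    winner = sorted(
        top,
        key=lambda e: hashlib.sha1(
            (e["event_id"] + e["user_id"] + e["room_id"] + e["type"])
            .encode()
        ).hexdigest(),
    )[0]
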
@@ -16,14 +16,7 @@
 from twisted.internet import defer

 from synapse.api.events.room import (
-    RoomMemberEvent, RoomTopicEvent, FeedbackEvent,
-    # RoomConfigEvent,
-    RoomNameEvent,
-    RoomJoinRulesEvent,
-    RoomPowerLevelsEvent,
-    RoomAddStateLevelEvent,
-    RoomSendEventLevelEvent,
-    RoomOpsPowerLevelsEvent,
+    RoomMemberEvent, RoomTopicEvent, FeedbackEvent, RoomNameEvent,
     RoomRedactionEvent,
 )

@@ -37,9 +30,17 @@ from .registration import RegistrationStore
 from .room import RoomStore
 from .roommember import RoomMemberStore
 from .stream import StreamStore
-from .pdu import StatePduStore, PduStore, PdusTable
 from .transactions import TransactionStore
 from .keys import KeyStore
+from .event_federation import EventFederationStore
+
+from .state import StateStore
+from .signatures import SignatureStore
+
+from syutil.base64util import decode_base64
+
+from synapse.crypto.event_signing import compute_event_reference_hash
+

 import json
 import logging

@@ -51,7 +52,6 @@ logger = logging.getLogger(__name__)

 SCHEMAS = [
     "transactions",
-    "pdu",
     "users",
     "profiles",
     "presence",

@@ -59,6 +59,9 @@ SCHEMAS = [
     "room_aliases",
     "keys",
     "redactions",
+    "state",
+    "event_edges",
+    "event_signatures",
 ]

@@ -73,10 +76,12 @@ class _RollbackButIsFineException(Exception):
     """
     pass


 class DataStore(RoomMemberStore, RoomStore,
                 RegistrationStore, StreamStore, ProfileStore, FeedbackStore,
-                PresenceStore, PduStore, StatePduStore, TransactionStore,
-                DirectoryStore, KeyStore):
+                PresenceStore, TransactionStore,
+                DirectoryStore, KeyStore, StateStore, SignatureStore,
+                EventFederationStore, ):

     def __init__(self, hs):
         super(DataStore, self).__init__(hs)

@@ -88,8 +93,7 @@ class DataStore(RoomMemberStore, RoomStore,

     @defer.inlineCallbacks
     @log_function
-    def persist_event(self, event=None, backfilled=False, pdu=None,
-                      is_new_state=True):
+    def persist_event(self, event, backfilled=False, is_new_state=True):
         stream_ordering = None
         if backfilled:
             if not self.min_token_deferred.called:

@@ -99,8 +103,8 @@ class DataStore(RoomMemberStore, RoomStore,

         try:
             yield self.runInteraction(
-                self._persist_pdu_event_txn,
-                pdu=pdu,
+                "persist_event",
+                self._persist_event_txn,
                 event=event,
                 backfilled=backfilled,
                 stream_ordering=stream_ordering,
@ -119,7 +123,8 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
"type",
|
"type",
|
||||||
"room_id",
|
"room_id",
|
||||||
"content",
|
"content",
|
||||||
"unrecognized_keys"
|
"unrecognized_keys",
|
||||||
|
"depth",
|
||||||
],
|
],
|
||||||
allow_none=allow_none,
|
allow_none=allow_none,
|
||||||
)
|
)
|
||||||
|
@ -127,44 +132,8 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
if not events_dict:
|
if not events_dict:
|
||||||
defer.returnValue(None)
|
defer.returnValue(None)
|
||||||
|
|
||||||
event = self._parse_event_from_row(events_dict)
|
event = yield self._parse_events([events_dict])
|
||||||
defer.returnValue(event)
|
defer.returnValue(event[0])
|
||||||
|
|
||||||
def _persist_pdu_event_txn(self, txn, pdu=None, event=None,
|
|
||||||
backfilled=False, stream_ordering=None,
|
|
||||||
is_new_state=True):
|
|
||||||
if pdu is not None:
|
|
||||||
self._persist_event_pdu_txn(txn, pdu)
|
|
||||||
if event is not None:
|
|
||||||
return self._persist_event_txn(
|
|
||||||
txn, event, backfilled, stream_ordering,
|
|
||||||
is_new_state=is_new_state,
|
|
||||||
)
|
|
||||||
|
|
||||||
def _persist_event_pdu_txn(self, txn, pdu):
|
|
||||||
cols = dict(pdu.__dict__)
|
|
||||||
unrec_keys = dict(pdu.unrecognized_keys)
|
|
||||||
del cols["content"]
|
|
||||||
del cols["prev_pdus"]
|
|
||||||
cols["content_json"] = json.dumps(pdu.content)
|
|
||||||
|
|
||||||
unrec_keys.update({
|
|
||||||
k: v for k, v in cols.items()
|
|
||||||
if k not in PdusTable.fields
|
|
||||||
})
|
|
||||||
|
|
||||||
cols["unrecognized_keys"] = json.dumps(unrec_keys)
|
|
||||||
|
|
||||||
cols["ts"] = cols.pop("origin_server_ts")
|
|
||||||
|
|
||||||
logger.debug("Persisting: %s", repr(cols))
|
|
||||||
|
|
||||||
if pdu.is_state:
|
|
||||||
self._persist_state_txn(txn, pdu.prev_pdus, cols)
|
|
||||||
else:
|
|
||||||
self._persist_pdu_txn(txn, pdu.prev_pdus, cols)
|
|
||||||
|
|
||||||
self._update_min_depth_for_context_txn(txn, pdu.context, pdu.depth)
|
|
||||||
|
|
||||||
@log_function
|
@log_function
|
||||||
def _persist_event_txn(self, txn, event, backfilled, stream_ordering=None,
|
def _persist_event_txn(self, txn, event, backfilled, stream_ordering=None,
|
||||||
|
@ -177,19 +146,13 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
self._store_room_name_txn(txn, event)
|
self._store_room_name_txn(txn, event)
|
||||||
elif event.type == RoomTopicEvent.TYPE:
|
elif event.type == RoomTopicEvent.TYPE:
|
||||||
self._store_room_topic_txn(txn, event)
|
self._store_room_topic_txn(txn, event)
|
||||||
elif event.type == RoomJoinRulesEvent.TYPE:
|
|
||||||
self._store_join_rule(txn, event)
|
|
||||||
elif event.type == RoomPowerLevelsEvent.TYPE:
|
|
||||||
self._store_power_levels(txn, event)
|
|
||||||
elif event.type == RoomAddStateLevelEvent.TYPE:
|
|
||||||
self._store_add_state_level(txn, event)
|
|
||||||
elif event.type == RoomSendEventLevelEvent.TYPE:
|
|
||||||
self._store_send_event_level(txn, event)
|
|
||||||
elif event.type == RoomOpsPowerLevelsEvent.TYPE:
|
|
||||||
self._store_ops_level(txn, event)
|
|
||||||
elif event.type == RoomRedactionEvent.TYPE:
|
elif event.type == RoomRedactionEvent.TYPE:
|
||||||
self._store_redaction(txn, event)
|
self._store_redaction(txn, event)
|
||||||
|
|
||||||
|
outlier = False
|
||||||
|
if hasattr(event, "outlier"):
|
||||||
|
outlier = event.outlier
|
||||||
|
|
||||||
vals = {
|
vals = {
|
||||||
"topological_ordering": event.depth,
|
"topological_ordering": event.depth,
|
||||||
"event_id": event.event_id,
|
"event_id": event.event_id,
|
||||||
|
@ -197,25 +160,34 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
"room_id": event.room_id,
|
"room_id": event.room_id,
|
||||||
"content": json.dumps(event.content),
|
"content": json.dumps(event.content),
|
||||||
"processed": True,
|
"processed": True,
|
||||||
|
"outlier": outlier,
|
||||||
|
"depth": event.depth,
|
||||||
}
|
}
|
||||||
|
|
||||||
if stream_ordering is not None:
|
if stream_ordering is not None:
|
||||||
vals["stream_ordering"] = stream_ordering
|
vals["stream_ordering"] = stream_ordering
|
||||||
|
|
||||||
if hasattr(event, "outlier"):
|
|
||||||
vals["outlier"] = event.outlier
|
|
||||||
else:
|
|
||||||
vals["outlier"] = False
|
|
||||||
|
|
||||||
unrec = {
|
unrec = {
|
||||||
k: v
|
k: v
|
||||||
for k, v in event.get_full_dict().items()
|
for k, v in event.get_full_dict().items()
|
||||||
if k not in vals.keys() and k not in ["redacted", "redacted_because"]
|
if k not in vals.keys() and k not in [
|
||||||
|
"redacted",
|
||||||
|
"redacted_because",
|
||||||
|
"signatures",
|
||||||
|
"hashes",
|
||||||
|
"prev_events",
|
||||||
|
]
|
||||||
}
|
}
|
||||||
vals["unrecognized_keys"] = json.dumps(unrec)
|
vals["unrecognized_keys"] = json.dumps(unrec)
|
||||||
|
|
||||||
try:
|
try:
|
||||||
self._simple_insert_txn(txn, "events", vals)
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
"events",
|
||||||
|
vals,
|
||||||
|
or_replace=(not outlier),
|
||||||
|
or_ignore=bool(outlier),
|
||||||
|
)
|
||||||
except:
|
except:
|
||||||
logger.warn(
|
logger.warn(
|
||||||
"Failed to persist, probably duplicate: %s",
|
"Failed to persist, probably duplicate: %s",
|
||||||
|
@ -224,6 +196,16 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
)
|
)
|
||||||
raise _RollbackButIsFineException("_persist_event")
|
raise _RollbackButIsFineException("_persist_event")
|
||||||
|
|
||||||
|
self._handle_prev_events(
|
||||||
|
txn,
|
||||||
|
outlier=outlier,
|
||||||
|
event_id=event.event_id,
|
||||||
|
prev_events=event.prev_events,
|
||||||
|
room_id=event.room_id,
|
||||||
|
)
|
||||||
|
|
||||||
|
self._store_state_groups_txn(txn, event)
|
||||||
|
|
||||||
is_state = hasattr(event, "state_key") and event.state_key is not None
|
is_state = hasattr(event, "state_key") and event.state_key is not None
|
||||||
if is_new_state and is_state:
|
if is_new_state and is_state:
|
||||||
vals = {
|
vals = {
|
||||||
|
@ -233,10 +215,15 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
"state_key": event.state_key,
|
"state_key": event.state_key,
|
||||||
}
|
}
|
||||||
|
|
||||||
if hasattr(event, "prev_state"):
|
if hasattr(event, "replaces_state"):
|
||||||
vals["prev_state"] = event.prev_state
|
vals["prev_state"] = event.replaces_state
|
||||||
|
|
||||||
self._simple_insert_txn(txn, "state_events", vals)
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
"state_events",
|
||||||
|
vals,
|
||||||
|
or_replace=True,
|
||||||
|
)
|
||||||
|
|
||||||
self._simple_insert_txn(
|
self._simple_insert_txn(
|
||||||
txn,
|
txn,
|
||||||
|
@ -246,9 +233,87 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
"room_id": event.room_id,
|
"room_id": event.room_id,
|
||||||
"type": event.type,
|
"type": event.type,
|
||||||
"state_key": event.state_key,
|
"state_key": event.state_key,
|
||||||
}
|
},
|
||||||
|
or_replace=True,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
for e_id, h in event.prev_state:
|
||||||
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
table="event_edges",
|
||||||
|
values={
|
||||||
|
"event_id": event.event_id,
|
||||||
|
"prev_event_id": e_id,
|
||||||
|
"room_id": event.room_id,
|
||||||
|
"is_state": 1,
|
||||||
|
},
|
||||||
|
or_ignore=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
if not backfilled:
|
||||||
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
table="state_forward_extremities",
|
||||||
|
values={
|
||||||
|
"event_id": event.event_id,
|
||||||
|
"room_id": event.room_id,
|
||||||
|
"type": event.type,
|
||||||
|
"state_key": event.state_key,
|
||||||
|
},
|
||||||
|
or_replace=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
for prev_state_id, _ in event.prev_state:
|
||||||
|
self._simple_delete_txn(
|
||||||
|
txn,
|
||||||
|
table="state_forward_extremities",
|
||||||
|
keyvalues={
|
||||||
|
"event_id": prev_state_id,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
for hash_alg, hash_base64 in event.hashes.items():
|
||||||
|
hash_bytes = decode_base64(hash_base64)
|
||||||
|
self._store_event_content_hash_txn(
|
||||||
|
txn, event.event_id, hash_alg, hash_bytes,
|
||||||
|
)
|
||||||
|
|
||||||
|
if hasattr(event, "signatures"):
|
||||||
|
logger.debug("sigs: %s", event.signatures)
|
||||||
|
for name, sigs in event.signatures.items():
|
||||||
|
for key_id, signature_base64 in sigs.items():
|
||||||
|
signature_bytes = decode_base64(signature_base64)
|
||||||
|
self._store_event_signature_txn(
|
||||||
|
txn, event.event_id, name, key_id,
|
||||||
|
signature_bytes,
|
||||||
|
)
|
||||||
|
|
||||||
|
for prev_event_id, prev_hashes in event.prev_events:
|
||||||
|
for alg, hash_base64 in prev_hashes.items():
|
||||||
|
hash_bytes = decode_base64(hash_base64)
|
||||||
|
self._store_prev_event_hash_txn(
|
||||||
|
txn, event.event_id, prev_event_id, alg, hash_bytes
|
||||||
|
)
|
||||||
|
|
||||||
|
for auth_id, _ in event.auth_events:
|
||||||
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
table="event_auth",
|
||||||
|
values={
|
||||||
|
"event_id": event.event_id,
|
||||||
|
"room_id": event.room_id,
|
||||||
|
"auth_id": auth_id,
|
||||||
|
},
|
||||||
|
or_ignore=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
(ref_alg, ref_hash_bytes) = compute_event_reference_hash(event)
|
||||||
|
self._store_event_reference_hash_txn(
|
||||||
|
txn, event.event_id, ref_alg, ref_hash_bytes
|
||||||
|
)
|
||||||
|
|
||||||
|
self._update_min_depth_for_room_txn(txn, event.room_id, event.depth)
|
||||||
|
|
||||||
def _store_redaction(self, txn, event):
|
def _store_redaction(self, txn, event):
|
||||||
txn.execute(
|
txn.execute(
|
||||||
"INSERT OR IGNORE INTO redactions "
|
"INSERT OR IGNORE INTO redactions "
|
||||||
|
@ -319,7 +384,7 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
],
|
],
|
||||||
)
|
)
|
||||||
|
|
||||||
def snapshot_room(self, room_id, user_id, state_type=None, state_key=None):
|
def snapshot_room(self, event):
|
||||||
"""Snapshot the room for an update by a user
|
"""Snapshot the room for an update by a user
|
||||||
Args:
|
Args:
|
||||||
room_id (synapse.types.RoomId): The room to snapshot.
|
room_id (synapse.types.RoomId): The room to snapshot.
|
||||||
|
@ -330,29 +395,33 @@ class DataStore(RoomMemberStore, RoomStore,
|
||||||
synapse.storage.Snapshot: A snapshot of the state of the room.
|
synapse.storage.Snapshot: A snapshot of the state of the room.
|
||||||
"""
|
"""
|
||||||
def _snapshot(txn):
|
def _snapshot(txn):
|
||||||
membership_state = self._get_room_member(txn, user_id, room_id)
|
prev_events = self._get_latest_events_in_room(
|
||||||
prev_pdus = self._get_latest_pdus_in_context(
|
txn,
|
||||||
txn, room_id
|
event.room_id
|
||||||
)
|
)
|
||||||
if state_type is not None and state_key is not None:
|
|
||||||
prev_state_pdu = self._get_current_state_pdu(
|
prev_state = None
|
||||||
txn, room_id, state_type, state_key
|
state_key = None
|
||||||
|
if hasattr(event, "state_key"):
|
||||||
|
state_key = event.state_key
|
||||||
|
prev_state = self._get_latest_state_in_room(
|
||||||
|
txn,
|
||||||
|
event.room_id,
|
||||||
|
type=event.type,
|
||||||
|
state_key=state_key,
|
||||||
)
|
)
|
||||||
else:
|
|
||||||
prev_state_pdu = None
|
|
||||||
|
|
||||||
return Snapshot(
|
return Snapshot(
|
||||||
store=self,
|
store=self,
|
||||||
room_id=room_id,
|
room_id=event.room_id,
|
||||||
user_id=user_id,
|
user_id=event.user_id,
|
||||||
prev_pdus=prev_pdus,
|
prev_events=prev_events,
|
||||||
membership_state=membership_state,
|
prev_state=prev_state,
|
||||||
state_type=state_type,
|
state_type=event.type,
|
||||||
state_key=state_key,
|
state_key=state_key,
|
||||||
prev_state_pdu=prev_state_pdu,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
return self.runInteraction(_snapshot)
|
return self.runInteraction("snapshot_room", _snapshot)
|
||||||
|
|
||||||
|
|
||||||
class Snapshot(object):
|
class Snapshot(object):
|
||||||
|
@ -361,7 +430,7 @@ class Snapshot(object):
|
||||||
store (DataStore): The datastore.
|
store (DataStore): The datastore.
|
||||||
room_id (RoomId): The room of the snapshot.
|
room_id (RoomId): The room of the snapshot.
|
||||||
user_id (UserId): The user this snapshot is for.
|
user_id (UserId): The user this snapshot is for.
|
||||||
prev_pdus (list): The list of PDU ids this snapshot is after.
|
prev_events (list): The list of event ids this snapshot is after.
|
||||||
membership_state (RoomMemberEvent): The current state of the user in
|
membership_state (RoomMemberEvent): The current state of the user in
|
||||||
the room.
|
the room.
|
||||||
state_type (str, optional): State type captured by the snapshot
|
state_type (str, optional): State type captured by the snapshot
|
||||||
|
@ -370,32 +439,30 @@ class Snapshot(object):
|
||||||
the previous value of the state type and key in the room.
|
the previous value of the state type and key in the room.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, store, room_id, user_id, prev_pdus,
|
def __init__(self, store, room_id, user_id, prev_events,
|
||||||
membership_state, state_type=None, state_key=None,
|
prev_state, state_type=None, state_key=None):
|
||||||
prev_state_pdu=None):
|
|
||||||
self.store = store
|
self.store = store
|
||||||
self.room_id = room_id
|
self.room_id = room_id
|
||||||
self.user_id = user_id
|
self.user_id = user_id
|
||||||
self.prev_pdus = prev_pdus
|
self.prev_events = prev_events
|
||||||
self.membership_state = membership_state
|
self.prev_state = prev_state
|
||||||
self.state_type = state_type
|
self.state_type = state_type
|
||||||
self.state_key = state_key
|
self.state_key = state_key
|
||||||
self.prev_state_pdu = prev_state_pdu
|
|
||||||
|
|
||||||
def fill_out_prev_events(self, event):
|
def fill_out_prev_events(self, event):
|
||||||
if hasattr(event, "prev_events"):
|
if not hasattr(event, "prev_events"):
|
||||||
return
|
event.prev_events = [
|
||||||
|
(event_id, hashes)
|
||||||
|
for event_id, hashes, _ in self.prev_events
|
||||||
|
]
|
||||||
|
|
||||||
es = [
|
if self.prev_events:
|
||||||
"%s@%s" % (p_id, origin) for p_id, origin, _ in self.prev_pdus
|
event.depth = max([int(v) for _, _, v in self.prev_events]) + 1
|
||||||
]
|
else:
|
||||||
|
event.depth = 0
|
||||||
|
|
||||||
event.prev_events = [e for e in es if e != event.event_id]
|
if not hasattr(event, "prev_state") and self.prev_state is not None:
|
||||||
|
event.prev_state = self.prev_state
|
||||||
if self.prev_pdus:
|
|
||||||
event.depth = max([int(v) for _, _, v in self.prev_pdus]) + 1
|
|
||||||
else:
|
|
||||||
event.depth = 0
|
|
||||||
|
|
||||||
|
|
||||||
def schema_path(schema):
|
def schema_path(schema):
|
||||||
|
@ -436,11 +503,13 @@ def prepare_database(db_conn):
|
||||||
user_version = row[0]
|
user_version = row[0]
|
||||||
|
|
||||||
if user_version > SCHEMA_VERSION:
|
if user_version > SCHEMA_VERSION:
|
||||||
raise ValueError("Cannot use this database as it is too " +
|
raise ValueError(
|
||||||
|
"Cannot use this database as it is too " +
|
||||||
"new for the server to understand"
|
"new for the server to understand"
|
||||||
)
|
)
|
||||||
elif user_version < SCHEMA_VERSION:
|
elif user_version < SCHEMA_VERSION:
|
||||||
logging.info("Upgrading database from version %d",
|
logging.info(
|
||||||
|
"Upgrading database from version %d",
|
||||||
user_version
|
user_version
|
||||||
)
|
)
|
||||||
|
|
||||||
|
@ -452,13 +521,13 @@ def prepare_database(db_conn):
|
||||||
db_conn.commit()
|
db_conn.commit()
|
||||||
|
|
||||||
else:
|
else:
|
||||||
sql_script = "BEGIN TRANSACTION;"
|
sql_script = "BEGIN TRANSACTION;\n"
|
||||||
for sql_loc in SCHEMAS:
|
for sql_loc in SCHEMAS:
|
||||||
sql_script += read_schema(sql_loc)
|
sql_script += read_schema(sql_loc)
|
||||||
|
sql_script += "\n"
|
||||||
sql_script += "COMMIT TRANSACTION;"
|
sql_script += "COMMIT TRANSACTION;"
|
||||||
c.executescript(sql_script)
|
c.executescript(sql_script)
|
||||||
db_conn.commit()
|
db_conn.commit()
|
||||||
c.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
|
c.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
|
||||||
|
|
||||||
c.close()
|
c.close()
|
||||||
|
|
||||||
|
|
|
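As an aside on the `Snapshot.fill_out_prev_events` change above: an event's depth is now derived from the `(event_id, hashes, depth)` triples returned for the room's latest events. A minimal sketch of that rule, with an illustrative helper name:

```python
def fill_out_depth(prev_events):
    """Depth rule used when filling out a new event, per the Snapshot
    change above: one more than the deepest parent, or 0 for the first
    event in a room. `prev_events` is a list of (event_id, hashes, depth)
    triples, as returned by the latest-events query."""
    if prev_events:
        return max(int(depth) for _, _, depth in prev_events) + 1
    return 0

assert fill_out_depth([]) == 0
assert fill_out_depth([("$a", {}, 3), ("$b", {}, 5)]) == 6
```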
@@ -14,59 +14,72 @@
 # limitations under the License.
 import logging
 
-from twisted.internet import defer
-
 from synapse.api.errors import StoreError
 from synapse.api.events.utils import prune_event
 from synapse.util.logutils import log_function
+from synapse.util.logcontext import PreserveLoggingContext, LoggingContext
+from syutil.base64util import encode_base64
+
+from twisted.internet import defer
 
 import collections
 import copy
 import json
+import sys
+import time
 
 
 logger = logging.getLogger(__name__)
 
 sql_logger = logging.getLogger("synapse.storage.SQL")
+transaction_logger = logging.getLogger("synapse.storage.txn")
 
 
 class LoggingTransaction(object):
     """An object that almost-transparently proxies for the 'txn' object
     passed to the constructor. Adds logging to the .execute() method."""
-    __slots__ = ["txn"]
+    __slots__ = ["txn", "name"]
 
-    def __init__(self, txn):
+    def __init__(self, txn, name):
         object.__setattr__(self, "txn", txn)
+        object.__setattr__(self, "name", name)
 
-    def __getattribute__(self, name):
-        if name == "execute":
-            return object.__getattribute__(self, "execute")
-
-        return getattr(object.__getattribute__(self, "txn"), name)
+    def __getattr__(self, name):
+        return getattr(self.txn, name)
 
     def __setattr__(self, name, value):
-        setattr(object.__getattribute__(self, "txn"), name, value)
+        setattr(self.txn, name, value)
 
     def execute(self, sql, *args, **kwargs):
         # TODO(paul): Maybe use 'info' and 'debug' for values?
-        sql_logger.debug("[SQL] %s", sql)
+        sql_logger.debug("[SQL] {%s} %s", self.name, sql)
         try:
             if args and args[0]:
                 values = args[0]
-                sql_logger.debug("[SQL values] " +
-                    ", ".join(("<%s>",) * len(values)), *values)
+                sql_logger.debug(
+                    "[SQL values] {%s} " + ", ".join(("<%s>",) * len(values)),
+                    self.name,
+                    *values
+                )
         except:
             # Don't let logging failures stop SQL from working
             pass
 
-        # TODO(paul): Here would be an excellent place to put some timing
-        # measurements, and log (warning?) slow queries.
-        return object.__getattribute__(self, "txn").execute(
-            sql, *args, **kwargs
-        )
+        start = time.clock() * 1000
+        try:
+            return self.txn.execute(
+                sql, *args, **kwargs
+            )
+        except:
+            logger.exception("[SQL FAIL] {%s}", self.name)
+            raise
+        finally:
+            end = time.clock() * 1000
+            sql_logger.debug("[SQL time] {%s} %f", self.name, end - start)
 
 
 class SQLBaseStore(object):
+    _TXN_ID = 0
 
     def __init__(self, hs):
         self.hs = hs

@@ -74,12 +87,40 @@ class SQLBaseStore(object):
         self.event_factory = hs.get_event_factory()
         self._clock = hs.get_clock()
 
-    def runInteraction(self, func, *args, **kwargs):
+    @defer.inlineCallbacks
+    def runInteraction(self, desc, func, *args, **kwargs):
         """Wraps the .runInteraction() method on the underlying db_pool."""
+        current_context = LoggingContext.current_context()
         def inner_func(txn, *args, **kwargs):
-            return func(LoggingTransaction(txn), *args, **kwargs)
+            with LoggingContext("runInteraction") as context:
+                current_context.copy_to(context)
+                start = time.clock() * 1000
+                txn_id = SQLBaseStore._TXN_ID
 
-        return self._db_pool.runInteraction(inner_func, *args, **kwargs)
+                # We don't really need these to be unique, so lets stop it from
+                # growing really large.
+                self._TXN_ID = (self._TXN_ID + 1) % (sys.maxint - 1)
+
+                name = "%s-%x" % (desc, txn_id, )
+
+                transaction_logger.debug("[TXN START] {%s}", name)
+                try:
+                    return func(LoggingTransaction(txn, name), *args, **kwargs)
+                except:
+                    logger.exception("[TXN FAIL] {%s}", name)
+                    raise
+                finally:
+                    end = time.clock() * 1000
+                    transaction_logger.debug(
+                        "[TXN END] {%s} %f",
+                        name, end - start
+                    )
+
+        with PreserveLoggingContext():
+            result = yield self._db_pool.runInteraction(
+                inner_func, *args, **kwargs
+            )
+        defer.returnValue(result)
 
     def cursor_to_dict(self, cursor):
         """Converts a SQL cursor into an list of dicts.

@@ -113,7 +154,7 @@ class SQLBaseStore(object):
             else:
                 return cursor.fetchall()
 
-        return self.runInteraction(interaction)
+        return self.runInteraction("_execute", interaction)
 
     def _execute_and_decode(self, query, *args):
         return self._execute(self.cursor_to_dict, query, *args)

@@ -130,6 +171,7 @@ class SQLBaseStore(object):
             or_replace : bool; if True performs an INSERT OR REPLACE
         """
         return self.runInteraction(
+            "_simple_insert",
            self._simple_insert_txn, table, values, or_replace=or_replace,
            or_ignore=or_ignore,
        )

@@ -146,7 +188,7 @@ class SQLBaseStore(object):
         )
 
         logger.debug(
-            "[SQL] %s Args=%s Func=%s",
+            "[SQL] %s Args=%s",
             sql, values.values(),
         )

@@ -170,7 +212,6 @@ class SQLBaseStore(object):
             table, keyvalues, retcols=retcols, allow_none=allow_none
         )
 
-    @defer.inlineCallbacks
     def _simple_select_one_onecol(self, table, keyvalues, retcol,
                                   allow_none=False):
         """Executes a SELECT query on the named table, which is expected to

@@ -181,19 +222,40 @@ class SQLBaseStore(object):
             keyvalues : dict of column names and values to select the row with
             retcol : string giving the name of the column to return
         """
-        ret = yield self._simple_select_one(
+        return self.runInteraction(
+            "_simple_select_one_onecol",
+            self._simple_select_one_onecol_txn,
+            table, keyvalues, retcol, allow_none=allow_none,
+        )
+
+    def _simple_select_one_onecol_txn(self, txn, table, keyvalues, retcol,
+                                      allow_none=False):
+        ret = self._simple_select_onecol_txn(
+            txn,
             table=table,
             keyvalues=keyvalues,
-            retcols=[retcol],
-            allow_none=allow_none
+            retcol=retcol,
         )
 
         if ret:
-            defer.returnValue(ret[retcol])
+            return ret[0]
         else:
-            defer.returnValue(None)
+            if allow_none:
+                return None
+            else:
+                raise StoreError(404, "No row found")
+
+    def _simple_select_onecol_txn(self, txn, table, keyvalues, retcol):
+        sql = "SELECT %(retcol)s FROM %(table)s WHERE %(where)s" % {
+            "retcol": retcol,
+            "table": table,
+            "where": " AND ".join("%s = ?" % k for k in keyvalues.keys()),
+        }
+
+        txn.execute(sql, keyvalues.values())
+
+        return [r[0] for r in txn.fetchall()]
 
-    @defer.inlineCallbacks
     def _simple_select_onecol(self, table, keyvalues, retcol):
         """Executes a SELECT query on the named table, which returns a list
         comprising of the values of the named column from the selected rows.

@@ -206,19 +268,11 @@ class SQLBaseStore(object):
         Returns:
             Deferred: Results in a list
         """
-        sql = "SELECT %(retcol)s FROM %(table)s WHERE %(where)s" % {
-            "retcol": retcol,
-            "table": table,
-            "where": " AND ".join("%s = ?" % k for k in keyvalues.keys()),
-        }
-
-        def func(txn):
-            txn.execute(sql, keyvalues.values())
-            return txn.fetchall()
-
-        res = yield self.runInteraction(func)
-
-        defer.returnValue([r[0] for r in res])
+        return self.runInteraction(
+            "_simple_select_onecol",
+            self._simple_select_onecol_txn,
+            table, keyvalues, retcol
+        )
 
     def _simple_select_list(self, table, keyvalues, retcols):
         """Executes a SELECT query on the named table, which may return zero or

@@ -229,17 +283,30 @@ class SQLBaseStore(object):
             keyvalues : dict of column names and values to select the rows with
             retcols : list of strings giving the names of the columns to return
         """
+        return self.runInteraction(
+            "_simple_select_list",
+            self._simple_select_list_txn,
+            table, keyvalues, retcols
+        )
+
+    def _simple_select_list_txn(self, txn, table, keyvalues, retcols):
+        """Executes a SELECT query on the named table, which may return zero or
+        more rows, returning the result as a list of dicts.
+
+        Args:
+            txn : Transaction object
+            table : string giving the table name
+            keyvalues : dict of column names and values to select the rows with
+            retcols : list of strings giving the names of the columns to return
+        """
         sql = "SELECT %s FROM %s WHERE %s" % (
             ", ".join(retcols),
             table,
-            " AND ".join("%s = ?" % (k) for k in keyvalues)
+            " AND ".join("%s = ?" % (k, ) for k in keyvalues)
         )
 
-        def func(txn):
-            txn.execute(sql, keyvalues.values())
-            return self.cursor_to_dict(txn)
-
-        return self.runInteraction(func)
+        txn.execute(sql, keyvalues.values())
+        return self.cursor_to_dict(txn)
 
     def _simple_update_one(self, table, keyvalues, updatevalues,
                            retcols=None):

@@ -307,7 +374,7 @@ class SQLBaseStore(object):
                 raise StoreError(500, "More than one row matched")
 
             return ret
-        return self.runInteraction(func)
+        return self.runInteraction("_simple_selectupdate_one", func)
 
     def _simple_delete_one(self, table, keyvalues):
         """Executes a DELETE query on the named table, expecting to delete a

@@ -319,7 +386,7 @@ class SQLBaseStore(object):
         """
         sql = "DELETE FROM %s WHERE %s" % (
             table,
-            " AND ".join("%s = ?" % (k) for k in keyvalues)
+            " AND ".join("%s = ?" % (k, ) for k in keyvalues)
         )
 
         def func(txn):

@@ -328,7 +395,25 @@ class SQLBaseStore(object):
                 raise StoreError(404, "No row found")
             if txn.rowcount > 1:
                 raise StoreError(500, "more than one row matched")
-        return self.runInteraction(func)
+        return self.runInteraction("_simple_delete_one", func)
+
+    def _simple_delete(self, table, keyvalues):
+        """Executes a DELETE query on the named table.
+
+        Args:
+            table : string giving the table name
+            keyvalues : dict of column names and values to select the row with
+        """
+
+        return self.runInteraction("_simple_delete", self._simple_delete_txn)
+
+    def _simple_delete_txn(self, txn, table, keyvalues):
+        sql = "DELETE FROM %s WHERE %s" % (
+            table,
+            " AND ".join("%s = ?" % (k, ) for k in keyvalues)
+        )
+
+        return txn.execute(sql, keyvalues.values())
 
     def _simple_max_id(self, table):
         """Executes a SELECT query on the named table, expecting to return the

@@ -346,7 +431,7 @@ class SQLBaseStore(object):
                 return 0
             return max_id
 
-        return self.runInteraction(func)
+        return self.runInteraction("_simple_max_id", func)
 
     def _parse_event_from_row(self, row_dict):
         d = copy.deepcopy({k: v for k, v in row_dict.items()})

@@ -355,6 +440,10 @@ class SQLBaseStore(object):
         d.pop("topological_ordering", None)
         d.pop("processed", None)
         d["origin_server_ts"] = d.pop("ts", 0)
+        replaces_state = d.pop("prev_state", None)
+
+        if replaces_state:
+            d["replaces_state"] = replaces_state
 
         d.update(json.loads(row_dict["unrecognized_keys"]))
         d["content"] = json.loads(d["content"])

@@ -369,23 +458,76 @@ class SQLBaseStore(object):
             **d
         )
 
+    def _get_events_txn(self, txn, event_ids):
+        # FIXME (erikj): This should be batched?
+
+        sql = "SELECT * FROM events WHERE event_id = ?"
+
+        event_rows = []
+        for e_id in event_ids:
+            c = txn.execute(sql, (e_id,))
+            event_rows.extend(self.cursor_to_dict(c))
+
+        return self._parse_events_txn(txn, event_rows)
+
     def _parse_events(self, rows):
-        return self.runInteraction(self._parse_events_txn, rows)
+        return self.runInteraction(
+            "_parse_events", self._parse_events_txn, rows
+        )
 
     def _parse_events_txn(self, txn, rows):
         events = [self._parse_event_from_row(r) for r in rows]
 
-        sql = "SELECT * FROM events WHERE event_id = ?"
+        select_event_sql = "SELECT * FROM events WHERE event_id = ?"
 
-        for ev in events:
-            if hasattr(ev, "prev_state"):
-                # Load previous state_content.
-                # TODO: Should we be pulling this out above?
-                cursor = txn.execute(sql, (ev.prev_state,))
-                prevs = self.cursor_to_dict(cursor)
-                if prevs:
-                    prev = self._parse_event_from_row(prevs[0])
-                    ev.prev_content = prev.content
+        for i, ev in enumerate(events):
+            signatures = self._get_event_signatures_txn(
+                txn, ev.event_id,
+            )
+
+            ev.signatures = {
+                n: {
+                    k: encode_base64(v) for k, v in s.items()
+                }
+                for n, s in signatures.items()
+            }
+
+            hashes = self._get_event_content_hashes_txn(
+                txn, ev.event_id,
+            )
+
+            ev.hashes = {
+                k: encode_base64(v) for k, v in hashes.items()
+            }
+
+            prevs = self._get_prev_events_and_state(txn, ev.event_id)
+
+            ev.prev_events = [
+                (e_id, h)
+                for e_id, h, is_state in prevs
+                if is_state == 0
+            ]
+
+            ev.auth_events = self._get_auth_events(txn, ev.event_id)
+
+            if hasattr(ev, "state_key"):
+                ev.prev_state = [
+                    (e_id, h)
+                    for e_id, h, is_state in prevs
+                    if is_state == 1
+                ]
+
+                if hasattr(ev, "replaces_state"):
+                    # Load previous state_content.
+                    # FIXME (erikj): Handle multiple prev_states.
+                    cursor = txn.execute(
+                        select_event_sql,
+                        (ev.replaces_state,)
+                    )
+                    prevs = self.cursor_to_dict(cursor)
+                    if prevs:
+                        prev = self._parse_event_from_row(prevs[0])
+                        ev.prev_content = prev.content
 
             if not hasattr(ev, "redacted"):
                 logger.debug("Doesn't have redacted key: %s", ev)

@@ -393,15 +535,16 @@ class SQLBaseStore(object):
 
             if ev.redacted:
                 # Get the redaction event.
-                sql = "SELECT * FROM events WHERE event_id = ?"
-                txn.execute(sql, (ev.redacted,))
+                select_event_sql = "SELECT * FROM events WHERE event_id = ?"
+                txn.execute(select_event_sql, (ev.redacted,))
 
                 del_evs = self._parse_events_txn(
                     txn, self.cursor_to_dict(txn)
                 )
 
                 if del_evs:
-                    prune_event(ev)
+                    ev = prune_event(ev)
+                    events[i] = ev
                     ev.redacted_because = del_evs[0]
 
         return events
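For context on the `LoggingTransaction` rework above: the pattern is a thin proxy around a DB cursor that forwards everything it doesn't override and logs each statement with a per-transaction name and its duration. A minimal, self-contained sketch of that idea using a plain sqlite3 cursor (class and logger names here are illustrative, not synapse's):

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.DEBUG)
sql_logger = logging.getLogger("example.SQL")

class TimedCursor(object):
    """Sketch of the LoggingTransaction pattern: wrap a DB cursor, forward
    unknown attributes to it, and log each statement with a transaction
    name plus its wall-clock duration in milliseconds."""
    def __init__(self, cursor, name):
        self._cursor = cursor
        self._name = name

    def __getattr__(self, attr):
        # Anything we don't override is proxied to the real cursor.
        return getattr(self._cursor, attr)

    def execute(self, sql, *args):
        sql_logger.debug("[SQL] {%s} %s", self._name, sql)
        start = time.time() * 1000
        try:
            return self._cursor.execute(sql, *args)
        finally:
            sql_logger.debug(
                "[SQL time] {%s} %f", self._name, time.time() * 1000 - start
            )

conn = sqlite3.connect(":memory:")
txn = TimedCursor(conn.cursor(), "demo-1")
txn.execute("CREATE TABLE events (event_id TEXT)")
txn.execute("INSERT INTO events VALUES ('$abc')")
txn.execute("SELECT event_id FROM events")
print(txn.fetchall())  # fetchall() is proxied through __getattr__
```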
@@ -95,6 +95,7 @@ class DirectoryStore(SQLBaseStore):
 
     def delete_room_alias(self, room_alias):
         return self.runInteraction(
+            "delete_room_alias",
             self._delete_room_alias_txn,
             room_alias,
         )
386 synapse/storage/event_federation.py Normal file

@@ -0,0 +1,386 @@
+# -*- coding: utf-8 -*-
+# Copyright 2014 OpenMarket Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ._base import SQLBaseStore
+from syutil.base64util import encode_base64
+
+import logging
+
+
+logger = logging.getLogger(__name__)
+
+
+class EventFederationStore(SQLBaseStore):
+    """ Responsible for storing and serving up the various graphs associated
+    with an event. Including the main event graph and the auth chains for an
+    event.
+
+    Also has methods for getting the front (latest) and back (oldest) edges
+    of the event graphs. These are used to generate the parents for new events
+    and backfilling from another server respectively.
+    """
+
+    def get_auth_chain(self, event_id):
+        return self.runInteraction(
+            "get_auth_chain",
+            self._get_auth_chain_txn,
+            event_id
+        )
+
+    def _get_auth_chain_txn(self, txn, event_id):
+        results = self._get_auth_chain_ids_txn(txn, event_id)
+
+        sql = "SELECT * FROM events WHERE event_id = ?"
+        rows = []
+        for ev_id in results:
+            c = txn.execute(sql, (ev_id,))
+            rows.extend(self.cursor_to_dict(c))
+
+        return self._parse_events_txn(txn, rows)
+
+    def get_auth_chain_ids(self, event_id):
+        return self.runInteraction(
+            "get_auth_chain_ids",
+            self._get_auth_chain_ids_txn,
+            event_id
+        )
+
+    def _get_auth_chain_ids_txn(self, txn, event_id):
+        results = set()
+
+        base_sql = (
+            "SELECT auth_id FROM event_auth WHERE %s"
+        )
+
+        front = set([event_id])
+        while front:
+            sql = base_sql % (
+                " OR ".join(["event_id=?"] * len(front)),
+            )
+
+            txn.execute(sql, list(front))
+            front = [r[0] for r in txn.fetchall()]
+            results.update(front)
+
+        return list(results)
+
+    def get_oldest_events_in_room(self, room_id):
+        return self.runInteraction(
+            "get_oldest_events_in_room",
+            self._get_oldest_events_in_room_txn,
+            room_id,
+        )
+
+    def _get_oldest_events_in_room_txn(self, txn, room_id):
+        return self._simple_select_onecol_txn(
+            txn,
+            table="event_backward_extremities",
+            keyvalues={
+                "room_id": room_id,
+            },
+            retcol="event_id",
+        )
+
+    def get_latest_events_in_room(self, room_id):
+        return self.runInteraction(
+            "get_latest_events_in_room",
+            self._get_latest_events_in_room,
+            room_id,
+        )
+
+    def _get_latest_events_in_room(self, txn, room_id):
+        sql = (
+            "SELECT e.event_id, e.depth FROM events as e "
+            "INNER JOIN event_forward_extremities as f "
+            "ON e.event_id = f.event_id "
+            "WHERE f.room_id = ?"
+        )
+
+        txn.execute(sql, (room_id, ))
+
+        results = []
+        for event_id, depth in txn.fetchall():
+            hashes = self._get_event_reference_hashes_txn(txn, event_id)
+            prev_hashes = {
+                k: encode_base64(v) for k, v in hashes.items()
+                if k == "sha256"
+            }
+            results.append((event_id, prev_hashes, depth))
+
+        return results
+
+    def _get_latest_state_in_room(self, txn, room_id, type, state_key):
+        event_ids = self._simple_select_onecol_txn(
+            txn,
+            table="state_forward_extremities",
+            keyvalues={
+                "room_id": room_id,
+                "type": type,
+                "state_key": state_key,
+            },
+            retcol="event_id",
+        )
+
+        results = []
+        for event_id in event_ids:
+            hashes = self._get_event_reference_hashes_txn(txn, event_id)
+            prev_hashes = {
+                k: encode_base64(v) for k, v in hashes.items()
+                if k == "sha256"
+            }
+            results.append((event_id, prev_hashes))
+
+        return results
+
+    def _get_prev_events(self, txn, event_id):
+        results = self._get_prev_events_and_state(
+            txn,
+            event_id,
+            is_state=0,
+        )
+
+        return [(e_id, h, ) for e_id, h, _ in results]
+
+    def _get_prev_state(self, txn, event_id):
+        results = self._get_prev_events_and_state(
+            txn,
+            event_id,
+            is_state=1,
+        )
+
+        return [(e_id, h, ) for e_id, h, _ in results]
+
+    def _get_prev_events_and_state(self, txn, event_id, is_state=None):
+        keyvalues = {
+            "event_id": event_id,
+        }
+
+        if is_state is not None:
+            keyvalues["is_state"] = is_state
+
+        res = self._simple_select_list_txn(
+            txn,
+            table="event_edges",
+            keyvalues=keyvalues,
+            retcols=["prev_event_id", "is_state"],
+        )
+
+        results = []
+        for d in res:
+            hashes = self._get_event_reference_hashes_txn(
+                txn,
+                d["prev_event_id"]
+            )
+            prev_hashes = {
+                k: encode_base64(v) for k, v in hashes.items()
+                if k == "sha256"
+            }
+            results.append((d["prev_event_id"], prev_hashes, d["is_state"]))
+
+        return results
+
+    def _get_auth_events(self, txn, event_id):
+        auth_ids = self._simple_select_onecol_txn(
+            txn,
+            table="event_auth",
+            keyvalues={
+                "event_id": event_id,
+            },
+            retcol="auth_id",
+        )
+
+        results = []
+        for auth_id in auth_ids:
+            hashes = self._get_event_reference_hashes_txn(txn, auth_id)
+            prev_hashes = {
+                k: encode_base64(v) for k, v in hashes.items()
+                if k == "sha256"
+            }
+            results.append((auth_id, prev_hashes))
+
+        return results
+
+    def get_min_depth(self, room_id):
+        """ For the given room, get the minimum depth we have seen for it.
+        """
+        return self.runInteraction(
+            "get_min_depth",
+            self._get_min_depth_interaction,
+            room_id,
+        )
+
+    def _get_min_depth_interaction(self, txn, room_id):
+        min_depth = self._simple_select_one_onecol_txn(
+            txn,
+            table="room_depth",
+            keyvalues={"room_id": room_id},
+            retcol="min_depth",
+            allow_none=True,
+        )
+
+        return int(min_depth) if min_depth is not None else None
+
+    def _update_min_depth_for_room_txn(self, txn, room_id, depth):
+        min_depth = self._get_min_depth_interaction(txn, room_id)
+
+        do_insert = depth < min_depth if min_depth else True
+
+        if do_insert:
+            self._simple_insert_txn(
+                txn,
+                table="room_depth",
+                values={
+                    "room_id": room_id,
+                    "min_depth": depth,
+                },
+                or_replace=True,
+            )
+
+    def _handle_prev_events(self, txn, outlier, event_id, prev_events,
+                            room_id):
+        """
+        For the given event, update the event edges table and forward and
+        backward extremities tables.
+        """
+        for e_id, _ in prev_events:
+            # TODO (erikj): This could be done as a bulk insert
+            self._simple_insert_txn(
+                txn,
+                table="event_edges",
+                values={
+                    "event_id": event_id,
+                    "prev_event_id": e_id,
+                    "room_id": room_id,
+                    "is_state": 0,
+                },
+                or_ignore=True,
+            )
+
+        # Update the extremities table if this is not an outlier.
+        if not outlier:
+            for e_id, _ in prev_events:
+                # TODO (erikj): This could be done as a bulk insert
+                self._simple_delete_txn(
+                    txn,
+                    table="event_forward_extremities",
+                    keyvalues={
+                        "event_id": e_id,
+                        "room_id": room_id,
+                    }
+                )
+
+            # We only insert as a forward extremity the new event if there are
+            # no other events that reference it as a prev event
+            query = (
+                "INSERT OR IGNORE INTO %(table)s (event_id, room_id) "
+                "SELECT ?, ? WHERE NOT EXISTS ("
+                "SELECT 1 FROM %(event_edges)s WHERE "
+                "prev_event_id = ? "
+                ")"
+            ) % {
+                "table": "event_forward_extremities",
+                "event_edges": "event_edges",
+            }
+
+            logger.debug("query: %s", query)
+
+            txn.execute(query, (event_id, room_id, event_id))
+
+            # Insert all the prev_events as a backwards thing, they'll get
+            # deleted in a second if they're incorrect anyway.
+            for e_id, _ in prev_events:
+                # TODO (erikj): This could be done as a bulk insert
+                self._simple_insert_txn(
+                    txn,
+                    table="event_backward_extremities",
+                    values={
+                        "event_id": e_id,
+                        "room_id": room_id,
+                    },
+                    or_ignore=True,
+                )
+
+            # Also delete from the backwards extremities table all ones that
+            # reference events that we have already seen
+            query = (
+                "DELETE FROM event_backward_extremities WHERE EXISTS ("
+                "SELECT 1 FROM events "
+                "WHERE "
+                "event_backward_extremities.event_id = events.event_id "
+                "AND not events.outlier "
+                ")"
+            )
+            txn.execute(query)
+
+    def get_backfill_events(self, room_id, event_list, limit):
+        """Get a list of Events for a given topic that occurred before (and
+        including) the events in event_list. Return a list of max size `limit`
+
+        Args:
+            txn
+            room_id (str)
+            event_list (list)
+            limit (int)
+        """
+        return self.runInteraction(
+            "get_backfill_events",
+            self._get_backfill_events, room_id, event_list, limit
+        )
+
+    def _get_backfill_events(self, txn, room_id, event_list, limit):
+        logger.debug(
+            "_get_backfill_events: %s, %s, %s",
+            room_id, repr(event_list), limit
+        )
+
+        event_results = event_list
+
+        front = event_list
+
+        query = (
+            "SELECT prev_event_id FROM event_edges "
+            "WHERE room_id = ? AND event_id = ? "
+            "LIMIT ?"
+        )
+
+        # We iterate through all event_ids in `front` to select their previous
+        # events. These are dumped in `new_front`.
+        # We continue until we reach the limit *or* new_front is empty (i.e.,
+        # we've run out of things to select
+        while front and len(event_results) < limit:
+
+            new_front = []
+            for event_id in front:
+                logger.debug(
+                    "_backfill_interaction: id=%s",
+                    event_id
+                )
+
+                txn.execute(
+                    query,
+                    (room_id, event_id, limit - len(event_results))
+                )
+
+                for row in txn.fetchall():
+                    logger.debug(
+                        "_backfill_interaction: got id=%s",
+                        *row
+                    )
+                    new_front.append(row[0])
+
+            front = new_front
+            event_results += new_front
+
+        return self._get_events_txn(txn, event_results)
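The backfill walk in `EventFederationStore._get_backfill_events` above is a plain breadth-first traversal over the `event_edges` table: seed the frontier with the requested events, repeatedly select each event's `prev_event_id` rows, and stop once the limit is reached or the frontier empties. A minimal, runnable sketch of that traversal against a stand-in sqlite schema (table layout here is simplified for illustration):

```python
import sqlite3

def backfill_ids(conn, room_id, event_list, limit):
    """Breadth-first walk over event_edges, collecting parent event ids
    until `limit` is hit or there are no more edges, mirroring
    _get_backfill_events above."""
    results = list(event_list)
    front = list(event_list)
    query = (
        "SELECT prev_event_id FROM event_edges "
        "WHERE room_id = ? AND event_id = ? LIMIT ?"
    )
    while front and len(results) < limit:
        new_front = []
        for event_id in front:
            rows = conn.execute(
                query, (room_id, event_id, limit - len(results))
            ).fetchall()
            new_front.extend(row[0] for row in rows)
        front = new_front
        results += new_front
    return results

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event_edges "
    "(event_id TEXT, prev_event_id TEXT, room_id TEXT)"
)
conn.executemany(
    "INSERT INTO event_edges VALUES (?, ?, ?)",
    [("$c", "$b", "!r"), ("$b", "$a", "!r")],
)
print(backfill_ids(conn, "!r", ["$c"], 10))  # ['$c', '$b', '$a']
```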
@@ -41,7 +41,7 @@ class FeedbackStore(SQLBaseStore):
 
         defer.returnValue(
             [
-                self._parse_event_from_row(r)
+                (yield self._parse_events(r))
                 for r in rows
             ]
         )
@ -1,915 +0,0 @@
|
||||||
# -*- coding: utf-8 -*-
|
|
||||||
# Copyright 2014 OpenMarket Ltd
|
|
||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
# you may not use this file except in compliance with the License.
|
|
||||||
# You may obtain a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
# See the License for the specific language governing permissions and
|
|
||||||
# limitations under the License.
|
|
||||||
|
|
||||||
from twisted.internet import defer
|
|
||||||
|
|
||||||
from ._base import SQLBaseStore, Table, JoinHelper
|
|
||||||
|
|
||||||
from synapse.federation.units import Pdu
|
|
||||||
from synapse.util.logutils import log_function
|
|
||||||
|
|
||||||
from collections import namedtuple
|
|
||||||
|
|
||||||
import logging
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class PduStore(SQLBaseStore):
|
|
||||||
"""A collection of queries for handling PDUs.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def get_pdu(self, pdu_id, origin):
|
|
||||||
"""Given a pdu_id and origin, get a PDU.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
txn
|
|
||||||
pdu_id (str)
|
|
||||||
origin (str)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
PduTuple: If the pdu does not exist in the database, returns None
|
|
||||||
"""
|
|
||||||
|
|
||||||
return self.runInteraction(
|
|
||||||
self._get_pdu_tuple, pdu_id, origin
|
|
||||||
)
|
|
||||||
|
|
||||||
def _get_pdu_tuple(self, txn, pdu_id, origin):
|
|
||||||
res = self._get_pdu_tuples(txn, [(pdu_id, origin)])
|
|
||||||
return res[0] if res else None
|
|
||||||
|
|
||||||
def _get_pdu_tuples(self, txn, pdu_id_tuples):
|
|
||||||
results = []
|
|
||||||
for pdu_id, origin in pdu_id_tuples:
|
|
||||||
txn.execute(
|
|
||||||
PduEdgesTable.select_statement("pdu_id = ? AND origin = ?"),
|
|
||||||
(pdu_id, origin)
|
|
||||||
)
|
|
||||||
|
|
||||||
edges = [
|
|
||||||
(r.prev_pdu_id, r.prev_origin)
|
|
||||||
for r in PduEdgesTable.decode_results(txn.fetchall())
|
|
||||||
]
|
|
||||||
|
|
||||||
query = (
|
|
||||||
"SELECT %(fields)s FROM %(pdus)s as p "
|
|
||||||
"LEFT JOIN %(state)s as s "
|
|
||||||
"ON p.pdu_id = s.pdu_id AND p.origin = s.origin "
|
|
||||||
"WHERE p.pdu_id = ? AND p.origin = ? "
|
|
||||||
) % {
|
|
||||||
"fields": _pdu_state_joiner.get_fields(
|
|
||||||
PdusTable="p", StatePdusTable="s"),
|
|
||||||
"pdus": PdusTable.table_name,
|
|
||||||
"state": StatePdusTable.table_name,
|
|
||||||
}
|
|
||||||
|
|
||||||
txn.execute(query, (pdu_id, origin))
|
|
||||||
|
|
||||||
row = txn.fetchone()
|
|
||||||
if row:
|
|
||||||
results.append(PduTuple(PduEntry(*row), edges))
|
|
||||||
|
|
||||||
return results
|
|
||||||
|
|
||||||
def get_current_state_for_context(self, context):
|
|
||||||
"""Get a list of PDUs that represent the current state for a given
|
|
||||||
context
|
|
||||||
|
|
||||||
Args:
|
|
||||||
context (str)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
list: A list of PduTuples
|
|
||||||
"""
|
|
||||||
|
|
||||||
return self.runInteraction(
|
|
||||||
self._get_current_state_for_context,
|
|
||||||
context
|
|
||||||
)
|
|
||||||
|
|
||||||
def _get_current_state_for_context(self, txn, context):
|
|
||||||
query = (
|
|
||||||
"SELECT pdu_id, origin FROM %s WHERE context = ?"
|
|
||||||
% CurrentStateTable.table_name
|
|
||||||
)
|
|
||||||
|
|
||||||
logger.debug("get_current_state %s, Args=%s", query, context)
|
|
||||||
txn.execute(query, (context,))
|
|
||||||
|
|
||||||
res = txn.fetchall()
|
|
||||||
|
|
||||||
logger.debug("get_current_state %d results", len(res))
|
|
||||||
|
|
||||||
return self._get_pdu_tuples(txn, res)
|
|
||||||
|
|
||||||
    def _persist_pdu_txn(self, txn, prev_pdus, cols):
        """Inserts a (non-state) PDU into the database.

        Args:
            txn
            prev_pdus (list)
            **cols: The columns to insert into the PdusTable.
        """
        entry = PdusTable.EntryType(
            **{k: cols.get(k, None) for k in PdusTable.fields}
        )

        txn.execute(PdusTable.insert_statement(), entry)

        self._handle_prev_pdus(
            txn, entry.outlier, entry.pdu_id, entry.origin,
            prev_pdus, entry.context
        )

    def mark_pdu_as_processed(self, pdu_id, pdu_origin):
        """Mark a received PDU as processed.

        Args:
            pdu_id (str)
            pdu_origin (str)
        """

        return self.runInteraction(
            self._mark_as_processed, pdu_id, pdu_origin
        )

    def _mark_as_processed(self, txn, pdu_id, pdu_origin):
        # Restrict the update to the given PDU; without the WHERE clause the
        # original statement marked *every* PDU as processed.
        txn.execute(
            "UPDATE %s SET have_processed = 1 WHERE pdu_id = ? AND origin = ?"
            % PdusTable.table_name,
            (pdu_id, pdu_origin)
        )
    def get_all_pdus_from_context(self, context):
        """Get a list of all PDUs for a given context."""
        return self.runInteraction(
            self._get_all_pdus_from_context, context,
        )

    def _get_all_pdus_from_context(self, txn, context):
        query = (
            "SELECT pdu_id, origin FROM %s "
            "WHERE context = ?"
        ) % PdusTable.table_name

        txn.execute(query, (context,))

        return self._get_pdu_tuples(txn, txn.fetchall())
    def get_backfill(self, context, pdu_list, limit):
        """Get a list of Pdus for a given topic that occurred before (and
        including) the pdus in pdu_list. Return a list of max size `limit`.

        Args:
            context (str)
            pdu_list (list)
            limit (int)

        Returns:
            list: A list of PduTuples
        """
        return self.runInteraction(
            self._get_backfill, context, pdu_list, limit
        )

    def _get_backfill(self, txn, context, pdu_list, limit):
        logger.debug(
            "backfill: %s, %s, %s",
            context, repr(pdu_list), limit
        )

        # We seed the pdu_results with the things from the pdu_list.
        pdu_results = pdu_list

        front = pdu_list

        query = (
            "SELECT prev_pdu_id, prev_origin FROM %(edges_table)s "
            "WHERE context = ? AND pdu_id = ? AND origin = ? "
            "LIMIT ?"
        ) % {
            "edges_table": PduEdgesTable.table_name,
        }

        # We iterate through all pdu_ids in `front` to select their previous
        # pdus. These are dumped in `new_front`. We continue until we reach
        # the limit *or* new_front is empty (i.e., we've run out of things to
        # select).
        while front and len(pdu_results) < limit:

            new_front = []
            for pdu_id, origin in front:
                logger.debug(
                    "_backfill_interaction: i=%s, o=%s",
                    pdu_id, origin
                )

                txn.execute(
                    query,
                    (context, pdu_id, origin, limit - len(pdu_results))
                )

                for row in txn.fetchall():
                    logger.debug(
                        "_backfill_interaction: got i=%s, o=%s",
                        *row
                    )
                    new_front.append(row)

            front = new_front
            pdu_results += new_front

        # We also want to update the `prev_pdus` attributes before returning.
        return self._get_pdu_tuples(txn, pdu_results)
    def get_min_depth_for_context(self, context):
        """Get the current minimum depth for a context

        Args:
            context (str)
        """
        return self.runInteraction(
            self._get_min_depth_for_context, context
        )

    def _get_min_depth_for_context(self, txn, context):
        return self._get_min_depth_interaction(txn, context)

    def _get_min_depth_interaction(self, txn, context):
        txn.execute(
            "SELECT min_depth FROM %s WHERE context = ?"
            % ContextDepthTable.table_name,
            (context,)
        )

        row = txn.fetchone()

        return row[0] if row else None

    def _update_min_depth_for_context_txn(self, txn, context, depth):
        """Update the minimum `depth` of the given context, which is the line
        on which we stop backfilling backwards.

        Args:
            context (str)
            depth (int)
        """
        min_depth = self._get_min_depth_interaction(txn, context)

        do_insert = depth < min_depth if min_depth else True

        if do_insert:
            txn.execute(
                "INSERT OR REPLACE INTO %s (context, min_depth) "
                "VALUES (?,?)" % ContextDepthTable.table_name,
                (context, depth)
            )
    def _get_latest_pdus_in_context(self, txn, context):
        """Gets a list of the most current pdus for a given context. This is
        used when we are sending a Pdu and need to fill out the `prev_pdus`
        key.

        Args:
            txn
            context
        """
        query = (
            "SELECT p.pdu_id, p.origin, p.depth FROM %(pdus)s as p "
            "INNER JOIN %(forward)s as f ON p.pdu_id = f.pdu_id "
            "AND f.origin = p.origin "
            "WHERE f.context = ?"
        ) % {
            "pdus": PdusTable.table_name,
            "forward": PduForwardExtremitiesTable.table_name,
        }

        logger.debug("get_prev query: %s", query)

        txn.execute(
            query,
            (context, )
        )

        results = txn.fetchall()

        return [(row[0], row[1], row[2]) for row in results]
    @defer.inlineCallbacks
    def get_oldest_pdus_in_context(self, context):
        """Get a list of Pdus that we haven't backfilled beyond yet (and
        haven't seen). This list is used when we want to backfill backwards
        and is the list we send to the remote server.

        Args:
            context (str)

        Returns:
            list: A list of PduIdTuple.
        """
        results = yield self._execute(
            None,
            "SELECT pdu_id, origin FROM %(back)s WHERE context = ?"
            % {"back": PduBackwardExtremitiesTable.table_name, },
            context
        )

        defer.returnValue([PduIdTuple(i, o) for i, o in results])
    def is_pdu_new(self, pdu_id, origin, context, depth):
        """For a given Pdu, try and figure out if it's 'new', i.e., if it's
        not something we got randomly from the past, for example when we
        request the current state of the room that will probably return a
        bunch of pdus from before we joined.

        Args:
            pdu_id (str)
            origin (str)
            context (str)
            depth (int)

        Returns:
            bool
        """

        return self.runInteraction(
            self._is_pdu_new,
            pdu_id=pdu_id,
            origin=origin,
            context=context,
            depth=depth
        )

    def _is_pdu_new(self, txn, pdu_id, origin, context, depth):
        # If depth > min depth in back table, then we classify it as new.
        # OR if there is nothing in the back table, then it needs to be
        # treated as new.
        query = (
            "SELECT min(p.depth) FROM %(edges)s as e "
            "INNER JOIN %(back)s as b "
            "ON e.prev_pdu_id = b.pdu_id AND e.prev_origin = b.origin "
            "INNER JOIN %(pdus)s as p "
            "ON e.pdu_id = p.pdu_id AND p.origin = e.origin "
            "WHERE p.context = ?"
        ) % {
            "pdus": PdusTable.table_name,
            "edges": PduEdgesTable.table_name,
            "back": PduBackwardExtremitiesTable.table_name,
        }

        txn.execute(query, (context,))

        min_depth, = txn.fetchone()

        if not min_depth or depth > int(min_depth):
            logger.debug(
                "is_new true: id=%s, o=%s, d=%s min_depth=%s",
                pdu_id, origin, depth, min_depth
            )
            return True

        # If this pdu is in the forwards table, then it also is a new one
        query = (
            "SELECT * FROM %(forward)s WHERE pdu_id = ? AND origin = ?"
        ) % {
            "forward": PduForwardExtremitiesTable.table_name,
        }

        txn.execute(query, (pdu_id, origin))

        # Did we get anything?
        if txn.fetchall():
            logger.debug(
                "is_new true: id=%s, o=%s, d=%s was forward",
                pdu_id, origin, depth
            )
            return True

        logger.debug(
            "is_new false: id=%s, o=%s, d=%s",
            pdu_id, origin, depth
        )

        # FINE THEN. It's probably old.
        return False
    @staticmethod
    @log_function
    def _handle_prev_pdus(txn, outlier, pdu_id, origin, prev_pdus,
                          context):
        txn.executemany(
            PduEdgesTable.insert_statement(),
            [(pdu_id, origin, p[0], p[1], context) for p in prev_pdus]
        )

        # Update the extremities table if this is not an outlier.
        if not outlier:

            # First, we delete the prev_pdus from the forward extremities
            # table, since they now have a successor.
            query = (
                "DELETE FROM %s WHERE pdu_id = ? AND origin = ?"
                % PduForwardExtremitiesTable.table_name
            )
            txn.executemany(query, prev_pdus)

            # We only insert the new pdu as a forward extremity if there are
            # no other pdus that reference it as a prev pdu.
            query = (
                "INSERT INTO %(table)s (pdu_id, origin, context) "
                "SELECT ?, ?, ? WHERE NOT EXISTS ("
                "SELECT 1 FROM %(pdu_edges)s WHERE "
                "prev_pdu_id = ? AND prev_origin = ?"
                ")"
            ) % {
                "table": PduForwardExtremitiesTable.table_name,
                "pdu_edges": PduEdgesTable.table_name
            }

            logger.debug("query: %s", query)

            txn.execute(query, (pdu_id, origin, context, pdu_id, origin))

            # Insert all the prev_pdus as backward extremities; they'll get
            # deleted in a second if they're incorrect anyway.
            txn.executemany(
                PduBackwardExtremitiesTable.insert_statement(),
                [(i, o, context) for i, o in prev_pdus]
            )

            # Also delete from the backwards extremities table all entries
            # that reference pdus that we have already seen.
            query = (
                "DELETE FROM %(pdu_back)s WHERE EXISTS ("
                "SELECT 1 FROM %(pdus)s AS pdus "
                "WHERE "
                "%(pdu_back)s.pdu_id = pdus.pdu_id "
                "AND %(pdu_back)s.origin = pdus.origin "
                "AND not pdus.outlier "
                ")"
            ) % {
                "pdu_back": PduBackwardExtremitiesTable.table_name,
                "pdus": PdusTable.table_name,
            }
            txn.execute(query)
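
# Editor's note: the backfill walk in _get_backfill above is easier to see in
# isolation. Below is a minimal, self-contained sketch of the same frontier
# expansion over prev-PDU edges, using a plain dict as a stand-in for the
# pdu_edges table; `edges` and `walk_backwards` are illustrative names only,
# not part of this codebase.

def walk_backwards(edges, seeds, limit):
    """Collect up to `limit` PDU ids reachable backwards from `seeds`.

    `edges` maps a pdu id to the list of its prev-pdu ids, mirroring what
    the pdu_edges table stores per (pdu_id, origin).
    """
    results = list(seeds)   # seed the results, as _get_backfill does
    front = list(seeds)
    while front and len(results) < limit:
        new_front = []
        for pdu in front:
            for prev in edges.get(pdu, []):
                if len(results) + len(new_front) < limit:
                    new_front.append(prev)
        front = new_front
        results += new_front
    return results

# e.g. walk_backwards({"C": ["B"], "B": ["A"]}, ["C"], limit=3)
# returns ["C", "B", "A"].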


class StatePduStore(SQLBaseStore):
    """A collection of queries for handling state PDUs.
    """

    def _persist_state_txn(self, txn, prev_pdus, cols):
        """Inserts a state PDU into the database.

        Args:
            txn
            prev_pdus (list)
            **cols: The columns to insert into the PdusTable and
                StatePdusTable.
        """
        pdu_entry = PdusTable.EntryType(
            **{k: cols.get(k, None) for k in PdusTable.fields}
        )
        state_entry = StatePdusTable.EntryType(
            **{k: cols.get(k, None) for k in StatePdusTable.fields}
        )

        logger.debug("Inserting pdu: %s", repr(pdu_entry))
        logger.debug("Inserting state: %s", repr(state_entry))

        txn.execute(PdusTable.insert_statement(), pdu_entry)
        txn.execute(StatePdusTable.insert_statement(), state_entry)

        self._handle_prev_pdus(
            txn,
            pdu_entry.outlier, pdu_entry.pdu_id, pdu_entry.origin, prev_pdus,
            pdu_entry.context
        )

    def get_unresolved_state_tree(self, new_state_pdu):
        return self.runInteraction(
            self._get_unresolved_state_tree, new_state_pdu
        )

    @log_function
    def _get_unresolved_state_tree(self, txn, new_pdu):
        current = self._get_current_interaction(
            txn,
            new_pdu.context, new_pdu.pdu_type, new_pdu.state_key
        )

        ReturnType = namedtuple(
            "StateReturnType", ["new_branch", "current_branch"]
        )
        return_value = ReturnType([new_pdu], [])

        if not current:
            logger.debug("get_unresolved_state_tree No current state.")
            return (return_value, None)

        return_value.current_branch.append(current)

        enum_branches = self._enumerate_state_branches(
            txn, new_pdu, current
        )

        missing_branch = None
        for branch, prev_state, state in enum_branches:
            if state:
                return_value[branch].append(state)
            else:
                # We don't have prev_state :(
                missing_branch = branch
                break

        return (return_value, missing_branch)
    def update_current_state(self, pdu_id, origin, context, pdu_type,
                             state_key):
        return self.runInteraction(
            self._update_current_state,
            pdu_id, origin, context, pdu_type, state_key
        )

    def _update_current_state(self, txn, pdu_id, origin, context, pdu_type,
                              state_key):
        query = (
            "INSERT OR REPLACE INTO %(curr)s (%(fields)s) VALUES (%(qs)s)"
        ) % {
            "curr": CurrentStateTable.table_name,
            "fields": CurrentStateTable.get_fields_string(),
            "qs": ", ".join(["?"] * len(CurrentStateTable.fields))
        }

        query_args = CurrentStateTable.EntryType(
            pdu_id=pdu_id,
            origin=origin,
            context=context,
            pdu_type=pdu_type,
            state_key=state_key
        )

        txn.execute(query, query_args)
    def get_current_state_pdu(self, context, pdu_type, state_key):
        """For a given context, pdu_type, state_key 3-tuple, return what is
        currently considered the current state.

        Args:
            context (str)
            pdu_type (str)
            state_key (str)

        Returns:
            PduEntry
        """

        return self.runInteraction(
            self._get_current_state_pdu, context, pdu_type, state_key
        )

    def _get_current_state_pdu(self, txn, context, pdu_type, state_key):
        return self._get_current_interaction(txn, context, pdu_type, state_key)

    def _get_current_interaction(self, txn, context, pdu_type, state_key):
        logger.debug(
            "_get_current_interaction %s %s %s",
            context, pdu_type, state_key
        )

        fields = _pdu_state_joiner.get_fields(
            PdusTable="p", StatePdusTable="s")

        current_query = (
            "SELECT %(fields)s FROM %(state)s as s "
            "INNER JOIN %(pdus)s as p "
            "ON s.pdu_id = p.pdu_id AND s.origin = p.origin "
            "INNER JOIN %(curr)s as c "
            "ON s.pdu_id = c.pdu_id AND s.origin = c.origin "
            "WHERE s.context = ? AND s.pdu_type = ? AND s.state_key = ? "
        ) % {
            "fields": fields,
            "curr": CurrentStateTable.table_name,
            "state": StatePdusTable.table_name,
            "pdus": PdusTable.table_name,
        }

        txn.execute(
            current_query,
            (context, pdu_type, state_key)
        )

        row = txn.fetchone()

        result = PduEntry(*row) if row else None

        if not result:
            logger.debug("_get_current_interaction not found")
        else:
            logger.debug(
                "_get_current_interaction found %s %s",
                result.pdu_id, result.origin
            )

        return result
    def handle_new_state(self, new_pdu):
        """Actually perform conflict resolution on the new_pdu on the
        assumption we have all the pdus required to perform it.

        Args:
            new_pdu

        Returns:
            bool: True if the new_pdu clobbered the current state, False if
                not
        """
        return self.runInteraction(
            self._handle_new_state, new_pdu
        )

    def _handle_new_state(self, txn, new_pdu):
        logger.debug(
            "handle_new_state %s %s",
            new_pdu.pdu_id, new_pdu.origin
        )

        current = self._get_current_interaction(
            txn,
            new_pdu.context, new_pdu.pdu_type, new_pdu.state_key
        )

        is_current = False

        if (not current or not current.prev_state_id
                or not current.prev_state_origin):
            # Oh, we don't have any state for this yet.
            is_current = True
        elif (current.pdu_id == new_pdu.prev_state_id
                and current.origin == new_pdu.prev_state_origin):
            # Oh! A direct clobber. Just do it.
            is_current = True
        else:
            # Ok, now loop through until we get to a common ancestor,
            # tracking the maximum depth seen along each branch. The branch
            # that reached the greater depth wins.
            max_new = int(new_pdu.power_level)
            max_current = int(current.power_level)

            enum_branches = self._enumerate_state_branches(
                txn, new_pdu, current
            )
            for branch, prev_state, state in enum_branches:
                if not state:
                    raise RuntimeError(
                        "Could not find state_pdu %s %s" %
                        (
                            prev_state.prev_state_id,
                            prev_state.prev_state_origin
                        )
                    )

                if branch == 0:
                    max_new = max(int(state.depth), max_new)
                else:
                    max_current = max(int(state.depth), max_current)

            is_current = max_new > max_current

        if is_current:
            logger.debug("handle_new_state make current")

            # Right, this is a new thing, so woo, just insert it.
            txn.execute(
                "INSERT OR REPLACE INTO %(curr)s (%(fields)s) VALUES (%(qs)s)"
                % {
                    "curr": CurrentStateTable.table_name,
                    "fields": CurrentStateTable.get_fields_string(),
                    "qs": ", ".join(["?"] * len(CurrentStateTable.fields))
                },
                CurrentStateTable.EntryType(
                    *(new_pdu.__dict__[k] for k in CurrentStateTable.fields)
                )
            )
        else:
            logger.debug("handle_new_state not current")

        logger.debug("handle_new_state done")

        return is_current
    @log_function
    def _enumerate_state_branches(self, txn, pdu_a, pdu_b):
        branch_a = pdu_a
        branch_b = pdu_b

        while True:
            if (branch_a.pdu_id == branch_b.pdu_id
                    and branch_a.origin == branch_b.origin):
                # Woo! We found a common ancestor
                logger.debug("_enumerate_state_branches Found common ancestor")
                break

            do_branch_a = (
                hasattr(branch_a, "prev_state_id") and
                branch_a.prev_state_id
            )

            do_branch_b = (
                hasattr(branch_b, "prev_state_id") and
                branch_b.prev_state_id
            )

            logger.debug(
                "do_branch_a=%s, do_branch_b=%s",
                do_branch_a, do_branch_b
            )

            if do_branch_a and do_branch_b:
                do_branch_a = int(branch_a.depth) > int(branch_b.depth)

            if do_branch_a:
                pdu_tuple = PduIdTuple(
                    branch_a.prev_state_id,
                    branch_a.prev_state_origin
                )

                prev_branch = branch_a

                logger.debug("getting branch_a prev %s", pdu_tuple)
                branch_a = self._get_pdu_tuple(txn, *pdu_tuple)
                if branch_a:
                    branch_a = Pdu.from_pdu_tuple(branch_a)

                logger.debug("branch_a=%s", branch_a)

                yield (0, prev_branch, branch_a)

                if not branch_a:
                    break
            elif do_branch_b:
                pdu_tuple = PduIdTuple(
                    branch_b.prev_state_id,
                    branch_b.prev_state_origin
                )

                prev_branch = branch_b

                logger.debug("getting branch_b prev %s", pdu_tuple)
                branch_b = self._get_pdu_tuple(txn, *pdu_tuple)
                if branch_b:
                    branch_b = Pdu.from_pdu_tuple(branch_b)

                logger.debug("branch_b=%s", branch_b)

                yield (1, prev_branch, branch_b)

                if not branch_b:
                    break
            else:
                break
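
# Editor's note: a minimal sketch of what _enumerate_state_branches does,
# stripped of the database access: walk two chains of prev-state pointers,
# always stepping the deeper branch, until both reach the same node. The
# `Node` type and `common_ancestor_walk` name are illustrative only.

from collections import namedtuple

Node = namedtuple("Node", ["id", "depth", "prev"])  # prev: Node or None

def common_ancestor_walk(a, b):
    """Yield (branch, node) pairs while walking back to a shared node."""
    while a.id != b.id:
        step_a = a.prev is not None
        step_b = b.prev is not None
        if step_a and step_b:
            step_a = a.depth > b.depth   # step the deeper branch first
        if step_a:
            a = a.prev
            yield (0, a)
        elif step_b:
            b = b.prev
            yield (1, b)
        else:
            break   # ran out of history without meeting

# e.g. root = Node("R", 0, None); a = Node("A", 1, root); b = Node("B", 1, root)
# list(common_ancestor_walk(a, b)) steps each branch once and stops at "R".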


class PdusTable(Table):
    table_name = "pdus"

    fields = [
        "pdu_id",
        "origin",
        "context",
        "pdu_type",
        "ts",
        "depth",
        "is_state",
        "content_json",
        "unrecognized_keys",
        "outlier",
        "have_processed",
    ]

    EntryType = namedtuple("PdusEntry", fields)


class PduDestinationsTable(Table):
    table_name = "pdu_destinations"

    fields = [
        "pdu_id",
        "origin",
        "destination",
        "delivered_ts",
    ]

    EntryType = namedtuple("PduDestinationsEntry", fields)


class PduEdgesTable(Table):
    table_name = "pdu_edges"

    fields = [
        "pdu_id",
        "origin",
        "prev_pdu_id",
        "prev_origin",
        "context"
    ]

    EntryType = namedtuple("PduEdgesEntry", fields)


class PduForwardExtremitiesTable(Table):
    table_name = "pdu_forward_extremities"

    fields = [
        "pdu_id",
        "origin",
        "context",
    ]

    EntryType = namedtuple("PduForwardExtremitiesEntry", fields)


class PduBackwardExtremitiesTable(Table):
    table_name = "pdu_backward_extremities"

    fields = [
        "pdu_id",
        "origin",
        "context",
    ]

    EntryType = namedtuple("PduBackwardExtremitiesEntry", fields)


class ContextDepthTable(Table):
    table_name = "context_depth"

    fields = [
        "context",
        "min_depth",
    ]

    EntryType = namedtuple("ContextDepthEntry", fields)


class StatePdusTable(Table):
    table_name = "state_pdus"

    fields = [
        "pdu_id",
        "origin",
        "context",
        "pdu_type",
        "state_key",
        "power_level",
        "prev_state_id",
        "prev_state_origin",
    ]

    EntryType = namedtuple("StatePdusEntry", fields)


class CurrentStateTable(Table):
    table_name = "current_state"

    fields = [
        "pdu_id",
        "origin",
        "context",
        "pdu_type",
        "state_key",
    ]

    EntryType = namedtuple("CurrentStateEntry", fields)


_pdu_state_joiner = JoinHelper(PdusTable, StatePdusTable)


# TODO: These should probably be put somewhere more sensible
PduIdTuple = namedtuple("PduIdTuple", ("pdu_id", "origin"))

PduEntry = _pdu_state_joiner.EntryType
"""We are always interested in the join of the PdusTable and StatePdusTable,
rather than just the PdusTable.

This does not include a prev_pdus key.
"""

PduTuple = namedtuple(
    "PduTuple",
    ("pdu_entry", "prev_pdu_list")
)
"""This is a tuple of a `PduEntry` and a list of `PduIdTuple` that represent
the `prev_pdus` key of a PDU.
"""

@@ -62,8 +62,10 @@ class RegistrationStore(SQLBaseStore):
         Raises:
             StoreError if the user_id could not be registered.
         """
-        yield self.runInteraction(self._register, user_id, token,
-                                  password_hash)
+        yield self.runInteraction(
+            "register",
+            self._register, user_id, token, password_hash
+        )
 
     def _register(self, txn, user_id, token, password_hash):
         now = int(self.clock.time())
@@ -100,17 +102,22 @@ class RegistrationStore(SQLBaseStore):
             StoreError if no user was found.
         """
         return self.runInteraction(
+            "get_user_by_token",
             self._query_for_auth,
             token
         )
 
+    @defer.inlineCallbacks
     def is_server_admin(self, user):
-        return self._simple_select_one_onecol(
+        res = yield self._simple_select_one_onecol(
             table="users",
             keyvalues={"name": user.to_string()},
             retcol="admin",
+            allow_none=True,
         )
 
+        defer.returnValue(res if res else False)
+
     def _query_for_auth(self, txn, token):
         sql = (
             "SELECT users.name, users.admin, access_tokens.device_id "
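Both hunks above make the same mechanical change: `runInteraction` now takes a human-readable description as its first argument, ahead of the transaction function. A sketch of the resulting call shape (the names here are illustrative, not taken from the diff):

    self.runInteraction(
        "describe_the_txn",       # label for the transaction, e.g. in logs
        self._some_txn_function,  # invoked with the cursor as its first arg
        arg1, arg2,
    )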
@@ -132,209 +132,29 @@ class RoomStore(SQLBaseStore):
 
         defer.returnValue(ret)
 
-    @defer.inlineCallbacks
-    def get_room_join_rule(self, room_id):
-        sql = (
-            "SELECT join_rule FROM room_join_rules as r "
-            "INNER JOIN current_state_events as c "
-            "ON r.event_id = c.event_id "
-            "WHERE c.room_id = ? "
-        )
-
-        rows = yield self._execute(None, sql, room_id)
-
-        if len(rows) == 1:
-            defer.returnValue(rows[0][0])
-        else:
-            defer.returnValue(None)
-
-    def get_power_level(self, room_id, user_id):
-        return self.runInteraction(
-            self._get_power_level,
-            room_id, user_id,
-        )
-
-    def _get_power_level(self, txn, room_id, user_id):
-        sql = (
-            "SELECT level FROM room_power_levels as r "
-            "INNER JOIN current_state_events as c "
-            "ON r.event_id = c.event_id "
-            "WHERE c.room_id = ? AND r.user_id = ? "
-        )
-
-        rows = txn.execute(sql, (room_id, user_id,)).fetchall()
-
-        if len(rows) == 1:
-            return rows[0][0]
-
-        sql = (
-            "SELECT level FROM room_default_levels as r "
-            "INNER JOIN current_state_events as c "
-            "ON r.event_id = c.event_id "
-            "WHERE c.room_id = ? "
-        )
-
-        rows = txn.execute(sql, (room_id,)).fetchall()
-
-        if len(rows) == 1:
-            return rows[0][0]
-        else:
-            return None
-
-    def get_ops_levels(self, room_id):
-        return self.runInteraction(
-            self._get_ops_levels,
-            room_id,
-        )
-
-    def _get_ops_levels(self, txn, room_id):
-        sql = (
-            "SELECT ban_level, kick_level, redact_level "
-            "FROM room_ops_levels as r "
-            "INNER JOIN current_state_events as c "
-            "ON r.event_id = c.event_id "
-            "WHERE c.room_id = ? "
-        )
-
-        rows = txn.execute(sql, (room_id,)).fetchall()
-
-        if len(rows) == 1:
-            return OpsLevel(rows[0][0], rows[0][1], rows[0][2])
-        else:
-            return OpsLevel(None, None)
-
-    def get_add_state_level(self, room_id):
-        return self._get_level_from_table("room_add_state_levels", room_id)
-
-    def get_send_event_level(self, room_id):
-        return self._get_level_from_table("room_send_event_levels", room_id)
-
-    @defer.inlineCallbacks
-    def _get_level_from_table(self, table, room_id):
-        sql = (
-            "SELECT level FROM %(table)s as r "
-            "INNER JOIN current_state_events as c "
-            "ON r.event_id = c.event_id "
-            "WHERE c.room_id = ? "
-        ) % {"table": table}
-
-        rows = yield self._execute(None, sql, room_id)
-
-        if len(rows) == 1:
-            defer.returnValue(rows[0][0])
-        else:
-            defer.returnValue(None)
-
     def _store_room_topic_txn(self, txn, event):
-        self._simple_insert_txn(
-            txn,
-            "topics",
-            {
-                "event_id": event.event_id,
-                "room_id": event.room_id,
-                "topic": event.topic,
-            }
-        )
+        if hasattr(event, "topic"):
+            self._simple_insert_txn(
+                txn,
+                "topics",
+                {
+                    "event_id": event.event_id,
+                    "room_id": event.room_id,
+                    "topic": event.topic,
+                }
+            )
 
     def _store_room_name_txn(self, txn, event):
-        self._simple_insert_txn(
-            txn,
-            "room_names",
-            {
-                "event_id": event.event_id,
-                "room_id": event.room_id,
-                "name": event.name,
-            }
-        )
+        if hasattr(event, "name"):
+            self._simple_insert_txn(
+                txn,
+                "room_names",
+                {
+                    "event_id": event.event_id,
+                    "room_id": event.room_id,
+                    "name": event.name,
+                }
+            )
 
-    def _store_join_rule(self, txn, event):
-        self._simple_insert_txn(
-            txn,
-            "room_join_rules",
-            {
-                "event_id": event.event_id,
-                "room_id": event.room_id,
-                "join_rule": event.content["join_rule"],
-            },
-        )
-
-    def _store_power_levels(self, txn, event):
-        for user_id, level in event.content.items():
-            if user_id == "default":
-                self._simple_insert_txn(
-                    txn,
-                    "room_default_levels",
-                    {
-                        "event_id": event.event_id,
-                        "room_id": event.room_id,
-                        "level": level,
-                    },
-                )
-            else:
-                self._simple_insert_txn(
-                    txn,
-                    "room_power_levels",
-                    {
-                        "event_id": event.event_id,
-                        "room_id": event.room_id,
-                        "user_id": user_id,
-                        "level": level
-                    },
-                )
-
-    def _store_default_level(self, txn, event):
-        self._simple_insert_txn(
-            txn,
-            "room_default_levels",
-            {
-                "event_id": event.event_id,
-                "room_id": event.room_id,
-                "level": event.content["default_level"],
-            },
-        )
-
-    def _store_add_state_level(self, txn, event):
-        self._simple_insert_txn(
-            txn,
-            "room_add_state_levels",
-            {
-                "event_id": event.event_id,
-                "room_id": event.room_id,
-                "level": event.content["level"],
-            },
-        )
-
-    def _store_send_event_level(self, txn, event):
-        self._simple_insert_txn(
-            txn,
-            "room_send_event_levels",
-            {
-                "event_id": event.event_id,
-                "room_id": event.room_id,
-                "level": event.content["level"],
-            },
-        )
-
-    def _store_ops_level(self, txn, event):
-        content = {
-            "event_id": event.event_id,
-            "room_id": event.room_id,
-        }
-
-        if "kick_level" in event.content:
-            content["kick_level"] = event.content["kick_level"]
-
-        if "ban_level" in event.content:
-            content["ban_level"] = event.content["ban_level"]
-
-        if "redact_level" in event.content:
-            content["redact_level"] = event.content["redact_level"]
-
-        self._simple_insert_txn(
-            txn,
-            "room_ops_levels",
-            content,
-        )
 
 class RoomsTable(Table):
@@ -1,31 +0,0 @@
-/* Copyright 2014 OpenMarket Ltd
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-CREATE TABLE IF NOT EXISTS context_edge_pdus(
-    id INTEGER PRIMARY KEY AUTOINCREMENT, -- twistar requires this
-    pdu_id TEXT,
-    origin TEXT,
-    context TEXT,
-    CONSTRAINT context_edge_pdu_id_origin UNIQUE (pdu_id, origin)
-);
-
-CREATE TABLE IF NOT EXISTS origin_edge_pdus(
-    id INTEGER PRIMARY KEY AUTOINCREMENT, -- twistar requires this
-    pdu_id TEXT,
-    origin TEXT,
-    CONSTRAINT origin_edge_pdu_id_origin UNIQUE (pdu_id, origin)
-);
-
-CREATE INDEX IF NOT EXISTS context_edge_pdu_id ON context_edge_pdus(pdu_id, origin);
-CREATE INDEX IF NOT EXISTS origin_edge_pdu_id ON origin_edge_pdus(pdu_id, origin);
synapse/storage/schema/event_edges.sql (new file, 75 lines)
@@ -0,0 +1,75 @@
+CREATE TABLE IF NOT EXISTS event_forward_extremities(
+    event_id TEXT NOT NULL,
+    room_id TEXT NOT NULL,
+    CONSTRAINT uniqueness UNIQUE (event_id, room_id) ON CONFLICT REPLACE
+);
+
+CREATE INDEX IF NOT EXISTS ev_extrem_room ON event_forward_extremities(room_id);
+CREATE INDEX IF NOT EXISTS ev_extrem_id ON event_forward_extremities(event_id);
+
+
+CREATE TABLE IF NOT EXISTS event_backward_extremities(
+    event_id TEXT NOT NULL,
+    room_id TEXT NOT NULL,
+    CONSTRAINT uniqueness UNIQUE (event_id, room_id) ON CONFLICT REPLACE
+);
+
+CREATE INDEX IF NOT EXISTS ev_b_extrem_room ON event_backward_extremities(room_id);
+CREATE INDEX IF NOT EXISTS ev_b_extrem_id ON event_backward_extremities(event_id);
+
+
+CREATE TABLE IF NOT EXISTS event_edges(
+    event_id TEXT NOT NULL,
+    prev_event_id TEXT NOT NULL,
+    room_id TEXT NOT NULL,
+    is_state INTEGER NOT NULL,
+    CONSTRAINT uniqueness UNIQUE (event_id, prev_event_id, room_id, is_state)
+);
+
+CREATE INDEX IF NOT EXISTS ev_edges_id ON event_edges(event_id);
+CREATE INDEX IF NOT EXISTS ev_edges_prev_id ON event_edges(prev_event_id);
+
+
+CREATE TABLE IF NOT EXISTS room_depth(
+    room_id TEXT NOT NULL,
+    min_depth INTEGER NOT NULL,
+    CONSTRAINT uniqueness UNIQUE (room_id)
+);
+
+CREATE INDEX IF NOT EXISTS room_depth_room ON room_depth(room_id);
+
+
+CREATE TABLE IF NOT EXISTS event_destinations(
+    event_id TEXT NOT NULL,
+    destination TEXT NOT NULL,
+    delivered_ts INTEGER DEFAULT 0, -- or 0 if not delivered
+    CONSTRAINT uniqueness UNIQUE (event_id, destination) ON CONFLICT REPLACE
+);
+
+CREATE INDEX IF NOT EXISTS event_destinations_id ON event_destinations(event_id);
+
+
+CREATE TABLE IF NOT EXISTS state_forward_extremities(
+    event_id TEXT NOT NULL,
+    room_id TEXT NOT NULL,
+    type TEXT NOT NULL,
+    state_key TEXT NOT NULL,
+    CONSTRAINT uniqueness UNIQUE (event_id, room_id) ON CONFLICT REPLACE
+);
+
+CREATE INDEX IF NOT EXISTS st_extrem_keys ON state_forward_extremities(
+    room_id, type, state_key
+);
+CREATE INDEX IF NOT EXISTS st_extrem_id ON state_forward_extremities(event_id);
+
+
+CREATE TABLE IF NOT EXISTS event_auth(
+    event_id TEXT NOT NULL,
+    auth_id TEXT NOT NULL,
+    room_id TEXT NOT NULL,
+    CONSTRAINT uniqueness UNIQUE (event_id, auth_id, room_id)
+);
+
+CREATE INDEX IF NOT EXISTS evauth_edges_id ON event_auth(event_id);
+CREATE INDEX IF NOT EXISTS evauth_edges_auth_id ON event_auth(auth_id);
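As a quick sanity check of the shape of this schema, the tables can be loaded into an in-memory SQLite database and one edge walked. This is an editor's sketch, not part of the commit; only two of the tables are reproduced in `SCHEMA` below.

    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS event_edges(
        event_id TEXT NOT NULL,
        prev_event_id TEXT NOT NULL,
        room_id TEXT NOT NULL,
        is_state INTEGER NOT NULL,
        CONSTRAINT uniqueness UNIQUE (event_id, prev_event_id, room_id, is_state)
    );
    CREATE TABLE IF NOT EXISTS event_forward_extremities(
        event_id TEXT NOT NULL,
        room_id TEXT NOT NULL,
        CONSTRAINT uniqueness UNIQUE (event_id, room_id) ON CONFLICT REPLACE
    );
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)

    # Event $B extends $A, so $B becomes the room's forward extremity:
    conn.execute(
        "INSERT INTO event_edges (event_id, prev_event_id, room_id, is_state)"
        " VALUES (?, ?, ?, 0)",
        ("$B", "$A", "!room"),
    )
    conn.execute(
        "INSERT INTO event_forward_extremities (event_id, room_id) VALUES (?, ?)",
        ("$B", "!room"),
    )

    rows = conn.execute(
        "SELECT event_id FROM event_forward_extremities WHERE room_id = ?",
        ("!room",),
    ).fetchall()
    assert rows == [("$B",)]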
synapse/storage/schema/event_signatures.sql (new file, 65 lines)
@@ -0,0 +1,65 @@
+/* Copyright 2014 OpenMarket Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+CREATE TABLE IF NOT EXISTS event_content_hashes (
+    event_id TEXT,
+    algorithm TEXT,
+    hash BLOB,
+    CONSTRAINT uniqueness UNIQUE (event_id, algorithm)
+);
+
+CREATE INDEX IF NOT EXISTS event_content_hashes_id ON event_content_hashes(
+    event_id
+);
+
+
+CREATE TABLE IF NOT EXISTS event_reference_hashes (
+    event_id TEXT,
+    algorithm TEXT,
+    hash BLOB,
+    CONSTRAINT uniqueness UNIQUE (event_id, algorithm)
+);
+
+CREATE INDEX IF NOT EXISTS event_reference_hashes_id ON event_reference_hashes (
+    event_id
+);
+
+
+CREATE TABLE IF NOT EXISTS event_signatures (
+    event_id TEXT,
+    signature_name TEXT,
+    key_id TEXT,
+    signature BLOB,
+    CONSTRAINT uniqueness UNIQUE (event_id, key_id)
+);
+
+CREATE INDEX IF NOT EXISTS event_signatures_id ON event_signatures (
+    event_id
+);
+
+
+CREATE TABLE IF NOT EXISTS event_edge_hashes(
+    event_id TEXT,
+    prev_event_id TEXT,
+    algorithm TEXT,
+    hash BLOB,
+    CONSTRAINT uniqueness UNIQUE (
+        event_id, prev_event_id, algorithm
+    )
+);
+
+CREATE INDEX IF NOT EXISTS event_edge_hashes_id ON event_edge_hashes(
+    event_id
+);
@@ -23,6 +23,7 @@ CREATE TABLE IF NOT EXISTS events(
     unrecognized_keys TEXT,
     processed BOOL NOT NULL,
     outlier BOOL NOT NULL,
+    depth INTEGER DEFAULT 0 NOT NULL,
     CONSTRAINT ev_uniq UNIQUE (event_id)
 );
@@ -84,80 +85,24 @@ CREATE TABLE IF NOT EXISTS topics(
     topic TEXT NOT NULL
 );
 
+CREATE INDEX IF NOT EXISTS topics_event_id ON topics(event_id);
+CREATE INDEX IF NOT EXISTS topics_room_id ON topics(room_id);
+
 CREATE TABLE IF NOT EXISTS room_names(
     event_id TEXT NOT NULL,
     room_id TEXT NOT NULL,
     name TEXT NOT NULL
 );
 
+CREATE INDEX IF NOT EXISTS room_names_event_id ON room_names(event_id);
+CREATE INDEX IF NOT EXISTS room_names_room_id ON room_names(room_id);
+
 CREATE TABLE IF NOT EXISTS rooms(
     room_id TEXT PRIMARY KEY NOT NULL,
     is_public INTEGER,
     creator TEXT
 );
 
-CREATE TABLE IF NOT EXISTS room_join_rules(
-    event_id TEXT NOT NULL,
-    room_id TEXT NOT NULL,
-    join_rule TEXT NOT NULL
-);
-CREATE INDEX IF NOT EXISTS room_join_rules_event_id ON room_join_rules(event_id);
-CREATE INDEX IF NOT EXISTS room_join_rules_room_id ON room_join_rules(room_id);
-
-
-CREATE TABLE IF NOT EXISTS room_power_levels(
-    event_id TEXT NOT NULL,
-    room_id TEXT NOT NULL,
-    user_id TEXT NOT NULL,
-    level INTEGER NOT NULL
-);
-CREATE INDEX IF NOT EXISTS room_power_levels_event_id ON room_power_levels(event_id);
-CREATE INDEX IF NOT EXISTS room_power_levels_room_id ON room_power_levels(room_id);
-CREATE INDEX IF NOT EXISTS room_power_levels_room_user ON room_power_levels(room_id, user_id);
-
-
-CREATE TABLE IF NOT EXISTS room_default_levels(
-    event_id TEXT NOT NULL,
-    room_id TEXT NOT NULL,
-    level INTEGER NOT NULL
-);
-
-CREATE INDEX IF NOT EXISTS room_default_levels_event_id ON room_default_levels(event_id);
-CREATE INDEX IF NOT EXISTS room_default_levels_room_id ON room_default_levels(room_id);
-
-
-CREATE TABLE IF NOT EXISTS room_add_state_levels(
-    event_id TEXT NOT NULL,
-    room_id TEXT NOT NULL,
-    level INTEGER NOT NULL
-);
-
-CREATE INDEX IF NOT EXISTS room_add_state_levels_event_id ON room_add_state_levels(event_id);
-CREATE INDEX IF NOT EXISTS room_add_state_levels_room_id ON room_add_state_levels(room_id);
-
-
-CREATE TABLE IF NOT EXISTS room_send_event_levels(
-    event_id TEXT NOT NULL,
-    room_id TEXT NOT NULL,
-    level INTEGER NOT NULL
-);
-
-CREATE INDEX IF NOT EXISTS room_send_event_levels_event_id ON room_send_event_levels(event_id);
-CREATE INDEX IF NOT EXISTS room_send_event_levels_room_id ON room_send_event_levels(room_id);
-
-
-CREATE TABLE IF NOT EXISTS room_ops_levels(
-    event_id TEXT NOT NULL,
-    room_id TEXT NOT NULL,
-    ban_level INTEGER,
-    kick_level INTEGER,
-    redact_level INTEGER
-);
-
-CREATE INDEX IF NOT EXISTS room_ops_levels_event_id ON room_ops_levels(event_id);
-CREATE INDEX IF NOT EXISTS room_ops_levels_room_id ON room_ops_levels(room_id);
-
-
 CREATE TABLE IF NOT EXISTS room_hosts(
     room_id TEXT NOT NULL,
     host TEXT NOT NULL,
@@ -1,106 +0,0 @@
-/* Copyright 2014 OpenMarket Ltd
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--- Stores pdus and their content
-CREATE TABLE IF NOT EXISTS pdus(
-    pdu_id TEXT,
-    origin TEXT,
-    context TEXT,
-    pdu_type TEXT,
-    ts INTEGER,
-    depth INTEGER DEFAULT 0 NOT NULL,
-    is_state BOOL,
-    content_json TEXT,
-    unrecognized_keys TEXT,
-    outlier BOOL NOT NULL,
-    have_processed BOOL,
-    CONSTRAINT pdu_id_origin UNIQUE (pdu_id, origin)
-);
-
--- Stores what the current state pdu is for a given (context, pdu_type, key) tuple
-CREATE TABLE IF NOT EXISTS state_pdus(
-    pdu_id TEXT,
-    origin TEXT,
-    context TEXT,
-    pdu_type TEXT,
-    state_key TEXT,
-    power_level TEXT,
-    prev_state_id TEXT,
-    prev_state_origin TEXT,
-    CONSTRAINT pdu_id_origin UNIQUE (pdu_id, origin)
-    CONSTRAINT prev_pdu_id_origin UNIQUE (prev_state_id, prev_state_origin)
-);
-
-CREATE TABLE IF NOT EXISTS current_state(
-    pdu_id TEXT,
-    origin TEXT,
-    context TEXT,
-    pdu_type TEXT,
-    state_key TEXT,
-    CONSTRAINT pdu_id_origin UNIQUE (pdu_id, origin)
-    CONSTRAINT uniqueness UNIQUE (context, pdu_type, state_key) ON CONFLICT REPLACE
-);
-
--- Stores where each pdu we want to send should be sent and the delivery status.
-create TABLE IF NOT EXISTS pdu_destinations(
-    pdu_id TEXT,
-    origin TEXT,
-    destination TEXT,
-    delivered_ts INTEGER DEFAULT 0, -- or 0 if not delivered
-    CONSTRAINT uniqueness UNIQUE (pdu_id, origin, destination) ON CONFLICT REPLACE
-);
-
-CREATE TABLE IF NOT EXISTS pdu_forward_extremities(
-    pdu_id TEXT,
-    origin TEXT,
-    context TEXT,
-    CONSTRAINT uniqueness UNIQUE (pdu_id, origin, context) ON CONFLICT REPLACE
-);
-
-CREATE TABLE IF NOT EXISTS pdu_backward_extremities(
-    pdu_id TEXT,
-    origin TEXT,
-    context TEXT,
-    CONSTRAINT uniqueness UNIQUE (pdu_id, origin, context) ON CONFLICT REPLACE
-);
-
-CREATE TABLE IF NOT EXISTS pdu_edges(
-    pdu_id TEXT,
-    origin TEXT,
-    prev_pdu_id TEXT,
-    prev_origin TEXT,
-    context TEXT,
-    CONSTRAINT uniqueness UNIQUE (pdu_id, origin, prev_pdu_id, prev_origin, context)
-);
-
-CREATE TABLE IF NOT EXISTS context_depth(
-    context TEXT,
-    min_depth INTEGER,
-    CONSTRAINT uniqueness UNIQUE (context)
-);
-
-CREATE INDEX IF NOT EXISTS context_depth_context ON context_depth(context);
-
-
-CREATE INDEX IF NOT EXISTS pdu_id ON pdus(pdu_id, origin);
-
-CREATE INDEX IF NOT EXISTS dests_id ON pdu_destinations (pdu_id, origin);
--- CREATE INDEX IF NOT EXISTS dests ON pdu_destinations (destination);
-
-CREATE INDEX IF NOT EXISTS pdu_extrem_context ON pdu_forward_extremities(context);
-CREATE INDEX IF NOT EXISTS pdu_extrem_id ON pdu_forward_extremities(pdu_id, origin);
-
-CREATE INDEX IF NOT EXISTS pdu_edges_id ON pdu_edges(pdu_id, origin);
-
-CREATE INDEX IF NOT EXISTS pdu_b_extrem_context ON pdu_backward_extremities(context);
synapse/storage/schema/state.sql (new file, 46 lines)
@@ -0,0 +1,46 @@
+/* Copyright 2014 OpenMarket Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+CREATE TABLE IF NOT EXISTS state_groups(
+    id INTEGER PRIMARY KEY,
+    room_id TEXT NOT NULL,
+    event_id TEXT NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS state_groups_state(
+    state_group INTEGER NOT NULL,
+    room_id TEXT NOT NULL,
+    type TEXT NOT NULL,
+    state_key TEXT NOT NULL,
+    event_id TEXT NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS event_to_state_groups(
+    event_id TEXT NOT NULL,
+    state_group INTEGER NOT NULL
+);
+
+CREATE INDEX IF NOT EXISTS state_groups_id ON state_groups(id);
+
+CREATE INDEX IF NOT EXISTS state_groups_state_id ON state_groups_state(
+    state_group
+);
+CREATE INDEX IF NOT EXISTS state_groups_state_tuple ON state_groups_state(
+    room_id, type, state_key
+);
+
+CREATE INDEX IF NOT EXISTS event_to_state_groups_id ON event_to_state_groups(
+    event_id
+);
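The indirection these tables introduce can be resolved with a single join. A hypothetical helper against this schema (editor's sketch, not part of the commit):

    import sqlite3

    def state_for_event(conn, event_id):
        """Return (type, state_key, event_id) rows for the state at an event."""
        return conn.execute(
            "SELECT s.type, s.state_key, s.event_id"
            " FROM event_to_state_groups AS m"
            " INNER JOIN state_groups_state AS s ON s.state_group = m.state_group"
            " WHERE m.event_id = ?",
            (event_id,),
        ).fetchall()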
|
183
synapse/storage/signatures.py
Normal file
183
synapse/storage/signatures.py
Normal file
|
@ -0,0 +1,183 @@
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
# Copyright 2014 OpenMarket Ltd
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
|
||||||
|
from _base import SQLBaseStore
|
||||||
|
|
||||||
|
|
||||||
|
class SignatureStore(SQLBaseStore):
|
||||||
|
"""Persistence for event signatures and hashes"""
|
||||||
|
|
||||||
|
def _get_event_content_hashes_txn(self, txn, event_id):
|
||||||
|
"""Get all the hashes for a given Event.
|
||||||
|
Args:
|
||||||
|
txn (cursor):
|
||||||
|
event_id (str): Id for the Event.
|
||||||
|
Returns:
|
||||||
|
A dict of algorithm -> hash.
|
||||||
|
"""
|
||||||
|
query = (
|
||||||
|
"SELECT algorithm, hash"
|
||||||
|
" FROM event_content_hashes"
|
||||||
|
" WHERE event_id = ?"
|
||||||
|
)
|
||||||
|
txn.execute(query, (event_id, ))
|
||||||
|
return dict(txn.fetchall())
|
||||||
|
|
||||||
|
def _store_event_content_hash_txn(self, txn, event_id, algorithm,
|
||||||
|
hash_bytes):
|
||||||
|
"""Store a hash for a Event
|
||||||
|
Args:
|
||||||
|
txn (cursor):
|
||||||
|
event_id (str): Id for the Event.
|
||||||
|
algorithm (str): Hashing algorithm.
|
||||||
|
hash_bytes (bytes): Hash function output bytes.
|
||||||
|
"""
|
||||||
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
"event_content_hashes",
|
||||||
|
{
|
||||||
|
"event_id": event_id,
|
||||||
|
"algorithm": algorithm,
|
||||||
|
"hash": buffer(hash_bytes),
|
||||||
|
},
|
||||||
|
or_ignore=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
def get_event_reference_hashes(self, event_ids):
|
||||||
|
def f(txn):
|
||||||
|
return [
|
||||||
|
self._get_event_reference_hashes_txn(txn, ev)
|
||||||
|
for ev in event_ids
|
||||||
|
]
|
||||||
|
|
||||||
|
return self.runInteraction(
|
||||||
|
"get_event_reference_hashes",
|
||||||
|
f
|
||||||
|
)
|
||||||
|
|
||||||
|
def _get_event_reference_hashes_txn(self, txn, event_id):
|
||||||
|
"""Get all the hashes for a given PDU.
|
||||||
|
Args:
|
||||||
|
txn (cursor):
|
||||||
|
event_id (str): Id for the Event.
|
||||||
|
Returns:
|
||||||
|
A dict of algorithm -> hash.
|
||||||
|
"""
|
||||||
|
query = (
|
||||||
|
"SELECT algorithm, hash"
|
||||||
|
" FROM event_reference_hashes"
|
||||||
|
" WHERE event_id = ?"
|
||||||
|
)
|
||||||
|
txn.execute(query, (event_id, ))
|
||||||
|
return dict(txn.fetchall())
|
||||||
|
|
||||||
|
def _store_event_reference_hash_txn(self, txn, event_id, algorithm,
|
||||||
|
hash_bytes):
|
||||||
|
"""Store a hash for a PDU
|
||||||
|
Args:
|
||||||
|
txn (cursor):
|
||||||
|
event_id (str): Id for the Event.
|
||||||
|
algorithm (str): Hashing algorithm.
|
||||||
|
hash_bytes (bytes): Hash function output bytes.
|
||||||
|
"""
|
||||||
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
"event_reference_hashes",
|
||||||
|
{
|
||||||
|
"event_id": event_id,
|
||||||
|
"algorithm": algorithm,
|
||||||
|
"hash": buffer(hash_bytes),
|
||||||
|
},
|
||||||
|
or_ignore=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
def _get_event_signatures_txn(self, txn, event_id):
|
||||||
|
"""Get all the signatures for a given PDU.
|
||||||
|
Args:
|
||||||
|
txn (cursor):
|
||||||
|
event_id (str): Id for the Event.
|
||||||
|
Returns:
|
||||||
|
A dict of sig name -> dict(key_id -> signature_bytes)
|
||||||
|
"""
|
||||||
|
query = (
|
||||||
|
"SELECT signature_name, key_id, signature"
|
||||||
|
" FROM event_signatures"
|
||||||
|
" WHERE event_id = ? "
|
||||||
|
)
|
||||||
|
txn.execute(query, (event_id, ))
|
||||||
|
rows = txn.fetchall()
|
||||||
|
|
||||||
|
res = {}
|
||||||
|
|
||||||
|
for name, key, sig in rows:
|
||||||
|
res.setdefault(name, {})[key] = sig
|
||||||
|
|
||||||
|
return res
|
||||||
|
|
||||||
|
def _store_event_signature_txn(self, txn, event_id, signature_name, key_id,
|
||||||
|
signature_bytes):
|
||||||
|
"""Store a signature from the origin server for a PDU.
|
||||||
|
Args:
|
||||||
|
txn (cursor):
|
||||||
|
event_id (str): Id for the Event.
|
||||||
|
origin (str): origin of the Event.
|
||||||
|
key_id (str): Id for the signing key.
|
||||||
|
signature (bytes): The signature.
|
||||||
|
"""
|
||||||
|
self._simple_insert_txn(
|
||||||
|
txn,
|
||||||
|
"event_signatures",
|
||||||
|
{
|
||||||
|
"event_id": event_id,
|
||||||
|
"signature_name": signature_name,
|
||||||
|
"key_id": key_id,
|
||||||
|
"signature": buffer(signature_bytes),
|
||||||
|
},
|
||||||
|
or_ignore=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
    def _get_prev_event_hashes_txn(self, txn, event_id):
        """Get all the hashes for previous PDUs of a PDU
        Args:
            txn (cursor):
            event_id (str): Id for the Event.
        Returns:
            dict of prev_event_id -> dict of algorithm -> hash_bytes.
        """
        query = (
            "SELECT prev_event_id, algorithm, hash"
            " FROM event_edge_hashes"
            " WHERE event_id = ?"
        )
        txn.execute(query, (event_id, ))
        results = {}
        for prev_event_id, algorithm, hash_bytes in txn.fetchall():
            hashes = results.setdefault(prev_event_id, {})
            hashes[algorithm] = hash_bytes
        return results
    def _store_prev_event_hash_txn(self, txn, event_id, prev_event_id,
                                   algorithm, hash_bytes):
        self._simple_insert_txn(
            txn,
            "event_edge_hashes",
            {
                "event_id": event_id,
                "prev_event_id": prev_event_id,
                "algorithm": algorithm,
                "hash": buffer(hash_bytes),
            },
            or_ignore=True,
        )
synapse/storage/state.py (new file, 127 lines)

@@ -0,0 +1,127 @@
# -*- coding: utf-8 -*-
# Copyright 2014 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ._base import SQLBaseStore


class StateStore(SQLBaseStore):
    """ Keeps track of the state at a given event.

    This is done by the concept of `state groups`. Every event is assigned
    a state group (identified by an arbitrary string), which references a
    collection of state events. The current state of an event is then the
    collection of state events referenced by the event's state group.

    Hence, every change in the current state causes a new state group to be
    generated. However, if no change happens (e.g., if we get a message event
    with only one parent), it inherits the state group from its parent.

    There are three tables:
    * `state_groups`: Stores group name, first event within the group and
      room id.
    * `event_to_state_groups`: Maps events to state groups.
    * `state_groups_state`: Maps state group to state events.
    """
    def get_state_groups(self, event_ids):
        """ Get the state groups for the given list of event_ids

        The return value is a dict mapping group names to lists of events.
        """

        def f(txn):
            groups = set()
            for event_id in event_ids:
                group = self._simple_select_one_onecol_txn(
                    txn,
                    table="event_to_state_groups",
                    keyvalues={"event_id": event_id},
                    retcol="state_group",
                    allow_none=True,
                )
                if group:
                    groups.add(group)

            res = {}
            for group in groups:
                state_ids = self._simple_select_onecol_txn(
                    txn,
                    table="state_groups_state",
                    keyvalues={"state_group": group},
                    retcol="event_id",
                )
                state = []
                for state_id in state_ids:
                    s = self._get_events_txn(
                        txn,
                        [state_id],
                    )
                    if s:
                        state.extend(s)

                res[group] = state

            return res

        return self.runInteraction(
            "get_state_groups",
            f,
        )
    def store_state_groups(self, event):
        return self.runInteraction(
            "store_state_groups",
            self._store_state_groups_txn, event
        )

    def _store_state_groups_txn(self, txn, event):
        if not event.state_events:
            return

        state_group = event.state_group
        if not state_group:
            state_group = self._simple_insert_txn(
                txn,
                table="state_groups",
                values={
                    "room_id": event.room_id,
                    "event_id": event.event_id,
                },
                or_ignore=True,
            )

            for state in event.state_events.values():
                self._simple_insert_txn(
                    txn,
                    table="state_groups_state",
                    values={
                        "state_group": state_group,
                        "room_id": state.room_id,
                        "type": state.type,
                        "state_key": state.state_key,
                        "event_id": state.event_id,
                    },
                    or_ignore=True,
                )

        self._simple_insert_txn(
            txn,
            table="event_to_state_groups",
            values={
                "state_group": state_group,
                "event_id": event.event_id,
            },
            or_replace=True,
        )
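
Taken together, store_state_groups and get_state_groups form a small read/write API. A minimal usage sketch, assuming ``store`` is a StateStore and ``event`` carries the state_events/state_group attributes used above::

    from twisted.internet import defer

    @defer.inlineCallbacks
    def snapshot_state(store, event):
        # Persist the event's state group (creating one if the event
        # changed the state), then read back the full state for it.
        yield store.store_state_groups(event)
        groups = yield store.get_state_groups([event.event_id])
        # groups maps state_group -> list of state events.
        defer.returnValue(groups)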
@@ -177,10 +177,9 @@ class StreamStore(SQLBaseStore):
 
         sql = (
             "SELECT *, (%(redacted)s) AS redacted FROM events AS e WHERE "
-            "((room_id IN (%(current)s)) OR "
+            "(e.outlier = 0 AND (room_id IN (%(current)s)) OR "
             "(event_id IN (%(invites)s))) "
             "AND e.stream_ordering > ? AND e.stream_ordering <= ? "
-            "AND e.outlier = 0 "
             "ORDER BY stream_ordering ASC LIMIT %(limit)d "
         ) % {
             "redacted": del_sql,
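
To make the regrouping easier to read, here is the WHERE clause before and after, transliterated into Python boolean logic (a hedged paraphrase of the SQL above, not code from the diff)::

    def matches_before(in_current_room, invited, in_range, outlier):
        # Old grouping: the outlier filter also excluded invite events.
        return (in_current_room or invited) and in_range and not outlier

    def matches_after(in_current_room, invited, in_range, outlier):
        # New grouping: only the current-room branch excludes outliers,
        # so invite events are returned even when they are outliers.
        return ((not outlier and in_current_room) or invited) and in_range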
@@ -309,7 +308,10 @@ class StreamStore(SQLBaseStore):
         defer.returnValue(ret)
 
     def get_room_events_max_id(self):
-        return self.runInteraction(self._get_room_events_max_id_txn)
+        return self.runInteraction(
+            "get_room_events_max_id",
+            self._get_room_events_max_id_txn
+        )
 
     def _get_room_events_max_id_txn(self, txn):
         txn.execute(
@@ -14,7 +14,6 @@
 # limitations under the License.
 
 from ._base import SQLBaseStore, Table
-from .pdu import PdusTable
 
 from collections import namedtuple
 
@@ -42,6 +41,7 @@ class TransactionStore(SQLBaseStore):
         """
 
         return self.runInteraction(
+            "get_received_txn_response",
             self._get_received_txn_response, transaction_id, origin
         )
 
@@ -73,6 +73,7 @@ class TransactionStore(SQLBaseStore):
         """
 
         return self.runInteraction(
+            "set_received_txn_response",
             self._set_received_txn_response,
             transaction_id, origin, code, response_dict
         )
@@ -88,7 +89,7 @@ class TransactionStore(SQLBaseStore):
         txn.execute(query, (code, response_json, transaction_id, origin))
 
     def prep_send_transaction(self, transaction_id, destination,
-                              origin_server_ts, pdu_list):
+                              origin_server_ts):
         """Persists an outgoing transaction and calculates the values for the
         previous transaction id list.
 
@@ -99,19 +100,19 @@ class TransactionStore(SQLBaseStore):
             transaction_id (str)
             destination (str)
             origin_server_ts (int)
-            pdu_list (list)
 
         Returns:
             list: A list of previous transaction ids.
         """
 
         return self.runInteraction(
+            "prep_send_transaction",
             self._prep_send_transaction,
-            transaction_id, destination, origin_server_ts, pdu_list
+            transaction_id, destination, origin_server_ts
         )
 
     def _prep_send_transaction(self, txn, transaction_id, destination,
-                               origin_server_ts, pdu_list):
+                               origin_server_ts):
 
         # First we find out what the prev_txs should be.
         # Since we know that we are only sending one transaction at a time,
@@ -139,15 +140,15 @@ class TransactionStore(SQLBaseStore):
 
         # Update the tx id -> pdu id mapping
 
-        values = [
-            (transaction_id, destination, pdu[0], pdu[1])
-            for pdu in pdu_list
-        ]
-
-        logger.debug("Inserting: %s", repr(values))
-
-        query = TransactionsToPduTable.insert_statement()
-        txn.executemany(query, values)
+        # values = [
+        #     (transaction_id, destination, pdu[0], pdu[1])
+        #     for pdu in pdu_list
+        # ]
+        #
+        # logger.debug("Inserting: %s", repr(values))
+        #
+        # query = TransactionsToPduTable.insert_statement()
+        # txn.executemany(query, values)
 
         return prev_txns
 
@@ -161,6 +162,7 @@ class TransactionStore(SQLBaseStore):
             response_json (str)
         """
         return self.runInteraction(
+            "delivered_txn",
             self._delivered_txn,
             transaction_id, destination, code, response_dict
         )
@@ -186,6 +188,7 @@ class TransactionStore(SQLBaseStore):
             list: A list of `ReceivedTransactionsTable.EntryType`
         """
         return self.runInteraction(
+            "get_transactions_after",
             self._get_transactions_after, transaction_id, destination
         )
 
@@ -202,49 +205,6 @@ class TransactionStore(SQLBaseStore):
 
         return ReceivedTransactionsTable.decode_results(txn.fetchall())
 
-    def get_pdus_after_transaction(self, transaction_id, destination):
-        """For a given local transaction_id that we sent to a given destination
-        home server, return a list of PDUs that were sent to that destination
-        after it.
-
-        Args:
-            txn
-            transaction_id (str)
-            destination (str)
-
-        Returns
-            list: A list of PduTuple
-        """
-        return self.runInteraction(
-            self._get_pdus_after_transaction,
-            transaction_id, destination
-        )
-
-    def _get_pdus_after_transaction(self, txn, transaction_id, destination):
-
-        # Query that first get's all transaction_ids with an id greater than
-        # the one given from the `sent_transactions` table. Then JOIN on this
-        # from the `tx->pdu` table to get a list of (pdu_id, origin) that
-        # specify the pdus that were sent in those transactions.
-        query = (
-            "SELECT pdu_id, pdu_origin FROM %(tx_pdu)s as tp "
-            "INNER JOIN %(sent_tx)s as st "
-            "ON tp.transaction_id = st.transaction_id "
-            "AND tp.destination = st.destination "
-            "WHERE st.id > ("
-            "SELECT id FROM %(sent_tx)s "
-            "WHERE transaction_id = ? AND destination = ?"
-        ) % {
-            "tx_pdu": TransactionsToPduTable.table_name,
-            "sent_tx": SentTransactions.table_name,
-        }
-
-        txn.execute(query, (transaction_id, destination))
-
-        pdus = PdusTable.decode_results(txn.fetchall())
-
-        return self._get_pdu_tuples(txn, pdus)
-
-
 class ReceivedTransactionsTable(Table):
     table_name = "received_transactions"
@@ -78,6 +78,11 @@ class DomainSpecificString(
         """Create a structure on the local domain"""
         return cls(localpart=localpart, domain=hs.hostname, is_mine=True)
 
+    @classmethod
+    def create(cls, localpart, domain, hs):
+        is_mine = domain == hs.hostname
+        return cls(localpart=localpart, domain=domain, is_mine=is_mine)
+
 
 class UserID(DomainSpecificString):
     """Structure representing a user ID."""

@@ -94,6 +99,11 @@ class RoomID(DomainSpecificString):
     SIGIL = "!"
 
 
+class EventID(DomainSpecificString):
+    """Structure representing an event id. """
+    SIGIL = "$"
+
+
 class StreamToken(
     namedtuple(
         "Token",
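
A quick illustration of the new ``create`` helper and EventID type; the hostname and the stand-in homeserver object are assumptions for the example, not part of the diff::

    class FakeHomeserver(object):
        # Stand-in for the real homeserver; only .hostname is consulted.
        hostname = "example.com"

    hs = FakeHomeserver()

    alice = UserID.create("alice", "example.com", hs)    # is_mine == True
    bob = UserID.create("bob", "remote.org", hs)         # is_mine == False
    ev = EventID.create("someid", "example.com", hs)     # "$"-sigil ids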
@@ -16,8 +16,17 @@
 
 from twisted.internet import defer, reactor
 
+from .logcontext import PreserveLoggingContext
+
+
+@defer.inlineCallbacks
 def sleep(seconds):
     d = defer.Deferred()
     reactor.callLater(seconds, d.callback, seconds)
-    return d
+    with PreserveLoggingContext():
+        yield d
+
+
+def run_on_reactor():
+    """ This will cause the rest of the function to be invoked upon the next
+    iteration of the main loop
+    """
+    return sleep(0)
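
A hedged usage sketch of run_on_reactor(); the handler and its body are invented for illustration::

    from twisted.internet import defer

    @defer.inlineCallbacks
    def notify_listeners(listeners, event):
        # Yield control back to the reactor first, so the caller's
        # callback chain returns promptly before the fan-out happens.
        yield run_on_reactor()
        for listener in listeners:
            listener(event)  # hypothetical per-listener callback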
|
@ -80,7 +80,7 @@ class JsonEncodedObject(object):
|
||||||
|
|
||||||
def get_full_dict(self):
|
def get_full_dict(self):
|
||||||
d = {
|
d = {
|
||||||
k: v for (k, v) in self.__dict__.items()
|
k: _encode(v) for (k, v) in self.__dict__.items()
|
||||||
if k in self.valid_keys or k in self.internal_keys
|
if k in self.valid_keys or k in self.internal_keys
|
||||||
}
|
}
|
||||||
d.update(self.unrecognized_keys)
|
d.update(self.unrecognized_keys)
|
||||||
|
|
synapse/util/logcontext.py (new file, 108 lines)

@@ -0,0 +1,108 @@
import threading
import logging


class LoggingContext(object):
    """Additional context for log formatting. Contexts are scoped within a
    "with" block. Contexts inherit the state of their parent contexts.
    Args:
        name (str): Name for the context for debugging.
    """

    __slots__ = ["parent_context", "name", "__dict__"]

    thread_local = threading.local()

    class Sentinel(object):
        """Sentinel to represent the root context"""

        __slots__ = []

        def copy_to(self, record):
            pass

    sentinel = Sentinel()

    def __init__(self, name=None):
        self.parent_context = None
        self.name = name

    def __str__(self):
        return "%s@%x" % (self.name, id(self))

    @classmethod
    def current_context(cls):
        """Get the current logging context from thread local storage"""
        return getattr(cls.thread_local, "current_context", cls.sentinel)

    def __enter__(self):
        """Enters this logging context into thread local storage"""
        if self.parent_context is not None:
            raise Exception("Attempt to enter logging context multiple times")
        self.parent_context = self.current_context()
        self.thread_local.current_context = self
        return self

    def __exit__(self, type, value, traceback):
        """Restore the logging context in thread local storage to the state it
        was before this context was entered.
        Returns:
            None to avoid suppressing any exceptions that were thrown.
        """
        if self.thread_local.current_context is not self:
            logging.error(
                "Current logging context %s is not the expected context %s",
                self.thread_local.current_context,
                self
            )
        self.thread_local.current_context = self.parent_context
        self.parent_context = None

    def __getattr__(self, name):
        """Delegate member lookup to parent context"""
        return getattr(self.parent_context, name)

    def copy_to(self, record):
        """Copy fields from this context and its parents to the record"""
        if self.parent_context is not None:
            self.parent_context.copy_to(record)
        for key, value in self.__dict__.items():
            setattr(record, key, value)


class LoggingContextFilter(logging.Filter):
    """Logging filter that adds values from the current logging context to each
    record.
    Args:
        **defaults: Default values to avoid formatters complaining about
            missing fields
    """
    def __init__(self, **defaults):
        self.defaults = defaults

    def filter(self, record):
        """Add each field from the logging contexts to the record.
        Returns:
            True to include the record in the log output.
        """
        context = LoggingContext.current_context()
        for key, value in self.defaults.items():
            setattr(record, key, value)
        context.copy_to(record)
        return True


class PreserveLoggingContext(object):
    """Captures the current logging context and restores it when the scope is
    exited. Used to restore the context after a function using
    @defer.inlineCallbacks is resumed by a callback from the reactor."""

    __slots__ = ["current_context"]

    def __enter__(self):
        """Captures the current logging context"""
        self.current_context = LoggingContext.current_context()

    def __exit__(self, type, value, traceback):
        """Restores the current logging context"""
        LoggingContext.thread_local.current_context = self.current_context
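
A brief usage sketch of the machinery above; the logger name and the ``request`` field are illustrative assumptions::

    import logging

    handler = logging.StreamHandler()
    handler.addFilter(LoggingContextFilter(request="unknown"))

    logger = logging.getLogger("synapse.example")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    with LoggingContext("my_request") as context:
        context.request = "GET /events"   # stored on the context
        logger.info("handling request")   # the record now carries .request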
@@ -75,6 +75,7 @@ def trace_function(f):
     linenum = f.func_code.co_firstlineno
     pathname = f.func_code.co_filename
 
+    @wraps(f)
     def wrapped(*args, **kwargs):
         name = f.__module__
         logger = logging.getLogger(name)
synctl (deleted file, 35 lines)

@@ -1,35 +0,0 @@
#!/bin/bash

SYNAPSE="python -m synapse.app.homeserver"

CONFIGFILE="homeserver.yaml"
PIDFILE="homeserver.pid"

GREEN=$'\e[1;32m'
NORMAL=$'\e[m'

set -e

case "$1" in
  start)
    if [ ! -f "$CONFIGFILE" ]; then
      echo "No config file found"
      echo "To generate a config file, run '$SYNAPSE -c $CONFIGFILE --generate-config --server-name=<server name>'"
      exit 1
    fi

    echo -n "Starting ..."
    $SYNAPSE --daemonize -c "$CONFIGFILE" --pid-file "$PIDFILE"
    echo "${GREEN}started${NORMAL}"
    ;;
  stop)
    echo -n "Stopping ..."
    test -f $PIDFILE && kill `cat $PIDFILE` && echo "${GREEN}stopped${NORMAL}"
    ;;
  restart)
    $0 stop && $0 start
    ;;
  *)
    echo "Usage: $0 [start|stop|restart]" >&2
    exit 1
esac
synctl (new symbolic link)

@@ -0,0 +1 @@
+./synapse/app/synctl.py
@@ -1,46 +0,0 @@ (deleted file)
Captcha can be enabled for this web client / home server. This file explains how to do that.
The captcha mechanism used is Google's ReCaptcha. This requires API keys from Google.

Getting keys
------------
Requires a public/private key pair from:

https://developers.google.com/recaptcha/


Setting Private ReCaptcha Key
-----------------------------
The private key is a config option on the home server config. If it is not
visible, you can generate it via --generate-config. Set the following value:

  recaptcha_private_key: YOUR_PRIVATE_KEY

In addition, you MUST enable captchas via:

  enable_registration_captcha: true

Setting Public ReCaptcha Key
----------------------------
The web client will look for the global variable webClientConfig for config
options. You should put your ReCaptcha public key there like so:

webClientConfig = {
    useCaptcha: true,
    recaptcha_public_key: "YOUR_PUBLIC_KEY"
}

This should be put in webclient/config.js which is already .gitignored, rather
than in the web client source files. You MUST set useCaptcha to true else a
ReCaptcha widget will not be generated.

Configuring IP used for auth
----------------------------
The ReCaptcha API requires that the IP address of the user who solved the
captcha is sent. If the client is connecting through a proxy or load balancer,
it may be required to use the X-Forwarded-For (XFF) header instead of the origin
IP address. This can be configured as an option on the home server like so:

captcha_ip_origin_is_x_forwarded: true
Some files were not shown because too many files have changed in this diff.