We append the salt (just like the IV) to the ciphertext of the private
key. This means we also have to split it off properly.
This breaks compatibility with currently stored keys on your server.
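A minimal sketch of the split, assuming the IV and salt are appended with
fixed, known lengths (sizes and names here are illustrative, not the actual
implementation):

    #include <QByteArray>

    // Illustrative sizes only; the real key material may use other lengths.
    static const int ivLength = 16;
    static const int saltLength = 40;

    struct PrivateKeyBlob {
        QByteArray cipherText;
        QByteArray iv;
        QByteArray salt;
    };

    // Split "cipherText + iv + salt" back into its three parts.
    PrivateKeyBlob splitPrivateKeyBlob(const QByteArray &blob)
    {
        PrivateKeyBlob result;
        result.salt = blob.right(saltLength);
        const QByteArray rest = blob.left(blob.size() - saltLength);
        result.iv = rest.right(ivLength);
        result.cipherText = rest.left(rest.size() - ivLength);
        return result;
    }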
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Those functions had no use anymore since we store the key and cert in
the keychain. Removed them so we don't use them by accident.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
If the server has the 'uploadConflictFiles' capability, conflict
files will be uploaded instead of ignored.
Uploaded conflict files have the following headers set during upload
OC-Conflict: 1
OC-ConflictBaseFileId: 172489174instanceid
OC-ConflictBaseMtime: 1235789213
OC-ConflictBaseEtag: myetag
when the data is available. Downloads accept the same headers in return
when downloading a conflict file.
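As a rough sketch (not the actual propagator code), setting these headers on
the upload request could look like this:

    #include <QByteArray>
    #include <QNetworkRequest>
    #include <QUrl>

    // Illustrative only: the base file's id, mtime and etag would come from
    // the sync journal entry the conflict was created from.
    QNetworkRequest makeConflictUploadRequest(const QUrl &url,
                                              const QByteArray &baseFileId,
                                              qint64 baseMtime,
                                              const QByteArray &baseEtag)
    {
        QNetworkRequest request(url);
        request.setRawHeader("OC-Conflict", "1");
        if (!baseFileId.isEmpty())
            request.setRawHeader("OC-ConflictBaseFileId", baseFileId);
        if (baseMtime > 0)
            request.setRawHeader("OC-ConflictBaseMtime", QByteArray::number(baseMtime));
        if (!baseEtag.isEmpty())
            request.setRawHeader("OC-ConflictBaseEtag", baseEtag);
        return request;
    }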
In the absence of server support clients will identify conflict files
through the file name pattern and attempt to deduce the base fileid.
Base etag and mtime can't be deduced though.
The upload job for a new conflict file is now triggered directly from
the job that created the conflict file. A second sync run is no
longer necessary.
This commit does not yet introduce a 'username' like identifier that
automatically gets added to conflict file filenames (to name the files
foo_conflict-Fred-1345.txt instead of just foo_conflict-1345.txt).
Previously, there was csync_ftw_type_e and SyncFileItem::Type. Having
two enums led to a bug where Type::Unknown == Type::File that went
unnoticed for a good while.
This patch keeps only a single enum.
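A sketch of the kind of mismatch two parallel enums invite (values are made
up, not the real definitions):

    // Illustrative only.
    enum csync_ftw_type_e {
        CSYNC_FTW_TYPE_FILE = 0,
        CSYNC_FTW_TYPE_DIR = 1,
    };

    struct SyncFileItem {
        enum Type {
            Unknown = 0, // same underlying value as CSYNC_FTW_TYPE_FILE
            File = 1,
            Directory = 2,
        };
    };

    // A cast between the two silently maps regular files to Unknown;
    // with a single shared enum this category of bug cannot happen.
    inline SyncFileItem::Type fromCsyncType(csync_ftw_type_e t)
    {
        return static_cast<SyncFileItem::Type>(t);
    }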
If the application binary is not installed in /usr/bin, the client
with this patch also checks the relative location
../../etc/owncloud-client/ to find the system exclude file.
This is an important bit for AppImage based packages of the client,
as this runs from a temporary mountpoint and the system file cannot
be found under /etc.
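A sketch of the lookup (the exclude file name and path handling are
illustrative):

    #include <QCoreApplication>
    #include <QDir>
    #include <QFileInfo>
    #include <QString>

    // Illustrative: find the system exclude list, also checking relative to
    // the binary for AppImage-style installations.
    QString findSystemExcludeFile()
    {
        const QString standard =
            QStringLiteral("/etc/owncloud-client/sync-exclude.lst");
        if (QFileInfo::exists(standard))
            return standard;

        const QString relative = QCoreApplication::applicationDirPath()
            + QStringLiteral("/../../etc/owncloud-client/sync-exclude.lst");
        if (QFileInfo::exists(relative))
            return QDir::cleanPath(relative);

        return QString();
    }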
This can happen if the upload of a file is finished, but we got
disconnected right before receiving the reply containing the etag.
So nothing was saved in the DB, and we are not sure whether the server
received the file properly or not. A further local update of the file
will then cause a conflict.
In order to fix this, store the checksum of the uploading file in
the uploadinfo table of the local db (even if there is no chunking
involved). And when we have a conflict, check that it is not because
of this situation by checking the entry in the uploadinfo table.
Issue #5106
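A rough sketch of the check (the uploadinfo access and field names are
illustrative, not the actual SyncJournalDb API):

    #include <QByteArray>

    // Illustrative: what the uploadinfo table would hand back for a path.
    struct UploadInfo {
        bool valid = false;
        QByteArray contentChecksum; // stored when the upload was started
    };

    // If the stored checksum still matches the local file, the server most
    // likely received the upload and only the reply was lost: treat it as
    // "not a real conflict" and just update the metadata.
    bool isSpuriousConflict(const UploadInfo &info, const QByteArray &localChecksum)
    {
        return info.valid
            && !info.contentChecksum.isEmpty()
            && info.contentChecksum == localChecksum;
    }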
The upload is made in an event loop with more than one
upload at the same time, which confuses the hell out of the
folder locking mechanism.
We need to lock the folder and ask the other attempts to retry
in a few seconds, to give the uploader time to actually upload
the current file that's holding the lock.
Also use appName instead of appNameGui in order to compute the path
Issue: #2245
The reason is to respect the XDG spec on Unix (#1601) and might help
on windows roaming profiles (#684)
Make ExcludedFiles something that is instantiated outside of
the CSYNC context and then given to it as a hook.
ExcludedFiles still lives in csync_exclude and the internal
workings haven't been touched.
For duplicate file ids the update phase and reconcile phase determined
the rename mappings independently. If they disagreed (due to different
order of processing), complicated misbehavior would result.
This patch fixes it by letting reconcile try to use the mapping that the
update phase has computed first.
Add a new member for the UploadFileInfo in PropagateUploadCommon
to hold the full file path - as it can change if we use a temporary
file to upload.
Adapt propagateuploadv1 to use the new calls.
They can be conceptually equal - I can upload the file
on disk, and that's what I do right now. But if we want
to accept filters in the future, filters that change
the file on disk like shrinking an image, the current
information used is wrong and we need a way to separate those.
This patch introduces a new struct that holds the *actual*
file that will be uploaded, be it a temporary one or
the original file.
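A minimal sketch of such a struct (member names are illustrative):

    #include <QString>

    // Illustrative: keeps "the logical file in the sync folder" separate
    // from "the bytes actually sent", which may be a filtered temporary copy.
    struct UploadFileInfo {
        QString _file;    // path of the original file inside the sync folder
        QString _path;    // absolute path of the file that really gets uploaded
        qint64 _size = 0; // size of the file at _path
    };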
* Store privatekey, certificate and mnemonic in keychain
* Retrieve private + public key from server
- ask for mnemonic to decrypt private key
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
* Check for cert + privateKey in keychain
* Work with QSslKey and QSslCertificate
* Abstract reading the BIO's a bit more
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Also, do not create variables on the heap just to change their values
via reference; prefer returning an aggregate value. Use a typedef to
fully specify what you want in return.
This means we cannot use QtGui in libsync.
So this mostly disables the avatar from the account and the AvatarJob.
Note that there is one logic change: in ConnectionValidator::slotUserFetched
we do the avatar job even if the user is empty. Otherwise we would end up in
an invalid state. This restores the 2.3.x behavior that was broken in
commit e05d6bfcdc
This is to move generic encryption methods out of the main code and into
small helper functions. So we don't scatter the encryption code all over
the place.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
This is mainly for Linux, where the locale is not always UTF-8.
For example, in latin1, it is not possible to encode emoji or Chinese characters.
If there are such characters in the filename, Qt would just save the file using
the replacement character ('?'). Then, on the next sync, the client would rename
the files using this replacement character.
Avoid this by ignoring the files which cannot be downloaded because the
filename cannot be represented in the user's locale.
Relates to issue #5676 and #5719
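A sketch of the check using Qt's locale codec (illustrative, not the actual
discovery code):

    #include <QString>
    #include <QTextCodec>

    // Illustrative: if the locale codec (e.g. latin1) cannot encode the
    // file name, downloading it would produce replacement characters, so
    // the file is better ignored and reported.
    bool isRepresentableInLocale(const QString &fileName)
    {
        QTextCodec *codec = QTextCodec::codecForLocale();
        if (!codec)
            return true;
        return codec->canEncode(fileName);
    }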
* Drop AvatarJob2
* Allow AvatarJob to retrieve different sizes and users
* Make creating a circular avatar into a function
(maybe all avatars should be made into that shape in the first place)
To do this conveniently a bunch of functionality that's common to
IssueWidget and ProtocolWidget is moved to ProtocolItem.
Also the convenience function to asynchronously retrieve the private
link url is moved from the socket api to the network jobs.
Previously we required matching mtimes but that's actually
unnecessary when the question is about whether to skip the
download. We will still update the file's metadata.
Also, adjust behavior when the checksum is weak (Adler32):
in these cases we still depend on equal mtimes.
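The resulting decision could be sketched like this (names are illustrative):

    #include <QByteArray>

    // Illustrative: can we skip downloading because local and remote
    // content already match?
    bool canSkipDownload(const QByteArray &checksumType,
                         const QByteArray &localChecksum,
                         const QByteArray &remoteChecksum,
                         qint64 localMtime, qint64 remoteMtime)
    {
        if (localChecksum.isEmpty() || localChecksum != remoteChecksum)
            return false;
        // A weak checksum (Adler32) is not trusted on its own: also
        // require equal modification times.
        if (checksumType == "Adler32")
            return localMtime == remoteMtime;
        // A strong hash is enough; the file's metadata is still updated.
        return true;
    }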
This restores 2.3 behavior. Some servers reply 404 to GETs and PROPFINDs
to the remote.php/webdav/ url and used to work. Being more picky would
break them.
This is important as a lot of the code would start
to rely on direct access to the client side encryption
and there are different keys for different accounts.
With some firewalls we can't GET /remote.php/webdav/. Here we keep the
GET request to detect shibboleth through the redirect pattern but then
use PROPFIND to figure out the http auth method.
Currently we prefer OAuth to Shibboleth to Basic auth.
This also restores the fallback behavior of assuming basic auth
when no auth type can be determined.
It appears that Qt's implementation of the DELETE http request
does not send bodyData, and we need that for Nextcloud.
For now I changed the http request on the server side
to accept a POST instead of a DELETE, so I can actually
develop.
Also, I already poked the Qt developers who wrote this code.
Also, commented out the finalization of the decrypt operation
because that was messing with the encryption. There's something
wrong here but I need to get this working and I can fix stuff
later.
* Do not use AAD
* Do not try to decrypt the last 16 bytes as Android adds the tag there
by default
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
* Do not use padding
* Do not use the AAD data
* Append the tag to the ciphertext to be compatible with Android
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
... even if the file is not changed.
We get an UPDATE_METADATA in that case, so make sure we let the
SyncFileStatusTracker know about it.
That means we need to filter out UPDATE_METADATA in the other listeners
of this signal.
Issue #6098
We mostly trust the file watchers meaning that we don't re-scan the
local tree if we have done that recently and no file watcher events
have arrived. If the file watchers invalidate a subtree, we rescan
only that subtree.
Since we're not entirely sure the file watchers are reliable, we still
do full local discoveries regularly (1h by default). There is a config
file setting as well as an environment variable to control the interval.
For some reason, this was working until I added a call
to X509_REQ_get_subject_name, then the linking suddenly
stopped working (even though I'm using a ton of other
OpenSSL calls)
Force linking against 1.0
This network job does a DELETE http request on a URL. It's the
second class that does basically the same, but this one returns
the http return code, and it's set up to do an API call.
If the server supports client side encryption, display
a menu on right click that should display encrypt and decrypt.
Ideally it would show encrypt if the folder is decrypted, and
decrypt if the folder is encrypted, but currently there's no way
for the client to know that.
On Mac, this halves the time spent in csync_excluded_traversal
when using check_csync_excluded_performance. A similar performance
increase is seen on linux.
This gets rid of the csync_statedb sqlite layer and uses
the same code and same connection as the rest of the SyncEngine.
Missing functions are added to SyncJournalDb and a few minor
things are changed (like changing SyncJournalFileRecord::_modtime to be an
int64 instead of a QDateTime, like it was in csync).
The current implementation would return the same value whether the query failed
or no row was found. This is something that is currently checked by csync
and needs to be provided if we want to use SyncJournalDB there.
Adjusted all call sites to also check the return value even though they
could still just rely on rec.isValid(), but this makes it more explicit as to
what happens on database errors in those cases, if we ever want to gracefully
handle them.
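Call sites now roughly look like this sketch (using the project's own types;
not the exact signature):

    #include <QString>

    // Illustrative: the lookup returns false only on a database error,
    // while "no row found" is a successful query with an invalid record.
    bool processFile(SyncJournalDb *journal, const QString &path)
    {
        SyncJournalFileRecord rec;
        if (!journal->getFileRecord(path, &rec))
            return false; // database error: don't treat the file as new

        if (rec.isValid()) {
            // the file is known to the journal, use rec
        } else {
            // genuinely not in the journal
        }
        return true;
    }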
We need to use concatPath to avoid possible double '/' in the URLs if the
account url() ends with '/'.
This has become even more of a problem since commit
d1b8370a4a which was resolving the url after
a redirect where most servers actually add a '/' if the url is a folder.
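A sketch of what such a helper does (the real Utility function may differ in
detail):

    #include <QString>

    // Illustrative: join two url/path fragments without ever producing "//"
    // when the first part already ends with '/'.
    QString concatPath(const QString &base, const QString &relative)
    {
        QString result = base;
        if (result.endsWith(QLatin1Char('/')))
            result.chop(1);
        if (!relative.startsWith(QLatin1Char('/')))
            result += QLatin1Char('/');
        return result + relative;
    }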
Create a specific type that parses the permissions so we can store
it in a short rather than in a QByteArray
Note: in RemotePermissions::toString, we make sure the string is not
empty by adding a space; this already existed before commit
e8f7adc7ca where it was removed by mistake.
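The idea, sketched: each permission letter becomes a bit in a small integer
(the letter-to-flag mapping and values are illustrative):

    #include <QByteArray>

    // Illustrative: parse the server permission string into bit flags that
    // fit into a short instead of storing the QByteArray itself.
    class RemotePermissions
    {
    public:
        enum Permission : unsigned short {
            CanWrite = 1 << 0,
            CanDelete = 1 << 1,
            CanRename = 1 << 2,
            CanMove = 1 << 3,
            CanAddFile = 1 << 4,
            CanAddSubDirectories = 1 << 5,
            CanReshare = 1 << 6,
            IsShared = 1 << 7,
            IsMounted = 1 << 8,
        };

        void setPermissions(const QByteArray &perms)
        {
            _value = 0;
            for (const char c : perms) {
                switch (c) {
                case 'W': _value |= CanWrite; break;
                case 'D': _value |= CanDelete; break;
                case 'N': _value |= CanRename; break;
                case 'V': _value |= CanMove; break;
                case 'C': _value |= CanAddFile; break;
                case 'K': _value |= CanAddSubDirectories; break;
                case 'R': _value |= CanReshare; break;
                case 'S': _value |= IsShared; break;
                case 'M': _value |= IsMounted; break;
                default: break;
                }
            }
        }

        bool hasPermission(Permission p) const { return _value & p; }

    private:
        unsigned short _value = 0;
    };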
Some slots were protected or private but needed to be public.
Some needed a static_cast (we can't use qOverload because it requires Qt 5.7).
This is only a partial change.
This is motivated by the fact that QMetaObject::normalizedSignature takes 7.35%
CPU of the LargeSyncBench. (Mostly from AbstractNetworkJob::setupConnections and
PropagateUploadFileV1::startNextChunk). It could be fixed by using normalized
signatures in the connection statements, but I thought it was a good opportunity
to modernize the code.
This commit only contains calls that were automatically converted with clazy.
Set OWNCLOUD_UPLOAD_CONFLICT_FILES=1 to trigger this behavior.
Note that this is experimental and unsupported. The real feature is
likely to end up in 2.5.
Uploading conflict files is simply done by removing the pattern from
csync_exclude. The rest here deals with making the conflict notification
ui approximately work.
There are still some concerns about where an uploaded conflict file
appears in the sync protocol and issues list (it should be in both, but
is only in one of them currently!).
See #4557.
* The sharing ui does a propfind anyway: use that to query the new
property as well!
* For the socket api, asynchronously query the server for the right url
when an action that needs it is triggered.
The old, manually generated URL will be used as fallback in case the
server doesn't support the new property or the property can't be
retrieved for some reason.
Depends on owncloud/core#29021
This removes the remaining "other" fields of the sync log to save a
bit of memory.
other_etag and other_fileId don't give much information to the users
and other_instruction will always be INST_NONE anyway.
other_modtime and other_size are kept since they are sometimes used.
They were renamed to have a bit more meaningful names.
SyncEngine::checkPermissions will now fetch its information from the
csync trees since they are now preserved until right after this point.
Fixes #3213
Now that csync is using a more convenient data structure for
its file trees, wait a little bit longer before destroying them and
fetch the remote permissions from the remote tree there instead.
Some filesystems, VMs or other limitations make using the WAL journal
mode impossible. We are notified of this problem through an sqlite
IOERR for SHMMAP. In that case we want to attempt to fall back to the
DELETE journal mode.
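A sketch of the journal-mode fallback, shown here with Qt's SQL module for
illustration (the client uses its own sqlite wrapper):

    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QVariant>

    // Illustrative: try WAL first, fall back to DELETE when the filesystem
    // can't support it (e.g. SQLITE_IOERR_SHMMAP on some network mounts).
    bool setupJournalMode(QSqlDatabase &db)
    {
        QSqlQuery pragma(db);
        if (pragma.exec("PRAGMA journal_mode=WAL;") && pragma.next()
            && pragma.value(0).toString().compare("wal", Qt::CaseInsensitive) == 0) {
            return true;
        }
        if (!pragma.exec("PRAGMA journal_mode=DELETE;"))
            return false;
        return pragma.next()
            && pragma.value(0).toString().compare("delete", Qt::CaseInsensitive) == 0;
    }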
The query args of POST requests become the request body. If there's a
redirect, the redirected url will therefore not contain the query
arguments. Use an explicit request body to make the redirection work.
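A sketch of the change with Qt (the argument shown is illustrative):

    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QUrl>
    #include <QUrlQuery>

    // Illustrative: send the arguments as an explicit body so a redirect
    // of the POST does not lose them.
    QNetworkReply *postWithBody(QNetworkAccessManager *nam, const QUrl &url)
    {
        QUrlQuery arguments;
        arguments.addQueryItem(QStringLiteral("format"), QStringLiteral("json"));

        QNetworkRequest request(url);
        request.setHeader(QNetworkRequest::ContentTypeHeader,
                          QStringLiteral("application/x-www-form-urlencoded"));
        return nam->post(request, arguments.toString(QUrl::FullyEncoded).toUtf8());
    }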
Also add logging of extended error codes for this IO error, maybe we can
become more specific about which situations should trigger a journal
mode switch.
This is the first time the account url may update outside of
account setup.
Summary of redirection handling:
1. During account setup (wizard)
- status.php gets permanently redirected -> adjust url
- authed PROPFIND gets *any* redirection -> adjust url
2. During connectivity ping (ConnectionValidator)
- status.php gets permanently redirected -> adjust url (new!)
All other redirections should be followed transparently and
don't update the account url in the settings.
Merge csync_create and csync_init into the constructor and
replace csync_destroy with the destructor.
Also use a QByteArray for csync_s::root_perms and flatten
csync_rename_s as a rename sub-struct of csync_s since it
can now handle C++ types.
Just expose csync_file_stat_t since we don't need an abstraction layer
anymore. Also pass the nodes of both trees directly to the visitor
function.
Issue #1817
Also move csync_normalize_etag to common/utility since we
don't need the char* function anymore.
Remove the single space file_stat->remotePerm codepath since
this won't be used in csync anymore since
8de3bda0b1.
Issue #1817
This is the first commit trying to unify csync_file_stat_s,
csync_vio_file_stat_s and csync_tree_walk_file_s. Use QByteArray
and unique_ptr already since I'm not used to tracking memory allocations
and this will make the transition easier.
Issue #1817
Now that csync builds as C++, this avoids having to implement
functionality needed by csync in csync itself.
This library is built as part of libocsync and its symbols are
exported through it.
This requires a relicense of Utility as LGPL. All classes moved into
this library from src/libsync will need to be relicensed as well.
This will allow us to unify data structures between csync and libsync.
Utility functions like csync_time and c_std are still compiled as C
since we won't need to be coupled with Qt in the short term.
By setting the icon in Desktop.ini of the root folder, this adds the icon
both when browsing the folder directly and to the sidebar shortcut.
To avoid overwriting any user setting that could exist in Desktop.ini,
only do this if the file doesn't exist. Editing .ini files on Windows
isn't trivial and isn't worth it given that this file won't exist most
of the time.
Fixes #2446
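A sketch of the idea (icon path and attribute handling are illustrative):

    #include <QDir>
    #include <QFile>
    #include <QString>
    #include <QTextStream>

    // Illustrative: write a Desktop.ini with the folder icon, but only when
    // none exists yet, so user settings are never overwritten.
    bool writeFolderIcon(const QString &folderPath, const QString &iconPath)
    {
        const QString iniPath = folderPath + QStringLiteral("/Desktop.ini");
        if (QFile::exists(iniPath))
            return false;

        QFile ini(iniPath);
        if (!ini.open(QIODevice::WriteOnly))
            return false;

        QTextStream out(&ini);
        out << "[.ShellClassInfo]\r\n"
            << "IconResource=" << QDir::toNativeSeparators(iconPath) << ",0\r\n";
        // Explorer also expects Desktop.ini to be hidden/system and the folder
        // to carry the read-only attribute before it honours the icon.
        return true;
    }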
Allow upgrade path when the server removes support for oauth
Relates: https://github.com/owncloud/client/issues/5848#issuecomment-317353049
We also need to force the account to commit the config to disk,
otherwise we may not register that we are no longer using OAuth and we
risk sending the password as the token to the token refresh API call
Before commit d3b00532b1,
fetchFromKeychain was called every time we detect that the creds are
invalid (in AccountState::slotInvalidCredentials)
But since that commit, AccountState was calling askFromUser directly,
breaking the refresh of the token.
So I made sure AccountState::slotInvalidCredentials still calls
refreshAccessToken.
Another change that was made was to be sure to clear the cookies
in HttpCredentials::invalidateToken even when we are only clearing the
access_token. That's because the session with a cookie may stay valid
longer than the access_token
We only want to know if they were touched within the last 15 seconds,
so change the data structure to use a QMultiMap, and sort them by
QElapsedTimer. This allows us to iterate over old entries ordered by
time and to stop once we find a recent entry.
This makes the look-up slower but in most cases the folder watcher
will report any change within milliseconds, and we start from the
most recent. What this really makes slower are actual user file
changes while a fast sync is underway, which will need to iterate
over the whole map to find out that the file isn't there.
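A sketch of the structure, keyed here by a monotonic millisecond timestamp
rather than the QElapsedTimer keys mentioned above (names are illustrative):

    #include <QElapsedTimer>
    #include <QMultiMap>
    #include <QString>

    // Illustrative: remember when the client itself touched a path, ordered
    // by time, so a look-up can start at the newest entries and stop as soon
    // as everything left is older than the window of interest.
    class TouchedFiles
    {
    public:
        TouchedFiles() { _clock.start(); }

        void addTouchedPath(const QString &path)
        {
            _touched.insert(_clock.elapsed(), path);
        }

        bool wasTouchedRecently(const QString &path, qint64 windowMs = 15 * 1000) const
        {
            const qint64 now = _clock.elapsed();
            for (auto it = _touched.constEnd(); it != _touched.constBegin();) {
                --it;
                if (now - it.key() > windowMs)
                    break; // everything before this entry is even older
                if (it.value() == path)
                    return true;
            }
            return false;
        }

    private:
        QElapsedTimer _clock;
        QMultiMap<qint64, QString> _touched;
    };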
This reduces the growth of the memory usage when downloading a large
amount of files.
We need Qt 5.9 for HTTP2 because, even if Qt 5.8 already has support
for it, there is a critical bug in the HTTP2 implementation which
makes it unusable [ https://codereview.qt-project.org/186050 and
https://codereview.qt-project.org/186066 ]
When using HTTP2, we can use many more parallel network requests, which
is especially good for small file handling.
Lower the priority of the GET and PUT propagation jobs, so the quota
or selective sync ui PROPFIND will not be blocked by them
Since these errors are blacklisted, it can take up to 24h to retry items
that had a 507 error for a while. This way users can intervene and cause
an upload attempt immediately.
It now produces a summary error message indicating the problem.
Adjust blacklist database table to contain 'errorCategory'. This is
useful for two things:
- Reestablishing summary messages based on blacklisted errors. For
example if we don't retry a 507ed file, we still want to show the
message about space on the server
- Selectively wiping the blacklist: When we have ui for something like
"I deleted some files, please retry all files now!", we want to
delete all blacklist entries of a specific category only.
* A bunch of code was determining sync status by ad-hoc comparing some
progress info fields. It can now just check the status, making it
easier to comprehend.
* There's a clear indication for "a new sync is starting", which helps
wiping the issues tab at the right time.
For now we use them for:
* csync errors: This allows them to appear in the sync issues tab
* insufficient local disk space, as a summary of individual file errors
Insufficient remote space will use them too, as might other issues that
are bigger than a single sync item.
The QNAM may continue to outlive both.
Rename Credentials::getQNAM() to createQNAM() while we're at it - it's
used to make a new QNAM that will subsequently be owned by the Account
object.
See d01065b9a1 for rationale.
Relates to
d40c56eda5147cf798a6
* SocketAPI has COPL_LOCAL_LINK / EMAIL_LOCAL_LINK commands
* The Nautilus and Dolphin shell integrations show a submenu from which
  one can share as well as access the private link.
* The SocketAPI provides a new GET_STRINGS command to access localized
strings.
* The private link can also be accessed from the user/group sharing
dialog.
* The numeric file id is extracted from the full id to create the
private link url.
Calling forgetSensitiveData() on account deletion leads to a timer for
clearQNAMCache() being queued. Then the Account object is deleted. The
Credentials object stays alive for now because it has a deleteLater
deleter.
If the timer calls into a slot on the Credentials object, the _account
pointer will be invalid at this time.
As a workaround, move the target slot to Account - that way it will not
be called as the account object is already destroyed.
However since Account and Credentials are mutually dependent, it would
be much preferable if their lifetimes were linked, avoiding this
category of bugs.
The current behavior was introduced in
d40c56eda5 and I currently don't
understand why - maybe there's another way of dealing with the problem
that existed then.
Before, blacklisted errors were set to FileIgnored status and hence
displayed as warnings. Now, they have their own BlacklistedError
category which allows them to appear as errors in the issues list and in
the shell integration icons.
When synchronizing a folder on a samba share, creating files that begin
with ._ is often forbidden. This prevented the client from creating
its ._sync_abcdef.db file.
Now, it'll check whether the preferred filename is creatable, and if
it isn't it'll use .sync_abcdef.db instead.
The disadvantage is that this alternative path won't be ignored by
older clients - that was the reason for the ._ prefix.
* For conflicts where mtime and size are identical:
a) If there's no remote checksum, skip (unchanged)
b) If there's a remote checksum that's a useful hash, create a
PropagateDownload job and compute the local hash. If the hashes
are identical, don't download the file and just update metadata.
* Avoid exposing the existence of checksumTypeId beyond the database
layer. This makes handling checksums easier in general because they
can usually be treated as a single blob.
This change was prompted by the difficulty of producing file_stat_t
entries uniformly from PROPFINDs and the database.
- Add category to all the messages (this code was merged right after
the patch that added categories everywhere, so it did not have them.)
- Make sure there are no warnings in the normal flow. (The wizard does a request
without authentication to determine the auth type)
All our crypto code is handled by Qt nowadays.
No need to carry this dependency.
Especially since it causes warnings on systems where
two OpenSSL versions are installed:
/usr/bin/ld: warning: libcrypto.so.1.0.0, needed by /usr/lib/libQt5Network.so.5.9.0, may conflict with libcrypto.so.1.1
Issue #5783
When the directory that should be removed by selective sync contains changes,
we ignore the whole subtree instead of only ignoring new files.
We cannot ignore the whole directory; we need to ignore only the directories
that do not have files to remove.
See owncloud/enterprise#1966
If the server and the client's database go out of sync, there could be
persistent 404 errors. This change ensures that the problem corrects
itself eventually by triggering a remote discovery of the file's
parent folders.
It does not address the root cause that might have led to the
divergence.
These would otherwise be line-wrapped by clang-format,
and then consecutive reformattings remove the aligned
comment indentation
Example:
int a; // too long comment
->
int a; // too long
       // comment
->
int a; // too long
// comment
When a new folder becomes selective-sync excluded, we already mark it
and all its parent folders with _invalid_ etags to force rediscovery.
That's not enough however. Later calls to csync_statedb_get_below_path
could still pull data about the excluded files into the remote tree.
That led to incorrect behavior, such as uploads happening for folders
that had been explicitly excluded from sync.
To fix the problem, statedb_get_below_path is adjusted to not read the
data about excluded folders from the database.
Currently we can't wipe this data from the database outright because we
need it to determine whether the files in the excluded folder can be
wiped away or not.
See owncloud/enterprise#1965
Use qCInfo for anything that has general value for support and
development. Use qCWarning for any recoverable error and qCCritical
for anything that could result in data loss or would identify a serious
issue with the code.
Issue #5647
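In practice that looks roughly like this (the category name is illustrative):

    #include <QLoggingCategory>

    // Illustrative category; the client defines one per area.
    Q_LOGGING_CATEGORY(lcPropagator, "sync.propagator", QtInfoMsg)

    void logExamples()
    {
        qCInfo(lcPropagator) << "starting upload";              // general support value
        qCWarning(lcPropagator) << "upload failed, will retry"; // recoverable error
        qCCritical(lcPropagator) << "journal write failed";     // risk of data loss
    }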
This gives more insight about the logs and allows setting fine-tuned
logging rules. The categories are set to only output Info by default
so this allows us to provide more concise logging while keeping the
ability to extract more information for a specific category when
developing or debugging customer issues.
Issue #5647
Add the log level and category name in the output. Only output the
thread ID and function name for qCDebug statements as they are not
necessary for general use and make the log harder to read.
Also make sure that the message pattern is set when NO_MSG_HANDLER is
used. Using an environment variable should have priority over it anyway.
When we first detect a 503 (probably from a PROPFIND) and enter the
ServiceUnavailable state, we now trigger a status.php query that will
switch the state to MaintenanceMode if necessary.
Before this patch, too deep folders would just be ignored, without any feedback.
This patch makes it so too deep folders are properly shown as ignored in the UI.
Also increase the MAX_DEPTH
Issue: #1067
I'm confident this is unnecessary. The original bug in #3283 was
to call ignoreSslErrors() without an argument in the 'accept'
case, which meant ignoring *all* subsequent SSL errors.
With that fixed, explicitly aborting the reply and resetting QNAM
is not needed since not ignoring the error will lead to the SSL
handshake failing.
See also:
75b38d1a2f (workaround introduced)
89376e14d6 (real fix)
76ce5adbf0 (cherry-pick of workaround)