This mostly flagged unused parameters. Didn't enable the
misc-non-private-member-variables-in-classes check as we got a lot of
those. Hopefully we'll get to fix them at some point, but that feels too
early and like too much work for now.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Some of the comments didn't match the size or were missing. This also
means reducing one of the 150 MB payloads left behind, which cuts the
execution time by a few more seconds. Execution time is now around 30s,
which is more acceptable.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This is about the conflicts that happen when the file has been uploaded
correctly to the server, but the etag was not received because the
connection was closed before we got the reply.
We used to compare only the mtime when comparing the uploaded file and the
existing file. However, to be perfectly correct, we should also check the
size.
This was found because TestChunkingNG::connectionDroppedBeforeEtagRecieved is
flaky. Example of failure found in https://drone.owncloud.com/owncloud/client/481/5
while testing PR #6626
(very trimmed log:)
06-29 07:58:02:015 [ info sync.csync.csync ]: ## Starting local discovery ##
06-29 07:58:02:016 [ info sync.csync.updater ]: Database entry found, compare: 1530259082 <-> 1530259051, etag: <-> 1644a8c8750, inode: 1935629 <-> 1935629, size: 301 <-> 300, perms: 0 <-> ff, type: 0 <-> 0, checksum: <-> SHA1:cc9adedebe27a6259efb8d6ed09f4f2eff559ad1, ignore: 0
06-29 07:58:02:016 [ info sync.csync.updater ]: file: A/a0, instruction: INSTRUCTION_EVAL <<=
06-29 07:58:02:972 [ warning sync.networkjob ]: QNetworkReply::NetworkError(OperationCanceledError) "Connection timed out" QVariant(Invalid)
.. next sync...
06-29 07:58:02:980 [ info sync.engine ]: #### Discovery start ####################################################
06-29 07:58:02:981 [ info sync.csync.csync ]: ## Starting local discovery ##
06-29 07:58:02:983 [ info sync.csync.updater ]: Database entry found, compare: 1530259082 <-> 1530259051, etag: <-> 1644a8c8750, inode: 1935629 <-> 1935629, size: 302 <-> 300, perms: 0 <-> ff, type: 0 <-> 0, checksum: <-> SHA1:cc9adedebe27a6259efb8d6ed09f4f2eff559ad1, ignore: 0
06-29 07:58:02:983 [ info sync.csync.updater ]: file: A/a0, instruction: INSTRUCTION_EVAL <<=
06-29 07:58:02:985 [ info sync.csync.csync ]: ## Starting remote discovery ##
06-29 07:58:02:985 [ info sync.networkjob ]: OCC::LsColJob created for "http://localhost/owncloud" + "" "OCC::DiscoverySingleDirectoryJob"
06-29 07:58:02:987 [ info sync.csync.updater ]: Database entry found, compare: 1530259082 <-> 1530259051, etag: 1644a8c8b26 <-> 1644a8c8750, inode: 0 <-> 1935629, size: 301 <-> 300, perms: ff <-> ff, type: 0 <-> 0, checksum: SHA1:5adcdac9608ae0811247f07f4cf1ab0a2ef99154 <-> SHA1:cc9adedebe27a6259efb8d6ed09f4f2eff559ad1, ignore: 0
06-29 07:58:02:987 [ info sync.csync.updater ]: file: A/a0, instruction: INSTRUCTION_EVAL <<=
06-29 07:58:02:989 [ info sync.csync.csync ]: Update detection for remote replica took 0.004 seconds walking 13 files
06-29 07:58:02:990 [ info sync.engine ]: #### Discovery end #################################################### 9 ms
06-29 07:58:02:990 [ info sync.database ]: Updating file record for path: "A/a0" inode: 1935629 modtime: 1530259082 type: 0 etag: "1644a8c8b26" fileId: "16383ea4" remotePerm: "WDNVCKR" fileSize: 301 checksum: "SHA1:cc9adedebe27a6259efb8d6ed09f4f2eff559ad1"
06-29 07:58:02:990 [ info sync.csync.reconciler ]: INSTRUCTION_UPDATE_METADATA client file: A/a0
06-29 07:58:02:990 [ info sync.csync.csync ]: Reconciliation for local replica took 0 seconds visiting 13 files.
06-29 07:58:02:990 [ info sync.csync.reconciler ]: INSTRUCTION_UPDATE_METADATA server dir: A
06-29 07:58:02:990 [ info sync.csync.csync ]: Reconciliation for remote replica took 0 seconds visiting 13 files.
06-29 07:58:02:990 [ info sync.engine ]: #### Reconcile end #################################################### 9 ms
06-29 07:58:02:990 [ info sync.database ]: Updating local metadata for: "A/a0" 1530259082 302 1935629
FAIL! : TestChunkingNG::connectionDroppedBeforeEtagRecieved(small file) '!fakeFolder.syncOnce()' returned FALSE. ()
Use smaller files so the tests run faster.
Particularly useful for TestChunkingNG::connectionDroppedBeforeEtagRecieved,
which had become much slower after 2638332dc6
increased the timeout for bigger files.
If we receive data without base64 encoding for encryption, it makes
sense to get it without base64 encoding out of decryption.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Make the codebase consistent; we already have a lot of implicit pointer comparisons.
Exception: stay explicit on returns, for example:
return _db != nullptr;
Signed-off-by: Michael Schuster <michael@schuster.ms>
We keep NULL in the pure C files in src/csync/std and test/csync.
We also replace "NULL" in Doxygen documentation with
"\c nullptr" (formatted as code).
Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
This test was failing locally for me. Through QStandardPaths it was
finding the user settings of my production client, so it did not have the
initial state it expected. With QStandardPaths test mode enabled, it
starts from a clean slate every time.
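For illustration, a minimal sketch of what enabling test mode looks like
(the test class and init function here are assumptions, not the actual test):

    #include <QStandardPaths>

    void TestExample::initTestCase()
    {
        // Redirect all QStandardPaths locations to per-user test directories
        // (e.g. under ~/.qttest on Unix) so the test never reads the settings
        // of a production client.
        QStandardPaths::setTestModeEnabled(true);
    }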
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
We simply use a static QObject using Q_GLOBAL_STATIC()
instead of allocating a leaking QObject on the heap.
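Roughly, the pattern looks like this (the object name is illustrative):

    #include <QObject>
    #include <QGlobalStatic>

    // One process-wide QObject, created lazily on first access and destroyed
    // at program exit instead of being leaked on the heap.
    Q_GLOBAL_STATIC(QObject, globalSignalSource)

    void useIt()
    {
        QObject *obj = globalSignalSource(); // accessor generated by the macro
        Q_UNUSED(obj)
    }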
Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
In case of denormalized paths in the dav href (presence of . or .. in
the path), a simple string startsWith comparison wasn't enough to know if
said href ended up in the right namespace. That's why we're now using
QUrl (pretending it is a local file, since we don't have a full URL in
the href) to normalize the path before comparison.
This could happen with broken proxies, for instance, where we would
wrongly validate the dav information, resulting in potentially surprising
syncing and name collisions.
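A sketch of the comparison, with illustrative names; using
QUrl::NormalizePathSegments to clean up the "." and ".." segments is an
assumption about the mechanism, not the exact code:

    #include <QString>
    #include <QUrl>

    // Normalize a dav href that may contain "." or ".." segments before
    // checking that it points into the expected dav namespace.
    bool hrefIsInDavNamespace(const QString &href, const QString &expectedPrefix)
    {
        // Pretend the href is a local file, since it is not a full URL.
        const QString normalized =
            QUrl::fromLocalFile(href).adjusted(QUrl::NormalizePathSegments).path();
        return normalized.startsWith(expectedPrefix);
    }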
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
- When the user is logged out because of 401 or 403 errors, check whether
the server requested a remote wipe. If yes, locally delete the account and
the folders connected to it and notify the server. If no, proceed to ask
the user to log in again.
- The app password is restored in the keychain.
- WIP: The change also includes a test class for RemoteWipe.
Signed-off-by: Camila San <hello@camila.codes>
Adding a test for the data fingerprint feature made me realize it was broken.
The code was relying on the distinction between an empty and a null QByteArray,
but this was a bad idea as that distinction is lost when going through QString.
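A small illustration of the pitfall (a sketch, not the actual code path):

    #include <QByteArray>
    #include <QString>

    void dataFingerprintPitfall()
    {
        QByteArray nullFingerprint;        // isNull() == true,  isEmpty() == true
        QByteArray emptyFingerprint("");   // isNull() == false, isEmpty() == true

        // Once the value goes through QString, the null/empty distinction is
        // not reliably preserved, so code must not rely on isNull() to mean
        // "the server sent no data fingerprint".
        QByteArray roundTripped = QString::fromUtf8(nullFingerprint).toUtf8();
        Q_UNUSED(emptyFingerprint)
        Q_UNUSED(roundTripped)
    }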
Note that we also needed to adjust the server url to contain the user name
in the folder wizard (as checkPathValidityForNewFolder expects the user name).
Issue #6654
Fix a strange warning seen in the log from the CI on Windows in
https://github.com/owncloud/client/pull/6621
The test shows, at the beginning:
QObject::connect: No such signal DesktopServiceHook::destroyed(QObject*)
And crashes at the end.
My guess is that when QDesktopServices::setUrlHandler is called, the
QMetaObject is not yet initialized.
But this is probably not the reason for the crash.
It could happen that readyRead was emitted for incoming data while the
download was not yet finished. Then the network job could finish with
no more data arriving - so readyRead wasn't emitted again.
To fix this, the finished signal also gets connected to the readyRead
slot.
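The wiring amounts to something like this, with illustrative job and slot
names, inside the download job's setup:

    // Drain buffered data both when new data arrives and when the reply
    // finishes, so the last chunk is consumed even if no further readyRead
    // is emitted after finished().
    connect(reply, &QNetworkReply::readyRead, this, &GETFileJob::slotReadyRead);
    connect(reply, &QNetworkReply::finished, this, &GETFileJob::slotReadyRead);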
Since commit 4dc49ff3, we store an entry in the upload info table even
for non-chunked uploads. However, if the upload fails we don't want to
try to remove non-existent stale chunks.
Without this commit, we would send a DELETE command to clean non-existent
chunks in the dav/uploads/ namespace.
Since the release package will be built with unit tests, we don't
want to query the env variable at every call to fsCasePreserving.
So only test the env variable at startup.
The testutility can still change the value.
(The env variable is still used from t8.pl and maybe smashbox.)
Issue #6318
Paths can contain the SQL LIKE wildcards % and _, and that would lead to
odd behavior.
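For illustration, the kind of escaping involved (hypothetical helper, not
the actual code):

    #include <QByteArray>

    // Escape the SQL LIKE wildcards '%' and '_' (and the escape character
    // itself) so a path can be used in a "LIKE ? ESCAPE '\'" query without
    // matching unrelated rows.
    QByteArray escapeSqlLikeWildcards(const QByteArray &path)
    {
        QByteArray escaped;
        escaped.reserve(path.size());
        for (const char c : path) {
            if (c == '%' || c == '_' || c == '\\')
                escaped.append('\\');
            escaped.append(c);
        }
        return escaped;
    }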
This patch also clarifies the behavior of avoidReadFromDbOnNextSync(),
which previously depended on whether "foo/bar" or "foo/bar/" was
passed as input.
Possibly affects #6322
Previously, conflicts with a different type on both ends led to sync
errors. Now they are handled in the expected way: the local item gets
renamed and the remote item gets propagated downwards.
This also adds a unit test for the TYPE_CHANGE case. Parts of it look
like they might be unified with the CONFLICT cases.
Previously we'd use the full regex whenever the bname triggered full-path
matching. Now we have a simplified full-traversal regex for this case
that can be significantly faster to apply.
Triggered by #5017 but doesn't actually solve it.
Mainly uses target_include_directories instead of include_directories
so that a library's public include directories get added automatically
when the target is added in target_link_libraries.
Unfortunately matching behaved differently on Windows. This patch
restores the previous matching behavior but still uses the new regular
expression based matching.
Further work will hopefully unify the behavior between platforms without
breaking backwards compatibility.
fopen does not work well with relative paths and forward slashes on Windows.
This fixes the Windows testexcludedfiles test
and also makes the code simpler.
Note that the 'trimmed' might be a behavior change, but I think it is ok.
If the server has the 'uploadConflictFiles' capability conflict
files will be uploaded instead of ignored.
Uploaded conflict files have the following headers set during upload,
when the data is available:
OC-Conflict: 1
OC-ConflictBaseFileId: 172489174instanceid
OC-ConflictBaseMtime: 1235789213
OC-ConflictBaseEtag: myetag
Downloads receive the same headers in return when a conflict file is
downloaded.
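Setting these headers on the upload request could look roughly like this
(the struct and function are illustrative, not the actual code):

    #include <QByteArray>
    #include <QNetworkRequest>

    // Illustrative holder for the base file's metadata.
    struct ConflictBaseInfo
    {
        QByteArray fileId;
        qint64 modtime = 0;
        QByteArray etag;
    };

    // Attach the conflict metadata so the server can relate the uploaded
    // conflict file to its base file.
    void addConflictHeaders(QNetworkRequest &request, const ConflictBaseInfo &base)
    {
        request.setRawHeader("OC-Conflict", "1");
        request.setRawHeader("OC-ConflictBaseFileId", base.fileId);
        request.setRawHeader("OC-ConflictBaseMtime", QByteArray::number(base.modtime));
        request.setRawHeader("OC-ConflictBaseEtag", base.etag);
    }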
In the absence of server support clients will identify conflict files
through the file name pattern and attempt to deduce the base fileid.
Base etag and mtime can't be deduced though.
The upload job for a new conflict file is now triggered directly from
the job that created the conflict file. No second sync run is
necessary anymore.
This commit does not yet introduce a 'username' like identifier that
automatically gets added to conflict file filenames (to name the files
foo_conflict-Fred-1345.txt instead of just foo_conflict-1345.txt).
Previously, there were csync_ftw_type_e and SyncFileItem::Type. Having
two enums led to a bug where Type::Unknown == Type::File, which went
unnoticed for a good while.
This patch keeps only a single enum.
This can happen if the upload of a file is finished, but we got
disconnected right before receiving the reply containing the etag.
So nothing was saved in the DB, and we are not sure whether the server
received the file properly or not. A further local update of the file
will then cause a conflict.
In order to fix this, store the checksum of the file being uploaded in
the uploadinfo table of the local db (even if there is no chunking
involved). And when we have a conflict, check that it is not because
of this situation by checking the entry in the uploadinfo table.
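The core of the check is roughly this (names are illustrative, not the
exact API):

    #include <QByteArray>

    // If the checksum recorded when the interrupted upload started matches
    // the current local file, the server copy is in fact up to date and this
    // is not a real conflict; only the database entry needs updating.
    bool isFalseConflict(const QByteArray &storedUploadChecksum,
                         const QByteArray &currentLocalChecksum)
    {
        return !storedUploadChecksum.isEmpty()
            && storedUploadChecksum == currentLocalChecksum;
    }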
Issue #5106
In addition to using the right function when retrieving inodes this
*also* fixes a more general bug ownsql had with storing uint64 values
that didn't fit into an int64.
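The underlying trick, sketched with direct sqlite3 calls (ownsql wraps
these; the helper names are made up):

    #include <sqlite3.h>
    #include <cstdint>

    // SQLite only stores signed 64-bit integers, so a uint64 (e.g. a large
    // inode number) is stored by reinterpreting its bit pattern as int64
    // and cast back when reading.
    void bindUInt64(sqlite3_stmt *stmt, int index, uint64_t value)
    {
        sqlite3_bind_int64(stmt, index, static_cast<sqlite3_int64>(value));
    }

    uint64_t columnUInt64(sqlite3_stmt *stmt, int column)
    {
        return static_cast<uint64_t>(sqlite3_column_int64(stmt, column));
    }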
Improves full matches by more than an order of magnitude
and also improves speed of traversal matches by roughly 20%,
judging by the check_csync_exclude performance test.
Make ExcludedFiles something that is instantiated outside of
the CSYNC context and then given to it as a hook.
ExcludedFiles still lives in csync_exclude and the internal
workings haven't been touched.
For duplicate file ids the update phase and reconcile phase determined
the rename mappings independently. If they disagreed (due to different
order of processing), complicated misbehavior would result.
This patch fixes it by letting reconcile try to use the mapping that the
update phase has computed first.
Previously we required matching mtimes but that's actually
unnecessary when the question is about whether to skip the
download. We will still update the file's metadata.
Also, adjust behavior when the checksum is weak (Adler32):
in these cases we still depend on equal mtimes.
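The resulting decision is roughly the following sketch; the parameter names
are made up, and the checksum header format is assumed to follow the
"SHA1:..."/"Adler32:..." convention seen in the logs:

    #include <QByteArray>

    // A strong checksum match is enough on its own to skip the download;
    // a weak Adler32 match additionally requires equal modification times.
    bool canSkipDownload(const QByteArray &localChecksumHeader,
                         const QByteArray &remoteChecksumHeader,
                         qint64 localMtime, qint64 remoteMtime)
    {
        if (remoteChecksumHeader.isEmpty()
            || localChecksumHeader != remoteChecksumHeader)
            return false;
        if (remoteChecksumHeader.startsWith("Adler32:"))
            return localMtime == remoteMtime; // weak checksum: still need mtime
        return true; // identical content: skip download, only update metadata
    }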
... even if the file is not changed.
We get an UPDATE_METADATA in that case, so make sure we let the
SyncFileStatusTracker know about it.
That means we need to filter out UPDATE_METADATA in the other listeners
of this signal.
Issue #6098
When users share the same tree several times (say A/ and A/B/ are both
shared) the remote tree can have several entries that have the same
file id. This needs to be respected in rename detection.
Also adds several tests and fixes for issues noticed during testing.
See #6096
We mostly trust the file watchers meaning that we don't re-scan the
local tree if we have done that recently and no file watcher events
have arrived. If the file watchers invalidate a subtree, we rescan
only that subtree.
Since we're not entirely sure the file watchers are reliable, we still
do full local discoveries regularly (1h by default). There is a config
file setting as well as an environment variable to control the interval.
Comparison of file sizes for potential conflicts was added in
0eb9401c62, but did not extend to checking
the file size in case of potential local moves.
This commit adds this check and adds tests for various move+change
scenarios.
On Mac, this halves the time spent in csync_excluded_traversal
when using check_csync_excluded_performance. A similar performance
increase is seen on linux.
This gets rid of the csync_statedb sqlite layer and uses
the same code and same connection as the rest of the SyncEngine.
Missing functions are added to SyncJournalDb, and a few minor things
change (like SyncJournalFileRecord::_modtime becoming an int64
instead of a QDateTime, as it was in csync).
The current implementation would return the same value whether the query failed
or no row was found. This is something that csync currently checks and that
needs to be provided if we want to use SyncJournalDB there.
Adjusted all call sites to also check the return value even though they
could still just rely on rec.isValid(); this makes it more explicit what
happens for database errors in those cases, if we ever want to handle
them gracefully.
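The intended call-site pattern looks roughly like this (a sketch; the exact
signature may differ):

    // The return value reports database errors, while rec.isValid() reports
    // whether a row was actually found.
    SyncJournalFileRecord rec;
    if (!journal->getFileRecord(path, &rec)) {
        // hard database error: surface the failure instead of treating it
        // like "file not in the database"
        return false;
    }
    if (rec.isValid()) {
        // a row was found; use rec
    }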
The code that was creating the files in the directory was removed in
commit 6906b8d30c. The directory is empty
so the result is expected to be null. It was passing before because the
code was returning an entry for . and .., but since commit
35f80bd439 this is no longer the case
Create a specific type that parses the permissions so we can store
it in a short rather than in a QByteArray.
Note: in RemotePermissions::toString, we make sure the string is not
empty by adding a space; this already existed before commit
e8f7adc7ca, where it was removed by mistake.
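A minimal sketch of the idea; the class here is illustrative, and the real
type covers more permission letters and helpers:

    #include <QByteArray>

    // Parse the server's permission letters once and keep them as bits in a
    // short instead of carrying the QByteArray around.
    class RemotePermissionsSketch
    {
    public:
        enum Permission {
            CanWrite  = 1 << 0, // 'W'
            CanDelete = 1 << 1, // 'D'
            CanRename = 1 << 2, // 'N'
            CanMove   = 1 << 3, // 'V'
        };

        static RemotePermissionsSketch fromServerString(const QByteArray &perms)
        {
            RemotePermissionsSketch p;
            for (const char c : perms) {
                switch (c) {
                case 'W': p._value |= CanWrite;  break;
                case 'D': p._value |= CanDelete; break;
                case 'N': p._value |= CanRename; break;
                case 'V': p._value |= CanMove;   break;
                default: break; // remaining letters omitted in this sketch
                }
            }
            return p;
        }

        bool hasPermission(Permission p) const { return _value & p; }

    private:
        short _value = 0;
    };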
This removes the remaining "other" fields of the sync log to save a
bit of memory.
other_etag and other_fileId don't give much information to the users,
and other_instruction will always be INST_NONE anyway.
other_modtime and other_size are kept since they are sometimes used;
they were renamed to have more meaningful names.
SyncEngine::checkPermissions will now fetch its information from the
csync trees since they are now preserved until right after this point.
Fixes #3213
Only store the path since they represent the same thing, and do the
phash conversion during DB lookup as done in libsync.
We could get rid of everything since we also have an index on the path
column, but since it's the primary key this would make the migration non-trivial.
Now that they use the same structure, avoid _csync_detect_update
having to recreate another instance and transfer everything manually.
Any instance created during discovery should now be used all the way
up to SyncEngine::treewalkFile.
This also makes sure that the path and types are properly set in that
object instead of having to pass everything as separate parameters.
This gets rid of csync_ftw_flags_e, which was only ever converted from
and to csync_ftw_type_e, already present in the csync_file_stat_t.
Issue #1817
Otherwise patterns that start with # are impossible to add, since
they get treated as comments. Also add this escaping for patterns added
in the UI.
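The escaping amounts to something like this when writing user-added
patterns (hypothetical helper, not the actual code):

    #include <QString>

    // Escape a leading '#' so the pattern is not parsed back as a comment
    // when the exclude file is re-read; the reader unescapes "\#" again.
    QString escapeLeadingHash(const QString &pattern)
    {
        if (pattern.startsWith(QLatin1Char('#')))
            return QStringLiteral("\\") + pattern;
        return pattern;
    }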
Merge csync_create and csync_init into the constructor and
replace csync_destroy with the destructor.
Also use a QByteArray for csync_s::root_perms and flatten
csync_rename_s as a rename sub-struct of csync_s since it
can now handle C++ types.
The only difference with csync_s::current is that it's
assigned the value of csync_s::local::type and
csync_s::remote::type, which never change. So might as
well only use the "current" field with constants.
Also move csync_normalize_etag to common/utility since we
don't need the char* function anymore.
Remove the single-space file_stat->remotePerm codepath, as it won't be
used in csync anymore since
8de3bda0b1.
Issue #1817
This is the first commit trying to unify csync_file_stat_s,
csync_vio_file_stat_s and csync_tree_walk_file_s. Use QByteArray
and unique_ptr already, since I'm not used to tracking memory allocations
and this will make the transition easier.
Issue #1817
Now that csync builds as C++, this avoids having to implement
functionality needed by csync directly in csync itself.
This library is built as part of libocsync, and its symbols are exported
through it.
This requires a relicense of Utility as LGPL. All classes moved into
this library from src/libsync will need to be relicensed as well.
This will allow us to unify data structures between csync and libsync.
Utility functions like csync_time and c_std are still compiled as C
since we won't need to be coupled with Qt in the short term.
The QNAM may continue to outlive both.
Rename Credentials::getQNAM() to createQNAM() while we're at it - it's
used to make a new QNAM that will subsequently be owned by the Account
object.
See d01065b9a1 for rationale.
Relates to
d40c56eda5147cf798a6
* SocketAPI has COPL_LOCAL_LINK / EMAIL_LOCAL_LINK commands
* The Nautilus and Dolphin shell integrations show a submenu from which
one can share as well as access the private link.
* The SocketAPI provides a new GET_STRINGS command to access localized
strings.
* The private link can also be accessed from the user/group sharing
dialog.
* The numeric file id is extracted from the full id to create the
private link url.
There was a rounding issue in the mtimes which sometimes resulted in an
off-by-one error. It was caused by storing a full QDateTime in the FileInfo
while the mtime saved to disk was truncated to seconds.
* For conflicts where mtime and size are identical:
a) If there's no remote checksum, skip (unchanged)
b) If there's a remote checksum that's a useful hash, create a
PropagateDownload job and compute the local hash. If the hashes
are identical, don't download the file and just update metadata.
* Avoid exposing the existence of checksumTypeId beyond the database
layer. This makes handling checksums easier in general because they
can usually be treated as a single blob.
This change was prompted by the difficulty of producing file_stat_t
entries uniformly from PROPFINDs and the database.
This is useful for monitoring what kind of network requests are
sent to the fake server. Such as "did this sync cause an upload?"
and "was there a propfind for this path?". It can also inject
custom replies.