This would happen if the directory first needed to be created
through an mkdir propagation job. That job's itemCompleted signal
would make the directory show as SYNC even though its children
were still propagating.
Fix the issue by tracking a sync count for each file and propagating it
to its parents. This lets us get rid of the O(n) vector lookup for each
status query and properly track the hierarchical sync status of a
directory.
This also removes the itemCompleted signal emission from the
PropagateDirectory job. Since we only needed it for the overlay icons,
and since this job doesn't do any direct propagation, we can remove it
to ensure that we won't call itemCompleted twice for the item attached
to Propagate*Mkdir jobs (since PropagateDirectory is backed by the same
SyncFileItem, instruction and status).
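To illustrate the idea (the class and member names below are stand-ins,
not the actual tracker code), the hierarchical counting boils down to
incrementing a counter for a path and all of its parents when an item
starts propagating, and decrementing it on completion:

    #include <QHash>
    #include <QString>

    // Sketch only: a per-path counter that also covers every ancestor, so a
    // directory is considered syncing while any of its children still are.
    class SyncCountTracker
    {
    public:
        void incSyncCount(const QString &path)
        {
            for (QString p = path; !p.isEmpty(); p = parentPath(p))
                ++_syncCount[p];
        }

        void decSyncCount(const QString &path)
        {
            for (QString p = path; !p.isEmpty(); p = parentPath(p)) {
                if (--_syncCount[p] == 0)
                    _syncCount.remove(p);
            }
        }

        // O(1) per status query instead of scanning a vector of in-progress items.
        bool isSyncing(const QString &path) const { return _syncCount.contains(path); }

    private:
        static QString parentPath(const QString &path)
        {
            int slash = path.lastIndexOf(QLatin1Char('/'));
            return slash < 0 ? QString() : path.left(slash);
        }

        QHash<QString, int> _syncCount;
    };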
The current way of tracking the need to update the metadata without
propagation, using a separate flag, makes it difficult to track
priorities between the local and remote tree. The logic is also hard
to reason about since the matrix of possibilities isn't fully covered,
leaving the flag used in only a few situations (mostly, but not only,
involving folders).
The reason we need to change this is to be able to track the sync
state of files for overlay icons. The instruction alone can't be
used since CSYNC_INSTRUCTION_SYNC is used for folders even though
they won't be propagated. Removing this logic isn't possible, however,
without using something other than CSYNC_INSTRUCTION_NONE, since too
many code paths rightfully interpret that as meaning "nothing to do".
This patch adds a new CSYNC_INSTRUCTION_UPDATE_METADATA instruction
to let the update and reconcile steps tell the SyncEngine to update
the metadata of a file without any propagation. Other flags are left
to be interpreted by the implementation as implicitly needing a
metadata update or not, as was already the case for most file
propagation jobs. For example, CSYNC_INSTRUCTION_NEW for directories
now also implicitly updates the metadata.
Since it's now impossible for folders to emit CSYNC_INSTRUCTION_SYNC
or CSYNC_INSTRUCTION_CONFLICT, the corresponding code paths in the
sync engine have been removed.
Since the reconcile step can now know whether the local tree needs a
metadata update while the remote side might want propagation, the
localMetadataUpdate logic in SyncEngine::treewalkFile now simply uses
a CSYNC_INSTRUCTION_UPDATE_METADATA for the local side, which is now
implemented as a different database query.
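A reduced sketch of how the engine can dispatch on the new instruction
(all types other than the instruction value are simplified stand-ins,
not the real SyncEngine or journal API):

    #include <QString>

    enum Instruction {
        CSYNC_INSTRUCTION_NONE,
        CSYNC_INSTRUCTION_UPDATE_METADATA, // new: metadata only, no propagation
        CSYNC_INSTRUCTION_NEW,
    };

    struct Item {
        QString file;
        Instruction instruction;
    };

    struct Journal {
        // Writes only mtime/etag/inode for the given path.
        void updateMetadata(const QString &) { /* database update only */ }
    };

    void treewalk(Journal &journal, const Item &item)
    {
        if (item.instruction == CSYNC_INSTRUCTION_UPDATE_METADATA) {
            // No propagation job: the reconcile step only asked for a
            // database update of the file's metadata.
            journal.updateMetadata(item.file);
            return;
        }
        // Other instructions go through the propagator, which updates the
        // metadata itself once the corresponding job completes.
    }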
Add a missing call that we currently only make in slotItemCompleted.
This would normally only affect the first sync, and it would have
been properly updated at the end of the sync anyway.
To be able to test the SyncEngine efficiently, a set of server-mocking
classes has been implemented on top of QNetworkAccessManager.
The local disk side hasn't been mocked, since this would require adding
a large abstraction layer in csync. Instead, the SyncEngine is pointed
to a different temporary directory in each test and we test by
interacting with the files in that directory.
The FakeFolder object wraps the SyncEngine with these abstractions and
allows controlling both the local files and the fake remote state through
the FileModifier interface, using a FileInfo tree structure for the
remote-side implementation as well as for feeding and comparing the
states on both sides in tests.
Tests run fast and require no setup, but each server feature that we
want to test on the client side needs to be implemented in this
fake-objects library. For example, the OC-FileId header isn't set as of
this commit, and we can't test the file move logic properly without
implementing it first.
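For illustration, a test body written against these abstractions could
look roughly like the following; the accessor names (localModifier,
remoteModifier, syncOnce, currentLocalState, currentRemoteState) are
assumptions for the sketch rather than a documented API:

    // Inside a QtTest private slot:
    FakeFolder fakeFolder{FileInfo{}};

    // Drive the on-disk local folder and the fake remote tree through the
    // same FileModifier interface.
    fakeFolder.localModifier().insert("A/local.dat");
    fakeFolder.remoteModifier().mkdir("B");

    QVERIFY(fakeFolder.syncOnce());

    // Both sides are captured as FileInfo trees and compared.
    QCOMPARE(fakeFolder.currentLocalState(), fakeFolder.currentRemoteState());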
The TestSyncFileStatusTracker tests already contain a few QEXPECT_FAILs
for what I consider to be issues that need to be fixed in order to catch
up on our test coverage, without making this patch too huge.
This is a move away from the original policy where jobs
would only follow redirects in special cases.
Two restrictions are in place, as sketched below:
1. We do not allow protocol downgrades (https -> http).
2. We stop redirects once we detect a loop, either directly (the old URL
   equals the new one) or indirectly (after 10 redirects).
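Roughly, the check before following a redirect looks like this (names are
illustrative, not the actual job code):

    #include <QUrl>

    static const int maxRedirects = 10;

    bool mayFollowRedirect(const QUrl &oldUrl, const QUrl &newUrl, int redirectCount)
    {
        // 1. No protocol downgrade: never follow https -> http.
        if (oldUrl.scheme() == QLatin1String("https")
            && newUrl.scheme() == QLatin1String("http"))
            return false;

        // 2. Stop loops: either directly (same URL again) or indirectly
        //    (more than 10 redirects in a row).
        if (newUrl == oldUrl || redirectCount >= maxRedirects)
            return false;

        return true;
    }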
This is closer to RFC-conforming behavior, although we currently
treat 301 replies as if they were 302; handling them differently is left
for a separate commit.
Error handling (and display) also needs improvement.
Addresses #2791
A deleteLater call was missing when the CleanupPollsJob aborts.
This is only a problem if the SyncEngine is kept alive for a long time,
which is usually not the case in the configurations where poll jobs are used.
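The gist of the fix, paraphrased (this is not the literal job code):

    void CleanupPollsJob::abort()
    {
        // ... cancel the outstanding poll request and report the error ...
        deleteLater(); // previously missing: without it the job object stays
                       // around for as long as the SyncEngine does
    }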
While loading the account, only override the server URL if
Theme::forceConfigAuthType is set. This restores the behavior of client 2.1
for themes that did not use Theme::forceConfigAuthType.
Issue: owncloud/enterprise#1418
This was removed in 0194ebb222
because it breaks on Linux. However, it looks like it is correct
for Windows. In the meantime the surrounding ifdef has changed
from !Q_OS_MAC to Q_OS_WIN, so reverting the removal makes sense.
Since the SyncEngine now quits and waits for the discovery thread,
the main thread can enter a deadlock where the discovery thread waits
for its directory result.
Add a 2-second timeout to the discovery thread's wait condition
to limit how long such a deadlock can last.
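A sketch of the bounded wait, assuming a QWaitCondition/QMutex pair like
the one used between the discovery thread and the main thread (names are
illustrative):

    #include <QMutex>
    #include <QWaitCondition>

    QMutex mutex;
    QWaitCondition directoryResultReady;

    void waitForDirectoryResult()
    {
        QMutexLocker locker(&mutex);
        // Wake up after at most 2 seconds even if the main thread never
        // delivers the result, so a shutdown race can't deadlock forever.
        directoryResultReady.wait(&mutex, 2000);
    }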
This disables the workaround 487e1fdca5ee04fc98c1ed77898df70d740967c8
for servers that are new enough to support fine-grained permissions
on federated shares.
The consequence is that the 'reshare' permission is now granted by
default and that users can edit permissions on the usual fine-grained
level again.
The way the client deals with servers <9.1 is unchanged.
Use a QMap to avoid using a full hash table for only a few entries, and
clear the QMap once we're done measuring. This saves a few
hundred bytes per job during propagation that would otherwise only be
freed at the end of the sync.
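The pattern, sketched with illustrative names (not the actual propagation
code):

    #include <QByteArray>
    #include <QElapsedTimer>
    #include <QMap>

    class JobTiming
    {
    public:
        void start(const QByteArray &phase) { _timers[phase].start(); }
        qint64 stop(const QByteArray &phase) { return _timers.take(phase).elapsed(); }

        // Called once measuring is over: a QMap with a handful of entries
        // frees everything here, instead of keeping a hash table allocated
        // until the end of the sync.
        void finish() { _timers.clear(); }

    private:
        QMap<QByteArray, QElapsedTimer> _timers;
    };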
During propagation we create a line for each file, which takes memory, but
we delete all lines past 2000 right at the beginning of the next sync.
Since the user has little chance of being able to read past those 2000
lines in the log, we might as well keep it capped at 2000 during
propagation too, to prevent it from eating memory.
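A sketch of the capping, with illustrative names:

    #include <QStringList>

    void addLogLine(QStringList &lines, const QString &line)
    {
        lines.append(line);
        // Trim while propagating instead of only at the start of the next
        // sync, so a long-running sync can't grow the log without bound.
        while (lines.size() > 2000)
            lines.removeFirst();
    }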
The SyncRunFileLog owned by the Folder must be destroyed after the
SyncEngine since the SyncEngine will abort during destruction, resulting
in all jobs being aborted.
It's possible that this crash only happens with a debug build.
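The constraint is just C++ member destruction order (reverse of the
declaration order); simplified stand-in types:

    struct FileLog { ~FileLog() { /* closes the log file */ } };
    struct Engine  { FileLog *log = nullptr; ~Engine() { /* aborts jobs, which still write to *log */ } };

    struct Folder
    {
        FileLog fileLog;          // declared first  => destroyed last
        Engine  engine{&fileLog}; // destroyed first, while the log is still valid
    };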
The FolderWatcher inserts files to be marked as SYNC, and we
currently assume that all file statuses will be updated by the
following sync. However, it's possible that the FolderWatcher
notifies us of a change that csync won't consider necessary to
propagate, in which case no new status would be pushed and the
file manager would keep showing the file as syncing.
Re-push the file status (most likely the OK status) when emptying
the dirty files list before propagating, to avoid this issue.
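A sketch of the re-push with illustrative names (the callback stands in
for whatever notifies the shell integration):

    #include <QSet>
    #include <QString>
    #include <functional>

    struct DirtyStatusTracker
    {
        QSet<QString> dirtyPaths;

        void aboutToPropagate(const std::function<void(const QString &, const QString &)> &pushStatus)
        {
            // Re-push a status (usually OK) for every path the FolderWatcher
            // marked dirty, in case csync decided not to propagate it and no
            // other status update would ever arrive.
            for (const QString &path : dirtyPaths)
                pushStatus(path, QStringLiteral("OK"));
            dirtyPaths.clear();
        }
    };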
No need to allocate (and initialize to 0) a 10 MiB buffer for each file
when most files are much smaller than that, so make sure the buffer we
allocate is not bigger than the file size.
Also, 10 MiB is a bit big for a buffer anyway; 500 KiB should be more than
enough. (Too-big allocations can cause problems because of memory
fragmentation and the like.)
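In other words, the buffer size is chosen roughly like this (sketch):

    #include <QtGlobal>

    qint64 chooseBufferSize(qint64 fileSize)
    {
        const qint64 maxBufferSize = 500 * 1024; // 500 KiB cap instead of 10 MiB
        return qMin(fileSize, maxBufferSize);    // never bigger than the file itself
    }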
We first need to set the abort flag on csync and only then abort the
discovery job; otherwise the discovery thread could start a new job in
the meantime. We also need to make sure that the thread has exited before
we destroy the exclude list.
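The ordering, sketched with stand-in names (not the real csync/SyncEngine
API):

    #include <QThread>

    struct Discovery {
        void requestCsyncAbort() { /* set the abort flag that csync checks */ }
        void abortCurrentJob()   { /* cancel the discovery job in flight */ }
    };

    void shutdownDiscovery(Discovery &discovery, QThread &discoveryThread)
    {
        discovery.requestCsyncAbort(); // 1. no new job can be started from the thread
        discovery.abortCurrentJob();   // 2. then abort the one that may be running
        discoveryThread.quit();
        discoveryThread.wait();        // 3. the thread has exited...
        // ...only now is it safe to destroy the exclude list it was reading.
    }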
Events from the crash reporter suggest that the QNAM and its
child replies might get deleted before returning from this method
and the only possible cause we can see is that the inner event
loop has something to do with it.
Try keeping a ref on the QNAM while in this method to make sure
that it won't get deleted by the inner event loop.
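The workaround boils down to holding an extra reference for the duration
of the method, assuming the QNAM is already managed through a
QSharedPointer (names are illustrative):

    #include <QEventLoop>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QSharedPointer>

    void runInnerEventLoop(const QSharedPointer<QNetworkAccessManager> &qnam,
                           QNetworkReply *reply)
    {
        QSharedPointer<QNetworkAccessManager> keepAlive = qnam; // extra ref on the stack
        QEventLoop loop;
        QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
        loop.exec();
        // keepAlive is released only after the inner event loop has returned,
        // so the QNAM (and its child replies) can't be deleted underneath us.
    }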