We used to do it when the propagation starts; let's do it even before
the discovery starts. This way we'll have a chance to exploit the
information during the discovery phase.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This is a much better place than the GUI; this way we ensure the
propagator is always operating on up-to-date information. Previously, if
the propagator kicked in at startup without user interaction (not
showing the settings dialog) it would have no E2E information available
whatsoever... unsurprisingly, it would thus act on wrong information at
every turn.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Previously, conflicts with a different type on both ends led to sync
errors. Now they are handled in the expected way: the local item gets
renamed and the remote item gets propagated downwards.
This also adds a unit test for the TYPE_CHANGE case. It looks like
parts of it could be unified with the CONFLICT cases.
If the server has the 'uploadConflictFiles' capability, conflict
files will be uploaded instead of being ignored.
Uploaded conflict files have the following headers set during upload
OC-Conflict: 1
OC-ConflictBaseFileId: 172489174instanceid
OC-ConflictBaseMtime: 1235789213
OC-ConflictBaseEtag: myetag
when the data is available. Downloads accept the same headers in return
when downloading a conflict file.
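As a rough, hedged sketch of how those headers could be attached to an
upload request with Qt (the container type and its field names below are
illustrative, not the client's actual record):

    #include <QByteArray>
    #include <QNetworkRequest>

    // Hypothetical holder for the conflict base metadata; field names are
    // illustrative only.
    struct ConflictBaseInfo {
        QByteArray baseFileId;  // e.g. "172489174instanceid"
        qint64 baseModtime = 0; // e.g. 1235789213
        QByteArray baseEtag;    // e.g. "myetag"
    };

    // Attach the conflict headers to an upload request when the data is available.
    void addConflictHeaders(QNetworkRequest &request, const ConflictBaseInfo &info)
    {
        request.setRawHeader("OC-Conflict", "1");
        if (!info.baseFileId.isEmpty())
            request.setRawHeader("OC-ConflictBaseFileId", info.baseFileId);
        if (info.baseModtime > 0)
            request.setRawHeader("OC-ConflictBaseMtime", QByteArray::number(info.baseModtime));
        if (!info.baseEtag.isEmpty())
            request.setRawHeader("OC-ConflictBaseEtag", info.baseEtag);
    }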
In the absence of server support clients will identify conflict files
through the file name pattern and attempt to deduce the base fileid.
Base etag and mtime can't be deduced though.
The upload job for a new conflict file is now triggered directly from
the job that created the conflict file. A second sync run is no longer
necessary.
This commit does not yet introduce a 'username'-like identifier that
automatically gets added to conflict file filenames (to name the files
foo_conflict-Fred-1345.txt instead of just foo_conflict-1345.txt).
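For illustration, the filename-based fallback could match a pattern like
the one below; this regex only follows the example names in this message
(foo_conflict-1345.txt, foo_conflict-Fred-1345.txt), the client's real
pattern may differ:

    #include <QRegularExpression>
    #include <QString>

    // Heuristic check, based only on the example filenames above.
    bool looksLikeConflictFile(const QString &fileName)
    {
        static const QRegularExpression pattern(
            QStringLiteral("_conflict-(?:[^-]+-)?\\d+(?:\\.[^.]*)?$"));
        return pattern.match(fileName).hasMatch();
    }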
Add a new member for the UploadFileInfo in PropagateUploadCommon
to hold the full file path, as it can change if we use a temporary
file for the upload.
Adapt propagateuploadv1 to use the new calls.
This is motivated by the fact that QMetaObject::normalizedSignature takes 7.35%
of the CPU time in the LargeSyncBench (mostly from AbstractNetworkJob::setupConnections and
PropagateUploadFileV1::startNextChunk). It could be fixed by using normalized
signatures in the connection statements, but I thought it was a good opportunity
to modernize the code.
This commit only contains calls that were automatically converted with clazy.
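For illustration, the kind of conversion clazy performs looks like this
(a generic, self-contained Qt example rather than the client's own jobs):

    #include <QCoreApplication>
    #include <QTimer>

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);
        QTimer timer;

        // Old-style, string-based connect; the strings go through
        // QMetaObject::normalizedSignature at runtime:
        //     QObject::connect(&timer, SIGNAL(timeout()), &app, SLOT(quit()));

        // New-style connect, what clazy converts to: resolved at compile
        // time, no signature normalization involved.
        QObject::connect(&timer, &QTimer::timeout, &app, &QCoreApplication::quit);

        timer.start(100);
        return app.exec();
    }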
This will allow us to unify data structures between csync and libsync.
Utility functions like csync_time and c_std are still compiled as C
since we won't need to be coupled with Qt in the short term.
It now produces a summary error message indicating the problem.
Adjust blacklist database table to contain 'errorCategory'. This is
useful for two things:
- Reestablishing summary messages based on blacklisted errors. For
example if we don't retry a 507ed file, we still want to show the
message about space on the server
- Selectively wiping the blacklist: When we have ui for something like
"I deleted some files, please retry all files now!", we want to
delete all blacklist entries of a specific category only.
For now we use them for:
* csync errors: This allows them to appear in the sync issues tab
* insufficient local disk space, as a summary of individual file errors
Insufficient remote space will use them too, as might other issues that
are bigger than a single sync item.
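As a rough illustration of the "selectively wiping the blacklist" case
above (the table name and the enum below are assumptions; only the
errorCategory column comes from this message):

    #include <QSqlDatabase>
    #include <QSqlQuery>

    // Hypothetical category values; the client's actual encoding may differ.
    enum class ErrorCategory { Normal = 0, InsufficientRemoteStorage = 1 };

    // Wipe only the blacklist entries of one category, e.g. after the user
    // asked to retry everything that failed for lack of server space.
    void wipeBlacklistCategory(QSqlDatabase db, ErrorCategory category)
    {
        QSqlQuery query(db);
        query.prepare("DELETE FROM blacklist WHERE errorCategory = :cat");
        query.bindValue(":cat", static_cast<int>(category));
        query.exec();
    }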
Use qCInfo for anything that has general value for support and
development. Use qCWarning for any recoverable error and qCCritical
for anything that could result in data loss or would identify a serious
issue with the code.
Issue #5647
This gives more insight into the logs and allows setting fine-tuned
logging rules. The categories are set to only output Info by default,
so this allows us to provide more concise logging while keeping the
ability to extract more information for a specific category when
developing or debugging customer issues.
Issue #5647
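A minimal sketch of how such a category is used, assuming a hypothetical
category name (the client's actual names differ):

    #include <QDebug>
    #include <QLoggingCategory>

    // Category that only emits Info and above by default.
    Q_LOGGING_CATEGORY(lcPropagator, "sync.propagator", QtInfoMsg)

    void example()
    {
        qCDebug(lcPropagator) << "hidden by default";
        qCInfo(lcPropagator) << "general support/development information";
        qCWarning(lcPropagator) << "recoverable error";
        qCCritical(lcPropagator) << "possible data loss or serious code issue";
    }

    // Debug output for a single category can still be enabled when
    // debugging a customer issue, e.g.:
    //     QT_LOGGING_RULES="sync.propagator.debug=true"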
* make target duration a client option instead of a capability
* simplify algorithm for determining chunk size significantly
* preserve chunk size for the whole propagation, not just per upload
* move options to SyncOptions to avoid depending on ConfigFile
in the propagator
* move chunk-size adjustment to after a chunk finishes, not when
a new chunk starts
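A minimal sketch of the adjustment idea, assuming a simple proportional
scaling towards the target duration (the names, bounds and formula are
illustrative, not the client's exact code):

    #include <algorithm>
    #include <cstdint>

    // Called after a chunk finishes, not when a new chunk starts.
    std::int64_t adjustChunkSize(std::int64_t currentChunkSize,
                                 double lastChunkDurationSec,
                                 double targetDurationSec,
                                 std::int64_t minChunkSize,
                                 std::int64_t maxChunkSize)
    {
        if (lastChunkDurationSec <= 0 || targetDurationSec <= 0)
            return currentChunkSize; // nothing to learn from this chunk

        // Scale so the next chunk should take roughly the target duration,
        // then clamp to sane bounds. The result is kept for the whole
        // propagation, not just for the current upload.
        const double predicted =
            currentChunkSize * (targetDurationSec / lastChunkDurationSec);
        const std::int64_t next = static_cast<std::int64_t>(predicted);
        return std::min(maxChunkSize, std::max(minChunkSize, next));
    }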
The destructor of the PropagateItemJob will access the propagator's
_activeJobList, so the _rootJob needs to be destroyed before it.
The order of destruction is the reverse of the declaration order of the
members in the class, so put it at the end so that it is destroyed first.
(This made TestSyncEngine::testDirDownloadWithError crash sometimes
in the master branch.)
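A minimal, self-contained illustration of the rule relied on here (C++
destroys data members in the reverse order of their declaration; the
names are only illustrative):

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Job {
        std::vector<int> *jobList;
        explicit Job(std::vector<int> *list) : jobList(list) {}
        // Runs while the list still exists, because Job is owned by the
        // member declared last.
        ~Job() { std::cout << "list still has " << jobList->size() << " entries\n"; }
    };

    struct Propagator {
        std::vector<int> _activeJobList{1, 2, 3};  // declared first, destroyed last
        std::unique_ptr<Job> _rootJob =            // declared last, destroyed first
            std::make_unique<Job>(&_activeJobList);
    };

    int main() { Propagator p; } // prints: list still has 3 entries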
It is possible to create files with filenames that differ
only by case in NTFS, but most operations such as stat and
open only target one of these by default.
When that happens, we want to avoid uploading incorrect data
and give up on the file.
Typically this situation should never occur during normal use
of Windows. It can happen, however, when an NTFS partition is
mounted in another OS.
The crash reporter shows many crashes in OwncloudPropagator::scheduleNextJob.
We don't really know what could be the cause, but it's probably because
the _activeJobList contains dangling pointers.
So this patch makes sure to remove all the jobs from this list as they get
destroyed.
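A hedged sketch of the fix described here; the names mirror the message
but this is not the client's verbatim code:

    #include <QVector>

    class PropagateItemJob;

    struct PropagatorSketch {
        QVector<PropagateItemJob *> _activeJobList;
    };

    class PropagateItemJob
    {
    public:
        explicit PropagateItemJob(PropagatorSketch *propagator)
            : _propagator(propagator) {}

        // Unregister on destruction so the list never holds a dangling pointer.
        ~PropagateItemJob() { _propagator->_activeJobList.removeAll(this); }

    private:
        PropagatorSketch *_propagator;
    };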
This leads to crashes since we changed the connection to the parent
jobs to no longer be queued.
We don't really need to bubble up the finished state through the
parents in that case, and it would also mean that we'd recurse
all the way through the leaves as we go up to each parent. So just call
abort directly on the OwncloudPropagator and make sure the abort
call is posted to the event loop.
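A small sketch of what "posted to the event loop" means in practice,
assuming abort() is an invokable slot on the propagator (names follow
the message, not the exact client code):

    #include <QObject>

    void requestAbort(QObject *propagator)
    {
        // Queued invocation: abort() runs from the event loop instead of
        // being called directly from within the signal that requested it.
        QMetaObject::invokeMethod(propagator, "abort", Qt::QueuedConnection);
    }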
Avoid using connections to report up the job tree for signals
that we can directly communicate to the OwncloudPropagator.
This slightly reduces the memory usage and avoids passing those calls
through the whole parent chain.
In preparation for the PropagateDirectory refactoring, simplify things
by removing WaitForFinishedInParentDirectory, which is currently
implemented as a one-level check.
This value is important for directory items, but it is never used, since
a directory CSYNC_INSTRUCTION_RENAME item will always be in
PropagateDirectory::_firstJob, which has to pass through its own
PropagateDirectory job's parallelism() before reaching the parent's
_subJobs optimization.
Since PropagateDirectory::parallelism can only return WaitForFinished
or FullParallelism, that value is lost. So this commit doesn't
change the behavior for directories, and allows file renames to be
scheduled in parallel across directories (which isn't a problem).
It is possible for _firstJob to be marked as finished on abort
before its parent PropagateDirectory is marked as finished,
allowing a posted scheduleNextJob call to schedule the child job
in between.
The test sets OWNCLOUD_MAX_PARALLEL to 1 to disable parallelism.
But since the maximum amount of parallelism is twice that value, this
does not work.
So change the way we compute hardMaximumActiveJob: use the value of
OWNCLOUD_MAX_PARALLEL as this hard maximum and derive the maximum amount
of transfer jobs from it, instead of the other way around.
A result of this change is that, in case of a bandwidth limit, we keep the
default of 6 non-transfer jobs in parallel. I believe that's fine since
the short jobs do not really use bandwidth, so we can still keep the same
amount of small jobs.
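A rough sketch of the resulting relationship; the default of 6 and the
division by two are assumptions for illustration, not necessarily the
client's exact numbers:

    #include <QByteArray>
    #include <QtGlobal>
    #include <algorithm>

    // Hard cap on all active jobs, taken from OWNCLOUD_MAX_PARALLEL if set.
    int hardMaximumActiveJob()
    {
        bool ok = false;
        const int fromEnv = qgetenv("OWNCLOUD_MAX_PARALLEL").toInt(&ok);
        return (ok && fromEnv > 0) ? fromEnv : 6; // assumed default
    }

    // The number of parallel transfer jobs is derived from the hard cap,
    // instead of the other way around.
    int maximumActiveTransferJob(bool bandwidthLimited)
    {
        if (bandwidthLimited)
            return 1; // transfers serialized; small jobs still run in parallel
        return std::max(1, hardMaximumActiveJob() / 2);
    }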