We need Qt 5.9 for HTTP2 because, even though Qt 5.8 already has
support for it, there is a critical bug in the HTTP2 implementation
which makes it unusable [ https://codereview.qt-project.org/186050 and
https://codereview.qt-project.org/186066 ]
When using HTTP2, we can use many more parallel network requests; this
is especially good for small file handling.
Lower the priority of the GET and PUT propagation jobs, so the
PROPFINDs for the quota or the selective sync UI will not be blocked
by them.
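A minimal sketch of lowering the request priority with plain Qt; the
URL and helper function are illustrative, not the actual propagator
code:

    #include <QByteArray>
    #include <QNetworkAccessManager>
    #include <QNetworkRequest>
    #include <QUrl>

    // Hypothetical helper: data-transfer requests get LowPriority so
    // that short PROPFINDs (which keep the default NormalPriority)
    // are dispatched first when requests queue up.
    void startUpload(QNetworkAccessManager *nam, const QByteArray &data)
    {
        QNetworkRequest req(QUrl("https://server.example/remote.php/webdav/f"));
        req.setPriority(QNetworkRequest::LowPriority);
        nam->put(req, data); // reply ownership/handling omitted
    }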
Since these errors are blacklisted, it can take up to 24h before items
that had a 507 error are retried. This way users can intervene and
trigger an upload attempt immediately.
It now produces a summary error message indicating the problem.
Adjust the blacklist database table to contain 'errorCategory'. This is
useful for two things:
- Reestablishing summary messages based on blacklisted errors. For
  example, if we don't retry a 507ed file, we still want to show the
  message about space on the server
- Selectively wiping the blacklist: when we have UI for something like
  "I deleted some files, please retry all files now!", we want to
  delete all blacklist entries of a specific category only.
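A rough sketch of the schema change and the selective wipe it enables,
shown with QSqlQuery for illustration; the table and column names
follow the text, not necessarily the actual journal schema:

    #include <QSqlDatabase>
    #include <QSqlQuery>

    void migrateAndWipeCategory(QSqlDatabase db, int category)
    {
        QSqlQuery query(db);
        // Record why each entry was blacklisted.
        query.exec("ALTER TABLE blacklist "
                   "ADD COLUMN errorCategory INTEGER DEFAULT 0");

        // "Please retry all files now!" for one category only:
        query.prepare("DELETE FROM blacklist WHERE errorCategory = ?");
        query.addBindValue(category);
        query.exec();
    }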
* A bunch of code was determining sync status by ad-hoc comparisons of
some progress info fields. It can now just check the status, making it
easier to comprehend.
* There's a clear indication for "a new sync is starting", which helps
wiping the issues tab at the right time.
For now we use them for:
* csync errors: This allows them to appear in the sync issues tab
* insufficient local disk space, as a summary of individual file errors
Insufficient remote space will use them too, as might other issues that
are bigger than a single sync item.
Before, blacklisted errors were set to FileIgnored status and hence
displayed as warnings. Now, they have their own BlacklistedError
category which allows them to appear as errors in the issues list and in
the shell integration icons.
* For conflicts where mtime and size are identical:
  a) If there's no remote checksum, skip (unchanged)
  b) If there's a remote checksum that's a useful hash, create a
     PropagateDownload job and compute the local hash. If the hashes
     are identical, don't download the file and just update the
     metadata (see the sketch below).
* Avoid exposing the existence of checksumTypeId beyond the database
layer. This makes handling checksums easier in general because they
can usually be treated as a single blob.
This change was prompted by the difficulty of producing file_stat_t
entries uniformly from PROPFINDs and the database.
Issue #5783
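A minimal sketch of the skip decision in step b), using plain
QCryptographicHash; the function and the hex comparison are
illustrative, not the client's actual checksum plumbing:

    #include <QCryptographicHash>
    #include <QFile>

    // Returns true when the local content hashes to the same value as
    // the remote checksum, so the download can be replaced by a pure
    // metadata update. Assumes a lowercase hex SHA-1 from the server.
    bool canSkipDownload(const QString &localPath,
                         const QByteArray &remoteSha1Hex)
    {
        QFile file(localPath);
        if (!file.open(QIODevice::ReadOnly))
            return false; // can't verify, download as usual

        QCryptographicHash hasher(QCryptographicHash::Sha1);
        if (!hasher.addData(&file))
            return false;
        return hasher.result().toHex() == remoteSha1Hex;
    }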
When the directory that should be removed by selective sync contains
changes, we used to ignore the whole subtree instead of only ignoring
new files.
We cannot ignore the whole directory; we need to ignore only the
directories that do not have files to remove.
Use qCInfo for anything that has general value for support and
development. Use qCWarning for any recoverable error and qCCritical
for anything that could result in data loss or would identify a serious
issue with the code.
Issue #5647
This gives more insight into the logs and allows setting fine-tuned
logging rules. The categories are set to only output Info by default,
so this allows us to provide more concise logging while keeping the
ability to extract more information for a specific category when
developing or debugging customer issues.
Issue #5647
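For illustration, this is roughly what such a category looks like with
Qt's logging framework; the category name is made up:

    #include <QLoggingCategory>

    // Only Info and above is emitted by default; debug output can be
    // re-enabled per category, e.g. with
    // QT_LOGGING_RULES="sync.propagator.debug=true".
    Q_LOGGING_CATEGORY(lcPropagator, "sync.propagator", QtInfoMsg)

    void example()
    {
        qCInfo(lcPropagator) << "general value for support/development";
        qCWarning(lcPropagator) << "recoverable error";
        qCCritical(lcPropagator) << "possible data loss or serious bug";
    }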
Before this patch, too-deep folders would just be ignored, without any
feedback. This patch makes it so that too-deep folders are properly
shown as ignored in the UI.
Also increase the MAX_DEPTH.
Issue: #1067
* make target duration a client option instead of a capability
* simplify algorithm for determining chunk size significantly
* preserve chunk size for the whole propagation, not just per upload
* move options to SyncOptions to avoid depending on ConfigFile
in the propagator
* move chunk-size adjustment to after a chunk finishes, not when
a new chunk starts
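A simplified sketch of the adjustment step described above, run after
a chunk finishes; the names and the clamping bounds are illustrative:

    #include <QtGlobal>

    // Scale the chunk size so the next chunk should take roughly the
    // configured target duration. Called once a chunk has finished,
    // not when a new one starts.
    qint64 adjustChunkSize(qint64 currentSize, qint64 lastChunkMs,
                           qint64 targetDurationMs,
                           qint64 minSize, qint64 maxSize)
    {
        if (lastChunkMs <= 0 || targetDurationMs <= 0)
            return currentSize; // option disabled or nothing measured
        const qint64 predicted =
            currentSize * targetDurationMs / lastChunkMs;
        return qBound(minSize, predicted, maxSize);
    }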
In 8ef11a38c9, we started blacklisting
SoftError for 0 seconds. But if two syncs happen less than 1s apart,
we would still prevent the second one from happening.
So make sure entries expire when 0 seconds have passed.
We can do that because the only changes that were in master but not in
2.3 were the translation changes, the documentation changes, and the
support for the 'M' permission, which we want in 2.3.
Added two checkboxes on the advanced page of the Account Wizard to
change the first options. Also added a checkbox in the general settings
to ask for confirmation for external storages.
Theme options allow hiding the checkboxes in the wizard.
As described in issue #5340
We now delete subjobs as their propagation completes. This also allows
us to release the item, by making sure that nothing else is holding a
reference to it.
Remove the stored SyncFileItemVector from SyncEngine and SyncResult,
and instead gather the needed info progressively as each itemCompleted
signal is emitted.
This frees holes in the heap as propagation goes, allowing many memory
allocations to be served without requesting more virtual memory from
the OS, and preventing the memory usage from growing steadily.
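A rough sketch of the per-item gathering, with stand-in types; in the
real code this would be connected to SyncEngine::itemCompleted:

    #include <QSharedPointer>
    #include <QStringList>

    struct SyncFileItem { bool success = true; QString file; QString error; };
    using SyncFileItemPtr = QSharedPointer<SyncFileItem>;

    struct ResultCollector {
        void onItemCompleted(const SyncFileItemPtr &item) {
            if (item->success)
                ++successCount; // keep only the aggregate we need
            else
                errors << item->file + QLatin1String(": ") + item->error;
            // Nothing stores the item, so it is freed together with
            // its finished subjob instead of at the end of the sync.
        }
        int successCount = 0;
        QStringList errors;
    };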
This was to catch duplicate emissions for PropagateDirectory, but we
don't emit this signal from there anymore.
This fixes a warning about PropagatorJob not being a registered metatype.
This reverts commit fe42c1a818.
Stale chunks might be there because a file was removed or would just
not be uploaded, for any reason.
We just start the DeleteJob, but we don't care whether it succeeds or
not.
Relates to https://github.com/owncloud/core/issues/26981
One of the tests covers the case where the file is modified on the
server during the upload, so this tests the precondition-failed error.
The FakeGetReply logic was modified because resizing a 150MB QByteArray
in 16k increments just did not scale when downloading a big file.
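The fix boils down to reserving the buffer up front instead of growing
it 16k at a time, roughly like this (names illustrative):

    #include <QByteArray>

    QByteArray makePayload(qint64 totalSize)
    {
        QByteArray payload;
        payload.reserve(int(totalSize)); // one allocation up front
        const QByteArray chunk(16 * 1024, 'X'); // fake 16k reads
        while (payload.size() < totalSize)
            payload.append(chunk); // no reallocation churn anymore
        payload.truncate(int(totalSize));
        return payload;
    }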
We are going to change the webdav path depending on the capabilities.
But the SyncEngine and csync might have been created before the
capabilities are retrieved.
The main reason why we gave the path to the sync engine was to pass it
to csync. But csync doesn't need this URL anymore, as everything is
done by the discovery classes in libsync, which use the network jobs
that get their URLs from the account.
So csync does not need the remote URI.
shortenFilename in folderstatusmodel.cpp was useless because the string
is the _file of a SyncFileItem, which is the relative file name; that
name never starts with owncloud://.
All the csync tests create the folder because csync used to check
whether the folder exists. But we don't need to do that anymore.
Some listeners detect whether a sync is finished by checking
for isUpdatingEstimates and completedFiles >= totalFiles. But
if a sync didn't transfer any files, we never sent a signal
with these values. Now we do.
The ownCloud 9.1 server has a data-fingerprint property that the admin
must change in case of backup restoration. When it changes, the client
understands that a backup was restored, and will generate conflict
files and re-upload new files.
The heuristics-based system checks that there are at least two files
whose mtime was put back in the past and no files whose mtime goes
forward. In that case we ask the user before creating the conflicts.
This commit disables the heuristics for newer servers that have the
data-fingerprint property. It also changes the heuristics threshold to
two hours, because we want to avoid false positives due to clock
errors, and losing two hours of changes to a backup restoration is
probably not so bad.
We only ask the user in the heuristics-based approach, so in practice
this means that the "backup detected" dialog will no longer appear
with newer servers.
Relates issues #5260, #5109
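The heuristic condenses to something like the following sketch; the
two-hour threshold comes from the text, the names are made up:

    #include <QtGlobal>

    // Ignore mtime jumps smaller than two hours to avoid false
    // positives caused by clock errors.
    static const qint64 kBackupThresholdSecs = 2 * 3600;

    struct MtimeDrift { int movedBack = 0; int movedForward = 0; };

    void accumulate(MtimeDrift &d, qint64 dbMtime, qint64 serverMtime)
    {
        if (serverMtime + kBackupThresholdSecs < dbMtime)
            ++d.movedBack;
        else if (serverMtime > dbMtime)
            ++d.movedForward;
    }

    bool looksLikeBackupRestoration(const MtimeDrift &d)
    {
        // At least two files went back in time, none went forward.
        return d.movedBack >= 2 && d.movedForward == 0;
    }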
Two bugs:
- Changed files were not considered as moves: they were re-downloaded,
but the old file was not removed from the database. The change in
owncloudpropagator.cpp takes care of removing the old entries.
- The next sync would then remove the file on the server in the old
folder. This was not a problem until we started reusing the sync
engine, because the _renamedFolders map is not cleared between syncs.
Before, we were deleting a non-existing file; now we would delete the
actual file.
Also improve the tests to be able to do moves on the server.
This includes support for file ids.
Issue #5192
Before, it was in Folder; however, the command line client does not
have the Folder class. To avoid duplicating code, the function that
generates the sync journal name moved to the SyncEngine class.
This would happen if the directory first needed to be created
through an mkdir propagation job. This job's itemCompleted signal
would trigger the directory to show as SYNC even though its children
are still propagating.
Fix the issue by tracking the sync count for each file, affecting
its parents. This allows us to get rid of the O(n) vector lookup
for each status query, and to properly track the hierarchical sync
status of a directory.
This also removes the itemCompleted signal emission from the
PropagateDirectory job. Since we only needed it for overlay icons, and
since this job doesn't do any direct propagation, we can remove it
to ensure that we won't call itemCompleted twice for the item attached
to Propagate*Mkdir jobs (since the PropagateDirectory is backed by
the same SyncFileItem, instruction and status).
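A condensed sketch of the per-path sync counting, with made-up names;
a QHash lookup replaces the O(n) vector scan:

    #include <QHash>
    #include <QString>

    class SyncCountTracker {
    public:
        // Every ancestor directory is counted too, so a folder stays
        // SYNC while any of its children are still propagating.
        void startSync(const QString &path) { bump(path, +1); }
        void finishSync(const QString &path) { bump(path, -1); }
        bool isSyncing(const QString &path) const {
            return _syncCount.value(path, 0) > 0;
        }
    private:
        void bump(const QString &path, int delta) {
            for (QString p = path; !p.isEmpty(); p = parentOf(p)) {
                const int count = _syncCount.value(p, 0) + delta;
                if (count <= 0)
                    _syncCount.remove(p);
                else
                    _syncCount.insert(p, count);
            }
        }
        static QString parentOf(const QString &path) {
            const int slash = path.lastIndexOf(QLatin1Char('/'));
            return slash <= 0 ? QString() : path.left(slash);
        }
        QHash<QString, int> _syncCount;
    };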
The current way of tracking the need to update the metadata without
propagation, using a separate flag, makes it difficult to track
priorities between the local and remote tree. The logic is also
difficult to cover since the possibilities matrix isn't 100% covered,
leaving the flag used in only a few situations (mostly involving
folders, but not only).
The reason we need to change this is to be able to track the sync
state of files for overlay icons. The instruction alone can't be
used since CSYNC_INSTRUCTION_SYNC is used for folders even though
they won't be propagated. Removing this logic is however not possible
without using something other than CSYNC_INSTRUCTION_NONE, since too
many code paths (rightfully) interpret it as meaning "nothing to do".
This patch adds a new CSYNC_INSTRUCTION_UPDATE_METADATA instruction
to let the update and reconcile steps tell the SyncEngine to update
the metadata of a file without any propagation. Other flags are left
to be interpreted by the implementation as implicitly needing a
metadata update or not, as was already the case for most file
propagation jobs. For example, CSYNC_INSTRUCTION_NEW for directories
now also implicitly updates the metadata.
Since it's now impossible for folders to emit CSYNC_INSTRUCTION_SYNC
or CSYNC_INSTRUCTION_CONFLICT, the corresponding code paths in the
sync engine have been removed.
Since the reconcile step can now know whether the local tree needs a
metadata update while the remote side might want propagation, the
localMetadataUpdate logic in SyncEngine::treewalkFile now simply uses
a CSYNC_INSTRUCTION_UPDATE_METADATA for the local side, which is
implemented as a different database query.
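Conceptually, the engine's handling of the new instruction looks like
this sketch; the enum values and helpers are stand-ins for the real
csync instructions and journal calls:

    #include <cstdio>

    enum Instruction { InstrNone, InstrUpdateMetadata, InstrNew, InstrSync };

    static void updateJournalRecord() { std::puts("journal record refreshed"); }
    static void createPropagationJob() { std::puts("propagation job queued"); }

    void dispatch(Instruction instruction)
    {
        switch (instruction) {
        case InstrUpdateMetadata:
            updateJournalRecord(); // no data transfer, database only
            break;
        case InstrNone:
            break; // still unambiguously means "nothing to do"
        default:
            createPropagationJob(); // NEW, SYNC, ... as before
        }
    }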
To be able to test the SyncEngine efficiently, a set of server
mocking classes has been implemented on top of QNetworkAccessManager.
The local disk side hasn't been mocked since this would require adding
a large abstraction layer in csync. The SyncEngine is instead pointed
to a different temporary dir in each test and we test by interacting
with files in this directory instead.
The FakeFolder object wraps the SyncEngine with those abstractions
and allows controlling the local files and the fake remote state
through the FileModifier interface, using a FileInfo tree structure
for the remote-side implementation as well as feeding and comparing
the states on both sides in tests.
Tests run fast and require no setup, but each server feature
that we want to test on the client side needs to be implemented in
this fake-objects library. For example, the OC-FileId header isn't
set as of this commit, and we can't test the file move logic properly
without implementing it first.
The TestSyncFileStatusTracker tests already contain a few QEXPECT_FAIL
for what I deem to be issues that need to be fixed in order to catch
up on our test coverage without making this patch too huge.
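The mocking hook is QNetworkAccessManager's virtual createRequest; a
minimal sketch follows (the real fake manager returns hand-crafted
replies instead of calling the base class):

    #include <QDebug>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>

    class FakeQnam : public QNetworkAccessManager {
    public:
        using QNetworkAccessManager::QNetworkAccessManager;
    protected:
        QNetworkReply *createRequest(Operation op,
                                     const QNetworkRequest &req,
                                     QIODevice *outgoingData) override
        {
            // A real mock dispatches on op and req.url() into an
            // in-memory FileInfo tree; this sketch only logs and
            // falls through to the real network.
            qDebug() << "intercepted" << op << req.url();
            return QNetworkAccessManager::createRequest(op, req,
                                                        outgoingData);
        }
    };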
Once upon a time, the SyncEngine was instantiated once per sync. But
now that the SyncEngine is kept between syncs, we need to reset all
these variables between syncs.
We first need to set the abort flag on csync and only then abort the
discovery job; otherwise, the discovery thread could start a new job
in the meantime.
We also need to make sure that the thread has exited before we destroy
the exclude list.
The problem in this case is if we rename the file "xxx" to
"invalid\file". The rename will fail because the new filename contains
a slash, and the file will be blacklisted.
But then if the user re-renames the file to "valid_name", we should
invalidate the blacklist entry and retry the upload. We did not do
that because renaming doesn't change the mtime and we did not store
the rename target in the database.
IL issue 558
Make sure that we push the new status when the status of the
SyncEngine changes. SyncEngine::started comes a bit late, only when
the propagation starts, but that's better in this case since child
folders will only switch to Sync in aboutToPropagate.
Also fix an issue with SyncEngine::findSyncItem when using an empty
fileName: this would match and return the wrong item. It doesn't
currently happen with the code since fileStatus won't call it with an
empty fileName anymore.
When a conflict rename or a temporary rename fails, notify the
LockWatcher. It will regularly check whether the file has become
accessible again. When it has, another sync is triggered.
owncloud/enterprise#1288
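A rough sketch of such a watcher; the polling interval and the
open-for-append probe are illustrative, not the actual LockWatcher
implementation:

    #include <QFile>
    #include <QObject>
    #include <QSet>
    #include <QString>
    #include <QTimer>

    class LockWatcherSketch : public QObject {
        Q_OBJECT
    public:
        explicit LockWatcherSketch(QObject *parent = nullptr)
            : QObject(parent) {
            connect(&_timer, &QTimer::timeout,
                    this, &LockWatcherSketch::check);
            _timer.start(5 * 1000);
        }
        void addFile(const QString &path) { _watched.insert(path); }
    signals:
        void fileUnlocked(const QString &path); // trigger another sync
    private slots:
        void check() {
            const QSet<QString> snapshot = _watched;
            for (const QString &path : snapshot) {
                QFile file(path);
                // Opening for append fails while another process
                // holds an exclusive lock (common case on Windows).
                if (file.open(QIODevice::Append)) {
                    file.close();
                    _watched.remove(path);
                    emit fileUnlocked(path);
                }
            }
        }
    private:
        QTimer _timer;
        QSet<QString> _watched;
    };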
Imagine this scenario on a read-only share: you move a file from
one location to a new directory in the read-only share.
Creating the read-only directory fails with a permission error.
But we should also restore the files that have been moved.
IL issue 542
This also removes all smartness from the SocketApi about the status
of a file and solely uses info from the current and last sync.
This simplifies the logic a lot and prevents any discrepancy between
the status shown in the activity log and the one displayed on the
overlay icon of a file.
The main benefit of the additional simplicity is that we are able
to push all new statuses of a file reliably (including warnings for
parent folders) to properly update the icon on overlay implementations
that don't allow us to invalidate the status cache, like on OS X.
Both errors and warnings from the last sync are now kept in a set,
which is also used to affect the parent folders of an error.
To make sure that errors don't become warning icons on a second
sync, SyncFileItem::_hasBlacklistEntry is also interpreted as an error.
This also renames StatusIgnore to StatusWarning to match this semantic.
SyncEngine::aboutToPropagate is used in favor of
SyncEngine::syncItemDiscovered since the latter is emitted before file
permission warnings are set on the SyncFileItem. SyncEngine::finished
is not used since we have all the needed information in
SyncEngine::itemCompleted.
SyncFileStatus' purpose is to track overlay icon status.
Instead of putting comments and default: clauses in switches
on both sides about unused enum values, use different enums.
This also removes STATUS_NEW, which is the equivalent of
STATUS_SYNC in all shell extension implementations, and
removes STATUS_UPDATED and STATUS_STAT_ERROR, which have
the same semantics as STATUS_UPTODATE and STATUS__ERROR.
The size on the server might be different from the size on the client
with certain backends, so it should be ignored.
(cherry picked from commit 9222db6df9b19a21e1bea5a238d745d96a6385e3)
The creation doesn't need to be separated from the SyncEngine anymore.
This allows the SyncEngine to be created in fewer steps if we want to
use it in tests.
This moves most of the direct csync code from Folder into the SyncEngine.
The exclude file logic for the context has been wrapped using the
existing ExcludedFiles class as well.
This is the fix for issue #4370.
Steps to reproduce the bug:
1) have lots of files in directory "dir1"
2) do mkdir dir2 && mv dir1/* dir2
3) DURING the sync (which takes time because of the many moves) do
mkdir dir3 && mv dir2/* dir3/
4) observe that files are PUT in the next sync
The problem is that SyncJournalFileRecord::SyncJournalFileRecord will
fail to get the inode after the first move because the files have
already been moved on the filesystem. Normally it should use the inode
from the discovery phase in that case, but that does not work because
in case of moves the discovery data comes from the remote node, so the
code in SyncEngine::treewalkFile would not set the inode.
Test in https://github.com/owncloud/smashbox/pull/143
Servers older than 8.1 cannot cope with invalid characters in
filenames, so we must not send them from the client. We were already
checking for new files, but not for renames or new directories.
https://github.com/owncloud/enterprise/issues/1009
* Ensure that every time a file becomes a directory, or the other way
around, the item is flagged as INSTRUCTION_TYPE_CHANGE.
* Delete the badly-typed entity if necessary in the propagation jobs.
Since the presence of any path in SyncEngine::_syncedItems
would translate into a SYNC status, platforms that don't refresh
all of their status cache after an UPDATE_VIEW message, like OS X
or Windows, would keep displaying that status even after all
files were successfully synchronized.
- Read SyncFileItem::_status to determine the status to display mid-sync
- Match moved paths also to _renameTarget since this might be the
path to match
- Make sure that PropagateDirectory jobs also set SyncFileItem::_status
properly
If all the files bring us to a past timestamp, it is possibly a
backup restoration on the server. In that case we don't want to just
overwrite newer files with the older ones.
Issue #2325
* Compute the content checksum (in addition to the optional
transmission checksum) during upload (.eml files only)
* Add hook to compute and compare the checksum in csync_update
* Add content checksum to database, remove transmission checksum