When a new folder becomes selective-sync excluded, we already mark it
and all its parent folders with _invalid_ etags to force rediscovery.
That's not enough, however. Later calls to csync_statedb_get_below_path
could still pull data about the excluded files into the remote tree.
That led to incorrect behavior, such as uploads happening for folders
that had been explicitly excluded from sync.
To fix the problem, csync_statedb_get_below_path is adjusted to not read
the data about excluded folders from the database.
Currently we can't wipe this data from the database outright because we
need it to determine whether the files in the excluded folder can be
wiped away or not.
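A minimal sketch of the adjusted lookup, assuming hypothetical
is_path_excluded() and add_to_tree() helpers (the real csync code
differs in its details):

    #include <sqlite3.h>

    // Sketch: while iterating the rows below a path prefix, skip
    // anything under a selective-sync excluded folder so it never
    // enters the remote tree.
    void read_rows_below_path(sqlite3 *db, sqlite3_stmt *stmt)
    {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            const char *path = (const char *)sqlite3_column_text(stmt, 0);
            if (is_path_excluded(db, path))
                continue; /* excluded: pretend the row does not exist */
            add_to_tree(stmt); /* unchanged handling for everything else */
        }
    }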
See owncloud/enterprise#1965
* For requests:
- reuse the original QNetworkRequest, so headers and attributes
are the same as in the original request
- determine the original http method from the reply and the request
attributes
- keep the original request body around such that it can be sent
again in case the request is redirected (see the sketch after this list)
* Simplify the interface that is used for creating new requests in
AbstractNetworkJob.
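A rough sketch of the redirect path; sendRequest() and requestVerb()
are illustrative helpers, while request(), operation() and
CustomVerbAttribute are the actual Qt APIs involved:

    // Reuse the original QNetworkRequest so that headers and
    // attributes survive the redirect; only the URL changes.
    QNetworkRequest req = reply->request();
    req.setUrl(redirectUrl);

    // Recover the original verb: standard methods come from
    // reply->operation(), custom verbs (PROPFIND, MKCOL, ...) from
    // the request's CustomVerbAttribute.
    QByteArray verb = requestVerb(reply);

    // The body device was kept alive for exactly this case.
    _requestBody->seek(0);
    sendRequest(verb, req, _requestBody);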
We were removing the whole journal db when the user wanted to keep all files,
but that would also remove the selective sync lists.
We should only remove the metadata table.
Issue #5484
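In sketch form, assuming direct Qt SQL access (the client actually
goes through its own SQL wrapper):

    #include <QSqlDatabase>
    #include <QSqlQuery>

    // Clear only the file metadata; the selective sync lists live in
    // their own tables and must survive.
    bool clearMetadata(const QString &journalPath)
    {
        QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
        db.setDatabaseName(journalPath);
        if (!db.open())
            return false;
        QSqlQuery query(db);
        return query.exec("DELETE FROM metadata"); // not "rm journal.db"
    }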
- Put all tests in the bin directory so that DLLs can be loaded
- Add missing exports
- Skip tests that use code depending on zlib
- The "GMT" timezone is named differently, use the int constructor instead
5 tests are still failing; it's not really worth fixing them at the moment
since no developer is currently using Windows as their main platform.
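For the timezone point, a sketch of the workaround:

    // QTimeZone("GMT") may not resolve on Windows, where the zone
    // database names it differently; the offset-seconds constructor
    // sidesteps the name lookup entirely.
    QTimeZone gmt(0); // UTC+0, valid everywhere
    QDateTime dt(QDate(2017, 1, 1), QTime(0, 0), gmt);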
On macOS /var is a symlink to /private/var, and we have to make sure that
we use the canonical path before and after it enters the code so that we
compare paths correctly.
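A sketch of the normalization using Qt's canonical path resolution:

    #include <QFileInfo>

    QString canonical(const QString &path)
    {
        // canonicalFilePath() resolves symlinks, so /var/... and
        // /private/var/... both come back as /private/var/... Note
        // that it returns an empty string for paths that do not
        // exist yet, so real code needs a fallback for those.
        return QFileInfo(path).canonicalFilePath();
    }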
- We need to use a QGuiApplication on macOS or else we don't get notifications
- Switch to use QSignalSpy rather than lists and sleeps
- Use system() for all modifications since we pass kFSEventStreamCreateFlagIgnoreSelf
- Keep using the local process on Windows since it catches its own events
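The pattern, sketched with an assumed FolderWatcher exposing a
pathChanged signal:

    // Wait for the signal instead of sleeping. The modification runs
    // in a child process via system(), so the
    // kFSEventStreamCreateFlagIgnoreSelf flag does not swallow the
    // resulting event.
    QSignalSpy spy(&watcher, &FolderWatcher::pathChanged);
    system("touch /tmp/watched/file.txt");
    QVERIFY(spy.wait(5000)); // event-loop wait, no fixed sleep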
If aborted, _firstJob could be marked as finished before its parent
PropagateDirectory was marked as finished, allowing a posted
scheduleNextJob call to schedule the child job in between.
This was to catch duplicate emissions for PropagateDirectory but we
don't emit this signal anymore from there.
This fixes a warning about PropagatorJob not being a registered metatype.
This reverts commit fe42c1a818.
Stale chunks might be there because a file was removed or would just not
be uploaded, for any reason.
We just start the DeleteJob, but we don't care whether it succeeds or not.
Relates to https://github.com/owncloud/core/issues/26981
One of the tests covers the case where the file is modified on the server
during the upload, so it exercises the precondition-failed error.
The FakeGetReply logic was modified because resizing a 150 MB QByteArray
in increments of 16k just did not scale when downloading a big file.
Relates to https://github.com/owncloud/core/issues/26981
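The FakeGetReply change, in sketch form:

    // Build the fake payload with a single allocation instead of
    // growing it 16k at a time, which re-copies the whole array over
    // and over for a 150 MB file.
    QByteArray payload(fileSize, 'W'); // one allocation, constant fill
    // old pattern, roughly:
    //   while (payload.size() < fileSize)
    //       payload.append(QByteArray(16 * 1024, 'W'));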
We do not track the success or error of the DeleteJob because it does not
matter. If it fails, it might be because the chunks were already removed.
If not, the chunks will be stale, but the server must do some cleanup from
time to time anyway because we do not always remove the chunks.
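Fire-and-forget, sketched against the client's DeleteJob (the path
variable is illustrative):

    // Start the job and ignore the outcome; a failure only means the
    // chunks were already gone or will be garbage-collected by the
    // server eventually.
    auto *job = new DeleteJob(account, staleChunkPath, this);
    job->start(); // no slot connected to the result, on purpose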
The current logic tried to avoid a DB lookup just to fetch whether
the file is shared or not since that info is already in the
SyncFileItem. The implementation would, however, need to decrease the
sync count for itself (and its parents) before emitting the new status,
thus emitting the OK status for parents before the last child that
ended the propagation for that folder.
Change the implementation to achieve what we want: give the
possibility to decSyncCount to use a pre-fetched sharing state while
still doing the emission for all involved files. This ensures that
the leaf file also gets its status emitted before its parents.
Issue #4797
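The shape of the reworked decSyncCount, sketched (all names besides
decSyncCount are illustrative):

    // Emit for the leaf first, using the sharing state the caller
    // already fetched, then walk up and emit for each parent whose
    // last child just finished.
    void decSyncCount(const QString &path, SharedFlag shared)
    {
        emitStatus(path, shared); // leaf gets its status first
        for (QString p = parentOf(path); !p.isEmpty(); p = parentOf(p)) {
            if (--_syncCount[p] == 0)
                emitStatus(p, lookupSharedState(p)); // parents after
        }
    }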
Shrinks owncloud binary by 24 KB and libowncloudsync by 14 KB.
I don't know whether it has any influence on memory usage or runtime
speed, though. Was worth a try.
Previously this wasn't happening for errors that were not
NormalErrors because they don't end up in the blacklist.
This revises the resetting logic to be independent of the
error blacklist and make use of UploadInfo::errorCount
instead.
412 errors should reset chunked uploads because they might be
indicative of a checksum error.
Additionally, server bugs might require that additional
errors cause an upload reset. To allow that, a new capability
is added that can be used to advise the client about this.
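Client-side, the check could look roughly like this, assuming the
capabilities were parsed into a QVariantMap (the capability key is
named after the commit's intent):

    // Reset the chunked upload when the server replied 412, or with
    // any status code the server's capabilities flagged as
    // reset-worthy.
    const QVariantList codes = capabilities.value("dav").toMap()
        .value("httpErrorCodesThatResetFailingChunkedUploads").toList();
    if (httpStatus == 412 || codes.contains(httpStatus)) {
        // drop the stored chunk info so the next attempt starts over
        journal->setUploadInfo(item->_file, SyncJournalDb::UploadInfo());
    }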
We are going to change the webdav path depending on the capabilities.
But the SyncEngine and csync might have been created before the
capabilities are retrieved.
The main reason we gave the path to the sync engine was to pass it on to
csync. But csync no longer needs this URL, as everything is done by the
discovery classes in libsync, which use the network jobs that get their
URLs from the account.
So csync does not need the remote URI.
shortenFilename in folderstatusmodel.cpp was useless because the string
is the _file of a SyncFileItem, which is the relative file name; that
name never starts with owncloud://.
All the csync tests created the folder because csync used to check
whether the folder exists. We don't need to do that anymore.
The "S" in the permission is only for the "Shared with me" files.
It is only used to show the shared status in the overlay icons.
But we also wish to show the shared status for files that are shared
"by" the users. We can find that out using the 'share-types' webdav
property. If set, then we are sharing the object.
We fake an 'S' in the permissions since, for our purposes, they mean the
same.
Issue #4788
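In the discovery code this amounts to something like (names
illustrative):

    // If the server reported any share-types for the item, we are
    // sharing it ourselves; fake the 'S' so the existing overlay
    // logic treats it like a "shared with me" file.
    if (!shareTypes.isEmpty() && !remotePerm.contains('S'))
        remotePerm.append('S');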
As the file can be some hundreds of megabytes, allocating such big
arrays may cause problems.
Also make the timeout a bit bigger so the test can run under valgrind.
Update from commit 05ce8a23cdc12e825532dc6de06c267fb8d48b4f from
https://github.com/dragotin/QProgressIndicator
That repository is itself a fork of commit e5ba0fd09bfd43b067ee3646d70b294c7efcb558
from upstream, with an additional license header.
It was relicensed to MIT according to
14bb9d10e2
Relates to issues #5180 and #5184