NEW/NEW conflicts could sometimes be ignored and replaced by update-metadata
instructions.
We stop doing this and handle them like any other conflict.
This may cause more downloads from the server.
Those conflicts are still resolved automatically when there is no real
conflict and the client merely missed the server reply with the updated
metadata (see the sketch below).
This will enable further changes to improve MOVE detection on the server side.
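A minimal sketch of the auto-resolution check, assuming hypothetical item
types and checksum fields (this is not the actual sync engine API):

    #include <QByteArray>
    #include <QString>

    // Hypothetical stand-in for the sync engine's item record.
    struct ItemInfo {
        QString path;
        QByteArray checksum; // content checksum, e.g. "SHA1:..."
    };

    // A NEW/NEW conflict is only auto-resolved when both sides carry the
    // same content checksum, i.e. the client just missed the server reply
    // with the updated metadata. Anything else is treated as a normal
    // conflict, which implies a download from the server.
    bool isMissedReplyOnly(const ItemInfo &local, const ItemInfo &remote)
    {
        return !local.checksum.isEmpty() && local.checksum == remote.checksum;
    }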
Signed-off-by: Matthieu Gallien <matthieu.gallien@nextcloud.com>
It seems we have an issue on Windows with the QTimer instances used to
detect network timeouts.
This is a workaround until we find the cause of
https://github.com/nextcloud/desktop/issues/7184.
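For context, a minimal sketch of the timeout-detection pattern involved,
assuming a single-shot QTimer guarding each network reply (illustrative
only, not the client's actual code):

    #include <QNetworkReply>
    #include <QTimer>

    // Arm a single-shot timer per reply: if the reply does not finish in
    // time, abort it. This is the kind of QTimer usage that appears to
    // misbehave on Windows.
    void watchReply(QNetworkReply *reply, int timeoutMs)
    {
        auto *timer = new QTimer(reply); // parented: dies with the reply
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, reply, &QNetworkReply::abort);
        QObject::connect(reply, &QNetworkReply::finished, timer, &QTimer::stop);
        timer->start(timeoutMs);
    }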
Signed-off-by: Matthieu Gallien <matthieu.gallien@nextcloud.com>
Issue #7506
This is a regression introduced by the delta sync feature: the chunk offset
changed from being the chunk number to being the byte offset into the file,
so it needs to be a qint64 now.
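A sketch of why the type matters, with illustrative variable names (not the
actual upload code):

    #include <QtGlobal>

    struct UploadState {
        // Was an int chunk number; as a byte offset it can exceed the
        // range of int for files larger than ~2 GB, hence qint64.
        qint64 currentChunkOffset = 0;
        qint64 fileSize = 0;
    };

    qint64 remainingBytes(const UploadState &s)
    {
        return s.fileSize - s.currentChunkOffset;
    }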
Some of the comments didn't match the payload sizes or were missing. This
also means reducing one of the 150 MB payloads left behind, shaving a few
more seconds off the execution time. The tests now run in about 30 s, which
is more acceptable.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Use smaller files so the tests run faster.
This is particularly useful for
TestChunkingNG::connectionDroppedBeforeEtagRecieved, which had become much
slower after 2638332dc6.
Also increased the timeout for bigger files.
Since commit 4dc49ff3, we store an entry in the upload info table even for
non-chunked uploads. However, if such an upload fails, we don't want to try
to remove non-existent stale chunks.
Without this commit, we would send a DELETE command to clean up non-existent
chunks in the dav/uploads/ namespace.
This can happen if the upload of a file finished, but we got disconnected
right before receiving the reply containing the etag.
In that case nothing was saved in the DB, and we are not sure whether the
server received the file properly or not. Any further local update of the
file would then cause a conflict.
To fix this, store the checksum of the file being uploaded in the uploadinfo
table of the local DB (even if there is no chunking involved). When we then
hit a conflict, check the entry in the uploadinfo table to see whether the
conflict stems from this situation, as sketched below.
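A minimal sketch of the idea, using a QHash as a stand-in for the uploadinfo
table and hypothetical function names (not the actual SyncJournalDb API):

    #include <QByteArray>
    #include <QHash>
    #include <QString>

    struct UploadInfo {
        QByteArray contentChecksum;
    };

    // Stand-in for the uploadinfo table in the local DB.
    QHash<QString, UploadInfo> uploadInfoTable;

    // Called when the upload starts, chunked or not.
    void rememberUpload(const QString &path, const QByteArray &checksum)
    {
        uploadInfoTable.insert(path, UploadInfo{checksum});
    }

    // Called when a conflict is detected: if the local file still has the
    // checksum we were uploading, the "conflict" is just our own upload
    // whose reply got lost, and no conflict file is needed.
    bool conflictIsOurOwnUpload(const QString &path, const QByteArray &localChecksum)
    {
        const auto it = uploadInfoTable.constFind(path);
        return it != uploadInfoTable.constEnd()
            && it->contentChecksum == localChecksum;
    }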
Issue #5106
Stale chunks might be left behind because a file was removed or simply
won't be uploaded, for whatever reason.
We just start the DeleteJob and don't care whether it succeeds or not, as
sketched below.
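A fire-and-forget sketch, using a minimal stand-in for the client's
DeleteJob (the real job class and its constructor differ):

    #include <QObject>
    #include <QUrl>

    // Minimal stand-in for the client's DeleteJob (hypothetical signature).
    class DeleteJob : public QObject {
    public:
        explicit DeleteJob(const QUrl &url, QObject *parent = nullptr)
            : QObject(parent), m_url(url) {}
        void start() { /* issue the DAV DELETE asynchronously */ }
    private:
        QUrl m_url;
    };

    // Deliberately no connect() to any finished signal: neither success
    // nor failure changes what the client does next.
    void cleanStaleChunks(const QUrl &uploadUrl, QObject *parent)
    {
        auto *job = new DeleteJob(uploadUrl, parent);
        job->start();
    }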
Relates to https://github.com/owncloud/core/issues/26981
One of the tests covers the case where the file is modified on the server
during the upload, so it exercises the precondition-failed error.
The FakeGetReply logic was modified because growing a 150 MB QByteArray in
16 kB increments just did not scale when downloading a big file (see the
sketch below).
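A sketch of the scaling issue and the fix, with illustrative payload helpers
(not the actual FakeGetReply code):

    #include <QByteArray>

    // Growing a big payload 16 kB at a time can trigger repeated
    // reallocation and copying, which is what made the fake 150 MB
    // download so slow.
    QByteArray makePayloadSlow(int size)
    {
        QByteArray data;
        while (data.size() < size)
            data.append(QByteArray(16 * 1024, 'X'));
        return data;
    }

    // Allocating and filling the full size once avoids that entirely.
    QByteArray makePayloadFast(int size)
    {
        return QByteArray(size, 'X');
    }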
Relates to https://github.com/owncloud/core/issues/26981
We do not track the success or failure of the DeleteJob because it does not
matter. If it fails, it might be because the chunks were already removed.
If not, the chunks will be stale, but the server has to do some cleanup from
time to time anyway because we do not always remove the chunks.