- Close the UploadDevice to close the QFile after the PUT job is done.
This allows winvfs to get an oplock on the file later.
- Don't rely on QFile::fileName() being valid after
  openAndSeekFileSharedRead() was called: the way the file is opened
  on Windows leaves it with an empty filename.
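A minimal sketch of the idea (all names here are hypothetical, not the
patch's actual API): cache the path in a member at construction so it
survives the shared-read open, and close the QFile explicitly once the
PUT job finishes:

    #include <QFile>
    #include <QString>

    class UploadFile
    {
    public:
        explicit UploadFile(const QString &path) : _path(path), _file(path) {}

        // Valid even when _file.fileName() is empty after the shared-read open.
        QString path() const { return _path; }

        // Call from the slot connected to the PUT job's finished() signal,
        // so winvfs can take an oplock on the file afterwards.
        void closeAfterPut() { _file.close(); }

    private:
        QString _path; // cached up front; survives openAndSeekFileSharedRead()
        QFile _file;
    };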
Read the file data in chunks during the upload instead of all at
once, to reduce peak memory use.
Changing UploadDevice in this way requires keeping the file open for the
duration of the upload. It also requires changes to open(), seek()
and close() so that the device keeps working correctly when a request
needs to be resent.
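A sketch of what the reworked device could look like, assuming a plain
QIODevice subclass with hypothetical names: reads come straight from
the still-open file instead of a whole-file buffer, and seek() rewinds
the file so a resent request re-reads the same bytes:

    #include <QFile>
    #include <QIODevice>
    #include <QString>

    class OnDemandUploadDevice : public QIODevice
    {
    public:
        explicit OnDemandUploadDevice(const QString &path) : _file(path) {}

        bool open(OpenMode mode) override
        {
            // Open the file once; it stays open for the whole upload.
            return _file.open(QIODevice::ReadOnly) && QIODevice::open(mode);
        }

        bool seek(qint64 pos) override
        {
            // A resend seeks back to the start; keep device and file in sync.
            return QIODevice::seek(pos) && _file.seek(pos);
        }

        void close() override
        {
            _file.close(); // only now is the handle released
            QIODevice::close();
        }

        qint64 size() const override { return _file.size(); }

    protected:
        qint64 readData(char *data, qint64 maxlen) override
        {
            return _file.read(data, maxlen); // on demand, no whole-file buffer
        }
        qint64 writeData(const char *, qint64) override { return -1; } // read-only

    private:
        QFile _file;
    };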
Some servers have virus scanners and the like that can delay the
response of the final chunked upload assembly significantly, often
breaking the current 5min (!) timeout. See owncloud/enterprise#2480
for details.
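One way to give just that request a longer timeout, as a sketch; the
30-minute value and the helper are illustrative, not what the client
actually ships:

    #include <QNetworkReply>
    #include <QTimer>

    // Abort the final assembly reply only after a much longer period of
    // silence than the regular request timeout.
    void armAssemblyTimeout(QNetworkReply *reply)
    {
        auto *timer = new QTimer(reply); // destroyed together with the reply
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, reply, &QNetworkReply::abort);
        timer->start(30 * 60 * 1000);    // 30 min, illustrative value
    }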
Previously the code tried to abort even jobs that had already
finished, which could not work: they won't emit finished() again.
Also, in some cases the abortCount would never reach zero, and that
case wasn't well documented.
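The corrected loop, sketched in plain Qt terms with hypothetical
names; the caller keeps the counter alive until every abort has
completed:

    #include <QNetworkReply>
    #include <QObject>
    #include <QVector>

    void abortRunningReplies(const QVector<QNetworkReply *> &replies,
                             int *abortCount)
    {
        for (QNetworkReply *reply : replies) {
            if (!reply->isRunning())
                continue;          // already finished: won't emit finished() again
            ++*abortCount;         // count only replies we actually abort
            QObject::connect(reply, &QNetworkReply::finished, [abortCount] {
                --*abortCount;     // reliably reaches zero once all aborts are done
            });
            reply->abort();
        }
    }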
Uploads run in an event loop with more than one upload in flight at
the same time, which thoroughly confuses the folder locking mechanism.
We need to lock the folder and ask the other upload attempts to retry
in a few seconds, giving the current uploader time to finish the file
that is holding the lock.
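A sketch of that retry, with hypothetical names and an illustrative
5-second delay:

    #include <QObject>
    #include <QTimer>

    class UploadStarter : public QObject
    {
    public:
        void start()
        {
            if (!tryLockFolder()) {
                // Lock held by another upload in the same event loop:
                // retry shortly instead of failing the item.
                QTimer::singleShot(5000, this, [this] { start(); });
                return;
            }
            upload(); // we hold the lock; do the actual transfer
        }

    private:
        bool tryLockFolder(); // hypothetical: false if another upload holds it
        void upload();        // hypothetical: performs the PUT, then unlocks
    };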
Add a new UploadFileInfo member to PropagateUploadCommon to hold the
full file path, since it can change if we upload via a temporary
file.
Adapt propagateuploadv1 to use the new calls.
The two can be conceptually equal - I can upload the file on disk,
and that's what I do right now. But if we want to accept filters in
the future - filters that change the file on disk, like shrinking an
image - the information currently used is wrong and we need a way to
separate the two.
This patch introduces a new struct that holds the *actual*
file that will be uploaded, be it a temporary one or
the original file.
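A minimal sketch of what such a struct could hold; the field names are
assumptions, not necessarily the ones from the patch:

    #include <QString>

    // Describes the file that is *actually* sent over the wire, which
    // may be a filter-produced temporary rather than the file in the
    // sync folder.
    struct UploadFileInfo
    {
        QString _file;     // path relative to the sync folder (the logical file)
        QString _fullPath; // absolute path of the file really being uploaded
        qint64 _size = 0;  // its size, which may differ once a filter has run
    };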
* For conflicts where mtime and size are identical:
a) If there's no remote checksum, skip (unchanged)
b) If there's a remote checksum that's a useful hash, create a
   PropagateDownload job and compute the local hash. If the hashes
   are identical, don't download the file and just update metadata
   (see the sketch below).
* Avoid exposing the existence of checksumTypeId beyond the database
layer. This makes handling checksums easier in general because they
can usually be treated as a single blob.
This change was prompted by the difficulty of producing file_stat_t
entries uniformly from PROPFINDs and the database.
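A sketch of the conflict decision from the first bullet; since
checksums are handled as a single blob, comparing them is a plain byte
comparison. computeLocalChecksum() and all other names here are
assumed helpers, not the client's real API:

    #include <QByteArray>
    #include <QString>

    QByteArray computeLocalChecksum(const QString &path); // assumed helper

    enum class ConflictAction { Skip, UpdateMetadata, Download };

    // For conflicts where mtime and size are identical.
    ConflictAction resolveConflict(const QByteArray &remoteChecksum,
                                   const QString &localPath)
    {
        if (remoteChecksum.isEmpty())
            return ConflictAction::Skip;           // a) no remote checksum
        if (computeLocalChecksum(localPath) == remoteChecksum)
            return ConflictAction::UpdateMetadata; // b) same content, skip download
        return ConflictAction::Download;           // content really differs
    }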