The headers() method is used to pass extra headers to the PUT jobs, for
instance; it is definitely needed for uploads now.
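For illustration, a minimal sketch of such a headers() hook, assuming a
Qt-style job class (the class, member, and header names below are
illustrative, not the actual API):

    #include <QByteArray>
    #include <QMap>

    class UploadJobSketch
    {
    public:
        // Extra headers merged into the PUT request, e.g. an end-to-end
        // encryption token (the header name is an assumption).
        QMap<QByteArray, QByteArray> headers() const
        {
            QMap<QByteArray, QByteArray> extra;
            if (!_folderToken.isEmpty())
                extra.insert("e2e-token", _folderToken);
            return extra;
        }

    private:
        QByteArray _folderToken;
    };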
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
With the current design of the file upload, this necessarily leads to
lock starvation on the folder. Indeed, you could end up with N jobs
asking for the lock at the same time. So just avoid parallelizing for
now, even though it will be slow.
We could try to optimize, but that would require some serious changes to
the sync logic in the jobs... let's stabilize first and optimize later.
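Conceptually the workaround clamps the upload parallelism to one for
such folders, along these lines (a sketch; the names are assumptions):

    // At most one upload job at a time for encrypted folders, so no two
    // jobs ever compete for the folder lock.
    int maximumParallelUploads() const
    {
        if (_targetFolderIsEncrypted)
            return 1;
        return _defaultParallelism;
    }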
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
Yes... I still wish this were all driven by the type system; it would be
much less error-prone.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
PropagateUploadEncrypted made the assumption that folder names are never
mangled. This is no longer true since the previous commits, so make sure
we properly deal with that using the journal db.
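Roughly, the lookup goes through the journal like this (a sketch; the
record field is an assumption based on the description above):

    // Resolve the possibly mangled remote folder name through the journal
    // db instead of trusting the plaintext path.
    SyncJournalFileRecord rec;
    if (propagator()->_journal->getFileRecord(remoteParentPath, &rec)
            && rec.isValid() && !rec._e2eMangledName.isEmpty()) {
        // The folder was created under a mangled name; use it for requests.
        remoteParentPath = rec._e2eMangledName;
    }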
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
There is no "return" in
PropagateUploadFileCommon::slotStartUpload in if (prevModtime != _item-
>_modtime) { ... }.
As a result, PropagateItemJob::done(status, errorString)
may be called twice from PropagateUploadFileCommon::slotStartUpload:
1. in if (prevModtime != _item->_modtime) { ... }
2. in if (fileIsStillChanging(*_item)) { ... }
If changes to files are frequent, the second call is possible.
These two calls have an effect in PropagatorCompositeJob::slotSubJobFinished:
the job is removed twice via _runningJobs.remove(i), the second time with
argument -1, because the first call already removed the job.
This return was removed in commit
efc039863b, by accident I think.
A good way to reproduce this is to synchronize a Firefox profile with
frequent page refreshes.
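A simplified sketch of the fix (the error handling is abbreviated; only
the control flow matters here):

    void PropagateUploadFileCommon::slotStartUpload()
    {
        // ...
        if (prevModtime != _item->_modtime) {
            propagator()->_anotherSyncNeeded = true;
            done(SyncFileItem::SoftError, tr("Local file changed during sync."));
            return; // the accidentally removed return: without it we fall
                    // through and may reach the second done() below
        }

        if (fileIsStillChanging(*_item)) {
            propagator()->_anotherSyncNeeded = true;
            done(SyncFileItem::SoftError, tr("Local file changed during sync."));
            return;
        }
        // ...
    }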
Signed-off-by: Mariusz Wasak <mawasak@gmail.com>
Some servers have virus scanners and the like that can delay the
response of the final chunked upload assembly significantly, often
breaking the current 5min (!) timeout. See owncloud/enterprise#2480
for details.
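For illustration, the change amounts to allowing a much longer timeout
for this one request (a sketch; the function and the values are
assumptions, not the numbers picked in the actual fix):

    #include <QtGlobal>

    // Server-side post-processing (e.g. a virus scan) can stall the reply to
    // the final chunk-assembly request well past the usual limit.
    qint64 finalAssemblyTimeoutMs(qint64 defaultTimeoutMs)
    {
        const qint64 extended = qint64(30) * 60 * 1000; // e.g. 30 minutes
        return qMax(defaultTimeoutMs, extended);
    }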
Previously it tried to abort even jobs that had already finished, which
was not going to work as they wouldn't emit finished() again.
Also, in some cases the abortCount would never go to zero and that case
wasn't well documented.
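A minimal sketch of the corrected abort accounting (class, signal, and
member names are illustrative, not the real API):

    void UploadAborter::abortNetworkJobs()
    {
        _abortCount = 0;
        for (AbstractUploadJob *job : _jobs) {
            if (job->isFinished())
                continue; // a finished job will never emit finished() again
            ++_abortCount;
            QObject::connect(job, &AbstractUploadJob::finished, this, [this] {
                if (--_abortCount == 0)
                    emit allJobsAborted();
            });
            job->abort();
        }
        if (_abortCount == 0)
            emit allJobsAborted(); // nothing was running; don't wait forever
    }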
As if the code was not complex enough, syncing two tables already
started to give UNIQUE constraint errors on simple sync operations.
This also adds initial support for remote deletion of an encrypted
file.
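One common way to sidestep such UNIQUE constraint failures is to upsert
instead of insert; a sketch of the idea, with made-up table and column
names (the commit may well solve it differently):

    #include <QSqlQuery>

    QSqlQuery query(db);
    query.prepare(QStringLiteral(
        "INSERT OR REPLACE INTO encrypted_files (path, mangled_name) "
        "VALUES (?, ?)"));
    query.addBindValue(path);
    query.addBindValue(mangledName);
    query.exec();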
If the server has the 'uploadConflictFiles' capability, conflict
files will be uploaded instead of ignored.
Uploaded conflict files have the following headers set during upload,
when the data is available:
OC-Conflict: 1
OC-ConflictBaseFileId: 172489174instanceid
OC-ConflictBaseMtime: 1235789213
OC-ConflictBaseEtag: myetag
Downloads accept the same headers in return when downloading a conflict
file.
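For illustration, attaching the metadata on upload could look like this
(the capability check and the conflictRecord fields are assumptions
modeled on the description above):

    QMap<QByteArray, QByteArray> headers;
    if (serverSupportsUploadConflictFiles && conflictRecord.isValid()) {
        headers["OC-Conflict"] = "1";
        headers["OC-ConflictBaseFileId"] = conflictRecord.baseFileId;
        headers["OC-ConflictBaseMtime"] = QByteArray::number(conflictRecord.baseModtime);
        headers["OC-ConflictBaseEtag"] = conflictRecord.baseEtag;
    }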
In the absence of server support, clients will identify conflict files
through the file name pattern and attempt to deduce the base fileid.
Base etag and mtime can't be deduced, though.
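A sketch of the file-name fallback (the function name and the exact
pattern are assumptions; the base fileid would then be looked up by the
recovered name):

    #include <QRegularExpression>
    #include <QString>

    static QString conflictBaseFileName(const QString &conflictName)
    {
        // Matches "_conflict-1345" as well as "_conflict-Fred-1345".
        static const QRegularExpression marker(
            QStringLiteral("_conflict-(?:[^-]+-)?\\d+"));
        QString base = conflictName;
        base.remove(marker);
        return base; // "foo_conflict-1345.txt" -> "foo.txt"
    }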
The upload job for a new conflict file is now triggered directly from
the job that created the conflict file. No second sync run is necessary
anymore.
This commit does not yet introduce a 'username'-like identifier that
automatically gets added to conflict file names (to name the files
foo_conflict-Fred-1345.txt instead of just foo_conflict-1345.txt).
The upload is made in an event loop with more than one upload happening
at the same time, and this confuses the hell out of the folder locking
mechanism.
We need to lock the folder and ask the other attempts to retry in a few
seconds, to give the uploader time to actually upload the current file
that is holding the lock.
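A sketch of the retry idea (names are assumptions):

    #include <QTimer>

    void EncryptedUploadSketch::start()
    {
        if (_folderLockHeldByAnotherJob) {
            // Back off: give the current lock holder a few seconds to finish
            // its upload instead of competing for the lock.
            QTimer::singleShot(5000, this, &EncryptedUploadSketch::start);
            return;
        }
        lockFolderAndStartUpload();
    }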
Add a new member for the UploadFileInfo in PropagateUploadCommon
to hold the full file path, as it can change if we use a temporary
file for the upload.
Adapt propagateuploadv1 to use the new calls.
They can be conceptually equal: I can upload the file
on disk, and that's what I do right now. But if we want
to accept filters in the future (filters that change
the file on disk, like shrinking an image), the current
information used is wrong, and we need a way to separate the two.
This patch introduces a new struct that holds the *actual*
file that will be uploaded, be it a temporary one or
the original file.
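A sketch of the struct described above (the real definition may differ
in detail):

    #include <QString>

    struct UploadFileInfo
    {
        QString _file; // the file being synced, relative to the sync folder
        QString _path; // full path on disk of the data actually uploaded;
                       // may point to a temporary file produced by a filter
        qint64 _size;  // size of the file at _path
    };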