By accident, this code was actually not breaking most cookie handling:
because the raw cookies were not split properly, we added cookies with values like
"key: val; key2 = val2; key3 = val3"
Once the splitting was corrected, we started overwriting the newer values in the
jar with the old ones from a request.
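A minimal sketch of the corrected splitting, assuming the raw cookie data is a
newline-separated list of serialized cookies (the helper name is illustrative,
not the client's actual code):

    #include <QByteArray>
    #include <QList>
    #include <QNetworkCookie>

    // Illustrative helper: split the raw data into individual cookies instead
    // of parsing the whole blob as a single cookie with a value like
    // "key: val; key2 = val2; key3 = val3".
    static QList<QNetworkCookie> parseRawCookies(const QByteArray &raw)
    {
        QList<QNetworkCookie> cookies;
        for (const QByteArray &line : raw.split('\n')) {
            if (!line.isEmpty())
                cookies += QNetworkCookie::parseCookies(line);
        }
        return cookies;
    }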
This was done because the propagator jobs were running in a thread a long
time ago, but this is no longer the case.
(Also QAtomicInt::load is marked as deprecated now)
The original code from csync stopped at any error.
But we have been whitelisting some HTTP error codes one by one
so that the affected directory is ignored instead of aborting the sync.
However, since there keep being more requests to continue the sync in case
of errors, just ignore most HTTP errors.
Issue #7586
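As an illustration of that policy flip for HTTP errors (the function and the
set of still-fatal status codes are assumptions for the example, not the
client's actual API): instead of listing the errors we may ignore, list the
few that still abort and treat everything else as "skip this directory and
carry on".

    // Illustrative only: which HTTP errors should still abort the whole sync.
    // The concrete set of fatal codes here is an assumption for the example.
    bool abortsWholeSync(int httpStatus)
    {
        switch (httpStatus) {
        case 401: // credentials problem affects the whole account
        case 507: // insufficient storage on the server
            return true;
        default:
            return false; // everything else only skips the directory
        }
    }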
On Windows we run a test to know whether we should change the case of the files,
but that conflicts with the test that checks whether the file is still there
when the filename is actually the same. This can happen with virtual files,
as they have two representations (one with and one without the suffix).
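For context, a rough sketch of why those two checks collide; the suffix value
and the helper names are assumptions for illustration only:

    #include <QString>

    // A virtual file and its hydrated counterpart share the same base name and
    // differ only by a suffix, so "is this still the same file?" checks must
    // accept both spellings. Suffix and helpers are illustrative.
    static const QString virtualSuffix = QStringLiteral(".owncloud");

    bool isCaseOnlyRename(const QString &oldName, const QString &newName)
    {
        return oldName.compare(newName, Qt::CaseInsensitive) == 0
            && oldName != newName;
    }

    bool sameFileModuloVirtualSuffix(const QString &a, const QString &b)
    {
        return a == b || a == b + virtualSuffix || b == a + virtualSuffix;
    }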
When an item is downloaded because it is restored, it shall be shown in the
sync protocol.
(It is also going to be shown in the not-synchronized list for a short while,
but that's fine.)
Previously the source was deleted (or an attempt was made to delete it), even if
the new location was not acceptable for upload. This could make data
unavailable on the server.
For #7410
By introducing a PropagateRootDirectory job that explicitly
separates the directory deletion jobs from all the other jobs.
Note that this means that if there are errors in the subJobs, the
dirDeletionJobs won't get executed.
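A minimal sketch of that structure, assuming a simplified job model (the real
propagator job classes are more involved):

    #include <functional>
    #include <vector>

    // Simplified model: run all regular sub-jobs first; directory deletions
    // are kept separate and only run once everything else finished cleanly.
    struct Job {
        std::function<bool()> run; // returns false on error
    };

    struct RootDirectoryJob {
        std::vector<Job> subJobs;         // uploads, downloads, renames, ...
        std::vector<Job> dirDeletionJobs; // directory removals, executed last

        bool run() const
        {
            for (const Job &job : subJobs)
                if (!job.run())
                    return false; // errors here skip dirDeletionJobs entirely
            for (const Job &job : dirDeletionJobs)
                if (!job.run())
                    return false;
            return true;
        }
    };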
Previously the job would only become "active" when the downloads
started. That meant that arbitrarily many hash computations could be
queued at the same time.
Since the file was opened when the future was created, this could lead to
a "too many open files" problem if there were lots of new-new conflicts.
To fix this:
- Make PropagateDownload become active when computing a hash
asynchronously.
- Make the computation future open the file only once it gets run (see
the sketch below). This makes it less likely for the problem to occur
even if thousands of these futures are queued.
For #7372
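A sketch of the second point above, assuming Qt Concurrent for the
asynchronous computation (illustrative code, not the actual PropagateDownload
implementation):

    #include <QtConcurrent/QtConcurrent>
    #include <QCryptographicHash>
    #include <QFile>
    #include <QFuture>
    #include <QString>

    // The file is opened inside the lambda, i.e. only when the future actually
    // runs on a worker thread, so thousands of queued computations do not hold
    // thousands of open file descriptors.
    QFuture<QByteArray> computeChecksum(const QString &path)
    {
        return QtConcurrent::run([path]() -> QByteArray {
            QFile file(path); // opened here, not at queue time
            if (!file.open(QIODevice::ReadOnly))
                return QByteArray();
            QCryptographicHash hash(QCryptographicHash::Sha1);
            hash.addData(&file);
            return hash.result().toHex();
        });
    }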
Previously, fatal error texts were duplicated: they entered the
SyncResult once via the SyncFileItem and once via syncError().
syncError() is intended for folder-wide sync issues that are not pinned
to particular files, so that duplicated path is removed.
For #5088