* Ensure that every time a file becomes a directory, or the other way
around, the item is flagged as INSTRUCTION_TYPE_CHANGE.
* Delete the wrongly-typed entity in the propagation jobs if necessary.
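A rough sketch of the detection (the journal/stat field names and the
exact spelling of the instruction constant are assumptions, not the
actual code):

    // During discovery, compare the type recorded in the sync journal
    // with the type currently found on disk or on the server.
    bool wasDirectory = journalRecord._type == ItemTypeDirectory; // assumed fields
    bool isDirectory = currentItem._type == ItemTypeDirectory;
    if (wasDirectory != isDirectory) {
        // Flag the mismatch; the propagation job then removes the
        // wrongly-typed entity before recreating it with the right type.
        item->_instruction = CSYNC_INSTRUCTION_TYPE_CHANGE;
    }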
* Compute the content checksum (in addition to the optional
transmission checksum) during upload (.eml files only)
* Add a hook to compute and compare the checksum in csync_update
* Add the content checksum to the database, remove the transmission checksum
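A minimal sketch of computing the content checksum while the file is read
for upload; QCryptographicHash and QFile are standard Qt API, but the
function name and the choice of SHA-1 are assumptions:

    #include <QCryptographicHash>
    #include <QFile>

    // Hash the entire file content, independent of any per-request
    // transmission checksum.
    QByteArray computeContentChecksum(const QString &path)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            return QByteArray();
        QCryptographicHash hash(QCryptographicHash::Sha1); // algorithm assumed
        if (!hash.addData(&file))
            return QByteArray();
        return hash.result().toHex();
    }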
Since QNonContiguousByteDeviceThreadForwardImpl::reset will
call UploadDevice::reset with a BlockingQueuedConnection, this
allows us to reset the HTTP channel along with its buffers
before they get the chance to be reused by a subsequent request.
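Roughly, the mechanism relies on Qt's blocking cross-thread signal
delivery; a sketch with hypothetical signal and class names:

    // Qt::BlockingQueuedConnection makes the emitting (HTTP) thread block
    // until UploadDevice::reset() has finished running in the device's
    // thread, so the buffers are dropped before the channel is reused.
    QObject::connect(forwarder, &Forwarder::resetRequested, // hypothetical
                     uploadDevice, &UploadDevice::reset,
                     Qt::BlockingQueuedConnection);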
Use a QSharedPointer to keep the same ownership and
continue passing the SyncFileItems as a const& when
ownership isn't taken. This allows sharing the same
allocations between the jobs and the result vectors.
This saves about 20MB of memory (out of 120MB) once all
jobs are created.
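A sketch of the ownership pattern (the typedef names follow the obvious
convention; the job class shown is illustrative):

    #include <QSharedPointer>
    #include <QVector>

    class SyncFileItem { /* path, instruction, etc. */ };

    typedef QSharedPointer<SyncFileItem> SyncFileItemPtr;
    typedef QVector<SyncFileItemPtr> SyncFileItemVector;

    // A job shares the item's allocation instead of copying it...
    class PropagateItemJob
    {
    public:
        explicit PropagateItemJob(const SyncFileItemPtr &item) : _item(item) {}
    private:
        SyncFileItemPtr _item;
    };

    // ...while read-only code keeps taking a const reference.
    void logItem(const SyncFileItem &item);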
This is largely a guess, but this is the only place where we use
a QIODevice to push data through QNAM and where the QIODevice isn't
a direct child of the QNetworkReply.
Fix the issue by making sure that we don't go back to the event loop
and possibly handle network events between the destruction of the
upload QIODevice and that of the QNetworkReply, which might lead to QNAM
dereferencing a dangling QIODevice pointer.
This should solve #2675 and #1981
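A sketch of the intended destruction order, with hypothetical member and
function names; the point is that nothing re-enters the event loop
between the two deletions:

    void UploadJob::abort() // hypothetical
    {
        // Delete the reply synchronously first, so QNAM can no longer
        // touch the upload device, then delete the device. Nothing that
        // could spin the event loop runs in between.
        delete _reply; // not deleteLater(): that would return to the loop
        _reply = nullptr;
        delete _uploadDevice;
        _uploadDevice = nullptr;
    }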
By preloading the chunks into memory before sending them, we don't keep the
file open, and therefore other programs can open the file for writing.
If the file is modified between two chunks, we detect that and abort anyway.
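A rough sketch of the per-chunk flow (member names and the upload call
are illustrative):

    #include <QFile>
    #include <QFileInfo>

    void ChunkUploader::sendNextChunk() // hypothetical
    {
        // Read one chunk fully into memory, then close the file so other
        // programs can open it for writing between chunks.
        QFile file(_localPath);
        if (!file.open(QIODevice::ReadOnly))
            return abortUpload();
        file.seek(_chunkIndex * _chunkSize);
        QByteArray chunkData = file.read(_chunkSize);
        file.close();

        // If the file changed since the upload started, abort the transfer.
        QFileInfo info(_localPath);
        if (info.lastModified() != _initialMtime || info.size() != _initialSize)
            return abortUpload();

        uploadChunk(chunkData); // hypothetical network call
    }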
* Use a shared pointer to Account everywhere to ensure
the instance stays alive long enough for a sync to terminate
* Folder is now tied to an AccountState
* SyncEngine and OwncloudPropagator are tied to an Account and use it
for all jobs they run
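Sketch of the ownership shape (AccountPtr follows the codebase's naming
convention; the class outlines are illustrative):

    #include <QSharedPointer>

    class Account { /* credentials, server URL, ... */ };
    typedef QSharedPointer<Account> AccountPtr;

    // SyncEngine (and OwncloudPropagator) hold an AccountPtr, keeping the
    // Account alive until the sync run terminates, even if the account is
    // removed from the settings in the meantime.
    class SyncEngine
    {
    public:
        explicit SyncEngine(AccountPtr account) : _account(account) {}
    private:
        AccountPtr _account;
    };

    // Folder is tied to an AccountState, which in turn holds the AccountPtr.
    class AccountState { AccountPtr _account; /* connectivity status, ... */ };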
Issue: Since the setup wizard currently always replaces the
account, it will always wipe all folder definitions, even when
the actual changes to the account are minor.