Use a QSharedPointer to keep the same ownership, and keep passing the
SyncFileItems as a const& when ownership isn't taken. This allows the
jobs and the result vectors to share the same allocations, saving about
20 MB of memory (out of 120 MB) once all jobs are created.
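
A minimal sketch of this ownership pattern, assuming a stripped-down
SyncFileItem (the real struct has many more fields); the job keeps a
QSharedPointer, so no deep copy of the item is made:

    #include <QDebug>
    #include <QSharedPointer>
    #include <QString>
    #include <QVector>

    // Simplified stand-in for the real SyncFileItem.
    struct SyncFileItem {
        QString file;
        qint64 size = 0;
    };
    typedef QSharedPointer<SyncFileItem> SyncFileItemPtr;

    class PropagateItemJob {
    public:
        // The job shares ownership with the result vector.
        explicit PropagateItemJob(const SyncFileItemPtr &item) : _item(item) {}
        // Read-only access is passed as const& -- no ownership taken.
        void log(const SyncFileItem &item) const { qDebug() << item.file; }
    private:
        SyncFileItemPtr _item;
    };

    int main() {
        QVector<SyncFileItemPtr> syncedItems;
        syncedItems.append(SyncFileItemPtr::create());
        syncedItems.last()->file = QStringLiteral("photos/a.jpg");

        PropagateItemJob job(syncedItems.last()); // same allocation, refcount bumped
        job.log(*syncedItems.last());
        return 0;
    }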
File systems like FAT only have two-second mtime accuracy, which would
result in an upload if the mtime in the database is not the same as the
one that was downloaded.
Issue #3103
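
A hedged sketch of the kind of comparison this implies (the helper name
is invented, not the client's actual code): treat mtimes within two
seconds as equal so FAT's coarse timestamps don't trigger spurious
uploads.

    #include <QtGlobal>

    // Hypothetical helper: FAT stores mtimes with two-second granularity,
    // so timestamps within that window have to be considered equal.
    static bool mtimesEqualWithFatTolerance(qint64 dbMtime, qint64 downloadedMtime)
    {
        const qint64 diff = dbMtime > downloadedMtime ? dbMtime - downloadedMtime
                                                      : downloadedMtime - dbMtime;
        return diff <= 2;
    }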
QFileInfo has to be refreshed if the underlying file has been modified
in the meantime. That is dangerous, so ckamm and I decided to eliminate
the QFileInfo-based implementations.
This was triggered by a bug where the client uploaded files that it
should not have.
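
A small illustration of why the cached QFileInfo is dangerous: it keeps
the stat result from the first query and silently reports stale metadata
until refresh() is called (this is just a demo of the caching behaviour,
not the client's code).

    #include <QDebug>
    #include <QFile>
    #include <QFileInfo>

    int main()
    {
        QFile f(QStringLiteral("demo.txt"));
        f.open(QIODevice::WriteOnly);
        f.write("first");
        f.close();

        QFileInfo info(f.fileName());
        const qint64 sizeBefore = info.size(); // first query stats and caches

        f.open(QIODevice::Append);
        f.write(" second");
        f.close();

        // Without refresh() the cached value is returned, not the real size.
        qDebug() << "cached:" << info.size() << "was:" << sizeBefore;
        info.refresh();
        qDebug() << "refreshed:" << info.size();
        return 0;
    }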
There could be a race condition if the file was updated on the server
between the discovery and the propagate phase. By taking the mtime from
the server, we make sure that we do not have a race.
This is tested by t6.pl with BIG3.file, because the script modifies the
file between the two phases.
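
A hedged sketch of the idea, with invented names for the data involved:
when recording the downloaded file, prefer the mtime the server sent
along with the download reply over the value captured during discovery,
which may already be stale.

    #include <QtGlobal>

    // Hypothetical stand-ins for the discovery result and the download reply.
    struct DiscoveryInfo  { qint64 mtime = 0; };       // captured during discovery
    struct DownloadResult { qint64 serverMtime = 0; }; // parsed from the GET reply

    // The discovery value can race with a change on the server between the
    // two phases, so the reply's mtime wins whenever it is available.
    static qint64 mtimeToRecord(const DiscoveryInfo &discovered,
                                const DownloadResult &downloaded)
    {
        return downloaded.serverMtime != 0 ? downloaded.serverMtime
                                           : discovered.mtime;
    }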
* Use a shared pointer to Account everywhere to ensure the instance
  stays alive long enough for a sync to complete (see the sketch below)
* Folder is now tied to an AccountState
* SyncEngine and OwncloudPropagator are tied to an Account and use it
  for all the jobs they run
Issue: Since the setup wizard currently always replaces the
account, it will always wipe all folder definitions, even when
the actual changes to the account are minor.
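
A rough sketch of this ownership scheme, using simplified stand-ins for
Account, AccountState and SyncEngine (the real classes are much richer);
the point is that every long-running consumer holds its own shared
pointer, so the Account outlives any sync in progress.

    #include <QSharedPointer>
    #include <QString>

    class Account {
    public:
        QString url;
    };
    typedef QSharedPointer<Account> AccountPtr;

    // Live state for one account; Folder objects are tied to this.
    class AccountState {
    public:
        explicit AccountState(const AccountPtr &account) : _account(account) {}
        AccountPtr account() const { return _account; }
    private:
        AccountPtr _account;
    };

    // The engine keeps its own AccountPtr so the Account stays alive
    // for the whole sync run, even if the wizard replaces the account.
    class SyncEngine {
    public:
        explicit SyncEngine(const AccountPtr &account) : _account(account) {}
        AccountPtr account() const { return _account; }
    private:
        AccountPtr _account;
    };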
The server might support resuming, so don't always erase the temporary
file. Pass the startSize, so the temporary file will be removed if the
server does not support resuming after all (because it is not sending
the "bytes" header).
Also pass the expected etag for consistency, even if it's not used in this case.
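
A hedged sketch of that decision, with invented names and an assumed
Content-Range check (the real client's header handling may differ):
keep the partial temporary file and resume from startSize, but discard
it when the reply shows the server ignored the range request.

    #include <QByteArray>
    #include <QFile>

    // Hypothetical helper, called once the download reply headers are known.
    // tmpFile is assumed to be open for writing.
    static void handleResume(QFile &tmpFile, qint64 startSize,
                             const QByteArray &contentRange)
    {
        // A honoured range looks like "bytes 12345-99999/100000"; if that is
        // missing, the server is sending the whole file from the start again.
        const bool serverResumed = startSize > 0 && contentRange.startsWith("bytes ");

        if (!serverResumed) {
            tmpFile.resize(0);       // drop the stale partial data
        } else {
            tmpFile.seek(startSize); // append to the existing partial content
        }
    }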