The test sets OWNCLOUD_MAX_PARALLEL to 1 to disable parallelism.
But since the maximum parallelism is twice that value, setting it to 1 does
not actually disable parallelism.
So change the way we compute hardMaximumActiveJob: use the value of
OWNCLOUD_MAX_PARALLEL directly as this maximum and derive the maximum
number of transfer jobs from it, instead of the other way around.
A result of this change is that, when a bandwidth limit is set, we keep the
default of 6 non-transfer jobs in parallel. I believe that's fine since the
short jobs do not really use bandwidth, so we can still keep the same
number of small jobs.
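
A minimal sketch of the new computation (the default of 6 and the
bandwidth-limit members are assumptions; the function names follow this
commit message):

    int OwncloudPropagator::hardMaximumActiveJob()
    {
        int max = qgetenv("OWNCLOUD_MAX_PARALLEL").toInt();
        if (max <= 0)
            max = 6; // assumed default; QNAM does at most 6 connections anyway
        return max;
    }

    int OwncloudPropagator::maximumActiveTransferJob()
    {
        if (_downloadLimit != 0 || _uploadLimit != 0) {
            // Bandwidth-limited: serialize transfers; the remaining
            // hardMaximumActiveJob() slots stay available for short jobs.
            return 1;
        }
        // Derived from the hard maximum instead of the other way around,
        // so OWNCLOUD_MAX_PARALLEL=1 really limits everything to one job.
        return qMax(1, hardMaximumActiveJob() / 2);
    }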
When aborted, _firstJob could be marked as finished before its parent
PropagateDirectory was marked as finished, allowing a posted
scheduleNextJob call to schedule the child job in between.
This was to catch duplicate emissions for PropagateDirectory, but we
don't emit this signal from there anymore.
This fixes a warning about PropagatorJob not being a registered metatype.
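
The usual shape of such a fix is a metatype registration early at startup,
so the type can travel through queued signal/slot connections; whether the
pointer type or some wrapper is registered here is an assumption:

    // e.g. in the client's startup code
    qRegisterMetaType<OCC::PropagatorJob*>("OCC::PropagatorJob*");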
This reverts commit fe42c1a818.
We are going to change the webdav path depending on the capabilities. But
the SyncEngine and csync might have been created before the capabilities
are retrieved.
The main reason we gave the path to the sync engine was to pass it to
csync. But csync no longer needs this URL: everything is done by the
discovery classes in libsync, which use the network jobs, which in turn
get the URLs from the account. So csync does not need the remote URI.
shortenFilename in folderstatusmodel.cpp was useless because the string is
the _file of a SyncFileItem, which is the relative file name; that name
never starts with owncloud://.
All the csync tests create the folder because csync used to check that the
folder exists. But we don't need to do that anymore.
Missing deleteLater when the CleanupPollsJob aborts.
This is only a problem if the SyncEngine is kept alive for a long time,
which is usually not the case in configurations where poll jobs are used.
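
A sketch of the fix, assuming an abort() method on the job; the essential
part is the deleteLater():

    void CleanupPollsJob::abort()
    {
        // ... stop any in-flight network job (details assumed) ...
        deleteLater(); // previously missing: without it the job lived on
                       // until the SyncEngine itself was destroyed
    }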
This fixes an issue in which too many jobs were started in parallel while
uploading many files, which could cause excessive memory usage since the
chunks are stored in memory.
Probably the fix for #4611
propagatedownload.cpp:712:35: error: 'seenLockedFile' is a protected member of 'OCC::OwncloudPropagator'
Signals are protected in Qt4 but public in Qt5; mark the class accessing
it as a friend when compiling with Qt4.
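
A sketch of the workaround; the name of the emitting class is an
assumption:

    class OwncloudPropagator : public QObject {
        Q_OBJECT
        // ...
    #if QT_VERSION < 0x050000
        // Signals are protected in Qt4, so the class that emits
        // seenLockedFile on our behalf needs friend access.
        friend class PropagateDownloadFileQNAM;
    #endif
    signals:
        void seenLockedFile(const QString &fileName);
    };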
When a conflict-rename or a temporary-rename fails, notify the
LockWatcher. It'll regularly check whether the file has become
accessible again. When it has, another sync is triggered.
owncloud/enterprise#1288
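
A minimal sketch of the watcher's shape (the poll interval, the member
names, and the lock probe are assumptions):

    #include <QObject>
    #include <QSet>
    #include <QString>
    #include <QTimer>

    class LockWatcher : public QObject {
        Q_OBJECT
    public:
        explicit LockWatcher(QObject *parent = 0) : QObject(parent) {
            connect(&_timer, SIGNAL(timeout()), SLOT(checkFiles()));
            _timer.start(20 * 1000); // re-check every 20 seconds
        }
        void addFile(const QString &path) { _watched.insert(path); }

    signals:
        void fileUnlocked(const QString &path); // listener triggers another sync

    private slots:
        void checkFiles() {
            const QSet<QString> watched = _watched;
            foreach (const QString &path, watched) {
                if (!isFileLocked(path)) { // platform probe, e.g. Win32 share modes
                    _watched.remove(path);
                    emit fileUnlocked(path);
                }
            }
        }

    private:
        static bool isFileLocked(const QString &path);
        QTimer _timer;
        QSet<QString> _watched;
    };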
Helps with small file sync #331
When I benchmarked this, it went up to a parallelism of 6 and was about
1/3 faster than the previous fixed parallelism of 3.
Going beyond 6 would be dangerous because QNAM limits itself to 6 TCP
connections per host, and the server might become a bottleneck.
Should also help with #4081
Added a new "chunkSize" entry in the General group of owncloud.cfg,
which can be set to the size, in bytes, of the chunks.
This allows users with huge bandwidth to select a more optimal chunk size.
Issue #4354
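
For example, to use 50 MiB chunks (the value is purely illustrative):

    [General]
    chunkSize=52428800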
Because that's what's going on. A job can 'complete an item' or 'finish'.
Note that several jobs could complete the same item: a new directory
will complete on the PropagateRemoteMkdir and the PropagateDirectory
jobs.
Previously, PropagateDirectory jobs didn't emit the completed() signal.
Now that they do, we need to make sure not to add extra lines to the
protocol widget for them.
To accomplish that, the jobCompleted() signal now also carries the job
that completed the item.
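
A sketch of the extended signal and how a receiver can filter on it (the
exact signature and the widget's slot are assumptions):

    // in the job base class (assumed):
    signals:
        void jobCompleted(const SyncFileItem &item, const PropagatorJob &job);

    // in the protocol widget's slot:
    void ProtocolWidget::slotJobCompleted(const SyncFileItem &item,
                                          const PropagatorJob &job)
    {
        if (qobject_cast<const PropagateDirectory *>(&job))
            return; // no extra protocol line for PropagateDirectory jobs
        // ... append a line for item ...
    }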
Use a QSharedPointer to keep the same ownership and
continue passing the SyncFileItems as a const& when
ownership isn't taken. This allows sharing the same
allocations between the jobs and the result vectors.
This saves about 20MB of memory (of 120MB) once all jobs are created.
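
The pattern, sketched (the typedef name is an assumption):

    typedef QSharedPointer<SyncFileItem> SyncFileItemPtr;

    QVector<SyncFileItemPtr> _syncItems;       // jobs and result vectors share
                                               // the same SyncFileItem allocations
    void logStatus(const SyncFileItem &item);  // plain const& where no
                                               // ownership is taken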
* Use a shared pointer to Account everywhere to ensure
the instance stays alive long enough for a sync to terminate
* Folder is now tied to an AccountState
* SyncEngine and OwncloudPropagator tie to an Account and use that
for all jobs they run (see the sketch below)
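
A sketch of the ownership pattern (the typedef name is an assumption):

    typedef QSharedPointer<Account> AccountPtr;

    class SyncEngine : public QObject {
        // ...
        AccountPtr _account; // keeps the Account alive until the sync terminates
    };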
Issue: Since the setup wizard currently always replaces the
account, it will always wipe all folder definitions, even when
the actual changes to the account were minor.