* For requests (see the sketch below):
  - reuse the original QNetworkRequest, so headers and attributes
    are the same as in the original request
  - determine the original HTTP method from the reply and the request
    attributes
  - keep the original request body around so that it can be sent
    again in case the request is redirected
* Simplify the interface that is used for creating new requests in
AbstractNetworkJob.
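A rough sketch of what the resend can look like, assuming the original request, its body device and the redirect target are kept around (variable names and the surrounding plumbing are illustrative, not the actual AbstractNetworkJob interface):

    QNetworkRequest redirected = originalRequest;      // copies headers and attributes
    redirected.setUrl(redirectUrl);
    if (requestBody)
        requestBody->seek(0);                          // body was kept for the resend

    QNetworkReply *newReply = nullptr;
    switch (reply->operation()) {
    case QNetworkAccessManager::GetOperation:
        newReply = qnam->get(redirected);
        break;
    case QNetworkAccessManager::PutOperation:
        newReply = qnam->put(redirected, requestBody);
        break;
    case QNetworkAccessManager::PostOperation:
        newReply = qnam->post(redirected, requestBody);
        break;
    case QNetworkAccessManager::CustomOperation: {
        // e.g. PROPFIND: the verb is stored as a request attribute
        const QByteArray verb =
            originalRequest.attribute(QNetworkRequest::CustomVerbAttribute).toByteArray();
        newReply = qnam->sendCustomRequest(redirected, verb, requestBody);
        break;
    }
    default:
        break;                                         // HEAD/DELETE omitted here
    }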
This leads to crashes since we changed the connection to the parent
jobs not to be queued anymore.
We don't really need to bubble up the finished state through the
parents in that case, and doing so would also mean recursing through
all the leaves as we go up to each parent. So just call
abort directly on the OwncloudPropagator and make sure the abort
call is posted to the event loop, as in the sketch below.
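The posted abort is essentially a one-liner, assuming OwncloudPropagator::abort() is an invokable slot (which is the only assumption here):

    // Run abort() on the next event loop iteration instead of bubbling a
    // signal through every parent job.
    QMetaObject::invokeMethod(propagator, "abort", Qt::QueuedConnection);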
Avoid using connections to report up the job tree for signals
that we can directly communicate to the OwncloudPropagator.
This slightly reduces the memory usage and avoids passing those calls
through the whole parent chain.
In preparation for the PropagateDirectory refactoring, simplify things
by removing WaitForFinishedInParentDirectory, which is currently
implemented as a one-level check.
This value is important for directory items, but it is in practice never
used, since a directory CSYNC_INSTRUCTION_RENAME item will always be in
PropagateDirectory::_firstJob, which will have to pass through its own
PropagateDirectory job's parallelism() before reaching the parent's
_subJobs optimization.
Since PropagateDirectory::parallelism can only return WaitForFinished
or FullParallelism, that value is lost. So this commit doesn't
change the behavior for directories, and allows file renames to be
scheduled in parallel across directories (which isn't a problem).
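A simplified sketch of why the intermediate value cannot survive; the method exists on PropagateDirectory, but this body is an illustration rather than the exact code:

    PropagatorJob::JobParallelism PropagateDirectory::parallelism()
    {
        // A directory job only ever reports these two values, so a child's
        // WaitForFinishedInParentDirectory is collapsed at this level anyway.
        if (_firstJob && _firstJob->parallelism() != FullParallelism)
            return WaitForFinished;
        return FullParallelism;
    }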
It can happen that _firstJob is marked as finished by an abort
before its parent PropagateDirectory is marked as finished,
allowing a posted scheduleNextJob call to schedule the child job
in-between.
- I checked every occurrence of a '%2' and made correct use of the
  QString::arg overload that takes several arguments instead of chaining
  them, because the first argument can contain a '%1' (see the sketch below)
- I went through every label to check whether it uses plain text or rich text,
  and escaped the user-provided strings in there.
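A small self-contained sketch of the difference (the strings are made up for illustration):

    #include <QDebug>
    #include <QString>

    int main()
    {
        // User-provided string that happens to contain a place marker.
        const QString user = QStringLiteral("50%1off");

        // Chaining arg(): the second call fills the '%1' coming from 'user'.
        qDebug() << QString("%1 shared %2").arg(user).arg("file.txt");
        // -> "50file.txtoff shared %2"

        // Multi-argument overload: both markers are substituted in one pass.
        qDebug() << QString("%1 shared %2").arg(user, QStringLiteral("file.txt"));
        // -> "50%1off shared file.txt"
        return 0;
    }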
By default, followRedirects is true for all requests, to transparently
handle redirections. The wizard, however, has its own redirect-handling
code, and that code was being skipped.
Setting the flag to false allows the wizard to be aware of redirects
and to handle them in the correct way. Tested with the server described in
https://github.com/owncloud/administration/tree/master/redirectServer
There's a second bug here, where followRedirects always converts
redirected requests to the GET verb. That means redirected PROPFINDs
have never worked. This change un-breaks them for the wizard only.
There should be no case that previously worked that stops working now.
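A rough sketch of what the wizard-side handling can now do; the surrounding slot is illustrative, only the Qt attribute and header names are real:

    // With automatic redirect following disabled, the wizard sees the 30x
    // reply itself and can re-issue the PROPFIND with the same verb.
    const int status =
        reply->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt();
    if (status == 301 || status == 302) {
        const QUrl target = reply->header(QNetworkRequest::LocationHeader).toUrl();
        // re-run the PROPFIND against 'target' instead of letting the generic
        // code turn the redirected request into a GET
    }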
The test sets OWNCLOUD_MAX_PARALLEL to 1 to disable parallelism.
But since the hard maximum of parallel jobs is twice that value, this does not
work.
So change the way we compute the hardMaximumActiveJob: use the value of
OWNCLOUD_MAX_PARALLEL directly as this hard maximum, and base the maximum
number of transfer jobs on it instead of the other way around.
A result of this change is that, in case of a bandwidth limit, we keep the
default of 6 non-transfer jobs in parallel. I believe that's fine since
the short jobs do not really use bandwidth, so we can still keep the same
amount of small jobs.
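A sketch of the new relationship between the two limits; the environment variable is the real one, the function bodies are illustrative:

    // OWNCLOUD_MAX_PARALLEL now directly bounds the total number of active
    // jobs (previously it bounded the transfer jobs, and the hard maximum was
    // derived as twice that).
    int hardMaximumActiveJob()
    {
        const int max = qgetenv("OWNCLOUD_MAX_PARALLEL").toInt();
        return max > 0 ? max : 6;            // default hard maximum
    }

    // The number of parallel transfer jobs is derived from the hard maximum,
    // instead of the other way around.
    int maximumActiveTransferJob(bool bandwidthLimited)
    {
        if (bandwidthLimited)
            return 1;                        // keep transfers sequential
        return qMin(3, qCeil(hardMaximumActiveJob() / 2.0));
    }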
This reverts commit e1f5a49c21.
Retrying uploads with insufficient storage errors frequently leads to
high server traffic. See #5537 for links and a sketch of a correct
solution.
If we call
setConfiguration(QNetworkConfiguration());
this sets an invalid configuration on the QNAM.
But later, when we really go online because interfaces are discovered,
QNetworkAccessManagerPrivate::_q_onlineStateChanged is called (with isOnline=true),
and this will set the state to disconnected because customNetworkConfiguration is
true and the networkConfiguration state is disabled.
The call was a workaround to fix another bug on Windows in which the default network
configuration was not behaving properly.
The issue on Linux is hard to reproduce and only happens under certain conditions,
but it was reproduced on smashbox when running two owncloudcmd instances at the same time.
Issues: #4720, #3600
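The problematic pattern in a nutshell (the call is real Qt API; the comment restates the behaviour described above):

    QNetworkAccessManager qnam;
    qnam.setConfiguration(QNetworkConfiguration());    // default-constructed => invalid
    // QNAM now believes a custom configuration is in use. Once interfaces are
    // discovered, _q_onlineStateChanged(true) checks that configuration, finds
    // it disabled, and switches the manager to a disconnected state.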
We were removing the whole journal db when the user wanted to keep all files,
but that would also remove the selective sync lists.
We should only remove the metadata table.
Issue #5484
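A minimal sketch of the idea, using QSqlDatabase for illustration (the client uses its own thin sqlite wrapper, and the connection name here is made up):

    // Remove only the file metadata; the selective sync lists live in other
    // tables of the same journal database and must survive.
    QSqlDatabase db = QSqlDatabase::database(QStringLiteral("sync_journal"));
    db.exec(QStringLiteral("DELETE FROM metadata;"));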
- Put all tests in the bin directory so that DLLs can be loaded
- Add missing exports
- Skip tests that use code depending on zlib
- The "GMT" timezone is named differently, use the int constructor instead
5 tests are still failing, it's not really worth fixing at the moment
since no developper is currently using Windows as its main platform.
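A short sketch of the timezone workaround; QTimeZone's offset constructor is real Qt API:

    // On Windows the IANA id "GMT" is not resolved the same way, so build the
    // zone from a UTC offset in seconds instead of from the name.
    QTimeZone utc(0);                  // offset-based, valid on all platforms
    // instead of: QTimeZone utc(QByteArrayLiteral("GMT"));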
It can happen that _firstJob is marked as finished by an abort
before its parent PropagateDirectory is marked as finished,
allowing a posted scheduleNextJob call to schedule the child job
in-between.
We can do that because the only changes that were in master but not in 2.3 were the
translation and documentation changes, and the support for the 'M' permission,
which we want in 2.3.
The sync engine relies on the 'M' permission to ask for confirmation
(as requested in issue #5340).
But we only want to ask for confirmation for the 'root' of the mount point and not
for every subfolder within it.
So we change the discovery phase in a way that it does not keep the 'M' for
children within the external storage (see the sketch below).
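A simplified sketch of the discovery-phase change (the helper and member names are assumptions):

    // Only the root of the external storage keeps the 'M' (mount point)
    // permission, so confirmation is asked once per mount point and not for
    // every file or subfolder inside it.
    if (!isExternalStorageRoot(path))
        remotePerm.remove(QLatin1Char('M'));   // children lose the flag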
Added two checkboxes to the advanced page of the account wizard to change the first options.
Also added a checkbox in the general settings to ask for confirmation for external storages.
Theme options allow hiding the checkboxes in the wizard.
As described in issue #5340
We now delete subjobs as their propagation is complete. This allows us
to also release the item by making sure that nothing else is holding a
reference to it.
Remove the stored SyncFileItemVector from SyncEngine and SyncResult
and instead gather the needed info progressively as each itemCompleted
signal is emitted.
This frees holes on the heap as propagation progresses, allowing subsequent
allocations to be served without requesting more virtual memory
from the OS, and prevents the memory usage from growing steadily.
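A sketch of the sub-job cleanup (simplified; member names and the container handling are assumptions):

    void PropagateDirectory::slotSubJobFinished(SyncFileItem::Status status)
    {
        PropagatorJob *subJob = static_cast<PropagatorJob *>(sender());
        // Deleting the sub-job as soon as it has reported its item lets the
        // SyncFileItemPtr it holds be released while propagation continues.
        _subJobs.removeOne(subJob);
        subJob->deleteLater();
        // ... error/finished bookkeeping based on 'status' continues as before
    }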
This was there to catch duplicate emissions for PropagateDirectory, but we
don't emit this signal from there anymore.
This fixes a warning about PropagatorJob not being a registered metatype.
This reverts commit fe42c1a818.