This is motivated by the fact that QMetaObject::normalizedSignature takes
7.35% of the CPU time in LargeSyncBench (mostly from
AbstractNetworkJob::setupConnections and PropagateUploadFileV1::startNextChunk).
It could be fixed by using normalized signatures in the connect statements,
but I thought it was a good opportunity to modernize the code.
This commit only contains calls that were automatically converted with clazy.
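A representative conversion (identifiers are illustrative, not taken
from the actual diff):

    // Before: string-based connect; Qt normalizes the signature at runtime
    connect(job, SIGNAL(finishedSignal()),
            this, SLOT(slotJobFinished()));

    // After: pointer-to-member connect; checked at compile time, with no
    // runtime call to QMetaObject::normalizedSignature
    connect(job, &AbstractNetworkJob::finishedSignal,
            this, &PropagateUploadFileV1::slotJobFinished);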
Now that csync builds as C++, this will avoid having to implement
functionality needed by csync directly in csync itself.
This library is built as part of libocsync, and its symbols are
exported through it.
This requires a relicense of Utility as LGPL. All classes moved into
this library from src/libsync will need to be relicensed as well.
We need Qt 5.9 for HTTP2 because, even though Qt 5.8 already has support
for it, there are critical bugs in the HTTP2 implementation which
make it unusable [ https://codereview.qt-project.org/186050 and
https://codereview.qt-project.org/186066 ]
When using HTTP2, we can use many more parallel network requests; this
is especially good for small file handling.
Lower the priority of the GET and PUT propagation jobs, so the quota
or selective sync UI PROPFIND will not be blocked by them.
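At the request level this boils down to something like the following
sketch (the attribute and priority enums are Qt API; the surrounding
code is illustrative):

    QNetworkRequest req(url);
    // Opt into HTTP2 now that we build against Qt 5.9
    req.setAttribute(QNetworkRequest::HTTP2AllowedAttribute, true);
    // Keep bulk GET/PUT transfers from starving interactive PROPFINDs
    req.setPriority(QNetworkRequest::LowPriority);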
It now produces a summary error message indicating the problem.
Adjust blacklist database table to contain 'errorCategory'. This is
useful for two things:
- Reestablishing summary messages based on blacklisted errors. For
example if we don't retry a 507ed file, we still want to show the
message about space on the server
- Selectively wiping the blacklist: When we have ui for something like
"I deleted some files, please retry all files now!", we want to
delete all blacklist entries of a specific category only.
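A hedged sketch of such a selective wipe (the table and column names
follow this commit; the category enum and its values are assumptions):

    enum class ErrorCategory { Normal = 0, InsufficientRemoteStorage = 1 };

    QSqlQuery query(db);
    // Only drop entries of one category, e.g. after the user freed up
    // space on the server; other blacklisted errors keep their backoff.
    query.prepare("DELETE FROM blacklist WHERE errorCategory = ?");
    query.addBindValue(int(ErrorCategory::InsufficientRemoteStorage));
    query.exec();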
* For conflicts where mtime and size are identical:
a) If there's no remote checksum, skip (unchanged)
b) If there's a remote checksum that's a useful hash, create a
PropagateDownload job and compute the local hash. If the hashes
are identical, don't download the file and just update metadata
(see the sketch below).
* Avoid exposing the existence of checksumTypeId beyond the database
layer. This makes handling checksums easier in general because they
can usually be treated as a single blob.
This change was prompted by the difficulty of producing file_stat_t
entries uniformly from PROPFINDs and the database.
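For case b) above, computing the local hash could look like this
minimal sketch (the real code runs through the job infrastructure):

    QByteArray computeLocalHash(const QString &path,
                                QCryptographicHash::Algorithm algo)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            return QByteArray();
        QCryptographicHash hash(algo);
        hash.addData(&file); // streams the whole file through the hash
        return hash.result().toHex();
    }

If the result matches the remote checksum, the download is skipped and
only the metadata is updated in the database.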
Use qCInfo for anything that has general value for support and
development. Use qCWarning for any recoverable error and qCCritical
for anything that could result in data loss or would identify a serious
issue with the code.
Issue #5647
This gives more insight into the logs and allows setting fine-tuned
logging rules. The categories are set to only output Info by default,
so this allows us to provide more concise logging while keeping the
ability to extract more information for a specific category when
developing or debugging customer issues.
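A sketch of how a category is declared and used (the category name
here is an assumption):

    // Only Info and above is emitted by default
    Q_LOGGING_CATEGORY(lcPropagator, "sync.propagator", QtInfoMsg)

    qCInfo(lcPropagator) << "starting upload of" << file;
    qCWarning(lcPropagator) << "recoverable error:" << errorString;
    qCCritical(lcPropagator) << "possible data loss for" << file;

More verbose output can then be enabled per category, e.g. with
QT_LOGGING_RULES="sync.propagator.debug=true".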
Issue #5647
By default QNetworkReply::errorString() often produces messages like
"Error downloading <url> - server replied: <reason>"
but the "downloading" part invariably confuses people since the
error might very well have been produced by a PUT request.
This commit produces clearer error messages for HTTP errors.
Additionally:
* Remove some unnecessary null checks from slots connected to
network job signals and document that these signals never send
null replies.
* There was a bug where AbstractNetworkJob::_timedout wasn't
set when derived classes overrode slotTimeout. We now ensure
it's always set by disallowing overrides of slotTimeout.
Instead it now calls onTimedOut, which allows custom handling
(see the sketch after this list).
* Several subclasses declared errorString, isTimedOut. Move
these to AbstractNetworkJob.
* Unify handling of OC-ErrorString (via the new, general
Job::errorString)
* Add documentation in various places.
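The timeout change boils down to a template-method pattern, roughly
(simplified sketch):

    class AbstractNetworkJob : public QObject {
        Q_OBJECT
    private slots:
        void slotTimeout()         // no longer overridable, so _timedout
        {                          // is always set
            _timedout = true;
            onTimedOut();          // subclasses customize behavior here
        }
    protected:
        virtual void onTimedOut(); // override point for custom handling
        bool _timedout = false;
    };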
* make target duration a client option instead of a capability
* simplify algorithm for determining chunk size significantly
* preserve chunk size for the whole propagation, not just per upload
* move options to SyncOptions to avoid depending on ConfigFile
in the propagator
* move chunk-size adjustment to after a chunk finishes, not when
a new chunk starts
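The adjustment itself is roughly the following (a sketch; the clamping
bounds and rounding are assumptions, not the shipped values):

    void adjustChunkSize(qint64 &chunkSize, qint64 uploadDurationMs,
                         qint64 targetDurationMs,
                         qint64 minChunkSize, qint64 maxChunkSize)
    {
        if (uploadDurationMs <= 0)
            return;
        // Scale so the next chunk takes about targetDurationMs to upload
        qint64 predicted = chunkSize * targetDurationMs / uploadDurationMs;
        chunkSize = qBound(minChunkSize, predicted, maxChunkSize);
    }

Because the size is preserved for the whole propagation, later uploads
in the same sync run start from the adjusted value.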
It is possible to create files with filenames that differ
only by case in NTFS, but most operations such as stat and
open only target one of these by default.
When that happens, we want to avoid uploading incorrect data
and give up on the file.
Typically this situation should never occur during normal use
of Windows. It can happen, however, when an NTFS partition is
mounted in another OS.
* For requests:
- reuse the original QNetworkRequest, so headers and attributes
are the same as in the original request
- determine the original HTTP method from the reply and the request
attributes
- keep the original request body around such that it can be sent
again in case the request is redirected
* Simplify the interface that is used for creating new requests in
AbstractNetworkJob.
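A condensed sketch of the resend (identifiers like _requestBody and
_nam are assumptions; error handling omitted):

    QNetworkRequest req = reply->request();  // reuse headers and attributes
    req.setUrl(redirectTarget);
    // Recover the original verb, e.g. from the request attributes
    QByteArray verb =
        req.attribute(QNetworkRequest::CustomVerbAttribute).toByteArray();
    _requestBody->reset();                   // body was kept for this case
    _reply = _nam->sendCustomRequest(req, verb, _requestBody);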
Previously this wasn't happening for errors that were not
NormalErrors because they don't end up in the blacklist.
This revises the resetting logic to be independent of the
error blacklist and make use of UploadInfo::errorCount
instead.
412 errors should reset chunked uploads because they might be
indicative of a checksum error.
Additionally, server bugs might require that additional
errors cause an upload reset. To allow that, a new capability
is added that can be used to advise the client about this.
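A hedged sketch of the check (the capability accessor name is an
assumption):

    // 412 always resets; the server may advertise additional codes in
    // its capabilities to work around server-side bugs.
    const auto extraCodes =
        capabilities.httpErrorCodesThatResetFailingChunkedUploads();
    if (httpStatus == 412 || extraCodes.contains(httpStatus)) {
        // Wipe the stored upload info so the next attempt starts over
        _journal->setUploadInfo(_item->_file, SyncJournalDb::UploadInfo());
    }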
Use a QMap to avoid using a full hashtable for only a few entries, and
clear the QMap once we're done with the measuring. This saves a few
hundred bytes per job during propagation that would otherwise only be
freed at the end of the sync.
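In short (sketch):

    QMap<QByteArray, quint64> durations;  // per-entry nodes only, unlike
                                          // QHash's preallocated bucket table
    durations.insert("db", 12);
    durations.insert("network", 340);
    durations.clear();                    // release the nodes as soon as
                                          // the measuring is done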
Same fix as in commit 60c101d9
From the crash reporter:
Crash
EXCEPTION_ACCESS_VIOLATION_READ at 0x4
qnetworkreply.cpp in QNetworkReply::request at line 476
propagateupload.cpp in OCC::PUTFileJob::slotTimeout at line 100
moc_abstractnetworkjob.cpp in OCC::AbstractNetworkJob::qt_static_metacall at line 98
qobject.cpp in QMetaObject::activate at line 3716
moc_qtimer.cpp in QTimer::timeout at line 192
qtimer.cpp in QTimer::timerEvent at line 247
qobject.cpp in QObject::event at line 1267
qapplication.cpp in QApplicationPrivate::notify_helper at line 3722
qapplication.cpp in QApplication::notify at line 3505
qcoreapplication.cpp in QCoreApplication::notifyInternal at line 932
This fixes an issue in which too many jobs were started in parallel
while uploading many files, which could cause excessive memory usage as the
chunks are stored in memory.
Probably the fix for #4611
* Add checksums/supportedTypes and checksums/preferredUploadType
capabilities (see the example below). The default is that no
checksum types are supported.
* Remove the transmissionChecksum config option. Servers must now
use the capabilities to indicate that they are fine with the
client sending checksums.
Note: This intentionally breaks brandings that overrode
Theme::transmissionChecksum. The override must be removed and the
server's capabilities must be adjusted to include the new values.
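A server might then advertise, for example (keys from this commit,
values illustrative):

    "checksums": {
        "supportedTypes": ["SHA1", "MD5", "Adler32"],
        "preferredUploadType": "SHA1"
    }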
Helps with small file sync #331
When I benchmarked this, parallelism went up to 6 and the sync
was about 1/3 faster than with the previous fixed parallelism of 3.
Doing more than 6 is dangerous because QNAM limits itself to 6 TCP
connections and the server might also become a bottleneck.
Should also help for #4081
Added a new "chunkSize" entry in the General group of owncloud.cfg
which can be set to the size, in bytes, of the chunks.
This allows users with huge bandwidth to select a more optimal chunk size.
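For example, to use 50 MiB chunks (value illustrative):

    [General]
    chunkSize=52428800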
Issue #4354
Servers older than 8.1 cannot cope with invalid characters in filenames,
so we must not send them from the client. We were already checking
for new files, but not for renames or new directories.
https://github.com/owncloud/enterprise/issues/1009
* Ensure every time a file becomes a directory or the other way around
the item is flagged as INSTRUCTION_TYPE_CHANGE.
* Delete the badly-typed entity if necessary in the propagation jobs.
* Compute the content checksum (in addition to the optional
transmission checksum) during upload (.eml files only)
* Add hook to compute and compare the checksum in csync_update
* Add content checksum to database, remove transmission checksum
We changed the discovery code to no longer ignore files whose filename
contains characters that are invalid on Windows (because newer versions
of the server support them).
Servers older than 8.1 will just say "Bad Request" as an error, which is a
regression against previous client versions. So keep the nice error even
with older servers.
Relates to #3736
Because that's what's going on. A job can 'complete an item' or 'finish'.
Note that several jobs could complete the same item: a new directory
will complete on the PropagateRemoteMkdir and the PropagateDirectory
jobs.
Previously, PropagateDirectory jobs didn't emit the completed() signal.
Now that they do, we need to make sure to not add extra lines to the
protocol widget for them.
To accomplish that, the jobCompleted() signal now also contains the job
that completed the item.
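A sketch of how the protocol widget can now filter (the signature
follows the commit text; exact types and names are assumptions):

    connect(propagator, &OwncloudPropagator::jobCompleted,
            this, [this](const SyncFileItem &item, const PropagatorJob &job) {
                // A new directory completes twice (PropagateRemoteMkdir and
                // PropagateDirectory); only add one protocol line for it.
                if (qobject_cast<const PropagateDirectory *>(&job))
                    return;
                addProtocolItem(item); // hypothetical helper
            });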
In particular the 'unsupported client version' error message
is now visible to the user when trying to connect to a
server that no longer supports the current client version.
Since QNonContiguousByteDeviceThreadForwardImpl::reset will
call UploadDevice::reset with a BlockingQueuedConnection, this
allows us to reset the HTTP channel along with its buffers
before they get the chance to be reused with a subsequent request.
Use a QSharedPointer to keep the same ownership and
continue passing the SyncFileItems as a const& when
ownership isn't taken. This allows sharing the same
allocations between the jobs and the result vectors.
This saves about 20MB of memory (out of 120MB) once all
jobs are created.
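Roughly (a sketch; the exact alias spelling is an assumption):

    typedef QSharedPointer<SyncFileItem> SyncFileItemPtr;
    typedef QVector<SyncFileItemPtr> SyncFileItemVector;

    SyncFileItemPtr item(new SyncFileItem);
    _syncedItems.append(item);  // the result vector holds one reference
    startUploadJob(item);       // jobs share the same allocation instead
                                // of copying the item (hypothetical call)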
This is largely a guess, but this is the only place where we use
a QIODevice to push data through QNAM where the QIODevice isn't
a direct child of the QNetworkReply.
Fix the issue by making sure that we don't go back to the event loop
and possibly handle network events between the destruction of the
upload QIODevice and the QNetworkReply, which might lead to QNAM
dereferencing a dangling QIODevice pointer.