This was previously done in Folder; however, the command line client does not
have the Folder class. To avoid duplicating code, the function that generates
the sync journal name was moved to the SyncEngine class.
This would happen if the directory first needed to be created
through an mkdir propagation job. That job's itemCompleted signal
would make the directory show as SYNC even though its children
are still propagating.
Fix the issue by tracking a sync count for each file and propagating
it to its parents. This lets us get rid of the O(n) vector lookup
for each status query and properly track the hierarchical sync
status of a directory.
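A minimal sketch of the counting idea, assuming a hypothetical tracker class
(the real SyncFileStatusTracker keeps more state alongside the counts):

    #include <QHash>
    #include <QString>

    class SyncCountTracker
    {
    public:
        // Called when an item starts propagating: bump the count of the file
        // itself and of every parent directory up to the root.
        void incSyncCount(const QString &relativePath)
        {
            for (QString path = relativePath; !path.isEmpty(); path = parentPath(path))
                ++_syncCount[path];
            ++_syncCount[QString()]; // root folder
        }

        // Called from itemCompleted: decrement and drop entries that reach zero,
        // so a directory only shows as SYNC while some child is still propagating.
        void decSyncCount(const QString &relativePath)
        {
            for (QString path = relativePath; !path.isEmpty(); path = parentPath(path)) {
                if (--_syncCount[path] == 0)
                    _syncCount.remove(path);
            }
            if (--_syncCount[QString()] == 0)
                _syncCount.remove(QString());
        }

        // O(1) status query instead of scanning a vector of items.
        bool isSyncing(const QString &relativePath) const
        { return _syncCount.contains(relativePath); }

    private:
        static QString parentPath(const QString &path)
        {
            int slash = path.lastIndexOf(QLatin1Char('/'));
            return slash < 0 ? QString() : path.left(slash);
        }

        QHash<QString, int> _syncCount;
    };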
This also removes the itemCompleted signal emission from the
PropagateDirectory job. Since we only needed it for overlay icons, and
since this job doesn't do any direct propagation, we can remove it
to ensure that we won't call itemCompleted twice for the item attached
to Propagate*Mkdir jobs (since the PropagateDirectory job is backed by
the same SyncFileItem, instruction and status).
Tracking the need to update metadata without propagation through a
separate flag makes it difficult to track priorities between the local
and remote tree. The logic is also hard to cover since the matrix of
possibilities isn't handled 100%, leaving the flag used in only a few
situations (mostly, but not only, involving folders).
The reason we need to change this is to be able to track the sync
state of files for overlay icons. The instruction alone can't be
used since CSYNC_INSTRUCTION_SYNC is used for folders even though
they won't be propagated. Removing this logic is however not possible
without using something other than CSYNC_INSTRUCTION_NONE, since too
many code paths interpret (rightfully) this as meaning "nothing to do".
This patch adds a new CSYNC_INSTRUCTION_UPDATE_METADATA instruction
to let the update and reconcile steps tell the SyncEngine to update
the metadata of a file without any propagation. Other flags are left
to be interpreted by the implementation as implicitly needing a
metadata update or not, as was already the case for most file
propagation jobs. For example, CSYNC_INSTRUCTION_NEW for directories
now also implicitly updates the metadata.
Since it's now impossible for folders to emit CSYNC_INSTRUCTION_SYNC
or CSYNC_INSTRUCTION_CONFLICT, the corresponding code paths in the
sync engine have been removed.
Since the reconcile step can now know if the local tree needs a metadata
update while the remote side might want propagation, the
localMetadataUpdate logic in SyncEngine::treewalkFile now simply uses
a CSYNC_INSTRUCTION_UPDATE_METADATA for the local side, which is now
implemented as a different database query.
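A hedged sketch of the resulting split between metadata-only updates and real
propagation, assuming the instruction enum is named csync_instructions_e and
is available from csync.h (the real SyncEngine::treewalkFile handles many
more cases):

    #include <csync.h>  // assumption: declares csync_instructions_e and the CSYNC_INSTRUCTION_* values

    // UPDATE_METADATA items only refresh the journal entry (a plain database
    // write, different for the local and remote tree); everything else goes
    // through a propagation job, which updates the metadata implicitly.
    bool needsPropagation(csync_instructions_e instruction)
    {
        switch (instruction) {
        case CSYNC_INSTRUCTION_NONE:            // nothing to do at all
        case CSYNC_INSTRUCTION_UPDATE_METADATA: // metadata-only update, no job
            return false;
        default:                                // NEW, SYNC, REMOVE, RENAME, ...
            return true;
        }
    }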
Add a missing call that we currently only do in slotItemCompleted.
This would normally only affect the first sync and would have
gotten properly updated at the end of the sync anyway.
To be able to test the SyncEngine efficiently, a set of server
mocking classes has been implemented on top of QNetworkAccessManager.
The local disk side hasn't been mocked since this would require adding
a large abstraction layer in csync. Instead, the SyncEngine is pointed
to a different temporary dir in each test and we test by interacting
with files in that directory.
The FakeFolder object wraps the SyncEngine with those abstractions
and allows controlling the local files and the fake remote state
through the FileModifier interface. A FileInfo tree structure backs
the remote-side implementation and is also used for feeding and
comparing the states on both sides in tests.
Tests run fast and require no setup, but each server feature
that we want to test on the client side needs to be implemented in
this fake-objects library. For example, the OC-FileId header isn't
set as of this commit, and we can't test the file move logic properly
without implementing it first.
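A condensed sketch of the mocking approach (class and helper names here are
illustrative; the real FakeFolder/FileInfo helpers in the test suite carry
much more state):

    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QTimer>
    #include <cstring>

    // A reply that finishes asynchronously with a canned payload, like a server would.
    class FakeReply : public QNetworkReply
    {
    public:
        FakeReply(const QNetworkRequest &request, const QByteArray &payload, QObject *parent)
            : QNetworkReply(parent), _payload(payload)
        {
            setRequest(request);
            setUrl(request.url());
            open(QIODevice::ReadOnly);
            QTimer::singleShot(0, this, [this] {
                setAttribute(QNetworkRequest::HttpStatusCodeAttribute, 200);
                setHeader(QNetworkRequest::ContentLengthHeader, _payload.size());
                emit metaDataChanged();
                emit readyRead();
                setFinished(true);
                emit finished();
            });
        }
        void abort() override { }

    protected:
        qint64 readData(char *data, qint64 maxlen) override
        {
            const qint64 len = qMin<qint64>(maxlen, _payload.size());
            std::memcpy(data, _payload.constData(), len);
            _payload.remove(0, int(len));
            return len;
        }

    private:
        QByteArray _payload;
    };

    // The QNAM handed to the SyncEngine under test: every request is answered
    // from the in-memory remote state instead of hitting the network.
    class FakeQNAM : public QNetworkAccessManager
    {
    protected:
        QNetworkReply *createRequest(Operation op, const QNetworkRequest &request,
                                     QIODevice *outgoingData = nullptr) override
        {
            Q_UNUSED(op);
            Q_UNUSED(outgoingData);
            return new FakeReply(request, replyBodyFor(request), this);
        }

        // Hypothetical: build a PROPFIND/GET/... response from the fake remote state.
        QByteArray replyBodyFor(const QNetworkRequest &) { return QByteArray(); }
    };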
The TestSyncFileStatusTracker tests already contain a few QEXPECT_FAILs
for what I consider to be issues that need to be fixed in order to catch
up on our test coverage, without making this patch too huge.
This is a move away from the original policy where jobs
would only follow redirects in special cases.
Two restrictions are in place:
1. We do not allow protocol downgrades (https -> http)
2. We stop following redirects once we detect a loop (e.g. the old URL equals
the new one, or indirectly after looping 10 times).
This is closer to RFC-conforming behavior, although we currently
treat 301 replies as if they were 302. Fixing that is left for a
separate commit.
Error handling (and display) also needs improvement.
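A rough sketch of the two restrictions above (a hypothetical helper, not the
actual job code):

    #include <QUrl>

    static const int maxRedirects = 10;

    // Decide whether a redirect from oldUrl to newUrl should be followed.
    bool shouldFollowRedirect(const QUrl &oldUrl, const QUrl &newUrl, int redirectsSoFar)
    {
        // 1. No protocol downgrades (https -> http).
        if (oldUrl.scheme() == QLatin1String("https")
            && newUrl.scheme() == QLatin1String("http"))
            return false;

        // 2. Stop on loops: redirecting back to the same URL, or (to catch
        //    indirect loops) after 10 hops.
        if (newUrl == oldUrl || redirectsSoFar >= maxRedirects)
            return false;

        return true;
    }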
Addresses #2791
Once upon a time, the SyncEngine was instantiated once per sync. But now that
the SyncEngine is kept between syncs, we need to reset all these variables
between syncs.
Reverts commit 622017adcf
Could be the cause of #5092, and if so, the cost is higher than the benefit.
A network request taking more than 30 seconds isn't unlikely in the real world
and shouldn't be a reason to abort. We should instead try to untangle the
thread dependencies to properly fix this, if possible.
Missing deleteLater when the CleanupPollsJob aborts.
This is only a problem if the SyncEngine is kept alive for a long time, which
is usually not the case in configurations where poll jobs are used.
Since the SyncEngine now quits and waits for the discovery thread,
the main thread can enter a deadlock where the discovery thread waits
for its directory result.
Add a 2-second timeout to the discovery thread's wait condition
to limit the deadlock time.
Use a QMap to avoid using a full hashtable for only a few entries, and
clear the QMap once we're done with the measuring. This saves a few
hundred bytes per job during propagation that would otherwise only be
freed at the end of the sync.
The FolderWatcher inserts files to be marked as SYNC and we
currently assume that all file statuses will be updated by the
following sync. It's however possible that the FolderWatcher
notifies us of a change that csync won't consider necessary to
propagate, in which case a new status wouldn't be pushed and
the file manager would continue showing this file as syncing.
To avoid this, re-push the file status (most likely the OK status)
when emptying the dirty files list before propagating.
No need to allocate (and initialize to 0) a 10 MiB buffer for each file, even
when most files are much smaller than that.
So make sure the buffer that we allocate is not bigger than the file size.
Also, 10 MiB is a bit big for a buffer; 500 KiB should be more than enough.
(Overly large allocations can cause problems because of memory fragmentation and the like.)
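A minimal sketch of the sizing rule (hypothetical helper name; the actual code
lives in the upload job's buffer handling):

    #include <QtGlobal>

    static const qint64 maxBufferSize = 500 * 1024; // 500 KiB is more than enough

    // Never allocate more than the file size, and cap the buffer at 500 KiB to
    // avoid big allocations that fragment memory.
    qint64 bufferSizeFor(qint64 fileSize)
    {
        return qMin(fileSize, maxBufferSize);
    }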
We first need to set the abort flag on csync and only then abort the discovery
job; otherwise the discovery thread could start a new job in the meantime.
We also need to make sure that the thread has exited before we destroy the
exclude list.
Events from the crash reporter suggest that the QNAM and its
child replies might get deleted before returning from this method
and the only possible cause we can see is that the inner event
loop has something to do with it.
Try keeping a ref on the QNAM while in this method to make sure
that it won't get deleted by the inner event loop.
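A minimal sketch of the workaround, assuming the QNAM is held through a
QSharedPointer (member and method names are hypothetical):

    #include <QNetworkAccessManager>
    #include <QSharedPointer>

    void runRequestWithInnerEventLoop(QSharedPointer<QNetworkAccessManager> qnamMember)
    {
        // Hold an extra reference for the duration of this method so the inner
        // event loop cannot delete the QNAM (and its child replies) under us.
        QSharedPointer<QNetworkAccessManager> keepAlive = qnamMember;

        // ... issue the request and spin the nested event loop here ...
    }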
Same fix as in commit 60c101d9
From the crash reporter:
Crash
EXCEPTION_ACCESS_VIOLATION_READ at 0x4
qnetworkreply.cpp in QNetworkReply::request at line 476
propagateupload.cpp in OCC::PUTFileJob::slotTimeout at line 100
moc_abstractnetworkjob.cpp in OCC::AbstractNetworkJob::qt_static_metacall at line 98
qobject.cpp in QMetaObject::activate at line 3716
moc_qtimer.cpp in QTimer::timeout at line 192
qtimer.cpp in QTimer::timerEvent at line 247
qobject.cpp in QObject::event at line 1267
qapplication.cpp in QApplicationPrivate::notify_helper at line 3722
qapplication.cpp in QApplication::notify at line 3505
qcoreapplication.cpp in QCoreApplication::notifyInternal at line 932
Previously, rejecting any kind of certificate meant that the user
was never asked again, even if the certificate changed.
Now we keep track of which certificates were rejected and ask again
if the ones mentioned in the SSL errors change.
mitmproxy is excellent for testing this.
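A minimal sketch of the bookkeeping, assuming the rejected certificates are
remembered as digests in a member set (the real account/SSL error handling
code differs):

    #include <QByteArray>
    #include <QList>
    #include <QSet>
    #include <QSslCertificate>
    #include <QSslError>

    class RejectedCertificates
    {
    public:
        // True if any certificate in the new error list was not part of the
        // previously rejected set, i.e. the user should be asked again.
        bool shouldAskAgain(const QList<QSslError> &errors) const
        {
            for (const QSslError &error : errors) {
                if (!_rejected.contains(error.certificate().digest()))
                    return true;
            }
            return false;
        }

        // Remember what the user rejected this time.
        void remember(const QList<QSslError> &errors)
        {
            for (const QSslError &error : errors)
                _rejected.insert(error.certificate().digest());
        }

    private:
        QSet<QByteArray> _rejected; // digests of rejected certificates
    };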
* Progress: Don't count dirs without propagation jobs #4856
These directory SyncFileItems are necessary for bookkeeping
but should not influence the progress display at all.
* Progress: Skip ignored files #4856
The problem in this case is if we rename the file "xxx" to "invalid\file".
The rename will fail because the new filename contains a slash, and it
will be blacklisted.
But if the user then re-renames the file to "valid_name", we should
invalidate the blacklist entry and retry the upload. We did not do
that because renaming doesn't change the mtime and we did not store the
rename target in the database.
IL issue 558
This fixes an issue in which too many jobs are started in parallel
while uploading many files, which could cause too much memory usage as the
chunks are stored in memory.
Probably the fix for #4611
Issue #4855
A typo in the context string made the translation lookup fail.
Also, the %Ln was not recognized as a plural form by Transifex, so only
the singular form was translated.
The assert was there to catch this case so that it could eventually be
properly tested. Remove the assert for now, but this code path should
eventually be unit tested for this specific situation.
Since the Windows implementation first does cache lookups using the
path string, directories need to be passed in exactly the same form as
through RETRIEVE_FILE_STATUS.
Change the convention to never have a trailing slash for directories
in the protocol. This allows the convention to be applied without
having to access the disk (since we'd need to know if the path is
represented by a directory) and also matches the convention of the
rest of the sync engine. Individual file manager plugins are then
responsible for handling pushed paths as not ending with a trailing
slash.
This also:
- Moves the trailing slash removal logic from the SyncFileStatusTracker
to the SocketApi class
- Removes the unneeded QString::normalized call in fileStatus, since
this should already be done by the FolderWatcher and plugins
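A minimal sketch of the path convention (hypothetical helper; the actual
normalization sits in the SocketApi path handling):

    #include <QString>

    // Paths pushed through the socket API never carry a trailing slash, whether
    // they name a file or a directory, so no disk access is needed to decide.
    QString withoutTrailingSlash(QString path)
    {
        if (path.endsWith(QLatin1Char('/')))
            path.chop(1);
        return path;
    }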
Go through fileStatus like the other cases to make sure that all use
cases go through the same code path. This also makes sure to use
lookupProblem, which uses lower_bound and is more efficient
for larger sets of sync problems.
This also fixes the issue with lookupProblem that prevented it from
properly matching an empty pathToMatch, caused by the fact that the
problem map contains relative paths not starting with a slash.
Make sure that we push the new status when the status of the SyncEngine
changes. SyncEngine::started comes a bit late, only when the propagation
starts, but it's actually better in this case since child folders will
only switch to Sync in aboutToPropagate.
Also fix an issue with SyncEngine::findSyncItem when using an empty
fileName; this would match and return the wrong item, even though it
doesn't currently happen since fileStatus no longer calls it with an
empty fileName.
As before, we rely on metadata-update SyncFileItem entries for parent
directories to notify us that a directory contains files to propagate,
and to know when all children were propagated through its itemCompleted
signal.
Those metadata SyncFileItems however have a None direction and we need
to add an explicit directory check to show them as Sync.
This fix also handles new files as well as existing ones, so there is
no need to keep separate logic for new files.
propagatedownload.cpp:712:35: error: 'seenLockedFile' is a protected member of 'OCC::OwncloudPropagator'
Signals are protected in Qt 4 but public in Qt 5; mark the class accessing it
as a friend when compiling with Qt 4.
When a conflict-rename or a temporary-rename fails, notify the
LockWatcher. It'll regularly check whether the file has become
accessible again. When it has, another sync is triggered.
owncloud/enterprise#1288
If the downloaded file is empty but the PROPFIND previously announced that it
should not have been empty, this might mean the file was somehow corrupted
because of a bug on the server, and we should therefore not accept the file.
Normally we accept a difference between the actual size of the file and what
we got during discovery, because the file might have been updated to a new
version in between. But after this patch we won't accept the file if it was
replaced by an empty one.
Will help for issue #4583
Also requested by IL for issue 548
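A hedged sketch of the check (names are illustrative; the real logic sits in
the download job's completion handling):

    #include <QFileInfo>
    #include <QString>

    // Reject the download if the server announced a non-empty size during
    // discovery but we ended up with 0 bytes: that points at a corrupted file
    // on the server rather than a legitimate new version.
    bool shouldRejectEmptyDownload(const QString &downloadedFile, qint64 sizeFromDiscovery)
    {
        return sizeFromDiscovery > 0 && QFileInfo(downloadedFile).size() == 0;
    }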
This uses the file watcher to keep track of files that were modified
in order to assign them the blue icon.
This is transient state that's not persisted across restarts.
As discussed on issue #4460
Having the quota queried on a subfolder is wrong in the generic case,
so add a branding option to configure it.
This partially reverts commit ff4cdc3161
In the before-propagate slot, new files that are waiting to be
pushed to the server are remembered in the _syncProblems
map. That way, the parents show a sync icon properly, as
asked for in #4682.
After the item has been transferred properly, it is
removed from the map again because success is the default.
Added in a previous commit from pull request #4663
As discussed, we do not need this option, so there is no need to introduce
a new dependency on the config file in the sync engine
* Add checksums/supportedTypes and checksums/preferredUploadType
capabilities. The default is that no checksum types are supported.
* Remove the transmissionChecksum config option. Servers must now
use the capabilities to indicate that they are fine with the
client sending checksums.
Note: This intentionally breaks brandings that overrode
Theme::transmissionChecksum. The override must be removed and the
server's capabilities must be adjusted to include the new values.
SQLITE_DONE is the indicator for no more query results, which is a legal
thing and not an error condition.
Also, check _getFileRecordQuery for a null pointer, as close() wipes it.
The idea is that the next call to any database operation will try to
reopen the database through the checkConnect() method. So even if there
was a disconnect from the db file, this will reestablish the connection.
Imagine this scenario on a read-only share: you move a file from
one location to a new directory in the read-only share.
Creating the read-only directory fails with a permission error,
but we should also restore the files that have been moved.
IL issue 542
Bring back the hardcoded status logic for excluded files.
Since the activity log doesn't even mention those files on purpose,
we can't rely on the SyncEngine to notify us about the status of those files.
Looking up a/aa while an error is present in a/aab/aaba would return
a warning status, since a/aa is a substring of a/aab.
Fix the issue by checking whether the character following the prefix is a slash.
This prevents having to define a Problem structure with dubious
operator overloads to accomplish the same.
Also use std::map::lower_bound to quickly iterate over the
list of problems.
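A sketch of the lookup, assuming the problems are stored in a std::map keyed
by relative path without a leading slash (names are illustrative):

    #include <map>
    #include <QString>

    enum class Severity { Warning, Error };

    // Returns true if pathToMatch itself, or anything below it, has a problem.
    bool hasProblemBelow(const std::map<QString, Severity> &problems, const QString &pathToMatch)
    {
        // lower_bound jumps straight to the first entry >= pathToMatch, so we
        // only scan candidates that can actually share the prefix.
        auto it = problems.lower_bound(pathToMatch);
        for (; it != problems.end() && it->first.startsWith(pathToMatch); ++it) {
            // An empty pathToMatch means the root folder: everything matches.
            if (pathToMatch.isEmpty())
                return true;
            // Exact match, or a child separated by a slash; this is what keeps
            // "a/aa" from matching the unrelated "a/aab/aaba".
            if (it->first.size() == pathToMatch.size()
                || it->first.at(pathToMatch.size()) == QLatin1Char('/'))
                return true;
        }
        return false;
    }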
This also removes all smartness from the SocketApi about the status
of a file and solely uses info from the current and last sync.
This simplifies the logic a lot and prevents any discrepancy between
the status shown in the activity log and the one displayed on the
overlay icon of a file.
The main benefit of the added simplicity is that we are able
to push every new status of a file reliably (including warnings for
parent folders) to properly update the icon on overlay implementations
that don't allow us to invalidate the status cache, like on OS X.
Both errors and warnings from the last sync are now kept in a set,
which is also used to affect the parent folders of an error.
To make sure that errors don't become warning icons on a second
sync, SyncFileItem::_hasBlacklistEntry is also interpreted as an error.
This also renames StatusIgnore to StatusWarning to match this semantic.
SyncEngine::aboutToPropagate is used in favor of SyncEngine::syncItemDiscovered
since the latter is emitted before file permission warnings are set on the
SyncFileItem. SyncEngine::finished is not used since we have all the
needed information in SyncEngine::itemCompleted.
SyncFileStatus' purpose is to track overlay icon status.
Instead of putting comments and default: clauses in switch statements
on both sides about unused enum values, use different enums.
This also removes STATUS_NEW, which is equivalent to
STATUS_SYNC in all shell extension implementations, and
removes STATUS_UPDATED and STATUS_STAT_ERROR, which have
the same semantics as STATUS_UPTODATE and STATUS__ERROR.
This will help move the SyncEngine construction into the constructor
and allow moving functionality from Folder to the SyncEngine or its
delegated objects.
Soldiering on with a broken or incomplete response could lead to
incorrect sync behavior.
Since discovery uses LsCol jobs which already handle errors
correctly, this should not have a significant impact.
The size on the server might be different from the size on the client
with certain backends, so it should be ignored.
(cherry picked from commit 9222db6df9b19a21e1bea5a238d745d96a6385e3)
In SQLite, bindings are not cleared by sqlite3_reset() calls, so
skipping a sqlite3_bind call to create a NULL value doesn't work;
instead, the previous value will be written.
To fix this, I clear all bindings in SqlQuery::reset and make sure
to explicitly bind NULL when desired in SqlQuery::bind.
To make sure there's no confusion about SqlQuery::reset and
sqlite3_reset, I rename our method to reset_and_clear_bindings().
(cherry picked from commit 7bd4f95b8c)
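A sketch of the pattern, assuming a thin SqlQuery wrapper around a raw
sqlite3_stmt (the method names follow the commit message; everything else is
illustrative):

    #include <sqlite3.h>

    struct SqlQuery {
        sqlite3_stmt *_stmt = nullptr;

        // sqlite3_reset() does not clear bindings, so clear them explicitly;
        // otherwise a skipped bind silently reuses the previous value.
        void reset_and_clear_bindings()
        {
            if (_stmt) {
                sqlite3_reset(_stmt);
                sqlite3_clear_bindings(_stmt);
            }
        }

        // Binding NULL must be explicit when that is what the caller wants.
        void bindNull(int pos)
        {
            sqlite3_bind_null(_stmt, pos);
        }
    };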
The isValid check should be used everywhere the capabilities
are used, as the capabilities are loaded in parallel with startup,
so they are not always guaranteed to be available.
Helps with small file sync #331
When I benchmarked this, it went up to a parallelism of 6 and
was about 1/3 faster than the previous fixed parallelism of 3.
Going beyond 6 is dangerous because QNAM limits itself to 6 TCP
connections, and the server might also become a bottleneck.
Should also help for #4081
This will be useful if we ever want to store account-level gui state.
I built this originally because I thought a paused account would be
this kind of state.
The creation doesn't need to be separated from the SyncEngine anymore.
This allows the SyncEngine to be created in fewer steps if we want to
use it in tests.
This moves most of the direct csync code from Folder into the SyncEngine.
The exclude file logic for the context has been wrapped using the
existing ExcludedFiles class as well.
As discussed with Klaas, this seems to be a better compromise.
10 MB * 3 parallel jobs = 30 MB in memory, kept to allow retrying in case of
disconnection, which is still reasonable. This might make the upload
almost twice as fast on fast networks where the number of chunks is the
bottleneck (because of more server processing).
Relates to issue #4354