In FolderStatusModel::slotLscolFinishedWithError, the call to parentInfo->resetSubs
deleted the 'job' and the reply 'r' that we accessed later to get the error code.
Fix this problem twice (sketched below) by:
1) getting the error code before calling resetSubs;
2) in FolderStatusModel::SubFolderInfo::resetSubs, calling deleteLater instead of delete.
Regression introduced in commit d69936e0
Issue #6562
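A minimal sketch of the two fixes, assuming a heavily simplified SubFolderInfo
(the real class and the resetSubs signature differ; only the names taken from
this message are real):

    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QObject>
    #include <QPointer>

    // Hypothetical, simplified stand-in for FolderStatusModel::SubFolderInfo.
    struct SubFolderInfoSketch {
        QPointer<QObject> _fetchingJob; // the running job that owns the reply 'r'

        void resetSubs()
        {
            // 2) deleteLater instead of delete: the job and its reply stay
            //    valid until control returns to the event loop.
            if (_fetchingJob)
                _fetchingJob->deleteLater();
        }
    };

    void lscolErrorSketch(QNetworkReply *r, SubFolderInfoSketch *parentInfo)
    {
        // 1) Capture the error code before resetSubs() releases job/reply.
        const int httpCode =
            r->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt();

        parentInfo->resetSubs();

        if (httpCode == 404) {
            // react to the error using the value captured above
        }
    }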
Since the E2E oracle works only in terms of absolute remote paths while our
model's x._path was relative, we need to convert properly.
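A minimal sketch of that conversion; the helper name and the slash handling
are assumptions, only x._path comes from the model:

    #include <QString>

    // Turn the model-relative path (e.g. x._path == "foo/bar") into the
    // absolute remote path ("/foo/bar") that the E2E oracle expects.
    QString toAbsoluteRemotePath(const QString &relativePath)
    {
        if (relativePath.startsWith(QLatin1Char('/')))
            return relativePath;                    // already absolute
        return QStringLiteral("/") + relativePath;
    }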
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This is a much better place than the GUI; this way we ensure the
propagator is always operating on up-to-date information. Previously, if
the propagator kicked in after startup without user interaction (not
showing the settings dialog), it would have no E2E information available
whatsoever... unsurprisingly, it would thus take wrong information at
every turn.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This also fixes a couple of warnings in a few places (out-of-order
initialization, for instance) and a potential bug in the webflow
credentials / qtkeychain integration.
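For reference, the out-of-order initialization case is the generic C++ pattern
below (the class is made up, not taken from the patch): members are initialized
in declaration order, so the initializer list should follow that same order.

    class Example {
        int _first;   // declared first, therefore initialized first
        int _second;
    public:
        // Listing _first before _second keeps the initializer list in
        // declaration order and silences the out-of-order warning.
        Example() : _first(0), _second(_first + 1) {}
    };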
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
This should address Tobias' concerns regarding the icon being
misleading. Now we basically do the following inside an encrypted folder
parent (see the sketch after the list):
* encrypted folders get the encrypted icon;
* non-encrypted empty folders get the regular folder icon;
* non-encrypted non-empty folders get the broken encryption icon.
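A sketch of that decision, with made-up enum and helper names (how emptiness
and encryption are detected is not shown here):

    enum class FolderIcon { Encrypted, Regular, BrokenEncryption };

    // Pick the icon for a subfolder whose parent is end-to-end encrypted.
    FolderIcon iconForChildOfEncryptedParent(bool childIsEncrypted, bool childIsEmpty)
    {
        if (childIsEncrypted)
            return FolderIcon::Encrypted;       // encrypted folder: lock icon
        if (childIsEmpty)
            return FolderIcon::Regular;         // empty, unencrypted: plain folder icon
        return FolderIcon::BrokenEncryption;    // non-empty, unencrypted: broken lock
    }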
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
The E2E application allows creating unencrypted subdirectories
in an encrypted parent. This is a big privacy problem.
This patch shows a red broken lock icon for these subdirectories
in the NC client UI.
Signed-off-by: Ivan Čukić <ivan.cukic@kde.org>
OCC::FolderStatusModel::slotUpdateDirectories: ASSERT: "parentInfo->_fetching" in file /home/olivier/kdegit/owncloud/mirall/src/gui/folderstatusmodel.cpp, line 599
This can happen if the structure of a folder is changed while the user
expands the root folder. In this case, resetSubs() is called, which
resets _fetching to false.
Instead, we need to keep a pointer to the job so we can abort it by
deleting it.
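Roughly the shape of the fix, with simplified member names (the real
SubFolderInfo and its job type differ):

    #include <QObject>
    #include <QPointer>

    struct SubFolderInfoSketch {
        bool _fetching = false;
        QPointer<QObject> _fetchingJob; // e.g. the running LsColJob

        void resetSubs()
        {
            _fetching = false;
            // Deleting the job aborts it, so its completion slots can never
            // fire against the reset state and trip the assert.
            delete _fetchingJob.data();
        }
    };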
This is motivated by the fact that QMetaObject::normalizedSignature takes 7.35%
of the CPU in the LargeSyncBench (mostly from AbstractNetworkJob::setupConnections
and PropagateUploadFileV1::startNextChunk). It could be fixed by using normalized
signatures in the connection statements, but I thought it was a good opportunity
to modernize the code.
This commit only contains calls that were automatically converted with clazy.
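For illustration, the kind of change clazy makes, shown on a generic pair of
objects rather than the actual call sites:

    #include <QCoreApplication>
    #include <QTimer>

    void connectExample(QTimer *timer, QCoreApplication *app)
    {
        // Old string-based form: the signature is normalized at runtime,
        // which is what showed up in the profile.
        QObject::connect(timer, SIGNAL(timeout()), app, SLOT(quit()));

        // New pointer-to-member form: checked at compile time, no string
        // normalization at connect time.
        QObject::connect(timer, &QTimer::timeout, app, &QCoreApplication::quit);
    }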
... or child folders
There is also no real reason to forbid the user from syncing the same
folder to multiple locations on their hard drive.
A real use case is when the user unchecks a big directory using "choose
what to sync" but would still like to sync a folder within this disabled
tree. The user can now do this with the "add folder" feature.
Since 2.3 we even support syncing the same local folder to multiple
remote folders, so why not allow syncing the same remote folder several
times?
Relates to issue #3645
Now that csync builds as C++, this avoids having to implement
functionality needed by csync inside csync itself.
This library is built as part of libocsync and its symbols are exported
through it.
This requires a relicense of Utility as LGPL. All classes moved into
this library from src/libsync will need to be relicensed as well.
* A bunch of code was determining the sync status by comparing some
progress info fields in an ad-hoc way. It can now just check the status
(see the sketch after the list), making it easier to comprehend.
* There's a clear indication for "a new sync is starting", which helps
wiping the issues tab at the right time.
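Roughly the difference, with illustrative names only (these enums and fields
are not the actual ProgressInfo API):

    // Before: infer "running" from ad-hoc progress fields, e.g.
    //   bool running = totalSize > 0 && completedSize < totalSize;

    // After: an explicit status carried with the progress information.
    enum class SyncStatusSketch { NotYetStarted, Starting, Propagation, Done };

    bool isSyncRunning(SyncStatusSketch status)
    {
        // "Starting" is also the unambiguous signal that a new sync begins,
        // i.e. the right moment to wipe the issues tab.
        return status == SyncStatusSketch::Starting
            || status == SyncStatusSketch::Propagation;
    }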
Use qCInfo for anything that has general value for support and
development. Use qCWarning for any recoverable error and qCCritical
for anything that could result in data loss or would identify a serious
issue with the code.
Issue #5647
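For example, with a per-file logging category (the category name here is made up):

    #include <QLoggingCategory>

    Q_LOGGING_CATEGORY(lcFolderStatus, "gui.folder.status")

    void logLevelExamples()
    {
        qCInfo(lcFolderStatus) << "fetched remote folder list";          // general support/dev value
        qCWarning(lcFolderStatus) << "LSCOL failed, will retry";         // recoverable error
        qCCritical(lcFolderStatus) << "local file vanished during move"; // possible data loss
    }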
This gives more insight into the logs and allows setting fine-tuned
logging rules. The categories are set to only output Info by default,
so this allows us to provide more concise logging while keeping the
ability to extract more information for a specific category when
developing or debugging customer issues.
Issue #5647
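A sketch of such a fine-tuned rule set (the category names are examples; the
same rules can also be supplied through the QT_LOGGING_RULES environment variable):

    #include <QLoggingCategory>
    #include <QString>

    void applyLoggingRules()
    {
        // Info and above everywhere, plus full debug output for one
        // category while investigating a customer issue.
        QLoggingCategory::setFilterRules(QStringLiteral(
            "*.info=true\n"
            "gui.folder.status.debug=true"));
    }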
The backtrace looks like:
File "atomic_base.h", line 396, in QString::~QString
File "qlist.h", line 442, in OCC::FolderStatusModel::slotUpdateDirectories
This is the only QList operation there, and it may crash if the list is empty.
It can be empty if the PROPFIND returned empty results.
I'm not sure how it is possible to have an empty list there, since the
server is always supposed to return at least one entry, for the directory
itself. But it can happen if a directory was transformed into a file, or
if there is a bug on the server.
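The guard is along these lines; the element type and what is done with the
first entry are simplified here:

    #include <QList>
    #include <QString>

    // The PROPFIND reply should always contain at least the entry for the
    // directory itself, but guard anyway so a broken server cannot crash us.
    QString firstEntryOrEmpty(const QList<QString> &entries)
    {
        if (entries.isEmpty())
            return QString();   // bail out instead of calling first() on nothing
        return entries.first();
    }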
- I checked every occurrence of a '%2' and made correct use of the
QString::arg overload that takes several arguments instead of chaining
them, because the first argument can contain a '%1'.
- I checked every label to make sure it uses either plain text or rich text,
and escaped the user-provided strings in there.
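An illustration of both points, with made-up strings and variable names:

    #include <QString>

    void argAndEscapeExamples(const QString &folderName, const QString &userName)
    {
        // Risky: if folderName (the first substituted value) happens to contain
        // "%1", the chained second arg() replaces it inside that user-provided
        // text instead of the intended "%2".
        const QString chained = QStringLiteral("Share %1 with %2").arg(folderName).arg(userName);

        // Safe: the multi-argument overload substitutes both placeholders at once.
        const QString multiArg = QStringLiteral("Share %1 with %2").arg(folderName, userName);

        // For labels rendered as rich text, escape user-provided strings.
        const QString label = QStringLiteral("<b>%1</b>").arg(userName.toHtmlEscaped());

        Q_UNUSED(chained); Q_UNUSED(multiArg); Q_UNUSED(label);
    }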
We can do that because the only changes that were in master but not in 2.3 were
the translation and documentation changes, and the support for the 'M' permission,
which we want in 2.3.
We now delete subjobs as their propagation is complete. This allows us
to also release the item by making sure that nothing else is holding a
reference to it.
Remove the stored SyncFileItemVector from SyncEngine and SyncResult
and instead gather the needed info progressively as each itemCompleted
signal is emitted.
This frees some holes on the heap as propagation goes, allowing many
memory allocations without needing to request more virtual memory
from the OS, and prevents the memory usage from growing steadily.
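The aggregation is conceptually like the sketch below; the counters and the
completion callback are illustrative names, not the actual SyncEngine/SyncResult API:

    #include <QtGlobal>

    // Record only what the result views need, as each item completes,
    // instead of retaining every SyncFileItem in a vector until the end.
    struct SyncTotalsSketch {
        quint64 completedItems = 0;
        quint64 erroredItems = 0;

        void onItemCompleted(bool success)
        {
            ++completedItems;
            if (!success)
                ++erroredItems;
        }
    };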
Previously, if you paused/unpaused a folder of a disconnected account,
it would prepare to sync and thus display the 'Waiting...' text. With
this change, folders that can't possibly sync don't show text like
this.
When account connectivity changes, all unpaused folders will be
scheduled anyway.
Otherwise it might happen that the model is inconsistent, and this can
lead to a crash in the worst case.
(For example, if there was a "fetching" label and we hid it because of a
404: in that case we would not call begin/endRemoveRows, so the view
could still call the model with an index of row 0, which used to be for
the label but now corresponds to the first element of _subs. And because
_subs is empty, this could lead to crashes.)
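A sketch of the correct pattern for such a placeholder row, on a simplified
stand-alone model (not the actual FolderStatusModel):

    #include <QAbstractListModel>
    #include <QVariant>

    class PlaceholderModelSketch : public QAbstractListModel {
    public:
        using QAbstractListModel::QAbstractListModel;

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            return parent.isValid() ? 0 : (_hasFetchLabel ? 1 : 0);
        }

        QVariant data(const QModelIndex &, int) const override { return {}; }

        void hideFetchLabel()
        {
            if (!_hasFetchLabel)
                return;
            // Wrap the structural change so attached views drop their
            // indexes for the label row instead of reusing stale ones.
            beginRemoveRows(QModelIndex(), 0, 0);
            _hasFetchLabel = false;
            endRemoveRows();
        }

    private:
        bool _hasFetchLabel = true;
    };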
We are going to change the webdav path depending on the capabilities.
But the SyncEngine and csync might have been created before the capabilities
are retrieved.
The main reason we gave the path to the sync engine was to pass it to csync.
But csync no longer needs this URL, as everything is done by the discovery
classes in libsync, which use the network jobs that take their URLs from the
account. So csync does not need the remote URI.
shortenFilename in folderstatusmodel.cpp was useless because the string is the
_file of a SyncFileItem, which is the relative file name; that name never
starts with owncloud://.
All the csync tests create the folder because csync used to check whether the
folder exists. We don't need to do that anymore.
Instead of using the regular selective-sync UI (where it's unclear what
the "Cancel" button would even mean in this context), provide a
different set of buttons that allow the user to quickly synchronize
all pending big folders, none of them, or perform manual changes
as usual.
I got into a situation where the model would endlessly request the directory
contents from the server because we had not yet noticed that the server is
actually in maintenance mode while the tree view was being expanded, either
when switching the tab to the account or when just expanding it by clicking.
* Remove duplicate remote path
* Use thin progress bar
* Move bandwidth and file info to tooltip
* Shorten overall progress message
This also fixes #4562 by making the layout independent of the
width of the displayed text.
* Ensure every time a file becomes a directory or the other way around
the item is flagged as INSTRUCTION_TYPE_CHANGE.
* Delete the badly-typed entity if necessary in the propagation jobs.
This reuses the previous code by resetting the progress to hide the
progress bar and then returning errors in the FolderErrorMsg data role
of the folder model.
This also removes the unused FolderRemotePath role, removes FolderStatus
in favor of invalidating all roles in dataChanged, and makes sure that
the SyncRunning role is transferred properly from the SyncResult
to show the warning icon during sync.
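A sketch of how the roles and the role-less dataChanged fit together, on a
simplified model (the role names FolderErrorMsg and SyncRunning come from this
change, the rest is made up):

    #include <QAbstractListModel>
    #include <QString>
    #include <QVariant>

    class FolderModelSketch : public QAbstractListModel {
    public:
        enum Roles { FolderErrorMsg = Qt::UserRole + 1, SyncRunning };

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            return parent.isValid() ? 0 : 1;
        }

        QVariant data(const QModelIndex &, int role) const override
        {
            if (role == FolderErrorMsg)
                return _errorMsg;       // errors surfaced through the data role
            if (role == SyncRunning)
                return _syncRunning;    // drives the warning icon during sync
            return {};
        }

        void setSyncState(bool running, const QString &errorMsg)
        {
            _syncRunning = running;
            _errorMsg = errorMsg;
            const QModelIndex idx = index(0, 0);
            // No role list passed: every role of the row is invalidated,
            // which is why a dedicated FolderStatus role is not needed.
            emit dataChanged(idx, idx);
        }

    private:
        bool _syncRunning = false;
        QString _errorMsg;
    };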
It's just a feature that was not there in 2.0
It means that a folder removed from the server stays on the undecided list
until the user presses Apply in the selective sync widget.
Not a very bad bug anyway.
If a folder was paused while being the next item in the scheduling
queue, the whole scheduling could get stuck.
This also fixes the progress information of paused folders possibly
getting stuck.