- Controlled by an option.
- New remote files start out as ItemTypePlaceholder and are created with a
.owncloud extension.
- When their db entry is set to ItemTypePlaceholderDownload, the next
sync run will download them (sketched below).
- Files that aren't in the placeholder state sync as usual.
- See test cases in testsyncplaceholders.
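A minimal sketch of the state handling described above; the item types are the ones named in the list, while the helper functions are hypothetical stand-ins for the real sync steps:
```cpp
// Sketch only: illustrates the placeholder states; helpers are stand-ins.
#include <QDebug>
#include <QString>

enum ItemType {
    ItemTypeFile,
    ItemTypePlaceholder,         // new remote file, shown locally as <name>.owncloud
    ItemTypePlaceholderDownload, // journal entry flipped: fetch the real content
};

static void createPlaceholderStub(const QString &path) { qDebug() << "stub" << path; }
static void enqueueDownload(const QString &path)       { qDebug() << "download" << path; }
static void syncAsUsual(const QString &path)           { qDebug() << "sync" << path; }

void handleRemoteItem(ItemType type, const QString &name)
{
    switch (type) {
    case ItemTypePlaceholder:
        // Only a small stub with the .owncloud extension is created locally.
        createPlaceholderStub(name + QLatin1String(".owncloud"));
        break;
    case ItemTypePlaceholderDownload:
        // Set in the db entry; the next sync run downloads the real file.
        enqueueDownload(name);
        break;
    default:
        syncAsUsual(name);
        break;
    }
}
```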
Missing:
- User UI for triggering placeholder file download
- Maybe: Going back from file to placeholder?
The socket API move and delete commands are not strictly about conflicts,
since they also deal with files which couldn't be uploaded for some
other reason. Still, the new ConflictSolver could be used in those cases.
This opens the door to reusing that logic in other places.
Signed-off-by: Kevin Ottens <kevin.ottens@nextcloud.com>
- When the user is logged out because of 401 or 403 errors, the client checks
if the server requested a remote wipe (sketched below). If yes, it locally
deletes the account and the folders connected to the account and notifies the
server. If no, it proceeds to ask the user to log in again.
- The app password is restored in the keychain.
- WIP: The change also includes a test class for RemoteWipe.
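A sketch of the decision flow only, with hypothetical helper functions standing in for the real account, credentials, and RemoteWipe code:
```cpp
// Sketch of the decision flow. These helpers are hypothetical stand-ins,
// not the actual RemoteWipe / account API.
#include <QDebug>
#include <QString>

static bool askServerIfWipeRequested(const QString &id) { qDebug() << "check wipe for" << id; return false; }
static void wipeAccountLocally(const QString &id)       { qDebug() << "delete account + folders for" << id; }
static void notifyServerWipeDone(const QString &id)     { qDebug() << "confirm wipe for" << id; }
static void promptReLogin(const QString &id)            { qDebug() << "ask user to log in again for" << id; }

void onAuthenticationError(const QString &accountId, int httpCode)
{
    if (httpCode != 401 && httpCode != 403)
        return;

    if (askServerIfWipeRequested(accountId)) {
        // Server requested a remote wipe: drop the account and its synced
        // folders locally, then report back to the server.
        wipeAccountLocally(accountId);
        notifyServerWipeDone(accountId);
    } else {
        // No wipe requested: fall back to the normal re-login flow.
        promptReLogin(accountId);
    }
}
```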
Signed-off-by: Camila San <hello@camila.codes>
Mainly uses target_include_directories instead of include_directories,
so a library's public include directories get added automatically when
the target is linked via target_link_libraries.
If the server has the 'uploadConflictFiles' capability, conflict
files will be uploaded instead of ignored.
Uploaded conflict files have the following headers set during upload
OC-Conflict: 1
OC-ConflictBaseFileId: 172489174instanceid
OC-ConflictBaseMtime: 1235789213
OC-ConflictBaseEtag: myetag
when the data is available. Downloads accept the same headers in return
when downloading a conflict file.
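As a rough illustration, this is how the headers above could be attached to an upload request with Qt; the request setup around it is simplified and not the client's actual upload job:
```cpp
// Sketch: attaching the conflict headers to an upload request.
#include <QNetworkRequest>
#include <QUrl>

QNetworkRequest makeConflictUploadRequest(const QUrl &url,
                                          const QByteArray &baseFileId,
                                          qint64 baseMtime,
                                          const QByteArray &baseEtag)
{
    QNetworkRequest req(url);
    req.setRawHeader("OC-Conflict", "1");
    // Only send the base-file headers when the data is available.
    if (!baseFileId.isEmpty())
        req.setRawHeader("OC-ConflictBaseFileId", baseFileId);
    if (baseMtime > 0)
        req.setRawHeader("OC-ConflictBaseMtime", QByteArray::number(baseMtime));
    if (!baseEtag.isEmpty())
        req.setRawHeader("OC-ConflictBaseEtag", baseEtag);
    return req;
}
```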
In the absence of server support clients will identify conflict files
through the file name pattern and attempt to deduce the base fileid.
Base etag and mtime can't be deduced though.
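A possible sketch of the name-based fallback; the `_conflict-` pattern follows the example file names used elsewhere in this log, and the production pattern may differ. Recovering the base name is the first step towards looking up the base fileid in the journal:
```cpp
// Sketch: recognizing a conflict file by name and recovering the base name.
#include <QRegularExpression>
#include <QString>

// "foo_conflict-1345.txt" -> "foo.txt"; returns an empty string otherwise.
QString baseNameOfConflictFile(const QString &fileName)
{
    static const QRegularExpression rx(
        QStringLiteral("^(.*)_conflict-[^.]+(\\..*)?$"));
    const QRegularExpressionMatch m = rx.match(fileName);
    if (!m.hasMatch())
        return QString();
    return m.captured(1) + m.captured(2);
}
```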
The upload job for a new conflict file will be triggered directly from
the job that created the conflict file now. No second sync run is
necessary anymore.
This commit does not yet introduce a 'username' like identifier that
automatically gets added to conflict file filenames (to name the files
foo_conflict-Fred-1345.txt instead of just foo_conflict-1345.txt).
When users share the same tree several times (say A/ and A/B/ are both
shared) the remote tree can have several entries that have the same
file id. This needs to be respected in rename detection.
Also adds several tests and fixes for issues noticed during testing.
See #6096
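A minimal sketch of the idea: index the remote tree by file id with a multi-map so rename detection sees every path carrying that id. The class and method names are illustrative, not the engine's real ones:
```cpp
// Sketch: the same file id can appear several times in the remote tree
// (A/ and A/B/ both shared), so a rename lookup must return all paths.
#include <QByteArray>
#include <QList>
#include <QMultiHash>
#include <QString>

class RemoteTreeIndex
{
public:
    void add(const QByteArray &fileId, const QString &path)
    {
        _byFileId.insert(fileId, path);
    }

    // All remote paths sharing this file id; rename detection must check
    // each candidate instead of assuming the id is unique.
    QList<QString> candidatesForRename(const QByteArray &fileId) const
    {
        return _byFileId.values(fileId);
    }

private:
    QMultiHash<QByteArray, QString> _byFileId;
};
```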
This gets rid of the csync_statedb sqlite layer and uses
the same code and same connection as the rest of the SyncEngine.
Missing functions are added to SyncJournalDb, and a few minor
things are changed (like making SyncJournalFileRecord::_modtime an int64
instead of a QDateTime, as it was in csync).
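For illustration, roughly the record shape after the change, with an abridged and partly assumed field list:
```cpp
// Sketch: _modtime stored as a plain 64-bit timestamp (seconds since
// epoch) rather than a QDateTime. Field list is abridged and illustrative.
#include <QByteArray>
#include <QString>

struct SyncJournalFileRecord
{
    QString    _path;
    qint64     _inode = 0;
    qint64     _modtime = 0;   // was QDateTime; now int64, as in csync
    QByteArray _etag;
    QByteArray _fileId;
    qint64     _fileSize = 0;
};
```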
* SocketAPI has COPY_LOCAL_LINK / EMAIL_LOCAL_LINK commands
* The nautilus and dolphin shell integrations show a submenu from which
one can share as well as access the private link.
* The SocketAPI provides a new GET_STRINGS command to access localized
strings.
* The private link can also be accessed from the user/group sharing
dialog.
* The numeric file id is extracted from the full id to create the
private link url.
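A hedged sketch of the id extraction; the `/f/<id>` path used to build the URL is an assumption, not taken from the text above:
```cpp
// Sketch: pull the numeric part out of the full file id, e.g.
// "00000172489174instanceid" -> "172489174", then build a private link.
#include <QChar>
#include <QString>
#include <QUrl>

QUrl privateLinkUrl(const QUrl &serverUrl, const QString &fullFileId)
{
    // Leading digits form the numeric file id.
    QString numericId;
    for (const QChar c : fullFileId) {
        if (!c.isDigit())
            break;
        numericId.append(c);
    }
    // Drop zero-padding but keep at least one digit.
    while (numericId.size() > 1 && numericId.startsWith(QLatin1Char('0')))
        numericId.remove(0, 1);

    QUrl url = serverUrl;
    url.setPath(serverUrl.path() + QLatin1String("/f/") + numericId);
    return url;
}
```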
We were removing the whole journal db when the user wanted to keep all files,
but that would also remove the selective sync lists.
We should only remove the metadata table.
Issue #5484
Shrinks owncloud binary by 24 KB and libowncloudsync by 14 KB.
I don't know if it has influence on memory usage or runtime speed though.
Was worth a try.
Previously this reset wasn't happening for errors that were not
NormalErrors, because they don't end up in the blacklist.
This revises the resetting logic to be independent of the
error blacklist and to make use of UploadInfo::errorCount
instead.
412 errors should reset chunked uploads because they might be
indicative of a checksum error.
Additionally, server bugs might require that additional
errors cause an upload reset. To allow that, a new capability
is added that can be used to advise the client about this.
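A small sketch of the resulting decision, assuming a capability field listing extra HTTP codes and an illustrative error-count threshold; the names are stand-ins for the real Capabilities and UploadInfo classes:
```cpp
// Sketch of the reset decision: 412 always resets, and a server-advertised
// capability can extend the reset to other HTTP codes.
#include <QList>

struct UploadInfo {
    int errorCount = 0;
};

struct Capabilities {
    // HTTP error codes the server asks the client to treat as
    // "start the chunked upload over" (field name is illustrative).
    QList<int> httpErrorCodesThatResetFailingChunkedUploads;
};

bool shouldResetChunkedUpload(int httpCode, const UploadInfo &info,
                              const Capabilities &caps)
{
    if (httpCode == 412)
        return true; // possibly a checksum mismatch: re-upload from scratch
    // Threshold of one previous error is illustrative only.
    if (caps.httpErrorCodesThatResetFailingChunkedUploads.contains(httpCode)
        && info.errorCount >= 1)
        return true;
    return false;
}
```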
Update from commit 05ce8a23cdc12e825532dc6de06c267fb8d48b4f from
https://github.com/dragotin/QProgressIndicator,
which itself is forked from commit e5ba0fd09bfd43b067ee3646d70b294c7efcb558 from
upstream, with an additional license header.
It was relicensed to MIT according to
14bb9d10e2
Relates to issues #5180 and #5184
To be able to test the SyncEngine efficiently, a set of server
mocking classes has been implemented on top of QNetworkAccessManager.
The local disk side hasn't been mocked, since this would require adding
a large abstraction layer in csync. The SyncEngine is instead pointed
to a different temporary dir in each test, and tests interact directly
with files in this directory.
The FakeFolder object wraps the SyncEngine with those abstractions
and allows controlling the local files and the fake remote state
through the FileModifier interface, using a FileInfo tree structure
for the remote-side implementation as well as for feeding and comparing
the states on both sides in tests.
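Roughly what a test built on these abstractions looks like; the header name and the exact signatures are assumptions based on the description above:
```cpp
// Sketch of a test using the fake objects; names follow the description
// above, exact signatures may differ.
#include "syncenginetestutils.h"
#include <QtTest>

class TestSyncSketch : public QObject
{
    Q_OBJECT
private slots:
    void testLocalInsertIsUploaded()
    {
        FakeFolder fakeFolder{FileInfo{}};          // start from an empty tree
        fakeFolder.localModifier().insert("A/a1");  // FileModifier interface
        QVERIFY(fakeFolder.syncOnce());
        // Local state and the fake remote FileInfo tree must now agree.
        QCOMPARE(fakeFolder.currentLocalState(), fakeFolder.currentRemoteState());
    }
};

QTEST_GUILESS_MAIN(TestSyncSketch)
#include "testsyncsketch.moc"
```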
Tests run fast and require no setup to be run, but each server feature
that we want to test on the client side needs to be implemented in
this fake objects library. For example, the OC-FileId header isn't
set as of this commit, and we can't test the file move logic properly
without implementing it first.
The TestSyncFileStatusTracker tests already contain a few QEXPECT_FAIL
for what I consider to be issues that need to be fixed in order to catch
up on our test coverage without making this patch too huge.
When a conflict-rename or a temporary-rename fails, notify the
LockWatcher. It'll regularly check whether the file has become
accessible again. When it has, another sync is triggered.
owncloud/enterprise#1288
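A simplified stand-in for the polling idea, using a plain write-open probe and a timer rather than the real LockWatcher's platform-specific lock check:
```cpp
// Sketch: remember paths that could not be renamed and periodically probe
// whether they can be opened for writing again.
#include <QFile>
#include <QObject>
#include <QSet>
#include <QString>
#include <QTimer>

class LockWatcherSketch : public QObject
{
    Q_OBJECT
public:
    explicit LockWatcherSketch(QObject *parent = nullptr) : QObject(parent)
    {
        connect(&_timer, &QTimer::timeout, this, &LockWatcherSketch::checkFiles);
        _timer.start(5 * 1000); // poll every few seconds
    }

    void addFile(const QString &path) { _watched.insert(path); }

signals:
    // The sync engine connects here to schedule another sync run.
    void fileUnlocked(const QString &path);

private slots:
    void checkFiles()
    {
        const QSet<QString> snapshot = _watched;
        for (const QString &path : snapshot) {
            QFile f(path);
            if (f.open(QIODevice::ReadWrite)) { // accessible again
                f.close();
                _watched.remove(path);
                emit fileUnlocked(path);
            }
        }
    }

private:
    QTimer _timer;
    QSet<QString> _watched;
};
```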