the unix-permissions of new files/folders can now be changed
* global-option --chmod-f, volflag chmod_f for files
* global-option --chmod-d, volflag chmod_d for directories
the expected value is a standard three-digit octal value
(User/Group/Other) such as 755, 750, 644, 640, etc
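for example (a minimal sketch; the sfx filename and volume syntax are typical copyparty conventions, not taken from this entry):

```bash
# global defaults: new files become 644, new folders 755
python3 copyparty-sfx.py --chmod-f 644 --chmod-d 755

# or per-volume, using the volflags
python3 copyparty-sfx.py -v /srv::rw:c,chmod_f=640:c,chmod_d=750
```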
if a hook relocates a file into a folder where a file with
the same filename already exists, the filename-collision-avoidance
would kick in, generating a new filename and another copy
the up2k databases are, by default, stored in a `.hist` subfolder
inside each volume, next to thumbnails and transcoded audio
add a new option for storing the databases in a separate location,
making it possible to tune the underlying filesystem for optimal
performance characteristics
the `--hist` global-option and `hist` volflag still behave like
before, but `--dbpath` and volflag `dbpath` will override the
histpath for the up2k-db and up2k-snap exclusively
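for example, to keep thumbnails inside the volume's `.hist` but move the db onto an ssd (paths and sfx filename are illustrative):

```bash
python3 copyparty-sfx.py --dbpath /ssd/copyparty-db -v /mnt/hdd/srv::rw
```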
download-as-tar-gz becomes 2.4x faster in docker thanks to zlib-ng
zlib-ng segfaults on windows, so don't use it there
does not affect fedora or gentoo,
since zlib-ng is already system-default on those
also adds a global-option to write a list of successful
binds to a textfile, for automation / smoketest purposes
the old approach was too restrictive, blocking editing through webdav and ftp
but since logues and readmes can be used as helptext for users
with write-only access, it still makes sense to block logue/readme
uploads from write-only users
users with write-only access can still upload any file as before,
but the filename prefix `_wo_` is added to files named
README.md | PREADME.md | .prologue.html | .epilogue.html
the new option `--wo-up-readme` restores previous behavior, and
will not add the filename-prefix for readmes/logues
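a sketch of the difference (sfx filename and volume syntax are illustrative):

```bash
# default: a write-only user uploading README.md gets _wo_README.md
python3 copyparty-sfx.py -v /srv::w

# previous behavior: readmes/logues are stored without the prefix
python3 copyparty-sfx.py --wo-up-readme -v /srv::w
```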
previously, when moving or renaming a symlink to a file (or
a folder with symlinks inside), the dedup setting would decide
whether those links would be expanded into full files or not
with dedup disabled (which is the default),
all symlinks would be expanded during a move operation
now, the dedup-setting is ignored when files/folders are moved,
but it still applies when uploading or copying files/folders
* absolute symlinks are moved as-is
* relative symlinks are rewritten as necessary,
  assuming both source and destination are known in the db
  (see the sketch below)
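a minimal sketch of rewriting a relative symlink on move (illustrative code, not copyparty's actual implementation):

```python
import os

def move_rel_symlink(link_src: str, link_dst: str) -> None:
    # resolve the target relative to the link's old location,
    target = os.readlink(link_src)
    abst = os.path.normpath(os.path.join(os.path.dirname(link_src), target))
    # then re-relativize it against the new location
    os.symlink(os.path.relpath(abst, os.path.dirname(link_dst)), link_dst)
    os.unlink(link_src)
```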
android-chrome bug https://issues.chromium.org/issues/393149335
sends last-modified time `-11644473600` for all uploads
this has been fixed in chromium, but there might be similar
bugs in other browsers, so add server-side and client-side
detection for unreasonable lastmod times
previously, if the js detected a similar situation, it would
substitute the lastmod-time with the client's wallclock, but
now the server's wallclock is always preferred as fallback
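a rough sketch of such a sanity-check (thresholds and names are assumptions, not the actual copyparty logic):

```python
import time

def sane_lastmod(lastmod: float) -> float:
    now = time.time()
    # reject times before 1980-01-01 or more than a day in the future
    if lastmod < 315532800 or lastmod > now + 86400:
        return now  # fall back to the server's wallclock
    return lastmod
```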
adds a third possible value for the `replace` property in handshakes:
* absent or False: never overwrite an existing file on the server,
and instead generate a new filename to avoid collision
* True: always overwrite existing files on the server
* "mt": only overwrite if client's last-modified is more recent
(this is the new option)
the new UI button toggles between all three options,
defaulting to never-overwrite
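for instance, a handshake requesting lastmod-based overwriting could include something like this (field names other than `replace` are illustrative, not the exact up2k wire format):

```json
{ "name": "song.flac", "lastmod": 1719868800, "replace": "mt" }
```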
* `xz` would show the "unrecognized volflag" warning,
but it still applied correctly
* removing volflags with `-foo` would also show the warning,
  but the flag was still removed correctly
* hide `ext_th_d` in the startup volume-listing
may improve upload performance in some uncommon scenarios,
for example if hdd-writes are uncached, and/or the hdd is
drastically slower than the network throughput
one particular use-case where nosparse *might* improve performance
is when the upload destination is cloud-storage provided by FUSE
(for example an s3 bucket), but this is educated guesswork
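a possible invocation, assuming nosparse is set as a volflag (paths and sfx filename are illustrative):

```bash
# disable sparse-file preallocation for a FUSE-mounted s3 bucket
python3 copyparty-sfx.py -v /mnt/s3::rw:c,nosparse
```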
when loading up2k snaps, entries are forgotten if
the relevant file has been deleted since last run
when the entry is an unfinished upload, the file whose existence
should be checked is the .PARTIAL, and not the placeholder / final
filename (which, unintentionally, was the case until now)
if .PARTIAL is missing but the placeholder still exists,
the only safe alternative is to forget/disown the file,
since its state is obviously wrong and unknown
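roughly, the corrected check (names and .PARTIAL placement are illustrative):

```python
import os

def keep_snap_entry(final_path: str, finished: bool) -> bool:
    if finished:
        return os.path.exists(final_path)
    # unfinished upload: the .PARTIAL is the authoritative file;
    # if only the placeholder remains, the entry must be forgotten
    return os.path.exists(final_path + ".PARTIAL")
```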
fixes a bug reported on discord;
1. run with `--idp-h-usr=iu -v=srv::A`
2. upload a file with up2k; this succeeds
3. announce an idp user: `curl -Hiu:a 127.1:3923`
4. upload another file; fails with "fs-reload"
the idp announce would `up2k.reload`, which raises the
`reload_flag` and `rescan_cond`, but nothing was listening
on `rescan_cond` because `have_e2d` was false
must assume e2d if idp is enabled, because `have_e2d` will
only be true if there are non-idp volumes with e2d enabled
when deleting a folder, any dotfiles/folders within would only
be deleted if the user had the dot-permission to see dotfiles;
this gave the confusing result that a deleted folder would remain,
looking empty, because its hidden contents were never removed
fix this to only require the delete-permission, and always
delete the entire folder, including any dotfiles within
similar behavior also applied to moves, renames, and copies;
fix moves and renames to only require the move-permission in
the source volume; dotfiles will now always be included,
regardless of whether the user has the dot-permission in
the source and/or destination volumes
copying folders now also behaves more intuitively: if the user has
the dot-permission in the target volume, then dotfiles will only be
included from source folders where the user also has the dot-perm,
to prevent the user from seeing intentionally hidden files/folders
as processing of an HTTP request begins (GET, HEAD, PUT, POST, ...),
the original query line is printed in its encoded form. This makes
debugging easier, since there is no ambiguity in how the client
phrased its request.
however, this results in very opaque logs for non-ascii languages;
basically a wall of percent-encoded characters. Avoid this issue
by printing an additional log-message if the URL contains `%`,
immediately below the original url-encoded entry.
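in effect (a minimal illustration, not the actual logging code):

```python
from urllib.parse import unquote

raw = "/%E9%9F%B3%E6%A5%BD/song.flac"
print(raw)  # the unambiguous, encoded form
if "%" in raw:
    print(unquote(raw))  # the readable form: /音楽/song.flac
```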
also fix tests on macos, and an unrelated bad logmsg in up2k
if someone accidentally started uploading a file into the wrong folder,
it was not obvious that the upload could be forgotten in the unpost tab
the new '(explain)' button in the upload-error hopefully makes that clear,
and the upload immediately commences when the initial attempt is aborted
on the backend, clean up the dupesched when an upload is
aborted, and save some cpu by only adding unique entries
shadowing is the act of intentionally blocking off access to
files in a volume by placing another volume atop a file/folder.
say you have volume '/' with a file '/a/b/c/d.txt'; if you create a
volume at '/a/b', then all files/folders inside the original folder
become inaccessible, replaced by the contents of the new vol
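for example (a hypothetical config; paths and sfx filename are illustrative):

```bash
# the volume at /a/b shadows that subtree of the root volume,
# making /srv/a/b/c/d.txt inaccessible
python3 copyparty-sfx.py -v /srv::r -v /mnt/other:a/b:r
```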
the initial code for forgetting shadowed files from the parent vol
database would only forget files which were discovered during a
filesystem scan; any uploaded files would be intentionally preserved
in the parent volume's database, probably to avoid losing uploader
info in the event of a brief mistaken config change, where a volume
is shadowed by accident.
this precaution was a mistake, currently causing far more
issues than it solves (#61 and #120), so away it goes.
huge thanks to @Gremious for doing all the legwork on this!
a better alternative to using `--no-idx` for this purpose, since
this also excludes recent uploads, not just files found during
fs-indexing, and it doesn't prevent deduplication
also speeds up searches by a tiny amount, thanks to building the
sanity-checks into the exclude-filter while parsing the config,
instead of during each search query
whenever a new idp user is registered, up2k will continuously
reload in the background until all users have been processed
just like before, this blocks up2k uploads from each user
until said user makes it into a reload, but as of now,
reloads will batch and execute without interrupting read-access
needs further testing before next release,
probably some rough edges to sand down
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
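for example (values in MiB; the unit is an assumption based on typical up2k settings):

```bash
# min 1, target 64, max 96; chunks over 96 MiB are split into
# subchunks, keeping each request small enough for cloudflare
python3 copyparty-sfx.py --u2sz 1,64,96
```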
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk
however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic
as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times