may improve upload performance in a few uncommon scenarios,
for example if hdd-writes are uncached, and/or the hdd is
drastically slower than the network throughput
one particular usecase where nosparse *might* improve performance
is when the upload destination is cloud-storage mounted through
FUSE (for example an s3 bucket), but this is educated guesswork
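for example, something like this on a FUSE-mounted bucket
(paths made up, and assuming the usual volflag syntax):
```
copyparty -v /mnt/s3-bucket:bucket:rw:c,nosparse
```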
also includes a slight tweak to the json upload info:
when exactly one file is uploaded, the json-response has a
new top-level property, `fileurl` -- this is just a copy of
`files[0].url` as a workaround for castdrian/ishare#107
("only toplevel json properties can be referenced")
chrome (and chromium-based browsers) can OOM when:
* the OS is Windows, macOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files
unfortunately this also applies to Android, which heavily relies
on webworkers to make read-speeds anywhere close to acceptable
as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability
on desktop, webworkers are only necessary for hashwasm, so
limit the number of workers to 2 if crypto.subtle is available
and otherwise use the nproc-1 rule for hashwasm in workers
bug report: https://issues.chromium.org/issues/383568268
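the resulting worker-count rule, as a rough python sketch; the
real logic lives in the browser js, and the android cap of 4 is
just one reading of the measurements above:
```python
def num_workers(nproc: int, is_android: bool, has_subtle: bool) -> int:
    if is_android:
        # keep at least one core idle to dodge the chrome OOM;
        # more than 4 workers is diminishing returns anyway
        return max(1, min(4, nproc - 1))
    if has_subtle:
        # desktop with crypto.subtle; workers are only needed
        # for reading, so 2 is plenty
        return 2
    # no crypto.subtle, so hashwasm runs in workers: nproc-1
    return max(1, nproc - 1)
```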
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.
it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, API-only users, and similar escape hatches.
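the per-request decision is conceptually something like this
(hypothetical names, not the actual copyparty internals):
```python
def resolve_user(req, idp_header: str, users: dict, check_password):
    uname = req.headers.get(idp_header)
    if uname and uname in users:
        return uname  # a valid IdP header wins for this request
    return check_password(req)  # otherwise password login as usual
```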
a better alternative than `--no-idx` for this purpose, since this
also excludes recent uploads (not just files discovered during
fs-indexing), and it doesn't prevent deduplication
also speeds up searches by a tiny amount, since the sanchecks
are now built into the exclude-filter while parsing the config,
instead of running during each search query
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
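as a worked example, 383 GiB / 4096 chunks = 95.75 MiB, the
biggest chunk that would fit through cloudflare; with a max-size
configured, a bigger chunk is simply sent as several sequential
subchunk requests, so the total filesize is no longer capped.
something like this (values in MiB, exact numbers to taste):
```
copyparty --u2sz 1,64,96
```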
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload dupe files
can be useful when `--safe-dedup=1` is not an option due to other
software tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
* do not absreal paths unless necessary
* do not determine username if no users configured
* impacket 0.12 fixed the foldersize limit, but now
you get extremely poor performance in large folders,
so the previous workaround is still default-enabled
dedup is still encouraged and fully supported, but
being default-enabled has caused too many surprises
enabling `--dedup` restores the previous default behavior
also renames `--never-symlink` to `--hardlink-only`
previously, the assumption was made that the database and
filesystem would not desync, and that an upload could safely be
substituted with a symlink to an existing copy, assuming said
copy still existed on-disk at all
this is fine if copyparty is the only software that makes changes to
the filesystem, but that is a shitty assumption to make in hindsight
add `--safe-dedup` which takes a "safety level", and by default (50)
it will no longer blindly expect that the filesystem has not been
altered through other means; the file contents will now be hashed
and compared to the database
deduplication can be much slower as a result, but definitely worth it
as this avoids some potentially very unpleasant surprises
the previous behavior can be restored with `--safe-dedup 1`
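conceptually, the new check is something like this (illustrative
only; copyparty's actual hashing and db layout differ):
```python
import hashlib

def can_dedup(db_hash: str, existing_path: str) -> bool:
    h = hashlib.sha512()
    try:
        with open(existing_path, "rb") as f:
            while buf := f.read(1 << 20):
                h.update(buf)
    except OSError:
        return False  # the copy is gone; store the upload for real
    return h.hexdigest() == db_hash  # dedup only if still identical
```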
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files
if a single file is selected for sharing, then
the returned URL will point directly to that file
also fixes some shares-related bugs:
* password coalescing
* log-spam on reload
* support x-forwarded-for
* option to specify socket permissions and group
* in containers, avoid collision during restart
* add --help-bind with examples
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands
mainly to mitigate limitations such as sharex/sharex#3992
xbu/xau can redirect uploads to other destinations with `reloc`
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`
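for example, a minimal sketch of an xbu hook which diverts
incoming png uploads; the argument passing and the exact `reloc`
schema are assumptions here, so check the example hooks that
ship with copyparty:
```python
#!/usr/bin/env python3
import json
import sys

# assumption: the upload's virtual path arrives as argv[1]
vpath = sys.argv[1]

cmd = {}
if vpath.lower().endswith(".png"):
    # assumed key layout; "vp" = the new destination folder
    cmd["reloc"] = {"vp": "/screenshots"}
    # additional files could be queued for indexing/deletion:
    # cmd["idx"] = ["/screenshots/index.json"]

print(json.dumps(cmd))
```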
there are limitations:
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k
also provides more details for hooks:
* xbu/xau: basic-uploader vpath with filename
* xbr/xar: add client ip
audio extraction happens serverside, transcoding to opus or mp3
depending on browser support
remuxing (extracting audio without transcoding)
is currently not supported, and is not planned