this avoids a false-positive in the info-zip unzip zipbomb detector.
unfortunately, this means:
* now impossible to extract large (over 4 GiB) zipfiles using old software
  (WinXP, macos 10.12)
* now less viable to stream download-as-zip into a zipfile unpacker
(please use download-as-tar for that purpose)
context:
the zipfile specification (APPNOTE.TXT) is slightly ambiguous as to when
data-descriptor (0x504b0708) filesize-fields change from 32bit to 64bit;
both copyparty and libarchive independently arrived at the same
interpretation: the fields are 64bit only when the local header is
zip64, AND the size-fields are both 0xFFFFFFFF. This makes sense
because the data descriptor is
only necessary when that particular file-to-be-added exceeds 4 GiB,
and/or when the crc32 is not known ahead of time.
another interpretation, seen in an early version of the patchset
to fix CVE-2019-13232 (zip-bombs) in the info-zip unzip command,
holds that the only requirement is a zip64 local header
(both readings are sketched below).
in many linux distributions, the unzip command would thus fail on
zipfiles created by copyparty, since they (by default) satisfy
the three requirements to hit the zipbomb false-positive:
* total filesize exceeds 4 GiB, and...
* a mix of regular (32bit) and zip64 entries, and...
* streaming-mode zipfile (not made with ?zip=crc)
this issue no longer exists in a more recent version of that patchset,
https://github.com/madler/unzip/commit/af0d07f95809653b
but this fix has not yet made it into most linux distros
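
to make the two readings concrete, here is a minimal python sketch of
how a parser could decide the width of the data-descriptor size-fields;
function names and signatures are illustrative, not the actual
copyparty or libarchive code:

```python
def dd_is_64bit(lh_is_zip64: bool, lh_csize: int, lh_usize: int) -> bool:
    # the copyparty / libarchive reading: 64bit fields only when the
    # local header is zip64 AND both of its size-fields are saturated
    return lh_is_zip64 and lh_csize == 0xFFFFFFFF and lh_usize == 0xFFFFFFFF

def dd_is_64bit_early_patchset(lh_is_zip64: bool) -> bool:
    # the reading in the early CVE-2019-13232 patchset:
    # a zip64 local header is the only requirement
    return lh_is_zip64
```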
the up2k databases are, by default, stored in a `.hist` subfolder
inside each volume, next to thumbnails and transcoded audio
add a new option for storing the databases in a separate location,
making it possible to tune the underlying filesystem for optimal
performance characteristics
the `--hist` global-option and `hist` volflag still behave like
before, but `--dbpath` and volflag `dbpath` will override the
histpath for the up2k-db and up2k-snap exclusively
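
a hedged example of the new option, assuming the usual `:c,key=value`
volflag syntax; the paths are placeholders:

```
# global-option: all volumes keep their up2k-db on the ssd
python3 copyparty-sfx.py --dbpath /mnt/ssd/cppdb -v /mnt/hdd/media:media:r

# volflag: override for just this volume
python3 copyparty-sfx.py -v /mnt/hdd/media:media:r:c,dbpath=/mnt/ssd/cppdb
```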
`write_dls` assumed `vfs.all_nodes` included shares; make it so
shares now also appear in the active-downloads list, but the
URL is hidden unless the viewer definitely already knows the
share exists (which is why vfs-nodes now have `shr_owner`)
also adds PRTY_FORCE_MP, a beefybit (opposite of chickenbit)
to allow multiprocessing on known-buggy platforms (macos)
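
presumably set as an environment variable; a hedged example, assuming
any non-empty value enables it and that `-j` is used to request
multiprocessing:

```
PRTY_FORCE_MP=1 python3 copyparty-sfx.py -j4
```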
adds a third possible value for the `replace` property in handshakes:
* absent or False: never overwrite an existing file on the server,
and instead generate a new filename to avoid collision
* True: always overwrite existing files on the server
* "mt": only overwrite if client's last-modified is more recent
(this is the new option)
the new UI button toggles between all three options,
defaulting to never-overwrite
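
a hedged sketch of the relevant part of a handshake body; only the
`replace` key is described above, the other fields are illustrative
placeholders:

```python
handshake = {
    "name": "song.flac",  # placeholder
    "replace": "mt",  # absent/False = never, True = always,
                      # "mt" = only if client's last-modified is newer
}
```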
support for "owa", audio-only webm, was introduced in iOS 17.5
owa is a more compliant alternative to opus-caf from iOS 11,
which was technically limited to CBR opus, a limitation which
we ignored since it worked mostly fine for regular opus too
being the new officially-recommended way to do things,
we'll default to owa for iOS 18 and later, even though
iOS still has some bugs affecting our use specifically:
if a weba file is preloaded into a 2nd audio object,
safari will throw a spurious exception as playback is
initiated, even though the file plays just fine
the `.ld` stuff is an attempt at catching and ignoring this
spurious error without eating any actual network exceptions
previously, the `?zip` url-suffix would create a cp437 zipfile,
and `?zip=utf` would use utf-8, which is now generally expected
now, both `?zip=utf` and `?zip` will produce a utf8 zipfile,
and `?zip=dos` provides the old behavior
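
quick reference (the folder path is a placeholder):

```
/music/?zip      -> filenames in utf-8 (the new default)
/music/?zip=utf  -> filenames in utf-8 (same as before)
/music/?zip=dos  -> filenames in cp437 (the old default)
```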
may improve upload performance in some uncommon scenarios,
for example if hdd-writes are uncached, and/or the hdd is drastically
slower than the network throughput
one particular usecase where nosparse *might* improve performance
is when the upload destination is cloud-storage provided by FUSE
(for example an s3 bucket) but this is educated guesswork
also includes a slight tweak to the json upload info:
when exactly one file is uploaded, the json-response has a
new top-level property, `fileurl` -- this is just a copy of
`files[0].url` as a workaround for castdrian/ishare#107
("only toplevel json properties can be referenced")
chrome (and chromium-based browsers) can OOM when:
* the OS is Windows, MacOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files
unfortunately this also applies to Android, which heavily relies
on webworkers to make read-speeds anywhere close to acceptable
as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability
on desktop, webworkers are only necessary for hashwasm, so the
worker-count is limited to 2 when crypto.subtle is available;
otherwise, the nproc-1 rule applies since hashing must happen
in hashwasm workers
bug report: https://issues.chromium.org/issues/383568268
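
the resulting heuristic, paraphrased as a hedged python sketch
(the actual client is javascript; names are illustrative):

```python
def num_hash_workers(nproc: int, is_android: bool, has_crypto_subtle: bool) -> int:
    if not is_android and has_crypto_subtle:
        # desktop with native hashing: workers are barely needed
        return 2
    # hashwasm must run in workers; keep at least one core idle
    # to reduce the probability of the chrome OOM
    return max(1, nproc - 1)
```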
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.
it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, API-only users, and similar escape hatches.
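
a hedged sketch of the resulting login precedence; the header name is
configurable (`--idp-h-usr`) and shown with a placeholder value, and
the two helpers are passed in since they are illustrative:

```python
def authenticate(headers: dict, idp_user_is_valid, password_login):
    idp_user = headers.get("X-IdP-User")  # placeholder header name
    if idp_user and idp_user_is_valid(idp_user):
        return idp_user  # a valid IdP header takes precedence
    return password_login(headers)  # otherwise passwords work as usual
```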
a better alternative to using `--no-idx` for this purpose, since
this also excludes recent uploads (not only files discovered
during fs-indexing), and it doesn't prevent deduplication
also speeds up searches by a tiny amount, by building the
sanchecks into the exclude-filter while parsing the config,
instead of during each search query
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
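
a hedged sketch of the chunklet math; names are illustrative:

```python
def split_chunk(ofs: int, chunk_len: int, max_sz: int):
    # yields (offset, length) subchunks of one oversized chunk;
    # these must be uploaded in order, one at a time, and if any
    # of them fail, the whole chunk is reuploaded
    end = ofs + chunk_len
    while ofs < end:
        n = min(max_sz, end - ofs)
        yield ofs, n
        ofs += n
```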