file hashing became drastically slower in recent chrome versions;
* 748 MiB/s in 131.0.6778.86
* 747 MiB/s in 132.0.6834.160
* 485 MiB/s in 133.0.6943.60
* 319 MiB/s in 134.0.6998.36
the silver lining: it looks like chrome-bug 1352210 is improving
(crypto.subtle, the native hasher, now scales with multiple cores)
* 133.0.6943.60: speed peaked at 2 threads; 341 MiB/s, 485 MiB/s
* 134.0.6998.36: peak at 7; 193, 383, 383, 408, 421, 431, 438, 438
* 137.0.7151.41: peak at 8; 210, 382, 445, 513, 573, 573, 585, 598
MiB/s when hashing with 1, 2, ..., 7, 8 webworkers respectively
on a ryzen7-5800x with 2x16g 2133mhz ram
characteristics of versions between v134 and v137 are unknown
(cannot find old official builds to test), but v137 is a good
cutoff for minimizing risk of hitting chrome-bugs
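for reference, a rough sketch of what each webworker does in these numbers;
the digest choice and names here are illustrative, not the actual up2k.js code:

```ts
// hashing-worker sketch: receive a chunk, hash it with the browser's
// native hasher (crypto.subtle), post the digest back to the main thread
self.onmessage = async (ev: MessageEvent<ArrayBuffer>) => {
  // sha-512 is illustrative; the real uploader may truncate / use another alg
  const digest = await crypto.subtle.digest("SHA-512", ev.data);
  (self as any).postMessage(digest, [digest]); // transfer, don't copy
};
```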
meanwhile, hash-wasm scales linearly up to 8 cores;
0=328 1=377 2=738 3=947 4=1090 5=1190 6=1380 7=1530 8=1810
(webworkers=MiB/s; 0 = wasm on the main thread, no webworkers)
but it looks like chrome-bug 383568268 is making a return,
so keep the limit of max 4 threads if the machine has more than
4 cores (and numCores-1 otherwise)
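in other words, something along these lines (a sketch of the rule above,
not the actual up2k.js logic):

```ts
// pick the number of hashing webworkers; cap at 4 on machines with more
// than 4 cores to dodge chrome-bug 383568268, otherwise leave one core
// free for the main thread
function numHashWorkers(): number {
  const cores = navigator.hardwareConcurrency || 4;
  return cores > 4 ? 4 : Math.max(1, cores - 1);
}
```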
This enables compatibility with users who also disable aliases
The utillinux alias was added in 2020[1], which predates the previous
Nixpkgs pin, so we can safely switch to the non-aliased version.
1: 3896a0c0e2/pkgs/top-level/aliases.nix (L1967)
previously, `--rp-loc` only took effect for trusted reverse-proxies
this was a source of confusion when setting up a config from
scratch, since there is no obvious relation to `--xff-src`
as this behavior was incidental, `--rp-loc` is now always applied,
even if the proxy is untrusted (or not detected at all)
if a hook relocates a file into a folder where that same file
exists with the same filename, the filename-collision-avoidance
would kick in, generating a new filename and another copy
* formatting clean-up with alejandra.
* added ability to specify user and group.
* added option to have hist data live with volumes.
* improved my understanding of what paths copyparty needs to function.
* added environment script.
* Revert "added environment script."
Can't have 2 instances of copyparty running, even if one is just for
ah-cli...
This reverts commit c60c8d8e0b.
* fixup! added ability to specify user and group.
* Reapply "added environment script."
This reverts commit a54e950ecc.
* Moved back to TemporaryFileSystem for system hardening.
I misunderstood bind mounts...
* made systemd.tmpfiles rules to ensure the volume directories exist.
* changed copyparty-env script to copyparty-hash.
* removed seperatehist in favor of default settings attrset.
* new update of copyparty removed the need for some options.
* minor refactoring.
* fixed some descriptions that had not kept up with changes.
* fixup! removed seperatehist in favor of default settings attrset.
* do not take lock on shares-db / sessions-db when running with
`--ah-gen` or `--ah-cli` (allows a 2nd instance for that purpose)
* add options to print effective salt for ah/fk/dk; useful for nixos
  and other use-cases where the config is derived or otherwise opaque
* mention potential hdd-bottleneck from big values
* most browsers enforce a max-value of 6 (c354a38b)
* chunk-stitching (132a8350) made this less important;
still beneficial, but only to a point
this avoids a false-positive in the info-zip unzip zipbomb detector.
unfortunately,
* now impossible to extract large (4 GiB+) zipfiles using old software
  (WinXP, macOS 10.12)
* now less viable to stream download-as-zip into a zipfile unpacker
(please use download-as-tar for that purpose)
context:
the zipfile specification (APPNOTE.TXT) is slightly ambiguous as to when
data-descriptor (0x504b0708) filesize-fields change from 32bit to 64bit;
both copyparty and libarchive independently made the same interpretation
that this is only when the local header is zip64, AND the size-fields
are both 0xFFFFFFFF. This makes sense because the data descriptor is
only necessary when that particular file-to-be-added exceeds 4 GiB,
and/or when the crc32 is not known ahead of time.
another interpretation, seen in an early version of the patchset
to fix CVE-2019-13232 (zip-bombs) in the info-zip unzip command,
believes the only requirement is that the local header is zip64.
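to make the difference concrete, a simplified reader-side sketch of the two
interpretations (not code from copyparty, libarchive, or info-zip; field
layout per APPNOTE.TXT):

```ts
// parse a data-descriptor; `wide` decides whether the size fields are 64-bit
function readDataDescriptor(v: DataView, off: number, localIsZip64: boolean,
                            localCsize: number, localUsize: number) {
  if (v.getUint32(off, true) === 0x08074b50) off += 4; // optional signature
  const crc32 = v.getUint32(off, true); off += 4;

  // copyparty + libarchive: 64-bit only if the local header was zip64 AND
  // its size fields were the 0xFFFFFFFF placeholders
  const wide = localIsZip64 &&
    localCsize === 0xffffffff && localUsize === 0xffffffff;
  // the early CVE-2019-13232 patchset instead expects 64-bit fields
  // whenever the local header is zip64:
  //   const wide = localIsZip64;

  const read = () => {
    const n = wide ? Number(v.getBigUint64(off, true)) : v.getUint32(off, true);
    off += wide ? 8 : 4;
    return n;
  };
  return { crc32, csize: read(), usize: read(), next: off };
}
```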
in many linux distributions, the unzip command would thus fail on
zipfiles created by copyparty, since they (by default) satisfy
the three requirements to hit the zipbomb false-positive:
* total filesize exceeds 4 GiB, and...
* a mix of regular (32bit) and zip64 entries, and...
* streaming-mode zipfile (not made with ?zip=crc)
this issue no longer exists in a more recent version of that patchset,
https://github.com/madler/unzip/commit/af0d07f95809653b
but this fix has not yet made it into most linux distros