* add feature showcase video
Signed-off-by: Adam <134429563+RustoMCSpit@users.noreply.github.com>
* add youtube link too
Signed-off-by: ed <s@ocv.me>
Co-authored-by: ed <s@ocv.me>
until now, ${u} would match all users,
${u%-foo} would exclude users in group foo,
${u%+foo} would only include users in group foo
now, the following is also possible:
${u%-foo,%-bar} excludes users in group foo and/or group bar,
${u%+foo,%+bar} only includes users who are in groups foo AND bar,
${g%-foo} skips group foo (includes all others),
${g%-foo,%-bar} skips group foo and/or bar (includes all others)
see ./docs/examples/docker/idp/copyparty.conf:
https://github.com/9001/copyparty/blob/hovudstraum/docs/examples/docker/idp/copyparty.conf
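
for example, a volume definition in that style could combine the new
patterns like this (a sketch only; the volume path, mountpoint and
group names below are made up, and the layout follows the linked
example config):

    [/share]                    # URL path of the volume
      /srv/share                # folder on disk to share
      accs:
        rw: ${u%+devs,%+qa}     # read-write for users in BOTH devs and qa
        r: ${u%-guests,%-temp}  # read-only for users NOT in guests and/or temp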
file hashing became drastically slower in recent chrome versions;
* 748 MiB/s in 131.0.6778.86
* 747 MiB/s in 132.0.6834.160
* 485 MiB/s in 133.0.6943.60
* 319 MiB/s in 134.0.6998.36
the silver lining: it looks like chrome-bug 1352210 is improving
(crypto.subtle, the native hasher, now scales with multiple cores)
* 133.0.6943.60: speed peaked at 2 threads; 341 MiB/s, 485 MiB/s
* 134.0.6998.36: peak at 7; 193, 383, 383, 408, 421, 431, 438, 438
* 137.0.7151.41: peak at 8; 210, 382, 445, 513, 573, 573, 585, 598
MiB/s when hashing with 1, 2, ..., 7, 8 webworkers respectively
on a ryzen7-5800x with 2x16g 2133mhz ram
characteristics of versions between v134 and v137 are unknown
(cannot find old official builds to test), but v137 is a good
cutoff for minimizing risk of hitting chrome-bugs
meanwhile, hash-wasm scales linearly up to 8 cores;
0=328 1=377 2=738 3=947 4=1090 5=1190 6=1380 7=1530 8=1810 MiB/s
(0 = wasm on the mainthread, no webworkers)
but it looks like chrome-bug 383568268 is making a return,
so keep the limit of max 4 threads if the machine has more than
4 cores (and numCores-1 otherwise)
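
as a rough sketch of that policy (python pseudocode, not the actual
client-side js; the function name and the clamp-to-1 are assumptions):

    def pick_num_hashers(num_cores: int) -> int:
        # chrome-bug 383568268 makes many hash-wasm webworkers risky,
        # so cap at 4 when the machine has more than 4 cores
        if num_cores > 4:
            return 4
        # otherwise leave one core for the rest of the browser,
        # but always keep at least one webworker
        return max(1, num_cores - 1)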
this avoids a false-positive in the info-zip unzip zipbomb detector.
unfortunately,
* now impossible to extract large (4 GiB+) zipfiles using old software
(WinXP, macos 10.12)
* now less viable to stream download-as-zip into a zipfile unpacker
(please use download-as-tar for that purpose)
context:
the zipfile specification (APPNOTE.TXT) is slightly ambiguous as to when
data-descriptor (0x504b0708) filesize-fields change from 32bit to 64bit;
both copyparty and libarchive independently made the same interpretation
that this is only when the local header is zip64, AND the size-fields
are both 0xFFFFFFFF. This makes sense because the data descriptor is
only necessary when that particular file-to-be-added exceeds 4 GiB,
and/or when the crc32 is not known ahead of time.
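
a rough sketch of how a streaming zip-writer would emit the descriptor
under that interpretation (illustration only, not copyparty's actual
code; the function name is made up):

    import struct

    def data_descriptor(crc: int, csize: int, usize: int, zip64: bool) -> bytes:
        # 0x08074b50 packed little-endian gives the on-disk bytes PK\x07\x08
        if zip64:
            # local header was zip64 and both size fields were 0xFFFFFFFF,
            # so the descriptor carries 64-bit size fields
            return struct.pack("<LLQQ", 0x08074B50, crc, csize, usize)
        # otherwise the size fields stay 32-bit
        return struct.pack("<LLLL", 0x08074B50, crc, csize, usize)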
another interpretation, seen in an early version of the patchset
which fixes CVE-2019-13232 (zip-bombs) in the info-zip unzip command,
holds that the only requirement is that the local header is zip64.
in many linux distributions, the unzip command would thus fail on
zipfiles created by copyparty, since they (by default) satisfy
the three requirements to hit the zipbomb false-positive:
* total filesize exceeds 4 GiB, and...
* a mix of regular (32bit) and zip64 entries, and...
* streaming-mode zipfile (not made with ?zip=crc)
this issue no longer exists in a more recent version of that patchset,
https://github.com/madler/unzip/commit/af0d07f95809653b
but this fix has not yet made it into most linux distros
the up2k databases are, by default, stored in a `.hist` subfolder
inside each volume, next to thumbnails and transcoded audio
add a new option for storing the databases in a separate location,
making it possible to tune the underlying filesystem for optimal
performance characteristics
the `--hist` global-option and `hist` volflag still behave as before,
but the `--dbpath` global-option and `dbpath` volflag will override the
histpath for the up2k-db and up2k-snap exclusively
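
for example (the paths below are made up), the databases can be moved
onto faster storage while thumbnails stay in each volume's `.hist`:

    # up2k-db and up2k-snap go to /mnt/ssd/copyparty-db,
    # thumbnails and transcodes remain inside the volume as before
    copyparty -v /mnt/nas/media:/media:r --dbpath /mnt/ssd/copyparty-db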