listen for errors from <img> and <video> in the media gallery and
show an error-toast to indicate that the file isn't going to appear
unfortunately, when iOS-Safari fails to decode an unsupported video,
Safari itself appears to believe that everything is fine, and doesn't
issue the expected error-event, meaning we cannot detect this...
for example, trying to play non-yuv420p vp9 webm will silently fail,
with the only symptom being the play() promise throwing as the
<video> is destroyed during cleanup (bbox-close or media unload)
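roughly the shape of the wiring; `watchMedia` and `showToast` are made-up names for this sketch, not copyparty's actual code:
```ts
function showToast(msg: string): void {
    console.error(msg); // stand-in for the real error-toast UI
}

function watchMedia(el: HTMLImageElement | HTMLVideoElement): void {
    // fires reliably for broken images, but NOT on iOS-Safari when
    // a video merely fails to decode (e.g. non-yuv420p vp9 webm)
    el.addEventListener("error", () =>
        showToast("sorry, this file failed to load"));

    if (el instanceof HTMLVideoElement)
        // on iOS, the play() promise may reject only as the <video>
        // is destroyed during cleanup, so swallow that one quietly
        el.play().catch(() => {});
}
```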
recent iPads do not indicate being an iPad in the user-agent,
so the audio-player would fall back on transcoding to mp3,
assuming the device cannot play opus-caf
improve this with pessimistic feature-detection for caf
hopefully still avoiding false-positives
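one way such a probe could be shaped; the sample data-uri and the timeout are guesses for illustration, not copyparty's actual implementation:
```ts
// pessimistic: only report caf support when playback provably works;
// TINY_CAF_SAMPLE is a hypothetical base64 opus-caf data-uri
const TINY_CAF_SAMPLE = "data:audio/x-caf;base64,..."; // placeholder

function canPlayCaf(): Promise<boolean> {
    return new Promise((resolve) => {
        const a = new Audio();
        // an empty string from canPlayType is a definite no
        if (!a.canPlayType('audio/x-caf; codecs="opus"'))
            return resolve(false);
        // "maybe"/"probably" is not trusted; additionally require a
        // known-good sample to reach a playable state in time
        a.addEventListener("canplaythrough", () => resolve(true));
        a.addEventListener("error", () => resolve(false));
        setTimeout(() => resolve(false), 2000); // pessimistic default
        a.src = TINY_CAF_SAMPLE;
    });
}
```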
support for "owa", audio-only webm, was introduced in iOS 17.5
owa is a more compliant alternative to opus-caf from iOS 11,
which was technically limited to CBR opus, a limitation which
we ignored since it worked mostly fine for regular opus too
being the new officially-recommended way to do things,
we'll default to owa for iOS 18 and later, even though
iOS still has some bugs affecting our use specifically:
if a weba file is preloaded into a 2nd audio object,
safari will throw a spurious exception as playback is
initiated, even though the file is playing just fine
the `.ld` stuff is an attempt at catching and ignoring this
spurious error without eating any actual network exceptions
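the general idea behind that, as a rough sketch (the real `.ld` bookkeeping is more involved):
```ts
// if play() rejects but the clock keeps moving, the exception was
// one of the spurious iOS ones and can safely be ignored
function playGuarded(a: HTMLAudioElement, onErr: (e: unknown) => void) {
    a.play().catch((err) => {
        setTimeout(() => {
            if (!a.paused && a.currentTime > 0)
                return; // playing just fine; swallow the exception
            onErr(err); // an actual (probably network) failure
        }, 100);
    });
}
```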
previously, the `?zip` url-suffix would create a cp437 zipfile,
and `?zip=utf` would use utf-8, which is now generally expected
now, both `?zip=utf` and `?zip` will produce a utf8 zipfile,
and `?zip=dos` provides the old behavior
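usage, with `/music/` as a placeholder folder:
```ts
fetch("/music/?zip");      // utf8 zipfile (the new default)
fetch("/music/?zip=utf");  // utf8 zipfile (unchanged)
fetch("/music/?zip=dos");  // cp437, the old default, for ancient unzippers
```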
when hashing files on android-chrome, read a contiguous range of
several chunks at a time, ensuring each read is at least 48 MiB
and then slice that cache into the correct chunksizes for hashing
especially on GrapheneOS Vanadium (where webworkers are forbidden),
this improves worst-case speed (filesize <= 256 MiB) from 13 to 139 MiB/s
48M was chosen wrt RAM usage (48*4 MiB); a target read-size of
16M would have given 76 MiB/s, 32M = 117 MiB/s, and 64M = 154 MiB/s
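a simplified sketch of the coalescing; the names are made up and the real up2k code is structured differently:
```ts
const MIN_READ = 48 * 1024 * 1024; // contiguous bytes per file-read

// read the file in large contiguous blocks to amortize android's
// fixed per-read overhead, then slice into hashing-sized chunks
async function* chunksOf(f: File, chunksize: number) {
    for (let ofs = 0; ofs < f.size; ) {
        const end = Math.min(f.size, ofs + Math.max(MIN_READ, chunksize));
        const buf = await f.slice(ofs, end).arrayBuffer();
        for (let sub = 0; sub < buf.byteLength; sub += chunksize)
            yield new Uint8Array(buf, sub,
                Math.min(chunksize, buf.byteLength - sub));
        ofs = end;
    }
}
```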
additionally, on all platforms (not just android and/or chrome),
allow async hashing of <= 3 chunks in parallel on main-thread
when chunksize <= 48 MiB, and <= 2 at <= 96 MiB; this gives
decent speeds approaching those of webworkers (around 50% of their speed)
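the rule itself is trivial; a sketch, assuming crypto.subtle is available for the digests (sha-512 chosen here for illustration):
```ts
const MiB = 1024 * 1024;

// max number of chunks to hash concurrently on the main thread
function mainThreadParallelism(chunksize: number): number {
    return chunksize <= 48 * MiB ? 3 :
           chunksize <= 96 * MiB ? 2 : 1;
}

async function hashChunks(bufs: ArrayBuffer[], chunksize: number) {
    const par = mainThreadParallelism(chunksize);
    const out: ArrayBuffer[] = [];
    for (let i = 0; i < bufs.length; i += par)
        out.push(...await Promise.all(bufs.slice(i, i + par).map(
            (b) => crypto.subtle.digest("SHA-512", b))));
    return out;
}
```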
this is a new take on c06d928bb5
which was removed in 184af0c603
when a chrome-beta temporarily fixed the poor file-read performance
(afaict the fix was reverted and never made it to chrome stable)
as for why any of this is necessary,
the security features in android have the unfortunate side-effect
of making file-reads from web-browsers extremely expensive;
this is especially noticeable in android-chrome, where
file-hashing is painfully slow, around 9 MiB/s worst-case
this is due to a fixed-time overhead for each read operation;
reading 1 MiB takes 60 msec, while reading 16 MiB takes 112 msec
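assuming a linear cost model, those two datapoints give roughly 56 msec
of fixed cost per read plus 3.5 msec per MiB; that works out to ~17 MiB/s
when reading 1 MiB at a time, ~143 MiB/s at 16 MiB, and a ceiling around
288 MiB/s, which is why fewer-but-larger reads are the effective fix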
18:17:26 &ed | what's wrong with it
18:17:38 +Mai | that you don't know it's the volume bar before you try it
18:17:46 &ed | oh
18:17:48 &ed | yeah i guess
18:17:54 +Mai | especially when it's at 100
18:18:00 &ed | how do i fix it tho
18:19:50 +Mai | you could add an icon that's also a mute button (to not make it a useless icon)
18:22:38 &ed | i'll make the volume text always visible and include a speaker icon before it
18:23:53 +Mai | that is better at least
chrome (and chromium-based browsers) can OOM when:
* the OS is Windows, MacOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files
unfortunately this also applies to Android, which heavily relies
on webworkers to make read-speeds anywhere close to acceptable
as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability
on desktop, webworkers are only necessary for hashwasm, so
limit the number of workers to 2 if crypto.subtle is available
and otherwise use the nproc-1 rule for hashwasm in workers
bug report: https://issues.chromium.org/issues/383568268
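the resulting heuristic, roughly (the function name and the android check are made up for this sketch):
```ts
function numHashWorkers(): number {
    const nproc = navigator.hardwareConcurrency || 4;
    const android = /android/i.test(navigator.userAgent);
    if (!android && crypto.subtle)
        return 2; // desktop: workers are only needed for hashwasm
    // android, or desktop without crypto.subtle (e.g. plain http):
    // keep at least one core idle to reduce the OOM probability
    return Math.max(1, nproc - 1);
}
```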
if someone accidentally started uploading a file in the wrong folder,
it was not obvious that the upload could be forgotten in the unpost tab;
this '(explain)' button in the upload-error hopefully explains that,
and the upload commences immediately once the initial attempt is aborted
on the backend, clean up the dupesched when an upload is
aborted, and save some cpu by only adding unique entries
url-param / header `ck` specifies hashing algo;
md5 sha1 sha256 sha512 b2 blake2 b2s blake2s
value 'no' or blank disables checksumming,
for when copyparty is running on ancient gear
and you don't really care about file integrity
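for example, with `/inc/` as a hypothetical upload folder, and assuming PUT-style uploads here (the header form should be equivalent to the url-param):
```ts
const blob = new Blob(["example file contents"]);
fetch("/inc/a.bin?ck=b2", { method: "PUT", body: blob }); // blake2
fetch("/inc/a.bin?ck=no", { method: "PUT", body: blob }); // no checksum
fetch("/inc/a.bin", { // header instead of url-param
    method: "PUT", body: blob, headers: { ck: "sha512" } });
```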
show a spinning halfcircle around the +/- instead of
moving the focus to the selected folder in the sidebar,
since that could mess with keyboard scrolling
an extremely brutish workaround for issues such as #110 where
browsers receive an HTTP 304 and misinterpret it as HTTP 200
option `--no304=1` adds the button `no304` to the controlpanel
which can be enabled to force-disable caching in that browser
the button is default-disabled; by specifying `--no304=2`
instead of `--no304=1` the button becomes default-enabled
can also always be enabled by accessing `/?setck=no304=y`
the first time a file is to be hashed after a website refresh,
a set of webworkers is launched for efficient parallelization
in the unlikely event of a network outage exactly at this point,
the workers will fail to start, and hashing will never begin
add a ping/pong sequence to smoketest the workers, and
fall back to hashing on the main-thread when necessary
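roughly the idea; the actual message format in up2k differs:
```ts
// resolves true if the worker answers the ping in time;
// on false, hashing proceeds on the main thread instead
function smokeTest(w: Worker, timeoutMs = 1000): Promise<boolean> {
    return new Promise((resolve) => {
        const t = setTimeout(() => resolve(false), timeoutMs);
        w.onmessage = (e) => {
            if (e.data === "pong") {
                clearTimeout(t);
                resolve(true);
            }
        };
        w.postMessage("ping");
    });
}
```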
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
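the split itself is straightforward; a sketch with made-up names:
```ts
// split one chunk into sequential subchunks, none larger than the
// u2sz max-size; they all share the parent chunk's hash, so a
// failed subchunk means reuploading the whole chunk from the start
function subchunks(chunkOfs: number, chunkLen: number, maxSz: number) {
    const out: { ofs: number; len: number }[] = [];
    for (let sub = 0; sub < chunkLen; sub += maxSz)
        out.push({
            ofs: chunkOfs + sub,
            len: Math.min(maxSz, chunkLen - sub),
        });
    return out; // must be sent in order, one at a time
}
```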
drop chunk-hashes in the up2k snap, plus other insignificant attribs
to reduce both the snapfile size and the ram usage by about 90%
reduces startup/shutdown time by a lot since there's less to serdes
(does not affect -e2d which was already optimal)
other changes:
* improve incoming-eta accuracy when the initial handshake
was made a long time before the upload actually started
* move the list of incoming files in the controlpanel to the top
* do not absreal paths unless necessary
* do not determine username if no users configured
* impacket 0.12 fixed the foldersize limit, but now
you get extremely poor performance in large folders
so the previous workaround is still default-enabled
* show media tags in shares
* html hydrator assumed a folder named `foo.txt` was a doc
* due to sessions, use `pwd` as password placeholder on services
due to deduplication, it is intentionally impossible to
upload several identical copies of a file in parallel
by default, the up2k client will upload files sorted by
size, which usually leads to dupes being grouped together,
so it will often end up attempting exactly that
this is by design, as it improves performance on average,
but it also shows the confusing (but technically-correct)
message "resume the partial upload into the original path"
fix this with a more appropriate message
note that this approach was selected in favor of pausing
handshakes while the initial copy finishes uploading,
because that could severely reduce upload performance
by preventing optimal use of multiple connections
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files
if a single file is selected for sharing, then
the returned URL will point directly to that file
and fix some shares-related bugs:
* password coalescing
* log-spam on reload
* fix: translation: change `" "` to `' '` in some strings;
  verified using `./scripts/tlcheck.sh eng chi copyparty/web/browser.js`
* fix: translation: review the newly added Chinese translation