the first time a file is to be hashed after a website refresh,
a set of webworkers is launched for efficient parallelization
in the unlikely event of a network outage at exactly that point,
the workers will fail to start, and hashing would never begin
add a ping/pong sequence to smoketest the workers, and
fall back to hashing on the main thread when necessary
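below is a minimal sketch of the same ping/pong-then-fallback idea, using python threads in place of webworkers; the names and the 2-second deadline are illustrative, not copyparty's actual code

```py
# sketch: smoketest a background hasher before trusting it,
# and fall back to hashing on the calling thread if it never answers
import hashlib, queue, threading

def worker(jobs, results):
    while True:
        job = jobs.get()
        if job == "ping":
            results.put("pong")  # proves the worker actually came up
        elif job is None:
            return  # shutdown signal
        else:
            results.put(hashlib.sha512(job).digest())

def hash_blocks(blocks):
    jobs, results = queue.Queue(), queue.Queue()
    threading.Thread(target=worker, args=(jobs, results), daemon=True).start()
    jobs.put("ping")
    try:
        ok = results.get(timeout=2) == "pong"  # the smoketest
    except queue.Empty:
        ok = False  # worker is dead or never started
    if ok:
        for b in blocks:
            jobs.put(b)
        out = [results.get() for _ in blocks]
        jobs.put(None)
        return out
    # fallback: hash on this thread instead
    return [hashlib.sha512(b).digest() for b in blocks]

print(hash_blocks([b"hello", b"world"])[0][:8].hex())
```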
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to the max number of chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
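a minimal sketch of the subchunk math, assuming chunks are described as (offset, size) ranges into the file; the names are illustrative

```py
# sketch: split one logical chunk into sequential subchunk ranges,
# each at most max_sz bytes
def subchunk_ranges(chunk_ofs, chunk_sz, max_sz):
    ranges = []
    ofs, left = chunk_ofs, chunk_sz
    while left > 0:
        n = min(left, max_sz)
        ranges.append((ofs, n))
        ofs += n
        left -= n
    return ranges

# a 300 MiB chunk with a 96 MiB cap becomes four subchunks,
# which must be uploaded one at a time, in order
print(subchunk_ranges(0, 300 * 2**20, 96 * 2**20))
```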
due to deduplication, it is intentionally impossible to
upload several identical copies of a file in parallel
by default, the up2k client uploads files sorted by size,
which usually groups the dupes together, so it will try
to upload those identical copies in parallel anyway
this is by design, as it improves performance on average,
but it also shows the confusing (but technically correct)
message "resume the partial upload into the original path"
fix this with a more appropriate message
note that this approach was selected in favor of pausing
handshakes while the initial copy finishes uploading,
because that could severely reduce upload performance
by preventing optimal use of multiple connections
v1.13.5 made some proxies angry with its massive chunklists
when stitching chunks, only list the first chunk hash in full,
and include a truncated hash for each of the following chunks
this should still be enough for logfiles to make sense,
and to smoketest that clients are behaving
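a minimal sketch of what such a chunklist could look like; the 16-character truncation is an arbitrary choice for the example, not necessarily what copyparty sends

```py
# sketch: first hash in full, truncated hashes for the rest
def stitched_hash_header(chunk_hashes, keep=16):
    if not chunk_hashes:
        return ""
    return ",".join([chunk_hashes[0]] + [h[:keep] for h in chunk_hashes[1:]])

hashes = ["A" * 44, "B" * 44, "C" * 44]  # placeholder values
print(stitched_hash_header(hashes))
# full first hash, then two 16-char prefixes
```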
if an upload chunk got stuck, the js would never stop
waiting for a response, requiring a page reload to recover
this improves reliability when running behind a reverse-proxy
which is configured to never time out requests (which can make
sense when combined with other services on the same box)
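a minimal sketch of one way to bound the wait; the 30-second deadline and retry count are illustrative, not copyparty's actual fix

```py
# sketch: give up on a stuck chunk upload after a deadline and retry,
# instead of waiting forever for a response
import urllib.error, urllib.request

def post_chunk(url, body, headers, tries=3):
    for attempt in range(1, tries + 1):
        req = urllib.request.Request(url, data=body, headers=headers, method="POST")
        try:
            with urllib.request.urlopen(req, timeout=30) as r:
                return r.read()
        except (urllib.error.URLError, TimeoutError) as ex:
            print("chunk upload attempt %d failed: %r" % (attempt, ex))
    raise RuntimeError("chunk upload failed %d times; giving up" % tries)
```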
* wait until page (au) has loaded to register hotkeys
* hotkey `m` would grow sidebar if tree was minimized
* more exact warning about the number of parallel uploads
* keep more console logs in memory
* message phrasing
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs,
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East, `1.6x` from US-West to Finland )
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the POST body
a new global-option `--u2sz`, default `1,64,96`, sets the target
request size to 64 MiB, and lets the settings ui specify any
value between 1 and 96 MiB, which is cloudflare's max value
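a minimal sketch of the stitching itself; `x-up2k-hash` is the real header name, but the grouping logic and chunk representation below are assumptions for the example

```py
# sketch: fuse consecutive sibling chunks into one POST,
# capped at the target size from --u2sz ("min,target,max" in MiB)
mins, target, maxs = (int(x) * 2**20 for x in "1,64,96".split(","))

def stitch(chunks, target_sz):
    """chunks: contiguous list of (hash, offset, size) tuples"""
    batch, batch_sz = [], 0
    for c in chunks:
        if batch and batch_sz + c[2] > target_sz:
            yield batch
            batch, batch_sz = [], 0
        batch.append(c)
        batch_sz += c[2]
    if batch:
        yield batch

chunks = [("h%03d" % n, n * 2**20, 2**20) for n in range(200)]
for batch in stitch(chunks, target):
    headers = {"x-up2k-hash": ",".join(h for h, _, _ in batch)}
    # POST the contiguous byte-range covered by this batch as one body
    print(len(batch), "chunks,", sum(sz for _, _, sz in batch), "bytes")
```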
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk is verified and written
to disk as it arrives, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
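a minimal sketch of the verify-as-it-arrives idea; the hash format (hex sha512) and the names are assumptions for the example, not copyparty's exact scheme

```py
# sketch: stream the POST body one chunk at a time, hashing and
# writing each chunk at its file offset; chunks that landed before
# a disconnect never need to be sent again
import hashlib

def recv_chunks(body, out_path, chunks):
    """chunks: list of (expected_hex_digest, offset, size) in this POST"""
    with open(out_path, "r+b") as f:
        for want, ofs, sz in chunks:
            f.seek(ofs)
            h, left = hashlib.sha512(), sz
            while left:
                buf = body.read(min(left, 512 * 1024))
                if not buf:
                    raise IOError("connection dropped; resend from this chunk")
                h.update(buf)
                f.write(buf)
                left -= len(buf)
            if h.hexdigest() != want:
                # the bad data gets overwritten when the chunk is resent
                raise ValueError("checksum mismatch; this chunk must be resent")
```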
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia, US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
then the window scale immediately resets when the request has been
sent; connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
if a reverse-proxy starts hijacking requests and replying with HTML,
don't panic when the response fails to decode as a handshake json
fix this for most other json-expecting gizmos too,
and take the opportunity to clean up some text formatting
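a minimal sketch of the graceful failure; the error text and snippet length are illustrative

```py
# sketch: show a readable error instead of a traceback when the
# handshake response turns out to be HTML (or anything else non-json)
import json

def parse_handshake(txt):
    try:
        return json.loads(txt)
    except ValueError:
        snippet = txt.lstrip()[:80]
        raise RuntimeError(
            "the server (or a proxy in front of it) replied with "
            "something that is not a handshake json: %r" % (snippet,)
        )
```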
much more accurate total-ETA when uploading with many connections
and/or uploading huge files to really slow servers
the titlebar % still only counts actually-confirmed bytes,
partially because that makes sense, partially because
that's what happened by accident
window.localStorage was null, so trying to read would fail
seen on falkon 23.08.4 with qtwebengine 5.15.12 (fedora39)
might as well be paranoid about the other failure modes too
(sudden exceptions on reads and/or writes)
* some malicious requests are now answered with HTTP 422,
so that they count against --ban-422
* do not include request headers when replying to invalid requests,
in case there is a reverse-proxy inserting something interesting
never bonk anyone with read-access (able to see the directory-listing)
or write-only access (not able to retrieve any files at all) due to
either --ban-404 or --ban-url
fixes accidental bans when webdav-uploading files which
match any of the --ban-url patterns (#55)
also default-enables --ban-404 since it is now generally safe
(even when up2k is in turbo mode), plus makes turbo smart enough
to disengage when necessary
* slightly faster startup / shutdown
* forgot a jinja2 golf
* waste 4 KiB changing prismjs back to gz since brotli is https-gated ;_;
* support for firefox<52 was broken (non-var functions must be toplevel
or immediately within another function); now even firefox 10 /
centos 6 is somewhat supported again
* js: use .call instead of .bind when possible
* when running without e2d, the message on startup regarding
unfinished uploads didn't show the correct filesystem path
* --doctitle defines most titles, prefixed with "--name: " by default
* the file browser is only prefixed with the --name itself
* --nth ("no-title-hostname") removes that prefix
* it is also removed by --nih ("no-info-hostname")