the first time a file is to be hashed after a page refresh,
a set of webworkers is launched for efficient parallelization;
in the unlikely event of a network outage at exactly that point,
the workers would fail to start and the hashing would never begin
add a ping/pong sequence to smoketest the workers, and
fall back to hashing on the main thread when necessary
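the gist of the fix, as a rough typescript sketch (the actual client is plain javascript; the names `hasher.js` and `smoketestWorker`, and the 500 ms deadline, are made up for illustration):

```ts
// spawn a worker and require a "pong" within the deadline;
// otherwise give up on it and hash on the main thread instead
function smoketestWorker(scriptUrl: string, timeoutMs = 500): Promise<Worker | null> {
  return new Promise<Worker | null>((resolve) => {
    const w = new Worker(scriptUrl);
    const fail = () => { clearTimeout(timer); w.terminate(); resolve(null); };
    const timer = setTimeout(fail, timeoutMs);
    w.onerror = fail;            // script failed to load (e.g. network outage)
    w.onmessage = (e) => {
      if (e.data === "pong") {   // worker is alive and responding
        clearTimeout(timer);
        resolve(w);
      }
    };
    w.postMessage("ping");       // the worker script is expected to echo back "pong"
  });
}

// single-threaded fallback for when the workers are unreachable
async function hashOnMainThread(f: File): Promise<ArrayBuffer> {
  return crypto.subtle.digest("SHA-512", await f.arrayBuffer());
}

async function hashFile(f: File) {
  const w = await smoketestWorker("hasher.js");
  if (w) {
    w.postMessage(f);            // healthy worker: hash off the main thread
  } else {
    await hashOnMainThread(f);   // fallback: slower, but always works
  }
}
```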
previously, the largest file that could be uploaded through
cloudflare was 383 GiB, since the max number of chunks per file
is 4096 and cloudflare caps the size of each request (each chunk
is sent as one request)
`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size are split into smaller subchunks (chunklets)
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection loss,
the entire chunk must (and will) be reuploaded
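roughly what the splitting amounts to, as an illustrative typescript sketch (function and field names are made up, not the actual client code):

```ts
// split one oversized chunk into subchunks of at most maxSize bytes;
// these must be uploaded in order, one at a time, and the whole chunk
// is reuploaded from the start if any subchunk fails
function splitChunk(chunkOfs: number, chunkLen: number, maxSize: number) {
  const subchunks: { ofs: number; len: number }[] = [];
  for (let done = 0; done < chunkLen; done += maxSize) {
    subchunks.push({
      ofs: chunkOfs + done,
      len: Math.min(maxSize, chunkLen - done),
    });
  }
  return subchunks;
}

// e.g. a 96 MiB chunk with a 64 MiB max-size becomes 64 MiB + 32 MiB
const MiB = 1024 * 1024;
console.log(splitChunk(0, 96 * MiB, 64 * MiB));
```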
both previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk
however, the list of missing chunks becomes empty just before the flush,
and some dedup-related logic incorrectly used that list to decide
whether the upload was complete
as a result, duplicate uploads could initially fail, only succeeding
after the client automatically retried a handful of times
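in other words, the dedup check needed to look at the flag rather than the list; a minimal typescript sketch of the distinction (field names are hypothetical):

```ts
interface UploadJob {
  missingChunks: number[]; // drains to [] as soon as the last chunk arrives
  done: boolean;           // only set once everything is flushed to disk
}

// buggy: an empty missing-list does not mean the file is on disk yet,
// so a duplicate upload landing in that window could fail its first tries
const isCompleteBuggy = (job: UploadJob) => job.missingChunks.length === 0;

// fixed: only trust the flag that is set after the flush
const isComplete = (job: UploadJob) => job.done;
```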
newlines, invalid utf8, and worst of all... %20 (whitespace)
due to up2k protocol limitations,
filenames are normalized when they hit the server,
but folder names get to keep their intended jank
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload duplicate files
this can be useful when `--safe-dedup=1` is not an option because other
software is tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
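(enable it for everything with the global `--no-clone`, or add `noclone` to a single volume's flags to limit it to just that volume)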