an extremely brutish workaround for issues such as #110 where
browsers receive an HTTP 304 and misinterpret it as an HTTP 200
option `--no304=1` adds the button `no304` to the controlpanel;
enabling it force-disables caching in that browser
the button is disabled by default; by specifying `--no304=2`
instead of `--no304=1` it becomes enabled by default
it can also always be enabled by accessing `/?setck=no304=y`
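
for illustration, a rough sketch of what the server-side decision might boil down to (the handler shape and cookie access are made-up names, not the actual code):

```python
# hypothetical sketch: if the no304 cookie is set (controlpanel button
# or /?setck=no304=y), never reply 304; always send the full body
def may_reply_304(req, last_modified_str):
    if req.cookies.get("no304") == "y":
        return False  # caching force-disabled for this browser

    # otherwise a 304 is possible if the client's cached copy still matches
    return req.headers.get("if-modified-since") == last_modified_str
```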
these response headers are usually not included in 304 replies,
and their presence is suspected to confuse some clients (#110)
also strip `out_headerlist` (primarily cookie assignments)
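
roughly the idea (the header names below are examples, not the exact list being stripped):

```python
# illustrative only; the real set of headers differs
SKIP_IN_304 = {"content-type", "content-length", "content-disposition", "set-cookie"}

def headers_for_304(all_headers):
    # keep only headers that make sense in a 304 reply; cookie
    # assignments (the out_headerlist) are intentionally left out too
    return {k: v for k, v in all_headers.items() if k.lower() not in SKIP_IN_304}
```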
add support for the `If-Range` header which is generally used to
prevent resuming a partial download after the source file on the
server has been modified, by returning HTTP 200 instead of a 206
this also simplifies `If-Modified-Since` and `If-Range` handling;
previously this was a spec-compliant lexical comparison,
but now it's a basic string comparison instead. The server will now
also reply 200 when the server-side mtime is older than the client's.
This is technically not according to spec, but should be safer,
as it allows backdating timestamps without purging client cache
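
a simplified sketch of the new comparison logic (names invented for illustration, not the actual handler):

```python
# simplified sketch; an exact string match now decides everything
def pick_status(req, last_modified, has_range_header):
    ims = req.headers.get("if-modified-since")
    if ims and ims == last_modified:
        return 304  # cached copy assumed current; any mismatch means 200

    ifr = req.headers.get("if-range")
    if has_range_header and (not ifr or ifr == last_modified):
        return 206  # ok to resume the partial download

    return 200  # timestamps differ (in either direction); send the whole file
```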
previously, it only accepted the 3-tuple `min,default,max`
if given a single integer (or any other unexpected value),
the up2k js would enter an infinite loop, eat all the ram
and crash the browser (nice)
fix this by accepting a single integer (for example 96)
and translating it to `1,96,96`
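
a sketch of the more forgiving parsing (function name is made up):

```python
# "96" is now accepted and expanded to (1, 96, 96)
def parse_3tuple(val):
    parts = [int(x) for x in str(val).split(",")]
    if len(parts) == 1:
        return (1, parts[0], parts[0])
    if len(parts) == 3:
        return tuple(parts)
    raise ValueError("expected 1 or 3 comma-separated integers")
```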
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would fail with ENOENT
fix: strip path components until the path is an existing directory,
and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
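
approximately this, assuming a hypothetical helper for the free-space check:

```python
import os
import shutil

# sketch: walk up until a directory that actually exists, then check
# the free space there (the incoming PUT body may be of unknown size)
def nearest_existing_dir(abspath):
    path = os.path.dirname(abspath)
    while path and not os.path.isdir(path):
        path = os.path.dirname(path)
    return path or os.path.sep

def enough_space(abspath, min_free_bytes):
    # min_free_bytes stands in for whatever df / volume limit applies
    free = shutil.disk_usage(nearest_existing_dir(abspath)).free
    return free >= min_free_bytes
```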
the first time a file is to be hashed after a website refresh,
a set of webworkers is launched for efficient parallelization
in the unlikely event of a network outage at exactly this point,
the workers would fail to start, and the hashing would never begin
add a ping/pong sequence to smoketest the workers, and
fall back to hashing on the main thread when necessary
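
the actual fix lives in the browser-side up2k js; purely to illustrate the ping/pong-then-fallback idea, an analogous sketch using a worker process in python:

```python
import hashlib
import multiprocessing as mp

def worker(conn):
    # answer the smoketest, then hash whatever bytes arrive
    while True:
        msg = conn.recv()
        if msg == "ping":
            conn.send("pong")
        elif msg is None:
            return
        else:
            conn.send(hashlib.sha512(msg).digest())

def hash_chunks(chunks):
    parent, child = mp.Pipe()
    proc = mp.Process(target=worker, args=(child,), daemon=True)
    proc.start()

    parent.send("ping")
    if parent.poll(2) and parent.recv() == "pong":
        # worker came up fine; offload the hashing
        out = []
        for c in chunks:
            parent.send(c)
            out.append(parent.recv())
        parent.send(None)
        return out

    # worker never answered; hash on the main thread instead
    proc.terminate()
    return [hashlib.sha512(c).digest() for c in chunks]
```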
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to the max number of chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
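
in effect, each oversized chunk is carved into fixed-position pieces which must arrive in order; a sketch of the splitting arithmetic (not the actual protocol code):

```python
# carve one oversized chunk into sequential subchunks / chunklets;
# all pieces verify against the parent chunk's hash, so any failure
# means the whole chunk gets reuploaded
def subchunks(chunk_ofs, chunk_len, max_sz):
    sent = 0
    while sent < chunk_len:
        n = min(max_sz, chunk_len - sent)
        yield (chunk_ofs + sent, n)  # upload these in order, one at a time
        sent += n
```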
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk
however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic
as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
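
in other words, "no chunks missing" is not the same as "fully on disk"; schematically (field names are hypothetical):

```python
# schematic only; field names are hypothetical
def is_valid_dedup_source(job):
    # wrong: the set of missing chunks empties before the final flush
    #   return not job["missing_chunks"]
    # right: only trust the upload once its data has hit the disk
    return job["done"]
```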
newlines, invalid utf8, and worst of all... %20 (whitespace)
due to up2k protocol limitations,
filenames are normalized when they hit the server,
but folders get to keep their intended jank