PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT;
fixed by stripping components until the path is an existing
directory, and by also trying to enforce disk-space / volume-size
limits even when the incoming file is of unknown size
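a minimal sketch of the component-stripping idea (the helper name is hypothetical, not the actual implementation):

```python
import os

def nearest_existing_dir(abspath: str) -> str:
    # walk up the path until an existing directory is found,
    # so the stat can't throw ENOENT for components which
    # don't exist yet
    path = abspath
    while not os.path.isdir(path):
        parent = os.path.dirname(path)
        if parent == path:
            break  # reached the filesystem root
        path = parent
    return path
```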
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
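a minimal sketch of how max-size could translate into subchunk boundaries (names and signature are illustrative, not the actual code):

```python
def subchunks(chunk_ofs: int, chunk_sz: int, max_sz: int):
    # yield (offset, size) pairs covering one oversized chunk;
    # these must be uploaded in order, one at a time, and the
    # whole chunk is retried if any subchunk fails
    ofs = chunk_ofs
    while ofs < chunk_ofs + chunk_sz:
        sz = min(max_sz, chunk_ofs + chunk_sz - ofs)
        yield ofs, sz
        ofs += sz
```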
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload dupe files
can be useful when `--safe-dedup=1` is not an option due to other
software tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
previously, only real folders could be listed by a webdav client;
a server which does not have any filesystem paths mapped to `/`
would cause clients to panic when trying to list the server root
now, assuming volumes `/foo` and `/bar/qux` exist, when accessing `/`
the user will see `/foo` but not `/bar` due to limitations in `walk`,
and `qux` will only appear when navigating directly into `/bar`
a future rework of the recursion logic should further improve this
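purely for illustration (the rework may do something entirely different), a complete root listing could be synthesized from the first component of each volume path:

```python
def root_entries(volume_paths):
    # given ["foo", "bar/qux"], an ideal root listing shows
    # {"foo", "bar"}; the current walk-based logic only finds "foo"
    return {vp.split("/", 1)[0] for vp in volume_paths}
```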
* pyz: yeet the resource tar which is now pointless thanks to pkgres
* cache importlib-resources lookups because pyz lookups are extremely slow
* prefer tx_file when possible for slightly better performance
* use hardcoded list of expected resources instead of dynamic
discovery at runtime; much simpler and probably safer
* fix some forgotten resources (copying.txt, insecure.pem)
* fix loading jinja templates on windows
add support for reading webdeps and jinja-templates using either
importlib_resources or pkg_resources, which removes the need for
extracting these to a temporary folder on the filesystem
* util: add helper functions to abstract embedded resource access
* http*: serve embedded resources through resource abstraction
* main: check webdeps through resource abstraction
* httpconn: remove unused method `respath(name)`
* use __package__ to find package resources
* util: use importlib_resources backport if available
* pass E.pkg as module object for importlib_resources compatibility
* util: add pkg_resources compatibility to resource abstraction
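a rough sketch of the lookup order described above; the helper name is hypothetical, but preferring the importlib_resources backport and passing the module object (`E.pkg`) match the bullet points:

```python
try:
    from importlib_resources import files as res_files  # backport, preferred
except ImportError:
    try:
        from importlib.resources import files as res_files  # py3.9+ stdlib
    except ImportError:
        res_files = None  # fall back to pkg_resources below

def read_resource(pkg, name: str) -> bytes:
    # pkg is a module object (E.pkg); name is a relative path
    # such as "web/splash.html" (illustrative)
    if res_files:
        return res_files(pkg).joinpath(name).read_bytes()
    import pkg_resources  # legacy setuptools API
    return pkg_resources.resource_string(pkg.__name__, name)
```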
* show media tags in shares
* html hydrator assumed a folder named `foo.txt` was a doc
* due to sessions, use `pwd` as password placeholder on services
* exponentially slow upload handshakes caused by the lack of an rd+fn
sqlite index (see the sketch below); became apparent after a volume hit 200k files
* listing big folders 5% faster due to `_quotep3b`
* optimize `unquote`, 20% faster but only used rarely
* reindex on startup 150x faster in some rare cases
(same filename in MANY folders)
the database is now around 10% larger (likely worst-case)
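the shape of that index fix, as a hedged sketch (table and column names are assumptions based on the rd+fn wording above):

```python
import sqlite3

def add_handshake_index(db_path: str) -> None:
    # assumed schema: an "up" table with directory (rd) and
    # filename (fn) columns; a composite index turns the
    # per-handshake lookup into a cheap btree probe
    db = sqlite3.connect(db_path)
    db.execute("create index if not exists up_rd_fn on up (rd, fn)")
    db.commit()
    db.close()
```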
reduce the overhead of function-calls from the client thread
to the svchub singletons (up2k, thumbs, metrics) down to 14%
and optimize the up2k chunk-receiver to spend 5x less time on bookkeeping,
which restores up2k performance to what it was before incoming-ETA was introduced
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files
if a single file is selected for sharing, then
the returned URL will point directly to that file
and fix some shares-related bugs:
* password coalescing
* log-spam on reload
* navpane would always feed the vproxy paths into the tree
instead of only when necessary (the initial load)
* mkdir would return `X-New-Dir` without the `rp-loc` prefix
* chpw and some other redirects also sent raw vpaths
Reported-by: @iridial
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands
mainly to mitigate limitations such as sharex/sharex#3992
xbr/xau can redirect uploads to other destinations with `reloc`
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`
there are limitations:
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k
also provides more details for hooks:
* xbu/xau: basic-uploader vpath with filename
* xbr/xar: add client ip
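a hedged sketch of what such a hook could look like; the `reloc` / `idx` / `del` key names are from the text above, but the input format and exact dict layout are assumptions (see the example hooks shipped with copyparty for the real schema):

```python
#!/usr/bin/env python3
import json
import sys

def main():
    fvp = sys.argv[1]  # assumption: upload vpath as the first argument

    cmd = {}
    if fvp.lower().endswith((".png", ".jpg", ".jpeg")):
        # redirect the upload into a "pics" subfolder
        cmd["reloc"] = {"vp": "pics"}

    if cmd:
        # commands are handed to the server as json on stdout
        print(json.dumps(cmd))

main()
```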
v1.13.5 made some proxies angry with its massive chunklists
when stitching chunks, only list the first chunk hash in full,
and include a truncated hash for the consecutive chunks
should be enough for logfiles to make sense
and to smoketest that clients are behaving
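roughly like this (the truncation length is an assumption):

```python
def compact_hashlist(hashes, trunc=16):
    # first chunk hash in full, consecutive siblings truncated;
    # keeps logfiles readable yet still catches misbehaving clients
    return ",".join([hashes[0]] + [h[:trunc] for h in hashes[1:]])
```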
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East, `1.6x` from US-West to Finland )
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body
a new global-option `--u2sz`, default `1,64,96`, sets the target
request size to 64 MiB, allowing the settings ui to specify any
value between 1 and 96 MiB, which is cloudflare's max request size
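a minimal sketch of the client-side grouping, assuming `chunks` is an offset-sorted, gapless list of `(hash, offset, size)` tuples; each yielded batch becomes one POST, with `x-up2k-hash` set to the comma-separated hashes of the batch:

```python
def stitch_batches(chunks, target_sz=64 * 1024 * 1024):
    # fuse consecutive sibling chunks until the target request
    # size (the middle --u2sz value) would be exceeded; real code
    # must also split the batch at any non-adjacent chunk
    batch, batch_sz = [], 0
    for h, ofs, sz in chunks:
        if batch and batch_sz + sz > target_sz:
            yield batch
            batch, batch_sz = [], 0
        batch.append((h, ofs, sz))
        batch_sz += sz
    if batch:
        yield batch
```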
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk is verified and written
to disk as it arrives, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia, US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
then immediately resets the scale when the request has been sent;
connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this