dedup is still encouraged and fully supported, but
being default-enabled has caused too many surprises;
enabling `--dedup` restores the previous default behavior.
also renames `--never-symlink` to `--hardlink-only`
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands,
mainly to mitigate limitations such as sharex/sharex#3992.
xbu/xau can redirect uploads to other destinations with `reloc`,
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`;
see the sketch after the list below.
there are limitations:
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k
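as a rough illustration, such a hook could look something like the
following minimal sketch in python; the top-level json keys (`reloc`,
`idx`, `del`) are as described above, but the exact shape of the
`reloc` value and the argv input format are assumptions here, so
check the bundled hook examples for the authoritative spec

```py
#!/usr/bin/env python3
# minimal sketch of an upload hook (xbu/xau); assumes the hook
# receives the upload's filename/path as argv[1] -- the actual
# input format depends on how the hook is registered
import json
import sys

fn = sys.argv[1]

cmd = {}
if fn.lower().endswith(".png"):
    # redirect this upload somewhere else; the `vp` field name
    # inside `reloc` is an assumption (a vpath-like destination)
    cmd["reloc"] = {"vp": "/screenshots"}

# additionally, ask the server to index another file by vpath
cmd["idx"] = ["/logs/uploads.txt"]

# printing the json on stdout is what lets the server act on it
print(json.dumps(cmd))
```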
hooks are also given more details:
* xbu/xau: the basic-uploader vpath, including the filename
* xbr/xar: now also receive the client ip
* general changes:
  * updated upload speed comparisons to reflect v1.13.5
* hfs2:
  * dead project with unfixed vulnerabilities
* hfs3:
  * has replaced hfs2
  * uploads are now resumable
  * adds new functionality:
    * write-only folders
    * unmap subfolders
    * move and delete files
    * folder-rproxy
    * themes
    * basic audio player, image viewer
* filebrowser:
  * uploads are now parallelized, resumable, segmented
    * but single large files are not accelerated
  * can listen on unix sockets
  * folder-rproxy is supported
  * more cpu-efficient than copyparty
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs,
which results in unreasonably huge speed boosts on some routes
(`2.6x` from Norway to US-East, `1.6x` from US-West to Finland)
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body
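to make the wire format concrete, a fused upload could look something
like the sketch below; only the `x-up2k-hash` header and its
comma-separated value are from the description above -- the url and
everything else about the surrounding protocol (handshake, chunk
sizing) are stand-ins

```py
# sketch of a fused chunk upload; assumes an up2k handshake has
# already established the chunk hashes -- url and data are dummies
import requests

# three sibling chunks: adjacent in the file, sent in file-order
chunk_hashes = ["aaaa1111", "bbbb2222", "cccc3333"]  # made-up hashes
chunks = {h: b"\x00" * (1024 * 1024) for h in chunk_hashes}  # dummy data

# the post body is one consecutive range of file data,
# covering all the chunks listed in the header
body = b"".join(chunks[h] for h in chunk_hashes)

r = requests.post(
    "https://example.com/up/somefile.bin",  # assumed endpoint
    headers={"X-Up2k-Hash": ",".join(chunk_hashes)},
    data=body,
)
r.raise_for_status()
```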
a new global-option `--u2sz`, default `1,64,96` (min, default, max),
sets the target request size to 64 MiB and lets the settings ui
specify any value between 1 and 96 MiB, which is cloudflare's max
value; for example, `--u2sz 1,16,96` would lower the default to 16 MiB
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk is verified and written
to disk as it arrives, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia, US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
the window scale immediately resets once the request has been sent;
connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; the only time i've seen this
a new option `--s-rd-sz` (socket read size) complements `--s-wr-sz`,
which existed already; the default (256 KiB) appears optimal in the
most popular scenario (linux host with storage on a local physical
disk, usually NVMe); the value was previously 32 KiB, so large
uploads should now use 17% less CPU.
also adds sanchecks for the values of `--iobuf`, `--s-rd-sz`, `--s-wr-sz`
also adds a file-overwrite feature for multipart posts
* add readme section on using amazon/aws s3 as storage
* mention http/https confusion caused by incorrectly configured cloudflare
* improve custom-font notes
* docker: ftp-server howto
* docker: suggest moving hist-folders into the config path
and switch the idp docker-compose files to use the
main image, in anticipation of v1.11