also includes a slight tweak to the json upload info:
when exactly one file is uploaded, the json-response has a
new top-level property, `fileurl` -- this is just a copy of
`files[0].url` as a workaround for castdrian/ishare#107
("only toplevel json properties can be referenced")
in case someone writes a plugin which
expects certain params to be sanitized:
note that because mojibake filenames are supported,
URLs and filepaths can still be absolutely bonkers
this fixes one known issue:
invalid rss-feed xml if ?pw contains special chars
...and somehow things now run 2% faster, idgi
as processing of an HTTP request begins (GET, HEAD, PUT, POST, ...),
the original query line is printed in its encoded form. This makes
debugging easier, since there is no ambiguity in how the client
phrased its request.
however, this results in very opaque logs for non-ascii languages;
basically a wall of percent-encoded characters. This is now avoided
by printing an additional, url-decoded log-message if the URL
contains `%`, immediately below the original url-encoded entry.
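conceptually (a minimal sketch, not the actual copyparty code, with `print` standing in for the logger):

```python
from urllib.parse import unquote

def log_request_line(line: str) -> None:
    print(line)  # the original, percent-encoded request line
    if "%" in line:
        # an extra, decoded copy so non-ascii URLs stay readable
        print(unquote(line, errors="replace"))
```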
also fix tests on macos, and an unrelated bad logmsg in up2k
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.
it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, or API-only users, and all similar escape hatches.
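roughly the following precedence (a sketch with made-up names; the real header is whatever `--idp-h-usr` is configured to):

```python
IDP_USERS = {"alice"}           # users provided by the identity provider
PASSWORDS = {"hunter2": "bob"}  # password -> user, for fallback/API use

def authenticate(headers: dict, password: str):
    idp_user = headers.get("x-idp-user")  # hypothetical header name
    if idp_user in IDP_USERS:
        return idp_user  # a valid IdP header takes precedence
    return PASSWORDS.get(password)  # otherwise passwords work as before
```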
url-param / header `ck` specifies hashing algo;
md5 sha1 sha256 sha512 b2 blake2 b2s blake2s
value 'no' or blank disables checksumming,
for when copyparty is running on ancient gear
and you don't really care about file integrity
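client-side, the aliases map onto hashlib like this (a sketch; `b2`/`b2s` are assumed to mean blake2b/blake2s):

```python
import hashlib

ALGOS = {
    "md5": hashlib.md5, "sha1": hashlib.sha1,
    "sha256": hashlib.sha256, "sha512": hashlib.sha512,
    "b2": hashlib.blake2b, "blake2": hashlib.blake2b,
    "b2s": hashlib.blake2s, "blake2s": hashlib.blake2s,
}

def checksum(data: bytes, ck: str = "sha512"):
    if not ck or ck == "no":
        return None  # checksumming disabled
    return ALGOS[ck](data).hexdigest()

print(checksum(b"hello", "b2s"))
```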
* return 403 instead of 404 in the following situations:
* viewing an RSS feed without necessary auth
* accessing a file with the wrong filekey
* accessing a file/folder without necessary auth
(would previously 404 for intentional ambiguity)
* only allow PROPFIND if user has either read or write;
previously a blank response was returned if the user only
had get-access, but this could confuse webdav clients into
skipping authentication (for example AuthPass)
* return 401 basic-challenge instead of 403 if the client
appears to be non-graphical, because many webdav clients
do not provide the credentials until they're challenged.
There is a heavy bias towards assuming the client is a
browser, because browsers must NEVER EVER get a 401
(tricky state that is near-impossible to deal with)
* return 401 basic-challenge instead of 403 if a PUT
is attempted without any credentials included; this
should be safe, as graphical browsers never do that
this fixes the interoperability issues mentioned in
https://github.com/authpass/authpass/issues/379
where AuthPass would GET files without providing the
password because it expected a 401 instead of a 403;
AuthPass is behaving correctly, this is not a bug
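condensed into a sketch, the resulting policy looks like this (the browser-detection heuristic shown is an assumption, not the actual check):

```python
def deny(user_agent: str) -> tuple[int, dict]:
    if user_agent.startswith("Mozilla/"):  # assumed heuristic
        # browsers must never get a 401; the basic-auth popup
        # leaves them in a state that is near-impossible to clear
        return 403, {}
    # non-graphical clients won't send credentials until challenged
    return 401, {"WWW-Authenticate": 'Basic realm="."'}
```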
whenever a new idp user is registered, up2k will continuously
reload in the background until all users have been processed
just like before, this blocks up2k uploads from each user
until said user makes it into a reload, but as of now,
reloads will batch and execute without interrupting read-access
needs further testing before next release,
probably some rough edges to sand down
an extremely brutish workaround for issues such as #110 where
browsers receive an HTTP 304 and misinterpret it as HTTP 200
option `--no304=1` adds the button `no304` to the controlpanel
which can be enabled to force-disable caching in that browser
the button is default-disabled; by specifying `--no304=2`
instead of `--no304=1` the button becomes default-enabled
can also always be enabled by accessing `/?setck=no304=y`
these response headers are usually not included in 304 replies,
and their presence is suspected to confuse some clients (#110)
also strip `out_headerlist` (primarily cookie assignments)
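conceptually, the no304 toggle amounts to something like this sketch (the header value is standard HTTP, everything else is an assumption):

```python
def respond(no304: bool, client_etag: str, file_etag: str):
    if not no304 and client_etag == file_etag:
        return 304, {}  # normal behavior: let the client use its cache
    # no304 enabled: always send the full body and forbid caching
    return 200, {"Cache-Control": "no-store, max-age=0"}
```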
add support for the `If-Range` header which is generally used to
prevent resuming a partial download after the source file on the
server has been modified, by returning HTTP 200 instead of a 206
also simplifies `If-Modified-Since` and `If-Range` handling;
previously this was a spec-compliant lexical comparison,
now it's a basic string-comparison instead. The server will now
reply 200 even when the server mtime is older than the client's.
This is technically not according to spec, but should be safer,
as it allows backdating timestamps without purging client cache
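in effect, resume decisions boil down to an exact match (a sketch):

```python
def can_resume(if_range: str, last_modified: str) -> bool:
    # previously: parse both as http-dates, resume if the file is
    # not newer; now: any mismatch (even an older server mtime)
    # forces a 200, so backdating timestamps also busts caches
    return if_range == last_modified
```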
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT
strip components until the path is an existing directory
and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
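a sketch of the path-walking part of the fix:

```python
import os

def nearest_existing_dir(abspath: str) -> str:
    # strip trailing components until we hit a directory that
    # exists, so free-space / volume-size checks have a target
    path = os.path.dirname(abspath)
    while path and not os.path.isdir(path):
        path = os.path.dirname(path)
    return path or os.sep
```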
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
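the chunklet math itself is simple; a sketch, assuming max-size comes from the third `--u2sz` value:

```python
def subchunks(chunk_size: int, max_sz: int = 96 * 1024 * 1024):
    # yield (offset, length) pairs; each chunklet must be uploaded
    # sequentially, and a failure reuploads the whole parent chunk
    ofs = 0
    while ofs < chunk_size:
        yield ofs, min(max_sz, chunk_size - ofs)
        ofs += max_sz
```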
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload dupe files
can be useful when `--safe-dedup=1` is not an option due to other
software tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
previously, only real folders could be listed by a webdav client;
a server which does not have any filesystem paths mapped to `/`
would cause clients to panic when trying to list the server root
now, assuming volumes `/foo` and `/bar/qux` exist, when accessing `/`
the user will see `/foo` but not `/bar` due to limitations in `walk`,
and `qux` will only appear when viewing `/bar`
a future rework of the recursion logic should further improve this
* pyz: yeet the resource tar which is now pointless thanks to pkgres
* cache impresource stuff because pyz lookups are Extremely slow
* prefer tx_file when possible for slightly better performance
* use hardcoded list of expected resources instead of dynamic
discovery at runtime; much simpler and probably safer
* fix some forgotten resources (copying.txt, insecure.pem)
* fix loading jinja templates on windows
add support for reading webdeps and jinja-templates using either
importlib_resources or pkg_resources, which removes the need for
extracting these to a temporary folder on the filesystem
* util: add helper functions to abstract embedded resource access
* http*: serve embedded resources through resource abstraction
* main: check webdeps through resource abstraction
* httpconn: remove unused method `respath(name)`
* use __package__ to find package resources
* util: use importlib_resources backport if available
* pass E.pkg as module object for importlib_resources compatibility
* util: add pkg_resources compatibility to resource abstraction
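a condensed sketch of the dual-backend approach (the real helpers in util.py differ in detail):

```python
try:
    # prefer the importlib_resources backport if available
    from importlib_resources import files
except ImportError:
    from importlib.resources import files  # stdlib, 3.9+

def read_resource(pkg, name: str) -> bytes:
    # `pkg` is a module object (E.pkg) for importlib compatibility
    return (files(pkg) / name).read_bytes()
```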
* show media tags in shares
* html hydrator assumed a folder named `foo.txt` was a doc
* due to sessions, use `pwd` as password placeholder on services
* exponentially slow upload handshakes caused by lack of rd+fn
sqlite index; became apparent after a volume hit 200k files
* listing big folders 5% faster due to `_quotep3b`
* optimize `unquote`, 20% faster but only used rarely
* reindex on startup 150x faster in some rare cases
(same filename in MANY folders)
the database is now around 10% larger (likely worst-case)
reduce the overhead of function-calls from the client thread
to the svchub singletons (up2k, thumbs, metrics) down to 14%
and optimize up2k chunk-receiver to spend 5x less time bookkeeping
which restores up2k performance to before introducing incoming-ETA
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files
if a single file is selected for sharing, then
the returned URL will point directly to that file
and fix some shares-related bugs:
* password coalescing
* log-spam on reload
* navpane would always feed the vproxy paths into the tree
instead of only when necessary (the initial load)
* mkdir would return `X-New-Dir` without the `rp-loc` prefix
* chpw and some other redirects also sent raw vpaths
Reported-by: @iridial
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands
mainly to mitigate limitations such as sharex/sharex#3992
xbr/xau can redirect uploads to other destinations with `reloc`
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`
there are limitations:
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k
also provides more details for hooks:
* xbu/xau: basic-uploader vpath with filename
* xbr/xar: add client ip
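for illustration, a hypothetical `xbu` hook which relocates an upload and requests indexing of the result; the field names inside `reloc` and the upload-info json are assumptions:

```python
#!/usr/bin/env python3
# hypothetical upload hook; reads upload-info json from argv
# (the `j` flag) and prints commands as json on stdout
import json, sys, time

info = json.loads(sys.argv[1])

print(json.dumps({
    "reloc": {"vp": time.strftime("%Y-%m")},  # assumed field name
    "idx": [info["vp"]],  # ask the server to index this vpath
}))
```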
v1.13.5 made some proxies angry with its massive chunklists
when stitching chunks, only list the first chunk hash in full,
and include a truncated hash for the consecutive chunks
should be enough for logfiles to make sense
and to smoketest that clients are behaving
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East, `1.6x` from US-West to Finland )
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body
a new global-option `--u2sz`, default `1,64,96`, sets the target
request size to 64 MiB, allowing the settings ui to specify any
value between 1 and 96 MiB, which is cloudflare's max value
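on the wire, a stitched POST looks roughly like this sketch (endpoint and handshake details are omitted / assumed):

```python
import requests

chunk_hashes = ["hash0", "hash1", "hash2"]  # sibling chunks, in order
body = b"..."  # the concatenated data of exactly those chunks

requests.post(
    "http://127.0.0.1:3923/",  # hypothetical endpoint
    headers={"x-up2k-hash": ",".join(chunk_hashes)},
    data=body,
)
```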
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk will be verified and written
to disk as it arrives, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia,US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
the scale then immediately resets when the request has been sent;
connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
hooks can be restricted to users with certain permissions, for example
`--xm aw,notify-send` will only `notify-send` if user has write-access
the user's list of permissions is now also included in the json
that is passed to the hook if enabled; `--xm aw,j,notify-send`
flag parsing now also stops when encountering a blank value,
making it possible to specify initial arguments to the command:
`--xm aw,j,,notify-send,hey` would run `notify-send` with `hey`
as its first argument, and the json would be the 2nd argument,
similarly `--xm ,notify-send,hey` when no flags specified
this is somewhat explained in `--help-hooks`, but
additional related features are planned in the near future
and will all be better documented when the dust settles
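meanwhile, a sketch of the arg-splitting described above (the rule for telling flags from the command is simplified here):

```python
KNOWN_FLAGS = {"aw", "j"}  # subset, for illustration

def parse_hook(spec: str):
    parts = spec.split(",")
    flags = []
    for i, p in enumerate(parts):
        if p == "":  # blank value: everything after is the command
            return flags, parts[i + 1:]
        if p not in KNOWN_FLAGS:  # first non-flag starts the command
            return flags, parts[i:]
        flags.append(p)
    return flags, []

print(parse_hook("aw,j,,notify-send,hey"))  # (['aw','j'], ['notify-send','hey'])
print(parse_hook(",notify-send,hey"))       # ([], ['notify-send','hey'])
```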
update the mtime of the file that was used to produce the folder thumbnail
(rather than the folder itself) since the folder-thumb is
always resolved to the file's thumb in the on-disk cache
if a request body is expected, but the request has no content-length,
set the timeout to 1/20 of `--s-tbody`, so 9 seconds by default,
or 3 seconds if it's 60 as recommended in helptext
this gives less confusing behavior if a client accidentally does
something invalid, replying with an error response instead of
waiting out the previous timeout of 186 seconds
also raise the slowloris flag, in case a client bugs out and
keeps making such requests
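the arithmetic, as a one-liner sketch:

```python
def nolen_timeout(s_tbody: int) -> int:
    # 1/20 of --s-tbody: 186 -> 9 sec (default), 60 -> 3 sec
    return s_tbody // 20
```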
this was intentionally skipped to avoid complexity, but enough
people have asked why it doesn't work that it's time to do
something about it. turns out it wasn't that bad
when there were more than ~700 active connections:
* sendfile (non-https downloads) could fail
* mdns and ssdp could fail to reinitialize on network changes
...because `select` can't handle FDs higher than 512 on windows
(1024 on linux/macos), so prefer `poll` where possible (linux/macos)
but apple keeps breaking and unbreaking `poll` in macos,
so use `--no-poll` if necessary to force `select` instead
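the preference boils down to something like this sketch, with `--no-poll` forcing the fallback:

```python
import select, sys

def use_poll(no_poll: bool = False) -> bool:
    # poll has no FD_SETSIZE cap, but is flaky on some macos builds
    # and absent on windows; select caps out at 512 FDs on windows,
    # 1024 on linux/macos
    return not no_poll and hasattr(select, "poll") and sys.platform != "win32"
```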