adds options `--bauth-last` to lower the preference for
taking the basic-auth password in case of conflict,
and `--no-bauth` to entirely disable basic-authentication
if a client is providing multiple passwords, for example when
"logged in" with one password (the `cppwd` cookie) and switching
to another account by also sending a PW header/url-param, then
the default evaluation order to determine which password to use is:
url-param `pw`, header `pw`, basic-auth header, cookie (cppwd/cppws);
so if a client supplies a basic-auth header, the server ignores the
cookie and uses the basic-auth password instead, which usually makes sense
but this can become a problem if you have other webservers running
on the same domain which also support basic-authentication;
--bauth-last is a good choice for cooperating with such services, as
--no-bauth currently breaks support for the android app...
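for reference, a minimal sketch of that selection logic in python;
the function and variable names are made up, not copyparty internals:

  # illustrative only; assumes --bauth-last simply moves
  # basic-auth to the end of the evaluation order
  # (--no-bauth would simply never populate bauth_pw)
  def pick_password(url_pw, hdr_pw, bauth_pw, cookie_pw, bauth_last=False):
      if bauth_last:
          order = [url_pw, hdr_pw, cookie_pw, bauth_pw]
      else:
          order = [url_pw, hdr_pw, bauth_pw, cookie_pw]
      for pw in order:
          if pw is not None:
              return pw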
counterpart of `--s-wr-sz`, which already existed
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)
was previously 32 KiB, so large uploads should now use 17% less CPU
also adds sanchecks for values of `--iobuf`, `--s-rd-sz`, `--s-wr-sz`
also adds file-overwrite feature for multipart posts
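the sanchecks mentioned above amount to rejecting absurd buffer sizes;
roughly this sketch (the bounds here are made up):

  # hypothetical bounds, not the actual limits
  def sancheck_bufsize(name, val, lo=4096, hi=4 * 1024 * 1024):
      if not lo <= val <= hi:
          raise ValueError("%s=%d out of range %d..%d" % (name, val, lo, hi))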
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)
was previously a mix of 64 and 512 KiB;
now the same value is enforced everywhere
download-as-tar is now 20% faster with the default value
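conceptually, `--iobuf` is just the chunk size of the file-copy loops,
as in this sketch; a larger value means fewer read/write calls per file:

  # sketch; dst could be a socket or the tar stream
  def copy_stream(src, dst, iobuf=256 * 1024):
      while True:
          buf = src.read(iobuf)
          if not buf:
              break
          dst.write(buf)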
the volflags of `/` were used to determine if e2d was enabled,
which is wrong in two ways:
* if there is no `/` volume, it would be globally disabled
* if `/` has e2d, but another volume doesn't, it would
erroneously think unpost was available, which is not an
issue unless that volume used to have e2d enabled AND
there is stale data matching the client's IP
3f05b665 (v1.11.0) had an incomplete fix for the stale-data part of
the above, which also introduced the other issue
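conceptually, the fix is to consult the flags of the volume actually
being accessed, rather than the flags of `/`; a sketch with made-up
names, not the actual vfs api:

  # sketch; not the actual copyparty code
  def unpost_available(vfs, vpath):
      vol = vfs.get_volume(vpath)  # the volume containing vpath
      return "e2d" in vol.flags    # instead of checking the "/" volume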
this commit partially fixes the following issue:
if a client manages to escape real-ip detection, copyparty will
try to ban the reverse-proxy instead, effectively banning all clients
this can happen if the configuration says to obtain client real-ip
from a cloudflare header, but the server is not configured to reject
connections from non-cloudflare IPs, so a scanner will eventually
hit the server IP with malicious-looking requests and trigger a ban
copyparty will now continue to process requests from banned IPs until
the header has been parsed and the real-ip obtained (or not), at the
cost of some increased server load from malicious clients
assuming the `--xff-src` and `--xff-hdr` config is correct,
this issue should no longer hit innocent clients
the old behavior of immediately rejecting a banned IP address
can be re-enabled with the new option `--early-ban`
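roughly, the ordering change looks like this sketch (names made up;
--early-ban restores the old order, and xff-src validation of the
proxy is omitted for brevity):

  # sketch of the request-handling order, not the actual httpcli code
  def handle_request(conn, args, bans):
      if args.early_ban and bans.is_banned(conn.peer_ip):
          return conn.reject()  # old behavior: drop before parsing anything
      req = conn.read_headers()
      # real-ip from the configured header, falling back to the peer ip
      ip = req.headers.get(args.xff_hdr) or conn.peer_ip
      if bans.is_banned(ip):
          return conn.reject()  # ban decision uses the real-ip, not the proxy
      return conn.serve(req)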
to abort an upload, refresh the page and access the unpost tab,
which now includes unfinished uploads (sorted before completed ones)
the behavior can be configured through `u2abort` (global-option or
volflag); by default it requires both the IP and account to match
https://a.ocv.me/pub/g/nerd-stuff/2024-0310-stoltzekleiven.jpg
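the default check is roughly the following sketch (assuming the
default u2abort mode only gates on IP and account; names made up):

  # sketch; by default both conditions must hold
  def may_abort(req_ip, req_uname, upload):
      return req_ip == upload.ip and req_uname == upload.uname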
running behind cloudflare doesn't necessarily
mean being accessible ONLY through cloudflare
also include a general warning about optimal
configuration for non-cloudflare intermediates
as this option is very rarely useful, the k304 button is now hidden
by default; the new global-option `--k304` unhides it and/or sets it
default-enabled
the toggle will still appear if the feature was previously enabled by
a client, and the feature is still default-enabled for all IE clients
* docker: warn if there are config-files in ~/.config/copyparty
because somebody copied their config into
/cfg/copyparty instead of /cfg as intended
* docker: warn if there are no config-files in an included directory
* make misconfigured reverse-proxies more obvious
* explain cors rejections in server log
* indicate cors rejection in error toast
some reverse-proxies expect plaintext replies, and
we don't have a brotli decompressor to satisfy this
additionally, because brotli is https-gated (thx google),
it was already an impractical mess anyways
the sfx is now 7 KiB larger
chrome crashes if there are more than 2000 unique SVGs on one page, so
there was serverside useragent-sniffing to determine if the icon should
be an svg or a raster
however, since the useragent is not in our vary header, cloudflare
couldn't see the difference and cached everything equally, meaning
most folders would display a random mix of png and svg thumbnails
move browser detection to the clientside to ensure unique URLs
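serverside this reduces to keying the format on the URL itself, so a
cache keyed on the URL can never mix the two; a sketch with a made-up
query-param name:

  # sketch; the actual param name differs
  def thumb_format(query):
      # the client sniffs its own browser and requests ?fmt=svg
      # or ?fmt=png, giving each representation a unique URL
      return "svg" if query.get("fmt") == "svg" else "png"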
as each chunk is written to the file, httpcli calls
up2k.confirm_chunk to register the chunk as completed, and the reply
indicates whether that was the final outstanding chunk, in which case
httpcli closes the file descriptors since there's nothing more to write
the issue is that the final chunk is registered as completed before the
file descriptors are closed, meaning there could be writes that haven't
finished flushing to disk yet
if the client decides to issue another handshake during this window,
up2k sees that all chunks are complete and calls up2k.finish_upload
even as some threads might still be flushing the final writes to disk
so the conditions to hit this bug were as follows (all must be true):
* multiprocessing is disabled
* there is a reverse-proxy
* a client has several idle connections and reuses one of those
* the server's filesystem is EXTREMELY slow, to the point where
closing a file takes over 30 seconds
the fix is to stop handshakes from being processed while a file is
being closed, which is unfortunately a small bottleneck in that it
prohibits initiating another upload while one is being finalized, but
the required complexity to handle this better is probably not worth it
(a separate mutex for each upload session or something like that)
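a simplified model of the fix, with one shared lock standing in for
the real (more granular) locking:

  # sketch; models the idea, not the actual up2k code
  import threading

  mu = threading.Lock()

  def confirm_chunk(job, chunk):
      with mu:
          job.done.add(chunk)
          if len(job.done) == job.nchunks:
              job.fd.close()  # fully flushed before anyone sees "all done"

  def handshake(job):
      with mu:
          if len(job.done) == job.nchunks:
              job.finish()  # safe: the fds are guaranteed closed by now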
this issue is mostly harmless, partially because it is super tricky
to hit (only aware of it happening synthetically), and because there
are usually no harmful consequences; the worst-case is if this were to
happen exactly as the server OS decides to crash, which would make the
file appear to be fully uploaded even though it's missing some data
(all extremely unlikely, but not impossible)
there is no performance impact; if anything it should now accept
new tcp connections slightly faster thanks to more granular locking
features which should be good to go:
* user groups
* assigning permissions by group
* dynamically created volumes based on username/groupname (config sketch below)
* rebuild vfs when new users/groups appear
but several important features are still pending:
* detect dangerous configurations
* dynamic vol below readable path
* remember volumes created during previous runs
  * helps prevent unintended access
  * correct filesystem-scan on startup
* allow mounting `/` (the entire filesystem) as a volume
  * not that you should (really, you shouldn't)
* improve `-v` helptext
* change IdP group symbol to @ because % is used for file inclusion
  * not technically necessary but is less confusing in docs
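as an example of the dynamically created volumes, a per-user home
could look like this config-file sketch (treat the exact syntax as
illustrative):

  # one volume appears per IdP user; ${u} expands to the username
  [/u/${u}]
    srv/u/${u}
    accs:
      rwmda: ${u}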
some clients (clonezilla-webdav) rapidly create and delete files;
the delete fails if copyparty is still hashing the file (usually the
case), and the same thing can probably happen due to antivirus etc
add global-option --rm-retry (volflag rm_retry) specifying
for how long (and how quickly) to keep retrying the deletion
default: retry for 5sec on windows, 0sec (disabled) everywhere else,
since this is only a problem on windows
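the retry behaves roughly like this sketch (pacing simplified;
actual option parsing differs):

  import os, time

  # sketch; keep retrying the delete until the deadline
  def rm_retry(path, max_sec=5.0, interval=0.1):
      t0 = time.time()
      while True:
          try:
              return os.unlink(path)
          except OSError:
              if time.time() - t0 >= max_sec:
                  raise
              time.sleep(interval)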
* webdav: extend applesan regex with more stuff to exclude
* on macos, use the applesan regex as the default `--no-idx` to avoid indexing them
(they didn't show up in search since they're dotfiles, but still)
igloo irc has an absolute time limit of 2 minutes before it just
disconnects mid-upload, which made it look like it had a buggy
multipart generator instead of just being funny
anticipating similar events in the future, also log the
client-selected boundary value to eyeball its yoloness
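the boundary is just a parameter in the content-type header, so
logging it is cheap; a sketch:

  # sketch; copyparty's actual parsing/logging differs
  def log_boundary(headers, log):
      ct = headers.get("content-type", "")
      for part in ct.split(";"):
          part = part.strip()
          if part.startswith("boundary="):
              log("multipart boundary: %r" % (part[9:],))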
primarily to support uploading from Igloo IRC but also generally useful
(not actually tested with Igloo IRC yet because it's a paid feature,
so just gonna wait for spiky to wake up and tell me it didn't work)