as processing of an HTTP request begins (GET, HEAD, PUT, POST, ...),
the original query line is printed in its encoded form. This makes
debugging easier, since there is no ambiguity in how the client
phrased its request.
however, this results in very opaque logs for non-ascii languages;
basically a wall of percent-encoded characters. Avoid this issue
by printing an additional, url-decoded log-message if the URL
contains `%`, immediately below the original url-encoded entry.
also fix tests on macos, and an unrelated bad logmsg in up2k
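the gist of the url-decoding addition, as a hedged sketch (function and parameter names are made up here, not copyparty's actual logging code):

```python
from urllib.parse import unquote

def log_request_line(log, method: str, url: str) -> None:
    # always print the request line exactly as the client sent it
    log("%s %s" % (method, url))
    # if it contains percent-encoding, also print a decoded copy
    # directly below, so non-ascii paths stay human-readable
    if "%" in url:
        log("%s %s" % (method, unquote(url)))

# example: a japanese filename, encoded and then decoded
log_request_line(print, "GET", "/%E6%97%A5%E6%9C%AC%E8%AA%9E.txt")
```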
chrome (and chromium-based browsers) can OOM when:
* the OS is Windows, macOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files
unfortunately this also applies to Android, which heavily relies
on webworkers to get read-speeds anywhere close to acceptable
as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability
on desktop, webworkers are only necessary for hashwasm, so
limit the number of workers to 2 if crypto.subtle is available
and otherwise use the nproc-1 rule for hashwasm in workers
bug report: https://issues.chromium.org/issues/383568268
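the resulting heuristic, written out as a standalone python sketch for clarity (the real logic lives in the browser js; names and structure here are illustrative):

```python
def num_hash_workers(nproc: int, is_android: bool, has_subtle: bool) -> int:
    # leaving at least one core idle appears to be enough to
    # sufficiently reduce the chromium OOM probability
    spare_one = max(1, nproc - 1)
    if is_android:
        # android relies on webworkers for acceptable read-speeds,
        # so keep as many as the nproc-1 rule allows
        return spare_one
    if has_subtle:
        # on desktop, workers are only needed for hashwasm, so two
        # is plenty when crypto.subtle handles the hashing
        return min(2, spare_one)
    # no crypto.subtle available; hashwasm runs in workers,
    # so fall back to the nproc-1 rule
    return spare_one

print(num_hash_workers(8, False, True))  # 2 on a typical desktop
```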
if a NIC is brought up with several IPs, the logs would only
mention one of the new IPs; likewise, if a PCIe bus crashes and
all NICs drop dead, only one of the IPs that disappeared would
be mentioned. as both scenarios are oddly common, be more verbose
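conceptually the fix is just diffing the previous and current IP sets and reporting every change, instead of the first one noticed; something like this (illustrative, not the actual code):

```python
def log_ip_changes(log, old_ips: set, new_ips: set) -> None:
    # report every address that appeared or disappeared,
    # not just the first one that happens to be noticed
    for ip in sorted(new_ips - old_ips):
        log("new ip: %s" % (ip,))
    for ip in sorted(old_ips - new_ips):
        log("lost ip: %s" % (ip,))

# a NIC comes up with several IPs at once
log_ip_changes(print, {"10.0.0.5"}, {"10.0.0.5", "10.0.0.6", "192.168.1.2"})
```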
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.
it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, or API-only users, and all similar escape hatches.
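the resulting login flow is roughly this (a sketch; the header name, data structures and helpers are made up for illustration):

```python
# users provisioned through the IdP, and regular password accounts
IDP_USERS = {"alice"}
PASSWORDS = {"hunter2": "bob"}

def authenticate(headers: dict):
    # a request carrying a valid IdP username header bypasses
    # password-based login, but only for that request
    idp_user = headers.get("x-idp-user", "")
    if idp_user in IDP_USERS:
        return idp_user
    # everything else falls through to password-based login as usual,
    # which keeps fallback users and API-only users working
    return PASSWORDS.get(headers.get("pw", ""))

print(authenticate({"x-idp-user": "alice"}))  # "alice" via IdP
print(authenticate({"pw": "hunter2"}))        # "bob" via password
```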
if someone accidentally started uploading a file into the wrong folder,
it was not obvious that the upload could be forgotten in the unpost tab;
the new '(explain)' button in the upload-error hopefully explains that,
and the upload immediately commences when the initial attempt is aborted
on the backend, clean up the dupesched when an upload is
aborted, and save some cpu by adding unique entries only
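the backend part amounts to roughly the following (hypothetical shape; the real dupesched in up2k is keyed and structured differently):

```python
def add_dupe(dupesched: dict, filehash: str, entry: tuple) -> None:
    # adding unique entries only; re-appending the same entry
    # over and over just wasted cpu later on
    waiting = dupesched.setdefault(filehash, [])
    if entry not in waiting:
        waiting.append(entry)

def abort_upload(dupesched: dict, filehash: str, entry: tuple) -> None:
    # when an upload is aborted, drop it from the dupesched
    # so the dedup logic never tries to act on it
    waiting = dupesched.get(filehash)
    if waiting and entry in waiting:
        waiting.remove(entry)
        if not waiting:
            del dupesched[filehash]
```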
* support globbing/wildcards on windows
* add `osc 9;4` to show upload progress in the taskbar
  (currently windows-only; linux is picking it up;
  see the sketch after this list)
* work around msys2-terminal not normalizing
  absolute paths which contain whitespace
* show a helpful "now hashing..." while the
first file is being hashed, since it kinda
looks like a deadlock on windows otherwise
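regarding the `osc 9;4` bullet: it is just an escape sequence written to the terminal, following the ConEmu convention which Windows Terminal also understands; a minimal sketch:

```python
import sys

def taskbar_progress(percent=None) -> None:
    # OSC 9;4 : ESC ] 9 ; 4 ; state ; percent BEL
    # state 1 = normal progress bar, state 0 = remove the indicator
    if percent is None:
        sys.stdout.write("\x1b]9;4;0;0\x07")
    else:
        sys.stdout.write("\x1b]9;4;1;%d\x07" % max(0, min(100, percent)))
    sys.stdout.flush()

taskbar_progress(50)    # taskbar icon shows 50%
taskbar_progress(None)  # back to normal
```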
the url-param / header `ck` specifies the hashing algo;
one of: md5 sha1 sha256 sha512 b2 blake2 b2s blake2s
a value of 'no' (or blank) disables checksumming,
for when copyparty is running on ancient gear
and you don't really care about file integrity
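for example, a plain PUT upload could request a specific checksum like this (sketch only; the server URL and response handling are assumptions):

```python
import requests

# ask the server to use sha512 for the upload checksum;
# "ck" works both as a url-param and as a request header
with open("backup.tar.gz", "rb") as f:
    r = requests.put(
        "https://files.example.com/incoming/backup.tar.gz",
        params={"ck": "sha512"},
        data=f,
    )
print(r.status_code, r.text)

# ck=no (or blank) skips checksumming entirely
```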
shadowing is the act of intentionally blocking off access to
files in a volume by placing another volume atop a file/folder.
say you have volume '/' with a file '/a/b/c/d.txt'; if you create a
volume at '/a/b', then all files/folders inside the original folder
become inaccessible, replaced by the contents of the new volume
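in other words, whether a path is shadowed is just a prefix check against the volumes mounted beneath its own volume; a simplified sketch:

```python
def is_shadowed(vpath: str, nested_vols: list) -> bool:
    # a virtual path is shadowed if another volume is mounted at
    # that exact path or at one of its parent folders
    vpath = vpath.strip("/")
    for vol in nested_vols:
        vol = vol.strip("/")
        if vol and (vpath == vol or vpath.startswith(vol + "/")):
            return True
    return False

# the example above: '/' holds 'a/b/c/d.txt', then '/a/b' becomes a volume
print(is_shadowed("a/b/c/d.txt", ["/a/b"]))  # True; hidden from '/'
```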
the initial code for forgetting shadowed files from the parent vol
database would only forget files which were discovered during a
filesystem scan; any uploaded files would be intentionally preserved
in the parent volume's database, probably to avoid losing uploader
info in the event of a brief mistaken config change, where a volume
is shadowed by accident.
this precaution was a mistake, currently causing far more
issues than it solves (#61 and #120), so away it goes.
huge thanks to @Gremious for doing all the legwork on this!
* return 403 instead of 404 in the following situations:
* viewing an RSS feed without necessary auth
* accessing a file with the wrong filekey
* accessing a file/folder without necessary auth
(would previously 404 for intentional ambiguity)
* only allow PROPFIND if the user has either read or write;
  previously a blank response was returned if the user had
  get-access, but this could confuse webdav clients into
  skipping authentication (for example AuthPass)
* return 401 basic-challenge instead of 403 if the client
appears to be non-graphical, because many webdav clients
do not provide the credentials until they're challenged.
There is a heavy bias towards assuming the client is a
browser, because browsers must NEVER EVER get a 401
(tricky state that is near-impossible to deal with)
* return 401 basic-challenge instead of 403 if a PUT
is attempted without any credentials included; this
should be safe, as graphical browsers never do that
this fixes the interoperability issues mentioned in
https://github.com/authpass/authpass/issues/379
where AuthPass would GET files without providing the
password because it expected a 401 instead of a 403;
AuthPass is behaving correctly; this is not a bug
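taken together, the deny-path now looks roughly like this (simplified sketch; the browser detection and header names are illustrative):

```python
def deny(method: str, headers: dict):
    # returns (status, extra-headers) for a request that failed auth
    ua = headers.get("user-agent", "").lower()
    looks_like_browser = "mozilla/" in ua
    has_creds = "authorization" in headers or "pw" in headers

    # non-graphical clients (webdav etc) and credential-less PUTs get
    # a 401 challenge, since many clients wait to be challenged;
    # browsers must never see a 401 (near-impossible state to recover)
    if not looks_like_browser or (method == "PUT" and not has_creds):
        return 401, {"WWW-Authenticate": 'Basic realm="copyparty"'}

    # everything else gets a 403 instead of the old ambiguous 404
    return 403, {}

print(deny("GET", {"user-agent": "Dart/3.0 (dart:io)"}))  # 401 challenge
print(deny("GET", {"user-agent": "Mozilla/5.0 ..."}))     # 403
```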
this is a better alternative to using `--no-idx` for that purpose,
since it also excludes recent uploads (not just files discovered
during fs-indexing), and it doesn't prevent deduplication
also speeds up searches by a tiny amount, since the sanchecks are
built into the exclude-filter while parsing the config instead of
being performed during each search query
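the speedup comes from validating and compiling the filter once at config-parse time rather than per query; roughly (illustrative sketch, assuming regex patterns):

```python
import re

def build_exclude_filter(patterns):
    # sancheck + compile each pattern once, while parsing the config
    compiled = []
    for p in patterns:
        try:
            compiled.append(re.compile(p))
        except re.error as ex:
            raise ValueError("invalid exclude pattern %r: %s" % (p, ex))

    def excluded(vpath: str) -> bool:
        # per-query work is now just a few precompiled regex searches
        return any(r.search(vpath) for r in compiled)

    return excluded

excluded = build_exclude_filter([r"\.pyc$", r"/node_modules/"])
print(excluded("proj/node_modules/x.js"))  # True
```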