when hashing files on android-chrome, read a contiguous range of
several chunks at a time, ensuring each read is at least 48 MiB
and then slice that cache into the correct chunksizes for hashing
especially on GrapheneOS Vanadium (where webworkers are forbidden),
this improves worst-case speed (filesize <= 256 MiB) from 13 to 139 MiB/s
48M was chosen wrt RAM usage (48*4 MiB); a target read-size of
16M would have given 76 MiB/s, 32M = 117 MiB/s, and 64M = 154 MiB/s
additionally, on all platforms (not just android and/or chrome),
allow async hashing of <= 3 chunks in parallel on main-thread
when chunksize <= 48 MiB, and <= 2 at <= 96 MiB; this gives
decent speeds approaching that of webworkers (around 50%)
this is a new take on c06d928bb5
which was removed in 184af0c603
when a chrome-beta temporarily fixed the poor file-read performance
(afaict the fix was reverted and never made it to chrome stable)
as for why any of this is necessary,
the security features in android have the unfortunate side-effect
of making file-reads from web-browsers extremely expensive;
this is especially noticeable in android-chrome, where
file-hashing is painfully slow, around 9 MiB/s worst-case
this is due to a fixed-time overhead for each read operation;
reading 1 MiB takes 60 msec, while reading 16 MiB takes 112 msec
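roughly the idea, as a python sketch (the real logic is in the js client;
names, chunksize, and hash algo below are illustrative):

```python
# sketch: batch several chunks into one big read to amortize the
# fixed per-read overhead, then hash each chunk from the buffer
from hashlib import sha512

CHUNK = 4 * 1024 * 1024      # example chunksize
MIN_READ = 48 * 1024 * 1024  # target read-size on android-chrome

def hash_chunks(f):
    hashes = []
    while True:
        buf = f.read(max(MIN_READ, CHUNK))  # one large contiguous read
        if not buf:
            return hashes
        for ofs in range(0, len(buf), CHUNK):
            hashes.append(sha512(buf[ofs:ofs + CHUNK]).hexdigest())
```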
fixes a bug reported on discord;
1. run with `--idp-h-usr=iu -v=srv::A`
2. upload a file with up2k; this succeeds
3. announce an idp user: `curl -Hiu:a 127.1:3923`
4. upload another file; fails with "fs-reload"
the idp announce would `up2k.reload` which raises the
`reload_flag` and `rescan_cond`, but there is nothing
listening on `rescan_cond` because `have_e2d` was false
must assume e2d if idp is enabled, because `have_e2d` will
only be true if there are non-idp volumes with e2d enabled
in case someone writes a plugin which
expects certain params to be sanitized
note that because mojibake filenames are supported,
URLs and filepaths can still be absolutely bonkers
this fixes one known issue:
invalid rss-feed xml if ?pw contains special chars
...and somehow things now run 2% faster, idgi
18:17:26 &ed | what's wrong with it
18:17:38 +Mai | that you don't know it's the volume bar before you try it
18:17:46 &ed | oh
18:17:48 &ed | yeah i guess
18:17:54 +Mai | especially when it's at 100
18:18:00 &ed | how do i fix it tho
18:19:50 +Mai | you could add an icon that's also a mute button (to not make it a useless icon)
18:22:38 &ed | i'll make the volume text always visible and include a speaker icon before it
18:23:53 +Mai | that is better at least
when deleting a folder, any dotfiles/folders within would only
be deleted if the user had the dot-permission to see dotfiles;
this gave the confusing behavior where seemingly empty
folders would remain after being deleted
fix this to only require the delete-permission, and always
delete the entire folder, including any dotfiles within
similar behavior would also apply to moves, renames, and copies;
fix moves and renames to only require the move-permission in
the source volume; dotfiles will now always be included,
regardless of whether the user has the dot-permission
in the source and/or destination volumes
copying folders now also behaves more intuitively: if the user has
the dot-permission in the target volume, then dotfiles will only be
included from source folders where the user also has the dot-perm,
to prevent the user from seeing intentionally hidden files/folders
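condensed into a sketch, the rules now work out to something like this
(hypothetical helper, not actual copyparty code):

```python
# dotfile inclusion rules for folder operations (sketch)
def include_dotfiles(op, src_has_dot, dst_has_dot):
    if op in ("delete", "move", "rename"):
        return True  # dotfiles always come along; only del/mv perm is checked
    if op == "copy":
        # only copy dotfiles the user could already see in the source
        return dst_has_dot and src_has_dot
    raise ValueError(op)
```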
as processing of an HTTP request begins (GET, HEAD, PUT, POST, ...),
the original query line is printed in its encoded form. This makes
debugging easier, since there is no ambiguity in how the client
phrased its request.
however, this results in very opaque logs for non-ascii languages;
basically a wall of percent-encoded characters. Avoid this issue
by printing an additional, decoded log-message if the URL contains `%`,
immediately below the original url-encoded entry.
also fix tests on macos, and an unrelated bad logmsg in up2k
chrome (and chromium-based browsers) can OOM when:
* the OS is Windows, MacOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files
unfortunately this also applies to Android, which heavily relies
on webworkers to make read-speeds anywhere close to acceptable
as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability
on desktop, webworkers are only necessary for hashwasm, so
limit the number of workers to 2 if crypto.subtle is available
and otherwise use the nproc-1 rule for hashwasm in workers
bug report: https://issues.chromium.org/issues/383568268
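the worker-count rule boils down to roughly this
(python sketch of the client-side logic):

```python
# sketch: how many hashing webworkers to spawn (illustrative)
def num_hashers(nproc, is_android, have_crypto_subtle):
    if is_android:
        return max(1, nproc - 1)  # keep one core idle to dodge the OOM
    if have_crypto_subtle:
        return 2                  # workers are only needed for hashwasm
    return max(1, nproc - 1)      # hashwasm-in-workers: nproc-1 rule
```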
if a NIC is brought up with several IPs,
it would only mention one of the new IPs in the logs
or if a PCIe bus crashes and all NICs drop dead,
it would only mention one of the IPs that disappeared
as both scenarios are oddly common, be more verbose
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.
it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, or API-only users, and all similar escape hatches.
if someone accidentally starts uploading a file in the wrong folder,
it was not obvious that you can forget that upload in the unpost tab
this '(explain)' button in the upload-error hopefully explains that,
and the upload immediately commences when the initial attempt is aborted
on the backend, clean up the dupesched when an upload is
aborted, and save some cpu by adding unique entries only
url-param / header `ck` specifies hashing algo;
md5 sha1 sha256 sha512 b2 blake2 b2s blake2s
value 'no' or blank disables checksumming,
for when copyparty is running on ancient gear
and you don't really care about file integrity
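for example, assuming the param also applies to plain PUT-uploads
(the url, volume, and filename here are made up):

```python
# hypothetical example: disable checksumming for one upload via ?ck=no
import requests

with open("big.bin", "rb") as f:
    r = requests.put("http://127.0.0.1:3923/inc/big.bin?ck=no", data=f)
print(r.status_code, r.text[:80])
```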
shadowing is the act of intentionally blocking off access to
files in a volume by placing another volume atop a file/folder.
say you have volume '/' with a file '/a/b/c/d.txt'; if you create a
volume at '/a/b', then all files/folders inside the original folder
become inaccessible, and are replaced with the contents of the new vol
the initial code for forgetting shadowed files from the parent vol
database would only forget files which were discovered during a
filesystem scan; any uploaded files would be intentionally preserved
in the parent volume's database, probably to avoid losing uploader
info in the event of a brief mistaken config change, where a volume
is shadowed by accident.
this precaution was a mistake, currently causing far more
issues than it solves (#61 and #120), so away it goes.
huge thanks to @Gremious for doing all the legwork on this!
* return 403 instead of 404 in the following situations:
* viewing an RSS feed without necessary auth
* accessing a file with the wrong filekey
* accessing a file/folder without necessary auth
(would previously 404 for intentional ambiguity)
* only allow PROPFIND if user has either read or write;
previously a blank response was returned if user has
get-access, but this could confuse webdav clients into
skipping authentication (for example AuthPass)
* return 401 basic-challenge instead of 403 if the client
appears to be non-graphical, because many webdav clients
do not provide the credentials until they're challenged.
There is a heavy bias towards assuming the client is a
browser, because browsers must NEVER EVER get a 401
(tricky state that is near-impossible to deal with)
* return 401 basic-challenge instead of 403 if a PUT
is attempted without any credentials included; this
should be safe, as graphical browsers never do that
this fixes the interoperability issues mentioned in
https://github.com/authpass/authpass/issues/379
where AuthPass would GET files without providing the
password because it expected a 401 instead of a 403;
AuthPass is behaving correctly, this is not a bug
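the challenge-or-deny decision is roughly this (a simplified sketch;
the browser heuristic shown is illustrative, not the actual check):

```python
# sketch: when access is denied, pick between 401 (challenge) and 403
def deny_status(method, user_agent, sent_credentials):
    is_browser = "mozilla" in (user_agent or "").lower()
    if method == "PUT" and not sent_credentials:
        return 401  # webdav clients wait for a basic-challenge
    if not is_browser:
        return 401  # non-graphical clients get challenged too
    return 403      # browsers must never ever get a 401
```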
a better alternative to using `--no-idx` for this purpose since
this also excludes recent uploads, not just during fs-indexing,
and it doesn't prevent deduplication
also speeds up searches by a tiny amount due to building the
sanchecks into the exclude-filter while parsing the config,
instead of during each search query
* if free ram on startup is less than 2 GiB,
use smaller chunks for parallel file hashing
* if --th-max-ram is lower than 0.25 (256 MiB),
print a warning that thumbnails will not work
* make thumbnail cleaner immediately do a sweep on startup,
forgetting any failed conversions so they can be retried
in case the memory limit was increased since last run
whenever a new idp user is registered, up2k will continuously
reload in the background until all users have been processed
just like before, this blocks up2k uploads from each user
until said user makes it into a reload, but as of now,
reloads will batch and execute without interrupting read-access
needs further testing before next release,
probably some rough edges to sand down
show a spinning halfcircle around the +/- instead of
moving the focus to the selected folder in the sidebar,
since that could mess with keyboard scrolling
an extremely brutish workaround for issues such as #110 where
browsers receive an HTTP 304 and misinterpret as HTTP 200
option `--no304=1` adds the button `no304` to the controlpanel
which can be enabled to force-disable caching in that browser
the button is default-disabled; by specifying `--no304=2`
instead of `--no304=1` the button becomes default-enabled
can also always be enabled by accessing `/?setck=no304=y`
these response headers are usually not included in 304 replies,
and their presence is suspected to confuse some clients (#110)
also strip `out_headerlist` (primarily cookie assignments)
add support for the `If-Range` header which is generally used to
prevent resuming a partial download after the source file on the
server has been modified, by returning HTTP 200 instead of a 206
also simplifies `If-Modified-Since` and `If-Range` handling;
previously this was a spec-compliant lexical comparison,
now it's a basic string-comparison instead. The server will now
reply 200 also when the server mtime is older than the client's.
This is technically not according to spec, but should be safer,
as it allows backdating timestamps without purging client cache
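conceptually the new check is just this (sketch):

```python
# sketch: If-Range / If-Modified-Since as a plain string comparison
def is_unmodified(client_value, file_mtime_httpdate):
    # any mismatch (even an older server mtime) yields a full 200,
    # so backdating a timestamp also invalidates client caches
    return bool(client_value) and client_value == file_mtime_httpdate
```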
previously, it only accepted the 3-tuple `min,default,max`
if given a single integer (or any other unexpected value),
the up2k js would enter an infinite loop, eat all the ram
and crash the browser (nice)
fix this by accepting a single integer (for example 96)
and translating it to `1,96,96`
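the fix amounts to something like this (sketch; the fallback value
for garbage input is illustrative):

```python
# sketch: normalize --u2sz so a bare integer becomes min,default,max
def parse_u2sz(v):
    parts = [x for x in str(v).split(",") if x]
    try:
        if len(parts) == 3:
            return [int(x) for x in parts]
        n = int(parts[0])
    except (IndexError, ValueError):
        n = 96  # unexpected value; pick a sane default instead of looping
    return [1, n, n]  # e.g. "96" becomes 1,96,96
```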
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT
strip components until the path is an existing directory
and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
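the path-stripping is essentially (sketch):

```python
# sketch: walk up the requested path until an existing directory is found
import os

def nearest_existing_dir(abspath):
    p = os.path.dirname(abspath)
    while p and not os.path.isdir(p):
        p = os.path.dirname(p)
    return p  # stat/df this one instead of the nonexistent file
```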
the first time a file is to be hashed after a website refresh,
a set of webworkers are launched for efficient parallelization
in the unlikely event of a network outage exactly at this point,
the workers will fail to start, and the hashing would never begin
add a ping/pong sequence to smoketest the workers, and
fallback to hashing on the main-thread when necessary
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
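conceptually (sketch; the real splitting happens during the up2k handshake):

```python
# sketch: an oversized chunk is uploaded as sequential chunklets
def chunklets(chunk_size, max_size):
    ofs = 0
    while ofs < chunk_size:
        yield ofs, min(max_size, chunk_size - ofs)  # (offset, length)
        ofs += max_size

# e.g. a 160 MiB chunk with max-size 96 MiB becomes 96 MiB + 64 MiB
print([n >> 20 for _, n in chunklets(160 << 20, 96 << 20)])  # [96, 64]
```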
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk
however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic
as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload dupe files
can be useful when `--safe-dedup=1` is not an option due to other
software tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
previously, only real folders could be listed by a webdav client;
a server which does not have any filesystem paths mapped to `/`
would cause clients to panic when trying to list the server root
now, assuming volumes `/foo` and `/bar/qux` exist, when accessing `/`
the user will see `/foo` but not `/bar` due to limitations in `walk`,
and `qux` will only appear when viewing `/bar`
a future rework of the recursion logic should further improve this
drop chunk-hashes in the up2k snap, plus other insignificant attribs
to reduce both the snapfile size and the ram usage by about 90%
reduces startup/shutdown time by a lot since there's less to serdes
(does not affect -e2d which was already optimal)
other changes:
* improve incoming-eta accuracy when the initial handshake
was made a long time before the upload actually started
* move the list of incoming files in the controlpanel to the top
* do not absreal paths unless necessary
* do not determine username if no users configured
* impacket 0.12 fixed the foldersize limit, but now
you get extremely poor performance in large folders
so the previous workaround is still default-enabled
* pyz: yeet the resource tar which is now pointless thanks to pkgres
* cache impresource stuff because pyz lookups are Extremely slow
* prefer tx_file when possible for slightly better performance
* use hardcoded list of expected resources instead of dynamic
discovery at runtime; much simpler and probably safer
* fix some forgotten resources (copying.txt, insecure.pem)
* fix loading jinja templates on windows
add support for reading webdeps and jinja-templates using either
importlib_resources or pkg_resources, which removes the need for
extracting these to a temporary folder on the filesystem
* util: add helper functions to abstract embedded resource access
* http*: serve embedded resources through resource abstraction
* main: check webdeps through resource abstraction
* httpconn: remove unused method `respath(name)`
* use __package__ to find package resources
* util: use importlib_resources backport if available
* pass E.pkg as module object for importlib_resources compatibility
* util: add pkg_resources compatibility to resource abstraction
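the abstraction described above is roughly this shape (sketch;
function and parameter names here are mine, not the actual util.py helpers):

```python
# sketch: read an embedded resource via importlib.resources when possible,
# falling back to the pkg_resources api on older environments
def load_resource(pkg, name):
    try:
        try:
            from importlib.resources import files  # py3.9+
        except ImportError:
            from importlib_resources import files  # pypi backport
        return files(pkg).joinpath(name).read_bytes()
    except ImportError:
        import pkg_resources
        return pkg_resources.resource_string(pkg.__name__, name)
```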
* show media tags in shares
* html hydrator assumed a folder named `foo.txt` was a doc
* due to sessions, use `pwd` as password placeholder on services
* exponentially slow upload handshakes caused by lack of rd+fn
sqlite index; became apparent after a volume hit 200k files
* listing big folders 5% faster due to `_quotep3b`
* optimize `unquote`, 20% faster but only used rarely
* reindex on startup 150x faster in some rare cases
(same filename in MANY folders)
the database is now around 10% larger (likely worst-case)
reduce the overhead of function-calls from the client thread
to the svchub singletons (up2k, thumbs, metrics) down to 14%
and optimize up2k chunk-receiver to spend 5x less time bookkeeping
which restores up2k performance to before introducing incoming-ETA
dedup is still encouraged and fully supported, but
being default-enabled has caused too many surprises
enabling `--dedup` restores the previous default behavior
also renames `--never-symlink` to `--hardlink-only`
symlinks between volumes will only be created if xlink is
enabled, so such symlinks should be ignored if xlink is
disabled, as they might originate from other software
this prevents accidental rewriting of non-dedup symlinks
if --no-dedup was enabled in a volume which already contained
symlinked duplicate files, renaming/moving folders could fail
this is due to folder contents being moved one file at a time
(which is how symlink breakage is prevented) except the links
are moved assuming the final directory layout, meaning they
may be intermittently broken during the move
with no-dedup, the symlinks are converted into full files as
each symlink is encountered, but a temporarily broken symlink
would crash the procedure
fix this by giving `_symlink` a new parameter `fsrc`
which is a known valid inode for data copying purposes
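very roughly the idea (hedged sketch; the real `_symlink` has a
different signature and handles more cases):

```python
# sketch: when a link must be materialized as a full file (no-dedup)
# but its target is temporarily broken mid-move, copy the data from
# a known valid inode (fsrc) instead of crashing
import os, shutil

def materialize(link_path, dst_path, fsrc):
    target = os.path.realpath(link_path)
    src = target if os.path.exists(target) else fsrc
    shutil.copy2(src, dst_path)
```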
previously, the assumption was made that the database and filesystem
would not desync, and that an upload could safely be substituted with
a symlink to an existing copy on-disk, assuming said copy still
existed on-disk at all
this is fine if copyparty is the only software that makes changes to
the filesystem, but that is a shitty assumption to make in hindsight
add `--safe-dedup` which takes a "safety level", and by default (50)
it will no longer blindly expect that the filesystem has not been
altered through other means; the file contents will now be hashed
and compared to the database
deduplication can be much slower as a result, but definitely worth it
as this avoids some potentially very unpleasant surprises
the previous behavior can be restored with `--safe-dedup 1`
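in effect, the dedup decision goes from trusting the db to something
like this (sketch; parameter names and the threshold handling are illustrative):

```python
# sketch: verify the existing copy before symlinking an upload to it
import os

def can_dedup(existing_path, db_size, db_hash, hash_file, safe_dedup=50):
    if safe_dedup <= 1:
        return True  # old behavior: trust the database blindly
    try:
        if os.path.getsize(existing_path) != db_size:
            return False
        return hash_file(existing_path) == db_hash
    except OSError:
        return False  # the copy is gone; upload the file for real
```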
timezone can be changed with `export TZ=Europe/Oslo` before launch
using naive timestamps like this appears to be safe as of 3.13-rc1,
no deprecation warnings, just a tiny bit slower than assuming UTC
due to deduplication, it is intentionally impossible to
upload several identical copies of a file in parallel
by default, the up2k client will upload files sorted by
size, which usually leads to dupes being grouped together,
so it will often end up attempting exactly that (parallel dupes)
this is by design, as it improves performance on average,
but it also shows the confusing (but technically-correct)
message "resume the partial upload into the original path"
fix this with a more appropriate message
note that this approach was selected in favor of pausing
handshakes while the initial copy finishes uploading,
because that could severely reduce upload performance
by preventing optimal use of multiple connections
* v1.13.8 broke collision resolving for non-identical files;
the correct filename was reserved but not symlinked to
the original file, leaving a zerobyte file instead.
See v1.14.3 github release notes for remediation info
* add sanchecks for early detection of index/fs desync;
saves performance and gives less confusing logs
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files
if a single file is selected for sharing, then
the returned URL will point directly to that file
and fix some shares-related bugs:
* password coalescing
* log-spam on reload
* fix: translation: changing from `" "` to `' '` for some strings;
using `./scripts/tlcheck.sh eng chi copyparty/web/browser.js`
* fix: translation: Check the newly added Chinese translation
<daniiooo> also iirc some time ago we were talking about the scroll for volume ed
<daniiooo> and how its reversed
<ed> is it reversed though? most people said it worked the way they expected
<daniiooo> fuck maybe i agreed back then too
<daniiooo> its the opposite in both aimp and mpv though
<ed> is it w
<tatsu> its a feature
<Devices> it's to keep you on your toes
<Devices> consciously use copyparty
<ed> i can invert it no problem
<ed> would be a nice surprise for anyone who's used it
<Flaminator> Scroll down turns the audio down right?
<daniiooo> ye it makes it louder in cpp
<Devices> why would scrolling down make something louder
<Vin> yeah that's odd
<Vin> scrolling up should make it louder
<Flaminator> It's what it does for me in winamp, mpc-hc and foobar2000.
<daniiooo> so now the question is who itc agreed to whats currently in cpp
<daniiooo> haha
<ed> idk but i'm inverting it
<ed> let's invert it every 6 months
* navpane would always feed the vproxy paths into the tree
instead of only when necessary (the initial load)
* mkdir would return `X-New-Dir` without the `rp-loc` prefix
* chpw and some other redirects also sent raw vpaths
Reported-by: @iridial
* wark landed in the wrong registry when moved to another volume
(harmless; upload would succeed on the next handshake)
* dedup did not apply correctly when moved into another volume,
since all the checks were done based on the previous vol;
fix this by recursing the whole thing
also update the reloc example after some real-world experience
Reported-by: @daniiooo
* support x-forwarded-for
* option to specify socket permissions and group
* in containers, avoid collision during restart
* add --help-bind with examples
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands
mainly to mitigate limitations such as sharex/sharex#3992
xbr/xau can redirect uploads to other destinations with `reloc`
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`
there are limitations;
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k
also provides more details for hooks:
* xbu/xau: basic-uploader vpath with filename
* xbr/xar: add client ip
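as a rough illustration, an upload hook could look something like this
(hypothetical; the json keys are the ones mentioned above, but check
`--help-hooks` for the exact schema and required hook flags):

```python
#!/usr/bin/env python3
# hypothetical xau hook: redirect the upload and ask the server
# to index an extra vpath by printing json commands on stdout
import json, sys

ev = json.loads(sys.argv[-1])  # upload info (passed as an arg with the j flag)
print(json.dumps({
    "reloc": {"vp": "/staging"},     # hypothetical destination
    "idx": ["/staging/notes.txt"],   # vpaths for the server to index
}))
```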
v1.13.5 made some proxies angry with its massive chunklists
when stitching chunks, only list the first chunk hash in full,
and include a truncated hash for the consecutive chunks
should be enough for logfiles to make sense
and to smoketest that clients are behaving
compile to bytecode so cpython doesn't have to keep it in memory
ram usage reduced by:
* min: 5.4 MiB (32.6 to 27.2)
* ac/im: 5.2 MiB (39.0 to 33.8)
* dj/iv: 10.6 MiB (67.3 to 56.7)
startup time reduced from:
* min: 1.3s to 0.6s
* ac/im: 1.6s to 0.9s
* dj/iv: 2.0s to 1.1s
image size increased by 4 MiB (min), 6 MiB (ac/im/iv), 9 MiB (dj)
ram usage measured on idle with:
while true; do ps aux | grep -E 'R[S]S|no[-]crt'; read -n1; echo; done
startup time measured with:
time podman run --rm -it localhost/copyparty-min-amd64 --exit=idx
in the event that an upload chunk gets stuck, the js would
never stop waiting for a response, requiring a page reload
improves reliability when running behind a reverse-proxy
which is configured to never timeout requests (can make
sense when combined with other services on the same box)
with overflow:auto, firefox picks the div-width before estimating
the height, causing it to undershoot by the scrollbar width
and then messing up the text alignment
fix: conditionally set overflow-y:scroll using js
* wait until page (au) has loaded to register hotkeys
* hotkey `m` would grow sidebar if tree was minimized
* more exact warning about num.parallel uploads
* keep more console logs in memory
* message phrasing
audio extraction happens serverside to opus or mp3
depending on browser support
remuxing (extracting audio without transcoding)
is currently not supported, and is not planned
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East, `1.6x` from US-West to Finland )
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body
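for illustration, a stitched POST looks roughly like this (hedged sketch;
the url and any other required headers are omitted / made up):

```python
# sketch: upload three sibling chunks in a single POST by listing
# their hashes comma-separated in x-up2k-hash
import requests

CHUNK = 64 << 20                      # example chunksize
hashes = ["hash0", "hash1", "hash2"]  # three consecutive chunks
ofs = 0                               # file-offset of the first one

with open("big.bin", "rb") as f:
    f.seek(ofs)
    body = f.read(len(hashes) * CHUNK)  # one contiguous range of data

r = requests.post("http://127.0.0.1:3923/", data=body,
                  headers={"x-up2k-hash": ",".join(hashes)})
```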
a new global-option `--u2sz`, default `1,64,96`, sets the target
request size as 64 MiB, allowing the settings ui to specify any
value between 1 and 96 MiB, which is cloudflare's max value
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk will be verified and written
to disk as they arrive, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia,US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
the window scaling then immediately resets when the request has been sent;
connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
hooks can be restricted to users with certain permissions, for example
`--xm aw,notify-send` will only `notify-send` if user has write-access
the user's list of permissions is now also included in the json
that is passed to the hook if enabled; `--xm aw,j,notify-send`
hook-flag parsing will now also stop when encountering a blank value,
making it possible to specify initial arguments to the command:
`--xm aw,j,,notify-send,hey` would run `notify-send` with `hey`
as its first argument, and the json would be the 2nd argument,
similarly `--xm ,notify-send,hey` when no flags specified
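the flag parsing amounts to roughly this (sketch;
the set of known flags is abbreviated):

```python
# sketch: split a hook definition into flags and the command itself;
# a blank field ends the flag list so the command can take arguments
def split_hook(value, known_flags=("aw", "j")):
    parts = value.split(",")
    flags = []
    for i, p in enumerate(parts):
        if not p:
            return flags, parts[i + 1:]  # blank value: the rest is the command
        if p in known_flags:
            flags.append(p)
        else:
            return flags, parts[i:]      # first non-flag starts the command
    return flags, []

print(split_hook("aw,j,,notify-send,hey"))  # (['aw', 'j'], ['notify-send', 'hey'])
print(split_hook(",notify-send,hey"))       # ([], ['notify-send', 'hey'])
```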
this is somewhat explained in `--help-hooks`, but
additional related features are planned in the near future
and will all be better documented when the dust settles
if an ftp client tried to list the toplevel folder on a server
where nothing is mounted toplevel, it would synthesize a
directory listing which included all volumes, even those
which the user would not be able to access
so basically not a problem, just very confusing
use the mtime of the file that was used to produce the folder
thumbnail (rather than the mtime of the folder itself), since the
folder-thumb is always resolved to the file's thumb in the on-disk cache
if a request body is expected, but request has no content-length,
set the timeout to 1/20 of `--s-tbody`, so 9 seconds by default,
or 3 seconds if it's 60 as recommended in helptext
this gives less confusing behavior if a client accidentally does
something invalid; an error response is now returned well before
the previous timeout of 186 seconds would have kicked in
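in other words (sketch, assuming the default `--s-tbody` of 186):

```python
# sketch: derive the no-content-length timeout from --s-tbody
def body_timeout(s_tbody=186):
    return max(1, s_tbody // 20)  # 186 -> 9 seconds, 60 -> 3 seconds
```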
also raise the slowloris flag, in case a client bugs out and
keeps making such requests
if a song fails to play for some reason (network loss,
corrupt file), a timer plays the next track after 5s
the timer was not cancelled if the user
started another track in the meantime