* wait until page (au) has loaded to register hotkeys
* hotkey `m` would grow sidebar if tree was minimized
* more exact warning about the number of parallel uploads
* keep more console logs in memory
* message phrasing
audio extraction happens serverside, transcoding to opus or mp3
depending on browser support
remuxing (extracting the audio track without transcoding)
is currently not supported, and is not planned
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East, `1.6x` from US-West to Finland )
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body
a new global-option `--u2sz`, default `1,64,96` (min, target, max),
sets the target request size to 64 MiB, and allows the settings ui
to specify any value between 1 and 96 MiB, cloudflare's max request size
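a minimal sketch of what a fused request might look like, using only the stdlib; the endpoint and hashes are hypothetical placeholders, only the `x-up2k-hash` header format is from the text:

```python
import http.client

# three sibling chunks forming one consecutive range of file data;
# hashes and URL are made-up placeholders
chunks = [b"chunk-1-data", b"chunk-2-data", b"chunk-3-data"]
hashes = ["aaaa", "bbbb", "cccc"]

conn = http.client.HTTPConnection("example.com", 3923)
conn.request(
    "POST",
    "/some/file.bin",  # hypothetical; the real URL comes from the handshake
    body=b"".join(chunks),
    headers={"x-up2k-hash": ",".join(hashes)},
)
print(conn.getresponse().status)
```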
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk is verified and written
to disk as it arrives, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia,US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
the window scale then immediately resets when the request has been
sent; connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
hooks can be restricted to users with certain permissions, for example
`--xm aw,notify-send` will only `notify-send` if user has write-access
the user's list of permissions is now also included in the json
that is passed to the hook if enabled (`--xm aw,j,notify-send`);
flag parsing will now also stop when encountering a blank value,
allowing you to specify any initial arguments to the command:
`--xm aw,j,,notify-send,hey` would run `notify-send` with `hey`
as its first argument, and the json would be the 2nd argument,
similarly, `--xm ,notify-send,hey` when no flags are specified
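for example, a tiny hook wired up with `--xm aw,j,,python3,hook.py,hey` might look like this sketch; the json's exact keys are not specified above, so `perms` below is an assumption:

```python
#!/usr/bin/env python3
# hook.py -- argv[1] is "hey" (our initial argument),
# argv[2] is the json event since the `j` flag was given
import json
import sys

first_arg = sys.argv[1]
ev = json.loads(sys.argv[2])
# "perms" is an assumed key name for the user's permission list
print(first_arg, ev.get("perms"))
```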
this is somewhat explained in `--help-hooks`, but
additional related features are planned in the near future
and will all be better documented when the dust settles
if an ftp client tried to list the toplevel folder on a server
where nothing is mounted toplevel, it would synthesize a
directory listing which included all volumes, even those
which the user would not be able to access
so basically not a problem, just very confusing
bump the mtime of the file that was used to produce the folder
thumbnail (rather than the folder itself) since the folder-thumb is
always resolved to the file's thumb in the on-disk cache
if a request body is expected, but the request has no content-length,
set the timeout to 1/20 of `--s-tbody`, so 9 seconds by default,
or 3 seconds if it's 60 as recommended in the helptext
this gives less confusing behavior if a client accidentally does
something invalid, replying with an error response instead of
waiting out the previous timeout of 186 seconds
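in other words, the new timeout falls straight out of the `--s-tbody` value:

```python
# the shortened no-content-length timeout, derived from --s-tbody
s_tbody = 180            # the default value of --s-tbody
print(s_tbody / 20)      # 9.0 -> seconds to wait for a missing body
print(60 / 20)           # 3.0 -> with the helptext-recommended value of 60
```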
also raise the slowloris flag, in case a client bugs out and
keeps making such requests
if a song fails to play for some reason (network loss,
corrupt file), a timer plays the next track after 5s
the timer was not cancelled if the user
started another track in the meantime
the spec doesn't say what you're supposed to do if the target filename of an upload is already taken, but overwriting seems to be the most common behavior on other ftp servers, and is required by windows 2000 (otherwise it'll freak out, issue a delete, and then not actually upload the file, nice)
new option `--ftp-no-ow` restores old default behavior of rejecting upload if target filename exists
was intentionally skipped to avoid complexity but enough people have
asked why it doesn't work that it's time to do something about it
turns out it wasn't that bad
* upgrade to partftpy 0.4.0
* workarounds for buggy clients/servers
* improved ipv6 support, especially on macos
* improved robustness on unreliable networks
* make `--tftp4` separate from `--ftp4`
only keep the characters `>+-*` if there are fewer than three of them,
and discard the entire prefix if there are more
the markdown spec only cares about exactly-one or three-or-more, but
let's keep pairs in case anyone uses those as unconventional markup
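the rule as a sketch, assuming a line-based interface (the real implementation may differ):

```python
import re

def clean_prefix(line: str) -> str:
    m = re.match(r"[>+*-]+", line)
    if not m or len(m.group(0)) < 3:
        return line              # none, one, or two marker chars: keep
    return line[m.end():]        # three or more: discard the whole prefix

print(clean_prefix(">> quoted"))    # kept as-is
print(clean_prefix(">>>+* spam"))   # prefix stripped
```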
when there was more than ~700 active connections,
* sendfile (non-https downloads) could fail
* mdns and ssdp could fail to reinitialize on network changes
...because `select` can't handle FDs higher than 512 on windows
(1024 on linux/macos), so prefer `poll` where possible (linux/macos)
but apple keeps breaking and unbreaking `poll` in macos,
so use `--no-poll` if necessary to force `select` instead
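the gist of it in stdlib terms, as a sketch (`--no-poll` would force the `select` branch):

```python
import select
import socket

sock = socket.socket()
sock.bind(("127.0.0.1", 0))
sock.listen()

# prefer poll where available (linux/macos); windows only has select,
# whose FD ceiling is what caused the failures described above
if hasattr(select, "poll"):
    p = select.poll()
    p.register(sock.fileno(), select.POLLIN)
    ready = p.poll(100)                          # timeout in milliseconds
else:
    ready, _, _ = select.select([sock], [], [], 0.1)
print(ready)
```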
metadata is no longer discarded when transcoding to opus or mp3;
this was a good idea back when the transcodes were only used by
the webplayer, but now that folders can be batch-downloaded with
on-the-fly transcoding, it makes sense to keep most of the tags
individual tags are discarded if their value exceeds 1023 characters
this should mainly affect the following:
* traktor beatmaps, size usually somewhere around 100 KiB
* non-standard cover-art embeddings, size around 250 KiB
* XMP (project data from adobe premiere), around 48 KiB
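the cutoff as a sketch, assuming the tags land in a plain dict (the dict shape is illustrative):

```python
# drop any tag whose value is longer than 1023 characters
tags = {
    "title": "some song",
    "traktor_beatmap": "x" * 100_000,   # would be discarded
}
tags = {k: v for k, v in tags.items() if len(str(v)) <= 1023}
print(sorted(tags))                     # ['title']
```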
use sigmasks to block SIGINT, SIGTERM, SIGUSR1 in all other threads
also initiate shutdown by calling the sighandler directly,
in case the sigmask approach misses anything and shutdown
is still unreliable
(discovered by `--exit=idx` being a noop once in a blue moon)
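one way to get that effect in stdlib python (a unix-only sketch; copyparty's actual plumbing differs):

```python
import signal
import threading

SIGS = {signal.SIGINT, signal.SIGTERM, signal.SIGUSR1}

def worker():
    ...  # never sees the signals; the mask was inherited at spawn

# block before spawning so child threads inherit the mask,
# then unblock so the main thread is the sole receiver
signal.pthread_sigmask(signal.SIG_BLOCK, SIGS)
threading.Thread(target=worker, daemon=True).start()
signal.pthread_sigmask(signal.SIG_UNBLOCK, SIGS)
```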
* template-based title formatting
* picture embeds are no longer ant-sized
* `--og-color` sets accent color; default #333
* `--og-s-title` forces default title, ignoring e2t
* add a music indicator to song titles because discord doesn't
currently only being used to workaround discord discarding
query strings in opengraph tags, but i'm sure there will be
plenty more wonderful usecases for this atrocity
if a given filesystem were to disappear (e.g. removable storage)
followed by another filesystem appearing at the same location,
this would not get noticed by up2k in a timely manner
fix this by discarding the mtab cache after `--mtab-age` seconds and
rebuilding it from scratch, unless the previous values are definitely
correct (as indicated by identical output from `/bin/mount`)
probably reduces windows performance by an acceptable amount
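roughly the caching logic, as a sketch with hypothetical helper names:

```python
import subprocess
import time

MTAB_AGE = 60  # stand-in for the --mtab-age value, in seconds
_cache = {"ts": 0.0, "raw": b"", "mtab": None}

def parse_mtab(raw: bytes):          # hypothetical full (re)parse
    return raw.decode().splitlines()

def get_mtab():
    now = time.time()
    if _cache["mtab"] is not None and now - _cache["ts"] <= MTAB_AGE:
        return _cache["mtab"]        # cache still fresh
    raw = subprocess.check_output(["/bin/mount"])
    if raw != _cache["raw"]:         # the filesystems actually changed
        _cache["mtab"] = parse_mtab(raw)
    _cache["ts"], _cache["raw"] = now, raw
    return _cache["mtab"]
```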
* hasher thread could die if a client rapidly
  uploaded and deleted files (so very unlikely)
* two unprotected calls to register_vpath, which were
  almost-definitely safe because the volumes
  already existed in the registry
adds options `--bauth-last` to lower the preference for
taking the basic-auth password in case of conflict,
and `--no-bauth` to entirely disable basic-authentication
if a client is providing multiple passwords, for example when
"logged in" with one password (the `cppwd` cookie) and switching
to another account by also sending a PW header/url-param, then
the default evaluation order to determine which password to use is:
url-param `pw`, header `pw`, basic-auth header, cookie (cppwd/cppws)
so if a client supplies a basic-auth header, it will ignore the cookie
and use the basic-auth password instead, which usually makes sense
but this can become a problem if you have other webservers running
on the same domain which also support basic-authentication
`--bauth-last` is a good choice for cooperating with such services,
as `--no-bauth` currently breaks support for the android app...
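the precedence as a sketch; `bauth_last` mirrors `--bauth-last`:

```python
def pick_password(url_pw, hdr_pw, bauth_pw, cookie_pw, bauth_last=False):
    # default order: url-param, header, basic-auth, cookie
    order = [url_pw, hdr_pw, bauth_pw, cookie_pw]
    if bauth_last:
        # --bauth-last: only fall back to basic-auth as a last resort
        order = [url_pw, hdr_pw, cookie_pw, bauth_pw]
    for pw in order:
        if pw:
            return pw
    return None

# the basic-auth header wins over the cookie by default:
print(pick_password(None, None, "hunter2", "cppwd-pw"))        # hunter2
print(pick_password(None, None, "hunter2", "cppwd-pw", True))  # cppwd-pw
```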
plus misc similar technically-incorrect addq usages;
most of these don't matter in practice since they'll
never get a url with a hash, but fixing them makes the intent clear
and ensures hashes never get passed around
as if they're part of a dirkey, harmless as that is
counterpart of `--s-wr-sz` which existed already
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)
was previously 32 KiB, so large uploads should now use 17% less CPU
also adds sanchecks for values of `--iobuf`, `--s-rd-sz`, `--s-wr-sz`
also adds file-overwrite feature for multipart posts
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)
was previously a mix of 64 and 512 KiB;
now the same value is enforced everywhere
download-as-tar is now 20% faster with the default value
it is now possible to grant access to users other than `${u}`
(the user which the volume belongs to)
previously, permissions did not apply correctly to IdP volumes due to
the way `${u}` and `${g}` were expanded, which was a funky iteration
over all known users/groups instead of... just expanding them?
also adds another sancheck: a volume's URL must contain
`${u}` to be allowed to mention `${u}` in the accs list, and
similarly for `${g}` / `@${g}`, since users can be in multiple groups
the volflags of `/` were used to determine if e2d was enabled,
which is wrong in two ways:
* if there is no `/` volume, it would be globally disabled
* if `/` has e2d, but another volume doesn't, it would
erroneously think unpost was available, which is not an
issue unless that volume used to have e2d enabled AND
there is stale data matching the client's IP
3f05b665 (v1.11.0) had an incomplete fix for the stale-data part of
the above, which also introduced the other issue
this commit partially fixes the following issue:
if a client manages to escape real-ip detection, copyparty will
try to ban the reverse-proxy instead, effectively banning all clients
this can happen if the configuration says to obtain client real-ip
from a cloudflare header, but the server is not configured to reject
connections from non-cloudflare IPs, so a scanner will eventually
hit the server IP with malicious-looking requests and trigger a ban
copyparty will now continue to process requests from banned IPs until
the header has been parsed and the real-ip has been obtained (or not),
at the cost of increased server load from malicious clients
assuming the `--xff-src` and `--xff-hdr` config is correct,
this issue should no longer be hitting innocent clients
the old behavior of immediately rejecting a banned IP address
can be re-enabled with the new option `--early-ban`
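a sketch of the decision, with made-up helpers and an example header name; only `--early-ban` and the xff-header idea come from the text:

```python
BANNED = {"203.0.113.7"}  # example ban list

def handle_conn(conn_ip: str, headers: dict, early_ban: bool = False) -> str:
    if early_ban and conn_ip in BANNED:
        return "rejected"    # old behavior: drop before parsing anything
    # trust the proxy header per --xff-hdr; header name is an example
    real_ip = headers.get("cf-connecting-ip", conn_ip)
    if real_ip in BANNED:
        return "rejected"    # default: the ban applies to the real client
    return "served"
```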
too fraught with subtle dangers, such as other copyparty instances
ending up sharing knowledge of volumes unintentionally, and
configuration becoming mysteriously sticky (not to mention
this would all become hella difficult to reason about)
instead, rely entirely on users seeing the big red warning
added in 2ebfdc25 if their configuration is dangerous
this decision has the drawback that there will be server stuttering
whenever a new user makes themselves known since the last restart,
as it realizes the volumes exist and does the usual e2ds indexing,
instead of doing it early during startup
but it's probably good enough
when switching to another folder with identical filenames, the
mediaplayer would get confused and think it was the same files,
messing up the playback order
to abort an upload, refresh the page and access the unpost tab,
which now includes unfinished uploads (sorted before completed ones)
can be configured through u2abort (global or volflag);
by default it requires both the IP and account to match
https://a.ocv.me/pub/g/nerd-stuff/2024-0310-stoltzekleiven.jpg
running behind cloudflare doesn't necessarily
mean being accessible ONLY through cloudflare
also include a general warning about optimal
configuration for non-cloudflare intermediates
as this option is very rarely useful, add global-option `--k304` to
unhide the button and/or set it default-enabled
the toggle will still appear when the feature was previously enabled by
a client, and the feature is still default-enabled for all IE clients
if a reverse-proxy starts hijacking requests and replying with HTML,
don't panic when it fails to decode as a handshake json
fix this for most other json-expecting gizmos too,
and take the opportunity to cleanup some text formatting
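the defensive parse boils down to something like this sketch:

```python
import json

def parse_handshake(body: bytes):
    try:
        return json.loads(body)
    except ValueError:
        # a hijacking reverse-proxy probably replied with html;
        # surface a readable error instead of a traceback
        snippet = body[:64].decode("utf-8", "replace")
        raise RuntimeError("expected handshake json, got: " + snippet)
```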
this improves performance on s3-backed volumes
noktuas reported on discord that the upload performance was
unexpectedly poor when writing to an s3 bucket through a JuiceFS
fuse-mount, only getting 1.5 MiB/s with copyparty, meanwhile a
regular filecopy averaged 30 MiB/s plus
the issue was that s3 does not support sparse files, so copyparty
would fall back to sequential uploading, and also disable fpool,
causing JuiceFS to repeatedly commit the same 5 MiB range to
the storage provider as each chunk arrived from the client
by forcing use of sparse files, s3 adapters such as JuiceFS and
geesefs will "only" write the entire file to s3 *twice*: initially
it writes the full filesize worth of zerobytes (depending on adapter,
hopefully using gzip compression to reduce the bandwidth necessary),
and then the actual file data in an adapter-specific chunksize
with this volflag, copyparty appears to reach the full expected speed
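the sparse preallocation itself is nearly a one-liner; a sketch:

```python
import os

def prealloc(path: str, size: int) -> None:
    # extend the file without writing data; unwritten ranges read as
    # zeroes, and chunks can then be written at any offset
    with open(path, "wb") as f:
        f.truncate(size)

prealloc("demo.bin", 1 << 30)        # a "1 GiB" file...
print(os.path.getsize("demo.bin"))   # ...occupying ~0 blocks on a real fs
```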
* docker: warn if there are config-files in ~/.config/copyparty
because somebody copied their config into
/cfg/copyparty instead of /cfg as intended
* docker: warn if there are no config-files in an included directory
* make misconfigured reverse-proxies more obvious
* explain cors rejections in server log
* indicate cors rejection in error toast
nothing dangerous, just confusing log messages if an
admin hammers the reload button 100+ times per second,
or another linux process rapidly sends SIGUSR1
`@import url(https://...)` would get rewritten to baseURL + https://...
also reorder the generated csstext so that @imports appear first;
necessary for stuff like googlefonts to take effect
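a sketch of both fixes, with an intentionally simplistic regex:

```python
import re

def rewrite_css(css: str, base: str) -> str:
    def fix(m: "re.Match[str]") -> str:
        url = m.group(1).strip("'\"")
        if re.match(r"(https?:)?//", url):
            return m.group(0)                 # absolute: leave it alone
        return "url(%s%s)" % (base, url)      # relative: prefix the baseURL

    css = re.sub(r"url\(([^)]+)\)", fix, css)
    lines = css.splitlines()
    imports = [x for x in lines if x.startswith("@import")]
    rest = [x for x in lines if not x.startswith("@import")]
    return "\n".join(imports + rest)          # @imports must come first
```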
some reverse-proxies expect plaintext replies, and
we don't have a brotli decompressor to satisfy this
additionally, because brotli is https-gated (thx google),
it was already an impractical mess anyways
the sfx is now 7 KiB larger
chrome crashes if there's more than 2000 unique SVGs on one page, so
there was serverside useragent-sniffing to determine if the icon should
be an svg or a raster
however, since the useragent is not in our `vary` header, cloudflare
wouldn't see the difference and would cache everything equally, meaning
most folders would display a random mix of png and svg thumbnails
move browser detection to the clientside to ensure unique URLs
* if a nic was restarted mid-transfer, the server could crash
* this workaround will probably fix a bunch of similar issues too
* fix resource leak if dualstack fails the ipv4 bind
on phones especially, hitting the end of a folder while playing music
could permanently stop audio playback, because the browser will
revoke playback privileges unless we have a song ready to go...
there's no time to navigate through folders looking for the next file
the preloader will now start jumping through folders ahead of time
some cifs servers cause sqlite to fail in interesting ways; any attempt
to create a table can instantly throw an exception, which results in a
zerobyte database being created. During the next startup, the db would
be determined to be corrupted, and up2k would invoke _backup_db before
deleting and recreating it -- except that sqlite's connection.backup()
will hang indefinitely and deadlock up2k
add a watchdog which fires if it takes longer than 1 minute to open the
database, printing a big warning that the filesystem probably does not
support locking or is otherwise sqlite-incompatible, then writing a
stacktrace of all threads to a textfile in the config directory
(in case this deadlock is due to something completely different),
before finally crashing spectacularly
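the watchdog boils down to something like this sketch (the real one also handles the cleanup described next):

```python
import sys
import threading
import traceback

def db_watchdog(done: threading.Event, dump_path: str) -> None:
    if done.wait(60):
        return                           # db opened in time; nothing to do
    with open(dump_path, "w") as f:      # stacktraces of all threads
        for tid, frame in sys._current_frames().items():
            f.write("thread %d:\n" % tid)
            traceback.print_stack(frame, file=f)
    print("WARNING: filesystem is probably sqlite-incompatible")

done = threading.Event()
threading.Thread(
    target=db_watchdog, args=(done, "stack.txt"), daemon=True
).start()
# ... open the database here, then:
done.set()
```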
additionally, delete the database if the creation fails, which should
prevent the deadlock on the next startup, and combine that with a
message hinting at the filesystem incompatibility
the 1-minute limit may sound excessively gracious, but considering what
some of the copyparty instances out there are running on, it really isn't
this was reported when connecting to a cifs server running alpine
thx to abex on discord for the detailed bug report!
as each chunk is written to the file, httpcli calls
up2k.confirm_chunk to register the chunk as completed, and the reply
indicates whether that was the final outstanding chunk, in which case
httpcli closes the file descriptors since there's nothing more to write
the issue is that the final chunk is registered as completed before the
file descriptors are closed, meaning there could be writes that haven't
finished flushing to disk yet
if the client decides to issue another handshake during this window,
up2k sees that all chunks are complete and calls up2k.finish_upload
even as some threads might still be flushing the final writes to disk
so the conditions to hit this bug were as follows (all must be true):
* multiprocessing is disabled
* there is a reverse-proxy
* a client has several idle connections and reuses one of those
* the server's filesystem is EXTREMELY slow, to the point where
closing a file takes over 30 seconds
the fix is to stop handshakes from being processed while a file is
being closed, which is unfortunately a small bottleneck in that it
prohibits initiating another upload while one is being finalized, but
the required complexity to handle this better is probably not worth it
(a separate mutex for each upload session or something like that)
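schematically, the fix is one shared lock around both operations (a sketch with hypothetical names):

```python
import threading

reg_mutex = threading.Lock()

class Job:
    def __init__(self, total: int):
        self.total = total
        self.done = 0

def confirm_chunk(job: Job, fd) -> bool:
    with reg_mutex:
        job.done += 1
        last = job.done == job.total
        if last:
            fd.close()        # the close happens while holding the lock
        return last

def handshake_complete(job: Job) -> bool:
    with reg_mutex:           # so this can never run mid-close
        return job.done == job.total
```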
this issue is mostly harmless, partially because it is super tricky to
hit (only aware of it happening synthetically), and because there is
usually no harmful consequences; the worst-case is if this were to
happen exactly as the server OS decides to crash, which would make the
file appear to be fully uploaded even though it's missing some data
(all extremely unlikely, but not impossible)
there is no performance impact; if anything it should now accept
new tcp connections slightly faster thanks to more granular locking
if a reverseproxy decides to strip away URL parameters, show an
appropriate error-toast instead of silently entering a bad state
someone on discord ended up in an infinite page-reload loop,
since the js would try to recover by fully navigating to the
requested dir if `?ls` failed -- which wouldn't do any good anyways
if the dir in question is the initial dir to display
videos unloaded correctly when switching between files, but not when
closing the lightbox while playing a video and then clicking another
now, only media within the preload window (+/- 2 from current file)
is kept loaded into DOM, everything else gets ejected, both on
navigation and when closing the lightbox
much more accurate total-ETA when uploading with many connections
and/or uploading huge files to really slow servers
the titlebar % still only counts actually-confirmed bytes,
partially because that makes sense, partially because
that's what happened by accident
features which should be good to go:
* user groups
* assigning permissions by group
* dynamically created volumes based on username/groupname
* rebuild vfs when new users/groups appear
but several important features are still pending:
* detect dangerous configurations
* dynamic vol below readable path
* remember volumes created during previous runs
* helps prevent unintended access
* correct filesystem-scan on startup
* allow mounting `/` (the entire filesystem) as a volume
* not that you should (really, you shouldn't)
* improve `-v` helptext
* change IdP group symbol to @ because % is used for file inclusion
* not technically necessary but is less confusing in docs
window.localStorage was null, so trying to read would fail
seen on falkon 23.08.4 with qtwebengine 5.15.12 (fedora39)
might as well be paranoid about the other failure modes too
(sudden exceptions on reads and/or writes)
* polyfill Set() for gridview (ie9, ie10)
* navpane: do full-page nav if the history api is unusable (ie9)
* show markdown as plaintext if rendering fails (ie*)
* text-editor: hide preview pane if it doesn't work (ie*)
* explicitly hide toasts on close (ie9, ff10)
some clients (clonezilla-webdav) rapidly create and delete files;
this fails if copyparty is still hashing the file (usually the case)
and the same thing can probably happen due to antivirus etc
add global-option --rm-retry (volflag rm_retry) specifying
for how long (and how quickly) to keep retrying the deletion
default: retry for 5sec on windows, 0sec (disabled) on everything else
because this is only a problem on windows
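in essence (a sketch; the actual pacing is what the option configures):

```python
import os
import time

def rm_retry(path: str, max_sec: float = 5.0, pause: float = 0.1) -> None:
    t0 = time.time()
    while True:
        try:
            os.unlink(path)
            return
        except PermissionError:          # typically a hasher or AV scanner
            if time.time() - t0 >= max_sec:
                raise                    # still locked; give up for real
            time.sleep(pause)
```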
* fix crash on keyboard input in modals
* text editor works again (but without markdown preview)
* keyboard hotkeys for the few features that actually work
when a file was reindexed (due to a change in size or last-modified
timestamp) the uploader-IP would get removed, but the upload timestamp
was ported over. This was intentional so there was probably a reason...
new behavior is to keep both uploader-IP and upload timestamp if the
file contents are unchanged (determined by comparing warks), and to
discard both uploader-IP and upload timestamp if that is not the case
* webdav: extend applesan regex with more stuff to exclude
* on macos, set applesan as default `--no-idx` to avoid indexing them
(they didn't show up in search since they're dotfiles, but still)
igloo irc has an absolute time limit of 2 minutes before it just
disconnects mid-upload and that kinda looked like it had a buggy
multipart generator instead of just being funny
anticipating similar events in the future, also log the
client-selected boundary value to eyeball its yoloness
due to all upload APIs invoking up2k.hash_file to index uploads,
the uploads could block during a rescan for a crazy long time
(past most gateway timeouts); now this is mostly fire-and-forget
"mostly" because this also adds a conditional slowdown to
help the hasher churn through if the queue gets too big
worst case, if the server is restarted before it catches up, this
would rely on filesystem reindexing to eventually index the files
after a restart or on a schedule, meaning uploader info would be
lost on shutdown, but this is usually fine anyways (and this was
also the case until now)
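the handoff plus the conditional slowdown, as a sketch (the threshold is illustrative):

```python
import queue
import time

hash_q: "queue.Queue[str]" = queue.Queue()

def on_upload_complete(abspath: str) -> None:
    hash_q.put(abspath)          # fire-and-forget; the reply isn't blocked
    if hash_q.qsize() > 1024:    # queue growing faster than it drains;
        time.sleep(0.1)          # slow the client down a little
```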
primarily to support uploading from Igloo IRC but also generally useful
(not actually tested with Igloo IRC yet because it's a paid feature
so just gonna wait for spiky to wake up and tell me it didn't work)
* make gen_tree 0.1% faster
* improve filekey warning message
* fix oversight in 0c50ea1757
* support `--xdev` on windows (the python docs mention that os.scandir
doesn't assign st_ino, st_dev and st_nlink on win but i can't read)
* permission `.` grants dotfile visibility if user has `r` too
* `-ed` will grant dotfiles to all `r` accounts (same as before)
* volflag `dots` likewise
also drops compatibility for pre-0.12.0 `-v` syntax
(`-v .::red` will no longer translate to `-v .::r,ed`)
when moving/deleting a file, all symlinked dupes are verified to ensure
the action does not break any symlinks; however, this was done by
checking the realpath of each link, which was not good enough, since
the deleted file may be part of a series of nested symlinks
this situation occurs because the deduper tries to keep relative
symlinks as close as possible, only traversing into parent/sibling
folders as required, which can lead to several levels of nested links
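the safer check walks the chain one hop at a time instead of jumping straight to the realpath; a sketch:

```python
import os

def link_chain(path: str) -> list:
    # every intermediate hop matters: deleting any of them breaks the chain
    hops = [path]
    while os.path.islink(path):
        tgt = os.readlink(path)
        if not os.path.isabs(tgt):
            tgt = os.path.join(os.path.dirname(path), tgt)
        path = os.path.normpath(tgt)
        hops.append(path)
    return hops
```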