`@import url(https://...)` would incorrectly get rewritten to baseURL + https://...
also reorder the generated csstext so that @imports appear first;
necessary for stuff like googlefonts to take effect
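the logic boils down to something like this python sketch (not the actual copyparty code; the regex and function name are made up for illustration):

```python
import re

# naive matcher for rules like  @import url(foo.css);
RE_IMPORT = re.compile(r"@import\s+url\(([^)]+)\)[^;]*;")

def rewrite_css(csstext: str, base_url: str) -> str:
    """prefix only *relative* @import targets, then hoist all @imports to the top"""
    imports = []

    def repl(m: re.Match) -> str:
        url = m.group(1).strip("'\" ")
        if not url.startswith(("http://", "https://", "//", "/")):
            url = base_url + url  # absolute URLs are left alone
        imports.append("@import url(%s);" % (url,))
        return ""  # removed here, re-added at the top below

    body = RE_IMPORT.sub(repl, csstext)
    return "\n".join(imports) + "\n" + body
```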
some reverse-proxies expect plaintext replies, and
we don't have a brotli decompressor to satisfy this
additionally, because brotli is https-gated (thx google),
it was already an impractical mess anyways
the sfx is now 7 KiB larger
chrome crashes if there are more than 2000 unique SVGs on one page, so
there was serverside useragent-sniffing to determine if the icon should
be an svg or a raster
however, since the useragent is not in our Vary header, cloudflare wouldn't see
the difference and cache everything equally, meaning most folders would
display a random mix of png and svg thumbnails
move browser detection to the clientside to ensure unique URLs
* if a nic was restarted mid-transfer, the server could crash
* this workaround will probably fix a bunch of similar issues too
* fix resource leak if dualstack fails the ipv4 bind
on phones especially, hitting the end of a folder while playing music
could permanently stop audio playback, because the browser will
revoke playback privileges unless we have a song ready to go...
there's no time to navigate through folders looking for the next file
the preloader will now start jumping through folders ahead of time
some cifs servers cause sqlite to fail in interesting ways; any attempt
to create a table can instantly throw an exception, which results in a
zerobyte database being created. During the next startup, the db would
be determined to be corrupted, and up2k would invoke _backup_db before
deleting and recreating it -- except that sqlite's connection.backup()
will hang indefinitely and deadlock up2k
add a watchdog which fires if it takes longer than 1 minute to open the
database, printing a big warning that the filesystem probably does not
support locking or is otherwise sqlite-incompatible, then writing a
stacktrace of all threads to a textfile in the config directory
(in case this deadlock is due to something completely different),
before finally crashing spectacularly
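the watchdog amounts to something like this (a minimal sketch with made-up names; the real thing lives in up2k):

```python
import os
import sys
import threading
import traceback

def start_db_watchdog(cfg_dir: str, timeout: float = 60.0) -> threading.Timer:
    """arm a timer; if it fires, assume the db-open has deadlocked"""

    def alarm() -> None:
        print("WARNING: opening the database took more than a minute;")
        print("  the filesystem probably does not support locking,")
        print("  or is otherwise incompatible with sqlite")

        # dump a stacktrace of every thread, in case the hang is
        # actually caused by something else entirely
        with open(os.path.join(cfg_dir, "deadlock.txt"), "w") as f:
            for tid, frame in sys._current_frames().items():
                f.write("thread %d:\n" % (tid,))
                f.write("".join(traceback.format_stack(frame)) + "\n")

        os._exit(1)  # crash spectacularly

    t = threading.Timer(timeout, alarm)
    t.daemon = True
    t.start()
    return t  # .cancel() this once the db opens fine
```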
additionally, delete the database if its creation fails, which should
prevent the deadlock on the next startup, and combine that with a
message hinting at the filesystem incompatibility
the 1-minute limit may sound excessively gracious, but considering what
some of the copyparty instances out there are running on, it really isn't
this was reported when connecting to a cifs server running alpine
thx to abex on discord for the detailed bug report!
as each chunk is written to the file, httpcli calls
up2k.confirm_chunk to register the chunk as completed, and the reply
indicates whether that was the final outstanding chunk, in which case
httpcli closes the file descriptors since there's nothing more to write
the issue is that the final chunk is registered as completed before the
file descriptors are closed, meaning there could be writes that haven't
finished flushing to disk yet
if the client decides to issue another handshake during this window,
up2k sees that all chunks are complete and calls up2k.finish_upload
even as some threads might still be flushing the final writes to disk
so the conditions to hit this bug were as follows (all must be true):
* multiprocessing is disabled
* there is a reverse-proxy
* a client has several idle connections and reuses one of those
* the server's filesystem is EXTREMELY slow, to the point where
closing a file takes over 30 seconds
the fix is to stop handshakes from being processed while a file is
being closed, which is unfortunately a small bottleneck in that it
prohibits initiating another upload while one is being finalized, but
the required complexity to handle this better is probably not worth it
(a separate mutex for each upload session or something like that)
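conceptually it's just a mutex shared by chunk-finalization and handshake processing; a heavily simplified sketch (illustrative names, not the actual up2k internals):

```python
import threading
from dataclasses import dataclass, field
from typing import BinaryIO, List, Set

# one shared lock; a per-upload lock would remove the bottleneck
# but adds the complexity mentioned above
finalize_mtx = threading.Lock()

@dataclass
class Job:
    all_chunks: Set[str]
    done: Set[str] = field(default_factory=set)
    fds: List[BinaryIO] = field(default_factory=list)

def confirm_chunk(job: Job, chunk_id: str) -> None:
    """called after a chunk has been written to disk"""
    with finalize_mtx:
        job.done.add(chunk_id)
        if job.done == job.all_chunks:
            # close (and flush) the file descriptors *before*
            # any handshake can observe the upload as complete
            for fd in job.fds:
                fd.close()
            job.fds.clear()

def handshake(job: Job) -> bool:
    """True only when the upload is fully finalized on disk"""
    with finalize_mtx:
        return job.done == job.all_chunks and not job.fds
```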
this issue is mostly harmless, partially because it is super tricky to
hit (only aware of it happening synthetically), and partially because
there are usually no harmful consequences; the worst-case is if this were to
happen exactly as the server OS decides to crash, which would make the
file appear to be fully uploaded even though it's missing some data
(all extremely unlikely, but not impossible)
there is no performance impact; if anything it should now accept
new tcp connections slightly faster thanks to more granular locking
if a reverseproxy decides to strip away URL parameters, show an
appropriate error-toast instead of silently entering a bad state
someone on discord ended up in an infinite page-reload loop
since the js would try to recover by fully navigating to the
requested dir if `?ls` failed, which wouldn't do any good anyways
if the dir in question is the initial dir to display
videos unloaded correctly when switching between files, but not when
closing the lightbox while playing a video and then clicking another
now, only media within the preload window (+/- 2 from current file)
is kept loaded into DOM, everything else gets ejected, both on
navigation and when closing the lightbox
much more accurate total-ETA when uploading with many connections
and/or uploading huge files to really slow servers
the titlebar % still only counts actually-confirmed bytes,
partially because that makes sense, partially because
that's what happened by accident
features which should be good to go:
* user groups
* assigning permissions by group
* dynamically created volumes based on username/groupname
* rebuild vfs when new users/groups appear
but several important features are still pending:
* detect dangerous configurations
* dynamic vol below readable path
* remember volumes created during previous runs
* helps prevent unintended access
* correct filesystem-scan on startup
* allow mounting `/` (the entire filesystem) as a volume
* not that you should (really, you shouldn't)
* improve `-v` helptext
* change IdP group symbol to @ because % is used for file inclusion
* not technically necessary, but less confusing in the docs
window.localStorage was null, so trying to read would fail
seen on falkon 23.08.4 with qtwebengine 5.15.12 (fedora39)
might as well be paranoid about the other failure modes too
(sudden exceptions on reads and/or writes)
* polyfill Set() for gridview (ie9, ie10)
* navpane: do full-page nav if history api is ng (ie9)
* show markdown as plaintext if rendering fails (ie*)
* text-editor: hide preview pane if it doesn't work (ie*)
* explicitly hide toasts on close (ie9, ff10)
some clients (clonezilla-webdav) rapidly create and delete files;
the delete fails if copyparty is still hashing the file (usually the case),
and the same thing can probably happen due to antivirus etc
add global-option --rm-retry (volflag rm_retry) specifying
for how long (and how quickly) to keep retrying the deletion
default: retry for 5sec on windows, 0sec (disabled) on everything else
because this is only a problem on windows
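the retry itself is just a deadline loop, something like this (a sketch; the duration and interval are assumed to arrive as two floats):

```python
import os
import time

def rm_with_retry(path: str, max_sec: float, interval_sec: float = 0.5) -> None:
    """keep retrying the delete for up to max_sec (0 = only try once)"""
    deadline = time.time() + max_sec
    while True:
        try:
            os.unlink(path)
            return
        except OSError:
            # windows refuses while another process (copyparty's own
            # hasher, antivirus, ...) still has the file open
            if time.time() >= deadline:
                raise
            time.sleep(interval_sec)
```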
* fix crash on keyboard input in modals
* text editor works again (but without markdown preview)
* keyboard hotkeys for the few features that actually work
when a file was reindexed (due to a change in size or last-modified
timestamp) the uploader-IP would get removed, but the upload timestamp
was ported over. This was intentional so there was probably a reason...
new behavior is to keep both uploader-IP and upload timestamp if the
file contents are unchanged (determined by comparing warks), and to
discard both uploader-IP and upload timestamp if that is not the case
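in other words (illustrative sketch; the field names are not the real db schema):

```python
def reindexed_row(old: dict, new_wark: str) -> dict:
    """decide which uploader metadata survives a reindex"""
    if old["wark"] == new_wark:
        # contents unchanged: keep uploader-IP and upload timestamp
        return {"wark": new_wark, "ip": old["ip"], "at": old["at"]}
    # contents changed: discard both
    return {"wark": new_wark, "ip": "", "at": 0}
```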
* webdav: extend applesan regex with more stuff to exclude
* on macos, set applesan as default `--no-idx` to avoid indexing them
(they didn't show up in search since they're dotfiles, but still)
igloo irc has an absolute time limit of 2 minutes before it just
disconnects mid-upload, which kinda made it look like it had a buggy
multipart generator instead of just being funny
anticipating similar events in the future, also log the
client-selected boundary value to eyeball its yoloness
due to all upload APIs invoking up2k.hash_file to index uploads,
the uploads could block during a rescan for a crazy long time
(past most gateway timeouts); now this is mostly fire-and-forget
"mostly" because this also adds a conditional slowdown to
help the hasher churn through if the queue gets too big
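roughly this pattern (a sketch; the queue-size threshold and sleep duration are made-up numbers):

```python
import queue
import threading
import time

hash_q: "queue.Queue[str]" = queue.Queue()

def enqueue_for_hashing(abspath: str) -> None:
    """fire-and-forget from the upload path -- mostly"""
    hash_q.put(abspath)
    if hash_q.qsize() > 1024:
        # conditional slowdown: make uploaders wait a little
        # so the hasher can churn through the backlog
        time.sleep(0.1)

def hasher_main() -> None:
    while True:
        index_file(hash_q.get())

def index_file(abspath: str) -> None:
    pass  # placeholder for the actual hashing + db write

threading.Thread(target=hasher_main, daemon=True).start()
```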
worst case, if the server is restarted before it catches up, this
would rely on filesystem reindexing to eventually index the files
after a restart or on a schedule, meaning uploader info would be
lost on shutdown, but this is usually fine anyways (and this was
also the case until now)
primarily to support uploading from Igloo IRC but also generally useful
(not actually tested with Igloo IRC yet because it's a paid feature
so just gonna wait for spiky to wake up and tell me it didn't work)
* make gen_tree 0.1% faster
* improve filekey warning message
* fix oversight in 0c50ea1757
* support `--xdev` on windows (the python docs mention that os.scandir
doesn't assign st_ino, st_dev and st_nlink on win but i can't read)
* permission `.` grants dotfile visibility if user has `r` too
* `-ed` will grant dotfiles to all `r` accounts (same as before)
* volflag `dots` likewise
also drops compatibility for pre-0.12.0 `-v` syntax
(`-v .::red` will no longer translate to `-v .::r,ed`)
when moving/deleting a file, all symlinked dupes are verified to ensure
the action does not break any symlinks; however, this was done by checking
the realpath of each link, which was not good enough, since the deleted
file may be part of a series of nested symlinks
this situation occurs because the deduper tries to keep relative
symlinks as close as possible, only traversing into parent/sibling
folders as required, which can lead to several levels of nested links
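so the check now has to walk the whole chain instead of just realpath'ing it; roughly (error handling omitted):

```python
import os

def symlink_chain(path: str, max_depth: int = 64) -> list:
    """every path visited while resolving a (possibly nested) symlink"""
    seen = [path]
    while os.path.islink(path) and len(seen) <= max_depth:
        tgt = os.readlink(path)
        if not os.path.isabs(tgt):
            tgt = os.path.join(os.path.dirname(path), tgt)
        path = os.path.normpath(tgt)
        seen.append(path)
    return seen

def would_break(link: str, deleted_abspath: str) -> bool:
    """True if deleting deleted_abspath would break this symlink"""
    return deleted_abspath in symlink_chain(link)[1:]
```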
* start banning malicious clients according to --ban-422
* reply with a blank 500 to stop firefox from retrying like 20 times
* allow Cc's in a few specific URL params (filenames, dirnames)
hitting enter would clear out an entire chain of modals,
because the event didn't get consumed like it should,
so let's make double sure that will be the case
connections from outside the specified list of IP prefixes are rejected
(docker-friendly alternative to -i 127.0.0.1)
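the check itself is simple; a sketch using CIDR-style prefixes (the actual option and its accepted formats may differ):

```python
import ipaddress

def build_allowlist(prefixes):
    """e.g. ["127.0.0.1/32", "10.89.0.0/16", "::1/128"]"""
    return [ipaddress.ip_network(p, strict=False) for p in prefixes]

def is_allowed(client_ip: str, allowlist) -> bool:
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in allowlist)

# connections failing the check are dropped before any request parsing
```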
also mkdir any missing folders when logging to file
add argument --hdr-au-usr which specifies an HTTP header to read
usernames from; entirely bypasses copyparty's password checks
for http/https clients (ftp/smb are unaffected)
users must exist in the copyparty config, passwords can be whatever
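the trust model is simply "whatever the reverse-proxy says, goes"; a rough sketch (the header name and `known_users` are illustrative stand-ins for the real account handling):

```python
from typing import Optional, Set

def resolve_user(headers: dict, hdr_name: str, known_users: Set[str]) -> Optional[str]:
    """returns the authenticated username, or None for anonymous"""
    uname = headers.get(hdr_name.lower(), "")
    if uname in known_users:
        # password check skipped entirely; the reverse-proxy is
        # responsible for having authenticated this client already
        return uname
    return None

# example: --hdr-au-usr x-authentik-username
# resolve_user(req_headers, "x-authentik-username", {"ed", "kipu"})
```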
just the first step but already a bit useful on its own,
more to come in a few months
will probably fail when some devices (sup iphone) stream to car stereos
but at least passwords won't end up somewhere unexpected this way
(plus, the js no longer uses the jank url to request waveforms)
webdav clients tend to upload and then immediately delete
files to test for write-access and available disk space,
so don't crash and burn when that happens
* cpp_uptime is now a gauge
* cpp_bans is now cpp_active_bans (and also a gauge)
and other related fixes:
* stop emitting invalid cpp_disk_size/free for offline volumes
* support overriding the spec-mandatory mimetype with ?mime=foo
* some malicious requests are now answered with HTTP 422,
so that they count against --ban-422
* do not include request headers when replying to invalid requests,
in case there is a reverse-proxy inserting something interesting
* fix toast/tooltip colors on splashpage
* properly warn if --ah-cli or --ah-gen is used without --ah-alg
* support ^D during --ah-cli
* improve flavor texts
the deprecationwarning that got silently generated was burning
20~30% of all CPU-time without even being displayed anywhere, nice
python 3.12.0 is now only 5% slower than 3.11.6
also fixes some other, less-performance-fatal deprecations
never bonk anyone with read-access (able to see directory-listing)
or write-only (not able to retrieve any files at all) due to
either --ban-404 or --ban-url
fixes accidental ban when webdav-uploading files which
match any of the --ban-url patterns (#55)
also default-enables --ban-404 since it is now generally safe
(even when up2k is in turbo mode), plus make turbo smart enough to
disengage when necessary
accessing the syntax hilighter using a filekey is impossible anyways
because the client expects to build its state from the folder listing
and the backend refuses to return a listing given just a filekey
safari can immediately popstate when alt-tabbing back to the browser,
causing the page to load twice in parallel:
2174 log-capture ok
2295 h-repl $location
2498 h-pop $location <==
2551 sha-ok # from initial load
this carries some intentional side-effects; each thumbnail format will
now be stored in its own subfolder under .hist/th/ making cleanup more
effective (jpeg and webp are dropped separately)
table header click-handler didn't cover the entire cell so it was
easy to sort the table by accident; also do not exit hiding mode
automatically since you usually want to hide several columns
(so also adjust css to make it obvious you're in hiding mode)
* slightly faster startup / shutdown
* forgot a jinja2 golf
* waste 4KiB changing prismjs back to gz since brotli is https-gated ;_;
* had broken support for firefox<52 (non-var functions must be toplevel
  or immediately within another function); now even firefox 10 /
  centos 6 is somewhat supported again
safest way to make copyparty behave like a general-purpose webserver where
index.html is returned as expected yet directory listing is entirely
disabled / unavailable
on upload, dupes are by default handled by symlinking to the existing
copy on disk, writing the uploader's local mtime into the symlink mtime,
which is also what gets indexed in the db
this worked as intended, however during an -e2dsa rescan on startup the
symlink destination timestamps would be used instead, causing a reindex
and the resulting loss of uploader metadata (ip, timestamp)
will now always use the symlink's mtime;
worst-case 1% slower startup (no dhash)
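i.e. during rescans the indexer now looks at the link itself rather than its destination (tiny sketch):

```python
import os

def indexed_mtime(abspath: str) -> int:
    """mtime to compare against the db: the symlink's own, not the target's"""
    return int(os.lstat(abspath).st_mtime)  # lstat = do not follow symlinks
```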
this change will cause a reindex of incorrectly indexed files, however
as this has already happened at least once due to the bug being fixed,
there will be no additional loss of metadata
some of this looks shady af but appears to have been harmless
(decent amount of testing came out ok)
* some location normalization happened before unquoting; however vfs
handled this correctly so the outcome was just confusing messages
* some url parameters were double-decoded (unpost filter, move
destinations), causing some operations to fail unexpectedly
* invalid cache-control headers could be generated,
but not in a maliciously-beneficial way
(there are safeguards stripping newlines and control-characters)
also adds an exception-message cleanup step to strip away the
filesystem path that copyparty's python files are located at,
in case that could be interesting knowledge
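that cleanup is essentially a string-replace on the formatted traceback; a sketch (the real filtering may be more thorough):

```python
import os
import traceback

# the directory that copyparty's own .py files were loaded from
SELF_DIR = os.path.dirname(os.path.abspath(__file__)) + os.sep

def clean_exc() -> str:
    """format the current exception without leaking the install path"""
    return traceback.format_exc().replace(SELF_DIR, "")
```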
* in case someone gets a confusing access-related error message,
include more context in serverlogs (exact path)
* fix js console spam in search results
* same markdown line-height in viewer and browser
good news: apple finally added support for samplerates other than
44100 for AudioContext, meaning it would now have been possible to
set non-100% volume for audio files including opus files
bad news: apple broke AudioContext in a way that makes it bug out
mediaSessions, causing lockscreen controls to become mostly useless
bad news: apple additionally broke AudioContext in a way that randomly
causes playback issues, blocking playback of audio files, even if
the AudioContext is sitting idle doing nothing (which is a
requirement for reliable upload speeds on other platforms)
so, disable AudioContext on iOS
by running dompurify after marked.parse if plugins are not enabled;
adds no protection against the more practical approach of just
putting a malicious <script> in an html file and uploading that,
but one footgun less is one less footgun
* --ban-url: URLs which 404 and also match --sus-urls (bot-scan)
* --ban-403: trying to access volumes that don't exist or require auth
* --ban-422: invalid POST messages, fuzzing and such
* --nonsus-urls: regex of 404s which shouldn't trigger --ban-404
in many situations it makes sense to handle this logic inside copyparty,
since things like cloudflare, or running copyparty on a different physical
box than the nginx frontend, make the alternatives fairly clunky
when repeatedly tapping the next-folder button, occasionally it will
reload the entire page instead of ajax'ing the directory contents.
Navigation happens by simulating a click in the directory sidebar,
so the incorrect behavior matches what would happen if the link to the
folder didn't have its onclick-handler attached, so should probably
double-check if there's some way for that to happen
Issue observed fairly easily in firefox on android, regardless of whether
copyparty is running locally or on a server in a different country.
Unable to reproduce with android-chrome or desktop-firefox
Could also be due to an addon (dark-reader, noscript, ublock-origin)
anyways, avoiding this by doing the navigation more explicitly
* js: use .call instead of .bind when possible
* when running without e2d, the message on startup regarding
unfinished uploads didn't show the correct filesystem path
* --doctitle defines most titles, prefixed with "--name: " by default
* the file browser is only prefixed with the --name itself
* --nth ("no-title-hostname") removes it
* also removed by --nih ("no-info-hostname")