previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk
however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic
as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
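the gist of the fix, as a hedged sketch with invented field names
(not the actual registry code):

```python
def is_complete(job):
    # correct: "done" is only set after the final flush to disk
    return job.get("done", False)
    # the buggy dedup path effectively did `return not job["need"]`,
    # but the missing-chunks list empties before the flush
```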
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload dupe files
can be useful when `--safe-dedup=1` is not an option due to other
software tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
previously, only real folders could be listed by a webdav client;
a server which does not have any filesystem paths mapped to `/`
would cause clients to panic when trying to list the server root
now, assuming volumes `/foo` and `/bar/qux` exist, when accessing `/`
the user will see `/foo` but not `/bar` due to limitations in `walk`,
and `qux` will only appear when viewing `/bar`
a future rework of the recursion logic should further improve this
drop chunk-hashes in the up2k snap, plus other insignificant attribs
to reduce both the snapfile size and the ram usage by about 90%
reduces startup/shutdown time by a lot since there's less to serdes
(does not affect -e2d which was already optimal)
other changes:
* improve incoming-eta accuracy when the initial handshake
was made a long time before the upload actually started
* move the list of incoming files in the controlpanel to the top
* do not absreal paths unless necessary
* do not determine username if no users configured
* impacket 0.12 fixed the foldersize limit, but now
you get extremely poor performance in large folders
so the previous workaround is still default-enabled
* pyz: yeet the resource tar which is now pointless thanks to pkgres
* cache impresource stuff because pyz lookups are Extremely slow
* prefer tx_file when possible for slightly better performance
* use hardcoded list of expected resources instead of dynamic
discovery at runtime; much simpler and probably safer
* fix some forgotten resources (copying.txt, insecure.pem)
* fix loading jinja templates on windows
add support for reading webdeps and jinja-templates using either
importlib_resources or pkg_resources, which removes the need for
extracting these to a temporary folder on the filesystem
* util: add helper functions to abstract embedded resource access
* http*: serve embedded resources through resource abstraction
* main: check webdeps through resource abstraction
* httpconn: remove unused method `respath(name)`
* use __package__ to find package resources
* util: use importlib_resources backport if available
* pass E.pkg as module object for importlib_resources compatibility
* util: add pkg_resources compatibility to resource abstraction
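the fallback chain is conceptually something like this minimal
sketch (helper name and usage path are hypothetical; the real
abstraction in util covers more cases):

```python
import os

def load_resource(pkg, name):
    # try importlib.resources (py3.9+), then the pkg_resources
    # backport, then the plain filesystem as a last resort
    try:
        from importlib.resources import files
        return files(pkg).joinpath(name).read_bytes()
    except Exception:
        pass
    try:
        import pkg_resources
        return pkg_resources.resource_string(pkg, name)
    except Exception:
        pass
    here = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(here, name), "rb") as f:
        return f.read()

# load_resource("copyparty", "web/splash.html")  # illustrative path
```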
* show media tags in shares
* html hydrator assumed a folder named `foo.txt` was a doc
* due to sessions, use `pwd` as password placeholder on services
* exponentially slow upload handshakes caused by lack of rd+fn
  sqlite index; became apparent after a volume hit 200k files
  (see the index sketch below)
* listing big folders 5% faster due to `_quotep3b`
* optimize `unquote`, 20% faster but only used rarely
* reindex on startup 150x faster in some rare cases
(same filename in MANY folders)
the database is now around 10% larger (likely worst-case)
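the new index is presumably along these lines (treat the table name
as an assumption; `rd` is the relative directory, `fn` the filename):

```python
import sqlite3

db = sqlite3.connect("up2k.db")  # illustrative path
# composite index so handshake lookups by (directory, filename)
# become a b-tree probe instead of a full table scan
db.execute("CREATE INDEX IF NOT EXISTS up_rd_fn ON up (rd, fn)")
```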
reduce the overhead of function-calls from the client thread
to the svchub singletons (up2k, thumbs, metrics) down to 14%
and optimize up2k chunk-receiver to spend 5x less time bookkeeping
which restores up2k performance to before introducing incoming-ETA
dedup is still encouraged and fully supported, but
being default-enabled has caused too many surprises
enabling `--dedup` restores the previous default behavior
also renames `--never-symlink` to `--hardlink-only`
symlinks between volumes will only be created if xlink is
enabled, so such symlinks should be ignored if xlink is
disabled, as they might originate from other software
this prevents accidental rewriting of non-dedup symlinks
if --no-dedup was enabled in a volume which already contained
symlinked duplicate files, renaming/moving folders could fail
this is due to folder contents being moved one file at a time
(which is how symlink breakage is prevented) except the links
are moved assuming the final directory layout, meaning they
may be intermittently broken during the move
with no-dedup, the symlinks are converted into full files as
each symlink is encountered, but a temporarily broken symlink
would crash the procedure
fix this by giving `_symlink` a new parameter `fsrc`
which is a known valid inode for data copying purposes
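roughly the shape of the change, as a reduced sketch (the real
`_symlink` takes more parameters than this):

```python
import os, shutil

def _symlink(src, dst, no_dedup=False, fsrc=None):
    # fsrc: a known-valid inode holding the same data as src,
    # for when src is a temporarily-broken link mid-move
    if no_dedup:
        data_src = src if os.path.isfile(src) else fsrc
        shutil.copy2(data_src, dst)  # materialize a full file
    else:
        os.symlink(src, dst)
```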
previously, the assumption was made that the database and filesystem
would not desync, and that an upload could safely be substituted with
a symlink to an existing copy on-disk, assuming said copy still
existed on-disk at all
this is fine if copyparty is the only software that makes changes to
the filesystem, but that is a shitty assumption to make in hindsight
add `--safe-dedup` which takes a "safety level", and by default (50)
it will no longer blindly expect that the filesystem has not been
altered through other means; the file contents will now be hashed
and compared to the database
deduplication can be much slower as a result, but definitely worth it
as this avoids some potentially very unpleasant surprises
the previous behavior can be restored with `--safe-dedup 1`
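in spirit, the default level does something like this sketch
(hash algorithm and names are assumptions):

```python
import hashlib, os

def dedup_ok(existing_abspath, db_digest):
    if not os.path.isfile(existing_abspath):
        return False  # fs and db desynced; store the upload as-is
    h = hashlib.sha512()
    with open(existing_abspath, "rb") as f:
        for buf in iter(lambda: f.read(1024 * 1024), b""):
            h.update(buf)
    return h.hexdigest() == db_digest  # only dedup if still identical
```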
timezone can be changed with `export TZ=Europe/Oslo` before launch
using naive timestamps like this appears to be safe as of 3.13-rc1,
no deprecation warnings, just a tiny bit slower than assuming UTC
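to sanity-check the effect from python (posix only; the server
itself just reads TZ at launch):

```python
import os, time

os.environ["TZ"] = "Europe/Oslo"
time.tzset()  # re-read TZ; does not exist on windows
print(time.strftime("%Y-%m-%d %H:%M:%S"))  # naive local timestamp
```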
due to deduplication, it is intentionally impossible to
upload several identical copies of a file in parallel
by default, the up2k client will upload files sorted by
size, which usually groups the dupes next to each other
in the queue, so the client will try to upload the
identical copies in parallel
this is by design, as it improves performance on average,
but it also shows the confusing (but technically-correct)
message "resume the partial upload into the original path"
fix this with a more appropriate message
note that this approach was selected in favor of pausing
handshakes while the initial copy finishes uploading,
because that could severely reduce upload performance
by preventing optimal use of multiple connections
* v1.13.8 broke collision resolving for non-identical files;
the correct filename was reserved but not symlinked to
the original file, leaving a zerobyte file instead.
See v1.14.3 github release notes for remediation info
* add sanchecks for early detection of index/fs desync;
saves performance and gives less confusing logs
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files
if a single file is selected for sharing, then
the returned URL will point directly to that file
and fix some shares-related bugs:
* password coalescing
* log-spam on reload
* fix: translation: changing from `" "` to `' '` for some strings;
using `./scripts/tlcheck.sh eng chi copyparty/web/browser.js`
* fix: translation: Check the newly added Chinese translation
<daniiooo> also iirc some time ago we were talking about the scroll for volume ed
<daniiooo> and how its reversed
<ed> is it reversed though? most people said it worked the way they expected
<daniiooo> fuck maybe i agreed back then too
<daniiooo> its the opposite in both aimp and mpv though
<ed> is it w
<tatsu> its a feature
<Devices> it's to keep you on your toes
<Devices> consciously use copyparty
<ed> i can invert it no problem
<ed> would be a nice surprise for anyone who's used it
<Flaminator> Scroll down turns the audio down right?
<daniiooo> ye it makes it louder in cpp
<Devices> why would scrolling down make something louder
<Vin> yeah that's odd
<Vin> scrolling up should make it louder
<Flaminator> It's what it does for me in winamp, mpc-hc and foobar2000.
<daniiooo> so now the question is who itc agreed to whats currently in cpp
<daniiooo> haha
<ed> idk but i'm inverting it
<ed> let's invert it every 6 months
* navpane would always feed the vproxy paths into the tree
instead of only when necessary (the initial load)
* mkdir would return `X-New-Dir` without the `rp-loc` prefix
* chpw and some other redirects also sent raw vpaths
Reported-by: @iridial
* wark landed in the wrong registry when moved to another volume
(harmless; upload would succeed on the next handshake)
* dedup did not apply correctly when moved into another volume,
since all the checks were done based on the previous vol;
fix this by recursing the whole thing
also update the reloc example after some real-world experience
Reported-by: @daniiooo
* support x-forwarded-for
* option to specify socket permissions and group
* in containers, avoid collision during restart
* add --help-bind with examples
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands
mainly to mitigate limitations such as sharex/sharex#3992
xbr/xau can redirect uploads to other destinations with `reloc`
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`
(see the sketch after the lists below)
there are limitations;
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k
also provides more details for hooks:
* xbu/xau: basic-uploader vpath with filename
* xbr/xar: add client ip
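a toy hook in that shape (see `--help-hooks` for the real schema;
every field and key below should be treated as assumed):

```python
#!/usr/bin/env python3
# assumes registration like --xau j,bin/hooks/demo.py so the
# event arrives as json in the last argv
import json, sys

ev = json.loads(sys.argv[-1])
out = {}
if ev.get("vp", "").endswith(".tmp"):
    out["del"] = [ev["vp"]]  # ask the server to delete it again
print(json.dumps(out))
```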
v1.13.5 made some proxies angry with its massive chunklists
when stitching chunks, only list the first chunk hash in full,
and include a truncated hash for the consecutive chunks
should be enough for logfiles to make sense
and to smoketest that clients are behaving
compile to bytecode so cpython doesn't have to keep it in memory
ram usage reduced by:
* min: 5.4 MiB (32.6 to 27.2)
* ac/im: 5.2 MiB (39.0 to 33.8)
* dj/iv: 10.6 MiB (67.3 to 56.7)
startup time reduced from:
* min: 1.3s to 0.6s
* ac/im: 1.6s to 0.9s
* dj/iv: 2.0s to 1.1s
image size increased by 4 MiB (min), 6 MiB (ac/im/iv), 9 MiB (dj)
ram usage measured on idle with:
while true; do ps aux | grep -E 'R[S]S|no[-]crt'; read -n1; echo; done
startup time measured with:
time podman run --rm -it localhost/copyparty-min-amd64 --exit=idx
in the event that an upload chunk gets stuck, the js would
never stop waiting for a response, requiring a page reload
improves reliability when running behind a reverse-proxy
which is configured to never timeout requests (can make
sense when combined with other services on the same box)
with overflow:auto, firefox picks the div-width before estimating
the height, causing it to undershoot by the scrollbar width
and then messing up the text alignment
fix: conditionally set overflow-y:scroll using js
* wait until page (au) has loaded to register hotkeys
* hotkey `m` would grow sidebar if tree was minimized
* more exact warning about num.parallel uploads
* keep more console logs in memory
* message phrasing
audio extraction happens serverside to opus or mp3
depending on browser support
remuxing (extracting audio without transcoding)
is currently not supported, and is not planned
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East, `1.6x` from US-West to Finland )
the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body
a new global-option `--u2sz`, default `1,64,96`, sets the target
request size as 64 MiB, allowing the settings ui to specify any
value between 1 and 96 MiB, which is cloudflare's max value
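an illustrative client-side sketch of the wire format (the real
client is js; header name from the description, the rest invented):

```python
import requests

def upload_stitched(url, chunks, max_sz=64 * 1024 * 1024):
    """chunks: list of (hash, bytes) tuples in file order"""
    batch, size = [], 0
    for h, data in chunks:
        if batch and size + len(data) > max_sz:
            post_batch(url, batch)
            batch, size = [], 0
        batch.append((h, data))
        size += len(data)
    if batch:
        post_batch(url, batch)

def post_batch(url, batch):
    hdrs = {"x-up2k-hash": ",".join(h for h, _ in batch)}
    body = b"".join(d for _, d in batch)  # one consecutive range
    requests.post(url, headers=hdrs, data=body)
```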
this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk will be verified and written
to disk as they arrive, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies
the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes
a particular link from Norway to Virginia,US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
then immediately resets the scale when the request has been sent;
connection reuse does not help in this case
on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
hooks can be restricted to users with certain permissions, for example
`--xm aw,notify-send` will only `notify-send` if user has write-access
the user's list of permissions is now also included in the json
that is passed to the hook if enabled; `--xm aw,j,notify-send`
will now also stop parsing flags when encountering a blank value,
making it possible to specify initial arguments for the command:
`--xm aw,j,,notify-send,hey` would run `notify-send` with `hey`
as its first argument, and the json would be the 2nd argument,
similarly `--xm ,notify-send,hey` when no flags specified
this is somewhat explained in `--help-hooks`, but
additional related features are planned in the near future
and will all be better documented when the dust settles
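the parsing rule behaves something like this sketch (the set of
known flags here is a made-up subset):

```python
KNOWN_FLAGS = {"aw", "j", "f", "t"}  # illustrative only

def split_hook(spec):
    parts = spec.split(",")
    for i, p in enumerate(parts):
        if p == "":  # blank value: stop parsing flags here
            return parts[:i], parts[i + 1:]
        if p not in KNOWN_FLAGS:
            return parts[:i], parts[i:]
    return parts, []

print(split_hook("aw,j,,notify-send,hey"))
# (['aw', 'j'], ['notify-send', 'hey'])
print(split_hook(",notify-send,hey"))
# ([], ['notify-send', 'hey'])
```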
if an ftp client tried to list the toplevel folder on a server
where nothing is mounted toplevel, it would synthesize a
directory listing which included all volumes, even those
which the user would not be able to access
so basically not a problem, just very confusing
mtime the file that was used to produce the folder thumbnail
(rather than the folder itself) since the folder-thumb is
always resolved to the file's thumb in the on-disk cache
if a request body is expected, but request has no content-length,
set the timeout to 1/20 of `--s-tbody`, so 9 seconds by default,
or 3 seconds if it's 60 as recommended in helptext
this gives less confusing behavior if a client accidentally does
something invalid, replying with an error response before the
previous timeout of 186 seconds
also raise the slowloris flag, in case a client bugs out and
keeps making such requests
if a song fails to play for some reason (network loss,
corrupt file), a timer plays the next track after 5s
the timer was not cancelled if the user
started another track in the meantime
the spec doesn't say what you're supposed to do if the target filename
of an upload is already taken; overwriting seems to be the most common
behavior on other ftp servers, and is required by windows 2000
(otherwise it freaks out, issues a delete, and then doesn't actually
upload the file, nice)
new option `--ftp-no-ow` restores old default behavior of rejecting upload if target filename exists
was intentionally skipped to avoid complexity but enough people have
asked why it doesn't work that it's time to do something about it
turns out it wasn't that bad
* upgrade to partftpy 0.4.0
* workarounds for buggy clients/servers
* improved ipv6 support, especially on macos
* improved robustness on unreliable networks
* make `--tftp4` separate from `--ftp4`
only keep characters `>+-*` if there's less than three of them,
and discard entire prefix if there's more
markdown spec only cares about exactly-one or three-or-more, but
let's keep pairs in case anyone uses that as unconventional markup
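in other words, roughly (assuming it runs per line):

```python
import re

def trim_prefix(line):
    # keep a run of >+-* only if it's shorter than three chars;
    # otherwise drop the entire run
    m = re.match(r"[>+\-*]+", line)
    if m and len(m.group(0)) >= 3:
        return line[m.end():]
    return line

print(trim_prefix(">>> quoted"))  # " quoted"
print(trim_prefix(">> quoted"))   # ">> quoted" (pairs are kept)
```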
when there was more than ~700 active connections,
* sendfile (non-https downloads) could fail
* mdns and ssdp could fail to reinitialize on network changes
...because `select` can't handle FDs higher than 512 on windows
(1024 on linux/macos), so prefer `poll` where possible (linux/macos)
but apple keeps breaking and unbreaking `poll` in macos,
so use `--no-poll` if necessary to force `select` instead
metadata is no longer discarded when transcoding to opus or mp3;
this was a good idea back when the transcodes were only used by
the webplayer, but now that folders can be batch-downloaded with
on-the-fly transcoding, it makes sense to keep most of the tags
individual tags are discarded if their value exceeds 1023 characters
this should mainly affect the following:
* traktor beatmaps, size usually somewhere around 100 KiB
* non-standard cover-art embeddings, size around 250 KiB
* XMP (project data from adobe premiere), around 48 KiB
use sigmasks to block SIGINT, SIGTERM, SIGUSR1 from all other threads
also initiate shutdown by calling sighandler directly,
in case this misses anything and that is still unreliable
(discovered by `--exit=idx` being noop once in a blue moon)
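the mechanism, roughly (posix; `signal.pthread_sigmask` affects
only the calling thread, so each worker blocks for itself):

```python
import signal, threading

SIGS = {signal.SIGINT, signal.SIGTERM, signal.SIGUSR1}

def worker():
    # never deliver these to this thread; they will always
    # land in the main thread's handler instead
    signal.pthread_sigmask(signal.SIG_BLOCK, SIGS)
    ...  # actual work goes here

threading.Thread(target=worker).start()
```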
* template-based title formatting
* picture embeds are no longer ant-sized
* `--og-color` sets accent color; default #333
* `--og-s-title` forces default title, ignoring e2t
* add a music indicator to song titles because discord doesn't
currently only being used to workaround discord discarding
query strings in opengraph tags, but i'm sure there will be
plenty more wonderful usecases for this atrocity
if a given filesystem were to disappear (e.g. removable storage)
followed by another filesystem appearing at the same location,
this would not get noticed by up2k in a timely manner
fix this by discarding the mtab cache after `--mtab-age` seconds and
rebuild it from scratch, unless the previous values are definitely
correct (as indicated by identical output from `/bin/mount`)
probably reduces windows performance by an acceptable amount
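the invalidation rule as a conceptual sketch (all names invented):

```python
import subprocess, time

class MtabCache:
    def __init__(self, max_age):
        self.max_age = max_age
        self.ts, self.raw, self.tab = 0.0, None, None

    def get(self):
        if self.tab is not None and time.time() - self.ts < self.max_age:
            return self.tab  # still fresh; trust it blindly
        raw = subprocess.check_output(["/bin/mount"])
        if raw != self.raw:
            self.tab = self._parse(raw)  # rebuild from scratch
        self.raw, self.ts = raw, time.time()
        return self.tab

    def _parse(self, raw):
        # hypothetical parser; the real table holds more detail
        lines = raw.decode().splitlines()
        return [ln.split()[2] for ln in lines if len(ln.split()) > 2]
```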
* hasher thread could die if a client would rapidly
upload and delete files (so very unlikely)
* two unprotected calls to register_vpath which was
almost-definitely safe because the volumes
already existed in the registry
adds options `--bauth-last` to lower the preference for
taking the basic-auth password in case of conflict,
and `--no-bauth` to entirely disable basic-authentication
if a client is providing multiple passwords, for example when
"logged in" with one password (the `cppwd` cookie) and switching
to another account by also sending a PW header/url-param, then
the default evaluation order to determine which password to use is:
url-param `pw`, header `pw`, basic-auth header, cookie (cppwd/cppws)
so if a client supplies a basic-auth header, it will ignore the cookie
and use the basic-auth password instead, which usually makes sense
but this can become a problem if you have other webservers running
on the same domain which also support basic-authentication
--bauth-last is a good choice for cooperating with such services, as
--no-bauth currently breaks support for the android app...
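the evaluation order, sketched (presumably --bauth-last just demotes
the basic-auth header to the end; names illustrative):

```python
def pick_pw(url_pw, hdr_pw, bauth_pw, cookie_pw, bauth_last=False):
    order = [url_pw, hdr_pw, bauth_pw, cookie_pw]
    if bauth_last:
        order = [url_pw, hdr_pw, cookie_pw, bauth_pw]
    for pw in order:
        if pw:
            return pw  # first non-empty source wins
```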
plus misc similar technically-incorrect addq usages;
most of these don't matter in practice since they'll
never get a url with a hash, but makes the intent clear
and make sure hashes never get passed around
like they're part of a dirkey, harmless as it is
counterpart of `--s-wr-sz` which existed already
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)
was previously 32 KiB, so large uploads should now use 17% less CPU
also adds sanchecks for values of `--iobuf`, `--s-rd-sz`, `--s-wr-sz`
also adds file-overwrite feature for multipart posts
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)
was previously a mix of 64 and 512 KiB;
now the same value is enforced everywhere
download-as-tar is now 20% faster with the default value
it is now possible to grant access to users other than `${u}`
(the user which the volume belongs to)
previously, permissions did not apply correctly to IdP volumes due to
the way `${u}` and `${g}` were expanded, which was a funky iteration
over all known users/groups instead of... just expanding them?
also adds another sancheck that a volume's URL must contain a
`${u}` to be allowed to mention `${u}` in the accs list, and
similarly for `${g}` / `@${g}` since users can be in multiple groups
the volflags of `/` were used to determine if e2d was enabled,
which is wrong in two ways:
* if there is no `/` volume, it would be globally disabled
* if `/` has e2d, but another volume doesn't, it would
erroneously think unpost was available, which is not an
issue unless that volume used to have e2d enabled AND
there is stale data matching the client's IP
3f05b665 (v1.11.0) had an incomplete fix for the stale-data part of
the above, which also introduced the other issue
this commit partially fixes the following issue:
if a client manages to escape real-ip detection, copyparty will
try to ban the reverse-proxy instead, effectively banning all clients
this can happen if the configuration says to obtain client real-ip
from a cloudflare header, but the server is not configured to reject
connections from non-cloudflare IPs, so a scanner will eventually
hit the server IP with malicious-looking requests and trigger a ban
copyparty will now continue to process requests from banned IPs until
the header has been parsed and the real-ip has been obtained (or not),
causing an increased server load from malicious clients
assuming the `--xff-src` and `--xff-hdr` config is correct,
this issue should no longer be hitting innocent clients
the old behavior of immediately rejecting a banned IP address
can be re-enabled with the new option `--early-ban`
too fraught with subtle dangers, such as other copyparty instances
ending up sharing knowledge of volumes unintentionally, and
configuration becoming mysteriously sticky (not to mention
this would all become hella difficult to reason about)
instead, rely entirely on users seeing the big red warning
added in 2ebfdc25 if their configuration is dangerous
this decision has the drawback that there will be server stuttering
whenever a new user makes themselves known since the last restart,
as it realizes the volumes exist and does the usual e2ds indexing,
instead of doing it early during startup
but it's probably good enough
when switching to another folder with identical filenames, the
mediaplayer would get confused and think it was the same files,
messing up the playback order
to abort an upload, refresh the page and access the unpost tab,
which now includes unfinished uploads (sorted before completed ones)
can be configured through u2abort (global or volflag);
by default it requires both the IP and account to match
https://a.ocv.me/pub/g/nerd-stuff/2024-0310-stoltzekleiven.jpg
running behind cloudflare doesn't necessarily
mean being accessible ONLY through cloudflare
also include a general warning about optimal
configuration for non-cloudflare intermediates
as this option is very rarely useful, add global-option `--k304` to
unhide the button and/or set it default-enabled
the toggle will still appear when the feature was previously enabled by
a client, and the feature is still default-enabled for all IE clients
if a reverse-proxy starts hijacking requests and replying with HTML,
don't panic when it fails to decode as a handshake json
fix this for most other json-expecting gizmos too,
and take the opportunity to cleanup some text formatting
this improves performance on s3-backed volumes
noktuas reported on discord that the upload performance was
unexpectedly poor when writing to an s3 bucket through a JuiceFS
fuse-mount, only getting 1.5 MiB/s with copyparty, meanwhile a
regular filecopy averaged 30 MiB/s plus
the issue was that s3 does not support sparse files, so copyparty
would fall back to sequential uploading, and also disable fpool,
causing JuiceFS to repeatedly commit the same 5 MiB range to
the storage provider as each chunk arrived from the client
by forcing use of sparse files, s3 adapters such as JuiceFS and
geesefs will "only" write the entire file to s3 *twice*: first the
full filesize worth of zerobytes (depending on adapter, hopefully
using gzip compression to reduce the bandwidth necessary)
and then the actual file data in an adapter-specific chunksize
with this volflag, copyparty appears to reach the full expected speed
* docker: warn if there are config-files in ~/.config/copyparty
because somebody copied their config into
/cfg/copyparty instead of /cfg as intended
* docker: warn if there are no config-files in an included directory
* make misconfigured reverse-proxies more obvious
* explain cors rejections in server log
* indicate cors rejection in error toast
nothing dangerous, just confusing log messages if an
admin hammers the reload button 100+ times per second,
or another linux process rapidly sends SIGUSR1
`@import url(https://...)` would get rewritten to baseURL + https://...
also reorder the generated csstext so that @imports appear first;
necessary for stuff like googlefonts to take effect
some reverse-proxies expect plaintext replies, and
we don't have a brotli decompressor to satisfy this
additionally, because brotli is https-gated (thx google),
it was already an impractical mess anyways
the sfx is now 7 KiB larger
chrome crashes if there's more than 2000 unique SVGs on one page, so
there was serverside useragent-sniffing to determine if the icon should
be an svg or a raster
however since the useragent is not in our vary, cloudflare wouldn't see
the difference and cache everything equally, meaning most folders would
display a random mix of png and svg thumbnails
move browser detection to the clientside to ensure unique URLs
* if a nic was restarted mid-transfer, the server could crash
* this workaround will probably fix a bunch of similar issues too
* fix resource leak if dualstack fails the ipv4 bind
on phones especially, hitting the end of a folder while playing music
could permanently stop audio playback, because the browser will
revoke playback privileges unless we have a song ready to go...
there's no time to navigate through folders looking for the next file
the preloader will now start jumping through folders ahead of time
some cifs servers cause sqlite to fail in interesting ways; any attempt
to create a table can instantly throw an exception, which results in a
zerobyte database being created. During the next startup, the db would
be determined to be corrupted, and up2k would invoke _backup_db before
deleting and recreating it -- except that sqlite's connection.backup()
will hang indefinitely and deadlock up2k
add a watchdog which fires if it takes longer than 1 minute to open the
database, printing a big warning that the filesystem probably does not
support locking or is otherwise sqlite-incompatible, then writing a
stacktrace of all threads to a textfile in the config directory
(in case this deadlock is due to something completely different),
before finally crashing spectacularly
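a reduced sketch of the watchdog idea (names invented; the real one
also writes the stacktrace file mentioned above):

```python
import sqlite3, sys, threading

def open_db_guarded(path, patience=60):
    ok = threading.Event()

    def watchdog():
        if not ok.wait(patience):  # still stuck after a minute
            sys.stderr.write(
                "WARNING: db open is stuck; this filesystem likely "
                "lacks locking support or is sqlite-incompatible\n"
            )

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        db = sqlite3.connect(path)
        db.execute("create table if not exists sanity (k int)")
        return db
    finally:
        ok.set()  # open finished (or raised); call off the dog
```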
additionally, delete the database if the creation fails, which should
prevent the deadlock on the next startup; combine that with a
message hinting at the filesystem incompatibility
the 1-minute limit may sound excessively gracious, but considering
what some of the copyparty instances out there are running on, it
really isn't
this was reported when connecting to a cifs server running alpine
thx to abex on discord for the detailed bug report!
as each chunk is written to the file, httpcli calls
up2k.confirm_chunk to register the chunk as completed, and the reply
indicates whether that was the final outstanding chunk, in which case
httpcli closes the file descriptors since there's nothing more to write
the issue is that the final chunk is registered as completed before the
file descriptors are closed, meaning there could be writes that haven't
finished flushing to disk yet
if the client decides to issue another handshake during this window,
up2k sees that all chunks are complete and calls up2k.finish_upload
even as some threads might still be flushing the final writes to disk
so the conditions to hit this bug were as follows (all must be true):
* multiprocessing is disabled
* there is a reverse-proxy
* a client has several idle connections and reuses one of those
* the server's filesystem is EXTREMELY slow, to the point where
closing a file takes over 30 seconds
the fix is to stop handshakes from being processed while a file is
being closed, which is unfortunately a small bottleneck in that it
prohibits initiating another upload while one is being finalized, but
the required complexity to handle this better is probably not worth it
(a separate mutex for each upload session or something like that)
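in the abstract, the fix looks like this (all names invented; the
point is that "last chunk registered" and "fds closed" are now one
atomic step as far as handshakes are concerned):

```python
import threading

reg_mutex = threading.Lock()

def confirm_chunk(job, f, was_last):
    with reg_mutex:
        if was_last:
            f.close()            # data is flushed before we let go
            job["done"] = True   # only now may a handshake finish it

def handshake(job):
    with reg_mutex:              # blocks while a file is being closed
        return job.get("done", False)
```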
this issue is mostly harmless, partially because it is super tricky to
hit (only aware of it happening synthetically), and because there
are usually no harmful consequences; the worst-case is if this were to
happen exactly as the server OS decides to crash, which would make the
file appear to be fully uploaded even though it's missing some data
(all extremely unlikely, but not impossible)
there is no performance impact; if anything it should now accept
new tcp connections slightly faster thanks to more granular locking
if a reverse-proxy decides to strip away URL parameters, show an
appropriate error-toast instead of silently entering a bad state
someone on discord ended up in an infinite page-reload loop
since the js would try to recover by fully navigating to the
requested dir if `?ls` failed, which wouldn't do any good anyways
if the dir in question is the initial dir to display
videos unloaded correctly when switching between files, but not when
closing the lightbox while playing a video and then clicking another
now, only media within the preload window (+/- 2 from current file)
is kept loaded into DOM, everything else gets ejected, both on
navigation and when closing the lightbox
much more accurate total-ETA when uploading with many connections
and/or uploading huge files to really slow servers
the titlebar % still only does actually confirmed bytes,
partially because that makes sense, partially because
that's what happened by accident
features which should be good to go:
* user groups
* assigning permissions by group
* dynamically created volumes based on username/groupname
* rebuild vfs when new users/groups appear
but several important features still pending;
* detect dangerous configurations
* dynamic vol below readable path
* remember volumes created during previous runs
* helps prevent unintended access
* correct filesystem-scan on startup
* allow mounting `/` (the entire filesystem) as a volume
* not that you should (really, you shouldn't)
* improve `-v` helptext
* change IdP group symbol to @ because % is used for file inclusion
* not technically necessary but is less confusing in docs
window.localStorage was null, so trying to read would fail
seen on falkon 23.08.4 with qtwebengine 5.15.12 (fedora39)
might as well be paranoid about the other failure modes too
(sudden exceptions on reads and/or writes)
* polyfill Set() for gridview (ie9, ie10)
* navpane: do full-page nav if history api is ng (ie9)
* show markdown as plaintext if rendering fails (ie*)
* text-editor: hide preview pane if it doesn't work (ie*)
* explicitly hide toasts on close (ie9, ff10)
some clients (clonezilla-webdav) rapidly create and delete files;
this fails if copyparty is still hashing the file (usually the case)
and the same thing can probably happen due to antivirus etc
add global-option --rm-retry (volflag rm_retry) specifying
for how long (and how quickly) to keep retrying the deletion
default: retry for 5sec on windows, 0sec (disabled) on everything else
because this is only a problem on windows
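the retry loop, sketched (both knobs are illustrative; the real
option packs its values into a single string):

```python
import os, time

def rm_retry(abspath, total=5.0, interval=0.1):
    deadline = time.time() + total
    while True:
        try:
            return os.unlink(abspath)
        except OSError:
            if time.time() >= deadline:
                raise  # still locked (av? hasher?); give up
            time.sleep(interval)
```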
* fix crash on keyboard input in modals
* text editor works again (but without markdown preview)
* keyboard hotkeys for the few features that actually work
when a file was reindexed (due to a change in size or last-modified
timestamp) the uploader-IP would get removed, but the upload timestamp
was ported over. This was intentional so there was probably a reason...
new behavior is to keep both uploader-IP and upload timestamp if the
file contents are unchanged (determined by comparing warks), and to
discard both uploader-IP and upload timestamp if that is not the case
* webdav: extend applesan regex with more stuff to exclude
* on macos, set applesan as default `--no-idx` to avoid indexing them
(they didn't show up in search since they're dotfiles, but still)
igloo irc has an absolute time limit of 2 minutes before it just
disconnects mid-upload and that kinda looked like it had a buggy
multipart generator instead of just being funny
anticipating similar events in the future, also log the
client-selected boundary value to eyeball its yoloness
due to all upload APIs invoking up2k.hash_file to index uploads,
the uploads could block during a rescan for a crazy long time
(past most gateway timeouts); now this is mostly fire-and-forget
"mostly" because this also adds a conditional slowdown to
help the hasher churn through if the queue gets too big
worst case, if the server is restarted before it catches up, this
would rely on filesystem reindexing to eventually index the files
after a restart or on a schedule, meaning uploader info would be
lost on shutdown, but this is usually fine anyways (and this was
also the case until now)
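the fire-and-forget queue with its conditional slowdown, sketched
(thresholds invented for illustration):

```python
import queue, time

hash_q = queue.Queue()  # consumed by the hasher thread

def on_upload_complete(abspath):
    hash_q.put(abspath)  # returns immediately; hasher catches up
    backlog = hash_q.qsize()
    if backlog > 1024:
        # lean on the uploader a little so the hasher can catch
        # up instead of the queue growing unboundedly
        time.sleep(min(2.0, (backlog - 1024) / 1024))
```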
primarily to support uploading from Igloo IRC but also generally useful
(not actually tested with Igloo IRC yet because it's a paid feature
so just gonna wait for spiky to wake up and tell me it didn't work)