mirror of https://github.com/9001/copyparty.git
synced 2025-08-18 01:22:13 -06:00
Compare commits
No commits in common. "hovudstraum" and "v1.14.3" have entirely different histories.
.github/ISSUE_TEMPLATE/bug_report.md (vendored, 33 changed lines)
@@ -8,42 +8,33 @@ assignees: '9001'
 ---
 
 NOTE:
-**please use english, or include an english translation.** aside from that,
 all of the below are optional, consider them as inspiration, delete and rewrite at will, thx md
 
 
-### Describe the bug
+**Describe the bug**
 a description of what the bug is
 
-### To Reproduce
+**To Reproduce**
 List of steps to reproduce the issue, or, if it's hard to reproduce, then at least a detailed explanation of what you did to run into it
 
-### Expected behavior
+**Expected behavior**
 a description of what you expected to happen
 
-### Screenshots
+**Screenshots**
 if applicable, add screenshots to help explain your problem, such as the kickass crashpage :^)
 
-### Server details (if you are using docker/podman)
+**Server details**
-remove the ones that are not relevant:
+if the issue is possibly on the server-side, then mention some of the following:
-* **server OS / version:**
+* server OS / version:
-* **how you're running copyparty:** (docker/podman/something-else)
+* python version:
-* **docker image:** (variant, version, and arch if you know)
+* copyparty arguments:
-* **copyparty arguments and/or config-file:**
+* filesystem (`lsblk -f` on linux):
 
-### Server details (if you're NOT using docker/podman)
+**Client details**
-remove the ones that are not relevant:
-* **server OS / version:**
-* **what copyparty did you grab:** (sfx/exe/pip/arch/...)
-* **how you're running it:** (in a terminal, as a systemd-service, ...)
-* run copyparty with `--version` and grab the last 3 lines (they start with `copyparty`, `CPython`, `sqlite`) and paste them below this line:
-* **copyparty arguments and/or config-file:**
 
-### Client details
 if the issue is possibly on the client-side, then mention some of the following:
 * the device type and model:
 * OS version:
 * browser version:
 
-### Additional context
+**Additional context**
 any other context about the problem here
.github/ISSUE_TEMPLATE/feature_request.md (vendored, 2 changed lines)
@@ -7,8 +7,6 @@ assignees: '9001'
 
 ---
 
-NOTE:
-**please use english, or include an english translation.** aside from that,
 all of the below are optional, consider them as inspiration, delete and rewrite at will
 
 **is your feature request related to a problem? Please describe.**
.gitignore (vendored, 4 changed lines)
@@ -30,7 +30,6 @@ copyparty/res/COPYING.txt
 copyparty/web/deps/
 srv/
 scripts/docker/i/
-scripts/deps-docker/uncomment.py
 contrib/package/arch/pkg/
 contrib/package/arch/src/
 

@@ -43,6 +42,3 @@ scripts/docker/*.err
 
 # nix build output link
 result
 
-# IDEA config
-.idea/
CONTRIBUTING.md

@@ -1,21 +1,8 @@
-* **found a bug?** [create an issue!](https://github.com/9001/copyparty/issues) or let me know in the [discord](https://discord.gg/25J8CdTT6G) :>
+* do something cool
-* **fixed a bug?** create a PR or post a patch! big thx in advance :>
-* **have a cool idea?** let's discuss it! anywhere's fine, you choose.
 
-but please:
+really tho, send a PR or an issue or whatever, all appreciated, anything goes, just behave aight 👍👍
 
 
-# do not use AI / LLM when writing code
-
-copyparty is 100% organic, free-range, human-written software!
-
-> ⚠ you are now entering a no-copilot zone
-
-the *only* place where LLM/AI *may* be accepted is for [localization](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#translations) if you are fluent and have confirmed that the translation is accurate.
-
-sorry for the harsh tone, but this is important to me 🙏
-
+but to be more specific,
 
 
 # contribution ideas

@@ -41,8 +28,6 @@ aside from documentation and ideas, some other things that would be cool to have
 
 * **translations** -- the copyparty web-UI has translations for english and norwegian at the top of [browser.js](https://github.com/9001/copyparty/blob/hovudstraum/copyparty/web/browser.js); if you'd like to add a translation for another language then that'd be welcome! and if that language has a grammar that doesn't fit into the way the strings are assembled, then we'll fix that as we go :>
-  * but please note that support for [RTL (Right-to-Left) languages](https://en.wikipedia.org/wiki/Right-to-left_script) is currently not planned, since the javascript is a bit too jank for that
 
 * **UI ideas** -- at some point I was thinking of rewriting the UI in react/preact/something-not-vanilla-javascript, but I'll admit the comfiness of not having any build stage combined with raw performance has kinda convinced me otherwise :p but I'd be very open to ideas on how the UI could be improved, or be more intuitive.
 
 * **docker improvements** -- I don't really know what I'm doing when it comes to containers, so I'm sure there's a *huge* room for improvement here, mainly regarding how you're supposed to use the container with kubernetes / docker-compose / any of the other popular ways to do things. At some point I swear I'll start learning about docker so I can pick up clach04's [docker-compose draft](https://github.com/9001/copyparty/issues/38) and learn how that stuff ticks, unless someone beats me to it!
bin/README.md

@@ -15,18 +15,22 @@ produces a chronological list of all uploads by collecting info from up2k databa
 # [`partyfuse.py`](partyfuse.py)
 * mount a copyparty server as a local filesystem (read-only)
 * **supports Windows!** -- expect `194 MiB/s` sequential read
-* **supports Linux** -- expect `600 MiB/s` sequential read
+* **supports Linux** -- expect `117 MiB/s` sequential read
 * **supports macos** -- expect `85 MiB/s` sequential read
 
+filecache is default-on for windows and macos;
+* macos readsize is 64kB, so speed ~32 MiB/s without the cache
+* windows readsize varies by software; explorer=1M, pv=32k
+
 note that copyparty should run with `-ed` to enable dotfiles (hidden otherwise)
 
-and consider using [../docs/rclone.md](../docs/rclone.md) instead; usually a bit faster, especially on windows
+also consider using [../docs/rclone.md](../docs/rclone.md) instead for 5x performance
 
 ## to run this on windows:
 * install [winfsp](https://github.com/billziss-gh/winfsp/releases/latest) and [python 3](https://www.python.org/downloads/)
   * [x] add python 3.x to PATH (it asks during install)
-* `python -m pip install --user fusepy` (or grab a copy of `fuse.py` from the `connect` page on your copyparty, and keep it in the same folder)
+* `python -m pip install --user fusepy`
 * `python ./partyfuse.py n: http://192.168.1.69:3923/`
 
 10% faster in [msys2](https://www.msys2.org/), 700% faster if debug prints are enabled:

@@ -78,6 +82,3 @@ cd /mnt/nas/music/.hist
 # [`prisonparty.sh`](prisonparty.sh)
 * run copyparty in a chroot, preventing any accidental file access
 * creates bindmounts for /bin, /lib, and so on, see `sysdirs=`
-
-# [`bubbleparty.sh`](bubbleparty.sh)
-* run copyparty in an isolated process, preventing any accidental file access and more
@ -1,19 +0,0 @@
|
||||||
#!/bin/sh
|
|
||||||
# usage: ./bubbleparty.sh ./copyparty-sfx.py ....
|
|
||||||
bwrap \
|
|
||||||
--unshare-all \
|
|
||||||
--ro-bind /usr /usr \
|
|
||||||
--ro-bind /bin /bin \
|
|
||||||
--ro-bind /lib /lib \
|
|
||||||
--ro-bind /etc/resolv.conf /etc/resolv.conf \
|
|
||||||
--dev-bind /dev /dev \
|
|
||||||
--dir /tmp \
|
|
||||||
--dir /var \
|
|
||||||
--bind $(pwd) $(pwd) \
|
|
||||||
--share-net \
|
|
||||||
--die-with-parent \
|
|
||||||
--file 11 /etc/passwd \
|
|
||||||
--file 12 /etc/group \
|
|
||||||
"$@" \
|
|
||||||
11< <(getent passwd $(id -u) 65534) \
|
|
||||||
12< <(getent group $(id -g) 65534)
|
|
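The last two lines are the interesting trick: bwrap receives two extra file descriptors (11 and 12) whose contents become the sandbox's /etc/passwd and /etc/group, holding only the current user plus nobody (65534). As a rough illustration, here is a Python sketch of the lines those getent process-substitutions produce; stdlib only, not part of the repo:

#!/usr/bin/env python3
# sketch: build the minimal passwd/group content that bubbleparty.sh
# feeds to bwrap via fds 11 and 12 (like `getent passwd $(id -u) 65534`)
import grp
import os
import pwd

def passwd_lines():
    # current user + the "nobody" uid, formatted like /etc/passwd
    for uid in (os.getuid(), 65534):
        p = pwd.getpwuid(uid)
        yield ":".join([p.pw_name, "x", str(p.pw_uid), str(p.pw_gid),
                        p.pw_gecos, p.pw_dir, p.pw_shell])

def group_lines():
    # current group + the "nobody" gid, formatted like /etc/group
    for gid in (os.getgid(), 65534):
        g = grp.getgrgid(gid)
        yield ":".join([g.gr_name, "x", str(g.gr_gid), ",".join(g.gr_mem)])

if __name__ == "__main__":
    print("\n".join(passwd_lines()))
    print("\n".join(group_lines()))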
bin/handlers/README.md

@@ -20,8 +20,6 @@ each plugin must define a `main()` which takes 3 arguments;
 
 ## on404
 
-* [redirect.py](redirect.py) sends an HTTP 301 or 302, redirecting the client to another page/file
-* [randpic.py](randpic.py) redirects `/foo/bar/randpic.jpg` to a random pic in `/foo/bar/`
 * [sorry.py](answer.py) replies with a custom message instead of the usual 404
 * [nooo.py](nooo.py) replies with an endless noooooooooooooo
 * [never404.py](never404.py) 100% guarantee that 404 will never be a thing again as it automatically creates dummy files whenever necessary
bin/handlers/randpic.py (only present in hovudstraum)

@@ -1,35 +0,0 @@
import os
import random
from urllib.parse import quote


# assuming /foo/bar/ is a valid URL but /foo/bar/randpic.png does not exist,
# hijack the 404 with a redirect to a random pic in that folder
#
# thx to lia & kipu for the idea


def main(cli, vn, rem):
    req_fn = rem.split("/")[-1]
    if not cli.can_read or not req_fn.startswith("randpic"):
        return

    req_abspath = vn.canonical(rem)
    req_ap_dir = os.path.dirname(req_abspath)
    files_in_dir = os.listdir(req_ap_dir)

    if "." in req_fn:
        file_ext = "." + req_fn.split(".")[-1]
        files_in_dir = [x for x in files_in_dir if x.lower().endswith(file_ext)]

    if not files_in_dir:
        return

    selected_file = random.choice(files_in_dir)

    req_url = "/".join([vn.vpath, rem]).strip("/")
    req_dir = req_url.rsplit("/", 1)[0]
    new_url = "/".join([req_dir, quote(selected_file)]).strip("/")

    cli.reply(b"redirecting...", 302, headers={"Location": "/" + new_url})
    return "true"
bin/handlers/redirect.py (only present in hovudstraum)

@@ -1,52 +0,0 @@
# if someone hits a 404, redirect them to another location


def send_http_302_temporary_redirect(cli, new_path):
    """
    replies with an HTTP 302, which is a temporary redirect;
    "new_path" can be any of the following:
    - "http://a.com/" would redirect to another website,
    - "/foo/bar" would redirect to /foo/bar on the same server;
      note the leading '/' in the location which is important
    """
    cli.reply(b"redirecting...", 302, headers={"Location": new_path})


def send_http_301_permanent_redirect(cli, new_path):
    """
    replies with an HTTP 301, which is a permanent redirect;
    otherwise identical to send_http_302_temporary_redirect
    """
    cli.reply(b"redirecting...", 301, headers={"Location": new_path})


def send_errorpage_with_redirect_link(cli, new_path):
    """
    replies with a website explaining that the page has moved;
    "new_path" must be an absolute location on the same server
    but without a leading '/', so for example "foo/bar"
    would redirect to "/foo/bar"
    """
    cli.redirect(new_path, click=False, msg="this page has moved")


def main(cli, vn, rem):
    """
    this is the function that gets called by copyparty;
    note that vn.vpath and cli.vpath does not have a leading '/'
    so we're adding the slash in the debug messages below
    """
    print(f"this client just hit a 404: {cli.ip}")
    print(f"they were accessing this volume: /{vn.vpath}")
    print(f"and the original request-path (straight from the URL) was /{cli.vpath}")
    print(f"...which resolves to the following filesystem path: {vn.canonical(rem)}")

    new_path = "/foo/bar/"
    print(f"will now redirect the client to {new_path}")

    # uncomment one of these:
    send_http_302_temporary_redirect(cli, new_path)
    #send_http_301_permanent_redirect(cli, new_path)
    #send_errorpage_with_redirect_link(cli, new_path)

    return "true"
bin/hooks/README.md

@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (uploa
 
 these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info
 
-run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbc/xac/xbr/xar/xbd/xad/xban)
+run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)
 
 > **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead
 

@@ -14,8 +14,6 @@ run copyparty with `--help-hooks` for usage details / hook type explanations (xm
 * [discord-announce.py](discord-announce.py) announces new uploads on discord using webhooks ([example](https://user-images.githubusercontent.com/241032/215304439-1c1cb3c8-ec6f-4c17-9f27-81f969b1811a.png))
 * [reject-mimetype.py](reject-mimetype.py) rejects uploads unless the mimetype is acceptable
 * [into-the-cache-it-goes.py](into-the-cache-it-goes.py) avoids bugs in caching proxies by immediately downloading each file that is uploaded
-* [podcast-normalizer.py](podcast-normalizer.py) creates a second file with dynamic-range-compression whenever an audio file is uploaded
-  * good example of the `idx` [hook effect](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#hook-effects) to tell copyparty about additional files to scan/index
 
 
 # upload batches

@@ -27,11 +25,9 @@ these are `--xiu` hooks; unlike `xbu` and `xau` (which get executed on every sin
 # before upload
 * [reject-extension.py](reject-extension.py) rejects uploads if they match a list of file extensions
 * [reloc-by-ext.py](reloc-by-ext.py) redirects an upload to another destination based on the file extension
-  * good example of the `reloc` [hook effect](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#hook-effects)
 
 
 # on message
 * [wget.py](wget.py) lets you download files by POSTing URLs to copyparty
 * [qbittorrent-magnet.py](qbittorrent-magnet.py) starts downloading a torrent if you post a magnet url
-* [usb-eject.py](usb-eject.py) adds web-UI buttons to safe-remove usb flashdrives shared through copyparty
 * [msg-log.py](msg-log.py) is a guestbook; logs messages to a doc in the same folder
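For reference, a minimal sketch of what one of these hooks can look like; it assumes the same json contract that podcast-normalizer.py (below) relies on, i.e. when a hook is registered with the `j` flag, copyparty passes upload-info as json in the first argument, with `vp`/`ap` fields. The filename and registration line are hypothetical:

#!/usr/bin/env python3
# minimal xau (execute-after-upload) hook sketch; hypothetical registration:
#   --xau j,bin/hooks/log-upload.py
# (mirrors the j,c1 example below, minus c1 since this returns no effects)
import json
import sys

def main():
    inf = json.loads(sys.argv[1])  # upload-info from copyparty
    print("new upload: vpath=%r abspath=%r" % (inf["vp"], inf["ap"]))

if __name__ == "__main__":
    main()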
bin/hooks/podcast-normalizer.py (only present in hovudstraum)

@@ -1,121 +0,0 @@
#!/usr/bin/env python3

import json
import os
import sys
import subprocess as sp


_ = r"""
sends all uploaded audio files through an aggressive
dynamic-range-compressor to even out the volume levels

dependencies:
  ffmpeg

being an xau hook, this gets eXecuted After Upload completion
but before copyparty has started hashing/indexing the file, so
we'll create a second normalized copy in a subfolder and tell
copyparty to hash/index that additional file as well

example usage as global config:
  -e2d -e2t --xau j,c1,bin/hooks/podcast-normalizer.py

parameters explained,
  e2d/e2t = enable database and metadata indexing
  xau = execute after upload
  j = this hook needs upload information as json (not just the filename)
  c1 = this hook returns json on stdout, so tell copyparty to read that

example usage as a volflag (per-volume config):
  -v srv/inc/pods:inc/pods:r:rw,ed:c,xau=j,c1,bin/hooks/podcast-normalizer.py
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  (share fs-path srv/inc/pods at URL /inc/pods,
   readable by all, read-write for user ed,
   running this xau (exec-after-upload) plugin for all uploaded files)

example usage as a volflag in a copyparty config file:
  [/inc/pods]
    srv/inc/pods
    accs:
      r: *
      rw: ed
    flags:
      e2d  # enables file indexing
      e2t  # metadata tags too
      xau: j,c1,bin/hooks/podcast-normalizer.py
"""

########################################################################
### CONFIG

# filetypes to process; ignores everything else
EXTS = "mp3 flac ogg oga opus m4a aac wav wma"

# the name of the subdir to put the normalized files in
SUBDIR = "normalized"

########################################################################


# try to enable support for crazy filenames
try:
    from copyparty.util import fsenc
except:

    def fsenc(p):
        return p.encode("utf-8")


def main():
    # read info from copyparty
    inf = json.loads(sys.argv[1])
    vpath = inf["vp"]
    abspath = inf["ap"]

    # check if the file-extension is on the to-be-processed list
    ext = abspath.lower().split(".")[-1]
    if ext not in EXTS.split():
        return

    # jump into the folder where the file was uploaded
    # and create the subfolder to place the normalized copy inside
    dirpath, filename = os.path.split(abspath)
    os.chdir(fsenc(dirpath))
    os.makedirs(SUBDIR, exist_ok=True)

    # the input and output filenames to give ffmpeg
    fname_in = fsenc(f"./{filename}")
    fname_out = fsenc(f"{SUBDIR}/{filename}.opus")

    # fmt: off
    # create and run the ffmpeg command
    cmd = [
        b"ffmpeg",
        b"-nostdin",
        b"-hide_banner",
        b"-i", fname_in,
        b"-af", b"dynaudnorm=f=100:g=9",  # the normalizer config
        b"-c:a", b"libopus",
        b"-b:a", b"128k",
        fname_out,
    ]
    # fmt: on
    sp.check_output(cmd)

    # and finally, tell copyparty about the new file
    # so it appears in the database and rss-feed:
    vpath = f"{SUBDIR}/{filename}.opus"
    print(json.dumps({"idx": {"vp": [vpath]}}))

    # (it's fine to give it a relative path like that; it gets
    # resolved relative to the folder the file was uploaded into)


if __name__ == "__main__":
    try:
        main()
    except Exception as ex:
        print("podcast-normalizer failed; %r" % (ex,))
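Since the hook only reads json from argv[1], it can be smoke-tested without a running copyparty by faking that argument; a sketch with placeholder paths (ffmpeg must be on PATH):

#!/usr/bin/env python3
# hypothetical smoke-test: invoke podcast-normalizer.py the same way
# copyparty would (json in argv[1], thanks to the j flag)
import json
import subprocess
import sys

fake = {"ap": "/tmp/demo.mp3", "vp": "inc/pods/demo.mp3"}  # placeholder paths
subprocess.run(
    [sys.executable, "podcast-normalizer.py", json.dumps(fake)], check=True
)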
bin/hooks/reloc-by-ext.py

@@ -71,9 +71,6 @@ def main():
     ## selecting it inside the print at the end:
     ##
 
-    # move all uploads to one specific folder
-    into_junk = {"vp": "/junk"}
-
     # create a subfolder named after the filetype and move it into there
     into_subfolder = {"vp": ext}
 

@@ -95,8 +92,8 @@ def main():
     by_category = {}  # no action
 
     # now choose the default effect to apply; can be any of these:
-    #   into_junk into_subfolder into_toplevel into_sibling by_category
+    #   into_subfolder into_toplevel into_sibling by_category
-    effect = into_sibling
+    effect = {"vp": "/junk"}
 
     ##
     ## but we can keep going, adding more specific rules
usb-eject.js (only present in hovudstraum)

@@ -1,62 +0,0 @@
// see usb-eject.py for usage

function usbclick() {
    var o = QS('#treeul a[dst="/usb/"]') || QS('#treepar a[dst="/usb/"]');
    if (o)
        o.click();
}

function eject_cb() {
    var t = ('' + this.responseText).trim();
    if (t.indexOf('can be safely unplugged') < 0 && t.indexOf('Device can be removed') < 0)
        return toast.err(30, 'usb eject failed:\n\n' + t);

    toast.ok(5, esc(t.replace(/ - /g, '\n\n')).trim());
    usbclick(); setTimeout(usbclick, 10);
};

function add_eject_2(a) {
    var aw = a.getAttribute('href').split(/\//g);
    if (aw.length != 4 || aw[3])
        return;

    var v = aw[2],
        k = 'umount_' + v;

    for (var b = 0; b < 9; b++) {
        var o = ebi(k);
        if (!o)
            break;
        o.parentNode.removeChild(o);
    }

    a.appendChild(mknod('span', k, '⏏'), a);
    o = ebi(k);
    o.style.cssText = 'position:absolute; right:1em; margin-top:-.2em; font-size:1.3em';
    o.onclick = function (e) {
        ev(e);
        var xhr = new XHR();
        xhr.open('POST', get_evpath(), true);
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded;charset=UTF-8');
        xhr.send('msg=' + uricom_enc(':usb-eject:' + v + ':'));
        xhr.onload = xhr.onerror = eject_cb;
        toast.inf(10, "ejecting " + v + "...");
    };
};

function add_eject() {
    var o = QSA('#treeul a[href^="/usb/"]') || QSA('#treepar a[href^="/usb/"]');
    for (var a = o.length - 1; a > 0; a--)
        add_eject_2(o[a]);
};

(function() {
    var f0 = treectl.rendertree;
    treectl.rendertree = function (res, ts, top0, dst, rst) {
        var ret = f0(res, ts, top0, dst, rst);
        add_eject();
        return ret;
    };
})();

setTimeout(add_eject, 50);
bin/hooks/usb-eject.py (only present in hovudstraum)

@@ -1,62 +0,0 @@
#!/usr/bin/env python3

import os
import stat
import subprocess as sp
import sys
from urllib.parse import unquote_to_bytes as unquote


"""
if you've found yourself using copyparty to serve flashdrives on a LAN
and your only wish is that the web-UI had a button to unmount / safely
remove those flashdrives, then boy howdy are you in the right place :D

put usb-eject.js in the webroot (or somewhere else http-accessible)
then run copyparty with these args:

  -v /run/media/egon:/usb:A:c,hist=/tmp/junk
  --xm=c1,bin/hooks/usb-eject.py
  --js-browser=/usb-eject.js

which does the following respectively,

 * share all of /run/media/egon as /usb with admin for everyone
   and put the histpath somewhere it won't cause trouble
 * run the usb-eject hook with stdout redirect to the web-ui
 * add the complementary usb-eject.js to the browser
"""


MOUNT_BASE = b"/run/media/egon/"


def main():
    try:
        label = sys.argv[1].split(":usb-eject:")[1].split(":")[0]
        mp = MOUNT_BASE + unquote(label)
        # print("ejecting [%s]... " % (mp,), end="")
        mp = os.path.abspath(os.path.realpath(mp))
        st = os.lstat(mp)
        if not stat.S_ISDIR(st.st_mode) or not mp.startswith(MOUNT_BASE):
            raise Exception("not a regular directory")

        # if you're running copyparty as root (thx for the faith)
        # you'll need something like this to make dbus talkative
        cmd = b"sudo -u egon DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus gio mount -e"

        # but if copyparty and the ui-session is running
        # as the same user (good) then this is plenty
        cmd = b"gio mount -e"

        cmd = cmd.split(b" ") + [mp]
        ret = sp.check_output(cmd).decode("utf-8", "replace")
        print(ret.strip() or (label + " can be safely unplugged"))

    except Exception as ex:
        print("unmount failed: %r" % (ex,))


if __name__ == "__main__":
    main()
bin/mtag/README.md

@@ -31,9 +31,6 @@ plugins in this section should only be used with appropriate precautions:
 * [very-bad-idea.py](./very-bad-idea.py) combined with [meadup.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/meadup.js) converts copyparty into a janky yet extremely flexible chromecast clone
   * also adds a virtual keyboard by @steinuil to the basic-upload tab for comfy couch crowd control
   * anything uploaded through the [android app](https://github.com/9001/party-up) (files or links) are executed on the server, meaning anyone can infect your PC with malware... so protect this with a password and keep it on a LAN!
-  * [kamelåså](https://github.com/steinuil/kameloso) is a much better (and MUCH safer) alternative to this plugin
-    * powered by [chicken-curry-banana-pineapple-peanut pizza](https://a.ocv.me/pub/g/i/2025/01/298437ce-8351-4c8c-861c-fa131d217999.jpg?cache) so you know it's good
-    * and, unlike this plugin, kamelåså even has windows support (nice)
 
 
 # dependencies
@@ -2,15 +2,11 @@
 
 import sys
 import json
+import zlib
 import struct
 import base64
 import hashlib
 
-try:
-    from zlib_ng import zlib_ng as zlib
-except:
-    import zlib
 
 try:
     from copyparty.util import fsenc
 except:
@@ -22,8 +22,6 @@ set -e
 # modifies the keyfinder python lib to load the .so in ~/pe
 
 
-export FORCE_COLOR=1
 
 linux=1
 
 win=

@@ -189,14 +187,11 @@ install_keyfinder() {
 		exit 1
 	}
 
-	x=${-//[^x]/}; set -x; cat /etc/alpine-release
 	# rm -rf /Users/ed/Library/Python/3.9/lib/python/site-packages/*keyfinder*
 	CFLAGS="-I$h/pe/keyfinder/include -I/opt/local/include -I/usr/include/ffmpeg" \
-	CXXFLAGS="-I$h/pe/keyfinder/include -I/opt/local/include -I/usr/include/ffmpeg" \
 	LDFLAGS="-L$h/pe/keyfinder/lib -L$h/pe/keyfinder/lib64 -L/opt/local/lib" \
-	PKG_CONFIG_PATH="/c/msys64/mingw64/lib/pkgconfig:$h/pe/keyfinder/lib/pkgconfig" \
+	PKG_CONFIG_PATH=/c/msys64/mingw64/lib/pkgconfig \
 	$pybin -m pip install --user keyfinder
-	[ "$x" ] || set +x
 
 	pypath="$($pybin -c 'import keyfinder; print(keyfinder.__file__)')"
 	for pyso in "${pypath%/*}"/*.so; do
bin/mtag/very-bad-idea.py

@@ -6,11 +6,6 @@ WARNING -- DANGEROUS PLUGIN --
 running this plugin, they can execute malware on your machine
 so please keep this on a LAN and protect it with a password
 
-here is a MUCH BETTER ALTERNATIVE (which also works on Windows):
-  https://github.com/steinuil/kameloso
-
-----------------------------------------------------------------------
-
 use copyparty as a chromecast replacement:
 * post a URL and it will open in the default browser
 * upload a file and it will open in the default application
bin/partyfuse.py (618 changed lines)
File diff suppressed because it is too large

bin/u2c.py (770 changed lines)
File diff suppressed because it is too large
bin/zmq-recv.py (only present in hovudstraum)

@@ -1,76 +0,0 @@
#!/usr/bin/env python3

import sys
import zmq

"""
zmq-recv.py: demo zmq receiver
2025-01-22, v1.0, ed <irc.rizon.net>, MIT-Licensed
https://github.com/9001/copyparty/blob/hovudstraum/bin/zmq-recv.py

basic zmq-server to receive events from copyparty; try one of
the below and then "send a message to serverlog" in the web-ui:

1) dumb fire-and-forget to any and all listeners;
   run this script with "sub" and run copyparty with this:
   --xm zmq:pub:tcp://*:5556

2) one lucky listener gets the message, blocks if no listeners:
   run this script with "pull" and run copyparty with this:
   --xm t3,zmq:push:tcp://*:5557

3) blocking syn/ack mode, client must ack each message;
   run this script with "rep" and run copyparty with this:
   --xm t3,zmq:req:tcp://localhost:5555

note: to conditionally block uploads based on message contents,
use rep_server to answer with "return 1" and run copyparty with
--xau t3,c,zmq:req:tcp://localhost:5555
"""


ctx = zmq.Context()


def sub_server():
    # PUB/SUB allows any number of servers/clients, and
    # messages are fire-and-forget
    sck = ctx.socket(zmq.SUB)
    sck.connect("tcp://localhost:5556")
    sck.setsockopt_string(zmq.SUBSCRIBE, "")
    while True:
        print("copyparty says %r" % (sck.recv_string(),))


def pull_server():
    # PUSH/PULL allows any number of servers/clients, and
    # each message is sent to exactly one PULL client
    sck = ctx.socket(zmq.PULL)
    sck.connect("tcp://localhost:5557")
    while True:
        print("copyparty says %r" % (sck.recv_string(),))


def rep_server():
    # REP/REQ is a server/client pair where each message must be
    # acked by the other before another message can be sent, so
    # copyparty will do a blocking-wait for the ack
    sck = ctx.socket(zmq.REP)
    sck.bind("tcp://*:5555")
    while True:
        print("copyparty says %r" % (sck.recv_string(),))
        reply = b"thx"
        # reply = b"return 1"  # non-zero to block an upload
        sck.send(reply)


mode = sys.argv[1].lower() if len(sys.argv) > 1 else ""

if mode == "sub":
    sub_server()
elif mode == "pull":
    pull_server()
elif mode == "rep":
    rep_server()
else:
    print("specify mode as first argument: SUB | PULL | REP")
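To try zmq-recv.py without a live copyparty, the sending side can be faked in a few lines of pyzmq; a sketch of a publisher matching the "sub" mode above (binds the same port, 5556, just like `--xm zmq:pub:tcp://*:5556` would):

#!/usr/bin/env python3
# sketch: stand-in for copyparty's `--xm zmq:pub:tcp://*:5556`
# so zmq-recv.py can be tested in "sub" mode without copyparty
import time
import zmq

ctx = zmq.Context()
sck = ctx.socket(zmq.PUB)
sck.bind("tcp://*:5556")
time.sleep(0.5)  # give subscribers a moment to connect (PUB drops early msgs)
sck.send_string("hello from a fake copyparty")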
contrib/README.md

@@ -12,21 +12,13 @@
 * assumes the webserver and copyparty is running on the same server/IP
 * modify `10.13.1.1` as necessary if you wish to support browsers without javascript
 
-### [`sharex.sxcu`](sharex.sxcu) - Windows screenshot uploader
+### [`sharex.sxcu`](sharex.sxcu)
-* [sharex](https://getsharex.com/) config file to upload screenshots and grab the URL
+* sharex config file to upload screenshots and grab the URL
   * `RequestURL`: full URL to the target folder
   * `pw`: password (remove the `pw` line if anon-write)
   * the `act:bput` thing is optional since copyparty v1.9.29
   * using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)
 
-### [`ishare.iscu`](ishare.iscu) - MacOS screenshot uploader
-* [ishare](https://isharemac.app/) config file to upload screenshots and grab the URL
-  * `RequestURL`: full URL to the target folder
-  * `pw`: password (remove the `pw` line if anon-write)
-
-### [`flameshot.sh`](flameshot.sh) - Linux screenshot uploader
-* takes a screenshot with [flameshot](https://flameshot.org/) on Linux, uploads it, and writes the URL to clipboard
-
 ### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
 * browser integration, kind of? custom rightclick actions and stuff
 * rightclick a pic and send it to copyparty straight from your browser

@@ -50,9 +42,6 @@
 * give a 3rd argument to install it to your copyparty config
 * systemd service at [`systemd/cfssl.service`](systemd/cfssl.service)
 
-### [`zfs-tune.py`](zfs-tune.py)
-* optimizes databases for optimal performance when stored on a zfs filesystem; also see [openzfs docs](https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#database-workloads) and specifically the SQLite subsection
-
 # OS integration
 init-scripts to start copyparty as a service
 * [`systemd/copyparty.service`](systemd/copyparty.service) runs the sfx normally

@@ -61,10 +50,5 @@ init-scripts to start copyparty as a service
 * [`openrc/copyparty`](openrc/copyparty)
 
 # Reverse-proxy
-copyparty supports running behind another webserver
+copyparty has basic support for running behind another webserver
-* [`apache/copyparty.conf`](apache/copyparty.conf)
+* [`nginx/copyparty.conf`](nginx/copyparty.conf)
-* [`haproxy/copyparty.conf`](haproxy/copyparty.conf)
-* [`lighttpd/subdomain.conf`](lighttpd/subdomain.conf)
-* [`lighttpd/subpath.conf`](lighttpd/subpath.conf)
-* [`nginx/copyparty.conf`](nginx/copyparty.conf) -- recommended
-* [`traefik/copyparty.yaml`](traefik/copyparty.yaml)
contrib/apache/copyparty.conf

@@ -1,29 +1,14 @@
-# if you would like to use unix-sockets (recommended),
+# when running copyparty behind a reverse proxy,
-# you must run copyparty with one of the following:
+# the following arguments are recommended:
 #
-#   -i unix:777:/dev/shm/party.sock
+#   -i 127.0.0.1    only accept connections from nginx
-#   -i unix:777:/dev/shm/party.sock,127.0.0.1
 #
 # if you are doing location-based proxying (such as `/stuff` below)
 # you must run copyparty with --rp-loc=stuff
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
 
 
 LoadModule proxy_module modules/mod_proxy.so
+ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
+# do not specify ProxyPassReverse
 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
-# NOTE: do not specify ProxyPassReverse
 
 
-##
-## then, enable one of the below:
-
-# use subdomain proxying to unix-socket (best)
-ProxyPass "/" "unix:///dev/shm/party.sock|http://whatever/"
-
-# use subdomain proxying to 127.0.0.1 (slower)
-#ProxyPass "/" "http://127.0.0.1:3923/"
-
-# use subpath proxying to 127.0.0.1 (slow and maybe buggy)
-#ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
contrib/flameshot.sh (only present in hovudstraum)

@@ -1,14 +0,0 @@
#!/bin/bash
set -e

# take a screenshot with flameshot and send it to copyparty;
# the image url will be placed on your clipboard

password=wark
url=https://a.ocv.me/up/
filename=$(date +%Y-%m%d-%H%M%S).png

flameshot gui -s -r |
curl -T- $url$filename?pw=$password |
tail -n 1 |
xsel -ib
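If curl is unavailable, the same upload can be done from Python; a stdlib-only sketch using the same placeholder password/URL as the script above (`curl -T-` performs a plain HTTP PUT of stdin, which is what this reproduces):

#!/usr/bin/env python3
# sketch: PUT a screenshot to copyparty the way flameshot.sh does with curl;
# usage:  flameshot gui -s -r | ./put-screenshot.py   (hypothetical filename)
import sys
import time
import urllib.request

password = "wark"             # placeholder, same as the script above
url = "https://a.ocv.me/up/"  # placeholder target folder

filename = time.strftime("%Y-%m%d-%H%M%S") + ".png"
data = sys.stdin.buffer.read()
req = urllib.request.Request(
    url + filename + "?pw=" + password, data=data, method="PUT"
)
print(urllib.request.urlopen(req).read().decode())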
contrib/haproxy/copyparty.conf (only present in hovudstraum)

@@ -1,24 +0,0 @@
# this config is essentially two separate examples;
#
# foo1 connects to copyparty using tcp, and
# foo2 uses unix-sockets for 27% higher performance
#
# to use foo2 you must run copyparty with one of the following:
#
#   -i unix:777:/dev/shm/party.sock
#   -i unix:777:/dev/shm/party.sock,127.0.0.1

defaults
    mode http
    option forwardfor
    timeout connect 1s
    timeout client 610s
    timeout server 610s

listen foo1
    bind *:8081
    server srv1 127.0.0.1:3923 maxconn 512

listen foo2
    bind *:8082
    server srv1 /dev/shm/party.sock maxconn 512
contrib/ishare.iscu (only present in hovudstraum)

@@ -1,10 +0,0 @@
{
    "Name": "copyparty",
    "RequestURL": "http://127.0.0.1:3923/screenshots/",
    "Headers": {
        "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE",
        "accept": "json"
    },
    "FileFormName": "f",
    "ResponseURL": "{{fileurl}}"
}
contrib/lighttpd/subdomain.conf (only present in hovudstraum)

@@ -1,24 +0,0 @@
# example usage for benchmarking:
#
#   taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subdomain.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
#   -i unix:777:/dev/shm/party.sock
#   -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
proxy.forwarded = ( "for" => 1, "proto" => 1 )
proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

# if you really need to use tcp instead of unix-sockets, do this instead:
#proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
contrib/lighttpd/subpath.conf (only present in hovudstraum)

@@ -1,31 +0,0 @@
# example usage for benchmarking:
#
#   taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subpath.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
#   -i unix:777:/dev/shm/party.sock
#   -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# also since this example proxies a subpath instead of the
# recommended subdomain-proxying, we must also specify this:
#
#   --rp-loc files
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
$HTTP["url"] =~ "^/files" {
    proxy.forwarded = ( "for" => 1, "proto" => 1 )
    proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

    # if you really need to use tcp instead of unix-sockets, do this instead:
    #proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
}
contrib/nginx/copyparty.conf

@@ -1,67 +1,29 @@
-# look for "max clients:" when starting copyparty, as nginx should
+# when running copyparty behind a reverse proxy,
-# not accept more consecutive clients than what copyparty is able to;
+# the following arguments are recommended:
+#
+#   -i 127.0.0.1    only accept connections from nginx
+#
+# -nc must match or exceed the webserver's max number of concurrent clients;
+# copyparty default is 1024 if OS permits it (see "max clients:" on startup),
 # nginx default is 512  (worker_processes 1, worker_connections 512)
 #
-# ======================================================================
+# you may also consider adding -j0 for CPU-intensive configurations
-#
+# (5'000 requests per second, or 20gbps upload/download in parallel)
-# to reverse-proxy a specific path/subpath/location below a domain
-# (rather than a complete subdomain), for example "/qw/er", you must
-# run copyparty with --rp-loc /qw/as and also change the following:
-#     location / {
-#         proxy_pass http://cpp_tcp;
-# to this:
-#     location /qw/er/ {
-#         proxy_pass http://cpp_tcp/qw/er/;
-#
-# ======================================================================
-#
-# rarely, in some extreme usecases, it can be good to add -j0
-# (40'000 requests per second, or 20gbps upload/download in parallel)
-# but this is usually counterproductive and slightly buggy
-#
-# ======================================================================
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
 #
-# ======================================================================
+# if you are behind cloudflare (or another protection service),
-#
-# if you are behind cloudflare (or another CDN/WAF/protection service),
 # remember to reject all connections which are not coming from your
 # protection service -- for cloudflare in particular, you can
 # generate the list of permitted IP ranges like so:
 # (curl -s https://www.cloudflare.com/ips-v{4,6} | sed 's/^/allow /; s/$/;/'; echo; echo "deny all;") > /etc/nginx/cloudflare-only.conf
 #
 # and then enable it below by uncommenting the cloudflare-only.conf line
-#
-# ======================================================================
 
 
-upstream cpp_tcp {
-    # alternative 1: connect to copyparty using tcp;
-    # cpp_uds is slightly faster and more secure, but
-    # cpp_tcp is easier to setup and "just works"
-    # ...you should however restrict copyparty to only
-    # accept connections from nginx by adding these args:
-    #  -i 127.0.0.1
-
+upstream cpp {
     server 127.0.0.1:3923 fail_timeout=1s;
     keepalive 1;
 }
 
-upstream cpp_uds {
-    # alternative 2: unix-socket, aka. "unix domain socket";
-    # 5-10% faster, and better isolation from other software,
-    # but there must be at least one unix-group which both
-    # nginx and copyparty is a member of; if that group is
-    # "www" then run copyparty with the following args:
-    #  -i unix:770:www:/dev/shm/party.sock
-
-    server unix:/dev/shm/party.sock fail_timeout=1s;
-    keepalive 1;
-}
 
 server {
     listen 443 ssl;
     listen [::]:443 ssl;

@@ -72,30 +34,24 @@ server {
     #include /etc/nginx/cloudflare-only.conf;
 
     location / {
-        # recommendation: replace cpp_tcp with cpp_uds below
+        proxy_pass http://cpp;
-        proxy_pass http://cpp_tcp;
         proxy_redirect off;
         # disable buffering (next 4 lines)
         proxy_http_version 1.1;
         client_max_body_size 0;
         proxy_buffering off;
         proxy_request_buffering off;
-        # improve download speed from 600 to 1500 MiB/s
-        proxy_buffers 32 8k;
-        proxy_buffer_size 16k;
-        proxy_busy_buffers_size 24k;
 
-        proxy_set_header Connection "Keep-Alive";
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        # NOTE: with cloudflare you want this X-Forwarded-For instead:
+        # NOTE: with cloudflare you want this instead:
         #proxy_set_header X-Forwarded-For $http_cf_connecting_ip;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header Connection "Keep-Alive";
     }
 }
 
 # default client_max_body_size (1M) blocks uploads larger than 256 MiB
 client_max_body_size 1024M;
 client_header_timeout 610m;
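When adjusting a config like this, it can help to temporarily point the proxy at a dummy backend that echoes the request headers, to confirm that X-Real-IP / X-Forwarded-For / X-Forwarded-Proto actually arrive; a throwaway stdlib-only sketch (listens on copyparty's default port, 3923, so the proxy_pass above works unchanged):

#!/usr/bin/env python3
# sketch: dummy backend on port 3923 that echoes request headers,
# for verifying that a reverse proxy forwards X-Real-IP and friends
from http.server import BaseHTTPRequestHandler, HTTPServer

class Echo(BaseHTTPRequestHandler):
    def do_GET(self):
        body = str(self.headers).encode()  # all headers, one per line
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 3923), Echo).serve_forever()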
@@ -1,13 +1,9 @@
-{
-  config,
-  pkgs,
-  lib,
-  ...
-}:
+{ config, pkgs, lib, ... }:
 with lib;

 let
-  mkKeyValue =
-    key: value:
+  mkKeyValue = key: value:
     if value == true then
       # sets with a true boolean value are coerced to just the key name
       key
@@ -19,10 +15,9 @@ let

   mkAttrsString = value: (generators.toKeyValue { inherit mkKeyValue; } value);

-  mkValueString =
-    value:
+  mkValueString = value:
     if isList value then
-      (concatStringsSep "," (map mkValueString value))
+      (concatStringsSep ", " (map mkValueString value))
     else if isAttrs value then
       "\n" + (mkAttrsString value)
     else
@@ -54,14 +49,13 @@ let
     ${concatStringsSep "\n" (mapAttrsToList mkVolume cfg.volumes)}
   '';

+  name = "copyparty";
   cfg = config.services.copyparty;
-  configFile = pkgs.writeText "copyparty.conf" configStr;
-  runtimeConfigPath = "/run/copyparty/copyparty.conf";
-  externalCacheDir = "/var/cache/copyparty";
-  externalStateDir = "/var/lib/copyparty";
-  defaultShareDir = "${externalStateDir}/data";
-in
-{
+  configFile = pkgs.writeText "${name}.conf" configStr;
+  runtimeConfigPath = "/run/${name}/${name}.conf";
+  home = "/var/lib/${name}";
+  defaultShareDir = "${home}/data";
+in {
   options.services.copyparty = {
     enable = mkEnableOption "web-based file manager";

@@ -74,35 +68,6 @@ in
       '';
     };

-    mkHashWrapper = mkOption {
-      type = types.bool;
-      default = true;
-      description = ''
-        Make a shell script wrapper called 'copyparty-hash' with all options set here,
-        that launches the hashing cli.
-      '';
-    };
-
-    user = mkOption {
-      type = types.str;
-      default = "copyparty";
-      description = ''
-        The user that copyparty will run under.
-
-        If changed from default, you are responsible for making sure the user exists.
-      '';
-    };
-
-    group = mkOption {
-      type = types.str;
-      default = "copyparty";
-      description = ''
-        The group that copyparty will run under.
-
-        If changed from default, you are responsible for making sure the user exists.
-      '';
-    };
-
     openFilesLimit = mkOption {
       default = 4096;
       type = types.either types.int types.str;
@@ -114,28 +79,22 @@ in
       description = ''
         Global settings to apply.
         Directly maps to values in the [global] section of the copyparty config.
-        Cannot set "c" or "hist", those are set by this module.
         See `${getExe cfg.package} --help` for more details.
       '';
       default = {
         i = "127.0.0.1";
         no-reload = true;
-        hist = externalCacheDir;
       };
       example = literalExpression ''
         {
           i = "0.0.0.0";
           no-reload = true;
-          hist = ${externalCacheDir};
         }
       '';
     };

     accounts = mkOption {
-      type = types.attrsOf (
-        types.submodule (
-          { ... }:
-          {
+      type = types.attrsOf (types.submodule ({ ... }: {
         options = {
           passwordFile = mkOption {
             type = types.str;
@@ -146,9 +105,7 @@ in
             example = "/run/keys/copyparty/ed";
           };
         };
-          }
-        )
-      );
+      }));
       description = ''
         A set of copyparty accounts to create.
       '';
@@ -161,13 +118,10 @@ in
     };

     volumes = mkOption {
-      type = types.attrsOf (
-        types.submodule (
-          { ... }:
-          {
+      type = types.attrsOf (types.submodule ({ ... }: {
         options = {
           path = mkOption {
-            type = types.path;
+            type = types.str;
             description = ''
               Path of a directory to share.
             '';
@@ -226,16 +180,12 @@ in
             default = { };
           };
         };
-          }
-        )
-      );
+      }));
       description = "A set of copyparty volumes to create";
       default = {
         "/" = {
           path = defaultShareDir;
-          access = {
-            r = "*";
-          };
+          access = { r = "*"; };
         };
       };
       example = literalExpression ''
@@ -254,63 +204,52 @@ in
     };
   };

-  config = mkIf cfg.enable (
-    let
-      command = "${getExe cfg.package} -c ${runtimeConfigPath}";
-    in
-    {
+  config = mkIf cfg.enable {
     systemd.services.copyparty = {
       description = "http file sharing hub";
       wantedBy = [ "multi-user.target" ];

       environment = {
         PYTHONUNBUFFERED = "true";
-        XDG_CONFIG_HOME = externalStateDir;
+        XDG_CONFIG_HOME = "${home}/.config";
       };

-      preStart =
-        let
-          replaceSecretCommand =
-            name: attrs:
-            "${getExe pkgs.replace-secret} '${passwordPlaceholder name}' '${attrs.passwordFile}' ${runtimeConfigPath}";
-        in
-        ''
+      preStart = let
+        replaceSecretCommand = name: attrs:
+          "${getExe pkgs.replace-secret} '${
+            passwordPlaceholder name
+          }' '${attrs.passwordFile}' ${runtimeConfigPath}";
+      in ''
         set -euo pipefail
         install -m 600 ${configFile} ${runtimeConfigPath}
-        ${concatStringsSep "\n" (mapAttrsToList replaceSecretCommand cfg.accounts)}
+        ${concatStringsSep "\n"
+        (mapAttrsToList replaceSecretCommand cfg.accounts)}
       '';

       serviceConfig = {
         Type = "simple";
-        ExecStart = command;
+        ExecStart = "${getExe cfg.package} -c ${runtimeConfigPath}";

         # Hardening options
-        User = cfg.user;
-        Group = cfg.group;
-        RuntimeDirectory = [ "copyparty" ];
+        User = "copyparty";
+        Group = "copyparty";
+        RuntimeDirectory = name;
         RuntimeDirectoryMode = "0700";
-        StateDirectory = [ "copyparty" ];
+        StateDirectory = [ name "${name}/data" "${name}/.config" ];
         StateDirectoryMode = "0700";
-        CacheDirectory = lib.mkIf (cfg.settings ? hist) [ "copyparty" ];
-        CacheDirectoryMode = lib.mkIf (cfg.settings ? hist) "0700";
-        WorkingDirectory = externalStateDir;
+        WorkingDirectory = home;
+        TemporaryFileSystem = "/:ro";
         BindReadOnlyPaths = [
           "/nix/store"
           "-/etc/resolv.conf"
           "-/etc/nsswitch.conf"
-          "-/etc/group"
           "-/etc/hosts"
           "-/etc/localtime"
         ] ++ (mapAttrsToList (k: v: "-${v.passwordFile}") cfg.accounts);
-        BindPaths =
-          (if cfg.settings ? hist then [ cfg.settings.hist ] else [ ])
-          ++ [ externalStateDir ]
-          ++ (mapAttrsToList (k: v: v.path) cfg.volumes);
-        # ProtectSystem = "strict";
-        # Note that unlike what 'ro' implies,
-        # this actually makes it impossible to read anything in the root FS,
-        # except for things explicitly mounted via `RuntimeDirectory`, `StateDirectory`, `CacheDirectory`, and `BindReadOnlyPaths`.
-        # This is because TemporaryFileSystem creates a *new* *empty* filesystem for the process, so only bindmounts are visible.
-        TemporaryFileSystem = "/:ro";
+        BindPaths = [ home ] ++ (mapAttrsToList (k: v: v.path) cfg.volumes);
+        # Would re-mount paths ignored by temporary root
+        #ProtectSystem = "strict";
+        ProtectHome = true;
         PrivateTmp = true;
         PrivateDevices = true;
         ProtectKernelTunables = true;
@@ -330,46 +269,15 @@ in
         NoNewPrivileges = true;
         LockPersonality = true;
         RestrictRealtime = true;
-        MemoryDenyWriteExecute = true;
       };
     };

-    # ensure volumes exist:
-    systemd.tmpfiles.settings."copyparty" = (
-      lib.attrsets.mapAttrs' (
-        name: value:
-        lib.attrsets.nameValuePair (value.path) {
-          d = {
-            #: in front of things means it wont change it if the directory already exists.
-            group = ":${cfg.group}";
-            user = ":${cfg.user}";
-            mode = ":755";
-          };
-        }
-      ) cfg.volumes
-    );
-
-    users.groups.copyparty = lib.mkIf (cfg.user == "copyparty" && cfg.group == "copyparty") { };
-    users.users.copyparty = lib.mkIf (cfg.user == "copyparty" && cfg.group == "copyparty") {
+    users.groups.copyparty = { };
+    users.users.copyparty = {
       description = "Service user for copyparty";
       group = "copyparty";
-      home = externalStateDir;
+      home = home;
       isSystemUser = true;
     };
-    environment.systemPackages = lib.mkIf cfg.mkHashWrapper [
-      (pkgs.writeShellScriptBin "copyparty-hash" ''
-        set -a # automatically export variables
-        # set same environment variables as the systemd service
-        ${lib.pipe config.systemd.services.copyparty.environment [
-          (lib.filterAttrs (n: v: v != null && n != "PATH"))
-          (lib.mapAttrs (_: v: "${v}"))
-          (lib.toShellVars)
-        ]}
-        PATH=${config.systemd.services.copyparty.environment.PATH}:$PATH
-
-        exec ${command} --ah-cli
-      '')
-    ];
-  }
-  );
+  };
 }

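The mkKeyValue / mkValueString helpers above flatten the `settings` attrset into copyparty's config syntax: a key whose value is `true` collapses to the bare flag name, and list values are joined into one comma-separated string. A rough Python sketch of the same flattening, for illustration only (the setting names below are examples, not part of the module):

    # illustrative analogue of the module's key/value coercion rules
    def to_config_lines(settings):
        def fmt(v):
            if isinstance(v, list):
                return ",".join(fmt(x) for x in v)  # lists become one comma-joined value
            return str(v)
        out = []
        for key, val in settings.items():
            if val is True:
                out.append(key)  # true booleans are coerced to just the key name
            else:
                out.append("%s: %s" % (key, fmt(val)))
        return "\n".join(out)

    print(to_config_lines({"i": "127.0.0.1", "no-reload": True}))
    # i: 127.0.0.1
    # no-reload
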
@@ -1,48 +1,56 @@
 # Maintainer: icxes <dev.null@need.moe>
-# Contributor: Morgan Adamiec <morganamilo@archlinux.org>
-# NOTE: You generally shouldn't use this PKGBUILD on Arch, as it is mainly for testing purposes. Install copyparty using pacman instead.

 pkgname=copyparty
-pkgver="1.19.4"
+pkgver="1.14.2"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
 url="https://github.com/9001/${pkgname}"
 license=('MIT')
-depends=("bash" "python" "lsof" "python-jinja")
+depends=("python" "lsof" "python-jinja")
 makedepends=("python-wheel" "python-setuptools" "python-build" "python-installer" "make" "pigz")
 optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tags"
-            "cfssl: generate TLS certificates on startup"
+            "cfssl: generate TLS certificates on startup (pointless when reverse-proxied)"
             "python-mutagen: music tags (alternative)"
             "python-pillow: thumbnails for images"
             "python-pyvips: thumbnails for images (higher quality, faster, uses more ram)"
-            "libkeyfinder: detection of musical keys"
+            "libkeyfinder-git: detection of musical keys"
+            "qm-vamp-plugins: BPM detection"
             "python-pyopenssl: ftps functionality"
-            "python-pyzmq: send zeromq messages from event-hooks"
-            "python-argon2-cffi: hashed passwords in config"
+            "python-argon2_cffi: hashed passwords in config"
+            "python-impacket-git: smb support (bad idea)"
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
-backup=("etc/${pkgname}/copyparty.conf" )
-sha256sums=("b0e84a78eb2701cb7447b6023afcec280c550617dde67b6f0285bb23483111eb")
+backup=("etc/${pkgname}.d/init" )
+sha256sums=("a39f3950c663671d635c453d1a400f6cec6ec827e7dc9d22c3e791b8ab54017b")

 build() {
-    cd "${srcdir}/${pkgname}-${pkgver}/copyparty/web"
-    make

     cd "${srcdir}/${pkgname}-${pkgver}"
-    python -m build --wheel --no-isolation
+    pushd copyparty/web
+    make -j$(nproc)
+    rm Makefile
+    popd

+    python3 -m build -wn
 }

 package() {
     cd "${srcdir}/${pkgname}-${pkgver}"
-    python -m installer --destdir="$pkgdir" dist/*.whl
+    python3 -m installer -d "$pkgdir" dist/*.whl

-    install -dm755 "${pkgdir}/etc/${pkgname}"
+    install -dm755 "${pkgdir}/etc/${pkgname}.d"
     install -Dm755 "bin/prisonparty.sh" "${pkgdir}/usr/bin/prisonparty"
-    install -Dm644 "contrib/systemd/${pkgname}.conf" "${pkgdir}/etc/${pkgname}/copyparty.conf"
-    install -Dm644 "contrib/systemd/${pkgname}@.service" "${pkgdir}/usr/lib/systemd/system/${pkgname}@.service"
-    install -Dm644 "contrib/systemd/${pkgname}-user.service" "${pkgdir}/usr/lib/systemd/user/${pkgname}.service"
-    install -Dm644 "contrib/systemd/prisonparty@.service" "${pkgdir}/usr/lib/systemd/system/prisonparty@.service"
-    install -Dm644 "contrib/systemd/index.md" "${pkgdir}/var/lib/${pkgname}-jail/README.md"
+    install -Dm644 "contrib/package/arch/${pkgname}.conf" "${pkgdir}/etc/${pkgname}.d/init"
+    install -Dm644 "contrib/package/arch/${pkgname}.service" "${pkgdir}/usr/lib/systemd/system/${pkgname}.service"
+    install -Dm644 "contrib/package/arch/prisonparty.service" "${pkgdir}/usr/lib/systemd/system/prisonparty.service"
+    install -Dm644 "contrib/package/arch/index.md" "${pkgdir}/var/lib/${pkgname}-jail/README.md"
     install -Dm644 "LICENSE" "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"

+    find /etc/${pkgname}.d -iname '*.conf' 2>/dev/null | grep -qE . && return
+    echo "┏━━━━━━━━━━━━━━━──-"
+    echo "┃ Configure ${pkgname} by adding .conf files into /etc/${pkgname}.d/"
+    echo "┃ and maybe copy+edit one of the following to /etc/systemd/system/:"
+    echo "┣━♦ /usr/lib/systemd/system/${pkgname}.service (standard)"
+    echo "┣━♦ /usr/lib/systemd/system/prisonparty.service (chroot)"
+    echo "┗━━━━━━━━━━━━━━━──-"
 }

@@ -1,44 +0,0 @@
-# Contributor: Beethoven <beethovenisadog@protonmail.com>
-
-
-pkgname=copyparty
-pkgver=1.19.4
-pkgrel=1
-pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
-arch=("any")
-url="https://github.com/9001/${pkgname}"
-license=('MIT')
-depends=("bash" "python3" "lsof" "python3-jinja2")
-makedepends=("python3-wheel" "python3-setuptools" "python3-build" "python3-installer" "make" "pigz")
-optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tags"
-            "golang-cfssl: generate TLS certificates on startup"
-            "python3-mutagen: music tags (alternative)"
-            "python3-pil: thumbnails for images"
-            "python3-openssl: ftps functionality"
-            "python3-zmq: send zeromq messages from event-hooks"
-            "python3-argon2: hashed passwords in config"
-)
-source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
-backup=("/etc/${pkgname}.d/init" )
-sha256sums=("b0e84a78eb2701cb7447b6023afcec280c550617dde67b6f0285bb23483111eb")
-
-build() {
-    cd "${srcdir}/${pkgname}-${pkgver}/copyparty/web"
-    make
-
-    cd "${srcdir}/${pkgname}-${pkgver}"
-    python -m build --wheel --no-isolation
-}
-
-package() {
-    cd "${srcdir}/${pkgname}-${pkgver}"
-    python -m installer --destdir="$pkgdir" dist/*.whl
-
-    install -dm755 "${pkgdir}/etc/${pkgname}.d"
-    install -Dm755 "bin/prisonparty.sh" "${pkgdir}/usr/bin/prisonparty"
-    install -Dm644 "contrib/package/makedeb-mpr/${pkgname}.conf" "${pkgdir}/etc/${pkgname}.d/init"
-    install -Dm644 "contrib/package/makedeb-mpr/${pkgname}.service" "${pkgdir}/usr/lib/systemd/system/${pkgname}.service"
-    install -Dm644 "contrib/package/makedeb-mpr/prisonparty.service" "${pkgdir}/usr/lib/systemd/system/prisonparty.service"
-    install -Dm644 "contrib/package/makedeb-mpr/index.md" "${pkgdir}/var/lib/${pkgname}-jail/README.md"
-    install -Dm644 "LICENSE" "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
-}

@@ -1,118 +1,63 @@
-{
-  lib,
-  buildPythonApplication,
-  fetchurl,
-  util-linux,
-  python,
-  setuptools,
-  jinja2,
-  impacket,
-  pyopenssl,
-  cfssl,
-  argon2-cffi,
-  pillow,
-  pyvips,
-  pyzmq,
-  ffmpeg,
-  mutagen,
-  pyftpdlib,
-  magic,
-  partftpy,
-  fusepy, # for partyfuse
+{ lib, stdenv, makeWrapper, fetchurl, utillinux, python, jinja2, impacket, pyftpdlib, pyopenssl, argon2-cffi, pillow, pyvips, ffmpeg, mutagen,

   # use argon2id-hashed passwords in config files (sha2 is always available)
   withHashedPasswords ? true,

   # generate TLS certificates on startup (pointless when reverse-proxied)
   withCertgen ? false,

   # create thumbnails with Pillow; faster than FFmpeg / MediaProcessing
   withThumbnails ? true,

   # create thumbnails with PyVIPS; even faster, uses more memory
   # -- can be combined with Pillow to support more filetypes
   withFastThumbnails ? false,

   # enable FFmpeg; thumbnails for most filetypes (also video and audio), extract audio metadata, transcode audio to opus
   # -- possibly dangerous if you allow anonymous uploads, since FFmpeg has a huge attack surface
   # -- can be combined with Thumbnails and/or FastThumbnails, since FFmpeg is slower than both
   withMediaProcessing ? true,

   # if MediaProcessing is not enabled, you probably want this instead (less accurate, but much safer and faster)
   withBasicAudioMetadata ? false,

-  # send ZeroMQ messages from event-hooks
-  withZeroMQ ? true,
+  # enable FTPS support in the FTP server
+  withFTPS ? false,

-  # enable FTP server
-  withFTP ? true,
+  # samba/cifs server; dangerous and buggy, enable if you really need it
+  withSMB ? false,

-  # enable FTPS support in the FTP server
-  withFTPS ? false,
-
-  # enable TFTP server
-  withTFTP ? false,
-
-  # samba/cifs server; dangerous and buggy, enable if you really need it
-  withSMB ? false,
-
-  # enables filetype detection for nameless uploads
-  withMagic ? false,
-
-  # extra packages to add to the PATH
-  extraPackages ? [ ],
-
-  # function that accepts a python packageset and returns a list of packages to
-  # be added to the python venv. useful for scripts and such that require
-  # additional dependencies
-  extraPythonPackages ? (_p: [ ]),
-
 }:

 let
   pinData = lib.importJSON ./pin.json;
-  runtimeDeps = ([ util-linux ] ++ extraPackages ++ lib.optional withMediaProcessing ffmpeg);
-in
-buildPythonApplication {
-  pname = "copyparty";
-  inherit (pinData) version;
-  src = fetchurl {
-    inherit (pinData) url hash;
-  };
-  dependencies =
-    [
+  pyEnv = python.withPackages (ps:
+    with ps; [
       jinja2
-      fusepy
     ]
     ++ lib.optional withSMB impacket
-    ++ lib.optional withFTP pyftpdlib
     ++ lib.optional withFTPS pyopenssl
-    ++ lib.optional withTFTP partftpy
     ++ lib.optional withCertgen cfssl
     ++ lib.optional withThumbnails pillow
     ++ lib.optional withFastThumbnails pyvips
     ++ lib.optional withMediaProcessing ffmpeg
     ++ lib.optional withBasicAudioMetadata mutagen
     ++ lib.optional withHashedPasswords argon2-cffi
-    ++ lib.optional withZeroMQ pyzmq
-    ++ lib.optional withMagic magic
-    ++ (extraPythonPackages python.pkgs);
-  makeWrapperArgs = [ "--prefix PATH : ${lib.makeBinPath runtimeDeps}" ];
-
-  pyproject = true;
-  build-system = [
-    setuptools
-  ];
-  meta = {
-    description = "Turn almost any device into a file server";
-    longDescription = ''
-      Portable file server with accelerated resumable uploads, dedup, WebDAV,
-      FTP, TFTP, zeroconf, media indexer, thumbnails++ all in one file, no deps
-    '';
-    homepage = "https://github.com/9001/copyparty";
-    changelog = "https://github.com/9001/copyparty/releases/tag/v${pinData.version}";
-    license = lib.licenses.mit;
-    mainProgram = "copyparty";
-    sourceProvenance = [ lib.sourceTypes.fromSource ];
+    );
+in stdenv.mkDerivation {
+  pname = "copyparty";
+  version = pinData.version;
+  src = fetchurl {
+    url = pinData.url;
+    hash = pinData.hash;
   };
+  buildInputs = [ makeWrapper ];
+  dontUnpack = true;
+  dontBuild = true;
+  installPhase = ''
+    install -Dm755 $src $out/share/copyparty-sfx.py
+    makeWrapper ${pyEnv.interpreter} $out/bin/copyparty \
+      --set PATH '${lib.makeBinPath ([ utillinux ] ++ lib.optional withMediaProcessing ffmpeg)}:$PATH' \
+      --add-flags "$out/share/copyparty-sfx.py"
+  '';
 }

@@ -1,5 +1,5 @@
 {
-    "url": "https://github.com/9001/copyparty/releases/download/v1.19.4/copyparty-1.19.4.tar.gz",
-    "version": "1.19.4",
-    "hash": "sha256-sOhKeOsnAct0R7YCOvzsKAxVBhfd5ntvAoW7I0gxEes="
+    "url": "https://github.com/9001/copyparty/releases/download/v1.14.2/copyparty-sfx.py",
+    "version": "1.14.2",
+    "hash": "sha256-n9Dj2MMrvkWhlXAKWOXn5YQsFCxNpgo5HDFQ111a66A="
 }

@@ -11,14 +11,14 @@ import base64
 import json
 import hashlib
 import sys
-import tarfile
+import re
 from pathlib import Path

 OUTPUT_FILE = Path("pin.json")
-TARGET_ASSET = lambda version: f"copyparty-{version}.tar.gz"
+TARGET_ASSET = "copyparty-sfx.py"
 HASH_TYPE = "sha256"
 LATEST_RELEASE_URL = "https://api.github.com/repos/9001/copyparty/releases/latest"
-DOWNLOAD_URL = lambda version: f"https://github.com/9001/copyparty/releases/download/v{version}/{TARGET_ASSET(version)}"
+DOWNLOAD_URL = lambda version: f"https://github.com/9001/copyparty/releases/download/v{version}/{TARGET_ASSET}"


 def get_formatted_hash(binary):
@@ -29,13 +29,11 @@ def get_formatted_hash(binary):
     return f"{HASH_TYPE}-{encoded_hash}"


-def version_from_tar_gz(path):
-    with tarfile.open(path) as tarball:
-        release_name = tarball.getmembers()[0].name
-        prefix = "copyparty-"
+def version_from_sfx(binary):
+    result = re.search(b'^VER = "(.*)"$', binary, re.MULTILINE)
+    if result:
+        return result.groups(1)[0].decode("ascii")

-        if release_name.startswith(prefix):
-            return release_name.replace(prefix, "")
     raise ValueError("version not found in provided file")


@@ -44,7 +42,7 @@ def remote_release_pin():

     response = requests.get(LATEST_RELEASE_URL).json()
     version = response["tag_name"].lstrip("v")
-    asset_info = [a for a in response["assets"] if a["name"] == TARGET_ASSET(version)][0]
+    asset_info = [a for a in response["assets"] if a["name"] == TARGET_ASSET][0]
     download_url = asset_info["browser_download_url"]
     asset = requests.get(download_url)
     formatted_hash = get_formatted_hash(asset.content)
@@ -54,9 +52,10 @@ def remote_release_pin():


 def local_release_pin(path):
-    version = version_from_tar_gz(path)
+    asset = path.read_bytes()
+    version = version_from_sfx(asset)
     download_url = DOWNLOAD_URL(version)
-    formatted_hash = get_formatted_hash(path.read_bytes())
+    formatted_hash = get_formatted_hash(asset)

     result = {"url": download_url, "version": version, "hash": formatted_hash}
     return result

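The hash that ends up in pin.json is in SRI form: the string "sha256-" followed by the base64 of the raw sha256 digest, which is exactly what get_formatted_hash above produces. A self-contained sanity check of that format (the empty-input digest is a well-known constant):

    import base64
    import hashlib

    def sri_sha256(binary):
        # same construction as get_formatted_hash: b64 of the raw digest
        digest = hashlib.sha256(binary).digest()
        return "sha256-" + base64.b64encode(digest).decode("ascii")

    assert sri_sha256(b"") == "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="
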
@@ -1,30 +0,0 @@
-{
-  lib,
-  buildPythonPackage,
-  fetchurl,
-  setuptools,
-}:
-let
-  pinData = lib.importJSON ./pin.json;
-in
-
-buildPythonPackage rec {
-  pname = "partftpy";
-  inherit (pinData) version;
-  pyproject = true;
-
-  src = fetchurl {
-    inherit (pinData) url hash;
-  };
-
-  build-system = [ setuptools ];
-
-  pythonImportsCheck = [ "partftpy.TftpServer" ];
-
-  meta = {
-    description = "Pure Python TFTP library (copyparty edition)";
-    homepage = "https://github.com/9001/partftpy";
-    changelog = "https://github.com/9001/partftpy/releases/tag/${version}";
-    license = lib.licenses.mit;
-  };
-}

@@ -1,5 +0,0 @@
-{
-    "url": "https://github.com/9001/partftpy/releases/download/v0.4.0/partftpy-0.4.0.tar.gz",
-    "version": "0.4.0",
-    "hash": "sha256-5Q2zyuJ892PGZmb+YXg0ZPW/DK8RDL1uE0j5HPd4We0="
-}

@@ -1,50 +0,0 @@
-#!/usr/bin/env python3
-
-# Update the Nix package pin
-#
-# Usage: ./update.sh
-
-import base64
-import json
-import hashlib
-import sys
-from pathlib import Path
-
-OUTPUT_FILE = Path("pin.json")
-TARGET_ASSET = lambda version: f"partftpy-{version}.tar.gz"
-HASH_TYPE = "sha256"
-LATEST_RELEASE_URL = "https://api.github.com/repos/9001/partftpy/releases/latest"
-
-
-def get_formatted_hash(binary):
-    hasher = hashlib.new("sha256")
-    hasher.update(binary)
-    asset_hash = hasher.digest()
-    encoded_hash = base64.b64encode(asset_hash).decode("ascii")
-    return f"{HASH_TYPE}-{encoded_hash}"
-
-
-def remote_release_pin():
-    import requests
-
-    response = requests.get(LATEST_RELEASE_URL).json()
-    version = response["tag_name"].lstrip("v")
-    asset_info = [a for a in response["assets"] if a["name"] == TARGET_ASSET(version)][0]
-    download_url = asset_info["browser_download_url"]
-    asset = requests.get(download_url)
-    formatted_hash = get_formatted_hash(asset.content)
-
-    result = {"url": download_url, "version": version, "hash": formatted_hash}
-    return result
-
-
-def main():
-    result = remote_release_pin()
-
-    print(result)
-    json_result = json.dumps(result, indent=4)
-    OUTPUT_FILE.write_text(json_result)
-
-
-if __name__ == "__main__":
-    main()

@@ -1,62 +0,0 @@
-Name: copyparty
-Version: $pkgver
-Release: $pkgrel
-License: MIT
-Group: Utilities
-URL: https://github.com/9001/copyparty
-Source0: copyparty-$pkgver.tar.gz
-Summary: File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++
-BuildArch: noarch
-BuildRequires: python3, python3-devel, pyproject-rpm-macros, python-setuptools, python-wheel, make
-Requires: python3, (python3-jinja2 or python-jinja2), lsof
-Recommends: ffmpeg, (golang-github-cloudflare-cfssl or cfssl), python-mutagen, python-pillow, python-pyvips
-Recommends: qm-vamp-plugins, python-argon2-cffi, (python-pyopenssl or pyopenssl), python-impacket
-
-%description
-Portable file server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++ all in one file, no deps
-
-See release at https://github.com/9001/copyparty/releases
-
-%global debug_package %{nil}
-
-%generate_buildrequires
-%pyproject_buildrequires
-
-%prep
-%setup -q
-
-%build
-cd "copyparty/web"
-make
-cd -
-%pyproject_wheel
-
-%install
-mkdir -p %{buildroot}%{_bindir}
-mkdir -p %{buildroot}%{_libdir}/systemd/{system,user}
-mkdir -p %{buildroot}/etc/%{name}
-mkdir -p %{buildroot}/var/lib/%{name}-jail
-mkdir -p %{buildroot}%{_datadir}/licenses/%{name}
-
-%pyproject_install
-%pyproject_save_files copyparty
-
-install -m 0755 bin/prisonparty.sh %{buildroot}%{_bindir}/prisonpary.sh
-install -m 0644 contrib/systemd/%{name}.conf %{buildroot}/etc/%{name}/%{name}.conf
-install -m 0644 contrib/systemd/%{name}@.service %{buildroot}%{_libdir}/systemd/system/%{name}@.service
-install -m 0644 contrib/systemd/%{name}-user.service %{buildroot}%{_libdir}/systemd/user/%{name}.service
-install -m 0644 contrib/systemd/prisonparty@.service %{buildroot}%{_libdir}/systemd/system/prisonparty@.service
-install -m 0644 contrib/systemd/index.md %{buildroot}/var/lib/%{name}-jail/README.md
-install -m 0644 LICENSE %{buildroot}%{_datadir}/licenses/%{name}/LICENSE
-
-%files -n copyparty -f %{pyproject_files}
-%license LICENSE
-%{_bindir}/copyparty
-%{_bindir}/partyfuse
-%{_bindir}/u2c
-%{_bindir}/prisonpary.sh
-/etc/%{name}/%{name}.conf
-%{_libdir}/systemd/system/%{name}@.service
-%{_libdir}/systemd/user/%{name}.service
-%{_libdir}/systemd/system/prisonparty@.service
-/var/lib/%{name}-jail/README.md

@@ -15,7 +15,6 @@ save one of these as `.epilogue.html` inside a folder to customize it:
 point `--js-browser` to one of these by URL:

 * [`minimal-up2k.js`](minimal-up2k.js) is similar to the above `minimal-up2k.html` except it applies globally to all write-only folders
-* [`quickmove.js`](quickmove.js) adds a hotkey to move selected files into a subfolder
 * [`up2k-hooks.js`](up2k-hooks.js) lets you specify a ruleset for files to skip uploading
 * [`up2k-hook-ytid.js`](up2k-hook-ytid.js) is a more specific example checking youtube-IDs against some API

@@ -1,117 +0,0 @@
-// USAGE:
-//   place this file somewhere in the webroot and then
-//   python3 -m copyparty --js-browser /.res/graft-thumbs.js
-//
-// DESCRIPTION:
-//   this is a gridview plugin which, for each file in a folder,
-//   looks for another file with the same filename (but with a
-//   different file extension)
-//
-//   if one of those files is an image and the other is not,
-//   then this plugin assumes the image is a "sidecar thumbnail"
-//   for the other file, and it will graft the image thumbnail
-//   onto the non-image file (for example an mp3)
-//
-//   optional feature 1, default-enabled:
-//     the image-file is then hidden from the directory listing
-//
-//   optional feature 2, default-enabled:
-//     when clicking the audio file, the image will also open
-
-
-(function() {
-
-    // `graft_thumbs` assumes the gridview has just been rendered;
-    // it looks for sidecars, and transplants those thumbnails onto
-    // the other file with the same basename (filename sans extension)
-
-    var graft_thumbs = function () {
-        if (!thegrid.en)
-            return; // not in grid mode
-
-        var files = msel.getall(),
-            pairs = {};
-
-        console.log(files);
-
-        for (var a = 0; a < files.length; a++) {
-            var file = files[a],
-                is_pic = /\.(jpe?g|png|gif|webp)$/i.exec(file.vp),
-                is_audio = re_au_all.exec(file.vp),
-                basename = file.vp.replace(/\.[^\.]+$/, ""),
-                entry = pairs[basename];
-
-            if (!entry)
-                // first time seeing this basename; create a new entry in pairs
-                entry = pairs[basename] = {};
-
-            if (is_pic)
-                entry.thumb = file;
-            else if (is_audio)
-                entry.audio = file;
-        }
-
-        var basenames = Object.keys(pairs);
-        for (var a = 0; a < basenames.length; a++)
-            (function(a) {
-                var pair = pairs[basenames[a]];
-
-                if (!pair.thumb || !pair.audio)
-                    return; // not a matching pair of files
-
-                var img_thumb = QS('#ggrid a[ref="' + pair.thumb.id + '"] img[onload]'),
-                    img_audio = QS('#ggrid a[ref="' + pair.audio.id + '"] img[onload]');
-
-                if (!img_thumb || !img_audio)
-                    return; // something's wrong... let's bail
-
-                // alright, graft the thumb...
-                img_audio.src = img_thumb.src;
-
-                // ...and hide the sidecar
-                img_thumb.closest('a').style.display = 'none';
-
-                // ...and add another onclick-handler to the audio,
-                // so it also opens the pic while playing the song
-                img_audio.addEventListener('click', function() {
-                    img_thumb.click();
-                    return false; // let it bubble to the next listener
-                });
-
-            })(a);
-    };
-
-    // ...and then the trick! near the end of loadgrid,
-    // thegrid.bagit is called to initialize the baguettebox
-    // (image/video gallery); this is the perfect function to
-    // "hook" (hijack) so we can run our code :^)
-
-    // need to grab a backup of the original function first,
-    var orig_func = thegrid.bagit;
-
-    // and then replace it with our own:
-    thegrid.bagit = function (isrc) {
-
-        if (isrc !== '#ggrid')
-            // we only want to modify the grid, so
-            // let the original function handle this one
-            return orig_func(isrc);
-
-        graft_thumbs();
-
-        // when changing directories, the grid is
-        // rendered before msel returns the correct
-        // filenames, so schedule another run:
-        setTimeout(graft_thumbs, 1);
-
-        // and finally, call the original thegrid.bagit function
-        return orig_func(isrc);
-    };
-
-    if (ls0) {
-        // the server included an initial listing json (ls0),
-        // so the grid has already been rendered without our hook
-        graft_thumbs();
-    }
-
-})();

@@ -12,23 +12,6 @@ almost the same as minimal-up2k.html except this one...:

 -- looks slightly better


-========================
-== USAGE INSTRUCTIONS ==
-
-1. create a volume which anyone can read from (if you haven't already)
-2. copy this file into that volume, so anyone can download it
-3. enable the plugin by telling the webbrowser to load this file;
-   assuming the URL to the public volume is /res/, and
-   assuming you're using config-files, then add this to your config:
-
-     [global]
-       js-browser: /res/minimal-up2k.js
-
-   alternatively, if you're not using config-files, then
-   add the following commandline argument instead:
-     --js-browser=/res/minimal-up2k.js
-
 */

 var u2min = `

@@ -1,140 +0,0 @@
-"use strict";
-
-
-// USAGE:
-//   place this file somewhere in the webroot,
-//   for example in a folder named ".res" to hide it, and then
-//   python3 copyparty-sfx.py -v .::A --js-browser /.res/quickmove.js
-//
-// DESCRIPTION:
-//   the command above launches copyparty with one single volume;
-//   ".::A" = current folder as webroot, and everyone has Admin
-//
-//   the plugin adds hotkey "W" which moves all selected files
-//   into a subfolder named "foobar" inside the current folder
-
-
-(function() {
-
-    var action_to_perform = ask_for_confirmation_and_then_move;
-    // this decides what the new hotkey should do;
-    // ask_for_confirmation_and_then_move = show a yes/no box,
-    // move_selected_files = just move the files immediately
-
-    var move_destination = "foobar";
-    // this is the target folder to move files to;
-    // by default it is a subfolder of the current folder,
-    // but it can also be an absolute path like "/foo/bar"
-
-    // ===
-    // === END OF CONFIG
-    // ===
-
-    var main_hotkey_handler,  // copyparty's original hotkey handler
-        plugin_enabler,       // timer to engage this plugin when safe
-        files_to_move;        // list of files to move
-
-    function ask_for_confirmation_and_then_move() {
-        var num_files = msel.getsel().length,
-            msg = "move the selected " + num_files + " files?";
-
-        if (!num_files)
-            return toast.warn(2, 'no files were selected to be moved');
-
-        modal.confirm(msg, move_selected_files, null);
-    }
-
-    function move_selected_files() {
-        var selection = msel.getsel();
-
-        if (!selection.length)
-            return toast.warn(2, 'no files were selected to be moved');
-
-        if (thegrid.bbox) {
-            // close image/video viewer
-            thegrid.bbox = null;
-            baguetteBox.destroy();
-        }
-
-        files_to_move = [];
-        for (var a = 0; a < selection.length; a++)
-            files_to_move.push(selection[a].vp);
-
-        move_next_file();
-    }
-
-    function move_next_file() {
-        var num_files = files_to_move.length,
-            filepath = files_to_move.pop(),
-            filename = vsplit(filepath)[1];
-
-        toast.inf(10, "moving " + num_files + " files...\n\n" + filename);
-
-        var dst = move_destination;
-
-        if (!dst.endsWith('/'))
-            // must have a trailing slash, so add it
-            dst += '/';
-
-        if (!dst.startsWith('/'))
-            // destination is a relative path, so prefix current folder path
-            dst = get_evpath() + dst;
-
-        // and finally append the filename
-        dst += '/' + filename;
-
-        // prepare the move-request to be sent
-        var xhr = new XHR();
-        xhr.onload = xhr.onerror = function() {
-            if (this.status !== 201)
-                return toast.err(30, 'move failed: ' + esc(this.responseText));
-
-            if (files_to_move.length)
-                return move_next_file(); // still more files to go
-
-            toast.ok(1, 'move OK');
-            treectl.goto(); // reload the folder contents
-        };
-        xhr.open('POST', filepath + '?move=' + dst);
-        xhr.send();
-    }
-
-    function our_hotkey_handler(e) {
-        // bail if either ALT, CTRL, or SHIFT is pressed
-        if (e.altKey || e.shiftKey || e.isComposing || ctrl(e))
-            return main_hotkey_handler(e); // let copyparty handle this keystroke
-
-        var key_name = (e.code || e.key) + '',
-            ae = document.activeElement,
-            aet = ae && ae != document.body ? ae.nodeName.toLowerCase() : '';
-
-        // check the current aet (active element type),
-        // only continue if one of the following currently has input focus:
-        // nothing | link | button | table-row | table-cell | div | text
-        if (aet && !/^(a|button|tr|td|div|pre)$/.test(aet))
-            return main_hotkey_handler(e); // let copyparty handle this keystroke
-
-        if (key_name == 'KeyW') {
-            // okay, this one's for us... do the thing
-            action_to_perform();
-            return ev(e);
-        }
-
-        return main_hotkey_handler(e); // let copyparty handle this keystroke
-    }
-
-    function enable_plugin() {
-        if (!window.hotkeys_attached)
-            return console.log('quickmove is waiting for the page to finish loading');
-
-        clearInterval(plugin_enabler);
-        main_hotkey_handler = document.onkeydown;
-        document.onkeydown = our_hotkey_handler;
-        console.log('quickmove is now enabled');
-    }
-
-    // copyparty doesn't enable its hotkeys until the page
-    // has finished loading, so we'll wait for that too
-    plugin_enabler = setInterval(enable_plugin, 100);
-
-})();

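The quickmove plugin above drives copyparty's move API: it POSTs the file URL with a `?move=<absolute-destination>` query and treats a 201 response as success. The same call from Python, as a sketch only; the server address and paths below are made-up examples:

    import requests

    def move_file(base_url, filepath, dst):
        # mirrors xhr.open('POST', filepath + '?move=' + dst) from the plugin
        r = requests.post(base_url + filepath + "?move=" + dst)
        if r.status_code != 201:  # same success check as the plugin
            raise RuntimeError("move failed: " + r.text)

    move_file("http://127.0.0.1:3923", "/inbox/song.mp3", "/inbox/foobar/song.mp3")
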
@@ -1,26 +0,0 @@
-# this will start `/usr/bin/copyparty`
-# and read config from `$HOME/.config/copyparty.conf`
-#
-# unless you add -q to disable logging, you may want to remove the
-# following line to allow buffering (slightly better performance):
-#   Environment=PYTHONUNBUFFERED=x
-
-[Unit]
-Description=copyparty file server
-
-[Service]
-Type=notify
-SyslogIdentifier=copyparty
-WorkingDirectory=/var/lib/copyparty-jail
-Environment=PYTHONUNBUFFERED=x
-Environment=PRTY_CONFIG=%h/.config/copyparty/copyparty.conf
-ExecReload=/bin/kill -s USR1 $MAINPID
-
-# ensure there is a config
-ExecStartPre=/bin/bash -c 'if [[ ! -f %h/.config/copyparty/copyparty.conf ]]; then mkdir -p %h/.config/copyparty; cp /etc/copyparty/copyparty.conf %h/.config/copyparty/copyparty.conf; fi'
-
-# run copyparty
-ExecStart=/usr/bin/python3 /usr/bin/copyparty
-
-[Install]
-WantedBy=default.target

@@ -1,13 +1,42 @@
+# not actually YAML but lets pretend:
+# -*- mode: yaml -*-
+# vim: ft=yaml:
+
+
+# put this file in /etc/
+
+
 [global]
-  i: 127.0.0.1
+  e2dsa  # enable file indexing and filesystem scanning
+  e2ts   # and enable multimedia indexing
+  ansi   # and colors in log messages
+
+  # disable logging to stdout/journalctl and log to a file instead;
+  # $LOGS_DIRECTORY is usually /var/log/copyparty (comes from systemd)
+  # and copyparty replaces %Y-%m%d with Year-MonthDay, so the
+  # full path will be something like /var/log/copyparty/2023-1130.txt
+  # (note: enable compression by adding .xz at the end)
+  q, lo: $LOGS_DIRECTORY/%Y-%m%d.log
+
+  # p: 80,443,3923   # listen on 80/443 as well (requires CAP_NET_BIND_SERVICE)
+  # i: 127.0.0.1     # only allow connections from localhost (reverse-proxies)
+  # ftp: 3921        # enable ftp server on port 3921
+  # p: 3939          # listen on another port
+  # df: 16           # stop accepting uploads if less than 16 GB free disk space
+  # ver              # show copyparty version in the controlpanel
+  # grid             # show thumbnails/grid-view by default
+  # theme: 2         # monokai
+  # name: datasaver  # change the server-name that's displayed in the browser
+  # stats, nos-dup   # enable the prometheus endpoint, but disable the dupes counter (too slow)
+  # no-robots, force-js  # make it harder for search engines to read your server
+

 [accounts]
-  user: password
+  ed: wark  # username: password

-[/]
-  /var/lib/copyparty-jail
+[/]            # create a volume at "/" (the webroot), which will
+  /mnt         # share the contents of the "/mnt" folder
   accs:
-    r: *
-    rwdma: user
-  flags:
-    grid
+    rw: *      # everyone gets read-write access, but
+    rwmda: ed  # the user "ed" gets read-write-move-delete-admin

@@ -1,42 +0,0 @@
-# not actually YAML but lets pretend:
-# -*- mode: yaml -*-
-# vim: ft=yaml:
-
-
-# put this file in /etc/
-
-
-[global]
-  e2dsa  # enable file indexing and filesystem scanning
-  e2ts   # and enable multimedia indexing
-  ansi   # and colors in log messages
-
-  # disable logging to stdout/journalctl and log to a file instead;
-  # $LOGS_DIRECTORY is usually /var/log/copyparty (comes from systemd)
-  # and copyparty replaces %Y-%m%d with Year-MonthDay, so the
-  # full path will be something like /var/log/copyparty/2023-1130.txt
-  # (note: enable compression by adding .xz at the end)
-  q, lo: $LOGS_DIRECTORY/%Y-%m%d.log
-
-  # p: 80,443,3923   # listen on 80/443 as well (requires CAP_NET_BIND_SERVICE)
-  # i: 127.0.0.1     # only allow connections from localhost (reverse-proxies)
-  # ftp: 3921        # enable ftp server on port 3921
-  # p: 3939          # listen on another port
-  # df: 16           # stop accepting uploads if less than 16 GB free disk space
-  # ver              # show copyparty version in the controlpanel
-  # grid             # show thumbnails/grid-view by default
-  # theme: 2         # monokai
-  # name: datasaver  # change the server-name that's displayed in the browser
-  # stats, nos-dup   # enable the prometheus endpoint, but disable the dupes counter (too slow)
-  # no-robots, force-js  # make it harder for search engines to read your server
-
-
-[accounts]
-  ed: wark  # username: password
-
-
-[/]            # create a volume at "/" (the webroot), which will
-  /mnt         # share the contents of the "/mnt" folder
-  accs:
-    rw: *      # everyone gets read-write access, but
-    rwmda: ed  # the user "ed" gets read-write-move-delete-admin

@@ -1,30 +0,0 @@
-# this will start `/usr/bin/copyparty`
-# and read config from `/etc/copyparty/copyparty.conf`
-#
-# the %i refers to whatever you put after the copyparty@
-# so with copyparty@foo.service, %i == foo
-#
-# unless you add -q to disable logging, you may want to remove the
-# following line to allow buffering (slightly better performance):
-#   Environment=PYTHONUNBUFFERED=x
-
-[Unit]
-Description=copyparty file server
-
-[Service]
-Type=notify
-SyslogIdentifier=copyparty
-WorkingDirectory=/var/lib/copyparty-jail
-Environment=PYTHONUNBUFFERED=x
-Environment=PRTY_CONFIG=/etc/copyparty/copyparty.conf
-ExecReload=/bin/kill -s USR1 $MAINPID
-
-# user to run as + where the TLS certificate is (if any)
-User=%i
-Environment=XDG_CONFIG_HOME=/home/%i/.config
-
-# run copyparty
-ExecStart=/usr/bin/python3 /usr/bin/copyparty
-
-[Install]
-WantedBy=multi-user.target

@@ -1,10 +0,0 @@
-this is `/var/lib/copyparty-jail`, the fallback webroot when copyparty has not yet been configured
-
-please edit `/etc/copyparty/copyparty.conf` (if running as a system service)
-or `$HOME/.config/copyparty/copyparty.conf` if running as a user service
-
-a basic configuration example is available at https://github.com/9001/copyparty/blob/hovudstraum/contrib/systemd/copyparty.example.conf
-a configuration example that explains most flags is available at https://github.com/9001/copyparty/blob/hovudstraum/docs/chungus.conf
-
-the full list of configuration options can be seen at https://ocv.me/copyparty/helptext.html
-or by running `copyparty --help`

@@ -1,38 +0,0 @@
-# this will start `/usr/bin/copyparty`
-# in a chroot, preventing accidental access elsewhere,
-# and read copyparty config from `/etc/copyparty/copyparty.conf`
-#
-# expose additional filesystem locations to copyparty
-# by listing them between the last `%i` and `--`
-#
-# `%i %i` = user/group to run copyparty as; can be IDs (1000 1000)
-# the %i refers to whatever you put after the prisonparty@
-# so with prisonparty@foo.service, %i == foo
-#
-# unless you add -q to disable logging, you may want to remove the
-# following line to allow buffering (slightly better performance):
-#   Environment=PYTHONUNBUFFERED=x
-
-[Unit]
-Description=copyparty file server
-
-[Service]
-Type=notify
-SyslogIdentifier=prisonparty
-WorkingDirectory=/var/lib/copyparty-jail
-Environment=PYTHONUNBUFFERED=x
-Environment=PRTY_CONFIG=/etc/copyparty/copyparty.conf
-ExecReload=/bin/kill -s USR1 $MAINPID
-
-# user to run as + where the TLS certificate is (if any)
-User=%i
-Environment=XDG_CONFIG_HOME=/home/%i/.config
-
-# run copyparty
-ExecStart=/bin/bash /usr/bin/prisonparty /var/lib/copyparty-jail %i %i \
-    /etc/copyparty \
-    -- \
-    /usr/bin/python3 /usr/bin/copyparty
-
-[Install]
-WantedBy=multi-user.target

@@ -1,25 +0,0 @@
-# ./traefik --configFile=copyparty.yaml
-
-entryPoints:
-  web:
-    address: :8080
-    transport:
-      # don't disconnect during big uploads
-      respondingTimeouts:
-        readTimeout: "0s"
-log:
-  level: DEBUG
-providers:
-  file:
-    # WARNING: must be same filename as current file
-    filename: "copyparty.yaml"
-http:
-  services:
-    service-cpp:
-      loadBalancer:
-        servers:
-          - url: "http://127.0.0.1:3923/"
-  routers:
-    my-router:
-      rule: "PathPrefix(`/`)"
-      service: service-cpp


@@ -1,107 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import sqlite3
-import sys
-import traceback
-
-
-"""
-when the up2k-database is stored on a zfs volume, this may give
-slightly higher performance (actual gains not measured yet)
-
-NOTE: must be applied in combination with the related advice in the openzfs documentation;
-https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#database-workloads
-and see specifically the SQLite subsection
-
-it is assumed that all databases are stored in a single location,
-for example with `--hist /var/store/hists`
-
-three alternatives for running this script:
-
-1. copy it into /var/store/hists and run "python3 zfs-tune.py s"
-   (s = modify all databases below folder containing script)
-
-2. cd into /var/store/hists and run "python3 ~/zfs-tune.py w"
-   (w = modify all databases below current working directory)
-
-3. python3 ~/zfs-tune.py /var/store/hists
-
-if you use docker, run copyparty with `--hist /cfg/hists`, copy this script into /cfg, and run this:
-podman run --rm -it --entrypoint /usr/bin/python3 ghcr.io/9001/copyparty-ac /cfg/zfs-tune.py s
-
-"""
-
-
-PAGESIZE = 65536
-
-
-# borrowed from copyparty; short efficient stacktrace for errors
-def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
-    et, ev, tb = sys.exc_info()
-    stb = traceback.extract_tb(tb) if tb else traceback.extract_stack()[:-1]
-    fmt = "%s:%d <%s>: %s"
-    ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
-    if et or ev or tb:
-        ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))
-    return "\n".join(ex[-max_lines:][:: -1 if reverse else 1])
-
-
-def set_pagesize(db_path):
-    try:
-        # check current page_size
-        with sqlite3.connect(db_path) as db:
-            v = db.execute("pragma page_size").fetchone()[0]
-            if v == PAGESIZE:
-                print(" `-- OK")
-                return
-
-        # https://www.sqlite.org/pragma.html#pragma_page_size
-        # `- disable wal; set pagesize; vacuum
-        # (copyparty will reenable wal if necessary)
-
-        with sqlite3.connect(db_path) as db:
-            db.execute("pragma journal_mode=delete")
-            db.commit()
-
-        with sqlite3.connect(db_path) as db:
-            db.execute(f"pragma page_size = {PAGESIZE}")
-            db.execute("vacuum")
-
-        print(" `-- new pagesize OK")
-
-    except Exception:
-        err = min_ex().replace("\n", "\n -- ")
-        print(f"FAILED: {db_path}\n -- {err}")
-
-
-def main():
-    top = os.path.dirname(os.path.abspath(__file__))
-    cwd = os.path.abspath(os.getcwd())
-    try:
-        x = sys.argv[1]
-    except:
-        print(f"""
-this script takes one mandatory argument:
-specify 's' to start recursing from folder containing this script file ({top})
-specify 'w' to start recursing from the current working directory ({cwd})
-specify a path to start recursing from there
-""")
-        sys.exit(1)
-
-    if x.lower() == "w":
-        top = cwd
-    elif x.lower() != "s":
-        top = x
-
-    for dirpath, dirs, files in os.walk(top):
-        for fname in files:
-            if not fname.endswith(".db"):
-                continue
-            db_path = os.path.join(dirpath, fname)
-            print(db_path)
-            set_pagesize(db_path)
-
-
-if __name__ == "__main__":
-    main()
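
A quick way to verify what zfs-tune.py did is to read the pragma back; a minimal sketch, assuming one of the affected databases lives at the path below (filename and location are just examples):

    import sqlite3

    db = sqlite3.connect("/var/store/hists/up2k.db")  # hypothetical path
    print(db.execute("pragma page_size").fetchone()[0])  # 65536 once tuned
    db.close()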

copyparty/__init__.py
@@ -16,10 +16,9 @@ except:
     TYPE_CHECKING = False
 
 if True:
-    from typing import Any, Callable, Optional
+    from typing import Any, Callable
 
 PY2 = sys.version_info < (3,)
-PY36 = sys.version_info > (3, 6)
 if not PY2:
     unicode: Callable[[Any], str] = str
 else:

@@ -51,61 +50,6 @@ try:
 except:
     CORES = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
 
-# all embedded resources to be retrievable over http
-zs = """
-web/a/partyfuse.py
-web/a/u2c.py
-web/a/webdav-cfg.bat
-web/baguettebox.js
-web/browser.css
-web/browser.html
-web/browser.js
-web/browser2.html
-web/cf.html
-web/copyparty.gif
-web/deps/busy.mp3
-web/deps/easymde.css
-web/deps/easymde.js
-web/deps/marked.js
-web/deps/fuse.py
-web/deps/mini-fa.css
-web/deps/mini-fa.woff
-web/deps/prism.css
-web/deps/prism.js
-web/deps/prismd.css
-web/deps/scp.woff2
-web/deps/sha512.ac.js
-web/deps/sha512.hw.js
-web/idp.html
-web/iiam.gif
-web/md.css
-web/md.html
-web/md.js
-web/md2.css
-web/md2.js
-web/mde.css
-web/mde.html
-web/mde.js
-web/msg.css
-web/msg.html
-web/rups.css
-web/rups.html
-web/rups.js
-web/shares.css
-web/shares.html
-web/shares.js
-web/splash.css
-web/splash.html
-web/splash.js
-web/svcs.html
-web/svcs.js
-web/ui.css
-web/up2k.js
-web/util.js
-web/w.hash.js
-"""
-RES = set(zs.strip().split("\n"))
-
-
 class EnvParams(object):
     def __init__(self) -> None:

File diff suppressed because it is too large

copyparty/__version__.py
@@ -1,8 +1,8 @@
 # coding: utf-8
 
-VERSION = (1, 19, 4)
-CODENAME = "usernames"
-BUILD_DT = (2025, 8, 17)
+VERSION = (1, 14, 3)
+CODENAME = "one step forward"
+BUILD_DT = (2024, 8, 30)
 
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

1018 copyparty/authsrv.py
File diff suppressed because it is too large

copyparty/bos/__init__.py
@@ -9,11 +9,8 @@ from . import path as path
 if True:  # pylint: disable=using-constant-test
     from typing import Any, Optional
 
-MKD_755 = {"chmod_d": 0o755}
-MKD_700 = {"chmod_d": 0o700}
-
-_ = (path, MKD_755, MKD_700)
-
-__all__ = ["path", "MKD_755", "MKD_700"]
+_ = (path,)
+__all__ = ["path"]
 
 # grep -hRiE '(^|[^a-zA-Z_\.-])os\.' . | gsed -r 's/ /\n/g;s/\(/(\n/g' | grep -hRiE '(^|[^a-zA-Z_\.-])os\.' | sort | uniq -c
 # printf 'os\.(%s)' "$(grep ^def bos/__init__.py | gsed -r 's/^def //;s/\(.*//' | tr '\n' '|' | gsed -r 's/.$//')"

@@ -23,39 +20,19 @@ def chmod(p: str, mode: int) -> None:
     return os.chmod(fsenc(p), mode)
 
 
-def chown(p: str, uid: int, gid: int) -> None:
-    return os.chown(fsenc(p), uid, gid)
-
-
 def listdir(p: str = ".") -> list[str]:
     return [fsdec(x) for x in os.listdir(fsenc(p))]
 
 
-def makedirs(name: str, vf: dict[str, Any] = MKD_755, exist_ok: bool = True) -> bool:
-    # os.makedirs does 777 for all but leaf; this does mode on all
-    todo = []
+def makedirs(name: str, mode: int = 0o755, exist_ok: bool = True) -> bool:
     bname = fsenc(name)
-    while bname:
-        if os.path.isdir(bname):
-            break
-        todo.append(bname)
-        bname = os.path.dirname(bname)
-    if not todo:
-        if not exist_ok:
-            os.mkdir(bname)  # to throw
-        return False
-    mode = vf["chmod_d"]
-    chown = "chown" in vf
-    for zb in todo[::-1]:
-        try:
-            os.mkdir(zb, mode)
-            if chown:
-                os.chown(zb, vf["uid"], vf["gid"])
-        except:
-            if os.path.isdir(zb):
-                continue
-            raise
-    return True
+    try:
+        os.makedirs(bname, mode)
+        return True
+    except:
+        if not exist_ok or not os.path.isdir(bname):
+            raise
+        return False
 
 
 def mkdir(p: str, mode: int = 0o755) -> None:
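
Background on the makedirs change above: since Python 3.7, os.makedirs applies mode only to the leaf directory, while intermediate directories get the process default (umask-filtered 0o777); the hovudstraum version works around that by mkdir-ing each missing component itself so chmod_d/uid/gid apply to every new directory. A small demonstration of the stdlib behavior:

    import os
    import stat
    import tempfile

    root = tempfile.mkdtemp()
    os.makedirs(os.path.join(root, "a/b/c"), 0o700)
    for d in ("a", "a/b", "a/b/c"):
        m = stat.S_IMODE(os.stat(os.path.join(root, d)).st_mode)
        print(d, oct(m))  # only the leaf "a/b/c" is forced to 0o700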

copyparty/broker_mp.py
@@ -9,14 +9,14 @@ import queue
 
 from .__init__ import CORES, TYPE_CHECKING
 from .broker_mpw import MpWorker
-from .broker_util import ExceptionalQueue, NotExQueue, try_exec
+from .broker_util import ExceptionalQueue, try_exec
 from .util import Daemon, mp
 
 if TYPE_CHECKING:
     from .svchub import SvcHub
 
 if True:  # pylint: disable=using-constant-test
-    from typing import Any, Union
+    from typing import Any
 
 
 class MProcess(mp.Process):
@@ -43,9 +43,6 @@ class BrokerMp(object):
         self.procs = []
         self.mutex = threading.Lock()
 
-        self.retpend: dict[int, Any] = {}
-        self.retpend_mutex = threading.Lock()
-
         self.num_workers = self.args.j or CORES
         self.log("broker", "booting {} subprocesses".format(self.num_workers))
         for n in range(1, self.num_workers + 1):
@@ -57,8 +54,6 @@ class BrokerMp(object):
             self.procs.append(proc)
             proc.start()
 
-        Daemon(self.periodic, "mp-periodic")
-
     def shutdown(self) -> None:
         self.log("broker", "shutting down")
         for n, proc in enumerate(self.procs):
@@ -81,10 +76,6 @@ class BrokerMp(object):
         for _, proc in enumerate(self.procs):
             proc.q_pend.put((0, "reload", []))
 
-    def reload_sessions(self) -> None:
-        for _, proc in enumerate(self.procs):
-            proc.q_pend.put((0, "reload_sessions", []))
-
     def collector(self, proc: MProcess) -> None:
         """receive message from hub in other process"""
         while True:
@@ -95,10 +86,8 @@ class BrokerMp(object):
                 self.log(*args)
 
             elif dest == "retq":
-                with self.retpend_mutex:
-                    retq = self.retpend.pop(retq_id)
-
-                retq.put(args[0])
+                # response from previous ipc call
+                raise Exception("invalid broker_mp usage")
 
             else:
                 # new ipc invoking managed service in hub
@@ -115,7 +104,8 @@ class BrokerMp(object):
                 if retq_id:
                     proc.q_pend.put((retq_id, "retq", rv))
 
-    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
+    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+
         # new non-ipc invoking managed service in hub
         obj = self.hub
         for node in dest.split("."):
@@ -127,30 +117,17 @@ class BrokerMp(object):
         retq.put(rv)
         return retq
 
-    def wask(self, dest: str, *args: Any) -> list[Union[ExceptionalQueue, NotExQueue]]:
-        # call from hub to workers
-        ret = []
-        for p in self.procs:
-            retq = ExceptionalQueue(1)
-            retq_id = id(retq)
-            with self.retpend_mutex:
-                self.retpend[retq_id] = retq
-
-            p.q_pend.put((retq_id, dest, list(args)))
-            ret.append(retq)
-        return ret
-
     def say(self, dest: str, *args: Any) -> None:
         """
         send message to non-hub component in other process,
         returns a Queue object which eventually contains the response if want_retval
         (not-impl here since nothing uses it yet)
         """
-        if dest == "httpsrv.listen":
+        if dest == "listen":
             for p in self.procs:
                 p.q_pend.put((0, dest, [args[0], len(self.procs)]))
 
-        elif dest == "httpsrv.set_netdevs":
+        elif dest == "set_netdevs":
             for p in self.procs:
                 p.q_pend.put((0, dest, list(args)))
 
@@ -159,19 +136,3 @@
 
         else:
             raise Exception("what is " + str(dest))
-
-    def periodic(self) -> None:
-        while True:
-            time.sleep(1)
-
-            tdli = {}
-            tdls = {}
-            qs = self.wask("httpsrv.read_dls")
-            for q in qs:
-                qr = q.get()
-                dli, dls = qr
-                tdli.update(dli)
-                tdls.update(dls)
-            tdl = (tdli, tdls)
-            for p in self.procs:
-                p.q_pend.put((0, "httpsrv.write_dls", tdl))
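
The retpend/retq machinery above is a correlation-id RPC over multiprocessing queues: id() of the reply-queue rides along with the request, and the collector routes the answer back to whoever asked. A stripped-down, single-process sketch of the idea (names are illustrative, not copyparty's API):

    import queue
    import threading

    retpend = {}
    q_pend, q_yield = queue.Queue(), queue.Queue()  # stand-ins for the mp pipes

    def ask(dest, *args):
        retq = queue.Queue(1)
        retpend[id(retq)] = retq  # id() doubles as the correlation-id
        q_yield.put((id(retq), dest, list(args)))
        return retq  # caller blocks on retq.get()

    def worker():  # peer side: answer one request
        retq_id, dest, args = q_yield.get()
        q_pend.put((retq_id, "retq", ["reply to %s%r" % (dest, tuple(args))]))

    def collector():  # asking side: route the reply to the right queue
        retq_id, _, args = q_pend.get()
        retpend.pop(retq_id).put(args[0])

    threading.Thread(target=worker).start()
    rq = ask("httpsrv.read_dls")
    collector()
    print(rq.get())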

copyparty/broker_mpw.py
@@ -11,7 +11,7 @@ import queue
 
 from .__init__ import ANYWIN
 from .authsrv import AuthSrv
-from .broker_util import BrokerCli, ExceptionalQueue, NotExQueue
+from .broker_util import BrokerCli, ExceptionalQueue
 from .httpsrv import HttpSrv
 from .util import FAKE_MP, Daemon, HMaccas
 
@@ -82,40 +82,35 @@ class MpWorker(BrokerCli):
         while True:
             retq_id, dest, args = self.q_pend.get()
 
-            if dest == "retq":
-                # response from previous ipc call
-                with self.retpend_mutex:
-                    retq = self.retpend.pop(retq_id)
-
-                retq.put(args)
-                continue
-
+            # self.logw("work: [{}]".format(d[0]))
             if dest == "shutdown":
                 self.httpsrv.shutdown()
                 self.logw("ok bye")
                 sys.exit(0)
                 return
 
-            if dest == "reload":
+            elif dest == "reload":
                 self.logw("mpw.asrv reloading")
                 self.asrv.reload()
                 self.logw("mpw.asrv reloaded")
-                continue
 
-            if dest == "reload_sessions":
-                with self.asrv.mutex:
-                    self.asrv.load_sessions()
-                continue
+            elif dest == "listen":
+                self.httpsrv.listen(args[0], args[1])
 
-            obj = self
-            for node in dest.split("."):
-                obj = getattr(obj, node)
+            elif dest == "set_netdevs":
+                self.httpsrv.set_netdevs(args[0])
 
-            rv = obj(*args)  # type: ignore
-            if retq_id:
-                self.say("retq", rv, retq_id=retq_id)
+            elif dest == "retq":
+                # response from previous ipc call
+                with self.retpend_mutex:
+                    retq = self.retpend.pop(retq_id)
+
+                retq.put(args)
+
+            else:
+                raise Exception("what is " + str(dest))
 
-    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
+    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
         retq = ExceptionalQueue(1)
         retq_id = id(retq)
         with self.retpend_mutex:
@@ -124,5 +119,5 @@ class MpWorker(BrokerCli):
         self.q_yield.put((retq_id, dest, list(args)))
         return retq
 
-    def say(self, dest: str, *args: Any, retq_id=0) -> None:
-        self.q_yield.put((retq_id, dest, list(args)))
+    def say(self, dest: str, *args: Any) -> None:
+        self.q_yield.put((0, dest, list(args)))

copyparty/broker_thr.py
@@ -5,7 +5,7 @@ import os
 import threading
 
 from .__init__ import TYPE_CHECKING
-from .broker_util import BrokerCli, ExceptionalQueue, NotExQueue
+from .broker_util import BrokerCli, ExceptionalQueue, try_exec
 from .httpsrv import HttpSrv
 from .util import HMaccas
 
@@ -13,7 +13,7 @@ if TYPE_CHECKING:
     from .svchub import SvcHub
 
 if True:  # pylint: disable=using-constant-test
-    from typing import Any, Union
+    from typing import Any
 
 
 class BrokerThr(BrokerCli):
@@ -34,7 +34,6 @@ class BrokerThr(BrokerCli):
         self.iphash = HMaccas(os.path.join(self.args.E.cfg, "iphash"), 8)
         self.httpsrv = HttpSrv(self, None)
         self.reload = self.noop
-        self.reload_sessions = self.noop
 
     def shutdown(self) -> None:
         # self.log("broker", "shutting down")
@@ -43,21 +42,26 @@ class BrokerThr(BrokerCli):
     def noop(self) -> None:
         pass
 
-    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
+    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
 
         # new ipc invoking managed service in hub
         obj = self.hub
         for node in dest.split("."):
             obj = getattr(obj, node)
 
-        return NotExQueue(obj(*args))  # type: ignore
+        rv = try_exec(True, obj, *args)
+
+        # pretend we're broker_mp
+        retq = ExceptionalQueue(1)
+        retq.put(rv)
+        return retq
 
     def say(self, dest: str, *args: Any) -> None:
-        if dest == "httpsrv.listen":
+        if dest == "listen":
             self.httpsrv.listen(args[0], 1)
             return
 
-        if dest == "httpsrv.set_netdevs":
+        if dest == "set_netdevs":
             self.httpsrv.set_netdevs(args[0])
             return
 
@@ -66,4 +70,4 @@ class BrokerThr(BrokerCli):
         for node in dest.split("."):
             obj = getattr(obj, node)
 
-        obj(*args)  # type: ignore
+        try_exec(False, obj, *args)

copyparty/broker_util.py
@@ -33,18 +33,6 @@ class ExceptionalQueue(Queue, object):
         return rv
 
 
-class NotExQueue(object):
-    """
-    BrokerThr uses this instead of ExceptionalQueue; 7x faster
-    """
-
-    def __init__(self, rv: Any) -> None:
-        self.rv = rv
-
-    def get(self) -> Any:
-        return self.rv
-
-
 class BrokerCli(object):
     """
     helps mypy understand httpsrv.broker but still fails a few levels deeper,
@@ -60,7 +48,7 @@ class BrokerCli(object):
     def __init__(self) -> None:
         pass
 
-    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
+    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
         return ExceptionalQueue(1)
 
     def say(self, dest: str, *args: Any) -> None:
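
The "7x faster" note on NotExQueue is plausible because BrokerThr runs everything in one process: the result already exists by the time ask() returns, so wrapping it in a real queue only buys lock and condition-variable overhead. A rough micro-benchmark sketch (numbers will vary by machine; this is not copyparty's own benchmark):

    import queue
    import timeit

    class NotExQueue:
        def __init__(self, rv):
            self.rv = rv

        def get(self):
            return self.rv

    def via_queue():  # real queue round-trip, with all its locking
        q = queue.Queue(1)
        q.put(123)
        return q.get()

    def via_notex():  # plain attribute access
        return NotExQueue(123).get()

    print("Queue  :", timeit.timeit(via_queue, number=100_000))
    print("NotExQ :", timeit.timeit(via_notex, number=100_000))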

copyparty/cert.py
@@ -1,11 +1,13 @@
 import calendar
 import errno
+import filecmp
 import json
 import os
+import shutil
 import time
 
 from .__init__ import ANYWIN
-from .util import Netdev, atomic_move, load_resource, runcmd, wunlink
+from .util import Netdev, runcmd, wrename, wunlink
 
 HAVE_CFSSL = not os.environ.get("PRTY_NO_CFSSL")
 
@@ -27,15 +29,13 @@ def ensure_cert(log: "RootLogger", args) -> None:
 
     i feel awful about this and so should they
     """
-    with load_resource(args.E, "res/insecure.pem") as f:
-        cert_insec = f.read()
+    cert_insec = os.path.join(args.E.mod, "res/insecure.pem")
     cert_appdata = os.path.join(args.E.cfg, "cert.pem")
     if not os.path.isfile(args.cert):
         if cert_appdata != args.cert:
             raise Exception("certificate file does not exist: " + args.cert)
 
-        with open(args.cert, "wb") as f:
-            f.write(cert_insec)
+        shutil.copy(cert_insec, args.cert)
 
     with open(args.cert, "rb") as f:
         buf = f.read()
@@ -50,9 +50,7 @@ def ensure_cert(log: "RootLogger", args) -> None:
         raise Exception(m + "private key must appear before server certificate")
 
     try:
-        with open(args.cert, "rb") as f:
-            active_cert = f.read()
-        if active_cert == cert_insec:
+        if filecmp.cmp(args.cert, cert_insec):
             t = "using default TLS certificate; https will be insecure:\033[36m {}"
             log("cert", t.format(args.cert), 3)
     except:
@@ -120,7 +118,7 @@ def _gen_ca(log: "RootLogger", args):
         wunlink(nlog, bname + ".key", VF)
     except:
         pass
-    atomic_move(nlog, bname + "-key.pem", bname + ".key", VF)
+    wrename(nlog, bname + "-key.pem", bname + ".key", VF)
     wunlink(nlog, bname + ".csr", VF)
 
     log("cert", "new ca OK", 2)
@@ -153,22 +151,14 @@ def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
             raise Exception("no useable cert found")
 
         expired = time.time() + args.crt_sdays * 60 * 60 * 24 * 0.5 > expiry
-        if expired:
-            raise Exception("old server-cert has expired")
+        cert_insec = os.path.join(args.E.mod, "res/insecure.pem")
 
         for n in names:
            if n not in inf["sans"]:
                 raise Exception("does not have {}".format(n))
 
-        with load_resource(args.E, "res/insecure.pem") as f:
-            cert_insec = f.read()
-
-        with open(args.cert, "rb") as f:
-            active_cert = f.read()
-
-        if active_cert and active_cert != cert_insec:
+        if expired:
+            raise Exception("old server-cert has expired")
+        if not filecmp.cmp(args.cert, cert_insec):
             return
     except Exception as ex:
         log("cert", "will create new server-cert; {}".format(ex))
@@ -213,7 +203,7 @@ def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
         wunlink(nlog, bname + ".key", VF)
     except:
         pass
-    atomic_move(nlog, bname + "-key.pem", bname + ".key", VF)
+    wrename(nlog, bname + "-key.pem", bname + ".key", VF)
     wunlink(nlog, bname + ".csr", VF)
 
     with open(os.path.join(args.crt_dir, "ca.pem"), "rb") as f:

152 copyparty/cfg.py
@@ -2,12 +2,9 @@
 from __future__ import print_function, unicode_literals
 
 # awk -F\" '/add_argument\("-[^-]/{print(substr($2,2))}' copyparty/__main__.py | sort | tr '\n' ' '
-zs = "a c e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vp e2vu ed emp i j lo mcr mte mth mtm mtp nb nc nid nih nth nw p q s ss sss v z zv"
+zs = "a c e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vp e2vu ed emp i j lo mcr mte mth mtm mtp nb nc nid nih nw p q s ss sss v z zv"
 onedash = set(zs.split())
 
-# verify that all volflags are documented here:
-# grep volflag= __main__.py | sed -r 's/.*volflag=//;s/\).*//' | sort | uniq | while IFS= read -r x; do grep -E "\"$x(=[^ \"]+)?\": \"" cfg.py || printf '%s\n' "$x"; done
-
 
 def vf_bmap() -> dict[str, str]:
     """argv-to-volflag: simple bools"""
@@ -15,20 +12,17 @@ def vf_bmap() -> dict[str, str]:
         "dav_auth": "davauth",
         "dav_rt": "davrt",
         "ed": "dots",
-        "hardlink_only": "hardlinkonly",
-        "no_clone": "noclone",
-        "no_dirsz": "nodirsz",
+        "never_symlink": "neversymlink",
+        "no_dedup": "copydupes",
         "no_dupe": "nodupe",
         "no_forget": "noforget",
         "no_pipe": "nopipe",
         "no_robots": "norobots",
-        "no_tail": "notail",
         "no_thumb": "dthumb",
         "no_vthumb": "dvthumb",
         "no_athumb": "dathumb",
     }
     for k in (
-        "dedup",
         "dotsrch",
         "e2d",
         "e2ds",
@@ -44,22 +38,15 @@ def vf_bmap() -> dict[str, str]:
         "gsel",
         "hardlink",
         "magic",
-        "no_db_ip",
         "no_sb_md",
         "no_sb_lg",
-        "nsort",
         "og",
         "og_no_head",
         "og_s_title",
         "rand",
-        "reflink",
-        "rmagic",
-        "rss",
-        "wo_up_readme",
         "xdev",
         "xlink",
         "xvol",
-        "zipmaxu",
     ):
         ret[k] = k
     return ret
@@ -68,31 +55,20 @@ def vf_bmap() -> dict[str, str]:
 def vf_vmap() -> dict[str, str]:
     """argv-to-volflag: simple values"""
     ret = {
-        "ac_convt": "aconvt",
         "no_hash": "nohash",
         "no_idx": "noidx",
         "re_maxage": "scan",
-        "safe_dedup": "safededup",
         "th_convt": "convt",
         "th_size": "thsize",
         "th_crop": "crop",
         "th_x3": "th3x",
     }
     for k in (
-        "bup_ck",
-        "chmod_d",
-        "chmod_f",
         "dbd",
-        "forget_ip",
-        "hsortn",
         "html_head",
         "lg_sbf",
         "md_sbf",
-        "lg_sba",
-        "md_sba",
-        "md_hist",
         "nrand",
-        "u2ow",
         "og_desc",
         "og_site",
         "og_th",
@@ -102,29 +78,13 @@ def vf_vmap() -> dict[str, str]:
         "og_title_i",
         "og_tpl",
         "og_ua",
-        "put_ck",
-        "put_name",
         "mv_retry",
         "rm_retry",
         "sort",
-        "tail_fd",
-        "tail_rate",
-        "tail_tmax",
-        "tail_who",
         "tcolor",
-        "th_spec_p",
-        "txt_eol",
         "unlist",
         "u2abort",
         "u2ts",
-        "uid",
-        "gid",
-        "unp_who",
-        "ups_who",
-        "zip_who",
-        "zipmaxn",
-        "zipmaxs",
-        "zipmaxt",
    ):
         ret[k] = k
     return ret
@@ -136,16 +96,13 @@ def vf_cmap() -> dict[str, str]:
     for k in (
         "exp_lg",
         "exp_md",
-        "ext_th",
         "mte",
         "mth",
         "mtp",
-        "xac",
         "xad",
         "xar",
         "xau",
         "xban",
-        "xbc",
         "xbd",
         "xbr",
         "xbu",
@@ -172,27 +129,15 @@ permdescs = {
 
 flagcats = {
     "uploads, general": {
-        "dedup": "enable symlink-based file deduplication",
-        "hardlink": "enable hardlink-based file deduplication,\nwith fallback on symlinks when that is impossible",
-        "hardlinkonly": "dedup with hardlink only, never symlink;\nmake a full copy if hardlink is impossible",
-        "reflink": "enable reflink-based file deduplication,\nwith fallback on full copy when that is impossible",
-        "safededup": "verify on-disk data before using it for dedup",
-        "noclone": "take dupe data from clients, even if available on HDD",
-        "nodupe": "rejects existing files (instead of linking/cloning them)",
-        "chmod_d=755": "unix-permission for new dirs/folders",
-        "chmod_f=644": "unix-permission for new files",
-        "uid=573": "change owner of new files/folders to unix-user 573",
-        "gid=999": "change owner of new files/folders to unix-group 999",
+        "nodupe": "rejects existing files (instead of symlinking them)",
+        "hardlink": "does dedup with hardlinks instead of symlinks",
+        "neversymlink": "disables symlink fallback; full copy instead",
+        "copydupes": "disables dedup, always saves full copies of dupes",
         "sparse": "force use of sparse files, mainly for s3-backed storage",
-        "nosparse": "deny use of sparse files, mainly for slow storage",
         "daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
         "nosub": "forces all uploads into the top folder of the vfs",
         "magic": "enables filetype detection for nameless uploads",
-        "put_name": "fallback filename for nameless uploads",
-        "put_ck": "default checksum-hasher for PUT/WebDAV uploads",
-        "bup_ck": "default checksum-hasher for bup/basic uploads",
-        "gz": "allows server-side gzip compression of uploads with ?gz",
-        "xz": "allows server-side lzma compression of uploads with ?xz",
+        "gz": "allows server-side gzip of uploads with ?gz (also c,xz)",
         "pk": "forces server-side compression, optional arg: xz,9",
     },
     "upload rules": {
@@ -201,10 +146,8 @@ flagcats = {
         "vmaxb=1g": "total volume size max 1 GiB (suffixes: b, k, m, g, t)",
         "vmaxn=4k": "max 4096 files in volume (suffixes: b, k, m, g, t)",
         "medialinks": "return medialinks for non-up2k uploads (not hotlinks)",
-        "wo_up_readme": "write-only users can upload logues without getting renamed",
         "rand": "force randomized filenames, 9 chars long by default",
         "nrand=N": "randomized filenames are N chars long",
-        "u2ow=N": "overwrite existing files? 0=no 1=if-older 2=always",
         "u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
         "u2abort=1": "allow aborting unfinished uploads? 0=no 1=strict 2=ip-chk 3=acct-chk",
         "sz=1k-3m": "allow filesizes between 1 KiB and 3MiB",
@@ -216,41 +159,31 @@ flagcats = {
         "lifetime=3600": "uploads are deleted after 1 hour",
     },
     "database, general": {
-        "e2d": "enable database; makes files searchable + enables upload-undo",
+        "e2d": "enable database; makes files searchable + enables upload dedup",
         "e2ds": "scan writable folders for new files on startup; also sets -e2d",
         "e2dsa": "scans all folders for new files on startup; also sets -e2d",
         "e2t": "enable multimedia indexing; makes it possible to search for tags",
         "e2ts": "scan existing files for tags on startup; also sets -e2t",
-        "e2tsr": "delete all metadata from DB (full rescan); also sets -e2ts",
+        "e2tsa": "delete all metadata from DB (full rescan); also sets -e2ts",
         "d2ts": "disables metadata collection for existing files",
-        "e2v": "verify integrity on startup by hashing files and comparing to db",
-        "e2vu": "when e2v fails, update the db (assume on-disk files are good)",
-        "e2vp": "when e2v fails, panic and quit copyparty",
         "d2ds": "disables onboot indexing, overrides -e2ds*",
         "d2t": "disables metadata collection, overrides -e2t*",
         "d2v": "disables file verification, overrides -e2v*",
         "d2d": "disables all database stuff, overrides -e2*",
         "hist=/tmp/cdb": "puts thumbnails and indexes at that location",
-        "dbpath=/tmp/cdb": "puts indexes at that location",
-        "landmark=foo": "disable db if file foo doesn't exist",
         "scan=60": "scan for new files every 60sec, same as --re-maxage",
         "nohash=\\.iso$": "skips hashing file contents if path matches *.iso",
         "noidx=\\.iso$": "fully ignores the contents at paths matching *.iso",
         "noforget": "don't forget files when deleted from disk",
-        "forget_ip=43200": "forget uploader-IP after 30 days (GDPR)",
-        "no_db_ip": "never store uploader-IP in the db; disables unpost",
         "fat32": "avoid excessive reindexing on android sdcardfs",
         "dbd=[acid|swal|wal|yolo]": "database speed-durability tradeoff",
-        "xlink": "cross-volume dupe detection / linking (dangerous)",
+        "xlink": "cross-volume dupe detection / linking",
         "xdev": "do not descend into other filesystems",
         "xvol": "do not follow symlinks leaving the volume root",
         "dotsrch": "show dotfiles in search results",
         "nodotsrch": "hide dotfiles in search results (default)",
-        "srch_excl": "exclude search results with URL matching this regex",
     },
     'database, audio tags\n"mte", "mth", "mtp", "mtm" all work the same as -mte, -mth, ...': {
-        "mte=artist,title": "media-tags to index/display",
-        "mth=fmt,res,ac": "media-tags to hide by default",
         "mtp=.bpm=f,audio-bpm.py": 'uses the "audio-bpm.py" program to\ngenerate ".bpm" tags from uploads (f = overwrite tags)',
         "mtp=ahash,vhash=media-hash.py": "collects two tags at once",
     },
@@ -263,10 +196,7 @@ flagcats = {
         "thsize": "thumbnail res; WxH",
         "crop": "center-cropping (y/n/fy/fn)",
         "th3x": "3x resolution (y/n/fy/fn)",
-        "convt": "convert-to-image timeout in seconds",
-        "aconvt": "convert-to-audio timeout in seconds",
-        "th_spec_p=1": "make spectrograms? 0=never 1=fallback 2=always",
-        "ext_th=s=/b.png": "use /b.png as thumbnail for file-extension s",
+        "convt": "conversion timeout in seconds",
     },
     "handlers\n(better explained in --help-handlers)": {
         "on404=PY": "handle 404s by executing PY file",
@@ -276,8 +206,6 @@ flagcats = {
         "xbu=CMD": "execute CMD before a file upload starts",
         "xau=CMD": "execute CMD after a file upload finishes",
         "xiu=CMD": "execute CMD after all uploads finish and volume is idle",
-        "xbc=CMD": "execute CMD before a file copy",
-        "xac=CMD": "execute CMD after a file copy",
         "xbr=CMD": "execute CMD before a file rename/move",
         "xar=CMD": "execute CMD after a file rename/move",
         "xbd=CMD": "execute CMD before a file delete",
@@ -289,71 +217,22 @@ flagcats = {
         "grid": "show grid/thumbnails by default",
         "gsel": "select files in grid by ctrl-click",
         "sort": "default sort order",
-        "nsort": "natural-sort of leading digits in filenames",
-        "hsortn": "number of sort-rules to add to media URLs",
         "unlist": "dont list files matching REGEX",
         "html_head=TXT": "includes TXT in the <head>, or @PATH for file at PATH",
-        "tcolor=#fc0": "theme color (a hint for webbrowsers, discord, etc.)",
-        "nodirsz": "don't show total folder size",
         "robots": "allows indexing by search engines (default)",
         "norobots": "kindly asks search engines to leave",
-        "unlistcr": "don't list read-access in controlpanel",
-        "unlistcw": "don't list write-access in controlpanel",
         "no_sb_md": "disable js sandbox for markdown files",
         "no_sb_lg": "disable js sandbox for prologue/epilogue",
         "sb_md": "enable js sandbox for markdown files (default)",
         "sb_lg": "enable js sandbox for prologue/epilogue (default)",
         "md_sbf": "list of markdown-sandbox safeguards to disable",
         "lg_sbf": "list of *logue-sandbox safeguards to disable",
-        "md_sba": "value of iframe allow-prop for markdown-sandbox",
-        "lg_sba": "value of iframe allow-prop for *logue-sandbox",
         "nohtml": "return html and markdown as text/html",
     },
-    "opengraph (discord embeds)": {
-        "og": "enable OG (disables hotlinking)",
-        "og_site": "sitename; defaults to --name, disable with '-'",
-        "og_desc": "description text for all files; disable with '-'",
-        "og_th=jf": "thumbnail format; j / jf / jf3 / w / w3 / ...",
-        "og_title_a": "audio title format; default: {{ artist }} - {{ title }}",
-        "og_title_v": "video title format; default: {{ title }}",
-        "og_title_i": "image title format; default: {{ title }}",
-        "og_title=foo": "fallback title if there's nothing in the db",
-        "og_s_title": "force default title; do not read from tags",
-        "og_tpl": "custom html; see --og-tpl in --help",
-        "og_no_head": "you want to add tags manually with og_tpl",
-        "og_ua": "if defined: only send OG html if useragent matches this regex",
-    },
-    "textfiles": {
-        "md_hist": "where to put markdown backups; s=subfolder, v=volHist, n=nope",
-        "exp": "enable textfile expansion; see --help-exp",
-        "exp_md": "placeholders to expand in markdown files; see --help",
-        "exp_lg": "placeholders to expand in prologue/epilogue; see --help",
-        "txt_eol=lf": "enable EOL conversion when writing docs (LF or CRLF)",
-    },
-    "tailing": {
-        "notail": "disable ?tail (download a growing file continuously)",
-        "tail_fd=1": "check if file was replaced (new fd) every 1 sec",
-        "tail_rate=0.2": "check for new data every 0.2 sec",
-        "tail_tmax=30": "kill connection after 30 sec",
-        "tail_who=2": "restrict ?tail access (1=admins,2=authed,3=everyone)",
-    },
     "others": {
         "dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
         "fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
         "fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
-        "dk=8": 'generates per-directory accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
-        "dks": "per-directory accesskeys allow browsing into subdirs",
-        "dky": 'allow seeing files (not folders) inside a specific folder\nwith "g" perm, and does not require a valid dirkey to do so',
-        "rss": "allow '?rss' URL suffix (experimental)",
-        "rmagic": "expensive analysis for mimetype accuracy",
-        "unp_who=2": "unpost only if same... 1=ip+name, 2=ip, 3=name",
-        "ups_who=2": "restrict viewing the list of recent uploads",
-        "zip_who=2": "restrict access to download-as-zip/tar",
-        "zipmaxn=9k": "reject download-as-zip if more than 9000 files",
-        "zipmaxs=2g": "reject download-as-zip if size over 2 GiB",
-        "zipmaxt=no": "reply with 'no' if download-as-zip exceeds max",
-        "zipmaxu": "zip-size-limit does not apply to authenticated users",
-        "nopipe": "disable race-the-beam (download unfinished uploads)",
         "mv_retry": "ms-windows: timeout for renaming busy files",
         "rm_retry": "ms-windows: timeout for deleting busy files",
         "davauth": "ask webdav clients to login for all folders",
@@ -363,10 +242,3 @@ flagcats = {
 
 
 flagdescs = {k.split("=")[0]: v for tab in flagcats.values() for k, v in tab.items()}
-
-
-if True:  # so it gets removed in release-builds
-    for fun in [vf_bmap, vf_cmap, vf_vmap]:
-        for k in fun().values():
-            if k not in flagdescs:
-                raise Exception("undocumented volflag: " + k)
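
For reference, the flagdescs comprehension at the end flattens the categorized dict and strips any `=default` suffix from the keys; that flattened dict is what the hovudstraum-only self-check iterates over. A tiny standalone illustration (the dict contents here are abbreviated, not the full table):

    flagcats = {
        "uploads, general": {"nodupe": "rejects existing files", "fk=8": "per-file accesskeys"},
        "others": {"daw": "full WebDAV write support"},
    }
    flagdescs = {k.split("=")[0]: v for tab in flagcats.values() for k, v in tab.items()}
    print(sorted(flagdescs))  # ['daw', 'fk', 'nodupe'] -- note 'fk=8' became 'fk'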

copyparty/dxml.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import print_function, unicode_literals
-
 import importlib
 import sys
 import xml.etree.ElementTree as ET
@@ -11,10 +8,6 @@ if True:  # pylint: disable=using-constant-test
     from typing import Any, Optional
 
 
-class BadXML(Exception):
-    pass
-
-
 def get_ET() -> ET.XMLParser:
     pn = "xml.etree.ElementTree"
     cn = "_elementtree"
@@ -41,7 +34,7 @@ def get_ET() -> ET.XMLParser:
 XMLParser: ET.XMLParser = get_ET()
 
 
-class _DXMLParser(XMLParser):  # type: ignore
+class DXMLParser(XMLParser):  # type: ignore
     def __init__(self) -> None:
         tb = ET.TreeBuilder()
         super(DXMLParser, self).__init__(target=tb)
@@ -56,57 +49,16 @@ class _DXMLParser(XMLParser):  # type: ignore
         raise BadXML("{}, {}".format(a, ka))
 
 
-class _NG(XMLParser):  # type: ignore
-    def __int__(self) -> None:
-        raise BadXML("dxml selftest failed")
-
-
-DXMLParser = _DXMLParser
+class BadXML(Exception):
+    pass
 
 
 def parse_xml(txt: str) -> ET.Element:
-    """
-    Parse XML into an xml.etree.ElementTree.Element while defusing some unsafe parts.
-    """
     parser = DXMLParser()
     parser.feed(txt)
     return parser.close()  # type: ignore
 
 
-def selftest() -> bool:
-    qbe = r"""<!DOCTYPE d [
-<!ENTITY a "nice_bakuretsu">
-]>
-<root>&a;&a;&a;</root>"""
-
-    emb = r"""<!DOCTYPE d [
-<!ENTITY a SYSTEM "file:///etc/hostname">
-]>
-<root>&a;</root>"""
-
-    # future-proofing; there's never been any known vulns
-    # regarding DTDs and ET.XMLParser, but might as well
-    # block them since webdav-clients don't use them
-    dtd = r"""<!DOCTYPE d SYSTEM "a.dtd">
-<root>a</root>"""
-
-    for txt in (qbe, emb, dtd):
-        try:
-            parse_xml(txt)
-            t = "WARNING: dxml selftest failed:\n%s\n"
-            print(t % (txt,), file=sys.stderr)
-            return False
-        except BadXML:
-            pass
-
-    return True
-
-
-DXML_OK = selftest()
-if not DXML_OK:
-    DXMLParser = _NG
-
-
 def mktnod(name: str, text: str) -> ET.Element:
     el = ET.Element(name)
     el.text = text
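
The selftest payloads above are classic entity-expansion and external-entity probes; DXMLParser defuses them by raising from the doctype/entity callbacks, so parsing aborts before any expansion can happen. The same idea shown with bare pyexpat (illustrative only, not copyparty's code):

    from xml.parsers import expat

    class BadXML(Exception):
        pass

    def deny(*a, **ka):
        raise BadXML("doctype/entity declarations are not allowed")

    p = expat.ParserCreate()
    p.StartDoctypeDeclHandler = deny  # fires on <!DOCTYPE ...>
    p.EntityDeclHandler = deny        # fires on <!ENTITY ...>

    try:
        p.Parse('<!DOCTYPE d [<!ENTITY a "boom">]><root>&a;</root>', True)
        print("parsed (unexpected)")
    except BadXML as ex:
        print("rejected:", ex)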

copyparty/fstab.py
@@ -42,14 +42,14 @@ class Fstab(object):
         self.cache = {}
 
         fs = "ext4"
-        msg = "failed to determine filesystem at %r; assuming %s\n%s"
+        msg = "failed to determine filesystem at [{}]; assuming {}\n{}"
 
         if ANYWIN:
             fs = "vfat"
             try:
                 path = self._winpath(path)
             except:
-                self.log(msg % (path, fs, min_ex()), 3)
+                self.log(msg.format(path, fs, min_ex()), 3)
                 return fs
 
         path = undot(path)
@@ -61,11 +61,11 @@ class Fstab(object):
         try:
             fs = self.get_w32(path) if ANYWIN else self.get_unix(path)
         except:
-            self.log(msg % (path, fs, min_ex()), 3)
+            self.log(msg.format(path, fs, min_ex()), 3)
 
         fs = fs.lower()
         self.cache[path] = fs
-        self.log("found %s at %r" % (fs, path))
+        self.log("found {} at {}".format(fs, path))
         return fs
 
     def _winpath(self, path: str) -> str:
@@ -78,7 +78,7 @@ class Fstab(object):
         return vid
 
     def build_fallback(self) -> None:
-        self.tab = VFS(self.log_func, "idk", "/", "/", AXS(), {})
+        self.tab = VFS(self.log_func, "idk", "/", AXS(), {})
         self.trusted = False
 
     def build_tab(self) -> None:
@@ -111,16 +111,15 @@ class Fstab(object):
 
         tab1.sort(key=lambda x: (len(x[0]), x[0]))
         path1, fs1 = tab1[0]
-        tab = VFS(self.log_func, fs1, path1, path1, AXS(), {})
+        tab = VFS(self.log_func, fs1, path1, AXS(), {})
         for path, fs in tab1[1:]:
-            zs = path.lstrip("/")
-            tab.add(fs, zs, zs)
+            tab.add(fs, path.lstrip("/"))
 
         self.tab = tab
         self.srctab = srctab
 
     def relabel(self, path: str, nval: str) -> None:
-        assert self.tab  # !rm
+        assert self.tab
         self.cache = {}
         if ANYWIN:
             path = self._winpath(path)
@@ -131,10 +130,9 @@ class Fstab(object):
         if not self.trusted:
             # no mtab access; have to build as we go
             if "/" in rem:
-                zs = os.path.join(vn.vpath, rem.split("/")[0])
-                self.tab.add("idk", zs, zs)
+                self.tab.add("idk", os.path.join(vn.vpath, rem.split("/")[0]))
             if rem:
-                self.tab.add(nval, path, path)
+                self.tab.add(nval, path)
             else:
                 vn.realpath = nval
 
@@ -158,7 +156,7 @@ class Fstab(object):
             self.log("failed to build tab:\n{}".format(min_ex()), 3)
             self.build_fallback()
 
-        assert self.tab  # !rm
+        assert self.tab
         ret = self.tab._find(path)[0]
         if self.trusted or path == ret.vpath:
             return ret.realpath.split("/")[0]
@@ -169,6 +167,6 @@ class Fstab(object):
         if not self.tab:
             self.build_fallback()
 
-        assert self.tab  # !rm
+        assert self.tab
         ret = self.tab._find(path)[0]
         return ret.realpath
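
One practical effect of those log lines moving from str.format to %-formatting with %r: paths are now repr-quoted, so oddities like trailing whitespace or control characters can't silently mangle the log line. For example:

    fs, path = "ext4", "/mnt/nas "
    print("found %s at %r" % (fs, path))      # hovudstraum: found ext4 at '/mnt/nas '
    print("found {} at {}".format(fs, path))  # v1.14.3: the trailing space is invisible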
@@ -19,7 +19,6 @@ from .__init__ import PY2, TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
 from .util import (
-    FN_EMB,
     VF_CAREFUL,
     Daemon,
     ODict,
@@ -31,7 +30,6 @@ from .util import (
     relchk,
     runhook,
     sanitize_fn,
-    set_fperms,
     vjoin,
     wunlink,
 )
@@ -78,29 +76,16 @@ class FtpAuth(DummyAuthorizer):
         else:
             raise AuthenticationFailed("banned")

-        args = self.hub.args
         asrv = self.hub.asrv
         uname = "*"
         if username != "anonymous":
             uname = ""
-            if args.usernames:
-                alts = ["%s:%s" % (username, password)]
-            else:
-                alts = password, username
-
-            for zs in alts:
+            for zs in (password, username):
                 zs = asrv.iacct.get(asrv.ah.hash(zs), "")
                 if zs:
                     uname = zs
                     break

-        if args.ipu and uname == "*":
-            uname = args.ipu_iu[args.ipu_nm.map(ip)]
-        if args.ipr and uname in args.ipr_u:
-            if not args.ipr_u[uname].map(ip):
-                logging.warning("username [%s] rejected by --ipr", uname)
-                uname = "*"
-
         if not uname or not (asrv.vfs.aread.get(uname) or asrv.vfs.awrite.get(uname)):
             g = self.hub.gpwd
             if g.lim:
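For context on the hunk above: the v1.14.3 side accepts the account password in either the password field or the username field of the FTP login, by hashing both and probing the inverse account table; a minimal sketch of that lookup, reusing the `iacct` hash-to-username mapping and `ah.hash` from the surrounding code:

    def resolve_uname(asrv, username: str, password: str) -> str:
        # probe the password field first, then the username field,
        # so clients that can only send a username still authenticate
        for zs in (password, username):
            uname = asrv.iacct.get(asrv.ah.hash(zs), "")
            if uname:
                return uname
        return ""  # unknown account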
@@ -178,19 +163,9 @@ class FtpFs(AbstractedFS):
             t = "Unsupported characters in [{}]"
             raise FSE(t.format(vpath), 1)

-        fn = sanitize_fn(fn or "", "")
+        fn = sanitize_fn(fn or "", "", [".prologue.html", ".epilogue.html"])
         vpath = vjoin(rd, fn)
         vfs, rem = self.hub.asrv.vfs.get(vpath, self.uname, r, w, m, d)
-        if (
-            w
-            and fn.lower() in FN_EMB
-            and self.h.uname not in vfs.axs.uread
-            and "wo_up_readme" not in vfs.flags
-        ):
-            fn = "_wo_" + fn
-            vpath = vjoin(rd, fn)
-            vfs, rem = self.hub.asrv.vfs.get(vpath, self.uname, r, w, m, d)
-
         if not vfs.realpath:
             t = "No filesystem mounted at [{}]"
             raise FSE(t.format(vpath))
@@ -239,7 +214,7 @@ class FtpFs(AbstractedFS):
         r = "r" in mode
         w = "w" in mode or "a" in mode or "+" in mode

-        ap, vfs, _ = self.rv2a(filename, r, w)
+        ap = self.rv2a(filename, r, w)[0]
         self.validpath(ap)
         if w:
             try:
@@ -271,11 +246,7 @@ class FtpFs(AbstractedFS):

             wunlink(self.log, ap, VF_CAREFUL)

-        ret = open(fsenc(ap), mode, self.args.iobuf)
-        if w and "fperms" in vfs.flags:
-            set_fperms(ret, vfs.flags)
-
-        return ret
+        return open(fsenc(ap), mode, self.args.iobuf)

     def chdir(self, path: str) -> None:
         nwd = join(self.cwd, path)
@@ -289,12 +260,9 @@ class FtpFs(AbstractedFS):
             # returning 550 is library-default and suitable
             raise FSE("No such file or directory")

-        if vfs.realpath:
-            avfs = vfs.chk_ap(ap, st)
-            if not avfs:
-                raise FSE("Permission denied", 1)
-        else:
-            avfs = vfs
+        avfs = vfs.chk_ap(ap, st)
+        if not avfs:
+            raise FSE("Permission denied", 1)

         self.cwd = nwd
         (
@@ -309,8 +277,8 @@ class FtpFs(AbstractedFS):
         ) = avfs.can_access("", self.h.uname)

     def mkdir(self, path: str) -> None:
-        ap, vfs, _ = self.rv2a(path, w=True)
-        bos.makedirs(ap, vf=vfs.flags)  # filezilla expects this
+        ap = self.rv2a(path, w=True)[0]
+        bos.makedirs(ap)  # filezilla expects this

     def listdir(self, path: str) -> list[str]:
         vpath = join(self.cwd, path)
@@ -324,7 +292,6 @@ class FtpFs(AbstractedFS):
             self.uname,
             not self.args.no_scandir,
             [[True, False], [False, True]],
-            throw=True,
         )
         vfs_ls = [x[0] for x in vfs_ls1]
         vfs_ls.extend(vfs_virt.keys())
@@ -409,12 +376,8 @@ class FtpFs(AbstractedFS):
         return st

     def utime(self, path: str, timeval: float) -> None:
-        try:
-            ap = self.rv2a(path, w=True)[0]
-            return bos.utime(ap, (int(time.time()), int(timeval)))
-        except Exception as ex:
-            logging.error("ftp.utime: %s, %r", ex, ex)
-            raise
+        ap = self.rv2a(path, w=True)[0]
+        return bos.utime(ap, (timeval, timeval))

     def lstat(self, path: str) -> os.stat_result:
         ap = self.rv2a(path)[0]
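Note the differing timestamp semantics in the utime hunk above: os.utime takes an (atime, mtime) tuple, so the two sides are not equivalent; a standalone illustration, using plain os.utime instead of the bos wrapper:

    import os, time

    def set_mtime_v1143(ap: str, timeval: float) -> None:
        # v1.14.3: mirror the requested time into both atime and mtime
        os.utime(ap, (timeval, timeval))

    def set_mtime_hovudstraum(ap: str, timeval: float) -> None:
        # hovudstraum: keep atime current; truncate both to whole seconds
        os.utime(ap, (int(time.time()), int(timeval)))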
@@ -503,11 +466,7 @@ class FtpHandler(FTPHandler):
     def ftp_STOR(self, file: str, mode: str = "w") -> Any:
         # Optional[str]
         vp = join(self.fs.cwd, file).lstrip("/")
-        try:
-            ap, vfs, rem = self.fs.v2a(vp, w=True)
-        except Exception as ex:
-            self.respond("550 %s" % (ex,), logging.info)
-            return
+        ap, vfs, rem = self.fs.v2a(vp, w=True)
         self.vfs_map[ap] = vp
         xbu = vfs.flags.get("xbu")
         if xbu and not runhook(
@@ -627,7 +586,7 @@ class Ftpd(object):
         if "::" in ips:
             ips.append("0.0.0.0")

-        ips = [x for x in ips if not x.startswith(("unix:", "fd:"))]
+        ips = [x for x in ips if "unix:" not in x]

         if self.args.ftp4:
             ips = [x for x in ips if ":" not in x]
copyparty/httpcli.py (2671): file diff suppressed because it is too large

copyparty/httpconn.py
@@ -59,8 +59,6 @@ class HttpConn(object):
         self.asrv: AuthSrv = hsrv.asrv  # mypy404
         self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
         self.pipes: Util.CachedDict = hsrv.pipes  # mypy404
-        self.ipu_iu: Optional[dict[str, str]] = hsrv.ipu_iu
-        self.ipu_nm: Optional[NetMap] = hsrv.ipu_nm
         self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
         self.xff_nm: Optional[NetMap] = hsrv.xff_nm
         self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore
@@ -105,6 +103,9 @@ class HttpConn(object):
         self.log_src = ("%s \033[%dm%d" % (ip, color, self.addr[1])).ljust(26)
         return self.log_src

+    def respath(self, res_name: str) -> str:
+        return os.path.join(self.E.mod, "web", res_name)
+
     def log(self, msg: str, c: Union[int, str] = 0) -> None:
         self.log_func(self.log_src, msg, c)
@@ -164,7 +165,6 @@ class HttpConn(object):

         self.log_src = self.log_src.replace("[36m", "[35m")
         try:
-            assert ssl  # type: ignore  # !rm
             ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
             ctx.load_cert_chain(self.args.cert)
             if self.args.ssl_ver:
@@ -190,7 +190,7 @@ class HttpConn(object):

         if self.args.ssl_dbg and hasattr(self.s, "shared_ciphers"):
             ciphers = self.s.shared_ciphers()
-            assert ciphers  # !rm
+            assert ciphers
             overlap = [str(y[::-1]) for y in ciphers]
             self.log("TLS cipher overlap:" + "\n".join(overlap))
             for k, v in [
@@ -224,6 +224,3 @@ class HttpConn(object):
         if self.u2idx:
             self.hsrv.put_u2idx(str(self.addr), self.u2idx)
             self.u2idx = None
-
-        if self.rproxy:
-            self.set_rproxy()

copyparty/httpsrv.py
@@ -1,7 +1,7 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

-import hashlib
+import base64
 import math
 import os
 import re
@@ -67,22 +67,17 @@ from .util import (
     Magician,
     Netdev,
     NetMap,
+    absreal,
     build_netmap,
-    has_resource,
     ipnorm,
-    load_ipr,
-    load_ipu,
-    load_resource,
     min_ex,
     shut_socket,
     spack,
     start_log_thrs,
     start_stackmon,
-    ub64enc,
 )

 if TYPE_CHECKING:
-    from .authsrv import VFS
     from .broker_util import BrokerCli
     from .ssdp import SSDPr

@@ -96,11 +91,6 @@ if not hasattr(socket, "AF_UNIX"):
     setattr(socket, "AF_UNIX", -9001)


-def load_jinja2_resource(E: EnvParams, name: str):
-    with load_resource(E, "web/" + name, "r") as f:
-        return f.read()
-
-
 class HttpSrv(object):
     """
     handles incoming connections using HttpConn to process http,
@@ -124,7 +114,6 @@ class HttpSrv(object):
         self.nm = NetMap([], [])
         self.ssdp: Optional["SSDPr"] = None
         self.gpwd = Garda(self.args.ban_pw)
-        self.gpwc = Garda(self.args.ban_pwc)
         self.g404 = Garda(self.args.ban_404)
         self.g403 = Garda(self.args.ban_403)
         self.g422 = Garda(self.args.ban_422, False)
@@ -133,12 +122,6 @@ class HttpSrv(object):
         self.bans: dict[str, int] = {}
         self.aclose: dict[str, int] = {}

-        dli: dict[str, tuple[float, int, "VFS", str, str]] = {}  # info
-        dls: dict[str, tuple[float, int]] = {}  # state
-        self.dli = self.tdli = dli
-        self.dls = self.tdls = dls
-        self.iiam = '<img src="%s.cpr/iiam.gif?cache=i" />' % (self.args.SRS,)
-
         self.bound: set[tuple[str, int]] = set()
         self.name = "hsrv" + nsuf
         self.mutex = threading.Lock()
@@ -154,7 +137,6 @@ class HttpSrv(object):
         self.t_periodic: Optional[threading.Thread] = None

         self.u2fh = FHC()
-        self.u2sc: dict[str, tuple[int, "hashlib._Hash"]] = {}
         self.pipes = CachedDict(0.2)
         self.metrics = Metrics(self)
         self.nreq = 0
@@ -170,39 +152,33 @@ class HttpSrv(object):
         self.u2idx_free: dict[str, U2idx] = {}
         self.u2idx_n = 0

-        assert jinja2  # type: ignore  # !rm
         env = jinja2.Environment()
-        env.loader = jinja2.FunctionLoader(lambda f: load_jinja2_resource(self.E, f))
+        env.loader = jinja2.FileSystemLoader(os.path.join(self.E.mod, "web"))
         jn = [
+            "splash",
+            "shares",
+            "svcs",
             "browser",
             "browser2",
-            "cf",
-            "idp",
+            "msg",
             "md",
             "mde",
-            "msg",
-            "rups",
-            "shares",
-            "splash",
-            "svcs",
+            "cf",
         ]
         self.j2 = {x: env.get_template(x + ".html") for x in jn}
-        self.prism = has_resource(self.E, "web/deps/prism.js.gz")
-
-        if self.args.ipu:
-            self.ipu_iu, self.ipu_nm = load_ipu(self.log, self.args.ipu)
-        else:
-            self.ipu_iu = self.ipu_nm = None
-
-        if self.args.ipr:
-            self.ipr = load_ipr(self.log, self.args.ipr)
-        else:
-            self.ipr = None
+        zs = os.path.join(self.E.mod, "web", "deps", "prism.js.gz")
+        self.prism = os.path.exists(zs)

         self.ipa_nm = build_netmap(self.args.ipa)
         self.xff_nm = build_netmap(self.args.xff_src)
         self.xff_lan = build_netmap("lan")

+        self.statics: set[str] = set()
+        self._build_statics()
+
+        self.ptn_cc = re.compile(r"[\x00-\x1f]")
+        self.ptn_hsafe = re.compile(r"[\x00-\x1f<>\"'&]")
+
         self.mallow = "GET HEAD POST PUT DELETE OPTIONS".split()
         if not self.args.no_dav:
             zs = "PROPFIND PROPPATCH LOCK UNLOCK MKCOL COPY MOVE"
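The loader swap in the hunk above changes where templates come from: FileSystemLoader reads web/*.html straight off the disk, while FunctionLoader routes every lookup through a callable, which is what lets the hovudstraum side serve templates through load_resource out of a zip/sfx bundle. A minimal sketch of the FunctionLoader shape (read_text is a stand-in for the bundled-resource reader):

    import jinja2

    def read_text(path: str) -> str:
        # stand-in for copyparty's bundled-resource reader
        with open(path, "r", encoding="utf-8") as f:
            return f.read()

    env = jinja2.Environment()
    env.loader = jinja2.FunctionLoader(lambda name: read_text("web/" + name))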
@@ -217,9 +193,6 @@ class HttpSrv(object):
         self.start_threads(4)

         if nid:
-            self.tdli = {}
-            self.tdls = {}
-
             if self.args.stackmon:
                 start_stackmon(self.args.stackmon, nid)
@@ -236,6 +209,14 @@ class HttpSrv(object):
         except:
             pass

+    def _build_statics(self) -> None:
+        for dp, _, df in os.walk(os.path.join(self.E.mod, "web")):
+            for fn in df:
+                ap = absreal(os.path.join(dp, fn))
+                self.statics.add(ap)
+                if ap.endswith(".gz"):
+                    self.statics.add(ap[:-3])
+
     def set_netdevs(self, netdevs: dict[str, Netdev]) -> None:
         ips = set()
         for ip, _ in self.bound:
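_build_statics above indexes every bundled asset by absolute path and, for precompressed files, registers a second entry with the .gz suffix stripped, so a later request for the plain filename can be satisfied by its compressed sibling; e.g.:

    statics = set()
    ap = "/srv/copyparty/web/deps/prism.js.gz"
    statics.add(ap)
    statics.add(ap[:-3])  # ".../prism.js" maps onto the same file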
@@ -256,7 +237,7 @@ class HttpSrv(object):
         if self.args.log_htp:
             self.log(self.name, "workers -= {} = {}".format(n, self.tp_nthr), 6)

-        assert self.tp_q  # !rm
+        assert self.tp_q
         for _ in range(n):
             self.tp_q.put(None)
@@ -321,8 +302,6 @@ class HttpSrv(object):

         Daemon(self.broker.say, "sig-hsrv-up1", ("cb_httpsrv_up",))

-        saddr = ("", 0)  # fwd-decl for `except TypeError as ex:`
-
         while not self.stopping:
             if self.args.log_conn:
                 self.log(self.name, "|%sC-ncli" % ("-" * 1,), c="90")
@@ -330,8 +309,7 @@ class HttpSrv(object):
             spins = 0
             while self.ncli >= self.nclimax:
                 if not spins:
-                    t = "at connection limit (global-option 'nc'); waiting"
-                    self.log(self.name, t, 3)
+                    self.log(self.name, "at connection limit; waiting", 3)

                 spins += 1
                 time.sleep(0.1)
@@ -405,19 +383,6 @@ class HttpSrv(object):
                 self.log(self.name, "accept({}): {}".format(fno, ex), c=6)
                 time.sleep(0.02)
                 continue
-            except TypeError as ex:
-                # on macOS, accept() may return a None saddr if blocked by LittleSnitch;
-                # unicode(saddr[0]) ==> TypeError: 'NoneType' object is not subscriptable
-                if tcp and not saddr:
-                    t = "accept(%s): failed to accept connection from client due to firewall or network issue"
-                    self.log(self.name, t % (fno,), c=3)
-                    try:
-                        sck.close()  # type: ignore
-                    except:
-                        pass
-                    time.sleep(0.02)
-                    continue
-                raise

             if self.args.log_conn:
                 t = "|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
@@ -466,7 +431,7 @@ class HttpSrv(object):
         )

     def thr_poolw(self) -> None:
-        assert self.tp_q  # !rm
+        assert self.tp_q
         while True:
             task = self.tp_q.get()
             if not task:
@@ -578,8 +543,8 @@ class HttpSrv(object):
         except:
             pass

-        # spack gives 4 lsb, take 3 lsb, get 4 ch
-        self.cb_v = ub64enc(spack(b">L", int(v))[1:]).decode("ascii")
+        v = base64.urlsafe_b64encode(spack(b">xxL", int(v)))
+        self.cb_v = v.decode("ascii")[-4:]
         self.cb_ts = time.time()
         return self.cb_v
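Both cache-buster variants above encode the same 3 low bytes of the version number as the same 4 base64 characters; a quick equivalence check, with struct/base64 standing in for copyparty's spack/ub64enc helpers (3 input bytes produce exactly 4 b64 chars, so no padding is involved either way):

    import base64, struct

    def cb_v1143(v: int) -> str:
        # ">xxL" = 2 pad bytes + big-endian u32; the last 4 b64 chars
        # cover the final 3 bytes of that 6-byte buffer
        return base64.urlsafe_b64encode(struct.pack(">xxL", v)).decode("ascii")[-4:]

    def cb_hovudstraum(v: int) -> str:
        # ">L" = big-endian u32; [1:] drops the MSB, keeping the 3 LSB
        return base64.urlsafe_b64encode(struct.pack(">L", v)[1:]).decode("ascii")

    assert cb_v1143(123456) == cb_hovudstraum(123456)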
@@ -610,32 +575,3 @@ class HttpSrv(object):
             ident += "a"

         self.u2idx_free[ident] = u2idx
-
-    def read_dls(
-        self,
-    ) -> tuple[
-        dict[str, tuple[float, int, str, str, str]], dict[str, tuple[float, int]]
-    ]:
-        """
-        mp-broker asking for local dl-info + dl-state;
-        reduce overhead by sending just the vfs vpath
-        """
-        dli = {k: (a, b, c.vpath, d, e) for k, (a, b, c, d, e) in self.dli.items()}
-        return (dli, self.dls)
-
-    def write_dls(
-        self,
-        sdli: dict[str, tuple[float, int, str, str, str]],
-        dls: dict[str, tuple[float, int]],
-    ) -> None:
-        """
-        mp-broker pushing total dl-info + dl-state;
-        swap out the vfs vpath with the vfs node
-        """
-        dli: dict[str, tuple[float, int, "VFS", str, str]] = {}
-        for k, (a, b, c, d, e) in sdli.items():
-            vn = self.asrv.vfs.all_nodes[c]
-            dli[k] = (a, b, vn, d, e)
-
-        self.tdli = dli
-        self.tdls = dls

copyparty/ico.py
@@ -94,21 +94,10 @@ class Ico(object):
 <?xml version="1.0" encoding="UTF-8"?>
 <svg version="1.1" viewBox="0 0 100 {}" xmlns="http://www.w3.org/2000/svg"><g>
 <rect width="100%" height="100%" fill="#{}" />
-<text x="50%" y="{}" dominant-baseline="middle" text-anchor="middle" xml:space="preserve"
+<text x="50%" y="50%" dominant-baseline="middle" text-anchor="middle" xml:space="preserve"
 fill="#{}" font-family="monospace" font-size="14px" style="letter-spacing:.5px">{}</text>
 </g></svg>
 """

-        txt = html_escape(ext, True)
-        if "\n" in txt:
-            lines = txt.split("\n")
-            n = len(lines)
-            y = "20%" if n == 2 else "10%" if n == 3 else "0"
-            zs = '<tspan x="50%%" dy="1.2em">%s</tspan>'
-            txt = "".join([zs % (x,) for x in lines])
-        else:
-            y = "50%"
-
-        svg = svg.format(h, c[:6], y, c[6:], txt)
+        svg = svg.format(h, c[:6], c[6:], html_escape(ext, True))

         return "image/svg+xml", svg.encode("utf-8")
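The deleted branch above renders multi-line icon labels by stacking SVG tspans, each one re-anchored at x=50% and advanced one line-height; the same idea as a standalone helper:

    def multiline_tspans(txt: str) -> str:
        # '%%' escapes the literal % in the x attribute
        zs = '<tspan x="50%%" dy="1.2em">%s</tspan>'
        return "".join([zs % (x,) for x in txt.split("\n")])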

copyparty/mdns.py
@@ -25,7 +25,6 @@ from .stolen.dnslib import (
     DNSHeader,
     DNSQuestion,
     DNSRecord,
-    set_avahi_379,
 )
 from .util import CachedSet, Daemon, Netdev, list_ips, min_ex

@@ -73,11 +72,7 @@ class MDNS(MCast):
         self.ngen = ngen
         self.ttl = 300

-        if not self.args.zm_nwa_1:
-            set_avahi_379()
-
-        zs = self.args.zm_fqdn or (self.args.name + ".local")
-        zs = zs.replace("--name", self.args.name).rstrip(".") + "."
+        zs = self.args.name + ".local."
         zs = zs.encode("ascii", "replace").decode("ascii", "replace")
         self.hn = "-".join(x for x in zs.split("?") if x) or (
             "vault-{}".format(random.randint(1, 255))
@@ -341,9 +336,6 @@ class MDNS(MCast):
                     self.log("stopped", 2)
                     return

-                if self.args.zm_no_pe:
-                    continue
-
                 t = "{} {} \033[33m|{}| {}\n{}".format(
                     self.srv[sck].name, addr, len(buf), repr(buf)[2:-1], min_ex()
                 )

copyparty/metrics.py
@@ -18,7 +18,7 @@ class Metrics(object):

     def tx(self, cli: "HttpCli") -> bool:
         if not cli.avol:
-            raise Pebkac(403, "'stats' not allowed for user " + cli.uname)
+            raise Pebkac(403, "not allowed for user " + cli.uname)

         args = cli.args
         if not args.stats:
@@ -72,9 +72,6 @@ class Metrics(object):
         v = "{:.3f}".format(self.hsrv.t0)
         addug("cpp_boot_unixtime", "seconds", v, t)

-        t = "number of active downloads"
-        addg("cpp_active_dl", str(len(self.hsrv.tdls)), t)
-
         t = "number of open http(s) client connections"
         addg("cpp_http_conns", str(self.hsrv.ncli), t)
@@ -91,7 +88,7 @@ class Metrics(object):
         addg("cpp_total_bans", str(self.hsrv.nban), t)

         if not args.nos_vst:
-            x = self.hsrv.broker.ask("up2k.get_state", True, "")
+            x = self.hsrv.broker.ask("up2k.get_state")
             vs = json.loads(x.get())

             nvidle = 0
@@ -131,7 +128,7 @@ class Metrics(object):
         addbh("cpp_disk_size_bytes", "total HDD size of volume")
         addbh("cpp_disk_free_bytes", "free HDD space in volume")
         for vpath, vol in allvols:
-            free, total, _ = get_df(vol.realpath, False)
+            free, total = get_df(vol.realpath)
             if free is None or total is None:
                 continue

copyparty/mtag.py
@@ -4,7 +4,6 @@ from __future__ import print_function, unicode_literals
 import argparse
 import json
 import os
-import re
 import shutil
 import subprocess as sp
 import sys
@@ -18,7 +17,6 @@ from .util import (
     REKOBO_LKEY,
     VF_CAREFUL,
     fsenc,
-    gzip,
     min_ex,
     pybin,
     retchk,
@@ -29,7 +27,7 @@ from .util import (
 )

 if True:  # pylint: disable=using-constant-test
-    from typing import IO, Any, Optional, Union
+    from typing import Any, Optional, Union

     from .util import NamedLogger, RootLogger
@@ -64,11 +62,6 @@ def have_ff(scmd: str) -> bool:
 HAVE_FFMPEG = not os.environ.get("PRTY_NO_FFMPEG") and have_ff("ffmpeg")
 HAVE_FFPROBE = not os.environ.get("PRTY_NO_FFPROBE") and have_ff("ffprobe")

-CBZ_PICS = set("png jpg jpeg gif bmp tga tif tiff webp avif".split())
-CBZ_01 = re.compile(r"(^|[^0-9v])0+[01]\b")
-
-FMT_AU = set("mp3 ogg flac wav".split())
-

 class MParser(object):
     def __init__(self, cmdline: str) -> None:
@@ -133,7 +126,6 @@ def au_unpk(
     log: "NamedLogger", fmt_map: dict[str, str], abspath: str, vn: Optional[VFS] = None
 ) -> str:
     ret = ""
-    maxsz = 1024 * 1024 * 64
     try:
         ext = abspath.split(".")[-1].lower()
         au, pk = fmt_map[ext].split(".")
@@ -141,6 +133,8 @@ def au_unpk(
         fd, ret = tempfile.mkstemp("." + au)

         if pk == "gz":
+            import gzip
+
             fi = gzip.GzipFile(abspath, mode="rb")

         elif pk == "xz":
@@ -154,52 +148,24 @@ def au_unpk(
             zf = zipfile.ZipFile(abspath, "r")
             zil = zf.infolist()
             zil = [x for x in zil if x.filename.lower().split(".")[-1] == au]
-            if not zil:
-                raise Exception("no audio inside zip")
             fi = zf.open(zil[0])

-        elif pk == "cbz":
-            import zipfile
-
-            zf = zipfile.ZipFile(abspath, "r")
-            znil = [(x.filename.lower(), x) for x in zf.infolist()]
-            nf = len(znil)
-            znil = [x for x in znil if x[0].split(".")[-1] in CBZ_PICS]
-            znil = [x for x in znil if "cover" in x[0]] or znil
-            znil = [x for x in znil if CBZ_01.search(x[0])] or znil
-            t = "cbz: %d files, %d hits" % (nf, len(znil))
-            using = sorted(znil)[0][1].filename
-            if znil:
-                t += ", using " + using
-            log(t)
-            if not znil:
-                raise Exception("no images inside cbz")
-            fi = zf.open(using)
-
-        elif pk == "epub":
-            fi = get_cover_from_epub(log, abspath)
-
         else:
             raise Exception("unknown compression %s" % (pk,))

-        fsz = 0
         with os.fdopen(fd, "wb") as fo:
             while True:
                 buf = fi.read(32768)
                 if not buf:
                     break

-                fsz += len(buf)
-                if fsz > maxsz:
-                    raise Exception("zipbomb defused")
-
                 fo.write(buf)

         return ret

     except Exception as ex:
         if ret:
-            t = "failed to decompress audio file %r: %r"
+            t = "failed to decompress audio file [%s]: %r"
             log(t % (abspath, ex))
             wunlink(log, ret, vn.flags if vn else VF_CAREFUL)
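The deleted fsz/maxsz counters in the hunk above are a streaming zipbomb guard: the archive member is copied out in 32 KiB chunks and the copy aborts once the decompressed size passes a cap, so a tiny but highly-compressed member cannot fill the tempdir; the same pattern as a standalone helper:

    def copy_capped(fi, fo, maxsz: int = 1024 * 1024 * 64) -> None:
        fsz = 0
        while True:
            buf = fi.read(32768)
            if not buf:
                break
            fsz += len(buf)
            if fsz > maxsz:
                raise Exception("zipbomb defused")
            fo.write(buf)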
@@ -208,7 +174,7 @@ def au_unpk(

 def ffprobe(
     abspath: str, timeout: int = 60
-) -> tuple[dict[str, tuple[int, Any]], dict[str, list[Any]], list[Any], dict[str, Any]]:
+) -> tuple[dict[str, tuple[int, Any]], dict[str, list[Any]]]:
     cmd = [
         b"ffprobe",
         b"-hide_banner",
@@ -222,17 +188,8 @@ def ffprobe(
     return parse_ffprobe(so)


-def parse_ffprobe(
-    txt: str,
-) -> tuple[dict[str, tuple[int, Any]], dict[str, list[Any]], list[Any], dict[str, Any]]:
-    """
-    txt: output from ffprobe -show_format -show_streams
-    returns:
-    * normalized tags
-    * original/raw tags
-    * list of streams
-    * format props
-    """
+def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[Any]]]:
+    """ffprobe -show_format -show_streams"""
     streams = []
     fmt = {}
     g = {}
@@ -256,7 +213,7 @@ def parse_ffprobe(
     ret: dict[str, Any] = {}  # processed
     md: dict[str, list[Any]] = {}  # raw tags

-    is_audio = fmt.get("format_name") in FMT_AU
+    is_audio = fmt.get("format_name") in ["mp3", "ogg", "flac", "wav"]
     if fmt.get("filename", "").split(".")[-1].lower() in ["m4a", "aac"]:
         is_audio = True
@@ -284,8 +241,6 @@ def parse_ffprobe(
         ["channel_layout", "chs"],
         ["sample_rate", ".hz"],
         ["bit_rate", ".aq"],
-        ["bits_per_sample", ".bps"],
-        ["bits_per_raw_sample", ".bprs"],
         ["duration", ".dur"],
     ]
@@ -325,7 +280,7 @@ def parse_ffprobe(
             ret[rk] = v1

     if ret.get("vc") == "ansi":  # shellscript
-        return {}, {}, [], {}
+        return {}, {}

     for strm in streams:
         for sk, sv in strm.items():
@@ -374,77 +329,7 @@ def parse_ffprobe(
     zero = int("0")
     zd = {k: (zero, v) for k, v in ret.items()}

-    return zd, md, streams, fmt
-
-
-def get_cover_from_epub(log: "NamedLogger", abspath: str) -> Optional[IO[bytes]]:
-    import zipfile
-
-    from .dxml import parse_xml
-
-    try:
-        from urlparse import urljoin  # Python2
-    except ImportError:
-        from urllib.parse import urljoin  # Python3
-
-    with zipfile.ZipFile(abspath, "r") as z:
-        # First open the container file to find the package document (.opf file)
-        try:
-            container_root = parse_xml(z.read("META-INF/container.xml").decode())
-        except KeyError:
-            log("epub: no container file found in %s" % (abspath,))
-            return None
-
-        # https://www.w3.org/TR/epub-33/#sec-container.xml-rootfile-elem
-        container_ns = {"": "urn:oasis:names:tc:opendocument:xmlns:container"}
-        # One file could contain multiple package documents, default to the first one
-        rootfile_path = container_root.find("./rootfiles/rootfile", container_ns).get(
-            "full-path"
-        )
-
-        # Then open the first package document to find the path of the cover image
-        try:
-            package_root = parse_xml(z.read(rootfile_path).decode())
-        except KeyError:
-            log("epub: no package document found in %s" % (abspath,))
-            return None
-
-        # https://www.w3.org/TR/epub-33/#sec-package-doc
-        package_ns = {"": "http://www.idpf.org/2007/opf"}
-        # https://www.w3.org/TR/epub-33/#sec-cover-image
-        coverimage_path_node = package_root.find(
-            "./manifest/item[@properties='cover-image']", package_ns
-        )
-        if coverimage_path_node is not None:
-            coverimage_path = coverimage_path_node.get("href")
-        else:
-            # This might be an EPUB2 file, try the legacy way of specifying covers
-            coverimage_path = _get_cover_from_epub2(log, package_root, package_ns)
-
-        # This url is either absolute (in the .epub) or relative to the package document
-        adjusted_cover_path = urljoin(rootfile_path, coverimage_path)
-
-        return z.open(adjusted_cover_path)
-
-
-def _get_cover_from_epub2(
-    log: "NamedLogger", package_root, package_ns
-) -> Optional[str]:
-    # <meta name="cover" content="id-to-cover-image"> in <metadata>, then
-    # <item> in <manifest>
-    cover_id = package_root.find("./metadata/meta[@name='cover']", package_ns).get(
-        "content"
-    )
-
-    if not cover_id:
-        return None
-
-    for node in package_root.iterfind("./manifest/item", package_ns):
-        if node.get("id") == cover_id:
-            cover_path = node.get("href")
-            return cover_path
-
-    return None
+    return zd, md


 class MTag(object):
@@ -588,7 +473,7 @@ class MTag(object):
             sv = str(zv).split("/")[0].strip().lstrip("0")
             ret[sk] = sv or 0

-        # normalize key notation to rekobo
+        # normalize key notation to rkeobo
         okey = ret.get("key")
         if okey:
             key = str(okey).replace(" ", "").replace("maj", "").replace("min", "m")
@@ -668,7 +553,7 @@ class MTag(object):
                 raise Exception()
             except Exception as ex:
                 if self.args.mtag_v:
-                    self.log("mutagen-err [%s] @ %r" % (ex, abspath), "90")
+                    self.log("mutagen-err [{}] @ [{}]".format(ex, abspath), "90")

                 return self.get_ffprobe(abspath) if self.can_ffprobe else {}
@@ -715,7 +600,7 @@ class MTag(object):
         if not bos.path.isfile(abspath):
             return {}

-        ret, md, _, _ = ffprobe(abspath, self.args.mtag_to)
+        ret, md = ffprobe(abspath, self.args.mtag_to)

         if self.args.mtag_vv:
             for zd in (ret, dict(md)):
@@ -785,8 +670,8 @@ class MTag(object):
                     ret[tag] = zj[tag]
             except:
                 if self.args.mtag_v:
-                    t = "mtag error: tagname %r, parser %r, file %r => %r"
-                    self.log(t % (tagname, parser.bin, abspath, min_ex()), 6)
+                    t = "mtag error: tagname {}, parser {}, file {} => {}"
+                    self.log(t.format(tagname, parser.bin, abspath, min_ex()))

         if ap != abspath:
             wunlink(self.log, ap, VF_CAREFUL)

copyparty/multicast.py
@@ -163,7 +163,6 @@ class MCast(object):
         sck.settimeout(None)
         sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
         try:
-            # safe for this purpose; https://lwn.net/Articles/853637/
             sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
         except:
             pass
@@ -183,7 +182,11 @@ class MCast(object):
             srv.ips[oth_ip.split("/")[0]] = ipaddress.ip_network(oth_ip, False)

         # gvfs breaks if a linklocal ip appears in a dns reply
-        ll = {k: v for k, v in srv.ips.items() if k.startswith(("169.254", "fe80"))}
+        ll = {
+            k: v
+            for k, v in srv.ips.items()
+            if k.startswith("169.254") or k.startswith("fe80")
+        }
         rt = {k: v for k, v in srv.ips.items() if k not in ll}

         if self.args.ll or not rt:
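The one-liner on the hovudstraum side relies on str.startswith accepting a tuple of prefixes, which is exactly equivalent to the chained or:

    ip = "fe80::1"
    assert ip.startswith(("169.254", "fe80"))
    assert ip.startswith("169.254") or ip.startswith("fe80")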

copyparty/pwhash.py
@@ -15,7 +15,7 @@ try:
         raise Exception()

     HAVE_ARGON2 = True
-    from argon2 import exceptions as argon2ex
+    from argon2 import __version__ as argon2ver
 except:
     HAVE_ARGON2 = False
@@ -24,13 +24,17 @@ class PWHash(object):
     def __init__(self, args: argparse.Namespace):
         self.args = args

-        zsl = args.ah_alg.split(",")
-        alg = zsl[0]
+        try:
+            alg, ac = args.ah_alg.split(",")
+        except:
+            alg = args.ah_alg
+            ac = {}

         if alg == "none":
             alg = ""

         self.alg = alg
-        self.ac = zsl[1:]
+        self.ac = ac
         if not alg:
             self.on = False
             self.hash = unicode
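The two parsers above behave differently for multi-parameter values: a 2-way unpack of str.split(",") raises ValueError unless there is exactly one comma, so the v1.14.3 side falls back to an empty parameter list for anything like "scrypt,13,2,8,4", while hovudstraum keeps every field after the algorithm name:

    zs = "scrypt,13,2,8,4"
    zsl = zs.split(",")
    alg, ac = zsl[0], zsl[1:]
    # alg == "scrypt"; ac == ["13", "2", "8", "4"]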
@@ -86,23 +90,17 @@ class PWHash(object):
         its = 2
         blksz = 8
         para = 4
-        ramcap = 0  # openssl 1.1 = 32 MiB
         try:
             cost = 2 << int(self.ac[0])
             its = int(self.ac[1])
             blksz = int(self.ac[2])
             para = int(self.ac[3])
-            ramcap = int(self.ac[4]) * 1024 * 1024
         except:
             pass

-        cfg = {"salt": self.salt, "n": cost, "r": blksz, "p": para, "dklen": 24}
-        if ramcap:
-            cfg["maxmem"] = ramcap
-
         ret = plain.encode("utf-8")
         for _ in range(its):
-            ret = hashlib.scrypt(ret, **cfg)
+            ret = hashlib.scrypt(ret, salt=self.salt, n=cost, r=blksz, p=para, dklen=24)

         return "+" + base64.urlsafe_b64encode(ret).decode("utf-8")
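The dropped ramcap/cfg wiring matters because hashlib.scrypt enforces a maxmem ceiling (roughly 32 MiB by default under OpenSSL) and raises ValueError when n*r*p needs more; a sketch of the guarded call, with illustrative default parameter values:

    import hashlib

    def scrypt_once(data: bytes, salt: bytes, cost: int = 2 << 13,
                    blksz: int = 8, para: int = 4, ramcap: int = 0) -> bytes:
        cfg = {"salt": salt, "n": cost, "r": blksz, "p": para, "dklen": 24}
        if ramcap:
            cfg["maxmem"] = ramcap  # lift the allowance for large cost values
        return hashlib.scrypt(data, **cfg)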
@@ -147,10 +145,6 @@ class PWHash(object):
     def cli(self) -> None:
         import getpass

-        if self.args.usernames:
-            t = "since you have enabled --usernames, please provide username:password"
-            print(t)
-
         while True:
             try:
                 p1 = getpass.getpass("password> ")

copyparty/smbd.py
@@ -12,7 +12,7 @@ from types import SimpleNamespace
 from .__init__ import ANYWIN, EXE, TYPE_CHECKING
 from .authsrv import LEELOO_DALLAS, VFS
 from .bos import bos
-from .util import Daemon, absreal, min_ex, pybin, runhook, vjoin
+from .util import Daemon, min_ex, pybin, runhook

 if True:  # pylint: disable=using-constant-test
     from typing import Any, Union
@@ -151,8 +151,6 @@ class SMB(object):
     def _uname(self) -> str:
         if self.noacc:
             return LEELOO_DALLAS
-        if not self.asrv.acct:
-            return "*"

         try:
             # you found it! my single worst bit of code so far
@@ -191,7 +189,7 @@ class SMB(object):
         vfs, rem = self.asrv.vfs.get(vpath, uname, *perms)
         if not vfs.realpath:
             raise Exception("unmapped vfs")
-        return vfs, vjoin(vfs.realpath, rem)
+        return vfs, vfs.canonical(rem)

     def _listdir(self, vpath: str, *a: Any, **ka: Any) -> list[str]:
         vpath = vpath.replace("\\", "/").lstrip("/")
@@ -215,7 +213,7 @@ class SMB(object):
         sz = 112 * 2  # ['.', '..']
         for n, fn in enumerate(ls):
             if sz >= 64000:
-                t = "listing only %d of %d files (%d byte) in /%s for performance; see --smb-nwa-1"
+                t = "listing only %d of %d files (%d byte) in /%s; see impacket#1433"
                 warning(t, n, len(ls), sz, vpath)
                 break
@@ -244,7 +242,6 @@ class SMB(object):
             t = "blocked write (no-write-acc %s): /%s @%s"
             yeet(t % (vfs.axs.uwrite, vpath, uname))

-        ap = absreal(ap)
         xbu = vfs.flags.get("xbu")
         if xbu and not runhook(
             self.nlog,
@@ -263,7 +260,7 @@ class SMB(object):
             time.time(),
             "",
         ):
-            yeet("blocked by xbu server config: %r" % (vpath,))
+            yeet("blocked by xbu server config: " + vpath)

         ret = bos.open(ap, flags, *a, mode=chmod, **ka)
         if wr:
@@ -320,7 +317,7 @@ class SMB(object):

         self.hub.up2k.handle_mv(uname, "1.7.6.2", vp1, vp2)
         try:
-            bos.makedirs(ap2, vf=vfs2.flags)
+            bos.makedirs(ap2)
         except:
             pass
@@ -334,7 +331,7 @@ class SMB(object):
             t = "blocked mkdir (no-write-acc %s): /%s @%s"
             yeet(t % (vfs.axs.uwrite, vpath, uname))

-        return bos.mkdir(ap, vfs.flags["chmod_d"])
+        return bos.mkdir(ap)

     def _stat(self, vpath: str, *a: Any, **ka: Any) -> os.stat_result:
         try:

copyparty/ssdp.py
@@ -84,7 +84,7 @@ class SSDPr(object):
         name = self.args.doctitle
         zs = zs.strip().format(c(ubase), c(url), c(name), c(self.args.zsid))
         hc.reply(zs.encode("utf-8", "replace"))
-        return False  # close connection
+        return False  # close connectino


 class SSDPd(MCast):

copyparty/stolen/dnslib/dns.py
@@ -8,7 +8,7 @@ from itertools import chain
 from .bimap import Bimap, BimapError
 from .bit import get_bits, set_bits
 from .buffer import BufferError
-from .label import DNSBuffer, DNSLabel, set_avahi_379
+from .label import DNSBuffer, DNSLabel
 from .ranges import IP4, IP6, H, I, check_bytes
@@ -426,7 +426,7 @@ class RR(object):
             if rdlength:
                 rdata = RDMAP.get(QTYPE.get(rtype), RD).parse(buffer, rdlength)
             else:
-                rdata = RD(b"a")
+                rdata = ""
             return cls(rname, rtype, rclass, ttl, rdata)
         except (BufferError, BimapError) as e:
             raise DNSError("Error unpacking RR [offset=%d]: %s" % (buffer.offset, e))

copyparty/stolen/dnslib/label.py
@@ -11,23 +11,6 @@ LDH = set(range(33, 127))
 ESCAPE = re.compile(r"\\([0-9][0-9][0-9])")


-avahi_379 = 0
-
-
-def set_avahi_379():
-    global avahi_379
-    avahi_379 = 1
-
-
-def log_avahi_379(args):
-    global avahi_379
-    if avahi_379 == 2:
-        return
-    avahi_379 = 2
-    t = "Invalid pointer in DNSLabel [offset=%d,pointer=%d,length=%d];\n\033[35m NOTE: this is probably avahi-bug #379, packet corruption in Avahi's mDNS-reflection feature. Copyparty has a workaround and is OK, but other devices need either --zm4 or --zm6"
-    raise BufferError(t % args)
-
-
 class DNSLabelError(Exception):
     pass
@@ -113,11 +96,8 @@ class DNSBuffer(Buffer):
                 )
                 if pointer < self.offset:
                     self.offset = pointer
-                elif avahi_379:
-                    log_avahi_379((self.offset, pointer, len(self.data)))
-                    label.extend(b"a")
-                    break
                 else:
                     raise BufferError(
                         "Invalid pointer in DNSLabel [offset=%d,pointer=%d,length=%d]"
                         % (self.offset, pointer, len(self.data))
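Background for the pointer check above: DNS name compression encodes a label sequence as a 14-bit offset back into the packet, so a valid pointer always references earlier data; a forward pointer only occurs in corrupt packets (e.g. the Avahi reflector bug the deleted code mentions), which hovudstraum tolerates and v1.14.3 rejects outright. The invariant, isolated:

    def follow_pointer(offset: int, pointer: int) -> int:
        # compression pointers must reference earlier bytes of the packet
        if pointer >= offset:
            raise BufferError("invalid (forward) DNS compression pointer")
        return pointer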

copyparty/stolen/qrcodegen.py
@@ -594,20 +594,3 @@ def _get_bit(x: int, i: int) -> bool:

 class DataTooLongError(ValueError):
     pass
-
-
-def qr2svg(qr: QrCode, border: int) -> str:
-    parts: list[str] = []
-    for y in range(qr.size):
-        sy = border + y
-        for x in range(qr.size):
-            if qr.modules[y][x]:
-                parts.append("M%d,%dh1v1h-1z" % (border + x, sy))
-    t = """\
-<?xml version="1.0" encoding="UTF-8"?>
-<svg xmlns="http://www.w3.org/2000/svg" version="1.1" viewBox="0 0 {0} {0}" stroke="none">
-    <rect width="100%" height="100%" fill="#F7F7F7"/>
-    <path d="{1}" fill="#111111"/>
-</svg>
-"""
-    return t.format(qr.size + border * 2, " ".join(parts))

copyparty/sutil.py
@@ -17,9 +17,6 @@ if True:  # pylint: disable=using-constant-test
     from .util import NamedLogger


-TAR_NO_OPUS = set("aac|m4a|mp3|oga|ogg|opus|wma".split("|"))
-
-
 class StreamArc(object):
     def __init__(
         self,
@@ -85,7 +82,9 @@ def enthumb(
 ) -> dict[str, Any]:
     rem = f["vp"]
     ext = rem.rsplit(".", 1)[-1].lower()
-    if (fmt == "mp3" and ext == "mp3") or (fmt == "opus" and ext in TAR_NO_OPUS):
+    if (fmt == "mp3" and ext == "mp3") or (
+        fmt == "opus" and ext in "aac|m4a|mp3|ogg|opus|wma".split("|")
+    ):
         raise Exception()

     vp = vjoin(vtop, rem.split("/", 1)[1])
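The hovudstraum side hoists this format list into a module-level set (TAR_NO_OPUS) since `x in some_set` is a constant-time hash lookup, whereas the inline version rebuilds and scans a list on every call; the membership test itself is unchanged:

    TAR_NO_OPUS = set("aac|m4a|mp3|oga|ogg|opus|wma".split("|"))
    assert "m4a" in TAR_NO_OPUS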
@@ -111,7 +110,7 @@ def errdesc(
     report = ["copyparty failed to add the following files to the archive:", ""]

     for fn, err in errors:
-        report.extend([" file: %r" % (fn,), "error: %s" % (err,), ""])
+        report.extend([" file: {}".format(fn), "error: {}".format(err), ""])

     btxt = "\r\n".join(report).encode("utf-8", "replace")
     btxt = vol_san(list(vfs.all_vols.values()), btxt)

copyparty/svchub.py
@ -2,8 +2,10 @@
|
||||||
from __future__ import print_function, unicode_literals
|
from __future__ import print_function, unicode_literals
|
||||||
|
|
||||||
import argparse
|
import argparse
|
||||||
import atexit
|
import base64
|
||||||
|
import calendar
|
||||||
import errno
|
import errno
|
||||||
|
import gzip
|
||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
import re
|
import re
|
||||||
|
@ -14,7 +16,7 @@ import string
|
||||||
import sys
|
import sys
|
||||||
import threading
|
import threading
|
||||||
import time
|
import time
|
||||||
from datetime import datetime
|
from datetime import datetime, timedelta
|
||||||
|
|
||||||
# from inspect import currentframe
|
# from inspect import currentframe
|
||||||
# print(currentframe().f_lineno)
|
# print(currentframe().f_lineno)
|
||||||
|
@ -28,7 +30,6 @@ if True: # pylint: disable=using-constant-test
|
||||||
|
|
||||||
from .__init__ import ANYWIN, EXE, MACOS, PY2, TYPE_CHECKING, E, EnvParams, unicode
|
from .__init__ import ANYWIN, EXE, MACOS, PY2, TYPE_CHECKING, E, EnvParams, unicode
|
||||||
from .authsrv import BAD_CFG, AuthSrv
|
from .authsrv import BAD_CFG, AuthSrv
|
||||||
from .bos import bos
|
|
||||||
from .cert import ensure_cert
|
from .cert import ensure_cert
|
||||||
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, HAVE_MUTAGEN
|
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, HAVE_MUTAGEN
|
||||||
from .pwhash import HAVE_ARGON2
|
from .pwhash import HAVE_ARGON2
|
||||||
|
@ -39,7 +40,6 @@ from .th_srv import (
|
||||||
HAVE_FFPROBE,
|
HAVE_FFPROBE,
|
||||||
HAVE_HEIF,
|
HAVE_HEIF,
|
||||||
HAVE_PIL,
|
HAVE_PIL,
|
||||||
HAVE_RAW,
|
|
||||||
HAVE_VIPS,
|
HAVE_VIPS,
|
||||||
HAVE_WEBP,
|
HAVE_WEBP,
|
||||||
ThumbSrv,
|
ThumbSrv,
|
||||||
|
@ -52,9 +52,6 @@ from .util import (
|
||||||
FFMPEG_URL,
|
FFMPEG_URL,
|
||||||
HAVE_PSUTIL,
|
HAVE_PSUTIL,
|
||||||
HAVE_SQLITE3,
|
HAVE_SQLITE3,
|
||||||
HAVE_ZMQ,
|
|
||||||
RE_ANSI,
|
|
||||||
URL_BUG,
|
|
||||||
UTC,
|
UTC,
|
||||||
VERSIONS,
|
VERSIONS,
|
||||||
Daemon,
|
Daemon,
|
||||||
|
@ -63,25 +60,16 @@ from .util import (
|
||||||
HMaccas,
|
HMaccas,
|
||||||
ODict,
|
ODict,
|
||||||
alltrace,
|
alltrace,
|
||||||
|
ansi_re,
|
||||||
build_netmap,
|
build_netmap,
|
||||||
expat_ver,
|
|
||||||
gzip,
|
|
||||||
load_ipr,
|
|
||||||
load_ipu,
|
|
||||||
lock_file,
|
|
||||||
min_ex,
|
min_ex,
|
||||||
mp,
|
mp,
|
||||||
odfusion,
|
odfusion,
|
||||||
pybin,
|
pybin,
|
||||||
start_log_thrs,
|
start_log_thrs,
|
||||||
start_stackmon,
|
start_stackmon,
|
||||||
termsize,
|
|
||||||
ub64enc,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
if HAVE_SQLITE3:
|
|
||||||
import sqlite3
|
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
if TYPE_CHECKING:
|
||||||
try:
|
try:
|
||||||
from .mdns import MDNS
|
from .mdns import MDNS
|
||||||
|
@ -93,11 +81,6 @@ if PY2:
|
||||||
range = xrange # type: ignore
|
range = xrange # type: ignore
|
||||||
|
|
||||||
|
|
||||||
VER_IDP_DB = 1
|
|
||||||
VER_SESSION_DB = 1
|
|
||||||
VER_SHARES_DB = 2
|
|
||||||
|
|
||||||
|
|
||||||
class SvcHub(object):
|
class SvcHub(object):
|
||||||
"""
|
"""
|
||||||
Hosts all services which cannot be parallelized due to reliance on monolithic resources.
|
Hosts all services which cannot be parallelized due to reliance on monolithic resources.
|
||||||
|
@ -121,7 +104,6 @@ class SvcHub(object):
|
||||||
self.argv = argv
|
self.argv = argv
|
||||||
self.E: EnvParams = args.E
|
self.E: EnvParams = args.E
|
||||||
self.no_ansi = args.no_ansi
|
self.no_ansi = args.no_ansi
|
||||||
self.tz = UTC if args.log_utc else None
|
|
||||||
self.logf: Optional[typing.TextIO] = None
|
self.logf: Optional[typing.TextIO] = None
|
||||||
self.logf_base_fn = ""
|
self.logf_base_fn = ""
|
||||||
self.is_dut = False # running in unittest; always False
|
self.is_dut = False # running in unittest; always False
|
||||||
|
@ -129,15 +111,14 @@ class SvcHub(object):
|
||||||
self.stopping = False
|
self.stopping = False
|
||||||
self.stopped = False
|
self.stopped = False
|
||||||
self.reload_req = False
|
self.reload_req = False
|
||||||
self.reload_mutex = threading.Lock()
|
self.reloading = 0
|
||||||
self.stop_cond = threading.Condition()
|
self.stop_cond = threading.Condition()
|
||||||
self.nsigs = 3
|
self.nsigs = 3
|
||||||
self.retcode = 0
|
self.retcode = 0
|
||||||
self.httpsrv_up = 0
|
self.httpsrv_up = 0
|
||||||
|
|
||||||
self.log_mutex = threading.Lock()
|
self.log_mutex = threading.Lock()
|
||||||
self.cday = 0
|
self.next_day = 0
|
||||||
self.cmon = 0
|
|
||||||
self.tstack = 0.0
|
self.tstack = 0.0
|
||||||
|
|
||||||
self.iphash = HMaccas(os.path.join(self.E.cfg, "iphash"), 8)
|
self.iphash = HMaccas(os.path.join(self.E.cfg, "iphash"), 8)
|
||||||
|
@ -156,7 +137,6 @@ class SvcHub(object):
|
||||||
args.no_del = True
|
args.no_del = True
|
||||||
args.no_mv = True
|
args.no_mv = True
|
||||||
args.hardlink = True
|
args.hardlink = True
|
||||||
args.dav_auth = True
|
|
||||||
args.vague_403 = True
|
args.vague_403 = True
|
||||||
args.nih = True
|
args.nih = True
|
||||||
|
|
||||||
|
@ -173,7 +153,6 @@ class SvcHub(object):
|
||||||
# for non-http clients (ftp, tftp)
|
# for non-http clients (ftp, tftp)
|
||||||
self.bans: dict[str, int] = {}
|
self.bans: dict[str, int] = {}
|
||||||
self.gpwd = Garda(self.args.ban_pw)
|
self.gpwd = Garda(self.args.ban_pw)
|
||||||
self.gpwc = Garda(self.args.ban_pwc)
|
|
||||||
self.g404 = Garda(self.args.ban_404)
|
self.g404 = Garda(self.args.ban_404)
|
||||||
self.g403 = Garda(self.args.ban_403)
|
self.g403 = Garda(self.args.ban_403)
|
||||||
self.g422 = Garda(self.args.ban_422, False)
|
self.g422 = Garda(self.args.ban_422, False)
|
||||||
|
@@ -202,14 +181,8 @@

         if not args.use_fpool and args.j != 1:
             args.no_fpool = True
-            t = "multithreading enabled with -j {}, so disabling fpool -- this can reduce upload performance on some filesystems, and make some antivirus-softwares "
-            c = 0
-            if ANYWIN:
-                t += "(especially Microsoft Defender) stress your CPU and HDD severely during big uploads"
-                c = 3
-            else:
-                t += "consume more resources (CPU/HDD) than normal"
-            self.log("root", t.format(args.j), c)
+            t = "multithreading enabled with -j {}, so disabling fpool -- this can reduce upload performance on some filesystems"
+            self.log("root", t.format(args.j))

         if not args.no_fpool and args.j != 1:
             t = "WARNING: ignoring --use-fpool because multithreading (-j{}) is enabled"
@@ -236,16 +209,7 @@
             t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
             self.log("root", t % (args.s_rd_sz, args.iobuf), 3)

-        zs = ""
-        if args.th_ram_max < 0.22:
-            zs = "generate thumbnails"
-        elif args.th_ram_max < 1:
-            zs = "generate audio waveforms or spectrograms"
-        if zs:
-            t = "WARNING: --th-ram-max is very small (%.2f GiB); will not be able to %s"
-            self.log("root", t % (args.th_ram_max, zs), 3)
-
-        if args.chpw and args.have_idp_hdrs:
+        if args.chpw and args.idp_h_usr:
             t = "ERROR: user-changeable passwords is incompatible with IdP/identity-providers; you must disable either --chpw or --idp-h-usr"
             self.log("root", t, 1)
             raise Exception(t)
@@ -256,31 +220,6 @@
             noch.update([x for x in zsl if x])
             args.chpw_no = noch

-        if args.ipu:
-            iu, nm = load_ipu(self.log, args.ipu, True)
-            setattr(args, "ipu_iu", iu)
-            setattr(args, "ipu_nm", nm)
-
-        if args.ipr:
-            ipr = load_ipr(self.log, args.ipr, True)
-            setattr(args, "ipr_u", ipr)
-
-        for zs in "ah_salt fk_salt dk_salt".split():
-            if getattr(args, "show_%s" % (zs,)):
-                self.log("root", "effective %s is %s" % (zs, getattr(args, zs)))
-
-        if args.ah_cli or args.ah_gen:
-            args.idp_store = 0
-            args.no_ses = True
-            args.shr = ""
-
-        if args.idp_store and args.have_idp_hdrs:
-            self.setup_db("idp")
-
-        if not self.args.no_ses:
-            self.setup_db("ses")
-
-        args.shr1 = ""
         if args.shr:
             self.setup_share_db()

@@ -330,8 +269,6 @@
             decs.pop("vips", None)
         if not HAVE_PIL:
             decs.pop("pil", None)
-        if not HAVE_RAW:
-            decs.pop("raw", None)
         if not HAVE_FFMPEG or not HAVE_FFPROBE:
             decs.pop("ff", None)

@@ -431,195 +368,6 @@

         self.broker = Broker(self)

-        # create netmaps early to avoid firewall gaps,
-        # but the mutex blocks multiprocessing startup
-        for zs in "ipu_iu ftp_ipa_nm tftp_ipa_nm".split():
-            try:
-                getattr(args, zs).mutex = threading.Lock()
-            except:
-                pass
-        if args.ipr:
-            for nm in args.ipr_u.values():
-                nm.mutex = threading.Lock()
-
-    def _db_onfail_ses(self) -> None:
-        self.args.no_ses = True
-
-    def _db_onfail_idp(self) -> None:
-        self.args.idp_store = 0
-
-    def setup_db(self, which: str) -> None:
-        """
-        the "non-mission-critical" databases; if something looks broken then just nuke it
-        """
-        if which == "ses":
-            native_ver = VER_SESSION_DB
-            db_path = self.args.ses_db
-            desc = "sessions-db"
-            pathopt = "ses-db"
-            sanchk_q = "select count(*) from us"
-            createfun = self._create_session_db
-            failfun = self._db_onfail_ses
-        elif which == "idp":
-            native_ver = VER_IDP_DB
-            db_path = self.args.idp_db
-            desc = "idp-db"
-            pathopt = "idp-db"
-            sanchk_q = "select count(*) from us"
-            createfun = self._create_idp_db
-            failfun = self._db_onfail_idp
-        else:
-            raise Exception("unknown cachetype")
-
-        if not db_path.endswith(".db"):
-            zs = "config option --%s (the %s) was configured to [%s] which is invalid; must be a filepath ending with .db"
-            self.log("root", zs % (pathopt, desc, db_path), 1)
-            raise Exception(BAD_CFG)
-
-        if not HAVE_SQLITE3:
-            failfun()
-            if which == "ses":
-                zs = "disabling sessions, will use plaintext passwords in cookies"
-            elif which == "idp":
-                zs = "disabling idp-db, will be unable to remember IdP-volumes after a restart"
-            self.log("root", "WARNING: sqlite3 not available; %s" % (zs,), 3)
-            return
-
-        assert sqlite3  # type: ignore  # !rm
-
-        db_lock = db_path + ".lock"
-        try:
-            create = not os.path.getsize(db_path)
-        except:
-            create = True
-        zs = "creating new" if create else "opening"
-        self.log("root", "%s %s %s" % (zs, desc, db_path))
-
-        for tries in range(2):
-            sver = 0
-            try:
-                db = sqlite3.connect(db_path)
-                cur = db.cursor()
-                try:
-                    zs = "select v from kv where k='sver'"
-                    sver = cur.execute(zs).fetchall()[0][0]
-                    if sver > native_ver:
-                        zs = "this version of copyparty only understands %s v%d and older; the db is v%d"
-                        raise Exception(zs % (desc, native_ver, sver))
-
-                    cur.execute(sanchk_q).fetchone()
-                except:
-                    if sver:
-                        raise
-                    sver = createfun(cur)
-
-                err = self._verify_db(
-                    cur, which, pathopt, db_path, desc, sver, native_ver
-                )
-                if err:
-                    tries = 99
-                    self.args.no_ses = True
-                    self.log("root", err, 3)
-                break
-
-            except Exception as ex:
-                if tries or sver > native_ver:
-                    raise
-                t = "%s is unusable; deleting and recreating: %r"
-                self.log("root", t % (desc, ex), 3)
-                try:
-                    cur.close()  # type: ignore
-                except:
-                    pass
-                try:
-                    db.close()  # type: ignore
-                except:
-                    pass
-                try:
-                    os.unlink(db_lock)
-                except:
-                    pass
-                os.unlink(db_path)
-
-    def _create_session_db(self, cur: "sqlite3.Cursor") -> int:
-        sch = [
-            r"create table kv (k text, v int)",
-            r"create table us (un text, si text, t0 int)",
-            # username, session-id, creation-time
-            r"create index us_un on us(un)",
-            r"create index us_si on us(si)",
-            r"create index us_t0 on us(t0)",
-            r"insert into kv values ('sver', 1)",
-        ]
-        for cmd in sch:
-            cur.execute(cmd)
-        self.log("root", "created new sessions-db")
-        return 1
-
-    def _create_idp_db(self, cur: "sqlite3.Cursor") -> int:
-        sch = [
-            r"create table kv (k text, v int)",
-            r"create table us (un text, gs text)",
-            # username, groups
-            r"create index us_un on us(un)",
-            r"insert into kv values ('sver', 1)",
-        ]
-        for cmd in sch:
-            cur.execute(cmd)
-        self.log("root", "created new idp-db")
-        return 1
-
-    def _verify_db(
-        self,
-        cur: "sqlite3.Cursor",
-        which: str,
-        pathopt: str,
-        db_path: str,
-        desc: str,
-        sver: int,
-        native_ver: int,
-    ) -> str:
-        # ensure writable (maybe owned by other user)
-        db = cur.connection
-
-        try:
-            zil = cur.execute("select v from kv where k='pid'").fetchall()
-            if len(zil) > 1:
-                raise Exception()
-            owner = zil[0][0]
-        except:
-            owner = 0
-
-        if which == "ses":
-            cons = "Will now disable sessions and instead use plaintext passwords in cookies."
-        elif which == "idp":
-            cons = "Each IdP-volume will not become available until its associated user sends their first request."
-        else:
-            raise Exception()
-
-        if not lock_file(db_path + ".lock"):
-            t = "the %s [%s] is already in use by another copyparty instance (pid:%d). This is not supported; please provide another database with --%s or give this copyparty-instance its entirely separate config-folder by setting another path in the XDG_CONFIG_HOME env-var. You can also disable this safeguard by setting env-var PRTY_NO_DB_LOCK=1. %s"
-            return t % (desc, db_path, owner, pathopt, cons)
-
-        vars = (("pid", os.getpid()), ("ts", int(time.time() * 1000)))
-        if owner:
-            # wear-estimate: 2 cells; offsets 0x10, 0x50, 0x19720
-            for k, v in vars:
-                cur.execute("update kv set v=? where k=?", (v, k))
-        else:
-            # wear-estimate: 3~4 cells; offsets 0x10, 0x50, 0x19180, 0x19710, 0x36000, 0x360b0, 0x36b90
-            for k, v in vars:
-                cur.execute("insert into kv values(?, ?)", (k, v))
-
-        if sver < native_ver:
-            cur.execute("delete from kv where k='sver'")
-            cur.execute("insert into kv values('sver',?)", (native_ver,))
-
-        db.commit()
-        cur.close()
-        db.close()
-        return ""
-
     def setup_share_db(self) -> None:
         al = self.args
         if not HAVE_SQLITE3:
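For context, the setup_db/_verify_db block removed above applies one compact pattern to every "non-mission-critical" sqlite file: a kv table carries a schema-version row ("sver"), readers refuse anything newer than they understand, and a broken file is simply deleted and recreated. A minimal self-contained sketch of that pattern follows; the names (open_or_rebuild, _create) and the single "us" table are illustrative only, not copyparty's API:

    import os
    import sqlite3

    NATIVE_VER = 1  # highest schema version this code understands

    def _create(db):
        # fresh schema, stamped with our native version
        db.execute("create table kv (k text, v int)")
        db.execute("create table us (un text, si text, t0 int)")
        db.execute("insert into kv values ('sver', ?)", (NATIVE_VER,))
        db.commit()

    def open_or_rebuild(db_path):
        for tries in range(2):
            sver = 0
            db = sqlite3.connect(db_path)
            try:
                try:
                    sver = db.execute("select v from kv where k='sver'").fetchall()[0][0]
                except Exception:
                    _create(db)  # brand-new or schemaless file
                    return db
                if sver > NATIVE_VER:
                    # never parse (or delete) a schema from a newer build
                    raise Exception("db is v%d; only understand <=v%d" % (sver, NATIVE_VER))
                db.execute("select count(*) from us")  # sanity-check the payload table
                return db
            except Exception:
                db.close()
                if tries or sver > NATIVE_VER:
                    raise
                os.unlink(db_path)  # non-critical db: nuke and retry once

The shares-db below deliberately does not get this treatment on the hovudstraum side; it panics instead of recreating, since shares are user data.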
@@ -627,7 +375,7 @@
             al.shr = ""
             return

-        assert sqlite3  # type: ignore  # !rm
+        import sqlite3

         al.shr = al.shr.strip("/")
         if "/" in al.shr or not al.shr:
@@ -636,50 +384,35 @@
             raise Exception(t)

         al.shr = "/%s/" % (al.shr,)
-        al.shr1 = al.shr[1:]

-        # policy:
-        # the shares-db is important, so panic if something is wrong
-
-        db_path = self.args.shr_db
-        db_lock = db_path + ".lock"
-        try:
-            create = not os.path.getsize(db_path)
-        except:
-            create = True
-        zs = "creating new" if create else "opening"
-        self.log("root", "%s shares-db %s" % (zs, db_path))
-
-        sver = 0
-        try:
-            db = sqlite3.connect(db_path)
-            cur = db.cursor()
-            if not create:
-                zs = "select v from kv where k='sver'"
-                sver = cur.execute(zs).fetchall()[0][0]
-                if sver > VER_SHARES_DB:
-                    zs = "this version of copyparty only understands shares-db v%d and older; the db is v%d"
-                    raise Exception(zs % (VER_SHARES_DB, sver))
-
-            cur.execute("select count(*) from sh").fetchone()
-        except Exception as ex:
-            t = "could not open shares-db; will now panic...\nthe following database must be repaired or deleted before you can launch copyparty:\n%s\n\nERROR: %s\n\nadditional details:\n%s\n"
-            self.log("root", t % (db_path, ex, min_ex()), 1)
-            raise
-
-        try:
-            zil = cur.execute("select v from kv where k='pid'").fetchall()
-            if len(zil) > 1:
-                raise Exception()
-            owner = zil[0][0]
-        except:
-            owner = 0
-
-        if not lock_file(db_lock):
-            t = "the shares-db [%s] is already in use by another copyparty instance (pid:%d). This is not supported; please provide another database with --shr-db or give this copyparty-instance its entirely separate config-folder by setting another path in the XDG_CONFIG_HOME env-var. You can also disable this safeguard by setting env-var PRTY_NO_DB_LOCK=1. Will now panic."
-            t = t % (db_path, owner)
-            self.log("root", t, 1)
-            raise Exception(t)
+        create = True
+        modified = False
+        db_path = self.args.shr_db
+        self.log("root", "opening shares-db %s" % (db_path,))
+        for n in range(2):
+            try:
+                db = sqlite3.connect(db_path)
+                cur = db.cursor()
+                try:
+                    cur.execute("select count(*) from sh").fetchone()
+                    create = False
+                    break
+                except:
+                    pass
+            except Exception as ex:
+                if n:
+                    raise
+                t = "shares-db corrupt; deleting and recreating: %r"
+                self.log("root", t % (ex,), 3)
+                try:
+                    cur.close()  # type: ignore
+                except:
+                    pass
+                try:
+                    db.close()  # type: ignore
+                except:
+                    pass
+                os.unlink(db_path)

         sch1 = [
             r"create table kv (k text, v int)",
@@ -691,37 +424,34 @@
             r"create index sf_k on sf(k)",
             r"create index sh_k on sh(k)",
             r"create index sh_t1 on sh(t1)",
-            r"insert into kv values ('sver', 2)",
         ]

-        assert db  # type: ignore  # !rm
-        assert cur  # type: ignore  # !rm
-        if not sver:
-            sver = VER_SHARES_DB
+        assert db  # type: ignore
+        assert cur  # type: ignore
+        if create:
+            dver = 2
+            modified = True
             for cmd in sch1 + sch2:
                 cur.execute(cmd)
             self.log("root", "created new shares-db")
+        else:
+            (dver,) = cur.execute("select v from kv where k = 'sver'").fetchall()[0]

-        if sver == 1:
+        if dver == 1:
+            modified = True
             for cmd in sch2:
                 cur.execute(cmd)
             cur.execute("update sh set st = 0")
             self.log("root", "shares-db schema upgrade ok")

-        if sver < VER_SHARES_DB:
-            cur.execute("delete from kv where k='sver'")
-            cur.execute("insert into kv values('sver',?)", (VER_SHARES_DB,))
-
-        vars = (("pid", os.getpid()), ("ts", int(time.time() * 1000)))
-        if owner:
-            # wear-estimate: same as sessions-db
-            for k, v in vars:
-                cur.execute("update kv set v=? where k=?", (v, k))
-        else:
-            for k, v in vars:
-                cur.execute("insert into kv values(?, ?)", (k, v))
+        if modified:
+            for cmd in [
+                r"delete from kv where k = 'sver'",
+                r"insert into kv values ('sver', %d)" % (2,),
+            ]:
+                cur.execute(cmd)

         db.commit()

         cur.close()
         db.close()

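Both sides of the hunk above drive the v1-to-v2 migration off the stored "sver" row; reduced to its essentials, the logic is the sketch below. The helper name is hypothetical, but the sh/kv table names and statements are the ones from the hunk:

    def upgrade_shares_db(db, sch2):
        # sch2: the DDL statements that bring a v1 schema up to v2
        (dver,) = db.execute("select v from kv where k = 'sver'").fetchall()[0]
        if dver == 1:
            for cmd in sch2:
                db.execute(cmd)
            db.execute("update sh set st = 0")  # invalidate cached per-share state
            db.execute("delete from kv where k = 'sver'")
            db.execute("insert into kv values ('sver', 2)")
            db.commit()

Rewriting the sver row only when something actually changed (the "modified" flag on the v1.14.3 side, the "sver < VER_SHARES_DB" check on the hovudstraum side) avoids needless writes on every startup.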
@@ -786,39 +516,6 @@
     def sigterm(self) -> None:
         self.signal_handler(signal.SIGTERM, None)

-    def sticky_qr(self) -> None:
-        tw, th = termsize()
-        zs1, qr = self.tcpsrv.qr.split("\n", 1)
-        url, colr = zs1.split(" ", 1)
-        nl = len(qr.split("\n"))  # numlines
-        lp = 3 if nl * 2 + 4 < tw else 0  # leftpad
-        lp0 = lp
-        if self.args.qr_pin == 2:
-            url = ""
-        else:
-            while lp and (nl + lp) * 2 + len(url) + 1 > tw:
-                lp -= 1
-            if (nl + lp) * 2 + len(url) + 1 > tw:
-                qr = url + "\n" + qr
-                url = ""
-                nl += 1
-                lp = lp0
-        sh = 1 + th - nl
-        if lp:
-            zs = " " * lp
-            qr = zs + qr.replace("\n", "\n" + zs)
-        if url:
-            url = "%s\033[%d;%dH%s\033[0m" % (colr, sh + 1, (nl + lp) * 2, url)
-        qr = colr + qr
-
-        def unlock():
-            print("\033[s\033[r\033[u", file=sys.stderr)
-
-        atexit.register(unlock)
-        t = "%s\033[%dA" % ("\n" * nl, nl)
-        t = "%s\033[s\033[1;%dr\033[%dH%s%s\033[u" % (t, sh - 1, sh, qr, url)
-        self.pr(t, file=sys.stderr)
-
     def cb_httpsrv_up(self) -> None:
         self.httpsrv_up += 1
         if self.httpsrv_up != self.broker.num_workers:
@@ -831,9 +528,6 @@
                 break

         if self.tcpsrv.qr:
-            if self.args.qr_pin:
-                self.sticky_qr()
-            else:
-                self.log("qr-code", self.tcpsrv.qr)
+            self.log("qr-code", self.tcpsrv.qr)
         else:
             self.log("root", "workers OK\n")
@@ -850,7 +544,7 @@
         fng = []
         t_ff = "transcode audio, create spectrograms, video thumbnails"
         to_check = [
-            (HAVE_SQLITE3, "sqlite", "sessions and file/media indexing"),
+            (HAVE_SQLITE3, "sqlite", "file and media indexing"),
             (HAVE_PIL, "pillow", "image thumbnails (plenty fast)"),
             (HAVE_VIPS, "vips", "image thumbnails (faster, eats more ram)"),
             (HAVE_WEBP, "pillow-webp", "create thumbnails as webp files"),
@@ -858,10 +552,8 @@
             (HAVE_FFPROBE, "ffprobe", t_ff + ", read audio/media tags"),
             (HAVE_MUTAGEN, "mutagen", "read audio tags (ffprobe is better but slower)"),
             (HAVE_ARGON2, "argon2", "secure password hashing (advanced users only)"),
-            (HAVE_ZMQ, "pyzmq", "send zeromq messages from event-hooks"),
             (HAVE_HEIF, "pillow-heif", "read .heif images with pillow (rarely useful)"),
             (HAVE_AVIF, "pillow-avif", "read .avif images with pillow (rarely useful)"),
-            (HAVE_RAW, "rawpy", "read RAW images"),
         ]
         if ANYWIN:
             to_check += [
@@ -896,11 +588,19 @@
                 t += ", "
             t += "\033[0mNG: \033[35m" + sng

-        t += "\033[0m, see --deps (this is fine btw)"
-        self.log("optional-dependencies", t, 6)
+        t += "\033[0m, see --deps"
+        self.log("dependencies", t, 6)

     def _check_env(self) -> None:
-        al = self.args
+        try:
+            files = os.listdir(E.cfg)
+        except:
+            files = []
+
+        hits = [x for x in files if x.lower().endswith(".conf")]
+        if hits:
+            t = "WARNING: found config files in [%s]: %s\n  config files are not expected here, and will NOT be loaded (unless your setup is intentionally hella funky)"
+            self.log("root", t % (E.cfg, ", ".join(hits)), 3)
+
         if self.args.no_bauth:
             t = "WARNING: --no-bauth disables support for the Android app; you may want to use --bauth-last instead"
@@ -908,30 +608,6 @@
         if self.args.bauth_last:
             self.log("root", "WARNING: ignoring --bauth-last due to --no-bauth", 3)

-        have_tcp = False
-        for zs in al.i:
-            if not zs.startswith(("unix:", "fd:")):
-                have_tcp = True
-        if not have_tcp:
-            zb = False
-            zs = "z zm zm4 zm6 zmv zmvv zs zsv zv"
-            for zs in zs.split():
-                if getattr(al, zs, False):
-                    setattr(al, zs, False)
-                    zb = True
-            if zb:
-                t = "not listening on any ip-addresses (only unix-sockets and/or FDs); cannot enable zeroconf/mdns/ssdp as requested"
-                self.log("root", t, 3)
-
-        if not self.args.no_dav:
-            from .dxml import DXML_OK
-
-            if not DXML_OK:
-                if not self.args.no_dav:
-                    self.args.no_dav = True
-                    t = "WARNING:\nDisabling WebDAV support because dxml selftest failed. Please report this bug;\n%s\n...and include the following information in the bug-report:\n%s | expat %s\n"
-                    self.log("root", t % (URL_BUG, VERSIONS, expat_ver()), 1)
-
     def _process_config(self) -> bool:
         al = self.args

@@ -987,20 +663,13 @@
             vl = [os.path.expandvars(os.path.expanduser(x)) for x in vl]
             setattr(al, k, vl)

-        for k in "lo hist dbpath ssl_log".split(" "):
+        for k in "lo hist ssl_log".split(" "):
             vs = getattr(al, k)
             if vs:
                 vs = os.path.expandvars(os.path.expanduser(vs))
                 setattr(al, k, vs)

-        for k in "idp_adm".split(" "):
-            vs = getattr(al, k)
-            vsa = [x.strip() for x in vs.split(",")]
-            vsa = [x.lower() for x in vsa if x]
-            setattr(al, k + "_set", set(vsa))
-
-        zs = "dav_ua1 sus_urls nonsus_urls ua_nodoc ua_nozip"
-        for k in zs.split(" "):
+        for k in "sus_urls nonsus_urls".split(" "):
             vs = getattr(al, k)
             if not vs or vs == "no":
                 setattr(al, k, None)
@@ -1020,25 +689,12 @@
             al.sus_urls = None

         al.xff_hdr = al.xff_hdr.lower()
-        al.idp_h_usr = [x.lower() for x in al.idp_h_usr or []]
+        al.idp_h_usr = al.idp_h_usr.lower()
         al.idp_h_grp = al.idp_h_grp.lower()
         al.idp_h_key = al.idp_h_key.lower()

-        al.idp_hm_usr_p = {}
-        for zs0 in al.idp_hm_usr or []:
-            try:
-                sep = zs0[:1]
-                hn, zs1, zs2 = zs0[1:].split(sep)
-                hn = hn.lower()
-                if hn in al.idp_hm_usr_p:
-                    al.idp_hm_usr_p[hn][zs1] = zs2
-                else:
-                    al.idp_hm_usr_p[hn] = {zs1: zs2}
-            except:
-                raise Exception("invalid --idp-hm-usr [%s]" % (zs0,))
-
-        al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa, True)
-        al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa, True)
+        al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
+        al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)

         mte = ODict.fromkeys(DEF_MTE.split(","), True)
         al.mte = odfusion(mte, al.mte)
@@ -1050,7 +706,7 @@
         al.exp_md = odfusion(exp, al.exp_md.replace(" ", ","))
         al.exp_lg = odfusion(exp, al.exp_lg.replace(" ", ","))

-        for k in ["no_hash", "no_idx", "og_ua", "srch_excl"]:
+        for k in ["no_hash", "no_idx", "og_ua"]:
             ptn = getattr(self.args, k)
             if ptn:
                 setattr(self.args, k, re.compile(ptn))
@@ -1081,30 +737,10 @@
         except:
             raise Exception("invalid --mv-retry [%s]" % (self.args.mv_retry,))

-        al.js_utc = "false" if al.localtime else "true"
-
         al.tcolor = al.tcolor.lstrip("#")
         if len(al.tcolor) == 3:  # fc5 => ffcc55
             al.tcolor = "".join([x * 2 for x in al.tcolor])

-        zs = al.u2sz
-        zsl = zs.split(",")
-        if len(zsl) not in (1, 3):
-            t = "invalid --u2sz; must be either one number, or a comma-separated list of three numbers (min,default,max)"
-            raise Exception(t)
-        if len(zsl) < 3:
-            zsl = ["1", zs, zs]
-        zi2 = 1
-        for zs in zsl:
-            zi = int(zs)
-            # arbitrary constraint (anything above 2 GiB is probably unintended)
-            if zi < 1 or zi > 2047:
-                raise Exception("invalid --u2sz; minimum is 1, max is 2047")
-            if zi < zi2:
-                raise Exception("invalid --u2sz; values must be equal or ascending")
-            zi2 = zi
-        al.u2sz = ",".join(zsl)
-
         return True

     def _ipa2re(self, txt) -> Optional[re.Pattern]:
@@ -1155,7 +791,7 @@
             self.args.nc = min(self.args.nc, soft // 2)

     def _logname(self) -> str:
-        dt = datetime.now(self.tz)
+        dt = datetime.now(UTC)
         fn = str(self.args.lo)
         for fs in "YmdHMS":
             fs = "%" + fs
@@ -1177,7 +813,7 @@

         fn = sel_fn
         try:
-            bos.makedirs(os.path.dirname(fn))
+            os.makedirs(os.path.dirname(fn))
         except:
             pass

@@ -1194,9 +830,6 @@

         lh = codecs.open(fn, "w", encoding="utf-8", errors="replace")

-        if getattr(self.args, "free_umask", False):
-            os.fchmod(lh.fileno(), 0o644)
-
         argv = [pybin] + self.argv
         if hasattr(shlex, "quote"):
             argv = [shlex.quote(x) for x in argv]
@@ -1275,23 +908,41 @@
         except:
             self.log("root", "ssdp startup failed;\n" + min_ex(), 3)

-    def reload(self, rescan_all_vols: bool, up2k: bool) -> str:
-        t = "config has been reloaded"
-        with self.reload_mutex:
+    def reload(self) -> str:
+        with self.up2k.mutex:
+            if self.reloading:
+                return "cannot reload; already in progress"
+            self.reloading = 1
+
+        Daemon(self._reload, "reloading")
+        return "reload initiated"
+
+    def _reload(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
+        with self.up2k.mutex:
+            if self.reloading != 1:
+                return
+            self.reloading = 2
             self.log("root", "reloading config")
             self.asrv.reload(9 if up2k else 4)
             if up2k:
                 self.up2k.reload(rescan_all_vols)
-                t += "; volumes are now reinitializing"
             else:
                 self.log("root", "reload done")
             self.broker.reload()
-        return t
+            self.reloading = 0

-    def _reload_sessions(self) -> None:
-        with self.asrv.mutex:
-            self.asrv.load_sessions(True)
-            self.broker.reload_sessions()
+    def _reload_blocking(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
+        while True:
+            with self.up2k.mutex:
+                if self.reloading < 2:
+                    self.reloading = 1
+                    break
+            time.sleep(0.05)
+
+        # try to handle multiple pending IdP reloads at once:
+        time.sleep(0.2)
+
+        self._reload(rescan_all_vols=rescan_all_vols, up2k=up2k)
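The hunk above swaps a dedicated reload_mutex (hovudstraum) for a tri-state flag guarded by the up2k mutex (v1.14.3): 0 means idle, 1 means a reload is queued, 2 means one is running. A minimal standalone sketch of that flag-based guard, with illustrative class and method names that are not copyparty's own:

    import threading

    class Reloader:
        def __init__(self):
            self.mutex = threading.Lock()
            self.reloading = 0  # 0=idle, 1=queued, 2=running

        def request(self) -> str:
            with self.mutex:
                if self.reloading:
                    return "cannot reload; already in progress"
                self.reloading = 1
            threading.Thread(target=self._work, daemon=True).start()
            return "reload initiated"

        def _work(self):
            with self.mutex:
                if self.reloading != 1:
                    return  # somebody else got here first
                self.reloading = 2
                # ... perform the actual reload here ...
                self.reloading = 0

The flag lets callers get an immediate "already in progress" answer instead of blocking, at the cost of the busy-wait loop seen in _reload_blocking above.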
@@ -1300,7 +951,7 @@

             if self.reload_req:
                 self.reload_req = False
-                self.reload(True, True)
+                self.reload()

         self.shutdown()

@@ -1413,12 +1064,12 @@
             return

         with self.log_mutex:
-            dt = datetime.now(self.tz)
+            zd = datetime.now(UTC)
             ts = self.log_dfmt % (
-                dt.year,
-                dt.month * 100 + dt.day,
-                (dt.hour * 100 + dt.minute) * 100 + dt.second,
-                dt.microsecond // self.log_div,
+                zd.year,
+                zd.month * 100 + zd.day,
+                (zd.hour * 100 + zd.minute) * 100 + zd.second,
+                zd.microsecond // self.log_div,
             )

         if c and not self.args.no_ansi:
@@ -1439,43 +1090,51 @@
             if not self.args.no_logflush:
                 self.logf.flush()

-        if dt.day != self.cday or dt.month != self.cmon:
-            self._set_next_day(dt)
+        now = time.time()
+        if int(now) >= self.next_day:
+            self._set_next_day()

-    def _set_next_day(self, dt: datetime) -> None:
-        if self.cday and self.logf and self.logf_base_fn != self._logname():
+    def _set_next_day(self) -> None:
+        if self.next_day and self.logf and self.logf_base_fn != self._logname():
             self.logf.close()
             self._setup_logfile("")

-        self.cday = dt.day
-        self.cmon = dt.month
+        dt = datetime.now(UTC)
+
+        # unix timestamp of next 00:00:00 (leap-seconds safe)
+        day_now = dt.day
+        while dt.day == day_now:
+            dt += timedelta(hours=12)
+
+        dt = dt.replace(hour=0, minute=0, second=0)
+        try:
+            tt = dt.utctimetuple()
+        except:
+            # still makes me hella uncomfortable
+            tt = dt.timetuple()
+
+        self.next_day = calendar.timegm(tt)

     def _log_enabled(self, src: str, msg: str, c: Union[int, str] = 0) -> None:
         """handles logging from all components"""
         with self.log_mutex:
-            dt = datetime.now(self.tz)
-            if dt.day != self.cday or dt.month != self.cmon:
+            now = time.time()
+            if int(now) >= self.next_day:
+                dt = datetime.fromtimestamp(now, UTC)
                 zs = "{}\n" if self.no_ansi else "\033[36m{}\033[0m\n"
                 zs = zs.format(dt.strftime("%Y-%m-%d"))
                 print(zs, end="")
-                self._set_next_day(dt)
+                self._set_next_day()
                 if self.logf:
                     self.logf.write(zs)

             fmt = "\033[36m%s \033[33m%-21s \033[0m%s\n"
             if self.no_ansi:
-                if c == 1:
-                    fmt = "%s %-21s CRIT: %s\n"
-                elif c == 3:
-                    fmt = "%s %-21s WARN: %s\n"
-                elif c == 6:
-                    fmt = "%s %-21s BTW: %s\n"
-                else:
-                    fmt = "%s %-21s LOG: %s\n"
+                fmt = "%s %-21s %s\n"
                 if "\033" in msg:
-                    msg = RE_ANSI.sub("", msg)
+                    msg = ansi_re.sub("", msg)
                 if "\033" in src:
-                    src = RE_ANSI.sub("", src)
+                    src = ansi_re.sub("", src)
             elif c:
                 if isinstance(c, int):
                     msg = "\033[3%sm%s\033[0m" % (c, msg)
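The day-rollover check on the v1.14.3 side of the hunk above replaces two datetime field comparisons per log line with one integer comparison against a precomputed timestamp of the next midnight. The computation itself reduces to the sketch below (function name is illustrative); stepping in 12-hour hops rather than adding exactly 24h is what makes it safe around leap seconds and odd day lengths:

    import calendar
    from datetime import datetime, timedelta, timezone

    def next_midnight_utc(now: datetime) -> int:
        # hop forward until the calendar day changes, then clamp to 00:00:00
        day_now = now.day
        dt = now
        while dt.day == day_now:
            dt += timedelta(hours=12)
        dt = dt.replace(hour=0, minute=0, second=0)
        # timegm ignores sub-second precision, so microseconds need no reset
        return calendar.timegm(dt.utctimetuple())

    print(next_midnight_utc(datetime.now(timezone.utc)))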
@@ -1484,11 +1143,12 @@
                 else:
                     msg = "%s%s\033[0m" % (c, msg)

+            zd = datetime.fromtimestamp(now, UTC)
             ts = self.log_efmt % (
-                dt.hour,
-                dt.minute,
-                dt.second,
-                dt.microsecond // self.log_div,
+                zd.hour,
+                zd.minute,
+                zd.second,
+                zd.microsecond // self.log_div,
             )
             msg = fmt % (ts, src, msg)
             try:
@@ -1516,7 +1176,7 @@
             raise

     def check_mp_support(self) -> str:
-        if MACOS and not os.environ.get("PRTY_FORCE_MP"):
+        if MACOS:
             return "multiprocessing is wonky on mac osx;"
         elif sys.version_info < (3, 3):
             return "need python 3.3 or newer for multiprocessing;"
@@ -1536,7 +1196,7 @@
             return False

         try:
-            if mp.cpu_count() <= 1 and not os.environ.get("PRTY_FORCE_MP"):
+            if mp.cpu_count() <= 1:
                 raise Exception()
         except:
             self.log("svchub", "only one CPU detected; multiprocessing disabled")
@@ -1586,5 +1246,5 @@
         zs = "{}\n{}".format(VERSIONS, alltrace())
         zb = zs.encode("utf-8", "replace")
         zb = gzip.compress(zb)
-        zs = ub64enc(zb).decode("ascii")
+        zs = base64.b64encode(zb).decode("ascii")
         self.log("stacks", zs)
copyparty/szip.py

@@ -4,11 +4,12 @@ from __future__ import print_function, unicode_literals
 import calendar
 import stat
 import time
+import zlib

 from .authsrv import AuthSrv
 from .bos import bos
 from .sutil import StreamArc, errdesc
-from .util import min_ex, sanitize_fn, spack, sunpack, yieldfile, zlib
+from .util import min_ex, sanitize_fn, spack, sunpack, yieldfile

 if True:  # pylint: disable=using-constant-test
     from typing import Any, Generator, Optional
@@ -54,7 +55,6 @@ def gen_fdesc(sz: int, crc32: int, z64: bool) -> bytes:

 def gen_hdr(
     h_pos: Optional[int],
-    z64: bool,
     fn: str,
     sz: int,
     lastmod: int,
@@ -71,6 +71,7 @@ def gen_hdr(
     # appnote 4.5 / zip 3.0 (2008) / unzip 6.0 (2009) says to add z64
     # extinfo for values which exceed H, but that becomes an off-by-one
     # (can't tell if it was clamped or exactly maxval), make it obvious
+    z64 = sz >= 0xFFFFFFFF
     z64v = [sz, sz] if z64 else []
     if h_pos and h_pos >= 0xFFFFFFFF:
         # central, also consider ptr to original header

@@ -99,12 +100,12 @@ def gen_hdr(

     # spec says to put zeros when !crc if bit3 (streaming)
     # however infozip does actual sz and it even works on winxp
-    # (same reasoning for z64 extradata later)
+    # (same reasning for z64 extradata later)
     vsz = 0xFFFFFFFF if z64 else sz
     ret += spack(b"<LL", vsz, vsz)

     # windows support (the "?" replace below too)
-    fn = sanitize_fn(fn, "/")
+    fn = sanitize_fn(fn, "/", [])
     bfn = fn.encode("utf-8" if utf8 else "cp437", "replace").replace(b"?", b"_")

     # add ntfs (0x24) and/or unix (0x10) extrafields for utc, add z64 if requested
@@ -244,7 +245,6 @@ class StreamZip(StreamArc):

         sz = st.st_size
         ts = st.st_mtime
-        h_pos = self.pos

         crc = 0
         if self.pre_crc:
@@ -253,12 +253,8 @@ class StreamZip(StreamArc):

         crc &= 0xFFFFFFFF

-        # some unzip-programs expect a 64bit data-descriptor
-        # even if the only 32bit-exceeding value is the offset,
-        # so force that by placeholdering the filesize too
-        z64 = h_pos >= 0xFFFFFFFF or sz >= 0xFFFFFFFF
-
-        buf = gen_hdr(None, z64, name, sz, ts, self.utf8, crc, self.pre_crc)
+        h_pos = self.pos
+        buf = gen_hdr(None, name, sz, ts, self.utf8, crc, self.pre_crc)
         yield self._ct(buf)

         for buf in yieldfile(src, self.args.iobuf):
@@ -271,6 +267,8 @@ class StreamZip(StreamArc):

         self.items.append((name, sz, ts, crc, h_pos))

+        z64 = sz >= 4 * 1024 * 1024 * 1024
+
         if z64 or not self.pre_crc:
             buf = gen_fdesc(sz, crc, z64)
             yield self._ct(buf)
@@ -309,8 +307,7 @@ class StreamZip(StreamArc):

         cdir_pos = self.pos
         for name, sz, ts, crc, h_pos in self.items:
-            z64 = h_pos >= 0xFFFFFFFF or sz >= 0xFFFFFFFF
-            buf = gen_hdr(h_pos, z64, name, sz, ts, self.utf8, crc, self.pre_crc)
+            buf = gen_hdr(h_pos, name, sz, ts, self.utf8, crc, self.pre_crc)
             mbuf += self._ct(buf)
             if len(mbuf) >= 16384:
                 yield mbuf
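The szip hunks above all revolve around one decision: when must a zip entry be written in zip64 form. On the hovudstraum side the caller decides it once, from whichever 32-bit header field can overflow; the predicate is just this (standalone illustration):

    def needs_zip64(h_pos: int, sz: int) -> bool:
        # the local and central headers store offset and size as 32-bit
        # fields; 0xffffffff is the zip64 sentinel, so any value at or
        # above it forces zip64 extradata
        return h_pos >= 0xFFFFFFFF or sz >= 0xFFFFFFFF

    assert needs_zip64(0x100000000, 1024)   # offset alone can trigger it
    assert not needs_zip64(1024, 1024)

The v1.14.3 side instead derives z64 from the filesize alone (sz >= 0xFFFFFFFF inside gen_hdr, sz >= 4 GiB for the data-descriptor), which, as the removed comment notes, appears not to cover entries whose only 32-bit-exceeding value is the header offset.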
copyparty/tcpsrv.py

@@ -25,8 +25,8 @@ from .util import (
     termsize,
 )

-if True:  # pylint: disable=using-constant-test
-    from typing import Generator, Optional, Union
+if True:
+    from typing import Generator, Union

 if TYPE_CHECKING:
     from .svchub import SvcHub
@@ -95,7 +95,7 @@ class TcpSrv(object):
                 continue

             # binding 0.0.0.0 after :: fails on dualstack
-            # but is necessary on non-dualstack
+            # but is necessary on non-dualstakc
             if successful_binds:
                 continue

@@ -151,15 +151,9 @@ class TcpSrv(object):
             if just_ll or self.args.ll:
                 ll_ok.add(ip.split("/")[0])

-        listening_on = []
-        for ip, ports in sorted(ok.items()):
-            for port in sorted(ports):
-                listening_on.append("%s %s" % (ip, port))
-
         qr1: dict[str, list[int]] = {}
         qr2: dict[str, list[int]] = {}
         msgs = []
-        accessible_on = []
         title_tab: dict[str, dict[str, int]] = {}
         title_vars = [x[1:] for x in self.args.wintitle.split(" ") if x.startswith("$")]
         t = "available @ {}://{}:{}/ (\033[33m{}\033[0m)"
@@ -175,10 +169,6 @@ class TcpSrv(object):
             ):
                 continue

-            zs = "%s %s" % (ip, port)
-            if zs not in accessible_on:
-                accessible_on.append(zs)
-
             proto = " http"
             if self.args.http_only:
                 pass
@@ -229,14 +219,6 @@ class TcpSrv(object):
         else:
             print("\n", end="")

-        for fn, ls in (
-            (self.args.wr_h_eps, listening_on),
-            (self.args.wr_h_aon, accessible_on),
-        ):
-            if fn:
-                with open(fn, "wb") as f:
-                    f.write(("\n".join(ls)).encode("utf-8"))
-
         if self.args.qr or self.args.qrs:
             self.qr = self._qr(qr1, qr2)

@@ -245,10 +227,8 @@ class TcpSrv(object):

     def _listen(self, ip: str, port: int) -> None:
         uds_perm = uds_gid = -1
-        bound: Optional[socket.socket] = None
-        tcp = False
-
         if "unix:" in ip:
+            tcp = False
             ipv = socket.AF_UNIX
             uds = ip.split(":")
             ip = uds[-1]
@@ -261,12 +241,7 @@ class TcpSrv(object):
                 import grp

                 uds_gid = grp.getgrnam(uds[2]).gr_gid
-        elif "fd:" in ip:
-            fd = ip[3:]
-            bound = socket.socket(fileno=int(fd))
-
-            tcp = bound.proto == socket.IPPROTO_TCP
-            ipv = bound.family
         elif ":" in ip:
             tcp = True
             ipv = socket.AF_INET6
@@ -274,7 +249,7 @@ class TcpSrv(object):
             tcp = True
             ipv = socket.AF_INET

-        srv = bound or socket.socket(ipv, socket.SOCK_STREAM)
+        srv = socket.socket(ipv, socket.SOCK_STREAM)

         if not ANYWIN or self.args.reuseaddr:
             srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
@@ -289,13 +264,9 @@ class TcpSrv(object):
         except:
             pass  # will create another ipv4 socket instead

-        if getattr(self.args, "freebind", False):
+        if not ANYWIN and self.args.freebind:
             srv.setsockopt(socket.SOL_IP, socket.IP_FREEBIND, 1)

-        if bound:
-            self.srv.append(srv)
-            return
-
         try:
             if tcp:
                 srv.bind((ip, port))
@@ -400,7 +371,7 @@ class TcpSrv(object):
         if self.args.q:
             print(msg)

-        self.hub.broker.say("httpsrv.listen", srv)
+        self.hub.broker.say("listen", srv)

         self.srv = srvs
         self.bound = bound
@@ -408,7 +379,7 @@ class TcpSrv(object):
         self._distribute_netdevs()

     def _distribute_netdevs(self):
-        self.hub.broker.say("httpsrv.set_netdevs", self.netdevs)
+        self.hub.broker.say("set_netdevs", self.netdevs)
         self.hub.start_zeroconf()
         gencert(self.log, self.args, self.netdevs)
         self.hub.restart_ftpd()
@@ -431,24 +402,24 @@ class TcpSrv(object):
             if not netdevs:
                 continue

-            add = []
-            rem = []
+            added = "nothing"
+            removed = "nothing"
             for k, v in netdevs.items():
                 if k not in self.netdevs:
-                    add.append("\n\033[32m added %s = %s" % (k, v))
+                    added = "{} = {}".format(k, v)
             for k, v in self.netdevs.items():
                 if k not in netdevs:
-                    rem.append("\n\033[33mremoved %s = %s" % (k, v))
+                    removed = "{} = {}".format(k, v)

-            t = "network change detected:%s%s"
-            self.log("tcpsrv", t % ("".join(add), "".join(rem)), 3)
+            t = "network change detected:\n added {}\033[0;33m\nremoved {}"
+            self.log("tcpsrv", t.format(added, removed), 3)
             self.netdevs = netdevs
             self._distribute_netdevs()

     def detect_interfaces(self, listen_ips: list[str]) -> dict[str, Netdev]:
         from .stolen.ifaddr import get_adapters

-        listen_ips = [x for x in listen_ips if not x.startswith(("unix:", "fd:"))]
+        listen_ips = [x for x in listen_ips if "unix:" not in x]

         nics = get_adapters(True)
         eps: dict[str, Netdev] = {}
@@ -577,7 +548,7 @@ class TcpSrv(object):
         ip = None
         ips = list(t1) + list(t2)
         qri = self.args.qri
-        if self.args.zm and not qri and ips:
+        if self.args.zm and not qri:
             name = self.args.name + ".local"
             t1[name] = next(v for v in (t1 or t2).values())
             ips = [name] + ips
@@ -594,7 +565,8 @@ class TcpSrv(object):
         if not ip:
             return ""

-        hip = "[%s]" % (ip,) if ":" in ip else ip
+        if ":" in ip:
+            ip = "[{}]".format(ip)

         if self.args.http_only:
             https = ""

@@ -606,7 +578,7 @@ class TcpSrv(object):
         ports = t1.get(ip, t2.get(ip, []))
         dport = 443 if https else 80
         port = "" if dport in ports or not ports else ":{}".format(ports[0])
-        txt = "http{}://{}{}/{}".format(https, hip, port, self.args.qrl)
+        txt = "http{}://{}{}/{}".format(https, ip, port, self.args.qrl)

         btxt = txt.encode("utf-8")
         if PY2:
@@ -614,10 +586,6 @@ class TcpSrv(object):

         fg = self.args.qr_fg
         bg = self.args.qr_bg
-        nocolor = fg == -1
-        if nocolor:
-            fg = 0
-
         pad = self.args.qrp
         zoom = self.args.qrz
         qrc = QrCode.encode_binary(btxt)
@@ -645,8 +613,6 @@ class TcpSrv(object):

         qr = qr.replace("\n", "\033[K\n") + "\033[K"  # win10do
         cc = " \033[0;38;5;{0};47;48;5;{1}m" if fg else " \033[0;30;47m"
-        if nocolor:
-            cc = " \033[0m"
         t = cc + "\n{2}\033[999G\033[0m\033[J"
         t = t.format(fg, bg, qr)
         if ANYWIN:
copyparty/tftpd.py

@@ -36,20 +36,7 @@ from partftpy.TftpShared import TftpException
 from .__init__ import EXE, PY2, TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
-from .util import (
-    FN_EMB,
-    UTC,
-    BytesIO,
-    Daemon,
-    ODict,
-    exclude_dotfiles,
-    min_ex,
-    runhook,
-    set_fperms,
-    undot,
-    vjoin,
-    vsplit,
-)
+from .util import UTC, BytesIO, Daemon, ODict, exclude_dotfiles, min_ex, runhook, undot

 if True:  # pylint: disable=using-constant-test
     from typing import Any, Union
@@ -179,7 +166,7 @@ class Tftpd(object):
         if "::" in ips:
             ips.append("0.0.0.0")

-        ips = [x for x in ips if not x.startswith(("unix:", "fd:"))]
+        ips = [x for x in ips if "unix:" not in x]

         if self.args.tftp4:
             ips = [x for x in ips if ":" not in x]
@@ -257,25 +244,16 @@ class Tftpd(object):
         for srv in srvs:
             srv.stop()

-    def _v2a(
-        self, caller: str, vpath: str, perms: list, *a: Any
-    ) -> tuple[VFS, str, str]:
+    def _v2a(self, caller: str, vpath: str, perms: list, *a: Any) -> tuple[VFS, str]:
         vpath = vpath.replace("\\", "/").lstrip("/")
         if not perms:
             perms = [True, True]

         debug('%s("%s", %s) %s\033[K\033[0m', caller, vpath, str(a), perms)
         vfs, rem = self.asrv.vfs.get(vpath, "*", *perms)
-        if perms[1] and "*" not in vfs.axs.uread and "wo_up_readme" not in vfs.flags:
-            zs, fn = vsplit(vpath)
-            if fn.lower() in FN_EMB:
-                vpath = vjoin(zs, "_wo_" + fn)
-                vfs, rem = self.asrv.vfs.get(vpath, "*", *perms)
-
         if not vfs.realpath:
             raise Exception("unmapped vfs")
-        return vfs, vpath, vfs.canonical(rem)
+        return vfs, vfs.canonical(rem)

     def _ls(self, vpath: str, raddress: str, rport: int, force=False) -> Any:
         # generate file listing if vpath is dir.txt and return as file object
@@ -285,20 +263,18 @@ class Tftpd(object):
         if not ptn or not ptn.match(fn.lower()):
            return None

-        tsdt = datetime.fromtimestamp
         vn, rem = self.asrv.vfs.get(vpath, "*", True, False)
         fsroot, vfs_ls, vfs_virt = vn.ls(
             rem,
             "*",
             not self.args.no_scandir,
             [[True, False]],
-            throw=True,
         )
         dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
         dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]
         fils1 = [(v.st_mtime, v.st_size, k) for k, v in vfs_ls if k not in dnames]
         real1 = dirs1 + fils1
-        realt = [(tsdt(max(0, mt), UTC), sz, fn) for mt, sz, fn in real1]
+        realt = [(datetime.fromtimestamp(mt, UTC), sz, fn) for mt, sz, fn in real1]
         reals = [
             (
                 "%04d-%02d-%02d %02d:%02d:%02d"

@@ -354,7 +330,7 @@ class Tftpd(object):
         else:
             raise Exception("bad mode %s" % (mode,))

-        vfs, vpath, ap = self._v2a("open", vpath, [rd, wr])
+        vfs, ap = self._v2a("open", vpath, [rd, wr])
         if wr:
             if "*" not in vfs.axs.uwrite:
                 yeet("blocked write; folder not world-writable: /%s" % (vpath,))
@@ -380,7 +356,7 @@ class Tftpd(object):
             time.time(),
             "",
         ):
-            yeet("blocked by xbu server config: %r" % (vpath,))
+            yeet("blocked by xbu server config: " + vpath)

         if not self.args.tftp_nols and bos.path.isdir(ap):
             return self._ls(vpath, "", 0, True)
@@ -388,24 +364,18 @@ class Tftpd(object):
         if not a:
             a = (self.args.iobuf,)

-        ret = open(ap, mode, *a, **ka)
-        if wr and "fperms" in vfs.flags:
-            set_fperms(ret, vfs.flags)
-
-        return ret
+        return open(ap, mode, *a, **ka)

     def _mkdir(self, vpath: str, *a) -> None:
-        vfs, _, ap = self._v2a("mkdir", vpath, [False, True])
+        vfs, ap = self._v2a("mkdir", vpath, [])
         if "*" not in vfs.axs.uwrite:
             yeet("blocked mkdir; folder not world-writable: /%s" % (vpath,))

-        bos.mkdir(ap, vfs.flags["chmod_d"])
-        if "chown" in vfs.flags:
-            bos.chown(ap, vfs.flags["uid"], vfs.flags["gid"])
+        return bos.mkdir(ap)

     def _unlink(self, vpath: str) -> None:
         # return bos.unlink(self._v2a("stat", vpath, *a)[1])
-        vfs, _, ap = self._v2a("delete", vpath, [True, False, False, True])
+        vfs, ap = self._v2a("delete", vpath, [True, False, False, True])

         try:
             inf = bos.stat(ap)
@ -429,7 +399,7 @@ class Tftpd(object):
|
||||||
|
|
||||||
def _p_exists(self, vpath: str) -> bool:
|
def _p_exists(self, vpath: str) -> bool:
|
||||||
try:
|
try:
|
||||||
ap = self._v2a("p.exists", vpath, [False, False])[2]
|
ap = self._v2a("p.exists", vpath, [False, False])[1]
|
||||||
bos.stat(ap)
|
bos.stat(ap)
|
||||||
return True
|
return True
|
||||||
except:
|
except:
|
||||||
|
@ -437,7 +407,7 @@ class Tftpd(object):
|
||||||
|
|
||||||
def _p_isdir(self, vpath: str) -> bool:
|
def _p_isdir(self, vpath: str) -> bool:
|
||||||
try:
|
try:
|
||||||
st = bos.stat(self._v2a("p.isdir", vpath, [False, False])[2])
|
st = bos.stat(self._v2a("p.isdir", vpath, [False, False])[1])
|
||||||
ret = stat.S_ISDIR(st.st_mode)
|
ret = stat.S_ISDIR(st.st_mode)
|
||||||
return ret
|
return ret
|
||||||
except:
|
except:
|
||||||
|
|
|
copyparty/thumbcli.py
@@ -1,15 +1,13 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals
 
-import errno
 import os
-import stat
 
 from .__init__ import TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
-from .th_srv import EXTS_AC, HAVE_WEBP, thumb_path
-from .util import Cooldown, Pebkac
+from .th_srv import HAVE_WEBP, thumb_path
+from .util import Cooldown
 
 if True:  # pylint: disable=using-constant-test
     from typing import Optional, Union
@@ -18,9 +16,6 @@ if TYPE_CHECKING:
     from .httpsrv import HttpSrv
 
 
-IOERROR = "reading the file was denied by the server os; either due to filesystem permissions, selinux, apparmor, or similar:\n%r"
-
-
 class ThumbCli(object):
     def __init__(self, hsrv: "HttpSrv") -> None:
         self.broker = hsrv.broker
@@ -36,15 +31,11 @@ class ThumbCli(object):
             if not c:
                 raise Exception()
         except:
-            c = {
-                k: set()
-                for k in ["thumbable", "pil", "vips", "raw", "ffi", "ffv", "ffa"]
-            }
+            c = {k: set() for k in ["thumbable", "pil", "vips", "ffi", "ffv", "ffa"]}
 
         self.thumbable = c["thumbable"]
         self.fmt_pil = c["pil"]
         self.fmt_vips = c["vips"]
-        self.fmt_raw = c["raw"]
         self.fmt_ffi = c["ffi"]
         self.fmt_ffv = c["ffv"]
         self.fmt_ffa = c["ffa"]
@@ -66,17 +57,13 @@ class ThumbCli(object):
         if is_vid and "dvthumb" in dbv.flags:
             return None
 
-        want_opus = fmt in EXTS_AC
+        want_opus = fmt in ("opus", "caf", "mp3")
         is_au = ext in self.fmt_ffa
         is_vau = want_opus and ext in self.fmt_ffv
         if is_au or is_vau:
             if want_opus:
                 if self.args.no_acode:
                     return None
-                elif fmt == "caf" and self.args.no_caf:
-                    fmt = "mp3"
-                elif fmt == "owa" and self.args.no_owa:
-                    fmt = "mp3"
         else:
             if "dathumb" in dbv.flags:
                 return None
@@ -92,7 +79,7 @@ class ThumbCli(object):
         if rem.startswith(".hist/th/") and rem.split(".")[-1] in ["webp", "jpg", "png"]:
             return os.path.join(ptop, rem)
 
-        if fmt[:1] in "jw" and fmt != "wav":
+        if fmt[:1] in "jw":
             sfmt = fmt[:1]
 
             if sfmt == "j" and self.args.th_no_jpg:
@@ -122,18 +109,18 @@ class ThumbCli(object):
                 fmt = sfmt
 
         elif fmt[:1] == "p" and not is_au and not is_vid:
-            t = "cannot thumbnail %r: png only allowed for waveforms"
-            self.log(t % (rem,), 6)
+            t = "cannot thumbnail [%s]: png only allowed for waveforms"
+            self.log(t % (rem), 6)
             return None
 
         histpath = self.asrv.vfs.histtab.get(ptop)
         if not histpath:
-            self.log("no histpath for %r" % (ptop,))
+            self.log("no histpath for [{}]".format(ptop))
             return None
 
         tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)
         tpaths = [tpath]
-        if fmt[:1] == "w" and fmt != "wav":
+        if fmt == "w":
             # also check for jpg (maybe webp is unavailable)
             tpaths.append(tpath.rsplit(".", 1)[0] + ".jpg")
 
@@ -166,22 +153,8 @@ class ThumbCli(object):
         if abort:
             return None
 
-        ap = os.path.join(ptop, rem)
-        try:
-            st = bos.stat(ap)
-            if not st.st_size or not stat.S_ISREG(st.st_mode):
-                return None
-
-            with open(ap, "rb", 4) as f:
-                if not f.read(4):
-                    raise Exception()
-        except OSError as ex:
-            if ex.errno == errno.ENOENT:
-                raise Pebkac(404)
-            else:
-                raise Pebkac(500, IOERROR % (ex,))
-        except Exception as ex:
-            raise Pebkac(500, IOERROR % (ex,))
+        if not bos.path.getsize(os.path.join(ptop, rem)):
+            return None
 
         x = self.broker.ask("thumbsrv.get", ptop, rem, mtime, fmt)
         return x.get()  # type: ignore
copyparty/th_srv.py
@@ -1,14 +1,12 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals
 
+import base64
 import hashlib
-import io
 import logging
 import os
-import re
 import shutil
 import subprocess as sp
-import tempfile
 import threading
 import time
 
@@ -21,22 +19,21 @@ from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, au_unpk, ffprobe
 from .util import BytesIO  # type: ignore
 from .util import (
     FFMPEG_URL,
-    VF_CAREFUL,
     Cooldown,
     Daemon,
+    Pebkac,
     afsenc,
-    atomic_move,
     fsenc,
     min_ex,
     runcmd,
     statdir,
-    ub64enc,
     vsplit,
+    wrename,
     wunlink,
 )
 
 if True:  # pylint: disable=using-constant-test
-    from typing import Any, Optional, Union
+    from typing import Optional, Union
 
 if TYPE_CHECKING:
     from .svchub import SvcHub
@@ -50,13 +47,6 @@ HAVE_HEIF = False
 HAVE_AVIF = False
 HAVE_WEBP = False
 
-EXTS_TH = set(["jpg", "webp", "png"])
-EXTS_AC = set(["opus", "owa", "caf", "mp3", "flac", "wav"])
-EXTS_SPEC_SAFE = set("aif aiff flac mp3 opus wav".split())
-
-PTN_TS = re.compile("^-?[0-9a-f]{8,10}$")
-
-
 try:
     if os.environ.get("PRTY_NO_PIL"):
         raise Exception()
@@ -86,9 +76,6 @@ try:
     if os.environ.get("PRTY_NO_PIL_HEIF"):
         raise Exception()
 
-    try:
-        from pillow_heif import register_heif_opener
-    except ImportError:
-        from pyheif_pillow_opener import register_heif_opener
+    from pyheif_pillow_opener import register_heif_opener
 
     register_heif_opener()
@@ -100,10 +87,6 @@ try:
     if os.environ.get("PRTY_NO_PIL_AVIF"):
         raise Exception()
 
-    if ".avif" in Image.registered_extensions():
-        HAVE_AVIF = True
-        raise Exception()
-
     import pillow_avif  # noqa: F401 # pylint: disable=unused-import
 
     HAVE_AVIF = True
@@ -116,31 +99,14 @@ except:
 
 try:
     if os.environ.get("PRTY_NO_VIPS"):
-        raise ImportError()
+        raise Exception()
 
     HAVE_VIPS = True
     import pyvips
 
     logging.getLogger("pyvips").setLevel(logging.WARNING)
-except Exception as e:
-    HAVE_VIPS = False
-    if not isinstance(e, ImportError):
-        logging.warning("libvips found, but failed to load: " + str(e))
-
-
-try:
-    if os.environ.get("PRTY_NO_RAW"):
-        raise Exception()
-
-    HAVE_RAW = True
-    import rawpy
-
-    logging.getLogger("rawpy").setLevel(logging.WARNING)
 except:
-    HAVE_RAW = False
+    HAVE_VIPS = False
 
 
-th_dir_cache = {}
-
-
 def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -> str:
@@ -156,22 +122,16 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
     if ext in ffa and fmt[:2] in ("wf", "jf"):
         fmt = fmt.replace("f", "")
 
-    dcache = th_dir_cache
-    rd_key = rd + "\n" + fmt
-    rd = dcache.get(rd_key)
-    if not rd:
-        h = hashlib.sha512(afsenc(rd_key)).digest()
-        b64 = ub64enc(h).decode("ascii")[:24]
-        rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
-        if len(dcache) > 9001:
-            dcache.clear()
-        dcache[rd_key] = rd
+    rd += "\n" + fmt
+    h = hashlib.sha512(afsenc(rd)).digest()
+    b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
+    rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
 
     # could keep original filenames but this is safer re pathlen
     h = hashlib.sha512(afsenc(fn)).digest()
-    fn = ub64enc(h).decode("ascii")[:24]
+    fn = base64.urlsafe_b64encode(h).decode("ascii")[:24]
 
-    if fmt in EXTS_AC:
+    if fmt in ("opus", "caf", "mp3"):
         cat = "ac"
     else:
         fc = fmt[:1]
@@ -192,15 +152,11 @@ class ThumbSrv(object):
 
         self.mutex = threading.Lock()
         self.busy: dict[str, list[threading.Condition]] = {}
-        self.untemp: dict[str, list[str]] = {}
         self.ram: dict[str, float] = {}
         self.memcond = threading.Condition(self.mutex)
         self.stopping = False
-        self.rm_nullthumbs = True  # forget failed conversions on startup
         self.nthr = max(1, self.args.th_mt)
 
-        self.exts_spec_unsafe = set(self.args.th_spec_cnv.split(","))
-
         self.q: Queue[Optional[tuple[str, str, str, VFS]]] = Queue(self.nthr * 4)
         for n in range(self.nthr):
             Daemon(self.worker, "thumb-{}-{}".format(n, self.nthr))
@@ -223,19 +179,11 @@ class ThumbSrv(object):
         if self.args.th_clean:
             Daemon(self.cleaner, "thumb.cln")
 
-        (
-            self.fmt_pil,
-            self.fmt_vips,
-            self.fmt_raw,
-            self.fmt_ffi,
-            self.fmt_ffv,
-            self.fmt_ffa,
-        ) = [
+        self.fmt_pil, self.fmt_vips, self.fmt_ffi, self.fmt_ffv, self.fmt_ffa = [
             set(y.split(","))
             for y in [
                 self.args.th_r_pil,
                 self.args.th_r_vips,
-                self.args.th_r_raw,
                 self.args.th_r_ffi,
                 self.args.th_r_ffv,
                 self.args.th_r_ffa,
@@ -258,9 +206,6 @@ class ThumbSrv(object):
         if "vips" in self.args.th_dec:
             self.thumbable |= self.fmt_vips
 
-        if "raw" in self.args.th_dec:
-            self.thumbable |= self.fmt_raw
-
         if "ff" in self.args.th_dec:
             for zss in [self.fmt_ffi, self.fmt_ffv, self.fmt_ffa]:
                 self.thumbable |= zss
@@ -285,7 +230,7 @@ class ThumbSrv(object):
     def get(self, ptop: str, rem: str, mtime: float, fmt: str) -> Optional[str]:
         histpath = self.asrv.vfs.histtab.get(ptop)
         if not histpath:
-            self.log("no histpath for %r" % (ptop,))
+            self.log("no histpath for [{}]".format(ptop))
             return None
 
         tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)
@@ -295,11 +240,10 @@ class ThumbSrv(object):
         with self.mutex:
             try:
                 self.busy[tpath].append(cond)
-                self.log("joined waiting room for %r" % (tpath,))
+                self.log("joined waiting room for %s" % (tpath,))
             except:
                 thdir = os.path.dirname(tpath)
-                chmod = bos.MKD_700 if self.args.free_umask else bos.MKD_755
-                bos.makedirs(os.path.join(thdir, "w"), vf=chmod)
+                bos.makedirs(os.path.join(thdir, "w"))
 
                 inf_path = os.path.join(thdir, "dir.txt")
                 if not bos.path.exists(inf_path):
@@ -313,11 +257,11 @@ class ThumbSrv(object):
                 allvols = list(self.asrv.vfs.all_vols.values())
                 vn = next((x for x in allvols if x.realpath == ptop), None)
                 if not vn:
-                    self.log("ptop %r not in %s" % (ptop, allvols), 3)
-                    vn = self.asrv.vfs.all_aps[0][1][0]
+                    self.log("ptop [{}] not in {}".format(ptop, allvols), 3)
+                    vn = self.asrv.vfs.all_aps[0][1]
 
                 self.q.put((abspath, tpath, fmt, vn))
-                self.log("conv %r :%s \033[0m%r" % (tpath, fmt, abspath), 6)
+                self.log("conv {} :{} \033[0m{}".format(tpath, fmt, abspath), c=6)
 
         while not self.stopping:
             with self.mutex:
@@ -342,7 +286,6 @@ class ThumbSrv(object):
             "thumbable": self.thumbable,
             "pil": self.fmt_pil,
             "vips": self.fmt_vips,
-            "raw": self.fmt_raw,
             "ffi": self.fmt_ffi,
             "ffv": self.fmt_ffv,
             "ffa": self.fmt_ffa,
@@ -382,13 +325,10 @@ class ThumbSrv(object):
             ap_unpk = abspath
 
         if not bos.path.exists(tpath):
-            tex = tpath.rsplit(".", 1)[-1]
-            want_mp3 = tex == "mp3"
-            want_opus = tex in ("opus", "owa", "caf")
-            want_flac = tex == "flac"
-            want_wav = tex == "wav"
-            want_png = tex == "png"
-            want_au = want_mp3 or want_opus or want_flac or want_wav
+            want_mp3 = tpath.endswith(".mp3")
+            want_opus = tpath.endswith(".opus") or tpath.endswith(".caf")
+            want_png = tpath.endswith(".png")
+            want_au = want_mp3 or want_opus
             for lib in self.args.th_dec:
                 can_au = lib == "ff" and (
                     ext in self.fmt_ffa or ext in self.fmt_ffv
@@ -398,17 +338,11 @@ class ThumbSrv(object):
                     funs.append(self.conv_pil)
                 elif lib == "vips" and ext in self.fmt_vips:
                     funs.append(self.conv_vips)
-                elif lib == "raw" and ext in self.fmt_raw:
-                    funs.append(self.conv_raw)
                 elif can_au and (want_png or want_au):
                     if want_opus:
                         funs.append(self.conv_opus)
                     elif want_mp3:
                         funs.append(self.conv_mp3)
-                    elif want_flac:
-                        funs.append(self.conv_flac)
-                    elif want_wav:
-                        funs.append(self.conv_wav)
                     elif want_png:
                         funs.append(self.conv_waves)
                         png_ok = True
@@ -432,18 +366,14 @@ class ThumbSrv(object):
                 fun(ap_unpk, ttpath, fmt, vn)
                 break
             except Exception as ex:
-                msg = "%s could not create thumbnail of %r\n%s"
-                msg = msg % (fun.__name__, abspath, min_ex())
+                msg = "{} could not create thumbnail of {}\n{}"
+                msg = msg.format(fun.__name__, abspath, min_ex())
                 c: Union[str, int] = 1 if "<Signals.SIG" in msg else "90"
                 self.log(msg, c)
                 if getattr(ex, "returncode", 0) != 321:
                     if fun == funs[-1]:
-                        try:
-                            with open(ttpath, "wb") as _:
-                                pass
-                        except Exception as ex:
-                            t = "failed to create the file [%s]: %r"
-                            self.log(t % (ttpath, ex), 3)
+                        with open(ttpath, "wb") as _:
+                            pass
                 else:
                     # ffmpeg may spawn empty files on windows
                     try:
@@ -455,25 +385,14 @@ class ThumbSrv(object):
                 wunlink(self.log, ap_unpk, vn.flags)
 
        try:
-            atomic_move(self.log, ttpath, tpath, vn.flags)
-        except Exception as ex:
-            if not os.path.exists(tpath):
-                t = "failed to move [%s] to [%s]: %r"
-                self.log(t % (ttpath, tpath, ex), 3)
+            wrename(self.log, ttpath, tpath, vn.flags)
+        except:
            pass
 
-        untemp = []
        with self.mutex:
            subs = self.busy[tpath]
            del self.busy[tpath]
            self.ram.pop(ttpath, None)
-            untemp = self.untemp.pop(ttpath, None) or []
 
-        for ap in untemp:
-            try:
-                wunlink(self.log, ap, VF_CAREFUL)
-            except:
-                pass
-
        for x in subs:
            with x:
@@ -512,7 +431,9 @@ class ThumbSrv(object):
 
        return im
 
-    def conv_image_pil(self, im: "Image.Image", tpath: str, fmt: str, vn: VFS) -> None:
+    def conv_pil(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
+        self.wait4ram(0.2, tpath)
+        with Image.open(fsenc(abspath)) as im:
            try:
                im = self.fancy_pillow(im, fmt, vn)
            except Exception as ex:
@@ -540,11 +461,6 @@ class ThumbSrv(object):
 
        im.save(tpath, **args)
 
-    def conv_pil(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        self.wait4ram(0.2, tpath)
-        with Image.open(fsenc(abspath)) as im:
-            self.conv_image_pil(im, tpath, fmt, vn)
-
    def conv_vips(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
        self.wait4ram(0.2, tpath)
        crops = ["centre", "none"]
@@ -563,56 +479,12 @@ class ThumbSrv(object):
                if c == crops[-1]:
                    raise
 
-        assert img  # type: ignore # !rm
+        assert img  # type: ignore
        img.write_to_file(tpath, Q=40)
 
-    def conv_raw(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        self.wait4ram(0.2, tpath)
-        with rawpy.imread(abspath) as raw:
-            thumb = raw.extract_thumb()
-            if thumb.format == rawpy.ThumbFormat.JPEG and tpath.endswith(".jpg"):
-                # if we have a jpg thumbnail and no webp output is available,
-                # just write the jpg directly (it'll be the wrong size, but it's fast)
-                with open(tpath, "wb") as f:
-                    f.write(thumb.data)
-            if HAVE_VIPS:
-                crops = ["centre", "none"]
-                if "f" in fmt:
-                    crops = ["none"]
-                w, h = self.getres(vn, fmt)
-                kw = {"height": h, "size": "down", "intent": "relative"}
-
-                for c in crops:
-                    try:
-                        kw["crop"] = c
-                        if thumb.format == rawpy.ThumbFormat.BITMAP:
-                            img = pyvips.Image.new_from_array(
-                                thumb.data, interpretation="rgb"
-                            )
-                            img = img.thumbnail_image(w, **kw)
-                        else:
-                            img = pyvips.Image.thumbnail_buffer(thumb.data, w, **kw)
-                        break
-                    except:
-                        if c == crops[-1]:
-                            raise
-
-                assert img  # type: ignore # !rm
-                img.write_to_file(tpath, Q=40)
-            elif HAVE_PIL:
-                if thumb.format == rawpy.ThumbFormat.BITMAP:
-                    im = Image.fromarray(thumb.data, "RGB")
-                else:
-                    im = Image.open(io.BytesIO(thumb.data))
-                self.conv_image_pil(im, tpath, fmt, vn)
-            else:
-                raise Exception(
-                    "either pil or vips is needed to process embedded bitmap thumbnails in raw files"
-                )
-
    def conv_ffmpeg(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
        self.wait4ram(0.2, tpath)
-        ret, _, _, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
+        ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if not ret:
            return
@@ -623,17 +495,6 @@ class ThumbSrv(object):
        dur = ret[".dur"][1] if ".dur" in ret else 4
        seek = [b"-ss", "{:.0f}".format(dur / 3).encode("utf-8")]
 
-        self._ffmpeg_im(abspath, tpath, fmt, vn, seek, b"0:v:0")
-
-    def _ffmpeg_im(
-        self,
-        abspath: str,
-        tpath: str,
-        fmt: str,
-        vn: VFS,
-        seek: list[bytes],
-        imap: bytes,
-    ) -> None:
        scale = "scale={0}:{1}:force_original_aspect_ratio="
        if "f" in fmt:
            scale += "decrease,setsar=1:1"
@@ -652,7 +513,7 @@ class ThumbSrv(object):
        cmd += seek
        cmd += [
            b"-i", fsenc(abspath),
-            b"-map", imap,
+            b"-map", b"0:v:0",
            b"-vf", bscale,
            b"-frames:v", b"1",
            b"-metadata:s:v:0", b"rotate=0",
@@ -673,11 +534,11 @@ class ThumbSrv(object):
        ]
 
        cmd += [fsenc(tpath)]
-        self._run_ff(cmd, vn, "convt")
+        self._run_ff(cmd, vn)
 
-    def _run_ff(self, cmd: list[bytes], vn: VFS, kto: str, oom: int = 400) -> None:
+    def _run_ff(self, cmd: list[bytes], vn: VFS, oom: int = 400) -> None:
        # self.log((b" ".join(cmd)).decode("utf-8"))
-        ret, _, serr = runcmd(cmd, timeout=vn.flags[kto], nice=True, oom=oom)
+        ret, _, serr = runcmd(cmd, timeout=vn.flags["convt"], nice=True, oom=oom)
        if not ret:
            return
@@ -721,7 +582,7 @@ class ThumbSrv(object):
        raise sp.CalledProcessError(ret, (cmd[0], b"...", cmd[-1]))
 
    def conv_waves(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        ret, _, _, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
+        ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if "ac" not in ret:
            raise Exception("not audio")
 
@@ -759,7 +620,7 @@ class ThumbSrv(object):
        # fmt: on
 
        cmd += [fsenc(tpath)]
-        self._run_ff(cmd, vn, "convt")
+        self._run_ff(cmd, vn)
 
        if "pngquant" in vn.flags:
            wtpath = tpath + ".png"
@@ -778,70 +639,22 @@ class ThumbSrv(object):
            except:
                pass
            else:
-                atomic_move(self.log, wtpath, tpath, vn.flags)
+                wrename(self.log, wtpath, tpath, vn.flags)
 
-    def conv_emb_cv(
-        self, abspath: str, tpath: str, fmt: str, vn: VFS, strm: dict[str, Any]
-    ) -> None:
-        self.wait4ram(0.2, tpath)
-        self._ffmpeg_im(
-            abspath, tpath, fmt, vn, [], b"0:" + strm["index"].encode("ascii")
-        )
-
    def conv_spec(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        ret, raw, strms, ctnr = ffprobe(abspath, int(vn.flags["convt"] / 2))
+        ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if "ac" not in ret:
            raise Exception("not audio")
 
-        want_spec = vn.flags.get("th_spec_p", 1)
-        if want_spec < 2:
-            for strm in strms:
-                if (
-                    strm.get("codec_type") == "video"
-                    and strm.get("DISPOSITION:attached_pic") == "1"
-                ):
-                    return self.conv_emb_cv(abspath, tpath, fmt, vn, strm)
-
-        if not want_spec:
-            raise Exception("spectrograms forbidden by volflag")
-
-        fext = abspath.split(".")[-1].lower()
-
        # https://trac.ffmpeg.org/ticket/10797
        # expect 1 GiB every 600 seconds when duration is tricky;
        # simple filetypes are generally safer so let's special-case those
-        coeff = 1800 if fext in EXTS_SPEC_SAFE else 600
-        dur = ret[".dur"][1] if ".dur" in ret else 900
+        safe = ("flac", "wav", "aif", "aiff", "opus")
+        coeff = 1800 if abspath.split(".")[-1].lower() in safe else 600
+        dur = ret[".dur"][1] if ".dur" in ret else 300
        need = 0.2 + dur / coeff
        self.wait4ram(need, tpath)
 
-        infile = abspath
-        if dur >= 900 or fext in self.exts_spec_unsafe:
-            with tempfile.NamedTemporaryFile(suffix=".spec.flac", delete=False) as f:
-                f.write(b"h")
-                infile = f.name
-            try:
-                self.untemp[tpath].append(infile)
-            except:
-                self.untemp[tpath] = [infile]
-
-            # fmt: off
-            cmd = [
-                b"ffmpeg",
-                b"-nostdin",
-                b"-v", b"error",
-                b"-hide_banner",
-                b"-i", fsenc(abspath),
-                b"-map", b"0:a:0",
-                b"-ac", b"1",
-                b"-ar", b"48000",
-                b"-sample_fmt", b"s16",
-                b"-t", b"900",
-                b"-y", fsenc(infile),
-            ]
-            # fmt: on
-            self._run_ff(cmd, vn, "convt")
-
        fc = "[0:a:0]aresample=48000{},showspectrumpic=s="
        if "3" in fmt:
            fc += "1280x1024,crop=1420:1056:70:48[o]"
@@ -861,7 +674,7 @@ class ThumbSrv(object):
            b"-nostdin",
            b"-v", b"error",
            b"-hide_banner",
-            b"-i", fsenc(infile),
+            b"-i", fsenc(abspath),
            b"-filter_complex", fc.encode("utf-8"),
            b"-map", b"[o]",
            b"-frames:v", b"1",
@@ -882,7 +695,7 @@ class ThumbSrv(object):
        ]
 
        cmd += [fsenc(tpath)]
-        self._run_ff(cmd, vn, "convt")
+        self._run_ff(cmd, vn)
 
    def conv_mp3(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
        quality = self.args.q_mp3.lower()
@@ -890,7 +703,7 @@ class ThumbSrv(object):
            raise Exception("disabled in server config")
 
        self.wait4ram(0.2, tpath)
-        tags, rawtags, _, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
+        tags, rawtags = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if "ac" not in tags:
            raise Exception("not audio")
 
@@ -921,148 +734,36 @@ class ThumbSrv(object):
            fsenc(tpath)
        ]
        # fmt: on
-        self._run_ff(cmd, vn, "aconvt", oom=300)
+        self._run_ff(cmd, vn, oom=300)
 
-    def conv_flac(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        if self.args.no_acode or not self.args.allow_flac:
-            raise Exception("flac not permitted in server config")
-
-        self.wait4ram(0.2, tpath)
-        tags, _, _, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
-        if "ac" not in tags:
-            raise Exception("not audio")
-
-        self.log("conv2 flac", 6)
-
-        # fmt: off
-        cmd = [
-            b"ffmpeg",
-            b"-nostdin",
-            b"-v", b"error",
-            b"-hide_banner",
-            b"-i", fsenc(abspath),
-            b"-map", b"0:a:0",
-            b"-c:a", b"flac",
-            fsenc(tpath)
-        ]
-        # fmt: on
-        self._run_ff(cmd, vn, "aconvt", oom=300)
-
-    def conv_wav(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        if self.args.no_acode or not self.args.allow_wav:
-            raise Exception("wav not permitted in server config")
-
-        self.wait4ram(0.2, tpath)
-        tags, _, _, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
-        if "ac" not in tags:
-            raise Exception("not audio")
-
-        bits = tags[".bps"][1]
-        if bits == 0.0:
-            bits = tags[".bprs"][1]
-
-        codec = b"pcm_s32le"
-        if bits <= 16.0:
-            codec = b"pcm_s16le"
-        elif bits <= 24.0:
-            codec = b"pcm_s24le"
-
-        self.log("conv2 wav", 6)
-
-        # fmt: off
-        cmd = [
-            b"ffmpeg",
-            b"-nostdin",
-            b"-v", b"error",
-            b"-hide_banner",
-            b"-i", fsenc(abspath),
-            b"-map", b"0:a:0",
-            b"-c:a", codec,
-            fsenc(tpath)
-        ]
-        # fmt: on
-        self._run_ff(cmd, vn, "aconvt", oom=300)
-
    def conv_opus(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
        if self.args.no_acode or not self.args.q_opus:
            raise Exception("disabled in server config")
 
        self.wait4ram(0.2, tpath)
-        tags, rawtags, _, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
+        tags, rawtags = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if "ac" not in tags:
            raise Exception("not audio")
 
-        sq = "%dk" % (self.args.q_opus,)
-        bq = sq.encode("ascii")
-        if tags["ac"][1] == "opus":
-            enc = "-c:a copy"
-        else:
-            enc = "-c:a libopus -b:a " + sq
-
-        fun = self._conv_caf if fmt == "caf" else self._conv_owa
-
-        fun(abspath, tpath, tags, rawtags, enc, bq, vn)
-
-    def _conv_owa(
-        self,
-        abspath: str,
-        tpath: str,
-        tags: dict[str, tuple[int, Any]],
-        rawtags: dict[str, list[Any]],
-        enc: str,
-        bq: bytes,
-        vn: VFS,
-    ) -> None:
-        if tpath.endswith(".owa"):
-            container = b"webm"
-            tagset = [b"-map_metadata", b"-1"]
-        else:
-            container = b"opus"
-            tagset = self.big_tags(rawtags)
-
-        self.log("conv2 %s [%s]" % (container, enc), 6)
-        benc = enc.encode("ascii").split(b" ")
-
-        # fmt: off
-        cmd = [
-            b"ffmpeg",
-            b"-nostdin",
-            b"-v", b"error",
-            b"-hide_banner",
-            b"-i", fsenc(abspath),
-        ] + tagset + [
-            b"-map", b"0:a:0",
-        ] + benc + [
-            b"-f", container,
-            fsenc(tpath)
-        ]
-        # fmt: on
-        self._run_ff(cmd, vn, "aconvt", oom=300)
-
-    def _conv_caf(
-        self,
-        abspath: str,
-        tpath: str,
-        tags: dict[str, tuple[int, Any]],
-        rawtags: dict[str, list[Any]],
-        enc: str,
-        bq: bytes,
-        vn: VFS,
-    ) -> None:
-        tmp_opus = tpath + ".opus"
-        try:
-            wunlink(self.log, tmp_opus, vn.flags)
-        except:
-            pass
-
        try:
            dur = tags[".dur"][1]
        except:
            dur = 0
 
-        self.log("conv2 caf-tmp [%s]" % (enc,), 6)
-        benc = enc.encode("ascii").split(b" ")
+        src_opus = abspath.lower().endswith(".opus") or tags["ac"][1] == "opus"
+        want_caf = tpath.endswith(".caf")
+        tmp_opus = tpath
+        if want_caf:
+            tmp_opus = tpath + ".opus"
+            try:
+                wunlink(self.log, tmp_opus, vn.flags)
+            except:
+                pass
 
+        caf_src = abspath if src_opus else tmp_opus
+        bq = ("%dk" % (self.args.q_opus,)).encode("ascii")
+
+        if not want_caf or not src_opus:
        # fmt: off
        cmd = [
            b"ffmpeg",
@@ -1070,24 +771,21 @@ class ThumbSrv(object):
            b"-v", b"error",
            b"-hide_banner",
            b"-i", fsenc(abspath),
-            b"-map_metadata", b"-1",
+        ] + self.big_tags(rawtags) + [
            b"-map", b"0:a:0",
-        ] + benc + [
-            b"-f", b"opus",
+            b"-c:a", b"libopus",
+            b"-b:a", bq,
            fsenc(tmp_opus)
        ]
        # fmt: on
-        self._run_ff(cmd, vn, "aconvt", oom=300)
+        self._run_ff(cmd, vn, oom=300)
 
        # iOS fails to play some "insufficiently complex" files
        # (average file shorter than 8 seconds), so of course we
        # fix that by mixing in some inaudible pink noise :^)
        # 6.3 sec seems like the cutoff so lets do 7, and
        # 7 sec of psyqui-musou.opus @ 3:50 is 174 KiB
-        sz = bos.path.getsize(tmp_opus)
-        if dur < 20 or sz < 256 * 1024:
-            zs = bq.decode("ascii")
-            self.log("conv2 caf-transcode; dur=%d sz=%d q=%s" % (dur, sz, zs), 6)
+        if want_caf and (dur < 20 or bos.path.getsize(caf_src) < 256 * 1024):
            # fmt: off
            cmd = [
                b"ffmpeg",
@@ -1104,18 +802,17 @@ class ThumbSrv(object):
                fsenc(tpath)
            ]
            # fmt: on
-            self._run_ff(cmd, vn, "aconvt", oom=300)
+            self._run_ff(cmd, vn, oom=300)
 
-        else:
+        elif want_caf:
            # simple remux should be safe
-            self.log("conv2 caf-remux; dur=%d sz=%d" % (dur, sz), 6)
            # fmt: off
            cmd = [
                b"ffmpeg",
                b"-nostdin",
                b"-v", b"error",
                b"-hide_banner",
-                b"-i", fsenc(tmp_opus),
+                b"-i", fsenc(abspath if src_opus else tmp_opus),
                b"-map_metadata", b"-1",
                b"-map", b"0:a:0",
                b"-c:a", b"copy",
@@ -1123,8 +820,9 @@ class ThumbSrv(object):
                fsenc(tpath)
            ]
            # fmt: on
-            self._run_ff(cmd, vn, "aconvt", oom=300)
+            self._run_ff(cmd, vn, oom=300)
 
+        if tmp_opus != tpath:
            try:
                wunlink(self.log, tmp_opus, vn.flags)
            except:
@@ -1155,6 +853,7 @@ class ThumbSrv(object):
    def cleaner(self) -> None:
        interval = self.args.th_clean
        while True:
+            time.sleep(interval)
            ndirs = 0
            for vol, histpath in self.asrv.vfs.histtab.items():
                if histpath.startswith(vol):
@@ -1168,8 +867,6 @@ class ThumbSrv(object):
                    self.log("\033[Jcln err in %s: %r" % (histpath, ex), 3)
 
            self.log("\033[Jcln ok; rm {} dirs".format(ndirs))
-            self.rm_nullthumbs = False
-            time.sleep(interval)
 
    def clean(self, histpath: str) -> int:
        ret = 0
@@ -1184,15 +881,13 @@ class ThumbSrv(object):
 
    def _clean(self, cat: str, thumbpath: str) -> int:
        # self.log("cln {}".format(thumbpath))
-        exts = EXTS_TH if cat == "th" else EXTS_AC
+        exts = ["jpg", "webp", "png"] if cat == "th" else ["opus", "caf", "mp3"]
        maxage = getattr(self.args, cat + "_maxage")
        now = time.time()
        prev_b64 = None
        prev_fp = ""
        try:
-            t1 = statdir(
-                self.log_func, not self.args.no_scandir, False, thumbpath, False
-            )
+            t1 = statdir(self.log_func, not self.args.no_scandir, False, thumbpath)
            ents = sorted(list(t1))
        except:
            return 0
@@ -1225,8 +920,6 @@ class ThumbSrv(object):
                # thumb file
                try:
                    b64, ts, ext = f.split(".")
-                    if len(ts) > 8 and PTN_TS.match(ts):
-                        ts = "yeahokay"
                    if len(b64) != 24 or len(ts) != 8 or ext not in exts:
                        raise Exception()
                except:
@@ -1235,10 +928,6 @@ class ThumbSrv(object):
 
                continue
 
-            if self.rm_nullthumbs and not inf.st_size:
-                bos.unlink(fp)
-                continue
-
            if b64 == prev_b64:
                self.log("rm replaced [{}]".format(fp))
                bos.unlink(prev_fp)
copyparty/u2idx.py
@@ -53,8 +53,6 @@ class U2idx(object):
            self.log("your python does not have sqlite3; searching will be disabled")
            return
 
-        assert sqlite3  # type: ignore # !rm
-
        self.active_id = ""
        self.active_cur: Optional["sqlite3.Cursor"] = None
        self.cur: dict[str, "sqlite3.Cursor"] = {}
@@ -70,9 +68,6 @@ class U2idx(object):
        self.log_func("u2idx", msg, c)
 
    def shutdown(self) -> None:
-        if not HAVE_SQLITE3:
-            return
-
        for cur in self.cur.values():
            db = cur.connection
            try:
@@ -83,12 +78,6 @@ class U2idx(object):
            cur.close()
            db.close()
 
-        for cur in (self.mem_cur, self.sh_cur):
-            if cur:
-                db = cur.connection
-                cur.close()
-                db.close()
-
    def fsearch(
        self, uname: str, vols: list[VFS], body: dict[str, Any]
    ) -> list[dict[str, Any]]:
@@ -104,7 +93,7 @@ class U2idx(object):
        uv: list[Union[str, int]] = [wark[:16], wark]
 
        try:
-            return self.run_query(uname, vols, uq, uv, False, True, 99999)[0]
+            return self.run_query(uname, vols, uq, uv, False, 99999)[0]
        except:
            raise Pebkac(500, min_ex())
 
@@ -115,7 +104,7 @@ class U2idx(object):
        if not HAVE_SQLITE3 or not self.args.shr:
            return None
 
-        assert sqlite3  # type: ignore # !rm
+        assert sqlite3  # type: ignore
 
        db = sqlite3.connect(self.args.shr_db, timeout=2, check_same_thread=False)
        cur = db.cursor()
@@ -131,12 +120,12 @@ class U2idx(object):
        if not HAVE_SQLITE3 or "e2d" not in vn.flags:
            return None
 
-        assert sqlite3  # type: ignore # !rm
+        assert sqlite3  # type: ignore
 
        ptop = vn.realpath
-        histpath = self.asrv.vfs.dbpaths.get(ptop)
+        histpath = self.asrv.vfs.histtab.get(ptop)
        if not histpath:
-            self.log("no dbpath for %r" % (ptop,))
+            self.log("no histpath for [{}]".format(ptop))
            return None
 
        db_path = os.path.join(histpath, "up2k.db")
@@ -151,7 +140,7 @@ class U2idx(object):
            db = sqlite3.connect(uri, timeout=2, uri=True, check_same_thread=False)
            cur = db.cursor()
            cur.execute('pragma table_info("up")').fetchone()
-            self.log("ro: %r" % (db_path,))
+            self.log("ro: {}".format(db_path))
        except:
            self.log("could not open read-only: {}\n{}".format(uri, min_ex()))
            # may not fail until the pragma so unset it
@@ -161,7 +150,7 @@ class U2idx(object):
            # on windows, this steals the write-lock from up2k.deferred_init --
            # seen on win 10.0.17763.2686, py 3.10.4, sqlite 3.37.2
            cur = sqlite3.connect(db_path, timeout=2, check_same_thread=False).cursor()
-            self.log("opened %r" % (db_path,))
+            self.log("opened {}".format(db_path))
 
        self.cur[ptop] = cur
        return cur
@@ -310,7 +299,7 @@ class U2idx(object):
        q += " lower({}) {} ? ) ".format(field, oper)
 
        try:
-            return self.run_query(uname, vols, q, va, have_mt, True, lim)
+            return self.run_query(uname, vols, q, va, have_mt, lim)
        except Exception as ex:
            raise Pebkac(500, repr(ex))
 
@@ -321,11 +310,9 @@ class U2idx(object):
        uq: str,
        uv: list[Union[str, int]],
        have_mt: bool,
-        sort: bool,
        lim: int,
    ) -> tuple[list[dict[str, Any]], list[str], bool]:
-        dbg = self.args.srch_dbg
-        if dbg:
+        if self.args.srch_dbg:
            t = "searching across all %s volumes in which the user has 'r' (full read access):\n %s"
            zs = "\n ".join(["/%s = %s" % (x.vpath, x.realpath) for x in vols])
            self.log(t % (len(vols), zs), 5)
@@ -368,14 +355,14 @@ class U2idx(object):
            if not cur:
                continue
 
-            dots = flags.get("dotsrch") and uname in vol.axs.udot
-            zs = "srch_re_dots" if dots else "srch_re_nodot"
-            rex: re.Pattern = flags.get(zs)  # type: ignore
+            excl = []
+            for vp2 in self.asrv.vfs.all_vols.keys():
+                if vp2.startswith((vtop + "/").lstrip("/")) and vtop != vp2:
+                    excl.append(vp2[len(vtop) :].lstrip("/"))
 
-            if dbg:
-                t = "searching in volume /%s (%s), excluding %s"
-                self.log(t % (vtop, ptop, rex.pattern), 5)
-                rex_cfg: Optional[re.Pattern] = flags.get("srch_excl")
+            if self.args.srch_dbg:
+                t = "searching in volume /%s (%s), excludelist %s"
+                self.log(t % (vtop, ptop, excl), 5)
 
            self.active_cur = cur
 
@@ -388,31 +375,29 @@ class U2idx(object):
 
            sret = []
            fk = flags.get("fk")
+            dots = flags.get("dotsrch") and uname in vol.axs.udot
            fk_alg = 2 if "fka" in flags else 1
            c = cur.execute(uq, tuple(vuv))
            for hit in c:
-                w, ts, sz, rd, fn = hit[:5]
+                w, ts, sz, rd, fn, ip, at = hit[:7]
 
                if rd.startswith("//") or fn.startswith("//"):
                    rd, fn = s3dec(rd, fn)
 
-                vp = vjoin(vjoin(vtop, rd), fn)
-
-                if vp in seen_rps:
-                    continue
-
-                if rex.search(vp):
-                    if dbg:
-                        if rex_cfg and rex_cfg.search(vp):  # type: ignore
-                            self.log("filtered by srch_excl: %s" % (vp,), 6)
-                        elif not dots and "/." in ("/" + vp):
-                            pass
-                        else:
-                            t = "database inconsistency in volume '/%s'; ignoring: %s"
-                            self.log(t % (vtop, vp), 1)
-                    continue
-
-                rp = quotep(vp)
+                if rd in excl or any([x for x in excl if rd.startswith(x + "/")]):
+                    if self.args.srch_dbg:
+                        zs = vjoin(vjoin(vtop, rd), fn)
+                        t = "database inconsistency in volume '/%s'; ignoring: %s"
+                        self.log(t % (vtop, zs), 1)
+                    continue
+
+                rp = quotep("/".join([x for x in [vtop, rd, fn] if x]))
+                if not dots and "/." in ("/" + rp):
+                    continue
+
+                if rp in seen_rps:
+                    continue
+
                if not fk:
                    suf = ""
                else:
@@ -434,7 +419,7 @@ class U2idx(object):
                        if lim < 0:
                            break
 
-                        if dbg:
+                        if self.args.srch_dbg:
                            t = "in volume '/%s': hit: %s"
                            self.log(t % (vtop, rp), 5)
 
@@ -464,14 +449,13 @@ class U2idx(object):
            ret.extend(sret)
            # print("[{}] {}".format(ptop, sret))
 
-            if dbg:
+            if self.args.srch_dbg:
                t = "in volume '/%s': got %d hits, %d total so far"
                self.log(t % (vtop, len(sret), len(ret)), 5)
 
        done_flag.append(True)
        self.active_id = ""
 
-        if sort:
-            ret.sort(key=itemgetter("rp"))
+        ret.sort(key=itemgetter("rp"))
 
        return ret, list(taglist.keys()), lim < 0 and not clamped
@@ -483,5 +467,5 @@ class U2idx(object):
            return
 
        if identifier == self.active_id:
-            assert self.active_cur  # !rm
+            assert self.active_cur
            self.active_cur.connection.interrupt()
copyparty/up2k.py (1568 changed lines)
File diff suppressed because it is too large

copyparty/util.py (1072 changed lines)
File diff suppressed because it is too large
|
@ -32,7 +32,7 @@ window.baguetteBox = (function () {
|
||||||
scrollCSS = ['', ''],
|
scrollCSS = ['', ''],
|
||||||
scrollTimer = 0,
|
scrollTimer = 0,
|
||||||
re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
|
re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
|
||||||
re_v = /^[^?]+\.(webm|mkv|mp4|m4v|mov)(\?|$)/i,
|
re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
|
||||||
anims = ['slideIn', 'fadeIn', 'none'],
|
anims = ['slideIn', 'fadeIn', 'none'],
|
||||||
data = {}, // all galleries
|
data = {}, // all galleries
|
||||||
imagesElements = [],
|
imagesElements = [],
|
||||||
|
@ -48,7 +48,6 @@ window.baguetteBox = (function () {
|
||||||
|
|
||||||
var onFSC = function (e) {
|
var onFSC = function (e) {
|
||||||
isFullscreen = !!document.fullscreenElement;
|
isFullscreen = !!document.fullscreenElement;
|
||||||
clmod(document.documentElement, 'bb_fsc', isFullscreen);
|
|
||||||
};
|
};
|
||||||
|
|
||||||
var overlayClickHandler = function (e) {
|
var overlayClickHandler = function (e) {
|
||||||
|
@ -403,7 +402,7 @@ window.baguetteBox = (function () {
|
||||||
if (isFullscreen)
|
if (isFullscreen)
|
||||||
document.exitFullscreen();
|
document.exitFullscreen();
|
||||||
else
|
else
|
||||||
ebi('bbox-overlay').requestFullscreen();
|
(vid() || ebi('bbox-overlay')).requestFullscreen();
|
||||||
}
|
}
|
||||||
catch (ex) {
|
catch (ex) {
|
||||||
if (IPHONE)
|
if (IPHONE)
|
||||||
|
@ -593,7 +592,9 @@ window.baguetteBox = (function () {
|
||||||
preloadPrev(currentIndex);
|
preloadPrev(currentIndex);
|
||||||
});
|
});
|
||||||
|
|
||||||
show_buttons(0);
|
clmod(ebi('bbox-btns'), 'off');
|
||||||
|
clmod(btnPrev, 'off');
|
||||||
|
clmod(btnNext, 'off');
|
||||||
|
|
||||||
updateOffset();
|
updateOffset();
|
||||||
overlay.style.display = 'block';
|
overlay.style.display = 'block';
|
||||||
|
@ -632,9 +633,6 @@ window.baguetteBox = (function () {
|
||||||
catch (ex) { }
|
catch (ex) { }
|
||||||
isFullscreen = false;
|
isFullscreen = false;
|
||||||
|
|
||||||
if (toast.tag == 'bb-ded')
|
|
||||||
toast.hide();
|
|
||||||
|
|
||||||
if (dtor || overlay.style.display === 'none')
|
if (dtor || overlay.style.display === 'none')
|
||||||
return;
|
return;
|
||||||
|
|
||||||
|
@ -670,7 +668,6 @@ window.baguetteBox = (function () {
|
||||||
if (v == keep)
|
if (v == keep)
|
||||||
continue;
|
continue;
|
||||||
|
|
||||||
unbind(v, 'error', lerr);
|
|
||||||
v.src = '';
|
v.src = '';
|
||||||
v.load();
|
v.load();
|
||||||
|
|
||||||
|
@@ -698,28 +695,6 @@ window.baguetteBox = (function () {
         }
     }

-    function lerr() {
-        var t;
-        try {
-            t = this.getAttribute('src');
-            t = uricom_dec(t.split('/').pop().split('?')[0]);
-        }
-        catch (ex) { }
-
-        t = 'Failed to open ' + (t?t:'file');
-        console.log('bb-ded', t);
-        t += '\n\nEither the file is corrupt, or your browser does not understand the file format or codec';
-
-        try {
-            t += "\n\nerr#" + this.error.code + ", " + this.error.message;
-        }
-        catch (ex) { }
-
-        this.ded = esc(t);
-        if (this === vidimg())
-            toast.err(20, this.ded, 'bb-ded');
-    }
-
     function loadImage(index, callback) {
         var imageContainer = imagesElements[index];
         var galleryItem = currentGallery[index];
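Note: the `lerr` handler removed above leans on the standard `HTMLMediaElement.error` property: a `MediaError` whose numeric `code` (1..4: aborted, network, decode, source-not-supported) and browser-dependent `message` describe why playback failed. A minimal standalone version of the idea (`myVideo` is a hypothetical element, not from the diff):

    function onMediaError() {
        // 'this' is the <img>/<video> whose load failed;
        // only media elements carry a .error object, hence the guard
        var msg = 'Failed to open ' + (this.currentSrc || 'file');
        if (this.error)
            msg += ' (err#' + this.error.code + ', ' + (this.error.message || '?') + ')';
        console.log(msg);
    }
    myVideo.addEventListener('error', onMediaError);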
@@ -764,8 +739,7 @@ window.baguetteBox = (function () {
         var image = mknod(is_vid ? 'video' : 'img');
         clmod(imageContainer, 'vid', is_vid);

-        bind(image, 'error', lerr);
-        bind(image, is_vid ? 'loadedmetadata' : 'load', function () {
+        image.addEventListener(is_vid ? 'loadedmetadata' : 'load', function () {
             // Remove loader element
             qsr('#baguette-img-' + index + ' .bbox-spinner');
             if (!options.async && callback)
@@ -775,8 +749,6 @@ window.baguetteBox = (function () {
             if (is_vid) {
                 image.volume = clamp(fcfg_get('vol', dvol / 100), 0, 1);
                 image.setAttribute('controls', 'controls');
-                image.setAttribute('playsinline', '1');
-                // ios ignores poster
                 image.onended = vidEnd;
                 image.onplay = function () { show_buttons(1); };
                 image.onpause = function () { show_buttons(); };
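Note: the `playsinline` attribute, present on the hovudstraum side and absent in v1.14.3, tells iOS Safari to keep a playing video inline instead of forcing it into the native fullscreen player; other browsers ignore it. Sketch:

    var v = document.createElement('video');
    v.setAttribute('controls', 'controls');
    v.setAttribute('playsinline', '1'); // without this, iOS auto-fullscreens playback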
@@ -844,12 +816,6 @@ window.baguetteBox = (function () {
         });
         updateOffset();

-        var im = vidimg();
-        if (im && im.ded)
-            toast.err(20, im.ded, 'bb-ded');
-        else if (toast.tag == 'bb-ded')
-            toast.hide();
-
         if (options.animation == 'none')
             unvid(vid());
         else
File diff suppressed because it is too large
copyparty/web/browser.html:

@@ -108,9 +108,12 @@

     {%- for f in files %}
     <tr><td>{{ f.lead }}</td><td><a href="{{ f.href }}">{{ f.name|e }}</a></td><td>{{ f.sz }}</td>
     {%- if f.tags is defined %}
-    {%- for k in taglist %}<td>{{ f.tags[k]|e }}</td>{%- endfor %}
-    {%- endif %}<td>{{ f.ext }}</td><td>{{ f.dt }}</td></tr>
+    {%- for k in taglist %}
+    <td>{{ f.tags[k] }}</td>
+    {%- endfor %}
+    {%- endif %}
+    <td>{{ f.ext }}</td><td>{{ f.dt }}</td></tr>
     {%- endfor %}

   </tbody>
@@ -124,21 +127,24 @@

 </div>

+{%- if srv_info %}
 <div id="srv_info"><span>{{ srv_info }}</span></div>
+{%- endif %}

 <div id="widget"></div>

 <script>
-var SR = "{{ r }}",
-    CGV1 = {{ cgv1 }},
+var SR = {{ r|tojson }},
     CGV = {{ cgv|tojson }},
     TS = "{{ ts }}",
     dtheme = "{{ dtheme }}",
     srvinf = "{{ srv_info }}",
+    s_name = "{{ s_name }}",
     lang = "{{ lang }}",
     dfavico = "{{ favico }}",
-    have_tags_idx = {{ have_tags_idx }},
+    have_tags_idx = {{ have_tags_idx|tojson }},
     sb_lg = "{{ sb_lg }}",
+    txt_ext = "{{ txt_ext }}",
     logues = {{ logues|tojson if sb_lg else "[]" }},
     ls0 = {{ ls0|tojson }};

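Note: the `|tojson` filter is the safe way to hand server-side values to inline javascript: `"{{ r }}"` pastes the raw value between hand-written quotes and breaks (or becomes injectable) as soon as the value contains a quote, backslash or `</script>`, while `{{ r|tojson }}` renders a complete, correctly-escaped literal. Illustration with a made-up value:

    // if r is the string  a"b  then the two template lines render as:
    //   var SR = "a"b";      // from "{{ r }}"       -- broken javascript
    //   var SR = "a\"b";     // from {{ r|tojson }}  -- valid, escaped literal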
copyparty/web/browser.js: 10845 changes (file diff suppressed because it is too large)
copyparty/web/dd/2.png: new binary file, 258 B (not shown)
copyparty/web/dd/3.png: new binary file, 252 B (not shown)
Some files were not shown because too many files have changed in this diff.