Compare commits

..

61 Commits

Author SHA1 Message Date
Safihre
e1ea4f1e7e Update text files for 5.0.0Beta1 2026-02-02 15:26:00 +01:00
mnightingale
e40098d0e7 Remove sleep when cache is full (#3300)
* Remove cache full sleep

* Remove override_trigger
2026-02-02 10:27:48 +01:00
SABnzbd Automation
5025f9ec5d Update translatable texts
[skip ci]
2026-02-02 08:15:36 +00:00
mnightingale
26a485374c Only wake servers while requests are available (#3299) 2026-02-02 09:14:54 +01:00
mnightingale
5b3a8fcd3f Fix nzb deadlocks (#3298)
* Fix nzb deadlocks

* Keep the lock behaviour unchanged but ensure correct order
2026-01-30 17:23:57 +01:00
mnightingale
44447ab416 Add more logging to stop_idle_jobs (#3294)
* Add more logging to stop_idle_jobs

* Would log too much

* Reduce logging a little

* Tweak message

* Spelling

* Log first articles
2026-01-30 15:24:52 +01:00
mnightingale
040573c75c Fix deadlock in hard_reset/remove_socket (#3297) 2026-01-29 18:14:22 +01:00
mnightingale
16a6936053 Bind socket throughout test but don't listen and configure a timeout (#3296) 2026-01-29 12:07:27 +01:00
mnightingale
e2921e7b9c Add guards to process_nw_read (#3295) 2026-01-29 08:10:12 +01:00
mnightingale
e1cd1eed83 Remove unused logging arguments (#3293) 2026-01-27 20:35:23 +01:00
SABnzbd Automation
a4de704967 Update translatable texts
[skip ci]
2026-01-26 18:07:10 +00:00
mnightingale
d9f9aa5bea Fix adding sockets mid-connect (#3291)
* Do not add sockets that are not already connected

* Don't preemptively mark thread busy

* Clear nntp instance on failed connect

* Just use reset_nw like everywhere else

* Track when the socket is connected; idle connections can handle requests when ready (completed auth) or socket_connected

* Add tests for connection state handling

* Windows is really slow at this

* Rename connected to ready and socket_connected to connected
2026-01-26 19:06:22 +01:00
renovate[bot]
f4b73cf9ec Update all dependencies (#3292)
* Update all dependencies

* pycparser dropped support for Python 3.9

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Safihre <safihre@sabnzbd.org>
2026-01-26 11:09:39 +01:00
SABnzbd Automation
ddc84542eb Update translatable texts
[skip ci]
2026-01-26 09:50:06 +00:00
Safihre
9624a285f1 Add Apprise documentation URL to Notifications page 2026-01-26 10:45:11 +01:00
Safihre
43a9678f07 Do not show tracebacks externally
Relates to #3286
2026-01-23 13:36:46 +01:00
mnightingale
4ee41e331c Handle SSLWantWriteError exceptions and buffer writes (#3289)
* Handle SSLWantWriteError

* Add buffer for non-blocking writes
2026-01-23 09:21:32 +01:00
mnightingale
062dc9fa11 Fix assembler waiting for failed article (#3290) 2026-01-23 07:12:16 +01:00
SABnzbd Automation
d215d4b0d7 Update translatable texts
[skip ci]
2026-01-21 20:44:06 +00:00
Safihre
04711886d9 Bump next release from 4.6 to 5.0 due to major changes
No release yet, just text bumps
2026-01-21 21:42:59 +01:00
mnightingale
a19b3750e3 Handle non-fatal errors during read (#3280)
* Handle non-fatal errors during read

* sabctools 9.3.1
2026-01-21 21:18:58 +01:00
mnightingale
eff5f663ab Add database indexes (#3283)
* Add database indexes

* Remove completed bytes index

* Allow duplicate query to short circuit

* Remove duplicate indexes

* Remove most of the query changes
2026-01-20 11:52:49 +01:00
mnightingale
46c98acff3 Fix inconsistent NzbFile sorting (#3276)
* Fix inconsistent NzbFile sorting

* Add more groups and tiers

* Black formatting
2026-01-20 11:06:02 +01:00
renovate[bot]
df5fad29bc Update all dependencies (#3285)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-19 13:04:31 +01:00
SABnzbd Automation
27d222943c Update translatable texts
[skip ci]
2026-01-19 11:43:49 +00:00
Safihre
3384beed24 Make black 26.1.0 happy again - almost 2026-01-19 12:42:51 +01:00
mnightingale
bf41237135 Trigger assemble when next is available and it has been 5 seconds (#3281) 2026-01-17 09:53:06 +01:00
mnightingale
3d4fabfbdf Log socket exception type on write error (#3279) 2026-01-16 22:08:16 +01:00
mnightingale
cf14e24036 Revert pipelining stat/head check (#3278) 2026-01-15 21:12:20 +01:00
Safihre
d0c2b74181 Add current version/environment metadata to _api_showlog
Closes #3277
2026-01-15 10:02:34 +01:00
mnightingale
d21a111993 Fix pipelining connection read/write logic errors (#3272)
* Fix commands which fail to be sent are lost

* Force macOS to use the select implementation

* Do not recreate lock when reinitialised

* Suppress errors when closing socket to ensure socket is closed

* Make connection errors on read or write both only wait 5 seconds before reconnecting

* Fix selector selection

* Only check generation under lock

* Use PollSelector
2026-01-12 15:19:07 +01:00
mnightingale
3e7dcce365 Fix queue cannot be loaded (#3271)
* Fix queue cannot be restored

* Also change init

* Add a test

Fixes #3269
2026-01-12 09:54:59 +01:00
renovate[bot]
5594d4d6eb Update dependency urllib3 to v2.6.3 (#3274)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 01:34:09 +00:00
SABnzbd Automation
605a1b30be Update translatable texts
[skip ci]
2026-01-11 21:42:10 +00:00
mnightingale
a2cb861640 Fix bug when bandwidth limit is removed (#3273) 2026-01-11 22:41:27 +01:00
mnightingale
df1c0915d0 Recreate frènch_german_demö test data (#3268) 2026-01-09 10:59:51 +01:00
mnightingale
4d73c3e9c0 Adjust monitored socket events as required to prevent hot looping (#3267)
* Adjust monitored socket events as required to prevent hot looping

* Prevent write hot looping

* Guard against pending recursive call

* Already have EVENT_READ and don't need to handle it in two places
2026-01-08 15:14:59 +01:00
Safihre
17dcff49b2 Generalize locking strategy (#3264)
* Use per-nzo/nzf lock in wrapper

* Replace global TryList lock with per-nzo one

* Offset does require file_lock
2026-01-06 17:00:27 +01:00
mnightingale
220186299b Do not throttle flushing cache during shutdown (#3265)
* Add a deadline for flushing cache contents on shutdown and don't throttle

* Revert "Add a deadline for flushing cache contents on shutdown and don't throttle"

This reverts commit e405b4c4f4.

* Always flush the whole cache but don't sleep when shutting down
2026-01-06 16:58:39 +01:00
Safihre
ae30be382b Merge platform specific memory functions into single 2026-01-06 13:38:06 +01:00
SABnzbd Automation
13b10fd9bb Update translatable texts
[skip ci]
2026-01-06 11:46:06 +00:00
mnightingale
d9bb544caf Always enqueue when file_done (#3263) 2026-01-06 12:45:25 +01:00
SABnzbd Automation
bf2080068c Update translatable texts
[skip ci]
2026-01-06 09:03:40 +00:00
mnightingale
b4e8c80bc9 Implement Direct Write (#3236)
* Implement direct write

* Support direct_write changes at runtime

* Check sparse support when download_dir changes

* Fixes to reverting to append mode and add tests

* Single write path, remove truncate, improve tests, add test for append mode with out of order direct writes

* assert expected nzf.assembler_next_index

* bytes_written_sequentially assertions

* Slim tests and mock load_article as a dictionary

* More robust bytes_written_sequentially

* Worked but guard Python -1 semantics

* os.path.getsize silly

* Add test with force followed by append to gaps

* Split flush_cache into its own function so the loop does not need to clear the article variable

* Fewer private functions

* Extract article cache limit for waiting constant

* Move option back to specials

* Use Status.DELETED for clarity

* Use nzo.lock in articlecache

* Document why assembler_next_index increments

* Remove duplicated code from write

* load_data formatting

* Create files with the same permissions as with open(...)

* Options are callable

* Fix crash if direct writing from cache but has been deleted

* Fix crash in next_index check via article cache

* Fix assembler waiting for register_article and cache waiting for assembler to write

* Simplify flush_cache loop and only log once per second

* Document why we would leave the assembler when forced at the first not tried article

* When skipped we can't increment the next_index

* Rename bytes_written_sequentially to sequential_offset, improve comments and logic

* No need to check when the config changes; due to the runtime changes, any failure during assembly will disable it

* Remove unused constant

* Improve append triggering based on contiguous bytes ready to write to file and add a trigger to direct write

* Throttle downloader threads when direct writing out of order

* Clear ready_bytes when removed from queue

* Rework check_assembler_levels sleeping to have a deadline, be based on the assembler's actual pending bytes, and on whether delaying could have any impact

* Always write first articles if filenames are checked

* Rename force to allow_non_contiguous so it is clearer what it means

* Article is required

* Tweak delay triggers

* Fix for possible dictionary changed size during iteration

* postproc only gets the nzo

* Rename constants and remove redundant calculation

* For safety just key by nzf_id

* Not redundant because capped at 500M

* Tweak a little more

* Only delay if assembler is busy

* Remove unused constant and rename the remaining one

* Calculate if direct write is allowed when cache limit changes

* Allow direct writes to bypass trigger

* Avoid race to requeue

* Break up the queuing logic so it's understandable
2026-01-06 10:03:00 +01:00
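The Direct Write commit above describes writing decoded articles straight to their final offsets, including out of order. A minimal, generic sketch of that idea in plain Python (not SABnzbd's actual implementation; the function name and sizes here are illustrative):

```python
import os
import tempfile

def direct_write(path, total_size, parts):
    """Write (offset, bytes) parts to their final positions, in any order.

    Truncating to the full size first yields a sparse file on filesystems
    that support it, so out-of-order writes don't require rewriting data.
    """
    with open(path, "wb") as f:
        f.truncate(total_size)
        for offset, data in parts:
            f.seek(offset)
            f.write(data)

# Usage: parts arrive out of order, the file still assembles correctly.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
direct_write(tmp.name, 10, [(5, b"world"), (0, b"hello")])
with open(tmp.name, "rb") as f:
    assembled = f.read()
os.unlink(tmp.name)
```

This is also why the changeset checks sparse-file support when `download_dir` changes: without it, the initial truncate allocates the whole file up front.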
Safihre
33aa4f1199 Revert "OptionBool should return bool"
This reverts commit ecb36442d3.

It messes up the sabnzbd.ini reading/writing!
2026-01-05 15:58:54 +01:00
Safihre
ecb36442d3 OptionBool should return bool
Hmm why wasn't this the case?
2026-01-05 14:50:35 +01:00
Safihre
0bbe34242e Stop updating uvicorn branch 2026-01-05 09:51:55 +01:00
renovate[bot]
7c6abd9528 Update all dependencies (#3258)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 01:53:32 +00:00
Safihre
448c034f79 Simplify get_windows_memory 2026-01-03 22:03:26 +01:00
mnightingale
9d5cf9fc5b Fix binding outgoing ip test when ipv6 is available (#3257) 2026-01-02 15:53:48 +01:00
mnightingale
4f9d0fb7d4 Fix connection test failing when bad credentials are used (#3256) 2026-01-02 15:03:04 +01:00
mnightingale
240d5b4ff7 Fix failing downloader tests and make tests less fragile (#3254)
* Fix failing downloader tests and make tests less fragile

* Implement feedback

* Spelling and only sleep if necessary

* Grammar is hard
2026-01-02 15:02:34 +01:00
SABnzbd Automation
a2161ba89b Update translatable texts
[skip ci]
2026-01-02 11:50:29 +00:00
Safihre
68e193bf56 Use blocking writes instead of buffering (#3248) 2026-01-02 12:49:42 +01:00
SABnzbd Automation
b5dda7c52d Update translatable texts
[skip ci]
2025-12-30 08:24:04 +00:00
mnightingale
b6691003db Refactor RSS flow (#3247) 2025-12-30 09:23:20 +01:00
Safihre
ed655553c8 Pausing the queue with Force'd downloads doesn't let them download
Closes #3246
2025-12-29 12:37:44 +01:00
Safihre
316b96c653 Add context folder with documents for AI-tools 2025-12-29 12:12:58 +01:00
mnightingale
62401cba27 Simplify RSS rule evaluation (#3243) 2025-12-29 10:50:55 +01:00
renovate[bot]
3cabf44ce3 Update dependency pyinstaller-hooks-contrib to v2025.11 (#3244)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-29 08:55:14 +01:00
mnightingale
a637d218c4 Refactor preparing the rss config (#3242)
* Refactor preparing the rss config

* Defaults, bool, and iteration
2025-12-24 21:47:15 +01:00
123 changed files with 3418 additions and 1304 deletions


@@ -7,7 +7,7 @@
"schedule": [
"before 8am on Monday"
],
"baseBranches": ["develop", "feature/uvicorn"],
"baseBranches": ["develop"],
"pip_requirements": {
"fileMatch": [
"requirements.txt",


@@ -11,11 +11,11 @@ jobs:
- name: Black Code Formatter
uses: lgeiger/black-action@master
with:
# Tools folder excluded for now due to https://github.com/psf/black/issues/4963
args: >
SABnzbd.py
sabnzbd
scripts
tools
builder
builder/SABnzbd.spec
tests


@@ -1,19 +1,25 @@
Release Notes - SABnzbd 4.6.0 Beta 2
Release Notes - SABnzbd 5.0.0 Beta 1
=========================================================
This is the second beta release of version 4.6.
This is the first beta release of version 5.0.
## New features in 4.6.0
Due to several fundamental changes we decided to
not just call this 4.6 but promote it to 5.0!
## New features in 5.0.0
* Added support for NNTP Pipelining which eliminates idle waiting between
requests, significantly improving speeds on high-latency connections.
Read more here: https://sabnzbd.org/wiki/advanced/nntp-pipelining
* Dynamically increase Assembler limits on faster connections.
* Implemented Direct Write to optimize assembly of downloaded files.
Read more here: https://sabnzbd.org/wiki/advanced/direct-write
* Complete redesign of article cache.
* Improved disk speed measurement in Status window.
* Enable `verify_xff_header` by default.
* Reduce delays between jobs during post-processing.
* If a download only has `.nzb` files inside, the new downloads
will include the name of the original download.
* No longer show tracebacks in the browser, only in the logs.
* Dropped support for Python 3.8.
* Windows: Added Windows ARM (portable) release.
@@ -23,6 +29,7 @@ This is the second beta release of version 4.6.
* No error was shown in case NZB upload failed.
* Correct mobile layout if `Full Width` is enabled.
* Aborted Direct Unpack could result in no files being unpacked.
* Sorting of files inside jobs was inconsistent.
* Windows: Tray icon disappears after Explorer restart.
* macOS: Slow to start on some network setups.


@@ -236,9 +236,7 @@ def print_help():
def print_version():
print(
(
"""
print(("""
%s-%s
(C) Copyright 2007-2025 by The SABnzbd-Team (sabnzbd.org)
@@ -247,10 +245,7 @@ This is free software, and you are welcome to redistribute it
under certain conditions. It is licensed under the
GNU GENERAL PUBLIC LICENSE Version 2 or (at your option) any later version.
"""
% (sabnzbd.MY_NAME, sabnzbd.__version__)
)
)
""" % (sabnzbd.MY_NAME, sabnzbd.__version__)))
def daemonize():
@@ -870,7 +865,7 @@ def main():
elif opt in ("-t", "--templates"):
web_dir = arg
elif opt in ("-s", "--server"):
(web_host, web_port) = split_host(arg)
web_host, web_port = split_host(arg)
elif opt in ("-n", "--nobrowser"):
autobrowser = False
elif opt in ("-b", "--browser"):
@@ -1280,7 +1275,6 @@ def main():
"tools.encode.on": True,
"tools.gzip.on": True,
"tools.gzip.mime_types": mime_gzip,
"request.show_tracebacks": True,
"error_page.401": sabnzbd.panic.error_page_401,
"error_page.404": sabnzbd.panic.error_page_404,
}


@@ -18,7 +18,6 @@
import os
from constants import RELEASE_VERSION
# We need to call dmgbuild from command-line, so here we can setup how
if __name__ == "__main__":
# Check for DMGBuild


@@ -1,19 +1,19 @@
# Basic build requirements
# Note that not all sub-dependencies are listed, but only ones we know could cause trouble
pyinstaller==6.17.0
packaging==25.0
pyinstaller-hooks-contrib==2025.10
pyinstaller==6.18.0
packaging==26.0
pyinstaller-hooks-contrib==2026.0
altgraph==0.17.5
wrapt==2.0.1
setuptools==80.9.0
setuptools==80.10.2
# For the Windows build
pefile==2024.8.26; sys_platform == 'win32'
pywin32-ctypes==0.2.3; sys_platform == 'win32'
# For the macOS build
dmgbuild==1.6.6; sys_platform == 'darwin'
dmgbuild==1.6.7; sys_platform == 'darwin'
mac-alias==2.2.3; sys_platform == 'darwin'
macholib==1.16.4; sys_platform == 'darwin'
ds-store==1.3.2; sys_platform == 'darwin'
PyNaCl==1.6.1; sys_platform == 'darwin'
PyNaCl==1.6.2; sys_platform == 'darwin'

context/Download-flow.md Normal file

@@ -0,0 +1,39 @@
## Download flow (Downloader + NewsWrapper)
1. **Job ingestion**
- NZBs arrive via UI/API/URL; `urlgrabber.py` fetches remote NZBs, `nzbparser.py` turns them into `NzbObject`s, and `nzbqueue.NzbQueue` stores ordered jobs with priorities and categories.
2. **Queue to articles**
- When servers need work, `NzbQueue.get_articles` (called from `Server.get_article` in `downloader.py`) hands out batches of `Article`s per server, respecting retention, priority, and forced/paused items.
3. **Downloader setup**
- `Downloader` thread loads server configs (`config.get_servers`), instantiates `Server` objects (per host/port/SSL/threads), and spawns `NewsWrapper` instances per configured connection.
- A `selectors.DefaultSelector` watches all sockets; `BPSMeter` tracks throughput and speed limits; timers manage server penalties/restarts.
4. **Connection establishment (NewsWrapper.init_connect → NNTP.connect)**
- `Server.request_addrinfo` resolves fastest address; `NewsWrapper` builds an `NNTP` socket, wraps SSL if needed, sets non-blocking, and registers with the selector.
- First server greeting (200/201) is queued; `finish_connect` drives the login handshake (`AUTHINFO USER/PASS`) and handles temporary (480) or permanent (400/502) errors.
5. **Request scheduling & pipelining**
- `write()` chooses the next article command (`STAT/HEAD` for precheck, `BODY` or `ARTICLE` otherwise).
- Concurrency is limited by `server.pipelining_requests`; commands are queued and sent with `sock.sendall`, so there is no local send buffer.
- Sockets stay registered for `EVENT_WRITE`: without write readiness events, a temporarily full kernel send buffer could stall queued commands when there is nothing to read, so WRITE interest is needed to resume sending promptly.
6. **Receiving data**
- Selector events route to `process_nw_read`; `NewsWrapper.read` pulls bytes (SSL optimized via sabctools), parses NNTP responses, and calls `on_response`.
- Successful BODY/ARTICLE (220/222) updates per-server stats; missing/500 variants toggle capability flags (BODY/STAT support).
7. **Decoding and caching**
- `Downloader.decode` hands responses to `decoder.decode`, which yEnc/UU decodes, CRC-checks, and stores payloads in `ArticleCache` (memory or disk spill).
- Articles with DMCA/bad data trigger retry on other servers until `max_art_tries` is exceeded.
8. **Assembly to files**
- `Assembler` worker consumes decoded pieces, writes to the target file, updates CRC, and cleans admin markers. It guards disk space (`diskspace_check`) and schedules direct unpack or PAR2 handling when files finish.
9. **Queue bookkeeping**
- `NzbQueue.register_article` records success/failure; completed files advance NZF/NZO state. If all files done, the job moves to post-processing (`PostProcessor.process`), which runs `newsunpack`, scripts, sorting, etc.
10. **Control & resilience**
- Pausing/resuming (`Downloader.pause/resume`), bandwidth limiting, and sleep tuning happen in the main loop.
- Errors/timeouts lead to `reset_nw` (close socket, return article, maybe penalize server). Optional servers can be temporarily disabled; required ones schedule resumes.
- Forced disconnect/shutdown drains sockets, refreshes DNS, and exits cleanly.
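The scheduling and event-handling steps above (limited pipelining, selector registration, dropping WRITE interest when there is nothing to send) can be sketched with a self-contained selector loop. This is a simplified illustration, not SABnzbd's code: `MAX_PIPELINED` stands in for `server.pipelining_requests`, and a `socketpair` fakes the news server.

```python
import selectors
import socket

MAX_PIPELINED = 3  # stand-in for server.pipelining_requests

def run_pipelined(commands):
    """Send up to MAX_PIPELINED commands before waiting for replies."""
    client, server = socket.socketpair()  # fake NNTP server for the demo
    client.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(client, selectors.EVENT_READ | selectors.EVENT_WRITE)

    pending = list(commands)   # not yet sent
    in_flight = []             # sent, awaiting a response
    responses = []

    while pending or in_flight:
        for key, events in sel.select(timeout=1):
            if events & selectors.EVENT_WRITE:
                # Fill the pipeline without waiting for each reply.
                while pending and len(in_flight) < MAX_PIPELINED:
                    cmd = pending.pop(0)
                    client.sendall(cmd.encode() + b"\r\n")
                    in_flight.append(cmd)
                    server.sendall(b"220 ok\r\n")  # fake immediate reply
                if not pending:
                    # Nothing left to send: drop WRITE interest so an
                    # always-writable socket does not hot-loop the selector.
                    sel.modify(client, selectors.EVENT_READ)
            if events & selectors.EVENT_READ:
                # Responses come back in the order commands were sent.
                for line in client.recv(4096).decode().splitlines():
                    responses.append((in_flight.pop(0), line))

    sel.unregister(client)
    client.close()
    server.close()
    return responses
```

The real flow adds SSL, response parsing, and error recovery (`reset_nw`), but the shape is the same: WRITE readiness drives command submission, READ readiness drives `process_nw_read`-style handling.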

context/Repo-layout.md Normal file

@@ -0,0 +1,32 @@
## Repo layout
- Entry points & metadata
- `SABnzbd.py`: starts the app.
- `README.md` / `README.mkd`: release notes and overview.
- `requirements.txt`: runtime deps.
- Core application package `sabnzbd/`
- Download engine: `downloader.py` (main loop), `newswrapper.py` (NNTP connections), `urlgrabber.py`, `nzbqueue.py` (queue), `nzbparser.py` (parse NZB), `assembler.py` (writes decoded parts), `decoder.py` (yEnc/UU decode), `articlecache.py` (in-memory/on-disk cache).
- Post-processing: `newsunpack.py`, `postproc.py`, `directunpacker.py`, `sorting.py`, `deobfuscate_filenames.py`.
- Config/constants/utilities: `cfg.py`, `config.py`, `constants.py`, `misc.py`, `filesystem.py`, `encoding.py`, `lang.py`, `scheduler.py`, `notifier.py`, `emailer.py`, `rss.py`.
- UI plumbing: `interface.py`, `skintext.py`, `version.py`, platform helpers (`macosmenu.py`, `sabtray*.py`).
- Subpackages: `sabnzbd/nzb/` (NZB model objects), `sabnzbd/utils/` (helpers).
- Web interfaces & assets
- `interfaces/Glitter`, `interfaces/Config`, `interfaces/wizard`: HTML/JS/CSS skins.
- `icons/`: tray/web icons.
- `locale/`, `po/`, `tools/`: translation sources and helper scripts (`make_mo.py`, etc.).
- Testing & samples
- `tests/`: pytest suite plus `data/` fixtures and `test_utils/`.
- `scripts/`: sample post-processing hooks (`Sample-PostProc.*`).
- Packaging/build
- `builder/`: platform build scripts (DMG/EXE specs, `package.py`, `release.py`).
- Platform folders `win/`, `macos/`, `linux/`, `snap/`: installer or platform-specific assets.
- `admin/`, `builder/constants.py`, `licenses/`: release and licensing support files.
- Documentation
- Documentation website source is stored in the `sabnzbd.github.io` repo.
- That repo is most likely located one level up from this repo's root folder.
- Documentation is split per SABnzbd version, in the `wiki` folder.


@@ -188,6 +188,7 @@
</tr>
</table>
<p>$T('explain-apprise_enable')</p>
<p><a href="https://appriseit.com/" target="_blank">Apprise documentation</a></p>
<p>$T('version'): ${apprise.__version__}</p>
$show_cat_box('apprise')


@@ -42,8 +42,8 @@
<url type="faq">https://sabnzbd.org/wiki/faq</url>
<url type="contact">https://sabnzbd.org/live-chat.html</url>
<releases>
<release version="4.6.0" date="2025-12-24" type="stable">
<url type="details">https://github.com/sabnzbd/sabnzbd/releases/tag/4.6.0</url>
<release version="5.0.0" date="2026-03-01" type="stable">
<url type="details">https://github.com/sabnzbd/sabnzbd/releases/tag/5.0.0</url>
</release>
<release version="4.5.5" date="2025-10-24" type="stable">
<url type="details">https://github.com/sabnzbd/sabnzbd/releases/tag/4.5.5</url>


@@ -4,7 +4,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: team@sabnzbd.org\n"
"Language-Team: SABnzbd <team@sabnzbd.org>\n"


@@ -4,7 +4,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: team@sabnzbd.org\n"
"Language-Team: SABnzbd <team@sabnzbd.org>\n"
@@ -675,6 +675,11 @@ msgstr ""
msgid "%s is not writable with special character filenames. This can cause problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1558,6 +1563,14 @@ msgstr ""
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr ""
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr ""
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1584,14 +1597,6 @@ msgstr ""
msgid "RSS Feed %s was empty"
msgstr ""
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr ""
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr ""


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Czech (https://app.transifex.com/sabnzbd/teams/111101/cs/)\n"
@@ -731,6 +731,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Odmítnuto spojení z:"
@@ -1649,6 +1654,14 @@ msgstr ""
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Prázdný RSS záznam nalezen (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Nekompatibilní kanál"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1675,14 +1688,6 @@ msgstr ""
msgid "RSS Feed %s was empty"
msgstr "RSS kanál %s byl prázdný"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Nekompatibilní kanál"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Prázdný RSS záznam nalezen (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Zobrazit rozhraní"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Danish (https://app.transifex.com/sabnzbd/teams/111101/da/)\n"
@@ -769,6 +769,11 @@ msgid ""
msgstr ""
"%s er ikke skrivbar med filnavne med specialtegn. Dette kan give problemer."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Afviste forbindelse fra:"
@@ -1723,6 +1728,14 @@ msgstr "Fejl ved lukning af system"
msgid "Received a DBus exception %s"
msgstr "Modtog en DBus-undtagelse %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tom RSS post blev fundet (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Inkompatibel feed"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1749,14 +1762,6 @@ msgstr "Server %s bruger et upålideligt HTTPS-certifikat"
msgid "RSS Feed %s was empty"
msgstr "RSS Feed %s er tom"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Inkompatibel feed"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tom RSS post blev fundet (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Vis grænseflade"


@@ -20,7 +20,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: German (https://app.transifex.com/sabnzbd/teams/111101/de/)\n"
@@ -808,6 +808,11 @@ msgstr ""
"Dateinamen mit Umlaute können nicht in %s gespeichert werden. Dies kann zu "
"Problemen führen."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Abgelehnte Verbindung von:"
@@ -1779,6 +1784,14 @@ msgstr "Fehler beim Herunterfahren des Systems"
msgid "Received a DBus exception %s"
msgstr "DBus-Ausnahmefehler empfangen %s "
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Leerer RSS-Feed gefunden: %s"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Inkompatibeler RSS-Feed"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1805,14 +1818,6 @@ msgstr "Der Server %s nutzt ein nicht vertrauenswürdiges HTTPS-Zertifikat"
msgid "RSS Feed %s was empty"
msgstr "RSS-Feed %s war leer"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Inkompatibeler RSS-Feed"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Leerer RSS-Feed gefunden: %s"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Interface anzeigen"


@@ -9,7 +9,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Spanish (https://app.transifex.com/sabnzbd/teams/111101/es/)\n"
@@ -792,6 +792,11 @@ msgstr ""
"%s no permite escribir nombres de archivo con caracteres especiales. Esto "
"puede causar problemas."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Conexión rechazada de:"
@@ -1767,6 +1772,14 @@ msgstr "Error al apagarel sistema"
msgid "Received a DBus exception %s"
msgstr "Se ha recibido una excepción DBus %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Entrada RSS vacía (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Canal Incorrecto"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1795,14 +1808,6 @@ msgstr "El servidor %s utiliza un certificado HTTPS no fiable"
msgid "RSS Feed %s was empty"
msgstr "El canal RSS %s estaba vacío"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Canal Incorrecto"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Entrada RSS vacía (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Mostrar interfaz"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Finnish (https://app.transifex.com/sabnzbd/teams/111101/fi/)\n"
@@ -737,6 +737,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1676,6 +1681,14 @@ msgstr "Virhe sammutettaessa järjestelmää"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tyhjä RSS kohde löytyi (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Puutteellinen syöte"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1702,14 +1715,6 @@ msgstr "Palvelin %s käyttää epäluotettavaa HTTPS sertifikaattia"
msgid "RSS Feed %s was empty"
msgstr "RSS syöte %s oli tyhjä"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Puutteellinen syöte"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tyhjä RSS kohde löytyi (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Näytä käyttöliittymä"


@@ -3,13 +3,13 @@
#
# Translators:
# Safihre <safihre@sabnzbd.org>, 2025
# Fred L <88com88@gmail.com>, 2025
# Fred L <88com88@gmail.com>, 2026
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Fred L <88com88@gmail.com>, 2025\n"
"Last-Translator: Fred L <88com88@gmail.com>, 2026\n"
"Language-Team: French (https://app.transifex.com/sabnzbd/teams/111101/fr/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
@@ -161,6 +161,8 @@ msgstr ""
#: sabnzbd/__init__.py
msgid "Windows ARM version of SABnzbd is available from our Downloads page!"
msgstr ""
"La version Windows ARM de SABnzbd est disponible depuis notre page "
"Téléchargements!"
#. Warning message
#: sabnzbd/__init__.py
@@ -797,6 +799,13 @@ msgstr ""
"Le fichier %s n'est pas inscriptible à cause des caractères spéciaux dans le"
" nom. Cela peut causer des problèmes."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
"%s ne prend pas en charge les fichiers fragmentés. Désactivation du mode "
"d'écriture directe."
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Connexion refusée de:"
@@ -1765,6 +1774,14 @@ msgstr "Erreur lors de l'arrêt du système"
msgid "Received a DBus exception %s"
msgstr "Exception DBus reçue %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Entrée vide de flux RSS trouvée (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Flux incompatible"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1792,14 +1809,6 @@ msgstr "Le serveur %s utilise un certificat de sécurité HTTPS non authentifié
msgid "RSS Feed %s was empty"
msgstr "Le flux RSS %s était vide"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Flux incompatible"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Entrée vide de flux RSS trouvée (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Afficher l'interface"
@@ -3859,7 +3868,7 @@ msgstr "Activer"
#: sabnzbd/skintext.py
msgid "Articles per request"
msgstr ""
msgstr "Articles par demande"
#: sabnzbd/skintext.py
msgid ""
@@ -3867,6 +3876,9 @@ msgid ""
"first.<br />This can improve download speeds, especially on connections with"
" higher latency."
msgstr ""
"Demandez plusieurs articles par connexion sans attendre chaque réponse.<br "
"/>Cela peut améliorer les vitesses de téléchargement, en particulier sur les"
" connexions à latence élevée."
#. Button: Remove server
#: sabnzbd/skintext.py


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Hebrew (https://app.transifex.com/sabnzbd/teams/111101/he/)\n"
@@ -750,6 +750,11 @@ msgid ""
"problems."
msgstr "%s אינו בר־כתיבה עם שמות קבצים עם תו מיוחד. זה יכול לגרום לבעיות."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "חיבור מסורב מאת:"
@@ -1695,6 +1700,14 @@ msgstr "שגיאה בזמן כיבוי מערכת"
msgid "Received a DBus exception %s"
msgstr "חריגת DBus התקבלה %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "כניסת RSS ריקה נמצאה (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "הזנה בלתי תואמת"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1721,14 +1734,6 @@ msgstr "השרת %s משתמש בתעודת HTTPS בלתי מהימנה"
msgid "RSS Feed %s was empty"
msgstr "הזנת RSS %s הייתה ריקה"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "הזנה בלתי תואמת"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "כניסת RSS ריקה נמצאה (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "הראה ממשק"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Italian (https://app.transifex.com/sabnzbd/teams/111101/it/)\n"
@@ -788,6 +788,11 @@ msgstr ""
"%s non è scrivibile con nomi di file con caratteri speciali. Questo può "
"causare problemi."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Connessione rifiutata da:"
@@ -1746,6 +1751,14 @@ msgstr "Errore durante lo spegnimento del sistema"
msgid "Received a DBus exception %s"
msgstr "Ricevuta un'eccezione DBus %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Trovata voce RSS vuota (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Feed incompatibile"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1772,14 +1785,6 @@ msgstr "Il server %s utilizza un certificato HTTPS non attendibile"
msgid "RSS Feed %s was empty"
msgstr "Il feed RSS %s era vuoto"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Feed incompatibile"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Trovata voce RSS vuota (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Mostra interfaccia"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Norwegian Bokmål (https://app.transifex.com/sabnzbd/teams/111101/nb/)\n"
@@ -734,6 +734,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1674,6 +1679,14 @@ msgstr "Feil under avslutting av systemet"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tom RSS post funnet (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Ukompatibel nyhetsstrøm"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1700,14 +1713,6 @@ msgstr "Server %s bruker et usikkert HTTP sertifikat"
msgid "RSS Feed %s was empty"
msgstr "RSS-kilde %s var tom"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Ukompatibel nyhetsstrøm"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tom RSS post funnet (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Vis grensesnitt"


@@ -8,7 +8,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/sabnzbd/teams/111101/nl/)\n"
@@ -791,6 +791,11 @@ msgstr ""
"Het is niet mogelijk bestanden met speciale tekens op te slaan in %s. Dit "
"geeft mogelijk problemen bij het verwerken van downloads."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Verbinding geweigerd van: "
@@ -1749,6 +1754,14 @@ msgstr "Fout bij het afsluiten van het systeem"
msgid "Received a DBus exception %s"
msgstr "DBus foutmelding %s "
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Lege RSS-feed gevonden (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Ongeschikte RSS-feed"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1775,14 +1788,6 @@ msgstr "Server %s gebruikt een onbetrouwbaar HTTPS-certificaat"
msgid "RSS Feed %s was empty"
msgstr "RSS-feed %s is leeg"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Ongeschikte RSS-feed"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Lege RSS-feed gevonden (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Toon webinterface"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Polish (https://app.transifex.com/sabnzbd/teams/111101/pl/)\n"
@@ -737,6 +737,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1683,6 +1688,14 @@ msgstr "Wyłączenie systemu nie powiodło się"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Znaleziono pusty wpis RSS (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Niekompatybilny kanał"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1709,14 +1722,6 @@ msgstr "Serwer %s używa niezaufanego certyfikatu HTTPS"
msgid "RSS Feed %s was empty"
msgstr "Kanał RSS %s był pusty"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Niekompatybilny kanał"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Znaleziono pusty wpis RSS (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Pokaż interfejs"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Portuguese (Brazil) (https://app.transifex.com/sabnzbd/teams/111101/pt_BR/)\n"
@@ -749,6 +749,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1693,6 +1698,14 @@ msgstr "Erro ao desligar o sistema"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Entrada RSS vazia encontrada (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Feed incompatível"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1720,14 +1733,6 @@ msgstr "Servidor %s usa um certificado HTTPS não confiável"
msgid "RSS Feed %s was empty"
msgstr "O feed RSS %s estava vazio"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Feed incompatível"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Entrada RSS vazia encontrada (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Exibir interface"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Romanian (https://app.transifex.com/sabnzbd/teams/111101/ro/)\n"
@@ -757,6 +757,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1712,6 +1717,14 @@ msgstr "Eroare la oprirea sistemului"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Valoare RSS găsită a fost goală (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Flux RSS incompatibil"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1738,14 +1751,6 @@ msgstr "Serverul %s utilizează un certificat HTTPS nesigur"
msgid "RSS Feed %s was empty"
msgstr "Fluxul RSS %s a fost gol"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Flux RSS incompatibil"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Valoare RSS găsită a fost goală (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Arată interfața"


@@ -3,12 +3,13 @@
#
# Translators:
# Safihre <safihre@sabnzbd.org>, 2023
# ST02, 2026
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Last-Translator: ST02, 2026\n"
"Language-Team: Russian (https://app.transifex.com/sabnzbd/teams/111101/ru/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
@@ -24,7 +25,7 @@ msgstr "Предупреждение"
#. Notification
#: SABnzbd.py, sabnzbd/notifier.py
msgid "Error"
msgstr ""
msgstr "Ошибка"
#. Error message
#: SABnzbd.py
@@ -88,7 +89,7 @@ msgstr ""
#. Error message
#: SABnzbd.py
msgid "HTTP and HTTPS ports cannot be the same"
msgstr ""
msgstr "HTTP и HTTPS порты не могут быть одинаковыми"
#. Warning message
#: SABnzbd.py
@@ -103,12 +104,12 @@ msgstr "HTTPS отключён, поскольку отсутствуют фай
#. Warning message
#: SABnzbd.py
msgid "Disabled HTTPS because of invalid CERT and KEY files"
msgstr ""
msgstr "HTTPS отключён, поскольку файлы CERT и KEY недействительны"
#. Error message
#: SABnzbd.py
msgid "Failed to start web-interface: "
msgstr ""
msgstr "Не удалось запустить веб-интерфейс:"
#: SABnzbd.py
msgid "SABnzbd %s started"
@@ -306,7 +307,7 @@ msgstr ""
#: sabnzbd/assembler.py
msgid "Aborted, encryption detected"
msgstr ""
msgstr "Прервано, обнаружено шифрование"
#. Warning message
#: sabnzbd/assembler.py
@@ -319,7 +320,7 @@ msgstr ""
#: sabnzbd/assembler.py
msgid "Aborted, unwanted extension detected"
msgstr ""
msgstr "Прервано, обнаружено нежелательное расширение"
#. Warning message
#: sabnzbd/assembler.py
@@ -348,7 +349,7 @@ msgstr ""
#: sabnzbd/bpsmeter.py
msgid "Downloading resumed after quota reset"
msgstr ""
msgstr "Загрузка возобновилась после сброса квоты"
#: sabnzbd/cfg.py, sabnzbd/interface.py
msgid "Incorrect parameter"
@@ -516,7 +517,7 @@ msgstr "Не удаётся прочитать наблюдаемую папку
#: sabnzbd/downloader.py
msgid "Resuming"
msgstr ""
msgstr "Возобновление"
#. PP status - Priority pick list
#: sabnzbd/downloader.py, sabnzbd/macosmenu.py, sabnzbd/sabtray.py,
@@ -733,6 +734,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1676,6 +1682,14 @@ msgstr "Не удалось завершить работу системы"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Обнаружена пустая запись RSS (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Несовместимая лента"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1702,14 +1716,6 @@ msgstr ""
msgid "RSS Feed %s was empty"
msgstr "RSS-лента %s была пустой"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Несовместимая лента"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Обнаружена пустая запись RSS (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Показать интерфейс"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Serbian (https://app.transifex.com/sabnzbd/teams/111101/sr/)\n"
@@ -731,6 +731,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1669,6 +1674,14 @@ msgstr "Greška pri gašenju sistema"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Nađen prazan RSS unos (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Некомпатибилан Фид"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1695,14 +1708,6 @@ msgstr "Server %s koristi nepouzdan HTTPS sertifikat"
msgid "RSS Feed %s was empty"
msgstr "RSS фид %s је празан"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Некомпатибилан Фид"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Nađen prazan RSS unos (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Pokaži interfejs"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2023\n"
"Language-Team: Swedish (https://app.transifex.com/sabnzbd/teams/111101/sv/)\n"
@@ -731,6 +731,11 @@ msgid ""
"problems."
msgstr ""
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr ""
@@ -1675,6 +1680,14 @@ msgstr "Fel uppstod då systemet skulle stängas"
msgid "Received a DBus exception %s"
msgstr ""
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tom RSS post hittades (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Inkompatibel feed"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1701,14 +1714,6 @@ msgstr "Server %s använder ett otillförlitlig HTTPS-certifikat"
msgid "RSS Feed %s was empty"
msgstr "RSS-flödet %s var tomt"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Inkompatibel feed"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Tom RSS post hittades (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Visa gränssnitt"


@@ -4,13 +4,13 @@
# Translators:
# Taylan Tatlı, 2025
# Safihre <safihre@sabnzbd.org>, 2025
# mauron, 2025
# mauron, 2026
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: mauron, 2025\n"
"Last-Translator: mauron, 2026\n"
"Language-Team: Turkish (https://app.transifex.com/sabnzbd/teams/111101/tr/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
@@ -154,7 +154,7 @@ msgstr ""
#. Warning message
#: sabnzbd/__init__.py
msgid "Windows ARM version of SABnzbd is available from our Downloads page!"
msgstr ""
msgstr "SABnzbd'nin Windows ARM sürümü İndirmeler sayfamızda mevcuttur!"
#. Warning message
#: sabnzbd/__init__.py
@@ -782,6 +782,13 @@ msgid ""
msgstr ""
"%s özel karakterli dosya isimleri ile yazılamıyor. Bu, sorun oluşturabilir."
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
"%s aralıklı dosyaları desteklememektedir. Doğrudan yazma kipi devre dışı "
"bırakılıyor."
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "Şuradan bağlantı reddedildi:"
@@ -1738,6 +1745,14 @@ msgstr "Sistemin kapatılması esnasında hata"
msgid "Received a DBus exception %s"
msgstr "Bir DBUS istisnası alındı %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Boş RSS girdisi bulundu (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Uyumsuz besleme"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1764,14 +1779,6 @@ msgstr "%s sunucusu güvenilmez bir HTTPS sertifikası kullanıyor"
msgid "RSS Feed %s was empty"
msgstr "%s RSS Beslemesi boştu"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "Uyumsuz besleme"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "Boş RSS girdisi bulundu (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "Arayüzü göster"
@@ -3805,7 +3812,7 @@ msgstr "Etkinleştir"
#: sabnzbd/skintext.py
msgid "Articles per request"
msgstr ""
msgstr "Talep başı makale"
#: sabnzbd/skintext.py
msgid ""
@@ -3813,6 +3820,9 @@ msgid ""
"first.<br />This can improve download speeds, especially on connections with"
" higher latency."
msgstr ""
"Her bir cevabı beklemeden bağlantı başına birden fazla makale talep et.<br "
"/>Bu, indirme hızlarını bilhassa yüksek gecikmeli bağlantılarda "
"arttırabilir."
#. Button: Remove server
#: sabnzbd/skintext.py


@@ -8,7 +8,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:49+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Chinese (China) (https://app.transifex.com/sabnzbd/teams/111101/zh_CN/)\n"
@@ -731,6 +731,11 @@ msgid ""
"problems."
msgstr "%s 不可写入带有特殊字符的文件名。这可能会导致问题。"
#. Warning message
#: sabnzbd/filesystem.py
msgid "%s does not support sparse files. Disabling direct write mode."
msgstr ""
#: sabnzbd/interface.py
msgid "Refused connection from:"
msgstr "拒绝来自以下的连接:"
@@ -1665,6 +1670,14 @@ msgstr "关闭系统时出错"
msgid "Received a DBus exception %s"
msgstr "收到 DBus 异常 %s"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "发现空的 RSS 条目 (%s)"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "feed 不兼容"
#. Error message
#: sabnzbd/rss.py
msgid "Incorrect RSS feed description \"%s\""
@@ -1691,14 +1704,6 @@ msgstr "服务器 %s 使用的 HTTPS 证书不受信任"
msgid "RSS Feed %s was empty"
msgstr "RSS Feed %s 为空"
#: sabnzbd/rss.py
msgid "Incompatible feed"
msgstr "feed 不兼容"
#: sabnzbd/rss.py
msgid "Empty RSS entry found (%s)"
msgstr "发现空的 RSS 条目 (%s)"
#: sabnzbd/sabtray.py, sabnzbd/sabtraylinux.py
msgid "Show interface"
msgstr "显示界面"


@@ -4,7 +4,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: team@sabnzbd.org\n"
"Language-Team: SABnzbd <team@sabnzbd.org>\n"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Czech (https://app.transifex.com/sabnzbd/teams/111101/cs/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Danish (https://app.transifex.com/sabnzbd/teams/111101/da/)\n"


@@ -8,7 +8,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: German (https://app.transifex.com/sabnzbd/teams/111101/de/)\n"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Spanish (https://app.transifex.com/sabnzbd/teams/111101/es/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/sabnzbd/teams/111101/fi/)\n"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: French (https://app.transifex.com/sabnzbd/teams/111101/fr/)\n"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Hebrew (https://app.transifex.com/sabnzbd/teams/111101/he/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Italian (https://app.transifex.com/sabnzbd/teams/111101/it/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Norwegian Bokmål (https://app.transifex.com/sabnzbd/teams/111101/nb/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/sabnzbd/teams/111101/nl/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Polish (https://app.transifex.com/sabnzbd/teams/111101/pl/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Portuguese (Brazil) (https://app.transifex.com/sabnzbd/teams/111101/pt_BR/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Romanian (https://app.transifex.com/sabnzbd/teams/111101/ro/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Russian (https://app.transifex.com/sabnzbd/teams/111101/ru/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Serbian (https://app.transifex.com/sabnzbd/teams/111101/sr/)\n"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Swedish (https://app.transifex.com/sabnzbd/teams/111101/sv/)\n"


@@ -7,7 +7,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Turkish (https://app.transifex.com/sabnzbd/teams/111101/tr/)\n"


@@ -6,7 +6,7 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-4.6.0\n"
"Project-Id-Version: SABnzbd-5.0.0\n"
"PO-Revision-Date: 2020-06-27 15:56+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>, 2025\n"
"Language-Team: Chinese (China) (https://app.transifex.com/sabnzbd/teams/111101/zh_CN/)\n"


@@ -1,10 +1,11 @@
# Main requirements
# Note that not all sub-dependencies are listed, but only ones we know could cause trouble
apprise==1.9.6
sabctools==9.1.0
apprise==1.9.7
sabctools==9.3.1
CT3==3.4.0.post5
cffi==2.0.0
pycparser==2.23
pycparser # Version-less for Python 3.9 and below
pycparser==3.0; python_version > '3.9'
feedparser==6.0.12
configobj==5.0.9
cheroot==11.1.2
@@ -61,14 +62,14 @@ requests==2.32.5
requests-oauthlib==2.0.0
PyYAML==6.0.3
markdown # Version-less for Python 3.9 and below
markdown==3.10; python_version > '3.9'
markdown==3.10.1; python_version > '3.9'
paho-mqtt==1.6.1 # Pinned, newer versions don't work with AppRise yet
# Requests Requirements
charset_normalizer==3.4.4
idna==3.11
urllib3==2.6.2
certifi==2025.11.12
urllib3==2.6.3
certifi==2026.1.4
oauthlib==3.3.1
PyJWT==2.10.1
blinker==1.9.0
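The split pins above (a bare `pycparser` for Python 3.9 and below, a versioned one gated on `python_version > '3.9'`) rely on PEP 508 environment markers, which pip evaluates against the running interpreter. A minimal stand-in for that check using only the standard library (the real marker grammar is richer and does proper version comparison, not string comparison):

```python
import sys

# Rough equivalent of the marker "python_version > '3.9'": pip compares
# parsed version components, so (3, 10) correctly sorts after (3, 9).
def marker_matches(min_exclusive=(3, 9)):
    return sys.version_info[:2] > min_exclusive

# On Python 3.10+ the versioned pin applies; on 3.9 the bare one does.
print(marker_matches())
```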

View File

@@ -249,6 +249,7 @@ def initialize(pause_downloader=False, clean_up=False, repair=0):
    # Set call backs for Config items
    cfg.cache_limit.callback(cfg.new_limit)
    cfg.direct_write.callback(cfg.new_direct_write)
    cfg.web_host.callback(cfg.guard_restart)
    cfg.web_port.callback(cfg.guard_restart)
    cfg.web_dir.callback(cfg.guard_restart)
@@ -303,6 +304,7 @@ def initialize(pause_downloader=False, clean_up=False, repair=0):
    sabnzbd.NzbQueue.read_queue(repair)
    sabnzbd.Scheduler.analyse(pause_downloader)
    sabnzbd.ArticleCache.new_limit(cfg.cache_limit.get_int())
    sabnzbd.Assembler.new_limit(sabnzbd.ArticleCache.cache_info().cache_limit)

    logging.info("All processes started")
    sabnzbd.RESTART_REQ = False
@@ -315,6 +317,9 @@ def start():
        logging.debug("Starting postprocessor")
        sabnzbd.PostProcessor.start()

        logging.debug("Starting article cache")
        sabnzbd.ArticleCache.start()

        logging.debug("Starting assembler")
        sabnzbd.Assembler.start()
@@ -383,6 +388,13 @@ def halt():
        except Exception:
            pass

        logging.debug("Stopping article cache")
        sabnzbd.ArticleCache.stop()
        try:
            sabnzbd.ArticleCache.join(timeout=3)
        except Exception:
            pass

        logging.debug("Stopping postprocessor")
        sabnzbd.PostProcessor.stop()
        try:
@@ -500,7 +512,7 @@ def delayed_startup_actions():
            logging.debug("Completed Download Folder %s is not on FAT", complete_dir)

    if filesystem.directory_is_writable(sabnzbd.cfg.download_dir.get_path()):
        filesystem.check_filesystem_capabilities(sabnzbd.cfg.download_dir.get_path())
        filesystem.check_filesystem_capabilities(sabnzbd.cfg.download_dir.get_path(), is_download_dir=True)
    if filesystem.directory_is_writable(sabnzbd.cfg.complete_dir.get_path()):
        filesystem.check_filesystem_capabilities(sabnzbd.cfg.complete_dir.get_path())
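The `halt()` hunk above shuts the new ArticleCache thread down with the same stop-then-join-with-timeout idiom used for the other workers, so a wedged thread cannot block shutdown indefinitely. A toy illustration of that idiom (names here are invented for the demo, not SABnzbd's):

```python
import threading

class Worker(threading.Thread):
    """Minimal worker that exits as soon as it is told to stop."""

    def __init__(self):
        super().__init__()
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def run(self):
        # Blocks until stop() is called, standing in for a real work loop
        self._stop_event.wait()

w = Worker()
w.start()
w.stop()
w.join(timeout=3)  # bounded wait, mirroring ArticleCache.join(timeout=3)
```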


@@ -20,6 +20,7 @@ sabnzbd.api - api
"""
import os
import sys
import logging
import re
import gc
@@ -80,6 +81,7 @@ from sabnzbd.misc import (
    clean_comma_separated_list,
    match_str,
    bool_conv,
    get_platform_description,
)
from sabnzbd.filesystem import diskspace, get_ext, clip_path, remove_all, list_scripts, purge_log_files, pathbrowser
from sabnzbd.encoding import xml_name, utob
@@ -690,9 +692,16 @@ LOG_HASH_RE = re.compile(rb"([a-zA-Z\d]{25})", re.I)
def _api_showlog(name: str, kwargs: dict[str, Union[str, list[str]]]) -> bytes:
    """Fetch the INI and the log-data and add a message at the top"""
    log_data = b"--------------------------------\n\n"
    log_data += b"The log includes a copy of your sabnzbd.ini with\nall usernames, passwords and API-keys removed."
    log_data += b"\n\n--------------------------------\n"
    # Build header with version and environment info
    header = "--------------------------------\n"
    header += f"SABnzbd version: {sabnzbd.__version__}\n"
    header += f"Commit: {sabnzbd.__baseline__}\n"
    header += f"Python-version: {sys.version}\n"
    header += f"Platform: {get_platform_description()}\n"
    header += "--------------------------------\n\n"
    header += "The log includes a copy of your sabnzbd.ini with\nall usernames, passwords and API-keys removed."
    header += "\n\n--------------------------------\n"
    log_data = header.encode("utf-8")

    if sabnzbd.LOGFILE and os.path.exists(sabnzbd.LOGFILE):
        with open(sabnzbd.LOGFILE, "rb") as f:
@@ -1403,9 +1412,11 @@ def test_nntp_server_dict(kwargs: dict[str, Union[str, list[str]]]) -> tuple[boo
    try:
        nw.init_connect()
        while not nw.connected:
        while test_server.active:
            nw.write()
            nw.read(on_response=on_response)
            if nw.ready:
                break
    except socket.timeout:
        if port != 119 and not ssl:
@@ -1510,10 +1521,10 @@ def build_status(calculate_performance: bool = False, skip_dashboard: bool = Fal
    info["servers"] = []

    # Servers-list could be modified during iteration, so we need a copy
    for server in sabnzbd.Downloader.servers[:]:
        activeconn = sum(nw.connected for nw in server.idle_threads.copy())
        activeconn = sum(nw.ready for nw in server.idle_threads.copy())
        serverconnections = []
        for nw in server.busy_threads.copy():
            if nw.connected:
            if nw.ready:
                activeconn += 1
                if article := nw.article:
                    serverconnections.append(
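The reworked `_api_showlog` builds the banner as a regular string and encodes it once at the end, instead of concatenating bytes literals. A self-contained sketch of the same pattern (the version and commit defaults here are placeholders, not SABnzbd attributes):

```python
import sys
import platform

def build_log_header(version="5.0.0Beta1", commit="unknown"):
    # Assemble the plain-text banner first, then encode once for the log body
    header = "--------------------------------\n"
    header += f"SABnzbd version: {version}\n"
    header += f"Commit: {commit}\n"
    header += f"Python-version: {sys.version}\n"
    header += f"Platform: {platform.platform()}\n"
    header += "--------------------------------\n\n"
    header += "The log includes a copy of your sabnzbd.ini with\n"
    header += "all usernames, passwords and API-keys removed."
    header += "\n\n--------------------------------\n"
    return header.encode("utf-8")

log_data = build_log_header()
```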


@@ -22,26 +22,39 @@ sabnzbd.articlecache - Article cache handling
import logging
import threading
import struct
from typing import Collection
import time
from typing import Collection, Optional
import sabnzbd
import sabnzbd.cfg as cfg
from sabnzbd.decorators import synchronized
from sabnzbd.constants import GIGI, ANFO, ASSEMBLER_WRITE_THRESHOLD
from sabnzbd.nzb import Article
from sabnzbd.constants import (
GIGI,
ANFO,
ARTICLE_CACHE_NON_CONTIGUOUS_FLUSH_PERCENTAGE,
)
from sabnzbd.nzb import Article, NzbFile
from sabnzbd.misc import to_units
# Operations on the article table are handled via try/except.
# The counters need to be made atomic to ensure consistency.
ARTICLE_COUNTER_LOCK = threading.RLock()
_SECONDS_BETWEEN_FLUSHES = 0.5
class ArticleCache:
class ArticleCache(threading.Thread):
def __init__(self):
super().__init__()
self.shutdown = False
self.__direct_write: bool = bool(cfg.direct_write())
self.__cache_limit_org = 0
self.__cache_limit = 0
self.__cache_size = 0
self.__article_table: dict[Article, bytes] = {} # Dict of buffered articles
self.assembler_write_trigger: int = 1
self.__article_table: dict[Article, bytearray] = {} # Dict of buffered articles
self.__cache_size_cv: threading.Condition = threading.Condition(ARTICLE_COUNTER_LOCK)
self.__last_flush: float = 0
self.__non_contiguous_trigger: int = 0 # Force flush trigger
# On 32 bit we only allow the user to set 1GB
# For 64 bit we allow up to 4GB, in case somebody wants that
@@ -49,9 +62,62 @@ class ArticleCache:
if sabnzbd.MACOS or sabnzbd.WINDOWS or (struct.calcsize("P") * 8) == 64:
self.__cache_upper_limit = 4 * GIGI
def cache_info(self):
return ANFO(len(self.__article_table), abs(self.__cache_size), self.__cache_limit_org)
def change_direct_write(self, direct_write: bool) -> None:
self.__direct_write = direct_write and self.__cache_limit > 1
def stop(self):
self.shutdown = True
with self.__cache_size_cv:
self.__cache_size_cv.notify_all()
def should_flush(self) -> bool:
"""
Should we flush the cache?
Only if direct write is enabled, and either cache usage exceeds the
non-contiguous flush trigger, or there are no active jobs and the
cache is not empty.
"""
return (
self.__direct_write
and self.__cache_limit
and (
self.__cache_size > self.__non_contiguous_trigger
or self.__cache_size
and sabnzbd.Downloader.no_active_jobs()
)
)
def flush_cache(self) -> None:
"""In direct_write mode flush cache contents to file"""
forced: set[NzbFile] = set()
for article in self.__article_table.copy():
if not article.can_direct_write or article.nzf in forced:
continue
forced.add(article.nzf)
if time.monotonic() - self.__last_flush > 1:
logging.debug("Forcing write of %s", article.nzf.filepath)
sabnzbd.Assembler.process(article.nzf.nzo, article.nzf, allow_non_contiguous=True, article=article)
self.__last_flush = time.monotonic()
def run(self):
while True:
with self.__cache_size_cv:
self.__cache_size_cv.wait_for(
lambda: self.shutdown or self.should_flush(),
timeout=5.0,
)
if self.shutdown:
break
# Could be reached by timeout when paused and no further articles arrive
with self.__cache_size_cv:
if not self.should_flush():
continue
self.flush_cache()
time.sleep(_SECONDS_BETWEEN_FLUSHES)
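The run loop above replaces the old "sleep when cache is full" approach with a `Condition.wait_for` that is woken whenever the cache counters change. A minimal sketch of that wait/flush pattern in isolation (names and the threshold are illustrative, not SABnzbd's):

```python
import threading
import time

class FlushWorker(threading.Thread):
    """Worker that sleeps on a condition variable and flushes when woken."""

    def __init__(self):
        super().__init__(daemon=True)
        self.shutdown = False
        self.pending = 0
        self.cv = threading.Condition()

    def add(self, n):
        with self.cv:
            self.pending += n
            # Wake the worker so it can re-evaluate the predicate
            self.cv.notify_all()

    def stop(self):
        with self.cv:
            self.shutdown = True
            self.cv.notify_all()

    def run(self):
        while True:
            with self.cv:
                # wait_for re-checks the predicate on every notify and on timeout,
                # so a notify sent before we started waiting is never lost
                self.cv.wait_for(lambda: self.shutdown or self.pending > 10, timeout=5.0)
                if self.shutdown:
                    break
                if self.pending <= 10:
                    continue  # woken by timeout without enough work
                self.pending = 0  # stand-in for the actual flush
```

The timeout mirrors the 5-second fallback in the diff: even with no notifications (e.g. paused download), the worker periodically re-checks whether a flush is due.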
def cache_info(self):
return ANFO(len(self.__article_table), abs(self.__cache_size), self.__cache_limit)
@synchronized(ARTICLE_COUNTER_LOCK)
def new_limit(self, limit: int):
"""Called when cache limit changes"""
self.__cache_limit_org = limit
@@ -59,31 +125,32 @@ class ArticleCache:
self.__cache_limit = self.__cache_upper_limit
else:
self.__cache_limit = min(limit, self.__cache_upper_limit)
# Set assembler_write_trigger to be the equivalent of ASSEMBLER_WRITE_THRESHOLD %
# of the total cache, assuming an article size of 750 000 bytes
self.assembler_write_trigger = int(self.__cache_limit * ASSEMBLER_WRITE_THRESHOLD / 100 / 750_000) + 1
logging.debug(
"Assembler trigger = %d",
self.assembler_write_trigger,
)
self.__non_contiguous_trigger = self.__cache_limit * ARTICLE_CACHE_NON_CONTIGUOUS_FLUSH_PERCENTAGE
if self.__cache_limit:
logging.debug("Article cache trigger:%s", to_units(self.__non_contiguous_trigger))
self.change_direct_write(cfg.direct_write())
@synchronized(ARTICLE_COUNTER_LOCK)
def reserve_space(self, data_size: int):
def reserve_space(self, data_size: int) -> bool:
"""Reserve space in the cache"""
self.__cache_size += data_size
if (usage := self.__cache_size + data_size) > self.__cache_limit:
return False
self.__cache_size = usage
self.__cache_size_cv.notify_all()
return True
@synchronized(ARTICLE_COUNTER_LOCK)
def free_reserved_space(self, data_size: int):
"""Remove previously reserved space"""
self.__cache_size -= data_size
self.__cache_size_cv.notify_all()
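`reserve_space` now rejects a reservation that would exceed the limit instead of letting the counter go over and checking afterwards. A sketch of that reserve-or-refuse pattern (illustrative class, not the SABnzbd API):

```python
import threading

class SpaceTracker:
    """Reserve space under a lock; refuse instead of oversubscribing."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self.lock = threading.RLock()

    def reserve(self, n):
        with self.lock:
            if self.used + n > self.limit:
                return False  # caller falls back to writing to disk
            self.used += n
            return True

    def release(self, n):
        with self.lock:
            self.used -= n
```

Because the check and the increment happen under one lock, two threads can never jointly push `used` past `limit`.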
@synchronized(ARTICLE_COUNTER_LOCK)
def space_left(self) -> bool:
"""Is there space left in the set limit?"""
return self.__cache_size < self.__cache_limit
def save_article(self, article: Article, data: bytes):
def save_article(self, article: Article, data: bytearray):
"""Save article in cache, either memory or disk"""
nzo = article.nzf.nzo
# Skip if already post-processing or fully finished
@@ -91,7 +158,8 @@ class ArticleCache:
return
# Register article for bookkeeping in case the job is deleted
nzo.saved_articles.add(article)
with nzo.lock:
nzo.saved_articles.add(article)
if article.lowest_partnum and not (article.nzf.import_finished or article.nzf.filename_checked):
# Write the first-fetched articles to temporary file unless downloading
@@ -100,24 +168,17 @@ class ArticleCache:
self.__flush_article_to_disk(article, data)
return
if self.__cache_limit:
# Check if we exceed the limit
data_size = len(data)
self.reserve_space(data_size)
if self.space_left():
# Add new article to the cache
self.__article_table[article] = data
else:
# Return the space and save to disk
self.free_reserved_space(data_size)
self.__flush_article_to_disk(article, data)
# Check if we exceed the limit
if self.__cache_limit and self.reserve_space(len(data)):
# Add new article to the cache
self.__article_table[article] = data
else:
# No data saved in memory, direct to disk
self.__flush_article_to_disk(article, data)
def load_article(self, article: Article):
def load_article(self, article: Article) -> Optional[bytearray]:
"""Load the data of the article"""
data = None
data: Optional[bytearray] = None
nzo = article.nzf.nzo
if article in self.__article_table:
@@ -131,9 +192,10 @@ class ArticleCache:
return data
elif article.art_id:
data = sabnzbd.filesystem.load_data(
article.art_id, nzo.admin_path, remove=True, do_pickle=False, silent=True
article.art_id, nzo.admin_path, remove=True, do_pickle=False, silent=True, mutable=True
)
nzo.saved_articles.discard(article)
with nzo.lock:
nzo.saved_articles.discard(article)
return data
def flush_articles(self):
@@ -161,10 +223,16 @@ class ArticleCache:
elif article.art_id:
sabnzbd.filesystem.remove_data(article.art_id, article.nzf.nzo.admin_path)
@staticmethod
def __flush_article_to_disk(article: Article, data):
def __flush_article_to_disk(self, article: Article, data: bytearray):
# Save data, but don't complain when destination folder is missing
# because this flush may come after completion of the NZO.
# Direct write to destination if cache is being used
if self.__cache_limit and self.__direct_write and sabnzbd.Assembler.assemble_article(article, data):
with article.nzf.nzo.lock:
article.nzf.nzo.saved_articles.discard(article)
return
# Fallback to disk cache
sabnzbd.filesystem.save_data(
data, article.get_art_id(), article.nzf.nzo.admin_path, do_pickle=False, silent=True
)


@@ -23,13 +23,16 @@ import os
import queue
import logging
import re
import threading
from threading import Thread
import ctypes
from typing import Optional
from typing import Optional, NamedTuple, Union
import rarfile
import time
import sabctools
import sabnzbd
from sabnzbd.misc import get_all_passwords, match_str, SABRarFile
from sabnzbd.misc import get_all_passwords, match_str, SABRarFile, to_units
from sabnzbd.filesystem import (
set_permissions,
clip_path,
@@ -39,33 +42,222 @@ from sabnzbd.filesystem import (
has_unwanted_extension,
get_basename,
)
from sabnzbd.constants import Status, GIGI
from sabnzbd.constants import (
Status,
GIGI,
ASSEMBLER_WRITE_THRESHOLD_FACTOR_APPEND,
ASSEMBLER_WRITE_THRESHOLD_FACTOR_DIRECT_WRITE,
ASSEMBLER_MAX_WRITE_THRESHOLD_DIRECT_WRITE,
SOFT_ASSEMBLER_QUEUE_LIMIT,
ASSEMBLER_DELAY_FACTOR_DIRECT_WRITE,
ARTICLE_CACHE_NON_CONTIGUOUS_FLUSH_PERCENTAGE,
ASSEMBLER_WRITE_INTERVAL,
)
import sabnzbd.cfg as cfg
from sabnzbd.nzb import NzbFile, NzbObject
from sabnzbd.nzb import NzbFile, NzbObject, Article
import sabnzbd.par2file as par2file
class AssemblerTask(NamedTuple):
nzo: Optional[NzbObject] = None
nzf: Optional[NzbFile] = None
file_done: bool = False
allow_non_contiguous: bool = False
direct_write: bool = False
class Assembler(Thread):
def __init__(self):
super().__init__()
self.max_queue_size: int = cfg.assembler_max_queue_size()
self.queue: queue.Queue[tuple[Optional[NzbObject], Optional[NzbFile], Optional[bool]]] = queue.Queue()
self.direct_write: bool = cfg.direct_write()
self.cache_limit: int = 0
# Contiguous bytes required to trigger append writes
self.append_trigger: int = 1
# Total bytes required to trigger direct-write assembles
self.direct_write_trigger: int = 1
self.delay_trigger: int = 1
self.queue: queue.Queue[AssemblerTask] = queue.Queue()
self.queued_lock = threading.Lock()
self.queued_nzf: set[str] = set()
self.queued_nzf_non_contiguous: set[str] = set()
self.queued_next_time: dict[str, float] = dict()
self.ready_bytes_lock = threading.Lock()
self.ready_bytes: dict[str, int] = dict()
def stop(self):
self.queue.put((None, None, None))
self.queue.put(AssemblerTask())
def process(self, nzo: NzbObject, nzf: Optional[NzbFile] = None, file_done: Optional[bool] = None):
self.queue.put((nzo, nzf, file_done))
def new_limit(self, limit: int):
"""Called when cache limit changes"""
self.cache_limit = limit
self.append_trigger = max(1, int(limit * ASSEMBLER_WRITE_THRESHOLD_FACTOR_APPEND))
self.direct_write_trigger = max(
1,
min(
max(1, int(limit * ASSEMBLER_WRITE_THRESHOLD_FACTOR_DIRECT_WRITE)),
ASSEMBLER_MAX_WRITE_THRESHOLD_DIRECT_WRITE,
),
)
self.calculate_delay_trigger()
self.change_direct_write(cfg.direct_write())
logging.debug(
"Assembler trigger append=%s, direct=%s, delay=%s",
to_units(self.append_trigger),
to_units(self.direct_write_trigger),
to_units(self.delay_trigger),
)
def queue_level(self) -> float:
return self.queue.qsize() / self.max_queue_size
def change_direct_write(self, direct_write: bool) -> None:
self.direct_write = direct_write and self.direct_write_trigger > 1
self.calculate_delay_trigger()
def calculate_delay_trigger(self):
"""Point at which the downloader should start being delayed; recalculated when the cache limit or direct write setting changes"""
self.delay_trigger = int(
max(
(
750_000 * self.max_queue_size * ASSEMBLER_DELAY_FACTOR_DIRECT_WRITE
if self.direct_write
else 750_000 * self.max_queue_size
),
(
self.cache_limit * ARTICLE_CACHE_NON_CONTIGUOUS_FLUSH_PERCENTAGE
if self.direct_write
else min(self.append_trigger * self.max_queue_size, int(self.cache_limit * 0.5))
),
)
)
def is_busy(self) -> bool:
"""Returns True if the assembler thread has at least one NzbFile it is assembling"""
return bool(self.queued_nzf or self.queued_nzf_non_contiguous)
def total_ready_bytes(self) -> int:
with self.ready_bytes_lock:
return sum(self.ready_bytes.values())
def update_ready_bytes(self, nzf: NzbFile, delta: int) -> int:
with self.ready_bytes_lock:
cur = self.ready_bytes.get(nzf.nzf_id, 0) + delta
if cur <= 0:
self.ready_bytes.pop(nzf.nzf_id, None)
else:
self.ready_bytes[nzf.nzf_id] = cur
return cur
def clear_ready_bytes(self, *nzfs: NzbFile) -> None:
with self.ready_bytes_lock:
for nzf in nzfs:
self.ready_bytes.pop(nzf.nzf_id, None)
self.queued_next_time.pop(nzf.nzf_id, None)
def process(
self,
nzo: NzbObject = None,
nzf: Optional[NzbFile] = None,
file_done: bool = False,
allow_non_contiguous: bool = False,
article: Optional[Article] = None,
) -> None:
if nzf is None:
# post-proc
self.queue.put(AssemblerTask(nzo))
return
# Track bytes pending being written for this nzf
if self.should_track_ready_bytes(article, allow_non_contiguous):
ready_bytes = self.update_ready_bytes(nzf, article.decoded_size)
else:
ready_bytes = 0
article_has_first_part = bool(article and article.lowest_partnum)
if article_has_first_part:
self.queued_next_time[nzf.nzf_id] = time.monotonic() + ASSEMBLER_WRITE_INTERVAL
if not self.should_queue_nzf(
nzf,
article_has_first_part=article_has_first_part,
filename_checked=nzf.filename_checked,
import_finished=nzf.import_finished,
file_done=file_done,
allow_non_contiguous=allow_non_contiguous,
ready_bytes=ready_bytes,
):
return
with self.queued_lock:
# Recheck not already in the normal queue under lock, but always enqueue when file_done
if not file_done and nzf.nzf_id in self.queued_nzf:
return
if allow_non_contiguous:
if not file_done and nzf.nzf_id in self.queued_nzf_non_contiguous:
return
self.queued_nzf_non_contiguous.add(nzf.nzf_id)
else:
self.queued_nzf.add(nzf.nzf_id)
self.queued_next_time[nzf.nzf_id] = time.monotonic() + ASSEMBLER_WRITE_INTERVAL
can_direct_write = self.direct_write and nzf.type == "yenc"
self.queue.put(AssemblerTask(nzo, nzf, file_done, allow_non_contiguous, can_direct_write))
def should_queue_nzf(
self,
nzf: NzbFile,
*,
article_has_first_part: bool,
filename_checked: bool,
import_finished: bool,
file_done: bool,
allow_non_contiguous: bool,
ready_bytes: int,
) -> bool:
# Always queue if done
if file_done:
return True
if nzf.nzf_id in self.queued_nzf:
return False
# Always write
if article_has_first_part and filename_checked and not import_finished:
return True
next_ready = (next_article := nzf.assembler_next_article) and (next_article.decoded or next_article.on_disk)
# Trigger every 5 seconds if next article is decoded or on_disk
if next_ready and time.monotonic() > self.queued_next_time.get(nzf.nzf_id, 0):
return True
# Append
if not self.direct_write or nzf.type != "yenc":
return nzf.contiguous_ready_bytes() >= self.append_trigger
# Direct Write
if allow_non_contiguous:
return True
# Direct Write ready bytes trigger if next is also ready
if next_ready and ready_bytes >= self.direct_write_trigger:
return True
return False
@staticmethod
def should_track_ready_bytes(article: Optional[Article], allow_non_contiguous: bool) -> bool:
"""Track ready bytes only for regular (not forced non-contiguous) articles that actually decoded data"""
return article and not allow_non_contiguous and article.decoded_size
def delay(self) -> float:
"""Calculate how long, if at all, the downloader thread should sleep to allow the assembler to catch up"""
ready_total = self.total_ready_bytes()
# Below trigger: no delay possible
if ready_total <= self.delay_trigger:
return 0
pressure = (ready_total - self.delay_trigger) / max(1.0, self.cache_limit - self.delay_trigger)
if pressure <= SOFT_ASSEMBLER_QUEUE_LIMIT:
return 0
# 50-100%: 0-0.25 seconds, capped at 0.15
sleep = min((pressure - SOFT_ASSEMBLER_QUEUE_LIMIT) / 2, 0.15)
return max(0.001, sleep)
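The delay method maps "ready bytes above the trigger" onto a bounded sleep: no delay until the trigger, then a pressure fraction of the remaining cache headroom, with a soft limit before any sleep starts and a hard cap so the downloader is never stalled for long. A standalone sketch with an illustrative stand-in for `SOFT_ASSEMBLER_QUEUE_LIMIT`:

```python
SOFT_LIMIT = 0.5  # illustrative stand-in for SOFT_ASSEMBLER_QUEUE_LIMIT

def backpressure_delay(ready_total, trigger, cache_limit):
    """Return a bounded sleep (seconds) once ready bytes exceed the trigger."""
    if ready_total <= trigger:
        return 0.0
    # Fraction of the cache headroom above the trigger that is already consumed
    pressure = (ready_total - trigger) / max(1.0, cache_limit - trigger)
    if pressure <= SOFT_LIMIT:
        return 0.0
    # Scale linearly above the soft limit, capped at 0.15 s per call
    return max(0.001, min((pressure - SOFT_LIMIT) / 2, 0.15))
```

With a 1000-byte cache and a 100-byte trigger, 500 ready bytes gives pressure 0.44 and no delay, while a full cache hits the 0.15 s cap.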
def run(self):
while 1:
# Set NzbObject and NzbFile objects to None so references
# from this thread do not keep the objects alive (see #1628)
nzo = nzf = None
nzo, nzf, file_done = self.queue.get()
nzo, nzf, file_done, allow_non_contiguous, direct_write = self.queue.get()
if not nzo:
logging.debug("Shutting down assembler")
break
@@ -75,11 +267,15 @@ class Assembler(Thread):
if file_done and not sabnzbd.Downloader.paused:
self.diskspace_check(nzo, nzf)
# Prepare filepath
if filepath := nzf.prepare_filepath():
try:
# Prepare filepath
if not (filepath := nzf.prepare_filepath()):
logging.debug("Prepare filepath failed for file %s in job %s", nzf.filename, nzo.final_name)
continue
try:
logging.debug("Decoding part of %s", filepath)
self.assemble(nzo, nzf, file_done)
self.assemble(nzo, nzf, file_done, allow_non_contiguous, direct_write)
# Continue after partly written data
if not file_done:
@@ -122,9 +318,16 @@ class Assembler(Thread):
except Exception:
logging.error(T("Fatal error in Assembler"), exc_info=True)
break
finally:
with self.queued_lock:
if allow_non_contiguous:
self.queued_nzf_non_contiguous.discard(nzf.nzf_id)
else:
self.queued_nzf.discard(nzf.nzf_id)
else:
sabnzbd.NzbQueue.remove(nzo.nzo_id, cleanup=False)
sabnzbd.PostProcessor.process(nzo)
self.clear_ready_bytes(*nzo.files)
@staticmethod
def diskspace_check(nzo: NzbObject, nzf: NzbFile):
@@ -162,52 +365,115 @@ class Assembler(Thread):
sabnzbd.emailer.diskfull_mail()
@staticmethod
def assemble(nzo: NzbObject, nzf: NzbFile, file_done: bool):
def assemble(nzo: NzbObject, nzf: NzbFile, file_done: bool, allow_non_contiguous: bool, direct_write: bool) -> None:
"""Assemble an NZF from its table of articles
1) Partial write: write what we have
2) Nothing written before: write all
"""
load_article = sabnzbd.ArticleCache.load_article
downloader = sabnzbd.Downloader
decodetable = nzf.decodetable
fd: Optional[int] = None
skipped: bool = False # have any articles been skipped
offset: int = 0 # sequential offset for append writes
try:
# Resume assembly from where we got to previously
for idx in range(nzf.assembler_next_index, len(decodetable)):
article = decodetable[idx]
# We write large article-sized chunks, so we can safely skip the buffering of Python
with open(nzf.filepath, "ab", buffering=0) as fout:
for article in nzf.decodetable:
# Break if deleted during writing
if nzo.status is Status.DELETED:
break
# allow_non_contiguous is when the cache forces the assembler to write all articles, even if it leaves gaps.
# In most cases we can stop at the first article that has not been tried, because they are requested in order.
# However, if we are paused then always consider the whole decodetable to ensure everything possible is written.
if allow_non_contiguous and not article.tries and not downloader.paused:
break
# Skip already written articles
if article.on_disk:
if fd is not None and article.decoded_size is not None:
# Move the file descriptor forward past this article
offset += article.decoded_size
if not skipped:
with nzf.lock:
nzf.assembler_next_index = idx + 1
continue
# Write all decoded articles
if article.decoded:
# Could be empty in case nzo was deleted
if data := sabnzbd.ArticleCache.load_article(article):
written = fout.write(data)
# In raw/non-buffered mode fout.write may not write everything requested:
# https://docs.python.org/3/library/io.html?highlight=write#io.RawIOBase.write
while written < len(data):
written += fout.write(data[written:])
nzf.update_crc32(article.crc32, len(data))
article.on_disk = True
else:
logging.info("No data found when trying to write %s", article)
else:
# stop if next piece not yet decoded
if not article.decoded:
# If the article was not decoded but the file
# is done, it is just a missing piece, so keep writing
if file_done:
continue
# We reach an article that was not decoded
if allow_non_contiguous:
skipped = True
continue
break
# Could be empty in case nzo was deleted
data = load_article(article)
if not data:
if file_done:
continue
if allow_non_contiguous:
skipped = True
continue
else:
# We reach an article that was not decoded
logging.info("No data found when trying to write %s", article)
break
# If required open the file
if fd is None:
fd, offset, direct_write = Assembler.open(
nzf, direct_write and article.can_direct_write, article.file_size
)
if not direct_write and allow_non_contiguous:
# Can only be allow_non_contiguous if we wanted direct_write, file_done will always be queued separately
break
if direct_write and article.can_direct_write:
offset += Assembler.write(fd, idx, nzf, article, data)
else:
if direct_write and skipped and not file_done:
# If we have already skipped an article then need to abort, unless this is the final assemble
break
offset += Assembler.write(fd, idx, nzf, article, data, offset)
finally:
if fd is not None:
os.close(fd)
# Final steps
if file_done:
sabnzbd.Assembler.clear_ready_bytes(nzf)
set_permissions(nzf.filepath)
nzf.assembled = True
@staticmethod
def assemble_article(article: Article, data: bytearray) -> bool:
"""Write a single article to disk"""
if not article.can_direct_write:
return False
nzf = article.nzf
with nzf.file_lock:
fd, _, direct_write = Assembler.open(nzf, True, article.file_size)
try:
if not direct_write:
cfg.direct_write.set(False)
return False
Assembler.write(fd, None, nzf, article, data)
except FileNotFoundError:
# nzo has probably been deleted, ArticleCache tries the fallback and handles it
return False
finally:
os.close(fd)
return True
@staticmethod
def check_encrypted_and_unwanted(nzo: NzbObject, nzf: NzbFile):
"""Encryption and unwanted extension detection"""
@@ -245,6 +511,71 @@ class Assembler(Thread):
nzo.fail_msg = T("Aborted, unwanted extension detected")
sabnzbd.NzbQueue.end_job(nzo)
@staticmethod
def write(
fd: int, nzf_index: Optional[int], nzf: NzbFile, article: Article, data: bytearray, offset: Optional[int] = None
) -> int:
"""Write data at the given position in the file"""
pos = article.data_begin if offset is None else offset
written = Assembler._write(fd, nzf, data, pos)
# In raw/non-buffered mode os.write may not write everything requested:
# https://docs.python.org/3/library/io.html?highlight=write#io.RawIOBase.write
if written < len(data) and (mv := memoryview(data)):
while written < len(data):
written += Assembler._write(fd, nzf, mv[written:], pos + written)
nzf.update_crc32(article.crc32, len(data))
article.on_disk = True
sabnzbd.Assembler.update_ready_bytes(nzf, -len(data))
with nzf.lock:
# assembler_next_index is the lowest index that has not yet been written sequentially from the start of the file.
# If this was the next required index to remain sequential, it can be incremented which allows the assembler to
# resume without rechecking articles that are already known to be on disk.
# If nzf_index is None, determine it now.
if nzf_index is None:
idx = nzf.assembler_next_index
if idx < len(nzf.decodetable) and article == nzf.decodetable[idx]:
nzf_index = idx
if nzf_index is not None and nzf.assembler_next_index == nzf_index:
nzf.assembler_next_index += 1
return written
@staticmethod
def _write(fd: int, nzf: NzbFile, data: Union[bytearray, memoryview], offset: int) -> int:
if sabnzbd.WINDOWS:
# pwrite is not implemented on Windows so fallback to os.lseek and os.write
# Must lock since it is possible to write from multiple threads (assembler + downloader)
with nzf.file_lock:
os.lseek(fd, offset, os.SEEK_SET)
return os.write(fd, data)
else:
return os.pwrite(fd, data, offset)
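As the comment in `write` notes, raw `os.write`/`os.pwrite` may perform short writes, so the remainder must be retried from a `memoryview` at the advanced offset. A minimal POSIX-only sketch of that full-write loop (uses `os.pwrite`, which is unavailable on Windows; names are illustrative):

```python
import os
import tempfile

def pwrite_all(fd, data, offset):
    """Write all of data at offset, retrying on short writes (POSIX os.pwrite)."""
    mv = memoryview(data)
    written = 0
    while written < len(mv):
        # A short write advances both the slice and the file offset
        written += os.pwrite(fd, mv[written:], offset + written)
    return written

# Usage: positional writes can land out of order, as in direct-write assembly
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
fd = os.open(path, os.O_WRONLY)
try:
    pwrite_all(fd, b"world", 5)  # second chunk first; leaves a hole at 0-4
    pwrite_all(fd, b"hello", 0)  # fill the hole
finally:
    os.close(fd)
```

Because `pwrite` takes the offset explicitly, no `lseek` is needed and two threads can write different regions of the same descriptor without coordinating a shared file position.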
@staticmethod
def open(nzf: NzbFile, direct_write: bool, file_size: int) -> tuple[int, int, bool]:
"""Open file for nzf
Use direct_write if requested, with a fallback to setting the current file position for append mode
:returns (file_descriptor, current_offset, can_direct_write)
"""
with nzf.file_lock:
# Get the current umask without changing it, to create a file with the same permissions as `with open(...)`
os.umask(os.umask(0))
fd = os.open(nzf.filepath, os.O_CREAT | os.O_WRONLY | getattr(os, "O_BINARY", 0), 0o666)
offset = nzf.contiguous_offset()
os.lseek(fd, offset, os.SEEK_SET)
if direct_write:
if not file_size:
direct_write = False
if os.fstat(fd).st_size == 0:
try:
sabctools.sparse(fd, file_size)
except OSError:
logging.debug("Sparse call failed for %s", nzf.filepath)
cfg.direct_write.set(False)
direct_write = False
return fd, offset, direct_write
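`Assembler.open` uses the idiom `os.umask(os.umask(0))` to read the process umask without (net) changing it, so that `os.open(..., 0o666)` creates files with the same effective permissions as a plain `open(...)`. A small sketch of just that trick; note it briefly mutates process-wide state, so it is not safe against concurrent umask changes in other threads:

```python
import os

def current_umask():
    """Read the process umask: os.umask returns the previous value,
    so set it to 0 and immediately restore what was there before."""
    old = os.umask(0)
    os.umask(old)
    return old
```

The kernel then applies this mask to the 0o666 mode passed to `os.open`, matching the behavior of Python's high-level `open()`.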
RE_SUBS = re.compile(r"\W+sub|subs|subpack|subtitle|subtitles(?![a-z])", re.I)
SAFE_EXTS = (".mkv", ".mp4", ".avi", ".wmv", ".mpg", ".webm")


@@ -25,6 +25,7 @@ import re
import argparse
import socket
import ipaddress
import threading
from typing import Union
import sabnzbd
@@ -508,6 +509,7 @@ x_frame_options = OptionBool("misc", "x_frame_options", True)
allow_old_ssl_tls = OptionBool("misc", "allow_old_ssl_tls", False)
enable_season_sorting = OptionBool("misc", "enable_season_sorting", True)
verify_xff_header = OptionBool("misc", "verify_xff_header", True)
direct_write = OptionBool("misc", "direct_write", True)
# Text values
rss_odd_titles = OptionList("misc", "rss_odd_titles", ["nzbindex.nl/", "nzbindex.com/", "nzbclub.com/"])
@@ -743,6 +745,13 @@ def new_limit():
if sabnzbd.__INITIALIZED__:
# Only update after full startup
sabnzbd.ArticleCache.new_limit(cache_limit.get_int())
sabnzbd.Assembler.new_limit(sabnzbd.ArticleCache.cache_info().cache_limit)
def new_direct_write():
"""Callback for direct write changes"""
sabnzbd.Assembler.change_direct_write(bool(direct_write()))
sabnzbd.ArticleCache.change_direct_write(bool(direct_write()))
def guard_restart():


@@ -210,7 +210,8 @@ class OptionBool(Option):
super().set(sabnzbd.misc.bool_conv(value))
def __call__(self) -> int:
"""get() replacement"""
"""Many places assume 0/1 is used for historical reasons.
Using pure bools breaks in random places"""
return int(self.get())


@@ -50,7 +50,7 @@ RENAMES_FILE = "__renames__"
ATTRIB_FILE = "SABnzbd_attrib"
REPAIR_REQUEST = "repair-all.sab"
SABCTOOLS_VERSION_REQUIRED = "9.1.0"
SABCTOOLS_VERSION_REQUIRED = "9.3.1"
DB_HISTORY_VERSION = 1
DB_HISTORY_NAME = "history%s.db" % DB_HISTORY_VERSION
@@ -100,10 +100,16 @@ CONFIG_BACKUP_HTTPS = { # "basename": "associated setting"
DEF_MAX_ASSEMBLER_QUEUE = 12
SOFT_ASSEMBLER_QUEUE_LIMIT = 0.5
# Percentage of cache to use before adding file to assembler
ASSEMBLER_WRITE_THRESHOLD = 5
ASSEMBLER_WRITE_THRESHOLD_FACTOR_APPEND = 0.05
ASSEMBLER_WRITE_THRESHOLD_FACTOR_DIRECT_WRITE = 0.75
ASSEMBLER_MAX_WRITE_THRESHOLD_DIRECT_WRITE = int(1 * GIGI)
ASSEMBLER_DELAY_FACTOR_DIRECT_WRITE = 1.5
ASSEMBLER_WRITE_INTERVAL = 5.0
NNTP_BUFFER_SIZE = int(256 * KIBI)
NTTP_MAX_BUFFER_SIZE = int(10 * MEBI)
DEF_PIPELINING_REQUESTS = 1
# Article cache capacity factor to force a non-contiguous flush to disk
ARTICLE_CACHE_NON_CONTIGUOUS_FLUSH_PERCENTAGE = 0.9
REPAIR_PRIORITY = 3
FORCE_PRIORITY = 2


@@ -114,6 +114,12 @@ class HistoryDB:
_ = self.execute("PRAGMA user_version = 5;") and self.execute(
"ALTER TABLE history ADD COLUMN time_added INTEGER;"
)
if version < 6:
_ = (
self.execute("PRAGMA user_version = 6;")
and self.execute("CREATE UNIQUE INDEX idx_history_nzo_id ON history(nzo_id);")
and self.execute("CREATE INDEX idx_history_archive_completed ON history(archive, completed DESC);")
)
HistoryDB.startup_done = True
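The migration above follows SQLite's `PRAGMA user_version` pattern: each `if version < N` block applies one schema step and bumps the stored version, so old databases replay only the steps they are missing. A self-contained sketch with an illustrative (much reduced) schema:

```python
import sqlite3

def migrate(con):
    """Versioned schema migration via PRAGMA user_version (illustrative schema)."""
    version = con.execute("PRAGMA user_version").fetchone()[0]
    if version < 1:
        con.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, nzo_id TEXT)")
        con.execute("PRAGMA user_version = 1")
    if version < 2:
        # Later releases add indexes on top of the existing table
        con.execute("CREATE UNIQUE INDEX idx_history_nzo_id ON history(nzo_id)")
        con.execute("PRAGMA user_version = 2")

con = sqlite3.connect(":memory:")
migrate(con)
```

Freshly created databases (see `create_history_db` in the diff) are written at the latest version directly, so migration blocks only ever run for pre-existing files.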
@@ -160,8 +166,7 @@ class HistoryDB:
def create_history_db(self):
"""Create a new (empty) database file"""
self.execute(
"""
self.execute("""
CREATE TABLE history (
"id" INTEGER PRIMARY KEY,
"completed" INTEGER NOT NULL,
@@ -194,9 +199,10 @@ class HistoryDB:
"archive" INTEGER,
"time_added" INTEGER
)
"""
)
self.execute("PRAGMA user_version = 5;")
""")
self.execute("PRAGMA user_version = 6;")
self.execute("CREATE UNIQUE INDEX idx_history_nzo_id ON history(nzo_id);")
self.execute("CREATE INDEX idx_history_archive_completed ON history(archive, completed DESC);")
def close(self):
"""Close database connection"""
@@ -369,33 +375,34 @@ class HistoryDB:
def have_duplicate_key(self, duplicate_key: str) -> bool:
"""Check whether History contains this duplicate key"""
total = 0
if self.execute(
"""
SELECT COUNT(*)
FROM History
WHERE
duplicate_key = ? AND
STATUS != ?""",
SELECT EXISTS(
SELECT 1
FROM history
WHERE duplicate_key = ? AND status != ?
) as found
""",
(duplicate_key, Status.FAILED),
):
total = self.cursor.fetchone()["COUNT(*)"]
return total > 0
return bool(self.cursor.fetchone()["found"])
return False
def have_name_or_md5sum(self, name: str, md5sum: str) -> bool:
"""Check whether this name or md5sum is already in History"""
total = 0
if self.execute(
"""
SELECT COUNT(*)
FROM History
WHERE
( LOWER(name) = LOWER(?) OR md5sum = ? ) AND
STATUS != ?""",
SELECT EXISTS(
SELECT 1
FROM history
WHERE (name = ? COLLATE NOCASE OR md5sum = ?)
AND status != ?
) as found
""",
(name, md5sum, Status.FAILED),
):
total = self.cursor.fetchone()["COUNT(*)"]
return total > 0
return bool(self.cursor.fetchone()["found"])
return False
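The duplicate-check queries above switch from `SELECT COUNT(*)` to `SELECT EXISTS(...)`: `EXISTS` can stop at the first matching row instead of scanning everything, and `COLLATE NOCASE` replaces the `LOWER(...)` wrapping so an index on the column stays usable. A runnable sketch of the same shape against a toy table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
con.execute("CREATE TABLE history (name TEXT, status TEXT)")
con.executemany(
    "INSERT INTO history VALUES (?, ?)",
    [("Job.A", "Completed"), ("Job.B", "Failed")],
)

def have_name(con, name):
    """True if a non-failed entry with this name exists (case-insensitive)."""
    row = con.execute(
        "SELECT EXISTS(SELECT 1 FROM history "
        "WHERE name = ? COLLATE NOCASE AND status != ?) AS found",
        (name, "Failed"),
    ).fetchone()
    return bool(row["found"])
```

Aliasing the result as `found` also avoids the brittle `row["COUNT(*)"]` key lookup used by the old code.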
def get_history_size(self) -> tuple[int, int, int]:
"""Returns the total size of the history and


@@ -163,6 +163,7 @@ def decode_yenc(article: Article, response: sabctools.NNTPResponse) -> bytearray
article.file_size = response.file_size
article.data_begin = response.part_begin
article.data_size = response.part_size
article.decoded_size = response.bytes_decoded
nzf = article.nzf
# Assume it is yenc
@@ -198,6 +199,7 @@ def decode_uu(article: Article, response: sabctools.NNTPResponse) -> bytearray:
raise BadData(response.data)
decoded_data = response.data
article.decoded_size = response.bytes_decoded
nzf = article.nzf
nzf.type = "uu"


@@ -20,10 +20,9 @@
##############################################################################
import time
import functools
from typing import Union, Callable
from typing import Union, Callable, Any
from threading import Lock, RLock, Condition
# All operations that modify the queue need to happen in a lock
# Also used when importing NZBs to prevent IO-race conditions
# Names of wrapper-functions should be the same in misc.caller_name
@@ -35,15 +34,21 @@ DOWNLOADER_CV = Condition(NZBQUEUE_LOCK)
DOWNLOADER_LOCK = RLock()
def synchronized(lock: Union[Lock, RLock]):
def synchronized(lock: Union[Lock, RLock, Condition, None] = None):
def wrap(func: Callable):
def call_func(*args, **kw):
# Using the try/finally approach is 25% faster compared to using "with lock"
# Either use the supplied lock or the object-specific one.
# Because lock is a variable of the enclosing function, we cannot rebind it directly
lock_obj = lock
if not lock_obj:
lock_obj = getattr(args[0], "lock")
# Using try/finally is ~25% faster than "with lock"
try:
lock.acquire()
lock_obj.acquire()
return func(*args, **kw)
finally:
lock.release()
lock_obj.release()
return call_func
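The change to `synchronized` makes the lock argument optional: when none is supplied, the wrapper falls back to the instance's own `lock` attribute (`args[0]` is `self`). A self-contained sketch of that fallback behavior:

```python
import threading
from typing import Callable, Optional, Union

def synchronized(lock: Union[threading.Lock, threading.RLock, None] = None):
    """If no lock is supplied, use the instance's own `lock` attribute."""
    def wrap(func: Callable):
        def call_func(*args, **kw):
            lock_obj = lock if lock is not None else getattr(args[0], "lock")
            with lock_obj:
                return func(*args, **kw)
        return call_func
    return wrap

class Counter:
    def __init__(self):
        self.lock = threading.RLock()
        self.value = 0

    @synchronized()  # no lock passed: serializes on self.lock
    def incr(self):
        self.value += 1
```

This keeps module-level locks for module-level state while letting per-object methods serialize on per-object locks without a separate decorator.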


@@ -27,6 +27,7 @@ files to the job-name in the queue if the filename looks obfuscated
Based on work by P1nGu1n
"""
import hashlib
import logging
import os


@@ -28,7 +28,7 @@ import sys
import ssl
import time
from datetime import date
from typing import Optional, Union, Deque
from typing import Optional, Union, Deque, Callable
import sabctools
@@ -37,10 +37,8 @@ from sabnzbd.decorators import synchronized, NzbQueueLocker, DOWNLOADER_CV, DOWN
from sabnzbd.newswrapper import NewsWrapper, NNTPPermanentError
import sabnzbd.config as config
import sabnzbd.cfg as cfg
from sabnzbd.misc import from_units, helpful_warning, int_conv, MultiAddQueue
from sabnzbd.misc import from_units, helpful_warning, int_conv, MultiAddQueue, to_units
from sabnzbd.get_addrinfo import get_fastest_addrinfo, AddrInfo
from sabnzbd.constants import SOFT_ASSEMBLER_QUEUE_LIMIT
# Timeout penalty in minutes for each cause
_PENALTY_UNKNOWN = 3 # Unknown cause
@@ -173,7 +171,6 @@ class Server:
def stop(self):
"""Remove all connections and cached articles from server"""
for nw in self.idle_threads:
sabnzbd.Downloader.remove_socket(nw)
nw.hard_reset()
self.idle_threads = set()
self.reset_article_queue()
@@ -299,7 +296,12 @@ class Downloader(Thread):
self.force_disconnect: bool = False
self.selector: selectors.DefaultSelector = selectors.DefaultSelector()
# macOS/BSD will default to KqueueSelector, it's very efficient but produces separate events for READ and WRITE.
# Which causes problems when two receive threads are both trying to use the connection while it is resetting.
if selectors.DefaultSelector is getattr(selectors, "KqueueSelector", None):
self.selector: selectors.BaseSelector = selectors.PollSelector()
else:
self.selector: selectors.BaseSelector = selectors.DefaultSelector()
self.servers: list[Server] = []
self.timers: dict[str, list[float]] = {}
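The selector swap above works around kqueue's behavior of delivering READ and WRITE as separate events: poll reports one event per file descriptor with the ready flags combined, so a socket that is both readable and writable is handled in a single dispatch. A sketch demonstrating that combined mask (POSIX-only, since `PollSelector` is unavailable on Windows):

```python
import selectors
import socket

# PollSelector delivers one event per fd with READ|WRITE combined;
# KqueueSelector (the macOS/BSD default) would report them separately.
sel = selectors.PollSelector()
a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ | selectors.EVENT_WRITE)
b.send(b"x")  # make `a` readable; an idle socketpair end is always writable
events = sel.select(timeout=1)
```

With one dispatch per socket, the read and write handling in `process_nw_read`/`write` cannot race against each other for the same connection within a single selector pass.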
@@ -375,6 +377,8 @@ class Downloader(Thread):
def add_socket(self, nw: NewsWrapper):
"""Add a socket to be watched for read or write availability"""
if nw.nntp:
nw.server.idle_threads.discard(nw)
nw.server.busy_threads.add(nw)
try:
self.selector.register(nw.nntp.fileno, selectors.EVENT_READ | selectors.EVENT_WRITE, nw)
nw.selector_events = selectors.EVENT_READ | selectors.EVENT_WRITE
@@ -384,7 +388,7 @@ class Downloader(Thread):
@synchronized(DOWNLOADER_LOCK)
def modify_socket(self, nw: NewsWrapper, events: int):
"""Modify the events socket are watched for"""
if nw.nntp and nw.selector_events != events:
if nw.nntp and nw.selector_events != events and not nw.blocking:
try:
self.selector.modify(nw.nntp.fileno, events, nw)
nw.selector_events = events
@@ -395,6 +399,9 @@ class Downloader(Thread):
def remove_socket(self, nw: NewsWrapper):
"""Remove a socket to be watched"""
if nw.nntp:
nw.server.busy_threads.discard(nw)
nw.server.idle_threads.add(nw)
nw.timeout = None
try:
self.selector.unregister(nw.nntp.fileno)
nw.selector_events = 0
@@ -649,12 +656,12 @@ class Downloader(Thread):
if not server.get_article(peek=True):
break
server.idle_threads.remove(nw)
server.busy_threads.add(nw)
if nw.connected:
# Assign a request immediately if NewsWrapper is ready, if we wait until the socket is
# selected all idle connections will be activated when there may only be one request
nw.prepare_request()
self.add_socket(nw)
else:
elif not nw.nntp:
try:
logging.info("%s@%s: Initiating connection", nw.thrdnum, server.host)
nw.init_connect()
@@ -749,24 +756,43 @@ class Downloader(Thread):
# Drop stale items
if nw.generation != generation:
return
if event & selectors.EVENT_READ:
# Read on EVENT_READ, or on EVENT_WRITE if TLS needs a write to complete a read
if (event & selectors.EVENT_READ) or (event & selectors.EVENT_WRITE and nw.tls_wants_write):
self.process_nw_read(nw, generation)
# If read caused a reset, don't proceed to write
if nw.generation != generation:
return
if event & selectors.EVENT_WRITE:
# The read may have removed the socket, so prevent calling prepare_request again
if not (nw.selector_events & selectors.EVENT_WRITE):
return
# Only attempt app-level writes if TLS is not blocked
if (event & selectors.EVENT_WRITE) and not nw.tls_wants_write:
nw.write()
def process_nw_read(self, nw: NewsWrapper, generation: int) -> None:
bytes_received: int = 0
bytes_pending: int = 0
while nw.decoder and nw.generation == generation:
while (
nw.connected
and nw.generation == generation
and not self.force_disconnect
and not self.shutdown
and not (nw.timeout and time.time() > nw.timeout)
):
try:
n, bytes_pending = nw.read(nbytes=bytes_pending, generation=generation)
bytes_received += n
nw.tls_wants_write = False
except ssl.SSLWantReadError:
return
except ssl.SSLWantWriteError:
# TLS needs to write handshake/key-update data before we can continue reading
nw.tls_wants_write = True
self.modify_socket(nw, selectors.EVENT_READ | selectors.EVENT_WRITE)
return
except (ConnectionError, ConnectionAbortedError):
# The ConnectionAbortedError is also thrown by sabctools in case of fatal SSL-layer problems
self.reset_nw(nw, "Server closed connection", wait=False)
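The `SSLWantWriteError` handling above covers TLS renegotiation/key updates: a *read* can require a *write* first, so the socket must temporarily be watched for `EVENT_WRITE` too. A sketch of that control flow, using a stand-in reader instead of a real TLS socket:

```python
import selectors
import ssl

EVENT_RW = selectors.EVENT_READ | selectors.EVENT_WRITE

class FakeNW:
    """Stand-in for NewsWrapper: the first read raises SSLWantWriteError."""
    def __init__(self):
        self.tls_wants_write = False
        self.events = selectors.EVENT_READ
        self._raised = False

    def read(self):
        if not self._raised:
            self._raised = True
            raise ssl.SSLWantWriteError()
        return 4  # pretend 4 bytes arrived on the retry

def process_read(nw):
    # Mirrors process_nw_read: on SSLWantWriteError, remember that TLS
    # needs a write and also watch EVENT_WRITE until the read completes
    try:
        n = nw.read()
        nw.tls_wants_write = False
        return n
    except ssl.SSLWantWriteError:
        nw.tls_wants_write = True
        nw.events = EVENT_RW
        return 0

nw = FakeNW()
assert process_read(nw) == 0 and nw.tls_wants_write
# The next write-ready event retries the read path, which now succeeds
assert process_read(nw) == 4 and not nw.tls_wants_write
```

This is why the event dispatcher above reads on `EVENT_WRITE` when `tls_wants_write` is set, and skips app-level writes while TLS is blocked.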
@@ -796,33 +822,38 @@ class Downloader(Thread):
and sabnzbd.BPSMeter.bps + sabnzbd.BPSMeter.sum_cached_amount > self.bandwidth_limit
):
sabnzbd.BPSMeter.update()
while sabnzbd.BPSMeter.bps > self.bandwidth_limit:
while self.bandwidth_limit and sabnzbd.BPSMeter.bps > self.bandwidth_limit:
time.sleep(0.01)
sabnzbd.BPSMeter.update()
def check_assembler_levels(self):
"""Check the Assembler queue to see if we need to delay, depending on queue size"""
if (assembler_level := sabnzbd.Assembler.queue_level()) > SOFT_ASSEMBLER_QUEUE_LIMIT:
time.sleep(min((assembler_level - SOFT_ASSEMBLER_QUEUE_LIMIT) / 4, 0.15))
sabnzbd.BPSMeter.delayed_assembler += 1
logged_counter = 0
if not sabnzbd.Assembler.is_busy() or (delay := sabnzbd.Assembler.delay()) <= 0:
return
time.sleep(delay)
sabnzbd.BPSMeter.delayed_assembler += 1
start_time = time.monotonic()
deadline = start_time + 5
next_log = start_time + 1.0
logged_counter = 0
while not self.shutdown and sabnzbd.Assembler.queue_level() >= 1:
# Only log/update once every second, to not waste any CPU-cycles
if not logged_counter % 10:
# Make sure the BPS-meter is updated
sabnzbd.BPSMeter.update()
# Update who is delaying us
logging.debug(
"Delayed - %d seconds - Assembler queue: %d",
logged_counter / 10,
sabnzbd.Assembler.queue.qsize(),
)
# Wait and update the queue sizes
time.sleep(0.1)
while not self.shutdown and sabnzbd.Assembler.is_busy() and time.monotonic() < deadline:
if (delay := sabnzbd.Assembler.delay()) <= 0:
break
# Sleep for the current delay (but cap to remaining time)
sleep_time = max(0.001, min(delay, deadline - time.monotonic()))
time.sleep(sleep_time)
# Make sure the BPS-meter is updated
sabnzbd.BPSMeter.update()
# Only log/update once every second
if time.monotonic() >= next_log:
logged_counter += 1
logging.debug(
"Delayed - %d seconds - Assembler queue: %s",
logged_counter,
to_units(sabnzbd.Assembler.total_ready_bytes()),
)
next_log += 1.0
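The new `check_assembler_levels` loop bounds the total delay with a monotonic deadline and caps each sleep to the time remaining. The pattern in isolation (function and parameter names here are illustrative):

```python
import time

def wait_with_deadline(is_busy, delay_fn, max_wait=0.2):
    """Sleep in small increments while busy, never past the deadline."""
    deadline = time.monotonic() + max_wait
    while is_busy() and time.monotonic() < deadline:
        delay = delay_fn()
        if delay <= 0:
            break
        # Cap the sleep to the time remaining before the deadline
        time.sleep(max(0.001, min(delay, deadline - time.monotonic())))

start = time.monotonic()
# Busy forever, but the deadline still bounds the total wait
wait_with_deadline(lambda: True, lambda: 0.01, max_wait=0.05)
elapsed = time.monotonic() - start
assert 0.04 <= elapsed < 0.5
```

Using `time.monotonic()` rather than `time.time()` keeps the deadline immune to wall-clock adjustments.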
@synchronized(DOWNLOADER_LOCK)
def finish_connect_nw(self, nw: NewsWrapper, response: sabctools.NNTPResponse) -> bool:
@@ -925,13 +956,6 @@ class Downloader(Thread):
elif reset_msg:
logging.debug("Thread %s@%s: %s", nw.thrdnum, nw.server.host, reset_msg)
# Make sure this NewsWrapper is in the idle threads
nw.server.busy_threads.discard(nw)
nw.server.idle_threads.add(nw)
# Make sure it is not in the readable sockets
self.remove_socket(nw)
# Discard the article request which failed
nw.discard(article, count_article_try=count_article_try, retry_article=retry_article)



@@ -255,8 +255,7 @@ def diskfull_mail():
"""Send email about disk full, no templates"""
if cfg.email_full():
return send_email(
T(
"""To: %s
T("""To: %s
From: %s
Date: %s
Subject: SABnzbd reports Disk Full
@@ -266,9 +265,7 @@ Hi,
SABnzbd has stopped downloading, because the disk is almost full.
Please make room and resume SABnzbd manually.
"""
)
% (cfg.email_to.get_string(), cfg.email_from(), get_email_date()),
""") % (cfg.email_to.get_string(), cfg.email_from(), get_email_date()),
cfg.email_to(),
)
else:


@@ -18,6 +18,7 @@
"""
sabnzbd.misc - filesystem operations
"""
import gzip
import os
import pickle
@@ -42,6 +43,7 @@ try:
except ImportError:
pass
import sabctools
import sabnzbd
from sabnzbd.decorators import synchronized, conditional_cache
from sabnzbd.constants import (
@@ -56,7 +58,6 @@ from sabnzbd.constants import (
from sabnzbd.encoding import correct_unknown_encoding, utob, limit_encoded_length
import rarfile
# For Windows: determine executable extensions
if os.name == "nt":
PATHEXT = os.environ.get("PATHEXT", "").lower().split(";")
@@ -1081,7 +1082,14 @@ def save_data(data: Any, _id: str, path: str, do_pickle: bool = True, silent: bo
time.sleep(0.1)
def load_data(data_id: str, path: str, remove: bool = True, do_pickle: bool = True, silent: bool = False) -> Any:
def load_data(
data_id: str,
path: str,
remove: bool = True,
do_pickle: bool = True,
silent: bool = False,
mutable: bool = False,
) -> Any:
"""Read data from disk file"""
path = os.path.join(path, data_id)
@@ -1100,6 +1108,9 @@ def load_data(data_id: str, path: str, remove: bool = True, do_pickle: bool = Tr
except UnicodeDecodeError:
# Could be Python 2 data that we can load using old encoding
data = pickle.load(data_file, encoding="latin1")
elif mutable:
data = bytearray(os.fstat(data_file.fileno()).st_size)
data_file.readinto(data)
else:
data = data_file.read()
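The new `mutable=True` path pre-sizes a `bytearray` from `os.fstat` and fills it in place with `readinto`, avoiding the extra immutable copy that `read()` would produce. The same idiom, self-contained:

```python
import os
import tempfile

payload = b"x" * 1024

with tempfile.TemporaryFile() as f:
    f.write(payload)
    f.seek(0)
    # Pre-size a mutable buffer from the open file's size, then fill it in place
    data = bytearray(os.fstat(f.fileno()).st_size)
    f.readinto(data)

assert len(data) == 1024 and data == payload
```

`os.fstat` on the already-open descriptor avoids a race with the path being replaced between stat and read.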
@@ -1222,7 +1233,7 @@ def directory_is_writable(test_dir: str) -> bool:
return True
def check_filesystem_capabilities(test_dir: str) -> bool:
def check_filesystem_capabilities(test_dir: str, is_download_dir: bool = False) -> bool:
"""Checks if we can write long and unicode filenames to the given directory.
If not on Windows, also check for special chars like slashes and :
Returns True if all OK, otherwise False"""
@@ -1250,9 +1261,24 @@ def check_filesystem_capabilities(test_dir: str) -> bool:
)
allgood = False
# sparse files allow efficient use of empty space in files
if is_download_dir and not check_sparse_and_disable(test_dir):
# Writing to correct file offsets will be disabled, and it won't be possible to flush the article cache
# directly to the destination file
sabnzbd.misc.helpful_warning(T("%s does not support sparse files. Disabling direct write mode."), test_dir)
allgood = False
return allgood
def check_sparse_and_disable(test_dir: str) -> bool:
"""Check if sparse files are supported, otherwise disable direct write mode"""
if sabnzbd.cfg.direct_write() and not is_sparse_supported(test_dir):
sabnzbd.cfg.direct_write.set(False)
return False
return True
def get_win_drives() -> list[str]:
"""Return list of detected drives, adapted from:
http://stackoverflow.com/questions/827371/is-there-a-way-to-list-all-the-available-drive-letters-in-python/827490
@@ -1378,43 +1404,33 @@ def create_work_name(name: str) -> str:
return name.strip()
def nzf_cmp_name(nzf1, nzf2):
"""Comparison function for sorting NZB files.
The comparison will sort .par2 files to the top of the queue followed by .rar files,
they will then be sorted by name.
Note: nzf1 and nzf2 should be NzbFile objects, but we can't import that here
to avoid circular dependencies.
"""
nzf1_name = nzf1.filename.lower()
nzf2_name = nzf2.filename.lower()
# Determine vol-pars
is_par1 = ".vol" in nzf1_name and ".par2" in nzf1_name
is_par2 = ".vol" in nzf2_name and ".par2" in nzf2_name
# mini-par2 in front
if not is_par1 and nzf1_name.endswith(".par2"):
return -1
if not is_par2 and nzf2_name.endswith(".par2"):
return 1
# vol-pars go to the back
if is_par1 and not is_par2:
return 1
if is_par2 and not is_par1:
return -1
# Prioritize .rar files above any other type of file (other than vol-par)
m1 = RAR_RE.search(nzf1_name)
m2 = RAR_RE.search(nzf2_name)
if m1 and not (is_par2 or m2):
return -1
elif m2 and not (is_par1 or m1):
return 1
# Force .rar to come before 'r00'
if m1 and m1.group(1) == ".rar":
nzf1_name = nzf1_name.replace(".rar", ".r//")
if m2 and m2.group(1) == ".rar":
nzf2_name = nzf2_name.replace(".rar", ".r//")
return sabnzbd.misc.cmp(nzf1_name, nzf2_name)
def is_sparse(path: str) -> bool:
"""Check if a path is a sparse file"""
info = os.stat(path)
if sabnzbd.WINDOWS:
return bool(info.st_file_attributes & stat.FILE_ATTRIBUTE_SPARSE_FILE)
# Linux and macOS
if info.st_blocks * 512 < info.st_size:
return True
# Filesystem with SEEK_HOLE (ZFS)
try:
with open(path, "rb") as f:
pos = f.seek(0, os.SEEK_HOLE)
return pos < info.st_size
except (AttributeError, OSError):
pass
return False
def is_sparse_supported(check_dir: str) -> bool:
"""Check if a directory supports sparse files"""
sparse_file = tempfile.NamedTemporaryFile(dir=check_dir, delete=False)
try:
sabctools.sparse(sparse_file.fileno(), 64)
sparse_file.close()
return is_sparse(sparse_file.name)
finally:
os.remove(sparse_file.name)
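The POSIX branch of `is_sparse` relies on `st_blocks` being reported in 512-byte units: a file whose allocated blocks cover less than its logical size contains holes. A sketch of that heuristic, creating a hole by truncating past EOF (this only demonstrates the POSIX path; `sabctools.sparse` is not assumed here):

```python
import os
import tempfile

def probably_sparse(path: str) -> bool:
    """Heuristic mirroring the POSIX branch of is_sparse above."""
    info = os.stat(path)
    # st_blocks is in 512-byte units regardless of the filesystem block size
    return info.st_blocks * 512 < info.st_size

with tempfile.NamedTemporaryFile(delete=False) as f:
    # Truncating past EOF creates a hole on most filesystems
    f.truncate(1 << 20)
    name = f.name

size = os.stat(name).st_size
sparse = probably_sparse(name)
os.remove(name)
assert size == 1 << 20
```

Whether `sparse` ends up `True` depends on the filesystem, which is exactly why the patch probes the download directory at runtime instead of assuming support.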


@@ -895,6 +895,7 @@ SPECIAL_BOOL_LIST = (
"allow_old_ssl_tls",
"enable_season_sorting",
"verify_xff_header",
"direct_write",
)
SPECIAL_VALUE_LIST = (
"downloader_sleep_time",
@@ -1269,7 +1270,7 @@ class ConfigRss:
active_feed,
download=self.__refresh_download,
force=self.__refresh_force,
ignoreFirst=self.__refresh_ignore,
ignore_first=self.__refresh_ignore,
readout=readout,
)
else:


@@ -62,6 +62,7 @@ from sabnzbd.filesystem import userxbit, make_script_path, remove_file, strip_ex
if sabnzbd.WINDOWS:
try:
import winreg
import win32api
import win32process
import win32con
@@ -717,15 +718,7 @@ def get_cache_limit() -> str:
"""
# Calculate, if possible
try:
if sabnzbd.WINDOWS:
# Windows
mem_bytes = get_windows_memory()
elif sabnzbd.MACOS:
# macOS
mem_bytes = get_macos_memory()
else:
# Linux
mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
mem_bytes = get_memory()
# Use 1/4th of available memory
mem_bytes = mem_bytes / 4
@@ -748,36 +741,27 @@ def get_cache_limit() -> str:
return ""
def get_windows_memory() -> int:
"""Use ctypes to extract available memory"""
class MEMORYSTATUSEX(ctypes.Structure):
_fields_ = [
("dwLength", ctypes.c_ulong),
("dwMemoryLoad", ctypes.c_ulong),
("ullTotalPhys", ctypes.c_ulonglong),
("ullAvailPhys", ctypes.c_ulonglong),
("ullTotalPageFile", ctypes.c_ulonglong),
("ullAvailPageFile", ctypes.c_ulonglong),
("ullTotalVirtual", ctypes.c_ulonglong),
("ullAvailVirtual", ctypes.c_ulonglong),
("sullAvailExtendedVirtual", ctypes.c_ulonglong),
]
def __init__(self):
# have to initialize this to the size of MEMORYSTATUSEX
self.dwLength = ctypes.sizeof(self)
super(MEMORYSTATUSEX, self).__init__()
stat = MEMORYSTATUSEX()
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
return stat.ullTotalPhys
def get_macos_memory() -> float:
"""Use system-call to extract total memory on macOS"""
system_output = run_command(["sysctl", "hw.memsize"])
return float(system_output.split()[1])
def get_memory() -> int:
try:
if sabnzbd.WINDOWS:
# Use win32api to get total physical memory
mem_info = win32api.GlobalMemoryStatusEx()
return mem_info["TotalPhys"]
elif sabnzbd.MACOS:
# Use system-call to extract total memory on macOS
system_output = run_command(["sysctl", "-n", "hw.memsize"]).strip()
return int(system_output)
else:
try:
with open("/proc/meminfo") as f:
for line in f:
if line.startswith("MemTotal:"):
return int(line.split()[1]) * 1024
except Exception:
pass
return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
except Exception:
pass
return 0
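The Linux branch of the new `get_memory` parses `/proc/meminfo`, where `MemTotal` is reported in kiB. The parsing step in isolation, against a sample of that file format:

```python
def parse_memtotal(meminfo: str) -> int:
    """Extract MemTotal (in bytes) from /proc/meminfo-style text."""
    for line in meminfo.splitlines():
        if line.startswith("MemTotal:"):
            # The value is reported in kiB, so scale to bytes
            return int(line.split()[1]) * 1024
    return 0

sample = "MemTotal:       16309412 kB\nMemFree:         1184068 kB\n"
assert parse_memtotal(sample) == 16309412 * 1024
```

Falling back to `os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")` when the file is unreadable keeps the function usable on non-Linux POSIX systems.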
@conditional_cache(cache_time=3600)


@@ -70,7 +70,6 @@ from sabnzbd.nzb import NzbObject
import sabnzbd.cfg as cfg
from sabnzbd.constants import Status
# Regex globals
RAR_V3_RE = re.compile(r"\.(?P<ext>part\d*)$", re.I)
RAR_EXTRACTFROM_RE = re.compile(r"^Extracting\sfrom\s(.+)")


@@ -23,6 +23,7 @@ import errno
import socket
import threading
from collections import deque
from contextlib import suppress
from selectors import EVENT_READ, EVENT_WRITE
from threading import Thread
import time
@@ -60,9 +61,9 @@ class NewsWrapper:
"blocking",
"timeout",
"decoder",
"send_buffer",
"nntp",
"connected",
"ready",
"user_sent",
"pass_sent",
"group",
@@ -75,6 +76,7 @@ class NewsWrapper:
"selector_events",
"lock",
"generation",
"tls_wants_write",
)
def __init__(self, server: "sabnzbd.downloader.Server", thrdnum: int, block: bool = False, generation: int = 0):
@@ -82,15 +84,17 @@ class NewsWrapper:
self.thrdnum: int = thrdnum
self.blocking: bool = block
self.generation: int = generation
if getattr(self, "lock", None) is None:
self.lock: threading.Lock = threading.Lock()
self.timeout: Optional[float] = None
self.decoder: Optional[sabctools.Decoder] = None
self.send_buffer = b""
self.nntp: Optional[NNTP] = None
self.connected: bool = False
self.connected: bool = False # TCP/TLS handshake complete
self.ready: bool = False # Auth complete, can serve requests
self.user_sent: bool = False
self.pass_sent: bool = False
self.user_ok: bool = False
@@ -105,7 +109,7 @@ class NewsWrapper:
)
self._response_queue: deque[Optional[sabnzbd.nzb.Article]] = deque()
self.selector_events = 0
self.lock: threading.Lock = threading.Lock()
self.tls_wants_write: bool = False
@property
def article(self) -> Optional["sabnzbd.nzb.Article"]:
@@ -133,7 +137,7 @@ class NewsWrapper:
def finish_connect(self, code: int, message: str) -> None:
"""Perform login options"""
if not (self.server.username or self.server.password or self.force_login):
self.connected = True
self.ready = True
self.user_sent = True
self.user_ok = True
self.pass_sent = True
@@ -141,7 +145,7 @@ class NewsWrapper:
if code == 480:
self.force_login = True
self.connected = False
self.ready = False
self.user_sent = False
self.user_ok = False
self.pass_sent = False
@@ -161,7 +165,7 @@ class NewsWrapper:
self.user_ok = True
self.pass_sent = True
self.pass_ok = True
self.connected = True
self.ready = True
if self.user_ok and not self.pass_sent:
command = utob("authinfo pass %s\r\n" % self.server.password)
@@ -172,7 +176,7 @@ class NewsWrapper:
# Assume that login failed (code 481 or other)
raise NNTPPermanentError(message, code)
else:
self.connected = True
self.ready = True
self.timeout = time.time() + self.server.timeout
@@ -201,7 +205,6 @@ class NewsWrapper:
def on_response(self, response: sabctools.NNTPResponse, article: Optional["sabnzbd.nzb.Article"]) -> None:
"""A response to a NNTP request is received"""
self.concurrent_requests.release()
sabnzbd.Downloader.modify_socket(self, EVENT_READ | EVENT_WRITE)
server = self.server
article_done = response.status_code in (220, 222) and article
@@ -214,11 +217,11 @@ class NewsWrapper:
# Response code depends on request command:
# 220 = ARTICLE, 222 = BODY
if not article_done:
if not self.connected or not article or response.status_code in (281, 381, 480, 481, 482):
if not self.ready or not article or response.status_code in (281, 381, 480, 481, 482):
self.discard(article, count_article_try=False)
if not sabnzbd.Downloader.finish_connect_nw(self, response):
return
if self.connected:
if self.ready:
logging.info("Connecting %s@%s finished", self.thrdnum, server.host)
elif response.status_code == 223:
@@ -237,15 +240,9 @@ class NewsWrapper:
elif response.status_code == 500:
if article.nzf.nzo.precheck:
# Did we try "STAT" already?
if not server.have_stat:
# Hopeless server, just discard
logging.info("Server %s does not support STAT or HEAD, precheck not possible", server.host)
article_done = True
else:
# Assume "STAT" command is not supported
server.have_stat = False
logging.debug("Server %s does not support STAT, trying HEAD", server.host)
# Assume "STAT" command is not supported
server.have_stat = False
logging.debug("Server %s does not support STAT", server.host)
else:
# Assume "BODY" command is not supported
server.have_body = False
@@ -296,7 +293,7 @@ class NewsWrapper:
generation = self.generation
# NewsWrapper is being reset
if not self.decoder:
if self.decoder is None:
return 0, None
# Receive data into the decoder pre-allocated buffer
@@ -314,17 +311,31 @@ class NewsWrapper:
self.timeout = time.time() + self.server.timeout
self.decoder.process(bytes_recv)
for response in self.decoder:
if self.generation != generation:
break
with self.lock:
# Re-check under lock to avoid racing with hard_reset
if self.generation != generation or not self._response_queue:
break
article = self._response_queue.popleft()
if on_response:
on_response(response.status_code, response.message)
self.on_response(response, article)
if self.decoder:
for response in self.decoder:
with self.lock:
# Check generation under lock to avoid racing with hard_reset
if self.generation != generation or not self._response_queue:
break
article = self._response_queue.popleft()
if on_response:
on_response(response.status_code, response.message)
self.on_response(response, article)
# After each response this socket may need to be made available to write the next request,
# or removed from socket monitoring to prevent hot looping.
if self.prepare_request():
# There is either a next_request or an inflight request
# If there is a next_request to send, ensure the socket is registered for write events
# Check before calling modify_socket to avoid taking locks on the hot path
if self.next_request and self.selector_events != EVENT_READ | EVENT_WRITE:
sabnzbd.Downloader.modify_socket(self, EVENT_READ | EVENT_WRITE)
else:
# Only remove the socket if it's not SSL or has no pending data, otherwise the recursive call may
# call prepare_request again and find a request, but the socket would have already been removed.
if not self.server.ssl or not self.nntp or not self.nntp.sock.pending():
# No further work for this socket
sabnzbd.Downloader.remove_socket(self)
# The SSL-layer might still contain data even though the socket does not. Another Downloader-loop would
# not identify this socket anymore as it is not returned by select(). So, we have to forcefully trigger
@@ -333,86 +344,101 @@ class NewsWrapper:
return bytes_recv, pending
return bytes_recv, None
def prepare_request(self) -> bool:
"""Queue an article request if appropriate."""
server = self.server
# Do not pipeline requests until authentication has completed (ready)
if self.ready or not self._response_queue:
server_ready = (
server.active
and not server.restart
and not (
sabnzbd.Downloader.no_active_jobs()
or sabnzbd.Downloader.shutdown
or sabnzbd.Downloader.paused_for_postproc
)
)
if server_ready:
# Queue the next article if none exists
if not self.next_request and (article := server.get_article()):
self.next_request = self.body(article)
return True
else:
# Server not ready, discard any queued next_request
if self.next_request and self.next_request[1]:
self.discard(self.next_request[1], count_article_try=False, retry_article=True)
self.next_request = None
# Return True if there is work queued or in flight
return bool(self.next_request or self._response_queue)
def write(self):
"""Send data to server"""
server = self.server
try:
# First, try to flush any remaining data
if self.send_buffer:
sent = self.nntp.sock.send(self.send_buffer)
self.send_buffer = self.send_buffer[sent:]
if self.send_buffer:
# Still unsent data, wait for next EVENT_WRITE
# Flush any buffered data
if self.nntp.write_buffer:
sent = self.nntp.sock.send(self.nntp.write_buffer)
self.nntp.write_buffer = self.nntp.write_buffer[sent:]
# If buffer still has data, wait for next write opportunity
if self.nntp.write_buffer:
return
if self.connected:
if (
server.active
and not server.restart
and not (
sabnzbd.Downloader.paused
or sabnzbd.Downloader.shutdown
or sabnzbd.Downloader.paused_for_postproc
)
):
# Prepare the next request
if not self.next_request and (article := server.get_article()):
self.next_request = self.body(article)
elif self.next_request and self.next_request[1]:
# Discard the next request
self.discard(self.next_request[1], count_article_try=False, retry_article=True)
self.next_request = None
# If available, try to send new command
if self.prepare_request():
# Nothing to send, but requests are already in flight
if not self.next_request:
sabnzbd.Downloader.modify_socket(self, EVENT_READ)
return
# If no pending buffer, try to send new command
if not self.send_buffer and self.next_request:
if self.concurrent_requests.acquire(blocking=False):
command, article = self.next_request
self.next_request = None
if article:
nzo = article.nzf.nzo
if nzo.removed_from_queue or nzo.status is Status.PAUSED and nzo.priority is not FORCE_PRIORITY:
self.discard(article, count_article_try=False, retry_article=True)
self.concurrent_requests.release()
self.next_request = None
return
self._response_queue.append(article)
if sabnzbd.LOG_ALL:
logging.debug("Thread %s@%s: %s", self.thrdnum, server.host, command)
try:
sent = self.nntp.sock.send(command)
if sent < len(command):
# Partial send, store remainder
self.send_buffer = command[sent:]
except (BlockingIOError, ssl.SSLWantWriteError):
# Can't send now, store full command
self.send_buffer = command
# Non-blocking send - buffer any unsent data
sent = self.nntp.sock.send(command)
if sent < len(command):
logging.debug("%s@%s: Partial send", self.thrdnum, server.host)
self.nntp.write_buffer = command[sent:]
self._response_queue.append(article)
self.next_request = None
else:
# Concurrency limit reached
# Concurrency limit reached; wait until a response is read to prevent hot looping on EVENT_WRITE
sabnzbd.Downloader.modify_socket(self, EVENT_READ)
else:
# Is it safe to shut down this socket?
if (
not self.send_buffer
and not self.next_request
and not self._response_queue
and (not server.active or server.restart or not self.timeout or time.time() > self.timeout)
):
# Make socket available again
server.busy_threads.discard(self)
server.idle_threads.add(self)
sabnzbd.Downloader.remove_socket(self)
except (BlockingIOError, ssl.SSLWantWriteError):
# Socket not currently writable — just try again later
return
# No further work for this socket
sabnzbd.Downloader.remove_socket(self)
except ssl.SSLWantWriteError:
# Socket not ready for writing, keep buffer and wait for next write event
pass
except ssl.SSLWantReadError:
# SSL renegotiation needs read first
sabnzbd.Downloader.modify_socket(self, EVENT_READ)
except BlockingIOError:
# Socket not ready for writing, keep buffer and wait for next write event
pass
except socket.error as err:
logging.info("Looks like server closed connection: %s", err)
sabnzbd.Downloader.reset_nw(self, "Server broke off connection", warn=True)
logging.info("Looks like server closed connection: %s, type: %s", err, type(err))
sabnzbd.Downloader.reset_nw(self, "Server broke off connection", warn=True, wait=False)
except Exception:
logging.error(T("Suspect error in downloader"))
logging.info("Traceback: ", exc_info=True)
sabnzbd.Downloader.reset_nw(self, "Server broke off connection", warn=True)
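The `write_buffer` introduced on `NNTP` handles partial sends on a non-blocking socket: whatever `send` does not accept is kept and flushed on the next `EVENT_WRITE`. The buffering logic in isolation, with a fake socket standing in for the real one:

```python
class FakeSocket:
    """Accepts at most `cap` bytes per send, like a busy non-blocking socket."""
    def __init__(self, cap):
        self.cap = cap
        self.sent = b""

    def send(self, data):
        chunk = data[: self.cap]
        self.sent += chunk
        return len(chunk)

def buffered_send(sock, write_buffer, command):
    # Flush previously buffered bytes first, then the new command;
    # whatever does not fit is returned for the next EVENT_WRITE
    data = write_buffer + command
    sent = sock.send(data)
    return data[sent:]

sock = FakeSocket(cap=5)
buf = buffered_send(sock, b"", b"BODY <id>\r\n")
while buf:
    buf = buffered_send(sock, buf, b"")
assert sock.sent == b"BODY <id>\r\n"
```

Clearing the buffer in `close()` (as the patch does) matters: stale bytes must never be replayed on a connection that is being torn down.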
@synchronized(DOWNLOADER_LOCK)
def hard_reset(self, wait: bool = True):
"""Destroy and restart"""
with self.lock:
@@ -427,11 +453,11 @@ class NewsWrapper:
if article := self._response_queue.popleft():
self.discard(article, count_article_try=False, retry_article=True)
if self.nntp:
self.nntp.close(send_quit=self.connected)
self.nntp = None
if self.nntp:
sabnzbd.Downloader.remove_socket(self)
self.nntp.close(send_quit=self.ready)
self.nntp = None
with self.lock:
# Reset all variables (including the NNTP connection) and increment the generation counter
self.__init__(self.server, self.thrdnum, generation=self.generation + 1)
@@ -470,13 +496,13 @@ class NewsWrapper:
self.server.host,
self.server.port,
self.thrdnum,
self.connected,
self.ready,
)
class NNTP:
# Pre-define attributes to save memory
__slots__ = ("nw", "addrinfo", "error_msg", "sock", "fileno", "closed")
__slots__ = ("nw", "addrinfo", "error_msg", "sock", "fileno", "closed", "write_buffer")
def __init__(self, nw: NewsWrapper, addrinfo: AddrInfo):
self.nw: NewsWrapper = nw
@@ -487,6 +513,9 @@ class NNTP:
# Prevent closing this socket until it's done connecting
self.closed = False
# Buffer for non-blocking writes
self.write_buffer: bytes = b""
# Create SSL-context if it is needed and not created yet
if self.nw.server.ssl and not self.nw.server.ssl_context:
# Setup the SSL socket
@@ -586,6 +615,7 @@ class NNTP:
# Locked, so it can't interleave with any of the Downloader "__nw" actions
with DOWNLOADER_LOCK:
if not self.closed:
self.nw.connected = True
sabnzbd.Downloader.add_socket(self.nw)
except OSError as e:
self.error(e)
@@ -643,6 +673,8 @@ class NNTP:
else:
logging.warning(msg)
self.nw.server.warning = msg
# No reset-warning needed, above logging is sufficient
sabnzbd.Downloader.reset_nw(self.nw)
@synchronized(DOWNLOADER_LOCK)
def close(self, send_quit: bool):
@@ -650,10 +682,12 @@ class NNTP:
Locked to match connect(), even though most likely the caller already holds the same lock."""
# Set status first, so any calls in connect/error are handled correctly
self.closed = True
self.write_buffer = b""
try:
if send_quit:
self.sock.sendall(b"QUIT\r\n")
time.sleep(0.01)
with suppress(socket.error):
self.sock.sendall(b"QUIT\r\n")
time.sleep(0.01)
self.sock.close()
except Exception as e:
logging.info("%s@%s: Failed to close socket (error=%s)", self.nw.thrdnum, self.nw.server.host, str(e))


@@ -20,7 +20,6 @@
sabnzbd.notifier - Send notifications to any notification services
"""
import sys
import os.path
import logging


@@ -20,7 +20,7 @@ sabnzbd.nzb - NZB-related classes and functionality
"""
# Article-related classes
from sabnzbd.nzb.article import Article, ArticleSaver, TryList, TRYLIST_LOCK
from sabnzbd.nzb.article import Article, ArticleSaver, TryList
# File-related classes
from sabnzbd.nzb.file import NzbFile, NzbFileSaver, SkippedNzbFile
@@ -30,7 +30,6 @@ from sabnzbd.nzb.object import (
NzbObject,
NzbObjectSaver,
NzoAttributeSaver,
NZO_LOCK,
NzbEmpty,
NzbRejected,
NzbPreQueueRejected,
@@ -42,7 +41,6 @@ __all__ = [
"Article",
"ArticleSaver",
"TryList",
"TRYLIST_LOCK",
# File
"NzbFile",
"NzbFileSaver",
@@ -51,7 +49,6 @@ __all__ = [
"NzbObject",
"NzbObjectSaver",
"NzoAttributeSaver",
"NZO_LOCK",
"NzbEmpty",
"NzbRejected",
"NzbPreQueueRejected",


@@ -18,6 +18,7 @@
"""
sabnzbd.article - Article and TryList classes for NZB downloading
"""
import logging
import threading
from typing import Optional
@@ -27,13 +28,10 @@ from sabnzbd.downloader import Server
from sabnzbd.filesystem import get_new_id
from sabnzbd.decorators import synchronized
##############################################################################
# Trylist
##############################################################################
TRYLIST_LOCK = threading.RLock()
class TryList:
"""TryList keeps track of which servers have been tried for a specific article"""
@@ -45,32 +43,32 @@ class TryList:
# Sets are faster than lists
self.try_list: set[Server] = set()
@synchronized()
def server_in_try_list(self, server: Server) -> bool:
"""Return whether specified server has been tried"""
with TRYLIST_LOCK:
return server in self.try_list
return server in self.try_list
@synchronized()
def all_servers_in_try_list(self, all_servers: set[Server]) -> bool:
"""Check if all servers have been tried"""
with TRYLIST_LOCK:
return all_servers.issubset(self.try_list)
return all_servers.issubset(self.try_list)
@synchronized()
def add_to_try_list(self, server: Server):
"""Register server as having been tried already"""
with TRYLIST_LOCK:
# Sets cannot contain duplicate items
self.try_list.add(server)
# Sets cannot contain duplicate items
self.try_list.add(server)
@synchronized()
def remove_from_try_list(self, server: Server):
"""Remove server from list of tried servers"""
with TRYLIST_LOCK:
# Discard does not require the item to be present
self.try_list.discard(server)
# Discard does not require the item to be present
self.try_list.discard(server)
@synchronized()
def reset_try_list(self):
"""Clean the list"""
with TRYLIST_LOCK:
self.try_list = set()
self.try_list = set()
def __getstate__(self):
"""Save the servers"""
@@ -98,6 +96,7 @@ ArticleSaver = (
"on_disk",
"nzf",
"crc32",
"decoded_size",
)
@@ -105,7 +104,7 @@ class Article(TryList):
"""Representation of one article"""
# Pre-define attributes to save memory
__slots__ = ArticleSaver + ("fetcher", "fetcher_priority", "tries")
__slots__ = ArticleSaver + ("fetcher", "fetcher_priority", "tries", "lock")
def __init__(self, article, article_bytes, nzf):
super().__init__()
@@ -120,11 +119,14 @@ class Article(TryList):
self.file_size: Optional[int] = None
self.data_begin: Optional[int] = None
self.data_size: Optional[int] = None
self.decoded_size: Optional[int] = None # Size of the decoded article
self.on_disk: bool = False
self.crc32: Optional[int] = None
self.nzf = nzf # NzbFile reference
self.nzf: "sabnzbd.nzb.NzbFile" = nzf # NzbFile reference
# Share NzbFile lock for file-wide atomicity of try-list ops
self.lock: threading.RLock = nzf.lock
@synchronized(TRYLIST_LOCK)
@synchronized()
def reset_try_list(self):
"""In addition to resetting the try list, also reset fetcher so all servers
are tried again. Locked so fetcher setting changes are also protected."""
@@ -132,16 +134,17 @@ class Article(TryList):
self.fetcher_priority = 0
super().reset_try_list()
@synchronized(TRYLIST_LOCK)
def allow_new_fetcher(self, remove_fetcher_from_try_list: bool = True):
"""Let article get new fetcher and reset try lists of file and job.
Locked so all resets are performed at once"""
if remove_fetcher_from_try_list:
self.remove_from_try_list(self.fetcher)
self.fetcher = None
self.tries = 0
self.nzf.reset_try_list()
self.nzf.nzo.reset_try_list()
Locked so all resets are performed at once.
Must acquire nzo lock first, then nzf lock (which is self.lock) to prevent deadlock."""
with self.nzf.nzo.lock, self.lock:
if remove_fetcher_from_try_list:
self.remove_from_try_list(self.fetcher)
self.fetcher = None
self.tries = 0
self.nzf.reset_try_list()
self.nzf.nzo.reset_try_list()
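The docstring above fixes a classic deadlock: two threads acquiring the same pair of locks in opposite orders can each block waiting for the other. Enforcing one global order (nzo first, then nzf) makes the pair safe. A minimal sketch of the ordered-acquisition rule:

```python
import threading

# Stand-ins for the nzo and nzf locks; always acquired in this order
nzo_lock = threading.RLock()
nzf_lock = threading.RLock()
counter = 0

def reset_all():
    global counter
    # Every caller acquires nzo first, then nzf, so no cycle can form
    with nzo_lock, nzf_lock:
        counter += 1

threads = [threading.Thread(target=reset_all) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 8
```

Sharing the NzbFile lock with its articles (as the patch does via `self.lock = nzf.lock`) reduces the number of distinct locks, which shrinks the space of possible ordering violations.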
def get_article(self, server: Server, servers: list[Server]):
"""Return article when appropriate for specified server"""
@@ -189,6 +192,14 @@ class Article(TryList):
logging.info("Article %s unavailable on all servers, discarding", self.article)
return False
@property
def can_direct_write(self) -> bool:
return bool(
self.data_size # decoder sets data_size to 0 when offsets or file_size are outside allowed range
and self.nzf.type == "yenc"
and self.nzf.prepare_filepath()
)
def __getstate__(self):
"""Save to pickle file, selecting attributes"""
dict_ = {}
@@ -205,6 +216,7 @@ class Article(TryList):
except KeyError:
# Handle new attributes
setattr(self, item, None)
self.lock = threading.RLock()
super().__setstate__(dict_.get("try_list", []))
self.fetcher = None
self.fetcher_priority = 0


@@ -18,14 +18,15 @@
"""
sabnzbd.nzb.file - NzbFile class for representing files in NZB downloads
"""
import datetime
import logging
import os
import threading
from typing import Optional
from typing import Optional, Any
import sabctools
from sabnzbd.nzb.article import TryList, Article, TRYLIST_LOCK
from sabnzbd.nzb.article import TryList, Article
from sabnzbd.downloader import Server
from sabnzbd.filesystem import (
sanitize_filename,
@@ -35,6 +36,7 @@ from sabnzbd.filesystem import (
get_new_id,
save_data,
load_data,
RAR_RE,
)
from sabnzbd.misc import int_conv, subject_name_extractor
from sabnzbd.decorators import synchronized
@@ -75,12 +77,13 @@ class NzbFile(TryList):
"""Representation of one file consisting of multiple articles"""
# Pre-define attributes to save memory
__slots__ = NzbFileSaver + ("lock",)
__slots__ = NzbFileSaver + ("lock", "file_lock", "assembler_next_index")
def __init__(self, date, subject, raw_article_db, file_bytes, nzo):
"""Setup object"""
super().__init__()
self.lock = threading.RLock()
self.lock: threading.RLock = threading.RLock()
self.file_lock: threading.RLock = threading.RLock()
self.date: datetime.datetime = date
self.type: Optional[str] = None
@@ -108,6 +111,7 @@ class NzbFile(TryList):
self.crc32: Optional[int] = 0
self.assembled: bool = False
self.md5of16k: Optional[bytes] = None
self.assembler_next_index: int = 0
# Add first article to decodetable, this way we can check
# if this is maybe a duplicate nzf
@@ -133,6 +137,13 @@ class NzbFile(TryList):
# All imported
self.import_finished = True
@property
@synchronized()
def assembler_next_article(self) -> Optional[Article]:
if (next_index := self.assembler_next_index) < len(self.decodetable):
return self.decodetable[next_index]
return None
def finish_import(self):
"""Load the article objects from disk"""
logging.debug("Finishing import on %s", self.filename)
@@ -147,21 +158,21 @@ class NzbFile(TryList):
# Mark safe to continue
self.import_finished = True
@synchronized()
def add_article(self, article_info):
"""Add article to object database and return article object"""
article = Article(article_info[0], article_info[1], self)
with self.lock:
self.articles[article] = article
self.decodetable.append(article)
self.articles[article] = article
self.decodetable.append(article)
return article
@synchronized()
def remove_article(self, article: Article, success: bool) -> int:
"""Handle completed article, possibly end of file"""
with self.lock:
if self.articles.pop(article, None) is not None:
if success:
self.bytes_left -= article.bytes
return len(self.articles)
if self.articles.pop(article, None) is not None:
if success:
self.bytes_left -= article.bytes
return len(self.articles)
def set_par2(self, setname, vol, blocks):
"""Designate this file as a par2 file"""
@@ -170,31 +181,31 @@ class NzbFile(TryList):
self.vol = vol
self.blocks = int_conv(blocks)
@synchronized()
def update_crc32(self, crc32: Optional[int], length: int) -> None:
if self.crc32 is None or crc32 is None:
self.crc32 = None
else:
self.crc32 = sabctools.crc32_combine(self.crc32, crc32, length)
@synchronized()
def get_articles(self, server: Server, servers: list[Server], fetch_limit: int):
"""Get next articles to be downloaded"""
articles = server.article_queue
with self.lock:
for article in self.articles:
if article := article.get_article(server, servers):
articles.append(article)
if len(articles) >= fetch_limit:
return
for article in self.articles:
if article := article.get_article(server, servers):
articles.append(article)
if len(articles) >= fetch_limit:
return
self.add_to_try_list(server)
@synchronized(TRYLIST_LOCK)
@synchronized()
def reset_all_try_lists(self):
"""Reset all try lists. Locked so reset is performed
for all items at the same time without chance of another
thread changing any of the items while we are resetting"""
with self.lock:
for art in self.articles:
art.reset_try_list()
for art in self.articles:
art.reset_try_list()
self.reset_try_list()
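The `@synchronized(NZO_LOCK)` → `@synchronized()` changes throughout this diff switch from one module-level lock to each instance's own `self.lock`. A minimal sketch of a decorator with that fallback behaviour (hypothetical; the real `sabnzbd.decorators.synchronized` may differ in detail):

```python
import functools
import threading


def synchronized(lock=None):
    """Run the wrapped method under `lock`, or under the instance's own self.lock."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # Fall back to the per-instance lock when no shared lock was given
            with (lock if lock is not None else self.lock):
                return func(self, *args, **kwargs)

        return wrapper

    return decorator


class Counter:
    def __init__(self):
        self.lock = threading.RLock()
        self.value = 0

    @synchronized()
    def bump(self):
        self.value += 1


c = Counter()
c.bump()
print(c.value)  # 1
```

Because `RLock` is reentrant, a `@synchronized()` method can safely call another `@synchronized()` method on the same instance.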
def first_article_processed(self) -> bool:
@@ -237,11 +248,90 @@ class NzbFile(TryList):
except Exception:
pass
def __enter__(self):
self.lock.acquire()
def __exit__(self, exc_type, exc_val, exc_tb):
self.lock.release()
@synchronized()
def contiguous_offset(self) -> int:
"""The next file offset to write to continue sequentially.
Note: there could be non-sequential direct writes already beyond this point
"""
# If last written article has valid yenc headers
if self.assembler_next_index:
article = self.decodetable[self.assembler_next_index - 1]
if article.on_disk and article.data_size:
return article.data_begin + article.data_size
# Fallback to summing decoded size
offset = 0
for article in self.decodetable[: self.assembler_next_index]:
if not article.on_disk:
break
if article.data_size:
offset = article.data_begin + article.decoded_size
elif article.decoded_size is not None:
# queues from <= 4.5.5 do not have this attribute
offset += article.decoded_size
elif os.path.exists(self.filepath):
# fallback for <= 4.5.5 because files were always opened in append mode, so use the file size
return os.path.getsize(self.filepath)
return offset
@synchronized()
def contiguous_ready_bytes(self) -> int:
"""How many bytes from assembler_next_index onward are ready to write to file contiguously?"""
bytes_ready: int = 0
for article in self.decodetable[self.assembler_next_index :]:
if not article.decoded:
break
if article.on_disk:
continue
if article.decoded_size is None:
break
bytes_ready += article.decoded_size
return bytes_ready
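The contiguity helpers above can be exercised standalone; this sketch mirrors only `contiguous_ready_bytes`, using `SimpleNamespace` stand-ins for `Article` (the `decoded`/`on_disk`/`decoded_size` attributes are taken from the diff):

```python
from types import SimpleNamespace


def contiguous_ready_bytes(decodetable, next_index):
    """Count decoded-but-unwritten bytes from next_index onward, stopping at the first gap."""
    ready = 0
    for article in decodetable[next_index:]:
        if not article.decoded:
            break  # gap: article not yet decoded
        if article.on_disk:
            continue  # already written, contributes nothing new
        if article.decoded_size is None:
            break  # size unknown (e.g. legacy queue data)
        ready += article.decoded_size
    return ready


# Three articles: first already on disk, second decoded in cache, third not decoded
table = [
    SimpleNamespace(decoded=True, on_disk=True, decoded_size=5),
    SimpleNamespace(decoded=True, on_disk=False, decoded_size=5),
    SimpleNamespace(decoded=False, on_disk=False, decoded_size=None),
]
print(contiguous_ready_bytes(table, 0))  # 5
```

The assembler can use this count to decide when enough sequential data is buffered to be worth a write, without touching articles beyond the first undecoded one.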
def sort_key(self) -> tuple[Any, ...]:
"""Comparison function for sorting NZB files.
The comparison will sort .par2 files to the top of the queue followed by .rar files;
files are then sorted by name.
"""
name = self.filename.lower()
base, ext = os.path.splitext(name)
is_par2 = ext == ".par2"
is_vol_par2 = is_par2 and ".vol" in base
is_mini_par2 = is_par2 and not is_vol_par2
m = RAR_RE.search(name)
is_rar = bool(m)
is_main_rar = is_rar and m.group(1) == "rar"
# Initially group by mini-par2, other files, vol-par2
if is_mini_par2:
tier = 0
elif is_vol_par2:
tier = 2
else:
tier = 1
if tier == 1:
if is_rar and m:
# strip matched RAR suffix including leading dot (.part01.rar, .rar, .r00, ...)
group_base = name[: m.start()]
local_group = 0
type_rank = 0 if is_main_rar else 1
else:
# nfo, sfv, sample.mkv, etc.
group_base = base
local_group = 1
type_rank = 0
else:
# mini/vol par2 ignore the group base
group_base = ""
local_group = 0
type_rank = 0
return tier, group_base, local_group, type_rank, name
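The tiering above can be sketched standalone. This uses a hypothetical stand-in for `sabnzbd.filesystem.RAR_RE` (the real pattern differs) and a simplified `sort_key` with the same tuple shape:

```python
import os
import re

# Hypothetical stand-in for sabnzbd.filesystem.RAR_RE; group(1) captures the
# suffix without the leading dot, e.g. "rar", "r00", "part01.rar"
RAR_RE = re.compile(r"\.(part\d+\.rar|rar|[rstuv]\d{2})$")


def sort_key(filename):
    name = filename.lower()
    base, ext = os.path.splitext(name)
    is_par2 = ext == ".par2"
    is_vol_par2 = is_par2 and ".vol" in base
    m = RAR_RE.search(name)
    # Tiers: mini-par2 first (0), vol-par2 last (2), everything else in between (1)
    if is_par2 and not is_vol_par2:
        tier = 0
    elif is_vol_par2:
        tier = 2
    else:
        tier = 1
    if tier == 1 and m:
        group_base = name[: m.start()]  # strip matched rar suffix incl. leading dot
        local_group = 0
        type_rank = 0 if m.group(1) == "rar" else 1  # .rar before .rNN volumes
    else:
        group_base = base if tier == 1 else ""
        local_group = 1 if tier == 1 else 0
        type_rank = 0
    return tier, group_base, local_group, type_rank, name


files = ["show.vol00+1.par2", "show.r00", "show.nfo", "show.rar", "show.par2"]
print(sorted(files, key=sort_key))
# ['show.par2', 'show.rar', 'show.r00', 'show.nfo', 'show.vol00+1.par2']
```

`NzbFile.__lt__` delegates to this key, so a plain `self.files.sort()` replaces the old `functools.cmp_to_key(nzf_cmp_name)` comparison.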
def __getstate__(self):
"""Save to pickle file, selecting attributes"""
@@ -259,11 +349,18 @@ class NzbFile(TryList):
except KeyError:
# Handle new attributes
setattr(self, item, None)
super().__setstate__(dict_.get("try_list", []))
self.lock = threading.RLock()
self.file_lock = threading.RLock()
self.assembler_next_index = 0
if isinstance(self.articles, list):
# Converted from list to dict
self.articles = {x: x for x in self.articles}
for article in self.articles:
article.lock = self.lock
super().__setstate__(dict_.get("try_list", []))
def __lt__(self, other: "NzbFile"):
return self.sort_key() < other.sort_key()
def __eq__(self, other: "NzbFile"):
"""Assume it's the same file if the number bytes and first article


@@ -18,19 +18,19 @@
"""
sabnzbd.nzb.object - NzbObject class for representing NZB download jobs
"""
import os
import time
import re
import logging
import datetime
import threading
import functools
import difflib
from typing import Any, Optional, Union, BinaryIO, Deque
# SABnzbd modules
import sabnzbd
from sabnzbd.nzb.article import TryList, Article, TRYLIST_LOCK
from sabnzbd.nzb.article import TryList, Article
from sabnzbd.nzb.file import NzbFile
from sabnzbd.constants import (
GIGI,
@@ -93,7 +93,6 @@ from sabnzbd.filesystem import (
strip_extensions,
get_ext,
create_work_name,
nzf_cmp_name,
RAR_RE,
)
from sabnzbd.par2file import FilePar2Info, has_par2_in_filename, analyse_par2, parse_par2_file, is_par2_file
@@ -186,9 +185,6 @@ NzbObjectSaver = (
NzoAttributeSaver = ("cat", "pp", "script", "priority", "final_name", "password", "url")
# Lock to prevent errors when saving the NZO data
NZO_LOCK = threading.RLock()
class NzbObject(TryList):
def __init__(
@@ -210,6 +206,7 @@ class NzbObject(TryList):
dup_check: bool = True,
):
super().__init__()
self.lock: threading.RLock = threading.RLock()
# Use original filename as basis
self.work_name = self.filename = filename
@@ -567,7 +564,7 @@ class NzbObject(TryList):
logging.info("File %s added to queue", nzf.filename)
@synchronized(NZO_LOCK)
@synchronized()
def remove_nzf(self, nzf: NzbFile) -> bool:
if nzf in self.files:
self.files.remove(nzf)
@@ -581,7 +578,7 @@ class NzbObject(TryList):
"""Sort the files in the NZO based on name and type
and then optimize for unwanted extensions search.
"""
self.files.sort(key=functools.cmp_to_key(nzf_cmp_name))
self.files.sort()
# In the hunt for Unwanted Extensions:
# The file with the unwanted extension often is in the first or the last rar file
@@ -610,7 +607,7 @@ class NzbObject(TryList):
except Exception:
logging.debug("The lastrar swap did not go well")
@synchronized(TRYLIST_LOCK)
@synchronized()
def reset_all_try_lists(self):
"""Reset all try lists. Locked so reset is performed
for all items at the same time without chance of another
@@ -619,7 +616,7 @@ class NzbObject(TryList):
nzf.reset_all_try_lists()
self.reset_try_list()
@synchronized(NZO_LOCK)
@synchronized()
def postpone_pars(self, parset: str):
"""Move all vol-par files matching 'parset' to the extrapars table"""
# Create new extrapars if it didn't already exist
@@ -650,7 +647,7 @@ class NzbObject(TryList):
# Also re-parse all filenames in case par2 came after first articles
self.verify_all_filenames_and_resort()
@synchronized(NZO_LOCK)
@synchronized()
def handle_par2(self, nzf: NzbFile, filepath):
"""Check if file is a par2 and build up par2 collection"""
# Need to remove it from the other set it might be in
@@ -702,7 +699,7 @@ class NzbObject(TryList):
self.renamed_file(get_filename(new_fname), nzf.filename)
nzf.filename = get_filename(new_fname)
@synchronized(NZO_LOCK)
@synchronized()
def promote_par2(self, nzf: NzbFile):
"""In case of a broken par2 or missing par2, move another
of the same set to the top (if we can find it)
@@ -763,7 +760,7 @@ class NzbObject(TryList):
# Not enough
return 0
@synchronized(NZO_LOCK)
@synchronized()
def remove_article(self, article: Article, success: bool):
"""Remove article from the NzbFile and do check if it can succeed"""
job_can_succeed = True
@@ -1054,7 +1051,7 @@ class NzbObject(TryList):
if self.duplicate:
self.duplicate = DuplicateStatus.DUPLICATE_IGNORED
@synchronized(NZO_LOCK)
@synchronized()
def add_parfile(self, parfile: NzbFile) -> bool:
"""Add parfile to the files to be downloaded
Add it to the start so we try it first
@@ -1069,7 +1066,7 @@ class NzbObject(TryList):
return True
return False
@synchronized(NZO_LOCK)
@synchronized()
def remove_extrapar(self, parfile: NzbFile):
"""Remove par file from any/all sets"""
for parset in list(self.extrapars):
@@ -1080,7 +1077,7 @@ class NzbObject(TryList):
if not self.extrapars[parset]:
self.extrapars.pop(parset)
@synchronized(NZO_LOCK)
@synchronized()
def prospective_add(self, nzf: NzbFile):
"""Add par2 files to compensate for missing articles"""
# Get some blocks!
@@ -1151,7 +1148,7 @@ class NzbObject(TryList):
return False
return True
@synchronized(NZO_LOCK)
@synchronized()
def set_download_report(self):
"""Format the stats for the history information"""
# Pretty-format the per-server stats
@@ -1206,7 +1203,7 @@ class NzbObject(TryList):
self.set_unpack_info("RSS", rss_feed, unique=True)
self.set_unpack_info("Source", self.url or self.filename, unique=True)
@synchronized(NZO_LOCK)
@synchronized()
def increase_bad_articles_counter(self, bad_article_type: str):
"""Record information about bad articles. Should be called before
register_article, which triggers the availability check."""
@@ -1269,7 +1266,7 @@ class NzbObject(TryList):
# No articles for this server, block for next time
self.add_to_try_list(server)
@synchronized(NZO_LOCK)
@synchronized()
def move_top_bulk(self, nzf_ids: list[str]):
self.cleanup_nzf_ids(nzf_ids)
if nzf_ids:
@@ -1286,7 +1283,7 @@ class NzbObject(TryList):
if target == keys:
break
@synchronized(NZO_LOCK)
@synchronized()
def move_bottom_bulk(self, nzf_ids):
self.cleanup_nzf_ids(nzf_ids)
if nzf_ids:
@@ -1303,7 +1300,7 @@ class NzbObject(TryList):
if target == keys:
break
@synchronized(NZO_LOCK)
@synchronized()
def move_up_bulk(self, nzf_ids, cleanup=True):
if cleanup:
self.cleanup_nzf_ids(nzf_ids)
@@ -1320,7 +1317,7 @@ class NzbObject(TryList):
self.files[pos - 1] = nzf
self.files[pos] = tmp_nzf
@synchronized(NZO_LOCK)
@synchronized()
def move_down_bulk(self, nzf_ids, cleanup=True):
if cleanup:
self.cleanup_nzf_ids(nzf_ids)
@@ -1373,7 +1370,7 @@ class NzbObject(TryList):
self.renamed_file(yenc_filename, nzf.filename)
nzf.filename = yenc_filename
@synchronized(NZO_LOCK)
@synchronized()
def verify_all_filenames_and_resort(self):
"""Verify all filenames based on par2 info and then re-sort files.
Locked so all files are verified at once without interruptions.
@@ -1388,7 +1385,7 @@ class NzbObject(TryList):
if self.direct_unpacker:
self.direct_unpacker.set_volumes_for_nzo()
@synchronized(NZO_LOCK)
@synchronized()
def renamed_file(self, name_set, old_name=None):
"""Save renames at various stages (Download/PP)
to be used on Retry. Accepts strings and dicts.
@@ -1416,7 +1413,7 @@ class NzbObject(TryList):
"""Return remaining bytes"""
return self.bytes - self.bytes_tried
@synchronized(NZO_LOCK)
@synchronized()
def purge_data(self, delete_all_data=True):
"""Remove (all) job data"""
logging.info(
@@ -1428,6 +1425,7 @@ class NzbObject(TryList):
# Remove all cached files
sabnzbd.ArticleCache.purge_articles(self.saved_articles)
sabnzbd.Assembler.clear_ready_bytes(*self.files)
# Delete all, or just basic files
if self.futuretype:
@@ -1447,7 +1445,7 @@ class NzbObject(TryList):
if nzf_id in self.files_table:
return self.files_table[nzf_id]
@synchronized(NZO_LOCK)
@synchronized()
def set_unpack_info(self, key: str, msg: str, setname: Optional[str] = None, unique: bool = False):
"""Builds a dictionary containing the stage name (key) and a message
If unique is present, it will only have a single line message
@@ -1475,7 +1473,7 @@ class NzbObject(TryList):
# Make sure it's updated in the interface
sabnzbd.misc.history_updated()
@synchronized(NZO_LOCK)
@synchronized()
def save_to_disk(self):
"""Save job's admin to disk"""
self.save_attribs()
@@ -1512,7 +1510,7 @@ class NzbObject(TryList):
# Rest is to be used directly in the NZO-init flow
return attribs["cat"], attribs["pp"], attribs["script"]
@synchronized(NZO_LOCK)
@synchronized()
def build_pos_nzf_table(self, nzf_ids: list[str]) -> dict[int, NzbFile]:
pos_nzf_table = {}
for nzf_id in nzf_ids:
@@ -1523,7 +1521,7 @@ class NzbObject(TryList):
return pos_nzf_table
@synchronized(NZO_LOCK)
@synchronized()
def cleanup_nzf_ids(self, nzf_ids: list[str]):
for nzf_id in nzf_ids[:]:
if nzf_id in self.files_table:
@@ -1664,6 +1662,7 @@ class NzbObject(TryList):
except KeyError:
# Handle new attributes
setattr(self, item, None)
self.lock = threading.RLock()
super().__setstate__(dict_.get("try_list", []))
# Set non-transferable values


@@ -18,6 +18,7 @@
"""
sabnzbd.nzbparser - Parse and import NZB files
"""
import os
import bz2
import gzip


@@ -730,20 +730,16 @@ class NzbQueue:
articles_left, file_done, post_done = nzo.remove_article(article, success)
# Write data if file is done or at trigger time
# Skip if the file is already queued, since all available articles will then be written
if (
file_done
or (article.lowest_partnum and nzf.filename_checked and not nzf.import_finished)
or (articles_left and (articles_left % sabnzbd.ArticleCache.assembler_write_trigger) == 0)
):
if not nzo.precheck:
# The type is only set if sabctools could decode the article
if nzf.type:
sabnzbd.Assembler.process(nzo, nzf, file_done)
elif sabnzbd.par2file.has_par2_in_filename(nzf.filename):
# Broken par2 file, try to get another one
nzo.promote_par2(nzf)
if not nzo.precheck:
# Mark as on_disk so assembler knows it can skip this article
if not success:
article.on_disk = True
# The type is only set if sabctools could decode the article
if nzf.type:
sabnzbd.Assembler.process(nzo, nzf, file_done, article=article)
elif sabnzbd.par2file.has_par2_in_filename(nzf.filename):
# Broken par2 file, try to get another one
nzo.promote_par2(nzf)
# Save bookkeeping in case of crash
if file_done and (nzo.next_save is None or time.time() > nzo.next_save):
@@ -783,6 +779,7 @@ class NzbQueue:
if not nzo.nzo_id:
self.add(nzo, quiet=True)
self.remove(nzo.nzo_id, cleanup=False)
sabnzbd.Assembler.clear_ready_bytes(*nzo.files)
sabnzbd.PostProcessor.process(nzo)
def actives(self, grabs: bool = True) -> int:
@@ -893,7 +890,7 @@ class NzbQueue:
if nzf.all_servers_in_try_list(active_servers):
# Check for articles where all active servers have already been tried
with nzf:
with nzf.lock:
for article in nzf.articles:
if article.all_servers_in_try_list(active_servers):
logging.debug(
@@ -904,6 +901,29 @@ class NzbQueue:
logging.info("Resetting bad trylist for file %s in job %s", nzf.filename, nzo.final_name)
nzf.reset_try_list()
if not nzf.assembled and not nzf.articles:
logging.debug("Not assembled but no remaining articles for file %s", nzf.filename)
if not nzf.assembled and (next_article := nzf.assembler_next_article):
logging.debug(
"Next article to assemble for file %s is %s, decoded: %s, on_disk: %s, decoded_size: %s",
nzf.filename,
next_article,
next_article.decoded,
next_article.on_disk,
next_article.decoded_size,
)
for article in nzo.first_articles.copy():
logging.debug(
"First article for file %s is %s, decoded: %s, on_disk: %s, decoded_size: %s, has_fetcher: %s, tries: %s",
article.nzf.filename,
article,
article.decoded,
article.on_disk,
article.decoded_size,
article.fetcher is not None,
article.tries,
)
# Reset main try list, minimal performance impact
logging.info("Resetting bad trylist for job %s", nzo.final_name)


@@ -66,14 +66,12 @@ def MSG_BAD_NEWS():
def MSG_BAD_PORT():
return (
T(
r"""
T(r"""
SABnzbd needs a free tcp/ip port for its internal web server.<br>
Port %s on %s was tried, but it is not available.<br>
Some other software uses the port or SABnzbd is already running.<br>
<br>
Please restart SABnzbd with a different port number."""
)
Please restart SABnzbd with a different port number.""")
+ """<br>
<br>
%s<br>
@@ -85,14 +83,12 @@ def MSG_BAD_PORT():
def MSG_BAD_HOST():
return (
T(
r"""
T(r"""
SABnzbd needs a valid host address for its internal web server.<br>
You have specified an invalid address.<br>
Safe values are <b>localhost</b> and <b>0.0.0.0</b><br>
<br>
Please restart SABnzbd with a proper host address."""
)
Please restart SABnzbd with a proper host address.""")
+ """<br>
<br>
%s<br>
@@ -104,15 +100,13 @@ def MSG_BAD_HOST():
def MSG_BAD_QUEUE():
return (
T(
r"""
T(r"""
SABnzbd detected saved data from another SABnzbd version<br>
but cannot re-use the data of the other program.<br><br>
You may want to finish your queue first with the other program.<br><br>
After that, start this program with the "--clean" option.<br>
This will erase the current queue and history!<br>
SABnzbd read the file "%s"."""
)
SABnzbd read the file "%s".""")
+ """<br>
<br>
%s<br>
@@ -123,13 +117,11 @@ def MSG_BAD_QUEUE():
def MSG_BAD_TEMPL():
return T(
r"""
return T(r"""
SABnzbd cannot find its web interface files in %s.<br>
Please install the program again.<br>
<br>
"""
)
""")
def MSG_OTHER():
@@ -137,14 +129,12 @@ def MSG_OTHER():
def MSG_SQLITE():
return T(
r"""
return T(r"""
SABnzbd detected that the file sqlite3.dll is missing.<br><br>
Some poorly designed virus-scanners remove this file.<br>
Please check your virus-scanner, try to re-install SABnzbd and complain to your virus-scanner vendor.<br>
<br>
"""
)
""")
def panic_message(panic_code, a=None, b=None):
@@ -280,8 +270,7 @@ def error_page_401(status, message, traceback, version):
def error_page_404(status, message, traceback, version):
"""Custom handler for 404 error, redirect to main page"""
return (
r"""
return r"""
<html>
<head>
<script type="text/javascript">
@@ -292,6 +281,4 @@ def error_page_404(status, message, traceback, version):
</head>
<body><br/></body>
</html>
"""
% cfg.url_base()
)
""" % cfg.url_base()


@@ -18,6 +18,7 @@
"""
sabnzbd.par2file - All par2-related functionality
"""
import hashlib
import logging
import os


@@ -18,6 +18,7 @@
"""
sabnzbd.postproc - threaded post-processing of jobs
"""
import os
import logging
import functools
@@ -95,7 +96,6 @@ import sabnzbd.utils.rarvolinfo as rarvolinfo
import sabnzbd.utils.checkdir
import sabnzbd.deobfuscate_filenames as deobfuscate
MAX_FAST_JOB_COUNT = 3


@@ -24,7 +24,6 @@ import subprocess
import logging
import time
##############################################################################
# Power management for Windows
##############################################################################


File diff suppressed because it is too large


@@ -933,11 +933,9 @@ SKIN_TEXT = {
"wizard-test-server-required": TT("Click on Test Server before continuing"), #: Tooltip for disabled Next button
"restore-backup": TT("Restore backup"),
# Special
"yourRights": TT(
"""
"yourRights": TT("""
SABnzbd comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it under certain conditions.
It is licensed under the GNU GENERAL PUBLIC LICENSE Version 2 or (at your option) any later version.
"""
),
"""),
}

View File

@@ -4,7 +4,6 @@
Note: extension always contains a leading dot
"""
import puremagic
import os
import sys


@@ -47,7 +47,6 @@ application callbacks) are always unicode instances.
"""
__author__ = "Christopher Stawarz <cstawarz@csail.mit.edu>"
__version__ = "1.1.1"
__revision__ = int("$Revision: 6125 $".split()[1])
@@ -59,7 +58,6 @@ import re
import socket
import sys
################################################################################
#
# Global setup


@@ -19,6 +19,7 @@
"""
sabnzbd.utils.rarvolinfo - Find out volume number and/or original extension of a rar file. Useful with obfuscated files
"""
import os
import rarfile


@@ -20,11 +20,9 @@
sabnzbd.utils.sleepless - Keep macOS awake by setting power assertions
"""
import objc
from Foundation import NSBundle
# https://developer.apple.com/documentation/iokit/iopowersources.h?language=objc
IOKit = NSBundle.bundleWithIdentifier_("com.apple.framework.IOKit")


@@ -35,6 +35,7 @@ http://upnp.org/specs/arch/UPnP-arch-DeviceArchitecture-v1.1.pdf
"""
import logging
import socket
import uuid


@@ -6,5 +6,5 @@
# You MUST use double quotes (so " and not ')
# Do not forget to update the appdata file for every major release!
__version__ = "4.6.0Beta2"
__version__ = "5.0.0Beta1"
__baseline__ = "unknown"


@@ -22,7 +22,7 @@ for item in os.environ:
# More intelligent parsing:
try:
(scriptname, directory, orgnzbname, jobname, reportnumber, category, group, postprocstatus, url) = sys.argv
scriptname, directory, orgnzbname, jobname, reportnumber, category, group, postprocstatus, url = sys.argv
except Exception:
print("No SAB compliant number of commandline parameters found (should be 8):", len(sys.argv) - 1)
sys.exit(1) # non-zero return code


@@ -19,6 +19,7 @@
tests.conftest - Setup pytest fixtures
These have to be separate otherwise SABnzbd is started multiple times!
"""
import shutil
import subprocess
import sys


Binary file not shown.


Binary file not shown.


@@ -18,6 +18,7 @@
"""
tests.test_api - Tests for API functions
"""
import cherrypy
import pytest

tests/test_assembler.py Normal file

@@ -0,0 +1,315 @@
#!/usr/bin/python3 -OO
# Copyright 2007-2025 by The SABnzbd-Team (sabnzbd.org)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
tests.test_assembler - Testing functions in assembler.py
"""
from types import SimpleNamespace
from zlib import crc32
from sabnzbd.assembler import Assembler
from sabnzbd.nzb import Article, NzbFile, NzbObject
from tests.testhelper import *
class TestAssembler:
@pytest.fixture
def assembler(self, tmp_path):
"""Prepare a sabnzbd assembler, tmp_path is used because C libraries require a real filesystem."""
try:
sabnzbd.Downloader = SimpleNamespace(paused=False)
sabnzbd.ArticleCache = SimpleNamespace()
sabnzbd.Assembler = Assembler()
# Create a minimal NzbObject / NzbFile
self.nzo = NzbObject("test.nzb")
admin_path = str(tmp_path / "admin")
with mock.patch.object(
NzbObject,
"admin_path",
new_callable=mock.PropertyMock,
) as admin_path_mock:
admin_path_mock.return_value = admin_path
self.nzo.download_path = str(tmp_path / "download")
os.mkdir(self.nzo.download_path)
os.mkdir(self.nzo.admin_path)
# NzbFile requires some constructor args; use dummy but valid values
self.nzf = NzbFile(
date=self.nzo.avg_date,
subject="test-file",
raw_article_db=[[None, None]],
file_bytes=0,
nzo=self.nzo,
)
self.nzo.files.append(self.nzf)
self.nzf.type = "yenc" # for writes from article cache
assert self.nzf.prepare_filepath() is not None
# Clear the state after prepare_filepath
self.nzf.articles.clear()
self.nzf.decodetable.clear()
with mock.patch.object(Assembler, "write", wraps=Assembler.write) as mocked_assembler_write:
yield mocked_assembler_write
# All articles should be marked on_disk
for article in self.nzf.decodetable:
assert article.on_disk is True
# File should be marked assembled
assert self.nzf.assembled is True
finally:
# Reset values after test
del sabnzbd.Downloader
del sabnzbd.ArticleCache
del sabnzbd.Assembler
def _make_article(
self, nzf: NzbFile, offset: int, data: bytearray, decoded: bool = True, can_direct_write: bool = True
) -> tuple[Article, bytearray]:
article = Article("msgid", len(data), nzf)
article.decoded = decoded
article.data_begin = offset
article.data_size = len(data) if can_direct_write else None
article.file_size = nzf.bytes
article.decoded_size = len(data)
article.crc32 = crc32(data)
article.tries = 1 # force aborts if never tried
return article, data
def _make_request(
self,
nzf: NzbFile,
articles: list[tuple[Article, bytearray]],
):
article_data = {}
for article, raw in articles:
nzf.decodetable.append(article)
article_data[article] = raw
expected = b"".join(article_data.values())
nzf.bytes = len(expected)
sabnzbd.ArticleCache.load_article = mock.Mock(side_effect=lambda article: article_data.get(article))
for article, _ in articles:
article.file_size = nzf.bytes
return article_data.values(), expected
@staticmethod
def _assert_expected_content(nzf: NzbFile, expected: bytes):
with open(nzf.filepath, "rb") as f:
content = f.read()
assert content == expected
assert nzf.assembler_next_index == len(nzf.decodetable)
assert nzf.contiguous_offset() == nzf.decodetable[0].file_size
def test_assemble_direct_write(self, assembler):
"""Pure direct write mode"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello"), can_direct_write=True),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), can_direct_write=True),
],
)
assert self.nzf.contiguous_offset() == 0
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=True)
self._assert_expected_content(self.nzf, expected)
def test_assemble_direct_write_aborted_to_append(self, assembler):
"""
Start in direct_write, but encounter an article that cannot be direct-written.
Assembler should abort direct_write and switch to append mode.
"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello"), can_direct_write=True),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), can_direct_write=False),
self._make_article(self.nzf, offset=10, data=bytearray(b"12345"), can_direct_write=True),
],
)
# [0] direct_write, [1] append, [2] append
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=True)
self._assert_expected_content(self.nzf, expected)
def test_assemble_direct_append_direct_append(self, assembler):
"""Out-of-order direct write via cache, append fills the gap."""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello"), can_direct_write=True),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), can_direct_write=False),
self._make_article(
self.nzf, offset=10, data=bytearray(b"12345"), decoded=False, can_direct_write=False
),
self._make_article(
self.nzf, offset=15, data=bytearray(b"abcde"), decoded=False, can_direct_write=True
), # Cache direct
],
)
# [0] direct_write, [1] append
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=False, direct_write=True)
assert assembler.call_count == 2
assert self.nzf.contiguous_offset() == 10
# [3] direct_write
article = self.nzf.decodetable[3]
article.decoded = True
Assembler.assemble_article(article, sabnzbd.ArticleCache.load_article(article))
assert assembler.call_count == 3
assert self.nzf.contiguous_offset() == 10 # was not a sequential write
# [2] append
article = self.nzf.decodetable[2]
article.decoded = True
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=True)
assert assembler.call_count == 4
self._assert_expected_content(self.nzf, expected)
def test_assemble_direct_write_aborted_to_append_second_attempt(self, assembler):
"""Second attempt after initial partial assemble, including revert to append mode."""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello"), can_direct_write=True),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), can_direct_write=False),
self._make_article(
self.nzf, offset=10, data=bytearray(b"12345"), decoded=False, can_direct_write=False
),
],
)
# [0] direct_write, [1] append
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=False, direct_write=True)
assert self.nzf.decodetable[2].on_disk is False
self.nzf.decodetable[2].decoded = True
# [2] append
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=True)
self._assert_expected_content(self.nzf, expected)
def test_assemble_append_direct_second_attempt(self, assembler):
"""Second attempt after initial partial assemble"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello"), can_direct_write=False),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), decoded=False, can_direct_write=True),
],
)
# [0] append
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=False, direct_write=False)
self.nzf.decodetable[1].decoded = True
# [1] append
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=True)
self._assert_expected_content(self.nzf, expected)
def test_assemble_append_only(self, assembler):
"""Pure append mode"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"abcd"), can_direct_write=False),
self._make_article(self.nzf, offset=0, data=bytearray(b"efg"), can_direct_write=False),
],
)
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=False)
self._assert_expected_content(self.nzf, expected)
def test_assemble_append_second_attempt(self, assembler):
"""Pure append mode, second attempt"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"abcd"), can_direct_write=False),
self._make_article(self.nzf, offset=0, data=bytearray(b"efg"), decoded=False, can_direct_write=False),
],
)
# [0] append
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=False, direct_write=False)
assert self.nzf.assembled is False
self.nzf.decodetable[1].decoded = True
# [1] append
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=False)
self._assert_expected_content(self.nzf, expected)
def test_assemble_append_first_not_decoded(self, assembler):
"""Pure append mode, second attempt"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"abcd"), decoded=False, can_direct_write=False),
self._make_article(self.nzf, offset=0, data=bytearray(b"efg"), can_direct_write=False),
],
)
# Nothing written
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=False, direct_write=False)
assert not os.path.exists(self.nzf.filepath)
self.nzf.decodetable[0].decoded = True
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=False)
self._assert_expected_content(self.nzf, expected)
def test_force_append(self, assembler):
"""Force in direct_write mode, then fill in gaps in append mode"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello")),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), decoded=False, can_direct_write=False),
self._make_article(self.nzf, offset=10, data=bytearray(b"12345")),
self._make_article(self.nzf, offset=15, data=bytearray(b"abcd"), decoded=False, can_direct_write=False),
self._make_article(self.nzf, offset=19, data=bytearray(b"efg")),
],
)
# [0] direct, [2] direct, [4] direct
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=True, direct_write=True)
assert assembler.call_count == 3
assert self.nzf.assembled is False
# [1] append, [3] append
self.nzf.decodetable[1].decoded = True
self.nzf.decodetable[3].decoded = True
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=False)
assert assembler.call_count == 5
self._assert_expected_content(self.nzf, expected)
def test_force_force_direct(self, assembler):
"""Force the first, then force the last, then direct the gap"""
data, expected = self._make_request(
self.nzf,
[
self._make_article(self.nzf, offset=0, data=bytearray(b"hello")),
self._make_article(self.nzf, offset=5, data=bytearray(b"world"), decoded=False),
self._make_article(self.nzf, offset=10, data=bytearray(b"12345"), decoded=False),
],
)
# [0] direct
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=False, direct_write=True)
assert assembler.call_count == 1
assert self.nzf.assembler_next_index == 1
# Client restart
self.nzf.assembler_next_index = 0
# force: [2] direct
self.nzf.decodetable[2].decoded = True
Assembler.assemble(self.nzo, self.nzf, file_done=False, allow_non_contiguous=True, direct_write=True)
assert assembler.call_count == 2
assert self.nzf.assembler_next_index == 1
# [1] direct
self.nzf.decodetable[1].decoded = True
Assembler.assemble(self.nzo, self.nzf, file_done=True, allow_non_contiguous=False, direct_write=True)
assert assembler.call_count == 3
self._assert_expected_content(self.nzf, expected)
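The assembler tests above exercise two write strategies: plain append (articles must arrive in order) and direct write (seek to each article's offset, so parts can land out of order and gaps are filled on a later pass). A minimal, self-contained sketch of the two strategies on a plain temporary file, not using SABnzbd's actual Assembler:

```python
import os
import tempfile

def append_write(path: str, chunks: list) -> None:
    # Append mode: chunks must arrive in order; offsets are implicit.
    with open(path, "ab") as f:
        for chunk in chunks:
            f.write(chunk)

def direct_write(path: str, parts: list) -> None:
    # Direct-write mode: seek to each part's offset, so parts may arrive
    # out of order; unwritten ranges stay as gaps until a later pass.
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:
        for offset, chunk in parts:
            f.seek(offset)
            f.write(chunk)

path = os.path.join(tempfile.mkdtemp(), "article.bin")
# First pass: non-contiguous parts, like the direct pass in test_force_append
direct_write(path, [(0, b"hello"), (10, b"12345")])
# Second pass fills the gap, like the follow-up pass once [1] is decoded
direct_write(path, [(5, b"world")])
with open(path, "rb") as f:
    result = f.read()
```

After both passes `result` is the contiguous `b"helloworld12345"`, which is the same end state the tests verify via `_assert_expected_content`.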


@@ -18,6 +18,7 @@
"""
tests.test_cfg - Testing functions in cfg.py
"""
import sys
import pytest


@@ -18,6 +18,7 @@
"""
tests.test_config - Tests of config methods
"""
from sabnzbd.filesystem import long_path
from tests.testhelper import *
import shutil
@@ -35,7 +36,6 @@ from sabnzbd.constants import (
from sabnzbd import config
from sabnzbd import filesystem
DEF_CHAIN_FILE = "server.chain"


@@ -18,6 +18,7 @@
"""
tests.test_decoder- Testing functions in decoder.py
"""
import binascii
import os
import pytest
@@ -71,10 +72,10 @@ class TestUuDecoder:
data, and the expected result of uu decoding for the generated message.
"""
article_id = "test@host" + os.urandom(8).hex() + ".sab"
article = Article(article_id, randint(4321, 54321), None)
article.lowest_partnum = True if part in ("begin", "single") else False
# Mock an nzf so results from hashing and filename handling can be stored
article.nzf = mock.Mock()
mock_nzf = mock.Mock()
article = Article(article_id, randint(4321, 54321), mock_nzf)
article.lowest_partnum = True if part in ("begin", "single") else False
# Store the message data and the expected decoding result
data = []
@@ -140,8 +141,10 @@ class TestUuDecoder:
],
)
def test_short_data(self, raw_data):
mock_nzf = mock.Mock()
article = Article("foo@bar", 4321, mock_nzf)
with pytest.raises(decoder.BadUu):
assert decoder.decode_uu(Article("foo@bar", 4321, None), self._response(raw_data))
assert decoder.decode_uu(article, self._response(raw_data))
@pytest.mark.parametrize(
"raw_data",
@@ -158,7 +161,8 @@ class TestUuDecoder:
],
)
def test_missing_uu_begin(self, raw_data):
article = Article("foo@bar", 1234, None)
mock_nzf = mock.Mock()
article = Article("foo@bar", 1234, mock_nzf)
article.lowest_partnum = True
filler = b"\r\n" * 4
with pytest.raises(decoder.BadUu):
@@ -226,7 +230,8 @@ class TestUuDecoder:
],
)
def test_broken_uu(self, bad_data):
article = Article("foo@bar", 4321, None)
mock_nzf = mock.Mock()
article = Article("foo@bar", 4321, mock_nzf)
article.lowest_partnum = False
filler = b"\r\n".join(VALID_UU_LINES[:4]) + b"\r\n"
with pytest.raises(decoder.BadData):


@@ -18,6 +18,7 @@
"""
Testing SABnzbd deobfuscate module
"""
import os.path
import random
import shutil


@@ -18,6 +18,7 @@
"""
tests.test_dirscanner - Testing functions in dirscanner.py
"""
import asyncio
import pyfakefs.fake_filesystem_unittest as ffs

tests/test_downloader.py Normal file

@@ -0,0 +1,281 @@
#!/usr/bin/python3 -OO
# Copyright 2007-2025 by The SABnzbd-Team (sabnzbd.org)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
tests.test_downloader - Test the downloader connection state machine
"""
import socket
import threading
import sabnzbd.cfg
from sabnzbd.downloader import Server, Downloader
from sabnzbd.newswrapper import NewsWrapper
from sabnzbd.get_addrinfo import AddrInfo
from tests.testhelper import *
class FakeNNTPServer:
"""Minimal NNTP server for testing connection state machine"""
def __init__(self, host: str = "127.0.0.1", port: int = 0):
self.host: str = host
self.port: int = port
self.server_socket = None
self.connections = []
self._stop = threading.Event()
self._thread = None
def start(self):
self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.server_socket.bind((self.host, self.port))
self.port = self.server_socket.getsockname()[1] # Get assigned port
self.server_socket.listen(5)
self.server_socket.settimeout(0.5)
self._thread = threading.Thread(target=self._accept_loop, daemon=True)
self._thread.start()
def _accept_loop(self):
while not self._stop.is_set():
try:
conn, addr = self.server_socket.accept()
self.connections.append(conn)
threading.Thread(target=self._handle_client, args=(conn,), daemon=True).start()
except socket.timeout:
continue
def _handle_client(self, conn):
try:
conn.sendall(b"200 Welcome\r\n")
# Keep connection alive until stop
while not self._stop.is_set():
conn.settimeout(0.5)
try:
data = conn.recv(1024)
if not data:
break
if data.startswith(b"QUIT"):
conn.sendall(b"205 Goodbye\r\n")
break
# Simple auth responses
if data.startswith(b"authinfo user"):
conn.sendall(b"381 More auth required\r\n")
elif data.startswith(b"authinfo pass"):
conn.sendall(b"281 Auth accepted\r\n")
except socket.timeout:
continue
except Exception:
pass
finally:
conn.close()
def stop(self):
self._stop.set()
for conn in self.connections:
try:
conn.close()
except Exception:
pass
if self.server_socket:
self.server_socket.close()
if self._thread:
self._thread.join(timeout=2)
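The handshake FakeNNTPServer performs can be reduced to a few lines: accept a connection, send the NNTP greeting, and answer QUIT. A self-contained sketch of that exchange (plain sockets, no SABnzbd code), useful for seeing what the NewsWrapper under test actually receives:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    # Accept a single client, greet it, and acknowledge QUIT
    conn, _ = srv.accept()
    conn.sendall(b"200 Welcome\r\n")
    if conn.recv(1024).startswith(b"QUIT"):
        conn.sendall(b"205 Goodbye\r\n")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port), timeout=2)
greeting = client.recv(1024)   # b"200 Welcome\r\n"
client.sendall(b"QUIT\r\n")
goodbye = client.recv(1024)    # b"205 Goodbye\r\n"
client.close()
srv.close()
```

Because `listen()` is called before the client connects, the connect succeeds even if `accept()` has not run yet; the pending connection waits in the backlog, which is why the fixture can start the thread lazily.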
@pytest.fixture
def fake_nntp_server(request):
"""Fixture that provides a fake NNTP server"""
params = getattr(request, "param", {})
# For fail_connect, don't start a server at all - use a closed port
if params.get("fail_connect"):
# Find a port and don't listen on it
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]
server = FakeNNTPServer(port=port)
server.port = port # Don't start, just hold the port number
yield server
sock.close()
return
server = FakeNNTPServer()
server.start()
yield server
server.stop()
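The `fail_connect` branch above relies on a small trick: binding a socket without calling `listen()` reserves the port, so a connect attempt fails deterministically instead of racing another process that might grab a freed port. A minimal sketch of just that trick (on Linux the connect is refused immediately; on some platforms it may instead time out, hence the timeout guard):

```python
import socket

# Reserve a port but never listen on it
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
port = holder.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(1)
try:
    client.connect(("127.0.0.1", port))
    refused = False
except (ConnectionRefusedError, socket.timeout):
    # Bound-but-not-listening: the kernel refuses (or the attempt times out)
    refused = True
finally:
    client.close()
    holder.close()
```

Keeping `holder` open for the duration of the test (as the fixture does) is what prevents the OS from handing the port to anyone else mid-test.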
@pytest.fixture
def mock_downloader(mocker):
"""Create a minimal mock Downloader for testing"""
import selectors
downloader = mock.Mock(spec=Downloader)
downloader.selector = selectors.DefaultSelector()
downloader.shutdown = False
downloader.paused = False
downloader.paused_for_postproc = False
# Use real implementations for socket management
downloader.add_socket = lambda nw: Downloader.add_socket(downloader, nw)
downloader.remove_socket = lambda nw: Downloader.remove_socket(downloader, nw)
downloader.finish_connect_nw = lambda nw, resp: Downloader.finish_connect_nw(downloader, nw, resp)
downloader.reset_nw = lambda nw, reset_msg=None, warn=False, wait=True, count_article_try=True, retry_article=True, article=None: Downloader.reset_nw(
downloader, nw, reset_msg, warn, wait, count_article_try, retry_article, article
)
sabnzbd.Downloader = downloader
yield downloader
del sabnzbd.Downloader
@pytest.fixture
def test_server(request, fake_nntp_server, mocker):
"""Create a Server pointing to the fake NNTP server"""
addrinfo = AddrInfo(
*socket.getaddrinfo(fake_nntp_server.host, fake_nntp_server.port, socket.AF_INET, socket.SOCK_STREAM)[0]
)
params = getattr(request, "param", {})
server = Server(
server_id="test_server",
displayname="Test Server",
host=fake_nntp_server.host,
port=fake_nntp_server.port,
timeout=params.get("timeout", 5),
threads=0, # Don't auto-create connections
priority=0,
use_ssl=False,
ssl_verify=0,
ssl_ciphers="",
pipelining_requests=mocker.Mock(return_value=1),
)
server.addrinfo = addrinfo
return server
class TestConnectionStateMachine:
"""Test the init_connect / socket_connected / connected state transitions"""
def test_socket_connected_set_after_successful_connect_no_auth(self, test_server, mock_downloader):
"""socket_connected should be True after NNTP.connect succeeds"""
nw = NewsWrapper(test_server, thrdnum=1)
test_server.idle_threads.add(nw)
nw.init_connect()
# Wait for async connect to complete
for _ in range(50):
if nw.connected:
break
time.sleep(0.1)
assert nw.connected is True
assert nw.ready is False
assert nw.nntp is not None
# Read the 200 Welcome
nw.nntp.sock.setblocking(True)
nw.nntp.sock.settimeout(2)
try:
nw.read()
except Exception:
pass
# Server has no user/pass so finish_connect_nw goes directly to the ready state
assert nw.connected is True
assert nw.ready is True
assert nw.nntp is not None
def test_socket_connected_enables_auth_flow(self, test_server, mock_downloader):
"""connected should be True after auth completes"""
nw = NewsWrapper(test_server, thrdnum=1)
test_server.idle_threads.add(nw)
test_server.username = "user"
test_server.password = "pass"
nw.init_connect()
# Wait for the socket to connect
for _ in range(50):
if nw.connected:
break
time.sleep(0.1)
assert nw.connected is True
assert nw.ready is False
assert nw.user_sent is False
# Read the 200 Welcome
nw.nntp.sock.setblocking(True)
nw.nntp.sock.settimeout(2)
try:
nw.read()
except Exception:
pass
# Auth should have started
assert nw.user_sent is True
assert nw.next_request is not None # Auth command queued
def test_hard_reset_clears_all_state(self, test_server, mock_downloader):
"""hard_reset should clear nntp, socket_connected, and connected"""
nw = NewsWrapper(test_server, thrdnum=1)
test_server.idle_threads.add(nw)
nw.init_connect()
# Wait for connection
for _ in range(50):
if nw.connected:
break
time.sleep(0.1)
assert nw.nntp is not None
assert nw.connected is True
nw.hard_reset(wait=False)
assert nw.nntp is None
assert nw.connected is False
assert nw.ready is False
@pytest.mark.parametrize("fake_nntp_server", [{"fail_connect": True}], indirect=True)
@pytest.mark.parametrize("test_server", [{"timeout": 0.1}], indirect=True)
def test_failed_connect_allows_retry(self, fake_nntp_server, test_server, mock_downloader):
"""Failed connect should set error_msg (and optionally clear nntp)"""
nw = NewsWrapper(test_server, thrdnum=1)
test_server.idle_threads.add(nw)
nw.init_connect()
# Wait for connect to fail (connection refused)
for _ in range(100):
if nw.nntp is None:
break
time.sleep(0.05)
# Connection should have failed and been reset
assert nw.ready is False
assert nw.connected is False
assert nw.nntp is None
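The `for _ in range(50): ... time.sleep(...)` polling loops repeated in the tests above can be captured in one helper. This is a sketch; `wait_for` is a hypothetical name, not part of SABnzbd's test helpers:

```python
import threading
import time

def wait_for(condition, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll condition() until it returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: a flag flipped by a background event, like nw.connected
state = {"connected": False}
threading.Timer(0.1, lambda: state.update(connected=True)).start()
result = wait_for(lambda: state["connected"], timeout=2)
```

With such a helper, `for _ in range(50): if nw.connected: break; time.sleep(0.1)` collapses to `assert wait_for(lambda: nw.connected)`, and the timeout is stated once instead of being encoded in iteration counts.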

Some files were not shown because too many files have changed in this diff.