Mirror of https://github.com/sabnzbd/sabnzbd.git, synced 2026-02-07 22:34:10 -05:00
Download flow (Downloader + NewsWrapper)
- Job ingestion
  - NZBs arrive via UI/API/URL; `urlgrabber.py` fetches remote NZBs, `nzbparser.py` turns them into `NzbObject`s, and `nzbqueue.NzbQueue` stores ordered jobs with priorities and categories.
- Queue to articles
  - When servers need work, `NzbQueue.get_articles` (called from `Server.get_article` in `downloader.py`) hands out batches of `Article`s per server, respecting retention, priority, and forced/paused items.
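The per-server batching above can be sketched as follows. This is a minimal illustration, not SABnzbd's actual implementation: the `Job` class, the `tried` set on each article, and the `get_articles` signature here are all hypothetical stand-ins for the real `NzbQueue` logic.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                      # lower value = served first
    name: str = field(compare=False)
    articles: list = field(compare=False, default_factory=list)

def get_articles(queue, server, fetch_limit):
    """Hand out up to fetch_limit articles this server has not tried yet,
    scanning jobs in priority order (hypothetical sketch)."""
    batch = []
    for job in sorted(queue):          # priority order
        for art in job.articles:
            tried = art.setdefault("tried", set())
            if server in tried:
                continue               # this server already attempted it
            tried.add(server)
            batch.append(art)
            if len(batch) == fetch_limit:
                return batch
    return batch
```

Tracking which servers have tried an article is what lets failed articles be retried elsewhere without handing the same work back to the server that just failed.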
- Downloader setup
  - The `Downloader` thread loads server configs (`config.get_servers`), instantiates `Server` objects (per host/port/SSL/threads), and spawns a `NewsWrapper` instance per configured connection.
  - A `selectors.DefaultSelector` watches all sockets; `BPSMeter` tracks throughput and speed limits; timers manage server penalties/restarts.
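A minimal sketch of that selector-driven pattern, using a local `socketpair` in place of a real NNTP connection; the `data` payload attached at registration time (here just a string tag) is how an event loop like this routes a readiness event back to the owning connection object:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
conn, server_side = socket.socketpair()   # stand-in for a real NNTP socket
conn.setblocking(False)
server_side.setblocking(False)

# `data` identifies which connection this socket belongs to.
sel.register(conn, selectors.EVENT_READ, data="newswrapper-1")

# Simulate the server sending its greeting.
server_side.sendall(b"200 news.example.com ready\r\n")

received = {}
for key, _mask in sel.select(timeout=1.0):
    received[key.data] = key.fileobj.recv(4096)

sel.unregister(conn)
conn.close()
server_side.close()
```

One selector scales to many sockets in a single thread, which is why the main loop can service every configured connection without a thread per socket.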
- Connection establishment (`NewsWrapper.init_connect` → `NNTP.connect`)
  - `Server.request_addrinfo` resolves the fastest address; `NewsWrapper` builds an `NNTP` socket, wraps it in SSL if needed, sets it non-blocking, and registers it with the selector.
  - The first server greeting (200/201) is queued; `finish_connect` drives the login handshake (`AUTHINFO USER`/`PASS`) and handles temporary (480) or permanent (400/502) errors.
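The login handshake can be pictured as a small state machine driven by NNTP reply codes (200/201 greeting, 381 password requested, 281 authentication accepted per RFC 4643, 480 authentication required, 400/502 refusal). The function name, state strings, and return shape below are hypothetical, not SABnzbd's real `finish_connect`:

```python
def finish_connect_step(state, code):
    """One step of a hypothetical login state machine.
    Returns (next_state, command_to_send_or_None); raises on
    permanent refusal."""
    if code in (400, 502):
        raise ConnectionError(f"server refused connection ({code})")
    if state == "greeting" and code in (200, 201):
        return "user_sent", "AUTHINFO USER <username>"
    if state == "user_sent" and code == 381:
        return "pass_sent", "AUTHINFO PASS <password>"
    if state in ("user_sent", "pass_sent") and code == 281:
        return "ready", None
    if code == 480:
        # Temporary: the server wants (re-)authentication.
        return "user_sent", "AUTHINFO USER <username>"
    raise ValueError(f"unexpected reply {code} in state {state}")
```

Because the socket is non-blocking, each reply arrives as a separate selector event, so the handshake has to be resumable one step at a time rather than a single blocking exchange.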
- Request scheduling & pipelining
  - `write()` chooses the next article command (`STAT`/`HEAD` for precheck, `BODY` or `ARTICLE` otherwise).
  - Concurrency is limited by `server.pipelining_requests`; commands are queued and sent with `sock.sendall`, so there is no local send buffer.
  - Sockets stay registered for `EVENT_WRITE`: without write-readiness events, a temporarily full kernel send buffer could stall queued commands when there is nothing to read, so WRITE interest is needed to resume sending promptly.
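The bounded-in-flight pattern above can be sketched as a small queue: at most `limit` commands (cf. `server.pipelining_requests`) are outstanding, and each response frees a slot for the next queued command. The `Pipeline` class and its method names are illustrative, not SABnzbd's code:

```python
from collections import deque

class Pipeline:
    """Sketch of pipelined NNTP commands with a concurrency cap."""

    def __init__(self, limit, send):
        self.limit = limit      # max requests in flight at once
        self.send = send        # callable that writes a command out
        self.pending = deque()  # commands waiting for a free slot
        self.in_flight = 0

    def submit(self, cmd):
        self.pending.append(cmd)
        self._pump()

    def on_response(self):
        """Call once per completed server response."""
        self.in_flight -= 1
        self._pump()

    def _pump(self):
        while self.pending and self.in_flight < self.limit:
            self.send(self.pending.popleft())
            self.in_flight += 1
```

Pipelining keeps the link busy during round trips: the next command is already at the server before the previous response finishes arriving.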
- Receiving data
  - Selector events route to `process_nw_read`; `NewsWrapper.read` pulls bytes (SSL-optimized via sabctools), parses NNTP responses, and calls `on_response`.
  - A successful `BODY`/`ARTICLE` (220/222) updates per-server stats; missing/500 variants toggle capability flags (BODY/STAT support).
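A rough sketch of classifying a status line the way `on_response` handling is described above: 220/222 carry the article, 430 means the article is missing, and 500/501 suggest the command itself is unsupported. The function name and the returned labels are hypothetical:

```python
def classify_response(line: bytes) -> str:
    """Map an NNTP status line to a coarse outcome (sketch)."""
    code = int(line.split(b" ", 1)[0])
    if code in (220, 222):
        return "article"       # ARTICLE / BODY follows
    if code == 430:
        return "missing"       # no such article; try another server
    if code in (500, 501):
        return "unsupported"   # command unknown: toggle capability flag
    return "other"
```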
- Decoding and caching
  - `Downloader.decode` hands responses to `decoder.decode`, which yEnc/UU-decodes, CRC-checks, and stores payloads in `ArticleCache` (memory, with disk spill).
  - Articles with DMCA takedowns or bad data trigger retries on other servers until `max_art_tries` is exceeded.
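The CRC-check-then-retry loop can be sketched with `zlib.crc32` (yEnc uses CRC32 checksums). The `fetch_with_retries` function and its `fetch` callable are hypothetical stand-ins; only the `max_art_tries` budget comes from the source:

```python
import zlib

def fetch_with_retries(servers, fetch, expected_crc, max_art_tries=2):
    """Try servers in order until a payload passes the CRC32 check or
    the attempt budget (cf. max_art_tries) runs out. `fetch` returns
    raw decoded bytes or None (hypothetical sketch)."""
    for tries, server in enumerate(servers):
        if tries >= max_art_tries:
            break
        payload = fetch(server)
        if payload is not None and \
                zlib.crc32(payload) & 0xFFFFFFFF == expected_crc:
            return payload
    return None
```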
- Assembly to files
  - The `Assembler` worker consumes decoded pieces, writes them to the target file, updates the CRC, and cleans admin markers. It guards disk space (`diskspace_check`) and schedules direct unpack or PAR2 handling when files finish.
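The core assembly idea is seek-and-write at each piece's byte offset, which tolerates out-of-order arrival. A simplified sketch (the real worker writes to the target file on disk; the in-memory buffer here is just for illustration):

```python
import io

def assemble(pieces):
    """Write decoded payloads at their byte offsets, regardless of
    arrival order (simplified sketch of file assembly)."""
    buf = io.BytesIO()
    for offset, data in pieces:
        buf.seek(offset)    # jump to where this piece belongs
        buf.write(data)
    return buf.getvalue()
```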
- Queue bookkeeping
  - `NzbQueue.register_article` records success/failure; completed files advance NZF/NZO state. When all files are done, the job moves to post-processing (`PostProcessor.process`), which runs `newsunpack`, scripts, sorting, etc.
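The bookkeeping step can be sketched as counting down outstanding articles per file and promoting state when counters hit zero. The `job` dict shape, the state strings, and the function signature are hypothetical; only the register-then-advance flow comes from the source:

```python
def register_article(job, nzf, success):
    """`job` maps file name -> outstanding article count (sketch).
    Returns 'downloading', 'file_done', or 'job_done'."""
    if success:
        job[nzf] -= 1
    if job[nzf] > 0:
        return "downloading"
    if all(count == 0 for count in job.values()):
        return "job_done"          # hand off to post-processing
    return "file_done"
```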
- Control & resilience
  - Pausing/resuming (`Downloader.pause`/`resume`), bandwidth limiting, and sleep tuning happen in the main loop.
  - Errors/timeouts lead to `reset_nw` (close the socket, return the article to the queue, possibly penalize the server). Optional servers can be temporarily disabled; required ones schedule resumes.
  - A forced disconnect/shutdown drains sockets, refreshes DNS, and exits cleanly.