Compare commits

478 Commits
v10.2 ... v14.2

Author SHA1 Message Date
Andrey Prygunkov
5e9afb8781 version 14.2 2015-02-15 20:19:05 +00:00
Andrey Prygunkov
f1b6492d1c fixed: unlike all other scripts, the update-script should not be automatically terminated when the program quits 2015-02-14 21:12:42 +00:00
Andrey Prygunkov
19d297f934 fixed: the program could crash during download when article cache was active (more likely on very high download speeds) 2015-02-11 22:38:59 +00:00
Andrey Prygunkov
a23128f25f addition to r1205: when sorting by priority in auto mode (without specifying + or -) the default order is descending, since that is more logical for priorities 2015-02-07 22:23:17 +00:00
Andrey Prygunkov
567f7c3028 added on-demand queue sorting; one click on a column title in web interface sorts the selected or all items; if the items were already sorted in that order they are sorted backwards; in other words the second click sorts in descending order; when sorting selected items they are also grouped together in case there were holes between selected items; RPC-method "editqueue" has new command "GroupSort", parameter "Text" must be one of: "name", "priority", "category", "size", "left"; add character "+" or "-" to the sort field to explicitly define ascending or descending order (for example "name-"); if none of these characters are used the auto-mode is active: the items are sorted in ascending order first, if nothing changed - they are sorted again in descending order 2015-02-07 19:17:49 +00:00
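The new "GroupSort" command can be issued through the "editqueue" RPC-method. A minimal sketch in Python, assuming the XML-RPC endpoint on the default port and the classic four-argument form editqueue(command, offset, text, ids); the URL and credentials are placeholders.

```python
# Minimal sketch: triggering the new "GroupSort" command via XML-RPC.
# The endpoint, credentials and the four-argument editqueue form
# (command, offset, text, ids) are assumptions, not part of this log.
from xmlrpc.client import ServerProxy

server = ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

# "Text" selects the sort field; append "+" or "-" for an explicit order,
# or omit it to use the auto mode described above. An empty ID list
# is assumed to mean "all items".
server.editqueue('GroupSort', 0, 'name-', [])
```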
Andrey Prygunkov
30af4cfc3d fixed: XML-RPC method "history" returned invalid xml when used with parameter "hidden=true" (JSON-RPC worked correctly) 2015-02-06 21:33:42 +00:00
Andrey Prygunkov
019fcf519a addition to r1182 and fix for r1193: unused connections are now closed only if there are no active connections on the same level; this reduces the reconnects during active download (which may be caused by the random connection pick-up implemented in r1182) 2015-02-03 20:05:49 +00:00
Andrey Prygunkov
1645562d78 eliminated compiler warning 2015-02-01 15:02:10 +00:00
Andrey Prygunkov
fab726482c improved windows installer: during an update the installer stops a possibly running NZBGet automatically 2015-01-27 20:32:08 +00:00
Andrey Prygunkov
351cb9835f suppress printing of memory leak reports when the program terminates because of wrong command line switches (Windows debug mode only) 2015-01-27 20:30:14 +00:00
Andrey Prygunkov
d0d59469bc fixed: remote command "-L HA" (which prints the history including hidden records) could crash 2015-01-27 20:26:41 +00:00
Andrey Prygunkov
577d934ccd improved timeout handling during establishing of connections 2015-01-27 20:23:46 +00:00
Andrey Prygunkov
4438131d56 fixed: web-browser was launched on program reload; now it is launched only if the reload is initiated via tray menu (Windows only) 2015-01-26 21:26:32 +00:00
Andrey Prygunkov
7b13c9a9ba addition to r1182: fixed compilation error on certain systems (added missing include-directive) 2015-01-25 20:15:18 +00:00
Andrey Prygunkov
7d60566f3c reverted r1193 because of many problems reported by users (as a temporary solution) 2015-01-25 20:08:59 +00:00
Andrey Prygunkov
e9a7c2f184 fixed possible crash when using remote command "-B dump" to print debug info 2015-01-24 19:25:43 +00:00
Andrey Prygunkov
3e07873575 addition to r1182: unused connections are now closed only if there are no active connections on the same level; this reduces the reconnects during active download (which may be caused by the random connection pick-up implemented in r1182) 2015-01-24 18:49:59 +00:00
Andrey Prygunkov
2f17584ab4 update info in about dialogs (Windows and Mac OSX) 2015-01-24 12:44:27 +00:00
Andrey Prygunkov
02f87b23fb fixed: command "download again" was not disabled for hidden history records and resulted in a crash 2015-01-23 20:01:08 +00:00
Andrey Prygunkov
31032e29f5 when launching web-browser from the tray icon now using the real IP-address from option "ControlIP" instead of hard coded "127.0.0.1" (windows only) 2015-01-23 19:41:45 +00:00
Andrey Prygunkov
11bfb57809 added support for a password list file; new option "UnpackPassFile" to set the location of the file; during unpack the passwords are tried from the file until unpack succeeds or all passwords have been tried; implemented different strategies for rar4- and rar5-archives taking into account the features of the formats; for rar5-archives a wrong password is reported by unrar unambiguously and the program can immediately try other passwords from the password list; for rar4-archives and for 7z-archives it is not possible to differentiate between a damaged archive and a wrong password; for those archives, if the first unpack attempt (without password) fails the program executes par-check (preferably quick par-check if enabled via option "ParQuick") before trying the passwords from the list; another optimization is that the password list is tried only when the first unpack attempt (without password) reports a password error or decryption errors; this saves unnecessary unpack attempts for damaged unencrypted archives 2015-01-22 20:57:39 +00:00
Andrey Prygunkov
0e83ef32bb addition to r1187: renaming of hidden history items is now also supported 2015-01-17 16:34:49 +00:00
Andrey Prygunkov
86ae9e94cd name and category of history items can now be changed in web-interface; RPC-API method "editqueue" extended with new actions "HistorySetName" and "HistorySetCategory" 2015-01-15 20:46:17 +00:00
Andrey Prygunkov
2388250dfa added optional parameters to remote command "--append/-A" allowing duplicate key, duplicate mode and duplicate score to be passed; removed parameters "F" and "U" of command "--append/-A", which were used to set the mode (file or URL), which is now detected automatically; the parameters are still supported for compatibility 2015-01-15 18:09:37 +00:00
Andrey Prygunkov
d947ea65a2 added confirmation dialogs to recently implemented mass history edit commands "mark as good" and "mark as bad" 2015-01-13 21:37:04 +00:00
Andrey Prygunkov
4a11c04742 added subcommand "HA" to remote command "--list/-L" to list the whole history including hidden records 2015-01-06 20:00:22 +00:00
Andrey Prygunkov
14b24e6050 added support for negative numeric values in rss filter (useful for fields "dupescore" and "priority") 2014-12-21 19:28:38 +00:00
Andrey Prygunkov
4402d6fbd6 improved news server connection handling: if a download of an article fails due to a connection error the news server becomes temporarily disabled (blocked) for several seconds (defined by option "RetryInterval"); the download is then retried on another news server (of the same level) if available; if no other news servers (of the same level) exist the program will retry the same news server after its block interval expires; this increases failure tolerance when multiple news servers are used 2014-12-21 18:21:49 +00:00
Andrey Prygunkov
c69b81404c small change in error message text 2014-12-21 18:21:16 +00:00
Andrey Prygunkov
241a3efacf options "UnrarCmd" and "SevenZipCmd" can include extra switches to pass to unrar/7-zip. This allows for easy passing of additional parameters without creating proxy shell scripts. 2014-12-12 18:22:20 +00:00
Andrey Prygunkov
185d52a9d4 added new option "ServerX.Retention" to define server retention time (days); files older than configured server retention time are not even tried on this server 2014-12-11 20:45:08 +00:00
Andrey Prygunkov
6b933f18dd options "ParIgnoreExt" and "ExtCleanupDisk" can now contain wildcard characters * and ? 2014-12-08 21:36:23 +00:00
Andrey Prygunkov
31dbbb546c created installer for windows; the program is installed into "program files" by default; the working directory with all subdirectories is now placed into "AppData" directory; the batch files nzbget-start.bat and nzbget-recovery-mode.bat are not needed and not installed anymore 2014-11-30 17:08:00 +00:00
Andrey Prygunkov
ac4f8a30e5 improved application for Windows: added tray icon (near clock); left click on the icon pauses/resumes download; right click opens a menu with important functions; console window can be shown/hidden via preferences (hidden by default); new preference to automatically start the program after login; new preference to show browser on start; new preference to hide tray icon; menu commands to show important folders in windows explorer (destination, etc.); on first start the config file is now placed into subdirectory "NZBGet" inside the standard AppData-directory; default destination and other directories are now placed in the AppData\NZBGet-directory instead of the programs directory; this allows installing the program into the "program files"-directory since the program does not write into the programs directory anymore; the program exe has an icon now; if the exe is started from windows explorer the program starts in application mode; if the exe is called from the command prompt the program works in console mode 2014-11-30 14:24:23 +00:00
Andrey Prygunkov
a060531ae3 actions for history items can now be performed for multiple (selected) records: post-process again, download again, mark as good, mark as bad; extended RPC-API method "editqueue": for history-records of type "URL" the action "HistoryRedownload" can now be used as synonym to "HistoryReturn" (makes it easier to redownload multiple items of different types (URL and NZB) with one API call) 2014-11-25 19:23:17 +00:00
Andrey Prygunkov
fb77937acd fixed: unrar may sometimes fail with message "no files to extract" 2014-11-25 19:18:07 +00:00
Andrey Prygunkov
9d9a81710f fixed false memory leak warning when compiled in debug mode (Windows only) 2014-11-24 22:31:57 +00:00
Andrey Prygunkov
c3b4438d1f fixed: program could crash during unpack (bug introduced in v14-r1130) 2014-11-22 18:03:08 +00:00
Andrey Prygunkov
eeb3679b82 addition to r1159: fixed: menubar icon was not visible on OSX in dark mode 2014-11-18 18:26:24 +00:00
Andrey Prygunkov
d2d9bfb4bd system sleep on idle state is now prevented during download and post-processing (Mac OSX only) 2014-11-16 16:24:06 +00:00
Andrey Prygunkov
2dcbe4628b fixed: menubar icon was not visible on OSX in dark mode 2014-11-15 19:05:45 +00:00
Andrey Prygunkov
634247676a fixed: quick par-check could hang on certain nzb-files containing multiple par-sets (occurred only in 64 bit mode) 2014-11-14 19:38:41 +00:00
Andrey Prygunkov
1a01b323e5 updated version string to 15.0-testing 2014-11-14 19:29:27 +00:00
Andrey Prygunkov
c71a33eba0 updated version string (preparing to release 14.0) 2014-11-09 10:04:04 +00:00
Andrey Prygunkov
0387c7a8e1 updated ChangeLog 2014-11-09 09:50:41 +00:00
Andrey Prygunkov
1ae0404592 addition to r1152: fixed: the old directory was sometimes not removed when the download was renamed or assigned to another category (bug introduced in v14) 2014-11-03 19:55:25 +00:00
Andrey Prygunkov
6796bef261 fixed: the old directory was sometimes not removed when the download was renamed or assigned to another category (bug introduced in v14) 2014-11-01 13:05:30 +00:00
Andrey Prygunkov
a5bd6dc7c5 fixed: description was not shown correctly for queue scripts with defined events (bug introduced in r1148) 2014-11-01 11:00:40 +00:00
Andrey Prygunkov
4e7b9290ac fixed: program could crash during restart if an extension script was running; now all active scripts are terminated during restart 2014-10-21 20:21:31 +00:00
Andrey Prygunkov
9acbee976d fixed potential crash which could happen in debug mode during program restart 2014-10-21 19:32:07 +00:00
Andrey Prygunkov
e6f4f8c05e queue scripts can now define what events they are interested in; this avoids unnecessary calling of the scripts which do not process certain events 2014-10-20 21:17:54 +00:00
Andrey Prygunkov
c89cb3d287 addition to r1145: fixed a compiling error on OS/2 2014-10-19 21:03:11 +00:00
Andrey Prygunkov
c5cb95fd8c additional parameters (env. vars) are now passed to scan scripts: NZBNP_DUPEKEY, NZBNP_DUPESCORE, NZBNP_DUPEMODE; scan-scripts can now set dupekey, dupemode and dupescore by printing new special commands 2014-10-16 20:40:09 +00:00
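A hypothetical scan-script sketch in Python showing how the new duplicate parameters could be read and adjusted. The env. var names come from the commit message above; the "[NZB] DUPESCORE=..." print syntax is an assumption modelled on the "[NZB] DIRECTORY=..." command quoted in a later entry (2014-05-30).

```python
#!/usr/bin/env python
# Hypothetical scan-script sketch. NZBNP_DUPEKEY/DUPESCORE/DUPEMODE come
# from the commit message; the "[NZB] DUPESCORE=..." print command is an
# assumption modelled on the "[NZB] DIRECTORY=..." command elsewhere in
# this log.
import os

dupekey = os.environ.get('NZBNP_DUPEKEY', '')
dupescore = int(os.environ.get('NZBNP_DUPESCORE') or '0')
dupemode = os.environ.get('NZBNP_DUPEMODE', '')

# Example policy: give items that already carry a dupekey a small bonus.
if dupekey:
    print('[NZB] DUPESCORE=%i' % (dupescore + 10))
```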
Andrey Prygunkov
fa46714b19 debug builds for Windows now print call stack on crash to the log-file, which is very useful for debugging 2014-10-15 21:58:30 +00:00
Andrey Prygunkov
bfbcde3b47 fixed: RPC-method "editqueue" with action "HistoryReturn" caused a crash if the history item did not have any remaining (parked) files 2014-10-14 16:12:25 +00:00
Andrey Prygunkov
c6dc66cb45 addition to r1128: paths with drive letters are now considered absolute on all OSes, not only on Windows, because other OSes also use drive letters 2014-10-12 21:34:26 +00:00
Andrey Prygunkov
a9e6912a2f added column "age" to history tab in web-interface 2014-10-12 14:23:54 +00:00
Andrey Prygunkov
eb8885b915 fixed: a superfluous comma at the end of option "TaskX.Time" was interpreted as an error or could cause a crash 2014-10-11 22:10:51 +00:00
Andrey Prygunkov
029c808458 added news server name to message "Cancelling hanging download ..." to help identify problematic servers 2014-10-10 21:13:02 +00:00
Andrey Prygunkov
9269f69a38 improvement in quick par-verification: if unpack fails (excluding invalid password errors) and quick par-check does not find any errors or quick par-check was already performed the full par-check is performed; this helps in rare situations when files were correctly downloaded (and therefore assumed correct by quick par-check) but incorrectly written into disk due to abnormal program termination (caused by bugs or hardware crashes) 2014-10-09 21:11:42 +00:00
Andrey Prygunkov
63d938ae04 fixed: RPC-method "saveconfig" did not work via XML-RPC (but worked via JSON-RPC) 2014-10-09 16:06:39 +00:00
Andrey Prygunkov
a8aa110f43 added missing new line character at the end of the help screen printed by "nzbget -h" 2014-10-05 15:13:30 +00:00
Andrey Prygunkov
6f7af5aef4 option "ParThreads" can now be set to "0" (which is now the default setting) to let the program automatically determine the number of CPU cores; this works on major modern platforms 2014-10-04 19:34:03 +00:00
Andrey Prygunkov
6afbade8f7 improved scan-scripts: if the category of nzb-file is changed by the scan-script the assigned post-processing scripts are now automatically reset according to the new category 2014-10-03 20:58:11 +00:00
Andrey Prygunkov
5ec38498f1 quick par verification now works even if articles do not contain CRCs (although it is a rare case) 2014-09-27 22:18:49 +00:00
Andrey Prygunkov
e206d3a833 fixed several compiler warnings 2014-09-27 21:04:06 +00:00
Andrey Prygunkov
6529cf6498 addition to r1089: fixed: env. var "NZBPP_PARSTATUS" was not set to "FAILURE" for deleted/marked downloads 2014-09-27 21:03:23 +00:00
Andrey Prygunkov
21f5de8de8 improved cleanup: 1) disk cleanup is now not performed if unrar failed, even if par-check was successful; 2) queue cleanup (for remaining par2-files) is now smarter: the files are kept (parked) if they can be used by command "post-process again" and are removed otherwise 2014-09-25 22:08:57 +00:00
Andrey Prygunkov
837d5c7f68 unpack is now immediately aborted if unrar reports wrong password (works for rar5 as well as for older formats); the unpack error status "PASSWORD" is now set for older formats (not only rar5) 2014-09-24 20:52:28 +00:00
Andrey Prygunkov
f90a53c2b0 addition to r1127: better compatibility with unrar 5 2014-09-22 19:37:52 +00:00
Andrey Prygunkov
e184e5b7c5 fixed: relative destination paths (options "DestDir" and "CategoryX.DestDir") caused failures during unrar 2014-09-21 15:37:10 +00:00
Andrey Prygunkov
1ca1381e05 unpack is now aborted immediately when unrar reports CRC errors 2014-09-18 16:48:00 +00:00
Andrey Prygunkov
811f807de6 fixed: split .cbr-files were not properly joined 2014-09-16 22:15:02 +00:00
Andrey Prygunkov
95b76bc586 when option "ContinuePartial" is active the current state is saved not more often than once per second instead of after every downloaded article; this significantly reduces the amount of disk writes at high download speeds 2014-09-16 20:54:50 +00:00
Andrey Prygunkov
90fac39a26 added commands "PausePostProcess" and "UnpausePostProcess" to scheduler 2014-09-15 16:28:55 +00:00
Andrey Prygunkov
44cf680f14 an improvement in duplicate check: if a new download with empty dupekey and empty dupescore is marked as "dupe" and another download with the same name has a non-empty dupekey or dupescore, these properties are copied from that download; this is useful because the new download is most likely another upload of the same file and should have the same duplicate properties for best duplicate handling results 2014-09-13 21:30:42 +00:00
Andrey Prygunkov
d0754e022f addition to r1121: now fixed on windows too: inner files (files listed in nzb) bigger than 2GB could not be downloaded 2014-09-08 19:35:11 +00:00
Andrey Prygunkov
ed7245c852 fixed: inner files (files listed in nzb) bigger than 2GB could not be downloaded 2014-09-07 10:00:52 +00:00
Andrey Prygunkov
2b44618858 added validation check for option "ParBuffer" when compiled in 32-bit 2014-09-06 19:50:23 +00:00
Andrey Prygunkov
a3634d689e fixed: web interface showed an error box when trying to submit files with extensions other than .nzb, although these files could be processed by a scan-script; now the error is not shown if any scan-script is set in options 2014-09-05 20:22:49 +00:00
Andrey Prygunkov
96e8cbd3c1 small improvement in multithreading par-repair: the number of repair threads is now automatically reduced to the amount of bad blocks if there are too few of them; if there is only one bad block the multithreading par-repair is switched off to avoid overhead of thread synchronisation (which does not make sense for one working thread) 2014-09-03 17:34:36 +00:00
Andrey Prygunkov
658d41f0fd refactor: moved nzbget specific code from libpar2 into nzbget units in order to make updates of libpar2 easier in the future 2014-09-03 17:28:29 +00:00
Andrey Prygunkov
9dab8fd7dc added multithreading par-repair: does not depend on other libraries and works on all platforms and all CPUs (with multiple cores); new option "ParThreads" to set the number of threads for repairing; new option "ParBuffer" to define the memory limit to use during par-repair 2014-09-02 23:07:32 +00:00
Andrey Prygunkov
2cb9d81a3c fixed: the program could crash during deleting of active download (bug introduced in r1108) 2014-08-30 15:10:49 +00:00
Andrey Prygunkov
2b4662856e better error reporting if a temp file could not be found 2014-08-29 18:12:44 +00:00
Andrey Prygunkov
44e949eafe fixed: crash and possible queue corruption when option "ParCleanupQueue" was active (which is the default setting) (bug introduced in r1108) 2014-08-29 15:05:27 +00:00
Andrey Prygunkov
aa3acd12a6 for downloads delayed due to propagation delay (option "PropagationDelay") a new badge "propagation" is now shown near download name 2014-08-28 20:51:29 +00:00
Andrey Prygunkov
1c00e62d3e fixed: the "pause extra pars"-state was missing in the pause/resume-loop of curses interface, key "P" 2014-08-28 20:29:51 +00:00
Andrey Prygunkov
0d630d9ea3 when connecting in remote mode using command line parameter "--connect/-C" the option "ControlIP" is now interpreted as "127.0.0.1" if it is set to "0.0.0.0" (instead of failing with an error message) 2014-08-28 20:22:20 +00:00
Andrey Prygunkov
7de78cd088 added new option "UrlTimeout" to set timeout for URL fetching and RSS feed fetching; renamed option "ConnectionTimeout" to "ArticleTimeout" 2014-08-28 19:31:31 +00:00
Andrey Prygunkov
0f98c72f1e fixed: cancelling of post-processing could delete the nzb-item completely (bug introduced in v14) 2014-08-28 19:15:42 +00:00
Andrey Prygunkov
459a79a1f1 improved pp-script EMail.py: now it can send time statistics (thanks to JVM for the patch) 2014-08-27 16:27:40 +00:00
Andrey Prygunkov
aaea8d9717 fixed: scheduler tasks were not checked after wake up if the sleep time was longer than 90 minutes 2014-08-25 20:12:38 +00:00
Andrey Prygunkov
d5b99732d1 fixed: no warning was printed for invalid values of option "ArticleCache" (max value 1900 when compiled in 32 bit mode) 2014-08-25 20:04:09 +00:00
Andrey Prygunkov
f5cef8a997 fixed: par-check could fail on valid files (bug introduced in libpar2 0.3) 2014-08-24 12:51:42 +00:00
Andrey Prygunkov
44907aa700 when quick par verification is active the repaired files are not verified to save time; the only reason for incorrect files after repair can be hardware errors (memory, disk) but this is not something NZBGet should care about 2014-08-22 17:24:34 +00:00
Andrey Prygunkov
54303d464b fixed: one log-message was printed only to global log but not to nzb-item pp-log 2014-08-22 17:05:30 +00:00
Andrey Prygunkov
4e83a68bf1 when a download is downloaded again (from history) the queue-scripts are now called with event "NZB_ADDED" 2014-08-22 16:57:54 +00:00
Andrey Prygunkov
00893a6cca updated configure-script to not require gcrypt for newer GnuTLS versions (when gcrypt is not needed) 2014-08-20 20:57:40 +00:00
Andrey Prygunkov
008768cea1 better error reporting during par-check 2014-08-20 18:51:13 +00:00
Andrey Prygunkov
43e096c6dc refactor: eliminated two compiler warnings 2014-08-19 20:53:00 +00:00
Andrey Prygunkov
b10b48f5e9 the list of scripts (pp-scripts, queue-scripts, etc.) is now read once on program start instead of every time a script is executed; that eliminates unnecessary disk access; the list of post-processing parameters shown on page "Postprocess" of download details dialog is now built using the preloaded list of scripts instead of reading the script config sections on every load of web-interface; the settings page of web-interface loads available scripts every time the page is shown; this allows configuring newly added scripts without restarting the program first (just like before); a restart is still required to apply the settings (just like before); RPC-method "configtemplates" has new parameter "loadFromDisk" 2014-08-19 19:56:09 +00:00
Andrey Prygunkov
1a76c72bf3 fixed: the program could crash during executing of queue-scripts (bug introduced in r1094); the list of queue-scripts is now read only once, at program start; queue-scripts added to scripts-directory after the program was started can be selected in download details dialog on page "Postprocess" but will not be executed until the program is restarted 2014-08-19 19:47:49 +00:00
Andrey Prygunkov
74a1f6301a added option "EventInterval" to reduce the number of calls of queue-scripts, which can be useful on slow systems 2014-08-19 19:45:30 +00:00
Andrey Prygunkov
dd22ec68fc improvement in support for detection of bad downloads (fakes, etc.): scripts supporting two modes (post-processing-mode and queue-mode) are now executed if selected in post-processing parameters: either in options "PostScript" and "CategoryX.PostScript" or manually on page "Postprocess" of download details dialog in web-interface; it is not necessary to select dual-mode scripts in option "QueueScript"; that provides more flexibility: the scripts can be selected per-category or activated/deactivated for each nzb individually 2014-08-17 23:07:48 +00:00
Andrey Prygunkov
6ecdfc25fd updated description in config file template 2014-08-15 22:28:09 +00:00
Andrey Prygunkov
f439f09c2e improvement in support for detection of bad downloads (fakes, etc.): queue-scripts are now called after every downloaded file included in the nzb; new event "FILE_DOWNLOADED" of parameter "NZBNA_EVENT"; event "UNPACK" removed; instead added event "NZB_DOWNLOADED" which is similar to "UNPACK" but is called for every download, even one without archive files and even if unpack is disabled; the execution of queue-scripts is serialized - only one script is executed at a time and other scripts wait in the script-queue; the script-queue is compressed so that the same script for the same event is not queued more than once; this reduces the number of calls of scripts if files are downloaded faster than queue-scripts can process them; a call for event "NZB_DOWNLOADED" is always performed even if the previous calls for events "FILE_DOWNLOADED" were skipped; when a script marks an nzb as bad the nzb is deleted from the queue, no further internal post-processing (par, unrar, etc.) is made for the nzb but all post-processing scripts are executed; if option "DeleteCleanupDisk" is active the already downloaded files are deleted; new status "BAD" for field "DeleteStatus" of nzb-item in RPC-method "history"; queue-scripts can set post-processing parameters by printing a special command, just like post-processing-scripts do; this simplifies transferring (small amounts of) information between queue-scripts and post-processing-scripts 2014-08-15 22:24:53 +00:00
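A queue-script sketch in Python reacting to the two events described above. The event and parameter names ("NZBNA_EVENT", "FILE_DOWNLOADED", "NZB_DOWNLOADED") come from the commit message; "NZBNA_NZBNAME" and the logic itself are illustrative assumptions.

```python
#!/usr/bin/env python
# Queue-script sketch for the events described above. NZBNA_EVENT and the
# event names come from the commit message; NZBNA_NZBNAME is an assumed
# variable name and the logic is purely illustrative.
import os

event = os.environ.get('NZBNA_EVENT', '')
nzb_name = os.environ.get('NZBNA_NZBNAME', '')

if event == 'FILE_DOWNLOADED':
    # Called after every downloaded file; keep this branch cheap, since
    # calls may be skipped when downloads outpace the script.
    pass
elif event == 'NZB_DOWNLOADED':
    # Called once per nzb, even when it has no archives or unpack is off.
    print('Finished downloading: %s' % nzb_name)
```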
Andrey Prygunkov
ebe955020c addition to r1072: fixed: renaming of active downloads was broken (bug introduced in r1070) 2014-08-15 17:17:05 +00:00
Andrey Prygunkov
60119a89c0 fixed: compiler error if configured using parameter "--disable-gzip" 2014-08-13 21:14:11 +00:00
Andrey Prygunkov
6a14353391 added support for detection of bad downloads (fakes, etc.): extended queue-scripts with new event "UNPACK", scripts are called before unpack and have a chance to detect bad downloads before unpacking; queue-scripts and post-processing scripts can mark downloads as bad by printing special command; marked downloads become status "FAILURE/BAD" and are processed by the program as failures (triggering duplicate handling); scripts executed thereafter see the new status and can react accordingly (inform an indexer or a third-party automation tool); new env. var "NZBNA_DIRECTORY" passed to queue scripts 2014-08-11 23:15:58 +00:00
Andrey Prygunkov
9090fe5fc9 fixed: not all statistic fields were reset when using command "Download again" (bug introduced in v14) 2014-08-11 18:10:47 +00:00
Andrey Prygunkov
93bc9a4293 fixed: malformed articles could crash the program (bug introduced in v14) 2014-08-11 18:02:15 +00:00
Andrey Prygunkov
80b2e22d9d added new search field "dupestatus" for use in rss filters: the search is performed through download queue and history testing items with the same dupekey or title as current rss item; the field contains comma-separated list of following possible statuses (if duplicates were found): QUEUED, DOWNLOADING, SUCCESS, WARNING, FAILURE or an empty string if there were no matching items found 2014-08-10 22:14:03 +00:00
Andrey Prygunkov
5a6a098990 suppressed certain warning types in VC++ project file (Windows) 2014-08-10 21:59:06 +00:00
Andrey Prygunkov
c64ef201ff addition to r1079: fixed: par-check could not be cancelled. 2014-08-10 16:42:23 +00:00
Andrey Prygunkov
817ae02295 fixed: damaged files could be ignored during par-check and no repair was performed (bug introduced in r1071) 2014-08-09 22:39:39 +00:00
Andrey Prygunkov
910dab98f1 fixed memory error which could lead to segfault (bug introduced in r1074) 2014-08-09 21:50:50 +00:00
Andrey Prygunkov
b9c59ffad4 fixed a few compiler warnings 2014-08-09 15:50:09 +00:00
Andrey Prygunkov
79426ec959 fixed: when rotating log-files, option TimeCorrection was not respected when building the new file name; the filename could have a wrong date stamp in the name (bug introduced in r1059) 2014-08-09 10:42:13 +00:00
Andrey Prygunkov
2e0ba0e3d1 integrated par2-module (libpar2) into NZBGet’s source code tree; the par2-module is now built automatically during building of NZBGet; this eliminates the dependency on external libpar2 and libsigc++, making it much easier for users to compile NZBGet with the newest recommended patches for libpar2 2014-08-08 22:37:30 +00:00
Andrey Prygunkov
0c3ce58ffa fixed: cleanup may leave some files undeleted (Mac OSX only) 2014-08-06 19:56:12 +00:00
Andrey Prygunkov
c482820746 addition to r1074: changed a few info messages to debug, as they were supposed to be 2014-08-06 19:43:39 +00:00
Andrey Prygunkov
195bc1f290 addition to r1075: added missing changed file 2014-08-06 18:29:43 +00:00
Andrey Prygunkov
d8108f998b disabled block-by-block scan during par verification because: 1) it could cause incorrect verification results for certain kinds of damaged files; 2) after implementing quick scan for damaged files the block-by-block scan was not necessary anymore; block-by-block scan was also removed from the libpar2-patch 2014-08-06 15:24:25 +00:00
Andrey Prygunkov
40de60dd8b added quick par verification for damaged (partially downloaded) files 2014-08-06 00:11:07 +00:00
Andrey Prygunkov
c9981472a8 refactor: disk state now holds info about failed files: their IDs, CRCs of downloaded articles and full initial article information; this data can be used later to retry download of failed articles and for quick par-verification of damaged files 2014-08-05 23:45:28 +00:00
Andrey Prygunkov
83b3789282 fixed: renaming of active downloads was broken (bug introduced in r1070) 2014-08-02 16:41:27 +00:00
Andrey Prygunkov
0078e9e225 options "ParIgnoreExt" and "ExtCleanupDisk" are now respected by par-check (in addition to being respected by par-rename): if all damaged or missing files are covered by these options then no par-repair is performed and the download is assumed successful 2014-07-30 22:10:50 +00:00
Andrey Prygunkov
a62966227a added quick file verification during par-check/repair; if par-repair is required for download the files downloaded without errors are verified quickly by comparing their checksums against the checksums stored in the par2-file; this makes the verification of undamaged files almost instant; damaged files are verified as usual; new option "ParQuick" (active by default); added support for block-by-block scan of files during verification, which improves scan speed of damaged files; the quick par-verification requires a patch for libpar2 (see http://nzbget.net/libpar2 for details) 2014-07-27 21:59:00 +00:00
Andrey Prygunkov
5f0ccf3257 fixed: certain nzb-files failed to download (with decoding errors) if article cache was active 2014-07-25 22:16:33 +00:00
Andrey Prygunkov
61d0a1d498 fixed: program could crash during download if there were missing articles, DirectWrite was disabled and ArticleCache was enabled 2014-07-25 21:57:14 +00:00
Andrey Prygunkov
c626528a83 fixed: post-process time (statistic) was not correctly reset when post-processing again 2014-07-25 21:53:40 +00:00
Andrey Prygunkov
2e0e8e18ef removed accidentally committed debug logging 2014-07-25 21:51:36 +00:00
Andrey Prygunkov
54d98a6cad if an nzb has only a few failed articles its completion may be shown as 100%; now it is shown as 99.9% to indicate that not everything was successfully downloaded 2014-07-21 19:44:35 +00:00
Andrey Prygunkov
0fe503658b pp-script "EMail.py" now supports mail server relays (thanks l2g for the patch) 2014-07-20 16:20:24 +00:00
Andrey Prygunkov
5941464402 addition to r1057 (added article cache): fixed a segfault which could happen if none of articles could be downloaded for a file 2014-07-19 00:17:39 +00:00
Andrey Prygunkov
3074ea62dc added per-nzb time and size statistics: total time, download, verify, repair and unpack times, downloaded size and average speed, shown in history details dialog via click on the row with total size in statistics block; RPC-methods "listgroups" and "history" return new fields: "DownloadedSizeLo", "DownloadedSizeHi", "DownloadedSizeMB", "DownloadTimeSec", "PostTotalTimeSec", "ParTimeSec", "RepairTimeSec", "UnpackTimeSec" 2014-07-19 00:06:28 +00:00
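A sketch reading the new per-nzb statistics through the "history" RPC-method; the statistics field names come from the commit message, while the connection details, the "hidden" argument and the "NZBName" field are assumptions.

```python
# Sketch: reading the new per-nzb statistics via the "history" RPC-method.
# The statistics field names come from the commit message; the endpoint,
# credentials, the "hidden" argument and "NZBName" are assumptions.
from xmlrpc.client import ServerProxy

server = ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

for item in server.history(False):  # False = do not include hidden records
    print('%s: %d MB in %d s (repair %d s, unpack %d s)' % (
        item.get('NZBName', '?'),
        item['DownloadedSizeMB'],
        item['DownloadTimeSec'],
        item['RepairTimeSec'],
        item['UnpackTimeSec']))
```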
Andrey Prygunkov
312bf91003 improved joining of split files: instead of performing par-repair the files are now joined by the unpacker, which is much faster; files split before creating the par-sets are now joined as well (they were not joined in v13 because par-repair has nothing to repair in this case); the unpacker can detect missing fragments and requests par-check if necessary 2014-07-18 23:27:41 +00:00
Andrey Prygunkov
a42c323343 refactor: removed an old commented code 2014-07-18 23:19:46 +00:00
Andrey Prygunkov
39d9fe2794 added log file rotation; options "CreateLog" and "ResetLog" replaced with new option "WriteLog" (none, append, reset, rotate); new option "RotateLog" defines the rotation period; when compiled in debug mode a new field "process id" is printed to the file log for each row (it is easier to identify processes than threads) 2014-07-18 23:17:16 +00:00
Andrey Prygunkov
7993e2971c renamed option "WriteBufferSize" to "WriteBuffer"; changed the dimension - the option is now set in kilobytes instead of bytes; old name and value are automatically converted; if the size of an article is below the value defined by the option, the buffer is allocated with the article's size (to not waste memory); therefore the special value "-1" is not required anymore; during conversion "-1" is replaced with "1024" (1 megabyte) but it can of course be manually changed to any other value later 2014-07-18 23:06:45 +00:00
Andrey Prygunkov
ba9efe43be added article cache: new option "ArticleCache" defines the memory limit to use for the cache; when the cache is active the articles are written into the cache first and then all flushed to disk into the destination file; the article cache reduces disk IO and may reduce file fragmentation, improving post-processing speed (unpack); it works with both writing modes (direct write on and off); when option "DirectWrite" is disabled the cache should be big enough (for best performance) to accommodate all articles of one file (sometimes up to 500 MB) in order to avoid writing articles into temporary files, otherwise temporary files are used for articles which do not fit into the cache; when used in combination with DirectWrite there is no such limitation and even a small cache (100 MB or even less) can be used effectively; when the cache becomes full it is flushed automatically (directly into the destination file), providing room for new articles; a new row in the "statistics and status dialog" in web-interface indicates the amount of memory used for the cache; new fields "ArticleCacheLo", "ArticleCacheHi" and "ArticleCacheMB" returned by RPC-method "status"; refactor: parts of unit "ArticleDownloader" responsible for writing to disk were moved into new unit "ArticleWriter" 2014-07-18 22:48:35 +00:00
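A sketch checking how much memory the article cache currently holds via the "status" RPC-method; the field names are from the commit message, the connection details are placeholders.

```python
# Sketch: checking the article cache usage via the "status" RPC-method.
# The field names come from the commit message; endpoint and credentials
# are placeholders.
from xmlrpc.client import ServerProxy

server = ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')
status = server.status()

# ArticleCacheLo/Hi carry the byte count split into two 32-bit halves;
# the MB field is usually all that is needed.
print('Article cache in use: %d MB' % status['ArticleCacheMB'])
```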
Andrey Prygunkov
cfa5e7d19c updated version string to 14.0-testing 2014-07-18 15:51:38 +00:00
Andrey Prygunkov
7acd2ad884 updated version string (preparing to release 13.0) 2014-07-14 20:22:36 +00:00
Andrey Prygunkov
1f474c3097 updated ChangeLog 2014-07-05 21:56:20 +00:00
Andrey Prygunkov
c8b4f6e985 removed libpar2-patches from NZBGet source tree; the documentation now suggests to use the libpar2 version maintained by Debian/Ubuntu team, which already includes all necessary patches; also removed patches to create libpar2 and libsigc++ project files for Visual Studio on Windows, no one needed them anyway 2014-07-04 21:01:13 +00:00
Andrey Prygunkov
fc20bcca91 pp-script "EMail.py" now takes the status of previous pp-scripts into account and reports a failure if any of the scripts has failed 2014-07-04 19:50:11 +00:00
Andrey Prygunkov
702b635826 improved RPC-API: history items now preserve "NZBID" from queue items; that makes the tracking of items across queue and history easier for third-party apps; field "NZBID" returned by RPC-method "history" is now available for history items of all kinds (NZB, URL, DUP); field "ID" is deprecated and should not be used 2014-07-04 19:07:51 +00:00
Andrey Prygunkov
990c5f67e4 fixed: current download could be damaged if the program was restarted during download and the option "ContinuePartial" was active (bug introduced in v13) 2014-07-03 20:45:53 +00:00
Andrey Prygunkov
8ef4ca2ce8 fixed: port number was not sent in headers when downloading from URLs which could cause issues with RSS for web-sites using non-standard http ports 2014-06-30 20:42:50 +00:00
Andrey Prygunkov
b105ce6698 fixed: queued nzb-files were not deleted from disk when deleting a download without history tracking 2014-06-29 21:24:58 +00:00
Andrey Prygunkov
6c93b836f5 fixed: check for file or directory existence could fail sometimes (Windows only, bug introduced in v13) 2014-06-26 18:29:55 +00:00
Andrey Prygunkov
f0e60ee577 improvement in RPC-API: method "append" now returns id of added nzb-file or "0" on an error; this makes it easier for third-party apps to track added nzb-files; for backward compatibility with older software expecting a boolean result the old version of method "append" is still supported; the new version of method "append" has a different signature (order of parameters); parameter "content" can now be either nzb-file content (encoded in base 64) or an URL; this makes the method "appendurl" obsolete (still supported for compatibility); if an URL was added to queue the queue entry created for fetched nzb-file has the same "NZBID" for easier tracking 2014-06-19 15:00:46 +00:00
Andrey Prygunkov
2cfbb2373a fixed: scheduler command "FetchFeed" did not work properly with parameter "0" (fetch all feeds) 2014-06-17 21:31:09 +00:00
Andrey Prygunkov
d26d04d92b when changing category in web-interface the post-processing parameters are now automatically updated according to the new category settings; only parameters which differ between the old and new category are changed; parameters which are present in both or in neither category are not changed; that ensures that only the relevant parameters are updated and parameters which were manually changed by the user retain their settings when it makes sense; in the "download details dialog" the new parameters are updated on the postprocess-tab directly after changing of category and can be controlled before saving; in the "edit multiple downloads dialog" the parameters are updated individually for each download on saving; new action "CP" of remote command "--edit/-E" for groups to set category and apply parameters; new action "GroupApplyCategory" of RPC-method "editqueue" for the same purpose 2014-06-13 21:53:27 +00:00
Andrey Prygunkov
0d6fe32246 to detect daylight saving activation/deactivation the time zone information is now checked every minute if a download is active or once every 3 hours if the program is in stand-by; these delays should work well with hibernation mode on synology 2014-06-12 20:57:00 +00:00
Andrey Prygunkov
36de8073f2 apostrophe is not considered an invalid file name character anymore 2014-06-11 21:15:36 +00:00
Andrey Prygunkov
5aaaa1e6a7 fixed: the program could crash during cleanup if files with invalid timestamps were found in the directory (windows only) 2014-06-09 20:52:23 +00:00
Andrey Prygunkov
a4126a52ce fixed: par-rename initiated unnecessary par-check if option "InterDir" was not active (bug introduced in r1030) 2014-06-06 21:49:11 +00:00
Andrey Prygunkov
076017128e added support for power management on windows to avoid the PC going into sleep mode during download or post-processing 2014-06-06 19:25:02 +00:00
Andrey Prygunkov
7240147418 added option "ParIgnoreExt" which lists files which do not trigger par-repair if they are missing 2014-06-03 20:47:28 +00:00
Andrey Prygunkov
0923f2bb5c added new choice "Always" for option "ParCheck"; it forces the par-check for every (even undamaged) download but in contrast to choice "Force" only one par2-file is downloaded first; additional files are downloaded if needed 2014-06-02 20:43:37 +00:00
Andrey Prygunkov
5ce0b9985a corrected a typing error in a month name 2014-06-02 20:28:36 +00:00
Andrey Prygunkov
cd76375d8e post-processing scripts which move the whole download into a new location can inform the program about new location using command "[NZB] DIRECTORY=/new/path", allowing other scripts to process files further 2014-05-30 22:09:50 +00:00
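A post-processing script sketch that moves the finished download and reports the new location with the command quoted above; "NZBPP_DIRECTORY" and the destination path are assumptions.

```python
#!/usr/bin/env python
# Post-processing script sketch: move the finished download and tell
# NZBGet (and later scripts) about the new location using the command
# quoted in the commit above. NZBPP_DIRECTORY is an assumed variable
# name; the destination is a placeholder.
import os
import shutil

src = os.environ.get('NZBPP_DIRECTORY', '')
dst = '/new/path'

if src and os.path.isdir(src):
    shutil.move(src, dst)
    print('[NZB] DIRECTORY=%s' % dst)
```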
Andrey Prygunkov
df2ef01494 when checking for missing files the files whose extensions match with option "ExtCleanupDisk" are ignored now (to avoid time consuming restoring of files which will be deleted later anyway) 2014-05-30 21:35:30 +00:00
Andrey Prygunkov
1d3d875f3d refactor: created new class "Tokenizer" and replaced all usages of function "strtok_r" with new class; also created new function "MatchFileExt" for the similar code used in two places 2014-05-29 21:38:27 +00:00
Andrey Prygunkov
48446367f4 post-processing scripts now have two new parameters: env. var "NZBPP_STATUS" indicates the status of the download including the total status (SUCCESS, FAILURE, etc.) and the detail field (for example in case of failures: PAR, UNPACK, etc.); env. var "NZBPP_TOTALSTATUS" is equal to the total status of parameter "NZBPP_STATUS" and is provided for convenience (to avoid parsing of "NZBPP_STATUS"); the new parameters provide a simple way for pp-scripts to determine download status without the guesswork needed in previous versions; parameters "NZBPP_PARSTATUS" and "NZBPP_UNPACKSTATUS" are now considered deprecated (still passed for compatibility); updated script "EMail.py" to use new parameters "NZBPP_TOTALSTATUS" and "NZBPP_STATUS" instead of "NZBPP_PARSTATUS" and "NZBPP_UNPACKSTATUS" 2014-05-28 22:19:39 +00:00
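A sketch of how a post-processing script might use the two new parameters; the variable names and the "TOTALSTATUS/DETAIL" layout come from the commit message (and the "FAILURE/BAD" example earlier in this log), the notification logic is illustrative.

```python
#!/usr/bin/env python
# Post-processing script sketch using the two new parameters. The variable
# names come from the commit message; the example values and notification
# logic are illustrative.
import os

total = os.environ.get('NZBPP_TOTALSTATUS', '')  # e.g. SUCCESS, FAILURE
status = os.environ.get('NZBPP_STATUS', '')      # e.g. FAILURE/UNPACK

if total == 'SUCCESS':
    print('Download completed successfully')
else:
    # The detail part of NZBPP_STATUS names the failed stage (PAR, UNPACK, ...).
    print('Download failed: %s' % status)
```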
Andrey Prygunkov
fb1f293a17 improved fast par-renamer: it can now detect missing files (files listed in par2-files but not present on disk) 2014-05-28 21:50:15 +00:00
Andrey Prygunkov
f85533d608 fixed: some nzb-file data were not calculated for history items loaded from disk state; this may cause problems for commands "Post-process again" and "Download remaining files" (bug introduced in v13) 2014-05-28 21:37:44 +00:00
Andrey Prygunkov
e32faf6053 better error reporting if deleting of directories fails 2014-05-25 20:42:50 +00:00
Andrey Prygunkov
a429ea4679 windows version is now configured to use OpenSSL instead of GnuTLS 2014-05-24 17:37:42 +00:00
Andrey Prygunkov
9112d2277e fixed: an incorrect number of paused files was shown in curses output mode 2014-05-24 12:25:24 +00:00
Andrey Prygunkov
a9050045f3 renamed section "SCRIPTS" to "EXTENSION SCRIPTS" in the settings 2014-05-24 12:25:08 +00:00
Andrey Prygunkov
ed3cad6e9c when building nzbget if both OpenSSL and GnuTLS are available now using OpenSSL by default (the preferred library can still be selected with configure-parameter --with-tlslib=OpenSSL/GnuTLS) 2014-05-23 18:12:57 +00:00
Andrey Prygunkov
deee5aff00 rolled back changes made in r1019 (not necessary anymore) 2014-05-22 17:09:34 +00:00
Andrey Prygunkov
8c36a4d4c6 fixed: renaming or deleting of temporary files could fail, especially when options "UnpackPauseQueue" and "ScriptPauseQueue" were not active (windows only) 2014-05-22 16:58:16 +00:00
Andrey Prygunkov
157074db29 fixed small memory leak (bug introduced in r1012) 2014-05-22 15:50:12 +00:00
Andrey Prygunkov
14ff04d2e3 improved error reporting: added error check when closing article file for writing 2014-05-21 21:27:51 +00:00
Andrey Prygunkov
d0e2d439aa if renaming of files fails, a few more attempts are made; this should improve compatibility with virus scanners or sync software; better error reporting if renaming still fails 2014-05-20 21:20:49 +00:00
Andrey Prygunkov
159340a396 fixed: remaining size and time were not printed in remote console mode (bug introduced somewhere in v13) 2014-05-19 21:28:35 +00:00
Andrey Prygunkov
de6625bcaf updated links to doc-article "Extensions scripts" 2014-05-18 21:24:34 +00:00
Andrey Prygunkov
0d7ed691e6 fixed: program could hang when adding nzb-files from fetched RSS feed (bug introduced in r966) 2014-05-17 21:39:49 +00:00
Andrey Prygunkov
0721f723be fixed: nzb-files were sometimes not deleted from NzbDir (option "NzbCleanupDisk") 2014-05-15 17:02:10 +00:00
Andrey Prygunkov
2da7239ac6 fixed: if post-processing step "move" failed, the command "post-process again" did not try to move again 2014-05-08 19:54:25 +00:00
Andrey Prygunkov
7b4c07c837 refactor: better handling of completed URL downloads 2014-05-07 19:58:47 +00:00
Andrey Prygunkov
169c56f105 implemented a general scripts concept, which is an extension of the post-processing scripts concept initially introduced in v11; the general scripts concept applies to all scripts used in the program: scan-script, queue-script and scheduler-script (in addition to post-processing scripts); option "NzbProcess" renamed to "ScanScript"; option "NzbAddedProcess" renamed to "QueueScript"; options "DefScript" and "CategoryX.DefScript" renamed to "PostScript" and "CategoryX.PostScript" (options with old names are recognized and automatically converted on first settings saving); new option "TaskX.Script"; old option "TaskX.Process" kept for scheduling of external programs not related to nzbget (to avoid writing intermediate proxy scripts); scan-script, queue-script and scheduler-script now work similarly to post-processing scripts: -scripts must be put into scripts-directory; -scripts can be configured via web-interface and can have options; -multiple scripts can be chosen for each scripts-option, all chosen scripts are executed; -program and script options are passed to the script as env. variables; renamed default directory with scripts from "ppscripts" to "scripts"; script signature indicates the type of script (post-processing, scan, queue or scheduler); one script can have a mixed signature allowing it to be used for multiple purposes (for example a notification script can send a notification on both events: after adding to queue and after post-processing); result of RPC-method "configtemplates" has new fields "PostScript", "ScanScript", "QueueScript", "SchedulerScript" to indicate the purpose of the script; queue-script (formerly NzbAddedProcess) has new parameter "NZBNA_EVENT" indicating the reason for calling the script; currently the script is called only after adding of files to the download queue and therefore the parameter is always set to "NZB_ADDED" but the queue-script can be called on other events in the future too 2014-05-06 15:36:15 +00:00
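A sketch of a dual-purpose extension script under the new concept, deciding at run time how it was invoked by looking at which env. var prefix is present; the prefixes NZBNP_/NZBNA_/NZBPP_ appear elsewhere in this log (NZBNP_URL, NZBNA_URL, NZBPP_URL), while the specific variable names checked here are assumptions.

```python
#!/usr/bin/env python
# Sketch of a dual-purpose extension script: the same file can be listed
# as scan-, queue- and post-processing script and branch on how it was
# invoked. The NZBNP_/NZBNA_/NZBPP_ prefixes appear elsewhere in this log;
# the exact variable names checked here are assumptions.
import os

if 'NZBPP_NZBNAME' in os.environ:
    mode = 'post-processing'
elif 'NZBNA_NZBNAME' in os.environ:
    mode = 'queue'
elif 'NZBNP_NZBNAME' in os.environ:
    mode = 'scan'
else:
    mode = 'scheduler'

print('Invoked as a %s script' % mode)
```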
Andrey Prygunkov
d51cdfd7c4 increased a few wait intervals which were unnecessarily small 2014-05-04 13:18:28 +00:00
Andrey Prygunkov
3c02b139e8 eliminated loop waiting time in queue coordinator on certain conditions - may improve performance on very high speed connections 2014-05-03 11:28:39 +00:00
Andrey Prygunkov
9d660b9d4e extended info printed by remote command "nzbget -B dump" 2014-05-02 19:36:02 +00:00
Andrey Prygunkov
eaf7c61f01 fixed: for downloads with force priority the status was shown orange (instead of green) and the progress info was not shown during post-processing if the program was paused 2014-04-27 12:05:56 +00:00
Andrey Prygunkov
1234c05690 small adjustment in speed formatting 2014-04-26 21:40:18 +00:00
Andrey Prygunkov
fd5b6769fa small fix: data sizes exactly equal to 10, 100, 1000 MB or GB were formatted using 4 digits instead of 3 (one digit too many after the decimal point) 2014-04-26 21:29:49 +00:00
Andrey Prygunkov
63db34070e data sizes above 1000 GB are now shown as TB in web-interface (instead of GB) 2014-04-26 21:21:29 +00:00
Andrey Prygunkov
b41cd3ff97 addition to r945: adjusted module initialization to avoid possible bugs due to delayed thread starts 2014-04-25 23:02:51 +00:00
Andrey Prygunkov
f2406ee0e4 fixed: queue was not locked during loading on program start and that could cause problems 2014-04-25 22:56:33 +00:00
Andrey Prygunkov
4712c6a372 fixed: errors during loading of the queue from disk state could render the already loaded parts useless too; now at least those parts of the queue are used 2014-04-25 22:51:22 +00:00
Andrey Prygunkov
56dc1b2b6c fixed: the program could crash during parsing of malformed nzb-files 2014-04-25 21:37:47 +00:00
Andrey Prygunkov
cb13d00844 added force-priorities; downloads with priorities equal to or greater than 900 are downloaded and post-processed even if the program is in paused state (force mode); in web-interface the combo for choosing priority has new entry "force" (priority value 900); new fields "ForcedSizeLo", "ForcedSizeHi" and "ForcedSizeMB" returned by RPC-method "status"; 2014-04-22 20:26:29 +00:00
Andrey Prygunkov
7a11e8eb19 split files are now joined automatically (again) 2014-04-17 16:33:20 +00:00
Andrey Prygunkov
482af25c90 fixed: field "STATUS" was not set correctly for par-checked downloads without unpack (bug introduced in r992) 2014-04-16 17:51:28 +00:00
Andrey Prygunkov
0c17e21b85 fixed: par-check could hang on renamed and split files 2014-04-16 17:49:41 +00:00
Andrey Prygunkov
0acb6ac548 fixed: cancelling of active par-job sometimes didn't work 2014-04-16 17:48:44 +00:00
Andrey Prygunkov
7f339860ad fixed: command "Pause" was not shown in actions menu in download details dialog (bug introduced in r987) 2014-04-15 19:11:56 +00:00
Andrey Prygunkov
67cf38a291 experimental: download speeds above 1024 KB/s are now indicated in MB/s 2014-04-15 16:09:35 +00:00
Andrey Prygunkov
cad28b9fd5 addition to r990: fixed: download speeds above approx. 70 MB/s were not indicated correctly in web-interface and by RPC-method "status" 2014-04-15 16:04:18 +00:00
Andrey Prygunkov
80ceca6e28 new field "STATUS" in RPC-method "history" to allow third-party apps to determine the status of an item more easily, without inspecting the status-fields of every processing step; web-interface uses the new field "STATUS" 2014-04-14 22:06:23 +00:00
Andrey Prygunkov
bdcb8864fb fixed: history status "SKIPPED" and "SCAN" for URL-items were not properly read from disk state 2014-04-14 20:55:04 +00:00
Andrey Prygunkov
ced444282f fixed: download speeds above approx. 70 MB/s were not indicated correctly in web-interface and by RPC-method "status" 2014-04-14 19:47:18 +00:00
Andrey Prygunkov
5c7c11e3f4 fixed: status "PP-QUEUED" was shown in green/orange instead of gray (bug introduced in r987) 2014-04-13 08:22:27 +00:00
Andrey Prygunkov
2b4628fb43 fixed: estimated time was not shown during download (bug introduced in r987) 2014-04-13 08:13:11 +00:00
Andrey Prygunkov
a0dbd75f35 RPC-method "listgroups" now returns new field "Status" making it easier for third-party apps to determine the status of a download entry; added prefix "Post" to new post-processing fields added in r984; changed web-interface to use new field "Status"; fixed: progress-label during post-processing did not show output of the pp-scripts (bug introduced in r984); fixed: button "Log" was not shown in the download details dialog for an active post-processing download (bug introduced in r984) 2014-04-12 21:30:19 +00:00
Andrey Prygunkov
a20877ea80 corrected html formatting for statistics data in details dialog 2014-04-12 21:21:05 +00:00
Andrey Prygunkov
e151691711 fixed: file states were not fully reloaded when option "ContinuePartial" was used (bug introduced in r982) 2014-04-12 08:14:01 +00:00
Andrey Prygunkov
f42db27eaa RPC-method "listgroups" now returns info about post-processing similar to the info returned by method "postqueue"; RPC-method "postqueue" is obsolete now; web-interface requires fewer requests to NZBGet on each page update and it is now easier for third-party developers to obtain the info about download and post-processing status (no need to merge download queue and post queue) 2014-04-10 20:45:46 +00:00
Andrey Prygunkov
724eab69d8 per-server/per-nzb article completion statistics are now available for active downloads in the details dialog (not only for history); the info on that page is constantly updated as long as the page is active (unless refresh is disabled); download age info removed from the details dialog to save space (it is shown in the download list anyway); if backup news-servers start to be used for an nzb-file a badge appears in the download list showing the percentage of articles downloaded from backup servers; a click on the badge opens the download details dialog directly on the completion page 2014-04-10 20:24:28 +00:00
Andrey Prygunkov
a83dbccc6c changed the way option "ContinuePartial" works: now the information about completed articles is stored in a special file in QueueDir; when option "DirectWrite" is active no separate flag-files per article are created in TempDir; the file contains additional information which was not stored/available before; fixed: per-server/per-nzb article completion statistics could be inaccurate for nzb-files whose download was interrupted by reload/restart; per-server/per-nzb article completion statistics are now available via RPC-method "listgroups" for active downloads (not only for "history") 2014-04-10 20:06:55 +00:00
Andrey Prygunkov
178e987650 fixed: seconds/minutes/hours slots of volume statistics could be incorrectly cleared on program start due to time zone offset not yet initialized at the time the volume data was loaded 2014-04-09 21:09:45 +00:00
Andrey Prygunkov
3cd126f08d fixed: after deleting servers from config file the program could crash on start when loading server volume statistics data from disk 2014-04-09 20:09:08 +00:00
Andrey Prygunkov
b109123a43 fixed: data volume dialog could show a wrong current date due to incorrect time zone calculation 2014-04-05 23:29:52 +00:00
Andrey Prygunkov
c97e97d2cc updated all links to go to new domain (nzbget.net) 2014-04-04 21:45:48 +00:00
Andrey Prygunkov
160d098510 extended data volume statistics dialog with numbers for current day, month, all-time total and custom counter; the custom counter can be manually reset; new fields in the result of RPC-method "servervolumes"; new RPC-method "resetservervolume" 2014-04-04 20:44:46 +00:00
Andrey Prygunkov
a72e1924ca updated options descriptions in template config file 2014-04-03 20:34:27 +00:00
Andrey Prygunkov
fd7508f152 better handling of backwards system clock changes in data volume meter 2014-04-03 20:21:59 +00:00
Andrey Prygunkov
1de995f9d5 better handling of incorrect system clock date (such as 01-01-2000) in data volume meter 2014-04-02 20:56:11 +00:00
Andrey Prygunkov
47fbe6423e added collecting of download volume statistics data per news server; in web-interface the data is shown as chart in "Statistics and Status" dialog; new RPC-method "servervolumes" returns the collected data 2014-04-01 21:06:31 +00:00
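A sketch fetching the new per-server volume data via the "servervolumes" RPC-method named above; the shape of the result is not described in this log, so the sketch only dumps it, and the connection details are placeholders.

```python
# Sketch: fetching the per-server download volume data via the new
# "servervolumes" RPC-method. The result structure is not described in
# this log, so it is only dumped; endpoint and credentials are
# placeholders.
from pprint import pprint
from xmlrpc.client import ServerProxy

server = ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')
pprint(server.servervolumes())
```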
Andrey Prygunkov
461c2a38a5 fixed: sometimes URLs were removed too early from the feed history causing them to be detected as "new" and fetched again; if duplicate check was not active the same nzb-files could be downloaded again 2014-03-28 21:32:55 +00:00
Andrey Prygunkov
d89036f9f3 1) the current time zone is now determined using system function "localtime" once on program start and again if a clock adjustment is detected; the function "localtime" is no longer constantly used by the scheduler; this should solve the hibernation problem on synology NAS, even when the task scheduler is used; 2) fixed: the RSS feed preview dialog displayed slightly incorrect post ages because of a wrong time zone conversion 2014-03-21 21:35:32 +00:00
Andrey Prygunkov
998cb16bfa added new files 2014-03-20 21:44:00 +00:00
Andrey Prygunkov
4c2a8c2892 refactor: moved speed meter code from "QueueCoordinator" into new module "StatMeter" 2014-03-20 21:37:32 +00:00
Andrey Prygunkov
8d3afa0bb6 remote command "-B dump" now can be used also in release (non-debug) versions and prints useful debug data as "INFO" instead of "DEBUG" 2014-03-20 21:18:27 +00:00
Andrey Prygunkov
3fd7bbc0a3 refactor: reducing dependencies between modules 2014-03-20 21:14:39 +00:00
Andrey Prygunkov
bf66500aac reworking queue (continued): merged the url queue into the main download queue: urls added to the queue are now immediately shown in web-interface; urls can be reordered and deleted; when urls are fetched the downloaded nzb-files are put into the queue at the positions of their urls; this solves the problem of fetched nzb-files being ordered differently than the urls if the fetching of upper (position-wise) urls was completed after that of the lower urls; removed options "ReloadUrlQueue" and "ReloadPostQueue" since there are no separate url- and post-queues anymore; nzb-files added via urls have new field "URL" which can be accessed via RPC-methods "listgroups" and "history"; new env. vars "NZBNP_URL", "NZBNA_URL" and "NZBPP_URL" passed to NzbProcess, NzbAddedProcess and PostProcess-scripts; removed remote command "--list U", urls are now shown as groups by command "--list G"; RPC-method "urlqueue" is still supported for compatibility but should not be used since the urls are now returned by method "listgroups", the entries have new field "Kind" which can be "NZB" or "URL" 2014-03-18 22:35:58 +00:00
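With urls merged into the main queue, pending url entries can be read from "listgroups" by checking the new "Kind" field. A sketch, where "Kind" and "URL" come from the commit message, and the connection details, the listgroups argument (assumed to be the number of log entries per group) and "NZBName" are assumptions.

```python
# Sketch: listing pending url entries from the merged queue. "Kind" and
# "URL" come from the commit message; the endpoint, credentials, the
# listgroups argument (assumed: number of log entries per group) and
# "NZBName" are assumptions.
from xmlrpc.client import ServerProxy

server = ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

for group in server.listgroups(0):
    if group['Kind'] == 'URL':
        print(group.get('NZBName', ''), group['URL'])
```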
Andrey Prygunkov
e28da0d7fd added new option "PropagationDelay", which sets the minimum post age to download; newer posts are kept on hold in download queue until they get older than the defined delay, after that they are downloaded 2014-03-11 22:05:27 +00:00
Andrey Prygunkov
f10bc886c4 column "age" in web-interface now shows minutes for recent posts (instead of "0 h") 2014-03-11 21:42:54 +00:00
Andrey Prygunkov
df578ac78b updated MSVC project file 2014-03-09 21:51:07 +00:00
Andrey Prygunkov
18e1557cf3 fixed: if during par-repair the downloaded extra par-files were damaged and the repair was terminated with failure status, the post-processing scripts were sometimes executed twice 2014-03-09 21:18:57 +00:00
Andrey Prygunkov
30e6131cd7 improved par-check for damaged collections with multiple par-sets and missing files: only orphaned files (not belonging to any par-set) are scanned when looking for missing files; this greatly decreases the par-check time for big collections 2014-03-05 23:46:29 +00:00
Andrey Prygunkov
44310fda20 improved error reporting if par-renamer fails to rename files 2014-03-04 21:29:27 +00:00
Andrey Prygunkov
fb7431abb5 improved error reporting if unpacker fails to move files 2014-03-04 18:24:16 +00:00
Andrey Prygunkov
5b109ea3ce fixed missing includes (bug introduced in r957) 2014-02-26 21:34:55 +00:00
Andrey Prygunkov
a671e9f925 refactor: split unit ScriptController.cpp into three units: Script.cpp, QueueScript.cpp, PostScript.cpp 2014-02-26 21:28:15 +00:00
Andrey Prygunkov
8168804f05 reorganized source code directory structure: created directory 'daemon' with several subdirectories and put all source code files there 2014-02-24 22:11:14 +00:00
Andrey Prygunkov
ec576ad0a9 fixed: damaged nzb-files containing multiple par-sets and not having enough par-blocks could cause a crash during par-check 2014-02-22 23:23:45 +00:00
Andrey Prygunkov
fa3abcfdec reworking queue (continued): refactor: download queue can now be accessed without QueueCoordinator; edit and save functions can now be called directly on download queue without accessing global objects QueueEditor and DiskState (the calls are rerouted to these objects internally) 2014-02-22 23:21:20 +00:00
Andrey Prygunkov
33864614e7 eliminated the distinction between manual pause and soft-pause; there is only one pause register now; options "ParPauseQueue", "UnpackPauseQueue" and "ScriptPauseQueue" do not change the state of the pause but instead are respected directly; RPC-methods "pausedownload2" and "resumedownload2" are aliases to "pausedownload" and "resumedownload" (kept for compatibility); field "Download2Paused" of RPC-method "status" is an alias to "DownloadPaused" (kept for compatibility); action "D2" of remote commands "--pause/-P" and "--unpause/-U" is not supported anymore 2014-02-19 21:45:56 +00:00
Andrey Prygunkov
f4bf68ee59 refactor: moved parts from unit "PrePostProcessor.cpp" into new unit "HistoryCoordinator.cpp" 2014-02-19 21:17:24 +00:00
Andrey Prygunkov
2b3d6f976d refactor: removed an unneeded parameter from one function 2014-02-14 21:04:20 +00:00
Andrey Prygunkov
641a3313ea fixed: health check action (pause or delete) didn't work properly (bug introduced in r949) 2014-02-14 20:55:21 +00:00
Andrey Prygunkov
08e6665ffc reworking queue (continued): remote command "-E/--edit" and RPC-method "editqueue" now use the NZBIDs of groups to edit groups (instead of using the ID of any file in the group as in older versions); remote command "-L/--list" for groups (G) and the group-view in the curses-frontend now print NZBIDs instead of "FirstID-LastID"; RPC-method "listgroups" returns NZBIDs in fields "FirstID" and "LastID", which are usually used as arguments to "editqueue" (for compatibility with existing third-party software); items queued for post-processing and not having any remaining files can now be edited (to cancel post-processing), which was not possible before due to the lack of a "LastID" in empty groups; edit commands for the download queue and the post-processing queue now both use the same IDs (NZBIDs) 2014-02-12 21:24:46 +00:00
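For illustration, a Python sketch pausing two groups by their NZBIDs; the four-argument editqueue(Command, Offset, Text, IDs) form, the connection details and the IDs are assumptions, so check the API documentation of your version:

import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

# 21 and 22 stand for NZBIDs as reported by "listgroups" (fields "FirstID"/"LastID").
server.editqueue('GroupPause', 0, '', [21, 22])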
Andrey Prygunkov
e6a7af4ab3 fixed: crash during post-processing if history was disabled (bug introduced in r943) 2014-02-10 20:56:54 +00:00
Andrey Prygunkov
a0030a7909 fixed: strange (damaged?) par2-files could cause a crash during par-renaming 2014-02-10 20:54:12 +00:00
Andrey Prygunkov
13c7a7986e fixed: when splitting paused downloads the destination download showed an incorrect paused size (bug introduced in r934) 2014-02-09 21:01:52 +00:00
Andrey Prygunkov
bd1ea872be adjusted modules initialization to avoid possible bugs due to delayed thread starts 2014-02-08 22:09:44 +00:00
Andrey Prygunkov
dfcd595bc1 fixed a locking issue happening in non-daemon mode (bug introduced in r943) 2014-02-05 22:06:43 +00:00
Andrey Prygunkov
f77c97c66a reworking queue (continued): merged post-processing queue into main download queue; changing the order of (pp-queued) items in the download queue now also means changing the order of post-processing jobs; priorities of downloads are now respected when picking the next queued post-processing job; the moving of download items in the web-interface is now allowed for downloads queued for post-processing; removed actions of remote command "--edit/-E" and of RPC-method "editqueue" used to move post-processing jobs in the post-processing queue (the moving of download items should be used instead) 2014-02-04 22:30:52 +00:00
Andrey Prygunkov
ca53391bdb reworked download queue (continued): removed a few (no longer necessary) checks from the duplicate manager 2014-02-03 20:50:53 +00:00
Andrey Prygunkov
d01dd904da reworking queue (continued): field "Priority" was removed from individual files; instead nzb-files (collections) now have field "Priority"; nzb-files now also have new fields "MinTime" and "MaxTime", which are set when the nzb-file is parsed and then kept; this eliminates the need to recalculate file statistics (min and max priority, min and max time); removed action "FileSetPriority" from RPC-command "editqueue"; removed action "I" from remote command "--edit/-E" for individual files (now it is allowed for groups only) 2014-01-31 20:51:14 +00:00
Andrey Prygunkov
bb885bddd4 for downloads not having any (obviously named) par2-files the critical health is assumed to be 85% instead of the 100% that the absence of par2-files would suggest; this avoids possible false triggering of the health-check action (delete or pause) for downloads having misnamed (obfuscated) par2-files; combined with the improved fast par-renamer this provides proper processing of downloads with misnamed (obfuscated) par2-files 2014-01-28 22:14:50 +00:00
Andrey Prygunkov
3d8f2c62ea improved fast par-renamer: it now automatically detects and renames misnamed (obfuscated) par2-files 2014-01-28 22:06:56 +00:00
Andrey Prygunkov
7cdb5e86c6 reworked download queue (continued): removed fields FirstID and LastID from internal nzb-file file data; RPC-method "listgroups" returns the ID of the last file in the group for both FirstID and LastID fields; the only usage of these fields was in RPC-method "editqueue", where LastID was preferred anyway; remote command "--list/-L" for groups now shows only LastID; curses-interface shows only LastID 2014-01-26 21:33:26 +00:00
Andrey Prygunkov
255b2b464d fixed: download priority was not shown correctly in web-interface (and via RPC) (bug introduced in r934) 2014-01-25 10:30:04 +00:00
Andrey Prygunkov
0ef771ca15 avoiding unnecessary calls to system function "localtime" from the scheduler if no tasks are defined; this solves hibernation issues on synology NAS (but only if the scheduler is not used) 2014-01-23 21:03:04 +00:00
Andrey Prygunkov
a3207496b6 fixed: post-processing scripts were not executed in standalone mode ("nzbget /path/to/file.nzb") 2014-01-23 20:46:37 +00:00
Andrey Prygunkov
741724973c reworked download queue (continued): 1) current download data such as remaining size or size of paused files is now automatically updated internally on related events (download of an article completed, queue edited, etc.); 2) this eliminates the need to calculate this data upon each RPC-request (from the web-interface) and greatly decreases the CPU load of processing RPC-requests when having a large download queue (and/or large nzb-files in the queue) 2014-01-21 21:56:43 +00:00
Andrey Prygunkov
3375c91b56 reworked download queue: 1) queue now holds nzb-jobs instead of individual files (contained within nzbs); 2) this drastically improves performance when managing queue containing big nzb-files on operations such as pause/unpause/move items; 3) tested with queue of 30 nzb-files each 40-100GB size (total queue size 1.5TB) - queue managing is fast even on slow device; 4) limitation: individual files (contained within nzbs) now cannot be moved beyond nzb borders (in older version it was possible to move individual files freely and mix files from different nzbs, although this feature was not supported in web-interface and therefore was not much known); 5) this change opens doors for further speed optimizations and integration of download queue with post-processing queue and possibly url-queue; 6) NOTE: make backup of your queue-directory before trying this (devel) version 2014-01-21 21:45:47 +00:00
Andrey Prygunkov
67da9d7233 updated version string to 13.0-testing 2014-01-21 20:51:58 +00:00
Andrey Prygunkov
4b228b32f0 OSX-app: restarting via troubleshooting-menu could result in an error message "Could not establish connection to background process" although the process was successfully restarted 2014-01-02 21:06:03 +00:00
Andrey Prygunkov
f66189de6b updated version string (preparing to release 12.0) 2014-01-02 20:32:11 +00:00
Andrey Prygunkov
743805ecd5 updated ChangeLog 2014-01-01 21:27:29 +00:00
Andrey Prygunkov
6c9dcea08c fixed: for rar-files with old naming scheme only files with extensions rxx and sxx were deleted during cleanup leaving the files with extensions txx, uxx, etc. 2013-12-25 22:43:58 +00:00
Andrey Prygunkov
fa1944090d fixed wrong size of progress wheel during adding of URL in add dialog 2013-12-24 21:00:17 +00:00
Andrey Prygunkov
04cf428619 fixed: duplicate mode was not saved from history dialog 2013-12-24 18:38:10 +00:00
Andrey Prygunkov
86d6b00886 added new command "Download again" for history items; new action "HistoryRedownload" of RPC-method "editqueue"; for controlling via command line: new action "A" of subcommand "H" of command "--edit/-E" 2013-12-21 21:39:49 +00:00
Andrey Prygunkov
38429d98df fixed a potential problem caused by incorrect use of a library function 2013-12-19 21:28:45 +00:00
Andrey Prygunkov
0c9667fe58 fixed memory leak in RSS feed parser (Posix only) 2013-12-19 20:28:50 +00:00
Andrey Prygunkov
3c6bb7be4c 1) NZBIDs are now generated with more care avoiding numbering holes possible with previous versions; 2) fixed: new NZBIDs were generated on each refresh of web-ui (bug introduced in r811); 3) for queue disk state written by versions r811-r920 the NZBIDs are renumbered starting from 1 2013-12-18 20:19:42 +00:00
Andrey Prygunkov
40fa732122 do not close the rss filter dialog if a communication error occurs during the first fetching of the rss feed and the filter was already edited by the user (this allows saving the filter) 2013-12-16 21:15:49 +00:00
Andrey Prygunkov
01e2e25bce fixed potential segfault 2013-12-12 23:43:38 +00:00
Andrey Prygunkov
29e916dcdd NZBGet.app for OSX: fixed: one text message was not properly shown 2013-12-10 20:44:58 +00:00
Andrey Prygunkov
1bfa7610ae improved error reporting when creation of temporary output file fails 2013-12-10 20:37:02 +00:00
Andrey Prygunkov
fb94c32bb4 fixed: when deleting a download whose remaining queued files are all par2-files, the disk cleanup should not be performed, but sometimes it was 2013-12-10 20:25:07 +00:00
Andrey Prygunkov
94ad26d818 fixed: RSS feed filter fields "age" and "size" did not work (bug introduced in r908) 2013-12-03 21:39:18 +00:00
Andrey Prygunkov
f323addc1c added new option "TimeCorrection" to adjust the conversion from system time to local time (solves issues with the scheduler when using a binary compiled for another platform) 2013-11-28 21:03:01 +00:00
Andrey Prygunkov
5559c91c0e do not close the rss filter dialog if a communication error occurs during editing of the rss filter (this allows at least saving the filter to the clipboard) 2013-11-28 20:46:32 +00:00
Andrey Prygunkov
6cc5eab94b fixed: some of actions for remote command "--edit/-E" did not work properly (bug introduced in r900) 2013-11-24 20:15:41 +00:00
Andrey Prygunkov
c2a3450c8f refactor: removed many unneeded pointer-not-null-safety-checks 2013-11-24 19:29:52 +00:00
Andrey Prygunkov
01e1ec0794 fixed line endings in one source file 2013-11-24 13:47:36 +00:00
Andrey Prygunkov
ea381cde90 fixed encoding issue for non-ASCII characters in DNZB-Headers 2013-11-18 20:37:20 +00:00
Andrey Prygunkov
26074c67c6 extended RSS filters: 1) added search field "description"; 2) any newznab attribute can now be used as a search field with prefix "attr-" (for example "attr-genre"); 3) removed search fields "genre" and "rating" (use "attr-genre" and "attr-rating" instead) 2013-11-17 21:57:32 +00:00
Andrey Prygunkov
e2b13fcda5 fixed: for downloads deleted by health-check status was shown as "DELETED-HEALTH" instead of "FAILURE" 2013-11-14 20:17:45 +00:00
Andrey Prygunkov
0130852d9a added scheduler command "FetchFeed"; renamed RPC-method "fetchfeeds" to "fetchfeed" and added parameter "ID" 2013-11-12 20:54:45 +00:00
Andrey Prygunkov
a027af9e84 if unpack fails with a write error (usually because there is not enough space on disk) this is shown as status "Unpack: space" in the web-interface; this unpack-status is handled as "success" by duplicate handling (no download of another duplicate); also added new unpack-status "wrong password" (only for rar5-archives); env.var. NZBPP_UNPACKSTATUS has two new possible values: 3 (write error) and 4 (wrong password); updated pp-script "EMail.py" to support the new unpack-statuses 2013-11-08 21:54:44 +00:00
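A minimal Python pp-script fragment reacting to the new statuses; only the values 3 and 4 come from this change, the meaning of 0-2 and the exit code are assumptions carried over from earlier versions:

import os
import sys

# NZBPP_UNPACKSTATUS: 3 = write error, 4 = wrong password (added here);
# 0/1/2 = not performed/failed/success (assumed from earlier versions).
status = int(os.environ.get('NZBPP_UNPACKSTATUS', '0'))

if status == 3:
    print('[WARNING] Unpack failed with a write error (probably not enough disk space)')
elif status == 4:
    print('[WARNING] Unpack failed: wrong archive password')

sys.exit(93)  # conventional POSTPROCESS_SUCCESS exit code of pp-scripts (assumption)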
Andrey Prygunkov
ce81b3d4da added status filter buttons to history page 2013-11-07 21:01:44 +00:00
Andrey Prygunkov
96d8ff3cb7 added new scheduler commands "ActivateServer" and "DeactivateServer"; combined options "TaskX.DownloadRate" and "TaskX.Process" into one option "TaskX.Param", also used by two new commands 2013-11-07 20:55:33 +00:00
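A hedged configuration sketch of the new commands; only "TaskX.Param", "ActivateServer" and "DeactivateServer" come from this change, the remaining task options, the times and the idea of referencing the server by its number are assumptions:

# Deactivate news server 2 at night and reactivate it in the morning (illustrative):
Task1.Time=01:00
Task1.Command=DeactivateServer
Task1.Param=2
Task2.Time=07:00
Task2.Command=ActivateServer
Task2.Param=2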
Andrey Prygunkov
b67c354fdb better handling of obfuscated nzb-files containing multiple files with the same names; removed option "StrictParName" which did not work well with obfuscated files; if more par-files are required for repair, the files with strict names are tried first and then the other par-files 2013-11-07 20:14:23 +00:00
Andrey Prygunkov
9a610197ea when a duplicate backup is returned from history to download queue its paused-state is now correctly restored 2013-11-05 21:11:47 +00:00
Andrey Prygunkov
1109c3423c reworked duplicate handling: 1) when a duplicate is added to the queue it is now moved to the history as a dupe-backup instead of being paused in the download queue; 2) if a download fails the best duplicate is moved from the history back to the queue for download (if there are no duplicates in the queue); this makes it easier to manage the download queue without worrying about properly pausing/unpausing duplicates; 3) badges with duplicate info are not shown in the list of downloads and history anymore; if necessary they can be activated by manually editing setting "dupeBadges" in index.js; 4) when deleting downloads from the queue there are three options now: "move to history", "move to history as duplicate" and "delete without history tracking"; 5) new actions "GroupDupeDelete" and "GroupFinalDelete" in addition to "GroupDelete" in RPC-method "editqueue"; 6) the DUPE-mark for history records can now be set or cleared via the history dialog; 7) new action "HistorySetDupeBackup" in RPC-method "editqueue"; 8) when deleting downloads from the queue the messages about deleted individual files are now printed as "detail" instead of "info"; 9) removed command "Mark as duplicates" from the edit dialog for multiple selected downloads and from RPC-method "editqueue"; the command is not needed anymore since all duplicate properties are now changeable 2013-11-04 20:59:20 +00:00
Andrey Prygunkov
18fcb8d0ad if unpack did not find archive files the par-check is not requested anymore if par-rename was already done 2013-10-30 20:15:07 +00:00
Andrey Prygunkov
a5845ed0d9 improved par-check: if the main par2-file is corrupted and cannot be loaded, other par2-files are downloaded and then used as a replacement for the main par2-file 2013-10-30 20:06:18 +00:00
Andrey Prygunkov
3392fa59fe improved support of non-ascii characters in file names on windows (again) 2013-10-25 20:09:25 +00:00
Andrey Prygunkov
95e816572a small restructure in settings order: 1) combined sections "REMOTE CONTROL" and "PERMISSIONS" into one section with name "SECURITY"; 2) moved sections "CATEGORIES" and "RSS FEEDS" higher in the section list 2013-10-24 20:33:40 +00:00
Andrey Prygunkov
3acb3aab9f addition to r894: committed missing changes in another file 2013-10-24 20:24:43 +00:00
Andrey Prygunkov
509860d890 fixed: if unpack fails the created destination directory was not automatically removed (only if option "InterDir" was active) 2013-10-24 20:17:51 +00:00
Andrey Prygunkov
61dcc467ee added support for rar5-format when checking signatures of archives with non-standard file extensions 2013-10-24 20:15:59 +00:00
Andrey Prygunkov
ae6601d9e3 improved handling of non-ascii characters in file names on windows 2013-10-23 19:22:02 +00:00
Andrey Prygunkov
33733af3c5 when option "UnpackCleanupDisk" is active all archive files are now deleted from download directory without relying on output printed by unrar; this solves issues with non-ascii-characters in archive file names on some platforms and especially in combination with rar5 2013-10-23 19:16:20 +00:00
Andrey Prygunkov
da7c0ab7d6 variable substitution now works for all options, including "FeedX.Filter"; if a variable cannot be found no error is printed anymore, because the character sequence used to define a variable reference can be part of a feed filter and therefore should not be reported as an error 2013-10-22 21:34:36 +00:00
Andrey Prygunkov
afd156b51f when option "InterDir" is used the intermediate destination directory names now include a unique number to prevent several downloads with the same name from using the same directory and interfering with each other 2013-10-22 21:17:02 +00:00
Andrey Prygunkov
5d549b7c60 option "InterDir" is now active by default 2013-10-22 21:05:13 +00:00
Andrey Prygunkov
1b5671dc87 increased width of Update-dialog to accommodate 80 character columns; this improves output of console tools like wget relying on standard terminal width 2013-10-22 21:04:22 +00:00
Andrey Prygunkov
87e2893505 for external scripts exec-permissions are now added automatically; this makes the installation of pp-scripts and other scripts easier 2013-10-21 20:24:02 +00:00
Andrey Prygunkov
89443e342f if option "ParRename" is disabled (not recommended) the unpacker does not initiate par-rename anymore; instead the full par-verify is performed; refactor: simplified the code requesting par-rename after unpack 2013-10-20 20:59:49 +00:00
Andrey Prygunkov
de5ed803ed for archives that include par-files for renaming of extracted files, the par-renaming now works for extracted sub-directories too 2013-10-19 21:16:53 +00:00
Andrey Prygunkov
528f9a7ec4 added new files to VC project 2013-10-18 20:38:41 +00:00
Andrey Prygunkov
a1f7656fe4 addition to r879: removed check if download has downloaded files 2013-10-17 22:00:37 +00:00
Andrey Prygunkov
a5703a55eb added automatic updates: new button "Check for updates" on settings tab of web-interface, in section "SYSTEM", initiates check and shows dialog allowing to install new version; it is possible to choose between stable, testing and development branches; this feature is for end-users using binary packages created and updated by maintainers, who need to write an update script specific for platform; the script is then called by NZBGet when user clicks on install-button; the script must download and install new version; for more info visit http://nzbget.sourceforge.net/Packaging 2013-10-17 19:35:43 +00:00
Andrey Prygunkov
1275b85465 option "DiskSpace" now checks space on "InterDir" in addition to "DestDir" 2013-10-17 19:11:46 +00:00
Andrey Prygunkov
52dc2738a1 when removing duplicates from the queue after a successful download, only unpaused items which do not have any downloaded articles are now removed; this prevents deletion of higher-score duplicates 2013-10-17 19:09:40 +00:00
Andrey Prygunkov
7c0f7cbdc2 addition to r877: committed missing changes in the header-file 2013-10-17 18:56:26 +00:00
Andrey Prygunkov
c14ef8bd13 fixed: in RSS filters, when using substitution variables referring to matches, a wrong variable could be substituted if the substring search did not start with a star-character 2013-10-17 18:52:16 +00:00
Andrey Prygunkov
ce62ae9f50 fixed: superfluous spaces in RSS filter caused non-matching 2013-10-14 20:33:52 +00:00
Andrey Prygunkov
133347f884 support for rar-archives with non-standard extensions is now limited to file extensions consisting of digits; this is to avoid extracting rar-archives that have non-rar extensions on purpose (example: .cbr) 2013-10-09 19:47:11 +00:00
Andrey Prygunkov
ca4f56cb04 fixed: detection of par-files inside archives did not work properly 2013-10-09 19:45:35 +00:00
Andrey Prygunkov
f512ae973d history records with failed script status are now shown as "PP-FAILURE" in history list (instead of just "FAILURE") 2013-10-09 19:43:53 +00:00
Andrey Prygunkov
04cceb314e updated descriptions of a few options 2013-10-09 19:38:49 +00:00
Andrey Prygunkov
4138e10788 removed option "Also delete downloaded files" from history delete confirmation dialog; for failed downloads cleanup is now performed if option "DeleteCleanupDisk" is active; in RPC-Method "editqueue" removed actions "HistoryDeleteCleanup" and "HistoryFinalDeleteCleanup" 2013-10-09 19:37:44 +00:00
Andrey Prygunkov
5d11b9aa97 added special handling for files ".AppleDouble" and ".DS_Store" during unpack to avoid problems on NAS having support for AFP protocol (used on Mac OSX) 2013-10-08 19:44:06 +00:00
Andrey Prygunkov
6033c2b3ce fixed: invalid "Offset" passed to RPC-method "editqueue" or command line action "-E/--edit" could crash the program 2013-10-08 19:41:26 +00:00
Andrey Prygunkov
dfb28dc155 history records can now be either permanently deleted or just hidden from history list; hiding (instead of deleting) is recommended for proper work of duplicate handling; in addition it is now possible to delete downloaded files; RPC-method "editqueue" has now four actions to delete history records: "HistoryDelete", "HistoryDeleteCleanup", "HistoryFinalDelete", "HistoryFinalDeleteCleanup"; action "HistoryDelete" which has existed before now hides records, already hidden records are ignored; button "Old" in web-interface on history tab renamed to "Hidden"; badge "DUP", used to distinguish old history records changed to "hidden" 2013-10-08 19:35:59 +00:00
Andrey Prygunkov
756b29d9ed fixed a potential seg. fault in a commonly used function 2013-10-07 21:15:46 +00:00
Andrey Prygunkov
dc7a3af768 destination directory for option "DestDir" is not checked/created on program start anymore (only when a download starts); this helps when DestDir is mounted to a network drive which is not available on program start 2013-10-07 19:29:36 +00:00
Andrey Prygunkov
baf3a2d915 addition to r863: fixed: option "UrlForce" did not really work 2013-10-07 16:15:38 +00:00
Andrey Prygunkov
25832ab2ea option "HealthCheck" is now set to "Delete" by default; the previously default setting "Pause" did not work well with automatic duplicate handling 2013-10-07 16:14:14 +00:00
Andrey Prygunkov
c95c0401eb added new option "UrlForce" to allow URL-downloads (including fetching of RSS feeds and nzb-files from feeds) even if download is paused; the option is active by default 2013-10-06 18:55:09 +00:00
Andrey Prygunkov
936580a924 moved command "Duplicate properties" from actions-menu into button "Dupe" near "PP-Parameters"; renamed button "PP-Parameters" to "Postprocess" and button "PP-Messages" (visible only during post-processing) to "Log" to free space for new Dupe-button 2013-10-05 13:03:52 +00:00
Andrey Prygunkov
00df147688 fixed: NZBGet.app (OSX): when the option "Show web-interface on start" was active, it took too long to initialize and could result in an error message "Could not establish connection to background process" 2013-10-03 20:35:01 +00:00
Andrey Prygunkov
8273dcfdfc fixed: error reading diskstate when upgrading from r808 or an older version 2013-10-03 13:34:33 +00:00
Andrey Prygunkov
81e2dc3635 improved parsing of multi-episodes from titles when generating dupekeys using item-options "rageid" or "series" and season/episode numbers 2013-10-01 16:52:23 +00:00
Andrey Prygunkov
94611cd80b fixed: when generating dupekeys with item-options "rageid" or "series" the season/episode numbers were not parsed from title if they were not used in the filter string 2013-10-01 16:49:12 +00:00
Andrey Prygunkov
28af81142f added two new item-options for RSS filter rules "Accept" and "Options": option "rageid" generates duplicate key using custom rageid and season/episode numbers; option "series" generates duplicate key using custom series name (any unique string) and season/episode numbers 2013-09-30 19:38:26 +00:00
Andrey Prygunkov
b3fd3ec0ba hiding badges for dupekeys in downloads/history lists if option "DupeCheck" is disabled 2013-09-30 19:31:27 +00:00
Andrey Prygunkov
44518d5b33 fixed: if download failed an existing queued duplicate was not automatically unpaused 2013-09-30 19:29:53 +00:00
Andrey Prygunkov
323e74f50f fixed: the space character was not treated as a separator in word search mode in the RSS filter 2013-09-29 10:47:25 +00:00
Andrey Prygunkov
a972e755d1 changes in duplicate handling: 1) when comparing two items, if both have a dupekey only the dupekeys are compared (names are not checked); 2) when a new item without dupekey was added and there was another item with the same name having a dupekey, its dupekey was copied to the new item; this is disabled now; 3) fixed: command "Mark as Good" in history removed all duplicates but should remove only records with status "DUPE" 2013-09-28 21:16:16 +00:00
Andrey Prygunkov
49b6292f7f changes in duplicate handling: removed internal field "DupeMark" marking an item as having duplicates; this flag was not always in sync with reality and it was used only to show (or not) badges with the duplicate key in the web-interface; now badges are always shown for items having non-empty duplicate keys; the badges become red if the duplicate mode is set to "force" 2013-09-28 19:53:20 +00:00
Andrey Prygunkov
dd27dc1503 removed command "Unmark Duplicate" from actions menu and from command line syntax; the duplicate mark is removed automatically once the duplicate mode is set to "force"; otherwise manually removing duplicate mark does not make much sense since the titles are checked for duplicates anyway 2013-09-27 20:45:24 +00:00
Andrey Prygunkov
547ed1fd26 fixed compiler warning 2013-09-27 20:25:10 +00:00
Andrey Prygunkov
c387b0d069 duplicate properties (dupekey, dupescore and dupemode) can now be viewed and changed in download-edit-dialog and history-edit-dialog via new command "Duplicate properties" in actions menu 2013-09-27 19:56:16 +00:00
Andrey Prygunkov
ba2af4d84d fixed: in rss feed filter the substitution variable did not work 2013-09-26 21:06:43 +00:00
Andrey Prygunkov
20b7c6a823 do not show the dupemode in the build-filter-dialog if the mode is set to the default value "score" 2013-09-26 20:22:28 +00:00
Andrey Prygunkov
b2edab0452 1) refactor: moved feed related classes from unit "DownloadInfo" to new unit "FeedInfo"; 2) rss filter fields "season" and "episode" are now available for all feeds (not restricted to newznab); if the feed does not have the fields, they are automatically parsed from feed title; 3) fields "season" and "episode" can now be used as substitution variables in option "dupekey" of rss filter command "Options" 2013-09-26 19:37:25 +00:00
Andrey Prygunkov
a72dc67268 added new search fields to RSS feed filter: imdbid, priority, dupekey and dupescore 2013-09-26 19:15:59 +00:00
Andrey Prygunkov
11b9745268 added OR-operator and groups (braces) to RSS filter syntax 2013-09-26 19:10:26 +00:00
Andrey Prygunkov
1d1d49a3c9 addition to r755: fixed: passwords containing special characters such as TAB were not properly read from nzb-files metatag 2013-09-24 20:42:32 +00:00
Andrey Prygunkov
31a4e7a22c when adding an nzb-file to the queue via RPC-methods "append" and "appendurl" the actual format of the file is checked, and if the nzb-format is detected the file is added even if it does not have an .nzb extension 2013-09-24 19:35:54 +00:00
Andrey Prygunkov
dca13f0749 better handling of queued duplicates in command "Mark as bad" 2013-09-24 19:33:35 +00:00
Andrey Prygunkov
d377eee11c when adding an nzb-file in dupemode "score" the file is skipped if the dup-history has a success-item with the same or higher score and the recent history does not have a success-duplicate 2013-09-24 19:31:56 +00:00
Andrey Prygunkov
196052efed removed prefix "dupe" from badges in download and history lists 2013-09-24 19:28:36 +00:00
Andrey Prygunkov
694c5104fa fixed: incorrect health calculation for downloads consisting of only par-files 2013-09-23 20:26:29 +00:00
Andrey Prygunkov
acbf765851 fixed compiler warning 2013-09-23 20:24:31 +00:00
Andrey Prygunkov
ef06dfb7b3 replaced parameter "NoDupeCheck (yes, no)" with "DupeMode (score, all, force)" when adding nzb-files to queue using RPC-methods "append" and "appendurl"; changed option "nodupe" to "dupemode" in RSS filter commands "Append" and "Options" 2013-09-23 20:18:54 +00:00
Andrey Prygunkov
613432ef2c tuned the algorithm calculating the maximum thread limit to allow more threads for backup server connections (related to option "ThreadLimit" removed in v11); this may sometimes increase speed when backup servers are used 2013-09-22 13:39:43 +00:00
Andrey Prygunkov
512dc87b3f fixed: items were removed from history too soon (option "KeepHistory") 2013-09-22 12:59:30 +00:00
Andrey Prygunkov
480f034301 fixed a few compiler warnings 2013-09-20 21:55:47 +00:00
Andrey Prygunkov
39275ce133 improved smart duplicates features: added functions "Mark as Bad" and "Mark as Good" for history items; when a history item having success-status is marked as bad: 1) it is considered as failure by any duplicate check performed later; 2) if history has duplicates with dupe-status (dupe-backups) they are all moved (as paused) to download queue and one of them (with the highest duplicate score) is unpaused (downloaded); when a history item is marked as good: 1) it is considered as success by any duplicate check performed later; 2) no other duplicates will be added to history as dupe-backups anymore; 3) if history has duplicates with dupe-status (dupe-backups) they are all removed from recent history (moved to dup-history); new actions "HistoryMarkBad" and "HistoryMarkGood" in RPC-method "editqueue"; new actions "B" and "G" of command "--edit/-E" for history items (subcommand "H") 2013-09-20 20:45:07 +00:00
Andrey Prygunkov
c73fa0b42d addition to r817: fixed: error when reading dup-history from disk state 2013-09-19 19:45:31 +00:00
Andrey Prygunkov
3d4ed43337 changed format of generated dupekey for tv shows; now using season and episode exactly as they are passed by rss feed 2013-09-18 20:47:46 +00:00
Andrey Prygunkov
74067fd515 source nzb-files are now deleted when download-item leaves queue and history (option "NzbCleanupDisk") 2013-09-18 20:27:47 +00:00
Andrey Prygunkov
9538771eef refactor: addition to r825: small optimization 2013-09-18 20:09:32 +00:00
Andrey Prygunkov
7ecb968e23 if download was deleted by duplicate check its status in the history is now shown as "DUPE" instead of just "DELETED" 2013-09-18 19:49:59 +00:00
Andrey Prygunkov
c9365732d9 option values in RSS filter command "Options" can now refer to pattern groups in regular expressions 2013-09-17 19:33:56 +00:00
Andrey Prygunkov
d41c13ac29 fixed: if duplicate check has prevented adding file to queue the unneeded disk state files were not deleted from queue directory 2013-09-17 19:18:33 +00:00
Andrey Prygunkov
1b2fa2b2e8 extended word/substring search in RSS feed filters with pattern character "#" which matches one digit 2013-09-17 19:06:11 +00:00
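(For example, with this pattern character a search term such as S##E## would match "S01E05" in a title; the pattern itself is illustrative and not taken from the commit.)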
Andrey Prygunkov
b9a9113abe addition to r820: fixed: when old disk state was converted the content hashes were not initialized (bug introduced in r816) 2013-09-17 19:02:59 +00:00
Andrey Prygunkov
1dd9dbec6c option values in RSS filter command "Options" can now refer to pattern groups in search string 2013-09-16 20:18:58 +00:00
Andrey Prygunkov
84efe5447b fixed compiler warning 2013-09-16 19:51:27 +00:00
Andrey Prygunkov
61bab55d11 fixed: when old disk state was converted the content hashes were not initialized (bug introduced in r816) 2013-09-16 19:49:45 +00:00
Andrey Prygunkov
85d05153f7 changed syntax of option "dupekey" of command "Option" in RSS filter: option "dupekey" now sets duplicate key (overrides existing key); option "dupekey+" adds to existing duplicate key 2013-09-15 14:02:35 +00:00
Andrey Prygunkov
f9bc316c98 rss filter command "Options" can now increase priority and dupe score using new option names "priority+" and "dupescore+" 2013-09-15 08:50:06 +00:00
Andrey Prygunkov
c4adc8d9be improved detection of same nzb-files acquired from different sources (nzb-sites): 1) the order of individual files as well as the order of articles in nzb-files do not matter; 2) individual files having extensions listed in option "ExtCleanupDisk" are now excluded from content comparison (unless these are par2-files, which are never excluded) 2013-09-14 21:20:31 +00:00
Andrey Prygunkov
169719c62d improved duplicate check: when a history item expires (as defined by option "KeepHistory") and the duplicate check is active (option "DupeCheck") the item is not completely deleted from history; instead the amount of stored data is reduced to the minimum required for the duplicate check (about 150 bytes vs 2000 bytes for a full history item); such old history items are not shown in the web-interface by default (to avoid transferring a large number of history items); new button "Old" in the web-interface to show old history items; the items are marked with badge "DUP" 2013-09-13 20:13:09 +00:00
Andrey Prygunkov
a509a491af improved duplicate check: 1) added check for nzb-file content to avoid queueing of exactly same files (even with different names); 2) when nzb-file is added with option "Disable duplicate check" the option now works only during adding, it does not exclude the file from later checks (when adding other files) 2013-09-12 15:54:14 +00:00
Andrey Prygunkov
aed8e26062 extended add-dialog with options "Add paused" and "Disable duplicate check" 2013-09-11 20:21:49 +00:00
Andrey Prygunkov
cec414126d improved smart duplicates: nzb-files now have field "DupeScore" which can be set from the rss filter using command "Options"; items with higher duplicate scores are downloaded even if the history already has a successfully downloaded item (with a lower score); changed the syntax of rss filter command "Options" to allow the use of more options (and to easily add new options in the future); new options "DupeScore", "DupeKey" and "NoDupe" to fine-tune duplicate handling; updated description of option "DupeCheck" 2013-09-11 20:18:52 +00:00
Andrey Prygunkov
6bf96ab808 addition to r811: fixed: items added from feed view dialog were not marked as duplicates 2013-09-09 20:52:51 +00:00
Andrey Prygunkov
d8d9f72985 added smart duplicates feature: similar nzb-files are automatically marked as duplicates; queue items can also be manually marked as duplicates using new commands in the multi-edit-dialog (action menu); the duplicate-mark can be manually removed using a new command in the multi-edit-dialog and edit-dialog (action menu); duplicates are added in paused state; if the download of the first duplicate fails, another duplicate is unpaused; if the download succeeds all remaining duplicates are removed from the queue; an item marked as duplicate has field "DupeKey" which has the same value for all duplicates of the title; this field is shown in the web-interface near the nzb-name (in a short form to save screen space); new actions "GroupMarkDupe" and "GroupUnMarkDupe" of RPC-command "editqueue" to manually mark/unmark duplicates; new subcommands "DM" and "DU" of command "--edit/-E" to manually mark/unmark duplicates; if a url-download results in a file without nzb-extension a history item with status "Scan: skipped" is created to inform the user about this fact; RPC-commands "listgroups", "postqueue" and "history" now return more info about the nzb-item (many new fields); removed option "MergeNzb" because it conflicts with duplicate handling, items can be merged manually if necessary 2013-09-09 20:31:17 +00:00
Andrey Prygunkov
cecb2e8f4c failed article downloads are now logged as "detail" instead of "warning" to reduce number of warnings for downloads removed from server (DMCA); one warning is printed for a file with a summary of number of failed downloads for the file 2013-09-08 20:34:53 +00:00
Andrey Prygunkov
d9ae28d3ed fixed compiler errors when configured with switch "--disable-parcheck" 2013-09-03 20:51:51 +00:00
Andrey Prygunkov
761078625e addition to r807: corrected makefile 2013-09-01 09:28:06 +00:00
Andrey Prygunkov
be37a75928 set svn keywords 2013-08-31 21:14:39 +00:00
Andrey Prygunkov
884e9fb3c9 created NZBGet.app - NZBGet is now a user friendly Mac OSX application with easy installation and seamless integration into OS UI: works in background, accessible via menubar icon 2013-08-31 21:05:31 +00:00
Andrey Prygunkov
8821502a81 updated support for DNZB-Headers: removed "X-DNZB-Link", added "X-DNZB-Details" 2013-08-29 19:56:29 +00:00
Andrey Prygunkov
f66c012df6 pp-scripts can now set post-processing parameters by printing the command "[NZB] NZBPR_varname=value"; this allows scripts which are executed earlier to pass data to scripts executed later 2013-08-28 22:27:50 +00:00
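A tiny Python sketch of the mechanism; the parameter name "myscript_result" is made up for the example:

import os

# An earlier pp-script publishes a value for later scripts:
print('[NZB] NZBPR_myscript_result=ok')

# A script running later in the chain receives it back via its environment:
value = os.environ.get('NZBPR_myscript_result')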
Andrey Prygunkov
baf996fc06 new env-var "NZBPP_FINALDIR" passed to pp-scripts 2013-08-28 21:59:02 +00:00
Andrey Prygunkov
e56a1d3274 refactor: small code rework in passing parameters to post-processing scripts 2013-08-28 20:11:05 +00:00
Andrey Prygunkov
b04af33fb5 addition to r794: removed mistakenly added pp-parameter "NZBPP_DELETED"; since post-processing is not performed for deleted downloads, this parameter has no use 2013-08-28 19:17:41 +00:00
Andrey Prygunkov
1f53d32a62 new option "ParRename" to force par-renaming as a first post-processing step (active by default); this saves an unpack attempt and is even more useful if unpack is disabled 2013-08-28 15:08:37 +00:00
Andrey Prygunkov
534aeb3d07 addition to r795: renamed option "ApprovedIP" to "AuthorizedIP" 2013-08-26 22:07:43 +00:00
Andrey Prygunkov
a3693d0a45 refactor: fixed compiler warnings regarding "printf" 2013-08-26 16:05:29 +00:00
Andrey Prygunkov
967a6bd4a4 fixed potential buffer overflow in remote client 2013-08-26 10:02:45 +00:00
Andrey Prygunkov
c589c9b9ec addition to r795: removed a (wrong) tip about the router from the option description 2013-08-24 18:56:47 +00:00
Andrey Prygunkov
d10ad4835b added new option "ApprovedIP" to set the list of IP-addresses which may connect without authorization 2013-08-23 22:05:29 +00:00
Andrey Prygunkov
38a273b195 added collecting of server usage statistical data for each download: number of successful and failed article downloads per news server; new page in history dialog shows collected statistics; new fields in RPC-method "history": ServerStats (array), TotalArticles, SuccessArticles, FailedArticles; new env. vars passed to pp-scripts: NZBPP_TOTALARTICLES, NZBPP_SUCCESSARTICLES, NZBPP_FAILEDARTICLES and per used news server: NZBPP_SERVERX_SUCCESSARTICLES, NZBPP_SERVERX_FAILEDARTICLES; also new env.vars: DELETED, HEALTHDELETED 2013-08-16 21:53:32 +00:00
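A short Python pp-script fragment using the new variables; the variable names come from the commit message, the server number 1 is an example:

import os

total = int(os.environ.get('NZBPP_TOTALARTICLES', '0'))
failed = int(os.environ.get('NZBPP_FAILEDARTICLES', '0'))

if total:
    print('[INFO] Failed articles: %.1f%%' % (100.0 * failed / total))

# Per-server counters follow the pattern NZBPP_SERVERX_SUCCESSARTICLES / NZBPP_SERVERX_FAILEDARTICLES:
print('[INFO] Server 1 failed articles:', os.environ.get('NZBPP_SERVER1_FAILEDARTICLES', '0'))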
Andrey Prygunkov
9618f46188 fixed scrolling to the top of page happening by clicking on items in downloads/history lists and on action-buttons in edit-download and history dialogs 2013-08-15 17:21:01 +00:00
Andrey Prygunkov
2562b384bc addition to r779: fixed: health was not shown for items with status "PP-QUEUED" 2013-08-14 22:00:39 +00:00
Andrey Prygunkov
bc8d133b69 the post-processing progress label is now automatically trimmed if it doesn't fit into one line; this avoids breaking the layout if the text is too long 2013-08-14 21:10:02 +00:00
Andrey Prygunkov
beb9967ad0 addition to r775: fixed: the confirmation when leaving the settings page could sometimes be shown even if no changes were made 2013-08-14 21:07:00 +00:00
Andrey Prygunkov
433baf0923 added support for http redirects when fetching URLs 2013-08-13 17:22:45 +00:00
Andrey Prygunkov
423da8b785 addition to r782: fixed: when adding files to queue the info about category and priority could get lost for some files 2013-08-12 18:39:17 +00:00
Andrey Prygunkov
f4708d2b1a fixed: the final directory was not properly shown (Windows only) (bug introduced in r765) 2013-08-12 18:35:28 +00:00
Andrey Prygunkov
b00b7f7c31 critical health was sometimes not calculated properly on certain CPU architectures (mipsel) 2013-08-11 14:58:19 +00:00
Andrey Prygunkov
ec4110cb2c 1) when a duplicate file was detected in collection it was automatically deleted (if option DupeCheck is active) but the total size of collection was not updated; 2) when deleting individual files the total count of files in collection was not updated 2013-08-10 20:30:33 +00:00
Andrey Prygunkov
b2f02c7fa6 addition to r779: added calculation of critical health for old items in download queue (added to queue with program versions older than r779) 2013-08-10 20:03:14 +00:00
Andrey Prygunkov
aeb561c240 added automatic par-renaming of extracted files if archive includes par-files 2013-08-10 09:04:33 +00:00
Andrey Prygunkov
97a6abca0e fixed: when multiple nzb-files were added via URL (including rss) at the same time the info about category and priority could get lost for some of the files 2013-08-09 20:37:41 +00:00
Andrey Prygunkov
5aa3a29288 addition to r779: added missing include to avoid compilation error on some systems 2013-08-08 21:46:42 +00:00
Andrey Prygunkov
bc49e7c48e all table columns except "Name" now have fixed widths to avoid annoying layout changes especially during post-processing when long status messages are displayed in the name-column 2013-08-08 21:23:29 +00:00
Andrey Prygunkov
9ba10446e9 added download health monitoring: health indicates download status, whether the file is damaged and how much; new option "HealthCheck" to define what to do with bad downloads (pause, delete, none); par-check is now automatically started for downloads having health below 100%, this works independently of unpack (even if unpack is disabled); for downloads having health less than critical health no par-check is performed (it would fail); new fields "Health" and "CriticalHealth" are returned by RPC-Method "listgroups"; new fields "Health", "CriticalHealth", "Deleted" and "HealthDeleted" are returned by RPC-Method "history"; new parameters "NZBPP_HEALTH" and "NZBPP_CRITICALHEALTH" are passed to pp-scripts; manually deleted downloads now have history status "deleted" (instead of "unknown") 2013-08-08 21:09:36 +00:00
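A minimal Python pp-script sketch using the new parameters; the assumption that health values are expressed in per-mill (1000 = 100%) is not stated in the commit message:

import os

health = int(os.environ.get('NZBPP_HEALTH', '1000'))
critical = int(os.environ.get('NZBPP_CRITICALHEALTH', '1000'))

# Below the critical health a successful par-repair is no longer possible.
if health < critical:
    print('[WARNING] Download is damaged beyond repair, skipping further processing')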
Andrey Prygunkov
a4c686876f added filter buttons to messages tab (info, warning, etc.); also changed the color of filter buttons in feed view and feed filter dialogs (from blue to black) 2013-08-07 20:09:43 +00:00
Andrey Prygunkov
f31ba7dea3 small correction in help text 2013-08-05 20:49:28 +00:00
Andrey Prygunkov
897946c1ce added fields "rageid", "season", "episode" and command "=" to rss filters 2013-08-05 18:41:02 +00:00
Andrey Prygunkov
802266e3aa added a confirmation dialog when leaving the settings page if there are unsaved changes 2013-08-05 18:09:10 +00:00
Andrey Prygunkov
d9f89f7457 added menu "View" to the settings page which allows switching to "Compact Mode", in which option descriptions are hidden 2013-08-05 18:05:04 +00:00
Andrey Prygunkov
b871d84379 added support for fields "rating" and "genre" in rss filters 2013-08-04 21:41:50 +00:00
Andrey Prygunkov
0375309060 fixed: rss filter commands "<=" and ">=" did not work 2013-08-04 15:35:07 +00:00
Andrey Prygunkov
a5ca653df8 fixed: crash on certain invalid rss filter strings 2013-08-03 21:06:35 +00:00
Andrey Prygunkov
ec194a15fb fixed: colons in regular expressions could cause incorrect parsing of rss filters 2013-08-03 09:56:40 +00:00
Andrey Prygunkov
eaf5d71b40 small changes in button captions: edit dialogs called from settings page (choose script, choose order, build rss filter) now have buttons "Discard/Apply" instead of "Close/Save"; in all other dialogs button "Close" renamed to "Cancel" unless it was the only button in dialog 2013-08-02 21:03:58 +00:00
Andrey Prygunkov
7a9ee279ed reversed the order of priorities in comboboxes in dialogs: the highest priority - at the top, the lowest - at the bottom 2013-08-02 16:46:24 +00:00
Andrey Prygunkov
827acdadea 1) added multiline filters for RSS feeds; new dialog to build filters in the web-interface; 2) refactor: the length of configuration option values is now unlimited (previously limited to 1000 characters; unlimited length is needed for long feed filters) 2013-08-02 15:54:11 +00:00
Andrey Prygunkov
ef99b2057a addition to r765: fixed small memory leak 2013-07-29 15:58:59 +00:00
Andrey Prygunkov
c938714b70 pp-scripts which move files can now inform the program about the new location by printing the text "[NZB] FINALDIR=/path/to/files"; the final path is then shown in the history dialog instead of the download path 2013-07-28 21:27:12 +00:00
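A hedged Python sketch of a pp-script reporting a new location; NZBPP_DIRECTORY as the source and the target path are placeholders:

import os

source = os.environ.get('NZBPP_DIRECTORY', '')  # download directory passed to pp-scripts (assumption)
target = '/path/to/files'                       # wherever the script moved the files to

# ... move the files from source to target here ...

# Tell NZBGet about the new location; it is then shown in the history dialog.
print('[NZB] FINALDIR=' + target)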
Andrey Prygunkov
21e3dd30fd fixed: URLs for nzb-files were not properly read from RSS feeds of certain sites 2013-07-28 17:47:56 +00:00
Andrey Prygunkov
b271ab4721 addition to r757: fixed: statistic dialog had a scroll bar 2013-07-27 21:57:40 +00:00
Andrey Prygunkov
3abe382f44 program can now be stopped via web-interface: new button "shutdown" in section "SYSTEM" 2013-07-27 16:19:27 +00:00
Andrey Prygunkov
88a6b702d2 updated VC project file 2013-07-26 21:34:52 +00:00
Andrey Prygunkov
497d1af8bf fixed: download could hang if active news servers with 0 connections were defined (ServerX.Active=yes, ServerX.Connections=0) (bug introduced in r743) 2013-07-26 20:38:57 +00:00
Andrey Prygunkov
cc4b6acd14 options "DeleteCleanupDisk" and "NzbCleanupDisk" are now active by default (in the example config file) 2013-07-26 20:14:32 +00:00
Andrey Prygunkov
1714e2331c combined rss filter commands @ and " into one command to make filters more intuitive 2013-07-26 20:09:36 +00:00
Andrey Prygunkov
4e419ec16d small change in css: slightly reduced the max height of modal dialogs to better work on notebooks 2013-07-25 20:13:26 +00:00
Andrey Prygunkov
5e0f214a8f fixed: malformed nzb-file could cause a memory leak 2013-07-25 19:20:06 +00:00
Andrey Prygunkov
da1727e5e4 added support for metatag "password" in nzb-files 2013-07-25 18:32:07 +00:00
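For reference, a hedged example of how such a password is commonly embedded in the head section of an nzb-file; the exact markup follows the usual indexer convention and is not taken from this commit:

<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <head>
    <meta type="password">secret</meta>
  </head>
  <!-- file/segment elements omitted -->
</nzb>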
Andrey Prygunkov
101be2eeb1 added a confirmation when deleting a download or history item from the edit-dialog 2013-07-25 18:25:13 +00:00
Andrey Prygunkov
e69015204a when saving/restoring the feed status (last update time) the feeds are identified by url and filter (previously only by url) 2013-07-25 18:22:56 +00:00
Andrey Prygunkov
1b203e3292 fully implemented feed filters 2013-07-24 21:32:15 +00:00
Andrey Prygunkov
1ad8bd212c refactor: small rework of NZBParameterList-class 2013-07-24 21:09:56 +00:00
Andrey Prygunkov
3711f30d01 new logo (thanks dogzipp for the logo) 2013-07-24 21:01:27 +00:00
Andrey Prygunkov
85d59d25df 1) DirectNZB headers X-DNZB-MoreInfo, X-DNZB-Report and X-DNZB-Link are now processed when downloading URLs and the links "More Info", "External Link" and "Report This NZB" are shown in download-edit-dialog and in history-dialog; 2) combined all footer buttons into one button "Actions" with menu: in download-edit-dialog: "Pause/Resume", "Delete" and "Cancel Post-Processing", in history-dialog: "Delete", "Post-Process Again" and "Download Remaining Files (Return to Queue)" 2013-07-23 21:21:14 +00:00
Andrey Prygunkov
6d7f55a435 added missing svn-keywords 2013-07-22 20:39:49 +00:00
Andrey Prygunkov
c22ca18a82 added filtering for RSS feeds via new option "FeedX.Filter" (not all filter commands are implemented yet but this is mentioned in the option description) 2013-07-22 20:38:21 +00:00
Andrey Prygunkov
ec48959600 changed the way how option "Unpack" works: instead of enabling/disabling the unpacker as a whole, it now defines the initial value of post-processing parameter "Unpack" for nzb-file when it is added to queue; this makes it now possible to disable Unpack globally but still enable it for selected nzb-files; new option "CategoryX.Unpack" to set unpack on a per category basis 2013-07-21 20:44:13 +00:00
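A hedged configuration sketch of the described setup; "Unpack" and "CategoryX.Unpack" are named in the commit, the category number, its name and the option "Category1.Name" are illustrative:

# Unpack disabled globally, but enabled for one category:
Unpack=no
Category1.Name=Movies
Category1.Unpack=yes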
Andrey Prygunkov
67634c4a71 fixed compilation error on Linux (bug introduced in r743) 2013-07-20 14:05:21 +00:00
Andrey Prygunkov
c31d38a562 fixed: certain characters printed by pp-scripts could crash the program 2013-07-20 14:02:26 +00:00
Andrey Prygunkov
6b0499b82e news servers can now be temporarily disabled via speed limit dialog without reloading of the program; new option "ServerX.Active" to disable servers via settings; new option "ServerX.Name" to use for logging and in UI 2013-07-20 07:15:21 +00:00
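A hedged configuration sketch; only "ServerX.Active" and "ServerX.Name" come from this change, the other server options and all values are illustrative:

Server1.Name=main-provider
Server1.Host=news.example.com
Server1.Connections=10
Server1.Active=yes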
Andrey Prygunkov
046364283f fixed: choosing local files didn't work in Opera 2013-07-18 19:21:38 +00:00
Andrey Prygunkov
85880f9bd1 in RPC-Method "appendurl" parameter "addtop" adds nzb to the top of the main download queue (not only to the top of the URL queue) 2013-07-17 19:12:23 +00:00
Andrey Prygunkov
5fd436e5c5 added switch "Titles/Filenames" to feed view dialog for rss feeds with "bad" titles 2013-07-17 19:11:35 +00:00
Andrey Prygunkov
01533cbf9f better parsing of rss feeds of certain nzb-sites (now using enclosure-tag if possible) (Windows only) 2013-07-17 19:02:41 +00:00
Andrey Prygunkov
f5c8276fdc addition to r734: fixed possible matching bug 2013-07-16 22:18:09 +00:00
Andrey Prygunkov
05f2b81025 better parsing of rss feeds of certain nzb-sites (now using enclosure-tag if possible) (POSIX only) 2013-07-16 21:41:53 +00:00
Andrey Prygunkov
9dfd6cecad added filter buttons (all, new, fetched, backlog) to feed view dialog 2013-07-16 21:11:08 +00:00
Andrey Prygunkov
2febf837e5 fixed: restoring of settings didn't work for multi-sections (servers, categories, etc.) if they were empty 2013-07-16 18:47:52 +00:00
Andrey Prygunkov
ac954bba11 refactor: small speed/memory optimization in aliases support for categories 2013-07-16 18:46:41 +00:00
Andrey Prygunkov
2bda4fef5b made alias-matching case-insensitive 2013-07-15 22:27:14 +00:00
Andrey Prygunkov
5a815592dc added new option "CategoryX.Aliases" to configure category name matching with nzb-sites; especially useful with rss feeds 2013-07-15 21:28:55 +00:00
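A hedged configuration sketch; only the option name "CategoryX.Aliases" comes from the commit, the comma-separated wildcard syntax and the values are assumptions:

Category1.Name=TV
Category1.Aliases=TV*, Series*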
Andrey Prygunkov
3f4c6ce144 added more debug logging to feed manager 2013-07-15 21:28:26 +00:00
Andrey Prygunkov
19e0c53d1e destination directory for option "CategoryX.DestDir" is not checked/created on program start anymore (only when a download starts for that category); this helps when certain categories are configured for external disks, which are not always connected 2013-07-15 20:07:50 +00:00
Andrey Prygunkov
95963b2289 download queue is now saved in a more secure way to avoid potential loss of queue if the program crashes during saving of queue 2013-07-15 19:53:01 +00:00
Andrey Prygunkov
4cd21cad9c fixed: crash after downloading a URL (happened only on certain systems) 2013-07-15 19:13:36 +00:00
Andrey Prygunkov
fcf1d7d502 improved compatibility with certain nzb-sites when fetching nzb-files (original nzb-filenames were sometimes not detected properly) 2013-07-14 21:27:13 +00:00
Andrey Prygunkov
e92d04771d toolbar button "Rss Feeds" is now visible only if there are feeds defined in settings 2013-07-14 18:39:07 +00:00
Andrey Prygunkov
ce43190ca6 fixed: crash after executing of remote commands (bug introduced in r722) 2013-07-14 15:47:49 +00:00
Andrey Prygunkov
284dbbad24 addition to r722: added missing file to Makefile 2013-07-14 07:08:10 +00:00
Andrey Prygunkov
1e115a5eab addition to r722: added missing file 2013-07-14 06:39:36 +00:00
Andrey Prygunkov
f18a355469 added rss feeds support: 1) new options "FeedX.Name", "FeedX.URL", "FeedX.Interval", "FeedX.PauseNzb", "FeedX.Category", "FeedX.Priority" (section "Rss Feeds"); 2) new option "FeedHistory" (section "Download Queue"); 3) Button "Preview Feed" on settings tab near each feed definition; 4) new toolbar button "Feeds" on downloads tab with menu to view feeds or fetch new nzbs from all feeds (the button is visible only if there are feeds defined in settings); 5) new dialog to see feed content showing status of each item (new, fetched, backlog) with ability to manually fetch selected items 2013-07-13 22:00:49 +00:00
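A hedged configuration sketch using the new options; all values are made up and the URL is a placeholder (the assumption that "FeedX.Interval" is in minutes and "FeedHistory" in days is not taken from the commit):

Feed1.Name=MyIndexer
Feed1.URL=https://indexer.example.com/rss?t=tv
Feed1.Interval=15
Feed1.PauseNzb=no
Feed1.Category=TV
Feed1.Priority=0
FeedHistory=7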
Andrey Prygunkov
8ba95bb82a updated version string to 12.0-testing 2013-07-12 21:16:55 +00:00
Andrey Prygunkov
ee5a2a320e updated version string (preparing to release 11.0) 2013-07-01 19:18:41 +00:00
Andrey Prygunkov
738fd3da58 updated ChangeLog 2013-07-01 18:48:07 +00:00
235 changed files with 76592 additions and 25245 deletions

AUTHORS

@@ -1,4 +1,27 @@
nzbget:
Sven Henkel <sidddy@users.sourceforge.net> (versions 0.1.0 - ?)
Bo Cordes Petersen <placebodk@users.sourceforge.net> (versions ? - 0.2.3)
NZBGet:
Andrey Prygunkov <hugbug@users.sourceforge.net> (versions 0.3.0 and later)
Bo Cordes Petersen <placebodk@users.sourceforge.net> (versions ? - 0.2.3)
Sven Henkel <sidddy@users.sourceforge.net> (versions 0.1.0 - ?)
PAR2:
Peter Brian Clements <peterbclements@users.sourceforge.net>
PAR2 library API:
Francois Lesueur <flesueur@users.sourceforge.net>
jQuery:
John Resig <http://jquery.com>
The Dojo Foundation <http://sizzlejs.com>
Bootstrap:
Twitter, Inc <http://twitter.github.com/bootstrap>
Raphaël:
Dmitry Baranovskiy <http://raphaeljs.com>
Sencha Labs <http://sencha.com>
Elycharts:
Void Labs s.n.c. <http://void.it>
iconSweets:
Yummygum <http://yummygum.com>


ChangeLog

File diff suppressed because it is too large



@@ -1,64 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef DISKSTATE_H
#define DISKSTATE_H

#include "DownloadInfo.h"

class DiskState
{
private:
    int ParseFormatVersion(const char* szFormatSignature);
    bool SaveFileInfo(FileInfo* pFileInfo, const char* szFilename);
    bool LoadFileInfo(FileInfo* pFileInfo, const char* szFilename, bool bFileSummary, bool bArticles);
    void SaveNZBList(DownloadQueue* pDownloadQueue, FILE* outfile);
    bool LoadNZBList(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
    void SaveFileQueue(DownloadQueue* pDownloadQueue, FileQueue* pFileQueue, FILE* outfile);
    bool LoadFileQueue(DownloadQueue* pDownloadQueue, FileQueue* pFileQueue, FILE* infile, int iFormatVersion);
    void SavePostQueue(DownloadQueue* pDownloadQueue, FILE* outfile);
    bool LoadPostQueue(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
    bool LoadOldPostQueue(DownloadQueue* pDownloadQueue);
    void SaveUrlQueue(DownloadQueue* pDownloadQueue, FILE* outfile);
    bool LoadUrlQueue(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
    void SaveUrlInfo(UrlInfo* pUrlInfo, FILE* outfile);
    bool LoadUrlInfo(UrlInfo* pUrlInfo, FILE* infile, int iFormatVersion);
    void SaveHistory(DownloadQueue* pDownloadQueue, FILE* outfile);
    bool LoadHistory(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
    int FindNZBInfoIndex(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);

public:
    bool DownloadQueueExists();
    bool PostQueueExists(bool bCompleted);
    bool SaveDownloadQueue(DownloadQueue* pDownloadQueue);
    bool LoadDownloadQueue(DownloadQueue* pDownloadQueue);
    bool SaveFile(FileInfo* pFileInfo);
    bool LoadArticles(FileInfo* pFileInfo);
    bool DiscardDownloadQueue();
    bool DiscardFile(FileInfo* pFileInfo);
    void CleanupTempDir(DownloadQueue* pDownloadQueue);
};

#endif


@@ -1,946 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <cctype>
#include <cstdio>
#include <map>
#include <sys/stat.h>
#include "nzbget.h"
#include "DownloadInfo.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
extern Options* g_pOptions;
int FileInfo::m_iIDGen = 0;
int NZBInfo::m_iIDGen = 0;
int PostInfo::m_iIDGen = 0;
int UrlInfo::m_iIDGen = 0;
int HistoryInfo::m_iIDGen = 0;
NZBParameter::NZBParameter(const char* szName)
{
m_szName = strdup(szName);
m_szValue = NULL;
}
NZBParameter::~NZBParameter()
{
if (m_szName)
{
free(m_szName);
}
if (m_szValue)
{
free(m_szValue);
}
}
void NZBParameter::SetValue(const char* szValue)
{
if (m_szValue)
{
free(m_szValue);
}
m_szValue = strdup(szValue);
}
void NZBParameterList::SetParameter(const char* szName, const char* szValue)
{
NZBParameter* pParameter = NULL;
bool bDelete = !szValue || !*szValue;
for (iterator it = begin(); it != end(); it++)
{
NZBParameter* pLookupParameter = *it;
if (!strcmp(pLookupParameter->GetName(), szName))
{
if (bDelete)
{
delete pLookupParameter;
erase(it);
return;
}
pParameter = pLookupParameter;
break;
}
}
if (bDelete)
{
return;
}
if (!pParameter)
{
pParameter = new NZBParameter(szName);
push_back(pParameter);
}
pParameter->SetValue(szValue);
}
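/*
 * Illustrative sketch of the SetParameter() contract above (the parameter
 * names and values are hypothetical): a non-empty value inserts or updates
 * the named entry, an empty or NULL value removes it again.
 *
 *   NZBParameterList params;
 *   params.SetParameter("Category", "tv");      // adds Category=tv
 *   params.SetParameter("Category", "movies");  // updates the existing entry
 *   params.SetParameter("Category", "");        // deletes the entry
 */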
NZBInfo::NZBInfo()
{
debug("Creating NZBInfo");
m_szFilename = NULL;
m_szDestDir = NULL;
m_szCategory = strdup("");
m_szName = NULL;
m_iFileCount = 0;
m_iParkedFileCount = 0;
m_lSize = 0;
m_iRefCount = 0;
m_bPostProcess = false;
m_eRenameStatus = rsNone;
m_eParStatus = psNone;
m_eUnpackStatus = usNone;
m_eMoveStatus = msNone;
m_eScriptStatus = srNone;
m_bDeleted = false;
m_bParCleanup = false;
m_bCleanupDisk = false;
m_bUnpackCleanedUpDisk = false;
m_szQueuedFilename = strdup("");
m_Owner = NULL;
m_Messages.clear();
m_iIDMessageGen = 0;
m_iIDGen++;
m_iID = m_iIDGen;
}
NZBInfo::~NZBInfo()
{
debug("Destroying NZBInfo");
if (m_szFilename)
{
free(m_szFilename);
}
if (m_szDestDir)
{
free(m_szDestDir);
}
if (m_szCategory)
{
free(m_szCategory);
}
if (m_szName)
{
free(m_szName);
}
if (m_szQueuedFilename)
{
free(m_szQueuedFilename);
}
ClearCompletedFiles();
for (NZBParameterList::iterator it = m_ppParameters.begin(); it != m_ppParameters.end(); it++)
{
delete *it;
}
m_ppParameters.clear();
for (Messages::iterator it = m_Messages.begin(); it != m_Messages.end(); it++)
{
delete *it;
}
m_Messages.clear();
if (m_Owner)
{
m_Owner->Remove(this);
}
}
void NZBInfo::AddReference()
{
m_iRefCount++;
}
void NZBInfo::Release()
{
m_iRefCount--;
if (m_iRefCount <= 0)
{
delete this;
}
}
void NZBInfo::ClearCompletedFiles()
{
for (Files::iterator it = m_completedFiles.begin(); it != m_completedFiles.end(); it++)
{
free(*it);
}
m_completedFiles.clear();
}
void NZBInfo::SetDestDir(const char* szDestDir)
{
if (m_szDestDir)
{
free(m_szDestDir);
}
m_szDestDir = strdup(szDestDir);
}
void NZBInfo::SetFilename(const char * szFilename)
{
if (m_szFilename)
{
free(m_szFilename);
}
m_szFilename = strdup(szFilename);
if (!m_szName)
{
char szNZBNicename[1024];
MakeNiceNZBName(m_szFilename, szNZBNicename, sizeof(szNZBNicename), true);
szNZBNicename[1024-1] = '\0';
SetName(szNZBNicename);
}
}
void NZBInfo::SetName(const char* szName)
{
if (m_szName)
{
free(m_szName);
}
m_szName = szName ? strdup(szName) : NULL;
}
void NZBInfo::SetCategory(const char* szCategory)
{
if (m_szCategory)
{
free(m_szCategory);
}
m_szCategory = strdup(szCategory);
}
void NZBInfo::SetQueuedFilename(const char * szQueuedFilename)
{
if (m_szQueuedFilename)
{
free(m_szQueuedFilename);
}
m_szQueuedFilename = strdup(szQueuedFilename);
}
void NZBInfo::MakeNiceNZBName(const char * szNZBFilename, char * szBuffer, int iSize, bool bRemoveExt)
{
char postname[1024];
const char* szBaseName = Util::BaseFileName(szNZBFilename);
strncpy(postname, szBaseName, 1024);
postname[1024-1] = '\0';
if (bRemoveExt)
{
// wipe out ".nzb"
char* p = strrchr(postname, '.');
if (p && !strcasecmp(p, ".nzb")) *p = '\0';
}
Util::MakeValidFilename(postname, '_', false);
strncpy(szBuffer, postname, iSize);
szBuffer[iSize-1] = '\0';
}
void NZBInfo::BuildDestDirName()
{
char szDestDir[1024];
if (strlen(g_pOptions->GetInterDir()) == 0)
{
BuildFinalDirName(szDestDir, 1024);
}
else
{
snprintf(szDestDir, 1024, "%s%s", g_pOptions->GetInterDir(), GetName());
szDestDir[1024-1] = '\0';
}
SetDestDir(szDestDir);
}
void NZBInfo::BuildFinalDirName(char* szFinalDirBuf, int iBufSize)
{
char szBuffer[1024];
bool bUseCategory = m_szCategory && m_szCategory[0] != '\0';
snprintf(szFinalDirBuf, iBufSize, "%s", g_pOptions->GetDestDir());
szFinalDirBuf[iBufSize-1] = '\0';
if (bUseCategory)
{
Options::Category *pCategory = g_pOptions->FindCategory(m_szCategory);
if (pCategory && pCategory->GetDestDir() && pCategory->GetDestDir()[0] != '\0')
{
snprintf(szFinalDirBuf, iBufSize, "%s", pCategory->GetDestDir());
szFinalDirBuf[iBufSize-1] = '\0';
bUseCategory = false;
}
}
if (g_pOptions->GetAppendCategoryDir() && bUseCategory)
{
char szCategoryDir[1024];
strncpy(szCategoryDir, m_szCategory, 1024);
szCategoryDir[1024 - 1] = '\0';
Util::MakeValidFilename(szCategoryDir, '_', true);
snprintf(szBuffer, 1024, "%s%s%c", szFinalDirBuf, szCategoryDir, PATH_SEPARATOR);
szBuffer[1024-1] = '\0';
strncpy(szFinalDirBuf, szBuffer, iBufSize);
}
if (g_pOptions->GetAppendNZBDir())
{
snprintf(szBuffer, 1024, "%s%s", szFinalDirBuf, GetName());
szBuffer[1024-1] = '\0';
strncpy(szFinalDirBuf, szBuffer, iBufSize);
}
}
void NZBInfo::SetParameter(const char* szName, const char* szValue)
{
m_ppParameters.SetParameter(szName, szValue);
}
NZBInfo::Messages* NZBInfo::LockMessages()
{
m_mutexLog.Lock();
return &m_Messages;
}
void NZBInfo::UnlockMessages()
{
m_mutexLog.Unlock();
}
void NZBInfo::AppendMessage(Message::EKind eKind, time_t tTime, const char * szText)
{
if (tTime == 0)
{
tTime = time(NULL);
}
m_mutexLog.Lock();
Message* pMessage = new Message(++m_iIDMessageGen, eKind, tTime, szText);
m_Messages.push_back(pMessage);
m_mutexLog.Unlock();
}
void NZBInfoList::Add(NZBInfo* pNZBInfo)
{
pNZBInfo->m_Owner = this;
push_back(pNZBInfo);
}
void NZBInfoList::Remove(NZBInfo* pNZBInfo)
{
for (iterator it = begin(); it != end(); it++)
{
NZBInfo* pNZBInfo2 = *it;
if (pNZBInfo2 == pNZBInfo)
{
erase(it);
break;
}
}
}
void NZBInfoList::ReleaseAll()
{
int i = 0;
for (iterator it = begin(); it != end(); )
{
NZBInfo* pNZBInfo = *it;
bool bObjDeleted = pNZBInfo->m_iRefCount == 1;
pNZBInfo->Release();
if (bObjDeleted)
{
it = begin() + i;
}
else
{
it++;
i++;
}
}
}
ArticleInfo::ArticleInfo()
{
//debug("Creating ArticleInfo");
m_szMessageID = NULL;
m_iSize = 0;
m_eStatus = aiUndefined;
m_szResultFilename = NULL;
}
ArticleInfo::~ ArticleInfo()
{
//debug("Destroying ArticleInfo");
if (m_szMessageID)
{
free(m_szMessageID);
}
if (m_szResultFilename)
{
free(m_szResultFilename);
}
}
void ArticleInfo::SetMessageID(const char * szMessageID)
{
m_szMessageID = strdup(szMessageID);
}
void ArticleInfo::SetResultFilename(const char * v)
{
if (m_szResultFilename)
{
free(m_szResultFilename);
}
m_szResultFilename = strdup(v);
}
FileInfo::FileInfo()
{
debug("Creating FileInfo");
m_Articles.clear();
m_Groups.clear();
m_szSubject = NULL;
m_szFilename = NULL;
m_szOutputFilename = NULL;
m_bFilenameConfirmed = false;
m_lSize = 0;
m_lRemainingSize = 0;
m_tTime = 0;
m_bPaused = false;
m_bDeleted = false;
m_iCompleted = 0;
m_bOutputInitialized = false;
m_pNZBInfo = NULL;
m_iPriority = 0;
m_bExtraPriority = false;
m_iActiveDownloads = 0;
m_iIDGen++;
m_iID = m_iIDGen;
}
FileInfo::~ FileInfo()
{
debug("Destroying FileInfo");
if (m_szSubject)
{
free(m_szSubject);
}
if (m_szFilename)
{
free(m_szFilename);
}
if (m_szOutputFilename)
{
free(m_szOutputFilename);
}
for (Groups::iterator it = m_Groups.begin(); it != m_Groups.end() ;it++)
{
free(*it);
}
m_Groups.clear();
ClearArticles();
if (m_pNZBInfo)
{
m_pNZBInfo->Release();
}
}
void FileInfo::ClearArticles()
{
for (Articles::iterator it = m_Articles.begin(); it != m_Articles.end() ;it++)
{
delete *it;
}
m_Articles.clear();
}
void FileInfo::SetID(int s)
{
m_iID = s;
if (m_iIDGen < m_iID)
{
m_iIDGen = m_iID;
}
}
void FileInfo::SetNZBInfo(NZBInfo* pNZBInfo)
{
if (m_pNZBInfo)
{
m_pNZBInfo->Release();
}
m_pNZBInfo = pNZBInfo;
m_pNZBInfo->AddReference();
}
void FileInfo::SetSubject(const char* szSubject)
{
m_szSubject = strdup(szSubject);
}
void FileInfo::SetFilename(const char* szFilename)
{
if (m_szFilename)
{
free(m_szFilename);
}
m_szFilename = strdup(szFilename);
}
void FileInfo::MakeValidFilename()
{
Util::MakeValidFilename(m_szFilename, '_', false);
}
void FileInfo::LockOutputFile()
{
m_mutexOutputFile.Lock();
}
void FileInfo::UnlockOutputFile()
{
m_mutexOutputFile.Unlock();
}
void FileInfo::SetOutputFilename(const char* szOutputFilename)
{
if (m_szOutputFilename)
{
free(m_szOutputFilename);
}
m_szOutputFilename = strdup(szOutputFilename);
}
bool FileInfo::IsDupe(const char* szFilename)
{
char fileName[1024];
snprintf(fileName, 1024, "%s%c%s", m_pNZBInfo->GetDestDir(), (int)PATH_SEPARATOR, szFilename);
fileName[1024-1] = '\0';
if (Util::FileExists(fileName))
{
return true;
}
snprintf(fileName, 1024, "%s%c%s_broken", m_pNZBInfo->GetDestDir(), (int)PATH_SEPARATOR, szFilename);
fileName[1024-1] = '\0';
if (Util::FileExists(fileName))
{
return true;
}
return false;
}
GroupInfo::GroupInfo()
{
m_iFirstID = 0;
m_iLastID = 0;
m_iRemainingFileCount = 0;
m_iPausedFileCount = 0;
m_lRemainingSize = 0;
m_lPausedSize = 0;
m_iRemainingParCount = 0;
m_tMinTime = 0;
m_tMaxTime = 0;
m_iMinPriority = 0;
m_iMaxPriority = 0;
m_iActiveDownloads = 0;
}
GroupInfo::~GroupInfo()
{
if (m_pNZBInfo)
{
m_pNZBInfo->Release();
}
}
PostInfo::PostInfo()
{
debug("Creating PostInfo");
m_pNZBInfo = NULL;
m_szParFilename = NULL;
m_szInfoName = NULL;
m_bWorking = false;
m_bDeleted = false;
m_eRenameStatus = rsNone;
m_eParStatus = psNone;
m_eUnpackStatus = usNone;
m_eRequestParCheck = rpNone;
m_bRequestParRename = false;
m_eScriptStatus = srNone;
m_szProgressLabel = strdup("");
m_iFileProgress = 0;
m_iStageProgress = 0;
m_tStartTime = 0;
m_tStageTime = 0;
m_eStage = ptQueued;
m_pPostThread = NULL;
m_Messages.clear();
m_iIDMessageGen = 0;
m_iIDGen++;
m_iID = m_iIDGen;
}
PostInfo::~ PostInfo()
{
debug("Destroying PostInfo");
if (m_szParFilename)
{
free(m_szParFilename);
}
if (m_szInfoName)
{
free(m_szInfoName);
}
if (m_szProgressLabel)
{
free(m_szProgressLabel);
}
for (Messages::iterator it = m_Messages.begin(); it != m_Messages.end(); it++)
{
delete *it;
}
m_Messages.clear();
if (m_pNZBInfo)
{
m_pNZBInfo->Release();
}
}
void PostInfo::SetNZBInfo(NZBInfo* pNZBInfo)
{
if (m_pNZBInfo)
{
m_pNZBInfo->Release();
}
m_pNZBInfo = pNZBInfo;
m_pNZBInfo->AddReference();
}
void PostInfo::SetParFilename(const char* szParFilename)
{
m_szParFilename = strdup(szParFilename);
}
void PostInfo::SetInfoName(const char* szInfoName)
{
m_szInfoName = strdup(szInfoName);
}
void PostInfo::SetProgressLabel(const char* szProgressLabel)
{
if (m_szProgressLabel)
{
free(m_szProgressLabel);
}
m_szProgressLabel = strdup(szProgressLabel);
}
PostInfo::Messages* PostInfo::LockMessages()
{
m_mutexLog.Lock();
return &m_Messages;
}
void PostInfo::UnlockMessages()
{
m_mutexLog.Unlock();
}
void PostInfo::AppendMessage(Message::EKind eKind, const char * szText)
{
m_mutexLog.Lock();
Message* pMessage = new Message(++m_iIDMessageGen, eKind, time(NULL), szText);
m_Messages.push_back(pMessage);
while (m_Messages.size() > (unsigned int)g_pOptions->GetLogBufferSize())
{
Message* pMessage = m_Messages.front();
delete pMessage;
m_Messages.pop_front();
}
m_mutexLog.Unlock();
}
void DownloadQueue::BuildGroups(GroupQueue* pGroupQueue)
{
std::map<int, GroupInfo*> groupMap;
for (FileQueue::iterator it = GetFileQueue()->begin(); it != GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
GroupInfo *&pGroupInfo = groupMap[pFileInfo->GetNZBInfo()->GetID()];
if (!pGroupInfo)
{
pGroupInfo = new GroupInfo();
pGroupInfo->m_pNZBInfo = pFileInfo->GetNZBInfo();
pGroupInfo->m_pNZBInfo->AddReference();
pGroupInfo->m_iFirstID = pFileInfo->GetID();
pGroupInfo->m_iLastID = pFileInfo->GetID();
pGroupInfo->m_tMinTime = pFileInfo->GetTime();
pGroupInfo->m_tMaxTime = pFileInfo->GetTime();
pGroupInfo->m_iMinPriority = pFileInfo->GetPriority();
pGroupInfo->m_iMaxPriority = pFileInfo->GetPriority();
pGroupQueue->push_back(pGroupInfo);
}
if (pFileInfo->GetID() < pGroupInfo->GetFirstID())
{
pGroupInfo->m_iFirstID = pFileInfo->GetID();
}
if (pFileInfo->GetID() > pGroupInfo->GetLastID())
{
pGroupInfo->m_iLastID = pFileInfo->GetID();
}
if (pFileInfo->GetTime() > 0)
{
if (pFileInfo->GetTime() < pGroupInfo->GetMinTime())
{
pGroupInfo->m_tMinTime = pFileInfo->GetTime();
}
if (pFileInfo->GetTime() > pGroupInfo->GetMaxTime())
{
pGroupInfo->m_tMaxTime = pFileInfo->GetTime();
}
}
if (pFileInfo->GetPriority() < pGroupInfo->GetMinPriority())
{
pGroupInfo->m_iMinPriority = pFileInfo->GetPriority();
}
if (pFileInfo->GetPriority() > pGroupInfo->GetMaxPriority())
{
pGroupInfo->m_iMaxPriority = pFileInfo->GetPriority();
}
pGroupInfo->m_iActiveDownloads += pFileInfo->GetActiveDownloads();
pGroupInfo->m_iRemainingFileCount++;
pGroupInfo->m_lRemainingSize += pFileInfo->GetRemainingSize();
if (pFileInfo->GetPaused())
{
pGroupInfo->m_lPausedSize += pFileInfo->GetRemainingSize();
pGroupInfo->m_iPausedFileCount++;
}
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
if (strstr(szLoFileName, ".par2"))
{
pGroupInfo->m_iRemainingParCount++;
}
}
}
UrlInfo::UrlInfo()
{
//debug("Creating ArticleInfo");
m_szURL = NULL;
m_szNZBFilename = strdup("");
m_szCategory = strdup("");
m_iPriority = 0;
m_bAddTop = false;
m_bAddPaused = false;
m_eStatus = aiUndefined;
m_iIDGen++;
m_iID = m_iIDGen;
}
UrlInfo::~ UrlInfo()
{
if (m_szURL)
{
free(m_szURL);
}
if (m_szNZBFilename)
{
free(m_szNZBFilename);
}
if (m_szCategory)
{
free(m_szCategory);
}
}
void UrlInfo::SetURL(const char* szURL)
{
if (m_szURL)
{
free(m_szURL);
}
m_szURL = strdup(szURL);
}
void UrlInfo::SetID(int s)
{
m_iID = s;
if (m_iIDGen < m_iID)
{
m_iIDGen = m_iID;
}
}
void UrlInfo::SetNZBFilename(const char* szNZBFilename)
{
if (m_szNZBFilename)
{
free(m_szNZBFilename);
}
m_szNZBFilename = strdup(szNZBFilename);
}
void UrlInfo::SetCategory(const char* szCategory)
{
if (m_szCategory)
{
free(m_szCategory);
}
m_szCategory = strdup(szCategory);
}
void UrlInfo::GetName(char* szBuffer, int iSize)
{
MakeNiceName(m_szURL, m_szNZBFilename, szBuffer, iSize);
}
void UrlInfo::MakeNiceName(const char* szURL, const char* szNZBFilename, char* szBuffer, int iSize)
{
URL url(szURL);
if (strlen(szNZBFilename) > 0)
{
char szNZBNicename[1024];
NZBInfo::MakeNiceNZBName(szNZBFilename, szNZBNicename, sizeof(szNZBNicename), true);
snprintf(szBuffer, iSize, "%s @ %s", szNZBNicename, url.GetHost());
}
else
{
snprintf(szBuffer, iSize, "%s%s", url.GetHost(), url.GetResource());
}
szBuffer[iSize-1] = '\0';
}
HistoryInfo::HistoryInfo(NZBInfo* pNZBInfo)
{
m_eKind = hkNZBInfo;
m_pInfo = pNZBInfo;
pNZBInfo->AddReference();
m_tTime = 0;
m_iIDGen++;
m_iID = m_iIDGen;
}
HistoryInfo::HistoryInfo(UrlInfo* pUrlInfo)
{
m_eKind = hkUrlInfo;
m_pInfo = pUrlInfo;
m_tTime = 0;
m_iIDGen++;
m_iID = m_iIDGen;
}
HistoryInfo::~HistoryInfo()
{
if (m_eKind == hkNZBInfo && m_pInfo)
{
((NZBInfo*)m_pInfo)->Release();
}
else if (m_eKind == hkUrlInfo && m_pInfo)
{
delete (UrlInfo*)m_pInfo;
}
}
void HistoryInfo::SetID(int s)
{
m_iID = s;
if (m_iIDGen < m_iID)
{
m_iIDGen = m_iID;
}
}
void HistoryInfo::GetName(char* szBuffer, int iSize)
{
if (m_eKind == hkNZBInfo)
{
strncpy(szBuffer, GetNZBInfo()->GetName(), iSize);
szBuffer[iSize-1] = '\0';
}
else if (m_eKind == hkUrlInfo)
{
GetUrlInfo()->GetName(szBuffer, iSize);
}
else
{
strncpy(szBuffer, "<unknown>", iSize);
}
}
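
NZBInfo objects are shared between the file queue, groups, the post-processor and the history, and their lifetime is managed by the manual reference counting implemented above: each holder calls AddReference() and balances it with Release(), and the final Release() deletes the object (the destructor also detaches it from its owning NZBInfoList). A minimal sketch of that ownership pattern (the helper function is hypothetical; only NZBInfo methods shown above are assumed):

// Hypothetical helper that keeps an NZBInfo alive while inspecting it.
void InspectNZB(NZBInfo* pNZBInfo)
{
	pNZBInfo->AddReference();                    // pin the shared object
	info("Inspecting %s", pNZBInfo->GetName());  // object stays valid while we hold a reference
	pNZBInfo->Release();                         // deletes the object if this was the last reference
}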

DownloadInfo.h
View File

@@ -1,605 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef DOWNLOADINFO_H
#define DOWNLOADINFO_H
#include <vector>
#include <deque>
#include <time.h>
#include "Log.h"
#include "Thread.h"
class NZBInfo;
class DownloadQueue;
class ArticleInfo
{
public:
enum EStatus
{
aiUndefined,
aiRunning,
aiFinished,
aiFailed
};
private:
int m_iPartNumber;
char* m_szMessageID;
int m_iSize;
EStatus m_eStatus;
char* m_szResultFilename;
public:
ArticleInfo();
~ArticleInfo();
void SetPartNumber(int s) { m_iPartNumber = s; }
int GetPartNumber() { return m_iPartNumber; }
const char* GetMessageID() { return m_szMessageID; }
void SetMessageID(const char* szMessageID);
void SetSize(int s) { m_iSize = s; }
int GetSize() { return m_iSize; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
const char* GetResultFilename() { return m_szResultFilename; }
void SetResultFilename(const char* v);
};
class FileInfo
{
public:
typedef std::vector<ArticleInfo*> Articles;
typedef std::vector<char*> Groups;
private:
int m_iID;
NZBInfo* m_pNZBInfo;
Articles m_Articles;
Groups m_Groups;
char* m_szSubject;
char* m_szFilename;
long long m_lSize;
long long m_lRemainingSize;
time_t m_tTime;
bool m_bPaused;
bool m_bDeleted;
bool m_bFilenameConfirmed;
int m_iCompleted;
bool m_bOutputInitialized;
char* m_szOutputFilename;
Mutex m_mutexOutputFile;
int m_iPriority;
bool m_bExtraPriority;
int m_iActiveDownloads;
static int m_iIDGen;
public:
FileInfo();
~FileInfo();
int GetID() { return m_iID; }
void SetID(int s);
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void SetNZBInfo(NZBInfo* pNZBInfo);
Articles* GetArticles() { return &m_Articles; }
Groups* GetGroups() { return &m_Groups; }
const char* GetSubject() { return m_szSubject; }
void SetSubject(const char* szSubject);
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
void MakeValidFilename();
bool GetFilenameConfirmed() { return m_bFilenameConfirmed; }
void SetFilenameConfirmed(bool bFilenameConfirmed) { m_bFilenameConfirmed = bFilenameConfirmed; }
void SetSize(long long s) { m_lSize = s; m_lRemainingSize = s; }
long long GetSize() { return m_lSize; }
long long GetRemainingSize() { return m_lRemainingSize; }
void SetRemainingSize(long long s) { m_lRemainingSize = s; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
bool GetPaused() { return m_bPaused; }
void SetPaused(bool Paused) { m_bPaused = Paused; }
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool Deleted) { m_bDeleted = Deleted; }
int GetCompleted() { return m_iCompleted; }
void SetCompleted(int s) { m_iCompleted = s; }
void ClearArticles();
void LockOutputFile();
void UnlockOutputFile();
const char* GetOutputFilename() { return m_szOutputFilename; }
void SetOutputFilename(const char* szOutputFilename);
bool GetOutputInitialized() { return m_bOutputInitialized; }
void SetOutputInitialized(bool bOutputInitialized) { m_bOutputInitialized = bOutputInitialized; }
bool IsDupe(const char* szFilename);
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
bool GetExtraPriority() { return m_bExtraPriority; }
void SetExtraPriority(bool bExtraPriority) { m_bExtraPriority = bExtraPriority; };
int GetActiveDownloads() { return m_iActiveDownloads; }
void SetActiveDownloads(int iActiveDownloads) { m_iActiveDownloads = iActiveDownloads; }
};
typedef std::deque<FileInfo*> FileQueue;
class GroupInfo
{
private:
NZBInfo* m_pNZBInfo;
int m_iFirstID;
int m_iLastID;
int m_iRemainingFileCount;
int m_iPausedFileCount;
long long m_lRemainingSize;
long long m_lPausedSize;
int m_iRemainingParCount;
time_t m_tMinTime;
time_t m_tMaxTime;
int m_iMinPriority;
int m_iMaxPriority;
int m_iActiveDownloads;
friend class DownloadQueue;
public:
GroupInfo();
~GroupInfo();
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
int GetFirstID() { return m_iFirstID; }
int GetLastID() { return m_iLastID; }
long long GetRemainingSize() { return m_lRemainingSize; }
long long GetPausedSize() { return m_lPausedSize; }
int GetRemainingFileCount() { return m_iRemainingFileCount; }
int GetPausedFileCount() { return m_iPausedFileCount; }
int GetRemainingParCount() { return m_iRemainingParCount; }
time_t GetMinTime() { return m_tMinTime; }
time_t GetMaxTime() { return m_tMaxTime; }
int GetMinPriority() { return m_iMinPriority; }
int GetMaxPriority() { return m_iMaxPriority; }
int GetActiveDownloads() { return m_iActiveDownloads; }
};
typedef std::deque<GroupInfo*> GroupQueue;
class NZBParameter
{
private:
char* m_szName;
char* m_szValue;
void SetValue(const char* szValue);
friend class NZBParameterList;
public:
NZBParameter(const char* szName);
~NZBParameter();
const char* GetName() { return m_szName; }
const char* GetValue() { return m_szValue; }
};
typedef std::deque<NZBParameter*> NZBParameterListBase;
class NZBParameterList : public NZBParameterListBase
{
public:
void SetParameter(const char* szName, const char* szValue);
};
class NZBInfoList;
class NZBInfo
{
public:
enum ERenameStatus
{
rsNone,
rsSkipped,
rsFailure,
rsSuccess
};
enum EParStatus
{
psNone,
psSkipped,
psFailure,
psSuccess,
psRepairPossible
};
enum EUnpackStatus
{
usNone,
usSkipped,
usFailure,
usSuccess
};
enum EScriptStatus
{
srNone,
srUnknown,
srFailure,
srSuccess
};
enum EMoveStatus
{
msNone,
msFailure,
msSuccess
};
typedef std::vector<char*> Files;
typedef std::deque<Message*> Messages;
private:
int m_iID;
int m_iRefCount;
char* m_szFilename;
char* m_szName;
char* m_szDestDir;
char* m_szCategory;
int m_iFileCount;
int m_iParkedFileCount;
long long m_lSize;
Files m_completedFiles;
bool m_bPostProcess;
ERenameStatus m_eRenameStatus;
EParStatus m_eParStatus;
EUnpackStatus m_eUnpackStatus;
EScriptStatus m_eScriptStatus;
EMoveStatus m_eMoveStatus;
char* m_szQueuedFilename;
bool m_bDeleted;
bool m_bParCleanup;
bool m_bCleanupDisk;
bool m_bUnpackCleanedUpDisk;
NZBInfoList* m_Owner;
NZBParameterList m_ppParameters;
Mutex m_mutexLog;
Messages m_Messages;
int m_iIDMessageGen;
static int m_iIDGen;
friend class NZBInfoList;
public:
NZBInfo();
~NZBInfo();
void AddReference();
void Release();
int GetID() { return m_iID; }
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
static void MakeNiceNZBName(const char* szNZBFilename, char* szBuffer, int iSize, bool bRemoveExt);
const char* GetDestDir() { return m_szDestDir; } // needs locking (for shared objects)
void SetDestDir(const char* szDestDir); // needs locking (for shared objects)
const char* GetCategory() { return m_szCategory; } // needs locking (for shared objects)
void SetCategory(const char* szCategory); // needs locking (for shared objects)
const char* GetName() { return m_szName; } // needs locking (for shared objects)
void SetName(const char* szName); // needs locking (for shared objects)
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
int GetFileCount() { return m_iFileCount; }
void SetFileCount(int iFileCount) { m_iFileCount = iFileCount; }
int GetParkedFileCount() { return m_iParkedFileCount; }
void SetParkedFileCount(int iParkedFileCount) { m_iParkedFileCount = iParkedFileCount; }
void BuildDestDirName();
void BuildFinalDirName(char* szFinalDirBuf, int iBufSize);
Files* GetCompletedFiles() { return &m_completedFiles; } // needs locking (for shared objects)
void ClearCompletedFiles();
bool GetPostProcess() { return m_bPostProcess; }
void SetPostProcess(bool bPostProcess) { m_bPostProcess = bPostProcess; }
ERenameStatus GetRenameStatus() { return m_eRenameStatus; }
void SetRenameStatus(ERenameStatus eRenameStatus) { m_eRenameStatus = eRenameStatus; }
EParStatus GetParStatus() { return m_eParStatus; }
void SetParStatus(EParStatus eParStatus) { m_eParStatus = eParStatus; }
EUnpackStatus GetUnpackStatus() { return m_eUnpackStatus; }
void SetUnpackStatus(EUnpackStatus eUnpackStatus) { m_eUnpackStatus = eUnpackStatus; }
EMoveStatus GetMoveStatus() { return m_eMoveStatus; }
void SetMoveStatus(EMoveStatus eMoveStatus) { m_eMoveStatus = eMoveStatus; }
EScriptStatus GetScriptStatus() { return m_eScriptStatus; }
void SetScriptStatus(EScriptStatus eScriptStatus) { m_eScriptStatus = eScriptStatus; }
const char* GetQueuedFilename() { return m_szQueuedFilename; }
void SetQueuedFilename(const char* szQueuedFilename);
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool bDeleted) { m_bDeleted = bDeleted; }
bool GetParCleanup() { return m_bParCleanup; }
void SetParCleanup(bool bParCleanup) { m_bParCleanup = bParCleanup; }
bool GetCleanupDisk() { return m_bCleanupDisk; }
void SetCleanupDisk(bool bCleanupDisk) { m_bCleanupDisk = bCleanupDisk; }
bool GetUnpackCleanedUpDisk() { return m_bUnpackCleanedUpDisk; }
void SetUnpackCleanedUpDisk(bool bUnpackCleanedUpDisk) { m_bUnpackCleanedUpDisk = bUnpackCleanedUpDisk; }
NZBParameterList* GetParameters() { return &m_ppParameters; } // needs locking (for shared objects)
void SetParameter(const char* szName, const char* szValue); // needs locking (for shared objects)
void AppendMessage(Message::EKind eKind, time_t tTime, const char* szText);
Messages* LockMessages();
void UnlockMessages();
};
typedef std::deque<NZBInfo*> NZBInfoListBase;
class NZBInfoList : public NZBInfoListBase
{
public:
void Add(NZBInfo* pNZBInfo);
void Remove(NZBInfo* pNZBInfo);
void ReleaseAll();
};
class PostInfo
{
public:
enum EStage
{
ptQueued,
ptLoadingPars,
ptVerifyingSources,
ptRepairing,
ptVerifyingRepaired,
ptRenaming,
ptUnpacking,
ptMoving,
ptExecutingScript,
ptFinished
};
enum ERenameStatus
{
rsNone,
rsSkipped,
rsFailure,
rsSuccess
};
enum EParStatus
{
psNone,
psSkipped,
psFailure,
psSuccess,
psRepairPossible
};
enum ERequestParCheck
{
rpNone,
rpCurrent,
rpAll
};
enum EUnpackStatus
{
usNone,
usSkipped,
usFailure,
usSuccess
};
enum EScriptStatus
{
srNone,
srUnknown,
srFailure,
srSuccess
};
typedef std::deque<Message*> Messages;
private:
int m_iID;
NZBInfo* m_pNZBInfo;
char* m_szParFilename;
char* m_szInfoName;
bool m_bWorking;
bool m_bDeleted;
ERenameStatus m_eRenameStatus;
EParStatus m_eParStatus;
EUnpackStatus m_eUnpackStatus;
EScriptStatus m_eScriptStatus;
ERequestParCheck m_eRequestParCheck;
bool m_bRequestParRename;
EStage m_eStage;
char* m_szProgressLabel;
int m_iFileProgress;
int m_iStageProgress;
time_t m_tStartTime;
time_t m_tStageTime;
Thread* m_pPostThread;
Mutex m_mutexLog;
Messages m_Messages;
int m_iIDMessageGen;
static int m_iIDGen;
public:
PostInfo();
~PostInfo();
int GetID() { return m_iID; }
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void SetNZBInfo(NZBInfo* pNZBInfo);
const char* GetParFilename() { return m_szParFilename; }
void SetParFilename(const char* szParFilename);
const char* GetInfoName() { return m_szInfoName; }
void SetInfoName(const char* szInfoName);
EStage GetStage() { return m_eStage; }
void SetStage(EStage eStage) { m_eStage = eStage; }
void SetProgressLabel(const char* szProgressLabel);
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetFileProgress() { return m_iFileProgress; }
void SetFileProgress(int iFileProgress) { m_iFileProgress = iFileProgress; }
int GetStageProgress() { return m_iStageProgress; }
void SetStageProgress(int iStageProgress) { m_iStageProgress = iStageProgress; }
time_t GetStartTime() { return m_tStartTime; }
void SetStartTime(time_t tStartTime) { m_tStartTime = tStartTime; }
time_t GetStageTime() { return m_tStageTime; }
void SetStageTime(time_t tStageTime) { m_tStageTime = tStageTime; }
bool GetWorking() { return m_bWorking; }
void SetWorking(bool bWorking) { m_bWorking = bWorking; }
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool bDeleted) { m_bDeleted = bDeleted; }
ERenameStatus GetRenameStatus() { return m_eRenameStatus; }
void SetRenameStatus(ERenameStatus eRenameStatus) { m_eRenameStatus = eRenameStatus; }
EParStatus GetParStatus() { return m_eParStatus; }
void SetParStatus(EParStatus eParStatus) { m_eParStatus = eParStatus; }
EUnpackStatus GetUnpackStatus() { return m_eUnpackStatus; }
void SetUnpackStatus(EUnpackStatus eUnpackStatus) { m_eUnpackStatus = eUnpackStatus; }
ERequestParCheck GetRequestParCheck() { return m_eRequestParCheck; }
void SetRequestParCheck(ERequestParCheck eRequestParCheck) { m_eRequestParCheck = eRequestParCheck; }
bool GetRequestParRename() { return m_bRequestParRename; }
void SetRequestParRename(bool bRequestParRename) { m_bRequestParRename = bRequestParRename; }
EScriptStatus GetScriptStatus() { return m_eScriptStatus; }
void SetScriptStatus(EScriptStatus eScriptStatus) { m_eScriptStatus = eScriptStatus; }
void AppendMessage(Message::EKind eKind, const char* szText);
Thread* GetPostThread() { return m_pPostThread; }
void SetPostThread(Thread* pPostThread) { m_pPostThread = pPostThread; }
Messages* LockMessages();
void UnlockMessages();
};
typedef std::deque<PostInfo*> PostQueue;
typedef std::vector<int> IDList;
typedef std::vector<char*> NameList;
class UrlInfo
{
public:
enum EStatus
{
aiUndefined,
aiRunning,
aiFinished,
aiFailed,
aiRetry
};
private:
int m_iID;
char* m_szURL;
char* m_szNZBFilename;
char* m_szCategory;
int m_iPriority;
bool m_bAddTop;
bool m_bAddPaused;
EStatus m_eStatus;
static int m_iIDGen;
public:
UrlInfo();
~UrlInfo();
int GetID() { return m_iID; }
void SetID(int s);
const char* GetURL() { return m_szURL; } // needs locking (for shared objects)
void SetURL(const char* szURL); // needs locking (for shared objects)
const char* GetNZBFilename() { return m_szNZBFilename; } // needs locking (for shared objects)
void SetNZBFilename(const char* szNZBFilename); // needs locking (for shared objects)
const char* GetCategory() { return m_szCategory; } // needs locking (for shared objects)
void SetCategory(const char* szCategory); // needs locking (for shared objects)
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
bool GetAddTop() { return m_bAddTop; }
void SetAddTop(bool bAddTop) { m_bAddTop = bAddTop; }
bool GetAddPaused() { return m_bAddPaused; }
void SetAddPaused(bool bAddPaused) { m_bAddPaused = bAddPaused; }
void GetName(char* szBuffer, int iSize); // needs locking (for shared objects)
static void MakeNiceName(const char* szURL, const char* szNZBFilename, char* szBuffer, int iSize);
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
};
typedef std::deque<UrlInfo*> UrlQueue;
class HistoryInfo
{
public:
enum EKind
{
hkUnknown,
hkNZBInfo,
hkUrlInfo
};
private:
int m_iID;
EKind m_eKind;
void* m_pInfo;
time_t m_tTime;
static int m_iIDGen;
public:
HistoryInfo(NZBInfo* pNZBInfo);
HistoryInfo(UrlInfo* pUrlInfo);
~HistoryInfo();
int GetID() { return m_iID; }
void SetID(int s);
EKind GetKind() { return m_eKind; }
NZBInfo* GetNZBInfo() { return (NZBInfo*)m_pInfo; }
UrlInfo* GetUrlInfo() { return (UrlInfo*)m_pInfo; }
void DiscardUrlInfo() { m_pInfo = NULL; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
void GetName(char* szBuffer, int iSize); // needs locking (for shared objects)
};
typedef std::deque<HistoryInfo*> HistoryList;
class DownloadQueue
{
protected:
NZBInfoList m_NZBInfoList;
FileQueue m_FileQueue;
PostQueue m_PostQueue;
HistoryList m_HistoryList;
FileQueue m_ParkedFiles;
UrlQueue m_UrlQueue;
public:
NZBInfoList* GetNZBInfoList() { return &m_NZBInfoList; }
FileQueue* GetFileQueue() { return &m_FileQueue; }
PostQueue* GetPostQueue() { return &m_PostQueue; }
HistoryList* GetHistoryList() { return &m_HistoryList; }
FileQueue* GetParkedFiles() { return &m_ParkedFiles; }
UrlQueue* GetUrlQueue() { return &m_UrlQueue; }
void BuildGroups(GroupQueue* pGroupQueue);
};
class DownloadQueueHolder
{
public:
virtual ~DownloadQueueHolder() {};
virtual DownloadQueue* LockQueue() = 0;
virtual void UnlockQueue() = 0;
};
#endif
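
DownloadQueue::BuildGroups() (declared above, implemented in DownloadInfo.cpp) condenses the flat FileQueue into one GroupInfo per NZB, aggregating remaining size, paused size, priorities, time stamps and the par-file count; this grouped view is what the frontends and the RPC layer present as download groups. A minimal consumer sketch, assuming the caller already holds the locked DownloadQueue as pDownloadQueue (that variable and the loop are hypothetical):

// Hypothetical display pass over the grouped view of a locked queue.
GroupQueue groupQueue;
pDownloadQueue->BuildGroups(&groupQueue);        // one GroupInfo per NZBInfo
for (GroupQueue::iterator it = groupQueue.begin(); it != groupQueue.end(); it++)
{
	GroupInfo* pGroupInfo = *it;
	info("%s: %i file(s), %i active download(s)", pGroupInfo->GetNZBInfo()->GetName(),
		pGroupInfo->GetRemainingFileCount(), pGroupInfo->GetActiveDownloads());
	delete pGroupInfo;                           // its destructor releases the NZBInfo reference
}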

Makefile.am
View File

@@ -1,7 +1,7 @@
#
# This file if part of nzbget
# This file is part of nzbget
#
# Copyright (C) 2008-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
# Copyright (C) 2008-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -22,58 +22,265 @@
bin_PROGRAMS = nzbget
nzbget_SOURCES = \
ArticleDownloader.cpp ArticleDownloader.h BinRpc.cpp BinRpc.h \
ColoredFrontend.cpp ColoredFrontend.h Connection.cpp Connection.h Decoder.cpp Decoder.h \
DiskState.cpp DiskState.h DownloadInfo.cpp DownloadInfo.h Frontend.cpp Frontend.h \
Log.cpp Log.h LoggableFrontend.cpp LoggableFrontend.h MessageBase.h \
NCursesFrontend.cpp NCursesFrontend.h NNTPConnection.cpp NNTPConnection.h NZBFile.cpp \
NZBFile.h NewsServer.cpp NewsServer.h Observer.cpp \
Observer.h Options.cpp Options.h ParChecker.cpp ParChecker.h ParRenamer.cpp ParRenamer.h \
ParCoordinator.cpp ParCoordinator.h PrePostProcessor.cpp PrePostProcessor.h QueueCoordinator.cpp \
QueueCoordinator.h QueueEditor.cpp QueueEditor.h RemoteClient.cpp RemoteClient.h \
RemoteServer.cpp RemoteServer.h Scanner.cpp Scanner.h Scheduler.cpp Scheduler.h ScriptController.cpp \
ScriptController.h ServerPool.cpp ServerPool.h svn_version.cpp TLS.cpp TLS.h Thread.cpp Thread.h \
Util.cpp Util.h XmlRpc.cpp XmlRpc.h WebDownloader.cpp WebDownloader.h WebServer.cpp WebServer.h \
UrlCoordinator.cpp UrlCoordinator.h Unpack.cpp Unpack.h nzbget.cpp nzbget.h
daemon/connect/Connection.cpp \
daemon/connect/Connection.h \
daemon/connect/TLS.cpp \
daemon/connect/TLS.h \
daemon/connect/WebDownloader.cpp \
daemon/connect/WebDownloader.h \
daemon/feed/FeedCoordinator.cpp \
daemon/feed/FeedCoordinator.h \
daemon/feed/FeedFile.cpp \
daemon/feed/FeedFile.h \
daemon/feed/FeedFilter.cpp \
daemon/feed/FeedFilter.h \
daemon/feed/FeedInfo.cpp \
daemon/feed/FeedInfo.h \
daemon/frontend/ColoredFrontend.cpp \
daemon/frontend/ColoredFrontend.h \
daemon/frontend/Frontend.cpp \
daemon/frontend/Frontend.h \
daemon/frontend/LoggableFrontend.cpp \
daemon/frontend/LoggableFrontend.h \
daemon/frontend/NCursesFrontend.cpp \
daemon/frontend/NCursesFrontend.h \
daemon/main/Maintenance.cpp \
daemon/main/Maintenance.h \
daemon/main/nzbget.cpp \
daemon/main/nzbget.h \
daemon/main/Options.cpp \
daemon/main/Options.h \
daemon/main/Scheduler.cpp \
daemon/main/Scheduler.h \
daemon/main/StackTrace.cpp \
daemon/main/StackTrace.h \
daemon/nntp/ArticleDownloader.cpp \
daemon/nntp/ArticleDownloader.h \
daemon/nntp/ArticleWriter.cpp \
daemon/nntp/ArticleWriter.h \
daemon/nntp/Decoder.cpp \
daemon/nntp/Decoder.h \
daemon/nntp/NewsServer.cpp \
daemon/nntp/NewsServer.h \
daemon/nntp/NNTPConnection.cpp \
daemon/nntp/NNTPConnection.h \
daemon/nntp/ServerPool.cpp \
daemon/nntp/ServerPool.h \
daemon/nntp/StatMeter.cpp \
daemon/nntp/StatMeter.h \
daemon/postprocess/ParChecker.cpp \
daemon/postprocess/ParChecker.h \
daemon/postprocess/ParCoordinator.cpp \
daemon/postprocess/ParCoordinator.h \
daemon/postprocess/ParRenamer.cpp \
daemon/postprocess/ParRenamer.h \
daemon/postprocess/PostScript.cpp \
daemon/postprocess/PostScript.h \
daemon/postprocess/PrePostProcessor.cpp \
daemon/postprocess/PrePostProcessor.h \
daemon/postprocess/Unpack.cpp \
daemon/postprocess/Unpack.h \
daemon/queue/DiskState.cpp \
daemon/queue/DiskState.h \
daemon/queue/DownloadInfo.cpp \
daemon/queue/DownloadInfo.h \
daemon/queue/DupeCoordinator.cpp \
daemon/queue/DupeCoordinator.h \
daemon/queue/HistoryCoordinator.cpp \
daemon/queue/HistoryCoordinator.h \
daemon/queue/NZBFile.cpp \
daemon/queue/NZBFile.h \
daemon/queue/QueueCoordinator.cpp \
daemon/queue/QueueCoordinator.h \
daemon/queue/QueueEditor.cpp \
daemon/queue/QueueEditor.h \
daemon/queue/QueueScript.cpp \
daemon/queue/QueueScript.h \
daemon/queue/Scanner.cpp \
daemon/queue/Scanner.h \
daemon/queue/UrlCoordinator.cpp \
daemon/queue/UrlCoordinator.h \
daemon/remote/BinRpc.cpp \
daemon/remote/BinRpc.h \
daemon/remote/MessageBase.h \
daemon/remote/RemoteClient.cpp \
daemon/remote/RemoteClient.h \
daemon/remote/RemoteServer.cpp \
daemon/remote/RemoteServer.h \
daemon/remote/WebServer.cpp \
daemon/remote/WebServer.h \
daemon/remote/XmlRpc.cpp \
daemon/remote/XmlRpc.h \
daemon/util/Log.cpp \
daemon/util/Log.h \
daemon/util/Observer.cpp \
daemon/util/Observer.h \
daemon/util/Script.cpp \
daemon/util/Script.h \
daemon/util/Thread.cpp \
daemon/util/Thread.h \
daemon/util/Util.cpp \
daemon/util/Util.h \
svn_version.cpp
if WITH_PAR2
nzbget_SOURCES += \
lib/par2/commandline.cpp \
lib/par2/commandline.h \
lib/par2/crc.cpp \
lib/par2/crc.h \
lib/par2/creatorpacket.cpp \
lib/par2/creatorpacket.h \
lib/par2/criticalpacket.cpp \
lib/par2/criticalpacket.h \
lib/par2/datablock.cpp \
lib/par2/datablock.h \
lib/par2/descriptionpacket.cpp \
lib/par2/descriptionpacket.h \
lib/par2/diskfile.cpp \
lib/par2/diskfile.h \
lib/par2/filechecksummer.cpp \
lib/par2/filechecksummer.h \
lib/par2/galois.cpp \
lib/par2/galois.h \
lib/par2/letype.h \
lib/par2/mainpacket.cpp \
lib/par2/mainpacket.h \
lib/par2/md5.cpp \
lib/par2/md5.h \
lib/par2/par2cmdline.h \
lib/par2/par2creatorsourcefile.cpp \
lib/par2/par2creatorsourcefile.h \
lib/par2/par2fileformat.cpp \
lib/par2/par2fileformat.h \
lib/par2/par2repairer.cpp \
lib/par2/par2repairer.h \
lib/par2/par2repairersourcefile.cpp \
lib/par2/par2repairersourcefile.h \
lib/par2/parheaders.cpp \
lib/par2/parheaders.h \
lib/par2/recoverypacket.cpp \
lib/par2/recoverypacket.h \
lib/par2/reedsolomon.cpp \
lib/par2/reedsolomon.h \
lib/par2/verificationhashtable.cpp \
lib/par2/verificationhashtable.h \
lib/par2/verificationpacket.cpp \
lib/par2/verificationpacket.h
endif
AM_CPPFLAGS = \
-I$(srcdir)/daemon/connect \
-I$(srcdir)/daemon/feed \
-I$(srcdir)/daemon/frontend \
-I$(srcdir)/daemon/main \
-I$(srcdir)/daemon/nntp \
-I$(srcdir)/daemon/postprocess \
-I$(srcdir)/daemon/queue \
-I$(srcdir)/daemon/remote \
-I$(srcdir)/daemon/util \
-I$(srcdir)/lib/par2
EXTRA_DIST = \
Makefile.cvs nzbgetd nzbget-postprocess.sh \
$(patches_FILES) $(windows_FILES)
patches_FILES = \
libpar2-0.2-bugfixes.patch libpar2-0.2-cancel.patch \
libpar2-0.2-MSVC8.patch libsigc++-2.0.18-MSVC8.patch
Makefile.cvs \
nzbgetd \
$(windows_FILES) \
$(osx_FILES)
windows_FILES = \
win32.h NTService.cpp NTService.h nzbget.sln nzbget.vcproj nzbget-shell.bat
daemon/windows/NTService.cpp \
daemon/windows/NTService.h \
daemon/windows/win32.h \
nzbget.sln \
nzbget.vcproj \
nzbget-shell.bat
osx_FILES = \
osx/App_Prefix.pch \
osx/NZBGet-Info.plist \
osx/DaemonController.h \
osx/DaemonController.m \
osx/MainApp.h \
osx/MainApp.m \
osx/MainApp.xib \
osx/PFMoveApplication.h \
osx/PFMoveApplication.m \
osx/PreferencesDialog.h \
osx/PreferencesDialog.m \
osx/PreferencesDialog.xib \
osx/RPC.h \
osx/RPC.m \
osx/WebClient.h \
osx/WebClient.m \
osx/WelcomeDialog.h \
osx/WelcomeDialog.m \
osx/WelcomeDialog.xib \
osx/NZBGet.xcodeproj/project.pbxproj \
osx/Resources/Images/mainicon.icns \
osx/Resources/Images/statusicon.png \
osx/Resources/Images/statusicon@2x.png \
osx/Resources/licenses/license-bootstrap.txt \
osx/Resources/licenses/license-jquery-GPL.txt \
osx/Resources/licenses/license-jquery-MIT.txt \
osx/Resources/Credits.rtf \
osx/Resources/Localizable.strings \
osx/Resources/Welcome.rtf
doc_FILES = \
README ChangeLog COPYING
README \
ChangeLog \
COPYING \
lib/par2/AUTHORS \
lib/par2/README
exampleconf_FILES = \
nzbget.conf nzbget-postprocess.conf
nzbget.conf
webui_FILES = \
webui/index.html webui/index.js webui/downloads.js webui/edit.js webui/fasttable.js \
webui/history.js webui/messages.js webui/status.js webui/style.css webui/upload.js \
webui/util.js webui/config.js \
webui/lib/bootstrap.js webui/lib/bootstrap.min.js webui/lib/bootstrap.css \
webui/lib/jquery.js webui/lib/jquery.min.js \
webui/img/icons.png webui/img/icons-2x.png \
webui/img/transmit.gif webui/img/transmit-file.gif webui/img/favicon.ico \
webui/img/download-anim-green-2x.png webui/img/download-anim-orange-2x.png \
webui/index.html \
webui/index.js \
webui/downloads.js \
webui/edit.js \
webui/fasttable.js \
webui/history.js \
webui/messages.js \
webui/status.js \
webui/style.css \
webui/upload.js \
webui/util.js \
webui/config.js \
webui/feed.js \
webui/lib/bootstrap.js \
webui/lib/bootstrap.min.js \
webui/lib/bootstrap.css \
webui/lib/jquery.js \
webui/lib/jquery.min.js \
webui/lib/raphael.js \
webui/lib/raphael.min.js \
webui/lib/elycharts.js \
webui/lib/elycharts.min.js \
webui/img/icons.png \
webui/img/icons-2x.png \
webui/img/transmit.gif \
webui/img/transmit-file.gif \
webui/img/favicon.ico \
webui/img/download-anim-green-2x.png \
webui/img/download-anim-orange-2x.png \
webui/img/transmit-reload-2x.gif
scripts_FILES = \
scripts/EMail.py \
scripts/Logger.py
# Install
sbin_SCRIPTS = nzbgetd
bin_SCRIPTS = nzbget-postprocess.sh
dist_doc_DATA = $(doc_FILES)
exampleconfdir = $(datadir)/nzbget
dist_exampleconf_DATA = $(exampleconf_FILES)
webuiconfdir = $(datadir)/nzbget/webui
dist_webuiconf_DATA = $(exampleconf_FILES)
webuidir = $(datadir)/nzbget
nobase_dist_webui_DATA = $(webui_FILES)
scriptsdir = $(datadir)/nzbget
nobase_dist_scripts_SCRIPTS = $(scripts_FILES)
# Note about "sed":
# We need to make some changes in installed files.
@@ -91,12 +298,15 @@ install-exec-hook:
sed 's?/usr/local/bin?$(bindir)?' < "$(DESTDIR)$(sbindir)/nzbgetd.temp" > "$(DESTDIR)$(sbindir)/nzbgetd"
rm "$(DESTDIR)$(sbindir)/nzbgetd.temp"
# Prepare example configuration files
# Prepare example configuration file
install-data-hook:
rm -f "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
cp "$(DESTDIR)$(exampleconfdir)/nzbget.conf" "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:"nzbget-postprocess.sh":"nzbget-postprocess.sh" (installed into $(bindir)):' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:^WebDir=:WebDir=$(webuidir)/webui:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:^ConfigTemplate=:ConfigTemplate=$(exampleconfdir)/nzbget.conf:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:configuration file (typically installed:configuration file (installed:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:/usr/local/share/nzbget/nzbget.conf):$(exampleconfdir)/nzbget.conf):' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:^WebDir=:WebDir=$(webuidir)/webui:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:typically installed to /usr/local/share/nzbget/scripts:installed to $(scriptsdir)/scripts:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
rm "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
# Install configuration files into /etc
@@ -105,18 +315,9 @@ install-conf:
if test ! -f "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; then \
$(mkinstalldirs) "$(DESTDIR)$(sysconfdir)" ; \
cp "$(DESTDIR)$(exampleconfdir)/nzbget.conf" "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; \
rm -f "$(DESTDIR)$(sysconfdir)/nzbget.conf.temp" ; \
cp "$(DESTDIR)$(sysconfdir)/nzbget.conf" "$(DESTDIR)$(sysconfdir)/nzbget.conf.temp" ; \
sed 's:^PostProcess=:PostProcess=$(bindir)/nzbget-postprocess.sh:' < "$(DESTDIR)$(sysconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; \
rm "$(DESTDIR)$(sysconfdir)/nzbget.conf.temp" ; \
fi
if test ! -f "$(DESTDIR)$(sysconfdir)/nzbget-postprocess.conf" ; then \
$(mkinstalldirs) "$(DESTDIR)$(sysconfdir)" ; \
cp "$(DESTDIR)$(exampleconfdir)/nzbget-postprocess.conf" "$(DESTDIR)$(sysconfdir)/nzbget-postprocess.conf" ; \
fi
uninstall-conf:
rm -f "$(DESTDIR)$(sysconfdir)/nzbget-postprocess.conf"
rm -f "$(DESTDIR)$(sysconfdir)/nzbget.conf"
# Determining subversion revision:
@@ -165,6 +366,7 @@ clean-bak: rm *~
# Fix permissions
dist-hook:
chmod -x $(distdir)/*.cpp $(distdir)/*.h
find $(distdir)/daemon -type f -print -exec chmod -x {} \;
find $(distdir)/webui -type f -print -exec chmod -x {} \;
find $(distdir)/lib -type f -print -exec chmod -x {} \;

View File

File diff suppressed because it is too large

ParChecker.cpp
View File

@@ -1,750 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#ifndef DISABLE_PARCHECK
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <fstream>
#ifdef WIN32
#include <par2cmdline.h>
#include <par2repairer.h>
#else
#include <unistd.h>
#include <libpar2/par2cmdline.h>
#include <libpar2/par2repairer.h>
#endif
#include "nzbget.h"
#include "ParChecker.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
const char* Par2CmdLineErrStr[] = { "OK",
"data files are damaged and there is enough recovery data available to repair them",
"data files are damaged and there is insufficient recovery data available to be able to repair them",
"there was something wrong with the command line arguments",
"the PAR2 files did not contain sufficient information about the data files to be able to verify them",
"repair completed but the data files still appear to be damaged",
"an error occured when accessing files",
"internal error occurred",
"out of memory" };
class Repairer : public Par2Repairer
{
private:
CommandLine commandLine;
public:
Result PreProcess(const char *szParFilename);
Result Process(bool dorepair);
friend class ParChecker;
};
Result Repairer::PreProcess(const char *szParFilename)
{
#ifdef HAVE_PAR2_BUGFIXES_V2
// Ensure linking against the patched version of libpar2
BugfixesPatchVersion2();
#endif
if (g_pOptions->GetParScan() == Options::psFull)
{
char szWildcardParam[1024];
strncpy(szWildcardParam, szParFilename, 1024);
szWildcardParam[1024-1] = '\0';
char* szBasename = Util::BaseFileName(szWildcardParam);
if (szBasename != szWildcardParam && strlen(szBasename) > 0)
{
szBasename[0] = '*';
szBasename[1] = '\0';
}
const char* argv[] = { "par2", "r", "-v", "-v", szParFilename, szWildcardParam };
if (!commandLine.Parse(6, (char**)argv))
{
return eInvalidCommandLineArguments;
}
}
else
{
const char* argv[] = { "par2", "r", "-v", "-v", szParFilename };
if (!commandLine.Parse(5, (char**)argv))
{
return eInvalidCommandLineArguments;
}
}
return Par2Repairer::PreProcess(commandLine);
}
Result Repairer::Process(bool dorepair)
{
return Par2Repairer::Process(commandLine, dorepair);
}
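/*
 * Sketch of how this Repairer wrapper is driven (the core of the same
 * sequence appears in ParChecker::Run() below; the local names here are
 * hypothetical):
 *
 *   Repairer* pRepairer = new Repairer();
 *   Result res = pRepairer->PreProcess(szParFilename);  // parse the par-set, build file lists
 *   if (res == eSuccess)
 *       res = pRepairer->Process(false);                // verify only
 *   if (res == eRepairPossible)
 *       res = pRepairer->Process(true);                 // perform the actual repair
 *   delete pRepairer;
 */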
class MissingFilesComparator
{
private:
const char* m_szBaseParFilename;
public:
MissingFilesComparator(const char* szBaseParFilename) : m_szBaseParFilename(szBaseParFilename) {}
bool operator()(CommandLine::ExtraFile* pFirst, CommandLine::ExtraFile* pSecond) const;
};
/*
* Files with the same name as in par-file (and a different extension) are
* placed at the top of the list to be scanned first.
*/
bool MissingFilesComparator::operator()(CommandLine::ExtraFile* pFile1, CommandLine::ExtraFile* pFile2) const
{
char name1[1024];
strncpy(name1, Util::BaseFileName(pFile1->FileName().c_str()), 1024);
name1[1024-1] = '\0';
if (char* ext = strrchr(name1, '.')) *ext = '\0'; // trim extension
char name2[1024];
strncpy(name2, Util::BaseFileName(pFile2->FileName().c_str()), 1024);
name2[1024-1] = '\0';
if (char* ext = strrchr(name2, '.')) *ext = '\0'; // trim extension
return strcmp(name1, m_szBaseParFilename) == 0 && strcmp(name1, name2) != 0;
}
ParChecker::ParChecker()
{
debug("Creating ParChecker");
m_eStatus = psUndefined;
m_szParFilename = NULL;
m_szInfoName = NULL;
m_szErrMsg = NULL;
m_szProgressLabel = (char*)malloc(1024);
m_iFileProgress = 0;
m_iStageProgress = 0;
m_iExtraFiles = 0;
m_bVerifyingExtraFiles = false;
m_bCancelled = false;
m_eStage = ptLoadingPars;
}
ParChecker::~ParChecker()
{
debug("Destroying ParChecker");
if (m_szParFilename)
{
free(m_szParFilename);
}
if (m_szInfoName)
{
free(m_szInfoName);
}
if (m_szErrMsg)
{
free(m_szErrMsg);
}
free(m_szProgressLabel);
Cleanup();
}
void ParChecker::Cleanup()
{
for (FileList::iterator it = m_QueuedParFiles.begin(); it != m_QueuedParFiles.end() ;it++)
{
free(*it);
}
m_QueuedParFiles.clear();
for (FileList::iterator it = m_ProcessedFiles.begin(); it != m_ProcessedFiles.end() ;it++)
{
free(*it);
}
m_ProcessedFiles.clear();
}
void ParChecker::SetParFilename(const char * szParFilename)
{
if (m_szParFilename)
{
free(m_szParFilename);
}
m_szParFilename = strdup(szParFilename);
}
void ParChecker::SetInfoName(const char * szInfoName)
{
if (m_szInfoName)
{
free(m_szInfoName);
}
m_szInfoName = strdup(szInfoName);
}
void ParChecker::SetStatus(EStatus eStatus)
{
m_eStatus = eStatus;
Notify(NULL);
}
void ParChecker::Run()
{
Cleanup();
m_bRepairNotNeeded = false;
m_eStage = ptLoadingPars;
m_iProcessedFiles = 0;
m_iExtraFiles = 0;
m_bVerifyingExtraFiles = false;
m_bCancelled = false;
info("Verifying %s", m_szInfoName);
SetStatus(psWorking);
debug("par: %s", m_szParFilename);
Result res;
Repairer* pRepairer = new Repairer();
m_pRepairer = pRepairer;
pRepairer->sig_filename.connect(sigc::mem_fun(*this, &ParChecker::signal_filename));
pRepairer->sig_progress.connect(sigc::mem_fun(*this, &ParChecker::signal_progress));
pRepairer->sig_done.connect(sigc::mem_fun(*this, &ParChecker::signal_done));
snprintf(m_szProgressLabel, 1024, "Verifying %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
m_iStageProgress = 0;
UpdateProgress();
res = pRepairer->PreProcess(m_szParFilename);
debug("ParChecker: PreProcess-result=%i", res);
if (res != eSuccess || IsStopped())
{
if (res == eInvalidCommandLineArguments)
{
error("Could not start par-check for %s. Par-file: %s", m_szInfoName, m_szParFilename);
m_szErrMsg = strdup("Command line could not be parsed");
}
else
{
error("Could not verify %s: %s", m_szInfoName, IsStopped() ? "due stopping" : "par2-file could not be processed");
m_szErrMsg = strdup("par2-file could not be processed");
}
SetStatus(psFailed);
delete pRepairer;
Cleanup();
return;
}
char BufReason[1024];
BufReason[0] = '\0';
if (m_szErrMsg)
{
free(m_szErrMsg);
}
m_szErrMsg = NULL;
m_eStage = ptVerifyingSources;
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
if (!IsStopped() && pRepairer->missingfilecount > 0 && g_pOptions->GetParScan() == Options::psAuto && AddMissingFiles())
{
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
}
if (!IsStopped() && res == eRepairNotPossible && CheckSplittedFragments())
{
pRepairer->UpdateVerificationResults();
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
}
bool bMoreFilesLoaded = true;
while (!IsStopped() && res == eRepairNotPossible)
{
int missingblockcount = pRepairer->missingblockcount - pRepairer->recoverypacketmap.size();
if (bMoreFilesLoaded)
{
info("Need more %i par-block(s) for %s", missingblockcount, m_szInfoName);
}
m_mutexQueuedParFiles.Lock();
bool hasMorePars = !m_QueuedParFiles.empty();
m_mutexQueuedParFiles.Unlock();
if (!hasMorePars)
{
int iBlockFound = 0;
bool requested = RequestMorePars(missingblockcount, &iBlockFound);
if (requested)
{
strncpy(m_szProgressLabel, "Awaiting additional par-files", 1024);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
UpdateProgress();
}
m_mutexQueuedParFiles.Lock();
hasMorePars = !m_QueuedParFiles.empty();
m_bQueuedParFilesChanged = false;
m_mutexQueuedParFiles.Unlock();
if (!requested && !hasMorePars)
{
snprintf(BufReason, 1024, "not enough par-blocks, %i block(s) needed, but %i block(s) available", missingblockcount, iBlockFound);
BufReason[1024-1] = '\0';
m_szErrMsg = strdup(BufReason);
break;
}
if (!hasMorePars)
{
// wait until new files are added by "AddParFile" or a change is signaled by "QueueChanged"
bool bQueuedParFilesChanged = false;
while (!bQueuedParFilesChanged && !IsStopped())
{
m_mutexQueuedParFiles.Lock();
bQueuedParFilesChanged = m_bQueuedParFilesChanged;
m_mutexQueuedParFiles.Unlock();
usleep(100 * 1000);
}
}
}
if (IsStopped())
{
break;
}
bMoreFilesLoaded = LoadMorePars();
if (bMoreFilesLoaded)
{
pRepairer->UpdateVerificationResults();
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
}
}
if (IsStopped())
{
SetStatus(psFailed);
delete pRepairer;
Cleanup();
return;
}
if (res == eSuccess)
{
info("Repair not needed for %s", m_szInfoName);
m_bRepairNotNeeded = true;
}
else if (res == eRepairPossible)
{
if (g_pOptions->GetParRepair())
{
info("Repairing %s", m_szInfoName);
snprintf(m_szProgressLabel, 1024, "Repairing %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
m_iStageProgress = 0;
m_iProcessedFiles = 0;
m_eStage = ptRepairing;
m_iFilesToRepair = pRepairer->damagedfilecount + pRepairer->missingfilecount;
UpdateProgress();
res = pRepairer->Process(true);
debug("ParChecker: Process-result=%i", res);
if (res == eSuccess)
{
info("Successfully repaired %s", m_szInfoName);
}
}
else
{
info("Repair possible for %s", m_szInfoName);
res = eSuccess;
}
}
if (m_bCancelled)
{
warn("Repair cancelled for %s", m_szInfoName);
m_szErrMsg = strdup("repair cancelled");
SetStatus(psFailed);
}
else if (res == eSuccess)
{
SetStatus(psFinished);
}
else
{
if (!m_szErrMsg && (int)res >= 0 && (int)res <= 8)
{
m_szErrMsg = strdup(Par2CmdLineErrStr[res]);
}
error("Repair failed for %s: %s", m_szInfoName, m_szErrMsg ? m_szErrMsg : "");
SetStatus(psFailed);
}
delete pRepairer;
Cleanup();
}
bool ParChecker::LoadMorePars()
{
m_mutexQueuedParFiles.Lock();
FileList moreFiles;
moreFiles.assign(m_QueuedParFiles.begin(), m_QueuedParFiles.end());
m_QueuedParFiles.clear();
m_mutexQueuedParFiles.Unlock();
for (FileList::iterator it = moreFiles.begin(); it != moreFiles.end() ;it++)
{
char* szParFilename = *it;
bool loadedOK = ((Repairer*)m_pRepairer)->LoadPacketsFromFile(szParFilename);
if (loadedOK)
{
info("File %s successfully loaded for par-check", Util::BaseFileName(szParFilename), m_szInfoName);
}
else
{
info("Could not load file %s for par-check", Util::BaseFileName(szParFilename), m_szInfoName);
}
free(szParFilename);
}
return !moreFiles.empty();
}
void ParChecker::AddParFile(const char * szParFilename)
{
m_mutexQueuedParFiles.Lock();
m_QueuedParFiles.push_back(strdup(szParFilename));
m_bQueuedParFilesChanged = true;
m_mutexQueuedParFiles.Unlock();
}
void ParChecker::QueueChanged()
{
m_mutexQueuedParFiles.Lock();
m_bQueuedParFilesChanged = true;
m_mutexQueuedParFiles.Unlock();
}
bool ParChecker::CheckSplittedFragments()
{
bool bFragmentsAdded = false;
for (std::vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
it != ((Repairer*)m_pRepairer)->sourcefiles.end(); it++)
{
Par2RepairerSourceFile *sourcefile = *it;
if (!sourcefile->GetTargetExists() && AddSplittedFragments(sourcefile->TargetFileName().c_str()))
{
bFragmentsAdded = true;
}
}
return bFragmentsAdded;
}
bool ParChecker::AddSplittedFragments(const char* szFilename)
{
char szDirectory[1024];
strncpy(szDirectory, szFilename, 1024);
szDirectory[1024-1] = '\0';
char* szBasename = Util::BaseFileName(szDirectory);
if (szBasename == szDirectory)
{
return false;
}
szBasename[-1] = '\0';
int iBaseLen = strlen(szBasename);
std::list<CommandLine::ExtraFile> extrafiles;
DirBrowser dir(szDirectory);
while (const char* filename = dir.Next())
{
if (!strncasecmp(filename, szBasename, iBaseLen))
{
const char* p = filename + iBaseLen;
if (*p == '.')
{
for (p++; *p && strchr("0123456789", *p); p++) ;
if (!*p)
{
debug("Found splitted fragment %s", filename);
char fullfilename[1024];
snprintf(fullfilename, 1024, "%s%c%s", szDirectory, PATH_SEPARATOR, filename);
fullfilename[1024-1] = '\0';
CommandLine::ExtraFile extrafile(fullfilename, Util::FileSize(fullfilename));
extrafiles.push_back(extrafile);
}
}
}
}
bool bFragmentsAdded = false;
if (!extrafiles.empty())
{
m_iExtraFiles += extrafiles.size();
m_bVerifyingExtraFiles = true;
bFragmentsAdded = ((Repairer*)m_pRepairer)->VerifyExtraFiles(extrafiles);
m_bVerifyingExtraFiles = false;
}
return bFragmentsAdded;
}
bool ParChecker::AddMissingFiles()
{
info("Performing extra par-scan for %s", m_szInfoName);
char szDirectory[1024];
strncpy(szDirectory, m_szParFilename, 1024);
szDirectory[1024-1] = '\0';
char* szBasename = Util::BaseFileName(szDirectory);
if (szBasename == szDirectory)
{
return false;
}
szBasename[-1] = '\0';
std::list<CommandLine::ExtraFile*> extrafiles;
DirBrowser dir(szDirectory);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && strcmp(filename, "_brokenlog.txt"))
{
bool bAlreadyScanned = false;
for (FileList::iterator it = m_ProcessedFiles.begin(); it != m_ProcessedFiles.end(); it++)
{
const char* szProcessedFilename = *it;
if (!strcasecmp(Util::BaseFileName(szProcessedFilename), filename))
{
bAlreadyScanned = true;
break;
}
}
if (!bAlreadyScanned)
{
char fullfilename[1024];
snprintf(fullfilename, 1024, "%s%c%s", szDirectory, PATH_SEPARATOR, filename);
fullfilename[1024-1] = '\0';
extrafiles.push_back(new CommandLine::ExtraFile(fullfilename, Util::FileSize(fullfilename)));
}
}
}
// Sort the list
char* szBaseParFilename = strdup(Util::BaseFileName(m_szParFilename));
if (char* ext = strrchr(szBaseParFilename, '.')) *ext = '\0'; // trim extension
extrafiles.sort(MissingFilesComparator(szBaseParFilename));
free(szBaseParFilename);
// Scan files
bool bFilesAdded = false;
if (!extrafiles.empty())
{
m_iExtraFiles += extrafiles.size();
m_bVerifyingExtraFiles = true;
std::list<CommandLine::ExtraFile> extrafiles1;
// adding files one by one until all missing files are found
while (!IsStopped() && !m_bCancelled && extrafiles.size() > 0 && ((Repairer*)m_pRepairer)->missingfilecount > 0)
{
CommandLine::ExtraFile* pExtraFile = extrafiles.front();
extrafiles.pop_front();
extrafiles1.clear();
extrafiles1.push_back(*pExtraFile);
bFilesAdded = ((Repairer*)m_pRepairer)->VerifyExtraFiles(extrafiles1) || bFilesAdded;
((Repairer*)m_pRepairer)->UpdateVerificationResults();
delete pExtraFile;
}
m_bVerifyingExtraFiles = false;
// free any remaining objects
for (std::list<CommandLine::ExtraFile*>::iterator it = extrafiles.begin(); it != extrafiles.end() ;it++)
{
delete *it;
}
}
return bFilesAdded;
}
void ParChecker::signal_filename(std::string str)
{
const char* szStageMessage[] = { "Loading file", "Verifying file", "Repairing file", "Verifying repaired file" };
if (m_eStage == ptRepairing)
{
m_eStage = ptVerifyingRepaired;
}
info("%s %s", szStageMessage[m_eStage], str.c_str());
if (m_eStage == ptLoadingPars || m_eStage == ptVerifyingSources)
{
m_ProcessedFiles.push_back(strdup(str.c_str()));
}
snprintf(m_szProgressLabel, 1024, "%s %s", szStageMessage[m_eStage], str.c_str());
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
UpdateProgress();
}
void ParChecker::signal_progress(double progress)
{
m_iFileProgress = (int)progress;
if (m_eStage == ptRepairing)
{
// calculating repair-data for all files
m_iStageProgress = m_iFileProgress;
}
else
{
// processing individual files
int iTotalFiles = 0;
if (m_eStage == ptVerifyingRepaired)
{
// repairing individual files
iTotalFiles = m_iFilesToRepair;
}
else
{
// verifying individual files
iTotalFiles = ((Repairer*)m_pRepairer)->sourcefiles.size() + m_iExtraFiles;
}
if (iTotalFiles > 0)
{
if (m_iFileProgress < 1000)
{
m_iStageProgress = (m_iProcessedFiles * 1000 + m_iFileProgress) / iTotalFiles;
}
else
{
m_iStageProgress = m_iProcessedFiles * 1000 / iTotalFiles;
}
}
else
{
m_iStageProgress = 0;
}
}
debug("Current-progres: %i, Total-progress: %i", m_iFileProgress, m_iStageProgress);
UpdateProgress();
}
void ParChecker::signal_done(std::string str, int available, int total)
{
m_iProcessedFiles++;
if (m_eStage == ptVerifyingSources)
{
if (available < total && !m_bVerifyingExtraFiles)
{
bool bFileExists = true;
for (std::vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
it != ((Repairer*)m_pRepairer)->sourcefiles.end(); it++)
{
Par2RepairerSourceFile *sourcefile = *it;
if (sourcefile && !strcmp(str.c_str(), Util::BaseFileName(sourcefile->TargetFileName().c_str())) &&
!sourcefile->GetTargetExists())
{
bFileExists = false;
break;
}
}
if (bFileExists)
{
warn("File %s has %i bad block(s) of total %i block(s)", str.c_str(), total - available, total);
}
else
{
warn("File %s with %i block(s) is missing", str.c_str(), total);
}
}
}
}
void ParChecker::Cancel()
{
#ifdef HAVE_PAR2_CANCEL
((Repairer*)m_pRepairer)->cancelled = true;
m_bCancelled = true;
#else
error("Could not cancel par-repair. The program was compiled using version of libpar2 which doesn't support cancelling of par-repair. Please apply libpar2-patches supplied with NZBGet and recompile libpar2 and NZBGet (see README for details).");
#endif
}
#endif


@@ -1,120 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef PARCHECKER_H
#define PARCHECKER_H
#ifndef DISABLE_PARCHECK
#include <deque>
#include "Thread.h"
#include "Observer.h"
class ParChecker : public Thread, public Subject
{
public:
enum EStatus
{
psUndefined,
psWorking,
psFailed,
psFinished
};
enum EStage
{
ptLoadingPars,
ptVerifyingSources,
ptRepairing,
ptVerifyingRepaired,
};
typedef std::deque<char*> FileList;
private:
char* m_szInfoName;
char* m_szParFilename;
EStatus m_eStatus;
EStage m_eStage;
void* m_pRepairer; // declared as void* to prevent the including of libpar2-headers into this header-file
char* m_szErrMsg;
bool m_bRepairNotNeeded;
FileList m_QueuedParFiles;
Mutex m_mutexQueuedParFiles;
bool m_bQueuedParFilesChanged;
FileList m_ProcessedFiles;
int m_iProcessedFiles;
int m_iFilesToRepair;
int m_iExtraFiles;
bool m_bVerifyingExtraFiles;
char* m_szProgressLabel;
int m_iFileProgress;
int m_iStageProgress;
bool m_bCancelled;
void Cleanup();
bool LoadMorePars();
bool CheckSplittedFragments();
bool AddSplittedFragments(const char* szFilename);
bool AddMissingFiles();
void signal_filename(std::string str);
void signal_progress(double progress);
void signal_done(std::string str, int available, int total);
protected:
/**
* Unpause par2-files
* Returns true if files with the required number of blocks were unpaused,
* or false if there are no more files in queue for this collection or not enough blocks.
*/
virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound) = 0;
virtual void UpdateProgress() {}
EStage GetStage() { return m_eStage; }
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetFileProgress() { return m_iFileProgress; }
int GetStageProgress() { return m_iStageProgress; }
public:
ParChecker();
virtual ~ParChecker();
virtual void Run();
const char* GetParFilename() { return m_szParFilename; }
void SetParFilename(const char* szParFilename);
const char* GetInfoName() { return m_szInfoName; }
void SetInfoName(const char* szInfoName);
void SetStatus(EStatus eStatus);
EStatus GetStatus() { return m_eStatus; }
const char* GetErrMsg() { return m_szErrMsg; }
bool GetRepairNotNeeded() { return m_bRepairNotNeeded; }
void AddParFile(const char* szParFilename);
void QueueChanged();
void Cancel();
bool GetCancelled() { return m_bCancelled; }
};
#endif
#endif
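
The ParChecker header above defines an abstract worker: a concrete subclass has to supply RequestMorePars() and may override UpdateProgress(). As an illustrative sketch only - the subclass name and its behaviour are invented here and are not part of the diff - a minimal implementation that never requests additional blocks and simply logs progress could look like this:

#include <cstdio>
#include "ParChecker.h"

class SimpleParChecker : public ParChecker
{
protected:
	// Assumption for the sketch: no further par2-files can be unpaused,
	// so repair has to succeed with the blocks already on disk.
	virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound)
	{
		if (pBlockFound)
		{
			*pBlockFound = 0;
		}
		return false;
	}

	// Report progress to stdout instead of a user interface.
	virtual void UpdateProgress()
	{
		printf("%s: %i/1000\n", GetProgressLabel(), GetStageProgress());
	}
};

A caller would set the main par2-file and display name via SetParFilename()/SetInfoName(), run the checker thread and afterwards inspect GetStatus() and GetErrMsg() declared above.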


@@ -1,323 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#ifndef DISABLE_PARCHECK
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <fstream>
#ifdef WIN32
#include <par2cmdline.h>
#include <par2repairer.h>
#include <md5.h>
#else
#include <unistd.h>
#include <libpar2/par2cmdline.h>
#include <libpar2/par2repairer.h>
#include <libpar2/md5.h>
#endif
#include "nzbget.h"
#include "ParRenamer.h"
#include "ParCoordinator.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
class ParRenamerRepairer : public Par2Repairer
{
public:
friend class ParRenamer;
};
ParRenamer::FileHash::FileHash(const char* szFilename, const char* szHash)
{
m_szFilename = strdup(szFilename);
m_szHash = strdup(szHash);
}
ParRenamer::FileHash::~FileHash()
{
free(m_szFilename);
free(m_szHash);
}
ParRenamer::ParRenamer()
{
debug("Creating ParRenamer");
m_eStatus = psUnknown;
m_szDestDir = NULL;
m_szInfoName = NULL;
m_szProgressLabel = (char*)malloc(1024);
m_iStageProgress = 0;
m_bCancelled = false;
}
ParRenamer::~ParRenamer()
{
debug("Destroying ParRenamer");
if (m_szDestDir)
{
free(m_szDestDir);
}
if (m_szInfoName)
{
free(m_szInfoName);
}
free(m_szProgressLabel);
Cleanup();
}
void ParRenamer::Cleanup()
{
for (FileHashList::iterator it = m_fileHashList.begin(); it != m_fileHashList.end(); it++)
{
delete *it;
}
m_fileHashList.clear();
}
void ParRenamer::SetDestDir(const char * szDestDir)
{
if (m_szDestDir)
{
free(m_szDestDir);
}
m_szDestDir = strdup(szDestDir);
}
void ParRenamer::SetInfoName(const char * szInfoName)
{
if (m_szInfoName)
{
free(m_szInfoName);
}
m_szInfoName = strdup(szInfoName);
}
void ParRenamer::SetStatus(EStatus eStatus)
{
m_eStatus = eStatus;
Notify(NULL);
}
void ParRenamer::Cancel()
{
m_bCancelled = true;
}
void ParRenamer::Run()
{
Cleanup();
m_bCancelled = false;
m_iRenamedCount = 0;
SetStatus(psUnknown);
snprintf(m_szProgressLabel, 1024, "Checking renamed files for %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iStageProgress = 0;
UpdateProgress();
LoadParFiles();
CheckFiles();
if (m_bCancelled)
{
warn("Renaming cancelled for %s", m_szInfoName);
SetStatus(psFailed);
}
else if (m_iRenamedCount > 0)
{
info("Successfully renamed %i file(s) for %s", m_iRenamedCount, m_szInfoName);
SetStatus(psFinished);
}
else
{
info("Could not rename any files for %s", m_szInfoName);
SetStatus(psFailed);
}
Cleanup();
}
void ParRenamer::LoadParFiles()
{
ParCoordinator::FileList parFileList;
ParCoordinator::FindMainPars(m_szDestDir, &parFileList);
for (ParCoordinator::FileList::iterator it = parFileList.begin(); it != parFileList.end(); it++)
{
char* szParFilename = *it;
char szFullParFilename[1024];
snprintf(szFullParFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, szParFilename);
szFullParFilename[1024-1] = '\0';
LoadParFile(szFullParFilename);
free(*it);
}
}
void ParRenamer::LoadParFile(const char* szParFilename)
{
ParRenamerRepairer* pRepairer = new ParRenamerRepairer();
if (!pRepairer->LoadPacketsFromFile(szParFilename))
{
warn("Could not load par2-file %s", szParFilename);
delete pRepairer;
return;
}
for (map<MD5Hash, Par2RepairerSourceFile*>::iterator it = pRepairer->sourcefilemap.begin(); it != pRepairer->sourcefilemap.end(); it++)
{
if (m_bCancelled)
{
break;
}
Par2RepairerSourceFile* sourceFile = (*it).second;
m_fileHashList.push_back(new FileHash(sourceFile->GetDescriptionPacket()->FileName().c_str(),
sourceFile->GetDescriptionPacket()->Hash16k().print().c_str()));
}
delete pRepairer;
}
void ParRenamer::CheckFiles()
{
int iFileCount = 0;
DirBrowser dir2(m_szDestDir);
while (const char* filename = dir2.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && !m_bCancelled)
{
iFileCount++;
}
}
int iCurFile = 0;
DirBrowser dir(m_szDestDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && !m_bCancelled)
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
snprintf(m_szProgressLabel, 1024, "Checking file %s", filename);
m_szProgressLabel[1024-1] = '\0';
m_iStageProgress = iCurFile * 1000 / iFileCount;
UpdateProgress();
iCurFile++;
CheckFile(szFullFilename);
}
}
}
void ParRenamer::CheckFile(const char* szFilename)
{
debug("Computing hash for %s", szFilename);
const int iBlockSize = 16*1024;
FILE* pFile = fopen(szFilename, "rb");
if (!pFile)
{
error("Could not open file %s", szFilename);
return;
}
// load first 16K of the file into buffer
void* pBuffer = malloc(iBlockSize);
int iReadBytes = fread(pBuffer, 1, iBlockSize, pFile);
int iError = ferror(pFile);
if (iReadBytes != iBlockSize && iError)
{
error("Could not read file %s", szFilename);
return;
}
fclose(pFile);
MD5Hash hash16k;
MD5Context context;
context.Update(pBuffer, iReadBytes);
context.Final(hash16k);
free(pBuffer);
debug("file: %s; hash16k: %s", Util::BaseFileName(szFilename), hash16k.print().c_str());
for (FileHashList::iterator it = m_fileHashList.begin(); it != m_fileHashList.end(); it++)
{
FileHash* pFileHash = *it;
if (!strcmp(pFileHash->GetHash(), hash16k.print().c_str()))
{
debug("Found correct filename: %s", pFileHash->GetFilename());
char szDstFilename[1024];
snprintf(szDstFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, pFileHash->GetFilename());
szDstFilename[1024-1] = '\0';
if (!Util::FileExists(szDstFilename))
{
info("Renaming %s to %s", Util::BaseFileName(szFilename), pFileHash->GetFilename());
if (Util::MoveFile(szFilename, szDstFilename))
{
m_iRenamedCount++;
}
else
{
error("Could not rename %s to %s", szFilename, szDstFilename);
}
}
break;
}
}
}
#endif


File diff suppressed because it is too large


@@ -1,121 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef PREPOSTPROCESSOR_H
#define PREPOSTPROCESSOR_H
#include <deque>
#include "Thread.h"
#include "Observer.h"
#include "DownloadInfo.h"
#include "Scanner.h"
#include "ParCoordinator.h"
class PrePostProcessor : public Thread
{
public:
enum EEditAction
{
eaPostMoveOffset = 51, // move post to m_iOffset relative to the current position in post-queue
eaPostMoveTop,
eaPostMoveBottom,
eaPostDelete,
eaHistoryDelete,
eaHistoryReturn,
eaHistoryProcess
};
private:
class QueueCoordinatorObserver: public Observer
{
public:
PrePostProcessor* m_pOwner;
virtual void Update(Subject* Caller, void* Aspect) { m_pOwner->QueueCoordinatorUpdate(Caller, Aspect); }
};
class PostParCoordinator: public ParCoordinator
{
private:
PrePostProcessor* m_pOwner;
protected:
virtual bool PauseDownload() { return m_pOwner->PauseDownload(); }
virtual bool UnpauseDownload() { return m_pOwner->UnpauseDownload(); }
friend class PrePostProcessor;
};
private:
PostParCoordinator m_ParCoordinator;
QueueCoordinatorObserver m_QueueCoordinatorObserver;
bool m_bHasMoreJobs;
bool m_bPostScript;
bool m_bSchedulerPauseChanged;
bool m_bSchedulerPause;
bool m_bPostPause;
Scanner m_Scanner;
const char* m_szPauseReason;
bool IsNZBFileCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo,
bool bIgnorePausedPars, bool bCheckPostQueue, bool bAllowOnlyOneDeleted);
void CheckPostQueue();
void JobCompleted(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void StartProcessJob(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void SaveQueue(DownloadQueue* pDownloadQueue);
void SanitisePostQueue(PostQueue* pPostQueue);
void CheckDiskSpace();
void ApplySchedulerState();
void CheckScheduledResume();
void UpdatePauseState(bool bNeedPause, const char* szReason);
bool PauseDownload();
bool UnpauseDownload();
void NZBAdded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBDownloaded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBDeleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, bool bSaveQueue);
bool CreatePostJobs(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, bool bParCheck, bool bUnpackOrScript, bool bAddTop);
void DeleteQueuedFile(const char* szQueuedFile);
NZBInfo* MergeGroups(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
bool PostQueueMove(IDList* pIDList, EEditAction eAction, int iOffset);
bool PostQueueDelete(IDList* pIDList);
bool HistoryDelete(IDList* pIDList);
bool HistoryReturn(IDList* pIDList, bool bReprocess);
void Cleanup();
FileInfo* GetQueueGroup(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void CheckHistory();
void DeletePostThread(PostInfo* pPostInfo);
public:
PrePostProcessor();
virtual ~PrePostProcessor();
virtual void Run();
virtual void Stop();
void QueueCoordinatorUpdate(Subject* Caller, void* Aspect);
bool HasMoreJobs() { return m_bHasMoreJobs; }
void ScanNZBDir(bool bSyncMode);
bool QueueEditList(IDList* pIDList, EEditAction eAction, int iOffset);
};
#endif


File diff suppressed because it is too large


@@ -1,130 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef QUEUECOORDINATOR_H
#define QUEUECOORDINATOR_H
#include <deque>
#include <list>
#include <time.h>
#include "Thread.h"
#include "NZBFile.h"
#include "ArticleDownloader.h"
#include "DownloadInfo.h"
#include "Observer.h"
#include "QueueEditor.h"
#include "NNTPConnection.h"
class QueueCoordinator : public Thread, public Observer, public Subject, public DownloadSpeedMeter, public DownloadQueueHolder
{
public:
typedef std::list<ArticleDownloader*> ActiveDownloads;
enum EAspectAction
{
eaNZBFileAdded,
eaFileCompleted,
eaFileDeleted
};
struct Aspect
{
EAspectAction eAction;
DownloadQueue* pDownloadQueue;
NZBInfo* pNZBInfo;
FileInfo* pFileInfo;
};
private:
DownloadQueue m_DownloadQueue;
ActiveDownloads m_ActiveDownloads;
QueueEditor m_QueueEditor;
Mutex m_mutexDownloadQueue;
bool m_bHasMoreJobs;
// statistics
static const int SPEEDMETER_SLOTS = 30;
static const int SPEEDMETER_SLOTSIZE = 1; //Split elapsed time into this number of secs.
int m_iSpeedBytes[SPEEDMETER_SLOTS];
int m_iSpeedTotalBytes;
int m_iSpeedTime[SPEEDMETER_SLOTS];
int m_iSpeedStartTime;
time_t m_tSpeedCorrection;
#ifdef HAVE_SPINLOCK
SpinLock m_spinlockSpeed;
#else
Mutex m_mutexSpeed;
#endif
int m_iSpeedBytesIndex;
long long m_iAllBytes;
time_t m_tStartServer;
time_t m_tLastCheck;
time_t m_tStartDownload;
time_t m_tPausedFrom;
bool m_bStandBy;
Mutex m_mutexStat;
bool GetNextArticle(FileInfo* &pFileInfo, ArticleInfo* &pArticleInfo);
void StartArticleDownload(FileInfo* pFileInfo, ArticleInfo* pArticleInfo, NNTPConnection* pConnection);
void BuildArticleFilename(ArticleDownloader* pArticleDownloader, FileInfo* pFileInfo, ArticleInfo* pArticleInfo);
bool IsDupe(FileInfo* pFileInfo);
void ArticleCompleted(ArticleDownloader* pArticleDownloader);
void DeleteFileInfo(FileInfo* pFileInfo, bool bCompleted);
void ResetHangingDownloads();
void ResetSpeedStat();
void EnterLeaveStandBy(bool bEnter);
void AdjustStartTime();
public:
QueueCoordinator();
virtual ~QueueCoordinator();
virtual void Run();
virtual void Stop();
void Update(Subject* Caller, void* Aspect);
// statistics
long long CalcRemainingSize();
virtual int CalcCurrentDownloadSpeed();
virtual void AddSpeedReading(int iBytes);
void CalcStat(int* iUpTimeSec, int* iDnTimeSec, long long* iAllBytes, bool* bStandBy);
// Editing the queue
DownloadQueue* LockQueue();
void UnlockQueue();
void AddNZBFileToQueue(NZBFile* pNZBFile, bool bAddFirst);
bool HasMoreJobs() { return m_bHasMoreJobs; }
bool GetStandBy() { return m_bStandBy; }
bool DeleteQueueEntry(FileInfo* pFileInfo);
bool SetQueueEntryNZBCategory(NZBInfo* pNZBInfo, const char* szCategory);
bool SetQueueEntryNZBName(NZBInfo* pNZBInfo, const char* szName);
bool MergeQueueEntries(NZBInfo* pDestNZBInfo, NZBInfo* pSrcNZBInfo);
void DiscardDiskFile(FileInfo* pFileInfo);
QueueEditor* GetQueueEditor() { return &m_QueueEditor; }
void LogDebugInfo();
};
#endif
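
QueueCoordinator keeps its download-speed statistics in SPEEDMETER_SLOTS fixed time slots (see the members above), but the matching implementation file is not reproduced in this view. The following self-contained sketch therefore only illustrates the general ring-buffer idea behind such a speed meter; it is not NZBGet's actual code, and all names in it are invented:

#include <ctime>

// Generic sliding-window speed meter: one slot per second, the window
// covers the last SLOTS seconds. Illustrative sketch only.
class SpeedMeter
{
public:
	static const int SLOTS = 30;

	SpeedMeter(): m_iTotalBytes(0), m_iCurSlot(0), m_tCurSec(0)
	{
		for (int i = 0; i < SLOTS; i++)
		{
			m_iBytes[i] = 0;
		}
	}

	void AddReading(int iBytes)
	{
		time_t tNow = time(NULL);
		if (m_tCurSec == 0)
		{
			m_tCurSec = tNow;
		}
		// advance the ring by the number of elapsed seconds, clearing old slots
		long long iElapsed = tNow - m_tCurSec;
		if (iElapsed > SLOTS)
		{
			iElapsed = SLOTS;
		}
		for (long long i = 0; i < iElapsed; i++)
		{
			m_iCurSlot = (m_iCurSlot + 1) % SLOTS;
			m_iTotalBytes -= m_iBytes[m_iCurSlot];
			m_iBytes[m_iCurSlot] = 0;
		}
		m_tCurSec = tNow;
		m_iBytes[m_iCurSlot] += iBytes;
		m_iTotalBytes += iBytes;
	}

	// average speed in bytes per second over the whole window
	int CalcCurrentSpeed()
	{
		return (int)(m_iTotalBytes / SLOTS);
	}

private:
	long long m_iBytes[SLOTS];
	long long m_iTotalBytes;
	int m_iCurSlot;
	time_t m_tCurSec;
};

The real QueueCoordinator additionally protects these counters with a spin lock or mutex (m_spinlockSpeed/m_mutexSpeed above), since readings arrive from multiple downloader threads.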


@@ -1,927 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2011 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <cctype>
#include <cstdio>
#include <sys/stat.h>
#include <set>
#ifndef WIN32
#include <unistd.h>
#include <sys/time.h>
#endif
#include "nzbget.h"
#include "DownloadInfo.h"
#include "QueueEditor.h"
#include "QueueCoordinator.h"
#include "DiskState.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
const int MAX_ID = 100000000;
QueueEditor::EditItem::EditItem(FileInfo* pFileInfo, int iOffset)
{
m_pFileInfo = pFileInfo;
m_iOffset = iOffset;
}
QueueEditor::QueueEditor()
{
debug("Creating QueueEditor");
}
QueueEditor::~QueueEditor()
{
debug("Destroying QueueEditor");
}
FileInfo* QueueEditor::FindFileInfo(DownloadQueue* pDownloadQueue, int iID)
{
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetID() == iID)
{
return pFileInfo;
}
}
return NULL;
}
int QueueEditor::FindFileInfoEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo)
{
int iEntry = 0;
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo2 = *it;
if (pFileInfo2 == pFileInfo)
{
return iEntry;
}
iEntry ++;
}
return -1;
}
/*
* Sets the pause flag of the specified entry in the queue
*/
void QueueEditor::PauseUnpauseEntry(FileInfo* pFileInfo, bool bPause)
{
pFileInfo->SetPaused(bPause);
}
/*
* Removes the entry from the download queue
*/
void QueueEditor::DeleteEntry(FileInfo* pFileInfo)
{
info("Deleting file %s from download queue", pFileInfo->GetFilename());
g_pQueueCoordinator->DeleteQueueEntry(pFileInfo);
}
/*
* Moves the entry within the queue
*/
void QueueEditor::MoveEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, int iOffset)
{
int iEntry = FindFileInfoEntry(pDownloadQueue, pFileInfo);
if (iEntry > -1)
{
int iNewEntry = iEntry + iOffset;
if (iNewEntry < 0)
{
iNewEntry = 0;
}
if ((unsigned int)iNewEntry > pDownloadQueue->GetFileQueue()->size() - 1)
{
iNewEntry = (int)pDownloadQueue->GetFileQueue()->size() - 1;
}
if (iNewEntry >= 0 && (unsigned int)iNewEntry <= pDownloadQueue->GetFileQueue()->size() - 1)
{
FileInfo* fi = pDownloadQueue->GetFileQueue()->at(iEntry);
pDownloadQueue->GetFileQueue()->erase(pDownloadQueue->GetFileQueue()->begin() + iEntry);
pDownloadQueue->GetFileQueue()->insert(pDownloadQueue->GetFileQueue()->begin() + iNewEntry, fi);
}
}
}
/*
* Sets the priority for the entry
*/
void QueueEditor::SetPriorityEntry(FileInfo* pFileInfo, const char* szPriority)
{
debug("Setting priority %s for file %s", szPriority, pFileInfo->GetFilename());
int iPriority = atoi(szPriority);
pFileInfo->SetPriority(iPriority);
}
bool QueueEditor::EditEntry(int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
{
IDList cIDList;
cIDList.clear();
cIDList.push_back(ID);
return EditList(&cIDList, NULL, mmID, bSmartOrder, eAction, iOffset, szText);
}
bool QueueEditor::LockedEditEntry(DownloadQueue* pDownloadQueue, int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
{
IDList cIDList;
cIDList.clear();
cIDList.push_back(ID);
return InternEditList(pDownloadQueue, &cIDList, bSmartOrder, eAction, iOffset, szText);
}
bool QueueEditor::EditList(IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, bool bSmartOrder,
EEditAction eAction, int iOffset, const char* szText)
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
bool bOK = true;
if (pNameList)
{
pIDList = new IDList();
bOK = BuildIDListFromNameList(pDownloadQueue, pIDList, pNameList, eMatchMode, eAction);
}
bOK = bOK && (InternEditList(pDownloadQueue, pIDList, bSmartOrder, eAction, iOffset, szText) || eMatchMode == mmRegEx);
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
if (pNameList)
{
delete pIDList;
}
return bOK;
}
bool QueueEditor::LockedEditList(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
{
return InternEditList(pDownloadQueue, pIDList, bSmartOrder, eAction, iOffset, szText);
}
bool QueueEditor::InternEditList(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
{
if (eAction == eaGroupMoveOffset)
{
AlignAffectedGroups(pDownloadQueue, pIDList, bSmartOrder, iOffset);
}
ItemList cItemList;
PrepareList(pDownloadQueue, &cItemList, pIDList, bSmartOrder, eAction, iOffset);
if (eAction == eaFilePauseAllPars || eAction == eaFilePauseExtraPars)
{
PauseParsInGroups(&cItemList, eAction == eaFilePauseExtraPars);
}
else if (eAction == eaGroupMerge)
{
MergeGroups(pDownloadQueue, &cItemList);
}
else if (eAction == eaFileReorder)
{
ReorderFiles(pDownloadQueue, &cItemList);
}
else
{
for (ItemList::iterator it = cItemList.begin(); it != cItemList.end(); it++)
{
EditItem* pItem = *it;
switch (eAction)
{
case eaFilePause:
PauseUnpauseEntry(pItem->m_pFileInfo, true);
break;
case eaFileResume:
PauseUnpauseEntry(pItem->m_pFileInfo, false);
break;
case eaFileMoveOffset:
case eaFileMoveTop:
case eaFileMoveBottom:
MoveEntry(pDownloadQueue, pItem->m_pFileInfo, pItem->m_iOffset);
break;
case eaFileDelete:
DeleteEntry(pItem->m_pFileInfo);
break;
case eaFileSetPriority:
SetPriorityEntry(pItem->m_pFileInfo, szText);
break;
case eaGroupSetCategory:
SetNZBCategory(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaGroupSetName:
SetNZBName(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaGroupSetParameter:
SetNZBParameter(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaGroupPause:
case eaGroupResume:
case eaGroupDelete:
case eaGroupMoveTop:
case eaGroupMoveBottom:
case eaGroupMoveOffset:
case eaGroupPauseAllPars:
case eaGroupPauseExtraPars:
case eaGroupSetPriority:
EditGroup(pDownloadQueue, pItem->m_pFileInfo, eAction, iOffset, szText);
break;
case eaFilePauseAllPars:
case eaFilePauseExtraPars:
case eaGroupMerge:
case eaFileReorder:
// remove compiler warning "enumeration not handled in switch"
break;
}
delete pItem;
}
}
return cItemList.size() > 0;
}
void QueueEditor::PrepareList(DownloadQueue* pDownloadQueue, ItemList* pItemList, IDList* pIDList, bool bSmartOrder,
EEditAction EEditAction, int iOffset)
{
if (EEditAction == eaFileMoveTop)
{
iOffset = -MAX_ID;
}
else if (EEditAction == eaFileMoveBottom)
{
iOffset = MAX_ID;
}
pItemList->reserve(pIDList->size());
if (bSmartOrder && iOffset != 0 &&
(EEditAction == eaFileMoveOffset || EEditAction == eaFileMoveTop || EEditAction == eaFileMoveBottom))
{
//add IDs to list in order they currently have in download queue
int iLastDestPos = -1;
int iStart, iEnd, iStep;
if (iOffset < 0)
{
iStart = 0;
iEnd = pDownloadQueue->GetFileQueue()->size();
iStep = 1;
}
else
{
iStart = pDownloadQueue->GetFileQueue()->size() - 1;
iEnd = -1;
iStep = -1;
}
for (int iIndex = iStart; iIndex != iEnd; iIndex += iStep)
{
FileInfo* pFileInfo = pDownloadQueue->GetFileQueue()->at(iIndex);
int iID = pFileInfo->GetID();
for (IDList::iterator it = pIDList->begin(); it != pIDList->end(); it++)
{
if (iID == *it)
{
int iWorkOffset = iOffset;
int iDestPos = iIndex + iWorkOffset;
if (iLastDestPos == -1)
{
if (iDestPos < 0)
{
iWorkOffset = -iIndex;
}
else if (iDestPos > int(pDownloadQueue->GetFileQueue()->size()) - 1)
{
iWorkOffset = int(pDownloadQueue->GetFileQueue()->size()) - 1 - iIndex;
}
}
else
{
if (iWorkOffset < 0 && iDestPos <= iLastDestPos)
{
iWorkOffset = iLastDestPos - iIndex + 1;
}
else if (iWorkOffset > 0 && iDestPos >= iLastDestPos)
{
iWorkOffset = iLastDestPos - iIndex - 1;
}
}
iLastDestPos = iIndex + iWorkOffset;
pItemList->push_back(new EditItem(pFileInfo, iWorkOffset));
break;
}
}
}
}
else
{
// check ID range
int iMaxID = 0;
int iMinID = MAX_ID;
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
int ID = pFileInfo->GetID();
if (ID > iMaxID)
{
iMaxID = ID;
}
if (ID < iMinID)
{
iMinID = ID;
}
}
//add IDs to list in order they were transmitted in command
for (IDList::iterator it = pIDList->begin(); it != pIDList->end(); it++)
{
int iID = *it;
if (iMinID <= iID && iID <= iMaxID)
{
FileInfo* pFileInfo = FindFileInfo(pDownloadQueue, iID);
if (pFileInfo)
{
pItemList->push_back(new EditItem(pFileInfo, iOffset));
}
}
}
}
}
bool QueueEditor::BuildIDListFromNameList(DownloadQueue* pDownloadQueue, IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, EEditAction eAction)
{
#ifndef HAVE_REGEX_H
if (eMatchMode == mmRegEx)
{
return false;
}
#endif
std::set<int> uniqueIDs;
for (NameList::iterator it = pNameList->begin(); it != pNameList->end(); it++)
{
const char* szName = *it;
RegEx *pRegEx = NULL;
if (eMatchMode == mmRegEx)
{
pRegEx = new RegEx(szName);
if (!pRegEx->IsValid())
{
delete pRegEx;
return false;
}
}
bool bFound = false;
for (FileQueue::iterator it2 = pDownloadQueue->GetFileQueue()->begin(); it2 != pDownloadQueue->GetFileQueue()->end(); it2++)
{
FileInfo* pFileInfo = *it2;
if (eAction < eaGroupMoveOffset)
{
// file action
char szFilename[MAX_PATH];
snprintf(szFilename, sizeof(szFilename) - 1, "%s/%s", pFileInfo->GetNZBInfo()->GetName(), Util::BaseFileName(pFileInfo->GetFilename()));
if (((!pRegEx && !strcmp(szFilename, szName)) || (pRegEx && pRegEx->Match(szFilename))) &&
(uniqueIDs.find(pFileInfo->GetID()) == uniqueIDs.end()))
{
uniqueIDs.insert(pFileInfo->GetID());
pIDList->push_back(pFileInfo->GetID());
bFound = true;
}
}
else
{
// group action
const char *szFilename = pFileInfo->GetNZBInfo()->GetName();
if (((!pRegEx && !strcmp(szFilename, szName)) || (pRegEx && pRegEx->Match(szFilename))) &&
(uniqueIDs.find(pFileInfo->GetNZBInfo()->GetID()) == uniqueIDs.end()))
{
uniqueIDs.insert(pFileInfo->GetNZBInfo()->GetID());
pIDList->push_back(pFileInfo->GetID());
bFound = true;
}
}
}
if (pRegEx)
{
delete pRegEx;
}
if (!bFound && (eMatchMode == mmName))
{
return false;
}
}
return true;
}
bool QueueEditor::EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, EEditAction eAction, int iOffset, const char* szText)
{
IDList cIDList;
cIDList.clear();
// collecting files belonging to group
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo2 = *it;
if (pFileInfo2->GetNZBInfo() == pFileInfo->GetNZBInfo())
{
cIDList.push_back(pFileInfo2->GetID());
}
}
if (eAction == eaGroupMoveOffset)
{
// calculating offset in terms of files
FileList cGroupList;
BuildGroupList(pDownloadQueue, &cGroupList);
unsigned int iNum = 0;
for (FileList::iterator it = cGroupList.begin(); it != cGroupList.end(); it++, iNum++)
{
FileInfo* pGroupInfo = *it;
if (pGroupInfo->GetNZBInfo() == pFileInfo->GetNZBInfo())
{
break;
}
}
int iFileOffset = 0;
if (iOffset > 0)
{
if (iNum + iOffset >= cGroupList.size() - 1)
{
eAction = eaGroupMoveBottom;
}
else
{
for (unsigned int i = iNum + 2; i < cGroupList.size() && iOffset > 0; i++, iOffset--)
{
iFileOffset += FindFileInfoEntry(pDownloadQueue, cGroupList[i]) - FindFileInfoEntry(pDownloadQueue, cGroupList[i-1]);
}
}
}
else
{
if (iNum + iOffset <= 0)
{
eAction = eaGroupMoveTop;
}
else
{
for (unsigned int i = iNum; i > 0 && iOffset < 0; i--, iOffset++)
{
iFileOffset -= FindFileInfoEntry(pDownloadQueue, cGroupList[i]) - FindFileInfoEntry(pDownloadQueue, cGroupList[i-1]);
}
}
}
iOffset = iFileOffset;
}
else if (eAction == eaGroupDelete)
{
pFileInfo->GetNZBInfo()->SetDeleted(true);
pFileInfo->GetNZBInfo()->SetCleanupDisk(CanCleanupDisk(pDownloadQueue, pFileInfo->GetNZBInfo()));
}
EEditAction GroupToFileMap[] = { (EEditAction)0, eaFileMoveOffset, eaFileMoveTop, eaFileMoveBottom,
eaFilePause, eaFileResume, eaFileDelete, eaFilePauseAllPars, eaFilePauseExtraPars, eaFileSetPriority, eaFileReorder,
eaFileMoveOffset, eaFileMoveTop, eaFileMoveBottom, eaFilePause, eaFileResume, eaFileDelete,
eaFilePauseAllPars, eaFilePauseExtraPars, eaFileSetPriority,
(EEditAction)0, (EEditAction)0, (EEditAction)0 };
return InternEditList(pDownloadQueue, &cIDList, true, GroupToFileMap[eAction], iOffset, szText);
}
void QueueEditor::BuildGroupList(DownloadQueue* pDownloadQueue, FileList* pGroupList)
{
pGroupList->clear();
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
FileInfo* pGroupInfo = NULL;
for (FileList::iterator itg = pGroupList->begin(); itg != pGroupList->end(); itg++)
{
FileInfo* pGroupInfo1 = *itg;
if (pGroupInfo1->GetNZBInfo() == pFileInfo->GetNZBInfo())
{
pGroupInfo = pGroupInfo1;
break;
}
}
if (!pGroupInfo)
{
pGroupList->push_back(pFileInfo);
}
}
}
bool QueueEditor::ItemExists(FileList* pFileList, FileInfo* pFileInfo)
{
for (FileList::iterator it = pFileList->begin(); it != pFileList->end(); it++)
{
if (*it == pFileInfo)
{
return true;
}
}
return false;
}
void QueueEditor::AlignAffectedGroups(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, int iOffset)
{
// Build list of all groups; List contains first file of each group
FileList cGroupList;
BuildGroupList(pDownloadQueue, &cGroupList);
// Find affected groups. It includes groups being moved and groups directly
// above or under of these groups (those order is also changed)
FileList cAffectedGroupList;
cAffectedGroupList.clear();
ItemList cItemList;
PrepareList(pDownloadQueue, &cItemList, pIDList, bSmartOrder, eaFileMoveOffset, iOffset);
for (ItemList::iterator it = cItemList.begin(); it != cItemList.end(); it++)
{
EditItem* pItem = *it;
unsigned int iNum = 0;
for (FileList::iterator it = cGroupList.begin(); it != cGroupList.end(); it++, iNum++)
{
FileInfo* pFileInfo = *it;
if (pItem->m_pFileInfo->GetNZBInfo() == pFileInfo->GetNZBInfo())
{
if (!ItemExists(&cAffectedGroupList, pFileInfo))
{
cAffectedGroupList.push_back(pFileInfo);
}
if (iOffset < 0)
{
for (int i = iNum - 1; i >= -iOffset-1; i--)
{
if (!ItemExists(&cAffectedGroupList, cGroupList[i]))
{
cAffectedGroupList.push_back(cGroupList[i]);
}
}
}
if (iOffset > 0)
{
for (unsigned int i = iNum + 1; i <= cGroupList.size() - iOffset; i++)
{
if (!ItemExists(&cAffectedGroupList, cGroupList[i]))
{
cAffectedGroupList.push_back(cGroupList[i]);
}
}
if (iNum + 1 < cGroupList.size())
{
cAffectedGroupList.push_back(cGroupList[iNum + 1]);
}
}
break;
}
}
delete pItem;
}
cGroupList.clear();
// Aligning groups
for (FileList::iterator it = cAffectedGroupList.begin(); it != cAffectedGroupList.end(); it++)
{
FileInfo* pFileInfo = *it;
AlignGroup(pDownloadQueue, pFileInfo->GetNZBInfo());
}
}
void QueueEditor::AlignGroup(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
FileInfo* pLastFileInfo = NULL;
unsigned int iLastNum = 0;
unsigned int iNum = 0;
while (iNum < pDownloadQueue->GetFileQueue()->size())
{
FileInfo* pFileInfo = pDownloadQueue->GetFileQueue()->at(iNum);
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
if (pLastFileInfo && iNum - iLastNum > 1)
{
pDownloadQueue->GetFileQueue()->erase(pDownloadQueue->GetFileQueue()->begin() + iNum);
pDownloadQueue->GetFileQueue()->insert(pDownloadQueue->GetFileQueue()->begin() + iLastNum + 1, pFileInfo);
iLastNum++;
}
else
{
iLastNum = iNum;
}
pLastFileInfo = pFileInfo;
}
iNum++;
}
}
void QueueEditor::PauseParsInGroups(ItemList* pItemList, bool bExtraParsOnly)
{
while (true)
{
FileList GroupFileList;
GroupFileList.clear();
FileInfo* pFirstFileInfo = NULL;
for (ItemList::iterator it = pItemList->begin(); it != pItemList->end(); )
{
EditItem* pItem = *it;
if (!pFirstFileInfo ||
(pFirstFileInfo->GetNZBInfo() == pItem->m_pFileInfo->GetNZBInfo()))
{
GroupFileList.push_back(pItem->m_pFileInfo);
if (!pFirstFileInfo)
{
pFirstFileInfo = pItem->m_pFileInfo;
}
delete pItem;
pItemList->erase(it);
it = pItemList->begin();
continue;
}
it++;
}
if (!GroupFileList.empty())
{
PausePars(&GroupFileList, bExtraParsOnly);
}
else
{
break;
}
}
}
/**
* If the parameter "bExtraParsOnly" is set to "false", then we pause all par2-files.
* If the parameter "bExtraParsOnly" is set to "true", we use the following strategy:
* At first we find all par-files, which do not have "vol" in their names, then we pause
* all vols and do not affect all just-pars.
* In a case, if there are no just-pars, but only vols, we find the smallest vol-file
* and do not affect it, but pause all other pars.
*/
void QueueEditor::PausePars(FileList* pFileList, bool bExtraParsOnly)
{
debug("QueueEditor: Pausing pars");
FileList Pars, Vols;
Pars.clear();
Vols.clear();
for (FileList::iterator it = pFileList->begin(); it != pFileList->end(); it++)
{
FileInfo* pFileInfo = *it;
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
if (strstr(szLoFileName, ".par2"))
{
if (!bExtraParsOnly)
{
pFileInfo->SetPaused(true);
}
else
{
if (strstr(szLoFileName, ".vol"))
{
Vols.push_back(pFileInfo);
}
else
{
Pars.push_back(pFileInfo);
}
}
}
}
if (bExtraParsOnly)
{
if (!Pars.empty())
{
for (FileList::iterator it = Vols.begin(); it != Vols.end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetPaused(true);
}
}
else
{
// pausing all Vol-files except the smallest one
FileInfo* pSmallest = NULL;
for (FileList::iterator it = Vols.begin(); it != Vols.end(); it++)
{
FileInfo* pFileInfo = *it;
if (!pSmallest)
{
pSmallest = pFileInfo;
}
else if (pSmallest->GetSize() > pFileInfo->GetSize())
{
pSmallest->SetPaused(true);
pSmallest = pFileInfo;
}
else
{
pFileInfo->SetPaused(true);
}
}
}
}
}
void QueueEditor::SetNZBCategory(NZBInfo* pNZBInfo, const char* szCategory)
{
debug("QueueEditor: setting category '%s' for '%s'", szCategory, Util::BaseFileName(pNZBInfo->GetFilename()));
g_pQueueCoordinator->SetQueueEntryNZBCategory(pNZBInfo, szCategory);
}
void QueueEditor::SetNZBName(NZBInfo* pNZBInfo, const char* szName)
{
debug("QueueEditor: renaming '%s' to '%s'", Util::BaseFileName(pNZBInfo->GetFilename()), szName);
g_pQueueCoordinator->SetQueueEntryNZBName(pNZBInfo, szName);
}
/**
* Checks if deletion of already downloaded files is possible (when the nzb is deleted from queue).
* The deletion is almost always possible, except when all remaining files in queue
* (belonging to this nzb-file) are par-files.
*/
bool QueueEditor::CanCleanupDisk(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
if (!strstr(szLoFileName, ".par2"))
{
// non-par file found
return true;
}
}
return false;
}
void QueueEditor::MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList)
{
if (pItemList->size() == 0)
{
return;
}
EditItem* pDestItem = pItemList->front();
for (ItemList::iterator it = pItemList->begin() + 1; it != pItemList->end(); it++)
{
EditItem* pItem = *it;
if (pItem->m_pFileInfo->GetNZBInfo() != pDestItem->m_pFileInfo->GetNZBInfo())
{
debug("merge %s to %s", pItem->m_pFileInfo->GetNZBInfo()->GetFilename(), pDestItem->m_pFileInfo->GetNZBInfo()->GetFilename());
g_pQueueCoordinator->MergeQueueEntries(pDestItem->m_pFileInfo->GetNZBInfo(), pItem->m_pFileInfo->GetNZBInfo());
}
delete pItem;
}
// align group
AlignGroup(pDownloadQueue, pDestItem->m_pFileInfo->GetNZBInfo());
delete pDestItem;
}
void QueueEditor::ReorderFiles(DownloadQueue* pDownloadQueue, ItemList* pItemList)
{
if (pItemList->size() == 0)
{
return;
}
EditItem* pFirstItem = pItemList->front();
NZBInfo* pNZBInfo = pFirstItem->m_pFileInfo->GetNZBInfo();
unsigned int iInsertPos = 0;
// find first file of the group
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
break;
}
iInsertPos++;
}
// now can reorder
for (ItemList::iterator it = pItemList->begin(); it != pItemList->end(); it++)
{
EditItem* pItem = *it;
FileInfo* pFileInfo = pItem->m_pFileInfo;
// move file item
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo1 = *it;
if (pFileInfo1 == pFileInfo)
{
pDownloadQueue->GetFileQueue()->erase(it);
pDownloadQueue->GetFileQueue()->insert(pDownloadQueue->GetFileQueue()->begin() + iInsertPos, pFileInfo);
iInsertPos++;
break;
}
}
delete pItem;
}
}
void QueueEditor::SetNZBParameter(NZBInfo* pNZBInfo, const char* szParamString)
{
debug("QueueEditor: setting nzb parameter '%s' for '%s'", szParamString, Util::BaseFileName(pNZBInfo->GetFilename()));
char* szStr = strdup(szParamString);
char* szValue = strchr(szStr, '=');
if (szValue)
{
*szValue = '\0';
szValue++;
pNZBInfo->SetParameter(szStr, szValue);
}
else
{
error("Could not set nzb parameter for %s: invalid argument: %s", pNZBInfo->GetName(), szParamString);
}
free(szStr);
}


@@ -1,119 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2011 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef QUEUEEDITOR_H
#define QUEUEEDITOR_H
#include <vector>
#include "DownloadInfo.h"
class QueueEditor
{
public:
enum EEditAction
{
eaFileMoveOffset = 1, // move to m_iOffset relative to the current position in queue
eaFileMoveTop,
eaFileMoveBottom,
eaFilePause,
eaFileResume,
eaFileDelete,
eaFilePauseAllPars,
eaFilePauseExtraPars,
eaFileSetPriority,
eaFileReorder,
eaGroupMoveOffset, // move to m_iOffset relative to the current position in queue
eaGroupMoveTop,
eaGroupMoveBottom,
eaGroupPause,
eaGroupResume,
eaGroupDelete,
eaGroupPauseAllPars,
eaGroupPauseExtraPars,
eaGroupSetPriority,
eaGroupSetCategory,
eaGroupMerge,
eaGroupSetParameter,
eaGroupSetName
};
enum EMatchMode
{
mmID = 1,
mmName,
mmRegEx
};
private:
class EditItem
{
public:
int m_iOffset;
FileInfo* m_pFileInfo;
EditItem(FileInfo* pFileInfo, int iOffset);
};
typedef std::vector<EditItem*> ItemList;
typedef std::vector<FileInfo*> FileList;
private:
FileInfo* FindFileInfo(DownloadQueue* pDownloadQueue, int iID);
int FindFileInfoEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo);
bool InternEditList(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
void PrepareList(DownloadQueue* pDownloadQueue, ItemList* pItemList, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset);
bool BuildIDListFromNameList(DownloadQueue* pDownloadQueue, IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, EEditAction eAction);
bool EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, EEditAction eAction, int iOffset, const char* szText);
void BuildGroupList(DownloadQueue* pDownloadQueue, FileList* pGroupList);
void AlignAffectedGroups(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, int iOffset);
bool ItemExists(FileList* pFileList, FileInfo* pFileInfo);
void AlignGroup(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void PauseParsInGroups(ItemList* pItemList, bool bExtraParsOnly);
void PausePars(FileList* pFileList, bool bExtraParsOnly);
void SetNZBCategory(NZBInfo* pNZBInfo, const char* szCategory);
void SetNZBName(NZBInfo* pNZBInfo, const char* szName);
bool CanCleanupDisk(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList);
void ReorderFiles(DownloadQueue* pDownloadQueue, ItemList* pItemList);
void SetNZBParameter(NZBInfo* pNZBInfo, const char* szParamString);
void PauseUnpauseEntry(FileInfo* pFileInfo, bool bPause);
void DeleteEntry(FileInfo* pFileInfo);
void MoveEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, int iOffset);
void SetPriorityEntry(FileInfo* pFileInfo, const char* szPriority);
public:
QueueEditor();
~QueueEditor();
bool EditEntry(int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool EditList(IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool LockedEditEntry(DownloadQueue* pDownloadQueue, int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool LockedEditList(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
};
#endif
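
To show how this interface is meant to be driven, here is a short hypothetical usage sketch: the helper function is invented for illustration, while EditEntry() and the action constants come from the header above and GetQueueEditor() from QueueCoordinator.h shown earlier.

#include "QueueCoordinator.h"
#include "QueueEditor.h"

extern QueueCoordinator* g_pQueueCoordinator; // global instance provided by the application core

// Hypothetical helper: pause a single file, then move its whole group to the top
void PauseFileAndLiftGroup(int iFileID)
{
	QueueEditor* pEditor = g_pQueueCoordinator->GetQueueEditor();

	// file-level action addressed by the queue ID of the file
	pEditor->EditEntry(iFileID, true, QueueEditor::eaFilePause, 0, NULL);

	// group-level action: the ID of any file belonging to the group selects the group
	pEditor->EditEntry(iFileID, true, QueueEditor::eaGroupMoveTop, 0, NULL);
}

As the implementation above shows, EditEntry() wraps EditList() with a single-element ID list and locks the download queue itself; LockedEditEntry()/LockedEditList() exist for callers which already hold the queue lock.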

README

@@ -4,7 +4,7 @@
This is a short documentation. For more information please
visit NZBGet home page at
http://nzbget.sourceforge.net
http://nzbget.net
Contents
--------
@@ -85,15 +85,11 @@ And the following libraries are optional:
- libcurses (usually part of commercial systems)
or (better)
- libncurses (http://invisible-island.net/ncurses)
- for par-check and -repair (enabled by default):
- libpar2 (http://parchive.sourceforge.net)
- libsigc++ (http://libsigc.sourceforge.net)
- for encrypted connections (TLS/SSL):
- GnuTLS (http://www.gnu.org/software/gnutls)
or
- OpenSSL (http://www.openssl.org)
or
- GnuTLS (http://www.gnu.org/software/gnutls)
- for gzip support in web-server and web-client (enabled by default):
- zlib (http://www.zlib.net)
@@ -151,13 +147,13 @@ You may run configure with additional arguments:
if you can not use curses/ncurses.
--disable-parcheck - to make without parcheck-support. Use this option
if you can not use libpar2 or libsigc++.
if you have trouble compiling the par2-module.
--with-tlslib=(GnuTLS, OpenSSL) - to select which TLS/SSL library
--with-tlslib=(OpenSSL, GnuTLS) - to select which TLS/SSL library
should be used for encrypted server connections.
--disable-tls - to make without TLS/SSL support. Use this option if
you can not neither GnuTLS nor OpenSSL.
you can use neither OpenSSL nor GnuTLS.
--disable-gzip - to make without gzip support. Use this option
if you can not use zlib.
@@ -168,37 +164,13 @@ You may run configure with additional arguments:
Optional package: par-check
---------------------------
NZBGet can check and repair downloaded files for you. For this purpose
it uses library par2 (libpar2), which needs sigc++ on its part.
it uses library par2.
The libpar2 and libsigc++ (version 2 or later) must be installed on your
system. On most linux distributions these libraries are available as packages.
If you do not have these packages you can compile them yourself.
Following configure-parameters may be usefull:
For your convenience the source code of libpar2 is integrated into
NZBGets source tree and is compiled automatically when you make NZBGet.
--with-libpar2-includes
--with-libpar2-libraries
--with-libsigc-includes
--with-libsigc-libraries
The library libsigc++ must be installed first, since libpar2 requires it.
If you use nzbget on a very slow computer like NAS-device, it may be good to
limit the time allowed for par-repair (option "ParTimeLimit" in nzbget
configuration file). This feature requires a patched version of libpar2.
To compile that version download the original source code of libpar2
(version 0.2) and apply patches "libpar2-0.2-bugfixes.patch" and
"libpar2-0.2-cancel.patch", provided with nzbget:
cd libpar2-0.2
cp ~/nzbget/libpar2-0.2-*.patch .
patch < libpar2-0.2-bugfixes.patch
patch < libpar2-0.2-cancel.patch
./configure
make
make install
If you are not able to use libpar2 or libsigc++ or do not want them you can
make nzbget without support for par-check using option "--disable-parcheck":
In case errors occur during this process, the inclusion of the par2-module
can be disabled using the configure option "--disable-parcheck":
./configure --disable-parcheck
@@ -206,7 +178,7 @@ Optional package: curses
-------------------------
For curses-outputmode you need ncurses or curses on your system.
If you do not have one of them you can download and compile ncurses yourself.
Following configure-parameters may be usefull:
Following configure-parameters may be useful:
--with-libcurses-includes
--with-libcurses-libraries
@@ -219,14 +191,14 @@ make the program without support for curses using option "--disable-curses":
Optional package: TLS
-------------------------
To enable encrypted server connections (TLS/SSL) you need to build the program
with TLS/SSL support. NZBGet can use two libraries: GnuTLS or OpenSSL.
with TLS/SSL support. NZBGet can use two libraries: OpenSSL or GnuTLS.
Configure-script checks which library is installed and use it. If both are
avialable it gives the precedence to GnuTLS. You may override that with
the option --with-tlslib=(GnuTLS, OpenSSL). For example to build whith OpenSSL:
available it gives the precedence to OpenSSL. You may override that with
the option --with-tlslib=(OpenSSL, GnuTLS). For example to build with GnuTLS:
./configure --with-tlslib=OpenSSL
./configure --with-tlslib=GnuTLS
Following configure-parameters may be usefull:
Following configure-parameters may be useful:
--with-libtls-includes
--with-libtls-libraries
@@ -247,28 +219,14 @@ NZBGet is developed using MS Visual C++ 2005. The project file and solution
are provided. If you use MS Visual C++ 2005 Express you need to download
and install Platform SDK.
To compile the program with par-check-support you also need the following
libraries:
- libsigc++ (http://libsigc.sourceforge.net)
- libpar2 (http://parchive.sourceforge.net)
Download these libaries, then use patch-files provided with NZBGet to create
preconfigured project files and solutions for each library.
Look at http://gnuwin32.sourceforge.net/packages/patch.htm for info on how
to use patch-files, if you do not familiar with this technique.
To compile the program with TLS/SSL support you also need the library:
To compile the program with TLS/SSL support you need either OpenSSL or GnuTLS:
- OpenSSL (http://www.openssl.org)
or
- GnuTLS (http://www.gnu.org/software/gnutls)
Download a precompiled version of GnuTLS from http://josefsson.org/gnutls4win
and create lib-file as described there in section "Using the GnuTLS DLL from
your Visual Studio program".
After libsigc++ and libpar2 are compiled in static libraries (.lib), the
library for GnuTLS is created and include- and libraries-paths are configured
in MS Visual C++ 2005 you should be able to compile NZBGet.
Also required are:
- Regex (http://gnuwin32.sourceforge.net/packages/regex.htm)
- Zlib (http://gnuwin32.sourceforge.net/packages/zlib.htm)
=====================================
6. Configuration
@@ -387,9 +345,18 @@ It prints something like:
[1] nzbname\filename1.rar (50.00 MB)
[2] nzbname\filename1.r01 (50.00 MB)
[3] another-nzb\filename3.r01 (100.00 MB)
[4] another-nzb\filename3.r02 (100.00 MB)
The numbers in square braces are ID's of files in queue. They can be used
in edit-command. For example to move file with ID 2 to the top of queue:
This is the list of individual files listed within the nzb-file. To print
the list of nzb-files (without content) add the G-modifier to the list command:
[1] nzbname (4.56 GB)
[2] another-nzb (4.20 GB)
The numbers in square braces are ID's of files or groups in queue.
They can be used in edit-command. For example to move file with
ID 2 to the top of queue:
nzbget -E T 2
@@ -402,8 +369,8 @@ or to delete files from queue:
nzbget -E D 3 10-15 20-21 16
The edit-command has also a group-mode which affects all files from the
same nzb-request. You need to pass one ID of any file in the group. For
example to delete all files from the first nzb-request:
same nzb-file. You need to pass an ID of the group. For example to delete
the whole group 1:
nzbget -E G D 1
@@ -429,40 +396,28 @@ same computer)
Security warning
----------------
NZBGet client communicates with NZBGet server via unsecured socket connections.
This makes it vulnerable. Although server checks the password passed by client,
this password is still transmitted in unsecured way. For this reason it is
highly recommended to configure your firewall to not expose the port used by
NZBGet (option <ControlPort>) to WAN.
NZBGet communicates via unsecured socket connections. This makes it vulnerable.
Although server checks the password passed by client, this password is still
transmitted in unsecured way. For this reason it is highly recommended
to configure your Firewall to not expose the port used by NZBGet to WAN.
If you need to control server from WAN it is better to use web-interface via HTTPS
or (if you prefer remote commands) connect to server's terminal via SSH (POSIX)
or remote desktop (Windows) and then run nzbget-client-commands in this terminal.
If you need to control server from WAN it is better to connect to server's
terminal via SSH (POSIX) or remote desktop (Windows) and then run
nzbget-client-commands in this terminal.
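For example (the host name is only an illustration), log in via SSH and run
the client command on the server itself:
ssh user@nzbget-box
nzbget -L G
Another common approach, not specific to NZBGet, is an SSH port forward: it
keeps the control port closed to WAN while the web-interface stays reachable
from the local machine:
ssh -L 6789:127.0.0.1:6789 user@nzbget-box
then open http://localhost:6789/ in the local browser.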
Post processing scripts
-----------------------
After the download of nzb-file is completed nzbget can call post-processing
script, defined in configuration file. See example configuration file for
the description of parameters passed to the script (option "PostProcess").
scripts, defined in configuration file.
An example script for unraring of downloaded files is provided in file
"nzbget-postprocess.sh" installed into "<prefix>/bin". The script requires
configuration file "nzbget-postprocess.conf". If you have installed the
program with "make install" this file is copied to "<prefix>/etc",
where the post-processing script finds it automatically. If you install
the program manually from a binary archive you have to copy the file
from "<prefix>/share/nzbget" to the directory where you have put the
nzbget configuration file ("nzbget.conf").
Example post-processing scripts are provided in directory "scripts".
Set the option "PostProcess" in "nzbget.conf" to point to the post-
processing script.
To use the scripts copy them into your local directory and set options
<ScriptDir>, <PostScript> and <ScriptOrder>.
Additional usage instructions are included in "nzbget-postprocess.sh",
please open the file in a text editor to read them.
NOTE: The post-processing script "nzbget-postprocess.sh" is for
POSIX systems and will not work on Windows.
For information on writing your own post-processing scripts please
visit NZBGet web site.
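As an illustration (the directory and the script name are only examples), the
relevant settings in "nzbget.conf" could look like:
ScriptDir=/home/user/nzbget/scripts
PostScript=EMail.py
ScriptOrder=EMail.py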
Web-interface
-------------
@@ -481,13 +436,14 @@ and port defined in NZBGet configuration file in options "ControlIP" and
http://localhost:6789/
For login credentials type username "nzbget" (predefined and not changeable)
and the password from the option "ControlPassword" (default is tegbzn6789).
For login credentials type username and the password defined by
options "ControlUsername" (default "nzbget") and "ControlPassword"
(default "tegbzn6789").
In a case your browser forget credentials, to prevent typing them each
time, there is a workaround - use URL in the form:
http://localhost:6789/nzbget:password/
http://localhost:6789/username:password/
Please note, that in this case the password is saved in a bookmark or in
browser history in plain text and is easy to find by persons having
@@ -506,6 +462,32 @@ Bo Cordes Petersen (placebodk@users.sourceforge.net) until 2005.
In 2007 the abandoned project was overtaken by Andrey Prygunkov.
Since then the program has been completely rewritten.
NZBGet distribution archive includes additional components
written by other authors:
PAR2:
Peter Brian Clements <peterbclements@users.sourceforge.net>
PAR2 library API:
Francois Lesueur <flesueur@users.sourceforge.net>
jQuery:
John Resig <http://jquery.com>
The Dojo Foundation <http://sizzlejs.com>
Bootstrap:
Twitter, Inc <http://twitter.github.com/bootstrap>
Raphaël:
Dmitry Baranovskiy <http://raphaeljs.com>
Sencha Labs <http://sencha.com>
Elycharts:
Void Labs s.n.c. <http://void.it>
iconSweets:
Yummygum <http://yummygum.com>
=====================================
9. Copyright
=====================================
@@ -519,24 +501,13 @@ The complete content of license is provided in file COPYING.
Additional exemption: compiling, linking, and/or using OpenSSL is allowed.
Binary distribution for Windows contains code from the following libraries:
- libpar2 (http://parchive.sourceforge.net)
- libsigc++ (http://libsigc.sourceforge.net)
- GnuTLS (http://www.gnu.org/software/gnutls)
- zlib (http://www.zlib.net)
- regex (http://gnuwin32.sourceforge.net/packages/regex.htm)
libpar2 is distributed under GPL; libsigc++, GnuTLS and regex - under LGPL;
zlib - under zlib license.
=====================================
10. Contact
=====================================
If you encounter any problem, feel free to use the forum
nzbget.sourceforge.net/forum
nzbget.net/forum
or contact me at

Scanner.cpp

@@ -1,387 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2011 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <fstream>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <sys/stat.h>
#include <errno.h>
#include "nzbget.h"
#include "Scanner.h"
#include "Options.h"
#include "Log.h"
#include "QueueCoordinator.h"
#include "ScriptController.h"
#include "DiskState.h"
#include "Util.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
Scanner::FileData::FileData(const char* szFilename)
{
m_szFilename = strdup(szFilename);
m_iSize = 0;
m_tLastChange = 0;
}
Scanner::FileData::~FileData()
{
free(m_szFilename);
}
Scanner::Scanner()
{
debug("Creating Scanner");
m_bRequestedNZBDirScan = false;
m_bScanning = false;
m_iNZBDirInterval = g_pOptions->GetNzbDirInterval() * 1000;
m_iPass = 0;
m_iStepMSec = 0;
const char* szNZBScript = g_pOptions->GetNZBProcess();
m_bNZBScript = szNZBScript && strlen(szNZBScript) > 0;
}
Scanner::~Scanner()
{
debug("Destroying Scanner");
for (FileList::iterator it = m_FileList.begin(); it != m_FileList.end(); it++)
{
delete *it;
}
m_FileList.clear();
}
void Scanner::Check()
{
if (g_pOptions->GetNzbDir() && (m_bRequestedNZBDirScan ||
(!g_pOptions->GetPauseScan() && g_pOptions->GetNzbDirInterval() > 0 &&
m_iNZBDirInterval >= g_pOptions->GetNzbDirInterval() * 1000)))
{
// check nzbdir every g_pOptions->GetNzbDirInterval() seconds or if requested
bool bCheckStat = !m_bRequestedNZBDirScan;
m_bRequestedNZBDirScan = false;
m_bScanning = true;
CheckIncomingNZBs(g_pOptions->GetNzbDir(), "", bCheckStat);
if (!bCheckStat && m_bNZBScript)
{
// if an immediate scan was requested, we need a second scan to process files extracted by the NzbProcess-script
CheckIncomingNZBs(g_pOptions->GetNzbDir(), "", bCheckStat);
}
m_bScanning = false;
m_iNZBDirInterval = 0;
// if NzbDirFileAge is less than NzbDirInterval (that can happen if NzbDirInterval
// is set for rare scans like once per hour) we make 4 scans:
// - one additional scan is necessary to check sizes of detected files;
// - another scan is required to check files which were extracted by NzbProcess-script;
// - third scan is needed to check sizes of extracted files.
if (g_pOptions->GetNzbDirFileAge() < g_pOptions->GetNzbDirInterval())
{
int iMaxPass = m_bNZBScript ? 3 : 1;
if (m_iPass < iMaxPass)
{
// scheduling another scan of incoming directory in NzbDirFileAge seconds.
m_iNZBDirInterval = (g_pOptions->GetNzbDirInterval() - g_pOptions->GetNzbDirFileAge()) * 1000;
m_iPass++;
}
else
{
m_iPass = 0;
}
}
DropOldFiles();
}
m_iNZBDirInterval += m_iStepMSec;
}
/**
* Check if there are files in directory for incoming nzb-files
* and add them to download queue
*/
void Scanner::CheckIncomingNZBs(const char* szDirectory, const char* szCategory, bool bCheckStat)
{
DirBrowser dir(szDirectory);
while (const char* filename = dir.Next())
{
struct stat buffer;
char fullfilename[1023 + 1]; // one char reserved for the trailing slash (if needed)
snprintf(fullfilename, 1023, "%s%s", szDirectory, filename);
fullfilename[1023 - 1] = '\0';
if (!stat(fullfilename, &buffer))
{
// check subfolders
if ((buffer.st_mode & S_IFDIR) != 0 && strcmp(filename, ".") && strcmp(filename, ".."))
{
fullfilename[strlen(fullfilename) + 1] = '\0';
fullfilename[strlen(fullfilename)] = PATH_SEPARATOR;
const char* szUseCategory = filename;
char szSubCategory[1024];
if (strlen(szCategory) > 0)
{
snprintf(szSubCategory, 1023, "%s%c%s", szCategory, PATH_SEPARATOR, filename);
szSubCategory[1024 - 1] = '\0';
szUseCategory = szSubCategory;
}
CheckIncomingNZBs(fullfilename, szUseCategory, bCheckStat);
}
else if ((buffer.st_mode & S_IFDIR) == 0 && CanProcessFile(fullfilename, bCheckStat))
{
ProcessIncomingFile(szDirectory, filename, fullfilename, szCategory);
}
}
}
}
/**
* Only files which were not changed during last g_pOptions->GetNzbDirFileAge() seconds
* can be processed. That prevents the processing of files, which are currently being
* copied into nzb-directory (eg. being downloaded in web-browser).
*/
bool Scanner::CanProcessFile(const char* szFullFilename, bool bCheckStat)
{
const char* szExtension = strrchr(szFullFilename, '.');
if (!szExtension ||
!strcasecmp(szExtension, ".queued") ||
!strcasecmp(szExtension, ".error") ||
!strcasecmp(szExtension, ".processed"))
{
return false;
}
if (!bCheckStat)
{
return true;
}
long long lSize = Util::FileSize(szFullFilename);
time_t tCurrent = time(NULL);
bool bCanProcess = false;
bool bInList = false;
for (FileList::iterator it = m_FileList.begin(); it != m_FileList.end(); it++)
{
FileData* pFileData = *it;
if (!strcmp(pFileData->GetFilename(), szFullFilename))
{
bInList = true;
if (pFileData->GetSize() == lSize &&
tCurrent - pFileData->GetLastChange() >= g_pOptions->GetNzbDirFileAge())
{
bCanProcess = true;
delete pFileData;
m_FileList.erase(it);
}
else
{
pFileData->SetSize(lSize);
if (pFileData->GetSize() != lSize)
{
pFileData->SetLastChange(tCurrent);
}
}
break;
}
}
if (!bInList)
{
FileData* pFileData = new FileData(szFullFilename);
pFileData->SetSize(lSize);
pFileData->SetLastChange(tCurrent);
m_FileList.push_back(pFileData);
}
return bCanProcess;
}
/**
* Remove old files from the list of monitored files.
* Normally these files are deleted from the list when they are processed.
* However if a file was detected by function "CanProcessFile" once but wasn't
* processed later (for example if the user deleted it), it will stay in the list,
* until we remove it here.
*/
void Scanner::DropOldFiles()
{
time_t tCurrent = time(NULL);
int i = 0;
for (FileList::iterator it = m_FileList.begin(); it != m_FileList.end(); )
{
FileData* pFileData = *it;
if ((tCurrent - pFileData->GetLastChange() >=
(g_pOptions->GetNzbDirInterval() + g_pOptions->GetNzbDirFileAge()) * 2) ||
// can occur if the system clock was adjusted
tCurrent < pFileData->GetLastChange())
{
debug("Removing file %s from scan file list", pFileData->GetFilename());
delete pFileData;
m_FileList.erase(it);
it = m_FileList.begin() + i;
}
else
{
it++;
i++;
}
}
}
void Scanner::ProcessIncomingFile(const char* szDirectory, const char* szBaseFilename, const char* szFullFilename, const char* szCategory)
{
const char* szExtension = strrchr(szBaseFilename, '.');
if (!szExtension)
{
return;
}
char* szNZBCategory = strdup(szCategory);
NZBParameterList* pParameterList = new NZBParameterList;
int iPriority = 0;
bool bExists = true;
if (m_bNZBScript && strcasecmp(szExtension, ".nzb_processed"))
{
NZBScriptController::ExecuteScript(g_pOptions->GetNZBProcess(), szFullFilename, szDirectory,
&szNZBCategory, &iPriority, pParameterList);
bExists = Util::FileExists(szFullFilename);
if (bExists && strcasecmp(szExtension, ".nzb"))
{
char bakname2[1024];
bool bRenameOK = Util::RenameBak(szFullFilename, "processed", false, bakname2, 1024);
if (!bRenameOK)
{
error("Could not rename file %s to %s! Errcode: %i", szFullFilename, bakname2, errno);
}
}
}
if (!strcasecmp(szExtension, ".nzb_processed"))
{
char szRenamedName[1024];
bool bRenameOK = Util::RenameBak(szFullFilename, "nzb", true, szRenamedName, 1024);
if (bRenameOK)
{
AddFileToQueue(szRenamedName, szNZBCategory, iPriority, pParameterList);
}
else
{
error("Could not rename file %s to %s! Errcode: %i", szFullFilename, szRenamedName, errno);
}
}
else if (bExists && !strcasecmp(szExtension, ".nzb"))
{
AddFileToQueue(szFullFilename, szNZBCategory, iPriority, pParameterList);
}
for (NZBParameterList::iterator it = pParameterList->begin(); it != pParameterList->end(); it++)
{
delete *it;
}
pParameterList->clear();
delete pParameterList;
free(szNZBCategory);
}
void Scanner::AddFileToQueue(const char* szFilename, const char* szCategory, int iPriority, NZBParameterList* pParameterList)
{
const char* szBasename = Util::BaseFileName(szFilename);
info("Collection %s found", szBasename);
NZBFile* pNZBFile = NZBFile::CreateFromFile(szFilename, szCategory);
if (!pNZBFile)
{
error("Could not add collection %s to queue", szBasename);
}
char bakname2[1024];
bool bRenameOK = Util::RenameBak(szFilename, pNZBFile ? "queued" : "error", false, bakname2, 1024);
if (!bRenameOK)
{
error("Could not rename file %s to %s! Errcode: %i", szFilename, bakname2, errno);
}
if (pNZBFile && bRenameOK)
{
pNZBFile->GetNZBInfo()->SetQueuedFilename(bakname2);
for (NZBParameterList::iterator it = pParameterList->begin(); it != pParameterList->end(); it++)
{
NZBParameter* pParameter = *it;
pNZBFile->GetNZBInfo()->SetParameter(pParameter->GetName(), pParameter->GetValue());
}
for (NZBFile::FileInfos::iterator it = pNZBFile->GetFileInfos()->begin(); it != pNZBFile->GetFileInfos()->end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetPriority(iPriority);
}
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, false);
info("Collection %s added to queue", szBasename);
}
if (pNZBFile)
{
delete pNZBFile;
}
}
void Scanner::ScanNZBDir(bool bSyncMode)
{
// ideally we should use mutex to access "m_bRequestedNZBDirScan",
// but it's not critical here.
m_bScanning = true;
m_bRequestedNZBDirScan = true;
while (bSyncMode && (m_bScanning || m_bRequestedNZBDirScan))
{
usleep(100 * 1000);
}
}

Scanner.h

@@ -1,77 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2011 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef SCANNER_H
#define SCANNER_H
#include <deque>
#include <time.h>
#include "DownloadInfo.h"
class Scanner
{
private:
class FileData
{
private:
char* m_szFilename;
long long m_iSize;
time_t m_tLastChange;
public:
FileData(const char* szFilename);
~FileData();
const char* GetFilename() { return m_szFilename; }
long long GetSize() { return m_iSize; }
void SetSize(long long lSize) { m_iSize = lSize; }
time_t GetLastChange() { return m_tLastChange; }
void SetLastChange(time_t tLastChange) { m_tLastChange = tLastChange; }
};
typedef std::deque<FileData*> FileList;
bool m_bRequestedNZBDirScan;
int m_iNZBDirInterval;
bool m_bNZBScript;
int m_iPass;
int m_iStepMSec;
FileList m_FileList;
bool m_bScanning;
void CheckIncomingNZBs(const char* szDirectory, const char* szCategory, bool bCheckStat);
void AddFileToQueue(const char* szFilename, const char* szCategory, int iPriority, NZBParameterList* pParameterList);
void ProcessIncomingFile(const char* szDirectory, const char* szBaseFilename, const char* szFullFilename, const char* szCategory);
bool CanProcessFile(const char* szFullFilename, bool bCheckStat);
void DropOldFiles();
public:
Scanner();
~Scanner();
void SetStepInterval(int iStepMSec) { m_iStepMSec = iStepMSec; }
void ScanNZBDir(bool bSyncMode);
void Check();
};
#endif

Scheduler.cpp

@@ -1,246 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2008-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#else
#include <unistd.h>
#endif
#include <stdlib.h>
#include <string.h>
#include "nzbget.h"
#include "Scheduler.h"
#include "ScriptController.h"
#include "Options.h"
#include "Log.h"
extern Options* g_pOptions;
Scheduler::Task::Task(int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand,
int iDownloadRate, const char* szProcess)
{
m_iHours = iHours;
m_iMinutes = iMinutes;
m_iWeekDaysBits = iWeekDaysBits;
m_eCommand = eCommand;
m_iDownloadRate = iDownloadRate;
m_szProcess = NULL;
if (szProcess)
{
m_szProcess = strdup(szProcess);
}
m_tLastExecuted = 0;
}
Scheduler::Task::~Task()
{
if (m_szProcess)
{
free(m_szProcess);
}
}
Scheduler::Scheduler()
{
debug("Creating Scheduler");
m_tLastCheck = 0;
m_TaskList.clear();
}
Scheduler::~Scheduler()
{
debug("Destroying Scheduler");
for (TaskList::iterator it = m_TaskList.begin(); it != m_TaskList.end(); it++)
{
delete *it;
}
}
void Scheduler::AddTask(Task* pTask)
{
m_mutexTaskList.Lock();
m_TaskList.push_back(pTask);
m_mutexTaskList.Unlock();
}
bool Scheduler::CompareTasks(Scheduler::Task* pTask1, Scheduler::Task* pTask2)
{
return (pTask1->m_iHours < pTask2->m_iHours) ||
((pTask1->m_iHours == pTask2->m_iHours) && (pTask1->m_iMinutes < pTask2->m_iMinutes));
}
void Scheduler::FirstCheck()
{
m_mutexTaskList.Lock();
m_TaskList.sort(CompareTasks);
m_mutexTaskList.Unlock();
// check all tasks for the last week
time_t tCurrent = time(NULL);
m_tLastCheck = tCurrent - 60*60*24*7;
m_bDetectClockChanges = false;
m_bExecuteProcess = false;
m_bDownloadRateChanged = false;
m_bPauseDownloadChanged = false;
m_bPauseScanChanged = false;
CheckTasks();
}
void Scheduler::IntervalCheck()
{
m_bDetectClockChanges = true;
m_bExecuteProcess = true;
m_bDownloadRateChanged = false;
m_bPauseDownloadChanged = false;
m_bPauseScanChanged = false;
CheckTasks();
}
void Scheduler::CheckTasks()
{
m_mutexTaskList.Lock();
time_t tCurrent = time(NULL);
struct tm tmCurrent;
localtime_r(&tCurrent, &tmCurrent);
struct tm tmLastCheck;
if (m_bDetectClockChanges)
{
// Detect large step changes of system time
time_t tDiff = tCurrent - m_tLastCheck;
if (tDiff > 60*90 || tDiff < -60*90)
{
debug("Reset scheduled tasks (detected clock adjustment greater than 90 minutes)");
m_bExecuteProcess = false;
m_tLastCheck = tCurrent;
for (TaskList::iterator it = m_TaskList.begin(); it != m_TaskList.end(); it++)
{
Task* pTask = *it;
pTask->m_tLastExecuted = 0;
}
}
}
localtime_r(&m_tLastCheck, &tmLastCheck);
struct tm tmLoop;
memcpy(&tmLoop, &tmLastCheck, sizeof(tmLastCheck));
tmLoop.tm_hour = tmCurrent.tm_hour;
tmLoop.tm_min = tmCurrent.tm_min;
tmLoop.tm_sec = tmCurrent.tm_sec;
time_t tLoop = mktime(&tmLoop);
while (tLoop <= tCurrent)
{
for (TaskList::iterator it = m_TaskList.begin(); it != m_TaskList.end(); it++)
{
Task* pTask = *it;
if (pTask->m_tLastExecuted != tLoop)
{
struct tm tmAppoint;
memcpy(&tmAppoint, &tmLoop, sizeof(tmLoop));
tmAppoint.tm_hour = pTask->m_iHours;
tmAppoint.tm_min = pTask->m_iMinutes;
tmAppoint.tm_sec = 0;
time_t tAppoint = mktime(&tmAppoint);
int iWeekDay = tmAppoint.tm_wday;
if (iWeekDay == 0)
{
iWeekDay = 7;
}
bool bWeekDayOK = pTask->m_iWeekDaysBits == 0 || (pTask->m_iWeekDaysBits & (1 << (iWeekDay - 1)));
bool bDoTask = bWeekDayOK && m_tLastCheck < tAppoint && tAppoint <= tCurrent;
//debug("TEMP: 1) m_tLastCheck=%i, tCurrent=%i, tLoop=%i, tAppoint=%i, bWeekDayOK=%i, bDoTask=%i", m_tLastCheck, tCurrent, tLoop, tAppoint, (int)bWeekDayOK, (int)bDoTask);
if (bDoTask)
{
ExecuteTask(pTask);
pTask->m_tLastExecuted = tLoop;
}
}
}
tLoop += 60*60*24; // inc day
localtime_r(&tLoop, &tmLoop);
}
m_tLastCheck = tCurrent;
m_mutexTaskList.Unlock();
}
void Scheduler::ExecuteTask(Task* pTask)
{
if (pTask->m_eCommand == scDownloadRate)
{
debug("Executing scheduled command: Set download rate to %i", pTask->m_iDownloadRate);
}
else
{
const char* szCommandName[] = { "Pause", "Unpause", "Set download rate", "Execute program", "Pause Scan", "Unpause Scan" };
debug("Executing scheduled command: %s", szCommandName[pTask->m_eCommand]);
}
switch (pTask->m_eCommand)
{
case scDownloadRate:
m_iDownloadRate = pTask->m_iDownloadRate;
m_bDownloadRateChanged = true;
break;
case scPauseDownload:
case scUnpauseDownload:
m_bPauseDownload = pTask->m_eCommand == scPauseDownload;
m_bPauseDownloadChanged = true;
break;
case scProcess:
if (m_bExecuteProcess)
{
SchedulerScriptController::StartScript(pTask->m_szProcess);
}
break;
case scPauseScan:
case scUnpauseScan:
m_bPauseScan = pTask->m_eCommand == scPauseScan;
m_bPauseScanChanged = true;
break;
}
}

File diff suppressed because it is too large.

Unpack.cpp

@@ -1,719 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <fstream>
#ifndef WIN32
#include <unistd.h>
#endif
#include <errno.h>
#include "nzbget.h"
#include "Unpack.h"
#include "Log.h"
#include "Util.h"
#include "ParCoordinator.h"
extern Options* g_pOptions;
extern DownloadQueueHolder* g_pDownloadQueueHolder;
void UnpackController::FileList::Clear()
{
for (iterator it = begin(); it != end(); it++)
{
free(*it);
}
clear();
}
bool UnpackController::FileList::Exists(const char* szFilename)
{
for (iterator it = begin(); it != end(); it++)
{
char* szFilename1 = *it;
if (!strcmp(szFilename1, szFilename))
{
return true;
}
}
return false;
}
UnpackController::~UnpackController()
{
m_archiveFiles.Clear();
}
void UnpackController::StartUnpackJob(PostInfo* pPostInfo)
{
UnpackController* pUnpackController = new UnpackController();
pUnpackController->m_pPostInfo = pPostInfo;
pUnpackController->SetAutoDestroy(false);
pPostInfo->SetPostThread(pUnpackController);
pUnpackController->Start();
}
void UnpackController::Run()
{
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
strncpy(m_szDestDir, m_pPostInfo->GetNZBInfo()->GetDestDir(), 1024);
m_szDestDir[1024-1] = '\0';
strncpy(m_szName, m_pPostInfo->GetNZBInfo()->GetName(), 1024);
m_szName[1024-1] = '\0';
m_bCleanedUpDisk = false;
bool bUnpack = true;
m_szPassword[0] = '\0';
m_szFinalDir[0] = '\0';
for (NZBParameterList::iterator it = m_pPostInfo->GetNZBInfo()->GetParameters()->begin(); it != m_pPostInfo->GetNZBInfo()->GetParameters()->end(); it++)
{
NZBParameter* pParameter = *it;
if (!strcasecmp(pParameter->GetName(), "*Unpack:") && !strcasecmp(pParameter->GetValue(), "no"))
{
bUnpack = false;
}
if (!strcasecmp(pParameter->GetName(), "*Unpack:Password"))
{
strncpy(m_szPassword, pParameter->GetValue(), 1024-1);
m_szPassword[1024-1] = '\0';
}
}
g_pDownloadQueueHolder->UnlockQueue();
snprintf(m_szInfoName, 1024, "unpack for %s", m_szName);
m_szInfoName[1024-1] = '\0';
snprintf(m_szInfoNameUp, 1024, "Unpack for %s", m_szName); // first letter in upper case
m_szInfoNameUp[1024-1] = '\0';
#ifndef DISABLE_PARCHECK
if (bUnpack && HasBrokenFiles() && m_pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped && HasParFiles())
{
info("%s has broken files", m_szName);
RequestParCheck(false);
m_pPostInfo->SetWorking(false);
return;
}
#endif
if (bUnpack)
{
CheckArchiveFiles();
}
if (bUnpack && (m_bHasRarFiles || m_bHasSevenZipFiles || m_bHasSevenZipMultiFiles))
{
SetInfoName(m_szInfoName);
SetDefaultLogKind(g_pOptions->GetProcessLogKind());
SetWorkingDir(m_szDestDir);
PrintMessage(Message::mkInfo, "Unpacking %s", m_szName);
CreateUnpackDir();
m_bUnpackOK = true;
m_bUnpackStartError = false;
if (m_bHasRarFiles)
{
ExecuteUnrar();
}
if (m_bHasSevenZipFiles && m_bUnpackOK)
{
ExecuteSevenZip(false);
}
if (m_bHasSevenZipMultiFiles && m_bUnpackOK)
{
ExecuteSevenZip(true);
}
Completed();
}
else
{
PrintMessage(Message::mkInfo, (bUnpack ? "Nothing to unpack for %s" : "Unpack for %s skipped"), m_szName);
#ifndef DISABLE_PARCHECK
if (bUnpack && m_pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped && HasParFiles())
{
RequestParCheck(m_pPostInfo->GetNZBInfo()->GetRenameStatus() <= NZBInfo::rsSkipped);
}
else
#endif
{
m_pPostInfo->SetUnpackStatus(PostInfo::usSkipped);
m_pPostInfo->GetNZBInfo()->SetUnpackStatus(NZBInfo::usSkipped);
m_pPostInfo->SetStage(PostInfo::ptQueued);
}
}
m_pPostInfo->SetWorking(false);
}
void UnpackController::ExecuteUnrar()
{
// Format:
// unrar x -y -p- -o+ *.rar ./_unpack
char szPasswordParam[1024];
const char* szArgs[8];
szArgs[0] = g_pOptions->GetUnrarCmd();
szArgs[1] = "x";
szArgs[2] = "-y";
szArgs[3] = "-p-";
if (strlen(m_szPassword) > 0)
{
snprintf(szPasswordParam, 1024, "-p%s", m_szPassword);
szArgs[3] = szPasswordParam;
}
szArgs[4] = "-o+";
szArgs[5] = "*.rar";
szArgs[6] = m_szUnpackDir;
szArgs[7] = NULL;
SetArgs(szArgs, false);
SetScript(g_pOptions->GetUnrarCmd());
SetDefaultKindPrefix("Unrar: ");
m_bAllOKMessageReceived = false;
m_eUnpacker = upUnrar;
SetProgressLabel("");
int iExitCode = Execute();
SetProgressLabel("");
m_bUnpackOK = iExitCode == 0 && m_bAllOKMessageReceived && !GetTerminated();
m_bUnpackStartError = iExitCode == -1;
if (!m_bUnpackOK && iExitCode > 0)
{
PrintMessage(Message::mkError, "Unrar error code: %i", iExitCode);
}
}
void UnpackController::ExecuteSevenZip(bool bMultiVolumes)
{
// Format:
// 7z x -y -p- -o./_unpack *.7z
// OR
// 7z x -y -p- -o./_unpack *.7z.001
char szPasswordParam[1024];
const char* szArgs[7];
szArgs[0] = g_pOptions->GetSevenZipCmd();
szArgs[1] = "x";
szArgs[2] = "-y";
szArgs[3] = "-p-";
if (strlen(m_szPassword) > 0)
{
snprintf(szPasswordParam, 1024, "-p%s", m_szPassword);
szArgs[3] = szPasswordParam;
}
char szUnpackDirParam[1024];
snprintf(szUnpackDirParam, 1024, "-o%s", m_szUnpackDir);
szArgs[4] = szUnpackDirParam;
szArgs[5] = bMultiVolumes ? "*.7z.001" : "*.7z";
szArgs[6] = NULL;
SetArgs(szArgs, false);
SetScript(g_pOptions->GetSevenZipCmd());
SetDefaultKindPrefix("7-Zip: ");
m_bAllOKMessageReceived = false;
m_eUnpacker = upSevenZip;
PrintMessage(Message::mkInfo, "Executing 7-Zip");
SetProgressLabel("");
int iExitCode = Execute();
SetProgressLabel("");
m_bUnpackOK = iExitCode == 0 && m_bAllOKMessageReceived && !GetTerminated();
m_bUnpackStartError = iExitCode == -1;
if (!m_bUnpackOK && iExitCode > 0)
{
PrintMessage(Message::mkError, "7-Zip error code: %i", iExitCode);
}
}
void UnpackController::Completed()
{
bool bCleanupSuccess = Cleanup();
if (m_bUnpackOK && bCleanupSuccess)
{
PrintMessage(Message::mkInfo, "%s %s", m_szInfoNameUp, "successful");
m_pPostInfo->SetUnpackStatus(PostInfo::usSuccess);
m_pPostInfo->GetNZBInfo()->SetUnpackStatus(NZBInfo::usSuccess);
m_pPostInfo->GetNZBInfo()->SetUnpackCleanedUpDisk(m_bCleanedUpDisk);
m_pPostInfo->SetStage(PostInfo::ptQueued);
}
else
{
#ifndef DISABLE_PARCHECK
if (!m_bUnpackOK && m_pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped && !m_bUnpackStartError && !GetTerminated() && HasParFiles())
{
RequestParCheck(false);
}
else
#endif
{
PrintMessage(Message::mkError, "%s failed", m_szInfoNameUp);
m_pPostInfo->SetUnpackStatus(PostInfo::usFailure);
m_pPostInfo->GetNZBInfo()->SetUnpackStatus(NZBInfo::usFailure);
m_pPostInfo->SetStage(PostInfo::ptQueued);
}
}
}
#ifndef DISABLE_PARCHECK
void UnpackController::RequestParCheck(bool bRename)
{
PrintMessage(Message::mkInfo, "%s requested %s", m_szInfoNameUp, bRename ? "par-rename": "par-check/repair");
if (bRename)
{
m_pPostInfo->SetRequestParRename(true);
}
else
{
m_pPostInfo->SetRequestParCheck(PostInfo::rpAll);
}
m_pPostInfo->SetStage(PostInfo::ptFinished);
}
#endif
bool UnpackController::HasParFiles()
{
return ParCoordinator::FindMainPars(m_szDestDir, NULL);
}
bool UnpackController::HasBrokenFiles()
{
char szBrokenLog[1024];
snprintf(szBrokenLog, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, "_brokenlog.txt");
szBrokenLog[1024-1] = '\0';
return Util::FileExists(szBrokenLog);
}
void UnpackController::CreateUnpackDir()
{
if (strlen(g_pOptions->GetInterDir()) > 0 &&
!strncmp(m_szDestDir, g_pOptions->GetInterDir(), strlen(g_pOptions->GetInterDir())))
{
m_pPostInfo->GetNZBInfo()->BuildFinalDirName(m_szFinalDir, 1024);
m_szFinalDir[1024-1] = '\0';
Util::ForceDirectories(m_szFinalDir);
snprintf(m_szUnpackDir, 1024, "%s%c%s", m_szFinalDir, PATH_SEPARATOR, "_unpack");
}
else
{
snprintf(m_szUnpackDir, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, "_unpack");
}
m_szUnpackDir[1024-1] = '\0';
Util::ForceDirectories(m_szUnpackDir);
}
void UnpackController::CheckArchiveFiles()
{
m_bHasRarFiles = false;
m_bHasSevenZipFiles = false;
m_bHasSevenZipMultiFiles = false;
RegEx regExRar(".*\\.rar$");
RegEx regExSevenZip(".*\\.7z$");
RegEx regExSevenZipMulti(".*\\.7z\\.[0-9]*$");
DirBrowser dir(m_szDestDir);
while (const char* filename = dir.Next())
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (strcmp(filename, ".") && strcmp(filename, "..") && !Util::DirectoryExists(szFullFilename))
{
if (regExRar.Match(filename))
{
m_bHasRarFiles = true;
}
if (regExSevenZip.Match(filename))
{
m_bHasSevenZipFiles = true;
}
if (regExSevenZipMulti.Match(filename))
{
m_bHasSevenZipMultiFiles = true;
}
}
}
}
bool UnpackController::Cleanup()
{
// By success:
// - move unpacked files to destination dir;
// - remove _unpack-dir;
// - delete archive-files.
// By failure:
// - remove _unpack-dir.
bool bOK = true;
FileList extractedFiles;
if (m_bUnpackOK)
{
// moving files back
DirBrowser dir(m_szUnpackDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, ".."))
{
char szSrcFile[1024];
snprintf(szSrcFile, 1024, "%s%c%s", m_szUnpackDir, PATH_SEPARATOR, filename);
szSrcFile[1024-1] = '\0';
char szDstFile[1024];
snprintf(szDstFile, 1024, "%s%c%s", m_szFinalDir[0] != '\0' ? m_szFinalDir : m_szDestDir, PATH_SEPARATOR, filename);
szDstFile[1024-1] = '\0';
// silently overwrite existing files
remove(szDstFile);
if (!Util::MoveFile(szSrcFile, szDstFile))
{
PrintMessage(Message::mkError, "Could not move file %s to %s", szSrcFile, szDstFile);
bOK = false;
}
extractedFiles.push_back(strdup(filename));
}
}
}
if (bOK && !Util::DeleteDirectoryWithContent(m_szUnpackDir))
{
PrintMessage(Message::mkError, "Could not remove temporary directory %s", m_szUnpackDir);
}
if (m_bUnpackOK && bOK && g_pOptions->GetUnpackCleanupDisk())
{
PrintMessage(Message::mkInfo, "Deleting archive files");
// Delete rar-files (only files which were used by unrar)
for (FileList::iterator it = m_archiveFiles.begin(); it != m_archiveFiles.end(); it++)
{
char* szFilename = *it;
if (!extractedFiles.Exists(szFilename))
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, szFilename);
szFullFilename[1024-1] = '\0';
PrintMessage(Message::mkInfo, "Deleting file %s", szFilename);
if (remove(szFullFilename) != 0)
{
PrintMessage(Message::mkError, "Could not delete file %s", szFullFilename);
}
}
}
// Unfortunately 7-Zip doesn't print the processed archive-files to the output.
// Therefore we don't know for sure which files were extracted.
// We just delete all 7z-files in the directory.
RegEx regExSevenZip(".*\\.7z$|.*\\.7z\\.[0-9]*$");
DirBrowser dir(m_szDestDir);
while (const char* filename = dir.Next())
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (strcmp(filename, ".") && strcmp(filename, "..") && !Util::DirectoryExists(szFullFilename)
&& regExSevenZip.Match(filename) && !extractedFiles.Exists(filename))
{
PrintMessage(Message::mkInfo, "Deleting file %s", filename);
if (remove(szFullFilename) != 0)
{
PrintMessage(Message::mkError, "Could not delete file %s", szFullFilename);
}
}
}
m_bCleanedUpDisk = true;
}
extractedFiles.Clear();
return bOK;
}
/**
* Unrar prints progress information into the same line using backspace control character.
* In order to print progress continuously we analyze the output after every char
* and update post-job progress information.
*/
bool UnpackController::ReadLine(char* szBuf, int iBufSize, FILE* pStream)
{
bool bPrinted = false;
int i = 0;
for (; i < iBufSize - 1; i++)
{
int ch = fgetc(pStream);
szBuf[i] = ch;
szBuf[i+1] = '\0';
if (ch == EOF)
{
break;
}
if (ch == '\n')
{
i++;
break;
}
char* szBackspace = strrchr(szBuf, '\b');
if (szBackspace)
{
if (!bPrinted)
{
char tmp[1024];
strncpy(tmp, szBuf, 1024);
tmp[1024-1] = '\0';
char* szTmpPercent = strrchr(tmp, '\b');
if (szTmpPercent)
{
*szTmpPercent = '\0';
}
if (strncmp(szBuf, "...", 3))
{
ProcessOutput(tmp);
}
bPrinted = true;
}
if (strchr(szBackspace, '%'))
{
int iPercent = atoi(szBackspace + 1);
m_pPostInfo->SetStageProgress(iPercent * 10);
}
}
}
szBuf[i] = '\0';
if (bPrinted)
{
szBuf[0] = '\0';
}
return i > 0;
}
void UnpackController::AddMessage(Message::EKind eKind, bool bDefaultKind, const char* szText)
{
char szMsgText[1024];
strncpy(szMsgText, szText, 1024);
szMsgText[1024-1] = '\0';
// Modify unrar messages for better readability:
// remove the destination path part from message "Extracting file.xxx"
if (m_eUnpacker == upUnrar && !strncmp(szText, "Extracting ", 12) &&
!strncmp(szText + 12, m_szUnpackDir, strlen(m_szUnpackDir)))
{
snprintf(szMsgText, 1024, "Extracting %s", szText + 12 + strlen(m_szUnpackDir) + 1);
szMsgText[1024-1] = '\0';
}
ScriptController::AddMessage(eKind, bDefaultKind, szMsgText);
m_pPostInfo->AppendMessage(eKind, szMsgText);
if (m_eUnpacker == upUnrar && !strncmp(szMsgText, "Extracting ", 11))
{
SetProgressLabel(szMsgText);
}
if (m_eUnpacker == upUnrar && !strncmp(szText, "Extracting from ", 16))
{
const char *szFilename = szText + 16;
debug("Filename: %s", szFilename);
m_archiveFiles.push_back(strdup(szFilename));
SetProgressLabel(szText);
}
if ((m_eUnpacker == upUnrar && !strncmp(szText, "All OK", 6)) ||
(m_eUnpacker == upSevenZip && !strncmp(szText, "Everything is Ok", 16)))
{
m_bAllOKMessageReceived = true;
}
}
void UnpackController::Stop()
{
debug("Stopping unpack");
Thread::Stop();
Terminate();
}
void UnpackController::SetProgressLabel(const char* szProgressLabel)
{
g_pDownloadQueueHolder->LockQueue();
m_pPostInfo->SetProgressLabel(szProgressLabel);
g_pDownloadQueueHolder->UnlockQueue();
}
void MoveController::StartMoveJob(PostInfo* pPostInfo)
{
MoveController* pMoveController = new MoveController();
pMoveController->m_pPostInfo = pPostInfo;
pMoveController->SetAutoDestroy(false);
pPostInfo->SetPostThread(pMoveController);
pMoveController->Start();
}
void MoveController::Run()
{
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
char szNZBName[1024];
strncpy(szNZBName, m_pPostInfo->GetNZBInfo()->GetName(), 1024);
szNZBName[1024-1] = '\0';
char szInfoName[1024];
snprintf(szInfoName, 1024, "move for %s", m_pPostInfo->GetNZBInfo()->GetName());
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
SetDefaultKindPrefix("Move: ");
SetDefaultLogKind(g_pOptions->GetProcessLogKind());
strncpy(m_szInterDir, m_pPostInfo->GetNZBInfo()->GetDestDir(), 1024);
m_szInterDir[1024-1] = '\0';
m_pPostInfo->GetNZBInfo()->BuildFinalDirName(m_szDestDir, 1024);
m_szDestDir[1024-1] = '\0';
g_pDownloadQueueHolder->UnlockQueue();
info("Moving completed files for %s", szNZBName);
bool bOK = MoveFiles();
szInfoName[0] = 'M'; // uppercase
if (bOK)
{
info("%s successful", szInfoName);
// save new dest dir
g_pDownloadQueueHolder->LockQueue();
m_pPostInfo->GetNZBInfo()->SetDestDir(m_szDestDir);
m_pPostInfo->GetNZBInfo()->SetMoveStatus(NZBInfo::msSuccess);
g_pDownloadQueueHolder->UnlockQueue();
}
else
{
error("%s failed", szInfoName);
m_pPostInfo->GetNZBInfo()->SetMoveStatus(NZBInfo::msFailure);
}
m_pPostInfo->SetStage(PostInfo::ptQueued);
m_pPostInfo->SetWorking(false);
}
bool MoveController::MoveFiles()
{
bool bOK = true;
bOK = Util::ForceDirectories(m_szDestDir);
DirBrowser dir(m_szInterDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, ".."))
{
char szSrcFile[1024];
snprintf(szSrcFile, 1024, "%s%c%s", m_szInterDir, PATH_SEPARATOR, filename);
szSrcFile[1024-1] = '\0';
char szDstFile[1024];
snprintf(szDstFile, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, filename);
szDstFile[1024-1] = '\0';
// prevent overwriting of existing files
int dupcount = 0;
while (Util::FileExists(szDstFile))
{
dupcount++;
snprintf(szDstFile, 1024, "%s%c%s_duplicate%d", m_szDestDir, PATH_SEPARATOR, filename, dupcount);
szDstFile[1024-1] = '\0';
}
PrintMessage(Message::mkInfo, "Moving file %s to %s", Util::BaseFileName(szSrcFile), m_szDestDir);
if (!Util::MoveFile(szSrcFile, szDstFile))
{
PrintMessage(Message::mkError, "Could not move file %s to %s! Errcode: %i", szSrcFile, szDstFile, errno);
bOK = false;
}
}
}
Util::RemoveDirectory(m_szInterDir);
return bOK;
}

UrlCoordinator.cpp

@@ -1,455 +0,0 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <sys/stat.h>
#ifndef WIN32
#include <unistd.h>
#include <sys/time.h>
#endif
#include "nzbget.h"
#include "UrlCoordinator.h"
#include "Options.h"
#include "WebDownloader.h"
#include "DiskState.h"
#include "Log.h"
#include "Util.h"
#include "NZBFile.h"
#include "QueueCoordinator.h"
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
extern QueueCoordinator* g_pQueueCoordinator;
UrlDownloader::UrlDownloader() : WebDownloader()
{
m_szCategory = NULL;
}
UrlDownloader::~UrlDownloader()
{
if (m_szCategory)
{
free(m_szCategory);
}
}
void UrlDownloader::ProcessHeader(const char* szLine)
{
WebDownloader::ProcessHeader(szLine);
if (!strncmp(szLine, "X-DNZB-Category: ", 17))
{
if (m_szCategory)
{
free(m_szCategory);
}
const char *szCat = szLine + 17;
int iCatLen = strlen(szCat);
// trim trailing CR/LF/spaces
while (iCatLen > 0 && (szCat[iCatLen-1] == '\n' || szCat[iCatLen-1] == '\r' || szCat[iCatLen-1] == ' ')) iCatLen--;
m_szCategory = (char*)malloc(iCatLen + 1);
strncpy(m_szCategory, szCat, iCatLen);
m_szCategory[iCatLen] = '\0';
debug("Category: %s", m_szCategory);
}
}
UrlCoordinator::UrlCoordinator()
{
debug("Creating UrlCoordinator");
m_bHasMoreJobs = true;
}
UrlCoordinator::~UrlCoordinator()
{
debug("Destroying UrlCoordinator");
// Cleanup
debug("Deleting UrlDownloaders");
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
delete *it;
}
m_ActiveDownloads.clear();
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
for (UrlQueue::iterator it = pDownloadQueue->GetUrlQueue()->begin(); it != pDownloadQueue->GetUrlQueue()->end(); it++)
{
delete *it;
}
pDownloadQueue->GetUrlQueue()->clear();
g_pQueueCoordinator->UnlockQueue();
debug("UrlCoordinator destroyed");
}
void UrlCoordinator::Run()
{
debug("Entering UrlCoordinator-loop");
int iResetCounter = 0;
while (!IsStopped())
{
if (!(g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2()))
{
// start download for next URL
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
if ((int)m_ActiveDownloads.size() < g_pOptions->GetUrlConnections())
{
UrlInfo* pUrlInfo;
bool bHasMoreUrls = GetNextUrl(pDownloadQueue, pUrlInfo);
bool bUrlDownloadsRunning = !m_ActiveDownloads.empty();
m_bHasMoreJobs = bHasMoreUrls || bUrlDownloadsRunning;
if (bHasMoreUrls && !IsStopped() && Thread::GetThreadCount() < g_pOptions->GetThreadLimit())
{
StartUrlDownload(pUrlInfo);
}
}
g_pQueueCoordinator->UnlockQueue();
}
int iSleepInterval = 100;
usleep(iSleepInterval * 1000);
iResetCounter += iSleepInterval;
if (iResetCounter >= 1000)
{
// this code should not be called too often, once per second is OK
ResetHangingDownloads();
iResetCounter = 0;
}
}
// waiting for downloads
debug("UrlCoordinator: waiting for Downloads to complete");
bool completed = false;
while (!completed)
{
g_pQueueCoordinator->LockQueue();
completed = m_ActiveDownloads.size() == 0;
g_pQueueCoordinator->UnlockQueue();
usleep(100 * 1000);
ResetHangingDownloads();
}
debug("UrlCoordinator: Downloads are completed");
debug("Exiting UrlCoordinator-loop");
}
void UrlCoordinator::Stop()
{
Thread::Stop();
debug("Stopping UrlDownloads");
g_pQueueCoordinator->LockQueue();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
(*it)->Stop();
}
g_pQueueCoordinator->UnlockQueue();
debug("UrlDownloads are notified");
}
void UrlCoordinator::ResetHangingDownloads()
{
const int TimeOut = g_pOptions->GetTerminateTimeout();
if (TimeOut == 0)
{
return;
}
g_pQueueCoordinator->LockQueue();
time_t tm = ::time(NULL);
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end();)
{
UrlDownloader* pUrlDownloader = *it;
if (tm - pUrlDownloader->GetLastUpdateTime() > TimeOut &&
pUrlDownloader->GetStatus() == UrlDownloader::adRunning)
{
UrlInfo* pUrlInfo = pUrlDownloader->GetUrlInfo();
debug("Terminating hanging download %s", pUrlDownloader->GetInfoName());
if (pUrlDownloader->Terminate())
{
error("Terminated hanging download %s", pUrlDownloader->GetInfoName());
pUrlInfo->SetStatus(UrlInfo::aiUndefined);
}
else
{
error("Could not terminate hanging download %s", pUrlDownloader->GetInfoName());
}
m_ActiveDownloads.erase(it);
// it's not safe to destroy pUrlDownloader, because the state of object is unknown
delete pUrlDownloader;
it = m_ActiveDownloads.begin();
continue;
}
it++;
}
g_pQueueCoordinator->UnlockQueue();
}
void UrlCoordinator::LogDebugInfo()
{
debug(" UrlCoordinator");
debug(" ----------------");
g_pQueueCoordinator->LockQueue();
debug(" Active Downloads: %i", m_ActiveDownloads.size());
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
UrlDownloader* pUrlDownloader = *it;
pUrlDownloader->LogDebugInfo();
}
g_pQueueCoordinator->UnlockQueue();
}
void UrlCoordinator::AddUrlToQueue(UrlInfo* pUrlInfo, bool AddFirst)
{
debug("Adding NZB-URL to queue");
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
pDownloadQueue->GetUrlQueue()->push_back(pUrlInfo);
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
}
/*
* Returns next URL for download.
*/
bool UrlCoordinator::GetNextUrl(DownloadQueue* pDownloadQueue, UrlInfo* &pUrlInfo)
{
bool bOK = false;
for (UrlQueue::iterator at = pDownloadQueue->GetUrlQueue()->begin(); at != pDownloadQueue->GetUrlQueue()->end(); at++)
{
pUrlInfo = *at;
if (pUrlInfo->GetStatus() == 0)
{
bOK = true;
break;
}
}
return bOK;
}
void UrlCoordinator::StartUrlDownload(UrlInfo* pUrlInfo)
{
debug("Starting new UrlDownloader");
UrlDownloader* pUrlDownloader = new UrlDownloader();
pUrlDownloader->SetAutoDestroy(true);
pUrlDownloader->Attach(this);
pUrlDownloader->SetUrlInfo(pUrlInfo);
pUrlDownloader->SetURL(pUrlInfo->GetURL());
char tmp[1024];
pUrlInfo->GetName(tmp, 1024);
pUrlDownloader->SetInfoName(tmp);
snprintf(tmp, 1024, "%surl-%i.tmp", g_pOptions->GetTempDir(), pUrlInfo->GetID());
tmp[1024-1] = '\0';
pUrlDownloader->SetOutputFilename(tmp);
pUrlInfo->SetStatus(UrlInfo::aiRunning);
m_ActiveDownloads.push_back(pUrlDownloader);
pUrlDownloader->Start();
}
void UrlCoordinator::Update(Subject* Caller, void* Aspect)
{
debug("Notification from UrlDownloader received");
UrlDownloader* pUrlDownloader = (UrlDownloader*) Caller;
if ((pUrlDownloader->GetStatus() == WebDownloader::adFinished) ||
(pUrlDownloader->GetStatus() == WebDownloader::adFailed) ||
(pUrlDownloader->GetStatus() == WebDownloader::adRetry))
{
UrlCompleted(pUrlDownloader);
}
}
void UrlCoordinator::UrlCompleted(UrlDownloader* pUrlDownloader)
{
debug("URL downloaded");
UrlInfo* pUrlInfo = pUrlDownloader->GetUrlInfo();
if (pUrlDownloader->GetStatus() == WebDownloader::adFinished)
{
pUrlInfo->SetStatus(UrlInfo::aiFinished);
}
else if (pUrlDownloader->GetStatus() == WebDownloader::adFailed)
{
pUrlInfo->SetStatus(UrlInfo::aiFailed);
}
else if (pUrlDownloader->GetStatus() == WebDownloader::adRetry)
{
pUrlInfo->SetStatus(UrlInfo::aiUndefined);
}
char filename[1024];
if (pUrlDownloader->GetOriginalFilename())
{
strncpy(filename, pUrlDownloader->GetOriginalFilename(), 1024);
filename[1024-1] = '\0';
}
else
{
strncpy(filename, Util::BaseFileName(pUrlInfo->GetURL()), 1024);
filename[1024-1] = '\0';
// TODO: decode URL escaping
}
Util::MakeValidFilename(filename, '_', false);
debug("Filename: [%s]", filename);
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
// delete Download from Queue
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
UrlDownloader* pa = *it;
if (pa == pUrlDownloader)
{
m_ActiveDownloads.erase(it);
break;
}
}
bool bDeleteObj = false;
if (pUrlInfo->GetStatus() == UrlInfo::aiFinished || pUrlInfo->GetStatus() == UrlInfo::aiFailed)
{
// delete UrlInfo from Queue
for (UrlQueue::iterator it = pDownloadQueue->GetUrlQueue()->begin(); it != pDownloadQueue->GetUrlQueue()->end(); it++)
{
UrlInfo* pa = *it;
if (pa == pUrlInfo)
{
pDownloadQueue->GetUrlQueue()->erase(it);
break;
}
}
bDeleteObj = true;
if (g_pOptions->GetKeepHistory() > 0 && pUrlInfo->GetStatus() == UrlInfo::aiFailed)
{
HistoryInfo* pHistoryInfo = new HistoryInfo(pUrlInfo);
pHistoryInfo->SetTime(time(NULL));
pDownloadQueue->GetHistoryList()->push_front(pHistoryInfo);
bDeleteObj = false;
}
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
}
g_pQueueCoordinator->UnlockQueue();
if (pUrlInfo->GetStatus() == UrlInfo::aiFinished)
{
// add nzb-file to download queue
AddToNZBQueue(pUrlInfo, pUrlDownloader->GetOutputFilename(), filename, pUrlDownloader->GetCategory());
}
if (bDeleteObj)
{
delete pUrlInfo;
}
}
void UrlCoordinator::AddToNZBQueue(UrlInfo* pUrlInfo, const char* szTempFilename, const char* szOriginalFilename, const char* szOriginalCategory)
{
info("Queue downloaded collection %s", szOriginalFilename);
NZBFile* pNZBFile = NZBFile::CreateFromFile(szTempFilename, pUrlInfo->GetCategory());
if (pNZBFile)
{
pNZBFile->GetNZBInfo()->SetName(NULL);
pNZBFile->GetNZBInfo()->SetFilename(pUrlInfo->GetNZBFilename() && strlen(pUrlInfo->GetNZBFilename()) > 0 ? pUrlInfo->GetNZBFilename() : szOriginalFilename);
if (strlen(pUrlInfo->GetCategory()) > 0)
{
pNZBFile->GetNZBInfo()->SetCategory(pUrlInfo->GetCategory());
}
else if (szOriginalCategory)
{
pNZBFile->GetNZBInfo()->SetCategory(szOriginalCategory);
}
pNZBFile->GetNZBInfo()->BuildDestDirName();
for (NZBFile::FileInfos::iterator it = pNZBFile->GetFileInfos()->begin(); it != pNZBFile->GetFileInfos()->end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetPriority(pUrlInfo->GetPriority());
pFileInfo->SetPaused(pUrlInfo->GetAddPaused());
}
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, pUrlInfo->GetAddTop());
delete pNZBFile;
info("Collection %s added to queue", szOriginalFilename);
}
else
{
error("Could not add downloaded collection %s to queue", szOriginalFilename);
}
}

Util.cpp (1616 changed lines): file diff suppressed because it is too large.

XmlRpc.cpp (2322 changed lines): file diff suppressed because it is too large.

config.h.in

@@ -3,13 +3,17 @@
/* Define to 1 to include debug-code */
#undef DEBUG
/* Define to 1 if deleting of files during reading of directory is not
properly supported by OS */
#undef DIRBROWSER_SNAPSHOT
/* Define to 1 to not use curses */
#undef DISABLE_CURSES
/* Define to 1 to disable gzip-support */
#undef DISABLE_GZIP
/* Define to 1 to disable smart par-verification and restoration */
/* Define to 1 to disable par-verification and repair */
#undef DISABLE_PARCHECK
/* Define to 1 to not use TLS/SSL */
@@ -31,6 +35,16 @@
/* Define to 1 if you have the <curses.h> header file. */
#undef HAVE_CURSES_H
/* Define to 1 if you have the <dirent.h> header file, and it defines `DIR'.
*/
#undef HAVE_DIRENT_H
/* Define to 1 if you have the <endian.h> header file. */
#undef HAVE_ENDIAN_H
/* Define to 1 if fseeko (and presumably ftello) exists and is declared. */
#undef HAVE_FSEEKO
/* Define to 1 if getaddrinfo is supported */
#undef HAVE_GETADDRINFO
@@ -46,6 +60,12 @@
/* Define to 1 if gethostbyname_r takes 6 arguments */
#undef HAVE_GETHOSTBYNAME_R_6
/* Define to 1 if you have the `getopt' function. */
#undef HAVE_GETOPT
/* Define to 1 if you have the <getopt.h> header file. */
#undef HAVE_GETOPT_H
/* Define to 1 if getopt_long is supported */
#undef HAVE_GETOPT_LONG
@@ -55,6 +75,9 @@
/* Define to 1 to use GnuTLS library for TLS/SSL-support. */
#undef HAVE_LIBGNUTLS
/* Define to 1 if you have the `memcpy' function. */
#undef HAVE_MEMCPY
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
@@ -64,36 +87,56 @@
/* Define to 1 if you have the <ncurses/ncurses.h> header file. */
#undef HAVE_NCURSES_NCURSES_H
/* Define to 1 if you have the <ndir.h> header file, and it defines `DIR'. */
#undef HAVE_NDIR_H
/* Define to 1 to use OpenSSL library for TLS/SSL-support. */
#undef HAVE_OPENSSL
/* Define to 1 if libpar2 has recent bugfixes-patch (version 2) */
#undef HAVE_PAR2_BUGFIXES_V2
/* Define to 1 if libpar2 supports cancelling (needs a special patch) */
#undef HAVE_PAR2_CANCEL
/* Define to 1 if you have the <regex.h> header file. */
#undef HAVE_REGEX_H
/* Define to 1 if _SC_NPROCESSORS_ONLN is present in unistd.h */
#undef HAVE_SC_NPROCESSORS_ONLN
/* Define to 1 if spinlocks are supported */
#undef HAVE_SPINLOCK
/* Define to 1 if stat64 is supported */
#undef HAVE_STAT64
/* Define to 1 if stdbool.h conforms to C99. */
#undef HAVE_STDBOOL_H
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdio.h> header file. */
#undef HAVE_STDIO_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define to 1 if you have the `strcasecmp' function. */
#undef HAVE_STRCASECMP
/* Define to 1 if you have the `strchr' function. */
#undef HAVE_STRCHR
/* Define to 1 if you have the `stricmp' function. */
#undef HAVE_STRICMP
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the <sys/dir.h> header file, and it defines `DIR'.
*/
#undef HAVE_SYS_DIR_H
/* Define to 1 if you have the <sys/ndir.h> header file, and it defines `DIR'.
*/
#undef HAVE_SYS_NDIR_H
/* Define to 1 if you have the <sys/prctl.h> header file. */
#undef HAVE_SYS_PRCTL_H
@@ -109,6 +152,9 @@
/* Define to 1 if variadic macros are supported */
#undef HAVE_VARIADIC_MACROS
/* Define to 1 if the system has the type `_Bool'. */
#undef HAVE__BOOL
/* Name of package */
#undef PACKAGE
@@ -138,3 +184,28 @@
/* Version number of package */
#undef VERSION
/* Define to 1 if your processor stores words with the most significant byte
first (like Motorola and SPARC, unlike Intel and VAX). */
#undef WORDS_BIGENDIAN
/* Number of bits in a file offset, on hosts where this is settable. */
#undef _FILE_OFFSET_BITS
/* Define to 1 to make fseeko visible on some hosts (e.g. glibc 2.2). */
#undef _LARGEFILE_SOURCE
/* Define for large files, on AIX-style hosts. */
#undef _LARGE_FILES
/* Define to empty if `const' does not conform to ANSI C. */
#undef const
/* Define to `__inline__' or `__inline' if that's what the C compiler
calls it, or to nothing if 'inline' is not supported under any name. */
#ifndef __cplusplus
#undef inline
#endif
/* Define to `unsigned int' if <sys/types.h> does not define. */
#undef size_t

configure (vendored, 3519 changed lines): file diff suppressed because it is too large.

configure.ac

@@ -1,11 +1,32 @@
#
# This file is part of nzbget
#
# Copyright (C) 2008-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
#
# -*- Autoconf -*-
# Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(nzbget, 10.2, hugbug@users.sourceforge.net)
AC_INIT(nzbget, 14.2, hugbug@users.sourceforge.net)
AC_CANONICAL_SYSTEM
AM_INIT_AUTOMAKE(nzbget, 10.2)
AC_CONFIG_SRCDIR([nzbget.cpp])
AM_INIT_AUTOMAKE(nzbget, 14.2)
AC_CONFIG_SRCDIR([daemon/main/nzbget.cpp])
AC_CONFIG_HEADERS([config.h])
@@ -56,10 +77,9 @@ AC_CHECK_FUNC(getopt_long,
dnl
dnl stat64
dnl use 64-Bits for file sizes
dnl
AC_CHECK_FUNC(stat64,
[AC_DEFINE([HAVE_STAT64], 1, [Define to 1 if stat64 is supported])],)
AC_SYS_LARGEFILE
dnl
@@ -144,7 +164,7 @@ fi
dnl
dnl cCheck if spinlocks are available
dnl Check if spinlocks are available
dnl
AC_CHECK_FUNC(pthread_spin_init,
[AC_DEFINE([HAVE_SPINLOCK], 1, [Define to 1 if spinlocks are supported])]
@@ -181,6 +201,31 @@ AC_TRY_COMPILE([
AC_DEFINE_UNQUOTED(SOCKLEN_T, $SOCKLEN_T, [Determine what socket length (socklen_t) data type is])
dnl
dnl Dir-browser's snapshot
dnl
AC_MSG_CHECKING(whether dir-browser snapshot workaround is needed)
if test "$target_vendor" == "apple"; then
AC_MSG_RESULT([[yes]])
AC_DEFINE([DIRBROWSER_SNAPSHOT], 1, [Define to 1 if deleting of files during reading of directory is not properly supported by OS])
else
AC_MSG_RESULT([[no]])
fi
dnl
dnl check cpu cores via sysconf
dnl
AC_MSG_CHECKING(for cpu cores via sysconf)
AC_TRY_COMPILE(
[#include <unistd.h>],
[ int a = _SC_NPROCESSORS_ONLN; ],
FOUND="yes"
AC_MSG_RESULT([[yes]])
AC_DEFINE([HAVE_SC_NPROCESSORS_ONLN], 1, [Define to 1 if _SC_NPROCESSORS_ONLN is present in unistd.h]),
FOUND="no")
dnl
dnl checks for libxml2 includes and libraries.
dnl
@@ -197,7 +242,8 @@ AC_ARG_WITH(libxml2_libraries,
if test "$INCVAL" = "no" -o "$LIBVAL" = "no"; then
PKG_CHECK_MODULES(libxml2, libxml-2.0,
[LIBS="${LIBS} $libxml2_LIBS"]
[CPPFLAGS="${CPPFLAGS} $libxml2_CFLAGS"])
[CPPFLAGS="${CPPFLAGS} $libxml2_CFLAGS"],
AC_MSG_ERROR("libxml2 library not found"))
fi
AC_CHECK_HEADER(libxml/tree.h,,
AC_MSG_ERROR("libxml2 header files not found"))
@@ -253,116 +299,37 @@ fi
dnl
dnl Use libpar2 for par-checking. Deafult: no
dnl Use par-checking. Deafult: yes.
dnl
AC_MSG_CHECKING(whether to include code for par-checking)
AC_ARG_ENABLE(parcheck,
[AS_HELP_STRING([--disable-parcheck], [do not include par-check/-repair-support (removes dependency from libpar2- and libsigc-libraries)])],
[AS_HELP_STRING([--disable-parcheck], [do not include par-check/-repair-support])],
[ ENABLEPARCHECK=$enableval ],
[ ENABLEPARCHECK=yes] )
AC_MSG_RESULT($ENABLEPARCHECK)
if test "$ENABLEPARCHECK" = "yes"; then
dnl PAR2 checks.
dnl
dnl checks for libsigc++ includes and libraries (required for libpar2).
dnl
AC_ARG_WITH(libsigc_includes,
[AS_HELP_STRING([--with-libsigc-includes=DIR], [libsigc++-2.0 include directory])],
[CPPFLAGS="${CPPFLAGS} -I${withval}"]
[INCVAL="yes"],
[INCVAL="no"])
AC_ARG_WITH(libsigc_libraries,
[AS_HELP_STRING([--with-libsigc-libraries=DIR], [libsigc++-2.0 library directory])],
[LDFLAGS="${LDFLAGS} -L${withval}"]
[CPPFLAGS="${CPPFLAGS} -I${withval}/sigc++-2.0/include"]
[LIBVAL="yes"],
[LIBVAL="no"])
if test "$INCVAL" = "no" -o "$LIBVAL" = "no"; then
PKG_CHECK_MODULES(libsigc, sigc++-2.0,
[LIBS="${LIBS} $libsigc_LIBS"]
[CPPFLAGS="${CPPFLAGS} $libsigc_CFLAGS"])
fi
AC_CHECK_HEADER(sigc++/type_traits.h,,
AC_MSG_ERROR("libsigc++-2.0 header files not found"))
dnl
dnl checks for libpar2 includes and libraries.
dnl
INCVAL="${LIBPREF}/include"
LIBVAL="${LIBPREF}/lib"
AC_ARG_WITH(libpar2_includes,
[AS_HELP_STRING([--with-libpar2-includes=DIR], [libpar2 include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
AC_CHECK_HEADER(libpar2/libpar2.h,,
AC_MSG_ERROR("libpar2 header files not found"))
AC_ARG_WITH(libpar2_libraries,
[AS_HELP_STRING([--with-libpar2-libraries=DIR], [libpar2 library directory])],
[LIBVAL="$withval"])
LDFLAGS="${LDFLAGS} -L${LIBVAL}"
AC_SEARCH_LIBS([_ZN12Par2RepairerC1Ev], [par2], ,
AC_MSG_ERROR("libpar2 library not found"))
dnl
dnl check if libpar2 library is linkable
dnl
AC_MSG_CHECKING(for libpar2 linking)
AC_TRY_LINK(
[#include <libpar2/par2cmdline.h>]
[#include <libpar2/par2repairer.h>]
[ class Repairer : public Par2Repairer { }; ],
[ Repairer* p = new Repairer(); ],
AC_MSG_RESULT([[yes]]),
AC_MSG_RESULT([[no]])
AC_MSG_ERROR("libpar2 library not found"))
dnl
dnl check if libpar2 has support for cancelling
dnl
AC_MSG_CHECKING(whether libpar2 supports cancelling)
AC_TRY_COMPILE(
[#include <libpar2/par2cmdline.h>]
[#include <libpar2/par2repairer.h>]
[ class Repairer : public Par2Repairer { void test() { cancelled = true; } }; ],
[],
AC_MSG_RESULT([[yes]])
AC_DEFINE([HAVE_PAR2_CANCEL], 1, [Define to 1 if libpar2 supports cancelling (needs a special patch)]),
AC_MSG_RESULT([[no]]))
dnl
dnl check if libpar2 has recent bugfixes-patch
dnl
AC_MSG_CHECKING(whether libpar2 has recent bugfixes-patch (version 2))
AC_TRY_COMPILE(
[#include <libpar2/par2cmdline.h>]
[#include <libpar2/par2repairer.h>]
[ class Repairer : public Par2Repairer { void test() { BugfixesPatchVersion2(); } }; ],
[],
AC_MSG_RESULT([[yes]])
PAR2PATCHV2=yes
AC_DEFINE([HAVE_PAR2_BUGFIXES_V2], 1, [Define to 1 if libpar2 has recent bugfixes-patch (version 2)]),
AC_MSG_RESULT([[no]])
PAR2PATCHV2=no)
if test "$PAR2PATCHV2" = "no" ; then
AC_ARG_ENABLE(libpar2-bugfixes-check,
[AS_HELP_STRING([--disable-libpar2-bugfixes-check], [do not check if libpar2 has recent bugfixes-patch applied])],
[ PAR2PATCHCHECK=$enableval ],
[ PAR2PATCHCHECK=yes] )
if test "$PAR2PATCHCHECK" = "yes"; then
AC_ERROR([Your version of libpar2 doesn't include the recent bugfixes-patch (version 2, updated Dec 3, 2012). Please patch libpar2 with the patches supplied with NZBGet (see README for details). If you cannot patch libpar2, you can use configure parameter --disable-libpar2-bugfixes-check to suppress the check. Please note however that in this case the program may crash during par-check/repair. The patch is highly recommended!])
fi
fi
dnl Checks for header files.
AC_HEADER_DIRENT
AC_HEADER_STDBOOL
AC_HEADER_STDC
AC_CHECK_HEADERS([stdio.h] [endian.h] [getopt.h])
dnl Checks for typedefs, structures, and compiler characteristics.
AC_TYPE_SIZE_T
AC_C_BIGENDIAN
AC_C_CONST
AC_C_INLINE
AC_FUNC_FSEEKO
dnl Checks for library functions.
AC_FUNC_MEMCMP
AC_CHECK_FUNCS([stricmp] [strcasecmp])
AC_CHECK_FUNCS([strchr] [memcpy])
AC_CHECK_FUNCS([getopt])
AM_CONDITIONAL(WITH_PAR2, true)
else
AC_DEFINE([DISABLE_PARCHECK],1,[Define to 1 to disable smart par-verification and restoration])
AC_DEFINE([DISABLE_PARCHECK],1,[Define to 1 to disable par-verification and repair])
AM_CONDITIONAL(WITH_PAR2, false)
fi
@@ -377,47 +344,12 @@ AC_ARG_ENABLE(tls,
AC_MSG_RESULT($USETLS)
if test "$USETLS" = "yes"; then
AC_ARG_WITH(tlslib,
[AS_HELP_STRING([--with-tlslib=(GnuTLS, OpenSSL)], [TLS/SSL library to use])],
[AS_HELP_STRING([--with-tlslib=(OpenSSL, GnuTLS)], [TLS/SSL library to use])],
[TLSLIB="$withval"])
if test "$TLSLIB" != "GnuTLS" -a "$TLSLIB" != "OpenSSL" -a "$TLSLIB" != ""; then
AC_MSG_ERROR([Invalid argument for option --with-tlslib])
fi
if test "$TLSLIB" = "GnuTLS" -o "$TLSLIB" = ""; then
INCVAL="${LIBPREF}/include"
LIBVAL="${LIBPREF}/lib"
AC_ARG_WITH(libgnutls_includes,
[AS_HELP_STRING([--with-libgnutls-includes=DIR], [GnuTLS include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
AC_ARG_WITH(libgnutls_libraries,
[AS_HELP_STRING([--with-libgnutls-libraries=DIR], [GnuTLS library directory])],
[LIBVAL="$withval"])
LDFLAGS="${LDFLAGS} -L${LIBVAL}"
AC_CHECK_HEADER(gnutls/gnutls.h,
FOUND=yes
TLSHEADERS=yes,
FOUND=no)
if test "$FOUND" = "no" -a "$TLSLIB" = "GnuTLS"; then
AC_MSG_ERROR([Couldn't find GnuTLS headers (gnutls.h)])
fi
if test "$FOUND" = "yes"; then
AC_SEARCH_LIBS([gnutls_global_init], [gnutls],
AC_SEARCH_LIBS([gcry_control], [gnutls gcrypt],
FOUND=yes,
FOUND=no),
FOUND=no)
if test "$FOUND" = "no" -a "$TLSLIB" = "GnuTLS"; then
AC_MSG_ERROR([Couldn't find GnuTLS library])
fi
if test "$FOUND" = "yes"; then
TLSLIB="GnuTLS"
AC_DEFINE([HAVE_LIBGNUTLS],1,[Define to 1 to use GnuTLS library for TLS/SSL-support.])
fi
fi
fi
if test "$TLSLIB" = "OpenSSL" -o "$TLSLIB" = ""; then
AC_ARG_WITH(openssl_includes,
[AS_HELP_STRING([--with-openssl-includes=DIR], [OpenSSL include directory])],
@@ -430,9 +362,10 @@ if test "$USETLS" = "yes"; then
[LIBVAL="yes"],
[LIBVAL="no"])
if test "$INCVAL" = "no" -o "$LIBVAL" = "no"; then
PKG_CHECK_MODULES(openssl, openssl,
PKG_CHECK_MODULES([openssl], [openssl],
[LIBS="${LIBS} $openssl_LIBS"]
[CPPFLAGS="${CPPFLAGS} $openssl_CFLAGS"])
[CPPFLAGS="${CPPFLAGS} $openssl_CFLAGS"],
FOUND=no)
fi
AC_CHECK_HEADER(openssl/ssl.h,
@@ -457,12 +390,66 @@ if test "$USETLS" = "yes"; then
fi
fi
fi
if test "$TLSLIB" = "GnuTLS" -o "$TLSLIB" = ""; then
INCVAL="${LIBPREF}/include"
LIBVAL="${LIBPREF}/lib"
AC_ARG_WITH(libgnutls_includes,
[AS_HELP_STRING([--with-libgnutls-includes=DIR], [GnuTLS include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
AC_ARG_WITH(libgnutls_libraries,
[AS_HELP_STRING([--with-libgnutls-libraries=DIR], [GnuTLS library directory])],
[LIBVAL="$withval"])
LDFLAGS="${LDFLAGS} -L${LIBVAL}"
AC_CHECK_HEADER(gnutls/gnutls.h,
FOUND=yes
TLSHEADERS=yes,
FOUND=no)
if test "$FOUND" = "no" -a "$TLSLIB" = "GnuTLS"; then
AC_MSG_ERROR([Couldn't find GnuTLS headers (gnutls.h)])
fi
if test "$FOUND" = "yes"; then
AC_SEARCH_LIBS([gnutls_global_init], [gnutls],
FOUND=yes,
FOUND=no)
if test "$FOUND" = "yes"; then
dnl gcrypt is optional
AC_MSG_CHECKING([whether gcrypt is needed])
AC_TRY_COMPILE(
[#include <gnutls/gnutls.h>]
[#if GNUTLS_VERSION_NUMBER <= 0x020b00]
[compile error]
[#endif],
[int a;],
AC_MSG_RESULT([no])
GCRYPT=no,
AC_MSG_RESULT([yes])
GCRYPT=yes)
if test "$GCRYPT" = "yes"; then
AC_CHECK_HEADER([gcrypt.h],
AC_SEARCH_LIBS([gcry_control], [gnutls gcrypt],
FOUND=yes,
FOUND=no),
FOUND=yes)
fi
fi
if test "$FOUND" = "no" -a "$TLSLIB" = "GnuTLS"; then
AC_MSG_ERROR([Couldn't find GnuTLS library])
fi
if test "$FOUND" = "yes"; then
TLSLIB="GnuTLS"
AC_DEFINE([HAVE_LIBGNUTLS],1,[Define to 1 to use GnuTLS library for TLS/SSL-support.])
fi
fi
fi
if test "$TLSLIB" = ""; then
if test "$TLSHEADERS" = ""; then
AC_MSG_ERROR([Couldn't find neither GnuTLS nor OpenSSL headers (gnutls.h or ssl.h)])
AC_MSG_ERROR([Couldn't find neither OpenSSL nor GnuTLS headers (ssl.h or gnutls.h)])
else
AC_MSG_ERROR([Couldn't find neither GnuTLS nor OpenSSL library])
AC_MSG_ERROR([Couldn't find neither OpenSSL nor GnuTLS library])
fi
fi
else
@@ -503,22 +490,14 @@ fi
dnl
dnl Some Linux systems require an empty signal handler for SIGCHLD
dnl in order for exit codes to be correctly delivered to parent process.
dnl Some BSD systems however may not function properly if the handler is installed.
dnl The default behavior is to check the target and disable the handler on BSD but keep it enabled on other systems.
dnl Some 32-Bit BSD systems however may not function properly if the handler is installed.
dnl The default behavior is to install the handler.
dnl
AC_MSG_CHECKING(whether to use an empty SIGCHLD handler)
AC_ARG_ENABLE(sigchld-handler,
[AS_HELP_STRING([--disable-sigchld-handler], [do not use sigchld-handler (the disabling is recommended for BSD)])],
[AS_HELP_STRING([--disable-sigchld-handler], [do not use sigchld-handler (the disabling may be neccessary on 32-Bit BSD)])],
[SIGCHLDHANDLER=$enableval],
[SIGCHLDHANDLER=auto])
if test "$SIGCHLDHANDLER" = "auto"; then
SIGCHLDHANDLER=yes
case "$target" in
*bsd*)
SIGCHLDHANDLER=no
;;
esac
fi
[SIGCHLDHANDLER=yes])
AC_MSG_RESULT($SIGCHLDHANDLER)
if test "$SIGCHLDHANDLER" = "yes"; then
AC_DEFINE([SIGCHLD_HANDLER], 1, [Define to 1 to install an empty signal handler for SIGCHLD])
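For readers unfamiliar with the SIGCHLD_HANDLER define: the pattern the comment refers to is simply installing a no-op handler so that SIGCHLD has an explicit, non-ignoring disposition, which on the affected systems keeps waitpid() able to deliver the child's exit code. A hedged, standalone sketch (assumed example, not the project's code):

    #include <signal.h>
    #include <stddef.h>

    // No-op handler: its only purpose is to give SIGCHLD an explicit,
    // non-ignoring disposition (compare the SIGCHLD_HANDLER define above).
    static void SigChildHandler(int) { }

    static void InstallSigChildHandler()
    {
        struct sigaction sa;
        sa.sa_handler = SigChildHandler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGCHLD, &sa, NULL);
    }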

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -37,7 +37,7 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <stdio.h>
#ifdef WIN32
#include <winsock2.h>
#include <ws2tcpip.h>
@@ -117,19 +117,20 @@ Connection::Connection(const char* szHost, int iPort, bool bTLS)
{
debug("Creating Connection");
m_szHost = NULL;
m_iPort = iPort;
m_bTLS = bTLS;
m_szCipher = NULL;
m_eStatus = csDisconnected;
m_iSocket = INVALID_SOCKET;
m_iBufAvail = 0;
m_iTimeout = 60;
m_bSuppressErrors = true;
m_szReadBuf = (char*)malloc(CONNECTION_READBUFFER_SIZE + 1);
m_szHost = NULL;
m_iPort = iPort;
m_bTLS = bTLS;
m_szCipher = NULL;
m_eStatus = csDisconnected;
m_iSocket = INVALID_SOCKET;
m_iBufAvail = 0;
m_iTimeout = 60;
m_bSuppressErrors = true;
m_szReadBuf = (char*)malloc(CONNECTION_READBUFFER_SIZE + 1);
m_iTotalBytesRead = 0;
#ifndef DISABLE_TLS
m_pTLSSocket = NULL;
m_bTLSError = false;
m_pTLSSocket = NULL;
m_bTLSError = false;
#endif
if (szHost)
@@ -164,21 +165,11 @@ Connection::~Connection()
Disconnect();
if (m_szHost)
{
free(m_szHost);
}
if (m_szCipher)
{
free(m_szCipher);
}
free(m_szHost);
free(m_szCipher);
free(m_szReadBuf);
#ifndef DISABLE_TLS
if (m_pTLSSocket)
{
delete m_pTLSSocket;
}
delete m_pTLSSocket;
#endif
}
@@ -195,10 +186,7 @@ void Connection::SetSuppressErrors(bool bSuppressErrors)
void Connection::SetCipher(const char* szCipher)
{
if (m_szCipher)
{
free(m_szCipher);
}
free(m_szCipher);
m_szCipher = szCipher ? strdup(szCipher) : NULL;
}
@@ -219,7 +207,7 @@ bool Connection::Connect()
}
else
{
Connection::DoDisconnect();
DoDisconnect();
}
return bRes;
@@ -243,35 +231,120 @@ bool Connection::Disconnect()
return bRes;
}
int Connection::Bind()
bool Connection::Bind()
{
debug("Binding");
if (m_eStatus == csListening)
{
return 0;
return true;
}
int iRes = DoBind();
if (iRes == 0)
#ifdef HAVE_GETADDRINFO
struct addrinfo addr_hints, *addr_list, *addr;
char iPortStr[sizeof(int) * 4 + 1]; // is enough to hold any converted int
memset(&addr_hints, 0, sizeof(addr_hints));
addr_hints.ai_family = AF_UNSPEC; // Allow IPv4 or IPv6
addr_hints.ai_socktype = SOCK_STREAM,
addr_hints.ai_flags = AI_PASSIVE; // For wildcard IP address
sprintf(iPortStr, "%d", m_iPort);
int res = getaddrinfo(m_szHost, iPortStr, &addr_hints, &addr_list);
if (res != 0)
{
m_eStatus = csListening;
error("Could not resolve hostname %s", m_szHost);
return false;
}
m_iSocket = INVALID_SOCKET;
for (addr = addr_list; addr != NULL; addr = addr->ai_next)
{
m_iSocket = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);
if (m_iSocket != INVALID_SOCKET)
{
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
res = bind(m_iSocket, addr->ai_addr, addr->ai_addrlen);
if (res != -1)
{
// Connection established
break;
}
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
}
freeaddrinfo(addr_list);
#else
struct sockaddr_in sSocketAddress;
memset(&sSocketAddress, 0, sizeof(sSocketAddress));
sSocketAddress.sin_family = AF_INET;
if (!m_szHost || strlen(m_szHost) == 0)
{
sSocketAddress.sin_addr.s_addr = htonl(INADDR_ANY);
}
else
{
sSocketAddress.sin_addr.s_addr = ResolveHostAddr(m_szHost);
if (sSocketAddress.sin_addr.s_addr == (unsigned int)-1)
{
return false;
}
}
sSocketAddress.sin_port = htons(m_iPort);
m_iSocket = socket(PF_INET, SOCK_STREAM, 0);
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Socket creation failed for %s", m_szHost, true, 0);
return false;
}
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
int res = bind(m_iSocket, (struct sockaddr *) &sSocketAddress, sizeof(sSocketAddress));
if (res == -1)
{
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
#endif
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Binding socket failed for %s", m_szHost, true, 0);
return false;
}
if (listen(m_iSocket, 100) < 0)
{
ReportError("Listen on socket failed for %s", m_szHost, true, 0);
return false;
}
m_eStatus = csListening;
return iRes;
return true;
}
int Connection::WriteLine(const char* pBuffer)
{
//debug("Connection::write(char* line)");
//debug("Connection::WriteLine");
if (m_eStatus != csConnected)
{
return -1;
}
int iRes = DoWriteLine(pBuffer);
int iRes = send(m_iSocket, pBuffer, strlen(pBuffer), 0);
return iRes;
}
@@ -306,9 +379,75 @@ char* Connection::ReadLine(char* pBuffer, int iSize, int* pBytesRead)
return NULL;
}
char* res = DoReadLine(pBuffer, iSize, pBytesRead);
char* pBufPtr = pBuffer;
iSize--; // for trailing '0'
int iBytesRead = 0;
int iBufAvail = m_iBufAvail; // local variable is faster
char* szBufPtr = m_szBufPtr; // local variable is faster
while (iSize)
{
if (!iBufAvail)
{
iBufAvail = recv(m_iSocket, m_szReadBuf, CONNECTION_READBUFFER_SIZE, 0);
if (iBufAvail < 0)
{
ReportError("Could not receive data on socket", NULL, true, 0);
break;
}
else if (iBufAvail == 0)
{
break;
}
szBufPtr = m_szReadBuf;
m_szReadBuf[iBufAvail] = '\0';
}
int len = 0;
char* p = (char*)memchr(szBufPtr, '\n', iBufAvail);
if (p)
{
len = (int)(p - szBufPtr + 1);
}
else
{
len = iBufAvail;
}
if (len > iSize)
{
len = iSize;
}
memcpy(pBufPtr, szBufPtr, len);
pBufPtr += len;
szBufPtr += len;
iBufAvail -= len;
iBytesRead += len;
iSize -= len;
if (p)
{
break;
}
}
*pBufPtr = '\0';
m_iBufAvail = iBufAvail > 0 ? iBufAvail : 0; // copy back to member
m_szBufPtr = szBufPtr; // copy back to member
if (pBytesRead)
{
*pBytesRead = iBytesRead;
}
return res;
m_iTotalBytesRead += iBytesRead;
if (pBufPtr == pBuffer)
{
return NULL;
}
return pBuffer;
}
Connection* Connection::Accept()
@@ -320,13 +459,17 @@ Connection* Connection::Accept()
return NULL;
}
SOCKET iRes = DoAccept();
if (iRes == INVALID_SOCKET)
SOCKET iSocket = accept(m_iSocket, NULL, NULL);
if (iSocket == INVALID_SOCKET && m_eStatus != csCancelled)
{
ReportError("Could not accept connection", NULL, true, 0);
}
if (iSocket == INVALID_SOCKET)
{
return NULL;
}
Connection* pCon = new Connection(iRes, m_bTLS);
Connection* pCon = new Connection(iSocket, m_bTLS);
return pCon;
}
@@ -508,200 +651,12 @@ bool Connection::DoDisconnect()
return true;
}
int Connection::DoWriteLine(const char* pBuffer)
{
//debug("Connection::doWrite()");
return send(m_iSocket, pBuffer, strlen(pBuffer), 0);
}
char* Connection::DoReadLine(char* pBuffer, int iSize, int* pBytesRead)
{
//debug( "Connection::DoReadLine()" );
char* pBufPtr = pBuffer;
iSize--; // for trailing '0'
int iBytesRead = 0;
int iBufAvail = m_iBufAvail; // local variable is faster
char* szBufPtr = m_szBufPtr; // local variable is faster
while (iSize)
{
if (!iBufAvail)
{
iBufAvail = recv(m_iSocket, m_szReadBuf, CONNECTION_READBUFFER_SIZE, 0);
if (iBufAvail < 0)
{
ReportError("Could not receive data on socket", NULL, true, 0);
break;
}
else if (iBufAvail == 0)
{
break;
}
szBufPtr = m_szReadBuf;
m_szReadBuf[iBufAvail] = '\0';
}
int len = 0;
char* p = (char*)memchr(szBufPtr, '\n', iBufAvail);
if (p)
{
len = (int)(p - szBufPtr + 1);
}
else
{
len = iBufAvail;
}
if (len > iSize)
{
len = iSize;
}
memcpy(pBufPtr, szBufPtr, len);
pBufPtr += len;
szBufPtr += len;
iBufAvail -= len;
iBytesRead += len;
iSize -= len;
if (p)
{
break;
}
}
*pBufPtr = '\0';
m_iBufAvail = iBufAvail > 0 ? iBufAvail : 0; // copy back to member
m_szBufPtr = szBufPtr; // copy back to member
if (pBytesRead)
{
*pBytesRead = iBytesRead;
}
if (pBufPtr == pBuffer)
{
return NULL;
}
return pBuffer;
}
void Connection::ReadBuffer(char** pBuffer, int *iBufLen)
{
*iBufLen = m_iBufAvail;
*pBuffer = m_szBufPtr;
m_iBufAvail = 0;
};
int Connection::DoBind()
{
debug("Do binding");
#ifdef HAVE_GETADDRINFO
struct addrinfo addr_hints, *addr_list, *addr;
char iPortStr[sizeof(int) * 4 + 1]; // is enough to hold any converted int
memset(&addr_hints, 0, sizeof(addr_hints));
addr_hints.ai_family = AF_UNSPEC; // Allow IPv4 or IPv6
addr_hints.ai_socktype = SOCK_STREAM,
addr_hints.ai_flags = AI_PASSIVE; // For wildcard IP address
sprintf(iPortStr, "%d", m_iPort);
int res = getaddrinfo(m_szHost, iPortStr, &addr_hints, &addr_list);
if (res != 0)
{
error("Could not resolve hostname %s", m_szHost);
return -1;
}
m_iSocket = INVALID_SOCKET;
for (addr = addr_list; addr != NULL; addr = addr->ai_next)
{
m_iSocket = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);
if (m_iSocket != INVALID_SOCKET)
{
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
res = bind(m_iSocket, addr->ai_addr, addr->ai_addrlen);
if (res != -1)
{
// Connection established
break;
}
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
}
freeaddrinfo(addr_list);
#else
struct sockaddr_in sSocketAddress;
memset(&sSocketAddress, 0, sizeof(sSocketAddress));
sSocketAddress.sin_family = AF_INET;
if (!m_szHost || strlen(m_szHost) == 0)
{
sSocketAddress.sin_addr.s_addr = htonl(INADDR_ANY);
}
else
{
sSocketAddress.sin_addr.s_addr = ResolveHostAddr(m_szHost);
if (sSocketAddress.sin_addr.s_addr == (unsigned int)-1)
{
return -1;
}
}
sSocketAddress.sin_port = htons(m_iPort);
m_iSocket = socket(PF_INET, SOCK_STREAM, 0);
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Socket creation failed for %s", m_szHost, true, 0);
return -1;
}
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
int res = bind(m_iSocket, (struct sockaddr *) &sSocketAddress, sizeof(sSocketAddress));
if (res == -1)
{
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
#endif
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Binding socket failed for %s", m_szHost, true, 0);
return -1;
}
if (listen(m_iSocket, 100) < 0)
{
ReportError("Listen on socket failed for %s", m_szHost, true, 0);
return -1;
}
return 0;
}
SOCKET Connection::DoAccept()
{
SOCKET iSocket = accept(m_iSocket, NULL, NULL);
if (iSocket == INVALID_SOCKET && m_eStatus != csCancelled)
{
ReportError("Could not accept connection", NULL, true, 0);
}
return iSocket;
}
};
void Connection::Cancel()
{
@@ -779,11 +734,7 @@ bool Connection::StartTLS(bool bIsClient, const char* szCertFile, const char* sz
{
debug("Starting TLS");
if (m_pTLSSocket)
{
delete m_pTLSSocket;
}
delete m_pTLSSocket;
m_pTLSSocket = new TLSSocket(m_iSocket, bIsClient, szCertFile, szKeyFile, m_szCipher);
m_pTLSSocket->SetSuppressErrors(m_bSuppressErrors);
@@ -910,3 +861,10 @@ const char* Connection::GetRemoteAddr()
return m_szRemoteAddr;
}
int Connection::FetchTotalBytesRead()
{
int iTotal = m_iTotalBytesRead;
m_iTotalBytesRead = 0;
return iTotal;
}
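The new FetchTotalBytesRead() returns the accumulated byte counter and resets it, which makes it natural to poll at a fixed interval for rate measurement. A hedged usage sketch (assumed calling code, not taken from the repository; Connection is the class shown above):

    #include <stdio.h>
    #include <unistd.h>

    // Illustrative only: poll once per second; each call returns the bytes
    // received since the previous call, because the counter is reset.
    // Requires the Connection class declared in this changeset.
    static void SampleSpeed(Connection* pConnection)
    {
        for (int i = 0; i < 10; i++)
        {
            usleep(1000 * 1000); // one second
            int iBytes = pConnection->FetchTotalBytesRead();
            printf("current rate: %i bytes/sec\n", iBytes);
        }
    }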

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -60,6 +60,7 @@ protected:
int m_iTimeout;
bool m_bSuppressErrors;
char m_szRemoteAddr[20];
int m_iTotalBytesRead;
#ifndef DISABLE_TLS
TLSSocket* m_pTLSSocket;
bool m_bTLSError;
@@ -72,12 +73,8 @@ protected:
Connection(SOCKET iSocket, bool bTLS);
void ReportError(const char* szMsgPrefix, const char* szMsgArg, bool PrintErrCode, int herrno);
virtual bool DoConnect();
virtual bool DoDisconnect();
int DoBind();
int DoWriteLine(const char* pBuffer);
char* DoReadLine(char* pBuffer, int iSize, int* pBytesRead);
SOCKET DoAccept();
bool DoConnect();
bool DoDisconnect();
#ifndef HAVE_GETADDRINFO
unsigned int ResolveHostAddr(const char* szHost);
#endif
@@ -92,9 +89,9 @@ public:
virtual ~Connection();
static void Init();
static void Final();
bool Connect();
bool Disconnect();
int Bind();
virtual bool Connect();
virtual bool Disconnect();
bool Bind();
bool Send(const char* pBuffer, int iSize);
bool Recv(char* pBuffer, int iSize);
int TryRecv(char* pBuffer, int iSize);
@@ -116,6 +113,7 @@ public:
#ifndef DISABLE_TLS
bool StartTLS(bool bIsClient, const char* szCertFile, const char* szKeyFile);
#endif
int FetchTotalBytesRead();
};
#endif

View File

@@ -26,16 +26,15 @@
# include "config.h"
#endif
#ifndef DISABLE_TLS
#ifdef WIN32
#define SKIP_DEFAULT_WINDOWS_HEADERS
#include "win32.h"
#endif
#ifndef DISABLE_TLS
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#ifdef WIN32
#include <winsock2.h>
#include <ws2tcpip.h>
@@ -43,7 +42,6 @@
#include <strings.h>
#endif
#include <ctype.h>
#include <stdarg.h>
#include <limits.h>
#include <time.h>
#include <errno.h>
@@ -277,18 +275,9 @@ TLSSocket::TLSSocket(SOCKET iSocket, bool bIsClient, const char* szCertFile, con
TLSSocket::~TLSSocket()
{
if (m_szCertFile)
{
free(m_szCertFile);
}
if (m_szKeyFile)
{
free(m_szKeyFile);
}
if (m_szCipher)
{
free(m_szCipher);
}
free(m_szCertFile);
free(m_szKeyFile);
free(m_szCipher);
Close();
}

View File

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2012-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -33,7 +33,7 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
@@ -62,6 +62,8 @@ WebDownloader::WebDownloader()
m_bConfirmedLength = false;
m_eStatus = adUndefined;
m_szOriginalFilename = NULL;
m_bForce = false;
m_bRetry = true;
SetLastUpdateTimeNow();
}
@@ -69,22 +71,10 @@ WebDownloader::~WebDownloader()
{
debug("Destroying WebDownloader");
if (m_szURL)
{
free(m_szURL);
}
if (m_szInfoName)
{
free(m_szInfoName);
}
if (m_szOutputFilename)
{
free(m_szOutputFilename);
}
if (m_szOriginalFilename)
{
free(m_szOriginalFilename);
}
free(m_szURL);
free(m_szInfoName);
free(m_szOutputFilename);
free(m_szOriginalFilename);
}
void WebDownloader::SetOutputFilename(const char* v)
@@ -99,6 +89,7 @@ void WebDownloader::SetInfoName(const char* v)
void WebDownloader::SetURL(const char * szURL)
{
free(m_szURL);
m_szURL = strdup(szURL);
}
@@ -116,7 +107,13 @@ void WebDownloader::Run()
int iRemainedDownloadRetries = g_pOptions->GetRetries() > 0 ? g_pOptions->GetRetries() : 1;
int iRemainedConnectRetries = iRemainedDownloadRetries > 10 ? iRemainedDownloadRetries : 10;
if (!m_bRetry)
{
iRemainedDownloadRetries = 1;
iRemainedConnectRetries = 1;
}
m_iRedirects = 0;
EStatus Status = adFailed;
while (!IsStopped() && iRemainedDownloadRetries > 0 && iRemainedConnectRetries > 0)
@@ -127,19 +124,19 @@ void WebDownloader::Run()
if ((((Status == adFailed) && (iRemainedDownloadRetries > 1)) ||
((Status == adConnectError) && (iRemainedConnectRetries > 1)))
&& !IsStopped() && !(g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2()))
&& !IsStopped() && !(!m_bForce && g_pOptions->GetPauseDownload()))
{
detail("Waiting %i sec to retry", g_pOptions->GetRetryInterval());
int msec = 0;
while (!IsStopped() && (msec < g_pOptions->GetRetryInterval() * 1000) &&
!(g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2()))
!(!m_bForce && g_pOptions->GetPauseDownload()))
{
usleep(100 * 1000);
msec += 100;
}
}
if (IsStopped() || g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2())
if (IsStopped() || (!m_bForce && g_pOptions->GetPauseDownload()))
{
Status = adRetry;
break;
@@ -150,6 +147,17 @@ void WebDownloader::Run()
break;
}
if (Status == adRedirect)
{
m_iRedirects++;
if (m_iRedirects > 5)
{
warn("Too many redirects for %s", m_szInfoName);
Status = adFailed;
break;
}
}
if (Status != adConnectError)
{
iRemainedDownloadRetries--;
@@ -199,6 +207,7 @@ WebDownloader::EStatus WebDownloader::Download()
return Status;
}
m_pConnection->SetTimeout(g_pOptions->GetUrlTimeout());
m_pConnection->SetSuppressErrors(false);
// connection
@@ -290,7 +299,15 @@ void WebDownloader::SendHeaders(URL *pUrl)
tmp[1024-1] = '\0';
m_pConnection->WriteLine(tmp);
snprintf(tmp, 1024, "Host: %s\r\n", pUrl->GetHost());
if ((!strcasecmp(pUrl->GetProtocol(), "http") && (pUrl->GetPort() == 80 || pUrl->GetPort() == 0)) ||
(!strcasecmp(pUrl->GetProtocol(), "https") && (pUrl->GetPort() == 443 || pUrl->GetPort() == 0)))
{
snprintf(tmp, 1024, "Host: %s\r\n", pUrl->GetHost());
}
else
{
snprintf(tmp, 1024, "Host: %s:%i\r\n", pUrl->GetHost(), pUrl->GetPort());
}
tmp[1024-1] = '\0';
m_pConnection->WriteLine(tmp);
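The branch above encodes the common HTTP convention that the port is omitted from the Host header when it equals the scheme's default (80 for http, 443 for https, with 0 meaning "unspecified" here). The same rule as a standalone sketch with hypothetical helper names:

    #include <stdio.h>
    #include <strings.h>

    // Hypothetical helper mirroring the rule above: append the port only
    // when it differs from the scheme's default.
    static void BuildHostHeader(const char* szProtocol, const char* szHost,
        int iPort, char* szBuf, int iBufSize)
    {
        bool bDefaultPort =
            (!strcasecmp(szProtocol, "http") && (iPort == 80 || iPort == 0)) ||
            (!strcasecmp(szProtocol, "https") && (iPort == 443 || iPort == 0));
        if (bDefaultPort)
        {
            snprintf(szBuf, iBufSize, "Host: %s\r\n", szHost);
        }
        else
        {
            snprintf(szBuf, iBufSize, "Host: %s:%i\r\n", szHost, iPort);
        }
        szBuf[iBufSize-1] = '\0';
    }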
@@ -312,6 +329,8 @@ WebDownloader::EStatus WebDownloader::DownloadHeaders()
m_iContentLen = -1;
bool bFirstLine = true;
m_bGZip = false;
m_bRedirecting = false;
m_bRedirected = false;
// Headers
while (!IsStopped())
@@ -354,7 +373,14 @@ WebDownloader::EStatus WebDownloader::DownloadHeaders()
break;
}
Util::TrimRight(line);
ProcessHeader(line);
if (m_bRedirected)
{
Status = adRedirect;
break;
}
}
free(szLineBuf);
@@ -397,7 +423,7 @@ WebDownloader::EStatus WebDownloader::DownloadBody()
// Have we encountered a timeout?
if (iLen <= 0)
{
if (m_iContentLen == -1)
if (m_iContentLen == -1 && iWrittenLen > 0)
{
bEnd = true;
break;
@@ -430,10 +456,7 @@ WebDownloader::EStatus WebDownloader::DownloadBody()
free(szLineBuf);
#ifndef DISABLE_GZIP
if (m_pGUnzipStream)
{
delete m_pGUnzipStream;
}
delete m_pGUnzipStream;
#endif
if (m_pOutFile)
@@ -485,6 +508,11 @@ WebDownloader::EStatus WebDownloader::CheckResponse(const char* szResponse)
warn("URL %s failed: %s", m_szInfoName, szHTTPResponse);
return adNotFound;
}
else if (!strncmp(szHTTPResponse, "301", 3) || !strncmp(szHTTPResponse, "302", 3))
{
m_bRedirecting = true;
return adRunning;
}
else if (!strncmp(szHTTPResponse, "200", 3))
{
// OK
@@ -500,21 +528,24 @@ WebDownloader::EStatus WebDownloader::CheckResponse(const char* szResponse)
void WebDownloader::ProcessHeader(const char* szLine)
{
if (!strncmp(szLine, "Content-Length: ", 16))
if (!strncasecmp(szLine, "Content-Length: ", 16))
{
m_iContentLen = atoi(szLine + 16);
m_bConfirmedLength = true;
}
if (!strncmp(szLine, "Content-Encoding: gzip", 22))
else if (!strncasecmp(szLine, "Content-Encoding: gzip", 22))
{
m_bGZip = true;
}
if (!strncmp(szLine, "Content-Disposition: ", 21))
else if (!strncasecmp(szLine, "Content-Disposition: ", 21))
{
ParseFilename(szLine);
}
else if (m_bRedirecting && !strncasecmp(szLine, "Location: ", 10))
{
ParseRedirect(szLine + 10);
m_bRedirected = true;
}
}
void WebDownloader::ParseFilename(const char* szContentDisposition)
@@ -551,15 +582,36 @@ void WebDownloader::ParseFilename(const char* szContentDisposition)
WebUtil::HttpUnquote(fname);
if (m_szOriginalFilename)
{
free(m_szOriginalFilename);
}
free(m_szOriginalFilename);
m_szOriginalFilename = strdup(Util::BaseFileName(fname));
debug("OriginalFilename: %s", m_szOriginalFilename);
}
void WebDownloader::ParseRedirect(const char* szLocation)
{
const char* szNewURL = szLocation;
char szUrlBuf[1024];
URL newUrl(szNewURL);
if (!newUrl.IsValid())
{
// relative address
URL oldUrl(m_szURL);
if (oldUrl.GetPort() > 0)
{
snprintf(szUrlBuf, 1024, "%s://%s:%i%s", oldUrl.GetProtocol(), oldUrl.GetHost(), oldUrl.GetPort(), szNewURL);
}
else
{
snprintf(szUrlBuf, 1024, "%s://%s%s", oldUrl.GetProtocol(), oldUrl.GetHost(), szNewURL);
}
szUrlBuf[1024-1] = '\0';
szNewURL = szUrlBuf;
}
detail("URL %s redirected to %s", m_szURL, szNewURL);
SetURL(szNewURL);
}
bool WebDownloader::Write(void* pBuffer, int iLen)
{
if (!m_pOutFile && !PrepareFile())
@@ -607,15 +659,15 @@ bool WebDownloader::PrepareFile()
// prepare file for writing
const char* szFilename = m_szOutputFilename;
m_pOutFile = fopen(szFilename, "wb");
m_pOutFile = fopen(szFilename, FOPEN_WB);
if (!m_pOutFile)
{
error("Could not %s file %s", "create", szFilename);
return false;
}
if (g_pOptions->GetWriteBufferSize() > 0)
if (g_pOptions->GetWriteBuffer() > 0)
{
setvbuf(m_pOutFile, (char *)NULL, _IOFBF, g_pOptions->GetWriteBufferSize());
setvbuf(m_pOutFile, NULL, _IOFBF, g_pOptions->GetWriteBuffer() * 1024);
}
return true;
@@ -630,7 +682,7 @@ void WebDownloader::LogDebugInfo()
ctime_r(&m_tLastUpdateTime, szTime);
#endif
debug(" Web-Download: status=%i, LastUpdateTime=%s, filename=%s", m_eStatus, szTime, Util::BaseFileName(m_szOutputFilename));
info(" Web-Download: status=%i, LastUpdateTime=%s, filename=%s", m_eStatus, szTime, Util::BaseFileName(m_szOutputFilename));
}
void WebDownloader::Stop()

View File

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2012-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -44,6 +44,7 @@ public:
adFailed,
adRetry,
adNotFound,
adRedirect,
adConnectError,
adFatalError
};
@@ -60,7 +61,12 @@ private:
int m_iContentLen;
bool m_bConfirmedLength;
char* m_szOriginalFilename;
bool m_bForce;
bool m_bRedirecting;
bool m_bRedirected;
int m_iRedirects;
bool m_bGZip;
bool m_bRetry;
#ifndef DISABLE_GZIP
GUnzipStream* m_pGUnzipStream;
#endif
@@ -70,22 +76,23 @@ private:
bool PrepareFile();
void FreeConnection();
EStatus CheckResponse(const char* szResponse);
EStatus Download();
EStatus CreateConnection(URL *pUrl);
void ParseFilename(const char* szContentDisposition);
void SendHeaders(URL *pUrl);
EStatus DownloadHeaders();
EStatus DownloadBody();
void ParseRedirect(const char* szLocation);
protected:
virtual void ProcessHeader(const char* szLine);
public:
WebDownloader();
~WebDownloader();
virtual ~WebDownloader();
EStatus GetStatus() { return m_eStatus; }
virtual void Run();
virtual void Stop();
EStatus Download();
bool Terminate();
void SetInfoName(const char* v);
const char* GetInfoName() { return m_szInfoName; }
@@ -96,6 +103,8 @@ public:
void SetLastUpdateTimeNow() { m_tLastUpdateTime = ::time(NULL); }
bool GetConfirmedLength() { return m_bConfirmedLength; }
const char* GetOriginalFilename() { return m_szOriginalFilename; }
void SetForce(bool bForce) { m_bForce = bForce; }
void SetRetry(bool bRetry) { m_bRetry = bRetry; }
void LogDebugInfo();
};

View File

@@ -0,0 +1,758 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sys/stat.h>
#ifndef WIN32
#include <unistd.h>
#include <sys/time.h>
#endif
#include "nzbget.h"
#include "FeedCoordinator.h"
#include "Options.h"
#include "WebDownloader.h"
#include "Util.h"
#include "FeedFile.h"
#include "FeedFilter.h"
#include "DiskState.h"
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
FeedCoordinator::FeedCacheItem::FeedCacheItem(const char* szUrl, int iCacheTimeSec,const char* szCacheId,
time_t tLastUsage, FeedItemInfos* pFeedItemInfos)
{
m_szUrl = strdup(szUrl);
m_iCacheTimeSec = iCacheTimeSec;
m_szCacheId = strdup(szCacheId);
m_tLastUsage = tLastUsage;
m_pFeedItemInfos = pFeedItemInfos;
m_pFeedItemInfos->Retain();
}
FeedCoordinator::FeedCacheItem::~FeedCacheItem()
{
free(m_szUrl);
free(m_szCacheId);
m_pFeedItemInfos->Release();
}
FeedCoordinator::FeedCoordinator()
{
debug("Creating FeedCoordinator");
m_bForce = false;
m_bSave = false;
g_pLog->RegisterDebuggable(this);
m_DownloadQueueObserver.m_pOwner = this;
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
pDownloadQueue->Attach(&m_DownloadQueueObserver);
DownloadQueue::Unlock();
}
FeedCoordinator::~FeedCoordinator()
{
debug("Destroying FeedCoordinator");
// Cleanup
g_pLog->UnregisterDebuggable(this);
debug("Deleting FeedDownloaders");
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
delete *it;
}
m_ActiveDownloads.clear();
debug("Deleting Feeds");
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
delete *it;
}
m_Feeds.clear();
debug("Deleting FeedCache");
for (FeedCache::iterator it = m_FeedCache.begin(); it != m_FeedCache.end(); it++)
{
delete *it;
}
m_FeedCache.clear();
debug("FeedCoordinator destroyed");
}
void FeedCoordinator::AddFeed(FeedInfo* pFeedInfo)
{
m_Feeds.push_back(pFeedInfo);
}
void FeedCoordinator::Run()
{
debug("Entering FeedCoordinator-loop");
while (!DownloadQueue::IsLoaded())
{
usleep(20 * 1000);
}
if (g_pOptions->GetServerMode() && g_pOptions->GetSaveQueue() && g_pOptions->GetReloadQueue())
{
m_mutexDownloads.Lock();
g_pDiskState->LoadFeeds(&m_Feeds, &m_FeedHistory);
m_mutexDownloads.Unlock();
}
int iSleepInterval = 100;
int iUpdateCounter = 0;
int iCleanupCounter = 60000;
while (!IsStopped())
{
usleep(iSleepInterval * 1000);
iUpdateCounter += iSleepInterval;
if (iUpdateCounter >= 1000)
{
// this code should not be called too often, once per second is OK
if (!g_pOptions->GetPauseDownload() || m_bForce || g_pOptions->GetUrlForce())
{
m_mutexDownloads.Lock();
time_t tCurrent = time(NULL);
if ((int)m_ActiveDownloads.size() < g_pOptions->GetUrlConnections())
{
m_bForce = false;
// check feed list and update feeds
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo = *it;
if (((pFeedInfo->GetInterval() > 0 &&
(tCurrent - pFeedInfo->GetLastUpdate() >= pFeedInfo->GetInterval() * 60 ||
tCurrent < pFeedInfo->GetLastUpdate())) ||
pFeedInfo->GetFetch()) &&
pFeedInfo->GetStatus() != FeedInfo::fsRunning)
{
StartFeedDownload(pFeedInfo, pFeedInfo->GetFetch());
}
else if (pFeedInfo->GetFetch())
{
m_bForce = true;
}
}
}
m_mutexDownloads.Unlock();
}
CheckSaveFeeds();
ResetHangingDownloads();
iUpdateCounter = 0;
}
iCleanupCounter += iSleepInterval;
if (iCleanupCounter >= 60000)
{
// clean up feed history once a minute
CleanupHistory();
CleanupCache();
CheckSaveFeeds();
iCleanupCounter = 0;
}
}
// waiting for downloads
debug("FeedCoordinator: waiting for Downloads to complete");
bool completed = false;
while (!completed)
{
m_mutexDownloads.Lock();
completed = m_ActiveDownloads.size() == 0;
m_mutexDownloads.Unlock();
CheckSaveFeeds();
usleep(100 * 1000);
ResetHangingDownloads();
}
debug("FeedCoordinator: Downloads are completed");
debug("Exiting FeedCoordinator-loop");
}
void FeedCoordinator::Stop()
{
Thread::Stop();
debug("Stopping UrlDownloads");
m_mutexDownloads.Lock();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
(*it)->Stop();
}
m_mutexDownloads.Unlock();
debug("UrlDownloads are notified");
}
void FeedCoordinator::ResetHangingDownloads()
{
const int TimeOut = g_pOptions->GetTerminateTimeout();
if (TimeOut == 0)
{
return;
}
m_mutexDownloads.Lock();
time_t tm = ::time(NULL);
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end();)
{
FeedDownloader* pFeedDownloader = *it;
if (tm - pFeedDownloader->GetLastUpdateTime() > TimeOut &&
pFeedDownloader->GetStatus() == FeedDownloader::adRunning)
{
debug("Terminating hanging download %s", pFeedDownloader->GetInfoName());
if (pFeedDownloader->Terminate())
{
error("Terminated hanging download %s", pFeedDownloader->GetInfoName());
pFeedDownloader->GetFeedInfo()->SetStatus(FeedInfo::fsUndefined);
}
else
{
error("Could not terminate hanging download %s", pFeedDownloader->GetInfoName());
}
m_ActiveDownloads.erase(it);
// it's not safe to destroy pFeedDownloader, because the state of object is unknown
delete pFeedDownloader;
it = m_ActiveDownloads.begin();
continue;
}
it++;
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::LogDebugInfo()
{
info(" ---------- FeedCoordinator");
m_mutexDownloads.Lock();
info(" Active Downloads: %i", m_ActiveDownloads.size());
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
FeedDownloader* pFeedDownloader = *it;
pFeedDownloader->LogDebugInfo();
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::StartFeedDownload(FeedInfo* pFeedInfo, bool bForce)
{
debug("Starting new FeedDownloader for %s", pFeedInfo->GetName());
FeedDownloader* pFeedDownloader = new FeedDownloader();
pFeedDownloader->SetAutoDestroy(true);
pFeedDownloader->Attach(this);
pFeedDownloader->SetFeedInfo(pFeedInfo);
pFeedDownloader->SetURL(pFeedInfo->GetUrl());
if (strlen(pFeedInfo->GetName()) > 0)
{
pFeedDownloader->SetInfoName(pFeedInfo->GetName());
}
else
{
char szUrlName[1024];
NZBInfo::MakeNiceUrlName(pFeedInfo->GetUrl(), "", szUrlName, sizeof(szUrlName));
pFeedDownloader->SetInfoName(szUrlName);
}
pFeedDownloader->SetForce(bForce || g_pOptions->GetUrlForce());
char tmp[1024];
if (pFeedInfo->GetID() > 0)
{
snprintf(tmp, 1024, "%sfeed-%i.tmp", g_pOptions->GetTempDir(), pFeedInfo->GetID());
}
else
{
snprintf(tmp, 1024, "%sfeed-%i-%i.tmp", g_pOptions->GetTempDir(), (int)time(NULL), rand());
}
tmp[1024-1] = '\0';
pFeedDownloader->SetOutputFilename(tmp);
pFeedInfo->SetStatus(FeedInfo::fsRunning);
pFeedInfo->SetForce(bForce);
pFeedInfo->SetFetch(false);
m_ActiveDownloads.push_back(pFeedDownloader);
pFeedDownloader->Start();
}
void FeedCoordinator::Update(Subject* pCaller, void* pAspect)
{
debug("Notification from FeedDownloader received");
FeedDownloader* pFeedDownloader = (FeedDownloader*) pCaller;
if ((pFeedDownloader->GetStatus() == WebDownloader::adFinished) ||
(pFeedDownloader->GetStatus() == WebDownloader::adFailed) ||
(pFeedDownloader->GetStatus() == WebDownloader::adRetry))
{
FeedCompleted(pFeedDownloader);
}
}
void FeedCoordinator::FeedCompleted(FeedDownloader* pFeedDownloader)
{
debug("Feed downloaded");
FeedInfo* pFeedInfo = pFeedDownloader->GetFeedInfo();
bool bStatusOK = pFeedDownloader->GetStatus() == WebDownloader::adFinished;
if (bStatusOK)
{
pFeedInfo->SetOutputFilename(pFeedDownloader->GetOutputFilename());
}
// delete Download from Queue
m_mutexDownloads.Lock();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
FeedDownloader* pa = *it;
if (pa == pFeedDownloader)
{
m_ActiveDownloads.erase(it);
break;
}
}
m_mutexDownloads.Unlock();
if (bStatusOK)
{
if (!pFeedInfo->GetPreview())
{
FeedFile* pFeedFile = FeedFile::Create(pFeedInfo->GetOutputFilename());
remove(pFeedInfo->GetOutputFilename());
NZBList addedNZBs;
m_mutexDownloads.Lock();
if (pFeedFile)
{
ProcessFeed(pFeedInfo, pFeedFile->GetFeedItemInfos(), &addedNZBs);
delete pFeedFile;
}
pFeedInfo->SetLastUpdate(time(NULL));
pFeedInfo->SetForce(false);
m_bSave = true;
m_mutexDownloads.Unlock();
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
for (NZBList::iterator it = addedNZBs.begin(); it != addedNZBs.end(); it++)
{
NZBInfo* pNZBInfo = *it;
pDownloadQueue->GetQueue()->Add(pNZBInfo, false);
}
pDownloadQueue->Save();
DownloadQueue::Unlock();
}
pFeedInfo->SetStatus(FeedInfo::fsFinished);
}
else
{
pFeedInfo->SetStatus(FeedInfo::fsFailed);
}
}
void FeedCoordinator::FilterFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos)
{
debug("Filtering feed %s", pFeedInfo->GetName());
FeedFilter* pFeedFilter = NULL;
if (pFeedInfo->GetFilter() && strlen(pFeedInfo->GetFilter()) > 0)
{
pFeedFilter = new FeedFilter(pFeedInfo->GetFilter());
}
for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
{
FeedItemInfo* pFeedItemInfo = *it;
pFeedItemInfo->SetMatchStatus(FeedItemInfo::msAccepted);
pFeedItemInfo->SetMatchRule(0);
pFeedItemInfo->SetPauseNzb(pFeedInfo->GetPauseNzb());
pFeedItemInfo->SetPriority(pFeedInfo->GetPriority());
pFeedItemInfo->SetAddCategory(pFeedInfo->GetCategory());
pFeedItemInfo->SetDupeScore(0);
pFeedItemInfo->SetDupeMode(dmScore);
pFeedItemInfo->BuildDupeKey(NULL, NULL);
if (pFeedFilter)
{
pFeedFilter->Match(pFeedItemInfo);
}
}
delete pFeedFilter;
}
void FeedCoordinator::ProcessFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos, NZBList* pAddedNZBs)
{
debug("Process feed %s", pFeedInfo->GetName());
FilterFeed(pFeedInfo, pFeedItemInfos);
bool bFirstFetch = pFeedInfo->GetLastUpdate() == 0;
int iAdded = 0;
for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
{
FeedItemInfo* pFeedItemInfo = *it;
if (pFeedItemInfo->GetMatchStatus() == FeedItemInfo::msAccepted)
{
FeedHistoryInfo* pFeedHistoryInfo = m_FeedHistory.Find(pFeedItemInfo->GetUrl());
FeedHistoryInfo::EStatus eStatus = FeedHistoryInfo::hsUnknown;
if (bFirstFetch)
{
eStatus = FeedHistoryInfo::hsBacklog;
}
else if (!pFeedHistoryInfo)
{
NZBInfo* pNZBInfo = CreateNZBInfo(pFeedInfo, pFeedItemInfo);
pAddedNZBs->Add(pNZBInfo, false);
eStatus = FeedHistoryInfo::hsFetched;
iAdded++;
}
if (pFeedHistoryInfo)
{
pFeedHistoryInfo->SetLastSeen(time(NULL));
}
else
{
m_FeedHistory.Add(pFeedItemInfo->GetUrl(), eStatus, time(NULL));
}
}
}
if (iAdded)
{
info("%s has %i new item(s)", pFeedInfo->GetName(), iAdded);
}
else
{
detail("%s has no new items", pFeedInfo->GetName());
}
}
NZBInfo* FeedCoordinator::CreateNZBInfo(FeedInfo* pFeedInfo, FeedItemInfo* pFeedItemInfo)
{
debug("Download %s from %s", pFeedItemInfo->GetUrl(), pFeedInfo->GetName());
NZBInfo* pNZBInfo = new NZBInfo();
pNZBInfo->SetKind(NZBInfo::nkUrl);
pNZBInfo->SetURL(pFeedItemInfo->GetUrl());
// add .nzb-extension if not present
char szNZBName[1024];
strncpy(szNZBName, pFeedItemInfo->GetFilename(), 1024);
szNZBName[1024-1] = '\0';
char* ext = strrchr(szNZBName, '.');
if (ext && !strcasecmp(ext, ".nzb"))
{
*ext = '\0';
}
char szNZBName2[1024];
snprintf(szNZBName2, 1024, "%s.nzb", szNZBName);
Util::MakeValidFilename(szNZBName2, '_', false);
if (strlen(szNZBName) > 0)
{
pNZBInfo->SetFilename(szNZBName2);
}
pNZBInfo->SetCategory(pFeedItemInfo->GetAddCategory());
pNZBInfo->SetPriority(pFeedItemInfo->GetPriority());
pNZBInfo->SetAddUrlPaused(pFeedItemInfo->GetPauseNzb());
pNZBInfo->SetDupeKey(pFeedItemInfo->GetDupeKey());
pNZBInfo->SetDupeScore(pFeedItemInfo->GetDupeScore());
pNZBInfo->SetDupeMode(pFeedItemInfo->GetDupeMode());
return pNZBInfo;
}
bool FeedCoordinator::ViewFeed(int iID, FeedItemInfos** ppFeedItemInfos)
{
if (iID < 1 || iID > (int)m_Feeds.size())
{
return false;
}
FeedInfo* pFeedInfo = m_Feeds.at(iID - 1);
return PreviewFeed(pFeedInfo->GetName(), pFeedInfo->GetUrl(), pFeedInfo->GetFilter(),
pFeedInfo->GetPauseNzb(), pFeedInfo->GetCategory(), pFeedInfo->GetPriority(),
0, NULL, ppFeedItemInfos);
}
bool FeedCoordinator::PreviewFeed(const char* szName, const char* szUrl, const char* szFilter,
bool bPauseNzb, const char* szCategory, int iPriority,
int iCacheTimeSec, const char* szCacheId, FeedItemInfos** ppFeedItemInfos)
{
debug("Preview feed %s", szName);
FeedInfo* pFeedInfo = new FeedInfo(0, szName, szUrl, 0, szFilter, bPauseNzb, szCategory, iPriority);
pFeedInfo->SetPreview(true);
FeedItemInfos* pFeedItemInfos = NULL;
bool bHasCache = false;
if (iCacheTimeSec > 0 && *szCacheId != '\0')
{
m_mutexDownloads.Lock();
for (FeedCache::iterator it = m_FeedCache.begin(); it != m_FeedCache.end(); it++)
{
FeedCacheItem* pFeedCacheItem = *it;
if (!strcmp(pFeedCacheItem->GetCacheId(), szCacheId))
{
pFeedCacheItem->SetLastUsage(time(NULL));
pFeedItemInfos = pFeedCacheItem->GetFeedItemInfos();
pFeedItemInfos->Retain();
bHasCache = true;
break;
}
}
m_mutexDownloads.Unlock();
}
if (!bHasCache)
{
m_mutexDownloads.Lock();
bool bFirstFetch = true;
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo2 = *it;
if (!strcmp(pFeedInfo2->GetUrl(), pFeedInfo->GetUrl()) &&
!strcmp(pFeedInfo2->GetFilter(), pFeedInfo->GetFilter()) &&
pFeedInfo2->GetLastUpdate() > 0)
{
bFirstFetch = false;
break;
}
}
StartFeedDownload(pFeedInfo, true);
m_mutexDownloads.Unlock();
// wait until the download in a separate thread completes
while (pFeedInfo->GetStatus() == FeedInfo::fsRunning)
{
usleep(100 * 1000);
}
// now can process the feed
FeedFile* pFeedFile = NULL;
if (pFeedInfo->GetStatus() == FeedInfo::fsFinished)
{
pFeedFile = FeedFile::Create(pFeedInfo->GetOutputFilename());
}
remove(pFeedInfo->GetOutputFilename());
if (!pFeedFile)
{
delete pFeedInfo;
return false;
}
pFeedItemInfos = pFeedFile->GetFeedItemInfos();
pFeedItemInfos->Retain();
delete pFeedFile;
for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
{
FeedItemInfo* pFeedItemInfo = *it;
pFeedItemInfo->SetStatus(bFirstFetch ? FeedItemInfo::isBacklog : FeedItemInfo::isNew);
FeedHistoryInfo* pFeedHistoryInfo = m_FeedHistory.Find(pFeedItemInfo->GetUrl());
if (pFeedHistoryInfo)
{
pFeedItemInfo->SetStatus((FeedItemInfo::EStatus)pFeedHistoryInfo->GetStatus());
}
}
}
FilterFeed(pFeedInfo, pFeedItemInfos);
delete pFeedInfo;
if (iCacheTimeSec > 0 && *szCacheId != '\0' && !bHasCache)
{
FeedCacheItem* pFeedCacheItem = new FeedCacheItem(szUrl, iCacheTimeSec, szCacheId, time(NULL), pFeedItemInfos);
m_mutexDownloads.Lock();
m_FeedCache.push_back(pFeedCacheItem);
m_mutexDownloads.Unlock();
}
*ppFeedItemInfos = pFeedItemInfos;
return true;
}
void FeedCoordinator::FetchFeed(int iID)
{
debug("FetchFeeds");
m_mutexDownloads.Lock();
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo = *it;
if (pFeedInfo->GetID() == iID || iID == 0)
{
pFeedInfo->SetFetch(true);
m_bForce = true;
}
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::DownloadQueueUpdate(Subject* pCaller, void* pAspect)
{
debug("Notification from URL-Coordinator received");
DownloadQueue::Aspect* pQueueAspect = (DownloadQueue::Aspect*)pAspect;
if (pQueueAspect->eAction == DownloadQueue::eaUrlCompleted)
{
m_mutexDownloads.Lock();
FeedHistoryInfo* pFeedHistoryInfo = m_FeedHistory.Find(pQueueAspect->pNZBInfo->GetURL());
if (pFeedHistoryInfo)
{
pFeedHistoryInfo->SetStatus(FeedHistoryInfo::hsFetched);
}
else
{
m_FeedHistory.Add(pQueueAspect->pNZBInfo->GetURL(), FeedHistoryInfo::hsFetched, time(NULL));
}
m_bSave = true;
m_mutexDownloads.Unlock();
}
}
bool FeedCoordinator::HasActiveDownloads()
{
m_mutexDownloads.Lock();
bool bActive = !m_ActiveDownloads.empty();
m_mutexDownloads.Unlock();
return bActive;
}
void FeedCoordinator::CheckSaveFeeds()
{
debug("CheckSaveFeeds");
m_mutexDownloads.Lock();
if (m_bSave)
{
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveFeeds(&m_Feeds, &m_FeedHistory);
}
m_bSave = false;
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::CleanupHistory()
{
debug("CleanupHistory");
m_mutexDownloads.Lock();
time_t tOldestUpdate = time(NULL);
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo = *it;
if (pFeedInfo->GetLastUpdate() < tOldestUpdate)
{
tOldestUpdate = pFeedInfo->GetLastUpdate();
}
}
time_t tBorderDate = tOldestUpdate - g_pOptions->GetFeedHistory() * 60*60*24;
int i = 0;
for (FeedHistory::iterator it = m_FeedHistory.begin(); it != m_FeedHistory.end(); )
{
FeedHistoryInfo* pFeedHistoryInfo = *it;
if (pFeedHistoryInfo->GetLastSeen() < tBorderDate)
{
detail("Deleting %s from feed history", pFeedHistoryInfo->GetUrl());
delete pFeedHistoryInfo;
m_FeedHistory.erase(it);
it = m_FeedHistory.begin() + i;
m_bSave = true;
}
else
{
it++;
i++;
}
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::CleanupCache()
{
debug("CleanupCache");
m_mutexDownloads.Lock();
time_t tCurTime = time(NULL);
int i = 0;
for (FeedCache::iterator it = m_FeedCache.begin(); it != m_FeedCache.end(); )
{
FeedCacheItem* pFeedCacheItem = *it;
if (pFeedCacheItem->GetLastUsage() + pFeedCacheItem->GetCacheTimeSec() < tCurTime ||
pFeedCacheItem->GetLastUsage() > tCurTime)
{
debug("Deleting %s from feed cache", pFeedCacheItem->GetUrl());
delete pFeedCacheItem;
m_FeedCache.erase(it);
it = m_FeedCache.begin() + i;
}
else
{
it++;
i++;
}
}
m_mutexDownloads.Unlock();
}
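CleanupCache above drops a cache entry when its last use is older than its per-entry lifetime, and also when the last-use timestamp lies in the future (a guard against clock changes). The same policy reduced to a predicate (generic illustration, not the project's API):

    #include <ctime>

    // Generic illustration of the eviction rule used above: keep an entry
    // only while its last use lies within [now - ttl, now].
    static bool IsCacheEntryUsable(time_t tLastUsage, int iCacheTimeSec, time_t tNow)
    {
        return tLastUsage + iCacheTimeSec >= tNow && tLastUsage <= tNow;
    }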

View File

@@ -0,0 +1,127 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef FEEDCOORDINATOR_H
#define FEEDCOORDINATOR_H
#include <deque>
#include <list>
#include <time.h>
#include "Log.h"
#include "Thread.h"
#include "WebDownloader.h"
#include "DownloadInfo.h"
#include "FeedInfo.h"
#include "Observer.h"
class FeedDownloader;
class FeedCoordinator : public Thread, public Observer, public Subject, public Debuggable
{
private:
class DownloadQueueObserver: public Observer
{
public:
FeedCoordinator* m_pOwner;
virtual void Update(Subject* pCaller, void* pAspect) { m_pOwner->DownloadQueueUpdate(pCaller, pAspect); }
};
class FeedCacheItem
{
private:
char* m_szUrl;
int m_iCacheTimeSec;
char* m_szCacheId;
time_t m_tLastUsage;
FeedItemInfos* m_pFeedItemInfos;
public:
FeedCacheItem(const char* szUrl, int iCacheTimeSec,const char* szCacheId,
time_t tLastUsage, FeedItemInfos* pFeedItemInfos);
~FeedCacheItem();
const char* GetUrl() { return m_szUrl; }
int GetCacheTimeSec() { return m_iCacheTimeSec; }
const char* GetCacheId() { return m_szCacheId; }
time_t GetLastUsage() { return m_tLastUsage; }
void SetLastUsage(time_t tLastUsage) { m_tLastUsage = tLastUsage; }
FeedItemInfos* GetFeedItemInfos() { return m_pFeedItemInfos; }
};
typedef std::deque<FeedCacheItem*> FeedCache;
typedef std::list<FeedDownloader*> ActiveDownloads;
private:
Feeds m_Feeds;
ActiveDownloads m_ActiveDownloads;
FeedHistory m_FeedHistory;
Mutex m_mutexDownloads;
DownloadQueueObserver m_DownloadQueueObserver;
bool m_bForce;
bool m_bSave;
FeedCache m_FeedCache;
void StartFeedDownload(FeedInfo* pFeedInfo, bool bForce);
void FeedCompleted(FeedDownloader* pFeedDownloader);
void FilterFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos);
void ProcessFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos, NZBList* pAddedNZBs);
NZBInfo* CreateNZBInfo(FeedInfo* pFeedInfo, FeedItemInfo* pFeedItemInfo);
void ResetHangingDownloads();
void DownloadQueueUpdate(Subject* pCaller, void* pAspect);
void CleanupHistory();
void CleanupCache();
void CheckSaveFeeds();
protected:
virtual void LogDebugInfo();
public:
FeedCoordinator();
virtual ~FeedCoordinator();
virtual void Run();
virtual void Stop();
void Update(Subject* pCaller, void* pAspect);
void AddFeed(FeedInfo* pFeedInfo);
bool PreviewFeed(const char* szName, const char* szUrl, const char* szFilter,
bool bPauseNzb, const char* szCategory, int iPriority,
int iCacheTimeSec, const char* szCacheId, FeedItemInfos** ppFeedItemInfos);
bool ViewFeed(int iID, FeedItemInfos** ppFeedItemInfos);
void FetchFeed(int iID);
bool HasActiveDownloads();
Feeds* GetFeeds() { return &m_Feeds; }
};
class FeedDownloader : public WebDownloader
{
private:
FeedInfo* m_pFeedInfo;
public:
void SetFeedInfo(FeedInfo* pFeedInfo) { m_pFeedInfo = pFeedInfo; }
FeedInfo* GetFeedInfo() { return m_pFeedInfo; }
};
#endif

daemon/feed/FeedFile.cpp (new file, 609 lines)
View File

@@ -0,0 +1,609 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <string.h>
#include <list>
#ifdef WIN32
#include <comutil.h>
#import <msxml.tlb> named_guids
using namespace MSXML;
#else
#include <libxml/parser.h>
#include <libxml/xmlreader.h>
#include <libxml/xmlerror.h>
#include <libxml/entities.h>
#endif
#include "nzbget.h"
#include "FeedFile.h"
#include "Log.h"
#include "DownloadInfo.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
FeedFile::FeedFile(const char* szFileName)
{
debug("Creating FeedFile");
m_szFileName = strdup(szFileName);
m_pFeedItemInfos = new FeedItemInfos();
m_pFeedItemInfos->Retain();
#ifndef WIN32
m_pFeedItemInfo = NULL;
m_szTagContent = NULL;
m_iTagContentLen = 0;
#endif
}
FeedFile::~FeedFile()
{
debug("Destroying FeedFile");
// Cleanup
free(m_szFileName);
m_pFeedItemInfos->Release();
#ifndef WIN32
delete m_pFeedItemInfo;
free(m_szTagContent);
#endif
}
void FeedFile::LogDebugInfo()
{
info(" FeedFile %s", m_szFileName);
}
void FeedFile::AddItem(FeedItemInfo* pFeedItemInfo)
{
m_pFeedItemInfos->Add(pFeedItemInfo);
}
void FeedFile::ParseSubject(FeedItemInfo* pFeedItemInfo)
{
// if title has quatation marks we use only part within quatation marks
char* p = (char*)pFeedItemInfo->GetTitle();
char* start = strchr(p, '\"');
if (start)
{
start++;
char* end = strchr(start + 1, '\"');
if (end)
{
int len = (int)(end - start);
char* point = strchr(start + 1, '.');
if (point && point < end)
{
char* filename = (char*)malloc(len + 1);
strncpy(filename, start, len);
filename[len] = '\0';
char* ext = strrchr(filename, '.');
if (ext && !strcasecmp(ext, ".par2"))
{
*ext = '\0';
}
pFeedItemInfo->SetFilename(filename);
free(filename);
return;
}
}
}
pFeedItemInfo->SetFilename(pFeedItemInfo->GetTitle());
}
#ifdef WIN32
FeedFile* FeedFile::Create(const char* szFileName)
{
CoInitialize(NULL);
HRESULT hr;
MSXML::IXMLDOMDocumentPtr doc;
hr = doc.CreateInstance(MSXML::CLSID_DOMDocument);
if (FAILED(hr))
{
return NULL;
}
// Load the XML document file...
doc->put_resolveExternals(VARIANT_FALSE);
doc->put_validateOnParse(VARIANT_FALSE);
doc->put_async(VARIANT_FALSE);
// filename needs to be properly encoded
char* szURL = (char*)malloc(strlen(szFileName)*3 + 1);
EncodeURL(szFileName, szURL);
debug("url=\"%s\"", szURL);
_variant_t v(szURL);
free(szURL);
VARIANT_BOOL success = doc->load(v);
if (success == VARIANT_FALSE)
{
_bstr_t r(doc->GetparseError()->reason);
const char* szErrMsg = r;
error("Error parsing rss feed: %s", szErrMsg);
return NULL;
}
FeedFile* pFile = new FeedFile(szFileName);
if (!pFile->ParseFeed(doc))
{
delete pFile;
pFile = NULL;
}
return pFile;
}
void FeedFile::EncodeURL(const char* szFilename, char* szURL)
{
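// percent-encode everything except ASCII letters and digits (two lowercase hex digits per byte) so the local path can be handed to MSXML as a URL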
while (char ch = *szFilename++)
{
if (('0' <= ch && ch <= '9') ||
('a' <= ch && ch <= 'z') ||
('A' <= ch && ch <= 'Z') )
{
*szURL++ = ch;
}
else
{
*szURL++ = '%';
int a = ch >> 4;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
a = ch & 0xF;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
}
}
*szURL = '\0';
}
bool FeedFile::ParseFeed(IUnknown* nzb)
{
MSXML::IXMLDOMDocumentPtr doc = nzb;
MSXML::IXMLDOMNodePtr root = doc->documentElement;
MSXML::IXMLDOMNodeListPtr itemList = root->selectNodes("/rss/channel/item");
for (int i = 0; i < itemList->Getlength(); i++)
{
MSXML::IXMLDOMNodePtr node = itemList->Getitem(i);
FeedItemInfo* pFeedItemInfo = new FeedItemInfo();
AddItem(pFeedItemInfo);
MSXML::IXMLDOMNodePtr tag;
MSXML::IXMLDOMNodePtr attr;
// <title>Debian 6</title>
tag = node->selectSingleNode("title");
if (!tag)
{
// bad rss feed
return false;
}
_bstr_t title(tag->Gettext());
pFeedItemInfo->SetTitle(title);
ParseSubject(pFeedItemInfo);
// <pubDate>Wed, 26 Jun 2013 00:02:54 -0600</pubDate>
tag = node->selectSingleNode("pubDate");
if (tag)
{
_bstr_t time(tag->Gettext());
time_t unixtime = WebUtil::ParseRfc822DateTime(time);
if (unixtime > 0)
{
pFeedItemInfo->SetTime(unixtime);
}
}
// <category>Movies &gt; HD</category>
tag = node->selectSingleNode("category");
if (tag)
{
_bstr_t category(tag->Gettext());
pFeedItemInfo->SetCategory(category);
}
// <description>long text</description>
tag = node->selectSingleNode("description");
if (tag)
{
_bstr_t description(tag->Gettext());
pFeedItemInfo->SetDescription(description);
}
//<enclosure url="http://myindexer.com/fetch/9eeb264aecce961a6e0d" length="150263340" type="application/x-nzb" />
tag = node->selectSingleNode("enclosure");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("url");
if (attr)
{
_bstr_t url(attr->Gettext());
pFeedItemInfo->SetUrl(url);
}
attr = tag->Getattributes()->getNamedItem("length");
if (attr)
{
_bstr_t size(attr->Gettext());
long long lSize = atoll(size);
pFeedItemInfo->SetSize(lSize);
}
}
if (!pFeedItemInfo->GetUrl())
{
// <link>https://nzb.org/fetch/334534ce/4364564564</link>
tag = node->selectSingleNode("link");
if (!tag)
{
// bad rss feed
return false;
}
_bstr_t link(tag->Gettext());
pFeedItemInfo->SetUrl(link);
}
// newznab special
//<newznab:attr name="size" value="5423523453534" />
if (pFeedItemInfo->GetSize() == 0)
{
tag = node->selectSingleNode("newznab:attr[@name='size']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t size(attr->Gettext());
long long lSize = atoll(size);
pFeedItemInfo->SetSize(lSize);
}
}
}
//<newznab:attr name="imdb" value="1588173"/>
tag = node->selectSingleNode("newznab:attr[@name='imdb']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
int iVal = atoi(val);
pFeedItemInfo->SetImdbId(iVal);
}
}
//<newznab:attr name="rageid" value="33877"/>
tag = node->selectSingleNode("newznab:attr[@name='rageid']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
int iVal = atoi(val);
pFeedItemInfo->SetRageId(iVal);
}
}
//<newznab:attr name="episode" value="E09"/>
//<newznab:attr name="episode" value="9"/>
tag = node->selectSingleNode("newznab:attr[@name='episode']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
pFeedItemInfo->SetEpisode(val);
}
}
//<newznab:attr name="season" value="S03"/>
//<newznab:attr name="season" value="3"/>
tag = node->selectSingleNode("newznab:attr[@name='season']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
pFeedItemInfo->SetSeason(val);
}
}
// generic newznab attributes
MSXML::IXMLDOMNodeListPtr attrList = node->selectNodes("newznab:attr");
for (int j = 0; j < attrList->Getlength(); j++)
{
MSXML::IXMLDOMNodePtr attrNode = attrList->Getitem(j);
MSXML::IXMLDOMNodePtr nameAttr = attrNode->Getattributes()->getNamedItem("name");
MSXML::IXMLDOMNodePtr valueAttr = attrNode->Getattributes()->getNamedItem("value");
if (nameAttr && valueAttr)
{
_bstr_t attrName(nameAttr->Gettext());
_bstr_t attrValue(valueAttr->Gettext());
pFeedItemInfo->GetAttributes()->Add(attrName, attrValue);
}
}
}
return true;
}
#else
FeedFile* FeedFile::Create(const char* szFileName)
{
FeedFile* pFile = new FeedFile(szFileName);
xmlSAXHandler SAX_handler = {0};
SAX_handler.startElement = reinterpret_cast<startElementSAXFunc>(SAX_StartElement);
SAX_handler.endElement = reinterpret_cast<endElementSAXFunc>(SAX_EndElement);
SAX_handler.characters = reinterpret_cast<charactersSAXFunc>(SAX_characters);
SAX_handler.error = reinterpret_cast<errorSAXFunc>(SAX_error);
SAX_handler.getEntity = reinterpret_cast<getEntitySAXFunc>(SAX_getEntity);
pFile->m_bIgnoreNextError = false;
int ret = xmlSAXUserParseFile(&SAX_handler, pFile, szFileName);
if (ret != 0)
{
error("Failed to parse rss feed");
delete pFile;
pFile = NULL;
}
return pFile;
}
void FeedFile::Parse_StartElement(const char *name, const char **atts)
{
ResetTagContent();
if (!strcmp("item", name))
{
delete m_pFeedItemInfo;
m_pFeedItemInfo = new FeedItemInfo();
}
else if (!strcmp("enclosure", name) && m_pFeedItemInfo)
{
//<enclosure url="http://myindexer.com/fetch/9eeb264aecce961a6e0d" length="150263340" type="application/x-nzb" />
for (; *atts; atts+=2)
{
if (!strcmp("url", atts[0]))
{
char* szUrl = strdup(atts[1]);
WebUtil::XmlDecode(szUrl);
m_pFeedItemInfo->SetUrl(szUrl);
free(szUrl);
}
else if (!strcmp("length", atts[0]))
{
long long lSize = atoll(atts[1]);
m_pFeedItemInfo->SetSize(lSize);
}
}
}
else if (m_pFeedItemInfo && !strcmp("newznab:attr", name) &&
atts[0] && atts[1] && atts[2] && atts[3] &&
!strcmp("name", atts[0]) && !strcmp("value", atts[2]))
{
m_pFeedItemInfo->GetAttributes()->Add(atts[1], atts[3]);
//<newznab:attr name="size" value="5423523453534" />
if (m_pFeedItemInfo->GetSize() == 0 &&
!strcmp("size", atts[1]))
{
long long lSize = atoll(atts[3]);
m_pFeedItemInfo->SetSize(lSize);
}
//<newznab:attr name="imdb" value="1588173"/>
else if (!strcmp("imdb", atts[1]))
{
m_pFeedItemInfo->SetImdbId(atoi(atts[3]));
}
//<newznab:attr name="rageid" value="33877"/>
else if (!strcmp("rageid", atts[1]))
{
m_pFeedItemInfo->SetRageId(atoi(atts[3]));
}
//<newznab:attr name="episode" value="E09"/>
//<newznab:attr name="episode" value="9"/>
else if (!strcmp("episode", atts[1]))
{
m_pFeedItemInfo->SetEpisode(atts[3]);
}
//<newznab:attr name="season" value="S03"/>
//<newznab:attr name="season" value="3"/>
else if (!strcmp("season", atts[1]))
{
m_pFeedItemInfo->SetSeason(atts[3]);
}
}
}
void FeedFile::Parse_EndElement(const char *name)
{
if (!strcmp("item", name))
{
// Close the item element and add the finished item to the item list
AddItem(m_pFeedItemInfo);
m_pFeedItemInfo = NULL;
}
else if (!strcmp("title", name) && m_pFeedItemInfo)
{
m_pFeedItemInfo->SetTitle(m_szTagContent);
ResetTagContent();
ParseSubject(m_pFeedItemInfo);
}
else if (!strcmp("link", name) && m_pFeedItemInfo &&
(!m_pFeedItemInfo->GetUrl() || strlen(m_pFeedItemInfo->GetUrl()) == 0))
{
m_pFeedItemInfo->SetUrl(m_szTagContent);
ResetTagContent();
}
else if (!strcmp("category", name) && m_pFeedItemInfo)
{
m_pFeedItemInfo->SetCategory(m_szTagContent);
ResetTagContent();
}
else if (!strcmp("description", name) && m_pFeedItemInfo)
{
m_pFeedItemInfo->SetDescription(m_szTagContent);
ResetTagContent();
}
else if (!strcmp("pubDate", name) && m_pFeedItemInfo)
{
time_t unixtime = WebUtil::ParseRfc822DateTime(m_szTagContent);
if (unixtime > 0)
{
m_pFeedItemInfo->SetTime(unixtime);
}
ResetTagContent();
}
}
void FeedFile::Parse_Content(const char *buf, int len)
{
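// the SAX parser may deliver the text of one tag in several chunks; append this chunk to the accumulated buffer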
m_szTagContent = (char*)realloc(m_szTagContent, m_iTagContentLen + len + 1);
strncpy(m_szTagContent + m_iTagContentLen, buf, len);
m_iTagContentLen += len;
m_szTagContent[m_iTagContentLen] = '\0';
}
void FeedFile::ResetTagContent()
{
free(m_szTagContent);
m_szTagContent = NULL;
m_iTagContentLen = 0;
}
void FeedFile::SAX_StartElement(FeedFile* pFile, const char *name, const char **atts)
{
pFile->Parse_StartElement(name, atts);
}
void FeedFile::SAX_EndElement(FeedFile* pFile, const char *name)
{
pFile->Parse_EndElement(name);
}
void FeedFile::SAX_characters(FeedFile* pFile, const char * xmlstr, int len)
{
char* str = (char*)xmlstr;
// trim starting blanks
int off = 0;
for (int i = 0; i < len; i++)
{
char ch = str[i];
if (ch == ' ' || ch == '\n' || ch == '\r' || ch == '\t')
{
off++;
}
else
{
break;
}
}
int newlen = len - off;
// trim ending blanks
for (int i = len - 1; i >= off; i--)
{
char ch = str[i];
if (ch == ' ' || ch == '\n' || ch == '\r' || ch == '\t')
{
newlen--;
}
else
{
break;
}
}
if (newlen > 0)
{
// interpret tag content
pFile->Parse_Content(str + off, newlen);
}
}
void* FeedFile::SAX_getEntity(FeedFile* pFile, const char * name)
{
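// resolve predefined XML entities; for unknown entities the warning below is printed and the follow-up parser error is suppressed via m_bIgnoreNextError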
xmlEntityPtr e = xmlGetPredefinedEntity((xmlChar* )name);
if (!e)
{
warn("entity not found");
pFile->m_bIgnoreNextError = true;
}
return e;
}
void FeedFile::SAX_error(FeedFile* pFile, const char *msg, ...)
{
if (pFile->m_bIgnoreNextError)
{
pFile->m_bIgnoreNextError = false;
return;
}
va_list argp;
va_start(argp, msg);
char szErrMsg[1024];
vsnprintf(szErrMsg, sizeof(szErrMsg), msg, argp);
szErrMsg[1024-1] = '\0';
va_end(argp);
// strip trailing CR, LF and spaces
for (char* pend = szErrMsg + strlen(szErrMsg) - 1;
pend >= szErrMsg && (*pend == '\n' || *pend == '\r' || *pend == ' '); pend--)
{
*pend = '\0';
}
error("Error parsing rss feed: %s", szErrMsg);
}
#endif
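
Not part of the diff: a minimal usage sketch of the FeedFile API above. It assumes the feed XML has already been downloaded to a local file; the path, the function name and the simple printf output are placeholders.

// Sketch only: parse a downloaded feed file and list its items
#include <stdio.h>
#include "FeedFile.h"

void ListFeedItems()
{
	FeedFile* pFeedFile = FeedFile::Create("feed.xml"); // placeholder path
	if (!pFeedFile)
	{
		return; // the parse error was already logged by FeedFile
	}
	FeedItemInfos* pFeedItemInfos = pFeedFile->GetFeedItemInfos();
	pFeedItemInfos->Retain(); // keep the items alive after the FeedFile is deleted
	delete pFeedFile;
	for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
	{
		printf("%s\n", (*it)->GetTitle());
	}
	pFeedItemInfos->Release(); // drops the last reference and frees the items
}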

70
daemon/feed/FeedFile.h Normal file

@@ -0,0 +1,70 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef FEEDFILE_H
#define FEEDFILE_H
#include <vector>
#include "FeedInfo.h"
class FeedFile
{
private:
FeedItemInfos* m_pFeedItemInfos;
char* m_szFileName;
FeedFile(const char* szFileName);
void AddItem(FeedItemInfo* pFeedItemInfo);
void ParseSubject(FeedItemInfo* pFeedItemInfo);
#ifdef WIN32
bool ParseFeed(IUnknown* nzb);
static void EncodeURL(const char* szFilename, char* szURL);
#else
FeedItemInfo* m_pFeedItemInfo;
char* m_szTagContent;
int m_iTagContentLen;
bool m_bIgnoreNextError;
static void SAX_StartElement(FeedFile* pFile, const char *name, const char **atts);
static void SAX_EndElement(FeedFile* pFile, const char *name);
static void SAX_characters(FeedFile* pFile, const char * xmlstr, int len);
static void* SAX_getEntity(FeedFile* pFile, const char * name);
static void SAX_error(FeedFile* pFile, const char *msg, ...);
void Parse_StartElement(const char *name, const char **atts);
void Parse_EndElement(const char *name);
void Parse_Content(const char *buf, int len);
void ResetTagContent();
#endif
public:
virtual ~FeedFile();
static FeedFile* Create(const char* szFileName);
FeedItemInfos* GetFeedItemInfos() { return m_pFeedItemInfos; }
void LogDebugInfo();
};
#endif

1185
daemon/feed/FeedFilter.cpp Normal file

File diff suppressed because it is too large

186
daemon/feed/FeedFilter.h Normal file

@@ -0,0 +1,186 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef FEEDFILTER_H
#define FEEDFILTER_H
#include "DownloadInfo.h"
#include "FeedInfo.h"
#include "Util.h"
class FeedFilter
{
private:
typedef std::deque<char*> RefValues;
enum ETermCommand
{
fcText,
fcRegex,
fcEqual,
fcLess,
fcLessEqual,
fcGreater,
fcGreaterEqual,
fcOpeningBrace,
fcClosingBrace,
fcOrOperator
};
class Term
{
private:
bool m_bPositive;
char* m_szField;
ETermCommand m_eCommand;
char* m_szParam;
long long m_iIntParam;
double m_fFloatParam;
bool m_bFloat;
RegEx* m_pRegEx;
RefValues* m_pRefValues;
bool GetFieldData(const char* szField, FeedItemInfo* pFeedItemInfo,
const char** StrValue, long long* IntValue);
bool ParseParam(const char* szField, const char* szParam);
bool ParseSizeParam(const char* szParam);
bool ParseAgeParam(const char* szParam);
bool ParseNumericParam(const char* szParam);
bool MatchValue(const char* szStrValue, long long iIntValue);
bool MatchText(const char* szStrValue);
bool MatchRegex(const char* szStrValue);
void FillWildMaskRefValues(const char* szStrValue, WildMask* pMask, int iRefOffset);
void FillRegExRefValues(const char* szStrValue, RegEx* pRegEx);
public:
Term();
~Term();
void SetRefValues(RefValues* pRefValues) { m_pRefValues = pRefValues; }
bool Compile(char* szToken);
bool Match(FeedItemInfo* pFeedItemInfo);
ETermCommand GetCommand() { return m_eCommand; }
};
typedef std::deque<Term*> TermList;
enum ERuleCommand
{
frAccept,
frReject,
frRequire,
frOptions,
frComment
};
class Rule
{
private:
bool m_bIsValid;
ERuleCommand m_eCommand;
char* m_szCategory;
int m_iPriority;
int m_iAddPriority;
bool m_bPause;
int m_iDupeScore;
int m_iAddDupeScore;
char* m_szDupeKey;
char* m_szAddDupeKey;
EDupeMode m_eDupeMode;
char* m_szSeries;
char* m_szRageId;
bool m_bHasCategory;
bool m_bHasPriority;
bool m_bHasAddPriority;
bool m_bHasPause;
bool m_bHasDupeScore;
bool m_bHasAddDupeScore;
bool m_bHasDupeKey;
bool m_bHasAddDupeKey;
bool m_bHasDupeMode;
bool m_bPatCategory;
bool m_bPatDupeKey;
bool m_bPatAddDupeKey;
bool m_bHasSeries;
bool m_bHasRageId;
char* m_szPatCategory;
char* m_szPatDupeKey;
char* m_szPatAddDupeKey;
TermList m_Terms;
RefValues m_RefValues;
char* CompileCommand(char* szRule);
char* CompileOptions(char* szRule);
bool CompileTerm(char* szTerm);
bool MatchExpression(FeedItemInfo* pFeedItemInfo);
public:
Rule();
~Rule();
void Compile(char* szRule);
bool IsValid() { return m_bIsValid; }
ERuleCommand GetCommand() { return m_eCommand; }
const char* GetCategory() { return m_szCategory; }
int GetPriority() { return m_iPriority; }
int GetAddPriority() { return m_iAddPriority; }
bool GetPause() { return m_bPause; }
const char* GetDupeKey() { return m_szDupeKey; }
const char* GetAddDupeKey() { return m_szAddDupeKey; }
int GetDupeScore() { return m_iDupeScore; }
int GetAddDupeScore() { return m_iAddDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
const char* GetRageId() { return m_szRageId; }
const char* GetSeries() { return m_szSeries; }
bool HasCategory() { return m_bHasCategory; }
bool HasPriority() { return m_bHasPriority; }
bool HasAddPriority() { return m_bHasAddPriority; }
bool HasPause() { return m_bHasPause; }
bool HasDupeScore() { return m_bHasDupeScore; }
bool HasAddDupeScore() { return m_bHasAddDupeScore; }
bool HasDupeKey() { return m_bHasDupeKey; }
bool HasAddDupeKey() { return m_bHasAddDupeKey; }
bool HasDupeMode() { return m_bHasDupeMode; }
bool HasRageId() { return m_bHasRageId; }
bool HasSeries() { return m_bHasSeries; }
bool Match(FeedItemInfo* pFeedItemInfo);
void ExpandRefValues(FeedItemInfo* pFeedItemInfo, char** pDestStr, char* pPatStr);
const char* GetRefValue(FeedItemInfo* pFeedItemInfo, const char* szVarName);
};
typedef std::deque<Rule*> RuleList;
private:
RuleList m_Rules;
void Compile(const char* szFilter);
void CompileRule(char* szRule);
void ApplyOptions(Rule* pRule, FeedItemInfo* pFeedItemInfo);
public:
FeedFilter(const char* szFilter);
~FeedFilter();
void Match(FeedItemInfo* pFeedItemInfo);
};
#endif
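
Not part of the diff: a sketch of how FeedFilter is meant to be driven. The filter expression below is only an illustration; the real rule syntax is whatever FeedFilter::Compile/CompileRule accept (Accept/Reject/Require/Options commands per the ERuleCommand enum above).

// Sketch only: apply a filter to already parsed feed items
#include <stdio.h>
#include "FeedFilter.h"
#include "FeedInfo.h"

void ApplyFilter(FeedItemInfos* pFeedItemInfos)
{
	FeedFilter filter("Accept: *1080p*"); // illustrative rule, not canonical syntax
	for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
	{
		FeedItemInfo* pFeedItemInfo = *it;
		filter.Match(pFeedItemInfo); // expected to set the match status and rule options on the item
		if (pFeedItemInfo->GetMatchStatus() == FeedItemInfo::msAccepted)
		{
			printf("accepted: %s\n", pFeedItemInfo->GetTitle());
		}
	}
}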

475
daemon/feed/FeedInfo.cpp Normal file

@@ -0,0 +1,475 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision: 0 $
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#include <sys/stat.h>
#include "nzbget.h"
#include "FeedInfo.h"
#include "DupeCoordinator.h"
#include "Util.h"
extern DupeCoordinator* g_pDupeCoordinator;
FeedInfo::FeedInfo(int iID, const char* szName, const char* szUrl, int iInterval,
const char* szFilter, bool bPauseNzb, const char* szCategory, int iPriority)
{
m_iID = iID;
m_szName = strdup(szName ? szName : "");
m_szUrl = strdup(szUrl ? szUrl : "");
m_szFilter = strdup(szFilter ? szFilter : "");
m_iFilterHash = Util::HashBJ96(m_szFilter, strlen(m_szFilter), 0);
m_szCategory = strdup(szCategory ? szCategory : "");
m_iInterval = iInterval;
m_bPauseNzb = bPauseNzb;
m_iPriority = iPriority;
m_tLastUpdate = 0;
m_bPreview = false;
m_eStatus = fsUndefined;
m_szOutputFilename = NULL;
m_bFetch = false;
m_bForce = false;
}
FeedInfo::~FeedInfo()
{
free(m_szName);
free(m_szUrl);
free(m_szFilter);
free(m_szCategory);
free(m_szOutputFilename);
}
void FeedInfo::SetOutputFilename(const char* szOutputFilename)
{
free(m_szOutputFilename);
m_szOutputFilename = strdup(szOutputFilename);
}
FeedItemInfo::Attr::Attr(const char* szName, const char* szValue)
{
m_szName = strdup(szName ? szName : "");
m_szValue = strdup(szValue ? szValue : "");
}
FeedItemInfo::Attr::~Attr()
{
free(m_szName);
free(m_szValue);
}
FeedItemInfo::Attributes::~Attributes()
{
for (iterator it = begin(); it != end(); it++)
{
delete *it;
}
}
void FeedItemInfo::Attributes::Add(const char* szName, const char* szValue)
{
push_back(new Attr(szName, szValue));
}
FeedItemInfo::Attr* FeedItemInfo::Attributes::Find(const char* szName)
{
for (iterator it = begin(); it != end(); it++)
{
Attr* pAttr = *it;
if (!strcasecmp(pAttr->GetName(), szName))
{
return pAttr;
}
}
return NULL;
}
FeedItemInfo::FeedItemInfo()
{
m_pSharedFeedData = NULL;
m_szTitle = NULL;
m_szFilename = NULL;
m_szUrl = NULL;
m_szCategory = strdup("");
m_lSize = 0;
m_tTime = 0;
m_iImdbId = 0;
m_iRageId = 0;
m_szDescription = strdup("");
m_szSeason = NULL;
m_szEpisode = NULL;
m_iSeasonNum = 0;
m_iEpisodeNum = 0;
m_bSeasonEpisodeParsed = false;
m_szAddCategory = strdup("");
m_bPauseNzb = false;
m_iPriority = 0;
m_eStatus = isUnknown;
m_eMatchStatus = msIgnored;
m_iMatchRule = 0;
m_szDupeKey = NULL;
m_iDupeScore = 0;
m_eDupeMode = dmScore;
m_szDupeStatus = NULL;
}
FeedItemInfo::~FeedItemInfo()
{
free(m_szTitle);
free(m_szFilename);
free(m_szUrl);
free(m_szCategory);
free(m_szDescription);
free(m_szSeason);
free(m_szEpisode);
free(m_szAddCategory);
free(m_szDupeKey);
free(m_szDupeStatus);
}
void FeedItemInfo::SetTitle(const char* szTitle)
{
free(m_szTitle);
m_szTitle = szTitle ? strdup(szTitle) : NULL;
}
void FeedItemInfo::SetFilename(const char* szFilename)
{
free(m_szFilename);
m_szFilename = szFilename ? strdup(szFilename) : NULL;
}
void FeedItemInfo::SetUrl(const char* szUrl)
{
free(m_szUrl);
m_szUrl = szUrl ? strdup(szUrl) : NULL;
}
void FeedItemInfo::SetCategory(const char* szCategory)
{
free(m_szCategory);
m_szCategory = strdup(szCategory ? szCategory: "");
}
void FeedItemInfo::SetDescription(const char* szDescription)
{
free(m_szDescription);
m_szDescription = strdup(szDescription ? szDescription: "");
}
void FeedItemInfo::SetSeason(const char* szSeason)
{
free(m_szSeason);
m_szSeason = szSeason ? strdup(szSeason) : NULL;
m_iSeasonNum = szSeason ? ParsePrefixedInt(szSeason) : 0;
}
void FeedItemInfo::SetEpisode(const char* szEpisode)
{
free(m_szEpisode);
m_szEpisode = szEpisode ? strdup(szEpisode) : NULL;
m_iEpisodeNum = szEpisode ? ParsePrefixedInt(szEpisode) : 0;
}
int FeedItemInfo::ParsePrefixedInt(const char *szValue)
{
const char* szVal = szValue;
if (!strchr("0123456789", *szVal))
{
szVal++;
}
return atoi(szVal);
}
void FeedItemInfo::SetAddCategory(const char* szAddCategory)
{
free(m_szAddCategory);
m_szAddCategory = strdup(szAddCategory ? szAddCategory : "");
}
void FeedItemInfo::SetDupeKey(const char* szDupeKey)
{
free(m_szDupeKey);
m_szDupeKey = strdup(szDupeKey ? szDupeKey : "");
}
void FeedItemInfo::AppendDupeKey(const char* szExtraDupeKey)
{
if (!m_szDupeKey || *m_szDupeKey == '\0' || !szExtraDupeKey || *szExtraDupeKey == '\0')
{
return;
}
int iLen = (m_szDupeKey ? strlen(m_szDupeKey) : 0) + 1 + strlen(szExtraDupeKey) + 1;
char* szNewKey = (char*)malloc(iLen);
snprintf(szNewKey, iLen, "%s-%s", m_szDupeKey, szExtraDupeKey);
szNewKey[iLen - 1] = '\0';
free(m_szDupeKey);
m_szDupeKey = szNewKey;
}
void FeedItemInfo::BuildDupeKey(const char* szRageId, const char* szSeries)
{
int iRageId = szRageId && *szRageId ? atoi(szRageId) : m_iRageId;
free(m_szDupeKey);
if (m_iImdbId != 0)
{
m_szDupeKey = (char*)malloc(20);
snprintf(m_szDupeKey, 20, "imdb=%i", m_iImdbId);
}
else if (szSeries && *szSeries && GetSeasonNum() != 0 && GetEpisodeNum() != 0)
{
int iLen = strlen(szSeries) + 50;
m_szDupeKey = (char*)malloc(iLen);
snprintf(m_szDupeKey, iLen, "series=%s-%s-%s", szSeries, m_szSeason, m_szEpisode);
m_szDupeKey[iLen-1] = '\0';
}
else if (iRageId != 0 && GetSeasonNum() != 0 && GetEpisodeNum() != 0)
{
m_szDupeKey = (char*)malloc(100);
snprintf(m_szDupeKey, 100, "rageid=%i-%s-%s", iRageId, m_szSeason, m_szEpisode);
m_szDupeKey[100-1] = '\0';
}
else
{
m_szDupeKey = strdup("");
}
}
int FeedItemInfo::GetSeasonNum()
{
if (!m_szSeason && !m_bSeasonEpisodeParsed)
{
ParseSeasonEpisode();
}
return m_iSeasonNum;
}
int FeedItemInfo::GetEpisodeNum()
{
if (!m_szEpisode && !m_bSeasonEpisodeParsed)
{
ParseSeasonEpisode();
}
return m_iEpisodeNum;
}
void FeedItemInfo::ParseSeasonEpisode()
{
m_bSeasonEpisodeParsed = true;
RegEx* pRegEx = m_pSharedFeedData->GetSeasonEpisodeRegEx();
if (pRegEx->Match(m_szTitle))
{
char szRegValue[100];
char szValue[100];
snprintf(szValue, 100, "S%02d", atoi(m_szTitle + pRegEx->GetMatchStart(1)));
szValue[100-1] = '\0';
SetSeason(szValue);
int iLen = pRegEx->GetMatchLen(2);
iLen = iLen < 99 ? iLen : 99;
strncpy(szRegValue, m_szTitle + pRegEx->GetMatchStart(2), iLen); // use the clamped length to avoid overflowing szRegValue
szRegValue[iLen] = '\0';
snprintf(szValue, 100, "E%s", szRegValue);
szValue[100-1] = '\0';
Util::ReduceStr(szValue, "-", "");
for (char* p = szValue; *p; p++) *p = toupper(*p); // convert string to uppercase e02 -> E02
SetEpisode(szValue);
}
}
const char* FeedItemInfo::GetDupeStatus()
{
if (!m_szDupeStatus)
{
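// the array is indexed by the bit values of DupeCoordinator::EDupeStatus; only the entries at 1, 2, 4, 8 and 16 are real flag names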
const char* szDupeStatusName[] = { "", "QUEUED", "DOWNLOADING", "3", "SUCCESS", "5", "6", "7", "WARNING",
"9", "10", "11", "12", "13", "14", "15", "FAILURE" };
char szStatuses[200];
szStatuses[0] = '\0';
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
DupeCoordinator::EDupeStatus eDupeStatus = g_pDupeCoordinator->GetDupeStatus(pDownloadQueue, m_szTitle, m_szDupeKey);
DownloadQueue::Unlock();
for (int i = 1; i <= (int)DupeCoordinator::dsFailure; i = i << 1)
{
if (eDupeStatus & i)
{
if (*szStatuses)
{
strcat(szStatuses, ",");
}
strcat(szStatuses, szDupeStatusName[i]);
}
}
m_szDupeStatus = strdup(szStatuses);
}
return m_szDupeStatus;
}
FeedHistoryInfo::FeedHistoryInfo(const char* szUrl, FeedHistoryInfo::EStatus eStatus, time_t tLastSeen)
{
m_szUrl = szUrl ? strdup(szUrl) : NULL;
m_eStatus = eStatus;
m_tLastSeen = tLastSeen;
}
FeedHistoryInfo::~FeedHistoryInfo()
{
free(m_szUrl);
}
FeedHistory::~FeedHistory()
{
Clear();
}
void FeedHistory::Clear()
{
for (iterator it = begin(); it != end(); it++)
{
delete *it;
}
clear();
}
void FeedHistory::Add(const char* szUrl, FeedHistoryInfo::EStatus eStatus, time_t tLastSeen)
{
push_back(new FeedHistoryInfo(szUrl, eStatus, tLastSeen));
}
void FeedHistory::Remove(const char* szUrl)
{
for (iterator it = begin(); it != end(); it++)
{
FeedHistoryInfo* pFeedHistoryInfo = *it;
if (!strcmp(pFeedHistoryInfo->GetUrl(), szUrl))
{
delete pFeedHistoryInfo;
erase(it);
break;
}
}
}
FeedHistoryInfo* FeedHistory::Find(const char* szUrl)
{
for (iterator it = begin(); it != end(); it++)
{
FeedHistoryInfo* pFeedHistoryInfo = *it;
if (!strcmp(pFeedHistoryInfo->GetUrl(), szUrl))
{
return pFeedHistoryInfo;
}
}
return NULL;
}
FeedItemInfos::FeedItemInfos()
{
debug("Creating FeedItemInfos");
m_iRefCount = 0;
}
FeedItemInfos::~FeedItemInfos()
{
debug("Destroing FeedItemInfos");
for (iterator it = begin(); it != end(); it++)
{
delete *it;
}
}
void FeedItemInfos::Retain()
{
m_iRefCount++;
}
void FeedItemInfos::Release()
{
m_iRefCount--;
if (m_iRefCount <= 0)
{
delete this;
}
}
void FeedItemInfos::Add(FeedItemInfo* pFeedItemInfo)
{
push_back(pFeedItemInfo);
pFeedItemInfo->SetSharedFeedData(&m_SharedFeedData);
}
SharedFeedData::SharedFeedData()
{
m_pSeasonEpisodeRegEx = NULL;
}
SharedFeedData::~SharedFeedData()
{
delete m_pSeasonEpisodeRegEx;
}
RegEx* SharedFeedData::GetSeasonEpisodeRegEx()
{
if (!m_pSeasonEpisodeRegEx)
{
m_pSeasonEpisodeRegEx = new RegEx("[^[:alnum:]]s?([0-9]+)[ex]([0-9]+(-?e[0-9]+)?)[^[:alnum:]]", 10);
}
return m_pSeasonEpisodeRegEx;
}
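
Not part of the diff: a sketch showing how the lazy season/episode parsing and BuildDupeKey above fit together. The title and rage id are made-up values; the item must be added to a FeedItemInfos container first so the shared season/episode regex is available.

// Sketch only: season/episode extraction and dupe-key construction
#include <stdio.h>
#include "FeedInfo.h"

void DupeKeyExample()
{
	FeedItemInfos* pFeedItemInfos = new FeedItemInfos();
	pFeedItemInfos->Retain();

	FeedItemInfo* pFeedItemInfo = new FeedItemInfo();
	pFeedItemInfos->Add(pFeedItemInfo); // Add() wires in the SharedFeedData used by ParseSeasonEpisode

	pFeedItemInfo->SetTitle("Example.Show.S03E09.720p");
	pFeedItemInfo->BuildDupeKey("33877", NULL);

	// prints "S03E09 -> rageid=33877-S03-E09"
	printf("S%02iE%02i -> %s\n", pFeedItemInfo->GetSeasonNum(),
		pFeedItemInfo->GetEpisodeNum(), pFeedItemInfo->GetDupeKey());

	pFeedItemInfos->Release(); // frees the container together with its items
}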

280
daemon/feed/FeedInfo.h Normal file

@@ -0,0 +1,280 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision: 0 $
* $Date$
*
*/
#ifndef FEEDINFO_H
#define FEEDINFO_H
#include <deque>
#include <time.h>
#include "Util.h"
#include "DownloadInfo.h"
class FeedInfo
{
public:
enum EStatus
{
fsUndefined,
fsRunning,
fsFinished,
fsFailed
};
private:
int m_iID;
char* m_szName;
char* m_szUrl;
int m_iInterval;
char* m_szFilter;
unsigned int m_iFilterHash;
bool m_bPauseNzb;
char* m_szCategory;
int m_iPriority;
time_t m_tLastUpdate;
bool m_bPreview;
EStatus m_eStatus;
char* m_szOutputFilename;
bool m_bFetch;
bool m_bForce;
public:
FeedInfo(int iID, const char* szName, const char* szUrl, int iInterval,
const char* szFilter, bool bPauseNzb, const char* szCategory, int iPriority);
~FeedInfo();
int GetID() { return m_iID; }
const char* GetName() { return m_szName; }
const char* GetUrl() { return m_szUrl; }
int GetInterval() { return m_iInterval; }
const char* GetFilter() { return m_szFilter; }
unsigned int GetFilterHash() { return m_iFilterHash; }
bool GetPauseNzb() { return m_bPauseNzb; }
const char* GetCategory() { return m_szCategory; }
int GetPriority() { return m_iPriority; }
time_t GetLastUpdate() { return m_tLastUpdate; }
void SetLastUpdate(time_t tLastUpdate) { m_tLastUpdate = tLastUpdate; }
bool GetPreview() { return m_bPreview; }
void SetPreview(bool bPreview) { m_bPreview = bPreview; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
const char* GetOutputFilename() { return m_szOutputFilename; }
void SetOutputFilename(const char* szOutputFilename);
bool GetFetch() { return m_bFetch; }
void SetFetch(bool bFetch) { m_bFetch = bFetch; }
bool GetForce() { return m_bForce; }
void SetForce(bool bForce) { m_bForce = bForce; }
};
typedef std::deque<FeedInfo*> Feeds;
class SharedFeedData
{
private:
RegEx* m_pSeasonEpisodeRegEx;
public:
SharedFeedData();
~SharedFeedData();
RegEx* GetSeasonEpisodeRegEx();
};
class FeedItemInfo
{
public:
enum EStatus
{
isUnknown,
isBacklog,
isFetched,
isNew
};
enum EMatchStatus
{
msIgnored,
msAccepted,
msRejected
};
class Attr
{
private:
char* m_szName;
char* m_szValue;
public:
Attr(const char* szName, const char* szValue);
~Attr();
const char* GetName() { return m_szName; }
const char* GetValue() { return m_szValue; }
};
typedef std::deque<Attr*> AttributesBase;
class Attributes: public AttributesBase
{
public:
~Attributes();
void Add(const char* szName, const char* szValue);
Attr* Find(const char* szName);
};
private:
char* m_szTitle;
char* m_szFilename;
char* m_szUrl;
time_t m_tTime;
long long m_lSize;
char* m_szCategory;
int m_iImdbId;
int m_iRageId;
char* m_szDescription;
char* m_szSeason;
char* m_szEpisode;
int m_iSeasonNum;
int m_iEpisodeNum;
bool m_bSeasonEpisodeParsed;
char* m_szAddCategory;
bool m_bPauseNzb;
int m_iPriority;
EStatus m_eStatus;
EMatchStatus m_eMatchStatus;
int m_iMatchRule;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
char* m_szDupeStatus;
SharedFeedData* m_pSharedFeedData;
Attributes m_Attributes;
int ParsePrefixedInt(const char *szValue);
void ParseSeasonEpisode();
public:
FeedItemInfo();
~FeedItemInfo();
void SetSharedFeedData(SharedFeedData* pSharedFeedData) { m_pSharedFeedData = pSharedFeedData; }
const char* GetTitle() { return m_szTitle; }
void SetTitle(const char* szTitle);
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
const char* GetUrl() { return m_szUrl; }
void SetUrl(const char* szUrl);
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
const char* GetCategory() { return m_szCategory; }
void SetCategory(const char* szCategory);
int GetImdbId() { return m_iImdbId; }
void SetImdbId(int iImdbId) { m_iImdbId = iImdbId; }
int GetRageId() { return m_iRageId; }
void SetRageId(int iRageId) { m_iRageId = iRageId; }
const char* GetDescription() { return m_szDescription; }
void SetDescription(const char* szDescription);
const char* GetSeason() { return m_szSeason; }
void SetSeason(const char* szSeason);
const char* GetEpisode() { return m_szEpisode; }
void SetEpisode(const char* szEpisode);
int GetSeasonNum();
int GetEpisodeNum();
const char* GetAddCategory() { return m_szAddCategory; }
void SetAddCategory(const char* szAddCategory);
bool GetPauseNzb() { return m_bPauseNzb; }
void SetPauseNzb(bool bPauseNzb) { m_bPauseNzb = bPauseNzb; }
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus eStatus) { m_eStatus = eStatus; }
EMatchStatus GetMatchStatus() { return m_eMatchStatus; }
void SetMatchStatus(EMatchStatus eMatchStatus) { m_eMatchStatus = eMatchStatus; }
int GetMatchRule() { return m_iMatchRule; }
void SetMatchRule(int iMatchRule) { m_iMatchRule = iMatchRule; }
const char* GetDupeKey() { return m_szDupeKey; }
void SetDupeKey(const char* szDupeKey);
void AppendDupeKey(const char* szExtraDupeKey);
void BuildDupeKey(const char* szRageId, const char* szSeries);
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
const char* GetDupeStatus();
Attributes* GetAttributes() { return &m_Attributes; }
};
typedef std::deque<FeedItemInfo*> FeedItemInfosBase;
class FeedItemInfos : public FeedItemInfosBase
{
private:
int m_iRefCount;
SharedFeedData m_SharedFeedData;
public:
FeedItemInfos();
~FeedItemInfos();
void Retain();
void Release();
void Add(FeedItemInfo* pFeedItemInfo);
};
class FeedHistoryInfo
{
public:
enum EStatus
{
hsUnknown,
hsBacklog,
hsFetched
};
private:
char* m_szUrl;
EStatus m_eStatus;
time_t m_tLastSeen;
public:
FeedHistoryInfo(const char* szUrl, EStatus eStatus, time_t tLastSeen);
~FeedHistoryInfo();
const char* GetUrl() { return m_szUrl; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
time_t GetLastSeen() { return m_tLastSeen; }
void SetLastSeen(time_t tLastSeen) { m_tLastSeen = tLastSeen; }
};
typedef std::deque<FeedHistoryInfo*> FeedHistoryBase;
class FeedHistory : public FeedHistoryBase
{
public:
~FeedHistory();
void Clear();
void Add(const char* szUrl, FeedHistoryInfo::EStatus eStatus, time_t tLastSeen);
void Remove(const char* szUrl);
FeedHistoryInfo* Find(const char* szUrl);
};
#endif
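
Not part of the diff: a small sketch of querying the Attributes collection declared above for a newznab attribute captured during feed parsing. The attribute name "coverurl" is only an example and may be absent from a given feed.

// Sketch only: look up a parsed <newznab:attr name="..." value="..."/> entry
#include "FeedInfo.h"

const char* GetCoverUrl(FeedItemInfo* pFeedItemInfo)
{
	FeedItemInfo::Attr* pAttr = pFeedItemInfo->GetAttributes()->Find("coverurl"); // case-insensitive lookup
	return pAttr ? pAttr->GetValue() : NULL;
}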


@@ -76,7 +76,7 @@ void ColoredFrontend::PrintStatus()
timeString[0] = '\0';
int iCurrentDownloadSpeed = m_bStandBy ? 0 : m_iCurrentDownloadSpeed;
if (iCurrentDownloadSpeed > 0 && !(m_bPauseDownload || m_bPauseDownload2))
if (iCurrentDownloadSpeed > 0 && !m_bPauseDownload)
{
long long remain_sec = (long long)(m_lRemainingSize / iCurrentDownloadSpeed);
int h = (int)(remain_sec / 3600);
@@ -115,7 +115,7 @@ void ColoredFrontend::PrintStatus()
snprintf(tmp, 1024, " %d threads, %.*f KB/s, %.2f MB remaining%s%s%s%s%s\n",
m_iThreadCount, (iCurrentDownloadSpeed >= 10*1024 ? 0 : 1), (float)iCurrentDownloadSpeed / 1024.0,
(float)(Util::Int64ToFloat(m_lRemainingSize) / 1024.0 / 1024.0), timeString, szPostStatus,
m_bPauseDownload || m_bPauseDownload2 ? (m_bStandBy ? ", Paused" : ", Pausing") : "",
m_bPauseDownload ? (m_bStandBy ? ", Paused" : ", Pausing") : "",
szDownloadLimit, szControlSeq);
tmp[1024-1] = '\0';
printf("%s", tmp);


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -34,8 +34,7 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <fstream>
#include <stdio.h>
#ifndef WIN32
#include <unistd.h>
#include <arpa/inet.h>
@@ -48,12 +47,12 @@
#include "Log.h"
#include "Connection.h"
#include "MessageBase.h"
#include "QueueCoordinator.h"
#include "RemoteClient.h"
#include "Util.h"
#include "StatMeter.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern Options* g_pOptions;
extern StatMeter* g_pStatMeter;
Frontend::Frontend()
{
@@ -66,7 +65,6 @@ Frontend::Frontend()
m_iCurrentDownloadSpeed = 0;
m_lRemainingSize = 0;
m_bPauseDownload = false;
m_bPauseDownload2 = false;
m_iDownloadLimit = 0;
m_iThreadCount = 0;
m_iPostJobCount = 0;
@@ -96,16 +94,22 @@ bool Frontend::PrepareData()
{
if (m_bSummary)
{
m_iCurrentDownloadSpeed = g_pQueueCoordinator->CalcCurrentDownloadSpeed();
m_lRemainingSize = g_pQueueCoordinator->CalcRemainingSize();
m_iCurrentDownloadSpeed = g_pStatMeter->CalcCurrentDownloadSpeed();
m_bPauseDownload = g_pOptions->GetPauseDownload();
m_bPauseDownload2 = g_pOptions->GetPauseDownload2();
m_iDownloadLimit = g_pOptions->GetDownloadRate();
m_iThreadCount = Thread::GetThreadCount();
PostQueue* pPostQueue = g_pQueueCoordinator->LockQueue()->GetPostQueue();
m_iPostJobCount = pPostQueue->size();
g_pQueueCoordinator->UnlockQueue();
g_pQueueCoordinator->CalcStat(&m_iUpTimeSec, &m_iDnTimeSec, &m_iAllBytes, &m_bStandBy);
g_pStatMeter->CalcTotalStat(&m_iUpTimeSec, &m_iDnTimeSec, &m_iAllBytes, &m_bStandBy);
DownloadQueue *pDownloadQueue = DownloadQueue::Lock();
m_iPostJobCount = 0;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
m_iPostJobCount += pNZBInfo->GetPostInfo() ? 1 : 0;
}
pDownloadQueue->CalcRemainingSize(&m_lRemainingSize, NULL);
DownloadQueue::Unlock();
}
}
return true;
@@ -121,15 +125,13 @@ void Frontend::FreeData()
}
m_RemoteMessages.clear();
for (FileQueue::iterator it = m_RemoteQueue.GetFileQueue()->begin(); it != m_RemoteQueue.GetFileQueue()->end(); it++)
{
delete *it;
}
m_RemoteQueue.GetFileQueue()->clear();
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
pDownloadQueue->GetQueue()->Clear();
DownloadQueue::Unlock();
}
}
Log::Messages * Frontend::LockMessages()
Log::Messages* Frontend::LockMessages()
{
if (IsRemoteMode())
{
@@ -151,22 +153,12 @@ void Frontend::UnlockMessages()
DownloadQueue* Frontend::LockQueue()
{
if (IsRemoteMode())
{
return &m_RemoteQueue;
}
else
{
return g_pQueueCoordinator->LockQueue();
}
return DownloadQueue::Lock();
}
void Frontend::UnlockQueue()
{
if (!IsRemoteMode())
{
g_pQueueCoordinator->UnlockQueue();
}
DownloadQueue::Unlock();
}
bool Frontend::IsRemoteMode()
@@ -174,23 +166,16 @@ bool Frontend::IsRemoteMode()
return g_pOptions->GetRemoteClientMode();
}
void Frontend::ServerPauseUnpause(bool bPause, bool bSecondRegister)
void Frontend::ServerPauseUnpause(bool bPause)
{
if (IsRemoteMode())
{
RequestPauseUnpause(bPause, bSecondRegister);
RequestPauseUnpause(bPause);
}
else
{
g_pOptions->SetResumeTime(0);
if (bSecondRegister)
{
g_pOptions->SetPauseDownload2(bPause);
}
else
{
g_pOptions->SetPauseDownload(bPause);
}
g_pOptions->SetPauseDownload(bPause);
}
}
@@ -206,27 +191,18 @@ void Frontend::ServerSetDownloadRate(int iRate)
}
}
void Frontend::ServerDumpDebug()
bool Frontend::ServerEditQueue(DownloadQueue::EEditAction eAction, int iOffset, int iID)
{
if (IsRemoteMode())
{
RequestDumpDebug();
return RequestEditQueue(eAction, iOffset, iID);
}
else
{
g_pQueueCoordinator->LogDebugInfo();
}
}
bool Frontend::ServerEditQueue(QueueEditor::EEditAction eAction, int iOffset, int iID)
{
if (IsRemoteMode())
{
return RequestEditQueue((eRemoteEditAction)eAction, iOffset, iID);
}
else
{
return g_pQueueCoordinator->GetQueueEditor()->EditEntry(iID, true, eAction, iOffset, NULL);
DownloadQueue* pDownloadQueue = LockQueue();
bool bOK = pDownloadQueue->EditEntry(iID, eAction, iOffset, NULL);
UnlockQueue();
return bOK;
}
return false;
}
@@ -236,6 +212,10 @@ void Frontend::InitMessageBase(SNZBRequestBase* pMessageBase, int iRequest, int
pMessageBase->m_iSignature = htonl(NZBMESSAGE_SIGNATURE);
pMessageBase->m_iType = htonl(iRequest);
pMessageBase->m_iStructSize = htonl(iSize);
strncpy(pMessageBase->m_szUsername, g_pOptions->GetControlUsername(), NZBREQUESTPASSWORDSIZE - 1);
pMessageBase->m_szUsername[NZBREQUESTPASSWORDSIZE - 1] = '\0';
strncpy(pMessageBase->m_szPassword, g_pOptions->GetControlPassword(), NZBREQUESTPASSWORDSIZE);
pMessageBase->m_szPassword[NZBREQUESTPASSWORDSIZE - 1] = '\0';
}
@@ -357,7 +337,6 @@ bool Frontend::RequestFileList()
if (m_bSummary)
{
m_bPauseDownload = ntohl(ListResponse.m_bDownloadPaused);
m_bPauseDownload2 = ntohl(ListResponse.m_bDownload2Paused);
m_lRemainingSize = Util::JoinInt64(ntohl(ListResponse.m_iRemainingSizeHi), ntohl(ListResponse.m_iRemainingSizeLo));
m_iCurrentDownloadSpeed = ntohl(ListResponse.m_iDownloadRate);
m_iDownloadLimit = ntohl(ListResponse.m_iDownloadLimit);
@@ -373,7 +352,10 @@ bool Frontend::RequestFileList()
{
RemoteClient client;
client.SetVerbose(false);
client.BuildFileList(&ListResponse, pBuf, &m_RemoteQueue);
DownloadQueue* pDownloadQueue = LockQueue();
client.BuildFileList(&ListResponse, pBuf, pDownloadQueue);
UnlockQueue();
}
if (pBuf)
@@ -384,11 +366,11 @@ bool Frontend::RequestFileList()
return true;
}
bool Frontend::RequestPauseUnpause(bool bPause, bool bSecondRegister)
bool Frontend::RequestPauseUnpause(bool bPause)
{
RemoteClient client;
client.SetVerbose(false);
return client.RequestServerPauseUnpause(bPause, bSecondRegister ? eRemotePauseUnpauseActionDownload2 : eRemotePauseUnpauseActionDownload);
return client.RequestServerPauseUnpause(bPause, eRemotePauseUnpauseActionDownload);
}
bool Frontend::RequestSetDownloadRate(int iRate)
@@ -398,16 +380,9 @@ bool Frontend::RequestSetDownloadRate(int iRate)
return client.RequestServerSetDownloadRate(iRate);
}
bool Frontend::RequestDumpDebug()
bool Frontend::RequestEditQueue(DownloadQueue::EEditAction eAction, int iOffset, int iID)
{
RemoteClient client;
client.SetVerbose(false);
return client.RequestServerDumpDebug();
}
bool Frontend::RequestEditQueue(eRemoteEditAction iAction, int iOffset, int iID)
{
RemoteClient client;
client.SetVerbose(false);
return client.RequestServerEditQueue(iAction, iOffset, NULL, &iID, 1, NULL, eRemoteMatchModeID, false);
return client.RequestServerEditQueue(eAction, iOffset, NULL, &iID, 1, NULL, eRemoteMatchModeID);
}


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -37,7 +37,6 @@ class Frontend : public Thread
{
private:
Log::Messages m_RemoteMessages;
DownloadQueue m_RemoteQueue;
bool RequestMessages();
bool RequestFileList();
@@ -53,7 +52,6 @@ protected:
int m_iCurrentDownloadSpeed;
long long m_lRemainingSize;
bool m_bPauseDownload;
bool m_bPauseDownload2;
int m_iDownloadLimit;
int m_iThreadCount;
int m_iPostJobCount;
@@ -70,14 +68,12 @@ protected:
void UnlockQueue();
bool IsRemoteMode();
void InitMessageBase(SNZBRequestBase* pMessageBase, int iRequest, int iSize);
void ServerPauseUnpause(bool bPause, bool bSecondRegister);
bool RequestPauseUnpause(bool bPause, bool bSecondRegister);
void ServerPauseUnpause(bool bPause);
bool RequestPauseUnpause(bool bPause);
void ServerSetDownloadRate(int iRate);
bool RequestSetDownloadRate(int iRate);
void ServerDumpDebug();
bool RequestDumpDebug();
bool ServerEditQueue(QueueEditor::EEditAction eAction, int iOffset, int iEntry);
bool RequestEditQueue(eRemoteEditAction iAction, int iOffset, int iID);
bool ServerEditQueue(DownloadQueue::EEditAction eAction, int iOffset, int iEntry);
bool RequestEditQueue(DownloadQueue::EEditAction eAction, int iOffset, int iID);
public:
Frontend();


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2011 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -43,6 +43,7 @@
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifndef WIN32
#include <unistd.h>
#endif
@@ -202,14 +203,9 @@ NCursesFrontend::NCursesFrontend()
NCursesFrontend::~NCursesFrontend()
{
#ifdef WIN32
if (m_pScreenBuffer)
{
free(m_pScreenBuffer);
}
if (m_pOldScreenBuffer)
{
free(m_pOldScreenBuffer);
}
free(m_pScreenBuffer);
free(m_pOldScreenBuffer);
m_ColorAttr.clear();
HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
@@ -274,7 +270,6 @@ void NCursesFrontend::Run()
}
FreeData();
ClearGroupQueue();
debug("Exiting NCursesFrontend-loop");
}
@@ -293,13 +288,11 @@ void NCursesFrontend::Update(int iKey)
if (m_iDataUpdatePos <= 0)
{
FreeData();
ClearGroupQueue();
m_iNeededLogEntries = m_iMessagesWinClientHeight;
if (!PrepareData())
{
return;
}
PrepareGroupQueue();
// recalculate frame sizes
CalcWindowSizes();
@@ -392,17 +385,22 @@ void NCursesFrontend::CalcWindowSizes()
int NCursesFrontend::CalcQueueSize()
{
int iQueueSize = 0;
DownloadQueue* pDownloadQueue = LockQueue();
if (m_bGroupFiles)
{
return m_groupQueue.size();
iQueueSize = pDownloadQueue->GetQueue()->size();
}
else
{
DownloadQueue* pDownloadQueue = LockQueue();
int iQueueSize = pDownloadQueue->GetFileQueue()->size();
UnlockQueue();
return iQueueSize;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
iQueueSize += pNZBInfo->GetFileList()->size();
}
}
UnlockQueue();
return iQueueSize;
}
void NCursesFrontend::PlotLine(const char * szString, int iRow, int iPos, int iColorPair)
@@ -550,6 +548,8 @@ int NCursesFrontend::PrintMessage(Message* Msg, int iRow, int iMaxLines)
szText = (char*)malloc(iLen);
time_t rawtime = Msg->GetTime();
rawtime += g_pOptions->GetTimeCorrection();
char szTime[50];
#ifdef HAVE_CTIME_R_3
ctime_r(&rawtime, szTime, 50);
@@ -614,7 +614,7 @@ void NCursesFrontend::PrintStatus()
timeString[0] = '\0';
int iCurrentDownloadSpeed = m_bStandBy ? 0 : m_iCurrentDownloadSpeed;
if (iCurrentDownloadSpeed > 0 && !(m_bPauseDownload || m_bPauseDownload2))
if (iCurrentDownloadSpeed > 0 && !m_bPauseDownload)
{
long long remain_sec = (long long)(m_lRemainingSize / iCurrentDownloadSpeed);
int h = (int)(remain_sec / 3600);
@@ -645,12 +645,10 @@ void NCursesFrontend::PrintStatus()
float fAverageSpeed = (float)(Util::Int64ToFloat(m_iDnTimeSec > 0 ? m_iAllBytes / m_iDnTimeSec : 0) / 1024.0);
snprintf(tmp, MAX_SCREEN_WIDTH, " %d threads, %.*f KB/s, %.2f MB remaining%s%s%s%s%s, Avg. %.*f KB/s",
snprintf(tmp, MAX_SCREEN_WIDTH, " %d threads, %.*f KB/s, %.2f MB remaining%s%s%s%s, Avg. %.*f KB/s",
m_iThreadCount, (iCurrentDownloadSpeed >= 10*1024 ? 0 : 1), (float)iCurrentDownloadSpeed / 1024.0,
(float)(Util::Int64ToFloat(m_lRemainingSize) / 1024.0 / 1024.0), timeString, szPostStatus,
m_bPauseDownload || m_bPauseDownload2 ? (m_bStandBy ? ", Paused" : ", Pausing") : "",
m_bPauseDownload || m_bPauseDownload2 ?
(m_bPauseDownload && m_bPauseDownload2 ? " (+2)" : m_bPauseDownload2 ? " (2)" : "") : "",
m_bPauseDownload ? (m_bStandBy ? ", Paused" : ", Pausing") : "",
szDownloadLimit, (fAverageSpeed >= 10 ? 0 : 1), fAverageSpeed);
tmp[MAX_SCREEN_WIDTH - 1] = '\0';
PlotLine(tmp, iStatusRow, 0, NCURSES_COLORPAIR_STATUS);
@@ -723,11 +721,8 @@ void NCursesFrontend::PrintKeyInputBar()
void NCursesFrontend::SetHint(const char* szHint)
{
if (m_szHint)
{
free(m_szHint);
m_szHint = NULL;
}
free(m_szHint);
m_szHint = NULL;
if (szHint)
{
m_szHint = strdup(szHint);
@@ -749,31 +744,24 @@ void NCursesFrontend::PrintQueue()
void NCursesFrontend::PrintFileQueue()
{
int iLineNr = m_iQueueWinTop;
DownloadQueue* pDownloadQueue = LockQueue();
if (pDownloadQueue->GetFileQueue()->empty())
{
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), "%s Files for downloading", m_bUseColor ? "" : "*** ");
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
PrintTopHeader(szBuffer, iLineNr++, true);
PlotLine("Ready to receive nzb-job", iLineNr++, 0, NCURSES_COLORPAIR_TEXT);
}
else
{
iLineNr++;
long long lRemaining = 0;
long long lPaused = 0;
int iPausedFiles = 0;
int i = 0;
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++, i++)
{
FileInfo* pFileInfo = *it;
if (i >= m_iQueueScrollOffset && i < m_iQueueScrollOffset + m_iQueueWinHeight -1)
int iLineNr = m_iQueueWinTop + 1;
long long lRemaining = 0;
long long lPaused = 0;
int iPausedFiles = 0;
int iFileNum = 0;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
for (FileList::iterator it2 = pNZBInfo->GetFileList()->begin(); it2 != pNZBInfo->GetFileList()->end(); it2++, iFileNum++)
{
FileInfo* pFileInfo = *it2;
if (iFileNum >= m_iQueueScrollOffset && iFileNum < m_iQueueScrollOffset + m_iQueueWinHeight -1)
{
PrintFilename(pFileInfo, iLineNr++, i == m_iSelectedQueueEntry);
PrintFilename(pFileInfo, iLineNr++, iFileNum == m_iSelectedQueueEntry);
}
if (pFileInfo->GetPaused())
@@ -782,8 +770,11 @@ void NCursesFrontend::PrintFileQueue()
lPaused += pFileInfo->GetRemainingSize();
}
lRemaining += pFileInfo->GetRemainingSize();
}
}
}
if (iFileNum > 0)
{
char szRemaining[20];
Util::FormatFileSize(szRemaining, sizeof(szRemaining), lRemaining);
@@ -792,11 +783,21 @@ void NCursesFrontend::PrintFileQueue()
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), " %sFiles for downloading - %i / %i files in queue - %s / %s",
m_bUseColor ? "" : "*** ", pDownloadQueue->GetFileQueue()->size(),
pDownloadQueue->GetFileQueue()->size() - iPausedFiles, szRemaining, szUnpaused);
m_bUseColor ? "" : "*** ", iFileNum,
iFileNum - iPausedFiles, szRemaining, szUnpaused);
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
PrintTopHeader(szBuffer, m_iQueueWinTop, true);
}
else
{
iLineNr--;
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), "%s Files for downloading", m_bUseColor ? "" : "*** ");
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
PrintTopHeader(szBuffer, iLineNr++, true);
PlotLine("Ready to receive nzb-job", iLineNr++, 0, NCURSES_COLORPAIR_TEXT);
}
UnlockQueue();
}
@@ -827,9 +828,9 @@ void NCursesFrontend::PrintFilename(FileInfo * pFileInfo, int iRow, bool bSelect
char szPriority[100];
szPriority[0] = '\0';
if (pFileInfo->GetPriority() != 0)
if (pFileInfo->GetNZBInfo()->GetPriority() != 0)
{
sprintf(szPriority, " [%+i]", pFileInfo->GetPriority());
sprintf(szPriority, " [%+i]", pFileInfo->GetNZBInfo()->GetPriority());
}
char szCompleted[20];
@@ -926,9 +927,8 @@ void NCursesFrontend::PrintGroupQueue()
{
int iLineNr = m_iQueueWinTop;
LockQueue();
GroupQueue* pGroupQueue = &m_groupQueue;
if (pGroupQueue->empty())
DownloadQueue* pDownloadQueue = LockQueue();
if (pDownloadQueue->GetQueue()->empty())
{
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), "%s NZBs for downloading", m_bUseColor ? "" : "*** ");
@@ -943,30 +943,27 @@ void NCursesFrontend::PrintGroupQueue()
ResetColWidths();
int iCalcLineNr = iLineNr;
int i = 0;
for (GroupQueue::iterator it = pGroupQueue->begin(); it != pGroupQueue->end(); it++, i++)
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++, i++)
{
GroupInfo* pGroupInfo = *it;
NZBInfo* pNZBInfo = *it;
if (i >= m_iQueueScrollOffset && i < m_iQueueScrollOffset + m_iQueueWinHeight -1)
{
PrintGroupname(pGroupInfo, iCalcLineNr++, false, true);
PrintGroupname(pNZBInfo, iCalcLineNr++, false, true);
}
}
long long lRemaining = 0;
long long lPaused = 0;
i = 0;
for (GroupQueue::iterator it = pGroupQueue->begin(); it != pGroupQueue->end(); it++, i++)
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++, i++)
{
GroupInfo* pGroupInfo = *it;
NZBInfo* pNZBInfo = *it;
if (i >= m_iQueueScrollOffset && i < m_iQueueScrollOffset + m_iQueueWinHeight -1)
{
PrintGroupname(pGroupInfo, iLineNr++, i == m_iSelectedQueueEntry, false);
PrintGroupname(pNZBInfo, iLineNr++, i == m_iSelectedQueueEntry, false);
}
lRemaining += pGroupInfo->GetRemainingSize();
lPaused += pGroupInfo->GetPausedSize();
lRemaining += pNZBInfo->GetRemainingSize();
lPaused += pNZBInfo->GetPausedSize();
}
char szRemaining[20];
@@ -977,7 +974,7 @@ void NCursesFrontend::PrintGroupQueue()
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), " %sNZBs for downloading - %i NZBs in queue - %s / %s",
m_bUseColor ? "" : "*** ", pGroupQueue->size(), szRemaining, szUnpaused);
m_bUseColor ? "" : "*** ", (int)pDownloadQueue->GetQueue()->size(), szRemaining, szUnpaused);
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
PrintTopHeader(szBuffer, m_iQueueWinTop, false);
}
@@ -991,7 +988,7 @@ void NCursesFrontend::ResetColWidths()
m_iColWidthLeft = 0;
}
void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSelected, bool bCalcColWidth)
void NCursesFrontend::PrintGroupname(NZBInfo* pNZBInfo, int iRow, bool bSelected, bool bCalcColWidth)
{
int color = NCURSES_COLORPAIR_TEXT;
char chBrace1 = '[';
@@ -1007,28 +1004,21 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
}
const char* szDownloading = "";
if (pGroupInfo->GetActiveDownloads() > 0)
if (pNZBInfo->GetActiveDownloads() > 0)
{
szDownloading = " *";
}
long long lUnpausedRemainingSize = pGroupInfo->GetRemainingSize() - pGroupInfo->GetPausedSize();
long long lUnpausedRemainingSize = pNZBInfo->GetRemainingSize() - pNZBInfo->GetPausedSize();
char szRemaining[20];
Util::FormatFileSize(szRemaining, sizeof(szRemaining), lUnpausedRemainingSize);
char szPriority[100];
szPriority[0] = '\0';
if (pGroupInfo->GetMinPriority() != 0 || pGroupInfo->GetMaxPriority() != 0)
if (pNZBInfo->GetPriority() != 0)
{
if (pGroupInfo->GetMinPriority() == pGroupInfo->GetMaxPriority())
{
sprintf(szPriority, " [%+i]", pGroupInfo->GetMinPriority());
}
else
{
sprintf(szPriority, " [%+i..%+i]", pGroupInfo->GetMinPriority(), pGroupInfo->GetMaxPriority());
}
sprintf(szPriority, " [%+i]", pNZBInfo->GetPriority());
}
char szBuffer[MAX_SCREEN_WIDTH];
@@ -1052,26 +1042,26 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
if (bPrintFormatted)
{
char szFiles[20];
snprintf(szFiles, 20, "%i/%i", pGroupInfo->GetRemainingFileCount(), pGroupInfo->GetPausedFileCount());
snprintf(szFiles, 20, "%i/%i", (int)pNZBInfo->GetFileList()->size(), pNZBInfo->GetPausedFileCount());
szFiles[20-1] = '\0';
char szTotal[20];
Util::FormatFileSize(szTotal, sizeof(szTotal), pGroupInfo->GetNZBInfo()->GetSize());
Util::FormatFileSize(szTotal, sizeof(szTotal), pNZBInfo->GetSize());
char szNameWithIds[1024];
snprintf(szNameWithIds, 1024, "%c%i-%i%c%s%s %s", chBrace1, pGroupInfo->GetFirstID(), pGroupInfo->GetLastID(), chBrace2,
szPriority, szDownloading, pGroupInfo->GetNZBInfo()->GetName());
snprintf(szNameWithIds, 1024, "%c%i%c%s%s %s", chBrace1, pNZBInfo->GetID(), chBrace2,
szPriority, szDownloading, pNZBInfo->GetName());
szNameWithIds[iNameLen] = '\0';
char szTime[100];
szTime[0] = '\0';
int iCurrentDownloadSpeed = m_bStandBy ? 0 : m_iCurrentDownloadSpeed;
if (pGroupInfo->GetPausedSize() > 0 && lUnpausedRemainingSize == 0)
if (pNZBInfo->GetPausedSize() > 0 && lUnpausedRemainingSize == 0)
{
snprintf(szTime, 100, "[paused]");
Util::FormatFileSize(szRemaining, sizeof(szRemaining), pGroupInfo->GetRemainingSize());
Util::FormatFileSize(szRemaining, sizeof(szRemaining), pNZBInfo->GetRemainingSize());
}
else if (iCurrentDownloadSpeed > 0 && !(m_bPauseDownload || m_bPauseDownload2))
else if (iCurrentDownloadSpeed > 0 && !m_bPauseDownload)
{
long long remain_sec = (long long)(lUnpausedRemainingSize / iCurrentDownloadSpeed);
int h = (int)(remain_sec / 3600);
@@ -1105,8 +1095,8 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
}
else
{
snprintf(szBuffer, MAX_SCREEN_WIDTH, "%c%i-%i%c%s %s", chBrace1, pGroupInfo->GetFirstID(),
pGroupInfo->GetLastID(), chBrace2, szDownloading, pGroupInfo->GetNZBInfo()->GetName());
snprintf(szBuffer, MAX_SCREEN_WIDTH, "%c%i%c%s %s", chBrace1, pNZBInfo->GetID(),
chBrace2, szDownloading, pNZBInfo->GetName());
}
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
@@ -1117,79 +1107,73 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
}
}
void NCursesFrontend::PrepareGroupQueue()
{
m_groupQueue.clear();
DownloadQueue* pDownloadQueue = LockQueue();
pDownloadQueue->BuildGroups(&m_groupQueue);
UnlockQueue();
}
void NCursesFrontend::ClearGroupQueue()
{
for (GroupQueue::iterator it = m_groupQueue.begin(); it != m_groupQueue.end(); it++)
{
delete *it;
}
m_groupQueue.clear();
}
bool NCursesFrontend::EditQueue(QueueEditor::EEditAction eAction, int iOffset)
bool NCursesFrontend::EditQueue(DownloadQueue::EEditAction eAction, int iOffset)
{
int ID = 0;
if (m_bGroupFiles)
{
if (m_iSelectedQueueEntry >= 0 && m_iSelectedQueueEntry < (int)m_groupQueue.size())
DownloadQueue* pDownloadQueue = LockQueue();
if (m_iSelectedQueueEntry >= 0 && m_iSelectedQueueEntry < (int)pDownloadQueue->GetQueue()->size())
{
GroupInfo* pGroupInfo = m_groupQueue[m_iSelectedQueueEntry];
ID = pGroupInfo->GetLastID();
if (eAction == QueueEditor::eaFilePause)
NZBInfo* pNZBInfo = pDownloadQueue->GetQueue()->at(m_iSelectedQueueEntry);
ID = pNZBInfo->GetID();
if (eAction == DownloadQueue::eaFilePause)
{
if (pGroupInfo->GetRemainingSize() == pGroupInfo->GetPausedSize())
if (pNZBInfo->GetRemainingSize() == pNZBInfo->GetPausedSize())
{
eAction = QueueEditor::eaFileResume;
eAction = DownloadQueue::eaFileResume;
}
else if (pGroupInfo->GetPausedSize() == 0 && (pGroupInfo->GetRemainingParCount() > 0) &&
else if (pNZBInfo->GetPausedSize() == 0 && (pNZBInfo->GetRemainingParCount() > 0) &&
!(m_bLastPausePars && m_iLastEditEntry == m_iSelectedQueueEntry))
{
eAction = QueueEditor::eaFilePauseExtraPars;
eAction = DownloadQueue::eaFilePauseExtraPars;
m_bLastPausePars = true;
}
else
{
eAction = QueueEditor::eaFilePause;
eAction = DownloadQueue::eaFilePause;
m_bLastPausePars = false;
}
}
}
UnlockQueue();
// map file-edit-actions to group-edit-actions
QueueEditor::EEditAction FileToGroupMap[] = {
(QueueEditor::EEditAction)0,
QueueEditor::eaGroupMoveOffset,
QueueEditor::eaGroupMoveTop,
QueueEditor::eaGroupMoveBottom,
QueueEditor::eaGroupPause,
QueueEditor::eaGroupResume,
QueueEditor::eaGroupDelete,
QueueEditor::eaGroupPauseAllPars,
QueueEditor::eaGroupPauseExtraPars };
DownloadQueue::EEditAction FileToGroupMap[] = {
(DownloadQueue::EEditAction)0,
DownloadQueue::eaGroupMoveOffset,
DownloadQueue::eaGroupMoveTop,
DownloadQueue::eaGroupMoveBottom,
DownloadQueue::eaGroupPause,
DownloadQueue::eaGroupResume,
DownloadQueue::eaGroupDelete,
DownloadQueue::eaGroupPauseAllPars,
DownloadQueue::eaGroupPauseExtraPars };
eAction = FileToGroupMap[eAction];
}
else
{
DownloadQueue* pDownloadQueue = LockQueue();
if (m_iSelectedQueueEntry >= 0 && m_iSelectedQueueEntry < (int)pDownloadQueue->GetFileQueue()->size())
int iFileNum = 0;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
FileInfo* pFileInfo = pDownloadQueue->GetFileQueue()->at(m_iSelectedQueueEntry);
ID = pFileInfo->GetID();
if (eAction == QueueEditor::eaFilePause)
NZBInfo* pNZBInfo = *it;
for (FileList::iterator it2 = pNZBInfo->GetFileList()->begin(); it2 != pNZBInfo->GetFileList()->end(); it2++, iFileNum++)
{
eAction = !pFileInfo->GetPaused() ? QueueEditor::eaFilePause : QueueEditor::eaFileResume;
if (m_iSelectedQueueEntry == iFileNum)
{
FileInfo* pFileInfo = *it2;
ID = pFileInfo->GetID();
if (eAction == DownloadQueue::eaFilePause)
{
eAction = !pFileInfo->GetPaused() ? DownloadQueue::eaFilePause : DownloadQueue::eaFileResume;
}
}
}
}
UnlockQueue();
}
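EditQueue above converts the file-level edit action into its group-level counterpart with a plain lookup table indexed by the enum value, so the table entries must stay in the same order as the file actions. A stripped-down, self-contained sketch of that technique (the enum values below are illustrative, not the project's real DownloadQueue::EEditAction list):

#include <stdio.h>

// Illustrative subset: file actions start at 1 so they can index the table directly.
enum EEditAction
{
    eaFileMoveOffset = 1, eaFilePause, eaFileResume,
    eaGroupMoveOffset, eaGroupPause, eaGroupResume
};

static EEditAction FileToGroup(EEditAction eAction)
{
    // Entry 0 is a placeholder; entry i holds the group action for file action i.
    static const EEditAction FileToGroupMap[] = { (EEditAction)0,
        eaGroupMoveOffset, eaGroupPause, eaGroupResume };
    return FileToGroupMap[eAction];
}

int main()
{
    printf("file action %i maps to group action %i\n",
        (int)eaFilePause, (int)FileToGroup(eaFilePause));
    return 0;
}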
@@ -1306,12 +1290,9 @@ void NCursesFrontend::UpdateInput(int initialKey)
// Key 'p' for pause
if (!IsRemoteMode())
{
info(m_bPauseDownload || m_bPauseDownload2 ? "Unpausing download" : "Pausing download");
info(m_bPauseDownload ? "Unpausing download" : "Pausing download");
}
ServerPauseUnpause(!(m_bPauseDownload || m_bPauseDownload2), m_bPauseDownload2 && !m_bPauseDownload);
break;
case '\'':
ServerDumpDebug();
ServerPauseUnpause(!m_bPauseDownload);
break;
case 'e':
case 10: // return
@@ -1397,38 +1378,38 @@ void NCursesFrontend::UpdateInput(int initialKey)
break;
case 'p':
// Key 'p' for pause
EditQueue(QueueEditor::eaFilePause, 0);
EditQueue(DownloadQueue::eaFilePause, 0);
break;
case 'd':
SetHint(" Use Uppercase \"D\" for delete");
break;
case 'D':
// Delete entry
if (EditQueue(QueueEditor::eaFileDelete, 0))
if (EditQueue(DownloadQueue::eaFileDelete, 0))
{
SetCurrentQueueEntry(m_iSelectedQueueEntry);
}
break;
case 'u':
if (EditQueue(QueueEditor::eaFileMoveOffset, -1))
if (EditQueue(DownloadQueue::eaFileMoveOffset, -1))
{
SetCurrentQueueEntry(m_iSelectedQueueEntry - 1);
}
break;
case 'n':
if (EditQueue(QueueEditor::eaFileMoveOffset, +1))
if (EditQueue(DownloadQueue::eaFileMoveOffset, +1))
{
SetCurrentQueueEntry(m_iSelectedQueueEntry + 1);
}
break;
case 't':
if (EditQueue(QueueEditor::eaFileMoveTop, 0))
if (EditQueue(DownloadQueue::eaFileMoveTop, 0))
{
SetCurrentQueueEntry(0);
}
break;
case 'b':
if (EditQueue(QueueEditor::eaFileMoveBottom, 0))
if (EditQueue(DownloadQueue::eaFileMoveBottom, 0))
{
SetCurrentQueueEntry(iQueueSize > 0 ? iQueueSize - 1 : 0);
}


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -62,7 +62,6 @@ private:
int m_iLastEditEntry;
bool m_bLastPausePars;
int m_iQueueScrollOffset;
GroupQueue m_groupQueue;
char* m_szHint;
time_t m_tStartHint;
int m_iColWidthFiles;
@@ -99,10 +98,8 @@ private:
void PrintFilename(FileInfo* pFileInfo, int iRow, bool bSelected);
void PrintGroupQueue();
void ResetColWidths();
void PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSelected, bool bCalcColWidth);
void PrepareGroupQueue();
void PrintGroupname(NZBInfo* pNZBInfo, int iRow, bool bSelected, bool bCalcColWidth);
void PrintTopHeader(char* szHeader, int iLineNr, bool bUpTime);
void ClearGroupQueue();
int PrintMessage(Message* Msg, int iRow, int iMaxLines);
void PrintKeyInputBar();
void PrintStatus();
@@ -114,7 +111,7 @@ private:
int ReadConsoleKey();
int CalcQueueSize();
void NeedUpdateData();
bool EditQueue(QueueEditor::EEditAction eAction, int iOffset);
bool EditQueue(DownloadQueue::EEditAction eAction, int iOffset);
void SetHint(const char* szHint);
protected:

daemon/main/Maintenance.cpp Normal file

@@ -0,0 +1,328 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2015 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#ifndef WIN32
#include <unistd.h>
#endif
#include <errno.h>
#include "nzbget.h"
#include "Log.h"
#include "Util.h"
#include "Maintenance.h"
#include "Options.h"
extern Options* g_pOptions;
extern Maintenance* g_pMaintenance;
Maintenance::Maintenance()
{
m_iIDMessageGen = 0;
m_UpdateScriptController = NULL;
m_szUpdateScript = NULL;
}
Maintenance::~Maintenance()
{
m_mutexController.Lock();
if (m_UpdateScriptController)
{
m_UpdateScriptController->Detach();
m_mutexController.Unlock();
while (m_UpdateScriptController)
{
usleep(20*1000);
}
}
ClearMessages();
free(m_szUpdateScript);
}
void Maintenance::ResetUpdateController()
{
m_mutexController.Lock();
m_UpdateScriptController = NULL;
m_mutexController.Unlock();
}
void Maintenance::ClearMessages()
{
for (Log::Messages::iterator it = m_Messages.begin(); it != m_Messages.end(); it++)
{
delete *it;
}
m_Messages.clear();
}
Log::Messages* Maintenance::LockMessages()
{
m_mutexLog.Lock();
return &m_Messages;
}
void Maintenance::UnlockMessages()
{
m_mutexLog.Unlock();
}
void Maintenance::AppendMessage(Message::EKind eKind, time_t tTime, const char * szText)
{
if (tTime == 0)
{
tTime = time(NULL);
}
m_mutexLog.Lock();
Message* pMessage = new Message(++m_iIDMessageGen, eKind, tTime, szText);
m_Messages.push_back(pMessage);
m_mutexLog.Unlock();
}
bool Maintenance::StartUpdate(EBranch eBranch)
{
m_mutexController.Lock();
bool bAlreadyUpdating = m_UpdateScriptController != NULL;
m_mutexController.Unlock();
if (bAlreadyUpdating)
{
error("Could not start update-script: update-script is already running");
return false;
}
if (m_szUpdateScript)
{
free(m_szUpdateScript);
m_szUpdateScript = NULL;
}
if (!ReadPackageInfoStr("install-script", &m_szUpdateScript))
{
return false;
}
ClearMessages();
m_UpdateScriptController = new UpdateScriptController();
m_UpdateScriptController->SetScript(m_szUpdateScript);
m_UpdateScriptController->SetBranch(eBranch);
m_UpdateScriptController->SetAutoDestroy(true);
m_UpdateScriptController->Start();
return true;
}
bool Maintenance::CheckUpdates(char** pUpdateInfo)
{
char* szUpdateInfoScript;
if (!ReadPackageInfoStr("update-info-script", &szUpdateInfoScript))
{
return false;
}
*pUpdateInfo = NULL;
UpdateInfoScriptController::ExecuteScript(szUpdateInfoScript, pUpdateInfo);
free(szUpdateInfoScript);
return *pUpdateInfo;
}
bool Maintenance::ReadPackageInfoStr(const char* szKey, char** pValue)
{
char szFileName[1024];
snprintf(szFileName, 1024, "%s%cpackage-info.json", g_pOptions->GetWebDir(), PATH_SEPARATOR);
szFileName[1024-1] = '\0';
char* szPackageInfo;
int iPackageInfoLen;
if (!Util::LoadFileIntoBuffer(szFileName, &szPackageInfo, &iPackageInfoLen))
{
error("Could not load file %s", szFileName);
return false;
}
char szKeyStr[100];
snprintf(szKeyStr, 100, "\"%s\"", szKey);
szKeyStr[100-1] = '\0';
char* p = strstr(szPackageInfo, szKeyStr);
if (!p)
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
p = strchr(p + strlen(szKeyStr), '"');
if (!p)
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
p++;
char* pend = strchr(p, '"');
if (!pend)
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
int iLen = pend - p;
if (iLen >= sizeof(szFileName))
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
*pValue = (char*)malloc(iLen+1);
strncpy(*pValue, p, iLen);
(*pValue)[iLen] = '\0';
WebUtil::JsonDecode(*pValue);
free(szPackageInfo);
return true;
}
void UpdateScriptController::Run()
{
// the update-script should not be automatically terminated when the program quits
UnregisterRunningScript();
m_iPrefixLen = 0;
PrintMessage(Message::mkInfo, "Executing update-script %s", GetScript());
char szInfoName[1024];
snprintf(szInfoName, 1024, "update-script %s", Util::BaseFileName(GetScript()));
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
const char* szBranchName[] = { "STABLE", "TESTING", "DEVEL" };
SetEnvVar("NZBUP_BRANCH", szBranchName[m_eBranch]);
char szProcessID[20];
#ifdef WIN32
int pid = (int)GetCurrentProcessId();
#else
int pid = (int)getppid();
#endif
snprintf(szProcessID, 20, "%i", pid);
szProcessID[20-1] = '\0';
SetEnvVar("NZBUP_PROCESSID", szProcessID);
char szLogPrefix[100];
strncpy(szLogPrefix, Util::BaseFileName(GetScript()), 100);
szLogPrefix[100-1] = '\0';
if (char* ext = strrchr(szLogPrefix, '.')) *ext = '\0'; // strip file extension
SetLogPrefix(szLogPrefix);
m_iPrefixLen = strlen(szLogPrefix) + 2; // 2 = strlen(": ");
Execute();
g_pMaintenance->ResetUpdateController();
}
void UpdateScriptController::AddMessage(Message::EKind eKind, const char* szText)
{
szText = szText + m_iPrefixLen;
g_pMaintenance->AppendMessage(eKind, time(NULL), szText);
ScriptController::AddMessage(eKind, szText);
}
void UpdateInfoScriptController::ExecuteScript(const char* szScript, char** pUpdateInfo)
{
detail("Executing update-info-script %s", Util::BaseFileName(szScript));
UpdateInfoScriptController* pScriptController = new UpdateInfoScriptController();
pScriptController->SetScript(szScript);
char szInfoName[1024];
snprintf(szInfoName, 1024, "update-info-script %s", Util::BaseFileName(szScript));
szInfoName[1024-1] = '\0';
pScriptController->SetInfoName(szInfoName);
char szLogPrefix[1024];
strncpy(szLogPrefix, Util::BaseFileName(szScript), 1024);
szLogPrefix[1024-1] = '\0';
if (char* ext = strrchr(szLogPrefix, '.')) *ext = '\0'; // strip file extension
pScriptController->SetLogPrefix(szLogPrefix);
pScriptController->m_iPrefixLen = strlen(szLogPrefix) + 2; // 2 = strlen(": ");
pScriptController->Execute();
if (pScriptController->m_UpdateInfo.GetBuffer())
{
int iLen = strlen(pScriptController->m_UpdateInfo.GetBuffer());
*pUpdateInfo = (char*)malloc(iLen + 1);
strncpy(*pUpdateInfo, pScriptController->m_UpdateInfo.GetBuffer(), iLen);
(*pUpdateInfo)[iLen] = '\0';
}
delete pScriptController;
}
void UpdateInfoScriptController::AddMessage(Message::EKind eKind, const char* szText)
{
szText = szText + m_iPrefixLen;
if (!strncmp(szText, "[NZB] ", 6))
{
debug("Command %s detected", szText + 6);
if (!strncmp(szText + 6, "[UPDATEINFO]", 12))
{
m_UpdateInfo.Append(szText + 6 + 12);
}
else
{
error("Invalid command \"%s\" received from %s", szText, GetInfoName());
}
}
else
{
ScriptController::AddMessage(eKind, szText);
}
}
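ReadPackageInfoStr above does not use a JSON parser; it looks for the quoted key and then copies whatever stands between the next pair of double quotes. A self-contained sketch of the same scan under made-up input (unlike the real code it omits the WebUtil::JsonDecode step, so escape sequences would be left as-is):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Extract the string value of szKey from a JSON-like buffer by scanning for
// the quoted key and then for the next pair of double quotes.
static char* ExtractJsonStr(const char* szJson, const char* szKey)
{
    char szKeyStr[100];
    snprintf(szKeyStr, sizeof(szKeyStr), "\"%s\"", szKey);
    const char* p = strstr(szJson, szKeyStr);
    if (!p) return NULL;
    p = strchr(p + strlen(szKeyStr), '"');   // opening quote of the value
    if (!p) return NULL;
    p++;
    const char* pEnd = strchr(p, '"');       // closing quote of the value
    if (!pEnd) return NULL;
    int iLen = (int)(pEnd - p);
    char* szValue = (char*)malloc(iLen + 1);
    strncpy(szValue, p, iLen);
    szValue[iLen] = '\0';
    return szValue;
}

int main()
{
    // made-up package-info.json content, for illustration only
    const char* szJson = "{ \"install-script\" : \"update.sh\" }";
    char* szValue = ExtractJsonStr(szJson, "install-script");
    printf("install-script = %s\n", szValue ? szValue : "<not found>");
    free(szValue);
    return 0;
}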

daemon/main/Maintenance.h Normal file

@@ -0,0 +1,93 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef MAINTENANCE_H
#define MAINTENANCE_H
#include "Thread.h"
#include "Script.h"
#include "Log.h"
#include "Util.h"
class UpdateScriptController;
class Maintenance
{
private:
Log::Messages m_Messages;
Mutex m_mutexLog;
Mutex m_mutexController;
int m_iIDMessageGen;
UpdateScriptController* m_UpdateScriptController;
char* m_szUpdateScript;
bool ReadPackageInfoStr(const char* szKey, char** pValue);
public:
enum EBranch
{
brStable,
brTesting,
brDevel
};
Maintenance();
~Maintenance();
void ClearMessages();
void AppendMessage(Message::EKind eKind, time_t tTime, const char* szText);
Log::Messages* LockMessages();
void UnlockMessages();
bool StartUpdate(EBranch eBranch);
void ResetUpdateController();
bool CheckUpdates(char** pUpdateInfo);
};
class UpdateScriptController : public Thread, public ScriptController
{
private:
Maintenance::EBranch m_eBranch;
int m_iPrefixLen;
protected:
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
virtual void Run();
void SetBranch(Maintenance::EBranch eBranch) { m_eBranch = eBranch; }
};
class UpdateInfoScriptController : public ScriptController
{
private:
int m_iPrefixLen;
StringBuilder m_UpdateInfo;
protected:
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
static void ExecuteScript(const char* szScript, char** pUpdateInfo);
};
#endif
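The update-script itself receives the selected branch through the NZBUP_BRANCH environment variable; as UpdateScriptController::Run above shows, the value is picked from a name array that must stay in the same order as Maintenance::EBranch. A tiny self-contained sketch of that mapping (enum order and names copied from the code above):

#include <stdio.h>

enum EBranch { brStable, brTesting, brDevel };   // same order as in Maintenance.h

int main()
{
    const char* szBranchName[] = { "STABLE", "TESTING", "DEVEL" };
    EBranch eBranch = brTesting;                 // example selection
    // the real controller calls SetEnvVar("NZBUP_BRANCH", szBranchName[m_eBranch])
    printf("NZBUP_BRANCH=%s\n", szBranchName[eBranch]);
    return 0;
}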


File diff suppressed because it is too large


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -28,8 +28,11 @@
#define OPTIONS_H
#include <vector>
#include <list>
#include <time.h>
#include "Thread.h"
#include "Util.h"
class Options
{
@@ -54,15 +57,19 @@ public:
opClientRequestScanAsync,
opClientRequestDownloadPause,
opClientRequestDownloadUnpause,
opClientRequestDownload2Pause,
opClientRequestDownload2Unpause,
opClientRequestPostPause,
opClientRequestPostUnpause,
opClientRequestScanPause,
opClientRequestScanUnpause,
opClientRequestHistory,
opClientRequestDownloadUrl,
opClientRequestUrlQueue
opClientRequestDownloadUrl
};
enum EWriteLog
{
wlNone,
wlAppend,
wlReset,
wlRotate
};
enum EMessageTarget
{
@@ -77,11 +84,12 @@ public:
omColored,
omNCurses
};
enum ELoadPars
enum EParCheck
{
lpNone,
lpOne,
lpAll
pcAuto,
pcAlways,
pcForce,
pcManual
};
enum EParScan
{
@@ -89,16 +97,12 @@ public:
psFull,
psAuto
};
enum EScriptLogKind
enum EHealthCheck
{
slNone,
slDetail,
slInfo,
slWarning,
slError,
slDebug
hcPause,
hcDelete,
hcNone
};
enum EMatchMode
{
mmID = 1,
@@ -106,12 +110,6 @@ public:
mmRegEx
};
enum EDomain
{
dmServer = 1,
dmPostProcess
};
class OptEntry
{
private:
@@ -152,12 +150,18 @@ public:
private:
char* m_szName;
char* m_szDestDir;
bool m_bUnpack;
char* m_szPostScript;
NameList m_Aliases;
public:
Category(const char* szName, const char* szDestDir);
Category(const char* szName, const char* szDestDir, bool bUnpack, const char* szPostScript);
~Category();
const char* GetName() { return m_szName; }
const char* GetDestDir() { return m_szDestDir; }
bool GetUnpack() { return m_bUnpack; }
const char* GetPostScript() { return m_szPostScript; }
NameList* GetAliases() { return &m_Aliases; }
};
typedef std::vector<Category*> CategoriesBase;
@@ -166,7 +170,71 @@ public:
{
public:
~Categories();
Category* FindCategory(const char* szName);
Category* FindCategory(const char* szName, bool bSearchAliases);
};
class Script
{
private:
char* m_szName;
char* m_szLocation;
char* m_szDisplayName;
bool m_bPostScript;
bool m_bScanScript;
bool m_bQueueScript;
bool m_bSchedulerScript;
char* m_szQueueEvents;
public:
Script(const char* szName, const char* szLocation);
~Script();
const char* GetName() { return m_szName; }
const char* GetLocation() { return m_szLocation; }
void SetDisplayName(const char* szDisplayName);
const char* GetDisplayName() { return m_szDisplayName; }
bool GetPostScript() { return m_bPostScript; }
void SetPostScript(bool bPostScript) { m_bPostScript = bPostScript; }
bool GetScanScript() { return m_bScanScript; }
void SetScanScript(bool bScanScript) { m_bScanScript = bScanScript; }
bool GetQueueScript() { return m_bQueueScript; }
void SetQueueScript(bool bQueueScript) { m_bQueueScript = bQueueScript; }
bool GetSchedulerScript() { return m_bSchedulerScript; }
void SetSchedulerScript(bool bSchedulerScript) { m_bSchedulerScript = bSchedulerScript; }
void SetQueueEvents(const char* szQueueEvents);
const char* GetQueueEvents() { return m_szQueueEvents; }
};
typedef std::list<Script*> ScriptsBase;
class Scripts: public ScriptsBase
{
public:
~Scripts();
void Clear();
Script* Find(const char* szName);
};
class ConfigTemplate
{
private:
Script* m_pScript;
char* m_szTemplate;
friend class Options;
public:
ConfigTemplate(Script* pScript, const char* szTemplate);
~ConfigTemplate();
Script* GetScript() { return m_pScript; }
const char* GetTemplate() { return m_szTemplate; }
};
typedef std::vector<ConfigTemplate*> ConfigTemplatesBase;
class ConfigTemplates: public ConfigTemplatesBase
{
public:
~ConfigTemplates();
};
private:
@@ -174,6 +242,8 @@ private:
bool m_bConfigInitialized;
Mutex m_mutexOptEntries;
Categories m_Categories;
Scripts m_Scripts;
ConfigTemplates m_ConfigTemplates;
// Options
bool m_bConfigErrors;
@@ -185,6 +255,8 @@ private:
char* m_szQueueDir;
char* m_szNzbDir;
char* m_szWebDir;
char* m_szConfigTemplate;
char* m_szScriptDir;
EMessageTarget m_eInfoTarget;
EMessageTarget m_eWarningTarget;
EMessageTarget m_eErrorTarget;
@@ -192,43 +264,45 @@ private:
EMessageTarget m_eDetailTarget;
bool m_bDecode;
bool m_bCreateBrokenLog;
bool m_bResetLog;
int m_iConnectionTimeout;
int m_iArticleTimeout;
int m_iUrlTimeout;
int m_iTerminateTimeout;
bool m_bAppendNZBDir;
bool m_bAppendCategoryDir;
bool m_bContinuePartial;
bool m_bRenameBroken;
int m_iRetries;
int m_iRetryInterval;
bool m_bSaveQueue;
bool m_bDupeCheck;
char* m_szControlIP;
char* m_szControlUsername;
char* m_szControlPassword;
int m_iControlPort;
bool m_bSecureControl;
int m_iSecurePort;
char* m_szSecureCert;
char* m_szSecureKey;
char* m_szAuthorizedIP;
char* m_szLockFile;
char* m_szDaemonUserName;
char* m_szDaemonUsername;
EOutputMode m_eOutputMode;
bool m_bReloadQueue;
bool m_bReloadUrlQueue;
bool m_bReloadPostQueue;
int m_iUrlConnections;
int m_iLogBufferSize;
bool m_bCreateLog;
EWriteLog m_eWriteLog;
int m_iRotateLog;
char* m_szLogFile;
ELoadPars m_eLoadPars;
bool m_bParCheck;
EParCheck m_eParCheck;
bool m_bParRepair;
EParScan m_eParScan;
char* m_szPostProcess;
char* m_szPostConfigFilename;
char* m_szNZBProcess;
char* m_szNZBAddedProcess;
bool m_bStrictParName;
bool m_bParQuick;
bool m_bParRename;
int m_iParBuffer;
int m_iParThreads;
EHealthCheck m_eHealthCheck;
char* m_szPostScript;
char* m_szScriptOrder;
char* m_szScanScript;
char* m_szQueueScript;
bool m_bNoConfig;
int m_iUMask;
int m_iUpdateInterval;
@@ -236,22 +310,18 @@ private:
bool m_bCursesTime;
bool m_bCursesGroup;
bool m_bCrcCheck;
int m_iThreadLimit;
bool m_bDirectWrite;
int m_iWriteBufferSize;
int m_iWriteBuffer;
int m_iNzbDirInterval;
int m_iNzbDirFileAge;
bool m_bParCleanupQueue;
int m_iDiskSpace;
EScriptLogKind m_eProcessLogKind;
bool m_bAllowReProcess;
bool m_bTLS;
bool m_bDumpCore;
bool m_bParPauseQueue;
bool m_bPostPauseQueue;
bool m_bScriptPauseQueue;
bool m_bNzbCleanupDisk;
bool m_bDeleteCleanupDisk;
bool m_bMergeNzb;
int m_iParTimeLimit;
int m_iKeepHistory;
bool m_bAccurateRate;
@@ -260,6 +330,14 @@ private:
char* m_szUnrarCmd;
char* m_szSevenZipCmd;
bool m_bUnpackPauseQueue;
char* m_szExtCleanupDisk;
char* m_szParIgnoreExt;
int m_iFeedHistory;
bool m_bUrlForce;
int m_iTimeCorrection;
int m_iPropagationDelay;
int m_iArticleCache;
int m_iEventInterval;
// Parsed command-line parameters
bool m_bServerMode;
@@ -287,22 +365,25 @@ private:
// Current state
bool m_bPauseDownload;
bool m_bPauseDownload2;
bool m_bPausePostProcess;
bool m_bPauseScan;
bool m_bTempPauseDownload;
int m_iDownloadRate;
EClientOperation m_eClientOperation;
time_t m_tResumeTime;
int m_iLocalTimeOffset;
void InitDefault();
void InitOptFile();
void InitCommandLine(int argc, char* argv[]);
void InitOptions();
void InitPostConfig();
void InitFileArg(int argc, char* argv[]);
void InitServers();
void InitCategories();
void InitScheduler();
void InitFeeds();
void InitScripts();
void InitConfigTemplates();
void CheckOptions();
void PrintUsage(char* com);
void Dump();
@@ -313,24 +394,33 @@ private:
const char* GetOption(const char* optname);
void SetOption(const char* optname, const char* value);
bool SetOptionString(const char* option);
bool ValidateOptionName(const char* optname);
bool SplitOptionString(const char* option, char** pOptName, char** pOptValue);
bool ValidateOptionName(const char* optname, const char* optvalue);
void LoadConfigFile();
void CheckDir(char** dir, const char* szOptionName, bool bAllowEmpty, bool bCreate);
void CheckDir(char** dir, const char* szOptionName, const char* szParentDir,
bool bAllowEmpty, bool bCreate);
void ParseFileIDList(int argc, char* argv[], int optind);
void ParseFileNameList(int argc, char* argv[], int optind);
bool ParseTime(const char** pTime, int* pHours, int* pMinutes);
bool ParseTime(const char* szTime, int* pHours, int* pMinutes);
bool ParseWeekDays(const char* szWeekDays, int* pWeekDaysBits);
void ConfigError(const char* msg, ...);
void ConfigWarn(const char* msg, ...);
void LocateOptionSrcPos(const char *szOptionName);
void ConvertOldOptionName(char *szOption, int iBufLen);
void ConvertOldOption(char *szOption, int iOptionBufLen, char *szValue, int iValueBufLen);
static bool CompareScripts(Script* pScript1, Script* pScript2);
void LoadScriptDir(Scripts* pScripts, const char* szDirectory, bool bIsSubDir);
void BuildScriptDisplayNames(Scripts* pScripts);
void LoadScripts(Scripts* pScripts);
public:
Options(int argc, char* argv[]);
~Options();
bool LoadConfig(EDomain eDomain, OptEntries* pOptEntries);
bool SaveConfig(EDomain eDomain, OptEntries* pOptEntries);
bool LoadConfig(OptEntries* pOptEntries);
bool SaveConfig(OptEntries* pOptEntries);
bool LoadConfigTemplates(ConfigTemplates* pConfigTemplates);
Scripts* GetScripts() { return &m_Scripts; }
ConfigTemplates* GetConfigTemplates() { return &m_ConfigTemplates; }
// Options
OptEntries* LockOptEntries();
@@ -342,72 +432,72 @@ public:
const char* GetQueueDir() { return m_szQueueDir; }
const char* GetNzbDir() { return m_szNzbDir; }
const char* GetWebDir() { return m_szWebDir; }
const char* GetConfigTemplate() { return m_szConfigTemplate; }
const char* GetScriptDir() { return m_szScriptDir; }
bool GetCreateBrokenLog() const { return m_bCreateBrokenLog; }
bool GetResetLog() const { return m_bResetLog; }
EMessageTarget GetInfoTarget() const { return m_eInfoTarget; }
EMessageTarget GetWarningTarget() const { return m_eWarningTarget; }
EMessageTarget GetErrorTarget() const { return m_eErrorTarget; }
EMessageTarget GetDebugTarget() const { return m_eDebugTarget; }
EMessageTarget GetDetailTarget() const { return m_eDetailTarget; }
int GetConnectionTimeout() { return m_iConnectionTimeout; }
int GetArticleTimeout() { return m_iArticleTimeout; }
int GetUrlTimeout() { return m_iUrlTimeout; }
int GetTerminateTimeout() { return m_iTerminateTimeout; }
bool GetDecode() { return m_bDecode; };
bool GetAppendNZBDir() { return m_bAppendNZBDir; }
bool GetAppendCategoryDir() { return m_bAppendCategoryDir; }
bool GetContinuePartial() { return m_bContinuePartial; }
bool GetRenameBroken() { return m_bRenameBroken; }
int GetRetries() { return m_iRetries; }
int GetRetryInterval() { return m_iRetryInterval; }
bool GetSaveQueue() { return m_bSaveQueue; }
bool GetDupeCheck() { return m_bDupeCheck; }
const char* GetControlIP() { return m_szControlIP; }
const char* GetControlIP();
const char* GetControlUsername() { return m_szControlUsername; }
const char* GetControlPassword() { return m_szControlPassword; }
int GetControlPort() { return m_iControlPort; }
bool GetSecureControl() { return m_bSecureControl; }
int GetSecurePort() { return m_iSecurePort; }
const char* GetSecureCert() { return m_szSecureCert; }
const char* GetSecureKey() { return m_szSecureKey; }
const char* GetAuthorizedIP() { return m_szAuthorizedIP; }
const char* GetLockFile() { return m_szLockFile; }
const char* GetDaemonUserName() { return m_szDaemonUserName; }
const char* GetDaemonUsername() { return m_szDaemonUsername; }
EOutputMode GetOutputMode() { return m_eOutputMode; }
bool GetReloadQueue() { return m_bReloadQueue; }
bool GetReloadUrlQueue() { return m_bReloadUrlQueue; }
bool GetReloadPostQueue() { return m_bReloadPostQueue; }
int GetUrlConnections() { return m_iUrlConnections; }
int GetLogBufferSize() { return m_iLogBufferSize; }
bool GetCreateLog() { return m_bCreateLog; }
EWriteLog GetWriteLog() { return m_eWriteLog; }
const char* GetLogFile() { return m_szLogFile; }
ELoadPars GetLoadPars() { return m_eLoadPars; }
bool GetParCheck() { return m_bParCheck; }
int GetRotateLog() { return m_iRotateLog; }
EParCheck GetParCheck() { return m_eParCheck; }
bool GetParRepair() { return m_bParRepair; }
EParScan GetParScan() { return m_eParScan; }
const char* GetPostProcess() { return m_szPostProcess; }
const char* GetPostConfigFilename() { return m_szPostConfigFilename; }
const char* GetNZBProcess() { return m_szNZBProcess; }
const char* GetNZBAddedProcess() { return m_szNZBAddedProcess; }
bool GetStrictParName() { return m_bStrictParName; }
bool GetParQuick() { return m_bParQuick; }
bool GetParRename() { return m_bParRename; }
int GetParBuffer() { return m_iParBuffer; }
int GetParThreads() { return m_iParThreads; }
EHealthCheck GetHealthCheck() { return m_eHealthCheck; }
const char* GetScriptOrder() { return m_szScriptOrder; }
const char* GetPostScript() { return m_szPostScript; }
const char* GetScanScript() { return m_szScanScript; }
const char* GetQueueScript() { return m_szQueueScript; }
int GetUMask() { return m_iUMask; }
int GetUpdateInterval() {return m_iUpdateInterval; }
bool GetCursesNZBName() { return m_bCursesNZBName; }
bool GetCursesTime() { return m_bCursesTime; }
bool GetCursesGroup() { return m_bCursesGroup; }
bool GetCrcCheck() { return m_bCrcCheck; }
int GetThreadLimit() { return m_iThreadLimit; }
bool GetDirectWrite() { return m_bDirectWrite; }
int GetWriteBufferSize() { return m_iWriteBufferSize; }
int GetWriteBuffer() { return m_iWriteBuffer; }
int GetNzbDirInterval() { return m_iNzbDirInterval; }
int GetNzbDirFileAge() { return m_iNzbDirFileAge; }
bool GetParCleanupQueue() { return m_bParCleanupQueue; }
int GetDiskSpace() { return m_iDiskSpace; }
EScriptLogKind GetProcessLogKind() { return m_eProcessLogKind; }
bool GetAllowReProcess() { return m_bAllowReProcess; }
bool GetTLS() { return m_bTLS; }
bool GetDumpCore() { return m_bDumpCore; }
bool GetParPauseQueue() { return m_bParPauseQueue; }
bool GetPostPauseQueue() { return m_bPostPauseQueue; }
bool GetScriptPauseQueue() { return m_bScriptPauseQueue; }
bool GetNzbCleanupDisk() { return m_bNzbCleanupDisk; }
bool GetDeleteCleanupDisk() { return m_bDeleteCleanupDisk; }
bool GetMergeNzb() { return m_bMergeNzb; }
int GetParTimeLimit() { return m_iParTimeLimit; }
int GetKeepHistory() { return m_iKeepHistory; }
bool GetAccurateRate() { return m_bAccurateRate; }
@@ -416,8 +506,16 @@ public:
const char* GetUnrarCmd() { return m_szUnrarCmd; }
const char* GetSevenZipCmd() { return m_szSevenZipCmd; }
bool GetUnpackPauseQueue() { return m_bUnpackPauseQueue; }
const char* GetExtCleanupDisk() { return m_szExtCleanupDisk; }
const char* GetParIgnoreExt() { return m_szParIgnoreExt; }
int GetFeedHistory() { return m_iFeedHistory; }
bool GetUrlForce() { return m_bUrlForce; }
int GetTimeCorrection() { return m_iTimeCorrection; }
int GetPropagationDelay() { return m_iPropagationDelay; }
int GetArticleCache() { return m_iArticleCache; }
int GetEventInterval() { return m_iEventInterval; }
Category* FindCategory(const char* szName) { return m_Categories.FindCategory(szName); }
Category* FindCategory(const char* szName, bool bSearchAliases) { return m_Categories.FindCategory(szName, bSearchAliases); }
// Parsed command-line parameters
bool GetServerMode() { return m_bServerMode; }
@@ -446,16 +544,18 @@ public:
// Current state
void SetPauseDownload(bool bPauseDownload) { m_bPauseDownload = bPauseDownload; }
bool GetPauseDownload() const { return m_bPauseDownload; }
void SetPauseDownload2(bool bPauseDownload2) { m_bPauseDownload2 = bPauseDownload2; }
bool GetPauseDownload2() const { return m_bPauseDownload2; }
void SetPausePostProcess(bool bPausePostProcess) { m_bPausePostProcess = bPausePostProcess; }
bool GetPausePostProcess() const { return m_bPausePostProcess; }
void SetPauseScan(bool bPauseScan) { m_bPauseScan = bPauseScan; }
bool GetPauseScan() const { return m_bPauseScan; }
void SetTempPauseDownload(bool bTempPauseDownload) { m_bTempPauseDownload = bTempPauseDownload; }
bool GetTempPauseDownload() const { return m_bTempPauseDownload; }
void SetDownloadRate(int iRate) { m_iDownloadRate = iRate; }
int GetDownloadRate() const { return m_iDownloadRate; }
void SetResumeTime(time_t tResumeTime) { m_tResumeTime = tResumeTime; }
time_t GetResumeTime() const { return m_tResumeTime; }
void SetLocalTimeOffset(int iLocalTimeOffset) { m_iLocalTimeOffset = iLocalTimeOffset; }
int GetLocalTimeOffset() { return m_iLocalTimeOffset; }
};
#endif
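The boolean ParCheck option above has been replaced by the four-value EParCheck enum, so code that used to test a flag now has to branch on a mode. A hypothetical, self-contained sketch of such a decision (the mode semantics below are only a plausible reading of the names; the real behaviour is implemented in the par-check code, not in Options.h):

#include <stdio.h>

enum EParCheck { pcAuto, pcAlways, pcForce, pcManual };   // values as declared above

// Hypothetical helper: should a par-verification be started for this download?
static bool NeedParCheck(EParCheck eMode, bool bDamageSuspected)
{
    switch (eMode)
    {
        case pcForce:
        case pcAlways:
            return true;                  // verify every download
        case pcAuto:
            return bDamageSuspected;      // verify only when something looks broken
        case pcManual:
        default:
            return false;                 // leave the decision to the user
    }
}

int main()
{
    printf("auto/damaged:   %i\n", (int)NeedParCheck(pcAuto, true));
    printf("manual/damaged: %i\n", (int)NeedParCheck(pcManual, true));
    return 0;
}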

daemon/main/Scheduler.cpp Normal file

@@ -0,0 +1,483 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2008-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#else
#include <unistd.h>
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "nzbget.h"
#include "Scheduler.h"
#include "Options.h"
#include "Log.h"
#include "NewsServer.h"
#include "ServerPool.h"
#include "FeedInfo.h"
#include "FeedCoordinator.h"
#include "QueueScript.h"
extern Options* g_pOptions;
extern ServerPool* g_pServerPool;
extern FeedCoordinator* g_pFeedCoordinator;
class SchedulerScriptController : public Thread, public NZBScriptController
{
private:
char* m_szScript;
bool m_bExternalProcess;
int m_iTaskID;
void PrepareParams(const char* szScriptName);
void ExecuteExternalProcess();
protected:
virtual void ExecuteScript(Options::Script* pScript);
public:
virtual ~SchedulerScriptController();
virtual void Run();
static void StartScript(const char* szParam, bool bExternalProcess, int iTaskID);
};
Scheduler::Task::Task(int iID, int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand, const char* szParam)
{
m_iID = iID;
m_iHours = iHours;
m_iMinutes = iMinutes;
m_iWeekDaysBits = iWeekDaysBits;
m_eCommand = eCommand;
m_szParam = szParam ? strdup(szParam) : NULL;
m_tLastExecuted = 0;
}
Scheduler::Task::~Task()
{
free(m_szParam);
}
Scheduler::Scheduler()
{
debug("Creating Scheduler");
m_tLastCheck = 0;
m_TaskList.clear();
}
Scheduler::~Scheduler()
{
debug("Destroying Scheduler");
for (TaskList::iterator it = m_TaskList.begin(); it != m_TaskList.end(); it++)
{
delete *it;
}
}
void Scheduler::AddTask(Task* pTask)
{
m_mutexTaskList.Lock();
m_TaskList.push_back(pTask);
m_mutexTaskList.Unlock();
}
bool Scheduler::CompareTasks(Scheduler::Task* pTask1, Scheduler::Task* pTask2)
{
return (pTask1->m_iHours < pTask2->m_iHours) ||
((pTask1->m_iHours == pTask2->m_iHours) && (pTask1->m_iMinutes < pTask2->m_iMinutes));
}
void Scheduler::FirstCheck()
{
m_mutexTaskList.Lock();
m_TaskList.sort(CompareTasks);
m_mutexTaskList.Unlock();
// check all tasks for the last week
CheckTasks();
}
void Scheduler::IntervalCheck()
{
m_bExecuteProcess = true;
CheckTasks();
CheckScheduledResume();
}
void Scheduler::CheckTasks()
{
PrepareLog();
m_mutexTaskList.Lock();
time_t tCurrent = time(NULL);
if (!m_TaskList.empty())
{
// Detect large step changes of system time
time_t tDiff = tCurrent - m_tLastCheck;
if (tDiff > 60*90 || tDiff < 0)
{
debug("Reset scheduled tasks (detected clock change greater than 90 minutes or negative)");
// check all tasks for the last week
m_tLastCheck = tCurrent - 60*60*24*7;
m_bExecuteProcess = false;
for (TaskList::iterator it = m_TaskList.begin(); it != m_TaskList.end(); it++)
{
Task* pTask = *it;
pTask->m_tLastExecuted = 0;
}
}
time_t tLocalCurrent = tCurrent + g_pOptions->GetLocalTimeOffset();
time_t tLocalLastCheck = m_tLastCheck + g_pOptions->GetLocalTimeOffset();
tm tmCurrent;
gmtime_r(&tLocalCurrent, &tmCurrent);
tm tmLastCheck;
gmtime_r(&tLocalLastCheck, &tmLastCheck);
tm tmLoop;
memcpy(&tmLoop, &tmLastCheck, sizeof(tmLastCheck));
tmLoop.tm_hour = tmCurrent.tm_hour;
tmLoop.tm_min = tmCurrent.tm_min;
tmLoop.tm_sec = tmCurrent.tm_sec;
time_t tLoop = Util::Timegm(&tmLoop);
while (tLoop <= tLocalCurrent)
{
for (TaskList::iterator it = m_TaskList.begin(); it != m_TaskList.end(); it++)
{
Task* pTask = *it;
if (pTask->m_tLastExecuted != tLoop)
{
tm tmAppoint;
memcpy(&tmAppoint, &tmLoop, sizeof(tmLoop));
tmAppoint.tm_hour = pTask->m_iHours;
tmAppoint.tm_min = pTask->m_iMinutes;
tmAppoint.tm_sec = 0;
time_t tAppoint = Util::Timegm(&tmAppoint);
int iWeekDay = tmAppoint.tm_wday;
if (iWeekDay == 0)
{
iWeekDay = 7;
}
bool bWeekDayOK = pTask->m_iWeekDaysBits == 0 || (pTask->m_iWeekDaysBits & (1 << (iWeekDay - 1)));
bool bDoTask = bWeekDayOK && tLocalLastCheck < tAppoint && tAppoint <= tLocalCurrent;
//debug("TEMP: 1) m_tLastCheck=%i, tLocalCurrent=%i, tLoop=%i, tAppoint=%i, bWeekDayOK=%i, bDoTask=%i", m_tLastCheck, tLocalCurrent, tLoop, tAppoint, (int)bWeekDayOK, (int)bDoTask);
if (bDoTask)
{
ExecuteTask(pTask);
pTask->m_tLastExecuted = tLoop;
}
}
}
tLoop += 60*60*24; // inc day
gmtime_r(&tLoop, &tmLoop);
}
}
m_tLastCheck = tCurrent;
m_mutexTaskList.Unlock();
PrintLog();
}
void Scheduler::ExecuteTask(Task* pTask)
{
const char* szCommandName[] = { "Pause", "Unpause", "Pause Post-processing", "Unpause Post-processing",
"Set download rate", "Execute process", "Execute script",
"Pause Scan", "Unpause Scan", "Enable Server", "Disable Server", "Fetch Feed" };
debug("Executing scheduled command: %s", szCommandName[pTask->m_eCommand]);
switch (pTask->m_eCommand)
{
case scDownloadRate:
if (!Util::EmptyStr(pTask->m_szParam))
{
g_pOptions->SetDownloadRate(atoi(pTask->m_szParam) * 1024);
m_bDownloadRateChanged = true;
}
break;
case scPauseDownload:
case scUnpauseDownload:
g_pOptions->SetPauseDownload(pTask->m_eCommand == scPauseDownload);
m_bPauseDownloadChanged = true;
break;
case scPausePostProcess:
case scUnpausePostProcess:
g_pOptions->SetPausePostProcess(pTask->m_eCommand == scPausePostProcess);
m_bPausePostProcessChanged = true;
break;
case scPauseScan:
case scUnpauseScan:
g_pOptions->SetPauseScan(pTask->m_eCommand == scPauseScan);
m_bPauseScanChanged = true;
break;
case scScript:
case scProcess:
if (m_bExecuteProcess)
{
SchedulerScriptController::StartScript(pTask->m_szParam, pTask->m_eCommand == scProcess, pTask->m_iID);
}
break;
case scActivateServer:
case scDeactivateServer:
EditServer(pTask->m_eCommand == scActivateServer, pTask->m_szParam);
break;
case scFetchFeed:
if (m_bExecuteProcess)
{
FetchFeed(pTask->m_szParam);
break;
}
}
}
void Scheduler::PrepareLog()
{
m_bDownloadRateChanged = false;
m_bPauseDownloadChanged = false;
m_bPausePostProcessChanged = false;
m_bPauseScanChanged = false;
m_bServerChanged = false;
}
void Scheduler::PrintLog()
{
if (m_bDownloadRateChanged)
{
info("Scheduler: setting download rate to %i KB/s", g_pOptions->GetDownloadRate() / 1024);
}
if (m_bPauseDownloadChanged)
{
info("Scheduler: %s download", g_pOptions->GetPauseDownload() ? "pausing" : "unpausing");
}
if (m_bPausePostProcessChanged)
{
info("Scheduler: %s post-processing", g_pOptions->GetPausePostProcess() ? "pausing" : "unpausing");
}
if (m_bPauseScanChanged)
{
info("Scheduler: %s scan", g_pOptions->GetPauseScan() ? "pausing" : "unpausing");
}
if (m_bServerChanged)
{
int index = 0;
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++, index++)
{
NewsServer* pServer = *it;
if (pServer->GetActive() != m_ServerStatusList[index])
{
info("Scheduler: %s %s", pServer->GetActive() ? "activating" : "deactivating", pServer->GetName());
}
}
g_pServerPool->Changed();
}
}
void Scheduler::EditServer(bool bActive, const char* szServerList)
{
Tokenizer tok(szServerList, ",;");
while (const char* szServer = tok.Next())
{
int iID = atoi(szServer);
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++)
{
NewsServer* pServer = *it;
if ((iID > 0 && pServer->GetID() == iID) ||
!strcasecmp(pServer->GetName(), szServer))
{
if (!m_bServerChanged)
{
// store old server status for logging
m_ServerStatusList.clear();
m_ServerStatusList.reserve(g_pServerPool->GetServers()->size());
for (Servers::iterator it2 = g_pServerPool->GetServers()->begin(); it2 != g_pServerPool->GetServers()->end(); it2++)
{
NewsServer* pServer2 = *it2;
m_ServerStatusList.push_back(pServer2->GetActive());
}
}
m_bServerChanged = true;
pServer->SetActive(bActive);
break;
}
}
}
}
void Scheduler::FetchFeed(const char* szFeedList)
{
Tokenizer tok(szFeedList, ",;");
while (const char* szFeed = tok.Next())
{
int iID = atoi(szFeed);
for (Feeds::iterator it = g_pFeedCoordinator->GetFeeds()->begin(); it != g_pFeedCoordinator->GetFeeds()->end(); it++)
{
FeedInfo* pFeed = *it;
if (pFeed->GetID() == iID ||
!strcasecmp(pFeed->GetName(), szFeed) ||
!strcasecmp("0", szFeed))
{
g_pFeedCoordinator->FetchFeed(!strcasecmp("0", szFeed) ? 0 : pFeed->GetID());
break;
}
}
}
}
void Scheduler::CheckScheduledResume()
{
time_t tResumeTime = g_pOptions->GetResumeTime();
time_t tCurrentTime = time(NULL);
if (tResumeTime > 0 && tCurrentTime >= tResumeTime)
{
info("Autoresume");
g_pOptions->SetResumeTime(0);
g_pOptions->SetPauseDownload(false);
g_pOptions->SetPausePostProcess(false);
g_pOptions->SetPauseScan(false);
}
}
SchedulerScriptController::~SchedulerScriptController()
{
free(m_szScript);
}
void SchedulerScriptController::StartScript(const char* szParam, bool bExternalProcess, int iTaskID)
{
char** argv = NULL;
if (bExternalProcess && !Util::SplitCommandLine(szParam, &argv))
{
error("Could not execute scheduled process-script, failed to parse command line: %s", szParam);
return;
}
SchedulerScriptController* pScriptController = new SchedulerScriptController();
pScriptController->m_bExternalProcess = bExternalProcess;
pScriptController->m_szScript = strdup(szParam);
pScriptController->m_iTaskID = iTaskID;
if (bExternalProcess)
{
pScriptController->SetScript(argv[0]);
pScriptController->SetArgs((const char**)argv, true);
}
pScriptController->SetAutoDestroy(true);
pScriptController->Start();
}
void SchedulerScriptController::Run()
{
if (m_bExternalProcess)
{
ExecuteExternalProcess();
}
else
{
ExecuteScriptList(m_szScript);
}
}
void SchedulerScriptController::ExecuteScript(Options::Script* pScript)
{
if (!pScript->GetSchedulerScript())
{
return;
}
PrintMessage(Message::mkInfo, "Executing scheduler-script %s for Task%i", pScript->GetName(), m_iTaskID);
SetScript(pScript->GetLocation());
SetArgs(NULL, false);
char szInfoName[1024];
snprintf(szInfoName, 1024, "scheduler-script %s for Task%i", pScript->GetName(), m_iTaskID);
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
SetLogPrefix(pScript->GetDisplayName());
PrepareParams(pScript->GetName());
Execute();
SetLogPrefix(NULL);
}
void SchedulerScriptController::PrepareParams(const char* szScriptName)
{
ResetEnv();
SetIntEnvVar("NZBSP_TASKID", m_iTaskID);
PrepareEnvScript(NULL, szScriptName);
}
void SchedulerScriptController::ExecuteExternalProcess()
{
info("Executing scheduled process-script %s for Task%i", Util::BaseFileName(GetScript()), m_iTaskID);
char szInfoName[1024];
snprintf(szInfoName, 1024, "scheduled process-script %s for Task%i", Util::BaseFileName(GetScript()), m_iTaskID);
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
char szLogPrefix[1024];
strncpy(szLogPrefix, Util::BaseFileName(GetScript()), 1024);
szLogPrefix[1024-1] = '\0';
if (char* ext = strrchr(szLogPrefix, '.')) *ext = '\0'; // strip file extension
SetLogPrefix(szLogPrefix);
Execute();
}
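CheckTasks above treats m_iWeekDaysBits as a 7-bit mask with bit 0 standing for Monday and bit 6 for Sunday, converting tm_wday (where 0 means Sunday) to the 1..7 range first; a mask of 0 means the task runs every day. A self-contained sketch of the same check:

#include <stdio.h>

// Bit 0 = Monday ... bit 6 = Sunday; a mask of 0 means "run every day".
static bool WeekDayOK(int iWeekDaysBits, int iTmWday)
{
    int iWeekDay = iTmWday == 0 ? 7 : iTmWday;   // tm_wday: 0 = Sunday -> 7
    return iWeekDaysBits == 0 || (iWeekDaysBits & (1 << (iWeekDay - 1)));
}

int main()
{
    int iWorkDays = (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4);   // Mon-Fri
    printf("Wednesday (tm_wday=3): %i\n", (int)WeekDayOK(iWorkDays, 3));
    printf("Sunday    (tm_wday=0): %i\n", (int)WeekDayOK(iWorkDays, 0));
    return 0;
}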


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2008-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2008-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -27,6 +27,7 @@
#define SCHEDULER_H
#include <list>
#include <vector>
#include <time.h>
#include "Thread.h"
@@ -38,26 +39,32 @@ public:
{
scPauseDownload,
scUnpauseDownload,
scPausePostProcess,
scUnpausePostProcess,
scDownloadRate,
scScript,
scProcess,
scPauseScan,
scUnpauseScan
scUnpauseScan,
scActivateServer,
scDeactivateServer,
scFetchFeed
};
class Task
{
private:
int m_iID;
int m_iHours;
int m_iMinutes;
int m_iWeekDaysBits;
ECommand m_eCommand;
int m_iDownloadRate;
char* m_szProcess;
char* m_szParam;
time_t m_tLastExecuted;
public:
Task(int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand,
int iDownloadRate, const char* szProcess);
Task(int iID, int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand,
const char* szParam);
~Task();
friend class Scheduler;
};
@@ -65,21 +72,26 @@ public:
private:
typedef std::list<Task*> TaskList;
typedef std::vector<bool> ServerStatusList;
TaskList m_TaskList;
Mutex m_mutexTaskList;
time_t m_tLastCheck;
bool m_bDetectClockChanges;
bool m_bDownloadRateChanged;
bool m_bExecuteProcess;
int m_iDownloadRate;
bool m_bPauseDownloadChanged;
bool m_bPauseDownload;
bool m_bPausePostProcessChanged;
bool m_bPauseScanChanged;
bool m_bPauseScan;
bool m_bServerChanged;
ServerStatusList m_ServerStatusList;
void ExecuteTask(Task* pTask);
void CheckTasks();
static bool CompareTasks(Scheduler::Task* pTask1, Scheduler::Task* pTask2);
void PrepareLog();
void PrintLog();
void EditServer(bool bActive, const char* szServerList);
void FetchFeed(const char* szFeedList);
void CheckScheduledResume();
public:
Scheduler();
@@ -87,12 +99,6 @@ public:
void AddTask(Task* pTask);
void FirstCheck();
void IntervalCheck();
bool GetDownloadRateChanged() { return m_bDownloadRateChanged; }
int GetDownloadRate() { return m_iDownloadRate; }
bool GetPauseDownloadChanged() { return m_bPauseDownloadChanged; }
bool GetPauseDownload() { return m_bPauseDownload; }
bool GetPauseScanChanged() { return m_bPauseScanChanged; }
bool GetPauseScan() { return m_bPauseScan; }
};
#endif
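Before the first interval check the scheduler sorts its task list by time of day, hours first and minutes second, as Scheduler::CompareTasks and FirstCheck in Scheduler.cpp above show. A self-contained sketch of that ordering on a plain std::list (the Task struct here is a minimal stand-in, not the real class):

#include <stdio.h>
#include <list>

struct Task                 // stand-in holding only the fields the comparison needs
{
    int m_iHours;
    int m_iMinutes;
};

static bool CompareTasks(const Task& Task1, const Task& Task2)
{
    return (Task1.m_iHours < Task2.m_iHours) ||
        ((Task1.m_iHours == Task2.m_iHours) && (Task1.m_iMinutes < Task2.m_iMinutes));
}

int main()
{
    std::list<Task> TaskList;
    Task t1 = { 23, 30 }; TaskList.push_back(t1);
    Task t2 = {  6, 45 }; TaskList.push_back(t2);
    Task t3 = {  6,  0 }; TaskList.push_back(t3);
    TaskList.sort(CompareTasks);          // same comparator shape as in FirstCheck
    for (std::list<Task>::iterator it = TaskList.begin(); it != TaskList.end(); it++)
    {
        printf("%02i:%02i\n", it->m_iHours, it->m_iMinutes);
    }
    return 0;
}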

daemon/main/StackTrace.cpp Executable file

@@ -0,0 +1,321 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <dbghelp.h>
#else
#include <unistd.h>
#include <sys/resource.h>
#include <signal.h>
#endif
#ifdef HAVE_SYS_PRCTL_H
#include <sys/prctl.h>
#endif
#ifdef HAVE_BACKTRACE
#include <execinfo.h>
#endif
#include "nzbget.h"
#include "Log.h"
#include "Options.h"
#include "StackTrace.h"
extern Options* g_pOptions;
extern void ExitProc();
#ifdef WIN32
#ifdef DEBUG
void PrintBacktrace(PCONTEXT pContext)
{
HANDLE hProcess = GetCurrentProcess();
HANDLE hThread = GetCurrentThread();
char szAppDir[MAX_PATH + 1];
GetModuleFileName(NULL, szAppDir, sizeof(szAppDir));
char* end = strrchr(szAppDir, PATH_SEPARATOR);
if (end) *end = '\0';
SymSetOptions(SymGetOptions() | SYMOPT_LOAD_LINES | SYMOPT_FAIL_CRITICAL_ERRORS);
if (!SymInitialize(hProcess, szAppDir, TRUE))
{
warn("Could not obtain detailed exception information: SymInitialize failed");
return;
}
const int MAX_NAMELEN = 1024;
IMAGEHLP_SYMBOL64* pSym = (IMAGEHLP_SYMBOL64 *) malloc(sizeof(IMAGEHLP_SYMBOL64) + MAX_NAMELEN);
memset(pSym, 0, sizeof(IMAGEHLP_SYMBOL64) + MAX_NAMELEN);
pSym->SizeOfStruct = sizeof(IMAGEHLP_SYMBOL64);
pSym->MaxNameLength = MAX_NAMELEN;
IMAGEHLP_LINE64 ilLine;
memset(&ilLine, 0, sizeof(ilLine));
ilLine.SizeOfStruct = sizeof(ilLine);
STACKFRAME64 sfStackFrame;
memset(&sfStackFrame, 0, sizeof(sfStackFrame));
DWORD imageType;
#ifdef _M_IX86
imageType = IMAGE_FILE_MACHINE_I386;
sfStackFrame.AddrPC.Offset = pContext->Eip;
sfStackFrame.AddrPC.Mode = AddrModeFlat;
sfStackFrame.AddrFrame.Offset = pContext->Ebp;
sfStackFrame.AddrFrame.Mode = AddrModeFlat;
sfStackFrame.AddrStack.Offset = pContext->Esp;
sfStackFrame.AddrStack.Mode = AddrModeFlat;
#elif _M_X64
imageType = IMAGE_FILE_MACHINE_AMD64;
sfStackFrame.AddrPC.Offset = pContext->Rip;
sfStackFrame.AddrPC.Mode = AddrModeFlat;
sfStackFrame.AddrFrame.Offset = pContext->Rsp;
sfStackFrame.AddrFrame.Mode = AddrModeFlat;
sfStackFrame.AddrStack.Offset = pContext->Rsp;
sfStackFrame.AddrStack.Mode = AddrModeFlat;
#else
warn("Could not obtain detailed exception information: platform not supported");
return;
#endif
for (int frameNum = 0; ; frameNum++)
{
if (frameNum > 1000)
{
warn("Endless stack, abort tracing");
return;
}
if (!StackWalk64(imageType, hProcess, hThread, &sfStackFrame, pContext, NULL, SymFunctionTableAccess64, SymGetModuleBase64, NULL))
{
warn("Could not obtain detailed exception information: StackWalk64 failed");
return;
}
DWORD64 dwAddr = sfStackFrame.AddrPC.Offset;
char szSymName[1024];
char szSrcFileName[1024];
int iLineNumber = 0;
DWORD64 dwSymbolDisplacement;
if (SymGetSymFromAddr64(hProcess, dwAddr, &dwSymbolDisplacement, pSym))
{
UnDecorateSymbolName(pSym->Name, szSymName, sizeof(szSymName), UNDNAME_COMPLETE);
szSymName[sizeof(szSymName) - 1] = '\0';
}
else
{
strncpy(szSymName, "<symbol not available>", sizeof(szSymName));
}
DWORD dwLineDisplacement;
if (SymGetLineFromAddr64(hProcess, dwAddr, &dwLineDisplacement, &ilLine))
{
iLineNumber = ilLine.LineNumber;
char* szUseFileName = ilLine.FileName;
char* szRoot = strstr(szUseFileName, "\\daemon\\");
if (szRoot)
{
szUseFileName = szRoot;
}
strncpy(szSrcFileName, szUseFileName, sizeof(szSrcFileName));
szSrcFileName[sizeof(szSrcFileName) - 1] = '\0';
}
else
{
strncpy(szSrcFileName, "<filename not available>", sizeof(szSymName));
}
info("%s (%i) : %s", szSrcFileName, iLineNumber, szSymName);
if (sfStackFrame.AddrReturn.Offset == 0)
{
break;
}
}
}
#endif
LONG __stdcall ExceptionFilter(EXCEPTION_POINTERS* pExPtrs)
{
error("Unhandled Exception: code: 0x%8.8X, flags: %d, address: 0x%8.8X",
pExPtrs->ExceptionRecord->ExceptionCode,
pExPtrs->ExceptionRecord->ExceptionFlags,
pExPtrs->ExceptionRecord->ExceptionAddress);
#ifdef DEBUG
PrintBacktrace(pExPtrs->ContextRecord);
#else
info("Detailed exception information can be printed by debug version of NZBGet (available from download page)");
#endif
ExitProcess(-1);
return EXCEPTION_CONTINUE_SEARCH;
}
void InstallErrorHandler()
{
SetUnhandledExceptionFilter(ExceptionFilter);
}
#else
#ifdef DEBUG
typedef void(*sighandler)(int);
std::vector<sighandler> SignalProcList;
#endif
#ifdef HAVE_SYS_PRCTL_H
/**
* activates the creation of core-files
*/
void EnableDumpCore()
{
rlimit rlim;
rlim.rlim_cur= RLIM_INFINITY;
rlim.rlim_max= RLIM_INFINITY;
setrlimit(RLIMIT_CORE, &rlim);
prctl(PR_SET_DUMPABLE, 1);
}
#endif
void PrintBacktrace()
{
#ifdef HAVE_BACKTRACE
printf("Segmentation fault, tracing...\n");
void *array[100];
size_t size;
char **strings;
size_t i;
size = backtrace(array, 100);
strings = backtrace_symbols(array, size);
// first trace to screen
printf("Obtained %zd stack frames\n", size);
for (i = 0; i < size; i++)
{
printf("%s\n", strings[i]);
}
// then trace to log
error("Segmentation fault, tracing...");
error("Obtained %zd stack frames", size);
for (i = 0; i < size; i++)
{
error("%s", strings[i]);
}
free(strings);
#else
error("Segmentation fault");
#endif
}
/*
* Signal handler
*/
void SignalProc(int iSignal)
{
switch (iSignal)
{
case SIGINT:
signal(SIGINT, SIG_DFL); // Reset the signal handler
ExitProc();
break;
case SIGTERM:
signal(SIGTERM, SIG_DFL); // Reset the signal handler
ExitProc();
break;
case SIGCHLD:
// ignoring
break;
#ifdef DEBUG
case SIGSEGV:
signal(SIGSEGV, SIG_DFL); // Reset the signal handler
PrintBacktrace();
break;
#endif
}
}
void InstallErrorHandler()
{
#ifdef HAVE_SYS_PRCTL_H
if (g_pOptions->GetDumpCore())
{
EnableDumpCore();
}
#endif
signal(SIGINT, SignalProc);
signal(SIGTERM, SignalProc);
signal(SIGPIPE, SIG_IGN);
#ifdef DEBUG
signal(SIGSEGV, SignalProc);
#endif
#ifdef SIGCHLD_HANDLER
// it could be necessary on some systems to activate a handler for SIGCHLD
// however it causes problems on other systems and is deactivated by default
signal(SIGCHLD, SignalProc);
#endif
}
#endif
#ifdef DEBUG
class SegFault
{
public:
void DoSegFault()
{
char* N = NULL;
strcpy(N, "");
}
};
void TestSegFault()
{
SegFault s;
s.DoSegFault();
}
#endif

daemon/main/StackTrace.h Executable file

@@ -0,0 +1,35 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef STACKTRACE_H
#define STACKTRACE_H
void InstallErrorHandler();
#ifdef DEBUG
void TestSegFault();
#endif
#endif


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2012 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -40,7 +40,6 @@
#include <unistd.h>
#include <pwd.h>
#include <grp.h>
#include <sys/resource.h>
#ifdef HAVE_SYS_PRCTL_H
#include <sys/prctl.h>
#endif
@@ -53,9 +52,6 @@
#ifndef DISABLE_PARCHECK
#include <iostream>
#endif
#ifdef HAVE_BACKTRACE
#include <execinfo.h>
#endif
#include "nzbget.h"
#include "ServerPool.h"
@@ -72,9 +68,18 @@
#include "MessageBase.h"
#include "DiskState.h"
#include "PrePostProcessor.h"
#include "HistoryCoordinator.h"
#include "DupeCoordinator.h"
#include "ParChecker.h"
#include "Scheduler.h"
#include "Scanner.h"
#include "FeedCoordinator.h"
#include "Maintenance.h"
#include "ArticleWriter.h"
#include "StatMeter.h"
#include "QueueScript.h"
#include "Util.h"
#include "StackTrace.h"
#ifdef WIN32
#include "NTService.h"
#endif
@@ -86,15 +91,7 @@ void Reload();
void Cleanup();
void ProcessClientRequest();
#ifndef WIN32
void InstallSignalHandlers();
void Daemonize();
void PrintBacktrace();
#ifdef HAVE_SYS_PRCTL_H
void EnableDumpCore();
#endif
#ifdef DEBUG
void MakeSegFault();
#endif
#endif
#ifndef DISABLE_PARCHECK
void DisableCout();
@@ -107,12 +104,18 @@ QueueCoordinator* g_pQueueCoordinator = NULL;
UrlCoordinator* g_pUrlCoordinator = NULL;
RemoteServer* g_pRemoteServer = NULL;
RemoteServer* g_pRemoteSecureServer = NULL;
DownloadSpeedMeter* g_pDownloadSpeedMeter = NULL;
DownloadQueueHolder* g_pDownloadQueueHolder = NULL;
StatMeter* g_pStatMeter = NULL;
Log* g_pLog = NULL;
PrePostProcessor* g_pPrePostProcessor = NULL;
HistoryCoordinator* g_pHistoryCoordinator = NULL;
DupeCoordinator* g_pDupeCoordinator = NULL;
DiskState* g_pDiskState = NULL;
Scheduler* g_pScheduler = NULL;
Scanner* g_pScanner = NULL;
FeedCoordinator* g_pFeedCoordinator = NULL;
Maintenance* g_pMaintenance = NULL;
ArticleCache* g_pArticleCache = NULL;
QueueScriptCoordinator* g_pQueueScriptCoordinator = NULL;
int g_iArgumentCount;
char* (*g_szEnvironmentVariables)[] = NULL;
char* (*g_szArguments)[] = NULL;
@@ -127,7 +130,7 @@ int main(int argc, char *argv[], char *argp[])
#ifdef _DEBUG
_CrtSetReportMode(_CRT_WARN, _CRTDBG_MODE_FILE | _CRTDBG_MODE_DEBUG);
_CrtSetReportFile(_CRT_WARN, _CRTDBG_FILE_STDERR);
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF
#ifdef DEBUG_CRTMEMLEAKS
| _CRTDBG_CHECK_CRT_DF | _CRTDBG_CHECK_ALWAYS_DF
#endif
@@ -145,6 +148,8 @@ int main(int argc, char *argv[], char *argp[])
DisableCout();
#endif
srand (time(NULL));
g_iArgumentCount = argc;
g_szArguments = (char*(*)[])argv;
g_szEnvironmentVariables = (char*(*)[])argp;
@@ -162,12 +167,6 @@ int main(int argc, char *argv[], char *argp[])
RunMain();
#ifdef WIN32
#ifdef _DEBUG
_CrtDumpMemoryLeaks();
#endif
#endif
return 0;
}
@@ -203,6 +202,17 @@ void Run(bool bReload)
g_pServerPool = new ServerPool();
g_pScheduler = new Scheduler();
g_pQueueCoordinator = new QueueCoordinator();
g_pStatMeter = new StatMeter();
g_pScanner = new Scanner();
g_pPrePostProcessor = new PrePostProcessor();
g_pHistoryCoordinator = new HistoryCoordinator();
g_pDupeCoordinator = new DupeCoordinator();
g_pUrlCoordinator = new UrlCoordinator();
g_pFeedCoordinator = new FeedCoordinator();
g_pArticleCache = new ArticleCache();
g_pMaintenance = new Maintenance();
g_pQueueScriptCoordinator = new QueueScriptCoordinator();
debug("Reading options");
g_pOptions = new Options(g_iArgumentCount, *g_szArguments);
@@ -215,24 +225,23 @@ void Run(bool bReload)
}
#endif
if (g_pOptions->GetServerMode() && g_pOptions->GetCreateLog() && g_pOptions->GetResetLog())
{
debug("Deleting old log-file");
g_pLog->ResetLog();
}
g_pLog->InitOptions();
g_pScanner->InitOptions();
g_pQueueScriptCoordinator->InitOptions();
if (g_pOptions->GetDaemonMode() && !bReload)
if (g_pOptions->GetDaemonMode())
{
#ifdef WIN32
info("nzbget %s service-mode", Util::VersionRevision());
#else
Daemonize();
if (!bReload)
{
Daemonize();
}
info("nzbget %s daemon-mode", Util::VersionRevision());
#endif
}
else if (g_pOptions->GetServerMode() && !bReload)
else if (g_pOptions->GetServerMode())
{
info("nzbget %s server-mode", Util::VersionRevision());
}
@@ -249,28 +258,16 @@ void Run(bool bReload)
if (!g_pOptions->GetRemoteClientMode())
{
g_pServerPool->InitConnections();
#ifdef DEBUG
g_pServerPool->LogDebugInfo();
#endif
g_pStatMeter->Init();
}
#ifndef WIN32
#ifdef HAVE_SYS_PRCTL_H
if (g_pOptions->GetDumpCore())
{
EnableDumpCore();
}
#endif
#endif
InstallErrorHandler();
#ifndef WIN32
InstallSignalHandlers();
#ifdef DEBUG
if (g_pOptions->GetTestBacktrace())
{
MakeSegFault();
TestSegFault();
}
#endif
#endif
// client request
@@ -281,16 +278,6 @@ void Run(bool bReload)
return;
}
// Create the queue coordinator
if (!g_pOptions->GetRemoteClientMode())
{
g_pQueueCoordinator = new QueueCoordinator();
g_pDownloadSpeedMeter = g_pQueueCoordinator;
g_pDownloadQueueHolder = g_pQueueCoordinator;
g_pUrlCoordinator = new UrlCoordinator();
}
// Setup the network-server
if (g_pOptions->GetServerMode())
{
@@ -304,12 +291,6 @@ void Run(bool bReload)
}
}
// Creating PrePostProcessor
if (!g_pOptions->GetRemoteClientMode())
{
g_pPrePostProcessor = new PrePostProcessor();
}
// Create the frontend
if (!g_pOptions->GetDaemonMode())
{
@@ -341,13 +322,15 @@ void Run(bool bReload)
// Standalone-mode
if (!g_pOptions->GetServerMode())
{
NZBFile* pNZBFile = NZBFile::CreateFromFile(g_pOptions->GetArgFilename(), g_pOptions->GetAddCategory() ? g_pOptions->GetAddCategory() : "");
const char* szCategory = g_pOptions->GetAddCategory() ? g_pOptions->GetAddCategory() : "";
NZBFile* pNZBFile = NZBFile::Create(g_pOptions->GetArgFilename(), szCategory);
if (!pNZBFile)
{
abort("FATAL ERROR: Parsing NZB-document %s failed\n\n", g_pOptions->GetArgFilename() ? g_pOptions->GetArgFilename() : "N/A");
return;
}
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, false);
g_pScanner->InitPPParameters(szCategory, pNZBFile->GetNZBInfo()->GetParameters(), false);
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, NULL, false);
delete pNZBFile;
}
@@ -359,11 +342,18 @@ void Run(bool bReload)
g_pQueueCoordinator->Start();
g_pUrlCoordinator->Start();
g_pPrePostProcessor->Start();
g_pFeedCoordinator->Start();
if (g_pOptions->GetArticleCache() > 0)
{
g_pArticleCache->Start();
}
// enter main program-loop
while (g_pQueueCoordinator->IsRunning() ||
g_pUrlCoordinator->IsRunning() ||
g_pPrePostProcessor->IsRunning())
g_pPrePostProcessor->IsRunning() ||
g_pFeedCoordinator->IsRunning() ||
g_pArticleCache->IsRunning())
{
if (!g_pOptions->GetServerMode() &&
!g_pQueueCoordinator->HasMoreJobs() &&
@@ -383,6 +373,14 @@ void Run(bool bReload)
{
g_pPrePostProcessor->Stop();
}
if (!g_pFeedCoordinator->IsStopped())
{
g_pFeedCoordinator->Stop();
}
if (!g_pArticleCache->IsStopped())
{
g_pArticleCache->Stop();
}
}
usleep(100 * 1000);
}
@@ -391,8 +389,12 @@ void Run(bool bReload)
debug("QueueCoordinator stopped");
debug("UrlCoordinator stopped");
debug("PrePostProcessor stopped");
debug("FeedCoordinator stopped");
debug("ArticleCache stopped");
}
ScriptController::TerminateAll();
// Stop network-server
if (g_pRemoteServer)
{
@@ -474,14 +476,6 @@ void ProcessClientRequest()
Client->RequestServerPauseUnpause(false, eRemotePauseUnpauseActionDownload);
break;
case Options::opClientRequestDownload2Pause:
Client->RequestServerPauseUnpause(true, eRemotePauseUnpauseActionDownload2);
break;
case Options::opClientRequestDownload2Unpause:
Client->RequestServerPauseUnpause(false, eRemotePauseUnpauseActionDownload2);
break;
case Options::opClientRequestSetRate:
Client->RequestServerSetDownloadRate(g_pOptions->GetSetRate());
break;
@@ -491,9 +485,10 @@ void ProcessClientRequest()
break;
case Options::opClientRequestEditQueue:
Client->RequestServerEditQueue((eRemoteEditAction)g_pOptions->GetEditQueueAction(), g_pOptions->GetEditQueueOffset(),
g_pOptions->GetEditQueueText(), g_pOptions->GetEditQueueIDList(), g_pOptions->GetEditQueueIDCount(),
g_pOptions->GetEditQueueNameList(), (eRemoteMatchMode)g_pOptions->GetMatchMode(), true);
Client->RequestServerEditQueue((DownloadQueue::EEditAction)g_pOptions->GetEditQueueAction(),
g_pOptions->GetEditQueueOffset(), g_pOptions->GetEditQueueText(),
g_pOptions->GetEditQueueIDList(), g_pOptions->GetEditQueueIDCount(),
g_pOptions->GetEditQueueNameList(), (eRemoteMatchMode)g_pOptions->GetMatchMode());
break;
case Options::opClientRequestLog:
@@ -556,10 +551,6 @@ void ProcessClientRequest()
Client->RequestServerDownloadUrl(g_pOptions->GetLastArg(), g_pOptions->GetAddNZBFilename(), g_pOptions->GetAddCategory(), g_pOptions->GetAddTop(), g_pOptions->GetAddPaused(), g_pOptions->GetAddPriority());
break;
case Options::opClientRequestUrlQueue:
Client->RequestUrlQueue();
break;
case Options::opClientNoOperation:
break;
}
@@ -589,6 +580,8 @@ void ExitProc()
g_pQueueCoordinator->Stop();
g_pUrlCoordinator->Stop();
g_pPrePostProcessor->Stop();
g_pFeedCoordinator->Stop();
g_pArticleCache->Stop();
}
}
}
@@ -600,172 +593,55 @@ void Reload()
ExitProc();
}
#ifndef WIN32
#ifdef DEBUG
typedef void(*sighandler)(int);
std::vector<sighandler> SignalProcList;
#endif
/*
* Signal handler
*/
void SignalProc(int iSignal)
{
switch (iSignal)
{
case SIGINT:
signal(SIGINT, SIG_DFL); // Reset the signal handler
ExitProc();
break;
case SIGTERM:
signal(SIGTERM, SIG_DFL); // Reset the signal handler
ExitProc();
break;
case SIGCHLD:
// ignoring
break;
#ifdef DEBUG
case SIGSEGV:
signal(SIGSEGV, SIG_DFL); // Reset the signal handler
PrintBacktrace();
break;
#endif
}
}
void InstallSignalHandlers()
{
signal(SIGINT, SignalProc);
signal(SIGTERM, SignalProc);
signal(SIGPIPE, SIG_IGN);
#ifdef DEBUG
signal(SIGSEGV, SignalProc);
#endif
#ifdef SIGCHLD_HANDLER
// on some systems it may be necessary to activate a handler for SIGCHLD;
// however it causes problems on other systems and is therefore deactivated by default
signal(SIGCHLD, SignalProc);
#endif
}
void PrintBacktrace()
{
#ifdef HAVE_BACKTRACE
printf("Segmentation fault, tracing...\n");
void *array[100];
size_t size;
char **strings;
size_t i;
size = backtrace(array, 100);
strings = backtrace_symbols(array, size);
// first trace to screen
printf("Obtained %zd stack frames\n", size);
for (i = 0; i < size; i++)
{
printf("%s\n", strings[i]);
}
// then trace to log
error("Segmentation fault, tracing...");
error("Obtained %zd stack frames", size);
for (i = 0; i < size; i++)
{
error("%s", strings[i]);
}
free(strings);
#else
error("Segmentation fault");
#endif
}
#ifdef DEBUG
void MakeSegFault()
{
char* N = NULL;
strcpy(N, "");
}
#endif
#ifdef HAVE_SYS_PRCTL_H
/**
* activates the creation of core-files
*/
void EnableDumpCore()
{
rlimit rlim;
rlim.rlim_cur= RLIM_INFINITY;
rlim.rlim_max= RLIM_INFINITY;
setrlimit(RLIMIT_CORE, &rlim);
prctl(PR_SET_DUMPABLE, 1);
}
#endif
#endif
void Cleanup()
{
debug("Cleaning up global objects");
debug("Deleting UrlCoordinator");
if (g_pUrlCoordinator)
{
delete g_pUrlCoordinator;
g_pUrlCoordinator = NULL;
}
delete g_pUrlCoordinator;
g_pUrlCoordinator = NULL;
debug("UrlCoordinator deleted");
debug("Deleting RemoteServer");
if (g_pRemoteServer)
{
delete g_pRemoteServer;
g_pRemoteServer = NULL;
}
delete g_pRemoteServer;
g_pRemoteServer = NULL;
debug("RemoteServer deleted");
debug("Deleting RemoteSecureServer");
if (g_pRemoteSecureServer)
{
delete g_pRemoteSecureServer;
g_pRemoteSecureServer = NULL;
}
delete g_pRemoteSecureServer;
g_pRemoteSecureServer = NULL;
debug("RemoteSecureServer deleted");
debug("Deleting PrePostProcessor");
if (g_pPrePostProcessor)
{
delete g_pPrePostProcessor;
g_pPrePostProcessor = NULL;
}
delete g_pPrePostProcessor;
g_pPrePostProcessor = NULL;
delete g_pScanner;
g_pScanner = NULL;
debug("PrePostProcessor deleted");
debug("Deleting HistoryCoordinator");
delete g_pHistoryCoordinator;
g_pHistoryCoordinator = NULL;
debug("HistoryCoordinator deleted");
debug("Deleting DupeCoordinator");
delete g_pDupeCoordinator;
g_pDupeCoordinator = NULL;
debug("DupeCoordinator deleted");
debug("Deleting Frontend");
if (g_pFrontend)
{
delete g_pFrontend;
g_pFrontend = NULL;
}
delete g_pFrontend;
g_pFrontend = NULL;
debug("Frontend deleted");
debug("Deleting QueueCoordinator");
if (g_pQueueCoordinator)
{
delete g_pQueueCoordinator;
g_pQueueCoordinator = NULL;
}
delete g_pQueueCoordinator;
g_pQueueCoordinator = NULL;
debug("QueueCoordinator deleted");
debug("Deleting DiskState");
if (g_pDiskState)
{
delete g_pDiskState;
g_pDiskState = NULL;
}
delete g_pDiskState;
g_pDiskState = NULL;
debug("DiskState deleted");
debug("Deleting Options");
@@ -782,21 +658,40 @@ void Cleanup()
debug("Options deleted");
debug("Deleting ServerPool");
if (g_pServerPool)
{
delete g_pServerPool;
g_pServerPool = NULL;
}
delete g_pServerPool;
g_pServerPool = NULL;
debug("ServerPool deleted");
debug("Deleting Scheduler");
if (g_pScheduler)
{
delete g_pScheduler;
g_pScheduler = NULL;
}
delete g_pScheduler;
g_pScheduler = NULL;
debug("Scheduler deleted");
debug("Deleting FeedCoordinator");
delete g_pFeedCoordinator;
g_pFeedCoordinator = NULL;
debug("FeedCoordinator deleted");
debug("Deleting ArticleCache");
delete g_pArticleCache;
g_pArticleCache = NULL;
debug("ArticleCache deleted");
debug("Deleting QueueScriptCoordinator");
delete g_pQueueScriptCoordinator;
g_pQueueScriptCoordinator = NULL;
debug("QueueScriptCoordinator deleted");
debug("Deleting Maintenance");
delete g_pMaintenance;
g_pMaintenance = NULL;
debug("Maintenance deleted");
debug("Deleting StatMeter");
delete g_pStatMeter;
g_pStatMeter = NULL;
debug("StatMeter deleted");
if (!g_bReloading)
{
Connection::Final();
@@ -805,11 +700,8 @@ void Cleanup()
debug("Global objects cleaned up");
if (g_pLog)
{
delete g_pLog;
g_pLog = NULL;
}
delete g_pLog;
g_pLog = NULL;
}
#ifndef WIN32
@@ -833,14 +725,14 @@ void Daemonize()
/* Drop user if there is one, and we were run as root */
if ( getuid() == 0 || geteuid() == 0 )
{
struct passwd *pw = getpwnam(g_pOptions->GetDaemonUserName());
struct passwd *pw = getpwnam(g_pOptions->GetDaemonUsername());
if (pw)
{
fchown(lfp, pw->pw_uid, pw->pw_gid); /* change owner of lock file */
setgroups( 0, (const gid_t*) 0 ); /* Set aux groups to null. */
setgid(pw->pw_gid); /* Set primary group. */
/* Try setting aux groups correctly - not critical if this fails. */
initgroups( g_pOptions->GetDaemonUserName(),pw->pw_gid);
initgroups( g_pOptions->GetDaemonUsername(),pw->pw_gid);
/* Finally, set uid. */
setuid(pw->pw_uid);
}


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -36,7 +36,9 @@
#endif
#define fdopen _fdopen
#define ctime_r(timep, buf, bufsize) ctime_s(buf, bufsize, timep)
#define localtime_r(time, tm) localtime_s(tm, time)
#define gmtime_r(time, tm) gmtime_s(tm, time)
#define strtok_r(str, delim, saveptr) strtok_s(str, delim, saveptr)
#define strerror_r(errnum, buffer, size) strerror_s(buffer, size, errnum)
#define int32_t __int32
#define mkdir(dir, flags) _mkdir(dir)
#define rmdir _rmdir
@@ -48,7 +50,6 @@
#define S_ISREG(mode) __S_ISTYPE((mode), _S_IFREG)
#define S_DIRMODE NULL
#define usleep(usec) Sleep((usec) / 1000)
#define gettimeofday(tm, ignore) _ftime(tm)
#define socklen_t int
#define SHUT_WR 0x01
#define SHUT_RDWR 0x02
@@ -56,9 +57,18 @@
#define ALT_PATH_SEPARATOR '/'
#define LINE_ENDING "\r\n"
#define pid_t int
#define atoll _atoi64
#define fseek _fseeki64
#define ftell _ftelli64
#ifndef FSCTL_SET_SPARSE
#define FSCTL_SET_SPARSE 590020
#endif
#define FOPEN_RB "rbN"
#define FOPEN_RBP "rb+N"
#define FOPEN_WB "wbN"
#define FOPEN_WBP "wb+N"
#define FOPEN_AB "abN"
#define FOPEN_ABP "ab+N"
#pragma warning(disable:4800) // 'type' : forcing value to bool 'true' or 'false' (performance warning)
#pragma warning(disable:4267) // 'var' : conversion from 'size_t' to 'type', possible loss of data
@@ -75,6 +85,12 @@
#define MAX_PATH 1024
#define S_DIRMODE (S_IRWXU | S_IRWXG | S_IRWXO)
#define LINE_ENDING "\n"
#define FOPEN_RB "rb"
#define FOPEN_RBP "rb+"
#define FOPEN_WB "wb"
#define FOPEN_WBP "wb+"
#define FOPEN_AB "ab"
#define FOPEN_ABP "ab+"
#endif


@@ -0,0 +1,723 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#include <sys/time.h>
#endif
#include <sys/stat.h>
#include <errno.h>
#include "nzbget.h"
#include "ArticleDownloader.h"
#include "ArticleWriter.h"
#include "Decoder.h"
#include "Log.h"
#include "Options.h"
#include "ServerPool.h"
#include "StatMeter.h"
#include "Util.h"
extern Options* g_pOptions;
extern ServerPool* g_pServerPool;
extern StatMeter* g_pStatMeter;
ArticleDownloader::ArticleDownloader()
{
debug("Creating ArticleDownloader");
m_szInfoName = NULL;
m_szConnectionName[0] = '\0';
m_pConnection = NULL;
m_eStatus = adUndefined;
m_eFormat = Decoder::efUnknown;
m_szArticleFilename = NULL;
m_iDownloadedSize = 0;
m_ArticleWriter.SetOwner(this);
SetLastUpdateTimeNow();
}
ArticleDownloader::~ArticleDownloader()
{
debug("Destroying ArticleDownloader");
free(m_szInfoName);
free(m_szArticleFilename);
}
void ArticleDownloader::SetInfoName(const char* szInfoName)
{
m_szInfoName = strdup(szInfoName);
m_ArticleWriter.SetInfoName(m_szInfoName);
}
/*
* How server management (for one particular article) works:
- there is a list of failed servers, which is initially empty;
- the level is initially 0;
<loop>
- request a connection from the server pool for the current level;
Exception: this step is skipped for the very first download attempt, because a
level-0 connection is initially passed in by the queue manager;
- try to download from the server;
- if the connection cannot be established or the download fails because the connection
was interrupted, retry the same server (as many times as needed, without limit) until
the connection is OK;
- if the download fails with a "Not-Found" error (article or group not found) or with a
CRC error, add the server to the failed-server list;
- if the download fails with a general error (article incomplete, other unknown error
codes), retry the same server as many times as defined by option <Retries>; if all
attempts fail, add the server to the failed-server list;
- if all servers of the current level have been tried, increase the level;
- if all servers of all levels have been tried, break the loop with failure status.
<end-loop>
(A simplified sketch of this policy follows the Run() implementation below.)
*/
void ArticleDownloader::Run()
{
debug("Entering ArticleDownloader-loop");
SetStatus(adRunning);
m_ArticleWriter.SetFileInfo(m_pFileInfo);
m_ArticleWriter.SetArticleInfo(m_pArticleInfo);
m_ArticleWriter.Prepare();
EStatus Status = adFailed;
int iRetries = g_pOptions->GetRetries() > 0 ? g_pOptions->GetRetries() : 1;
int iRemainedRetries = iRetries;
Servers failedServers;
failedServers.reserve(g_pServerPool->GetServers()->size());
NewsServer* pWantServer = NULL;
NewsServer* pLastServer = NULL;
int iLevel = 0;
int iServerConfigGeneration = g_pServerPool->GetGeneration();
bool bForce = m_pFileInfo->GetNZBInfo()->GetForcePriority();
while (!IsStopped())
{
Status = adFailed;
SetStatus(adWaiting);
while (!m_pConnection && !(IsStopped() || iServerConfigGeneration != g_pServerPool->GetGeneration()))
{
m_pConnection = g_pServerPool->GetConnection(iLevel, pWantServer, &failedServers);
usleep(5 * 1000);
}
SetLastUpdateTimeNow();
SetStatus(adRunning);
if (IsStopped() || (g_pOptions->GetPauseDownload() && !bForce) ||
(g_pOptions->GetTempPauseDownload() && !m_pFileInfo->GetExtraPriority()) ||
iServerConfigGeneration != g_pServerPool->GetGeneration())
{
Status = adRetry;
break;
}
pLastServer = m_pConnection->GetNewsServer();
m_pConnection->SetSuppressErrors(false);
snprintf(m_szConnectionName, sizeof(m_szConnectionName), "%s (%s)",
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost());
m_szConnectionName[sizeof(m_szConnectionName) - 1] = '\0';
// test connection
bool bConnected = m_pConnection && m_pConnection->Connect();
if (bConnected && !IsStopped())
{
NewsServer* pNewsServer = m_pConnection->GetNewsServer();
detail("Downloading %s @ %s", m_szInfoName, m_szConnectionName);
Status = Download();
if (Status == adFinished || Status == adFailed || Status == adNotFound || Status == adCrcError)
{
m_ServerStats.StatOp(pNewsServer->GetID(), Status == adFinished ? 1 : 0, Status == adFinished ? 0 : 1, ServerStatList::soSet);
}
}
if (bConnected)
{
if (Status == adConnectError)
{
m_pConnection->Disconnect();
bConnected = false;
Status = adFailed;
}
else
{
// Freeing the connection allows other threads to start.
// We do this only if the problem was with the article or the group.
// If the problem occurred during connecting or authorization we do not
// free the connection, to prevent thousands of threads from starting
// (because each of them would also free its connection after hitting
// the same connect error).
FreeConnection(Status == adFinished || Status == adNotFound);
}
}
if (m_pConnection)
{
AddServerData();
}
if (Status == adFinished || Status == adFatalError)
{
break;
}
pWantServer = NULL;
if (bConnected && Status == adFailed)
{
iRemainedRetries--;
}
if (!bConnected || (Status == adFailed && iRemainedRetries > 0))
{
pWantServer = pLastServer;
}
if (pWantServer &&
!(IsStopped() || (g_pOptions->GetPauseDownload() && !bForce) ||
(g_pOptions->GetTempPauseDownload() && !m_pFileInfo->GetExtraPriority()) ||
iServerConfigGeneration != g_pServerPool->GetGeneration()))
{
detail("Waiting %i sec to retry", g_pOptions->GetRetryInterval());
SetStatus(adWaiting);
int msec = 0;
while (!(IsStopped() || (g_pOptions->GetPauseDownload() && !bForce) ||
(g_pOptions->GetTempPauseDownload() && !m_pFileInfo->GetExtraPriority()) ||
iServerConfigGeneration != g_pServerPool->GetGeneration()) &&
msec < g_pOptions->GetRetryInterval() * 1000)
{
usleep(100 * 1000);
msec += 100;
}
SetLastUpdateTimeNow();
SetStatus(adRunning);
}
if (IsStopped() || (g_pOptions->GetPauseDownload() && !bForce) ||
(g_pOptions->GetTempPauseDownload() && !m_pFileInfo->GetExtraPriority()) ||
iServerConfigGeneration != g_pServerPool->GetGeneration())
{
Status = adRetry;
break;
}
if (!pWantServer)
{
failedServers.push_back(pLastServer);
// if all servers of the current level have been tried, increase the level
// if all servers of all levels have been tried, break the loop with failure status
bool bAllServersOnLevelFailed = true;
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++)
{
NewsServer* pCandidateServer = *it;
if (pCandidateServer->GetNormLevel() == iLevel)
{
bool bServerFailed = !pCandidateServer->GetActive() || pCandidateServer->GetMaxConnections() == 0;
if (!bServerFailed)
{
for (Servers::iterator it = failedServers.begin(); it != failedServers.end(); it++)
{
NewsServer* pIgnoreServer = *it;
if (pIgnoreServer == pCandidateServer ||
(pIgnoreServer->GetGroup() > 0 && pIgnoreServer->GetGroup() == pCandidateServer->GetGroup() &&
pIgnoreServer->GetNormLevel() == pCandidateServer->GetNormLevel()))
{
bServerFailed = true;
break;
}
}
}
if (!bServerFailed)
{
bAllServersOnLevelFailed = false;
break;
}
}
}
if (bAllServersOnLevelFailed)
{
if (iLevel < g_pServerPool->GetMaxNormLevel())
{
detail("Article %s @ all level %i servers failed, increasing level", m_szInfoName, iLevel);
iLevel++;
}
else
{
detail("Article %s @ all servers failed", m_szInfoName);
Status = adFailed;
break;
}
}
iRemainedRetries = iRetries;
}
}
FreeConnection(Status == adFinished);
if (m_ArticleWriter.GetDuplicate())
{
Status = adFinished;
}
if (Status != adFinished && Status != adRetry)
{
Status = adFailed;
}
if (IsStopped())
{
detail("Download %s cancelled", m_szInfoName);
Status = adRetry;
}
if (Status == adFailed)
{
detail("Download %s failed", m_szInfoName);
}
SetStatus(Status);
Notify(NULL);
debug("Exiting ArticleDownloader-loop");
}
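The server-selection policy documented in the comment above ArticleDownloader::Run() can be condensed into the following self-contained sketch. It is an editorial illustration only, assuming hypothetical names (SimpleServer, PickServer) that are not part of nzbget; the real code additionally treats a candidate as failed when it shares a group and level with an already-failed server.

#include <cstddef>
#include <vector>
#include <algorithm>

struct SimpleServer { int iLevel; bool bActive; };

// Return the next candidate server on the given level, skipping servers already
// recorded in the failed-server list. A NULL result means every active server on
// this level has failed; the caller then increases the level, and once the maximum
// level is exhausted the article download is reported as failed.
static SimpleServer* PickServer(std::vector<SimpleServer*>& servers,
	std::vector<SimpleServer*>& failedServers, int iLevel)
{
	for (std::size_t i = 0; i < servers.size(); i++)
	{
		SimpleServer* pServer = servers[i];
		bool bFailed = !pServer->bActive ||
			std::find(failedServers.begin(), failedServers.end(), pServer) != failedServers.end();
		if (pServer->iLevel == iLevel && !bFailed)
		{
			return pServer;
		}
	}
	return NULL;
}

Run() repeats this selection after every unsuccessful attempt: generic failures are retried on the same server up to <Retries> times before the server is added to the failed list, and when no candidate remains on the current level the level is increased until all levels are exhausted.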
ArticleDownloader::EStatus ArticleDownloader::Download()
{
const char* szResponse = NULL;
EStatus Status = adRunning;
m_bWritingStarted = false;
m_pArticleInfo->SetCrc(0);
if (m_pConnection->GetNewsServer()->GetJoinGroup())
{
// change group
for (FileInfo::Groups::iterator it = m_pFileInfo->GetGroups()->begin(); it != m_pFileInfo->GetGroups()->end(); it++)
{
szResponse = m_pConnection->JoinGroup(*it);
if (szResponse && !strncmp(szResponse, "2", 1))
{
break;
}
}
Status = CheckResponse(szResponse, "could not join group");
if (Status != adFinished)
{
return Status;
}
}
// retrieve article
char tmp[1024];
snprintf(tmp, 1024, "ARTICLE %s\r\n", m_pArticleInfo->GetMessageID());
tmp[1024-1] = '\0';
for (int retry = 3; retry > 0; retry--)
{
szResponse = m_pConnection->Request(tmp);
if ((szResponse && !strncmp(szResponse, "2", 1)) || m_pConnection->GetAuthError())
{
break;
}
}
Status = CheckResponse(szResponse, "could not fetch article");
if (Status != adFinished)
{
return Status;
}
if (g_pOptions->GetDecode())
{
m_YDecoder.Clear();
m_YDecoder.SetCrcCheck(g_pOptions->GetCrcCheck());
m_UDecoder.Clear();
}
bool bBody = false;
bool bEnd = false;
const int LineBufSize = 1024*10;
char* szLineBuf = (char*)malloc(LineBufSize);
Status = adRunning;
while (!IsStopped())
{
time_t tOldTime = m_tLastUpdateTime;
SetLastUpdateTimeNow();
if (tOldTime != m_tLastUpdateTime)
{
AddServerData();
}
// Throttle the bandwidth
while (!IsStopped() && (g_pOptions->GetDownloadRate() > 0.0f) &&
(g_pStatMeter->CalcCurrentDownloadSpeed() > g_pOptions->GetDownloadRate()))
{
SetLastUpdateTimeNow();
usleep(10 * 1000);
}
int iLen = 0;
char* line = m_pConnection->ReadLine(szLineBuf, LineBufSize, &iLen);
g_pStatMeter->AddSpeedReading(iLen);
if (g_pOptions->GetAccurateRate())
{
AddServerData();
}
// Have we encountered a timeout?
if (!line)
{
if (!IsStopped())
{
detail("Article %s @ %s failed: Unexpected end of article", m_szInfoName, m_szConnectionName);
}
Status = adFailed;
break;
}
//detect end of article
if (!strcmp(line, ".\r\n") || !strcmp(line, ".\n"))
{
bEnd = true;
break;
}
//detect lines starting with "." (marked as "..")
if (!strncmp(line, "..", 2))
{
line++;
iLen--;
}
if (!bBody)
{
// detect body of article
if (*line == '\r' || *line == '\n')
{
bBody = true;
}
// check id of returned article
else if (!strncmp(line, "Message-ID: ", 12))
{
char* p = line + 12;
if (strncmp(p, m_pArticleInfo->GetMessageID(), strlen(m_pArticleInfo->GetMessageID())))
{
if (char* e = strrchr(p, '\r')) *e = '\0'; // remove trailing CR-character
detail("Article %s @ %s failed: Wrong message-id, expected %s, returned %s", m_szInfoName,
m_szConnectionName, m_pArticleInfo->GetMessageID(), p);
Status = adFailed;
break;
}
}
}
else if (m_eFormat == Decoder::efUnknown && g_pOptions->GetDecode())
{
m_eFormat = Decoder::DetectFormat(line, iLen);
}
// write to output file
if (((bBody && m_eFormat != Decoder::efUnknown) || !g_pOptions->GetDecode()) && !Write(line, iLen))
{
Status = adFatalError;
break;
}
}
free(szLineBuf);
if (!bEnd && Status == adRunning && !IsStopped())
{
detail("Article %s @ %s failed: article incomplete", m_szInfoName, m_szConnectionName);
Status = adFailed;
}
if (IsStopped())
{
Status = adFailed;
}
if (Status == adRunning)
{
FreeConnection(true);
Status = DecodeCheck();
}
if (m_bWritingStarted)
{
m_ArticleWriter.Finish(Status == adFinished);
}
if (Status == adFinished)
{
detail("Successfully downloaded %s", m_szInfoName);
}
return Status;
}
ArticleDownloader::EStatus ArticleDownloader::CheckResponse(const char* szResponse, const char* szComment)
{
if (!szResponse)
{
if (!IsStopped())
{
detail("Article %s @ %s failed, %s: Connection closed by remote host",
m_szInfoName, m_szConnectionName, szComment);
}
return adConnectError;
}
else if (m_pConnection->GetAuthError() || !strncmp(szResponse, "400", 3) || !strncmp(szResponse, "499", 3))
{
detail("Article %s @ %s failed, %s: %s", m_szInfoName, m_szConnectionName, szComment, szResponse);
return adConnectError;
}
else if (!strncmp(szResponse, "41", 2) || !strncmp(szResponse, "42", 2) || !strncmp(szResponse, "43", 2))
{
detail("Article %s @ %s failed, %s: %s", m_szInfoName, m_szConnectionName, szComment, szResponse);
return adNotFound;
}
else if (!strncmp(szResponse, "2", 1))
{
// OK
return adFinished;
}
else
{
// unknown error, no special handling
detail("Article %s @ %s failed, %s: %s", m_szInfoName, m_szConnectionName, szComment, szResponse);
return adFailed;
}
}
bool ArticleDownloader::Write(char* szLine, int iLen)
{
const char* szArticleFilename = NULL;
long long iArticleFileSize = 0;
long long iArticleOffset = 0;
int iArticleSize = 0;
if (g_pOptions->GetDecode())
{
if (m_eFormat == Decoder::efYenc)
{
iLen = m_YDecoder.DecodeBuffer(szLine, iLen);
szArticleFilename = m_YDecoder.GetArticleFilename();
iArticleFileSize = m_YDecoder.GetSize();
}
else if (m_eFormat == Decoder::efUx)
{
iLen = m_UDecoder.DecodeBuffer(szLine, iLen);
szArticleFilename = m_UDecoder.GetArticleFilename();
}
else
{
detail("Decoding %s failed: unsupported encoding", m_szInfoName);
return false;
}
if (iLen > 0 && m_eFormat == Decoder::efYenc)
{
if (m_YDecoder.GetBegin() == 0 || m_YDecoder.GetEnd() == 0)
{
return false;
}
iArticleOffset = m_YDecoder.GetBegin() - 1;
iArticleSize = (int)(m_YDecoder.GetEnd() - m_YDecoder.GetBegin() + 1);
}
}
if (!m_bWritingStarted && iLen > 0)
{
if (!m_ArticleWriter.Start(m_eFormat, szArticleFilename, iArticleFileSize, iArticleOffset, iArticleSize))
{
return false;
}
m_bWritingStarted = true;
}
bool bOK = iLen == 0 || m_ArticleWriter.Write(szLine, iLen);
return bOK;
}
ArticleDownloader::EStatus ArticleDownloader::DecodeCheck()
{
if (g_pOptions->GetDecode())
{
Decoder* pDecoder = NULL;
if (m_eFormat == Decoder::efYenc)
{
pDecoder = &m_YDecoder;
}
else if (m_eFormat == Decoder::efUx)
{
pDecoder = &m_UDecoder;
}
else
{
detail("Decoding %s failed: no binary data or unsupported encoding format", m_szInfoName);
return adFailed;
}
Decoder::EStatus eStatus = pDecoder->Check();
if (eStatus == Decoder::eFinished)
{
if (pDecoder->GetArticleFilename())
{
free(m_szArticleFilename);
m_szArticleFilename = strdup(pDecoder->GetArticleFilename());
}
if (m_eFormat == Decoder::efYenc)
{
m_pArticleInfo->SetCrc(g_pOptions->GetCrcCheck() ?
m_YDecoder.GetCalculatedCrc() : m_YDecoder.GetExpectedCrc());
}
return adFinished;
}
else if (eStatus == Decoder::eCrcError)
{
detail("Decoding %s failed: CRC-Error", m_szInfoName);
return adCrcError;
}
else if (eStatus == Decoder::eArticleIncomplete)
{
detail("Decoding %s failed: article incomplete", m_szInfoName);
return adFailed;
}
else if (eStatus == Decoder::eInvalidSize)
{
detail("Decoding %s failed: size mismatch", m_szInfoName);
return adFailed;
}
else if (eStatus == Decoder::eNoBinaryData)
{
detail("Decoding %s failed: no binary data found", m_szInfoName);
return adFailed;
}
else
{
detail("Decoding %s failed", m_szInfoName);
return adFailed;
}
}
else
{
return adFinished;
}
}
void ArticleDownloader::LogDebugInfo()
{
char szTime[50];
#ifdef HAVE_CTIME_R_3
ctime_r(&m_tLastUpdateTime, szTime, 50);
#else
ctime_r(&m_tLastUpdateTime, szTime);
#endif
info(" Download: Status=%i, LastUpdateTime=%s, InfoName=%s", m_eStatus, szTime, m_szInfoName);
}
void ArticleDownloader::Stop()
{
debug("Trying to stop ArticleDownloader");
Thread::Stop();
m_mutexConnection.Lock();
if (m_pConnection)
{
m_pConnection->SetSuppressErrors(true);
m_pConnection->Cancel();
}
m_mutexConnection.Unlock();
debug("ArticleDownloader stopped successfully");
}
bool ArticleDownloader::Terminate()
{
NNTPConnection* pConnection = m_pConnection;
bool terminated = Kill();
if (terminated && pConnection)
{
debug("Terminating connection");
pConnection->SetSuppressErrors(true);
pConnection->Cancel();
pConnection->Disconnect();
g_pStatMeter->AddServerData(pConnection->FetchTotalBytesRead(), pConnection->GetNewsServer()->GetID());
g_pServerPool->FreeConnection(pConnection, true);
}
return terminated;
}
void ArticleDownloader::FreeConnection(bool bKeepConnected)
{
if (m_pConnection)
{
debug("Releasing connection");
m_mutexConnection.Lock();
if (!bKeepConnected || m_pConnection->GetStatus() == Connection::csCancelled)
{
m_pConnection->Disconnect();
}
AddServerData();
g_pServerPool->FreeConnection(m_pConnection, true);
m_pConnection = NULL;
m_mutexConnection.Unlock();
}
}
void ArticleDownloader::AddServerData()
{
int iBytesRead = m_pConnection->FetchTotalBytesRead();
g_pStatMeter->AddServerData(iBytesRead, m_pConnection->GetNewsServer()->GetID());
m_iDownloadedSize += iBytesRead;
}


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -34,6 +34,7 @@
#include "Thread.h"
#include "NNTPConnection.h"
#include "Decoder.h"
#include "ArticleWriter.h"
class ArticleDownloader : public Thread, public Subject
{
@@ -47,13 +48,20 @@ public:
adFailed,
adRetry,
adCrcError,
adDecoding,
adJoining,
adJoined,
adNotFound,
adConnectError,
adFatalError
};
class ArticleWriterImpl : public ArticleWriter
{
private:
ArticleDownloader* m_pOwner;
protected:
virtual void SetLastUpdateTimeNow() { m_pOwner->SetLastUpdateTimeNow(); }
public:
void SetOwner(ArticleDownloader* pOwner) { m_pOwner = pOwner; }
};
private:
FileInfo* m_pFileInfo;
@@ -61,59 +69,49 @@ private:
NNTPConnection* m_pConnection;
EStatus m_eStatus;
Mutex m_mutexConnection;
const char* m_szResultFilename;
char* m_szTempFilename;
char* m_szArticleFilename;
char* m_szInfoName;
char* m_szOutputFilename;
char m_szConnectionName[250];
char* m_szArticleFilename;
time_t m_tLastUpdateTime;
Decoder::EFormat m_eFormat;
YDecoder m_YDecoder;
UDecoder m_UDecoder;
FILE* m_pOutFile;
bool m_bDuplicate;
ArticleWriterImpl m_ArticleWriter;
ServerStatList m_ServerStats;
bool m_bWritingStarted;
int m_iDownloadedSize;
EStatus Download();
bool Write(char* szLine, int iLen);
bool PrepareFile(char* szLine);
bool CreateOutputFile(int iSize);
EStatus DecodeCheck();
void FreeConnection(bool bKeepConnected);
EStatus CheckResponse(const char* szResponse, const char* szComment);
void SetStatus(EStatus eStatus) { m_eStatus = eStatus; }
bool Write(char* szLine, int iLen);
void AddServerData();
public:
ArticleDownloader();
~ArticleDownloader();
virtual ~ArticleDownloader();
void SetFileInfo(FileInfo* pFileInfo) { m_pFileInfo = pFileInfo; }
FileInfo* GetFileInfo() { return m_pFileInfo; }
void SetArticleInfo(ArticleInfo* pArticleInfo) { m_pArticleInfo = pArticleInfo; }
ArticleInfo* GetArticleInfo() { return m_pArticleInfo; }
EStatus GetStatus() { return m_eStatus; }
ServerStatList* GetServerStats() { return &m_ServerStats; }
virtual void Run();
virtual void Stop();
bool Terminate();
time_t GetLastUpdateTime() { return m_tLastUpdateTime; }
void SetLastUpdateTimeNow() { m_tLastUpdateTime = ::time(NULL); }
const char* GetTempFilename() { return m_szTempFilename; }
void SetTempFilename(const char* v);
void SetOutputFilename(const char* v);
const char* GetArticleFilename() { return m_szArticleFilename; }
void SetInfoName(const char* v);
void SetInfoName(const char* szInfoName);
const char* GetInfoName() { return m_szInfoName; }
void CompleteFileParts();
static bool MoveCompletedFiles(NZBInfo* pNZBInfo, const char* szOldDestDir);
const char* GetConnectionName() { return m_szConnectionName; }
void SetConnection(NNTPConnection* pConnection) { m_pConnection = pConnection; }
void CompleteFileParts() { m_ArticleWriter.CompleteFileParts(); }
int GetDownloadedSize() { return m_iDownloadedSize; }
void LogDebugInfo();
};
class DownloadSpeedMeter
{
public:
virtual ~DownloadSpeedMeter() {};
virtual int CalcCurrentDownloadSpeed() = 0;
virtual void AddSpeedReading(int iBytes) = 0;
};
#endif


@@ -0,0 +1,999 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#include <sys/time.h>
#endif
#include <sys/stat.h>
#include <errno.h>
#include <algorithm>
#include "nzbget.h"
#include "ArticleWriter.h"
#include "DiskState.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
extern ArticleCache* g_pArticleCache;
ArticleWriter::ArticleWriter()
{
debug("Creating ArticleWriter");
m_szTempFilename = NULL;
m_szOutputFilename = NULL;
m_szResultFilename = NULL;
m_szInfoName = NULL;
m_eFormat = Decoder::efUnknown;
m_pArticleData = NULL;
m_bDuplicate = false;
m_bFlushing = false;
}
ArticleWriter::~ArticleWriter()
{
debug("Destroying ArticleWriter");
free(m_szOutputFilename);
free(m_szTempFilename);
free(m_szInfoName);
if (m_pArticleData)
{
free(m_pArticleData);
g_pArticleCache->Free(m_iArticleSize);
}
if (m_bFlushing)
{
g_pArticleCache->UnlockFlush();
}
}
void ArticleWriter::SetInfoName(const char* szInfoName)
{
m_szInfoName = strdup(szInfoName);
}
void ArticleWriter::SetWriteBuffer(FILE* pOutFile, int iRecSize)
{
if (g_pOptions->GetWriteBuffer() > 0)
{
setvbuf(pOutFile, NULL, _IOFBF,
iRecSize > 0 && iRecSize < g_pOptions->GetWriteBuffer() * 1024 ?
iRecSize : g_pOptions->GetWriteBuffer() * 1024);
}
}
void ArticleWriter::Prepare()
{
BuildOutputFilename();
m_szResultFilename = m_pArticleInfo->GetResultFilename();
}
bool ArticleWriter::Start(Decoder::EFormat eFormat, const char* szFilename, long long iFileSize,
long long iArticleOffset, int iArticleSize)
{
char szErrBuf[256];
m_pOutFile = NULL;
m_eFormat = eFormat;
m_iArticleOffset = iArticleOffset;
m_iArticleSize = iArticleSize ? iArticleSize : m_pArticleInfo->GetSize();
m_iArticlePtr = 0;
// prepare file for writing
if (m_eFormat == Decoder::efYenc)
{
if (g_pOptions->GetDupeCheck() &&
m_pFileInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
!m_pFileInfo->GetNZBInfo()->GetManyDupeFiles())
{
m_pFileInfo->LockOutputFile();
bool bOutputInitialized = m_pFileInfo->GetOutputInitialized();
if (!g_pOptions->GetDirectWrite())
{
m_pFileInfo->SetOutputInitialized(true);
}
m_pFileInfo->UnlockOutputFile();
if (!bOutputInitialized && szFilename &&
Util::FileExists(m_pFileInfo->GetNZBInfo()->GetDestDir(), szFilename))
{
m_bDuplicate = true;
return false;
}
}
if (g_pOptions->GetDirectWrite())
{
m_pFileInfo->LockOutputFile();
if (!m_pFileInfo->GetOutputInitialized())
{
if (!CreateOutputFile(iFileSize))
{
m_pFileInfo->UnlockOutputFile();
return false;
}
m_pFileInfo->SetOutputInitialized(true);
}
m_pFileInfo->UnlockOutputFile();
}
}
// allocate cache buffer
if (g_pOptions->GetArticleCache() > 0 && g_pOptions->GetDecode() &&
(!g_pOptions->GetDirectWrite() || m_eFormat == Decoder::efYenc))
{
if (m_pArticleData)
{
free(m_pArticleData);
g_pArticleCache->Free(m_iArticleSize);
}
m_pArticleData = (char*)g_pArticleCache->Alloc(m_iArticleSize);
while (!m_pArticleData && g_pArticleCache->GetFlushing())
{
usleep(5 * 1000);
m_pArticleData = (char*)g_pArticleCache->Alloc(m_iArticleSize);
}
if (!m_pArticleData)
{
detail("Article cache is full, using disk for %s", m_szInfoName);
}
}
if (!m_pArticleData)
{
bool bDirectWrite = g_pOptions->GetDirectWrite() && m_eFormat == Decoder::efYenc;
const char* szFilename = bDirectWrite ? m_szOutputFilename : m_szTempFilename;
m_pOutFile = fopen(szFilename, bDirectWrite ? FOPEN_RBP : FOPEN_WB);
if (!m_pOutFile)
{
error("Could not %s file %s: %s", bDirectWrite ? "open" : "create", szFilename, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
return false;
}
SetWriteBuffer(m_pOutFile, m_pArticleInfo->GetSize());
if (g_pOptions->GetDirectWrite() && m_eFormat == Decoder::efYenc)
{
fseek(m_pOutFile, m_iArticleOffset, SEEK_SET);
}
}
return true;
}
bool ArticleWriter::Write(char* szBufffer, int iLen)
{
if (g_pOptions->GetDecode())
{
m_iArticlePtr += iLen;
}
if (g_pOptions->GetDecode() && m_pArticleData)
{
if (m_iArticlePtr > m_iArticleSize)
{
detail("Decoding %s failed: article size mismatch", m_szInfoName);
return false;
}
memcpy(m_pArticleData + m_iArticlePtr - iLen, szBufffer, iLen);
return true;
}
return fwrite(szBufffer, 1, iLen, m_pOutFile) > 0;
}
void ArticleWriter::Finish(bool bSuccess)
{
char szErrBuf[256];
if (m_pOutFile)
{
fclose(m_pOutFile);
m_pOutFile = NULL;
}
if (!bSuccess)
{
remove(m_szTempFilename);
remove(m_szResultFilename);
return;
}
bool bDirectWrite = g_pOptions->GetDirectWrite() && m_eFormat == Decoder::efYenc;
if (g_pOptions->GetDecode())
{
if (!bDirectWrite && !m_pArticleData)
{
if (!Util::MoveFile(m_szTempFilename, m_szResultFilename))
{
error("Could not rename file %s to %s: %s", m_szTempFilename, m_szResultFilename, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
remove(m_szTempFilename);
if (m_pArticleData)
{
if (m_iArticleSize != m_iArticlePtr)
{
m_pArticleData = (char*)g_pArticleCache->Realloc(m_pArticleData, m_iArticleSize, m_iArticlePtr);
}
g_pArticleCache->LockContent();
m_pArticleInfo->AttachSegment(m_pArticleData, m_iArticleOffset, m_iArticlePtr);
m_pFileInfo->SetCachedArticles(m_pFileInfo->GetCachedArticles() + 1);
g_pArticleCache->UnlockContent();
m_pArticleData = NULL;
}
else
{
m_pArticleInfo->SetSegmentOffset(m_iArticleOffset);
m_pArticleInfo->SetSegmentSize(m_iArticlePtr);
}
}
else
{
// rawmode
if (!Util::MoveFile(m_szTempFilename, m_szResultFilename))
{
error("Could not move file %s to %s: %s", m_szTempFilename, m_szResultFilename, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
}
/* creates the output file and its subdirectories */
bool ArticleWriter::CreateOutputFile(long long iSize)
{
if (g_pOptions->GetDirectWrite() && Util::FileExists(m_szOutputFilename) &&
Util::FileSize(m_szOutputFilename) == iSize)
{
// keep the existing file from a previous program session
return true;
}
// delete any old file possibly left over from a previous program session
remove(m_szOutputFilename);
// ensure the directory exists
char szDestDir[1024];
int iMaxlen = Util::BaseFileName(m_szOutputFilename) - m_szOutputFilename;
if (iMaxlen > 1024-1) iMaxlen = 1024-1;
strncpy(szDestDir, m_szOutputFilename, iMaxlen);
szDestDir[iMaxlen] = '\0';
char szErrBuf[1024];
if (!Util::ForceDirectories(szDestDir, szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s: %s", szDestDir, szErrBuf);
return false;
}
if (!Util::CreateSparseFile(m_szOutputFilename, iSize))
{
error("Could not create file %s", m_szOutputFilename);
return false;
}
return true;
}
void ArticleWriter::BuildOutputFilename()
{
char szFilename[1024];
snprintf(szFilename, 1024, "%s%i.%03i", g_pOptions->GetTempDir(), m_pFileInfo->GetID(), m_pArticleInfo->GetPartNumber());
szFilename[1024-1] = '\0';
m_pArticleInfo->SetResultFilename(szFilename);
char tmpname[1024];
snprintf(tmpname, 1024, "%s.tmp", szFilename);
tmpname[1024-1] = '\0';
m_szTempFilename = strdup(tmpname);
if (g_pOptions->GetDirectWrite())
{
m_pFileInfo->LockOutputFile();
if (m_pFileInfo->GetOutputFilename())
{
strncpy(szFilename, m_pFileInfo->GetOutputFilename(), 1024);
szFilename[1024-1] = '\0';
}
else
{
snprintf(szFilename, 1024, "%s%c%i.out.tmp", m_pFileInfo->GetNZBInfo()->GetDestDir(), (int)PATH_SEPARATOR, m_pFileInfo->GetID());
szFilename[1024-1] = '\0';
m_pFileInfo->SetOutputFilename(szFilename);
}
m_pFileInfo->UnlockOutputFile();
m_szOutputFilename = strdup(szFilename);
}
}
void ArticleWriter::CompleteFileParts()
{
debug("Completing file parts");
debug("ArticleFilename: %s", m_pFileInfo->GetFilename());
bool bDirectWrite = g_pOptions->GetDirectWrite() && m_pFileInfo->GetOutputInitialized();
char szErrBuf[256];
char szNZBName[1024];
char szNZBDestDir[1024];
// the locking is needed for accessing the members of NZBInfo
DownloadQueue::Lock();
strncpy(szNZBName, m_pFileInfo->GetNZBInfo()->GetName(), 1024);
strncpy(szNZBDestDir, m_pFileInfo->GetNZBInfo()->GetDestDir(), 1024);
DownloadQueue::Unlock();
szNZBName[1024-1] = '\0';
szNZBDestDir[1024-1] = '\0';
char szInfoFilename[1024];
snprintf(szInfoFilename, 1024, "%s%c%s", szNZBName, (int)PATH_SEPARATOR, m_pFileInfo->GetFilename());
szInfoFilename[1024-1] = '\0';
bool bCached = m_pFileInfo->GetCachedArticles() > 0;
if (!g_pOptions->GetDecode())
{
detail("Moving articles for %s", szInfoFilename);
}
else if (bDirectWrite && bCached)
{
detail("Writing articles for %s", szInfoFilename);
}
else if (bDirectWrite)
{
detail("Checking articles for %s", szInfoFilename);
}
else
{
detail("Joining articles for %s", szInfoFilename);
}
// Ensure the DstDir is created
if (!Util::ForceDirectories(szNZBDestDir, szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s: %s", szNZBDestDir, szErrBuf);
return;
}
char ofn[1024];
Util::MakeUniqueFilename(ofn, 1024, szNZBDestDir, m_pFileInfo->GetFilename());
FILE* outfile = NULL;
char tmpdestfile[1024];
snprintf(tmpdestfile, 1024, "%s.tmp", ofn);
tmpdestfile[1024-1] = '\0';
if (g_pOptions->GetDecode() && !bDirectWrite)
{
remove(tmpdestfile);
outfile = fopen(tmpdestfile, FOPEN_WBP);
if (!outfile)
{
error("Could not create file %s: %s", tmpdestfile, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
return;
}
}
else if (bDirectWrite && bCached)
{
outfile = fopen(m_szOutputFilename, FOPEN_RBP);
if (!outfile)
{
error("Could not open file %s: %s", m_szOutputFilename, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
return;
}
strncpy(tmpdestfile, m_szOutputFilename, 1024);
tmpdestfile[1024-1] = '\0';
}
else if (!g_pOptions->GetDecode())
{
remove(tmpdestfile);
if (!Util::CreateDirectory(ofn))
{
error("Could not create directory %s: %s", ofn, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
return;
}
}
if (outfile)
{
SetWriteBuffer(outfile, 0);
}
if (bCached)
{
g_pArticleCache->LockFlush();
m_bFlushing = true;
}
static const int BUFFER_SIZE = 1024 * 64;
char* buffer = NULL;
bool bFirstArticle = true;
unsigned long lCrc = 0;
if (g_pOptions->GetDecode() && !bDirectWrite)
{
buffer = (char*)malloc(BUFFER_SIZE);
}
for (FileInfo::Articles::iterator it = m_pFileInfo->GetArticles()->begin(); it != m_pFileInfo->GetArticles()->end(); it++)
{
ArticleInfo* pa = *it;
if (pa->GetStatus() != ArticleInfo::aiFinished)
{
continue;
}
if (g_pOptions->GetDecode() && !bDirectWrite && pa->GetSegmentOffset() > -1 &&
pa->GetSegmentOffset() > ftell(outfile) && ftell(outfile) > -1)
{
memset(buffer, 0, BUFFER_SIZE);
while (pa->GetSegmentOffset() > ftell(outfile) && ftell(outfile) > -1 &&
fwrite(buffer, 1, (std::min)((int)(pa->GetSegmentOffset() - ftell(outfile)), BUFFER_SIZE), outfile)) ;
}
if (pa->GetSegmentContent())
{
fseek(outfile, pa->GetSegmentOffset(), SEEK_SET);
fwrite(pa->GetSegmentContent(), 1, pa->GetSegmentSize(), outfile);
pa->DiscardSegment();
SetLastUpdateTimeNow();
}
else if (g_pOptions->GetDecode() && !bDirectWrite)
{
FILE* infile = pa->GetResultFilename() ? fopen(pa->GetResultFilename(), FOPEN_RB) : NULL;
if (infile)
{
int cnt = BUFFER_SIZE;
while (cnt == BUFFER_SIZE)
{
cnt = (int)fread(buffer, 1, BUFFER_SIZE, infile);
fwrite(buffer, 1, cnt, outfile);
SetLastUpdateTimeNow();
}
fclose(infile);
}
else
{
m_pFileInfo->SetFailedArticles(m_pFileInfo->GetFailedArticles() + 1);
m_pFileInfo->SetSuccessArticles(m_pFileInfo->GetSuccessArticles() - 1);
error("Could not find file %s for %s%c%s [%i/%i]",
pa->GetResultFilename(), szNZBName, (int)PATH_SEPARATOR, m_pFileInfo->GetFilename(),
pa->GetPartNumber(), (int)m_pFileInfo->GetArticles()->size());
}
}
else if (!g_pOptions->GetDecode())
{
char dstFileName[1024];
snprintf(dstFileName, 1024, "%s%c%03i", ofn, (int)PATH_SEPARATOR, pa->GetPartNumber());
dstFileName[1024-1] = '\0';
if (!Util::MoveFile(pa->GetResultFilename(), dstFileName))
{
error("Could not move file %s to %s: %s", pa->GetResultFilename(), dstFileName, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
if (m_eFormat == Decoder::efYenc)
{
lCrc = bFirstArticle ? pa->GetCrc() : Util::Crc32Combine(lCrc, pa->GetCrc(), pa->GetSegmentSize());
bFirstArticle = false;
}
}
free(buffer);
if (bCached)
{
g_pArticleCache->UnlockFlush();
m_bFlushing = false;
}
if (outfile)
{
fclose(outfile);
if (!bDirectWrite && !Util::MoveFile(tmpdestfile, ofn))
{
error("Could not move file %s to %s: %s", tmpdestfile, ofn, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
if (bDirectWrite)
{
if (!Util::MoveFile(m_szOutputFilename, ofn))
{
error("Could not move file %s to %s: %s", m_szOutputFilename, ofn, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
// if the destination directory was changed, delete the old directory (if empty)
int iLen = strlen(szNZBDestDir);
if (!(!strncmp(szNZBDestDir, m_szOutputFilename, iLen) &&
(m_szOutputFilename[iLen] == PATH_SEPARATOR || m_szOutputFilename[iLen] == ALT_PATH_SEPARATOR)))
{
debug("Checking old dir for: %s", m_szOutputFilename);
char szOldDestDir[1024];
int iMaxlen = Util::BaseFileName(m_szOutputFilename) - m_szOutputFilename;
if (iMaxlen > 1024-1) iMaxlen = 1024-1;
strncpy(szOldDestDir, m_szOutputFilename, iMaxlen);
szOldDestDir[iMaxlen] = '\0';
if (Util::DirEmpty(szOldDestDir))
{
debug("Deleting old dir: %s", szOldDestDir);
rmdir(szOldDestDir);
}
}
}
if (!bDirectWrite)
{
for (FileInfo::Articles::iterator it = m_pFileInfo->GetArticles()->begin(); it != m_pFileInfo->GetArticles()->end(); it++)
{
ArticleInfo* pa = *it;
remove(pa->GetResultFilename());
}
}
if (m_pFileInfo->GetMissedArticles() == 0 && m_pFileInfo->GetFailedArticles() == 0)
{
info("Successfully downloaded %s", szInfoFilename);
}
else
{
warn("%i of %i article downloads failed for \"%s\"", m_pFileInfo->GetMissedArticles() + m_pFileInfo->GetFailedArticles(),
m_pFileInfo->GetTotalArticles(), szInfoFilename);
if (g_pOptions->GetCreateBrokenLog())
{
char szBrokenLogName[1024];
snprintf(szBrokenLogName, 1024, "%s%c_brokenlog.txt", szNZBDestDir, (int)PATH_SEPARATOR);
szBrokenLogName[1024-1] = '\0';
FILE* file = fopen(szBrokenLogName, FOPEN_AB);
fprintf(file, "%s (%i/%i)%s", m_pFileInfo->GetFilename(), m_pFileInfo->GetSuccessArticles(),
m_pFileInfo->GetTotalArticles(), LINE_ENDING);
fclose(file);
}
lCrc = 0;
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->DiscardFile(m_pFileInfo, false, true, false);
g_pDiskState->SaveFileState(m_pFileInfo, true);
}
}
CompletedFile::EStatus eFileStatus = m_pFileInfo->GetMissedArticles() == 0 &&
m_pFileInfo->GetFailedArticles() == 0 ? CompletedFile::cfSuccess :
m_pFileInfo->GetSuccessArticles() > 0 ? CompletedFile::cfPartial :
CompletedFile::cfFailure;
// the locking is needed for accessing the members of NZBInfo
DownloadQueue::Lock();
m_pFileInfo->GetNZBInfo()->GetCompletedFiles()->push_back(new CompletedFile(
m_pFileInfo->GetID(), Util::BaseFileName(ofn), eFileStatus, lCrc));
if (strcmp(m_pFileInfo->GetNZBInfo()->GetDestDir(), szNZBDestDir))
{
// destination directory was changed during completion, need to move the file
MoveCompletedFiles(m_pFileInfo->GetNZBInfo(), szNZBDestDir);
}
DownloadQueue::Unlock();
}
void ArticleWriter::FlushCache()
{
detail("Flushing cache for %s", m_szInfoName);
bool bDirectWrite = g_pOptions->GetDirectWrite() && m_pFileInfo->GetOutputInitialized();
FILE* outfile = NULL;
bool bNeedBufFile = false;
char szDestFile[1024];
char szErrBuf[256];
int iFlushedArticles = 0;
long long iFlushedSize = 0;
g_pArticleCache->LockFlush();
FileInfo::Articles cachedArticles;
cachedArticles.reserve(m_pFileInfo->GetArticles()->size());
g_pArticleCache->LockContent();
for (FileInfo::Articles::iterator it = m_pFileInfo->GetArticles()->begin(); it != m_pFileInfo->GetArticles()->end(); it++)
{
ArticleInfo* pa = *it;
if (pa->GetSegmentContent())
{
cachedArticles.push_back(pa);
}
}
g_pArticleCache->UnlockContent();
for (FileInfo::Articles::iterator it = cachedArticles.begin(); it != cachedArticles.end(); it++)
{
if (m_pFileInfo->GetDeleted())
{
// the file was deleted during flushing: stop flushing immediately
break;
}
ArticleInfo* pa = *it;
if (bDirectWrite && !outfile)
{
outfile = fopen(m_pFileInfo->GetOutputFilename(), FOPEN_RBP);
if (!outfile)
{
error("Could not open file %s: %s", m_pFileInfo->GetOutputFilename(), Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
break;
}
bNeedBufFile = true;
}
if (!bDirectWrite)
{
snprintf(szDestFile, 1024, "%s.tmp", pa->GetResultFilename());
szDestFile[1024-1] = '\0';
outfile = fopen(szDestFile, FOPEN_WB);
if (!outfile)
{
error("Could not create file %s: %s", "create", szDestFile, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
break;
}
bNeedBufFile = true;
}
if (outfile && bNeedBufFile)
{
SetWriteBuffer(outfile, 0);
bNeedBufFile = false;
}
if (bDirectWrite)
{
fseek(outfile, pa->GetSegmentOffset(), SEEK_SET);
}
fwrite(pa->GetSegmentContent(), 1, pa->GetSegmentSize(), outfile);
iFlushedSize += pa->GetSegmentSize();
iFlushedArticles++;
pa->DiscardSegment();
if (!bDirectWrite)
{
fclose(outfile);
outfile = NULL;
if (!Util::MoveFile(szDestFile, pa->GetResultFilename()))
{
error("Could not rename file %s to %s: %s", szDestFile, pa->GetResultFilename(), Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
}
if (outfile)
{
fclose(outfile);
}
g_pArticleCache->LockContent();
m_pFileInfo->SetCachedArticles(m_pFileInfo->GetCachedArticles() - iFlushedArticles);
g_pArticleCache->UnlockContent();
g_pArticleCache->UnlockFlush();
detail("Saved %i articles (%.2f MB) from cache into disk for %s", iFlushedArticles, (float)(iFlushedSize / 1024.0 / 1024.0), m_szInfoName);
}
bool ArticleWriter::MoveCompletedFiles(NZBInfo* pNZBInfo, const char* szOldDestDir)
{
if (pNZBInfo->GetCompletedFiles()->empty())
{
return true;
}
// Ensure the DstDir is created
char szErrBuf[1024];
if (!Util::ForceDirectories(pNZBInfo->GetDestDir(), szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s: %s", pNZBInfo->GetDestDir(), szErrBuf);
return false;
}
// move already downloaded files to new destination
for (CompletedFiles::iterator it = pNZBInfo->GetCompletedFiles()->begin(); it != pNZBInfo->GetCompletedFiles()->end(); it++)
{
CompletedFile* pCompletedFile = *it;
char szOldFileName[1024];
snprintf(szOldFileName, 1024, "%s%c%s", szOldDestDir, (int)PATH_SEPARATOR, pCompletedFile->GetFileName());
szOldFileName[1024-1] = '\0';
char szNewFileName[1024];
snprintf(szNewFileName, 1024, "%s%c%s", pNZBInfo->GetDestDir(), (int)PATH_SEPARATOR, pCompletedFile->GetFileName());
szNewFileName[1024-1] = '\0';
// check if file was not moved already
if (strcmp(szOldFileName, szNewFileName))
{
// prevent overwriting of existing files
Util::MakeUniqueFilename(szNewFileName, 1024, pNZBInfo->GetDestDir(), pCompletedFile->GetFileName());
detail("Moving file %s to %s", szOldFileName, szNewFileName);
if (!Util::MoveFile(szOldFileName, szNewFileName))
{
char szErrBuf[256];
error("Could not move file %s to %s: %s", szOldFileName, szNewFileName, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
}
// move brokenlog.txt
if (g_pOptions->GetCreateBrokenLog())
{
char szOldBrokenLogName[1024];
snprintf(szOldBrokenLogName, 1024, "%s%c_brokenlog.txt", szOldDestDir, (int)PATH_SEPARATOR);
szOldBrokenLogName[1024-1] = '\0';
if (Util::FileExists(szOldBrokenLogName))
{
char szBrokenLogName[1024];
snprintf(szBrokenLogName, 1024, "%s%c_brokenlog.txt", pNZBInfo->GetDestDir(), (int)PATH_SEPARATOR);
szBrokenLogName[1024-1] = '\0';
detail("Moving file %s to %s", szOldBrokenLogName, szBrokenLogName);
if (Util::FileExists(szBrokenLogName))
{
// copy content to existing new file, then delete old file
FILE* outfile;
outfile = fopen(szBrokenLogName, FOPEN_AB);
if (outfile)
{
FILE* infile;
infile = fopen(szOldBrokenLogName, FOPEN_RB);
if (infile)
{
static const int BUFFER_SIZE = 1024 * 50;
int cnt = BUFFER_SIZE;
char* buffer = (char*)malloc(BUFFER_SIZE);
while (cnt == BUFFER_SIZE)
{
cnt = (int)fread(buffer, 1, BUFFER_SIZE, infile);
fwrite(buffer, 1, cnt, outfile);
}
fclose(infile);
free(buffer);
remove(szOldBrokenLogName);
}
else
{
error("Could not open file %s", szOldBrokenLogName);
}
fclose(outfile);
}
else
{
error("Could not open file %s", szBrokenLogName);
}
}
else
{
// move to new destination
if (!Util::MoveFile(szOldBrokenLogName, szBrokenLogName))
{
char szErrBuf[256];
error("Could not move file %s to %s: %s", szOldBrokenLogName, szBrokenLogName, Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
}
}
}
}
// delete old directory (if empty)
if (Util::DirEmpty(szOldDestDir))
{
// check if there are pending writes into directory
bool bPendingWrites = false;
for (FileList::iterator it = pNZBInfo->GetFileList()->begin(); it != pNZBInfo->GetFileList()->end() && !bPendingWrites; it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetActiveDownloads() > 0)
{
pFileInfo->LockOutputFile();
bPendingWrites = pFileInfo->GetOutputInitialized() && !Util::EmptyStr(pFileInfo->GetOutputFilename());
pFileInfo->UnlockOutputFile();
}
else
{
bPendingWrites = pFileInfo->GetOutputInitialized() && !Util::EmptyStr(pFileInfo->GetOutputFilename());
}
}
if (!bPendingWrites)
{
rmdir(szOldDestDir);
}
}
return true;
}
ArticleCache::ArticleCache()
{
m_iAllocated = 0;
m_bFlushing = false;
m_pFileInfo = NULL;
}
void* ArticleCache::Alloc(int iSize)
{
m_mutexAlloc.Lock();
void* p = NULL;
if (m_iAllocated + iSize <= (size_t)g_pOptions->GetArticleCache() * 1024 * 1024)
{
p = malloc(iSize);
if (p)
{
if (!m_iAllocated && g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode() && g_pOptions->GetContinuePartial())
{
g_pDiskState->WriteCacheFlag();
}
m_iAllocated += iSize;
}
}
m_mutexAlloc.Unlock();
return p;
}
void* ArticleCache::Realloc(void* buf, int iOldSize, int iNewSize)
{
m_mutexAlloc.Lock();
void* p = realloc(buf, iNewSize);
if (p)
{
m_iAllocated += iNewSize - iOldSize;
}
else
{
p = buf;
}
m_mutexAlloc.Unlock();
return p;
}
void ArticleCache::Free(int iSize)
{
m_mutexAlloc.Lock();
m_iAllocated -= iSize;
if (!m_iAllocated && g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode() && g_pOptions->GetContinuePartial())
{
g_pDiskState->DeleteCacheFlag();
}
m_mutexAlloc.Unlock();
}
void ArticleCache::LockFlush()
{
m_mutexFlush.Lock();
m_bFlushing = true;
}
void ArticleCache::UnlockFlush()
{
m_mutexFlush.Unlock();
m_bFlushing = false;
}
void ArticleCache::Run()
{
// automatically flush the cache if it is filled to 90% (only in DirectWrite mode)
size_t iFillThreshold = (size_t)g_pOptions->GetArticleCache() * 1024 * 1024 / 100 * 90;
int iResetCounter = 0;
bool bJustFlushed = false;
while (!IsStopped() || m_iAllocated > 0)
{
if ((bJustFlushed || iResetCounter >= 1000 || IsStopped() ||
(g_pOptions->GetDirectWrite() && m_iAllocated >= iFillThreshold)) &&
m_iAllocated > 0)
{
bJustFlushed = CheckFlush(m_iAllocated >= iFillThreshold);
iResetCounter = 0;
}
else
{
usleep(5 * 1000);
iResetCounter += 5;
}
}
}
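A worked example of the threshold arithmetic above, using a hypothetical option value: the ArticleCache option is interpreted as megabytes (matching the capacity check in ArticleCache::Alloc), so with ArticleCache=200 the threshold is 200 * 1024 * 1024 / 100 * 90 = 188,743,680 bytes, i.e. 90% of 200 MiB. In DirectWrite mode a flush is therefore triggered once roughly 180 MiB of article data sits in the cache; otherwise a flush check runs roughly once per second (iResetCounter counts sleep milliseconds), after a previous flush, or when the thread is being stopped.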
bool ArticleCache::CheckFlush(bool bFlushEverything)
{
debug("Checking cache, Allocated: %i, FlushEverything: %i", m_iAllocated, (int)bFlushEverything);
char szInfoName[1024];
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end() && !m_pFileInfo; it++)
{
NZBInfo* pNZBInfo = *it;
for (FileList::iterator it2 = pNZBInfo->GetFileList()->begin(); it2 != pNZBInfo->GetFileList()->end(); it2++)
{
FileInfo* pFileInfo = *it2;
if (pFileInfo->GetCachedArticles() > 0 && (pFileInfo->GetActiveDownloads() == 0 || bFlushEverything))
{
m_pFileInfo = pFileInfo;
snprintf(szInfoName, 1024, "%s%c%s", m_pFileInfo->GetNZBInfo()->GetName(), (int)PATH_SEPARATOR, m_pFileInfo->GetFilename());
szInfoName[1024-1] = '\0';
break;
}
}
}
DownloadQueue::Unlock();
if (m_pFileInfo)
{
ArticleWriter* pArticleWriter = new ArticleWriter();
pArticleWriter->SetFileInfo(m_pFileInfo);
pArticleWriter->SetInfoName(szInfoName);
pArticleWriter->FlushCache();
delete pArticleWriter;
m_pFileInfo = NULL;
return true;
}
debug("Checking cache... nothing to flush");
return false;
}

daemon/nntp/ArticleWriter.h (new file, 102 lines)

@@ -0,0 +1,102 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef ARTICLEWRITER_H
#define ARTICLEWRITER_H
#include "DownloadInfo.h"
#include "Decoder.h"
class ArticleWriter
{
private:
FileInfo* m_pFileInfo;
ArticleInfo* m_pArticleInfo;
FILE* m_pOutFile;
char* m_szTempFilename;
char* m_szOutputFilename;
const char* m_szResultFilename;
Decoder::EFormat m_eFormat;
char* m_pArticleData;
long long m_iArticleOffset;
int m_iArticleSize;
int m_iArticlePtr;
bool m_bFlushing;
bool m_bDuplicate;
char* m_szInfoName;
bool PrepareFile(char* szLine);
bool CreateOutputFile(long long iSize);
void BuildOutputFilename();
bool IsFileCached();
void SetWriteBuffer(FILE* pOutFile, int iRecSize);
protected:
virtual void SetLastUpdateTimeNow() {}
public:
ArticleWriter();
~ArticleWriter();
void SetInfoName(const char* szInfoName);
void SetFileInfo(FileInfo* pFileInfo) { m_pFileInfo = pFileInfo; }
void SetArticleInfo(ArticleInfo* pArticleInfo) { m_pArticleInfo = pArticleInfo; }
void Prepare();
bool Start(Decoder::EFormat eFormat, const char* szFilename, long long iFileSize, long long iArticleOffset, int iArticleSize);
bool Write(char* szBufffer, int iLen);
void Finish(bool bSuccess);
bool GetDuplicate() { return m_bDuplicate; }
void CompleteFileParts();
static bool MoveCompletedFiles(NZBInfo* pNZBInfo, const char* szOldDestDir);
void FlushCache();
};
class ArticleCache : public Thread
{
private:
size_t m_iAllocated;
bool m_bFlushing;
Mutex m_mutexAlloc;
Mutex m_mutexFlush;
Mutex m_mutexContent;
FileInfo* m_pFileInfo;
bool CheckFlush(bool bFlushEverything);
public:
ArticleCache();
virtual void Run();
void* Alloc(int iSize);
void* Realloc(void* buf, int iOldSize, int iNewSize);
void Free(int iSize);
void LockFlush();
void UnlockFlush();
void LockContent() { m_mutexContent.Lock(); }
void UnlockContent() { m_mutexContent.Unlock(); }
bool GetFlushing() { return m_bFlushing; }
size_t GetAllocated() { return m_iAllocated; }
bool FileBusy(FileInfo* pFileInfo) { return pFileInfo == m_pFileInfo; }
};
#endif


@@ -1,8 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -45,14 +44,11 @@
#include "Util.h"
const char* Decoder::FormatNames[] = { "Unknown", "yEnc", "UU" };
unsigned int YDecoder::crc_tab[256];
Decoder::Decoder()
{
debug("Creating Decoder");
m_szSrcFilename = NULL;
m_szDestFilename = NULL;
m_szArticleFilename = NULL;
}
@@ -60,18 +56,12 @@ Decoder::~ Decoder()
{
debug("Destroying Decoder");
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
}
void Decoder::Clear()
{
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
m_szArticleFilename = NULL;
}
@@ -113,17 +103,6 @@ Decoder::EFormat Decoder::DetectFormat(const char* buffer, int len)
* YDecoder: fast implementation of yEnc-Decoder
*/
void YDecoder::Init()
{
debug("Initializing global decoder");
crc32gentab();
}
void YDecoder::Final()
{
debug("Finalizing global Decoder");
}
YDecoder::YDecoder()
{
Clear();
@@ -141,69 +120,13 @@ void YDecoder::Clear()
m_lExpectedCRC = 0;
m_lCalculatedCRC = 0xFFFFFFFF;
m_iBegin = 0;
m_iEnd = 0xFFFFFFFF;
m_iEnd = 0;
m_iSize = 0;
m_iEndSize = 0;
m_bAutoSeek = false;
m_bNeedSetPos = false;
m_bCrcCheck = false;
}
/* from crc32.c (http://www.koders.com/c/fid699AFE0A656F0022C9D6B9D1743E697B69CE5815.aspx)
*
* (c) 1999,2000 Krzysztof Dabrowski
* (c) 1999,2000 ElysiuM deeZine
* Released under GPL (thanks)
*
* chksum_crc32gentab() -- to a global crc_tab[256], this one will
* calculate the crcTable for crc32-checksums.
* it is generated to the polynom [..]
*/
void YDecoder::crc32gentab()
{
unsigned long crc, poly;
int i, j;
poly = 0xEDB88320L;
for (i = 0; i < 256; i++)
{
crc = i;
for (j = 8; j > 0; j--)
{
if (crc & 1)
{
crc = (crc >> 1) ^ poly;
}
else
{
crc >>= 1;
}
}
crc_tab[i] = crc;
}
}
/* This is modified version of chksum_crc() from
* crc32.c (http://www.koders.com/c/fid699AFE0A656F0022C9D6B9D1743E697B69CE5815.aspx)
* (c) 1999,2000 Krzysztof Dabrowski
* (c) 1999,2000 ElysiuM deeZine
*
* chksum_crc() -- to a given block, this one calculates the
* crc32-checksum until the length is
* reached. the crc32-checksum will be
* the result.
*/
unsigned long YDecoder::crc32m(unsigned long startCrc, unsigned char *block, unsigned int length)
{
register unsigned long crc = startCrc;
for (unsigned long i = 0; i < length; i++)
{
crc = ((crc >> 8) & 0x00FFFFFF) ^ crc_tab[(crc ^ *block++) & 0xFF];
}
return crc;
}
unsigned int YDecoder::DecodeBuffer(char* buffer)
int YDecoder::DecodeBuffer(char* buffer, int len)
{
if (m_bBody && !m_bEnd)
{
@@ -221,7 +144,7 @@ unsigned int YDecoder::DecodeBuffer(char* buffer)
if (pb)
{
pb += 6; //=strlen(" size=")
m_iEndSize = (int)atoi(pb);
m_iEndSize = (long long)atoll(pb);
}
return 0;
}
@@ -253,9 +176,9 @@ BreakLoop:
if (m_bCrcCheck)
{
m_lCalculatedCRC = crc32m(m_lCalculatedCRC, (unsigned char *)buffer, (unsigned int)(optr - buffer));
m_lCalculatedCRC = Util::Crc32m(m_lCalculatedCRC, (unsigned char *)buffer, (unsigned int)(optr - buffer));
}
return (unsigned int)(optr - buffer);
return optr - buffer;
}
else
{
@@ -268,10 +191,7 @@ BreakLoop:
pb += 6; //=strlen(" name=")
char* pe;
for (pe = pb; *pe != '\0' && *pe != '\n' && *pe != '\r'; pe++) ;
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
m_szArticleFilename = (char*)malloc(pe - pb + 1);
strncpy(m_szArticleFilename, pb, pe - pb);
m_szArticleFilename[pe - pb] = '\0';
@@ -280,7 +200,7 @@ BreakLoop:
if (pb)
{
pb += 6; //=strlen(" size=")
m_iSize = (int)atoi(pb);
m_iSize = (long long)atoll(pb);
}
m_bPart = strstr(buffer, " part=");
if (!m_bPart)
@@ -298,13 +218,13 @@ BreakLoop:
if (pb)
{
pb += 7; //=strlen(" begin=")
m_iBegin = (int)atoi(pb);
m_iBegin = (long long)atoll(pb);
}
pb = strstr(buffer, " end=");
if (pb)
{
pb += 5; //=strlen(" end=")
m_iEnd = (int)atoi(pb);
m_iEnd = (long long)atoll(pb);
}
}
}
@@ -312,28 +232,6 @@ BreakLoop:
return 0;
}
bool YDecoder::Write(char* buffer, int len, FILE* outfile)
{
unsigned int wcnt = DecodeBuffer(buffer);
if (wcnt > 0)
{
if (m_bNeedSetPos)
{
if (m_iBegin == 0 || m_iEnd == 0xFFFFFFFF || !outfile)
{
return false;
}
if (fseek(outfile, m_iBegin - 1, SEEK_SET))
{
return false;
}
m_bNeedSetPos = false;
}
fwrite(buffer, 1, wcnt, outfile);
}
return true;
}
Decoder::EStatus YDecoder::Check()
{
m_lCalculatedCRC ^= 0xFFFFFFFF;
@@ -388,7 +286,7 @@ void UDecoder::Clear()
#define UU_DECODE_CHAR(c) (c == '`' ? 0 : (((c) - ' ') & 077))
unsigned int UDecoder::DecodeBuffer(char* buffer, int len)
int UDecoder::DecodeBuffer(char* buffer, int len)
{
if (!m_bBody)
{
@@ -404,10 +302,7 @@ unsigned int UDecoder::DecodeBuffer(char* buffer, int len)
// extracting filename
char* pe;
for (pe = pb; *pe != '\0' && *pe != '\n' && *pe != '\r'; pe++) ;
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
m_szArticleFilename = (char*)malloc(pe - pb + 1);
strncpy(m_szArticleFilename, pb, pe - pb);
m_szArticleFilename[pe - pb] = '\0';
@@ -458,22 +353,12 @@ unsigned int UDecoder::DecodeBuffer(char* buffer, int len)
}
}
return (unsigned int)(optr - buffer);
return optr - buffer;
}
return 0;
}
bool UDecoder::Write(char* buffer, int len, FILE* outfile)
{
unsigned int wcnt = DecodeBuffer(buffer, len);
if (wcnt > 0)
{
fwrite(buffer, 1, wcnt, outfile);
}
return true;
}
Decoder::EStatus UDecoder::Check()
{
if (!m_bBody)


@@ -1,8 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -50,8 +49,6 @@ public:
static const char* FormatNames[];
protected:
const char* m_szSrcFilename;
const char* m_szDestFilename;
char* m_szArticleFilename;
public:
@@ -59,9 +56,7 @@ public:
virtual ~Decoder();
virtual EStatus Check() = 0;
virtual void Clear();
virtual bool Write(char* buffer, int len, FILE* outfile) = 0;
void SetSrcFilename(const char* szSrcFilename) { m_szSrcFilename = szSrcFilename; }
void SetDestFilename(const char* szDestFilename) { m_szDestFilename = szDestFilename; }
virtual int DecodeBuffer(char* buffer, int len) = 0;
const char* GetArticleFilename() { return m_szArticleFilename; }
static EFormat DetectFormat(const char* buffer, int len);
};
@@ -69,7 +64,6 @@ public:
class YDecoder: public Decoder
{
protected:
static unsigned int crc_tab[256];
bool m_bBegin;
bool m_bPart;
bool m_bBody;
@@ -77,28 +71,23 @@ protected:
bool m_bCrc;
unsigned long m_lExpectedCRC;
unsigned long m_lCalculatedCRC;
unsigned long m_iBegin;
unsigned long m_iEnd;
unsigned long m_iSize;
unsigned long m_iEndSize;
bool m_bAutoSeek;
bool m_bNeedSetPos;
long long m_iBegin;
long long m_iEnd;
long long m_iSize;
long long m_iEndSize;
bool m_bCrcCheck;
unsigned int DecodeBuffer(char* buffer);
static void crc32gentab();
unsigned long crc32m(unsigned long startCrc, unsigned char *block, unsigned int length);
public:
YDecoder();
virtual EStatus Check();
virtual void Clear();
virtual bool Write(char* buffer, int len, FILE* outfile);
void SetAutoSeek(bool bAutoSeek) { m_bAutoSeek = m_bNeedSetPos = bAutoSeek; }
virtual int DecodeBuffer(char* buffer, int len);
void SetCrcCheck(bool bCrcCheck) { m_bCrcCheck = bCrcCheck; }
static void Init();
static void Final();
long long GetBegin() { return m_iBegin; }
long long GetEnd() { return m_iEnd; }
long long GetSize() { return m_iSize; }
unsigned long GetExpectedCrc() { return m_lExpectedCRC; }
unsigned long GetCalculatedCrc() { return m_lCalculatedCRC; }
};
class UDecoder: public Decoder
@@ -107,13 +96,11 @@ private:
bool m_bBody;
bool m_bEnd;
unsigned int DecodeBuffer(char* buffer, int len);
public:
UDecoder();
virtual EStatus Check();
virtual void Clear();
virtual bool Write(char* buffer, int len, FILE* outfile);
virtual int DecodeBuffer(char* buffer, int len);
};
#endif


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -34,7 +34,8 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <stdio.h>
#include <ctype.h>
#include "nzbget.h"
#include "Log.h"
@@ -55,11 +56,7 @@ NNTPConnection::NNTPConnection(NewsServer* pNewsServer) : Connection(pNewsServer
NNTPConnection::~NNTPConnection()
{
if (m_szActiveGroup)
{
free(m_szActiveGroup);
m_szActiveGroup = NULL;
}
free(m_szActiveGroup);
free(m_szLineBuf);
}
@@ -85,17 +82,14 @@ const char* NNTPConnection::Request(const char* req)
{
debug("%s requested authorization", GetHost());
//authentication required!
if (!Authenticate())
{
m_bAuthError = true;
return NULL;
}
//try again
WriteLine(req);
answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
return answer;
}
return answer;
@@ -103,13 +97,16 @@ const char* NNTPConnection::Request(const char* req)
bool NNTPConnection::Authenticate()
{
if (!(m_pNewsServer)->GetUser() ||
!(m_pNewsServer)->GetPassword())
if (strlen(m_pNewsServer->GetUser()) == 0 || strlen(m_pNewsServer->GetPassword()) == 0)
{
return true;
error("%c%s (%s) requested authorization but username/password are not set in settings",
toupper(m_pNewsServer->GetName()[0]), m_pNewsServer->GetName() + 1, m_pNewsServer->GetHost());
m_bAuthError = true;
return false;
}
return AuthInfoUser();
m_bAuthError = !AuthInfoUser(0);
return !m_bAuthError;
}
bool NNTPConnection::AuthInfoUser(int iRecur)
@@ -128,7 +125,7 @@ bool NNTPConnection::AuthInfoUser(int iRecur)
char* answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!answer)
{
ReportError("Authorization for %s failed: Connection closed by remote host", GetHost(), false, 0);
ReportErrorAnswer("Authorization for server%i (%s) failed: Connection closed by remote host", NULL);
return false;
}
@@ -150,7 +147,7 @@ bool NNTPConnection::AuthInfoUser(int iRecur)
if (GetStatus() != csCancelled)
{
ReportErrorAnswer("Authorization for %s failed (Answer: %s)", answer);
ReportErrorAnswer("Authorization for server%i (%s) failed (Answer: %s)", answer);
}
return false;
}
@@ -171,7 +168,7 @@ bool NNTPConnection::AuthInfoPass(int iRecur)
char* answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!answer)
{
ReportError("Authorization for %s failed: Connection closed by remote host", GetHost(), false, 0);
ReportErrorAnswer("Authorization for server%i (%s) failed: Connection closed by remote host", NULL);
return false;
}
else if (!strncmp(answer, "2", 1))
@@ -188,7 +185,7 @@ bool NNTPConnection::AuthInfoPass(int iRecur)
if (GetStatus() != csCancelled)
{
ReportErrorAnswer("Authorization for %s failed (Answer: %s)", answer);
ReportErrorAnswer("Authorization for server%i (%s) failed (Answer: %s)", answer);
}
return false;
}
@@ -207,19 +204,11 @@ const char* NNTPConnection::JoinGroup(const char* grp)
tmp[1024-1] = '\0';
const char* answer = Request(tmp);
if (m_bAuthError)
{
return answer;
}
if (answer && !strncmp(answer, "2", 1))
{
debug("Changed group to %s on %s", grp, GetHost());
if (m_szActiveGroup)
{
free(m_szActiveGroup);
}
free(m_szActiveGroup);
m_szActiveGroup = strdup(grp);
}
else
@@ -230,26 +219,39 @@ const char* NNTPConnection::JoinGroup(const char* grp)
return answer;
}
bool NNTPConnection::DoConnect()
bool NNTPConnection::Connect()
{
debug("Opening connection to %s", GetHost());
bool res = Connection::DoConnect();
if (!res)
if (m_eStatus == csConnected)
{
return res;
return true;
}
char* answer = DoReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!Connection::Connect())
{
return false;
}
char* answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!answer)
{
ReportError("Connection to %s failed: Connection closed by remote host", GetHost(), false, 0);
ReportErrorAnswer("Connection to server%i (%s) failed: Connection closed by remote host", NULL);
Disconnect();
return false;
}
if (strncmp(answer, "2", 1))
{
ReportErrorAnswer("Connection to %s failed (Answer: %s)", answer);
ReportErrorAnswer("Connection to server%i (%s) failed (Answer: %s)", answer);
Disconnect();
return false;
}
if ((strlen(m_pNewsServer->GetUser()) > 0 && strlen(m_pNewsServer->GetPassword()) > 0) &&
!Authenticate())
{
return false;
}
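For context, a minimal sketch of the NNTP exchange this connect-and-authenticate sequence implements (host name and credentials below are made up):
// Illustrative sketch only (per RFC 3977/4643); values are invented.
//   S: 200 news.example.com service ready      <- greeting must start with "2"
//   C: AUTHINFO USER alice                     <- only if user and password are configured
//   S: 381 Password required
//   C: AUTHINFO PASS secret
//   S: 281 Authentication accepted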
@@ -258,24 +260,21 @@ bool NNTPConnection::DoConnect()
return true;
}
bool NNTPConnection::DoDisconnect()
bool NNTPConnection::Disconnect()
{
if (m_eStatus == csConnected)
{
Request("quit\r\n");
if (m_szActiveGroup)
{
free(m_szActiveGroup);
m_szActiveGroup = NULL;
}
free(m_szActiveGroup);
m_szActiveGroup = NULL;
}
return Connection::DoDisconnect();
return Connection::Disconnect();
}
void NNTPConnection::ReportErrorAnswer(const char* szMsgPrefix, const char* szAnswer)
{
char szErrStr[1024];
snprintf(szErrStr, 1024, szMsgPrefix, GetHost(), szAnswer);
snprintf(szErrStr, 1024, szMsgPrefix, m_pNewsServer->GetID(), m_pNewsServer->GetHost(), szAnswer);
szErrStr[1024-1] = '\0';
ReportError(szErrStr, NULL, false, 0);


@@ -33,26 +33,26 @@
class NNTPConnection : public Connection
{
private:
NewsServer* m_pNewsServer;
char* m_szActiveGroup;
char* m_szLineBuf;
bool m_bAuthError;
NewsServer* m_pNewsServer;
char* m_szActiveGroup;
char* m_szLineBuf;
bool m_bAuthError;
virtual bool DoConnect();
virtual bool DoDisconnect();
void Clear();
void ReportErrorAnswer(const char* szMsgPrefix, const char* szAnswer);
void Clear();
void ReportErrorAnswer(const char* szMsgPrefix, const char* szAnswer);
bool Authenticate();
bool AuthInfoUser(int iRecur);
bool AuthInfoPass(int iRecur);
public:
NNTPConnection(NewsServer* pNewsServer);
virtual ~NNTPConnection();
NewsServer* GetNewsServer() { return m_pNewsServer; }
const char* Request(const char* req);
bool Authenticate();
bool AuthInfoUser(int iRecur = 0);
bool AuthInfoPass(int iRecur = 0);
const char* JoinGroup(const char* grp);
bool GetAuthError() { return m_bAuthError; }
NNTPConnection(NewsServer* pNewsServer);
virtual ~NNTPConnection();
virtual bool Connect();
virtual bool Disconnect();
NewsServer* GetNewsServer() { return m_pNewsServer; }
const char* Request(const char* req);
const char* JoinGroup(const char* grp);
bool GetAuthError() { return m_bAuthError; }
};
#endif


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -34,45 +34,47 @@
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "nzbget.h"
#include "NewsServer.h"
NewsServer::NewsServer(int iID, const char* szHost, int iPort, const char* szUser, const char* szPass, bool bJoinGroup,
bool bTLS, const char* szCipher, int iMaxConnections, int iLevel, int iGroup)
NewsServer::NewsServer(int iID, bool bActive, const char* szName, const char* szHost, int iPort,
const char* szUser, const char* szPass, bool bJoinGroup, bool bTLS,
const char* szCipher, int iMaxConnections, int iLevel, int iGroup)
{
m_iID = iID;
m_szHost = NULL;
m_iStateID = 0;
m_bActive = bActive;
m_iPort = iPort;
m_szUser = NULL;
m_szPassword = NULL;
m_iLevel = iLevel;
m_iNormLevel = iLevel;
m_iGroup = iGroup;
m_iMaxConnections = iMaxConnections;
m_bJoinGroup = bJoinGroup;
m_bTLS = bTLS;
m_szHost = szHost ? strdup(szHost) : NULL;
m_szUser = szUser ? strdup(szUser) : NULL;
m_szPassword = szPass ? strdup(szPass) : NULL;
m_szCipher = szCipher ? strdup(szCipher) : NULL;
m_szHost = strdup(szHost ? szHost : "");
m_szUser = strdup(szUser ? szUser : "");
m_szPassword = strdup(szPass ? szPass : "");
m_szCipher = strdup(szCipher ? szCipher : "");
if (szName && strlen(szName) > 0)
{
m_szName = strdup(szName);
}
else
{
m_szName = (char*)malloc(20);
snprintf(m_szName, 20, "server%i", iID);
m_szName[20-1] = '\0';
}
}
NewsServer::~NewsServer()
{
if (m_szHost)
{
free(m_szHost);
}
if (m_szUser)
{
free(m_szUser);
}
if (m_szPassword)
{
free(m_szPassword);
}
if (m_szCipher)
{
free(m_szCipher);
}
free(m_szName);
free(m_szHost);
free(m_szUser);
free(m_szPassword);
free(m_szCipher);
}


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -27,10 +27,15 @@
#ifndef NEWSSERVER_H
#define NEWSSERVER_H
#include <vector>
class NewsServer
{
private:
int m_iID;
int m_iStateID;
bool m_bActive;
char* m_szName;
int m_iGroup;
char* m_szHost;
int m_iPort;
@@ -38,15 +43,22 @@ private:
char* m_szPassword;
int m_iMaxConnections;
int m_iLevel;
int m_iNormLevel;
bool m_bJoinGroup;
bool m_bTLS;
char* m_szCipher;
public:
NewsServer(int iID, const char* szHost, int iPort, const char* szUser, const char* szPass, bool bJoinGroup,
NewsServer(int iID, bool bActive, const char* szName, const char* szHost, int iPort,
const char* szUser, const char* szPass, bool bJoinGroup,
bool bTLS, const char* szCipher, int iMaxConnections, int iLevel, int iGroup);
~NewsServer();
int GetID() { return m_iID; }
int GetStateID() { return m_iStateID; }
void SetStateID(int iStateID) { m_iStateID = iStateID; }
bool GetActive() { return m_bActive; }
void SetActive(bool bActive) { m_bActive = bActive; }
const char* GetName() { return m_szName; }
int GetGroup() { return m_iGroup; }
const char* GetHost() { return m_szHost; }
int GetPort() { return m_iPort; }
@@ -54,10 +66,13 @@ public:
const char* GetPassword() { return m_szPassword; }
int GetMaxConnections() { return m_iMaxConnections; }
int GetLevel() { return m_iLevel; }
void SetLevel(int iLevel) { m_iLevel = iLevel; }
int GetNormLevel() { return m_iNormLevel; }
void SetNormLevel(int iLevel) { m_iNormLevel = iLevel; }
int GetJoinGroup() { return m_bJoinGroup; }
bool GetTLS() { return m_bTLS; }
const char* GetCipher() { return m_szCipher; }
};
typedef std::vector<NewsServer*> Servers;
#endif


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -42,7 +42,6 @@
#include "nzbget.h"
#include "ServerPool.h"
#include "Log.h"
static const int CONNECTION_HOLD_SECODNS = 5;
@@ -56,14 +55,19 @@ ServerPool::ServerPool()
{
debug("Creating ServerPool");
m_iMaxLevel = 0;
m_iMaxNormLevel = 0;
m_iTimeout = 60;
m_iGeneration = 0;
g_pLog->RegisterDebuggable(this);
}
ServerPool::~ ServerPool()
{
debug("Destroying ServerPool");
g_pLog->UnregisterDebuggable(this);
m_Levels.clear();
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
@@ -71,6 +75,7 @@ ServerPool::~ ServerPool()
delete *it;
}
m_Servers.clear();
m_SortedServers.clear();
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
@@ -83,16 +88,16 @@ void ServerPool::AddServer(NewsServer* pNewsServer)
{
debug("Adding server to ServerPool");
if (pNewsServer->GetMaxConnections() > 0)
{
m_Servers.push_back(pNewsServer);
}
else
{
delete pNewsServer;
}
m_Servers.push_back(pNewsServer);
m_SortedServers.push_back(pNewsServer);
}
/*
* Calculate normalized levels for all servers.
* Normalized Level means: starting from 0 with step 1.
* The servers of the minimum level must always be used even if they are not active;
* this is to prevent backup servers from acting as main servers.
**/
void ServerPool::NormalizeLevels()
{
if (m_Servers.empty())
@@ -100,21 +105,38 @@ void ServerPool::NormalizeLevels()
return;
}
std::sort(m_Servers.begin(), m_Servers.end(), CompareServers);
std::sort(m_SortedServers.begin(), m_SortedServers.end(), CompareServers);
NewsServer* pNewsServer = m_Servers.front();
m_iMaxLevel = 0;
int iCurLevel = pNewsServer->GetLevel();
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
// find minimum level
int iMinLevel = m_SortedServers.front()->GetLevel();
for (Servers::iterator it = m_SortedServers.begin(); it != m_SortedServers.end(); it++)
{
NewsServer* pNewsServer = *it;
if (pNewsServer->GetLevel() != iCurLevel)
if (pNewsServer->GetLevel() < iMinLevel)
{
m_iMaxLevel++;
iMinLevel = pNewsServer->GetLevel();
}
}
pNewsServer->SetLevel(m_iMaxLevel);
m_iMaxNormLevel = 0;
int iLastLevel = iMinLevel;
for (Servers::iterator it = m_SortedServers.begin(); it != m_SortedServers.end(); it++)
{
NewsServer* pNewsServer = *it;
if ((pNewsServer->GetActive() && pNewsServer->GetMaxConnections() > 0) ||
(pNewsServer->GetLevel() == iMinLevel))
{
if (pNewsServer->GetLevel() != iLastLevel)
{
m_iMaxNormLevel++;
}
pNewsServer->SetNormLevel(m_iMaxNormLevel);
iLastLevel = pNewsServer->GetLevel();
}
else
{
pNewsServer->SetNormLevel(-1);
}
}
}
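A short worked example of the normalization described above, as a sketch with invented servers:
// Illustrative sketch only; server data is invented.
// Raw levels (sorted): server1=0 (active), server2=0 (inactive), server3=1 (active), server4=3 (active)
// server2 is kept although inactive because it sits on the minimum raw level.
// Normalized levels:   server1=0, server2=0, server3=1, server4=2
// An inactive server above the minimum level would receive NormLevel=-1 and be skipped.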
@@ -127,28 +149,51 @@ void ServerPool::InitConnections()
{
debug("Initializing connections in ServerPool");
NormalizeLevels();
m_mutexConnections.Lock();
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
NormalizeLevels();
m_Levels.clear();
for (Servers::iterator it = m_SortedServers.begin(); it != m_SortedServers.end(); it++)
{
NewsServer* pNewsServer = *it;
for (int i = 0; i < pNewsServer->GetMaxConnections(); i++)
int iNormLevel = pNewsServer->GetNormLevel();
if (pNewsServer->GetNormLevel() > -1)
{
PooledConnection* pConnection = new PooledConnection(pNewsServer);
pConnection->SetTimeout(m_iTimeout);
m_Connections.push_back(pConnection);
if ((int)m_Levels.size() <= iNormLevel)
{
m_Levels.push_back(0);
}
if (pNewsServer->GetActive())
{
int iConnections = 0;
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
PooledConnection* pConnection = *it;
if (pConnection->GetNewsServer() == pNewsServer)
{
iConnections++;
}
}
for (int i = iConnections; i < pNewsServer->GetMaxConnections(); i++)
{
PooledConnection* pConnection = new PooledConnection(pNewsServer);
pConnection->SetTimeout(m_iTimeout);
m_Connections.push_back(pConnection);
iConnections++;
}
m_Levels[iNormLevel] += iConnections;
}
}
if ((int)m_Levels.size() <= pNewsServer->GetLevel())
{
m_Levels.push_back(0);
}
m_Levels[pNewsServer->GetLevel()] += pNewsServer->GetMaxConnections();
}
if (m_Levels.empty())
{
warn("No news servers defined, download is not possible");
}
m_iGeneration++;
m_mutexConnections.Unlock();
}
NNTPConnection* ServerPool::GetConnection(int iLevel, NewsServer* pWantServer, Servers* pIgnoreServers)
@@ -163,7 +208,8 @@ NNTPConnection* ServerPool::GetConnection(int iLevel, NewsServer* pWantServer, S
{
PooledConnection* pCandidateConnection = *it;
NewsServer* pCandidateServer = pCandidateConnection->GetNewsServer();
if (!pCandidateConnection->GetInUse() && pCandidateServer->GetLevel() == iLevel &&
if (!pCandidateConnection->GetInUse() && pCandidateServer->GetActive() &&
pCandidateServer->GetNormLevel() == iLevel &&
(!pWantServer || pCandidateServer == pWantServer ||
(pWantServer->GetGroup() > 0 && pWantServer->GetGroup() == pCandidateServer->GetGroup())))
{
@@ -176,7 +222,7 @@ NNTPConnection* ServerPool::GetConnection(int iLevel, NewsServer* pWantServer, S
NewsServer* pIgnoreServer = *it;
if (pIgnoreServer == pCandidateServer ||
(pIgnoreServer->GetGroup() > 0 && pIgnoreServer->GetGroup() == pCandidateServer->GetGroup() &&
pIgnoreServer->GetLevel() == pCandidateServer->GetLevel()))
pIgnoreServer->GetNormLevel() == pCandidateServer->GetNormLevel()))
{
bUseConnection = false;
break;
@@ -218,7 +264,11 @@ void ServerPool::FreeConnection(NNTPConnection* pConnection, bool bUsed)
{
((PooledConnection*)pConnection)->SetFreeTimeNow();
}
m_Levels[pConnection->GetNewsServer()->GetLevel()]++;
if (pConnection->GetNewsServer()->GetNormLevel() > -1 && pConnection->GetNewsServer()->GetActive())
{
m_Levels[pConnection->GetNewsServer()->GetNormLevel()]++;
}
m_mutexConnections.Unlock();
}
@@ -229,36 +279,87 @@ void ServerPool::CloseUnusedConnections()
time_t curtime = ::time(NULL);
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
int i = 0;
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); )
{
PooledConnection* pConnection = *it;
if (!pConnection->GetInUse() && pConnection->GetStatus() == Connection::csConnected)
bool bDeleted = false;
if (!pConnection->GetInUse() &&
(pConnection->GetNewsServer()->GetNormLevel() == -1 ||
!pConnection->GetNewsServer()->GetActive()))
{
debug("Closing (and deleting) unused connection to server%i", pConnection->GetNewsServer()->GetID());
if (pConnection->GetStatus() == Connection::csConnected)
{
pConnection->Disconnect();
}
delete pConnection;
m_Connections.erase(it);
it = m_Connections.begin() + i;
bDeleted = true;
}
if (!bDeleted && !pConnection->GetInUse() && pConnection->GetStatus() == Connection::csConnected)
{
int tdiff = (int)(curtime - pConnection->GetFreeTime());
if (tdiff > CONNECTION_HOLD_SECODNS)
{
debug("Closing unused connection to %s", pConnection->GetHost());
debug("Closing (and keeping) unused connection to server%i", pConnection->GetNewsServer()->GetID());
pConnection->Disconnect();
}
}
if (!bDeleted)
{
it++;
i++;
}
}
m_mutexConnections.Unlock();
}
void ServerPool::Changed()
{
debug("Server config has been changed");
InitConnections();
CloseUnusedConnections();
}
void ServerPool::LogDebugInfo()
{
debug(" ServerPool");
debug(" ----------------");
info(" ---------- ServerPool");
debug(" Max-Level: %i", m_iMaxLevel);
info(" Max-Level: %i", m_iMaxNormLevel);
m_mutexConnections.Lock();
debug(" Connections: %i", m_Connections.size());
info(" Servers: %i", m_Servers.size());
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
{
NewsServer* pNewsServer = *it;
info(" %i) %s (%s): Level=%i, NormLevel=%i", pNewsServer->GetID(), pNewsServer->GetName(),
pNewsServer->GetHost(), pNewsServer->GetLevel(), pNewsServer->GetNormLevel());
}
info(" Levels: %i", m_Levels.size());
int index = 0;
for (Levels::iterator it = m_Levels.begin(); it != m_Levels.end(); it++, index++)
{
int iSize = *it;
info(" %i: Free connections=%i", index, iSize);
}
info(" Connections: %i", m_Connections.size());
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
debug(" %s: Level=%i, InUse:%i", (*it)->GetNewsServer()->GetHost(), (*it)->GetNewsServer()->GetLevel(), (int)(*it)->GetInUse());
PooledConnection* pConnection = *it;
info(" %i) %s (%s): Level=%i, NormLevel=%i, InUse:%i", pConnection->GetNewsServer()->GetID(),
pConnection->GetNewsServer()->GetName(), pConnection->GetNewsServer()->GetHost(),
pConnection->GetNewsServer()->GetLevel(), pConnection->GetNewsServer()->GetNormLevel(),
(int)pConnection->GetInUse());
}
m_mutexConnections.Unlock();


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -30,15 +30,13 @@
#include <vector>
#include <time.h>
#include "Log.h"
#include "Thread.h"
#include "NewsServer.h"
#include "NNTPConnection.h"
class ServerPool
class ServerPool : public Debuggable
{
public:
typedef std::vector<NewsServer*> Servers;
private:
class PooledConnection : public NNTPConnection
{
@@ -57,28 +55,33 @@ private:
typedef std::vector<PooledConnection*> Connections;
Servers m_Servers;
Servers m_SortedServers;
Connections m_Connections;
Levels m_Levels;
int m_iMaxLevel;
int m_iMaxNormLevel;
Mutex m_mutexConnections;
int m_iTimeout;
int m_iGeneration;
void NormalizeLevels();
static bool CompareServers(NewsServer* pServer1, NewsServer* pServer2);
protected:
virtual void LogDebugInfo();
public:
ServerPool();
~ServerPool();
void SetTimeout(int iTimeout) { m_iTimeout = iTimeout; }
void AddServer(NewsServer* pNewsServer);
void InitConnections();
int GetMaxLevel() { return m_iMaxLevel; }
int GetMaxNormLevel() { return m_iMaxNormLevel; }
Servers* GetServers() { return &m_Servers; } // Only for read access (no lockings)
NNTPConnection* GetConnection(int iLevel, NewsServer* pWantServer, Servers* pIgnoreServers);
void FreeConnection(NNTPConnection* pConnection, bool bUsed);
void CloseUnusedConnections();
void LogDebugInfo();
void Changed();
int GetGeneration() { return m_iGeneration; }
};
#endif

daemon/nntp/StatMeter.cpp (new file, 546 lines)

@@ -0,0 +1,546 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "nzbget.h"
#include "StatMeter.h"
#include "Options.h"
#include "ServerPool.h"
#include "DiskState.h"
extern ServerPool* g_pServerPool;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
static const int DAYS_UP_TO_2013_JAN_1 = 15706;
static const int DAYS_IN_TWENTY_YEARS = 366*20;
ServerVolume::ServerVolume()
{
m_BytesPerSeconds.resize(60);
m_BytesPerMinutes.resize(60);
m_BytesPerHours.resize(24);
m_BytesPerDays.resize(0);
m_iFirstDay = 0;
m_tDataTime = 0;
m_lTotalBytes = 0;
m_lCustomBytes = 0;
m_tCustomTime = time(NULL);
m_iSecSlot = 0;
m_iMinSlot = 0;
m_iHourSlot = 0;
m_iDaySlot = 0;
}
void ServerVolume::CalcSlots(time_t tLocCurTime)
{
m_iSecSlot = (int)tLocCurTime % 60;
m_iMinSlot = ((int)tLocCurTime / 60) % 60;
m_iHourSlot = ((int)tLocCurTime % 86400) / 3600;
int iDaysSince1970 = (int)tLocCurTime / 86400;
m_iDaySlot = iDaysSince1970 - DAYS_UP_TO_2013_JAN_1 + 1;
if (0 <= m_iDaySlot && m_iDaySlot < DAYS_IN_TWENTY_YEARS)
{
int iCurDay = iDaysSince1970;
if (m_iFirstDay == 0 || m_iFirstDay > iCurDay)
{
m_iFirstDay = iCurDay;
}
m_iDaySlot = iCurDay - m_iFirstDay;
if (m_iDaySlot + 1 > (int)m_BytesPerDays.size())
{
m_BytesPerDays.resize(m_iDaySlot + 1);
}
}
else
{
m_iDaySlot = -1;
}
}
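A worked example of the slot arithmetic above, as a sketch with an arbitrary timestamp:
// Illustrative sketch only; the timestamp is arbitrary.
// tLocCurTime = 1388534461 (2014-01-01 00:01:01)
// SecSlot  = 1388534461 % 60             = 1
// MinSlot  = (1388534461 / 60) % 60      = 1
// HourSlot = (1388534461 % 86400) / 3600 = 0
// iDaysSince1970 = 1388534461 / 86400    = 16071
// The day slot is then taken relative to m_iFirstDay, so the per-day
// array only grows from the day statistics were first recorded.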
void ServerVolume::AddData(int iBytes)
{
time_t tCurTime = time(NULL);
time_t tLocCurTime = tCurTime + g_pOptions->GetLocalTimeOffset();
time_t tLocDataTime = m_tDataTime + g_pOptions->GetLocalTimeOffset();
int iLastMinSlot = m_iMinSlot;
int iLastHourSlot = m_iHourSlot;
CalcSlots(tLocCurTime);
if (tLocCurTime != tLocDataTime)
{
// clear seconds/minutes/hours slots if necessary
// also handle the backwards changes of system clock
int iTotalDelta = (int)(tLocCurTime - tLocDataTime);
int iDeltaSign = iTotalDelta >= 0 ? 1 : -1;
iTotalDelta = abs(iTotalDelta);
int iSecDelta = iTotalDelta;
if (iDeltaSign < 0) iSecDelta++;
if (iSecDelta >= 60) iSecDelta = 60;
for (int i = 0; i < iSecDelta; i++)
{
int iNulSlot = m_iSecSlot - i * iDeltaSign;
if (iNulSlot < 0) iNulSlot += 60;
if (iNulSlot >= 60) iNulSlot -= 60;
m_BytesPerSeconds[iNulSlot] = 0;
}
int iMinDelta = iTotalDelta / 60;
if (iDeltaSign < 0) iMinDelta++;
if (abs(iMinDelta) >= 60) iMinDelta = 60;
if (iMinDelta == 0 && m_iMinSlot != iLastMinSlot) iMinDelta = 1;
for (int i = 0; i < iMinDelta; i++)
{
int iNulSlot = m_iMinSlot - i * iDeltaSign;
if (iNulSlot < 0) iNulSlot += 60;
if (iNulSlot >= 60) iNulSlot -= 60;
m_BytesPerMinutes[iNulSlot] = 0;
}
int iHourDelta = iTotalDelta / (60 * 60);
if (iDeltaSign < 0) iHourDelta++;
if (iHourDelta >= 24) iHourDelta = 24;
if (iHourDelta == 0 && m_iHourSlot != iLastHourSlot) iHourDelta = 1;
for (int i = 0; i < iHourDelta; i++)
{
int iNulSlot = m_iHourSlot - i * iDeltaSign;
if (iNulSlot < 0) iNulSlot += 24;
if (iNulSlot >= 24) iNulSlot -= 24;
m_BytesPerHours[iNulSlot] = 0;
}
}
// add bytes to every slot
m_BytesPerSeconds[m_iSecSlot] += iBytes;
m_BytesPerMinutes[m_iMinSlot] += iBytes;
m_BytesPerHours[m_iHourSlot] += iBytes;
if (m_iDaySlot >= 0)
{
m_BytesPerDays[m_iDaySlot] += iBytes;
}
m_lTotalBytes += iBytes;
m_lCustomBytes += iBytes;
m_tDataTime = tCurTime;
}
void ServerVolume::ResetCustom()
{
m_lCustomBytes = 0;
m_tCustomTime = time(NULL);
}
void ServerVolume::LogDebugInfo()
{
info(" ---------- ServerVolume");
char szSec[4000];
szSec[0] = '\0';
for (int i = 0; i < 60; i++)
{
char szNum[20];
snprintf(szNum, 20, "[%i]=%lli ", i, m_BytesPerSeconds[i]);
strncat(szSec, szNum, 4000);
}
info("Secs: %s", szSec);
szSec[0] = '\0';
for (int i = 0; i < 60; i++)
{
char szNum[20];
snprintf(szNum, 20, "[%i]=%lli ", i, m_BytesPerMinutes[i]);
strncat(szSec, szNum, 4000);
}
info("Mins: %s", szSec);
szSec[0] = '\0';
for (int i = 0; i < 24; i++)
{
char szNum[20];
snprintf(szNum, 20, "[%i]=%lli ", i, m_BytesPerHours[i]);
strncat(szSec, szNum, 4000);
}
info("Hours: %s", szSec);
szSec[0] = '\0';
for (int i = 0; i < (int)m_BytesPerDays.size(); i++)
{
char szNum[20];
snprintf(szNum, 20, "[%i]=%lli ", m_iFirstDay + i, m_BytesPerDays[i]);
strncat(szSec, szNum, 4000);
}
info("Days: %s", szSec);
}
StatMeter::StatMeter()
{
debug("Creating StatMeter");
ResetSpeedStat();
m_iAllBytes = 0;
m_tStartDownload = 0;
m_tPausedFrom = 0;
m_bStandBy = true;
m_tStartServer = 0;
m_tLastCheck = 0;
m_tLastTimeOffset = 0;
m_bStatChanged = false;
g_pLog->RegisterDebuggable(this);
}
StatMeter::~StatMeter()
{
debug("Destroying StatMeter");
// Cleanup
g_pLog->UnregisterDebuggable(this);
for (ServerVolumes::iterator it = m_ServerVolumes.begin(); it != m_ServerVolumes.end(); it++)
{
delete *it;
}
debug("StatMeter destroyed");
}
void StatMeter::Init()
{
m_tStartServer = time(NULL);
m_tLastCheck = m_tStartServer;
AdjustTimeOffset();
m_ServerVolumes.resize(1 + g_pServerPool->GetServers()->size());
m_ServerVolumes[0] = new ServerVolume();
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++)
{
NewsServer* pServer = *it;
m_ServerVolumes[pServer->GetID()] = new ServerVolume();
}
}
void StatMeter::AdjustTimeOffset()
{
time_t tUtcTime = time(NULL);
tm tmSplittedTime;
gmtime_r(&tUtcTime, &tmSplittedTime);
tmSplittedTime.tm_isdst = -1;
time_t tLocTime = mktime(&tmSplittedTime);
time_t tLocalTimeDelta = tUtcTime - tLocTime;
g_pOptions->SetLocalTimeOffset((int)tLocalTimeDelta + g_pOptions->GetTimeCorrection());
m_tLastTimeOffset = tUtcTime;
debug("UTC delta: %i (%i+%i)", g_pOptions->GetLocalTimeOffset(), (int)tLocalTimeDelta, g_pOptions->GetTimeCorrection());
}
/*
* Called once per second.
* - detect large step changes of system time and adjust statistics;
* - save volume stats (if changed).
*/
void StatMeter::IntervalCheck()
{
time_t m_tCurTime = time(NULL);
time_t tDiff = m_tCurTime - m_tLastCheck;
if (tDiff > 60 || tDiff < 0)
{
m_tStartServer += tDiff + 1; // "1" because the method is called once per second
if (m_tStartDownload != 0 && !m_bStandBy)
{
m_tStartDownload += tDiff + 1;
}
AdjustTimeOffset();
}
else if (m_tLastTimeOffset > m_tCurTime ||
m_tCurTime - m_tLastTimeOffset > 60 * 60 * 3 ||
(m_tCurTime - m_tLastTimeOffset > 60 && !m_bStandBy))
{
// checking time zone settings may prevent the device from entering sleep/hibernate mode
// check every minute if not in standby
// check at least every 3 hours even in standby
AdjustTimeOffset();
}
m_tLastCheck = m_tCurTime;
if (m_bStatChanged)
{
Save();
}
}
void StatMeter::EnterLeaveStandBy(bool bEnter)
{
m_mutexStat.Lock();
m_bStandBy = bEnter;
if (bEnter)
{
m_tPausedFrom = time(NULL);
}
else
{
if (m_tStartDownload == 0)
{
m_tStartDownload = time(NULL);
}
else
{
m_tStartDownload += time(NULL) - m_tPausedFrom;
}
m_tPausedFrom = 0;
ResetSpeedStat();
}
m_mutexStat.Unlock();
}
void StatMeter::CalcTotalStat(int* iUpTimeSec, int* iDnTimeSec, long long* iAllBytes, bool* bStandBy)
{
m_mutexStat.Lock();
if (m_tStartServer > 0)
{
*iUpTimeSec = (int)(time(NULL) - m_tStartServer);
}
else
{
*iUpTimeSec = 0;
}
*bStandBy = m_bStandBy;
if (m_bStandBy)
{
*iDnTimeSec = (int)(m_tPausedFrom - m_tStartDownload);
}
else
{
*iDnTimeSec = (int)(time(NULL) - m_tStartDownload);
}
*iAllBytes = m_iAllBytes;
m_mutexStat.Unlock();
}
/*
* NOTE: see note to "AddSpeedReading"
*/
int StatMeter::CalcCurrentDownloadSpeed()
{
if (m_bStandBy)
{
return 0;
}
int iTimeDiff = (int)time(NULL) - m_iSpeedStartTime * SPEEDMETER_SLOTSIZE;
if (iTimeDiff == 0)
{
return 0;
}
return (int)(m_iSpeedTotalBytes / iTimeDiff);
}
void StatMeter::AddSpeedReading(int iBytes)
{
time_t tCurTime = time(NULL);
int iNowSlot = (int)tCurTime / SPEEDMETER_SLOTSIZE;
if (g_pOptions->GetAccurateRate())
{
#ifdef HAVE_SPINLOCK
m_spinlockSpeed.Lock();
#else
m_mutexSpeed.Lock();
#endif
}
while (iNowSlot > m_iSpeedTime[m_iSpeedBytesIndex])
{
//record bytes in next slot
m_iSpeedBytesIndex++;
if (m_iSpeedBytesIndex >= SPEEDMETER_SLOTS)
{
m_iSpeedBytesIndex = 0;
}
//Adjust counters with outgoing information.
m_iSpeedTotalBytes = m_iSpeedTotalBytes - (long long)m_iSpeedBytes[m_iSpeedBytesIndex];
//Note we should really use the start time of the next slot
//but it's easier to just use the outgoing slot time. This
//will result in a small error.
m_iSpeedStartTime = m_iSpeedTime[m_iSpeedBytesIndex];
//Now reset.
m_iSpeedBytes[m_iSpeedBytesIndex] = 0;
m_iSpeedTime[m_iSpeedBytesIndex] = iNowSlot;
}
// Once per second recalculate summary field "m_iSpeedTotalBytes" to recover from possible synchronisation errors
if (tCurTime > m_tSpeedCorrection)
{
long long iSpeedTotalBytes = 0;
for (int i = 0; i < SPEEDMETER_SLOTS; i++)
{
iSpeedTotalBytes += m_iSpeedBytes[i];
}
m_iSpeedTotalBytes = iSpeedTotalBytes;
m_tSpeedCorrection = tCurTime;
}
if (m_iSpeedTotalBytes == 0)
{
m_iSpeedStartTime = iNowSlot;
}
m_iSpeedBytes[m_iSpeedBytesIndex] += iBytes;
m_iSpeedTotalBytes += iBytes;
m_iAllBytes += iBytes;
if (g_pOptions->GetAccurateRate())
{
#ifdef HAVE_SPINLOCK
m_spinlockSpeed.Unlock();
#else
m_mutexSpeed.Unlock();
#endif
}
}
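The speed meter above is a ring buffer of per-second byte counters; the resulting speed calculation can be sketched as follows (slot count and slot size mirror SPEEDMETER_SLOTS and SPEEDMETER_SLOTSIZE in StatMeter.h):
// Illustrative sketch only, not part of the nzbget sources.
// Current speed = bytes still inside the sliding window / seconds covered by the window.
static int SketchCurrentSpeed(const int speedBytes[], int slots,
    int speedStartTime, int slotSize, time_t now)
{
    long long total = 0;
    for (int i = 0; i < slots; i++)
    {
        total += speedBytes[i];          // sum of all slots of the window
    }
    int timeDiff = (int)now - speedStartTime * slotSize;
    return timeDiff > 0 ? (int)(total / timeDiff) : 0;  // bytes per second
}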
void StatMeter::ResetSpeedStat()
{
time_t tCurTime = time(NULL);
m_iSpeedStartTime = (int)tCurTime / SPEEDMETER_SLOTSIZE;
for (int i = 0; i < SPEEDMETER_SLOTS; i++)
{
m_iSpeedBytes[i] = 0;
m_iSpeedTime[i] = m_iSpeedStartTime;
}
m_iSpeedBytesIndex = 0;
m_iSpeedTotalBytes = 0;
m_tSpeedCorrection = tCurTime;
}
void StatMeter::LogDebugInfo()
{
info(" ---------- SpeedMeter");
float fSpeed = (float)(CalcCurrentDownloadSpeed() / 1024.0);
int iTimeDiff = (int)time(NULL) - m_iSpeedStartTime * SPEEDMETER_SLOTSIZE;
info(" Speed: %f", fSpeed);
info(" SpeedStartTime: %i", m_iSpeedStartTime);
info(" SpeedTotalBytes: %i", m_iSpeedTotalBytes);
info(" SpeedBytesIndex: %i", m_iSpeedBytesIndex);
info(" AllBytes: %i", m_iAllBytes);
info(" Time: %i", (int)time(NULL));
info(" TimeDiff: %i", iTimeDiff);
for (int i=0; i < SPEEDMETER_SLOTS; i++)
{
info(" Bytes[%i]: %i, Time[%i]: %i", i, m_iSpeedBytes[i], i, m_iSpeedTime[i]);
}
m_mutexVolume.Lock();
int index = 0;
for (ServerVolumes::iterator it = m_ServerVolumes.begin(); it != m_ServerVolumes.end(); it++, index++)
{
ServerVolume* pServerVolume = *it;
info(" ServerVolume %i", index);
pServerVolume->LogDebugInfo();
}
m_mutexVolume.Unlock();
}
void StatMeter::AddServerData(int iBytes, int iServerID)
{
if (iBytes == 0)
{
return;
}
m_mutexVolume.Lock();
m_ServerVolumes[0]->AddData(iBytes);
m_ServerVolumes[iServerID]->AddData(iBytes);
m_bStatChanged = true;
m_mutexVolume.Unlock();
}
ServerVolumes* StatMeter::LockServerVolumes()
{
m_mutexVolume.Lock();
// update slots
for (ServerVolumes::iterator it = m_ServerVolumes.begin(); it != m_ServerVolumes.end(); it++)
{
ServerVolume* pServerVolume = *it;
pServerVolume->AddData(0);
}
return &m_ServerVolumes;
}
void StatMeter::UnlockServerVolumes()
{
m_mutexVolume.Unlock();
}
void StatMeter::Save()
{
if (!g_pOptions->GetServerMode())
{
return;
}
m_mutexVolume.Lock();
g_pDiskState->SaveStats(g_pServerPool->GetServers(), &m_ServerVolumes);
m_bStatChanged = false;
m_mutexVolume.Unlock();
}
bool StatMeter::Load(bool* pPerfectServerMatch)
{
m_mutexVolume.Lock();
bool bOK = g_pDiskState->LoadStats(g_pServerPool->GetServers(), &m_ServerVolumes, pPerfectServerMatch);
for (ServerVolumes::iterator it = m_ServerVolumes.begin(); it != m_ServerVolumes.end(); it++)
{
ServerVolume* pServerVolume = *it;
pServerVolume->CalcSlots(pServerVolume->GetDataTime() + g_pOptions->GetLocalTimeOffset());
}
m_mutexVolume.Unlock();
return bOK;
}

daemon/nntp/StatMeter.h (new file, 140 lines)

@@ -0,0 +1,140 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef STATMETER_H
#define STATMETER_H
#include <vector>
#include <time.h>
#include "Log.h"
#include "Thread.h"
class ServerVolume
{
public:
typedef std::vector<long long> VolumeArray;
private:
VolumeArray m_BytesPerSeconds;
VolumeArray m_BytesPerMinutes;
VolumeArray m_BytesPerHours;
VolumeArray m_BytesPerDays;
int m_iFirstDay;
long long m_lTotalBytes;
long long m_lCustomBytes;
time_t m_tDataTime;
time_t m_tCustomTime;
int m_iSecSlot;
int m_iMinSlot;
int m_iHourSlot;
int m_iDaySlot;
public:
ServerVolume();
VolumeArray* BytesPerSeconds() { return &m_BytesPerSeconds; }
VolumeArray* BytesPerMinutes() { return &m_BytesPerMinutes; }
VolumeArray* BytesPerHours() { return &m_BytesPerHours; }
VolumeArray* BytesPerDays() { return &m_BytesPerDays; }
void SetFirstDay(int iFirstDay) { m_iFirstDay = iFirstDay; }
int GetFirstDay() { return m_iFirstDay; }
void SetTotalBytes(long long lTotalBytes) { m_lTotalBytes = lTotalBytes; }
long long GetTotalBytes() { return m_lTotalBytes; }
void SetCustomBytes(long long lCustomBytes) { m_lCustomBytes = lCustomBytes; }
long long GetCustomBytes() { return m_lCustomBytes; }
int GetSecSlot() { return m_iSecSlot; }
int GetMinSlot() { return m_iMinSlot; }
int GetHourSlot() { return m_iHourSlot; }
int GetDaySlot() { return m_iDaySlot; }
time_t GetDataTime() { return m_tDataTime; }
void SetDataTime(time_t tDataTime) { m_tDataTime = tDataTime; }
time_t GetCustomTime() { return m_tCustomTime; }
void SetCustomTime(time_t tCustomTime) { m_tCustomTime = tCustomTime; }
void AddData(int iBytes);
void CalcSlots(time_t tLocCurTime);
void ResetCustom();
void LogDebugInfo();
};
typedef std::vector<ServerVolume*> ServerVolumes;
class StatMeter : public Debuggable
{
private:
// speed meter
static const int SPEEDMETER_SLOTS = 30;
static const int SPEEDMETER_SLOTSIZE = 1; //Split elapsed time into this number of secs.
int m_iSpeedBytes[SPEEDMETER_SLOTS];
long long m_iSpeedTotalBytes;
int m_iSpeedTime[SPEEDMETER_SLOTS];
int m_iSpeedStartTime;
time_t m_tSpeedCorrection;
int m_iSpeedBytesIndex;
#ifdef HAVE_SPINLOCK
SpinLock m_spinlockSpeed;
#else
Mutex m_mutexSpeed;
#endif
// time
long long m_iAllBytes;
time_t m_tStartServer;
time_t m_tLastCheck;
time_t m_tLastTimeOffset;
time_t m_tStartDownload;
time_t m_tPausedFrom;
bool m_bStandBy;
Mutex m_mutexStat;
// data volume
bool m_bStatChanged;
ServerVolumes m_ServerVolumes;
Mutex m_mutexVolume;
void ResetSpeedStat();
void AdjustTimeOffset();
protected:
virtual void LogDebugInfo();
public:
StatMeter();
~StatMeter();
void Init();
int CalcCurrentDownloadSpeed();
void AddSpeedReading(int iBytes);
void AddServerData(int iBytes, int iServerID);
void CalcTotalStat(int* iUpTimeSec, int* iDnTimeSec, long long* iAllBytes, bool* bStandBy);
bool GetStandBy() { return m_bStandBy; }
void IntervalCheck();
void EnterLeaveStandBy(bool bEnter);
ServerVolumes* LockServerVolumes();
void UnlockServerVolumes();
void Save();
bool Load(bool* pPerfectServerMatch);
};
#endif


File diff suppressed because it is too large


@@ -0,0 +1,188 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef PARCHECKER_H
#define PARCHECKER_H
#ifndef DISABLE_PARCHECK
#include <deque>
#include <vector>
#include <string>
#include "Thread.h"
#include "Log.h"
class ParChecker : public Thread
{
public:
enum EStatus
{
psFailed,
psRepairPossible,
psRepaired,
psRepairNotNeeded
};
enum EStage
{
ptLoadingPars,
ptVerifyingSources,
ptRepairing,
ptVerifyingRepaired,
};
enum EFileStatus
{
fsUnknown,
fsSuccess,
fsPartial,
fsFailure
};
class Segment
{
private:
bool m_bSuccess;
long long m_iOffset;
int m_iSize;
unsigned long m_lCrc;
public:
Segment(bool bSuccess, long long iOffset, int iSize, unsigned long lCrc);
bool GetSuccess() { return m_bSuccess; }
long long GetOffset() { return m_iOffset; }
int GetSize() { return m_iSize; }
unsigned long GetCrc() { return m_lCrc; }
};
typedef std::deque<Segment*> SegmentListBase;
class SegmentList : public SegmentListBase
{
public:
~SegmentList();
};
typedef std::deque<char*> FileList;
typedef std::deque<void*> SourceList;
typedef std::vector<bool> ValidBlocks;
friend class Repairer;
private:
char* m_szInfoName;
char* m_szDestDir;
char* m_szNZBName;
const char* m_szParFilename;
EStatus m_eStatus;
EStage m_eStage;
// declared as void* to avoid including libpar2 headers in this header file
void* m_pRepairer;
char* m_szErrMsg;
FileList m_QueuedParFiles;
Mutex m_mutexQueuedParFiles;
bool m_bQueuedParFilesChanged;
FileList m_ProcessedFiles;
int m_iProcessedFiles;
int m_iFilesToRepair;
int m_iExtraFiles;
bool m_bVerifyingExtraFiles;
char* m_szProgressLabel;
int m_iFileProgress;
int m_iStageProgress;
bool m_bCancelled;
SourceList m_sourceFiles;
std::string m_lastFilename;
bool m_bHasDamagedFiles;
bool m_bParQuick;
bool m_bForceRepair;
void Cleanup();
EStatus RunParCheckAll();
EStatus RunParCheck(const char* szParFilename);
int PreProcessPar();
bool LoadMainParBak();
int ProcessMorePars();
bool LoadMorePars();
bool AddSplittedFragments();
bool AddMissingFiles();
bool IsProcessedFile(const char* szFilename);
void WriteBrokenLog(EStatus eStatus);
void SaveSourceList();
void DeleteLeftovers();
void signal_filename(std::string str);
void signal_progress(double progress);
void signal_done(std::string str, int available, int total);
// declared as void* to avoid including libpar2 headers in this header file
// DiskFile* pDiskfile, Par2RepairerSourceFile* pSourcefile
EFileStatus VerifyDataFile(void* pDiskfile, void* pSourcefile, int* pAvailableBlocks);
bool VerifySuccessDataFile(void* pDiskfile, void* pSourcefile, unsigned long lDownloadCrc);
bool VerifyPartialDataFile(void* pDiskfile, void* pSourcefile, SegmentList* pSegments, ValidBlocks* pValidBlocks);
bool SmartCalcFileRangeCrc(FILE* pFile, long long lStart, long long lEnd, SegmentList* pSegments,
unsigned long* pDownloadCrc);
bool DumbCalcFileRangeCrc(FILE* pFile, long long lStart, long long lEnd, unsigned long* pDownloadCrc);
protected:
/**
* Unpause par2-files.
* Returns true if the files with the required number of blocks were unpaused,
* or false if there are no more files in the queue for this collection or not enough blocks.
*/
virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound) = 0;
virtual void UpdateProgress() {}
virtual void Completed() {}
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...) {}
virtual void RegisterParredFile(const char* szFilename) {}
virtual bool IsParredFile(const char* szFilename) { return false; }
virtual EFileStatus FindFileCrc(const char* szFilename, unsigned long* lCrc, SegmentList* pSegments) { return fsUnknown; }
EStage GetStage() { return m_eStage; }
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetFileProgress() { return m_iFileProgress; }
int GetStageProgress() { return m_iStageProgress; }
public:
ParChecker();
virtual ~ParChecker();
virtual void Run();
void SetDestDir(const char* szDestDir);
const char* GetParFilename() { return m_szParFilename; }
const char* GetInfoName() { return m_szInfoName; }
void SetInfoName(const char* szInfoName);
void SetNZBName(const char* szNZBName);
void SetParQuick(bool bParQuick) { m_bParQuick = bParQuick; }
bool GetParQuick() { return m_bParQuick; }
void SetForceRepair(bool bForceRepair) { m_bForceRepair = bForceRepair; }
bool GetForceRepair() { return m_bForceRepair; }
EStatus GetStatus() { return m_eStatus; }
void AddParFile(const char* szParFilename);
void QueueChanged();
void Cancel();
bool GetCancelled() { return m_bCancelled; }
};
#endif
#endif


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -33,7 +33,9 @@
#include <stdlib.h>
#include <string.h>
#include <fstream>
#include <stdio.h>
#include <stdarg.h>
#include <ctype.h>
#ifdef WIN32
#include <direct.h>
#else
@@ -43,12 +45,10 @@
#include "nzbget.h"
#include "ParCoordinator.h"
#include "Options.h"
#include "DiskState.h"
#include "Log.h"
#include "Util.h"
#include "QueueCoordinator.h"
#include "DiskState.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
@@ -63,10 +63,127 @@ void ParCoordinator::PostParChecker::UpdateProgress()
m_pOwner->UpdateParCheckProgress();
}
void ParCoordinator::PostParChecker::PrintMessage(Message::EKind eKind, const char* szFormat, ...)
{
char szText[1024];
va_list args;
va_start(args, szFormat);
vsnprintf(szText, 1024, szFormat, args);
va_end(args);
szText[1024-1] = '\0';
m_pOwner->PrintMessage(m_pPostInfo, eKind, "%s", szText);
}
void ParCoordinator::PostParChecker::RegisterParredFile(const char* szFilename)
{
m_pPostInfo->GetParredFiles()->push_back(strdup(szFilename));
}
bool ParCoordinator::PostParChecker::IsParredFile(const char* szFilename)
{
for (PostInfo::ParredFiles::iterator it = m_pPostInfo->GetParredFiles()->begin(); it != m_pPostInfo->GetParredFiles()->end(); it++)
{
const char* szParredFile = *it;
if (!strcasecmp(szParredFile, szFilename))
{
return true;
}
}
return false;
}
ParChecker::EFileStatus ParCoordinator::PostParChecker::FindFileCrc(const char* szFilename,
unsigned long* lCrc, SegmentList* pSegments)
{
CompletedFile* pCompletedFile = NULL;
for (CompletedFiles::iterator it = m_pPostInfo->GetNZBInfo()->GetCompletedFiles()->begin(); it != m_pPostInfo->GetNZBInfo()->GetCompletedFiles()->end(); it++)
{
CompletedFile* pCompletedFile2 = *it;
if (!strcasecmp(pCompletedFile2->GetFileName(), szFilename))
{
pCompletedFile = pCompletedFile2;
break;
}
}
if (!pCompletedFile)
{
return ParChecker::fsUnknown;
}
debug("Found completed file: %s, CRC: %.8x, Status: %i", Util::BaseFileName(pCompletedFile->GetFileName()), pCompletedFile->GetCrc(), (int)pCompletedFile->GetStatus());
*lCrc = pCompletedFile->GetCrc();
if (pCompletedFile->GetStatus() == CompletedFile::cfPartial && pCompletedFile->GetID() > 0 &&
!m_pPostInfo->GetNZBInfo()->GetReprocess())
{
FileInfo* pTmpFileInfo = new FileInfo(pCompletedFile->GetID());
if (!g_pDiskState->LoadFileState(pTmpFileInfo, NULL, true))
{
delete pTmpFileInfo;
return ParChecker::fsUnknown;
}
for (FileInfo::Articles::iterator it = pTmpFileInfo->GetArticles()->begin(); it != pTmpFileInfo->GetArticles()->end(); it++)
{
ArticleInfo* pa = *it;
ParChecker::Segment* pSegment = new Segment(pa->GetStatus() == ArticleInfo::aiFinished,
pa->GetSegmentOffset(), pa->GetSegmentSize(), pa->GetCrc());
pSegments->push_back(pSegment);
}
delete pTmpFileInfo;
}
return pCompletedFile->GetStatus() == CompletedFile::cfSuccess ? ParChecker::fsSuccess :
pCompletedFile->GetStatus() == CompletedFile::cfFailure &&
!m_pPostInfo->GetNZBInfo()->GetReprocess() ? ParChecker::fsFailure :
pCompletedFile->GetStatus() == CompletedFile::cfPartial && pSegments->size() > 0 &&
!m_pPostInfo->GetNZBInfo()->GetReprocess()? ParChecker::fsPartial :
ParChecker::fsUnknown;
}
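A note on the mapping above: FindFileCrc turns the stored CompletedFile state into ParChecker's EFileStatus and loads per-article segments only for partially downloaded files that are not being reprocessed. A minimal standalone sketch of that classification step (simplified names and types, not nzbget code):

// Sketch: classify a completed file for quick par-verification from its stored
// download status and, for partial files, its recorded segments.
#include <vector>

enum class FileStatus { Unknown, Success, Partial, Failure };      // mirrors fsUnknown..fsFailure
enum class StoredStatus { Success, Partial, Failure };             // mirrors cfSuccess/cfPartial/cfFailure

struct SegmentSketch { bool ok; long long offset; int size; unsigned long crc; };

FileStatus ClassifyFile(StoredStatus stored, const std::vector<SegmentSketch>& segments, bool reprocess)
{
    if (stored == StoredStatus::Success) return FileStatus::Success;
    if (reprocess) return FileStatus::Unknown;                      // cached results are not trusted on reprocess
    if (stored == StoredStatus::Failure) return FileStatus::Failure;
    if (stored == StoredStatus::Partial && !segments.empty()) return FileStatus::Partial;
    return FileStatus::Unknown;
}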
void ParCoordinator::PostParRenamer::UpdateProgress()
{
m_pOwner->UpdateParRenameProgress();
}
void ParCoordinator::PostParRenamer::PrintMessage(Message::EKind eKind, const char* szFormat, ...)
{
char szText[1024];
va_list args;
va_start(args, szFormat);
vsnprintf(szText, 1024, szFormat, args);
va_end(args);
szText[1024-1] = '\0';
m_pOwner->PrintMessage(m_pPostInfo, eKind, "%s", szText);
}
void ParCoordinator::PostParRenamer::RegisterParredFile(const char* szFilename)
{
m_pPostInfo->GetParredFiles()->push_back(strdup(szFilename));
}
/**
* Update file name in the CompletedFiles-list of NZBInfo
*/
void ParCoordinator::PostParRenamer::RegisterRenamedFile(const char* szOldFilename, const char* szNewFileName)
{
for (CompletedFiles::iterator it = m_pPostInfo->GetNZBInfo()->GetCompletedFiles()->begin(); it != m_pPostInfo->GetNZBInfo()->GetCompletedFiles()->end(); it++)
{
CompletedFile* pCompletedFile = *it;
if (!strcasecmp(pCompletedFile->GetFileName(), szOldFilename))
{
pCompletedFile->SetFileName(szNewFileName);
break;
}
}
}
#endif
ParCoordinator::ParCoordinator()
@@ -74,18 +191,9 @@ ParCoordinator::ParCoordinator()
debug("Creating ParCoordinator");
#ifndef DISABLE_PARCHECK
m_ParCheckerObserver.m_pOwner = this;
m_ParChecker.Attach(&m_ParCheckerObserver);
m_ParChecker.m_pOwner = this;
m_ParRenamerObserver.m_pOwner = this;
m_ParRenamer.Attach(&m_ParRenamerObserver);
m_ParRenamer.m_pOwner = this;
m_bStopped = false;
const char* szPostScript = g_pOptions->GetPostProcess();
m_bPostScript = szPostScript && strlen(szPostScript) > 0;
m_ParChecker.m_pOwner = this;
m_ParRenamer.m_pOwner = this;
#endif
}
@@ -122,23 +230,12 @@ void ParCoordinator::Stop()
void ParCoordinator::PausePars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
debug("ParCoordinator: Pausing pars");
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
g_pQueueCoordinator->GetQueueEditor()->LockedEditEntry(pDownloadQueue, pFileInfo->GetID(), false,
(g_pOptions->GetLoadPars() == Options::lpOne ||
(g_pOptions->GetLoadPars() == Options::lpNone && g_pOptions->GetParCheck()))
? QueueEditor::eaGroupPauseExtraPars : QueueEditor::eaGroupPauseAllPars,
0, NULL);
break;
}
}
pDownloadQueue->EditEntry(pNZBInfo->GetID(),
DownloadQueue::eaGroupPauseExtraPars, 0, NULL);
}
bool ParCoordinator::FindMainPars(const char* szPath, FileList* pFileList)
bool ParCoordinator::FindMainPars(const char* szPath, ParFileList* pFileList)
{
if (pFileList)
{
@@ -158,7 +255,7 @@ bool ParCoordinator::FindMainPars(const char* szPath, FileList* pFileList)
// check if the base file has already been added to the list
bool exists = false;
for (FileList::iterator it = pFileList->begin(); it != pFileList->end(); it++)
for (ParFileList::iterator it = pFileList->begin(); it != pFileList->end(); it++)
{
const char* filename2 = *it;
exists = SameParCollection(filename, filename2);
@@ -202,8 +299,13 @@ bool ParCoordinator::ParseParFilename(const char* szParFilename, int* iBaseNameL
char* szEnd = szFilename;
while (char* p = strstr(szEnd, ".par2")) szEnd = p + 5;
*szEnd = '\0';
iLen = strlen(szFilename);
if (iLen < 6)
{
return false;
}
if (strcasecmp(szFilename + iLen - 5, ".par2"))
{
return false;
@@ -245,11 +347,15 @@ bool ParCoordinator::ParseParFilename(const char* szParFilename, int* iBaseNameL
*/
void ParCoordinator::StartParCheckJob(PostInfo* pPostInfo)
{
info("Checking pars for %s", pPostInfo->GetInfoName());
m_eCurrentJob = jkParCheck;
m_ParChecker.SetPostInfo(pPostInfo);
m_ParChecker.SetParFilename(pPostInfo->GetParFilename());
m_ParChecker.SetInfoName(pPostInfo->GetInfoName());
m_ParChecker.SetDestDir(pPostInfo->GetNZBInfo()->GetDestDir());
m_ParChecker.SetNZBName(pPostInfo->GetNZBInfo()->GetName());
m_ParChecker.SetParTime(time(NULL));
m_ParChecker.SetDownloadSec(pPostInfo->GetNZBInfo()->GetDownloadSec());
m_ParChecker.SetParQuick(g_pOptions->GetParQuick() && !pPostInfo->GetForceParFull());
m_ParChecker.SetForceRepair(pPostInfo->GetForceRepair());
m_ParChecker.PrintMessage(Message::mkInfo, "Checking pars for %s", pPostInfo->GetNZBInfo()->GetName());
pPostInfo->SetWorking(true);
m_ParChecker.Start();
}
@@ -259,11 +365,22 @@ void ParCoordinator::StartParCheckJob(PostInfo* pPostInfo)
*/
void ParCoordinator::StartParRenameJob(PostInfo* pPostInfo)
{
info("Checking renamed files for %s", pPostInfo->GetNZBInfo()->GetName());
const char* szDestDir = pPostInfo->GetNZBInfo()->GetDestDir();
char szFinalDir[1024];
if (pPostInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usSuccess)
{
pPostInfo->GetNZBInfo()->BuildFinalDirName(szFinalDir, 1024);
szFinalDir[1024-1] = '\0';
szDestDir = szFinalDir;
}
m_eCurrentJob = jkParRename;
m_ParRenamer.SetPostInfo(pPostInfo);
m_ParRenamer.SetDestDir(pPostInfo->GetNZBInfo()->GetDestDir());
m_ParRenamer.SetDestDir(szDestDir);
m_ParRenamer.SetInfoName(pPostInfo->GetNZBInfo()->GetName());
m_ParRenamer.SetDetectMissing(pPostInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usNone);
m_ParRenamer.PrintMessage(Message::mkInfo, "Checking renamed files for %s", pPostInfo->GetNZBInfo()->GetName());
pPostInfo->SetWorking(true);
m_ParRenamer.Start();
}
@@ -272,16 +389,12 @@ bool ParCoordinator::Cancel()
{
if (m_eCurrentJob == jkParCheck)
{
#ifdef HAVE_PAR2_CANCEL
if (!m_ParChecker.GetCancelled())
{
debug("Cancelling par-repair for %s", m_ParChecker.GetInfoName());
m_ParChecker.Cancel();
return true;
}
#else
warn("Cannot cancel par-repair for %s, used version of libpar2 does not support cancelling", m_ParChecker.GetInfoName());
#endif
}
else if (m_eCurrentJob == jkParRename)
{
@@ -301,19 +414,13 @@ bool ParCoordinator::Cancel()
bool ParCoordinator::AddPar(FileInfo* pFileInfo, bool bDeleted)
{
bool bSameCollection = m_ParChecker.IsRunning() &&
pFileInfo->GetNZBInfo() == m_ParChecker.GetPostInfo()->GetNZBInfo() &&
SameParCollection(pFileInfo->GetFilename(), Util::BaseFileName(m_ParChecker.GetParFilename()));
pFileInfo->GetNZBInfo() == m_ParChecker.GetPostInfo()->GetNZBInfo();
if (bSameCollection && !bDeleted)
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", pFileInfo->GetNZBInfo()->GetDestDir(), (int)PATH_SEPARATOR, pFileInfo->GetFilename());
szFullFilename[1024-1] = '\0';
m_ParChecker.AddParFile(szFullFilename);
if (g_pOptions->GetParPauseQueue())
{
PauseDownload();
}
}
else
{
@@ -322,106 +429,42 @@ bool ParCoordinator::AddPar(FileInfo* pFileInfo, bool bDeleted)
return bSameCollection;
}
void ParCoordinator::ParCheckerUpdate(Subject* Caller, void* Aspect)
void ParCoordinator::ParCheckCompleted()
{
if (m_ParChecker.GetStatus() == ParChecker::psFinished ||
m_ParChecker.GetStatus() == ParChecker::psFailed)
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
PostInfo* pPostInfo = m_ParChecker.GetPostInfo();
// Update ParStatus (accumulate result)
if ((m_ParChecker.GetStatus() == ParChecker::psRepaired ||
m_ParChecker.GetStatus() == ParChecker::psRepairNotNeeded) &&
pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped)
{
char szPath[1024];
strncpy(szPath, m_ParChecker.GetParFilename(), 1024);
szPath[1024-1] = '\0';
if (char* p = strrchr(szPath, PATH_SEPARATOR)) *p = '\0';
if (g_pOptions->GetCreateBrokenLog())
{
char szBrokenLogName[1024];
snprintf(szBrokenLogName, 1024, "%s%c_brokenlog.txt", szPath, (int)PATH_SEPARATOR);
szBrokenLogName[1024-1] = '\0';
if (!m_ParChecker.GetRepairNotNeeded() || Util::FileExists(szBrokenLogName))
{
FILE* file = fopen(szBrokenLogName, "ab");
if (file)
{
if (m_ParChecker.GetStatus() == ParChecker::psFailed)
{
if (m_ParChecker.GetCancelled())
{
fprintf(file, "Repair cancelled for %s\n", m_ParChecker.GetInfoName());
}
else
{
fprintf(file, "Repair failed for %s: %s\n", m_ParChecker.GetInfoName(), m_ParChecker.GetErrMsg() ? m_ParChecker.GetErrMsg() : "");
}
}
else if (m_ParChecker.GetRepairNotNeeded())
{
fprintf(file, "Repair not needed for %s\n", m_ParChecker.GetInfoName());
}
else
{
if (g_pOptions->GetParRepair())
{
fprintf(file, "Successfully repaired %s\n", m_ParChecker.GetInfoName());
}
else
{
fprintf(file, "Repair possible for %s\n", m_ParChecker.GetInfoName());
}
}
fclose(file);
}
else
{
error("Could not open file %s", szBrokenLogName);
}
}
}
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
PostInfo* pPostInfo = m_ParChecker.GetPostInfo();
pPostInfo->SetWorking(false);
if (pPostInfo->GetDeleted())
{
pPostInfo->SetStage(PostInfo::ptFinished);
}
else
{
pPostInfo->SetStage(PostInfo::ptQueued);
}
// Update ParStatus by NZBInfo (accumulate result)
if (m_ParChecker.GetStatus() == ParChecker::psFailed && !m_ParChecker.GetCancelled())
{
pPostInfo->SetParStatus(PostInfo::psFailure);
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psFailure);
}
else if (m_ParChecker.GetStatus() == ParChecker::psFinished &&
(g_pOptions->GetParRepair() || m_ParChecker.GetRepairNotNeeded()))
{
pPostInfo->SetParStatus(PostInfo::psSuccess);
if (pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psNone)
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psSuccess);
}
}
else
{
pPostInfo->SetParStatus(PostInfo::psRepairPossible);
if (pPostInfo->GetNZBInfo()->GetParStatus() != NZBInfo::psFailure)
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psRepairPossible);
}
}
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psSuccess);
}
else if (m_ParChecker.GetStatus() == ParChecker::psRepairPossible &&
pPostInfo->GetNZBInfo()->GetParStatus() != NZBInfo::psFailure)
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psRepairPossible);
}
else
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psFailure);
}
int iWaitTime = pPostInfo->GetNZBInfo()->GetDownloadSec() - m_ParChecker.GetDownloadSec();
pPostInfo->SetStartTime(pPostInfo->GetStartTime() + (time_t)iWaitTime);
int iParSec = (int)(time(NULL) - m_ParChecker.GetParTime()) - iWaitTime;
pPostInfo->GetNZBInfo()->SetParSec(pPostInfo->GetNZBInfo()->GetParSec() + iParSec);
pPostInfo->GetNZBInfo()->SetParFull(!m_ParChecker.GetParQuick());
pPostInfo->SetWorking(false);
pPostInfo->SetStage(PostInfo::ptQueued);
pDownloadQueue->Save();
DownloadQueue::Unlock();
}
/**
@@ -431,7 +474,7 @@ void ParCoordinator::ParCheckerUpdate(Subject* Caller, void* Aspect)
*/
bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilename, int iBlockNeeded, int* pBlockFound)
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
Blocks blocks;
blocks.clear();
@@ -445,7 +488,7 @@ bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilenam
FindPars(pDownloadQueue, pNZBInfo, szParFilename, &blocks, true, false, &iCurBlockFound);
iBlockFound += iCurBlockFound;
}
if (iBlockFound < iBlockNeeded && !g_pOptions->GetStrictParName())
if (iBlockFound < iBlockNeeded)
{
FindPars(pDownloadQueue, pNZBInfo, szParFilename, &blocks, false, false, &iCurBlockFound);
iBlockFound += iCurBlockFound;
@@ -473,7 +516,7 @@ bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilenam
{
if (pBestBlockInfo->m_pFileInfo->GetPaused())
{
info("Unpausing %s%c%s for par-recovery", pNZBInfo->GetName(), (int)PATH_SEPARATOR, pBestBlockInfo->m_pFileInfo->GetFilename());
m_ParChecker.PrintMessage(Message::mkInfo, "Unpausing %s%c%s for par-recovery", pNZBInfo->GetName(), (int)PATH_SEPARATOR, pBestBlockInfo->m_pFileInfo->GetFilename());
pBestBlockInfo->m_pFileInfo->SetPaused(false);
pBestBlockInfo->m_pFileInfo->SetExtraPriority(true);
}
@@ -497,7 +540,7 @@ bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilenam
BlockInfo* pBlockInfo = blocks.front();
if (pBlockInfo->m_pFileInfo->GetPaused())
{
info("Unpausing %s%c%s for par-recovery", pNZBInfo->GetName(), (int)PATH_SEPARATOR, pBlockInfo->m_pFileInfo->GetFilename());
m_ParChecker.PrintMessage(Message::mkInfo, "Unpausing %s%c%s for par-recovery", pNZBInfo->GetName(), (int)PATH_SEPARATOR, pBlockInfo->m_pFileInfo->GetFilename());
pBlockInfo->m_pFileInfo->SetPaused(false);
pBlockInfo->m_pFileInfo->SetExtraPriority(true);
}
@@ -505,7 +548,7 @@ bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilenam
}
}
g_pQueueCoordinator->UnlockQueue();
DownloadQueue::Unlock();
if (pBlockFound)
{
@@ -520,11 +563,6 @@ bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilenam
bool bOK = iBlockNeeded <= 0;
if (bOK && g_pOptions->GetParPauseQueue())
{
UnpauseDownload();
}
return bOK;
}
@@ -548,12 +586,11 @@ void ParCoordinator::FindPars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo,
szMainBaseFilename[maxlen] = '\0';
for (char* p = szMainBaseFilename; *p; p++) *p = tolower(*p); // convert string to lowercase
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
for (FileList::iterator it = pNZBInfo->GetFileList()->begin(); it != pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
int iBlocks = 0;
if (pFileInfo->GetNZBInfo() == pNZBInfo &&
ParseParFilename(pFileInfo->GetFilename(), NULL, &iBlocks) &&
if (ParseParFilename(pFileInfo->GetFilename(), NULL, &iBlocks) &&
iBlocks > 0)
{
bool bUseFile = true;
@@ -615,9 +652,9 @@ void ParCoordinator::FindPars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo,
void ParCoordinator::UpdateParCheckProgress()
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
DownloadQueue::Lock();
PostInfo* pPostInfo = pDownloadQueue->GetPostQueue()->front();
PostInfo* pPostInfo = m_ParChecker.GetPostInfo();
if (m_ParChecker.GetFileProgress() == 0)
{
pPostInfo->SetProgressLabel(m_ParChecker.GetProgressLabel());
@@ -628,19 +665,22 @@ void ParCoordinator::UpdateParCheckProgress()
PostInfo::EStage eStage = StageKind[m_ParChecker.GetStage()];
time_t tCurrent = time(NULL);
if (!pPostInfo->GetStartTime())
{
pPostInfo->SetStartTime(tCurrent);
}
if (pPostInfo->GetStage() != eStage)
{
pPostInfo->SetStage(eStage);
pPostInfo->SetStageTime(tCurrent);
if (pPostInfo->GetStage() == PostInfo::ptRepairing)
{
m_ParChecker.SetRepairTime(tCurrent);
}
else if (pPostInfo->GetStage() == PostInfo::ptVerifyingRepaired)
{
int iRepairSec = (int)(tCurrent - m_ParChecker.GetRepairTime());
pPostInfo->GetNZBInfo()->SetRepairSec(pPostInfo->GetNZBInfo()->GetRepairSec() + iRepairSec);
}
}
bool bParCancel = false;
#ifdef HAVE_PAR2_CANCEL
if (!m_ParChecker.GetCancelled())
{
if ((g_pOptions->GetParTimeLimit() > 0) &&
@@ -654,35 +694,36 @@ void ParCoordinator::UpdateParCheckProgress()
if (iEstimatedRepairTime > g_pOptions->GetParTimeLimit() * 60)
{
debug("Estimated repair time %i seconds", iEstimatedRepairTime);
warn("Cancelling par-repair for %s, estimated repair time (%i minutes) exceeds allowed repair time", m_ParChecker.GetInfoName(), iEstimatedRepairTime / 60);
m_ParChecker.PrintMessage(Message::mkWarning, "Cancelling par-repair for %s, estimated repair time (%i minutes) exceeds allowed repair time", m_ParChecker.GetInfoName(), iEstimatedRepairTime / 60);
bParCancel = true;
}
}
}
#endif
if (bParCancel)
{
m_ParChecker.Cancel();
}
g_pQueueCoordinator->UnlockQueue();
DownloadQueue::Unlock();
CheckPauseState(pPostInfo);
}
void ParCoordinator::CheckPauseState(PostInfo* pPostInfo)
{
if (g_pOptions->GetPausePostProcess())
if (g_pOptions->GetPausePostProcess() && !pPostInfo->GetNZBInfo()->GetForcePriority())
{
time_t tStageTime = pPostInfo->GetStageTime();
time_t tStartTime = pPostInfo->GetStartTime();
time_t tParTime = m_ParChecker.GetParTime();
time_t tRepairTime = m_ParChecker.GetRepairTime();
time_t tWaitTime = time(NULL);
// wait until Post-processor is unpaused
while (g_pOptions->GetPausePostProcess() && !m_bStopped)
while (g_pOptions->GetPausePostProcess() && !pPostInfo->GetNZBInfo()->GetForcePriority() && !m_bStopped)
{
usleep(100 * 1000);
usleep(50 * 1000);
// update time stamps
@@ -692,77 +733,96 @@ void ParCoordinator::CheckPauseState(PostInfo* pPostInfo)
{
pPostInfo->SetStageTime(tStageTime + tDelta);
}
if (tStartTime > 0)
{
pPostInfo->SetStartTime(tStartTime + tDelta);
}
if (tParTime > 0)
{
m_ParChecker.SetParTime(tParTime + tDelta);
}
if (tRepairTime > 0)
{
m_ParChecker.SetRepairTime(tRepairTime + tDelta);
}
}
}
}
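The loop above compensates for pauses: while post-processing is paused, the stored stage, start, par and repair timestamps are shifted forward by the elapsed wait time so that the pause does not count towards any duration statistics. A minimal sketch of the same pattern (illustrative names, not nzbget's):

// Sketch: keep timestamps "frozen" while paused by moving them forward with the wait time.
#include <atomic>
#include <chrono>
#include <ctime>
#include <thread>

void WaitWhilePaused(const std::atomic<bool>& paused, const std::atomic<bool>& stopped,
    time_t& stageTime, time_t& startTime)
{
    const time_t waitStart = time(nullptr);
    const time_t savedStage = stageTime;
    const time_t savedStart = startTime;

    while (paused && !stopped)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        const time_t delta = time(nullptr) - waitStart;
        if (savedStage > 0) stageTime = savedStage + delta;         // shift, do not accumulate
        if (savedStart > 0) startTime = savedStart + delta;
    }
}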
void ParCoordinator::ParRenamerUpdate(Subject* Caller, void* Aspect)
void ParCoordinator::ParRenameCompleted()
{
if (m_ParRenamer.GetStatus() == ParRenamer::psFinished ||
m_ParRenamer.GetStatus() == ParRenamer::psFailed)
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
PostInfo* pPostInfo = m_ParRenamer.GetPostInfo();
pPostInfo->GetNZBInfo()->SetRenameStatus(m_ParRenamer.GetStatus() == ParRenamer::psSuccess ? NZBInfo::rsSuccess : NZBInfo::rsFailure);
if (m_ParRenamer.HasMissedFiles() && pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped)
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
PostInfo* pPostInfo = m_ParRenamer.GetPostInfo();
pPostInfo->SetWorking(false);
if (pPostInfo->GetDeleted())
{
pPostInfo->SetStage(PostInfo::ptFinished);
}
else
{
pPostInfo->SetStage(PostInfo::ptQueued);
}
// Update ParStatus by NZBInfo
if (m_ParRenamer.GetStatus() == ParRenamer::psFailed && !m_ParRenamer.GetCancelled())
{
pPostInfo->SetRenameStatus(PostInfo::rsFailure);
pPostInfo->GetNZBInfo()->SetRenameStatus(NZBInfo::rsFailure);
}
else if (m_ParRenamer.GetStatus() == ParRenamer::psFinished)
{
pPostInfo->SetRenameStatus(PostInfo::rsSuccess);
pPostInfo->GetNZBInfo()->SetRenameStatus(NZBInfo::rsSuccess);
}
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
PrintMessage(pPostInfo, Message::mkInfo, "Requesting par-check/repair for %s to restore missing files", m_ParRenamer.GetInfoName());
pPostInfo->SetRequestParCheck(true);
}
pPostInfo->SetWorking(false);
pPostInfo->SetStage(PostInfo::ptQueued);
pDownloadQueue->Save();
DownloadQueue::Unlock();
}
void ParCoordinator::UpdateParRenameProgress()
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
DownloadQueue::Lock();
PostInfo* pPostInfo = pDownloadQueue->GetPostQueue()->front();
PostInfo* pPostInfo = m_ParRenamer.GetPostInfo();
pPostInfo->SetProgressLabel(m_ParRenamer.GetProgressLabel());
pPostInfo->SetStageProgress(m_ParRenamer.GetStageProgress());
time_t tCurrent = time(NULL);
if (!pPostInfo->GetStartTime())
{
pPostInfo->SetStartTime(tCurrent);
}
if (pPostInfo->GetStage() != PostInfo::ptRenaming)
{
pPostInfo->SetStage(PostInfo::ptRenaming);
pPostInfo->SetStageTime(tCurrent);
}
g_pQueueCoordinator->UnlockQueue();
DownloadQueue::Unlock();
CheckPauseState(pPostInfo);
}
void ParCoordinator::PrintMessage(PostInfo* pPostInfo, Message::EKind eKind, const char* szFormat, ...)
{
char szText[1024];
va_list args;
va_start(args, szFormat);
vsnprintf(szText, 1024, szFormat, args);
va_end(args);
szText[1024-1] = '\0';
pPostInfo->AppendMessage(eKind, szText);
switch (eKind)
{
case Message::mkDetail:
detail("%s", szText);
break;
case Message::mkInfo:
info("%s", szText);
break;
case Message::mkWarning:
warn("%s", szText);
break;
case Message::mkError:
error("%s", szText);
break;
case Message::mkDebug:
debug("%s", szText);
break;
}
}
#endif

View File

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -29,7 +29,6 @@
#include <list>
#include <deque>
#include "Observer.h"
#include "DownloadInfo.h"
#ifndef DISABLE_PARCHECK
@@ -41,35 +40,35 @@ class ParCoordinator
{
private:
#ifndef DISABLE_PARCHECK
class ParCheckerObserver: public Observer
{
public:
ParCoordinator* m_pOwner;
virtual void Update(Subject* Caller, void* Aspect) { m_pOwner->ParCheckerUpdate(Caller, Aspect); }
};
class PostParChecker: public ParChecker
{
private:
ParCoordinator* m_pOwner;
PostInfo* m_pPostInfo;
time_t m_tParTime;
time_t m_tRepairTime;
int m_iDownloadSec;
protected:
virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound);
virtual void UpdateProgress();
virtual void Completed() { m_pOwner->ParCheckCompleted(); }
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...);
virtual void RegisterParredFile(const char* szFilename);
virtual bool IsParredFile(const char* szFilename);
virtual EFileStatus FindFileCrc(const char* szFilename, unsigned long* lCrc, SegmentList* pSegments);
public:
PostInfo* GetPostInfo() { return m_pPostInfo; }
void SetPostInfo(PostInfo* pPostInfo) { m_pPostInfo = pPostInfo; }
time_t GetParTime() { return m_tParTime; }
void SetParTime(time_t tParTime) { m_tParTime = tParTime; }
time_t GetRepairTime() { return m_tRepairTime; }
void SetRepairTime(time_t tRepairTime) { m_tRepairTime = tRepairTime; }
int GetDownloadSec() { return m_iDownloadSec; }
void SetDownloadSec(int iDownloadSec) { m_iDownloadSec = iDownloadSec; }
friend class ParCoordinator;
};
class ParRenamerObserver: public Observer
{
public:
ParCoordinator* m_pOwner;
virtual void Update(Subject* Caller, void* Aspect) { m_pOwner->ParRenamerUpdate(Caller, Aspect); }
};
class PostParRenamer: public ParRenamer
{
private:
@@ -77,6 +76,10 @@ private:
PostInfo* m_pPostInfo;
protected:
virtual void UpdateProgress();
virtual void Completed() { m_pOwner->ParRenameCompleted(); }
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...);
virtual void RegisterParredFile(const char* szFilename);
virtual void RegisterRenamedFile(const char* szOldFilename, const char* szNewFileName);
public:
PostInfo* GetPostInfo() { return m_pPostInfo; }
void SetPostInfo(PostInfo* pPostInfo) { m_pPostInfo = pPostInfo; }
@@ -100,39 +103,35 @@ private:
private:
PostParChecker m_ParChecker;
ParCheckerObserver m_ParCheckerObserver;
bool m_bStopped;
bool m_bPostScript;
PostParRenamer m_ParRenamer;
ParRenamerObserver m_ParRenamerObserver;
EJobKind m_eCurrentJob;
protected:
virtual bool PauseDownload() = 0;
virtual bool UnpauseDownload() = 0;
void UpdateParCheckProgress();
void UpdateParRenameProgress();
void ParCheckCompleted();
void ParRenameCompleted();
void CheckPauseState(PostInfo* pPostInfo);
bool RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilename, int iBlockNeeded, int* pBlockFound);
void PrintMessage(PostInfo* pPostInfo, Message::EKind eKind, const char* szFormat, ...);
#endif
public:
typedef std::deque<char*> FileList;
typedef std::deque<char*> ParFileList;
public:
ParCoordinator();
virtual ~ParCoordinator();
static bool FindMainPars(const char* szPath, FileList* pFileList);
static bool FindMainPars(const char* szPath, ParFileList* pFileList);
static bool ParseParFilename(const char* szParFilename, int* iBaseNameLen, int* iBlocks);
static bool SameParCollection(const char* szFilename1, const char* szFilename2);
void PausePars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
#ifndef DISABLE_PARCHECK
void ParCheckerUpdate(Subject* Caller, void* Aspect);
void ParRenamerUpdate(Subject* Caller, void* Aspect);
void CheckPauseState(PostInfo* pPostInfo);
bool AddPar(FileInfo* pFileInfo, bool bDeleted);
bool RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilename, int iBlockNeeded, int* pBlockFound);
void FindPars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szParFilename,
Blocks* pBlocks, bool bStrictParName, bool bExactParName, int* pBlockFound);
void UpdateParCheckProgress();
void UpdateParRenameProgress();
void StartParCheckJob(PostInfo* pPostInfo);
void StartParRenameJob(PostInfo* pPostInfo);
void Stop();

View File

@@ -0,0 +1,489 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#ifndef DISABLE_PARCHECK
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#ifndef WIN32
#include <unistd.h>
#endif
#include "par2cmdline.h"
#include "par2repairer.h"
#include "md5.h"
#include "nzbget.h"
#include "ParRenamer.h"
#include "ParCoordinator.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
class ParRenamerRepairer : public Par2Repairer
{
public:
friend class ParRenamer;
};
ParRenamer::FileHash::FileHash(const char* szFilename, const char* szHash)
{
m_szFilename = strdup(szFilename);
m_szHash = strdup(szHash);
m_bFileExists = false;
}
ParRenamer::FileHash::~FileHash()
{
free(m_szFilename);
free(m_szHash);
}
ParRenamer::ParRenamer()
{
debug("Creating ParRenamer");
m_eStatus = psFailed;
m_szDestDir = NULL;
m_szInfoName = NULL;
m_szProgressLabel = (char*)malloc(1024);
m_iStageProgress = 0;
m_bCancelled = false;
m_bHasMissedFiles = false;
m_bDetectMissing = false;
}
ParRenamer::~ParRenamer()
{
debug("Destroying ParRenamer");
free(m_szDestDir);
free(m_szInfoName);
free(m_szProgressLabel);
Cleanup();
}
void ParRenamer::Cleanup()
{
ClearHashList();
for (DirList::iterator it = m_DirList.begin(); it != m_DirList.end(); it++)
{
free(*it);
}
m_DirList.clear();
}
void ParRenamer::ClearHashList()
{
for (FileHashList::iterator it = m_FileHashList.begin(); it != m_FileHashList.end(); it++)
{
delete *it;
}
m_FileHashList.clear();
}
void ParRenamer::SetDestDir(const char * szDestDir)
{
free(m_szDestDir);
m_szDestDir = strdup(szDestDir);
}
void ParRenamer::SetInfoName(const char * szInfoName)
{
free(m_szInfoName);
m_szInfoName = strdup(szInfoName);
}
void ParRenamer::Cancel()
{
m_bCancelled = true;
}
void ParRenamer::Run()
{
Cleanup();
m_bCancelled = false;
m_iFileCount = 0;
m_iCurFile = 0;
m_iRenamedCount = 0;
m_bHasMissedFiles = false;
m_eStatus = psFailed;
snprintf(m_szProgressLabel, 1024, "Checking renamed files for %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iStageProgress = 0;
UpdateProgress();
BuildDirList(m_szDestDir);
for (DirList::iterator it = m_DirList.begin(); it != m_DirList.end(); it++)
{
char* szDestDir = *it;
debug("Checking %s", szDestDir);
ClearHashList();
LoadParFiles(szDestDir);
if (m_FileHashList.empty())
{
int iSavedCurFile = m_iCurFile;
CheckFiles(szDestDir, true);
m_iCurFile = iSavedCurFile; // restore progress indicator
LoadParFiles(szDestDir);
}
CheckFiles(szDestDir, false);
if (m_bDetectMissing)
{
CheckMissing();
}
}
if (m_bCancelled)
{
PrintMessage(Message::mkWarning, "Renaming cancelled for %s", m_szInfoName);
}
else if (m_iRenamedCount > 0)
{
PrintMessage(Message::mkInfo, "Successfully renamed %i file(s) for %s", m_iRenamedCount, m_szInfoName);
m_eStatus = psSuccess;
}
else
{
PrintMessage(Message::mkInfo, "No renamed files found for %s", m_szInfoName);
}
Cleanup();
Completed();
}
void ParRenamer::BuildDirList(const char* szDestDir)
{
m_DirList.push_back(strdup(szDestDir));
char* szFullFilename = (char*)malloc(1024);
DirBrowser* pDirBrowser = new DirBrowser(szDestDir);
while (const char* filename = pDirBrowser->Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && !m_bCancelled)
{
snprintf(szFullFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (Util::DirectoryExists(szFullFilename))
{
BuildDirList(szFullFilename);
}
else
{
m_iFileCount++;
}
}
}
free(szFullFilename);
delete pDirBrowser;
}
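BuildDirList walks the destination directory recursively, remembering every sub-directory for the later rename passes and counting regular files so the stage progress has a denominator. The same walk expressed with std::filesystem, purely as an illustration (a stand-in for nzbget's DirBrowser, not the code used here):

// Sketch: collect all directories below destDir and count the regular files in them.
#include <filesystem>
#include <string>
#include <vector>

void BuildDirListSketch(const std::filesystem::path& destDir,
    std::vector<std::string>& dirs, int& fileCount)
{
    dirs.push_back(destDir.string());
    for (const auto& entry : std::filesystem::directory_iterator(destDir))
    {
        if (entry.is_directory())
        {
            BuildDirListSketch(entry.path(), dirs, fileCount);      // recurse like BuildDirList
        }
        else
        {
            fileCount++;                                            // progress denominator
        }
    }
}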
void ParRenamer::LoadParFiles(const char* szDestDir)
{
ParCoordinator::ParFileList parFileList;
ParCoordinator::FindMainPars(szDestDir, &parFileList);
for (ParCoordinator::ParFileList::iterator it = parFileList.begin(); it != parFileList.end(); it++)
{
char* szParFilename = *it;
char szFullParFilename[1024];
snprintf(szFullParFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, szParFilename);
szFullParFilename[1024-1] = '\0';
LoadParFile(szFullParFilename);
free(*it);
}
}
void ParRenamer::LoadParFile(const char* szParFilename)
{
ParRenamerRepairer* pRepairer = new ParRenamerRepairer();
if (!pRepairer->LoadPacketsFromFile(szParFilename))
{
PrintMessage(Message::mkWarning, "Could not load par2-file %s", szParFilename);
delete pRepairer;
return;
}
for (map<MD5Hash, Par2RepairerSourceFile*>::iterator it = pRepairer->sourcefilemap.begin(); it != pRepairer->sourcefilemap.end(); it++)
{
if (m_bCancelled)
{
break;
}
Par2RepairerSourceFile* sourceFile = (*it).second;
if (!sourceFile || !sourceFile->GetDescriptionPacket())
{
warn("Damaged par2-file detected: %s", szParFilename);
continue;
}
m_FileHashList.push_back(new FileHash(sourceFile->GetDescriptionPacket()->FileName().c_str(),
sourceFile->GetDescriptionPacket()->Hash16k().print().c_str()));
RegisterParredFile(sourceFile->GetDescriptionPacket()->FileName().c_str());
}
delete pRepairer;
}
void ParRenamer::CheckFiles(const char* szDestDir, bool bRenamePars)
{
DirBrowser dir(szDestDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && !m_bCancelled)
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (!Util::DirectoryExists(szFullFilename))
{
snprintf(m_szProgressLabel, 1024, "Checking file %s", filename);
m_szProgressLabel[1024-1] = '\0';
m_iStageProgress = m_iCurFile * 1000 / m_iFileCount;
UpdateProgress();
m_iCurFile++;
if (bRenamePars)
{
CheckParFile(szDestDir, szFullFilename);
}
else
{
CheckRegularFile(szDestDir, szFullFilename);
}
}
}
}
}
void ParRenamer::CheckMissing()
{
for (FileHashList::iterator it = m_FileHashList.begin(); it != m_FileHashList.end(); it++)
{
FileHash* pFileHash = *it;
if (!pFileHash->GetFileExists())
{
if (Util::MatchFileExt(pFileHash->GetFilename(), g_pOptions->GetParIgnoreExt(), ",;") ||
Util::MatchFileExt(pFileHash->GetFilename(), g_pOptions->GetExtCleanupDisk(), ",;"))
{
info("File %s is missing, ignoring", pFileHash->GetFilename());
}
else
{
info("File %s is missing", pFileHash->GetFilename());
m_bHasMissedFiles = true;
}
}
}
}
bool ParRenamer::IsSplittedFragment(const char* szFilename, const char* szCorrectName)
{
bool bSplittedFragement = false;
const char* szDiskBasename = Util::BaseFileName(szFilename);
const char* szExtension = strrchr(szDiskBasename, '.');
int iBaseLen = strlen(szCorrectName);
if (szExtension && !strncasecmp(szDiskBasename, szCorrectName, iBaseLen))
{
const char* p = szDiskBasename + iBaseLen;
if (*p == '.')
{
for (p++; *p && strchr("0123456789", *p); p++) ;
bSplittedFragement = !*p;
bSplittedFragement = bSplittedFragement && atoi(szDiskBasename + iBaseLen + 1) <= 1; // .000 or .001
}
}
return bSplittedFragement;
}
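IsSplittedFragment exists to keep the renamer from clobbering the joined file with a numbered split fragment: a fragment such as name.001 shares its first 16 KiB with the full file and would otherwise match the same hash. A standalone version of the check with worked examples (simplified, mirrors only the logic above):

// Sketch: a disk file counts as a split fragment of correctName when it is
// correctName plus a purely numeric extension of 000 or 001.
#include <cctype>
#include <cstdlib>
#include <string>

static bool StartsWithNoCase(const std::string& s, const std::string& prefix)
{
    if (s.size() < prefix.size()) return false;
    for (size_t i = 0; i < prefix.size(); i++)
    {
        if (tolower((unsigned char)s[i]) != tolower((unsigned char)prefix[i])) return false;
    }
    return true;
}

bool IsSplitFragmentSketch(const std::string& diskName, const std::string& correctName)
{
    if (diskName.size() <= correctName.size()) return false;
    if (!StartsWithNoCase(diskName, correctName)) return false;
    if (diskName[correctName.size()] != '.') return false;

    const std::string ext = diskName.substr(correctName.size() + 1);
    for (char c : ext)
    {
        if (!isdigit((unsigned char)c)) return false;               // extension must be all digits
    }
    return atoi(ext.c_str()) <= 1;                                  // only .000 or .001 qualify
}

// Examples:
//   IsSplitFragmentSketch("movie.mkv.001", "movie.mkv")  -> true   (first split part)
//   IsSplitFragmentSketch("movie.mkv.par2", "movie.mkv") -> false  (extension not numeric)
//   IsSplitFragmentSketch("other.mkv.001", "movie.mkv")  -> false  (different base name)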
void ParRenamer::CheckRegularFile(const char* szDestDir, const char* szFilename)
{
debug("Computing hash for %s", szFilename);
const int iBlockSize = 16*1024;
FILE* pFile = fopen(szFilename, FOPEN_RB);
if (!pFile)
{
PrintMessage(Message::mkError, "Could not open file %s", szFilename);
return;
}
// load first 16K of the file into buffer
void* pBuffer = malloc(iBlockSize);
int iReadBytes = fread(pBuffer, 1, iBlockSize, pFile);
int iError = ferror(pFile);
if (iReadBytes != iBlockSize && iError)
{
PrintMessage(Message::mkError, "Could not read file %s", szFilename);
free(pBuffer);
fclose(pFile);
return;
}
fclose(pFile);
MD5Hash hash16k;
MD5Context context;
context.Update(pBuffer, iReadBytes);
context.Final(hash16k);
free(pBuffer);
debug("file: %s; hash16k: %s", Util::BaseFileName(szFilename), hash16k.print().c_str());
for (FileHashList::iterator it = m_FileHashList.begin(); it != m_FileHashList.end(); it++)
{
FileHash* pFileHash = *it;
if (!strcmp(pFileHash->GetHash(), hash16k.print().c_str()))
{
debug("Found correct filename: %s", pFileHash->GetFilename());
pFileHash->SetFileExists(true);
char szDstFilename[1024];
snprintf(szDstFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, pFileHash->GetFilename());
szDstFilename[1024-1] = '\0';
if (!Util::FileExists(szDstFilename) && !IsSplittedFragment(szFilename, pFileHash->GetFilename()))
{
RenameFile(szFilename, szDstFilename);
}
break;
}
}
}
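CheckRegularFile identifies files by hashing only their first 16 KiB and comparing the digest against the Hash16k values collected from the par2 packets; a match is renamed to the canonical name unless the target already exists or the file is a split fragment. A reduced sketch of that lookup-and-rename flow, with the digest left as a caller-supplied stand-in for the MD5-16k used above (none of the names are nzbget's):

// Sketch: rename a file to the canonical name recorded in par2 metadata,
// identified by the digest of its first 16 KiB.
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

bool RenameByHash16k(const std::string& path,
    const std::unordered_map<std::string, std::string>& digestToName,          // digest -> canonical name (from par2)
    const std::function<std::string(const std::vector<char>&)>& digest16k)     // stand-in for MD5-16k
{
    FILE* file = fopen(path.c_str(), "rb");
    if (!file) return false;
    std::vector<char> buffer(16 * 1024);
    size_t read = fread(buffer.data(), 1, buffer.size(), file);
    fclose(file);
    buffer.resize(read);                                     // short files hash whatever was read

    auto it = digestToName.find(digest16k(buffer));
    if (it == digestToName.end()) return false;              // unknown content, leave the name alone

    const std::string target = path.substr(0, path.find_last_of("/\\") + 1) + it->second;
    if (FILE* existing = fopen(target.c_str(), "rb"))        // mirror the guard above: never overwrite
    {
        fclose(existing);
        return false;
    }
    return rename(path.c_str(), target.c_str()) == 0;
}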
/*
* For files that do not have a par2-extension: checks whether the file is a par2-file
* and renames it according to its set-id.
*/
void ParRenamer::CheckParFile(const char* szDestDir, const char* szFilename)
{
debug("Checking par2-header for %s", szFilename);
const char* szBasename = Util::BaseFileName(szFilename);
const char* szExtension = strrchr(szBasename, '.');
if (szExtension && !strcasecmp(szExtension, ".par2"))
{
// do not process files that already have a par2-extension
return;
}
FILE* pFile = fopen(szFilename, FOPEN_RB);
if (!pFile)
{
PrintMessage(Message::mkError, "Could not open file %s", szFilename);
return;
}
// load par2-header
PACKET_HEADER header;
int iReadBytes = fread(&header, 1, sizeof(header), pFile);
int iError = ferror(pFile);
if (iReadBytes != sizeof(header) && iError)
{
PrintMessage(Message::mkError, "Could not read file %s", szFilename);
fclose(pFile);
return;
}
fclose(pFile);
// Check the packet header
if (packet_magic != header.magic || // not par2-file
sizeof(PACKET_HEADER) > header.length || // packet length is too small
0 != (header.length & 3) || // packet length is not a multiple of 4
Util::FileSize(szFilename) < (int)header.length) // packet would extend beyond the end of the file
{
// not a par2-file or damaged header, ignoring the file
return;
}
char szSetId[33];
strncpy(szSetId, header.setid.print().c_str(), sizeof(szSetId));
szSetId[33-1] = '\0';
for (char* p = szSetId; *p; p++) *p = tolower(*p); // convert string to lowercase
debug("Renaming: %s; setid: %s", Util::BaseFileName(szFilename), szSetId);
char szDestFileName[1024];
int iNum = 1;
while (iNum == 1 || Util::FileExists(szDestFileName))
{
snprintf(szDestFileName, 1024, "%s%c%s.vol%03i+01.PAR2", szDestDir, PATH_SEPARATOR, szSetId, iNum);
szDestFileName[1024-1] = '\0';
iNum++;
}
RenameFile(szFilename, szDestFileName);
}
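The header test above is all CheckParFile needs to recognise a par2 file: the 8-byte magic must be present, the declared packet length must be at least the header size and a multiple of 4, and the packet must not extend past the end of the file. A standalone sketch of that validity test (the struct is a simplified assumption, not par2cmdline's PACKET_HEADER):

// Sketch: sanity-check a PAR2 packet header read from the start of a file.
#include <cstdint>
#include <cstring>

struct Par2HeaderSketch                 // simplified stand-in for PACKET_HEADER
{
    char magic[8];                      // "PAR2\0PKT"
    uint64_t length;                    // length of the whole packet, header included
    // hashes, set id and packet type follow in the real layout
};

bool LooksLikePar2Packet(const Par2HeaderSketch& header, long long fileSize)
{
    static const char kMagic[8] = { 'P', 'A', 'R', '2', '\0', 'P', 'K', 'T' };
    return memcmp(header.magic, kMagic, sizeof(kMagic)) == 0    // par2 magic present
        && header.length >= sizeof(Par2HeaderSketch)            // not smaller than the header itself
        && (header.length & 3) == 0                             // multiple of 4
        && (long long)header.length <= fileSize;                // does not extend past end of file
}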
void ParRenamer::RenameFile(const char* szSrcFilename, const char* szDestFileName)
{
PrintMessage(Message::mkInfo, "Renaming %s to %s", Util::BaseFileName(szSrcFilename), Util::BaseFileName(szDestFileName));
if (!Util::MoveFile(szSrcFilename, szDestFileName))
{
char szErrBuf[256];
PrintMessage(Message::mkError, "Could not rename %s to %s: %s", szSrcFilename, szDestFileName,
Util::GetLastErrorMessage(szErrBuf, sizeof(szErrBuf)));
return;
}
m_iRenamedCount++;
// notify about new file name
RegisterRenamedFile(Util::BaseFileName(szSrcFilename), Util::BaseFileName(szDestFileName));
}
#endif

View File

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -31,16 +31,15 @@
#include <deque>
#include "Thread.h"
#include "Observer.h"
#include "Log.h"
class ParRenamer : public Thread, public Subject
class ParRenamer : public Thread
{
public:
enum EStatus
{
psUnknown,
psFailed,
psFinished
psSuccess
};
class FileHash
@@ -48,15 +47,19 @@ public:
private:
char* m_szFilename;
char* m_szHash;
bool m_bFileExists;
public:
FileHash(const char* szFilename, const char* szHash);
~FileHash();
const char* GetFilename() { return m_szFilename; }
const char* GetHash() { return m_szHash; }
bool GetFileExists() { return m_bFileExists; }
void SetFileExists(bool bFileExists) { m_bFileExists = bFileExists; }
};
typedef std::deque<FileHash*> FileHashList;
typedef std::deque<char*> DirList;
private:
char* m_szInfoName;
@@ -65,17 +68,33 @@ private:
char* m_szProgressLabel;
int m_iStageProgress;
bool m_bCancelled;
FileHashList m_fileHashList;
DirList m_DirList;
FileHashList m_FileHashList;
int m_iFileCount;
int m_iCurFile;
int m_iRenamedCount;
bool m_bHasMissedFiles;
bool m_bDetectMissing;
void Cleanup();
void LoadParFiles();
void ClearHashList();
void BuildDirList(const char* szDestDir);
void CheckDir(const char* szDestDir);
void LoadParFiles(const char* szDestDir);
void LoadParFile(const char* szParFilename);
void CheckFiles();
void CheckFile(const char* szFilename);
void CheckFiles(const char* szDestDir, bool bRenamePars);
void CheckRegularFile(const char* szDestDir, const char* szFilename);
void CheckParFile(const char* szDestDir, const char* szFilename);
bool IsSplittedFragment(const char* szFilename, const char* szCorrectName);
void CheckMissing();
void RenameFile(const char* szSrcFilename, const char* szDestFileName);
protected:
virtual void UpdateProgress() {}
virtual void Completed() {}
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...) {}
virtual void RegisterParredFile(const char* szFilename) {}
virtual void RegisterRenamedFile(const char* szOldFilename, const char* szNewFileName) {}
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetStageProgress() { return m_iStageProgress; }
@@ -90,6 +109,8 @@ public:
EStatus GetStatus() { return m_eStatus; }
void Cancel();
bool GetCancelled() { return m_bCancelled; }
bool HasMissedFiles() { return m_bHasMissedFiles; }
void SetDetectMissing(bool bDetectMissing) { m_bDetectMissing = bDetectMissing; }
};
#endif
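ParRenamer is designed to be subclassed: the virtual methods declared above (UpdateProgress, Completed, PrintMessage, RegisterParredFile, RegisterRenamedFile) are the hooks ParCoordinator::PostParRenamer overrides to feed progress and results back into the post-processing queue. A minimal illustrative subclass that just logs to stdout (assumes this header is on the include path; not part of nzbget):

// Sketch: consume ParRenamer by overriding its notification hooks.
#include <cstdarg>
#include <cstdio>
#include "ParRenamer.h"

class LoggingParRenamer : public ParRenamer
{
protected:
    virtual void UpdateProgress()
    {
        printf("%s (%i/1000)\n", GetProgressLabel(), GetStageProgress());
    }
    virtual void Completed()
    {
        printf("rename finished, status=%i, missed files=%s\n",
            (int)GetStatus(), HasMissedFiles() ? "yes" : "no");
    }
    virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...)
    {
        (void)eKind;                       // message kind ignored in this sketch
        va_list args;
        va_start(args, szFormat);
        vprintf(szFormat, args);
        printf("\n");
        va_end(args);
    }
};

A caller would then configure the instance with SetDestDir and SetInfoName and run it with Start, the same way ParCoordinator drives its PostParRenamer.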

View File

@@ -0,0 +1,339 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#ifndef WIN32
#include <unistd.h>
#endif
#include <stdio.h>
#include "nzbget.h"
#include "PostScript.h"
#include "Log.h"
#include "Util.h"
#include "Options.h"
extern Options* g_pOptions;
static const int POSTPROCESS_PARCHECK = 92;
static const int POSTPROCESS_SUCCESS = 93;
static const int POSTPROCESS_ERROR = 94;
static const int POSTPROCESS_NONE = 95;
void PostScriptController::StartJob(PostInfo* pPostInfo)
{
PostScriptController* pScriptController = new PostScriptController();
pScriptController->m_pPostInfo = pPostInfo;
pScriptController->SetWorkingDir(g_pOptions->GetDestDir());
pScriptController->SetAutoDestroy(false);
pScriptController->m_iPrefixLen = 0;
pPostInfo->SetPostThread(pScriptController);
pScriptController->Start();
}
void PostScriptController::Run()
{
StringBuilder scriptCommaList;
// the locking is needed for accessing the members of NZBInfo
DownloadQueue::Lock();
for (NZBParameterList::iterator it = m_pPostInfo->GetNZBInfo()->GetParameters()->begin(); it != m_pPostInfo->GetNZBInfo()->GetParameters()->end(); it++)
{
NZBParameter* pParameter = *it;
const char* szVarname = pParameter->GetName();
if (strlen(szVarname) > 0 && szVarname[0] != '*' && szVarname[strlen(szVarname)-1] == ':' &&
(!strcasecmp(pParameter->GetValue(), "yes") || !strcasecmp(pParameter->GetValue(), "on") || !strcasecmp(pParameter->GetValue(), "1")))
{
char* szScriptName = strdup(szVarname);
szScriptName[strlen(szScriptName)-1] = '\0'; // remove trailing ':'
scriptCommaList.Append(szScriptName);
scriptCommaList.Append(",");
free(szScriptName);
}
}
m_pPostInfo->GetNZBInfo()->GetScriptStatuses()->Clear();
DownloadQueue::Unlock();
ExecuteScriptList(scriptCommaList.GetBuffer());
m_pPostInfo->SetStage(PostInfo::ptFinished);
m_pPostInfo->SetWorking(false);
}
void PostScriptController::ExecuteScript(Options::Script* pScript)
{
// if any script has requested par-check, do not execute other scripts
if (!pScript->GetPostScript() || m_pPostInfo->GetRequestParCheck())
{
return;
}
PrintMessage(Message::mkInfo, "Executing post-process-script %s for %s", pScript->GetName(), m_pPostInfo->GetNZBInfo()->GetName());
SetScript(pScript->GetLocation());
SetArgs(NULL, false);
char szInfoName[1024];
snprintf(szInfoName, 1024, "post-process-script %s for %s", pScript->GetName(), m_pPostInfo->GetNZBInfo()->GetName());
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
m_pScript = pScript;
SetLogPrefix(pScript->GetDisplayName());
m_iPrefixLen = strlen(pScript->GetDisplayName()) + 2; // 2 = strlen(": ");
PrepareParams(pScript->GetName());
int iExitCode = Execute();
szInfoName[0] = 'P'; // uppercase
SetLogPrefix(NULL);
ScriptStatus::EStatus eStatus = AnalyseExitCode(iExitCode);
// the locking is needed for accessing the members of NZBInfo
DownloadQueue::Lock();
m_pPostInfo->GetNZBInfo()->GetScriptStatuses()->Add(pScript->GetName(), eStatus);
DownloadQueue::Unlock();
}
void PostScriptController::PrepareParams(const char* szScriptName)
{
// the locking is needed for accessing the members of NZBInfo
DownloadQueue::Lock();
ResetEnv();
SetIntEnvVar("NZBPP_NZBID", m_pPostInfo->GetNZBInfo()->GetID());
SetEnvVar("NZBPP_NZBNAME", m_pPostInfo->GetNZBInfo()->GetName());
SetEnvVar("NZBPP_DIRECTORY", m_pPostInfo->GetNZBInfo()->GetDestDir());
SetEnvVar("NZBPP_NZBFILENAME", m_pPostInfo->GetNZBInfo()->GetFilename());
SetEnvVar("NZBPP_URL", m_pPostInfo->GetNZBInfo()->GetURL());
SetEnvVar("NZBPP_FINALDIR", m_pPostInfo->GetNZBInfo()->GetFinalDir());
SetEnvVar("NZBPP_CATEGORY", m_pPostInfo->GetNZBInfo()->GetCategory());
SetIntEnvVar("NZBPP_HEALTH", m_pPostInfo->GetNZBInfo()->CalcHealth());
SetIntEnvVar("NZBPP_CRITICALHEALTH", m_pPostInfo->GetNZBInfo()->CalcCriticalHealth(false));
char szStatus[256];
strncpy(szStatus, m_pPostInfo->GetNZBInfo()->MakeTextStatus(true), sizeof(szStatus));
szStatus[256-1] = '\0';
SetEnvVar("NZBPP_STATUS", szStatus);
char* szDetail = strchr(szStatus, '/');
if (szDetail) *szDetail = '\0';
SetEnvVar("NZBPP_TOTALSTATUS", szStatus);
const char* szScriptStatusName[] = { "NONE", "FAILURE", "SUCCESS" };
SetEnvVar("NZBPP_SCRIPTSTATUS", szScriptStatusName[m_pPostInfo->GetNZBInfo()->GetScriptStatuses()->CalcTotalStatus()]);
// deprecated
int iParStatus[] = { 0, 0, 1, 2, 3, 4 };
NZBInfo::EParStatus eParStatus = m_pPostInfo->GetNZBInfo()->GetParStatus();
// for downloads marked as bad and for deleted downloads pass par status "Failure"
// for compatibility with older scripts which don't check "NZBPP_TOTALSTATUS"
if (m_pPostInfo->GetNZBInfo()->GetDeleteStatus() != NZBInfo::dsNone ||
m_pPostInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksBad)
{
eParStatus = NZBInfo::psFailure;
}
SetIntEnvVar("NZBPP_PARSTATUS", iParStatus[eParStatus]);
// deprecated
int iUnpackStatus[] = { 0, 0, 1, 2, 3, 4 };
SetIntEnvVar("NZBPP_UNPACKSTATUS", iUnpackStatus[m_pPostInfo->GetNZBInfo()->GetUnpackStatus()]);
// deprecated
SetIntEnvVar("NZBPP_HEALTHDELETED", (int)m_pPostInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsHealth);
SetIntEnvVar("NZBPP_TOTALARTICLES", (int)m_pPostInfo->GetNZBInfo()->GetTotalArticles());
SetIntEnvVar("NZBPP_SUCCESSARTICLES", (int)m_pPostInfo->GetNZBInfo()->GetSuccessArticles());
SetIntEnvVar("NZBPP_FAILEDARTICLES", (int)m_pPostInfo->GetNZBInfo()->GetFailedArticles());
for (ServerStatList::iterator it = m_pPostInfo->GetNZBInfo()->GetServerStats()->begin(); it != m_pPostInfo->GetNZBInfo()->GetServerStats()->end(); it++)
{
ServerStat* pServerStat = *it;
char szName[50];
snprintf(szName, 50, "NZBPP_SERVER%i_SUCCESSARTICLES", pServerStat->GetServerID());
szName[50-1] = '\0';
SetIntEnvVar(szName, pServerStat->GetSuccessArticles());
snprintf(szName, 50, "NZBPP_SERVER%i_FAILEDARTICLES", pServerStat->GetServerID());
szName[50-1] = '\0';
SetIntEnvVar(szName, pServerStat->GetFailedArticles());
}
PrepareEnvScript(m_pPostInfo->GetNZBInfo()->GetParameters(), szScriptName);
DownloadQueue::Unlock();
}
ScriptStatus::EStatus PostScriptController::AnalyseExitCode(int iExitCode)
{
// The ScriptStatus is accumulated for all scripts:
// if any script has failed, the accumulated status is "failure", etc.
switch (iExitCode)
{
case POSTPROCESS_SUCCESS:
PrintMessage(Message::mkInfo, "%s successful", GetInfoName());
return ScriptStatus::srSuccess;
case POSTPROCESS_ERROR:
case -1: // Execute() returns -1 if the process could not be started (file not found or other problem)
PrintMessage(Message::mkError, "%s failed", GetInfoName());
return ScriptStatus::srFailure;
case POSTPROCESS_NONE:
PrintMessage(Message::mkInfo, "%s skipped", GetInfoName());
return ScriptStatus::srNone;
#ifndef DISABLE_PARCHECK
case POSTPROCESS_PARCHECK:
if (m_pPostInfo->GetNZBInfo()->GetParStatus() > NZBInfo::psSkipped)
{
PrintMessage(Message::mkError, "%s requested par-check/repair, but the collection was already checked", GetInfoName());
return ScriptStatus::srFailure;
}
else
{
PrintMessage(Message::mkInfo, "%s requested par-check/repair", GetInfoName());
m_pPostInfo->SetRequestParCheck(true);
m_pPostInfo->SetForceRepair(true);
return ScriptStatus::srSuccess;
}
break;
#endif
default:
PrintMessage(Message::mkError, "%s failed (terminated with unknown status)", GetInfoName());
return ScriptStatus::srFailure;
}
}
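AnalyseExitCode is the other half of a small contract: a post-process script reports its result through its exit code, using the POSTPROCESS_* values defined at the top of this file (92 requests par-check, 93 success, 94 error, 95 none/skipped). Real scripts are normally shell or Python; as a purely illustrative C++ stand-in, a script honouring that contract could look like this (the environment variable is one of those set by PrepareParams above):

// Sketch of the exit-code contract between a post-process script and nzbget.
#include <cstdlib>
#include <cstring>

static const int POSTPROCESS_SUCCESS = 93;      // values mirror PostScript.cpp
static const int POSTPROCESS_ERROR   = 94;
static const int POSTPROCESS_NONE    = 95;

int main()
{
    const char* totalStatus = getenv("NZBPP_TOTALSTATUS");   // e.g. "SUCCESS" or "FAILURE"
    if (!totalStatus)
    {
        return POSTPROCESS_NONE;                // not launched by nzbget, nothing to do
    }
    return strcmp(totalStatus, "SUCCESS") == 0 ? POSTPROCESS_SUCCESS : POSTPROCESS_ERROR;
}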
void PostScriptController::AddMessage(Message::EKind eKind, const char* szText)
{
const char* szMsgText = szText + m_iPrefixLen;
if (!strncmp(szMsgText, "[NZB] ", 6))
{
debug("Command %s detected", szMsgText + 6);
if (!strncmp(szMsgText + 6, "FINALDIR=", 9))
{
DownloadQueue::Lock();
m_pPostInfo->GetNZBInfo()->SetFinalDir(szMsgText + 6 + 9);
DownloadQueue::Unlock();
}
else if (!strncmp(szMsgText + 6, "DIRECTORY=", 10))
{
DownloadQueue::Lock();
m_pPostInfo->GetNZBInfo()->SetDestDir(szMsgText + 6 + 10);
DownloadQueue::Unlock();
}
else if (!strncmp(szMsgText + 6, "NZBPR_", 6))
{
char* szParam = strdup(szMsgText + 6 + 6);
char* szValue = strchr(szParam, '=');
if (szValue)
{
*szValue = '\0';
DownloadQueue::Lock();
m_pPostInfo->GetNZBInfo()->GetParameters()->SetParameter(szParam, szValue + 1);
DownloadQueue::Unlock();
}
else
{
error("Invalid command \"%s\" received from %s", szMsgText, GetInfoName());
}
free(szParam);
}
else if (!strncmp(szMsgText + 6, "MARK=BAD", 8))
{
SetLogPrefix(NULL);
PrintMessage(Message::mkWarning, "Marking %s as bad", m_pPostInfo->GetNZBInfo()->GetName());
SetLogPrefix(m_pScript->GetDisplayName());
m_pPostInfo->GetNZBInfo()->SetMarkStatus(NZBInfo::ksBad);
}
else
{
error("Invalid command \"%s\" received from %s", szMsgText, GetInfoName());
}
}
else if (!strncmp(szMsgText, "[HISTORY] ", 10))
{
m_pPostInfo->GetNZBInfo()->AppendMessage(eKind, 0, szMsgText);
}
else
{
ScriptController::AddMessage(eKind, szText);
m_pPostInfo->AppendMessage(eKind, szText);
}
if (g_pOptions->GetPausePostProcess() && !m_pPostInfo->GetNZBInfo()->GetForcePriority())
{
time_t tStageTime = m_pPostInfo->GetStageTime();
time_t tStartTime = m_pPostInfo->GetStartTime();
time_t tWaitTime = time(NULL);
// wait until Post-processor is unpaused
while (g_pOptions->GetPausePostProcess() && !m_pPostInfo->GetNZBInfo()->GetForcePriority() && !IsStopped())
{
usleep(100 * 1000);
// update time stamps
time_t tDelta = time(NULL) - tWaitTime;
if (tStageTime > 0)
{
m_pPostInfo->SetStageTime(tStageTime + tDelta);
}
if (tStartTime > 0)
{
m_pPostInfo->SetStartTime(tStartTime + tDelta);
}
}
}
}
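Besides the exit code, AddMessage gives scripts a second channel: any stdout line starting with "[NZB] " is parsed as a command (FINALDIR=, DIRECTORY=, NZBPR_<name>=<value>, MARK=BAD), and lines starting with "[HISTORY] " are appended to the NZB's own message log. An illustration of a script using that channel (the directory and parameter names are invented for the example; C++ again stands in for a real script):

// Sketch: commands a post-process script can print for AddMessage() to pick up.
#include <cstdio>

int main()
{
    printf("[NZB] FINALDIR=/data/movies/Some.Release\n");    // override the final directory
    printf("[NZB] NZBPR_myscript_state=done\n");             // set a post-processing parameter
    printf("[HISTORY] archived by myscript\n");              // stored with the NZB's history messages
    // printf("[NZB] MARK=BAD\n");                           // would mark the whole download as bad
    return 93;                                               // POSTPROCESS_SUCCESS
}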
void PostScriptController::Stop()
{
debug("Stopping post-process-script");
Thread::Stop();
Terminate();
}

View File

@@ -0,0 +1,55 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef POSTSCRIPT_H
#define POSTSCRIPT_H
#include "Thread.h"
#include "Log.h"
#include "QueueScript.h"
#include "DownloadInfo.h"
#include "Options.h"
class PostScriptController : public Thread, public NZBScriptController
{
private:
PostInfo* m_pPostInfo;
int m_iPrefixLen;
Options::Script* m_pScript;
void PrepareParams(const char* szScriptName);
ScriptStatus::EStatus AnalyseExitCode(int iExitCode);
protected:
virtual void ExecuteScript(Options::Script* pScript);
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
virtual void Run();
virtual void Stop();
static void StartJob(PostInfo* pPostInfo);
};
#endif

View File

@@ -0,0 +1,847 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <set>
#include <algorithm>
#include "nzbget.h"
#include "PrePostProcessor.h"
#include "Options.h"
#include "Log.h"
#include "HistoryCoordinator.h"
#include "DupeCoordinator.h"
#include "PostScript.h"
#include "Util.h"
#include "Scheduler.h"
#include "Scanner.h"
#include "Unpack.h"
#include "NZBFile.h"
#include "StatMeter.h"
#include "QueueScript.h"
extern HistoryCoordinator* g_pHistoryCoordinator;
extern DupeCoordinator* g_pDupeCoordinator;
extern Options* g_pOptions;
extern Scheduler* g_pScheduler;
extern Scanner* g_pScanner;
extern StatMeter* g_pStatMeter;
extern QueueScriptCoordinator* g_pQueueScriptCoordinator;
PrePostProcessor::PrePostProcessor()
{
debug("Creating PrePostProcessor");
m_iJobCount = 0;
m_pCurJob = NULL;
m_szPauseReason = NULL;
m_DownloadQueueObserver.m_pOwner = this;
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
pDownloadQueue->Attach(&m_DownloadQueueObserver);
DownloadQueue::Unlock();
}
PrePostProcessor::~PrePostProcessor()
{
debug("Destroying PrePostProcessor");
}
void PrePostProcessor::Run()
{
debug("Entering PrePostProcessor-loop");
while (!DownloadQueue::IsLoaded())
{
usleep(20 * 1000);
}
if (g_pOptions->GetServerMode() && g_pOptions->GetSaveQueue() && g_pOptions->GetReloadQueue())
{
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
SanitisePostQueue(pDownloadQueue);
DownloadQueue::Unlock();
}
g_pScheduler->FirstCheck();
int iDiskSpaceInterval = 1000;
int iSchedulerInterval = 1000;
int iHistoryInterval = 600000;
const int iStepMSec = 200;
while (!IsStopped())
{
// check incoming nzb directory
g_pScanner->Check();
if (!g_pOptions->GetPauseDownload() &&
g_pOptions->GetDiskSpace() > 0 && !g_pStatMeter->GetStandBy() &&
iDiskSpaceInterval >= 1000)
{
// check free disk space every 1 second
CheckDiskSpace();
iDiskSpaceInterval = 0;
}
iDiskSpaceInterval += iStepMSec;
// check post-queue every 200 msec
CheckPostQueue();
if (iSchedulerInterval >= 1000)
{
// check scheduler tasks every 1 second
g_pScheduler->IntervalCheck();
iSchedulerInterval = 0;
}
iSchedulerInterval += iStepMSec;
if (iHistoryInterval >= 600000)
{
// check history (remove old entries) every 10 minutes
g_pHistoryCoordinator->IntervalCheck();
iHistoryInterval = 0;
}
iHistoryInterval += iStepMSec;
Util::SetStandByMode(!m_pCurJob);
usleep(iStepMSec * 1000);
}
g_pHistoryCoordinator->Cleanup();
debug("Exiting PrePostProcessor-loop");
}
void PrePostProcessor::Stop()
{
Thread::Stop();
DownloadQueue::Lock();
#ifndef DISABLE_PARCHECK
m_ParCoordinator.Stop();
#endif
if (m_pCurJob && m_pCurJob->GetPostInfo() &&
(m_pCurJob->GetPostInfo()->GetStage() == PostInfo::ptUnpacking ||
m_pCurJob->GetPostInfo()->GetStage() == PostInfo::ptExecutingScript) &&
m_pCurJob->GetPostInfo()->GetPostThread())
{
Thread* pPostThread = m_pCurJob->GetPostInfo()->GetPostThread();
m_pCurJob->GetPostInfo()->SetPostThread(NULL);
pPostThread->SetAutoDestroy(true);
pPostThread->Stop();
}
DownloadQueue::Unlock();
}
void PrePostProcessor::DownloadQueueUpdate(Subject* Caller, void* Aspect)
{
if (IsStopped())
{
return;
}
DownloadQueue::Aspect* pQueueAspect = (DownloadQueue::Aspect*)Aspect;
if (pQueueAspect->eAction == DownloadQueue::eaNzbFound)
{
NZBFound(pQueueAspect->pDownloadQueue, pQueueAspect->pNZBInfo);
}
else if (pQueueAspect->eAction == DownloadQueue::eaNzbAdded)
{
NZBAdded(pQueueAspect->pDownloadQueue, pQueueAspect->pNZBInfo);
}
else if (pQueueAspect->eAction == DownloadQueue::eaNzbDeleted &&
pQueueAspect->pNZBInfo->GetDeleting() &&
!pQueueAspect->pNZBInfo->GetPostInfo() &&
!pQueueAspect->pNZBInfo->GetParCleanup() &&
pQueueAspect->pNZBInfo->GetFileList()->empty())
{
// the deletion of nzbs is usually handled via the eaFileDeleted-event, but when deleting an nzb with
// no files left the eaFileDeleted-event is not fired and we need to process the eaNzbDeleted-event instead
info("Collection %s deleted from queue", pQueueAspect->pNZBInfo->GetName());
NZBDeleted(pQueueAspect->pDownloadQueue, pQueueAspect->pNZBInfo);
}
else if ((pQueueAspect->eAction == DownloadQueue::eaFileCompleted ||
pQueueAspect->eAction == DownloadQueue::eaFileDeleted))
{
if (pQueueAspect->eAction == DownloadQueue::eaFileCompleted && !pQueueAspect->pNZBInfo->GetPostInfo())
{
g_pQueueScriptCoordinator->EnqueueScript(pQueueAspect->pNZBInfo, QueueScriptCoordinator::qeFileDownloaded);
}
if (
#ifndef DISABLE_PARCHECK
!m_ParCoordinator.AddPar(pQueueAspect->pFileInfo, pQueueAspect->eAction == DownloadQueue::eaFileDeleted) &&
#endif
IsNZBFileCompleted(pQueueAspect->pNZBInfo, true, false) &&
!pQueueAspect->pNZBInfo->GetPostInfo() &&
(!pQueueAspect->pFileInfo->GetPaused() || IsNZBFileCompleted(pQueueAspect->pNZBInfo, false, false)))
{
if ((pQueueAspect->eAction == DownloadQueue::eaFileCompleted ||
(pQueueAspect->pFileInfo->GetAutoDeleted() &&
IsNZBFileCompleted(pQueueAspect->pNZBInfo, false, true))) &&
pQueueAspect->pFileInfo->GetNZBInfo()->GetDeleteStatus() != NZBInfo::dsHealth)
{
info("Collection %s completely downloaded", pQueueAspect->pNZBInfo->GetName());
g_pQueueScriptCoordinator->EnqueueScript(pQueueAspect->pNZBInfo, QueueScriptCoordinator::qeNzbDownloaded);
NZBDownloaded(pQueueAspect->pDownloadQueue, pQueueAspect->pNZBInfo);
}
else if ((pQueueAspect->eAction == DownloadQueue::eaFileDeleted ||
(pQueueAspect->eAction == DownloadQueue::eaFileCompleted &&
pQueueAspect->pFileInfo->GetNZBInfo()->GetDeleteStatus() > NZBInfo::dsNone)) &&
!pQueueAspect->pNZBInfo->GetParCleanup() &&
IsNZBFileCompleted(pQueueAspect->pNZBInfo, false, true))
{
info("Collection %s deleted from queue", pQueueAspect->pNZBInfo->GetName());
NZBDeleted(pQueueAspect->pDownloadQueue, pQueueAspect->pNZBInfo);
}
}
}
}
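// Handles the eaNzbFound aspect: runs duplicate detection for the new NZB unless duplicate
// checking is disabled or the item uses dupe-mode "force".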
void PrePostProcessor::NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
if (g_pOptions->GetDupeCheck() && pNZBInfo->GetDupeMode() != dmForce)
{
g_pDupeCoordinator->NZBFound(pDownloadQueue, pNZBInfo);
}
}
void PrePostProcessor::NZBAdded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
if (g_pOptions->GetParCheck() != Options::pcForce)
{
m_ParCoordinator.PausePars(pDownloadQueue, pNZBInfo);
}
if (g_pOptions->GetDupeCheck() && pNZBInfo->GetDupeMode() != dmForce &&
pNZBInfo->GetDeleteStatus() == NZBInfo::dsDupe)
{
NZBCompleted(pDownloadQueue, pNZBInfo, false);
}
else
{
g_pQueueScriptCoordinator->EnqueueScript(pNZBInfo, QueueScriptCoordinator::qeNzbAdded);
}
}
void PrePostProcessor::NZBDownloaded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
if (!pNZBInfo->GetPostInfo() && g_pOptions->GetDecode())
{
info("Queueing %s for post-processing", pNZBInfo->GetName());
pNZBInfo->EnterPostProcess();
m_iJobCount++;
if (pNZBInfo->GetParStatus() == NZBInfo::psNone &&
g_pOptions->GetParCheck() != Options::pcAlways &&
g_pOptions->GetParCheck() != Options::pcForce)
{
pNZBInfo->SetParStatus(NZBInfo::psSkipped);
}
if (pNZBInfo->GetRenameStatus() == NZBInfo::rsNone && !g_pOptions->GetParRename())
{
pNZBInfo->SetRenameStatus(NZBInfo::rsSkipped);
}
pDownloadQueue->Save();
}
else
{
NZBCompleted(pDownloadQueue, pNZBInfo, true);
}
}
void PrePostProcessor::NZBDeleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
if (pNZBInfo->GetDeleteStatus() == NZBInfo::dsNone)
{
pNZBInfo->SetDeleteStatus(NZBInfo::dsManual);
}
pNZBInfo->SetDeleting(false);
DeleteCleanup(pNZBInfo);
if (pNZBInfo->GetDeleteStatus() == NZBInfo::dsHealth ||
pNZBInfo->GetDeleteStatus() == NZBInfo::dsBad)
{
NZBDownloaded(pDownloadQueue, pNZBInfo);
}
else
{
NZBCompleted(pDownloadQueue, pNZBInfo, true);
}
}
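// Finalises an NZB: moves it into history when KeepHistory is active and the item is not flagged
// to avoid history, notifies the duplicate coordinator where applicable, and otherwise removes
// the item (together with its disk state files) from the queue.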
void PrePostProcessor::NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, bool bSaveQueue)
{
bool bAddToHistory = g_pOptions->GetKeepHistory() > 0 && !pNZBInfo->GetAvoidHistory();
if (bAddToHistory)
{
g_pHistoryCoordinator->AddToHistory(pDownloadQueue, pNZBInfo);
}
pNZBInfo->SetAvoidHistory(false);
bool bNeedSave = bAddToHistory;
if (g_pOptions->GetDupeCheck() && pNZBInfo->GetDupeMode() != dmForce &&
(pNZBInfo->GetDeleteStatus() == NZBInfo::dsNone ||
pNZBInfo->GetDeleteStatus() == NZBInfo::dsHealth ||
pNZBInfo->GetDeleteStatus() == NZBInfo::dsBad))
{
g_pDupeCoordinator->NZBCompleted(pDownloadQueue, pNZBInfo);
bNeedSave = true;
}
if (!bAddToHistory)
{
g_pHistoryCoordinator->DeleteDiskFiles(pNZBInfo);
pDownloadQueue->GetQueue()->Remove(pNZBInfo);
delete pNZBInfo;
}
if (bSaveQueue && bNeedSave)
{
pDownloadQueue->Save();
}
}
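// After a cancelled download: removes already downloaded files from disk (when cleanup on delete
// is enabled or the item was deleted as duplicate), deletes leftover ".out.tmp" files and
// "_brokenlog.txt", and removes the destination directory if it is empty afterwards.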
void PrePostProcessor::DeleteCleanup(NZBInfo* pNZBInfo)
{
if ((g_pOptions->GetDeleteCleanupDisk() && pNZBInfo->GetCleanupDisk()) ||
pNZBInfo->GetDeleteStatus() == NZBInfo::dsDupe)
{
// the download was cancelled; delete the already downloaded files from disk
for (CompletedFiles::reverse_iterator it = pNZBInfo->GetCompletedFiles()->rbegin(); it != pNZBInfo->GetCompletedFiles()->rend(); it++)
{
CompletedFile* pCompletedFile = *it;
char szFullFileName[1024];
snprintf(szFullFileName, 1024, "%s%c%s", pNZBInfo->GetDestDir(), (int)PATH_SEPARATOR, pCompletedFile->GetFileName());
szFullFileName[1024-1] = '\0';
if (Util::FileExists(szFullFileName))
{
detail("Deleting file %s", pCompletedFile->GetFileName());
remove(szFullFileName);
}
}
// delete .out.tmp-files and _brokenlog.txt
DirBrowser dir(pNZBInfo->GetDestDir());
while (const char* szFilename = dir.Next())
{
int iLen = strlen(szFilename);
if ((iLen > 8 && !strcmp(szFilename + iLen - 8, ".out.tmp")) || !strcmp(szFilename, "_brokenlog.txt"))
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", pNZBInfo->GetDestDir(), PATH_SEPARATOR, szFilename);
szFullFilename[1024-1] = '\0';
detail("Deleting file %s", szFilename);
remove(szFullFilename);
}
}
// delete old directory (if empty)
if (Util::DirEmpty(pNZBInfo->GetDestDir()))
{
rmdir(pNZBInfo->GetDestDir());
}
}
}
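// Pauses the download when the free space on DestDir (and, if configured, on InterDir) drops
// below the configured DiskSpace limit (in megabytes).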
void PrePostProcessor::CheckDiskSpace()
{
long long lFreeSpace = Util::FreeDiskSize(g_pOptions->GetDestDir());
if (lFreeSpace > -1 && lFreeSpace / 1024 / 1024 < g_pOptions->GetDiskSpace())
{
warn("Low disk space on %s. Pausing download", g_pOptions->GetDestDir());
g_pOptions->SetPauseDownload(true);
}
if (!Util::EmptyStr(g_pOptions->GetInterDir()))
{
lFreeSpace = Util::FreeDiskSize(g_pOptions->GetInterDir());
if (lFreeSpace > -1 && lFreeSpace / 1024 / 1024 < g_pOptions->GetDiskSpace())
{
warn("Low disk space on %s. Pausing download", g_pOptions->GetInterDir());
g_pOptions->SetPauseDownload(true);
}
}
}
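// Drives the post-processing state machine for the current job: handles automatic and manual
// par-check requests, starts the next stage when the job is queued and post-processing is not
// paused (or the item has force priority), and finalises the job once it reaches ptFinished.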
void PrePostProcessor::CheckPostQueue()
{
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
if (!m_pCurJob && m_iJobCount > 0)
{
m_pCurJob = GetNextJob(pDownloadQueue);
}
if (m_pCurJob)
{
PostInfo* pPostInfo = m_pCurJob->GetPostInfo();
if (!pPostInfo->GetWorking() && !IsNZBFileDownloading(m_pCurJob))
{
#ifndef DISABLE_PARCHECK
if (pPostInfo->GetRequestParCheck() &&
(pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped ||
(pPostInfo->GetForceRepair() && !pPostInfo->GetNZBInfo()->GetParFull())) &&
g_pOptions->GetParCheck() != Options::pcManual)
{
pPostInfo->SetForceParFull(pPostInfo->GetNZBInfo()->GetParStatus() > NZBInfo::psSkipped);
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psNone);
pPostInfo->SetRequestParCheck(false);
pPostInfo->SetStage(PostInfo::ptQueued);
pPostInfo->GetNZBInfo()->GetScriptStatuses()->Clear();
DeletePostThread(pPostInfo);
}
else if (pPostInfo->GetRequestParCheck() && pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped &&
g_pOptions->GetParCheck() == Options::pcManual)
{
pPostInfo->SetRequestParCheck(false);
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psManual);
DeletePostThread(pPostInfo);
if (!pPostInfo->GetNZBInfo()->GetFileList()->empty())
{
info("Downloading all remaining files for manual par-check for %s", pPostInfo->GetNZBInfo()->GetName());
pDownloadQueue->EditEntry(pPostInfo->GetNZBInfo()->GetID(), DownloadQueue::eaGroupResume, 0, NULL);
pPostInfo->SetStage(PostInfo::ptFinished);
}
else
{
info("There are no par-files remain for download for %s", pPostInfo->GetNZBInfo()->GetName());
pPostInfo->SetStage(PostInfo::ptQueued);
}
}
#endif
if (pPostInfo->GetDeleted())
{
pPostInfo->SetStage(PostInfo::ptFinished);
}
if (pPostInfo->GetStage() == PostInfo::ptQueued &&
(!g_pOptions->GetPausePostProcess() || pPostInfo->GetNZBInfo()->GetForcePriority()))
{
DeletePostThread(pPostInfo);
StartJob(pDownloadQueue, pPostInfo);
}
else if (pPostInfo->GetStage() == PostInfo::ptFinished)
{
UpdatePauseState(false, NULL);
JobCompleted(pDownloadQueue, pPostInfo);
}
else if (!g_pOptions->GetPausePostProcess())
{
error("Internal error: invalid state in post-processor");
// TODO: cancel (delete) current job
}
}
}
DownloadQueue::Unlock();
}
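// Picks the next post-job: the NZB with a PostInfo and the highest priority which has no pending
// queue-script and, while post-processing is paused, has force priority.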
NZBInfo* PrePostProcessor::GetNextJob(DownloadQueue* pDownloadQueue)
{
NZBInfo* pNZBInfo = NULL;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo1 = *it;
if (pNZBInfo1->GetPostInfo() && !g_pQueueScriptCoordinator->HasJob(pNZBInfo1->GetID()) &&
(!pNZBInfo || pNZBInfo1->GetPriority() > pNZBInfo->GetPriority()) &&
(!g_pOptions->GetPausePostProcess() || pNZBInfo1->GetForcePriority()))
{
pNZBInfo = pNZBInfo1;
}
}
return pNZBInfo;
}
/**
* Reset the state of items after reloading from disk and
* delete items which could not be resumed.
* Also count the number of post-jobs.
*/
void PrePostProcessor::SanitisePostQueue(DownloadQueue* pDownloadQueue)
{
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
PostInfo* pPostInfo = pNZBInfo->GetPostInfo();
if (pPostInfo)
{
m_iJobCount++;
if (pPostInfo->GetStage() == PostInfo::ptExecutingScript ||
!Util::DirectoryExists(pNZBInfo->GetDestDir()))
{
pPostInfo->SetStage(PostInfo::ptFinished);
}
else
{
pPostInfo->SetStage(PostInfo::ptQueued);
}
}
}
}
void PrePostProcessor::DeletePostThread(PostInfo* pPostInfo)
{
delete pPostInfo->GetPostThread();
pPostInfo->SetPostThread(NULL);
}
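// Decides which post-processing step to run next, in this order: par-rename, par-check (or a
// health-based decision to skip or request it), then unpack, cleanup, move out of the
// intermediate directory, or the post-process script.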
void PrePostProcessor::StartJob(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo)
{
if (!pPostInfo->GetStartTime())
{
pPostInfo->SetStartTime(time(NULL));
}
#ifndef DISABLE_PARCHECK
if (pPostInfo->GetNZBInfo()->GetRenameStatus() == NZBInfo::rsNone &&
pPostInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsNone)
{
UpdatePauseState(g_pOptions->GetParPauseQueue(), "par-rename");
m_ParCoordinator.StartParRenameJob(pPostInfo);
return;
}
else if (pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psNone &&
pPostInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsNone)
{
if (m_ParCoordinator.FindMainPars(pPostInfo->GetNZBInfo()->GetDestDir(), NULL))
{
UpdatePauseState(g_pOptions->GetParPauseQueue(), "par-check");
m_ParCoordinator.StartParCheckJob(pPostInfo);
}
else
{
info("Nothing to par-check for %s", pPostInfo->GetNZBInfo()->GetName());
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psSkipped);
pPostInfo->SetWorking(false);
pPostInfo->SetStage(PostInfo::ptQueued);
}
return;
}
else if (pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psSkipped &&
pPostInfo->GetNZBInfo()->CalcHealth() < pPostInfo->GetNZBInfo()->CalcCriticalHealth(false) &&
pPostInfo->GetNZBInfo()->CalcCriticalHealth(false) < 1000 &&
m_ParCoordinator.FindMainPars(pPostInfo->GetNZBInfo()->GetDestDir(), NULL))
{
warn("Skipping par-check for %s due to health %.1f%% below critical %.1f%%", pPostInfo->GetNZBInfo()->GetName(),
pPostInfo->GetNZBInfo()->CalcHealth() / 10.0, pPostInfo->GetNZBInfo()->CalcCriticalHealth(false) / 10.0);
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psFailure);
return;
}
else if (pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psSkipped &&
pPostInfo->GetNZBInfo()->GetFailedSize() - pPostInfo->GetNZBInfo()->GetParFailedSize() > 0 &&
m_ParCoordinator.FindMainPars(pPostInfo->GetNZBInfo()->GetDestDir(), NULL))
{
info("Collection %s with health %.1f%% needs par-check",
pPostInfo->GetNZBInfo()->GetName(), pPostInfo->GetNZBInfo()->CalcHealth() / 10.0);
pPostInfo->SetRequestParCheck(true);
return;
}
#endif
NZBParameter* pUnpackParameter = pPostInfo->GetNZBInfo()->GetParameters()->Find("*Unpack:", false);
bool bUnpackParam = !(pUnpackParameter && !strcasecmp(pUnpackParameter->GetValue(), "no"));
bool bUnpack = bUnpackParam && pPostInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usNone &&
pPostInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsNone;
bool bParFailed = pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psFailure ||
pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psRepairPossible ||
pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psManual;
bool bCleanup = !bUnpack &&
pPostInfo->GetNZBInfo()->GetCleanupStatus() == NZBInfo::csNone &&
((pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psSuccess &&
pPostInfo->GetNZBInfo()->GetUnpackStatus() != NZBInfo::usFailure &&
pPostInfo->GetNZBInfo()->GetUnpackStatus() != NZBInfo::usSpace &&
pPostInfo->GetNZBInfo()->GetUnpackStatus() != NZBInfo::usPassword) ||
(pPostInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usSuccess &&
pPostInfo->GetNZBInfo()->GetParStatus() != NZBInfo::psFailure)) &&
!Util::EmptyStr(g_pOptions->GetExtCleanupDisk());
bool bMoveInter = !bUnpack &&
pPostInfo->GetNZBInfo()->GetMoveStatus() == NZBInfo::msNone &&
pPostInfo->GetNZBInfo()->GetUnpackStatus() != NZBInfo::usFailure &&
pPostInfo->GetNZBInfo()->GetUnpackStatus() != NZBInfo::usSpace &&
pPostInfo->GetNZBInfo()->GetUnpackStatus() != NZBInfo::usPassword &&
pPostInfo->GetNZBInfo()->GetParStatus() != NZBInfo::psFailure &&
pPostInfo->GetNZBInfo()->GetParStatus() != NZBInfo::psManual &&
pPostInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsNone &&
!Util::EmptyStr(g_pOptions->GetInterDir()) &&
!strncmp(pPostInfo->GetNZBInfo()->GetDestDir(), g_pOptions->GetInterDir(), strlen(g_pOptions->GetInterDir()));
bool bPostScript = true;
if (bUnpack && bParFailed)
{
warn("Skipping unpack for %s due to %s", pPostInfo->GetNZBInfo()->GetName(),
pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psManual ? "required par-repair" : "par-failure");
pPostInfo->GetNZBInfo()->SetUnpackStatus(NZBInfo::usSkipped);
bUnpack = false;
}
if (!bUnpack && !bMoveInter && !bPostScript)
{
pPostInfo->SetStage(PostInfo::ptFinished);
return;
}
pPostInfo->SetProgressLabel(bUnpack ? "Unpacking" : bMoveInter ? "Moving" : "Executing post-process-script");
pPostInfo->SetWorking(true);
pPostInfo->SetStage(bUnpack ? PostInfo::ptUnpacking : bMoveInter ? PostInfo::ptMoving : PostInfo::ptExecutingScript);
pPostInfo->SetFileProgress(0);
pPostInfo->SetStageProgress(0);
pDownloadQueue->Save();
pPostInfo->SetStageTime(time(NULL));
if (bUnpack)
{
UpdatePauseState(g_pOptions->GetUnpackPauseQueue(), "unpack");
UnpackController::StartJob(pPostInfo);
}
else if (bCleanup)
{
UpdatePauseState(g_pOptions->GetUnpackPauseQueue() || g_pOptions->GetScriptPauseQueue(), "cleanup");
CleanupController::StartJob(pPostInfo);
}
else if (bMoveInter)
{
UpdatePauseState(g_pOptions->GetUnpackPauseQueue() || g_pOptions->GetScriptPauseQueue(), "move");
MoveController::StartJob(pPostInfo);
}
else
{
UpdatePauseState(g_pOptions->GetScriptPauseQueue(), "post-process-script");
PostScriptController::StartJob(pPostInfo);
}
}
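// Called when the current post-job finishes: records the total post-processing time, releases the
// worker thread and, once all files of the NZB are done, optionally cleans up the download queue
// (ParCleanupQueue) and completed-files list before handing the item over to NZBCompleted().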
void PrePostProcessor::JobCompleted(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo)
{
NZBInfo* pNZBInfo = pPostInfo->GetNZBInfo();
if (pPostInfo->GetStartTime() > 0)
{
pNZBInfo->SetPostTotalSec((int)(time(NULL) - pPostInfo->GetStartTime()));
pPostInfo->SetStartTime(0);
}
DeletePostThread(pPostInfo);
pNZBInfo->LeavePostProcess();
if (IsNZBFileCompleted(pNZBInfo, true, false))
{
// Clean up the download queue if the par-check or the unpack was successful, or if health is 100%
// (when neither unpack nor par-check was performed), or if health is below the critical health
bool bCanCleanupQueue =
((pNZBInfo->GetParStatus() == NZBInfo::psSuccess ||
pNZBInfo->GetParStatus() == NZBInfo::psRepairPossible) &&
pNZBInfo->GetUnpackStatus() != NZBInfo::usFailure &&
pNZBInfo->GetUnpackStatus() != NZBInfo::usSpace &&
pNZBInfo->GetUnpackStatus() != NZBInfo::usPassword) ||
(pNZBInfo->GetUnpackStatus() == NZBInfo::usSuccess &&
pNZBInfo->GetParStatus() != NZBInfo::psFailure) ||
(pNZBInfo->GetUnpackStatus() <= NZBInfo::usSkipped &&
pNZBInfo->GetParStatus() != NZBInfo::psFailure &&
pNZBInfo->GetFailedSize() - pNZBInfo->GetParFailedSize() == 0) ||
(pNZBInfo->CalcHealth() < pNZBInfo->CalcCriticalHealth(false) &&
pNZBInfo->CalcCriticalHealth(false) < 1000);
if (g_pOptions->GetParCleanupQueue() && bCanCleanupQueue && !pNZBInfo->GetFileList()->empty())
{
info("Cleaning up download queue for %s", pNZBInfo->GetName());
pNZBInfo->SetParCleanup(true);
pDownloadQueue->EditEntry(pNZBInfo->GetID(), DownloadQueue::eaGroupDelete, 0, NULL);
}
if (pNZBInfo->GetUnpackCleanedUpDisk())
{
pNZBInfo->ClearCompletedFiles();
}
NZBCompleted(pDownloadQueue, pNZBInfo, false);
}
if (pNZBInfo == m_pCurJob)
{
m_pCurJob = NULL;
}
m_iJobCount--;
pDownloadQueue->Save();
}
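// Returns true when no unfinished (not yet deleted) files remain in the NZB. With
// bIgnorePausedPars paused par-files do not count as unfinished; with bAllowOnlyOneDeleted the
// check also fails if more than one file has been deleted.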
bool PrePostProcessor::IsNZBFileCompleted(NZBInfo* pNZBInfo, bool bIgnorePausedPars, bool bAllowOnlyOneDeleted)
{
int iDeleted = 0;
for (FileList::iterator it = pNZBInfo->GetFileList()->begin(); it != pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetDeleted())
{
iDeleted++;
}
if (((!pFileInfo->GetPaused() || !bIgnorePausedPars || !pFileInfo->GetParFile()) &&
!pFileInfo->GetDeleted()) ||
(bAllowOnlyOneDeleted && iDeleted > 1))
{
return false;
}
}
return true;
}
bool PrePostProcessor::IsNZBFileDownloading(NZBInfo* pNZBInfo)
{
if (pNZBInfo->GetActiveDownloads())
{
return true;
}
for (FileList::iterator it = pNZBInfo->GetFileList()->begin(); it != pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (!pFileInfo->GetPaused())
{
return true;
}
}
return false;
}
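// Temporarily pauses or unpauses the download around post-processing steps, as requested by the
// ParPauseQueue/UnpackPauseQueue/ScriptPauseQueue options; szReason is remembered so the matching
// "Unpausing" message can name the step that caused the pause.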
void PrePostProcessor::UpdatePauseState(bool bNeedPause, const char* szReason)
{
if (bNeedPause && !g_pOptions->GetTempPauseDownload())
{
info("Pausing download before %s", szReason);
}
else if (!bNeedPause && g_pOptions->GetTempPauseDownload())
{
info("Unpausing download after %s", m_szPauseReason);
}
g_pOptions->SetTempPauseDownload(bNeedPause);
m_szPauseReason = szReason;
}
bool PrePostProcessor::EditList(DownloadQueue* pDownloadQueue, IDList* pIDList, DownloadQueue::EEditAction eAction, int iOffset, const char* szText)
{
debug("Edit-command for post-processor received");
switch (eAction)
{
case DownloadQueue::eaPostDelete:
return PostQueueDelete(pDownloadQueue, pIDList);
default:
return false;
}
}
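// Cancels post-processing for the given NZB IDs: an active par-job is cancelled via the
// ParCoordinator, a running unpack or script thread is stopped, and jobs that are merely queued
// are completed immediately.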
bool PrePostProcessor::PostQueueDelete(DownloadQueue* pDownloadQueue, IDList* pIDList)
{
bool bOK = false;
for (IDList::iterator itID = pIDList->begin(); itID != pIDList->end(); itID++)
{
int iID = *itID;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
PostInfo* pPostInfo = pNZBInfo->GetPostInfo();
if (pPostInfo && pNZBInfo->GetID() == iID)
{
if (pPostInfo->GetWorking())
{
info("Deleting active post-job %s", pPostInfo->GetNZBInfo()->GetName());
pPostInfo->SetDeleted(true);
#ifndef DISABLE_PARCHECK
if (PostInfo::ptLoadingPars <= pPostInfo->GetStage() && pPostInfo->GetStage() <= PostInfo::ptRenaming)
{
if (m_ParCoordinator.Cancel())
{
bOK = true;
}
}
else
#endif
if (pPostInfo->GetPostThread())
{
debug("Terminating %s for %s", (pPostInfo->GetStage() == PostInfo::ptUnpacking ? "unpack" : "post-process-script"), pPostInfo->GetNZBInfo()->GetName());
pPostInfo->GetPostThread()->Stop();
bOK = true;
}
else
{
error("Internal error in PrePostProcessor::QueueDelete");
}
}
else
{
info("Deleting queued post-job %s", pPostInfo->GetNZBInfo()->GetName());
JobCompleted(pDownloadQueue, pPostInfo);
bOK = true;
}
break;
}
}
}
return bOK;
}


@@ -0,0 +1,83 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef PREPOSTPROCESSOR_H
#define PREPOSTPROCESSOR_H
#include <deque>
#include "Thread.h"
#include "Observer.h"
#include "DownloadInfo.h"
#include "ParCoordinator.h"
class PrePostProcessor : public Thread
{
private:
class DownloadQueueObserver: public Observer
{
public:
PrePostProcessor* m_pOwner;
virtual void Update(Subject* Caller, void* Aspect) { m_pOwner->DownloadQueueUpdate(Caller, Aspect); }
};
private:
ParCoordinator m_ParCoordinator;
DownloadQueueObserver m_DownloadQueueObserver;
int m_iJobCount;
NZBInfo* m_pCurJob;
const char* m_szPauseReason;
bool IsNZBFileCompleted(NZBInfo* pNZBInfo, bool bIgnorePausedPars, bool bAllowOnlyOneDeleted);
bool IsNZBFileDownloading(NZBInfo* pNZBInfo);
void CheckPostQueue();
void JobCompleted(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void StartJob(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void SaveQueue(DownloadQueue* pDownloadQueue);
void SanitisePostQueue(DownloadQueue* pDownloadQueue);
void CheckDiskSpace();
void UpdatePauseState(bool bNeedPause, const char* szReason);
void NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBDeleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, bool bSaveQueue);
bool PostQueueDelete(DownloadQueue* pDownloadQueue, IDList* pIDList);
void DeletePostThread(PostInfo* pPostInfo);
NZBInfo* GetNextJob(DownloadQueue* pDownloadQueue);
void DownloadQueueUpdate(Subject* Caller, void* Aspect);
void DeleteCleanup(NZBInfo* pNZBInfo);
public:
PrePostProcessor();
virtual ~PrePostProcessor();
virtual void Run();
virtual void Stop();
bool HasMoreJobs() { return m_iJobCount > 0; }
int GetJobCount() { return m_iJobCount; }
bool EditList(DownloadQueue* pDownloadQueue, IDList* pIDList, DownloadQueue::EEditAction eAction, int iOffset, const char* szText);
void NZBAdded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBDownloaded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
};
#endif


File diff suppressed because it is too large.


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2013-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -31,7 +31,7 @@
#include "Log.h"
#include "Thread.h"
#include "DownloadInfo.h"
#include "ScriptController.h"
#include "Script.h"
class UnpackController : public Thread, public ScriptController
{
@@ -59,38 +59,47 @@ private:
char m_szFinalDir[1024];
char m_szUnpackDir[1024];
char m_szPassword[1024];
bool m_bInterDir;
bool m_bAllOKMessageReceived;
bool m_bNoFilesMessageReceived;
bool m_bHasParFiles;
bool m_bHasRarFiles;
bool m_bHasNonStdRarFiles;
bool m_bHasSevenZipFiles;
bool m_bHasSevenZipMultiFiles;
bool m_bHasSplittedFiles;
bool m_bUnpackOK;
bool m_bUnpackStartError;
bool m_bUnpackSpaceError;
bool m_bUnpackPasswordError4;
bool m_bUnpackPasswordError5;
bool m_bCleanedUpDisk;
bool m_bAutoTerminated;
EUnpacker m_eUnpacker;
FileList m_archiveFiles;
bool m_bFinalDirCreated;
FileList m_JoinedFiles;
protected:
virtual bool ReadLine(char* szBuf, int iBufSize, FILE* pStream);
virtual void AddMessage(Message::EKind eKind, bool bDefaultKind, const char* szText);
virtual void AddMessage(Message::EKind eKind, const char* szText);
void ExecuteUnrar();
void ExecuteSevenZip(bool bMultiVolumes);
void JoinSplittedFiles();
bool JoinFile(const char* szFragBaseName);
void Completed();
void CreateUnpackDir();
bool Cleanup();
bool HasParFiles();
bool HasBrokenFiles();
void CheckArchiveFiles();
void CheckArchiveFiles(bool bScanNonStdFiles);
void SetProgressLabel(const char* szProgressLabel);
#ifndef DISABLE_PARCHECK
void RequestParCheck(bool bRename);
void RequestParCheck(bool bForceRepair);
#endif
bool FileHasRarSignature(const char* szFilename);
public:
virtual ~UnpackController();
virtual void Run();
virtual void Stop();
static void StartUnpackJob(PostInfo* pPostInfo);
static void StartJob(PostInfo* pPostInfo);
};
class MoveController : public Thread, public ScriptController
@@ -104,7 +113,21 @@ private:
public:
virtual void Run();
static void StartMoveJob(PostInfo* pPostInfo);
static void StartJob(PostInfo* pPostInfo);
};
class CleanupController : public Thread, public ScriptController
{
private:
PostInfo* m_pPostInfo;
char m_szDestDir[1024];
char m_szFinalDir[1024];
bool Cleanup(const char* szDestDir, bool *bDeleted);
public:
virtual void Run();
static void StartJob(PostInfo* pPostInfo);
};
#endif

daemon/queue/DiskState.cpp Normal file (2886 lines)

File diff suppressed because it is too large.

daemon/queue/DiskState.h Normal file (97 lines)

@@ -0,0 +1,97 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef DISKSTATE_H
#define DISKSTATE_H
#include "DownloadInfo.h"
#include "FeedInfo.h"
#include "NewsServer.h"
#include "StatMeter.h"
class DiskState
{
private:
int fscanf(FILE* infile, const char* Format, ...);
int ParseFormatVersion(const char* szFormatSignature);
bool SaveFileInfo(FileInfo* pFileInfo, const char* szFilename);
bool LoadFileInfo(FileInfo* pFileInfo, const char* szFilename, bool bFileSummary, bool bArticles);
void SaveNZBQueue(DownloadQueue* pDownloadQueue, FILE* outfile);
bool LoadNZBList(NZBList* pNZBList, Servers* pServers, FILE* infile, int iFormatVersion);
void SaveNZBInfo(NZBInfo* pNZBInfo, FILE* outfile);
bool LoadNZBInfo(NZBInfo* pNZBInfo, Servers* pServers, FILE* infile, int iFormatVersion);
void SavePostQueue(DownloadQueue* pDownloadQueue, FILE* outfile);
void SaveDupInfo(DupInfo* pDupInfo, FILE* outfile);
bool LoadDupInfo(DupInfo* pDupInfo, FILE* infile, int iFormatVersion);
void SaveHistory(DownloadQueue* pDownloadQueue, FILE* outfile);
bool LoadHistory(DownloadQueue* pDownloadQueue, NZBList* pNZBList, Servers* pServers, FILE* infile, int iFormatVersion);
NZBInfo* FindNZBInfo(DownloadQueue* pDownloadQueue, int iID);
bool SaveFeedStatus(Feeds* pFeeds, FILE* outfile);
bool LoadFeedStatus(Feeds* pFeeds, FILE* infile, int iFormatVersion);
bool SaveFeedHistory(FeedHistory* pFeedHistory, FILE* outfile);
bool LoadFeedHistory(FeedHistory* pFeedHistory, FILE* infile, int iFormatVersion);
bool SaveServerInfo(Servers* pServers, FILE* outfile);
bool LoadServerInfo(Servers* pServers, FILE* infile, int iFormatVersion, bool* pPerfectMatch);
bool SaveVolumeStat(ServerVolumes* pServerVolumes, FILE* outfile);
bool LoadVolumeStat(Servers* pServers, ServerVolumes* pServerVolumes, FILE* infile, int iFormatVersion);
void CalcFileStats(DownloadQueue* pDownloadQueue, int iFormatVersion);
void CalcNZBFileStats(NZBInfo* pNZBInfo, int iFormatVersion);
bool LoadAllFileStates(DownloadQueue* pDownloadQueue, Servers* pServers);
void SaveServerStats(ServerStatList* pServerStatList, FILE* outfile);
bool LoadServerStats(ServerStatList* pServerStatList, Servers* pServers, FILE* infile);
// backward compatibility functions (conversions from older formats)
bool LoadPostQueue12(DownloadQueue* pDownloadQueue, NZBList* pNZBList, FILE* infile, int iFormatVersion);
bool LoadPostQueue5(DownloadQueue* pDownloadQueue, NZBList* pNZBList);
bool LoadUrlQueue12(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
bool LoadUrlInfo12(NZBInfo* pNZBInfo, FILE* infile, int iFormatVersion);
int FindNZBInfoIndex(NZBList* pNZBList, NZBInfo* pNZBInfo);
void ConvertDupeKey(char* buf, int bufsize);
bool LoadFileQueue12(NZBList* pNZBList, NZBList* pSortList, FILE* infile, int iFormatVersion);
void CompleteNZBList12(DownloadQueue* pDownloadQueue, NZBList* pNZBList, int iFormatVersion);
void CompleteDupList12(DownloadQueue* pDownloadQueue, int iFormatVersion);
void CalcCriticalHealth(NZBList* pNZBList);
public:
bool DownloadQueueExists();
bool SaveDownloadQueue(DownloadQueue* pDownloadQueue);
bool LoadDownloadQueue(DownloadQueue* pDownloadQueue, Servers* pServers);
bool SaveFile(FileInfo* pFileInfo);
bool SaveFileState(FileInfo* pFileInfo, bool bCompleted);
bool LoadFileState(FileInfo* pFileInfo, Servers* pServers, bool bCompleted);
bool LoadArticles(FileInfo* pFileInfo);
void DiscardDownloadQueue();
bool DiscardFile(FileInfo* pFileInfo, bool bDeleteData, bool bDeletePartialState, bool bDeleteCompletedState);
void DiscardFiles(NZBInfo* pNZBInfo);
bool SaveFeeds(Feeds* pFeeds, FeedHistory* pFeedHistory);
bool LoadFeeds(Feeds* pFeeds, FeedHistory* pFeedHistory);
bool SaveStats(Servers* pServers, ServerVolumes* pServerVolumes);
bool LoadStats(Servers* pServers, ServerVolumes* pServerVolumes, bool* pPerfectMatch);
void CleanupTempDir(DownloadQueue* pDownloadQueue);
void WriteCacheFlag();
void DeleteCacheFlag();
};
#endif


File diff suppressed because it is too large.

daemon/queue/DownloadInfo.h Normal file (949 lines)

@@ -0,0 +1,949 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef DOWNLOADINFO_H
#define DOWNLOADINFO_H
#include <vector>
#include <deque>
#include <time.h>
#include "Observer.h"
#include "Log.h"
#include "Thread.h"
class NZBInfo;
class DownloadQueue;
class PostInfo;
class ServerStat
{
private:
int m_iServerID;
int m_iSuccessArticles;
int m_iFailedArticles;
public:
ServerStat(int iServerID);
int GetServerID() { return m_iServerID; }
int GetSuccessArticles() { return m_iSuccessArticles; }
void SetSuccessArticles(int iSuccessArticles) { m_iSuccessArticles = iSuccessArticles; }
int GetFailedArticles() { return m_iFailedArticles; }
void SetFailedArticles(int iFailedArticles) { m_iFailedArticles = iFailedArticles; }
};
typedef std::vector<ServerStat*> ServerStatListBase;
class ServerStatList : public ServerStatListBase
{
public:
enum EStatOperation
{
soSet,
soAdd,
soSubtract
};
public:
~ServerStatList();
void StatOp(int iServerID, int iSuccessArticles, int iFailedArticles, EStatOperation eStatOperation);
void ListOp(ServerStatList* pServerStats, EStatOperation eStatOperation);
void Clear();
};
class ArticleInfo
{
public:
enum EStatus
{
aiUndefined,
aiRunning,
aiFinished,
aiFailed
};
private:
int m_iPartNumber;
char* m_szMessageID;
int m_iSize;
char* m_pSegmentContent;
long long m_iSegmentOffset;
int m_iSegmentSize;
EStatus m_eStatus;
char* m_szResultFilename;
unsigned long m_lCrc;
public:
ArticleInfo();
~ArticleInfo();
void SetPartNumber(int s) { m_iPartNumber = s; }
int GetPartNumber() { return m_iPartNumber; }
const char* GetMessageID() { return m_szMessageID; }
void SetMessageID(const char* szMessageID);
void SetSize(int iSize) { m_iSize = iSize; }
int GetSize() { return m_iSize; }
void AttachSegment(char* pContent, long long iOffset, int iSize);
void DiscardSegment();
const char* GetSegmentContent() { return m_pSegmentContent; }
void SetSegmentOffset(long long iSegmentOffset) { m_iSegmentOffset = iSegmentOffset; }
long long GetSegmentOffset() { return m_iSegmentOffset; }
void SetSegmentSize(int iSegmentSize) { m_iSegmentSize = iSegmentSize; }
int GetSegmentSize() { return m_iSegmentSize; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
const char* GetResultFilename() { return m_szResultFilename; }
void SetResultFilename(const char* v);
unsigned long GetCrc() { return m_lCrc; }
void SetCrc(unsigned long lCrc) { m_lCrc = lCrc; }
};
class FileInfo
{
public:
typedef std::vector<ArticleInfo*> Articles;
typedef std::vector<char*> Groups;
private:
int m_iID;
NZBInfo* m_pNZBInfo;
Articles m_Articles;
Groups m_Groups;
ServerStatList m_ServerStats;
char* m_szSubject;
char* m_szFilename;
long long m_lSize;
long long m_lRemainingSize;
long long m_lSuccessSize;
long long m_lFailedSize;
long long m_lMissedSize;
int m_iTotalArticles;
int m_iMissedArticles;
int m_iFailedArticles;
int m_iSuccessArticles;
time_t m_tTime;
bool m_bPaused;
bool m_bDeleted;
bool m_bFilenameConfirmed;
bool m_bParFile;
int m_iCompletedArticles;
bool m_bOutputInitialized;
char* m_szOutputFilename;
Mutex* m_pMutexOutputFile;
bool m_bExtraPriority;
int m_iActiveDownloads;
bool m_bAutoDeleted;
int m_iCachedArticles;
bool m_bPartialChanged;
static int m_iIDGen;
static int m_iIDMax;
friend class CompletedFile;
public:
FileInfo(int iID = 0);
~FileInfo();
int GetID() { return m_iID; }
void SetID(int iID);
static void ResetGenID(bool bMax);
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void SetNZBInfo(NZBInfo* pNZBInfo) { m_pNZBInfo = pNZBInfo; }
Articles* GetArticles() { return &m_Articles; }
Groups* GetGroups() { return &m_Groups; }
const char* GetSubject() { return m_szSubject; }
void SetSubject(const char* szSubject);
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
void MakeValidFilename();
bool GetFilenameConfirmed() { return m_bFilenameConfirmed; }
void SetFilenameConfirmed(bool bFilenameConfirmed) { m_bFilenameConfirmed = bFilenameConfirmed; }
void SetSize(long long lSize) { m_lSize = lSize; m_lRemainingSize = lSize; }
long long GetSize() { return m_lSize; }
long long GetRemainingSize() { return m_lRemainingSize; }
void SetRemainingSize(long long lRemainingSize) { m_lRemainingSize = lRemainingSize; }
long long GetMissedSize() { return m_lMissedSize; }
void SetMissedSize(long long lMissedSize) { m_lMissedSize = lMissedSize; }
long long GetSuccessSize() { return m_lSuccessSize; }
void SetSuccessSize(long long lSuccessSize) { m_lSuccessSize = lSuccessSize; }
long long GetFailedSize() { return m_lFailedSize; }
void SetFailedSize(long long lFailedSize) { m_lFailedSize = lFailedSize; }
int GetTotalArticles() { return m_iTotalArticles; }
void SetTotalArticles(int iTotalArticles) { m_iTotalArticles = iTotalArticles; }
int GetMissedArticles() { return m_iMissedArticles; }
void SetMissedArticles(int iMissedArticles) { m_iMissedArticles = iMissedArticles; }
int GetFailedArticles() { return m_iFailedArticles; }
void SetFailedArticles(int iFailedArticles) { m_iFailedArticles = iFailedArticles; }
int GetSuccessArticles() { return m_iSuccessArticles; }
void SetSuccessArticles(int iSuccessArticles) { m_iSuccessArticles = iSuccessArticles; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
bool GetPaused() { return m_bPaused; }
void SetPaused(bool bPaused);
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool Deleted) { m_bDeleted = Deleted; }
int GetCompletedArticles() { return m_iCompletedArticles; }
void SetCompletedArticles(int iCompletedArticles) { m_iCompletedArticles = iCompletedArticles; }
bool GetParFile() { return m_bParFile; }
void SetParFile(bool bParFile) { m_bParFile = bParFile; }
void ClearArticles();
void LockOutputFile();
void UnlockOutputFile();
const char* GetOutputFilename() { return m_szOutputFilename; }
void SetOutputFilename(const char* szOutputFilename);
bool GetOutputInitialized() { return m_bOutputInitialized; }
void SetOutputInitialized(bool bOutputInitialized) { m_bOutputInitialized = bOutputInitialized; }
bool GetExtraPriority() { return m_bExtraPriority; }
void SetExtraPriority(bool bExtraPriority) { m_bExtraPriority = bExtraPriority; }
int GetActiveDownloads() { return m_iActiveDownloads; }
void SetActiveDownloads(int iActiveDownloads);
bool GetAutoDeleted() { return m_bAutoDeleted; }
void SetAutoDeleted(bool bAutoDeleted) { m_bAutoDeleted = bAutoDeleted; }
int GetCachedArticles() { return m_iCachedArticles; }
void SetCachedArticles(int iCachedArticles) { m_iCachedArticles = iCachedArticles; }
bool GetPartialChanged() { return m_bPartialChanged; }
void SetPartialChanged(bool bPartialChanged) { m_bPartialChanged = bPartialChanged; }
ServerStatList* GetServerStats() { return &m_ServerStats; }
};
typedef std::deque<FileInfo*> FileListBase;
class FileList : public FileListBase
{
private:
bool m_bOwnObjects;
public:
FileList(bool bOwnObjects = false) { m_bOwnObjects = bOwnObjects; }
~FileList();
void Clear();
void Remove(FileInfo* pFileInfo);
};
class CompletedFile
{
public:
enum EStatus
{
cfUnknown,
cfSuccess,
cfPartial,
cfFailure
};
private:
int m_iID;
char* m_szFileName;
EStatus m_eStatus;
unsigned long m_lCrc;
public:
CompletedFile(int iID, const char* szFileName, EStatus eStatus, unsigned long lCrc);
~CompletedFile();
int GetID() { return m_iID; }
void SetFileName(const char* szFileName);
const char* GetFileName() { return m_szFileName; }
EStatus GetStatus() { return m_eStatus; }
unsigned long GetCrc() { return m_lCrc; }
};
typedef std::deque<CompletedFile*> CompletedFiles;
class NZBParameter
{
private:
char* m_szName;
char* m_szValue;
void SetValue(const char* szValue);
friend class NZBParameterList;
public:
NZBParameter(const char* szName);
~NZBParameter();
const char* GetName() { return m_szName; }
const char* GetValue() { return m_szValue; }
};
typedef std::deque<NZBParameter*> NZBParameterListBase;
class NZBParameterList : public NZBParameterListBase
{
public:
~NZBParameterList();
void SetParameter(const char* szName, const char* szValue);
NZBParameter* Find(const char* szName, bool bCaseSensitive);
void Clear();
void CopyFrom(NZBParameterList* pSourceParameters);
};
class ScriptStatus
{
public:
enum EStatus
{
srNone,
srFailure,
srSuccess
};
private:
char* m_szName;
EStatus m_eStatus;
friend class ScriptStatusList;
public:
ScriptStatus(const char* szName, EStatus eStatus);
~ScriptStatus();
const char* GetName() { return m_szName; }
EStatus GetStatus() { return m_eStatus; }
};
typedef std::deque<ScriptStatus*> ScriptStatusListBase;
class ScriptStatusList : public ScriptStatusListBase
{
public:
~ScriptStatusList();
void Add(const char* szScriptName, ScriptStatus::EStatus eStatus);
void Clear();
ScriptStatus::EStatus CalcTotalStatus();
};
enum EDupeMode
{
dmScore,
dmAll,
dmForce
};
class NZBInfo
{
public:
enum ERenameStatus
{
rsNone,
rsSkipped,
rsFailure,
rsSuccess
};
enum EParStatus
{
psNone,
psSkipped,
psFailure,
psSuccess,
psRepairPossible,
psManual
};
enum EUnpackStatus
{
usNone,
usSkipped,
usFailure,
usSuccess,
usSpace,
usPassword
};
enum ECleanupStatus
{
csNone,
csFailure,
csSuccess
};
enum EMoveStatus
{
msNone,
msFailure,
msSuccess
};
enum EDeleteStatus
{
dsNone,
dsManual,
dsHealth,
dsDupe,
dsBad
};
enum EMarkStatus
{
ksNone,
ksBad,
ksGood
};
enum EUrlStatus
{
lsNone,
lsRunning,
lsFinished,
lsFailed,
lsRetry,
lsScanSkipped,
lsScanFailed
};
enum EKind
{
nkNzb,
nkUrl
};
typedef std::deque<Message*> Messages;
static const int FORCE_PRIORITY = 900;
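// Items with a priority at or above FORCE_PRIORITY are treated as forced (see GetForcePriority());
// for example, forced items are still picked for post-processing while post-processing is paused
// (see PrePostProcessor::GetNextJob).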
friend class DupInfo;
private:
int m_iID;
EKind m_eKind;
char* m_szURL;
char* m_szFilename;
char* m_szName;
char* m_szDestDir;
char* m_szFinalDir;
char* m_szCategory;
int m_iFileCount;
int m_iParkedFileCount;
long long m_lSize;
long long m_lRemainingSize;
int m_iPausedFileCount;
long long m_lPausedSize;
int m_iRemainingParCount;
int m_iActiveDownloads;
long long m_lSuccessSize;
long long m_lFailedSize;
long long m_lCurrentSuccessSize;
long long m_lCurrentFailedSize;
long long m_lParSize;
long long m_lParSuccessSize;
long long m_lParFailedSize;
long long m_lParCurrentSuccessSize;
long long m_lParCurrentFailedSize;
int m_iTotalArticles;
int m_iSuccessArticles;
int m_iFailedArticles;
int m_iCurrentSuccessArticles;
int m_iCurrentFailedArticles;
time_t m_tMinTime;
time_t m_tMaxTime;
int m_iPriority;
CompletedFiles m_completedFiles;
ERenameStatus m_eRenameStatus;
EParStatus m_eParStatus;
EUnpackStatus m_eUnpackStatus;
ECleanupStatus m_eCleanupStatus;
EMoveStatus m_eMoveStatus;
EDeleteStatus m_eDeleteStatus;
EMarkStatus m_eMarkStatus;
EUrlStatus m_eUrlStatus;
bool m_bAddUrlPaused;
bool m_bDeletePaused;
bool m_bManyDupeFiles;
char* m_szQueuedFilename;
bool m_bDeleting;
bool m_bAvoidHistory;
bool m_bHealthPaused;
bool m_bParCleanup;
bool m_bParManual;
bool m_bCleanupDisk;
bool m_bUnpackCleanedUpDisk;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
unsigned int m_iFullContentHash;
unsigned int m_iFilteredContentHash;
FileList m_FileList;
NZBParameterList m_ppParameters;
ScriptStatusList m_scriptStatuses;
ServerStatList m_ServerStats;
ServerStatList m_CurrentServerStats;
Mutex m_mutexLog;
Messages m_Messages;
int m_iIDMessageGen;
PostInfo* m_pPostInfo;
long long m_lDownloadedSize;
time_t m_tDownloadStartTime;
int m_iDownloadSec;
int m_iPostTotalSec;
int m_iParSec;
int m_iRepairSec;
int m_iUnpackSec;
bool m_bReprocess;
time_t m_tQueueScriptTime;
bool m_bParFull;
static int m_iIDGen;
static int m_iIDMax;
public:
NZBInfo();
~NZBInfo();
int GetID() { return m_iID; }
void SetID(int iID);
static void ResetGenID(bool bMax);
static int GenerateID();
EKind GetKind() { return m_eKind; }
void SetKind(EKind eKind) { m_eKind = eKind; }
const char* GetURL() { return m_szURL; } // needs locking (for shared objects)
void SetURL(const char* szURL); // needs locking (for shared objects)
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
static void MakeNiceNZBName(const char* szNZBFilename, char* szBuffer, int iSize, bool bRemoveExt);
static void MakeNiceUrlName(const char* szURL, const char* szNZBFilename, char* szBuffer, int iSize);
const char* GetDestDir() { return m_szDestDir; } // needs locking (for shared objects)
void SetDestDir(const char* szDestDir); // needs locking (for shared objects)
const char* GetFinalDir() { return m_szFinalDir; } // needs locking (for shared objects)
void SetFinalDir(const char* szFinalDir); // needs locking (for shared objects)
const char* GetCategory() { return m_szCategory; } // needs locking (for shared objects)
void SetCategory(const char* szCategory); // needs locking (for shared objects)
const char* GetName() { return m_szName; } // needs locking (for shared objects)
void SetName(const char* szName); // needs locking (for shared objects)
int GetFileCount() { return m_iFileCount; }
void SetFileCount(int iFileCount) { m_iFileCount = iFileCount; }
int GetParkedFileCount() { return m_iParkedFileCount; }
void SetParkedFileCount(int iParkedFileCount) { m_iParkedFileCount = iParkedFileCount; }
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
long long GetRemainingSize() { return m_lRemainingSize; }
void SetRemainingSize(long long lRemainingSize) { m_lRemainingSize = lRemainingSize; }
long long GetPausedSize() { return m_lPausedSize; }
void SetPausedSize(long long lPausedSize) { m_lPausedSize = lPausedSize; }
int GetPausedFileCount() { return m_iPausedFileCount; }
void SetPausedFileCount(int iPausedFileCount) { m_iPausedFileCount = iPausedFileCount; }
int GetRemainingParCount() { return m_iRemainingParCount; }
void SetRemainingParCount(int iRemainingParCount) { m_iRemainingParCount = iRemainingParCount; }
int GetActiveDownloads() { return m_iActiveDownloads; }
void SetActiveDownloads(int iActiveDownloads);
long long GetSuccessSize() { return m_lSuccessSize; }
void SetSuccessSize(long long lSuccessSize) { m_lSuccessSize = lSuccessSize; }
long long GetFailedSize() { return m_lFailedSize; }
void SetFailedSize(long long lFailedSize) { m_lFailedSize = lFailedSize; }
long long GetCurrentSuccessSize() { return m_lCurrentSuccessSize; }
void SetCurrentSuccessSize(long long lCurrentSuccessSize) { m_lCurrentSuccessSize = lCurrentSuccessSize; }
long long GetCurrentFailedSize() { return m_lCurrentFailedSize; }
void SetCurrentFailedSize(long long lCurrentFailedSize) { m_lCurrentFailedSize = lCurrentFailedSize; }
long long GetParSize() { return m_lParSize; }
void SetParSize(long long lParSize) { m_lParSize = lParSize; }
long long GetParSuccessSize() { return m_lParSuccessSize; }
void SetParSuccessSize(long long lParSuccessSize) { m_lParSuccessSize = lParSuccessSize; }
long long GetParFailedSize() { return m_lParFailedSize; }
void SetParFailedSize(long long lParFailedSize) { m_lParFailedSize = lParFailedSize; }
long long GetParCurrentSuccessSize() { return m_lParCurrentSuccessSize; }
void SetParCurrentSuccessSize(long long lParCurrentSuccessSize) { m_lParCurrentSuccessSize = lParCurrentSuccessSize; }
long long GetParCurrentFailedSize() { return m_lParCurrentFailedSize; }
void SetParCurrentFailedSize(long long lParCurrentFailedSize) { m_lParCurrentFailedSize = lParCurrentFailedSize; }
int GetTotalArticles() { return m_iTotalArticles; }
void SetTotalArticles(int iTotalArticles) { m_iTotalArticles = iTotalArticles; }
int GetSuccessArticles() { return m_iSuccessArticles; }
void SetSuccessArticles(int iSuccessArticles) { m_iSuccessArticles = iSuccessArticles; }
int GetFailedArticles() { return m_iFailedArticles; }
void SetFailedArticles(int iFailedArticles) { m_iFailedArticles = iFailedArticles; }
int GetCurrentSuccessArticles() { return m_iCurrentSuccessArticles; }
void SetCurrentSuccessArticles(int iCurrentSuccessArticles) { m_iCurrentSuccessArticles = iCurrentSuccessArticles; }
int GetCurrentFailedArticles() { return m_iCurrentFailedArticles; }
void SetCurrentFailedArticles(int iCurrentFailedArticles) { m_iCurrentFailedArticles = iCurrentFailedArticles; }
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
bool GetForcePriority() { return m_iPriority >= FORCE_PRIORITY; }
time_t GetMinTime() { return m_tMinTime; }
void SetMinTime(time_t tMinTime) { m_tMinTime = tMinTime; }
time_t GetMaxTime() { return m_tMaxTime; }
void SetMaxTime(time_t tMaxTime) { m_tMaxTime = tMaxTime; }
void BuildDestDirName();
void BuildFinalDirName(char* szFinalDirBuf, int iBufSize);
CompletedFiles* GetCompletedFiles() { return &m_completedFiles; } // needs locking (for shared objects)
void ClearCompletedFiles();
ERenameStatus GetRenameStatus() { return m_eRenameStatus; }
void SetRenameStatus(ERenameStatus eRenameStatus) { m_eRenameStatus = eRenameStatus; }
EParStatus GetParStatus() { return m_eParStatus; }
void SetParStatus(EParStatus eParStatus) { m_eParStatus = eParStatus; }
EUnpackStatus GetUnpackStatus() { return m_eUnpackStatus; }
void SetUnpackStatus(EUnpackStatus eUnpackStatus) { m_eUnpackStatus = eUnpackStatus; }
ECleanupStatus GetCleanupStatus() { return m_eCleanupStatus; }
void SetCleanupStatus(ECleanupStatus eCleanupStatus) { m_eCleanupStatus = eCleanupStatus; }
EMoveStatus GetMoveStatus() { return m_eMoveStatus; }
void SetMoveStatus(EMoveStatus eMoveStatus) { m_eMoveStatus = eMoveStatus; }
EDeleteStatus GetDeleteStatus() { return m_eDeleteStatus; }
void SetDeleteStatus(EDeleteStatus eDeleteStatus) { m_eDeleteStatus = eDeleteStatus; }
EMarkStatus GetMarkStatus() { return m_eMarkStatus; }
void SetMarkStatus(EMarkStatus eMarkStatus) { m_eMarkStatus = eMarkStatus; }
EUrlStatus GetUrlStatus() { return m_eUrlStatus; }
void SetUrlStatus(EUrlStatus eUrlStatus) { m_eUrlStatus = eUrlStatus; }
const char* GetQueuedFilename() { return m_szQueuedFilename; }
void SetQueuedFilename(const char* szQueuedFilename);
bool GetDeleting() { return m_bDeleting; }
void SetDeleting(bool bDeleting) { m_bDeleting = bDeleting; }
bool GetDeletePaused() { return m_bDeletePaused; }
void SetDeletePaused(bool bDeletePaused) { m_bDeletePaused = bDeletePaused; }
bool GetManyDupeFiles() { return m_bManyDupeFiles; }
void SetManyDupeFiles(bool bManyDupeFiles) { m_bManyDupeFiles = bManyDupeFiles; }
bool GetAvoidHistory() { return m_bAvoidHistory; }
void SetAvoidHistory(bool bAvoidHistory) { m_bAvoidHistory = bAvoidHistory; }
bool GetHealthPaused() { return m_bHealthPaused; }
void SetHealthPaused(bool bHealthPaused) { m_bHealthPaused = bHealthPaused; }
bool GetParCleanup() { return m_bParCleanup; }
void SetParCleanup(bool bParCleanup) { m_bParCleanup = bParCleanup; }
bool GetCleanupDisk() { return m_bCleanupDisk; }
void SetCleanupDisk(bool bCleanupDisk) { m_bCleanupDisk = bCleanupDisk; }
bool GetUnpackCleanedUpDisk() { return m_bUnpackCleanedUpDisk; }
void SetUnpackCleanedUpDisk(bool bUnpackCleanedUpDisk) { m_bUnpackCleanedUpDisk = bUnpackCleanedUpDisk; }
bool GetAddUrlPaused() { return m_bAddUrlPaused; }
void SetAddUrlPaused(bool bAddUrlPaused) { m_bAddUrlPaused = bAddUrlPaused; }
FileList* GetFileList() { return &m_FileList; } // needs locking (for shared objects)
NZBParameterList* GetParameters() { return &m_ppParameters; } // needs locking (for shared objects)
ScriptStatusList* GetScriptStatuses() { return &m_scriptStatuses; } // needs locking (for shared objects)
ServerStatList* GetServerStats() { return &m_ServerStats; }
ServerStatList* GetCurrentServerStats() { return &m_CurrentServerStats; }
int CalcHealth();
int CalcCriticalHealth(bool bAllowEstimation);
const char* GetDupeKey() { return m_szDupeKey; } // needs locking (for shared objects)
void SetDupeKey(const char* szDupeKey); // needs locking (for shared objects)
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
unsigned int GetFullContentHash() { return m_iFullContentHash; }
void SetFullContentHash(unsigned int iFullContentHash) { m_iFullContentHash = iFullContentHash; }
unsigned int GetFilteredContentHash() { return m_iFilteredContentHash; }
void SetFilteredContentHash(unsigned int iFilteredContentHash) { m_iFilteredContentHash = iFilteredContentHash; }
long long GetDownloadedSize() { return m_lDownloadedSize; }
void SetDownloadedSize(long long lDownloadedSize) { m_lDownloadedSize = lDownloadedSize; }
int GetDownloadSec() { return m_iDownloadSec; }
void SetDownloadSec(int iDownloadSec) { m_iDownloadSec = iDownloadSec; }
int GetPostTotalSec() { return m_iPostTotalSec; }
void SetPostTotalSec(int iPostTotalSec) { m_iPostTotalSec = iPostTotalSec; }
int GetParSec() { return m_iParSec; }
void SetParSec(int iParSec) { m_iParSec = iParSec; }
int GetRepairSec() { return m_iRepairSec; }
void SetRepairSec(int iRepairSec) { m_iRepairSec = iRepairSec; }
int GetUnpackSec() { return m_iUnpackSec; }
void SetUnpackSec(int iUnpackSec) { m_iUnpackSec = iUnpackSec; }
time_t GetDownloadStartTime() { return m_tDownloadStartTime; }
void SetDownloadStartTime(time_t tDownloadStartTime) { m_tDownloadStartTime = tDownloadStartTime; }
void SetReprocess(bool bReprocess) { m_bReprocess = bReprocess; }
bool GetReprocess() { return m_bReprocess; }
time_t GetQueueScriptTime() { return m_tQueueScriptTime; }
void SetQueueScriptTime(time_t tQueueScriptTime) { m_tQueueScriptTime = tQueueScriptTime; }
void SetParFull(bool bParFull) { m_bParFull = bParFull; }
bool GetParFull() { return m_bParFull; }
void CopyFileList(NZBInfo* pSrcNZBInfo);
void UpdateMinMaxTime();
PostInfo* GetPostInfo() { return m_pPostInfo; }
void EnterPostProcess();
void LeavePostProcess();
bool IsDupeSuccess();
const char* MakeTextStatus(bool bIgnoreScriptStatus);
void AppendMessage(Message::EKind eKind, time_t tTime, const char* szText);
Messages* LockMessages();
void UnlockMessages();
};
typedef std::deque<NZBInfo*> NZBQueueBase;
class NZBList : public NZBQueueBase
{
private:
bool m_bOwnObjects;
public:
NZBList(bool bOwnObjects = false) { m_bOwnObjects = bOwnObjects; }
~NZBList();
void Clear();
void Add(NZBInfo* pNZBInfo, bool bAddTop);
void Remove(NZBInfo* pNZBInfo);
NZBInfo* Find(int iID);
};
class PostInfo
{
public:
enum EStage
{
ptQueued,
ptLoadingPars,
ptVerifyingSources,
ptRepairing,
ptVerifyingRepaired,
ptRenaming,
ptUnpacking,
ptMoving,
ptExecutingScript,
ptFinished
};
typedef std::deque<Message*> Messages;
typedef std::vector<char*> ParredFiles;
private:
NZBInfo* m_pNZBInfo;
bool m_bWorking;
bool m_bDeleted;
bool m_bRequestParCheck;
bool m_bForceParFull;
bool m_bForceRepair;
EStage m_eStage;
char* m_szProgressLabel;
int m_iFileProgress;
int m_iStageProgress;
time_t m_tStartTime;
time_t m_tStageTime;
Thread* m_pPostThread;
Mutex m_mutexLog;
Messages m_Messages;
int m_iIDMessageGen;
ParredFiles m_ParredFiles;
public:
PostInfo();
~PostInfo();
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void SetNZBInfo(NZBInfo* pNZBInfo) { m_pNZBInfo = pNZBInfo; }
EStage GetStage() { return m_eStage; }
void SetStage(EStage eStage) { m_eStage = eStage; }
void SetProgressLabel(const char* szProgressLabel);
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetFileProgress() { return m_iFileProgress; }
void SetFileProgress(int iFileProgress) { m_iFileProgress = iFileProgress; }
int GetStageProgress() { return m_iStageProgress; }
void SetStageProgress(int iStageProgress) { m_iStageProgress = iStageProgress; }
time_t GetStartTime() { return m_tStartTime; }
void SetStartTime(time_t tStartTime) { m_tStartTime = tStartTime; }
time_t GetStageTime() { return m_tStageTime; }
void SetStageTime(time_t tStageTime) { m_tStageTime = tStageTime; }
bool GetWorking() { return m_bWorking; }
void SetWorking(bool bWorking) { m_bWorking = bWorking; }
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool bDeleted) { m_bDeleted = bDeleted; }
bool GetRequestParCheck() { return m_bRequestParCheck; }
void SetRequestParCheck(bool bRequestParCheck) { m_bRequestParCheck = bRequestParCheck; }
bool GetForceParFull() { return m_bForceParFull; }
void SetForceParFull(bool bForceParFull) { m_bForceParFull = bForceParFull; }
bool GetForceRepair() { return m_bForceRepair; }
void SetForceRepair(bool bForceRepair) { m_bForceRepair = bForceRepair; }
Thread* GetPostThread() { return m_pPostThread; }
void SetPostThread(Thread* pPostThread) { m_pPostThread = pPostThread; }
void AppendMessage(Message::EKind eKind, const char* szText);
Messages* LockMessages();
void UnlockMessages();
ParredFiles* GetParredFiles() { return &m_ParredFiles; }
};
typedef std::vector<int> IDList;
typedef std::vector<char*> NameList;
class DupInfo
{
public:
enum EStatus
{
dsUndefined,
dsSuccess,
dsFailed,
dsDeleted,
dsDupe,
dsBad,
dsGood
};
private:
int m_iID;
char* m_szName;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
long long m_lSize;
unsigned int m_iFullContentHash;
unsigned int m_iFilteredContentHash;
EStatus m_eStatus;
public:
DupInfo();
~DupInfo();
int GetID() { return m_iID; }
void SetID(int iID);
const char* GetName() { return m_szName; } // needs locking (for shared objects)
void SetName(const char* szName); // needs locking (for shared objects)
const char* GetDupeKey() { return m_szDupeKey; } // needs locking (for shared objects)
void SetDupeKey(const char* szDupeKey); // needs locking (for shared objects)
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
unsigned int GetFullContentHash() { return m_iFullContentHash; }
void SetFullContentHash(unsigned int iFullContentHash) { m_iFullContentHash = iFullContentHash; }
unsigned int GetFilteredContentHash() { return m_iFilteredContentHash; }
void SetFilteredContentHash(unsigned int iFilteredContentHash) { m_iFilteredContentHash = iFilteredContentHash; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
};
class HistoryInfo
{
public:
enum EKind
{
hkUnknown,
hkNzb,
hkUrl,
hkDup
};
private:
EKind m_eKind;
void* m_pInfo;
time_t m_tTime;
public:
HistoryInfo(NZBInfo* pNZBInfo);
HistoryInfo(DupInfo* pDupInfo);
~HistoryInfo();
EKind GetKind() { return m_eKind; }
int GetID();
NZBInfo* GetNZBInfo() { return (NZBInfo*)m_pInfo; }
DupInfo* GetDupInfo() { return (DupInfo*)m_pInfo; }
void DiscardNZBInfo() { m_pInfo = NULL; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
void GetName(char* szBuffer, int iSize); // needs locking (for shared objects)
};
typedef std::deque<HistoryInfo*> HistoryList;
class DownloadQueue : public Subject
{
public:
enum EAspectAction
{
eaNzbFound,
eaNzbAdded,
eaNzbDeleted,
eaFileCompleted,
eaFileDeleted,
eaUrlCompleted
};
struct Aspect
{
EAspectAction eAction;
DownloadQueue* pDownloadQueue;
NZBInfo* pNZBInfo;
FileInfo* pFileInfo;
};
enum EEditAction
{
eaFileMoveOffset = 1, // move files to m_iOffset relative to the current position in download-queue
eaFileMoveTop, // move files to the top of download-queue
eaFileMoveBottom, // move files to the bottom of download-queue
eaFilePause, // pause files
eaFileResume, // resume (unpause) files
eaFileDelete, // delete files
eaFilePauseAllPars, // pause only (all) pars (does not affect other files)
eaFilePauseExtraPars, // pause only (almost all) pars, except main par-file (does not affect other files)
eaFileReorder, // set file order
eaFileSplit, // split - create new group from selected files
eaGroupMoveOffset, // move group to m_iOffset relative to the current position in download-queue
eaGroupMoveTop, // move group to the top of download-queue
eaGroupMoveBottom, // move group to the bottom of download-queue
eaGroupPause, // pause group
eaGroupResume, // resume (unpause) group
eaGroupDelete, // delete group and put to history
eaGroupDupeDelete, // delete group, put to history and mark as duplicate
eaGroupFinalDelete, // delete group without adding to history
eaGroupPauseAllPars, // pause only (all) pars (does not affect other files) in group
eaGroupPauseExtraPars, // pause only (almost all) pars in group, except main par-file (does not affect other files)
eaGroupSetPriority, // set priority for groups
eaGroupSetCategory, // set or change category for a group
eaGroupApplyCategory, // set or change category for a group and reassign pp-params according to category settings
eaGroupMerge, // merge groups
eaGroupSetParameter, // set post-process parameter for group
eaGroupSetName, // set group name (rename group)
eaGroupSetDupeKey, // set duplicate key
eaGroupSetDupeScore, // set duplicate score
eaGroupSetDupeMode, // set duplicate mode
eaPostDelete, // cancel post-processing
eaHistoryDelete, // hide history-item
eaHistoryFinalDelete, // delete history-item
eaHistoryReturn, // move history-item back to download queue
eaHistoryProcess, // move history-item back to download queue and start postprocessing
eaHistoryRedownload, // move history-item back to download queue for redownload
eaHistorySetParameter, // set post-process parameter for history-item
eaHistorySetDupeKey, // set duplicate key
eaHistorySetDupeScore, // set duplicate score
eaHistorySetDupeMode, // set duplicate mode
eaHistorySetDupeBackup, // set duplicate backup flag
eaHistoryMarkBad, // mark history-item as bad (and download other duplicate)
eaHistoryMarkGood // mark history-item as good (and push it into dup-history)
};
enum EMatchMode
{
mmID = 1,
mmName,
mmRegEx
};
private:
NZBList m_Queue;
HistoryList m_History;
Mutex m_LockMutex;
static DownloadQueue* g_pDownloadQueue;
static bool g_bLoaded;
protected:
DownloadQueue() : m_Queue(true) {}
static void Init(DownloadQueue* pGlobalInstance) { g_pDownloadQueue = pGlobalInstance; }
static void Final() { g_pDownloadQueue = NULL; }
static void Loaded() { g_bLoaded = true; }
public:
virtual ~DownloadQueue() {}
static bool IsLoaded() { return g_bLoaded; }
static DownloadQueue* Lock();
static void Unlock();
NZBList* GetQueue() { return &m_Queue; }
HistoryList* GetHistory() { return &m_History; }
virtual bool EditEntry(int ID, EEditAction eAction, int iOffset, const char* szText) = 0;
virtual bool EditList(IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, EEditAction eAction, int iOffset, const char* szText) = 0;
virtual void Save() = 0;
void CalcRemainingSize(long long* pRemaining, long long* pRemainingForced);
};
#endif
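The download queue is a shared object guarded by the static Lock()/Unlock() pair declared above. A minimal caller sketch (illustrative only, not part of the original header; the ID value is hypothetical):

void PauseGroupExample(int iHypotheticalNzbId)
{
	DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
	// apply one of the EEditAction commands to a single entry, then persist the queue
	pDownloadQueue->EditEntry(iHypotheticalNzbId, DownloadQueue::eaGroupPause, 0, NULL);
	pDownloadQueue->Save();
	DownloadQueue::Unlock();
}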

View File

@@ -0,0 +1,564 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <set>
#include <algorithm>
#include "nzbget.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
#include "NZBFile.h"
#include "HistoryCoordinator.h"
#include "DupeCoordinator.h"
extern HistoryCoordinator* g_pHistoryCoordinator;
extern Options* g_pOptions;
bool DupeCoordinator::SameNameOrKey(const char* szName1, const char* szDupeKey1,
const char* szName2, const char* szDupeKey2)
{
bool bHasDupeKeys = !Util::EmptyStr(szDupeKey1) && !Util::EmptyStr(szDupeKey2);
return (bHasDupeKeys && !strcmp(szDupeKey1, szDupeKey2)) ||
(!bHasDupeKeys && !strcmp(szName1, szName2));
}
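// Illustrative examples (not in the original source, values are hypothetical) of how the
// rule above resolves:
//   SameNameOrKey("Release.A", "tv-show-s01e01", "Release.B", "tv-show-s01e01") -> true  (both have dupekeys and the keys match)
//   SameNameOrKey("Release.A", "",               "Release.A", "")               -> true  (no dupekeys, the names match)
//   SameNameOrKey("Release.A", "tv-show-s01e01", "Release.B", "")               -> false (keys cannot be compared, the names differ)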
/**
Check if the title was already downloaded or is already queued:
- if there is a duplicate with exactly same content (via hash-check)
in queue or in history - the new item is skipped;
- if there is a duplicate marked as good in history - the new item is skipped;
- if there is a duplicate with success-status in dup-history but
there are no duplicates in recent history - the new item is skipped;
- if queue has a duplicate with the same or higher score - the new item
is moved to history as dupe-backup;
- if queue has a duplicate with lower score - the existing item is moved
to history as dupe-backup (unless it is in post-processing stage) and
the new item is added to queue;
- if queue doesn't have duplicates - the new item is added to queue.
*/
void DupeCoordinator::NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
debug("Checking duplicates for %s", pNZBInfo->GetName());
// find duplicates in download queue with exactly same content
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pQueuedNZBInfo = *it;
bool bSameContent = (pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pQueuedNZBInfo->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pQueuedNZBInfo->GetFilteredContentHash());
// if there is a duplicate with exactly same content (via hash-check)
// in queue - the new item is skipped
if (pQueuedNZBInfo != pNZBInfo && bSameContent && pNZBInfo->GetKind() == NZBInfo::nkNzb)
{
if (!strcmp(pNZBInfo->GetName(), pQueuedNZBInfo->GetName()))
{
warn("Skipping duplicate %s, already queued", pNZBInfo->GetName());
}
else
{
warn("Skipping duplicate %s, already queued as %s",
pNZBInfo->GetName(), pQueuedNZBInfo->GetName());
}
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsManual);
g_pHistoryCoordinator->DeleteDiskFiles(pNZBInfo);
return;
}
}
// if the download has empty dupekey and empty dupescore - check if download queue
// or history has an item with the same name and non-empty dupekey or dupescore and
// take these properties from that item
if (Util::EmptyStr(pNZBInfo->GetDupeKey()) && pNZBInfo->GetDupeScore() == 0)
{
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pQueuedNZBInfo = *it;
if (!strcmp(pQueuedNZBInfo->GetName(), pNZBInfo->GetName()) &&
(!Util::EmptyStr(pQueuedNZBInfo->GetDupeKey()) || pQueuedNZBInfo->GetDupeScore() != 0))
{
pNZBInfo->SetDupeKey(pQueuedNZBInfo->GetDupeKey());
pNZBInfo->SetDupeScore(pQueuedNZBInfo->GetDupeScore());
info("Assigning dupekey %s and dupescore %i to %s from existing queue item with the same name",
pNZBInfo->GetDupeKey(), pNZBInfo->GetDupeScore(), pNZBInfo->GetName());
break;
}
}
}
if (Util::EmptyStr(pNZBInfo->GetDupeKey()) && pNZBInfo->GetDupeScore() == 0)
{
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
!strcmp(pHistoryInfo->GetNZBInfo()->GetName(), pNZBInfo->GetName()) &&
(!Util::EmptyStr(pHistoryInfo->GetNZBInfo()->GetDupeKey()) || pHistoryInfo->GetNZBInfo()->GetDupeScore() != 0))
{
pNZBInfo->SetDupeKey(pHistoryInfo->GetNZBInfo()->GetDupeKey());
pNZBInfo->SetDupeScore(pHistoryInfo->GetNZBInfo()->GetDupeScore());
info("Assigning dupekey %s and dupescore %i to %s from existing history item with the same name",
pNZBInfo->GetDupeKey(), pNZBInfo->GetDupeScore(), pNZBInfo->GetName());
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDup &&
!strcmp(pHistoryInfo->GetDupInfo()->GetName(), pNZBInfo->GetName()) &&
(!Util::EmptyStr(pHistoryInfo->GetDupInfo()->GetDupeKey()) || pHistoryInfo->GetDupInfo()->GetDupeScore() != 0))
{
pNZBInfo->SetDupeKey(pHistoryInfo->GetDupInfo()->GetDupeKey());
pNZBInfo->SetDupeScore(pHistoryInfo->GetDupInfo()->GetDupeScore());
info("Assigning dupekey %s and dupescore %i to %s from existing history item with the same name",
pNZBInfo->GetDupeKey(), pNZBInfo->GetDupeScore(), pNZBInfo->GetName());
break;
}
}
}
// find duplicates in history
bool bSkip = false;
bool bGood = false;
bool bSameContent = false;
const char* szDupeName = NULL;
// find duplicates in history having exactly same content
// also: nzb-files having duplicates marked as good are skipped
// also (only in score mode): nzb-files having success-duplicates in dup-history but no duplicates in recent history are skipped
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
((pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pHistoryInfo->GetNZBInfo()->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pHistoryInfo->GetNZBInfo()->GetFilteredContentHash())))
{
bSkip = true;
bSameContent = true;
szDupeName = pHistoryInfo->GetNZBInfo()->GetName();
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDup &&
((pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pHistoryInfo->GetDupInfo()->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pHistoryInfo->GetDupInfo()->GetFilteredContentHash())))
{
bSkip = true;
bSameContent = true;
szDupeName = pHistoryInfo->GetDupInfo()->GetName();
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
bSkip = true;
bGood = true;
szDupeName = pHistoryInfo->GetNZBInfo()->GetName();
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDup &&
pHistoryInfo->GetDupInfo()->GetDupeMode() != dmForce &&
(pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood ||
(pNZBInfo->GetDupeMode() == dmScore &&
pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsSuccess &&
pNZBInfo->GetDupeScore() <= pHistoryInfo->GetDupInfo()->GetDupeScore())) &&
SameNameOrKey(pHistoryInfo->GetDupInfo()->GetName(), pHistoryInfo->GetDupInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
bSkip = true;
bGood = pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood;
szDupeName = pHistoryInfo->GetDupInfo()->GetName();
break;
}
}
if (!bSameContent && !bGood && pNZBInfo->GetDupeMode() == dmScore)
{
// nzb-files having success-duplicates in recent history (with different content) are added to history for backup
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()) &&
pNZBInfo->GetDupeScore() <= pHistoryInfo->GetNZBInfo()->GetDupeScore() &&
pHistoryInfo->GetNZBInfo()->IsDupeSuccess())
{
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
info("Collection %s is duplicate to %s", pNZBInfo->GetName(), pHistoryInfo->GetNZBInfo()->GetName());
return;
}
}
}
if (bSkip)
{
if (!strcmp(pNZBInfo->GetName(), szDupeName))
{
warn("Skipping duplicate %s, found in history with %s", pNZBInfo->GetName(),
bSameContent ? "exactly same content" : bGood ? "good status" : "success status");
}
else
{
warn("Skipping duplicate %s, found in history %s with %s",
pNZBInfo->GetName(), szDupeName,
bSameContent ? "exactly same content" : bGood ? "good status" : "success status");
}
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsManual);
g_pHistoryCoordinator->DeleteDiskFiles(pNZBInfo);
return;
}
// find duplicates in download queue and post-queue and handle both items according to their scores:
// only one item remains in queue and another one is moved to history as dupe-backup
if (pNZBInfo->GetDupeMode() == dmScore)
{
// find duplicates in download queue
int index = 0;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); index++)
{
NZBInfo* pQueuedNZBInfo = *it++;
if (pQueuedNZBInfo != pNZBInfo &&
pQueuedNZBInfo->GetKind() == NZBInfo::nkNzb &&
pQueuedNZBInfo->GetDupeMode() != dmForce &&
SameNameOrKey(pQueuedNZBInfo->GetName(), pQueuedNZBInfo->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
// if queue has a duplicate with the same or higher score - the new item
// is moved to history as dupe-backup
if (pNZBInfo->GetDupeScore() <= pQueuedNZBInfo->GetDupeScore())
{
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
info("Collection %s is duplicate to %s", pNZBInfo->GetName(), pQueuedNZBInfo->GetName());
return;
}
// if queue has a duplicate with lower score - the existing item is moved
// to history as dupe-backup (unless it is in post-processing stage) and
// the new item is added to queue
if (!pQueuedNZBInfo->GetPostInfo())
{
// the existing queue item is moved to history as dupe-backup
info("Moving collection %s with lower duplicate score to history", pQueuedNZBInfo->GetName());
pQueuedNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
pDownloadQueue->EditEntry(pQueuedNZBInfo->GetID(),
DownloadQueue::eaGroupDelete, 0, NULL);
it = pDownloadQueue->GetQueue()->begin() + index;
}
}
}
}
}
/**
- if download of an item fails and there are duplicates in history -
return the best duplicate from history to queue for download;
- if download of an item completes successfully - nothing extra needs to be done;
*/
void DupeCoordinator::NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
debug("Processing duplicates for %s", pNZBInfo->GetName());
if (pNZBInfo->GetDupeMode() == dmScore && !pNZBInfo->IsDupeSuccess())
{
ReturnBestDupe(pDownloadQueue, pNZBInfo, pNZBInfo->GetName(), pNZBInfo->GetDupeKey());
}
}
/**
Returns the best duplicate from history to download queue.
*/
void DupeCoordinator::ReturnBestDupe(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szNZBName, const char* szDupeKey)
{
// check if history (recent or dup) has other success-duplicates or good-duplicates
bool bHistoryDupe = false;
int iHistoryScore = 0;
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
bool bGoodDupe = false;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
(pHistoryInfo->GetNZBInfo()->IsDupeSuccess() ||
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood) &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
if (!bHistoryDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iHistoryScore)
{
iHistoryScore = pHistoryInfo->GetNZBInfo()->GetDupeScore();
}
bHistoryDupe = true;
bGoodDupe = pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDup &&
pHistoryInfo->GetDupInfo()->GetDupeMode() != dmForce &&
(pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsSuccess ||
pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood) &&
SameNameOrKey(pHistoryInfo->GetDupInfo()->GetName(), pHistoryInfo->GetDupInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
if (!bHistoryDupe || pHistoryInfo->GetDupInfo()->GetDupeScore() > iHistoryScore)
{
iHistoryScore = pHistoryInfo->GetDupInfo()->GetDupeScore();
}
bHistoryDupe = true;
bGoodDupe = pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood;
}
if (bGoodDupe)
{
// another duplicate with good-status exists - exit without moving other dupes to queue
return;
}
}
// check if duplicates exist in download queue
bool bQueueDupe = false;
int iQueueScore = 0;
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pQueuedNZBInfo = *it;
if (pQueuedNZBInfo != pNZBInfo &&
pQueuedNZBInfo->GetKind() == NZBInfo::nkNzb &&
pQueuedNZBInfo->GetDupeMode() != dmForce &&
SameNameOrKey(pQueuedNZBInfo->GetName(), pQueuedNZBInfo->GetDupeKey(), szNZBName, szDupeKey) &&
(!bQueueDupe || pQueuedNZBInfo->GetDupeScore() > iQueueScore))
{
iQueueScore = pQueuedNZBInfo->GetDupeScore();
bQueueDupe = true;
}
}
// find dupe-backup with highest score, whose score is also higher than other
// success-duplicates and higher than already queued items
HistoryInfo* pHistoryDupe = NULL;
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsDupe &&
pHistoryInfo->GetNZBInfo()->CalcHealth() >= pHistoryInfo->GetNZBInfo()->CalcCriticalHealth(true) &&
pHistoryInfo->GetNZBInfo()->GetMarkStatus() != NZBInfo::ksBad &&
(!bHistoryDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iHistoryScore) &&
(!bQueueDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iQueueScore) &&
(!pHistoryDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > pHistoryDupe->GetNZBInfo()->GetDupeScore()) &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
pHistoryDupe = pHistoryInfo;
}
}
// move that dupe-backup from history to download queue
if (pHistoryDupe)
{
info("Found duplicate %s for %s", pHistoryDupe->GetNZBInfo()->GetName(), szNZBName);
g_pHistoryCoordinator->Redownload(pDownloadQueue, pHistoryDupe);
}
}
void DupeCoordinator::HistoryMark(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, bool bGood)
{
char szNZBName[1024];
pHistoryInfo->GetName(szNZBName, 1024);
info("Marking %s as %s", szNZBName, (bGood ? "good" : "bad"));
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb)
{
pHistoryInfo->GetNZBInfo()->SetMarkStatus(bGood ? NZBInfo::ksGood : NZBInfo::ksBad);
}
else if (pHistoryInfo->GetKind() == HistoryInfo::hkDup)
{
pHistoryInfo->GetDupInfo()->SetStatus(bGood ? DupInfo::dsGood : DupInfo::dsBad);
}
else
{
error("Could not mark %s as bad: history item has wrong type", szNZBName);
return;
}
if (!g_pOptions->GetDupeCheck() ||
(pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() == dmForce) ||
(pHistoryInfo->GetKind() == HistoryInfo::hkDup &&
pHistoryInfo->GetDupInfo()->GetDupeMode() == dmForce))
{
return;
}
if (bGood)
{
// mark as good
// moving all duplicates from history to dup-history
HistoryCleanup(pDownloadQueue, pHistoryInfo);
}
else
{
// mark as bad
const char* szDupeKey = pHistoryInfo->GetKind() == HistoryInfo::hkNzb ? pHistoryInfo->GetNZBInfo()->GetDupeKey() :
pHistoryInfo->GetKind() == HistoryInfo::hkDup ? pHistoryInfo->GetDupInfo()->GetDupeKey() :
NULL;
ReturnBestDupe(pDownloadQueue, NULL, szNZBName, szDupeKey);
}
}
void DupeCoordinator::HistoryCleanup(DownloadQueue* pDownloadQueue, HistoryInfo* pMarkHistoryInfo)
{
const char* szDupeKey = pMarkHistoryInfo->GetKind() == HistoryInfo::hkNzb ? pMarkHistoryInfo->GetNZBInfo()->GetDupeKey() :
pMarkHistoryInfo->GetKind() == HistoryInfo::hkDup ? pMarkHistoryInfo->GetDupInfo()->GetDupeKey() :
NULL;
const char* szNZBName = pMarkHistoryInfo->GetKind() == HistoryInfo::hkNzb ? pMarkHistoryInfo->GetNZBInfo()->GetName() :
pMarkHistoryInfo->GetKind() == HistoryInfo::hkDup ? pMarkHistoryInfo->GetDupInfo()->GetName() :
NULL;
bool bChanged = false;
int index = 0;
// traversing in reverse order to delete items in the order they were added to history
// (just to produce the log messages in a more logical order)
for (HistoryList::reverse_iterator it = pDownloadQueue->GetHistory()->rbegin(); it != pDownloadQueue->GetHistory()->rend(); )
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsDupe &&
pHistoryInfo != pMarkHistoryInfo &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
g_pHistoryCoordinator->HistoryHide(pDownloadQueue, pHistoryInfo, index);
index++;
it = pDownloadQueue->GetHistory()->rbegin() + index;
bChanged = true;
}
else
{
it++;
index++;
}
}
if (bChanged)
{
pDownloadQueue->Save();
}
}
DupeCoordinator::EDupeStatus DupeCoordinator::GetDupeStatus(DownloadQueue* pDownloadQueue,
const char* szName, const char* szDupeKey)
{
EDupeStatus eStatuses = dsNone;
// find duplicates in download queue
for (NZBList::iterator it = pDownloadQueue->GetQueue()->begin(); it != pDownloadQueue->GetQueue()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
if (SameNameOrKey(szName, szDupeKey, pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
if (pNZBInfo->GetSuccessArticles() + pNZBInfo->GetFailedArticles() > 0)
{
eStatuses = (EDupeStatus)(eStatuses | dsDownloading);
}
else
{
eStatuses = (EDupeStatus)(eStatuses | dsQueued);
}
}
}
// find duplicates in history
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
SameNameOrKey(szName, szDupeKey, pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey()))
{
const char* szTextStatus = pHistoryInfo->GetNZBInfo()->MakeTextStatus(true);
if (!strncasecmp(szTextStatus, "SUCCESS", 7))
{
eStatuses = (EDupeStatus)(eStatuses | dsSuccess);
}
else if (!strncasecmp(szTextStatus, "FAILURE", 7))
{
eStatuses = (EDupeStatus)(eStatuses | dsFailure);
}
else if (!strncasecmp(szTextStatus, "WARNING", 7))
{
eStatuses = (EDupeStatus)(eStatuses | dsWarning);
}
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDup &&
SameNameOrKey(szName, szDupeKey, pHistoryInfo->GetDupInfo()->GetName(), pHistoryInfo->GetDupInfo()->GetDupeKey()))
{
if (pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsSuccess ||
pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood)
{
eStatuses = (EDupeStatus)(eStatuses | dsSuccess);
}
else if (pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsFailed ||
pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsBad)
{
eStatuses = (EDupeStatus)(eStatuses | dsFailure);
}
}
}
return eStatuses;
}
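GetDupeStatus returns a bit mask, so the same name/dupekey can, for example, be queued and present in history at the same time. A minimal caller sketch (not part of the original file; the release name and dupekey are hypothetical):

bool IsAlreadyCovered(DupeCoordinator* pDupeCoordinator, DownloadQueue* pDownloadQueue)
{
	DupeCoordinator::EDupeStatus eStatus = pDupeCoordinator->GetDupeStatus(
		pDownloadQueue, "Some.Release.Name", "hypothetical-dupekey");
	// several status bits can be set at once; test the ones of interest
	return (eStatus & (DupeCoordinator::dsSuccess | DupeCoordinator::dsDownloading)) != 0;
}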

View File

@@ -0,0 +1,56 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef DUPECOORDINATOR_H
#define DUPECOORDINATOR_H
#include "DownloadInfo.h"
class DupeCoordinator
{
public:
enum EDupeStatus
{
dsNone = 0,
dsQueued = 1,
dsDownloading = 2,
dsSuccess = 4,
dsWarning = 8,
dsFailure = 16
};
private:
void ReturnBestDupe(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szNZBName, const char* szDupeKey);
void HistoryCleanup(DownloadQueue* pDownloadQueue, HistoryInfo* pMarkHistoryInfo);
bool SameNameOrKey(const char* szName1, const char* szDupeKey1, const char* szName2, const char* szDupeKey2);
public:
void NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void HistoryMark(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, bool bGood);
EDupeStatus GetDupeStatus(DownloadQueue* pDownloadQueue, const char* szName, const char* szDupeKey);
};
#endif

View File

@@ -0,0 +1,667 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision: 951 $
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <set>
#include <algorithm>
#include "nzbget.h"
#include "HistoryCoordinator.h"
#include "Options.h"
#include "Log.h"
#include "QueueCoordinator.h"
#include "DiskState.h"
#include "Util.h"
#include "NZBFile.h"
#include "DupeCoordinator.h"
#include "ParCoordinator.h"
#include "PrePostProcessor.h"
#include "DupeCoordinator.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern PrePostProcessor* g_pPrePostProcessor;
extern DupeCoordinator* g_pDupeCoordinator;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
HistoryCoordinator::HistoryCoordinator()
{
debug("Creating HistoryCoordinator");
}
HistoryCoordinator::~HistoryCoordinator()
{
debug("Destroying HistoryCoordinator");
}
void HistoryCoordinator::Cleanup()
{
debug("Cleaning up HistoryCoordinator");
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
delete *it;
}
pDownloadQueue->GetHistory()->clear();
DownloadQueue::Unlock();
}
/**
* Removes old entries from (recent) history
*/
void HistoryCoordinator::IntervalCheck()
{
DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
time_t tMinTime = time(NULL) - g_pOptions->GetKeepHistory() * 60*60*24;
bool bChanged = false;
int index = 0;
// traversing in reverse order to delete items in the order they were added to history
// (just to produce the log messages in a more logical order)
for (HistoryList::reverse_iterator it = pDownloadQueue->GetHistory()->rbegin(); it != pDownloadQueue->GetHistory()->rend(); )
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() != HistoryInfo::hkDup && pHistoryInfo->GetTime() < tMinTime)
{
if (g_pOptions->GetDupeCheck() && pHistoryInfo->GetKind() == HistoryInfo::hkNzb)
{
// replace history element
HistoryHide(pDownloadQueue, pHistoryInfo, index);
index++;
}
else
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
pDownloadQueue->GetHistory()->erase(pDownloadQueue->GetHistory()->end() - 1 - index);
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb)
{
DeleteDiskFiles(pHistoryInfo->GetNZBInfo());
}
info("Collection %s removed from history", szNiceName);
delete pHistoryInfo;
}
it = pDownloadQueue->GetHistory()->rbegin() + index;
bChanged = true;
}
else
{
it++;
index++;
}
}
if (bChanged)
{
pDownloadQueue->Save();
}
DownloadQueue::Unlock();
}
void HistoryCoordinator::DeleteDiskFiles(NZBInfo* pNZBInfo)
{
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
// delete parked files
g_pDiskState->DiscardFiles(pNZBInfo);
}
pNZBInfo->GetFileList()->Clear();
// delete nzb-file
if (!g_pOptions->GetNzbCleanupDisk())
{
return;
}
// QueuedFilename may contain one filename or several filenames separated
// by the "|" character (for merged groups)
char* szFilename = strdup(pNZBInfo->GetQueuedFilename());
char* szEnd = szFilename - 1;
while (szEnd)
{
char* szName1 = szEnd + 1;
szEnd = strchr(szName1, '|');
if (szEnd) *szEnd = '\0';
if (Util::FileExists(szName1))
{
info("Deleting file %s", szName1);
remove(szName1);
}
}
free(szFilename);
}
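// Illustrative example (not in the original source, filenames are hypothetical): for a merged
// group with QueuedFilename "first.nzb|second.nzb" the loop above splits the string at "|"
// and removes "first.nzb" and then "second.nzb" from disk.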
void HistoryCoordinator::AddToHistory(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
// remove old item for the same NZB
for (HistoryList::iterator it = pDownloadQueue->GetHistory()->begin(); it != pDownloadQueue->GetHistory()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetNZBInfo() == pNZBInfo)
{
delete pHistoryInfo;
pDownloadQueue->GetHistory()->erase(it);
break;
}
}
HistoryInfo* pHistoryInfo = new HistoryInfo(pNZBInfo);
pHistoryInfo->SetTime(time(NULL));
pDownloadQueue->GetHistory()->push_front(pHistoryInfo);
pDownloadQueue->GetQueue()->Remove(pNZBInfo);
if (pNZBInfo->GetDeleteStatus() == NZBInfo::dsNone)
{
// park files and delete files marked for deletion
int iParkedFiles = 0;
for (FileList::iterator it = pNZBInfo->GetFileList()->begin(); it != pNZBInfo->GetFileList()->end(); )
{
FileInfo* pFileInfo = *it;
if (!pFileInfo->GetDeleted())
{
detail("Parking file %s", pFileInfo->GetFilename());
g_pQueueCoordinator->DiscardDiskFile(pFileInfo);
iParkedFiles++;
it++;
}
else
{
// since we removed pNZBInfo from queue we need to take care of removing file infos marked for deletion
pNZBInfo->GetFileList()->erase(it);
delete pFileInfo;
it = pNZBInfo->GetFileList()->begin() + iParkedFiles;
}
}
pNZBInfo->SetParkedFileCount(iParkedFiles);
}
else
{
pNZBInfo->GetFileList()->Clear();
}
info("Collection %s added to history", pNZBInfo->GetName());
}
void HistoryCoordinator::HistoryHide(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, int rindex)
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
// replace history element
DupInfo* pDupInfo = new DupInfo();
pDupInfo->SetID(pHistoryInfo->GetNZBInfo()->GetID());
pDupInfo->SetName(pHistoryInfo->GetNZBInfo()->GetName());
pDupInfo->SetDupeKey(pHistoryInfo->GetNZBInfo()->GetDupeKey());
pDupInfo->SetDupeScore(pHistoryInfo->GetNZBInfo()->GetDupeScore());
pDupInfo->SetDupeMode(pHistoryInfo->GetNZBInfo()->GetDupeMode());
pDupInfo->SetSize(pHistoryInfo->GetNZBInfo()->GetSize());
pDupInfo->SetFullContentHash(pHistoryInfo->GetNZBInfo()->GetFullContentHash());
pDupInfo->SetFilteredContentHash(pHistoryInfo->GetNZBInfo()->GetFilteredContentHash());
pDupInfo->SetStatus(
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood ? DupInfo::dsGood :
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksBad ? DupInfo::dsBad :
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsDupe ? DupInfo::dsDupe :
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsManual ? DupInfo::dsDeleted :
pHistoryInfo->GetNZBInfo()->IsDupeSuccess() ? DupInfo::dsSuccess :
DupInfo::dsFailed);
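// (the chain above maps the hidden nzb-entry to a DupInfo status by priority: an explicit
// mark (good/bad) wins over a delete status (dupe/manual), which wins over the plain
// download result (success/failed))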
HistoryInfo* pNewHistoryInfo = new HistoryInfo(pDupInfo);
pNewHistoryInfo->SetTime(pHistoryInfo->GetTime());
(*pDownloadQueue->GetHistory())[pDownloadQueue->GetHistory()->size() - 1 - rindex] = pNewHistoryInfo;
DeleteDiskFiles(pHistoryInfo->GetNZBInfo());
delete pHistoryInfo;
info("Collection %s removed from history", szNiceName);
}
bool HistoryCoordinator::EditList(DownloadQueue* pDownloadQueue, IDList* pIDList, DownloadQueue::EEditAction eAction, int iOffset, const char* szText)
{
bool bOK = false;
for (IDList::iterator itID = pIDList->begin(); itID != pIDList->end(); itID++)
{
int iID = *itID;
for (HistoryList::iterator itHistory = pDownloadQueue->GetHistory()->begin(); itHistory != pDownloadQueue->GetHistory()->end(); itHistory++)
{
HistoryInfo* pHistoryInfo = *itHistory;
if (pHistoryInfo->GetID() == iID)
{
switch (eAction)
{
case DownloadQueue::eaHistoryDelete:
case DownloadQueue::eaHistoryFinalDelete:
HistoryDelete(pDownloadQueue, itHistory, pHistoryInfo, eAction == DownloadQueue::eaHistoryFinalDelete);
break;
case DownloadQueue::eaHistoryReturn:
case DownloadQueue::eaHistoryProcess:
HistoryReturn(pDownloadQueue, itHistory, pHistoryInfo, eAction == DownloadQueue::eaHistoryProcess);
break;
case DownloadQueue::eaHistoryRedownload:
HistoryRedownload(pDownloadQueue, itHistory, pHistoryInfo, false);
break;
case DownloadQueue::eaHistorySetParameter:
HistorySetParameter(pHistoryInfo, szText);
break;
case DownloadQueue::eaHistorySetDupeKey:
case DownloadQueue::eaHistorySetDupeScore:
case DownloadQueue::eaHistorySetDupeMode:
case DownloadQueue::eaHistorySetDupeBackup:
HistorySetDupeParam(pHistoryInfo, eAction, szText);
break;
case DownloadQueue::eaHistoryMarkBad:
case DownloadQueue::eaHistoryMarkGood:
g_pDupeCoordinator->HistoryMark(pDownloadQueue, pHistoryInfo, eAction == DownloadQueue::eaHistoryMarkGood);
break;
default:
// nothing, just to avoid compiler warning
break;
}
bOK = true;
break;
}
}
}
if (bOK)
{
pDownloadQueue->Save();
}
return bOK;
}
void HistoryCoordinator::HistoryDelete(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory,
HistoryInfo* pHistoryInfo, bool bFinal)
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
info("Deleting %s from history", szNiceName);
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb)
{
DeleteDiskFiles(pHistoryInfo->GetNZBInfo());
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb &&
g_pOptions->GetDeleteCleanupDisk() &&
(pHistoryInfo->GetNZBInfo()->GetDeleteStatus() != NZBInfo::dsNone ||
pHistoryInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psFailure ||
pHistoryInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usFailure ||
pHistoryInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usPassword) &&
Util::DirectoryExists(pHistoryInfo->GetNZBInfo()->GetDestDir()))
{
info("Deleting %s", pHistoryInfo->GetNZBInfo()->GetDestDir());
char szErrBuf[256];
if (!Util::DeleteDirectoryWithContent(pHistoryInfo->GetNZBInfo()->GetDestDir(), szErrBuf, sizeof(szErrBuf)))
{
error("Could not delete directory %s: %s", pHistoryInfo->GetNZBInfo()->GetDestDir(), szErrBuf);
}
}
if (bFinal || !g_pOptions->GetDupeCheck() || pHistoryInfo->GetKind() == HistoryInfo::hkUrl)
{
pDownloadQueue->GetHistory()->erase(itHistory);
delete pHistoryInfo;
}
else
{
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb)
{
// replace history element
int rindex = pDownloadQueue->GetHistory()->size() - 1 - (itHistory - pDownloadQueue->GetHistory()->begin());
HistoryHide(pDownloadQueue, pHistoryInfo, rindex);
}
}
}
void HistoryCoordinator::HistoryReturn(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bReprocess)
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
debug("Returning %s from history back to download queue", szNiceName);
NZBInfo* pNZBInfo = NULL;
if (bReprocess && pHistoryInfo->GetKind() != HistoryInfo::hkNzb)
{
error("Could not restart postprocessing for %s: history item has wrong type", szNiceName);
return;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb)
{
pNZBInfo = pHistoryInfo->GetNZBInfo();
// unpark files
bool bUnparked = false;
for (FileList::iterator it = pNZBInfo->GetFileList()->begin(); it != pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
detail("Unpark file %s", pFileInfo->GetFilename());
bUnparked = true;
}
if (!(bUnparked || bReprocess))
{
warn("Could not return %s back from history to download queue: history item does not have any files left for download", szNiceName);
return;
}
pDownloadQueue->GetQueue()->push_front(pNZBInfo);
pHistoryInfo->DiscardNZBInfo();
// reset postprocessing status variables
pNZBInfo->SetParCleanup(false);
if (!pNZBInfo->GetUnpackCleanedUpDisk())
{
pNZBInfo->SetUnpackStatus(NZBInfo::usNone);
pNZBInfo->SetCleanupStatus(NZBInfo::csNone);
pNZBInfo->SetRenameStatus(NZBInfo::rsNone);
pNZBInfo->SetPostTotalSec(pNZBInfo->GetPostTotalSec() - pNZBInfo->GetUnpackSec());
pNZBInfo->SetUnpackSec(0);
if (ParCoordinator::FindMainPars(pNZBInfo->GetDestDir(), NULL))
{
pNZBInfo->SetParStatus(NZBInfo::psNone);
pNZBInfo->SetPostTotalSec(pNZBInfo->GetPostTotalSec() - pNZBInfo->GetParSec());
pNZBInfo->SetParSec(0);
pNZBInfo->SetRepairSec(0);
pNZBInfo->SetParFull(false);
}
}
pNZBInfo->SetDeleteStatus(NZBInfo::dsNone);
pNZBInfo->SetDeletePaused(false);
pNZBInfo->SetMarkStatus(NZBInfo::ksNone);
pNZBInfo->GetScriptStatuses()->Clear();
pNZBInfo->SetParkedFileCount(0);
if (pNZBInfo->GetMoveStatus() == NZBInfo::msFailure)
{
pNZBInfo->SetMoveStatus(NZBInfo::msNone);
}
pNZBInfo->SetReprocess(bReprocess);
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkUrl)
{
NZBInfo* pNZBInfo = pHistoryInfo->GetNZBInfo();
pHistoryInfo->DiscardNZBInfo();
pNZBInfo->SetUrlStatus(NZBInfo::lsNone);
pNZBInfo->SetDeleteStatus(NZBInfo::dsNone);
pDownloadQueue->GetQueue()->push_front(pNZBInfo);
}
pDownloadQueue->GetHistory()->erase(itHistory);
// the object "pHistoryInfo" is released few lines later, after the call to "NZBDownloaded"
info("%s returned from history back to download queue", szNiceName);
if (bReprocess)
{
// start postprocessing
debug("Restarting postprocessing for %s", szNiceName);
g_pPrePostProcessor->NZBDownloaded(pDownloadQueue, pNZBInfo);
}
delete pHistoryInfo;
}
void HistoryCoordinator::HistoryRedownload(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory,
HistoryInfo* pHistoryInfo, bool bRestorePauseState)
{
NZBInfo* pNZBInfo = pHistoryInfo->GetNZBInfo();
bool bPaused = bRestorePauseState && pNZBInfo->GetDeletePaused();
if (!Util::FileExists(pNZBInfo->GetQueuedFilename()))
{
error("Could not return collection %s from history back to queue: could not find source nzb-file %s",
pNZBInfo->GetName(), pNZBInfo->GetQueuedFilename());
return;
}
NZBFile* pNZBFile = NZBFile::Create(pNZBInfo->GetQueuedFilename(), "");
if (pNZBFile == NULL)
{
error("Could not return collection %s from history back to queue: could not parse nzb-file",
pNZBInfo->GetName());
return;
}
info("Returning collection %s from history back to queue", pNZBInfo->GetName());
for (FileList::iterator it = pNZBFile->GetNZBInfo()->GetFileList()->begin(); it != pNZBFile->GetNZBInfo()->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetPaused(bPaused);
}
if (Util::DirectoryExists(pNZBInfo->GetDestDir()))
{
detail("Deleting %s", pNZBInfo->GetDestDir());
char szErrBuf[256];
if (!Util::DeleteDirectoryWithContent(pNZBInfo->GetDestDir(), szErrBuf, sizeof(szErrBuf)))
{
error("Could not delete directory %s: %s", pNZBInfo->GetDestDir(), szErrBuf);
}
}
pNZBInfo->BuildDestDirName();
if (Util::DirectoryExists(pNZBInfo->GetDestDir()))
{
detail("Deleting %s", pNZBInfo->GetDestDir());
char szErrBuf[256];
if (!Util::DeleteDirectoryWithContent(pNZBInfo->GetDestDir(), szErrBuf, sizeof(szErrBuf)))
{
error("Could not delete directory %s: %s", pNZBInfo->GetDestDir(), szErrBuf);
}
}
g_pDiskState->DiscardFiles(pNZBInfo);
// reset status fields (which are not reset by "HistoryReturn")
pNZBInfo->SetMoveStatus(NZBInfo::msNone);
pNZBInfo->SetUnpackCleanedUpDisk(false);
pNZBInfo->SetParStatus(NZBInfo::psNone);
pNZBInfo->SetRenameStatus(NZBInfo::rsNone);
pNZBInfo->SetDownloadedSize(0);
pNZBInfo->SetDownloadSec(0);
pNZBInfo->SetPostTotalSec(0);
pNZBInfo->SetParSec(0);
pNZBInfo->SetRepairSec(0);
pNZBInfo->SetUnpackSec(0);
pNZBInfo->ClearCompletedFiles();
pNZBInfo->GetServerStats()->Clear();
pNZBInfo->GetCurrentServerStats()->Clear();
pNZBInfo->CopyFileList(pNZBFile->GetNZBInfo());
g_pQueueCoordinator->CheckDupeFileInfos(pNZBInfo);
delete pNZBFile;
HistoryReturn(pDownloadQueue, itHistory, pHistoryInfo, false);
g_pPrePostProcessor->NZBAdded(pDownloadQueue, pNZBInfo);
}
void HistoryCoordinator::HistorySetParameter(HistoryInfo* pHistoryInfo, const char* szText)
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
debug("Setting post-process-parameter '%s' for '%s'", szText, szNiceName);
if (!(pHistoryInfo->GetKind() == HistoryInfo::hkNzb || pHistoryInfo->GetKind() == HistoryInfo::hkUrl))
{
error("Could not set post-process-parameter for %s: history item has wrong type", szNiceName);
return;
}
char* szStr = strdup(szText);
char* szValue = strchr(szStr, '=');
if (szValue)
{
*szValue = '\0';
szValue++;
pHistoryInfo->GetNZBInfo()->GetParameters()->SetParameter(szStr, szValue);
}
else
{
error("Could not set post-process-parameter for %s: invalid argument: %s", pHistoryInfo->GetNZBInfo()->GetName(), szText);
}
free(szStr);
}
void HistoryCoordinator::HistorySetDupeParam(HistoryInfo* pHistoryInfo, DownloadQueue::EEditAction eAction, const char* szText)
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
debug("Setting dupe-parameter '%i'='%s' for '%s'", (int)eAction, szText, szNiceName);
EDupeMode eMode = dmScore;
if (eAction == DownloadQueue::eaHistorySetDupeMode)
{
if (!strcasecmp(szText, "SCORE"))
{
eMode = dmScore;
}
else if (!strcasecmp(szText, "ALL"))
{
eMode = dmAll;
}
else if (!strcasecmp(szText, "FORCE"))
{
eMode = dmForce;
}
else
{
error("Could not set duplicate mode for %s: incorrect mode (%s)", szNiceName, szText);
return;
}
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkNzb || pHistoryInfo->GetKind() == HistoryInfo::hkUrl)
{
switch (eAction)
{
case DownloadQueue::eaHistorySetDupeKey:
pHistoryInfo->GetNZBInfo()->SetDupeKey(szText);
break;
case DownloadQueue::eaHistorySetDupeScore:
pHistoryInfo->GetNZBInfo()->SetDupeScore(atoi(szText));
break;
case DownloadQueue::eaHistorySetDupeMode:
pHistoryInfo->GetNZBInfo()->SetDupeMode(eMode);
break;
case DownloadQueue::eaHistorySetDupeBackup:
if (pHistoryInfo->GetKind() == HistoryInfo::hkUrl)
{
error("Could not set duplicate parameter for %s: history item has wrong type", szNiceName);
return;
}
else if (pHistoryInfo->GetNZBInfo()->GetDeleteStatus() != NZBInfo::dsDupe &&
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() != NZBInfo::dsManual)
{
error("Could not set duplicate parameter for %s: history item has wrong delete status", szNiceName);
return;
}
pHistoryInfo->GetNZBInfo()->SetDeleteStatus(!strcasecmp(szText, "YES") ||
!strcasecmp(szText, "TRUE") || !strcasecmp(szText, "1") ? NZBInfo::dsDupe : NZBInfo::dsManual);
break;
default:
// suppress compiler warning
break;
}
}
else if (pHistoryInfo->GetKind() == HistoryInfo::hkDup)
{
switch (eAction)
{
case DownloadQueue::eaHistorySetDupeKey:
pHistoryInfo->GetDupInfo()->SetDupeKey(szText);
break;
case DownloadQueue::eaHistorySetDupeScore:
pHistoryInfo->GetDupInfo()->SetDupeScore(atoi(szText));
break;
case DownloadQueue::eaHistorySetDupeMode:
pHistoryInfo->GetDupInfo()->SetDupeMode(eMode);
break;
case DownloadQueue::eaHistorySetDupeBackup:
error("Could not set duplicate parameter for %s: history item has wrong type", szNiceName);
return;
default:
// suppress compiler warning
break;
}
}
}
void HistoryCoordinator::Redownload(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo)
{
HistoryList::iterator it = std::find(pDownloadQueue->GetHistory()->begin(),
pDownloadQueue->GetHistory()->end(), pHistoryInfo);
HistoryRedownload(pDownloadQueue, it, pHistoryInfo, true);
}

View File

@@ -0,0 +1,54 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision: 951 $
* $Date$
*
*/
#ifndef HISTORYCOORDINATOR_H
#define HISTORYCOORDINATOR_H
#include "DownloadInfo.h"
class HistoryCoordinator
{
private:
void HistoryDelete(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bFinal);
void HistoryReturn(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bReprocess);
void HistoryRedownload(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bRestorePauseState);
void HistorySetParameter(HistoryInfo* pHistoryInfo, const char* szText);
void HistorySetDupeParam(HistoryInfo* pHistoryInfo, DownloadQueue::EEditAction eAction, const char* szText);
void HistoryTransformToDup(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, int rindex);
void SaveQueue(DownloadQueue* pDownloadQueue);
public:
HistoryCoordinator();
virtual ~HistoryCoordinator();
void AddToHistory(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
bool EditList(DownloadQueue* pDownloadQueue, IDList* pIDList, DownloadQueue::EEditAction eAction, int iOffset, const char* szText);
void DeleteDiskFiles(NZBInfo* pNZBInfo);
void HistoryHide(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, int rindex);
void Redownload(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo);
void IntervalCheck();
void Cleanup();
};
#endif
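A minimal sketch (not part of the original header) of how the coordinator is driven by the editing code; g_pHistoryCoordinator is the global instance declared elsewhere in the sources, and the history ID is hypothetical:

extern HistoryCoordinator* g_pHistoryCoordinator;

void ReturnHistoryItemExample(int iHypotheticalHistoryId)
{
	DownloadQueue* pDownloadQueue = DownloadQueue::Lock();
	IDList ids;
	ids.push_back(iHypotheticalHistoryId);
	// move the history item back into the download queue; EditList saves the queue itself
	g_pHistoryCoordinator->EditList(pDownloadQueue, &ids, DownloadQueue::eaHistoryReturn, 0, NULL);
	DownloadQueue::Unlock();
}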

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -34,6 +34,7 @@
#include <string.h>
#include <list>
#include <ctype.h>
#ifdef WIN32
#include <comutil.h>
#import <msxml.tlb> named_guids
@@ -61,20 +62,19 @@ NZBFile::NZBFile(const char* szFileName, const char* szCategory)
debug("Creating NZBFile");
m_szFileName = strdup(szFileName);
m_szPassword = NULL;
m_pNZBInfo = new NZBInfo();
m_pNZBInfo->AddReference();
m_pNZBInfo->SetFilename(szFileName);
m_pNZBInfo->SetCategory(szCategory);
m_pNZBInfo->BuildDestDirName();
#ifndef WIN32
m_bPassword = false;
m_pFileInfo = NULL;
m_pArticle = NULL;
m_szTagContent = NULL;
m_iTagContentLen = 0;
#endif
m_FileInfos.clear();
}
NZBFile::~NZBFile()
@@ -82,53 +82,20 @@ NZBFile::~NZBFile()
debug("Destroying NZBFile");
// Cleanup
if (m_szFileName)
{
free(m_szFileName);
}
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
delete *it;
}
m_FileInfos.clear();
if (m_pNZBInfo)
{
m_pNZBInfo->Release();
}
free(m_szFileName);
free(m_szPassword);
#ifndef WIN32
if (m_pFileInfo)
{
delete m_pFileInfo;
}
if (m_szTagContent)
{
free(m_szTagContent);
}
delete m_pFileInfo;
free(m_szTagContent);
#endif
delete m_pNZBInfo;
}
void NZBFile::LogDebugInfo()
{
debug(" NZBFile %s", m_szFileName);
}
void NZBFile::DetachFileInfos()
{
m_FileInfos.clear();
}
NZBFile* NZBFile::CreateFromBuffer(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize)
{
return Create(szFileName, szCategory, szBuffer, iSize, true);
}
NZBFile* NZBFile::CreateFromFile(const char* szFileName, const char* szCategory)
{
return Create(szFileName, szCategory, NULL, 0, false);
info(" NZBFile %s", m_szFileName);
}
void NZBFile::AddArticle(FileInfo* pFileInfo, ArticleInfo* pArticleInfo)
@@ -137,39 +104,70 @@ void NZBFile::AddArticle(FileInfo* pFileInfo, ArticleInfo* pArticleInfo)
while ((int)pFileInfo->GetArticles()->size() < pArticleInfo->GetPartNumber())
pFileInfo->GetArticles()->push_back(NULL);
(*pFileInfo->GetArticles())[pArticleInfo->GetPartNumber() - 1] = pArticleInfo;
int index = pArticleInfo->GetPartNumber() - 1;
if ((*pFileInfo->GetArticles())[index])
{
delete (*pFileInfo->GetArticles())[index];
}
(*pFileInfo->GetArticles())[index] = pArticleInfo;
}
void NZBFile::AddFileInfo(FileInfo* pFileInfo)
{
// deleting empty articles
// calculate file size and delete empty articles
long long lSize = 0;
long long lMissedSize = 0;
long long lOneSize = 0;
int iUncountedArticles = 0;
int iMissedArticles = 0;
FileInfo::Articles* pArticles = pFileInfo->GetArticles();
int iTotalArticles = (int)pArticles->size();
int i = 0;
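// Missing articles show up as NULL entries. Each one is charged an estimated size: once the
// size of a typical article (lOneSize) is known, every further missing article adds lOneSize
// to lMissedSize; missing articles seen before the first present article are counted at the
// end as iUncountedArticles * lOneSize.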
for (FileInfo::Articles::iterator it = pArticles->begin(); it != pArticles->end();)
for (FileInfo::Articles::iterator it = pArticles->begin(); it != pArticles->end(); )
{
if (*it == NULL)
ArticleInfo* pArticle = *it;
if (!pArticle)
{
pArticles->erase(it);
it = pArticles->begin() + i;
iMissedArticles++;
if (lOneSize > 0)
{
lMissedSize += lOneSize;
}
else
{
iUncountedArticles++;
}
}
else
{
lSize += pArticle->GetSize();
if (lOneSize == 0)
{
lOneSize = pArticle->GetSize();
}
it++;
i++;
}
}
if (!pArticles->empty())
if (pArticles->empty())
{
m_FileInfos.push_back(pFileInfo);
pFileInfo->SetNZBInfo(m_pNZBInfo);
m_pNZBInfo->SetSize(m_pNZBInfo->GetSize() + pFileInfo->GetSize());
m_pNZBInfo->SetFileCount(m_pNZBInfo->GetFileCount() + 1);
}
else
{
delete pFileInfo;
delete pFileInfo;
return;
}
lMissedSize += iUncountedArticles * lOneSize;
lSize += lMissedSize;
m_pNZBInfo->GetFileList()->push_back(pFileInfo);
pFileInfo->SetNZBInfo(m_pNZBInfo);
pFileInfo->SetSize(lSize);
pFileInfo->SetRemainingSize(lSize - lMissedSize);
pFileInfo->SetMissedSize(lMissedSize);
pFileInfo->SetTotalArticles(iTotalArticles);
pFileInfo->SetMissedArticles(iMissedArticles);
}
void NZBFile::ParseSubject(FileInfo* pFileInfo, bool TryQuotes)
@@ -286,11 +284,11 @@ void NZBFile::ParseSubject(FileInfo* pFileInfo, bool TryQuotes)
bool NZBFile::HasDuplicateFilenames()
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo1 = *it;
int iDupe = 1;
for (FileInfos::iterator it2 = it + 1; it2 != m_FileInfos.end(); it2++)
for (FileList::iterator it2 = it + 1; it2 != m_pNZBInfo->GetFileList()->end(); it2++)
{
FileInfo* pFileInfo2 = *it2;
if (!strcmp(pFileInfo1->GetFilename(), pFileInfo2->GetFilename()) &&
@@ -306,7 +304,7 @@ bool NZBFile::HasDuplicateFilenames()
// false "duplicate files"-alarm.
// It's OK for just two files to have the same filename; this is
// a common case when posting errors cause bad files to be reposted
if (iDupe > 2 || (iDupe == 2 && m_FileInfos.size() == 2))
if (iDupe > 2 || (iDupe == 2 && m_pNZBInfo->GetFileList()->size() == 2))
{
return true;
}
@@ -318,9 +316,9 @@ bool NZBFile::HasDuplicateFilenames()
/**
* Generate filenames from subjects and check if the parsing of subject was correct
*/
void NZBFile::ProcessFilenames()
void NZBFile::BuildFilenames()
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
ParseSubject(pFileInfo, true);
@@ -328,7 +326,7 @@ void NZBFile::ProcessFilenames()
if (HasDuplicateFilenames())
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
ParseSubject(pFileInfo, false);
@@ -337,27 +335,168 @@ void NZBFile::ProcessFilenames()
if (HasDuplicateFilenames())
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
m_pNZBInfo->SetManyDupeFiles(true);
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetFilename(pFileInfo->GetSubject());
}
}
}
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->MakeValidFilename();
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
bool CompareFileInfo(FileInfo* pFirst, FileInfo* pSecond)
{
return strcmp(pFirst->GetFilename(), pSecond->GetFilename()) > 0;
}
void NZBFile::CalcHashes()
{
TempFileList fileList;
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
fileList.push_back(*it);
}
fileList.sort(CompareFileInfo);
unsigned int iFullContentHash = 0;
unsigned int iFilteredContentHash = 0;
int iUseForFilteredCount = 0;
for (TempFileList::iterator it = fileList.begin(); it != fileList.end(); it++)
{
FileInfo* pFileInfo = *it;
// check file extension
bool bSkip = !pFileInfo->GetParFile() &&
Util::MatchFileExt(pFileInfo->GetFilename(), g_pOptions->GetExtCleanupDisk(), ",;");
for (FileInfo::Articles::iterator it = pFileInfo->GetArticles()->begin(); it != pFileInfo->GetArticles()->end(); it++)
{
ArticleInfo* pArticle = *it;
int iLen = strlen(pArticle->GetMessageID());
iFullContentHash = Util::HashBJ96(pArticle->GetMessageID(), iLen, iFullContentHash);
if (!bSkip)
{
iFilteredContentHash = Util::HashBJ96(pArticle->GetMessageID(), iLen, iFilteredContentHash);
iUseForFilteredCount++;
}
}
}
// if filtered hash is based on less than half of the files - do not use filtered hash at all
if (iUseForFilteredCount < (int)fileList.size() / 2)
{
iFilteredContentHash = 0;
}
m_pNZBInfo->SetFullContentHash(iFullContentHash);
m_pNZBInfo->SetFilteredContentHash(iFilteredContentHash);
}
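// (illustrative note, not in the original source: the full hash covers the message-ids of all
// articles, while the filtered hash ignores non-par files whose extensions match ExtCleanupDisk,
// so two nzb-files differing only in such throw-away files can still be recognized as the same
// content)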
void NZBFile::ProcessFiles()
{
BuildFilenames();
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->MakeValidFilename();
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
bool bParFile = strstr(szLoFileName, ".par2");
m_pNZBInfo->SetFileCount(m_pNZBInfo->GetFileCount() + 1);
m_pNZBInfo->SetTotalArticles(m_pNZBInfo->GetTotalArticles() + pFileInfo->GetTotalArticles());
m_pNZBInfo->SetSize(m_pNZBInfo->GetSize() + pFileInfo->GetSize());
m_pNZBInfo->SetRemainingSize(m_pNZBInfo->GetRemainingSize() + pFileInfo->GetRemainingSize());
m_pNZBInfo->SetFailedSize(m_pNZBInfo->GetFailedSize() + pFileInfo->GetMissedSize());
m_pNZBInfo->SetCurrentFailedSize(m_pNZBInfo->GetFailedSize());
pFileInfo->SetParFile(bParFile);
if (bParFile)
{
m_pNZBInfo->SetParSize(m_pNZBInfo->GetParSize() + pFileInfo->GetSize());
m_pNZBInfo->SetParFailedSize(m_pNZBInfo->GetParFailedSize() + pFileInfo->GetMissedSize());
m_pNZBInfo->SetParCurrentFailedSize(m_pNZBInfo->GetParFailedSize());
m_pNZBInfo->SetRemainingParCount(m_pNZBInfo->GetRemainingParCount() + 1);
}
}
m_pNZBInfo->UpdateMinMaxTime();
CalcHashes();
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
for (FileList::iterator it = m_pNZBInfo->GetFileList()->begin(); it != m_pNZBInfo->GetFileList()->end(); it++)
{
FileInfo* pFileInfo = *it;
g_pDiskState->SaveFile(pFileInfo);
pFileInfo->ClearArticles();
}
}
if (m_szPassword)
{
ReadPassword();
}
}
/**
* A password read using the XML parser may have special characters (such as TAB) stripped.
* This function rereads the password directly from the file to keep all characters intact.
*/
void NZBFile::ReadPassword()
{
FILE* pFile = fopen(m_szFileName, FOPEN_RB);
if (!pFile)
{
return;
}
// obtain file size.
fseek(pFile , 0 , SEEK_END);
int iSize = (int)ftell(pFile);
rewind(pFile);
// read only the first 4KB of the file
// allocate a buffer large enough for that portion
char* buf = (char*)malloc(4096);
iSize = iSize < 4096 ? iSize : 4096;
// copy the beginning of the file into the buffer.
fread(buf, 1, iSize, pFile);
fclose(pFile);
buf[iSize-1] = '\0';
char* szMetaPassword = strstr(buf, "<meta type=\"password\">");
if (szMetaPassword)
{
szMetaPassword += 22; // length of '<meta type="password">'
char* szEnd = strstr(szMetaPassword, "</meta>");
if (szEnd)
{
*szEnd = '\0';
WebUtil::XmlDecode(szMetaPassword);
free(m_szPassword);
m_szPassword = strdup(szMetaPassword);
}
}
free(buf);
}
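ReadPassword() deliberately scans only the first 4KB of the raw file for <meta type="password">...</meta>, so that characters the XML parser would normalize away survive. A hedged, self-contained variant of the same scan with explicit bounds checks (not the project's code; the real implementation additionally applies WebUtil::XmlDecode to the extracted value):

#include <cstdio>
#include <cstring>
#include <string>

// Reads up to the first 4KB of an .nzb file and extracts the raw text between
// <meta type="password"> and </meta>, or returns an empty string if not found.
static std::string ReadRawPassword(const char* path)
{
    FILE* file = fopen(path, "rb");
    if (!file)
    {
        return "";
    }
    char buf[4096 + 1];
    size_t len = fread(buf, 1, sizeof(buf) - 1, file);
    fclose(file);
    buf[len] = '\0';   // terminate after what was actually read

    const char* tag = "<meta type=\"password\">";
    const char* start = strstr(buf, tag);
    if (!start)
    {
        return "";
    }
    start += strlen(tag);
    const char* end = strstr(start, "</meta>");
    if (!end)
    {
        return "";
    }
    return std::string(start, end - start);
}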
#ifdef WIN32
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize, bool bFromBuffer)
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory)
{
CoInitialize(NULL);
@@ -374,21 +513,15 @@ NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const c
doc->put_resolveExternals(VARIANT_FALSE);
doc->put_validateOnParse(VARIANT_FALSE);
doc->put_async(VARIANT_FALSE);
VARIANT_BOOL success;
if (bFromBuffer)
{
success = doc->loadXML(szBuffer);
}
else
{
// filename needs to be properly encoded
char* szURL = (char*)malloc(strlen(szFileName)*3 + 1);
EncodeURL(szFileName, szURL);
debug("url=\"%s\"", szURL);
_variant_t v(szURL);
free(szURL);
success = doc->load(v);
}
// filename needs to be properly encoded
char* szURL = (char*)malloc(strlen(szFileName)*3 + 1);
EncodeURL(szFileName, szURL);
debug("url=\"%s\"", szURL);
_variant_t v(szURL);
free(szURL);
VARIANT_BOOL success = doc->load(v);
if (success == VARIANT_FALSE)
{
_bstr_t r(doc->GetparseError()->reason);
@@ -400,7 +533,7 @@ NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const c
NZBFile* pFile = new NZBFile(szFileName, szCategory);
if (pFile->ParseNZB(doc))
{
pFile->ProcessFilenames();
pFile->ProcessFiles();
}
else
{
@@ -424,7 +557,7 @@ void NZBFile::EncodeURL(const char* szFilename, char* szURL)
else
{
*szURL++ = '%';
int a = ch >> 4;
int a = (unsigned char)ch >> 4;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
a = ch & 0xF;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
@@ -438,10 +571,17 @@ bool NZBFile::ParseNZB(IUnknown* nzb)
MSXML::IXMLDOMDocumentPtr doc = nzb;
MSXML::IXMLDOMNodePtr root = doc->documentElement;
MSXML::IXMLDOMNodePtr node = root->selectSingleNode("/nzb/head/meta[@type='password']");
if (node)
{
_bstr_t password(node->Gettext());
m_szPassword = strdup(password);
}
MSXML::IXMLDOMNodeListPtr fileList = root->selectNodes("/nzb/file");
for (int i = 0; i < fileList->Getlength(); i++)
{
MSXML::IXMLDOMNodePtr node = fileList->Getitem(i);
node = fileList->Getitem(i);
MSXML::IXMLDOMNodePtr attribute = node->Getattributes()->getNamedItem("subject");
if (!attribute) return false;
_bstr_t subject(attribute->Gettext());
@@ -482,16 +622,14 @@ bool NZBFile::ParseNZB(IUnknown* nzb)
int partNumber = atoi(number);
int lsize = atoi(bytes);
ArticleInfo* pArticle = new ArticleInfo();
pArticle->SetPartNumber(partNumber);
pArticle->SetMessageID(szId);
pArticle->SetSize(lsize);
AddArticle(pFileInfo, pArticle);
if (lsize > 0)
{
pFileInfo->SetSize(pFileInfo->GetSize() + lsize);
}
if (partNumber > 0)
{
ArticleInfo* pArticle = new ArticleInfo();
pArticle->SetPartNumber(partNumber);
pArticle->SetMessageID(szId);
pArticle->SetSize(lsize);
AddArticle(pFileInfo, pArticle);
}
}
AddFileInfo(pFileInfo);
@@ -501,7 +639,7 @@ bool NZBFile::ParseNZB(IUnknown* nzb)
#else
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize, bool bFromBuffer)
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory)
{
NZBFile* pFile = new NZBFile(szFileName, szCategory);
@@ -512,21 +650,13 @@ NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const c
SAX_handler.error = reinterpret_cast<errorSAXFunc>(SAX_error);
SAX_handler.getEntity = reinterpret_cast<getEntitySAXFunc>(SAX_getEntity);
int ret = 0;
pFile->m_bIgnoreNextError = false;
if (bFromBuffer)
{
ret = xmlSAXUserParseMemory(&SAX_handler, pFile, szBuffer, iSize);
}
else
{
ret = xmlSAXUserParseFile(&SAX_handler, pFile, szFileName);
}
int ret = xmlSAXUserParseFile(&SAX_handler, pFile, szFileName);
if (ret == 0)
{
pFile->ProcessFilenames();
pFile->ProcessFiles();
}
else
{
@@ -552,6 +682,12 @@ void NZBFile::Parse_StartElement(const char *name, const char **atts)
m_pFileInfo = new FileInfo();
m_pFileInfo->SetFilename(m_szFileName);
if (!atts)
{
warn("Malformed nzb-file, tag <%s> must have attributes", name);
return;
}
for (int i = 0; atts[i]; i += 2)
{
const char* attrname = atts[i];
@@ -570,10 +706,16 @@ void NZBFile::Parse_StartElement(const char *name, const char **atts)
{
if (!m_pFileInfo)
{
// error: bad nzb-file
warn("Malformed nzb-file, tag <segment> without tag <file>");
return;
}
if (!atts)
{
warn("Malformed nzb-file, tag <%s> must have attributes", name);
return;
}
long long lsize = -1;
int partNumber = -1;
@@ -590,10 +732,6 @@ void NZBFile::Parse_StartElement(const char *name, const char **atts)
partNumber = atol(attrvalue);
}
}
if (lsize > 0)
{
m_pFileInfo->SetSize(m_pFileInfo->GetSize() + lsize);
}
if (partNumber > 0)
{
@@ -604,6 +742,15 @@ void NZBFile::Parse_StartElement(const char *name, const char **atts)
AddArticle(m_pFileInfo, m_pArticle);
}
}
else if (!strcmp("meta", name))
{
if (!atts)
{
warn("Malformed nzb-file, tag <%s> must have attributes", name);
return;
}
m_bPassword = atts[0] && atts[1] && !strcmp("type", atts[0]) && !strcmp("password", atts[1]);
}
}
void NZBFile::Parse_EndElement(const char *name)
@@ -641,6 +788,10 @@ void NZBFile::Parse_EndElement(const char *name)
m_pArticle->SetMessageID(ID);
m_pArticle = NULL;
}
else if (!strcmp("meta", name) && m_bPassword)
{
m_szPassword = strdup(m_szTagContent);
}
}
void NZBFile::Parse_Content(const char *buf, int len)
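The SAX path above extends Parse_StartElement/Parse_EndElement with an m_bPassword flag: entering a <meta> whose first attribute pair is type="password" arms the flag, and the matching end tag copies the accumulated tag content into m_szPassword. A condensed libxml2 sketch of that state machine, using only the standard SAX1 callbacks (the struct and function names here are invented for illustration):

#include <cstring>
#include <string>
#include <libxml/parser.h>

// Minimal state machine for the password <meta> handling shown above
// (illustrative only; the real handlers live in NZBFile).
struct PasswordState
{
    bool inPasswordMeta = false;
    std::string content;
    std::string password;
};

static void OnStartElement(void* ctx, const xmlChar* name, const xmlChar** atts)
{
    PasswordState* state = (PasswordState*)ctx;
    if (!strcmp((const char*)name, "meta") && atts && atts[0] && atts[1])
    {
        state->inPasswordMeta = !strcmp((const char*)atts[0], "type") &&
            !strcmp((const char*)atts[1], "password");
        state->content.clear();
    }
}

static void OnCharacters(void* ctx, const xmlChar* ch, int len)
{
    PasswordState* state = (PasswordState*)ctx;
    if (state->inPasswordMeta)
    {
        state->content.append((const char*)ch, len);
    }
}

static void OnEndElement(void* ctx, const xmlChar* name)
{
    PasswordState* state = (PasswordState*)ctx;
    if (state->inPasswordMeta && !strcmp((const char*)name, "meta"))
    {
        state->password = state->content;
        state->inPasswordMeta = false;
    }
}

static std::string ExtractPassword(const char* filename)
{
    xmlSAXHandler handler;
    memset(&handler, 0, sizeof(handler));
    handler.startElement = OnStartElement;
    handler.characters = OnCharacters;
    handler.endElement = OnEndElement;
    PasswordState state;
    xmlSAXUserParseFile(&handler, &state, filename);
    return state.password;
}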


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -27,26 +27,29 @@
#ifndef NZBFILE_H
#define NZBFILE_H
#include <vector>
#include <list>
#include "DownloadInfo.h"
class NZBFile
{
public:
typedef std::vector<FileInfo*> FileInfos;
typedef std::list<FileInfo*> TempFileList;
private:
FileInfos m_FileInfos;
NZBInfo* m_pNZBInfo;
char* m_szFileName;
char* m_szPassword;
NZBFile(const char* szFileName, const char* szCategory);
void AddArticle(FileInfo* pFileInfo, ArticleInfo* pArticleInfo);
void AddFileInfo(FileInfo* pFileInfo);
void ParseSubject(FileInfo* pFileInfo, bool TryQuotes);
void ProcessFilenames();
void BuildFilenames();
void ProcessFiles();
void CalcHashes();
bool HasDuplicateFilenames();
void ReadPassword();
#ifdef WIN32
bool ParseNZB(IUnknown* nzb);
static void EncodeURL(const char* szFilename, char* szURL);
@@ -56,6 +59,7 @@ private:
char* m_szTagContent;
int m_iTagContentLen;
bool m_bIgnoreNextError;
bool m_bPassword;
static void SAX_StartElement(NZBFile* pFile, const char *name, const char **atts);
static void SAX_EndElement(NZBFile* pFile, const char *name);
@@ -66,16 +70,14 @@ private:
void Parse_EndElement(const char *name);
void Parse_Content(const char *buf, int len);
#endif
static NZBFile* Create(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize, bool bFromBuffer);
public:
virtual ~NZBFile();
static NZBFile* CreateFromBuffer(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize);
static NZBFile* CreateFromFile(const char* szFileName, const char* szCategory);
static NZBFile* Create(const char* szFileName, const char* szCategory);
const char* GetFileName() const { return m_szFileName; }
FileInfos* GetFileInfos() { return &m_FileInfos; }
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void DetachFileInfos();
const char* GetPassword() { return m_szPassword; }
void DetachNZBInfo() { m_pNZBInfo = NULL; }
void LogDebugInfo();
};
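With the header change above, the old buffer/file split (CreateFromBuffer/CreateFromFile) collapses into a single Create(szFileName, szCategory), and the parsed password becomes reachable via GetPassword(). A hypothetical call site, assuming the project's headers and that Create() returns NULL on parse failure:

#include "NZBFile.h"

// Hypothetical caller; ownership handling is simplified for illustration.
void AddFromDisk(const char* szNZBPath, const char* szCategory)
{
    NZBFile* pNZBFile = NZBFile::Create(szNZBPath, szCategory);
    if (!pNZBFile)
    {
        return; // malformed nzb-file
    }
    if (pNZBFile->GetPassword())
    {
        // the value from <meta type="password"> is available for post-processing
    }
    // in the real program the parsed file is handed over to the download queue
    // (for example via QueueCoordinator::AddNZBFileToQueue) before cleanup
    delete pNZBFile;
}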


File diff suppressed because it is too large.


@@ -0,0 +1,100 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2014 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef QUEUECOORDINATOR_H
#define QUEUECOORDINATOR_H
#include <deque>
#include <list>
#include "Log.h"
#include "Thread.h"
#include "NZBFile.h"
#include "ArticleDownloader.h"
#include "DownloadInfo.h"
#include "Observer.h"
#include "QueueEditor.h"
#include "NNTPConnection.h"
class QueueCoordinator : public Thread, public Observer, public Debuggable
{
public:
typedef std::list<ArticleDownloader*> ActiveDownloads;
private:
class CoordinatorDownloadQueue : public DownloadQueue
{
private:
QueueCoordinator* m_pOwner;
friend class QueueCoordinator;
public:
virtual bool EditEntry(int ID, EEditAction eAction, int iOffset, const char* szText);
virtual bool EditList(IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, EEditAction eAction, int iOffset, const char* szText);
virtual void Save();
};
private:
CoordinatorDownloadQueue m_DownloadQueue;
ActiveDownloads m_ActiveDownloads;
QueueEditor m_QueueEditor;
bool m_bHasMoreJobs;
int m_iDownloadsLimit;
int m_iServerConfigGeneration;
bool GetNextArticle(DownloadQueue* pDownloadQueue, FileInfo* &pFileInfo, ArticleInfo* &pArticleInfo);
void StartArticleDownload(FileInfo* pFileInfo, ArticleInfo* pArticleInfo, NNTPConnection* pConnection);
void ArticleCompleted(ArticleDownloader* pArticleDownloader);
void DeleteFileInfo(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, bool bCompleted);
void StatFileInfo(FileInfo* pFileInfo, bool bCompleted);
void CheckHealth(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo);
void ResetHangingDownloads();
void AdjustDownloadsLimit();
void Load();
void SavePartialState();
protected:
virtual void LogDebugInfo();
public:
QueueCoordinator();
virtual ~QueueCoordinator();
virtual void Run();
virtual void Stop();
void Update(Subject* Caller, void* Aspect);
// editing queue
void AddNZBFileToQueue(NZBFile* pNZBFile, NZBInfo* pUrlInfo, bool bAddFirst);
void CheckDupeFileInfos(NZBInfo* pNZBInfo);
bool HasMoreJobs() { return m_bHasMoreJobs; }
void DiscardDiskFile(FileInfo* pFileInfo);
bool DeleteQueueEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo);
bool SetQueueEntryCategory(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szCategory);
bool SetQueueEntryName(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szName);
bool MergeQueueEntries(DownloadQueue* pDownloadQueue, NZBInfo* pDestNZBInfo, NZBInfo* pSrcNZBInfo);
bool SplitQueueEntries(DownloadQueue* pDownloadQueue, FileList* pFileList, const char* szName, NZBInfo** pNewNZBInfo);
};
#endif
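CoordinatorDownloadQueue above is a queue subclass that holds a back-pointer to its owning QueueCoordinator and routes EditEntry/EditList/Save back to it. A stripped-down illustration of that owner-delegation pattern, using simplified stand-in classes rather than the real ones:

#include <cstdio>

// Simplified stand-ins for DownloadQueue / QueueCoordinator showing how a
// nested queue class forwards edits to its owner.
class Queue
{
public:
    virtual ~Queue() {}
    virtual bool EditEntry(int id, int action) = 0;
};

class Coordinator
{
private:
    // The nested queue keeps a back-pointer to its owner and delegates edits.
    class OwnedQueue : public Queue
    {
    private:
        Coordinator* m_owner;
        friend class Coordinator;
    public:
        virtual bool EditEntry(int id, int action)
        {
            return m_owner->DoEdit(id, action);
        }
    };

    OwnedQueue m_queue;

    bool DoEdit(int id, int action)
    {
        printf("coordinator edits entry %i (action %i)\n", id, action);
        return true;
    }

public:
    Coordinator() { m_queue.m_owner = this; }
    Queue* GetQueue() { return &m_queue; }
};

// Usage: Coordinator c; c.GetQueue()->EditEntry(1, 0);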

Some files were not shown because too many files have changed in this diff.