Compare commits

...

525 Commits

Author SHA1 Message Date
Andrey Prygunkov
fada56ab2e version 12.0 (2 Jan. 2014) 2014-01-21 20:47:15 +00:00
Andrey Prygunkov
4b228b32f0 OSX-app: restarting via troubleshooting-menu could result in an error message "Could not establish connection to background process" although the process was successfully restarted 2014-01-02 21:06:03 +00:00
Andrey Prygunkov
f66189de6b updated version string (preparing to release 12.0) 2014-01-02 20:32:11 +00:00
Andrey Prygunkov
743805ecd5 updated ChangeLog 2014-01-01 21:27:29 +00:00
Andrey Prygunkov
6c9dcea08c fixed: for rar-files with the old naming scheme, only files with extensions rxx and sxx were deleted during cleanup, leaving the files with extensions txx, uxx, etc. 2013-12-25 22:43:58 +00:00
Andrey Prygunkov
fa1944090d fixed wrong size of the progress wheel when adding a URL in the add dialog 2013-12-24 21:00:17 +00:00
Andrey Prygunkov
04cf428619 fixed: duplicate mode was not saved from history dialog 2013-12-24 18:38:10 +00:00
Andrey Prygunkov
86d6b00886 added new command "Download again" for history items; new action "HistoryRedownload" of RPC-method "editqueue"; for control via the command line: new action "A" of subcommand "H" of command "--edit/-E" 2013-12-21 21:39:49 +00:00
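For illustration, the new history action can be triggered through the RPC interface. A minimal sketch, assuming the v12-era "editqueue" signature with an Offset argument, the default control port and credentials, and a made-up history item ID:

```python
# Sketch: trigger "Download again" (HistoryRedownload) for a history item via XML-RPC.
# Host, port and credentials are the usual NZBGet defaults and may differ in your setup;
# the item ID 42 is hypothetical.
from xmlrpc.client import ServerProxy

server = ServerProxy('http://nzbget:tegbzn6789@localhost:6789/xmlrpc')
# editqueue(Command, Offset, Param, IDs) -- Offset and Param are unused here.
server.editqueue('HistoryRedownload', 0, '', [42])
```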
Andrey Prygunkov
38429d98df fixed a potential problem caused by incorrect use of a library function 2013-12-19 21:28:45 +00:00
Andrey Prygunkov
0c9667fe58 fixed memory leak in RSS feed parser (Posix only) 2013-12-19 20:28:50 +00:00
Andrey Prygunkov
3c6bb7be4c 1) NZBIDs are now generated with more care, avoiding the numbering holes possible with previous versions; 2) fixed: new NZBIDs were generated on each refresh of web-ui (bug introduced in r811); 3) for queue disk state written by versions r811-r920 the NZBIDs are renumbered starting from 1 2013-12-18 20:19:42 +00:00
Andrey Prygunkov
40fa732122 do not close the rss filter dialog if a communication error occurs during the first fetch of the rss feed and the filter was already edited by the user (this allows the filter to be saved) 2013-12-16 21:15:49 +00:00
Andrey Prygunkov
01e2e25bce fixed potential segfault 2013-12-12 23:43:38 +00:00
Andrey Prygunkov
29e916dcdd NZBGet.app for OSX: fixed: one text message was not properly shown 2013-12-10 20:44:58 +00:00
Andrey Prygunkov
1bfa7610ae improved error reporting when creation of temporary output file fails 2013-12-10 20:37:02 +00:00
Andrey Prygunkov
fb94c32bb4 fixed: when deleting a download, disk cleanup should not be performed if all remaining queued files are par2-files, but it sometimes was 2013-12-10 20:25:07 +00:00
Andrey Prygunkov
94ad26d818 fixed: RSS feed filter fields "age" and "size" did not work (bug introduced in r908) 2013-12-03 21:39:18 +00:00
Andrey Prygunkov
f323addc1c added new option "TimeCorrection" to adjust the conversion from system time to local time (solves issues with the scheduler when using a binary compiled for another platform) 2013-11-28 21:03:01 +00:00
Andrey Prygunkov
5559c91c0e do not close the rss filter dialog if a communication error occurs during editing of the rss filter (this at least allows the filter to be saved to the clipboard) 2013-11-28 20:46:32 +00:00
Andrey Prygunkov
6cc5eab94b fixed: some of actions for remote command "--edit/-E" did not work properly (bug introduced in r900) 2013-11-24 20:15:41 +00:00
Andrey Prygunkov
c2a3450c8f refactor: removed many unneeded pointer-not-null-safety-checks 2013-11-24 19:29:52 +00:00
Andrey Prygunkov
01e1ec0794 fixed line endings in one source file 2013-11-24 13:47:36 +00:00
Andrey Prygunkov
ea381cde90 fixed encoding issue for non-ASCII characters in DNZB-Headers 2013-11-18 20:37:20 +00:00
Andrey Prygunkov
26074c67c6 extended RSS filters: 1) added search field "description"; 2) any newznab attribute can now be used as a search field with the prefix "attr-" (for example "attr-genre"); 3) removed search fields "genre" and "rating" (use "attr-genre" and "attr-rating" instead) 2013-11-17 21:57:32 +00:00
Andrey Prygunkov
e2b13fcda5 fixed: for downloads deleted by health-check status was shown as "DELETED-HEALTH" instead of "FAILURE" 2013-11-14 20:17:45 +00:00
Andrey Prygunkov
0130852d9a added scheduler command "FetchFeed"; renamed RPC-method "fetchfeeds" to "fetchfeed" and added parameter "ID" 2013-11-12 20:54:45 +00:00
Andrey Prygunkov
a027af9e84 if unpack fails with a write error (usually because there is not enough space on disk) this is shown as status "Unpack: space" in web-interface; this unpack-status is handled as "success" by duplicate handling (no download of another duplicate); also added new unpack-status "wrong password" (only for rar5-archives); env.var. NZBPP_UNPACKSTATUS has two new possible values: 3 (write error) and 4 (wrong password); updated pp-script "EMail.py" to support the new unpack-statuses 2013-11-08 21:54:44 +00:00
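A pp-script can react to the new values by reading the environment variable. A minimal sketch; the meaning of values 0-2 is assumed from the pre-existing convention and should be checked against the documentation:

```python
# Sketch: classify the unpack result inside a pp-script.
# Values 3 and 4 are the ones added in this commit; the labels for 0-2 are assumed.
import os

status = int(os.environ.get('NZBPP_UNPACKSTATUS', '0'))
labels = {0: 'skipped', 1: 'failure', 2: 'success',
          3: 'write error (disk full?)', 4: 'wrong password (rar5)'}
print('[INFO] Unpack status: %s' % labels.get(status, 'unknown'))
```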
Andrey Prygunkov
ce81b3d4da added status filter buttons to history page 2013-11-07 21:01:44 +00:00
Andrey Prygunkov
96d8ff3cb7 added new scheduler commands "ActivateServer" and "DeactivateServer"; combined options "TaskX.DownloadRate" and "TaskX.Process" into one option "TaskX.Param", also used by two new commands 2013-11-07 20:55:33 +00:00
Andrey Prygunkov
b67c354fdb better handling of obfuscated nzb-files containing multiple files with the same names; removed option "StrictParName" which did not work well with obfuscated files; if more par-files are required for repair, the files with strict names are tried first and then other par-files 2013-11-07 20:14:23 +00:00
Andrey Prygunkov
9a610197ea when a duplicate backup is returned from history to download queue its paused-state is now correctly restored 2013-11-05 21:11:47 +00:00
Andrey Prygunkov
1109c3423c reworked duplicate handling: 1) when a duplicate is added to the queue it is now moved to history as a dupe-backup instead of being paused in the download queue; 2) if a download fails the best duplicate is moved from history back to the queue for download (if there are no duplicates in the queue); this makes it easier to manage the download queue without worrying about properly pausing/unpausing duplicates; 3) badges with duplicate info are not shown in the list of downloads and history anymore; if necessary they can be activated by manually editing setting "dupeBadges" in index.js; 4) when deleting downloads from the queue there are three options now: "move to history", "move to history as duplicate" and "delete without history tracking"; 5) new actions "GroupDupeDelete" and "GroupFinalDelete" in addition to "GroupDelete" in RPC-method "editqueue"; 6) the DUPE-mark for history records can now be set or cleared via the history dialog; 7) new action "HistorySetDupeBackup" in RPC-method "editqueue"; 8) when deleting downloads from the queue the messages about deleted individual files are now printed as "detail" instead of "info"; 9) removed command "Mark as duplicates" from the edit dialog for multiple selected downloads and from RPC-method "editqueue"; the command is not needed anymore since all duplicate properties are now changeable 2013-11-04 20:59:20 +00:00
Andrey Prygunkov
18fcb8d0ad if unpack did not find archive files the par-check is not requested anymore if par-rename was already done 2013-10-30 20:15:07 +00:00
Andrey Prygunkov
a5845ed0d9 improved par-check: if the main par2-file is corrupted and cannot be loaded, other par2-files are downloaded and then used as replacement for the main par2-file 2013-10-30 20:06:18 +00:00
Andrey Prygunkov
3392fa59fe improved support of non-ascii characters in file names on windows (again) 2013-10-25 20:09:25 +00:00
Andrey Prygunkov
95e816572a small restructure in settings order: 1) combined sections "REMOTE CONTROL" and "PERMISSIONS" into one section with name "SECURITY"; 2) moved sections "CATEGORIES" and "RSS FEEDS" higher in the section list 2013-10-24 20:33:40 +00:00
Andrey Prygunkov
3acb3aab9f addition to r894: committed missing changes in another file 2013-10-24 20:24:43 +00:00
Andrey Prygunkov
509860d890 fixed: if unpack fails the created destination directory was not automatically removed (only if option "InterDir" was active) 2013-10-24 20:17:51 +00:00
Andrey Prygunkov
61dcc467ee added support for rar5-format when checking signatures of archives with non-standard file extensions 2013-10-24 20:15:59 +00:00
Andrey Prygunkov
ae6601d9e3 improved handling of non-ascii characters in file names on windows 2013-10-23 19:22:02 +00:00
Andrey Prygunkov
33733af3c5 when option "UnpackCleanupDisk" is active all archive files are now deleted from download directory without relying on output printed by unrar; this solves issues with non-ascii-characters in archive file names on some platforms and especially in combination with rar5 2013-10-23 19:16:20 +00:00
Andrey Prygunkov
da7c0ab7d6 variable substitution now works for all options, including "FeedX.Filter"; if a variable cannot be found no error is printed anymore, because the character sequence used to define a variable reference can be part of a feed filter and therefore should not be reported as an error 2013-10-22 21:34:36 +00:00
Andrey Prygunkov
afd156b51f when option "InterDir" is used the intermediate destination directory names now include a unique number to prevent several downloads with the same name from using the same directory and interfering with each other 2013-10-22 21:17:02 +00:00
Andrey Prygunkov
5d549b7c60 option "InterDir" is now active by default 2013-10-22 21:05:13 +00:00
Andrey Prygunkov
1b5671dc87 increased width of Update-dialog to accommodate 80 character columns; this improves output of console tools like wget relying on standard terminal width 2013-10-22 21:04:22 +00:00
Andrey Prygunkov
87e2893505 for external scripts exec-permissions are now added automatically; this makes installation of pp-scripts and other scripts easier 2013-10-21 20:24:02 +00:00
Andrey Prygunkov
89443e342f if option "ParRename" is disabled (not recommended) the unpacker no longer initiates par-rename; instead the full par-verify is performed; refactor: simplified the code requesting par-rename after unpack 2013-10-20 20:59:49 +00:00
Andrey Prygunkov
de5ed803ed for archives that include par-files for renaming of extracted files, par-renaming now works for extracted sub-directories too 2013-10-19 21:16:53 +00:00
Andrey Prygunkov
528f9a7ec4 added new files to VC project 2013-10-18 20:38:41 +00:00
Andrey Prygunkov
a1f7656fe4 addition to r879: removed check if download has downloaded files 2013-10-17 22:00:37 +00:00
Andrey Prygunkov
a5703a55eb added automatic updates: new button "Check for updates" on the settings tab of web-interface, in section "SYSTEM", initiates a check and shows a dialog allowing installation of the new version; it is possible to choose between stable, testing and development branches; this feature is for end-users using binary packages created and updated by maintainers, who need to write a platform-specific update script; the script is then called by NZBGet when the user clicks the install-button; the script must download and install the new version; for more info visit http://nzbget.sourceforge.net/Packaging 2013-10-17 19:35:43 +00:00
Andrey Prygunkov
1275b85465 option "DiskSpace" now checks space on "InterDir" in addition to "DestDir" 2013-10-17 19:11:46 +00:00
Andrey Prygunkov
52dc2738a1 when removing duplicates from the queue after a successful download, only unpaused items which do not have any downloaded articles are now removed; this prevents deletion of higher-score duplicates 2013-10-17 19:09:40 +00:00
Andrey Prygunkov
7c0f7cbdc2 addition to r877: committed missing changes in header-file 2013-10-17 18:56:26 +00:00
Andrey Prygunkov
c14ef8bd13 fixed: in RSS filters, when using substitution variables referring to matches, the wrong variable could be substituted if the substring search did not start with a star-character 2013-10-17 18:52:16 +00:00
Andrey Prygunkov
ce62ae9f50 fixed: superfluous spaces in an RSS filter caused the filter not to match 2013-10-14 20:33:52 +00:00
Andrey Prygunkov
133347f884 support for rar-archives with non-standard extensions is now limited to file extensions consisting of digits; this avoids extracting rar-archives that have non-rar extensions on purpose (example: .cbr) 2013-10-09 19:47:11 +00:00
Andrey Prygunkov
ca4f56cb04 fixed: detection of par-files inside archives did not work properly 2013-10-09 19:45:35 +00:00
Andrey Prygunkov
f512ae973d history records with failed script status are now shown as "PP-FAILURE" in history list (instead of just "FAILURE") 2013-10-09 19:43:53 +00:00
Andrey Prygunkov
04cceb314e updated descriptions of a few options 2013-10-09 19:38:49 +00:00
Andrey Prygunkov
4138e10788 removed option "Also delete downloaded files" from history delete confirmation dialog; for failed downloads cleanup is now performed if option "DeleteCleanupDisk" is active; in RPC-Method "editqueue" removed actions "HistoryDeleteCleanup" and "HistoryFinalDeleteCleanup" 2013-10-09 19:37:44 +00:00
Andrey Prygunkov
5d11b9aa97 added special handling for files ".AppleDouble" and ".DS_Store" during unpack to avoid problems on NAS having support for AFP protocol (used on Mac OSX) 2013-10-08 19:44:06 +00:00
Andrey Prygunkov
6033c2b3ce fixed: invalid "Offset" passed to RPC-method "editqueue" or command line action "-E/--edit" could crash the program 2013-10-08 19:41:26 +00:00
Andrey Prygunkov
dfb28dc155 history records can now be either permanently deleted or just hidden from the history list; hiding (instead of deleting) is recommended for proper work of duplicate handling; in addition it is now possible to delete downloaded files; RPC-method "editqueue" now has four actions to delete history records: "HistoryDelete", "HistoryDeleteCleanup", "HistoryFinalDelete", "HistoryFinalDeleteCleanup"; action "HistoryDelete", which existed before, now hides records; already hidden records are ignored; button "Old" on the history tab of web-interface renamed to "Hidden"; the badge "DUP", used to distinguish old history records, changed to "hidden" 2013-10-08 19:35:59 +00:00
Andrey Prygunkov
756b29d9ed fixed a potential seg. fault in a commonly used function 2013-10-07 21:15:46 +00:00
Andrey Prygunkov
dc7a3af768 destination directory for option "DestDir" is not checked/created on program start anymore (only when a download starts); this helps when DestDir is mounted to a network drive which is not available on program start 2013-10-07 19:29:36 +00:00
Andrey Prygunkov
baf3a2d915 addition to r863: fixed: option "UrlForce" did not really work 2013-10-07 16:15:38 +00:00
Andrey Prygunkov
25832ab2ea option "HealthCheck" is now set to "Delete" by default; the previously default setting "Pause" did not work well with automatic duplicate handling 2013-10-07 16:14:14 +00:00
Andrey Prygunkov
c95c0401eb added new option "UrlForce" to allow URL-downloads (including fetching of RSS feeds and nzb-files from feeds) even if download is paused; the option is active by default 2013-10-06 18:55:09 +00:00
Andrey Prygunkov
936580a924 moved command "Duplicate properties" from actions-menu into button "Dupe" near "PP-Parameters"; renamed button "PP-Parameters" to "Postprocess" and button "PP-Messages" (visible only during post-processing) to "Log" to free space for new Dupe-button 2013-10-05 13:03:52 +00:00
Andrey Prygunkov
00df147688 fixed: NZBGet.app (OSX): when the option "Show web-interface on start" was active, it took too long to initialize and could result in an error message "Could not establish connection to background process" 2013-10-03 20:35:01 +00:00
Andrey Prygunkov
8273dcfdfc fixed: error reading diskstate when upgrading from r808 or an older version 2013-10-03 13:34:33 +00:00
Andrey Prygunkov
81e2dc3635 improved parsing of multi-episodes from titles when generating dupekeys using item-options "rageid" or "series" and season/episode numbers 2013-10-01 16:52:23 +00:00
Andrey Prygunkov
94611cd80b fixed: when generating dupekeys with item-options "rageid" or "series" the season/episode numbers were not parsed from title if they were not used in the filter string 2013-10-01 16:49:12 +00:00
Andrey Prygunkov
28af81142f added two new item-options for RSS filter rules "Accept" and "Options": option "rageid" generates duplicate key using custom rageid and season/episode numbers; option "series" generates duplicate key using custom series name (any unique string) and season/episode numbers 2013-09-30 19:38:26 +00:00
Andrey Prygunkov
b3fd3ec0ba hiding badges for dupekeys in downloads/history lists if option "DupeCheck" is disabled 2013-09-30 19:31:27 +00:00
Andrey Prygunkov
44518d5b33 fixed: if download failed an existing queued duplicate was not automatically unpaused 2013-09-30 19:29:53 +00:00
Andrey Prygunkov
323e74f50f fixed: the space character was not treated as a separator in word search mode in RSS filters 2013-09-29 10:47:25 +00:00
Andrey Prygunkov
a972e755d1 changes in duplicate handling: 1) when comparing two items, if both have dupekeys only the dupekeys are compared (names not checked); 2) when a new item without a dupekey was added and there was another item with the same name having a dupekey, its dupekey was copied to the new item; this is disabled now; 3) fixed: command "Mark as Good" in history removed all duplicates but should remove only records with status "DUPE" 2013-09-28 21:16:16 +00:00
Andrey Prygunkov
49b6292f7f changes in duplicate handling: removed internal field "DupeMark" showing the item having duplicates; this flag was not always in sync with reality and was used only to show (or not) badges with the duplicate key in web-interface; now badges are always shown for items having non-empty duplicate keys; the badges become red if duplicate mode is set to "force" 2013-09-28 19:53:20 +00:00
Andrey Prygunkov
dd27dc1503 removed command "Unmark Duplicate" from actions menu and from command line syntax; the duplicate mark is removed automatically once the duplicate mode is set to "force"; otherwise manually removing duplicate mark does not make much sense since the titles are checked for duplicates anyway 2013-09-27 20:45:24 +00:00
Andrey Prygunkov
547ed1fd26 fixed compiler warning 2013-09-27 20:25:10 +00:00
Andrey Prygunkov
c387b0d069 duplicate properties (dupekey, dupescore and dupemode) can now be viewed and changed in download-edit-dialog and history-edit-dialog via new command "Duplicate properties" in actions menu 2013-09-27 19:56:16 +00:00
Andrey Prygunkov
ba2af4d84d fixed: in rss feed filter the substitution variable did not work 2013-09-26 21:06:43 +00:00
Andrey Prygunkov
20b7c6a823 do not show dupemode in the build-filter-dialog if the mode is set to the default value "score" 2013-09-26 20:22:28 +00:00
Andrey Prygunkov
b2edab0452 1) refactor: moved feed related classes from unit "DownloadInfo" to new unit "FeedInfo"; 2) rss filter fields "season" and "episode" are now available for all feeds (not restricted to newznab); if the feed does not have the fields, they are automatically parsed from feed title; 3) fields "season" and "episode" can now be used as substitution variables in option "dupekey" of rss filter command "Options" 2013-09-26 19:37:25 +00:00
Andrey Prygunkov
a72dc67268 added new search fields to RSS feed filter: imdbid, priority, dupekey and dupescore 2013-09-26 19:15:59 +00:00
Andrey Prygunkov
11b9745268 added OR-operator and groups (braces) to RSS filter syntax 2013-09-26 19:10:26 +00:00
Andrey Prygunkov
1d1d49a3c9 addition to r755: fixed: passwords containing special characters such as TAB were not properly read from nzb-files metatag 2013-09-24 20:42:32 +00:00
Andrey Prygunkov
31a4e7a22c when adding an nzb-file to the queue via RPC-methods "append" and "appendurl" the actual format of the file is checked; if nzb-format is detected the file is added even if it does not have the .nzb extension 2013-09-24 19:35:54 +00:00
Andrey Prygunkov
dca13f0749 better handling of queued duplicates in command "Mark as bad" 2013-09-24 19:33:35 +00:00
Andrey Prygunkov
d377eee11c when adding an nzb-file in dupemode "score" the file is skipped if dup-history has a success-item with the same or higher score and recent history does not have a success-duplicate 2013-09-24 19:31:56 +00:00
Andrey Prygunkov
196052efed removed prefix "dupe" from badges in download and history lists 2013-09-24 19:28:36 +00:00
Andrey Prygunkov
694c5104fa fixed: incorrect health calculation for downloads consisting of only par-files 2013-09-23 20:26:29 +00:00
Andrey Prygunkov
acbf765851 fixed compiler warning 2013-09-23 20:24:31 +00:00
Andrey Prygunkov
ef06dfb7b3 replaced parameter "NoDupeCheck (yes, no)" with "DupeMode (score, all, force)" when adding nzb-files to queue using RPC-methods "append" and "appendurl"; changed option "nodupe" to "dupemode" in RSS filter commands "Append" and "Options" 2013-09-23 20:18:54 +00:00
Andrey Prygunkov
613432ef2c tuned the algorithm calculating the maximum thread limit to allow more threads for backup server connections (related to option "ThreadLimit" removed in v11); this may sometimes increase speed when backup servers are used 2013-09-22 13:39:43 +00:00
Andrey Prygunkov
512dc87b3f fixed: items were removed from history too soon (option "KeepHistory") 2013-09-22 12:59:30 +00:00
Andrey Prygunkov
480f034301 fixed a few compiler warnings 2013-09-20 21:55:47 +00:00
Andrey Prygunkov
39275ce133 improved smart duplicates features: added functions "Mark as Bad" and "Mark as Good" for history items; when a history item having success-status is marked as bad: 1) it is considered as failure by any duplicate check performed later; 2) if history has duplicates with dupe-status (dupe-backups) they are all moved (as paused) to download queue and one of them (with the highest duplicate score) is unpaused (downloaded); when a history item is marked as good: 1) it is considered as success by any duplicate check performed later; 2) no other duplicates will be added to history as dupe-backups anymore; 3) if history has duplicates with dupe-status (dupe-backups) they are all removed from recent history (moved to dup-history); new actions "HistoryMarkBad" and "HistoryMarkGood" in RPC-method "editqueue"; new actions "B" and "G" of command "--edit/-E" for history items (subcommand "H") 2013-09-20 20:45:07 +00:00
Andrey Prygunkov
c73fa0b42d addition to r817: fixed: error when reading dup-history from disk state 2013-09-19 19:45:31 +00:00
Andrey Prygunkov
3d4ed43337 changed format of generated dupekey for tv shows; now using season and episode exactly as they are passed by rss feed 2013-09-18 20:47:46 +00:00
Andrey Prygunkov
74067fd515 source nzb-files are now deleted when download-item leaves queue and history (option "NzbCleanupDisk") 2013-09-18 20:27:47 +00:00
Andrey Prygunkov
9538771eef refactor: addition to r825: small optimization 2013-09-18 20:09:32 +00:00
Andrey Prygunkov
7ecb968e23 if download was deleted by duplicate check its status in the history is now shown as "DUPE" instead of just "DELETED" 2013-09-18 19:49:59 +00:00
Andrey Prygunkov
c9365732d9 option values in RSS filter command "Options" can now refer to pattern groups in regular expressions 2013-09-17 19:33:56 +00:00
Andrey Prygunkov
d41c13ac29 fixed: if duplicate check has prevented adding file to queue the unneeded disk state files were not deleted from queue directory 2013-09-17 19:18:33 +00:00
Andrey Prygunkov
1b2fa2b2e8 extended word/substring search in RSS feed filters with pattern character "#" which matches one digit 2013-09-17 19:06:11 +00:00
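The "#" wildcard matches exactly one digit in a word/substring pattern. The semantics are easiest to see as a translation to a regular expression; the sketch below is only an illustration (the real matching is implemented inside NZBGet, and "*" handling follows the usual wildcard convention):

```python
# Illustration only: how a filter pattern using "#" (one digit) and "*" (any run of
# characters) roughly maps onto a regular expression.
import re

def pattern_to_regex(pattern):
    escaped = re.escape(pattern)
    return escaped.replace(r'\#', r'\d').replace(r'\*', '.*')

assert re.search(pattern_to_regex('S##E##'), 'Some.Show.S04E12.720p')
assert not re.search(pattern_to_regex('S##E##'), 'Some.Show.SxxExx.720p')
```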
Andrey Prygunkov
b9a9113abe addition to r820: fixed: when old disk state was converted the content hashes were not initialized (bug introduced in r816) 2013-09-17 19:02:59 +00:00
Andrey Prygunkov
1dd9dbec6c option values in RSS filter command "Options" can now refer to pattern groups in search string 2013-09-16 20:18:58 +00:00
Andrey Prygunkov
84efe5447b fixed compiler warning 2013-09-16 19:51:27 +00:00
Andrey Prygunkov
61bab55d11 fixed: when old disk state was converted the content hashes were not initialized (bug introduced in r816) 2013-09-16 19:49:45 +00:00
Andrey Prygunkov
85d05153f7 changed syntax of option "dupekey" of command "Option" in RSS filter: option "dupekey" now sets duplicate key (overrides existing key); option "dupekey+" adds to existing duplicate key 2013-09-15 14:02:35 +00:00
Andrey Prygunkov
f9bc316c98 rss filter command "Options" can now increase priority and dupe score using new option names "priority+" and "dupescore+" 2013-09-15 08:50:06 +00:00
Andrey Prygunkov
c4adc8d9be improved detection of the same nzb-files acquired from different sources (nzb-sites): 1) neither the order of individual files nor the order of articles in nzb-files matters; 2) individual files having extensions listed in option "ExtCleanupDisk" are now excluded from content comparison (unless they are par2-files, which are never excluded) 2013-09-14 21:20:31 +00:00
Andrey Prygunkov
169719c62d improved duplicate check: when a history item expires (as defined by option "KeepHistory") and the duplicate check is active (option "DupeCheck") the item is not completely deleted from history; instead the amount of stored data is reduced to the minimum required for the duplicate check (about 150 bytes vs 2000 bytes for a full history item); such old history items are not shown in web-interface by default (to avoid transferring a large amount of history items); new button "Old" in web-interface to show old history items; the items are marked with the badge "DUP" 2013-09-13 20:13:09 +00:00
Andrey Prygunkov
a509a491af improved duplicate check: 1) added check for nzb-file content to avoid queueing of exactly same files (even with different names); 2) when nzb-file is added with option "Disable duplicate check" the option now works only during adding, it does not exclude the file from later checks (when adding other files) 2013-09-12 15:54:14 +00:00
Andrey Prygunkov
aed8e26062 extended add-dialog with options "Add paused" and "Disable duplicate check" 2013-09-11 20:21:49 +00:00
Andrey Prygunkov
cec414126d improved smart duplicates: nzb-files now have field "DupeScore" which can be set from the rss filter using command "Options"; items with higher duplicate scores are downloaded even if the history already has a successfully downloaded item (with a lower score); changed the syntax of rss filter command "Options" to allow use of more options (and to easily add new options in the future); new options "DupeScore", "DupeKey" and "NoDupe" to fine-tune duplicate handling; updated description of option "DupeCheck" 2013-09-11 20:18:52 +00:00
Andrey Prygunkov
6bf96ab808 addition to r811: fixed: items added from feed view dialog were not marked as duplicates 2013-09-09 20:52:51 +00:00
Andrey Prygunkov
d8d9f72985 added smart duplicates feature: similar nzb-files are automatically marked as duplicates; queue items can also be manually marked as duplicates using new commands in the multi-edit-dialog (action menu); the duplicate-mark can be manually removed using a new command in the multi-edit-dialog and edit-dialog (action menu); duplicates are added in paused state; if the download of the first duplicate fails, another duplicate is unpaused; if the download succeeds all remaining duplicates are removed from the queue; an item marked as duplicate has field "DupeKey" which has the same value for all duplicates of the title; this field is shown in web-interface near the nzb-name (in a short form to save screen space); new actions "GroupMarkDupe" and "GroupUnMarkDupe" of RPC-command "editqueue" to manually mark/unmark duplicates; new subcommands "DM" and "DU" of command "--edit/-E" to manually mark/unmark duplicates; if a url-download results in a file without nzb-extension a history item with status "Scan: skipped" is created to inform the user about this fact; RPC-commands "listgroups", "postqueue" and "history" now return more info about the nzb-item (many new fields); removed option "MergeNzb" because it conflicts with duplicate handling; items can be merged manually if necessary 2013-09-09 20:31:17 +00:00
Andrey Prygunkov
cecb2e8f4c failed article downloads are now logged as "detail" instead of "warning" to reduce the number of warnings for downloads removed from the server (DMCA); one warning per file is printed with a summary of the number of failed downloads for that file 2013-09-08 20:34:53 +00:00
Andrey Prygunkov
d9ae28d3ed fixed compiler errors when configured with switch "--disable-parcheck" 2013-09-03 20:51:51 +00:00
Andrey Prygunkov
761078625e addition to r807: corrected makefile 2013-09-01 09:28:06 +00:00
Andrey Prygunkov
be37a75928 set svn keywords 2013-08-31 21:14:39 +00:00
Andrey Prygunkov
884e9fb3c9 created NZBGet.app - NZBGet is now a user friendly Mac OSX application with easy installation and seamless integration into OS UI: works in background, accessible via menubar icon 2013-08-31 21:05:31 +00:00
Andrey Prygunkov
8821502a81 updated support for DNZB-Headers: removed "X-DNZB-Link", added "X-DNZB-Details" 2013-08-29 19:56:29 +00:00
Andrey Prygunkov
f66c012df6 pp-scripts can now set post-processing parameters by printing command "[NZB] NZBPR_varname=value"; this allows scripts which are executed earlier to pass data to scripts executed later 2013-08-28 22:27:50 +00:00
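In practice one script prints the assignment to stdout and a script that runs later finds it in its environment. A minimal sketch; the parameter name "myvar" is invented for the example:

```python
# Sketch: a pp-script that runs earlier publishes a parameter for later scripts.
# "[NZB] NZBPR_..." is the command introduced in this commit; "myvar" is a made-up name.
print('[NZB] NZBPR_myvar=some value')
# A script executed later would then read it from its environment:
#   os.environ.get('NZBPR_myvar')  ->  'some value'
```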
Andrey Prygunkov
baf996fc06 new env-var "NZBPP_FINALDIR" passed to pp-scripts 2013-08-28 21:59:02 +00:00
Andrey Prygunkov
e56a1d3274 refactor: small code rework in passing parameters to post-processing scripts 2013-08-28 20:11:05 +00:00
Andrey Prygunkov
b04af33fb5 addition to r794: removed mistakenly added pp-parameter "NZBPP_DELETED"; since post-processing is not performed for deleted downloads, this parameter has no use 2013-08-28 19:17:41 +00:00
Andrey Prygunkov
1f53d32a62 new option "ParRename" to force par-renaming as a first post-processing step (active by default); this saves an unpack attempt and is even more useful if unpack is disabled 2013-08-28 15:08:37 +00:00
Andrey Prygunkov
534aeb3d07 addition to r795: renamed option "ApprovedIP" to "AuthorizedIP" 2013-08-26 22:07:43 +00:00
Andrey Prygunkov
a3693d0a45 refactor: fixed compiler warnings regarding "printf" 2013-08-26 16:05:29 +00:00
Andrey Prygunkov
967a6bd4a4 fixed potential buffer overflow in remote client 2013-08-26 10:02:45 +00:00
Andrey Prygunkov
c589c9b9ec addition to r795: removed a (wrong) tip about the router from the option description 2013-08-24 18:56:47 +00:00
Andrey Prygunkov
d10ad4835b added new option "ApprovedIP" to set the list of IP-addresses which may connect without authorization 2013-08-23 22:05:29 +00:00
Andrey Prygunkov
38a273b195 added collecting of server usage statistical data for each download: number of successful and failed article downloads per news server; new page in history dialog shows collected statistics; new fields in RPC-method "history": ServerStats (array), TotalArticles, SuccessArticles, FailedArticles; new env. vars passed to pp-scripts: NZBPP_TOTALARTICLES, NZBPP_SUCCESSARTICLES, NZBPP_FAILEDARTICLES and per used news server: NZBPP_SERVERX_SUCCESSARTICLES, NZBPP_SERVERX_FAILEDARTICLES; also new env.vars: DELETED, HEALTHDELETED 2013-08-16 21:53:32 +00:00
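A pp-script can aggregate these counters by probing its environment for consecutive server numbers. A minimal sketch using only the variable names listed in the commit; the server numbering is assumed to follow the ServerX option numbering:

```python
# Sketch: print per-news-server article statistics inside a pp-script.
import os

total = int(os.environ.get('NZBPP_TOTALARTICLES', '0'))
n = 1
while ('NZBPP_SERVER%i_SUCCESSARTICLES' % n) in os.environ:
    ok = int(os.environ['NZBPP_SERVER%i_SUCCESSARTICLES' % n])
    failed = int(os.environ['NZBPP_SERVER%i_FAILEDARTICLES' % n])
    print('[INFO] Server %i: %i ok, %i failed (of %i articles total)' % (n, ok, failed, total))
    n += 1
```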
Andrey Prygunkov
9618f46188 fixed scrolling to the top of the page that happened when clicking on items in downloads/history lists and on action-buttons in edit-download and history dialogs 2013-08-15 17:21:01 +00:00
Andrey Prygunkov
2562b384bc addition to r779: fixed: health was not shown for items with status "PP-QUEUED" 2013-08-14 22:00:39 +00:00
Andrey Prygunkov
bc8d133b69 post-processing progress label is now automatically trimmed if it doesn't fit into one line; this avoids layout breaking if the text is too long 2013-08-14 21:10:02 +00:00
Andrey Prygunkov
beb9967ad0 addition to r775: fixed: the confirmation when leaving the settings page could sometimes be shown even if no changes were made 2013-08-14 21:07:00 +00:00
Andrey Prygunkov
433baf0923 added support for http redirects when fetching URLs 2013-08-13 17:22:45 +00:00
Andrey Prygunkov
423da8b785 addition to r782: fixed: when adding files to queue the info about category and priority could get lost for some files 2013-08-12 18:39:17 +00:00
Andrey Prygunkov
f4708d2b1a fixed: final directory was not properly shown (Windows only) (bug introduced in r765) 2013-08-12 18:35:28 +00:00
Andrey Prygunkov
b00b7f7c31 critical health was sometimes not calculated properly on certain CPU architectures (mipsel) 2013-08-11 14:58:19 +00:00
Andrey Prygunkov
ec4110cb2c 1) when a duplicate file was detected in collection it was automatically deleted (if option DupeCheck is active) but the total size of collection was not updated; 2) when deleting individual files the total count of files in collection was not updated 2013-08-10 20:30:33 +00:00
Andrey Prygunkov
b2f02c7fa6 addition to r779: added calculation of critical health for old items in download queue (added to queue with program versions older than r779) 2013-08-10 20:03:14 +00:00
Andrey Prygunkov
aeb561c240 added automatic par-renaming of extracted files if archive includes par-files 2013-08-10 09:04:33 +00:00
Andrey Prygunkov
97a6abca0e fixed: when multiple nzb-files were added via URL (including rss) at the same time the info about category and priority could get lost for some of the files 2013-08-09 20:37:41 +00:00
Andrey Prygunkov
5aa3a29288 addition to r779: added missing include to avoid compilation error on some systems 2013-08-08 21:46:42 +00:00
Andrey Prygunkov
bc49e7c48e all table columns except "Name" now have fixed widths to avoid annoying layout changes especially during post-processing when long status messages are displayed in the name-column 2013-08-08 21:23:29 +00:00
Andrey Prygunkov
9ba10446e9 added download health monitoring: health indicates download status, whether the file is damaged and how much; new option "HealthCheck" to define what to do with bad downloads (pause, delete, none); par-check is now automatically started for downloads having health below 100%, this works independently of unpack (even if unpack is disabled); for downloads having health less than critical health no par-check is performed (it would fail); new fields "Health" and "CriticalHealth" are returned by RPC-Method "listgroups"; new fields "Health", "CriticalHealth", "Deleted" and "HealthDeleted" are returned by RPC-Method "history"; new parameters "NZBPP_HEALTH" and "NZBPP_CRITICALHEALTH" are passed to pp-scripts; manually deleted downloads now have history status "deleted" (instead of "unknown") 2013-08-08 21:09:36 +00:00
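Pp-scripts can use the two new parameters to decide whether a repair attempt is worthwhile. A minimal sketch, assuming the values are integers in per-mille (1000 = 100%); this scaling should be verified against the documentation:

```python
# Sketch: compare download health against critical health in a pp-script.
# NZBPP_HEALTH / NZBPP_CRITICALHEALTH come from this commit; the per-mille scaling
# (1000 == 100%) is an assumption.
import os

health = int(os.environ.get('NZBPP_HEALTH', '1000'))
critical = int(os.environ.get('NZBPP_CRITICALHEALTH', '0'))
if health < 1000:
    print('[WARNING] Download is damaged: health %.1f%%' % (health / 10.0))
if health < critical:
    print('[ERROR] Health is below critical health, repair is not possible')
```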
Andrey Prygunkov
a4c686876f added filter buttons to messages tab (info, warning, etc.); also changed the color of filter buttons in feed view and feed filter dialogs (from blue to black) 2013-08-07 20:09:43 +00:00
Andrey Prygunkov
f31ba7dea3 small correction in help text 2013-08-05 20:49:28 +00:00
Andrey Prygunkov
897946c1ce added fields "rageid", "season", "episode" and command "=" to rss filters 2013-08-05 18:41:02 +00:00
Andrey Prygunkov
802266e3aa added confirmation dialog when leaving the settings page if there are unsaved changes 2013-08-05 18:09:10 +00:00
Andrey Prygunkov
d9f89f7457 added menu "View" to settings page which allows switching to "Compact Mode", in which option descriptions are hidden 2013-08-05 18:05:04 +00:00
Andrey Prygunkov
b871d84379 added support for fields "rating" and "genre" in rss filters 2013-08-04 21:41:50 +00:00
Andrey Prygunkov
0375309060 fixed: rss filter commands "<=" and ">=" did not work 2013-08-04 15:35:07 +00:00
Andrey Prygunkov
a5ca653df8 fixed: crash on certain invalid rss filter strings 2013-08-03 21:06:35 +00:00
Andrey Prygunkov
ec194a15fb fixed: colons in regular expressions could cause incorrect parsing of rss filters 2013-08-03 09:56:40 +00:00
Andrey Prygunkov
eaf5d71b40 small changes in button captions: edit dialogs called from settings page (choose script, choose order, build rss filter) now have buttons "Discard/Apply" instead of "Close/Save"; in all other dialogs button "Close" renamed to "Cancel" unless it was the only button in dialog 2013-08-02 21:03:58 +00:00
Andrey Prygunkov
7a9ee279ed reversed the order of priorities in comboboxes in dialogs: the highest priority at the top, the lowest at the bottom 2013-08-02 16:46:24 +00:00
Andrey Prygunkov
827acdadea 1) added multiline filters for RSS feeds; new dialog to build filters in web-interface; 2) refactor: the length of configuration option values is now unlimited (previously limited to 1000 characters; unlimited length is needed for long feed filters) 2013-08-02 15:54:11 +00:00
Andrey Prygunkov
ef99b2057a addition to r765: fixed small memory leak 2013-07-29 15:58:59 +00:00
Andrey Prygunkov
c938714b70 pp-scripts which move files can now inform the program about new location by printing text "[NZB] FINALDIR=/path/to/files"; the final path is then shown in history dialog instead of download path 2013-07-28 21:27:12 +00:00
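A pp-script that relocates the files would report the new location by printing the command to stdout. A minimal sketch; the target path is made up, and NZBPP_DIRECTORY is assumed to hold the current download directory as in other pp-scripts of that era:

```python
# Sketch: move the finished download and tell NZBGet where it ended up.
# "[NZB] FINALDIR=..." is the command from this commit; paths are illustrative.
import os
import shutil

src = os.environ.get('NZBPP_DIRECTORY', '')
dst = os.path.join('/mnt/media/movies', os.path.basename(src))  # made-up target

if src and os.path.isdir(src):
    shutil.move(src, dst)
    print('[NZB] FINALDIR=%s' % dst)
```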
Andrey Prygunkov
21e3dd30fd fixed: URLs for nzb-files were not properly read from RSS feeds of certain sites 2013-07-28 17:47:56 +00:00
Andrey Prygunkov
b271ab4721 addition to r757: fixed: statistic dialog had a scroll bar 2013-07-27 21:57:40 +00:00
Andrey Prygunkov
3abe382f44 program can now be stopped via web-interface: new button "shutdown" in section "SYSTEM" 2013-07-27 16:19:27 +00:00
Andrey Prygunkov
88a6b702d2 updated VC project file 2013-07-26 21:34:52 +00:00
Andrey Prygunkov
497d1af8bf fixed: download could hang if active news servers with 0 connections were defined (ServerX.Active=yes, ServerX.Connections=0) (bug introduced in r743) 2013-07-26 20:38:57 +00:00
Andrey Prygunkov
cc4b6acd14 options "DeleteCleanupDisk" and "NzbCleanupDisk" are now active by default (in the example config file) 2013-07-26 20:14:32 +00:00
Andrey Prygunkov
1714e2331c combined rss filter commands @ and " into one command to make filters more intuitive 2013-07-26 20:09:36 +00:00
Andrey Prygunkov
4e419ec16d small change in css: slightly reduced the max height of modal dialogs to better work on notebooks 2013-07-25 20:13:26 +00:00
Andrey Prygunkov
5e0f214a8f fixed: malformed nzb-file could cause a memory leak 2013-07-25 19:20:06 +00:00
Andrey Prygunkov
da1727e5e4 added support for metatag "password" in nzb-files 2013-07-25 18:32:07 +00:00
Andrey Prygunkov
101be2eeb1 added confirmation when deleting a download or history item from the edit-dialog 2013-07-25 18:25:13 +00:00
Andrey Prygunkov
e69015204a when saving/restoring the feed status (last update time) the feeds are identified by url and filter (previously only by url) 2013-07-25 18:22:56 +00:00
Andrey Prygunkov
1b203e3292 fully implemented feed filters 2013-07-24 21:32:15 +00:00
Andrey Prygunkov
1ad8bd212c refactor: small rework of NZBParameterList-class 2013-07-24 21:09:56 +00:00
Andrey Prygunkov
3711f30d01 new logo (thanks dogzipp for the logo) 2013-07-24 21:01:27 +00:00
Andrey Prygunkov
85d59d25df 1) DirectNZB headers X-DNZB-MoreInfo, X-DNZB-Report and X-DNZB-Link are now processed when downloading URLs and the links "More Info", "External Link" and "Report This NZB" are shown in download-edit-dialog and in history-dialog; 2) combined all footer buttons into one button "Actions" with menu: in download-edit-dialog: "Pause/Resume", "Delete" and "Cancel Post-Processing", in history-dialog: "Delete", "Post-Process Again" and "Download Remaining Files (Return to Queue)" 2013-07-23 21:21:14 +00:00
Andrey Prygunkov
6d7f55a435 added missing svn-keywords 2013-07-22 20:39:49 +00:00
Andrey Prygunkov
c22ca18a82 added filtering for RSS feeds via new option "FeedX.Filter" (not all filter commands are implemented yet but this is mentioned in the option description) 2013-07-22 20:38:21 +00:00
Andrey Prygunkov
ec48959600 changed the way how option "Unpack" works: instead of enabling/disabling the unpacker as a whole, it now defines the initial value of post-processing parameter "Unpack" for nzb-file when it is added to queue; this makes it now possible to disable Unpack globally but still enable it for selected nzb-files; new option "CategoryX.Unpack" to set unpack on a per category basis 2013-07-21 20:44:13 +00:00
Andrey Prygunkov
67634c4a71 fixed compilation error on Linux (bug introduced in r743) 2013-07-20 14:05:21 +00:00
Andrey Prygunkov
c31d38a562 fixed: certain characters printed by pp-scripts could crash the program 2013-07-20 14:02:26 +00:00
Andrey Prygunkov
6b0499b82e news servers can now be temporarily disabled via speed limit dialog without reloading of the program; new option "ServerX.Active" to disable servers via settings; new option "ServerX.Name" to use for logging and in UI 2013-07-20 07:15:21 +00:00
Andrey Prygunkov
046364283f fixed: choosing local files didn't work in Opera 2013-07-18 19:21:38 +00:00
Andrey Prygunkov
85880f9bd1 in RPC-Method "appendurl" parameter "addtop" adds nzb to the top of the main download queue (not only to the top of the URL queue) 2013-07-17 19:12:23 +00:00
Andrey Prygunkov
5fd436e5c5 added switch "Titles/Filenames" to feed view dialog for rss feeds with "bad" titles 2013-07-17 19:11:35 +00:00
Andrey Prygunkov
01533cbf9f better parsing of rss feeds of certain nzb-sites (now using enclosure-tag if possible) (Windows only) 2013-07-17 19:02:41 +00:00
Andrey Prygunkov
f5c8276fdc addition to r734: fixed possible matching bug 2013-07-16 22:18:09 +00:00
Andrey Prygunkov
05f2b81025 better parsing of rss feeds of certain nzb-sites (now using enclosure-tag if possible) (POSIX only) 2013-07-16 21:41:53 +00:00
Andrey Prygunkov
9dfd6cecad added filter buttons (all, new, fetched, backlog) to feed view dialog 2013-07-16 21:11:08 +00:00
Andrey Prygunkov
2febf837e5 fixed: restoring of settings didn't work for multi-sections (servers, categories, etc.) if they were empty 2013-07-16 18:47:52 +00:00
Andrey Prygunkov
ac954bba11 refactor: small speed/memory optimization in aliases support for categories 2013-07-16 18:46:41 +00:00
Andrey Prygunkov
2bda4fef5b made alias-matching case-insensitive 2013-07-15 22:27:14 +00:00
Andrey Prygunkov
5a815592dc added new option "CategoryX.Aliases" to configure category name matching with nzb-sites; especially useful with rss feeds 2013-07-15 21:28:55 +00:00
Andrey Prygunkov
3f4c6ce144 added more debug logging to feed manager 2013-07-15 21:28:26 +00:00
Andrey Prygunkov
19e0c53d1e destination directory for option "CategoryX.DestDir" is not checked/created on program start anymore (only when a download starts for that category); this helps when certain categories are configured for external disks, which are not always connected 2013-07-15 20:07:50 +00:00
Andrey Prygunkov
95963b2289 download queue is now saved in a more secure way to avoid potential loss of queue if the program crashes during saving of queue 2013-07-15 19:53:01 +00:00
Andrey Prygunkov
4cd21cad9c fixed: crash after downloading a URL (happened only on certain systems) 2013-07-15 19:13:36 +00:00
Andrey Prygunkov
fcf1d7d502 improved compatibility with certain nzb-sites when fetching nzb-files (original nzb-filenames were sometimes not detected properly) 2013-07-14 21:27:13 +00:00
Andrey Prygunkov
e92d04771d toolbar button "Rss Feeds" is now visible only if there are feeds defined in settings 2013-07-14 18:39:07 +00:00
Andrey Prygunkov
ce43190ca6 fixed: crash after executing of remote commands (bug introduced in r722) 2013-07-14 15:47:49 +00:00
Andrey Prygunkov
284dbbad24 addition to r722: added missing file to Makefile 2013-07-14 07:08:10 +00:00
Andrey Prygunkov
1e115a5eab addition to r722: added missing file 2013-07-14 06:39:36 +00:00
Andrey Prygunkov
f18a355469 added rss feeds support: 1) new options "FeedX.Name", "FeedX.URL", "FeedX.Interval", "FeedX.PauseNzb", "FeedX.Category", "FeedX.Priority" (section "Rss Feeds"); 2) new option "FeedHistory" (section "Download Queue"); 3) Button "Preview Feed" on settings tab near each feed definition; 4) new toolbar button "Feeds" on downloads tab with menu to view feeds or fetch new nzbs from all feeds (the button is visible only if there are feeds defined in settings); 5) new dialog to see feed content showing status of each item (new, fetched, backlog) with ability to manually fetch selected items 2013-07-13 22:00:49 +00:00
Andrey Prygunkov
8ba95bb82a updated version string to 12.0-testing 2013-07-12 21:16:55 +00:00
Andrey Prygunkov
ee5a2a320e updated version string (preparing to release 11.0) 2013-07-01 19:18:41 +00:00
Andrey Prygunkov
738fd3da58 updated ChangeLog 2013-07-01 18:48:07 +00:00
Andrey Prygunkov
decc08934c removed a superfluous page scroll after clicking on option in web-interface settings 2013-06-26 20:52:10 +00:00
Andrey Prygunkov
5f5b7f92cf improved configure-script: defining of symbol "FILE_OFFSET_BITS=64", required on some systems, is not necessary anymore 2013-06-20 18:18:05 +00:00
Andrey Prygunkov
20fa280171 improved error message in web-interface displayed when the template configuration file could not be found 2013-06-19 18:24:50 +00:00
Andrey Prygunkov
e5fa2ef750 fixed: support for split files (.001, .002, etc.) was broken 2013-06-18 21:56:23 +00:00
Andrey Prygunkov
bd31e25757 configure-script now defines "SIGCHLD_HANDLER" by default on all systems including BSD; this eliminates the need of configure-parameter "--enable-sigchld-handler" on 64-Bit BSD; the trade-off: 32-Bit BSD now requires "--disable-sigchld-handler" 2013-06-18 19:06:11 +00:00
Andrey Prygunkov
5b3113d96b 1) when an nzb-file is added via web-interface or via remote call the file is now put into the incoming nzb-directory (option "NzbDir") and then scanned; this has two advantages over the old behavior where the file was parsed directly in memory: the file serves as a backup for troubleshooting and the file is processed by the nzbprocess-script (if defined in option "NzbProcess"), making the pre-processing much easier; 2) new env-var parameters are passed to NzbProcess-script: NZBNP_NZBNAME, NZBNP_CATEGORY, NZBNP_PRIORITY, NZBNP_TOP, NZBNP_PAUSED; 3) new commands for use in NzbProcess-scripts: "[NZB] TOP=1" to add the nzb to the top of the queue and "[NZB] PAUSED=1" to add the nzb-file in paused state 2013-06-17 20:39:46 +00:00
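A minimal NzbProcess-style script built on the new variables and commands could look like the sketch below; the category name "tv" is invented for the example:

```python
# Sketch of an NzbProcess-script using the env-vars and stdout commands from this commit.
import os

name = os.environ.get('NZBNP_NZBNAME', '')
category = os.environ.get('NZBNP_CATEGORY', '')

if category == 'tv':
    print('[NZB] TOP=1')      # push this nzb to the top of the queue
else:
    print('[NZB] PAUSED=1')   # add it in paused state
print('[INFO] pre-processed %s' % name)
```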
Andrey Prygunkov
84b4f7695b when a nzb-file whose name ends with ".queued" is added via web-interface the ".queued"-part is automatically removed 2013-06-16 13:00:57 +00:00
Andrey Prygunkov
fc8ea3bcd0 fixed: if an error occurs when a RPC-client or web-browser communicates with nzbget the program could crash 2013-06-15 15:07:11 +00:00
Andrey Prygunkov
9051a4df4d fixed: if the last file of collection was detected as duplicate after the download of the first article the file was deleted from queue (that's OK) but the post-processing was not triggered (that's a bug) 2013-06-13 20:51:38 +00:00
Andrey Prygunkov
a7b42b6c97 fixed: pp-scripts "Logger.py" and "EMail.py" failed trying to get the post-processing log from nzbget if option "ControlUsername" was set to a non-default value 2013-06-12 20:25:06 +00:00
Andrey Prygunkov
f7675b1e46 addition to r693: fixed: unicode characters were not properly encoded in JSON-RPC response 2013-06-12 20:14:45 +00:00
Andrey Prygunkov
db1117d892 new option "ControlUsername" to define login user name (if you don't like default username "nzbget") 2013-06-05 21:09:28 +00:00
Andrey Prygunkov
4b14e19229 removed option "RenameBroken"; it caused problems in par-checker (the option existed since early program versions before the par-check was added) 2013-06-04 21:04:00 +00:00
Andrey Prygunkov
606021fb8a removed option "AppendNzbDir"; if it was disabled that caused problems in par-checker and unpacker; the option is now assumed always active 2013-06-04 20:56:17 +00:00
Andrey Prygunkov
b15499c1dd removed option "ProcessLogKind"; scripts should use prefixes ([INFO], [DETAIL], etc); messages printed without prefixes are added as [INFO] 2013-06-03 20:56:56 +00:00
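With "ProcessLogKind" removed, a script controls the log level of each line itself by prefixing its output. A short sketch; the prefixes shown are the ones used by the bundled scripts, and the full accepted set is in the documentation:

```python
# Sketch: script output is classified by the prefix of each printed line.
print('[DETAIL] verbose message, shown only at higher log levels')
print('[INFO] normal progress message')
print('[WARNING] something looks suspicious')
print('a line without a prefix is added as [INFO]')
```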
Andrey Prygunkov
950588cb65 addition to r698: if options of the section "Terminal" were missing from the config file, they were written with empty values causing warnings on program start 2013-06-03 20:35:18 +00:00
Andrey Prygunkov
7991c06543 fixed: in the option "NzbAddedProcess" the env-var parameter with nzb-name was passed in "NZBNA_NAME", should be "NZBNA_NZBNAME"; the old parameter name "NZBNA_NAME" is still supported for compatibility 2013-06-02 21:11:09 +00:00
Andrey Prygunkov
b335c4ca05 updated svn log URL 2013-06-02 21:10:14 +00:00
Andrey Prygunkov
cf3773dd28 added functions to backup and restore settings from web-interface; when restoring it's possible to choose what sections to restore (for example only news servers settings or only settings of a certain pp-script) or restore the whole configuration 2013-06-02 19:20:31 +00:00
Andrey Prygunkov
2a3740e49f added check for directory existence in pp-script <Logger> to avoid script failure if the directory was deleted by one of the previous scripts 2013-05-28 20:22:59 +00:00
Andrey Prygunkov
bcbd30ff6e addition to r694: fixed: a directory check/creation could fail if the directory was just created in another thread 2013-05-26 21:10:08 +00:00
Andrey Prygunkov
571ab9602f 1) additional comment to r693 (ArticleDownloader.cpp, line 632): fixed: the program could hang if the destination file could not be created; 2) improved thread synchronisation to avoid (short-time) locking of the program during creation of destination files 2013-05-26 20:42:15 +00:00
Andrey Prygunkov
cfab6a3bb6 more detailed error message if a directory could not be created (<DstDir>, <NzbDir>, etc.); the message includes error text reported by OS such as <permission denied> or similar 2013-05-26 13:47:23 +00:00
Andrey Prygunkov
7c9ab59aff addition to r649 (unicode support in XML-RPC and JSON-RPC): fixed a typo which could prevent the filtering of invalid xml-characters 2013-05-23 20:47:42 +00:00
Andrey Prygunkov
7b1d1129a8 when unpacking the unpack start time is now measured after receiving of unrar copyright message; this provides better unpack time estimation in a case when user uses unpack-script to do some things before executing unrar (for example sending Wake-On-Lan message to the destination NAS); it works with unrar only, it's not possible with 7-Zip because it buffers printed messages 2013-05-23 20:40:46 +00:00
Andrey Prygunkov
bf3e8fe3a9 the maximum number of download threads is now managed automatically taking into account the number of allowed connections to news servers; removed option <ThreadLimit> 2013-05-22 20:25:19 +00:00
Andrey Prygunkov
baeac17d5b when the program is reloaded, a message with version number is printed like on start 2013-05-22 20:09:21 +00:00
Andrey Prygunkov
68ce6dea4b if a communication error occurs in web-interface, it retries multiple times before giving up with an error message 2013-05-21 20:41:08 +00:00
Andrey Prygunkov
00df4b8920 small correction in a log-message: removed <Request:> from message <Request: Queue collection...> 2013-05-21 20:35:42 +00:00
Andrey Prygunkov
36814514b7 new parameter (env. var) <NZBPP_NZBID> is passed to pp-scripts and contains the internal ID of the NZB-file 2013-05-21 20:34:42 +00:00
Andrey Prygunkov
381a9a28b0 pp-scripts terminated with unknown status are now considered failed (status=FAILURE instead of status=UNKNOWN) 2013-05-21 20:32:41 +00:00
Andrey Prygunkov
5c364896d3 added support for rar-files with non-standard extensions (such as .001, etc.) 2013-05-21 20:21:52 +00:00
Andrey Prygunkov
07ce1d44a9 fixed: remote command <--list> for history items could fail with a segfault for certain par-statuses 2013-05-16 20:57:04 +00:00
Andrey Prygunkov
1348ac86f7 added setting of post-processing parameters for history items; pp-parameters can now be viewed and changed in history dialog in web-interface; useful before post-processing again; new action <HistorySetParameter> in RPC-method <editqueue>; new action <O> in remote command <--edit/-E> for history items (subcommand <H>) 2013-05-16 20:54:13 +00:00
Andrey Prygunkov
b4c4855a9b option <ControlPassword> can now be set to an empty value to disable authentication; useful if nzbget works behind another web-server with its own authentication 2013-05-15 19:45:48 +00:00
Andrey Prygunkov
7a1001a70b fixed: error in IE when loading web-interface (bug introduced in r673) 2013-05-15 16:51:13 +00:00
Andrey Prygunkov
340a8130e9 addition to r677: added missing headers, causing compilation error in newer gcc versions 2013-05-15 16:45:36 +00:00
Andrey Prygunkov
9eb8de27d2 addition to r676: fixed crash on Linux with uClibc 2013-05-14 20:52:39 +00:00
Andrey Prygunkov
476a43a5bf configuration can now be saved in web-interface even if no changes were made, provided obsolete or invalid options were detected in the config file; saving removes the invalid entries from the config file 2013-05-14 20:27:23 +00:00
Andrey Prygunkov
9ab955d026 refactor: more consistent use of c-headers 2013-05-14 20:20:52 +00:00
Andrey Prygunkov
bcedb32cf0 1) fixed compilation error on Linux (bug introduced in r675); 2) refactor: small corrections in class <ParCoordinator> 2013-05-14 20:18:17 +00:00
Andrey Prygunkov
6bc266f13c Par-checker and renamer now add messages into the log of the pp-item (like unpack- and pp-scripts-messages); these messages now appear in the log created by scripts Logger.py and EMail.py 2013-05-13 21:00:08 +00:00
Andrey Prygunkov
ed36feeb0a fixed: when deleting a partially downloaded nzb-file from the queue with option <DeleteCleanupDisk> active, the file <_brokenlog.txt> was not deleted, preventing the directory from being automatically deleted 2013-05-13 20:13:28 +00:00
Andrey Prygunkov
85400cd8f6 fixed: for pp-scripts saved using windows line endings (CR,LF) the descriptions of options were not displayed correctly on settings page 2013-05-08 21:53:18 +00:00
Andrey Prygunkov
2866697b32 fixed: crash when adding malformed nzb-files with certain structure (Windows only) 2013-05-08 21:51:48 +00:00
Andrey Prygunkov
f1a99e1194 when deleting downloads via web-interface a proper hint is displayed regarding deletion of already downloaded files from disk, depending on option <DeleteCleanupDisk> 2013-05-08 21:50:47 +00:00
Andrey Prygunkov
db47ddf3dc when download is resumed in web-interface the option <ParCheck=Force> is respected and all par2-files are resumed (not only main par2-file) 2013-05-08 21:45:22 +00:00
Andrey Prygunkov
2cc4dbd2ba made pp-scripts EMail.py and Logger.py compatible with python3 (python2 is OK too) 2013-05-07 18:02:15 +00:00
Andrey Prygunkov
a38eef2971 fixed: symbol <DISABLE_TLS> must be defined in project settings, defining it in <win32.h> didn't work properly (Windows only) 2013-05-06 19:52:54 +00:00
Andrey Prygunkov
e3e197a917 1) option <ExtCleanupDisk> now checks not only file extensions but any substring at the end of file name (in particular this allows to delete file _brokenlog.txt); 2) fixed: when option <InterDir> was used the files extracted from archives were not processed/deleted by option <ExtCleanupDisk> 2013-05-06 18:58:18 +00:00
Andrey Prygunkov
361d0befb6 addition to r665: fixed: crash on starting of download 2013-05-06 18:32:16 +00:00
Andrey Prygunkov
ba35c662ea small improvement in multithread synchronisation: do not create mutexes for each file-object but instead only for active objects (which are being downloaded at the moment) 2013-05-05 21:17:23 +00:00
Andrey Prygunkov
45ce763c71 small improvement in multithread synchronisation of download queue 2013-05-05 21:16:02 +00:00
Andrey Prygunkov
73c85a0013 fixed: when a duplicate file was detected during download the program could hang 2013-05-05 20:59:22 +00:00
Andrey Prygunkov
26361630c2 addition to r660: fixed: downloads were always checked when option <ParCheck> was set to <Auto> 2013-05-04 07:45:00 +00:00
Andrey Prygunkov
96c30c509b fixed: spaces in option <ExtCleanupDisk> prevented its correct operation 2013-05-02 20:48:32 +00:00
Andrey Prygunkov
d9b9786486 improved par-check: added support for manual par-check; if option <ParCheck> is set to <Manual> and a damaged download is detected the program downloads all par2-files but doesn't perform par-check; the user must perform par-check/repair manually then (possibly on another, faster computer); old values <yes/no> of option <ParCheck> renamed to <Force> and <Auto> respectively; when set to <Force> all par2-files are always downloaded; removed option <LoadPars> since its functionality is now covered by option <ParCheck>; Result of par-check can now have new value <Manual repair necessary>; field <ParStatus> in RPC-method <history> can have new value <MANUAL>; parameter <NZBPP_PARSTATUS> for pp-script can have new value <4 = manual repair necessary>; extended pp-script <EMail.py> to handle ParStatus=4 (manual) 2013-05-02 20:40:36 +00:00
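EMail.py was extended to handle the new status; another pp-script could do the same in this spirit. A sketch that only relies on the value 4 named in the commit; the other values follow the pre-existing convention:

```python
# Sketch: detect the new "manual repair necessary" par-status in a pp-script.
import os

if os.environ.get('NZBPP_PARSTATUS') == '4':
    print('[WARNING] Download is damaged; automatic repair was not attempted '
          '(ParCheck=Manual), manual par-repair is necessary')
```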
Andrey Prygunkov
bb9cea260d small improvements in formatting of option descriptions in web-interface 2013-05-02 20:03:51 +00:00
Andrey Prygunkov
958c2f97ec refactor: discarding of the download queue is now less complicated and does not depend on the diskstate version 2013-04-30 20:12:07 +00:00
Andrey Prygunkov
27651f17bf improvement in JSON-/XML-RPC: all ID fields including NZBID are now persistent and retain their values after restart; this allows third-party software to identify nzb-files by ID; method <history> now returns the ID of the NZB-file in the field <NZBID>; in versions up to 0.8.0 the field <NZBID> was used to identify history items in the edit-commands <HistoryDelete>, <HistoryReturn>, <HistoryProcess>; since version 9 field <ID> is used for this purpose; in versions 9-10 field <NZBID> still existed and had the same value as field <ID> for compatibility with version 0.8.0; this compatibility is not provided anymore; the change was needed to provide consistent use of the field <NZBID> across all RPC-methods 2013-04-30 20:10:10 +00:00
Andrey Prygunkov
71621f7bb5 eliminated a compiler warning 2013-04-30 20:06:54 +00:00
Andrey Prygunkov
d8add46215 automatic deletion of backup-source files after successful par-repair; important when repairing renamed rar-files since this could cause failure during unpack 2013-04-29 20:46:09 +00:00
Andrey Prygunkov
e459f570d5 improved unicode support in pp-script Logger.py 2013-04-29 19:25:59 +00:00
Andrey Prygunkov
5b5057dee0 fixed: failed to read download queue from disk if post-processing queue was not empty 2013-04-29 19:24:44 +00:00
Andrey Prygunkov
f21becb37d added link to catalog of pp-scripts to web-interface 2013-04-29 18:21:06 +00:00
Andrey Prygunkov
9d03eb1ad4 fixed: when option <InterDir> was active and the download after unpack contained a rar-file with the same name as one of the original files (sometimes happens with included subtitles), the original rar-file was kept with name <.rar_duplicate1> even if the option <UnpackCleanupDisk> was active 2013-04-28 20:29:19 +00:00
Andrey Prygunkov
cb90c5e616 when logging messages from a post-processing script, a short name of the script is now used as prefix if possible; a short name doesn't include the subdirectory name or file extension; RPC-method <configtemplates> returns new field <DisplayName> representing the short name of the script, which is recommended for use in the UI 2013-04-26 20:01:53 +00:00
Andrey Prygunkov
77059f2db0 improved unicode support in XML-RPC and JSON-RPC 2013-04-26 19:56:41 +00:00
Andrey Prygunkov
fb72c36a48 if a news-server returns an empty or bad article (this may be caused by errors on the news server), the program tries again from the same or other servers (in previous versions the article was marked as failed without further download attempts) 2013-04-26 19:55:30 +00:00
Andrey Prygunkov
b2b215a061 updated ChangeLog 2013-04-24 20:26:05 +00:00
Andrey Prygunkov
8d313e4cf8 removed pp-script Cleanup.sh (its functionality is now part of the main program) 2013-04-24 20:24:30 +00:00
Andrey Prygunkov
6bb760375e added option <ExtCleanupDisk> to automatically delete unwanted files (with specified extensions) after successful par-check or unpack 2013-04-24 20:16:04 +00:00
Andrey Prygunkov
025cd043d3 history dialog now shows status of every script 2013-04-23 18:20:52 +00:00
Andrey Prygunkov
3f368d4a8e fixed: download time in statistics was incorrect if the computer was put into standby (thanks Frank Kuypers for the patch) 2013-04-21 20:23:32 +00:00
Andrey Prygunkov
449e41e435 removed unused file from repository 2013-04-21 19:56:28 +00:00
Andrey Prygunkov
bf0062be52 addition to r639: eliminated a compiler warning 2013-04-21 19:43:14 +00:00
Andrey Prygunkov
3c025c8b52 updated forum URL in about dialog in web-interface 2013-04-19 18:53:56 +00:00
Andrey Prygunkov
6dc3d954c5 fixed: authorization to news-server was forced even when username/password were empty (bug introduced in r634) 2013-04-19 18:44:32 +00:00
Andrey Prygunkov
c9b7a11a89 fixed: when adding nzb-files with an assigned category and empty option <CategoryX.DefScript>, the global option <DefScript> should be used but it wasn't 2013-04-19 17:34:17 +00:00
Andrey Prygunkov
33cb2d108e fixed: if a download didn't have any par-files and the option <ParCheck> was active, the par-check was started anyway and then failed 2013-04-18 21:04:48 +00:00
Andrey Prygunkov
e053e74b58 fixed: scripts containing spaces in their names were not assigned to nzb-files when adding to queue (when defined in option <DefScript>) 2013-04-18 20:28:31 +00:00
Andrey Prygunkov
4e35fc2fbe improved multiscripts: 1) first level subfolders in the ppscripts-directory (option <ScriptDir>) are now scanned for scripts too; 2) only files containing the script definition signature are considered scripts and are shown in web-interface; 3) these changes allow easy installation of script collections (script bundles) by just putting a folder with multiple scripts into the ppscripts-directory 2013-04-18 20:24:30 +00:00
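
A sketch of the described discovery logic: scan the ppscripts-directory and its first-level subfolders and keep only files containing a definition signature. The exact signature text is an assumption modelled on the bundled EMail.py/Logger.py scripts.

```python
# Sketch only: signature text is an assumption, not the authoritative value.
import os

SIGNATURE = '### NZBGET POST-PROCESSING SCRIPT'

def find_scripts(script_dir):
    scripts = []
    for root, dirs, files in os.walk(script_dir):
        depth = os.path.relpath(root, script_dir).count(os.sep)
        if depth > 0:
            dirs[:] = []  # do not descend below the first-level subfolders
            continue
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, errors='ignore') as f:
                    if SIGNATURE in f.read(10000):
                        scripts.append(path)
            except OSError:
                pass
    return scripts
```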
Andrey Prygunkov
4768d8e459 if username and password are defined for a news-server the authentication is now forced (in previous versions the authentication was performed only if requested by the server); needed for servers supporting both anonymous (restricted) and authorized (full access) accounts 2013-04-17 20:34:15 +00:00
Andrey Prygunkov
3598cc1d85 refactor: restructured class <Connection> 2013-04-17 20:21:46 +00:00
Andrey Prygunkov
e3a895b88c updated README 2013-04-15 20:22:16 +00:00
Andrey Prygunkov
61eff3ddf0 removed old example post-processing script 2013-04-15 20:17:24 +00:00
Andrey Prygunkov
e9268984ae added post-processing scripts EMail.py and Logger.py 2013-04-15 20:16:06 +00:00
Andrey Prygunkov
f28b35bd28 reworked concept of post-processing scripts: multiple scripts can be assigned to each nzb-file; all assigned scripts are executed after the nzb-file is downloaded and internally processed (unpack, repair); option <PostProcess> is obsolete; new option <ScriptDir> sets directory where all pp-scripts must be stored; new option <DefScript> sets the default list of pp-scripts to be assigned to nzb-file when it's added to queue; new option <CategoryX.DefScript> to set the default list of pp-scripts on a category basis; the execution order of pp-scripts can be set using new option <ScriptOrder>; there are no separate configuration files for pp-scripts; configuration options and pp-parameters are defined in the pp-scripts; script configuration options are saved in nzbget configuration file (nzbget.conf); changed parameters list of RPC-methods <loadconfig> and <saveconfig>; new RPC-method <configtemplates> returns configuration descriptions for the program and for all pp-scripts; configuration of all scripts can be done in web-interface; the pp-scripts assigned to a particular nzb-file can be viewed and changed in web-interface on page <pp-parameters> in the edit download dialog; option <PostPauseQueue> renamed to <ScriptPauseQueue> (the old name is still recognized); new option <ConfigTemplate> to define the location of the template configuration file (in previous versions it always had to be stored in <WebDir>) 2013-04-15 20:06:05 +00:00
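
A sketch of how a pp-script written for this scheme might read its own configuration options. The NZBPO_ environment prefix, the option names and the error exit code are assumptions modelled on the bundled scripts.

```python
#!/usr/bin/env python
# Sketch only: NZBPO_ prefix, option names and exit code 94 are assumptions.
import os
import sys

required = ('Server', 'Port', 'SendTo')
missing = [name for name in required if 'NZBPO_' + name.upper() not in os.environ]
if missing:
    print('[ERROR] Please configure the options %s in the script settings' % ', '.join(missing))
    sys.exit(94)  # assumed POSTPROCESS_ERROR code

server = os.environ['NZBPO_SERVER']
port = int(os.environ['NZBPO_PORT'])
```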
Andrey Prygunkov
a86618c2c2 fixed: when options <DirectWrite> and <ContinuePartial> were both active, a restart or reload of the program during download may cause damaged files in the active download 2013-04-08 20:19:05 +00:00
Andrey Prygunkov
1f1a4b8fb8 fixed potential segfault which could happen with file paths longer than 1024 characters 2013-04-07 15:14:20 +00:00
Andrey Prygunkov
a1d0be34c2 updated README 2013-04-07 15:10:42 +00:00
Andrey Prygunkov
ef0a04cc1c improved unicode (utf8) support: non-ascii characters are now correctly transferred via JSON-RPC; correct displaying of nzb-names and paths in web-interface; it is now possible to use non-ascii characters on settings page for option values (such as paths or category names) 2013-04-06 21:03:05 +00:00
Andrey Prygunkov
284262b7da added new feature <split download> which creates new download from selected files of source download; new command <Split> in web-interface in edit download dialog on page <Files>; new action <S> in remote command <--edit/-E>; new action <FileSplit> in JSON-/XML-RPC method <editqueue> 2013-04-06 20:54:00 +00:00
Andrey Prygunkov
58b0a17986 reworked post-processor queue: 1) only one job is created for each nzb-file; no more separate jobs are created for par-collections within one nzb-file; 2) option <AllowReProcess> removed; a post-processing script is called only once per nzb-file, this behavior cannot be altered anymore; 3) with a new feature <Split> (see next commits) individual par-collections can be processed separately in a more effective way than before 2013-04-06 20:25:07 +00:00
Andrey Prygunkov
57abe00c62 updated version string to 11.0-testing 2013-04-06 12:58:50 +00:00
Andrey Prygunkov
d014407ba4 updated ChangeLog 2013-03-31 14:37:57 +00:00
Andrey Prygunkov
2e8bfa16f9 fixed: articles with decoding errors (incomplete or damaged posts) caused infinite retry-loop in downloader 2013-03-31 14:36:15 +00:00
Andrey Prygunkov
bf34713b0c updated version string to 10.1 2013-03-31 14:10:57 +00:00
Andrey Prygunkov
c46c1a96cd updated version string (preparing to release 10.0) 2013-03-29 21:07:24 +00:00
Andrey Prygunkov
2693b62de4 if an obsolete option is found in the config file a warning is printed instead of an error and the program is not paused anymore 2013-03-23 15:07:25 +00:00
Andrey Prygunkov
18387f6d98 fixed: when the option <ContinuePartial> is active and there are partially downloaded files in queue, after reloading/restarting of the program a file may get stuck with status <downloading>; trying to reload or quit the program in this state resulted in a crash (bug introduced in r599) 2013-03-22 22:07:23 +00:00
Andrey Prygunkov
08b7356184 adding of local files via web-interface now works in IE10 2013-03-18 21:43:01 +00:00
Andrey Prygunkov
e30cdfc176 addition to r602: fixed: if news servers from different levels were defined with the same group (bad config actually), download could hang when waiting for a free connection to a higher level server 2013-03-17 20:24:25 +00:00
Andrey Prygunkov
cc0ed38e68 refactor: removed dead code 2013-03-17 12:43:11 +00:00
Andrey Prygunkov
5ec0d20286 improvement in news-server/connection management: new option <ServerX.Group> allows more flexible configuration of news servers when using multiple accounts on the same server; with this option it's also possible to imitate the old server management behavior regarding levels as it was before r599 2013-03-17 12:21:46 +00:00
Andrey Prygunkov
e0aa69f605 improvement in news-server/connection management: do not reconnect on <article/group not found> errors since this doesn't help but unnecessarily increases CPU load and network traffic 2013-03-16 15:24:01 +00:00
Andrey Prygunkov
c859f39036 addition to r599: fixed: download could be cancelled when waiting for a free connection to news server 2013-03-16 13:32:35 +00:00
Andrey Prygunkov
5251f62665 major improvement in news-server/connection management (main and fill servers): if download of article fails, the program tries all servers of the same level before trying higher level servers; this ensures that fill servers are used only if all main servers fail; this makes the configuring of multiple servers much easier than before: in most cases the simple configuration of level 0 for all main servers and level 1 for all fill servers suffices; in previous versions the level was increased immediately after the first tried server of the level failed; to make sure all main servers were tried before downloading from fill servers it was required to create complex server configurations with duplicates; these configurations were still not as effective as now 2013-03-14 22:30:59 +00:00
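
A conceptual sketch (not the actual C++ implementation) of the retry policy described above: exhaust every server of the current level before moving on, so fill servers are used only after all main servers have failed.

```python
# Conceptual sketch of the level policy; names and levels are placeholders.
def pick_server(servers, failed):
    """servers: list of (name, level) pairs; failed: names already tried."""
    for level in sorted({lvl for _, lvl in servers}):
        for name, lvl in servers:
            if lvl == level and name not in failed:
                return name        # stay on this level while anything is left
    return None                    # every server on every level failed

servers = [('main1', 0), ('main2', 0), ('fill1', 1)]
assert pick_server(servers, {'main1'}) == 'main2'           # second main server first
assert pick_server(servers, {'main1', 'main2'}) == 'fill1'  # only then the fill server
```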
Andrey Prygunkov
2b87e2b221 fixed: download could be cancelled when waiting for a free connection on a second-(or higher)-level news server 2013-03-12 22:24:29 +00:00
Andrey Prygunkov
c185e9b487 removed unneeded code from configure-script 2013-03-12 21:33:45 +00:00
Andrey Prygunkov
b5d9a99f10 fixed: special characters (quotation marks, etc.) in unpack password and in configuration options were not displayed properly and could be discarded on saving 2013-03-12 20:28:03 +00:00
Andrey Prygunkov
fec67fe0ea fixed: some characters with codes below 32 were not properly encoded in JSON-RPC; sometimes output from unrar contained such characters and could break web-interface 2013-03-12 20:22:08 +00:00
Andrey Prygunkov
987997a986 fixed: when option <UnpackCleanupDisk> is active the unpacked archive-files (second level archives) could be deleted too (mostly affected 7-Zip archives but sometimes also rar-archives if the second level rar-files had the same names as the first level rars) 2013-03-11 20:03:47 +00:00
Andrey Prygunkov
27ef79ca27 immediately clearing post-process progress label after unpack to avoid status update lag in web-interface 2013-03-11 20:00:49 +00:00
Andrey Prygunkov
bcc1932a37 addition to r586/r591: added automatic speed meter recalibration to recover after possible synchronisation errors which can occur when the option <AccurateRate> is not active; this makes the default (less accurate but fast) speed meter almost as good as the accurate one; important when speed throttling is active 2013-03-11 19:54:14 +00:00
Andrey Prygunkov
2e163e9986 reverted r586 <automatic recovery after synchronisation errors>: needs more testing 2013-03-10 22:50:34 +00:00
Andrey Prygunkov
5473e57b10 fixed: if an external program (unrar, pp-script, etc.) could not be started, the execute-function returned code 255 although code -1 was expected in this case; this could break the designed post-processing flow 2013-03-10 22:19:46 +00:00
Andrey Prygunkov
b970bad058 improved configure-script: 1) libs which are added via pkgconfig are now put into LIBS instead of LDFLAGS - improves compatibility with newer Linux linkers; 2) OpenSSL libs/includes are now added using pkgconfig to better handle dependencies; 3) additional check for libcrypto (part of OpenSSL) ensures the library is added to the linker command even if pkgconfig is not used 2013-03-10 15:20:41 +00:00
Andrey Prygunkov
bea1814cb9 removed a line of unused code (introduced in r579) 2013-03-09 22:29:18 +00:00
Andrey Prygunkov
45e58f29b4 corrections in code formatting; no actual code changes 2013-03-09 22:23:58 +00:00
Andrey Prygunkov
f7a3df635a added automatic recovery after synchronisation errors which can occur in speed meter when the option <AccurateRate> is not active; this makes the <inaccurate/fast/default> speed meter much more reliable; important when speed throttling is active; a warning is printed to log to indicate detected error and reset of speed meter 2013-03-09 22:21:44 +00:00
Andrey Prygunkov
3f67984929 changed default value for option <ServerX.JoinGroup> to <no>; most news servers nowadays do not require joining the group and many servers do not keep headers for many groups making the join-command fail even if the articles still can be successfully downloaded 2013-03-09 14:51:48 +00:00
Andrey Prygunkov
bf82171baa removed hint <Post-processing script may have moved files elsewhere> from history dialog since it caused more questions than it helped 2013-03-08 19:21:43 +00:00
Andrey Prygunkov
4c49a7f003 added link to wiki-article <Performance tips> to settings tab on web-interface 2013-03-08 19:10:34 +00:00
Andrey Prygunkov
57b5d40851 when post-processing-parameters are passed to the post-processing script, a second version of each parameter with a normalized parameter-name is passed in addition to the original parameter name; in the normalized name the special characters <*> and <:> are replaced with <_> and all characters are passed in upper case; this is important for internal post-processing-parameters (*Unpack:=yes/no) which include special characters 2013-03-07 16:45:41 +00:00
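
The normalization rule described above as a small helper: <*> and <:> become <_> and the name is upper-cased, so the script can access the parameter under both spellings.

```python
def normalize_param_name(name):
    # Replace the special characters and upper-case the result.
    return name.replace('*', '_').replace(':', '_').upper()

print(normalize_param_name('*Unpack:'))          # _UNPACK_
print(normalize_param_name('*Unpack:Password'))  # _UNPACK_PASSWORD
```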
Andrey Prygunkov
ecde2d1627 improved post-processing script: better handling of nzb-files not having archive files 2013-03-07 16:45:08 +00:00
Andrey Prygunkov
b48e9a31f6 updated description of option <ServerX.Cipher> and added link to wiki article <Choosing a cipher> 2013-03-06 21:52:15 +00:00
Andrey Prygunkov
f64e5241ed improved the handling of hanging connections: if a connection hangs longer than defined by option <ConnectionTimeout> the program tries to gracefully close connection first (this is new); if it still hangs after <TerminateTimeout> the download thread is terminated as a last resort (as in previous versions) 2013-03-06 21:36:09 +00:00
Andrey Prygunkov
4e9d01055a news servers configuration is now less error-prone: 1) the option <ServerX.Level> is not required to start from <0> and when several news servers are configured the Levels can be any integers - the program sorts the servers and corrects the Levels to 0,1,2,etc. automatically if needed; 2) when option <ServerX.Connections> is set to <0> the server is ignored (in previous version such a server could cause hanging when the program was trying to go to the next level); 3) if no news servers are defined (or all definitions are invalid) a warning is printed to inform that the download is not possible 2013-03-06 21:35:31 +00:00
Andrey Prygunkov
9d4ca25499 updated VC++ project file 2013-03-04 22:16:25 +00:00
Andrey Prygunkov
f1ddf9dc2b addition to r568: corrected saving of diskstate for post-processor queue 2013-03-04 22:14:15 +00:00
Andrey Prygunkov
e99b790d58 added new option <ServerX.Cipher> to manually select cipher for encrypted communication with news server; manually choosing a faster cipher (such as <RC4>) can significantly improve performance (if CPU is a limiting factor) 2013-03-04 22:01:14 +00:00
Andrey Prygunkov
5e4a99c1ad addition to r572: changed the log-messages for deleting of 7-zip-files to <info> too 2013-03-04 21:28:37 +00:00
Andrey Prygunkov
c89824bf25 the log-messages <deleting file *file*> (when option <UnpackCleanupDisk> is active) and <moving file *file* to *destination*> are now printed as <info> instead of <detail> (since <detail> is for article related messages whereas <info> is more suitable for file related messages) 2013-03-04 20:28:27 +00:00
Andrey Prygunkov
1230d9cdd4 added fast renaming of intentionally misnamed (rar-) files; the new renaming algorithm doesn't require full par-scan and restores original filenames in just a few seconds, even on very slow computers (NAS, media players, etc.); the fast renaming is performed automatically when requested by the built-in unpacker (option <Unpack> must be active) 2013-03-04 19:55:36 +00:00
Andrey Prygunkov
d3dd8dc686 fixed a compilation warning 2013-03-03 20:48:14 +00:00
Andrey Prygunkov
382faa49cb added new option <InterDir> to put intermediate files during download into a separate directory (instead of storing them directly in the destination directory, option <DestDir>); when an nzb-file is completely (successfully) downloaded, repaired (if necessary) and unpacked, the files are moved to the destination directory (option <DestDir> or <CategoryX.DestDir>); an intermediate directory can significantly improve unpack performance if it is located on a separate physical hard drive 2013-03-01 20:32:17 +00:00
Andrey Prygunkov
5cf4b4663f addition to r568: corrected diskstate version check 2013-02-28 20:47:30 +00:00
Andrey Prygunkov
749b4d3083 when a history item is post-processed again and the archive files were previously deleted because of option <UnpackCleanupDisk> the post-processing goes directly to script stage; if the archive files were kept, the full post-processing including unpack is performed instead 2013-02-28 20:23:50 +00:00
Andrey Prygunkov
02835d057e fixed: remote commands <--list/-L> and <--connect/-C> showed download speed and speed limit in Bytes instead of KiloBytes (bug introduced in r544) 2013-02-15 08:04:23 +00:00
Andrey Prygunkov
f5aaaecc48 fixed: RPC-method <history> returned incorrect Par-Status (bug introduced in r563) 2013-02-14 15:06:10 +00:00
Andrey Prygunkov
184ff84f92 small change in example post-processing script: message <Deleting source ts-files> is now printed only if ts-files really existed 2013-02-13 19:47:48 +00:00
Andrey Prygunkov
ef56bc1f55 fixed: par-status <FAILED> was not correctly checked in the example post-processing script (bug introduced in r558) 2013-02-13 17:05:26 +00:00
Andrey Prygunkov
9846f7509e fixed: parameter <NZBPP_PARSTATUS> was not correctly passed to post-processing script when the download was repaired 2013-02-13 16:54:13 +00:00
Andrey Prygunkov
1cf7cefe83 updated VC++ project file 2013-02-12 22:12:43 +00:00
Andrey Prygunkov
b5f1dbc47b changed formatting of remaining time for post-processing to short format (as used for remaining download time) 2013-02-12 14:10:44 +00:00
Andrey Prygunkov
37b85491c3 fixed: RPC-method <history> returned bad results for URLs; impacts history tab in web-interface (bug introduced in r555) 2013-02-10 21:35:46 +00:00
Andrey Prygunkov
30d792b35b when running external programs (such as unrar or a post-processing script) the full path to the program is not necessary since a search in the system PATH is now performed 2013-02-09 10:58:14 +00:00
Andrey Prygunkov
ac29412b2f updated example post-processing script: added check for nzbget version (at least 10.0) and option <Unpack>, small other corrections 2013-02-08 13:16:29 +00:00
Andrey Prygunkov
01c170afaf fixed a compilation warning 2013-02-07 19:32:07 +00:00
Andrey Prygunkov
539e0811c9 fixed compilation error on Linux 2013-02-07 19:30:11 +00:00
Andrey Prygunkov
940448ffae added built-in unpack: 1) rar and 7-zip formats are supported (via external Unrar and 7-Zip executables); 2) new options <Unpack>, <UnpackPauseQueue>, <UnpackCleanupDisk>, <UnrarCmd>, <SevenZipCmd>; 3) web-interface now shows progress and estimated time during unpack (rar only; for 7-Zip progress is not available due to limitations of 7-Zip); 4) when built-in unpack is enabled, the post-processing script is called after unpack and possibly par-check/repair (if needed); 5) for nzb-files containing multiple collections (par-sets) the post-processing script is called only once, after the last par-set; 6) new parameter <NZBPP_UNPACKSTATUS> passed to post-processing script; 7) if the option <AllowReProcess> is enabled the post-processing-script is called after each par-set (as in previous versions); 8) example post-processing script updated: removed unrar-code, added check for unpack status; 9) new field <UnpackStatus> in result of RPC-method <history>; 10) history-dialog in web-interface shows three statuses: par-status, unpack-status, script-status; 11) with two built-in special post-processing parameters <*Unpack:> and <*Unpack:Password> the unpack can be disabled for an individual nzb-file or the password can be set; 12) built-in special post-processing parameters can be set via web-interface on page <PP-Parameters> (when built-in unpack is enabled). 2013-02-06 22:04:50 +00:00
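
A small sketch of reading the new parameter from a post-processing script. The parameter names come from the entry above; the meaning of the individual numeric values is intentionally not assumed here.

```python
# Sketch only: parameter names from the change description, values printed raw.
import os

unpack_status = os.environ.get('NZBPP_UNPACKSTATUS')
par_status = os.environ.get('NZBPP_PARSTATUS')
print('Unpack status: %s, par status: %s' % (unpack_status, par_status))
```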
Andrey Prygunkov
68a73f96c4 warning <Non-nzbget request received> is no longer printed when the connection was aborted before the request signature was read 2013-01-31 20:49:52 +00:00
Andrey Prygunkov
87a93745cb when the par-checker requests more par-files, they get an extra priority and are downloaded before other files regardless of their priorities; this is needed to avoid the par-checker job hanging if a file with a higher priority gets added to queue during par-check 2013-01-28 23:22:15 +00:00
Andrey Prygunkov
3b08abca10 refactor: extracted par-related code from module <PrePostProcessor> into new module <ParCoordinator> 2013-01-23 21:32:36 +00:00
Andrey Prygunkov
5e68096a2e added validation for option <CategoryX.DestDir>; removed a superfluous slash in the generated destination path 2013-01-22 22:16:18 +00:00
Andrey Prygunkov
e3ef11ceae improved error reporting for connection errors (especially on Windows) 2013-01-21 21:29:45 +00:00
Andrey Prygunkov
575fe8379f improved error reporting for connection errors when using OpenSSL 2013-01-21 21:28:51 +00:00
Andrey Prygunkov
60feae7e5b new feature <Pause for X Minutes> in web-interface; new XML-/JSON-RPC method <scheduleresume> 2013-01-21 21:19:04 +00:00
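
A guess at how <Pause for X Minutes> maps onto the new RPC method: pause the download and schedule the resume. The endpoint, credentials, the pausedownload method name and the parameter unit (seconds) are assumptions.

```python
# Sketch only: pausedownload and the seconds unit are assumptions;
# scheduleresume is the method named in the change description.
from xmlrpc.client import ServerProxy

nzbget = ServerProxy('http://nzbget:tegbzn6789@localhost:6789/xmlrpc')
nzbget.pausedownload()
nzbget.scheduleresume(15 * 60)  # resume automatically after 15 minutes
```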
Andrey Prygunkov
2b45ecaea1 fixed: some XML-/JSON-RPC methods may return negative values for file sizes between 2-4GB; this had also influence on web-interface 2013-01-21 20:44:30 +00:00
Andrey Prygunkov
7a3a430137 fixed warning <file glyphicons-halflings.png not found> 2013-01-20 11:19:10 +00:00
Andrey Prygunkov
11c0563fe5 refactor: download speed and speed limit are now internally integers (Bytes) instead of floats (KB) 2013-01-18 21:36:17 +00:00
Andrey Prygunkov
00018b3e89 improved the automatic par-scan (option <ParScan=auto>) to significantly reduce the verify-time in some common cases with renamed rar-files: 1) the extra files are scanned in an optimized order; 2) the scan stops when all missing files are found 2013-01-17 22:08:01 +00:00
Andrey Prygunkov
de787a069d added support for HTTPS to the built-in web-server (web-interface and XML/JSON-RPC); new options <SecureControl>, <SecurePort>, <SecureCert> and <SecureKey>; module <TLS.c/h> completely rewritten with support for server-side sockets, newer versions of GnuTLS, and proper thread locking in OpenSSL 2013-01-17 19:07:13 +00:00
Andrey Prygunkov
5f33ea6013 merge from 9-branch: improved the post-processing script to better handle renamed rar-files 2013-01-15 22:00:44 +00:00
Andrey Prygunkov
ee74c4c17f refactor: reformatted TLS.c/.h 2013-01-13 17:46:11 +00:00
Andrey Prygunkov
b031f52ee2 replaced a browser error message when trying to add local files in IE9 with a better message dialog 2013-01-06 19:50:30 +00:00
Andrey Prygunkov
77bb01b18c small changes in libpar2-patches for compatibility with optware 2013-01-06 13:00:13 +00:00
Andrey Prygunkov
d2fdc28c85 refactor: reworked Connection-class: fully encapsulated sockets; better read/write methods 2012-12-30 15:27:38 +00:00
Andrey Prygunkov
d34e985a92 addition to r525: updated config.h.in 2012-12-20 22:04:24 +00:00
Andrey Prygunkov
f0c2c834c3 fixed: segfault if a category didn't have a destination directory defined (bug introduced in r524) 2012-12-20 22:00:46 +00:00
Andrey Prygunkov
ca0cce9401 addition to r525: corrected and updated configure-script 2012-12-19 21:52:24 +00:00
Andrey Prygunkov
2a0e211daf fixed: the reported line numbers for configuration errors were sometimes inaccurate 2012-12-19 20:37:07 +00:00
Andrey Prygunkov
ea89983e45 added full par-scan feature needed to par-check/repair files which were renamed after creation of par-files; new option <ParScan> to activate full par-scan (always or automatic); the automatic full par-scan activates if missing files are detected during par-check, this avoids unnecessary full scan for normal (not renamed) par sets 2012-12-19 20:16:17 +00:00
Andrey Prygunkov
e4ed1c8fd7 categories can now have their own destination directories 2012-12-18 22:29:24 +00:00
Andrey Prygunkov
d46155bf32 updated version string to 10.0-testing 2012-12-18 22:09:09 +00:00
Andrey Prygunkov
9bdf0d8937 updated version string in windows version 2012-12-18 21:45:45 +00:00
Andrey Prygunkov
f56c1226b6 corrected file properties 2012-12-09 13:21:18 +00:00
Andrey Prygunkov
bc3e4742f0 updated version string (preparing to release 9.0) 2012-12-09 13:15:23 +00:00
Andrey Prygunkov
891b16ac76 fixed: saving of file properties (priority or category) failed if a post-processing script with post-processing parameters was not used (bug introduced in r476) 2012-11-25 19:38:09 +00:00
Andrey Prygunkov
11a32c3537 updated Changelog 2012-11-25 15:57:28 +00:00
Andrey Prygunkov
8afc96e1ad updated Changelog 2012-11-23 22:28:48 +00:00
Andrey Prygunkov
55d2c9e49c updated README 2012-11-23 21:34:38 +00:00
Andrey Prygunkov
9baabee3fd fixed an issue on mobile safari where a click on the time-label (which should bring up the statistics dialog) was often registered as a click on the speed-label (and showed the time limit dialog instead) 2012-11-21 21:05:45 +00:00
Andrey Prygunkov
ad20cb6644 improved the startup script <nzbgetd> so it can be directly used in </etc/init.d> without modifications 2012-11-21 20:24:17 +00:00
Andrey Prygunkov
04d2d92524 implemented function <Clear messages> in web-interface; added RPC-method <clearlog> 2012-11-20 20:42:00 +00:00
Andrey Prygunkov
6630a8c2a5 renamed subcommand <K> of command <--edit/-E> to <C> (the old subcommand is still supported for compatibility) 2012-11-16 21:27:37 +00:00
Andrey Prygunkov
67ee86eaeb made all command-line subcommands case insensitive (like it already was in <--edit/-E>-command) (example: <nzbget -L g> and <nzbget -L G> is the same) 2012-11-16 21:12:28 +00:00
Andrey Prygunkov
2fcfbc2e1a added new option <NzbAddedProcess> to setup a script called after a nzb-file is added to queue 2012-11-16 20:50:56 +00:00
Andrey Prygunkov
2bb1162adf corrected the help screen (nzbget --help) 2012-11-12 21:21:26 +00:00
Andrey Prygunkov
f5e0b67305 extended remote command <--append/-A> with optional parameters: <T> - adds the file/URL to the top of queue; <P> - pauses added files; <C category-name> - sets category for added nzb-file/URL; <N nzb-name> - sets nzb filename for added URL; the old switches <--category/-K> and <--top/-T> are deprecated but still supported for compatibility 2012-11-12 21:11:42 +00:00
Andrey Prygunkov
0b25fb5771 fixed: if the loading of the settings tab was cancelled (by clicking on another tab) an error could appear 2012-11-12 19:22:15 +00:00
Andrey Prygunkov
27ff29329f added debug messages for speed meter 2012-11-12 19:21:34 +00:00
Andrey Prygunkov
b52cfbb602 fixed: version number wasn't displayed in about dialog 2012-11-12 19:21:01 +00:00
Andrey Prygunkov
c5a1c64a35 fixed: switching between tabs didn't work in IE10 2012-11-12 19:20:34 +00:00
Andrey Prygunkov
cbedd9bec5 extended browser check for IE<9 with a tip about compatibility mode 2012-11-12 19:19:52 +00:00
Andrey Prygunkov
7a90844970 fixed: edit commands for group/post/history didn't work properly (bug introduced in r487) 2012-11-12 19:18:55 +00:00
Andrey Prygunkov
45dcb72178 fixed compilation error on windows (bug introduced in r499) 2012-11-12 19:18:32 +00:00
Andrey Prygunkov
d23b5bb58b addition: fixed: the loading of configuration in web-interface failed if the program was started with parameter <-c> using relative path to config filename 2012-11-11 13:35:38 +00:00
Andrey Prygunkov
bf7de99182 fixed: the loading of configuration in web-interface failed if the program was started with parameter <-c> using relative path to config filename 2012-11-09 16:38:22 +00:00
Andrey Prygunkov
8ddfab4b47 now using minified versions of libraries for better performance and smaller size 2012-11-07 14:34:14 +00:00
Andrey Prygunkov
57c2dc2d65 updated README 2012-11-07 14:30:37 +00:00
Andrey Prygunkov
62236a38f3 added javascript error reporting - should help users to easily see browser compatibility issues 2012-11-07 14:28:14 +00:00
Andrey Prygunkov
25ab7bba02 small improvements in speed meter: 1) eliminated unneeded calls of time-function in standby mode (might help with hibernation issue on Synology NAS); 2) better speed metering on high CPU load caused by other programs (if nzbget has less CPU time) 2012-11-07 14:25:34 +00:00
Andrey Prygunkov
fe14f3ee0e fixed: RPC-method <listfiles> didn't work correctly if the parameter <NZBID> was set to <0> (bug introduced in r487) 2012-11-07 03:16:54 +00:00
Andrey Prygunkov
425120de94 fixed: trailing spaces were not discarded when the config file was loaded in web-interface 2012-11-07 03:12:25 +00:00
Andrey Prygunkov
2520c8d173 fixed compilation warning 2012-11-07 03:07:35 +00:00
Andrey Prygunkov
7b4ee1c44b fixed compilation error on older systems (bug introduced in r411) 2012-11-07 03:01:27 +00:00
Andrey Prygunkov
58a1dcd141 fixed compilation error on recent linux versions 2012-11-04 20:36:12 +00:00
Andrey Prygunkov
3b4f44f276 refactor: restructured the entire web-interface code 2012-11-03 07:41:44 +00:00
Andrey Prygunkov
ebcc06686c added editing of individual files of the group in web-interface (pause/resume/delete/reorder); added new command <FileReorder> to RPC-method <editqueue> to set the order of individual files in the group 2012-10-20 11:06:45 +00:00
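
A rough sketch of the new command over XML-RPC. The editqueue parameter list (command, offset, edit-text, list of IDs) is an assumption for this version, and the file IDs are placeholders.

```python
# Sketch only: editqueue signature is assumed; IDs are hypothetical.
from xmlrpc.client import ServerProxy

nzbget = ServerProxy('http://nzbget:tegbzn6789@localhost:6789/xmlrpc')
file_ids = [12, 10, 11]  # hypothetical file IDs in the desired new order
nzbget.editqueue('FileReorder', 0, '', file_ids)
```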
Andrey Prygunkov
45b3a7dbcd temporarily pausing the play animation if any modal dialog is shown (to avoid artifacts in safari) 2012-10-18 20:37:15 +00:00
Andrey Prygunkov
16f04f2255 fixed: the lockfile (option <LockFile>) was deleted after reloading (bug introduced in r463) 2012-10-18 20:31:32 +00:00
Andrey Prygunkov
a23fcbd095 addition: added processing of URLs starting with path <nzbget> (e.g. <http://localhost:6789/nzbget/>) as alias to the root path (e.g. <http://localhost:6789/>) in internal web-server to support reverse proxies lacking the ability to rewrite URL 2012-10-16 15:49:18 +00:00
Andrey Prygunkov
7491c0f7c4 added processing of URLs starting with path <nzbget> (e.g. <http://localhost:6789/nzbget/>) as alias to the root path (e.g. <http://localhost:6789/>) in internal web-server to support reverse proxies lacking the ability to rewrite URL 2012-10-15 20:40:32 +00:00
Andrey Prygunkov
83da75a5e5 fixed: error in GnuTLS-support on certain systems (bug introduced in r463) 2012-10-15 18:04:25 +00:00
Andrey Prygunkov
2474c32f60 addition: added indication of soft-pause state via orange border on play/pause button 2012-10-14 08:18:01 +00:00
Andrey Prygunkov
6910f1f0b7 added indication of soft-pause state via orange border on play/pause button 2012-10-14 07:32:37 +00:00
Andrey Prygunkov
c426aeac6a fixed: the web-interface was trying to load the post-processing configuration template file even if no post-processing script was used or when the script doesn't have a config file at all; this led to warnings in the log (although harmless) (bug introduced in r476) 2012-10-08 03:51:45 +00:00
Andrey Prygunkov
0e2716ba31 fixed: the settings page failed to load when a post-processing script with a config file was used and the post-processing configuration template file was not present in the webui-directory (bug introduced in r476) 2012-10-07 16:40:41 +00:00
Andrey Prygunkov
011239d45c renamed options <ServerIP>, <ServerPort> and <ServerPassword> to <ControlIP>, <ControlPort> and <ControlPassword> to avoid confusion with news-server options <ServerX.Host>, <ServerX.Port> and <ServerX.Password>; the old option names are still recognized and are automatically renamed when the configuration is saved from web-interface; also renamed option <> to <MainDir> 2012-10-05 18:57:09 +00:00
Andrey Prygunkov
5bb3d1a9e1 added support for post-processing parameters in web-interface 2012-10-05 18:13:13 +00:00
Andrey Prygunkov
43766c7ab9 fixed: the status of an active post-processing download was displayed as <PP-QUEUED> if the download had multiple par-sets 2012-10-02 15:23:33 +00:00
Andrey Prygunkov
e12eeed65d fixed: <make install> failed on BSD due to different syntax in <sed>-command 2012-10-01 20:15:05 +00:00
Andrey Prygunkov
711ecb4025 fixed: the size of small downloads (less than 100 MB) was not printed properly in web-interface 2012-10-01 19:50:19 +00:00
Andrey Prygunkov
88957699c5 fixed: unrar failure was not always properly detected causing the post-processing to delete not yet unpacked rar-files 2012-10-01 19:41:13 +00:00
Andrey Prygunkov
fcd6c51d55 categories available in web-interface are now configured in program configuration file (nzbget.conf) instead of a separate file <webui/categories.txt> and can therefore be added and changed via web-interface on settings page 2012-09-30 19:58:58 +00:00
Andrey Prygunkov
adda02dd0d updated descriptions in example configuration file 2012-09-29 19:56:19 +00:00
Andrey Prygunkov
ab1cff2a7d added <free disk space> to dialog <statistics and status> in web-interface 2012-09-29 19:54:54 +00:00
Andrey Prygunkov
0716a743d8 fixed: free disk space reported incorrectly on some OSes 2012-09-29 19:52:18 +00:00
Andrey Prygunkov
d0e17fde77 the status of post-processing and directory scan is now displayed as <disabled> if the related options in config file are not set 2012-09-28 19:29:03 +00:00
Andrey Prygunkov
d6c0aa8a80 the priority of nzb-file can now be set when adding local-file via web-interface; JSON/XML-RPC method <append> extended with parameter <priority> 2012-09-28 19:21:34 +00:00
Andrey Prygunkov
815bf9b390 fixed: added workaround for bug in iOS 6 safari caching POST-requests 2012-09-28 19:04:11 +00:00
Andrey Prygunkov
0aa6e0a8b2 all images are now provided with HiDPI versions in addition to standard versions; the HiDPI images are activated automatically on retina displays (requires webkit browser) 2012-09-27 20:59:47 +00:00
Andrey Prygunkov
8b1aff33fe added remote command <--reload/-O> and JSON/XML-RPC method <reload> to reload the configuration from disk and reinitialize the program; the reload can be performed from web-interface 2012-09-27 20:13:25 +00:00
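
A minimal sketch of triggering the new reload from a script; endpoint and credentials are assumptions.

```python
# Sketch only: endpoint URL and credentials are assumptions.
from xmlrpc.client import ServerProxy

nzbget = ServerProxy('http://nzbget:tegbzn6789@localhost:6789/xmlrpc')
nzbget.reload()  # re-reads the configuration from disk and reinitializes
```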
Andrey Prygunkov
fdc9464576 added subcommand <W> to remote command <-S/--scan> to scan synchronously (wait until the scan is completed); added parameter <SyncMode> to XML/JSON-RPC method <scan>; the command <Scan> in web-interface now waits for the scan to complete before reporting the status 2012-09-19 18:42:13 +00:00
Andrey Prygunkov
dc6c1a0fe1 with active option <AllowReProcess> the NZB is considered completed even if there are paused non-par-files (the paused non-par-files are treated the same way as paused par-files): as a result the reprocessable script is called 2012-09-18 02:47:44 +00:00
Andrey Prygunkov
78a73ac15f added missing turtle icon 2012-09-18 02:32:39 +00:00
Andrey Prygunkov
48891ed7c7 many improvements in web-interface UI: main tabs are better distinguishable; separate tab headers removed; handbrake button moved to navbar and renamed to pause/resume-button; animation on pause/resume-button better shows current state; two other important info-elements <current speed> and <remaining time> moved to the navbar as well; the search-edit moved to navbar too; the refresh-button has animation; the navbar is now fixed to the top on big screens; the speed limit is now set via click on <current speed> info; <statistics and status> are accessible via click on <remaining time>; the scan-button moved to add-dialog; due to the reduced number of toolbar buttons on the downloads-tab the ability to hide buttons on the toolbar was removed (not necessary anymore); the phone-theme is now less cluttered; added editing of nzbget and post-processing script settings; the settings-tab is searchable like other tabs; added new XML/JSON-RPC methods <config>, <loadconfig> and <saveconfig> 2012-09-16 11:38:44 +00:00
Andrey Prygunkov
d3fd5ba9ac fixed: url-downloads could fail when compiled with gzip-support (bug introduced in r440) 2012-09-08 06:40:59 +00:00
Andrey Prygunkov
2b6f575802 set svn keywords 2012-08-11 10:37:14 +00:00
Andrey Prygunkov
f604460d56 set svn keywords 2012-08-06 20:32:48 +00:00
Andrey Prygunkov
7472893e8e added built-in web-interface; new option <WebDir> 2012-08-04 13:13:49 +00:00
Andrey Prygunkov
eff074faae <index.html> is now returned by web-server for every directory-request, not only for the root one (</>) 2012-07-31 18:34:50 +00:00
Andrey Prygunkov
07c04b40b1 </index.html> is now returned by web-server when the root path </> is requested 2012-07-30 20:46:35 +00:00
Andrey Prygunkov
754adb545e eliminated few compiler warnings 2012-07-28 13:37:37 +00:00
Andrey Prygunkov
5fc04277c1 updated VC-project 2012-07-28 13:36:44 +00:00
Andrey Prygunkov
78f5fd3f71 fixed compilation error on linux (bug introduced in r449) 2012-07-18 21:27:03 +00:00
Andrey Prygunkov
0277c6b9bd improved handling of configuration errors: the program now does not terminate on errors but rather logs all of them and uses default option values 2012-07-16 20:42:33 +00:00
Andrey Prygunkov
e34b4b8ae7 fixed: RPC-method <log(0, IdFrom)> could return wrong results if the log was filtered with options <XXXTarget> 2012-07-16 20:10:21 +00:00
Andrey Prygunkov
c0de18f3aa fixed line endings 2012-07-16 19:41:33 +00:00
Andrey Prygunkov
4b78918347 fixed incompatibility with OpenSSL 1.0 (thanks to OpenWRT team for the patch) 2012-07-15 12:10:19 +00:00
Andrey Prygunkov
6606a883c5 improved the automatic installation (<make install>) to install all necessary files (not only the binary as it was before) 2012-07-14 20:04:11 +00:00
Andrey Prygunkov
30c1a64d31 renamed example configuration file and postprocessing script to make the installation easier 2012-07-14 13:53:28 +00:00
Andrey Prygunkov
91dbcc40aa fixed: remote command <-E/--edit> with option <GN> or <FN> did not work 2012-07-11 19:48:58 +00:00
Andrey Prygunkov
b0f5119ec0 added authorization via URL in RPC-server (example: http://localhost:6789/username:password/jsonrpc) 2012-07-10 20:04:13 +00:00
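
A sketch of using the URL-embedded credentials with JSON-RPC. The URL scheme is the one shown in the commit message; the credentials and the method name <version> are assumptions.

```python
# Sketch only: credentials and method name are assumptions.
import json
from urllib.request import Request, urlopen

url = 'http://localhost:6789/nzbget:tegbzn6789/jsonrpc'
payload = json.dumps({'method': 'version', 'params': [], 'id': 1}).encode()
request = Request(url, data=payload, headers={'Content-Type': 'application/json'})
print(json.load(urlopen(request)))
```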
Andrey Prygunkov
ed9aba18b8 added processing of http-request <OPTIONS> in RPC-server for better support of cross domain requests 2012-07-10 20:00:13 +00:00
Andrey Prygunkov
d507325378 added gzip-support to URL-downloader 2012-07-09 20:56:28 +00:00
Andrey Prygunkov
0a0546168b fixed memory leak in gzip-support (bug introduced in r436) 2012-07-05 19:09:28 +00:00
Andrey Prygunkov
5abbbe80d1 refactor: reordered classes 2012-07-05 16:47:29 +00:00
Andrey Prygunkov
4a6413f654 fixed error in configure script (bug introduced in r436) 2012-07-04 18:12:51 +00:00
Andrey Prygunkov
6c60244b26 added gzip-support to built-in web-server 2012-07-03 20:34:36 +00:00
Andrey Prygunkov
a384f0e6e9 prevent duplicate nzb-entries in the history 2012-07-03 20:21:11 +00:00
Andrey Prygunkov
2a56410543 improved performance of RPC-command <listgroups> 2012-07-02 20:04:35 +00:00
Andrey Prygunkov
7ef22fc1e0 fixed compilation error on linux (bug introduced in r432) 2012-07-01 19:54:45 +00:00
Andrey Prygunkov
5051d698c0 implemented built-in web-server 2012-07-01 19:31:06 +00:00
Andrey Prygunkov
1a87c08bc2 changed version naming scheme by removing the leading zero: current version is now called 9.0 instead of 0.9.0 (it's really the 9th major version of the program) 2012-07-01 17:23:08 +00:00
Andrey Prygunkov
ed0c5908ce fixed few compiler warnings 2012-07-01 14:56:28 +00:00
Andrey Prygunkov
f571ced9c5 when adding url via RPC the supplied filename (if not empty) has precedence over the original file name 2012-06-24 17:00:51 +00:00
Andrey Prygunkov
3110181a9f fixed: when adding url the nzb name was not set properly (bug introduced in r419) 2012-06-24 16:07:16 +00:00
Andrey Prygunkov
fcb7966f70 fixed: segfault in remote command <--list/-L> when used without subcommands (bug introduced in r422) 2012-06-23 21:18:47 +00:00
Andrey Prygunkov
be2945a16f fixed a resource leak (socket) which could occur when an active download was deleted from queue 2012-06-23 19:29:43 +00:00
Andrey Prygunkov
6b3326ad42 restored accidental change of Connection.cpp in r423 (should be committed as a separate changeset) 2012-06-23 19:27:36 +00:00
Andrey Prygunkov
3e81a03087 refactor: corrected inconsistent include of <config.h> 2012-06-23 18:58:56 +00:00
Andrey Prygunkov
3778430ead in remote command <--list/-l> with subcommands <GR> and <FR> the regex-matching is now performed on the server; that ensures the list-command selects the same records as the edit-command (when server and client have different implementations of POSIX ERE) 2012-06-22 18:30:14 +00:00
Andrey Prygunkov
31bd251f37 added support for regular expressions (POSIX ERE Syntax) in remote commands <--list/-L> and <--edit/-E> using new subcommands <GR> and <FR> 2012-06-20 22:53:03 +00:00
Andrey Prygunkov
f49f01ec85 refactor: split class <Util> into <Util> and <Webtil> 2012-06-20 22:15:09 +00:00
Andrey Prygunkov
1bd6721af9 added options <GN> and <FN> for remote command <--edit/-E>. With these options the name of group or file can be used in edit-command instead of file ID 2012-06-19 15:08:23 +00:00
Andrey Prygunkov
4a069266d8 added new field <name> to nzb-info-object. It is initially set to the cleaned up name of the nzb-file. The renaming of the group changes this field. All RPC-methods related to nzb-object return the new field, the old field <NZBNicename> is now deprecated. The option <MergeNZB> now checks the <name>-field instead of <nzbfilename> (the latter is not changed when the nzb is renamed). New env-var-parameter <NZBPP_NZBNAME> for post-processing script. 2012-06-11 15:09:03 +00:00
Andrey Prygunkov
d10d7f3f02 fixed: RPC-Command <history> was not returning the UrlStatus correctly in JSON-RPC (bug introduced in r414) 2012-06-08 11:06:00 +00:00
Andrey Prygunkov
05adbb1325 fixed: when adding a failed URL to the history, it was added to the end instead of the top of the history (bug introduced in r414) 2012-06-08 11:04:23 +00:00
Andrey Prygunkov
ca8719b42b fixed: after renaming of a group, the new name was not displayed by remote commands <-L G> and <-C in curses mode> 2012-05-27 12:59:51 +00:00
Andrey Prygunkov
ec80850e76 improved error reporting when trying to download a HTTPS-URL and the program was compiled without TLS/SSL support 2012-05-04 14:55:49 +00:00
Andrey Prygunkov
ab75a8b3e5 added the ability to queue URLs. The program automatically downloads nzb-files from given URLs and puts them into the download queue. When multiple URLs are added in a short time, they are put into a special URL-queue. The number of simultaneous URL-downloads is controlled via the new option UrlConnections. The new option ReloadUrlQueue controls whether the URL-queue should be reloaded after the program is restarted (if the URL-queue was not empty). New switch <-U> for remote-command <--append/-A> to queue a URL. New subcommand <-U> in the remote command <--list/-L> prints the current URL-queue. If a URL-download fails, the URL is moved into history. With subcommand <-R> of command <--edit> the failed URL can be returned to the URL-queue for redownload. The remote command <--list/-L> for history can now print the infos for URL history items. New XML/JSON-RPC command <appendurl> to add a URL or multiple URLs for download. New XML/JSON-RPC command <urlqueue> returns the items from the URL-queue. The XML/JSON-RPC command <history> was extended to provide infos about URL history items. The URL-queue obeys the pause-state of the download queue. The URL-downloads support HTTP and HTTPS protocols. 2012-05-03 13:47:44 +00:00
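
A rough guess at queueing a URL over XML-RPC. The exact appendurl parameter list in this version is an assumption (nzb name, category, priority, add-to-top flag, the URL itself), so treat this purely as a sketch.

```python
# Sketch only: appendurl signature is assumed; URL and category are placeholders.
from xmlrpc.client import ServerProxy

nzbget = ServerProxy('http://nzbget:tegbzn6789@localhost:6789/xmlrpc')
nzbget.appendurl('example.nzb', 'movies', 0, False,
                 'http://indexer.example.com/get/12345')
```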
Andrey Prygunkov
d00c8119fa removed references to <NetAddress.c/h> from VS-Project 2012-05-03 11:00:54 +00:00
Andrey Prygunkov
2b7d188677 removed NetAddress.cpp/h 2012-05-03 10:53:18 +00:00
Andrey Prygunkov
12c09693bd refactoring: removed class <NetAddress>. That makes <Connection>-class more transparent and easier to use. The TLS-initializing moved from <NNTPConnection> to <Connection> 2012-05-03 10:51:13 +00:00
Andrey Prygunkov
87793b3dc3 updated version string to 0.9.0-testing 2012-05-03 10:23:00 +00:00
Andrey Prygunkov
7ce8c0b966 updated version string, ChangeLog and README (preparing to release 0.8.0) 2012-04-29 16:15:25 +00:00
Andrey Prygunkov
a831944b14 added the automatic configuring of required signal handling logic to better support BSD without breaking the compatibility with certain Linux systems 2012-03-16 22:03:55 +00:00
Andrey Prygunkov
17533d2c61 fixed a compatibility issue with OpenBSD (and possibly other BSD based systems) 2012-01-14 20:06:53 +00:00
Andrey Prygunkov
bee1d6beed fixed the incorrect displaying of sizes bigger than 4 GB on 64 bit systems 2012-01-12 21:25:08 +00:00
Andrey Prygunkov
a49ae076a5 improved the parsing of filename from article subject 2012-01-01 20:51:23 +00:00
Andrey Prygunkov
4856503a33 fixed: par-repair could fail when the filenames were not correctly parsed from article subjects 2011-12-29 21:30:25 +00:00
Andrey Prygunkov
0d4560f54e fixed a compilation error on some windows versions 2011-12-29 14:04:11 +00:00
Andrey Prygunkov
c5f7c2ace7 fixed spelling errors in the example configuration file 2011-12-29 14:02:10 +00:00
Andrey Prygunkov
f8c03fa48c fixed a bug causing error on decoding of input data in JSON-RPC 2011-11-18 20:46:48 +00:00
Andrey Prygunkov
8b6b34d7c0 fixed incorrect displaying of group sizes between 2GB and 4GB on many 64-bit OSes 2011-06-15 07:27:44 +00:00
Andrey Prygunkov
1fd7001e0c corrected a spelling error 2011-05-24 12:57:47 +00:00
Andrey Prygunkov
2631550c2f corrected the address of the Free Software Foundation in the copyright notice; corrected the spelling of the author's name (caused by new rules for transliterating cyrillic names to the latin alphabet / english spelling) 2011-05-24 12:52:41 +00:00
Andrey Prygunkov
01734fadcf fixed incorrect displaying of group sizes bigger than 4GB on many 64-bit OSes 2011-03-30 21:04:58 +00:00
Andrey Prygunkov
33cead0b03 updated descriptions in example config file 2011-03-24 08:36:24 +00:00
Andrey Prygunkov
08f86cbcf9 added priorities; new action <I> for remote command <--edit/-E> to set priorities for groups or individual files; new actions <SetGroupPriority> and <SetFilePriority> of RPC-command <EditQueue>; remote command <--list/-L> prints priorities and indicates files or groups being downloaded; ncurses-frontend prints priorities and indicates files or groups being downloaded; new command <PRIORITY> to set priority of nzb-file from nzbprocess-script; RPC-commands <ListGroups> and <ListFiles> return priorities and indicate files or groups being downloaded 2011-03-12 11:48:13 +00:00
Andrey Prygunkov
a83b96c74d fixed compilation error on Posix (bug introduced in r391) 2011-03-04 11:11:35 +00:00
Andrey Prygunkov
7cf0ddc81b eliminated small memory leak (bug introduced in r391) 2011-03-01 11:56:26 +00:00
Andrey Prygunkov
41640b5215 added new option <AccurateRate>, which enables synchronisation in the speed meter; that makes the indicated speed more accurate by eliminating measurement errors possible due to thread conflicts; thanks to an anonymous nzbget user for the patch 2011-02-05 15:48:59 +00:00
Andrey Prygunkov
e1abedbfa4 fixed: article IDs containing special xml-characters were not parsed correctly (bug introduced in r386) 2010-12-07 16:22:31 +00:00
Andrey Prygunkov
a1482b9781 added renaming of groups; new subcommand <N> for command <--edit/-E>; new action <SetName> for RPC-method <editqueue> 2010-08-11 13:30:34 +00:00
Andrey Prygunkov
8a6d6ae771 option <DirectWrite> now works efficiently on Windows with NTFS partitions 2010-07-23 14:42:16 +00:00
Andrey Prygunkov
a67e8314af added URL-based-authentication as alternative to HTTP-header authentication for XML- and JSON-RPC 2010-07-18 12:46:28 +00:00
Andrey Prygunkov
f5e7497913 fixed: nzb-files containing umlauts and other special characters could not be parsed - replaced XML-Reader with SAX-Parser; the issue was fixed only on POSIX 2010-07-05 14:52:42 +00:00
Andrey Prygunkov
e4c7c601d7 updated version string to 0.8.0-testing 2010-06-10 15:23:42 +00:00
167 changed files with 67232 additions and 11410 deletions


@@ -1,4 +1,4 @@
 nzbget:
 Sven Henkel <sidddy@users.sourceforge.net> (versions 0.1.0 - ?)
 Bo Cordes Petersen <placebodk@users.sourceforge.net> (versions ? - 0.2.3)
-Andrei Prygounkov <hugbug@users.sourceforge.net> (versions 0.3.0 and later)
+Andrey Prygunkov <hugbug@users.sourceforge.net> (versions 0.3.0 and later)


@@ -2,7 +2,7 @@
  * This file is part of nzbget
  *
  * Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
- * Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
+ * Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
  *
  * You should have received a copy of the GNU General Public License
  * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
  *
  * $Revision$
  * $Date$
@@ -25,7 +25,7 @@
 #ifdef HAVE_CONFIG_H
-#include <config.h>
+#include "config.h"
 #endif
 
 #ifdef WIN32
@@ -34,7 +34,7 @@
 #include <stdlib.h>
 #include <string.h>
-#include <cstdio>
+#include <stdio.h>
 #ifdef WIN32
 #include <direct.h>
 #else
@@ -77,22 +77,10 @@ ArticleDownloader::~ArticleDownloader()
 {
 	debug("Destroying ArticleDownloader");
 
-	if (m_szTempFilename)
-	{
-		free(m_szTempFilename);
-	}
-	if (m_szArticleFilename)
-	{
-		free(m_szArticleFilename);
-	}
-	if (m_szInfoName)
-	{
-		free(m_szInfoName);
-	}
-	if (m_szOutputFilename)
-	{
-		free(m_szOutputFilename);
-	}
+	free(m_szTempFilename);
+	free(m_szArticleFilename);
+	free(m_szInfoName);
+	free(m_szOutputFilename);
 }
 
 void ArticleDownloader::SetTempFilename(const char* v)
@@ -110,17 +98,35 @@ void ArticleDownloader::SetInfoName(const char * v)
m_szInfoName = strdup(v);
}
void ArticleDownloader::SetStatus(EStatus eStatus)
{
m_eStatus = eStatus;
Notify(NULL);
}
/*
* How server management (for one particular article) works:
- there is a list of failed servers which is initially empty;
- level is initially 0;
<loop>
- request a connection from server pool for current level;
Exception: this step is skipped for the very first download attempt, because a
level-0 connection is initially passed from queue manager;
- try to download from server;
- if connection to server cannot be established or download fails due to interrupted connection,
try again (as many times as needed without limit) the same server until connection is OK;
- if download fails with error "Not-Found" (article or group not found) or with CRC error,
add the server to failed server list;
- if download fails with general failure error (article incomplete, other unknown error
codes), try the same server again as many times as defined by option <Retries>; if all attempts
fail, add the server to failed server list;
- if all servers from current level were tried, increase level;
- if all servers from all levels were tried, break the loop with failure status.
<end-loop>
*/
void ArticleDownloader::Run()
{
debug("Entering ArticleDownloader-loop");
SetStatus(adRunning);
BuildOutputFilename();
m_szResultFilename = m_pArticleInfo->GetResultFilename();
if (g_pOptions->GetContinuePartial())
@@ -129,68 +135,61 @@ void ArticleDownloader::Run()
{
// file exists from previous program's start
detail("Article %s already downloaded, skipping", m_szInfoName);
SetStatus(adFinished);
FreeConnection(true);
SetStatus(adFinished);
Notify(NULL);
return;
}
}
int iRemainedDownloadRetries = g_pOptions->GetRetries() > 0 ? g_pOptions->GetRetries() : 1;
#ifdef THREADCONNECT_WORKAROUND
// NOTE: about iRemainedConnectRetries:
// Sometimes connections just do not want to work in a particular thread,
// regardless of retry count. However they work in other threads.
// If ArticleDownloader can't start download after many attempts, it terminates
// and let QueueCoordinator retry the article in a new thread.
// It wasn't confirmed that this workaround actually helps.
// Therefore it is disabled by default. Define symbol "THREADCONNECT_WORKAROUND"
// to activate the workaround.
int iRemainedConnectRetries = iRemainedDownloadRetries > 5 ? iRemainedDownloadRetries * 2 : 10;
#endif
EStatus Status = adFailed;
int iMaxLevel = g_pServerPool->GetMaxLevel();
int* LevelStatus = (int*)malloc((iMaxLevel + 1) * sizeof(int));
for (int i = 0; i <= iMaxLevel; i++)
{
LevelStatus[i] = 0;
}
int level = 0;
while (!IsStopped() && iRemainedDownloadRetries > 0)
{
SetLastUpdateTimeNow();
int iRetries = g_pOptions->GetRetries() > 0 ? g_pOptions->GetRetries() : 1;
int iRemainedRetries = iRetries;
Servers failedServers;
failedServers.reserve(g_pServerPool->GetServers()->size());
NewsServer* pWantServer = NULL;
NewsServer* pLastServer = NULL;
int iLevel = 0;
int iServerConfigGeneration = g_pServerPool->GetGeneration();
while (!IsStopped())
{
Status = adFailed;
while (!IsStopped() && !m_pConnection)
SetStatus(adWaiting);
while (!m_pConnection && !(IsStopped() || iServerConfigGeneration != g_pServerPool->GetGeneration()))
{
m_pConnection = g_pServerPool->GetConnection(level);
m_pConnection = g_pServerPool->GetConnection(iLevel, pWantServer, &failedServers);
usleep(5 * 1000);
}
SetLastUpdateTimeNow();
SetStatus(adRunning);
if (IsStopped())
{
Status = adFailed;
break;
}
if (g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2())
if (IsStopped() || g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2() ||
iServerConfigGeneration != g_pServerPool->GetGeneration())
{
Status = adRetry;
break;
}
pLastServer = m_pConnection->GetNewsServer();
m_pConnection->SetSuppressErrors(false);
// test connection
bool bConnected = m_pConnection && m_pConnection->Connect();
if (bConnected && !IsStopped())
{
// Okay, we got a Connection. Now start downloading.
detail("Downloading %s @ %s", m_szInfoName, m_pConnection->GetServer()->GetHost());
NewsServer* pNewsServer = m_pConnection->GetNewsServer();
detail("Downloading %s @ %s (%s)", m_szInfoName, pNewsServer->GetName(), m_pConnection->GetHost());
Status = Download();
if (Status == adFinished || Status == adFailed || Status == adNotFound || Status == adCrcError)
{
m_ServerStats.SetStat(pNewsServer->GetID(), Status == adFinished ? 1 : 0, Status == adFinished ? 0 : 1, false);
}
}
if (bConnected)
@@ -200,9 +199,6 @@ void ArticleDownloader::Run()
 			m_pConnection->Disconnect();
 			bConnected = false;
 			Status = adFailed;
-#ifdef THREADCONNECT_WORKAROUND
-			iRemainedConnectRetries--;
-#endif
 		}
 		else
 		{
@@ -212,89 +208,109 @@ void ArticleDownloader::Run()
// free the connection, to prevent starting of thousands of threads
// (cause each of them will also free it's connection after the
// same connect-error).
FreeConnection(Status == adFinished);
FreeConnection(Status == adFinished || Status == adNotFound);
}
}
#ifdef THREADCONNECT_WORKAROUND
else
{
iRemainedConnectRetries--;
}
if (iRemainedConnectRetries == 0)
if (Status == adFinished || Status == adFatalError)
{
debug("Can't connect from this thread, retry later from another");
Status = adRetry;
break;
}
#endif
if (g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2())
{
Status = adRetry;
break;
}
if (((Status == adFailed) || (Status == adCrcError && g_pOptions->GetRetryOnCrcError())) &&
(iRemainedDownloadRetries > 1 || !bConnected) && !IsStopped())
pWantServer = NULL;
if (bConnected && Status == adFailed)
{
iRemainedRetries--;
}
if (!bConnected || (Status == adFailed && iRemainedRetries > 0))
{
pWantServer = pLastServer;
}
if (pWantServer &&
!(IsStopped() || g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2() ||
iServerConfigGeneration != g_pServerPool->GetGeneration()))
{
detail("Waiting %i sec to retry", g_pOptions->GetRetryInterval());
SetStatus(adWaiting);
int msec = 0;
while (!IsStopped() && (msec < g_pOptions->GetRetryInterval() * 1000))
while (!(IsStopped() || g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2() ||
iServerConfigGeneration != g_pServerPool->GetGeneration()) &&
msec < g_pOptions->GetRetryInterval() * 1000)
{
usleep(100 * 1000);
msec += 100;
}
SetLastUpdateTimeNow();
SetStatus(adRunning);
}
if (IsStopped())
{
Status = adFailed;
break;
}
if ((Status == adFinished) || (Status == adFatalError) ||
(Status == adCrcError && !g_pOptions->GetRetryOnCrcError()))
if (IsStopped() || g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2() ||
iServerConfigGeneration != g_pServerPool->GetGeneration())
{
Status = adRetry;
break;
}
LevelStatus[level] = Status;
if (!pWantServer)
{
failedServers.push_back(pLastServer);
bool bAllLevelNotFound = true;
for (int lev = 0; lev <= iMaxLevel; lev++)
{
if (LevelStatus[lev] != adNotFound)
{
bAllLevelNotFound = false;
break;
}
}
if (bAllLevelNotFound)
{
if (iMaxLevel > 0)
{
warn("Article %s @ all servers failed: Article not found", m_szInfoName);
}
break;
}
// if all servers on the current level have been tried, increase the level;
// if all servers on all levels have been tried, break the loop with failure status;
// connect-errors are not counted, only article- and group-errors
if (bConnected)
{
level++;
if (level > iMaxLevel)
bool bAllServersOnLevelFailed = true;
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++)
{
level = 0;
NewsServer* pCandidateServer = *it;
if (pCandidateServer->GetNormLevel() == iLevel)
{
bool bServerFailed = !pCandidateServer->GetActive() || pCandidateServer->GetMaxConnections() == 0;
if (!bServerFailed)
{
for (Servers::iterator it = failedServers.begin(); it != failedServers.end(); it++)
{
NewsServer* pIgnoreServer = *it;
if (pIgnoreServer == pCandidateServer ||
(pIgnoreServer->GetGroup() > 0 && pIgnoreServer->GetGroup() == pCandidateServer->GetGroup() &&
pIgnoreServer->GetNormLevel() == pCandidateServer->GetNormLevel()))
{
bServerFailed = true;
break;
}
}
}
if (!bServerFailed)
{
bAllServersOnLevelFailed = false;
break;
}
}
}
iRemainedDownloadRetries--;
if (bAllServersOnLevelFailed)
{
if (iLevel < g_pServerPool->GetMaxNormLevel())
{
detail("Article %s @ all level %i servers failed, increasing level", m_szInfoName, iLevel);
iLevel++;
}
else
{
detail("Article %s @ all servers failed", m_szInfoName);
Status = adFailed;
break;
}
}
iRemainedRetries = iRetries;
}
}
FreeConnection(Status == adFinished);
free(LevelStatus);
if (m_bDuplicate)
{
Status = adFinished;
@@ -305,19 +321,19 @@ void ArticleDownloader::Run()
Status = adFailed;
}
if (IsStopped())
{
detail("Download %s cancelled", m_szInfoName);
Status = adRetry;
}
if (Status == adFailed)
{
if (IsStopped())
{
detail("Download %s cancelled", m_szInfoName);
}
else
{
warn("Download %s failed", m_szInfoName);
}
detail("Download %s failed", m_szInfoName);
}
SetStatus(Status);
Notify(NULL);
debug("Exiting ArticleDownloader-loop");
}
@@ -354,7 +370,7 @@ ArticleDownloader::EStatus ArticleDownloader::Download()
for (int retry = 3; retry > 0; retry--)
{
szResponse = m_pConnection->Request(tmp);
if (szResponse && !strncmp(szResponse, "2", 1))
if ((szResponse && !strncmp(szResponse, "2", 1)) || m_pConnection->GetAuthError())
{
break;
}
@@ -405,7 +421,8 @@ ArticleDownloader::EStatus ArticleDownloader::Download()
{
if (!IsStopped())
{
warn("Article %s @ %s failed: Unexpected end of article", m_szInfoName, m_pConnection->GetServer()->GetHost());
detail("Article %s @ %s (%s) failed: Unexpected end of article", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost());
}
Status = adFailed;
break;
@@ -439,7 +456,8 @@ ArticleDownloader::EStatus ArticleDownloader::Download()
if (strncmp(p, m_pArticleInfo->GetMessageID(), strlen(m_pArticleInfo->GetMessageID())))
{
if (char* e = strrchr(p, '\r')) *e = '\0'; // remove trailing CR-character
warn("Article %s @ %s failed: Wrong message-id, expected %s, returned %s", m_szInfoName, m_pConnection->GetServer()->GetHost(), m_pArticleInfo->GetMessageID(), p);
detail("Article %s @ %s (%s) failed: Wrong message-id, expected %s, returned %s", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost(), m_pArticleInfo->GetMessageID(), p);
Status = adFailed;
break;
}
@@ -467,7 +485,8 @@ ArticleDownloader::EStatus ArticleDownloader::Download()
if (!bEnd && Status == adRunning && !IsStopped())
{
warn("Article %s @ %s failed: article incomplete", m_szInfoName, m_pConnection->GetServer()->GetHost());
detail("Article %s @ %s (%s) failed: article incomplete", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost());
Status = adFailed;
}
@@ -494,18 +513,21 @@ ArticleDownloader::EStatus ArticleDownloader::CheckResponse(const char* szRespon
{
if (!IsStopped())
{
warn("Article %s @ %s failed, %s: Connection closed by remote host", m_szInfoName, m_pConnection->GetServer()->GetHost(), szComment);
detail("Article %s @ %s (%s) failed, %s: Connection closed by remote host", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost(), szComment);
}
return adConnectError;
}
else if (m_pConnection->GetAuthError() || !strncmp(szResponse, "400", 3) || !strncmp(szResponse, "499", 3))
{
warn("Article %s @ %s failed, %s: %s", m_szInfoName, m_pConnection->GetServer()->GetHost(), szComment, szResponse);
detail("Article %s @ %s (%s) failed, %s: %s", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost(), szComment, szResponse);
return adConnectError;
}
else if (!strncmp(szResponse, "41", 2) || !strncmp(szResponse, "42", 2) || !strncmp(szResponse, "43", 2))
{
warn("Article %s @ %s failed, %s: %s", m_szInfoName, m_pConnection->GetServer()->GetHost(), szComment, szResponse);
detail("Article %s @ %s (%s) failed, %s: %s", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost(), szComment, szResponse);
return adNotFound;
}
else if (!strncmp(szResponse, "2", 1))
@@ -516,7 +538,8 @@ ArticleDownloader::EStatus ArticleDownloader::CheckResponse(const char* szRespon
else
{
// unknown error, no special handling
warn("Article %s @ %s failed, %s: %s", m_szInfoName, m_pConnection->GetServer()->GetHost(), szComment, szResponse);
detail("Article %s @ %s (%s) failed, %s: %s", m_szInfoName,
m_pConnection->GetNewsServer()->GetName(), m_pConnection->GetHost(), szComment, szResponse);
return adFailed;
}
}
@@ -541,13 +564,13 @@ bool ArticleDownloader::Write(char* szLine, int iLen)
}
else
{
warn("Decoding %s failed: unsupported encoding", m_szInfoName);
detail("Decoding %s failed: unsupported encoding", m_szInfoName);
return false;
}
if (!bOK)
{
debug("Failed line: %s", szLine);
warn("Decoding %s failed", m_szInfoName);
detail("Decoding %s failed", m_szInfoName);
}
return bOK;
}
@@ -566,10 +589,13 @@ bool ArticleDownloader::PrepareFile(char* szLine)
{
if (!strncmp(szLine, "=ybegin ", 8))
{
if (g_pOptions->GetDupeCheck())
if (g_pOptions->GetDupeCheck() &&
m_pFileInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
!m_pFileInfo->GetNZBInfo()->GetManyDupeFiles())
{
m_pFileInfo->LockOutputFile();
if (!m_pFileInfo->GetOutputInitialized())
bool bOutputInitialized = m_pFileInfo->GetOutputInitialized();
if (!bOutputInitialized)
{
char* pb = strstr(szLine, " name=");
if (pb)
@@ -583,11 +609,6 @@ bool ArticleDownloader::PrepareFile(char* szLine)
strncpy(m_szArticleFilename, pb, pe - pb);
m_szArticleFilename[pe - pb] = '\0';
}
if (m_pFileInfo->IsDupe(m_szArticleFilename))
{
m_bDuplicate = true;
return false;
}
}
}
if (!g_pOptions->GetDirectWrite())
@@ -595,6 +616,12 @@ bool ArticleDownloader::PrepareFile(char* szLine)
m_pFileInfo->SetOutputInitialized(true);
}
m_pFileInfo->UnlockOutputFile();
if (!bOutputInitialized && m_szArticleFilename &&
Util::FileExists(m_pFileInfo->GetNZBInfo()->GetDestDir(), m_szArticleFilename))
{
m_bDuplicate = true;
return false;
}
}
if (g_pOptions->GetDirectWrite())
@@ -607,9 +634,9 @@ bool ArticleDownloader::PrepareFile(char* szLine)
{
pb += 6; //=strlen(" size=")
long iArticleFilesize = atol(pb);
if (!Util::SetFileSize(m_szOutputFilename, iArticleFilesize))
if (!CreateOutputFile(iArticleFilesize))
{
error("Could not create file %s!", m_szOutputFilename);
m_pFileInfo->UnlockOutputFile();
return false;
}
m_pFileInfo->SetOutputInitialized(true);
@@ -636,7 +663,9 @@ bool ArticleDownloader::PrepareFile(char* szLine)
m_pOutFile = fopen(szFilename, bDirectWrite ? "rb+" : "wb");
if (!m_pOutFile)
{
error("Could not %s file %s", bDirectWrite ? "open" : "create", szFilename);
char szSysErrStr[256];
error("Could not %s file %s! Errcode: %i, %s", bDirectWrite ? "open" : "create", szFilename,
errno, Util::GetLastErrorMessage(szSysErrStr, sizeof(szSysErrStr)));
return false;
}
if (g_pOptions->GetWriteBufferSize() == -1)
@@ -652,6 +681,77 @@ bool ArticleDownloader::PrepareFile(char* szLine)
return true;
}
/* creates the output file and subdirectories */
bool ArticleDownloader::CreateOutputFile(int iSize)
{
if (g_pOptions->GetDirectWrite() && Util::FileExists(m_szOutputFilename) &&
Util::FileSize(m_szOutputFilename) == iSize)
{
// keep existing old file from previous program session
return true;
}
// delete any existing old file from a previous program session
remove(m_szOutputFilename);
// ensure the directory exists
char szDestDir[1024];
int iMaxlen = Util::BaseFileName(m_szOutputFilename) - m_szOutputFilename;
if (iMaxlen > 1024-1) iMaxlen = 1024-1;
strncpy(szDestDir, m_szOutputFilename, iMaxlen);
szDestDir[iMaxlen] = '\0';
char szErrBuf[1024];
if (!Util::ForceDirectories(szDestDir, szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s: %s", szDestDir, szErrBuf);
return false;
}
if (!Util::CreateSparseFile(m_szOutputFilename, iSize))
{
error("Could not create file %s", m_szOutputFilename);
return false;
}
return true;
}
void ArticleDownloader::BuildOutputFilename()
{
char szFilename[1024];
snprintf(szFilename, 1024, "%s%i.%03i", g_pOptions->GetTempDir(), m_pFileInfo->GetID(), m_pArticleInfo->GetPartNumber());
szFilename[1024-1] = '\0';
m_pArticleInfo->SetResultFilename(szFilename);
char tmpname[1024];
snprintf(tmpname, 1024, "%s.tmp", szFilename);
tmpname[1024-1] = '\0';
SetTempFilename(tmpname);
if (g_pOptions->GetDirectWrite())
{
m_pFileInfo->LockOutputFile();
if (m_pFileInfo->GetOutputFilename())
{
strncpy(szFilename, m_pFileInfo->GetOutputFilename(), 1024);
szFilename[1024-1] = '\0';
}
else
{
snprintf(szFilename, 1024, "%s%c%i.out.tmp", m_pFileInfo->GetNZBInfo()->GetDestDir(), (int)PATH_SEPARATOR, m_pFileInfo->GetID());
szFilename[1024-1] = '\0';
m_pFileInfo->SetOutputFilename(szFilename);
}
m_pFileInfo->UnlockOutputFile();
SetOutputFilename(szFilename);
}
}
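// Illustrative example of the names built above (the paths are hypothetical):
// with TempDir=/tmp/nzbget/, FileID=12 and PartNumber=3 the article result file
// becomes "/tmp/nzbget/12.003", the temporary download file "/tmp/nzbget/12.003.tmp"
// and, in DirectWrite mode, the shared output file "<DestDir>/12.out.tmp".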
ArticleDownloader::EStatus ArticleDownloader::DecodeCheck()
{
bool bDirectWrite = g_pOptions->GetDirectWrite() && m_eFormat == Decoder::efYenc;
@@ -672,8 +772,8 @@ ArticleDownloader::EStatus ArticleDownloader::DecodeCheck()
}
else
{
warn("Decoding %s failed: no binary data or unsupported encoding format", m_szInfoName);
return adFatalError;
detail("Decoding %s failed: no binary data or unsupported encoding format", m_szInfoName);
return adFailed;
}
Decoder::EStatus eStatus = pDecoder->Check();
@@ -717,27 +817,27 @@ ArticleDownloader::EStatus ArticleDownloader::DecodeCheck()
remove(m_szResultFilename);
if (eStatus == Decoder::eCrcError)
{
warn("Decoding %s failed: CRC-Error", m_szInfoName);
detail("Decoding %s failed: CRC-Error", m_szInfoName);
return adCrcError;
}
else if (eStatus == Decoder::eArticleIncomplete)
{
warn("Decoding %s failed: article incomplete", m_szInfoName);
detail("Decoding %s failed: article incomplete", m_szInfoName);
return adFailed;
}
else if (eStatus == Decoder::eInvalidSize)
{
warn("Decoding %s failed: size mismatch", m_szInfoName);
detail("Decoding %s failed: size mismatch", m_szInfoName);
return adFailed;
}
else if (eStatus == Decoder::eNoBinaryData)
{
warn("Decoding %s failed: no binary data found", m_szInfoName);
detail("Decoding %s failed: no binary data found", m_szInfoName);
return adFailed;
}
else
{
warn("Decoding %s failed", m_szInfoName);
detail("Decoding %s failed", m_szInfoName);
return adFailed;
}
}
@@ -823,11 +923,18 @@ void ArticleDownloader::CompleteFileParts()
bool bDirectWrite = g_pOptions->GetDirectWrite() && m_pFileInfo->GetOutputInitialized();
char szNZBNiceName[1024];
m_pFileInfo->GetNZBInfo()->GetNiceNZBName(szNZBNiceName, 1024);
char szNZBName[1024];
char szNZBDestDir[1024];
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
strncpy(szNZBName, m_pFileInfo->GetNZBInfo()->GetName(), 1024);
strncpy(szNZBDestDir, m_pFileInfo->GetNZBInfo()->GetDestDir(), 1024);
g_pDownloadQueueHolder->UnlockQueue();
szNZBName[1024-1] = '\0';
szNZBDestDir[1024-1] = '\0';
char InfoFilename[1024];
snprintf(InfoFilename, 1024, "%s%c%s", szNZBNiceName, (int)PATH_SEPARATOR, m_pFileInfo->GetFilename());
snprintf(InfoFilename, 1024, "%s%c%s", szNZBName, (int)PATH_SEPARATOR, m_pFileInfo->GetFilename());
InfoFilename[1024-1] = '\0';
if (!g_pOptions->GetDecode())
@@ -843,33 +950,17 @@ void ArticleDownloader::CompleteFileParts()
detail("Joining articles for %s", InfoFilename);
}
char szNZBDestDir[1024];
// the locking is needed for accessing the memebers of NZBInfo
g_pDownloadQueueHolder->LockQueue();
strncpy(szNZBDestDir, m_pFileInfo->GetNZBInfo()->GetDestDir(), 1024);
g_pDownloadQueueHolder->UnlockQueue();
szNZBDestDir[1024-1] = '\0';
// Ensure the DstDir is created
if (!Util::ForceDirectories(szNZBDestDir))
char szErrBuf[1024];
if (!Util::ForceDirectories(szNZBDestDir, szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s! Errcode: %i", szNZBDestDir, errno);
error("Could not create directory %s: %s", szNZBDestDir, szErrBuf);
SetStatus(adJoined);
return;
}
char ofn[1024];
snprintf(ofn, 1024, "%s%c%s", szNZBDestDir, (int)PATH_SEPARATOR, m_pFileInfo->GetFilename());
ofn[1024-1] = '\0';
// prevent overwriting existing files
int dupcount = 0;
while (Util::FileExists(ofn))
{
dupcount++;
snprintf(ofn, 1024, "%s%c%s_duplicate%d", szNZBDestDir, (int)PATH_SEPARATOR, m_pFileInfo->GetFilename(), dupcount);
ofn[1024-1] = '\0';
}
Util::MakeUniqueFilename(ofn, 1024, szNZBDestDir, m_pFileInfo->GetFilename());
FILE* outfile = NULL;
char tmpdestfile[1024];
@@ -963,10 +1054,7 @@ void ArticleDownloader::CompleteFileParts()
}
}
if (buffer)
{
free(buffer);
}
free(buffer);
if (outfile)
{
@@ -983,6 +1071,24 @@ void ArticleDownloader::CompleteFileParts()
{
error("Could not move file %s to %s! Errcode: %i", m_szOutputFilename, ofn, errno);
}
// if destination directory was changed delete the old directory (if empty)
int iLen = strlen(szNZBDestDir);
if (!(!strncmp(szNZBDestDir, m_szOutputFilename, iLen) &&
(m_szOutputFilename[iLen] == PATH_SEPARATOR || m_szOutputFilename[iLen] == ALT_PATH_SEPARATOR)))
{
debug("Checking old dir for: %s", m_szOutputFilename);
char szOldDestDir[1024];
int iMaxlen = Util::BaseFileName(m_szOutputFilename) - m_szOutputFilename;
if (iMaxlen > 1024-1) iMaxlen = 1024-1;
strncpy(szOldDestDir, m_szOutputFilename, iMaxlen);
szOldDestDir[iMaxlen] = '\0';
if (Util::DirEmpty(szOldDestDir))
{
debug("Deleting old dir: %s", szOldDestDir);
rmdir(szOldDestDir);
}
}
}
if (!bDirectWrite || g_pOptions->GetContinuePartial())
@@ -1000,28 +1106,9 @@ void ArticleDownloader::CompleteFileParts()
}
else
{
warn("%i of %i article downloads failed for \"%s\"", iBrokenCount, m_pFileInfo->GetArticles()->size(), InfoFilename);
if (g_pOptions->GetRenameBroken())
{
char brokenfn[1024];
snprintf(brokenfn, 1024, "%s_broken", ofn);
brokenfn[1024-1] = '\0';
if (Util::MoveFile(ofn, brokenfn))
{
detail("Renaming broken file from %s to %s", ofn, brokenfn);
}
else
{
warn("Renaming broken file from %s to %s failed", ofn, brokenfn);
}
strncpy(ofn, brokenfn, 1024);
ofn[1024-1] = '\0';
}
else
{
detail("Not renaming broken file %s", ofn);
}
warn("%i of %i article downloads failed for \"%s\"",
iBrokenCount + m_pFileInfo->GetMissedArticles(),
m_pFileInfo->GetTotalArticles(), InfoFilename);
if (g_pOptions->GetCreateBrokenLog())
{
@@ -1029,12 +1116,14 @@ void ArticleDownloader::CompleteFileParts()
snprintf(szBrokenLogName, 1024, "%s%c_brokenlog.txt", szNZBDestDir, (int)PATH_SEPARATOR);
szBrokenLogName[1024-1] = '\0';
FILE* file = fopen(szBrokenLogName, "ab");
fprintf(file, "%s (%i/%i)%s", m_pFileInfo->GetFilename(), m_pFileInfo->GetArticles()->size() - iBrokenCount, m_pFileInfo->GetArticles()->size(), LINE_ENDING);
fprintf(file, "%s (%i/%i)%s", m_pFileInfo->GetFilename(),
m_pFileInfo->GetTotalArticles() - iBrokenCount - m_pFileInfo->GetMissedArticles(),
m_pFileInfo->GetTotalArticles(), LINE_ENDING);
fclose(file);
}
}
// the locking is needed for accessing the memebers of NZBInfo
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
m_pFileInfo->GetNZBInfo()->GetCompletedFiles()->push_back(strdup(ofn));
if (strcmp(m_pFileInfo->GetNZBInfo()->GetDestDir(), szNZBDestDir))
@@ -1055,9 +1144,10 @@ bool ArticleDownloader::MoveCompletedFiles(NZBInfo* pNZBInfo, const char* szOldD
}
// Ensure the DstDir is created
if (!Util::ForceDirectories(pNZBInfo->GetDestDir()))
char szErrBuf[1024];
if (!Util::ForceDirectories(pNZBInfo->GetDestDir(), szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s! Errcode: %i", pNZBInfo->GetDestDir(), errno);
error("Could not create directory %s: %s", pNZBInfo->GetDestDir(), szErrBuf);
return false;
}
@@ -1073,13 +1163,7 @@ bool ArticleDownloader::MoveCompletedFiles(NZBInfo* pNZBInfo, const char* szOldD
if (strcmp(szFileName, szNewFileName))
{
// prevent overwriting of existing files
int dupcount = 0;
while (Util::FileExists(szNewFileName))
{
dupcount++;
snprintf(szNewFileName, 1024, "%s%c%s_duplicate%d", pNZBInfo->GetDestDir(), (int)PATH_SEPARATOR, Util::BaseFileName(szFileName), dupcount);
szNewFileName[1024-1] = '\0';
}
Util::MakeUniqueFilename(szNewFileName, 1024, pNZBInfo->GetDestDir(), Util::BaseFileName(szFileName));
detail("Moving file %s to %s", szFileName, szNewFileName);
if (Util::MoveFile(szFileName, szNewFileName))

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -42,10 +42,10 @@ public:
{
adUndefined,
adRunning,
adWaiting,
adFinished,
adFailed,
adRetry,
adDecodeError,
adCrcError,
adDecoding,
adJoining,
@@ -72,13 +72,20 @@ private:
UDecoder m_UDecoder;
FILE* m_pOutFile;
bool m_bDuplicate;
ServerStatList m_ServerStats;
EStatus Download();
bool Write(char* szLine, int iLen);
bool PrepareFile(char* szLine);
bool CreateOutputFile(int iSize);
void BuildOutputFilename();
EStatus DecodeCheck();
void FreeConnection(bool bKeepConnected);
EStatus CheckResponse(const char* szResponse, const char* szComment);
void SetStatus(EStatus eStatus) { m_eStatus = eStatus; }
const char* GetTempFilename() { return m_szTempFilename; }
void SetTempFilename(const char* v);
void SetOutputFilename(const char* v);
public:
ArticleDownloader();
@@ -87,16 +94,13 @@ public:
FileInfo* GetFileInfo() { return m_pFileInfo; }
void SetArticleInfo(ArticleInfo* pArticleInfo) { m_pArticleInfo = pArticleInfo; }
ArticleInfo* GetArticleInfo() { return m_pArticleInfo; }
void SetStatus(EStatus eStatus);
EStatus GetStatus() { return m_eStatus; }
ServerStatList* GetServerStats() { return &m_ServerStats; }
virtual void Run();
virtual void Stop();
bool Terminate();
time_t GetLastUpdateTime() { return m_tLastUpdateTime; }
void SetLastUpdateTimeNow() { m_tLastUpdateTime = ::time(NULL); }
const char* GetTempFilename() { return m_szTempFilename; }
void SetTempFilename(const char* v);
void SetOutputFilename(const char* v);
const char* GetArticleFilename() { return m_szArticleFilename; }
void SetInfoName(const char* v);
const char* GetInfoName() { return m_szInfoName; }
@@ -111,7 +115,7 @@ class DownloadSpeedMeter
{
public:
virtual ~DownloadSpeedMeter() {};
virtual float CalcCurrentDownloadSpeed() = 0;
virtual int CalcCurrentDownloadSpeed() = 0;
virtual void AddSpeedReading(int iBytes) = 0;
};

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -34,8 +34,7 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <fstream>
#include <stdio.h>
#ifndef WIN32
#include <unistd.h>
#include <sys/socket.h>
@@ -48,19 +47,25 @@
#include "Log.h"
#include "Options.h"
#include "QueueCoordinator.h"
#include "UrlCoordinator.h"
#include "QueueEditor.h"
#include "PrePostProcessor.h"
#include "Util.h"
#include "DownloadInfo.h"
#include "Scanner.h"
extern Options* g_pOptions;
extern QueueCoordinator* g_pQueueCoordinator;
extern UrlCoordinator* g_pUrlCoordinator;
extern PrePostProcessor* g_pPrePostProcessor;
extern Scanner* g_pScanner;
extern void ExitProc();
extern void Reload();
const char* g_szMessageRequestNames[] =
{ "N/A", "Download", "Pause/Unpause", "List", "Set download rate", "Dump debug",
"Edit queue", "Log", "Quit", "Version", "Post-queue", "Write log", "Scan",
"Pause/Unpause postprocessor", "History" };
"Edit queue", "Log", "Quit", "Reload", "Version", "Post-queue", "Write log", "Scan",
"Pause/Unpause postprocessor", "History", "Download URL", "URL-queue" };
const unsigned int g_iMessageRequestSizes[] =
{ 0,
@@ -72,39 +77,41 @@ const unsigned int g_iMessageRequestSizes[] =
sizeof(SNZBEditQueueRequest),
sizeof(SNZBLogRequest),
sizeof(SNZBShutdownRequest),
sizeof(SNZBReloadRequest),
sizeof(SNZBVersionRequest),
sizeof(SNZBPostQueueRequest),
sizeof(SNZBWriteLogRequest),
sizeof(SNZBScanRequest),
sizeof(SNZBHistoryRequest)
sizeof(SNZBHistoryRequest),
sizeof(SNZBDownloadUrlRequest),
sizeof(SNZBUrlQueueRequest)
};
//*****************************************************************
// BinProcessor
BinRpcProcessor::BinRpcProcessor()
{
m_MessageBase.m_iSignature = (int)NZBMESSAGE_SIGNATURE;
}
void BinRpcProcessor::Execute()
{
// Read the first package which needs to be a request
int iBytesReceived = recv(m_iSocket, ((char*)&m_MessageBase) + sizeof(m_MessageBase.m_iSignature), sizeof(m_MessageBase) - sizeof(m_MessageBase.m_iSignature), 0);
if (iBytesReceived < 0)
if (!m_pConnection->Recv(((char*)&m_MessageBase) + sizeof(m_MessageBase.m_iSignature), sizeof(m_MessageBase) - sizeof(m_MessageBase.m_iSignature)))
{
warn("Non-nzbget request received on port %i from %s", g_pOptions->GetControlPort(), m_pConnection->GetRemoteAddr());
return;
}
// Make sure this is a nzbget request from a client
if ((int)ntohl(m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE)
if ((strlen(g_pOptions->GetControlUsername()) > 0 && strcmp(m_MessageBase.m_szUsername, g_pOptions->GetControlUsername())) ||
strcmp(m_MessageBase.m_szPassword, g_pOptions->GetControlPassword()))
{
warn("Non-nzbget request received on port %i from %s", g_pOptions->GetServerPort(), m_szClientIP);
warn("nzbget request received on port %i from %s, but username or password invalid", g_pOptions->GetControlPort(), m_pConnection->GetRemoteAddr());
return;
}
if (strcmp(m_MessageBase.m_szPassword, g_pOptions->GetServerPassword()))
{
warn("nzbget request received on port %i from %s, but password invalid", g_pOptions->GetServerPort(), m_szClientIP);
return;
}
debug("%s request received from %s", g_szMessageRequestNames[ntohl(m_MessageBase.m_iType)], m_szClientIP);
debug("%s request received from %s", g_szMessageRequestNames[ntohl(m_MessageBase.m_iType)], m_pConnection->GetRemoteAddr());
Dispatch();
}
@@ -156,6 +163,10 @@ void BinRpcProcessor::Dispatch()
command = new ShutdownBinCommand();
break;
case eRemoteRequestReload:
command = new ReloadBinCommand();
break;
case eRemoteRequestVersion:
command = new VersionBinCommand();
break;
@@ -176,6 +187,14 @@ void BinRpcProcessor::Dispatch()
command = new HistoryBinCommand();
break;
case eRemoteRequestDownloadUrl:
command = new DownloadUrlBinCommand();
break;
case eRemoteRequestUrlQueue:
command = new UrlQueueBinCommand();
break;
default:
error("Received unsupported request %i", ntohl(m_MessageBase.m_iType));
break;
@@ -183,7 +202,7 @@ void BinRpcProcessor::Dispatch()
if (command)
{
command->SetSocket(m_iSocket);
command->SetConnection(m_pConnection);
command->SetMessageBase(&m_MessageBase);
command->Execute();
delete command;
@@ -205,8 +224,8 @@ void BinCommand::SendBoolResponse(bool bSuccess, const char* szText)
BoolResponse.m_iTrailingDataLength = htonl(iTextLen);
// Send the request answer
send(m_iSocket, (char*) &BoolResponse, sizeof(BoolResponse), 0);
send(m_iSocket, (char*)szText, iTextLen, 0);
m_pConnection->Send((char*) &BoolResponse, sizeof(BoolResponse));
m_pConnection->Send((char*)szText, iTextLen);
}
bool BinCommand::ReceiveRequest(void* pBuffer, int iSize)
@@ -215,8 +234,7 @@ bool BinCommand::ReceiveRequest(void* pBuffer, int iSize)
iSize -= sizeof(SNZBRequestBase);
if (iSize > 0)
{
int iBytesReceived = recv(m_iSocket, ((char*)pBuffer) + sizeof(SNZBRequestBase), iSize, 0);
if (iBytesReceived != iSize)
if (!m_pConnection->Recv(((char*)pBuffer) + sizeof(SNZBRequestBase), iSize))
{
error("invalid request");
return false;
@@ -233,6 +251,8 @@ void PauseUnpauseBinCommand::Execute()
return;
}
g_pOptions->SetResumeTime(0);
switch (ntohl(PauseUnpauseRequest.m_iAction))
{
case eRemotePauseUnpauseActionDownload:
@@ -263,7 +283,7 @@ void SetDownloadRateBinCommand::Execute()
return;
}
g_pOptions->SetDownloadRate(ntohl(SetDownloadRequest.m_iDownloadRate) / 1024.0f);
g_pOptions->SetDownloadRate(ntohl(SetDownloadRequest.m_iDownloadRate));
SendBoolResponse(true, "Rate-Command completed successfully");
}
@@ -276,6 +296,7 @@ void DumpDebugBinCommand::Execute()
}
g_pQueueCoordinator->LogDebugInfo();
g_pUrlCoordinator->LogDebugInfo();
SendBoolResponse(true, "Debug-Command completed successfully");
}
@@ -291,6 +312,18 @@ void ShutdownBinCommand::Execute()
ExitProc();
}
void ReloadBinCommand::Execute()
{
SNZBReloadRequest ReloadRequest;
if (!ReceiveRequest(&ReloadRequest, sizeof(ReloadRequest)))
{
return;
}
SendBoolResponse(true, "Reloading server");
Reload();
}
void VersionBinCommand::Execute()
{
SNZBVersionRequest VersionRequest;
@@ -310,48 +343,29 @@ void DownloadBinCommand::Execute()
return;
}
char* pRecvBuffer = (char*)malloc(ntohl(DownloadRequest.m_iTrailingDataLength) + 1);
char* pBufPtr = pRecvBuffer;
int iBufLen = ntohl(DownloadRequest.m_iTrailingDataLength);
char* pRecvBuffer = (char*)malloc(iBufLen);
// Read from the socket until nothing remains
int iResult = 0;
int NeedBytes = ntohl(DownloadRequest.m_iTrailingDataLength);
while (NeedBytes > 0)
if (!m_pConnection->Recv(pRecvBuffer, iBufLen))
{
iResult = recv(m_iSocket, pBufPtr, NeedBytes, 0);
// Did the recv succeed?
if (iResult <= 0)
{
error("invalid request");
break;
}
pBufPtr += iResult;
NeedBytes -= iResult;
error("invalid request");
free(pRecvBuffer);
return;
}
int iPriority = ntohl(DownloadRequest.m_iPriority);
bool bAddPaused = ntohl(DownloadRequest.m_bAddPaused);
bool bAddTop = ntohl(DownloadRequest.m_bAddFirst);
if (NeedBytes == 0)
{
NZBFile* pNZBFile = NZBFile::CreateFromBuffer(DownloadRequest.m_szFilename, DownloadRequest.m_szCategory, pRecvBuffer, ntohl(DownloadRequest.m_iTrailingDataLength));
bool bOK = g_pScanner->AddExternalFile(DownloadRequest.m_szFilename, DownloadRequest.m_szCategory,
iPriority, NULL, 0, dmScore, NULL, bAddTop, bAddPaused, NULL, pRecvBuffer, iBufLen) != Scanner::asFailed;
if (pNZBFile)
{
info("Request: Queue collection %s", DownloadRequest.m_szFilename);
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, ntohl(DownloadRequest.m_bAddFirst));
delete pNZBFile;
char tmp[1024];
snprintf(tmp, 1024, bOK ? "Collection %s added to queue" : "Download Request failed for %s",
Util::BaseFileName(DownloadRequest.m_szFilename));
tmp[1024-1] = '\0';
char tmp[1024];
snprintf(tmp, 1024, "Collection %s added to queue", Util::BaseFileName(DownloadRequest.m_szFilename));
tmp[1024-1] = '\0';
SendBoolResponse(true, tmp);
}
else
{
char tmp[1024];
snprintf(tmp, 1024, "Download Request failed for %s", Util::BaseFileName(DownloadRequest.m_szFilename));
tmp[1024-1] = '\0';
SendBoolResponse(false, tmp);
}
}
SendBoolResponse(bOK, tmp);
free(pRecvBuffer);
}
@@ -369,12 +383,24 @@ void ListBinCommand::Execute()
ListResponse.m_MessageBase.m_iSignature = htonl(NZBMESSAGE_SIGNATURE);
ListResponse.m_MessageBase.m_iStructSize = htonl(sizeof(ListResponse));
ListResponse.m_iEntrySize = htonl(sizeof(SNZBListResponseFileEntry));
ListResponse.m_bRegExValid = 0;
char* buf = NULL;
int bufsize = 0;
if (ntohl(ListRequest.m_bFileList))
{
eRemoteMatchMode eMatchMode = (eRemoteMatchMode)ntohl(ListRequest.m_iMatchMode);
bool bMatchGroup = ntohl(ListRequest.m_bMatchGroup);
const char* szPattern = ListRequest.m_szPattern;
RegEx *pRegEx = NULL;
if (eMatchMode == eRemoteMatchModeRegEx)
{
pRegEx = new RegEx(szPattern);
ListResponse.m_bRegExValid = pRegEx->IsValid();
}
// Make a data structure and copy all the elements of the list into it
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
@@ -386,6 +412,7 @@ void ListBinCommand::Execute()
{
NZBInfo* pNZBInfo = *it;
bufsize += strlen(pNZBInfo->GetFilename()) + 1;
bufsize += strlen(pNZBInfo->GetName()) + 1;
bufsize += strlen(pNZBInfo->GetDestDir()) + 1;
bufsize += strlen(pNZBInfo->GetCategory()) + 1;
bufsize += strlen(pNZBInfo->GetQueuedFilename()) + 1;
@@ -429,13 +456,17 @@ void ListBinCommand::Execute()
Util::SplitInt64(pNZBInfo->GetSize(), &iSizeHi, &iSizeLo);
pListAnswer->m_iSizeLo = htonl(iSizeLo);
pListAnswer->m_iSizeHi = htonl(iSizeHi);
pListAnswer->m_bMatch = htonl(bMatchGroup && (!pRegEx || pRegEx->Match(pNZBInfo->GetName())));
pListAnswer->m_iFilenameLen = htonl(strlen(pNZBInfo->GetFilename()) + 1);
pListAnswer->m_iNameLen = htonl(strlen(pNZBInfo->GetName()) + 1);
pListAnswer->m_iDestDirLen = htonl(strlen(pNZBInfo->GetDestDir()) + 1);
pListAnswer->m_iCategoryLen = htonl(strlen(pNZBInfo->GetCategory()) + 1);
pListAnswer->m_iQueuedFilenameLen = htonl(strlen(pNZBInfo->GetQueuedFilename()) + 1);
bufptr += sizeof(SNZBListResponseNZBEntry);
strcpy(bufptr, pNZBInfo->GetFilename());
bufptr += ntohl(pListAnswer->m_iFilenameLen);
strcpy(bufptr, pNZBInfo->GetName());
bufptr += ntohl(pListAnswer->m_iNameLen);
strcpy(bufptr, pNZBInfo->GetDestDir());
bufptr += ntohl(pListAnswer->m_iDestDirLen);
strcpy(bufptr, pNZBInfo->GetCategory());
@@ -484,7 +515,7 @@ void ListBinCommand::Execute()
unsigned long iSizeHi, iSizeLo;
FileInfo* pFileInfo = *it;
SNZBListResponseFileEntry* pListAnswer = (SNZBListResponseFileEntry*) bufptr;
pListAnswer->m_iID = htonl(pFileInfo->GetID());
pListAnswer->m_iID = htonl(pFileInfo->GetID());
int iNZBIndex = 0;
for (unsigned int i = 0; i < pDownloadQueue->GetNZBInfoList()->size(); i++)
@@ -497,6 +528,13 @@ void ListBinCommand::Execute()
}
pListAnswer->m_iNZBIndex = htonl(iNZBIndex);
if (pRegEx && !bMatchGroup)
{
char szFilename[MAX_PATH];
snprintf(szFilename, sizeof(szFilename) - 1, "%s/%s", pFileInfo->GetNZBInfo()->GetName(), Util::BaseFileName(pFileInfo->GetFilename()));
pListAnswer->m_bMatch = htonl(pRegEx->Match(szFilename));
}
Util::SplitInt64(pFileInfo->GetSize(), &iSizeHi, &iSizeLo);
pListAnswer->m_iFileSizeLo = htonl(iSizeLo);
pListAnswer->m_iFileSizeHi = htonl(iSizeHi);
@@ -505,6 +543,8 @@ void ListBinCommand::Execute()
pListAnswer->m_iRemainingSizeHi = htonl(iSizeHi);
pListAnswer->m_bFilenameConfirmed = htonl(pFileInfo->GetFilenameConfirmed());
pListAnswer->m_bPaused = htonl(pFileInfo->GetPaused());
pListAnswer->m_iActiveDownloads = htonl(pFileInfo->GetActiveDownloads());
pListAnswer->m_iPriority = htonl(pFileInfo->GetPriority());
pListAnswer->m_iSubjectLen = htonl(strlen(pFileInfo->GetSubject()) + 1);
pListAnswer->m_iFilenameLen = htonl(strlen(pFileInfo->GetFilename()) + 1);
bufptr += sizeof(SNZBListResponseFileEntry);
@@ -523,6 +563,8 @@ void ListBinCommand::Execute()
g_pQueueCoordinator->UnlockQueue();
delete pRegEx;
ListResponse.m_iNrTrailingNZBEntries = htonl(iNrNZBEntries);
ListResponse.m_iNrTrailingPPPEntries = htonl(iNrPPPEntries);
ListResponse.m_iNrTrailingFileEntries = htonl(iNrFileEntries);
@@ -532,11 +574,11 @@ void ListBinCommand::Execute()
if (htonl(ListRequest.m_bServerState))
{
unsigned long iSizeHi, iSizeLo;
ListResponse.m_iDownloadRate = htonl((int)(g_pQueueCoordinator->CalcCurrentDownloadSpeed() * 1024));
ListResponse.m_iDownloadRate = htonl(g_pQueueCoordinator->CalcCurrentDownloadSpeed());
Util::SplitInt64(g_pQueueCoordinator->CalcRemainingSize(), &iSizeHi, &iSizeLo);
ListResponse.m_iRemainingSizeHi = htonl(iSizeHi);
ListResponse.m_iRemainingSizeLo = htonl(iSizeLo);
ListResponse.m_iDownloadLimit = htonl((int)(g_pOptions->GetDownloadRate() * 1024));
ListResponse.m_iDownloadLimit = htonl(g_pOptions->GetDownloadRate());
ListResponse.m_bDownloadPaused = htonl(g_pOptions->GetPauseDownload());
ListResponse.m_bDownload2Paused = htonl(g_pOptions->GetPauseDownload2());
ListResponse.m_bPostPaused = htonl(g_pOptions->GetPausePostProcess());
@@ -559,18 +601,15 @@ void ListBinCommand::Execute()
}
// Send the request answer
send(m_iSocket, (char*) &ListResponse, sizeof(ListResponse), 0);
m_pConnection->Send((char*) &ListResponse, sizeof(ListResponse));
// Send the data
if (bufsize > 0)
{
send(m_iSocket, buf, bufsize, 0);
m_pConnection->Send(buf, bufsize);
}
if (buf)
{
free(buf);
}
free(buf);
}
void LogBinCommand::Execute()
@@ -650,12 +689,12 @@ void LogBinCommand::Execute()
LogResponse.m_iTrailingDataLength = htonl(bufsize);
// Send the request answer
send(m_iSocket, (char*) &LogResponse, sizeof(LogResponse), 0);
m_pConnection->Send((char*) &LogResponse, sizeof(LogResponse));
// Send the data
if (bufsize > 0)
{
send(m_iSocket, buf, bufsize, 0);
m_pConnection->Send(buf, bufsize);
}
free(buf);
@@ -669,67 +708,75 @@ void EditQueueBinCommand::Execute()
return;
}
int iNrEntries = ntohl(EditQueueRequest.m_iNrTrailingEntries);
int iNrIDEntries = ntohl(EditQueueRequest.m_iNrTrailingIDEntries);
int iNrNameEntries = ntohl(EditQueueRequest.m_iNrTrailingNameEntries);
int iNameEntriesLen = ntohl(EditQueueRequest.m_iTrailingNameEntriesLen);
int iAction = ntohl(EditQueueRequest.m_iAction);
int iMatchMode = ntohl(EditQueueRequest.m_iMatchMode);
int iOffset = ntohl(EditQueueRequest.m_iOffset);
int iTextLen = ntohl(EditQueueRequest.m_iTextLen);
bool bSmartOrder = ntohl(EditQueueRequest.m_bSmartOrder);
unsigned int iBufLength = ntohl(EditQueueRequest.m_iTrailingDataLength);
if (iNrEntries * sizeof(int32_t) + iTextLen != iBufLength)
if (iNrIDEntries * sizeof(int32_t) + iTextLen + iNameEntriesLen != iBufLength)
{
error("Invalid struct size");
return;
}
if (iNrEntries <= 0)
char* pBuf = (char*)malloc(iBufLength);
if (!m_pConnection->Recv(pBuf, iBufLength))
{
SendBoolResponse(false, "Edit-Command failed: no IDs specified");
error("invalid request");
free(pBuf);
return;
}
char* pBuf = (char*)malloc(iBufLength);
char* szText = NULL;
int32_t* pIDs = NULL;
// Read from the socket until nothing remains
char* pBufPtr = pBuf;
int NeedBytes = iBufLength;
int iResult = 0;
while (NeedBytes > 0)
if (iNrIDEntries <= 0 && iNrNameEntries <= 0)
{
iResult = recv(m_iSocket, pBufPtr, NeedBytes, 0);
// Did the recv succeed?
if (iResult <= 0)
{
error("invalid request");
break;
}
pBufPtr += iResult;
NeedBytes -= iResult;
SendBoolResponse(false, "Edit-Command failed: no IDs/Names specified");
return;
}
bool bOK = NeedBytes == 0;
if (bOK)
{
szText = iTextLen > 0 ? pBuf : NULL;
pIDs = (int32_t*)(pBuf + iTextLen);
}
char* szText = iTextLen > 0 ? pBuf : NULL;
int32_t* pIDs = (int32_t*)(pBuf + iTextLen);
char* pNames = (pBuf + iTextLen + iNrIDEntries * sizeof(int32_t));
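// layout of the trailing data block, as checked and parsed above:
// [szText: iTextLen bytes][IDs: iNrIDEntries * int32][names: iNameEntriesLen bytes
// of NUL-terminated strings, walked via strlen below]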
IDList cIDList;
cIDList.reserve(iNrEntries);
for (int i = 0; i < iNrEntries; i++)
NameList cNameList;
if (iNrIDEntries > 0)
{
cIDList.push_back(ntohl(pIDs[i]));
cIDList.reserve(iNrIDEntries);
for (int i = 0; i < iNrIDEntries; i++)
{
cIDList.push_back(ntohl(pIDs[i]));
}
}
if (iNrNameEntries > 0)
{
cNameList.reserve(iNrNameEntries);
for (int i = 0; i < iNrNameEntries; i++)
{
cNameList.push_back(pNames);
pNames += strlen(pNames) + 1;
}
}
bool bOK = false;
if (iAction < eRemoteEditActionPostMoveOffset)
{
bOK = g_pQueueCoordinator->GetQueueEditor()->EditList(&cIDList, bSmartOrder, (QueueEditor::EEditAction)iAction, iOffset, szText);
bOK = g_pQueueCoordinator->GetQueueEditor()->EditList(
iNrIDEntries > 0 ? &cIDList : NULL,
iNrNameEntries > 0 ? &cNameList : NULL,
(QueueEditor::EMatchMode)iMatchMode, bSmartOrder, (QueueEditor::EEditAction)iAction, iOffset, szText);
}
else
{
bOK = g_pPrePostProcessor->QueueEditList(&cIDList, (PrePostProcessor::EEditAction)iAction, iOffset);
bOK = g_pPrePostProcessor->QueueEditList(&cIDList, (PrePostProcessor::EEditAction)iAction, iOffset, szText);
}
free(pBuf);
@@ -740,6 +787,13 @@ void EditQueueBinCommand::Execute()
}
else
{
#ifndef HAVE_REGEX_H
if ((QueueEditor::EMatchMode)iMatchMode == QueueEditor::mmRegEx)
{
SendBoolResponse(false, "Edit-Command failed: the program was compiled without RegEx-support");
return;
}
#endif
SendBoolResponse(false, "Edit-Command failed");
}
}
@@ -772,7 +826,6 @@ void PostQueueBinCommand::Execute()
{
PostInfo* pPostInfo = *it;
bufsize += strlen(pPostInfo->GetNZBInfo()->GetFilename()) + 1;
bufsize += strlen(pPostInfo->GetParFilename()) + 1;
bufsize += strlen(pPostInfo->GetInfoName()) + 1;
bufsize += strlen(pPostInfo->GetNZBInfo()->GetDestDir()) + 1;
bufsize += strlen(pPostInfo->GetProgressLabel()) + 1;
@@ -795,15 +848,12 @@ void PostQueueBinCommand::Execute()
pPostQueueAnswer->m_iTotalTimeSec = htonl((int)(pPostInfo->GetStartTime() ? tCurTime - pPostInfo->GetStartTime() : 0));
pPostQueueAnswer->m_iStageTimeSec = htonl((int)(pPostInfo->GetStageTime() ? tCurTime - pPostInfo->GetStageTime() : 0));
pPostQueueAnswer->m_iNZBFilenameLen = htonl(strlen(pPostInfo->GetNZBInfo()->GetFilename()) + 1);
pPostQueueAnswer->m_iParFilename = htonl(strlen(pPostInfo->GetParFilename()) + 1);
pPostQueueAnswer->m_iInfoNameLen = htonl(strlen(pPostInfo->GetInfoName()) + 1);
pPostQueueAnswer->m_iDestDirLen = htonl(strlen(pPostInfo->GetNZBInfo()->GetDestDir()) + 1);
pPostQueueAnswer->m_iProgressLabelLen = htonl(strlen(pPostInfo->GetProgressLabel()) + 1);
bufptr += sizeof(SNZBPostQueueResponseEntry);
strcpy(bufptr, pPostInfo->GetNZBInfo()->GetFilename());
bufptr += ntohl(pPostQueueAnswer->m_iNZBFilenameLen);
strcpy(bufptr, pPostInfo->GetParFilename());
bufptr += ntohl(pPostQueueAnswer->m_iParFilename);
strcpy(bufptr, pPostInfo->GetInfoName());
bufptr += ntohl(pPostQueueAnswer->m_iInfoNameLen);
strcpy(bufptr, pPostInfo->GetNZBInfo()->GetDestDir());
@@ -825,12 +875,12 @@ void PostQueueBinCommand::Execute()
PostQueueResponse.m_iTrailingDataLength = htonl(bufsize);
// Send the request answer
send(m_iSocket, (char*) &PostQueueResponse, sizeof(PostQueueResponse), 0);
m_pConnection->Send((char*) &PostQueueResponse, sizeof(PostQueueResponse));
// Send the data
if (bufsize > 0)
{
send(m_iSocket, buf, bufsize, 0);
m_pConnection->Send(buf, bufsize);
}
free(buf);
@@ -845,50 +895,36 @@ void WriteLogBinCommand::Execute()
}
char* pRecvBuffer = (char*)malloc(ntohl(WriteLogRequest.m_iTrailingDataLength) + 1);
char* pBufPtr = pRecvBuffer;
// Read from the socket until nothing remains
int iResult = 0;
int NeedBytes = ntohl(WriteLogRequest.m_iTrailingDataLength);
pRecvBuffer[NeedBytes] = '\0';
while (NeedBytes > 0)
if (!m_pConnection->Recv(pRecvBuffer, ntohl(WriteLogRequest.m_iTrailingDataLength)))
{
iResult = recv(m_iSocket, pBufPtr, NeedBytes, 0);
// Did the recv succeed?
if (iResult <= 0)
{
error("invalid request");
error("invalid request");
free(pRecvBuffer);
return;
}
bool OK = true;
switch ((Message::EKind)ntohl(WriteLogRequest.m_iKind))
{
case Message::mkDetail:
detail(pRecvBuffer);
break;
}
pBufPtr += iResult;
NeedBytes -= iResult;
}
if (NeedBytes == 0)
{
bool OK = true;
switch ((Message::EKind)ntohl(WriteLogRequest.m_iKind))
{
case Message::mkDetail:
detail(pRecvBuffer);
break;
case Message::mkInfo:
info(pRecvBuffer);
break;
case Message::mkWarning:
warn(pRecvBuffer);
break;
case Message::mkError:
error(pRecvBuffer);
break;
case Message::mkDebug:
debug(pRecvBuffer);
break;
default:
OK = false;
}
SendBoolResponse(OK, OK ? "Message added to log" : "Invalid message-kind");
case Message::mkInfo:
info(pRecvBuffer);
break;
case Message::mkWarning:
warn(pRecvBuffer);
break;
case Message::mkError:
error(pRecvBuffer);
break;
case Message::mkDebug:
debug(pRecvBuffer);
break;
default:
OK = false;
}
SendBoolResponse(OK, OK ? "Message added to log" : "Invalid message-kind");
free(pRecvBuffer);
}
@@ -901,8 +937,10 @@ void ScanBinCommand::Execute()
return;
}
g_pPrePostProcessor->ScanNZBDir();
SendBoolResponse(true, "Scan-Command scheduled successfully");
bool bSyncMode = ntohl(ScanRequest.m_bSyncMode);
g_pScanner->ScanNZBDir(bSyncMode);
SendBoolResponse(true, bSyncMode ? "Scan-Command completed" : "Scan-Command scheduled successfully");
}
void HistoryBinCommand::Execute()
@@ -926,15 +964,14 @@ void HistoryBinCommand::Execute()
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
// calculate required buffer size for nzbs
int iNrNZBEntries = pDownloadQueue->GetHistoryList()->size();
bufsize += iNrNZBEntries * sizeof(SNZBHistoryResponseEntry);
int iNrEntries = pDownloadQueue->GetHistoryList()->size();
bufsize += iNrEntries * sizeof(SNZBHistoryResponseEntry);
for (HistoryList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
{
NZBInfo* pNZBInfo = *it;
bufsize += strlen(pNZBInfo->GetFilename()) + 1;
bufsize += strlen(pNZBInfo->GetDestDir()) + 1;
bufsize += strlen(pNZBInfo->GetCategory()) + 1;
bufsize += strlen(pNZBInfo->GetQueuedFilename()) + 1;
HistoryInfo* pHistoryInfo = *it;
char szNicename[1024];
pHistoryInfo->GetName(szNicename, sizeof(szNicename));
bufsize += strlen(szNicename) + 1;
// align struct to 4 bytes, needed by the ARM-processor (and maybe others)
bufsize += bufsize % 4 > 0 ? 4 - bufsize % 4 : 0;
}
@@ -943,36 +980,42 @@ void HistoryBinCommand::Execute()
char* bufptr = buf;
// write nzb entries
for (NZBInfoList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
for (HistoryList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
{
unsigned long iSizeHi, iSizeLo;
NZBInfo* pNZBInfo = *it;
HistoryInfo* pHistoryInfo = *it;
SNZBHistoryResponseEntry* pListAnswer = (SNZBHistoryResponseEntry*) bufptr;
Util::SplitInt64(pNZBInfo->GetSize(), &iSizeHi, &iSizeLo);
pListAnswer->m_iID = htonl(pNZBInfo->GetID());
pListAnswer->m_tTime = htonl((int)pNZBInfo->GetHistoryTime());
pListAnswer->m_iSizeLo = htonl(iSizeLo);
pListAnswer->m_iSizeHi = htonl(iSizeHi);
pListAnswer->m_iFileCount = htonl(pNZBInfo->GetFileCount());
pListAnswer->m_iParStatus = htonl(pNZBInfo->GetParStatus());
pListAnswer->m_iScriptStatus = htonl(pNZBInfo->GetScriptStatus());
pListAnswer->m_iFilenameLen = htonl(strlen(pNZBInfo->GetFilename()) + 1);
pListAnswer->m_iDestDirLen = htonl(strlen(pNZBInfo->GetDestDir()) + 1);
pListAnswer->m_iCategoryLen = htonl(strlen(pNZBInfo->GetCategory()) + 1);
pListAnswer->m_iQueuedFilenameLen = htonl(strlen(pNZBInfo->GetQueuedFilename()) + 1);
pListAnswer->m_iID = htonl(pHistoryInfo->GetID());
pListAnswer->m_iKind = htonl((int)pHistoryInfo->GetKind());
pListAnswer->m_tTime = htonl((int)pHistoryInfo->GetTime());
char szNicename[1024];
pHistoryInfo->GetName(szNicename, sizeof(szNicename));
pListAnswer->m_iNicenameLen = htonl(strlen(szNicename) + 1);
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo)
{
NZBInfo* pNZBInfo = pHistoryInfo->GetNZBInfo();
unsigned long iSizeHi, iSizeLo;
Util::SplitInt64(pNZBInfo->GetSize(), &iSizeHi, &iSizeLo);
pListAnswer->m_iSizeLo = htonl(iSizeLo);
pListAnswer->m_iSizeHi = htonl(iSizeHi);
pListAnswer->m_iFileCount = htonl(pNZBInfo->GetFileCount());
pListAnswer->m_iParStatus = htonl(pNZBInfo->GetParStatus());
pListAnswer->m_iScriptStatus = htonl(pNZBInfo->GetScriptStatuses()->CalcTotalStatus());
}
else if (pHistoryInfo->GetKind() == HistoryInfo::hkUrlInfo)
{
UrlInfo* pUrlInfo = pHistoryInfo->GetUrlInfo();
pListAnswer->m_iUrlStatus = htonl(pUrlInfo->GetStatus());
}
bufptr += sizeof(SNZBHistoryResponseEntry);
strcpy(bufptr, pNZBInfo->GetFilename());
bufptr += ntohl(pListAnswer->m_iFilenameLen);
strcpy(bufptr, pNZBInfo->GetDestDir());
bufptr += ntohl(pListAnswer->m_iDestDirLen);
strcpy(bufptr, pNZBInfo->GetCategory());
bufptr += ntohl(pListAnswer->m_iCategoryLen);
strcpy(bufptr, pNZBInfo->GetQueuedFilename());
bufptr += ntohl(pListAnswer->m_iQueuedFilenameLen);
strcpy(bufptr, szNicename);
bufptr += ntohl(pListAnswer->m_iNicenameLen);
// align struct to 4 bytes, needed by the ARM-processor (and maybe others)
if ((size_t)bufptr % 4 > 0)
{
pListAnswer->m_iQueuedFilenameLen = htonl(ntohl(pListAnswer->m_iQueuedFilenameLen) + 4 - (size_t)bufptr % 4);
pListAnswer->m_iNicenameLen = htonl(ntohl(pListAnswer->m_iNicenameLen) + 4 - (size_t)bufptr % 4);
memset(bufptr, 0, 4 - (size_t)bufptr % 4); //suppress valgrind warning "uninitialized data"
bufptr += 4 - (size_t)bufptr % 4;
}
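// example: if bufptr ends up 2 bytes past a 4-byte boundary, two zero bytes are
// appended and the transmitted nicename length is increased by 2, so a client
// walking the buffer skips the padding automatically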
@@ -980,16 +1023,126 @@ void HistoryBinCommand::Execute()
g_pQueueCoordinator->UnlockQueue();
HistoryResponse.m_iNrTrailingEntries = htonl(iNrNZBEntries);
HistoryResponse.m_iNrTrailingEntries = htonl(iNrEntries);
HistoryResponse.m_iTrailingDataLength = htonl(bufsize);
// Send the request answer
send(m_iSocket, (char*) &HistoryResponse, sizeof(HistoryResponse), 0);
m_pConnection->Send((char*) &HistoryResponse, sizeof(HistoryResponse));
// Send the data
if (bufsize > 0)
{
send(m_iSocket, buf, bufsize, 0);
m_pConnection->Send(buf, bufsize);
}
free(buf);
}
void DownloadUrlBinCommand::Execute()
{
SNZBDownloadUrlRequest DownloadUrlRequest;
if (!ReceiveRequest(&DownloadUrlRequest, sizeof(DownloadUrlRequest)))
{
return;
}
URL url(DownloadUrlRequest.m_szURL);
if (!url.IsValid())
{
char tmp[1024];
snprintf(tmp, 1024, "Url %s is not valid", DownloadUrlRequest.m_szURL);
tmp[1024-1] = '\0';
SendBoolResponse(true, tmp);
return;
}
UrlInfo* pUrlInfo = new UrlInfo();
pUrlInfo->SetURL(DownloadUrlRequest.m_szURL);
pUrlInfo->SetNZBFilename(DownloadUrlRequest.m_szNZBFilename);
pUrlInfo->SetCategory(DownloadUrlRequest.m_szCategory);
pUrlInfo->SetPriority(ntohl(DownloadUrlRequest.m_iPriority));
pUrlInfo->SetAddTop(ntohl(DownloadUrlRequest.m_bAddFirst));
pUrlInfo->SetAddPaused(ntohl(DownloadUrlRequest.m_bAddPaused));
g_pUrlCoordinator->AddUrlToQueue(pUrlInfo, ntohl(DownloadUrlRequest.m_bAddFirst));
info("Request: Queue url %s", DownloadUrlRequest.m_szURL);
char tmp[1024];
snprintf(tmp, 1024, "Url %s added to queue", DownloadUrlRequest.m_szURL);
tmp[1024-1] = '\0';
SendBoolResponse(true, tmp);
}
void UrlQueueBinCommand::Execute()
{
SNZBUrlQueueRequest UrlQueueRequest;
if (!ReceiveRequest(&UrlQueueRequest, sizeof(UrlQueueRequest)))
{
return;
}
SNZBUrlQueueResponse UrlQueueResponse;
memset(&UrlQueueResponse, 0, sizeof(UrlQueueResponse));
UrlQueueResponse.m_MessageBase.m_iSignature = htonl(NZBMESSAGE_SIGNATURE);
UrlQueueResponse.m_MessageBase.m_iStructSize = htonl(sizeof(UrlQueueResponse));
UrlQueueResponse.m_iEntrySize = htonl(sizeof(SNZBUrlQueueResponseEntry));
char* buf = NULL;
int bufsize = 0;
// Make a data structure and copy all the elements of the list into it
UrlQueue* pUrlQueue = g_pQueueCoordinator->LockQueue()->GetUrlQueue();
int NrEntries = pUrlQueue->size();
// calculate required buffer size
bufsize = NrEntries * sizeof(SNZBUrlQueueResponseEntry);
for (UrlQueue::iterator it = pUrlQueue->begin(); it != pUrlQueue->end(); it++)
{
UrlInfo* pUrlInfo = *it;
bufsize += strlen(pUrlInfo->GetURL()) + 1;
bufsize += strlen(pUrlInfo->GetNZBFilename()) + 1;
// align struct to 4 bytes, needed by the ARM-processor (and maybe others)
bufsize += bufsize % 4 > 0 ? 4 - bufsize % 4 : 0;
}
buf = (char*) malloc(bufsize);
char* bufptr = buf;
for (UrlQueue::iterator it = pUrlQueue->begin(); it != pUrlQueue->end(); it++)
{
UrlInfo* pUrlInfo = *it;
SNZBUrlQueueResponseEntry* pUrlQueueAnswer = (SNZBUrlQueueResponseEntry*) bufptr;
pUrlQueueAnswer->m_iID = htonl(pUrlInfo->GetID());
pUrlQueueAnswer->m_iURLLen = htonl(strlen(pUrlInfo->GetURL()) + 1);
pUrlQueueAnswer->m_iNZBFilenameLen = htonl(strlen(pUrlInfo->GetNZBFilename()) + 1);
bufptr += sizeof(SNZBUrlQueueResponseEntry);
strcpy(bufptr, pUrlInfo->GetURL());
bufptr += ntohl(pUrlQueueAnswer->m_iURLLen);
strcpy(bufptr, pUrlInfo->GetNZBFilename());
bufptr += ntohl(pUrlQueueAnswer->m_iNZBFilenameLen);
// align struct to 4 bytes, needed by the ARM-processor (and maybe others)
if ((size_t)bufptr % 4 > 0)
{
pUrlQueueAnswer->m_iNZBFilenameLen = htonl(ntohl(pUrlQueueAnswer->m_iNZBFilenameLen) + 4 - (size_t)bufptr % 4);
memset(bufptr, 0, 4 - (size_t)bufptr % 4); //suppress valgrind warning "uninitialized data"
bufptr += 4 - (size_t)bufptr % 4;
}
}
g_pQueueCoordinator->UnlockQueue();
UrlQueueResponse.m_iNrTrailingEntries = htonl(NrEntries);
UrlQueueResponse.m_iTrailingDataLength = htonl(bufsize);
// Send the request answer
m_pConnection->Send((char*) &UrlQueueResponse, sizeof(UrlQueueResponse));
// Send the data
if (bufsize > 0)
{
m_pConnection->Send(buf, bufsize);
}
free(buf);

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -33,23 +33,21 @@
class BinRpcProcessor
{
private:
SOCKET m_iSocket;
SNZBRequestBase m_MessageBase;
const char* m_szClientIP;
Connection* m_pConnection;
void Dispatch();
public:
BinRpcProcessor();
void Execute();
void SetSocket(SOCKET iSocket) { m_iSocket = iSocket; }
void SetSignature(int iSignature) { m_MessageBase.m_iSignature = iSignature; }
void SetClientIP(const char* szClientIP) { m_szClientIP = szClientIP; }
void SetConnection(Connection* pConnection) { m_pConnection = pConnection; }
};
class BinCommand
{
protected:
SOCKET m_iSocket;
Connection* m_pConnection;
SNZBRequestBase* m_pMessageBase;
bool ReceiveRequest(void* pBuffer, int iSize);
@@ -58,7 +56,7 @@ protected:
public:
virtual ~BinCommand() {}
virtual void Execute() = 0;
void SetSocket(SOCKET iSocket) { m_iSocket = iSocket; }
void SetConnection(Connection* pConnection) { m_pConnection = pConnection; }
void SetMessageBase(SNZBRequestBase* pMessageBase) { m_pMessageBase = pMessageBase; }
};
@@ -110,6 +108,12 @@ public:
virtual void Execute();
};
class ReloadBinCommand: public BinCommand
{
public:
virtual void Execute();
};
class VersionBinCommand: public BinCommand
{
public:
@@ -140,4 +144,16 @@ public:
virtual void Execute();
};
class DownloadUrlBinCommand: public BinCommand
{
public:
virtual void Execute();
};
class UrlQueueBinCommand: public BinCommand
{
public:
virtual void Execute();
};
#endif

899
ChangeLog
View File

@@ -1,3 +1,900 @@
nzbget-12.0:
- added RSS feeds support:
- new options "FeedX.Name", "FeedX.URL", "FeedX.Filter",
"FeedX.Interval", "FeedX.PauseNzb", "FeedX.Category",
"FeedX.Priority" (section "Rss Feeds");
- new option "FeedHistory" (section "Download Queue");
- button "Preview Feed" on settings tab near each feed definition;
- new toolbar button "Feeds" on downloads tab with menu to
view feeds or fetch new nzbs from all feeds (the button is
visible only if there are feeds defined in settings);
- new dialog to see feed content showing status of each item (new,
fetched, backlog) with ability to manually fetch selected items;
- powerful filters for RSS feeds;
- new dialog to build filters in web-interface with instant preview;
- added download health monitoring:
- health indicates download status, whether the file is damaged
and how much;
- 100% health means no download errors occurred; 0% means all
articles failed;
- there is also a critical health which is calculated for each
nzb-file based on number and size of par-files;
- if during download the health drops below 100%, a health
badge appears near the download name indicating that par-repair
is necessary; the indicator is orange (repair may be possible)
or red (unrepairable) if the health drops below critical health;
- new option "HealthCheck" to define what to do with unhealthy
(unrepairable) downloads (pause, delete, none);
- health and critical health are displayed in download-edit dialog;
health is displayed in history dialog; if download was aborted
(HealthCheck=delete) this is indicated in history dialog;
- health makes it possible to determine the download status for
downloads which have unpack and/or par-check disabled; for such
downloads the status in history is derived from health: success
(health=100%), damaged (health > critical) or failure (health < critical);
- par-check is now automatically started for downloads having
health below 100%; this works independently of unpack (even if
unpack is disabled);
- for downloads with health below critical health no par-check
is performed (it would fail); instead the par-check status is
automatically set to "failure", saving the time of an actual par-check;
- new fields "Health" and "CriticalHealth" are returned by
RPC-Method "listgroups";
- new fields "Health", "CriticalHealth", "Deleted" and "HealthDeleted"
are returned by RPC-Method "history";
- new parameters "NZBPP_HEALTH" and "NZBPP_CRITICALHEALTH" are passed
to pp-scripts;
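- as an illustration of the health values described above (the exact
formula is not spelled out in this changelog, so this is a sketch only):
// illustrative sketch, not the actual NZBGet code
int CalcHealthPercent(int iSuccessArticles, int iTotalArticles)
{
if (iTotalArticles <= 0) return 100; // nothing to download counts as healthy
return iSuccessArticles * 100 / iTotalArticles; // 100% = no failures, 0% = all articles failed
}
critical health would then be the lowest health at which the included
par2-files can still repair the missing data (derived from their number
and size);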
- added collection of server usage statistics for each download:
- number of successful and failed article downloads per news server;
- new page in history dialog shows collected statistics;
- new fields in RPC-method "history": ServerStats (array),
TotalArticles, SuccessArticles, FailedArticles;
- new env. vars passed to pp-scripts: NZBPP_TOTALARTICLES,
NZBPP_SUCCESSARTICLES, NZBPP_FAILEDARTICLES and per used news
server: NZBPP_SERVERX_SUCCESSARTICLES, NZBPP_SERVERX_FAILEDARTICLES;
- also new env.var HEALTHDELETED;
- added smart duplicates feature:
- mostly for use with RSS feeds;
- automatic detection of duplicate nzb-files to avoid download of
duplicates;
- nzb-files can also be manually marked as duplicates;
- if a download fails, another release (duplicate) is chosen automatically;
- if a download succeeds, all remaining duplicates are skipped (not downloaded);
- download items have new properties to tune duplicate handling
behavior: duplicate key, duplicate score and duplicate mode;
- if download was deleted by duplicate check its status in the
history is shown as "DUPE";
- new actions "GroupSetDupeKey", "GroupSetDupeScore", "GroupSetDupeMode",
"HistorySetDupeKey", "HistorySetDupeScore", "HistorySetDupeMode",
"HistoryMarkBad" and "HistoryMarkGood" of RPC-command "editqueue";
new actions "B" and "G" of command "--edit/-E" for history items
(subcommand "H");
- when deleting downloads from queue there are three options now:
"move to history", "move to history as duplicate" and "delete
without history tracking";
- new actions "GroupDupeDelete", "GroupFinalDelete" and
"HistorySetDupeBackup" in RPC-method "editqueue";
- RPC-commands "listgroups", "postqueue" and "history" now return
more info about nzb-item (many new fields);
- removed option "MergeNzb" because it conflicts with duplicate
handling, items can be merged manually if necessary;
- automatic detection of exactly same nzb-files (same content)
coming from different sources (different RSS feeds etc.);
individual files (inside nzb-file) having extensions listed in
option "ExtCleanupDisk" are excluded from content comparison
(unless these are par2-files, which are never excluded);
- when a history item expires (as defined by option "KeepHistory")
and the duplicate check is active (option "DupeCheck"), the item is
not completely deleted from history; instead the amount of stored
data is reduced to the minimum required for duplicate check (about
200 bytes vs. 2000 bytes for a full history item);
- such old history items are not shown in web-interface by default
(to avoid transferring a large number of history items);
- new button "Hidden" in web-interface to show hidden history items;
the items are marked with badge "hidden";
- RPC-method "editqueue" has now two actions to delete history
records: "HistoryDelete", "HistoryFinal"; action "HistoryDelete"
which has existed before now hides records, already hidden records
are ignored;
- added functions "Mark as Bad" and "Mark as Good" for history
items;
- duplicate properties (dupekey, dupescore and dupemode) can now
be viewed and changed in download-edit-dialog and
history-edit-dialog via new button "Dupe";
- for full documentation see http://nzbget.sourceforge.net/RSS#Duplicates;
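
A sketch (Python) of the new duplicate-related RPC actions; the editqueue
argument order (Command, Offset, Text, IDs), the endpoint, the credentials
and the item ID 1234 are assumptions based on the v12 API:

  import xmlrpc.client

  nzbget = xmlrpc.client.ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

  # Mark a history item as bad so duplicate handling fetches another release.
  nzbget.editqueue('HistoryMarkBad', 0, '', [1234])

  # Give a queued group an explicit duplicate key and score.
  nzbget.editqueue('GroupSetDupeKey', 0, 'Some.Show.S01E02', [1234])
  nzbget.editqueue('GroupSetDupeScore', 0, '100', [1234])
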
- created NZBGet.app - NZBGet is now a user-friendly Mac OSX application
with easy installation and seamless integration into the OS UI:
it works in the background, is controlled from a web-browser, and a
few important functions are accessible via the menubar icon;
- better Windows package:
- unrar is included;
- several options are set to better defaults;
- all paths are set relative to the program directory;
the program can be started right after installation without
editing anything in the config;
- included two new batch-files:
- nzbget-start.bat - starts program in normal mode (dos box);
- nzbget-recovery-mode.bat - starts with empty password (dos box);
- both batch files open browser window with correct address;
- config-file template is stored in nzbget.conf.template;
- nzbget.conf is not included in the package. When the program is
started for the first time (using one of the batch files) the
template config file is copied into nzbget.conf;
- updates will be easy in the future: to update the program, all
files from the newer archive must be extracted over the old files.
Since the archive doesn't include nzbget.conf, the existing config
is kept unmodified. The template config file will be updated;
- added file README-WINDOWS.txt with small instructions;
- version string now includes revision number (like "r789");
- added automatic updates:
- new button "Check for updates" on settings tab of web-interface,
in section "SYSTEM", initiates check and shows dialog allowing to
install new version;
- it is possible to choose between stable, testing and development
branches;
- this feature is for end-users using binary packages created and
updated by maintainers, who need to write an update script specific
for platform;
- the script is then called by NZBGet when user clicks on install-button;
- the script must download and install new version;
- for more info visit http://nzbget.sourceforge.net/Packaging;
- news servers can now be temporarily disabled via the speed limit
dialog without reloading the program:
- new option "ServerX.Active" to disable servers via settings;
- new option "ServerX.Name" to use for logging and in UI;
- changed the way how option "Unpack" works:
- instead of enabling/disabling the unpacker as a whole, it now
defines the initial value of post-processing parameter "Unpack"
for nzb-file when it is added to queue;
- this makes it now possible to disable Unpack globally but still
enable it for selected nzb-files;
- new option "CategoryX.Unpack" to set unpack on a per category basis;
- combined all footer buttons into one button "Actions" with menu:
- in download-edit-dialog: "Pause/Resume", "Delete" and "Cancel
Post-Processing";
- in history-dialog: "Delete", "Post-Process Again" and "Download
Remaining Files (Return to Queue)";
- DirectNZB headers X-DNZB-MoreInfo and X-DNZB-Details are now processed
when downloading URLs and the links "More Info" and "Details" are shown
in download-edit-dialog and in history-dialog in Actions menu;
- program can now be stopped via web-interface: new button "shutdown"
in section "SYSTEM";
- added menu "View" to settings page which allows to switch to "Compact Mode"
when option descriptions are hidden;
- added confirmation dialog by leaving settings page if there are unsaved
changes;
- downloads manually deleted from queue are shown with status "deleted"
in the history (instead of "unknown");
- all table columns except "Name" now have fixed widths to avoid annoying
layout changes especially during post-processing when long status messages
are displayed in the name-column;
- added filter buttons to messages tab (info, warning, etc.);
- added automatic par-renaming of extracted files if archive includes
par-files;
- added support for http redirects when fetching URLs;
- added new command "Download again" for history items; new action
"HistoryRedownload" of RPC-method "editqueue"; for controlling via command
line: new action "A" of subcommand "H" of command "--edit/-E";
- download queue is now saved in a safer way to avoid potential loss
of the queue if the program crashes while saving it;
- destination directory for option "CategoryX.DestDir" is not checked/created
on program start anymore (only when a download starts for that category);
this helps when certain categories are configured for external disks,
which are not always connected;
- added new option "CategoryX.Aliases" to configure category name matching
with nzb-sites; especially useful with rss feeds;
- in RPC-Method "appendurl" parameter "addtop" adds nzb to the top of
the main download queue (not only to the top of the URL queue);
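
A sketch (Python) of queueing an URL with "addtop"; the parameter order
(NZBFilename, Category, Priority, AddToTop, URL) is an assumption based on
the v12 API and should be checked against the API documentation, and the
indexer URL is a placeholder:

  import xmlrpc.client

  nzbget = xmlrpc.client.ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

  # With AddToTop=True the fetched nzb goes to the top of the main queue.
  nzbget.appendurl('example.nzb', 'tv', 0, True,
                   'http://indexer.example.com/get/12345')
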
- new logo (thanks to dogzipp for the logo);
- added support for metatag "password" in nzb-files;
- pp-scripts which move files can now inform the program about new
location by printing text "[NZB] FINALDIR=/path/to/files"; the final
path is then shown in history dialog instead of download path;
- new env-var "NZBPP_FINALDIR" passed to pp-scripts;
- pp-scripts can now set post-processing parameters by printing
command "[NZB] NZBPR_varname=value"; this allows scripts executed
earlier to pass data to scripts executed later;
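
A pp-script sketch (Python) illustrating both commands; the target path and
the NZBPR variable name are placeholders, and NZBPP_DIRECTORY is assumed to
be the long-standing variable holding the download's destination directory:

  import os, shutil

  src = os.environ['NZBPP_DIRECTORY']
  dst = os.path.join('/mnt/storage', os.path.basename(src))   # placeholder target

  shutil.move(src, dst)
  print('[NZB] FINALDIR=%s' % dst)       # tell NZBGet about the new location

  # Hand a value to pp-scripts that run later in the chain.
  print('[NZB] NZBPR_MYSCRIPT_MOVED=yes')
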
- added new option "AuthorizedIP" to set the list of IP-addresses which
may connect without authorization;
- new option "ParRename" to force par-renaming as a first post-processing
step (active by default); this saves an unpack attempt and is even more
useful if unpack is disabled;
- post-processing progress label is now automatically trimmed if it
doesn't fit on one line; this avoids breaking the layout if the text
is too long;
- reversed the order of priorities in comboboxes in dialogs: the
highest priority at the top, the lowest at the bottom;
- small changes in button captions: edit dialogs called from settings
page (choose script, choose order, build rss filter) now have buttons
"Discard/Apply" instead of "Close/Save"; in all other dialogs button
"Close" renamed to "Cancel" unless it was the only button in dialog;
- small change in css: slightly reduced the max height of modal dialogs
to better work on notebooks;
- options "DeleteCleanupDisk" and "NzbCleanupDisk" are now active by
default (in the example config file);
- extended add-dialog with options "Add paused" and "Disable duplicate check";
- source nzb-files are now deleted when download-item leaves queue and
history (option "NzbCleanupDisk");
- when deleting downloads from queue the messages about deleted
individual files are now printed as "detail" instead of "info";
- failed article downloads are now logged as "detail" instead of
"warning" to reduce the number of warnings for downloads removed
from the server (DMCA); one warning per file is printed with a
summary of the number of failed article downloads for that file;
- tuned the algorithm calculating the maximum thread limit to allow
more threads for backup server connections (related to option
"ThreadLimit" removed in v11); this may sometimes increase speed
when backup servers are used;
- when adding a nzb-file to the queue via RPC-methods "append" and
"appendurl" the actual format of the file is checked, and if
nzb-format is detected the file is added even if it does not have
the .nzb extension;
- added new option "UrlForce" to allow URL-downloads (including fetching
of RSS feeds and nzb-files from feeds) even if download is paused;
the option is active by default;
- destination directory for option "DestDir" is not checked/created on
program start anymore (only when a download starts); this helps when
DestDir is mounted to a network drive which is not available on program start;
- added special handling for files ".AppleDouble" and ".DS_Store" during
unpack to avoid problems on NAS having support for AFP protocol (used
on Mac OSX);
- history records with failed script status are now shown as "PP-FAILURE"
in history list (instead of just "FAILURE");
- option "DiskSpace" now checks space on "InterDir" in addition to
"DestDir";
- support for rar-archives with non-standard extensions is now limited
to file extensions consisting of digits; this is to avoid extracting
rar-archives that have non-rar extensions on purpose (example: .cbr);
- if option "ParRename" is disabled (not recommended) the unpacker no
longer initiates par-rename; instead a full par-verify is performed;
- for external scripts the exec-permissions are now added automatically;
this makes the installation of pp-scripts and other scripts easier;
- option "InterDir" is now active by default;
- when option "InterDir" is used the intermediate destination directory
names now include unique numbers to avoid several downloads with same
name to use the same directory and interfere with each other;
- when option "UnpackCleanupDisk" is active all archive files are now
deleted from download directory without relying on output printed by
unrar; this solves issues with non-ascii-characters in archive file
names on some platforms and especially in combination with rar5;
- improved handling of non-ascii characters in file names on windows;
- added support for rar5-format when checking signatures of archives
with non-standard file extensions;
- small restructure in settings order:
- combined sections "REMOTE CONTROL" and "PERMISSIONS" into one
section with name "SECURITY";
- moved sections "CATEGORIES" and "RSS FEEDS" higher in the
section list;
- improved par-check: if the main par2-file is corrupted and cannot be
loaded, other par2-files are downloaded and then used as a
replacement for the main par2-file;
- if unpack did not find archive files, par-check is no longer
requested when par-rename was already done;
- better handling of obfuscated nzb-files containing multiple files
with the same names; removed option "StrictParName" which did not
work well with obfuscated files; if more par-files are required for
repair, the files with strict names are tried first, then other
par-files;
- added new scheduler commands "ActivateServer", "DeactivateServer" and
"FetchFeed"; combined options "TaskX.DownloadRate" and "TaskX.Process"
into one option "TaskX.Param", also used by new commands;
- added status filter buttons to history page;
- if unpack fails with a write error (usually because of insufficient
disk space) this is shown as status "Unpack: space" in web-interface;
this unpack-status is handled as "success" by duplicate handling
(no download of other duplicate); also added new unpack-status "wrong
password" (only for rar5-archives); env.var. NZBPP_UNPACKSTATUS has
two new possible values: 3 (write error) and 4 (wrong password);
updated pp-script "EMail.py" to support new unpack-statuses;
- fixed a potential seg. fault in a commonly used function;
- added new option "TimeCorrection" to adjust conversion from system
time to local time (solves issues with scheduler when using a
binary compiled for other platform);
- NZBIDs are now generated with more care avoiding numbering holes
possible in previous versions;
- fixed: invalid "Offset" passed to RPC-method "editqueue" or command
line action "-E/--edit" could crash the program;
- fixed: crash after downloading of an URL (happened only on certain systems);
- fixed: restoring of settings didn't work for multi-sections (servers,
categories, etc.) if they were empty;
- fixed: choosing local files didn't work in Opera;
- fixed: certain characters printed by pp-scripts could crash the
program;
- fixed: malformed nzb-file could cause a memory leak;
- fixed: when a duplicate file was detected in collection it was
automatically deleted (if option DupeCheck is active) but the
total size of collection was not updated;
- fixed: when deleting individual files the total count of files in
the collection was not updated;
- fixed: when multiple nzb-files were added via URL (including RSS) at
the same time the info about category and priority could get lost
for some of the files;
- fixed: if unpack failed, the created destination directory was not
automatically removed (only when option "InterDir" was active);
- fixed scrolling to the top of the page which happened when clicking
on items in the downloads/history lists and on action-buttons in the
edit-download and history dialogs;
- fixed potential buffer overflow in remote client;
- improved error reporting when creation of temporary output file fails;
- fixed: when deleting download, if all remaining queued files are
par2-files the disk cleanup should not be performed, but it was
sometimes;
- fixed a potential problem caused by incorrect use of a library function.
nzbget-11.0:
- reworked concept of post-processing scripts:
- multiple scripts can be assigned to each nzb-file;
- all assigned scripts are executed after the nzb-file is
downloaded and internally processed (unpack, repair);
- option <PostProcess> is obsolete;
- new option <ScriptDir> sets directory where all pp-scripts must
be stored;
- new option <DefScript> sets the default list of pp-scripts to
be assigned to nzb-file when it's added to queue;
- new option <CategoryX.DefScript> to set the default list of
pp-scripts on a category basis;
- the execution order of pp-scripts can be set using new option
<ScriptOrder>;
- there are no separate configuration files for pp-scripts;
- configuration options and pp-parameters are defined in the
pp-scripts;
- script configuration options are saved in nzbget configuration
file (nzbget.conf);
- changed parameters list of RPC-methods <loadconfig> and
<saveconfig>;
- new RPC-method <configtemplates> returns configuration
descriptions for the program and for all pp-scripts;
- configuration of all scripts can be done in web-interface;
- the pp-scripts assigned to a particular nzb-file can be viewed
and changed in web-interface on page <pp-parameters> in the
edit download dialog;
- option <PostPauseQueue> renamed to <ScriptPauseQueue> (the old
name is still recognized);
- new option <ConfigTemplate> to define the location of the template
configuration file (in previous versions it always had to be
stored in <WebDir>);
- history dialog shows status of every script;
- the old example post-processing script replaced with two new scripts:
- EMail.py - sends E-Mail notification;
- Logger.py - saves the full post-processing log of the job into
file _postprocesslog.txt;
- both pp-scripts are written in python and work on Windows too
(in addition to Linux, Mac, etc.);
- added possibility to set post-processing parameters for history items:
- pp-parameters can now be viewed and changed in history dialog
in web-interface;
- useful before post-processing again;
- new action <HistorySetParameter> in RPC-method <editqueue>;
- new action <O> in remote command <--edit/-E> for history items
(subcommand <H>);
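
A sketch (Python) of setting a pp-parameter on a history item before
post-processing it again; the "Name=Value" form of the text argument, the
script/option names and the item ID are assumptions:

  import xmlrpc.client

  nzbget = xmlrpc.client.ServerProxy('http://nzbget:tegbzn6789@127.0.0.1:6789/xmlrpc')

  # Adjust a (hypothetical) parameter of history item 1234.
  nzbget.editqueue('HistorySetParameter', 0, 'MyScript.py:MyOption=value', [1234])
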
- added new feature <split download> which creates new download from
selected files of source download;
- new command <Split> in web-interface in edit download dialog
on page <Files>;
- new action <S> in remote command <--edit/-E>;
- new action <FileSplit> in JSON-/XML-RPC method <editqueue>;
- added support for manual par-check:
- if option <ParCheck> is set to <Manual> and a damaged download
is detected the program downloads all par2-files but doesn't
perform par-check; the user must perform par-check/repair
manually then (possibly on another, faster computer);
- old values <yes/no> of option <ParCheck> renamed to <Force>
and <Auto> respectively;
- when set to <Force> all par2-files are always downloaded;
- removed option <LoadPars> since its functionality is now
covered by option <ParCheck>;
- result of par-check can now have new value <Manual repair
necessary>;
- field <ParStatus> in RPC-method <history> can have new value
<MANUAL>;
- parameter <NZBPP_PARSTATUS> for pp-script can have new value
<4 = manual repair necessary>;
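
A pp-script sketch (Python) detecting the new par-status value:

  import os

  # Value 4 = manual repair necessary (ParCheck=Manual).
  if os.environ.get('NZBPP_PARSTATUS') == '4':
      print('[WARNING] par2-files were downloaded; repair must be run manually')
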
- when download is resumed in web-interface the option <ParCheck=Force>
is respected and all par2-files are resumed (not only main par2-file);
- automatic deletion of backup-source files after successful par-repair;
important when repairing renamed rar-files since this could cause
failure during unpack;
- par-checker and renamer now add messages into the log of the pp-item
(like unpack- and pp-scripts-messages); these messages now appear in
the log created by the scripts Logger.py and EMail.py;
- when a nzb-file is added via web-interface or via remote call the
file is now put into incoming nzb-directory (option "NzbDir") and
then scanned; this has two advantages over the old behavior when the
file was parsed directly in memory:
- the file serves as a backup for troubleshooting;
- the file is processed by nzbprocess-script (if defined in
option "NzbProcess") making the pre-processing much easier;
- new env-var parameters are passed to NzbProcess-script: NZBNP_NZBNAME,
NZBNP_CATEGORY, NZBNP_PRIORITY, NZBNP_TOP, NZBNP_PAUSED;
- new commands for use in NzbProcess-scripts: "[NZB] TOP=1" to add nzb
to the top of queue and "[NZB] PAUSED=1" to add nzb-file in paused state;
- reworked post-processor queue:
- only one job is created for each nzb-file; no more separate
jobs are created for par-collections within one nzb-file;
- option <AllowReProcess> removed; a post-processing script is
called only once per nzb-file, this behavior cannot be altered
anymore;
- with the new feature <Split> individual par-collections can be
processed separately in a more effective way than before;
- improved unicode (utf8) support:
- non-ascii characters are now correctly transferred via JSON-RPC;
- correct displaying of nzb-names and paths in web-interface;
- it is now possible to use non-ascii characters on settings page
for option values (such as paths or category names);
- improved unicode support in XML-RPC and JSON-RPC;
- if username and password are defined for a news-server the
authentication is now forced (in previous versions the authentication
was performed only if requested by server); needed for servers
supporting both anonymous (restricted) and authorized (full access)
accounts;
- added option <ExtCleanupDisk> to automatically delete unwanted files
(with specified extensions or names) after successful par-check or unpack;
- improvement in JSON-/XML-RPC:
- all ID fields including NZBID are now persistent and retain
their values after restart;
- this allows for third-party software to identify nzb-files by
ID;
- method <history> now returns ID of NZB-file in the field
<NZBID>;
- in versions up to 0.8.0 the field <NZBID> was used to identify
history items in the edit-commands <HistoryDelete>,
<HistoryReturn>, <HistoryProcess>; since version 9 field <ID>
is used for this purpose; in versions 9-10 field <NZBID> still
existed and had the same value as field <ID> for compatibility
with version 0.8.0; the compatibility is not provided anymore;
this change was needed to provide consistent use of field
<NZBID> across all RPC-methods;
- added support for rar-files with non-standard extensions (such as
.001, etc.);
- added functions to backup and restore settings from web-interface;
when restoring it's possible to choose what sections to restore
(for example only news servers settings or only settings of a
certain pp-script) or restore the whole configuration;
- new option "ControlUsername" to define login user name (if you don't
like default username "nzbget");
- if a communication error occurs in web-interface, it retries multiple
times before giving up with an error message;
- the maximum number of download threads is now managed automatically,
taking into account the number of allowed connections to news servers;
removed option <ThreadLimit>;
- pp-scripts terminated with unknown status are now considered failed
(status=FAILURE instead of status=UNKNOWN);
- new parameter (env. var) <NZBPP_NZBID> is passed to pp-scripts and
contains the internal ID of the NZB-file;
- improved thread synchronisation to avoid (short-time) lockings of
the program during creation of destination files;
- more detailed error message if a directory could not be created
(<DstDir>, <NzbDir>, etc.); the message includes error text reported
by OS such as <permission denied> or similar;
- when unpacking the unpack start time is now measured after receiving
of unrar copyright message; this provides better unpack time
estimation in a case when user uses unpack-script to do some things
before executing unrar (for example sending Wake-On-Lan message to
the destination NAS); it works with unrar only, it's not possible
with 7-Zip because it buffers printed messages;
- when the program is reloaded, a message with version number is
printed like on start;
- configuration can now be saved in web-interface even if there were
no changes made but if obsolete or invalid options were detected in
the config file; the saving removes invalid entries from config file;
- option <ControlPassword> can now be set to an empty value to disable
authentication; useful if nzbget works behind another web-server with
its own authentication;
- when deleting downloads via web-interface a proper hint regarding
deleting of already downloaded files from disk depending on option
<DeleteCleanupDisk> is displayed;
- if a news-server returns empty or bad article (this may be caused
by errors on the news server), the program tries again from the same
or other servers (in previous versions the article was marked as
failed without other download attempts);
- when a nzb-file whose name ends with ".queued" is added via web-
interface the ".queued"-part is automatically removed;
- small improvement in multithread synchronization of download queue;
- added link to catalog of pp-scripts to web-interface;
- updated forum URL in about dialog in web-interface;
- small correction in a log-message: removed <Request:> from message
<Request: Queue collection...>;
- removed option "ProcessLogKind"; scripts should use prefixes ([INFO],
[DETAIL], etc); messages printed without prefixes are added as [INFO];
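
A minimal illustration (Python) of the message prefixes:

  print('[DETAIL] verbose progress message')
  print('[INFO] normal progress message')
  print('[WARNING] something went wrong, but processing continues')
  print('plain line without prefix, added as [INFO]')
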
- removed option "AppendNzbDir"; if it was disabled that caused problems
in par-checker and unpacker; the option is now assumed always active;
- removed option "RenameBroken"; it caused problems in par-checker
(the option existed since early program versions before the par-check
was added);
- configure-script now defines "SIGCHLD_HANDLER" by default on all
systems including BSD; this eliminates the need of configure-
parameter "--enable-sigchld-handler" on 64-Bit BSD; the trade-off:
32-Bit BSD now requires "--disable-sigchld-handler";
- improved configure-script: defining of symbol "FILE_OFFSET_BITS=64",
required on some systems, is not necessary anymore;
- fixed: in the option "NzbAddedProcess" the env-var parameter with
nzb-name was passed in "NZBNA_NAME", should be "NZBNA_NZBNAME";
the old parameter name "NZBNA_NAME" is still supported for
compatibility;
- fixed: download time in statistics was incorrect if the computer
was put into standby (thanks Frank Kuypers for the patch);
- fixed: when option <InterDir> was active and the download after
unpack contained a rar-file with the same name as one of the original
files (sometimes happens with included subtitles) the original
rar-file was kept with the name <.rar_duplicate1> even if the option
<UnpackCleanupDisk> was active;
- fixed: failed to read download queue from disk if post-processing
queue was not empty;
- fixed: when a duplicate file was detected during download the
program could hang;
- fixed: symbol <DISABLE_TLS> must be defined in project settings;
defining it in <win32.h> didn't work properly (Windows only);
- fixed: crash when adding malformed nzb-files with certain
structure (Windows only);
- fixed: when deleting a partially downloaded nzb-file from the queue
with the option <DeleteCleanupDisk> active, the file <_brokenlog.txt>
was not deleted, preventing the directory from automatic deletion;
- fixed: if an error occurs when a RPC-client or web-browser
communicates with nzbget the program could crash;
- fixed: if the last file of collection was detected as duplicate
after the download of the first article the file was deleted from
queue (that's OK) but the post-processing was not triggered
(that's a bug);
- fixed: support for split files (.001, .002, etc.) was broken.
nzbget-10.2:
- fixed potential segfault which could happen with file paths longer
than 1024 characters;
- fixed: when options <DirectWrite> and <ContinuePartial> were both
active, a restart or reload of the program during download may cause
damaged files in the active download;
- increased width of speed indication ui-element to avoid layout
breaking on some linux-browsers;
- fixed a race condition in unpacker which could lead to a segfault
(although the chances were low because the code wasn't executed often).
nzbget-10.1:
- fixed: articles with decoding errors (incomplete or damaged posts)
caused infinite retry-loop in downloader.
nzbget-10.0:
- added built-in unpack:
- rar and 7-zip formats are supported (via external Unrar and
7-Zip executables);
- new options <Unpack>, <UnpackPauseQueue>, <UnpackCleanupDisk>,
<UnrarCmd>, <SevenZipCmd>;
- web-interface now shows progress and estimated time during
unpack (rar only; for 7-Zip progress is not available due to
limitations of 7-Zip);
- when built-in unpack is enabled, the post-processing script is
called after unpack and possibly par-check/repair (if needed);
- for nzb-files containing multiple collections (par-sets) the
post-processing script is called only once, after the last
par-set;
- new parameter <NZBPP_UNPACKSTATUS> passed to post-processing
script;
- if the option <AllowReProcess> is enabled the post-processing-
script is called after each par-set (as in previous versions);
- example post-processing script updated: removed unrar-code,
added check for unpack status;
- new field <UnpackStatus> in result of RPC-method <history>;
- history-dialog in web-interface shows three status: par-status,
unpack-status, script-status;
- with two built-in special post-processing parameters <*Unpack:>
and <*Unpack:Password> the unpack can be disabled for individual
nzb-file or the password can be set;
- built-in special post-processing parameters can be set via web-
interface on page <PP-Parameters> (when built-in unpack is
enabled);
- added support for HTTPS to the built-in web-server (web-interface and
XML/JSON-RPC):
- new options <SecureControl>, <SecurePort>, <SecureCert> and
<SecureKey>;
- module <TLS.c/h> completely rewritten with support for server-
side sockets, newer versions of GnuTLS, proper thread lockings
in OpenSSL;
- improved the automatic par-scan (option <ParScan=auto>) to
significantly reduce the verify-time in some common cases with renamed
rar-files:
- the extra files are scanned in an optimized order;
- the scan stops when all missing files are found;
- added fast renaming of intentionally misnamed (rar-) files:
- the new renaming algorithm doesn't require full par-scan and
restores original filenames in just a few seconds, even on very
slow computers (NAS, media players, etc.);
- the fast renaming is performed automatically when requested by
the built-in unpacker (option <Unpack> must be active);
- added new option <InterDir> to put intermediate files during download
into a separate directory instead of storing them directly in the
destination directory (option <DestDir>):
- when the nzb-file is completely (successfully) downloaded, repaired
(if necessary) and unpacked, the files are moved to the destination
directory (option <DestDir> or <CategoryX.DestDir>);
- intermediate directory can significantly improve unpack
performance if it is located on a separate physical hard drive;
- added new option <ServerX.Cipher> to manually select cipher for
encrypted communication with news server:
- manually choosing a faster cipher (such as <RC4>) can
significantly improve performance (if CPU is a limiting factor);
- major improvements in news-server/connection management (main and fill
servers):
- if download of article fails, the program tries all servers of
the same level before trying higher level servers;
- this ensures that fill servers are used only if all main servers
fail;
- this makes the configuring of multiple servers much easier than
before: in most cases the simple configuration of level 0 for
all main servers and level 1 for all fill servers suffices;
- in previous versions the level was increased immediately after
the first tried server of the level failed; to make sure all
main servers were tried before downloading from fill servers it
was required to create complex server configurations with
duplicates; these configurations were still not as effective as
now;
- do not reconnect on <article/group not found> errors since this
doesn't help but unnecessarily increases CPU load and network
traffic;
- removed option <RetryOnCrcError>; it's not required anymore;
- new option <ServerX.Group> allows more flexible configuration
of news servers when using multiple accounts on the same server;
with this option it's also possible to imitate the old server
management behavior regarding levels;
- news servers configuration is now less error-prone:
- the option <ServerX.Level> is not required to start from <0> and
when several news servers are configured the Levels can be any
integers - the program sorts the servers and corrects the Levels
to 0,1,2,etc. automatically if needed;
- when option <ServerX.Connections> is set to <0> the server is
ignored (in previous version such a server could cause hanging
when the program was trying to go to the next level);
- if no news servers are defined (or all definitions are invalid)
a warning is printed to inform that the download is not
possible;
- categories can now have their own destination directories; new option
<CategoryX.DestDir>;
- new feature <Pause for X Minutes> in web-interface; new XML-/JSON-RPC
method <scheduleresume>;
- improved the handling of hanging connections: if a connection hangs
longer than defined by option <ConnectionTimeout> the program tries to
gracefully close connection first (this is new); if it still hangs
after <TerminateTimeout> the download thread is terminated as a last
resort (as in previous versions);
- added automatic speed meter recalibration to recover after possible
synchronization errors which can occur when the option <AccurateRate>
is not active; this makes the default (less accurate but fast) speed
meter almost as good as the accurate one; important when speed
throttling is active;
- when the par-checker requests more par-files, they get an extra
priority and are downloaded before other files regardless of their
priorities; this is needed to avoid hanging of the par-checker-job if
a file with a higher priority gets added to the queue during par-check;
- when post-processing-parameters are passed to the post-processing
script a second version of each parameter with a normalized parameter-
name is passed in addition to the original parameter name; in the
normalized name the special characters <*> and <:> are replaced with
<_> and all characters are passed in upper case; this is important for
internal post-processing-parameters (*Unpack:=yes/no) which include
special characters;
- warning <Non-nzbget request received> now is not printed when the
connection was aborted before the request signature was read;
- changed formatting of remaining time for post-processing to short
format (as used for remaining download time);
- added link to article <Performance tips> to settings tab on web-
interface;
- removed hint <Post-processing script may have moved files elsewhere>
from history dialog since it caused more questions than helped;
- changed default value for option <ServerX.JoinGroup> to <no>; most
news servers nowadays do not require joining the group and many
servers do not keep headers for many groups making the join-command
fail even if the articles still can be successfully downloaded;
- small change in example post-processing script: message <Deleting
source ts-files> are now printed only if ts-files really existed;
- improved configure-script:
- libs which are added via pkgconfig are now put into LIBS instead
of LDFLAGS - improves compatibility with newer Linux linkers;
- OpenSSL libs/includes are now added using pkgconfig to better
handle dependencies;
- additional check for libcrypto (part of OpenSSL) ensures the
library is added to linker command even if pkgconfig is not
used;
- adding of local files via web-interface now works in IE10;
- if an obsolete option is found in the config file a warning is printed
instead of an error and the program is not paused anymore;
- fixed: the reported line numbers for configuration errors were
sometimes inaccurate;
- fixed warning <file glyphicons-halflings.png not found>;
- fixed: some XML-/JSON-RPC methods could return negative values for
file sizes between 2-4GB; this also affected the web-interface;
- fixed: if an external program (unrar, pp-script, etc.) could not be
started, the execute-function returned code 255 although code -1 was
expected in this case; this could break the designed post-processing
flow;
- fixed: some characters with codes below 32 were not properly encoded
in JSON-RPC; sometimes output from unrar contained such characters
and could break web-interface;
- fixed: special characters (quotation marks, etc.) in unpack password
and in configuration options were not displayed properly and could be
discarded on saving;
nzbget-9.1:
- added full par-scan feature needed to par-check/repair files which
were renamed after creation of par-files:
- new option <ParScan> to activate full par-scan (always or automatic);
the automatic full par-scan activates if missing files are detected
during par-check, this avoids unnecessary full scan for normal
(not renamed) par sets;
- improved the post-processing script to better handle renamed rar-files;
- replaced a browser error message when trying to add local files in
IE9 with a better message dialog;
nzbget-9.0:
- changed version naming scheme by removing the leading zero: current
version is now called 9.0 instead of 0.9.0 (it's really the 9th major
version of the program);
- added built-in web-interface:
- completely new designed and written from scratch;
- doesn't require a separate web-server;
- doesn't require PHP;
- 100% Javascript application; the built-in web-server hosts only
static files; the javascript app communicates with NZBGet via
JSON-RPC;
- very efficient usage of server resources (CPU and memory);
- easy installation. Since neither a separate web-server nor PHP
are needed the installation of new web-interface is very easy.
Actually it is performed automatically when you "make install"
or "ipkg install nzbget";
- modern look: better layout, popup dialogs, nice animations,
hi-def icons;
- built-in phone-theme (activates automatically);
- combined view for "currently downloading", "queued", "currently
processing" and "queued for processing";
- renaming of nzb-files;
- multiselect with multiedit or merge of downloads;
- fast paging in the lists (downloads, history, messages);
- search box for filtering in the lists (downloads, history, messages)
and in settings;
- adding nzb-files to download queue was improved in several ways:
- add multiple files at once. The "select files dialog" allows
selecting multiple files;
- add files using drag and drop. Just drop the files from your
file manager directly into the web-browser;
- add files via URLs. Put the URL and NZBGet downloads the
nzb-file and adds it to download queue automatically;
- the priority of nzb-file can now be set when adding local-files
or URLs;
- the history can be cleared completely or selected items can be removed;
- file mode is now nzb-file related;
- added the ability to queue URLs:
- the program automatically downloads nzb-files from given URLs
and put them to download queue.
- when multiple URLs are added in a short time, they are put
into a special URL-queue.
- the number of simultaneous URL-downloads are controlled via
new option UrlConnections.
- with the new option ReloadUrlQueue can be controlled if the URL-queue
should be reloaded after the program is restarted (if the URL-queue
was not empty).
- new switch <-U> for remote-command <--append/-A> to queue an URL.
- new subcommand <-U> in the remote command <--list/-L> prints the
current URL-queue.
- if URL-download fails, the URL is moved into history.
- with subcommand <-R> of command <--edit> the failed URL can be
returned to URL-queue for redownload.
- the remote command <--list/-L> for history can now print the infos
for URL history items.
- new XML/JSON-RPC command <appendurl> to add an URL or multiple
URLs for download.
- new XML/JSON-RPC command <urlqueue> returns the items from the
URL-queue.
- the XML/JSON-RPC command <history> was extended to provide
infos about URL history items.
- the URL-queue obeys the pause-state of download queue.
- the URL-downloads support HTTP and HTTPS protocols;
- added new field <name> to nzb-info-object.
- it is initially set to the cleaned up name of the nzb-file.
- the renaming of the group changes this field.
- all RPC-methods related to nzb-object return the new field, the
old field <NZBNicename> is now deprecated.
- the option <MergeNZB> now checks the <name>-field instead of
<nzbfilename> (the latter is not changed when the nzb is renamed).
- new env-var-parameter <NZBPP_NZBNAME> for post-processing script;
- added options <GN> and <FN> for remote command <--edit/-E>. With these
options the name of group or file can be used in edit-command instead
of file ID;
- added support for regular expressions (POSIX ERE Syntax) in remote
commands <--list/-L> and <--edit/-E> using new subcommands <GR> and <FR>;
- improved performance of RPC-command <listgroups>;
- added new command <FileReorder> to RPC-method <editqueue> to set the
order of individual files in the group;
- added gzip-support to built-in web-server (including RPC);
- added processing of http-request <OPTIONS> in RPC-server for better
support of cross domain requests;
- renamed example configuration file and postprocessing script to make
the installation easier;
- improved the automatic installation (<make install>) to install all
necessary files (not only the binary as it was before);
- improved handling of configuration errors: the program now does not
terminate on errors but rather logs all of them and uses default option values;
- added new XML/JSON-RPC methods <config>, <loadconfig> and <saveconfig>;
- with active option <AllowReProcess> the NZB is considered completed
even if there are paused non-par-files (the paused non-par-files are
treated the same way as paused par-files): as a result the
reprocessable script is called;
- added subcommand <W> to remote command <-S/--scan> to scan synchronously
(wait until scan completed);
- added parameter <SyncMode> to XML/JSON-RPC method <scan>;
- the command <Scan> in web-interface now waits for completing of scan
before reporting the status;
- added remote command <--reload/-O> and JSON/XML-RPC method <reload> to
reload the configuration from disk and reinitialize the program; the
reload can be performed from web-interface;
- JSON/XML-RPC method <append> extended with parameter <priority>;
- categories available in web-interface are now configured in program
configuration file (nzbget.conf) and can be managed via web-interface
on settings page;
- updated descriptions in example configuration file;
- changes in configuration file:
- renamed options <ServerIP>, <ServerPort> and <ServerPassword> to
<ControlIP>, <ControlPort> and <ControlPassword> to avoid confusion
with news-server options <ServerX.Host>, <ServerX.Port> and
<ServerX.Password>;
- the old option names are still recognized and are automatically
renamed when the configuration is saved from web-interface;
- also renamed option <$MAINDIR> to <MainDir>;
- extended remote command <--append/-A> with optional parameters:
- <T> - adds the file/URL to the top of queue;
- <P> - pauses added files;
- <C category-name> - sets category for added nzb-file/URL;
- <N nzb-name> - sets nzb filename for added URL;
- the old switches <--category/-K> and <--top/-T> are deprecated
but still supported for compatibility;
- renamed subcommand <K> of command <--edit/-E> to <C> (the old
subcommand is still supported for compatibility);
- added new option <NzbAddedProcess> to setup a script called after
a nzb-file is added to queue;
- added debug messages for speed meter;
- improved the startup script <nzbgetd> so it can be directly used in
</etc/init.d> without modifications;
- fixed: after renaming of a group, the new name was not displayed
by remote commands <-L G> and <-C in curses mode>;
- fixed incompatibility with OpenSSL 1.0 (thanks to the OpenWRT team
for the patch);
- fixed: RPC-method <log(0, IdFrom)> could return wrong results if
the log was filtered with options <XXXTarget>;
- fixed: free disk space calculated incorrectly on some OSes;
- fixed: unrar failure was not always properly detected causing the
post-processing to delete not yet unpacked rar-files;
- fixed compilation error on recent linux versions;
- fixed compilation error on older systems;
nzbget-0.8.0:
- added priorities; new action <I> for remote command <--edit/-E> to set
priorities for groups or individual files; new actions <SetGroupPriority>
and <SetFilePriority> of RPC-command <editqueue>; remote command
<--list/-L> prints priorities and indicates files or groups being
downloaded; ncurses-frontend prints priorities and indicates files or
groups being downloaded; new command <PRIORITY> to set priority of nzb-file
from nzbprocess-script; RPC-commands <listgroups> and <listfiles> return
priorities and indicate files or groups being downloaded;
- added renaming of groups; new subcommand <N> for command <--edit/-E>; new
action <SetName> for RPC-method <editqueue>;
- added new option <AccurateRate>, which enables synchronisation in the
speed meter; that makes the indicated speed more accurate by
eliminating measurement errors possible due to thread conflicts;
thanks to an anonymous nzbget user for the patch;
- improved the parsing of filename from article subject;
- option <DirectWrite> now efficiently works on Windows with NTFS partitions;
- added URL-based-authentication as alternative to HTTP-header authentication
for XML- and JSON-RPC;
- fixed: nzb-files containing umlauts and other special characters could not
be parsed - replaced XML-Reader with SAX-Parser - only on POSIX (not on
Windows);
- fixed incorrect displaying of group sizes bigger than 4GB on many 64-bit
OSes;
- fixed a bug causing error on decoding of input data in JSON-RPC;
- fixed a compilation error on some windows versions;
- fixed: par-repair could fail when the filenames were not correctly parsed
from article subjects;
- fixed a compatibility issue with OpenBSD (and possibly other BSD based
systems); added the automatic configuring of required signal handling logic
to better support BSD without breaking the compatibility with certain Linux
systems;
- corrected the address of Free Software Foundation in copyright notice.
nzbget-0.7.0:
- added history: new option <KeepHistory>, new remote subcommand <H> for
commands <L> (list history entries) and <E> (delete history entries,
@@ -541,7 +1438,7 @@ nzbget-0.3.0:
of completing and state (paused or not) for every file is printed.
The header of queue shows number of total files, number of unpaused
files and size for all and unpaused files. Better using of screen estate
- space <EFBFBD> no more empty lines and separate header for status (total seven
+ space - no more empty lines and separate header for status (total seven
lines gain). The messages are printed on several lines (if they not fill
in one line) without trimming now;
- configure.ac-file updated to work with recent versions of autoconf/automake.


@@ -2,7 +2,7 @@
* This file if part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
- * Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
+ * Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -74,11 +74,11 @@ void ColoredFrontend::PrintStatus()
char tmp[1024];
char timeString[100];
timeString[0] = '\0';
- float fCurrentDownloadSpeed = m_bStandBy ? 0 : m_fCurrentDownloadSpeed;
+ int iCurrentDownloadSpeed = m_bStandBy ? 0 : m_iCurrentDownloadSpeed;
- if (fCurrentDownloadSpeed > 0.0 && !(m_bPauseDownload || m_bPauseDownload2))
+ if (iCurrentDownloadSpeed > 0 && !(m_bPauseDownload || m_bPauseDownload2))
{
- long long remain_sec = m_lRemainingSize / ((long long)(fCurrentDownloadSpeed * 1024));
+ long long remain_sec = (long long)(m_lRemainingSize / iCurrentDownloadSpeed);
int h = (int)(remain_sec / 3600);
int m = (int)((remain_sec % 3600) / 60);
int s = (int)(remain_sec % 60);
@@ -86,9 +86,9 @@ void ColoredFrontend::PrintStatus()
}
char szDownloadLimit[128];
- if (m_fDownloadLimit > 0.0f)
+ if (m_iDownloadLimit > 0)
{
- sprintf(szDownloadLimit, ", Limit %.0f KB/s", m_fDownloadLimit);
+ sprintf(szDownloadLimit, ", Limit %.0f KB/s", (float)m_iDownloadLimit / 1024.0);
}
else
{
@@ -113,7 +113,7 @@ void ColoredFrontend::PrintStatus()
#endif
snprintf(tmp, 1024, " %d threads, %.*f KB/s, %.2f MB remaining%s%s%s%s%s\n",
- m_iThreadCount, (fCurrentDownloadSpeed >= 10 ? 0 : 1), fCurrentDownloadSpeed,
+ m_iThreadCount, (iCurrentDownloadSpeed >= 10*1024 ? 0 : 1), (float)iCurrentDownloadSpeed / 1024.0,
(float)(Util::Int64ToFloat(m_lRemainingSize) / 1024.0 / 1024.0), timeString, szPostStatus,
m_bPauseDownload || m_bPauseDownload2 ? (m_bStandBy ? ", Paused" : ", Pausing") : "",
szDownloadLimit, szControlSeq);


@@ -2,7 +2,7 @@
* This file if part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
- * Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
+ * Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
- * Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
+ * Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
- #include <config.h>
+ #include "config.h"
#endif
#ifdef WIN32
@@ -37,7 +37,7 @@
#include <stdlib.h>
#include <string.h>
- #include <cstdio>
+ #include <stdio.h>
#ifdef WIN32
#include <winsock2.h>
#include <ws2tcpip.h>
@@ -54,19 +54,16 @@
#include "nzbget.h"
#include "Connection.h"
#include "Log.h"
#include "TLS.h"
static const int CONNECTION_READBUFFER_SIZE = 1024;
#ifndef DISABLE_TLS
bool Connection::bTLSLibInitialized = false;
#endif
#ifndef HAVE_GETADDRINFO
#ifndef HAVE_GETHOSTBYNAME_R
Mutex* Connection::m_pMutexGetHostByName = NULL;
#endif
#endif
- void Connection::Init(bool bTLS)
+ void Connection::Init()
{
debug("Initializing global connection data");
@@ -87,21 +84,7 @@ void Connection::Init(bool bTLS)
#endif
#ifndef DISABLE_TLS
if (bTLS)
{
debug("Initializing TLS library");
char* szErrStr;
int iRes = tls_lib_init(&szErrStr);
bTLSLibInitialized = iRes == TLS_EOK;
if (!bTLSLibInitialized)
{
error("Could not initialize TLS library: %s", szErrStr ? szErrStr : "unknown error");
if (szErrStr)
{
free(szErrStr);
}
}
}
TLSSocket::Init();
#endif
#ifndef HAVE_GETADDRINFO
@@ -120,11 +103,7 @@ void Connection::Final()
#endif
#ifndef DISABLE_TLS
if (bTLSLibInitialized)
{
debug("Finalizing TLS library");
tls_lib_deinit();
}
TLSSocket::Final();
#endif
#ifndef HAVE_GETADDRINFO
@@ -134,38 +113,48 @@ void Connection::Final()
#endif
}
- Connection::Connection(NetAddress* pNetAddress)
+ Connection::Connection(const char* szHost, int iPort, bool bTLS)
{
debug("Creating Connection");
m_pNetAddress = pNetAddress;
m_szHost = NULL;
m_iPort = iPort;
m_bTLS = bTLS;
m_szCipher = NULL;
m_eStatus = csDisconnected;
m_iSocket = INVALID_SOCKET;
m_iBufAvail = 0;
m_iTimeout = 60;
m_bSuppressErrors = true;
m_szReadBuf = (char*)malloc(CONNECTION_READBUFFER_SIZE + 1);
m_bAutoClose = true;
#ifndef DISABLE_TLS
- m_pTLS = NULL;
+ m_pTLSSocket = NULL;
m_bTLSError = false;
#endif
if (szHost)
{
m_szHost = strdup(szHost);
}
}
- Connection::Connection(SOCKET iSocket, bool bAutoClose)
+ Connection::Connection(SOCKET iSocket, bool bTLS)
{
debug("Creating Connection");
m_pNetAddress = NULL;
m_szHost = NULL;
m_iPort = 0;
m_bTLS = bTLS;
m_szCipher = NULL;
m_eStatus = csConnected;
m_iSocket = iSocket;
m_iBufAvail = 0;
m_iTimeout = 60;
m_bSuppressErrors = true;
m_szReadBuf = (char*)malloc(CONNECTION_READBUFFER_SIZE + 1);
m_bAutoClose = bAutoClose;
#ifndef DISABLE_TLS
- m_pTLS = NULL;
+ m_pTLSSocket = NULL;
m_bTLSError = false;
#endif
}
@@ -173,32 +162,52 @@ Connection::~Connection()
{
debug("Destroying Connection");
if (m_eStatus == csConnected && m_bAutoClose)
{
Disconnect();
}
Disconnect();
free(m_szHost);
free(m_szCipher);
free(m_szReadBuf);
#ifndef DISABLE_TLS
if (m_pTLS)
delete m_pTLSSocket;
#endif
}
void Connection::SetSuppressErrors(bool bSuppressErrors)
{
m_bSuppressErrors = bSuppressErrors;
#ifndef DISABLE_TLS
if (m_pTLSSocket)
{
free(m_pTLS);
m_pTLSSocket->SetSuppressErrors(bSuppressErrors);
}
#endif
}
void Connection::SetCipher(const char* szCipher)
{
free(m_szCipher);
m_szCipher = szCipher ? strdup(szCipher) : NULL;
}
bool Connection::Connect()
{
debug("Connecting");
if (m_eStatus == csConnected)
{
return true;
}
bool bRes = DoConnect();
if (bRes)
{
m_eStatus = csConnected;
}
else
Connection::DoDisconnect();
{
DoDisconnect();
}
return bRes;
}
@@ -208,7 +217,9 @@ bool Connection::Disconnect()
debug("Disconnecting");
if (m_eStatus == csDisconnected)
{
return true;
}
bool bRes = DoDisconnect();
@@ -219,51 +230,145 @@ bool Connection::Disconnect()
return bRes;
}
- int Connection::Bind()
+ bool Connection::Bind()
{
debug("Binding");
if (m_eStatus == csListening)
{
return 0;
return true;
}
int iRes = DoBind();
if (iRes == 0)
#ifdef HAVE_GETADDRINFO
struct addrinfo addr_hints, *addr_list, *addr;
char iPortStr[sizeof(int) * 4 + 1]; // is enough to hold any converted int
memset(&addr_hints, 0, sizeof(addr_hints));
addr_hints.ai_family = AF_UNSPEC; // Allow IPv4 or IPv6
addr_hints.ai_socktype = SOCK_STREAM,
addr_hints.ai_flags = AI_PASSIVE; // For wildcard IP address
sprintf(iPortStr, "%d", m_iPort);
int res = getaddrinfo(m_szHost, iPortStr, &addr_hints, &addr_list);
if (res != 0)
{
m_eStatus = csListening;
error("Could not resolve hostname %s", m_szHost);
return false;
}
m_iSocket = INVALID_SOCKET;
for (addr = addr_list; addr != NULL; addr = addr->ai_next)
{
m_iSocket = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);
if (m_iSocket != INVALID_SOCKET)
{
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
res = bind(m_iSocket, addr->ai_addr, addr->ai_addrlen);
if (res != -1)
{
// Connection established
break;
}
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
}
freeaddrinfo(addr_list);
#else
struct sockaddr_in sSocketAddress;
memset(&sSocketAddress, 0, sizeof(sSocketAddress));
sSocketAddress.sin_family = AF_INET;
if (!m_szHost || strlen(m_szHost) == 0)
{
sSocketAddress.sin_addr.s_addr = htonl(INADDR_ANY);
}
else
{
sSocketAddress.sin_addr.s_addr = ResolveHostAddr(m_szHost);
if (sSocketAddress.sin_addr.s_addr == (unsigned int)-1)
{
return false;
}
}
sSocketAddress.sin_port = htons(m_iPort);
m_iSocket = socket(PF_INET, SOCK_STREAM, 0);
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Socket creation failed for %s", m_szHost, true, 0);
return false;
}
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
int res = bind(m_iSocket, (struct sockaddr *) &sSocketAddress, sizeof(sSocketAddress));
if (res == -1)
{
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
#endif
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Binding socket failed for %s", m_szHost, true, 0);
return false;
}
if (listen(m_iSocket, 100) < 0)
{
ReportError("Listen on socket failed for %s", m_szHost, true, 0);
return false;
}
m_eStatus = csListening;
return iRes;
return true;
}
int Connection::WriteLine(const char* pBuffer)
{
//debug("Connection::write(char* line)");
//debug("Connection::WriteLine");
if (m_eStatus != csConnected)
{
return -1;
}
int iRes = DoWriteLine(pBuffer);
int iRes = send(m_iSocket, pBuffer, strlen(pBuffer), 0);
return iRes;
}
- int Connection::Send(const char* pBuffer, int iSize)
+ bool Connection::Send(const char* pBuffer, int iSize)
{
debug("Sending data");
if (m_eStatus != csConnected)
{
return -1;
return false;
}
int iRes = send(m_iSocket, pBuffer, iSize, 0);
int iBytesSent = 0;
while (iBytesSent < iSize)
{
int iRes = send(m_iSocket, pBuffer + iBytesSent, iSize-iBytesSent, 0);
if (iRes <= 0)
{
return false;
}
iBytesSent += iRes;
}
return iRes;
return true;
}
char* Connection::ReadLine(char* pBuffer, int iSize, int* pBytesRead)
@@ -273,26 +378,100 @@ char* Connection::ReadLine(char* pBuffer, int iSize, int* pBytesRead)
return NULL;
}
char* res = DoReadLine(pBuffer, iSize, pBytesRead);
return res;
char* pBufPtr = pBuffer;
iSize--; // for trailing '0'
int iBytesRead = 0;
int iBufAvail = m_iBufAvail; // local variable is faster
char* szBufPtr = m_szBufPtr; // local variable is faster
while (iSize)
{
if (!iBufAvail)
{
iBufAvail = recv(m_iSocket, m_szReadBuf, CONNECTION_READBUFFER_SIZE, 0);
if (iBufAvail < 0)
{
ReportError("Could not receive data on socket", NULL, true, 0);
break;
}
else if (iBufAvail == 0)
{
break;
}
szBufPtr = m_szReadBuf;
m_szReadBuf[iBufAvail] = '\0';
}
int len = 0;
char* p = (char*)memchr(szBufPtr, '\n', iBufAvail);
if (p)
{
len = (int)(p - szBufPtr + 1);
}
else
{
len = iBufAvail;
}
if (len > iSize)
{
len = iSize;
}
memcpy(pBufPtr, szBufPtr, len);
pBufPtr += len;
szBufPtr += len;
iBufAvail -= len;
iBytesRead += len;
iSize -= len;
if (p)
{
break;
}
}
*pBufPtr = '\0';
m_iBufAvail = iBufAvail > 0 ? iBufAvail : 0; // copy back to member
m_szBufPtr = szBufPtr; // copy back to member
if (pBytesRead)
{
*pBytesRead = iBytesRead;
}
if (pBufPtr == pBuffer)
{
return NULL;
}
return pBuffer;
}
- SOCKET Connection::Accept()
+ Connection* Connection::Accept()
{
debug("Accepting connection");
if (m_eStatus != csListening)
{
return INVALID_SOCKET;
return NULL;
}
SOCKET iRes = DoAccept();
SOCKET iSocket = accept(m_iSocket, NULL, NULL);
if (iSocket == INVALID_SOCKET && m_eStatus != csCancelled)
{
ReportError("Could not accept connection", NULL, true, 0);
}
if (iSocket == INVALID_SOCKET)
{
return NULL;
}
Connection* pCon = new Connection(iSocket, m_bTLS);
return iRes;
return pCon;
}
int Connection::Recv(char* pBuffer, int iSize)
int Connection::TryRecv(char* pBuffer, int iSize)
{
debug("Receiving data");
@@ -308,7 +487,7 @@ int Connection::Recv(char* pBuffer, int iSize)
return iReceived;
}
bool Connection::RecvAll(char * pBuffer, int iSize)
bool Connection::Recv(char * pBuffer, int iSize)
{
debug("Receiving data (full buffer)");
@@ -357,17 +536,18 @@ bool Connection::DoConnect()
addr_hints.ai_family = AF_UNSPEC; /* Allow IPv4 or IPv6 */
addr_hints.ai_socktype = SOCK_STREAM;
sprintf(iPortStr, "%d", m_pNetAddress->GetPort());
sprintf(iPortStr, "%d", m_iPort);
int res = getaddrinfo(m_pNetAddress->GetHost(), iPortStr, &addr_hints, &addr_list);
int res = getaddrinfo(m_szHost, iPortStr, &addr_hints, &addr_list);
if (res != 0)
{
ReportError("Could not resolve hostname %s", m_pNetAddress->GetHost(), true, 0);
ReportError("Could not resolve hostname %s", m_szHost, true, 0);
return false;
}
for (addr = addr_list; addr != NULL; addr = addr->ai_next)
{
bool bLastAddr = !addr->ai_next;
m_iSocket = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);
if (m_iSocket != INVALID_SOCKET)
{
@@ -378,20 +558,33 @@ bool Connection::DoConnect()
break;
}
// Connection failed
if (bLastAddr)
{
ReportError("Connection to %s failed", m_szHost, true, 0);
}
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
else if (bLastAddr)
{
ReportError("Socket creation failed for %s", m_szHost, true, 0);
}
}
freeaddrinfo(addr_list);
if (m_iSocket == INVALID_SOCKET)
{
return false;
}
#else
struct sockaddr_in sSocketAddress;
memset(&sSocketAddress, 0, sizeof(sSocketAddress));
sSocketAddress.sin_family = AF_INET;
sSocketAddress.sin_port = htons(m_pNetAddress->GetPort());
sSocketAddress.sin_addr.s_addr = ResolveHostAddr(m_pNetAddress->GetHost());
sSocketAddress.sin_port = htons(m_iPort);
sSocketAddress.sin_addr.s_addr = ResolveHostAddr(m_szHost);
if (sSocketAddress.sin_addr.s_addr == (unsigned int)-1)
{
return false;
@@ -400,25 +593,20 @@ bool Connection::DoConnect()
m_iSocket = socket(PF_INET, SOCK_STREAM, 0);
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Socket creation failed for %s", m_pNetAddress->GetHost(), true, 0);
ReportError("Socket creation failed for %s", m_szHost, true, 0);
return false;
}
int res = connect(m_iSocket , (struct sockaddr *) & sSocketAddress, sizeof(sSocketAddress));
if (res == -1)
{
// Connection failed
ReportError("Connection to %s failed", m_szHost, true, 0);
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
return false;
}
#endif
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Connection to %s failed", m_pNetAddress->GetHost(), true, 0);
return false;
}
#ifdef WIN32
int MSecVal = m_iTimeout * 1000;
int err = setsockopt(m_iSocket, SOL_SOCKET, SO_RCVTIMEO, (char*)&MSecVal, sizeof(MSecVal));
@@ -430,9 +618,16 @@ bool Connection::DoConnect()
#endif
if (err != 0)
{
ReportError("setsockopt failed", NULL, true, 0);
ReportError("Socket initialization failed for %s", m_szHost, true, 0);
}
#ifndef DISABLE_TLS
if (m_bTLS && !StartTLS(true, NULL, NULL))
{
return false;
}
#endif
return true;
}
@@ -442,205 +637,23 @@ bool Connection::DoDisconnect()
if (m_iSocket != INVALID_SOCKET)
{
#ifndef DISABLE_TLS
CloseTLS();
#endif
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
#ifndef DISABLE_TLS
if (m_pTLS)
{
CloseTLS();
}
#endif
}
m_eStatus = csDisconnected;
return true;
}
int Connection::DoWriteLine(const char* pBuffer)
void Connection::ReadBuffer(char** pBuffer, int *iBufLen)
{
//debug("Connection::doWrite()");
return send(m_iSocket, pBuffer, strlen(pBuffer), 0);
}
char* Connection::DoReadLine(char* pBuffer, int iSize, int* pBytesRead)
{
//debug( "Connection::DoReadLine()" );
char* pBufPtr = pBuffer;
iSize--; // for trailing '\0'
int iBytesRead = 0;
int iBufAvail = m_iBufAvail; // local variable is faster
char* szBufPtr = m_szBufPtr; // local variable is faster
while (iSize)
{
if (!iBufAvail)
{
iBufAvail = recv(m_iSocket, m_szReadBuf, CONNECTION_READBUFFER_SIZE, 0);
if (iBufAvail < 0)
{
ReportError("Could not receive data on socket", NULL, true, 0);
break;
}
else if (iBufAvail == 0)
{
break;
}
szBufPtr = m_szReadBuf;
m_szReadBuf[iBufAvail] = '\0';
}
int len = 0;
char* p = (char*)memchr(szBufPtr, '\n', iBufAvail);
if (p)
{
len = (int)(p - szBufPtr + 1);
}
else
{
len = iBufAvail;
}
if (len > iSize)
{
len = iSize;
}
memcpy(pBufPtr, szBufPtr, len);
pBufPtr += len;
szBufPtr += len;
iBufAvail -= len;
iBytesRead += len;
iSize -= len;
if (p)
{
break;
}
}
*pBufPtr = '\0';
m_iBufAvail = iBufAvail > 0 ? iBufAvail : 0; // copy back to member
m_szBufPtr = szBufPtr; // copy back to member
if (pBytesRead)
{
*pBytesRead = iBytesRead;
}
if (pBufPtr == pBuffer)
{
return NULL;
}
return pBuffer;
}
int Connection::DoBind()
{
debug("Do binding");
#ifdef HAVE_GETADDRINFO
struct addrinfo addr_hints, *addr_list, *addr;
char iPortStr[sizeof(int) * 4 + 1]; // is enough to hold any converted int
memset(&addr_hints, 0, sizeof(addr_hints));
addr_hints.ai_family = AF_UNSPEC; // Allow IPv4 or IPv6
addr_hints.ai_socktype = SOCK_STREAM;
addr_hints.ai_flags = AI_PASSIVE; // For wildcard IP address
sprintf(iPortStr, "%d", m_pNetAddress->GetPort());
int res = getaddrinfo(m_pNetAddress->GetHost(), iPortStr, &addr_hints, &addr_list);
if (res != 0)
{
error( "Could not resolve hostname %s", m_pNetAddress->GetHost() );
return -1;
}
m_iSocket = INVALID_SOCKET;
for (addr = addr_list; addr != NULL; addr = addr->ai_next)
{
m_iSocket = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);
if (m_iSocket != INVALID_SOCKET)
{
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
res = bind(m_iSocket, addr->ai_addr, addr->ai_addrlen);
if (res != -1)
{
// Connection established
break;
}
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
}
freeaddrinfo(addr_list);
#else
struct sockaddr_in sSocketAddress;
memset(&sSocketAddress, 0, sizeof(sSocketAddress));
sSocketAddress.sin_family = AF_INET;
if (!m_pNetAddress->GetHost() || strlen(m_pNetAddress->GetHost()) == 0)
{
sSocketAddress.sin_addr.s_addr = htonl(INADDR_ANY);
}
else
{
sSocketAddress.sin_addr.s_addr = ResolveHostAddr(m_pNetAddress->GetHost());
if (sSocketAddress.sin_addr.s_addr == (unsigned int)-1)
{
return -1;
}
}
sSocketAddress.sin_port = htons(m_pNetAddress->GetPort());
m_iSocket = socket(PF_INET, SOCK_STREAM, 0);
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Socket creation failed for %s", m_pNetAddress->GetHost(), true, 0);
return -1;
}
int opt = 1;
setsockopt(m_iSocket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt));
int res = bind(m_iSocket, (struct sockaddr *) &sSocketAddress, sizeof(sSocketAddress));
if (res == -1)
{
// Connection failed
closesocket(m_iSocket);
m_iSocket = INVALID_SOCKET;
}
#endif
if (m_iSocket == INVALID_SOCKET)
{
ReportError("Binding socket failed for %s", m_pNetAddress->GetHost(), true, 0);
return -1;
}
if (listen(m_iSocket, 10) < 0)
{
ReportError("Listen on socket failed for %s", m_pNetAddress->GetHost(), true, 0);
return -1;
}
return 0;
}
SOCKET Connection::DoAccept()
{
SOCKET iSocket = accept(GetSocket(), NULL, NULL);
if (iSocket == INVALID_SOCKET && m_eStatus != csCancelled)
{
ReportError("Could not accept connection", NULL, true, 0);
}
return iSocket;
}
*iBufLen = m_iBufAvail;
*pBuffer = m_szBufPtr;
m_iBufAvail = 0;
};
void Connection::Cancel()
{
@@ -675,14 +688,9 @@ void Connection::ReportError(const char* szMsgPrefix, const char* szMsgArg, bool
{
#ifdef WIN32
int ErrCode = WSAGetLastError();
if (m_bSuppressErrors)
{
debug("%s: ErrNo %i", szErrPrefix, ErrCode);
}
else
{
error("%s: ErrNo %i", szErrPrefix, ErrCode);
}
char szErrMsg[1024];
FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, NULL, ErrCode, 0, szErrMsg, 1024, NULL);
szErrMsg[1024-1] = '\0';
#else
const char *szErrMsg = NULL;
int ErrCode = herrno;
@@ -695,7 +703,7 @@ void Connection::ReportError(const char* szMsgPrefix, const char* szMsgArg, bool
{
szErrMsg = hstrerror(ErrCode);
}
#endif
if (m_bSuppressErrors)
{
debug("%s: ErrNo %i, %s", szErrPrefix, ErrCode, szErrMsg);
@@ -704,7 +712,6 @@ void Connection::ReportError(const char* szMsgPrefix, const char* szMsgArg, bool
{
error("%s: ErrNo %i, %s", szErrPrefix, ErrCode, szErrMsg);
}
#endif
}
else
{
@@ -720,71 +727,36 @@ void Connection::ReportError(const char* szMsgPrefix, const char* szMsgArg, bool
}
#ifndef DISABLE_TLS
bool Connection::CheckTLSResult(int iResultCode, char* szErrStr, const char* szErrMsgPrefix)
{
bool bOK = iResultCode == TLS_EOK;
if (!bOK)
{
ReportError(szErrMsgPrefix, szErrStr ? szErrStr : "unknown error", false, 0);
if (szErrStr)
{
free(szErrStr);
}
}
return bOK;
}
bool Connection::StartTLS()
bool Connection::StartTLS(bool bIsClient, const char* szCertFile, const char* szKeyFile)
{
debug("Starting TLS");
if (m_pTLS)
{
free(m_pTLS);
}
delete m_pTLSSocket;
m_pTLSSocket = new TLSSocket(m_iSocket, bIsClient, szCertFile, szKeyFile, m_szCipher);
m_pTLSSocket->SetSuppressErrors(m_bSuppressErrors);
m_pTLS = malloc(sizeof(tls_t));
tls_t* pTLS = (tls_t*)m_pTLS;
memset(pTLS, 0, sizeof(tls_t));
tls_clear(pTLS);
char* szErrStr;
int iRes;
iRes = tls_init(pTLS, NULL, NULL, NULL, 0, &szErrStr);
if (!CheckTLSResult(iRes, szErrStr, "Could not initialize TLS-object: %s"))
{
return false;
}
debug("tls_start...");
iRes = tls_start(pTLS, (int)m_iSocket, NULL, 1, NULL, &szErrStr);
debug("tls_start...%i", iRes);
if (!CheckTLSResult(iRes, szErrStr, "Could not establish secure connection: %s"))
{
return false;
}
return true;
return m_pTLSSocket->Start();
}
void Connection::CloseTLS()
{
tls_close((tls_t*)m_pTLS);
free(m_pTLS);
m_pTLS = NULL;
if (m_pTLSSocket)
{
m_pTLSSocket->Close();
delete m_pTLSSocket;
m_pTLSSocket = NULL;
}
}
int Connection::recv(SOCKET s, char* buf, int len, int flags)
{
size_t iReceived = 0;
int iReceived = 0;
if (m_pTLS)
if (m_pTLSSocket)
{
m_bTLSError = false;
char* szErrStr;
int iRes = tls_getbuf((tls_t*)m_pTLS, buf, len, &iReceived, &szErrStr);
if (!CheckTLSResult(iRes, szErrStr, "Could not read from TLS-socket: %s"))
iReceived = m_pTLSSocket->Recv(buf, len);
if (iReceived < 0)
{
m_bTLSError = true;
return -1;
@@ -799,22 +771,23 @@ int Connection::recv(SOCKET s, char* buf, int len, int flags)
int Connection::send(SOCKET s, const char* buf, int len, int flags)
{
if (m_pTLS)
int iSent = 0;
if (m_pTLSSocket)
{
m_bTLSError = false;
char* szErrStr;
int iRes = tls_putbuf((tls_t*)m_pTLS, buf, len, &szErrStr);
if (!CheckTLSResult(iRes, szErrStr, "Could not send to TLS-socket: %s"))
iSent = m_pTLSSocket->Send(buf, len);
if (iSent < 0)
{
m_bTLSError = true;
return -1;
}
return 0;
return iSent;
}
else
{
int iRet = ::send(s, buf, len, flags);
return iRet;
iSent = ::send(s, buf, len, flags);
return iSent;
}
}
#endif
@@ -868,3 +841,20 @@ unsigned int Connection::ResolveHostAddr(const char* szHost)
return uaddr;
}
#endif
const char* Connection::GetRemoteAddr()
{
struct sockaddr_in PeerName;
int iPeerNameLength = sizeof(PeerName);
if (getpeername(m_iSocket, (struct sockaddr*)&PeerName, (SOCKLEN_T*) &iPeerNameLength) >= 0)
{
#ifdef WIN32
strncpy(m_szRemoteAddr, inet_ntoa(PeerName.sin_addr), sizeof(m_szRemoteAddr));
#else
inet_ntop(AF_INET, &PeerName.sin_addr, m_szRemoteAddr, sizeof(m_szRemoteAddr));
#endif
}
m_szRemoteAddr[sizeof(m_szRemoteAddr)-1] = '\0';
return m_szRemoteAddr;
}
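An illustrative sketch of the new listener API (Bind now returns bool, Accept returns a Connection* instead of a raw SOCKET); the address and port below are hypothetical:
#include "Connection.h"
#include "Log.h"
Connection listener("0.0.0.0", 6789, false);
if (listener.Bind())
{
	// Accept() wraps the accepted socket into a new Connection object
	Connection* pClient = listener.Accept();
	if (pClient)
	{
		info("Incoming connection from %s", pClient->GetRemoteAddr());
		pClient->Disconnect();
		delete pClient;
	}
}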


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -27,12 +27,14 @@
#ifndef CONNECTION_H
#define CONNECTION_H
#include "NetAddress.h"
#ifndef HAVE_GETADDRINFO
#ifndef HAVE_GETHOSTBYNAME_R
#include "Thread.h"
#endif
#endif
#ifndef DISABLE_TLS
#include "TLS.h"
#endif
class Connection
{
@@ -44,20 +46,22 @@ public:
csListening,
csCancelled
};
protected:
NetAddress* m_pNetAddress;
char* m_szHost;
int m_iPort;
SOCKET m_iSocket;
bool m_bTLS;
char* m_szCipher;
char* m_szReadBuf;
int m_iBufAvail;
char* m_szBufPtr;
EStatus m_eStatus;
int m_iTimeout;
bool m_bSuppressErrors;
bool m_bAutoClose;
char m_szRemoteAddr[20];
#ifndef DISABLE_TLS
void* m_pTLS;
static bool bTLSLibInitialized;
TLSSocket* m_pTLSSocket;
bool m_bTLSError;
#endif
#ifndef HAVE_GETADDRINFO
@@ -66,47 +70,47 @@ protected:
#endif
#endif
Connection(SOCKET iSocket, bool bTLS);
void ReportError(const char* szMsgPrefix, const char* szMsgArg, bool PrintErrCode, int herrno);
virtual bool DoConnect();
virtual bool DoDisconnect();
int DoBind();
int DoWriteLine(const char* pBuffer);
char* DoReadLine(char* pBuffer, int iSize, int* pBytesRead);
SOCKET DoAccept();
bool DoConnect();
bool DoDisconnect();
#ifndef HAVE_GETADDRINFO
unsigned int ResolveHostAddr(const char* szHost);
#endif
#ifndef DISABLE_TLS
bool CheckTLSResult(int iResultCode, char* szErrStr, const char* szErrMsgPrefix);
int recv(SOCKET s, char* buf, int len, int flags);
int send(SOCKET s, const char* buf, int len, int flags);
void CloseTLS();
#endif
public:
Connection(NetAddress* pNetAddress);
Connection(SOCKET iSocket, bool bAutoClose);
Connection(const char* szHost, int iPort, bool bTLS);
virtual ~Connection();
static void Init(bool bTLS);
static void Init();
static void Final();
bool Connect();
bool Disconnect();
int Bind();
int Send(const char* pBuffer, int iSize);
int Recv(char* pBuffer, int iSize);
bool RecvAll(char* pBuffer, int iSize);
virtual bool Connect();
virtual bool Disconnect();
bool Bind();
bool Send(const char* pBuffer, int iSize);
bool Recv(char* pBuffer, int iSize);
int TryRecv(char* pBuffer, int iSize);
char* ReadLine(char* pBuffer, int iSize, int* pBytesRead);
void ReadBuffer(char** pBuffer, int *iBufLen);
int WriteLine(const char* pBuffer);
SOCKET Accept();
Connection* Accept();
void Cancel();
NetAddress* GetServer() { return m_pNetAddress; }
SOCKET GetSocket() { return m_iSocket; }
const char* GetHost() { return m_szHost; }
int GetPort() { return m_iPort; }
bool GetTLS() { return m_bTLS; }
const char* GetCipher() { return m_szCipher; }
void SetCipher(const char* szCipher);
void SetTimeout(int iTimeout) { m_iTimeout = iTimeout; }
EStatus GetStatus() { return m_eStatus; }
void SetSuppressErrors(bool bSuppressErrors) { m_bSuppressErrors = bSuppressErrors; }
void SetSuppressErrors(bool bSuppressErrors);
bool GetSuppressErrors() { return m_bSuppressErrors; }
const char* GetRemoteAddr();
#ifndef DISABLE_TLS
bool StartTLS();
bool StartTLS(bool bIsClient, const char* szCertFile, const char* szKeyFile);
#endif
};


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -60,18 +60,12 @@ Decoder::~ Decoder()
{
debug("Destroying Decoder");
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
}
void Decoder::Clear()
{
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
m_szArticleFilename = NULL;
}
@@ -268,10 +262,7 @@ BreakLoop:
pb += 6; //=strlen(" name=")
char* pe;
for (pe = pb; *pe != '\0' && *pe != '\n' && *pe != '\r'; pe++) ;
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
m_szArticleFilename = (char*)malloc(pe - pb + 1);
strncpy(m_szArticleFilename, pb, pe - pb);
m_szArticleFilename[pe - pb] = '\0';
@@ -404,10 +395,7 @@ unsigned int UDecoder::DecodeBuffer(char* buffer, int len)
// extracting filename
char* pe;
for (pe = pb; *pe != '\0' && *pe != '\n' && *pe != '\r'; pe++) ;
if (m_szArticleFilename)
{
free(m_szArticleFilename);
}
free(m_szArticleFilename);
m_szArticleFilename = (char*)malloc(pe - pb + 1);
strncpy(m_szArticleFilename, pb, pe - pb);
m_szArticleFilename[pe - pb] = '\0';


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$

File diff suppressed because it is too large

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -27,10 +27,13 @@
#define DISKSTATE_H
#include "DownloadInfo.h"
#include "FeedInfo.h"
#include "NewsServer.h"
class DiskState
{
private:
int fscanf(FILE* infile, const char* Format, ...);
int ParseFormatVersion(const char* szFormatSignature);
bool SaveFileInfo(FileInfo* pFileInfo, const char* szFilename);
bool LoadFileInfo(FileInfo* pFileInfo, const char* szFilename, bool bFileSummary, bool bArticles);
@@ -39,21 +42,38 @@ private:
void SaveFileQueue(DownloadQueue* pDownloadQueue, FileQueue* pFileQueue, FILE* outfile);
bool LoadFileQueue(DownloadQueue* pDownloadQueue, FileQueue* pFileQueue, FILE* infile, int iFormatVersion);
void SavePostQueue(DownloadQueue* pDownloadQueue, FILE* outfile);
bool LoadPostQueue(DownloadQueue* pDownloadQueue, FILE* infile);
bool LoadPostQueue(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
bool LoadOldPostQueue(DownloadQueue* pDownloadQueue);
void SaveUrlQueue(DownloadQueue* pDownloadQueue, FILE* outfile);
bool LoadUrlQueue(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
void SaveUrlInfo(UrlInfo* pUrlInfo, FILE* outfile);
bool LoadUrlInfo(UrlInfo* pUrlInfo, FILE* infile, int iFormatVersion);
void SaveDupInfo(DupInfo* pDupInfo, FILE* outfile);
bool LoadDupInfo(DupInfo* pDupInfo, FILE* infile, int iFormatVersion);
void SaveHistory(DownloadQueue* pDownloadQueue, FILE* outfile);
bool LoadHistory(DownloadQueue* pDownloadQueue, FILE* infile);
bool LoadHistory(DownloadQueue* pDownloadQueue, FILE* infile, int iFormatVersion);
int FindNZBInfoIndex(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
bool SaveFeedStatus(Feeds* pFeeds, FILE* outfile);
bool LoadFeedStatus(Feeds* pFeeds, FILE* infile, int iFormatVersion);
bool SaveFeedHistory(FeedHistory* pFeedHistory, FILE* outfile);
bool LoadFeedHistory(FeedHistory* pFeedHistory, FILE* infile, int iFormatVersion);
void CalcCriticalHealth(DownloadQueue* pDownloadQueue);
bool SaveServerStats(Servers* pServers, FILE* outfile);
bool LoadServerStats(Servers* pServers, FILE* infile, int iFormatVersion);
void ConvertDupeKey(char* buf, int bufsize);
public:
bool DownloadQueueExists();
bool PostQueueExists(bool bCompleted);
bool SaveDownloadQueue(DownloadQueue* pDownloadQueue);
bool LoadDownloadQueue(DownloadQueue* pDownloadQueue);
bool SaveFile(FileInfo* pFileInfo);
bool LoadArticles(FileInfo* pFileInfo);
bool DiscardDownloadQueue();
void DiscardDownloadQueue();
bool DiscardFile(FileInfo* pFileInfo);
bool SaveFeeds(Feeds* pFeeds, FeedHistory* pFeedHistory);
bool LoadFeeds(Feeds* pFeeds, FeedHistory* pFeedHistory);
bool SaveStats(Servers* pServers);
bool LoadStats(Servers* pServers);
void CleanupTempDir(DownloadQueue* pDownloadQueue);
};

File diff suppressed because it is too large

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -85,21 +85,36 @@ private:
char* m_szFilename;
long long m_lSize;
long long m_lRemainingSize;
long long m_lSuccessSize;
long long m_lFailedSize;
long long m_lMissedSize;
int m_iTotalArticles;
int m_iMissedArticles;
int m_iFailedArticles;
int m_iSuccessArticles;
time_t m_tTime;
bool m_bPaused;
bool m_bDeleted;
bool m_bFilenameConfirmed;
bool m_bParFile;
int m_iCompleted;
bool m_bOutputInitialized;
Mutex m_mutexOutputFile;
char* m_szOutputFilename;
Mutex* m_pMutexOutputFile;
int m_iPriority;
bool m_bExtraPriority;
int m_iActiveDownloads;
bool m_bAutoDeleted;
static int m_iIDGen;
static int m_iIDMax;
public:
FileInfo();
~FileInfo();
int GetID() { return m_iID; }
void SetID(int s);
void SetID(int iID);
static void ResetGenID(bool bMax);
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void SetNZBInfo(NZBInfo* pNZBInfo);
Articles* GetArticles() { return &m_Articles; }
@@ -111,10 +126,24 @@ public:
void MakeValidFilename();
bool GetFilenameConfirmed() { return m_bFilenameConfirmed; }
void SetFilenameConfirmed(bool bFilenameConfirmed) { m_bFilenameConfirmed = bFilenameConfirmed; }
void SetSize(long long s) { m_lSize = s; m_lRemainingSize = s; }
void SetSize(long long lSize) { m_lSize = lSize; m_lRemainingSize = lSize; }
long long GetSize() { return m_lSize; }
long long GetRemainingSize() { return m_lRemainingSize; }
void SetRemainingSize(long long s) { m_lRemainingSize = s; }
void SetRemainingSize(long long lRemainingSize) { m_lRemainingSize = lRemainingSize; }
long long GetMissedSize() { return m_lMissedSize; }
void SetMissedSize(long long lMissedSize) { m_lMissedSize = lMissedSize; }
long long GetSuccessSize() { return m_lSuccessSize; }
void SetSuccessSize(long long lSuccessSize) { m_lSuccessSize = lSuccessSize; }
long long GetFailedSize() { return m_lFailedSize; }
void SetFailedSize(long long lFailedSize) { m_lFailedSize = lFailedSize; }
int GetTotalArticles() { return m_iTotalArticles; }
void SetTotalArticles(int iTotalArticles) { m_iTotalArticles = iTotalArticles; }
int GetMissedArticles() { return m_iMissedArticles; }
void SetMissedArticles(int iMissedArticles) { m_iMissedArticles = iMissedArticles; }
int GetFailedArticles() { return m_iFailedArticles; }
void SetFailedArticles(int iFailedArticles) { m_iFailedArticles = iFailedArticles; }
int GetSuccessArticles() { return m_iSuccessArticles; }
void SetSuccessArticles(int iSuccessArticles) { m_iSuccessArticles = iSuccessArticles; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
bool GetPaused() { return m_bPaused; }
@@ -122,13 +151,24 @@ public:
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool Deleted) { m_bDeleted = Deleted; }
int GetCompleted() { return m_iCompleted; }
void SetCompleted(int s) { m_iCompleted = s; }
void SetCompleted(int iCompleted) { m_iCompleted = iCompleted; }
bool GetParFile() { return m_bParFile; }
void SetParFile(bool bParFile) { m_bParFile = bParFile; }
void ClearArticles();
void LockOutputFile();
void UnlockOutputFile();
const char* GetOutputFilename() { return m_szOutputFilename; }
void SetOutputFilename(const char* szOutputFilename);
bool GetOutputInitialized() { return m_bOutputInitialized; }
void SetOutputInitialized(bool bOutputInitialized) { m_bOutputInitialized = bOutputInitialized; }
bool IsDupe(const char* szFilename);
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
bool GetExtraPriority() { return m_bExtraPriority; }
void SetExtraPriority(bool bExtraPriority) { m_bExtraPriority = bExtraPriority; };
int GetActiveDownloads() { return m_iActiveDownloads; }
void SetActiveDownloads(int iActiveDownloads);
bool GetAutoDeleted() { return m_bAutoDeleted; }
void SetAutoDeleted(bool bAutoDeleted) { m_bAutoDeleted = bAutoDeleted; }
};
typedef std::deque<FileInfo*> FileQueue;
@@ -146,6 +186,9 @@ private:
int m_iRemainingParCount;
time_t m_tMinTime;
time_t m_tMaxTime;
int m_iMinPriority;
int m_iMaxPriority;
int m_iActiveDownloads;
friend class DownloadQueue;
@@ -162,9 +205,20 @@ public:
int GetRemainingParCount() { return m_iRemainingParCount; }
time_t GetMinTime() { return m_tMinTime; }
time_t GetMaxTime() { return m_tMaxTime; }
int GetMinPriority() { return m_iMinPriority; }
int GetMaxPriority() { return m_iMaxPriority; }
int GetActiveDownloads() { return m_iActiveDownloads; }
};
typedef std::deque<GroupInfo*> GroupQueueBase;
class GroupQueue : public GroupQueueBase
{
public:
~GroupQueue();
void Clear();
};
typedef std::deque<GroupInfo*> GroupQueue;
class NZBParameter
{
@@ -188,7 +242,79 @@ typedef std::deque<NZBParameter*> NZBParameterListBase;
class NZBParameterList : public NZBParameterListBase
{
public:
~NZBParameterList();
void SetParameter(const char* szName, const char* szValue);
NZBParameter* Find(const char* szName, bool bCaseSensitive);
void Clear();
void CopyFrom(NZBParameterList* pSourceParameters);
};
class ScriptStatus
{
public:
enum EStatus
{
srNone,
srFailure,
srSuccess
};
private:
char* m_szName;
EStatus m_eStatus;
friend class ScriptStatusList;
public:
ScriptStatus(const char* szName, EStatus eStatus);
~ScriptStatus();
const char* GetName() { return m_szName; }
EStatus GetStatus() { return m_eStatus; }
};
typedef std::deque<ScriptStatus*> ScriptStatusListBase;
class ScriptStatusList : public ScriptStatusListBase
{
public:
~ScriptStatusList();
void Add(const char* szScriptName, ScriptStatus::EStatus eStatus);
void Clear();
ScriptStatus::EStatus CalcTotalStatus();
};
class ServerStat
{
private:
int m_iServerID;
int m_iSuccessArticles;
int m_iFailedArticles;
public:
ServerStat(int iServerID);
int GetServerID() { return m_iServerID; }
int GetSuccessArticles() { return m_iSuccessArticles; }
void SetSuccessArticles(int iSuccessArticles) { m_iSuccessArticles = iSuccessArticles; }
int GetFailedArticles() { return m_iFailedArticles; }
void SetFailedArticles(int iFailedArticles) { m_iFailedArticles = iFailedArticles; }
};
typedef std::vector<ServerStat*> ServerStatListBase;
class ServerStatList : public ServerStatListBase
{
public:
~ServerStatList();
void SetStat(int iServerID, int iSuccessArticles, int iFailedArticles, bool bAdd);
void Add(ServerStatList* pServerStats);
void Clear();
};
enum EDupeMode
{
dmScore,
dmAll,
dmForce
};
class NZBInfoList;
@@ -196,20 +322,61 @@ class NZBInfoList;
class NZBInfo
{
public:
enum EParStatus
enum ERenameStatus
{
prNone,
prFailure,
prRepairPossible,
prSuccess
rsNone,
rsSkipped,
rsFailure,
rsSuccess
};
enum EScriptStatus
enum EParStatus
{
srNone,
srUnknown,
srFailure,
srSuccess
psNone,
psSkipped,
psFailure,
psSuccess,
psRepairPossible,
psManual
};
enum EUnpackStatus
{
usNone,
usSkipped,
usFailure,
usSuccess,
usSpace,
usPassword
};
enum ECleanupStatus
{
csNone,
csFailure,
csSuccess
};
enum EMoveStatus
{
msNone,
msFailure,
msSuccess
};
enum EDeleteStatus
{
dsNone,
dsManual,
dsHealth,
dsDupe
};
enum EMarkStatus
{
ksNone,
ksBad,
ksGood
};
typedef std::vector<char*> Files;
@@ -219,71 +386,164 @@ private:
int m_iID;
int m_iRefCount;
char* m_szFilename;
char* m_szName;
char* m_szDestDir;
char* m_szFinalDir;
char* m_szCategory;
int m_iFileCount;
int m_iParkedFileCount;
long long m_lSize;
long long m_lSuccessSize;
long long m_lFailedSize;
long long m_lCurrentSuccessSize;
long long m_lCurrentFailedSize;
long long m_lParSize;
long long m_lParSuccessSize;
long long m_lParFailedSize;
long long m_lParCurrentSuccessSize;
long long m_lParCurrentFailedSize;
int m_iTotalArticles;
int m_iSuccessArticles;
int m_iFailedArticles;
Files m_completedFiles;
bool m_bPostProcess;
ERenameStatus m_eRenameStatus;
EParStatus m_eParStatus;
EScriptStatus m_eScriptStatus;
EUnpackStatus m_eUnpackStatus;
ECleanupStatus m_eCleanupStatus;
EMoveStatus m_eMoveStatus;
EDeleteStatus m_eDeleteStatus;
EMarkStatus m_eMarkStatus;
bool m_bDeletePaused;
bool m_bManyDupeFiles;
char* m_szQueuedFilename;
bool m_bDeleted;
bool m_bDeleting;
bool m_bAvoidHistory;
bool m_bHealthPaused;
bool m_bParCleanup;
bool m_bParManual;
bool m_bCleanupDisk;
time_t m_tHistoryTime;
bool m_bUnpackCleanedUpDisk;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
unsigned int m_iFullContentHash;
unsigned int m_iFilteredContentHash;
NZBInfoList* m_Owner;
NZBParameterList m_ppParameters;
ScriptStatusList m_scriptStatuses;
ServerStatList m_ServerStats;
Mutex m_mutexLog;
Messages m_Messages;
int m_iIDMessageGen;
static int m_iIDGen;
static int m_iIDMax;
friend class NZBInfoList;
public:
NZBInfo();
NZBInfo(bool bPersistent = true);
~NZBInfo();
void AddReference();
void Retain();
void Release();
int GetID() { return m_iID; }
void SetID(int iID);
static void ResetGenID(bool bMax);
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
void GetNiceNZBName(char* szBuffer, int iSize);
static void MakeNiceNZBName(const char* szNZBFilename, char* szBuffer, int iSize);
static void MakeNiceNZBName(const char* szNZBFilename, char* szBuffer, int iSize, bool bRemoveExt);
const char* GetDestDir() { return m_szDestDir; } // needs locking (for shared objects)
void SetDestDir(const char* szDestDir); // needs locking (for shared objects)
const char* GetFinalDir() { return m_szFinalDir; } // needs locking (for shared objects)
void SetFinalDir(const char* szFinalDir); // needs locking (for shared objects)
const char* GetCategory() { return m_szCategory; } // needs locking (for shared objects)
void SetCategory(const char* szCategory); // needs locking (for shared objects)
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
const char* GetName() { return m_szName; } // needs locking (for shared objects)
void SetName(const char* szName); // needs locking (for shared objects)
int GetFileCount() { return m_iFileCount; }
void SetFileCount(int iFileCount) { m_iFileCount = iFileCount; }
int GetParkedFileCount() { return m_iParkedFileCount; }
void SetParkedFileCount(int iParkedFileCount) { m_iParkedFileCount = iParkedFileCount; }
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
long long GetSuccessSize() { return m_lSuccessSize; }
void SetSuccessSize(long long lSuccessSize) { m_lSuccessSize = lSuccessSize; }
long long GetFailedSize() { return m_lFailedSize; }
void SetFailedSize(long long lFailedSize) { m_lFailedSize = lFailedSize; }
long long GetCurrentSuccessSize() { return m_lCurrentSuccessSize; }
void SetCurrentSuccessSize(long long lCurrentSuccessSize) { m_lCurrentSuccessSize = lCurrentSuccessSize; }
long long GetCurrentFailedSize() { return m_lCurrentFailedSize; }
void SetCurrentFailedSize(long long lCurrentFailedSize) { m_lCurrentFailedSize = lCurrentFailedSize; }
long long GetParSize() { return m_lParSize; }
void SetParSize(long long lParSize) { m_lParSize = lParSize; }
long long GetParSuccessSize() { return m_lParSuccessSize; }
void SetParSuccessSize(long long lParSuccessSize) { m_lParSuccessSize = lParSuccessSize; }
long long GetParFailedSize() { return m_lParFailedSize; }
void SetParFailedSize(long long lParFailedSize) { m_lParFailedSize = lParFailedSize; }
long long GetParCurrentSuccessSize() { return m_lParCurrentSuccessSize; }
void SetParCurrentSuccessSize(long long lParCurrentSuccessSize) { m_lParCurrentSuccessSize = lParCurrentSuccessSize; }
long long GetParCurrentFailedSize() { return m_lParCurrentFailedSize; }
void SetParCurrentFailedSize(long long lParCurrentFailedSize) { m_lParCurrentFailedSize = lParCurrentFailedSize; }
int GetTotalArticles() { return m_iTotalArticles; }
void SetTotalArticles(int iTotalArticles) { m_iTotalArticles = iTotalArticles; }
int GetSuccessArticles() { return m_iSuccessArticles; }
void SetSuccessArticles(int iSuccessArticles) { m_iSuccessArticles = iSuccessArticles; }
int GetFailedArticles() { return m_iFailedArticles; }
void SetFailedArticles(int iFailedArticles) { m_iFailedArticles = iFailedArticles; }
void BuildDestDirName();
void BuildFinalDirName(char* szFinalDirBuf, int iBufSize);
Files* GetCompletedFiles() { return &m_completedFiles; } // needs locking (for shared objects)
void ClearCompletedFiles();
bool GetPostProcess() { return m_bPostProcess; }
void SetPostProcess(bool bPostProcess) { m_bPostProcess = bPostProcess; }
ERenameStatus GetRenameStatus() { return m_eRenameStatus; }
void SetRenameStatus(ERenameStatus eRenameStatus) { m_eRenameStatus = eRenameStatus; }
EParStatus GetParStatus() { return m_eParStatus; }
void SetParStatus(EParStatus eParStatus) { m_eParStatus = eParStatus; }
EScriptStatus GetScriptStatus() { return m_eScriptStatus; }
void SetScriptStatus(EScriptStatus eScriptStatus) { m_eScriptStatus = eScriptStatus; }
EUnpackStatus GetUnpackStatus() { return m_eUnpackStatus; }
void SetUnpackStatus(EUnpackStatus eUnpackStatus) { m_eUnpackStatus = eUnpackStatus; }
ECleanupStatus GetCleanupStatus() { return m_eCleanupStatus; }
void SetCleanupStatus(ECleanupStatus eCleanupStatus) { m_eCleanupStatus = eCleanupStatus; }
EMoveStatus GetMoveStatus() { return m_eMoveStatus; }
void SetMoveStatus(EMoveStatus eMoveStatus) { m_eMoveStatus = eMoveStatus; }
EDeleteStatus GetDeleteStatus() { return m_eDeleteStatus; }
void SetDeleteStatus(EDeleteStatus eDeleteStatus) { m_eDeleteStatus = eDeleteStatus; }
EMarkStatus GetMarkStatus() { return m_eMarkStatus; }
void SetMarkStatus(EMarkStatus eMarkStatus) { m_eMarkStatus = eMarkStatus; }
const char* GetQueuedFilename() { return m_szQueuedFilename; }
void SetQueuedFilename(const char* szQueuedFilename);
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool bDeleted) { m_bDeleted = bDeleted; }
bool GetDeleting() { return m_bDeleting; }
void SetDeleting(bool bDeleting) { m_bDeleting = bDeleting; }
bool GetDeletePaused() { return m_bDeletePaused; }
void SetDeletePaused(bool bDeletePaused) { m_bDeletePaused = bDeletePaused; }
bool GetManyDupeFiles() { return m_bManyDupeFiles; }
void SetManyDupeFiles(bool bManyDupeFiles) { m_bManyDupeFiles = bManyDupeFiles; }
bool GetAvoidHistory() { return m_bAvoidHistory; }
void SetAvoidHistory(bool bAvoidHistory) { m_bAvoidHistory = bAvoidHistory; }
bool GetHealthPaused() { return m_bHealthPaused; }
void SetHealthPaused(bool bHealthPaused) { m_bHealthPaused = bHealthPaused; }
bool GetParCleanup() { return m_bParCleanup; }
void SetParCleanup(bool bParCleanup) { m_bParCleanup = bParCleanup; }
bool GetCleanupDisk() { return m_bCleanupDisk; }
void SetCleanupDisk(bool bCleanupDisk) { m_bCleanupDisk = bCleanupDisk; }
time_t GetHistoryTime() { return m_tHistoryTime; }
void SetHistoryTime(time_t tHistoryTime) { m_tHistoryTime = tHistoryTime; }
bool GetUnpackCleanedUpDisk() { return m_bUnpackCleanedUpDisk; }
void SetUnpackCleanedUpDisk(bool bUnpackCleanedUpDisk) { m_bUnpackCleanedUpDisk = bUnpackCleanedUpDisk; }
NZBParameterList* GetParameters() { return &m_ppParameters; } // needs locking (for shared objects)
void SetParameter(const char* szName, const char* szValue); // needs locking (for shared objects)
ScriptStatusList* GetScriptStatuses() { return &m_scriptStatuses; } // needs locking (for shared objects)
ServerStatList* GetServerStats() { return &m_ServerStats; }
int CalcHealth();
int CalcCriticalHealth();
const char* GetDupeKey() { return m_szDupeKey; } // needs locking (for shared objects)
void SetDupeKey(const char* szDupeKey); // needs locking (for shared objects)
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
unsigned int GetFullContentHash() { return m_iFullContentHash; }
void SetFullContentHash(unsigned int iFullContentHash) { m_iFullContentHash = iFullContentHash; }
unsigned int GetFilteredContentHash() { return m_iFilteredContentHash; }
void SetFilteredContentHash(unsigned int iFilteredContentHash) { m_iFilteredContentHash = iFilteredContentHash; }
void AppendMessage(Message::EKind eKind, time_t tTime, const char* szText);
Messages* LockMessages();
void UnlockMessages();
@@ -309,59 +569,36 @@ public:
ptVerifyingSources,
ptRepairing,
ptVerifyingRepaired,
ptRenaming,
ptUnpacking,
ptMoving,
ptExecutingScript,
ptFinished
};
enum EParStatus
{
psNone,
psFailure,
psSuccess,
psRepairPossible
};
enum ERequestParCheck
{
rpNone,
rpCurrent,
rpAll
};
enum EScriptStatus
{
srNone,
srUnknown,
srFailure,
srSuccess
};
typedef std::deque<Message*> Messages;
private:
int m_iID;
NZBInfo* m_pNZBInfo;
char* m_szParFilename;
char* m_szInfoName;
bool m_bWorking;
bool m_bDeleted;
bool m_bParCheck;
EParStatus m_eParStatus;
EScriptStatus m_eScriptStatus;
ERequestParCheck m_eRequestParCheck;
bool m_bRequestParCheck;
EStage m_eStage;
char* m_szProgressLabel;
int m_iFileProgress;
int m_iStageProgress;
time_t m_tStartTime;
time_t m_tStageTime;
Thread* m_pScriptThread;
Thread* m_pPostThread;
Mutex m_mutexLog;
Messages m_Messages;
int m_iIDMessageGen;
static int m_iIDGen;
static int m_iIDMax;
public:
PostInfo();
@@ -369,8 +606,6 @@ public:
int GetID() { return m_iID; }
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
void SetNZBInfo(NZBInfo* pNZBInfo);
const char* GetParFilename() { return m_szParFilename; }
void SetParFilename(const char* szParFilename);
const char* GetInfoName() { return m_szInfoName; }
void SetInfoName(const char* szInfoName);
EStage GetStage() { return m_eStage; }
@@ -389,17 +624,11 @@ public:
void SetWorking(bool bWorking) { m_bWorking = bWorking; }
bool GetDeleted() { return m_bDeleted; }
void SetDeleted(bool bDeleted) { m_bDeleted = bDeleted; }
bool GetParCheck() { return m_bParCheck; }
void SetParCheck(bool bParCheck) { m_bParCheck = bParCheck; }
EParStatus GetParStatus() { return m_eParStatus; }
void SetParStatus(EParStatus eParStatus) { m_eParStatus = eParStatus; }
ERequestParCheck GetRequestParCheck() { return m_eRequestParCheck; }
void SetRequestParCheck(ERequestParCheck eRequestParCheck) { m_eRequestParCheck = eRequestParCheck; }
EScriptStatus GetScriptStatus() { return m_eScriptStatus; }
void SetScriptStatus(EScriptStatus eScriptStatus) { m_eScriptStatus = eScriptStatus; }
bool GetRequestParCheck() { return m_bRequestParCheck; }
void SetRequestParCheck(bool bRequestParCheck) { m_bRequestParCheck = bRequestParCheck; }
void AppendMessage(Message::EKind eKind, const char* szText);
Thread* GetScriptThread() { return m_pScriptThread; }
void SetScriptThread(Thread* pScriptThread) { m_pScriptThread = pScriptThread; }
Thread* GetPostThread() { return m_pPostThread; }
void SetPostThread(Thread* pPostThread) { m_pPostThread = pPostThread; }
Messages* LockMessages();
void UnlockMessages();
};
@@ -408,7 +637,157 @@ typedef std::deque<PostInfo*> PostQueue;
typedef std::vector<int> IDList;
typedef std::deque<NZBInfo*> HistoryList;
typedef std::vector<char*> NameList;
class UrlInfo
{
public:
enum EStatus
{
aiUndefined,
aiRunning,
aiFinished,
aiFailed,
aiRetry,
aiScanSkipped,
aiScanFailed
};
private:
int m_iID;
char* m_szURL;
char* m_szNZBFilename;
char* m_szCategory;
int m_iPriority;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
bool m_bAddTop;
bool m_bAddPaused;
bool m_bForce;
EStatus m_eStatus;
static int m_iIDGen;
static int m_iIDMax;
public:
UrlInfo();
~UrlInfo();
int GetID() { return m_iID; }
void SetID(int iID);
static void ResetGenID(bool bMax);
const char* GetURL() { return m_szURL; } // needs locking (for shared objects)
void SetURL(const char* szURL); // needs locking (for shared objects)
const char* GetNZBFilename() { return m_szNZBFilename; } // needs locking (for shared objects)
void SetNZBFilename(const char* szNZBFilename); // needs locking (for shared objects)
const char* GetCategory() { return m_szCategory; } // needs locking (for shared objects)
void SetCategory(const char* szCategory); // needs locking (for shared objects)
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
const char* GetDupeKey() { return m_szDupeKey; }
void SetDupeKey(const char* szDupeKey);
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
bool GetAddTop() { return m_bAddTop; }
void SetAddTop(bool bAddTop) { m_bAddTop = bAddTop; }
bool GetAddPaused() { return m_bAddPaused; }
void SetAddPaused(bool bAddPaused) { m_bAddPaused = bAddPaused; }
void GetName(char* szBuffer, int iSize); // needs locking (for shared objects)
static void MakeNiceName(const char* szURL, const char* szNZBFilename, char* szBuffer, int iSize);
bool GetForce() { return m_bForce; }
void SetForce(bool bForce) { m_bForce = bForce; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
};
typedef std::deque<UrlInfo*> UrlQueue;
class DupInfo
{
public:
enum EStatus
{
dsUndefined,
dsSuccess,
dsFailed,
dsDeleted,
dsDupe,
dsBad,
dsGood
};
private:
char* m_szName;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
long long m_lSize;
unsigned int m_iFullContentHash;
unsigned int m_iFilteredContentHash;
EStatus m_eStatus;
public:
DupInfo();
~DupInfo();
const char* GetName() { return m_szName; } // needs locking (for shared objects)
void SetName(const char* szName); // needs locking (for shared objects)
const char* GetDupeKey() { return m_szDupeKey; } // needs locking (for shared objects)
void SetDupeKey(const char* szDupeKey); // needs locking (for shared objects)
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
unsigned int GetFullContentHash() { return m_iFullContentHash; }
void SetFullContentHash(unsigned int iFullContentHash) { m_iFullContentHash = iFullContentHash; }
unsigned int GetFilteredContentHash() { return m_iFilteredContentHash; }
void SetFilteredContentHash(unsigned int iFilteredContentHash) { m_iFilteredContentHash = iFilteredContentHash; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
};
class HistoryInfo
{
public:
enum EKind
{
hkUnknown,
hkNZBInfo,
hkUrlInfo,
hkDupInfo
};
private:
int m_iID;
EKind m_eKind;
void* m_pInfo;
time_t m_tTime;
static int m_iIDGen;
static int m_iIDMax;
public:
HistoryInfo(NZBInfo* pNZBInfo);
HistoryInfo(UrlInfo* pUrlInfo);
HistoryInfo(DupInfo* pDupInfo);
~HistoryInfo();
int GetID() { return m_iID; }
void SetID(int iID);
static void ResetGenID(bool bMax);
EKind GetKind() { return m_eKind; }
NZBInfo* GetNZBInfo() { return (NZBInfo*)m_pInfo; }
UrlInfo* GetUrlInfo() { return (UrlInfo*)m_pInfo; }
DupInfo* GetDupInfo() { return (DupInfo*)m_pInfo; }
void DiscardUrlInfo() { m_pInfo = NULL; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
void GetName(char* szBuffer, int iSize); // needs locking (for shared objects)
};
typedef std::deque<HistoryInfo*> HistoryList;
class DownloadQueue
{
@@ -418,6 +797,7 @@ protected:
PostQueue m_PostQueue;
HistoryList m_HistoryList;
FileQueue m_ParkedFiles;
UrlQueue m_UrlQueue;
public:
NZBInfoList* GetNZBInfoList() { return &m_NZBInfoList; }
@@ -425,6 +805,7 @@ public:
PostQueue* GetPostQueue() { return &m_PostQueue; }
HistoryList* GetHistoryList() { return &m_HistoryList; }
FileQueue* GetParkedFiles() { return &m_ParkedFiles; }
UrlQueue* GetUrlQueue() { return &m_UrlQueue; }
void BuildGroups(GroupQueue* pGroupQueue);
};

DupeCoordinator.cpp (new file, 583 lines)

@@ -0,0 +1,583 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <set>
#include <algorithm>
#include "nzbget.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
#include "DiskState.h"
#include "NZBFile.h"
#include "QueueCoordinator.h"
#include "DupeCoordinator.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
bool DupeCoordinator::IsDupeSuccess(NZBInfo* pNZBInfo)
{
bool bFailure =
pNZBInfo->GetDeleteStatus() != NZBInfo::dsNone ||
pNZBInfo->GetMarkStatus() == NZBInfo::ksBad ||
pNZBInfo->GetParStatus() == NZBInfo::psFailure ||
pNZBInfo->GetUnpackStatus() == NZBInfo::usFailure ||
pNZBInfo->GetUnpackStatus() == NZBInfo::usPassword ||
(pNZBInfo->GetParStatus() == NZBInfo::psSkipped &&
pNZBInfo->GetUnpackStatus() == NZBInfo::usSkipped &&
pNZBInfo->CalcHealth() < pNZBInfo->CalcCriticalHealth());
return !bFailure;
}
bool DupeCoordinator::SameNameOrKey(const char* szName1, const char* szDupeKey1,
const char* szName2, const char* szDupeKey2)
{
bool bHasDupeKeys = !Util::EmptyStr(szDupeKey1) && !Util::EmptyStr(szDupeKey2);
return (bHasDupeKeys && !strcmp(szDupeKey1, szDupeKey2)) ||
(!bHasDupeKeys && !strcmp(szName1, szName2));
}
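A few illustrative calls (hypothetical titles and dupe keys) showing how the matching rule above behaves:
// SameNameOrKey("Title.A", "tv-1x01", "Title.B", "tv-1x01") -> true  (both dupe keys set and equal)
// SameNameOrKey("Title.A", "",        "Title.A", "")        -> true  (no dupe keys, names equal)
// SameNameOrKey("Title.A", "tv-1x01", "Title.B", "tv-1x02") -> false (dupe keys set but different)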
/**
Check if the title was already downloaded or is already queued:
- if there is a duplicate with exactly same content (via hash-check)
in queue or in history - the new item is skipped;
- if there is a duplicate marked as good in history - the new item is skipped;
- if there is a duplicate with success-status in dup-history but
there are no duplicates in recent history - the new item is skipped;
- if queue has a duplicate with the same or higher score - the new item
is moved to history as dupe-backup;
- if queue has a duplicate with lower score - the existing item is moved
to history as dupe-backup (unless it is in post-processing stage) and
the new item is added to queue;
- if queue doesn't have duplicates - the new item is added to queue.
*/
void DupeCoordinator::NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
debug("Checking duplicates for %s", pNZBInfo->GetName());
GroupQueue groupQueue;
pDownloadQueue->BuildGroups(&groupQueue);
// find duplicates in download queue with exactly same content
for (GroupQueue::iterator it = groupQueue.begin(); it != groupQueue.end(); it++)
{
GroupInfo* pGroupInfo = *it;
NZBInfo* pGroupNZBInfo = pGroupInfo->GetNZBInfo();
bool bSameContent = (pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pGroupNZBInfo->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pGroupNZBInfo->GetFilteredContentHash());
// if there is a duplicate with exactly same content (via hash-check)
// in queue - the new item is skipped
if (pGroupNZBInfo != pNZBInfo && bSameContent)
{
if (!strcmp(pNZBInfo->GetName(), pGroupNZBInfo->GetName()))
{
warn("Skipping duplicate %s, already queued", pNZBInfo->GetName());
}
else
{
warn("Skipping duplicate %s, already queued as %s",
pNZBInfo->GetName(), pGroupNZBInfo->GetName());
}
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsManual);
DeleteQueuedFile(pNZBInfo->GetQueuedFilename());
return;
}
}
// find duplicates in post queue with exactly same content
for (PostQueue::iterator it = pDownloadQueue->GetPostQueue()->begin(); it != pDownloadQueue->GetPostQueue()->end(); it++)
{
PostInfo* pPostInfo = *it;
bool bSameContent = (pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pPostInfo->GetNZBInfo()->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pPostInfo->GetNZBInfo()->GetFilteredContentHash());
// if there is a duplicate with exactly same content (via hash-check)
// in queue - the new item is skipped;
if (bSameContent)
{
if (!strcmp(pNZBInfo->GetName(), pPostInfo->GetNZBInfo()->GetName()))
{
warn("Skipping duplicate %s, already queued", pNZBInfo->GetName());
}
else
{
warn("Skipping duplicate %s, already queued as %s",
pNZBInfo->GetName(), pPostInfo->GetNZBInfo()->GetName());
}
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsManual);
DeleteQueuedFile(pNZBInfo->GetQueuedFilename());
return;
}
}
// find duplicates in history
bool bSkip = false;
bool bGood = false;
bool bSameContent = false;
const char* szDupeName = NULL;
// find duplicates in history having exactly same content
// also: nzb-files having duplicates marked as good are skipped
// also (only in score mode): nzb-files having success-duplicates in dup-history but no duplicates in recent history are skipped
for (HistoryList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
((pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pHistoryInfo->GetNZBInfo()->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pHistoryInfo->GetNZBInfo()->GetFilteredContentHash())))
{
bSkip = true;
bSameContent = true;
szDupeName = pHistoryInfo->GetNZBInfo()->GetName();
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDupInfo &&
((pNZBInfo->GetFullContentHash() > 0 &&
pNZBInfo->GetFullContentHash() == pHistoryInfo->GetDupInfo()->GetFullContentHash()) ||
(pNZBInfo->GetFilteredContentHash() > 0 &&
pNZBInfo->GetFilteredContentHash() == pHistoryInfo->GetDupInfo()->GetFilteredContentHash())))
{
bSkip = true;
bSameContent = true;
szDupeName = pHistoryInfo->GetDupInfo()->GetName();
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
bSkip = true;
bGood = true;
szDupeName = pHistoryInfo->GetNZBInfo()->GetName();
break;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDupInfo &&
pHistoryInfo->GetDupInfo()->GetDupeMode() != dmForce &&
(pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood ||
(pNZBInfo->GetDupeMode() == dmScore &&
pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsSuccess &&
pNZBInfo->GetDupeScore() <= pHistoryInfo->GetDupInfo()->GetDupeScore())) &&
SameNameOrKey(pHistoryInfo->GetDupInfo()->GetName(), pHistoryInfo->GetDupInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
bSkip = true;
bGood = pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood;
szDupeName = pHistoryInfo->GetDupInfo()->GetName();
break;
}
}
if (!bSameContent && !bGood && pNZBInfo->GetDupeMode() == dmScore)
{
// nzb-files having success-duplicates in recent history (with different content) are added to history for backup
for (HistoryList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()) &&
pNZBInfo->GetDupeScore() <= pHistoryInfo->GetNZBInfo()->GetDupeScore() &&
IsDupeSuccess(pHistoryInfo->GetNZBInfo()))
{
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
info("Collection %s is duplicate to %s", pNZBInfo->GetName(), pHistoryInfo->GetNZBInfo()->GetName());
return;
}
}
}
if (bSkip)
{
if (!strcmp(pNZBInfo->GetName(), szDupeName))
{
warn("Skipping duplicate %s, found in history with %s", pNZBInfo->GetName(),
bSameContent ? "exactly same content" : bGood ? "good status" : "success status");
}
else
{
warn("Skipping duplicate %s, found in history %s with %s",
pNZBInfo->GetName(), szDupeName,
bSameContent ? "exactly same content" : bGood ? "good status" : "success status");
}
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsManual);
DeleteQueuedFile(pNZBInfo->GetQueuedFilename());
return;
}
// find duplicates in download queue and post-queue and handle both items according to their scores:
// only one item remains in queue and another one is moved to history as dupe-backup
if (pNZBInfo->GetDupeMode() == dmScore)
{
// find duplicates in download queue
for (GroupQueue::iterator it = groupQueue.begin(); it != groupQueue.end(); it++)
{
GroupInfo* pGroupInfo = *it;
NZBInfo* pGroupNZBInfo = pGroupInfo->GetNZBInfo();
if (pGroupNZBInfo != pNZBInfo &&
pGroupNZBInfo->GetDupeMode() != dmForce &&
SameNameOrKey(pGroupNZBInfo->GetName(), pGroupNZBInfo->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
// if queue has a duplicate with the same or higher score - the new item
// is moved to history as dupe-backup
if (pNZBInfo->GetDupeScore() <= pGroupNZBInfo->GetDupeScore())
{
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
info("Collection %s is duplicate to %s", pNZBInfo->GetName(), pGroupNZBInfo->GetName());
return;
}
// if queue has a duplicate with lower score - the existing item is moved
// to history as dupe-backup (unless it is in post-processing stage) and
// the new item is added to queue
else
{
// unless it is in post-processing stage
bool bPostProcess = false;
for (PostQueue::iterator it = pDownloadQueue->GetPostQueue()->begin(); it != pDownloadQueue->GetPostQueue()->end(); it++)
{
PostInfo* pPostInfo = *it;
if (pPostInfo->GetNZBInfo() == pGroupNZBInfo)
{
bPostProcess = true;
break;
}
}
if (!bPostProcess)
{
// the existing queue item is moved to history as dupe-backup
info("Moving collection %s with lower duplicate score to history", pGroupNZBInfo->GetName());
pGroupNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
g_pQueueCoordinator->GetQueueEditor()->LockedEditEntry(pDownloadQueue, pGroupInfo->GetLastID(), false, QueueEditor::eaGroupDelete, 0, NULL);
}
}
}
}
// find duplicates in post queue
for (PostQueue::iterator it = pDownloadQueue->GetPostQueue()->begin(); it != pDownloadQueue->GetPostQueue()->end(); it++)
{
PostInfo* pPostInfo = *it;
// if queue has a duplicate with the same or higher score - the new item
// is moved to history as dupe-backup;
if (pPostInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pNZBInfo->GetDupeScore() <= pPostInfo->GetNZBInfo()->GetDupeScore() &&
SameNameOrKey(pPostInfo->GetNZBInfo()->GetName(), pPostInfo->GetNZBInfo()->GetDupeKey(),
pNZBInfo->GetName(), pNZBInfo->GetDupeKey()))
{
// Flag telling QueueCoordinator to skip the nzb-file
pNZBInfo->SetDeleteStatus(NZBInfo::dsDupe);
info("Collection %s is duplicate to %s", pNZBInfo->GetName(), pPostInfo->GetNZBInfo()->GetName());
return;
}
}
}
}
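// Worked example (editor's illustration, not part of the original sources): with DupeMode
// "score", a new item whose dupe key matches a queued item and whose DupeScore is 50
// against a queued score of 80 is skipped and stored in history as dupe-backup (dsDupe);
// if the new item instead scores 100, the queued lower-score item is moved to history
// (unless it is already post-processing) and the new item is downloaded.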
/**
- if download of an item fails and there are duplicates in history -
return the best duplicate from history to the queue for download;
- if download of an item completes successfully - nothing extra needs to be done;
*/
void DupeCoordinator::NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
debug("Processing duplicates for %s", pNZBInfo->GetName());
if (pNZBInfo->GetDupeMode() == dmScore && !IsDupeSuccess(pNZBInfo))
{
ReturnBestDupe(pDownloadQueue, pNZBInfo, pNZBInfo->GetName(), pNZBInfo->GetDupeKey());
}
}
/**
Returns the best duplicate from history to download queue.
*/
void DupeCoordinator::ReturnBestDupe(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szNZBName, const char* szDupeKey)
{
// check if history (recent or dup) has other success-duplicates or good-duplicates
bool bHistoryDupe = false;
int iHistoryScore = 0;
for (HistoryList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
bool bGoodDupe = false;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
(IsDupeSuccess(pHistoryInfo->GetNZBInfo()) ||
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood) &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
if (!bHistoryDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iHistoryScore)
{
iHistoryScore = pHistoryInfo->GetNZBInfo()->GetDupeScore();
}
bHistoryDupe = true;
bGoodDupe = pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood;
}
if (pHistoryInfo->GetKind() == HistoryInfo::hkDupInfo &&
pHistoryInfo->GetDupInfo()->GetDupeMode() != dmForce &&
(pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsSuccess ||
pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood) &&
SameNameOrKey(pHistoryInfo->GetDupInfo()->GetName(), pHistoryInfo->GetDupInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
if (!bHistoryDupe || pHistoryInfo->GetDupInfo()->GetDupeScore() > iHistoryScore)
{
iHistoryScore = pHistoryInfo->GetDupInfo()->GetDupeScore();
}
bHistoryDupe = true;
bGoodDupe = pHistoryInfo->GetDupInfo()->GetStatus() == DupInfo::dsGood;
}
if (bGoodDupe)
{
// another duplicate with good-status exists - exit without moving other dupes to queue
return;
}
}
// check if duplicates exist in post-processing queue
bool bPostDupe = false;
int iPostScore = 0;
for (PostQueue::iterator it = pDownloadQueue->GetPostQueue()->begin(); it != pDownloadQueue->GetPostQueue()->end(); it++)
{
PostInfo* pPostInfo = *it;
if (pPostInfo->GetNZBInfo() != pNZBInfo &&
pPostInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
SameNameOrKey(pPostInfo->GetNZBInfo()->GetName(), pPostInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey) &&
(!bPostDupe || pPostInfo->GetNZBInfo()->GetDupeScore() > iPostScore))
{
iPostScore = pPostInfo->GetNZBInfo()->GetDupeScore();
bPostDupe = true;
}
}
// check if duplicates exist in download queue
GroupQueue groupQueue;
pDownloadQueue->BuildGroups(&groupQueue);
bool bQueueDupe = false;
int iQueueScore = 0;
for (GroupQueue::iterator it = groupQueue.begin(); it != groupQueue.end(); it++)
{
GroupInfo* pGroupInfo = *it;
NZBInfo* pGroupNZBInfo = pGroupInfo->GetNZBInfo();
if (pGroupNZBInfo != pNZBInfo &&
pGroupNZBInfo->GetDupeMode() != dmForce &&
SameNameOrKey(pGroupNZBInfo->GetName(), pGroupNZBInfo->GetDupeKey(), szNZBName, szDupeKey) &&
(!bQueueDupe || pGroupNZBInfo->GetDupeScore() > iQueueScore))
{
iQueueScore = pGroupNZBInfo->GetDupeScore();
bQueueDupe = true;
}
}
// find dupe-backup with highest score, whose score is also higher than other
// success-duplicates and higher than already queued items
HistoryInfo* pHistoryDupe = NULL;
for (HistoryList::iterator it = pDownloadQueue->GetHistoryList()->begin(); it != pDownloadQueue->GetHistoryList()->end(); it++)
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsDupe &&
pHistoryInfo->GetNZBInfo()->CalcHealth() >= pHistoryInfo->GetNZBInfo()->CalcCriticalHealth() &&
pHistoryInfo->GetNZBInfo()->GetMarkStatus() != NZBInfo::ksBad &&
(!bHistoryDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iHistoryScore) &&
(!bPostDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iPostScore) &&
(!bQueueDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > iQueueScore) &&
(!pHistoryDupe || pHistoryInfo->GetNZBInfo()->GetDupeScore() > pHistoryDupe->GetNZBInfo()->GetDupeScore()) &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
pHistoryDupe = pHistoryInfo;
}
}
// move that dupe-backup from history to download queue
if (pHistoryDupe)
{
info("Found duplicate %s for %s", pHistoryDupe->GetNZBInfo()->GetName(), szNZBName);
HistoryRedownload(pDownloadQueue, pHistoryDupe);
}
}
void DupeCoordinator::HistoryMark(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, bool bGood)
{
char szNZBName[1024];
pHistoryInfo->GetName(szNZBName, 1024);
info("Marking %s as %s", szNZBName, (bGood ? "good" : "bad"));
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo)
{
pHistoryInfo->GetNZBInfo()->SetMarkStatus(bGood ? NZBInfo::ksGood : NZBInfo::ksBad);
}
else if (pHistoryInfo->GetKind() == HistoryInfo::hkDupInfo)
{
pHistoryInfo->GetDupInfo()->SetStatus(bGood ? DupInfo::dsGood : DupInfo::dsBad);
}
else
{
error("Could not mark %s as bad: history item has wrong type", szNZBName);
return;
}
if (!g_pOptions->GetDupeCheck() ||
(pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() == dmForce) ||
(pHistoryInfo->GetKind() == HistoryInfo::hkDupInfo &&
pHistoryInfo->GetDupInfo()->GetDupeMode() == dmForce))
{
return;
}
if (bGood)
{
// mark as good
// moving all duplicates from history to dup-history
HistoryCleanup(pDownloadQueue, pHistoryInfo);
}
else
{
// mark as bad
const char* szDupeKey = pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo ? pHistoryInfo->GetNZBInfo()->GetDupeKey() :
pHistoryInfo->GetKind() == HistoryInfo::hkDupInfo ? pHistoryInfo->GetDupInfo()->GetDupeKey() :
NULL;
ReturnBestDupe(pDownloadQueue, NULL, szNZBName, szDupeKey);
}
}
void DupeCoordinator::HistoryCleanup(DownloadQueue* pDownloadQueue, HistoryInfo* pMarkHistoryInfo)
{
const char* szDupeKey = pMarkHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo ? pMarkHistoryInfo->GetNZBInfo()->GetDupeKey() :
pMarkHistoryInfo->GetKind() == HistoryInfo::hkDupInfo ? pMarkHistoryInfo->GetDupInfo()->GetDupeKey() :
NULL;
const char* szNZBName = pMarkHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo ? pMarkHistoryInfo->GetNZBInfo()->GetName() :
pMarkHistoryInfo->GetKind() == HistoryInfo::hkDupInfo ? pMarkHistoryInfo->GetDupInfo()->GetName() :
NULL;
bool bChanged = false;
int index = 0;
// traversing in reverse order so that items are deleted in the order they were added to history
// (just to produce the log messages in a more logical order)
for (HistoryList::reverse_iterator it = pDownloadQueue->GetHistoryList()->rbegin(); it != pDownloadQueue->GetHistoryList()->rend(); )
{
HistoryInfo* pHistoryInfo = *it;
if (pHistoryInfo->GetKind() == HistoryInfo::hkNZBInfo &&
pHistoryInfo->GetNZBInfo()->GetDupeMode() != dmForce &&
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsDupe &&
pHistoryInfo != pMarkHistoryInfo &&
SameNameOrKey(pHistoryInfo->GetNZBInfo()->GetName(), pHistoryInfo->GetNZBInfo()->GetDupeKey(), szNZBName, szDupeKey))
{
HistoryTransformToDup(pDownloadQueue, pHistoryInfo, index);
index++;
it = pDownloadQueue->GetHistoryList()->rbegin() + index;
bChanged = true;
}
else
{
it++;
index++;
}
}
if (bChanged && g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
}
void DupeCoordinator::HistoryTransformToDup(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, int rindex)
{
char szNiceName[1024];
pHistoryInfo->GetName(szNiceName, 1024);
// replace history element
DupInfo* pDupInfo = new DupInfo();
pDupInfo->SetName(pHistoryInfo->GetNZBInfo()->GetName());
pDupInfo->SetDupeKey(pHistoryInfo->GetNZBInfo()->GetDupeKey());
pDupInfo->SetDupeScore(pHistoryInfo->GetNZBInfo()->GetDupeScore());
pDupInfo->SetDupeMode(pHistoryInfo->GetNZBInfo()->GetDupeMode());
pDupInfo->SetSize(pHistoryInfo->GetNZBInfo()->GetSize());
pDupInfo->SetFullContentHash(pHistoryInfo->GetNZBInfo()->GetFullContentHash());
pDupInfo->SetFilteredContentHash(pHistoryInfo->GetNZBInfo()->GetFilteredContentHash());
pDupInfo->SetStatus(
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksGood ? DupInfo::dsGood :
pHistoryInfo->GetNZBInfo()->GetMarkStatus() == NZBInfo::ksBad ? DupInfo::dsBad :
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsDupe ? DupInfo::dsDupe :
pHistoryInfo->GetNZBInfo()->GetDeleteStatus() == NZBInfo::dsManual ? DupInfo::dsDeleted :
IsDupeSuccess(pHistoryInfo->GetNZBInfo()) ? DupInfo::dsSuccess :
DupInfo::dsFailed);
HistoryInfo* pNewHistoryInfo = new HistoryInfo(pDupInfo);
pNewHistoryInfo->SetTime(pHistoryInfo->GetTime());
(*pDownloadQueue->GetHistoryList())[pDownloadQueue->GetHistoryList()->size() - 1 - rindex] = pNewHistoryInfo;
DeleteQueuedFile(pHistoryInfo->GetNZBInfo()->GetQueuedFilename());
delete pHistoryInfo;
info("Collection %s removed from history", szNiceName);
}

53
DupeCoordinator.h Normal file

@@ -0,0 +1,53 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef DUPECOORDINATOR_H
#define DUPECOORDINATOR_H
#include <deque>
#include "DownloadInfo.h"
class DupeCoordinator
{
private:
bool IsDupeSuccess(NZBInfo* pNZBInfo);
void ReturnBestDupe(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szNZBName, const char* szDupeKey);
void HistoryReturnDupe(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo);
void HistoryCleanup(DownloadQueue* pDownloadQueue, HistoryInfo* pMarkHistoryInfo);
bool SameNameOrKey(const char* szName1, const char* szDupeKey1, const char* szName2, const char* szDupeKey2);
protected:
virtual void HistoryRedownload(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo) = 0;
virtual void DeleteQueuedFile(const char* szQueuedFile) = 0;
public:
void NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void HistoryMark(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, bool bGood);
void HistoryTransformToDup(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, int rindex);
};
#endif
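The header above only declares the two pure virtual hooks; the concrete subclass lives elsewhere in the nzbget sources and is not part of this diff. Below is a minimal sketch of how such a subclass could wire the hooks, assuming only the declarations above; the class name ExampleDupeCoordinator, the printf logging and the bare remove() call are illustrative, not taken from nzbget.

#include <stdio.h>
#include "DupeCoordinator.h"

// Sketch only: a hypothetical subclass implementing the two protected hooks.
class ExampleDupeCoordinator : public DupeCoordinator
{
protected:
	// Called when a dupe-backup from history should be downloaded again.
	virtual void HistoryRedownload(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo)
	{
		char szNZBName[1024];
		pHistoryInfo->GetName(szNZBName, 1024);
		// the real implementation would re-insert the item into pDownloadQueue;
		// here we only report the request
		printf("redownload requested for %s\n", szNZBName);
	}

	// Called when the queued nzb-file on disk is no longer needed.
	virtual void DeleteQueuedFile(const char* szQueuedFile)
	{
		if (szQueuedFile && *szQueuedFile)
		{
			remove(szQueuedFile); // delete the temporary queued file
		}
	}
};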

742
FeedCoordinator.cpp Normal file

@@ -0,0 +1,742 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sys/stat.h>
#ifndef WIN32
#include <unistd.h>
#include <sys/time.h>
#endif
#include "nzbget.h"
#include "FeedCoordinator.h"
#include "Options.h"
#include "WebDownloader.h"
#include "Log.h"
#include "Util.h"
#include "FeedFile.h"
#include "FeedFilter.h"
#include "UrlCoordinator.h"
#include "DiskState.h"
extern Options* g_pOptions;
extern UrlCoordinator* g_pUrlCoordinator;
extern DiskState* g_pDiskState;
FeedCoordinator::FeedCacheItem::FeedCacheItem(const char* szUrl, int iCacheTimeSec,const char* szCacheId,
time_t tLastUsage, FeedItemInfos* pFeedItemInfos)
{
m_szUrl = strdup(szUrl);
m_iCacheTimeSec = iCacheTimeSec;
m_szCacheId = strdup(szCacheId);
m_tLastUsage = tLastUsage;
m_pFeedItemInfos = pFeedItemInfos;
m_pFeedItemInfos->Retain();
}
FeedCoordinator::FeedCacheItem::~FeedCacheItem()
{
free(m_szUrl);
free(m_szCacheId);
m_pFeedItemInfos->Release();
}
FeedCoordinator::FeedCoordinator()
{
debug("Creating FeedCoordinator");
m_bForce = false;
m_bSave = false;
m_UrlCoordinatorObserver.m_pOwner = this;
g_pUrlCoordinator->Attach(&m_UrlCoordinatorObserver);
}
FeedCoordinator::~FeedCoordinator()
{
debug("Destroying FeedCoordinator");
// Cleanup
debug("Deleting FeedDownloaders");
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
delete *it;
}
m_ActiveDownloads.clear();
debug("Deleting Feeds");
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
delete *it;
}
m_Feeds.clear();
debug("Deleting FeedCache");
for (FeedCache::iterator it = m_FeedCache.begin(); it != m_FeedCache.end(); it++)
{
delete *it;
}
m_FeedCache.clear();
debug("FeedCoordinator destroyed");
}
void FeedCoordinator::AddFeed(FeedInfo* pFeedInfo)
{
m_Feeds.push_back(pFeedInfo);
}
void FeedCoordinator::Run()
{
debug("Entering FeedCoordinator-loop");
m_mutexDownloads.Lock();
if (g_pOptions->GetServerMode() && g_pOptions->GetSaveQueue() && g_pOptions->GetReloadQueue())
{
g_pDiskState->LoadFeeds(&m_Feeds, &m_FeedHistory);
}
m_mutexDownloads.Unlock();
int iSleepInterval = 100;
int iUpdateCounter = 0;
int iCleanupCounter = 60000;
while (!IsStopped())
{
usleep(iSleepInterval * 1000);
iUpdateCounter += iSleepInterval;
if (iUpdateCounter >= 1000)
{
// this code should not be called too often, once per second is OK
if (!(g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2()) || m_bForce || g_pOptions->GetUrlForce())
{
m_mutexDownloads.Lock();
time_t tCurrent = time(NULL);
if ((int)m_ActiveDownloads.size() < g_pOptions->GetUrlConnections())
{
m_bForce = false;
// check feed list and update feeds
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo = *it;
if (((pFeedInfo->GetInterval() > 0 &&
(tCurrent - pFeedInfo->GetLastUpdate() >= pFeedInfo->GetInterval() * 60 ||
tCurrent < pFeedInfo->GetLastUpdate())) ||
pFeedInfo->GetFetch()) &&
pFeedInfo->GetStatus() != FeedInfo::fsRunning)
{
StartFeedDownload(pFeedInfo, pFeedInfo->GetFetch());
}
else if (pFeedInfo->GetFetch())
{
m_bForce = true;
}
}
}
m_mutexDownloads.Unlock();
}
CheckSaveFeeds();
ResetHangingDownloads();
iUpdateCounter = 0;
}
iCleanupCounter += iSleepInterval;
if (iCleanupCounter >= 60000)
{
// clean up feed history once a minute
CleanupHistory();
CleanupCache();
CheckSaveFeeds();
iCleanupCounter = 0;
}
}
// waiting for downloads
debug("FeedCoordinator: waiting for Downloads to complete");
bool completed = false;
while (!completed)
{
m_mutexDownloads.Lock();
completed = m_ActiveDownloads.size() == 0;
m_mutexDownloads.Unlock();
CheckSaveFeeds();
usleep(100 * 1000);
ResetHangingDownloads();
}
debug("FeedCoordinator: Downloads are completed");
debug("Exiting FeedCoordinator-loop");
}
void FeedCoordinator::Stop()
{
Thread::Stop();
debug("Stopping UrlDownloads");
m_mutexDownloads.Lock();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
(*it)->Stop();
}
m_mutexDownloads.Unlock();
debug("UrlDownloads are notified");
}
void FeedCoordinator::ResetHangingDownloads()
{
const int TimeOut = g_pOptions->GetTerminateTimeout();
if (TimeOut == 0)
{
return;
}
m_mutexDownloads.Lock();
time_t tm = ::time(NULL);
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end();)
{
FeedDownloader* pFeedDownloader = *it;
if (tm - pFeedDownloader->GetLastUpdateTime() > TimeOut &&
pFeedDownloader->GetStatus() == FeedDownloader::adRunning)
{
debug("Terminating hanging download %s", pFeedDownloader->GetInfoName());
if (pFeedDownloader->Terminate())
{
error("Terminated hanging download %s", pFeedDownloader->GetInfoName());
pFeedDownloader->GetFeedInfo()->SetStatus(FeedInfo::fsUndefined);
}
else
{
error("Could not terminate hanging download %s", pFeedDownloader->GetInfoName());
}
m_ActiveDownloads.erase(it);
// it's not safe to destroy pFeedDownloader, because the state of object is unknown
delete pFeedDownloader;
it = m_ActiveDownloads.begin();
continue;
}
it++;
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::LogDebugInfo()
{
debug(" FeedCoordinator");
debug(" ----------------");
m_mutexDownloads.Lock();
debug(" Active Downloads: %i", m_ActiveDownloads.size());
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
FeedDownloader* pFeedDownloader = *it;
pFeedDownloader->LogDebugInfo();
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::StartFeedDownload(FeedInfo* pFeedInfo, bool bForce)
{
debug("Starting new FeedDownloader for %", pFeedInfo->GetName());
FeedDownloader* pFeedDownloader = new FeedDownloader();
pFeedDownloader->SetAutoDestroy(true);
pFeedDownloader->Attach(this);
pFeedDownloader->SetFeedInfo(pFeedInfo);
pFeedDownloader->SetURL(pFeedInfo->GetUrl());
if (strlen(pFeedInfo->GetName()) > 0)
{
pFeedDownloader->SetInfoName(pFeedInfo->GetName());
}
else
{
char szUrlName[1024];
UrlInfo::MakeNiceName(pFeedInfo->GetUrl(), "", szUrlName, sizeof(szUrlName));
pFeedDownloader->SetInfoName(szUrlName);
}
pFeedDownloader->SetForce(bForce || g_pOptions->GetUrlForce());
char tmp[1024];
if (pFeedInfo->GetID() > 0)
{
snprintf(tmp, 1024, "%sfeed-%i.tmp", g_pOptions->GetTempDir(), pFeedInfo->GetID());
}
else
{
snprintf(tmp, 1024, "%sfeed-%i-%i.tmp", g_pOptions->GetTempDir(), (int)time(NULL), rand());
}
tmp[1024-1] = '\0';
pFeedDownloader->SetOutputFilename(tmp);
pFeedInfo->SetStatus(FeedInfo::fsRunning);
pFeedInfo->SetForce(bForce);
pFeedInfo->SetFetch(false);
m_ActiveDownloads.push_back(pFeedDownloader);
pFeedDownloader->Start();
}
void FeedCoordinator::Update(Subject* pCaller, void* pAspect)
{
debug("Notification from FeedDownloader received");
FeedDownloader* pFeedDownloader = (FeedDownloader*) pCaller;
if ((pFeedDownloader->GetStatus() == WebDownloader::adFinished) ||
(pFeedDownloader->GetStatus() == WebDownloader::adFailed) ||
(pFeedDownloader->GetStatus() == WebDownloader::adRetry))
{
FeedCompleted(pFeedDownloader);
}
}
void FeedCoordinator::FeedCompleted(FeedDownloader* pFeedDownloader)
{
debug("Feed downloaded");
FeedInfo* pFeedInfo = pFeedDownloader->GetFeedInfo();
bool bStatusOK = pFeedDownloader->GetStatus() == WebDownloader::adFinished;
if (bStatusOK)
{
pFeedInfo->SetOutputFilename(pFeedDownloader->GetOutputFilename());
}
// delete Download from Queue
m_mutexDownloads.Lock();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
FeedDownloader* pa = *it;
if (pa == pFeedDownloader)
{
m_ActiveDownloads.erase(it);
break;
}
}
m_mutexDownloads.Unlock();
if (bStatusOK)
{
if (!pFeedInfo->GetPreview())
{
FeedFile* pFeedFile = FeedFile::Create(pFeedInfo->GetOutputFilename());
remove(pFeedInfo->GetOutputFilename());
m_mutexDownloads.Lock();
if (pFeedFile)
{
ProcessFeed(pFeedInfo, pFeedFile->GetFeedItemInfos());
delete pFeedFile;
}
pFeedInfo->SetLastUpdate(time(NULL));
pFeedInfo->SetForce(false);
m_bSave = true;
m_mutexDownloads.Unlock();
}
pFeedInfo->SetStatus(FeedInfo::fsFinished);
}
else
{
pFeedInfo->SetStatus(FeedInfo::fsFailed);
}
}
void FeedCoordinator::FilterFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos)
{
debug("Filtering feed %s", pFeedInfo->GetName());
FeedFilter* pFeedFilter = NULL;
if (pFeedInfo->GetFilter() && strlen(pFeedInfo->GetFilter()) > 0)
{
pFeedFilter = new FeedFilter(pFeedInfo->GetFilter());
}
for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
{
FeedItemInfo* pFeedItemInfo = *it;
pFeedItemInfo->SetMatchStatus(FeedItemInfo::msAccepted);
pFeedItemInfo->SetMatchRule(0);
pFeedItemInfo->SetPauseNzb(pFeedInfo->GetPauseNzb());
pFeedItemInfo->SetPriority(pFeedInfo->GetPriority());
pFeedItemInfo->SetAddCategory(pFeedInfo->GetCategory());
pFeedItemInfo->SetDupeScore(0);
pFeedItemInfo->SetDupeMode(dmScore);
pFeedItemInfo->BuildDupeKey(NULL, NULL);
if (pFeedFilter)
{
pFeedFilter->Match(pFeedItemInfo);
}
}
delete pFeedFilter;
}
void FeedCoordinator::ProcessFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos)
{
debug("Process feed %s", pFeedInfo->GetName());
FilterFeed(pFeedInfo, pFeedItemInfos);
bool bFirstFetch = pFeedInfo->GetLastUpdate() == 0;
int iAdded = 0;
for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
{
FeedItemInfo* pFeedItemInfo = *it;
if (pFeedItemInfo->GetMatchStatus() == FeedItemInfo::msAccepted)
{
FeedHistoryInfo* pFeedHistoryInfo = m_FeedHistory.Find(pFeedItemInfo->GetUrl());
FeedHistoryInfo::EStatus eStatus = FeedHistoryInfo::hsUnknown;
if (bFirstFetch)
{
eStatus = FeedHistoryInfo::hsBacklog;
}
else if (!pFeedHistoryInfo)
{
DownloadItem(pFeedInfo, pFeedItemInfo);
eStatus = FeedHistoryInfo::hsFetched;
iAdded++;
}
if (pFeedHistoryInfo)
{
pFeedHistoryInfo->SetLastSeen(time(NULL));
}
else
{
m_FeedHistory.Add(pFeedItemInfo->GetUrl(), eStatus, time(NULL));
}
}
}
if (iAdded)
{
info("%s has %i new item(s)", pFeedInfo->GetName(), iAdded);
}
else
{
detail("%s has no new items", pFeedInfo->GetName());
}
}
void FeedCoordinator::DownloadItem(FeedInfo* pFeedInfo, FeedItemInfo* pFeedItemInfo)
{
debug("Download %s from %s", pFeedItemInfo->GetUrl(), pFeedInfo->GetName());
UrlInfo* pUrlInfo = new UrlInfo();
pUrlInfo->SetURL(pFeedItemInfo->GetUrl());
// add .nzb-extension if not present
char szNZBName[1024];
strncpy(szNZBName, pFeedItemInfo->GetFilename(), 1024);
szNZBName[1024-1] = '\0';
char* ext = strrchr(szNZBName, '.');
if (ext && !strcasecmp(ext, ".nzb"))
{
*ext = '\0';
}
char szNZBName2[1024];
snprintf(szNZBName2, 1024, "%s.nzb", szNZBName);
Util::MakeValidFilename(szNZBName2, '_', false);
if (strlen(szNZBName) > 0)
{
pUrlInfo->SetNZBFilename(szNZBName2);
}
pUrlInfo->SetCategory(pFeedItemInfo->GetAddCategory());
pUrlInfo->SetPriority(pFeedItemInfo->GetPriority());
pUrlInfo->SetAddPaused(pFeedItemInfo->GetPauseNzb());
pUrlInfo->SetDupeKey(pFeedItemInfo->GetDupeKey());
pUrlInfo->SetDupeScore(pFeedItemInfo->GetDupeScore());
pUrlInfo->SetDupeMode(pFeedItemInfo->GetDupeMode());
pUrlInfo->SetForce(pFeedInfo->GetForce() || g_pOptions->GetUrlForce());
g_pUrlCoordinator->AddUrlToQueue(pUrlInfo, false);
}
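// Editor's illustration (not part of the original sources): the extension handling above
// strips an existing ".nzb" (case-insensitive) and re-appends it, so both
// "Some.Show.S03E09.nzb" and "Some.Show.S03E09" end up as "Some.Show.S03E09.nzb";
// Util::MakeValidFilename then replaces characters not allowed in file names with '_'.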
bool FeedCoordinator::ViewFeed(int iID, FeedItemInfos** ppFeedItemInfos)
{
if (iID < 1 || iID > (int)m_Feeds.size())
{
return false;
}
FeedInfo* pFeedInfo = m_Feeds.at(iID - 1);
return PreviewFeed(pFeedInfo->GetName(), pFeedInfo->GetUrl(), pFeedInfo->GetFilter(),
pFeedInfo->GetPauseNzb(), pFeedInfo->GetCategory(), pFeedInfo->GetPriority(),
0, NULL, ppFeedItemInfos);
}
bool FeedCoordinator::PreviewFeed(const char* szName, const char* szUrl, const char* szFilter,
bool bPauseNzb, const char* szCategory, int iPriority,
int iCacheTimeSec, const char* szCacheId, FeedItemInfos** ppFeedItemInfos)
{
debug("Preview feed %s", szName);
FeedInfo* pFeedInfo = new FeedInfo(0, szName, szUrl, 0, szFilter, bPauseNzb, szCategory, iPriority);
pFeedInfo->SetPreview(true);
FeedItemInfos* pFeedItemInfos = NULL;
bool bHasCache = false;
if (iCacheTimeSec > 0 && *szCacheId != '\0')
{
m_mutexDownloads.Lock();
for (FeedCache::iterator it = m_FeedCache.begin(); it != m_FeedCache.end(); it++)
{
FeedCacheItem* pFeedCacheItem = *it;
if (!strcmp(pFeedCacheItem->GetCacheId(), szCacheId))
{
pFeedCacheItem->SetLastUsage(time(NULL));
pFeedItemInfos = pFeedCacheItem->GetFeedItemInfos();
pFeedItemInfos->Retain();
bHasCache = true;
break;
}
}
m_mutexDownloads.Unlock();
}
if (!bHasCache)
{
m_mutexDownloads.Lock();
bool bFirstFetch = true;
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo2 = *it;
if (!strcmp(pFeedInfo2->GetUrl(), pFeedInfo->GetUrl()) &&
!strcmp(pFeedInfo2->GetFilter(), pFeedInfo->GetFilter()) &&
pFeedInfo2->GetLastUpdate() > 0)
{
bFirstFetch = false;
break;
}
}
StartFeedDownload(pFeedInfo, true);
m_mutexDownloads.Unlock();
// wait until the download in a separate thread completes
while (pFeedInfo->GetStatus() == FeedInfo::fsRunning)
{
usleep(100 * 1000);
}
// now we can process the feed
FeedFile* pFeedFile = NULL;
if (pFeedInfo->GetStatus() == FeedInfo::fsFinished)
{
pFeedFile = FeedFile::Create(pFeedInfo->GetOutputFilename());
}
remove(pFeedInfo->GetOutputFilename());
if (!pFeedFile)
{
delete pFeedInfo;
return false;
}
pFeedItemInfos = pFeedFile->GetFeedItemInfos();
pFeedItemInfos->Retain();
delete pFeedFile;
for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
{
FeedItemInfo* pFeedItemInfo = *it;
pFeedItemInfo->SetStatus(bFirstFetch ? FeedItemInfo::isBacklog : FeedItemInfo::isNew);
FeedHistoryInfo* pFeedHistoryInfo = m_FeedHistory.Find(pFeedItemInfo->GetUrl());
if (pFeedHistoryInfo)
{
pFeedItemInfo->SetStatus((FeedItemInfo::EStatus)pFeedHistoryInfo->GetStatus());
}
}
}
FilterFeed(pFeedInfo, pFeedItemInfos);
delete pFeedInfo;
if (iCacheTimeSec > 0 && *szCacheId != '\0' && !bHasCache)
{
FeedCacheItem* pFeedCacheItem = new FeedCacheItem(szUrl, iCacheTimeSec, szCacheId, time(NULL), pFeedItemInfos);
m_mutexDownloads.Lock();
m_FeedCache.push_back(pFeedCacheItem);
m_mutexDownloads.Unlock();
}
*ppFeedItemInfos = pFeedItemInfos;
return true;
}
void FeedCoordinator::FetchFeed(int iID)
{
debug("FetchFeeds");
m_mutexDownloads.Lock();
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo = *it;
if (pFeedInfo->GetID() == iID || iID == 0)
{
pFeedInfo->SetFetch(true);
m_bForce = true;
}
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::UrlCoordinatorUpdate(Subject* pCaller, void* pAspect)
{
debug("Notification from URL-Coordinator received");
UrlCoordinator::Aspect* pUrlAspect = (UrlCoordinator::Aspect*)pAspect;
if (pUrlAspect->eAction == UrlCoordinator::eaUrlCompleted)
{
m_mutexDownloads.Lock();
FeedHistoryInfo* pFeedHistoryInfo = m_FeedHistory.Find(pUrlAspect->pUrlInfo->GetURL());
if (pFeedHistoryInfo)
{
pFeedHistoryInfo->SetStatus(FeedHistoryInfo::hsFetched);
}
else
{
m_FeedHistory.Add(pUrlAspect->pUrlInfo->GetURL(), FeedHistoryInfo::hsFetched, time(NULL));
}
m_bSave = true;
m_mutexDownloads.Unlock();
}
}
bool FeedCoordinator::HasActiveDownloads()
{
m_mutexDownloads.Lock();
bool bActive = !m_ActiveDownloads.empty();
m_mutexDownloads.Unlock();
return bActive;
}
void FeedCoordinator::CheckSaveFeeds()
{
debug("CheckSaveFeeds");
m_mutexDownloads.Lock();
if (m_bSave)
{
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveFeeds(&m_Feeds, &m_FeedHistory);
}
m_bSave = false;
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::CleanupHistory()
{
debug("CleanupHistory");
m_mutexDownloads.Lock();
time_t tOldestUpdate = time(NULL);
for (Feeds::iterator it = m_Feeds.begin(); it != m_Feeds.end(); it++)
{
FeedInfo* pFeedInfo = *it;
if (pFeedInfo->GetLastUpdate() < tOldestUpdate)
{
tOldestUpdate = pFeedInfo->GetLastUpdate();
}
}
time_t tBorderDate = tOldestUpdate - g_pOptions->GetFeedHistory();
int i = 0;
for (FeedHistory::iterator it = m_FeedHistory.begin(); it != m_FeedHistory.end(); )
{
FeedHistoryInfo* pFeedHistoryInfo = *it;
if (pFeedHistoryInfo->GetLastSeen() < tBorderDate)
{
detail("Deleting %s from feed history", pFeedHistoryInfo->GetUrl());
delete pFeedHistoryInfo;
m_FeedHistory.erase(it);
it = m_FeedHistory.begin() + i;
m_bSave = true;
}
else
{
it++;
i++;
}
}
m_mutexDownloads.Unlock();
}
void FeedCoordinator::CleanupCache()
{
debug("CleanupCache");
m_mutexDownloads.Lock();
time_t tCurTime = time(NULL);
int i = 0;
for (FeedCache::iterator it = m_FeedCache.begin(); it != m_FeedCache.end(); )
{
FeedCacheItem* pFeedCacheItem = *it;
if (pFeedCacheItem->GetLastUsage() + pFeedCacheItem->GetCacheTimeSec() < tCurTime ||
pFeedCacheItem->GetLastUsage() > tCurTime)
{
debug("Deleting %s from feed cache", pFeedCacheItem->GetUrl());
delete pFeedCacheItem;
m_FeedCache.erase(it);
it = m_FeedCache.begin() + i;
}
else
{
it++;
i++;
}
}
m_mutexDownloads.Unlock();
}

127
FeedCoordinator.h Normal file

@@ -0,0 +1,127 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef FEEDCOORDINATOR_H
#define FEEDCOORDINATOR_H
#include <deque>
#include <list>
#include <time.h>
#include "Thread.h"
#include "WebDownloader.h"
#include "DownloadInfo.h"
#include "FeedInfo.h"
#include "Observer.h"
class FeedDownloader;
class FeedCoordinator : public Thread, public Observer, public Subject
{
public:
typedef std::list<FeedDownloader*> ActiveDownloads;
private:
class UrlCoordinatorObserver: public Observer
{
public:
FeedCoordinator* m_pOwner;
virtual void Update(Subject* pCaller, void* pAspect) { m_pOwner->UrlCoordinatorUpdate(pCaller, pAspect); }
};
class FeedCacheItem
{
private:
char* m_szUrl;
int m_iCacheTimeSec;
char* m_szCacheId;
time_t m_tLastUsage;
FeedItemInfos* m_pFeedItemInfos;
public:
FeedCacheItem(const char* szUrl, int iCacheTimeSec,const char* szCacheId,
time_t tLastUsage, FeedItemInfos* pFeedItemInfos);
~FeedCacheItem();
const char* GetUrl() { return m_szUrl; }
int GetCacheTimeSec() { return m_iCacheTimeSec; }
const char* GetCacheId() { return m_szCacheId; }
time_t GetLastUsage() { return m_tLastUsage; }
void SetLastUsage(time_t tLastUsage) { m_tLastUsage = tLastUsage; }
FeedItemInfos* GetFeedItemInfos() { return m_pFeedItemInfos; }
};
typedef std::deque<FeedCacheItem*> FeedCache;
private:
Feeds m_Feeds;
ActiveDownloads m_ActiveDownloads;
FeedHistory m_FeedHistory;
Mutex m_mutexDownloads;
UrlCoordinatorObserver m_UrlCoordinatorObserver;
bool m_bForce;
bool m_bSave;
FeedCache m_FeedCache;
void StartFeedDownload(FeedInfo* pFeedInfo, bool bForce);
void FeedCompleted(FeedDownloader* pFeedDownloader);
void FilterFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos);
void ProcessFeed(FeedInfo* pFeedInfo, FeedItemInfos* pFeedItemInfos);
void DownloadItem(FeedInfo* pFeedInfo, FeedItemInfo* pFeedItemInfo);
void ResetHangingDownloads();
void UrlCoordinatorUpdate(Subject* pCaller, void* pAspect);
void CleanupHistory();
void CleanupCache();
void CheckSaveFeeds();
public:
FeedCoordinator();
virtual ~FeedCoordinator();
virtual void Run();
virtual void Stop();
void Update(Subject* pCaller, void* pAspect);
void AddFeed(FeedInfo* pFeedInfo);
bool PreviewFeed(const char* szName, const char* szUrl, const char* szFilter,
bool bPauseNzb, const char* szCategory, int iPriority,
int iCacheTimeSec, const char* szCacheId, FeedItemInfos** ppFeedItemInfos);
bool ViewFeed(int iID, FeedItemInfos** ppFeedItemInfos);
void FetchFeed(int iID);
bool HasActiveDownloads();
Feeds* GetFeeds() { return &m_Feeds; }
void LogDebugInfo();
};
class FeedDownloader : public WebDownloader
{
private:
FeedInfo* m_pFeedInfo;
public:
void SetFeedInfo(FeedInfo* pFeedInfo) { m_pFeedInfo = pFeedInfo; }
FeedInfo* GetFeedInfo() { return m_pFeedInfo; }
};
#endif
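For orientation, a hedged usage sketch of the coordinator declared above: register a feed, start the worker thread and request an immediate fetch. It assumes the hosting application has already created the global Options, UrlCoordinator and DiskState objects referenced by FeedCoordinator.cpp, and that Start() is the usual Thread entry point used elsewhere in nzbget; the feed parameters are placeholders.

#include "FeedCoordinator.h"
#include "FeedInfo.h"

// Sketch only: driving FeedCoordinator from the hosting application.
void StartExampleFeed(FeedCoordinator* pFeedCoordinator)
{
	// id, name, url, interval (minutes), filter, pause, category, priority
	FeedInfo* pFeedInfo = new FeedInfo(1, "ExampleIndexer", "https://example.com/rss",
		15, "", false, "", 0);
	pFeedCoordinator->AddFeed(pFeedInfo); // coordinator takes ownership (deleted in its destructor)

	pFeedCoordinator->Start();      // Thread entry point; runs FeedCoordinator::Run
	pFeedCoordinator->FetchFeed(0); // 0 = fetch all registered feeds now
}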

609
FeedFile.cpp Normal file

@@ -0,0 +1,609 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <string.h>
#include <list>
#ifdef WIN32
#include <comutil.h>
#import <msxml.tlb> named_guids
using namespace MSXML;
#else
#include <libxml/parser.h>
#include <libxml/xmlreader.h>
#include <libxml/xmlerror.h>
#include <libxml/entities.h>
#endif
#include "nzbget.h"
#include "FeedFile.h"
#include "Log.h"
#include "DownloadInfo.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
FeedFile::FeedFile(const char* szFileName)
{
debug("Creating FeedFile");
m_szFileName = strdup(szFileName);
m_pFeedItemInfos = new FeedItemInfos();
m_pFeedItemInfos->Retain();
#ifndef WIN32
m_pFeedItemInfo = NULL;
m_szTagContent = NULL;
m_iTagContentLen = 0;
#endif
}
FeedFile::~FeedFile()
{
debug("Destroying FeedFile");
// Cleanup
free(m_szFileName);
m_pFeedItemInfos->Release();
#ifndef WIN32
delete m_pFeedItemInfo;
free(m_szTagContent);
#endif
}
void FeedFile::LogDebugInfo()
{
debug(" FeedFile %s", m_szFileName);
}
void FeedFile::AddItem(FeedItemInfo* pFeedItemInfo)
{
m_pFeedItemInfos->Add(pFeedItemInfo);
}
void FeedFile::ParseSubject(FeedItemInfo* pFeedItemInfo)
{
// if the title has quotation marks we use only the part within the quotation marks
char* p = (char*)pFeedItemInfo->GetTitle();
char* start = strchr(p, '\"');
if (start)
{
start++;
char* end = strchr(start + 1, '\"');
if (end)
{
int len = (int)(end - start);
char* point = strchr(start + 1, '.');
if (point && point < end)
{
char* filename = (char*)malloc(len + 1);
strncpy(filename, start, len);
filename[len] = '\0';
char* ext = strrchr(filename, '.');
if (ext && !strcasecmp(ext, ".par2"))
{
*ext = '\0';
}
pFeedItemInfo->SetFilename(filename);
free(filename);
return;
}
}
}
pFeedItemInfo->SetFilename(pFeedItemInfo->GetTitle());
}
#ifdef WIN32
FeedFile* FeedFile::Create(const char* szFileName)
{
CoInitialize(NULL);
HRESULT hr;
MSXML::IXMLDOMDocumentPtr doc;
hr = doc.CreateInstance(MSXML::CLSID_DOMDocument);
if (FAILED(hr))
{
return NULL;
}
// Load the XML document file...
doc->put_resolveExternals(VARIANT_FALSE);
doc->put_validateOnParse(VARIANT_FALSE);
doc->put_async(VARIANT_FALSE);
// filename needs to be properly encoded
char* szURL = (char*)malloc(strlen(szFileName)*3 + 1);
EncodeURL(szFileName, szURL);
debug("url=\"%s\"", szURL);
_variant_t v(szURL);
free(szURL);
VARIANT_BOOL success = doc->load(v);
if (success == VARIANT_FALSE)
{
_bstr_t r(doc->GetparseError()->reason);
const char* szErrMsg = r;
error("Error parsing rss feed: %s", szErrMsg);
return NULL;
}
FeedFile* pFile = new FeedFile(szFileName);
if (!pFile->ParseFeed(doc))
{
delete pFile;
pFile = NULL;
}
return pFile;
}
void FeedFile::EncodeURL(const char* szFilename, char* szURL)
{
while (char ch = *szFilename++)
{
if (('0' <= ch && ch <= '9') ||
('a' <= ch && ch <= 'z') ||
('A' <= ch && ch <= 'Z') )
{
*szURL++ = ch;
}
else
{
*szURL++ = '%';
int a = ch >> 4;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
a = ch & 0xF;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
}
}
*szURL = '\0';
}
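// Editor's illustration (not part of the original sources): EncodeURL percent-encodes
// every character outside [0-9A-Za-z] with lowercase hex digits, so a temporary file
// name such as "C:\Temp\feed-1.tmp" becomes "C%3a%5cTemp%5cfeed%2d1%2etmp" before
// being passed to doc->load().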
bool FeedFile::ParseFeed(IUnknown* nzb)
{
MSXML::IXMLDOMDocumentPtr doc = nzb;
MSXML::IXMLDOMNodePtr root = doc->documentElement;
MSXML::IXMLDOMNodeListPtr itemList = root->selectNodes("/rss/channel/item");
for (int i = 0; i < itemList->Getlength(); i++)
{
MSXML::IXMLDOMNodePtr node = itemList->Getitem(i);
FeedItemInfo* pFeedItemInfo = new FeedItemInfo();
AddItem(pFeedItemInfo);
MSXML::IXMLDOMNodePtr tag;
MSXML::IXMLDOMNodePtr attr;
// <title>Debian 6</title>
tag = node->selectSingleNode("title");
if (!tag)
{
// bad rss feed
return false;
}
_bstr_t title(tag->Gettext());
pFeedItemInfo->SetTitle(title);
ParseSubject(pFeedItemInfo);
// <pubDate>Wed, 26 Jun 2013 00:02:54 -0600</pubDate>
tag = node->selectSingleNode("pubDate");
if (tag)
{
_bstr_t time(tag->Gettext());
time_t unixtime = WebUtil::ParseRfc822DateTime(time);
if (unixtime > 0)
{
pFeedItemInfo->SetTime(unixtime);
}
}
// <category>Movies &gt; HD</category>
tag = node->selectSingleNode("category");
if (tag)
{
_bstr_t category(tag->Gettext());
pFeedItemInfo->SetCategory(category);
}
// <description>long text</description>
tag = node->selectSingleNode("description");
if (tag)
{
_bstr_t description(tag->Gettext());
pFeedItemInfo->SetDescription(description);
}
//<enclosure url="http://myindexer.com/fetch/9eeb264aecce961a6e0d" length="150263340" type="application/x-nzb" />
tag = node->selectSingleNode("enclosure");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("url");
if (attr)
{
_bstr_t url(attr->Gettext());
pFeedItemInfo->SetUrl(url);
}
attr = tag->Getattributes()->getNamedItem("length");
if (attr)
{
_bstr_t size(attr->Gettext());
long long lSize = atoll(size);
pFeedItemInfo->SetSize(lSize);
}
}
if (!pFeedItemInfo->GetUrl())
{
// <link>https://nzb.org/fetch/334534ce/4364564564</link>
tag = node->selectSingleNode("link");
if (!tag)
{
// bad rss feed
return false;
}
_bstr_t link(tag->Gettext());
pFeedItemInfo->SetUrl(link);
}
// newznab special
//<newznab:attr name="size" value="5423523453534" />
if (pFeedItemInfo->GetSize() == 0)
{
tag = node->selectSingleNode("newznab:attr[@name='size']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t size(attr->Gettext());
long long lSize = atoll(size);
pFeedItemInfo->SetSize(lSize);
}
}
}
//<newznab:attr name="imdb" value="1588173"/>
tag = node->selectSingleNode("newznab:attr[@name='imdb']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
int iVal = atoi(val);
pFeedItemInfo->SetImdbId(iVal);
}
}
//<newznab:attr name="rageid" value="33877"/>
tag = node->selectSingleNode("newznab:attr[@name='rageid']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
int iVal = atoi(val);
pFeedItemInfo->SetRageId(iVal);
}
}
//<newznab:attr name="episode" value="E09"/>
//<newznab:attr name="episode" value="9"/>
tag = node->selectSingleNode("newznab:attr[@name='episode']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
pFeedItemInfo->SetEpisode(val);
}
}
//<newznab:attr name="season" value="S03"/>
//<newznab:attr name="season" value="3"/>
tag = node->selectSingleNode("newznab:attr[@name='season']");
if (tag)
{
attr = tag->Getattributes()->getNamedItem("value");
if (attr)
{
_bstr_t val(attr->Gettext());
pFeedItemInfo->SetSeason(val);
}
}
MSXML::IXMLDOMNodeListPtr itemList = node->selectNodes("newznab:attr");
for (int i = 0; i < itemList->Getlength(); i++)
{
MSXML::IXMLDOMNodePtr node = itemList->Getitem(i);
MSXML::IXMLDOMNodePtr name = node->Getattributes()->getNamedItem("name");
MSXML::IXMLDOMNodePtr value = node->Getattributes()->getNamedItem("value");
if (name && value)
{
_bstr_t attrName(name->Gettext()); // renamed to avoid shadowing the node pointer "name"
_bstr_t attrValue(value->Gettext());
pFeedItemInfo->GetAttributes()->Add(attrName, attrValue);
}
}
}
return true;
}
#else
FeedFile* FeedFile::Create(const char* szFileName)
{
FeedFile* pFile = new FeedFile(szFileName);
xmlSAXHandler SAX_handler = {0};
SAX_handler.startElement = reinterpret_cast<startElementSAXFunc>(SAX_StartElement);
SAX_handler.endElement = reinterpret_cast<endElementSAXFunc>(SAX_EndElement);
SAX_handler.characters = reinterpret_cast<charactersSAXFunc>(SAX_characters);
SAX_handler.error = reinterpret_cast<errorSAXFunc>(SAX_error);
SAX_handler.getEntity = reinterpret_cast<getEntitySAXFunc>(SAX_getEntity);
pFile->m_bIgnoreNextError = false;
int ret = xmlSAXUserParseFile(&SAX_handler, pFile, szFileName);
if (ret != 0)
{
error("Failed to parse rss feed");
delete pFile;
pFile = NULL;
}
return pFile;
}
void FeedFile::Parse_StartElement(const char *name, const char **atts)
{
ResetTagContent();
if (!strcmp("item", name))
{
delete m_pFeedItemInfo;
m_pFeedItemInfo = new FeedItemInfo();
}
else if (!strcmp("enclosure", name) && m_pFeedItemInfo)
{
//<enclosure url="http://myindexer.com/fetch/9eeb264aecce961a6e0d" length="150263340" type="application/x-nzb" />
for (; *atts; atts+=2)
{
if (!strcmp("url", atts[0]))
{
char* szUrl = strdup(atts[1]);
WebUtil::XmlDecode(szUrl);
m_pFeedItemInfo->SetUrl(szUrl);
free(szUrl);
}
else if (!strcmp("length", atts[0]))
{
long long lSize = atoll(atts[1]);
m_pFeedItemInfo->SetSize(lSize);
}
}
}
else if (m_pFeedItemInfo && !strcmp("newznab:attr", name) &&
atts[0] && atts[1] && atts[2] && atts[3] &&
!strcmp("name", atts[0]) && !strcmp("value", atts[2]))
{
m_pFeedItemInfo->GetAttributes()->Add(atts[1], atts[3]);
//<newznab:attr name="size" value="5423523453534" />
if (m_pFeedItemInfo->GetSize() == 0 &&
!strcmp("size", atts[1]))
{
long long lSize = atoll(atts[3]);
m_pFeedItemInfo->SetSize(lSize);
}
//<newznab:attr name="imdb" value="1588173"/>
else if (!strcmp("imdb", atts[1]))
{
m_pFeedItemInfo->SetImdbId(atoi(atts[3]));
}
//<newznab:attr name="rageid" value="33877"/>
else if (!strcmp("rageid", atts[1]))
{
m_pFeedItemInfo->SetRageId(atoi(atts[3]));
}
//<newznab:attr name="episode" value="E09"/>
//<newznab:attr name="episode" value="9"/>
else if (!strcmp("episode", atts[1]))
{
m_pFeedItemInfo->SetEpisode(atts[3]);
}
//<newznab:attr name="season" value="S03"/>
//<newznab:attr name="season" value="3"/>
else if (!strcmp("season", atts[1]))
{
m_pFeedItemInfo->SetSeason(atts[3]);
}
}
}
void FeedFile::Parse_EndElement(const char *name)
{
if (!strcmp("item", name))
{
// Close the file element, add the new file to file-list
AddItem(m_pFeedItemInfo);
m_pFeedItemInfo = NULL;
}
else if (!strcmp("title", name) && m_pFeedItemInfo)
{
m_pFeedItemInfo->SetTitle(m_szTagContent);
ResetTagContent();
ParseSubject(m_pFeedItemInfo);
}
else if (!strcmp("link", name) && m_pFeedItemInfo &&
(!m_pFeedItemInfo->GetUrl() || strlen(m_pFeedItemInfo->GetUrl()) == 0))
{
m_pFeedItemInfo->SetUrl(m_szTagContent);
ResetTagContent();
}
else if (!strcmp("category", name) && m_pFeedItemInfo)
{
m_pFeedItemInfo->SetCategory(m_szTagContent);
ResetTagContent();
}
else if (!strcmp("description", name) && m_pFeedItemInfo)
{
m_pFeedItemInfo->SetDescription(m_szTagContent);
ResetTagContent();
}
else if (!strcmp("pubDate", name) && m_pFeedItemInfo)
{
time_t unixtime = WebUtil::ParseRfc822DateTime(m_szTagContent);
if (unixtime > 0)
{
m_pFeedItemInfo->SetTime(unixtime);
}
ResetTagContent();
}
}
void FeedFile::Parse_Content(const char *buf, int len)
{
m_szTagContent = (char*)realloc(m_szTagContent, m_iTagContentLen + len + 1);
strncpy(m_szTagContent + m_iTagContentLen, buf, len);
m_iTagContentLen += len;
m_szTagContent[m_iTagContentLen] = '\0';
}
void FeedFile::ResetTagContent()
{
free(m_szTagContent);
m_szTagContent = NULL;
m_iTagContentLen = 0;
}
void FeedFile::SAX_StartElement(FeedFile* pFile, const char *name, const char **atts)
{
pFile->Parse_StartElement(name, atts);
}
void FeedFile::SAX_EndElement(FeedFile* pFile, const char *name)
{
pFile->Parse_EndElement(name);
}
void FeedFile::SAX_characters(FeedFile* pFile, const char * xmlstr, int len)
{
char* str = (char*)xmlstr;
// trim starting blanks
int off = 0;
for (int i = 0; i < len; i++)
{
char ch = str[i];
if (ch == ' ' || ch == 10 || ch == 13 || ch == 9)
{
off++;
}
else
{
break;
}
}
int newlen = len - off;
// trim ending blanks
for (int i = len - 1; i >= off; i--)
{
char ch = str[i];
if (ch == ' ' || ch == 10 || ch == 13 || ch == 9)
{
newlen--;
}
else
{
break;
}
}
if (newlen > 0)
{
// interpret tag content
pFile->Parse_Content(str + off, newlen);
}
}
void* FeedFile::SAX_getEntity(FeedFile* pFile, const char * name)
{
xmlEntityPtr e = xmlGetPredefinedEntity((xmlChar* )name);
if (!e)
{
warn("entity not found");
pFile->m_bIgnoreNextError = true;
}
return e;
}
void FeedFile::SAX_error(FeedFile* pFile, const char *msg, ...)
{
if (pFile->m_bIgnoreNextError)
{
pFile->m_bIgnoreNextError = false;
return;
}
va_list argp;
va_start(argp, msg);
char szErrMsg[1024];
vsnprintf(szErrMsg, sizeof(szErrMsg), msg, argp);
szErrMsg[1024-1] = '\0';
va_end(argp);
// remove trailing CRLF
for (char* pend = szErrMsg + strlen(szErrMsg) - 1; pend >= szErrMsg && (*pend == '\n' || *pend == '\r' || *pend == ' '); pend--) *pend = '\0';
error("Error parsing rss feed: %s", szErrMsg);
}
#endif
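The non-Windows branch above uses libxml2's SAX1 callbacks. Below is a standalone sketch of the same pattern, independent of the nzbget classes, which only collects the text of <title> elements inside <item> and prints them; the program structure, names and the build hint (g++ with `xml2-config --cflags --libs`) are assumptions for illustration.

#include <stdio.h>
#include <string.h>
#include <string>
#include <libxml/parser.h>

// Sketch only: minimal libxml2 SAX parsing of an RSS file, mirroring the
// startElement/endElement/characters approach of FeedFile above.
struct ParseState
{
	bool bInItem;
	bool bInTitle;
	std::string title;
	ParseState(): bInItem(false), bInTitle(false) {}
};

static void StartElement(void* ctx, const xmlChar* name, const xmlChar** /*atts*/)
{
	ParseState* pState = (ParseState*)ctx;
	if (!strcmp((const char*)name, "item")) pState->bInItem = true;
	if (pState->bInItem && !strcmp((const char*)name, "title"))
	{
		pState->bInTitle = true;
		pState->title.clear();
	}
}

static void EndElement(void* ctx, const xmlChar* name)
{
	ParseState* pState = (ParseState*)ctx;
	if (pState->bInTitle && !strcmp((const char*)name, "title"))
	{
		printf("title: %s\n", pState->title.c_str());
		pState->bInTitle = false;
	}
	if (!strcmp((const char*)name, "item")) pState->bInItem = false;
}

static void Characters(void* ctx, const xmlChar* ch, int len)
{
	ParseState* pState = (ParseState*)ctx;
	if (pState->bInTitle) pState->title.append((const char*)ch, len);
}

int main(int argc, char* argv[])
{
	if (argc < 2)
	{
		fprintf(stderr, "usage: %s feed.xml\n", argv[0]);
		return 1;
	}
	xmlSAXHandler SAX_handler = {0};
	SAX_handler.startElement = StartElement;
	SAX_handler.endElement = EndElement;
	SAX_handler.characters = Characters;
	ParseState state;
	return xmlSAXUserParseFile(&SAX_handler, &state, argv[1]) == 0 ? 0 : 1;
}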

70
FeedFile.h Normal file

@@ -0,0 +1,70 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef FEEDFILE_H
#define FEEDFILE_H
#include <vector>
#include "FeedInfo.h"
class FeedFile
{
private:
FeedItemInfos* m_pFeedItemInfos;
char* m_szFileName;
FeedFile(const char* szFileName);
void AddItem(FeedItemInfo* pFeedItemInfo);
void ParseSubject(FeedItemInfo* pFeedItemInfo);
#ifdef WIN32
bool ParseFeed(IUnknown* nzb);
static void EncodeURL(const char* szFilename, char* szURL);
#else
FeedItemInfo* m_pFeedItemInfo;
char* m_szTagContent;
int m_iTagContentLen;
bool m_bIgnoreNextError;
static void SAX_StartElement(FeedFile* pFile, const char *name, const char **atts);
static void SAX_EndElement(FeedFile* pFile, const char *name);
static void SAX_characters(FeedFile* pFile, const char * xmlstr, int len);
static void* SAX_getEntity(FeedFile* pFile, const char * name);
static void SAX_error(FeedFile* pFile, const char *msg, ...);
void Parse_StartElement(const char *name, const char **atts);
void Parse_EndElement(const char *name);
void Parse_Content(const char *buf, int len);
void ResetTagContent();
#endif
public:
virtual ~FeedFile();
static FeedFile* Create(const char* szFileName);
FeedItemInfos* GetFeedItemInfos() { return m_pFeedItemInfos; }
void LogDebugInfo();
};
#endif
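A short usage sketch of the factory interface above, following the reference-counting pattern visible in FeedCoordinator::PreviewFeed (Retain before deleting the FeedFile, Release when done); the function name, file name and the printf output are placeholders.

#include <stdio.h>
#include "FeedFile.h"

// Sketch only: parse a downloaded feed file and walk its items.
void PrintFeedTitles(const char* szFeedFilename)
{
	FeedFile* pFeedFile = FeedFile::Create(szFeedFilename); // returns NULL on parse errors
	if (!pFeedFile)
	{
		return;
	}
	FeedItemInfos* pFeedItemInfos = pFeedFile->GetFeedItemInfos();
	pFeedItemInfos->Retain(); // keep the item list alive after the FeedFile is deleted
	delete pFeedFile;
	for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
	{
		FeedItemInfo* pFeedItemInfo = *it;
		printf("%s (%lli bytes)\n", pFeedItemInfo->GetTitle(),
			(long long)pFeedItemInfo->GetSize());
	}
	pFeedItemInfos->Release(); // drop our reference
}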

1192
FeedFilter.cpp Normal file

File diff suppressed because it is too large

186
FeedFilter.h Normal file

@@ -0,0 +1,186 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef FEEDFILTER_H
#define FEEDFILTER_H
#include "DownloadInfo.h"
#include "FeedInfo.h"
#include "Util.h"
class FeedFilter
{
private:
typedef std::deque<char*> RefValues;
enum ETermCommand
{
fcText,
fcRegex,
fcEqual,
fcLess,
fcLessEqual,
fcGreater,
fcGreaterEqual,
fcOpeningBrace,
fcClosingBrace,
fcOrOperator
};
class Term
{
private:
bool m_bPositive;
char* m_szField;
ETermCommand m_eCommand;
char* m_szParam;
long long m_iIntParam;
double m_fFloatParam;
bool m_bFloat;
RegEx* m_pRegEx;
RefValues* m_pRefValues;
bool GetFieldData(const char* szField, FeedItemInfo* pFeedItemInfo,
const char** StrValue, long long* IntValue);
bool ParseParam(const char* szField, const char* szParam);
bool ParseSizeParam(const char* szParam);
bool ParseAgeParam(const char* szParam);
bool ParseNumericParam(const char* szParam);
bool MatchValue(const char* szStrValue, long long iIntValue);
bool MatchText(const char* szStrValue);
bool MatchRegex(const char* szStrValue);
void FillWildMaskRefValues(const char* szStrValue, WildMask* pMask, int iRefOffset);
void FillRegExRefValues(const char* szStrValue, RegEx* pRegEx);
public:
Term();
~Term();
void SetRefValues(RefValues* pRefValues) { m_pRefValues = pRefValues; }
bool Compile(char* szToken);
bool Match(FeedItemInfo* pFeedItemInfo);
ETermCommand GetCommand() { return m_eCommand; }
};
typedef std::deque<Term*> TermList;
enum ERuleCommand
{
frAccept,
frReject,
frRequire,
frOptions,
frComment
};
class Rule
{
private:
bool m_bIsValid;
ERuleCommand m_eCommand;
char* m_szCategory;
int m_iPriority;
int m_iAddPriority;
bool m_bPause;
int m_iDupeScore;
int m_iAddDupeScore;
char* m_szDupeKey;
char* m_szAddDupeKey;
EDupeMode m_eDupeMode;
char* m_szSeries;
char* m_szRageId;
bool m_bHasCategory;
bool m_bHasPriority;
bool m_bHasAddPriority;
bool m_bHasPause;
bool m_bHasDupeScore;
bool m_bHasAddDupeScore;
bool m_bHasDupeKey;
bool m_bHasAddDupeKey;
bool m_bHasDupeMode;
bool m_bPatCategory;
bool m_bPatDupeKey;
bool m_bPatAddDupeKey;
bool m_bHasSeries;
bool m_bHasRageId;
char* m_szPatCategory;
char* m_szPatDupeKey;
char* m_szPatAddDupeKey;
TermList m_Terms;
RefValues m_RefValues;
char* CompileCommand(char* szRule);
char* CompileOptions(char* szRule);
bool CompileTerm(char* szTerm);
bool MatchExpression(FeedItemInfo* pFeedItemInfo);
public:
Rule();
~Rule();
void Compile(char* szRule);
bool IsValid() { return m_bIsValid; }
ERuleCommand GetCommand() { return m_eCommand; }
const char* GetCategory() { return m_szCategory; }
int GetPriority() { return m_iPriority; }
int GetAddPriority() { return m_iAddPriority; }
bool GetPause() { return m_bPause; }
const char* GetDupeKey() { return m_szDupeKey; }
const char* GetAddDupeKey() { return m_szAddDupeKey; }
int GetDupeScore() { return m_iDupeScore; }
int GetAddDupeScore() { return m_iAddDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
const char* GetRageId() { return m_szRageId; }
const char* GetSeries() { return m_szSeries; }
bool HasCategory() { return m_bHasCategory; }
bool HasPriority() { return m_bHasPriority; }
bool HasAddPriority() { return m_bHasAddPriority; }
bool HasPause() { return m_bHasPause; }
bool HasDupeScore() { return m_bHasDupeScore; }
bool HasAddDupeScore() { return m_bHasAddDupeScore; }
bool HasDupeKey() { return m_bHasDupeKey; }
bool HasAddDupeKey() { return m_bHasAddDupeKey; }
bool HasDupeMode() { return m_bHasDupeMode; }
bool HasRageId() { return m_bHasRageId; }
bool HasSeries() { return m_bHasSeries; }
bool Match(FeedItemInfo* pFeedItemInfo);
void ExpandRefValues(FeedItemInfo* pFeedItemInfo, char** pDestStr, char* pPatStr);
const char* GetRefValue(FeedItemInfo* pFeedItemInfo, const char* szVarName);
};
typedef std::deque<Rule*> RuleList;
private:
RuleList m_Rules;
void Compile(const char* szFilter);
void CompileRule(char* szRule);
void ApplyOptions(Rule* pRule, FeedItemInfo* pFeedItemInfo);
public:
FeedFilter(const char* szFilter);
~FeedFilter();
void Match(FeedItemInfo* pFeedItemInfo);
};
#endif
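FeedCoordinator::FilterFeed above shows the intended call pattern for this class; below is a condensed, hedged sketch of the same usage. The rule string is an opaque placeholder read from configuration: the actual filter syntax is defined by FeedFilter.cpp, whose diff is suppressed above.

#include "FeedFilter.h"
#include "FeedInfo.h"

// Sketch only: compile a filter once and run it over already-parsed feed items,
// as FeedCoordinator::FilterFeed does above.
void ApplyFilter(const char* szFilterExpression, FeedItemInfos* pFeedItemInfos)
{
	FeedFilter filter(szFilterExpression); // compiles the rule list

	for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
	{
		FeedItemInfo* pFeedItemInfo = *it;
		pFeedItemInfo->SetMatchStatus(FeedItemInfo::msAccepted); // default before matching
		filter.Match(pFeedItemInfo); // rules may change match status and item options
		if (pFeedItemInfo->GetMatchStatus() == FeedItemInfo::msAccepted)
		{
			// item passed the filter; FeedCoordinator would queue its URL here
		}
	}
}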

440
FeedInfo.cpp Normal file

@@ -0,0 +1,440 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision: 0 $
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#include <sys/stat.h>
#include "nzbget.h"
#include "FeedInfo.h"
#include "Util.h"
FeedInfo::FeedInfo(int iID, const char* szName, const char* szUrl, int iInterval,
const char* szFilter, bool bPauseNzb, const char* szCategory, int iPriority)
{
m_iID = iID;
m_szName = strdup(szName ? szName : "");
m_szUrl = strdup(szUrl ? szUrl : "");
m_szFilter = strdup(szFilter ? szFilter : "");
m_iFilterHash = Util::HashBJ96(szFilter, strlen(szFilter), 0);
m_szCategory = strdup(szCategory ? szCategory : "");
m_iInterval = iInterval;
m_bPauseNzb = bPauseNzb;
m_iPriority = iPriority;
m_tLastUpdate = 0;
m_bPreview = false;
m_eStatus = fsUndefined;
m_szOutputFilename = NULL;
m_bFetch = false;
m_bForce = false;
}
FeedInfo::~FeedInfo()
{
free(m_szName);
free(m_szUrl);
free(m_szFilter);
free(m_szCategory);
free(m_szOutputFilename);
}
void FeedInfo::SetOutputFilename(const char* szOutputFilename)
{
free(m_szOutputFilename);
m_szOutputFilename = strdup(szOutputFilename);
}
FeedItemInfo::Attr::Attr(const char* szName, const char* szValue)
{
m_szName = strdup(szName ? szName : "");
m_szValue = strdup(szValue ? szValue : "");
}
FeedItemInfo::Attr::~Attr()
{
free(m_szName);
free(m_szValue);
}
FeedItemInfo::Attributes::~Attributes()
{
for (iterator it = begin(); it != end(); it++)
{
delete *it;
}
}
void FeedItemInfo::Attributes::Add(const char* szName, const char* szValue)
{
push_back(new Attr(szName, szValue));
}
FeedItemInfo::Attr* FeedItemInfo::Attributes::Find(const char* szName)
{
for (iterator it = begin(); it != end(); it++)
{
Attr* pAttr = *it;
if (!strcasecmp(pAttr->GetName(), szName))
{
return pAttr;
}
}
return NULL;
}
FeedItemInfo::FeedItemInfo()
{
m_pSharedFeedData = NULL;
m_szTitle = NULL;
m_szFilename = NULL;
m_szUrl = NULL;
m_szCategory = strdup("");
m_lSize = 0;
m_tTime = 0;
m_iImdbId = 0;
m_iRageId = 0;
m_szDescription = strdup("");
m_szSeason = NULL;
m_szEpisode = NULL;
m_iSeasonNum = 0;
m_iEpisodeNum = 0;
m_bSeasonEpisodeParsed = false;
m_szAddCategory = strdup("");
m_bPauseNzb = false;
m_iPriority = 0;
m_eStatus = isUnknown;
m_eMatchStatus = msIgnored;
m_iMatchRule = 0;
m_szDupeKey = NULL;
m_iDupeScore = 0;
m_eDupeMode = dmScore;
}
FeedItemInfo::~FeedItemInfo()
{
free(m_szTitle);
free(m_szFilename);
free(m_szUrl);
free(m_szCategory);
free(m_szDescription);
free(m_szSeason);
free(m_szEpisode);
free(m_szAddCategory);
free(m_szDupeKey);
}
void FeedItemInfo::SetTitle(const char* szTitle)
{
free(m_szTitle);
m_szTitle = szTitle ? strdup(szTitle) : NULL;
}
void FeedItemInfo::SetFilename(const char* szFilename)
{
free(m_szFilename);
m_szFilename = szFilename ? strdup(szFilename) : NULL;
}
void FeedItemInfo::SetUrl(const char* szUrl)
{
free(m_szUrl);
m_szUrl = szUrl ? strdup(szUrl) : NULL;
}
void FeedItemInfo::SetCategory(const char* szCategory)
{
free(m_szCategory);
m_szCategory = strdup(szCategory ? szCategory: "");
}
void FeedItemInfo::SetDescription(const char* szDescription)
{
free(m_szDescription);
m_szDescription = strdup(szDescription ? szDescription: "");
}
void FeedItemInfo::SetSeason(const char* szSeason)
{
free(m_szSeason);
m_szSeason = szSeason ? strdup(szSeason) : NULL;
m_iSeasonNum = szSeason ? ParsePrefixedInt(szSeason) : 0;
}
void FeedItemInfo::SetEpisode(const char* szEpisode)
{
free(m_szEpisode);
m_szEpisode = szEpisode ? strdup(szEpisode) : NULL;
m_iEpisodeNum = szEpisode ? ParsePrefixedInt(szEpisode) : 0;
}
int FeedItemInfo::ParsePrefixedInt(const char *szValue)
{
const char* szVal = szValue;
if (!strchr("0123456789", *szVal))
{
szVal++;
}
return atoi(szVal);
}
void FeedItemInfo::SetAddCategory(const char* szAddCategory)
{
free(m_szAddCategory);
m_szAddCategory = strdup(szAddCategory ? szAddCategory : "");
}
void FeedItemInfo::SetDupeKey(const char* szDupeKey)
{
free(m_szDupeKey);
m_szDupeKey = strdup(szDupeKey ? szDupeKey : "");
}
void FeedItemInfo::AppendDupeKey(const char* szExtraDupeKey)
{
if (!m_szDupeKey || *m_szDupeKey == '\0' || !szExtraDupeKey || *szExtraDupeKey == '\0')
{
return;
}
int iLen = (m_szDupeKey ? strlen(m_szDupeKey) : 0) + 1 + strlen(szExtraDupeKey) + 1;
char* szNewKey = (char*)malloc(iLen);
snprintf(szNewKey, iLen, "%s-%s", m_szDupeKey, szExtraDupeKey);
szNewKey[iLen - 1] = '\0';
free(m_szDupeKey);
m_szDupeKey = szNewKey;
}
void FeedItemInfo::BuildDupeKey(const char* szRageId, const char* szSeries)
{
int iRageId = szRageId && *szRageId ? atoi(szRageId) : m_iRageId;
free(m_szDupeKey);
if (m_iImdbId != 0)
{
m_szDupeKey = (char*)malloc(20);
snprintf(m_szDupeKey, 20, "imdb=%i", m_iImdbId);
}
else if (szSeries && *szSeries && GetSeasonNum() != 0 && GetEpisodeNum() != 0)
{
int iLen = strlen(szSeries) + 50;
m_szDupeKey = (char*)malloc(iLen);
snprintf(m_szDupeKey, iLen, "series=%s-%s-%s", szSeries, m_szSeason, m_szEpisode);
m_szDupeKey[iLen-1] = '\0';
}
else if (iRageId != 0 && GetSeasonNum() != 0 && GetEpisodeNum() != 0)
{
m_szDupeKey = (char*)malloc(100);
snprintf(m_szDupeKey, 100, "rageid=%i-%s-%s", iRageId, m_szSeason, m_szEpisode);
m_szDupeKey[100-1] = '\0';
}
else
{
m_szDupeKey = strdup("");
}
}
int FeedItemInfo::GetSeasonNum()
{
if (!m_szSeason && !m_bSeasonEpisodeParsed)
{
ParseSeasonEpisode();
}
return m_iSeasonNum;
}
int FeedItemInfo::GetEpisodeNum()
{
if (!m_szEpisode && !m_bSeasonEpisodeParsed)
{
ParseSeasonEpisode();
}
return m_iEpisodeNum;
}
void FeedItemInfo::ParseSeasonEpisode()
{
m_bSeasonEpisodeParsed = true;
RegEx* pRegEx = m_pSharedFeedData->GetSeasonEpisodeRegEx();
if (pRegEx->Match(m_szTitle))
{
char szRegValue[100];
char szValue[100];
snprintf(szValue, 100, "S%02d", atoi(m_szTitle + pRegEx->GetMatchStart(1)));
szValue[100-1] = '\0';
SetSeason(szValue);
int iLen = pRegEx->GetMatchLen(2);
iLen = iLen < 99 ? iLen : 99;
strncpy(szRegValue, m_szTitle + pRegEx->GetMatchStart(2), iLen);
szRegValue[iLen] = '\0';
snprintf(szValue, 100, "E%s", szRegValue);
szValue[100-1] = '\0';
Util::ReduceStr(szValue, "-", "");
for (char* p = szValue; *p; p++) *p = toupper(*p); // convert string to uppercase e02 -> E02
SetEpisode(szValue);
}
}
FeedHistoryInfo::FeedHistoryInfo(const char* szUrl, FeedHistoryInfo::EStatus eStatus, time_t tLastSeen)
{
m_szUrl = szUrl ? strdup(szUrl) : NULL;
m_eStatus = eStatus;
m_tLastSeen = tLastSeen;
}
FeedHistoryInfo::~FeedHistoryInfo()
{
free(m_szUrl);
}
FeedHistory::~FeedHistory()
{
Clear();
}
void FeedHistory::Clear()
{
for (iterator it = begin(); it != end(); it++)
{
delete *it;
}
clear();
}
void FeedHistory::Add(const char* szUrl, FeedHistoryInfo::EStatus eStatus, time_t tLastSeen)
{
push_back(new FeedHistoryInfo(szUrl, eStatus, tLastSeen));
}
void FeedHistory::Remove(const char* szUrl)
{
for (iterator it = begin(); it != end(); it++)
{
FeedHistoryInfo* pFeedHistoryInfo = *it;
if (!strcmp(pFeedHistoryInfo->GetUrl(), szUrl))
{
delete pFeedHistoryInfo;
erase(it);
break;
}
}
}
FeedHistoryInfo* FeedHistory::Find(const char* szUrl)
{
for (iterator it = begin(); it != end(); it++)
{
FeedHistoryInfo* pFeedHistoryInfo = *it;
if (!strcmp(pFeedHistoryInfo->GetUrl(), szUrl))
{
return pFeedHistoryInfo;
}
}
return NULL;
}
FeedItemInfos::FeedItemInfos()
{
debug("Creating FeedItemInfos");
m_iRefCount = 0;
}
FeedItemInfos::~FeedItemInfos()
{
debug("Destroing FeedItemInfos");
for (iterator it = begin(); it != end(); it++)
{
delete *it;
}
}
void FeedItemInfos::Retain()
{
m_iRefCount++;
}
void FeedItemInfos::Release()
{
m_iRefCount--;
if (m_iRefCount <= 0)
{
delete this;
}
}
void FeedItemInfos::Add(FeedItemInfo* pFeedItemInfo)
{
push_back(pFeedItemInfo);
pFeedItemInfo->SetSharedFeedData(&m_SharedFeedData);
}
SharedFeedData::SharedFeedData()
{
m_pSeasonEpisodeRegEx = NULL;
}
SharedFeedData::~SharedFeedData()
{
delete m_pSeasonEpisodeRegEx;
}
RegEx* SharedFeedData::GetSeasonEpisodeRegEx()
{
if (!m_pSeasonEpisodeRegEx)
{
m_pSeasonEpisodeRegEx = new RegEx("[^[:alnum:]]s?([0-9]+)[ex]([0-9]+(-?e[0-9]+)?)[^[:alnum:]]", 10);
}
return m_pSeasonEpisodeRegEx;
}

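The season/episode normalization above is the basis for the "series=" dupe keys; a small standalone sketch of the same matching step, using POSIX regcomp/regexec directly instead of the project's RegEx wrapper and an invented sample title, shows how "Show.Name.S01E02.720p" is reduced to the "S01"/"E02" pair:

// Illustration only, not part of the commit: applies the pattern from
// SharedFeedData::GetSeasonEpisodeRegEx() with the POSIX regex API.
#include <ctype.h>
#include <regex.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
	const char* szTitle = "Show.Name.S01E02.720p";   // hypothetical feed item title
	const char* szPattern = "[^[:alnum:]]s?([0-9]+)[ex]([0-9]+(-?e[0-9]+)?)[^[:alnum:]]";

	regex_t tRegEx;
	if (regcomp(&tRegEx, szPattern, REG_EXTENDED | REG_ICASE) != 0)
	{
		return 1;
	}

	regmatch_t tMatches[4];
	if (regexec(&tRegEx, szTitle, 4, tMatches, 0) == 0)
	{
		// group 1 -> season number, group 2 -> episode part (may include "-e05" ranges)
		char szSeason[100];
		char szEpisode[100];
		snprintf(szSeason, sizeof(szSeason), "S%02d", atoi(szTitle + tMatches[1].rm_so));

		int iLen = (int)(tMatches[2].rm_eo - tMatches[2].rm_so);
		snprintf(szEpisode, sizeof(szEpisode), "E%.*s", iLen, szTitle + tMatches[2].rm_so);
		for (char* p = szEpisode; *p; p++) *p = toupper(*p);   // e02 -> E02

		printf("%s %s\n", szSeason, szEpisode);                // prints: S01 E02
	}

	regfree(&tRegEx);
	return 0;
}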
FeedInfo.h (new file, 278 lines)

@@ -0,0 +1,278 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision: 0 $
* $Date$
*
*/
#ifndef FEEDINFO_H
#define FEEDINFO_H
#include <deque>
#include <time.h>
#include "Util.h"
#include "DownloadInfo.h"
class FeedInfo
{
public:
enum EStatus
{
fsUndefined,
fsRunning,
fsFinished,
fsFailed
};
private:
int m_iID;
char* m_szName;
char* m_szUrl;
int m_iInterval;
char* m_szFilter;
unsigned int m_iFilterHash;
bool m_bPauseNzb;
char* m_szCategory;
int m_iPriority;
time_t m_tLastUpdate;
bool m_bPreview;
EStatus m_eStatus;
char* m_szOutputFilename;
bool m_bFetch;
bool m_bForce;
public:
FeedInfo(int iID, const char* szName, const char* szUrl, int iInterval,
const char* szFilter, bool bPauseNzb, const char* szCategory, int iPriority);
~FeedInfo();
int GetID() { return m_iID; }
const char* GetName() { return m_szName; }
const char* GetUrl() { return m_szUrl; }
int GetInterval() { return m_iInterval; }
const char* GetFilter() { return m_szFilter; }
unsigned int GetFilterHash() { return m_iFilterHash; }
bool GetPauseNzb() { return m_bPauseNzb; }
const char* GetCategory() { return m_szCategory; }
int GetPriority() { return m_iPriority; }
time_t GetLastUpdate() { return m_tLastUpdate; }
void SetLastUpdate(time_t tLastUpdate) { m_tLastUpdate = tLastUpdate; }
bool GetPreview() { return m_bPreview; }
void SetPreview(bool bPreview) { m_bPreview = bPreview; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
const char* GetOutputFilename() { return m_szOutputFilename; }
void SetOutputFilename(const char* szOutputFilename);
bool GetFetch() { return m_bFetch; }
void SetFetch(bool bFetch) { m_bFetch = bFetch; }
bool GetForce() { return m_bForce; }
void SetForce(bool bForce) { m_bForce = bForce; }
};
typedef std::deque<FeedInfo*> Feeds;
class SharedFeedData
{
private:
RegEx* m_pSeasonEpisodeRegEx;
public:
SharedFeedData();
~SharedFeedData();
RegEx* GetSeasonEpisodeRegEx();
};
class FeedItemInfo
{
public:
enum EStatus
{
isUnknown,
isBacklog,
isFetched,
isNew
};
enum EMatchStatus
{
msIgnored,
msAccepted,
msRejected
};
class Attr
{
private:
char* m_szName;
char* m_szValue;
public:
Attr(const char* szName, const char* szValue);
~Attr();
const char* GetName() { return m_szName; }
const char* GetValue() { return m_szValue; }
};
typedef std::deque<Attr*> AttributesBase;
class Attributes: public AttributesBase
{
public:
~Attributes();
void Add(const char* szName, const char* szValue);
Attr* Find(const char* szName);
};
private:
char* m_szTitle;
char* m_szFilename;
char* m_szUrl;
time_t m_tTime;
long long m_lSize;
char* m_szCategory;
int m_iImdbId;
int m_iRageId;
char* m_szDescription;
char* m_szSeason;
char* m_szEpisode;
int m_iSeasonNum;
int m_iEpisodeNum;
bool m_bSeasonEpisodeParsed;
char* m_szAddCategory;
bool m_bPauseNzb;
int m_iPriority;
EStatus m_eStatus;
EMatchStatus m_eMatchStatus;
int m_iMatchRule;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
SharedFeedData* m_pSharedFeedData;
Attributes m_Attributes;
int ParsePrefixedInt(const char *szValue);
void ParseSeasonEpisode();
public:
FeedItemInfo();
~FeedItemInfo();
void SetSharedFeedData(SharedFeedData* pSharedFeedData) { m_pSharedFeedData = pSharedFeedData; }
const char* GetTitle() { return m_szTitle; }
void SetTitle(const char* szTitle);
const char* GetFilename() { return m_szFilename; }
void SetFilename(const char* szFilename);
const char* GetUrl() { return m_szUrl; }
void SetUrl(const char* szUrl);
long long GetSize() { return m_lSize; }
void SetSize(long long lSize) { m_lSize = lSize; }
const char* GetCategory() { return m_szCategory; }
void SetCategory(const char* szCategory);
int GetImdbId() { return m_iImdbId; }
void SetImdbId(int iImdbId) { m_iImdbId = iImdbId; }
int GetRageId() { return m_iRageId; }
void SetRageId(int iRageId) { m_iRageId = iRageId; }
const char* GetDescription() { return m_szDescription; }
void SetDescription(const char* szDescription);
const char* GetSeason() { return m_szSeason; }
void SetSeason(const char* szSeason);
const char* GetEpisode() { return m_szEpisode; }
void SetEpisode(const char* szEpisode);
int GetSeasonNum();
int GetEpisodeNum();
const char* GetAddCategory() { return m_szAddCategory; }
void SetAddCategory(const char* szAddCategory);
bool GetPauseNzb() { return m_bPauseNzb; }
void SetPauseNzb(bool bPauseNzb) { m_bPauseNzb = bPauseNzb; }
int GetPriority() { return m_iPriority; }
void SetPriority(int iPriority) { m_iPriority = iPriority; }
time_t GetTime() { return m_tTime; }
void SetTime(time_t tTime) { m_tTime = tTime; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus eStatus) { m_eStatus = eStatus; }
EMatchStatus GetMatchStatus() { return m_eMatchStatus; }
void SetMatchStatus(EMatchStatus eMatchStatus) { m_eMatchStatus = eMatchStatus; }
int GetMatchRule() { return m_iMatchRule; }
void SetMatchRule(int iMatchRule) { m_iMatchRule = iMatchRule; }
const char* GetDupeKey() { return m_szDupeKey; }
void SetDupeKey(const char* szDupeKey);
void AppendDupeKey(const char* szExtraDupeKey);
void BuildDupeKey(const char* szRageId, const char* szSeries);
int GetDupeScore() { return m_iDupeScore; }
void SetDupeScore(int iDupeScore) { m_iDupeScore = iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
void SetDupeMode(EDupeMode eDupeMode) { m_eDupeMode = eDupeMode; }
Attributes* GetAttributes() { return &m_Attributes; }
};
typedef std::deque<FeedItemInfo*> FeedItemInfosBase;
class FeedItemInfos : public FeedItemInfosBase
{
private:
int m_iRefCount;
SharedFeedData m_SharedFeedData;
public:
FeedItemInfos();
~FeedItemInfos();
void Retain();
void Release();
void Add(FeedItemInfo* pFeedItemInfo);
};
class FeedHistoryInfo
{
public:
enum EStatus
{
hsUnknown,
hsBacklog,
hsFetched
};
private:
char* m_szUrl;
EStatus m_eStatus;
time_t m_tLastSeen;
public:
FeedHistoryInfo(const char* szUrl, EStatus eStatus, time_t tLastSeen);
~FeedHistoryInfo();
const char* GetUrl() { return m_szUrl; }
EStatus GetStatus() { return m_eStatus; }
void SetStatus(EStatus Status) { m_eStatus = Status; }
time_t GetLastSeen() { return m_tLastSeen; }
void SetLastSeen(time_t tLastSeen) { m_tLastSeen = tLastSeen; }
};
typedef std::deque<FeedHistoryInfo*> FeedHistoryBase;
class FeedHistory : public FeedHistoryBase
{
public:
~FeedHistory();
void Clear();
void Add(const char* szUrl, FeedHistoryInfo::EStatus eStatus, time_t tLastSeen);
void Remove(const char* szUrl);
FeedHistoryInfo* Find(const char* szUrl);
};
#endif

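FeedItemInfos above manages its own lifetime through Retain()/Release() and wires every added item to the shared season/episode regex; a minimal sketch of the intended calling pattern (the surrounding function is invented for illustration and assumes the declarations from FeedInfo.h):

// Sketch only: the initial reference count is 0, so the creator retains once
// and the list deletes itself (and its items) when the count drops back to 0.
#include <stdio.h>
#include "FeedInfo.h"

void ProcessFeedSketch()
{
	FeedItemInfos* pFeedItemInfos = new FeedItemInfos();
	pFeedItemInfos->Retain();                        // count: 1

	FeedItemInfo* pFeedItemInfo = new FeedItemInfo();
	pFeedItemInfo->SetTitle("Show.Name.S01E02.720p");
	pFeedItemInfos->Add(pFeedItemInfo);              // sets the shared feed data

	for (FeedItemInfos::iterator it = pFeedItemInfos->begin(); it != pFeedItemInfos->end(); it++)
	{
		FeedItemInfo* pItem = *it;
		printf("%s: S%i E%i\n", pItem->GetTitle(), pItem->GetSeasonNum(), pItem->GetEpisodeNum());
	}

	pFeedItemInfos->Release();                       // count: 0 -> deletes list and items
}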
Frontend.cpp

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -34,8 +34,7 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <fstream>
#include <stdio.h>
#ifndef WIN32
#include <unistd.h>
#include <arpa/inet.h>
@@ -63,11 +62,11 @@ Frontend::Frontend()
m_iNeededLogEntries = 0;
m_bSummary = false;
m_bFileList = false;
m_fCurrentDownloadSpeed = 0;
m_iCurrentDownloadSpeed = 0;
m_lRemainingSize = 0;
m_bPauseDownload = false;
m_bPauseDownload2 = false;
m_fDownloadLimit = 0;
m_iDownloadLimit = 0;
m_iThreadCount = 0;
m_iPostJobCount = 0;
m_iUpTimeSec = 0;
@@ -87,7 +86,7 @@ bool Frontend::PrepareData()
}
if (!RequestMessages() || ((m_bSummary || m_bFileList) && !RequestFileList()))
{
printf("\nUnable to send request to nzbget-server at %s (port %i) \n", g_pOptions->GetServerIP(), g_pOptions->GetServerPort());
printf("\nUnable to send request to nzbget-server at %s (port %i) \n", g_pOptions->GetControlIP(), g_pOptions->GetControlPort());
Stop();
return false;
}
@@ -96,11 +95,11 @@ bool Frontend::PrepareData()
{
if (m_bSummary)
{
m_fCurrentDownloadSpeed = g_pQueueCoordinator->CalcCurrentDownloadSpeed();
m_iCurrentDownloadSpeed = g_pQueueCoordinator->CalcCurrentDownloadSpeed();
m_lRemainingSize = g_pQueueCoordinator->CalcRemainingSize();
m_bPauseDownload = g_pOptions->GetPauseDownload();
m_bPauseDownload2 = g_pOptions->GetPauseDownload2();
m_fDownloadLimit = g_pOptions->GetDownloadRate();
m_iDownloadLimit = g_pOptions->GetDownloadRate();
m_iThreadCount = Thread::GetThreadCount();
PostQueue* pPostQueue = g_pQueueCoordinator->LockQueue()->GetPostQueue();
m_iPostJobCount = pPostQueue->size();
@@ -182,6 +181,7 @@ void Frontend::ServerPauseUnpause(bool bPause, bool bSecondRegister)
}
else
{
g_pOptions->SetResumeTime(0);
if (bSecondRegister)
{
g_pOptions->SetPauseDownload2(bPause);
@@ -193,15 +193,15 @@ void Frontend::ServerPauseUnpause(bool bPause, bool bSecondRegister)
}
}
void Frontend::ServerSetDownloadRate(float fRate)
void Frontend::ServerSetDownloadRate(int iRate)
{
if (IsRemoteMode())
{
RequestSetDownloadRate(fRate);
RequestSetDownloadRate(iRate);
}
else
{
g_pOptions->SetDownloadRate(fRate);
g_pOptions->SetDownloadRate(iRate);
}
}
@@ -235,14 +235,17 @@ void Frontend::InitMessageBase(SNZBRequestBase* pMessageBase, int iRequest, int
pMessageBase->m_iSignature = htonl(NZBMESSAGE_SIGNATURE);
pMessageBase->m_iType = htonl(iRequest);
pMessageBase->m_iStructSize = htonl(iSize);
strncpy(pMessageBase->m_szPassword, g_pOptions->GetServerPassword(), NZBREQUESTPASSWORDSIZE);
strncpy(pMessageBase->m_szUsername, g_pOptions->GetControlUsername(), NZBREQUESTPASSWORDSIZE - 1);
pMessageBase->m_szUsername[NZBREQUESTPASSWORDSIZE - 1] = '\0';
strncpy(pMessageBase->m_szPassword, g_pOptions->GetControlPassword(), NZBREQUESTPASSWORDSIZE);
pMessageBase->m_szPassword[NZBREQUESTPASSWORDSIZE - 1] = '\0';
}
bool Frontend::RequestMessages()
{
NetAddress netAddress(g_pOptions->GetServerIP(), g_pOptions->GetServerPort());
Connection connection(&netAddress);
Connection connection(g_pOptions->GetControlIP(), g_pOptions->GetControlPort(), false);
bool OK = connection.Connect();
if (!OK)
@@ -262,15 +265,15 @@ bool Frontend::RequestMessages()
LogRequest.m_iIDFrom = 0;
}
if (connection.Send((char*)(&LogRequest), sizeof(LogRequest)) < 0)
if (!connection.Send((char*)(&LogRequest), sizeof(LogRequest)))
{
return false;
}
// Now listen for the returned log
SNZBLogResponse LogResponse;
int iResponseLen = connection.Recv((char*) &LogResponse, sizeof(LogResponse));
if (iResponseLen != sizeof(LogResponse) ||
bool bRead = connection.Recv((char*) &LogResponse, sizeof(LogResponse));
if (!bRead ||
(int)ntohl(LogResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(LogResponse.m_MessageBase.m_iStructSize) != sizeof(LogResponse))
{
@@ -281,7 +284,7 @@ bool Frontend::RequestMessages()
if (ntohl(LogResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(LogResponse.m_iTrailingDataLength));
if (!connection.RecvAll(pBuf, ntohl(LogResponse.m_iTrailingDataLength)))
if (!connection.Recv(pBuf, ntohl(LogResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
@@ -313,8 +316,7 @@ bool Frontend::RequestMessages()
bool Frontend::RequestFileList()
{
NetAddress netAddress(g_pOptions->GetServerIP(), g_pOptions->GetServerPort());
Connection connection(&netAddress);
Connection connection(g_pOptions->GetControlIP(), g_pOptions->GetControlPort(), false);
bool OK = connection.Connect();
if (!OK)
@@ -327,15 +329,15 @@ bool Frontend::RequestFileList()
ListRequest.m_bFileList = htonl(m_bFileList);
ListRequest.m_bServerState = htonl(m_bSummary);
if (connection.Send((char*)(&ListRequest), sizeof(ListRequest)) < 0)
if (!connection.Send((char*)(&ListRequest), sizeof(ListRequest)))
{
return false;
}
// Now listen for the returned list
SNZBListResponse ListResponse;
int iResponseLen = connection.Recv((char*) &ListResponse, sizeof(ListResponse));
if (iResponseLen != sizeof(ListResponse) ||
bool bRead = connection.Recv((char*) &ListResponse, sizeof(ListResponse));
if (!bRead ||
(int)ntohl(ListResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(ListResponse.m_MessageBase.m_iStructSize) != sizeof(ListResponse))
{
@@ -346,7 +348,7 @@ bool Frontend::RequestFileList()
if (ntohl(ListResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(ListResponse.m_iTrailingDataLength));
if (!connection.RecvAll(pBuf, ntohl(ListResponse.m_iTrailingDataLength)))
if (!connection.Recv(pBuf, ntohl(ListResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
@@ -360,8 +362,8 @@ bool Frontend::RequestFileList()
m_bPauseDownload = ntohl(ListResponse.m_bDownloadPaused);
m_bPauseDownload2 = ntohl(ListResponse.m_bDownload2Paused);
m_lRemainingSize = Util::JoinInt64(ntohl(ListResponse.m_iRemainingSizeHi), ntohl(ListResponse.m_iRemainingSizeLo));
m_fCurrentDownloadSpeed = ntohl(ListResponse.m_iDownloadRate) / 1024.0f;
m_fDownloadLimit = ntohl(ListResponse.m_iDownloadLimit) / 1024.0f;
m_iCurrentDownloadSpeed = ntohl(ListResponse.m_iDownloadRate);
m_iDownloadLimit = ntohl(ListResponse.m_iDownloadLimit);
m_iThreadCount = ntohl(ListResponse.m_iThreadCount);
m_iPostJobCount = ntohl(ListResponse.m_iPostJobCount);
m_iUpTimeSec = ntohl(ListResponse.m_iUpTimeSec);
@@ -392,11 +394,11 @@ bool Frontend::RequestPauseUnpause(bool bPause, bool bSecondRegister)
return client.RequestServerPauseUnpause(bPause, bSecondRegister ? eRemotePauseUnpauseActionDownload2 : eRemotePauseUnpauseActionDownload);
}
bool Frontend::RequestSetDownloadRate(float fRate)
bool Frontend::RequestSetDownloadRate(int iRate)
{
RemoteClient client;
client.SetVerbose(false);
return client.RequestServerSetDownloadRate(fRate);
return client.RequestServerSetDownloadRate(iRate);
}
bool Frontend::RequestDumpDebug()
@@ -410,5 +412,5 @@ bool Frontend::RequestEditQueue(eRemoteEditAction iAction, int iOffset, int iID)
{
RemoteClient client;
client.SetVerbose(false);
return client.RequestServerEditQueue(iAction, iOffset, NULL, &iID, 1, false);
return client.RequestServerEditQueue(iAction, iOffset, NULL, &iID, 1, NULL, eRemoteMatchModeID, false);
}

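The hunks above replace the float speed and limit members with integers and drop the "/ 1024.0f" conversions, so the frontend now appears to keep the raw server value (bytes per second) instead of a pre-divided KB/s figure; a tiny sketch of the display-side conversion this implies (variable name and unit reading are assumptions, not confirmed by the diff):

// Illustration only: converting an integer rate in bytes/second back to the
// kilobytes/second figure the old float members carried.
#include <stdio.h>

int main()
{
	int iCurrentDownloadSpeed = 1536000;                      // assumed: bytes per second
	printf("%.1f KB/s\n", iCurrentDownloadSpeed / 1024.0);    // prints: 1500.0 KB/s
	return 0;
}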
Frontend.h

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -50,11 +50,11 @@ protected:
int m_iUpdateInterval;
// summary
float m_fCurrentDownloadSpeed;
int m_iCurrentDownloadSpeed;
long long m_lRemainingSize;
bool m_bPauseDownload;
bool m_bPauseDownload2;
float m_fDownloadLimit;
int m_iDownloadLimit;
int m_iThreadCount;
int m_iPostJobCount;
int m_iUpTimeSec;
@@ -72,8 +72,8 @@ protected:
void InitMessageBase(SNZBRequestBase* pMessageBase, int iRequest, int iSize);
void ServerPauseUnpause(bool bPause, bool bSecondRegister);
bool RequestPauseUnpause(bool bPause, bool bSecondRegister);
void ServerSetDownloadRate(float fRate);
bool RequestSetDownloadRate(float fRate);
void ServerSetDownloadRate(int iRate);
bool RequestSetDownloadRate(int iRate);
void ServerDumpDebug();
bool RequestDumpDebug();
bool ServerEditQueue(QueueEditor::EEditAction eAction, int iOffset, int iEntry);

Log.cpp

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -38,7 +38,7 @@
#include <string.h>
#include <sys/stat.h>
#include <stdarg.h>
#include <cstdio>
#include <stdio.h>
#include "nzbget.h"
#include "Options.h"
@@ -59,15 +59,8 @@ Log::Log()
Log::~Log()
{
for (Messages::iterator it = m_Messages.begin(); it != m_Messages.end(); it++)
{
delete *it;
}
m_Messages.clear();
if (m_szLogFilename)
{
free(m_szLogFilename);
}
Clear();
free(m_szLogFilename);
}
void Log::Filelog(const char* msg, ...)
@@ -84,6 +77,7 @@ void Log::Filelog(const char* msg, ...)
time_t rawtime;
time(&rawtime);
rawtime += g_pOptions->GetTimeCorrection();
char szTime[50];
#ifdef HAVE_CTIME_R_3
@@ -309,10 +303,18 @@ Message::Message(unsigned int iID, EKind eKind, time_t tTime, const char* szText
Message::~ Message()
{
if (m_szText)
free(m_szText);
}
void Log::Clear()
{
m_mutexLog.Lock();
for (Messages::iterator it = m_Messages.begin(); it != m_Messages.end(); it++)
{
delete *it;
}
m_Messages.clear();
m_mutexLog.Unlock();
}
void Log::AppendMessage(Message::EKind eKind, const char * szText)
@@ -351,9 +353,10 @@ void Log::ResetLog()
* During the initializing stage (when options have not been read yet) all messages
* are saved in the screen log, even if they shouldn't be (according to options).
* Method "InitOptions()" checks all messages added to the screen log during
* the initializing stage and does two things:
* the initializing stage and does three things:
* 1) save the messages to log-file (if they should according to options);
* 2) delete messages from screen log (if they should not be saved in screen log);
* 3) renumber the IDs.
*/
void Log::InitOptions()
{
@@ -362,8 +365,13 @@ void Log::InitOptions()
if (g_pOptions->GetCreateLog() && g_pOptions->GetLogFile())
{
m_szLogFilename = strdup(g_pOptions->GetLogFile());
#ifdef WIN32
WebUtil::Utf8ToAnsi(m_szLogFilename, strlen(m_szLogFilename) + 1);
#endif
}
m_iIDGen = 0;
for (unsigned int i = 0; i < m_Messages.size(); )
{
Message* pMessage = m_Messages.at(i);
@@ -399,6 +407,7 @@ void Log::InitOptions()
}
else
{
pMessage->m_iID = ++m_iIDGen;
i++;
}
}

Log.h

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -64,9 +64,11 @@ private:
time_t m_tTime;
char* m_szText;
friend class Log;
public:
Message(unsigned int iID, EKind eKind, time_t tTime, const char* szText);
~Message();
Message(unsigned int iID, EKind eKind, time_t tTime, const char* szText);
~Message();
unsigned int GetID() { return m_iID; }
EKind GetKind() { return m_eKind; }
time_t GetTime() { return m_tTime; }
@@ -108,6 +110,7 @@ public:
~Log();
Messages* LockMessages();
void UnlockMessages();
void Clear();
void ResetLog();
void InitOptions();
};


@@ -2,7 +2,7 @@
* This file if part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$


@@ -2,7 +2,7 @@
* This file if part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$

Maintenance.cpp (new file, 325 lines)

@@ -0,0 +1,325 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#ifndef WIN32
#include <unistd.h>
#endif
#include <errno.h>
#include "nzbget.h"
#include "Log.h"
#include "Util.h"
#include "Maintenance.h"
extern Options* g_pOptions;
extern Maintenance* g_pMaintenance;
Maintenance::Maintenance()
{
m_iIDMessageGen = 0;
m_UpdateScriptController = NULL;
m_szUpdateScript = NULL;
}
Maintenance::~Maintenance()
{
m_mutexController.Lock();
if (m_UpdateScriptController)
{
m_UpdateScriptController->Detach();
m_mutexController.Unlock();
while (m_UpdateScriptController)
{
usleep(20*1000);
}
}
ClearMessages();
free(m_szUpdateScript);
}
void Maintenance::ResetUpdateController()
{
m_mutexController.Lock();
m_UpdateScriptController = NULL;
m_mutexController.Unlock();
}
void Maintenance::ClearMessages()
{
for (Log::Messages::iterator it = m_Messages.begin(); it != m_Messages.end(); it++)
{
delete *it;
}
m_Messages.clear();
}
Log::Messages* Maintenance::LockMessages()
{
m_mutexLog.Lock();
return &m_Messages;
}
void Maintenance::UnlockMessages()
{
m_mutexLog.Unlock();
}
void Maintenance::AppendMessage(Message::EKind eKind, time_t tTime, const char * szText)
{
if (tTime == 0)
{
tTime = time(NULL);
}
m_mutexLog.Lock();
Message* pMessage = new Message(++m_iIDMessageGen, eKind, tTime, szText);
m_Messages.push_back(pMessage);
m_mutexLog.Unlock();
}
bool Maintenance::StartUpdate(EBranch eBranch)
{
m_mutexController.Lock();
bool bAlreadyUpdating = m_UpdateScriptController != NULL;
m_mutexController.Unlock();
if (bAlreadyUpdating)
{
error("Could not start update-script: update-script is already running");
return false;
}
if (m_szUpdateScript)
{
free(m_szUpdateScript);
m_szUpdateScript = NULL;
}
if (!ReadPackageInfoStr("install-script", &m_szUpdateScript))
{
return false;
}
ClearMessages();
m_UpdateScriptController = new UpdateScriptController();
m_UpdateScriptController->SetScript(m_szUpdateScript);
m_UpdateScriptController->SetBranch(eBranch);
m_UpdateScriptController->SetAutoDestroy(true);
m_UpdateScriptController->Start();
return true;
}
bool Maintenance::CheckUpdates(char** pUpdateInfo)
{
char* szUpdateInfoScript;
if (!ReadPackageInfoStr("update-info-script", &szUpdateInfoScript))
{
return false;
}
*pUpdateInfo = NULL;
UpdateInfoScriptController::ExecuteScript(szUpdateInfoScript, pUpdateInfo);
free(szUpdateInfoScript);
return *pUpdateInfo;
}
bool Maintenance::ReadPackageInfoStr(const char* szKey, char** pValue)
{
char szFileName[1024];
snprintf(szFileName, 1024, "%s%cpackage-info.json", g_pOptions->GetWebDir(), PATH_SEPARATOR);
szFileName[1024-1] = '\0';
char* szPackageInfo;
int iPackageInfoLen;
if (!Util::LoadFileIntoBuffer(szFileName, &szPackageInfo, &iPackageInfoLen))
{
error("Could not load file %s", szFileName);
return false;
}
char szKeyStr[100];
snprintf(szKeyStr, 100, "\"%s\"", szKey);
szKeyStr[100-1] = '\0';
char* p = strstr(szPackageInfo, szKeyStr);
if (!p)
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
p = strchr(p + strlen(szKeyStr), '"');
if (!p)
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
p++;
char* pend = strchr(p, '"');
if (!pend)
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
int iLen = pend - p;
if (iLen >= sizeof(szFileName))
{
error("Could not parse file %s", szFileName);
free(szPackageInfo);
return false;
}
*pValue = (char*)malloc(iLen+1);
strncpy(*pValue, p, iLen);
(*pValue)[iLen] = '\0';
WebUtil::JsonDecode(*pValue);
free(szPackageInfo);
return true;
}
void UpdateScriptController::Run()
{
m_iPrefixLen = 0;
PrintMessage(Message::mkInfo, "Executing update-script %s", GetScript());
char szInfoName[1024];
snprintf(szInfoName, 1024, "update-script %s", Util::BaseFileName(GetScript()));
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
const char* szBranchName[] = { "STABLE", "TESTING", "DEVEL" };
SetEnvVar("NZBUP_BRANCH", szBranchName[m_eBranch]);
char szProcessID[20];
#ifdef WIN32
int pid = (int)GetCurrentProcessId();
#else
int pid = (int)getppid();
#endif
snprintf(szProcessID, 20, "%i", pid);
szProcessID[20-1] = '\0';
SetEnvVar("NZBUP_PROCESSID", szProcessID);
char szLogPrefix[100];
strncpy(szLogPrefix, Util::BaseFileName(GetScript()), 100);
szLogPrefix[100-1] = '\0';
if (char* ext = strrchr(szLogPrefix, '.')) *ext = '\0'; // strip file extension
SetLogPrefix(szLogPrefix);
m_iPrefixLen = strlen(szLogPrefix) + 2; // 2 = strlen(": ");
Execute();
g_pMaintenance->ResetUpdateController();
}
void UpdateScriptController::AddMessage(Message::EKind eKind, const char* szText)
{
szText = szText + m_iPrefixLen;
g_pMaintenance->AppendMessage(eKind, time(NULL), szText);
ScriptController::AddMessage(eKind, szText);
}
void UpdateInfoScriptController::ExecuteScript(const char* szScript, char** pUpdateInfo)
{
detail("Executing update-info-script %s", Util::BaseFileName(szScript));
UpdateInfoScriptController* pScriptController = new UpdateInfoScriptController();
pScriptController->SetScript(szScript);
char szInfoName[1024];
snprintf(szInfoName, 1024, "update-info-script %s", Util::BaseFileName(szScript));
szInfoName[1024-1] = '\0';
pScriptController->SetInfoName(szInfoName);
char szLogPrefix[1024];
strncpy(szLogPrefix, Util::BaseFileName(szScript), 1024);
szLogPrefix[1024-1] = '\0';
if (char* ext = strrchr(szLogPrefix, '.')) *ext = '\0'; // strip file extension
pScriptController->SetLogPrefix(szLogPrefix);
pScriptController->m_iPrefixLen = strlen(szLogPrefix) + 2; // 2 = strlen(": ");
pScriptController->Execute();
if (pScriptController->m_UpdateInfo.GetBuffer())
{
int iLen = strlen(pScriptController->m_UpdateInfo.GetBuffer());
*pUpdateInfo = (char*)malloc(iLen + 1);
strncpy(*pUpdateInfo, pScriptController->m_UpdateInfo.GetBuffer(), iLen);
(*pUpdateInfo)[iLen] = '\0';
}
delete pScriptController;
}
void UpdateInfoScriptController::AddMessage(Message::EKind eKind, const char* szText)
{
szText = szText + m_iPrefixLen;
if (!strncmp(szText, "[NZB] ", 6))
{
debug("Command %s detected", szText + 6);
if (!strncmp(szText + 6, "[UPDATEINFO]", 12))
{
m_UpdateInfo.Append(szText + 6 + 12);
}
else
{
error("Invalid command \"%s\" received from %s", szText, GetInfoName());
}
}
else
{
ScriptController::AddMessage(eKind, szText);
}
}

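CheckUpdates() above allocates the collected script output into the caller-supplied pointer, and ReadPackageInfoStr() locates the script names via quoted keys ("install-script", "update-info-script") in package-info.json under WebDir; a minimal caller sketch (the surrounding function is invented for illustration and assumes Maintenance.h below):

// Sketch only: the caller owns the string returned through the pointer and
// must free() it; an update itself would be triggered separately with
// g_pMaintenance->StartUpdate(Maintenance::brStable).
#include <stdio.h>
#include <stdlib.h>
#include "Maintenance.h"

extern Maintenance* g_pMaintenance;

void PrintAvailableUpdatesSketch()
{
	char* szUpdateInfo = NULL;
	if (g_pMaintenance->CheckUpdates(&szUpdateInfo))
	{
		printf("%s\n", szUpdateInfo);   // text gathered from the script's [UPDATEINFO] output
		free(szUpdateInfo);
	}
}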
Maintenance.h (new file, 94 lines)

@@ -0,0 +1,94 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef MAINTENANCE_H
#define MAINTENANCE_H
#include "Thread.h"
#include "ScriptController.h"
#include "Log.h"
#include "Util.h"
class UpdateScriptController;
class Maintenance
{
private:
Log::Messages m_Messages;
Mutex m_mutexLog;
Mutex m_mutexController;
int m_iIDMessageGen;
UpdateScriptController* m_UpdateScriptController;
char* m_szUpdateScript;
bool ReadPackageInfoStr(const char* szKey, char** pValue);
public:
enum EBranch
{
brStable,
brTesting,
brDevel
};
Maintenance();
~Maintenance();
void ClearMessages();
void AppendMessage(Message::EKind eKind, time_t tTime, const char* szText);
Log::Messages* LockMessages();
void UnlockMessages();
bool StartUpdate(EBranch eBranch);
void ResetUpdateController();
bool CheckUpdates(char** pUpdateInfo);
};
class UpdateScriptController : public Thread, public ScriptController
{
private:
Maintenance::EBranch m_eBranch;
int m_iPrefixLen;
protected:
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
virtual void Run();
void SetBranch(Maintenance::EBranch eBranch) { m_eBranch = eBranch; }
};
class UpdateInfoScriptController : public ScriptController
{
private:
int m_iPrefixLen;
StringBuilder m_UpdateInfo;
protected:
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
static void ExecuteScript(const char* szScript, char** pUpdateInfo);
};
#endif

Makefile.am

@@ -1,23 +1,136 @@
#
# This file if part of nzbget
#
# Copyright (C) 2008-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
#
bin_PROGRAMS = nzbget
nzbget_SOURCES = ArticleDownloader.cpp ArticleDownloader.h BinRpc.cpp BinRpc.h \
nzbget_SOURCES = \
ArticleDownloader.cpp ArticleDownloader.h BinRpc.cpp BinRpc.h \
ColoredFrontend.cpp ColoredFrontend.h Connection.cpp Connection.h Decoder.cpp Decoder.h \
DiskState.cpp DiskState.h DownloadInfo.cpp DownloadInfo.h Frontend.cpp Frontend.h \
Log.cpp Log.h LoggableFrontend.cpp LoggableFrontend.h MessageBase.h \
NCursesFrontend.cpp NCursesFrontend.h NNTPConnection.cpp NNTPConnection.h NZBFile.cpp \
NZBFile.h NetAddress.cpp NetAddress.h NewsServer.cpp NewsServer.h Observer.cpp \
Observer.h Options.cpp Options.h ParChecker.cpp ParChecker.h \
PrePostProcessor.cpp PrePostProcessor.h QueueCoordinator.cpp \
DiskState.cpp DiskState.h DownloadInfo.cpp DownloadInfo.h DupeCoordinator.cpp DupeCoordinator.h \
Frontend.cpp Frontend.h FeedCoordinator.cpp FeedCoordinator.h FeedFile.cpp FeedFile.h \
FeedFilter.cpp FeedFilter.h FeedInfo.cpp FeedInfo.h Log.cpp Log.h LoggableFrontend.cpp \
LoggableFrontend.h Maintenance.cpp Maintenance.h MessageBase.h NCursesFrontend.cpp \
NCursesFrontend.h NNTPConnection.cpp \
NNTPConnection.h NZBFile.cpp NZBFile.h NewsServer.cpp NewsServer.h Observer.cpp \
Observer.h Options.cpp Options.h ParChecker.cpp ParChecker.h ParRenamer.cpp ParRenamer.h \
ParCoordinator.cpp ParCoordinator.h PrePostProcessor.cpp PrePostProcessor.h QueueCoordinator.cpp \
QueueCoordinator.h QueueEditor.cpp QueueEditor.h RemoteClient.cpp RemoteClient.h \
RemoteServer.cpp RemoteServer.h Scanner.cpp Scanner.h Scheduler.cpp Scheduler.h ScriptController.cpp \
ScriptController.h ServerPool.cpp ServerPool.h svn_version.cpp TLS.cpp TLS.h Thread.cpp Thread.h Util.cpp \
Util.h XmlRpc.cpp XmlRpc.h nzbget.cpp nzbget.h
ScriptController.h ServerPool.cpp ServerPool.h svn_version.cpp TLS.cpp TLS.h Thread.cpp Thread.h \
Util.cpp Util.h XmlRpc.cpp XmlRpc.h WebDownloader.cpp WebDownloader.h WebServer.cpp WebServer.h \
UrlCoordinator.cpp UrlCoordinator.h Unpack.cpp Unpack.h nzbget.cpp nzbget.h
EXTRA_DIST = nzbget.conf.example postprocess-example.sh postprocess-example.conf \
win32.h NTService.cpp NTService.h \
EXTRA_DIST = \
Makefile.cvs nzbgetd \
$(patches_FILES) $(windows_FILES) $(osx_FILES)
patches_FILES = \
libpar2-0.2-bugfixes.patch libpar2-0.2-cancel.patch \
libpar2-0.2-MSVC8.patch libsigc++-2.0.18-MSVC8.patch \
Makefile.cvs nzbget.kdevelop nzbget.sln nzbget.vcproj \
nzbgetd nzbget-shell.bat
libpar2-0.2-MSVC8.patch libsigc++-2.0.18-MSVC8.patch
windows_FILES = \
win32.h NTService.cpp NTService.h nzbget.sln nzbget.vcproj nzbget-shell.bat
osx_FILES = \
osx/App_Prefix.pch osx/NZBGet-Info.plist \
osx/DaemonController.h osx/DaemonController.m \
osx/MainApp.h osx/MainApp.m osx/MainApp.xib \
osx/PFMoveApplication.h osx/PFMoveApplication.m \
osx/PreferencesDialog.h osx/PreferencesDialog.m osx/PreferencesDialog.xib \
osx/RPC.h osx/RPC.m osx/WebClient.h osx/WebClient.m \
osx/WelcomeDialog.h osx/WelcomeDialog.m osx/WelcomeDialog.xib \
osx/NZBGet.xcodeproj/project.pbxproj \
osx/Resources/Images/mainicon.icns osx/Resources/Images/statusicon.png \
osx/Resources/Images/statusicon@2x.png osx/Resources/Images/statusicon-inv.png \
osx/Resources/Images/statusicon-inv@2x.png osx/Resources/licenses/license-bootstrap.txt \
osx/Resources/licenses/license-jquery-GPL.txt osx/Resources/licenses/license-jquery-MIT.txt \
osx/Resources/Credits.rtf osx/Resources/Localizable.strings osx/Resources/Welcome.rtf
doc_FILES = \
README ChangeLog COPYING
exampleconf_FILES = \
nzbget.conf
webui_FILES = \
webui/index.html webui/index.js webui/downloads.js webui/edit.js webui/fasttable.js \
webui/history.js webui/messages.js webui/status.js webui/style.css webui/upload.js \
webui/util.js webui/config.js webui/feed.js \
webui/lib/bootstrap.js webui/lib/bootstrap.min.js webui/lib/bootstrap.css \
webui/lib/jquery.js webui/lib/jquery.min.js \
webui/img/icons.png webui/img/icons-2x.png \
webui/img/transmit.gif webui/img/transmit-file.gif webui/img/favicon.ico \
webui/img/download-anim-green-2x.png webui/img/download-anim-orange-2x.png \
webui/img/transmit-reload-2x.gif
ppscripts_FILES = \
ppscripts/EMail.py ppscripts/Logger.py
# Install
sbin_SCRIPTS = nzbgetd
dist_doc_DATA = $(doc_FILES)
exampleconfdir = $(datadir)/nzbget
dist_exampleconf_DATA = $(exampleconf_FILES)
webuidir = $(datadir)/nzbget
nobase_dist_webui_DATA = $(webui_FILES)
ppscriptsdir = $(datadir)/nzbget
nobase_dist_ppscripts_SCRIPTS = $(ppscripts_FILES)
# Note about "sed":
# We need to make some changes in installed files.
# On Linux "sed" has option "-i" for in-place-edit. Unfortunately the BSD version of "sed"
# has incompatible syntax. To solve the problem we perform in-place-edit in three steps:
# 1) copy the original file to original.temp (delete existing original.temp, if any);
# 2) sed < original.temp > original
# 3) delete original.temp
# These steps ensure that the output file has the same permissions as the original file.
# Configure installed script
install-exec-hook:
rm -f "$(DESTDIR)$(sbindir)/nzbgetd.temp"
cp "$(DESTDIR)$(sbindir)/nzbgetd" "$(DESTDIR)$(sbindir)/nzbgetd.temp"
sed 's?/usr/local/bin?$(bindir)?' < "$(DESTDIR)$(sbindir)/nzbgetd.temp" > "$(DESTDIR)$(sbindir)/nzbgetd"
rm "$(DESTDIR)$(sbindir)/nzbgetd.temp"
# Prepare example configuration file
install-data-hook:
rm -f "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
cp "$(DESTDIR)$(exampleconfdir)/nzbget.conf" "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:^ConfigTemplate=:ConfigTemplate=$(exampleconfdir)/nzbget.conf:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:configuration file (typically installed:configuration file (installed:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:/usr/local/share/nzbget/nzbget.conf):$(exampleconfdir)/nzbget.conf):' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:^WebDir=:WebDir=$(webuidir)/webui:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:typically installed to /usr/local/share/nzbget/ppscripts:installed to $(ppscriptsdir)/ppscripts:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
rm "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
# Install configuration files into /etc
# (only if they do not exist there to prevent override by update)
install-conf:
if test ! -f "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; then \
$(mkinstalldirs) "$(DESTDIR)$(sysconfdir)" ; \
cp "$(DESTDIR)$(exampleconfdir)/nzbget.conf" "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; \
fi
uninstall-conf:
rm -f "$(DESTDIR)$(sysconfdir)/nzbget.conf"
# Determining subversion revision:
# 1) If directory ".svn" exists we take revision from it using program svnversion (part of subversion package)
@@ -56,4 +169,15 @@ svn_version.cpp: FORCE
fi
FORCE:
# Ignore "svn_version.cpp" in distcleancheck
distcleancheck_listfiles = \
find . -type f -exec sh -c 'test -f $(srcdir)/$$1 || echo $$1' \
sh '{}' ';'
clean-bak: rm *~
# Fix permissions
dist-hook:
chmod -x $(distdir)/*.cpp $(distdir)/*.h
find $(distdir)/webui -type f -print -exec chmod -x {} \;

Makefile.in

@@ -14,6 +14,29 @@
@SET_MAKE@
#
# This file if part of nzbget
#
# Copyright (C) 2008-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
#
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
@@ -36,8 +59,11 @@ PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
target_triplet = @target@
bin_PROGRAMS = nzbget$(EXEEXT)
DIST_COMMON = README $(am__configure_deps) $(srcdir)/Makefile.am \
DIST_COMMON = README $(am__configure_deps) $(dist_doc_DATA) \
$(dist_exampleconf_DATA) $(nobase_dist_ppscripts_SCRIPTS) \
$(nobase_dist_webui_DATA) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in $(srcdir)/config.h.in \
$(top_srcdir)/configure AUTHORS COPYING ChangeLog INSTALL NEWS \
config.guess config.sub depcomp install-sh missing
@@ -51,24 +77,41 @@ am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = config.h
CONFIG_CLEAN_FILES =
am__installdirs = "$(DESTDIR)$(bindir)"
am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(ppscriptsdir)" \
"$(DESTDIR)$(sbindir)" "$(DESTDIR)$(docdir)" \
"$(DESTDIR)$(exampleconfdir)" "$(DESTDIR)$(webuidir)"
binPROGRAMS_INSTALL = $(INSTALL_PROGRAM)
PROGRAMS = $(bin_PROGRAMS)
am_nzbget_OBJECTS = ArticleDownloader.$(OBJEXT) BinRpc.$(OBJEXT) \
ColoredFrontend.$(OBJEXT) Connection.$(OBJEXT) \
Decoder.$(OBJEXT) DiskState.$(OBJEXT) DownloadInfo.$(OBJEXT) \
Frontend.$(OBJEXT) Log.$(OBJEXT) LoggableFrontend.$(OBJEXT) \
DupeCoordinator.$(OBJEXT) Frontend.$(OBJEXT) \
FeedCoordinator.$(OBJEXT) FeedFile.$(OBJEXT) \
FeedFilter.$(OBJEXT) FeedInfo.$(OBJEXT) Log.$(OBJEXT) \
LoggableFrontend.$(OBJEXT) Maintenance.$(OBJEXT) \
NCursesFrontend.$(OBJEXT) NNTPConnection.$(OBJEXT) \
NZBFile.$(OBJEXT) NetAddress.$(OBJEXT) NewsServer.$(OBJEXT) \
Observer.$(OBJEXT) Options.$(OBJEXT) ParChecker.$(OBJEXT) \
PrePostProcessor.$(OBJEXT) QueueCoordinator.$(OBJEXT) \
QueueEditor.$(OBJEXT) RemoteClient.$(OBJEXT) \
RemoteServer.$(OBJEXT) Scanner.$(OBJEXT) Scheduler.$(OBJEXT) \
NZBFile.$(OBJEXT) NewsServer.$(OBJEXT) Observer.$(OBJEXT) \
Options.$(OBJEXT) ParChecker.$(OBJEXT) ParRenamer.$(OBJEXT) \
ParCoordinator.$(OBJEXT) PrePostProcessor.$(OBJEXT) \
QueueCoordinator.$(OBJEXT) QueueEditor.$(OBJEXT) \
RemoteClient.$(OBJEXT) RemoteServer.$(OBJEXT) \
Scanner.$(OBJEXT) Scheduler.$(OBJEXT) \
ScriptController.$(OBJEXT) ServerPool.$(OBJEXT) \
svn_version.$(OBJEXT) TLS.$(OBJEXT) Thread.$(OBJEXT) \
Util.$(OBJEXT) XmlRpc.$(OBJEXT) nzbget.$(OBJEXT)
Util.$(OBJEXT) XmlRpc.$(OBJEXT) WebDownloader.$(OBJEXT) \
WebServer.$(OBJEXT) UrlCoordinator.$(OBJEXT) Unpack.$(OBJEXT) \
nzbget.$(OBJEXT)
nzbget_OBJECTS = $(am_nzbget_OBJECTS)
nzbget_LDADD = $(LDADD)
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
nobase_dist_ppscriptsSCRIPT_INSTALL = $(install_sh_SCRIPT)
sbinSCRIPT_INSTALL = $(INSTALL_SCRIPT)
SCRIPTS = $(nobase_dist_ppscripts_SCRIPTS) $(sbin_SCRIPTS)
DEFAULT_INCLUDES = -I. -I$(srcdir) -I.
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
@@ -83,6 +126,11 @@ CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
SOURCES = $(nzbget_SOURCES)
DIST_SOURCES = $(nzbget_SOURCES)
dist_docDATA_INSTALL = $(INSTALL_DATA)
dist_exampleconfDATA_INSTALL = $(INSTALL_DATA)
nobase_dist_webuiDATA_INSTALL = $(install_sh_DATA)
DATA = $(dist_doc_DATA) $(dist_exampleconf_DATA) \
$(nobase_dist_webui_DATA)
ETAGS = etags
CTAGS = ctags
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
@@ -95,18 +143,14 @@ am__remove_distdir = \
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
distuninstallcheck_listfiles = find . -type f -print
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
ADDSRCS = @ADDSRCS@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CFLAGS = @CFLAGS@
CPPFLAGS = @CPPFLAGS@
CXX = @CXX@
CXXCPP = @CXXCPP@
@@ -184,6 +228,8 @@ localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
openssl_CFLAGS = @openssl_CFLAGS@
openssl_LIBS = @openssl_LIBS@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
@@ -191,26 +237,89 @@ psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target = @target@
target_alias = @target_alias@
nzbget_SOURCES = ArticleDownloader.cpp ArticleDownloader.h BinRpc.cpp BinRpc.h \
target_cpu = @target_cpu@
target_os = @target_os@
target_vendor = @target_vendor@
nzbget_SOURCES = \
ArticleDownloader.cpp ArticleDownloader.h BinRpc.cpp BinRpc.h \
ColoredFrontend.cpp ColoredFrontend.h Connection.cpp Connection.h Decoder.cpp Decoder.h \
DiskState.cpp DiskState.h DownloadInfo.cpp DownloadInfo.h Frontend.cpp Frontend.h \
Log.cpp Log.h LoggableFrontend.cpp LoggableFrontend.h MessageBase.h \
NCursesFrontend.cpp NCursesFrontend.h NNTPConnection.cpp NNTPConnection.h NZBFile.cpp \
NZBFile.h NetAddress.cpp NetAddress.h NewsServer.cpp NewsServer.h Observer.cpp \
Observer.h Options.cpp Options.h ParChecker.cpp ParChecker.h \
PrePostProcessor.cpp PrePostProcessor.h QueueCoordinator.cpp \
DiskState.cpp DiskState.h DownloadInfo.cpp DownloadInfo.h DupeCoordinator.cpp DupeCoordinator.h \
Frontend.cpp Frontend.h FeedCoordinator.cpp FeedCoordinator.h FeedFile.cpp FeedFile.h \
FeedFilter.cpp FeedFilter.h FeedInfo.cpp FeedInfo.h Log.cpp Log.h LoggableFrontend.cpp \
LoggableFrontend.h Maintenance.cpp Maintenance.h MessageBase.h NCursesFrontend.cpp \
NCursesFrontend.h NNTPConnection.cpp \
NNTPConnection.h NZBFile.cpp NZBFile.h NewsServer.cpp NewsServer.h Observer.cpp \
Observer.h Options.cpp Options.h ParChecker.cpp ParChecker.h ParRenamer.cpp ParRenamer.h \
ParCoordinator.cpp ParCoordinator.h PrePostProcessor.cpp PrePostProcessor.h QueueCoordinator.cpp \
QueueCoordinator.h QueueEditor.cpp QueueEditor.h RemoteClient.cpp RemoteClient.h \
RemoteServer.cpp RemoteServer.h Scanner.cpp Scanner.h Scheduler.cpp Scheduler.h ScriptController.cpp \
ScriptController.h ServerPool.cpp ServerPool.h svn_version.cpp TLS.cpp TLS.h Thread.cpp Thread.h Util.cpp \
Util.h XmlRpc.cpp XmlRpc.h nzbget.cpp nzbget.h
ScriptController.h ServerPool.cpp ServerPool.h svn_version.cpp TLS.cpp TLS.h Thread.cpp Thread.h \
Util.cpp Util.h XmlRpc.cpp XmlRpc.h WebDownloader.cpp WebDownloader.h WebServer.cpp WebServer.h \
UrlCoordinator.cpp UrlCoordinator.h Unpack.cpp Unpack.h nzbget.cpp nzbget.h
EXTRA_DIST = nzbget.conf.example postprocess-example.sh postprocess-example.conf \
win32.h NTService.cpp NTService.h \
EXTRA_DIST = \
Makefile.cvs nzbgetd \
$(patches_FILES) $(windows_FILES) $(osx_FILES)
patches_FILES = \
libpar2-0.2-bugfixes.patch libpar2-0.2-cancel.patch \
libpar2-0.2-MSVC8.patch libsigc++-2.0.18-MSVC8.patch \
Makefile.cvs nzbget.kdevelop nzbget.sln nzbget.vcproj \
nzbgetd nzbget-shell.bat
libpar2-0.2-MSVC8.patch libsigc++-2.0.18-MSVC8.patch
windows_FILES = \
win32.h NTService.cpp NTService.h nzbget.sln nzbget.vcproj nzbget-shell.bat
osx_FILES = \
osx/App_Prefix.pch osx/NZBGet-Info.plist \
osx/DaemonController.h osx/DaemonController.m \
osx/MainApp.h osx/MainApp.m osx/MainApp.xib \
osx/PFMoveApplication.h osx/PFMoveApplication.m \
osx/PreferencesDialog.h osx/PreferencesDialog.m osx/PreferencesDialog.xib \
osx/RPC.h osx/RPC.m osx/WebClient.h osx/WebClient.m \
osx/WelcomeDialog.h osx/WelcomeDialog.m osx/WelcomeDialog.xib \
osx/NZBGet.xcodeproj/project.pbxproj \
osx/Resources/Images/mainicon.icns osx/Resources/Images/statusicon.png \
osx/Resources/Images/statusicon@2x.png osx/Resources/Images/statusicon-inv.png \
osx/Resources/Images/statusicon-inv@2x.png osx/Resources/licenses/license-bootstrap.txt \
osx/Resources/licenses/license-jquery-GPL.txt osx/Resources/licenses/license-jquery-MIT.txt \
osx/Resources/Credits.rtf osx/Resources/Localizable.strings osx/Resources/Welcome.rtf
doc_FILES = \
README ChangeLog COPYING
exampleconf_FILES = \
nzbget.conf
webui_FILES = \
webui/index.html webui/index.js webui/downloads.js webui/edit.js webui/fasttable.js \
webui/history.js webui/messages.js webui/status.js webui/style.css webui/upload.js \
webui/util.js webui/config.js webui/feed.js \
webui/lib/bootstrap.js webui/lib/bootstrap.min.js webui/lib/bootstrap.css \
webui/lib/jquery.js webui/lib/jquery.min.js \
webui/img/icons.png webui/img/icons-2x.png \
webui/img/transmit.gif webui/img/transmit-file.gif webui/img/favicon.ico \
webui/img/download-anim-green-2x.png webui/img/download-anim-orange-2x.png \
webui/img/transmit-reload-2x.gif
ppscripts_FILES = \
ppscripts/EMail.py ppscripts/Logger.py
# Install
sbin_SCRIPTS = nzbgetd
dist_doc_DATA = $(doc_FILES)
exampleconfdir = $(datadir)/nzbget
dist_exampleconf_DATA = $(exampleconf_FILES)
webuidir = $(datadir)/nzbget
nobase_dist_webui_DATA = $(webui_FILES)
ppscriptsdir = $(datadir)/nzbget
nobase_dist_ppscripts_SCRIPTS = $(ppscripts_FILES)
# Ignore "svn_version.cpp" in distcleancheck
distcleancheck_listfiles = \
find . -type f -exec sh -c 'test -f $(srcdir)/$$1 || echo $$1' \
sh '{}' ';'
all: config.h
$(MAKE) $(AM_MAKEFLAGS) all-am
@@ -293,6 +402,50 @@ clean-binPROGRAMS:
nzbget$(EXEEXT): $(nzbget_OBJECTS) $(nzbget_DEPENDENCIES)
@rm -f nzbget$(EXEEXT)
$(CXXLINK) $(nzbget_LDFLAGS) $(nzbget_OBJECTS) $(nzbget_LDADD) $(LIBS)
install-nobase_dist_ppscriptsSCRIPTS: $(nobase_dist_ppscripts_SCRIPTS)
@$(NORMAL_INSTALL)
test -z "$(ppscriptsdir)" || $(mkdir_p) "$(DESTDIR)$(ppscriptsdir)"
@$(am__vpath_adj_setup) \
list='$(nobase_dist_ppscripts_SCRIPTS)'; for p in $$list; do \
$(am__vpath_adj) p=$$f; \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
if test -f $$d$$p; then \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
f=`echo "$$p" | sed 's|[^/]*$$||'`"$$f"; \
echo " $(nobase_dist_ppscriptsSCRIPT_INSTALL) '$$d$$p' '$(DESTDIR)$(ppscriptsdir)/$$f'"; \
$(nobase_dist_ppscriptsSCRIPT_INSTALL) "$$d$$p" "$(DESTDIR)$(ppscriptsdir)/$$f"; \
else :; fi; \
done
uninstall-nobase_dist_ppscriptsSCRIPTS:
@$(NORMAL_UNINSTALL)
@$(am__vpath_adj_setup) \
list='$(nobase_dist_ppscripts_SCRIPTS)'; for p in $$list; do \
$(am__vpath_adj) p=$$f; \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
f=`echo "$$p" | sed 's|[^/]*$$||'`"$$f"; \
echo " rm -f '$(DESTDIR)$(ppscriptsdir)/$$f'"; \
rm -f "$(DESTDIR)$(ppscriptsdir)/$$f"; \
done
install-sbinSCRIPTS: $(sbin_SCRIPTS)
@$(NORMAL_INSTALL)
test -z "$(sbindir)" || $(mkdir_p) "$(DESTDIR)$(sbindir)"
@list='$(sbin_SCRIPTS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
if test -f $$d$$p; then \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
echo " $(sbinSCRIPT_INSTALL) '$$d$$p' '$(DESTDIR)$(sbindir)/$$f'"; \
$(sbinSCRIPT_INSTALL) "$$d$$p" "$(DESTDIR)$(sbindir)/$$f"; \
else :; fi; \
done
uninstall-sbinSCRIPTS:
@$(NORMAL_UNINSTALL)
@list='$(sbin_SCRIPTS)'; for p in $$list; do \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
echo " rm -f '$(DESTDIR)$(sbindir)/$$f'"; \
rm -f "$(DESTDIR)$(sbindir)/$$f"; \
done
mostlyclean-compile:
-rm -f *.$(OBJEXT)
@@ -307,17 +460,24 @@ distclean-compile:
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Decoder.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DiskState.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DownloadInfo.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DupeCoordinator.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/FeedCoordinator.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/FeedFile.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/FeedFilter.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/FeedInfo.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Frontend.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Log.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/LoggableFrontend.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Maintenance.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NCursesFrontend.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NNTPConnection.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NZBFile.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NetAddress.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NewsServer.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Observer.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Options.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ParChecker.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ParCoordinator.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ParRenamer.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/PrePostProcessor.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/QueueCoordinator.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/QueueEditor.Po@am__quote@
@@ -329,7 +489,11 @@ distclean-compile:
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ServerPool.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/TLS.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Thread.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Unpack.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/UrlCoordinator.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Util.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/WebDownloader.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/WebServer.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/XmlRpc.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/nzbget.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/svn_version.Po@am__quote@
@@ -348,6 +512,59 @@ distclean-compile:
@AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCXX_FALSE@ $(CXXCOMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
uninstall-info-am:
install-dist_docDATA: $(dist_doc_DATA)
@$(NORMAL_INSTALL)
test -z "$(docdir)" || $(mkdir_p) "$(DESTDIR)$(docdir)"
@list='$(dist_doc_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(dist_docDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(docdir)/$$f'"; \
$(dist_docDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(docdir)/$$f"; \
done
uninstall-dist_docDATA:
@$(NORMAL_UNINSTALL)
@list='$(dist_doc_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(docdir)/$$f'"; \
rm -f "$(DESTDIR)$(docdir)/$$f"; \
done
install-dist_exampleconfDATA: $(dist_exampleconf_DATA)
@$(NORMAL_INSTALL)
test -z "$(exampleconfdir)" || $(mkdir_p) "$(DESTDIR)$(exampleconfdir)"
@list='$(dist_exampleconf_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(dist_exampleconfDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(exampleconfdir)/$$f'"; \
$(dist_exampleconfDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(exampleconfdir)/$$f"; \
done
uninstall-dist_exampleconfDATA:
@$(NORMAL_UNINSTALL)
@list='$(dist_exampleconf_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(exampleconfdir)/$$f'"; \
rm -f "$(DESTDIR)$(exampleconfdir)/$$f"; \
done
install-nobase_dist_webuiDATA: $(nobase_dist_webui_DATA)
@$(NORMAL_INSTALL)
test -z "$(webuidir)" || $(mkdir_p) "$(DESTDIR)$(webuidir)"
@$(am__vpath_adj_setup) \
list='$(nobase_dist_webui_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
$(am__vpath_adj) \
echo " $(nobase_dist_webuiDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(webuidir)/$$f'"; \
$(nobase_dist_webuiDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(webuidir)/$$f"; \
done
uninstall-nobase_dist_webuiDATA:
@$(NORMAL_UNINSTALL)
@$(am__vpath_adj_setup) \
list='$(nobase_dist_webui_DATA)'; for p in $$list; do \
$(am__vpath_adj) \
echo " rm -f '$(DESTDIR)$(webuidir)/$$f'"; \
rm -f "$(DESTDIR)$(webuidir)/$$f"; \
done
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
@@ -400,6 +617,7 @@ distclean-tags:
distdir: $(DISTFILES)
$(am__remove_distdir)
mkdir $(distdir)
$(mkdir_p) $(distdir)/osx $(distdir)/osx/NZBGet.xcodeproj $(distdir)/osx/Resources $(distdir)/osx/Resources/Images $(distdir)/osx/Resources/licenses $(distdir)/ppscripts $(distdir)/webui $(distdir)/webui/img $(distdir)/webui/lib
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
@@ -426,6 +644,9 @@ distdir: $(DISTFILES)
|| exit 1; \
fi; \
done
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$(top_distdir)" distdir="$(distdir)" \
dist-hook
-find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
@@ -525,9 +746,9 @@ distcleancheck: distclean
exit 1; } >&2
check-am: all-am
check: check-am
all-am: Makefile $(PROGRAMS) config.h
all-am: Makefile $(PROGRAMS) $(SCRIPTS) $(DATA) config.h
installdirs:
for dir in "$(DESTDIR)$(bindir)"; do \
for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(ppscriptsdir)" "$(DESTDIR)$(sbindir)" "$(DESTDIR)$(docdir)" "$(DESTDIR)$(exampleconfdir)" "$(DESTDIR)$(webuidir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
@@ -575,9 +796,15 @@ info: info-am
info-am:
install-data-am:
install-data-am: install-dist_docDATA install-dist_exampleconfDATA \
install-nobase_dist_ppscriptsSCRIPTS \
install-nobase_dist_webuiDATA
@$(NORMAL_INSTALL)
$(MAKE) $(AM_MAKEFLAGS) install-data-hook
install-exec-am: install-binPROGRAMS
install-exec-am: install-binPROGRAMS install-sbinSCRIPTS
@$(NORMAL_INSTALL)
$(MAKE) $(AM_MAKEFLAGS) install-exec-hook
install-info: install-info-am
@@ -604,23 +831,70 @@ ps: ps-am
ps-am:
uninstall-am: uninstall-binPROGRAMS uninstall-info-am
uninstall-am: uninstall-binPROGRAMS uninstall-dist_docDATA \
uninstall-dist_exampleconfDATA uninstall-info-am \
uninstall-nobase_dist_ppscriptsSCRIPTS \
uninstall-nobase_dist_webuiDATA uninstall-sbinSCRIPTS
.PHONY: CTAGS GTAGS all all-am am--refresh check check-am clean \
clean-binPROGRAMS clean-generic ctags dist dist-all dist-bzip2 \
dist-gzip dist-shar dist-tarZ dist-zip distcheck distclean \
distclean-compile distclean-generic distclean-hdr \
dist-gzip dist-hook dist-shar dist-tarZ dist-zip distcheck \
distclean distclean-compile distclean-generic distclean-hdr \
distclean-tags distcleancheck distdir distuninstallcheck dvi \
dvi-am html html-am info info-am install install-am \
install-binPROGRAMS install-data install-data-am install-exec \
install-exec-am install-info install-info-am install-man \
install-binPROGRAMS install-data install-data-am \
install-data-hook install-dist_docDATA \
install-dist_exampleconfDATA install-exec install-exec-am \
install-exec-hook install-info install-info-am install-man \
install-nobase_dist_ppscriptsSCRIPTS \
install-nobase_dist_webuiDATA install-sbinSCRIPTS \
install-strip installcheck installcheck-am installdirs \
maintainer-clean maintainer-clean-generic mostlyclean \
mostlyclean-compile mostlyclean-generic pdf pdf-am ps ps-am \
tags uninstall uninstall-am uninstall-binPROGRAMS \
uninstall-info-am
uninstall-dist_docDATA uninstall-dist_exampleconfDATA \
uninstall-info-am uninstall-nobase_dist_ppscriptsSCRIPTS \
uninstall-nobase_dist_webuiDATA uninstall-sbinSCRIPTS
# Note about "sed":
# We need to make some changes in installed files.
# On Linux "sed" has option "-i" for in-place-edit. Unfortunateley the BSD version of "sed"
# has incompatible syntax. To solve the problem we perform in-place-edit in three steps:
# 1) copy the original file to original.temp (delete existing original.temp, if any);
# 2) sed < original.temp > original
# 3) delete original.temp
# These steps ensure that the output file has the same permissions as the original file.
# Configure installed script
install-exec-hook:
rm -f "$(DESTDIR)$(sbindir)/nzbgetd.temp"
cp "$(DESTDIR)$(sbindir)/nzbgetd" "$(DESTDIR)$(sbindir)/nzbgetd.temp"
sed 's?/usr/local/bin?$(bindir)?' < "$(DESTDIR)$(sbindir)/nzbgetd.temp" > "$(DESTDIR)$(sbindir)/nzbgetd"
rm "$(DESTDIR)$(sbindir)/nzbgetd.temp"
# Prepare example configuration file
install-data-hook:
rm -f "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
cp "$(DESTDIR)$(exampleconfdir)/nzbget.conf" "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:^ConfigTemplate=:ConfigTemplate=$(exampleconfdir)/nzbget.conf:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:configuration file (typically installed:configuration file (installed:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:/usr/local/share/nzbget/nzbget.conf):$(exampleconfdir)/nzbget.conf):' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
sed 's:^WebDir=:WebDir=$(webuidir)/webui:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
sed 's:typically installed to /usr/local/share/nzbget/ppscripts:installed to $(ppscriptsdir)/ppscripts:' < "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp" > "$(DESTDIR)$(exampleconfdir)/nzbget.conf"
rm "$(DESTDIR)$(exampleconfdir)/nzbget.conf.temp"
# Install configuration files into /etc
# (only if they do not exist there, to prevent overwriting them during an update)
install-conf:
if test ! -f "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; then \
$(mkinstalldirs) "$(DESTDIR)$(sysconfdir)" ; \
cp "$(DESTDIR)$(exampleconfdir)/nzbget.conf" "$(DESTDIR)$(sysconfdir)/nzbget.conf" ; \
fi
uninstall-conf:
rm -f "$(DESTDIR)$(sysconfdir)/nzbget.conf"
# Determining subversion revision:
# 1) If the directory ".svn" exists we take the revision from it using the program svnversion (part of the subversion package)
# The file is recreated only if the revision number has changed.
@@ -659,6 +933,11 @@ svn_version.cpp: FORCE
FORCE:
clean-bak: rm *~
# Fix permissions
dist-hook:
chmod -x $(distdir)/*.cpp $(distdir)/*.h
find $(distdir)/webui -type f -print -exec chmod -x {} \;
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -27,7 +27,7 @@
#ifndef MESSAGEBASE_H
#define MESSAGEBASE_H
static const int32_t NZBMESSAGE_SIGNATURE = 0x6E7A6208; // = "nzb8" (protocol version)
static const int32_t NZBMESSAGE_SIGNATURE = 0x6E7A621B; // = "nzb-XX" (protocol version)
static const int NZBREQUESTFILENAMESIZE = 512;
static const int NZBREQUESTPASSWORDSIZE = 32;
@@ -56,11 +56,14 @@ enum eRemoteRequest
eRemoteRequestEditQueue,
eRemoteRequestLog,
eRemoteRequestShutdown,
eRemoteRequestReload,
eRemoteRequestVersion,
eRemoteRequestPostQueue,
eRemoteRequestWriteLog,
eRemoteRequestScan,
eRemoteRequestHistory
eRemoteRequestHistory,
eRemoteRequestDownloadUrl,
eRemoteRequestUrlQueue
};
// Possible values for field "m_iAction" of struct "SNZBEditQueueRequest":
@@ -76,24 +79,43 @@ enum eRemoteEditAction
eRemoteEditActionFileDelete, // delete files
eRemoteEditActionFilePauseAllPars, // pause only (all) pars (does not affect other files)
eRemoteEditActionFilePauseExtraPars, // pause only (almost all) pars, except main par-file (does not affect other files)
eRemoteEditActionFileSetPriority, // set priority for files
eRemoteEditActionFileReorder, // (not supported)
eRemoteEditActionFileSplit, // split - create new group from selected files
eRemoteEditActionGroupMoveOffset, // move group to m_iOffset relative to the current position in download-queue
eRemoteEditActionGroupMoveTop, // move group to the top of download-queue
eRemoteEditActionGroupMoveBottom, // move group to the bottom of download-queue
eRemoteEditActionGroupPause, // pause group
eRemoteEditActionGroupResume, // resume (unpause) group
eRemoteEditActionGroupDelete, // delete group
eRemoteEditActionGroupDupeDelete, // delete group
eRemoteEditActionGroupFinalDelete, // delete group
eRemoteEditActionGroupPauseAllPars, // pause only (all) pars (does not affect other files) in group
eRemoteEditActionGroupPauseExtraPars, // pause only (almost all) pars in group, except main par-file (does not affect other files)
eRemoteEditActionGroupSetPriority, // set priority for groups
eRemoteEditActionGroupSetCategory, // set or change category for a group
eRemoteEditActionGroupMerge, // merge group
eRemoteEditActionGroupSetParameter, // set post-process parameter for group
eRemoteEditActionGroupSetName, // set group name (rename group)
eRemoteEditActionGroupSetDupeKey, // (reserved)
eRemoteEditActionGroupSetDupeScore, // (reserved)
eRemoteEditActionGroupSetDupeMode, // (reserved)
eRemoteEditActionPostMoveOffset = 51, // move post-job to m_iOffset relative to the current position in post-queue
eRemoteEditActionPostMoveTop, // move post-job to the top of post-queue
eRemoteEditActionPostMoveBottom, // move post-job to the bottom of post-queue
eRemoteEditActionPostDelete, // delete post-job
eRemoteEditActionHistoryDelete, // delete history-item
eRemoteEditActionHistoryDelete, // hide history-item
eRemoteEditActionHistoryFinalDelete, // delete history-item
eRemoteEditActionHistoryReturn, // move history-item back to download queue
eRemoteEditActionHistoryProcess // move history-item back to download queue and start postprocessing
eRemoteEditActionHistoryProcess, // move history-item back to download queue and start postprocessing
eRemoteEditActionHistoryRedownload, // move history-item back to download queue for redownload
eRemoteEditActionHistorySetParameter, // set post-process parameter for history-item
eRemoteEditActionHistorySetDupeKey, // (reserved)
eRemoteEditActionHistorySetDupeScore, // (reserved)
eRemoteEditActionHistorySetDupeMode, // (reserved)
eRemoteEditActionHistorySetDupeBackup, // (reserved)
eRemoteEditActionHistoryMarkBad, // mark history-item as bad (and download other duplicate)
eRemoteEditActionHistoryMarkGood // mark history-item as good (and push it into dup-history)
};
// Possible values for field "m_iAction" of struct "SNZBPauseUnpauseRequest":
@@ -105,13 +127,22 @@ enum eRemotePauseUnpauseAction
eRemotePauseUnpauseActionScan // pause/unpause scan of incoming nzb-directory
};
// Possible values for field "m_iMatchMode" of struct "SNZBEditQueueRequest":
enum eRemoteMatchMode
{
eRemoteMatchModeID = 1, // ID
eRemoteMatchModeName, // Name
eRemoteMatchModeRegEx, // RegEx
};
// The basic SNZBRequestBase struct, used in all requests
struct SNZBRequestBase
{
int32_t m_iSignature; // Signature must be NZBMESSAGE_SIGNATURE in integer-value
int32_t m_iStructSize; // Size of the entire struct
int32_t m_iType; // Message type, see enum in NZBMessageRequest-namespace
char m_szPassword[NZBREQUESTPASSWORDSIZE]; // Password needs to be in every request
char m_szUsername[NZBREQUESTPASSWORDSIZE]; // User name
char m_szPassword[NZBREQUESTPASSWORDSIZE]; // Password
};
// The basic SNZBResponseBase struct, used in all responses
@@ -126,8 +157,10 @@ struct SNZBDownloadRequest
{
SNZBRequestBase m_MessageBase; // Must be the first in the struct
char m_szFilename[NZBREQUESTFILENAMESIZE]; // Name of nzb-file, may contain full path (local path on client) or only filename
char m_szCategory[NZBREQUESTFILENAMESIZE]; // Category, be empty
char m_szCategory[NZBREQUESTFILENAMESIZE]; // Category, can be empty
int32_t m_bAddFirst; // 1 - add file to the top of download queue
int32_t m_bAddPaused; // 1 - pause added files
int32_t m_iPriority; // Priority for files (0 - default)
int32_t m_iTrailingDataLength; // Length of nzb-file in bytes
//char m_szContent[m_iTrailingDataLength]; // variable sized
};
@@ -147,6 +180,9 @@ struct SNZBListRequest
SNZBRequestBase m_MessageBase; // Must be the first in the struct
int32_t m_bFileList; // 1 - return file list
int32_t m_bServerState; // 1 - return server state
int32_t m_iMatchMode; // File/Group match mode, see enum eRemoteMatchMode (only values eRemoteMatchModeID (no filter) and eRemoteMatchModeRegEx are allowed)
int32_t m_bMatchGroup; // 0 - match files; 1 - match nzbs (when m_iMatchMode == eRemoteMatchModeRegEx)
char m_szPattern[NZBREQUESTFILENAMESIZE]; // RegEx Pattern (when m_iMatchMode == eRemoteMatchModeRegEx)
};
// A list response
@@ -169,6 +205,7 @@ struct SNZBListResponse
int32_t m_iDownloadTimeSec; // Server download time in seconds (up_time - standby_time)
int32_t m_iDownloadedBytesLo; // Amount of data downloaded since server start, Low 32-bits of 64-bit value
int32_t m_iDownloadedBytesHi; // Amount of data downloaded since server start, High 32-bits of 64-bit value
int32_t m_bRegExValid; // 0 - error in RegEx-pattern, 1 - RegEx-pattern is valid (only when Request has eRemoteMatchModeRegEx)
int32_t m_iNrTrailingNZBEntries; // Number of List-NZB-entries, following to this structure
int32_t m_iNrTrailingPPPEntries; // Number of List-PPP-entries, following to this structure
int32_t m_iNrTrailingFileEntries; // Number of List-File-entries, following to this structure
@@ -183,11 +220,14 @@ struct SNZBListResponseNZBEntry
{
int32_t m_iSizeLo; // Size of all files in bytes, Low 32-bits of 64-bit value
int32_t m_iSizeHi; // Size of all files in bytes, High 32-bits of 64-bit value
int32_t m_bMatch; // 1 - group matches the pattern (only when Request has eRemoteMatchModeRegEx)
int32_t m_iFilenameLen; // Length of Filename-string (m_szFilename), following to this record
int32_t m_iNameLen; // Length of Name-string (m_szName), following to this record
int32_t m_iDestDirLen; // Length of DestDir-string (m_szDestDir), following to this record
int32_t m_iCategoryLen; // Length of Category-string (m_szCategory), following to this record
int32_t m_iQueuedFilenameLen; // Length of queued file name (m_szQueuedFilename), following to this record
//char m_szFilename[m_iFilenameLen]; // variable sized
//char m_szName[m_iNameLen]; // variable sized
//char m_szDestDir[m_iDestDirLen]; // variable sized
//char m_szCategory[m_iCategoryLen]; // variable sized
//char m_szQueuedFilename[m_iQueuedFilenameLen]; // variable sized
@@ -214,6 +254,9 @@ struct SNZBListResponseFileEntry
int32_t m_iRemainingSizeHi; // Remaining size in bytes, High 32-bits of 64-bit value
int32_t m_bPaused; // 1 - file is paused
int32_t m_bFilenameConfirmed; // 1 - Filename confirmed (read from article body), 0 - Filename parsed from subject (can be changed after reading of article)
int32_t m_iPriority; // Download priority
int32_t m_iActiveDownloads; // Number of active downloads for this file
int32_t m_bMatch; // 1 - file matches the pattern (only when Request has eRemoteMatchModeRegEx)
int32_t m_iSubjectLen; // Length of Subject-string (m_szSubject), following to this record
int32_t m_iFilenameLen; // Length of Filename-string (m_szFilename), following to this record
//char m_szSubject[m_iSubjectLen]; // variable sized
@@ -281,7 +324,7 @@ struct SNZBSetDownloadRateResponse
//char m_szText[m_iTrailingDataLength]; // variable sized
};
// An edit queue request
// edit queue request
struct SNZBEditQueueRequest
{
SNZBRequestBase m_MessageBase; // Must be the first in the struct
@@ -289,11 +332,15 @@ struct SNZBEditQueueRequest
int32_t m_iOffset; // Offset to move (for m_iAction = 0)
int32_t m_bSmartOrder; // For Move-Actions: 0 - execute action for each ID in order they are placed in array;
// 1 - smart execute to ensure that the relative order of all affected IDs are not changed.
int32_t m_iNrTrailingEntries; // Number of ID-entries, following to this structure
int32_t m_iTextLen; // Length of Text-string (m_szText), following to this record
int32_t m_iTrailingDataLength; // Length of Text-string and all ID-entries, following to this structure
//char m_szText[m_iTextLen]; // variable sized
//int32_t m_iIDs[m_iNrTrailingEntries]; // variable sized array of IDs. For File-Actions - ID of file, for Group-Actions - ID of any file belonging to group
int32_t m_iMatchMode; // File/Group match mode, see enum eRemoteMatchMode
int32_t m_iNrTrailingIDEntries; // Number of ID-entries, following to this structure
int32_t m_iNrTrailingNameEntries; // Number of Name-entries, following to this structure
int32_t m_iTrailingNameEntriesLen; // Length of all Name-entries, following to this structure
int32_t m_iTextLen; // Length of Text-string (m_szText), following to this record
int32_t m_iTrailingDataLength; // Length of Text-string and all ID-entries, following to this structure
//char m_szText[m_iTextLen]; // variable sized
//int32_t m_iIDs[m_iNrTrailingIDEntries]; // variable sized array of IDs. For File-Actions - ID of file, for Group-Actions - ID of any file belonging to group
//char* m_szNames[m_iNrTrailingNameEntries]; // variable sized array of strings. For File-Actions - name of file incl. nzb-name as path, for Group-Actions - name of group
};
// An edit queue response
@@ -335,6 +382,21 @@ struct SNZBShutdownResponse
//char m_szText[m_iTrailingDataLength]; // variable sized
};
// Reload server request
struct SNZBReloadRequest
{
SNZBRequestBase m_MessageBase; // Must be the first in the struct
};
// Reload server response
struct SNZBReloadResponse
{
SNZBResponseBase m_MessageBase; // Must be the first in the struct
int32_t m_bSuccess; // 0 - command failed, 1 - command executed successfully
int32_t m_iTrailingDataLength; // Length of Text-string (m_szText), following to this record
//char m_szText[m_iTrailingDataLength]; // variable sized
};
// Server version request
struct SNZBVersionRequest
{
@@ -376,12 +438,10 @@ struct SNZBPostQueueResponseEntry
int32_t m_iTotalTimeSec; // Number of seconds this post-job is being processed (after it first changed the state from QUEUED).
int32_t m_iStageTimeSec; // Number of seconds the current stage is being processed.
int32_t m_iNZBFilenameLen; // Length of NZBFileName-string (m_szNZBFilename), following to this record
int32_t m_iParFilename; // Length of ParFilename-string (m_szParFilename), following to this record
int32_t m_iInfoNameLen; // Length of Filename-string (m_szFilename), following to this record
int32_t m_iDestDirLen; // Length of DestDir-string (m_szDestDir), following to this record
int32_t m_iProgressLabelLen; // Length of ProgressLabel-string (m_szProgressLabel), following to this record
//char m_szNZBFilename[m_iNZBFilenameLen]; // variable sized, may contain full path (local path on client) or only filename
//char m_szParFilename[m_iParFilename]; // variable sized
//char m_szInfoName[m_iInfoNameLen]; // variable sized
//char m_szDestDir[m_iDestDirLen]; // variable sized
//char m_szProgressLabel[m_iProgressLabelLen]; // variable sized
@@ -409,6 +469,7 @@ struct SNZBWriteLogResponse
struct SNZBScanRequest
{
SNZBRequestBase m_MessageBase; // Must be the first in the struct
int32_t m_bSyncMode; // 0 - asynchronous Scan (the command returns immediately), 1 - synchronous Scan (the command returns when the scan is completed)
};
// Scan nzb directory response
@@ -426,34 +487,80 @@ struct SNZBHistoryRequest
SNZBRequestBase m_MessageBase; // Must be the first in the struct
};
// A history response
// history response
struct SNZBHistoryResponse
{
SNZBResponseBase m_MessageBase; // Must be the first in the struct
int32_t m_iEntrySize; // Size of the SNZBHistoryResponseEntry-struct
int32_t m_iNrTrailingEntries; // Number of History-entries, following to this structure
int32_t m_iTrailingDataLength; // Length of all History-entries, following to this structure
// SNZBHistoryResponseEntry m_NZBEntries[m_iNrTrailingNZBEntries] // variable sized
// SNZBHistoryResponseEntry m_Entries[m_iNrTrailingEntries] // variable sized
};
// A list response nzb entry
// history entry
struct SNZBHistoryResponseEntry
{
int32_t m_iID; // NZBID
int32_t m_iID; // History-ID
int32_t m_iKind; // Kind of Item: 1 - Collection (NZB), 2 - URL
int32_t m_tTime; // When the item was added to history: time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds.
int32_t m_iNicenameLen; // Length of Nicename-string (m_szNicename), following to this record
// for Collection items (m_iKind = 1)
int32_t m_iSizeLo; // Size of all files in bytes, Low 32-bits of 64-bit value
int32_t m_iSizeHi; // Size of all files in bytes, High 32-bits of 64-bit value
int32_t m_iFileCount; // Initial number of files included in NZB-file
int32_t m_iParStatus; // See NZBInfo::EParStatus
int32_t m_iScriptStatus; // See NZBInfo::EScriptStatus
int32_t m_iFilenameLen; // Length of Filename-string (m_szFilename), following to this record
int32_t m_iDestDirLen; // Length of DestDir-string (m_szDestDir), following to this record
int32_t m_iCategoryLen; // Length of Category-string (m_szCategory), following to this record
int32_t m_iQueuedFilenameLen; // Length of queued file name (m_szQueuedFilename), following to this record
//char m_szFilename[m_iFilenameLen]; // variable sized
//char m_szDestDir[m_iDestDirLen]; // variable sized
//char m_szCategory[m_iCategoryLen]; // variable sized
//char m_szQueuedFilename[m_iQueuedFilenameLen]; // variable sized
// for URL items (m_iKind = 2)
int32_t m_iUrlStatus; // See UrlInfo::EStatus
// trailing data
//char m_szNicename[m_iNicenameLen]; // variable sized
};
// download url request
struct SNZBDownloadUrlRequest
{
SNZBRequestBase m_MessageBase; // Must be the first in the struct
char m_szURL[NZBREQUESTFILENAMESIZE]; // url to nzb-file
char m_szNZBFilename[NZBREQUESTFILENAMESIZE]; // Name of nzb-file. Can be empty; in that case the filename is taken from the URL download response
char m_szCategory[NZBREQUESTFILENAMESIZE]; // Category, can be empty
int32_t m_bAddFirst; // 1 - add url to the top of download queue
int32_t m_bAddPaused; // 1 - pause added files
int32_t m_iPriority; // Priority for files (0 - default)
};
// download url response
struct SNZBDownloadUrlResponse
{
SNZBResponseBase m_MessageBase; // Must be the first in the struct
int32_t m_bSuccess; // 0 - command failed, 1 - command executed successfully
int32_t m_iTrailingDataLength; // Length of Text-string (m_szText), following to this record
//char m_szText[m_iTrailingDataLength]; // variable sized
};
// UrlQueue request
struct SNZBUrlQueueRequest
{
SNZBRequestBase m_MessageBase; // Must be the first in the struct
};
// UrlQueue response
struct SNZBUrlQueueResponse
{
SNZBResponseBase m_MessageBase; // Must be the first in the struct
int32_t m_iEntrySize; // Size of the SNZBUrlQueueResponseEntry-struct
int32_t m_iNrTrailingEntries; // Number of UrlQueue-entries, following to this structure
int32_t m_iTrailingDataLength; // Length of all UrlQueue-entries, following to this structure
// SNZBUrlQueueResponseEntry m_Entries[m_iNrTrailingEntries] // variable sized
};
// UrlQueue response entry
struct SNZBUrlQueueResponseEntry
{
int32_t m_iID; // ID of Url-entry
int32_t m_iURLLen; // Length of URL-string (m_szURL), following to this record
int32_t m_iNZBFilenameLen; // Length of NZBFilename-string (m_szNZBFilename), following to this record
//char m_szURL[m_iURLLen]; // variable sized
//char m_szNZBFilename[m_iNZBFilenameLen]; // variable sized
};
#endif
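The structs above describe the fixed-layout binary messages of the remote-control protocol; the newly added SNZBDownloadUrlRequest is what a client sends for the new eRemoteRequestDownloadUrl message type. Below is a minimal sketch of how such a request could be filled in, assuming the header is included as MessageBase.h; the helper function and the use of network byte order for the integer fields are our assumptions, not confirmed by this header.
// Sketch: populate the new SNZBDownloadUrlRequest declared above.
// Assumptions: integer fields go over the wire in network byte order (htonl)
// and the caller supplies an already-connected socket descriptor.
#include <cstring>
#include <sys/socket.h>
#include <arpa/inet.h>
#include "MessageBase.h"

static bool SendDownloadUrlRequest(int iSocket, const char* szUrl,
	const char* szCategory, const char* szUser, const char* szPassword)
{
	SNZBDownloadUrlRequest DownloadUrlRequest;
	memset(&DownloadUrlRequest, 0, sizeof(DownloadUrlRequest));

	// Common header required in every request
	DownloadUrlRequest.m_MessageBase.m_iSignature = htonl(NZBMESSAGE_SIGNATURE);
	DownloadUrlRequest.m_MessageBase.m_iStructSize = htonl(sizeof(DownloadUrlRequest));
	DownloadUrlRequest.m_MessageBase.m_iType = htonl(eRemoteRequestDownloadUrl);
	strncpy(DownloadUrlRequest.m_MessageBase.m_szUsername, szUser, NZBREQUESTPASSWORDSIZE - 1);
	strncpy(DownloadUrlRequest.m_MessageBase.m_szPassword, szPassword, NZBREQUESTPASSWORDSIZE - 1);

	// Request-specific payload; m_szNZBFilename stays empty, so the filename
	// is taken from the URL download response (see the field comment above)
	strncpy(DownloadUrlRequest.m_szURL, szUrl, NZBREQUESTFILENAMESIZE - 1);
	strncpy(DownloadUrlRequest.m_szCategory, szCategory, NZBREQUESTFILENAMESIZE - 1);
	DownloadUrlRequest.m_bAddFirst = htonl(0);	// append to the end of the queue
	DownloadUrlRequest.m_bAddPaused = htonl(0);	// do not pause the added entry
	DownloadUrlRequest.m_iPriority = htonl(0);	// default priority

	return send(iSocket, &DownloadUrlRequest, sizeof(DownloadUrlRequest), 0) ==
		(ssize_t)sizeof(DownloadUrlRequest);
}
The server answers with the SNZBDownloadUrlResponse struct shown above: a success flag plus a variable-length text message.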


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2011 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -202,14 +202,9 @@ NCursesFrontend::NCursesFrontend()
NCursesFrontend::~NCursesFrontend()
{
#ifdef WIN32
if (m_pScreenBuffer)
{
free(m_pScreenBuffer);
}
if (m_pOldScreenBuffer)
{
free(m_pOldScreenBuffer);
}
free(m_pScreenBuffer);
free(m_pOldScreenBuffer);
m_ColorAttr.clear();
HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
@@ -550,6 +545,8 @@ int NCursesFrontend::PrintMessage(Message* Msg, int iRow, int iMaxLines)
szText = (char*)malloc(iLen);
time_t rawtime = Msg->GetTime();
rawtime += g_pOptions->GetTimeCorrection();
char szTime[50];
#ifdef HAVE_CTIME_R_3
ctime_r(&rawtime, szTime, 50);
@@ -613,10 +610,10 @@ void NCursesFrontend::PrintStatus()
char timeString[100];
timeString[0] = '\0';
float fCurrentDownloadSpeed = m_bStandBy ? 0 : m_fCurrentDownloadSpeed;
if (fCurrentDownloadSpeed > 0.0 && !(m_bPauseDownload || m_bPauseDownload2))
int iCurrentDownloadSpeed = m_bStandBy ? 0 : m_iCurrentDownloadSpeed;
if (iCurrentDownloadSpeed > 0 && !(m_bPauseDownload || m_bPauseDownload2))
{
long long remain_sec = (long long)(m_lRemainingSize / (fCurrentDownloadSpeed * 1024));
long long remain_sec = (long long)(m_lRemainingSize / iCurrentDownloadSpeed);
int h = (int)(remain_sec / 3600);
int m = (int)((remain_sec % 3600) / 60);
int s = (int)(remain_sec % 60);
@@ -624,9 +621,9 @@ void NCursesFrontend::PrintStatus()
}
char szDownloadLimit[128];
if (m_fDownloadLimit > 0.0f)
if (m_iDownloadLimit > 0)
{
sprintf(szDownloadLimit, ", Limit %.0f KB/s", m_fDownloadLimit);
sprintf(szDownloadLimit, ", Limit %.0f KB/s", (float)m_iDownloadLimit / 1024.0);
}
else
{
@@ -646,7 +643,7 @@ void NCursesFrontend::PrintStatus()
float fAverageSpeed = (float)(Util::Int64ToFloat(m_iDnTimeSec > 0 ? m_iAllBytes / m_iDnTimeSec : 0) / 1024.0);
snprintf(tmp, MAX_SCREEN_WIDTH, " %d threads, %.*f KB/s, %.2f MB remaining%s%s%s%s%s, Avg. %.*f KB/s",
m_iThreadCount, (fCurrentDownloadSpeed >= 10 ? 0 : 1), fCurrentDownloadSpeed,
m_iThreadCount, (iCurrentDownloadSpeed >= 10*1024 ? 0 : 1), (float)iCurrentDownloadSpeed / 1024.0,
(float)(Util::Int64ToFloat(m_lRemainingSize) / 1024.0 / 1024.0), timeString, szPostStatus,
m_bPauseDownload || m_bPauseDownload2 ? (m_bStandBy ? ", Paused" : ", Pausing") : "",
m_bPauseDownload || m_bPauseDownload2 ?
@@ -723,11 +720,8 @@ void NCursesFrontend::PrintKeyInputBar()
void NCursesFrontend::SetHint(const char* szHint)
{
if (m_szHint)
{
free(m_szHint);
m_szHint = NULL;
}
free(m_szHint);
m_szHint = NULL;
if (szHint)
{
m_szHint = strdup(szHint);
@@ -792,8 +786,8 @@ void NCursesFrontend::PrintFileQueue()
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), " %sFiles for downloading - %i / %i files in queue - %s / %s",
m_bUseColor ? "" : "*** ", pDownloadQueue->GetFileQueue()->size(),
pDownloadQueue->GetFileQueue()->size() - iPausedFiles, szRemaining, szUnpaused);
m_bUseColor ? "" : "*** ", (int)pDownloadQueue->GetFileQueue()->size(),
(int)pDownloadQueue->GetFileQueue()->size() - iPausedFiles, szRemaining, szUnpaused);
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
PrintTopHeader(szBuffer, m_iQueueWinTop, true);
}
@@ -819,6 +813,19 @@ void NCursesFrontend::PrintFilename(FileInfo * pFileInfo, int iRow, bool bSelect
color = NCURSES_COLORPAIR_TEXT;
}
const char* szDownloading = "";
if (pFileInfo->GetActiveDownloads() > 0)
{
szDownloading = " *";
}
char szPriority[100];
szPriority[0] = '\0';
if (pFileInfo->GetPriority() != 0)
{
sprintf(szPriority, " [%+i]", pFileInfo->GetPriority());
}
char szCompleted[20];
szCompleted[0] = '\0';
if (pFileInfo->GetRemainingSize() < pFileInfo->GetSize())
@@ -829,7 +836,7 @@ void NCursesFrontend::PrintFilename(FileInfo * pFileInfo, int iRow, bool bSelect
char szNZBNiceName[1024];
if (m_bShowNZBname)
{
pFileInfo->GetNZBInfo()->GetNiceNZBName(szNZBNiceName, 1023);
strncpy(szNZBNiceName, pFileInfo->GetNZBInfo()->GetName(), 1023);
int len = strlen(szNZBNiceName);
szNZBNiceName[len] = PATH_SEPARATOR;
szNZBNiceName[len + 1] = '\0';
@@ -840,8 +847,9 @@ void NCursesFrontend::PrintFilename(FileInfo * pFileInfo, int iRow, bool bSelect
}
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, MAX_SCREEN_WIDTH, "%s%i%s %s%s (%.2f MB%s)%s", Brace1, pFileInfo->GetID(),
Brace2, szNZBNiceName, pFileInfo->GetFilename(), (float)(Util::Int64ToFloat(pFileInfo->GetSize()) / 1024.0 / 1024.0),
snprintf(szBuffer, MAX_SCREEN_WIDTH, "%s%i%s%s%s %s%s (%.2f MB%s)%s", Brace1, pFileInfo->GetID(),
Brace2, szPriority, szDownloading, szNZBNiceName, pFileInfo->GetFilename(),
(float)(Util::Int64ToFloat(pFileInfo->GetSize()) / 1024.0 / 1024.0),
szCompleted, pFileInfo->GetPaused() ? " (paused)" : "");
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
@@ -912,6 +920,7 @@ void NCursesFrontend::PrintGroupQueue()
{
int iLineNr = m_iQueueWinTop;
LockQueue();
GroupQueue* pGroupQueue = &m_groupQueue;
if (pGroupQueue->empty())
{
@@ -962,10 +971,11 @@ void NCursesFrontend::PrintGroupQueue()
char szBuffer[MAX_SCREEN_WIDTH];
snprintf(szBuffer, sizeof(szBuffer), " %sNZBs for downloading - %i NZBs in queue - %s / %s",
m_bUseColor ? "" : "*** ", pGroupQueue->size(), szRemaining, szUnpaused);
m_bUseColor ? "" : "*** ", (int)pGroupQueue->size(), szRemaining, szUnpaused);
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
PrintTopHeader(szBuffer, m_iQueueWinTop, false);
}
UnlockQueue();
}
void NCursesFrontend::ResetColWidths()
@@ -977,21 +987,23 @@ void NCursesFrontend::ResetColWidths()
void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSelected, bool bCalcColWidth)
{
int color = 0;
const char* Brace1 = "[";
const char* Brace2 = "]";
int color = NCURSES_COLORPAIR_TEXT;
char chBrace1 = '[';
char chBrace2 = ']';
if (m_eInputMode == eEditQueue && bSelected)
{
color = NCURSES_COLORPAIR_TEXTHIGHL;
if (!m_bUseColor)
{
Brace1 = "<";
Brace2 = ">";
chBrace1 = '<';
chBrace2 = '>';
}
}
else
const char* szDownloading = "";
if (pGroupInfo->GetActiveDownloads() > 0)
{
color = NCURSES_COLORPAIR_TEXT;
szDownloading = " *";
}
long long lUnpausedRemainingSize = pGroupInfo->GetRemainingSize() - pGroupInfo->GetPausedSize();
@@ -999,8 +1011,19 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
char szRemaining[20];
Util::FormatFileSize(szRemaining, sizeof(szRemaining), lUnpausedRemainingSize);
char szNZBNiceName[1024];
pGroupInfo->GetNZBInfo()->GetNiceNZBName(szNZBNiceName, 1023);
char szPriority[100];
szPriority[0] = '\0';
if (pGroupInfo->GetMinPriority() != 0 || pGroupInfo->GetMaxPriority() != 0)
{
if (pGroupInfo->GetMinPriority() == pGroupInfo->GetMaxPriority())
{
sprintf(szPriority, " [%+i]", pGroupInfo->GetMinPriority());
}
else
{
sprintf(szPriority, " [%+i..%+i]", pGroupInfo->GetMinPriority(), pGroupInfo->GetMaxPriority());
}
}
char szBuffer[MAX_SCREEN_WIDTH];
@@ -1030,20 +1053,21 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
Util::FormatFileSize(szTotal, sizeof(szTotal), pGroupInfo->GetNZBInfo()->GetSize());
char szNameWithIds[1024];
snprintf(szNameWithIds, 1024, "%s%i-%i%s %s", Brace1, pGroupInfo->GetFirstID(), pGroupInfo->GetLastID(), Brace2, szNZBNiceName);
snprintf(szNameWithIds, 1024, "%c%i-%i%c%s%s %s", chBrace1, pGroupInfo->GetFirstID(), pGroupInfo->GetLastID(), chBrace2,
szPriority, szDownloading, pGroupInfo->GetNZBInfo()->GetName());
szNameWithIds[iNameLen] = '\0';
char szTime[100];
szTime[0] = '\0';
float fCurrentDownloadSpeed = m_bStandBy ? 0 : m_fCurrentDownloadSpeed;
int iCurrentDownloadSpeed = m_bStandBy ? 0 : m_iCurrentDownloadSpeed;
if (pGroupInfo->GetPausedSize() > 0 && lUnpausedRemainingSize == 0)
{
snprintf(szTime, 100, "[paused]");
Util::FormatFileSize(szRemaining, sizeof(szRemaining), pGroupInfo->GetRemainingSize());
}
else if (fCurrentDownloadSpeed > 0.0 && !(m_bPauseDownload || m_bPauseDownload2))
else if (iCurrentDownloadSpeed > 0 && !(m_bPauseDownload || m_bPauseDownload2))
{
long long remain_sec = (long long)(lUnpausedRemainingSize / (fCurrentDownloadSpeed * 1024));
long long remain_sec = (long long)(lUnpausedRemainingSize / iCurrentDownloadSpeed);
int h = (int)(remain_sec / 3600);
int m = (int)((remain_sec % 3600) / 60);
int s = (int)(remain_sec % 60);
@@ -1075,7 +1099,8 @@ void NCursesFrontend::PrintGroupname(GroupInfo * pGroupInfo, int iRow, bool bSel
}
else
{
snprintf(szBuffer, MAX_SCREEN_WIDTH, "%s%i-%i%s %s", Brace1, pGroupInfo->GetFirstID(), pGroupInfo->GetLastID(), Brace2, szNZBNiceName);
snprintf(szBuffer, MAX_SCREEN_WIDTH, "%c%i-%i%c%s %s", chBrace1, pGroupInfo->GetFirstID(),
pGroupInfo->GetLastID(), chBrace2, szDownloading, pGroupInfo->GetNZBInfo()->GetName());
}
szBuffer[MAX_SCREEN_WIDTH - 1] = '\0';
@@ -1097,11 +1122,7 @@ void NCursesFrontend::PrepareGroupQueue()
void NCursesFrontend::ClearGroupQueue()
{
for (GroupQueue::iterator it = m_groupQueue.begin(); it != m_groupQueue.end(); it++)
{
delete *it;
}
m_groupQueue.clear();
m_groupQueue.Clear();
}
bool NCursesFrontend::EditQueue(QueueEditor::EEditAction eAction, int iOffset)
@@ -1417,7 +1438,7 @@ void NCursesFrontend::UpdateInput(int initialKey)
// Enter
else if (iKey == 10 || iKey == 13)
{
ServerSetDownloadRate((float)m_iInputValue);
ServerSetDownloadRate(m_iInputValue * 1024);
m_eInputMode = eNormal;
return;
}
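The frontend changes above replace the floating-point KB/s fields (m_fCurrentDownloadSpeed, m_fDownloadLimit) with integer bytes-per-second counterparts: the remaining-time estimate now divides the remaining byte count directly by the speed, values are converted to KB/s only for display, and the entered rate limit is multiplied by 1024 before being sent to the server. A tiny self-contained illustration of that arithmetic (the sample values and variable names here are ours):
#include <stdio.h>

int main()
{
	long long lRemainingSize = 700LL * 1024 * 1024;	// 700 MB remaining, in bytes
	int iCurrentDownloadSpeed = 2500 * 1024;	// 2500 KB/s, stored as bytes per second

	long long remain_sec = lRemainingSize / iCurrentDownloadSpeed;	// ETA in seconds
	int h = (int)(remain_sec / 3600);
	int m = (int)((remain_sec % 3600) / 60);
	int s = (int)(remain_sec % 60);

	printf("Speed %.1f KB/s, ETA %i:%02i:%02i\n",
		(float)iCurrentDownloadSpeed / 1024.0, h, m, s);
	return 0;
}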


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -34,7 +34,8 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <stdio.h>
#include <ctype.h>
#include "nzbget.h"
#include "Log.h"
@@ -44,20 +45,18 @@
static const int CONNECTION_LINEBUFFER_SIZE = 1024*10;
NNTPConnection::NNTPConnection(NewsServer* server) : Connection(server)
NNTPConnection::NNTPConnection(NewsServer* pNewsServer) : Connection(pNewsServer->GetHost(), pNewsServer->GetPort(), pNewsServer->GetTLS())
{
m_pNewsServer = pNewsServer;
m_szActiveGroup = NULL;
m_szLineBuf = (char*)malloc(CONNECTION_LINEBUFFER_SIZE);
m_bAuthError = false;
SetCipher(pNewsServer->GetCipher());
}
NNTPConnection::~NNTPConnection()
{
if (m_szActiveGroup)
{
free(m_szActiveGroup);
m_szActiveGroup = NULL;
}
free(m_szActiveGroup);
free(m_szLineBuf);
}
@@ -81,19 +80,16 @@ const char* NNTPConnection::Request(const char* req)
if (!strncmp(answer, "480", 3))
{
debug("%s requested authorization", m_pNetAddress->GetHost());
debug("%s requested authorization", GetHost());
//authentication required!
if (!Authenticate())
{
m_bAuthError = true;
return NULL;
}
//try again
WriteLine(req);
answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
return answer;
}
return answer;
@@ -101,13 +97,16 @@ const char* NNTPConnection::Request(const char* req)
bool NNTPConnection::Authenticate()
{
if (!((NewsServer*)m_pNetAddress)->GetUser() ||
!((NewsServer*)m_pNetAddress)->GetPassword())
if (strlen(m_pNewsServer->GetUser()) == 0 || strlen(m_pNewsServer->GetPassword()) == 0)
{
return true;
error("%c%s (%s) requested authorization but username/password are not set in settings",
toupper(m_pNewsServer->GetName()[0]), m_pNewsServer->GetName() + 1, m_pNewsServer->GetHost());
m_bAuthError = true;
return false;
}
return AuthInfoUser();
m_bAuthError = !AuthInfoUser(0);
return !m_bAuthError;
}
bool NNTPConnection::AuthInfoUser(int iRecur)
@@ -118,7 +117,7 @@ bool NNTPConnection::AuthInfoUser(int iRecur)
}
char tmp[1024];
snprintf(tmp, 1024, "AUTHINFO USER %s\r\n", ((NewsServer*)m_pNetAddress)->GetUser());
snprintf(tmp, 1024, "AUTHINFO USER %s\r\n", m_pNewsServer->GetUser());
tmp[1024-1] = '\0';
WriteLine(tmp);
@@ -126,13 +125,13 @@ bool NNTPConnection::AuthInfoUser(int iRecur)
char* answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!answer)
{
ReportError("Authorization for %s failed: Connection closed by remote host.", m_pNetAddress->GetHost(), true, 0);
ReportErrorAnswer("Authorization for server%i (%s) failed: Connection closed by remote host", NULL);
return false;
}
if (!strncmp(answer, "281", 3))
{
debug("Authorization for %s successful", m_pNetAddress->GetHost());
debug("Authorization for %s successful", GetHost());
return true;
}
else if (!strncmp(answer, "381", 3))
@@ -148,7 +147,7 @@ bool NNTPConnection::AuthInfoUser(int iRecur)
if (GetStatus() != csCancelled)
{
ReportErrorAnswer("Authorization for %s failed (Answer: %s)", answer);
ReportErrorAnswer("Authorization for server%i (%s) failed (Answer: %s)", answer);
}
return false;
}
@@ -161,7 +160,7 @@ bool NNTPConnection::AuthInfoPass(int iRecur)
}
char tmp[1024];
snprintf(tmp, 1024, "AUTHINFO PASS %s\r\n", ((NewsServer*)m_pNetAddress)->GetPassword());
snprintf(tmp, 1024, "AUTHINFO PASS %s\r\n", m_pNewsServer->GetPassword());
tmp[1024-1] = '\0';
WriteLine(tmp);
@@ -169,12 +168,12 @@ bool NNTPConnection::AuthInfoPass(int iRecur)
char* answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!answer)
{
ReportError("Authorization for %s failed: Connection closed by remote host.", m_pNetAddress->GetHost(), true, 0);
ReportErrorAnswer("Authorization for server%i (%s) failed: Connection closed by remote host", NULL);
return false;
}
else if (!strncmp(answer, "2", 1))
{
debug("Authorization for %s successful", m_pNetAddress->GetHost());
debug("Authorization for %s successful", GetHost());
return true;
}
else if (!strncmp(answer, "381", 3))
@@ -186,7 +185,7 @@ bool NNTPConnection::AuthInfoPass(int iRecur)
if (GetStatus() != csCancelled)
{
ReportErrorAnswer("Authorization for %s failed (Answer: %s)", answer);
ReportErrorAnswer("Authorization for server%i (%s) failed (Answer: %s)", answer);
}
return false;
}
@@ -205,86 +204,77 @@ const char* NNTPConnection::JoinGroup(const char* grp)
tmp[1024-1] = '\0';
const char* answer = Request(tmp);
if (m_bAuthError)
{
return answer;
}
if (answer && !strncmp(answer, "2", 1))
{
debug("Changed group to %s on %s", grp, GetServer()->GetHost());
if (m_szActiveGroup)
{
free(m_szActiveGroup);
}
debug("Changed group to %s on %s", grp, GetHost());
free(m_szActiveGroup);
m_szActiveGroup = strdup(grp);
}
else
{
debug("Error changing group on %s to %s: %s.",
GetServer()->GetHost(), grp, answer);
debug("Error changing group on %s to %s: %s.", GetHost(), grp, answer);
}
return answer;
}
bool NNTPConnection::DoConnect()
bool NNTPConnection::Connect()
{
debug("Opening connection to %s", GetServer()->GetHost());
bool res = Connection::DoConnect();
if (!res)
debug("Opening connection to %s", GetHost());
if (m_eStatus == csConnected)
{
return res;
return true;
}
#ifndef DISABLE_TLS
if (GetNewsServer()->GetTLS())
if (!Connection::Connect())
{
if (!StartTLS())
{
return false;
}
return false;
}
#endif
char* answer = DoReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
char* answer = ReadLine(m_szLineBuf, CONNECTION_LINEBUFFER_SIZE, NULL);
if (!answer)
{
ReportError("Connection to %s failed: Connection closed by remote host.", m_pNetAddress->GetHost(), true, 0);
ReportErrorAnswer("Connection to server%i (%s) failed: Connection closed by remote host", NULL);
Disconnect();
return false;
}
if (strncmp(answer, "2", 1))
{
ReportErrorAnswer("Connection to %s failed (Answer: %s)", answer);
ReportErrorAnswer("Connection to server%i (%s) failed (Answer: %s)", answer);
Disconnect();
return false;
}
debug("Connection to %s established", GetServer()->GetHost());
if ((strlen(m_pNewsServer->GetUser()) > 0 && strlen(m_pNewsServer->GetPassword()) > 0) &&
!Authenticate())
{
return false;
}
debug("Connection to %s established", GetHost());
return true;
}
bool NNTPConnection::DoDisconnect()
bool NNTPConnection::Disconnect()
{
if (m_eStatus == csConnected)
{
Request("quit\r\n");
if (m_szActiveGroup)
{
free(m_szActiveGroup);
m_szActiveGroup = NULL;
}
free(m_szActiveGroup);
m_szActiveGroup = NULL;
}
return Connection::DoDisconnect();
return Connection::Disconnect();
}
void NNTPConnection::ReportErrorAnswer(const char* szMsgPrefix, const char* szAnswer)
{
char szErrStr[1024];
snprintf(szErrStr, 1024, szMsgPrefix, m_pNetAddress->GetHost(), szAnswer);
snprintf(szErrStr, 1024, szMsgPrefix, m_pNewsServer->GetID(), m_pNewsServer->GetHost(), szAnswer);
szErrStr[1024-1] = '\0';
ReportError(szErrStr, NULL, false, 0);


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -33,27 +33,27 @@
class NNTPConnection : public Connection
{
private:
char* m_szActiveGroup;
char* m_szLineBuf;
bool m_bAuthError;
NewsServer* m_pNewsServer;
char* m_szActiveGroup;
char* m_szLineBuf;
bool m_bAuthError;
virtual bool DoConnect();
virtual bool DoDisconnect();
void Clear();
void ReportErrorAnswer(const char* szMsgPrefix, const char* szAnswer);
void Clear();
void ReportErrorAnswer(const char* szMsgPrefix, const char* szAnswer);
bool Authenticate();
bool AuthInfoUser(int iRecur);
bool AuthInfoPass(int iRecur);
public:
NNTPConnection(NewsServer* server);
virtual ~NNTPConnection();
NewsServer* GetNewsServer() { return(NewsServer*)m_pNetAddress; }
const char* Request(const char* req);
bool Authenticate();
bool AuthInfoUser(int iRecur = 0);
bool AuthInfoPass(int iRecur = 0);
const char* JoinGroup(const char* grp);
bool GetAuthError() { return m_bAuthError; }
NNTPConnection(NewsServer* pNewsServer);
virtual ~NNTPConnection();
virtual bool Connect();
virtual bool Disconnect();
NewsServer* GetNewsServer() { return m_pNewsServer; }
const char* Request(const char* req);
const char* JoinGroup(const char* grp);
bool GetAuthError() { return m_bAuthError; }
};
#endif
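With the refactoring above the protected DoConnect/DoDisconnect overrides are gone: callers use the public Connect/Disconnect pair, and authentication is attempted inside Connect when a username and password are configured for the server. A rough usage sketch against the new interface follows; ProbeGroup and the way the NewsServer object is obtained are assumptions for illustration only.
#include <cstring>
#include "NNTPConnection.h"

// Sketch only: pNewsServer is assumed to be a fully configured server taken
// from the server pool; error handling is reduced to checking the answer code,
// since Connect() already reports connection and authorization failures.
bool ProbeGroup(NewsServer* pNewsServer, const char* szGroup)
{
	NNTPConnection connection(pNewsServer);

	if (!connection.Connect())
	{
		return false;
	}

	const char* szAnswer = connection.JoinGroup(szGroup);
	bool bOK = szAnswer && !strncmp(szAnswer, "2", 1) && !connection.GetAuthError();

	connection.Disconnect();

	return bOK;
}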


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$


@@ -1,8 +1,8 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -34,14 +34,16 @@
#include <string.h>
#include <list>
#include <ctype.h>
#ifdef WIN32
#include <comutil.h>
#import "MSXML.dll" named_guids
#import <msxml.tlb> named_guids
using namespace MSXML;
#else
#include <libxml/parser.h>
#include <libxml/xmlreader.h>
#include <libxml/xmlerror.h>
#include <libxml/entities.h>
#endif
#include "nzbget.h"
@@ -55,34 +57,26 @@ using namespace MSXML;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
#ifndef WIN32
static void libxml_errorhandler(void *ebuf, const char *fmt, ...)
{
va_list argp;
va_start(argp, fmt);
char szErrMsg[1024];
vsnprintf(szErrMsg, sizeof(szErrMsg), fmt, argp);
szErrMsg[1024-1] = '\0';
va_end(argp);
// remove trailing CRLF
for (char* pend = szErrMsg + strlen(szErrMsg) - 1; pend >= szErrMsg && (*pend == '\n' || *pend == '\r' || *pend == ' '); pend--) *pend = '\0';
error("Error parsing nzb-file: %s", szErrMsg);
}
#endif
NZBFile::NZBFile(const char* szFileName, const char* szCategory)
{
debug("Creating NZBFile");
m_szFileName = strdup(szFileName);
m_szPassword = NULL;
m_pNZBInfo = new NZBInfo();
m_pNZBInfo->AddReference();
m_pNZBInfo->Retain();
m_pNZBInfo->SetFilename(szFileName);
m_pNZBInfo->SetCategory(szCategory);
m_pNZBInfo->BuildDestDirName();
#ifndef WIN32
m_bPassword = false;
m_pFileInfo = NULL;
m_pArticle = NULL;
m_szTagContent = NULL;
m_iTagContentLen = 0;
#endif
m_FileInfos.clear();
}
@@ -91,10 +85,8 @@ NZBFile::~NZBFile()
debug("Destroying NZBFile");
// Cleanup
if (m_szFileName)
{
free(m_szFileName);
}
free(m_szFileName);
free(m_szPassword);
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
@@ -106,6 +98,11 @@ NZBFile::~NZBFile()
{
m_pNZBInfo->Release();
}
#ifndef WIN32
delete m_pFileInfo;
free(m_szTagContent);
#endif
}
void NZBFile::LogDebugInfo()
@@ -118,66 +115,106 @@ void NZBFile::DetachFileInfos()
m_FileInfos.clear();
}
NZBFile* NZBFile::CreateFromBuffer(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize)
{
return Create(szFileName, szCategory, szBuffer, iSize, true);
}
NZBFile* NZBFile::CreateFromFile(const char* szFileName, const char* szCategory)
{
return Create(szFileName, szCategory, NULL, 0, false);
}
void NZBFile::AddArticle(FileInfo* pFileInfo, ArticleInfo* pArticleInfo)
{
// make Article-List big enough
while ((int)pFileInfo->GetArticles()->size() < pArticleInfo->GetPartNumber())
pFileInfo->GetArticles()->push_back(NULL);
(*pFileInfo->GetArticles())[pArticleInfo->GetPartNumber() - 1] = pArticleInfo;
int index = pArticleInfo->GetPartNumber() - 1;
if ((*pFileInfo->GetArticles())[index])
{
delete (*pFileInfo->GetArticles())[index];
}
(*pFileInfo->GetArticles())[index] = pArticleInfo;
}
void NZBFile::AddFileInfo(FileInfo* pFileInfo)
{
// deleting empty articles
// calculate file size and delete empty articles
long long lSize = 0;
long long lMissedSize = 0;
long long lOneSize = 0;
int iUncountedArticles = 0;
int iMissedArticles = 0;
FileInfo::Articles* pArticles = pFileInfo->GetArticles();
int iTotalArticles = (int)pArticles->size();
int i = 0;
for (FileInfo::Articles::iterator it = pArticles->begin(); it != pArticles->end();)
for (FileInfo::Articles::iterator it = pArticles->begin(); it != pArticles->end(); )
{
if (*it == NULL)
ArticleInfo* pArticle = *it;
if (!pArticle)
{
pArticles->erase(it);
it = pArticles->begin() + i;
iMissedArticles++;
if (lOneSize > 0)
{
lMissedSize += lOneSize;
}
else
{
iUncountedArticles++;
}
}
else
{
lSize += pArticle->GetSize();
if (lOneSize == 0)
{
lOneSize = pArticle->GetSize();
}
it++;
i++;
}
}
if (!pArticles->empty())
if (pArticles->empty())
{
ParseSubject(pFileInfo);
m_FileInfos.push_back(pFileInfo);
pFileInfo->SetNZBInfo(m_pNZBInfo);
m_pNZBInfo->SetSize(m_pNZBInfo->GetSize() + pFileInfo->GetSize());
m_pNZBInfo->SetFileCount(m_pNZBInfo->GetFileCount() + 1);
delete pFileInfo;
return;
}
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveFile(pFileInfo);
pFileInfo->ClearArticles();
}
}
else
{
delete pFileInfo;
}
lMissedSize += iUncountedArticles * lOneSize;
lSize += lMissedSize;
m_FileInfos.push_back(pFileInfo);
pFileInfo->SetNZBInfo(m_pNZBInfo);
pFileInfo->SetSize(lSize);
pFileInfo->SetRemainingSize(lSize - lMissedSize);
pFileInfo->SetMissedSize(lMissedSize);
pFileInfo->SetTotalArticles(iTotalArticles);
pFileInfo->SetMissedArticles(iMissedArticles);
}
void NZBFile::ParseSubject(FileInfo* pFileInfo)
void NZBFile::ParseSubject(FileInfo* pFileInfo, bool TryQuotes)
{
if (TryQuotes)
{
// try to use the filename in quotation marks
char* p = (char*)pFileInfo->GetSubject();
char* start = strchr(p, '\"');
if (start)
{
start++;
char* end = strchr(start + 1, '\"');
if (end)
{
int len = (int)(end - start);
char* point = strchr(start + 1, '.');
if (point && point < end)
{
char* filename = (char*)malloc(len + 1);
strncpy(filename, start, len);
filename[len] = '\0';
pFileInfo->SetFilename(filename);
free(filename);
return;
}
}
}
}
// tokenize subject, considering spaces as separators and quotation
// marks as non-separable token delimiters,
// then take the last token containing a dot (".") as the filename
@@ -260,19 +297,14 @@ void NZBFile::ParseSubject(FileInfo* pFileInfo)
debug("Could not extract Filename from Subject: %s. Using Subject as Filename", pFileInfo->GetSubject());
pFileInfo->SetFilename(pFileInfo->GetSubject());
}
pFileInfo->MakeValidFilename();
}
/**
* Check if the parsing of subject was correct
*/
void NZBFile::CheckFilenames()
bool NZBFile::HasDuplicateFilenames()
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo1 = *it;
int iDupe = 0;
int iDupe = 1;
for (FileInfos::iterator it2 = it + 1; it2 != m_FileInfos.end(); it2++)
{
FileInfo* pFileInfo2 = *it2;
@@ -291,25 +323,218 @@ void NZBFile::CheckFilenames()
// a common case caused by posting errors, where bad files are reposted
if (iDupe > 2 || (iDupe == 2 && m_FileInfos.size() == 2))
{
for (FileInfos::iterator it2 = it; it2 != m_FileInfos.end(); it2++)
{
FileInfo* pFileInfo2 = *it2;
pFileInfo2->SetFilename(pFileInfo2->GetSubject());
pFileInfo2->MakeValidFilename();
return true;
}
}
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->LoadArticles(pFileInfo2);
g_pDiskState->SaveFile(pFileInfo2);
pFileInfo2->ClearArticles();
}
}
return false;
}
/**
* Generate filenames from subjects and check if the parsing of subject was correct
*/
void NZBFile::BuildFilenames()
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo = *it;
ParseSubject(pFileInfo, true);
}
if (HasDuplicateFilenames())
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo = *it;
ParseSubject(pFileInfo, false);
}
}
if (HasDuplicateFilenames())
{
m_pNZBInfo->SetManyDupeFiles(true);
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetFilename(pFileInfo->GetSubject());
}
}
}
bool CompareFileInfo(FileInfo* pFirst, FileInfo* pSecond)
{
return strcmp(pFirst->GetFilename(), pSecond->GetFilename()) > 0;
}
void NZBFile::CalcHashes()
{
FileInfoList fileList;
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
fileList.push_back(*it);
}
fileList.sort(CompareFileInfo);
// split ExtCleanupDisk into tokens and create a list
ExtList extList;
char* szExtCleanupDisk = strdup(g_pOptions->GetExtCleanupDisk());
char* saveptr;
char* szExt = strtok_r(szExtCleanupDisk, ",; ", &saveptr);
while (szExt)
{
extList.push_back(szExt);
szExt = strtok_r(NULL, ",; ", &saveptr);
}
unsigned int iFullContentHash = 0;
unsigned int iFilteredContentHash = 0;
int iUseForFilteredCount = 0;
for (FileInfoList::iterator it = fileList.begin(); it != fileList.end(); it++)
{
FileInfo* pFileInfo = *it;
// check file extension
int iFilenameLen = strlen(pFileInfo->GetFilename());
bool bSkip = false;
for (ExtList::iterator it = extList.begin(); it != extList.end(); it++)
{
const char* szExt = *it;
int iExtLen = strlen(szExt);
if (iFilenameLen >= iExtLen && !strcasecmp(szExt, pFileInfo->GetFilename() + iFilenameLen - iExtLen))
{
bSkip = true;
break;
}
}
bSkip = bSkip && !pFileInfo->GetParFile();
for (FileInfo::Articles::iterator it = pFileInfo->GetArticles()->begin(); it != pFileInfo->GetArticles()->end(); it++)
{
ArticleInfo* pArticle = *it;
int iLen = strlen(pArticle->GetMessageID());
iFullContentHash = Util::HashBJ96(pArticle->GetMessageID(), iLen, iFullContentHash);
if (!bSkip)
{
iFilteredContentHash = Util::HashBJ96(pArticle->GetMessageID(), iLen, iFilteredContentHash);
iUseForFilteredCount++;
}
}
}
free(szExtCleanupDisk);
// if the filtered hash is based on less than half of the files, do not use it at all
if (iUseForFilteredCount < (int)fileList.size() / 2)
{
iFilteredContentHash = 0;
}
m_pNZBInfo->SetFullContentHash(iFullContentHash);
m_pNZBInfo->SetFilteredContentHash(iFilteredContentHash);
}
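CalcHashes folds every article message-ID into a running 32-bit hash via Util::HashBJ96. As a rough, self-contained stand-in (FNV-1a instead of the Bob-Jenkins hash actually used), the chaining looks like this:

// Sketch only: FNV-1a as a stand-in for Util::HashBJ96.
unsigned int HashStep(const char* szMessageID, unsigned int iPrevHash)
{
	unsigned int iHash = iPrevHash ? iPrevHash : 2166136261u; // FNV offset basis as seed
	for (const char* p = szMessageID; *p; p++)
	{
		iHash ^= (unsigned char)*p;
		iHash *= 16777619u; // FNV prime
	}
	return iHash;
}

// Usage, mirroring the loop above:
// unsigned int iFullContentHash = 0;
// iFullContentHash = HashStep("<part1of2.abc@example>", iFullContentHash);
// iFullContentHash = HashStep("<part2of2.abc@example>", iFullContentHash);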
void NZBFile::ProcessFiles()
{
BuildFilenames();
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->MakeValidFilename();
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
bool bParFile = strstr(szLoFileName, ".par2");
m_pNZBInfo->SetFileCount(m_pNZBInfo->GetFileCount() + 1);
m_pNZBInfo->SetTotalArticles(m_pNZBInfo->GetTotalArticles() + pFileInfo->GetTotalArticles());
m_pNZBInfo->SetSize(m_pNZBInfo->GetSize() + pFileInfo->GetSize());
m_pNZBInfo->SetFailedSize(m_pNZBInfo->GetFailedSize() + pFileInfo->GetMissedSize());
m_pNZBInfo->SetCurrentFailedSize(m_pNZBInfo->GetFailedSize());
pFileInfo->SetParFile(bParFile);
if (bParFile)
{
m_pNZBInfo->SetParSize(m_pNZBInfo->GetParSize() + pFileInfo->GetSize());
m_pNZBInfo->SetParFailedSize(m_pNZBInfo->GetParFailedSize() + pFileInfo->GetMissedSize());
m_pNZBInfo->SetParCurrentFailedSize(m_pNZBInfo->GetParFailedSize());
}
}
CalcHashes();
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
for (FileInfos::iterator it = m_FileInfos.begin(); it != m_FileInfos.end(); it++)
{
FileInfo* pFileInfo = *it;
g_pDiskState->SaveFile(pFileInfo);
pFileInfo->ClearArticles();
}
}
if (m_szPassword)
{
ReadPassword();
}
}
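ProcessFiles classifies par2 files with a lowercase substring test; factored out as a sketch (helper name invented):

#include <cctype>
#include <string>

// Sketch only: same case-insensitive ".par2" test as in ProcessFiles.
bool LooksLikeParFile(const char* szFilename)
{
	std::string lo = szFilename;
	for (size_t i = 0; i < lo.length(); i++)
	{
		lo[i] = (char)tolower((unsigned char)lo[i]);
	}
	return lo.find(".par2") != std::string::npos;
}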
/**
* A password read via the XML parser may have special characters (such as TAB) stripped.
* This function rereads the password directly from the file to keep all characters intact.
*/
void NZBFile::ReadPassword()
{
FILE* pFile = fopen(m_szFileName, "rb");
if (!pFile)
{
return;
}
// obtain file size.
fseek(pFile , 0 , SEEK_END);
int iSize = ftell(pFile);
rewind(pFile);
// only the first 4KB of the file is read
// allocate a 4KB buffer for it
char* buf = (char*)malloc(4096);
iSize = iSize < 4096 ? iSize : 4096;
// copy the file into the buffer.
fread(buf, 1, iSize, pFile);
fclose(pFile);
buf[iSize-1] = '\0';
char* szMetaPassword = strstr(buf, "<meta type=\"password\">");
if (szMetaPassword)
{
szMetaPassword += 22; // length of '<meta type="password">'
char* szEnd = strstr(szMetaPassword, "</meta>");
if (szEnd)
{
*szEnd = '\0';
WebUtil::XmlDecode(szMetaPassword);
free(m_szPassword);
m_szPassword = strdup(szMetaPassword);
}
}
free(buf);
}
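The raw-text extraction in ReadPassword amounts to finding the first <meta type="password"> element in the unparsed file. A std::string sketch of the same idea (the real code additionally XML-decodes the result via WebUtil::XmlDecode):

#include <string>

// Sketch only: extract the still-XML-encoded password from raw nzb text.
bool ExtractMetaPassword(const std::string& rawNzb, std::string& password)
{
	const std::string tagOpen = "<meta type=\"password\">";
	size_t start = rawNzb.find(tagOpen);
	if (start == std::string::npos) return false;
	start += tagOpen.length();
	size_t end = rawNzb.find("</meta>", start);
	if (end == std::string::npos) return false;
	password = rawNzb.substr(start, end - start);
	return true;
}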
#ifdef WIN32
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize, bool bFromBuffer)
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory)
{
CoInitialize(NULL);
@@ -326,21 +551,15 @@ NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const c
doc->put_resolveExternals(VARIANT_FALSE);
doc->put_validateOnParse(VARIANT_FALSE);
doc->put_async(VARIANT_FALSE);
VARIANT_BOOL success;
if (bFromBuffer)
{
success = doc->loadXML(szBuffer);
}
else
{
// filename needs to be properly encoded
char* szURL = (char*)malloc(strlen(szFileName)*3 + 1);
EncodeURL(szFileName, szURL);
debug("url=\"%s\"", szURL);
_variant_t v(szURL);
free(szURL);
success = doc->load(v);
}
// filename needs to be properly encoded
char* szURL = (char*)malloc(strlen(szFileName)*3 + 1);
EncodeURL(szFileName, szURL);
debug("url=\"%s\"", szURL);
_variant_t v(szURL);
free(szURL);
VARIANT_BOOL success = doc->load(v);
if (success == VARIANT_FALSE)
{
_bstr_t r(doc->GetparseError()->reason);
@@ -352,7 +571,7 @@ NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const c
NZBFile* pFile = new NZBFile(szFileName, szCategory);
if (pFile->ParseNZB(doc))
{
pFile->CheckFilenames();
pFile->ProcessFiles();
}
else
{
@@ -376,7 +595,7 @@ void NZBFile::EncodeURL(const char* szFilename, char* szURL)
else
{
*szURL++ = '%';
int a = ch >> 4;
int a = (unsigned char)ch >> 4;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
a = ch & 0xF;
*szURL++ = a > 9 ? a - 10 + 'a' : a + '0';
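The (unsigned char) cast added above matters for non-ASCII filenames: with a plain signed char, bytes >= 0x80 are sign-extended, ch >> 4 becomes negative, and the emitted hex digits are garbage. A self-contained sketch of the corrected per-byte encoding:

// Sketch only: percent-encode one byte the way EncodeURL now does.
void PercentEncodeByte(char ch, char out[4])
{
	unsigned char uch = (unsigned char)ch; // avoid sign extension for bytes >= 0x80
	int hi = uch >> 4;
	int lo = uch & 0xF;
	out[0] = '%';
	out[1] = hi > 9 ? hi - 10 + 'a' : hi + '0';
	out[2] = lo > 9 ? lo - 10 + 'a' : lo + '0';
	out[3] = '\0';
}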
@@ -390,10 +609,17 @@ bool NZBFile::ParseNZB(IUnknown* nzb)
MSXML::IXMLDOMDocumentPtr doc = nzb;
MSXML::IXMLDOMNodePtr root = doc->documentElement;
MSXML::IXMLDOMNodePtr node = root->selectSingleNode("/nzb/head/meta[@type='password']");
if (node)
{
_bstr_t password(node->Gettext());
m_szPassword = strdup(password);
}
MSXML::IXMLDOMNodeListPtr fileList = root->selectNodes("/nzb/file");
for (int i = 0; i < fileList->Getlength(); i++)
{
MSXML::IXMLDOMNodePtr node = fileList->Getitem(i);
node = fileList->Getitem(i);
MSXML::IXMLDOMNodePtr attribute = node->Getattributes()->getNamedItem("subject");
if (!attribute) return false;
_bstr_t subject(attribute->Gettext());
@@ -434,16 +660,14 @@ bool NZBFile::ParseNZB(IUnknown* nzb)
int partNumber = atoi(number);
int lsize = atoi(bytes);
ArticleInfo* pArticle = new ArticleInfo();
pArticle->SetPartNumber(partNumber);
pArticle->SetMessageID(szId);
pArticle->SetSize(lsize);
AddArticle(pFileInfo, pArticle);
if (lsize > 0)
{
pFileInfo->SetSize(pFileInfo->GetSize() + lsize);
}
if (partNumber > 0)
{
ArticleInfo* pArticle = new ArticleInfo();
pArticle->SetPartNumber(partNumber);
pArticle->SetMessageID(szId);
pArticle->SetSize(lsize);
AddArticle(pFileInfo, pArticle);
}
}
AddFileInfo(pFileInfo);
@@ -453,160 +677,233 @@ bool NZBFile::ParseNZB(IUnknown* nzb)
#else
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize, bool bFromBuffer)
NZBFile* NZBFile::Create(const char* szFileName, const char* szCategory)
{
xmlSetGenericErrorFunc(NULL, libxml_errorhandler);
xmlTextReaderPtr doc;
if (bFromBuffer)
{
doc = xmlReaderForMemory(szBuffer, iSize-1, "", NULL, 0);
}
else
{
doc = xmlReaderForFile(szFileName, NULL, 0);
}
if (!doc)
{
error("Could not create XML-Reader");
return NULL;
}
NZBFile* pFile = new NZBFile(szFileName, szCategory);
if (pFile->ParseNZB(doc))
xmlSAXHandler SAX_handler = {0};
SAX_handler.startElement = reinterpret_cast<startElementSAXFunc>(SAX_StartElement);
SAX_handler.endElement = reinterpret_cast<endElementSAXFunc>(SAX_EndElement);
SAX_handler.characters = reinterpret_cast<charactersSAXFunc>(SAX_characters);
SAX_handler.error = reinterpret_cast<errorSAXFunc>(SAX_error);
SAX_handler.getEntity = reinterpret_cast<getEntitySAXFunc>(SAX_getEntity);
pFile->m_bIgnoreNextError = false;
int ret = xmlSAXUserParseFile(&SAX_handler, pFile, szFileName);
if (ret == 0)
{
pFile->CheckFilenames();
pFile->ProcessFiles();
}
else
{
error("Failed to parse nzb-file");
delete pFile;
pFile = NULL;
}
xmlFreeTextReader(doc);
return pFile;
}
bool NZBFile::ParseNZB(void* nzb)
void NZBFile::Parse_StartElement(const char *name, const char **atts)
{
FileInfo* pFileInfo = NULL;
xmlTextReaderPtr node = (xmlTextReaderPtr)nzb;
// walk through whole doc and search for segments-tags
int ret = xmlTextReaderRead(node);
while (ret == 1)
{
if (node)
{
xmlChar *name, *value;
if (m_szTagContent)
{
free(m_szTagContent);
m_szTagContent = NULL;
m_iTagContentLen = 0;
}
if (!strcmp("file", name))
{
m_pFileInfo = new FileInfo();
m_pFileInfo->SetFilename(m_szFileName);
name = xmlTextReaderName(node);
if (name == NULL)
{
name = xmlStrdup(BAD_CAST "--");
}
value = xmlTextReaderValue(node);
for (int i = 0; atts[i]; i += 2)
{
const char* attrname = atts[i];
const char* attrvalue = atts[i + 1];
if (!strcmp("subject", attrname))
{
m_pFileInfo->SetSubject(attrvalue);
}
if (!strcmp("date", attrname))
{
m_pFileInfo->SetTime(atoi(attrvalue));
}
}
}
else if (!strcmp("segment", name))
{
if (!m_pFileInfo)
{
// error: bad nzb-file
return;
}
long long lsize = -1;
int partNumber = -1;
if (xmlTextReaderNodeType(node) == 1)
{
if (!strcmp("file", (char*)name))
{
pFileInfo = new FileInfo();
pFileInfo->SetFilename(m_szFileName);
for (int i = 0; atts[i]; i += 2)
{
const char* attrname = atts[i];
const char* attrvalue = atts[i + 1];
if (!strcmp("bytes", attrname))
{
lsize = atol(attrvalue);
}
if (!strcmp("number", attrname))
{
partNumber = atol(attrvalue);
}
}
while (xmlTextReaderMoveToNextAttribute(node))
{
xmlFree(name);
name = xmlTextReaderName(node);
if (!strcmp("subject",(char*)name))
{
xmlFree(value);
value = xmlTextReaderValue(node);
pFileInfo->SetSubject((char*)value);
}
if (!strcmp("date",(char*)name))
{
xmlFree(value);
value = xmlTextReaderValue(node);
pFileInfo->SetTime(atoi((char*)value));
}
}
}
else if (!strcmp("segment",(char*)name))
{
long long lsize = -1;
int partNumber = -1;
if (partNumber > 0)
{
// new segment, add it!
m_pArticle = new ArticleInfo();
m_pArticle->SetPartNumber(partNumber);
m_pArticle->SetSize(lsize);
AddArticle(m_pFileInfo, m_pArticle);
}
}
else if (!strcmp("meta", name))
{
m_bPassword = atts[0] && atts[1] && !strcmp("type", atts[0]) && !strcmp("password", atts[1]);
}
}
while (xmlTextReaderMoveToNextAttribute(node))
{
xmlFree(name);
name = xmlTextReaderName(node);
xmlFree(value);
value = xmlTextReaderValue(node);
if (!strcmp("bytes",(char*)name))
{
lsize = atol((char*)value);
}
if (!strcmp("number",(char*)name))
{
partNumber = atol((char*)value);
}
}
if (lsize > 0)
{
pFileInfo->SetSize(pFileInfo->GetSize() + lsize);
}
/* Get the #text part */
ret = xmlTextReaderRead(node);
void NZBFile::Parse_EndElement(const char *name)
{
if (!strcmp("file", name))
{
// Close the file element, add the new file to file-list
AddFileInfo(m_pFileInfo);
m_pFileInfo = NULL;
m_pArticle = NULL;
}
else if (!strcmp("group", name))
{
if (!m_pFileInfo)
{
// error: bad nzb-file
return;
}
m_pFileInfo->GetGroups()->push_back(m_szTagContent);
m_szTagContent = NULL;
m_iTagContentLen = 0;
}
else if (!strcmp("segment", name))
{
if (!m_pFileInfo || !m_pArticle)
{
// error: bad nzb-file
return;
}
if (partNumber > 0)
{
// new segment, add it!
xmlFree(value);
value = xmlTextReaderValue(node);
char tmp[2048];
snprintf(tmp, 2048, "<%s>", (char*)value);
ArticleInfo* pArticle = new ArticleInfo();
pArticle->SetPartNumber(partNumber);
pArticle->SetMessageID(tmp);
pArticle->SetSize(lsize);
AddArticle(pFileInfo, pArticle);
}
}
else if (!strcmp("group",(char*)name))
{
ret = xmlTextReaderRead(node);
xmlFree(value);
value = xmlTextReaderValue(node);
if (!pFileInfo)
{
// error: bad nzb-file
break;
}
pFileInfo->GetGroups()->push_back(strdup((char*)value));
}
}
// Get the #text part
char ID[2048];
snprintf(ID, 2048, "<%s>", m_szTagContent);
m_pArticle->SetMessageID(ID);
m_pArticle = NULL;
}
else if (!strcmp("meta", name) && m_bPassword)
{
m_szPassword = strdup(m_szTagContent);
}
}
if (xmlTextReaderNodeType(node) == 15)
{
/* Close the file element, add the new file to file-list */
if (!strcmp("file",(char*)name))
{
AddFileInfo(pFileInfo);
}
}
void NZBFile::Parse_Content(const char *buf, int len)
{
m_szTagContent = (char*)realloc(m_szTagContent, m_iTagContentLen + len + 1);
strncpy(m_szTagContent + m_iTagContentLen, buf, len);
m_iTagContentLen += len;
m_szTagContent[m_iTagContentLen] = '\0';
}
xmlFree(name);
xmlFree(value);
}
ret = xmlTextReaderRead(node);
}
if (ret != 0)
{
error("Failed to parse nzb-file");
return false;
}
return true;
void NZBFile::SAX_StartElement(NZBFile* pFile, const char *name, const char **atts)
{
pFile->Parse_StartElement(name, atts);
}
void NZBFile::SAX_EndElement(NZBFile* pFile, const char *name)
{
pFile->Parse_EndElement(name);
}
void NZBFile::SAX_characters(NZBFile* pFile, const char * xmlstr, int len)
{
char* str = (char*)xmlstr;
// trim starting blanks
int off = 0;
for (int i = 0; i < len; i++)
{
char ch = str[i];
if (ch == ' ' || ch == 10 || ch == 13 || ch == 9)
{
off++;
}
else
{
break;
}
}
int newlen = len - off;
// trim ending blanks
for (int i = len - 1; i >= off; i--)
{
char ch = str[i];
if (ch == ' ' || ch == 10 || ch == 13 || ch == 9)
{
newlen--;
}
else
{
break;
}
}
if (newlen > 0)
{
// interpret tag content
pFile->Parse_Content(str + off, newlen);
}
}
void* NZBFile::SAX_getEntity(NZBFile* pFile, const char * name)
{
xmlEntityPtr e = xmlGetPredefinedEntity((xmlChar* )name);
if (!e)
{
warn("entity not found");
pFile->m_bIgnoreNextError = true;
}
return e;
}
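xmlGetPredefinedEntity resolves only the five XML built-in entities; anything else (for example an HTML-style &nbsp; in a malformed nzb) returns NULL, libxml2 then reports an error, and m_bIgnoreNextError above downgrades it to a warning. A tiny check relying only on calls already used in this file (helper name invented):

#include <libxml/entities.h>

// Sketch only: true for "lt", "gt", "amp", "quot" and "apos".
bool IsPredefinedEntity(const char* szName)
{
	return xmlGetPredefinedEntity(BAD_CAST szName) != NULL;
}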
void NZBFile::SAX_error(NZBFile* pFile, const char *msg, ...)
{
if (pFile->m_bIgnoreNextError)
{
pFile->m_bIgnoreNextError = false;
return;
}
va_list argp;
va_start(argp, msg);
char szErrMsg[1024];
vsnprintf(szErrMsg, sizeof(szErrMsg), msg, argp);
szErrMsg[1024-1] = '\0';
va_end(argp);
// remove trailing CRLF
for (char* pend = szErrMsg + strlen(szErrMsg) - 1; pend >= szErrMsg && (*pend == '\n' || *pend == '\r' || *pend == ' '); pend--) *pend = '\0';
error("Error parsing nzb-file: %s", szErrMsg);
}
#endif


@@ -1,8 +1,8 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -28,6 +28,7 @@
#define NZBFILE_H
#include <vector>
#include <list>
#include "DownloadInfo.h"
@@ -35,32 +36,52 @@ class NZBFile
{
public:
typedef std::vector<FileInfo*> FileInfos;
typedef std::list<FileInfo*> FileInfoList;
typedef std::list<char*> ExtList;
private:
FileInfos m_FileInfos;
NZBInfo* m_pNZBInfo;
char* m_szFileName;
char* m_szPassword;
NZBFile(const char* szFileName, const char* szCategory);
void AddArticle(FileInfo* pFileInfo, ArticleInfo* pArticleInfo);
void AddFileInfo(FileInfo* pFileInfo);
void ParseSubject(FileInfo* pFileInfo);
void CheckFilenames();
void ParseSubject(FileInfo* pFileInfo, bool TryQuotes);
void BuildFilenames();
void ProcessFiles();
void CalcHashes();
bool HasDuplicateFilenames();
void ReadPassword();
#ifdef WIN32
bool ParseNZB(IUnknown* nzb);
static void EncodeURL(const char* szFilename, char* szURL);
#else
bool ParseNZB(void* nzb);
FileInfo* m_pFileInfo;
ArticleInfo* m_pArticle;
char* m_szTagContent;
int m_iTagContentLen;
bool m_bIgnoreNextError;
bool m_bPassword;
static void SAX_StartElement(NZBFile* pFile, const char *name, const char **atts);
static void SAX_EndElement(NZBFile* pFile, const char *name);
static void SAX_characters(NZBFile* pFile, const char * xmlstr, int len);
static void* SAX_getEntity(NZBFile* pFile, const char * name);
static void SAX_error(NZBFile* pFile, const char *msg, ...);
void Parse_StartElement(const char *name, const char **atts);
void Parse_EndElement(const char *name);
void Parse_Content(const char *buf, int len);
#endif
static NZBFile* Create(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize, bool bFromBuffer);
public:
virtual ~NZBFile();
static NZBFile* CreateFromBuffer(const char* szFileName, const char* szCategory, const char* szBuffer, int iSize);
static NZBFile* CreateFromFile(const char* szFileName, const char* szCategory);
static NZBFile* Create(const char* szFileName, const char* szCategory);
const char* GetFileName() const { return m_szFileName; }
FileInfos* GetFileInfos() { return &m_FileInfos; }
NZBInfo* GetNZBInfo() { return m_pNZBInfo; }
const char* GetPassword() { return m_szPassword; }
void DetachFileInfos();
void LogDebugInfo();


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -34,38 +34,47 @@
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "nzbget.h"
#include "NewsServer.h"
#include "Log.h"
NewsServer::NewsServer(const char* szHost, int iPort, const char* szUser, const char* szPass, bool bJoinGroup, bool bTLS, int iMaxConnections, int iLevel) : NetAddress(szHost, iPort)
NewsServer::NewsServer(int iID, bool bActive, const char* szName, const char* szHost, int iPort,
const char* szUser, const char* szPass, bool bJoinGroup, bool bTLS,
const char* szCipher, int iMaxConnections, int iLevel, int iGroup)
{
m_szUser = NULL;
m_szPassword = NULL;
m_iID = iID;
m_iStateID = 0;
m_bActive = bActive;
m_iPort = iPort;
m_iLevel = iLevel;
m_iNormLevel = iLevel;
m_iGroup = iGroup;
m_iMaxConnections = iMaxConnections;
m_bJoinGroup = bJoinGroup;
m_bTLS = bTLS;
m_szHost = strdup(szHost ? szHost : "");
m_szUser = strdup(szUser ? szUser : "");
m_szPassword = strdup(szPass ? szPass : "");
m_szCipher = strdup(szCipher ? szCipher : "");
if (szUser)
if (szName && strlen(szName) > 0)
{
m_szUser = strdup(szUser);
m_szName = strdup(szName);
}
if (szPass)
else
{
m_szPassword = strdup(szPass);
m_szName = (char*)malloc(20);
snprintf(m_szName, 20, "server%i", iID);
m_szName[20-1] = '\0';
}
}
NewsServer::~NewsServer()
{
if (m_szUser)
{
free(m_szUser);
}
if (m_szPassword)
{
free(m_szPassword);
}
free(m_szName);
free(m_szHost);
free(m_szUser);
free(m_szPassword);
free(m_szCipher);
}
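For orientation, a hypothetical call to the extended constructor (all values invented, not taken from any configuration); with a NULL or empty szName the constructor auto-names the server "server<ID>".

// Sketch only. Assumes #include "NewsServer.h".
void CreateExampleServer()
{
	NewsServer* pServer = new NewsServer(1, true, "main", "news.example.com", 563,
		"user", "secret", false, true, "", 8, 0, 0);
	// pServer->GetName() returns "main"; with szName == NULL it would be "server1".
	delete pServer;
}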


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2008 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -27,27 +27,52 @@
#ifndef NEWSSERVER_H
#define NEWSSERVER_H
#include "NetAddress.h"
#include <vector>
class NewsServer : public NetAddress
class NewsServer
{
private:
int m_iID;
int m_iStateID;
bool m_bActive;
char* m_szName;
int m_iGroup;
char* m_szHost;
int m_iPort;
char* m_szUser;
char* m_szPassword;
int m_iMaxConnections;
int m_iLevel;
int m_iNormLevel;
bool m_bJoinGroup;
bool m_bTLS;
char* m_szCipher;
public:
NewsServer(const char* szHost, int iPort, const char* szUser, const char* szPass, bool bJoinGroup, bool bTLS, int iMaxConnections, int iLevel);
virtual ~NewsServer();
NewsServer(int iID, bool bActive, const char* szName, const char* szHost, int iPort,
const char* szUser, const char* szPass, bool bJoinGroup,
bool bTLS, const char* szCipher, int iMaxConnections, int iLevel, int iGroup);
~NewsServer();
int GetID() { return m_iID; }
int GetStateID() { return m_iStateID; }
void SetStateID(int iStateID) { m_iStateID = iStateID; }
bool GetActive() { return m_bActive; }
void SetActive(bool bActive) { m_bActive = bActive; }
const char* GetName() { return m_szName; }
int GetGroup() { return m_iGroup; }
const char* GetHost() { return m_szHost; }
int GetPort() { return m_iPort; }
const char* GetUser() { return m_szUser; }
const char* GetPassword() { return m_szPassword; }
int GetMaxConnections() { return m_iMaxConnections; }
int GetLevel() { return m_iLevel; }
int GetNormLevel() { return m_iNormLevel; }
void SetNormLevel(int iLevel) { m_iNormLevel = iLevel; }
int GetJoinGroup() { return m_bJoinGroup; }
bool GetTLS() { return m_bTLS; }
const char* GetCipher() { return m_szCipher; }
};
typedef std::vector<NewsServer*> Servers;
#endif


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$


File diff suppressed because it is too large

Options.h (294 changed lines)

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -28,7 +28,11 @@
#define OPTIONS_H
#include <vector>
#include <list>
#include <time.h>
#include "Thread.h"
#include "Util.h"
class Options
{
@@ -45,10 +49,12 @@ public:
opClientRequestEditQueue,
opClientRequestLog,
opClientRequestShutdown,
opClientRequestReload,
opClientRequestVersion,
opClientRequestPostQueue,
opClientRequestWriteLog,
opClientRequestScan,
opClientRequestScanSync,
opClientRequestScanAsync,
opClientRequestDownloadPause,
opClientRequestDownloadUnpause,
opClientRequestDownload2Pause,
@@ -57,7 +63,9 @@ public:
opClientRequestPostUnpause,
opClientRequestScanPause,
opClientRequestScanUnpause,
opClientRequestHistory
opClientRequestHistory,
opClientRequestDownloadUrl,
opClientRequestUrlQueue
};
enum EMessageTarget
{
@@ -72,11 +80,23 @@ public:
omColored,
omNCurses
};
enum ELoadPars
enum EParCheck
{
lpNone,
lpOne,
lpAll
pcAuto,
pcForce,
pcManual
};
enum EParScan
{
psLimited,
psFull,
psAuto
};
enum EHealthCheck
{
hcPause,
hcDelete,
hcNone
};
enum EScriptLogKind
{
@@ -87,38 +107,144 @@ public:
slError,
slDebug
};
enum EMatchMode
{
mmID = 1,
mmName,
mmRegEx
};
class OptEntry
{
private:
char* m_szName;
char* m_szValue;
char* m_szDefValue;
int m_iLineNo;
void SetName(const char* szName);
void SetValue(const char* szValue);
void SetLineNo(int iLineNo) { m_iLineNo = iLineNo; }
friend class Options;
public:
OptEntry();
OptEntry(const char* szName, const char* szValue);
~OptEntry();
const char* GetName() { return m_szName; }
const char* GetValue() { return m_szValue; }
const char* GetDefValue() { return m_szDefValue; }
int GetLineNo() { return m_iLineNo; }
};
typedef std::vector<OptEntry*> OptEntries;
typedef std::vector<OptEntry*> OptEntriesBase;
class OptEntries: public OptEntriesBase
{
public:
~OptEntries();
OptEntry* FindOption(const char* szName);
};
class ConfigTemplate
{
private:
char* m_szName;
char* m_szDisplayName;
char* m_szTemplate;
friend class Options;
public:
ConfigTemplate(const char* szName, const char* szDisplayName, const char* szTemplate);
~ConfigTemplate();
const char* GetName() { return m_szName; }
const char* GetDisplayName() { return m_szDisplayName; }
const char* GetTemplate() { return m_szTemplate; }
};
typedef std::vector<ConfigTemplate*> ConfigTemplatesBase;
class ConfigTemplates: public ConfigTemplatesBase
{
public:
~ConfigTemplates();
};
typedef std::vector<char*> NameList;
class Category
{
private:
char* m_szName;
char* m_szDestDir;
bool m_bUnpack;
char* m_szDefScript;
NameList m_Aliases;
public:
Category(const char* szName, const char* szDestDir, bool bUnpack, const char* szDefScript);
~Category();
const char* GetName() { return m_szName; }
const char* GetDestDir() { return m_szDestDir; }
bool GetUnpack() { return m_bUnpack; }
const char* GetDefScript() { return m_szDefScript; }
NameList* GetAliases() { return &m_Aliases; }
};
typedef std::vector<Category*> CategoriesBase;
class Categories: public CategoriesBase
{
public:
~Categories();
Category* FindCategory(const char* szName, bool bSearchAliases);
};
class Script
{
private:
char* m_szName;
char* m_szLocation;
char* m_szDisplayName;
public:
Script(const char* szName, const char* szLocation);
~Script();
const char* GetName() { return m_szName; }
const char* GetLocation() { return m_szLocation; }
void SetDisplayName(const char* szDisplayName);
const char* GetDisplayName() { return m_szDisplayName; }
};
typedef std::list<Script*> ScriptListBase;
class ScriptList: public ScriptListBase
{
public:
~ScriptList();
Script* Find(const char* szName);
};
private:
OptEntries m_OptEntries;
bool m_bConfigInitialized;
Mutex m_mutexOptEntries;
Categories m_Categories;
// Options
bool m_bConfigErrors;
int m_iConfigLine;
char* m_szConfigFilename;
char* m_szDestDir;
char* m_szInterDir;
char* m_szTempDir;
char* m_szQueueDir;
char* m_szNzbDir;
char* m_szWebDir;
char* m_szConfigTemplate;
char* m_szScriptDir;
EMessageTarget m_eInfoTarget;
EMessageTarget m_eWarningTarget;
EMessageTarget m_eErrorTarget;
@@ -129,31 +255,40 @@ private:
bool m_bResetLog;
int m_iConnectionTimeout;
int m_iTerminateTimeout;
bool m_bAppendNZBDir;
bool m_bAppendCategoryDir;
bool m_bContinuePartial;
bool m_bRenameBroken;
int m_iRetries;
int m_iRetryInterval;
bool m_bSaveQueue;
bool m_bDupeCheck;
char* m_szServerIP;
char* m_szServerPassword;
int m_szServerPort;
char* m_szControlIP;
char* m_szControlUsername;
char* m_szControlPassword;
int m_iControlPort;
bool m_bSecureControl;
int m_iSecurePort;
char* m_szSecureCert;
char* m_szSecureKey;
char* m_szAuthorizedIP;
char* m_szLockFile;
char* m_szDaemonUserName;
char* m_szDaemonUsername;
EOutputMode m_eOutputMode;
bool m_bReloadQueue;
bool m_bReloadUrlQueue;
bool m_bReloadPostQueue;
int m_iUrlConnections;
int m_iLogBufferSize;
bool m_bCreateLog;
char* m_szLogFile;
ELoadPars m_eLoadPars;
bool m_bParCheck;
EParCheck m_eParCheck;
bool m_bParRepair;
char* m_szPostProcess;
EParScan m_eParScan;
bool m_bParRename;
EHealthCheck m_eHealthCheck;
char* m_szDefScript;
char* m_szScriptOrder;
char* m_szNZBProcess;
bool m_bStrictParName;
char* m_szNZBAddedProcess;
bool m_bNoConfig;
int m_iUMask;
int m_iUpdateInterval;
@@ -161,25 +296,30 @@ private:
bool m_bCursesTime;
bool m_bCursesGroup;
bool m_bCrcCheck;
bool m_bRetryOnCrcError;
int m_iThreadLimit;
bool m_bDirectWrite;
int m_iWriteBufferSize;
int m_iNzbDirInterval;
int m_iNzbDirFileAge;
bool m_bParCleanupQueue;
int m_iDiskSpace;
EScriptLogKind m_eProcessLogKind;
bool m_bAllowReProcess;
bool m_bTLS;
bool m_bDumpCore;
bool m_bParPauseQueue;
bool m_bPostPauseQueue;
bool m_bScriptPauseQueue;
bool m_bNzbCleanupDisk;
bool m_bDeleteCleanupDisk;
bool m_bMergeNzb;
int m_iParTimeLimit;
int m_iKeepHistory;
bool m_bAccurateRate;
bool m_bUnpack;
bool m_bUnpackCleanupDisk;
char* m_szUnrarCmd;
char* m_szSevenZipCmd;
bool m_bUnpackPauseQueue;
char* m_szExtCleanupDisk;
int m_iFeedHistory;
bool m_bUrlForce;
int m_iTimeCorrection;
// Parsed command-line parameters
bool m_bServerMode;
@@ -189,13 +329,18 @@ private:
int m_iEditQueueOffset;
int* m_pEditQueueIDList;
int m_iEditQueueIDCount;
NameList m_EditQueueNameList;
EMatchMode m_EMatchMode;
char* m_szEditQueueText;
char* m_szArgFilename;
char* m_szCategory;
char* m_szAddCategory;
int m_iAddPriority;
bool m_bAddPaused;
char* m_szAddNZBFilename;
char* m_szLastArg;
bool m_bPrintOptions;
bool m_bAddTop;
float m_fSetRate;
int m_iSetRate;
int m_iLogLines;
int m_iWriteLogKind;
bool m_bTestBacktrace;
@@ -205,8 +350,9 @@ private:
bool m_bPauseDownload2;
bool m_bPausePostProcess;
bool m_bPauseScan;
float m_fDownloadRate;
int m_iDownloadRate;
EClientOperation m_eClientOperation;
time_t m_tResumeTime;
void InitDefault();
void InitOptFile();
@@ -214,33 +360,56 @@ private:
void InitOptions();
void InitFileArg(int argc, char* argv[]);
void InitServers();
void InitCategories();
void InitScheduler();
void InitFeeds();
void CheckOptions();
void PrintUsage(char* com);
void Dump();
int ParseOptionValue(const char* OptName, int argc, const char* argn[], const int argv[]);
int ParseEnumValue(const char* OptName, int argc, const char* argn[], const int argv[]);
int ParseIntValue(const char* OptName, int iBase);
float ParseFloatValue(const char* OptName);
OptEntry* FindOption(const char* optname);
const char* GetOption(const char* optname);
void SetOption(const char* optname, const char* value);
bool SetOptionString(const char* option);
bool SplitOptionString(const char* option, char** pOptName, char** pOptValue);
bool ValidateOptionName(const char* optname);
void LoadConfig(const char* configfile);
void CheckDir(char** dir, const char* szOptionName, bool bAllowEmpty);
void LoadConfigFile();
void CheckDir(char** dir, const char* szOptionName, bool bAllowEmpty, bool bCreate);
void ParseFileIDList(int argc, char* argv[], int optind);
void ParseFileNameList(int argc, char* argv[], int optind);
bool ParseTime(const char** pTime, int* pHours, int* pMinutes);
bool ParseWeekDays(const char* szWeekDays, int* pWeekDaysBits);
void ConfigError(const char* msg, ...);
void ConfigWarn(const char* msg, ...);
void LocateOptionSrcPos(const char *szOptionName);
void ConvertOldOption(char *szOption, int iOptionBufLen, char *szValue, int iValueBufLen);
static bool CompareScripts(Script* pScript1, Script* pScript2);
void LoadScriptDir(ScriptList* pScriptList, const char* szDirectory, bool bIsSubDir);
void BuildScriptDisplayNames(ScriptList* pScriptList);
public:
Options(int argc, char* argv[]);
~Options();
bool LoadConfig(OptEntries* pOptEntries);
bool SaveConfig(OptEntries* pOptEntries);
bool LoadConfigTemplates(ConfigTemplates* pConfigTemplates);
void LoadScriptList(ScriptList* pScriptList);
// Options
OptEntries* LockOptEntries();
void UnlockOptEntries();
const char* GetConfigFilename() { return m_szConfigFilename; }
const char* GetDestDir() { return m_szDestDir; }
const char* GetInterDir() { return m_szInterDir; }
const char* GetTempDir() { return m_szTempDir; }
const char* GetQueueDir() { return m_szQueueDir; }
const char* GetNzbDir() { return m_szNzbDir; }
const char* GetWebDir() { return m_szWebDir; }
const char* GetConfigTemplate() { return m_szConfigTemplate; }
const char* GetScriptDir() { return m_szScriptDir; }
bool GetCreateBrokenLog() const { return m_bCreateBrokenLog; }
bool GetResetLog() const { return m_bResetLog; }
EMessageTarget GetInfoTarget() const { return m_eInfoTarget; }
@@ -251,56 +420,72 @@ public:
int GetConnectionTimeout() { return m_iConnectionTimeout; }
int GetTerminateTimeout() { return m_iTerminateTimeout; }
bool GetDecode() { return m_bDecode; };
bool GetAppendNZBDir() { return m_bAppendNZBDir; }
bool GetAppendCategoryDir() { return m_bAppendCategoryDir; }
bool GetContinuePartial() { return m_bContinuePartial; }
bool GetRenameBroken() { return m_bRenameBroken; }
int GetRetries() { return m_iRetries; }
int GetRetryInterval() { return m_iRetryInterval; }
bool GetSaveQueue() { return m_bSaveQueue; }
bool GetDupeCheck() { return m_bDupeCheck; }
const char* GetServerIP() { return m_szServerIP; }
const char* GetServerPassword() { return m_szServerPassword; }
int GetServerPort() { return m_szServerPort; }
const char* GetControlIP() { return m_szControlIP; }
const char* GetControlUsername() { return m_szControlUsername; }
const char* GetControlPassword() { return m_szControlPassword; }
int GetControlPort() { return m_iControlPort; }
bool GetSecureControl() { return m_bSecureControl; }
int GetSecurePort() { return m_iSecurePort; }
const char* GetSecureCert() { return m_szSecureCert; }
const char* GetSecureKey() { return m_szSecureKey; }
const char* GetAuthorizedIP() { return m_szAuthorizedIP; }
const char* GetLockFile() { return m_szLockFile; }
const char* GetDaemonUserName() { return m_szDaemonUserName; }
const char* GetDaemonUsername() { return m_szDaemonUsername; }
EOutputMode GetOutputMode() { return m_eOutputMode; }
bool GetReloadQueue() { return m_bReloadQueue; }
bool GetReloadUrlQueue() { return m_bReloadUrlQueue; }
bool GetReloadPostQueue() { return m_bReloadPostQueue; }
int GetUrlConnections() { return m_iUrlConnections; }
int GetLogBufferSize() { return m_iLogBufferSize; }
bool GetCreateLog() { return m_bCreateLog; }
const char* GetLogFile() { return m_szLogFile; }
ELoadPars GetLoadPars() { return m_eLoadPars; }
bool GetParCheck() { return m_bParCheck; }
EParCheck GetParCheck() { return m_eParCheck; }
bool GetParRepair() { return m_bParRepair; }
const char* GetPostProcess() { return m_szPostProcess; }
EParScan GetParScan() { return m_eParScan; }
bool GetParRename() { return m_bParRename; }
EHealthCheck GetHealthCheck() { return m_eHealthCheck; }
const char* GetScriptOrder() { return m_szScriptOrder; }
const char* GetDefScript() { return m_szDefScript; }
const char* GetNZBProcess() { return m_szNZBProcess; }
bool GetStrictParName() { return m_bStrictParName; }
const char* GetNZBAddedProcess() { return m_szNZBAddedProcess; }
int GetUMask() { return m_iUMask; }
int GetUpdateInterval() {return m_iUpdateInterval; }
bool GetCursesNZBName() { return m_bCursesNZBName; }
bool GetCursesTime() { return m_bCursesTime; }
bool GetCursesGroup() { return m_bCursesGroup; }
bool GetCrcCheck() { return m_bCrcCheck; }
bool GetRetryOnCrcError() { return m_bRetryOnCrcError; }
int GetThreadLimit() { return m_iThreadLimit; }
bool GetDirectWrite() { return m_bDirectWrite; }
int GetWriteBufferSize() { return m_iWriteBufferSize; }
int GetNzbDirInterval() { return m_iNzbDirInterval; }
int GetNzbDirFileAge() { return m_iNzbDirFileAge; }
bool GetParCleanupQueue() { return m_bParCleanupQueue; }
int GetDiskSpace() { return m_iDiskSpace; }
EScriptLogKind GetProcessLogKind() { return m_eProcessLogKind; }
bool GetAllowReProcess() { return m_bAllowReProcess; }
bool GetTLS() { return m_bTLS; }
bool GetDumpCore() { return m_bDumpCore; }
bool GetParPauseQueue() { return m_bParPauseQueue; }
bool GetPostPauseQueue() { return m_bPostPauseQueue; }
bool GetScriptPauseQueue() { return m_bScriptPauseQueue; }
bool GetNzbCleanupDisk() { return m_bNzbCleanupDisk; }
bool GetDeleteCleanupDisk() { return m_bDeleteCleanupDisk; }
bool GetMergeNzb() { return m_bMergeNzb; }
int GetParTimeLimit() { return m_iParTimeLimit; }
int GetKeepHistory() { return m_iKeepHistory; }
bool GetAccurateRate() { return m_bAccurateRate; }
bool GetUnpack() { return m_bUnpack; }
bool GetUnpackCleanupDisk() { return m_bUnpackCleanupDisk; }
const char* GetUnrarCmd() { return m_szUnrarCmd; }
const char* GetSevenZipCmd() { return m_szSevenZipCmd; }
bool GetUnpackPauseQueue() { return m_bUnpackPauseQueue; }
const char* GetExtCleanupDisk() { return m_szExtCleanupDisk; }
int GetFeedHistory() { return m_iFeedHistory; }
bool GetUrlForce() { return m_bUrlForce; }
int GetTimeCorrection() { return m_iTimeCorrection; }
Category* FindCategory(const char* szName, bool bSearchAliases) { return m_Categories.FindCategory(szName, bSearchAliases); }
// Parsed command-line parameters
bool GetServerMode() { return m_bServerMode; }
@@ -311,12 +496,17 @@ public:
int GetEditQueueOffset() { return m_iEditQueueOffset; }
int* GetEditQueueIDList() { return m_pEditQueueIDList; }
int GetEditQueueIDCount() { return m_iEditQueueIDCount; }
NameList* GetEditQueueNameList() { return &m_EditQueueNameList; }
EMatchMode GetMatchMode() { return m_EMatchMode; }
const char* GetEditQueueText() { return m_szEditQueueText; }
const char* GetArgFilename() { return m_szArgFilename; }
const char* GetCategory() { return m_szCategory; }
const char* GetAddCategory() { return m_szAddCategory; }
bool GetAddPaused() { return m_bAddPaused; }
const char* GetLastArg() { return m_szLastArg; }
int GetAddPriority() { return m_iAddPriority; }
char* GetAddNZBFilename() { return m_szAddNZBFilename; }
bool GetAddTop() { return m_bAddTop; }
float GetSetRate() { return m_fSetRate; }
int GetSetRate() { return m_iSetRate; }
int GetLogLines() { return m_iLogLines; }
int GetWriteLogKind() { return m_iWriteLogKind; }
bool GetTestBacktrace() { return m_bTestBacktrace; }
@@ -330,8 +520,10 @@ public:
bool GetPausePostProcess() const { return m_bPausePostProcess; }
void SetPauseScan(bool bPauseScan) { m_bPauseScan = bPauseScan; }
bool GetPauseScan() const { return m_bPauseScan; }
void SetDownloadRate(float fRate) { m_fDownloadRate = fRate; }
float GetDownloadRate() const { return m_fDownloadRate; }
void SetDownloadRate(int iRate) { m_iDownloadRate = iRate; }
int GetDownloadRate() const { return m_iDownloadRate; }
void SetResumeTime(time_t tResumeTime) { m_tResumeTime = tResumeTime; }
time_t GetResumeTime() const { return m_tResumeTime; }
};
#endif


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -24,7 +24,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -35,18 +35,21 @@
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#include <fstream>
#ifdef WIN32
#include <par2cmdline.h>
#include <par2repairer.h>
#else
#include <unistd.h>
#include <libpar2/par2cmdline.h>
#include <libpar2/par2repairer.h>
#endif
#include <algorithm>
#include "nzbget.h"
#include "ParChecker.h"
#include "ParCoordinator.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
@@ -66,152 +69,463 @@ const char* Par2CmdLineErrStr[] = { "OK",
class Repairer : public Par2Repairer
{
private:
CommandLine commandLine;
public:
Result PreProcess(const char *szParFilename);
Result Process(bool dorepair);
friend class ParChecker;
};
Result Repairer::PreProcess(const char *szParFilename)
{
#ifdef HAVE_PAR2_BUGFIXES_V2
// Ensure linking against the patched version of libpar2
BugfixesPatchVersion2();
#endif
if (g_pOptions->GetParScan() == Options::psFull)
{
char szWildcardParam[1024];
strncpy(szWildcardParam, szParFilename, 1024);
szWildcardParam[1024-1] = '\0';
char* szBasename = Util::BaseFileName(szWildcardParam);
if (szBasename != szWildcardParam && strlen(szBasename) > 0)
{
szBasename[0] = '*';
szBasename[1] = '\0';
}
const char* argv[] = { "par2", "r", "-v", "-v", szParFilename, szWildcardParam };
if (!commandLine.Parse(6, (char**)argv))
{
return eInvalidCommandLineArguments;
}
}
else
{
const char* argv[] = { "par2", "r", "-v", "-v", szParFilename };
if (!commandLine.Parse(5, (char**)argv))
{
return eInvalidCommandLineArguments;
}
}
return Par2Repairer::PreProcess(commandLine);
}
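In the psFull branch above, the extra command-line argument is the par2 path with its base name replaced by "*", so libpar2 scans every file next to the par set. A std::string sketch of that transformation (function name invented):

#include <string>

// Sketch only: "/dst/My.Post/abc.par2" -> "/dst/My.Post/*"
std::string MakeWildcardParam(const char* szParFilename)
{
	std::string param = szParFilename;
	size_t sep = param.find_last_of("/\\");
	if (sep != std::string::npos)
	{
		param.erase(sep + 1);
		param += "*";
	}
	return param;
}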
Result Repairer::Process(bool dorepair)
{
return Par2Repairer::Process(commandLine, dorepair);
}
class MissingFilesComparator
{
private:
const char* m_szBaseParFilename;
public:
MissingFilesComparator(const char* szBaseParFilename) : m_szBaseParFilename(szBaseParFilename) {}
bool operator()(CommandLine::ExtraFile* pFirst, CommandLine::ExtraFile* pSecond) const;
};
/*
* Files with the same name as in the par-file (and a different extension) are
* placed at the top of the list to be scanned first.
*/
bool MissingFilesComparator::operator()(CommandLine::ExtraFile* pFile1, CommandLine::ExtraFile* pFile2) const
{
char name1[1024];
strncpy(name1, Util::BaseFileName(pFile1->FileName().c_str()), 1024);
name1[1024-1] = '\0';
if (char* ext = strrchr(name1, '.')) *ext = '\0'; // trim extension
char name2[1024];
strncpy(name2, Util::BaseFileName(pFile2->FileName().c_str()), 1024);
name2[1024-1] = '\0';
if (char* ext = strrchr(name2, '.')) *ext = '\0'; // trim extension
return strcmp(name1, m_szBaseParFilename) == 0 && strcmp(name1, name2) != 0;
}
ParChecker::ParChecker()
{
debug("Creating ParChecker");
m_eStatus = psUndefined;
m_eStatus = psFailed;
m_szDestDir = NULL;
m_szNZBName = NULL;
m_szParFilename = NULL;
m_szInfoName = NULL;
m_szErrMsg = NULL;
m_szProgressLabel = (char*)malloc(1024);
m_pRepairer = NULL;
m_iFileProgress = 0;
m_iStageProgress = 0;
m_iExtraFiles = 0;
m_bVerifyingExtraFiles = false;
m_bCancelled = false;
m_eStage = ptLoadingPars;
m_QueuedParFiles.clear();
}
ParChecker::~ParChecker()
{
debug("Destroying ParChecker");
if (m_szParFilename)
{
free(m_szParFilename);
}
if (m_szInfoName)
{
free(m_szInfoName);
}
if (m_szErrMsg)
{
free(m_szErrMsg);
}
free(m_szDestDir);
free(m_szNZBName);
free(m_szInfoName);
free(m_szProgressLabel);
for (QueuedParFiles::iterator it = m_QueuedParFiles.begin(); it != m_QueuedParFiles.end() ;it++)
Cleanup();
}
void ParChecker::Cleanup()
{
delete (Repairer*)m_pRepairer;
m_pRepairer = NULL;
for (FileList::iterator it = m_QueuedParFiles.begin(); it != m_QueuedParFiles.end() ;it++)
{
free(*it);
}
m_QueuedParFiles.clear();
for (FileList::iterator it = m_ProcessedFiles.begin(); it != m_ProcessedFiles.end() ;it++)
{
free(*it);
}
m_ProcessedFiles.clear();
m_sourceFiles.clear();
free(m_szErrMsg);
m_szErrMsg = NULL;
}
void ParChecker::SetParFilename(const char * szParFilename)
void ParChecker::SetDestDir(const char * szDestDir)
{
if (m_szParFilename)
{
free(m_szParFilename);
}
m_szParFilename = strdup(szParFilename);
free(m_szDestDir);
m_szDestDir = strdup(szDestDir);
}
void ParChecker::SetNZBName(const char * szNZBName)
{
free(m_szNZBName);
m_szNZBName = strdup(szNZBName);
}
void ParChecker::SetInfoName(const char * szInfoName)
{
if (m_szInfoName)
{
free(m_szInfoName);
}
free(m_szInfoName);
m_szInfoName = strdup(szInfoName);
}
void ParChecker::SetStatus(EStatus eStatus)
{
m_eStatus = eStatus;
Notify(NULL);
}
void ParChecker::Run()
{
m_bRepairNotNeeded = false;
ParCoordinator::FileList fileList;
if (!ParCoordinator::FindMainPars(m_szDestDir, &fileList))
{
PrintMessage(Message::mkError, "Could not start par-check for %s. Could not find any par-files", m_szNZBName);
m_eStatus = psFailed;
Completed();
return;
}
m_eStatus = psRepairNotNeeded;
m_bCancelled = false;
for (ParCoordinator::FileList::iterator it = fileList.begin(); it != fileList.end(); it++)
{
char* szParFilename = *it;
debug("Found par: %s", szParFilename);
if (!IsStopped() && !m_bCancelled)
{
char szFullParFilename[1024];
snprintf(szFullParFilename, 1024, "%s%c%s", m_szDestDir, (int)PATH_SEPARATOR, szParFilename);
szFullParFilename[1024-1] = '\0';
char szInfoName[1024];
int iBaseLen = 0;
ParCoordinator::ParseParFilename(szParFilename, &iBaseLen, NULL);
int maxlen = iBaseLen < 1024 ? iBaseLen : 1024 - 1;
strncpy(szInfoName, szParFilename, maxlen);
szInfoName[maxlen] = '\0';
char szParInfoName[1024];
snprintf(szParInfoName, 1024, "%s%c%s", m_szNZBName, (int)PATH_SEPARATOR, szInfoName);
szParInfoName[1024-1] = '\0';
SetInfoName(szParInfoName);
EStatus eStatus = RunParCheck(szFullParFilename);
// accumulate total status, the worst status has priority
if (m_eStatus > eStatus)
{
m_eStatus = eStatus;
}
if (g_pOptions->GetCreateBrokenLog())
{
WriteBrokenLog(eStatus);
}
}
free(szParFilename);
}
Completed();
}
ParChecker::EStatus ParChecker::RunParCheck(const char* szParFilename)
{
Cleanup();
m_szParFilename = szParFilename;
m_eStage = ptLoadingPars;
m_iProcessedFiles = 0;
m_iExtraFiles = 0;
m_bVerifyingExtraFiles = false;
m_bCancelled = false;
EStatus eStatus = psFailed;
info("Verifying %s", m_szInfoName);
SetStatus(psWorking);
PrintMessage(Message::mkInfo, "Verifying %s", m_szInfoName);
debug("par: %s", m_szParFilename);
CommandLine commandLine;
const char* argv[] = { "par2", "r", "-v", "-v", m_szParFilename };
if (!commandLine.Parse(5, (char**)argv))
{
error("Could not start par-check for %s. Par-file: %s", m_szInfoName, m_szParFilename);
SetStatus(psFailed);
return;
}
Result res;
Repairer* pRepairer = new Repairer();
m_pRepairer = pRepairer;
pRepairer->sig_filename.connect(sigc::mem_fun(*this, &ParChecker::signal_filename));
pRepairer->sig_progress.connect(sigc::mem_fun(*this, &ParChecker::signal_progress));
pRepairer->sig_done.connect(sigc::mem_fun(*this, &ParChecker::signal_done));
snprintf(m_szProgressLabel, 1024, "Verifying %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
m_iStageProgress = 0;
UpdateProgress();
res = pRepairer->PreProcess(commandLine);
debug("ParChecker: PreProcess-result=%i", res);
if (res != eSuccess || IsStopped())
Result res = (Result)PreProcessPar();
if (IsStopped() || res != eSuccess)
{
error("Could not verify %s: %s", m_szInfoName, IsStopped() ? "due stopping" : "par2-file could not be processed");
m_szErrMsg = strdup("par2-file could not be processed");
SetStatus(psFailed);
delete pRepairer;
return;
Cleanup();
return psFailed;
}
char BufReason[1024];
BufReason[0] = '\0';
if (m_szErrMsg)
{
free(m_szErrMsg);
}
m_szErrMsg = NULL;
m_eStage = ptVerifyingSources;
res = pRepairer->Process(commandLine, false);
Repairer* pRepairer = (Repairer*)m_pRepairer;
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
if (!IsStopped() && pRepairer->missingfilecount > 0 && g_pOptions->GetParScan() == Options::psAuto && AddMissingFiles())
{
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
}
if (!IsStopped() && res == eRepairNotPossible && CheckSplittedFragments())
{
pRepairer->UpdateVerificationResults();
res = pRepairer->Process(commandLine, false);
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
}
if (!IsStopped() && res == eRepairNotPossible)
{
res = (Result)ProcessMorePars();
}
if (IsStopped())
{
Cleanup();
return psFailed;
}
eStatus = psFailed;
if (res == eSuccess)
{
PrintMessage(Message::mkInfo, "Repair not needed for %s", m_szInfoName);
eStatus = psRepairNotNeeded;
}
else if (res == eRepairPossible)
{
eStatus = psRepairPossible;
if (g_pOptions->GetParRepair())
{
PrintMessage(Message::mkInfo, "Repairing %s", m_szInfoName);
SaveSourceList();
snprintf(m_szProgressLabel, 1024, "Repairing %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
m_iStageProgress = 0;
m_iProcessedFiles = 0;
m_eStage = ptRepairing;
m_iFilesToRepair = pRepairer->damagedfilecount + pRepairer->missingfilecount;
UpdateProgress();
res = pRepairer->Process(true);
debug("ParChecker: Process-result=%i", res);
if (res == eSuccess)
{
PrintMessage(Message::mkInfo, "Successfully repaired %s", m_szInfoName);
eStatus = psRepaired;
DeleteLeftovers();
}
}
else
{
PrintMessage(Message::mkInfo, "Repair possible for %s", m_szInfoName);
}
}
if (m_bCancelled)
{
if (m_eStage >= ptRepairing)
{
PrintMessage(Message::mkWarning, "Repair cancelled for %s", m_szInfoName);
m_szErrMsg = strdup("repair cancelled");
eStatus = psRepairPossible;
}
else
{
PrintMessage(Message::mkWarning, "Par-check cancelled for %s", m_szInfoName);
m_szErrMsg = strdup("par-check cancelled");
eStatus = psFailed;
}
}
else if (eStatus == psFailed)
{
if (!m_szErrMsg && (int)res >= 0 && (int)res <= 8)
{
m_szErrMsg = strdup(Par2CmdLineErrStr[res]);
}
PrintMessage(Message::mkError, "Repair failed for %s: %s", m_szInfoName, m_szErrMsg ? m_szErrMsg : "");
}
Cleanup();
return eStatus;
}
int ParChecker::PreProcessPar()
{
Result res = eRepairFailed;
while (!IsStopped() && res != eSuccess)
{
Cleanup();
Repairer* pRepairer = new Repairer();
m_pRepairer = pRepairer;
pRepairer->sig_filename.connect(sigc::mem_fun(*this, &ParChecker::signal_filename));
pRepairer->sig_progress.connect(sigc::mem_fun(*this, &ParChecker::signal_progress));
pRepairer->sig_done.connect(sigc::mem_fun(*this, &ParChecker::signal_done));
res = pRepairer->PreProcess(m_szParFilename);
debug("ParChecker: PreProcess-result=%i", res);
if (IsStopped())
{
PrintMessage(Message::mkError, "Could not verify %s: stopping", m_szInfoName);
m_szErrMsg = strdup("par-check was stopped");
return eRepairFailed;
}
if (res == eInvalidCommandLineArguments)
{
PrintMessage(Message::mkError, "Could not start par-check for %s. Par-file: %s", m_szInfoName, m_szParFilename);
m_szErrMsg = strdup("Command line could not be parsed");
return res;
}
if (res != eSuccess)
{
PrintMessage(Message::mkWarning, "Could not verify %s: par2-file could not be processed", m_szInfoName);
PrintMessage(Message::mkInfo, "Requesting more par2-files for %s", m_szInfoName);
bool bHasMorePars = LoadMainParBak();
if (!bHasMorePars)
{
PrintMessage(Message::mkWarning, "No more par2-files found");
break;
}
}
}
if (res != eSuccess)
{
PrintMessage(Message::mkError, "Could not verify %s: par2-file could not be processed", m_szInfoName);
m_szErrMsg = strdup("par2-file could not be processed");
return res;
}
return res;
}
bool ParChecker::LoadMainParBak()
{
while (!IsStopped())
{
m_mutexQueuedParFiles.Lock();
bool hasMorePars = !m_QueuedParFiles.empty();
for (FileList::iterator it = m_QueuedParFiles.begin(); it != m_QueuedParFiles.end() ;it++)
{
free(*it);
}
m_QueuedParFiles.clear();
m_mutexQueuedParFiles.Unlock();
if (hasMorePars)
{
return true;
}
int iBlockFound = 0;
bool requested = RequestMorePars(1, &iBlockFound);
if (requested)
{
strncpy(m_szProgressLabel, "Awaiting additional par-files", 1024);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
UpdateProgress();
}
m_mutexQueuedParFiles.Lock();
hasMorePars = !m_QueuedParFiles.empty();
m_bQueuedParFilesChanged = false;
m_mutexQueuedParFiles.Unlock();
if (!requested && !hasMorePars)
{
return false;
}
if (!hasMorePars)
{
// wait until new files are added by "AddParFile" or a change is signaled by "QueueChanged"
bool bQueuedParFilesChanged = false;
while (!bQueuedParFilesChanged && !IsStopped())
{
m_mutexQueuedParFiles.Lock();
bQueuedParFilesChanged = m_bQueuedParFilesChanged;
m_mutexQueuedParFiles.Unlock();
usleep(100 * 1000);
}
}
}
return false;
}
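LoadMainParBak waits for newly queued par files by polling m_bQueuedParFilesChanged every 100 ms under the mutex. Purely as a design note, the same hand-off could be written with a condition variable (C++11, not used by this codebase); the names below are illustrative only.

#include <condition_variable>
#include <mutex>

std::mutex queueMutex;
std::condition_variable queueChangedCond;
bool queuedParFilesChanged = false;

// Would replace the usleep() polling loop above.
void WaitForQueueChange()
{
	std::unique_lock<std::mutex> lock(queueMutex);
	queueChangedCond.wait(lock, []{ return queuedParFilesChanged; });
	queuedParFilesChanged = false;
}

// Would be called from AddParFile()/QueueChanged().
void SignalQueueChange()
{
	std::lock_guard<std::mutex> lock(queueMutex);
	queuedParFilesChanged = true;
	queueChangedCond.notify_one();
}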
int ParChecker::ProcessMorePars()
{
Result res = eRepairNotPossible;
Repairer* pRepairer = (Repairer*)m_pRepairer;
bool bMoreFilesLoaded = true;
while (!IsStopped() && res == eRepairNotPossible)
{
int missingblockcount = pRepairer->missingblockcount - pRepairer->recoverypacketmap.size();
if (bMoreFilesLoaded)
{
info("Need more %i par-block(s) for %s", missingblockcount, m_szInfoName);
PrintMessage(Message::mkInfo, "Need more %i par-block(s) for %s", missingblockcount, m_szInfoName);
}
m_mutexQueuedParFiles.Lock();
@@ -237,9 +551,9 @@ void ParChecker::Run()
if (!requested && !hasMorePars)
{
snprintf(BufReason, 1024, "not enough par-blocks, %i block(s) needed, but %i block(s) available", missingblockcount, iBlockFound);
BufReason[1024-1] = '\0';
m_szErrMsg = strdup(BufReason);
m_szErrMsg = (char*)malloc(1024);
snprintf(m_szErrMsg, 1024, "not enough par-blocks, %i block(s) needed, but %i block(s) available", missingblockcount, iBlockFound);
m_szErrMsg[1024-1] = '\0';
break;
}
@@ -266,94 +580,33 @@ void ParChecker::Run()
if (bMoreFilesLoaded)
{
pRepairer->UpdateVerificationResults();
res = pRepairer->Process(commandLine, false);
res = pRepairer->Process(false);
debug("ParChecker: Process-result=%i", res);
}
}
if (IsStopped())
{
SetStatus(psFailed);
delete pRepairer;
return;
}
if (res == eSuccess)
{
info("Repair not needed for %s", m_szInfoName);
m_bRepairNotNeeded = true;
}
else if (res == eRepairPossible)
{
if (g_pOptions->GetParRepair())
{
info("Repairing %s", m_szInfoName);
snprintf(m_szProgressLabel, 1024, "Repairing %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iFileProgress = 0;
m_iStageProgress = 0;
m_iProcessedFiles = 0;
m_eStage = ptRepairing;
m_iFilesToRepair = pRepairer->damagedfilecount + pRepairer->missingfilecount;
UpdateProgress();
res = pRepairer->Process(commandLine, true);
debug("ParChecker: Process-result=%i", res);
if (res == eSuccess)
{
info("Successfully repaired %s", m_szInfoName);
}
}
else
{
info("Repair possible for %s", m_szInfoName);
res = eSuccess;
}
}
if (m_bCancelled)
{
warn("Repair cancelled for %s", m_szInfoName);
m_szErrMsg = strdup("repair cancelled");
SetStatus(psFailed);
}
else if (res == eSuccess)
{
SetStatus(psFinished);
}
else
{
if (!m_szErrMsg && (int)res >= 0 && (int)res <= 8)
{
m_szErrMsg = strdup(Par2CmdLineErrStr[res]);
}
error("Repair failed for %s: %s", m_szInfoName, m_szErrMsg ? m_szErrMsg : "");
SetStatus(psFailed);
}
delete pRepairer;
return res;
}
bool ParChecker::LoadMorePars()
{
m_mutexQueuedParFiles.Lock();
QueuedParFiles moreFiles;
FileList moreFiles;
moreFiles.assign(m_QueuedParFiles.begin(), m_QueuedParFiles.end());
m_QueuedParFiles.clear();
m_mutexQueuedParFiles.Unlock();
for (QueuedParFiles::iterator it = moreFiles.begin(); it != moreFiles.end() ;it++)
for (FileList::iterator it = moreFiles.begin(); it != moreFiles.end() ;it++)
{
char* szParFilename = *it;
bool loadedOK = ((Repairer*)m_pRepairer)->LoadPacketsFromFile(szParFilename);
if (loadedOK)
{
info("File %s successfully loaded for par-check", Util::BaseFileName(szParFilename), m_szInfoName);
PrintMessage(Message::mkInfo, "File %s successfully loaded for par-check", Util::BaseFileName(szParFilename), m_szInfoName);
}
else
{
info("Could not load file %s for par-check", Util::BaseFileName(szParFilename), m_szInfoName);
PrintMessage(Message::mkInfo, "Could not load file %s for par-check", Util::BaseFileName(szParFilename), m_szInfoName);
}
free(szParFilename);
}
@@ -380,11 +633,11 @@ bool ParChecker::CheckSplittedFragments()
{
bool bFragmentsAdded = false;
for (vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
for (std::vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
it != ((Repairer*)m_pRepairer)->sourcefiles.end(); it++)
{
Par2RepairerSourceFile *sourcefile = *it;
if (!sourcefile->GetTargetExists() && AddSplittedFragments(sourcefile->TargetFileName().c_str()))
if (AddSplittedFragments(sourcefile->TargetFileName().c_str()))
{
bFragmentsAdded = true;
}
@@ -407,7 +660,7 @@ bool ParChecker::AddSplittedFragments(const char* szFilename)
szBasename[-1] = '\0';
int iBaseLen = strlen(szBasename);
list<CommandLine::ExtraFile> extrafiles;
std::list<CommandLine::ExtraFile> extrafiles;
DirBrowser dir(szDirectory);
while (const char* filename = dir.Next())
@@ -437,7 +690,7 @@ bool ParChecker::AddSplittedFragments(const char* szFilename)
if (!extrafiles.empty())
{
m_iExtraFiles = extrafiles.size();
m_iExtraFiles += extrafiles.size();
m_bVerifyingExtraFiles = true;
bFragmentsAdded = ((Repairer*)m_pRepairer)->VerifyExtraFiles(extrafiles);
m_bVerifyingExtraFiles = false;
@@ -446,6 +699,93 @@ bool ParChecker::AddSplittedFragments(const char* szFilename)
return bFragmentsAdded;
}
bool ParChecker::AddMissingFiles()
{
PrintMessage(Message::mkInfo, "Performing extra par-scan for %s", m_szInfoName);
char szDirectory[1024];
strncpy(szDirectory, m_szParFilename, 1024);
szDirectory[1024-1] = '\0';
char* szBasename = Util::BaseFileName(szDirectory);
if (szBasename == szDirectory)
{
return false;
}
szBasename[-1] = '\0';
std::list<CommandLine::ExtraFile*> extrafiles;
DirBrowser dir(szDirectory);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && strcmp(filename, "_brokenlog.txt"))
{
bool bAlreadyScanned = false;
for (FileList::iterator it = m_ProcessedFiles.begin(); it != m_ProcessedFiles.end(); it++)
{
const char* szProcessedFilename = *it;
if (!strcasecmp(Util::BaseFileName(szProcessedFilename), filename))
{
bAlreadyScanned = true;
break;
}
}
if (!bAlreadyScanned)
{
char fullfilename[1024];
snprintf(fullfilename, 1024, "%s%c%s", szDirectory, PATH_SEPARATOR, filename);
fullfilename[1024-1] = '\0';
extrafiles.push_back(new CommandLine::ExtraFile(fullfilename, Util::FileSize(fullfilename)));
}
}
}
// Sort the list
char* szBaseParFilename = strdup(Util::BaseFileName(m_szParFilename));
if (char* ext = strrchr(szBaseParFilename, '.')) *ext = '\0'; // trim extension
extrafiles.sort(MissingFilesComparator(szBaseParFilename));
free(szBaseParFilename);
// Scan files
bool bFilesAdded = false;
if (!extrafiles.empty())
{
m_iExtraFiles += extrafiles.size();
m_bVerifyingExtraFiles = true;
std::list<CommandLine::ExtraFile> extrafiles1;
// adding files one by one until all missing files are found
while (!IsStopped() && !m_bCancelled && extrafiles.size() > 0 && ((Repairer*)m_pRepairer)->missingfilecount > 0)
{
CommandLine::ExtraFile* pExtraFile = extrafiles.front();
extrafiles.pop_front();
extrafiles1.clear();
extrafiles1.push_back(*pExtraFile);
bFilesAdded = ((Repairer*)m_pRepairer)->VerifyExtraFiles(extrafiles1) || bFilesAdded;
((Repairer*)m_pRepairer)->UpdateVerificationResults();
delete pExtraFile;
}
m_bVerifyingExtraFiles = false;
// free any remaining objects
for (std::list<CommandLine::ExtraFile*>::iterator it = extrafiles.begin(); it != extrafiles.end() ;it++)
{
delete *it;
}
}
return bFilesAdded;
}
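As an aside on the scan loop above: candidate files are verified one at a time and the loop stops as soon as the repairer reports no missing files, so unrelated files in the directory are not all scanned. A minimal standalone sketch of that early-exit shape, with made-up file names and a pretend match standing in for VerifyExtraFiles():
#include <cstdio>
#include <cstring>
#include <list>
int main()
{
	std::list<const char*> candidates;
	candidates.push_back("a.dat");
	candidates.push_back("b.dat");
	candidates.push_back("c.dat");
	int missingfilecount = 1; // one source file still unaccounted for
	while (!candidates.empty() && missingfilecount > 0)
	{
		const char* name = candidates.front();
		candidates.pop_front();
		printf("verifying %s\n", name);
		if (!strcmp(name, "b.dat")) missingfilecount--; // pretend b.dat matched
	}
	printf("files left unscanned: %i\n", (int)candidates.size()); // prints 1 (c.dat)
	return 0;
}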
void ParChecker::signal_filename(std::string str)
{
const char* szStageMessage[] = { "Loading file", "Verifying file", "Repairing file", "Verifying repaired file" };
@@ -455,7 +795,12 @@ void ParChecker::signal_filename(std::string str)
m_eStage = ptVerifyingRepaired;
}
info("%s %s", szStageMessage[m_eStage], str.c_str());
PrintMessage(Message::mkInfo, "%s %s", szStageMessage[m_eStage], str.c_str());
if (m_eStage == ptLoadingPars || m_eStage == ptVerifyingSources)
{
m_ProcessedFiles.push_back(strdup(str.c_str()));
}
snprintf(m_szProgressLabel, 1024, "%s %s", szStageMessage[m_eStage], str.c_str());
m_szProgressLabel[1024-1] = '\0';
@@ -520,7 +865,7 @@ void ParChecker::signal_done(std::string str, int available, int total)
{
bool bFileExists = true;
for (vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
for (std::vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
it != ((Repairer*)m_pRepairer)->sourcefiles.end(); it++)
{
Par2RepairerSourceFile *sourcefile = *it;
@@ -534,11 +879,11 @@ void ParChecker::signal_done(std::string str, int available, int total)
if (bFileExists)
{
warn("File %s has %i bad block(s) of total %i block(s)", str.c_str(), total - available, total);
PrintMessage(Message::mkWarning, "File %s has %i bad block(s) of total %i block(s)", str.c_str(), total - available, total);
}
else
{
warn("File %s with %i block(s) is missing", str.c_str(), total);
PrintMessage(Message::mkWarning, "File %s with %i block(s) is missing", str.c_str(), total);
}
}
}
@@ -550,8 +895,103 @@ void ParChecker::Cancel()
((Repairer*)m_pRepairer)->cancelled = true;
m_bCancelled = true;
#else
error("Could not cancel par-repair. The used version of libpar2 does not support the cancelling of par-repair. Libpar2 needs to be patched for that feature to work.");
PrintMessage(Message::mkError, "Could not cancel par-repair. The program was compiled using version of libpar2 which doesn't support cancelling of par-repair. Please apply libpar2-patches supplied with NZBGet and recompile libpar2 and NZBGet (see README for details).");
#endif
}
void ParChecker::WriteBrokenLog(EStatus eStatus)
{
char szBrokenLogName[1024];
snprintf(szBrokenLogName, 1024, "%s%c_brokenlog.txt", m_szDestDir, (int)PATH_SEPARATOR);
szBrokenLogName[1024-1] = '\0';
if (eStatus != psRepairNotNeeded || Util::FileExists(szBrokenLogName))
{
FILE* file = fopen(szBrokenLogName, "ab");
if (file)
{
if (eStatus == psFailed)
{
if (m_bCancelled)
{
fprintf(file, "Repair cancelled for %s\n", m_szInfoName);
}
else
{
fprintf(file, "Repair failed for %s: %s\n", m_szInfoName, m_szErrMsg ? m_szErrMsg : "");
}
}
else if (eStatus == psRepairPossible)
{
fprintf(file, "Repair possible for %s\n", m_szInfoName);
}
else if (eStatus == psRepaired)
{
fprintf(file, "Successfully repaired %s\n", m_szInfoName);
}
else if (eStatus == psRepairNotNeeded)
{
fprintf(file, "Repair not needed for %s\n", m_szInfoName);
}
fclose(file);
}
else
{
PrintMessage(Message::mkError, "Could not open file %s", szBrokenLogName);
}
}
}
void ParChecker::SaveSourceList()
{
// Building a list of DiskFile-objects marked as source-files
for (std::vector<Par2RepairerSourceFile*>::iterator it = ((Repairer*)m_pRepairer)->sourcefiles.begin();
it != ((Repairer*)m_pRepairer)->sourcefiles.end(); it++)
{
Par2RepairerSourceFile* sourcefile = (Par2RepairerSourceFile*)*it;
vector<DataBlock>::iterator it2 = sourcefile->SourceBlocks();
for (int i = 0; i < (int)sourcefile->BlockCount(); i++, it2++)
{
DataBlock block = *it2;
DiskFile* pSourceFile = block.GetDiskFile();
if (pSourceFile &&
std::find(m_sourceFiles.begin(), m_sourceFiles.end(), pSourceFile) == m_sourceFiles.end())
{
m_sourceFiles.push_back(pSourceFile);
}
}
}
}
void ParChecker::DeleteLeftovers()
{
// After repairing, check if all DiskFile-objects saved by "SaveSourceList()" have
// corresponding target-files. If not, the source file was replaced; in this case
// the DiskFile-object points to the renamed bak-file, which we can delete.
for (SourceList::iterator it = m_sourceFiles.begin(); it != m_sourceFiles.end(); it++)
{
DiskFile* pSourceFile = (DiskFile*)*it;
bool bFound = false;
for (std::vector<Par2RepairerSourceFile*>::iterator it2 = ((Repairer*)m_pRepairer)->sourcefiles.begin();
it2 != ((Repairer*)m_pRepairer)->sourcefiles.end(); it2++)
{
Par2RepairerSourceFile* sourcefile = *it2;
if (sourcefile->GetTargetFile() == pSourceFile)
{
bFound = true;
break;
}
}
if (!bFound)
{
PrintMessage(Message::mkInfo, "Deleting file %s", Util::BaseFileName(pSourceFile->FileName().c_str()));
remove(pSourceFile->FileName().c_str());
}
}
}
#endif

ParChecker.h

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -29,19 +29,20 @@
#ifndef DISABLE_PARCHECK
#include <deque>
#include <string>
#include "Thread.h"
#include "Observer.h"
#include "Log.h"
class ParChecker : public Thread, public Subject
class ParChecker : public Thread
{
public:
enum EStatus
{
psUndefined,
psWorking,
psFailed,
psFinished
psRepairPossible,
psRepaired,
psRepairNotNeeded
};
enum EStage
@@ -52,19 +53,22 @@ public:
ptVerifyingRepaired,
};
typedef std::deque<char*> QueuedParFiles;
typedef std::deque<char*> FileList;
typedef std::deque<void*> SourceList;
private:
char* m_szInfoName;
char* m_szParFilename;
char* m_szDestDir;
char* m_szNZBName;
const char* m_szParFilename;
EStatus m_eStatus;
EStage m_eStage;
void* m_pRepairer; // declared as void* to prevent the including of libpar2-headers into this header-file
char* m_szErrMsg;
bool m_bRepairNotNeeded;
QueuedParFiles m_QueuedParFiles;
FileList m_QueuedParFiles;
Mutex m_mutexQueuedParFiles;
bool m_bQueuedParFilesChanged;
FileList m_ProcessedFiles;
int m_iProcessedFiles;
int m_iFilesToRepair;
int m_iExtraFiles;
@@ -73,10 +77,20 @@ private:
int m_iFileProgress;
int m_iStageProgress;
bool m_bCancelled;
SourceList m_sourceFiles;
void Cleanup();
EStatus RunParCheck(const char* szParFilename);
int PreProcessPar();
bool LoadMainParBak();
int ProcessMorePars();
bool LoadMorePars();
bool CheckSplittedFragments();
bool AddSplittedFragments(const char* szFilename);
bool AddMissingFiles();
void WriteBrokenLog(EStatus eStatus);
void SaveSourceList();
void DeleteLeftovers();
void signal_filename(std::string str);
void signal_progress(double progress);
void signal_done(std::string str, int available, int total);
@@ -89,6 +103,8 @@ protected:
*/
virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound) = 0;
virtual void UpdateProgress() {}
virtual void Completed() {}
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...) {}
EStage GetStage() { return m_eStage; }
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetFileProgress() { return m_iFileProgress; }
@@ -98,14 +114,12 @@ public:
ParChecker();
virtual ~ParChecker();
virtual void Run();
void SetDestDir(const char* szDestDir);
const char* GetParFilename() { return m_szParFilename; }
void SetParFilename(const char* szParFilename);
const char* GetInfoName() { return m_szInfoName; }
void SetInfoName(const char* szInfoName);
void SetStatus(EStatus eStatus);
void SetNZBName(const char* szNZBName);
EStatus GetStatus() { return m_eStatus; }
const char* GetErrMsg() { return m_szErrMsg; }
bool GetRepairNotNeeded() { return m_bRepairNotNeeded; }
void AddParFile(const char* szParFilename);
void QueueChanged();
void Cancel();

ParCoordinator.cpp (new file, 742 lines)

@@ -0,0 +1,742 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <stdarg.h>
#include <ctype.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include "nzbget.h"
#include "ParCoordinator.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
#include "QueueCoordinator.h"
#include "DiskState.h"
extern QueueCoordinator* g_pQueueCoordinator;
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
#ifndef DISABLE_PARCHECK
bool ParCoordinator::PostParChecker::RequestMorePars(int iBlockNeeded, int* pBlockFound)
{
return m_pOwner->RequestMorePars(m_pPostInfo->GetNZBInfo(), GetParFilename(), iBlockNeeded, pBlockFound);
}
void ParCoordinator::PostParChecker::UpdateProgress()
{
m_pOwner->UpdateParCheckProgress();
}
void ParCoordinator::PostParChecker::PrintMessage(Message::EKind eKind, const char* szFormat, ...)
{
char szText[1024];
va_list args;
va_start(args, szFormat);
vsnprintf(szText, 1024, szFormat, args);
va_end(args);
szText[1024-1] = '\0';
m_pOwner->PrintMessage(m_pPostInfo, eKind, "%s", szText);
}
void ParCoordinator::PostParRenamer::UpdateProgress()
{
m_pOwner->UpdateParRenameProgress();
}
void ParCoordinator::PostParRenamer::PrintMessage(Message::EKind eKind, const char* szFormat, ...)
{
char szText[1024];
va_list args;
va_start(args, szFormat);
vsnprintf(szText, 1024, szFormat, args);
va_end(args);
szText[1024-1] = '\0';
m_pOwner->PrintMessage(m_pPostInfo, eKind, "%s", szText);
}
#endif
ParCoordinator::ParCoordinator()
{
debug("Creating ParCoordinator");
#ifndef DISABLE_PARCHECK
m_bStopped = false;
m_ParChecker.m_pOwner = this;
m_ParRenamer.m_pOwner = this;
#endif
}
ParCoordinator::~ParCoordinator()
{
debug("Destroying ParCoordinator");
}
#ifndef DISABLE_PARCHECK
void ParCoordinator::Stop()
{
debug("Stopping ParCoordinator");
m_bStopped = true;
if (m_ParChecker.IsRunning())
{
m_ParChecker.Stop();
int iMSecWait = 5000;
while (m_ParChecker.IsRunning() && iMSecWait > 0)
{
usleep(50 * 1000);
iMSecWait -= 50;
}
if (m_ParChecker.IsRunning())
{
warn("Terminating par-check for %s", m_ParChecker.GetInfoName());
m_ParChecker.Kill();
}
}
}
#endif
void ParCoordinator::PausePars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
debug("ParCoordinator: Pausing pars");
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
g_pQueueCoordinator->GetQueueEditor()->LockedEditEntry(pDownloadQueue, pFileInfo->GetID(), false,
QueueEditor::eaGroupPauseExtraPars, 0, NULL);
break;
}
}
}
bool ParCoordinator::FindMainPars(const char* szPath, FileList* pFileList)
{
if (pFileList)
{
pFileList->clear();
}
DirBrowser dir(szPath);
while (const char* filename = dir.Next())
{
int iBaseLen = 0;
if (ParseParFilename(filename, &iBaseLen, NULL))
{
if (!pFileList)
{
return true;
}
// check if the base file was already added to the list
bool exists = false;
for (FileList::iterator it = pFileList->begin(); it != pFileList->end(); it++)
{
const char* filename2 = *it;
exists = SameParCollection(filename, filename2);
if (exists)
{
break;
}
}
if (!exists)
{
pFileList->push_back(strdup(filename));
}
}
}
return pFileList && !pFileList->empty();
}
bool ParCoordinator::SameParCollection(const char* szFilename1, const char* szFilename2)
{
int iBaseLen1 = 0, iBaseLen2 = 0;
return ParseParFilename(szFilename1, &iBaseLen1, NULL) &&
ParseParFilename(szFilename2, &iBaseLen2, NULL) &&
iBaseLen1 == iBaseLen2 &&
!strncasecmp(szFilename1, szFilename2, iBaseLen1);
}
bool ParCoordinator::ParseParFilename(const char* szParFilename, int* iBaseNameLen, int* iBlocks)
{
char szFilename[1024];
strncpy(szFilename, szParFilename, 1024);
szFilename[1024-1] = '\0';
for (char* p = szFilename; *p; p++) *p = tolower(*p); // convert string to lowercase
int iLen = strlen(szFilename);
if (iLen < 6)
{
return false;
}
// find last occurrence of ".par2" and trim the filename after it
char* szEnd = szFilename;
while (char* p = strstr(szEnd, ".par2")) szEnd = p + 5;
*szEnd = '\0';
iLen = strlen(szFilename);
if (iLen < 6)
{
return false;
}
if (strcasecmp(szFilename + iLen - 5, ".par2"))
{
return false;
}
*(szFilename + iLen - 5) = '\0';
int blockcnt = 0;
char* p = strrchr(szFilename, '.');
if (p && !strncasecmp(p, ".vol", 4))
{
char* b = strchr(p, '+');
if (!b)
{
b = strchr(p, '-');
}
if (b)
{
blockcnt = atoi(b+1);
*p = '\0';
}
}
if (iBaseNameLen)
{
*iBaseNameLen = strlen(szFilename);
}
if (iBlocks)
{
*iBlocks = blockcnt;
}
return true;
}
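For reference, the vol-part handling above turns a name like "abc.vol07+08.par2" into a block count of 8, while a plain "abc.par2" yields 0 (the main par2 file). A minimal standalone sketch of just that step; VolBlockCount is a hypothetical helper for this example and assumes, as the function does, that the ".par2" extension was already trimmed and the name lowercased:
#include <cstdio>
#include <cstring>
#include <cstdlib>
static int VolBlockCount(const char* name)
{
	const char* p = strrchr(name, '.');
	if (!p || strncmp(p, ".vol", 4) != 0)
	{
		return 0;
	}
	const char* b = strchr(p, '+');
	if (!b)
	{
		b = strchr(p, '-');
	}
	return b ? atoi(b + 1) : 0;
}
int main()
{
	printf("%i\n", VolBlockCount("abc.vol07+08")); // prints 8 (recovery blocks)
	printf("%i\n", VolBlockCount("abc"));          // prints 0 (main par2 file)
	return 0;
}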
#ifndef DISABLE_PARCHECK
/**
* DownloadQueue must be locked prior to call of this function.
*/
void ParCoordinator::StartParCheckJob(PostInfo* pPostInfo)
{
m_eCurrentJob = jkParCheck;
m_ParChecker.SetPostInfo(pPostInfo);
m_ParChecker.SetDestDir(pPostInfo->GetNZBInfo()->GetDestDir());
m_ParChecker.SetNZBName(pPostInfo->GetNZBInfo()->GetName());
m_ParChecker.PrintMessage(Message::mkInfo, "Checking pars for %s", pPostInfo->GetInfoName());
pPostInfo->SetWorking(true);
m_ParChecker.Start();
}
/**
* DownloadQueue must be locked prior to call of this function.
*/
void ParCoordinator::StartParRenameJob(PostInfo* pPostInfo)
{
const char* szDestDir = pPostInfo->GetNZBInfo()->GetDestDir();
char szFinalDir[1024];
if (pPostInfo->GetNZBInfo()->GetUnpackStatus() == NZBInfo::usSuccess)
{
pPostInfo->GetNZBInfo()->BuildFinalDirName(szFinalDir, 1024);
szFinalDir[1024-1] = '\0';
szDestDir = szFinalDir;
}
m_eCurrentJob = jkParRename;
m_ParRenamer.SetPostInfo(pPostInfo);
m_ParRenamer.SetDestDir(szDestDir);
m_ParRenamer.SetInfoName(pPostInfo->GetNZBInfo()->GetName());
m_ParRenamer.PrintMessage(Message::mkInfo, "Checking renamed files for %s", pPostInfo->GetNZBInfo()->GetName());
pPostInfo->SetWorking(true);
m_ParRenamer.Start();
}
bool ParCoordinator::Cancel()
{
if (m_eCurrentJob == jkParCheck)
{
#ifdef HAVE_PAR2_CANCEL
if (!m_ParChecker.GetCancelled())
{
debug("Cancelling par-repair for %s", m_ParChecker.GetInfoName());
m_ParChecker.Cancel();
return true;
}
#else
warn("Cannot cancel par-repair for %s, used version of libpar2 does not support cancelling", m_ParChecker.GetInfoName());
#endif
}
else if (m_eCurrentJob == jkParRename)
{
if (!m_ParRenamer.GetCancelled())
{
debug("Cancelling par-rename for %s", m_ParRenamer.GetInfoName());
m_ParRenamer.Cancel();
return true;
}
}
return false;
}
/**
* DownloadQueue must be locked prior to call of this function.
*/
bool ParCoordinator::AddPar(FileInfo* pFileInfo, bool bDeleted)
{
bool bSameCollection = m_ParChecker.IsRunning() &&
pFileInfo->GetNZBInfo() == m_ParChecker.GetPostInfo()->GetNZBInfo() &&
SameParCollection(pFileInfo->GetFilename(), Util::BaseFileName(m_ParChecker.GetParFilename()));
if (bSameCollection && !bDeleted)
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", pFileInfo->GetNZBInfo()->GetDestDir(), (int)PATH_SEPARATOR, pFileInfo->GetFilename());
szFullFilename[1024-1] = '\0';
m_ParChecker.AddParFile(szFullFilename);
if (g_pOptions->GetParPauseQueue())
{
PauseDownload();
}
}
else
{
m_ParChecker.QueueChanged();
}
return bSameCollection;
}
void ParCoordinator::ParCheckCompleted()
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
PostInfo* pPostInfo = m_ParChecker.GetPostInfo();
// Update ParStatus (accumulate result)
if ((m_ParChecker.GetStatus() == ParChecker::psRepaired ||
m_ParChecker.GetStatus() == ParChecker::psRepairNotNeeded) &&
pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped)
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psSuccess);
}
else if (m_ParChecker.GetStatus() == ParChecker::psRepairPossible &&
pPostInfo->GetNZBInfo()->GetParStatus() != NZBInfo::psFailure)
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psRepairPossible);
}
else
{
pPostInfo->GetNZBInfo()->SetParStatus(NZBInfo::psFailure);
}
pPostInfo->SetWorking(false);
pPostInfo->SetStage(PostInfo::ptQueued);
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
}
/**
* Unpause par2-files
* Returns true if files with the required number of blocks were unpaused,
* or false if there are no more files in the queue for this collection or not enough blocks.
*/
bool ParCoordinator::RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilename, int iBlockNeeded, int* pBlockFound)
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
Blocks blocks;
blocks.clear();
int iBlockFound = 0;
int iCurBlockFound = 0;
FindPars(pDownloadQueue, pNZBInfo, szParFilename, &blocks, true, true, &iCurBlockFound);
iBlockFound += iCurBlockFound;
if (iBlockFound < iBlockNeeded)
{
FindPars(pDownloadQueue, pNZBInfo, szParFilename, &blocks, true, false, &iCurBlockFound);
iBlockFound += iCurBlockFound;
}
if (iBlockFound < iBlockNeeded)
{
FindPars(pDownloadQueue, pNZBInfo, szParFilename, &blocks, false, false, &iCurBlockFound);
iBlockFound += iCurBlockFound;
}
if (iBlockFound >= iBlockNeeded)
{
// 1. First unpause all files with a par-block count less than or equal to iBlockNeeded,
// starting from the file with the largest block count.
// If the par-collection was built exponentially and all par-files are present,
// this step selects par-files with the exact number of blocks we need.
while (iBlockNeeded > 0)
{
BlockInfo* pBestBlockInfo = NULL;
for (Blocks::iterator it = blocks.begin(); it != blocks.end(); it++)
{
BlockInfo* pBlockInfo = *it;
if (pBlockInfo->m_iBlockCount <= iBlockNeeded &&
(!pBestBlockInfo || pBestBlockInfo->m_iBlockCount < pBlockInfo->m_iBlockCount))
{
pBestBlockInfo = pBlockInfo;
}
}
if (pBestBlockInfo)
{
if (pBestBlockInfo->m_pFileInfo->GetPaused())
{
m_ParChecker.PrintMessage(Message::mkInfo, "Unpausing %s%c%s for par-recovery", pNZBInfo->GetName(), (int)PATH_SEPARATOR, pBestBlockInfo->m_pFileInfo->GetFilename());
pBestBlockInfo->m_pFileInfo->SetPaused(false);
pBestBlockInfo->m_pFileInfo->SetExtraPriority(true);
}
iBlockNeeded -= pBestBlockInfo->m_iBlockCount;
blocks.remove(pBestBlockInfo);
delete pBestBlockInfo;
}
else
{
break;
}
}
// 2. Then unpause other files.
// This step is only needed if the par-collection was not built exponentially,
// not all par-files are present, or some of them are corrupted.
// It is not optimal, but we expect the first step to work well in most cases,
// so the second step should rarely be needed.
while (iBlockNeeded > 0)
{
BlockInfo* pBlockInfo = blocks.front();
if (pBlockInfo->m_pFileInfo->GetPaused())
{
m_ParChecker.PrintMessage(Message::mkInfo, "Unpausing %s%c%s for par-recovery", pNZBInfo->GetName(), (int)PATH_SEPARATOR, pBlockInfo->m_pFileInfo->GetFilename());
pBlockInfo->m_pFileInfo->SetPaused(false);
pBlockInfo->m_pFileInfo->SetExtraPriority(true);
}
iBlockNeeded -= pBlockInfo->m_iBlockCount;
}
}
g_pQueueCoordinator->UnlockQueue();
if (pBlockFound)
{
*pBlockFound = iBlockFound;
}
for (Blocks::iterator it = blocks.begin(); it != blocks.end(); it++)
{
delete *it;
}
blocks.clear();
bool bOK = iBlockNeeded <= 0;
if (bOK && g_pOptions->GetParPauseQueue())
{
UnpauseDownload();
}
return bOK;
}
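A minimal sketch of the two-step selection above, operating on plain block counts instead of queue entries; SelectBlocks is a hypothetical helper, and step 2 here simply consumes the remaining volumes front to back:
#include <cstdio>
#include <list>
static int SelectBlocks(std::list<int> blocks, int needed)
{
	// step 1: repeatedly take the largest volume that still fits into "needed"
	while (needed > 0)
	{
		std::list<int>::iterator best = blocks.end();
		for (std::list<int>::iterator it = blocks.begin(); it != blocks.end(); ++it)
		{
			if (*it <= needed && (best == blocks.end() || *it > *best))
			{
				best = it;
			}
		}
		if (best == blocks.end()) break;
		needed -= *best;
		blocks.erase(best);
	}
	// step 2: if still short, consume the remaining volumes front to back
	while (needed > 0 && !blocks.empty())
	{
		needed -= blocks.front();
		blocks.pop_front();
	}
	return needed; // <= 0 means enough blocks were selected
}
int main()
{
	std::list<int> counts;
	counts.push_back(1);
	counts.push_back(2);
	counts.push_back(4);
	counts.push_back(8);
	counts.push_back(16);
	printf("remaining need: %i\n", SelectBlocks(counts, 10)); // prints 0
	return 0;
}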
void ParCoordinator::FindPars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szParFilename,
Blocks* pBlocks, bool bStrictParName, bool bExactParName, int* pBlockFound)
{
*pBlockFound = 0;
// extract base name from szParFilename (trim .par2-extension and possible .vol-part)
char* szBaseParFilename = Util::BaseFileName(szParFilename);
char szMainBaseFilename[1024];
int iMainBaseLen = 0;
if (!ParseParFilename(szBaseParFilename, &iMainBaseLen, NULL))
{
// should not happen
error("Internal error: could not parse filename %s", szBaseParFilename);
return;
}
int maxlen = iMainBaseLen < 1024 ? iMainBaseLen : 1024 - 1;
strncpy(szMainBaseFilename, szBaseParFilename, maxlen);
szMainBaseFilename[maxlen] = '\0';
for (char* p = szMainBaseFilename; *p; p++) *p = tolower(*p); // convert string to lowercase
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
int iBlocks = 0;
if (pFileInfo->GetNZBInfo() == pNZBInfo &&
ParseParFilename(pFileInfo->GetFilename(), NULL, &iBlocks) &&
iBlocks > 0)
{
bool bUseFile = true;
if (bExactParName)
{
bUseFile = SameParCollection(pFileInfo->GetFilename(), Util::BaseFileName(szParFilename));
}
else if (bStrictParName)
{
// pFileInfo->GetFilename() may not be confirmed yet and may contain
// additional text if the subject could not be parsed correctly
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
char szCandidateFileName[1024];
snprintf(szCandidateFileName, 1024, "%s.par2", szMainBaseFilename);
szCandidateFileName[1024-1] = '\0';
if (!strstr(szLoFileName, szCandidateFileName))
{
snprintf(szCandidateFileName, 1024, "%s.vol", szMainBaseFilename);
szCandidateFileName[1024-1] = '\0';
bUseFile = strstr(szLoFileName, szCandidateFileName);
}
}
bool bAlreadyAdded = false;
// check if the file is already in the list
if (bUseFile)
{
for (Blocks::iterator it = pBlocks->begin(); it != pBlocks->end(); it++)
{
BlockInfo* pBlockInfo = *it;
if (pBlockInfo->m_pFileInfo == pFileInfo)
{
bAlreadyAdded = true;
break;
}
}
}
// if it is a par2-file with blocks, it came from the same NZB-request,
// and it belongs to the same file collection (same base name),
// then we can use it
if (bUseFile && !bAlreadyAdded)
{
BlockInfo* pBlockInfo = new BlockInfo();
pBlockInfo->m_pFileInfo = pFileInfo;
pBlockInfo->m_iBlockCount = iBlocks;
pBlocks->push_back(pBlockInfo);
*pBlockFound += iBlocks;
}
}
}
}
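The "strict" name test above accepts a queued filename if, after lowercasing, it contains either "<base>.par2" or "<base>.vol". A small standalone sketch of that check; StrictParNameMatch is a hypothetical helper, the base is assumed to be already lowercased as in the real code, and the sample subjects are made up:
#include <cctype>
#include <cstdio>
#include <string>
static bool StrictParNameMatch(std::string queued, const std::string& base)
{
	for (size_t i = 0; i < queued.size(); i++)
	{
		queued[i] = (char)tolower((unsigned char)queued[i]);
	}
	return queued.find(base + ".par2") != std::string::npos ||
		queued.find(base + ".vol") != std::string::npos;
}
int main()
{
	printf("%i\n", (int)StrictParNameMatch("Re: ABC.vol03+04.par2 (1/5)", "abc")); // prints 1
	printf("%i\n", (int)StrictParNameMatch("abcd.par2", "abc"));                   // prints 0
	return 0;
}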
void ParCoordinator::UpdateParCheckProgress()
{
g_pQueueCoordinator->LockQueue();
PostInfo* pPostInfo = m_ParChecker.GetPostInfo();
if (m_ParChecker.GetFileProgress() == 0)
{
pPostInfo->SetProgressLabel(m_ParChecker.GetProgressLabel());
}
pPostInfo->SetFileProgress(m_ParChecker.GetFileProgress());
pPostInfo->SetStageProgress(m_ParChecker.GetStageProgress());
PostInfo::EStage StageKind[] = { PostInfo::ptLoadingPars, PostInfo::ptVerifyingSources, PostInfo::ptRepairing, PostInfo::ptVerifyingRepaired };
PostInfo::EStage eStage = StageKind[m_ParChecker.GetStage()];
time_t tCurrent = time(NULL);
if (!pPostInfo->GetStartTime())
{
pPostInfo->SetStartTime(tCurrent);
}
if (pPostInfo->GetStage() != eStage)
{
pPostInfo->SetStage(eStage);
pPostInfo->SetStageTime(tCurrent);
}
bool bParCancel = false;
#ifdef HAVE_PAR2_CANCEL
if (!m_ParChecker.GetCancelled())
{
if ((g_pOptions->GetParTimeLimit() > 0) &&
m_ParChecker.GetStage() == ParChecker::ptRepairing &&
((g_pOptions->GetParTimeLimit() > 5 && tCurrent - pPostInfo->GetStageTime() > 5 * 60) ||
(g_pOptions->GetParTimeLimit() <= 5 && tCurrent - pPostInfo->GetStageTime() > 1 * 60)))
{
// the first five (or one) minutes have elapsed, now we can check the estimated time
int iEstimatedRepairTime = (int)((tCurrent - pPostInfo->GetStartTime()) * 1000 /
(pPostInfo->GetStageProgress() > 0 ? pPostInfo->GetStageProgress() : 1));
if (iEstimatedRepairTime > g_pOptions->GetParTimeLimit() * 60)
{
debug("Estimated repair time %i seconds", iEstimatedRepairTime);
m_ParChecker.PrintMessage(Message::mkWarning, "Cancelling par-repair for %s, estimated repair time (%i minutes) exceeds allowed repair time", m_ParChecker.GetInfoName(), iEstimatedRepairTime / 60);
bParCancel = true;
}
}
}
#endif
if (bParCancel)
{
m_ParChecker.Cancel();
}
g_pQueueCoordinator->UnlockQueue();
CheckPauseState(pPostInfo);
}
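A worked example of the estimate above with made-up numbers: stage progress is tracked on a 0..1000 (permille) scale, so elapsed * 1000 / progress extrapolates the total repair time; with ParTimeLimit=30 the 60-minute estimate below would trigger cancellation.
#include <cstdio>
int main()
{
	int elapsedSec = 6 * 60;   // repairing for 6 minutes so far
	int stageProgress = 100;   // 10% done on the 0..1000 permille scale
	int estimatedSec = elapsedSec * 1000 / stageProgress; // 3600 s
	printf("estimated repair time: %i min\n", estimatedSec / 60); // prints 60
	return 0;
}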
void ParCoordinator::CheckPauseState(PostInfo* pPostInfo)
{
if (g_pOptions->GetPausePostProcess())
{
time_t tStageTime = pPostInfo->GetStageTime();
time_t tStartTime = pPostInfo->GetStartTime();
time_t tWaitTime = time(NULL);
// wait until Post-processor is unpaused
while (g_pOptions->GetPausePostProcess() && !m_bStopped)
{
usleep(100 * 1000);
// update time stamps
time_t tDelta = time(NULL) - tWaitTime;
if (tStageTime > 0)
{
pPostInfo->SetStageTime(tStageTime + tDelta);
}
if (tStartTime > 0)
{
pPostInfo->SetStartTime(tStartTime + tDelta);
}
}
}
}
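A worked example (hypothetical timestamps) of the adjustment above: while post-processing is paused, the start and stage timestamps are shifted forward by the paused time, so later elapsed-time calculations exclude the pause.
#include <cstdio>
#include <ctime>
int main()
{
	time_t tStartTime = 1000;  // hypothetical start timestamp
	time_t tDelta = 30;        // seconds spent paused
	tStartTime += tDelta;      // shift forward, as CheckPauseState does
	time_t tNow = 1100;
	printf("effective elapsed: %i s\n", (int)(tNow - tStartTime)); // prints 70, not 100
	return 0;
}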
void ParCoordinator::ParRenameCompleted()
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
PostInfo* pPostInfo = m_ParRenamer.GetPostInfo();
pPostInfo->GetNZBInfo()->SetRenameStatus(m_ParRenamer.GetStatus() == ParRenamer::psSuccess ? NZBInfo::rsSuccess : NZBInfo::rsFailure);
pPostInfo->SetWorking(false);
pPostInfo->SetStage(PostInfo::ptQueued);
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
}
void ParCoordinator::UpdateParRenameProgress()
{
g_pQueueCoordinator->LockQueue();
PostInfo* pPostInfo = m_ParRenamer.GetPostInfo();
pPostInfo->SetProgressLabel(m_ParRenamer.GetProgressLabel());
pPostInfo->SetStageProgress(m_ParRenamer.GetStageProgress());
time_t tCurrent = time(NULL);
if (!pPostInfo->GetStartTime())
{
pPostInfo->SetStartTime(tCurrent);
}
if (pPostInfo->GetStage() != PostInfo::ptRenaming)
{
pPostInfo->SetStage(PostInfo::ptRenaming);
pPostInfo->SetStageTime(tCurrent);
}
g_pQueueCoordinator->UnlockQueue();
CheckPauseState(pPostInfo);
}
void ParCoordinator::PrintMessage(PostInfo* pPostInfo, Message::EKind eKind, const char* szFormat, ...)
{
char szText[1024];
va_list args;
va_start(args, szFormat);
vsnprintf(szText, 1024, szFormat, args);
va_end(args);
szText[1024-1] = '\0';
pPostInfo->AppendMessage(eKind, szText);
switch (eKind)
{
case Message::mkDetail:
detail("%s", szText);
break;
case Message::mkInfo:
info("%s", szText);
break;
case Message::mkWarning:
warn("%s", szText);
break;
case Message::mkError:
error("%s", szText);
break;
case Message::mkDebug:
debug("%s", szText);
break;
}
}
#endif

ParCoordinator.h (new file, 130 lines)

@@ -0,0 +1,130 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef PARCOORDINATOR_H
#define PARCOORDINATOR_H
#include <list>
#include <deque>
#include "DownloadInfo.h"
#ifndef DISABLE_PARCHECK
#include "ParChecker.h"
#include "ParRenamer.h"
#endif
class ParCoordinator
{
private:
#ifndef DISABLE_PARCHECK
class PostParChecker: public ParChecker
{
private:
ParCoordinator* m_pOwner;
PostInfo* m_pPostInfo;
protected:
virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound);
virtual void UpdateProgress();
virtual void Completed() { m_pOwner->ParCheckCompleted(); }
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...);
public:
PostInfo* GetPostInfo() { return m_pPostInfo; }
void SetPostInfo(PostInfo* pPostInfo) { m_pPostInfo = pPostInfo; }
friend class ParCoordinator;
};
class PostParRenamer: public ParRenamer
{
private:
ParCoordinator* m_pOwner;
PostInfo* m_pPostInfo;
protected:
virtual void UpdateProgress();
virtual void Completed() { m_pOwner->ParRenameCompleted(); }
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...);
public:
PostInfo* GetPostInfo() { return m_pPostInfo; }
void SetPostInfo(PostInfo* pPostInfo) { m_pPostInfo = pPostInfo; }
friend class ParCoordinator;
};
struct BlockInfo
{
FileInfo* m_pFileInfo;
int m_iBlockCount;
};
typedef std::list<BlockInfo*> Blocks;
enum EJobKind
{
jkParCheck,
jkParRename
};
private:
PostParChecker m_ParChecker;
bool m_bStopped;
PostParRenamer m_ParRenamer;
EJobKind m_eCurrentJob;
protected:
virtual bool PauseDownload() = 0;
virtual bool UnpauseDownload() = 0;
void UpdateParCheckProgress();
void UpdateParRenameProgress();
void ParCheckCompleted();
void ParRenameCompleted();
void CheckPauseState(PostInfo* pPostInfo);
bool RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilename, int iBlockNeeded, int* pBlockFound);
void PrintMessage(PostInfo* pPostInfo, Message::EKind eKind, const char* szFormat, ...);
#endif
public:
typedef std::deque<char*> FileList;
public:
ParCoordinator();
virtual ~ParCoordinator();
static bool FindMainPars(const char* szPath, FileList* pFileList);
static bool ParseParFilename(const char* szParFilename, int* iBaseNameLen, int* iBlocks);
static bool SameParCollection(const char* szFilename1, const char* szFilename2);
void PausePars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
#ifndef DISABLE_PARCHECK
bool AddPar(FileInfo* pFileInfo, bool bDeleted);
void FindPars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szParFilename,
Blocks* pBlocks, bool bStrictParName, bool bExactParName, int* pBlockFound);
void StartParCheckJob(PostInfo* pPostInfo);
void StartParRenameJob(PostInfo* pPostInfo);
void Stop();
bool Cancel();
#endif
};
#endif

ParRenamer.cpp (new file, 345 lines)

@@ -0,0 +1,345 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#ifndef DISABLE_PARCHECK
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#ifdef WIN32
#include <par2cmdline.h>
#include <par2repairer.h>
#include <md5.h>
#else
#include <unistd.h>
#include <libpar2/par2cmdline.h>
#include <libpar2/par2repairer.h>
#include <libpar2/md5.h>
#endif
#include "nzbget.h"
#include "ParRenamer.h"
#include "ParCoordinator.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
class ParRenamerRepairer : public Par2Repairer
{
public:
friend class ParRenamer;
};
ParRenamer::FileHash::FileHash(const char* szFilename, const char* szHash)
{
m_szFilename = strdup(szFilename);
m_szHash = strdup(szHash);
}
ParRenamer::FileHash::~FileHash()
{
free(m_szFilename);
free(m_szHash);
}
ParRenamer::ParRenamer()
{
debug("Creating ParRenamer");
m_eStatus = psFailed;
m_szDestDir = NULL;
m_szInfoName = NULL;
m_szProgressLabel = (char*)malloc(1024);
m_iStageProgress = 0;
m_bCancelled = false;
}
ParRenamer::~ParRenamer()
{
debug("Destroying ParRenamer");
free(m_szDestDir);
free(m_szInfoName);
free(m_szProgressLabel);
Cleanup();
}
void ParRenamer::Cleanup()
{
ClearHashList();
for (DirList::iterator it = m_DirList.begin(); it != m_DirList.end(); it++)
{
free(*it);
}
m_DirList.clear();
}
void ParRenamer::ClearHashList()
{
for (FileHashList::iterator it = m_FileHashList.begin(); it != m_FileHashList.end(); it++)
{
delete *it;
}
m_FileHashList.clear();
}
void ParRenamer::SetDestDir(const char * szDestDir)
{
free(m_szDestDir);
m_szDestDir = strdup(szDestDir);
}
void ParRenamer::SetInfoName(const char * szInfoName)
{
free(m_szInfoName);
m_szInfoName = strdup(szInfoName);
}
void ParRenamer::Cancel()
{
m_bCancelled = true;
}
void ParRenamer::Run()
{
Cleanup();
m_bCancelled = false;
m_iFileCount = 0;
m_iCurFile = 0;
m_iRenamedCount = 0;
m_eStatus = psFailed;
snprintf(m_szProgressLabel, 1024, "Checking renamed files for %s", m_szInfoName);
m_szProgressLabel[1024-1] = '\0';
m_iStageProgress = 0;
UpdateProgress();
BuildDirList(m_szDestDir);
for (DirList::iterator it = m_DirList.begin(); it != m_DirList.end(); it++)
{
char* szDestDir = *it;
debug("Checking %s", szDestDir);
ClearHashList();
LoadParFiles(szDestDir);
CheckFiles(szDestDir);
}
if (m_bCancelled)
{
PrintMessage(Message::mkWarning, "Renaming cancelled for %s", m_szInfoName);
}
else if (m_iRenamedCount > 0)
{
PrintMessage(Message::mkInfo, "Successfully renamed %i file(s) for %s", m_iRenamedCount, m_szInfoName);
m_eStatus = psSuccess;
}
else
{
PrintMessage(Message::mkInfo, "No renamed files found for %s", m_szInfoName);
}
Cleanup();
Completed();
}
void ParRenamer::BuildDirList(const char* szDestDir)
{
m_DirList.push_back(strdup(szDestDir));
char* szFullFilename = (char*)malloc(1024);
DirBrowser* pDirBrowser = new DirBrowser(szDestDir);
while (const char* filename = pDirBrowser->Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && !m_bCancelled)
{
snprintf(szFullFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (Util::DirectoryExists(szFullFilename))
{
BuildDirList(szFullFilename);
}
else
{
m_iFileCount++;
}
}
}
free(szFullFilename);
delete pDirBrowser;
}
void ParRenamer::LoadParFiles(const char* szDestDir)
{
ParCoordinator::FileList parFileList;
ParCoordinator::FindMainPars(szDestDir, &parFileList);
for (ParCoordinator::FileList::iterator it = parFileList.begin(); it != parFileList.end(); it++)
{
char* szParFilename = *it;
char szFullParFilename[1024];
snprintf(szFullParFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, szParFilename);
szFullParFilename[1024-1] = '\0';
LoadParFile(szFullParFilename);
free(*it);
}
}
void ParRenamer::LoadParFile(const char* szParFilename)
{
ParRenamerRepairer* pRepairer = new ParRenamerRepairer();
if (!pRepairer->LoadPacketsFromFile(szParFilename))
{
PrintMessage(Message::mkWarning, "Could not load par2-file %s", szParFilename);
delete pRepairer;
return;
}
for (map<MD5Hash, Par2RepairerSourceFile*>::iterator it = pRepairer->sourcefilemap.begin(); it != pRepairer->sourcefilemap.end(); it++)
{
if (m_bCancelled)
{
break;
}
Par2RepairerSourceFile* sourceFile = (*it).second;
m_FileHashList.push_back(new FileHash(sourceFile->GetDescriptionPacket()->FileName().c_str(),
sourceFile->GetDescriptionPacket()->Hash16k().print().c_str()));
}
delete pRepairer;
}
void ParRenamer::CheckFiles(const char* szDestDir)
{
DirBrowser dir(szDestDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") && !m_bCancelled)
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (!Util::DirectoryExists(szFullFilename))
{
snprintf(m_szProgressLabel, 1024, "Checking file %s", filename);
m_szProgressLabel[1024-1] = '\0';
m_iStageProgress = m_iCurFile * 1000 / m_iFileCount;
UpdateProgress();
m_iCurFile++;
CheckFile(szDestDir, szFullFilename);
}
}
}
}
void ParRenamer::CheckFile(const char* szDestDir, const char* szFilename)
{
debug("Computing hash for %s", szFilename);
const int iBlockSize = 16*1024;
FILE* pFile = fopen(szFilename, "rb");
if (!pFile)
{
PrintMessage(Message::mkError, "Could not open file %s", szFilename);
return;
}
// load first 16K of the file into buffer
void* pBuffer = malloc(iBlockSize);
int iReadBytes = fread(pBuffer, 1, iBlockSize, pFile);
int iError = ferror(pFile);
if (iReadBytes != iBlockSize && iError)
{
PrintMessage(Message::mkError, "Could not read file %s", szFilename);
return;
}
fclose(pFile);
MD5Hash hash16k;
MD5Context context;
context.Update(pBuffer, iReadBytes);
context.Final(hash16k);
free(pBuffer);
debug("file: %s; hash16k: %s", Util::BaseFileName(szFilename), hash16k.print().c_str());
for (FileHashList::iterator it = m_FileHashList.begin(); it != m_FileHashList.end(); it++)
{
FileHash* pFileHash = *it;
if (!strcmp(pFileHash->GetHash(), hash16k.print().c_str()))
{
debug("Found correct filename: %s", pFileHash->GetFilename());
char szDstFilename[1024];
snprintf(szDstFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, pFileHash->GetFilename());
szDstFilename[1024-1] = '\0';
if (!Util::FileExists(szDstFilename))
{
PrintMessage(Message::mkInfo, "Renaming %s to %s", Util::BaseFileName(szFilename), pFileHash->GetFilename());
if (Util::MoveFile(szFilename, szDstFilename))
{
m_iRenamedCount++;
}
else
{
PrintMessage(Message::mkError, "Could not rename %s to %s", szFilename, szDstFilename);
}
}
break;
}
}
}
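The matching above boils down to a lookup: the par2 description packets supply (hash-of-first-16k, original name) pairs, and a disk file whose first-16k hash appears in that set is renamed to the original name. A standalone sketch of that shape; the hash strings and the RenameByHash helper are stand-ins for this example only, while the real code computes the hash with libpar2's MD5Context and renames via Util::MoveFile:
#include <cstdio>
#include <map>
#include <string>
typedef std::map<std::string, std::string> HashToName; // hash16k -> original filename
static void RenameByHash(const HashToName& known, const std::string& diskName,
	const std::string& diskHash)
{
	HashToName::const_iterator it = known.find(diskHash);
	if (it != known.end() && it->second != diskName)
	{
		printf("Renaming %s to %s\n", diskName.c_str(), it->second.c_str());
		// the real code renames via Util::MoveFile() and skips existing targets
	}
}
int main()
{
	HashToName known;
	known["a1b2c3"] = "movie.part01.rar"; // would come from a par2 description packet
	RenameByHash(known, "obfuscated.001", "a1b2c3"); // prints the rename
	return 0;
}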
#endif

ParRenamer.h (new file, 106 lines)

@@ -0,0 +1,106 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef PARRENAMER_H
#define PARRENAMER_H
#ifndef DISABLE_PARCHECK
#include <deque>
#include "Thread.h"
#include "Log.h"
class ParRenamer : public Thread
{
public:
enum EStatus
{
psFailed,
psSuccess
};
class FileHash
{
private:
char* m_szFilename;
char* m_szHash;
public:
FileHash(const char* szFilename, const char* szHash);
~FileHash();
const char* GetFilename() { return m_szFilename; }
const char* GetHash() { return m_szHash; }
};
typedef std::deque<FileHash*> FileHashList;
typedef std::deque<char*> DirList;
private:
char* m_szInfoName;
char* m_szDestDir;
EStatus m_eStatus;
char* m_szProgressLabel;
int m_iStageProgress;
bool m_bCancelled;
DirList m_DirList;
FileHashList m_FileHashList;
int m_iFileCount;
int m_iCurFile;
int m_iRenamedCount;
void Cleanup();
void ClearHashList();
void BuildDirList(const char* szDestDir);
void CheckDir(const char* szDestDir);
void LoadParFiles(const char* szDestDir);
void LoadParFile(const char* szParFilename);
void CheckFiles(const char* szDestDir);
void CheckFile(const char* szDestDir, const char* szFilename);
protected:
virtual void UpdateProgress() {}
virtual void Completed() {}
virtual void PrintMessage(Message::EKind eKind, const char* szFormat, ...) {}
const char* GetProgressLabel() { return m_szProgressLabel; }
int GetStageProgress() { return m_iStageProgress; }
public:
ParRenamer();
virtual ~ParRenamer();
virtual void Run();
void SetDestDir(const char* szDestDir);
const char* GetInfoName() { return m_szInfoName; }
void SetInfoName(const char* szInfoName);
void SetStatus(EStatus eStatus);
EStatus GetStatus() { return m_eStatus; }
void Cancel();
bool GetCancelled() { return m_bCancelled; }
};
#endif
#endif

File diff suppressed because it is too large.

PrePostProcessor.h

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -31,15 +31,13 @@
#include "Thread.h"
#include "Observer.h"
#include "DownloadInfo.h"
#include "Scanner.h"
#ifndef DISABLE_PARCHECK
#include "ParChecker.h"
#endif
#include "ParCoordinator.h"
#include "DupeCoordinator.h"
class PrePostProcessor : public Thread
{
public:
// NOTE: changes to this enum must be synced with "eRemoteEditAction" in unit "MessageBase.h"
enum EEditAction
{
eaPostMoveOffset = 51, // move post to m_iOffset relative to the current position in post-queue
@@ -47,104 +45,91 @@ public:
eaPostMoveBottom,
eaPostDelete,
eaHistoryDelete,
eaHistoryFinalDelete,
eaHistoryReturn,
eaHistoryProcess
eaHistoryProcess,
eaHistoryRedownload,
eaHistorySetParameter,
eaHistorySetDupeKey,
eaHistorySetDupeScore,
eaHistorySetDupeMode,
eaHistorySetDupeBackup,
eaHistoryMarkBad,
eaHistoryMarkGood
};
private:
typedef std::deque<char*> FileList;
class QueueCoordinatorObserver: public Observer
{
public:
PrePostProcessor* owner;
virtual void Update(Subject* Caller, void* Aspect) { owner->QueueCoordinatorUpdate(Caller, Aspect); }
PrePostProcessor* m_pOwner;
virtual void Update(Subject* Caller, void* Aspect) { m_pOwner->QueueCoordinatorUpdate(Caller, Aspect); }
};
#ifndef DISABLE_PARCHECK
class ParCheckerObserver: public Observer
{
public:
PrePostProcessor* owner;
virtual void Update(Subject* Caller, void* Aspect) { owner->ParCheckerUpdate(Caller, Aspect); }
};
class PostParChecker: public ParChecker
class PostParCoordinator: public ParCoordinator
{
private:
PrePostProcessor* m_Owner;
PostInfo* m_pPostInfo;
PrePostProcessor* m_pOwner;
protected:
virtual bool RequestMorePars(int iBlockNeeded, int* pBlockFound);
virtual void UpdateProgress();
public:
PostInfo* GetPostInfo() { return m_pPostInfo; }
void SetPostInfo(PostInfo* pPostInfo) { m_pPostInfo = pPostInfo; }
virtual bool PauseDownload() { return m_pOwner->PauseDownload(); }
virtual bool UnpauseDownload() { return m_pOwner->UnpauseDownload(); }
friend class PrePostProcessor;
};
struct BlockInfo
class PostDupeCoordinator: public DupeCoordinator
{
FileInfo* m_pFileInfo;
int m_iBlockCount;
private:
PrePostProcessor* m_pOwner;
protected:
virtual void HistoryRedownload(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo);
virtual void DeleteQueuedFile(const char* szQueuedFile) { m_pOwner->DeleteQueuedFile(szQueuedFile); }
friend class PrePostProcessor;
};
typedef std::list<BlockInfo*> Blocks;
#endif
private:
PostParCoordinator m_ParCoordinator;
PostDupeCoordinator m_DupeCoordinator;
QueueCoordinatorObserver m_QueueCoordinatorObserver;
bool m_bHasMoreJobs;
bool m_bPostScript;
bool m_bSchedulerPauseChanged;
bool m_bSchedulerPause;
bool m_bPostPause;
Scanner m_Scanner;
const char* m_szPauseReason;
bool IsNZBFileCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo,
bool bIgnoreFirstInPostQueue, bool bIgnorePausedPars, bool bCheckPostQueue, bool bAllowOnlyOneDeleted);
bool bIgnorePausedPars, bool bAllowOnlyOneDeleted);
void CheckPostQueue();
void JobCompleted(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void StartScriptJob(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void StartJob(DownloadQueue* pDownloadQueue, PostInfo* pPostInfo);
void SaveQueue(DownloadQueue* pDownloadQueue);
void SanitisePostQueue(PostQueue* pPostQueue);
void CheckDiskSpace();
void ApplySchedulerState();
void CheckScheduledResume();
void UpdatePauseState(bool bNeedPause, const char* szReason);
bool PauseDownload();
bool UnpauseDownload();
void NZBFound(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBAdded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBDownloaded(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBDeleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void NZBCompleted(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, bool bSaveQueue);
bool FindMainPars(const char* szPath, FileList* pFileList);
bool ParseParFilename(const char* szParFilename, int* iBaseNameLen, int* iBlocks);
bool SameParCollection(const char* szFilename1, const char* szFilename2);
bool CreatePostJobs(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, bool bParCheck, bool bPostScript, bool bAddTop);
void DeleteQueuedFile(const char* szQueuedFile);
void PausePars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
NZBInfo* MergeGroups(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
int FindGroupID(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
bool PostQueueMove(IDList* pIDList, EEditAction eAction, int iOffset);
bool PostQueueDelete(IDList* pIDList);
bool HistoryDelete(IDList* pIDList);
bool HistoryReturn(IDList* pIDList, bool bReprocess);
bool HistoryEdit(IDList* pIDList, EEditAction eAction, int iOffset, const char* szText);
void HistoryDelete(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bFinal);
void HistoryReturn(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bReprocess);
void HistoryRedownload(DownloadQueue* pDownloadQueue, HistoryList::iterator itHistory, HistoryInfo* pHistoryInfo, bool bRestorePauseState);
void HistorySetParameter(HistoryInfo* pHistoryInfo, const char* szText);
void HistorySetDupeParam(HistoryInfo* pHistoryInfo, EEditAction eAction, const char* szText);
void HistoryTransformToDup(DownloadQueue* pDownloadQueue, HistoryInfo* pHistoryInfo, int rindex);
void CheckHistory();
void Cleanup();
FileInfo* GetQueueGroup(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void CheckHistory();
void DeletePostThread(PostInfo* pPostInfo);
#ifndef DISABLE_PARCHECK
PostParChecker m_ParChecker;
ParCheckerObserver m_ParCheckerObserver;
void ParCheckerUpdate(Subject* Caller, void* Aspect);
bool AddPar(FileInfo* pFileInfo, bool bDeleted);
bool RequestMorePars(NZBInfo* pNZBInfo, const char* szParFilename, int iBlockNeeded, int* pBlockFound);
void FindPars(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo, const char* szParFilename,
Blocks* pBlocks, bool bStrictParName, bool bExactParName, int* pBlockFound);
void UpdateParProgress();
void StartParJob(PostInfo* pPostInfo);
#endif
public:
PrePostProcessor();
virtual ~PrePostProcessor();
@@ -152,8 +137,7 @@ public:
virtual void Stop();
void QueueCoordinatorUpdate(Subject* Caller, void* Aspect);
bool HasMoreJobs() { return m_bHasMoreJobs; }
void ScanNZBDir();
bool QueueEditList(IDList* pIDList, EEditAction eAction, int iOffset);
bool QueueEditList(IDList* pIDList, EEditAction eAction, int iOffset, const char* szText);
};
#endif

File diff suppressed because it is too large.

QueueCoordinator.h

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -43,12 +43,15 @@ class QueueCoordinator : public Thread, public Observer, public Subject, public
{
public:
typedef std::list<ArticleDownloader*> ActiveDownloads;
enum EAspectAction
{
eaNZBFileFound,
eaNZBFileAdded,
eaFileCompleted,
eaFileDeleted
};
struct Aspect
{
EAspectAction eAction;
@@ -63,6 +66,8 @@ private:
QueueEditor m_QueueEditor;
Mutex m_mutexDownloadQueue;
bool m_bHasMoreJobs;
int m_iDownloadsLimit;
int m_iServerConfigGeneration;
// statistics
static const int SPEEDMETER_SLOTS = 30;
@@ -71,6 +76,12 @@ private:
int m_iSpeedTotalBytes;
int m_iSpeedTime[SPEEDMETER_SLOTS];
int m_iSpeedStartTime;
time_t m_tSpeedCorrection;
#ifdef HAVE_SPINLOCK
SpinLock m_spinlockSpeed;
#else
Mutex m_mutexSpeed;
#endif
int m_iSpeedBytesIndex;
long long m_iAllBytes;
@@ -83,14 +94,15 @@ private:
bool GetNextArticle(FileInfo* &pFileInfo, ArticleInfo* &pArticleInfo);
void StartArticleDownload(FileInfo* pFileInfo, ArticleInfo* pArticleInfo, NNTPConnection* pConnection);
void BuildArticleFilename(ArticleDownloader* pArticleDownloader, FileInfo* pFileInfo, ArticleInfo* pArticleInfo);
bool IsDupe(FileInfo* pFileInfo);
void ArticleCompleted(ArticleDownloader* pArticleDownloader);
void DeleteFileInfo(FileInfo* pFileInfo, bool bCompleted);
void StatFileInfo(FileInfo* pFileInfo, bool bCompleted);
void CheckHealth(FileInfo* pFileInfo);
void ResetHangingDownloads();
void ResetSpeedStat();
void EnterLeaveStandBy(bool bEnter);
void AdjustStartTime();
void AdjustDownloadsLimit();
public:
QueueCoordinator();
@@ -101,7 +113,7 @@ public:
// statistics
long long CalcRemainingSize();
virtual float CalcCurrentDownloadSpeed();
virtual int CalcCurrentDownloadSpeed();
virtual void AddSpeedReading(int iBytes);
void CalcStat(int* iUpTimeSec, int* iDnTimeSec, long long* iAllBytes, bool* bStandBy);
@@ -109,11 +121,14 @@ public:
DownloadQueue* LockQueue();
void UnlockQueue() ;
void AddNZBFileToQueue(NZBFile* pNZBFile, bool bAddFirst);
void AddFileInfosToFileQueue(NZBFile* pNZBFile, FileQueue* pFileQueue, bool bAddFirst);
bool HasMoreJobs() { return m_bHasMoreJobs; }
bool GetStandBy() { return m_bStandBy; }
bool DeleteQueueEntry(FileInfo* pFileInfo);
bool SetQueueEntryNZBCategory(NZBInfo* pNZBInfo, const char* szCategory);
bool SetQueueEntryNZBName(NZBInfo* pNZBInfo, const char* szName);
bool MergeQueueEntries(NZBInfo* pDestNZBInfo, NZBInfo* pSrcNZBInfo);
bool SplitQueueEntries(FileQueue* pFileList, const char* szName, NZBInfo** pNewNZBInfo);
void DiscardDiskFile(FileInfo* pFileInfo);
QueueEditor* GetQueueEditor() { return &m_QueueEditor; }

QueueEditor.cpp

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -24,7 +24,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -33,9 +33,10 @@
#include <stdlib.h>
#include <string.h>
#include <cctype>
#include <cstdio>
#include <stdio.h>
#include <ctype.h>
#include <sys/stat.h>
#include <set>
#ifndef WIN32
#include <unistd.h>
#include <sys/time.h>
@@ -110,17 +111,24 @@ void QueueEditor::PauseUnpauseEntry(FileInfo* pFileInfo, bool bPause)
}
/*
* Removes entry with index iEntry
* Removes entry
* returns true if successful, false if operation is not possible
*/
void QueueEditor::DeleteEntry(FileInfo* pFileInfo)
{
info("Deleting file %s from download queue", pFileInfo->GetFilename());
if (pFileInfo->GetNZBInfo()->GetDeleting())
{
detail("Deleting file %s from download queue", pFileInfo->GetFilename());
}
else
{
info("Deleting file %s from download queue", pFileInfo->GetFilename());
}
g_pQueueCoordinator->DeleteQueueEntry(pFileInfo);
}
/*
* Moves entry identified with iID in the queue
* Moves entry in the queue
* returns true if successful, false if operation is not possible
*/
void QueueEditor::MoveEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, int iOffset)
@@ -148,12 +156,23 @@ void QueueEditor::MoveEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo,
}
}
/*
* Set priority for entry
* returns true if successful, false if operation is not possible
*/
void QueueEditor::SetPriorityEntry(FileInfo* pFileInfo, const char* szPriority)
{
debug("Setting priority %s for file %s", szPriority, pFileInfo->GetFilename());
int iPriority = atoi(szPriority);
pFileInfo->SetPriority(iPriority);
}
bool QueueEditor::EditEntry(int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
{
IDList cIDList;
cIDList.clear();
cIDList.push_back(ID);
return EditList(&cIDList, bSmartOrder, eAction, iOffset, szText);
return EditList(&cIDList, NULL, mmID, bSmartOrder, eAction, iOffset, szText);
}
bool QueueEditor::LockedEditEntry(DownloadQueue* pDownloadQueue, int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
@@ -164,11 +183,20 @@ bool QueueEditor::LockedEditEntry(DownloadQueue* pDownloadQueue, int ID, bool bS
return InternEditList(pDownloadQueue, &cIDList, bSmartOrder, eAction, iOffset, szText);
}
bool QueueEditor::EditList(IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText)
bool QueueEditor::EditList(IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, bool bSmartOrder,
EEditAction eAction, int iOffset, const char* szText)
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
bool bOK = InternEditList(pDownloadQueue, pIDList, bSmartOrder, eAction, iOffset, szText);
bool bOK = true;
if (pNameList)
{
pIDList = new IDList();
bOK = BuildIDListFromNameList(pDownloadQueue, pIDList, pNameList, eMatchMode, eAction);
}
bOK = bOK && (InternEditList(pDownloadQueue, pIDList, bSmartOrder, eAction, iOffset, szText) || eMatchMode == mmRegEx);
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
@@ -177,6 +205,11 @@ bool QueueEditor::EditList(IDList* pIDList, bool bSmartOrder, EEditAction eActio
g_pQueueCoordinator->UnlockQueue();
if (pNameList)
{
delete pIDList;
}
return bOK;
}
@@ -195,66 +228,93 @@ bool QueueEditor::InternEditList(DownloadQueue* pDownloadQueue, IDList* pIDList,
ItemList cItemList;
PrepareList(pDownloadQueue, &cItemList, pIDList, bSmartOrder, eAction, iOffset);
if (eAction == eaFilePauseAllPars || eAction == eaFilePauseExtraPars)
switch (eAction)
{
PauseParsInGroups(&cItemList, eAction == eaFilePauseExtraPars);
}
else if (eAction == eaGroupMerge)
{
MergeGroups(pDownloadQueue, &cItemList);
}
else
{
for (ItemList::iterator it = cItemList.begin(); it != cItemList.end(); it++)
{
EditItem* pItem = *it;
switch (eAction)
case eaFilePauseAllPars:
case eaFilePauseExtraPars:
PauseParsInGroups(&cItemList, eAction == eaFilePauseExtraPars);
break;
case eaGroupMerge:
return MergeGroups(pDownloadQueue, &cItemList);
case eaFileSplit:
return SplitGroup(pDownloadQueue, &cItemList, szText);
case eaFileReorder:
ReorderFiles(pDownloadQueue, &cItemList);
break;
default:
for (ItemList::iterator it = cItemList.begin(); it != cItemList.end(); it++)
{
case eaFilePause:
PauseUnpauseEntry(pItem->m_pFileInfo, true);
break;
EditItem* pItem = *it;
switch (eAction)
{
case eaFilePause:
PauseUnpauseEntry(pItem->m_pFileInfo, true);
break;
case eaFileResume:
PauseUnpauseEntry(pItem->m_pFileInfo, false);
break;
case eaFileResume:
PauseUnpauseEntry(pItem->m_pFileInfo, false);
break;
case eaFileMoveOffset:
case eaFileMoveTop:
case eaFileMoveBottom:
MoveEntry(pDownloadQueue, pItem->m_pFileInfo, pItem->m_iOffset);
break;
case eaFileMoveOffset:
case eaFileMoveTop:
case eaFileMoveBottom:
MoveEntry(pDownloadQueue, pItem->m_pFileInfo, pItem->m_iOffset);
break;
case eaFileDelete:
DeleteEntry(pItem->m_pFileInfo);
break;
case eaFileDelete:
DeleteEntry(pItem->m_pFileInfo);
break;
case eaGroupSetCategory:
SetNZBCategory(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaFileSetPriority:
SetPriorityEntry(pItem->m_pFileInfo, szText);
break;
case eaGroupSetParameter:
SetNZBParameter(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaGroupSetCategory:
SetNZBCategory(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaGroupPause:
case eaGroupResume:
case eaGroupDelete:
case eaGroupMoveTop:
case eaGroupMoveBottom:
case eaGroupMoveOffset:
case eaGroupPauseAllPars:
case eaGroupPauseExtraPars:
EditGroup(pDownloadQueue, pItem->m_pFileInfo, eAction, iOffset);
break;
case eaGroupSetName:
SetNZBName(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaFilePauseAllPars:
case eaFilePauseExtraPars:
case eaGroupMerge:
// remove compiler warning "enumeration not handled in switch"
break;
case eaGroupSetDupeKey:
case eaGroupSetDupeScore:
case eaGroupSetDupeMode:
SetNZBDupeParam(pItem->m_pFileInfo->GetNZBInfo(), eAction, szText);
break;
case eaGroupSetParameter:
SetNZBParameter(pItem->m_pFileInfo->GetNZBInfo(), szText);
break;
case eaGroupPause:
case eaGroupResume:
case eaGroupDelete:
case eaGroupDupeDelete:
case eaGroupFinalDelete:
case eaGroupMoveTop:
case eaGroupMoveBottom:
case eaGroupMoveOffset:
case eaGroupPauseAllPars:
case eaGroupPauseExtraPars:
case eaGroupSetPriority:
EditGroup(pDownloadQueue, pItem->m_pFileInfo, eAction, iOffset, szText);
break;
case eaFilePauseAllPars:
case eaFilePauseExtraPars:
case eaGroupMerge:
case eaFileReorder:
case eaFileSplit:
// remove compiler warning "enumeration not handled in switch"
break;
}
delete pItem;
}
delete pItem;
}
}
return cItemList.size() > 0;
@@ -365,10 +425,80 @@ void QueueEditor::PrepareList(DownloadQueue* pDownloadQueue, ItemList* pItemList
}
}
bool QueueEditor::EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, EEditAction eAction, int iOffset)
bool QueueEditor::BuildIDListFromNameList(DownloadQueue* pDownloadQueue, IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, EEditAction eAction)
{
#ifndef HAVE_REGEX_H
if (eMatchMode == mmRegEx)
{
return false;
}
#endif
std::set<int> uniqueIDs;
for (NameList::iterator it = pNameList->begin(); it != pNameList->end(); it++)
{
const char* szName = *it;
RegEx *pRegEx = NULL;
if (eMatchMode == mmRegEx)
{
pRegEx = new RegEx(szName);
if (!pRegEx->IsValid())
{
delete pRegEx;
return false;
}
}
bool bFound = false;
for (FileQueue::iterator it2 = pDownloadQueue->GetFileQueue()->begin(); it2 != pDownloadQueue->GetFileQueue()->end(); it2++)
{
FileInfo* pFileInfo = *it2;
if (eAction < eaGroupMoveOffset)
{
// file action
char szFilename[MAX_PATH];
snprintf(szFilename, sizeof(szFilename) - 1, "%s/%s", pFileInfo->GetNZBInfo()->GetName(), Util::BaseFileName(pFileInfo->GetFilename()));
if (((!pRegEx && !strcmp(szFilename, szName)) || (pRegEx && pRegEx->Match(szFilename))) &&
(uniqueIDs.find(pFileInfo->GetID()) == uniqueIDs.end()))
{
uniqueIDs.insert(pFileInfo->GetID());
pIDList->push_back(pFileInfo->GetID());
bFound = true;
}
}
else
{
// group action
const char *szFilename = pFileInfo->GetNZBInfo()->GetName();
if (((!pRegEx && !strcmp(szFilename, szName)) || (pRegEx && pRegEx->Match(szFilename))) &&
(uniqueIDs.find(pFileInfo->GetNZBInfo()->GetID()) == uniqueIDs.end()))
{
uniqueIDs.insert(pFileInfo->GetNZBInfo()->GetID());
pIDList->push_back(pFileInfo->GetID());
bFound = true;
}
}
}
delete pRegEx;
if (!bFound && (eMatchMode == mmName))
{
return false;
}
}
return true;
}
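For file-level actions the matcher above compares each pattern against the composite name "<nzb name>/<base file name>", while for group-level actions it compares against the NZB name alone; with mmName every pattern must match at least one queue entry, whereas a non-matching mmRegEx pattern is tolerated. A minimal calling sketch, assuming NameList is the vector-of-C-strings container used above, that the caller does not already hold the queue lock, and that the pattern "Ubuntu.*" is purely hypothetical:
// pause every group whose NZB name matches the regular expression
QueueEditor* pEditor = g_pQueueCoordinator->GetQueueEditor();
NameList names;
names.push_back((char*)"Ubuntu.*");
pEditor->EditList(NULL, &names, QueueEditor::mmRegEx, true,
	QueueEditor::eaGroupPause, 0, NULL);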
bool QueueEditor::EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, EEditAction eAction, int iOffset, const char* szText)
{
IDList cIDList;
cIDList.clear();
bool bAllPaused = true;
// collecting files belonging to group
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
@@ -377,6 +507,7 @@ bool QueueEditor::EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo,
if (pFileInfo2->GetNZBInfo() == pFileInfo->GetNZBInfo())
{
cIDList.push_back(pFileInfo2->GetID());
bAllPaused &= pFileInfo2->GetPaused();
}
}
@@ -425,16 +556,25 @@ bool QueueEditor::EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo,
}
iOffset = iFileOffset;
}
else if (eAction == eaGroupDelete)
else if (eAction == eaGroupDelete || eAction == eaGroupDupeDelete || eAction == eaGroupFinalDelete)
{
pFileInfo->GetNZBInfo()->SetDeleted(true);
pFileInfo->GetNZBInfo()->SetDeleting(true);
pFileInfo->GetNZBInfo()->SetAvoidHistory(eAction == eaGroupFinalDelete);
pFileInfo->GetNZBInfo()->SetDeletePaused(bAllPaused);
if (eAction == eaGroupDupeDelete)
{
pFileInfo->GetNZBInfo()->SetDeleteStatus(NZBInfo::dsDupe);
}
pFileInfo->GetNZBInfo()->SetCleanupDisk(CanCleanupDisk(pDownloadQueue, pFileInfo->GetNZBInfo()));
}
EEditAction GroupToFileMap[] = { (EEditAction)0, eaFileMoveOffset, eaFileMoveTop, eaFileMoveBottom, eaFilePause, eaFileResume, eaFileDelete, eaFilePauseAllPars, eaFilePauseExtraPars,
eaFileMoveOffset, eaFileMoveTop, eaFileMoveBottom, eaFilePause, eaFileResume, eaFileDelete, eaFilePauseAllPars, eaFilePauseExtraPars, (EEditAction)0, (EEditAction)0, (EEditAction)0 };
EEditAction GroupToFileMap[] = { (EEditAction)0, eaFileMoveOffset, eaFileMoveTop, eaFileMoveBottom, eaFilePause,
eaFileResume, eaFileDelete, eaFilePauseAllPars, eaFilePauseExtraPars, eaFileSetPriority, eaFileReorder, eaFileSplit,
eaFileMoveOffset, eaFileMoveTop, eaFileMoveBottom, eaFilePause, eaFileResume, eaFileDelete, eaFileDelete, eaFileDelete,
eaFilePauseAllPars, eaFilePauseExtraPars, eaFileSetPriority,
(EEditAction)0, (EEditAction)0, (EEditAction)0 };
return InternEditList(pDownloadQueue, &cIDList, true, GroupToFileMap[eAction], iOffset, NULL);
return InternEditList(pDownloadQueue, &cIDList, true, GroupToFileMap[eAction], iOffset, szText);
}
void QueueEditor::BuildGroupList(DownloadQueue* pDownloadQueue, FileList* pGroupList)
@@ -509,7 +649,7 @@ void QueueEditor::AlignAffectedGroups(DownloadQueue* pDownloadQueue, IDList* pID
}
if (iOffset > 0)
{
for (unsigned int i = iNum + 1; i <= cGroupList.size() - iOffset; i++)
for (int i = iNum + 1; i <= (int)cGroupList.size() - iOffset; i++)
{
if (!ItemExists(&cAffectedGroupList, cGroupList[i]))
{
@@ -533,11 +673,11 @@ void QueueEditor::AlignAffectedGroups(DownloadQueue* pDownloadQueue, IDList* pID
for (FileList::iterator it = cAffectedGroupList.begin(); it != cAffectedGroupList.end(); it++)
{
FileInfo* pFileInfo = *it;
AlignGroup(pDownloadQueue, pFileInfo);
AlignGroup(pDownloadQueue, pFileInfo->GetNZBInfo());
}
}
void QueueEditor::AlignGroup(DownloadQueue* pDownloadQueue, FileInfo* pFirstFileInfo)
void QueueEditor::AlignGroup(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo)
{
FileInfo* pLastFileInfo = NULL;
unsigned int iLastNum = 0;
@@ -545,7 +685,7 @@ void QueueEditor::AlignGroup(DownloadQueue* pDownloadQueue, FileInfo* pFirstFile
while (iNum < pDownloadQueue->GetFileQueue()->size())
{
FileInfo* pFileInfo = pDownloadQueue->GetFileQueue()->at(iNum);
if (pFirstFileInfo->GetNZBInfo() == pFileInfo->GetNZBInfo())
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
if (pLastFileInfo && iNum - iLastNum > 1)
{
@@ -687,8 +827,15 @@ void QueueEditor::SetNZBCategory(NZBInfo* pNZBInfo, const char* szCategory)
g_pQueueCoordinator->SetQueueEntryNZBCategory(pNZBInfo, szCategory);
}
void QueueEditor::SetNZBName(NZBInfo* pNZBInfo, const char* szName)
{
debug("QueueEditor: renaming '%s' to '%s'", Util::BaseFileName(pNZBInfo->GetFilename()), szName);
g_pQueueCoordinator->SetQueueEntryNZBName(pNZBInfo, szName);
}
/**
* Check if deletion of already downloaded files is possible (when nzb id deleted from queue).
* Check if deletion of already downloaded files is possible (when nzb is deleted from queue).
* The deletion is almost always possible, except when all remaining files in the queue
* (belonging to this nzb-file) are PARs.
*/
@@ -697,28 +844,33 @@ bool QueueEditor::CanCleanupDisk(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInf
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
if (!strstr(szLoFileName, ".par2"))
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
// non-par file found
return true;
char szLoFileName[1024];
strncpy(szLoFileName, pFileInfo->GetFilename(), 1024);
szLoFileName[1024-1] = '\0';
for (char* p = szLoFileName; *p; p++) *p = tolower(*p); // convert string to lowercase
if (!strstr(szLoFileName, ".par2"))
{
// non-par file found
return true;
}
}
}
return false;
}
void QueueEditor::MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList)
bool QueueEditor::MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList)
{
if (pItemList->size() == 0)
{
return;
return false;
}
bool bOK = true;
EditItem* pDestItem = pItemList->front();
for (ItemList::iterator it = pItemList->begin() + 1; it != pItemList->end(); it++)
@@ -727,23 +879,90 @@ void QueueEditor::MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList
if (pItem->m_pFileInfo->GetNZBInfo() != pDestItem->m_pFileInfo->GetNZBInfo())
{
debug("merge %s to %s", pItem->m_pFileInfo->GetNZBInfo()->GetFilename(), pDestItem->m_pFileInfo->GetNZBInfo()->GetFilename());
g_pQueueCoordinator->MergeQueueEntries(pDestItem->m_pFileInfo->GetNZBInfo(), pItem->m_pFileInfo->GetNZBInfo());
if (!g_pQueueCoordinator->MergeQueueEntries(pDestItem->m_pFileInfo->GetNZBInfo(), pItem->m_pFileInfo->GetNZBInfo()))
{
bOK = false;
}
}
delete pItem;
}
// align group ("AlignGroup" needs the first file item as parameter)
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetNZBInfo() == pDestItem->m_pFileInfo->GetNZBInfo())
{
AlignGroup(pDownloadQueue, pFileInfo);
break;
}
}
AlignGroup(pDownloadQueue, pDestItem->m_pFileInfo->GetNZBInfo());
delete pDestItem;
return bOK;
}
bool QueueEditor::SplitGroup(DownloadQueue* pDownloadQueue, ItemList* pItemList, const char* szName)
{
if (pItemList->size() == 0)
{
return false;
}
FileQueue* pFileList = new FileQueue();
for (ItemList::iterator it = pItemList->begin(); it != pItemList->end(); it++)
{
EditItem* pItem = *it;
pFileList->push_back(pItem->m_pFileInfo);
delete pItem;
}
NZBInfo* pNewNZBInfo = NULL;
bool bOK = g_pQueueCoordinator->SplitQueueEntries(pFileList, szName, &pNewNZBInfo);
if (bOK)
{
AlignGroup(pDownloadQueue, pNewNZBInfo);
}
delete pFileList;
return bOK;
}
void QueueEditor::ReorderFiles(DownloadQueue* pDownloadQueue, ItemList* pItemList)
{
if (pItemList->size() == 0)
{
return;
}
EditItem* pFirstItem = pItemList->front();
NZBInfo* pNZBInfo = pFirstItem->m_pFileInfo->GetNZBInfo();
unsigned int iInsertPos = 0;
// find first file of the group
for (FileQueue::iterator it = pDownloadQueue->GetFileQueue()->begin(); it != pDownloadQueue->GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
if (pFileInfo->GetNZBInfo() == pNZBInfo)
{
break;
}
iInsertPos++;
}
// now can reorder
for (ItemList::iterator it = pItemList->begin(); it != pItemList->end(); it++)
{
EditItem* pItem = *it;
FileInfo* pFileInfo = pItem->m_pFileInfo;
// move file item
for (FileQueue::iterator it2 = pDownloadQueue->GetFileQueue()->begin(); it2 != pDownloadQueue->GetFileQueue()->end(); it2++)
{
FileInfo* pFileInfo1 = *it2;
if (pFileInfo1 == pFileInfo)
{
pDownloadQueue->GetFileQueue()->erase(it2);
pDownloadQueue->GetFileQueue()->insert(pDownloadQueue->GetFileQueue()->begin() + iInsertPos, pFileInfo);
iInsertPos++;
break;
}
}
delete pItem;
}
}
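ReorderFiles moves the listed files to the head of their group, keeping the order in which their IDs were supplied. A hedged usage sketch of the new eaFileReorder action, where 7, 5 and 6 are hypothetical file IDs from a single group and bSmartOrder is assumed to be false so the supplied order is preserved:
// move files 7, 5 and 6 to the top of their group, in exactly this order
IDList ids;
ids.push_back(7);
ids.push_back(5);
ids.push_back(6);
g_pQueueCoordinator->GetQueueEditor()->EditList(&ids, NULL, QueueEditor::mmID,
	false, QueueEditor::eaFileReorder, 0, NULL);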
void QueueEditor::SetNZBParameter(NZBInfo* pNZBInfo, const char* szParamString)
@@ -757,12 +976,56 @@ void QueueEditor::SetNZBParameter(NZBInfo* pNZBInfo, const char* szParamString)
{
*szValue = '\0';
szValue++;
pNZBInfo->SetParameter(szStr, szValue);
pNZBInfo->GetParameters()->SetParameter(szStr, szValue);
}
else
{
error("Could not set nzb parameter for %s: invalid argument: %s", Util::BaseFileName(pNZBInfo->GetFilename()), szParamString);
error("Could not set nzb parameter for %s: invalid argument: %s", pNZBInfo->GetName(), szParamString);
}
free(szStr);
}
void QueueEditor::SetNZBDupeParam(NZBInfo* pNZBInfo, EEditAction eAction, const char* szText)
{
debug("QueueEditor: setting dupe parameter %i='%s' for '%s'", (int)eAction, szText, pNZBInfo->GetName());
switch (eAction)
{
case eaGroupSetDupeKey:
pNZBInfo->SetDupeKey(szText);
break;
case eaGroupSetDupeScore:
pNZBInfo->SetDupeScore(atoi(szText));
break;
case eaGroupSetDupeMode:
{
EDupeMode eMode = dmScore;
if (!strcasecmp(szText, "SCORE"))
{
eMode = dmScore;
}
else if (!strcasecmp(szText, "ALL"))
{
eMode = dmAll;
}
else if (!strcasecmp(szText, "FORCE"))
{
eMode = dmForce;
}
else
{
error("Could not set duplicate mode for %s: incorrect mode (%s)", pNZBInfo->GetName(), szText);
return;
}
pNZBInfo->SetDupeMode(eMode);
break;
}
default:
// suppress compiler warning
break;
}
}
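The dupe fields above are reachable through the same edit path as the other group actions: eaGroupSetDupeKey and eaGroupSetDupeScore take the new value as szText, and eaGroupSetDupeMode accepts the mode strings parsed above (SCORE, ALL, FORCE). A hedged sketch, where the file ID 42 is illustrative only:
// switch the group containing file 42 to duplicate mode FORCE
IDList ids;
ids.push_back(42);
g_pQueueCoordinator->GetQueueEditor()->EditList(&ids, NULL, QueueEditor::mmID,
	true, QueueEditor::eaGroupSetDupeMode, 0, "FORCE");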

View File

@@ -1,7 +1,7 @@
/*
* This file if part of nzbget
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -33,6 +33,7 @@
class QueueEditor
{
public:
// NOTE: changes to this enum must be synced with "eRemoteEditAction" in unit "MessageBase.h"
enum EEditAction
{
eaFileMoveOffset = 1, // move to m_iOffset relative to the current position in queue
@@ -43,17 +44,34 @@ public:
eaFileDelete,
eaFilePauseAllPars,
eaFilePauseExtraPars,
eaFileSetPriority,
eaFileReorder,
eaFileSplit,
eaGroupMoveOffset, // move to m_iOffset relative to the current position in queue
eaGroupMoveTop,
eaGroupMoveBottom,
eaGroupPause,
eaGroupResume,
eaGroupDelete,
eaGroupDupeDelete,
eaGroupFinalDelete,
eaGroupPauseAllPars,
eaGroupPauseExtraPars,
eaGroupSetPriority,
eaGroupSetCategory,
eaGroupMerge,
eaGroupSetParameter
eaGroupSetParameter,
eaGroupSetName,
eaGroupSetDupeKey,
eaGroupSetDupeScore,
eaGroupSetDupeMode
};
enum EMatchMode
{
mmID = 1,
mmName,
mmRegEx
};
private:
@@ -74,28 +92,34 @@ private:
int FindFileInfoEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo);
bool InternEditList(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
void PrepareList(DownloadQueue* pDownloadQueue, ItemList* pItemList, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset);
bool EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, EEditAction eAction, int iOffset);
bool BuildIDListFromNameList(DownloadQueue* pDownloadQueue, IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, EEditAction eAction);
bool EditGroup(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, EEditAction eAction, int iOffset, const char* szText);
void BuildGroupList(DownloadQueue* pDownloadQueue, FileList* pGroupList);
void AlignAffectedGroups(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, int iOffset);
bool ItemExists(FileList* pFileList, FileInfo* pFileInfo);
void AlignGroup(DownloadQueue* pDownloadQueue, FileInfo* pFirstFileInfo);
void AlignGroup(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void PauseParsInGroups(ItemList* pItemList, bool bExtraParsOnly);
void PausePars(FileList* pFileList, bool bExtraParsOnly);
void SetNZBCategory(NZBInfo* pNZBInfo, const char* szCategory);
void SetNZBName(NZBInfo* pNZBInfo, const char* szName);
bool CanCleanupDisk(DownloadQueue* pDownloadQueue, NZBInfo* pNZBInfo);
void MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList);
bool MergeGroups(DownloadQueue* pDownloadQueue, ItemList* pItemList);
bool SplitGroup(DownloadQueue* pDownloadQueue, ItemList* pItemList, const char* szName);
void ReorderFiles(DownloadQueue* pDownloadQueue, ItemList* pItemList);
void SetNZBParameter(NZBInfo* pNZBInfo, const char* szParamString);
void SetNZBDupeParam(NZBInfo* pNZBInfo, EEditAction eAction, const char* szText);
void PauseUnpauseEntry(FileInfo* pFileInfo, bool bPause);
void DeleteEntry(FileInfo* pFileInfo);
void MoveEntry(DownloadQueue* pDownloadQueue, FileInfo* pFileInfo, int iOffset);
void SetPriorityEntry(FileInfo* pFileInfo, const char* szPriority);
public:
QueueEditor();
~QueueEditor();
bool EditEntry(int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool EditList(IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool EditList(IDList* pIDList, NameList* pNameList, EMatchMode eMatchMode, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool LockedEditEntry(DownloadQueue* pDownloadQueue, int ID, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);
bool LockedEditList(DownloadQueue* pDownloadQueue, IDList* pIDList, bool bSmartOrder, EEditAction eAction, int iOffset, const char* szText);

137
README
View File

@@ -2,6 +2,10 @@
NZBGet ReadMe
=====================================
This is a short documentation. For more information please
visit the NZBGet home page at
http://nzbget.sourceforge.net
Contents
--------
1. About NZBGet
@@ -29,6 +33,9 @@ In server/client mode NZBGet runs as server in background.
Then you use a client to send requests to the server. Sample requests
are: download an nzb-file, list files in the queue, etc.
There is also a built-in web-interface. The server has RPC-support
and can be controlled from third party applications too.
Standalone-tool, server and client are all contained in only one
executable file "nzbget". The mode in which the program works
depends on command-line parameters passed to the program.
@@ -41,13 +48,12 @@ NZBGet is written in C++ and was initially developed on Linux.
It was ported to Windows later and tested for compatibility with
several POSIX-OS'es.
The current version (0.7.0) should run at least on:
- Linux Debian 4.0 on x86;
- Linux with uClibc and BusyBox on MIPSEL and ARM;
- FreeBSD 7.0 on x86;
- OpenSolaris 2008.11 on x86;
- Mac OS X;
- Windows XP SP2 on x86.
It should run at least on:
- Linux Debian 5.0 on x86;
- Linux with uClibc on MIPSEL and ARM;
- OpenBSD 5.0 on x86;
- Mac OS X 10.7 Lion on x64;
- Windows XP SP3 on x86 and Windows 7 on x64.
Clients and servers running on different OS'es may communicate with
each other. For example, you can use NZBGet as client on Windows to
@@ -89,6 +95,9 @@ And the following libraries are optional:
or
- OpenSSL (http://www.openssl.org)
- for gzip support in web-server and web-client (enabled by default):
- zlib (http://www.zlib.net)
All these libraries are included in modern Linux distributions and
should be available as installable packages. Please note that you also
need the developer packages for these libraries; their package names
@@ -99,24 +108,41 @@ download the libraries at the given URLs and compile them (see hints below).
4. Installation on POSIX
=====================================
Well, the usual stuff:
Installation from the source distribution archive (nzbget-VERSION.tar.gz):
- untar the nzbget-source via
tar -zxf nzbget-VERSION.tar.gz
- change into nzbget-directory via
cd nzbget-VERSION
- configure it via
./configure
(maybe you have to tell configure, where to find some libraries.
./configure --help is your friend!)
also see "Configure-options" later.)
./configure --help is your friend!
also see "Configure-options" later)
- in case you don't have root access or want to install the program
in your home directory use the configure parameter --prefix, e. g.:
./configure --prefix ~/usr
- compile it via
make
- become root via
- to install system wide become root via:
su
- install it via
- install it via:
make install
- install configuration files into <prefix>/etc via:
make install-conf
(you can skip this step if you intend to store configuration
files in a non-standard location)
Configure-options
-----------------
You may run configure with additional arguments:
@@ -132,6 +158,9 @@ You may run configure with additional arguments:
--disable-tls - to make without TLS/SSL support. Use this option if
you can use neither GnuTLS nor OpenSSL.
--disable-gzip - to make without gzip support. Use this option
if you can not use zlib.
--enable-debug - to build in debug-mode, if you want to see and log
debug-messages.
@@ -245,17 +274,27 @@ in MS Visual C++ 2005 you should be able to compile NZBGet.
6. Configuration
=====================================
NZBGet needs a configuration-file to work properly.
NZBGet needs a configuration file.
You need to set at least the option "MAINDIR" and one newsserver in
configuration file. Have a look at the example in nzbget.conf.example,
it has comments on how to use each option.
An example configuration file is provided in "nzbget.conf", which
is installed into "<prefix>/share/nzbget" (where <prefix> depends on
system configuration and configure options - typically "/usr/local",
"/usr" or "/opt"). The installer adjusts the file according to your
system paths. If you have performed the installation step
"make install-conf" this file is already copied to "<prefix>/etc" and
NZBGet finds it automatically. If you install the program manually
from a binary archive you have to copy the file from "<prefix>/share/nzbget"
to one of the locations listed below.
Open the file in a text editor and modify it according to your needs.
You need to set at least the option "MAINDIR" and one news server in the
configuration file. The file has comments on how to use each option.
The program looks for the configuration file in the following standard
locations (in this order):
On POSIX systems:
~/.nzbget
/etc/nzbget.conf
/usr/etc/nzbget.conf
@@ -383,7 +422,7 @@ Running client & server on separate machines:
Since nzbget communicates via TCP/IP it's possible to have a server running on
one computer and to add downloads via a client on another computer.
Do this by setting the "serverip" option in the nzbget.conf file to point to the
Do this by setting the "ControlIP" option in the nzbget.conf file to point to the
IP of the server (the default is localhost, which means client and server run on the
same computer).
@@ -402,28 +441,58 @@ nzbget-client-commands in this terminal.
Post processing scripts
-----------------------
After the download of nzb-file is completed nzbget can call post-process-script,
defined in configuration file. See example configuration file for the
description of parameters passed to the script.
After the download of an nzb-file is completed, nzbget can call post-processing
scripts defined in the configuration file.
An example script for unraring of downloaded files is provided in file
postprocess-example.sh. The usage instructions are included in the file,
please open the file in any text editor to read them.
NOTE: That example script is for POSIX systems (not for Windows).
Example post-processing scripts are provided in the directory "ppscripts".
To use the scripts, copy them into your local directory and set the options
<ScriptDir>, <DefScript> and <ScriptOrder>.
For information on writing your own post-processing scripts please
visit the NZBGet web site.
Web-interface
-------------
NZBGet has a built-in web-server providing access to the program
functions via a web-interface.
To activate the web-interface set the option "WebDir" to the path with
web-interface files. If you install using "make install-conf" as
described above, the option is set automatically. If you install using
binary files, you should check that the option is set correctly.
To access the web-interface from your web-browser use the server address
and port defined in the NZBGet configuration file in the options "ControlIP" and
"ControlPort". For example:
http://localhost:6789/
For login credentials, type the username "nzbget" (predefined and not changeable)
and the password from the option "ControlPassword" (the default is tegbzn6789).
In case your browser forgets the credentials, there is a workaround to avoid
typing them each time: use a URL of the form:
http://localhost:6789/nzbget:password/
Please note that in this case the password is saved in a bookmark or in
the browser history in plain text and is easy to find for anyone having
access to your computer.
=====================================
8. Authors
=====================================
NZBGet was initially written by Sven Henkel (sidddy@users.sourceforge.net).
Up to version 0.2.3 it was developed and maintained by Bo Cordes Petersen
(placebodk@users.sourceforge.net).
Beginning at version 0.3.0 the program is being developed by Andrei Prygounkov
NZBGet is developed and maintained by Andrey Prygunkov
(hugbug@users.sourceforge.net).
Module TLS (TLS.c, TLS.h) is based on work by Martin Lambers (marlam@marlam.de).
The original project was initially created by Sven Henkel
(sidddy@users.sourceforge.net) in 2004 and later developed by
Bo Cordes Petersen (placebodk@users.sourceforge.net) until 2005.
In 2007 the abandoned project was taken over by Andrey Prygunkov.
Since then the program has been completely rewritten.
=====================================
9. Copyright
@@ -450,9 +519,9 @@ libpar2 is distributed under GPL; libsigc++ and GnuTLS - under LGPL.
10. Contact
=====================================
If you encounter any problem, feel free to use forums on
If you encounter any problem, feel free to use the forum
sourceforge.net/projects/nzbget
nzbget.sourceforge.net/forum
or contact me at

View File

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@sourceforge.net>
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -34,7 +34,7 @@
#include <stdlib.h>
#include <string.h>
#include <cstdio>
#include <stdio.h>
#ifdef WIN32
#include <windows.h>
#else
@@ -46,6 +46,7 @@
#include "nzbget.h"
#include "RemoteClient.h"
#include "DownloadInfo.h"
#include "Options.h"
#include "Log.h"
#include "Util.h"
@@ -55,7 +56,6 @@ extern Options* g_pOptions;
RemoteClient::RemoteClient()
{
m_pConnection = NULL;
m_pNetAddress = NULL;
m_bVerbose = true;
/*
@@ -76,15 +76,7 @@ RemoteClient::RemoteClient()
RemoteClient::~RemoteClient()
{
if (m_pConnection)
{
delete m_pConnection;
}
if (m_pNetAddress)
{
delete m_pNetAddress;
}
delete m_pConnection;
}
void RemoteClient::printf(const char * msg,...)
@@ -109,13 +101,19 @@ void RemoteClient::perror(const char * msg)
bool RemoteClient::InitConnection()
{
// Create a connection to the server
m_pNetAddress = new NetAddress(g_pOptions->GetServerIP(), g_pOptions->GetServerPort());
m_pConnection = new Connection(m_pNetAddress);
const char *szControlIP = g_pOptions->GetControlIP();
if (!strcmp(szControlIP, "0.0.0.0"))
{
szControlIP = "127.0.0.1";
}
m_pConnection = new Connection(szControlIP, g_pOptions->GetControlPort(), false);
bool OK = m_pConnection->Connect();
if (!OK)
{
printf("Unable to send request to nzbserver at %s (port %i)\n", g_pOptions->GetServerIP(), g_pOptions->GetServerPort());
printf("Unable to send request to nzbserver at %s (port %i)\n", szControlIP, g_pOptions->GetControlPort());
}
return OK;
}
@@ -125,7 +123,11 @@ void RemoteClient::InitMessageBase(SNZBRequestBase* pMessageBase, int iRequest,
pMessageBase->m_iSignature = htonl(NZBMESSAGE_SIGNATURE);
pMessageBase->m_iType = htonl(iRequest);
pMessageBase->m_iStructSize = htonl(iSize);
strncpy(pMessageBase->m_szPassword, g_pOptions->GetServerPassword(), NZBREQUESTPASSWORDSIZE - 1);
strncpy(pMessageBase->m_szUsername, g_pOptions->GetControlUsername(), NZBREQUESTPASSWORDSIZE - 1);
pMessageBase->m_szUsername[NZBREQUESTPASSWORDSIZE - 1] = '\0';
strncpy(pMessageBase->m_szPassword, g_pOptions->GetControlPassword(), NZBREQUESTPASSWORDSIZE - 1);
pMessageBase->m_szPassword[NZBREQUESTPASSWORDSIZE - 1] = '\0';
}
@@ -137,35 +139,22 @@ bool RemoteClient::ReceiveBoolResponse()
SNZBDownloadResponse BoolResponse;
memset(&BoolResponse, 0, sizeof(BoolResponse));
int iResponseLen = m_pConnection->Recv((char*)&BoolResponse, sizeof(BoolResponse));
if (iResponseLen != sizeof(BoolResponse) ||
bool bRead = m_pConnection->Recv((char*)&BoolResponse, sizeof(BoolResponse));
if (!bRead ||
(int)ntohl(BoolResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(BoolResponse.m_MessageBase.m_iStructSize) != sizeof(BoolResponse))
{
if (iResponseLen < 0)
{
printf("No response received (timeout)\n");
}
else
{
printf("Invalid response received: either not nzbget-server or wrong server version\n");
}
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
return false;
}
int iTextLen = ntohl(BoolResponse.m_iTrailingDataLength);
char* buf = (char*)malloc(iTextLen);
iResponseLen = m_pConnection->Recv(buf, iTextLen);
if (iResponseLen != iTextLen)
bRead = m_pConnection->Recv(buf, iTextLen);
if (!bRead)
{
if (iResponseLen < 0)
{
printf("No response received (timeout)\n");
}
else
{
printf("Invalid response received: either not nzbget-server or wrong server version\n");
}
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
free(buf);
return false;
}
@@ -177,7 +166,7 @@ bool RemoteClient::ReceiveBoolResponse()
/*
* Sends a message to the running nzbget process.
*/
bool RemoteClient::RequestServerDownload(const char* szFilename, const char* szCategory, bool bAddFirst)
bool RemoteClient::RequestServerDownload(const char* szFilename, const char* szCategory, bool bAddFirst, bool bAddPaused, int iPriority)
{
// Read the file into the buffer
char* szBuffer = NULL;
@@ -194,7 +183,9 @@ bool RemoteClient::RequestServerDownload(const char* szFilename, const char* szC
SNZBDownloadRequest DownloadRequest;
InitMessageBase(&DownloadRequest.m_MessageBase, eRemoteRequestDownload, sizeof(DownloadRequest));
DownloadRequest.m_bAddFirst = htonl(bAddFirst);
DownloadRequest.m_iTrailingDataLength = htonl(iLength);
DownloadRequest.m_bAddPaused = htonl(bAddPaused);
DownloadRequest.m_iPriority = htonl(iPriority);
DownloadRequest.m_iTrailingDataLength = htonl(iLength - 1);
strncpy(DownloadRequest.m_szFilename, szFilename, NZBREQUESTFILENAMESIZE - 1);
DownloadRequest.m_szFilename[NZBREQUESTFILENAMESIZE-1] = '\0';
@@ -205,7 +196,7 @@ bool RemoteClient::RequestServerDownload(const char* szFilename, const char* szC
}
DownloadRequest.m_szCategory[NZBREQUESTFILENAMESIZE-1] = '\0';
if (m_pConnection->Send((char*)(&DownloadRequest), sizeof(DownloadRequest)) < 0)
if (!m_pConnection->Send((char*)(&DownloadRequest), sizeof(DownloadRequest)))
{
perror("m_pConnection->Send");
OK = false;
@@ -219,10 +210,7 @@ bool RemoteClient::RequestServerDownload(const char* szFilename, const char* szC
}
// Cleanup
if (szBuffer)
{
free(szBuffer);
}
free(szBuffer);
return OK;
}
@@ -239,23 +227,29 @@ void RemoteClient::BuildFileList(SNZBListResponse* pListResponse, const char* pT
SNZBListResponseNZBEntry* pListAnswer = (SNZBListResponseNZBEntry*) pBufPtr;
const char* szFileName = pBufPtr + sizeof(SNZBListResponseNZBEntry);
const char* szDestDir = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen);
const char* szCategory = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen) + ntohl(pListAnswer->m_iDestDirLen);
const char* m_szQueuedFilename = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen) + ntohl(pListAnswer->m_iDestDirLen) + ntohl(pListAnswer->m_iCategoryLen);
const char* szName = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen);
const char* szDestDir = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen) +
ntohl(pListAnswer->m_iNameLen);
const char* szCategory = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen) +
ntohl(pListAnswer->m_iNameLen) + ntohl(pListAnswer->m_iDestDirLen);
const char* m_szQueuedFilename = pBufPtr + sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen) +
ntohl(pListAnswer->m_iNameLen) + ntohl(pListAnswer->m_iDestDirLen) + ntohl(pListAnswer->m_iCategoryLen);
NZBInfo* pNZBInfo = new NZBInfo();
MatchedNZBInfo* pNZBInfo = new MatchedNZBInfo();
pNZBInfo->SetSize(Util::JoinInt64(ntohl(pListAnswer->m_iSizeHi), ntohl(pListAnswer->m_iSizeLo)));
pNZBInfo->SetFilename(szFileName);
pNZBInfo->SetName(szName);
pNZBInfo->SetDestDir(szDestDir);
pNZBInfo->SetCategory(szCategory);
pNZBInfo->SetQueuedFilename(m_szQueuedFilename);
pNZBInfo->m_bMatch = ntohl(pListAnswer->m_bMatch);
pNZBInfo->AddReference();
pNZBInfo->Retain();
pDownloadQueue->GetNZBInfoList()->Add(pNZBInfo);
pBufPtr += sizeof(SNZBListResponseNZBEntry) + ntohl(pListAnswer->m_iFilenameLen) +
ntohl(pListAnswer->m_iDestDirLen) + ntohl(pListAnswer->m_iCategoryLen) +
ntohl(pListAnswer->m_iQueuedFilenameLen);
ntohl(pListAnswer->m_iNameLen) + ntohl(pListAnswer->m_iDestDirLen) +
ntohl(pListAnswer->m_iCategoryLen) + ntohl(pListAnswer->m_iQueuedFilenameLen);
}
//read ppp entries
@@ -267,7 +261,7 @@ void RemoteClient::BuildFileList(SNZBListResponse* pListResponse, const char* pT
const char* szValue = pBufPtr + sizeof(SNZBListResponsePPPEntry) + ntohl(pListAnswer->m_iNameLen);
NZBInfo* pNZBInfo = pDownloadQueue->GetNZBInfoList()->at(ntohl(pListAnswer->m_iNZBIndex) - 1);
pNZBInfo->SetParameter(szName, szValue);
pNZBInfo->GetParameters()->SetParameter(szName, szValue);
pBufPtr += sizeof(SNZBListResponsePPPEntry) + ntohl(pListAnswer->m_iNameLen) +
ntohl(pListAnswer->m_iValueLen);
@@ -281,7 +275,7 @@ void RemoteClient::BuildFileList(SNZBListResponse* pListResponse, const char* pT
const char* szSubject = pBufPtr + sizeof(SNZBListResponseFileEntry);
const char* szFileName = pBufPtr + sizeof(SNZBListResponseFileEntry) + ntohl(pListAnswer->m_iSubjectLen);
FileInfo* pFileInfo = new FileInfo();
MatchedFileInfo* pFileInfo = new MatchedFileInfo();
pFileInfo->SetID(ntohl(pListAnswer->m_iID));
pFileInfo->SetSize(Util::JoinInt64(ntohl(pListAnswer->m_iFileSizeHi), ntohl(pListAnswer->m_iFileSizeLo)));
pFileInfo->SetRemainingSize(Util::JoinInt64(ntohl(pListAnswer->m_iRemainingSizeHi), ntohl(pListAnswer->m_iRemainingSizeLo)));
@@ -289,6 +283,9 @@ void RemoteClient::BuildFileList(SNZBListResponse* pListResponse, const char* pT
pFileInfo->SetSubject(szSubject);
pFileInfo->SetFilename(szFileName);
pFileInfo->SetFilenameConfirmed(ntohl(pListAnswer->m_bFilenameConfirmed));
pFileInfo->SetActiveDownloads(ntohl(pListAnswer->m_iActiveDownloads));
pFileInfo->SetPriority(ntohl(pListAnswer->m_iPriority));
pFileInfo->m_bMatch = ntohl(pListAnswer->m_bMatch);
NZBInfo* pNZBInfo = pDownloadQueue->GetNZBInfoList()->at(ntohl(pListAnswer->m_iNZBIndex) - 1);
@@ -304,7 +301,7 @@ void RemoteClient::BuildFileList(SNZBListResponse* pListResponse, const char* pT
pDownloadQueue->GetNZBInfoList()->ReleaseAll();
}
bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
bool RemoteClient::RequestServerList(bool bFiles, bool bGroups, const char* szPattern)
{
if (!InitConnection()) return false;
@@ -312,8 +309,15 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
InitMessageBase(&ListRequest.m_MessageBase, eRemoteRequestList, sizeof(ListRequest));
ListRequest.m_bFileList = htonl(true);
ListRequest.m_bServerState = htonl(true);
ListRequest.m_iMatchMode = htonl(szPattern ? eRemoteMatchModeRegEx : eRemoteMatchModeID);
ListRequest.m_bMatchGroup = htonl(bGroups);
if (szPattern)
{
strncpy(ListRequest.m_szPattern, szPattern, NZBREQUESTFILENAMESIZE - 1);
ListRequest.m_szPattern[NZBREQUESTFILENAMESIZE-1] = '\0';
}
if (m_pConnection->Send((char*)(&ListRequest), sizeof(ListRequest)) < 0)
if (!m_pConnection->Send((char*)(&ListRequest), sizeof(ListRequest)))
{
perror("m_pConnection->Send");
return false;
@@ -323,19 +327,12 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
// Now listen for the returned list
SNZBListResponse ListResponse;
int iResponseLen = m_pConnection->Recv((char*) &ListResponse, sizeof(ListResponse));
if (iResponseLen != sizeof(ListResponse) ||
bool bRead = m_pConnection->Recv((char*) &ListResponse, sizeof(ListResponse));
if (!bRead ||
(int)ntohl(ListResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(ListResponse.m_MessageBase.m_iStructSize) != sizeof(ListResponse))
{
if (iResponseLen < 0)
{
printf("No response received (timeout)\n");
}
else
{
printf("Invalid response received: either not nzbget-server or wrong server version\n");
}
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
return false;
}
@@ -343,7 +340,7 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
if (ntohl(ListResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(ListResponse.m_iTrailingDataLength));
if (!m_pConnection->RecvAll(pBuf, ntohl(ListResponse.m_iTrailingDataLength)))
if (!m_pConnection->Recv(pBuf, ntohl(ListResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
@@ -352,6 +349,13 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
m_pConnection->Disconnect();
if (szPattern && !ListResponse.m_bRegExValid)
{
printf("Error in regular expression\n");
free(pBuf);
return false;
}
if (bFiles)
{
if (ntohl(ListResponse.m_iTrailingDataLength) == 0)
@@ -368,17 +372,33 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
long long lRemaining = 0;
long long lPaused = 0;
int iMatches = 0;
for (FileQueue::iterator it = cRemoteQueue.GetFileQueue()->begin(); it != cRemoteQueue.GetFileQueue()->end(); it++)
{
FileInfo* pFileInfo = *it;
char szPriority[100];
szPriority[0] = '\0';
if (pFileInfo->GetPriority() != 0)
{
sprintf(szPriority, "[%+i] ", pFileInfo->GetPriority());
}
char szCompleted[100];
szCompleted[0] = '\0';
if (pFileInfo->GetRemainingSize() < pFileInfo->GetSize())
{
sprintf(szCompleted, ", %i%s", (int)(100 - Util::Int64ToFloat(pFileInfo->GetRemainingSize()) * 100.0 / Util::Int64ToFloat(pFileInfo->GetSize())), "%");
}
char szThreads[100];
szThreads[0] = '\0';
if (pFileInfo->GetActiveDownloads() > 0)
{
sprintf(szThreads, ", %i thread%s", pFileInfo->GetActiveDownloads(), (pFileInfo->GetActiveDownloads() > 1 ? "s" : ""));
}
char szStatus[100];
if (pFileInfo->GetPaused())
{
@@ -390,18 +410,30 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
szStatus[0] = '\0';
lRemaining += pFileInfo->GetRemainingSize();
}
char szNZBNiceName[1024];
pFileInfo->GetNZBInfo()->GetNiceNZBName(szNZBNiceName, 1024);
printf("[%i] %s%c%s (%.2f MB%s)%s\n", pFileInfo->GetID(), szNZBNiceName, (int)PATH_SEPARATOR, pFileInfo->GetFilename(),
(float)(Util::Int64ToFloat(pFileInfo->GetSize()) / 1024.0 / 1024.0), szCompleted, szStatus);
if (!szPattern || ((MatchedFileInfo*)pFileInfo)->m_bMatch)
{
printf("[%i] %s%s/%s (%.2f MB%s%s)%s\n", pFileInfo->GetID(), szPriority, pFileInfo->GetNZBInfo()->GetName(),
pFileInfo->GetFilename(),
(float)(Util::Int64ToFloat(pFileInfo->GetSize()) / 1024.0 / 1024.0),
szCompleted, szThreads, szStatus);
iMatches++;
}
delete pFileInfo;
}
if (iMatches == 0)
{
printf("No matches founds\n");
}
printf("-----------------------------------\n");
printf("Files: %i\n", cRemoteQueue.GetFileQueue()->size());
if (szPattern)
{
printf("Matches: %i\n", iMatches);
}
if (lPaused > 0)
{
printf("Remaining size: %.2f MB (+%.2f MB paused)\n", (float)(Util::Int64ToFloat(lRemaining) / 1024.0 / 1024.0),
@@ -433,6 +465,7 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
long long lRemaining = 0;
long long lPaused = 0;
int iMatches = 0;
for (GroupQueue::iterator it = cGroupQueue.begin(); it != cGroupQueue.end(); it++)
{
@@ -444,6 +477,20 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
char szRemaining[20];
Util::FormatFileSize(szRemaining, sizeof(szRemaining), lUnpausedRemainingSize);
char szPriority[100];
szPriority[0] = '\0';
if (pGroupInfo->GetMinPriority() != 0 || pGroupInfo->GetMaxPriority() != 0)
{
if (pGroupInfo->GetMinPriority() == pGroupInfo->GetMaxPriority())
{
sprintf(szPriority, "[%+i] ", pGroupInfo->GetMinPriority());
}
else
{
sprintf(szPriority, "[%+i..%+i] ", pGroupInfo->GetMinPriority(), pGroupInfo->GetMaxPriority());
}
}
char szPaused[20];
szPaused[0] = '\0';
if (pGroupInfo->GetPausedSize() > 0)
@@ -454,9 +501,6 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
lPaused += pGroupInfo->GetPausedSize();
}
char szNZBNiceName[1024];
pGroupInfo->GetNZBInfo()->GetNiceNZBName(szNZBNiceName, 1023);
char szCategory[1024];
szCategory[0] = '\0';
if (pGroupInfo->GetNZBInfo()->GetCategory() && strlen(pGroupInfo->GetNZBInfo()->GetCategory()) > 0)
@@ -464,33 +508,43 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
sprintf(szCategory, " (%s)", pGroupInfo->GetNZBInfo()->GetCategory());
}
char szThreads[100];
szThreads[0] = '\0';
if (pGroupInfo->GetActiveDownloads() > 0)
{
sprintf(szThreads, ", %i thread%s", pGroupInfo->GetActiveDownloads(), (pGroupInfo->GetActiveDownloads() > 1 ? "s" : ""));
}
char szParameters[1024];
szParameters[0] = '\0';
for (NZBParameterList::iterator it = pGroupInfo->GetNZBInfo()->GetParameters()->begin(); it != pGroupInfo->GetNZBInfo()->GetParameters()->end(); it++)
{
if (szParameters[0] == '\0')
{
strncat(szParameters, " (", 1024);
strncat(szParameters, " (", sizeof(szParameters) - strlen(szParameters) - 1);
}
else
{
strncat(szParameters, ", ", 1024);
strncat(szParameters, ", ", sizeof(szParameters) - strlen(szParameters) - 1);
}
NZBParameter* pNZBParameter = *it;
strncat(szParameters, pNZBParameter->GetName(), 1024);
strncat(szParameters, "=", 1024);
strncat(szParameters, pNZBParameter->GetValue(), 1024);
strncat(szParameters, pNZBParameter->GetName(), sizeof(szParameters) - strlen(szParameters) - 1);
strncat(szParameters, "=", sizeof(szParameters) - strlen(szParameters) - 1);
strncat(szParameters, pNZBParameter->GetValue(), sizeof(szParameters) - strlen(szParameters) - 1);
}
if (szParameters[0] != '\0')
{
strncat(szParameters, ")", 1024);
strncat(szParameters, ")", sizeof(szParameters) - strlen(szParameters) - 1);
}
printf("[%i-%i] %s (%i file%s, %s%s)%s%s\n", pGroupInfo->GetFirstID(), pGroupInfo->GetLastID(), szNZBNiceName,
pGroupInfo->GetRemainingFileCount(), pGroupInfo->GetRemainingFileCount() > 1 ? "s" : "", szRemaining,
szPaused, szCategory, szParameters);
delete pGroupInfo;
if (!szPattern || ((MatchedNZBInfo*)pGroupInfo->GetNZBInfo())->m_bMatch)
{
printf("[%i-%i] %s%s (%i file%s, %s%s%s)%s%s\n", pGroupInfo->GetFirstID(), pGroupInfo->GetLastID(), szPriority,
pGroupInfo->GetNZBInfo()->GetName(), pGroupInfo->GetRemainingFileCount(),
pGroupInfo->GetRemainingFileCount() > 1 ? "s" : "", szRemaining,
szPaused, szThreads, szCategory, szParameters);
iMatches++;
}
}
for (FileQueue::iterator it = cRemoteQueue.GetFileQueue()->begin(); it != cRemoteQueue.GetFileQueue()->end(); it++)
@@ -498,8 +552,17 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
delete *it;
}
if (iMatches == 0)
{
printf("No matches founds\n");
}
printf("-----------------------------------\n");
printf("Groups: %i\n", cGroupQueue.size());
if (szPattern)
{
printf("Matches: %i\n", iMatches);
}
printf("Files: %i\n", cRemoteQueue.GetFileQueue()->size());
if (lPaused > 0)
{
@@ -587,10 +650,10 @@ bool RemoteClient::RequestServerList(bool bFiles, bool bGroups)
if (ntohl(ListResponse.m_iPostJobCount) > 0 || ntohl(ListResponse.m_bPostPaused))
{
strncat(szServerState, strlen(szServerState) > 0 ? ", Post-Processing" : "Post-Processing", sizeof(szServerState));
strncat(szServerState, strlen(szServerState) > 0 ? ", Post-Processing" : "Post-Processing", sizeof(szServerState) - strlen(szServerState) - 1);
if (ntohl(ListResponse.m_bPostPaused))
{
strncat(szServerState, " paused", sizeof(szServerState));
strncat(szServerState, " paused", sizeof(szServerState) - strlen(szServerState) - 1);
}
}
@@ -613,7 +676,7 @@ bool RemoteClient::RequestServerLog(int iLines)
LogRequest.m_iLines = htonl(iLines);
LogRequest.m_iIDFrom = 0;
if (m_pConnection->Send((char*)(&LogRequest), sizeof(LogRequest)) < 0)
if (!m_pConnection->Send((char*)(&LogRequest), sizeof(LogRequest)))
{
perror("m_pConnection->Send");
return false;
@@ -623,19 +686,12 @@ bool RemoteClient::RequestServerLog(int iLines)
// Now listen for the returned log
SNZBLogResponse LogResponse;
int iResponseLen = m_pConnection->Recv((char*) &LogResponse, sizeof(LogResponse));
if (iResponseLen != sizeof(LogResponse) ||
bool bRead = m_pConnection->Recv((char*) &LogResponse, sizeof(LogResponse));
if (!bRead ||
(int)ntohl(LogResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(LogResponse.m_MessageBase.m_iStructSize) != sizeof(LogResponse))
{
if (iResponseLen < 0)
{
printf("No response received (timeout)\n");
}
else
{
printf("Invalid response received: either not nzbget-server or wrong server version\n");
}
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
return false;
}
@@ -643,7 +699,7 @@ bool RemoteClient::RequestServerLog(int iLines)
if (ntohl(LogResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(LogResponse.m_iTrailingDataLength));
if (!m_pConnection->RecvAll(pBuf, ntohl(LogResponse.m_iTrailingDataLength)))
if (!m_pConnection->Recv(pBuf, ntohl(LogResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
@@ -706,7 +762,7 @@ bool RemoteClient::RequestServerPauseUnpause(bool bPause, eRemotePauseUnpauseAct
PauseUnpauseRequest.m_bPause = htonl(bPause);
PauseUnpauseRequest.m_iAction = htonl(iAction);
if (m_pConnection->Send((char*)(&PauseUnpauseRequest), sizeof(PauseUnpauseRequest)) < 0)
if (!m_pConnection->Send((char*)(&PauseUnpauseRequest), sizeof(PauseUnpauseRequest)))
{
perror("m_pConnection->Send");
m_pConnection->Disconnect();
@@ -719,15 +775,15 @@ bool RemoteClient::RequestServerPauseUnpause(bool bPause, eRemotePauseUnpauseAct
return OK;
}
bool RemoteClient::RequestServerSetDownloadRate(float fRate)
bool RemoteClient::RequestServerSetDownloadRate(int iRate)
{
if (!InitConnection()) return false;
SNZBSetDownloadRateRequest SetDownloadRateRequest;
InitMessageBase(&SetDownloadRateRequest.m_MessageBase, eRemoteRequestSetDownloadRate, sizeof(SetDownloadRateRequest));
SetDownloadRateRequest.m_iDownloadRate = htonl((unsigned int)(fRate * 1024));
SetDownloadRateRequest.m_iDownloadRate = htonl(iRate);
if (m_pConnection->Send((char*)(&SetDownloadRateRequest), sizeof(SetDownloadRateRequest)) < 0)
if (!m_pConnection->Send((char*)(&SetDownloadRateRequest), sizeof(SetDownloadRateRequest)))
{
perror("m_pConnection->Send");
m_pConnection->Disconnect();
@@ -747,7 +803,7 @@ bool RemoteClient::RequestServerDumpDebug()
SNZBDumpDebugRequest DumpDebugInfo;
InitMessageBase(&DumpDebugInfo.m_MessageBase, eRemoteRequestDumpDebug, sizeof(DumpDebugInfo));
if (m_pConnection->Send((char*)(&DumpDebugInfo), sizeof(DumpDebugInfo)) < 0)
if (!m_pConnection->Send((char*)(&DumpDebugInfo), sizeof(DumpDebugInfo)))
{
perror("m_pConnection->Send");
m_pConnection->Disconnect();
@@ -760,9 +816,10 @@ bool RemoteClient::RequestServerDumpDebug()
return OK;
}
bool RemoteClient::RequestServerEditQueue(eRemoteEditAction iAction, int iOffset, const char* szText, int* pIDList, int iIDCount, bool bSmartOrder)
bool RemoteClient::RequestServerEditQueue(eRemoteEditAction iAction, int iOffset, const char* szText,
int* pIDList, int iIDCount, NameList* pNameList, eRemoteMatchMode iMatchMode, bool bSmartOrder)
{
if (iIDCount <= 0 || pIDList == NULL)
if ((iIDCount <= 0 || pIDList == NULL) && (pNameList == NULL || pNameList->size() == 0))
{
printf("File(s) not specified\n");
return false;
@@ -770,18 +827,38 @@ bool RemoteClient::RequestServerEditQueue(eRemoteEditAction iAction, int iOffset
if (!InitConnection()) return false;
int iIDLength = sizeof(int32_t) * iIDCount;
int iNameCount = 0;
int iNameLength = 0;
if (pNameList && pNameList->size() > 0)
{
for (NameList::iterator it = pNameList->begin(); it != pNameList->end(); it++)
{
const char *szName = *it;
iNameLength += strlen(szName) + 1;
iNameCount++;
}
// align size to 4 bytes, needed by ARM processors (and maybe others)
iNameLength += iNameLength % 4 > 0 ? 4 - iNameLength % 4 : 0;
}
int iTextLen = szText ? strlen(szText) + 1 : 0;
// align size to 4 bytes, needed by ARM processors (and maybe others)
iTextLen += iTextLen % 4 > 0 ? 4 - iTextLen % 4 : 0;
int iLength = sizeof(int32_t) * iIDCount + iTextLen;
int iLength = iTextLen + iIDLength + iNameLength;
SNZBEditQueueRequest EditQueueRequest;
InitMessageBase(&EditQueueRequest.m_MessageBase, eRemoteRequestEditQueue, sizeof(EditQueueRequest));
EditQueueRequest.m_iAction = htonl(iAction);
EditQueueRequest.m_iMatchMode = htonl(iMatchMode);
EditQueueRequest.m_iOffset = htonl((int)iOffset);
EditQueueRequest.m_bSmartOrder = htonl(bSmartOrder);
EditQueueRequest.m_iTextLen = htonl(iTextLen);
EditQueueRequest.m_iNrTrailingEntries = htonl(iIDCount);
EditQueueRequest.m_iNrTrailingIDEntries = htonl(iIDCount);
EditQueueRequest.m_iNrTrailingNameEntries = htonl(iNameCount);
EditQueueRequest.m_iTrailingNameEntriesLen = htonl(iNameLength);
EditQueueRequest.m_iTrailingDataLength = htonl(iLength);
char* pTrailingData = (char*)malloc(iLength);
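The 4-byte padding above keeps the int32 ID array that follows the text block naturally aligned (the reason the comments cite ARM processors) and rounds the name block up so the total trailing length stays a multiple of four. A small stand-alone illustration of the padding formula; the helper name and the sample lengths are only for demonstration:
#include <stdio.h>
// same rounding rule as applied to iTextLen and iNameLength above
static int AlignTo4(int iLen)
{
	return iLen + (iLen % 4 > 0 ? 4 - iLen % 4 : 0);
}
int main()
{
	printf("%i %i %i\n", AlignTo4(12), AlignTo4(13), AlignTo4(16));   // prints: 12 16 16
	return 0;
}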
@@ -797,9 +874,21 @@ bool RemoteClient::RequestServerEditQueue(eRemoteEditAction iAction, int iOffset
{
pIDs[i] = htonl(pIDList[i]);
}
if (iNameCount > 0)
{
char *pNames = pTrailingData + iTextLen + iIDLength;
for (NameList::iterator it = pNameList->begin(); it != pNameList->end(); it++)
{
const char *szName = *it;
int iLen = strlen(szName);
strncpy(pNames, szName, iLen + 1);
pNames += iLen + 1;
}
}
bool OK = false;
if (m_pConnection->Send((char*)(&EditQueueRequest), sizeof(EditQueueRequest)) < 0)
if (!m_pConnection->Send((char*)(&EditQueueRequest), sizeof(EditQueueRequest)))
{
perror("m_pConnection->Send");
}
@@ -823,7 +912,28 @@ bool RemoteClient::RequestServerShutdown()
SNZBShutdownRequest ShutdownRequest;
InitMessageBase(&ShutdownRequest.m_MessageBase, eRemoteRequestShutdown, sizeof(ShutdownRequest));
bool OK = m_pConnection->Send((char*)(&ShutdownRequest), sizeof(ShutdownRequest)) >= 0;
bool OK = m_pConnection->Send((char*)(&ShutdownRequest), sizeof(ShutdownRequest));
if (OK)
{
OK = ReceiveBoolResponse();
}
else
{
perror("m_pConnection->Send");
}
m_pConnection->Disconnect();
return OK;
}
bool RemoteClient::RequestServerReload()
{
if (!InitConnection()) return false;
SNZBReloadRequest ReloadRequest;
InitMessageBase(&ReloadRequest.m_MessageBase, eRemoteRequestReload, sizeof(ReloadRequest));
bool OK = m_pConnection->Send((char*)(&ReloadRequest), sizeof(ReloadRequest));
if (OK)
{
OK = ReceiveBoolResponse();
@@ -844,7 +954,7 @@ bool RemoteClient::RequestServerVersion()
SNZBVersionRequest VersionRequest;
InitMessageBase(&VersionRequest.m_MessageBase, eRemoteRequestVersion, sizeof(VersionRequest));
bool OK = m_pConnection->Send((char*)(&VersionRequest), sizeof(VersionRequest)) >= 0;
bool OK = m_pConnection->Send((char*)(&VersionRequest), sizeof(VersionRequest));
if (OK)
{
OK = ReceiveBoolResponse();
@@ -865,7 +975,7 @@ bool RemoteClient::RequestPostQueue()
SNZBPostQueueRequest PostQueueRequest;
InitMessageBase(&PostQueueRequest.m_MessageBase, eRemoteRequestPostQueue, sizeof(PostQueueRequest));
if (m_pConnection->Send((char*)(&PostQueueRequest), sizeof(PostQueueRequest)) < 0)
if (!m_pConnection->Send((char*)(&PostQueueRequest), sizeof(PostQueueRequest)))
{
perror("m_pConnection->Send");
return false;
@@ -875,19 +985,12 @@ bool RemoteClient::RequestPostQueue()
// Now listen for the returned list
SNZBPostQueueResponse PostQueueResponse;
int iResponseLen = m_pConnection->Recv((char*) &PostQueueResponse, sizeof(PostQueueResponse));
if (iResponseLen != sizeof(PostQueueResponse) ||
bool bRead = m_pConnection->Recv((char*) &PostQueueResponse, sizeof(PostQueueResponse));
if (!bRead ||
(int)ntohl(PostQueueResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(PostQueueResponse.m_MessageBase.m_iStructSize) != sizeof(PostQueueResponse))
{
if (iResponseLen < 0)
{
printf("No response received (timeout)\n");
}
else
{
printf("Invalid response received: either not nzbget-server or wrong server version\n");
}
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
return false;
}
@@ -895,7 +998,7 @@ bool RemoteClient::RequestPostQueue()
if (ntohl(PostQueueResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(PostQueueResponse.m_iTrailingDataLength));
if (!m_pConnection->RecvAll(pBuf, ntohl(PostQueueResponse.m_iTrailingDataLength)))
if (!m_pConnection->Recv(pBuf, ntohl(PostQueueResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
@@ -920,22 +1023,21 @@ bool RemoteClient::RequestPostQueue()
int iStageProgress = ntohl(pPostQueueAnswer->m_iStageProgress);
static const int EXECUTING_SCRIPT = 5;
char szCompleted[100];
szCompleted[0] = '\0';
if (iStageProgress > 0 && (int)ntohl(pPostQueueAnswer->m_iStage) != EXECUTING_SCRIPT)
if (iStageProgress > 0 && (int)ntohl(pPostQueueAnswer->m_iStage) != (int)PostInfo::ptExecutingScript)
{
sprintf(szCompleted, ", %i%s", (int)(iStageProgress / 10), "%");
}
const char* szPostStageName[] = { "", ", Loading Pars", ", Verifying source files", ", Repairing", ", Verifying repaired files", ", Executing postprocess-script", "" };
char* szInfoName = pBufPtr + sizeof(SNZBPostQueueResponseEntry) + ntohl(pPostQueueAnswer->m_iNZBFilenameLen) + ntohl(pPostQueueAnswer->m_iParFilename);
const char* szPostStageName[] = { "", ", Loading Pars", ", Verifying source files", ", Repairing", ", Verifying repaired files", ", Unpacking", ", Executing postprocess-script", "" };
char* szInfoName = pBufPtr + sizeof(SNZBPostQueueResponseEntry) + ntohl(pPostQueueAnswer->m_iNZBFilenameLen);
printf("[%i] %s%s%s\n", ntohl(pPostQueueAnswer->m_iID), szInfoName, szPostStageName[ntohl(pPostQueueAnswer->m_iStage)], szCompleted);
pBufPtr += sizeof(SNZBPostQueueResponseEntry) + ntohl(pPostQueueAnswer->m_iNZBFilenameLen) +
ntohl(pPostQueueAnswer->m_iParFilename) + ntohl(pPostQueueAnswer->m_iInfoNameLen) +
ntohl(pPostQueueAnswer->m_iDestDirLen) + ntohl(pPostQueueAnswer->m_iProgressLabelLen);
ntohl(pPostQueueAnswer->m_iInfoNameLen) + ntohl(pPostQueueAnswer->m_iDestDirLen) +
ntohl(pPostQueueAnswer->m_iProgressLabelLen);
}
free(pBuf);
@@ -956,7 +1058,7 @@ bool RemoteClient::RequestWriteLog(int iKind, const char* szText)
int iLength = strlen(szText) + 1;
WriteLogRequest.m_iTrailingDataLength = htonl(iLength);
if (m_pConnection->Send((char*)(&WriteLogRequest), sizeof(WriteLogRequest)) < 0)
if (!m_pConnection->Send((char*)(&WriteLogRequest), sizeof(WriteLogRequest)))
{
perror("m_pConnection->Send");
return false;
@@ -968,14 +1070,16 @@ bool RemoteClient::RequestWriteLog(int iKind, const char* szText)
return OK;
}
bool RemoteClient::RequestScan()
bool RemoteClient::RequestScan(bool bSyncMode)
{
if (!InitConnection()) return false;
SNZBScanRequest ScanRequest;
InitMessageBase(&ScanRequest.m_MessageBase, eRemoteRequestScan, sizeof(ScanRequest));
bool OK = m_pConnection->Send((char*)(&ScanRequest), sizeof(ScanRequest)) >= 0;
ScanRequest.m_bSyncMode = htonl(bSyncMode);
bool OK = m_pConnection->Send((char*)(&ScanRequest), sizeof(ScanRequest));
if (OK)
{
OK = ReceiveBoolResponse();
@@ -996,7 +1100,7 @@ bool RemoteClient::RequestHistory()
SNZBHistoryRequest HistoryRequest;
InitMessageBase(&HistoryRequest.m_MessageBase, eRemoteRequestHistory, sizeof(HistoryRequest));
if (m_pConnection->Send((char*)(&HistoryRequest), sizeof(HistoryRequest)) < 0)
if (!m_pConnection->Send((char*)(&HistoryRequest), sizeof(HistoryRequest)))
{
perror("m_pConnection->Send");
return false;
@@ -1006,19 +1110,12 @@ bool RemoteClient::RequestHistory()
// Now listen for the returned list
SNZBHistoryResponse HistoryResponse;
int iResponseLen = m_pConnection->Recv((char*) &HistoryResponse, sizeof(HistoryResponse));
if (iResponseLen != sizeof(HistoryResponse) ||
bool bRead = m_pConnection->Recv((char*) &HistoryResponse, sizeof(HistoryResponse));
if (!bRead ||
(int)ntohl(HistoryResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(HistoryResponse.m_MessageBase.m_iStructSize) != sizeof(HistoryResponse))
{
if (iResponseLen < 0)
{
printf("No response received (timeout)\n");
}
else
{
printf("Invalid response received: either not nzbget-server or wrong server version\n");
}
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
return false;
}
@@ -1026,7 +1123,7 @@ bool RemoteClient::RequestHistory()
if (ntohl(HistoryResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(HistoryResponse.m_iTrailingDataLength));
if (!m_pConnection->RecvAll(pBuf, ntohl(HistoryResponse.m_iTrailingDataLength)))
if (!m_pConnection->Recv(pBuf, ntohl(HistoryResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
@@ -1049,27 +1146,33 @@ bool RemoteClient::RequestHistory()
{
SNZBHistoryResponseEntry* pListAnswer = (SNZBHistoryResponseEntry*) pBufPtr;
const char* szFileName = pBufPtr + sizeof(SNZBHistoryResponseEntry);
HistoryInfo::EKind eKind = (HistoryInfo::EKind)ntohl(pListAnswer->m_iKind);
const char* szNicename = pBufPtr + sizeof(SNZBHistoryResponseEntry);
long long lSize = Util::JoinInt64(ntohl(pListAnswer->m_iSizeHi), ntohl(pListAnswer->m_iSizeLo));
if (eKind == HistoryInfo::hkNZBInfo)
{
long long lSize = Util::JoinInt64(ntohl(pListAnswer->m_iSizeHi), ntohl(pListAnswer->m_iSizeLo));
char szNZBNiceName[1024];
NZBInfo::MakeNiceNZBName(szFileName, szNZBNiceName, 1024);
char szSize[20];
Util::FormatFileSize(szSize, sizeof(szSize), lSize);
char szSize[20];
Util::FormatFileSize(szSize, sizeof(szSize), lSize);
const char* szParStatusText[] = { "", "", ", Par failed", ", Par successful", ", Repair possible", ", Repair needed" };
const char* szScriptStatusText[] = { "", ", Script status unknown", ", Script failed", ", Script successful" };
const char* szParStatusText[] = { "", ", Par failed", ", Par possible", ", Par successful" };
const char* szScriptStatusText[] = { "", ", Script status unknown", ", Script failed", ", Script successful" };
printf("[%i] %s (%i files, %s%s%s)\n", ntohl(pListAnswer->m_iID), szNicename,
ntohl(pListAnswer->m_iFileCount), szSize,
szParStatusText[ntohl(pListAnswer->m_iParStatus)],
szScriptStatusText[ntohl(pListAnswer->m_iScriptStatus)]);
}
else if (eKind == HistoryInfo::hkUrlInfo)
{
const char* szUrlStatusText[] = { "", "", "Url download successful", "Url download failed", "" };
printf("[%i] %s (%i files, %s%s%s)\n", ntohl(pListAnswer->m_iID), szNZBNiceName,
ntohl(pListAnswer->m_iFileCount), szSize,
szParStatusText[ntohl(pListAnswer->m_iParStatus)],
szScriptStatusText[ntohl(pListAnswer->m_iScriptStatus)]);
printf("[%i] %s (%s)\n", ntohl(pListAnswer->m_iID), szNicename,
szUrlStatusText[ntohl(pListAnswer->m_iUrlStatus)]);
}
pBufPtr += sizeof(SNZBHistoryResponseEntry) + ntohl(pListAnswer->m_iFilenameLen) +
ntohl(pListAnswer->m_iDestDirLen) + ntohl(pListAnswer->m_iCategoryLen) +
ntohl(pListAnswer->m_iQueuedFilenameLen);
pBufPtr += sizeof(SNZBHistoryResponseEntry) + ntohl(pListAnswer->m_iNicenameLen);
}
printf("-----------------------------------\n");
@@ -1080,3 +1183,117 @@ bool RemoteClient::RequestHistory()
return true;
}
bool RemoteClient::RequestServerDownloadUrl(const char* szURL, const char* szNZBFilename, const char* szCategory, bool bAddFirst, bool bAddPaused, int iPriority)
{
if (!InitConnection()) return false;
SNZBDownloadUrlRequest DownloadUrlRequest;
InitMessageBase(&DownloadUrlRequest.m_MessageBase, eRemoteRequestDownloadUrl, sizeof(DownloadUrlRequest));
DownloadUrlRequest.m_bAddFirst = htonl(bAddFirst);
DownloadUrlRequest.m_bAddPaused = htonl(bAddPaused);
DownloadUrlRequest.m_iPriority = htonl(iPriority);
strncpy(DownloadUrlRequest.m_szURL, szURL, NZBREQUESTFILENAMESIZE - 1);
DownloadUrlRequest.m_szURL[NZBREQUESTFILENAMESIZE-1] = '\0';
DownloadUrlRequest.m_szCategory[0] = '\0';
if (szCategory)
{
strncpy(DownloadUrlRequest.m_szCategory, szCategory, NZBREQUESTFILENAMESIZE - 1);
}
DownloadUrlRequest.m_szCategory[NZBREQUESTFILENAMESIZE-1] = '\0';
DownloadUrlRequest.m_szNZBFilename[0] = '\0';
if (szNZBFilename)
{
strncpy(DownloadUrlRequest.m_szNZBFilename, szNZBFilename, NZBREQUESTFILENAMESIZE - 1);
}
DownloadUrlRequest.m_szNZBFilename[NZBREQUESTFILENAMESIZE-1] = '\0';
bool OK = m_pConnection->Send((char*)(&DownloadUrlRequest), sizeof(DownloadUrlRequest));
if (OK)
{
OK = ReceiveBoolResponse();
}
else
{
perror("m_pConnection->Send");
}
m_pConnection->Disconnect();
return OK;
}
bool RemoteClient::RequestUrlQueue()
{
if (!InitConnection()) return false;
SNZBUrlQueueRequest UrlQueueRequest;
InitMessageBase(&UrlQueueRequest.m_MessageBase, eRemoteRequestUrlQueue, sizeof(UrlQueueRequest));
if (!m_pConnection->Send((char*)(&UrlQueueRequest), sizeof(UrlQueueRequest)))
{
perror("m_pConnection->Send");
return false;
}
printf("Request sent\n");
// Now listen for the returned list
SNZBUrlQueueResponse UrlQueueResponse;
bool bRead = m_pConnection->Recv((char*) &UrlQueueResponse, sizeof(UrlQueueResponse));
if (!bRead ||
(int)ntohl(UrlQueueResponse.m_MessageBase.m_iSignature) != (int)NZBMESSAGE_SIGNATURE ||
ntohl(UrlQueueResponse.m_MessageBase.m_iStructSize) != sizeof(UrlQueueResponse))
{
printf("No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)\n");
return false;
}
char* pBuf = NULL;
if (ntohl(UrlQueueResponse.m_iTrailingDataLength) > 0)
{
pBuf = (char*)malloc(ntohl(UrlQueueResponse.m_iTrailingDataLength));
if (!m_pConnection->Recv(pBuf, ntohl(UrlQueueResponse.m_iTrailingDataLength)))
{
free(pBuf);
return false;
}
}
m_pConnection->Disconnect();
if (ntohl(UrlQueueResponse.m_iTrailingDataLength) == 0)
{
printf("Server has no urls queued for download\n");
}
else
{
printf("Url-Queue\n");
printf("-----------------------------------\n");
char* pBufPtr = (char*)pBuf;
for (unsigned int i = 0; i < ntohl(UrlQueueResponse.m_iNrTrailingEntries); i++)
{
SNZBUrlQueueResponseEntry* pUrlQueueAnswer = (SNZBUrlQueueResponseEntry*) pBufPtr;
const char* szURL = pBufPtr + sizeof(SNZBUrlQueueResponseEntry);
const char* szTitle = pBufPtr + sizeof(SNZBUrlQueueResponseEntry) + ntohl(pUrlQueueAnswer->m_iURLLen);
char szNiceName[1024];
UrlInfo::MakeNiceName(szURL, szTitle, szNiceName, 1024);
printf("[%i] %s\n", ntohl(pUrlQueueAnswer->m_iID), szNiceName);
pBufPtr += sizeof(SNZBUrlQueueResponseEntry) + ntohl(pUrlQueueAnswer->m_iURLLen) +
ntohl(pUrlQueueAnswer->m_iNZBFilenameLen);
}
free(pBuf);
printf("-----------------------------------\n");
}
return true;
}
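
Every binary request in this file follows the same framing, which the hunks above repeatedly adjust: a fixed-size request struct is initialized via InitMessageBase(), sent as raw bytes, and the response header is validated by signature and struct size before any trailing payload is read. A minimal standalone sketch of that validation step, using invented Example* structs and an invented signature value instead of the real SNZB* types and NZBMESSAGE_SIGNATURE:

#include <arpa/inet.h>   // htonl/ntohl
#include <cstdint>

// Hypothetical stand-ins for the real SNZB* message structs.
struct ExampleMessageBase
{
    uint32_t m_iSignature;    // protocol signature (network byte order)
    uint32_t m_iStructSize;   // sizeof() of the enclosing struct
    uint32_t m_iType;         // request/response type id
};

struct ExampleResponse
{
    ExampleMessageBase m_MessageBase;
    uint32_t m_iTrailingDataLength;  // length of payload that follows
};

static const uint32_t EXAMPLE_SIGNATURE = 0x6E7A6267;  // invented value

// Mirrors the check performed after every Recv() of a response header above:
// a wrong signature or struct size means "not an nzbget-server or a different
// protocol version", and the client gives up.
bool ResponseHeaderLooksValid(const ExampleResponse& Response)
{
    return ntohl(Response.m_MessageBase.m_iSignature) == EXAMPLE_SIGNATURE &&
           ntohl(Response.m_MessageBase.m_iStructSize) == sizeof(ExampleResponse);
}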


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -35,8 +35,19 @@
class RemoteClient
{
private:
class MatchedNZBInfo: public NZBInfo
{
public:
bool m_bMatch;
};
class MatchedFileInfo: public FileInfo
{
public:
bool m_bMatch;
};
Connection* m_pConnection;
NetAddress* m_pNetAddress;
bool m_bVerbose;
bool InitConnection();
@@ -49,19 +60,23 @@ public:
RemoteClient();
~RemoteClient();
void SetVerbose(bool bVerbose) { m_bVerbose = bVerbose; };
bool RequestServerDownload(const char* szFilename, const char* szCategory, bool bAddFirst);
bool RequestServerList(bool bFiles, bool bGroups);
bool RequestServerDownload(const char* szFilename, const char* szCategory, bool bAddFirst, bool bAddPaused, int iPriority);
bool RequestServerList(bool bFiles, bool bGroups, const char* szPattern);
bool RequestServerPauseUnpause(bool bPause, eRemotePauseUnpauseAction iAction);
bool RequestServerSetDownloadRate(float fRate);
bool RequestServerSetDownloadRate(int iRate);
bool RequestServerDumpDebug();
bool RequestServerEditQueue(eRemoteEditAction iAction, int iOffset, const char* szText, int* pIDList, int iIDCount, bool bSmartOrder);
bool RequestServerEditQueue(eRemoteEditAction iAction, int iOffset, const char* szText,
int* pIDList, int iIDCount, NameList* pNameList, eRemoteMatchMode iMatchMode, bool bSmartOrder);
bool RequestServerLog(int iLines);
bool RequestServerShutdown();
bool RequestServerReload();
bool RequestServerVersion();
bool RequestPostQueue();
bool RequestWriteLog(int iKind, const char* szText);
bool RequestScan();
bool RequestScan(bool bSyncMode);
bool RequestHistory();
bool RequestServerDownloadUrl(const char* szURL, const char* szNZBFilename, const char* szCategory, bool bAddFirst, bool bAddPaused, int iPriority);
bool RequestUrlQueue();
void BuildFileList(SNZBListResponse* pListResponse, const char* pTrailingData, DownloadQueue* pDownloadQueue);
};
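
A hedged usage sketch for the extended client API declared above; the literal argument values and the surrounding function are placeholders, only the method signatures come from this header.

// Illustration only -- argument values are placeholders.
void ExampleClientCalls(RemoteClient& Client)
{
    Client.SetVerbose(true);

    // request a synchronous scan of the incoming nzb-directory
    Client.RequestScan(true /*bSyncMode*/);

    // queue an URL: no explicit nzb-filename/category, append to the end,
    // not paused, default priority
    Client.RequestServerDownloadUrl("http://example.com/some.nzb",
        NULL /*szNZBFilename*/, NULL /*szCategory*/,
        false /*bAddFirst*/, false /*bAddPaused*/, 0 /*iPriority*/);

    // print the queued URLs
    Client.RequestUrlQueue();
}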


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -44,20 +44,21 @@
#include "nzbget.h"
#include "RemoteServer.h"
#include "BinRpc.h"
#include "XmlRpc.h"
#include "WebServer.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
//*****************************************************************
// RemoteServer
RemoteServer::RemoteServer()
RemoteServer::RemoteServer(bool bTLS)
{
debug("Creating RemoteServer");
m_pNetAddress = new NetAddress(g_pOptions->GetServerIP(), g_pOptions->GetServerPort());
m_bTLS = bTLS;
m_pConnection = NULL;
}
@@ -65,36 +66,51 @@ RemoteServer::~RemoteServer()
{
debug("Destroying RemoteServer");
if (m_pConnection)
{
delete m_pConnection;
}
delete m_pNetAddress;
delete m_pConnection;
}
void RemoteServer::Run()
{
debug("Entering RemoteServer-loop");
#ifndef DISABLE_TLS
if (m_bTLS)
{
if (strlen(g_pOptions->GetSecureCert()) == 0 || !Util::FileExists(g_pOptions->GetSecureCert()))
{
error("Could not initialize TLS, secure certificate is not configured or the cert-file was not found. Check option <SecureCert>");
return;
}
if (strlen(g_pOptions->GetSecureKey()) == 0 || !Util::FileExists(g_pOptions->GetSecureKey()))
{
error("Could not initialize TLS, secure key is not configured or the key-file was not found. Check option <SecureKey>");
return;
}
}
#endif
while (!IsStopped())
{
bool bBind = true;
if (!m_pConnection)
{
m_pConnection = new Connection(m_pNetAddress);
m_pConnection = new Connection(g_pOptions->GetControlIP(),
m_bTLS ? g_pOptions->GetSecurePort() : g_pOptions->GetControlPort(),
m_bTLS);
m_pConnection->SetTimeout(g_pOptions->GetConnectionTimeout());
m_pConnection->SetSuppressErrors(false);
bBind = m_pConnection->Bind() == 0;
bBind = m_pConnection->Bind();
}
// Accept connections and store the "new" socket value
SOCKET iSocket = INVALID_SOCKET;
// Accept connections and store the new Connection
Connection* pAcceptedConnection = NULL;
if (bBind)
{
iSocket = m_pConnection->Accept();
pAcceptedConnection = m_pConnection->Accept();
}
if (!bBind || iSocket == INVALID_SOCKET)
if (!bBind || pAcceptedConnection == NULL)
{
// Remote server could not bind or accept connection, waiting 1/2 sec and try again
if (IsStopped())
@@ -109,9 +125,13 @@ void RemoteServer::Run()
RequestProcessor* commandThread = new RequestProcessor();
commandThread->SetAutoDestroy(true);
commandThread->SetSocket(iSocket);
commandThread->SetConnection(pAcceptedConnection);
#ifndef DISABLE_TLS
commandThread->SetTLS(m_bTLS);
#endif
commandThread->Start();
}
if (m_pConnection)
{
m_pConnection->Disconnect();
@@ -136,32 +156,32 @@ void RemoteServer::Stop()
//*****************************************************************
// RequestProcessor
RequestProcessor::~RequestProcessor()
{
m_pConnection->Disconnect();
delete m_pConnection;
}
void RequestProcessor::Run()
{
// Read the first 4 bytes to determine request type
bool bOK = false;
int iSignature = 0;
int iBytesReceived = recv(m_iSocket, (char*)&iSignature, sizeof(iSignature), 0);
if (iBytesReceived < 0)
m_pConnection->SetSuppressErrors(true);
#ifndef DISABLE_TLS
if (m_bTLS && !m_pConnection->StartTLS(false, g_pOptions->GetSecureCert(), g_pOptions->GetSecureKey()))
{
debug("Could not establish secure connection to web-client: Start TLS failed");
return;
}
#endif
// Info - connection received
#ifdef WIN32
char* ip = NULL;
#else
char ip[20];
#endif
struct sockaddr_in PeerName;
int iPeerNameLength = sizeof(PeerName);
if (getpeername(m_iSocket, (struct sockaddr*)&PeerName, (SOCKLEN_T*) &iPeerNameLength) >= 0)
// Read the first 4 bytes to determine request type
int iSignature = 0;
if (!m_pConnection->Recv((char*)&iSignature, 4))
{
#ifdef WIN32
ip = inet_ntoa(PeerName.sin_addr);
#else
inet_ntop(AF_INET, &PeerName.sin_addr, ip, sizeof(ip));
#endif
debug("Could not read request signature, request received on port %i from %s", m_bTLS ? g_pOptions->GetSecurePort() : g_pOptions->GetControlPort(), m_pConnection->GetRemoteAddr());
return;
}
if ((int)ntohl(iSignature) == (int)NZBMESSAGE_SIGNATURE)
@@ -169,62 +189,47 @@ void RequestProcessor::Run()
// binary request received
bOK = true;
BinRpcProcessor processor;
processor.SetSocket(m_iSocket);
processor.SetSignature(iSignature);
processor.SetClientIP(ip);
processor.SetConnection(m_pConnection);
processor.Execute();
}
else if (!strncmp((char*)&iSignature, "POST", 4) || !strncmp((char*)&iSignature, "GET ", 4))
else if (!strncmp((char*)&iSignature, "POST", 4) ||
!strncmp((char*)&iSignature, "GET ", 4) ||
!strncmp((char*)&iSignature, "OPTI", 4))
{
// XML-RPC or JSON-RPC request received
Connection con(m_iSocket, false);
// HTTP request received
char szBuffer[1024];
if (con.ReadLine(szBuffer, sizeof(szBuffer), NULL))
if (m_pConnection->ReadLine(szBuffer, sizeof(szBuffer), NULL))
{
XmlRpcProcessor::EHttpMethod eHttpMethod = XmlRpcProcessor::hmGet;
WebProcessor::EHttpMethod eHttpMethod = WebProcessor::hmGet;
char* szUrl = szBuffer;
if (!strncmp((char*)&iSignature, "POST", 4))
{
eHttpMethod = XmlRpcProcessor::hmPost;
eHttpMethod = WebProcessor::hmPost;
szUrl++;
}
if (!strncmp((char*)&iSignature, "OPTI", 4) && strlen(szUrl) > 4)
{
eHttpMethod = WebProcessor::hmOptions;
szUrl += 4;
}
if (char* p = strchr(szUrl, ' '))
{
*p = '\0';
}
XmlRpcProcessor::ERpcProtocol eProtocol = XmlRpcProcessor::rpUndefined;
if (!strcmp(szUrl, "/xmlrpc") || !strncmp(szUrl, "/xmlrpc/", 8))
{
eProtocol = XmlRpcProcessor::rpXmlRpc;
}
else if (!strcmp(szUrl, "/jsonrpc") || !strncmp(szUrl, "/jsonrpc/", 9))
{
eProtocol = XmlRpcProcessor::rpJsonRpc;
}
else if (!strcmp(szUrl, "/jsonprpc") || !strncmp(szUrl, "/jsonprpc/", 10))
{
eProtocol = XmlRpcProcessor::rpJsonPRpc;
}
debug("url: %s", szUrl);
if (eProtocol != XmlRpcProcessor::rpUndefined)
{
XmlRpcProcessor processor;
processor.SetConnection(&con);
processor.SetClientIP(ip);
processor.SetProtocol(eProtocol);
processor.SetHttpMethod(eHttpMethod);
processor.SetUrl(szUrl);
processor.Execute();
bOK = true;
}
WebProcessor processor;
processor.SetConnection(m_pConnection);
processor.SetUrl(szUrl);
processor.SetHttpMethod(eHttpMethod);
processor.Execute();
bOK = true;
}
}
if (!bOK)
{
warn("Non-nzbget request received on port %i from %s", g_pOptions->GetServerPort(), ip);
warn("Non-nzbget request received on port %i from %s", m_bTLS ? g_pOptions->GetSecurePort() : g_pOptions->GetControlPort(), m_pConnection->GetRemoteAddr());
}
closesocket(m_iSocket);
}
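
The rewritten RequestProcessor::Run() above decides between the binary protocol and HTTP by looking at the first four received bytes. A standalone sketch of that classification with an invented enum; the string comparisons match the ones used above:

#include <arpa/inet.h>
#include <cstring>

enum ERequestKind { rkBinary, rkHttp, rkUnknown };

// iSignature holds the first 4 bytes exactly as received from the socket;
// iExpectedSignature is the binary-protocol signature in host byte order.
ERequestKind ClassifyRequest(int iSignature, int iExpectedSignature)
{
    if ((int)ntohl(iSignature) == iExpectedSignature)
    {
        return rkBinary;   // nzbget binary protocol -> BinRpcProcessor
    }
    if (!strncmp((char*)&iSignature, "POST", 4) ||
        !strncmp((char*)&iSignature, "GET ", 4) ||
        !strncmp((char*)&iSignature, "OPTI", 4))
    {
        return rkHttp;     // XML-RPC/JSON-RPC/web-ui -> WebProcessor
    }
    return rkUnknown;      // logged as "Non-nzbget request"
}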


@@ -1,8 +1,8 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@users.sourceforge.net>
* Copyright (C) 2007 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2005 Bo Cordes Petersen <placebodk@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -28,17 +28,16 @@
#define REMOTESERVER_H
#include "Thread.h"
#include "NetAddress.h"
#include "Connection.h"
class RemoteServer : public Thread
{
private:
NetAddress* m_pNetAddress;
bool m_bTLS;
Connection* m_pConnection;
public:
RemoteServer();
RemoteServer(bool bTLS);
~RemoteServer();
virtual void Run();
virtual void Stop();
@@ -47,11 +46,14 @@ public:
class RequestProcessor : public Thread
{
private:
SOCKET m_iSocket;
bool m_bTLS;
Connection* m_pConnection;
public:
~RequestProcessor();
virtual void Run();
void SetSocket(SOCKET iSocket) { m_iSocket = iSocket; };
void SetTLS(bool bTLS) { m_bTLS = bTLS; }
void SetConnection(Connection* pConnection) { m_pConnection = pConnection; }
};
#endif
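
A hypothetical wiring sketch for the new TLS flag (not taken from this patch): one RemoteServer instance for the plain control port and, when a secure port is configured, a second instance with bTLS set. The condition and the Start() call (assumed to come from the Thread base class) are placeholders.

// Illustration only; the condition and Start() are assumptions.
RemoteServer* pPlainServer = new RemoteServer(false);
pPlainServer->Start();

RemoteServer* pSecureServer = NULL;
if (bSecurePortConfigured)   // placeholder for "SecurePort/SecureCert set"
{
    pSecureServer = new RemoteServer(true);
    pSecureServer->Start();
}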


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -24,7 +24,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -33,12 +33,13 @@
#include <stdlib.h>
#include <string.h>
#include <fstream>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <sys/stat.h>
#include <errno.h>
#include "nzbget.h"
#include "Scanner.h"
@@ -65,14 +66,53 @@ Scanner::FileData::~FileData()
free(m_szFilename);
}
Scanner::QueueData::QueueData(const char* szFilename, const char* szNZBName, const char* szCategory,
int iPriority, const char* szDupeKey, int iDupeScore, EDupeMode eDupeMode,
NZBParameterList* pParameters, bool bAddTop, bool bAddPaused, EAddStatus* pAddStatus)
{
m_szFilename = strdup(szFilename);
m_szNZBName = strdup(szNZBName);
m_szCategory = strdup(szCategory ? szCategory : "");
m_iPriority = iPriority;
m_szDupeKey = strdup(szDupeKey ? szDupeKey : "");
m_iDupeScore = iDupeScore;
m_eDupeMode = eDupeMode;
m_bAddTop = bAddTop;
m_bAddPaused = bAddPaused;
m_pAddStatus = pAddStatus;
if (pParameters)
{
m_Parameters.CopyFrom(pParameters);
}
}
Scanner::QueueData::~QueueData()
{
free(m_szFilename);
free(m_szNZBName);
free(m_szCategory);
free(m_szDupeKey);
}
void Scanner::QueueData::SetAddStatus(EAddStatus eAddStatus)
{
if (m_pAddStatus)
{
*m_pAddStatus = eAddStatus;
}
}
Scanner::Scanner()
{
debug("Creating Scanner");
m_bRequestedNZBDirScan = false;
m_bScanning = false;
m_iNZBDirInterval = g_pOptions->GetNzbDirInterval() * 1000;
m_iPass = 0;
m_iStepMSec = 0;
const char* szNZBScript = g_pOptions->GetNZBProcess();
m_bNZBScript = szNZBScript && strlen(szNZBScript) > 0;
@@ -87,18 +127,38 @@ Scanner::~Scanner()
delete *it;
}
m_FileList.clear();
ClearQueueList();
}
void Scanner::ClearQueueList()
{
for (QueueList::iterator it = m_QueueList.begin(); it != m_QueueList.end(); it++)
{
delete *it;
}
m_QueueList.clear();
}
void Scanner::Check()
{
if (g_pOptions->GetNzbDir() && (m_bRequestedNZBDirScan ||
m_mutexScan.Lock();
if (m_bRequestedNZBDirScan ||
(!g_pOptions->GetPauseScan() && g_pOptions->GetNzbDirInterval() > 0 &&
m_iNZBDirInterval >= g_pOptions->GetNzbDirInterval() * 1000)))
m_iNZBDirInterval >= g_pOptions->GetNzbDirInterval() * 1000))
{
// check nzbdir every g_pOptions->GetNzbDirInterval() seconds or if requested
bool bCheckStat = !m_bRequestedNZBDirScan;
m_bRequestedNZBDirScan = false;
m_bScanning = true;
CheckIncomingNZBs(g_pOptions->GetNzbDir(), "", bCheckStat);
if (!bCheckStat && m_bNZBScript)
{
// if immediate scan requested, we need a second scan to process files extracted by NzbProcess-script
CheckIncomingNZBs(g_pOptions->GetNzbDir(), "", bCheckStat);
}
m_bScanning = false;
m_iNZBDirInterval = 0;
// if NzbDirFileAge is less than NzbDirInterval (that can happen if NzbDirInterval
@@ -106,7 +166,7 @@ void Scanner::Check()
// - one additional scan is necessary to check sizes of detected files;
// - another scan is required to check files which were extracted by NzbProcess-script;
// - third scan is needed to check sizes of extracted files.
if (g_pOptions->GetNzbDirFileAge() < g_pOptions->GetNzbDirInterval())
if (g_pOptions->GetNzbDirInterval() > 0 && g_pOptions->GetNzbDirFileAge() < g_pOptions->GetNzbDirInterval())
{
int iMaxPass = m_bNZBScript ? 3 : 1;
if (m_iPass < iMaxPass)
@@ -122,8 +182,11 @@ void Scanner::Check()
}
DropOldFiles();
ClearQueueList();
}
m_iNZBDirInterval += m_iStepMSec;
m_iNZBDirInterval += 200;
m_mutexScan.Unlock();
}
/**
@@ -260,7 +323,8 @@ void Scanner::DropOldFiles()
}
}
void Scanner::ProcessIncomingFile(const char* szDirectory, const char* szBaseFilename, const char* szFullFilename, const char* szCategory)
void Scanner::ProcessIncomingFile(const char* szDirectory, const char* szBaseFilename,
const char* szFullFilename, const char* szCategory)
{
const char* szExtension = strrchr(szBaseFilename, '.');
if (!szExtension)
@@ -268,14 +332,47 @@ void Scanner::ProcessIncomingFile(const char* szDirectory, const char* szBaseFil
return;
}
char* szNZBName = strdup("");
char* szNZBCategory = strdup(szCategory);
NZBParameterList* pParameterList = new NZBParameterList;
NZBParameterList* pParameters = new NZBParameterList();
int iPriority = 0;
bool bAddTop = false;
bool bAddPaused = false;
const char* szDupeKey = NULL;
int iDupeScore = 0;
EDupeMode eDupeMode = dmScore;
EAddStatus eAddStatus = asSkipped;
bool bAdded = false;
QueueData* pQueueData = NULL;
for (QueueList::iterator it = m_QueueList.begin(); it != m_QueueList.end(); it++)
{
QueueData* pQueueData1 = *it;
if (Util::SameFilename(pQueueData1->GetFilename(), szFullFilename))
{
pQueueData = pQueueData1;
free(szNZBName);
szNZBName = strdup(pQueueData->GetNZBName());
free(szNZBCategory);
szNZBCategory = strdup(pQueueData->GetCategory());
iPriority = pQueueData->GetPriority();
szDupeKey = pQueueData->GetDupeKey();
iDupeScore = pQueueData->GetDupeScore();
eDupeMode = pQueueData->GetDupeMode();
bAddTop = pQueueData->GetAddTop();
bAddPaused = pQueueData->GetAddPaused();
pParameters->CopyFrom(pQueueData->GetParameters());
}
}
InitPPParameters(szNZBCategory, pParameters);
bool bExists = true;
if (m_bNZBScript && strcasecmp(szExtension, ".nzb_processed"))
{
NZBScriptController::ExecuteScript(g_pOptions->GetNZBProcess(), szFullFilename, szDirectory, &szNZBCategory, pParameterList);
NZBScriptController::ExecuteScript(g_pOptions->GetNZBProcess(), szFullFilename, szDirectory,
&szNZBName, &szNZBCategory, &iPriority, pParameters, &bAddTop, &bAddPaused);
bExists = Util::FileExists(szFullFilename);
if (bExists && strcasecmp(szExtension, ".nzb"))
{
@@ -283,7 +380,8 @@ void Scanner::ProcessIncomingFile(const char* szDirectory, const char* szBaseFil
bool bRenameOK = Util::RenameBak(szFullFilename, "processed", false, bakname2, 1024);
if (!bRenameOK)
{
error("Could not rename file %s to %s! Errcode: %i", szFullFilename, bakname2, errno);
char szSysErrStr[256];
error("Could not rename file %s to %s: %s", szFullFilename, bakname2, Util::GetLastErrorMessage(szSysErrStr, sizeof(szSysErrStr)));
}
}
}
@@ -294,70 +392,260 @@ void Scanner::ProcessIncomingFile(const char* szDirectory, const char* szBaseFil
bool bRenameOK = Util::RenameBak(szFullFilename, "nzb", true, szRenamedName, 1024);
if (bRenameOK)
{
AddFileToQueue(szRenamedName, szNZBCategory, pParameterList);
bAdded = AddFileToQueue(szRenamedName, szNZBName, szNZBCategory, iPriority,
szDupeKey, iDupeScore, eDupeMode, pParameters, bAddTop, bAddPaused);
}
else
{
error("Could not rename file %s to %s! Errcode: %i", szFullFilename, szRenamedName, errno);
char szSysErrStr[256];
error("Could not rename file %s to %s: %s", szFullFilename, szRenamedName, Util::GetLastErrorMessage(szSysErrStr, sizeof(szSysErrStr)));
eAddStatus = asFailed;
}
}
else if (bExists && !strcasecmp(szExtension, ".nzb"))
{
AddFileToQueue(szFullFilename, szNZBCategory, pParameterList);
bAdded = AddFileToQueue(szFullFilename, szNZBName, szNZBCategory, iPriority,
szDupeKey, iDupeScore, eDupeMode, pParameters, bAddTop, bAddPaused);
}
for (NZBParameterList::iterator it = pParameterList->begin(); it != pParameterList->end(); it++)
{
delete *it;
}
pParameterList->clear();
delete pParameterList;
delete pParameters;
free(szNZBName);
free(szNZBCategory);
if (pQueueData)
{
pQueueData->SetAddStatus(eAddStatus == asFailed ? asFailed : bAdded ? asSuccess : asSkipped);
}
}
void Scanner::AddFileToQueue(const char* szFilename, const char* szCategory, NZBParameterList* pParameterList)
void Scanner::InitPPParameters(const char* szCategory, NZBParameterList* pParameters)
{
bool bUnpack = g_pOptions->GetUnpack();
const char* szDefScript = g_pOptions->GetDefScript();
if (szCategory && *szCategory)
{
Options::Category* pCategory = g_pOptions->FindCategory(szCategory, false);
if (pCategory)
{
bUnpack = pCategory->GetUnpack();
if (pCategory->GetDefScript() && *pCategory->GetDefScript())
{
szDefScript = pCategory->GetDefScript();
}
}
}
pParameters->SetParameter("*Unpack:", bUnpack ? "yes" : "no");
if (szDefScript && *szDefScript)
{
// split szDefScript into tokens and create pp-parameter for each token
char* szDefScript2 = strdup(szDefScript);
char* saveptr;
char* szScriptName = strtok_r(szDefScript2, ",;", &saveptr);
while (szScriptName)
{
szScriptName = Util::Trim(szScriptName);
if (szScriptName[0] != '\0')
{
char szParam[1024];
snprintf(szParam, 1024, "%s:", szScriptName);
szParam[1024-1] = '\0';
pParameters->SetParameter(szParam, "yes");
}
szScriptName = strtok_r(NULL, ",;", &saveptr);
}
free(szDefScript2);
}
}
bool Scanner::AddFileToQueue(const char* szFilename, const char* szNZBName, const char* szCategory,
int iPriority, const char* szDupeKey, int iDupeScore, EDupeMode eDupeMode,
NZBParameterList* pParameters, bool bAddTop, bool bAddPaused)
{
const char* szBasename = Util::BaseFileName(szFilename);
info("Collection %s found", szBasename);
NZBFile* pNZBFile = NZBFile::CreateFromFile(szFilename, szCategory);
if (!pNZBFile)
NZBFile* pNZBFile = NZBFile::Create(szFilename, szCategory);
bool bOK = pNZBFile != NULL;
if (!bOK)
{
error("Could not add collection %s to queue", szBasename);
}
char bakname2[1024];
bool bRenameOK = Util::RenameBak(szFilename, pNZBFile ? "queued" : "error", false, bakname2, 1024);
if (!bRenameOK)
if (!Util::RenameBak(szFilename, pNZBFile ? "queued" : "error", false, bakname2, 1024))
{
error("Could not rename file %s to %s! Errcode: %i", szFilename, bakname2, errno);
bOK = false;
char szSysErrStr[256];
error("Could not rename file %s to %s: %s", szFilename, bakname2, Util::GetLastErrorMessage(szSysErrStr, sizeof(szSysErrStr)));
}
if (pNZBFile && bRenameOK)
if (bOK)
{
pNZBFile->GetNZBInfo()->SetQueuedFilename(bakname2);
for (NZBParameterList::iterator it = pParameterList->begin(); it != pParameterList->end(); it++)
if (szNZBName && strlen(szNZBName) > 0)
{
NZBParameter* pParameter = *it;
pNZBFile->GetNZBInfo()->SetParameter(pParameter->GetName(), pParameter->GetValue());
pNZBFile->GetNZBInfo()->SetName(NULL);
#ifdef WIN32
char* szAnsiFilename = strdup(szNZBName);
WebUtil::Utf8ToAnsi(szAnsiFilename, strlen(szAnsiFilename) + 1);
pNZBFile->GetNZBInfo()->SetFilename(szAnsiFilename);
free(szAnsiFilename);
#else
pNZBFile->GetNZBInfo()->SetFilename(szNZBName);
#endif
pNZBFile->GetNZBInfo()->BuildDestDirName();
}
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, false);
info("Collection %s added to queue", szBasename);
pNZBFile->GetNZBInfo()->SetDupeKey(szDupeKey);
pNZBFile->GetNZBInfo()->SetDupeScore(iDupeScore);
pNZBFile->GetNZBInfo()->SetDupeMode(eDupeMode);
if (pNZBFile->GetPassword())
{
pNZBFile->GetNZBInfo()->GetParameters()->SetParameter("*Unpack:Password", pNZBFile->GetPassword());
}
pNZBFile->GetNZBInfo()->GetParameters()->CopyFrom(pParameters);
for (NZBFile::FileInfos::iterator it = pNZBFile->GetFileInfos()->begin(); it != pNZBFile->GetFileInfos()->end(); it++)
{
FileInfo* pFileInfo = *it;
pFileInfo->SetPriority(iPriority);
pFileInfo->SetPaused(bAddPaused);
}
g_pQueueCoordinator->AddNZBFileToQueue(pNZBFile, bAddTop);
}
if (pNZBFile)
{
delete pNZBFile;
}
delete pNZBFile;
return bOK;
}
void Scanner::ScanNZBDir()
void Scanner::ScanNZBDir(bool bSyncMode)
{
// ideally we should use mutex to access "m_bRequestedNZBDirScan",
// but it's not critical here.
m_mutexScan.Lock();
m_bScanning = true;
m_bRequestedNZBDirScan = true;
m_mutexScan.Unlock();
while (bSyncMode && (m_bScanning || m_bRequestedNZBDirScan))
{
usleep(100 * 1000);
}
}
Scanner::EAddStatus Scanner::AddExternalFile(const char* szNZBName, const char* szCategory,
int iPriority, const char* szDupeKey, int iDupeScore, EDupeMode eDupeMode,
NZBParameterList* pParameters, bool bAddTop, bool bAddPaused,
const char* szFileName, const char* szBuffer, int iBufSize)
{
bool bNZB = false;
char szTempFileName[1024];
if (szFileName)
{
strncpy(szTempFileName, szFileName, 1024);
szTempFileName[1024-1] = '\0';
}
else
{
int iNum = 1;
while (iNum == 1 || Util::FileExists(szTempFileName))
{
snprintf(szTempFileName, 1024, "%snzb-%i.tmp", g_pOptions->GetTempDir(), iNum);
szTempFileName[1024-1] = '\0';
iNum++;
}
if (!Util::SaveBufferIntoFile(szTempFileName, szBuffer, iBufSize))
{
error("Could not create file %s", szTempFileName);
return asFailed;
}
char buf[1024];
strncpy(buf, szBuffer, 1024);
buf[1024-1] = '\0';
bNZB = !strncmp(buf, "<?xml", 5) && strstr(buf, "<nzb");
}
// move file into NzbDir, make sure the file name is unique
char szValidNZBName[1024];
strncpy(szValidNZBName, Util::BaseFileName(szNZBName), 1024);
szValidNZBName[1024-1] = '\0';
Util::MakeValidFilename(szValidNZBName, '_', false);
#ifdef WIN32
WebUtil::Utf8ToAnsi(szValidNZBName, 1024);
#endif
const char* szExtension = strrchr(szNZBName, '.');
if (bNZB && (!szExtension || strcasecmp(szExtension, ".nzb")))
{
strncat(szValidNZBName, ".nzb", 1024 - strlen(szValidNZBName) - 1);
}
char szScanFileName[1024];
snprintf(szScanFileName, 1024, "%s%s", g_pOptions->GetNzbDir(), szValidNZBName);
char *szExt = strrchr(szValidNZBName, '.');
if (szExt)
{
*szExt = '\0';
szExt++;
}
int iNum = 2;
while (Util::FileExists(szScanFileName))
{
if (szExt)
{
snprintf(szScanFileName, 1024, "%s%s_%i.%s", g_pOptions->GetNzbDir(), szValidNZBName, iNum, szExt);
}
else
{
snprintf(szScanFileName, 1024, "%s%s_%i", g_pOptions->GetNzbDir(), szValidNZBName, iNum);
}
szScanFileName[1024-1] = '\0';
iNum++;
}
m_mutexScan.Lock();
if (!Util::MoveFile(szTempFileName, szScanFileName))
{
char szSysErrStr[256];
error("Could not move file %s to %s: %s", szTempFileName, szScanFileName, Util::GetLastErrorMessage(szSysErrStr, sizeof(szSysErrStr)));
remove(szTempFileName);
m_mutexScan.Unlock(); // UNLOCK
return asFailed;
}
char* szUseCategory = strdup(szCategory ? szCategory : "");
Options::Category *pCategory = g_pOptions->FindCategory(szCategory, true);
if (pCategory && strcmp(szCategory, pCategory->GetName()))
{
free(szUseCategory);
szUseCategory = strdup(pCategory->GetName());
detail("Category %s matched to %s for %s", szCategory, szUseCategory, szNZBName);
}
EAddStatus eAddStatus = asSkipped;
QueueData* pQueueData = new QueueData(szScanFileName, szNZBName, szUseCategory,
iPriority, szDupeKey, iDupeScore, eDupeMode, pParameters, bAddTop, bAddPaused, &eAddStatus);
free(szUseCategory);
m_QueueList.push_back(pQueueData);
m_mutexScan.Unlock();
ScanNZBDir(true);
return eAddStatus;
}
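
A standalone sketch of the ",;" token splitting that InitPPParameters() above applies to the DefScript option; trimming is simplified and the printf output is only for illustration.

#include <cstdio>
#include <cstdlib>
#include <cstring>

// Turn a list like "Script1.py, Script2.sh;Script3" into one
// "<name>:=yes" pp-parameter per entry, skipping empty tokens.
void PrintScriptParameters(const char* szDefScript)
{
    char* szCopy = strdup(szDefScript);
    char* saveptr;
    for (char* szToken = strtok_r(szCopy, ",;", &saveptr); szToken;
         szToken = strtok_r(NULL, ",;", &saveptr))
    {
        while (*szToken == ' ') szToken++;   // minimal left-trim for the sketch
        if (*szToken)
        {
            printf("%s:=yes\n", szToken);
        }
    }
    free(szCopy);
}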


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -29,9 +29,18 @@
#include <deque>
#include <time.h>
#include "DownloadInfo.h"
#include "Thread.h"
class Scanner
{
public:
enum EAddStatus
{
asSkipped,
asSuccess,
asFailed
};
private:
class FileData
{
@@ -52,25 +61,70 @@ private:
typedef std::deque<FileData*> FileList;
class QueueData
{
private:
char* m_szFilename;
char* m_szNZBName;
char* m_szCategory;
int m_iPriority;
char* m_szDupeKey;
int m_iDupeScore;
EDupeMode m_eDupeMode;
NZBParameterList m_Parameters;
bool m_bAddTop;
bool m_bAddPaused;
EAddStatus* m_pAddStatus;
public:
QueueData(const char* szFilename, const char* szNZBName, const char* szCategory,
int iPriority, const char* szDupeKey, int iDupeScore, EDupeMode eDupeMode,
NZBParameterList* pParameters, bool bAddTop, bool bAddPaused, EAddStatus* pAddStatus);
~QueueData();
const char* GetFilename() { return m_szFilename; }
const char* GetNZBName() { return m_szNZBName; }
const char* GetCategory() { return m_szCategory; }
int GetPriority() { return m_iPriority; }
const char* GetDupeKey() { return m_szDupeKey; }
int GetDupeScore() { return m_iDupeScore; }
EDupeMode GetDupeMode() { return m_eDupeMode; }
NZBParameterList* GetParameters() { return &m_Parameters; }
bool GetAddTop() { return m_bAddTop; }
bool GetAddPaused() { return m_bAddPaused; }
void SetAddStatus(EAddStatus eAddStatus);
};
typedef std::deque<QueueData*> QueueList;
bool m_bRequestedNZBDirScan;
int m_iNZBDirInterval;
bool m_bNZBScript;
int m_iPass;
int m_iStepMSec;
FileList m_FileList;
QueueList m_QueueList;
bool m_bScanning;
Mutex m_mutexScan;
void CheckIncomingNZBs(const char* szDirectory, const char* szCategory, bool bCheckStat);
void AddFileToQueue(const char* szFilename, const char* szCategory, NZBParameterList* pParameterList);
void ProcessIncomingFile(const char* szDirectory, const char* szBaseFilename, const char* szFullFilename, const char* szCategory);
bool AddFileToQueue(const char* szFilename, const char* szNZBName, const char* szCategory,
int iPriority, const char* szDupeKey, int iDupeScore, EDupeMode eDupeMode,
NZBParameterList* pParameters, bool bAddTop, bool bAddPaused);
void ProcessIncomingFile(const char* szDirectory, const char* szBaseFilename,
const char* szFullFilename, const char* szCategory);
bool CanProcessFile(const char* szFullFilename, bool bCheckStat);
void InitPPParameters(const char* szCategory, NZBParameterList* pParameters);
void DropOldFiles();
void ClearQueueList();
public:
Scanner();
~Scanner();
void SetStepInterval(int iStepMSec) { m_iStepMSec = iStepMSec; }
void ScanNZBDir();
void ScanNZBDir(bool bSyncMode);
void Check();
EAddStatus AddExternalFile(const char* szNZBName, const char* szCategory, int iPriority,
const char* szDupeKey, int iDupeScore, EDupeMode eDupeMode,
NZBParameterList* pParameters, bool bAddTop, bool bAddPaused,
const char* szFileName, const char* szBuffer, int iBufSize);
};
#endif
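
A hedged sketch of how a caller (for example an RPC handler) might hand a received nzb-buffer to the scanner via the new AddExternalFile(); g_pScanner and the buffer variables are placeholders, the parameter order follows the declaration above.

// Illustration only.
Scanner::EAddStatus eStatus = g_pScanner->AddExternalFile(
    "example.nzb",     // szNZBName
    "Movies",          // szCategory
    0,                 // iPriority
    "",                // szDupeKey
    0,                 // iDupeScore
    dmScore,           // eDupeMode
    NULL,              // pParameters (fall back to category defaults)
    false,             // bAddTop
    false,             // bAddPaused
    NULL,              // szFileName (NULL: take content from szBuffer)
    szNzbContent,      // szBuffer
    iNzbContentLen);   // iBufSize
if (eStatus != Scanner::asSuccess)
{
    // report the failure back to the caller
}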


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2008-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2008-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -24,7 +24,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -35,37 +35,35 @@
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "nzbget.h"
#include "Scheduler.h"
#include "ScriptController.h"
#include "Options.h"
#include "Log.h"
#include "NewsServer.h"
#include "ServerPool.h"
#include "FeedInfo.h"
#include "FeedCoordinator.h"
extern Options* g_pOptions;
extern ServerPool* g_pServerPool;
extern FeedCoordinator* g_pFeedCoordinator;
Scheduler::Task::Task(int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand,
int iDownloadRate, const char* szProcess)
Scheduler::Task::Task(int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand, const char* szParam)
{
m_iHours = iHours;
m_iMinutes = iMinutes;
m_iWeekDaysBits = iWeekDaysBits;
m_eCommand = eCommand;
m_iDownloadRate = iDownloadRate;
m_szProcess = NULL;
if (szProcess)
{
m_szProcess = strdup(szProcess);
}
m_szParam = szParam ? strdup(szParam) : NULL;
m_tLastExecuted = 0;
}
Scheduler::Task::~Task()
{
if (m_szProcess)
{
free(m_szProcess);
}
free(m_szParam);
}
@@ -111,9 +109,6 @@ void Scheduler::FirstCheck()
m_tLastCheck = tCurrent - 60*60*24*7;
m_bDetectClockChanges = false;
m_bExecuteProcess = false;
m_bDownloadRateChanged = false;
m_bPauseDownloadChanged = false;
m_bPauseScanChanged = false;
CheckTasks();
}
@@ -121,21 +116,16 @@ void Scheduler::IntervalCheck()
{
m_bDetectClockChanges = true;
m_bExecuteProcess = true;
m_bDownloadRateChanged = false;
m_bPauseDownloadChanged = false;
m_bPauseScanChanged = false;
CheckTasks();
}
void Scheduler::CheckTasks()
{
PrepareLog();
m_mutexTaskList.Lock();
time_t tCurrent = time(NULL);
struct tm tmCurrent;
localtime_r(&tCurrent, &tmCurrent);
struct tm tmLastCheck;
if (m_bDetectClockChanges)
{
@@ -155,9 +145,12 @@ void Scheduler::CheckTasks()
}
}
tm tmCurrent;
localtime_r(&tCurrent, &tmCurrent);
tm tmLastCheck;
localtime_r(&m_tLastCheck, &tmLastCheck);
struct tm tmLoop;
tm tmLoop;
memcpy(&tmLoop, &tmLastCheck, sizeof(tmLastCheck));
tmLoop.tm_hour = tmCurrent.tm_hour;
tmLoop.tm_min = tmCurrent.tm_min;
@@ -171,13 +164,15 @@ void Scheduler::CheckTasks()
Task* pTask = *it;
if (pTask->m_tLastExecuted != tLoop)
{
struct tm tmAppoint;
tm tmAppoint;
memcpy(&tmAppoint, &tmLoop, sizeof(tmLoop));
tmAppoint.tm_hour = pTask->m_iHours;
tmAppoint.tm_min = pTask->m_iMinutes;
tmAppoint.tm_sec = 0;
time_t tAppoint = mktime(&tmAppoint);
tAppoint -= g_pOptions->GetTimeCorrection();
int iWeekDay = tmAppoint.tm_wday;
if (iWeekDay == 0)
{
@@ -203,25 +198,24 @@ void Scheduler::CheckTasks()
m_tLastCheck = tCurrent;
m_mutexTaskList.Unlock();
PrintLog();
}
void Scheduler::ExecuteTask(Task* pTask)
{
if (pTask->m_eCommand == scDownloadRate)
{
debug("Executing scheduled command: Set download rate to %i", pTask->m_iDownloadRate);
}
else
{
const char* szCommandName[] = { "Pause", "Unpause", "Set download rate", "Execute program", "Pause Scan", "Unpause Scan" };
debug("Executing scheduled command: %s", szCommandName[pTask->m_eCommand]);
}
const char* szCommandName[] = { "Pause", "Unpause", "Set download rate", "Execute program", "Pause Scan", "Unpause Scan",
"Enable Server", "Disable Server", "Fetch Feed" };
debug("Executing scheduled command: %s", szCommandName[pTask->m_eCommand]);
switch (pTask->m_eCommand)
{
case scDownloadRate:
m_iDownloadRate = pTask->m_iDownloadRate;
m_bDownloadRateChanged = true;
if (!Util::EmptyStr(pTask->m_szParam))
{
g_pOptions->SetDownloadRate(atoi(pTask->m_szParam) * 1024);
m_bDownloadRateChanged = true;
}
break;
case scPauseDownload:
@@ -233,14 +227,126 @@ void Scheduler::ExecuteTask(Task* pTask)
case scProcess:
if (m_bExecuteProcess)
{
SchedulerScriptController::StartScript(pTask->m_szProcess);
SchedulerScriptController::StartScript(pTask->m_szParam);
}
break;
case scPauseScan:
case scUnpauseScan:
m_bPauseScan = pTask->m_eCommand == scPauseScan;
g_pOptions->SetPauseScan(pTask->m_eCommand == scPauseScan);
m_bPauseScanChanged = true;
break;
case scActivateServer:
case scDeactivateServer:
EditServer(pTask->m_eCommand == scActivateServer, pTask->m_szParam);
break;
case scFetchFeed:
if (m_bExecuteProcess)
{
FetchFeed(pTask->m_szParam);
break;
}
}
}
void Scheduler::PrepareLog()
{
m_bDownloadRateChanged = false;
m_bPauseDownloadChanged = false;
m_bPauseScanChanged = false;
m_bServerChanged = false;
}
void Scheduler::PrintLog()
{
if (m_bDownloadRateChanged)
{
info("Scheduler: setting download rate to %i KB/s", g_pOptions->GetDownloadRate() / 1024);
}
if (m_bPauseScanChanged)
{
info("Scheduler: %s scan", g_pOptions->GetPauseScan() ? "pausing" : "unpausing");
}
if (m_bServerChanged)
{
int index = 0;
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++, index++)
{
NewsServer* pServer = *it;
if (pServer->GetActive() != m_ServerStatusList[index])
{
info("Scheduler: %s %s", pServer->GetActive() ? "activating" : "deactivating", pServer->GetName());
}
}
g_pServerPool->Changed();
}
}
void Scheduler::EditServer(bool bActive, const char* szServerList)
{
char* szServerList2 = strdup(szServerList);
char* saveptr;
char* szServer = strtok_r(szServerList2, ",;", &saveptr);
while (szServer)
{
szServer = Util::Trim(szServer);
if (!Util::EmptyStr(szServer))
{
int iID = atoi(szServer);
for (Servers::iterator it = g_pServerPool->GetServers()->begin(); it != g_pServerPool->GetServers()->end(); it++)
{
NewsServer* pServer = *it;
if ((iID > 0 && pServer->GetID() == iID) ||
!strcasecmp(pServer->GetName(), szServer))
{
if (!m_bServerChanged)
{
// store old server status for logging
m_ServerStatusList.clear();
m_ServerStatusList.reserve(g_pServerPool->GetServers()->size());
for (Servers::iterator it2 = g_pServerPool->GetServers()->begin(); it2 != g_pServerPool->GetServers()->end(); it2++)
{
NewsServer* pServer2 = *it2;
m_ServerStatusList.push_back(pServer2->GetActive());
}
}
m_bServerChanged = true;
pServer->SetActive(bActive);
break;
}
}
}
szServer = strtok_r(NULL, ",;", &saveptr);
}
free(szServerList2);
}
void Scheduler::FetchFeed(const char* szFeedList)
{
char* szFeedList2 = strdup(szFeedList);
char* saveptr;
char* szFeed = strtok_r(szFeedList2, ",;", &saveptr);
while (szFeed)
{
szFeed = Util::Trim(szFeed);
if (!Util::EmptyStr(szFeed))
{
int iID = atoi(szFeed);
for (Feeds::iterator it = g_pFeedCoordinator->GetFeeds()->begin(); it != g_pFeedCoordinator->GetFeeds()->end(); it++)
{
FeedInfo* pFeed = *it;
if (pFeed->GetID() == iID ||
!strcasecmp(pFeed->GetName(), szFeed) ||
!strcasecmp("0", szFeed))
{
g_pFeedCoordinator->FetchFeed(pFeed->GetID());
break;
}
}
}
szFeed = strtok_r(NULL, ",;", &saveptr);
}
free(szFeedList2);
}
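
Both EditServer() and FetchFeed() above resolve each ",;"-separated token either as a numeric ID or as a case-insensitive name. A small standalone sketch of that matching rule, with an invented stand-in struct:

#include <cstdlib>
#include <cstring>
#include <strings.h>   // strcasecmp

// Stand-in for NewsServer/FeedInfo in this sketch.
struct ExampleItem { int iID; const char* szName; };

// atoi() returns 0 for non-numeric tokens, so a token matches either by
// positive ID or by name, exactly as in the loops above.
bool MatchesToken(const ExampleItem& Item, const char* szToken)
{
    int iID = atoi(szToken);
    return (iID > 0 && Item.iID == iID) || !strcasecmp(Item.szName, szToken);
}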


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2008-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2008-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -27,6 +27,7 @@
#define SCHEDULER_H
#include <list>
#include <vector>
#include <time.h>
#include "Thread.h"
@@ -41,7 +42,10 @@ public:
scDownloadRate,
scProcess,
scPauseScan,
scUnpauseScan
scUnpauseScan,
scActivateServer,
scDeactivateServer,
scFetchFeed
};
class Task
@@ -51,13 +55,12 @@ public:
int m_iMinutes;
int m_iWeekDaysBits;
ECommand m_eCommand;
int m_iDownloadRate;
char* m_szProcess;
char* m_szParam;
time_t m_tLastExecuted;
public:
Task(int iHours, int iMinutes, int iWeekDaysBits, ECommand eCommand,
int iDownloadRate, const char* szProcess);
const char* szParam);
~Task();
friend class Scheduler;
};
@@ -65,6 +68,7 @@ public:
private:
typedef std::list<Task*> TaskList;
typedef std::vector<bool> ServerStatusList;
TaskList m_TaskList;
Mutex m_mutexTaskList;
@@ -72,14 +76,18 @@ private:
bool m_bDetectClockChanges;
bool m_bDownloadRateChanged;
bool m_bExecuteProcess;
int m_iDownloadRate;
bool m_bPauseDownloadChanged;
bool m_bPauseDownload;
bool m_bPauseScanChanged;
bool m_bPauseScan;
bool m_bServerChanged;
ServerStatusList m_ServerStatusList;
void ExecuteTask(Task* pTask);
void CheckTasks();
static bool CompareTasks(Scheduler::Task* pTask1, Scheduler::Task* pTask2);
void PrepareLog();
void PrintLog();
void EditServer(bool bActive, const char* szServerList);
void FetchFeed(const char* szFeedList);
public:
Scheduler();
@@ -87,12 +95,8 @@ public:
void AddTask(Task* pTask);
void FirstCheck();
void IntervalCheck();
bool GetDownloadRateChanged() { return m_bDownloadRateChanged; }
int GetDownloadRate() { return m_iDownloadRate; }
bool GetPauseDownloadChanged() { return m_bPauseDownloadChanged; }
bool GetPauseDownload() { return m_bPauseDownload; }
bool GetPauseScanChanged() { return m_bPauseScanChanged; }
bool GetPauseScan() { return m_bPauseScan; }
};
#endif
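
A hedged sketch of building tasks with the unified string parameter declared above; g_pScheduler, the 0x7F weekday bitmask and the parameter values are placeholders.

// Illustration only.
Scheduler::Task* pRateTask = new Scheduler::Task(
    8, 0, 0x7F, Scheduler::scDownloadRate, "2000");   // 08:00: limit to 2000 KB/s
Scheduler::Task* pFeedTask = new Scheduler::Task(
    12, 30, 0x7F, Scheduler::scFetchFeed, "1,2");     // 12:30: fetch feeds 1 and 2
g_pScheduler->AddTask(pRateTask);
g_pScheduler->AddTask(pFeedTask);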


File diff suppressed because it is too large


@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -43,6 +43,7 @@ private:
public:
EnvironmentStrings();
~EnvironmentStrings();
void Clear();
void InitFromCurrentProcess();
void Append(char* szString);
#ifdef WIN32
@@ -59,28 +60,37 @@ private:
const char* m_szWorkingDir;
const char** m_szArgs;
bool m_bFreeArgs;
const char* m_szStdArgs[2];
const char* m_szInfoName;
const char* m_szDefaultKindPrefix;
const char* m_szLogPrefix;
EnvironmentStrings m_environmentStrings;
Options::EScriptLogKind m_eDefaultLogKind;
bool m_bTerminated;
bool m_bDetached;
FILE* m_pReadpipe;
#ifdef WIN32
HANDLE m_hProcess;
char m_szCmdLine[2048];
#else
pid_t m_hProcess;
#endif
void ProcessOutput(char* szText);
void PrepareEnvironmentStrings();
protected:
virtual void AddMessage(Message::EKind eKind, bool bDefaultKind, Options::EMessageTarget eMessageTarget, const char* szText);
void ProcessOutput(char* szText);
virtual bool ReadLine(char* szBuf, int iBufSize, FILE* pStream);
void PrintMessage(Message::EKind eKind, const char* szFormat, ...);
virtual void AddMessage(Message::EKind eKind, const char* szText);
bool GetTerminated() { return m_bTerminated; }
void ResetEnv();
void PrepareEnvOptions(const char* szStripPrefix);
void PrepareEnvParameters(NZBInfo* pNZBInfo, const char* szStripPrefix);
void PrepareArgs();
public:
ScriptController();
virtual ~ScriptController();
int Execute();
void Terminate();
void Detach();
void SetScript(const char* szScript) { m_szScript = szScript; }
const char* GetScript() { return m_szScript; }
@@ -88,42 +98,66 @@ public:
void SetArgs(const char** szArgs, bool bFreeArgs) { m_szArgs = szArgs; m_bFreeArgs = bFreeArgs; }
void SetInfoName(const char* szInfoName) { m_szInfoName = szInfoName; }
const char* GetInfoName() { return m_szInfoName; }
void SetDefaultKindPrefix(const char* szDefaultKindPrefix) { m_szDefaultKindPrefix = szDefaultKindPrefix; }
void SetDefaultLogKind(Options::EScriptLogKind eDefaultLogKind) { m_eDefaultLogKind = eDefaultLogKind; }
void SetLogPrefix(const char* szLogPrefix) { m_szLogPrefix = szLogPrefix; }
void SetEnvVar(const char* szName, const char* szValue);
void SetEnvVarSpecial(const char* szPrefix, const char* szName, const char* szValue);
void SetIntEnvVar(const char* szName, int iValue);
};
class PostScriptController : public Thread, ScriptController
class PostScriptController : public Thread, public ScriptController
{
private:
PostInfo* m_pPostInfo;
bool m_bNZBFileCompleted;
bool m_bHasFailedParJobs;
char m_szNZBName[1024];
int m_iPrefixLen;
void ExecuteScript(const char* szScriptName, const char* szDisplayName, const char* szLocation);
void PrepareParams(const char* szScriptName);
ScriptStatus::EStatus AnalyseExitCode(int iExitCode);
typedef std::deque<char*> FileList;
protected:
virtual void AddMessage(Message::EKind eKind, bool bDefaultKind, Options::EMessageTarget eMessageTarget, const char* szText);
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
virtual void Run();
virtual void Stop();
static void StartScriptJob(PostInfo* pPostInfo, const char* szScript,
bool bNZBFileCompleted, bool bHasFailedParJobs);
static void StartJob(PostInfo* pPostInfo);
static void InitParamsForNewNZB(NZBInfo* pNZBInfo);
};
class NZBScriptController : public ScriptController
{
private:
char** m_pNZBName;
char** m_pCategory;
NZBParameterList* m_pParameterList;
int* m_iPriority;
NZBParameterList* m_pParameters;
bool* m_bAddTop;
bool* m_bAddPaused;
int m_iPrefixLen;
protected:
virtual void AddMessage(Message::EKind eKind, bool bDefaultKind, Options::EMessageTarget eMessageTarget, const char* szText);
virtual void AddMessage(Message::EKind eKind, const char* szText);
public:
static void ExecuteScript(const char* szScript, const char* szNZBFilename, const char* szDirectory, char** pCategory, NZBParameterList* pParameterList);
static void ExecuteScript(const char* szScript, const char* szNZBFilename, const char* szDirectory,
char** pNZBName, char** pCategory, int* iPriority, NZBParameterList* pParameters,
bool* bAddTop, bool* bAddPaused);
};
class SchedulerScriptController : public Thread, ScriptController
class NZBAddedScriptController : public Thread, public ScriptController
{
private:
char* m_szNZBName;
public:
virtual void Run();
static void StartScript(DownloadQueue* pDownloadQueue, NZBInfo *pNZBInfo, const char* szScript);
};
class SchedulerScriptController : public Thread, public ScriptController
{
public:
virtual void Run();


@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -25,7 +25,7 @@
#ifdef HAVE_CONFIG_H
#include <config.h>
#include "config.h"
#endif
#ifdef WIN32
@@ -38,6 +38,7 @@
#include <unistd.h>
#include <errno.h>
#endif
#include <algorithm>
#include "nzbget.h"
#include "ServerPool.h"
@@ -55,8 +56,9 @@ ServerPool::ServerPool()
{
debug("Creating ServerPool");
m_iMaxLevel = 0;
m_iMaxNormLevel = 0;
m_iTimeout = 60;
m_iGeneration = 0;
}
ServerPool::~ ServerPool()
@@ -70,6 +72,7 @@ ServerPool::~ ServerPool()
delete *it;
}
m_Servers.clear();
m_SortedServers.clear();
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
@@ -83,69 +86,159 @@ void ServerPool::AddServer(NewsServer* pNewsServer)
debug("Adding server to ServerPool");
m_Servers.push_back(pNewsServer);
m_SortedServers.push_back(pNewsServer);
}
/*
* Calculate normalized levels for all servers.
* Normalized Level means: starting from 0 with step 1.
* The servers of minimum Level must always be used even if they are not active;
* this prevents backup servers from acting as main servers.
**/
void ServerPool::NormalizeLevels()
{
if (m_Servers.empty())
{
return;
}
std::sort(m_SortedServers.begin(), m_SortedServers.end(), CompareServers);
// find minimum level
int iMinLevel = m_SortedServers.front()->GetLevel();
for (Servers::iterator it = m_SortedServers.begin(); it != m_SortedServers.end(); it++)
{
NewsServer* pNewsServer = *it;
if (pNewsServer->GetLevel() < iMinLevel)
{
iMinLevel = pNewsServer->GetLevel();
}
}
m_iMaxNormLevel = 0;
int iLastLevel = iMinLevel;
for (Servers::iterator it = m_SortedServers.begin(); it != m_SortedServers.end(); it++)
{
NewsServer* pNewsServer = *it;
if ((pNewsServer->GetActive() && pNewsServer->GetMaxConnections() > 0) ||
(pNewsServer->GetLevel() == iMinLevel))
{
if (pNewsServer->GetLevel() != iLastLevel)
{
m_iMaxNormLevel++;
}
pNewsServer->SetNormLevel(m_iMaxNormLevel);
iLastLevel = pNewsServer->GetLevel();
}
else
{
pNewsServer->SetNormLevel(-1);
}
}
}
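// Worked example, added for this compare view (not part of the patch):
// four servers with Level 0,0,1,2 where the Level-1 server is inactive
// normalize to NormLevel 0,0,-1,1 -- the inactive non-minimum server is
// excluded, the remaining levels are renumbered without gaps, and
// m_iMaxNormLevel ends up as 1.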
bool ServerPool::CompareServers(NewsServer* pServer1, NewsServer* pServer2)
{
return pServer1->GetLevel() < pServer2->GetLevel();
}
void ServerPool::InitConnections()
{
debug("Initializing connections in ServerPool");
m_iMaxLevel = 0;
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
m_mutexConnections.Lock();
NormalizeLevels();
m_Levels.clear();
for (Servers::iterator it = m_SortedServers.begin(); it != m_SortedServers.end(); it++)
{
NewsServer* pNewsServer = *it;
if (m_iMaxLevel < pNewsServer->GetLevel())
int iNormLevel = pNewsServer->GetNormLevel();
if (pNewsServer->GetNormLevel() > -1)
{
m_iMaxLevel = pNewsServer->GetLevel();
}
for (int i = 0; i < pNewsServer->GetMaxConnections(); i++)
{
PooledConnection* pConnection = new PooledConnection(pNewsServer);
pConnection->SetTimeout(m_iTimeout);
m_Connections.push_back(pConnection);
}
}
for (int iLevel = 0; iLevel <= m_iMaxLevel; iLevel++)
{
int iMaxConnectionsForLevel = 0;
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
{
NewsServer* pNewsServer = *it;
if (iLevel == pNewsServer->GetLevel())
if ((int)m_Levels.size() <= iNormLevel)
{
iMaxConnectionsForLevel += pNewsServer->GetMaxConnections();
m_Levels.push_back(0);
}
if (pNewsServer->GetActive())
{
int iConnections = 0;
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
PooledConnection* pConnection = *it;
if (pConnection->GetNewsServer() == pNewsServer)
{
iConnections++;
}
}
for (int i = iConnections; i < pNewsServer->GetMaxConnections(); i++)
{
PooledConnection* pConnection = new PooledConnection(pNewsServer);
pConnection->SetTimeout(m_iTimeout);
m_Connections.push_back(pConnection);
iConnections++;
}
m_Levels[iNormLevel] += iConnections;
}
}
m_Levels.push_back(iMaxConnectionsForLevel);
}
m_iGeneration++;
m_mutexConnections.Unlock();
}
NNTPConnection* ServerPool::GetConnection(int iLevel)
NNTPConnection* ServerPool::GetConnection(int iLevel, NewsServer* pWantServer, Servers* pIgnoreServers)
{
PooledConnection* pConnection = NULL;
m_mutexConnections.Lock();
if (m_Levels[iLevel] > 0)
if (iLevel < (int)m_Levels.size() && m_Levels[iLevel] > 0)
{
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
PooledConnection* pConnection1 = *it;
if (!pConnection1->GetInUse() && pConnection1->GetNewsServer()->GetLevel() == iLevel)
PooledConnection* pCandidateConnection = *it;
NewsServer* pCandidateServer = pCandidateConnection->GetNewsServer();
if (!pCandidateConnection->GetInUse() && pCandidateServer->GetActive() &&
pCandidateServer->GetNormLevel() == iLevel &&
(!pWantServer || pCandidateServer == pWantServer ||
(pWantServer->GetGroup() > 0 && pWantServer->GetGroup() == pCandidateServer->GetGroup())))
{
// free connection found, take it!
pConnection = pConnection1;
pConnection->SetInUse(true);
break;
// free connection found, check that it is not from a server that should be ignored
bool bUseConnection = true;
if (pIgnoreServers && !pWantServer)
{
for (Servers::iterator it = pIgnoreServers->begin(); it != pIgnoreServers->end(); it++)
{
NewsServer* pIgnoreServer = *it;
if (pIgnoreServer == pCandidateServer ||
(pIgnoreServer->GetGroup() > 0 && pIgnoreServer->GetGroup() == pCandidateServer->GetGroup() &&
pIgnoreServer->GetNormLevel() == pCandidateServer->GetNormLevel()))
{
bUseConnection = false;
break;
}
}
}
if (bUseConnection)
{
pConnection = pCandidateConnection;
pConnection->SetInUse(true);
break;
}
}
}
m_Levels[iLevel]--;
if (!pConnection)
if (pConnection)
{
error("ServerPool: internal error, no free connection found, but there should be one");
m_Levels[iLevel]--;
}
}
@@ -168,7 +261,11 @@ void ServerPool::FreeConnection(NNTPConnection* pConnection, bool bUsed)
{
((PooledConnection*)pConnection)->SetFreeTimeNow();
}
m_Levels[pConnection->GetNewsServer()->GetLevel()]++;
if (pConnection->GetNewsServer()->GetNormLevel() > -1 && pConnection->GetNewsServer()->GetActive())
{
m_Levels[pConnection->GetNewsServer()->GetNormLevel()]++;
}
m_mutexConnections.Unlock();
}
@@ -179,36 +276,88 @@ void ServerPool::CloseUnusedConnections()
time_t curtime = ::time(NULL);
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
int i = 0;
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); )
{
PooledConnection* pConnection = *it;
if (!pConnection->GetInUse() && pConnection->GetStatus() == Connection::csConnected)
bool bDeleted = false;
if (!pConnection->GetInUse() &&
(pConnection->GetNewsServer()->GetNormLevel() == -1 ||
!pConnection->GetNewsServer()->GetActive()))
{
debug("Closing (and deleting) unused connection to server%i", pConnection->GetNewsServer()->GetID());
if (pConnection->GetStatus() == Connection::csConnected)
{
pConnection->Disconnect();
}
delete pConnection;
m_Connections.erase(it);
it = m_Connections.begin() + i;
bDeleted = true;
}
if (!bDeleted && !pConnection->GetInUse() && pConnection->GetStatus() == Connection::csConnected)
{
int tdiff = (int)(curtime - pConnection->GetFreeTime());
if (tdiff > CONNECTION_HOLD_SECODNS)
{
debug("Closing unused connection to %s", pConnection->GetNewsServer()->GetHost());
debug("Closing (and keeping) unused connection to server%i", pConnection->GetNewsServer()->GetID());
pConnection->Disconnect();
}
}
if (!bDeleted)
{
it++;
i++;
}
}
m_mutexConnections.Unlock();
}
void ServerPool::Changed()
{
debug("Server config has been changed");
InitConnections();
CloseUnusedConnections();
}
void ServerPool::LogDebugInfo()
{
debug(" ServerPool");
debug(" ----------------");
debug(" Max-Level: %i", m_iMaxLevel);
debug(" Max-Level: %i", m_iMaxNormLevel);
m_mutexConnections.Lock();
debug(" Servers: %i", m_Servers.size());
for (Servers::iterator it = m_Servers.begin(); it != m_Servers.end(); it++)
{
NewsServer* pNewsServer = *it;
debug(" %i) %s (%s): Level=%i, NormLevel=%i", pNewsServer->GetID(), pNewsServer->GetName(),
pNewsServer->GetHost(), pNewsServer->GetLevel(), pNewsServer->GetNormLevel());
}
debug(" Levels: %i", m_Levels.size());
int index = 0;
for (Levels::iterator it = m_Levels.begin(); it != m_Levels.end(); it++, index++)
{
int iSize = *it;
debug(" %i: Size=%i", index, iSize);
}
debug(" Connections: %i", m_Connections.size());
for (Connections::iterator it = m_Connections.begin(); it != m_Connections.end(); it++)
{
debug(" Connection: Level=%i, InUse:%i", (*it)->GetNewsServer()->GetLevel(), (int)(*it)->GetInUse());
PooledConnection* pConnection = *it;
debug(" %i) %s (%s): Level=%i, NormLevel=%i, InUse:%i", pConnection->GetNewsServer()->GetID(),
pConnection->GetNewsServer()->GetName(), pConnection->GetNewsServer()->GetHost(),
pConnection->GetNewsServer()->GetLevel(), pConnection->GetNewsServer()->GetNormLevel(),
(int)pConnection->GetInUse());
}
m_mutexConnections.Unlock();
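
The NormalizeLevels() hunk above remaps the user-configured server levels onto a contiguous range starting at 0, keeps servers of the minimum level even when they are inactive, and marks every other inactive server with a normalized level of -1. The following is a minimal standalone sketch of that rule, not repository code; the DemoServer type and the sample data are invented for illustration.

#include <algorithm>
#include <cstdio>
#include <vector>

struct DemoServer { int iLevel; bool bActive; int iMaxConnections; int iNormLevel; };

static void NormalizeDemo(std::vector<DemoServer>& servers)
{
    if (servers.empty()) return;
    std::sort(servers.begin(), servers.end(),
        [](const DemoServer& a, const DemoServer& b) { return a.iLevel < b.iLevel; });
    int iMinLevel = servers.front().iLevel;   // after sorting, the first server has the minimum level
    int iMaxNormLevel = 0;
    int iLastLevel = iMinLevel;
    for (DemoServer& s : servers)
    {
        if ((s.bActive && s.iMaxConnections > 0) || s.iLevel == iMinLevel)
        {
            if (s.iLevel != iLastLevel) iMaxNormLevel++;   // next distinct level gets the next number
            s.iNormLevel = iMaxNormLevel;
            iLastLevel = s.iLevel;
        }
        else
        {
            s.iNormLevel = -1;   // inactive backup servers are excluded from connection pooling
        }
    }
}

int main()
{
    std::vector<DemoServer> servers = {
        { 0, false, 4, 0 },   // inactive, but on the minimum level -> still gets a normalized level
        { 0, true,  4, 0 },
        { 2, true,  2, 0 },   // becomes normalized level 1
        { 5, false, 1, 0 },   // inactive backup -> -1
    };
    NormalizeDemo(servers);
    for (const DemoServer& s : servers)
        printf("level=%i active=%i -> norm=%i\n", s.iLevel, (int)s.bActive, s.iNormLevel);
    return 0;
}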

ServerPool.h

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -50,16 +50,20 @@ private:
void SetFreeTimeNow() { m_tFreeTime = ::time(NULL); }
};
typedef std::vector<NewsServer*> Servers;
typedef std::vector<int> Levels;
typedef std::vector<PooledConnection*> Connections;
Servers m_Servers;
Servers m_SortedServers;
Connections m_Connections;
Levels m_Levels;
int m_iMaxLevel;
int m_iMaxNormLevel;
Mutex m_mutexConnections;
int m_iTimeout;
int m_iGeneration;
void NormalizeLevels();
static bool CompareServers(NewsServer* pServer1, NewsServer* pServer2);
public:
ServerPool();
@@ -67,10 +71,13 @@ public:
void SetTimeout(int iTimeout) { m_iTimeout = iTimeout; }
void AddServer(NewsServer* pNewsServer);
void InitConnections();
int GetMaxLevel() { return m_iMaxLevel; }
NNTPConnection* GetConnection(int iLevel);
int GetMaxNormLevel() { return m_iMaxNormLevel; }
Servers* GetServers() { return &m_Servers; } // Only for read access (no lockings)
NNTPConnection* GetConnection(int iLevel, NewsServer* pWantServer, Servers* pIgnoreServers);
void FreeConnection(NNTPConnection* pConnection, bool bUsed);
void CloseUnusedConnections();
void Changed();
int GetGeneration() { return m_iGeneration; }
void LogDebugInfo();
};
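
The extended interface above changes GetConnection() to take a wanted server and a list of servers to ignore, both of which may be omitted. A rough usage sketch follows, based only on the declarations shown here; the function name, the retry policy, and passing NULL for both optional arguments are illustrative assumptions, not code from the repository.

#include "ServerPool.h"   // the header shown above; assumed to pull in the NNTPConnection declaration

// Try every normalized level until some server delivers the article.
static bool FetchWithAnyServerSketch(ServerPool* pPool)
{
    for (int iLevel = 0; iLevel <= pPool->GetMaxNormLevel(); iLevel++)
    {
        // NULL, NULL: no specific server wanted, no servers excluded
        NNTPConnection* pConnection = pPool->GetConnection(iLevel, NULL, NULL);
        if (!pConnection)
        {
            continue;   // no free connection on this level right now
        }
        bool bDownloaded = false;
        // ... perform the NNTP download with pConnection here ...
        pPool->FreeConnection(pConnection, bDownloaded);   // always hand the connection back
        if (bDownloaded)
        {
            return true;
        }
    }
    return false;
}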

TLS.cpp (1817 changed lines)

File diff suppressed because it is too large.

TLS.h (206 changed lines)

@@ -1,13 +1,7 @@
/*
* This file is part of nzbget
*
* Based on "tls.h" from project "mpop" by Martin Lambers
* Original source code available on http://sourceforge.net/projects/mpop/
*
* Copyright (C) 2000, 2003, 2004, 2005, 2006, 2007
* Martin Lambers <marlam@marlam.de>
*
* Copyright (C) 2008-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2008-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -21,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -33,180 +27,36 @@
#ifndef DISABLE_TLS
#include <time.h>
#ifdef HAVE_LIBGNUTLS
# include <gnutls/gnutls.h>
#endif /* HAVE_LIBGNUTLS */
#ifdef HAVE_OPENSSL
# include <openssl/ssl.h>
#endif /* HAVE_OPENSSL */
/*
* If a function with an 'errstr' argument returns a value != TLS_EOK,
* '*errstr' either points to an allocated string containing an error
* description or is NULL.
* If such a function returns TLS_EOK, 'errstr' will not be changed.
*/
#define TLS_EOK 0 /* no error */
#define TLS_ELIBFAILED 1 /* The underlying library failed */
#define TLS_ESEED 2 /* Cannot seed pseudo random number generator */
#define TLS_ECERT 3 /* Certificate check or verification failed */
#define TLS_EIO 4 /* Input/output error */
#define TLS_EFILE 5 /* A file does not exist/cannot be read */
#define TLS_EHANDSHAKE 6 /* TLS handshake failed */
/*
* Always use tls_clear() before using a tls_t!
* Never call a tls_*() function with tls_t NULL!
*/
typedef struct
class TLSSocket
{
int have_trust_file;
int is_active;
#ifdef HAVE_LIBGNUTLS
gnutls_session_t session;
gnutls_certificate_credentials_t cred;
#endif /* HAVE_LIBGNUTLS */
#ifdef HAVE_OPENSSL
SSL_CTX *ssl_ctx;
SSL *ssl;
#endif /* HAVE_OPENSSL */
} tls_t;
private:
bool m_bIsClient;
char* m_szCertFile;
char* m_szKeyFile;
char* m_szCipher;
SOCKET m_iSocket;
bool m_bSuppressErrors;
int m_iRetCode;
bool m_bInitialized;
bool m_bConnected;
/*
* Information about a X509 certificate.
* The 6 owner_info and issuer_info fields are:
* Common Name
* Organization
* Organizational unit
* Locality
* State/Province
* Country
* Each of these entries may be NULL if it was not provided.
*/
typedef struct
{
unsigned char sha1_fingerprint[20];
unsigned char md5_fingerprint[16];
time_t activation_time;
time_t expiration_time;
char *owner_info[6];
char *issuer_info[6];
} tls_cert_info_t;
// using "void*" to prevent the including of GnuTLS/OpenSSL header files into TLS.h
void* m_pContext;
void* m_pSession;
/*
* tls_lib_init()
*
* Initialize underlying TLS library. If this function returns TLS_ELIBFAILED,
* *errstr will always point to an error string.
* Used error codes: TLS_ELIBFAILED
*/
int tls_lib_init(char **errstr);
void ReportError(const char* szErrMsg);
/*
* tls_clear()
*
* Clears a tls_t type (marks it inactive).
*/
void tls_clear(tls_t *tls);
/*
* tls_init()
*
* Initializes a tls_t. If both 'key_file' and 'ca_file' are not NULL, they are
* set to be used when the peer requests a certificate. If 'trust_file' is not
* NULL, it will be used to verify the peer certificate.
* All files must be in PEM format.
* If 'force_sslv3' is set, then only the SSLv3 protocol will be accepted. This
* option might be needed to talk to some obsolete broken servers. Only use this
* if you have to.
* Used error codes: TLS_ELIBFAILED, TLS_EFILE
*/
int tls_init(tls_t *tls,
const char *key_file, const char *ca_file, const char *trust_file,
int force_sslv3, char **errstr);
/*
* tls_start()
*
* Starts TLS encryption on a socket.
* 'tls' must be initialized using tls_init().
* If 'no_certcheck' is true, then no checks will be performed on the peer
* certificate. If it is false and no trust file was set with tls_init(),
* only sanity checks are performed on the peer certificate. If it is false
* and a trust file was set, real verification of the peer certificate is
* performed.
* 'hostname' is the host to start TLS with. It is needed for sanity checks/
* verification.
* 'tci' must be allocated with tls_cert_info_new(). Information about the
* peer's certificate will be stored in it. It can later be freed with
* tls_cert_info_free(). 'tci' is allowed to be NULL; no certificate
* information will be passed in this case.
* Used error codes: TLS_ELIBFAILED, TLS_ECERT, TLS_EHANDSHAKE
*/
int tls_start(tls_t *tls, int fd, const char *hostname, int no_certcheck,
tls_cert_info_t *tci, char **errstr);
/*
* tls_is_active()
*
* Returns whether 'tls' is an active TLS connection.
*/
int tls_is_active(tls_t *tls);
/*
* tls_cert_info_new()
* Returns a new tls_cert_info_t
*/
tls_cert_info_t *tls_cert_info_new(void);
/*
* tls_cert_info_free()
* Frees a tls_cert_info_t
*/
void tls_cert_info_free(tls_cert_info_t *tci);
/*
* tls_cert_info_get()
*
* Extracts certificate information from the TLS connection 'tls' and stores
* it in 'tci'. See the description of tls_cert_info_t above.
* Used error codes: TLS_ECERT
*/
int tls_cert_info_get(tls_t *tls, tls_cert_info_t *tci, char **errstr);
/*
* tls_getbuf()
*
* Reads a buffer using TLS and stores it in 's'.
* Used error codes: TLS_EIO
*/
int tls_getbuf(tls_t *tls, char* s, size_t len, size_t* readlen, char **errstr);
/*
* tls_putbuf()
*
* Writes 'len' characters from the string 's' using TLS.
* Used error codes: TLS_EIO
*/
int tls_putbuf(tls_t *tls, const char *s, size_t len, char **errstr);
/*
* tls_close()
*
* Close a TLS connection and mark it inactive
*/
void tls_close(tls_t *tls);
/*
* tls_lib_deinit()
*
* Deinit underlying TLS library.
*/
void tls_lib_deinit(void);
public:
TLSSocket(SOCKET iSocket, bool bIsClient, const char* szCertFile, const char* szKeyFile, const char* szCipher);
~TLSSocket();
static void Init();
static void Final();
bool Start();
void Close();
int Send(const char* pBuffer, int iSize);
int Recv(char* pBuffer, int iSize);
void SetSuppressErrors(bool bSuppressErrors) { m_bSuppressErrors = bSuppressErrors; }
};
#endif
#endif
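
TLS.h above replaces the C-style tls_* API inherited from mpop with a TLSSocket class. The sketch below shows the call sequence suggested by those declarations; whether NULL is accepted for the certificate, key and cipher arguments, and the exact lifetime of Init()/Final(), are assumptions made for illustration only.

#include "TLS.h"   // the header shown above

// Presumed flow: global Init() once, one TLSSocket per connection, Start() = handshake.
bool SecureExchangeSketch(SOCKET iSocket)
{
    TLSSocket::Init();                                          // global library setup (assumed once per process)
    TLSSocket* pTLS = new TLSSocket(iSocket, true /*client*/, NULL, NULL, NULL);
    bool bOK = pTLS->Start();                                   // TLS handshake on the already connected socket
    if (bOK)
    {
        static const char szCommand[] = "DATE\r\n";
        bOK = pTLS->Send(szCommand, sizeof(szCommand) - 1) > 0;
        char szAnswer[256];
        int iReceived = pTLS->Recv(szAnswer, sizeof(szAnswer) - 1);
        bOK = bOK && iReceived > 0;
        pTLS->Close();
    }
    delete pTLS;
    TLSSocket::Final();                                         // global library teardown (assumed once per process)
    return bOK;
}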

Thread.cpp

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -98,6 +98,48 @@ void Mutex::Unlock()
}
#ifdef HAVE_SPINLOCK
SpinLock::SpinLock()
{
#ifdef WIN32
m_pSpinLockObj = (CRITICAL_SECTION *)malloc(sizeof(CRITICAL_SECTION));
InitializeCriticalSectionAndSpinCount((CRITICAL_SECTION *)m_pSpinLockObj, 0x00FFFFFF);
#else
m_pSpinLockObj = (pthread_spinlock_t *)malloc(sizeof(pthread_spinlock_t));
pthread_spin_init((pthread_spinlock_t *)m_pSpinLockObj, PTHREAD_PROCESS_PRIVATE);
#endif
}
SpinLock::~SpinLock()
{
#ifdef WIN32
DeleteCriticalSection((CRITICAL_SECTION *)m_pSpinLockObj);
#else
pthread_spin_destroy((pthread_spinlock_t *)m_pSpinLockObj);
#endif
free((void*)m_pSpinLockObj);
}
void SpinLock::Lock()
{
#ifdef WIN32
EnterCriticalSection((CRITICAL_SECTION *)m_pSpinLockObj);
#else
pthread_spin_lock((pthread_spinlock_t *)m_pSpinLockObj);
#endif
}
void SpinLock::Unlock()
{
#ifdef WIN32
LeaveCriticalSection((CRITICAL_SECTION *)m_pSpinLockObj);
#else
pthread_spin_unlock((pthread_spinlock_t *)m_pSpinLockObj);
#endif
}
#endif
void Thread::Init()
{
debug("Initializing global thread data");

Thread.h

@@ -2,7 +2,7 @@
* This file is part of nzbget
*
* Copyright (C) 2004 Sven Henkel <sidddy@users.sourceforge.net>
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2010 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,7 +16,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -39,6 +39,23 @@ public:
void Unlock();
};
#ifdef HAVE_SPINLOCK
class SpinLock
{
private:
#ifdef WIN32
void* m_pSpinLockObj;
#else
volatile void* m_pSpinLockObj;
#endif
public:
SpinLock();
~SpinLock();
void Lock();
void Unlock();
};
#endif
class Thread
{

Unpack.cpp (new file, 860 lines)

@@ -0,0 +1,860 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <ctype.h>
#ifndef WIN32
#include <unistd.h>
#endif
#include <errno.h>
#include "nzbget.h"
#include "Unpack.h"
#include "Log.h"
#include "Util.h"
#include "ParCoordinator.h"
extern Options* g_pOptions;
extern DownloadQueueHolder* g_pDownloadQueueHolder;
void UnpackController::FileList::Clear()
{
for (iterator it = begin(); it != end(); it++)
{
free(*it);
}
clear();
}
bool UnpackController::FileList::Exists(const char* szFilename)
{
for (iterator it = begin(); it != end(); it++)
{
char* szFilename1 = *it;
if (!strcmp(szFilename1, szFilename))
{
return true;
}
}
return false;
}
void UnpackController::StartJob(PostInfo* pPostInfo)
{
UnpackController* pUnpackController = new UnpackController();
pUnpackController->m_pPostInfo = pPostInfo;
pUnpackController->SetAutoDestroy(false);
pPostInfo->SetPostThread(pUnpackController);
pUnpackController->Start();
}
void UnpackController::Run()
{
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
strncpy(m_szDestDir, m_pPostInfo->GetNZBInfo()->GetDestDir(), 1024);
m_szDestDir[1024-1] = '\0';
strncpy(m_szName, m_pPostInfo->GetNZBInfo()->GetName(), 1024);
m_szName[1024-1] = '\0';
m_bCleanedUpDisk = false;
m_szPassword[0] = '\0';
m_szFinalDir[0] = '\0';
m_bFinalDirCreated = false;
NZBParameter* pParameter = m_pPostInfo->GetNZBInfo()->GetParameters()->Find("*Unpack:", false);
bool bUnpack = !(pParameter && !strcasecmp(pParameter->GetValue(), "no"));
pParameter = m_pPostInfo->GetNZBInfo()->GetParameters()->Find("*Unpack:Password", false);
if (pParameter)
{
strncpy(m_szPassword, pParameter->GetValue(), 1024-1);
m_szPassword[1024-1] = '\0';
}
g_pDownloadQueueHolder->UnlockQueue();
snprintf(m_szInfoName, 1024, "unpack for %s", m_szName);
m_szInfoName[1024-1] = '\0';
snprintf(m_szInfoNameUp, 1024, "Unpack for %s", m_szName); // first letter in upper case
m_szInfoNameUp[1024-1] = '\0';
m_bHasParFiles = ParCoordinator::FindMainPars(m_szDestDir, NULL);
if (bUnpack)
{
bool bScanNonStdFiles = m_pPostInfo->GetNZBInfo()->GetRenameStatus() > NZBInfo::rsSkipped ||
m_pPostInfo->GetNZBInfo()->GetParStatus() == NZBInfo::psSuccess ||
!m_bHasParFiles;
CheckArchiveFiles(bScanNonStdFiles);
}
if (bUnpack && (m_bHasRarFiles || m_bHasNonStdRarFiles || m_bHasSevenZipFiles || m_bHasSevenZipMultiFiles))
{
SetInfoName(m_szInfoName);
SetWorkingDir(m_szDestDir);
PrintMessage(Message::mkInfo, "Unpacking %s", m_szName);
CreateUnpackDir();
m_bUnpackOK = true;
m_bUnpackStartError = false;
m_bUnpackSpaceError = false;
m_bUnpackPasswordError = false;
if (m_bHasRarFiles || m_bHasNonStdRarFiles)
{
ExecuteUnrar();
}
if (m_bHasSevenZipFiles && m_bUnpackOK)
{
ExecuteSevenZip(false);
}
if (m_bHasSevenZipMultiFiles && m_bUnpackOK)
{
ExecuteSevenZip(true);
}
Completed();
}
else
{
PrintMessage(Message::mkInfo, (bUnpack ? "Nothing to unpack for %s" : "Unpack for %s skipped"), m_szName);
#ifndef DISABLE_PARCHECK
if (bUnpack && m_pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped &&
m_pPostInfo->GetNZBInfo()->GetRenameStatus() <= NZBInfo::rsSkipped && m_bHasParFiles)
{
RequestParCheck();
}
else
#endif
{
m_pPostInfo->GetNZBInfo()->SetUnpackStatus(NZBInfo::usSkipped);
m_pPostInfo->SetStage(PostInfo::ptQueued);
}
}
m_pPostInfo->SetWorking(false);
}
void UnpackController::ExecuteUnrar()
{
// Format:
// unrar x -y -p- -o+ *.rar ./_unpack
char szPasswordParam[1024];
const char* szArgs[8];
szArgs[0] = g_pOptions->GetUnrarCmd();
szArgs[1] = "x";
szArgs[2] = "-y";
szArgs[3] = "-p-";
if (strlen(m_szPassword) > 0)
{
snprintf(szPasswordParam, 1024, "-p%s", m_szPassword);
szArgs[3] = szPasswordParam;
}
szArgs[4] = "-o+";
szArgs[5] = m_bHasNonStdRarFiles ? "*.*" : "*.rar";
szArgs[6] = m_szUnpackDir;
szArgs[7] = NULL;
SetArgs(szArgs, false);
SetScript(g_pOptions->GetUnrarCmd());
SetLogPrefix("Unrar");
m_bAllOKMessageReceived = false;
m_eUnpacker = upUnrar;
SetProgressLabel("");
int iExitCode = Execute();
SetLogPrefix(NULL);
SetProgressLabel("");
m_bUnpackOK = iExitCode == 0 && m_bAllOKMessageReceived && !GetTerminated();
m_bUnpackStartError = iExitCode == -1;
m_bUnpackSpaceError = iExitCode == 5;
m_bUnpackPasswordError = iExitCode == 11; // only for rar5-archives
if (!m_bUnpackOK && iExitCode > 0)
{
PrintMessage(Message::mkError, "Unrar error code: %i", iExitCode);
}
}
void UnpackController::ExecuteSevenZip(bool bMultiVolumes)
{
// Format:
// 7z x -y -p- -o./_unpack *.7z
// OR
// 7z x -y -p- -o./_unpack *.7z.001
char szPasswordParam[1024];
const char* szArgs[7];
szArgs[0] = g_pOptions->GetSevenZipCmd();
szArgs[1] = "x";
szArgs[2] = "-y";
szArgs[3] = "-p-";
if (strlen(m_szPassword) > 0)
{
snprintf(szPasswordParam, 1024, "-p%s", m_szPassword);
szArgs[3] = szPasswordParam;
}
char szUnpackDirParam[1024];
snprintf(szUnpackDirParam, 1024, "-o%s", m_szUnpackDir);
szArgs[4] = szUnpackDirParam;
szArgs[5] = bMultiVolumes ? "*.7z.001" : "*.7z";
szArgs[6] = NULL;
SetArgs(szArgs, false);
SetScript(g_pOptions->GetSevenZipCmd());
m_bAllOKMessageReceived = false;
m_eUnpacker = upSevenZip;
PrintMessage(Message::mkInfo, "Executing 7-Zip");
SetLogPrefix("7-Zip");
SetProgressLabel("");
int iExitCode = Execute();
SetLogPrefix(NULL);
SetProgressLabel("");
m_bUnpackOK = iExitCode == 0 && m_bAllOKMessageReceived && !GetTerminated();
m_bUnpackStartError = iExitCode == -1;
if (!m_bUnpackOK && iExitCode > 0)
{
PrintMessage(Message::mkError, "7-Zip error code: %i", iExitCode);
}
}
void UnpackController::Completed()
{
bool bCleanupSuccess = Cleanup();
if (m_bUnpackOK && bCleanupSuccess)
{
PrintMessage(Message::mkInfo, "%s %s", m_szInfoNameUp, "successful");
m_pPostInfo->GetNZBInfo()->SetUnpackStatus(NZBInfo::usSuccess);
m_pPostInfo->GetNZBInfo()->SetUnpackCleanedUpDisk(m_bCleanedUpDisk);
if (g_pOptions->GetParRename())
{
//request par-rename check for extracted files
m_pPostInfo->GetNZBInfo()->SetRenameStatus(NZBInfo::rsNone);
}
m_pPostInfo->SetStage(PostInfo::ptQueued);
}
else
{
#ifndef DISABLE_PARCHECK
if (!m_bUnpackOK && m_pPostInfo->GetNZBInfo()->GetParStatus() <= NZBInfo::psSkipped &&
!m_bUnpackStartError && !m_bUnpackSpaceError && !m_bUnpackPasswordError &&
!GetTerminated() && m_bHasParFiles)
{
RequestParCheck();
}
else
#endif
{
PrintMessage(Message::mkError, "%s failed", m_szInfoNameUp);
m_pPostInfo->GetNZBInfo()->SetUnpackStatus(
m_bUnpackSpaceError ? NZBInfo::usSpace :
m_bUnpackPasswordError ? NZBInfo::usPassword :
NZBInfo::usFailure);
m_pPostInfo->SetStage(PostInfo::ptQueued);
}
}
}
#ifndef DISABLE_PARCHECK
void UnpackController::RequestParCheck()
{
PrintMessage(Message::mkInfo, "%s requested par-check/repair", m_szInfoNameUp);
m_pPostInfo->SetRequestParCheck(true);
m_pPostInfo->SetStage(PostInfo::ptFinished);
}
#endif
void UnpackController::CreateUnpackDir()
{
m_bInterDir = strlen(g_pOptions->GetInterDir()) > 0 &&
!strncmp(m_szDestDir, g_pOptions->GetInterDir(), strlen(g_pOptions->GetInterDir()));
if (m_bInterDir)
{
m_pPostInfo->GetNZBInfo()->BuildFinalDirName(m_szFinalDir, 1024);
m_szFinalDir[1024-1] = '\0';
snprintf(m_szUnpackDir, 1024, "%s%c%s", m_szFinalDir, PATH_SEPARATOR, "_unpack");
m_bFinalDirCreated = !Util::DirectoryExists(m_szFinalDir);
}
else
{
snprintf(m_szUnpackDir, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, "_unpack");
}
m_szUnpackDir[1024-1] = '\0';
char szErrBuf[1024];
if (!Util::ForceDirectories(m_szUnpackDir, szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s: %s", m_szUnpackDir, szErrBuf);
}
}
void UnpackController::CheckArchiveFiles(bool bScanNonStdFiles)
{
m_bHasRarFiles = false;
m_bHasNonStdRarFiles = false;
m_bHasSevenZipFiles = false;
m_bHasSevenZipMultiFiles = false;
RegEx regExRar(".*\\.rar$");
RegEx regExRarMultiSeq(".*\\.(r|s)[0-9][0-9]$");
RegEx regExSevenZip(".*\\.7z$");
RegEx regExSevenZipMulti(".*\\.7z\\.[0-9]+$");
RegEx regExNumExt(".*\\.[0-9]+$");
DirBrowser dir(m_szDestDir);
while (const char* filename = dir.Next())
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (strcmp(filename, ".") && strcmp(filename, "..") && !Util::DirectoryExists(szFullFilename))
{
if (regExRar.Match(filename))
{
m_bHasRarFiles = true;
}
else if (regExSevenZip.Match(filename))
{
m_bHasSevenZipFiles = true;
}
else if (regExSevenZipMulti.Match(filename))
{
m_bHasSevenZipMultiFiles = true;
}
else if (bScanNonStdFiles && !m_bHasNonStdRarFiles &&
!regExRarMultiSeq.Match(filename) && regExNumExt.Match(filename) &&
FileHasRarSignature(szFullFilename))
{
m_bHasNonStdRarFiles = true;
}
}
}
}
bool UnpackController::FileHasRarSignature(const char* szFilename)
{
char rar4Signature[] = { 0x52, 0x61, 0x72, 0x21, 0x1A, 0x07, 0x00 };
char rar5Signature[] = { 0x52, 0x61, 0x72, 0x21, 0x1A, 0x07, 0x01, 0x00 };
char fileSignature[8];
int cnt = 0;
FILE* infile;
infile = fopen(szFilename, "rb");
if (infile)
{
cnt = (int)fread(fileSignature, 1, sizeof(fileSignature), infile);
fclose(infile);
}
bool bRar = cnt == sizeof(fileSignature) &&
(!strcmp(rar4Signature, fileSignature) || !strcmp(rar5Signature, fileSignature));
return bRar;
}
bool UnpackController::Cleanup()
{
// On success:
// - move unpacked files to destination dir;
// - remove _unpack-dir;
// - delete archive-files.
// On failure:
// - remove _unpack-dir.
bool bOK = true;
FileList extractedFiles;
if (m_bUnpackOK)
{
// moving files back
DirBrowser dir(m_szUnpackDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") &&
strcmp(filename, ".AppleDouble") && strcmp(filename, ".DS_Store"))
{
char szSrcFile[1024];
snprintf(szSrcFile, 1024, "%s%c%s", m_szUnpackDir, PATH_SEPARATOR, filename);
szSrcFile[1024-1] = '\0';
char szDstFile[1024];
snprintf(szDstFile, 1024, "%s%c%s", m_szFinalDir[0] != '\0' ? m_szFinalDir : m_szDestDir, PATH_SEPARATOR, filename);
szDstFile[1024-1] = '\0';
// silently overwrite existing files
remove(szDstFile);
if (!Util::MoveFile(szSrcFile, szDstFile))
{
PrintMessage(Message::mkError, "Could not move file %s to %s", szSrcFile, szDstFile);
bOK = false;
}
extractedFiles.push_back(strdup(filename));
}
}
}
if (bOK && !Util::DeleteDirectoryWithContent(m_szUnpackDir))
{
PrintMessage(Message::mkError, "Could not remove temporary directory %s", m_szUnpackDir);
}
if (!m_bUnpackOK && m_bFinalDirCreated)
{
Util::RemoveDirectory(m_szFinalDir);
}
if (m_bUnpackOK && bOK && g_pOptions->GetUnpackCleanupDisk())
{
PrintMessage(Message::mkInfo, "Deleting archive files");
RegEx regExRar(".*\\.rar$");
RegEx regExRarMultiSeq(".*\\.[r-z][0-9][0-9]$");
RegEx regExSevenZip(".*\\.7z$|.*\\.7z\\.[0-9]+$");
RegEx regExNumExt(".*\\.[0-9]+$");
DirBrowser dir(m_szDestDir);
while (const char* filename = dir.Next())
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", m_szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
if (strcmp(filename, ".") && strcmp(filename, "..") &&
!Util::DirectoryExists(szFullFilename) &&
(m_bInterDir || !extractedFiles.Exists(filename)) &&
(regExRar.Match(filename) || regExSevenZip.Match(filename) ||
(regExRarMultiSeq.Match(filename) && FileHasRarSignature(szFullFilename)) ||
(m_bHasNonStdRarFiles && regExNumExt.Match(filename) && FileHasRarSignature(szFullFilename))))
{
PrintMessage(Message::mkInfo, "Deleting file %s", filename);
if (remove(szFullFilename) != 0)
{
PrintMessage(Message::mkError, "Could not delete file %s", szFullFilename);
}
}
}
m_bCleanedUpDisk = true;
}
extractedFiles.Clear();
return bOK;
}
/**
* Unrar prints progress information onto the same line using the backspace control character.
* In order to report progress continuously we analyze the output after every character
* and update the post-job progress information.
*/
bool UnpackController::ReadLine(char* szBuf, int iBufSize, FILE* pStream)
{
bool bPrinted = false;
int i = 0;
for (; i < iBufSize - 1; i++)
{
int ch = fgetc(pStream);
szBuf[i] = ch;
szBuf[i+1] = '\0';
if (ch == EOF)
{
break;
}
if (ch == '\n')
{
i++;
break;
}
char* szBackspace = strrchr(szBuf, '\b');
if (szBackspace)
{
if (!bPrinted)
{
char tmp[1024];
strncpy(tmp, szBuf, 1024);
tmp[1024-1] = '\0';
char* szTmpPercent = strrchr(tmp, '\b');
if (szTmpPercent)
{
*szTmpPercent = '\0';
}
if (strncmp(szBuf, "...", 3))
{
ProcessOutput(tmp);
}
bPrinted = true;
}
if (strchr(szBackspace, '%'))
{
int iPercent = atoi(szBackspace + 1);
m_pPostInfo->SetStageProgress(iPercent * 10);
}
}
}
szBuf[i] = '\0';
if (bPrinted)
{
szBuf[0] = '\0';
}
return i > 0;
}
void UnpackController::AddMessage(Message::EKind eKind, const char* szText)
{
char szMsgText[1024];
strncpy(szMsgText, szText, 1024);
szMsgText[1024-1] = '\0';
// Modify unrar messages for better readability:
// remove the destination path part from message "Extracting file.xxx"
if (m_eUnpacker == upUnrar && !strncmp(szText, "Unrar: Extracting ", 19) &&
!strncmp(szText + 19, m_szUnpackDir, strlen(m_szUnpackDir)))
{
snprintf(szMsgText, 1024, "Unrar: Extracting %s", szText + 19 + strlen(m_szUnpackDir) + 1);
szMsgText[1024-1] = '\0';
}
ScriptController::AddMessage(eKind, szMsgText);
m_pPostInfo->AppendMessage(eKind, szMsgText);
if (m_eUnpacker == upUnrar && !strncmp(szMsgText, "Unrar: UNRAR ", 6) &&
strstr(szMsgText, " Copyright ") && strstr(szMsgText, " Alexander Roshal"))
{
// reset the start time for the case when the user uses an unpack-script to do some things
// (like sending a Wake-On-Lan message) before executing unrar
m_pPostInfo->SetStageTime(time(NULL));
}
if (m_eUnpacker == upUnrar && !strncmp(szMsgText, "Unrar: Extracting ", 18))
{
SetProgressLabel(szMsgText + 7);
}
if (m_eUnpacker == upUnrar && !strncmp(szText, "Unrar: Extracting from ", 23))
{
const char *szFilename = szText + 23;
debug("Filename: %s", szFilename);
SetProgressLabel(szText + 7);
}
if ((m_eUnpacker == upUnrar && !strncmp(szText, "Unrar: All OK", 13)) ||
(m_eUnpacker == upSevenZip && !strncmp(szText, "7-Zip: Everything is Ok", 23)))
{
m_bAllOKMessageReceived = true;
}
}
void UnpackController::Stop()
{
debug("Stopping unpack");
Thread::Stop();
Terminate();
}
void UnpackController::SetProgressLabel(const char* szProgressLabel)
{
g_pDownloadQueueHolder->LockQueue();
m_pPostInfo->SetProgressLabel(szProgressLabel);
g_pDownloadQueueHolder->UnlockQueue();
}
void MoveController::StartJob(PostInfo* pPostInfo)
{
MoveController* pMoveController = new MoveController();
pMoveController->m_pPostInfo = pPostInfo;
pMoveController->SetAutoDestroy(false);
pPostInfo->SetPostThread(pMoveController);
pMoveController->Start();
}
void MoveController::Run()
{
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
char szNZBName[1024];
strncpy(szNZBName, m_pPostInfo->GetNZBInfo()->GetName(), 1024);
szNZBName[1024-1] = '\0';
char szInfoName[1024];
snprintf(szInfoName, 1024, "move for %s", m_pPostInfo->GetNZBInfo()->GetName());
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
strncpy(m_szInterDir, m_pPostInfo->GetNZBInfo()->GetDestDir(), 1024);
m_szInterDir[1024-1] = '\0';
m_pPostInfo->GetNZBInfo()->BuildFinalDirName(m_szDestDir, 1024);
m_szDestDir[1024-1] = '\0';
g_pDownloadQueueHolder->UnlockQueue();
info("Moving completed files for %s", szNZBName);
bool bOK = MoveFiles();
szInfoName[0] = 'M'; // uppercase
if (bOK)
{
info("%s successful", szInfoName);
// save new dest dir
g_pDownloadQueueHolder->LockQueue();
m_pPostInfo->GetNZBInfo()->SetDestDir(m_szDestDir);
m_pPostInfo->GetNZBInfo()->SetMoveStatus(NZBInfo::msSuccess);
g_pDownloadQueueHolder->UnlockQueue();
}
else
{
error("%s failed", szInfoName);
m_pPostInfo->GetNZBInfo()->SetMoveStatus(NZBInfo::msFailure);
}
m_pPostInfo->SetStage(PostInfo::ptQueued);
m_pPostInfo->SetWorking(false);
}
bool MoveController::MoveFiles()
{
char szErrBuf[1024];
if (!Util::ForceDirectories(m_szDestDir, szErrBuf, sizeof(szErrBuf)))
{
error("Could not create directory %s: %s", m_szDestDir, szErrBuf);
return false;
}
bool bOK = true;
DirBrowser dir(m_szInterDir);
while (const char* filename = dir.Next())
{
if (strcmp(filename, ".") && strcmp(filename, "..") &&
strcmp(filename, ".AppleDouble") && strcmp(filename, ".DS_Store"))
{
char szSrcFile[1024];
snprintf(szSrcFile, 1024, "%s%c%s", m_szInterDir, PATH_SEPARATOR, filename);
szSrcFile[1024-1] = '\0';
char szDstFile[1024];
Util::MakeUniqueFilename(szDstFile, 1024, m_szDestDir, filename);
PrintMessage(Message::mkInfo, "Moving file %s to %s", Util::BaseFileName(szSrcFile), m_szDestDir);
if (!Util::MoveFile(szSrcFile, szDstFile))
{
PrintMessage(Message::mkError, "Could not move file %s to %s! Errcode: %i", szSrcFile, szDstFile, errno);
bOK = false;
}
}
}
if (bOK && !Util::DeleteDirectoryWithContent(m_szInterDir))
{
PrintMessage(Message::mkError, "Could not remove intermediate directory %s", m_szInterDir);
}
return bOK;
}
void CleanupController::StartJob(PostInfo* pPostInfo)
{
CleanupController* pCleanupController = new CleanupController();
pCleanupController->m_pPostInfo = pPostInfo;
pCleanupController->SetAutoDestroy(false);
pPostInfo->SetPostThread(pCleanupController);
pCleanupController->Start();
}
void CleanupController::Run()
{
// the locking is needed for accessing the members of NZBInfo
g_pDownloadQueueHolder->LockQueue();
char szNZBName[1024];
strncpy(szNZBName, m_pPostInfo->GetNZBInfo()->GetName(), 1024);
szNZBName[1024-1] = '\0';
char szInfoName[1024];
snprintf(szInfoName, 1024, "cleanup for %s", m_pPostInfo->GetNZBInfo()->GetName());
szInfoName[1024-1] = '\0';
SetInfoName(szInfoName);
strncpy(m_szDestDir, m_pPostInfo->GetNZBInfo()->GetDestDir(), 1024);
m_szDestDir[1024-1] = '\0';
bool bInterDir = strlen(g_pOptions->GetInterDir()) > 0 &&
!strncmp(m_szDestDir, g_pOptions->GetInterDir(), strlen(g_pOptions->GetInterDir()));
if (bInterDir)
{
m_pPostInfo->GetNZBInfo()->BuildFinalDirName(m_szFinalDir, 1024);
m_szFinalDir[1024-1] = '\0';
}
else
{
m_szFinalDir[0] = '\0';
}
g_pDownloadQueueHolder->UnlockQueue();
info("Cleaning up %s", szNZBName);
bool bDeleted = false;
bool bOK = Cleanup(m_szDestDir, &bDeleted);
if (bOK && m_szFinalDir[0] != '\0')
{
bool bDeleted2 = false;
bOK = Cleanup(m_szFinalDir, &bDeleted2);
bDeleted = bDeleted || bDeleted2;
}
szInfoName[0] = 'C'; // uppercase
if (bOK && bDeleted)
{
info("%s successful", szInfoName);
m_pPostInfo->GetNZBInfo()->SetCleanupStatus(NZBInfo::csSuccess);
}
else if (bOK)
{
info("Nothing to cleanup for %s", szNZBName);
m_pPostInfo->GetNZBInfo()->SetCleanupStatus(NZBInfo::csSuccess);
}
else
{
error("%s failed", szInfoName);
m_pPostInfo->GetNZBInfo()->SetCleanupStatus(NZBInfo::csFailure);
}
m_pPostInfo->SetStage(PostInfo::ptQueued);
m_pPostInfo->SetWorking(false);
}
bool CleanupController::Cleanup(const char* szDestDir, bool *bDeleted)
{
*bDeleted = false;
bool bOK = true;
ExtList extList;
// split ExtCleanupDisk into tokens and create a list
char* szExtCleanupDisk = strdup(g_pOptions->GetExtCleanupDisk());
char* saveptr;
char* szExt = strtok_r(szExtCleanupDisk, ",; ", &saveptr);
while (szExt)
{
extList.push_back(szExt);
szExt = strtok_r(NULL, ",; ", &saveptr);
}
DirBrowser dir(szDestDir);
while (const char* filename = dir.Next())
{
// check file extension
int iFilenameLen = strlen(filename);
bool bDeleteIt = false;
for (ExtList::iterator it = extList.begin(); it != extList.end(); it++)
{
const char* szExt = *it;
int iExtLen = strlen(szExt);
if (iFilenameLen >= iExtLen && !strcasecmp(szExt, filename + iFilenameLen - iExtLen))
{
bDeleteIt = true;
break;
}
}
if (bDeleteIt)
{
char szFullFilename[1024];
snprintf(szFullFilename, 1024, "%s%c%s", szDestDir, PATH_SEPARATOR, filename);
szFullFilename[1024-1] = '\0';
PrintMessage(Message::mkInfo, "Deleting file %s", filename);
if (remove(szFullFilename) != 0)
{
PrintMessage(Message::mkError, "Could not delete file %s! Errcode: %i", szFullFilename, errno);
bOK = false;
}
*bDeleted = true;
}
}
free(szExtCleanupDisk);
return bOK;
}
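
The ReadLine()/AddMessage() pair above tracks unrar's in-place progress output: unrar rewinds its status line with backspace characters, and the text after the last backspace carries the current percentage. A standalone reduction of that parsing step follows; the helper name and the sample string are invented for illustration.

#include <cstdlib>
#include <cstring>

// Returns the percentage found after the last backspace, or -1 if none.
static int ExtractUnrarPercentSketch(const char* szChunk)
{
    const char* szBackspace = strrchr(szChunk, '\b');   // unrar rewinds with '\b' before reprinting
    if (szBackspace && strchr(szBackspace, '%'))
    {
        return atoi(szBackspace + 1);                   // e.g. " 42%" -> 42
    }
    return -1;
}

// ExtractUnrarPercentSketch("Extracting  file.mkv\b\b\b\b 42%") == 42;
// the real code feeds this value into PostInfo::SetStageProgress(iPercent * 10).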

Unpack.h (new file, 129 lines)

@@ -0,0 +1,129 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef UNPACK_H
#define UNPACK_H
#include <deque>
#include "Log.h"
#include "Thread.h"
#include "DownloadInfo.h"
#include "ScriptController.h"
class UnpackController : public Thread, public ScriptController
{
private:
enum EUnpacker
{
upUnrar,
upSevenZip
};
typedef std::deque<char*> FileListBase;
class FileList : public FileListBase
{
public:
void Clear();
bool Exists(const char* szFilename);
};
private:
PostInfo* m_pPostInfo;
char m_szName[1024];
char m_szInfoName[1024];
char m_szInfoNameUp[1024];
char m_szDestDir[1024];
char m_szFinalDir[1024];
char m_szUnpackDir[1024];
char m_szPassword[1024];
bool m_bInterDir;
bool m_bAllOKMessageReceived;
bool m_bNoFilesMessageReceived;
bool m_bHasParFiles;
bool m_bHasRarFiles;
bool m_bHasNonStdRarFiles;
bool m_bHasSevenZipFiles;
bool m_bHasSevenZipMultiFiles;
bool m_bUnpackOK;
bool m_bUnpackStartError;
bool m_bUnpackSpaceError;
bool m_bUnpackPasswordError;
bool m_bCleanedUpDisk;
EUnpacker m_eUnpacker;
bool m_bFinalDirCreated;
protected:
virtual bool ReadLine(char* szBuf, int iBufSize, FILE* pStream);
virtual void AddMessage(Message::EKind eKind, const char* szText);
void ExecuteUnrar();
void ExecuteSevenZip(bool bMultiVolumes);
void Completed();
void CreateUnpackDir();
bool Cleanup();
void CheckArchiveFiles(bool bScanNonStdFiles);
void SetProgressLabel(const char* szProgressLabel);
#ifndef DISABLE_PARCHECK
void RequestParCheck();
#endif
bool FileHasRarSignature(const char* szFilename);
public:
virtual void Run();
virtual void Stop();
static void StartJob(PostInfo* pPostInfo);
};
class MoveController : public Thread, public ScriptController
{
private:
PostInfo* m_pPostInfo;
char m_szInterDir[1024];
char m_szDestDir[1024];
bool MoveFiles();
public:
virtual void Run();
static void StartJob(PostInfo* pPostInfo);
};
class CleanupController : public Thread, public ScriptController
{
private:
PostInfo* m_pPostInfo;
char m_szDestDir[1024];
char m_szFinalDir[1024];
bool Cleanup(const char* szDestDir, bool *bDeleted);
typedef std::deque<char*> ExtList;
public:
virtual void Run();
static void StartJob(PostInfo* pPostInfo);
};
#endif

UrlCoordinator.cpp (new file, 454 lines)

@@ -0,0 +1,454 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sys/stat.h>
#ifndef WIN32
#include <unistd.h>
#include <sys/time.h>
#endif
#include "nzbget.h"
#include "UrlCoordinator.h"
#include "Options.h"
#include "WebDownloader.h"
#include "DiskState.h"
#include "Log.h"
#include "Util.h"
#include "NZBFile.h"
#include "QueueCoordinator.h"
#include "Scanner.h"
extern Options* g_pOptions;
extern DiskState* g_pDiskState;
extern QueueCoordinator* g_pQueueCoordinator;
extern Scanner* g_pScanner;
UrlDownloader::UrlDownloader() : WebDownloader()
{
m_szCategory = NULL;
}
UrlDownloader::~UrlDownloader()
{
free(m_szCategory);
}
void UrlDownloader::ProcessHeader(const char* szLine)
{
WebDownloader::ProcessHeader(szLine);
if (!strncmp(szLine, "X-DNZB-Category:", 16))
{
free(m_szCategory);
char* szCategory = strdup(szLine + 16);
m_szCategory = strdup(Util::Trim(szCategory));
free(szCategory);
debug("Category: %s", m_szCategory);
}
else if (!strncmp(szLine, "X-DNZB-", 7))
{
char* szModLine = strdup(szLine);
char* szValue = strchr(szModLine, ':');
if (szValue)
{
*szValue = '\0';
szValue++;
while (*szValue == ' ') szValue++;
Util::Trim(szValue);
debug("X-DNZB: %s", szModLine);
debug("Value: %s", szValue);
char szParamName[100];
snprintf(szParamName, 100, "*DNZB:%s", szModLine + 7);
szParamName[100-1] = '\0';
char* szVal = WebUtil::Latin1ToUtf8(szValue);
m_ppParameters.SetParameter(szParamName, szVal);
free(szVal);
}
free(szModLine);
}
}
UrlCoordinator::UrlCoordinator()
{
debug("Creating UrlCoordinator");
m_bHasMoreJobs = true;
m_bForce = false;
}
UrlCoordinator::~UrlCoordinator()
{
debug("Destroying UrlCoordinator");
// Cleanup
debug("Deleting UrlDownloaders");
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
delete *it;
}
m_ActiveDownloads.clear();
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
for (UrlQueue::iterator it = pDownloadQueue->GetUrlQueue()->begin(); it != pDownloadQueue->GetUrlQueue()->end(); it++)
{
delete *it;
}
pDownloadQueue->GetUrlQueue()->clear();
g_pQueueCoordinator->UnlockQueue();
debug("UrlCoordinator destroyed");
}
void UrlCoordinator::Run()
{
debug("Entering UrlCoordinator-loop");
int iResetCounter = 0;
while (!IsStopped())
{
if (!(g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2()) || m_bForce || g_pOptions->GetUrlForce())
{
// start download for next URL
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
if ((int)m_ActiveDownloads.size() < g_pOptions->GetUrlConnections())
{
UrlInfo* pUrlInfo;
bool bHasMoreUrls = GetNextUrl(pDownloadQueue, pUrlInfo);
bool bUrlDownloadsRunning = !m_ActiveDownloads.empty();
m_bHasMoreJobs = bHasMoreUrls || bUrlDownloadsRunning;
if (bHasMoreUrls && !IsStopped())
{
StartUrlDownload(pUrlInfo);
}
if (!bHasMoreUrls)
{
m_bForce = false;
}
}
g_pQueueCoordinator->UnlockQueue();
}
int iSleepInterval = 100;
usleep(iSleepInterval * 1000);
iResetCounter += iSleepInterval;
if (iResetCounter >= 1000)
{
// this code should not be called too often, once per second is OK
ResetHangingDownloads();
iResetCounter = 0;
}
}
// waiting for downloads
debug("UrlCoordinator: waiting for Downloads to complete");
bool completed = false;
while (!completed)
{
g_pQueueCoordinator->LockQueue();
completed = m_ActiveDownloads.size() == 0;
g_pQueueCoordinator->UnlockQueue();
usleep(100 * 1000);
ResetHangingDownloads();
}
debug("UrlCoordinator: Downloads are completed");
debug("Exiting UrlCoordinator-loop");
}
void UrlCoordinator::Stop()
{
Thread::Stop();
debug("Stopping UrlDownloads");
g_pQueueCoordinator->LockQueue();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
(*it)->Stop();
}
g_pQueueCoordinator->UnlockQueue();
debug("UrlDownloads are notified");
}
void UrlCoordinator::ResetHangingDownloads()
{
const int TimeOut = g_pOptions->GetTerminateTimeout();
if (TimeOut == 0)
{
return;
}
g_pQueueCoordinator->LockQueue();
time_t tm = ::time(NULL);
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end();)
{
UrlDownloader* pUrlDownloader = *it;
if (tm - pUrlDownloader->GetLastUpdateTime() > TimeOut &&
pUrlDownloader->GetStatus() == UrlDownloader::adRunning)
{
UrlInfo* pUrlInfo = pUrlDownloader->GetUrlInfo();
debug("Terminating hanging download %s", pUrlDownloader->GetInfoName());
if (pUrlDownloader->Terminate())
{
error("Terminated hanging download %s", pUrlDownloader->GetInfoName());
pUrlInfo->SetStatus(UrlInfo::aiUndefined);
}
else
{
error("Could not terminate hanging download %s", pUrlDownloader->GetInfoName());
}
m_ActiveDownloads.erase(it);
// it's not safe to destroy pUrlDownloader, because the state of object is unknown
delete pUrlDownloader;
it = m_ActiveDownloads.begin();
continue;
}
it++;
}
g_pQueueCoordinator->UnlockQueue();
}
void UrlCoordinator::LogDebugInfo()
{
debug(" UrlCoordinator");
debug(" ----------------");
g_pQueueCoordinator->LockQueue();
debug(" Active Downloads: %i", m_ActiveDownloads.size());
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
UrlDownloader* pUrlDownloader = *it;
pUrlDownloader->LogDebugInfo();
}
g_pQueueCoordinator->UnlockQueue();
}
void UrlCoordinator::AddUrlToQueue(UrlInfo* pUrlInfo, bool AddFirst)
{
debug("Adding NZB-URL to queue");
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
pDownloadQueue->GetUrlQueue()->push_back(pUrlInfo);
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
if (pUrlInfo->GetForce())
{
m_bForce = true;
}
g_pQueueCoordinator->UnlockQueue();
}
/*
* Returns next URL for download.
*/
bool UrlCoordinator::GetNextUrl(DownloadQueue* pDownloadQueue, UrlInfo* &pUrlInfo)
{
bool bPauseDownload = g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2();
for (UrlQueue::iterator at = pDownloadQueue->GetUrlQueue()->begin(); at != pDownloadQueue->GetUrlQueue()->end(); at++)
{
pUrlInfo = *at;
if (pUrlInfo->GetStatus() == 0 && (!bPauseDownload || pUrlInfo->GetForce() || g_pOptions->GetUrlForce()))
{
return true;
}
}
return false;
}
void UrlCoordinator::StartUrlDownload(UrlInfo* pUrlInfo)
{
debug("Starting new UrlDownloader");
UrlDownloader* pUrlDownloader = new UrlDownloader();
pUrlDownloader->SetAutoDestroy(true);
pUrlDownloader->Attach(this);
pUrlDownloader->SetUrlInfo(pUrlInfo);
pUrlDownloader->SetURL(pUrlInfo->GetURL());
pUrlDownloader->SetForce(pUrlInfo->GetForce() || g_pOptions->GetUrlForce());
char tmp[1024];
pUrlInfo->GetName(tmp, 1024);
pUrlDownloader->SetInfoName(tmp);
snprintf(tmp, 1024, "%surl-%i.tmp", g_pOptions->GetTempDir(), pUrlInfo->GetID());
tmp[1024-1] = '\0';
pUrlDownloader->SetOutputFilename(tmp);
pUrlInfo->SetStatus(UrlInfo::aiRunning);
m_ActiveDownloads.push_back(pUrlDownloader);
pUrlDownloader->Start();
}
void UrlCoordinator::Update(Subject* pCaller, void* pAspect)
{
debug("Notification from UrlDownloader received");
UrlDownloader* pUrlDownloader = (UrlDownloader*) pCaller;
if ((pUrlDownloader->GetStatus() == WebDownloader::adFinished) ||
(pUrlDownloader->GetStatus() == WebDownloader::adFailed) ||
(pUrlDownloader->GetStatus() == WebDownloader::adRetry))
{
UrlCompleted(pUrlDownloader);
}
}
void UrlCoordinator::UrlCompleted(UrlDownloader* pUrlDownloader)
{
debug("URL downloaded");
UrlInfo* pUrlInfo = pUrlDownloader->GetUrlInfo();
if (pUrlDownloader->GetStatus() == WebDownloader::adFinished)
{
pUrlInfo->SetStatus(UrlInfo::aiFinished);
}
else if (pUrlDownloader->GetStatus() == WebDownloader::adFailed)
{
pUrlInfo->SetStatus(UrlInfo::aiFailed);
}
else if (pUrlDownloader->GetStatus() == WebDownloader::adRetry)
{
pUrlInfo->SetStatus(UrlInfo::aiUndefined);
}
char filename[1024];
if (pUrlDownloader->GetOriginalFilename())
{
strncpy(filename, pUrlDownloader->GetOriginalFilename(), 1024);
filename[1024-1] = '\0';
}
else
{
strncpy(filename, Util::BaseFileName(pUrlInfo->GetURL()), 1024);
filename[1024-1] = '\0';
// TODO: decode URL escaping
}
Util::MakeValidFilename(filename, '_', false);
debug("Filename: [%s]", filename);
// delete Download from active jobs
g_pQueueCoordinator->LockQueue();
for (ActiveDownloads::iterator it = m_ActiveDownloads.begin(); it != m_ActiveDownloads.end(); it++)
{
UrlDownloader* pa = *it;
if (pa == pUrlDownloader)
{
m_ActiveDownloads.erase(it);
break;
}
}
g_pQueueCoordinator->UnlockQueue();
Aspect aspect = { eaUrlCompleted, pUrlInfo };
Notify(&aspect);
if (pUrlInfo->GetStatus() == UrlInfo::aiFinished)
{
// add nzb-file to download queue
Scanner::EAddStatus eAddStatus = g_pScanner->AddExternalFile(
pUrlInfo->GetNZBFilename() && strlen(pUrlInfo->GetNZBFilename()) > 0 ? pUrlInfo->GetNZBFilename() : filename,
strlen(pUrlInfo->GetCategory()) > 0 ? pUrlInfo->GetCategory() : pUrlDownloader->GetCategory(),
pUrlInfo->GetPriority(), pUrlInfo->GetDupeKey(), pUrlInfo->GetDupeScore(), pUrlInfo->GetDupeMode(),
pUrlDownloader->GetParameters(), pUrlInfo->GetAddTop(), pUrlInfo->GetAddPaused(),
pUrlDownloader->GetOutputFilename(), NULL, 0);
if (eAddStatus != Scanner::asSuccess)
{
pUrlInfo->SetStatus(eAddStatus == Scanner::asFailed ? UrlInfo::aiScanFailed : UrlInfo::aiScanSkipped);
}
}
// delete Download from Url Queue
if (pUrlInfo->GetStatus() != UrlInfo::aiRetry)
{
DownloadQueue* pDownloadQueue = g_pQueueCoordinator->LockQueue();
for (UrlQueue::iterator it = pDownloadQueue->GetUrlQueue()->begin(); it != pDownloadQueue->GetUrlQueue()->end(); it++)
{
UrlInfo* pa = *it;
if (pa == pUrlInfo)
{
pDownloadQueue->GetUrlQueue()->erase(it);
break;
}
}
bool bDeleteObj = true;
if (g_pOptions->GetKeepHistory() > 0 && pUrlInfo->GetStatus() != UrlInfo::aiFinished)
{
HistoryInfo* pHistoryInfo = new HistoryInfo(pUrlInfo);
pHistoryInfo->SetTime(time(NULL));
pDownloadQueue->GetHistoryList()->push_front(pHistoryInfo);
bDeleteObj = false;
}
if (g_pOptions->GetSaveQueue() && g_pOptions->GetServerMode())
{
g_pDiskState->SaveDownloadQueue(pDownloadQueue);
}
g_pQueueCoordinator->UnlockQueue();
if (bDeleteObj)
{
delete pUrlInfo;
}
}
}
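
UrlDownloader::ProcessHeader() above turns indexer response headers into post-processing parameters: "X-DNZB-Category" is kept separately, and every other "X-DNZB-<Name>" header becomes a parameter named "*DNZB:<Name>" with its Latin-1 value converted to UTF-8. A standalone reduction of the name/value split follows; the helper simply prints instead of calling NZBParameterList, and its name is invented for illustration.

#include <cstdio>
#include <cstring>

static void MapDnzbHeaderSketch(const char* szLine)
{
    if (strncmp(szLine, "X-DNZB-", 7) != 0)
    {
        return;                                   // not an indexer header
    }
    const char* szColon = strchr(szLine, ':');
    if (!szColon)
    {
        return;                                   // malformed header line
    }
    char szParamName[100];
    snprintf(szParamName, sizeof(szParamName), "*DNZB:%.*s",
        (int)(szColon - (szLine + 7)), szLine + 7);
    const char* szValue = szColon + 1;
    while (*szValue == ' ') szValue++;            // the real code also trims and re-encodes the value
    printf("%s=%s\n", szParamName, szValue);
}

// MapDnzbHeaderSketch("X-DNZB-ProperName: Some.Release") prints "*DNZB:ProperName=Some.Release"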

UrlCoordinator.h (new file, 98 lines)

@@ -0,0 +1,98 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef URLCOORDINATOR_H
#define URLCOORDINATOR_H
#include <deque>
#include <list>
#include <time.h>
#include "Thread.h"
#include "WebDownloader.h"
#include "DownloadInfo.h"
#include "Observer.h"
class UrlDownloader;
class UrlCoordinator : public Thread, public Observer, public Subject
{
public:
typedef std::list<UrlDownloader*> ActiveDownloads;
enum EAspectAction
{
eaUrlAdded,
eaUrlCompleted
};
struct Aspect
{
EAspectAction eAction;
UrlInfo* pUrlInfo;
};
private:
ActiveDownloads m_ActiveDownloads;
bool m_bHasMoreJobs;
bool m_bForce;
bool GetNextUrl(DownloadQueue* pDownloadQueue, UrlInfo* &pUrlInfo);
void StartUrlDownload(UrlInfo* pUrlInfo);
void UrlCompleted(UrlDownloader* pUrlDownloader);
void ResetHangingDownloads();
public:
UrlCoordinator();
virtual ~UrlCoordinator();
virtual void Run();
virtual void Stop();
void Update(Subject* pCaller, void* pAspect);
// Editing the queue
void AddUrlToQueue(UrlInfo* pUrlInfo, bool AddFirst);
bool HasMoreJobs() { return m_bHasMoreJobs; }
void LogDebugInfo();
};
class UrlDownloader : public WebDownloader
{
private:
UrlInfo* m_pUrlInfo;
char* m_szCategory;
NZBParameterList m_ppParameters;
protected:
virtual void ProcessHeader(const char* szLine);
public:
UrlDownloader();
~UrlDownloader();
void SetUrlInfo(UrlInfo* pUrlInfo) { m_pUrlInfo = pUrlInfo; }
UrlInfo* GetUrlInfo() { return m_pUrlInfo; }
const char* GetCategory() { return m_szCategory; }
NZBParameterList* GetParameters() { return &m_ppParameters; }
};
#endif

Util.cpp (1951 changed lines)

File diff suppressed because it is too large.

Util.h (205 changed lines)

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -57,6 +57,19 @@ public:
const char* Next();
};
class StringBuilder
{
private:
char* m_szBuffer;
int m_iBufferSize;
int m_iUsedSize;
public:
StringBuilder();
~StringBuilder();
void Append(const char* szStr);
const char* GetBuffer() { return m_szBuffer; }
};
class Util
{
public:
@@ -64,22 +77,33 @@ public:
static char* BaseFileName(const char* filename);
static void NormalizePathSeparators(char* szPath);
static bool LoadFileIntoBuffer(const char* szFileName, char** pBuffer, int* pBufferLength);
static bool SetFileSize(const char* szFilename, int iSize);
static bool SaveBufferIntoFile(const char* szFileName, const char* szBuffer, int iBufLen);
static bool CreateSparseFile(const char* szFilename, int iSize);
static bool TruncateFile(const char* szFilename, int iSize);
static void MakeValidFilename(char* szFilename, char cReplaceChar, bool bAllowSlashes);
static bool MakeUniqueFilename(char* szDestBufFilename, int iDestBufSize, const char* szDestDir, const char* szBasename);
static bool MoveFile(const char* szSrcFilename, const char* szDstFilename);
static bool FileExists(const char* szFilename);
static bool FileExists(const char* szPath, const char* szFilenameWithoutPath);
static bool DirectoryExists(const char* szDirFilename);
static bool CreateDirectory(const char* szDirFilename);
static bool ForceDirectories(const char* szPath);
static bool RemoveDirectory(const char* szDirFilename);
static bool DeleteDirectoryWithContent(const char* szDirFilename);
static bool ForceDirectories(const char* szPath, char* szErrBuf, int iBufSize);
static bool GetCurrentDirectory(char* szBuffer, int iBufSize);
static bool SetCurrentDirectory(const char* szDirFilename);
static long long FileSize(const char* szFilename);
static long long FreeDiskSize(const char* szPath);
static bool DirEmpty(const char* szDirFilename);
static bool RenameBak(const char* szFilename, const char* szBakPart, bool bRemoveOldExtension, char* szNewNameBuf, int iNewNameBufSize);
#ifndef WIN32
static bool ExpandHomePath(const char* szFilename, char* szBuffer, int iBufSize);
static void ExpandFileName(const char* szFilename, char* szBuffer, int iBufSize);
static void FixExecPermission(const char* szFilename);
#endif
static void ExpandFileName(const char* szFilename, char* szBuffer, int iBufSize);
static void FormatFileSize(char* szBuffer, int iBufLen, long long lFileSize);
static bool SameFilename(const char* szFilename1, const char* szFilename2);
static char* GetLastErrorMessage(char* szBuffer, int iBufLen);
/*
* Split command line into arguments.
@@ -105,6 +129,39 @@ public:
*/
static float Int64ToFloat(long long Int64);
static void TrimRight(char* szStr);
static char* Trim(char* szStr);
static bool EmptyStr(const char* szStr) { return !szStr || !*szStr; }
/* Replace all occurrences of szFrom with szTo in string szStr, with the limitation that szTo must be shorter than szFrom */
static char* ReduceStr(char* szStr, const char* szFrom, const char* szTo);
/* Calculate Hash using Bob Jenkins (1996) algorithm */
static unsigned int HashBJ96(const char* szBuffer, int iBufSize, unsigned int iInitValue);
#ifdef WIN32
static bool RegReadStr(HKEY hKey, const char* szKeyName, const char* szValueName, char* szBuffer, int* iBufLen);
#endif
/*
* Returns the program version and revision number as a string formatted like "0.7.0-r295".
* If the revision number is not available, only the version is returned ("0.7.0").
*/
static const char* VersionRevision() { return VersionRevisionBuf; };
/*
* Initialize buffer for program version and revision number.
* This function must be called during program initialization before any
* call to "VersionRevision()".
*/
static void InitVersionRevision();
static char VersionRevisionBuf[40];
};
class WebUtil
{
public:
static unsigned int DecodeBase64(char* szInputBuffer, int iInputBufferLength, char* szOutputBuffer);
/*
@@ -155,19 +212,137 @@ public:
static const char* JsonNextValue(const char* szJsonText, int* pValueLength);
/*
* Returns program version and revision number as string formatted like "0.7.0-r295".
* If revision number is not available only version is returned ("0.7.0").
* Unquote an HTTP quoted string.
* The string is decoded in place, overwriting the content of the raw data.
*/
static const char* VersionRevision() { return VersionRevisionBuf; };
static void HttpUnquote(char* raw);
#ifdef WIN32
static bool Utf8ToAnsi(char* szBuffer, int iBufLen);
static bool AnsiToUtf8(char* szBuffer, int iBufLen);
#endif
/*
* Initialize buffer for program version and revision number.
* This function must be called during program initialization before any
* call to "VersionRevision()".
* Converts ISO-8859-1 (aka Latin-1) into UTF-8.
* Returns a new string allocated with malloc; it must be freed by the caller.
*/
static void InitVersionRevision();
static char VersionRevisionBuf[40];
static char* Latin1ToUtf8(const char* szStr);
static time_t ParseRfc822DateTime(const char* szDateTimeStr);
};
class URL
{
private:
char* m_szAddress;
char* m_szProtocol;
char* m_szUser;
char* m_szPassword;
char* m_szHost;
char* m_szResource;
int m_iPort;
bool m_bTLS;
bool m_bValid;
void ParseURL();
public:
URL(const char* szAddress);
~URL();
bool IsValid() { return m_bValid; }
const char* GetAddress() { return m_szAddress; }
const char* GetProtocol() { return m_szProtocol; }
const char* GetUser() { return m_szUser; }
const char* GetPassword() { return m_szPassword; }
const char* GetHost() { return m_szHost; }
const char* GetResource() { return m_szResource; }
int GetPort() { return m_iPort; }
bool GetTLS() { return m_bTLS; }
};
class RegEx
{
private:
void* m_pContext;
bool m_bValid;
void* m_pMatches;
int m_iMatchBufSize;
public:
RegEx(const char *szPattern, int iMatchBufSize = 100);
~RegEx();
bool IsValid() { return m_bValid; }
bool Match(const char *szStr);
int GetMatchCount();
int GetMatchStart(int index);
int GetMatchLen(int index);
};
class WildMask
{
private:
char* m_szPattern;
bool m_bWantsPositions;
int m_iWildCount;
int* m_WildStart;
int* m_WildLen;
int m_iArrLen;
void ExpandArray();
public:
WildMask(const char *szPattern, bool bWantsPositions = false);
~WildMask();
bool Match(const char *szStr);
int GetMatchCount() { return m_iWildCount; }
int GetMatchStart(int index) { return m_WildStart[index]; }
int GetMatchLen(int index) { return m_WildLen[index]; }
};
#ifndef DISABLE_GZIP
class ZLib
{
public:
/*
* Calculates the size required for the output buffer.
*/
static unsigned int GZipLen(int iInputBufferLength);
/*
* Returns the number of bytes written to szOutputBuffer, or 0 if the buffer is too small or an error occurred.
*/
static unsigned int GZip(const void* szInputBuffer, int iInputBufferLength, void* szOutputBuffer, int iOutputBufferLength);
};
class GUnzipStream
{
public:
enum EStatus
{
zlError,
zlFinished,
zlOK
};
private:
void* m_pZStream;
void* m_pOutputBuffer;
int m_iBufferSize;
public:
GUnzipStream(int BufferSize);
~GUnzipStream();
/*
* Sets the next memory block to be decompressed.
*/
void Write(const void *pInputBuffer, int iInputBufferLength);
/*
* Gets the next decompressed memory block.
* iOutputBufferLength receives the size of the decompressed block; if it is "0" the next compressed block must be provided via "Write".
*/
EStatus Read(const void **pOutputBuffer, int *iOutputBufferLength);
};
#endif
#endif
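
The helper classes added to Util.h above (StringBuilder, URL, RegEx, WildMask and the zlib wrappers) are consumed by the new WebDownloader and WebServer code further down in this compare. Below is a minimal usage sketch based only on the declarations visible in this diff; the sample values and the exact parsing/matching semantics are assumptions, not documented behaviour.

// Hypothetical usage sketch only -- not part of the diff above.
#include <stdio.h>
#include <stdlib.h>
#include "Util.h"

void UtilUsageSketch(const void* pCompressedChunk, int iChunkLen)
{
	// URL: split an address into its components.
	URL url("https://user:pass@host.example:6789/path/file.nzb");
	if (url.IsValid())
	{
		printf("host=%s port=%i resource=%s tls=%i\n",
			url.GetHost(), url.GetPort(), url.GetResource(), (int)url.GetTLS());
	}

	// WildMask: shell-style pattern matching, optionally recording match positions.
	WildMask mask("*.nzb", true);
	if (mask.Match("my-download.nzb"))
	{
		printf("matched, %i wildcard span(s)\n", mask.GetMatchCount());
	}

	// StringBuilder: grow-on-append buffer (used for building RPC responses).
	StringBuilder response;
	response.Append("{\"status\":");
	response.Append("\"ok\"}");
	printf("%s\n", response.GetBuffer());

	// WebUtil::Latin1ToUtf8 returns a malloc'ed string; the caller frees it.
	char* szUtf8 = WebUtil::Latin1ToUtf8("Caf\xE9");
	printf("%s\n", szUtf8);
	free(szUtf8);

#ifndef DISABLE_GZIP
	// GUnzipStream: feed a compressed chunk via Write(), then drain Read()
	// until it reports 0 output bytes; only then feed the next chunk.
	GUnzipStream unzip(1024*10);
	unzip.Write(pCompressedChunk, iChunkLen);
	const void* pOut;
	int iOutLen = 1;
	while (iOutLen > 0)
	{
		if (unzip.Read(&pOut, &iOutLen) == GUnzipStream::zlError)
		{
			break;
		}
		if (iOutLen > 0)
		{
			fwrite(pOut, 1, iOutLen, stdout);
		}
	}
#endif
}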

722
WebDownloader.cpp Normal file

@@ -0,0 +1,722 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifdef WIN32
#include <direct.h>
#else
#include <unistd.h>
#include <sys/time.h>
#endif
#include <sys/stat.h>
#include <errno.h>
#include "nzbget.h"
#include "WebDownloader.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
WebDownloader::WebDownloader()
{
debug("Creating WebDownloader");
m_szURL = NULL;
m_szOutputFilename = NULL;
m_pConnection = NULL;
m_szInfoName = NULL;
m_bConfirmedLength = false;
m_eStatus = adUndefined;
m_szOriginalFilename = NULL;
m_bForce = false;
m_bRetry = true;
SetLastUpdateTimeNow();
}
WebDownloader::~WebDownloader()
{
debug("Destroying WebDownloader");
free(m_szURL);
free(m_szInfoName);
free(m_szOutputFilename);
free(m_szOriginalFilename);
}
void WebDownloader::SetOutputFilename(const char* v)
{
m_szOutputFilename = strdup(v);
}
void WebDownloader::SetInfoName(const char* v)
{
m_szInfoName = strdup(v);
}
void WebDownloader::SetURL(const char * szURL)
{
free(m_szURL);
m_szURL = strdup(szURL);
}
void WebDownloader::SetStatus(EStatus eStatus)
{
m_eStatus = eStatus;
Notify(NULL);
}
void WebDownloader::Run()
{
debug("Entering WebDownloader-loop");
SetStatus(adRunning);
int iRemainedDownloadRetries = g_pOptions->GetRetries() > 0 ? g_pOptions->GetRetries() : 1;
int iRemainedConnectRetries = iRemainedDownloadRetries > 10 ? iRemainedDownloadRetries : 10;
if (!m_bRetry)
{
iRemainedDownloadRetries = 1;
iRemainedConnectRetries = 1;
}
m_iRedirects = 0;
EStatus Status = adFailed;
while (!IsStopped() && iRemainedDownloadRetries > 0 && iRemainedConnectRetries > 0)
{
SetLastUpdateTimeNow();
Status = Download();
if ((((Status == adFailed) && (iRemainedDownloadRetries > 1)) ||
((Status == adConnectError) && (iRemainedConnectRetries > 1)))
&& !IsStopped() && !(!m_bForce && (g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2())))
{
detail("Waiting %i sec to retry", g_pOptions->GetRetryInterval());
int msec = 0;
while (!IsStopped() && (msec < g_pOptions->GetRetryInterval() * 1000) &&
!(!m_bForce && (g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2())))
{
usleep(100 * 1000);
msec += 100;
}
}
if (IsStopped() || (!m_bForce && (g_pOptions->GetPauseDownload() || g_pOptions->GetPauseDownload2())))
{
Status = adRetry;
break;
}
if (Status == adFinished || Status == adFatalError || Status == adNotFound)
{
break;
}
if (Status == adRedirect)
{
m_iRedirects++;
if (m_iRedirects > 5)
{
warn("Too many redirects for %s", m_szInfoName);
Status = adFailed;
break;
}
}
if (Status != adConnectError)
{
iRemainedDownloadRetries--;
}
else
{
iRemainedConnectRetries--;
}
}
if (Status != adFinished && Status != adRetry)
{
Status = adFailed;
}
if (Status == adFailed)
{
if (IsStopped())
{
detail("Download %s cancelled", m_szInfoName);
}
else
{
error("Download %s failed", m_szInfoName);
}
}
if (Status == adFinished)
{
detail("Download %s completed", m_szInfoName);
}
SetStatus(Status);
debug("Exiting WebDownloader-loop");
}
WebDownloader::EStatus WebDownloader::Download()
{
EStatus Status = adRunning;
URL url(m_szURL);
Status = CreateConnection(&url);
if (Status != adRunning)
{
return Status;
}
m_pConnection->SetSuppressErrors(false);
// connection
bool bConnected = m_pConnection->Connect();
if (!bConnected || IsStopped())
{
FreeConnection();
return adConnectError;
}
// Okay, we got a Connection. Now start downloading.
detail("Downloading %s", m_szInfoName);
SendHeaders(&url);
Status = DownloadHeaders();
if (Status == adRunning)
{
Status = DownloadBody();
}
if (IsStopped())
{
Status = adFailed;
}
FreeConnection();
if (Status != adFinished)
{
// Download failed, delete broken output file
remove(m_szOutputFilename);
}
return Status;
}
WebDownloader::EStatus WebDownloader::CreateConnection(URL *pUrl)
{
if (!pUrl->IsValid())
{
error("URL is not valid: %s", pUrl->GetAddress());
return adFatalError;
}
int iPort = pUrl->GetPort();
if (iPort == 0 && !strcasecmp(pUrl->GetProtocol(), "http"))
{
iPort = 80;
}
if (iPort == 0 && !strcasecmp(pUrl->GetProtocol(), "https"))
{
iPort = 443;
}
if (strcasecmp(pUrl->GetProtocol(), "http") && strcasecmp(pUrl->GetProtocol(), "https"))
{
error("Unsupported protocol in URL: %s", pUrl->GetAddress());
return adFatalError;
}
#ifdef DISABLE_TLS
if (!strcasecmp(pUrl->GetProtocol(), "https"))
{
error("Program was compiled without TLS/SSL-support. Cannot download using https protocol. URL: %s", pUrl->GetAddress());
return adFatalError;
}
#endif
bool bTLS = !strcasecmp(pUrl->GetProtocol(), "https");
m_pConnection = new Connection(pUrl->GetHost(), iPort, bTLS);
return adRunning;
}
void WebDownloader::SendHeaders(URL *pUrl)
{
char tmp[1024];
// retrieve file
snprintf(tmp, 1024, "GET %s HTTP/1.0\r\n", pUrl->GetResource());
tmp[1024-1] = '\0';
m_pConnection->WriteLine(tmp);
snprintf(tmp, 1024, "User-Agent: nzbget/%s\r\n", Util::VersionRevision());
tmp[1024-1] = '\0';
m_pConnection->WriteLine(tmp);
snprintf(tmp, 1024, "Host: %s\r\n", pUrl->GetHost());
tmp[1024-1] = '\0';
m_pConnection->WriteLine(tmp);
m_pConnection->WriteLine("Accept: */*\r\n");
#ifndef DISABLE_GZIP
m_pConnection->WriteLine("Accept-Encoding: gzip\r\n");
#endif
m_pConnection->WriteLine("Connection: close\r\n");
m_pConnection->WriteLine("\r\n");
}
WebDownloader::EStatus WebDownloader::DownloadHeaders()
{
EStatus Status = adRunning;
m_bConfirmedLength = false;
const int LineBufSize = 1024*10;
char* szLineBuf = (char*)malloc(LineBufSize);
m_iContentLen = -1;
bool bFirstLine = true;
m_bGZip = false;
m_bRedirecting = false;
m_bRedirected = false;
// Headers
while (!IsStopped())
{
SetLastUpdateTimeNow();
int iLen = 0;
char* line = m_pConnection->ReadLine(szLineBuf, LineBufSize, &iLen);
if (bFirstLine)
{
Status = CheckResponse(szLineBuf);
if (Status != adRunning)
{
break;
}
bFirstLine = false;
}
// Have we encountered a timeout?
if (!line)
{
if (!IsStopped())
{
warn("URL %s failed: Unexpected end of file", m_szInfoName);
}
Status = adFailed;
break;
}
debug("Header: %s", line);
// detect body of response
if (*line == '\r' || *line == '\n')
{
if (m_iContentLen == -1 && !m_bGZip)
{
warn("URL %s: Content-Length is not submitted by server, cannot verify whether the file is complete", m_szInfoName);
}
break;
}
Util::TrimRight(line);
ProcessHeader(line);
if (m_bRedirected)
{
Status = adRedirect;
break;
}
}
free(szLineBuf);
return Status;
}
WebDownloader::EStatus WebDownloader::DownloadBody()
{
EStatus Status = adRunning;
m_pOutFile = NULL;
bool bEnd = false;
const int LineBufSize = 1024*10;
char* szLineBuf = (char*)malloc(LineBufSize);
int iWrittenLen = 0;
#ifndef DISABLE_GZIP
m_pGUnzipStream = NULL;
if (m_bGZip)
{
m_pGUnzipStream = new GUnzipStream(1024*10);
}
#endif
// Body
while (!IsStopped())
{
SetLastUpdateTimeNow();
char* szBuffer;
int iLen;
m_pConnection->ReadBuffer(&szBuffer, &iLen);
if (iLen == 0)
{
iLen = m_pConnection->TryRecv(szLineBuf, LineBufSize);
szBuffer = szLineBuf;
}
// Have we encountered a timeout?
if (iLen <= 0)
{
if (m_iContentLen == -1 && iWrittenLen > 0)
{
bEnd = true;
break;
}
if (!IsStopped())
{
warn("URL %s failed: Unexpected end of file", m_szInfoName);
}
Status = adFailed;
break;
}
// write to output file
if (!Write(szBuffer, iLen))
{
Status = adFatalError;
break;
}
iWrittenLen += iLen;
//detect end of file
if (iWrittenLen == m_iContentLen || (m_iContentLen == -1 && m_bGZip && m_bConfirmedLength))
{
bEnd = true;
break;
}
}
free(szLineBuf);
#ifndef DISABLE_GZIP
delete m_pGUnzipStream;
#endif
if (m_pOutFile)
{
fclose(m_pOutFile);
}
if (!bEnd && Status == adRunning && !IsStopped())
{
warn("URL %s failed: file incomplete", m_szInfoName);
Status = adFailed;
}
if (bEnd)
{
Status = adFinished;
}
return Status;
}
WebDownloader::EStatus WebDownloader::CheckResponse(const char* szResponse)
{
if (!szResponse)
{
if (!IsStopped())
{
warn("URL %s: Connection closed by remote host", m_szInfoName);
}
return adConnectError;
}
const char* szHTTPResponse = strchr(szResponse, ' ');
if (strncmp(szResponse, "HTTP", 4) || !szHTTPResponse)
{
warn("URL %s failed: %s", m_szInfoName, szResponse);
return adFailed;
}
szHTTPResponse++;
if (!strncmp(szHTTPResponse, "400", 3) || !strncmp(szHTTPResponse, "499", 3))
{
warn("URL %s failed: %s", m_szInfoName, szHTTPResponse);
return adConnectError;
}
else if (!strncmp(szHTTPResponse, "404", 3))
{
warn("URL %s failed: %s", m_szInfoName, szHTTPResponse);
return adNotFound;
}
else if (!strncmp(szHTTPResponse, "301", 3) || !strncmp(szHTTPResponse, "302", 3))
{
m_bRedirecting = true;
return adRunning;
}
else if (!strncmp(szHTTPResponse, "200", 3))
{
// OK
return adRunning;
}
else
{
// unknown error, no special handling
warn("URL %s failed: %s", m_szInfoName, szResponse);
return adFailed;
}
}
void WebDownloader::ProcessHeader(const char* szLine)
{
if (!strncasecmp(szLine, "Content-Length: ", 16))
{
m_iContentLen = atoi(szLine + 16);
m_bConfirmedLength = true;
}
else if (!strncasecmp(szLine, "Content-Encoding: gzip", 22))
{
m_bGZip = true;
}
else if (!strncasecmp(szLine, "Content-Disposition: ", 21))
{
ParseFilename(szLine);
}
else if (m_bRedirecting && !strncasecmp(szLine, "Location: ", 10))
{
ParseRedirect(szLine + 10);
m_bRedirected = true;
}
}
void WebDownloader::ParseFilename(const char* szContentDisposition)
{
// Examples:
// Content-Disposition: attachment; filename="fname.ext"
// Content-Disposition: attachement;filename=fname.ext
// Content-Disposition: attachement;filename=fname.ext;
const char *p = strstr(szContentDisposition, "filename");
if (!p)
{
return;
}
p = strchr(p, '=');
if (!p)
{
return;
}
p++;
while (*p == ' ') p++;
char fname[1024];
strncpy(fname, p, 1024);
fname[1024-1] = '\0';
char *pe = fname + strlen(fname) - 1;
while ((*pe == ' ' || *pe == '\n' || *pe == '\r' || *pe == ';') && pe > fname) {
*pe = '\0';
pe--;
}
WebUtil::HttpUnquote(fname);
free(m_szOriginalFilename);
m_szOriginalFilename = strdup(Util::BaseFileName(fname));
debug("OriginalFilename: %s", m_szOriginalFilename);
}
void WebDownloader::ParseRedirect(const char* szLocation)
{
const char* szNewURL = szLocation;
char szUrlBuf[1024];
URL newUrl(szNewURL);
if (!newUrl.IsValid())
{
// relative address
URL oldUrl(m_szURL);
if (oldUrl.GetPort() > 0)
{
snprintf(szUrlBuf, 1024, "%s://%s:%i%s", oldUrl.GetProtocol(), oldUrl.GetHost(), oldUrl.GetPort(), szNewURL);
}
else
{
snprintf(szUrlBuf, 1024, "%s://%s%s", oldUrl.GetProtocol(), oldUrl.GetHost(), szNewURL);
}
szUrlBuf[1024-1] = '\0';
szNewURL = szUrlBuf;
}
detail("URL %s redirected to %s", m_szURL, szNewURL);
SetURL(szNewURL);
}
bool WebDownloader::Write(void* pBuffer, int iLen)
{
if (!m_pOutFile && !PrepareFile())
{
return false;
}
#ifndef DISABLE_GZIP
if (m_bGZip)
{
m_pGUnzipStream->Write(pBuffer, iLen);
const void *pOutBuf;
int iOutLen = 1;
while (iOutLen > 0)
{
GUnzipStream::EStatus eGZStatus = m_pGUnzipStream->Read(&pOutBuf, &iOutLen);
if (eGZStatus == GUnzipStream::zlError)
{
error("URL %s: GUnzip failed", m_szInfoName);
return false;
}
if (iOutLen > 0 && fwrite(pOutBuf, 1, iOutLen, m_pOutFile) <= 0)
{
return false;
}
if (eGZStatus == GUnzipStream::zlFinished)
{
m_bConfirmedLength = true;
return true;
}
}
return true;
}
else
#endif
return fwrite(pBuffer, 1, iLen, m_pOutFile) > 0;
}
bool WebDownloader::PrepareFile()
{
// prepare file for writing
const char* szFilename = m_szOutputFilename;
m_pOutFile = fopen(szFilename, "wb");
if (!m_pOutFile)
{
error("Could not %s file %s", "create", szFilename);
return false;
}
if (g_pOptions->GetWriteBufferSize() > 0)
{
setvbuf(m_pOutFile, (char *)NULL, _IOFBF, g_pOptions->GetWriteBufferSize());
}
return true;
}
void WebDownloader::LogDebugInfo()
{
char szTime[50];
#ifdef HAVE_CTIME_R_3
ctime_r(&m_tLastUpdateTime, szTime, 50);
#else
ctime_r(&m_tLastUpdateTime, szTime);
#endif
debug(" Web-Download: status=%i, LastUpdateTime=%s, filename=%s", m_eStatus, szTime, Util::BaseFileName(m_szOutputFilename));
}
void WebDownloader::Stop()
{
debug("Trying to stop WebDownloader");
Thread::Stop();
m_mutexConnection.Lock();
if (m_pConnection)
{
m_pConnection->SetSuppressErrors(true);
m_pConnection->Cancel();
}
m_mutexConnection.Unlock();
debug("WebDownloader stopped successfully");
}
bool WebDownloader::Terminate()
{
Connection* pConnection = m_pConnection;
bool terminated = Kill();
if (terminated && pConnection)
{
debug("Terminating connection");
pConnection->SetSuppressErrors(true);
pConnection->Cancel();
pConnection->Disconnect();
delete pConnection;
}
return terminated;
}
void WebDownloader::FreeConnection()
{
if (m_pConnection)
{
debug("Releasing connection");
m_mutexConnection.Lock();
if (m_pConnection->GetStatus() == Connection::csCancelled)
{
m_pConnection->Disconnect();
}
delete m_pConnection;
m_pConnection = NULL;
m_mutexConnection.Unlock();
}
}

112
WebDownloader.h Normal file

@@ -0,0 +1,112 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef WEBDOWNLOADER_H
#define WEBDOWNLOADER_H
#include <time.h>
#include "Observer.h"
#include "Thread.h"
#include "Connection.h"
#include "Util.h"
class WebDownloader : public Thread, public Subject
{
public:
enum EStatus
{
adUndefined,
adRunning,
adFinished,
adFailed,
adRetry,
adNotFound,
adRedirect,
adConnectError,
adFatalError
};
private:
char* m_szURL;
char* m_szOutputFilename;
Connection* m_pConnection;
Mutex m_mutexConnection;
EStatus m_eStatus;
time_t m_tLastUpdateTime;
char* m_szInfoName;
FILE* m_pOutFile;
int m_iContentLen;
bool m_bConfirmedLength;
char* m_szOriginalFilename;
bool m_bForce;
bool m_bRedirecting;
bool m_bRedirected;
int m_iRedirects;
bool m_bGZip;
bool m_bRetry;
#ifndef DISABLE_GZIP
GUnzipStream* m_pGUnzipStream;
#endif
void SetStatus(EStatus eStatus);
bool Write(void* pBuffer, int iLen);
bool PrepareFile();
void FreeConnection();
EStatus CheckResponse(const char* szResponse);
EStatus CreateConnection(URL *pUrl);
void ParseFilename(const char* szContentDisposition);
void SendHeaders(URL *pUrl);
EStatus DownloadHeaders();
EStatus DownloadBody();
void ParseRedirect(const char* szLocation);
protected:
virtual void ProcessHeader(const char* szLine);
public:
WebDownloader();
~WebDownloader();
EStatus GetStatus() { return m_eStatus; }
virtual void Run();
virtual void Stop();
EStatus Download();
bool Terminate();
void SetInfoName(const char* v);
const char* GetInfoName() { return m_szInfoName; }
void SetURL(const char* szURL);
const char* GetOutputFilename() { return m_szOutputFilename; }
void SetOutputFilename(const char* v);
time_t GetLastUpdateTime() { return m_tLastUpdateTime; }
void SetLastUpdateTimeNow() { m_tLastUpdateTime = ::time(NULL); }
bool GetConfirmedLength() { return m_bConfirmedLength; }
const char* GetOriginalFilename() { return m_szOriginalFilename; }
void SetForce(bool bForce) { m_bForce = bForce; }
void SetRetry(bool bRetry) { m_bRetry = bRetry; }
void LogDebugInfo();
};
#endif
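
WebDownloader is both a Thread and a Subject; UrlDownloader at the top of this compare subclasses it to pick up extra information from the response headers. A hedged sketch of the basic call pattern, using only the public interface in this header; driving it synchronously through Run() instead of the thread/observer machinery is a simplification for illustration.

// Hypothetical illustration only -- not part of the diff above.
#include <stdio.h>
#include "WebDownloader.h"

void FetchFileSketch()
{
	WebDownloader* pDownloader = new WebDownloader();
	pDownloader->SetURL("http://indexer.example/getnzb?id=12345");
	pDownloader->SetInfoName("example.nzb");
	pDownloader->SetOutputFilename("/tmp/example.nzb");
	pDownloader->SetRetry(true);   // honour the retry/retry-interval options
	pDownloader->SetForce(false);  // respect the pause state

	// Run() performs the whole retry loop synchronously. A real caller would
	// instead start the thread (a Start() method on the Thread base class is
	// assumed here, it is not shown in this diff) and react to the status
	// changes delivered through the Subject/Observer notification.
	pDownloader->Run();

	if (pDownloader->GetStatus() == WebDownloader::adFinished)
	{
		printf("downloaded %s (original name: %s)\n",
			pDownloader->GetOutputFilename(),
			pDownloader->GetOriginalFilename() ? pDownloader->GetOriginalFilename() : "n/a");
	}
	delete pDownloader;
}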

498
WebServer.cpp Normal file

@@ -0,0 +1,498 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#ifdef WIN32
#include "win32.h"
#endif
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#ifndef WIN32
#include <unistd.h>
#endif
#include "nzbget.h"
#include "WebServer.h"
#include "XmlRpc.h"
#include "Log.h"
#include "Options.h"
#include "Util.h"
extern Options* g_pOptions;
static const char* ERR_HTTP_BAD_REQUEST = "400 Bad Request";
static const char* ERR_HTTP_NOT_FOUND = "404 Not Found";
static const char* ERR_HTTP_SERVICE_UNAVAILABLE = "503 Service Unavailable";
static const int MAX_UNCOMPRESSED_SIZE = 500;
//*****************************************************************
// WebProcessor
WebProcessor::WebProcessor()
{
m_pConnection = NULL;
m_szRequest = NULL;
m_szUrl = NULL;
m_szOrigin = NULL;
}
WebProcessor::~WebProcessor()
{
free(m_szRequest);
free(m_szUrl);
free(m_szOrigin);
}
void WebProcessor::SetUrl(const char* szUrl)
{
m_szUrl = strdup(szUrl);
}
void WebProcessor::Execute()
{
m_bGZip = false;
char szAuthInfo[1024];
szAuthInfo[0] = '\0';
// reading http header
char szBuffer[1024];
int iContentLen = 0;
while (char* p = m_pConnection->ReadLine(szBuffer, sizeof(szBuffer), NULL))
{
if (char* pe = strrchr(p, '\r')) *pe = '\0';
debug("header=%s", p);
if (!strncasecmp(p, "Content-Length: ", 16))
{
iContentLen = atoi(p + 16);
}
if (!strncasecmp(p, "Authorization: Basic ", 21))
{
char* szAuthInfo64 = p + 21;
if (strlen(szAuthInfo64) > sizeof(szAuthInfo))
{
error("Invalid-request: auth-info too big");
return;
}
szAuthInfo[WebUtil::DecodeBase64(szAuthInfo64, 0, szAuthInfo)] = '\0';
}
if (!strncasecmp(p, "Accept-Encoding: ", 17))
{
m_bGZip = strstr(p, "gzip");
}
if (!strncasecmp(p, "Origin: ", 8))
{
m_szOrigin = strdup(p + 8);
}
if (*p == '\0')
{
break;
}
}
debug("URL=%s", m_szUrl);
debug("Authorization=%s", szAuthInfo);
if (m_eHttpMethod == hmPost && iContentLen <= 0)
{
error("Invalid-request: content length is 0");
return;
}
if (m_eHttpMethod == hmOptions)
{
SendOptionsResponse();
return;
}
// remove subfolder "nzbget" from the path (if exists)
// http://localhost:6789/nzbget/username:password/jsonrpc -> http://localhost:6789/username:password/jsonrpc
if (!strncmp(m_szUrl, "/nzbget/", 8))
{
char* sz_OldUrl = m_szUrl;
m_szUrl = strdup(m_szUrl + 7);
free(sz_OldUrl);
}
// http://localhost:6789/nzbget -> http://localhost:6789
if (!strcmp(m_szUrl, "/nzbget"))
{
char szRedirectURL[1024];
snprintf(szRedirectURL, 1024, "%s/", m_szUrl);
szRedirectURL[1024-1] = '\0';
SendRedirectResponse(szRedirectURL);
return;
}
// authorization via URL in format:
// http://localhost:6789/username:password/jsonrpc
char* pauth1 = strchr(m_szUrl + 1, ':');
char* pauth2 = strchr(m_szUrl + 1, '/');
if (pauth1 && pauth1 < pauth2)
{
char* pstart = m_szUrl + 1;
int iLen = 0;
char* pend = strchr(pstart + 1, '/');
if (pend)
{
iLen = (int)(pend - pstart < (int)sizeof(szAuthInfo) - 1 ? pend - pstart : (int)sizeof(szAuthInfo) - 1);
}
else
{
iLen = strlen(pstart);
}
strncpy(szAuthInfo, pstart, iLen);
szAuthInfo[iLen] = '\0';
char* sz_OldUrl = m_szUrl;
m_szUrl = strdup(pend);
free(sz_OldUrl);
}
debug("Final URL=%s", m_szUrl);
if (strlen(g_pOptions->GetControlPassword()) > 0 &&
!(strlen(g_pOptions->GetAuthorizedIP()) > 0 && IsAuthorizedIP(m_pConnection->GetRemoteAddr())))
{
if (strlen(szAuthInfo) == 0)
{
SendAuthResponse();
return;
}
// Authorization
char* pw = strchr(szAuthInfo, ':');
if (pw) *pw++ = '\0';
if ((strlen(g_pOptions->GetControlUsername()) > 0 && strcmp(szAuthInfo, g_pOptions->GetControlUsername())) ||
strcmp(pw, g_pOptions->GetControlPassword()))
{
warn("Request received on port %i from %s, but username or password invalid (%s:%s)",
g_pOptions->GetControlPort(), m_pConnection->GetRemoteAddr(), szAuthInfo, pw);
SendAuthResponse();
return;
}
}
if (m_eHttpMethod == hmPost)
{
// reading http body (request content)
m_szRequest = (char*)malloc(iContentLen + 1);
m_szRequest[iContentLen] = '\0';
if (!m_pConnection->Recv(m_szRequest, iContentLen))
{
error("Invalid-request: could not read data");
return;
}
debug("Request=%s", m_szRequest);
}
debug("request received from %s", m_pConnection->GetRemoteAddr());
Dispatch();
}
bool WebProcessor::IsAuthorizedIP(const char* szRemoteAddr)
{
const char* szRemoteIP = m_pConnection->GetRemoteAddr();
// split option AuthorizedIP into tokens and check each token
bool bAuthorized = false;
char* szAuthorizedIP = strdup(g_pOptions->GetAuthorizedIP());
char* saveptr;
char* szIP = strtok_r(szAuthorizedIP, ",;", &saveptr);
while (szIP)
{
szIP = Util::Trim(szIP);
if (szIP[0] != '\0' && !strcmp(szIP, szRemoteIP))
{
bAuthorized = true;
break;
}
szIP = strtok_r(NULL, ",;", &saveptr);
}
free(szAuthorizedIP);
return bAuthorized;
}
void WebProcessor::Dispatch()
{
if (*m_szUrl != '/')
{
SendErrorResponse(ERR_HTTP_BAD_REQUEST);
return;
}
if (XmlRpcProcessor::IsRpcRequest(m_szUrl))
{
XmlRpcProcessor processor;
processor.SetRequest(m_szRequest);
processor.SetHttpMethod(m_eHttpMethod == hmGet ? XmlRpcProcessor::hmGet : XmlRpcProcessor::hmPost);
processor.SetUrl(m_szUrl);
processor.Execute();
SendBodyResponse(processor.GetResponse(), strlen(processor.GetResponse()), processor.GetContentType());
return;
}
if (!g_pOptions->GetWebDir() || strlen(g_pOptions->GetWebDir()) == 0)
{
SendErrorResponse(ERR_HTTP_SERVICE_UNAVAILABLE);
return;
}
if (m_eHttpMethod != hmGet)
{
SendErrorResponse(ERR_HTTP_BAD_REQUEST);
return;
}
// for security reasons we allow only characters "0..9 A..Z a..z . - _ /" in the URLs
// we also don't allow ".." in the URLs
for (char *p = m_szUrl; *p; p++)
{
if (!((*p >= '0' && *p <= '9') || (*p >= 'A' && *p <= 'Z') || (*p >= 'a' && *p <= 'z') ||
*p == '.' || *p == '-' || *p == '_' || *p == '/') || (*p == '.' && p[1] == '.'))
{
SendErrorResponse(ERR_HTTP_NOT_FOUND);
return;
}
}
const char *szDefRes = "";
if (m_szUrl[strlen(m_szUrl)-1] == '/')
{
// default file in directory (if not specified) is "index.html"
szDefRes = "index.html";
}
char disk_filename[1024];
snprintf(disk_filename, sizeof(disk_filename), "%s%s%s", g_pOptions->GetWebDir(), m_szUrl + 1, szDefRes);
disk_filename[sizeof(disk_filename)-1] = '\0';
SendFileResponse(disk_filename);
}
void WebProcessor::SendAuthResponse()
{
const char* AUTH_RESPONSE_HEADER =
"HTTP/1.0 401 Unauthorized\r\n"
"WWW-Authenticate: Basic realm=\"NZBGet\"\r\n"
"Connection: close\r\n"
"Content-Type: text/plain\r\n"
"Server: nzbget-%s\r\n"
"\r\n";
char szResponseHeader[1024];
snprintf(szResponseHeader, 1024, AUTH_RESPONSE_HEADER, Util::VersionRevision());
// Send the response answer
debug("ResponseHeader=%s", szResponseHeader);
m_pConnection->Send(szResponseHeader, strlen(szResponseHeader));
}
void WebProcessor::SendOptionsResponse()
{
const char* OPTIONS_RESPONSE_HEADER =
"HTTP/1.1 200 OK\r\n"
"Connection: close\r\n"
//"Content-Type: plain/text\r\n"
"Access-Control-Allow-Methods: GET, POST, OPTIONS\r\n"
"Access-Control-Allow-Origin: %s\r\n"
"Access-Control-Allow-Credentials: true\r\n"
"Access-Control-Max-Age: 86400\r\n"
"Access-Control-Allow-Headers: Content-Type, Authorization\r\n"
"Server: nzbget-%s\r\n"
"\r\n";
char szResponseHeader[1024];
snprintf(szResponseHeader, 1024, OPTIONS_RESPONSE_HEADER,
m_szOrigin ? m_szOrigin : "",
Util::VersionRevision());
// Send the response answer
debug("ResponseHeader=%s", szResponseHeader);
m_pConnection->Send(szResponseHeader, strlen(szResponseHeader));
}
void WebProcessor::SendErrorResponse(const char* szErrCode)
{
const char* RESPONSE_HEADER =
"HTTP/1.0 %s\r\n"
"Connection: close\r\n"
"Content-Length: %i\r\n"
"Content-Type: text/html\r\n"
"Server: nzbget-%s\r\n"
"\r\n";
warn("Web-Server: %s, Resource: %s", szErrCode, m_szUrl);
char szResponseBody[1024];
snprintf(szResponseBody, 1024, "<html><head><title>%s</title></head><body>Error: %s</body></html>", szErrCode, szErrCode);
int iPageContentLen = strlen(szResponseBody);
char szResponseHeader[1024];
snprintf(szResponseHeader, 1024, RESPONSE_HEADER, szErrCode, iPageContentLen, Util::VersionRevision());
// Send the response answer
m_pConnection->Send(szResponseHeader, strlen(szResponseHeader));
m_pConnection->Send(szResponseBody, iPageContentLen);
}
void WebProcessor::SendRedirectResponse(const char* szURL)
{
const char* REDIRECT_RESPONSE_HEADER =
"HTTP/1.0 301 Moved Permanently\r\n"
"Location: %s\r\n"
"Connection: close\r\n"
"Server: nzbget-%s\r\n"
"\r\n";
char szResponseHeader[1024];
snprintf(szResponseHeader, 1024, REDIRECT_RESPONSE_HEADER, szURL, Util::VersionRevision());
// Send the response answer
debug("ResponseHeader=%s", szResponseHeader);
m_pConnection->Send(szResponseHeader, strlen(szResponseHeader));
}
void WebProcessor::SendBodyResponse(const char* szBody, int iBodyLen, const char* szContentType)
{
const char* RESPONSE_HEADER =
"HTTP/1.1 200 OK\r\n"
"Connection: close\r\n"
"Access-Control-Allow-Methods: GET, POST, OPTIONS\r\n"
"Access-Control-Allow-Origin: %s\r\n"
"Access-Control-Allow-Credentials: true\r\n"
"Access-Control-Max-Age: 86400\r\n"
"Access-Control-Allow-Headers: Content-Type, Authorization\r\n"
"Content-Length: %i\r\n"
"%s" // Content-Type: xxx
"%s" // Content-Encoding: gzip
"Server: nzbget-%s\r\n"
"\r\n";
#ifndef DISABLE_GZIP
char *szGBuf = NULL;
bool bGZip = m_bGZip && iBodyLen > MAX_UNCOMPRESSED_SIZE;
if (bGZip)
{
unsigned int iOutLen = ZLib::GZipLen(iBodyLen);
szGBuf = (char*)malloc(iOutLen);
int iGZippedLen = ZLib::GZip(szBody, iBodyLen, szGBuf, iOutLen);
if (iGZippedLen > 0 && iGZippedLen < iBodyLen)
{
szBody = szGBuf;
iBodyLen = iGZippedLen;
}
else
{
free(szGBuf);
szGBuf = NULL;
bGZip = false;
}
}
#else
bool bGZip = false;
#endif
char szContentTypeHeader[1024];
if (szContentType)
{
snprintf(szContentTypeHeader, 1024, "Content-Type: %s\r\n", szContentType);
}
else
{
szContentTypeHeader[0] = '\0';
}
char szResponseHeader[1024];
snprintf(szResponseHeader, 1024, RESPONSE_HEADER,
m_szOrigin ? m_szOrigin : "",
iBodyLen, szContentTypeHeader,
bGZip ? "Content-Encoding: gzip\r\n" : "",
Util::VersionRevision());
// Send the request answer
m_pConnection->Send(szResponseHeader, strlen(szResponseHeader));
m_pConnection->Send(szBody, iBodyLen);
#ifndef DISABLE_GZIP
free(szGBuf);
#endif
}
void WebProcessor::SendFileResponse(const char* szFilename)
{
debug("serving file: %s", szFilename);
char *szBody;
int iBodyLen;
if (!Util::LoadFileIntoBuffer(szFilename, &szBody, &iBodyLen))
{
SendErrorResponse(ERR_HTTP_NOT_FOUND);
return;
}
// "LoadFileIntoBuffer" adds a trailing NULL, which we don't need here
iBodyLen--;
SendBodyResponse(szBody, iBodyLen, DetectContentType(szFilename));
free(szBody);
}
const char* WebProcessor::DetectContentType(const char* szFilename)
{
if (const char *szExt = strrchr(szFilename, '.'))
{
if (!strcasecmp(szExt, ".css"))
{
return "text/css";
}
else if (!strcasecmp(szExt, ".html"))
{
return "text/html";
}
else if (!strcasecmp(szExt, ".js"))
{
return "application/javascript";
}
else if (!strcasecmp(szExt, ".png"))
{
return "image/png";
}
else if (!strcasecmp(szExt, ".jpeg"))
{
return "image/jpeg";
}
else if (!strcasecmp(szExt, ".gif"))
{
return "image/gif";
}
}
return NULL;
}
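
WebProcessor handles a single HTTP request on an already-accepted connection; the code that listens on the control port, reads the request line and creates the Connection object is not part of this diff. A hedged sketch of the expected call sequence follows; the function and variable names are invented for illustration.

// Hypothetical illustration only -- not part of the diff above.
#include <string.h>
#include "WebServer.h"

void ServeRequestSketch(Connection* pConnection, const char* szMethod, const char* szUrl)
{
	WebProcessor processor;
	processor.SetConnection(pConnection);
	processor.SetUrl(szUrl);

	// The request line ("GET /jsonrpc HTTP/1.1") is parsed by the caller;
	// WebProcessor only needs the resulting method and URL.
	if (!strcmp(szMethod, "POST"))
	{
		processor.SetHttpMethod(WebProcessor::hmPost);
	}
	else if (!strcmp(szMethod, "OPTIONS"))
	{
		processor.SetHttpMethod(WebProcessor::hmOptions);
	}
	else
	{
		processor.SetHttpMethod(WebProcessor::hmGet);
	}

	// Execute() reads the remaining headers (and the body for POST),
	// checks authorization and dispatches to RPC or static file serving.
	processor.Execute();
}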

68
WebServer.h Normal file

@@ -0,0 +1,68 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2012 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
*
*/
#ifndef WEBSERVER_H
#define WEBSERVER_H
#include "Connection.h"
class WebProcessor
{
public:
enum EHttpMethod
{
hmPost,
hmGet,
hmOptions
};
private:
Connection* m_pConnection;
char* m_szRequest;
char* m_szUrl;
EHttpMethod m_eHttpMethod;
bool m_bGZip;
char* m_szOrigin;
void Dispatch();
void SendAuthResponse();
void SendOptionsResponse();
void SendErrorResponse(const char* szErrCode);
void SendFileResponse(const char* szFilename);
void SendBodyResponse(const char* szBody, int iBodyLen, const char* szContentType);
void SendRedirectResponse(const char* szURL);
const char* DetectContentType(const char* szFilename);
bool IsAuthorizedIP(const char* szRemoteAddr);
public:
WebProcessor();
~WebProcessor();
void Execute();
void SetConnection(Connection* pConnection) { m_pConnection = pConnection; }
void SetUrl(const char* szUrl);
void SetHttpMethod(EHttpMethod eHttpMethod) { m_eHttpMethod = eHttpMethod; }
};
#endif

2336
XmlRpc.cpp

File diff suppressed because it is too large.

150
XmlRpc.h

@@ -1,7 +1,7 @@
/*
* This file is part of nzbget
*
* Copyright (C) 2007-2010 Andrei Prygounkov <hugbug@users.sourceforge.net>
* Copyright (C) 2007-2013 Andrey Prygunkov <hugbug@users.sourceforge.net>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -15,7 +15,7 @@
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* $Revision$
* $Date$
@@ -27,19 +27,7 @@
#define XMLRPC_H
#include "Connection.h"
class StringBuilder
{
private:
char* m_szBuffer;
int m_iBufferSize;
int m_iUsedSize;
public:
StringBuilder();
~StringBuilder();
void Append(const char* szStr);
const char* GetBuffer() { return m_szBuffer; }
};
#include "Util.h"
class XmlCommand;
@@ -61,28 +49,28 @@ public:
};
private:
Connection* m_pConnection;
const char* m_szClientIP;
char* m_szRequest;
const char* m_szContentType;
ERpcProtocol m_eProtocol;
EHttpMethod m_eHttpMethod;
char* m_szUrl;
StringBuilder m_cResponse;
void Dispatch();
void SendAuthResponse();
void SendResponse(const char* szResponse, const char* szCallbackFunc, bool bFault);
XmlCommand* CreateCommand(const char* szMethodName);
void MutliCall();
void BuildResponse(const char* szResponse, const char* szCallbackFunc, bool bFault);
public:
XmlRpcProcessor();
~XmlRpcProcessor();
void Execute();
void SetConnection(Connection* pConnection) { m_pConnection = pConnection; }
void SetProtocol(ERpcProtocol eProtocol) { m_eProtocol = eProtocol; }
void SetHttpMethod(EHttpMethod eHttpMethod) { m_eHttpMethod = eHttpMethod; }
void SetUrl(const char* szUrl);
void SetClientIP(const char* szClientIP) { m_szClientIP = szClientIP; }
void SetRequest(char* szRequest) { m_szRequest = szRequest; }
const char* GetResponse() { return m_cResponse.GetBuffer(); }
const char* GetContentType() { return m_szContentType; }
static bool IsRpcRequest(const char* szUrl);
};
class XmlCommand
@@ -96,7 +84,7 @@ protected:
XmlRpcProcessor::ERpcProtocol m_eProtocol;
XmlRpcProcessor::EHttpMethod m_eHttpMethod;
void BuildErrorResponse(int iErrCode, const char* szErrText);
void BuildErrorResponse(int iErrCode, const char* szErrText, ...);
void BuildBoolResponse(bool bOK);
void AppendResponse(const char* szPart);
bool IsJson();
@@ -106,6 +94,7 @@ protected:
bool NextParamAsStr(char** szValueBuf);
const char* BoolToStr(bool bValue);
char* EncodeStr(const char* szStr);
void DecodeStr(char* szStr);
public:
XmlCommand();
@@ -120,119 +109,4 @@ public:
bool GetFault() { return m_bFault; }
};
class ErrorXmlCommand: public XmlCommand
{
private:
int m_iErrCode;
const char* m_szErrText;
public:
ErrorXmlCommand(int iErrCode, const char* szErrText);
virtual void Execute();
};
class PauseUnpauseXmlCommand: public XmlCommand
{
public:
enum EPauseAction
{
paDownload,
paDownload2,
paPostProcess,
paScan
};
private:
bool m_bPause;
EPauseAction m_eEPauseAction;
public:
PauseUnpauseXmlCommand(bool bPause, EPauseAction eEPauseAction);
virtual void Execute();
};
class ShutdownXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class VersionXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class DumpDebugXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class SetDownloadRateXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class StatusXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class LogXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class ListFilesXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class ListGroupsXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class EditQueueXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class DownloadXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class PostQueueXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class WriteLogXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class ScanXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
class HistoryXmlCommand: public XmlCommand
{
public:
virtual void Execute();
};
#endif
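
The XmlRpc.h diff above drops the individual command declarations (presumably moved into XmlRpc.cpp, whose diff is suppressed) while keeping the XmlCommand base class. A hedged sketch of what a new command could look like, using only the members visible in this header; how the method name is wired up in XmlRpcProcessor::CreateCommand() cannot be shown because that code lives in the suppressed file.

// Hypothetical illustration only -- not part of the diff above.
#include "XmlRpc.h"

// A do-nothing RPC command following the pattern of the removed
// per-method classes (StatusXmlCommand, HistoryXmlCommand, ...).
class PingXmlCommand : public XmlCommand
{
public:
	virtual void Execute()
	{
		// BuildBoolResponse() emits either the XML-RPC or the JSON-RPC
		// variant of a boolean result, depending on the active protocol.
		BuildBoolResponse(true);
	}
};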


@@ -6,6 +6,9 @@
/* Define to 1 to not use curses */
#undef DISABLE_CURSES
/* Define to 1 to disable gzip-support */
#undef DISABLE_GZIP
/* Define to 1 to disable smart par-verification and restoration */
#undef DISABLE_PARCHECK
@@ -64,11 +67,17 @@
/* Define to 1 to use OpenSSL library for TLS/SSL-support. */
#undef HAVE_OPENSSL
/* Define to 1 if libpar2 has recent bugfixes-patch (version 2) */
#undef HAVE_PAR2_BUGFIXES_V2
/* Define to 1 if libpar2 supports cancelling (needs a special patch) */
#undef HAVE_PAR2_CANCEL
/* Define to 1 if stat64 is supported */
#undef HAVE_STAT64
/* Define to 1 if you have the <regex.h> header file. */
#undef HAVE_REGEX_H
/* Define to 1 if spinlocks are supported */
#undef HAVE_SPINLOCK
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
@@ -115,6 +124,9 @@
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define to 1 to install an empty signal handler for SIGCHLD */
#undef SIGCHLD_HANDLER
/* Determine what socket length (socklen_t) data type is */
#undef SOCKLEN_T
@@ -123,3 +135,9 @@
/* Version number of package */
#undef VERSION
/* Number of bits in a file offset, on hosts where this is settable. */
#undef _FILE_OFFSET_BITS
/* Define for large files, on AIX-style hosts. */
#undef _LARGE_FILES

5256
configure vendored

File diff suppressed because it is too large.


@@ -2,34 +2,18 @@
# Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(nzbget, 0.7.0, hugbug@users.sourceforge.net)
AM_INIT_AUTOMAKE(nzbget, 0.7.0)
AC_INIT(nzbget, 12.0, hugbug@users.sourceforge.net)
AC_CANONICAL_SYSTEM
AM_INIT_AUTOMAKE(nzbget, 12.0)
AC_CONFIG_SRCDIR([nzbget.cpp])
AC_CONFIG_HEADERS([config.h])
dnl
dnl Architecture check.
dnl
AC_CANONICAL_HOST
dnl
dnl Set default library path, if not specified in environment variable "LIBPREF".
dnl
if test "$LIBPREF" = ""; then
case "$host" in
*-linux*)
LIBPREF="/usr"
;;
*-freebsd*)
LIBPREF="/usr/local"
;;
*-solaris*)
LIBPREF="/usr"
;;
esac
LIBPREF="/usr"
fi
@@ -52,6 +36,7 @@ dnl
dnl Checks for header files.
dnl
AC_CHECK_HEADERS(sys/prctl.h)
AC_CHECK_HEADERS(regex.h)
dnl
@@ -71,10 +56,9 @@ AC_CHECK_FUNC(getopt_long,
dnl
dnl stat64
dnl use 64-Bits for file sizes
dnl
AC_CHECK_FUNC(stat64,
[AC_DEFINE([HAVE_STAT64], 1, [Define to 1 if stat64 is supported])],)
AC_SYS_LARGEFILE
dnl
@@ -158,6 +142,14 @@ if test "$FOUND" = "no"; then
fi
dnl
dnl Check if spinlocks are available
dnl
AC_CHECK_FUNC(pthread_spin_init,
[AC_DEFINE([HAVE_SPINLOCK], 1, [Define to 1 if spinlocks are supported])]
AC_SEARCH_LIBS([pthread_spin_init], [pthread]),)
dnl
dnl Determine what socket length (socklen_t) data type is
dnl
@@ -194,7 +186,6 @@ dnl
AC_ARG_WITH(libxml2_includes,
[AS_HELP_STRING([--with-libxml2-includes=DIR], [libxml2 include directory])],
[CPPFLAGS="${CPPFLAGS} -I${withval}"]
[CFLAGS="${CFLAGS} -I${withval}"]
[INCVAL="yes"],
[INCVAL="no"])
AC_ARG_WITH(libxml2_libraries,
@@ -204,9 +195,8 @@ AC_ARG_WITH(libxml2_libraries,
[LIBVAL="no"])
if test "$INCVAL" = "no" -o "$LIBVAL" = "no"; then
PKG_CHECK_MODULES(libxml2, libxml-2.0,
[LDFLAGS="${LDFLAGS} $libxml2_LIBS"]
[CPPFLAGS="${CPPFLAGS} $libxml2_CFLAGS"]
[CFLAGS="${CFLAGS} $libxml2_CFLAGS"])
[LIBS="${LIBS} $libxml2_LIBS"]
[CPPFLAGS="${CPPFLAGS} $libxml2_CFLAGS"])
fi
AC_CHECK_HEADER(libxml/tree.h,,
AC_MSG_ERROR("libxml2 header files not found"))
@@ -230,7 +220,6 @@ if test "$USECURSES" = "yes"; then
[AS_HELP_STRING([--with-libcurses-includes=DIR], [libcurses include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
CFLAGS="${CFLAGS} -I${INCVAL}"
AC_ARG_WITH(libcurses_libraries,
[AS_HELP_STRING([--with-libcurses-libraries=DIR], [libcurses library directory])],
[LIBVAL="$withval"])
@@ -290,7 +279,7 @@ if test "$ENABLEPARCHECK" = "yes"; then
[LIBVAL="no"])
if test "$INCVAL" = "no" -o "$LIBVAL" = "no"; then
PKG_CHECK_MODULES(libsigc, sigc++-2.0,
[LDFLAGS="${LDFLAGS} $libsigc_LIBS"]
[LIBS="${LIBS} $libsigc_LIBS"]
[CPPFLAGS="${CPPFLAGS} $libsigc_CFLAGS"])
fi
@@ -345,6 +334,32 @@ if test "$ENABLEPARCHECK" = "yes"; then
AC_MSG_RESULT([[yes]])
AC_DEFINE([HAVE_PAR2_CANCEL], 1, [Define to 1 if libpar2 supports cancelling (needs a special patch)]),
AC_MSG_RESULT([[no]]))
dnl
dnl check if libpar2 has recent bugfixes-patch
dnl
AC_MSG_CHECKING(whether libpar2 has recent bugfixes-patch (version 2))
AC_TRY_COMPILE(
[#include <libpar2/par2cmdline.h>]
[#include <libpar2/par2repairer.h>]
[ class Repairer : public Par2Repairer { void test() { BugfixesPatchVersion2(); } }; ],
[],
AC_MSG_RESULT([[yes]])
PAR2PATCHV2=yes
AC_DEFINE([HAVE_PAR2_BUGFIXES_V2], 1, [Define to 1 if libpar2 has recent bugfixes-patch (version 2)]),
AC_MSG_RESULT([[no]])
PAR2PATCHV2=no)
if test "$PAR2PATCHV2" = "no" ; then
AC_ARG_ENABLE(libpar2-bugfixes-check,
[AS_HELP_STRING([--disable-libpar2-bugfixes-check], [do not check if libpar2 has recent bugfixes-patch applied])],
[ PAR2PATCHCHECK=$enableval ],
[ PAR2PATCHCHECK=yes] )
if test "$PAR2PATCHCHECK" = "yes"; then
AC_ERROR([Your version of libpar2 doesn't include the recent bugfixes-patch (version 2, updated Dec 3, 2012). Please patch libpar2 with the patches supplied with NZBGet (see README for details). If you cannot patch libpar2, you can use configure parameter --disable-libpar2-bugfixes-check to suppress the check. Please note however that in this case the program may crash during par-check/repair. The patch is highly recommended!])
fi
fi
else
AC_DEFINE([DISABLE_PARCHECK],1,[Define to 1 to disable smart par-verification and restoration])
fi
@@ -374,12 +389,11 @@ if test "$USETLS" = "yes"; then
[AS_HELP_STRING([--with-libgnutls-includes=DIR], [GnuTLS include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
CFLAGS="${CFLAGS} -I${INCVAL}"
AC_ARG_WITH(libgnutls_libraries,
[AS_HELP_STRING([--with-libgnutls-libraries=DIR], [GnuTLS library directory])],
[LIBVAL="$withval"])
LDFLAGS="${LDFLAGS} -L${LIBVAL}"
AC_CHECK_HEADER(gnutls/gnutls.h,
FOUND=yes
TLSHEADERS=yes,
@@ -404,18 +418,22 @@ if test "$USETLS" = "yes"; then
fi
if test "$TLSLIB" = "OpenSSL" -o "$TLSLIB" = ""; then
INCVAL="${LIBPREF}/include"
LIBVAL="${LIBPREF}/lib"
AC_ARG_WITH(openssl_includes,
[AS_HELP_STRING([--with-openssl-includes=DIR], [OpenSSL include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
CFLAGS="${CFLAGS} -I${INCVAL}"
[CPPFLAGS="${CPPFLAGS} -I${withval}"]
[INCVAL="yes"],
[INCVAL="no"])
AC_ARG_WITH(openssl_libraries,
[AS_HELP_STRING([--with-openssl-libraries=DIR], [OpenSSL library directory])],
[LIBVAL="$withval"])
LDFLAGS="${LDFLAGS} -L${LIBVAL}"
[LDFLAGS="${LDFLAGS} -L${withval}"]
[LIBVAL="yes"],
[LIBVAL="no"])
if test "$INCVAL" = "no" -o "$LIBVAL" = "no"; then
PKG_CHECK_MODULES(openssl, openssl,
[LIBS="${LIBS} $openssl_LIBS"]
[CPPFLAGS="${CPPFLAGS} $openssl_CFLAGS"])
fi
AC_CHECK_HEADER(openssl/ssl.h,
FOUND=yes
TLSHEADERS=yes,
@@ -424,8 +442,10 @@ if test "$USETLS" = "yes"; then
AC_MSG_ERROR([Couldn't find OpenSSL headers (ssl.h)])
fi
if test "$FOUND" = "yes"; then
AC_SEARCH_LIBS([SSL_library_init], [ssl],
FOUND=yes,
AC_SEARCH_LIBS([CRYPTO_set_locking_callback], [crypto],
AC_SEARCH_LIBS([SSL_library_init], [ssl],
FOUND=yes,
FOUND=no),
FOUND=no)
if test "$FOUND" = "no" -a "$TLSLIB" = "OpenSSL"; then
AC_MSG_ERROR([Couldn't find OpenSSL library])
@@ -449,6 +469,53 @@ else
fi
dnl
dnl checks for zlib includes and libraries.
dnl
AC_MSG_CHECKING(whether to use gzip)
AC_ARG_ENABLE(gzip,
[AS_HELP_STRING([--disable-gzip], [disable gzip-compression/decompression (removes dependency from zlib-library)])],
[USEZLIB=$enableval],
[USEZLIB=yes] )
AC_MSG_RESULT($USEZLIB)
if test "$USEZLIB" = "yes"; then
INCVAL="${LIBPREF}/include"
LIBVAL="${LIBPREF}/lib"
AC_ARG_WITH(zlib_includes,
[AS_HELP_STRING([--with-zlib-includes=DIR], [zlib include directory])],
[INCVAL="$withval"])
CPPFLAGS="${CPPFLAGS} -I${INCVAL}"
AC_ARG_WITH(zlib_libraries,
[AS_HELP_STRING([--with-zlib-libraries=DIR], [zlib library directory])],
[LIBVAL="$withval"])
LDFLAGS="${LDFLAGS} -L${LIBVAL}"
AC_CHECK_HEADER(zlib.h,,
AC_MSG_ERROR("zlib header files not found"))
AC_SEARCH_LIBS([deflateBound], [z], ,
AC_MSG_ERROR("zlib library not found"))
else
AC_DEFINE([DISABLE_GZIP],1,[Define to 1 to disable gzip-support])
fi
dnl
dnl Some Linux systems require an empty signal handler for SIGCHLD
dnl in order for exit codes to be correctly delivered to parent process.
dnl Some 32-Bit BSD systems however may not function properly if the handler is installed.
dnl The default behavior is to install the handler.
dnl
AC_MSG_CHECKING(whether to use an empty SIGCHLD handler)
AC_ARG_ENABLE(sigchld-handler,
[AS_HELP_STRING([--disable-sigchld-handler], [do not use sigchld-handler (disabling may be necessary on 32-Bit BSD)])],
[SIGCHLDHANDLER=$enableval],
[SIGCHLDHANDLER=yes])
AC_MSG_RESULT($SIGCHLDHANDLER)
if test "$SIGCHLDHANDLER" = "yes"; then
AC_DEFINE([SIGCHLD_HANDLER], 1, [Define to 1 to install an empty signal handler for SIGCHLD])
fi
dnl
dnl Debugging. Default: no
dnl
@@ -545,15 +612,5 @@ dnl
fi
dnl Substitute flags.
AC_SUBST(CFLAGS)
AC_SUBST(CPPFLAGS)
AC_SUBST(LDFLAGS)
AC_SUBST(CXXFLAGS)
AC_SUBST(TAR)
AC_SUBST(AR)
AC_SUBST(ADDSRCS)
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
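
For context on the SIGCHLD check in the configure script above: the SIGCHLD_HANDLER define only switches on the installation of a do-nothing SIGCHLD handler in the C++ code (the installation site itself is not shown in this diff). A minimal sketch of what such a handler amounts to; the function names are invented for illustration.

// Hypothetical illustration only -- not part of the diff above.
#include <signal.h>
#include <string.h>

#ifdef SIGCHLD_HANDLER
// An empty handler (unlike SIG_IGN) keeps the kernel from auto-reaping
// children, so the exit code of a spawned child process can still be
// collected with waitpid() by the parent.
static void SignalChildHandler(int iSignal)
{
	// intentionally empty
}

static void InstallChildHandlerSketch()
{
	struct sigaction sa;
	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = SignalChildHandler;
	sigaction(SIGCHLD, &sa, NULL);
}
#endif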


@@ -1,7 +1,9 @@
diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.cpp
diff -aud -U 5 ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.cpp
--- ../libpar2-0.2-original/par2repairer.cpp 2006-01-20 18:25:20.000000000 +0100
+++ ../libpar2-0.2/par2repairer.cpp 2008-02-06 12:02:53.226050300 +0100
@@ -78,6 +78,7 @@
+++ ../libpar2-0.2/par2repairer.cpp 2012-11-30 14:23:31.000000000 +0100
@@ -76,10 +76,11 @@
++sf;
}
delete mainpacket;
delete creatorpacket;
@@ -9,7 +11,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
@@ -1261,7 +1262,7 @@
Result Par2Repairer::PreProcess(const CommandLine &commandline)
{
@@ -1259,11 +1260,11 @@
string path;
string name;
DiskFile::SplitFilename(filename, path, name);
cout << "Target: \"" << name << "\" - missing." << endl;
@@ -18,12 +24,37 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
}
@@ -1804,7 +1805,7 @@
++sf;
}
@@ -1802,11 +1803,11 @@
<< "\" - no data found."
<< endl;
}
}
}
- sig_done.emit(name,count,sourcefile->GetVerificationPacket()->BlockCount());
+ sig_done.emit(name,count, sourcefile->GetVerificationPacket() ? sourcefile->GetVerificationPacket()->BlockCount() : 0);
+ sig_done.emit(name,count, count>0 && sourcefile->GetVerificationPacket() ? sourcefile->GetVerificationPacket()->BlockCount() : 0);
sig_progress.emit(1000.0);
return true;
}
// Find out how much data we have found
diff -aud -U 5 ../libpar2-0.2-original/par2repairer.h ../libpar2-0.2/par2repairer.h
--- ../libpar2-0.2-original/par2repairer.h 2006-01-20 00:38:27.000000000 +0100
+++ ../libpar2-0.2/par2repairer.h 2012-11-30 14:24:46.000000000 +0100
@@ -34,10 +34,15 @@
sigc::signal<void, std::string> sig_filename;
sigc::signal<void, double> sig_progress;
sigc::signal<void, ParHeaders*> sig_headers;
sigc::signal<void, std::string, int, int> sig_done;
+ // This method allows to determine whether libpar2 includes the patches
+ // ("libpar2-0.2-bugfixes.patch") submitted to libpar2 project.
+ // Use the method in configure scripts for detection.
+ void BugfixesPatchVersion2() { }
+
protected:
// Steps in verifying and repairing files:
// Load packets from the specified file
bool LoadPacketsFromFile(string filename);


@@ -1,7 +1,9 @@
diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.cpp
--- ../libpar2-0.2-original/par2repairer.cpp 2008-10-26 19:54:33.000000000 +0100
+++ ../libpar2-0.2/par2repairer.cpp 2008-10-29 10:24:48.000000000 +0100
@@ -52,6 +52,8 @@
diff -aud -U 5 ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.cpp
--- ../libpar2-0.2-original/par2repairer.cpp 2012-12-03 10:47:04.000000000 +0100
+++ ../libpar2-0.2/par2repairer.cpp 2012-12-03 10:48:13.000000000 +0100
@@ -50,10 +50,12 @@
outputbuffer = 0;
noiselevel = CommandLine::nlNormal;
headers = new ParHeaders;
alreadyloaded = false;
@@ -10,7 +12,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
Par2Repairer::~Par2Repairer(void)
@@ -406,6 +408,10 @@
{
delete [] (u8*)inputbuffer;
@@ -404,10 +406,14 @@
{
cout << "Loading: " << newfraction/10 << '.' << newfraction%10 << "%\r" << flush;
progress = offset;
sig_progress.emit(newfraction);
@@ -21,7 +27,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
}
@@ -584,6 +590,11 @@
// Attempt to read the next packet header
PACKET_HEADER header;
@@ -582,10 +588,15 @@
if (noiselevel > CommandLine::nlQuiet)
cout << "No new packets found" << endl;
delete diskfile;
}
@@ -33,7 +43,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
return true;
}
@@ -833,9 +844,17 @@
// Finish loading a recovery packet
bool Par2Repairer::LoadRecoveryPacket(DiskFile *diskfile, u64 offset, PACKET_HEADER &header)
@@ -831,26 +842,42 @@
// Load packets from each file that was found
for (list<string>::const_iterator s=files->begin(); s!=files->end(); ++s)
{
LoadPacketsFromFile(*s);
@@ -51,7 +65,10 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
{
@@ -846,9 +865,17 @@
string wildcard = name.empty() ? "*.PAR2" : name + ".*.PAR2";
list<string> *files = DiskFile::FindFiles(path, wildcard);
// Load packets from each file that was found
for (list<string>::const_iterator s=files->begin(); s!=files->end(); ++s)
{
LoadPacketsFromFile(*s);
@@ -69,7 +86,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
return true;
@@ -866,9 +893,18 @@
}
@@ -864,13 +891,22 @@
// If the filename contains ".par2" anywhere
if (string::npos != filename.find(".par2") ||
string::npos != filename.find(".PAR2"))
{
LoadPacketsFromFile(filename);
@@ -88,7 +109,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
return true;
}
@@ -1210,6 +1246,11 @@
// Check that the packets are consistent and discard any that are not
bool Par2Repairer::CheckPacketConsistency(void)
@@ -1208,10 +1244,15 @@
// Start verifying the files
sf = sortedfiles.begin();
while (sf != sortedfiles.end())
{
@@ -100,7 +125,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
// Do we have a source file
Par2RepairerSourceFile *sourcefile = *sf;
@@ -1562,6 +1603,10 @@
// What filename does the file use
string filename = sourcefile->TargetFileName();
@@ -1560,10 +1601,14 @@
if (oldfraction != newfraction)
{
cout << "Scanning: \"" << shortname << "\": " << newfraction/10 << '.' << newfraction%10 << "%\r" << flush;
sig_progress.emit(newfraction);
@@ -111,7 +140,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
}
}
@@ -1651,6 +1696,11 @@
// If we fail to find a match, it might be because it was a duplicate of a block
// that we have already found.
@@ -1649,10 +1694,15 @@
return false;
}
}
}
@@ -123,7 +156,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
// Get the Full and 16k hash values of the file
filechecksummer.GetFileHashes(hashfull, hash16k);
@@ -2291,10 +2341,19 @@
// Did we make any matches at all
if (count > 0)
@@ -2289,14 +2339,23 @@
if (oldfraction != newfraction)
{
cout << "Repairing: " << newfraction/10 << '.' << newfraction%10 << "%\r" << flush;
sig_progress.emit(newfraction);
@@ -143,7 +180,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
++inputblock;
++inputindex;
}
@@ -2348,9 +2407,18 @@
}
else
@@ -2346,13 +2405,22 @@
if (oldfraction != newfraction)
{
cout << "Processing: " << newfraction/10 << '.' << newfraction%10 << "%\r" << flush;
sig_progress.emit(newfraction);
@@ -162,7 +203,11 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
++copyblock;
++inputblock;
}
@@ -2362,6 +2430,11 @@
}
@@ -2360,10 +2428,15 @@
if (lastopenfile != NULL)
{
lastopenfile->Close();
}
@@ -174,10 +219,14 @@ diff -aud ../libpar2-0.2-original/par2repairer.cpp ../libpar2-0.2/par2repairer.c
if (noiselevel > CommandLine::nlQuiet)
cout << "Writing recovered data\r";
diff -aud ../libpar2-0.2-original/par2repairer.h ../libpar2-0.2/par2repairer.h
--- ../libpar2-0.2-original/par2repairer.h 2006-01-20 00:38:27.000000000 +0100
+++ ../libpar2-0.2/par2repairer.h 2008-10-26 19:01:08.000000000 +0100
@@ -183,6 +183,7 @@
// For each output block that has been recomputed
vector<DataBlock*>::iterator outputblock = outputblocks.begin();
diff -aud -U 5 ../libpar2-0.2-with-bugfixes-patch/par2repairer.h ../libpar2-0.2/par2repairer.h
--- ../libpar2-0.2-original/par2repairer.h 2012-12-03 10:47:04.000000000 +0100
+++ ../libpar2-0.2/par2repairer.h 2012-12-03 10:48:13.000000000 +0100
@@ -186,8 +186,9 @@
u64 progress; // How much data has been processed.
u64 totaldata; // Total amount of data to be processed.
u64 totalsize; // Total data size


@@ -4,7 +4,7 @@ rem
rem Batch file to start nzbget shell
rem
rem Copyright (C) 2009 orbisvicis <orbisvicis@users.sourceforge.net>
rem Copyright (C) 2009 Andrei Prygounkov <hugbug@users.sourceforge.net>
rem Copyright (C) 2009 Andrey Prygunkov <hugbug@users.sourceforge.net>
rem
rem This program is free software; you can redistribute it and/or modify
rem it under the terms of the GNU General Public License as published by
@@ -18,7 +18,7 @@ rem GNU General Public License for more details.
rem
rem You should have received a copy of the GNU General Public License
rem along with this program; if not, write to the Free Software
-rem Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+rem Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
rem
rem ####################### Usage instructions #######################

nzbget.conf (Normal file, 1446 lines)
File diff suppressed because it is too large

@@ -1,866 +0,0 @@
# Sample configuration file for nzbget
#
# On POSIX put this file to one of the following locations:
# ~/.nzbget
# /etc/nzbget.conf
# /usr/etc/nzbget.conf
# /usr/local/etc/nzbget.conf
# /opt/etc/nzbget.conf
#
# On Windows put this file in program's directory.
#
# You can also put the file into any location, if you specify the path to it
# using switch "-c", e.g:
# nzbget -c /home/user/myconfig.txt
# For quick start change the option MAINDIR and configure one news-server
##############################################################################
### PATHS ###
# Root directory for all related tasks.
#
# MAINDIR is a variable and therefore starts with "$".
# On POSIX you can use "~" as alias for home directory (e.g. "~/download").
# On Windows use absolute paths (e.g. "C:\Download").
$MAINDIR=~/download
# Destination-directory to store the downloaded files.
DestDir=${MAINDIR}/dst
# Directory to monitor for incoming nzb-jobs.
#
# Can have subdirectories.
# A nzb-file queued from a subdirectory will be automatically assigned to
# category with the directory-name.
NzbDir=${MAINDIR}/nzb
# Directory to store download queue.
QueueDir=${MAINDIR}/queue
# Directory to store temporary files.
TempDir=${MAINDIR}/tmp
# Lock-file for daemon-mode, POSIX only.
#
# If the option is not empty, nzbget creates the file and writes process-id
# (PID) into it. That info can be used in shell scripts.
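#
# For instance, a shell script could use the PID to test whether the daemon
# is still running (a minimal sketch, assuming the default path below):
#
#   if kill -0 "$(cat /tmp/nzbget.lock)" 2>/dev/null; then
#       echo "nzbget daemon is running"
#   fi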
LockFile=/tmp/nzbget.lock
# Where to store log file, if it needs to be created.
#
# NOTE: See also option <CreateLog>.
LogFile=${DestDir}/nzbget.log
##############################################################################
### NEWS-SERVERS ###
# This section defines which servers nzbget should connect to.
# Level of newsserver (0-99).
#
# The servers will be ordered by their level, i.e. nzbget will at
# first try to download an article from the level-0-server.
# If that server fails, nzbget proceeds with the level-1-server, etc.
# A good idea is surely to put your major download-server at level 0
# and your fill-servers at levels 1,2,...
#
# NOTE: Do not leave out a level in your server-list and start with level 0.
#
# NOTE: Several servers with the same level may be used, they will have
# the same priority.
Server1.Level=0
# Host name of newsserver.
Server1.Host=my1.newsserver.com
# Port to connect to (1-65535).
Server1.Port=119
# User name to use for authentication.
Server1.Username=user
# Password to use for authentication.
Server1.Password=pass
# Server requires "Join Group"-command (yes, no).
Server1.JoinGroup=yes
# Encrypted server connection (TLS/SSL) (yes, no).
Server1.Encryption=no
# Maximal number of simultaneous connections to this server (0-999).
Server1.Connections=4
# Second server, on level 0.
#Server2.Level=0
#Server2.Host=my2.newsserver.com
#Server2.Port=119
#Server2.Username=me
#Server2.Password=mypass
#Server2.JoinGroup=yes
#Server2.Connections=4
# Third server, on level 1.
#Server3.Level=1
#Server3.Host=fills.newsserver.com
#Server3.Port=119
#Server3.Username=me2
#Server3.Password=mypass2
#Server3.JoinGroup=yes
#Server3.Connections=1
##############################################################################
### PERMISSIONS ###
# User name for daemon-mode, POSIX only.
#
# Set the user that the daemon normally runs as (POSIX in daemon-mode only).
# Set $MAINDIR with an absolute path to be sure where it will write.
# This allows nzbget daemon to be launched in rc.local (at boot), and
# download items as a specific user id.
#
# NOTE: This option has effect only if the program was started from
# root-account, otherwise it is ignored and the daemon runs under
# current user id.
DaemonUserName=root
# Specify default umask (affects file permissions) for newly created
# files, POSIX only (000-1000).
#
# The value should be written in octal form (the same as for "umask" shell
# command).
# Empty value or value "1000" disables the setting of umask-mode; current
# umask-mode (set via shell) is used in this case.
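#
# Example (standard umask behaviour): the value "0022" typically results in
# new files being created with permissions 644 (rw-r--r--) and new
# directories with 755 (rwxr-xr-x).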
UMask=1000
##############################################################################
### INCOMING NZBS ###
# Create subdirectory with category-name in destination-directory (yes, no).
AppendCategoryDir=yes
# Create subdirectory with nzb-filename in destination-directory (yes, no).
AppendNzbDir=yes
# How often incoming-directory (option <NzbDir>) must be checked for new
# nzb-files (seconds).
#
# Value "0" disables the check.
NzbDirInterval=5
# Minimum age of an nzb-file before it is loaded into queue (seconds).
#
# Nzbget checks that the nzb-file has not been modified during the last few
# seconds, as defined by this option. That safety interval prevents the loading
# of files which are not yet completely saved to disk, for example because they
# are still being downloaded in a web-browser.
NzbDirFileAge=60
# Automatic merging of nzb-files with the same filename (yes, no).
#
# A typical scenario: you put nzb-file into incoming directory, nzbget adds
# file to queue. You find out, that the file doesn't have par-files. You
# find required par-files, put nzb-file with the par-files into incoming
# directory, nzbget adds it to queue as a separate group. You want the second
# file to be merged with the first for parchecking to work properly. With
# option "MergeNzb" nzbget can merge files automatically. You only need to
# save the second file under the same filename as the first one.
MergeNzb=no
# Set path to program, that must be executed before any file in incoming
# directory (option <NzbDir>) is processed.
#
# Example: "NzbProcess=~/nzbprocess.sh".
#
# That program can unpack archives which were put in incoming directory, make
# filename cleanup, assign category and post-processing parameters to nzb-file
# or do something else.
#
# NZBGet passes following arguments to nzbprocess-program as environment
# variables:
# NZBNP_DIRECTORY - path to directory, where file is located. It is a directory
# specified by the option <NzbDir> or a subdirectory;
# NZBNP_FILENAME - name of file to be processed;
#
# In addition to these arguments nzbget passes all
# nzbget.conf-options to postprocess-program as environment variables. These
# variables have prefix "NZBOP_" and are written in UPPER CASE. For example
# option "ParRepair" is passed as environment variable "NZBOP_PARREPAIR".
# The dots in option names are replaced with underscores, for example
# "SERVER1_HOST". For options with predefined possible values (yes/no, etc.)
# the values are passed always in lower case.
#
# The nzbprocess-script can assign category or post-processing parameters
# to current nzb-file by printing special messages into standard output
# (which is processed by NZBGet).
#
# To assign category use following syntax:
# echo "[NZB] CATEGORY=my category";
#
# To assign post-processing parameters:
# echo "[NZB] NZBPR_myvar=my value";
#
# The prefix "NZBPR_" will be removed. In this example a post-processing
# parameter with name "myvar" and value "my value" will be associated
# with nzb-file.
#
# The nzbprocess-script can delete the processed file, rename it or move it
# somewhere else.
# After the calling of the script the file will be either added to queue
# (if it was an nzb-file) or renamed by adding the extension ".processed".
#
# NOTE: Files with extensions ".processed", ".queued" and ".error" are skipped
# during the directory scanning.
#
# NOTE: Files with extension ".nzb_processed" are not passed to
# NzbProcess-script before adding to queue. This feature allows
# NzbProcess-script to prevent the scanning of nzb-files extracted from
# archives, if they were already processed by the script.
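#
# A minimal sketch of such a script (hypothetical; assumes it is saved as
# ~/nzbprocess.sh and made executable; the filename pattern and category
# name are just examples):
#
#   #!/bin/sh
#   # Example rule: assign category "tv" to nzb-files whose name contains "HDTV"
#   case "$NZBNP_FILENAME" in
#       *HDTV*) echo "[NZB] CATEGORY=tv";;
#   esac
#   exit 0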
NzbProcess=
# Check for duplicate files (yes, no).
#
# If this option is enabled the program performs the following checks when
# adding a new nzb-file:
# 1) whether the nzb-file contains duplicate entries. This check aims at
# detecting reposted files (if the first file was not fully uploaded);
# if the program finds two files with identical names, only the
# biggest of them will be added to queue;
# 2) whether the download queue already contains a file with the same name;
# 3) whether the destination file on disk already exists.
# In the last two cases the file will not be added to queue;
#
# If this option is disabled, all files are downloaded and duplicate files
# are renamed to "filename_duplicate1".
# Existing files are never deleted or overwritten.
DupeCheck=no
##############################################################################
### DOWNLOAD QUEUE ###
# Save download queue to disk (yes, no).
#
# This allows to reload it on next start.
SaveQueue=yes
# Reload download queue on start, if it exists (yes, no).
ReloadQueue=yes
# Reload Post-processor-queue on start, if it exists (yes, no).
#
# For this option to work the options <SaveQueue> and <ReloadQueue> must
# be also enabled.
ReloadPostQueue=yes
# Reuse articles saved in temp-directory from previous program start (yes, no).
#
# This allows to continue download of file, if program was exited before
# the file was completed.
ContinuePartial=yes
# Visibly rename broken files on download appending "_broken" (yes, no).
#
# Do not activate this option if par-check is enabled.
RenameBroken=no
# Decode articles (yes, no).
#
# yes - decode articles using internal decoder (supports yEnc and UU formats);
# no - the articles will not be decoded and joined. External programs
# (like "uudeview") can be used to decode and join downloaded articles.
# Also useful for debugging to look at article's source text.
Decode=yes
# Write decoded articles directly into destination output file (yes, no).
#
# With this option enabled the program at first creates the output
# destination file with required size (total size of all articles),
# then writes the decoded articles on the fly directly to the file
# without creating any temporary files, even for decoded articles.
# This may result in a major performance improvement, but this highly
# depends on OS and file system.
#
# Can improve performance on very fast internet connections,
# but you need to test if it works in your case.
#
# INFO: Tests showed that on Linux with an EXT3-partition activating
# this option results in up to 20% better performance, but on Windows with NTFS
# or on Linux with FAT32-partitions the performance decreased.
# The possible reason is that on EXT3-partition Linux can create large files
# very fast (if the content of file does not need to be initialized),
# but Windows on NTFS-partition and also Linux on FAT32-partition need to
# initialize created large file with nulls, resulting in a big performance
# degradation.
#
# NOTE: for testing try to download a few big files (with total size 500-1000MB)
# and measure required time. Do not rely on the program's speed indicator.
#
# NOTE: if both options <DirectWrite> and <ContinuePartial> are enabled,
# the program will still create empty articles-files in temp-directory. They
# are used to continue download of file on a next program start. To minimize
# disk-io it is recommended to disable option <ContinuePartial>, if
# <DirectWrite> is enabled. Especially on a fast connections (where you
# would want to activate <DirectWrite>) it should not be a problem to
# redownload an interrupted file.
DirectWrite=no
# Check CRC of downloaded and decoded articles (yes, no).
#
# Normally this option should be enabled for better detecting of download
# errors. However checking of CRC needs about the same CPU time as
# decoding of articles. On fast connections with slow CPUs disabling of
# CRC-check may slightly improve performance (if CPU is the limiting factor).
CrcCheck=yes
# How many retries should be attempted if a download error occurs (0-99).
Retries=4
# Set the interval between retries (seconds).
RetryInterval=10
# Redownload article if CRC-check fails (yes, no).
#
# Helps to minimize number of broken files, but may be effective
# only if you have multiple download servers (even from the same provider
# but from different locations (e.g. europe, usa)).
# In any case the option increases your traffic.
# For slow connections loading of extra par-blocks may be more effective.
# The option <CrcCheck> must be enabled for option <RetryOnCrcError> to work.
RetryOnCrcError=no
# Set connection timeout (seconds).
ConnectionTimeout=60
# Timeout until a download-thread should be killed (seconds).
#
# This can help on hanging downloads, but is dangerous.
# Do not use small values!
TerminateTimeout=600
# Set the (approximate) maximum number of allowed threads (0-999).
#
# Sometimes under certain circumstances the program may create way too many
# download threads. Most of them are in wait-state. That is not bad,
# but threads are usually a limited resource. If a program creates too many
# of them, the operating system may kill it. The option <ThreadLimit> prevents that.
#
# NOTE: the number of threads is not the same as the number of connections
# opened to NNTP-servers. Do not use the option <ThreadLimit> to limit the
# number of connections. Use the appropriate options <ServerX.Connections>
# instead.
#
# NOTE: the actual number of created threads can be slightly larger than
# defined by the option. Important threads may be created even if the
# number of threads is exceeded. The option prevents only the creation of
# additional download threads.
#
# NOTE: in most cases you should leave the default value "100" unchanged.
# However you may increase that value if you need more than 90 connections
# (that's very unlikely) or decrease the value if the OS does not allow so
# many threads. But most OSes should not have problems with 100 threads.
ThreadLimit=100
# Set the maximum download rate on program start (kilobytes/sec).
#
# Value "0" means no speed control.
# The download rate can be changed later via remote calls.
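#
# For example, depending on the version the command-line client can change
# the rate at run-time with a call like "nzbget -R 100" (100 KB/s); check
# "nzbget -h" for the remote commands supported by your build.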
DownloadRate=0
# Set the size of memory buffer used by writing the articles (bytes).
#
# Bigger values decrease disk-io, but increase memory usage.
# Value "0" causes an OS-dependent default value to be used.
# With value "-1" (which means "max/auto") the program sets the size of
# buffer according to the size of current article (typically less than 500K).
#
# NOTE: the value must be written in bytes, do not use postfixes "K" or "M".
#
# NOTE: to calculate the memory usage multiply WriteBufferSize by max number
# of connections, configured in section "NEWS-SERVERS".
#
# NOTE: a typical article's size does not exceed 500000 bytes, so using bigger values
# (like several megabytes) will just waste memory.
#
# NOTE: for desktop computers with a large amount of memory the value "-1"
# (max/auto) is recommended, but for computers with very little memory
# (routers, NAS) the value "0" (default OS-dependent size) could be a better
# alternative.
#
# NOTE: write-buffer is managed by OS (system libraries) and therefore
# the effect of the option is highly OS-dependent.
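#
# Example of the calculation above: "WriteBufferSize=1000000" with, say,
# 8 server connections in total can use up to about 8 MB
# (8 x 1000000 bytes) for write-buffers.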
WriteBufferSize=0
# Pause if disk space gets below this value (megabytes).
#
# Value "0" disables the check.
# Only the disk space on the drive with <DestDir> is checked.
# The drive with <TempDir> is not checked.
DiskSpace=250
# Delete already downloaded files from disk, if the download of nzb-file was
# cancelled (nzb-file was deleted from queue) (yes, no).
#
# NOTE: nzbget does not delete files in the case that all remaining files in
# queue are par-files. That prevents the accidental deletion if the option
# <ParCleanupQueue> is disabled or if the program was interrupted during
# parcheck and later restarted without reloading of post queue (option
# <ReloadPostQueue> disabled).
DeleteCleanupDisk=no
# Keep the history of downloaded nzb-files (days).
#
# Value "0" disables the history.
#
# NOTE: when a collection having paused files is added to history, the remaining
# files are moved from download queue to a list of parked files. These are files
# which could be required later if the collection is moved back to
# download queue to download the remaining files. The parked files still
# consume some amount of memory and disk space. If the collection was downloaded
# and successfully par-checked or postprocessed it is recommended to discard the
# unneeded parked files before adding the collection to history. For par2-files
# that can be achieved with the option <ParCleanupQueue>.
KeepHistory=1
##############################################################################
### LOGGING ###
# Create log file (yes, no).
CreateLog=yes
# Delete log file upon server start (only in server-mode) (yes, no).
ResetLog=no
# How error messages must be printed (screen, log, both, none).
ErrorTarget=both
# How warning messages must be printed (screen, log, both, none).
WarningTarget=both
# How info messages must be printed (screen, log, both, none).
InfoTarget=both
# How detail messages must be printed (screen, log, both, none).
DetailTarget=both
# How debug messages must be printed (screen, log, both, none).
#
# Debug-messages can be printed only if the program was compiled in
# debug-mode: "./configure --enable-debug".
DebugTarget=both
# Set the default message-kind for output received from process-scripts
# (PostProcess, NzbProcess, TaskX.Process) (none, detail, info, warning,
# error, debug).
#
# NZBGet checks if the line written by the script to stdout or stderr starts
# with special character-sequence, determining the message-kind, e.g.:
# [INFO] bla-bla.
# [DETAIL] bla-bla.
# [WARNING] bla-bla.
# [ERROR] bla-bla.
# [DEBUG] bla-bla.
#
# If the message-kind was detected the text is added to log with detected type.
# Otherwise the message becomes the default kind, specified in this option.
ProcessLogKind=detail
# Number of messages stored in buffer and available for remote
# clients (messages).
LogBufferSize=1000
# Create a log of all broken files (yes, no).
#
# It is a text file placed near downloaded files, which contains
# the names of broken files.
CreateBrokenLog=yes
# Create memory dump (core-file) on abnormal termination, Linux only (yes, no).
#
# Core-files are very helpful for debugging.
#
# NOTE: core-files may contain sensitive data, like your login/password to
# newsserver etc.
DumpCore=no
# See also option <LogFile> in section "PATHS"
##############################################################################
### DISPLAY (TERMINAL) ###
# Set screen-outputmode (loggable, colored, curses).
#
# loggable - only messages will be printed to standard output;
# colored - prints messages (with simple coloring for messages categories)
# and download progress info; uses escape-sequences to move cursor;
# curses - advanced interactive interface with the ability to edit
# download queue and various output options.
OutputMode=curses
# Shows NZB-Filename in file list in curses-outputmode (yes, no).
#
# This option controls the initial state of curses-frontend,
# it can be switched on/off in run-time with Z-key.
CursesNzbName=yes
# Show files in groups (NZB-files) in queue list in curses-outputmode (yes, no).
#
# This option controls the initial state of curses-frontend,
# it can be switched on/off in run-time with G-key.
CursesGroup=no
# Show timestamps in message list in curses-outputmode (yes, no).
#
# This option controls the initial state of curses-frontend,
# it can be switched on/off in run-time with T-key.
CursesTime=no
# Update interval for Frontend-output in console mode or remote client
# mode (milliseconds).
#
# Min value 25. Bigger values reduce CPU usage (especially in curses-outputmode)
# and network traffic in remote-client mode.
UpdateInterval=200
##############################################################################
### CLIENT/SERVER COMMUNICATION ###
# IP on which the server listens and which the client uses to contact the server.
#
# It can be a dns-hostname or an ip-address (the latter is more efficient
# since it does not require a dns-lookup).
# If you want the server to listen to all interfaces, use "0.0.0.0".
ServerIp=127.0.0.1
# Port which the server & client use (1-65535).
ServerPort=6789
# Password which the server & client use.
ServerPassword=tegbzn6789
# See also option <LogBufferSize> in section "LOGGING"
##############################################################################
### PAR CHECK/REPAIR ###
# How many par2-files to load (none, all, one).
#
# none - all par2-files must be automatically paused;
# all - all par2-files must be downloaded;
# one - only one main par2-file must be downloaded and the others must be paused.
# Paused files remain in queue and can be unpaused by parchecker when needed.
LoadPars=one
# Automatic par-verification (yes, no).
#
# To download only needed par2-files (smart par-files loading) set also
# the option <LoadPars> to "one". If option <LoadPars> is set to "all",
# all par2-files will be downloaded before verification and repair starts.
# The option <RenameBroken> must be set to "no", otherwise the par-checker
# may not find renamed files and fail.
ParCheck=no
# Automatic par-repair (yes, no).
#
# If option <ParCheck> is enabled and <ParRepair> is not, the program
# only verifies downloaded files and downloads needed par2-files, but does
# not start repair-process. This is useful if the server does not have
# enough CPU power, since repairing of large files may take too much
# resources and time on slow computers.
# This option has effect only if the option <ParCheck> is enabled.
ParRepair=yes
# Use only par2-files with matching names (yes, no).
#
# If par-check needs extra par-blocks it searches for par2-files
# in download queue, which can be unpaused and used for restore.
# These par2-files should have the same base name as the main par2-file,
# currently loaded in par-checker. Sometimes extra par files (especially if
# they were uploaded by a different poster) have non-matching names.
# Normally par-checker does not use these files, but you can allow it
# to use these files by setting <StrictParName> to "no".
# This has however a side effect: if NZB-file contains more than one collection
# of files (with different par-sets), par-checker may download par-files from
# a wrong collection. This increases your traffic (but does not harm par-check).
#
# NOTE: par-checker always uses only par-files added from the same NZB-file
# and the option <StrictParName> does not change this behavior.
StrictParName=yes
# Maximum allowed time for par-repair (minutes).
#
# Value "0" means unlimited.
#
# If you use nzbget on a very slow computer like NAS-device, it may be good to
# limit the time allowed for par-repair. Nzbget calculates the estimated time
# required for par-repair. If the estimated value exceeds the limit defined
# here, nzbget cancels the repair.
#
# To avoid a false cancellation nzbget compares the estimated time with
# <ParTimeLimit> after the first 5 minutes of repairing, when the calculated
# estimated time is more or less accurate. But if <ParTimeLimit> is
# set to a value smaller than 5 minutes, the comparison is made after the first
# whole minute.
#
# NOTE: the option limits only the time required for repairing. It doesn't
# affect the first stage of parcheck - verification of files. However, the
# verification speed is constant and doesn't depend on file integrity, so
# it is not necessary to limit the time needed for the first stage.
#
# NOTE: this option requires an extended version of libpar2 (the original
# version doesn't support the cancelling of repairing). Please refer to
# nzbget's README for info on how to apply a patch to libpar2.
ParTimeLimit=0
# Pause download queue during check/repair (yes, no).
#
# Enable the option to give CPU more time for par-check/repair. That helps
# to speed up check/repair on slow CPUs with fast connection (e.g. NAS-devices).
#
# NOTE: if parchecker needs additional par-files it temporarily unpauses the queue.
#
# NOTE: See also option <PostPauseQueue>.
ParPauseQueue=no
# Cleanup download queue after successful check/repair (yes, no).
#
# Enable this option for automatic deletion of unneeded (paused) par-files
# from download queue after successful check/repair.
ParCleanupQueue=yes
# Delete source nzb-file after successful check/repair (yes, no).
#
# Enable this option for automatic deletion of nzb-file from incoming directory
# after successful check/repair.
NzbCleanupDisk=no
##############################################################################
### POSTPROCESSING ###
# Set path to program, that must be executed after the download of nzb-file
# or one collection in nzb-file (if par-check enabled and nzb-file contains
# multiple collections; see note below for the definition of "collection")
# is completed and possibly par-checked/repaired.
#
# Example: "PostProcess=~/postprocess-example.sh".
#
# NZBGet passes following arguments to postprocess-program as environment
# variables:
# NZBPP_DIRECTORY - path to destination dir for downloaded files;
# NZBPP_NZBFILENAME - name of processed nzb-file;
# NZBPP_PARFILENAME - name of par-file or empty string (if no collections were
# found);
# NZBPP_PARSTATUS - result of par-check:
# 0 = not checked: par-check disabled or nzb-file does
# not contain any par-files;
# 1 = checked and failed to repair;
# 2 = checked and successfully repaired;
# 3 = checked and can be repaired but repair is disabled;
# NZBPP_NZBCOMPLETED - state of nzb-job:
# 0 = there are more collections in this nzb-file queued;
# 1 = this was the last collection in nzb-file;
# NZBPP_PARFAILED - indication of failed par-jobs for current nzb-file:
# 0 = no failed par-jobs;
# 1 = current par-job or any of the previous par-jobs for
# the same nzb-file failed;
# NZBPP_CATEGORY - category assigned to nzb-file (can be empty string).
#
# If nzb-file has associated postprocess-parameters (which can be set using
# subcommand <O> of command <-E>, for example: nzbget -E G O "myvar=hello !" 10)
# or using XML-/JSON-RPC (for example via web-interface), they are also passed
# as environment variables. These variables have prefix "NZBPR_" in their names.
# For example, pp-parameter "myvar" will be passed as environment
# variable "NZBPR_myvar".
#
# In addition to arguments and postprocess-parameters nzbget passes all
# nzbget.conf-options to postprocess-program as environment variables. These
# variables have prefix "NZBOP_" and are written in UPPER CASE. For example
# option "ParRepair" is passed as environment variable "NZBOP_PARREPAIR".
# The dots in option names are replaced with underscores, for example
# "SERVER1_HOST". For options with predefined possible values (yes/no, etc.)
# the values are passed always in lower case.
#
# Return value: nzbget processes the exit code returned by the script:
# 91 - request nzbget to do par-check/repair for current collection in the
# current nzb-file;
# 92 - request nzbget to do par-check/repair for all collections in the
# current nzb-file;
# 93 - post-process successful (status = SUCCESS);
# 94 - post-process failed (status = FAILURE);
# 95 - post-process skipped (status = NONE);
# All other return codes are interpreted as "status unknown".
#
# The return value is used to display the status of post-processing in
# a history view. In addition to status one or more text messages can be
# passed to history using a special prefix "[HISTORY]" by printing messages
# to standard output. For example:
# echo "[ERROR] [HISTORY] Unpack failed, not enough disk space";
#
# NOTE: The parameter NZBPP_NZBCOMPLETED is very important and MUST be checked
# even in the simplest scripts.
# If par-check is enabled and nzb-file contains more than one collection
# of files the postprocess-program is called after each collection is completed
# and par-checked. If you want to unpack files or clean up the directory
# (delete par-files, etc.) there are two possibilities, when you can do this:
# 1) you parse NZBPP_PARFILENAME to find out the base name of collection and
# clean up only files from this collection (not reliable, because par-files
# sometimes have different names than rar-files);
# 2) or you just check the parameters NZBPP_NZBCOMPLETED and NZBPP_PARFAILED
# and do the processing, only if NZBPP_NZBCOMPLETED is set to "1" (which
# means, that this was the last collection in nzb-file and all files
# are now completed) and NZBPP_PARFAILED is set to "0" (no failed par-jobs);
#
# NOTE: the term "collection" in the above description actually means
# "par-set". To determine what "collections" are present in nzb-file nzbget
# looks for par-sets. If any collection of files within nzb-file does
# not have any par-files, this collection will not be detected.
# For example, for nzb-file containing three collections but only two par-sets,
# the postprocess will be called two times - after processing of each par-set.
#
# NOTE: if nzbget doesn't find any collections it calls PostProcess once
# with empty string for parameter NZBPP_PARFILENAME;
#
# NOTE: using the special return values (91 and 92) to request
# par-check/repair makes a delayed parcheck possible. To do that:
# 1) set options: LoadPars=one, ParCheck=no, ParRepair=yes;
# 2) in post-process-script check the parameter NZBPP_PARSTATUS. If it is "0",
# that means, the script is called for the first time. Try to unpack files.
# If unpack fails, exit the script with exit code for par-check/repair;
# 3) nzbget will start par-check/repair. After that it calls the script again;
# 4) on second pass the parameter NZBPP_PARSTATUS will have value
# greater than "0". If it is "2" ("checked and successfully repaired")
# you can try unpack again.
#
# NOTE: an example script for unrarring is provided within distribution
# in file "postprocess-example.sh".
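#
# A minimal sketch of a postprocess-script following the same idea
# (hypothetical; assumes it is saved as ~/postprocess.sh, made executable,
# and that "unrar" is installed; adjust paths and options to your setup):
#
#   #!/bin/sh
#   # Only act after the last collection is completed and no par-job failed
#   if [ "$NZBPP_NZBCOMPLETED" != "1" ] || [ "$NZBPP_PARFAILED" != "0" ]; then
#       exit 95    # status = NONE (nothing was done)
#   fi
#   cd "$NZBPP_DIRECTORY" || exit 94
#   # unrar invocation is an example; options may need adjusting
#   if unrar x -y -p- -o+ "*.rar" > /dev/null 2>&1; then
#       echo "[INFO] [HISTORY] Unpack successful"
#       exit 93    # status = SUCCESS
#   else
#       echo "[ERROR] [HISTORY] Unpack failed"
#       exit 94    # status = FAILURE
#   fi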
PostProcess=
# Allow multiple post-processing for the same nzb-file (yes, no).
#
# After the post-processing (par-check and call of a postprocess-script) is
# completed, nzbget adds the nzb-file to a list of completed-jobs. The nzb-file
# stays in the list until the last file from that nzb-file is deleted from
# the download queue (it occurs straight away if the par-check was successful
# and the option <ParCleanupQueue> is enabled).
# That means, if a paused file from an nzb-collection becomes unpaused
# (manually or from a post-process-script) after the collection was already
# postprocessed, nzbget will not post-process the nzb-file again.
# This prevents unwanted multiple post-processing of the same nzb-file.
# But it might be needed if the par-check/-repair are performed not directly
# by nzbget but from a post-process-script.
#
# NOTE: enable this option only if you were advised to do that by the author
# of the post-process-script.
#
# NOTE: by enabling <AllowReProcess> you should disable the option <ParCheck>
# to prevent multiple par-checking.
AllowReProcess=no
# Pause download queue during executing of postprocess-script (yes, no).
#
# Enable the option to give CPU more time for postprocess-script. That helps
# to speed up postprocess on slow CPUs with fast connection (e.g. NAS-devices).
#
# NOTE: See also option <ParPauseQueue>.
PostPauseQueue=no
##############################################################################
### SCHEDULER ###
# This section defines scheduler commands.
# For each command create a set of options <TaskX.Time>, <TaskX.Command>,
# <TaskX.WeekDays> and <TaskX.DownloadRate>.
# The following example shows how to throttle downloads in the daytime
# by 100 KB/s and download at full speed overnights:
# Time to execute the command (HH:MM).
#
# Multiple comma-separated values are accepted.
# An asterisk as hours-part means "every hour".
#
# Examples: "08:00", "00:00,06:00,12:00,18:00", "*:00", "*:00,*:30".
#Task1.Time=08:00
# Week days to execute the command (1-7).
#
# Comma separated list of week days numbers.
# 1 is Monday.
# Character '-' may be used to define ranges.
#
# Examples: "1-7", "1-5", "5,6", "1-5, 7".
#Task1.WeekDays=1-7
# Command to be executed (PauseDownload, UnpauseDownload, PauseScan,
# UnpauseScan, DownloadRate, Process).
#
# Possible commands:
# PauseDownload - pauses download;
# UnpauseDownload - resumes download;
# PauseScan - pauses scan of incoming nzb-directory;
# UnpauseScan - resumes scan of incoming nzb-directory;
# DownloadRate - sets download rate in KB/s;
# Process - executes external program.
#Task1.Command=DownloadRate
# Download rate to be set if the command is "DownloadRate" (kilobytes/sec).
#
# Value "0" means no speed control.
#
# If the option <TaskX.Command> is not set to "DownloadRate" this option
# is ignored and can be omitted.
#Task1.DownloadRate=100
# Path to the program to execute if the command is "Process".
#
# Example: "Task1.Process=/home/user/fetch-nzb.sh".
#
# If the option <TaskX.Command> is not set to "Process" this option
# is ignored and can be omitted.
#
# NOTE: it's allowed to add parameters to command line. If filename or
# any parameter contains spaces it must be surrounded with single quotation
# marks. If filename/parameter contains single quotation marks, each of them
# must be replaced with two single quotation marks and the resulting filename/
# parameter must be surrounded with single quotation marks.
# Example: '/home/user/download/my scripts/task process.sh' 'world''s fun'.
# In this example one parameter (world's fun) is passed to the script
# (task process.sh).
#Task1.Process=
#Task2.Time=20:00
#Task2.WeekDays=1-7
#Task2.Command=DownloadRate
#Task2.DownloadRate=0
##############################################################################
### PERFORMANCE ###
# On a very fast connection and slow CPU and/or drive the following
# settings may improve performance:
# 1) Disable par-checking and -repairing ("ParCheck=no"). VERY important,
# because par-checking/repairing needs a lot of CPU-power and
# significantly increases disk usage;
# 2) Try to activate option <DirectWrite> ("DirectWrite=yes"), especially
# if you use EXT3-partitions;
# 3) Disable option <CrcCheck> ("CrcCheck=no");
# 4) Disable option <ContinuePartial> ("ContinuePartial=no");
# 5) Do not limit download rate ("DownloadRate=0"), because the bandwidth
# throttling eats some CPU time;
# 6) Disable logging for detail- and debug-messages ("DetailTarget=none",
# "DebugTarget=none");
# 7) Run the program in daemon (Posix) or service (Windows) mode and use
# a remote client only for the short periods of time needed to control the
# download process on the server. Daemon/Service mode uses less CPU
# than console server mode because it does not update the screen.
# 8) Increase the value of option <WriteBufferSize> or better set it to
# "-1" (max/auto) if you have spare 5-20 MB of memory.

Some files were not shown because too many files have changed in this diff.