Compare commits

29 Commits
2.2.0 ... 2.2.x

Author SHA1 Message Date
Safihre
bcc4dd75cf Update text files for 2.2.1 2017-08-25 21:58:48 +02:00
Safihre
97711ca82e Revert "Remove locks from ArticleCache"
This reverts commit 5e7558ce4a.
2017-08-25 21:57:16 +02:00
Safihre
e782237f27 More logging when adding NZBs 2017-08-25 21:49:24 +02:00
Safihre
52bb156c08 Ignore unpack errors in duplicate rarsets
Multipar and par2tbb will detect and log them so we can remove them, but par2cmdline will not.
2017-08-25 16:58:38 +02:00
Safihre
4361d82ddd Duplicate files in NZB could result in broken unpack after repair
Because par2 would detect them but not use them, so ".1" files would later be used for unpack even though they are not a real set.
2017-08-25 16:57:47 +02:00
Safihre
017cf8f285 Do not fail a job if recursive unpack fails
The user can handle it; we did our part.
2017-08-25 09:15:14 +02:00
Safihre
03cdf6ed5d Sync translatable texts from develop
To avoid conflicts on Launchpad
2017-08-24 23:40:37 +02:00
Safihre
cf347a8e90 Original files would be deleted after a MultiPar rename 2017-08-24 23:36:36 +02:00
Safihre
f06afe43e1 Use fileobj to prevent having to chdir, which can crash on macOS
If something is "wrong" with the current directory, for example when SABnzbd is started straight after download and sandboxed by macOS security, this function can error and break the adding of NZBs.
We have to use a fileobj, otherwise GZip will put the whole path inside the file.
2017-08-24 16:36:13 +02:00
Safihre
fb301eb5c8 Update text files for 2.2.1RC2 2017-08-23 22:49:59 +02:00
Safihre
1562c3560b Handle '482 Download limit exceeded'
Closes #1009
2017-08-23 22:48:15 +02:00
Safihre
9813bc237f Only auto-disconnect after first run of verification 2017-08-23 21:42:56 +02:00
Safihre
b39fe059c6 Pause between unpacks on Windows, otherwise subprocess_fix overloads
Strange but true: on jobs with many small files to unpack, it would just fail.
2017-08-23 21:42:17 +02:00
Safihre
a56c770a8b The real anti-stalling fix
Woohoo!
For each NZF (file), make sure all articles have tried a server before marking the file as tried. Previously, if articles were still in transit, they could be marked as tried at the NZF level before the server could get to them.
2017-08-23 16:02:01 +02:00
Safihre
e3bf0edad8 TryList reset at NZO level also necessary
Timing issue between when a new server is selected and when a job is added to the NZO-level try-list. Locks were tried, but failed.
2017-08-23 09:11:01 +02:00
Safihre
e35d9e4db3 Correct handling of TryList when server has timeout 2017-08-23 08:32:47 +02:00
Safihre
c617d4321a Correctly remove + from INFO label in all languages 2017-08-22 16:13:24 +02:00
Safihre
0fd3a2881f Correct redirect after ports change 2017-08-22 10:19:42 +02:00
Safihre
0c1f7633de Only discard really non-unique hashes from md5of16k 2017-08-22 09:43:33 +02:00
Safihre
b7d5d49c84 Show hover-title that the compress icon is Direct Unpack 2017-08-22 09:43:26 +02:00
Safihre
9911b93ece Add error when NZO creation fails 2017-08-22 09:43:11 +02:00
Safihre
eeaad00968 Also hide email-accounts in logging 2017-08-22 09:43:06 +02:00
Safihre
e1bb8459e3 Take the risk of allowing up to 5 bad articles in jobs without Par2 2017-08-22 09:42:47 +02:00
Safihre
65c3ac0cc0 Warn in case the password file has too many passwords 2017-08-22 09:42:16 +02:00
Safihre
413c02a80f Do not run get_new_id forever in case of problems
#984
2017-08-22 09:41:40 +02:00
Safihre
80f118f304 UnRar is required to read some RAR files 2017-08-21 08:23:10 +02:00
Safihre
5c0a10e16b Update text files for 2.2.1RC1 2017-08-19 11:06:40 +02:00
Safihre
d9b32261e7 Reset all NZO TryList when doing Prospective Add
I thought in c14b3ed82a that this was enough, but clearly it is not.
2017-08-19 11:03:22 +02:00
Safihre
8d8ce52193 Stall prevention by checking TryList 2017-08-19 11:03:15 +02:00
18 changed files with 207 additions and 77 deletions

View File

@@ -1,7 +1,7 @@
Metadata-Version: 1.0
Name: SABnzbd
Version: 2.2.0
Summary: SABnzbd-2.2.0
Version: 2.2.1
Summary: SABnzbd-2.2.1
Home-page: https://sabnzbd.org
Author: The SABnzbd Team
Author-email: team@sabnzbd.org

View File

@@ -1,3 +1,23 @@
Release Notes - SABnzbd 2.2.1
=========================================================
## Changes since 2.2.0
- Allow up to 5 bad articles for jobs with no or little par2
- Only auto-disconnect after first run of verification
- Warning is shown when password-file is too large
- Failure of recursive unpacking no longer fails the whole job
- Failure to unpack a duplicate RAR set no longer fails the whole job
## Bugfixes since 2.2.0
- Some users were experiencing downloads or pre-check being stuck at 99%
- Fixed RarFile error during unpacking
- Remove email addresses settings from log export
- Block server longer on 'Download limit exceeded' errors
- Windows: If repair renamed a file, the correctly renamed file was deleted
- Windows: Unpacking of downloads with many archives could fail
- macOS: Adding jobs could fail without any error
Release Notes - SABnzbd 2.2.0
=========================================================
@@ -81,6 +101,10 @@ fetching before the upgrade will be lost!
- Start SABnzbd
## Upgrade notices
- Due to changes in this release, the queue will be converted when 2.2.x
is started for the first time. Job order, settings and data will be
preserved, but all jobs will be unpaused and URLs that did not finish
fetching before the upgrade will be lost!
- The organization of the download queue is different from 0.7.x releases.
This version will not see the old queue, but you can restore the jobs by going
to the Status page and using Queue Repair.

View File

@@ -232,7 +232,7 @@ function do_restart() {
var portsUnchanged = ($('#port').val() == $('#port').data('original')) && ($('#https_port').val() == $('#https_port').data('original'))
// Are we on settings page or did nothing change?
if(!$('body').hasClass('General') || (!switchedHTTPS && !portsUnchanged)) {
if(!$('body').hasClass('General') || (!switchedHTTPS && portsUnchanged)) {
// Same as before
var urlTotal = window.location.origin + urlPath
} else {

View File

@@ -95,7 +95,7 @@
<span data-bind="text: password"></span>
</small>
<!-- /ko -->
<div class="name-icons direct-unpack hover-button" data-bind="visible: direct_unpack">
<div class="name-icons direct-unpack hover-button" data-bind="visible: direct_unpack" title="$T('opt-direct_unpack')">
<span class="glyphicon glyphicon-compressed"></span> <span data-bind="text: direct_unpack"></span>
</div>
</div>

View File

@@ -103,7 +103,7 @@
glitterTranslate.status['Script'] = "$T('stage-script')";
glitterTranslate.status['Source'] = "$T('stage-source')";
glitterTranslate.status['Servers'] = "$T('stage-servers')";
glitterTranslate.status['INFO'] = "$T('log-info')".replace('+ ', '').toUpperCase();
glitterTranslate.status['INFO'] = "$T('log-info')".replace('+', '').toUpperCase();
glitterTranslate.status['WARNING'] = "$T('Glitter-warning')";
glitterTranslate.status['ERROR'] = "$T('Glitter-error')";

View File

@@ -5,14 +5,14 @@
#
msgid ""
msgstr ""
"Project-Id-Version: SABnzbd-2.2.0-develop\n"
"Project-Id-Version: SABnzbd-2.3.0-develop\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: shypike@sabnzbd.org\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=ASCII\n"
"Content-Transfer-Encoding: 7bit\n"
"POT-Creation-Date: 2017-08-16 13:33+W. Europe Daylight Time\n"
"POT-Creation-Date: 2017-08-25 09:18+W. Europe Daylight Time\n"
"Generated-By: pygettext.py 1.5\n"
@@ -40,10 +40,22 @@ msgstr ""
msgid "par2 binary... NOT found!"
msgstr ""
#: SABnzbd.py [Error message] # SABnzbd.py [Error message]
msgid "Verification and repair will not be possible."
msgstr ""
#: SABnzbd.py [Error message]
msgid "MultiPar binary... NOT found!"
msgstr ""
#: SABnzbd.py [Warning message]
msgid "Your UNRAR version is %s, we recommend version %s or higher.<br />"
msgstr ""
#: SABnzbd.py [Error message]
msgid "Downloads will not unpacked."
msgstr ""
#: SABnzbd.py [Error message]
msgid "unrar binary... NOT found"
msgstr ""
@@ -435,10 +447,14 @@ msgstr ""
msgid "Error removing %s"
msgstr ""
#: sabnzbd/dirscanner.py [Warning message]
#: sabnzbd/dirscanner.py [Warning message] # sabnzbd/rss.py [Warning message]
msgid "Cannot read %s"
msgstr ""
#: sabnzbd/dirscanner.py [Error message]
msgid "Error while adding %s, removing"
msgstr ""
#: sabnzbd/dirscanner.py [Error message] # sabnzbd/dirscanner.py [Error message]
msgid "Cannot read Watched Folder %s"
msgstr ""
@@ -664,7 +680,7 @@ msgstr ""
msgid "Undefined server!"
msgstr ""
#: sabnzbd/interface.py
#: sabnzbd/interface.py # sabnzbd/interface.py
msgid "Incorrect parameter"
msgstr ""
@@ -712,6 +728,10 @@ msgstr ""
msgid "Error creating SSL key and certificate"
msgstr ""
#: sabnzbd/misc.py [Warning message]
msgid "Your password file contains more than 30 passwords, testing all these passwords takes a lot of time. Try to only list useful passwords."
msgstr ""
#: sabnzbd/misc.py [Error message]
msgid "Cannot change permissions of %s"
msgstr ""
@@ -921,7 +941,7 @@ msgid "Main packet not found..."
msgstr ""
#: sabnzbd/newsunpack.py # sabnzbd/newsunpack.py
msgid "Invalid par2 files, cannot verify or repair"
msgid "Invalid par2 files or invalid PAR2 parameters, cannot verify or repair"
msgstr ""
#: sabnzbd/newsunpack.py # sabnzbd/newsunpack.py
@@ -1759,6 +1779,14 @@ msgstr ""
msgid "Disable quota management"
msgstr ""
#: sabnzbd/skintext.py [Config->Scheduler]
msgid "Pause jobs with category"
msgstr ""
#: sabnzbd/skintext.py [Config->Scheduler]
msgid "Resume jobs with category"
msgstr ""
#: sabnzbd/skintext.py [Prowl priority] # sabnzbd/skintext.py [Prowl priority] # sabnzbd/skintext.py [Three way switch for duplicates]
msgid "Off"
msgstr ""
@@ -2965,6 +2993,14 @@ msgstr ""
msgid "Detect identical episodes in series (based on \"name/season/episode\" of items in your History)"
msgstr ""
#: sabnzbd/skintext.py
msgid "Allow proper releases"
msgstr ""
#: sabnzbd/skintext.py
msgid "Bypass series duplicate detection if PROPER, REAL or REPACK is detected in the download name"
msgstr ""
#: sabnzbd/skintext.py [Four way switch for duplicates]
msgid "Discard"
msgstr ""
@@ -4574,7 +4610,11 @@ msgid ""
msgstr ""
#: sabnzbd/skintext.py
msgid "In order to download from Usenet you will require access to a provider. Your ISP may provide you with access, however a premium provider is recommended. Don't have a Usenet provider? We recommend trying %s."
msgid "In order to download from usenet you will require access to a provider. Your ISP may provide you with access, however a premium provider is recommended."
msgstr ""
#: sabnzbd/skintext.py
msgid "Don't have a usenet provider? We recommend trying %s."
msgstr ""
#: sabnzbd/tvsort.py [Error message]

View File

@@ -598,27 +598,22 @@ def backup_nzb(filename, data):
def save_compressed(folder, filename, data):
""" Save compressed NZB file in folder """
# Need to go to the save folder to
# prevent the pathname being embedded in the GZ file
here = os.getcwd()
os.chdir(folder)
if filename.endswith('.nzb'):
filename += '.gz'
else:
filename += '.nzb.gz'
logging.info("Backing up %s", os.path.join(folder, filename))
try:
f = gzip.GzipFile(filename, 'wb')
f.write(data)
f.flush()
f.close()
# Have to get around the path being put inside the tgz
with open(os.path.join(folder, filename), 'wb') as tgz_file:
f = gzip.GzipFile(filename, fileobj=tgz_file)
f.write(data)
f.flush()
f.close()
except:
logging.error(T('Saving %s failed'), os.path.join(folder, filename))
logging.info("Traceback: ", exc_info=True)
os.chdir(here)
##############################################################################
# Unsynchronized methods
@@ -863,6 +858,7 @@ def get_new_id(prefix, folder, check_list=None):
except:
logging.error(T('Failure in tempfile.mkstemp'))
logging.info("Traceback: ", exc_info=True)
break
# Cannot create unique id, crash the process
raise IOError
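
A minimal stand-alone sketch of the fileobj technique in the hunk above (hypothetical folder and data; only the stdlib gzip module is assumed): opening the target file ourselves removes the need for os.chdir(), and handing GzipFile only the bare filename keeps the full path out of the gzip header.

    import gzip
    import os

    def save_compressed_sketch(folder, filename, data):
        # Open the destination directly, so the current working
        # directory never matters (no chdir / chdir-back dance)
        full_path = os.path.join(folder, filename + '.gz')
        with open(full_path, 'wb') as raw:
            # Only the bare filename is given to GzipFile, so only
            # that name (not the whole path) ends up in the header
            gz = gzip.GzipFile(filename, mode='wb', fileobj=raw)
            gz.write(data)
            gz.close()

    save_compressed_sketch('/tmp', 'example.nzb', b'<nzb/>')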

View File

@@ -30,13 +30,8 @@ from sabnzbd.constants import GIGI, ANFO
ARTICLE_LOCK = threading.Lock()
class ArticleCache(object):
""" Operations on lists/dicts are atomic enough that we
do not have to put locks. Only the cache-size needs
a lock since the integer needs to stay synced.
With less locking, the decoder and assembler do not
have to wait on each other.
"""
do = None
def __init__(self):
@@ -47,9 +42,11 @@ class ArticleCache(object):
self.__article_table = {} # Dict of buffered articles
ArticleCache.do = self
@synchronized(ARTICLE_LOCK)
def cache_info(self):
return ANFO(len(self.__article_list), abs(self.__cache_size), self.__cache_limit_org)
@synchronized(ARTICLE_LOCK)
def new_limit(self, limit):
""" Called when cache limit changes """
self.__cache_limit_org = limit
@@ -59,28 +56,23 @@ class ArticleCache(object):
self.__cache_limit = min(limit, GIGI)
@synchronized(ARTICLE_LOCK)
def increase_cache_size(self, value):
self.__cache_size += value
@synchronized(ARTICLE_LOCK)
def decrease_cache_size(self, value):
self.__cache_size -= value
def reserve_space(self, data):
""" Is there space left in the set limit? """
data_size = sys.getsizeof(data) * 64
self.increase_cache_size(data_size)
self.__cache_size += data_size
if self.__cache_size + data_size > self.__cache_limit:
return False
else:
return True
@synchronized(ARTICLE_LOCK)
def free_reserve_space(self, data):
""" Remove previously reserved space """
data_size = sys.getsizeof(data) * 64
self.decrease_cache_size(data_size)
self.__cache_size -= data_size
return self.__cache_size + data_size < self.__cache_limit
@synchronized(ARTICLE_LOCK)
def save_article(self, article, data):
nzf = article.nzf
nzo = nzf.nzo
@@ -98,6 +90,7 @@ class ArticleCache(object):
if self.__cache_limit:
if self.__cache_limit < 0:
self.__add_to_cache(article, data)
else:
data_size = len(data)
@@ -106,7 +99,7 @@ class ArticleCache(object):
# Flush oldest article in cache
old_article = self.__article_list.pop(0)
old_data = self.__article_table.pop(old_article)
self.decrease_cache_size(len(old_data))
self.__cache_size -= len(old_data)
# No need to flush if this is a refreshment article
if old_article != article:
self.__flush_article(old_article, old_data)
@@ -120,6 +113,7 @@ class ArticleCache(object):
else:
self.__flush_article(article, data)
@synchronized(ARTICLE_LOCK)
def load_article(self, article):
data = None
nzo = article.nzf.nzo
@@ -127,7 +121,7 @@ class ArticleCache(object):
if article in self.__article_list:
data = self.__article_table.pop(article)
self.__article_list.remove(article)
self.decrease_cache_size(len(data))
self.__cache_size -= len(data)
elif article.art_id:
data = sabnzbd.load_data(article.art_id, nzo.workpath, remove=True,
do_pickle=False, silent=True)
@@ -137,19 +131,21 @@ class ArticleCache(object):
return data
@synchronized(ARTICLE_LOCK)
def flush_articles(self):
self.__cache_size = 0
while self.__article_list:
article = self.__article_list.pop(0)
data = self.__article_table.pop(article)
self.__flush_article(article, data)
self.__cache_size = 0
@synchronized(ARTICLE_LOCK)
def purge_articles(self, articles):
for article in articles:
if article in self.__article_list:
self.__article_list.remove(article)
data = self.__article_table.pop(article)
self.decrease_cache_size(len(data))
self.__cache_size -= len(data)
if article.art_id:
sabnzbd.remove_data(article.art_id, article.nzf.nzo.workpath)
@@ -172,12 +168,11 @@ class ArticleCache(object):
def __add_to_cache(self, article, data):
if article in self.__article_table:
self.decrease_cache_size(len(self.__article_table[article]))
self.__cache_size -= len(self.__article_table[article])
else:
self.__article_list.append(article)
self.__article_table[article] = data
self.increase_cache_size(len(data))
self.__cache_size += len(data)
# Create the instance
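
The revert re-introduces @synchronized(ARTICLE_LOCK) around every method that touches __cache_size, because += on an attribute is a read-modify-write that can race between the decoder and assembler threads. A rough sketch of what such a decorator looks like (an illustration, not SABnzbd's actual sabnzbd/decorators.py):

    import threading
    from functools import wraps

    ARTICLE_LOCK = threading.Lock()

    def synchronized(lock):
        def decorate(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                # Serialize every call on the shared lock
                with lock:
                    return func(*args, **kwargs)
            return wrapper
        return decorate

    class CacheSketch(object):
        def __init__(self):
            self.size = 0

        @synchronized(ARTICLE_LOCK)
        def increase(self, value):
            self.size += value  # safe: only one thread at a time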

View File

@@ -218,8 +218,9 @@ class Assembler(Thread):
table[name] = hash
if hash16k not in nzf.nzo.md5of16k:
nzf.nzo.md5of16k[hash16k] = name
else:
# Not unique, remove to avoid false-renames
elif nzf.nzo.md5of16k[hash16k] != name:
# Not unique and not already linked to this file
# Remove to avoid false-renames
duplicates16k.append(hash16k)
header = f.read(8)
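
A toy illustration (hypothetical hashes and names) of the rule above: the MD5 of a file's first 16k may be used to rename obfuscated files, so a hash is only discarded when it genuinely points at two different names.

    md5of16k = {}
    duplicates16k = []
    seen = [('aaa', 'file1.rar'), ('aaa', 'file1.rar'),
            ('bbb', 'x.rar'), ('bbb', 'y.rar')]

    for hash16k, name in seen:
        if hash16k not in md5of16k:
            md5of16k[hash16k] = name
        elif md5of16k[hash16k] != name:
            # Same hash for a different file: ambiguous, unusable
            duplicates16k.append(hash16k)

    # 'aaa' survives (same file seen twice), 'bbb' is discarded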

View File

@@ -81,6 +81,7 @@ MAX_DECODE_QUEUE = 10
LIMIT_DECODE_QUEUE = 100
MAX_WARNINGS = 20
MAX_WIN_DFOLDER = 60
MAX_BAD_ARTICLES = 5
REPAIR_PRIORITY = 3
TOP_PRIORITY = 2

View File

@@ -73,6 +73,7 @@ def is_archive(path):
zf = zipfile.ZipFile(path)
return 0, zf, '.zip'
except:
logging.info(T('Cannot read %s'), path, exc_info=True)
return -1, None, ''
elif rarfile.is_rarfile(path):
try:
@@ -81,14 +82,17 @@ def is_archive(path):
zf = rarfile.RarFile(path)
return 0, zf, '.rar'
except:
logging.info(T('Cannot read %s'), path, exc_info=True)
return -1, None, ''
elif is_sevenfile(path):
try:
zf = SevenZip(path)
return 0, zf, '.7z'
except:
logging.info(T('Cannot read %s'), path, exc_info=True)
return -1, None, ''
else:
logging.info('Archive %s is not a real archive!', os.path.basename(path))
return 1, None, ''
@@ -127,17 +131,24 @@ def ProcessArchiveFile(filename, path, pp=None, script=None, cat=None, catdir=No
try:
data = zf.read(name)
except:
logging.error(T('Cannot read %s'), name, exc_info=True)
zf.close()
return -1, []
name = os.path.basename(name)
if data:
nzo = None
try:
nzo = nzbstuff.NzbObject(name, pp, script, data, cat=cat, url=url,
priority=priority, nzbname=nzbname)
if not nzo.password:
nzo.password = password
except (TypeError, ValueError) as e:
# Duplicate or empty, ignore
pass
except:
nzo = None
# Something else is wrong, show error
logging.error(T('Error while adding %s, removing'), name, exc_info=True)
if nzo:
if nzo_id:
# Re-use existing nzo_id, when a "future" job gets its payload
@@ -222,6 +233,8 @@ def ProcessSingleFile(filename, path, pp=None, script=None, cat=None, catdir=Non
# Looks like an incomplete file, retry
return -2, nzo_ids
else:
# Something else is wrong, show error
logging.error(T('Error while adding %s, removing'), name, exc_info=True)
return -1, nzo_ids
if nzo:

View File

@@ -655,7 +655,7 @@ class Downloader(Thread):
logging.error(T('Failed login for server %s'), server.id)
penalty = _PENALTY_PERM
block = True
elif ecode == '502':
elif ecode in ('502', '482'):
# Cannot connect (other reasons), block this server
if server.active:
errormsg = T('Cannot connect to server %s [%s]') % ('', display_msg)
@@ -795,11 +795,8 @@ class Downloader(Thread):
# Remove this server from try_list
article.fetcher = None
nzf = article.nzf
nzo = nzf.nzo
# Allow all servers to iterate over each nzo/nzf again ##
sabnzbd.nzbqueue.NzbQueue.do.reset_try_lists(nzf, nzo)
# Allow all servers to iterate over each nzo/nzf again
sabnzbd.nzbqueue.NzbQueue.do.reset_try_lists(article.nzf, article.nzf.nzo)
if destroy:
nw.terminate(quit=quit)
@@ -942,7 +939,8 @@ def clues_too_many(text):
""" Check for any "too many connections" clues in the response code """
text = text.lower()
for clue in ('exceed', 'connections', 'too many', 'threads', 'limit'):
if clue in text:
# Not 'download limit exceeded' error
if (clue in text) and ('download' not in text):
return True
return False
@@ -959,7 +957,7 @@ def clues_too_many_ip(text):
def clues_pay(text):
""" Check for messages about payments """
text = text.lower()
for clue in ('credits', 'paym', 'expired'):
for clue in ('credits', 'paym', 'expired', 'exceeded'):
if clue in text:
return True
return False
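
Taken together, the two clue changes reclassify the '482 Download limit exceeded' response: it must no longer look like a connection-limit problem, and it must look like a payment/quota problem so the server is blocked for longer. A self-contained version of the two helpers as they read after this diff:

    def clues_too_many(text):
        """ Check for any "too many connections" clues """
        text = text.lower()
        for clue in ('exceed', 'connections', 'too many', 'threads', 'limit'):
            # Not 'download limit exceeded' error
            if (clue in text) and ('download' not in text):
                return True
        return False

    def clues_pay(text):
        """ Check for messages about payments """
        text = text.lower()
        for clue in ('credits', 'paym', 'expired', 'exceeded'):
            if clue in text:
                return True
        return False

    assert not clues_too_many('482 download limit exceeded')
    assert clues_pay('482 download limit exceeded')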

View File

@@ -2414,7 +2414,7 @@ LOG_API_RE = re.compile(r"(apikey|api)(=|:)[\w]+", re.I)
LOG_API_JSON_RE = re.compile(r"u'(apikey|api)': u'[\w]+'", re.I)
LOG_USER_RE = re.compile(r"(user|username)\s?=\s?[\S]+", re.I)
LOG_PASS_RE = re.compile(r"(password)\s?=\s?[\S]+", re.I)
LOG_INI_HIDE_RE = re.compile(r"(email_pwd|rating_api_key|pushover_token|pushover_userkey|pushbullet_apikey|prowl_apikey|growl_password|growl_server|IPv[4|6] address)\s?=\s?[\S]+", re.I)
LOG_INI_HIDE_RE = re.compile(r"(email_pwd|email_account|email_to|rating_api_key|pushover_token|pushover_userkey|pushbullet_apikey|prowl_apikey|growl_password|growl_server|IPv[4|6] address)\s?=\s?[\S]+", re.I)
LOG_HASH_RE = re.compile(r"([a-fA-F\d]{25})", re.I)
class Status(object):
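
A small sketch of how a pattern like this scrubs an INI dump before it lands in a log export (the <REMOVED> replacement text is illustrative, not necessarily what SABnzbd substitutes):

    import re

    LOG_INI_HIDE_RE = re.compile(
        r"(email_pwd|email_account|email_to|rating_api_key|pushover_token|"
        r"pushover_userkey|pushbullet_apikey|prowl_apikey|growl_password|"
        r"growl_server|IPv[4|6] address)\s?=\s?[\S]+", re.I)

    line = "email_account = user@example.com"
    print(LOG_INI_HIDE_RE.sub(r"\1 = <REMOVED>", line))
    # -> email_account = <REMOVED>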

View File

@@ -1359,6 +1359,7 @@ def get_all_passwords(nzo):
pw = nzo.nzo_info.get('password')
if pw:
meta_passwords.append(pw)
if meta_passwords:
if nzo.password == meta_passwords[0]:
# this nzo.password came from meta, so don't use it twice
@@ -1366,19 +1367,23 @@ def get_all_passwords(nzo):
else:
passwords.extend(meta_passwords)
logging.info('Read %s passwords from meta data in NZB: %s', len(meta_passwords), meta_passwords)
pw_file = cfg.password_file.get_path()
if pw_file:
try:
pwf = open(pw_file, 'r')
lines = pwf.read().split('\n')
with open(pw_file, 'r') as pwf:
lines = pwf.read().split('\n')
# Remove empty lines and space-only passwords and remove surrounding spaces
pws = [pw.strip('\r\n ') for pw in lines if pw.strip('\r\n ')]
logging.debug('Read these passwords from file: %s', pws)
passwords.extend(pws)
pwf.close()
logging.info('Read %s passwords from file %s', len(pws), pw_file)
except IOError:
logging.info('Failed to read the passwords file %s', pw_file)
except:
logging.warning('Failed to read the passwords file %s', pw_file)
# Check size
if len(passwords) > 30:
logging.warning(T('Your password file contains more than 30 passwords, testing all these passwords takes a lot of time. Try to only list useful passwords.'))
if nzo.password:
# If an explicit password was set, add a retry without password, just in case.
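
Extracted as a stand-alone function (hypothetical name), the new reading logic amounts to: use a context manager so the handle is always closed, drop whitespace-only entries, and warn once the list gets slow to try exhaustively:

    import logging

    def read_password_file(pw_file):
        passwords = []
        try:
            with open(pw_file, 'r') as pwf:
                lines = pwf.read().split('\n')
            # Remove empty lines and space-only passwords,
            # and strip surrounding spaces
            passwords = [pw.strip('\r\n ') for pw in lines if pw.strip('\r\n ')]
        except IOError:
            logging.info('Failed to read the passwords file %s', pw_file)
        if len(passwords) > 30:
            logging.warning('Password file contains more than 30 passwords, '
                            'testing them all takes a lot of time.')
        return passwords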

View File

@@ -522,6 +522,23 @@ def rar_unpack(nzo, workdir, workdir_complete, delete, one_folder, rars):
logging.debug('rar_unpack(): Newfiles: %s', newfiles)
extracted_files.extend(newfiles)
# Do not fail if this was a recursive unpack
if fail and rarpath.startswith(workdir_complete):
# Do not delete the files, leave it to user!
logging.info('Ignoring failure to do recursive unpack of %s', rarpath)
fail = 0
success = True
newfiles = []
# Do not fail if this was maybe just some duplicate fileset
# Multipar and par2tbb will detect and log them, par2cmdline will not
if fail and rar_set.endswith(('.1', '.2')):
# Just in case, we leave the raw files
logging.info('Ignoring failure of unpack for possible duplicate file %s', rarpath)
fail = 0
success = True
newfiles = []
# Delete the old files if we have to
if success and delete and newfiles:
for rar in rars:
@@ -606,6 +623,10 @@ def rar_extract_core(rarfile_path, numrars, one_folder, nzo, setname, extraction
command = ['%s' % RAR_COMMAND, action, '-idp', overwrite, rename, '-ai', password_command,
'%s' % clip_path(rarfile_path), '%s\\' % extraction_path]
# The subprocess_fix requires time to clear the buffers to work,
# otherwise the inputs get sent incorrectly and unrar breaks
time.sleep(0.5)
elif RAR_PROBLEM:
# Use only oldest options (specifically no "-or")
command = ['%s' % RAR_COMMAND, action, '-idp', overwrite, password_command,
@@ -1521,7 +1542,12 @@ def PAR_Verify(parfile, parfile_nzf, nzo, setname, joinables, single=False):
verifynum += 1
nzo.set_action_line(T('Verifying'), '%02d/%02d' % (verifynum, verifytotal))
nzo.status = Status.VERIFYING
datafiles.append(TRANS(m.group(1)))
# Remove redundant extra files that are just duplicates of original ones
if 'duplicate data blocks' in line:
used_for_repair.append(TRANS(m.group(1)))
else:
datafiles.append(TRANS(m.group(1)))
continue
# Verify done
@@ -1943,7 +1969,7 @@ def MultiPar_Verify(parfile, parfile_nzf, nzo, setname, joinables, single=False)
if renames:
# If successful, we also remove the possibly previously renamed ones
if finished:
reconstructed.extend(nzo.renames)
reconstructed.extend(renames.values())
# Adding to the collection
nzo.renamed_file(renames)
@@ -2064,10 +2090,13 @@ def rar_volumelist(rarfile_path, password, known_volumes):
""" Extract volumes that are part of this rarset
and merge them with existing list, removing duplicates
"""
# UnRar is required to read some RAR files
rarfile.UNRAR_TOOL = RAR_COMMAND
zf = rarfile.RarFile(rarfile_path)
# setpassword can fail due to bugs in RarFile
if password:
try:
# setpassword can fail due to bugs in RarFile
zf.setpassword(password)
except:
pass
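
The hunk above points the rarfile library at SABnzbd's own unrar binary, since some RAR files can only be read through unrar itself. A minimal usage sketch (the paths are placeholders; UNRAR_TOOL, RarFile and setpassword are rarfile's public API):

    import rarfile

    # Tell rarfile which unrar binary to shell out to
    rarfile.UNRAR_TOOL = '/usr/bin/unrar'

    zf = rarfile.RarFile('example.part01.rar')
    try:
        # setpassword can fail due to bugs in RarFile
        zf.setpassword('secret')
    except Exception:
        pass
    print(zf.namelist())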

View File

@@ -768,10 +768,6 @@ class NzbQueue(object):
def end_job(self, nzo):
""" Send NZO to the post-processing queue """
logging.info('Ending job %s', nzo.final_name)
if self.actives(grabs=False) < 2 and cfg.autodisconnect():
# This was the last job, close server connections
if sabnzbd.downloader.Downloader.do:
sabnzbd.downloader.Downloader.do.disconnect()
# Notify assembler to call postprocessor
if not nzo.deleted:
@@ -861,6 +857,7 @@ class NzbQueue(object):
for nzo in self.__nzo_list:
if not nzo.futuretype and not nzo.files and nzo.status not in (Status.PAUSED, Status.GRABBING):
empty.append(nzo)
for nzo in empty:
self.end_job(nzo)

View File

@@ -41,7 +41,7 @@ import sabnzbd
from sabnzbd.constants import GIGI, ATTRIB_FILE, JOB_ADMIN, \
DEFAULT_PRIORITY, LOW_PRIORITY, NORMAL_PRIORITY, \
PAUSED_PRIORITY, TOP_PRIORITY, DUP_PRIORITY, REPAIR_PRIORITY, \
RENAMES_FILE, Status, PNFO
RENAMES_FILE, MAX_BAD_ARTICLES, Status, PNFO
from sabnzbd.misc import to_units, cat_to_opts, cat_convert, sanitize_foldername, \
get_unique_path, get_admin_path, remove_all, sanitize_filename, globber_full, \
int_conv, set_permissions, format_time_string, long_path, trim_win_path, \
@@ -307,13 +307,27 @@ class NzbFile(TryList):
self.blocks = int(blocks)
def get_article(self, server, servers):
""" Get next article to be downloaded """
""" Get next article to be downloaded from this server
Returns None when there are still articles to try
Returns False when all articles are tried
"""
# Make sure all articles have tried this server before
# adding to the NZF-TryList, otherwise there will be stalls!
tried_all_articles = True
for article in self.articles:
article = article.get_article(server, servers)
if article:
return article
article_return = article.get_article(server, servers)
if article_return:
return article_return
elif tried_all_articles and not article.server_in_try_list(server):
tried_all_articles = False
self.add_to_try_list(server)
# We are sure they are all tried
if tried_all_articles:
self.add_to_try_list(server)
return False
# Still articles left to try
return None
def reset_all_try_lists(self):
""" Clear all lists of visited servers """
@@ -1059,7 +1073,7 @@ class NzbObject(TryList):
self.prospective_add(nzf)
# Sometimes a few CRC errors are still fine, so we continue
if self.bad_articles > 5:
if self.bad_articles > MAX_BAD_ARTICLES:
self.abort_direct_unpacker()
post_done = False
@@ -1259,7 +1273,7 @@ class NzbObject(TryList):
while blocks_already < self.bad_articles and extrapars_sorted:
new_nzf = extrapars_sorted.pop()
# Reset NZF TryList, in case something was on it before it became extrapar
new_nzf.reset_try_list()
new_nzf.reset_all_try_lists()
self.add_parfile(new_nzf)
self.extrapars[parset] = extrapars_sorted
blocks_already = blocks_already + int_conv(new_nzf.blocks)
@@ -1282,6 +1296,12 @@ class NzbObject(TryList):
""" Determine amount of articles present on servers
and return (gross available, nett) bytes
"""
# Few missing articles in RAR-only job might still work
if self.bad_articles <= MAX_BAD_ARTICLES:
logging.debug('Download Quality: bad-articles=%s', self.bad_articles)
return True, 200
# Do the full check
need = 0L
pars = 0L
short = 0L
@@ -1368,6 +1388,7 @@ class NzbObject(TryList):
def get_article(self, server, servers):
article = None
nzf_remove_list = []
tried_all_articles = True
for nzf in self.files:
if nzf.deleted:
@@ -1391,16 +1412,21 @@ class NzbObject(TryList):
article = nzf.get_article(server, servers)
if article:
break
if article == None:
# None is returned by NZF when server is not tried for all articles
tried_all_articles = False
# Remove all files for which admin could not be read
for nzf in nzf_remove_list:
nzf.deleted = True
self.files.remove(nzf)
# If cleanup emptied the active files list, end this job
if nzf_remove_list and not self.files:
sabnzbd.NzbQueue.do.end_job(self)
if not article:
# Only add to trylist when server has been tried for all articles of all NZF's
if not article and tried_all_articles:
# No articles for this server, block for next time
self.add_to_try_list(server)
return article
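
The anti-stalling contract above is easiest to see in a toy model (hypothetical data structures, not SABnzbd's classes): returning an article hands out work, False means this server has genuinely tried every article and may go on the try-list, and None means some articles were still in transit, so the try-list must be left alone for now.

    def get_article_sketch(articles, server):
        tried_all = True
        for article in articles:
            if server in article['tried']:
                continue
            if article['in_transit']:
                # Another connection holds it: this server has not
                # really tried the article yet
                tried_all = False
                continue
            article['tried'].add(server)
            return article          # work to do
        if tried_all:
            return False            # safe to add to the try-list
        return None                 # do NOT mark as tried yet

    articles = [{'tried': set(), 'in_transit': True}]
    assert get_article_sketch(articles, 'news1') is None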

View File

@@ -308,6 +308,11 @@ def process_job(nzo):
# Try to get more par files
return False
# If we don't need extra par2, we can disconnect
if sabnzbd.nzbqueue.NzbQueue.do.actives(grabs=False) == 0 and cfg.autodisconnect():
# This was the last job, close server connections
sabnzbd.downloader.Downloader.do.disconnect()
# Sanitize the resulting files
if sabnzbd.WIN32:
sanitize_files_in_folder(workdir)