Compare commits


9 Commits

Author SHA1 Message Date
Safihre
bcc4dd75cf Update text files for 2.2.1 2017-08-25 21:58:48 +02:00
Safihre
97711ca82e Revert "Remove locks from ArticleCache"
This reverts commit 5e7558ce4a.
2017-08-25 21:57:16 +02:00
Safihre
e782237f27 More logging when adding NZBs 2017-08-25 21:49:24 +02:00
Safihre
52bb156c08 Ignore unpack errors in duplicate rarsets
Multipar and par2tbb will detect and log them so we can remove them, but par2cmdline will not.
2017-08-25 16:58:38 +02:00
Safihre
4361d82ddd Duplicate files in NZB could result in broken unpack after repair
Par2 would detect them but not use them, so the ".1" files would later be used for unpack, even though they are not a real set.
2017-08-25 16:57:47 +02:00
Safihre
017cf8f285 Do not fail a job if recursive unpack fails
The user can handle it; we did our part.
2017-08-25 09:15:14 +02:00
Safihre
03cdf6ed5d Sync translatable texts from develop
To avoid conflicts on Launchpad
2017-08-24 23:40:37 +02:00
Safihre
cf347a8e90 Original files would be deleted after a MultiPar rename 2017-08-24 23:36:36 +02:00
Safihre
f06afe43e1 Use fileobj to prevent having to chdir, which can crash on macOS
If something is "wrong" with the current directory, for example when SABnzbd is started by macOS security in a sandbox after downloading, this function can fail and break the adding of NZBs.
We have to use a fileobj; otherwise GZip will put the whole path inside the archive.
2017-08-24 16:36:13 +02:00
7 changed files with 191 additions and 49 deletions

PKG-INFO

@@ -1,7 +1,7 @@
 Metadata-Version: 1.0
 Name: SABnzbd
-Version: 2.2.1RC2
-Summary: SABnzbd-2.2.1RC2
+Version: 2.2.1
+Summary: SABnzbd-2.2.1
 Home-page: https://sabnzbd.org
 Author: The SABnzbd Team
 Author-email: team@sabnzbd.org
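PKG-INFO is generated by the packaging tooling rather than edited by hand; the fields above map onto the project's setup() metadata. A minimal sketch of the corresponding setup.py call, with values taken from this diff (the real setup.py contains more and may differ):

    from distutils.core import setup

    setup(
        name='SABnzbd',               # becomes "Name:"
        version='2.2.1',              # becomes "Version:"
        description='SABnzbd-2.2.1',  # becomes "Summary:"
        url='https://sabnzbd.org',    # becomes "Home-page:"
        author='The SABnzbd Team',
        author_email='team@sabnzbd.org',
    )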

README.mkd

@@ -1,15 +1,98 @@
-Release Notes - SABnzbd 2.2.1 Release Candidate 2
+Release Notes - SABnzbd 2.2.1
 =========================================================
 ## Changes since 2.2.0
-- Allow up to 5 bad articles for jobs with no or little par2
-- Only auto-disconnect after first run of verification
-- Warning is shown when password-file is too large
+- Failure of recursive unpacking no longer fails whole job
+- Failure of unpacking of duplicate RAR no longer fails whole job
 ## Bugfixes since 2.2.0
 - Some users were experiencing downloads or pre-check being stuck at 99%
+- Allow up to 5 bad articles for jobs with no or little par2
 - Fixed RarFile error during unpacking
-- Unpacking of many archives could fail
+- Warn user when password-file is too large
 - Remove email addresses settings from log export
 - Block server longer on 'Download limit exceeded' errors
+- Only auto-disconnect after first run of verification
+- Windows: If repair renamed a job, the correct renamed file was deleted
+- Windows: Unpacking of downloads with many archives could fail
 - macOS: Adding jobs could fail without any error
+
+Release Notes - SABnzbd 2.2.0
+=========================================================
+NOTE: Due to changes in this release, the queue will be converted when 2.2.0
+is started for the first time. Job order, settings and data will be
+preserved, but all jobs will be unpaused and URLs that did not finish
+fetching before the upgrade will be lost!
+## Changes since 2.1.0
+- Direct Unpack: Jobs will start unpacking during the download, reduces
+  post-processing time but requires a capable hard drive. Only works for jobs
+  that do not need repair. Will be enabled if your incomplete folder-speed > 40MB/s
+- Reduced memory usage, especially with larger queues
+- Graphical overview of server-usage on Servers page
+- Notifications can now be limited to certain Categories
+- Removed 5 second delay between fetching URLs
+- Each item in the Queue and File list now has Move to Top/Bottom buttons
+- Add option to only tag a duplicate job without pausing or removing it
+- New option "History Retention" to automatically purge jobs from History
+- Jobs outside server retention are processed faster
+- Obfuscated filenames are renamed during downloading, if possible
+- Disk-space is now checked before writing files
+- Add "Retry All Failed" button to Glitter
+- Smoother animations in Firefox (disabled previously due to FF high-CPU usage)
+- Show missing articles in MB instead of number of articles
+- Better indication of verification process before and after repair
+- Remove video and audio rating icons from Queue
+- Show vote buttons instead of video and audio rating buttons in History
+- If enabled, replace dots in filenames also when there are spaces already
+- Handling of par2 files made more robust
+- All par2 files are only downloaded when enabled, not on enable_par_cleanup
+- Update GNTP bindings to 1.0.3
+- max_art_opt and replace_illegal moved from Switches to Specials
+- Removed Specials par2_multicore and allow_streaming
+- Windows: Full unicode support when calling repair and unpack
+- Windows: Move enable_multipar to Specials
+- Windows: MultiPar verification of a job is skipped after blocks are fetched
+- Windows & macOS: removed par2cmdline in favor of par2tbb/MultiPar
+- Windows & macOS: Updated WinRAR to 5.5.0
+## Bugfixes since 2.1.0
+- Shutdown/suspend did not work on some Linux systems
+- Standby/Hibernate was not working on Windows
+- Deleting a job could result in write errors
+- Display warning if "Extra par2 parameters" turn out to be wrong
+- RSS URLs with commas in the URL were broken
+- Fixed some "Saving failed" errors
+- Fixed crashing URLGrabber
+- Jobs with renamed files are now correctly handled when using Retry
+- Disk-space readings could be updated incorrectly
+- Correct redirect after enabling HTTPS in the Config
+- Fix race-condition in Post-processing
+- History would not always show latest changes
+- Convert HTML in error messages
+- In some cases not all RAR-sets were unpacked
+- Fixed unicode error during Sorting
+- Faulty pynotify could stop shutdown
+- Categories with ' in them could result in SQL errors
+- Special characters like []!* in filenames could break repair
+- Wizard was always accessible, even with username and password set
+- Correct value in "Speed" Extra History Column
+- Not all texts were shown in the selected Language
+- Various CSS fixes in Glitter and the Config
+- Catch "error 0" when using HTTPS on some Linux platforms
+- Warning is shown when many files with duplicate filenames are discarded
+- Improve zeroconf/bonjour by sending HTTPS setting and auto-discover of IP
+- Windows: Fix error in MultiPar-code when first par2-file was damaged
+- macOS: Catch "Protocol wrong type for socket" errors
+## Translations
+- Added Hebrew translation by ION IL; many other languages updated.
+## Deprecation notices
+- Option to limit Servers to specific Categories is now scheduled
+  to be removed in the next release.
+## Upgrading from 2.1.x and older
+- Finish queue

po/main/SABnzbd.pot

@@ -5,14 +5,14 @@
 #
 msgid ""
 msgstr ""
-"Project-Id-Version: SABnzbd-2.2.0-develop\n"
+"Project-Id-Version: SABnzbd-2.3.0-develop\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: shypike@sabnzbd.org\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=ASCII\n"
 "Content-Transfer-Encoding: 7bit\n"
-"POT-Creation-Date: 2017-08-19 16:05+W. Europe Daylight Time\n"
+"POT-Creation-Date: 2017-08-25 09:18+W. Europe Daylight Time\n"
 "Generated-By: pygettext.py 1.5\n"
@@ -40,10 +40,22 @@ msgstr ""
 msgid "par2 binary... NOT found!"
 msgstr ""
 #: SABnzbd.py [Error message] # SABnzbd.py [Error message]
 msgid "Verification and repair will not be possible."
 msgstr ""
+#: SABnzbd.py [Error message]
+msgid "MultiPar binary... NOT found!"
+msgstr ""
+#: SABnzbd.py [Warning message]
+msgid "Your UNRAR version is %s, we recommend version %s or higher.<br />"
+msgstr ""
+#: SABnzbd.py [Error message]
+msgid "Downloads will not unpacked."
+msgstr ""
 #: SABnzbd.py [Error message]
 msgid "unrar binary... NOT found"
 msgstr ""
@@ -435,10 +447,14 @@ msgstr ""
 msgid "Error removing %s"
 msgstr ""
-#: sabnzbd/dirscanner.py [Warning message]
+#: sabnzbd/dirscanner.py [Warning message] # sabnzbd/rss.py [Warning message]
 msgid "Cannot read %s"
 msgstr ""
+#: sabnzbd/dirscanner.py [Error message]
+msgid "Error while adding %s, removing"
+msgstr ""
 #: sabnzbd/dirscanner.py [Error message] # sabnzbd/dirscanner.py [Error message]
 msgid "Cannot read Watched Folder %s"
 msgstr ""
@@ -664,7 +680,7 @@ msgstr ""
 msgid "Undefined server!"
 msgstr ""
-#: sabnzbd/interface.py
+#: sabnzbd/interface.py # sabnzbd/interface.py
 msgid "Incorrect parameter"
 msgstr ""
@@ -925,7 +941,7 @@ msgid "Main packet not found..."
 msgstr ""
 #: sabnzbd/newsunpack.py # sabnzbd/newsunpack.py
-msgid "Invalid par2 files, cannot verify or repair"
+msgid "Invalid par2 files or invalid PAR2 parameters, cannot verify or repair"
 msgstr ""
 #: sabnzbd/newsunpack.py # sabnzbd/newsunpack.py
@@ -1763,6 +1779,14 @@ msgstr ""
 msgid "Disable quota management"
 msgstr ""
+#: sabnzbd/skintext.py [Config->Scheduler]
+msgid "Pause jobs with category"
+msgstr ""
+#: sabnzbd/skintext.py [Config->Scheduler]
+msgid "Resume jobs with category"
+msgstr ""
 #: sabnzbd/skintext.py [Prowl priority] # sabnzbd/skintext.py [Prowl priority] # sabnzbd/skintext.py [Three way switch for duplicates]
 msgid "Off"
 msgstr ""
@@ -2969,6 +2993,14 @@ msgstr ""
 msgid "Detect identical episodes in series (based on \"name/season/episode\" of items in your History)"
 msgstr ""
+#: sabnzbd/skintext.py
+msgid "Allow proper releases"
+msgstr ""
+#: sabnzbd/skintext.py
+msgid "Bypass series duplicate detection if PROPER, REAL or REPACK is detected in the download name"
+msgstr ""
 #: sabnzbd/skintext.py [Four way switch for duplicates]
 msgid "Discard"
 msgstr ""
@@ -4578,7 +4610,11 @@ msgid ""
 msgstr ""
 #: sabnzbd/skintext.py
-msgid "In order to download from Usenet you will require access to a provider. Your ISP may provide you with access, however a premium provider is recommended. Don't have a Usenet provider? We recommend trying %s."
+msgid "In order to download from usenet you will require access to a provider. Your ISP may provide you with access, however a premium provider is recommended."
 msgstr ""
+#: sabnzbd/skintext.py
+msgid "Don't have a usenet provider? We recommend trying %s."
+msgstr ""
 #: sabnzbd/tvsort.py [Error message]
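Entries like these are lookup keys for SABnzbd's translation layer. For readers unfamiliar with gettext templates, a minimal sketch of how a compiled catalog built from this .pot would be consulted with Python's standard gettext module (paths and language are hypothetical; SABnzbd wraps this in its own T() helper):

    import gettext

    # Load a compiled catalog (.mo); fall back to the raw msgid when a
    # translation is missing, which is why untranslated text still works.
    trans = gettext.translation('SABnzbd', localedir='locale',
                                languages=['nl'], fallback=True)
    T = trans.gettext

    print(T('Cannot read %s') % '/incoming/watched/broken.nzb')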

sabnzbd/__init__.py

@@ -598,27 +598,22 @@ def backup_nzb(filename, data):
 def save_compressed(folder, filename, data):
     """ Save compressed NZB file in folder """
-    # Need to go to the save folder to
-    # prevent the pathname being embedded in the GZ file
-    here = os.getcwd()
-    os.chdir(folder)
     if filename.endswith('.nzb'):
         filename += '.gz'
     else:
         filename += '.nzb.gz'
     logging.info("Backing up %s", os.path.join(folder, filename))
     try:
-        f = gzip.GzipFile(filename, 'wb')
-        f.write(data)
-        f.flush()
-        f.close()
+        # Have to get around the path being put inside the tgz
+        with open(os.path.join(folder, filename), 'wb') as tgz_file:
+            f = gzip.GzipFile(filename, fileobj=tgz_file)
+            f.write(data)
+            f.flush()
+            f.close()
     except:
         logging.error(T('Saving %s failed'), os.path.join(folder, filename))
         logging.info("Traceback: ", exc_info=True)
-    os.chdir(here)

 ##############################################################################
 # Unsynchronized methods
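The fix drops os.chdir() completely: the destination file is opened explicitly and GzipFile only sees the bare name, so nothing depends on the process-wide working directory (which another thread could change, and which macOS sandboxing can make unusable). A minimal standalone sketch of the same pattern, assuming Python 2-era gzip as used here (function and file names are hypothetical):

    import gzip
    import os

    def save_gzipped(folder, name, data):
        """ Write data to folder/name.gz without touching the cwd """
        path = os.path.join(folder, name + '.gz')
        with open(path, 'wb') as raw:
            # The name argument only feeds the gzip FNAME header field;
            # all bytes go through the already-open file object.
            gz = gzip.GzipFile(name, mode='wb', fileobj=raw)
            try:
                gz.write(data)
            finally:
                gz.close()

    save_gzipped('/tmp', 'test.nzb', '<nzb/>')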

sabnzbd/articlecache.py

@@ -30,13 +30,8 @@ from sabnzbd.constants import GIGI, ANFO

 ARTICLE_LOCK = threading.Lock()

 class ArticleCache(object):
-    """ Operations on lists/dicts are atomic enough that we
-        do not have to put locks. Only the cache-size needs
-        a lock since the integer needs to stay synced.
-        With less locking, the decoder and assembler do not
-        have to wait on each other.
-    """
     do = None

     def __init__(self):
@@ -47,9 +42,11 @@ class ArticleCache(object):
         self.__article_table = {}  # Dict of buffered articles
         ArticleCache.do = self

+    @synchronized(ARTICLE_LOCK)
     def cache_info(self):
         return ANFO(len(self.__article_list), abs(self.__cache_size), self.__cache_limit_org)

+    @synchronized(ARTICLE_LOCK)
     def new_limit(self, limit):
         """ Called when cache limit changes """
         self.__cache_limit_org = limit
@@ -59,28 +56,23 @@ class ArticleCache(object):
         self.__cache_limit = min(limit, GIGI)

-    @synchronized(ARTICLE_LOCK)
-    def increase_cache_size(self, value):
-        self.__cache_size += value
-
-    @synchronized(ARTICLE_LOCK)
-    def decrease_cache_size(self, value):
-        self.__cache_size -= value
-
     def reserve_space(self, data):
         """ Is there space left in the set limit? """
         data_size = sys.getsizeof(data) * 64
-        self.increase_cache_size(data_size)
+        self.__cache_size += data_size
         if self.__cache_size + data_size > self.__cache_limit:
             return False
         else:
             return True

+    @synchronized(ARTICLE_LOCK)
     def free_reserve_space(self, data):
         """ Remove previously reserved space """
         data_size = sys.getsizeof(data) * 64
-        self.decrease_cache_size(data_size)
+        self.__cache_size -= data_size
         return self.__cache_size + data_size < self.__cache_limit

+    @synchronized(ARTICLE_LOCK)
     def save_article(self, article, data):
         nzf = article.nzf
         nzo = nzf.nzo
@@ -98,6 +90,7 @@ class ArticleCache(object):
         if self.__cache_limit:
             if self.__cache_limit < 0:
                 self.__add_to_cache(article, data)
             else:
                 data_size = len(data)
@@ -106,7 +99,7 @@ class ArticleCache(object):
                     # Flush oldest article in cache
                     old_article = self.__article_list.pop(0)
                     old_data = self.__article_table.pop(old_article)
-                    self.decrease_cache_size(len(old_data))
+                    self.__cache_size -= len(old_data)

                     # No need to flush if this is a refreshment article
                     if old_article != article:
                         self.__flush_article(old_article, old_data)
@@ -120,6 +113,7 @@ class ArticleCache(object):
             else:
                 self.__flush_article(article, data)

+    @synchronized(ARTICLE_LOCK)
     def load_article(self, article):
         data = None
         nzo = article.nzf.nzo
@@ -127,7 +121,7 @@ class ArticleCache(object):
         if article in self.__article_list:
             data = self.__article_table.pop(article)
             self.__article_list.remove(article)
-            self.decrease_cache_size(len(data))
+            self.__cache_size -= len(data)
         elif article.art_id:
             data = sabnzbd.load_data(article.art_id, nzo.workpath, remove=True,
                                      do_pickle=False, silent=True)
@@ -137,19 +131,21 @@ class ArticleCache(object):
         return data

+    @synchronized(ARTICLE_LOCK)
     def flush_articles(self):
+        self.__cache_size = 0
         while self.__article_list:
             article = self.__article_list.pop(0)
             data = self.__article_table.pop(article)
             self.__flush_article(article, data)
-        self.__cache_size = 0

+    @synchronized(ARTICLE_LOCK)
     def purge_articles(self, articles):
         for article in articles:
             if article in self.__article_list:
                 self.__article_list.remove(article)
                 data = self.__article_table.pop(article)
-                self.decrease_cache_size(len(data))
+                self.__cache_size -= len(data)
             if article.art_id:
                 sabnzbd.remove_data(article.art_id, article.nzf.nzo.workpath)
@@ -172,12 +168,11 @@ class ArticleCache(object):
     def __add_to_cache(self, article, data):
         if article in self.__article_table:
-            self.decrease_cache_size(len(self.__article_table[article]))
+            self.__cache_size -= len(self.__article_table[article])
         else:
             self.__article_list.append(article)
         self.__article_table[article] = data
-        self.increase_cache_size(len(data))
+        self.__cache_size += len(data)

 # Create the instance
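The revert restores coarse locking via SABnzbd's @synchronized decorator: each public cache method holds ARTICLE_LOCK for its whole duration, so the decoder and assembler threads always see a consistent list/table/size triple. A minimal sketch of what such a decorator looks like, assuming the shape of the helper in sabnzbd/decorators.py (the real implementation may differ in detail):

    import threading

    def synchronized(lock):
        """ Decorator factory: run the wrapped function while
            holding the given lock. """
        def wrap(func):
            def call_func(*args, **kwargs):
                with lock:
                    return func(*args, **kwargs)
            return call_func
        return wrap

    ARTICLE_LOCK = threading.Lock()

    class Cache(object):
        @synchronized(ARTICLE_LOCK)
        def flush(self):
            # Only one thread at a time can execute this body
            pass

Note that with a plain non-reentrant Lock, a synchronized method must not call another synchronized method on the same lock, which is presumably why this revert also replaces the locked increase/decrease helper calls with direct mutations of the now-protected counter.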

sabnzbd/dirscanner.py

@@ -73,6 +73,7 @@ def is_archive(path):
             zf = zipfile.ZipFile(path)
             return 0, zf, '.zip'
         except:
+            logging.info(T('Cannot read %s'), path, exc_info=True)
             return -1, None, ''
     elif rarfile.is_rarfile(path):
         try:
@@ -81,14 +82,17 @@ def is_archive(path):
             zf = rarfile.RarFile(path)
             return 0, zf, '.rar'
         except:
+            logging.info(T('Cannot read %s'), path, exc_info=True)
             return -1, None, ''
     elif is_sevenfile(path):
         try:
             zf = SevenZip(path)
             return 0, zf, '.7z'
         except:
+            logging.info(T('Cannot read %s'), path, exc_info=True)
             return -1, None, ''
     else:
+        logging.info('Archive %s is not a real archive!', os.path.basename(path))
         return 1, None, ''
@@ -127,17 +131,24 @@ def ProcessArchiveFile(filename, path, pp=None, script=None, cat=None, catdir=No
             try:
                 data = zf.read(name)
             except:
+                logging.error(T('Cannot read %s'), name, exc_info=True)
                 zf.close()
                 return -1, []
             name = os.path.basename(name)
             if data:
+                nzo = None
                 try:
                     nzo = nzbstuff.NzbObject(name, pp, script, data, cat=cat, url=url,
                                              priority=priority, nzbname=nzbname)
                     if not nzo.password:
                         nzo.password = password
+                except (TypeError, ValueError) as e:
+                    # Duplicate or empty, ignore
+                    pass
                 except:
-                    nzo = None
+                    # Something else is wrong, show error
+                    logging.error(T('Error while adding %s, removing'), name, exc_info=True)
                 if nzo:
                     if nzo_id:
                         # Re-use existing nzo_id, when a "future" job gets it payload
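The reshaped try/except is the substance of "More logging when adding NZBs": expected failures (duplicate or empty NZBs, raised as TypeError/ValueError) stay silent, while anything unexpected now leaves a full traceback in the log instead of silently discarding the job. A standalone sketch of the pattern, where parse_nzb is a hypothetical stand-in for nzbstuff.NzbObject:

    import logging

    def parse_nzb(name, data):
        """ Hypothetical parser: raises ValueError on empty input """
        if not data:
            raise ValueError('empty NZB')
        return (name, data)

    def add_job(name, data):
        nzo = None
        try:
            nzo = parse_nzb(name, data)
        except (TypeError, ValueError):
            # Expected: duplicate or empty input, ignore quietly
            pass
        except Exception:
            # Unexpected: record the traceback so the failure is visible
            logging.error('Error while adding %s, removing', name, exc_info=True)
        return nzo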

sabnzbd/newsunpack.py

@@ -522,6 +522,23 @@ def rar_unpack(nzo, workdir, workdir_complete, delete, one_folder, rars):
         logging.debug('rar_unpack(): Newfiles: %s', newfiles)
         extracted_files.extend(newfiles)

+        # Do not fail if this was a recursive unpack
+        if fail and rarpath.startswith(workdir_complete):
+            # Do not delete the files, leave it to user!
+            logging.info('Ignoring failure to do recursive unpack of %s', rarpath)
+            fail = 0
+            success = True
+            newfiles = []
+
+        # Do not fail if this was maybe just some duplicate fileset
+        # Multipar and par2tbb will detect and log them, par2cmdline will not
+        if fail and rar_set.endswith(('.1', '.2')):
+            # Just in case, we leave the raw files
+            logging.info('Ignoring failure of unpack for possible duplicate file %s', rarpath)
+            fail = 0
+            success = True
+            newfiles = []
+
         # Delete the old files if we have to
         if success and delete and newfiles:
             for rar in rars:
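Both guards have the same shape: when the unpack failed but the archive was either produced by an earlier unpack (it lives under workdir_complete) or looks like a duplicate rar set, the failure is downgraded, the job is marked successful, and any extracted leftovers are kept for the user. Condensed as a sketch (the helper name is hypothetical; the real code inlines the checks as shown above):

    def ignorable_unpack_failure(rarpath, rar_set, workdir_complete):
        """ True when a failed unpack should not fail the whole job """
        # 1) Recursive unpack: the rar itself came out of a previous unpack
        recursive = rarpath.startswith(workdir_complete)
        # 2) Probable duplicate rar set (set name ends in ".1" or ".2")
        duplicate = rar_set.endswith(('.1', '.2'))
        return recursive or duplicate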
@@ -1525,7 +1542,12 @@ def PAR_Verify(parfile, parfile_nzf, nzo, setname, joinables, single=False):
                        verifynum += 1
                        nzo.set_action_line(T('Verifying'), '%02d/%02d' % (verifynum, verifytotal))
                        nzo.status = Status.VERIFYING
-                       datafiles.append(TRANS(m.group(1)))
+
+                       # Remove redundant extra files that are just duplicates of original ones
+                       if 'duplicate data blocks' in line:
+                           used_for_repair.append(TRANS(m.group(1)))
+                       else:
+                           datafiles.append(TRANS(m.group(1)))
                        continue

                # Verify done
@@ -1947,7 +1969,7 @@ def MultiPar_Verify(parfile, parfile_nzf, nzo, setname, joinables, single=False)
     if renames:
         # If success, we also remove the possibly previously renamed ones
         if finished:
-            reconstructed.extend(nzo.renames)
+            reconstructed.extend(renames.values())

         # Adding to the collection
         nzo.renamed_file(renames)