Compare commits

31 Commits

Author SHA1 Message Date
Safihre
0298beac15 Update text files for 2.3.6 RC 1 2018-12-17 13:56:34 +01:00
SABnzbd Automation
be6f047e31 Automatic translation update 2018-12-16 18:01:50 +00:00
Safihre
e8206371e4 Update Wizard example URL 2018-12-14 08:41:00 +01:00
SABnzbd Automation
6609248fce Automatic translation update 2018-12-06 08:32:33 +00:00
Safihre
0ae5c7f8aa Code improvements 2018-11-28 09:23:11 +01:00
Sander Jo
02b6f63156 Option require_modern_tls if you want TLS 1.2 or higher for NNTPS 2018-11-23 15:28:03 +01:00
Safihre
2b665667af Unavailable feeds would crash reading 2018-11-20 09:42:13 +01:00
jcfp
ad7fc240c7 help output: add -1 param for logging, 7z as supported file ext (#1192)
* help output: add -1 param for logging, 7z as supported file ext

* stop the universe from expanding
2018-11-18 09:02:37 +01:00
jcfp
aef878a0f2 update snapcraft url
the old link only displays a 'your session has expired' message...
2018-11-01 13:08:47 +01:00
Erik Berkun-Drevnig
2a967b62d9 Add Travis and AppVeyor badges (#1186)
* Add Travis and AppVeyor badges

* Add snap, issue resolution and license badges

* Switch master to develop branch

* Update README.md
2018-10-31 09:48:57 +01:00
Safihre
4c5ca149ba Update MultiPar to v1.3.0.2 2018-10-30 09:09:05 +01:00
jcfp
54d6e0dc21 fix extension filters in linux tray
correct typo (nbz) in filter name and bring the extension filters in line with the supported types (cf. sabnzbd/constants.py:99-100)
2018-10-29 22:17:23 +01:00
Erik Berkun-Drevnig
ecb1403776 Add initial snap support (#1183)
* Add initial snap support
* Apply review feedback
* Fix armhf and arm64 builds
* Use PPA and build lang files
* Add openssl for x86
* Remove unnecessary stage-packages
* Improve arch grammar
* Add back dev packages for building extensions
* Add back missing SSL
* Add icon
* Update snapcraft.yaml
2018-10-29 09:50:28 +01:00
Safihre
7e5c6d1c04 Update text files for 2.3.6 Beta 1 2018-10-26 09:27:59 +02:00
Safihre
96b140dee0 Update included Python license 2018-10-26 09:27:42 +02:00
Safihre
2e098b641f Update UnRar to 5.61 for macOS and Windows 2018-10-26 09:15:43 +02:00
Sander Jo
6678cb9d56 Remove \x00 from par2 creator info: uniformed method 2018-10-25 07:58:06 +02:00
Sander Jo
4b67405d16 Remove \x00 from par2 creator info 2018-10-19 08:45:58 +02:00
Safihre
7463a4abdc Existing RSS-feeds don't have the infourl yet
Rookie mistake.
2018-10-14 09:39:38 +02:00
SABnzbd Automation
163523048b Automatic translation update 2018-10-09 13:55:06 +00:00
Safihre
4892bc18f3 Prevent endless loop when disk-space is exceeded
It was a nice idea to keep retrying to save the job, but it also breaks the post-processing actions, as the job gets removed before it is ready to be removed.
Closes #1095
2018-10-09 15:14:49 +02:00
Safihre
217b2436f2 Detect RSS-feed login redirect and show specific error
Before, specific errors detected earlier would be overwritten by the general "No entries" warning.
2018-10-09 14:26:49 +02:00
Safihre
a9247ba934 No URL-encoding of RSS-URLs with commas
Some indexers don't like that!
2018-10-09 13:55:58 +02:00
Safihre
8b2a6ef825 Link to details page of RSS-feed item if provided 2018-10-09 10:04:21 +02:00
Safihre
320495671b Add RSS-source icon to all tabs 2018-10-08 13:22:34 +02:00
Safihre
5ab872afa0 Assume correct SSL if test-host disabled
Closes #1179
2018-10-03 08:04:43 +02:00
Safihre
7ecb31805e Add API-capability to modify RSS filters
Closes #1154
2018-09-16 14:51:11 +02:00
Safihre
e8ebeb843c Retry All would not retry URLs
Closes #1164
2018-09-16 13:32:13 +02:00
Safihre
3840678913 Rename thread-database getter
Each thread needs its own DB-connection (see the Python docs), so for each CherryPy thread we store that thread's connection. The name of the function made it sound like we create a whole new connection, which isn't the case.
2018-09-16 13:31:43 +02:00
Safihre
da7082b17e Sending Retry for already completed jobs would give traceback
Due to missing file/folder.
See: https://forums.sabnzbd.org/viewtopic.php?f=2&t=23587
2018-09-16 10:11:32 +02:00
Safihre
6198f95e1e Do not log null-bytes in par2-creator
#1153
2018-09-11 16:21:06 +02:00
61 changed files with 10781 additions and 11557 deletions

9
.gitignore vendored
View File

@@ -1,4 +1,4 @@
#Compiled python
# Compiled python
*.py[cod]
# Working folders for Win build
@@ -7,6 +7,13 @@ dist/
locale/
srcdist/
# Snapcraft
parts/
prime/
stage/
snap/.snapcraft/
*.snap
# Generated email templates
email/*.tmpl

View File

@@ -1,5 +1,5 @@
*******************************************
*** This is SABnzbd 2.3.5 ***
*** This is SABnzbd 2.3.6 ***
*******************************************
SABnzbd is an open-source cross-platform binary newsreader.
It simplifies the process of downloading from Usenet dramatically,

View File

@@ -1,4 +1,4 @@
SABnzbd 2.3.5
SABnzbd 2.3.6
-------------------------------------------------------------------------------
0) LICENSE

View File

@@ -1,7 +1,7 @@
Metadata-Version: 1.0
Name: SABnzbd
Version: 2.3.5
Summary: SABnzbd-2.3.5
Version: 2.3.6RC1
Summary: SABnzbd-2.3.6RC1
Home-page: https://sabnzbd.org
Author: The SABnzbd Team
Author-email: team@sabnzbd.org

View File

@@ -1,6 +1,12 @@
SABnzbd - The automated Usenet download tool
============================================
[![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/sabnzbd/sabnzbd.svg)](https://isitmaintained.com/project/sabnzbd/sabnzbd "Average time to resolve an issue")
[![Travis CI](https://travis-ci.org/sabnzbd/sabnzbd.svg?branch=develop)](https://travis-ci.org/sabnzbd/sabnzbd)
[![AppVeryor](https://ci.appveyor.com/api/projects/status/github/sabnzbd/sabnzbd?svg=true&branch=develop)](https://ci.appveyor.com/project/Safihre/sabnzbd)
[![Snap Status](https://build.snapcraft.io/badge/sabnzbd/sabnzbd.svg)](https://snapcraft.io/sabnzbd)
[![License](https://img.shields.io/badge/license-GPL%20v2-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)
SABnzbd is an Open Source Binary Newsreader written in Python.
It's totally free, incredibly easy to use, and works practically everywhere.

View File

@@ -1,25 +1,27 @@
Release Notes - SABnzbd 2.3.5
Release Notes - SABnzbd 2.3.6 RC 1
=========================================================
## Bug fixes since 2.3.4
- Reworked Deobfuscate.py script for much faster renaming
- All scripts can now receive input through environment variables
- Unable to set only one Indexer Category per category
- Could falsely report not enough blocks are available for repair
- Failures in un-(7)zip or file-joining would not fail the job
- Direct Unpack could abort unnecessarily
- Rare crash during file assembly
- Server hostname is now used in warnings and logs
- Improved disk performance measurement
- Overall improvements in stability and reliability
- Windows: MultiPar repair of joinable files could fail
- Windows: Tray icon also shows remaining size when paused
- Windows: Wizard would not default to installer language
- Windows: Update MultiPar to 1.3.0.1
- Windows and macOS: Update UnRar to 5.60
## Improvements and bug fixes since 2.3.6 Beta 1
- New option require_modern_tls forces TLSv1.2 for SSL-connections
- Unavailable feeds could crash RSS-readout
- Linux: Correct supported file extensions of tray icon
- Windows: Update MultiPar to 1.3.0.2
Looking for help with SABnzbd development:
https://www.reddit.com/r/usenet/918nxv/
## Improvements and bug fixes since 2.3.5
- RSS source icon on all tabs of feed overview
- RSS source icon now links to feed details page (if available)
- RSS feed URL's with commas would be wrongly escaped
- Common RSS login problems will show more appropriate error
- Added API-call to modify RSS-filters
- Exceeding disk space could result in endless retry-loop
- History Retry All would not retry failed NZB URL-fetches
- API-call to Retry a job could result in an error
- Assume correct SSL/certificate setup if test-host was disabled
- Better logging of par2-file creator
- Windows and macOS: Update UnRar to 5.61
Still looking for help with SABnzbd (Python 3) development!
https://www.reddit.com/r/usenet/comments/918nxv/
## Upgrading from 2.2.x and older
- Finish queue

View File

@@ -182,7 +182,7 @@ def print_help():
print " -s --server <srv:port> Listen on server:port [*]"
print " -t --templates <templ> Template directory [*]"
print
print " -l --logging <0..2> Set logging level (-1=off, 0= least, 2= most) [*]"
print " -l --logging <-1..2> Set logging level (-1=off, 0= least, 2= most) [*]"
print " -w --weblogging Enable cherrypy access logging"
print
print " -b --browser <0..1> Auto browser launch (0= off, 1= on) [*]"
@@ -209,7 +209,7 @@ def print_help():
print " --new Run a new instance of SABnzbd"
print ""
print "NZB (or related) file:"
print " NZB or zipped NZB file, with extension .nzb, .zip, .rar, .gz, or .bz2"
print " NZB or compressed NZB file, with extension .nzb, .zip, .rar, .7z, .gz, or .bz2"
print ""

View File

@@ -390,9 +390,10 @@
<th class="no-sort">$T('link-download')</th>
<th>$T('rss-filter')</th>
<th>$T('size')</th>
<th width="65%">$T('sort-title')</th>
<th width="60%">$T('sort-title')</th>
<th>$T('category')</th>
<th class="default-sort">$T('nzo-age')</th>
<th>$T('source')</th>
</tr>
</thead>
<!--#for $job in $matched#-->
@@ -411,6 +412,13 @@
<td>$job['title']</td>
<td>$job['cat']</td>
<td data-sort-value="$job['age_ms']">$job['age']</td>
<td data-sort-value="$job['baselink']" title="$job['baselink']">
<!--#if not $job['infourl']#-->
<div class="favicon source-icon" style="background-image: url(//$job['baselink']/favicon.ico);" data-domain="$job['baselink']"></div>
<!--#else#-->
<a class="favicon source-icon" href="$job['infourl']" target="_blank" style="background-image: url(//$job['baselink']/favicon.ico);" data-domain="$job['baselink']"></a>
<!--#end if#-->
</td>
</tr>
<!--#end for#-->
</table>
@@ -426,9 +434,10 @@
<th class="no-sort">$T('link-download')</th>
<th>$T('rss-filter')</th>
<th>$T('size')</th>
<th width="65%">$T('sort-title')</th>
<th width="60%">$T('sort-title')</th>
<th>$T('category')</th>
<th class="default-sort">$T('nzo-age')</th>
<th>$T('source')</th>
</tr>
</thead>
<!--#for $job in $unmatched#-->
@@ -447,6 +456,13 @@
<td>$job['title']</td>
<td>$job['cat']</td>
<td data-sort-value="$job['age_ms']">$job['age']</td>
<td data-sort-value="$job['baselink']" title="$job['baselink']">
<!--#if not $job['infourl']#-->
<div class="favicon source-icon" style="background-image: url(//$job['baselink']/favicon.ico);" data-domain="$job['baselink']"></div>
<!--#else#-->
<a class="favicon source-icon" href="$job['infourl']" target="_blank" style="background-image: url(//$job['baselink']/favicon.ico);" data-domain="$job['baselink']"></a>
<!--#end if#-->
</td>
</tr>
<!--#end for#-->
</table>
@@ -476,8 +492,10 @@
<td>$job['title']</td>
<td>$job['cat']</td>
<td data-sort-value="$job['baselink']" title="$job['baselink']">
<!--#if $job['baselink']#-->
<!--#if not $job['infourl']#-->
<div class="favicon source-icon" style="background-image: url(//$job['baselink']/favicon.ico);" data-domain="$job['baselink']"></div>
<!--#else#-->
<a class="favicon source-icon" href="$job['infourl']" target="_blank" style="background-image: url(//$job['baselink']/favicon.ico);" data-domain="$job['baselink']"></a>
<!--#end if#-->
</td>
</tr>

View File

@@ -573,6 +573,7 @@ h2.activeRSS {
float: left;
margin: 0 6px 0 2px;
text-align: center;
color: black !important;
}
.source-icon span {
top: -3px;

View File

File diff suppressed because it is too large Load Diff

View File

@@ -1,2 +1,2 @@
dojo.hostenv.conditionalLoadModule({"common": ["MochiKit.MochiKit"]});
dojo.hostenv.moduleLoaded("MochiKit.*");
dojo.hostenv.conditionalLoadModule({"common": ["MochiKit.MochiKit"]});
dojo.hostenv.moduleLoaded("MochiKit.*");

View File

@@ -20,7 +20,7 @@
<div class="form-group">
<label for="host" class="col-sm-4 control-label">$T('srv-host')</label>
<div class="col-sm-8">
<input type="text" class="form-control" name="host" id="host" value="$host" placeholder="$T('wizard-example') news.giganews.com" />
<input type="text" class="form-control" name="host" id="host" value="$host" placeholder="$T('wizard-example') news.newshosting.com" />
</div>
</div>
<div class="form-group">

View File

@@ -53,7 +53,7 @@ the various releases.
2.4.2 2.4.1 2005 PSF yes
2.4.3 2.4.2 2006 PSF yes
2.5 2.4 2006 PSF yes
2.5.1 2.5 2007 PSF yes
2.7 2.6 2010 PSF yes
Footnotes:
@@ -89,9 +89,9 @@ license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python
alone or in any derivative version, provided, however, that PSF's
License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative
version prepared by Licensee.
2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation; All Rights
Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make

View File

@@ -1,11 +1,11 @@
The original author of SABnzbd based his work on Pynewsleecher by Freddy@madcowdesease.org.
Few parts of Pynewsleecher have survived the generations of SABnzbd in a
recognizable form.
Still, we wish to thank Freddy for his inspiration.
The home of the Pynewsleecher project:
http://www.madcowdisease.org/mcd/pynewsleecher
The software does not carry any license information.
The original author of SABnzbd based his work on Pynewsleecher by Freddy@madcowdesease.org.
Few parts of Pynewsleecher have survived the generations of SABnzbd in a
recognizable form.
Still, we wish to thank Freddy for his inspiration.
The home of the Pynewsleecher project:
http://www.madcowdisease.org/mcd/pynewsleecher
The software does not carry any license information.

View File

@@ -1,8 +1,8 @@
On http://www.brunningonline.net/simon/blog/archives/001835.html,
the author licensed SysTrayIcon.py under a variant of the WTFPL:
> Any road up, help yourself. Consider SysTrayIcon.py to be under an
> "Aleister Crowley" style license - "Do what thou wilt shall be the
> only law".
>
> Err, but don't sue me if it doesn't work. ;-)
On http://www.brunningonline.net/simon/blog/archives/001835.html,
the author licensed SysTrayIcon.py under a variant of the WTFPL:
> Any road up, help yourself. Consider SysTrayIcon.py to be under an
> "Aleister Crowley" style license - "Do what thou wilt shall be the
> only law".
>
> Err, but don't sue me if it doesn't work. ;-)

View File

Binary file not shown.

View File

@@ -8,14 +8,14 @@ msgstr ""
"Project-Id-Version: sabnzbd\n"
"Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
"POT-Creation-Date: 2018-03-15 13:04+0000\n"
"PO-Revision-Date: 2013-05-05 14:50+0000\n"
"Last-Translator: shypike <Unknown>\n"
"PO-Revision-Date: 2018-11-27 23:39+0000\n"
"Last-Translator: scootergrisen <scootergrisen@gmail.com>\n"
"Language-Team: Danish <da@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2018-03-16 05:37+0000\n"
"X-Generator: Launchpad (build 18571)\n"
"X-Launchpad-Export-Date: 2018-11-28 05:48+0000\n"
"X-Generator: Launchpad (build 18826)\n"
#: email/email.tmpl:1
msgid ""
@@ -65,13 +65,13 @@ msgid ""
"<!--#end if#-->\n"
msgstr ""
"##\n"
"## Standard Email skabelon til SABnzbd\n"
"## Dette er en Cheetah skabelon\n"
"## Standard E-mail-skabelon til SABnzbd\n"
"## Dette er en Cheetah-skabelon\n"
"## Dokumentation: http://sabnzbd.wikidot.com/email-templates\n"
"##\n"
"## Linjeskift og blanktegn er betydelig!\n"
"## Linjeskift og blanktegn har betydning!\n"
"##\n"
"## Disse er e-mail-headerne \n"
"## Dette er e-mail-headerne \n"
"To: $to\n"
"From: $from\n"
"Date: $date\n"
@@ -79,7 +79,7 @@ msgstr ""
"job $name\n"
"X-priority: 5\n"
"X-MS-priority: 5\n"
"## Efter dette kommer body, den tomme linje kræves!\n"
"## Herefter kommer kroppen, den tomme linje skal være der!\n"
"\n"
"Hej,\n"
"<!--#if $status #-->\n"
@@ -100,13 +100,13 @@ msgstr ""
"<!--#end for#-->\n"
"<!--#end for#-->\n"
"<!--#if $script!=\"\" #-->\n"
"Output fra bruger script \"$script\" (Exit code = $script_ret):\n"
"Output fra brugerscriptet \"$script\" (Afslutningskode = $script_ret):\n"
"$script_output\n"
"<!--#end if#-->\n"
"<!--#if $status #-->\n"
"Enjoy!\n"
"Hav det godt!\n"
"<!--#else#-->\n"
"Sorry!\n"
"Beklager!\n"
"<!--#end if#-->\n"
#: email/rss.tmpl:1
@@ -138,25 +138,25 @@ msgid ""
"Bye\n"
msgstr ""
"##\n"
"## RSS Email skabelon til SABnzbd\n"
"## Dette er Cheetah skabelon\n"
"## RSS E-mail-skabelon til SABnzbd\n"
"## Dette er en Cheetah-skabelon\n"
"## Dokumentation: http://sabnzbd.wikidot.com/email-templates\n"
"##\n"
"## Linjeskift og blanktegn er betydelig!\n"
"## Linjeskift og blanktegn har betydning!\n"
"##\n"
"## Dette er email headers\n"
"## Dette er e-mai-headere\n"
"To: $to\n"
"From: $from\n"
"Date: $date\n"
"Subject: SABnzbd har tilføjet $antal jobs til køen\n"
"X-priority: 5\n"
"X-MS-priority: 5\n"
"## Efter dette kommer body, den tomme linje kræves!\n"
"## Herefter kommer kroppen, den tomme linje skal være der!\n"
"\n"
"Hej,\n"
"\n"
"SABnzbd har tilføjet $antal job(s) til køen.\n"
"De er fra RSS feed \"$feed\".\n"
"De er fra RSS-feedet \"$feed\".\n"
"<!--#for $job in $jobs#-->\n"
" $job <!--#slurp#-->\n"
"<!--#end for#-->\n"
@@ -189,24 +189,24 @@ msgid ""
"Bye\n"
msgstr ""
"##\n"
"## Dårlig URL Fetch E-mail skabelon for SABnzbd\n"
"## Dette er en Cheetah skabelon\n"
"## Dårlig URL-hentning af E-mail-skabelon til SABnzbd\n"
"## Dette er en Cheetah-skabelon\n"
"## Dokumentation: http://sabnzbd.wikidot.com/email-templates\n"
"##\n"
"## Linjeskift og blanktegn er betydelig!\n"
"## Linjeskift og blanktegn har betydning!\n"
"##\n"
"## Dette er email headers\n"
"## Dette er e-mail-headere\n"
"To: $to\n"
"From: $from\n"
"Date: $date\n"
"Subject: SABnzbd kunne ikke hente en NZB\n"
"X-priority: 5\n"
"X-MS-priority: 5\n"
"## Efter dette kommer body, den tomme linje kræves!\n"
"## Herefter kommer kroppen, den tomme linje skal være der!\n"
"\n"
"Hej,\n"
"\n"
"SABnzbd kunne ikke hente NZB fra $url.\n"
"Fejl meddelelsen er: $msg\n"
"Fejlmeddelelsen er: $msg\n"
"\n"
"Farvel\n"

View File

@@ -12,7 +12,7 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=ASCII\n"
"Content-Transfer-Encoding: 7bit\n"
"POT-Creation-Date: 2018-09-07 10:10+W. Europe Daylight Time\n"
"POT-Creation-Date: 2018-11-28 09:21+W. Europe Standard Time\n"
"Generated-By: pygettext.py 1.5\n"
@@ -890,10 +890,6 @@ msgstr ""
msgid "Starting Repair"
msgstr ""
#: sabnzbd/newsunpack.py [Warning message]
msgid "Par verify failed on %s, while QuickCheck succeeded!"
msgstr ""
#: sabnzbd/newsunpack.py
msgid "Repairing failed, %s"
msgstr ""
@@ -1386,10 +1382,6 @@ msgstr ""
msgid "Download failed - Not on your server(s)"
msgstr ""
#: sabnzbd/postproc.py
msgid "No post-processing because of failed verification"
msgstr ""
#: sabnzbd/postproc.py
msgid "Moving"
msgstr ""

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

File diff suppressed because it is too large Load Diff

View File

@@ -8,14 +8,14 @@ msgstr ""
"Project-Id-Version: sabnzbd\n"
"Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
"POT-Creation-Date: 2018-03-15 13:05+0000\n"
"PO-Revision-Date: 2017-04-10 11:28+0000\n"
"Last-Translator: Safihre <safihre@sabnzbd.org>\n"
"PO-Revision-Date: 2018-11-27 23:30+0000\n"
"Last-Translator: scootergrisen <scootergrisen@gmail.com>\n"
"Language-Team: Danish <da@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2018-03-16 05:38+0000\n"
"X-Generator: Launchpad (build 18571)\n"
"X-Launchpad-Export-Date: 2018-11-28 05:48+0000\n"
"X-Generator: Launchpad (build 18826)\n"
#: NSIS_Installer.nsi
msgid "Show Release Notes"
@@ -23,7 +23,7 @@ msgstr "Vis udgivelsesbemærkninger"
#: NSIS_Installer.nsi
msgid "Start SABnzbd"
msgstr ""
msgstr "Start SABnzbd"
#: NSIS_Installer.nsi
msgid "Support the project, Donate!"
@@ -38,7 +38,7 @@ msgid ""
"The installation directory has changed (now in \"Program Files\"). \\nIf you "
"run SABnzbd as a service, you need to update the service settings."
msgstr ""
"Installationsmappen er ændret (nu i \"Program Files \"). \\nHvis du kører "
"Installationsmappen er ændret (nu i \"Program Files\"). \\nHvis du kører "
"SABnzbd som en tjeneste, skal du opdatere tjenesteindstillingerne."
#: NSIS_Installer.nsi
@@ -55,7 +55,7 @@ msgstr "Skrivebordsikon"
#: NSIS_Installer.nsi
msgid "NZB File association"
msgstr "NZB filtilknytning"
msgstr "NZB-filtilknytning"
#: NSIS_Installer.nsi
msgid "Delete Program"
@@ -70,20 +70,20 @@ msgid ""
"This system requires the Microsoft runtime library VC90 to be installed "
"first. Do you want to do that now?"
msgstr ""
"Dette system kræver, at Microsoft runtime biblioteket VC90 skal installeres "
"først. Ønsker du at gøre det nu?"
"Systemet kræver at Microsoft runtime-biblioteket VC90 skal installeres "
"først. Vil du gøre det nu?"
#: NSIS_Installer.nsi
msgid "Downloading Microsoft runtime installer..."
msgstr "Downloader Microsoft runtime installationsfil..."
msgstr "Downloader Microsoft runtime-installationsfil..."
#: NSIS_Installer.nsi
msgid "Download error, retry?"
msgstr "Download fejl, prøv igen?"
msgstr "Fejl ved download, prøv igen?"
#: NSIS_Installer.nsi
msgid "Cannot install without runtime library, retry?"
msgstr "Kan ikke installere uden runtime bibliotek, prøv igen?"
msgstr "Kan ikke installere uden runtime-bibliotek, prøv igen?"
#: NSIS_Installer.nsi
msgid ""
@@ -91,8 +91,7 @@ msgid ""
"the previous version or `Cancel` to cancel this upgrade."
msgstr ""
"Du kan ikke overskrive en eksisterende installation. \\n\\nKlik `OK` for at "
"fjerne den tidligere version eller `Annuller` for at annullere denne "
"opgradering."
"fjerne den tidligere version eller `Annuller` for at annullere opgraderingen."
#: NSIS_Installer.nsi
msgid "Your settings and data will be preserved."

View File

@@ -202,7 +202,7 @@ def sig_handler(signum=None, frame=None):
INIT_LOCK = Lock()
def connect_db(thread_index=0):
def get_db_connection(thread_index=0):
# Create a connection and store it in the current thread
if not (hasattr(cherrypy.thread_data, 'history_db') and cherrypy.thread_data.history_db):
cherrypy.thread_data.history_db = sabnzbd.database.HistoryDB()
@@ -223,7 +223,7 @@ def initialize(pause_downloader=False, clean_up=False, evalSched=False, repair=0
__SHUTTING_DOWN__ = False
# Set global database connection for Web-UI threads
cherrypy.engine.subscribe('start_thread', connect_db)
cherrypy.engine.subscribe('start_thread', get_db_connection)
# Paused?
pause_downloader = pause_downloader or cfg.start_paused()
@@ -1195,6 +1195,10 @@ def test_cert_checking():
On systems with at least Python > 2.7.9
"""
if sabnzbd.HAVE_SSL_CONTEXT:
# User disabled the test, assume proper SSL certificates
if not cfg.selftest_host():
return True
# Try a connection to our test-host
try:
import ssl
ctx = ssl.create_default_context()
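
The get_db_connection() rename above reflects what the code actually does: SQLite connections cannot be shared between threads, so one HistoryDB connection is kept per CherryPy worker thread. A minimal standalone sketch of the same pattern, using plain threading and threading.local instead of CherryPy's thread_data (the table and file names are made up):

import sqlite3
import threading

_local = threading.local()

def get_db_connection(db_file='history.db'):
    # Create a connection the first time this thread asks for one,
    # then keep reusing that same connection within the thread
    if not getattr(_local, 'history_db', None):
        _local.history_db = sqlite3.connect(db_file)
    return _local.history_db

def worker():
    conn = get_db_connection()
    conn.execute('CREATE TABLE IF NOT EXISTS history (nzo_id TEXT)')

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()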

View File

@@ -65,13 +65,12 @@ from sabnzbd.articlecache import ArticleCache
from sabnzbd.utils.servertests import test_nntp_server_dict
from sabnzbd.bpsmeter import BPSMeter
from sabnzbd.rating import Rating
from sabnzbd.getipaddress import localipv4, publicipv4, ipv6
from sabnzbd.getipaddress import localipv4, publicipv4, ipv6, addresslookup
from sabnzbd.newsunpack import userxbit
from sabnzbd.database import build_history_info, unpack_history_info, HistoryDB
import sabnzbd.notifier
import sabnzbd.rss
import sabnzbd.emailer
import sabnzbd.getipaddress as getipaddress
##############################################################################
# API error messages
@@ -501,7 +500,7 @@ def _api_history(name, output, kwargs):
special = value.lower()
del_files = bool(int_conv(kwargs.get('del_files')))
if special in ('all', 'failed', 'completed'):
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
if special in ('all', 'failed'):
if del_files:
del_job_files(history_db.get_failed_paths(search))
@@ -1149,6 +1148,24 @@ def handle_rss_api(output, kwargs):
feed.set_dict(kwargs)
else:
config.ConfigRSS(name, kwargs)
action = kwargs.get('filter_action')
if action in ('add', 'update'):
# Use the general function, but catch the redirect-raise
try:
kwargs['feed'] = name
sabnzbd.interface.ConfigRss('/').internal_upd_rss_filter(**kwargs)
except cherrypy.HTTPRedirect:
pass
elif action == 'delete':
# Use the general function, but catch the redirect-raise
try:
kwargs['feed'] = name
sabnzbd.interface.ConfigRss('/').internal_del_rss_filter(**kwargs)
except cherrypy.HTTPRedirect:
pass
return name
@@ -1197,7 +1214,7 @@ def build_status(skip_dashboard=False, output=None):
info['ipv6'] = ipv6()
# Dashboard: DNS-check
try:
getipaddress.addresslookup(cfg.selftest_host())
addresslookup(cfg.selftest_host())
info['dnslookup'] = "OK"
except:
info['dnslookup'] = None
@@ -1509,16 +1526,17 @@ def options_list(output):
})
def retry_job(job, new_nzb, password):
def retry_job(job, new_nzb=None, password=None):
""" Re enter failed job in the download queue """
if job:
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
futuretype, url, pp, script, cat = history_db.get_other(job)
if futuretype:
if pp == 'X':
pp = None
sabnzbd.add_url(url, pp, script, cat)
nzo_id = sabnzbd.add_url(url, pp, script, cat)
history_db.remove_history(job)
return nzo_id
else:
path = history_db.get_path(job)
if path:
@@ -1530,8 +1548,13 @@ def retry_job(job, new_nzb, password):
def retry_all_jobs():
""" Re enter all failed jobs in the download queue """
history_db = sabnzbd.connect_db()
return NzbQueue.do.retry_all_jobs(history_db)
# Fetch all retryable folders from History
items = sabnzbd.api.build_history()[0]
nzo_ids = []
for item in items:
if item['retry']:
nzo_ids.append(retry_job(item['nzo_id']))
return nzo_ids
def del_job_files(job_paths):
@@ -1548,7 +1571,7 @@ def del_hist_job(job, del_files):
if path:
PostProcessor.do.delete(job, del_files=del_files)
else:
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
path = history_db.get_path(job)
history_db.remove_history(job)
@@ -1759,7 +1782,7 @@ def build_history(start=None, limit=None, verbose=False, verbose_list=None, sear
# Aquire the db instance
try:
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
close_db = False
except:
# Required for repairs at startup because Cherrypy isn't active yet
@@ -2010,7 +2033,6 @@ def history_remove_failed():
del_job_files(history_db.get_failed_paths())
history_db.remove_failed()
history_db.close()
del history_db
def history_remove_completed():
@@ -2019,4 +2041,3 @@ def history_remove_completed():
history_db = HistoryDB()
history_db.remove_completed()
history_db.close()
del history_db

View File

@@ -81,11 +81,6 @@ class Assembler(Thread):
# Abort all direct unpackers, just to be sure
sabnzbd.directunpacker.abort_all()
# Place job back in queue and wait 30 seconds to hope it gets resolved
self.process(job)
sleep(30)
continue
# Prepare filename
nzo.verify_nzf_filename(nzf)
nzf.filename = sanitize_filename(nzf.filename)

View File

@@ -265,6 +265,7 @@ no_penalties = OptionBool('misc', 'no_penalties', False)
debug_log_decoding = OptionBool('misc', 'debug_log_decoding', False)
ignore_empty_files = OptionBool('misc', 'ignore_empty_files', False)
x_frame_options = OptionBool('misc', 'x_frame_options', True)
require_modern_tls = OptionBool('misc', 'require_modern_tls', False)
# Text values
rss_odd_titles = OptionList('misc', 'rss_odd_titles', ['nzbindex.nl/', 'nzbindex.com/', 'nzbclub.com/'])

View File

@@ -990,7 +990,7 @@ def get_rss():
for feed_uri in feed.uri():
if new_feed_uris and not urlparse(feed_uri).scheme and urlparse(new_feed_uris[-1]).scheme:
# Current one has no scheme but previous one does, append to previous
new_feed_uris[-1] += '%2C' + feed_uri
new_feed_uris[-1] += ',' + feed_uri
have_new_uri = True
continue
# Add full working URL

View File

@@ -400,7 +400,7 @@ class HistoryDB(object):
return name
def get_path(self, nzo_id):
""" Return the `incomplete` path of the job `nzo_id` """
""" Return the `incomplete` path of the job `nzo_id` if it is still there """
t = (nzo_id,)
path = ''
if self.execute('SELECT path FROM history WHERE nzo_id=?', t):
@@ -408,7 +408,9 @@ class HistoryDB(object):
path = self.c.fetchone().get('path')
except AttributeError:
pass
return path
if os.path.exists(path):
return path
return None
def get_other(self, nzo_id):
""" Return additional data for job `nzo_id` """
@@ -421,9 +423,10 @@ class HistoryDB(object):
pp = items.get('pp')
script = items.get('script')
cat = items.get('category')
return dtype, url, pp, script, cat
except (AttributeError, IndexError):
return '', '', '', '', ''
return dtype, url, pp, script, cat
pass
return '', '', '', '', ''
def dict_factory(cursor, row):

View File

@@ -412,7 +412,6 @@ class DirectUnpacker(threading.Thread):
except:
# The user will have to remove it themselves
logging.info('Failed to clean Direct Unpack after aborting %s', rarfile_nzf.filename, exc_info=True)
pass
else:
# We can just remove the whole path
remove_all(extraction_path, recursive=True)

View File

@@ -499,7 +499,7 @@ class MainPage(object):
# No session key check, due to fixed URLs
name = kwargs.get('name')
if name:
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
return ShowString(history_db.get_name(name), history_db.get_script_log(name))
else:
raise Raiser(self.__root)
@@ -1106,7 +1106,7 @@ class HistoryPage(object):
@secured_expose(check_session_key=True)
def purge(self, **kwargs):
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
history_db.remove_history()
raise queueRaiser(self.__root, kwargs)
@@ -1133,7 +1133,7 @@ class HistoryPage(object):
@secured_expose(check_session_key=True)
def purge_failed(self, **kwargs):
del_files = bool(int_conv(kwargs.get('del_files')))
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
if del_files:
del_job_files(history_db.get_failed_paths())
history_db.remove_failed()
@@ -1173,7 +1173,7 @@ class HistoryPage(object):
# No session key check, due to fixed URLs
name = kwargs.get('name')
if name:
history_db = sabnzbd.connect_db()
history_db = sabnzbd.get_db_connection()
return ShowString(history_db.get_name(name), history_db.get_script_log(name))
else:
raise Raiser(self.__root)
@@ -1371,7 +1371,7 @@ SPECIAL_BOOL_LIST = \
'rss_filenames', 'ipv6_hosting', 'keep_awake', 'empty_postproc', 'html_login', 'wait_for_dfolder',
'max_art_opt', 'warn_empty_nzb', 'enable_bonjour', 'reject_duplicate_files', 'warn_dupl_jobs',
'replace_illegal', 'backup_for_duplicates', 'disable_api_key', 'api_logging',
'ignore_empty_files', 'x_frame_options'
'ignore_empty_files', 'x_frame_options', 'require_modern_tls'
)
SPECIAL_VALUE_LIST = \
('size_limit', 'folder_max_length', 'fsys_type', 'movie_rename_limit', 'nomedia_marker',
@@ -1877,9 +1877,13 @@ class ConfigRss(object):
@secured_expose(check_session_key=True, check_configlock=True)
def upd_rss_filter(self, **kwargs):
""" Wrapper, so we can call from api.py """
self.internal_upd_rss_filter(**kwargs)
def internal_upd_rss_filter(self, **kwargs):
""" Save updated filter definition """
try:
cfg = config.get_rss()[kwargs.get('feed')]
feed_cfg = config.get_rss()[kwargs.get('feed')]
except KeyError:
raise rssRaiser(self.__root, kwargs)
@@ -1893,14 +1897,14 @@ class ConfigRss(object):
enabled = kwargs.get('enabled', '0')
if filt:
cfg.filters.update(int(kwargs.get('index', 0)), (cat, pp, script, kwargs.get('filter_type'),
feed_cfg.filters.update(int(kwargs.get('index', 0)), (cat, pp, script, kwargs.get('filter_type'),
platform_encode(filt), prio, enabled))
# Move filter if requested
index = int_conv(kwargs.get('index', ''))
new_index = kwargs.get('new_index', '')
if new_index and int_conv(new_index) != index:
cfg.filters.move(int(index), int_conv(new_index))
feed_cfg.filters.move(int(index), int_conv(new_index))
config.save_config()
self.__evaluate = False
@@ -1918,13 +1922,17 @@ class ConfigRss(object):
@secured_expose(check_session_key=True, check_configlock=True)
def del_rss_filter(self, **kwargs):
""" Wrapper, so we can call from api.py """
self.internal_del_rss_filter(**kwargs)
def internal_del_rss_filter(self, **kwargs):
""" Remove one RSS filter """
try:
cfg = config.get_rss()[kwargs.get('feed')]
feed_cfg = config.get_rss()[kwargs.get('feed')]
except KeyError:
raise rssRaiser(self.__root, kwargs)
cfg.filters.delete(int(kwargs.get('index', 0)))
feed_cfg.filters.delete(int(kwargs.get('index', 0)))
config.save_config()
self.__evaluate = False
self.__show_eval_button = True
@@ -2522,6 +2530,7 @@ def GetRssLog(feed):
# These fields could be empty
job['cat'] = job.get('cat', '')
job['size'] = job.get('size', '')
job['infourl'] = job.get('infourl', '')
# Auto-fetched jobs didn't have these fields set
if job.get('url'):

View File

@@ -1142,10 +1142,7 @@ def par2_repair(parfile_nzf, nzo, workdir, setname, single):
# Remove this set so we don't try to check it again
nzo.remove_parset(parfile_nzf.setname)
else:
if qc_result:
logging.warning(T('Par verify failed on %s, while QuickCheck succeeded!'), parfile)
else:
logging.info('Par verify failed on %s!', parfile)
logging.info('Par verify failed on %s!', parfile)
if not readd:
# Failed to repair -> remove this set

View File

@@ -174,6 +174,10 @@ class NNTP(object):
# Setup the SSL socket
ctx = ssl.create_default_context()
if sabnzbd.cfg.require_modern_tls():
# We want a modern TLS (1.2 or higher), so we disallow older protocol versions (<= TLS 1.1)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
# Only verify hostname when we're strict
if nw.server.ssl_verify < 2:
ctx.check_hostname = False
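
The require_modern_tls hunk above simply masks the older protocol versions out of the default SSL context. A short sketch of the same idea in isolation (the helper name is made up; ssl.OP_NO_TLSv1_1 requires Python 2.7.9+ or 3.4+):

import ssl

def make_nntps_context(require_modern_tls=False):
    # Start from secure defaults, then optionally refuse anything below TLS 1.2
    ctx = ssl.create_default_context()
    if require_modern_tls:
        ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
    return ctx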

View File

@@ -181,22 +181,6 @@ class NzbQueue(object):
logging.info('Skipping repair for job %s', folder)
return result
def retry_all_jobs(self, history_db):
""" Retry all retryable jobs in History """
result = []
# Retryable folders from History
items = sabnzbd.api.build_history()[0]
registered = [(platform_encode(os.path.basename(item['path'])),
item['nzo_id'])
for item in items if item['retry']]
for job in registered:
logging.info('Repairing job %s', job[0])
result.append(self.repair_job(job[0]))
history_db.remove_history(job[1])
return bool(result)
def repair_job(self, folder, new_nzb=None, password=None):
""" Reconstruct admin for a single job folder, optionally with new NZB """
def all_verified(path):

View File

@@ -1750,7 +1750,6 @@ class NzbObject(TryList):
remove_dir(self.downpath)
except:
logging.debug('Folder not removed: %s', self.downpath)
pass
def gather_info(self, full=False):
queued_files = []

View File

@@ -165,8 +165,9 @@ def parse_par2_file_packet(f, header):
filename = data[offset + 72:].strip('\0')
return filename, hash, hash16k
elif data[offset:offset + 15] == PAR_CREATOR_ID:
# Here untill the end is the creator-text
# Usefull in case of bugs in the par2-creating software
logging.debug('Par2-creator of %s is: %s', os.path.basename(f.name), data[offset+16:])
# From here until the end is the creator-text
# Useful in case of bugs in the par2-creating software
par2creator = data[offset+16:].strip('\0') # Remove any trailing \0
logging.debug('Par2-creator of %s is: %s', os.path.basename(f.name), par2creator)
return nothing

View File

@@ -376,19 +376,16 @@ def process_job(nzo):
newfiles = []
# Run Stage 2: Unpack
if flag_unpack:
if all_ok:
# set the current nzo status to "Extracting...". Used in History
nzo.status = Status.EXTRACTING
logging.info("Running unpack_magic on %s", filename)
unpack_error, newfiles = unpack_magic(nzo, workdir, tmp_workdir_complete, flag_delete, one_folder, (), (), (), (), ())
logging.info("Unpacked files %s", newfiles)
# set the current nzo status to "Extracting...". Used in History
nzo.status = Status.EXTRACTING
logging.info("Running unpack_magic on %s", filename)
unpack_error, newfiles = unpack_magic(nzo, workdir, tmp_workdir_complete, flag_delete, one_folder, (), (), (), (), ())
logging.info("Unpacked files %s", newfiles)
if sabnzbd.WIN32:
# Sanitize the resulting files
newfiles = sanitize_files_in_folder(tmp_workdir_complete)
logging.info("Finished unpack_magic on %s", filename)
else:
nzo.set_unpack_info('Unpack', T('No post-processing because of failed verification'))
if sabnzbd.WIN32:
# Sanitize the resulting files
newfiles = sanitize_files_in_folder(tmp_workdir_complete)
logging.info("Finished unpack_magic on %s", filename)
if cfg.safe_postproc():
all_ok = all_ok and not unpack_error
@@ -449,7 +446,6 @@ def process_job(nzo):
else:
workdir_complete = tmp_workdir_complete.replace('_UNPACK_', '_FAILED_')
workdir_complete = get_unique_path(workdir_complete, n=0, create_dir=False)
workdir_complete = workdir_complete
if empty:
job_result = -1

View File

@@ -287,7 +287,7 @@ class RSSQueue(object):
status = feed_parsed.get('status', 999)
if status in (401, 402, 403):
msg = T('Do not have valid authentication for feed %s') % feed
msg = T('Do not have valid authentication for feed %s') % uri
logging.info(msg)
if 500 <= status <= 599:
@@ -301,11 +301,14 @@ class RSSQueue(object):
msg = T('Server %s uses an untrusted HTTPS certificate') % get_urlbase(uri)
msg += ' - https://sabnzbd.org/certificate-errors'
logging.error(msg)
elif 'href' in feed_parsed and feed_parsed['href'] != uri and 'login' in feed_parsed['href']:
# Redirect to login page!
msg = T('Do not have valid authentication for feed %s') % uri
else:
msg = T('Failed to retrieve RSS from %s: %s') % (uri, xml_name(msg))
logging.info(msg)
if not entries:
if not entries and not msg:
msg = T('RSS Feed %s was empty') % uri
logging.info(msg)
all_entries.extend(entries)
@@ -330,7 +333,7 @@ class RSSQueue(object):
if readout:
try:
link, category, size, age, season, episode = _get_link(entry)
link, infourl, category, size, age, season, episode = _get_link(entry)
except (AttributeError, IndexError):
logging.info(T('Incompatible feed') + ' ' + uri)
logging.info("Traceback: ", exc_info=True)
@@ -350,6 +353,7 @@ class RSSQueue(object):
continue
else:
link = entry
infourl = jobs[link].get('infourl', '')
category = jobs[link].get('orgcat', '')
if category in ('', '*'):
category = None
@@ -478,13 +482,13 @@ class RSSQueue(object):
else:
star = first
if result:
_HandleLink(jobs, feed, link, title, size, age, season, episode, 'G', category, myCat, myPP,
myScript, act, star, priority=myPrio, rule=str(n))
_HandleLink(jobs, feed, link, infourl, title, size, age, season, episode, 'G', category, myCat,
myPP, myScript, act, star, priority=myPrio, rule=str(n))
if act:
new_downloads.append(title)
else:
_HandleLink(jobs, feed, link, title, size, age, season, episode, 'B', category, myCat, myPP,
myScript, False, star, priority=myPrio, rule=str(n))
_HandleLink(jobs, feed, link, infourl, title, size, age, season, episode, 'B', category, myCat,
myPP, myScript, False, star, priority=myPrio, rule=str(n))
# Send email if wanted and not "forced"
if new_downloads and cfg.email_rss() and not force:
@@ -584,7 +588,7 @@ class RSSQueue(object):
return ''
def _HandleLink(jobs, feed, link, title, size, age, season, episode, flag, orgcat, cat, pp, script,
def _HandleLink(jobs, feed, link, infourl, title, size, age, season, episode, flag, orgcat, cat, pp, script,
download, star, priority=NORMAL_PRIORITY, rule=0):
""" Process one link """
if script == '':
@@ -595,6 +599,7 @@ def _HandleLink(jobs, feed, link, title, size, age, season, episode, flag, orgca
jobs[link] = {}
jobs[link]['title'] = title
jobs[link]['url'] = link
jobs[link]['infourl'] = infourl
jobs[link]['cat'] = cat
jobs[link]['pp'] = pp
jobs[link]['script'] = script
@@ -641,6 +646,11 @@ def _get_link(entry):
except:
pass
# GUID usually has URL to result on page
infourl = None
if entry.id and entry.id != link and entry.id.startswith('http'):
infourl = entry.id
if size == 0L:
_RE_SIZE1 = re.compile(r'Size:\s*(\d+\.\d+\s*[KMG]{0,1})B\W*', re.I)
_RE_SIZE2 = re.compile(r'\W*(\d+\.\d+\s*[KMG]{0,1})B\W*', re.I)
@@ -690,10 +700,10 @@ def _get_link(entry):
except:
category = ''
return link, category, size, age, season, episode
return link, infourl, category, size, age, season, episode
else:
logging.warning(T('Empty RSS entry found (%s)'), link)
return None, '', 0L, None, 0, 0
return None, None, '', 0L, None, 0, 0
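
The login-redirect check in the rss.py hunks above relies on feedparser recording the final URL of the fetch in the parsed result's href. A rough illustration of that check on its own (the feed URL is made up):

import feedparser

# Hypothetical indexer feed; feedparser follows redirects and stores
# the final URL in parsed['href'] and the HTTP status in parsed['status']
uri = 'https://indexer.example/rss?apikey=xxxx'
parsed = feedparser.parse(uri)

if parsed.get('status', 999) in (401, 402, 403):
    print 'Do not have valid authentication for feed', uri
elif 'href' in parsed and parsed['href'] != uri and 'login' in parsed['href']:
    # Redirected to a login page instead of the feed itself
    print 'Do not have valid authentication for feed', uri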
def special_rss_site(url):

View File

@@ -136,12 +136,13 @@ class StatusIcon(Thread):
dialog.set_select_multiple(True)
filter = gtk.FileFilter()
filter.set_name("*.nbz,*.nbz.gz,*.bz2,*.zip,*.rar")
filter.add_pattern("*.nzb*")
filter.add_pattern("*.nzb.gz")
filter.add_pattern("*.nzb.bz2")
filter.set_name("*.nzb,*.gz,*.bz2,*.zip,*.rar,*.7z")
filter.add_pattern("*.nzb")
filter.add_pattern("*.gz")
filter.add_pattern("*.bz2")
filter.add_pattern("*.zip")
filter.add_pattern("*.rar")
filter.add_pattern("*.7z")
dialog.add_filter(filter)
response = dialog.run()

View File

@@ -10,13 +10,14 @@ import os
debug = False
def isFAT(dir):
def isFAT(check_dir):
""" Check if "check_dir" is on FAT. FAT considered harmful (for big files)
Works for Linux, Windows, MacOS
NB: On Windows, full path with drive letter is needed!
"""
# Check if "dir" is on FAT. FAT considered harmful (for big files)
# Works for Linux, Windows, MacOS
# NB: On Windows, full path with drive letter is needed!
FAT = False # default: not FAT
FAT = False # default: not FAT
# We're dealing with OS calls, so put everything in a try/except, just in case:
try:
if 'linux' in sys.platform:
@@ -30,9 +31,8 @@ def isFAT(dir):
/dev/sda1 vfat 488263616 163545248 324718368 34% /media/sander/INTENSO
'''
cmd = "df -T " + dir + " 2>&1"
cmd = "df -T " + check_dir + " 2>&1"
for thisline in os.popen(cmd).readlines():
#print thisline
if thisline.find('/') == 0:
# Starts with /, so a real, local device
fstype = thisline.split()[1]
@@ -43,11 +43,11 @@ def isFAT(dir):
break
elif 'win32' in sys.platform:
import win32api
if '?' in dir:
if '?' in check_dir:
# Remove \\?\ or \\?\UNC\ prefix from Windows path
dir = dir.replace(u'\\\\?\\UNC\\', u'\\\\', 1).replace(u'\\\\?\\', u'', 1)
check_dir = check_dir.replace(u'\\\\?\\UNC\\', u'\\\\', 1).replace(u'\\\\?\\', u'', 1)
try:
result = win32api.GetVolumeInformation(os.path.splitdrive(dir)[0])
result = win32api.GetVolumeInformation(os.path.splitdrive(check_dir)[0])
if debug: print result
if result[4].startswith("FAT"):
FAT = True
@@ -69,7 +69,7 @@ def isFAT(dir):
'''
dfcmd = "df " + dir
dfcmd = "df " + check_dir
for thisline in os.popen(dfcmd).readlines():
if thisline.find('/')==0:
if debug: print thisline
@@ -87,17 +87,16 @@ def isFAT(dir):
return FAT
if __name__ == "__main__":
if debug: print sys.platform
try:
dir = sys.argv[1]
dir_to_check = sys.argv[1]
except:
print "Specify dir on the command line"
print "Specify check_dir on the command line"
sys.exit(0)
if isFAT(dir):
print dir, "is on FAT"
if isFAT(dir_to_check):
print dir_to_check, "is on FAT"
else:
print dir, "is not on FAT"
print dir_to_check, "is not on FAT"

View File

@@ -4,5 +4,5 @@
# You MUST use double quotes (so " and not ')
__version__ = "2.3.5"
__baseline__ = "76c7a6ce9517beb7f84917f7a5f7efeec93abb4b"
__version__ = "2.4.0-develop"
__baseline__ = "unknown"

View File

@@ -50,7 +50,6 @@ import time
import fnmatch
import struct
import hashlib
from os import path
# Are we being called from SABnzbd?
@@ -109,10 +108,10 @@ def decodePar(parfile):
# filename makes up for the rest of the packet, padded with null characters
targetName = parfileToDecode.read(bodyLength - STRUCT_FILE_DESC_PACKET.size).rstrip('\0')
targetPath = path.join(dir, targetName)
targetPath = os.path.join(dir, targetName)
# file already exists, skip it
if path.exists(targetPath):
if os.path.exists(targetPath):
print "File already exists: " + targetName
continue
@@ -120,7 +119,7 @@ def decodePar(parfile):
srcPath = findFile(dir, filelength, hash16k)
if srcPath is not None:
os.rename(srcPath, targetPath)
print "Renamed file from " + path.basename(srcPath) + " to " + targetName
print "Renamed file from " + os.path.basename(srcPath) + " to " + targetName
result = True
else:
print "No match found for: " + targetName
@@ -129,10 +128,10 @@ def decodePar(parfile):
def findFile(dir, filelength, hash16k):
for filename in os.listdir(dir):
filepath = path.join(dir, filename)
filepath = os.path.join(dir, filename)
# check if the size matches as an indication
if path.getsize(filepath) != filelength: continue
if os.path.getsize(filepath) != filelength: continue
with open(filepath, 'rb') as fileToMatch:
data = fileToMatch.read(16 * 1024)

View File

@@ -1,15 +1,15 @@
@echo off
rem Example of a post processing script for SABnzbd
echo.
echo Running in directory "%~d0%~p0"
echo.
echo The first parameter (result-dir) = %1
echo The second parameter (nzb-name) = %2
echo The third parameter (nice name) = %3
echo The fourth parameter (newzbin #) = %4
echo The fifth parameter (category) = %5
echo The sixth parameter (group) = %6
echo The seventh parameter (status) = %7
echo The eight parameter (failure_url)= %8
echo.
@echo off
rem Example of a post processing script for SABnzbd
echo.
echo Running in directory "%~d0%~p0"
echo.
echo The first parameter (result-dir) = %1
echo The second parameter (nzb-name) = %2
echo The third parameter (nice name) = %3
echo The fourth parameter (newzbin #) = %4
echo The fifth parameter (category) = %5
echo The sixth parameter (group) = %6
echo The seventh parameter (status) = %7
echo The eight parameter (failure_url)= %8
echo.

42
snap/snapcraft.yaml Normal file
View File

@@ -0,0 +1,42 @@
name: sabnzbd
version: git
summary: SABnzbd
description: The automated Usenet download tool
confinement: strict
icon: interfaces/Config/templates/staticcfg/images/logo-small.svg
adopt-info: sabnzbd
version-script: |
grep -oP '(?<=^Version: ).*' PKG-INFO
apps:
sabnzbd:
command: python $SNAP/opt/sabnzbd/SABnzbd.py -f $SNAP_COMMON
daemon: simple
plugs: [network, network-bind, removable-media]
parts:
sabnzbd:
plugin: python
source: .
python-version: python2
python-packages: [cheetah3, cryptography, sabyenc]
build-attributes: [no-system-libraries]
stage-packages:
- to armhf: ["unrar:armhf", "p7zip-full:armhf", "par2:armhf"]
- to arm64: ["unrar:arm64", "p7zip-full:arm64", "par2:arm64"]
- to amd64: ["unrar:amd64", "p7zip-full:amd64", "par2:amd64"]
- to i386: ["unrar:i386", "p7zip-full:i386", "par2:i386"]
build-packages:
- to armhf: ["libffi-dev:armhf", "python-dev:armhf", "libssl-dev:armhf"]
- to arm64: ["libffi-dev:arm64", "python-dev:arm64", "libssl-dev:arm64"]
- to amd64: ["libffi-dev:amd64", "python-dev:amd64", "libssl-dev:amd64"]
- to i386: ["libffi-dev:i386", "python-dev:i386", "libssl-dev:i386"]
override-pull: |
snapcraftctl pull
[ $(git rev-parse --abbrev-ref HEAD) = "master" ] && GRADE=stable || GRADE=devel
snapcraftctl set-grade "$GRADE"
override-build: |
snapcraftctl build
python tools/make_mo.py
mkdir -p $SNAPCRAFT_PART_INSTALL/opt/sabnzbd
cp -R $SNAPCRAFT_PART_BUILD/* $SNAPCRAFT_PART_INSTALL/opt/sabnzbd

View File

@@ -213,7 +213,6 @@ def make_templates():
def patch_nsis():
""" Patch translation into the NSIS script """
RE_NSIS = re.compile(r'^(\s*LangString\s+\w+\s+\$\{LANG_)(\w+)\}\s+(".*)', re.I)
RE_NSIS = re.compile(r'^(\s*LangString\s+)(\w+)(\s+\$\{LANG_)(\w+)\}\s+(".*)', re.I)
languages = [os.path.split(path)[1] for path in glob.glob(os.path.join(MO_DIR, '*'))]
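
The new RE_NSIS pattern splits the LangString name into its own capture group instead of folding it into the prefix. A quick check of what the new pattern captures (the sample installer line is made up):

import re

RE_NSIS = re.compile(r'^(\s*LangString\s+)(\w+)(\s+\$\{LANG_)(\w+)\}\s+(".*)', re.I)
line = 'LangString MsgStart ${LANG_ENGLISH} "Start SABnzbd"'
m = RE_NSIS.search(line)
print m.group(2), m.group(4)  # MsgStart ENGLISH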

View File

@@ -80,7 +80,6 @@ def set_connection_info(url, user=True):
except WindowsError:
if user:
set_connection_info(url, user=False)
pass
finally:
_winreg.CloseKey(hive)
@@ -96,7 +95,6 @@ def del_connection_info(user=True):
except WindowsError:
if user:
del_connection_info(user=False)
pass
finally:
_winreg.CloseKey(hive)

View File

Binary file not shown.

View File

Binary file not shown.

View File

Binary file not shown.

View File

Binary file not shown.