Compare commits

..

221 Commits

Author SHA1 Message Date
shypike
1df2943d05 Update text files for 0.7.20 2014-11-21 20:39:23 +01:00
shypike
543eeae124 Make sure that Rating support is suppressed completely when disabled. 2014-11-21 20:32:38 +01:00
shypike
8ed2f7a175 Update some license files. 2014-11-18 19:47:46 +01:00
shypike
074c78ce31 Update text files for 0.7.20RC2 2014-11-15 10:06:16 +01:00
shypike
01b93ce8eb More flexibility in Rating meta data. 2014-11-15 10:05:48 +01:00
shypike
ff42913359 Update unrar to 5.11 for (Snow)Leopard on Intel, but not for PPC.
There's no unrar 5.* release for PPC.
2014-11-14 18:16:48 +01:00
shypike
9b3dcb0bda Merge pull request #184 from sanderjo/0.7.x
Measure and log Pystone performance, and - if possible - CPU type
2014-11-12 22:13:49 +01:00
shypike
a9cfff0ce3 Update text files for 0.7.20RC1 2014-11-11 23:35:02 +01:00
shypike
ca4db6b428 Support for the NZB meta field "x-rating-host". 2014-11-11 23:11:53 +01:00
shypike
f986f649d2 Let the API-call "Retry" return the new nzo_id of the job.
Also change the status of a doomed re-fetch from "Failed" to "Fetching".
2014-11-07 21:29:18 +01:00
shypike
48b47535b6 Update copyright year. 2014-11-06 22:06:46 +01:00
shypike
194fbce7ae Added an issue and made some language corrections. 2014-11-06 14:27:51 +01:00
shypike
17ae4d9974 OSX Yosemite: make top-menu icon compatible with "Dark Mode". 2014-11-05 19:21:32 +01:00
shypike
cdaf7ae973 Fix problem of testing email server with existing parameters.
In existing email parameters the password consists only of asterisks.
In that case, get the password from storage.
Also improve the logging of failed authentication attempts.
2014-11-04 20:40:15 +01:00
shypike
1390a17950 Update unrar for OSX to 5.11
(Earlier update wasn't right for OSX.)
2014-11-04 19:41:19 +01:00
shypike
852b0e3cc2 Update text files for 0.7.19 2014-10-30 18:32:12 +01:00
shypike
d3715e0ab9 Support double quotes to delineate parameters in category match lists.
"a b", "c d"
is now properly handled. The double quotes needed HTML-quoting.
2014-10-28 22:06:21 +01:00
shypike
8d28d8f000 Correct spelling of OSX release names. 2014-10-28 21:15:26 +01:00
shypike
1a3dd15609 When sanitizing names, preserve "." and ".." elements in paths.
"sanitize_foldername()" was too eager to remove "." characters, thus removing "." and "..".
When using a single folder, the final "." element would be replaced with "unknown".
2014-10-27 21:46:28 +01:00
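A minimal sketch of the intended behaviour (illustrative only, not SABnzbd's actual code; the stub stands in for the real sanitize_foldername() named above):

    def sanitize_foldername_stub(name):
        # Stand-in for the real sanitize_foldername(); here it only strips trailing dots.
        return name.rstrip('.') or 'unknown'

    def sanitize_path_elements(path):
        # Keep empty, "." and ".." elements untouched; sanitize real folder names only.
        parts = []
        for element in path.split('/'):
            if element in ('', '.', '..'):
                parts.append(element)
            else:
                parts.append(sanitize_foldername_stub(element))
        return '/'.join(parts)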
shypike
ad58dde0e0 The after-unrar-check needs to take the "flat_unpack" option into account. 2014-10-27 19:51:56 +01:00
sanderjo
da317d9054 Scheduling logging in proper format (hh:mm), so 08:05 instead of 8:5
Old:
2014-10-26 08:03:11,305::DEBUG::[scheduler:127] scheduling speedlimit(['5000']) on days [1, 2, 3, 4, 5, 6, 7] at 8:5

New:
2014-10-26 08:18:45,493::DEBUG::[scheduler:127] scheduling speedlimit(['5000']) on days [1, 2, 3, 4, 5, 6, 7] at 08:05
2014-10-26 08:43:48 +01:00
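The underlying change amounts to zero-padded time formatting; an illustrative one-liner (not the actual scheduler code):

    hour, minute = 8, 5
    old_style = '%s:%s' % (hour, minute)        # '8:5'
    new_style = '%02d:%02d' % (hour, minute)    # '08:05'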
shypike
9e344114d6 When a comma is present in a file name, quotes are needed when passed to a user script.
The replacement list2cmdline() should handle commas.
2014-10-25 21:52:27 +02:00
shypike
2cc73c8ad1 Update OSX DMG image.
Make MountainLion/Mavericks/Yosemite the default choice.
Separate folders for SnowLeopard and Lion.
2014-10-25 14:38:19 +02:00
shypike
efe473b4d7 Update OSX signing.
Lion build can only be signed on Lion.
ML/Mav build must be signed on Mav.
2014-10-25 11:25:22 +02:00
shypike
f9e02458e7 Update text files for 0.7.19RC4 2014-10-24 19:57:23 +02:00
shypike
07d8ceb284 Fix problem of a job's destination path getting damaged on Windows.
"D:\folder\map" would become "D:folder\map", giving nasty side-effects.
2014-10-24 19:51:37 +02:00
shypike
fc141b6cb6 Update text files for 0.7.19RC3 2014-10-17 16:58:47 +02:00
shypike
4b31bea0e3 Change renaming of duplicate files from file.ext-->file.ext.1 to file.ext-->file.1.ext
Works better when encountering multiple rar/par sets with identically named files.
2014-10-17 16:56:25 +02:00
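A hedged sketch of the new naming scheme (illustrative only, not the actual SABnzbd code):

    import os

    def duplicate_name(path, count):
        # Insert the counter before the extension: file.ext -> file.1.ext
        base, ext = os.path.splitext(path)
        return '%s.%d%s' % (base, count, ext)

    # duplicate_name('group01.rar', 1)  ->  'group01.1.rar'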
shypike
44b1b6ca2e Corrected meta-data name 2014-10-16 16:57:03 +02:00
shypike
8da8b62446 Make OSX MountainLion build compatible with Mavericks. 2014-10-15 20:56:20 +02:00
shypike
dc03414659 Do not remove a leading dot in a path element.
"/folder/.hidden/" must be preserved.
"/folder/hidden./" must be converted to "/folder/hidden/"
2014-10-15 18:45:04 +02:00
shypike
be7222de8f Windows UNC paths, used as final destination, were damaged.
misc.sanitize_and_trim_path() did not handle UNC paths properly.
2014-10-15 18:38:17 +02:00
shypike
0f64495d25 Update text files for release 0.7.19RC3 2014-10-14 20:34:14 +02:00
shypike
9f82347e72 Update OSX signing method 2014-10-14 20:21:51 +02:00
shypike
f22b26ba09 Treat RAR CRC errors like "incorrect password"
Older versions of unrar report wrong passwords as CRC errors.
Therefore, try the next password (if available) when a CRC error is reported.
2014-10-14 12:03:03 +02:00
shypike
f9081afe80 Auto-update rating_host value. 2014-10-13 22:54:06 +02:00
SanderJ
4904ff4bd9 Measure and log Pystone performance, and - if possible - CPU type
Pystone performance will give an indication of the performance that can be expected from SAB, par2 and unrar
2014-10-12 11:35:51 +02:00
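Pystone can be measured with the benchmark module shipped in the Python 2 standard library; a minimal sketch (loop count and output format are assumptions, not the actual SABnzbd logging code):

    from test import pystone

    loops = 10000
    duration, rating = pystone.pystones(loops)
    print('Pystone: %.0f pystones/second (%.2f s for %d passes)' % (rating, duration, loops))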
shypike
8c2eda4e4b Update text files for 0.7.19RC2 2014-10-10 20:54:47 +02:00
shypike
5591127a0e Expose the rating_host setting to Config->Special. 2014-10-10 19:59:00 +02:00
shypike
823ec4fad8 Add Finnish translation 2014-10-10 19:34:06 +02:00
Johannes 'fish' Ziemke
661dc33912 Add Dockerfile
Usage:

        docker build -t sabnzbd .
        docker run -p 127.0.0.1:8080:8080 sabnzbd
2014-10-10 19:16:14 +02:00
shypike
28370c826a Merge pull request #183 from sanderjo/0.7.x
Extra logging in case of Loading .../.sabnzbd/admin/Rating.sab failed
2014-10-08 21:33:22 +02:00
sanderjo
12086a247f Extra logging in case of Loading .../.sabnzbd/admin/Rating.sab failed
Will include the exact exception message in the error message.
2014-10-07 21:51:41 +02:00
shypike
f53c25ccc5 Update text files for 0.7.19RC2 2014-10-07 20:48:47 +02:00
shypike
04d31bd807 OSX Signing is now only possible on OSX Mavericks, so check this. 2014-10-07 20:48:11 +02:00
shypike
433711bdc6 Prevent folder trimming from removing embedded passwords in filenames.
Reduce the number of calls to sanitize_foldername() and rely on NzbObject to do its work.
Also allow "oversized" incomplete folders to be sent back to the queue, by not sanitizing/trimming again.
2014-10-07 20:15:35 +02:00
shypike
376428d12d Fix potential problem with timestamps in RSS. 2014-10-07 20:11:23 +02:00
shypike
8969b7c87e Make sure the final destination path is always sanitized and trimmed.
Titles coming from RSS and processed by Sorting escaped sanitization.
2014-10-04 12:23:03 +02:00
shypike
92bc48c241 When matching SFV files with RAR-sets, do this case-insensitive. 2014-10-04 12:20:40 +02:00
shypike
d85708c5f5 Small code improvement "unwanted extensions". 2014-09-22 21:54:51 +02:00
shypike
fa4eaf3e42 Merge pull request #180 from sanderjo/0.7.x
Put the last rar immediately after the first rar, so that unwanted extensions get detected earlier
2014-09-22 20:35:00 +02:00
sanderjo
8b788551bb cfg.unwanted_extensions() is a list so check must be cfg.unwanted_extensions() != []
Signed-off-by: sanderjo <sander.jonkers+github@gmail.com>
2014-09-21 10:42:12 +02:00
sanderjo
db58025e96 Put the last rar immediately after the first rar, so that unwanted extensions get detected earlier.
Signed-off-by: sanderjo <sander.jonkers+github@gmail.com>
2014-09-21 08:01:21 +02:00
shypike
9ec397d6a0 Limit article cache to 1G to prevent a memory size bug in the _yenc module. 2014-09-18 22:09:49 +02:00
shypike
dc3fe36d8a Upgrade unrar to version 5.11 (OSX and Windows) 2014-09-18 20:50:46 +02:00
shypike
d809b13936 Restore Python 2.5 compatibility in "Ratings" change. 2014-09-16 23:38:05 +02:00
shypike
c1607c6bf2 Update text files for 0.7.19RC1 2014-09-16 23:19:01 +02:00
shypike
0ff3c5978c Merge pull request #178 from oznzb-dev/0.7.x
OZnzb filtering
2014-09-16 23:09:28 +02:00
shypike
c1d283f299 Sort order of RSS feeds incorrect due to UI using wrong time field. 2014-08-30 15:51:09 +02:00
shypike
04637c2c74 Prevent further pauses by "unwanted extension", once the user has resumed the job after the first stop.
Part of this code was accidentally included in commit 081010d50b
This is the missing part.
2014-08-22 19:42:45 +02:00
shypike
3a3eeba429 Change renaming of duplicate files from file.ext-->file.ext.1 to file.ext-->file.1.ext
Works better when encountering multiple rar/par sets with identically named files.
2014-08-22 19:35:11 +02:00
oznzb-dev
1404175ac8 OZnzb filtering
extended OZnzb integration with additional filtering options
2014-08-19 22:31:40 +10:00
shypike
081010d50b Fix for "Range" selection of queue.
Not a really good fix.
The code looks identical to the code for selecting files within an NZB (in config.js), but it has a subtle error.
This is a fix which will work reasonably well.
Should be fixed properly later.
2014-08-06 22:22:39 +02:00
shypike
1591954472 Prevent crash when Windows SysTray function hits PyWin bug. 2014-07-29 21:25:24 +02:00
shypike
c2a2b61e15 Sort queue on now visible name instead of original name. 2014-07-16 21:39:10 +02:00
shypike
4b6c469831 Remove special URL handling for nzbclub indexer, no longer needed. 2014-07-16 21:25:42 +02:00
shypike
2d10f879da Update text files for 0.7.18 2014-07-06 16:11:33 +02:00
shypike
c2043c6b83 Update translations 2014-07-03 21:53:06 +02:00
shypike
b3a637d29c Update main POT file. 2014-07-01 21:31:01 +02:00
shypike
67395f0c19 Improve "unwanted extension" text. 2014-07-01 21:30:29 +02:00
shypike
55f0eccb88 Update text files for 0.7.18RC1 2014-06-29 17:33:36 +02:00
shypike
e293a669d2 Allow "nzbname" parameter with just the password (like "/password").
Works also in AddNZB dialog.
2014-06-29 17:30:59 +02:00
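For illustration, a hypothetical add-by-URL API call that passes only a password via "nzbname" (host, indexer URL and API key are placeholders):

    http://localhost:8080/sabnzbd/api?mode=addurl&name=http://indexer.example/get/1234&nzbname=/secret&apikey=APIKEY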
shypike
26b79513c6 Update text files for 0.7.18RC1 2014-06-29 17:12:23 +02:00
shypike
fa1ef47341 In debug logging mode, use Google to determine our own IP address (IPv4 and IPv6).
Helpful for diagnosing troublesome setups.
2014-06-23 21:25:25 +02:00
shypike
746be02899 Merge pull request #162 from JessThrysoee/0.7.x
Use default-path in Plush cookies.
2014-06-23 21:09:04 +02:00
shypike
9a4fdd4582 Log more info about failure to remove item from History. 2014-06-20 18:42:08 +02:00
shypike
c76f01b54b Merge pull request #165 from oopoa/bandwidth-api
New API-function "server_stats".
2014-06-20 10:44:19 +02:00
kcrayon
0849039bb4 rename to server_stats 2014-06-20 12:02:31 +08:00
shypike
a79d1173de Merge pull request #163 from Der-Jan/0.7.x
Support password embedding when password field is empty in job details page.
2014-06-16 19:25:13 +02:00
shypike
2eeb6cd9c5 Merge pull request #166 from sanderjo/0.7.x
Logs in which rar file unwanted extension
2014-06-16 19:08:27 +02:00
oopoa
7571f146fa add api for bandwidth statistics 2014-06-15 00:14:47 +08:00
sanderjo
6753ed031d Logs in which rar file unwanted extension 2014-06-14 08:32:18 +02:00
shypike
f4dd7d0df4 Don't pass seemingly "joinable" files to par2; this is no longer needed since we use the wildcard parameter nowadays.
Passing them can lead to problems on Windows due to a potentially huge number of parameters.
2014-06-13 22:24:22 +02:00
shypike
e5e9b37bf8 Fix bug that prevented multiple sets in one NZB from joining. 2014-06-11 23:05:02 +02:00
shypike
460841ae36 Merge pull request #164 from sanderjo/0.7.x
Handle server side 5xx problems more nicely
2014-06-11 21:15:45 +02:00
sanderjo
565d8bec8a Handle server side 5xx problems 2014-06-09 13:06:28 +02:00
Der-Jan
4bf057b0a9 Set pwd to none when empty 2014-06-06 14:07:32 +02:00
JessThrysoee
081da0a0fc Use default-path in Plush cookies.
Cookies are not port specific. When sending cookies with path=/ to
http://localhost:8090/sabnzbd/ these cookies also turn up when
requesting e.g. http://localhost/apache2 or http://localhost:8080/tomcat and so on.

To avoid polluting the global cookie space, simply do not specify a path
when setting the cookie. This will result in a cookie with the default-path,
i.e. path=/sabnzbd for http://localhost:8090/sabnzbd.
2014-05-31 18:48:46 +02:00
shypike
7fe125fd7e Prevent false encryption messages.
Probably not encrypted when multiple files are in a RAR.
2014-05-30 14:55:47 +02:00
shypike
9669580ea7 Prevent false positives for encryption detection.
Weird posts with double rar-ed subtitles.
2014-05-30 14:19:47 +02:00
shypike
27cc3f9be5 Update main POT file. 2014-05-24 11:57:51 +02:00
shypike
57084ab6d1 Additional logging for folder renaming. 2014-05-24 11:56:18 +02:00
shypike
56d0e2957b Update text files for 0.7.18RC1 2014-05-23 21:49:26 +02:00
shypike
08ffaedaca Implement support for X-Failure call-back URL.
Optionally to be called when par2 verification fails, the RAR files have CRC errors or the right password isn't known.
2014-05-23 21:39:05 +02:00
shypike
3c3d6dbb4b Merge pull request #160 from sanderjo/0.7.x
We're now on 0.7.x, and developing 0.8.x
2014-05-16 22:46:50 +02:00
sanderjo
a66ffc61fd We're now on 0.7.x, and developing 0.8.x 2014-05-16 19:05:03 +02:00
shypike
7e04cb019d Prevent crash in unpacking due to unset variable. 2014-05-13 19:55:43 +02:00
shypike
802c9cfab6 Using priority "Force" will override the duplicate NZB check.
Will work when manually entering an NZB.
2014-05-06 20:26:44 +02:00
shypike
987a64d3a4 Fix little problem in extension detection and fix some PyLint complaints. 2014-05-06 20:11:43 +02:00
shypike
ea90e9e153 Provide UI for detection of unwanted extensions inside RAR archives. 2014-04-28 23:12:38 +02:00
sanderjo
ace1ef5498 Implement support to detect unwanted extensions inside RAR archives.
    unwanted_extensions = .exe, .bla
    action_on_unwanted_extensions = 1
2014-04-28 21:48:15 +02:00
shypike
d2926dcbac Prevent pseudo error message when testing "Notification Center". 2014-04-26 13:40:04 +02:00
shypike
5d4f8c2459 Merge branch '0.7.x' of https://github.com/josteink/sabnzbd into 0.7.x 2014-04-26 13:31:09 +02:00
Jostein Kjønigsen
e2790cbaa9 Notification: Respect NotifyOSD-preference and allow testing of values from UI. 2014-04-26 12:34:44 +02:00
shypike
1441eea5ae Merge pull request #148 from josteink/0.7.x
0.7.x - Support for testing email settings before saving
2014-04-26 12:26:31 +02:00
Jostein Kjønigsen
db4ecfc59c Support testing email based on values in UI instead of stored config. 2014-04-26 11:44:54 +02:00
shypike
4a69dea144 Set new_nzb_on_failure option default off. 2014-04-22 23:22:20 +02:00
shypike
3f790a5048 Update POT file. 2014-04-15 19:19:36 +02:00
shypike
470c1fc13e Install support for X-Failure call-back URL.
To be called when par2 verification fails, the RAR files have CRC errors or the right password isn't known.
2014-04-10 22:05:54 +02:00
shypike
6faeb578a7 Don't trim file names when renaming them (so revert to old behavior). 2014-04-09 21:11:08 +02:00
shypike
44383deb33 Add "pause_pp" to the API. 2014-04-05 16:04:04 +02:00
shypike
80fccd92ec Pause/abort on encryption failed when pre-check was active. 2014-04-03 21:32:23 +02:00
shypike
666e115032 Also remove colons ":" with option sanitize_safe 2014-03-25 11:32:30 +01:00
shypike
048c36c107 Update DMG template again. 2014-03-23 22:02:24 +01:00
shypike
de86ba7496 Update text files for 0.7.17Final 2014-03-23 20:59:34 +01:00
shypike
a58d070aaf Update OSX template. 2014-03-23 20:30:02 +01:00
shypike
30ed8a2d30 Update OSX DMG template for Mavericks. 2014-03-15 15:57:06 +01:00
shypike
c503e3078f Fix error in previous commit (d87162bebd). 2014-03-10 07:34:22 +01:00
shypike
3d74d879a9 Support "retry-after" attribute (for NZBFinder), used for rate limiting of NZB grabs. 2014-03-07 17:45:15 +01:00
shypike
ae0268eb3c Update text files for 0.7.17RC2 2014-03-07 14:46:46 +01:00
shypike
d6eb1ffbe0 Sanitize names when renaming files and folders. 2014-03-07 14:46:21 +01:00
shypike
d88b3a76a5 Prevent race condition in Rating usage from crashing SABnzbd. 2014-03-07 14:11:25 +01:00
shypike
a6a68b9b67 Revert commit 507cbb4d87
The sanitize call is not the right solution; it will remove the path slashes!
2014-02-28 15:43:58 +01:00
shypike
182bce40a7 Update translations 2014-02-21 20:08:45 +01:00
shypike
5072f55d6e Do an extra sanitizing call before a file rename.
Just in case the user has forbidden characters in Sort expressions.
2014-02-20 22:53:21 +01:00
shypike
dc2d4ffb85 Make RAR/RAR5 detection more robust. 2014-02-20 22:09:32 +01:00
shypike
22f2e0881e Support double quotes in password entry boxes on job detail page. 2014-02-18 20:48:14 +01:00
shypike
614510c00b Prevent embedded password from getting damaged by sanitizing. 2014-02-18 20:14:37 +01:00
shypike
ee32bb5419 Extend password boxes on file details page. 2014-02-18 20:13:40 +01:00
shypike
99eb16b504 Provisional RAR5 support.
Recognize magic rar5 marker and the "incorrect password" message.
2014-02-18 19:49:54 +01:00
shypike
2da71a9fa2 Add password fields to File Detail pages of "smpl" and "Classic" skins.
Also remove previous band-aid to preserve passwords after repeated scans.
Scan is now only needed when old-style API call is done (so without password field).
2014-02-13 18:57:11 +01:00
shypike
9cc9e0eaea When checking unrar, prevent creating a zombie process on some systems. 2014-02-13 18:52:16 +01:00
shypike
2d92069a3c Prevent unwanted change of queue order after editing job details.
When an explicit priority is set, the category evaluation should not temporarily change the priority,
which will cause a re-sort within the priority group.
2014-02-11 21:42:43 +01:00
shypike
a0cfc19682 Merge pull request #140 from ecourtenay/0.7.x
Fix trailing slash
2014-02-11 11:29:27 +01:00
Ed Courtenay
d87cd33523 Fix trailing slash 2014-02-10 22:28:49 +00:00
shypike
63186c671b Update text files for 0.7.17RC2 2014-02-08 11:52:59 +01:00
shypike
3d3d8f1535 Add Special option "warn_dupl_jobs" to suppress/enable warning about duplicate jobs. 2014-02-07 20:48:23 +01:00
shypike
a90754149b Prevent URLs in the queue from getting "sanitized". 2014-02-07 20:19:15 +01:00
shypike
ad6a896e46 Support UNC paths in Sort expressions (Windows). 2014-02-05 20:41:07 +01:00
shypike
f4e0559894 Remove race-condition in PP-queue exit that prevented shutdown. 2014-02-01 16:05:25 +01:00
shypike
e517b67faf Allow "Force" priority to be set in the NZO page (now also for smpl and Classic.)
Otherwise "Force" jobs will lose their priority when other fields are changed on the NZO page.
2014-02-01 13:00:32 +01:00
shypike
bbb2b27157 Update text files for 0.7.17RC1 2014-01-31 12:00:07 +01:00
shypike
0b1f9dfe14 Allow "Force" priority to be set in the NZO page.
Otherwise "Force" jobs will lose their priority when other fields are changed on the NZO page.
2014-01-31 11:09:27 +01:00
shypike
e7bd7ae0d3 Make sure a manually entered decryption password has no leading spaces. 2014-01-31 10:57:17 +01:00
shypike
8a4579d2ec Allow "Default" category to be selected in Multi-ops. 2014-01-26 22:02:05 +01:00
shypike
0adf5ef93d Fix minor issues in rating system.
FLAG_ENCRYPTED is misspelled in postproc.py
Ignore requests for which rating data isn’t available in rating.py
2014-01-23 21:20:03 +01:00
shypike
fdacbeaa4d Prevent PP queue timeout construction from keeping the CPU awake.
Due to a bug in the Python libraries queue.get(timeout=3) will awake the CPU every 50 msec.
The new code will only use the timeout to detect an empty queue.
If empty, but no end-of-queue check was needed, then launch an indefinite get().
2014-01-23 21:06:48 +01:00
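A minimal sketch of the described pattern (assumed names, not the actual PostProcessor code): only pay for the short timeout when an end-of-queue check is pending, otherwise block indefinitely so Queue.get() does not poll.

    import Queue  # Python 2 module name, matching the codebase of that era

    def next_job(job_queue, check_eoq):
        if check_eoq:
            try:
                return job_queue.get(timeout=3)
            except Queue.Empty:
                return None        # empty: caller can now run its end-of-queue check
        return job_queue.get()     # indefinite get(), no periodic wake-ups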
shypike
f5ecf3df05 Remove spaces from display of filename/password. 2014-01-23 20:51:39 +01:00
shypike
eae644d8f6 Add more logging about password file results. 2014-01-23 20:50:38 +01:00
shypike
406f18d6bb Prevent sanitizing of passwords in "nzbname" argument of API calls.
Passwords could be damaged, for example: pass/word ==> pass+word
2014-01-18 12:53:57 +01:00
shypike
bf04f950c3 Add Special option "flat_unpack" to remove embedded folders in archives. 2014-01-15 22:28:31 +01:00
shypike
e90ff0eeb7 Fix bug in rating system.
The bug caused comments to be posted multiple times if the user added a comment and then
rated the release or used one of the other reporting functions after adding the comment.
2014-01-06 20:35:49 +01:00
shypike
35381555a9 Update text files for 0.7.17Beta3 2013-12-31 13:59:52 +01:00
shypike
4bf956f4ff Upgrade unrar to 5.01 2013-12-31 13:31:38 +01:00
shypike
ddfe8653f4 Fix problem with space added to password coming from a file name.
Regression caused by redesign of password scanner.
2013-12-23 21:29:46 +01:00
shypike
0066b9f11e Don’t send 8th parameter to user script when empty.
Modify sample script to show 8th parameter.
2013-12-21 13:54:04 +01:00
shypike
c897b2252d Update text files for 0.7.17Beta2 2013-12-16 18:44:57 +01:00
shypike
2aadcd032d Fix coding errors in OZnzb rating support. 2013-12-16 18:43:22 +01:00
shypike
23e9bf112e Fix problem with running user’s PP script. 2013-12-16 18:42:52 +01:00
shypike
9b577df408 Fix error in ozNZB support. 2013-12-13 20:40:29 +01:00
shypike
e6d75b45ae Update text files for 0.7.17Beta1 2013-12-12 21:49:26 +01:00
shypike
f68de5df4c Add some basic support for X-DNZB-Failure and X-DNZB-Details headers coming from indexers.
"Failure" will send an extra URL parameter to the post processing script.
"Details" will override "More-info" in the History.
2013-12-12 21:45:22 +01:00
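Hypothetical response headers from an indexer, for illustration (the URLs are placeholders):

    X-DNZB-Failure: https://indexer.example/api/report-failure?id=1234
    X-DNZB-Details: https://indexer.example/details/1234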
shypike
8bda8efa2f Add provisional support for unrar 5.
The report text for encrypted files has changed in unrar 5.
Thank you, Sander!
2013-12-12 19:51:12 +01:00
shypike
62792f859b Improve scanning of passwords in file names.
Replace regexes by plain logic to allow "/" and "{{" and "}}" in passwords.
2013-12-12 19:24:53 +01:00
shypike
79fa42f90f Update translations 2013-12-12 18:14:33 +01:00
shypike
acb3ed2c77 Update translation templates. 2013-12-10 21:19:18 +01:00
shypike
50f411fb0b Add the command line parameter --pidfile to set an explicit PID-file name. 2013-12-10 20:37:48 +01:00
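An illustrative invocation (the path is a placeholder; --daemon is shown only for context):

    python SABnzbd.py --daemon --pidfile /var/run/sabnzbd/sabnzbd.pid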
shypike
38c13bc4f0 Merge pull request #127 from oznzb-dev/0.7.x
0.7.x ratings, feedback and comments
2013-12-10 11:16:28 -08:00
oznzb-dev
7a7bf0f4e4 Merge branch '0.7.x' of https://github.com/oznzb-dev/sabnzbd into 0.7.x 2013-11-25 18:03:12 +10:00
oznzb-dev
4d1b02fa64 Add backport of OrderedDict to support earlier versions of Python. 2013-11-25 17:57:36 +10:00
oznzb-dev
612e68b5e6 Update three.html 2013-11-23 20:05:58 +10:00
oznzb-dev
9645f947b8 Add modified files related to ratings. 2013-11-23 19:19:45 +10:00
oznzb-dev
c211969a81 Added support for ratings and integration with OZnzb. 2013-11-23 19:15:10 +10:00
shypike
86916cdf90 Fix another issue with commit 3b3759e81e (NZB-meta data). 2013-11-12 22:06:40 +01:00
shypike
fa3e0f941b Always rename files in Sorting, regardless of casing. 2013-11-12 20:58:07 +01:00
shypike
32c524c18d Fix issue with commit 3b3759e81e (NZB-meta data). 2013-11-12 20:47:11 +01:00
shypike
7ba1a4c20f Add Solaris manifest to tar.gz distribution file. 2013-11-03 15:45:20 +01:00
shypike
3b3759e81e Add usage of NZB-meta data and X-headers for Sorting.
Meta records: "episodename", "propername" and "year".
X-headers: "x-dnzb-episodename", "x-dnzb-propername" and "x-dnzb-year".
Controlled by an option.
2013-11-02 17:35:58 +01:00
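These meta records live in the NZB's <head> section; a hypothetical fragment (values are placeholders):

    <nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
      <head>
        <meta type="propername">Some.Show</meta>
        <meta type="episodename">Pilot</meta>
        <meta type="year">2013</meta>
      </head>
      ...
    </nzb>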
shypike
ba3aaab3dc Pass extra parameter to OSX Notification Center tool to enable Mavericks support. 2013-10-29 20:36:04 +01:00
shypike
30ec3c430d Show job's ETA when its priority is forced, but queue is paused. 2013-09-19 22:03:33 +02:00
shypike
dac568fc35 Another fix for false encryption reports. 2013-09-19 20:09:46 +02:00
shypike
1ee00e12ce Fix crash in API-call "queue-rename" when "value3" is empty or undefined. 2013-09-18 20:56:15 +02:00
shypike
15ae1ae5fd Merge pull request #111 from jim80net/0.7.x
Adds Solaris SMF manifest.
2013-09-02 14:05:28 -07:00
shypike
14f39a21e3 Update text files for 0.7.16 2013-09-02 22:05:58 +02:00
shypike
97ea4ee2eb Special "news_items" wasn't removed completely, leading to UI crash. 2013-09-02 22:01:36 +02:00
Jim80net
c68fa9f0c5 Adds solaris manifest 2013-09-02 19:42:44 +00:00
shypike
06576baf5c Update text files for 0.7.15 2013-08-26 19:12:28 +02:00
shypike
7b5fcbe0af For Unix systems, expand wildcards for the par2 tool to prevent problems with some builds of par2cmdline. 2013-08-20 21:31:46 +02:00
shypike
003ee07dee Revert "Use ".admin" instead of "__ADMIN__" as job admin folder to support some non-standard Unix systems."
This reverts commit 1b05bc9ed2.
2013-08-20 21:29:01 +02:00
shypike
654b5e9a24 Remove "news" section in Config skin's main page.
Was never used and caused mixed mode https/http issues.
2013-08-15 19:23:26 +02:00
shypike
1b05bc9ed2 Use ".admin" instead of "__ADMIN__" as job admin folder to support some non-standard Unix systems.
".admin" will be treated as a hidden folder by non-Windows systems, avoiding a problem with
wildcard expansion for par2cmdline on some Unix systems.
2013-08-12 22:18:13 +02:00
shypike
dc328c545b Add password entry box to "File Details" page (Plush only).
Also extend api call "queue_rename" with a password parameter (value3).
2013-08-09 18:34:59 +02:00
shypike
823816ddc4 Prevent "special" sub-folders on file servers from being scanned during unpacking. 2013-07-28 14:00:14 +02:00
shypike
8979598f23 Add special option 'sanitize_safe' to remove bad Windows chars on other platforms. 2013-07-16 21:54:02 +02:00
shypike
f26bf9b21f Fix false positive encryption alarm for some posts. 2013-07-16 21:36:09 +02:00
shypike
5d3a0cc593 Merge pull request #104 from manandre/rss_guid
Add GUID field to History and Queue RSS feeds
2013-07-16 12:32:02 -07:00
manandre
21d445b7a6 Add GUID field to Queue RSS feed
The NZO id is used as the unique id for the queue RSS feed, to help some RSS
readers (like Thunderbird) identify articles when the link field is the
same for all articles.
2013-07-07 18:38:17 +02:00
manandre
9c0df30d34 Add GUID field to History RSS feed
The NZO id is used as the unique id for the history RSS feed, to help some RSS readers (like Thunderbird) identify articles when the link field is the same for all articles.
2013-07-07 18:16:24 +02:00
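For illustration, a feed item carrying the NZO id as GUID (all values are made up; the id pattern is an assumption):

    <item>
      <title>Some.Download</title>
      <link>http://localhost:8080/sabnzbd/history/</link>
      <guid>SABnzbd_nzo_abc123</guid>
    </item>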
shypike
bc9be3f92b Update text files for 0.7.14 2013-07-07 13:12:15 +02:00
shypike
2dc5c329c9 Fix special case of unjustified encryption warning. 2013-07-07 13:11:01 +02:00
shypike
67817978f4 Missing mini-par2 sometimes prevents the other par2 files from being downloaded. 2013-06-27 20:41:57 +02:00
shypike
e2ab8c6ce4 Make sure even invalid RAR files are fed to unrar and handle its reporting. 2013-06-27 20:29:04 +02:00
shypike
f33a952536 Update text files for 0.7.13 (again). 2013-06-13 21:35:14 +02:00
shypike
cc582b5321 Accept "nzbname" parameter in api-call "add url" even when a ZIP file is retrieved. 2013-06-13 21:33:00 +02:00
shypike
bdc526c91b Update text files for 0.7.13 2013-06-12 22:59:28 +02:00
shypike
52039c29b4 Accept partial par2 file when no others are available. 2013-06-12 21:03:29 +02:00
shypike
1dc4175f82 Add "special" option enable_recursion to control recursive unpacking. 2013-06-09 09:59:38 +02:00
shypike
92f70fc177 When post has just one par2-set, use full wildcard so that all files are repair and par candidates. 2013-06-01 11:21:00 +02:00
shypike
fd573208bd Fix encryption detection again. 2013-05-28 19:47:35 +02:00
shypike
ca9f10c12f Update text files for 0.7.12 2013-05-21 21:47:02 +02:00
shypike
49a72d0902 Update translations 2013-05-21 21:34:25 +02:00
shypike
6aafe3c531 Fix problem in encryption detection. 2013-05-07 21:17:06 +02:00
shypike
9e84696f96 Config and Wizard skins: fix problem with Unicode when using Chrome.
The Config skin and the Wizard were missing a proper Content-Type in <head>.
2013-04-14 12:02:33 +02:00
shypike
120c133d7a Implement robots.txt to keep web crawlers out.
Should not really be needed, because users should password-protect any
SABnzbd instance exposed to internet.
2013-04-12 21:25:56 +02:00
shypike
cf9713a4b0 Don't try to join a set of just one file (e.g. IMAGE.000) and reduce memory usage when joining large segments.
When there is a single file called something like IMAGE.000, don't try to join it.
The joining procedure tried to read an entire segment file into memory, which could lead to a string overflow.
Use shutil.copyfileobj() with a 24 MB buffer instead.
2013-04-12 21:24:53 +02:00
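A minimal sketch of the chunked join (assumed function and variable names, not the actual assembler code):

    import shutil

    def join_segments(segment_paths, dest_path):
        # Stream each segment in 24 MB chunks instead of reading whole files into memory.
        dest = open(dest_path, 'wb')
        try:
            for seg in segment_paths:
                src = open(seg, 'rb')
                try:
                    shutil.copyfileobj(src, dest, 24 * 1024 * 1024)
                finally:
                    src.close()
        finally:
            dest.close()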
shypike
d12e9889e7 Make encryption detection more careful. 2013-04-09 19:30:25 +02:00
shypike
711a546989 Make name sorting of the queue case-insensitive. 2013-03-20 23:12:13 +01:00
shypike
7f78e6fac1 Save job admin to disk when setting password or changing other attributes. 2013-03-02 13:09:24 +01:00
shypike
72533eefa4 Plush: add "resume pp" entry to pulldown menu, when pause_pp event is scheduled.
The option allows manual resume of a scheduled paused post-processing.
2013-02-26 20:33:58 +01:00
shypike
d9643d9ea8 Improve RAR detection. 2013-02-25 22:08:26 +01:00
shypike
2de71bb96c Enable "abort if hopeless" for pre-check as well. 2013-02-13 20:40:31 +01:00
578 changed files with 78518 additions and 124681 deletions

.gitignore (9 changed lines)

@@ -1,5 +1,5 @@
#Compiled python
*.py[cod]
*.py[co]
# Working folders for Win build
build/
@@ -10,14 +10,17 @@ srcdist/
# Generated email templates
email/*.tmpl
# Romanian ro.po is generated from ro.px, due to mapping to latin-1
ro.po
# Build results
SABnzbd*.zip
SABnzbd*.exe
SABnzbd*.gz
SABnzbd*.dmg
# WingIDE project files
*.wp[ru]
# WingIDE project file
*.wpr
# General junk
*.keep


@@ -1,5 +1,5 @@
*******************************************
*** This is SABnzbd 2.2.0 ***
*** This is SABnzbd 0.7.20 ***
*******************************************
SABnzbd is an open-source cross-platform binary newsreader.
It simplifies the process of downloading from Usenet dramatically,
@@ -10,19 +10,47 @@ SABnzbd also has a fully customizable user interface,
and offers a complete API for third-party applications to hook into.
There is an extensive Wiki on the use of SABnzbd.
https://sabnzbd.org/wiki/
http://wiki.sabnzbd.org/
IMPORTANT INFORMATION about release 0.7.0:
http://wiki.sabnzbd.org/introducing-0-7-0
Please also read the file "ISSUES.txt"
The organization of the download queue is different from 0.7.x (and older).
1.0.0 will not finish downloading an existing queue.
Also, your sabnzbd.ini file will be upgraded, making it
incompatible with older releases.
*******************************************
*** Upgrading from 0.7.x and below ***
*** Upgrading from 0.6.x ***
*******************************************
Empty your current queue
Stop SABnzbd.
Install new version
Start SABnzbd.
*******************************************
*** Upgrading from 0.5.x ***
*******************************************
Stop SABnzbd.
Uninstall current version, keeping the data.
Install new version
Start SABnzbd.
The organization of the download queue is different from 0.5.x.
0.6.x will finish downloading an existing queue, but you
cannot go back to an older version without losing your queue.
Also, your sabnzbd.ini file will be upgraded, making it
incompatible with release 0.5.x
*******************************************
*** Upgrading from 0.4.x ***
*******************************************
>>>>> PLEASE DOWNLOAD YOUR CURRENT QUEUE BEFORE UPGRADING <<<<<<
When upgrading from a 0.4.x release such as 0.4.12 your old settings will be kept.
You will however be given a fresh queue and history. If you have items in your queue
from the older version of SABnzbd, you can either re-import the nzb files if you kept
an nzb backup folder, or temporarily go back to 0.4.x until your queue is complete.
The history is now stored in a better format meaning future upgrades should be backwards
compatible.

CHANGELOG.txt (new file, 543 lines)

@@ -0,0 +1,543 @@
-------------------------------------------------------------------------------
0.7.20 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Make sure that Rating support is suppressed completely when disabled.
-------------------------------------------------------------------------------
0.7.20RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Update unrar for (Snow)Leopard on Intel to 5.11
- Logging of the Pystone performance of the system
-------------------------------------------------------------------------------
0.7.20RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- OSX unrar now really updated to 5.11
- API call "Retry" now returns new job id
- Support for NZB meta field 'x-rating-host'
- Fix email test issue
- Support of OSX Yosemite "Dark Mode"
-------------------------------------------------------------------------------
0.7.19 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Support double quotes to delineate parameters in category match lists.
- When a comma is present in a file name, quotes are needed when passed to a user script
- The after-unrar-check needs to take the "flat_unpack" option into account
- When sanitizing names, preserve "." and ".." elements in paths
-------------------------------------------------------------------------------
0.7.19RC4 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix damaging of job's destination path on Windows
-------------------------------------------------------------------------------
0.7.19RC3 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Improve password trial when the system uses an older unrar tool
-------------------------------------------------------------------------------
0.7.19RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Make matching of SFV file against RAR-sets case-insensitive
- Make sure final destination path is always sanitized and trimmed
- Improve "check for unwanted extensions"
- Upgrade unrar to version 5.11 (OSX and Windows)
- Limit article cache to 1G to prevent a memory size bug in the _yenc module
- Fix a number of problems with embedded passwords and folder size trimming
- Allow "float" timestamps in RSS feeds
- Expose 'rating_host' to Config->Special
- Add Finnish translation
-------------------------------------------------------------------------------
0.7.19RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- More oznzb filtering
- Fix OSX notification center problem
- Fix sort order of RSS feeds
- Prevent multiple pauses in "unwanted extensions" option
- Change renaming scheme for duplicate files
- Fix sorting of the queue
-------------------------------------------------------------------------------
0.7.18 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Update translations
-------------------------------------------------------------------------------
0.7.18RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Support for X-Failure header
- Support for detecting unwanted extensions inside RAR files
- Using priority Force will override duplicate detection
- Notification: Respect NotifyOSD-preference and allow testing of values from UI
- Prevent pseudo error message when testing "Notification Center"
- Testing email based on values in UI instead of stored config
- Don't trim file names when renaming them (so revert to old behavior)
- Add "pause_pp" to the API
- Pause/abort on encryption failed when pre-check was active
- Also remove colons ":" with option sanitize_safe
- Update DMG template
- Fix potential crash when unpacking due to unset variable
- Fix problem of cookie interference with other apps
- Add API function server_stats
- Support password embedding in file detail page and AddNZB dialog
- Pause/Remove posts when unwanted extensions are detected (like .exe)
- Fix issue with some RAR file sets leading to Windows "87" error
- Handle 5xx RSS feed error messages
-------------------------------------------------------------------------------
0.7.17Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Implement "retry-after" header to support rate-limiting
- Update OSX image to show Mavericks support
-------------------------------------------------------------------------------
0.7.17RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Support UNC paths in Sort expressions (Windows)
- URL in the queue should not show up "sanitized"
- Fix shutdown issue in PP queue
- Allow "Force" to be set as priority in files overview
- Special option "warn_dupl_jobs" to suppress/enable warnings for duplicate jobs
- Fix problem with "sanitize" in "renamer"
- Add (partial) RAR5 support
- Fix some more password-in-filename issues
- Prevent unwanted change of queue order after editing job details
- Add password entry boxes in smpl and Classic skins
- Prevent unrar zombies on some systems
-------------------------------------------------------------------------------
0.7.17RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix bug in rating system
- Fix multiple encryption password issues
- Allow Default category to be picked in Multi-Ops
- Allow "force" prio to be picked in NZO page
- Prevent PP queue timeout construction from keeping the CPU awake.
- Special option "flat_unpack"
-------------------------------------------------------------------------------
0.7.17Beta2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix regression errors in Beta1
-------------------------------------------------------------------------------
0.7.17Beta1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Add command line option --pidfile
- Another fix for false encryption reports
- Fix issue with OSX Mavericks Notification Center
- Add support for 'x-dnzb-propername', 'x-dnzb-episodename', 'x-dnzb-year'
in meta-data of NZB. To be used in TV Sorting
- Add OZnzb features; they need to be enabled in Config->Switches
- Add integration with the OZnzb indexer's enhanced functionality, allowing users to access ratings and reporting directly from the SABnzbd interface.
- Add automatic feedback to OZnzb on failed downloads (if enabled)
- Add X-DNZB-Failure and X-DNZB-Details support
- Fix issue with passwords embedded in file names
- Updated translations
-------------------------------------------------------------------------------
0.7.16Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix Config->Special UI crash
-------------------------------------------------------------------------------
0.7.15Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix false encryption alarms for some posts
- Add "password" dialog to Plush's job details page
- Add special "sanitize_safe" to remove bad Windows characters on other platforms
- Remove "news" section from Config skin
- Fix for faulty par2cmdline on some embedded Unix systems
- Add GUID fields to the History RSS feed.
-------------------------------------------------------------------------------
0.7.14Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Another encryption detection fix (special case)
- Missing mini-par2 sometimes prevents the other par2 files from being downloaded.
- Make sure even invalid RAR files are fed to unrar and handle its reporting.
-------------------------------------------------------------------------------
0.7.13Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Another encryption detection fix
- Special option "enable_recursion" to control recursive unpacking
- When post has just one par2 set, use wildcard so that all files are used
- Accept partial par2 file when only one is available
- Accept "nzbname" parameter in api-call "add url" even when a ZIP file is retrieved.
-------------------------------------------------------------------------------
0.7.12Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix issue in encryption detection
- Don't try to "join" a single X.000 file
- Fix memory overflow caused by very large files to be joined
- Make name sorting of the queue case-insensitive
- Save data to disk after changing job password or other attributes
- Add "resume_pp" entry to Plush pull-down menu when pause_pp event is scheduled
- Deploy "abort when completion not possible" method also in pre-download check
-------------------------------------------------------------------------------
0.7.11Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Bad articles from some servers were accepted as valid data
- Show warning when the decoder encounters I/O errors
- Generic Sort failed to rename files when an extra folder level was present in the RAR files
- Obfuscated file name support caused regular NZBs to verify slower
-------------------------------------------------------------------------------
0.7.10Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Disable obsolete newzbin bookmark readout
- Show speed when downloading in Forced mode while paused
- Plush History icons repair and unpack were swapped
- Try to repair rar/par sets with obfuscated names
- Reset "today" byte counters at midnight even when idle
- Display next RSS scan moment in Cfg->RSS
- An email about a failed download should say that the download failed
- Report errors coming from fully encrypted rar files
- Accept NNTP error 400 without "too many connections" clues as a transient error.
- Accept %fn (next to %fn.%ext) as end parameter in sorting strings.
-------------------------------------------------------------------------------
0.7.9Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix fatal error in decoder when encountering a malformed article
- Fix compatibility with free.xsusenet.com
- Small fix in smpl-black CSS
-------------------------------------------------------------------------------
0.7.8Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix problem with %fn substitution in Sorting
- Add special "wait_for_dfolder", enables waiting for external temp download folder
- Work-around for servers that do not support STAT command
- Removed articles are now listed separately in download report
- Add "abort" option to encryption detection
- Fix missing Retry link for "Out of retention" jobs.
- Option to abort download when it is clear that not enough data is available
- Support "nzbname" parameter in addfile/addlocalfile api calls for
ZIP files with a single NZB
- Support NZB-1.1 meta data "password" and "category"
- Don't retry an empty but correct NZB from an indexer
-------------------------------------------------------------------------------
0.7.7Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Windows/OSX: Update unrar to 4.20
- Fix some issues with orphaned items
- Generic sort didn't always rename media files in multi-part jobs properly
- Optional web-ui watchdog
- Always show RSS items in the same order as the original RSS feed
- Remove unusable folders from folder selector (Plush skin)
- Remove newzbin support
-------------------------------------------------------------------------------
0.7.6Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Recursive scanning when re-queuing downloaded NZB files
- Log "User-Agent" header of API calls
-------------------------------------------------------------------------------
0.7.6Beta2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- A damaged smallest par2 can block fetching of more par2 files
- Fix evaluation of schedules at startup
- Make check for running SABnzbd instance more robust
-------------------------------------------------------------------------------
0.7.6Beta1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Handle par2 sets that were renamed after creation
- Prevent blocking assembly of completed files (this resulted in
excessive CPU and memory usage)
- Fix speed issues with some Usenet servers due to unreachable IPv6 addresses
- Fix issues with SFV-based checks
- Prevent crash on Unix-Pythons that don't have the os.getloadavg() function
- Successfully pre-checked job lost its attributes when those were changed during check
- Remove version check when looking for a running instance of SABnzbd
-------------------------------------------------------------------------------
0.7.5Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Add missing %dn formula to Generic Sort
- Improve RSS logging
-------------------------------------------------------------------------------
0.7.5RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Prevent stuck jobs at end of pre-check.
- Fix issues with accented and special characters in names of downloaded files.
- Adjust nzbmatrix category table.
- Add 'prio_sort_list' special
- Add special option 'empty_postproc'.
- Prevent CherryPy crash when reading a cookie from another app which has a non-standard name.
- Prevent crash when trying to open non-existing "complete" folder from Windows System-tray icon.
- Fix problem with "Read" button when RSS feed name contains "&".
- Prevent unusual SFV files from crashing post-processing.
- OSX: Retina compatible menu-bar icons.
- Don't show speed and ETA when download is paused during post-processing
- Prevent soft-crash when api-function "addfile" is called without parameters.
- Add news channel frame
-------------------------------------------------------------------------------
0.7.4Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Pre-queue script no longer got the show/season/episode information.
- Prevent crash on startup when a fully downloaded job is still in download queue.
- New RSS feed should no longer be considered new after first, but empty readout.
- Make "auth" call backward-compatible with 0.6.x releases.
- Config->Notifications: email and growl server addresses should not be marked as "url" type.
- OSX: fix top menu queue info so that it shows total queue size
-------------------------------------------------------------------------------
0.7.4RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Pre-check failed to consider extra par2 files
- Fixed unjustified warning that can occur with OSX Growl 2.0
- Show memory usage on Linux systems
- Fix incorrect end-of-month quota reset
- Fix UI refresh issue when using Safari on iOS6
-------------------------------------------------------------------------------
0.7.4RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Remove potential queue stalling when downloading extra par2 files
- Make Windows version less eager to use par2-classic
- Fixed DMG images
- Add missing encoding directive to Plush and Classic skins
- Prevent oversized data in API-call "history"
-------------------------------------------------------------------------------
0.7.4Beta3 by The SABnzbd-Team
-------------------------------------------------------------------------------
- All three OSX builds in one DMG again
- Minor bugfixes
-------------------------------------------------------------------------------
0.7.4Beta2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix failure to fetch more par2-files for posts with badly formatted subject lines
- After successful pre-check, preserve a job's position in the queue
- Restore SABnzbd icon for Growl
- Fix "check new releases" option in Config skin
- Separate DMG files for OSX Leopard/SL, Lion and MLion
-------------------------------------------------------------------------------
0.7.4Beta1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- OSX Mountain Lion Notification Center support
- OSX Mountain Lion improved "keep awake" support
- OSX: separate builds: one for Mountain Lion and one for all others
- OSX: removed 64-bit code
- Scheduler: action can now run on multiple weekdays
- Scheduler: add "remove failed jobs" action
- Special option: rss_odd_titles (see Wiki)
- Support for HTTPS chain files (needed when you buy your own certificate)
- Prevent jobs from showing up in queue and history simultaneously
- Add parameter 'pp_active' to history elements in qstatus API call
- Fix some minor par2 handling bugs
- Prevent potential crash when an actively downloading job is deleted from the queue
- Special option: 'overwrite_files' (See Wiki)
- Don't try an SFV check when a retried job was already successfully verified by par2
- Enable compression of API call results
- Log failed attempts to log in to the Web UI
- A job with "forced" priority should keep that when fetching more par2 files
- Updated translations
-------------------------------------------------------------------------------
0.7.3Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Rename Special "random_server_ip" to "randomize_server_ip" so that we
can force the default to "Off". "On" kills speed on some servers.
- Ignore pseudo NZB files that start with a period in the name
- SFV failure now listed in History instead of issuing warnings
- Translation updates
- "502" errors about payments/credits will now block a server
-------------------------------------------------------------------------------
0.7.3Beta2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Try to keep OSX Mountain Lion awake as long as downloading/postprocessing runs
- Prevent queue deadlock in case of fatally damaged par2 files
- Add RSS filter-enable checkboxes to Plush, Smpl and Classic skins
- Fix problem with saving modified parameters of an already enabled server
- Extend "check new release" option with test releases
-------------------------------------------------------------------------------
0.7.3Beta1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Correct several errors in Sort function
- Improve organization of Config->Servers
- Support for nzbxxx.com
- Make detection of samples less aggressive
- Some minor corrections
-------------------------------------------------------------------------------
0.7.2Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix for NZB-icon issue when 0.7.0 was previously installed
- Check validity of totals9.sab file
- Fix startup problem when localhost has unexpected order of IP addresses
-------------------------------------------------------------------------------
0.7.2RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Improve support for nzbsrus.com
- Don't try to show NZB age when not known yet
- Prevent systems with unresolvable hostnames from always using 0.0.0.0
-------------------------------------------------------------------------------
0.7.2RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix fatal error in nzbsrus.com support
- Initial "quota left" was not set correctly when enabling quota
- Report incorrect RSS filter expressions (instead of aborting analysis)
- Improve detection of invalid articles (so that backup server will be tried)
- Windows installer: improve NZB association so that a reboot isn't needed
- Windows installer: don't remove settings by default when uninstalling
- Fix sorting of rar files in a job so that .rar precedes .r00
-------------------------------------------------------------------------------
0.7.1Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Disable VC90 check in Windows Installer as long as we're still on Python 2.5
- Windows: make sure \\server\share notation is never seen as a relative path
-------------------------------------------------------------------------------
0.7.1RC5 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix signing of OSX DMG
- Fix endless par2-fetch loop after retrying failed job
- Don't send "bad fetch" email when emailing is off
- Add some support for nzbsrus.com's non-VIP limiting
-------------------------------------------------------------------------------
0.7.1RC4 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix failure to grab NZBs from indexers that send compressed files.
-------------------------------------------------------------------------------
0.7.1RC3 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fixed stalling par2 fetches (after first verification run)
- Fixed retry behaviour of NZB fetching from URL
and add handling of nzbsrus.com error codes
- Make sure that all malformed articles are retried on another server
- Add no_ipv6 option that suppresses listening on ::1
(to be used if your system cannot handle that)
- Prevent crash in QuickCheck when expected par2 file wasn't downloaded
- Verification/repair would not be executed properly when one or more RAR files
missed their first article.
- API calls "addurl" and "addid" (newzbin) can be used interchangeably
(Fixes a problem in Qouch)
-------------------------------------------------------------------------------
0.7.1RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Improved backup of sabnzbd.ini file
Will use the backup when the original is gone or has become corrupt
- Windows: Using ::1 as single webhost address would start IE instead of default browser
-------------------------------------------------------------------------------
0.7.1RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Plush skin: fix problems with pull-down menus in Mobile Safari
- On some Linux and OSX systems using localhost would still make SABnzbd
give access to other computers
- Windows: the installer did not set an icon when associating NZB files with SABnzbd
- Fix problem that the Opera browser had with Config->Servers
- Retry a few times when accessing a mounted drive to create the
final destination folder
- Reduce load caused by WinTray and OSX topmenu
-------------------------------------------------------------------------------
0.7.0Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Updated translations
-------------------------------------------------------------------------------
0.7.0RC2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Suppress permission errors on paths containing ".AppleDouble" or ".DS_Store"
(Required for NAS systems that support Apple AFP shares)
- OSX/Windows: Set article cache to 200M when not already set.
- Pre-check: lower default minimum completion rate to 100.2%
-------------------------------------------------------------------------------
0.7.0RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix for rare crash in par2 fetching
- Another /nomedia fix
- Quota reset wasn't done when quota-reset-time was passed while SABnzbd wasn't running.
- Pre-check: required ratio for NZB without par2 files should be 100%
and not the "safe" ratio
-------------------------------------------------------------------------------
0.7.0Beta8 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Disabled the .nomedia marker file feature.
Those who want to try it, use the "nomedia_marker" setting in Config->Special
It remains an experimental feature without guarantees
- Add missing info in email about failed pre-check
- Updated translations
-------------------------------------------------------------------------------
0.7.0Beta7 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix for .nomedia files not being deleted
- Fix NZB re-queueing (due to .nomedia remaining)
- Polish was missing in Windows installer and Dutch was incorrect
- When Sort renames auxiliary files, it should disregard case
- Fix crash in Wizard on some Linux systems
-------------------------------------------------------------------------------
0.7.0Beta6 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Upgrade unzip for Windows to 6.00 (supports ZIPs above 2G)
- Lower threshold for pre-check to 100.5%
- Fix removal of .nomedia file when using Sorting
- Add Polish translation (using reduced character set)
- Extension-based cleanup list now also removes extension-only files like ".sfv".
- Several small issues
-------------------------------------------------------------------------------
0.7.0Beta5 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Solved serious connection problem with some providers
- Windows Tray now has the "restart" entries under a Troubleshoot menu
- Fix newzbin entries in History's "Source" field
- During unpacking the destination folder will contain a ".nomedia" file
which will keep mediaplayers temporarily from indexing
- Pre-check jobs now require 101% completion rate (with a "special" parameter)
- Unified OSX DMG
-------------------------------------------------------------------------------
0.7.0Beta4 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Add Portuguese (Brazil) language
- Updated translations
- Some odd NZB files led to blank initial filenames in file overview
- Jobs that have 99.91%-99.99% completion rate should not be rounded to 100.0%
- Windows Tray icon now has entry to show "complete" folder
- Some minor fixes in code and Config skin
- Individual RSS filter toggle
-------------------------------------------------------------------------------
0.7.0Beta3 by The SABnzbd-Team
-------------------------------------------------------------------------------
- OSX/Linux: permissions are now also applied to the "temporary download folder"
- Fix some issues in the Config skin.
- The default for "apply max retries only on optional servers" is now 0,
thus enabling the new anti-deadlock behaviour for all servers
- Fix incompatibility with nzbsa.co.za indexer
- Log all API calls (in debug mode)
- Restore Python2.5 compatibility in growler.py
- After a language change, register again with Growl
- Clean up the api-call auth. It will now give preference to 'apikey'.
- Fix detection of retry-able history entries for case-insensitive file systems.
- API-calls "addfile" and "addlocalfile" returned an incorrect status value.
- Add support for the peculiar Usenet provider "free.xsusenet.com".
- OSX menu now uses the same formatting for speed as the skins.
- Accept multiple items in API-calls "addurl" and "addid".
The "name" and "nzbname" keywords can be repeated.
-------------------------------------------------------------------------------
0.7.0Beta2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix behavior when using host address 0.0.0.0 on a system
that doesn't resolve localhost properly
- Add Spanish translation
-------------------------------------------------------------------------------
0.7.0Beta1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Updated nzbmatrix categories
- "Special" option allows incomplete/partial NZB files
- Forbid "complete" being a subfolder of "incomplete"
-------------------------------------------------------------------------------
0.7.0Alpha3 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix failing join-by-par2
- Prevent API crash when deleting non-existing history item
- Prevent UI crash message when looking at NZB details page of finished job
- Config skin: fix path completion in Config->Folders
- Config skin: fixes to support "hide behind proxy"
- Keep using unrar 4.10 for OSX Leopard and older, due to PPC support
-------------------------------------------------------------------------------
0.7.0Alpha2 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Fix disabled options in Config skin
- Remove flags from the Wizard and Config skin
- Replace real spaces in RSS-urls with %20
- Prevent double entries in History's "Source" section
- Prevent crash when OSDNotify doesn't work properly
- Small improvements in Windows installer
-------------------------------------------------------------------------------
0.7.0Alpha1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Not tracked
-------------------------------------------------------------------------------
0.6.15Final by The SABnzbd-Team
-------------------------------------------------------------------------------
- Flag post-processing as failed when files cannot be moved/copied to destination
- Fixed another newzbin link
-------------------------------------------------------------------------------
0.6.15RC1 by The SABnzbd-Team
-------------------------------------------------------------------------------
- Change newzbin URL
- Prevent setting watched-folder speed to 0 (while having no watched-folder)
from triggering an infinite loop.
- Move "locale" construction from Plush skin to Python code.
Some embedded Linux platforms show unstable behavior with the original construction.
- Extend OSX menu with troubleshooting options
- Add trailing slashes to internal Plush paths to support reverse proxies better.
- Ignore whitespace around regular expressions in RSS filters.
- Prevent crash on restoring URL-fetches when using --repair-all option
- Fix "Repair" button on smpl Connection page. Current path fails when using a reverse proxy
- Suppress "incompatible feed" error when doing a scheduled/automatic RSS read-out.
- Add special setting to use "pickle" library instead of cPickle.


@@ -1,5 +1,5 @@
(c) Copyright 2007-2017 by "The SABnzbd-team" <team@sabnzbd.org>
(c) Copyright 2007-2014 by "The SABnzbd-team" <team@sabnzbd.org>
The SABnzbd-team is:
@@ -7,7 +7,7 @@ Active team:
ShyPike <shypike@sabnzbd.org>
inpheaux <inpheaux@sabnzbd.org>
zoggy <zoggy@sabnzbd.org>
Safihre <safihre@sabnzbd.org>
OZnzb-dev <sabdev@oznzb.com>
Sleeping members
sw1tch <switch@sabnzbd.org>
pairofdimes <pairofdimes@sabnzbd.org>
@@ -16,20 +16,18 @@ Honorary member (and original author)
Gregor Kaufmann <tdian@users.sourceforge.net>
The main contributors and moderators of the translations
Danish: Rene (nordjyden6), Scott
Dutch: ShyPike, Safihre
French : rAf, Fox Ace, Fred, Morback, Jih
German: Severin Heiniger, Tim Hartmann, DonPizza, Alex
Norwegian: Protx, mjelva, TomP, John
Danish: Rene (nordjyden6)
Dutch: ShyPike
French : rAf and Fox Ace
German: Severin Heiniger
Norwegian: Protx
Romanian: nicusor
Serbian: Ozzii, Krišan Darko
Swedish: Malmis, Kim Joahnsson, Patrik-liind, Chris M
Spanish: Syquus, Adolfo Jayme
Portuguese (Brazil): lrrosa, diegosps
Serbian: Ozzii
Swedish: Malmis
Spanish: Syquus
Portuguese (Brazil): lrrosa
Russian: Pavel Maryanov
Polish: Tomasz 'Zen' Napierala
Chinese: XsLiDian
Finnish: Matti Ylönen
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License

Dockerfile Normal file

@@ -0,0 +1,18 @@
FROM ubuntu:14.04
MAINTAINER Johannes 'fish' Ziemke <docker@freigeist.org> @discordianfish
RUN echo deb http://archive.ubuntu.com/ubuntu/ trusty multiverse >> \
/etc/apt/sources.list
RUN apt-get -qy update && apt-get -qy install python python-cheetah unrar \
unzip python-yenc par2
RUN useradd sabnzbd -d /sab -m && chown -R sabnzbd:sabnzbd /sab
VOLUME /sab
ADD . /sabnzbd
EXPOSE 8080
USER sabnzbd
ENV HOME /sab
ENTRYPOINT [ "python", "/sabnzbd/SABnzbd.py", "-s", "0.0.0.0:8080" ]


@@ -1,10 +1,10 @@
SABnzbd 2.2.0
SABnzbd 0.7.20
-------------------------------------------------------------------------------
0) LICENSE
-------------------------------------------------------------------------------
(c) Copyright 2007-2017 by "The SABnzbd-team" <team@sabnzbd.org>
(c) Copyright 2007-2014 by "The SABnzbd-team" <team@sabnzbd.org>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
@@ -21,31 +21,30 @@ along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-------------------------------------------------------------------------------
1) INSTALL with the Windows installer
1) INSTALL with the Win32 installer
-------------------------------------------------------------------------------
Just run the downloaded EXE file and the installer will start.
It's just a simple standard installer.
After installation, find the SABnzbd program in the Start menu and start it.
After installaton, find the SABnzbd program in the Start menu and start it.
Within 5-10 seconds your web browser will start and show the user interface.
Use the "Help" button in the web-interface to be directed to the Help Wiki.
-------------------------------------------------------------------------------
2) INSTALL pre-built Windows binaries
2) INSTALL pre-built Win32 binaries
-------------------------------------------------------------------------------
Unzip pre-built version to any folder of your liking.
Start the SABnzbd.exe program.
Within 5-10 seconds your web browser will start and show the user interface.
Use the "Help" button in the web-interface to be directed to the Help Wiki.
-------------------------------------------------------------------------------
3) INSTALL pre-built macOS binaries
3) INSTALL pre-built OSX binaries
-------------------------------------------------------------------------------
Download the DMG file, mount and drag the SABnzbd icon to Programs.
Just like you do with so many apps.
Make sure you pick the right folder, depending on your macOS version.
Make sure you pick the right folder, depending on your OSX version.
-------------------------------------------------------------------------------
@@ -55,36 +54,42 @@ Make sure you pick the right folder, depending on your macOS version.
You need to have Python installed plus some non-standard Python modules
and a few tools.
Unix/Linux/macOS
Python-2.7.latest http://www.python.org (2.7.9+ recommended)
Unix/Linux/OSX
Python-2.5, 2.6 or 2.7 http://www.python.org
OSX Leopard/SnowLeopard
Python 2.6 http://www.activestate.com
OSX Lion/MountainLion
Apple Python 2.7 Included in OSX (default)
Windows
Python-2.7.latest http://www.python.org (2.7.9+ recommended)
PyWin32 use "pip install pypiwin32"
Python-2.7.latest http://www.activestate.com
Essential modules
cheetah-2.0.1+ use "pip install cheetah"
par2cmdline >= 0.4 https://github.com/Parchive/par2cmdline/releases
See also: https://sabnzbd.org/wiki/installation/multicore-par2
unrar >= 5.00+ http://www.rarlab.com/rar_add.htm
cheetah-2.0.1+ http://www.cheetahtemplate.org/ (or use "pypm install cheetah")
par2cmdline >= 0.4 http://parchive.sourceforge.net/
http://chuchusoft.com/par2_tbb/index.html (multi-core)
Optional modules
unzip >= 6.00 http://www.info-zip.org/
7zip >= 9.20 http://www.7zip.org/
sabyenc == 3.0.2 use "pip install sabyenc"
More information: https://sabnzbd.org/sabyenc
openssl >= 1.0.0 http://www.openssl.org/
v0.9.8 will work, but limits certificate validation
cryptography >= 1.0 use "pip install cryptography"
Enables certificate generation and detection of encrypted RAR-files
unrar >= 3.90+ http://www.rarlab.com/rar_add.htm
unzip >= 5.52 http://www.info-zip.org/
yenc module >= 0.3 http://sabnzbd.sourceforge.net/yenc-0.3.tar.gz
http://sabnzbd.sourceforge.net/yenc-0.3-w32fixed.zip (Win32-only)
Optional modules Unix/Linux/macOS
Optional modules Windows
pyopenssl >= 0.11 http://pypi.python.org/pypi/pyOpenSSL
(Binaries, including the OpenSSL libraries)
Optional modules Unix/Linux/OSX
pyopenssl >= 0.11 http://pypi.python.org/pypi/pyOpenSSL
openssl => v0.9.8g+ http://www.openssl.org/
Make sure the OpenSSL libraries match with PyOpenSSL
pynotify Should be part of GTK for Python support on Debian/Ubuntu
If not, you cannot use the NotifyOSD feature.
python-dbus Enable option to Shutdown/Restart/Standby PC on queue finish.
Embedded modules (preferably use the included version)
CherryPy-8.1.2 with patches http://www.cherrypy.org
Embedded modules (only use the included version)
CherryPy-3.2 rev2138 with patches http://www.cherrypy.org
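
For the modules that the newer instructions above say to install with pip, a rough sketch (package names taken verbatim from the list above, versions not pinned):

```
pip install cheetah sabyenc cryptography
# Windows only, for the PyWin32 bindings mentioned above:
pip install pypiwin32
```
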
Unpack the ZIP-file containing the SABnzbd sources to any folder of your liking.
@@ -94,7 +99,6 @@ Start this from a shell terminal (or command prompt):
Start this from a shell terminal (or command prompt):
python SABnzbd.py
Within 5-10 seconds your web browser will start and show the user interface.
Use the "Help" button in the web-interface to be directed to the Help Wiki.
@@ -122,7 +126,7 @@ may help you solve problems easier.
-------------------------------------------------------------------------------
Visit the WIKI site:
https://sabnzbd.org/wiki/
http://wiki.sabnzbd.org/
-------------------------------------------------------------------------------
@@ -131,4 +135,4 @@ Visit the WIKI site:
Several parts of SABnzbd were built by other people, illustrating the
wonderful world of Free Open Source Software.
See the licenses folder of the main program and of the skin folders.
See the licences folder of the main program and of the skin folders.


@@ -12,25 +12,34 @@
Windows-only:
If you keep having trouble with par2 multicore you can disable it
in Config->Switches.
This will force the use of the old and tried, but slower par2cmdline program.
This will force the use of the old and tried, but slower par2-classic program.
- A bug in Windows 7 may cause severe memory leaks when you use SABnzbd in
combination with some virus scanners and firewalls.
combination with some virus scanners and firewals.
Install this hotfix:
Description: http://support.microsoft.com/kb/979223/en-us
Download location: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=979223&kbln=en-us
- Windows cannot handle pathnames longer than 254 characters.
Currently, SABnzbd doesn't handle this problem gracefully.
We have added the INI-only option "folder_length_max" in which you can set
a maximum folder name size.
For Windows the default is 128 and for others 256.
A quite safe value for Windows would be 64.
SABnzbd will take care of overlapping names.
See: http://wiki.sabnzbd.org/configure-special-0-7
- Some Usenet servers have intermittent login (or other) problems.
For these the server blocking method is not very favourable.
There is an INI-only option that will limit blocks to 1 minute.
no_penalties = 1
See: https://sabnzbd.org/wiki/configuration/2.2/special
See: http://wiki.sabnzbd.org/configure-special-0-7
- Some third-party utilities try to probe the SABnzbd API in such a way that you will
often see warnings about unauthenticated access.
If you are sure these probes are harmless, you can suppress the warnings by
setting the option "api_warnings" to 0.
See: https://sabnzbd.org/wiki/configuration/2.2/special
See: http://wiki.sabnzbd.org/configure-special-0-7
- On OSX you may encounter downloaded files with foreign characters.
The par2 repair may fail when the files were created on a Windows system.
@@ -39,9 +48,9 @@
- On Linux when you download files they may have the wrong character encoding.
You will see this only when downloaded files contain accented characters.
You need to fix it yourself by running the convmv utility (available for most Linux platforms).
You need to fix it yourself by running the convmv utility (availaible for most Linux platforms).
Possibly the file system override setting 'fsys_type' might solve things:
See: https://sabnzbd.org/wiki/configuration/2.2/special
See: http://wiki.sabnzbd.org/configure-special-0-7
- The "Watched Folder" sometimes fails to delete the NZB files it has
processed. This happens when other software still accesses these files.
@@ -53,15 +62,15 @@
memory systems (e.g. a NAS device or a router) a challenge.
- SABnzbd is not compatible with some software firewall versions.
The Microsoft Windows Firewall works fine, but remember to tell this
The Mircosoft Windows Firewall works fine, but remember to tell this
firewall that SABnzbd is allowed to talk to other computers.
- When SABnzbd cannot send notification emails, check your virus scanner,
- When SABnzbd cannot send nofication emails, check your virus scanner,
firewall or security suite. It may be blocking outgoing email.
- When you are using external drives or network shares on OSX or Linux
make sure that the drives are mounted.
The operating system will simply redirect your files to alternative locations.
The operating system wil simply redirect your files to alternative locations.
You may have trouble finding the files when mounting the drive later.
On OSX, SABnzbd will not create new folders in /Volumes.
The result will be a failed job that can be retried once the volume has been mounted.
@@ -81,4 +90,4 @@
- Squeeze Linux
There is a "special" option that will allow you to select an alternative library.
use_pickle = 1
See: https://sabnzbd.org/wiki/configuration/2.2/special
See: http://wiki.sabnzbd.org/configure-special-0-7


@@ -1,4 +1,4 @@
(c) Copyright 2007-2017 by "The SABnzbd-team" <team@sabnzbd.org>
(c) Copyright 2007-2014 by "The SABnzbd-team" <team@sabnzbd.org>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License

NSIS_Installer.nsi Normal file

@@ -0,0 +1,462 @@
; -*- coding: latin-1 -*-
;
; Copyright 2008-2012 The SABnzbd-Team <team@sabnzbd.org>
;
; This program is free software; you can redistribute it and/or
; modify it under the terms of the GNU General Public License
; as published by the Free Software Foundation; either version 2
; of the License, or (at your option) any later version.
;
; This program is distributed in the hope that it will be useful,
; but WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
; GNU General Public License for more details.
;
; You should have received a copy of the GNU General Public License
; along with this program; if not, write to the Free Software
; Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
!addplugindir win\nsis\Plugins
!addincludedir win\nsis\Include
!include "MUI2.nsh"
!include "registerExtension.nsh"
!include "FileFunc.nsh"
!include "LogicLib.nsh"
!include "WinVer.nsh"
!include "WinSxSQuery.nsh"
;------------------------------------------------------------------
;
; Macro for removing an existing and the current installation
; It is shared by the installer and the uninstaller.
; Make sure it covers 0.5.x, 0.6.x and 0.7.x in one go.
;
!define RemovePrev "!insertmacro RemovePrev"
!macro RemovePrev idir
Delete "${idir}\email\email-de.tmpl"
Delete "${idir}\email\email-en.tmpl"
Delete "${idir}\email\email-nl.tmpl"
Delete "${idir}\email\email-fr.tmpl"
Delete "${idir}\email\email-sv.tmpl"
Delete "${idir}\email\email-da.tmpl"
Delete "${idir}\email\email-nb.tmpl"
Delete "${idir}\email\email-pl.tmpl"
Delete "${idir}\email\email-ro.tmpl"
Delete "${idir}\email\email-sr.tmpl"
Delete "${idir}\email\email-es.tmpl"
Delete "${idir}\email\email-pt_BR.tmpl"
Delete "${idir}\email\email-sr.tmpl"
Delete "${idir}\email\email-ru.tmpl"
Delete "${idir}\email\rss-de.tmpl"
Delete "${idir}\email\rss-en.tmpl"
Delete "${idir}\email\rss-nl.tmpl"
Delete "${idir}\email\rss-pl.tmpl"
Delete "${idir}\email\rss-fr.tmpl"
Delete "${idir}\email\rss-sv.tmpl"
Delete "${idir}\email\rss-da.tmpl"
Delete "${idir}\email\rss-nb.tmpl"
Delete "${idir}\email\rss-ro.tmpl"
Delete "${idir}\email\rss-sr.tmpl"
Delete "${idir}\email\rss-es.tmpl"
Delete "${idir}\email\rss-pt_BR.tmpl"
Delete "${idir}\email\rss-sr.tmpl"
Delete "${idir}\email\rss-ru.tmpl"
Delete "${idir}\email\badfetch-da.tmpl"
Delete "${idir}\email\badfetch-de.tmpl"
Delete "${idir}\email\badfetch-en.tmpl"
Delete "${idir}\email\badfetch-fr.tmpl"
Delete "${idir}\email\badfetch-nb.tmpl"
Delete "${idir}\email\badfetch-nl.tmpl"
Delete "${idir}\email\badfetch-pl.tmpl"
Delete "${idir}\email\badfetch-ro.tmpl"
Delete "${idir}\email\badfetch-sr.tmpl"
Delete "${idir}\email\badfetch-sv.tmpl"
Delete "${idir}\email\badfetch-sr.tmpl"
Delete "${idir}\email\badfetch-es.tmpl"
Delete "${idir}\email\badfetch-pt_BR.tmpl"
Delete "${idir}\email\badfetch-ru.tmpl"
RMDir "${idir}\email"
RMDir /r "${idir}\locale"
RMDir /r "${idir}\interfaces\Classic"
RMDir /r "${idir}\interfaces\Plush"
RMDir /r "${idir}\interfaces\smpl"
RMDir /r "${idir}\interfaces\Mobile"
RMDir /r "${idir}\interfaces\wizard"
RMDir /r "${idir}\interfaces\Config"
RMDir "${idir}\interfaces"
RMDir /r "${idir}\win\curl"
RMDir /r "${idir}\win\par2"
RMDir /r "${idir}\win\unrar"
RMDir /r "${idir}\win\unzip"
RMDir /r "${idir}\win"
RMDir /r "${idir}\licenses"
RMDir /r "${idir}\lib\"
RMDir /r "${idir}\po\email"
RMDir /r "${idir}\po\main"
RMDir /r "${idir}\po\nsis"
RMDir "${idir}\po"
RMDir /r "${idir}\icons"
Delete "${idir}\CHANGELOG.txt"
Delete "${idir}\COPYRIGHT.txt"
Delete "${idir}\email.tmpl"
Delete "${idir}\GPL2.txt"
Delete "${idir}\GPL3.txt"
Delete "${idir}\INSTALL.txt"
Delete "${idir}\ISSUES.txt"
Delete "${idir}\LICENSE.txt"
Delete "${idir}\nzbmatrix.txt"
Delete "${idir}\MSVCR71.dll"
Delete "${idir}\nzb.ico"
Delete "${idir}\sabnzbd.ico"
Delete "${idir}\PKG-INFO"
Delete "${idir}\python25.dll"
Delete "${idir}\python26.dll"
Delete "${idir}\python27.dll"
Delete "${idir}\README.txt"
Delete "${idir}\README.rtf"
Delete "${idir}\ABOUT.txt"
Delete "${idir}\IMPORTANT_MESSAGE.txt"
Delete "${idir}\SABnzbd-console.exe"
Delete "${idir}\SABnzbd.exe"
Delete "${idir}\SABnzbd.exe.log"
Delete "${idir}\SABnzbd-helper.exe"
Delete "${idir}\SABnzbd-service.exe"
Delete "${idir}\Sample-PostProc.cmd"
Delete "${idir}\Uninstall.exe"
Delete "${idir}\w9xpopen.exe"
RMDir "${idir}"
!macroend
;------------------------------------------------------------------
; Define names of the product
Name "${SAB_PRODUCT}"
OutFile "${SAB_FILE}"
InstallDir "$PROGRAMFILES\SABnzbd"
InstallDirRegKey HKEY_LOCAL_MACHINE "SOFTWARE\SABnzbd" ""
;DirText $(MsgSelectDir)
;------------------------------------------------------------------
; Some default compiler settings (uncomment and change at will):
SetCompress auto ; (can be off or force)
SetDatablockOptimize on ; (can be off)
CRCCheck on ; (can be off)
AutoCloseWindow false ; (can be true for the window go away automatically at end)
ShowInstDetails hide ; (can be show to have them shown, or nevershow to disable)
SetDateSave off ; (can be on to have files restored to their orginal date)
WindowIcon on
;------------------------------------------------------------------
; Vista/Win7 redirects $SMPROGRAMS to all users without this
RequestExecutionLevel admin
FileErrorText "If you have no admin rights, try to install into a user directory."
;------------------------------------------------------------------
;Variables
Var MUI_TEMP
Var STARTMENU_FOLDER
;------------------------------------------------------------------
;Interface Settings
!define MUI_ABORTWARNING
;Show all languages, despite user's codepage
!define MUI_LANGDLL_ALLLANGUAGES
!define MUI_ICON "interfaces/Classic/templates/static/images/favicon.ico"
;--------------------------------
;Pages
!insertmacro MUI_PAGE_LICENSE "LICENSE.txt"
!define MUI_COMPONENTSPAGE_NODESC
!insertmacro MUI_PAGE_COMPONENTS
!insertmacro MUI_PAGE_DIRECTORY
;Start Menu Folder Page Configuration
!define MUI_STARTMENUPAGE_REGISTRY_ROOT "HKCU"
!define MUI_STARTMENUPAGE_REGISTRY_KEY "Software\SABnzbd"
!define MUI_STARTMENUPAGE_REGISTRY_VALUENAME "Start Menu Folder"
!define MUI_STARTMENUPAGE_DEFAULTFOLDER "SABnzbd"
;Remember the installer language
!define MUI_LANGDLL_REGISTRY_ROOT "HKCU"
!define MUI_LANGDLL_REGISTRY_KEY "Software\SABnzbd"
!define MUI_LANGDLL_REGISTRY_VALUENAME "Installer Language"
!insertmacro MUI_PAGE_STARTMENU Application $STARTMENU_FOLDER
!insertmacro MUI_PAGE_INSTFILES
!define MUI_FINISHPAGE_RUN
!define MUI_FINISHPAGE_RUN_FUNCTION "LaunchLink"
!define MUI_FINISHPAGE_RUN_TEXT $(MsgGoWiki)
!define MUI_FINISHPAGE_RUN_NOTCHECKED
!define MUI_FINISHPAGE_SHOWREADME "$INSTDIR\README.txt"
!define MUI_FINISHPAGE_SHOWREADME_TEXT $(MsgShowRelNote)
!define MUI_FINISHPAGE_LINK $(MsgSupportUs)
!define MUI_FINISHPAGE_LINK_LOCATION "http://www.sabnzbd.org/contribute/"
!insertmacro MUI_PAGE_FINISH
!insertmacro MUI_UNPAGE_CONFIRM
!define MUI_UNPAGE_COMPONENTSPAGE_NODESC
!insertmacro MUI_UNPAGE_COMPONENTS
!insertmacro MUI_UNPAGE_INSTFILES
;------------------------------------------------------------------
; Set supported languages
!insertmacro MUI_LANGUAGE "English" ;first language is the default language
!insertmacro MUI_LANGUAGE "French"
!insertmacro MUI_LANGUAGE "German"
!insertmacro MUI_LANGUAGE "Dutch"
!insertmacro MUI_LANGUAGE "Polish"
!insertmacro MUI_LANGUAGE "Swedish"
!insertmacro MUI_LANGUAGE "Danish"
!insertmacro MUI_LANGUAGE "NORWEGIAN"
!insertmacro MUI_LANGUAGE "Romanian"
!insertmacro MUI_LANGUAGE "Spanish"
!insertmacro MUI_LANGUAGE "PortugueseBR"
;------------------------------------------------------------------
;Reserve Files
;If you are using solid compression, files that are required before
;the actual installation should be stored first in the data block,
;because this will make your installer start faster.
!insertmacro MUI_RESERVEFILE_LANGDLL
;------------------------------------------------------------------
Function LaunchLink
ExecShell "" "http://wiki.sabnzbd.org/"
FunctionEnd
;------------------------------------------------------------------
Function .onInit
!insertmacro MUI_LANGDLL_DISPLAY
;--------------------------------
;make sure that the required MS Runtimes are installed
;
goto nodownload ; Not needed while still using Python25
runtime_loop:
push 'msvcr90.dll'
push 'Microsoft.VC90.CRT,version="9.0.21022.8",type="win32",processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b"'
call WinSxS_HasAssembly
pop $0
StrCmp $0 "1" nodownload
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION $(MsgNoRuntime) /SD IDOK IDOK download IDCANCEL noinstall
download:
inetc::get /BANNER $(MsgDLRuntime) \
"http://download.microsoft.com/download/1/1/1/1116b75a-9ec3-481a-a3c8-1777b5381140/vcredist_x86.exe" \
"$TEMP\vcredist_x86.exe"
Pop $0
DetailPrint "Downloaded MS runtime library"
StrCmp $0 "OK" dlok
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION $(MsgDLError) /SD IDCANCEL IDCANCEL exitinstall IDOK download
dlok:
ExecWait "$TEMP\vcredist_x86.exe" $1
DetailPrint "VCRESULT=$1"
DetailPrint "Tried to install MS runtime library"
delete "$TEMP\vcredist_x86.exe"
StrCmp $1 "0" nodownload
noinstall:
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION $(MsgDLNeed) /SD IDOK IDOK runtime_loop IDCANCEL exitinstall
Abort
nodownload:
;------------------------------------------------------------------
;make sure user terminates sabnzbd.exe or else abort
;
loop:
StrCpy $0 "SABnzbd.exe"
KillProc::FindProcesses
StrCmp $0 "0" endcheck
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION $(MsgCloseSab) /SD IDCANCEL IDOK loop IDCANCEL exitinstall
exitinstall:
Abort
endcheck:
FunctionEnd
;------------------------------------------------------------------
; SECTION main program
;
Section "SABnzbd" SecDummy
SetOutPath "$INSTDIR"
;------------------------------------------------------------------
; Make sure old versions are gone
IfFileExists $INSTDIR\sabnzbd.exe 0 endWarnExist
IfFileExists $INSTDIR\python27.dll 0 endWarnExist
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION "$(MsgRemoveOld)$\n$\n$(MsgRemoveOld2)" IDOK uninst
Abort
uninst:
${RemovePrev} "$INSTDIR"
endWarnExist:
; add files / whatever that need to be installed here.
File /r "dist\*"
WriteRegStr HKEY_LOCAL_MACHINE "SOFTWARE\SABnzbd" "" "$INSTDIR"
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "DisplayName" "SABnzbd ${SAB_VERSION}"
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "UninstallString" '"$INSTDIR\uninstall.exe"'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "DisplayVersion" '${SAB_VERSION}'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "Publisher" 'The SABnzbd Team'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "HelpLink" 'http://forums.sabnzbd.org/'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "URLInfoAbout" 'http://wiki.sabnzbd.org/'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "URLUpdateInfo" 'http://sabnzbd.org/'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "Comments" 'The automated Usenet download tool'
WriteRegStr HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "DisplayIcon" '$INSTDIR\interfaces\Classic\templates\static\images\favicon.ico'
WriteRegDWORD HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "EstimatedSize" 25674
WriteRegDWORD HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "NoRepair" -1
WriteRegDWORD HKEY_LOCAL_MACHINE "Software\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd" "NoModify" -1
; write out uninstaller
WriteUninstaller "$INSTDIR\Uninstall.exe"
!insertmacro MUI_STARTMENU_WRITE_BEGIN Application
;Create shortcuts
CreateDirectory "$SMPROGRAMS\$STARTMENU_FOLDER"
CreateShortCut "$SMPROGRAMS\$STARTMENU_FOLDER\SABnzbd.lnk" "$INSTDIR\SABnzbd.exe"
CreateShortCut "$SMPROGRAMS\$STARTMENU_FOLDER\SABnzbd - SafeMode.lnk" "$INSTDIR\SABnzbd.exe" "--server 127.0.0.1:8080 -b1 --no-login -t Plush"
WriteINIStr "$SMPROGRAMS\$STARTMENU_FOLDER\SABnzbd - Documentation.url" "InternetShortcut" "URL" "http://wiki.sabnzbd.org/"
CreateShortCut "$SMPROGRAMS\$STARTMENU_FOLDER\Uninstall.lnk" "$INSTDIR\Uninstall.exe"
!insertmacro MUI_STARTMENU_WRITE_END
SectionEnd ; end of default section
Section /o $(MsgRunAtStart) startup
CreateShortCut "$SMPROGRAMS\Startup\SABnzbd.lnk" "$INSTDIR\SABnzbd.exe" "-b0"
SectionEnd ;
Section $(MsgIcon) desktop
CreateShortCut "$DESKTOP\SABnzbd.lnk" "$INSTDIR\SABnzbd.exe"
SectionEnd ; end of desktop icon section
Section /o $(MsgAssoc) assoc
${registerExtension} "$INSTDIR\icons\nzb.ico" "$INSTDIR\SABnzbd.exe" ".nzb" "NZB File"
${RefreshShellIcons}
SectionEnd ; end of file association section
; begin uninstall settings/section
UninstallText $(MsgUninstall)
Section "un.$(MsgDelProgram)" Uninstall
;make sure sabnzbd.exe isnt running..if so shut it down
StrCpy $0 "sabnzbd.exe"
DetailPrint "Searching for processes called '$0'"
KillProc::FindProcesses
StrCmp $1 "-1" wooops
DetailPrint "-> Found $0 processes"
StrCmp $0 "0" completed
Sleep 1500
StrCpy $0 "sabnzbd.exe"
DetailPrint "Killing all processes called '$0'"
KillProc::KillProcesses
StrCmp $1 "-1" wooops
DetailPrint "-> Killed $0 processes, failed to kill $1 processes"
Goto completed
wooops:
DetailPrint "-> Error: Something went wrong :-("
Abort
completed:
DetailPrint "Process Killed"
; add delete commands to delete whatever files/registry keys/etc you installed here.
Delete "$INSTDIR\uninstall.exe"
DeleteRegKey HKEY_LOCAL_MACHINE "SOFTWARE\SABnzbd"
DeleteRegKey HKEY_LOCAL_MACHINE "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\SABnzbd"
${RemovePrev} "$INSTDIR"
!insertmacro MUI_STARTMENU_GETFOLDER Application $MUI_TEMP
Delete "$SMPROGRAMS\$MUI_TEMP\SABnzbd.lnk"
Delete "$SMPROGRAMS\$MUI_TEMP\Uninstall.lnk"
Delete "$SMPROGRAMS\$MUI_TEMP\SABnzbd - SafeMode.lnk"
Delete "$SMPROGRAMS\$MUI_TEMP\SABnzbd - Documentation.url"
RMDir "$SMPROGRAMS\$MUI_TEMP"
Delete "$SMPROGRAMS\Startup\SABnzbd.lnk"
Delete "$DESKTOP\SABnzbd.lnk"
DeleteRegKey HKEY_CURRENT_USER "Software\SABnzbd"
${unregisterExtension} ".nzb" "NZB File"
${RefreshShellIcons}
SectionEnd ; end of uninstall section
Section /o "un.$(MsgDelSettings)" DelSettings
DetailPrint "Uninstall settings $LOCALAPPDATA"
Delete "$LOCALAPPDATA\sabnzbd\sabnzbd.ini"
RMDir /r "$LOCALAPPDATA\sabnzbd"
SectionEnd
; eof
;--------------------------------
;Language strings
LangString MsgGoWiki ${LANG_ENGLISH} "Go to the SABnzbd Wiki"
LangString MsgShowRelNote ${LANG_ENGLISH} "Show Release Notes"
LangString MsgSupportUs ${LANG_ENGLISH} "Support the project, Donate!"
LangString MsgCloseSab ${LANG_ENGLISH} "Please close $\"SABnzbd.exe$\" first"
LangString MsgOldQueue ${LANG_ENGLISH} " >>>> WARNING <<<<$\r$\n$\r$\nPlease, first check the release notes or go to http://wiki.sabnzbd.org/introducing-0-7-0 !"
LangString MsgUninstall ${LANG_ENGLISH} "This will uninstall SABnzbd from your system"
LangString MsgRunAtStart ${LANG_ENGLISH} "Run at startup"
LangString MsgIcon ${LANG_ENGLISH} "Desktop Icon"
LangString MsgAssoc ${LANG_ENGLISH} "NZB File association"
LangString MsgDelProgram ${LANG_ENGLISH} "Delete Program"
LangString MsgDelSettings ${LANG_ENGLISH} "Delete Settings"
LangString MsgNoRuntime ${LANG_ENGLISH} "This system requires the Microsoft runtime library VC90 to be installed first. Do you want to do that now?"
LangString MsgDLRuntime ${LANG_ENGLISH} "Downloading Microsoft runtime installer..."
LangString MsgDLError ${LANG_ENGLISH} "Download error, retry?"
LangString MsgDLNeed ${LANG_ENGLISH} "Cannot install without runtime library, retry?"
LangString MsgRemoveOld ${LANG_ENGLISH} "You cannot overwrite an existing installation. $\n$\nClick `OK` to remove the previous version or `Cancel` to cancel this upgrade."
LangString MsgRemoveOld2 ${LANG_ENGLISH} "Your settings and data will be preserved."
Function un.onInit
!insertmacro MUI_UNGETLANGUAGE
FunctionEnd
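
As a rough sketch of how this script is compiled: SAB_PRODUCT, SAB_FILE and SAB_VERSION are expected as command-line defines, and the File /r "dist\*" line assumes a py2exe output tree in dist\. The values below are illustrative only (on the Windows makensis.exe the defines are passed with /D instead of -D):

```
makensis -DSAB_VERSION=0.7.20 -DSAB_PRODUCT=SABnzbd-0.7.20 -DSAB_FILE=SABnzbd-0.7.20-win32-setup.exe NSIS_Installer.nsi
```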


@@ -1,8 +1,8 @@
Metadata-Version: 1.0
Name: SABnzbd
Version: 2.2.0
Summary: SABnzbd-2.2.0
Home-page: https://sabnzbd.org
Version: 0.7.20
Summary: SABnzbd-0.7.20
Home-page: http://sabnzbd.org
Author: The SABnzbd Team
Author-email: team@sabnzbd.org
License: GNU General Public License 2 (GPL2 or later)


@@ -4,24 +4,25 @@ SABnzbd - The automated Usenet download tool
SABnzbd is an Open Source Binary Newsreader written in Python.
It's totally free, incredibly easy to use, and works practically everywhere.
SABnzbd makes Usenet as simple and streamlined as possible by automating everything we can. All you have to do is add an `.nzb`. SABnzbd takes over from there, where it will be automatically downloaded, verified, repaired, extracted and filed away with zero human interaction.
If you want to know more you can head over to our website: https://sabnzbd.org.
SABnzbd makes Usenet as simple and streamlined as possible by automating everything we can. All you have to do is add an .nzb. SABnzbd takes over from there, where it will be automatically downloaded, verified, repaired, extracted and filed away with zero human interaction.
If you want to know more you can head over to our website: http://sabnzbd.org.
## Resolving Dependencies
SABnzbd has a good deal of dependencies you'll need before you can get running. If you've previously run SABnzbd from one of the various Linux packages, then you likely already have all the needed dependencies. If not, here's what you're looking for:
SABnzbd has a good deal of dependencies you'll need before you can get running. If you've previously run SABnzbd from one of the various Linux packages floating around (Ubuntu, Debian, Fedora, etc), then you likely already have all the needed dependencies. If not, here's what you're looking for:
- `python` (only 2.7.x and higher, but not 3.x.x)
- `python` (We support Python 2.5-2.7, preferably 2.6 or 2.7.)
- `python-cheetah`
- `python-configobj`
- `python-feedparser`
- `python-dbus`
- `python-openssl`
- `python-support`
- `par2` (Multi-threaded par2 installation guide can be found [here](https://sabnzbd.org/wiki/installation/multicore-par2))
- `python-yenc`
- `par2` (Multi-threaded par2 can be downloaded from [ChuChuSoft](http://chuchusoft.com/par2_tbb/download.html) )
- `unrar` (Make sure you get the "official" non-free version of unrar)
- `sabyenc` (installation guide can be found [here](https://sabnzbd.org/sabyenc))
Optional:
- `python-cryptography` (enables certificate generation and detection of encrypted RAR-files during download)
- `python-dbus` (enable option to Shutdown/Restart/Standby PC on queue finish)
- `7zip`
- `unzip`
Your package manager should supply these. If not, we've got links in our more in-depth [installation guide](https://github.com/sabnzbd/sabnzbd/blob/master/INSTALL.txt).
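
On a Debian/Ubuntu-style system this roughly translates to the sketch below, using the package names listed above; as the Dockerfile in this same changeset shows, unrar comes from Ubuntu's multiverse repository:

```
sudo apt-get install python python-cheetah python-configobj python-feedparser \
    python-openssl python-yenc python-dbus par2 unrar unzip
```
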
@@ -31,13 +32,13 @@ Your package manager should supply these. If not, we've got links in our more in
Once you've sorted out all the dependencies, simply run:
```
python -OO SABnzbd.py
python SABnzbd.py
```
Or, if you want to run in the background:
```
python -OO SABnzbd.py -d -f /path/to/sabnzbd.ini
python -d -f /path/to/sabnzbd.ini
```
If you want multi-language support, run:
@@ -46,23 +47,8 @@ If you want multi-language support, run:
python tools/make_mo.py
```
Our many other command line options are explained in depth [here](https://sabnzbd.org/wiki/advanced/command-line-parameters).
Our many other commandline options are explained in depth [here](http://wiki.sabnzbd.org/command-line-parameters).
## About Our Repo
The workflow we use is a simplified form of "GitFlow".
Basically:
- `master` contains only stable releases (which have been merged to `master`) and is intended for end-users.
- `develop` is the target for integration and is **not** intended for end-users.
- `1.1.x` is a release and maintenance branch for 1.1.x (1.1.0 -> 1.1.1 -> 1.1.2) and is **not** intended for end-users.
- `feature/my_feature` is a temporary feature branch based on `develop`.
- `bugfix/my_bugfix` is an optional temporary branch for bugfix(es) based on `develop`.
Conditions:
- Merging of a stable release into `master` will be simple: the release branch is always right.
- `master` is not merged back to `develop`.
- `develop` is not re-based on `master`.
- Release branches branch from `develop` only.
- Bugfixes created specifically for a release branch are done there (because they are specific, they're not cherry-picked to `develop`).
- Bugfixes done on `develop` may be cherry-picked to a release branch.
- We will not release a 1.0.2 if a 1.1.0 has already been released.
We're going to be attempting to follow the [gitflow model](http://nvie.com/posts/a-successful-git-branching-model/), so you can consider "master" to be whatever our present stable release build is (presently 0.7.x) and "develop" to be whatever our next build will be (presently 0.8.x). Once we transition from unstable to stable dev builds we'll create release branches, and encourage you to follow along and help us test.


@@ -1,98 +1,71 @@
Release Notes - SABnzbd 2.2.0
=========================================================
Release Notes - SABnzbd 0.7.20
================================
NOTE: Due to changes in this release, the queue will be converted when 2.2.0
is started for the first time. Job order, settings and data will be
preserved, but all jobs will be unpaused and URLs that did not finish
fetching before the upgrade will be lost!
## Changes since 2.1.0
- Direct Unpack: Jobs will start unpacking during the download, reduces
post-processing time but requires capable hard drive. Only works for jobs that
do not need repair. Will be enabled if your incomplete folder-speed > 40MB/s
- Reduced memory usage, especially with larger queues
- Graphical overview of server-usage on Servers page
- Notifications can now be limited to certain Categories
- Removed 5 second delay between fetching URLs
- Each item in the Queue and File list now has Move to Top/Bottom buttons
- Add option to only tag a duplicate job without pausing or removing it
- New option "History Retention" to automatically purge jobs from History
- Jobs outside server retention are processed faster
- Obfuscated filenames are renamed during downloading, if possible
- Disk-space is now checked before writing files
- Add "Retry All Failed" button to Glitter
- Smoother animations in Firefox (disabled previously due to FF high-CPU usage)
- Show missing articles in MB instead of number of articles
- Better indication of verification process before and after repair
- Remove video and audio rating icons from Queue
- Show vote buttons instead of video and audio rating buttons in History
- If enabled, replace dots in filenames also when there are spaces already
- Handling of par2 files made more robust
- All par2 files are only downloaded when enabled, not on enable_par_cleanup
- Update GNTP bindings to 1.0.3
- max_art_opt and replace_illegal moved from Switches to Specials
- Removed Specials par2_multicore and allow_streaming
- Windows: Full unicode support when calling repair and unpack
- Windows: Move enable_multipar to Specials
- Windows: MultiPar verification of a job is skipped after blocks are fetched
- Windows & macOS: removed par2cmdline in favor of par2tbb/MultiPar
- Windows & macOS: Updated WinRAR to 5.5.0
## Features
## Bugfixes since 2.1.0
- Shutdown/suspend did not work on some Linux systems
- Standby/Hibernate was not working on Windows
- Deleting a job could result in write errors
- Display warning if "Extra par2 parameters" turn out to be wrong
- RSS URLs with commas in the URL were broken
- Fixed some "Saving failed" errors
- Fixed crashing URLGrabber
- Jobs with renamed files are now correctly handled when using Retry
- Disk-space readings could be updated incorrectly
- Correct redirect after enabling HTTPS in the Config
- Fix race-condition in Post-processing
- History would not always show latest changes
- Convert HTML in error messages
- In some cases not all RAR-sets were unpacked
- Fixed unicode error during Sorting
- Faulty pynotify could stop shutdown
- Categories with ' in them could result in SQL errors
- Special characters like []!* in filenames could break repair
- Wizard was always accessible, even with username and password set
- Correct value in "Speed" Extra History Column
- Not all texts were shown in the selected Language
- Various CSS fixes in Glitter and the Config
- Catch "error 0" when using HTTPS on some Linux platforms
- Warning is shown when many files with duplicate filenames are discarded
- Improve zeroconf/bonjour by sending HTTPS setting and auto-discover of IP
- Windows: Fix error in MultiPar-code when first par2-file was damaged
- macOS: Catch "Protocol wrong type for socket" errors
- Support of OSX Yosemite "Dark Mode"
- API call "Retry" now returns new job id (supporting nzbdrone)
## Translations
- Added Hebrew translation by ION IL, many other languages updated.
## Bug fixes
- OSX unrar now really updated to 5.11 for Lion and higher
- unrar is now updated to 5.11 for Intel systems running (Snow)Leopard
- (Snow)Leopard on PPC still only has unrar 4.01, no new versions from rarlabs
- Fix email test issue
## Deprecation notices
- Option to limit Servers to specific Categories is now scheduled
to be removed in the next release.
## Upgrading from 2.1.x and older
- Finish queue
## What's new in 0.7.0
- Download quota management
- Windows: simple system tray menu
- Multi-platform Growl support
- NotifyOSD support for Linux distros that have it
- Option to set maximum number of retries for servers (prevents deadlock)
- Pre-download check to estimate completeness (reliability is limited)
- Prevent partial downloading of par2 files that are not needed yet
- Config->Special for settings previously only available in the sabnzbd.ini file
- For Usenet servers with multiple IP addresses, pick a random one per connection
- Add pseudo-priority "Stop" that will send the job immediately to the post-processing queue
- Allow jobs still waiting for post-processing to be deleted too
- More persistent retries for unreliable indexers
- Single Configuration skin for all others skins (there is an option for the old style)
- Config->Special for settings that were previously only changeable in the sabnzbd.ini file
- Add Spanish, Portuguese (Brazil) and Polish translations
- Individual RSS filter toggle
- Unified OSX DMG
## About
SABnzbd is an open-source cross-platform binary newsreader.
It simplifies the process of downloading from Usenet dramatically,
thanks to its web-based user interface and advanced
built-in post-processing options that automatically verify, repair,
extract and clean up posts downloaded from Usenet.
(c) Copyright 2007-2014 by "The SABnzbd-team" \<team@sabnzbd.org\>
### IMPORTANT INFORMATION about release 0.7.x
<http://wiki.sabnzbd.org/introducing-0-7-0>
### Known problems and solutions
- Read the file "ISSUES.txt"
### Upgrading from 0.6.x
- Stop SABnzbd
- Install new version
- Start SABnzbd
## Upgrade notices
- The organization of the download queue is different from 0.7.x releases.
This version will not see the old queue, but you can restore the jobs by going
to the Status page and using Queue Repair.
### Upgrading from 0.5.x
- Stop SABnzbd
- Install new version
- Start SABnzbd.
## Known problems and solutions
- Read the file "ISSUES.txt"
The organization of the download queue is different from 0.5.x.
0.7.x will finish downloading an existing queue, but you
cannot go back to an older version without losing your queue.
Also, your sabnzbd.ini file will be upgraded, making it
incompatible with release 0.5.x
## About
SABnzbd is an open-source cross-platform binary newsreader.
It simplifies the process of downloading from Usenet dramatically, thanks
to its web-based user interface and advanced built-in post-processing options
that automatically verify, repair, extract and clean up posts downloaded
from Usenet.
(c) Copyright 2007-2017 by "The SABnzbd-team" \<team@sabnzbd.org\>
### Upgrading from 0.4.x
Download your current queue before upgrading.


@@ -1,5 +1,5 @@
#!/usr/bin/python -OO
# Copyright 2008-2017 The SABnzbd-Team <team@sabnzbd.org>
# Copyright 2008-2012 The SABnzbd-Team <team@sabnzbd.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
@@ -16,23 +16,18 @@
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import sys
if sys.version_info[:2] < (2, 6) or sys.version_info[:2] >= (3, 0):
print "Sorry, requires Python 2.6 or 2.7."
if sys.version_info < (2,5):
print "Sorry, requires Python 2.5 or higher."
sys.exit(1)
import os
import time
import subprocess
#------------------------------------------------------------------------------
try:
import win32api
import win32file
import win32serviceutil
import win32evtlogutil
import win32event
import win32service
import pywintypes
import win32api, win32file
import win32serviceutil, win32evtlogutil, win32event, win32service, pywintypes
except ImportError:
print "Sorry, requires Python module PyWin32."
sys.exit(1)
@@ -40,10 +35,11 @@ except ImportError:
from util.mailslot import MailSlot
from util.apireg import del_connection_info, set_connection_info
#------------------------------------------------------------------------------
WIN_SERVICE = None
#------------------------------------------------------------------------------
def HandleCommandLine(allow_service=True):
""" Handle command line for a Windows Service
Prescribed name that will be called by Py2Exe.
@@ -56,6 +52,7 @@ def start_sab():
return subprocess.Popen('net start SABnzbd', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).stdout.read()
#------------------------------------------------------------------------------
def main():
mail = MailSlot()
@@ -81,13 +78,13 @@ def main():
elif msg.startswith('api '):
active = True
counter = 0
_cmd, url = msg.split()
cmd, url = msg.split()
if url:
set_connection_info(url.strip(), user=False)
if active:
counter += 1
if counter > 120: # 120 seconds
if counter > 120: # 120 seconds
counter = 0
start_sab()
@@ -99,12 +96,11 @@ def main():
return ''
##############################################################################
#####################################################################
#
# Windows Service Support
##############################################################################
#
import servicemanager
class SABHelper(win32serviceutil.ServiceFramework):
""" Win32 Service Handler """
@@ -119,7 +115,7 @@ class SABHelper(win32serviceutil.ServiceFramework):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
self.overlapped = pywintypes.OVERLAPPED() # @UndefinedVariable
self.overlapped = pywintypes.OVERLAPPED()
self.overlapped.hEvent = win32event.CreateEvent(None, 0, 0, None)
WIN_SERVICE = self
@@ -147,9 +143,11 @@ class SABHelper(win32serviceutil.ServiceFramework):
unicode(text))
##############################################################################
#####################################################################
#
# Platform specific startup code
##############################################################################
#
if __name__ == '__main__':
win32serviceutil.HandleCommandLine(SABHelper, argv=sys.argv)
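
Since the script hands control to win32serviceutil.HandleCommandLine, the helper accepts the standard service verbs (install, start, stop, remove). A sketch, assuming the py2exe-built SABnzbd-helper.exe named in the installer file list above and an elevated command prompt:

```
SABnzbd-helper.exe install
SABnzbd-helper.exe start
```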

SABnzbd.py

File diff suppressed because it is too large


@@ -1,15 +1,17 @@
@echo off
rem Example of a post processing script for SABnzbd
echo.
echo Running in directory "%~d0%~p0"
echo.
echo The first parameter (result-dir) = %1
echo The second parameter (nzb-name) = %2
echo The third parameter (nice name) = %3
echo The fourth parameter (newzbin #) = %4
echo The fifth parameter (category) = %5
echo The sixth parameter (group) = %6
echo The seventh parameter (status) = %7
echo The eight parameter (failure_url)= %8
echo.
@echo off
rem Example of a post processing script for SABnzbd
echo.
echo Running in directory "%~d0%~p0"
echo.
echo The first parameter (result-dir) = %1
echo The second parameter (nzb-name) = %2
echo The third parameter (nice name) = %3
echo The fourth parameter (newzbin #) = %4
echo The fifth parameter (category) = %5
echo The sixth parameter (group) = %6
echo The seventh parameter (status) = %7
echo The eigth parameter (failure_url)= %8
echo.


@@ -11,4 +11,7 @@ echo "The fourth parameter (newzbin-id) =" "$4"
echo "The fifth parameter (category) =" "$5"
echo "The sixth parameter (group) =" "$6"
echo "The seventh parameter (status) =" "$7"
echo "The eight parameter (failure_url) =" "$8"
echo "The eigth parameter (failure_url) =" "$8"
echo

cherrypy/LICENSE.txt Normal file

@@ -0,0 +1,25 @@
Copyright (c) 2004-2007, CherryPy Team (team@cherrypy.org)
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the CherryPy Team nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,5 +1 @@
CherryPy 8.1.2
Official distribution: https://github.com/cherrypy/cherrypy/releases
The folders 'tutorial', 'test' and 'scaffold' have been removed.
This file has been added.
This CherryPy Rev 2138 patched with Rev 2272.


@@ -53,34 +53,123 @@ with customized or extended components. The core API's are:
* Server API
* WSGI API
These API's are described in the `CherryPy specification <https://bitbucket.org/cherrypy/cherrypy/wiki/CherryPySpec>`_.
These API's are described in the CherryPy specification:
http://www.cherrypy.org/wiki/CherryPySpec
"""
try:
import pkg_resources
except ImportError:
pass
__version__ = "3.2.0"
from threading import local as _local
from urlparse import urljoin as _urljoin
from cherrypy._cperror import HTTPError, HTTPRedirect, InternalRedirect # noqa
from cherrypy._cperror import NotFound, CherryPyException, TimeoutError # noqa
from cherrypy import _cplogging
class _AttributeDocstrings(type):
"""Metaclass for declaring docstrings for class attributes."""
# The full docstring for this type is down in the __init__ method so
# that it doesn't show up in help() for every consumer class.
def __init__(cls, name, bases, dct):
'''Metaclass for declaring docstrings for class attributes.
Base Python doesn't provide any syntax for setting docstrings on
'data attributes' (non-callables). This metaclass allows class
definitions to follow the declaration of a data attribute with
a docstring for that attribute; the attribute docstring will be
popped from the class dict and folded into the class docstring.
The naming convention for attribute docstrings is:
<attrname> + "__doc".
For example:
class Thing(object):
"""A thing and its properties."""
__metaclass__ = cherrypy._AttributeDocstrings
height = 50
height__doc = """The height of the Thing in inches."""
In which case, help(Thing) starts like this:
>>> help(mod.Thing)
Help on class Thing in module pkg.mod:
class Thing(__builtin__.object)
| A thing and its properties.
|
| height [= 50]:
| The height of the Thing in inches.
|
The benefits of this approach over hand-edited class docstrings:
1. Places the docstring nearer to the attribute declaration.
2. Makes attribute docs more uniform ("name (default): doc").
3. Reduces mismatches of attribute _names_ between
the declaration and the documentation.
4. Reduces mismatches of attribute default _values_ between
the declaration and the documentation.
The benefits of a metaclass approach over other approaches:
1. Simpler ("less magic") than interface-based solutions.
2. __metaclass__ can be specified at the module global level
for classic classes.
For various formatting reasons, you should write multiline docs
with a leading newline and not a trailing one:
response__doc = """
The response object for the current thread. In the main thread,
and any threads which are not HTTP requests, this is None."""
The type of the attribute is intentionally not included, because
that's not How Python Works. Quack.
'''
newdoc = [cls.__doc__ or ""]
dctnames = dct.keys()
dctnames.sort()
for name in dctnames:
if name.endswith("__doc"):
# Remove the magic doc attribute.
if hasattr(cls, name):
delattr(cls, name)
# Make a uniformly-indented docstring from it.
val = '\n'.join([' ' + line.strip()
for line in dct[name].split('\n')])
# Get the default value.
attrname = name[:-5]
try:
attrval = getattr(cls, attrname)
except AttributeError:
attrval = "missing"
# Add the complete attribute docstring to our list.
newdoc.append("%s [= %r]:\n%s" % (attrname, attrval, val))
# Add our list of new docstrings to the class docstring.
cls.__doc__ = "\n\n".join(newdoc)
from cherrypy import _cpdispatch as dispatch # noqa
from cherrypy._cperror import HTTPError, HTTPRedirect, InternalRedirect
from cherrypy._cperror import NotFound, CherryPyException, TimeoutError
from cherrypy import _cpdispatch as dispatch
from cherrypy import _cptools
from cherrypy._cptools import default_toolbox as tools, Tool
tools = _cptools.default_toolbox
Tool = _cptools.Tool
from cherrypy import _cprequest
from cherrypy.lib import httputil as _httputil
from cherrypy.lib import http as _http
from cherrypy import _cptree
from cherrypy._cptree import Application # noqa
from cherrypy import _cpwsgi as wsgi # noqa
tree = _cptree.Tree()
from cherrypy._cptree import Application
from cherrypy import _cpwsgi as wsgi
from cherrypy import _cpserver
from cherrypy import process
try:
from cherrypy.process import win32
@@ -91,33 +180,22 @@ except ImportError:
engine = process.bus
tree = _cptree.Tree()
__version__ = '8.1.2'
# Timeout monitor. We add two channels to the engine
# to which cherrypy.Application will publish.
engine.listeners['before_request'] = set()
engine.listeners['after_request'] = set()
# Timeout monitor
class _TimeoutMonitor(process.plugins.Monitor):
def __init__(self, bus):
self.servings = []
process.plugins.Monitor.__init__(self, bus, self.run)
def before_request(self):
def acquire(self):
self.servings.append((serving.request, serving.response))
def after_request(self):
def release(self):
try:
self.servings.remove((serving.request, serving.response))
except ValueError:
pass
def run(self):
"""Check timeout on all responses. (Internal)"""
for req, resp in self.servings:
@@ -134,31 +212,14 @@ engine.thread_manager.subscribe()
engine.signal_handler = process.plugins.SignalHandler(engine)
class _HandleSignalsPlugin(object):
"""Handle signals from other processes based on the configured
platform handlers above."""
def __init__(self, bus):
self.bus = bus
def subscribe(self):
"""Add the handlers based on the platform"""
if hasattr(self.bus, 'signal_handler'):
self.bus.signal_handler.subscribe()
if hasattr(self.bus, 'console_control_handler'):
self.bus.console_control_handler.subscribe()
engine.signals = _HandleSignalsPlugin(engine)
from cherrypy import _cpserver
server = _cpserver.Server()
server.subscribe()
def quickstart(root=None, script_name='', config=None):
def quickstart(root=None, script_name="", config=None):
"""Mount the given root, start the builtin server (and engine), then block.
root: an instance of a "controller class" (a collection of page handler
methods) which represents the root of the application.
script_name: a string containing the "mount point" of the application.
@@ -166,7 +227,7 @@ def quickstart(root=None, script_name='', config=None):
at which to mount the given root. For example, if root.index() will
handle requests to "http://www.example.com:8080/dept/app1/", then
the script_name argument would be "/dept/app1".
It MUST NOT end in a slash. If the script_name refers to the root
of the URI, it MUST be an empty string (not "/").
config: a file or dict containing application config. If this contains
@@ -175,18 +236,27 @@ def quickstart(root=None, script_name='', config=None):
"""
if config:
_global_conf_alias.update(config)
tree.mount(root, script_name, config)
engine.signals.subscribe()
if root is not None:
tree.mount(root, script_name, config)
if hasattr(engine, "signal_handler"):
engine.signal_handler.subscribe()
if hasattr(engine, "console_control_handler"):
engine.console_control_handler.subscribe()
engine.start()
engine.block()
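# A minimal, hedged usage sketch of quickstart(); the handler class and port
# below are illustrative, not part of this module:
#
#     import cherrypy
#
#     class Root(object):
#         def index(self):
#             return "Hello world"
#         index.exposed = True
#
#     cherrypy.quickstart(Root(), '', {'global': {'server.socket_port': 8080}})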
try:
from threading import local as _local
except ImportError:
from cherrypy._cpthreadinglocal import local as _local
class _Serving(_local):
"""An interface for registering request and response objects.
Rather than have a separate "thread local" object for the request and
the response, this class works as a single threadlocal container for
both objects (and any others which developers wish to define). In this
@@ -194,22 +264,24 @@ class _Serving(_local):
conversation, yet still refer to them as module-level globals in a
thread-safe way.
"""
request = _cprequest.Request(_httputil.Host('127.0.0.1', 80),
_httputil.Host('127.0.0.1', 1111))
"""
__metaclass__ = _AttributeDocstrings
request = _cprequest.Request(_http.Host("127.0.0.1", 80),
_http.Host("127.0.0.1", 1111))
request__doc = """
The request object for the current thread. In the main thread,
and any threads which are not receiving HTTP requests, this is None."""
response = _cprequest.Response()
"""
response__doc = """
The response object for the current thread. In the main thread,
and any threads which are not receiving HTTP requests, this is None."""
def load(self, request, response):
self.request = request
self.response = response
def clear(self):
"""Remove all attributes of self."""
self.__dict__.clear()
@@ -218,59 +290,58 @@ serving = _Serving()
class _ThreadLocalProxy(object):
__slots__ = ['__attrname__', '__dict__']
def __init__(self, attrname):
self.__attrname__ = attrname
def __getattr__(self, name):
child = getattr(serving, self.__attrname__)
return getattr(child, name)
def __setattr__(self, name, value):
if name in ('__attrname__', ):
if name in ("__attrname__", ):
object.__setattr__(self, name, value)
else:
child = getattr(serving, self.__attrname__)
setattr(child, name, value)
def __delattr__(self, name):
child = getattr(serving, self.__attrname__)
delattr(child, name)
def _get_dict(self):
child = getattr(serving, self.__attrname__)
d = child.__class__.__dict__.copy()
d.update(child.__dict__)
return d
__dict__ = property(_get_dict)
def __getitem__(self, key):
child = getattr(serving, self.__attrname__)
return child[key]
def __setitem__(self, key, value):
child = getattr(serving, self.__attrname__)
child[key] = value
def __delitem__(self, key):
child = getattr(serving, self.__attrname__)
del child[key]
def __contains__(self, key):
child = getattr(serving, self.__attrname__)
return key in child
def __len__(self):
child = getattr(serving, self.__attrname__)
return len(child)
def __nonzero__(self):
child = getattr(serving, self.__attrname__)
return bool(child)
# Python 3
__bool__ = __nonzero__
# Create request and response object (the same objects will be used
# throughout the entire life of the webserver, but will redirect
@@ -279,10 +350,7 @@ request = _ThreadLocalProxy('request')
response = _ThreadLocalProxy('response')
# Create thread_data object as a thread-specific all-purpose storage
class _ThreadData(_local):
"""A container for thread-specific data."""
thread_data = _ThreadData()
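# Hedged sketch: thread_data is a plain per-thread namespace. A common pattern
# (the factory name is hypothetical) is caching one DB connection per worker thread:
#
#     def connect(thread_index):
#         cherrypy.thread_data.db = make_connection()
#     cherrypy.engine.subscribe('start_thread', connect)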
@@ -305,31 +373,18 @@ except ImportError:
pass
from cherrypy import _cplogging
class _GlobalLogManager(_cplogging.LogManager):
"""A site-wide LogManager; routes to app.log or global log as appropriate.
This :class:`LogManager<cherrypy._cplogging.LogManager>` implements
cherrypy.log() and cherrypy.log.access(). If either
function is called during a request, the message will be sent to the
logger for the current Application. If they are called outside of a
request, the message will be sent to the site-wide logger.
"""
def __call__(self, *args, **kwargs):
"""Log the given message to the app.log or global log as appropriate.
"""
# Do NOT use try/except here. See
# https://github.com/cherrypy/cherrypy/issues/945
if hasattr(request, 'app') and hasattr(request.app, 'log'):
try:
log = request.app.log
else:
except AttributeError:
log = self
return log.error(*args, **kwargs)
def access(self):
"""Log an access message to the app.log or global log as appropriate.
"""
try:
return request.app.log.access()
except AttributeError:
@@ -343,29 +398,164 @@ log.error_file = ''
# Using an access file makes CP about 10% slower. Leave off by default.
log.access_file = ''
def _buslog(msg, level):
log.error(msg, 'ENGINE', severity=level)
engine.subscribe('log', _buslog)
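# Hedged usage sketch of the log manager wired up above: cherrypy.log() routes
# to the current Application's log during a request, otherwise to the site-wide
# log. The message and context below are illustrative:
#
#     cherrypy.log("Cache rebuilt", context='APP')
#     cherrypy.log.access()   # during a request, appends to the access log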
from cherrypy._helper import expose, popargs, url # noqa
# Helper functions for CP apps #
def expose(func=None, alias=None):
"""Expose the function, optionally providing an alias or set of aliases."""
def expose_(func):
func.exposed = True
if alias is not None:
if isinstance(alias, basestring):
parents[alias.replace(".", "_")] = func
else:
for a in alias:
parents[a.replace(".", "_")] = func
return func
import sys, types
if isinstance(func, (types.FunctionType, types.MethodType)):
if alias is None:
# @expose
func.exposed = True
return func
else:
# func = expose(func, alias)
parents = sys._getframe(1).f_locals
return expose_(func)
elif func is None:
if alias is None:
# @expose()
parents = sys._getframe(1).f_locals
return expose_
else:
# @expose(alias="alias") or
# @expose(alias=["alias1", "alias2"])
parents = sys._getframe(1).f_locals
return expose_
else:
# @expose("alias") or
# @expose(["alias1", "alias2"])
parents = sys._getframe(1).f_locals
alias = func
return expose_
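# Hedged usage sketch of expose() with an alias (class and handler names are
# illustrative); the alias binds a second name in the defining class, so both
# /index and /front_page reach the same handler:
#
#     class Root(object):
#         @cherrypy.expose(alias='front_page')
#         def index(self):
#             return "index"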
def url(path="", qs="", script_name=None, base=None, relative=None):
"""Create an absolute URL for the given path.
If 'path' starts with a slash ('/'), this will return
(base + script_name + path + qs).
If it does not start with a slash, this returns
(base + script_name [+ request.path_info] + path + qs).
If script_name is None, cherrypy.request will be used
to find a script_name, if available.
If base is None, cherrypy.request.base will be used (if available).
Note that you can use cherrypy.tools.proxy to change this.
Finally, note that this function can be used to obtain an absolute URL
for the current request path (minus the querystring) by passing no args.
If you call url(qs=cherrypy.request.query_string), you should get the
original browser URL (assuming no internal redirections).
If relative is None or not provided, request.app.relative_urls will
be used (if available, else False). If False, the output will be an
absolute URL (including the scheme, host, vhost, and script_name).
If True, the output will instead be a URL that is relative to the
current request path, perhaps including '..' atoms. If relative is
the string 'server', the output will instead be a URL that is
relative to the server root; i.e., it will start with a slash.
"""
if qs:
qs = '?' + qs
if request.app:
if not path.startswith("/"):
# Append/remove trailing slash from path_info as needed
# (this is to support mistyped URLs without redirecting;
# if you want to redirect, use tools.trailing_slash).
pi = request.path_info
if request.is_index is True:
if not pi.endswith('/'):
pi = pi + '/'
elif request.is_index is False:
if pi.endswith('/') and pi != '/':
pi = pi[:-1]
if path == "":
path = pi
else:
path = _urljoin(pi, path)
if script_name is None:
script_name = request.script_name
if base is None:
base = request.base
newurl = base + script_name + path + qs
else:
# No request.app (we're being called outside a request).
# We'll have to guess the base from server.* attributes.
# This will produce very different results from the above
# if you're using vhosts or tools.proxy.
if base is None:
base = server.base()
path = (script_name or "") + path
newurl = base + path + qs
if './' in newurl:
# Normalize the URL by removing ./ and ../
atoms = []
for atom in newurl.split('/'):
if atom == '.':
pass
elif atom == '..':
atoms.pop()
else:
atoms.append(atom)
newurl = '/'.join(atoms)
# At this point, we should have a fully-qualified absolute URL.
if relative is None:
relative = getattr(request.app, "relative_urls", False)
# See http://www.ietf.org/rfc/rfc2396.txt
if relative == 'server':
# "A relative reference beginning with a single slash character is
# termed an absolute-path reference, as defined by <abs_path>..."
# This is also sometimes called "server-relative".
newurl = '/' + '/'.join(newurl.split('/', 3)[3:])
elif relative:
# "A relative reference that does not begin with a scheme name
# or a slash character is termed a relative-path reference."
old = url().split('/')[:-1]
new = newurl.split('/')
while old and new:
a, b = old[0], new[0]
if a != b:
break
old.pop(0)
new.pop(0)
new = (['..'] * len(old)) + new
newurl = '/'.join(new)
return newurl
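# Hedged illustration of url() joining rules; the host, mount point and paths
# are hypothetical. During a request for "/app/users/" mounted at "/app":
#
#     cherrypy.url('edit')         # -> "http://host/app/users/edit"
#     cherrypy.url('/reports')     # -> "http://host/app/reports"
#     cherrypy.url(relative=True)  # URL relative to the current request path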
# import _cpconfig last so it can reference other top-level objects
from cherrypy import _cpconfig # noqa
from cherrypy import _cpconfig
# Use _global_conf_alias so quickstart can use 'config' as an arg
# without shadowing cherrypy.config.
config = _global_conf_alias = _cpconfig.Config()
config.defaults = {
'tools.log_tracebacks.on': True,
'tools.log_headers.on': True,
'tools.trailing_slash.on': True,
'tools.encode.on': True
}
config.namespaces['log'] = lambda k, v: setattr(log, k, v)
config.namespaces['checker'] = lambda k, v: setattr(checker, k, v)
# Must reset to get our defaults applied.
config.reset()
from cherrypy import _cpchecker # noqa
from cherrypy import _cpchecker
checker = _cpchecker.Checker()
engine.subscribe('start', checker)
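# Hedged sketch: individual checks of the Checker subscribed above can be
# disabled from global config, e.g.:
#
#     cherrypy.config.update({'checker.check_skipped_app_config': False})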


@@ -1,4 +0,0 @@
import cherrypy.daemon
if __name__ == '__main__':
cherrypy.daemon.run()

cherrypy/_cpcgifs.py

@@ -0,0 +1,79 @@
import cgi
import cherrypy
class FieldStorage(cgi.FieldStorage):
def __init__(self, *args, **kwds):
try:
cgi.FieldStorage.__init__(self, *args, **kwds)
except ValueError, ex:
if str(ex) == 'Maximum content length exceeded':
raise cherrypy.HTTPError(status=413)
else:
raise ex
def read_lines_to_eof(self):
"""Internal: read lines until EOF."""
while 1:
line = self.fp.readline(1<<16)
if not line:
self.done = -1
break
self.__write(line)
def read_lines_to_outerboundary(self):
"""Internal: read lines until outerboundary."""
next = "--" + self.outerboundary
last = next + "--"
delim = ""
last_line_lfend = True
while 1:
line = self.fp.readline(1<<16)
if not line:
self.done = -1
break
if line[:2] == "--" and last_line_lfend:
strippedline = line.strip()
if strippedline == next:
break
if strippedline == last:
self.done = 1
break
odelim = delim
if line[-2:] == "\r\n":
delim = "\r\n"
line = line[:-2]
last_line_lfend = True
elif line[-1] == "\n":
delim = "\n"
line = line[:-1]
last_line_lfend = True
else:
delim = ""
last_line_lfend = False
self.__write(odelim + line)
def skip_lines(self):
"""Internal: skip lines until outer boundary if defined."""
if not self.outerboundary or self.done:
return
next = "--" + self.outerboundary
last = next + "--"
last_line_lfend = True
while 1:
line = self.fp.readline(1<<16)
if not line:
self.done = -1
break
if line[:2] == "--" and last_line_lfend:
strippedline = line.strip()
if strippedline == next:
break
if strippedline == last:
self.done = 1
break
if line.endswith('\n'):
last_line_lfend = True
else:
last_line_lfend = False
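# Hedged note: with this subclass used in place of cgi.FieldStorage, a request
# body larger than cgi.maxlen surfaces to the client as a 413 HTTPError rather
# than an unhandled ValueError. The cap below is illustrative:
#
#     import cgi
#     cgi.maxlen = 10 * 1024 * 1024   # 10 MB upload limit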


@@ -2,30 +2,29 @@ import os
import warnings
import cherrypy
from cherrypy._cpcompat import iteritems, copykeys, builtins
class Checker(object):
"""A checker for CherryPy sites and their mounted applications.
on: set this to False to turn off the checker completely.
When this object is called at engine startup, it executes each
of its own methods whose names start with ``check_``. If you wish
of its own methods whose names start with "check_". If you wish
to disable selected checks, simply add a line in your global
config which sets the appropriate method to False::
[global]
checker.check_skipped_app_config = False
You may also dynamically add or replace ``check_*`` methods in this way.
config which sets the appropriate method to False:
[global]
checker.check_skipped_app_config = False
You may also dynamically add or replace check_* methods in this way.
"""
on = True
"""If True (the default), run all checks; if False, turn off all checks."""
def __init__(self):
self._populate_known_types()
def __call__(self):
"""Run all check_* methods."""
if self.on:
@@ -33,149 +32,102 @@ class Checker(object):
warnings.formatwarning = self.formatwarning
try:
for name in dir(self):
if name.startswith('check_'):
if name.startswith("check_"):
method = getattr(self, name)
if method and hasattr(method, '__call__'):
if method and callable(method):
method()
finally:
warnings.formatwarning = oldformatwarning
def formatwarning(self, message, category, filename, lineno, line=None):
"""Function to format a warning."""
return 'CherryPy Checker:\n%s\n\n' % message
return "CherryPy Checker:\n%s\n\n" % message
# This value should be set inside _cpconfig.
global_config_contained_paths = False
def check_app_config_entries_dont_start_with_script_name(self):
"""Check for Application config with sections that repeat script_name.
"""
for sn, app in cherrypy.tree.apps.items():
for sn, app in cherrypy.tree.apps.iteritems():
if not isinstance(app, cherrypy.Application):
continue
if not app.config:
continue
if sn == '':
continue
sn_atoms = sn.strip('/').split('/')
sn_atoms = sn.strip("/").split("/")
for key in app.config.keys():
key_atoms = key.strip('/').split('/')
key_atoms = key.strip("/").split("/")
if key_atoms[:len(sn_atoms)] == sn_atoms:
warnings.warn(
'The application mounted at %r has config '
'entries that start with its script name: %r' % (sn,
key))
def check_site_config_entries_in_app_config(self):
"""Check for mounted Applications that have site-scoped config."""
for sn, app in iteritems(cherrypy.tree.apps):
if not isinstance(app, cherrypy.Application):
continue
msg = []
for section, entries in iteritems(app.config):
if section.startswith('/'):
for key, value in iteritems(entries):
for n in ('engine.', 'server.', 'tree.', 'checker.'):
if key.startswith(n):
msg.append('[%s] %s = %s' %
(section, key, value))
if msg:
msg.insert(0,
'The application mounted at %r contains the '
'following config entries, which are only allowed '
'in site-wide config. Move them to a [global] '
'section and pass them to cherrypy.config.update() '
'instead of tree.mount().' % sn)
warnings.warn(os.linesep.join(msg))
"The application mounted at %r has config " \
"entries that start with its script name: %r" % (sn, key))
def check_skipped_app_config(self):
"""Check for mounted Applications that have no config."""
for sn, app in cherrypy.tree.apps.items():
for sn, app in cherrypy.tree.apps.iteritems():
if not isinstance(app, cherrypy.Application):
continue
if not app.config:
msg = 'The Application mounted at %r has an empty config.' % sn
msg = "The Application mounted at %r has an empty config." % sn
if self.global_config_contained_paths:
msg += (' It looks like the config you passed to '
'cherrypy.config.update() contains application-'
'specific sections. You must explicitly pass '
'application config via '
'cherrypy.tree.mount(..., config=app_config)')
msg += (" It looks like the config you passed to "
"cherrypy.config.update() contains application-"
"specific sections. You must explicitly pass "
"application config via "
"cherrypy.tree.mount(..., config=app_config)")
warnings.warn(msg)
return
def check_app_config_brackets(self):
"""Check for Application config with extraneous brackets in section
names.
"""
for sn, app in cherrypy.tree.apps.items():
if not isinstance(app, cherrypy.Application):
continue
if not app.config:
continue
for key in app.config.keys():
if key.startswith('[') or key.endswith(']'):
warnings.warn(
'The application mounted at %r has config '
'section names with extraneous brackets: %r. '
'Config *files* need brackets; config *dicts* '
'(e.g. passed to tree.mount) do not.' % (sn, key))
def check_static_paths(self):
"""Check Application config for incorrect static paths."""
# Use the dummy Request object in the main thread.
request = cherrypy.request
for sn, app in cherrypy.tree.apps.items():
for sn, app in cherrypy.tree.apps.iteritems():
if not isinstance(app, cherrypy.Application):
continue
request.app = app
for section in app.config:
# get_resource will populate request.config
request.get_resource(section + '/dummy.html')
request.get_resource(section + "/dummy.html")
conf = request.config.get
if conf('tools.staticdir.on', False):
msg = ''
root = conf('tools.staticdir.root')
dir = conf('tools.staticdir.dir')
if conf("tools.staticdir.on", False):
msg = ""
root = conf("tools.staticdir.root")
dir = conf("tools.staticdir.dir")
if dir is None:
msg = 'tools.staticdir.dir is not set.'
msg = "tools.staticdir.dir is not set."
else:
fulldir = ''
fulldir = ""
if os.path.isabs(dir):
fulldir = dir
if root:
msg = ('dir is an absolute path, even '
'though a root is provided.')
msg = ("dir is an absolute path, even "
"though a root is provided.")
testdir = os.path.join(root, dir[1:])
if os.path.exists(testdir):
msg += (
'\nIf you meant to serve the '
'filesystem folder at %r, remove the '
'leading slash from dir.' % (testdir,))
msg += ("\nIf you meant to serve the "
"filesystem folder at %r, remove "
"the leading slash from dir." % testdir)
else:
if not root:
msg = (
'dir is a relative path and '
'no root provided.')
msg = "dir is a relative path and no root provided."
else:
fulldir = os.path.join(root, dir)
if not os.path.isabs(fulldir):
msg = ('%r is not an absolute path.' % (
fulldir,))
msg = "%r is not an absolute path." % fulldir
if fulldir and not os.path.exists(fulldir):
if msg:
msg += '\n'
msg += ('%r (root + dir) is not an existing '
'filesystem path.' % fulldir)
msg += "\n"
msg += ("%r (root + dir) is not an existing "
"filesystem path." % fulldir)
if msg:
warnings.warn('%s\nsection: [%s]\nroot: %r\ndir: %r'
warnings.warn("%s\nsection: [%s]\nroot: %r\ndir: %r"
% (msg, section, root, dir))
# -------------------------- Compatibility -------------------------- #
obsolete = {
'server.default_content_type': 'tools.response_headers.headers',
'log_access_file': 'log.access_file',
@@ -188,115 +140,115 @@ class Checker(object):
'throw_errors': 'request.throw_errors',
'profiler.on': ('cherrypy.tree.mount(profiler.make_app('
'cherrypy.Application(Root())))'),
}
}
deprecated = {}
def _compat(self, config):
"""Process config and warn on each obsolete or deprecated entry."""
for section, conf in config.items():
for section, conf in config.iteritems():
if isinstance(conf, dict):
for k, v in conf.items():
for k, v in conf.iteritems():
if k in self.obsolete:
warnings.warn('%r is obsolete. Use %r instead.\n'
'section: [%s]' %
warnings.warn("%r is obsolete. Use %r instead.\n"
"section: [%s]" %
(k, self.obsolete[k], section))
elif k in self.deprecated:
warnings.warn('%r is deprecated. Use %r instead.\n'
'section: [%s]' %
warnings.warn("%r is deprecated. Use %r instead.\n"
"section: [%s]" %
(k, self.deprecated[k], section))
else:
if section in self.obsolete:
warnings.warn('%r is obsolete. Use %r instead.'
warnings.warn("%r is obsolete. Use %r instead."
% (section, self.obsolete[section]))
elif section in self.deprecated:
warnings.warn('%r is deprecated. Use %r instead.'
warnings.warn("%r is deprecated. Use %r instead."
% (section, self.deprecated[section]))
def check_compatibility(self):
"""Process config and warn on each obsolete or deprecated entry."""
self._compat(cherrypy.config)
for sn, app in cherrypy.tree.apps.items():
for sn, app in cherrypy.tree.apps.iteritems():
if not isinstance(app, cherrypy.Application):
continue
self._compat(app.config)
# ------------------------ Known Namespaces ------------------------ #
extra_config_namespaces = []
def _known_ns(self, app):
ns = ['wsgi']
ns.extend(copykeys(app.toolboxes))
ns.extend(copykeys(app.namespaces))
ns.extend(copykeys(app.request_class.namespaces))
ns.extend(copykeys(cherrypy.config.namespaces))
ns = ["wsgi"]
ns.extend(app.toolboxes.keys())
ns.extend(app.namespaces.keys())
ns.extend(app.request_class.namespaces.keys())
ns.extend(cherrypy.config.namespaces.keys())
ns += self.extra_config_namespaces
for section, conf in app.config.items():
is_path_section = section.startswith('/')
for section, conf in app.config.iteritems():
is_path_section = section.startswith("/")
if is_path_section and isinstance(conf, dict):
for k, v in conf.items():
atoms = k.split('.')
for k, v in conf.iteritems():
atoms = k.split(".")
if len(atoms) > 1:
if atoms[0] not in ns:
# Spit out a special warning if a known
# namespace is preceded by "cherrypy."
if atoms[0] == 'cherrypy' and atoms[1] in ns:
msg = (
'The config entry %r is invalid; '
'try %r instead.\nsection: [%s]'
% (k, '.'.join(atoms[1:]), section))
if (atoms[0] == "cherrypy" and atoms[1] in ns):
msg = ("The config entry %r is invalid; "
"try %r instead.\nsection: [%s]"
% (k, ".".join(atoms[1:]), section))
else:
msg = (
'The config entry %r is invalid, '
'because the %r config namespace '
'is unknown.\n'
'section: [%s]' % (k, atoms[0], section))
msg = ("The config entry %r is invalid, because "
"the %r config namespace is unknown.\n"
"section: [%s]" % (k, atoms[0], section))
warnings.warn(msg)
elif atoms[0] == 'tools':
elif atoms[0] == "tools":
if atoms[1] not in dir(cherrypy.tools):
msg = (
'The config entry %r may be invalid, '
'because the %r tool was not found.\n'
'section: [%s]' % (k, atoms[1], section))
msg = ("The config entry %r may be invalid, "
"because the %r tool was not found.\n"
"section: [%s]" % (k, atoms[1], section))
warnings.warn(msg)
def check_config_namespaces(self):
"""Process config and warn on each unknown config namespace."""
for sn, app in cherrypy.tree.apps.items():
for sn, app in cherrypy.tree.apps.iteritems():
if not isinstance(app, cherrypy.Application):
continue
self._known_ns(app)
# -------------------------- Config Types -------------------------- #
known_config_types = {}
def _populate_known_types(self):
b = [x for x in vars(builtins).values()
if type(x) is type(str)]
import __builtin__
builtins = [x for x in vars(__builtin__).values()
if type(x) is type(str)]
def traverse(obj, namespace):
for name in dir(obj):
# Hack for 3.2's warning about body_params
if name == 'body_params':
continue
vtype = type(getattr(obj, name, None))
if vtype in b:
self.known_config_types[namespace + '.' + name] = vtype
traverse(cherrypy.request, 'request')
traverse(cherrypy.response, 'response')
traverse(cherrypy.server, 'server')
traverse(cherrypy.engine, 'engine')
traverse(cherrypy.log, 'log')
if vtype in builtins:
self.known_config_types[namespace + "." + name] = vtype
traverse(cherrypy.request, "request")
traverse(cherrypy.response, "response")
traverse(cherrypy.server, "server")
traverse(cherrypy.engine, "engine")
traverse(cherrypy.log, "log")
def _known_types(self, config):
msg = ('The config entry %r in section %r is of type %r, '
'which does not match the expected type %r.')
for section, conf in config.items():
msg = ("The config entry %r in section %r is of type %r, "
"which does not match the expected type %r.")
for section, conf in config.iteritems():
if isinstance(conf, dict):
for k, v in conf.items():
for k, v in conf.iteritems():
if v is not None:
expected_type = self.known_config_types.get(k, None)
vtype = type(v)
@@ -311,22 +263,23 @@ class Checker(object):
if expected_type and vtype != expected_type:
warnings.warn(msg % (k, section, vtype.__name__,
expected_type.__name__))
def check_config_types(self):
"""Assert that config values are of the same type as default values."""
self._known_types(cherrypy.config)
for sn, app in cherrypy.tree.apps.items():
for sn, app in cherrypy.tree.apps.iteritems():
if not isinstance(app, cherrypy.Application):
continue
self._known_types(app.config)
# -------------------- Specific config warnings -------------------- #
def check_localhost(self):
"""Warn if any socket_host is 'localhost'. See #711."""
for k, v in cherrypy.config.items():
for k, v in cherrypy.config.iteritems():
if k == 'server.socket_host' and v == 'localhost':
warnings.warn("The use of 'localhost' as a socket host can "
'cause problems on newer systems, since '
"'localhost' can map to either an IPv4 or an "
"IPv6 address. You should use '127.0.0.1' "
"or '[::1]' instead.")
"cause problems on newer systems, since 'localhost' can "
"map to either an IPv4 or an IPv6 address. You should "
"use '127.0.0.1' or '[::1]' instead.")


@@ -1,334 +0,0 @@
"""Compatibility code for using CherryPy with various versions of Python.
CherryPy 3.2 is compatible with Python versions 2.6+. This module provides a
useful abstraction over the differences between Python versions, sometimes by
preferring a newer idiom, sometimes an older one, and sometimes a custom one.
In particular, Python 2 uses str and '' for byte strings, while Python 3
uses str and '' for unicode strings. We will call each of these the 'native
string' type for each version. Because of this major difference, this module
provides
two functions: 'ntob', which translates native strings (of type 'str') into
byte strings regardless of Python version, and 'ntou', which translates native
strings to unicode strings. This also provides a 'BytesIO' name for dealing
specifically with bytes, and a 'StringIO' name for dealing with native strings.
It also provides a 'base64_decode' function with native strings as input and
output.
"""
import binascii
import os
import re
import sys
import threading
import six
if six.PY3:
def ntob(n, encoding='ISO-8859-1'):
"""Return the given native string as a byte string in the given
encoding.
"""
assert_native(n)
# In Python 3, the native string type is unicode
return n.encode(encoding)
def ntou(n, encoding='ISO-8859-1'):
"""Return the given native string as a unicode string with the given
encoding.
"""
assert_native(n)
# In Python 3, the native string type is unicode
return n
def tonative(n, encoding='ISO-8859-1'):
"""Return the given string as a native string in the given encoding."""
# In Python 3, the native string type is unicode
if isinstance(n, bytes):
return n.decode(encoding)
return n
else:
# Python 2
def ntob(n, encoding='ISO-8859-1'):
"""Return the given native string as a byte string in the given
encoding.
"""
assert_native(n)
# In Python 2, the native string type is bytes. Assume it's already
# in the given encoding, which for ISO-8859-1 is almost always what
# was intended.
return n
def ntou(n, encoding='ISO-8859-1'):
"""Return the given native string as a unicode string with the given
encoding.
"""
assert_native(n)
# In Python 2, the native string type is bytes.
# First, check for the special encoding 'escape'. The test suite uses
# this to signal that it wants to pass a string with embedded \uXXXX
# escapes, but without having to prefix it with u'' for Python 2,
# but no prefix for Python 3.
if encoding == 'escape':
return unicode(
re.sub(r'\\u([0-9a-zA-Z]{4})',
lambda m: unichr(int(m.group(1), 16)),
n.decode('ISO-8859-1')))
# Assume it's already in the given encoding, which for ISO-8859-1
# is almost always what was intended.
return n.decode(encoding)
def tonative(n, encoding='ISO-8859-1'):
"""Return the given string as a native string in the given encoding."""
# In Python 2, the native string type is bytes.
if isinstance(n, unicode):
return n.encode(encoding)
return n
def assert_native(n):
if not isinstance(n, str):
raise TypeError('n must be a native str (got %s)' % type(n).__name__)
try:
# Python 3.1+
from base64 import decodebytes as _base64_decodebytes
except ImportError:
# Python 3.0-
# since CherryPy claims compatibility with Python 2.3, we must use
# the legacy API of base64
from base64 import decodestring as _base64_decodebytes
def base64_decode(n, encoding='ISO-8859-1'):
"""Return the native string base64-decoded (as a native string)."""
if isinstance(n, six.text_type):
b = n.encode(encoding)
else:
b = n
b = _base64_decodebytes(b)
if str is six.text_type:
return b.decode(encoding)
else:
return b
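# Hedged round-trip sketch of the helpers defined above (default ISO-8859-1
# encoding assumed; values are illustrative):
#
#     ntob('abc')             # byte string on Python 2 and 3
#     ntou('abc')             # unicode string on Python 2 and 3
#     base64_decode('YWJj')   # -> 'abc' as a native string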
try:
sorted = sorted
except NameError:
def sorted(i):
i = i[:]
i.sort()
return i
try:
reversed = reversed
except NameError:
def reversed(x):
i = len(x)
while i > 0:
i -= 1
yield x[i]
try:
# Python 3
from urllib.parse import urljoin, urlencode
from urllib.parse import quote, quote_plus
from urllib.request import unquote, urlopen
from urllib.request import parse_http_list, parse_keqv_list
except ImportError:
# Python 2
from urlparse import urljoin # noqa
from urllib import urlencode, urlopen # noqa
from urllib import quote, quote_plus # noqa
from urllib import unquote # noqa
from urllib2 import parse_http_list, parse_keqv_list # noqa
try:
dict.iteritems
# Python 2
iteritems = lambda d: d.iteritems()
copyitems = lambda d: d.items()
except AttributeError:
# Python 3
iteritems = lambda d: d.items()
copyitems = lambda d: list(d.items())
try:
dict.iterkeys
# Python 2
iterkeys = lambda d: d.iterkeys()
copykeys = lambda d: d.keys()
except AttributeError:
# Python 3
iterkeys = lambda d: d.keys()
copykeys = lambda d: list(d.keys())
try:
dict.itervalues
# Python 2
itervalues = lambda d: d.itervalues()
copyvalues = lambda d: d.values()
except AttributeError:
# Python 3
itervalues = lambda d: d.values()
copyvalues = lambda d: list(d.values())
try:
# Python 3
import builtins
except ImportError:
# Python 2
import __builtin__ as builtins # noqa
try:
# Python 2. We try Python 2 first so that clients on Python 2
# don't try to import the 'http' module from cherrypy.lib
from Cookie import SimpleCookie, CookieError
from httplib import BadStatusLine, HTTPConnection, IncompleteRead
from httplib import NotConnected
from BaseHTTPServer import BaseHTTPRequestHandler
except ImportError:
# Python 3
from http.cookies import SimpleCookie, CookieError # noqa
from http.client import BadStatusLine, HTTPConnection, IncompleteRead # noqa
from http.client import NotConnected # noqa
from http.server import BaseHTTPRequestHandler # noqa
# Some platforms don't expose HTTPSConnection, so handle it separately
if six.PY3:
try:
from http.client import HTTPSConnection
except ImportError:
# Some platforms which don't have SSL don't expose HTTPSConnection
HTTPSConnection = None
else:
try:
from httplib import HTTPSConnection
except ImportError:
HTTPSConnection = None
try:
# Python 2
xrange = xrange
except NameError:
# Python 3
xrange = range
try:
# Python 3
from urllib.parse import unquote as parse_unquote
def unquote_qs(atom, encoding, errors='strict'):
return parse_unquote(
atom.replace('+', ' '),
encoding=encoding,
errors=errors)
except ImportError:
# Python 2
from urllib import unquote as parse_unquote
def unquote_qs(atom, encoding, errors='strict'):
return parse_unquote(atom.replace('+', ' ')).decode(encoding, errors)
try:
# Prefer simplejson, which is usually more advanced than the builtin
# module.
import simplejson as json
json_decode = json.JSONDecoder().decode
_json_encode = json.JSONEncoder().iterencode
except ImportError:
if sys.version_info >= (2, 6):
# Python >=2.6 : json is part of the standard library
import json
json_decode = json.JSONDecoder().decode
_json_encode = json.JSONEncoder().iterencode
else:
json = None
def json_decode(s):
raise ValueError('No JSON library is available')
def _json_encode(s):
raise ValueError('No JSON library is available')
finally:
if json and six.PY3:
# The two Python 3 implementations (simplejson/json)
# outputs str. We need bytes.
def json_encode(value):
for chunk in _json_encode(value):
yield chunk.encode('utf8')
else:
json_encode = _json_encode
text_or_bytes = six.text_type, six.binary_type
try:
import cPickle as pickle
except ImportError:
# In Python 2, the plain 'pickle' module is the pure-Python version.
# In Python 3, 'pickle' uses the sped-up C implementation automatically.
import pickle # noqa
def random20():
return binascii.hexlify(os.urandom(20)).decode('ascii')
try:
from _thread import get_ident as get_thread_ident
except ImportError:
from thread import get_ident as get_thread_ident # noqa
try:
# Python 3
next = next
except NameError:
# Python 2
def next(i):
return i.next()
if sys.version_info >= (3, 3):
Timer = threading.Timer
Event = threading.Event
else:
# Python 3.2 and earlier
Timer = threading._Timer
Event = threading._Event
try:
# Python 2.7+
from subprocess import _args_from_interpreter_flags
except ImportError:
def _args_from_interpreter_flags():
"""Tries to reconstruct original interpreter args from sys.flags for Python 2.6
Backported from Python 3.5. Aims to return a list of
command-line arguments reproducing the current
settings in sys.flags and sys.warnoptions.
"""
flag_opt_map = {
'debug': 'd',
# 'inspect': 'i',
# 'interactive': 'i',
'optimize': 'O',
'dont_write_bytecode': 'B',
'no_user_site': 's',
'no_site': 'S',
'ignore_environment': 'E',
'verbose': 'v',
'bytes_warning': 'b',
'quiet': 'q',
'hash_randomization': 'R',
'py3k_warning': '3',
}
args = []
for flag, opt in flag_opt_map.items():
v = getattr(sys.flags, flag)
if v > 0:
if flag == 'hash_randomization':
v = 1 # Handle specification of an exact seed
args.append('-' + opt * v)
for opt in sys.warnoptions:
args.append('-W' + opt)
return args
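# Hedged usage sketch: the reconstructed flags let a parent process re-exec a
# child with the same interpreter options (the command below is illustrative):
#
#     import subprocess, sys
#     cmd = [sys.executable] + _args_from_interpreter_flags() + ['-c', 'pass']
#     subprocess.call(cmd)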


@@ -1,5 +1,4 @@
"""
Configuration system for CherryPy.
"""Configuration system for CherryPy.
Configuration in CherryPy is implemented via dictionaries. Keys are strings
which name the mapped value, which may be of any type.
@@ -11,20 +10,17 @@ Architecture
CherryPy Requests are part of an Application, which runs in a global context,
and configuration data may apply to any of those three scopes:
Global
Configuration entries which apply everywhere are stored in
Global: configuration entries which apply everywhere are stored in
cherrypy.config.
Application
Entries which apply to each mounted application are stored
Application: entries which apply to each mounted application are stored
on the Application object itself, as 'app.config'. This is a two-level
dict where each key is a path, or "relative URL" (for example, "/" or
"/path/to/my/page"), and each value is a config dict. Usually, this
data is provided in the call to tree.mount(root(), config=conf),
although you may also use app.merge(conf).
Request
Each Request object possesses a single 'Request.config' dict.
Request: each Request object possesses a single 'Request.config' dict.
Early in the request process, this dict is populated by merging global
config entries, Application entries (whose path equals or is a parent
of Request.path_info), and any config acquired while looking up the
@@ -37,7 +33,7 @@ Declaration
Configuration data may be supplied as a Python dictionary, as a filename,
or as an open file object. When you supply a filename or file, CherryPy
uses Python's builtin ConfigParser; you declare Application config by
writing each path as a section header::
writing each path as a section header:
[/path/to/my/page]
request.stream = True
@@ -45,22 +41,20 @@ writing each path as a section header::
To declare global configuration entries, place them in a [global] section.
You may also declare config entries directly on the classes and methods
(page handlers) that make up your CherryPy application via the ``_cp_config``
attribute, set with the ``cherrypy.config`` decorator. For example::
(page handlers) that make up your CherryPy application via the '_cp_config'
attribute. For example:
@cherrypy.config(**{'tools.gzip.on': True})
class Demo:
@cherrypy.expose
@cherrypy.config(**{'request.show_tracebacks': False})
_cp_config = {'tools.gzip.on': True}
def index(self):
return "Hello world"
index.exposed = True
index._cp_config = {'request.show_tracebacks': False}
.. note::
This behavior is only guaranteed for the default dispatcher.
Other dispatchers may have different restrictions on where
you can attach config attributes.
Note, however, that this behavior is only guaranteed for the default
dispatcher. Other dispatchers may have different restrictions on where
you can attach _cp_config attributes.
Namespaces
@@ -69,42 +63,23 @@ Namespaces
Configuration keys are separated into namespaces by the first "." in the key.
Current namespaces:
engine
Controls the 'application engine', including autoreload.
These can only be declared in the global config.
tree
Grafts cherrypy.Application objects onto cherrypy.tree.
These can only be declared in the global config.
hooks
Declares additional request-processing functions.
log
Configures the logging for each application.
These can only be declared in the global or / config.
request
Adds attributes to each Request.
response
Adds attributes to each Response.
server
Controls the default HTTP server via cherrypy.server.
These can only be declared in the global config.
tools
Runs and configures additional request-processing packages.
wsgi
Adds WSGI middleware to an Application's "pipeline".
These can only be declared in the app's root config ("/").
checker
Controls the 'checker', which looks for common errors in
app state (including config) when the engine starts.
Global config only.
engine: Controls the 'application engine', including autoreload.
These can only be declared in the global config.
tree: Grafts cherrypy.Application objects onto cherrypy.tree.
These can only be declared in the global config.
hooks: Declares additional request-processing functions.
log: Configures the logging for each application.
These can only be declared in the global or / config.
request: Adds attributes to each Request.
response: Adds attributes to each Response.
server: Controls the default HTTP server via cherrypy.server.
These can only be declared in the global config.
tools: Runs and configures additional request-processing packages.
wsgi: Adds WSGI middleware to an Application's "pipeline".
These can only be declared in the app's root config ("/").
checker: Controls the 'checker', which looks for common errors in
app state (including config) when the engine starts.
Global config only.
The only key that does not exist in a namespace is the "environment" entry.
This special entry 'imports' other config entries from a template stored in
@@ -118,103 +93,35 @@ be any string, and the handler must be either a callable or a (Python 2.5
style) context manager.
"""
import ConfigParser
try:
set
except NameError:
from sets import Set as set
import sys
import cherrypy
from cherrypy._cpcompat import text_or_bytes
from cherrypy.lib import reprconf
# Deprecated in CherryPy 3.2--remove in 3.3
NamespaceSet = reprconf.NamespaceSet
def merge(base, other):
"""Merge one app config (from a dict, file, or filename) into another.
If the given config is a filename, it will be appended to
the list of files to monitor for "autoreload" changes.
"""
if isinstance(other, text_or_bytes):
cherrypy.engine.autoreload.files.add(other)
# Load other into base
for section, value_map in reprconf.as_dict(other).items():
if not isinstance(value_map, dict):
raise ValueError(
'Application config must include section headers, but the '
"config you tried to merge doesn't have any sections. "
'Wrap your config in another dict with paths as section '
"headers, for example: {'/': config}.")
base.setdefault(section, {}).update(value_map)
class Config(reprconf.Config):
"""The 'global' configuration data for the entire CherryPy process."""
def update(self, config):
"""Update self from a dict, file or filename."""
if isinstance(config, text_or_bytes):
# Filename
cherrypy.engine.autoreload.files.add(config)
reprconf.Config.update(self, config)
def _apply(self, config):
"""Update self from a dict."""
if isinstance(config.get('global'), dict):
if len(config) > 1:
cherrypy.checker.global_config_contained_paths = True
config = config['global']
if 'tools.staticdir.dir' in config:
config['tools.staticdir.section'] = 'global'
reprconf.Config._apply(self, config)
@staticmethod
def __call__(*args, **kwargs):
"""Decorator for page handlers to set _cp_config."""
if args:
raise TypeError(
'The cherrypy.config decorator does not accept positional '
'arguments; you must use keyword arguments.')
def tool_decorator(f):
_Vars(f).setdefault('_cp_config', {}).update(kwargs)
return f
return tool_decorator
class _Vars(object):
"""
Adapter that allows setting a default attribute on a function
or class.
"""
def __init__(self, target):
self.target = target
def setdefault(self, key, default):
if not hasattr(self.target, key):
setattr(self.target, key, default)
return getattr(self.target, key)
# Sphinx begin config.environments
Config.environments = environments = {
'staging': {
'engine.autoreload.on': False,
environments = {
"staging": {
'engine.autoreload_on': False,
'checker.on': False,
'tools.log_headers.on': False,
'request.show_tracebacks': False,
'request.show_mismatched_params': False,
},
'production': {
'engine.autoreload.on': False,
},
"production": {
'engine.autoreload_on': False,
'checker.on': False,
'tools.log_headers.on': False,
'request.show_tracebacks': False,
'request.show_mismatched_params': False,
'log.screen': False,
},
'embedded': {
},
"embedded": {
# For use with CherryPy embedded in another deployment stack.
'engine.autoreload.on': False,
'engine.autoreload_on': False,
'checker.on': False,
'tools.log_headers.on': False,
'request.show_tracebacks': False,
@@ -222,35 +129,186 @@ Config.environments = environments = {
'log.screen': False,
'engine.SIGHUP': None,
'engine.SIGTERM': None,
},
'test_suite': {
'engine.autoreload.on': False,
},
"test_suite": {
'engine.autoreload_on': False,
'checker.on': False,
'tools.log_headers.on': False,
'request.show_tracebacks': True,
'request.show_mismatched_params': True,
'log.screen': False,
},
}
# Sphinx end config.environments
},
}
def as_dict(config):
"""Return a dict from 'config' whether it is a dict, file, or filename."""
if isinstance(config, basestring):
config = _Parser().dict_from_file(config)
elif hasattr(config, 'read'):
config = _Parser().dict_from_file(config)
return config
def merge(base, other):
"""Merge one app config (from a dict, file, or filename) into another.
If the given config is a filename, it will be appended to
the list of files to monitor for "autoreload" changes.
"""
if isinstance(other, basestring):
cherrypy.engine.autoreload.files.add(other)
# Load other into base
for section, value_map in as_dict(other).iteritems():
base.setdefault(section, {}).update(value_map)
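# Hedged usage sketch of merge(); the paths and tool names are illustrative:
#
#     app_conf = {'/': {'tools.gzip.on': True}}
#     merge(app_conf, {'/static': {'tools.staticdir.on': True}})
#     # app_conf now holds both sections; passing a filename instead would also
#     # add it to the autoreload watch list.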
class NamespaceSet(dict):
"""A dict of config namespace names and handlers.
Each config entry should begin with a namespace name; the corresponding
namespace handler will be called once for each config entry in that
namespace, and will be passed two arguments: the config key (with the
namespace removed) and the config value.
Namespace handlers may be any Python callable; they may also be
Python 2.5-style 'context managers', in which case their __enter__
method should return a callable to be used as the handler.
See cherrypy.tools (the Toolbox class) for an example.
"""
def __call__(self, config):
"""Iterate through config and pass it to each namespace handler.
'config' should be a flat dict, where keys use dots to separate
namespaces, and values are arbitrary.
The first name in each config key is used to look up the corresponding
namespace handler. For example, a config entry of {'tools.gzip.on': v}
will call the 'tools' namespace handler with the args: ('gzip.on', v)
"""
# Separate the given config into namespaces
ns_confs = {}
for k in config:
if "." in k:
ns, name = k.split(".", 1)
bucket = ns_confs.setdefault(ns, {})
bucket[name] = config[k]
# I chose __enter__ and __exit__ so someday this could be
# rewritten using Python 2.5's 'with' statement:
# for ns, handler in self.iteritems():
# with handler as callable:
# for k, v in ns_confs.get(ns, {}).iteritems():
# callable(k, v)
for ns, handler in self.iteritems():
exit = getattr(handler, "__exit__", None)
if exit:
callable = handler.__enter__()
no_exc = True
try:
try:
for k, v in ns_confs.get(ns, {}).iteritems():
callable(k, v)
except:
# The exceptional case is handled here
no_exc = False
if exit is None:
raise
if not exit(*sys.exc_info()):
raise
# The exception is swallowed if exit() returns true
finally:
# The normal and non-local-goto cases are handled here
if no_exc and exit:
exit(None, None, None)
else:
for k, v in ns_confs.get(ns, {}).iteritems():
handler(k, v)
def __repr__(self):
return "%s.%s(%s)" % (self.__module__, self.__class__.__name__,
dict.__repr__(self))
def __copy__(self):
newobj = self.__class__()
newobj.update(self)
return newobj
copy = __copy__
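# Hedged sketch of a namespace handler; the 'db' namespace and its keys are
# hypothetical:
#
#     settings = {}
#     ns = NamespaceSet(db=lambda k, v: settings.__setitem__(k, v))
#     ns({'db.host': 'localhost', 'db.port': 5432})
#     # settings == {'host': 'localhost', 'port': 5432}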
class Config(dict):
"""The 'global' configuration data for the entire CherryPy process."""
defaults = {
'tools.log_tracebacks.on': True,
'tools.log_headers.on': True,
'tools.trailing_slash.on': True,
}
namespaces = NamespaceSet(
**{"log": lambda k, v: setattr(cherrypy.log, k, v),
"checker": lambda k, v: setattr(cherrypy.checker, k, v),
})
def __init__(self):
self.reset()
def reset(self):
"""Reset self to default values."""
self.clear()
dict.update(self, self.defaults)
def update(self, config):
"""Update self from a dict, file or filename."""
if isinstance(config, basestring):
# Filename
cherrypy.engine.autoreload.files.add(config)
config = _Parser().dict_from_file(config)
elif hasattr(config, 'read'):
# Open file object
config = _Parser().dict_from_file(config)
else:
config = config.copy()
if isinstance(config.get("global", None), dict):
if len(config) > 1:
cherrypy.checker.global_config_contained_paths = True
config = config["global"]
which_env = config.get('environment')
if which_env:
env = environments[which_env]
for k in env:
if k not in config:
config[k] = env[k]
if 'tools.staticdir.dir' in config:
config['tools.staticdir.section'] = "global"
dict.update(self, config)
self.namespaces(config)
def __setitem__(self, k, v):
dict.__setitem__(self, k, v)
self.namespaces({k: v})
def _server_namespace_handler(k, v):
"""Config handler for the "server" namespace."""
atoms = k.split('.', 1)
atoms = k.split(".", 1)
if len(atoms) > 1:
# Special-case config keys of the form 'server.servername.socket_port'
# to configure additional HTTP servers.
if not hasattr(cherrypy, 'servers'):
if not hasattr(cherrypy, "servers"):
cherrypy.servers = {}
servername, k = atoms
if servername not in cherrypy.servers:
from cherrypy import _cpserver
cherrypy.servers[servername] = _cpserver.Server()
# On by default, but 'on = False' can unsubscribe it (see below).
cherrypy.servers[servername].subscribe()
if k == 'on':
if v:
cherrypy.servers[servername].subscribe()
@@ -260,44 +318,98 @@ def _server_namespace_handler(k, v):
setattr(cherrypy.servers[servername], k, v)
else:
setattr(cherrypy.server, k, v)
Config.namespaces['server'] = _server_namespace_handler
Config.namespaces["server"] = _server_namespace_handler
def _engine_namespace_handler(k, v):
"""Config handler for the "engine" namespace."""
"""Backward compatibility handler for the "engine" namespace."""
engine = cherrypy.engine
if k == 'SIGHUP':
engine.subscribe('SIGHUP', v)
if k == 'autoreload_on':
if v:
engine.autoreload.subscribe()
else:
engine.autoreload.unsubscribe()
elif k == 'autoreload_frequency':
engine.autoreload.frequency = v
elif k == 'autoreload_match':
engine.autoreload.match = v
elif k == 'reload_files':
engine.autoreload.files = set(v)
elif k == 'deadlock_poll_freq':
engine.timeout_monitor.frequency = v
elif k == 'SIGHUP':
engine.listeners['SIGHUP'] = set([v])
elif k == 'SIGTERM':
engine.subscribe('SIGTERM', v)
elif '.' in k:
plugin, attrname = k.split('.', 1)
engine.listeners['SIGTERM'] = set([v])
elif "." in k:
plugin, attrname = k.split(".", 1)
plugin = getattr(engine, plugin)
if attrname == 'on':
if v and hasattr(getattr(plugin, 'subscribe', None), '__call__'):
if v and callable(getattr(plugin, 'subscribe', None)):
plugin.subscribe()
return
elif (
(not v) and
hasattr(getattr(plugin, 'unsubscribe', None), '__call__')
):
elif (not v) and callable(getattr(plugin, 'unsubscribe', None)):
plugin.unsubscribe()
return
setattr(plugin, attrname, v)
else:
setattr(engine, k, v)
Config.namespaces['engine'] = _engine_namespace_handler
Config.namespaces["engine"] = _engine_namespace_handler
def _tree_namespace_handler(k, v):
"""Namespace handler for the 'tree' config namespace."""
if isinstance(v, dict):
for script_name, app in v.items():
cherrypy.tree.graft(app, script_name)
msg = 'Mounted: %s on %s' % (app, script_name or '/')
cherrypy.engine.log(msg)
else:
cherrypy.tree.graft(v, v.script_name)
cherrypy.engine.log('Mounted: %s on %s' % (v, v.script_name or '/'))
Config.namespaces['tree'] = _tree_namespace_handler
cherrypy.tree.graft(v, v.script_name)
cherrypy.engine.log("Mounted: %s on %s" % (v, v.script_name or "/"))
Config.namespaces["tree"] = _tree_namespace_handler
class _Parser(ConfigParser.ConfigParser):
"""Sub-class of ConfigParser that keeps the case of options and that raises
an exception if the file cannot be read.
"""
def optionxform(self, optionstr):
return optionstr
def read(self, filenames):
if isinstance(filenames, basestring):
filenames = [filenames]
for filename in filenames:
# try:
# fp = open(filename)
# except IOError:
# continue
fp = open(filename)
try:
self._read(fp, filename)
finally:
fp.close()
def as_dict(self, raw=False, vars=None):
"""Convert an INI file to a dictionary"""
# Load INI file into a dict
from cherrypy.lib import unrepr
result = {}
for section in self.sections():
if section not in result:
result[section] = {}
for option in self.options(section):
value = self.get(section, option, raw, vars)
try:
value = unrepr(value)
except Exception, x:
msg = ("Config error in section: %r, option: %r, "
"value: %r. Config values must be valid Python." %
(section, option, value))
raise ValueError(msg, x.__class__.__name__, x.args)
result[section][option] = value
return result
def dict_from_file(self, file):
if hasattr(file, 'read'):
self.readfp(file)
else:
self.read(file)
return self.as_dict()
del ConfigParser


@@ -9,61 +9,25 @@ The default dispatcher discovers the page handler by matching path_info
to a hierarchical arrangement of objects, starting at request.app.root.
"""
import string
import sys
import types
try:
classtype = (type, types.ClassType)
except AttributeError:
classtype = type
import cherrypy
class PageHandler(object):
"""Callable which sets response.body."""
def __init__(self, callable, *args, **kwargs):
self.callable = callable
self.args = args
self.kwargs = kwargs
def get_args(self):
return cherrypy.serving.request.args
def set_args(self, args):
cherrypy.serving.request.args = args
return cherrypy.serving.request.args
args = property(
get_args,
set_args,
doc='The ordered args should be accessible from post dispatch hooks'
)
def get_kwargs(self):
return cherrypy.serving.request.kwargs
def set_kwargs(self, kwargs):
cherrypy.serving.request.kwargs = kwargs
return cherrypy.serving.request.kwargs
kwargs = property(
get_kwargs,
set_kwargs,
doc='The named kwargs should be accessible from post dispatch hooks'
)
def __call__(self):
try:
return self.callable(*self.args, **self.kwargs)
except TypeError:
x = sys.exc_info()[1]
except TypeError, x:
try:
test_callable_spec(self.callable, self.args, self.kwargs)
except cherrypy.HTTPError:
raise sys.exc_info()[1]
except cherrypy.HTTPError, error:
raise error
except:
raise x
raise
@@ -80,8 +44,7 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
2. Too few parameters are passed to the function.
There are 3 sources of parameters to a cherrypy handler.
1. query string parameters are passed as keyword parameters to the
handler.
1. query string parameters are passed as keyword parameters to the handler.
2. body parameters are also passed as keyword parameters.
3. when partial matching occurs, the final path atoms are passed as
positional args.
@@ -89,16 +52,14 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
incorrect, then a 404 Not Found should be raised. Conversely the body
parameters are part of the request; if they are invalid a 400 Bad Request.
"""
show_mismatched_params = getattr(
cherrypy.serving.request, 'show_mismatched_params', False)
show_mismatched_params = getattr(cherrypy.request, 'show_mismatched_params', False)
try:
(args, varargs, varkw, defaults) = getargspec(callable)
(args, varargs, varkw, defaults) = inspect.getargspec(callable)
except TypeError:
if isinstance(callable, object) and hasattr(callable, '__call__'):
(args, varargs, varkw,
defaults) = getargspec(callable.__call__)
(args, varargs, varkw, defaults) = inspect.getargspec(callable.__call__)
else:
# If it wasn't one of our own types, re-raise
# If it wasn't one of our own types, re-raise
# the original error
raise
@@ -123,16 +84,14 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
varkw_usage += 1
extra_kwargs.add(key)
# figure out which args have defaults.
args_with_defaults = args[-len(defaults or []):]
for i, val in enumerate(defaults or []):
# Defaults take effect only when the arg hasn't been used yet.
if arg_usage[args_with_defaults[i]] == 0:
arg_usage[args_with_defaults[i]] += 1
if arg_usage[args[i]] == 0:
arg_usage[args[i]] += 1
missing_args = []
multiple_args = []
for key, usage in arg_usage.items():
for key, usage in arg_usage.iteritems():
if usage == 0:
missing_args.append(key)
elif usage > 1:
@@ -145,26 +104,27 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
# 2. not enough body parameters -> 400
# 3. not enough path parts (partial matches) -> 404
#
# We can't actually tell which case it is,
# We can't actually tell which case it is,
# so I'm raising a 404 because that covers 2/3 of the
# possibilities
#
#
# In the case where the method does not allow body
# arguments it's definitely a 404.
message = None
if show_mismatched_params:
message = 'Missing parameters: %s' % ','.join(missing_args)
message="Missing parameters: %s" % ",".join(missing_args)
raise cherrypy.HTTPError(404, message=message)
# the extra positional arguments come from the path - 404 Not Found
if not varargs and vararg_usage > 0:
raise cherrypy.HTTPError(404)
body_params = cherrypy.serving.request.body.params or {}
body_params = cherrypy.request.body_params or {}
body_params = set(body_params.keys())
qs_params = set(callable_kwargs.keys()) - body_params
if multiple_args:
if qs_params.intersection(set(multiple_args)):
# If any of the multiple parameters came from the query string then
# it's a 404 Not Found
@@ -175,8 +135,8 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
message = None
if show_mismatched_params:
message = 'Multiple values for parameters: '\
'%s' % ','.join(multiple_args)
message="Multiple values for parameters: "\
"%s" % ",".join(multiple_args)
raise cherrypy.HTTPError(error, message=message)
if not varkw and varkw_usage > 0:
@@ -186,8 +146,8 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
if extra_qs_params:
message = None
if show_mismatched_params:
message = 'Unexpected query string '\
'parameters: %s' % ', '.join(extra_qs_params)
message="Unexpected query string "\
"parameters: %s" % ", ".join(extra_qs_params)
raise cherrypy.HTTPError(404, message=message)
# If there were any extra body parameters, it's a 400 Bad Request
@@ -195,8 +155,8 @@ def test_callable_spec(callable, callable_args, callable_kwargs):
if extra_body_params:
message = None
if show_mismatched_params:
message = 'Unexpected body parameters: '\
'%s' % ', '.join(extra_body_params)
message="Unexpected body parameters: "\
"%s" % ", ".join(extra_body_params)
raise cherrypy.HTTPError(400, message=message)
@@ -204,16 +164,10 @@ try:
import inspect
except ImportError:
test_callable_spec = lambda callable, args, kwargs: None
else:
getargspec = inspect.getargspec
# Python 3 requires using getfullargspec if keyword-only arguments are present
if hasattr(inspect, 'getfullargspec'):
def getargspec(callable):
return inspect.getfullargspec(callable)[:4]
class LateParamPageHandler(PageHandler):
"""When passing cherrypy.request.params to the page handler, we do not
want to capture that dict too early; we want to give tools like the
decoding tool a chance to modify the params dict in-between the lookup
@@ -221,43 +175,24 @@ class LateParamPageHandler(PageHandler):
takes that into account, and allows request.params to be 'bound late'
(it's more complicated than that, but that's the effect).
"""
def _get_kwargs(self):
kwargs = cherrypy.serving.request.params.copy()
kwargs = cherrypy.request.params.copy()
if self._kwargs:
kwargs.update(self._kwargs)
return kwargs
def _set_kwargs(self, kwargs):
cherrypy.serving.request.kwargs = kwargs
self._kwargs = kwargs
kwargs = property(_get_kwargs, _set_kwargs,
doc='page handler kwargs (with '
'cherrypy.request.params copied in)')
if sys.version_info < (3, 0):
punctuation_to_underscores = string.maketrans(
string.punctuation, '_' * len(string.punctuation))
def validate_translator(t):
if not isinstance(t, str) or len(t) != 256:
raise ValueError(
'The translate argument must be a str of len 256.')
else:
punctuation_to_underscores = str.maketrans(
string.punctuation, '_' * len(string.punctuation))
def validate_translator(t):
if not isinstance(t, dict):
raise ValueError('The translate argument must be a dict.')
class Dispatcher(object):
"""CherryPy Dispatcher which walks a tree of objects to find a handler.
The tree is rooted at cherrypy.request.app.root, and each hierarchical
component in the path_info argument is matched to a corresponding nested
attribute of the root object. Matching handlers must have an 'exposed'
@@ -265,171 +200,130 @@ class Dispatcher(object):
matches a URI which ends in a slash ("/"). The special method name
"default" may match a portion of the path_info (but only when no longer
substring of the path_info matches some other object).
This is the default, built-in dispatcher for CherryPy.
"""
__metaclass__ = cherrypy._AttributeDocstrings
dispatch_method_name = '_cp_dispatch'
"""
dispatch_method_name__doc = """
The name of the dispatch method that nodes may optionally implement
to provide their own dynamic dispatch algorithm.
"""
def __init__(self, dispatch_method_name=None,
translate=punctuation_to_underscores):
validate_translator(translate)
self.translate = translate
def __init__(self, dispatch_method_name = None):
if dispatch_method_name:
self.dispatch_method_name = dispatch_method_name
def __call__(self, path_info):
"""Set handler and config for the current request."""
request = cherrypy.serving.request
request = cherrypy.request
func, vpath = self.find_handler(path_info)
if func:
# Decode any leftover %2F in the virtual_path atoms.
vpath = [x.replace('%2F', '/') for x in vpath]
vpath = [x.replace("%2F", "/") for x in vpath]
request.handler = LateParamPageHandler(func, *vpath)
else:
request.handler = cherrypy.NotFound()
def find_handler(self, path):
"""Return the appropriate page handler, plus any virtual path.
This will return two objects. The first will be a callable,
which can be used to generate page output. Any parameters from
the query string or request body will be sent to that callable
as keyword arguments.
The callable is found by traversing the application's tree,
starting from cherrypy.request.app.root, and matching path
components to successive objects in the tree. For example, the
URL "/path/to/handler" might return root.path.to.handler.
The second object returned will be a list of names which are
'virtual path' components: parts of the URL which are dynamic,
and were not used when looking up the handler.
These virtual path components are passed to the handler as
positional arguments.
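For example, a minimal sketch (the class, URL and mount point are
illustrative, not part of this module):

    class Archive(object):
        def default(self, year, month):
            return 'Articles for %s/%s' % (year, month)
        default.exposed = True

If an Archive instance is mounted as root.archive, the URL
"/archive/2014/11" resolves to its "default" handler, which receives
'2014' and '11' as the positional (virtual path) arguments.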
"""
request = cherrypy.serving.request
request = cherrypy.request
app = request.app
root = app.root
dispatch_name = self.dispatch_method_name
# Get config for the root object/path.
fullpath = [x for x in path.strip('/').split('/') if x] + ['index']
fullpath_len = len(fullpath)
segleft = fullpath_len
curpath = ""
nodeconf = {}
if hasattr(root, '_cp_config'):
if hasattr(root, "_cp_config"):
nodeconf.update(root._cp_config)
if '/' in app.config:
nodeconf.update(app.config['/'])
object_trail = [['root', root, nodeconf, segleft]]
if "/" in app.config:
nodeconf.update(app.config["/"])
object_trail = [['root', root, nodeconf, curpath]]
node = root
iternames = fullpath[:]
names = [x for x in path.strip('/').split('/') if x] + ['index']
iternames = names[:]
while iternames:
name = iternames[0]
# map to legal Python identifiers (e.g. replace '.' with '_')
objname = name.translate(self.translate)
# map to legal Python identifiers (replace '.' with '_')
objname = name.replace('.', '_')
nodeconf = {}
subnode = getattr(node, objname, None)
pre_len = len(iternames)
if subnode is None:
dispatch = getattr(node, dispatch_name, None)
if dispatch and hasattr(dispatch, '__call__') and not \
getattr(dispatch, 'exposed', False) and \
pre_len > 1:
# Don't expose the hidden 'index' token to _cp_dispatch
# We skip this if pre_len == 1 since it makes no sense
# to call a dispatcher when we have no tokens left.
index_name = iternames.pop()
if dispatch and callable(dispatch) and not \
getattr(dispatch, 'exposed', False):
subnode = dispatch(vpath=iternames)
iternames.append(index_name)
else:
# We didn't find a path, but keep processing in case there
# is a default() handler.
iternames.pop(0)
else:
# We found the path, remove the vpath entry
iternames.pop(0)
segleft = len(iternames)
if segleft > pre_len:
# No path segment was removed. Raise an error.
raise cherrypy.CherryPyException(
'A vpath segment was added. Custom dispatchers may only '
+ 'remove elements. While trying to process '
+ '{0} in {1}'.format(name, fullpath)
)
elif segleft == pre_len:
# Assume that the handler used the current path segment, but
# did not pop it. This allows things like
# return getattr(self, vpath[0], None)
iternames.pop(0)
segleft -= 1
name = iternames.pop(0)
node = subnode
if node is not None:
# Get _cp_config attached to this node.
if hasattr(node, '_cp_config'):
if hasattr(node, "_cp_config"):
nodeconf.update(node._cp_config)
# Mix in values from app.config for this path.
existing_len = fullpath_len - pre_len
if existing_len != 0:
curpath = '/' + '/'.join(fullpath[0:existing_len])
else:
curpath = ''
new_segs = fullpath[fullpath_len - pre_len:fullpath_len - segleft]
for seg in new_segs:
curpath += '/' + seg
if curpath in app.config:
nodeconf.update(app.config[curpath])
object_trail.append([name, node, nodeconf, segleft])
curpath = "/".join((curpath, name))
if curpath in app.config:
nodeconf.update(app.config[curpath])
object_trail.append([name, node, nodeconf, curpath])
def set_conf():
"""Collapse all object_trail config into cherrypy.request.config.
"""
"""Collapse all object_trail config into cherrypy.request.config."""
base = cherrypy.config.copy()
# Note that we merge the config from each node
# even if that node was None.
for name, obj, conf, segleft in object_trail:
for name, obj, conf, curpath in object_trail:
base.update(conf)
if 'tools.staticdir.dir' in conf:
base['tools.staticdir.section'] = '/' + \
'/'.join(fullpath[0:fullpath_len - segleft])
base['tools.staticdir.section'] = curpath
return base
# Try successive objects (reverse order)
num_candidates = len(object_trail) - 1
for i in range(num_candidates, -1, -1):
name, candidate, nodeconf, segleft = object_trail[i]
for i in xrange(num_candidates, -1, -1):
name, candidate, nodeconf, curpath = object_trail[i]
if candidate is None:
continue
# Try a "default" method on the current leaf.
if hasattr(candidate, 'default'):
if hasattr(candidate, "default"):
defhandler = candidate.default
if getattr(defhandler, 'exposed', False):
# Insert any extra _cp_config from the default handler.
conf = getattr(defhandler, '_cp_config', {})
object_trail.insert(
i + 1, ['default', defhandler, conf, segleft])
conf = getattr(defhandler, "_cp_config", {})
object_trail.insert(i+1, ["default", defhandler, conf, curpath])
request.config = set_conf()
# See https://github.com/cherrypy/cherrypy/issues/613
request.is_index = path.endswith('/')
return defhandler, fullpath[fullpath_len - segleft:-1]
# Uncomment the next line to restrict positional params to
# "default".
# See http://www.cherrypy.org/ticket/613
request.is_index = path.endswith("/")
return defhandler, names[i:-1]
# Uncomment the next line to restrict positional params to "default".
# if i < num_candidates - 2: continue
# Try the current leaf.
if getattr(candidate, 'exposed', False):
request.config = set_conf()
@@ -443,50 +337,45 @@ class Dispatcher(object):
# Note that this also includes handlers which take
# positional parameters (virtual paths).
request.is_index = False
return candidate, fullpath[fullpath_len - segleft:-1]
return candidate, names[i:-1]
# We didn't find anything
request.config = set_conf()
return None, []
class MethodDispatcher(Dispatcher):
"""Additional dispatch based on cherrypy.request.method.upper().
Methods named GET, POST, etc will be called on an exposed class.
The method names must be all caps; the appropriate Allow header
will be output showing all capitalized method names as allowable
HTTP verbs.
Note that the containing class must be exposed, not the methods.
"""
def __call__(self, path_info):
"""Set handler and config for the current request."""
request = cherrypy.serving.request
request = cherrypy.request
resource, vpath = self.find_handler(path_info)
if resource:
# Set Allow header
avail = [m for m in dir(resource) if m.isupper()]
if 'GET' in avail and 'HEAD' not in avail:
avail.append('HEAD')
if "GET" in avail and "HEAD" not in avail:
avail.append("HEAD")
avail.sort()
cherrypy.serving.response.headers['Allow'] = ', '.join(avail)
cherrypy.response.headers['Allow'] = ", ".join(avail)
# Find the subhandler
meth = request.method.upper()
func = getattr(resource, meth, None)
if func is None and meth == 'HEAD':
func = getattr(resource, 'GET', None)
if func is None and meth == "HEAD":
func = getattr(resource, "GET", None)
if func:
# Grab any _cp_config on the subhandler.
if hasattr(func, '_cp_config'):
request.config.update(func._cp_config)
# Decode any leftover %2F in the virtual_path atoms.
vpath = [x.replace('%2F', '/') for x in vpath]
vpath = [x.replace("%2F", "/") for x in vpath]
request.handler = LateParamPageHandler(func, *vpath)
else:
request.handler = cherrypy.HTTPError(405)
@@ -495,10 +384,9 @@ class MethodDispatcher(Dispatcher):
class RoutesDispatcher(object):
"""A Routes based dispatcher for CherryPy."""
def __init__(self, full_result=False, **mapper_options):
def __init__(self, full_result=False):
"""
Routes dispatcher
@@ -509,40 +397,40 @@ class RoutesDispatcher(object):
import routes
self.full_result = full_result
self.controllers = {}
self.mapper = routes.Mapper(**mapper_options)
self.mapper = routes.Mapper()
self.mapper.controller_scan = self.controllers.keys
def connect(self, name, route, controller, **kwargs):
self.controllers[name] = controller
self.mapper.connect(name, route, controller=name, **kwargs)
def redirect(self, url):
raise cherrypy.HTTPRedirect(url)
def __call__(self, path_info):
"""Set handler and config for the current request."""
func = self.find_handler(path_info)
if func:
cherrypy.serving.request.handler = LateParamPageHandler(func)
cherrypy.request.handler = LateParamPageHandler(func)
else:
cherrypy.serving.request.handler = cherrypy.NotFound()
cherrypy.request.handler = cherrypy.NotFound()
def find_handler(self, path_info):
"""Find the right page handler, and set request.config."""
import routes
request = cherrypy.serving.request
request = cherrypy.request
config = routes.request_config()
config.mapper = self.mapper
if hasattr(request, 'wsgi_environ'):
config.environ = request.wsgi_environ
if hasattr(cherrypy.request, 'wsgi_environ'):
config.environ = cherrypy.request.wsgi_environ
config.host = request.headers.get('Host', None)
config.protocol = request.scheme
config.redirect = self.redirect
result = self.mapper.match(path_info)
config.mapper_dict = result
params = {}
if result:
@@ -551,106 +439,96 @@ class RoutesDispatcher(object):
params.pop('controller', None)
params.pop('action', None)
request.params.update(params)
# Get config for the root object/path.
request.config = base = cherrypy.config.copy()
curpath = ''
curpath = ""
def merge(nodeconf):
if 'tools.staticdir.dir' in nodeconf:
nodeconf['tools.staticdir.section'] = curpath or '/'
nodeconf['tools.staticdir.section'] = curpath or "/"
base.update(nodeconf)
app = request.app
root = app.root
if hasattr(root, '_cp_config'):
if hasattr(root, "_cp_config"):
merge(root._cp_config)
if '/' in app.config:
merge(app.config['/'])
if "/" in app.config:
merge(app.config["/"])
# Mix in values from app.config.
atoms = [x for x in path_info.split('/') if x]
atoms = [x for x in path_info.split("/") if x]
if atoms:
last = atoms.pop()
else:
last = None
for atom in atoms:
curpath = '/'.join((curpath, atom))
curpath = "/".join((curpath, atom))
if curpath in app.config:
merge(app.config[curpath])
handler = None
if result:
controller = result.get('controller')
controller = self.controllers.get(controller, controller)
controller = result.get('controller', None)
controller = self.controllers.get(controller)
if controller:
if isinstance(controller, classtype):
controller = controller()
# Get config from the controller.
if hasattr(controller, '_cp_config'):
if hasattr(controller, "_cp_config"):
merge(controller._cp_config)
action = result.get('action')
action = result.get('action', None)
if action is not None:
handler = getattr(controller, action, None)
# Get config from the handler
if hasattr(handler, '_cp_config'):
# Get config from the handler
if hasattr(handler, "_cp_config"):
merge(handler._cp_config)
else:
handler = controller
# Do the last path atom here so it can
# override the controller's _cp_config.
if last:
curpath = '/'.join((curpath, last))
curpath = "/".join((curpath, last))
if curpath in app.config:
merge(app.config[curpath])
return handler
def XMLRPCDispatcher(next_dispatcher=Dispatcher()):
from cherrypy.lib import xmlrpcutil
from cherrypy.lib import xmlrpc
def xmlrpc_dispatch(path_info):
path_info = xmlrpcutil.patched_path(path_info)
path_info = xmlrpc.patched_path(path_info)
return next_dispatcher(path_info)
return xmlrpc_dispatch
def VirtualHost(next_dispatcher=Dispatcher(), use_x_forwarded_host=True,
**domains):
"""
Select a different handler based on the Host header.
def VirtualHost(next_dispatcher=Dispatcher(), use_x_forwarded_host=True, **domains):
"""Select a different handler based on the Host header.
This can be useful when running multiple sites within one CP server.
It allows several domains to point to different parts of a single
website structure. For example::
website structure. For example:
http://www.domain.example -> root
http://www.domain2.example -> root/domain2/
http://www.domain2.example:443 -> root/secure
can be accomplished via the following config::
can be accomplished via the following config:
[/]
request.dispatch = cherrypy.dispatch.VirtualHost(
**{'www.domain2.example': '/domain2',
'www.domain2.example:443': '/secure',
})
next_dispatcher
The next dispatcher object in the dispatch chain.
next_dispatcher: the next dispatcher object in the dispatch chain.
The VirtualHost dispatcher adds a prefix to the URL and calls
another dispatcher. Defaults to cherrypy.dispatch.Dispatcher().
use_x_forwarded_host
If True (the default), any "X-Forwarded-Host"
use_x_forwarded_host: if True (the default), any "X-Forwarded-Host"
request header will be used instead of the "Host" header. This
is commonly added by HTTP servers (such as Apache) when proxying.
``**domains``
A dict of {host header value: virtual prefix} pairs.
**domains: a dict of {host header value: virtual prefix} pairs.
The incoming "Host" request header is looked up in this dict,
and, if a match is found, the corresponding "virtual prefix"
value will be prepended to the URL path before calling the
@@ -658,28 +536,26 @@ def VirtualHost(next_dispatcher=Dispatcher(), use_x_forwarded_host=True,
for "example.com" and "www.example.com". In addition, "Host"
headers may contain the port number.
"""
from cherrypy.lib import httputil
from cherrypy.lib import http
def vhost_dispatch(path_info):
request = cherrypy.serving.request
header = request.headers.get
header = cherrypy.request.headers.get
domain = header('Host', '')
if use_x_forwarded_host:
domain = header('X-Forwarded-Host', domain)
prefix = domains.get(domain, '')
domain = header("X-Forwarded-Host", domain)
prefix = domains.get(domain, "")
if prefix:
path_info = httputil.urljoin(prefix, path_info)
path_info = http.urljoin(prefix, path_info)
result = next_dispatcher(path_info)
# Touch up staticdir config. See
# https://github.com/cherrypy/cherrypy/issues/614.
section = request.config.get('tools.staticdir.section')
# Touch up staticdir config. See http://www.cherrypy.org/ticket/614.
section = cherrypy.request.config.get('tools.staticdir.section')
if section:
section = section[len(prefix):]
request.config['tools.staticdir.section'] = section
cherrypy.request.config['tools.staticdir.section'] = section
return result
return vhost_dispatch


@@ -1,221 +1,76 @@
"""Exception classes for CherryPy.
"""Error classes for CherryPy."""
CherryPy provides (and uses) exceptions for declaring that the HTTP response
should be a status other than the default "200 OK". You can ``raise`` them like
normal Python exceptions. You can also call them and they will raise
themselves; this means you can set an
:class:`HTTPError<cherrypy._cperror.HTTPError>`
or :class:`HTTPRedirect<cherrypy._cperror.HTTPRedirect>` as the
:attr:`request.handler<cherrypy._cprequest.Request.handler>`.
.. _redirectingpost:
Redirecting POST
================
When you GET a resource and are redirected by the server to another Location,
there's generally no problem since GET is both a "safe method" (there should
be no side-effects) and an "idempotent method" (multiple calls are no different
than a single call).
POST, however, is neither safe nor idempotent--if you
charge a credit card, you don't want to be charged twice by a redirect!
For this reason, *none* of the 3xx responses permit a user-agent (browser) to
resubmit a POST on redirection without first confirming the action with the
user:
=====  =================================  ===========
300    Multiple Choices                   Confirm with the user
301    Moved Permanently                  Confirm with the user
302    Found (Object moved temporarily)   Confirm with the user
303    See Other                          GET the new URI--no confirmation
304    Not modified                       (for conditional GET only--POST should not raise this error)
305    Use Proxy                          Confirm with the user
307    Temporary Redirect                 Confirm with the user
=====  =================================  ===========
However, browsers have historically implemented these restrictions poorly;
in particular, many browsers do not force the user to confirm 301, 302
or 307 when redirecting POST. For this reason, CherryPy defaults to 303,
which most user-agents appear to have implemented correctly. Therefore, if
you raise HTTPRedirect for a POST request, the user-agent will most likely
attempt to GET the new URI (without asking for confirmation from the user).
We realize this is confusing for developers, but it's the safest thing we
could do. You are of course free to raise ``HTTPRedirect(uri, status=302)``
or any other 3xx status if you know what you're doing, but given the
environment, we couldn't let any of those be the default.
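A minimal sketch of the resulting post/redirect/get pattern (handler and URL
names are illustrative)::

    class Checkout(object):
        def submit(self, **params):
            # ... process the POSTed form here ...
            raise cherrypy.HTTPRedirect('/receipt')   # 303 by default: the browser GETs /receipt
        submit.exposed = True

Pass an explicit status, e.g. ``HTTPRedirect('/receipt', 307)``, only if you
really need one of the other codes.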
Custom Error Handling
=====================
.. image:: /refman/cperrors.gif
Anticipated HTTP responses
--------------------------
The 'error_page' config namespace can be used to provide custom HTML output for
expected responses (like 404 Not Found). Supply a filename from which the
output will be read. The contents will be interpolated with the values
%(status)s, %(message)s, %(traceback)s, and %(version)s using plain old Python
`string formatting <http://docs.python.org/2/library/stdtypes.html#string-formatting-operations>`_.
::
_cp_config = {
'error_page.404': os.path.join(localDir, "static/index.html")
}
Beginning in version 3.1, you may also provide a function or other callable as
an error_page entry. It will be passed the same status, message, traceback and
version arguments that are interpolated into templates::
def error_page_402(status, message, traceback, version):
return "Error %s - Well, I'm very sorry but you haven't paid!" % status
cherrypy.config.update({'error_page.402': error_page_402})
Also in 3.1, in addition to the numbered error codes, you may also supply
"error_page.default" to handle all codes which do not have their own error_page
entry.
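For instance (a minimal sketch; the callable name is arbitrary)::

    def error_page_default(status, message, traceback, version):
        return 'Error %s: %s' % (status, message)

    cherrypy.config.update({'error_page.default': error_page_default})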
Unanticipated errors
--------------------
CherryPy also has a generic error handling mechanism: whenever an unanticipated
error occurs in your code, it will call
:func:`Request.error_response<cherrypy._cprequest.Request.error_response>` to
set the response status, headers, and body. By default, this is the same
output as
:class:`HTTPError(500) <cherrypy._cperror.HTTPError>`. If you want to provide
some other behavior, you generally replace "request.error_response".
Here is some sample code that shows how to display a custom error message and
send an e-mail containing the error::
from cherrypy import _cperror
def handle_error():
cherrypy.response.status = 500
cherrypy.response.body = [
"<html><body>Sorry, an error occured</body></html>"
]
sendMail('error@domain.com',
'Error in your web app',
_cperror.format_exc())
@cherrypy.config(**{'request.error_response': handle_error})
class Root:
pass
Note that you have to explicitly set
:attr:`response.body <cherrypy._cprequest.Response.body>`
and not simply return an error message as a result.
"""
import contextlib
from cgi import escape as _escape
from sys import exc_info as _exc_info
from traceback import format_exception as _format_exception
from xml.sax import saxutils
import six
from cherrypy._cpcompat import text_or_bytes, iteritems, ntob
from cherrypy._cpcompat import tonative, urljoin as _urljoin
from cherrypy.lib import httputil as _httputil
from urlparse import urljoin as _urljoin
from cherrypy.lib import http as _http
class CherryPyException(Exception):
"""A base class for CherryPy exceptions."""
pass
class TimeoutError(CherryPyException):
"""Exception raised when Response.timed_out is detected."""
pass
class InternalRedirect(CherryPyException):
"""Exception raised to switch to the handler for a different URL.
This exception will redirect processing to another path within the site
(without informing the client). Provide the new path as an argument when
raising the exception. Provide any params in the querystring for the new
URL.
Any request.params must be supplied in a query string.
"""
def __init__(self, path, query_string=''):
def __init__(self, path):
import cherrypy
self.request = cherrypy.serving.request
self.query_string = query_string
if '?' in path:
request = cherrypy.request
self.query_string = ""
if "?" in path:
# Separate any params included in the path
path, self.query_string = path.split('?', 1)
path, self.query_string = path.split("?", 1)
# Note that urljoin will "do the right thing" whether url is:
# 1. a URL relative to root (e.g. "/dummy")
# 2. a URL relative to the current path
# Note that any query string will be discarded.
path = _urljoin(self.request.path_info, path)
path = _urljoin(request.path_info, path)
# Set a 'path' member attribute so that code which traps this
# error can have access to it.
self.path = path
CherryPyException.__init__(self, path, self.query_string)
class HTTPRedirect(CherryPyException):
"""Exception raised when the request should be redirected.
This exception will force a HTTP redirect to the URL or URL's you give it.
The new URL must be passed as the first argument to the Exception,
e.g., HTTPRedirect(newUrl). Multiple URLs are allowed in a list.
If a URL is absolute, it will be used as-is. If it is relative, it is
assumed to be relative to the current cherrypy.request.path_info.
If one of the provided URLs is a unicode object, it will be encoded
using the default encoding or the one passed as a parameter.
There are multiple types of redirect, from which you can select via the
``status`` argument. If you do not provide a ``status`` arg, it defaults to
303 (or 302 if responding with HTTP/1.0).
Examples::
raise cherrypy.HTTPRedirect("")
raise cherrypy.HTTPRedirect("/abs/path", 307)
raise cherrypy.HTTPRedirect(["path1", "path2?a=1&b=2"], 301)
See :ref:`redirectingpost` for additional caveats.
e.g., HTTPRedirect(newUrl). Multiple URLs are allowed. If a URL is
absolute, it will be used as-is. If it is relative, it is assumed
to be relative to the current cherrypy.request.path_info.
"""
status = None
"""The integer HTTP status code to emit."""
urls = None
"""The list of URL's to emit."""
encoding = 'utf-8'
"""The encoding when passed urls are not native strings"""
def __init__(self, urls, status=None, encoding=None):
def __init__(self, urls, status=None):
import cherrypy
request = cherrypy.serving.request
if isinstance(urls, text_or_bytes):
request = cherrypy.request
if isinstance(urls, basestring):
urls = [urls]
self.urls = [tonative(url, encoding or self.encoding) for url in urls]
abs_urls = []
for url in urls:
# Note that urljoin will "do the right thing" whether url is:
# 1. a complete URL with host (e.g. "http://www.example.com/test")
# 2. a URL relative to root (e.g. "/dummy")
# 3. a URL relative to the current path
# Note that any query string in cherrypy.request is discarded.
url = _urljoin(cherrypy.url(), url)
abs_urls.append(url)
self.urls = abs_urls
# RFC 2616 indicates a 301 response code fits our goal; however,
# browser support for 301 is quite messy. Do 302/303 instead. See
# http://www.alanflavell.org.uk/www/post-redirect.html
@@ -227,41 +82,37 @@ class HTTPRedirect(CherryPyException):
else:
status = int(status)
if status < 300 or status > 399:
raise ValueError('status must be between 300 and 399.')
raise ValueError("status must be between 300 and 399.")
self.status = status
CherryPyException.__init__(self, self.urls, status)
CherryPyException.__init__(self, abs_urls, status)
def set_response(self):
"""Modify cherrypy.response status, headers, and body to represent
self.
"""Modify cherrypy.response status, headers, and body to represent self.
CherryPy uses this internally, but you can also use it to create an
HTTPRedirect object and set its output without *raising* the exception.
"""
import cherrypy
response = cherrypy.serving.response
response = cherrypy.response
response.status = status = self.status
if status in (300, 301, 302, 303, 307):
response.headers['Content-Type'] = 'text/html;charset=utf-8'
response.headers['Content-Type'] = "text/html"
# "The ... URI SHOULD be given by the Location field
# in the response."
response.headers['Location'] = self.urls[0]
# "Unless the request method was HEAD, the entity of the response
# SHOULD contain a short hypertext note with a hyperlink to the
# new URI(s)."
msg = {
300: 'This resource can be found at ',
301: 'This resource has permanently moved to ',
302: 'This resource resides temporarily at ',
303: 'This resource can be found at ',
307: 'This resource has moved temporarily to ',
}[status]
msg += '<a href=%s>%s</a>.'
msgs = [msg % (saxutils.quoteattr(u), u) for u in self.urls]
response.body = ntob('<br />\n'.join(msgs), 'utf-8')
msg = {300: "This resource can be found at <a href='%s'>%s</a>.",
301: "This resource has permanently moved to <a href='%s'>%s</a>.",
302: "This resource resides temporarily at <a href='%s'>%s</a>.",
303: "This resource can be found at <a href='%s'>%s</a>.",
307: "This resource has moved temporarily to <a href='%s'>%s</a>.",
}[status]
response.body = "<br />\n".join([msg % (u, u) for u in self.urls])
# Previous code may have set C-L, so we have to reset it
# (allow finalize to set it).
response.headers.pop('Content-Length', None)
@@ -270,7 +121,7 @@ class HTTPRedirect(CherryPyException):
# "The response MUST include the following header fields:
# Date, unless its omission is required by section 14.18.1"
# The "Date" header should have been set in Response.__init__
# "...the response SHOULD NOT include other entity-headers."
for key in ('Allow', 'Content-Encoding', 'Content-Language',
'Content-Length', 'Content-Location', 'Content-MD5',
@@ -278,7 +129,7 @@ class HTTPRedirect(CherryPyException):
'Last-Modified'):
if key in response.headers:
del response.headers[key]
# "The 304 response MUST NOT contain a message-body."
response.body = None
# Previous code may have set C-L, so we have to reset it.
@@ -286,13 +137,13 @@ class HTTPRedirect(CherryPyException):
elif status == 305:
# Use Proxy.
# self.urls[0] should be the URI of the proxy.
response.headers['Location'] = ntob(self.urls[0], 'utf-8')
response.headers['Location'] = self.urls[0]
response.body = None
# Previous code may have set C-L, so we have to reset it.
response.headers.pop('Content-Length', None)
else:
raise ValueError('The %s status code is unknown.' % status)
raise ValueError("The %s status code is unknown." % status)
def __call__(self):
"""Use this exception as a request.handler (raise self)."""
raise self
@@ -301,18 +152,18 @@ class HTTPRedirect(CherryPyException):
def clean_headers(status):
"""Remove any headers which should not apply to an error response."""
import cherrypy
response = cherrypy.serving.response
response = cherrypy.response
# Remove headers which applied to the original content,
# but do not apply to the error page.
respheaders = response.headers
for key in ['Accept-Ranges', 'Age', 'ETag', 'Location', 'Retry-After',
'Vary', 'Content-Encoding', 'Content-Length', 'Expires',
'Content-Location', 'Content-MD5', 'Last-Modified']:
if key in respheaders:
for key in ["Accept-Ranges", "Age", "ETag", "Location", "Retry-After",
"Vary", "Content-Encoding", "Content-Length", "Expires",
"Content-Location", "Content-MD5", "Last-Modified"]:
if respheaders.has_key(key):
del respheaders[key]
if status != 416:
# A server sending a response with status code 416 (Requested
# range not satisfiable) SHOULD include a Content-Range field
@@ -320,118 +171,80 @@ def clean_headers(status):
# specifies the current length of the selected resource.
# A response with status code 206 (Partial Content) MUST NOT
# include a Content-Range field with a byte-range- resp-spec of "*".
if 'Content-Range' in respheaders:
del respheaders['Content-Range']
if respheaders.has_key("Content-Range"):
del respheaders["Content-Range"]
class HTTPError(CherryPyException):
"""Exception used to return an HTTP error code (4xx-5xx) to the client.
This exception can be used to automatically send a response using a
http status code, with an appropriate error page. It takes an optional
``status`` argument (which must be between 400 and 599); it defaults to 500
("Internal Server Error"). It also takes an optional ``message`` argument,
which will be returned in the response body. See
`RFC2616 <http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4>`_
for a complete list of available error codes and when to use them.
Examples::
raise cherrypy.HTTPError(403)
raise cherrypy.HTTPError(
"403 Forbidden", "You are not allowed to access this resource.")
""" Exception used to return an HTTP error code (4xx-5xx) to the client.
This exception will automatically set the response status and body.
A custom message (a long description to display in the browser)
can be provided in place of the default.
"""
status = None
"""The HTTP status code. May be of type int or str (with a Reason-Phrase).
"""
code = None
"""The integer HTTP status code."""
reason = None
"""The HTTP Reason-Phrase string."""
def __init__(self, status=500, message=None):
self.status = status
try:
self.code, self.reason, defaultmsg = _httputil.valid_status(status)
except ValueError:
raise self.__class__(500, _exc_info()[1].args[0])
self.code, self.reason, defaultmsg = _http.valid_status(status)
except ValueError, x:
raise cherrypy.HTTPError(500, x.args[0])
if self.code < 400 or self.code > 599:
raise ValueError('status must be between 400 and 599.')
raise ValueError("status must be between 400 and 599.")
# See http://www.python.org/dev/peps/pep-0352/
# self.message = message
self._message = message or defaultmsg
CherryPyException.__init__(self, status, message)
def set_response(self):
"""Modify cherrypy.response status, headers, and body to represent
self.
"""Modify cherrypy.response status, headers, and body to represent self.
CherryPy uses this internally, but you can also use it to create an
HTTPError object and set its output without *raising* the exception.
"""
import cherrypy
response = cherrypy.serving.response
response = cherrypy.response
clean_headers(self.code)
# In all cases, finalize will be called after this method,
# so don't bother cleaning up response values here.
response.status = self.status
tb = None
if cherrypy.serving.request.show_tracebacks:
if cherrypy.request.show_tracebacks:
tb = format_exc()
response.headers.pop('Content-Length', None)
response.headers['Content-Type'] = "text/html"
content = self.get_error_page(self.status, traceback=tb,
message=self._message)
response.body = content
response.headers['Content-Length'] = len(content)
_be_ie_unfriendly(self.code)
def get_error_page(self, *args, **kwargs):
return get_error_page(*args, **kwargs)
def __call__(self):
"""Use this exception as a request.handler (raise self)."""
raise self
@classmethod
@contextlib.contextmanager
def handle(cls, exception, status=500, message=''):
"""Translate exception into an HTTPError."""
try:
yield
except exception as exc:
raise cls(status, message or str(exc))
class NotFound(HTTPError):
"""Exception raised when a URL could not be mapped to any handler (404).
This is equivalent to raising
:class:`HTTPError("404 Not Found") <cherrypy._cperror.HTTPError>`.
"""
"""Exception raised when a URL could not be mapped to any handler (404)."""
def __init__(self, path=None):
if path is None:
import cherrypy
request = cherrypy.serving.request
path = request.script_name + request.path_info
path = cherrypy.request.script_name + cherrypy.request.path_info
self.args = (path,)
HTTPError.__init__(self, 404, "The path '%s' was not found." % path)
HTTPError.__init__(self, 404, "The path %r was not found." % path)
_HTTPErrorTemplate = '''<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Transitional//EN"
_HTTPErrorTemplate = '''<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
@@ -454,101 +267,73 @@ _HTTPErrorTemplate = '''<!DOCTYPE html PUBLIC
<p>%(message)s</p>
<pre id="traceback">%(traceback)s</pre>
<div id="powered_by">
<span>
Powered by <a href="http://www.cherrypy.org">CherryPy %(version)s</a>
</span>
<span>Powered by <a href="http://www.cherrypy.org">CherryPy %(version)s</a></span>
</div>
</body>
</html>
'''
def get_error_page(status, **kwargs):
"""Return an HTML page, containing a pretty error response.
status should be an int or a str.
kwargs will be interpolated into the page template.
"""
import cherrypy
try:
code, reason, message = _httputil.valid_status(status)
except ValueError:
raise cherrypy.HTTPError(500, _exc_info()[1].args[0])
code, reason, message = _http.valid_status(status)
except ValueError, x:
raise cherrypy.HTTPError(500, x.args[0])
# We can't use setdefault here, because some
# callers send None for kwarg values.
if kwargs.get('status') is None:
kwargs['status'] = '%s %s' % (code, reason)
kwargs['status'] = "%s %s" % (code, reason)
if kwargs.get('message') is None:
kwargs['message'] = message
if kwargs.get('traceback') is None:
kwargs['traceback'] = ''
if kwargs.get('version') is None:
kwargs['version'] = cherrypy.__version__
for k, v in iteritems(kwargs):
for k, v in kwargs.iteritems():
if v is None:
kwargs[k] = ''
kwargs[k] = ""
else:
kwargs[k] = _escape(kwargs[k])
# Use a custom template or callable for the error page?
pages = cherrypy.serving.request.error_page
pages = cherrypy.request.error_page
error_page = pages.get(code) or pages.get('default')
# Default template, can be overridden below.
template = _HTTPErrorTemplate
if error_page:
try:
if hasattr(error_page, '__call__'):
# The caller function may be setting headers manually,
# so we delegate to it completely. We may be returning
# an iterator as well as a string here.
#
# We *must* make sure any content is not unicode.
result = error_page(**kwargs)
if cherrypy.lib.is_iterator(result):
from cherrypy.lib.encoding import UTF8StreamEncoder
return UTF8StreamEncoder(result)
elif isinstance(result, six.text_type):
return result.encode('utf-8')
else:
if not isinstance(result, bytes):
raise ValueError('error page function did not '
'return a bytestring, six.text_type or an '
'iterator - returned object of type %s.'
% (type(result).__name__))
return result
if callable(error_page):
return error_page(**kwargs)
else:
# Load the template from this path.
template = tonative(open(error_page, 'rb').read())
return file(error_page, 'rb').read() % kwargs
except:
e = _format_exception(*_exc_info())[-1]
m = kwargs['message']
if m:
m += '<br />'
m += 'In addition, the custom error page failed:\n<br />%s' % e
m += "<br />"
m += "In addition, the custom error page failed:\n<br />%s" % e
kwargs['message'] = m
response = cherrypy.serving.response
response.headers['Content-Type'] = 'text/html;charset=utf-8'
result = template % kwargs
return result.encode('utf-8')
return _HTTPErrorTemplate % kwargs
_ie_friendly_error_sizes = {
400: 512, 403: 256, 404: 512, 405: 256,
406: 512, 408: 512, 409: 512, 410: 256,
500: 512, 501: 512, 505: 512,
}
}
def _be_ie_unfriendly(status):
import cherrypy
response = cherrypy.serving.response
response = cherrypy.response
# For some statuses, Internet Explorer 5+ shows "friendly error
# messages" instead of our response.body if the body is smaller
# than a given size. Fix this by returning a body over that size
@@ -564,48 +349,44 @@ def _be_ie_unfriendly(status):
if l and l < s:
# IN ADDITION: the response must be written to IE
# in one chunk or it will still get replaced! Bah.
content = content + (ntob(' ') * (s - l))
content = content + (" " * (s - l))
response.body = content
response.headers['Content-Length'] = str(len(content))
response.headers['Content-Length'] = len(content)
def format_exc(exc=None):
"""Return exc (or sys.exc_info if None), formatted."""
try:
if exc is None:
exc = _exc_info()
if exc == (None, None, None):
return ''
import traceback
return ''.join(traceback.format_exception(*exc))
finally:
del exc
if exc is None:
exc = _exc_info()
if exc == (None, None, None):
return ""
import traceback
return "".join(traceback.format_exception(*exc))
def bare_error(extrabody=None):
"""Produce status, headers, body for a critical error.
Returns a triple without calling any other questionable functions,
so it should be as error-free as possible. Call it from an HTTP server
if you get errors outside of the request.
If extrabody is None, a friendly but rather unhelpful error message
is set in the body. If extrabody is a string, it will be appended
as-is to the body.
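A minimal sketch of calling it from a server's own error handling (the
message is illustrative; depending on the CherryPy version the values are
str or bytes)::

    status, headers, body = bare_error('worker thread crashed')
    # status  -> '500 Internal Server Error'
    # headers -> [('Content-Type', 'text/plain'), ('Content-Length', '...')]
    # body    -> a one-element list containing the message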
"""
# The whole point of this function is to be a last line-of-defense
# in handling errors. That is, it must not raise any errors itself;
# it cannot be allowed to fail. Therefore, don't add to it!
# In particular, don't call any other CP functions.
body = ntob('Unrecoverable error in the server.')
body = "Unrecoverable error in the server."
if extrabody is not None:
if not isinstance(extrabody, bytes):
extrabody = extrabody.encode('utf-8')
body += ntob('\n') + extrabody
return (ntob('500 Internal Server Error'),
[(ntob('Content-Type'), ntob('text/plain')),
(ntob('Content-Length'), ntob(str(len(body)), 'ISO-8859-1'))],
body += "\n" + extrabody
return ("500 Internal Server Error",
[('Content-Type', 'text/plain'),
('Content-Length', str(len(body)))],
[body])


@@ -1,197 +1,39 @@
"""
Simple config
=============
Although CherryPy uses the :mod:`Python logging module <logging>`, it does so
behind the scenes so that simple logging is simple, but complicated logging
is still possible. "Simple" logging means that you can log to the screen
(i.e. console/stdout) or to a file, and that you can easily have separate
error and access log files.
Here are the simplified logging settings. You use these by adding lines to
your config file or dict. You should set these at either the global level or
per application (see next), but generally not both.
* ``log.screen``: Set this to True to have both "error" and "access" messages
printed to stdout.
* ``log.access_file``: Set this to an absolute filename where you want
"access" messages written.
* ``log.error_file``: Set this to an absolute filename where you want "error"
messages written.
Many events are automatically logged; to log your own application events, call
:func:`cherrypy.log`.
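For example (a minimal sketch; the file paths are placeholders)::

    cherrypy.config.update({
        'log.screen': False,
        'log.access_file': '/path/to/access.log',
        'log.error_file': '/path/to/error.log',
    })
    cherrypy.log('Application configured')   # written to the error log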
Architecture
============
Separate scopes
---------------
CherryPy provides log managers at both the global and application layers.
This means you can have one set of logging rules for your entire site,
and another set of rules specific to each application. The global log
manager is found at :func:`cherrypy.log`, and the log manager for each
application is found at :attr:`app.log<cherrypy._cptree.Application.log>`.
If you're inside a request, the latter is reachable from
``cherrypy.request.app.log``; if you're outside a request, you'll have to
obtain a reference to the ``app``: either the return value of
:func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used
:func:`quickstart()<cherrypy.quickstart>` instead, via
``cherrypy.tree.apps['/']``.
By default, the global logs are named "cherrypy.error" and "cherrypy.access",
and the application logs are named "cherrypy.error.2378745" and
"cherrypy.access.2378745" (the number is the id of the Application object).
This means that the application logs "bubble up" to the site logs, so if your
application has no log handlers, the site-level handlers will still log the
messages.
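For example (a minimal sketch; ``Root`` is an arbitrary application class)::

    app = cherrypy.tree.mount(Root(), '/')
    app.log('application-level message')   # goes to cherrypy.error.<id(app)>
    cherrypy.log('site-level message')      # goes to cherrypy.error
    # after quickstart(), the same manager is cherrypy.tree.apps['/'].log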
Errors vs. Access
-----------------
Each log manager handles both "access" messages (one per HTTP request) and
"error" messages (everything else). Note that the "error" log is not just for
errors! The format of access messages is highly formalized, but the error log
isn't--it receives messages from a variety of sources (including full error
tracebacks, if enabled).
If you are logging the access log and error log to the same source, then there
is a possibility that a specially crafted error message may replicate an access
log message as described in CWE-117. In this case it is the application
developer's responsibility to manually escape data before using CherryPy's log()
functionality, or they may create an application that is vulnerable to CWE-117.
This can be achieved with a custom handler that escapes any special
characters, attached as described below.
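One hedged sketch of such escaping, using a logging.Filter rather than a full
custom handler (the filter name is made up; attach it the same way the
handlers below are attached)::

    import logging

    class EscapeCRLFFilter(logging.Filter):
        def filter(self, record):
            # Strip CR/LF so a crafted message cannot forge extra log lines.
            record.msg = str(record.msg).replace('\r', '\\r').replace('\n', '\\n')
            return True

    app.log.error_log.addFilter(EscapeCRLFFilter())
    app.log.access_log.addFilter(EscapeCRLFFilter())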
Custom Handlers
===============
The simple settings above work by manipulating Python's standard :mod:`logging`
module. So when you need something more complex, the full power of the standard
module is yours to exploit. You can borrow or create custom handlers, formats,
filters, and much more. Here's an example that skips the standard FileHandler
and uses a RotatingFileHandler instead:
::
#python
log = app.log
# Remove the default FileHandlers if present.
log.error_file = ""
log.access_file = ""
maxBytes = getattr(log, "rot_maxBytes", 10000000)
backupCount = getattr(log, "rot_backupCount", 1000)
# Make a new RotatingFileHandler for the error log.
fname = getattr(log, "rot_error_file", "error.log")
h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
h.setLevel(DEBUG)
h.setFormatter(_cplogging.logfmt)
log.error_log.addHandler(h)
# Make a new RotatingFileHandler for the access log.
fname = getattr(log, "rot_access_file", "access.log")
h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
h.setLevel(DEBUG)
h.setFormatter(_cplogging.logfmt)
log.access_log.addHandler(h)
The ``rot_*`` attributes are pulled straight from the application log object.
Since "log.*" config entries simply set attributes on the log object, you can
add custom attributes to your heart's content. Note that these handlers are
used ''instead'' of the default, simple handlers outlined above (so don't set
the "log.error_file" config entry, for example).
"""
"""CherryPy logging."""
import datetime
import logging
# Silence the no-handlers "warning" (stderr write!) in stdlib logging
logging.Logger.manager.emittedNoHandlerWarning = 1
logfmt = logging.Formatter("%(message)s")
import os
import rfc822
import sys
import six
import cherrypy
from cherrypy import _cperror
from cherrypy._cpcompat import ntob
# Silence the no-handlers "warning" (stderr write!) in stdlib logging
logging.Logger.manager.emittedNoHandlerWarning = 1
logfmt = logging.Formatter('%(message)s')
class NullHandler(logging.Handler):
"""A no-op logging handler to silence the logging.lastResort handler."""
def handle(self, record):
pass
def emit(self, record):
pass
def createLock(self):
self.lock = None
class LogManager(object):
"""An object to assist both simple and advanced logging.
``cherrypy.log`` is an instance of this class.
"""
appid = None
"""The id() of the Application object which owns this log manager. If this
is a global log manager, appid is None."""
error_log = None
"""The actual :class:`logging.Logger` instance for error messages."""
access_log = None
"""The actual :class:`logging.Logger` instance for access messages."""
access_log_format = (
'{h} {l} {u} {t} "{r}" {s} {b} "{f}" "{a}"'
if six.PY3 else
access_log_format = \
'%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
)
logger_root = None
"""The "top-level" logger name.
This string will be used as the first segment in the Logger names.
The default is "cherrypy", for example, in which case the Logger names
will be of the form::
cherrypy.error.<appid>
cherrypy.access.<appid>
"""
def __init__(self, appid=None, logger_root='cherrypy'):
def __init__(self, appid=None, logger_root="cherrypy"):
self.logger_root = logger_root
self.appid = appid
if appid is None:
self.error_log = logging.getLogger('%s.error' % logger_root)
self.access_log = logging.getLogger('%s.access' % logger_root)
self.error_log = logging.getLogger("%s.error" % logger_root)
self.access_log = logging.getLogger("%s.access" % logger_root)
else:
self.error_log = logging.getLogger(
'%s.error.%s' % (logger_root, appid))
self.access_log = logging.getLogger(
'%s.access.%s' % (logger_root, appid))
self.error_log.setLevel(logging.INFO)
self.error_log = logging.getLogger("%s.error.%s" % (logger_root, appid))
self.access_log = logging.getLogger("%s.access.%s" % (logger_root, appid))
self.error_log.setLevel(logging.DEBUG)
self.access_log.setLevel(logging.INFO)
# Silence the no-handlers "warning" (stderr write!) in stdlib logging
self.error_log.addHandler(NullHandler())
self.access_log.addHandler(NullHandler())
cherrypy.engine.subscribe('graceful', self.reopen_files)
def reopen_files(self):
"""Close and reopen all file handlers."""
for log in (self.error_log, self.access_log):
@@ -201,38 +43,28 @@ class LogManager(object):
h.stream.close()
h.stream = open(h.baseFilename, h.mode)
h.release()
def error(self, msg='', context='', severity=logging.INFO,
traceback=False):
"""Write the given ``msg`` to the error log.
def error(self, msg='', context='', severity=logging.INFO, traceback=False):
"""Write to the error log.
This is not just for errors! Applications may call this at any time
to log application-specific information.
If ``traceback`` is True, the traceback of the current exception
(if any) will be appended to ``msg``.
"""
exc_info = None
if traceback:
exc_info = _cperror._exc_info()
self.error_log.log(severity, ' '.join((self.time(), context, msg)), exc_info=exc_info)
msg += _cperror.format_exc()
self.error_log.log(severity, ' '.join((self.time(), context, msg)))
def __call__(self, *args, **kwargs):
"""An alias for ``error``."""
"""Write to the error log.
This is not just for errors! Applications may call this at any time
to log application-specific information.
"""
return self.error(*args, **kwargs)
def access(self):
"""Write to the access log (in Apache/NCSA Combined Log format).
See the
`apache documentation <http://httpd.apache.org/docs/current/logs.html#combined>`_
for format details.
CherryPy calls this automatically for you. Note there are no arguments;
it collects the data itself from
:class:`cherrypy.request<cherrypy._cprequest.Request>`.
Like Apache started doing in 2.0.46, non-printable and other special
characters in %r (and we expand that to all parts) are escaped using
\\xhh sequences, where hh stands for the hexadecimal representation
@@ -240,122 +72,88 @@ class LogManager(object):
escaped by prepending a backslash, and all whitespace characters,
which are written in their C-style notation (\\n, \\t, etc).
"""
request = cherrypy.serving.request
request = cherrypy.request
remote = request.remote
response = cherrypy.serving.response
response = cherrypy.response
outheaders = response.headers
inheaders = request.headers
if response.output_status is None:
status = '-'
else:
status = response.output_status.split(ntob(' '), 1)[0]
if six.PY3:
status = status.decode('ISO-8859-1')
atoms = {'h': remote.name or remote.ip,
'l': '-',
'u': getattr(request, 'login', None) or '-',
'u': getattr(request, "login", None) or "-",
't': self.time(),
'r': request.request_line,
's': status,
'b': dict.get(outheaders, 'Content-Length', '') or '-',
'f': dict.get(inheaders, 'Referer', ''),
'a': dict.get(inheaders, 'User-Agent', ''),
'o': dict.get(inheaders, 'Host', '-'),
's': response.status.split(" ", 1)[0],
'b': outheaders.get('Content-Length', '') or "-",
'f': inheaders.get('Referer', ''),
'a': inheaders.get('User-Agent', ''),
}
if six.PY3:
for k, v in atoms.items():
if not isinstance(v, str):
v = str(v)
v = v.replace('"', '\\"').encode('utf8')
# Fortunately, repr(str) escapes unprintable chars, \n, \t, etc
# and backslash for us. All we have to do is strip the quotes.
v = repr(v)[2:-1]
# in python 3.0 the repr of bytes (as returned by encode)
# uses double \'s. But then the logger escapes them yet, again
# resulting in quadruple slashes. Remove the extra one here.
v = v.replace('\\\\', '\\')
# Escape double-quote.
atoms[k] = v
try:
self.access_log.log(
logging.INFO, self.access_log_format.format(**atoms))
except:
self(traceback=True)
else:
for k, v in atoms.items():
if isinstance(v, six.text_type):
v = v.encode('utf8')
elif not isinstance(v, str):
v = str(v)
# Fortunately, repr(str) escapes unprintable chars, \n, \t, etc
# and backslash for us. All we have to do is strip the quotes.
v = repr(v)[1:-1]
# Escape double-quote.
atoms[k] = v.replace('"', '\\"')
try:
self.access_log.log(
logging.INFO, self.access_log_format % atoms)
except:
self(traceback=True)
for k, v in atoms.items():
if isinstance(v, unicode):
v = v.encode('utf8')
elif not isinstance(v, str):
v = str(v)
# Fortunately, repr(str) escapes unprintable chars, \n, \t, etc
# and backslash for us. All we have to do is strip the quotes.
v = repr(v)[1:-1]
# Escape double-quote.
atoms[k] = v.replace('"', '\\"')
try:
self.access_log.log(logging.INFO, self.access_log_format % atoms)
except:
self(traceback=True)
def time(self):
"""Return now() in Apache Common Log Format (no timezone)."""
now = datetime.datetime.now()
monthnames = ['jan', 'feb', 'mar', 'apr', 'may', 'jun',
'jul', 'aug', 'sep', 'oct', 'nov', 'dec']
month = monthnames[now.month - 1].capitalize()
month = rfc822._monthnames[now.month - 1].capitalize()
return ('[%02d/%s/%04d:%02d:%02d:%02d]' %
(now.day, month, now.year, now.hour, now.minute, now.second))
def _get_builtin_handler(self, log, key):
for h in log.handlers:
if getattr(h, '_cpbuiltin', None) == key:
if getattr(h, "_cpbuiltin", None) == key:
return h
# ------------------------- Screen handlers ------------------------- #
def _set_screen_handler(self, log, enable, stream=None):
h = self._get_builtin_handler(log, 'screen')
h = self._get_builtin_handler(log, "screen")
if enable:
if not h:
if stream is None:
stream = sys.stderr
stream=sys.stderr
h = logging.StreamHandler(stream)
h.setFormatter(logfmt)
h._cpbuiltin = 'screen'
h._cpbuiltin = "screen"
log.addHandler(h)
elif h:
log.handlers.remove(h)
def _get_screen(self):
h = self._get_builtin_handler
has_h = h(self.error_log, 'screen') or h(self.access_log, 'screen')
has_h = h(self.error_log, "screen") or h(self.access_log, "screen")
return bool(has_h)
def _set_screen(self, newvalue):
self._set_screen_handler(self.error_log, newvalue, stream=sys.stderr)
self._set_screen_handler(self.access_log, newvalue, stream=sys.stdout)
screen = property(_get_screen, _set_screen,
doc="""Turn stderr/stdout logging on or off.
If you set this to True, it'll add the appropriate StreamHandler for
you. If you set it to False, it will remove the handler.
""")
doc="If True, error and access will print to stderr.")
# -------------------------- File handlers -------------------------- #
def _add_builtin_file_handler(self, log, fname):
h = logging.FileHandler(fname)
h.setFormatter(logfmt)
h._cpbuiltin = 'file'
h._cpbuiltin = "file"
log.addHandler(h)
def _set_file_handler(self, log, filename):
h = self._get_builtin_handler(log, 'file')
h = self._get_builtin_handler(log, "file")
if filename:
if h:
if h.baseFilename != os.path.abspath(filename):
@@ -368,97 +166,80 @@ class LogManager(object):
if h:
h.close()
log.handlers.remove(h)
def _get_error_file(self):
h = self._get_builtin_handler(self.error_log, 'file')
h = self._get_builtin_handler(self.error_log, "file")
if h:
return h.baseFilename
return ''
def _set_error_file(self, newvalue):
self._set_file_handler(self.error_log, newvalue)
error_file = property(_get_error_file, _set_error_file,
doc="""The filename for self.error_log.
If you set this to a string, it'll add the appropriate FileHandler for
you. If you set it to ``None`` or ``''``, it will remove the handler.
""")
doc="The filename for self.error_log.")
def _get_access_file(self):
h = self._get_builtin_handler(self.access_log, 'file')
h = self._get_builtin_handler(self.access_log, "file")
if h:
return h.baseFilename
return ''
def _set_access_file(self, newvalue):
self._set_file_handler(self.access_log, newvalue)
access_file = property(_get_access_file, _set_access_file,
doc="""The filename for self.access_log.
If you set this to a string, it'll add the appropriate FileHandler for
you. If you set it to ``None`` or ``''``, it will remove the handler.
""")
doc="The filename for self.access_log.")
# ------------------------- WSGI handlers ------------------------- #
def _set_wsgi_handler(self, log, enable):
h = self._get_builtin_handler(log, 'wsgi')
h = self._get_builtin_handler(log, "wsgi")
if enable:
if not h:
h = WSGIErrorHandler()
h.setFormatter(logfmt)
h._cpbuiltin = 'wsgi'
h._cpbuiltin = "wsgi"
log.addHandler(h)
elif h:
log.handlers.remove(h)
def _get_wsgi(self):
return bool(self._get_builtin_handler(self.error_log, 'wsgi'))
return bool(self._get_builtin_handler(self.error_log, "wsgi"))
def _set_wsgi(self, newvalue):
self._set_wsgi_handler(self.error_log, newvalue)
wsgi = property(_get_wsgi, _set_wsgi,
doc="""Write errors to wsgi.errors.
If you set this to True, it'll add the appropriate
:class:`WSGIErrorHandler<cherrypy._cplogging.WSGIErrorHandler>` for you
(which writes errors to ``wsgi.errors``).
If you set it to False, it will remove the handler.
""")
doc="If True, error messages will be sent to wsgi.errors.")
class WSGIErrorHandler(logging.Handler):
"A handler class which writes logging records to environ['wsgi.errors']."
def flush(self):
"""Flushes the stream."""
try:
stream = cherrypy.serving.request.wsgi_environ.get('wsgi.errors')
stream = cherrypy.request.wsgi_environ.get('wsgi.errors')
except (AttributeError, KeyError):
pass
else:
stream.flush()
def emit(self, record):
"""Emit a record."""
try:
stream = cherrypy.serving.request.wsgi_environ.get('wsgi.errors')
stream = cherrypy.request.wsgi_environ.get('wsgi.errors')
except (AttributeError, KeyError):
pass
else:
try:
msg = self.format(record)
fs = '%s\n'
fs = "%s\n"
import types
# if no unicode support...
if not hasattr(types, 'UnicodeType'):
if not hasattr(types, "UnicodeType"): #if no unicode support...
stream.write(fs % msg)
else:
try:
stream.write(fs % msg)
except UnicodeError:
stream.write(fs % msg.encode('UTF-8'))
stream.write(fs % msg.encode("UTF-8"))
self.flush()
except:
self.handleError(record)


@@ -35,12 +35,12 @@ Listen 8080
LoadModule python_module /usr/lib/apache2/modules/mod_python.so
<Location "/">
PythonPath "sys.path+['/path/to/my/application']"
SetHandler python-program
PythonHandler cherrypy._cpmodpy::handler
PythonOption cherrypy.setup myapp::setup_server
PythonDebug On
</Location>
PythonPath "sys.path+['/path/to/my/application']"
SetHandler python-program
PythonHandler cherrypy._cpmodpy::handler
PythonOption cherrypy.setup myapp::setup_server
PythonDebug On
</Location>
# End
The actual path to your mod_python.so is dependent on your
@@ -55,51 +55,47 @@ resides in the global site-package this won't be needed.
Then restart apache2 and access http://127.0.0.1:8080
"""
import io
import logging
import os
import re
import sys
import StringIO
import cherrypy
from cherrypy._cpcompat import copyitems, ntob
from cherrypy._cperror import format_exc, bare_error
from cherrypy.lib import httputil
from cherrypy.lib import http
# ------------------------------ Request-handling
def setup(req):
from mod_python import apache
# Run any setup functions defined by a "PythonOption cherrypy.setup"
# directive.
# Run any setup function defined by a "PythonOption cherrypy.setup" directive.
options = req.get_options()
if 'cherrypy.setup' in options:
for function in options['cherrypy.setup'].split():
atoms = function.split('::', 1)
if len(atoms) == 1:
mod = __import__(atoms[0], globals(), locals())
else:
modname, fname = atoms
mod = __import__(modname, globals(), locals(), [fname])
func = getattr(mod, fname)
func()
atoms = options['cherrypy.setup'].split('::', 1)
if len(atoms) == 1:
mod = __import__(atoms[0], globals(), locals())
else:
modname, fname = atoms
mod = __import__(modname, globals(), locals(), [fname])
func = getattr(mod, fname)
func()
cherrypy.config.update({'log.screen': False,
'tools.ignore_headers.on': True,
'tools.ignore_headers.headers': ['Range'],
"tools.ignore_headers.on": True,
"tools.ignore_headers.headers": ['Range'],
})
engine = cherrypy.engine
if hasattr(engine, 'signal_handler'):
if hasattr(engine, "signal_handler"):
engine.signal_handler.unsubscribe()
if hasattr(engine, 'console_control_handler'):
if hasattr(engine, "console_control_handler"):
engine.console_control_handler.unsubscribe()
engine.autoreload.unsubscribe()
cherrypy.server.unsubscribe()
def _log(msg, level):
newlevel = apache.APLOG_ERR
if logging.DEBUG >= level:
@@ -109,13 +105,13 @@ def setup(req):
elif logging.WARNING >= level:
newlevel = apache.APLOG_WARNING
# On Windows, req.server is required or the msg will vanish. See
# http://www.modpython.org/pipermail/mod_python/2003-October/014291.html
# http://www.modpython.org/pipermail/mod_python/2003-October/014291.html.
# Also, "When server is not specified...LogLevel does not apply..."
apache.log_error(msg, newlevel, req.server)
engine.subscribe('log', _log)
engine.start()
def cherrypy_cleanup(data):
engine.exit()
try:
@@ -127,7 +123,6 @@ def setup(req):
class _ReadOnlyRequest:
expose = ('read', 'readline', 'readlines')
def __init__(self, req):
for method in self.expose:
self.__dict__[method] = getattr(req, method)
@@ -136,8 +131,6 @@ class _ReadOnlyRequest:
recursive = False
_isSetUp = False
def handler(req):
from mod_python import apache
try:
@@ -145,18 +138,16 @@ def handler(req):
if not _isSetUp:
setup(req)
_isSetUp = True
# Obtain a Request object from CherryPy
local = req.connection.local_addr
local = httputil.Host(
local[0], local[1], req.connection.local_host or '')
local = http.Host(local[0], local[1], req.connection.local_host or "")
remote = req.connection.remote_addr
remote = httputil.Host(
remote[0], remote[1], req.connection.remote_host or '')
remote = http.Host(remote[0], remote[1], req.connection.remote_host or "")
scheme = req.parsed_uri[0] or 'http'
req.get_basic_auth_pw()
try:
# apache.mpm_query only became available in mod_python 3.1
q = apache.mpm_query
@@ -165,78 +156,74 @@ def handler(req):
except AttributeError:
bad_value = ("You must provide a PythonOption '%s', "
"either 'on' or 'off', when running a version "
'of mod_python < 3.1')
"of mod_python < 3.1")
threaded = options.get('multithread', '').lower()
if threaded == 'on':
threaded = True
elif threaded == 'off':
threaded = False
else:
raise ValueError(bad_value % 'multithread')
raise ValueError(bad_value % "multithread")
forked = options.get('multiprocess', '').lower()
if forked == 'on':
forked = True
elif forked == 'off':
forked = False
else:
raise ValueError(bad_value % 'multiprocess')
sn = cherrypy.tree.script_name(req.uri or '/')
raise ValueError(bad_value % "multiprocess")
sn = cherrypy.tree.script_name(req.uri or "/")
if sn is None:
send_response(req, '404 Not Found', [], '')
else:
app = cherrypy.tree.apps[sn]
method = req.method
path = req.uri
qs = req.args or ''
qs = req.args or ""
reqproto = req.protocol
headers = copyitems(req.headers_in)
headers = req.headers_in.items()
rfile = _ReadOnlyRequest(req)
prev = None
try:
redirections = []
while True:
request, response = app.get_serving(local, remote, scheme,
'HTTP/1.1')
"HTTP/1.1")
request.login = req.user
request.multithread = bool(threaded)
request.multiprocess = bool(forked)
request.app = app
request.prev = prev
# Run the CherryPy Request object and obtain the response
try:
request.run(method, path, qs, reqproto, headers, rfile)
break
except cherrypy.InternalRedirect:
ir = sys.exc_info()[1]
except cherrypy.InternalRedirect, ir:
app.release_serving()
prev = request
if not recursive:
if ir.path in redirections:
raise RuntimeError(
'InternalRedirector visited the same URL '
'twice: %r' % ir.path)
raise RuntimeError("InternalRedirector visited the "
"same URL twice: %r" % ir.path)
else:
# Add the *previous* path_info + qs to
# redirections.
# Add the *previous* path_info + qs to redirections.
if qs:
qs = '?' + qs
qs = "?" + qs
redirections.append(sn + path + qs)
# Munge environment and try again.
method = 'GET'
method = "GET"
path = ir.path
qs = ir.query_string
rfile = io.BytesIO()
send_response(
req, response.output_status, response.header_list,
response.body, response.stream)
rfile = StringIO.StringIO()
send_response(req, response.status, response.header_list,
response.body, response.stream)
finally:
app.release_serving()
except:
@@ -250,53 +237,41 @@ def handler(req):
def send_response(req, status, headers, body, stream=False):
# Set response status
req.status = int(status[:3])
# Set response headers
req.content_type = 'text/plain'
req.content_type = "text/plain"
for header, value in headers:
if header.lower() == 'content-type':
req.content_type = value
continue
req.headers_out.add(header, value)
if stream:
# Flush now so the status and headers are sent immediately.
req.flush()
# Set response body
if isinstance(body, text_or_bytes):
if isinstance(body, basestring):
req.write(body)
else:
for seg in body:
req.write(seg)
# --------------- Startup tools for CherryPy + mod_python --------------- #
try:
import subprocess
def popen(fullcmd):
p = subprocess.Popen(fullcmd, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
close_fds=True)
return p.stdout
except ImportError:
def popen(fullcmd):
pipein, pipeout = os.popen4(fullcmd)
return pipeout
def read_process(cmd, args=''):
fullcmd = '%s %s' % (cmd, args)
pipeout = popen(fullcmd)
import os
import re
def read_process(cmd, args=""):
pipein, pipeout = os.popen4("%s %s" % (cmd, args))
try:
firstline = pipeout.readline()
cmd_not_found = re.search(
ntob('(not recognized|No such file|not found)'),
firstline,
re.IGNORECASE
)
if cmd_not_found:
if (re.search(r"(not recognized|No such file|not found)", firstline,
re.IGNORECASE)):
raise IOError('%s must be on your system path.' % cmd)
output = firstline + pipeout.read()
finally:
@@ -305,7 +280,7 @@ def read_process(cmd, args=''):
class ModPythonServer(object):
template = """
# Apache2 server configuration file for running CherryPy with mod_python.
@@ -320,35 +295,36 @@ LoadModule python_module modules/mod_python.so
%(opts)s
</Location>
"""
def __init__(self, loc='/', port=80, opts=None, apache_path='apache',
handler='cherrypy._cpmodpy::handler'):
def __init__(self, loc="/", port=80, opts=None, apache_path="apache",
handler="cherrypy._cpmodpy::handler"):
self.loc = loc
self.port = port
self.opts = opts
self.apache_path = apache_path
self.handler = handler
def start(self):
opts = ''.join([' PythonOption %s %s\n' % (k, v)
opts = "".join([" PythonOption %s %s\n" % (k, v)
for k, v in self.opts])
conf_data = self.template % {'port': self.port,
'loc': self.loc,
'opts': opts,
'handler': self.handler,
conf_data = self.template % {"port": self.port,
"loc": self.loc,
"opts": opts,
"handler": self.handler,
}
mpconf = os.path.join(os.path.dirname(__file__), 'cpmodpy.conf')
mpconf = os.path.join(os.path.dirname(__file__), "cpmodpy.conf")
f = open(mpconf, 'wb')
try:
f.write(conf_data)
finally:
f.close()
response = read_process(self.apache_path, '-k start -f %s' % mpconf)
response = read_process(self.apache_path, "-k start -f %s" % mpconf)
self.ready = True
return response
def stop(self):
os.popen('apache -k stop')
os.popen("apache -k stop")
self.ready = False
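A rough usage sketch for the class above (not part of the diff; the PythonOption value is an assumption mirroring the sample Apache config earlier in this file):
from cherrypy._cpmodpy import ModPythonServer
server = ModPythonServer(loc='/', port=80, opts=[('cherrypy.setup', 'myapp::setup_server')], apache_path='apache')
output = server.start()   # writes cpmodpy.conf next to _cpmodpy.py and runs "apache -k start -f cpmodpy.conf"
print(output)
server.stop()             # runs "apache -k stop" and clears self.ready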


@@ -1,154 +0,0 @@
"""Native adapter for serving CherryPy via its builtin server."""
import logging
import sys
import io
import cherrypy
from cherrypy._cperror import format_exc, bare_error
from cherrypy.lib import httputil
from cherrypy import wsgiserver
class NativeGateway(wsgiserver.Gateway):
recursive = False
def respond(self):
req = self.req
try:
# Obtain a Request object from CherryPy
local = req.server.bind_addr
local = httputil.Host(local[0], local[1], '')
remote = req.conn.remote_addr, req.conn.remote_port
remote = httputil.Host(remote[0], remote[1], '')
scheme = req.scheme
sn = cherrypy.tree.script_name(req.uri or '/')
if sn is None:
self.send_response('404 Not Found', [], [''])
else:
app = cherrypy.tree.apps[sn]
method = req.method
path = req.path
qs = req.qs or ''
headers = req.inheaders.items()
rfile = req.rfile
prev = None
try:
redirections = []
while True:
request, response = app.get_serving(
local, remote, scheme, 'HTTP/1.1')
request.multithread = True
request.multiprocess = False
request.app = app
request.prev = prev
# Run the CherryPy Request object and obtain the
# response
try:
request.run(method, path, qs,
req.request_protocol, headers, rfile)
break
except cherrypy.InternalRedirect:
ir = sys.exc_info()[1]
app.release_serving()
prev = request
if not self.recursive:
if ir.path in redirections:
raise RuntimeError(
'InternalRedirector visited the same '
'URL twice: %r' % ir.path)
else:
# Add the *previous* path_info + qs to
# redirections.
if qs:
qs = '?' + qs
redirections.append(sn + path + qs)
# Munge environment and try again.
method = 'GET'
path = ir.path
qs = ir.query_string
rfile = io.BytesIO()
self.send_response(
response.output_status, response.header_list,
response.body)
finally:
app.release_serving()
except:
tb = format_exc()
# print tb
cherrypy.log(tb, 'NATIVE_ADAPTER', severity=logging.ERROR)
s, h, b = bare_error()
self.send_response(s, h, b)
def send_response(self, status, headers, body):
req = self.req
# Set response status
req.status = str(status or '500 Server Error')
# Set response headers
for header, value in headers:
req.outheaders.append((header, value))
if (req.ready and not req.sent_headers):
req.sent_headers = True
req.send_headers()
# Set response body
for seg in body:
req.write(seg)
class CPHTTPServer(wsgiserver.HTTPServer):
"""Wrapper for wsgiserver.HTTPServer.
wsgiserver has been designed to not reference CherryPy in any way,
so that it can be used in other frameworks and applications.
Therefore, we wrap it here, so we can apply some attributes
from config -> cherrypy.server -> HTTPServer.
"""
def __init__(self, server_adapter=cherrypy.server):
self.server_adapter = server_adapter
server_name = (self.server_adapter.socket_host or
self.server_adapter.socket_file or
None)
wsgiserver.HTTPServer.__init__(
self, server_adapter.bind_addr, NativeGateway,
minthreads=server_adapter.thread_pool,
maxthreads=server_adapter.thread_pool_max,
server_name=server_name)
self.max_request_header_size = (
self.server_adapter.max_request_header_size or 0)
self.max_request_body_size = (
self.server_adapter.max_request_body_size or 0)
self.request_queue_size = self.server_adapter.socket_queue_size
self.timeout = self.server_adapter.socket_timeout
self.shutdown_timeout = self.server_adapter.shutdown_timeout
self.protocol = self.server_adapter.protocol_version
self.nodelay = self.server_adapter.nodelay
ssl_module = self.server_adapter.ssl_module or 'pyopenssl'
if self.server_adapter.ssl_context:
adapter_class = wsgiserver.get_ssl_adapter_class(ssl_module)
self.ssl_adapter = adapter_class(
self.server_adapter.ssl_certificate,
self.server_adapter.ssl_private_key,
self.server_adapter.ssl_certificate_chain)
self.ssl_adapter.context = self.server_adapter.ssl_context
elif self.server_adapter.ssl_certificate:
adapter_class = wsgiserver.get_ssl_adapter_class(ssl_module)
self.ssl_adapter = adapter_class(
self.server_adapter.ssl_certificate,
self.server_adapter.ssl_private_key,
self.server_adapter.ssl_certificate_chain)


File diff suppressed because it is too large.


File diff suppressed because it is too large.


@@ -1,10 +1,9 @@
"""Manage HTTP servers with CherryPy."""
import six
import warnings
import cherrypy
from cherrypy.lib.reprconf import attributes
from cherrypy._cpcompat import text_or_bytes
from cherrypy.lib import attributes
# We import * because we want to export check_port
# et al as attributes of this module.
@@ -12,143 +11,66 @@ from cherrypy.process.servers import *
class Server(ServerAdapter):
"""An adapter for an HTTP server.
You can set attributes (like socket_host and socket_port)
on *this* object (which is probably cherrypy.server), and call
quickstart. For example::
quickstart. For example:
cherrypy.server.socket_port = 80
cherrypy.quickstart()
"""
socket_port = 8080
"""The TCP port on which to listen for connections."""
_socket_host = '127.0.0.1'
def _get_socket_host(self):
return self._socket_host
def _set_socket_host(self, value):
if value == '':
raise ValueError("The empty string ('') is not an allowed value. "
"Use '0.0.0.0' instead to listen on all active "
'interfaces (INADDR_ANY).')
"interfaces (INADDR_ANY).")
self._socket_host = value
socket_host = property(
_get_socket_host,
_set_socket_host,
socket_host = property(_get_socket_host, _set_socket_host,
doc="""The hostname or IP address on which to listen for connections.
Host values may be any IPv4 or IPv6 address, or any valid hostname.
The string 'localhost' is a synonym for '127.0.0.1' (or '::1', if
your hosts file prefers IPv6). The string '0.0.0.0' is a special
IPv4 entry meaning "any active interface" (INADDR_ANY), and '::'
is the similar IN6ADDR_ANY for IPv6. The empty string or None are
not allowed.""")
socket_file = None
"""If given, the name of the UNIX socket to use instead of TCP/IP.
When this option is not None, the `socket_host` and `socket_port` options
are ignored."""
socket_queue_size = 5
"""The 'backlog' argument to socket.listen(); specifies the maximum number
of queued connections (default 5)."""
socket_timeout = 10
"""The timeout in seconds for accepted connections (default 10)."""
accepted_queue_size = -1
"""The maximum number of requests which will be queued up before
the server refuses to accept it (default -1, meaning no limit)."""
accepted_queue_timeout = 10
"""The timeout in seconds for attempting to add a request to the
queue when the queue is full (default 10)."""
shutdown_timeout = 5
"""The time to wait for HTTP worker threads to clean up."""
protocol_version = 'HTTP/1.1'
"""The version string to write in the Status-Line of all HTTP responses,
for example, "HTTP/1.1" (the default). Depending on the HTTP server used,
this should also limit the supported features used in the response."""
reverse_dns = False
thread_pool = 10
"""The number of worker threads to start up in the pool."""
thread_pool_max = -1
"""The maximum size of the worker-thread pool. Use -1 to indicate no limit.
"""
max_request_header_size = 500 * 1024
"""The maximum number of bytes allowable in the request headers.
If exceeded, the HTTP server should return "413 Request Entity Too Large".
"""
max_request_body_size = 100 * 1024 * 1024
"""The maximum number of bytes allowable in the request body. If exceeded,
the HTTP server should return "413 Request Entity Too Large"."""
instance = None
"""If not None, this should be an HTTP server instance (such as
CPWSGIServer) which cherrypy.server will control. Use this when you need
more control over object instantiation than is available in the various
configuration options."""
ssl_context = None
"""When using PyOpenSSL, an instance of SSL.Context."""
ssl_certificate = None
"""The filename of the SSL certificate to use."""
ssl_certificate_chain = None
"""When using PyOpenSSL, the certificate chain to pass to
Context.load_verify_locations."""
ssl_private_key = None
"""The filename of the private key to use with SSL."""
if six.PY3:
ssl_module = 'builtin'
"""The name of a registered SSL adaptation module to use with
the builtin WSGI server. Builtin options are: 'builtin' (to
use the SSL library built into recent versions of Python).
You may also register your own classes in the
wsgiserver.ssl_adapters dict."""
else:
ssl_module = 'pyopenssl'
"""The name of a registered SSL adaptation module to use with the
builtin WSGI server. Builtin options are 'builtin' (to use the SSL
library built into recent versions of Python) and 'pyopenssl' (to
use the PyOpenSSL project, which you must install separately). You
may also register your own classes in the wsgiserver.ssl_adapters
dict."""
statistics = False
"""Turns statistics-gathering on or off for aware HTTP servers."""
nodelay = True
"""If True (the default since 3.1), sets the TCP_NODELAY socket option."""
wsgi_version = (1, 0)
"""The WSGI version tuple to use with the builtin WSGI server.
The provided options are (1, 0) [which includes support for PEP 3333,
which declares it covers WSGI version 1.0.1 but still mandates the
wsgi.version (1, 0)] and ('u', 0), an experimental unicode version.
You may create and register your own experimental versions of the WSGI
protocol by adding custom classes to the wsgiserver.wsgi_gateways dict."""
def __init__(self):
self.bus = cherrypy.engine
self.httpserver = None
self.interrupt = None
self.running = False
def quickstart(self, server=None):
"""This does nothing now and will be removed in 3.2."""
warnings.warn('quickstart does nothing now and will be removed in '
'3.2. Call cherrypy.engine.start() instead.',
DeprecationWarning)
def httpserver_from_self(self, httpserver=None):
"""Return a (httpserver, bind_addr) pair based on self attributes."""
if httpserver is None:
@@ -156,31 +78,30 @@ class Server(ServerAdapter):
if httpserver is None:
from cherrypy import _cpwsgi_server
httpserver = _cpwsgi_server.CPWSGIServer(self)
if isinstance(httpserver, text_or_bytes):
if isinstance(httpserver, basestring):
# Is anyone using this? Can I add an arg?
httpserver = attributes(httpserver)(self)
return httpserver, self.bind_addr
def start(self):
"""Start the HTTP server."""
if not self.httpserver:
self.httpserver, self.bind_addr = self.httpserver_from_self()
ServerAdapter.start(self)
start.priority = 75
def _get_bind_addr(self):
if self.socket_file:
return self.socket_file
if self.socket_host is None and self.socket_port is None:
return None
return (self.socket_host, self.socket_port)
def _set_bind_addr(self, value):
if value is None:
self.socket_file = None
self.socket_host = None
self.socket_port = None
elif isinstance(value, text_or_bytes):
elif isinstance(value, basestring):
self.socket_file = value
self.socket_host = None
self.socket_port = None
@@ -189,21 +110,16 @@ class Server(ServerAdapter):
self.socket_host, self.socket_port = value
self.socket_file = None
except ValueError:
raise ValueError('bind_addr must be a (host, port) tuple '
'(for TCP sockets) or a string (for Unix '
'domain sockets), not %r' % value)
bind_addr = property(
_get_bind_addr,
_set_bind_addr,
doc='A (host, port) tuple for TCP sockets or '
'a str for Unix domain sockets.')
raise ValueError("bind_addr must be a (host, port) tuple "
"(for TCP sockets) or a string (for Unix "
"domain sockets), not %r" % value)
bind_addr = property(_get_bind_addr, _set_bind_addr)
def base(self):
"""Return the base (scheme://host[:port] or sock file) for this server.
"""
"""Return the base (scheme://host[:port] or sock file) for this server."""
if self.socket_file:
return self.socket_file
host = self.socket_host
if host in ('0.0.0.0', '::'):
# 0.0.0.0 is INADDR_ANY and :: is IN6ADDR_ANY.
@@ -211,16 +127,17 @@ class Server(ServerAdapter):
# safest thing to spit out in a URL.
import socket
host = socket.gethostname()
port = self.socket_port
if self.ssl_certificate:
scheme = 'https'
scheme = "https"
if port != 443:
host += ':%s' % port
host += ":%s" % port
else:
scheme = 'http'
scheme = "http"
if port != 80:
host += ':%s' % port
host += ":%s" % port
return "%s://%s" % (scheme, host)
return '%s://%s' % (scheme, host)
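A small sketch (values are illustrative assumptions) of how the bind_addr property and base() above fit together:
import cherrypy
cherrypy.server.bind_addr = ('0.0.0.0', 8080)         # tuple sets socket_host/socket_port
print(cherrypy.server.base())                         # e.g. 'http://<hostname>:8080'
cherrypy.server.bind_addr = '/var/run/cherrypy.sock'  # string selects a Unix socket_file
print(cherrypy.server.base())                         # '/var/run/cherrypy.sock'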


@@ -0,0 +1,239 @@
# This is a backport of Python-2.4's threading.local() implementation
"""Thread-local objects
(Note that this module provides a Python version of the
threading.local class. Depending on the version of Python you're
using, there may be a faster one available. You should always import
the local class from threading.)
Thread-local objects support the management of thread-local data.
If you have data that you want to be local to a thread, simply create
a thread-local object and use its attributes:
>>> mydata = local()
>>> mydata.number = 42
>>> mydata.number
42
You can also access the local-object's dictionary:
>>> mydata.__dict__
{'number': 42}
>>> mydata.__dict__.setdefault('widgets', [])
[]
>>> mydata.widgets
[]
What's important about thread-local objects is that their data are
local to a thread. If we access the data in a different thread:
>>> log = []
>>> def f():
... items = mydata.__dict__.items()
... items.sort()
... log.append(items)
... mydata.number = 11
... log.append(mydata.number)
>>> import threading
>>> thread = threading.Thread(target=f)
>>> thread.start()
>>> thread.join()
>>> log
[[], 11]
we get different data. Furthermore, changes made in the other thread
don't affect data seen in this thread:
>>> mydata.number
42
Of course, values you get from a local object, including a __dict__
attribute, are for whatever thread was current at the time the
attribute was read. For that reason, you generally don't want to save
these values across threads, as they apply only to the thread they
came from.
You can create custom local objects by subclassing the local class:
>>> class MyLocal(local):
... number = 2
... initialized = False
... def __init__(self, **kw):
... if self.initialized:
... raise SystemError('__init__ called too many times')
... self.initialized = True
... self.__dict__.update(kw)
... def squared(self):
... return self.number ** 2
This can be useful to support default values, methods and
initialization. Note that if you define an __init__ method, it will be
called each time the local object is used in a separate thread. This
is necessary to initialize each thread's dictionary.
Now if we create a local object:
>>> mydata = MyLocal(color='red')
Now we have a default number:
>>> mydata.number
2
an initial color:
>>> mydata.color
'red'
>>> del mydata.color
And a method that operates on the data:
>>> mydata.squared()
4
As before, we can access the data in a separate thread:
>>> log = []
>>> thread = threading.Thread(target=f)
>>> thread.start()
>>> thread.join()
>>> log
[[('color', 'red'), ('initialized', True)], 11]
without affecting this thread's data:
>>> mydata.number
2
>>> mydata.color
Traceback (most recent call last):
...
AttributeError: 'MyLocal' object has no attribute 'color'
Note that subclasses can define slots, but they are not thread
local. They are shared across threads:
>>> class MyLocal(local):
... __slots__ = 'number'
>>> mydata = MyLocal()
>>> mydata.number = 42
>>> mydata.color = 'red'
So, the separate thread:
>>> thread = threading.Thread(target=f)
>>> thread.start()
>>> thread.join()
affects what we see:
>>> mydata.number
11
>>> del mydata
"""
# Threading import is at end
class _localbase(object):
__slots__ = '_local__key', '_local__args', '_local__lock'
def __new__(cls, *args, **kw):
self = object.__new__(cls)
key = 'thread.local.' + str(id(self))
object.__setattr__(self, '_local__key', key)
object.__setattr__(self, '_local__args', (args, kw))
object.__setattr__(self, '_local__lock', RLock())
if args or kw and (cls.__init__ is object.__init__):
raise TypeError("Initialization arguments are not supported")
# We need to create the thread dict in anticipation of
# __init__ being called, to make sure we don't call it
# again ourselves.
dict = object.__getattribute__(self, '__dict__')
currentThread().__dict__[key] = dict
return self
def _patch(self):
key = object.__getattribute__(self, '_local__key')
d = currentThread().__dict__.get(key)
if d is None:
d = {}
currentThread().__dict__[key] = d
object.__setattr__(self, '__dict__', d)
# we have a new instance dict, so call our __init__ if we have
# one
cls = type(self)
if cls.__init__ is not object.__init__:
args, kw = object.__getattribute__(self, '_local__args')
cls.__init__(self, *args, **kw)
else:
object.__setattr__(self, '__dict__', d)
class local(_localbase):
def __getattribute__(self, name):
lock = object.__getattribute__(self, '_local__lock')
lock.acquire()
try:
_patch(self)
return object.__getattribute__(self, name)
finally:
lock.release()
def __setattr__(self, name, value):
lock = object.__getattribute__(self, '_local__lock')
lock.acquire()
try:
_patch(self)
return object.__setattr__(self, name, value)
finally:
lock.release()
def __delattr__(self, name):
lock = object.__getattribute__(self, '_local__lock')
lock.acquire()
try:
_patch(self)
return object.__delattr__(self, name)
finally:
lock.release()
def __del__():
threading_enumerate = enumerate
__getattribute__ = object.__getattribute__
def __del__(self):
key = __getattribute__(self, '_local__key')
try:
threads = list(threading_enumerate())
except:
# if enumerate fails, as it seems to do during
# shutdown, we'll skip cleanup under the assumption
# that there is nothing to clean up
return
for thread in threads:
try:
__dict__ = thread.__dict__
except AttributeError:
# Thread is dying, rest in peace
continue
if key in __dict__:
try:
del __dict__[key]
except KeyError:
pass # didn't have anything in this thread
return __del__
__del__ = __del__()
from threading import currentThread, enumerate, RLock


@@ -2,19 +2,19 @@
Tools are usually designed to be used in a variety of ways (although some
may only offer one if they choose):
Library calls
Library calls:
All tools are callables that can be used wherever needed.
The arguments are straightforward and should be detailed within the
docstring.
Function decorators
Function decorators:
All tools, when called, may be used as decorators which configure
individual CherryPy page handlers (methods on the CherryPy tree).
That is, "@tools.anytool()" should "turn on" the tool via the
decorated function's _cp_config attribute.
CherryPy config
CherryPy config:
If a tool exposes a "_setup" callable, it will be called
once per Request (if the feature is "turned on" via config).
@@ -22,48 +22,27 @@ Tools may be implemented as any object with a namespace. The builtins
are generally either modules or instances of the tools.Tool class.
"""
import sys
import warnings
import cherrypy
from cherrypy._helper import expose
from cherrypy.lib import cptools, encoding, auth, static, jsontools
from cherrypy.lib import sessions as _sessions, xmlrpcutil as _xmlrpc
from cherrypy.lib import caching as _caching
from cherrypy.lib import auth_basic, auth_digest
def _getargs(func):
"""Return the names of all static arguments to the given function."""
# Use this instead of importing inspect for less mem overhead.
import types
if sys.version_info >= (3, 0):
if isinstance(func, types.MethodType):
func = func.__func__
co = func.__code__
else:
if isinstance(func, types.MethodType):
func = func.im_func
co = func.func_code
if isinstance(func, types.MethodType):
func = func.im_func
co = func.func_code
return co.co_varnames[:co.co_argcount]
_attr_error = (
'CherryPy Tools cannot be turned on directly. Instead, turn them '
'on via config, or use them as decorators on your page handlers.'
)
class Tool(object):
"""A registered function for use with CherryPy request-processing hooks.
help(tool.callable) should give you more information about this Tool.
"""
namespace = 'tools'
namespace = "tools"
def __init__(self, point, callable, name=None, priority=50):
self._point = point
self.callable = callable
@@ -71,21 +50,14 @@ class Tool(object):
self._priority = priority
self.__doc__ = self.callable.__doc__
self._setargs()
def _get_on(self):
raise AttributeError(_attr_error)
def _set_on(self, value):
raise AttributeError(_attr_error)
on = property(_get_on, _set_on)
def _setargs(self):
"""Copy func parameter names to obj attributes."""
try:
for arg in _getargs(self.callable):
setattr(self, arg, None)
except (TypeError, AttributeError):
if hasattr(self.callable, '__call__'):
if hasattr(self.callable, "__call__"):
for arg in _getargs(self.callable.__call__):
setattr(self, arg, None)
# IronPython 1.0 raises NotImplementedError because
@@ -97,66 +69,64 @@ class Tool(object):
# but if we trap it here it doesn't prevent CP from working.
except IndexError:
pass
def _merged_args(self, d=None):
"""Return a dict of configuration entries for this Tool."""
if d:
conf = d.copy()
else:
conf = {}
tm = cherrypy.serving.request.toolmaps[self.namespace]
tm = cherrypy.request.toolmaps[self.namespace]
if self._name in tm:
conf.update(tm[self._name])
if 'on' in conf:
del conf['on']
if "on" in conf:
del conf["on"]
return conf
def __call__(self, *args, **kwargs):
"""Compile-time decorator (turn on the tool in config).
For example::
@expose
For example:
@tools.proxy()
def whats_my_base(self):
return cherrypy.request.base
whats_my_base.exposed = True
"""
if args:
raise TypeError('The %r Tool does not accept positional '
'arguments; you must use keyword arguments.'
raise TypeError("The %r Tool does not accept positional "
"arguments; you must use keyword arguments."
% self._name)
def tool_decorator(f):
if not hasattr(f, '_cp_config'):
if not hasattr(f, "_cp_config"):
f._cp_config = {}
subspace = self.namespace + '.' + self._name + '.'
f._cp_config[subspace + 'on'] = True
for k, v in kwargs.items():
subspace = self.namespace + "." + self._name + "."
f._cp_config[subspace + "on"] = True
for k, v in kwargs.iteritems():
f._cp_config[subspace + k] = v
return f
return tool_decorator
def _setup(self):
"""Hook this tool into cherrypy.request.
The standard CherryPy request object will automatically call this
method when the tool is "turned on" in config.
"""
conf = self._merged_args()
p = conf.pop('priority', None)
p = conf.pop("priority", None)
if p is None:
p = getattr(self.callable, 'priority', self._priority)
cherrypy.serving.request.hooks.attach(self._point, self.callable,
priority=p, **conf)
p = getattr(self.callable, "priority", self._priority)
cherrypy.request.hooks.attach(self._point, self.callable,
priority=p, **conf)
class HandlerTool(Tool):
"""Tool which is called 'before main', that may skip normal handlers.
If the tool successfully handles the request (by setting response.body),
it should return True. This will cause CherryPy to skip any 'normal' page
handler. If the tool did not handle the request, it should return False
@@ -164,57 +134,54 @@ class HandlerTool(Tool):
tool is declared AS a page handler (see the 'handler' method), returning
False will raise NotFound.
"""
def __init__(self, callable, name=None):
Tool.__init__(self, 'before_handler', callable, name)
def handler(self, *args, **kwargs):
"""Use this tool as a CherryPy page handler.
For example::
For example:
class Root:
nav = tools.staticdir.handler(section="/nav", dir="nav",
root=absDir)
"""
@expose
def handle_func(*a, **kw):
handled = self.callable(*args, **self._merged_args(kwargs))
if not handled:
raise cherrypy.NotFound()
return cherrypy.serving.response.body
return cherrypy.response.body
handle_func.exposed = True
return handle_func
def _wrapper(self, **kwargs):
if self.callable(**kwargs):
cherrypy.serving.request.handler = None
cherrypy.request.handler = None
def _setup(self):
"""Hook this tool into cherrypy.request.
The standard CherryPy request object will automatically call this
method when the tool is "turned on" in config.
"""
conf = self._merged_args()
p = conf.pop('priority', None)
p = conf.pop("priority", None)
if p is None:
p = getattr(self.callable, 'priority', self._priority)
cherrypy.serving.request.hooks.attach(self._point, self._wrapper,
priority=p, **conf)
p = getattr(self.callable, "priority", self._priority)
cherrypy.request.hooks.attach(self._point, self._wrapper,
priority=p, **conf)
class HandlerWrapperTool(Tool):
"""Tool which wraps request.handler in a provided wrapper function.
The 'newhandler' arg must be a handler wrapper function that takes a
'next_handler' argument, plus ``*args`` and ``**kwargs``. Like all
page handler
'next_handler' argument, plus *args and **kwargs. Like all page handler
functions, it must return an iterable for use as cherrypy.response.body.
For example, to allow your 'inner' page handlers to return dicts
which then get interpolated into a template::
which then get interpolated into a template:
def interpolator(next_handler, *args, **kwargs):
filename = cherrypy.request.config.get('template')
cherrypy.response.template = env.get_template(filename)
@@ -222,86 +189,83 @@ class HandlerWrapperTool(Tool):
return cherrypy.response.template.render(**response_dict)
cherrypy.tools.jinja = HandlerWrapperTool(interpolator)
"""
def __init__(self, newhandler, point='before_handler', name=None,
priority=50):
def __init__(self, newhandler, point='before_handler', name=None, priority=50):
self.newhandler = newhandler
self._point = point
self._name = name
self._priority = priority
def callable(self, *args, **kwargs):
innerfunc = cherrypy.serving.request.handler
def callable(self):
innerfunc = cherrypy.request.handler
def wrap(*args, **kwargs):
return self.newhandler(innerfunc, *args, **kwargs)
cherrypy.serving.request.handler = wrap
cherrypy.request.handler = wrap
class ErrorTool(Tool):
"""Tool which is used to replace the default request.error_response."""
def __init__(self, callable, name=None):
Tool.__init__(self, None, callable, name)
def _wrapper(self):
self.callable(**self._merged_args())
def _setup(self):
"""Hook this tool into cherrypy.request.
The standard CherryPy request object will automatically call this
method when the tool is "turned on" in config.
"""
cherrypy.serving.request.error_response = self._wrapper
cherrypy.request.error_response = self._wrapper
# Builtin tools #
from cherrypy.lib import cptools, encoding, auth, static, tidy
from cherrypy.lib import sessions as _sessions, xmlrpc as _xmlrpc
from cherrypy.lib import caching as _caching, wsgiapp as _wsgiapp
class SessionTool(Tool):
"""Session Tool for CherryPy.
sessions.locking
sessions.locking:
When 'implicit' (the default), the session will be locked for you,
just before running the page handler.
just before running the page handler.
When 'early', the session will be locked before reading the request
body. This is off by default for safety reasons; for example,
a large upload would block the session, denying an AJAX
progress meter
(`issue <https://github.com/cherrypy/cherrypy/issues/630>`_).
body. This is off by default for safety reasons; for example,
a large upload would block the session, denying an AJAX
progress meter (see http://www.cherrypy.org/ticket/630).
When 'explicit' (or any other value), you need to call
cherrypy.session.acquire_lock() yourself before using
session data.
cherrypy.session.acquire_lock() yourself before using
session data.
"""
def __init__(self):
# _sessions.init must be bound after headers are read
Tool.__init__(self, 'before_request_body', _sessions.init)
def _lock_session(self):
cherrypy.serving.session.acquire_lock()
def _setup(self):
"""Hook this tool into cherrypy.request.
The standard CherryPy request object will automatically call this
method when the tool is "turned on" in config.
"""
hooks = cherrypy.serving.request.hooks
hooks = cherrypy.request.hooks
conf = self._merged_args()
p = conf.pop('priority', None)
p = conf.pop("priority", None)
if p is None:
p = getattr(self.callable, 'priority', self._priority)
p = getattr(self.callable, "priority", self._priority)
hooks.attach(self._point, self.callable, priority=p, **conf)
locking = conf.pop('locking', 'implicit')
if locking == 'implicit':
hooks.attach('before_handler', self._lock_session)
@@ -312,34 +276,35 @@ class SessionTool(Tool):
else:
# Don't lock
pass
hooks.attach('before_finalize', _sessions.save)
hooks.attach('on_end_request', _sessions.close)
def regenerate(self):
"""Drop the current session and make a new one (with a new id)."""
sess = cherrypy.serving.session
sess.regenerate()
# Grab cookie-relevant tool args
conf = dict([(k, v) for k, v in self._merged_args().items()
conf = dict([(k, v) for k, v in self._merged_args().iteritems()
if k in ('path', 'path_header', 'name', 'timeout',
'domain', 'secure')])
_sessions.set_response_cookie(**conf)
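A hypothetical config snippet (not from the diff) illustrating the locking modes described in the SessionTool docstring above:
conf = {'/': {
    'tools.sessions.on': True,
    'tools.sessions.locking': 'explicit',
}}
# With 'explicit' locking, the page handler must call
# cherrypy.session.acquire_lock() itself before touching session data.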
class XMLRPCController(object):
"""A Controller (page handler collection) for XML-RPC.
To use it, have your controllers subclass this base class (it will
turn on the tool for you).
You can also supply the following optional config entries::
You can also supply the following optional config entries:
tools.xmlrpc.encoding: 'utf-8'
tools.xmlrpc.allow_none: 0
XML-RPC is a rather discontinuous layer over HTTP; dispatching to the
appropriate handler must first be performed according to the URL, and
then a second dispatch step must take place according to the RPC method
@@ -347,93 +312,125 @@ class XMLRPCController(object):
prefix in the URL, supplies its own handler args in the body, and
requires a 200 OK "Fault" response instead of 404 when the desired
method is not found.
Therefore, XML-RPC cannot be implemented for CherryPy via a Tool alone.
This Controller acts as the dispatch target for the first half (based
on the URL); it then reads the RPC method from the request body and
does its own second dispatch step based on that method. It also reads
body params, and returns a Fault on error.
The XMLRPCDispatcher strips any /RPC2 prefix; if you aren't using /RPC2
in your URLs, you can safely skip turning on the XMLRPCDispatcher.
Otherwise, you need to declare it in config::
Otherwise, you need to declare it in config:
request.dispatch: cherrypy.dispatch.XMLRPCDispatcher()
"""
# Note we're hard-coding this into the 'tools' namespace. We could do
# a huge amount of work to make it relocatable, but the only reason why
# would be if someone actually disabled the default_toolbox. Meh.
_cp_config = {'tools.xmlrpc.on': True}
@expose
def default(self, *vpath, **params):
rpcparams, rpcmethod = _xmlrpc.process_body()
subhandler = self
for attr in str(rpcmethod).split('.'):
subhandler = getattr(subhandler, attr, None)
if subhandler and getattr(subhandler, 'exposed', False):
if subhandler and getattr(subhandler, "exposed", False):
body = subhandler(*(vpath + rpcparams), **params)
else:
# https://github.com/cherrypy/cherrypy/issues/533
# http://www.cherrypy.org/ticket/533
# if a method is not found, an xmlrpclib.Fault should be returned
# raising an exception here will do that; see
# cherrypy.lib.xmlrpcutil.on_error
raise Exception('method "%s" is not supported' % attr)
conf = cherrypy.serving.request.toolmaps['tools'].get('xmlrpc', {})
# cherrypy.lib.xmlrpc.on_error
raise Exception, 'method "%s" is not supported' % attr
conf = cherrypy.request.toolmaps['tools'].get("xmlrpc", {})
_xmlrpc.respond(body,
conf.get('encoding', 'utf-8'),
conf.get('allow_none', 0))
return cherrypy.serving.response.body
return cherrypy.response.body
default.exposed = True
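A minimal, illustrative subclass (class and method names are made up) showing how the controller above is typically wired up:
import cherrypy
class RPCRoot(XMLRPCController):
    def add(self, a, b):
        return int(a) + int(b)
    add.exposed = True
cherrypy.quickstart(RPCRoot(), '/', {'/': {
    'request.dispatch': cherrypy.dispatch.XMLRPCDispatcher()}})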
class WSGIAppTool(HandlerTool):
"""A tool for running any WSGI middleware/application within CP.
Here are the parameters:
wsgi_app - any wsgi application callable
env_update - a dictionary with arbitrary keys and values to be
merged with the WSGI environ dictionary.
Example:
class Whatever:
_cp_config = {'tools.wsgiapp.on': True,
'tools.wsgiapp.app': some_app,
'tools.wsgiapp.env': app_environ,
}
"""
def _setup(self):
# Keep request body intact so the wsgi app can have its way with it.
cherrypy.request.process_request_body = False
HandlerTool._setup(self)
class SessionAuthTool(HandlerTool):
def _setargs(self):
for name in dir(cptools.SessionAuth):
if not name.startswith('__'):
if not name.startswith("__"):
setattr(self, name, None)
class CachingTool(Tool):
"""Caching Tool for CherryPy."""
def _wrapper(self, **kwargs):
request = cherrypy.serving.request
if _caching.get(**kwargs):
def _wrapper(self, invalid_methods=("POST", "PUT", "DELETE"), **kwargs):
request = cherrypy.request
if not hasattr(cherrypy, "_cache"):
# Make a process-wide Cache object.
cherrypy._cache = kwargs.pop("cache_class", _caching.MemoryCache)()
# Take all remaining kwargs and set them on the Cache object.
for k, v in kwargs.iteritems():
setattr(cherrypy._cache, k, v)
if _caching.get(invalid_methods=invalid_methods):
request.handler = None
else:
if request.cacheable:
# Note the devious technique here of adding hooks on the fly
request.hooks.attach('before_finalize', _caching.tee_output,
priority=90)
priority = 90)
_wrapper.priority = 20
def _setup(self):
"""Hook caching into cherrypy.request."""
conf = self._merged_args()
p = conf.pop("priority", None)
cherrypy.request.hooks.attach('before_handler', self._wrapper,
priority=p, **conf)
p = conf.pop('priority', None)
cherrypy.serving.request.hooks.attach('before_handler', self._wrapper,
priority=p, **conf)
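A hypothetical config sketch for the caching tool; extra 'tools.caching.*' keys are forwarded to the cache by _wrapper() above, and 'delay' is an assumed expiry knob:
conf = {'/static': {
    'tools.caching.on': True,
    'tools.caching.delay': 3600,   # assumed: cache freshness in seconds
}}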
class Toolbox(object):
"""A collection of Tools.
This object also functions as a config namespace handler for itself.
Custom toolboxes should be added to each Application's toolboxes dict.
"""
def __init__(self, namespace):
self.namespace = namespace
def __setattr__(self, name, value):
# If the Tool._name is None, supply it from the attribute name.
if isinstance(value, Tool):
@@ -441,58 +438,28 @@ class Toolbox(object):
value._name = name
value.namespace = self.namespace
object.__setattr__(self, name, value)
def __enter__(self):
"""Populate request.toolmaps from tools specified in config."""
cherrypy.serving.request.toolmaps[self.namespace] = map = {}
cherrypy.request.toolmaps[self.namespace] = map = {}
def populate(k, v):
toolname, arg = k.split('.', 1)
toolname, arg = k.split(".", 1)
bucket = map.setdefault(toolname, {})
bucket[arg] = v
return populate
def __exit__(self, exc_type, exc_val, exc_tb):
"""Run tool._setup() for each tool in our toolmap."""
map = cherrypy.serving.request.toolmaps.get(self.namespace)
map = cherrypy.request.toolmaps.get(self.namespace)
if map:
for name, settings in map.items():
if settings.get('on', False):
if settings.get("on", False):
tool = getattr(self, name)
tool._setup()
def register(self, point, **kwargs):
"""Return a decorator which registers the function at the given hook point."""
def decorator(func):
setattr(self, kwargs.get('name', func.__name__), Tool(point, func, **kwargs))
return func
return decorator
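Sketch only (names are made up), using the register() method added above to attach a function to the default toolbox as a Tool:
import cherrypy
@cherrypy.tools.register('before_finalize', priority=60)
def add_powered_by():
    cherrypy.response.headers['X-Powered-By'] = 'CherryPy'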
class DeprecatedTool(Tool):
_name = None
warnmsg = 'This Tool is deprecated.'
def __init__(self, point, warnmsg=None):
self.point = point
if warnmsg is not None:
self.warnmsg = warnmsg
def __call__(self, *args, **kwargs):
warnings.warn(self.warnmsg)
def tool_decorator(f):
return f
return tool_decorator
def _setup(self):
warnings.warn(self.warnmsg)
default_toolbox = _d = Toolbox('tools')
default_toolbox = _d = Toolbox("tools")
_d.session_auth = SessionAuthTool(cptools.session_auth)
_d.allow = Tool('on_start_resource', cptools.allow)
_d.proxy = Tool('before_request_body', cptools.proxy, priority=30)
_d.response_headers = Tool('on_start_resource', cptools.response_headers)
_d.log_tracebacks = Tool('before_error_response', cptools.log_traceback)
@@ -500,26 +467,19 @@ _d.log_headers = Tool('before_error_response', cptools.log_request_headers)
_d.log_hooks = Tool('on_end_request', cptools.log_hooks, priority=100)
_d.err_redirect = ErrorTool(cptools.redirect)
_d.etags = Tool('before_finalize', cptools.validate_etags, priority=75)
_d.decode = Tool('before_request_body', encoding.decode)
_d.decode = Tool('before_handler', encoding.decode)
# the order of encoding, gzip, caching is important
_d.encode = Tool('before_handler', encoding.ResponseEncoder, priority=70)
_d.encode = Tool('before_finalize', encoding.encode, priority=70)
_d.gzip = Tool('before_finalize', encoding.gzip, priority=80)
_d.staticdir = HandlerTool(static.staticdir)
_d.staticfile = HandlerTool(static.staticfile)
_d.sessions = SessionTool()
_d.xmlrpc = ErrorTool(_xmlrpc.on_error)
_d.wsgiapp = WSGIAppTool(_wsgiapp.run)
_d.caching = CachingTool('before_handler', _caching.get, 'caching')
_d.expires = Tool('before_finalize', _caching.expires)
_d.tidy = DeprecatedTool(
'before_finalize',
'The tidy tool has been removed from the standard distribution of '
'CherryPy. The most recent version can be found at '
'http://tools.cherrypy.org/browser.')
_d.nsgmls = DeprecatedTool(
'before_finalize',
'The nsgmls tool has been removed from the standard distribution of '
'CherryPy. The most recent version can be found at '
'http://tools.cherrypy.org/browser.')
_d.tidy = Tool('before_finalize', tidy.tidy)
_d.nsgmls = Tool('before_finalize', tidy.nsgmls)
_d.ignore_headers = Tool('before_request_body', cptools.ignore_headers)
_d.referer = Tool('before_request_body', cptools.referer)
_d.basic_auth = Tool('on_start_resource', auth.basic_auth)
@@ -528,11 +488,5 @@ _d.trailing_slash = Tool('before_handler', cptools.trailing_slash, priority=60)
_d.flatten = Tool('before_finalize', cptools.flatten)
_d.accept = Tool('on_start_resource', cptools.accept)
_d.redirect = Tool('on_start_resource', cptools.redirect)
_d.autovary = Tool('on_start_resource', cptools.autovary, priority=0)
_d.json_in = Tool('before_request_body', jsontools.json_in, priority=30)
_d.json_out = Tool('before_handler', jsontools.json_out, priority=30)
_d.auth_basic = Tool('before_handler', auth_basic.basic_auth, priority=1)
_d.auth_digest = Tool('before_handler', auth_digest.digest_auth, priority=1)
_d.params = Tool('before_handler', cptools.convert_params)
del _d, cptools, encoding, auth, static
del _d, cptools, encoding, auth, static, tidy


@@ -1,287 +1,248 @@
"""CherryPy Application and Tree objects."""
import os
import six
import cherrypy
from cherrypy._cpcompat import ntou
from cherrypy import _cpconfig, _cplogging, _cprequest, _cpwsgi, tools
from cherrypy.lib import httputil
from cherrypy.lib import http as _http
class Application(object):
"""A CherryPy Application.
Servers and gateways should not instantiate Request objects directly.
Instead, they should ask an Application object for a request object.
An instance of this class may also be used as a WSGI callable
(WSGI application object) for itself.
"""
__metaclass__ = cherrypy._AttributeDocstrings
root = None
"""The top-most container of page handlers for this app. Handlers should
root__doc = """
The top-most container of page handlers for this app. Handlers should
be arranged in a hierarchy of attributes, matching the expected URI
hierarchy; the default dispatcher then searches this hierarchy for a
matching handler. When using a dispatcher other than the default,
this value may be None."""
config = {}
"""A dict of {path: pathconf} pairs, where 'pathconf' is itself a dict
config__doc = """
A dict of {path: pathconf} pairs, where 'pathconf' is itself a dict
of {key: value} pairs."""
namespaces = _cpconfig.NamespaceSet()
toolboxes = {'tools': cherrypy.tools}
log = None
"""A LogManager instance. See _cplogging."""
log__doc = """A LogManager instance. See _cplogging."""
wsgiapp = None
"""A CPWSGIApp instance. See _cpwsgi."""
wsgiapp__doc = """A CPWSGIApp instance. See _cpwsgi."""
request_class = _cprequest.Request
response_class = _cprequest.Response
relative_urls = False
def __init__(self, root, script_name='', config=None):
def __init__(self, root, script_name="", config=None):
self.log = _cplogging.LogManager(id(self), cherrypy.log.logger_root)
self.root = root
self.script_name = script_name
self.wsgiapp = _cpwsgi.CPWSGIApp(self)
self.namespaces = self.namespaces.copy()
self.namespaces['log'] = lambda k, v: setattr(self.log, k, v)
self.namespaces['wsgi'] = self.wsgiapp.namespace_handler
self.namespaces["log"] = lambda k, v: setattr(self.log, k, v)
self.namespaces["wsgi"] = self.wsgiapp.namespace_handler
self.config = self.__class__.config.copy()
if config:
self.merge(config)
def __repr__(self):
return '%s.%s(%r, %r)' % (self.__module__, self.__class__.__name__,
return "%s.%s(%r, %r)" % (self.__module__, self.__class__.__name__,
self.root, self.script_name)
script_name_doc = """The URI "mount point" for this app. A mount point
is that portion of the URI which is constant for all URIs that are
serviced by this application; it does not include scheme, host, or proxy
("virtual host") portions of the URI.
script_name__doc = """
The URI "mount point" for this app. A mount point is that portion of
the URI which is constant for all URIs that are serviced by this
application; it does not include scheme, host, or proxy ("virtual host")
portions of the URI.
For example, if script_name is "/my/cool/app", then the URL
"http://www.example.com/my/cool/app/page1" might be handled by a
"page1" method on the root object.
The value of script_name MUST NOT end in a slash. If the script_name
refers to the root of the URI, it MUST be an empty string (not "/").
If script_name is explicitly set to None, then the script_name will be
provided for each call from request.wsgi_environ['SCRIPT_NAME'].
"""
def _get_script_name(self):
if self._script_name is not None:
return self._script_name
# A `_script_name` with a value of None signals that the script name
# should be pulled from WSGI environ.
return cherrypy.serving.request.wsgi_environ['SCRIPT_NAME'].rstrip('/')
if self._script_name is None:
# None signals that the script name should be pulled from WSGI environ.
return cherrypy.request.wsgi_environ['SCRIPT_NAME'].rstrip("/")
return self._script_name
def _set_script_name(self, value):
if value:
value = value.rstrip('/')
value = value.rstrip("/")
self._script_name = value
script_name = property(fget=_get_script_name, fset=_set_script_name,
doc=script_name_doc)
doc=script_name__doc)
def merge(self, config):
"""Merge the given config into self.config."""
_cpconfig.merge(self.config, config)
# Handle namespaces specified in config.
self.namespaces(self.config.get('/', {}))
def find_config(self, path, key, default=None):
"""Return the most-specific value for key along path, or default."""
trail = path or '/'
while trail:
nodeconf = self.config.get(trail, {})
if key in nodeconf:
return nodeconf[key]
lastslash = trail.rfind('/')
if lastslash == -1:
break
elif lastslash == 0 and trail != '/':
trail = '/'
else:
trail = trail[:lastslash]
return default
self.namespaces(self.config.get("/", {}))
def get_serving(self, local, remote, scheme, sproto):
"""Create and return a Request and Response object."""
req = self.request_class(local, remote, scheme, sproto)
req.app = self
for name, toolbox in self.toolboxes.items():
for name, toolbox in self.toolboxes.iteritems():
req.namespaces[name] = toolbox
resp = self.response_class()
cherrypy.serving.load(req, resp)
cherrypy.engine.timeout_monitor.acquire()
cherrypy.engine.publish('acquire_thread')
cherrypy.engine.publish('before_request')
return req, resp
def release_serving(self):
"""Release the current serving (request and response)."""
req = cherrypy.serving.request
cherrypy.engine.publish('after_request')
cherrypy.engine.timeout_monitor.release()
try:
req.close()
except:
cherrypy.log(traceback=True, severity=40)
cherrypy.serving.clear()
def __call__(self, environ, start_response):
return self.wsgiapp(environ, start_response)
class Tree(object):
"""A registry of CherryPy applications, mounted at diverse points.
An instance of this class may also be used as a WSGI callable
(WSGI application object), in which case it dispatches to all
mounted apps.
"""
apps = {}
"""
apps__doc = """
A dict of the form {script name: application}, where "script name"
is a string declaring the URI mount point (no trailing slash), and
"application" is an instance of cherrypy.Application (or an arbitrary
WSGI callable if you happen to be using a WSGI server)."""
def __init__(self):
self.apps = {}
def mount(self, root, script_name='', config=None):
def mount(self, root, script_name="", config=None):
"""Mount a new app from a root object, script_name, and config.
root
An instance of a "controller class" (a collection of page
root: an instance of a "controller class" (a collection of page
handler methods) which represents the root of the application.
This may also be an Application instance, or None if using
a dispatcher other than the default.
script_name
A string containing the "mount point" of the application.
script_name: a string containing the "mount point" of the application.
This should start with a slash, and be the path portion of the
URL at which to mount the given root. For example, if root.index()
will handle requests to "http://www.example.com:8080/dept/app1/",
then the script_name argument would be "/dept/app1".
It MUST NOT end in a slash. If the script_name refers to the
root of the URI, it MUST be an empty string (not "/").
config
A file or dict containing application config.
config: a file or dict containing application config.
"""
if script_name is None:
raise TypeError(
"The 'script_name' argument may not be None. Application "
'objects may, however, possess a script_name of None (in '
'order to inspect the WSGI environ for SCRIPT_NAME upon each '
'request). You cannot mount such Applications on this Tree; '
'you must pass them to a WSGI server interface directly.')
"objects may, however, possess a script_name of None (in "
"order to inpect the WSGI environ for SCRIPT_NAME upon each "
"request). You cannot mount such Applications on this Tree; "
"you must pass them to a WSGI server interface directly.")
# Next line both 1) strips trailing slash and 2) maps "/" -> "".
script_name = script_name.rstrip('/')
script_name = script_name.rstrip("/")
if isinstance(root, Application):
app = root
if script_name != '' and script_name != app.script_name:
raise ValueError(
'Cannot specify a different script name and pass an '
'Application instance to cherrypy.mount')
if script_name != "" and script_name != app.script_name:
raise ValueError, "Cannot specify a different script name and pass an Application instance to cherrypy.mount"
script_name = app.script_name
else:
app = Application(root, script_name)
# If mounted at "", add favicon.ico
if (script_name == '' and root is not None
and not hasattr(root, 'favicon_ico')):
if (script_name == "" and root is not None
and not hasattr(root, "favicon_ico")):
favicon = os.path.join(os.getcwd(), os.path.dirname(__file__),
'favicon.ico')
"favicon.ico")
root.favicon_ico = tools.staticfile.handler(favicon)
if config:
app.merge(config)
self.apps[script_name] = app
return app
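A minimal sketch (names are assumptions) mirroring the '/dept/app1' example in the mount() docstring above:
import cherrypy
class App1Root(object):
    def index(self):
        return 'Hello from app1'
    index.exposed = True
cherrypy.tree.mount(App1Root(), '/dept/app1', {'/': {'tools.gzip.on': True}})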
def graft(self, wsgi_callable, script_name=''):
def graft(self, wsgi_callable, script_name=""):
"""Mount a wsgi callable at the given script_name."""
# Next line both 1) strips trailing slash and 2) maps "/" -> "".
script_name = script_name.rstrip('/')
script_name = script_name.rstrip("/")
self.apps[script_name] = wsgi_callable
def script_name(self, path=None):
"""The script_name of the app at the given path, or None.
If path is None, cherrypy.request is used.
"""
if path is None:
try:
request = cherrypy.serving.request
path = httputil.urljoin(request.script_name,
request.path_info)
path = _http.urljoin(cherrypy.request.script_name,
cherrypy.request.path_info)
except AttributeError:
return None
while True:
if path in self.apps:
return path
if path == '':
if path == "":
return None
# Move one node up the tree and try again.
path = path[:path.rfind('/')]
path = path[:path.rfind("/")]
def __call__(self, environ, start_response):
# If you're calling this, then you're probably setting SCRIPT_NAME
# to '' (some WSGI servers always set SCRIPT_NAME to '').
# Try to look up the app using the full path.
env1x = environ
if six.PY2 and environ.get(ntou('wsgi.version')) == (ntou('u'), 0):
env1x = _cpwsgi.downgrade_wsgi_ux_to_1x(environ)
path = httputil.urljoin(env1x.get('SCRIPT_NAME', ''),
env1x.get('PATH_INFO', ''))
sn = self.script_name(path or '/')
if sn is None:
start_response('404 Not Found', [])
return []
app = self.apps[sn]
# Correct the SCRIPT_NAME and PATH_INFO environ entries.
environ = environ.copy()
if six.PY2 and environ.get(ntou('wsgi.version')) == (ntou('u'), 0):
# Python 2/WSGI u.0: all strings MUST be of type unicode
enc = environ[ntou('wsgi.url_encoding')]
environ[ntou('SCRIPT_NAME')] = sn.decode(enc)
environ[ntou('PATH_INFO')] = path[len(sn.rstrip('/')):].decode(enc)
else:
environ['SCRIPT_NAME'] = sn
environ['PATH_INFO'] = path[len(sn.rstrip('/')):]
return app(environ, start_response)
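As a usage sketch of the Tree API above (handler class, paths and response text are made up for illustration):

import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return 'Hello from /dept/app1'

# Mount a CherryPy application at a script_name...
cherrypy.tree.mount(Root(), script_name='/dept/app1')

# ...and graft a plain WSGI callable alongside it.
def raw_wsgi_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['plain WSGI response']

cherrypy.tree.graft(raw_wsgi_app, script_name='/raw')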


@@ -1,215 +1,104 @@
"""WSGI interface (see PEP 333 and 3333).
Note that WSGI environ keys and values are 'native strings'; that is,
whatever the type of "" is. For Python 2, that's a byte string; for Python 3,
it's a unicode string. But PEP 3333 says: "even if Python's str type is
actually Unicode "under the hood", the content of native strings must
still be translatable to bytes via the Latin-1 encoding!"
"""
"""WSGI interface (see PEP 333)."""
import StringIO as _StringIO
import sys as _sys
import io
import six
import cherrypy as _cherrypy
from cherrypy._cpcompat import ntob, ntou
from cherrypy import _cperror
from cherrypy.lib import httputil
from cherrypy.lib import is_closable_iterator
def downgrade_wsgi_ux_to_1x(environ):
"""Return a new environ dict for WSGI 1.x from the given WSGI u.x environ.
"""
env1x = {}
url_encoding = environ[ntou('wsgi.url_encoding')]
for k, v in list(environ.items()):
if k in [ntou('PATH_INFO'), ntou('SCRIPT_NAME'), ntou('QUERY_STRING')]:
v = v.encode(url_encoding)
elif isinstance(v, six.text_type):
v = v.encode('ISO-8859-1')
env1x[k.encode('ISO-8859-1')] = v
return env1x
from cherrypy.lib import http as _http
class VirtualHost(object):
"""Select a different WSGI application based on the Host header.
This can be useful when running multiple sites within one CP server.
It allows several domains to point to different applications. For example::
root = Root()
RootApp = cherrypy.Application(root)
Domain2App = cherrypy.Application(root)
SecureApp = cherrypy.Application(Secure())
vhost = cherrypy._cpwsgi.VirtualHost(
RootApp,
domains={
'www.domain2.example': Domain2App,
'www.domain2.example:443': SecureApp,
},
)
vhost = cherrypy._cpwsgi.VirtualHost(RootApp,
domains={'www.domain2.example': Domain2App,
'www.domain2.example:443': SecureApp,
})
cherrypy.tree.graft(vhost)
default: required. The default WSGI application.
use_x_forwarded_host: if True (the default), any "X-Forwarded-Host"
request header will be used instead of the "Host" header. This
is commonly added by HTTP servers (such as Apache) when proxying.
domains: a dict of {host header value: application} pairs.
The incoming "Host" request header is looked up in this dict,
and, if a match is found, the corresponding WSGI application
will be called instead of the default. Note that you often need
separate entries for "example.com" and "www.example.com".
In addition, "Host" headers may contain the port number.
"""
default = None
"""Required. The default WSGI application."""
use_x_forwarded_host = True
"""If True (the default), any "X-Forwarded-Host"
request header will be used instead of the "Host" header. This
is commonly added by HTTP servers (such as Apache) when proxying."""
domains = {}
"""A dict of {host header value: application} pairs.
The incoming "Host" request header is looked up in this dict,
and, if a match is found, the corresponding WSGI application
will be called instead of the default. Note that you often need
separate entries for "example.com" and "www.example.com".
In addition, "Host" headers may contain the port number.
"""
def __init__(self, default, domains=None, use_x_forwarded_host=True):
self.default = default
self.domains = domains or {}
self.use_x_forwarded_host = use_x_forwarded_host
def __call__(self, environ, start_response):
domain = environ.get('HTTP_HOST', '')
if self.use_x_forwarded_host:
domain = environ.get('HTTP_X_FORWARDED_HOST', domain)
nextapp = self.domains.get(domain)
if nextapp is None:
nextapp = self.default
return nextapp(environ, start_response)
class InternalRedirector(object):
"""WSGI middleware that handles raised cherrypy.InternalRedirect."""
# WSGI-to-CP Adapter #
def __init__(self, nextapp, recursive=False):
self.nextapp = nextapp
class AppResponse(object):
throws = (KeyboardInterrupt, SystemExit)
request = None
def __init__(self, environ, start_response, cpapp, recursive=False):
self.redirections = []
self.recursive = recursive
def __call__(self, environ, start_response):
redirections = []
while True:
environ = environ.copy()
try:
return self.nextapp(environ, start_response)
except _cherrypy.InternalRedirect:
ir = _sys.exc_info()[1]
sn = environ.get('SCRIPT_NAME', '')
path = environ.get('PATH_INFO', '')
qs = environ.get('QUERY_STRING', '')
# Add the *previous* path_info + qs to redirections.
old_uri = sn + path
if qs:
old_uri += '?' + qs
redirections.append(old_uri)
if not self.recursive:
# Check to see if the new URI has been redirected to
# already
new_uri = sn + ir.path
if ir.query_string:
new_uri += '?' + ir.query_string
if new_uri in redirections:
ir.request.close()
tmpl = (
'InternalRedirector visited the same URL twice: %r'
)
raise RuntimeError(tmpl % new_uri)
# Munge the environment and try again.
environ['REQUEST_METHOD'] = 'GET'
environ['PATH_INFO'] = ir.path
environ['QUERY_STRING'] = ir.query_string
environ['wsgi.input'] = io.BytesIO()
environ['CONTENT_LENGTH'] = '0'
environ['cherrypy.previous_request'] = ir.request
class ExceptionTrapper(object):
"""WSGI middleware that traps exceptions."""
def __init__(self, nextapp, throws=(KeyboardInterrupt, SystemExit)):
self.nextapp = nextapp
self.throws = throws
def __call__(self, environ, start_response):
return _TrappedResponse(
self.nextapp,
environ,
start_response,
self.throws
)
class _TrappedResponse(object):
response = iter([])
def __init__(self, nextapp, environ, start_response, throws):
self.nextapp = nextapp
self.environ = environ
self.start_response = start_response
self.throws = throws
self.started_response = False
self.response = self.trap(
self.nextapp, self.environ, self.start_response,
)
self.iter_response = iter(self.response)
def __iter__(self):
self.started_response = True
return self
def __next__(self):
return self.trap(next, self.iter_response)
# todo: https://pythonhosted.org/six/#six.Iterator
if six.PY2:
next = __next__
def close(self):
if hasattr(self.response, 'close'):
self.response.close()
def trap(self, func, *args, **kwargs):
self.cpapp = cpapp
self.setapp()
def setapp(self):
try:
return func(*args, **kwargs)
self.request = self.get_request()
s, h, b = self.get_response()
self.iter_response = iter(b)
self.write = self.start_response(s, h)
except self.throws:
self.close()
raise
except StopIteration:
raise
except _cherrypy.InternalRedirect, ir:
self.environ['cherrypy.previous_request'] = _cherrypy.serving.request
self.close()
self.iredirect(ir.path, ir.query_string)
return
except:
if getattr(self.request, "throw_errors", False):
self.close()
raise
tb = _cperror.format_exc()
_cherrypy.log(tb, severity=40)
if not _cherrypy.request.show_tracebacks:
tb = ''
s, h, b = _cperror.bare_error(tb)
if six.PY3:
# What fun.
s = s.decode('ISO-8859-1')
h = [
(k.decode('ISO-8859-1'), v.decode('ISO-8859-1'))
for k, v in h
]
if self.started_response:
# Empty our iterable (so future calls raise StopIteration)
self.iter_response = iter([])
else:
self.iter_response = iter(b)
try:
self.start_response(s, h, _sys.exc_info())
except:
@@ -219,109 +108,130 @@ class _TrappedResponse(object):
# back to the server or gateway."
# But we still log and call close() to clean up ourselves.
_cherrypy.log(traceback=True, severity=40)
self.close()
raise
if self.started_response:
return ntob('').join(b)
def iredirect(self, path, query_string):
"""Doctor self.environ and perform an internal redirect.
When cherrypy.InternalRedirect is raised, this method is called.
It rewrites the WSGI environ using the new path and query_string,
and calls a new CherryPy Request object. Because the wsgi.input
stream may have already been consumed by the next application,
the redirected call will always be of HTTP method "GET"; therefore,
any params must be passed in the query_string argument, which is
formed from InternalRedirect.query_string when using that exception.
If you need something more complicated, make and raise your own
exception and write your own AppResponse subclass to trap it. ;)
It would be a bad idea to redirect after you've already yielded
response content, although an enterprising soul could choose
to abuse this.
"""
env = self.environ
if not self.recursive:
sn = env.get('SCRIPT_NAME', '')
qs = query_string
if qs:
qs = "?" + qs
if sn + path + qs in self.redirections:
raise RuntimeError("InternalRedirector visited the "
"same URL twice: %r + %r + %r" %
(sn, path, qs))
else:
return b
# WSGI-to-CP Adapter #
class AppResponse(object):
"""WSGI response iterable for CherryPy applications."""
def __init__(self, environ, start_response, cpapp):
self.cpapp = cpapp
try:
if six.PY2:
if environ.get(ntou('wsgi.version')) == (ntou('u'), 0):
environ = downgrade_wsgi_ux_to_1x(environ)
self.environ = environ
self.run()
r = _cherrypy.serving.response
outstatus = r.output_status
if not isinstance(outstatus, bytes):
raise TypeError('response.output_status is not a byte string.')
outheaders = []
for k, v in r.header_list:
if not isinstance(k, bytes):
tmpl = 'response.header_list key %r is not a byte string.'
raise TypeError(tmpl % k)
if not isinstance(v, bytes):
tmpl = (
'response.header_list value %r is not a byte string.'
)
raise TypeError(tmpl % v)
outheaders.append((k, v))
if six.PY3:
# According to PEP 3333, when using Python 3, the response
# status and headers must be bytes masquerading as unicode;
# that is, they must be of type "str" but are restricted to
# code points in the "latin-1" set.
outstatus = outstatus.decode('ISO-8859-1')
outheaders = [
(k.decode('ISO-8859-1'), v.decode('ISO-8859-1'))
for k, v in outheaders
]
self.iter_response = iter(r.body)
self.write = start_response(outstatus, outheaders)
except:
self.close()
raise
# Add the *previous* path_info + qs to redirections.
p = env.get('PATH_INFO', '')
qs = env.get('QUERY_STRING', '')
if qs:
qs = "?" + qs
self.redirections.append(sn + p + qs)
# Munge environment and try again.
env['REQUEST_METHOD'] = "GET"
env['PATH_INFO'] = path
env['QUERY_STRING'] = query_string
env['wsgi.input'] = _StringIO.StringIO()
env['CONTENT_LENGTH'] = "0"
self.setapp()
def __iter__(self):
return self
def __next__(self):
return next(self.iter_response)
# todo: https://pythonhosted.org/six/#six.Iterator
if six.PY2:
next = __next__
def next(self):
try:
chunk = self.iter_response.next()
# WSGI requires all data to be of type "str". This coercion should
# not take any time at all if chunk is already of type "str".
# If it's unicode, it could be a big performance hit (x ~500).
if not isinstance(chunk, str):
chunk = chunk.encode("ISO-8859-1")
return chunk
except self.throws:
self.close()
raise
except _cherrypy.InternalRedirect, ir:
self.environ['cherrypy.previous_request'] = _cherrypy.serving.request
self.close()
self.iredirect(ir.path, ir.query_string)
except StopIteration:
raise
except:
if getattr(self.request, "throw_errors", False):
self.close()
raise
tb = _cperror.format_exc()
_cherrypy.log(tb, severity=40)
if not getattr(self.request, "show_tracebacks", True):
tb = ""
s, h, b = _cperror.bare_error(tb)
# Empty our iterable (so future calls raise StopIteration)
self.iter_response = iter([])
try:
self.start_response(s, h, _sys.exc_info())
except:
# "The application must not trap any exceptions raised by
# start_response, if it called start_response with exc_info.
# Instead, it should allow such exceptions to propagate
# back to the server or gateway."
# But we still log and call close() to clean up ourselves.
_cherrypy.log(traceback=True, severity=40)
self.close()
raise
return "".join(b)
def close(self):
"""Close and de-reference the current request and response. (Core)"""
streaming = _cherrypy.serving.response.stream
self.cpapp.release_serving()
# We avoid the expense of examining the iterator to see if it's
# closable unless we are streaming the response, as that's the
# only situation where we are going to have an iterator which
# may not have been exhausted yet.
if streaming and is_closable_iterator(self.iter_response):
iter_close = self.iter_response.close
try:
iter_close()
except Exception:
_cherrypy.log(traceback=True, severity=40)
def run(self):
def get_response(self):
"""Run self.request and return its response."""
meth = self.environ['REQUEST_METHOD']
path = _http.urljoin(self.environ.get('SCRIPT_NAME', ''),
self.environ.get('PATH_INFO', ''))
qs = self.environ.get('QUERY_STRING', '')
rproto = self.environ.get('SERVER_PROTOCOL')
headers = self.translate_headers(self.environ)
rfile = self.environ['wsgi.input']
response = self.request.run(meth, path, qs, rproto, headers, rfile)
return response.status, response.header_list, response.body
def get_request(self):
"""Create a Request object using environ."""
env = self.environ.get
local = httputil.Host(
'',
int(env('SERVER_PORT', 80) or -1),
env('SERVER_NAME', ''),
)
remote = httputil.Host(
env('REMOTE_ADDR', ''),
int(env('REMOTE_PORT', -1) or -1),
env('REMOTE_HOST', ''),
)
scheme = env('wsgi.url_scheme')
sproto = env('ACTUAL_SERVER_PROTOCOL', 'HTTP/1.1')
request, resp = self.cpapp.get_serving(local, remote, scheme, sproto)
# LOGON_USER is served by IIS, and is the name of the
# user after having been mapped to a local account.
# Both IIS and Apache set REMOTE_USER, when possible.
@@ -330,113 +240,66 @@ class AppResponse(object):
request.multiprocess = self.environ['wsgi.multiprocess']
request.wsgi_environ = self.environ
request.prev = env('cherrypy.previous_request', None)
meth = self.environ['REQUEST_METHOD']
path = httputil.urljoin(
self.environ.get('SCRIPT_NAME', ''),
self.environ.get('PATH_INFO', ''),
)
qs = self.environ.get('QUERY_STRING', '')
path, qs = self.recode_path_qs(path, qs) or (path, qs)
rproto = self.environ.get('SERVER_PROTOCOL')
headers = self.translate_headers(self.environ)
rfile = self.environ['wsgi.input']
request.run(meth, path, qs, rproto, headers, rfile)
headerNames = {
'HTTP_CGI_AUTHORIZATION': 'Authorization',
'CONTENT_LENGTH': 'Content-Length',
'CONTENT_TYPE': 'Content-Type',
'REMOTE_HOST': 'Remote-Host',
'REMOTE_ADDR': 'Remote-Addr',
}
def recode_path_qs(self, path, qs):
if not six.PY3:
return
# This isn't perfect; if the given PATH_INFO is in the
# wrong encoding, it may fail to match the appropriate config
# section URI. But meh.
old_enc = self.environ.get('wsgi.url_encoding', 'ISO-8859-1')
new_enc = self.cpapp.find_config(
self.environ.get('PATH_INFO', ''),
'request.uri_encoding', 'utf-8',
)
if new_enc.lower() == old_enc.lower():
return
# Even though the path and qs are unicode, the WSGI server
# is required by PEP 3333 to coerce them to ISO-8859-1
# masquerading as unicode. So we have to encode back to
# bytes and then decode again using the "correct" encoding.
try:
return (
path.encode(old_enc).decode(new_enc),
qs.encode(old_enc).decode(new_enc),
)
except (UnicodeEncodeError, UnicodeDecodeError):
# Just pass them through without transcoding and hope.
pass
return request
headerNames = {'HTTP_CGI_AUTHORIZATION': 'Authorization',
'CONTENT_LENGTH': 'Content-Length',
'CONTENT_TYPE': 'Content-Type',
'REMOTE_HOST': 'Remote-Host',
'REMOTE_ADDR': 'Remote-Addr',
}
def translate_headers(self, environ):
"""Translate CGI-environ header names to HTTP header names."""
for cgiName in environ:
# We assume all incoming header keys are uppercase already.
if cgiName in self.headerNames:
yield self.headerNames[cgiName], environ[cgiName]
elif cgiName[:5] == 'HTTP_':
# Hackish attempt at recovering original header names.
translatedHeader = cgiName[5:].replace('_', '-')
yield translatedHeader, environ[cgiName]
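For illustration, the translation above maps a CGI-style environ to header pairs roughly like this (values are made up; note that names recovered from HTTP_* keys keep their uppercase form):

# {'CONTENT_TYPE': 'text/plain',
#  'HTTP_ACCEPT_ENCODING': 'gzip',
#  'HTTP_USER_AGENT': 'ExampleAgent/1.0'}
# yields:
#   ('Content-Type', 'text/plain')
#   ('ACCEPT-ENCODING', 'gzip')
#   ('USER-AGENT', 'ExampleAgent/1.0')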
class CPWSGIApp(object):
"""A WSGI application object for a CherryPy Application."""
pipeline = [
('ExceptionTrapper', ExceptionTrapper),
('InternalRedirector', InternalRedirector),
]
"""A list of (name, wsgiapp) pairs. Each 'wsgiapp' MUST be a
constructor that takes an initial, positional 'nextapp' argument,
plus optional keyword arguments, and returns a WSGI application
(that takes environ and start_response arguments). The 'name' can
be any you choose, and will correspond to keys in self.config."""
head = None
"""Rather than nest all apps in the pipeline on each call, it's only
done the first time, and the result is memoized into self.head. Set
this to None again if you change self.pipeline after calling self."""
config = {}
"""A dict whose keys match names listed in the pipeline. Each
value is a further dict which will be passed to the corresponding
named WSGI callable (from the pipeline) as keyword arguments."""
response_class = AppResponse
"""The class to instantiate and return as the next app in the WSGI chain.
"""A WSGI application object for a CherryPy Application.
pipeline: a list of (name, wsgiapp) pairs. Each 'wsgiapp' MUST be a
constructor that takes an initial, positional 'nextapp' argument,
plus optional keyword arguments, and returns a WSGI application
(that takes environ and start_response arguments). The 'name' can
be any you choose, and will correspond to keys in self.config.
head: rather than nest all apps in the pipeline on each call, it's only
done the first time, and the result is memoized into self.head. Set
this to None again if you change self.pipeline after calling self.
config: a dict whose keys match names listed in the pipeline. Each
value is a further dict which will be passed to the corresponding
named WSGI callable (from the pipeline) as keyword arguments.
"""
pipeline = []
head = None
config = {}
response_class = AppResponse
def __init__(self, cpapp, pipeline=None):
self.cpapp = cpapp
self.pipeline = self.pipeline[:]
if pipeline:
self.pipeline.extend(pipeline)
self.config = self.config.copy()
def tail(self, environ, start_response):
"""WSGI application callable for the actual CherryPy application.
You probably shouldn't call this; call self.__call__ instead,
so that any WSGI middleware in self.pipeline can run first.
"""
return self.response_class(environ, start_response, self.cpapp)
def __call__(self, environ, start_response):
head = self.head
if head is None:
@@ -448,19 +311,20 @@ class CPWSGIApp(object):
head = callable(head, **conf)
self.head = head
return head(environ, start_response)
def namespace_handler(self, k, v):
"""Config handler for the 'wsgi' namespace."""
if k == 'pipeline':
# Note this allows multiple 'wsgi.pipeline' config entries
# (but each entry will be processed in a 'random' order).
# It should also allow developers to set default middleware
# in code (passed to self.__init__) that deployers can add to
# (but not remove) via config.
self.pipeline.extend(v)
elif k == 'response_class':
self.response_class = v
else:
name, arg = k.split('.', 1)
bucket = self.config.setdefault(name, {})
bucket[arg] = v
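A hedged sketch of driving the 'wsgi' config namespace handled above; the middleware name and class here are invented, only the 'wsgi.pipeline' / 'wsgi.<name>.<arg>' key pattern comes from the code:

class NoopMiddleware(object):
    """Pass-through WSGI middleware used only for illustration."""
    def __init__(self, nextapp, flag=False):
        self.nextapp = nextapp
        self.flag = flag

    def __call__(self, environ, start_response):
        return self.nextapp(environ, start_response)

config = {'/': {
    # Extends CPWSGIApp.pipeline with (name, constructor) pairs.
    'wsgi.pipeline': [('noop', NoopMiddleware)],
    # Collected into self.config['noop']['flag'] and passed as a keyword arg.
    'wsgi.noop.flag': True,
}}
# cherrypy.tree.mount(Root(), '/', config=config)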


@@ -1,70 +1,68 @@
"""WSGI server interface (see PEP 333). This adds some CP-specific bits to
the framework-agnostic wsgiserver package.
"""
import sys
import cherrypy
from cherrypy import wsgiserver
class CPHTTPRequest(wsgiserver.HTTPRequest):
def __init__(self, sendall, environ, wsgi_app):
s = cherrypy.server
self.max_request_header_size = s.max_request_header_size or 0
self.max_request_body_size = s.max_request_body_size or 0
wsgiserver.HTTPRequest.__init__(self, sendall, environ, wsgi_app)
class CPHTTPConnection(wsgiserver.HTTPConnection):
RequestHandlerClass = CPHTTPRequest
class CPWSGIServer(wsgiserver.CherryPyWSGIServer):
"""Wrapper for wsgiserver.CherryPyWSGIServer.
wsgiserver has been designed to not reference CherryPy in any way,
so that it can be used in other frameworks and applications. Therefore,
we wrap it here, so we can set our own mount points from cherrypy.tree
and apply some attributes from config -> cherrypy.server -> wsgiserver.
"""
ConnectionClass = CPHTTPConnection
def __init__(self, server_adapter=cherrypy.server):
self.server_adapter = server_adapter
self.max_request_header_size = (
self.server_adapter.max_request_header_size or 0
)
self.max_request_body_size = (
self.server_adapter.max_request_body_size or 0
)
# We have to make custom subclasses of wsgiserver internals here
# so that our server.* attributes get applied to every request.
class _CPHTTPRequest(wsgiserver.HTTPRequest):
def __init__(self, sendall, environ, wsgi_app):
s = server_adapter
self.max_request_header_size = s.max_request_header_size or 0
self.max_request_body_size = s.max_request_body_size or 0
wsgiserver.HTTPRequest.__init__(self, sendall, environ, wsgi_app)
class _CPHTTPConnection(wsgiserver.HTTPConnection):
RequestHandlerClass = _CPHTTPRequest
self.ConnectionClass = _CPHTTPConnection
server_name = (self.server_adapter.socket_host or
self.server_adapter.socket_file or
None)
self.wsgi_version = self.server_adapter.wsgi_version
s = wsgiserver.CherryPyWSGIServer
s.__init__(self, server_adapter.bind_addr, cherrypy.tree,
self.server_adapter.thread_pool,
server_name,
max=self.server_adapter.thread_pool_max,
request_queue_size=self.server_adapter.socket_queue_size,
timeout=self.server_adapter.socket_timeout,
shutdown_timeout=self.server_adapter.shutdown_timeout,
accepted_queue_size=self.server_adapter.accepted_queue_size,
accepted_queue_timeout=self.server_adapter.accepted_queue_timeout,
)
self.protocol = self.server_adapter.protocol_version
self.nodelay = self.server_adapter.nodelay
self.ssl_context = self.server_adapter.ssl_context
self.ssl_certificate = self.server_adapter.ssl_certificate
self.ssl_certificate_chain = self.server_adapter.ssl_certificate_chain
self.ssl_private_key = self.server_adapter.ssl_private_key
if sys.version_info >= (3, 0):
ssl_module = self.server_adapter.ssl_module or 'builtin'
else:
ssl_module = self.server_adapter.ssl_module or 'pyopenssl'
if self.server_adapter.ssl_context:
adapter_class = wsgiserver.get_ssl_adapter_class(ssl_module)
self.ssl_adapter = adapter_class(
self.server_adapter.ssl_certificate,
self.server_adapter.ssl_private_key,
self.server_adapter.ssl_certificate_chain)
self.ssl_adapter.context = self.server_adapter.ssl_context
elif self.server_adapter.ssl_certificate:
adapter_class = wsgiserver.get_ssl_adapter_class(ssl_module)
self.ssl_adapter = adapter_class(
self.server_adapter.ssl_certificate,
self.server_adapter.ssl_private_key,
self.server_adapter.ssl_certificate_chain)
self.stats['Enabled'] = getattr(
self.server_adapter, 'statistics', False)
def error_log(self, msg='', level=20, traceback=False):
cherrypy.engine.log(msg, level, traceback)


@@ -1,298 +0,0 @@
"""
Helper functions for CP apps
"""
import six
from cherrypy._cpcompat import urljoin as _urljoin, urlencode as _urlencode
from cherrypy._cpcompat import text_or_bytes
import cherrypy
def expose(func=None, alias=None):
"""
Expose the function or class, optionally providing an alias or set of aliases.
"""
def expose_(func):
func.exposed = True
if alias is not None:
if isinstance(alias, text_or_bytes):
parents[alias.replace('.', '_')] = func
else:
for a in alias:
parents[a.replace('.', '_')] = func
return func
import sys
import types
decoratable_types = types.FunctionType, types.MethodType, type,
if six.PY2:
# Old-style classes are type types.ClassType.
decoratable_types += types.ClassType,
if isinstance(func, decoratable_types):
if alias is None:
# @expose
func.exposed = True
return func
else:
# func = expose(func, alias)
parents = sys._getframe(1).f_locals
return expose_(func)
elif func is None:
if alias is None:
# @expose()
parents = sys._getframe(1).f_locals
return expose_
else:
# @expose(alias="alias") or
# @expose(alias=["alias1", "alias2"])
parents = sys._getframe(1).f_locals
return expose_
else:
# @expose("alias") or
# @expose(["alias1", "alias2"])
parents = sys._getframe(1).f_locals
alias = func
return expose_
def popargs(*args, **kwargs):
"""A decorator for _cp_dispatch
(cherrypy.dispatch.Dispatcher.dispatch_method_name).
Optional keyword argument: handler=(Object or Function)
Provides a _cp_dispatch function that pops off path segments into
cherrypy.request.params under the names specified. The dispatch
is then forwarded on to the next vpath element.
Note that any existing (and exposed) member function of the class that
popargs is applied to will override that value of the argument. For
instance, if you have a method named "list" on the class decorated with
popargs, then accessing "/list" will call that function instead of popping
it off as the requested parameter. This restriction applies to all
_cp_dispatch functions. The only way around this restriction is to create
a "blank class" whose only function is to provide _cp_dispatch.
If there are path elements after the arguments, or more arguments
are requested than are available in the vpath, then the 'handler'
keyword argument specifies the next object to handle the parameterized
request. If handler is not specified or is None, then self is used.
If handler is a function rather than an instance, then that function
will be called with the args specified and the return value from that
function used as the next object INSTEAD of adding the parameters to
cherrypy.request.args.
This decorator may be used in one of two ways:
As a class decorator:
@cherrypy.popargs('year', 'month', 'day')
class Blog:
def index(self, year=None, month=None, day=None):
#Process the parameters here; any url like
#/, /2009, /2009/12, or /2009/12/31
#will fill in the appropriate parameters.
def create(self):
#This link will still be available at /create. Defined functions
#take precedence over arguments.
Or as a member of a class:
class Blog:
_cp_dispatch = cherrypy.popargs('year', 'month', 'day')
#...
The handler argument may be used to mix arguments with built in functions.
For instance, the following setup allows different activities at the
day, month, and year level:
class DayHandler:
def index(self, year, month, day):
#Do something with this day; probably list entries
def delete(self, year, month, day):
#Delete all entries for this day
@cherrypy.popargs('day', handler=DayHandler())
class MonthHandler:
def index(self, year, month):
#Do something with this month; probably list entries
def delete(self, year, month):
#Delete all entries for this month
@cherrypy.popargs('month', handler=MonthHandler())
class YearHandler:
def index(self, year):
#Do something with this year
#...
@cherrypy.popargs('year', handler=YearHandler())
class Root:
def index(self):
#...
"""
# Since keyword arg comes after *args, we have to process it ourselves
# for lower versions of python.
handler = None
handler_call = False
for k, v in kwargs.items():
if k == 'handler':
handler = v
else:
raise TypeError(
"cherrypy.popargs() got an unexpected keyword argument '{0}'"
.format(k)
)
import inspect
if handler is not None \
and (hasattr(handler, '__call__') or inspect.isclass(handler)):
handler_call = True
def decorated(cls_or_self=None, vpath=None):
if inspect.isclass(cls_or_self):
# cherrypy.popargs is a class decorator
cls = cls_or_self
setattr(cls, cherrypy.dispatch.Dispatcher.dispatch_method_name, decorated)
return cls
# We're in the actual function
self = cls_or_self
parms = {}
for arg in args:
if not vpath:
break
parms[arg] = vpath.pop(0)
if handler is not None:
if handler_call:
return handler(**parms)
else:
cherrypy.request.params.update(parms)
return handler
cherrypy.request.params.update(parms)
# If we are the ultimate handler, then to prevent our _cp_dispatch
# from being called again, we will resolve remaining elements through
# getattr() directly.
if vpath:
return getattr(self, vpath.pop(0), None)
else:
return self
return decorated
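As a concrete illustration of the dispatch above (URL and values are hypothetical): with @cherrypy.popargs('year', 'month', 'day') on the Blog class, a request for /2009/12/31 pops the path segments into cherrypy.request.params before index() is called:

# cherrypy.request.params -> {'year': '2009', 'month': '12', 'day': '31'}  (all strings)
# Blog.index(year='2009', month='12', day='31') then handles the request,
# while /create still dispatches to Blog.create(), since defined handlers win.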
def url(path='', qs='', script_name=None, base=None, relative=None):
"""Create an absolute URL for the given path.
If 'path' starts with a slash ('/'), this will return
(base + script_name + path + qs).
If it does not start with a slash, this returns
(base + script_name [+ request.path_info] + path + qs).
If script_name is None, cherrypy.request will be used
to find a script_name, if available.
If base is None, cherrypy.request.base will be used (if available).
Note that you can use cherrypy.tools.proxy to change this.
Finally, note that this function can be used to obtain an absolute URL
for the current request path (minus the querystring) by passing no args.
If you call url(qs=cherrypy.request.query_string), you should get the
original browser URL (assuming no internal redirections).
If relative is None or not provided, request.app.relative_urls will
be used (if available, else False). If False, the output will be an
absolute URL (including the scheme, host, vhost, and script_name).
If True, the output will instead be a URL that is relative to the
current request path, perhaps including '..' atoms. If relative is
the string 'server', the output will instead be a URL that is
relative to the server root; i.e., it will start with a slash.
"""
if isinstance(qs, (tuple, list, dict)):
qs = _urlencode(qs)
if qs:
qs = '?' + qs
if cherrypy.request.app:
if not path.startswith('/'):
# Append/remove trailing slash from path_info as needed
# (this is to support mistyped URL's without redirecting;
# if you want to redirect, use tools.trailing_slash).
pi = cherrypy.request.path_info
if cherrypy.request.is_index is True:
if not pi.endswith('/'):
pi = pi + '/'
elif cherrypy.request.is_index is False:
if pi.endswith('/') and pi != '/':
pi = pi[:-1]
if path == '':
path = pi
else:
path = _urljoin(pi, path)
if script_name is None:
script_name = cherrypy.request.script_name
if base is None:
base = cherrypy.request.base
newurl = base + script_name + path + qs
else:
# No request.app (we're being called outside a request).
# We'll have to guess the base from server.* attributes.
# This will produce very different results from the above
# if you're using vhosts or tools.proxy.
if base is None:
base = cherrypy.server.base()
path = (script_name or '') + path
newurl = base + path + qs
if './' in newurl:
# Normalize the URL by removing ./ and ../
atoms = []
for atom in newurl.split('/'):
if atom == '.':
pass
elif atom == '..':
atoms.pop()
else:
atoms.append(atom)
newurl = '/'.join(atoms)
# At this point, we should have a fully-qualified absolute URL.
if relative is None:
relative = getattr(cherrypy.request.app, 'relative_urls', False)
# See http://www.ietf.org/rfc/rfc2396.txt
if relative == 'server':
# "A relative reference beginning with a single slash character is
# termed an absolute-path reference, as defined by <abs_path>..."
# This is also sometimes called "server-relative".
newurl = '/' + '/'.join(newurl.split('/', 3)[3:])
elif relative:
# "A relative reference that does not begin with a scheme name
# or a slash character is termed a relative-path reference."
old = url(relative=False).split('/')[:-1]
new = newurl.split('/')
while old and new:
a, b = old[0], new[0]
if a != b:
break
old.pop(0)
new.pop(0)
new = (['..'] * len(old)) + new
newurl = '/'.join(new)
return newurl
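A hedged sketch of the behaviour described above; host, mount point and paths are made up:

# Inside a request for http://www.example.com/app/page with script_name '/app':
# cherrypy.url('/other')                    -> 'http://www.example.com/app/other'
# cherrypy.url('sub')                       -> '.../app/page/sub' or '.../app/sub',
#                                              depending on request.is_index
# cherrypy.url('/other', relative='server') -> '/app/other'
# cherrypy.url('/other', relative=True)     -> a path with '..' atoms relative
#                                              to the current request path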

cherrypy/cherryd Executable file → Normal file

@@ -1,6 +1,92 @@
#! /usr/bin/env python
"""The CherryPy daemon."""
import sys
import cherrypy
from cherrypy.process import plugins, servers
def start(configfiles=None, daemonize=False, environment=None,
fastcgi=False, scgi=False, pidfile=None, imports=None):
"""Subscribe all engine plugins and start the engine."""
sys.path = [''] + sys.path
for i in imports or []:
exec "import %s" % i
for c in configfiles or []:
cherrypy.config.update(c)
engine = cherrypy.engine
if environment is not None:
cherrypy.config.update({'environment': environment})
# Only daemonize if asked to.
if daemonize:
# Don't print anything to stdout/stderr.
cherrypy.config.update({'log.screen': False})
plugins.Daemonizer(engine).subscribe()
if pidfile:
plugins.PIDFile(engine, pidfile).subscribe()
if hasattr(engine, "signal_handler"):
engine.signal_handler.subscribe()
if hasattr(engine, "console_control_handler"):
engine.console_control_handler.subscribe()
if fastcgi and scgi:
# fastcgi and scgi aren't allowed together.
cherrypy.log.error("fastcgi and scgi aren't allowed together.", 'ENGINE')
sys.exit(1)
elif fastcgi or scgi:
# Turn off autoreload when using fastcgi or scgi.
cherrypy.config.update({'engine.autoreload_on': False})
# Turn off the default HTTP server (which is subscribed by default).
cherrypy.server.unsubscribe()
addr = cherrypy.server.bind_addr
if fastcgi:
f = servers.FlupFCGIServer(application=cherrypy.tree,
bindAddress=addr)
else:
f = servers.FlupSCGIServer(application=cherrypy.tree,
bindAddress=addr)
s = servers.ServerAdapter(engine, httpserver=f, bind_addr=addr)
s.subscribe()
# Always start the engine; this will start all other services
try:
engine.start()
except:
# Assume the error has been logged already via bus.log.
sys.exit(1)
else:
engine.block()
import cherrypy.daemon
if __name__ == '__main__':
cherrypy.daemon.run()
from optparse import OptionParser
p = OptionParser()
p.add_option('-c', '--config', action="append", dest='config',
help="specify config file(s)")
p.add_option('-d', action="store_true", dest='daemonize',
help="run the server as a daemon")
p.add_option('-e', '--environment', dest='environment', default=None,
help="apply the given config environment")
p.add_option('-f', action="store_true", dest='fastcgi',
help="start a fastcgi server instead of the default HTTP server")
p.add_option('-s', action="store_true", dest='scgi',
help="start a scgi server instead of the default HTTP server")
p.add_option('-i', '--import', action="append", dest='imports',
help="specify modules to import")
p.add_option('-p', '--pidfile', dest='pidfile', default=None,
help="store the process id in the given file")
options, args = p.parse_args()
start(options.config, options.daemonize,
options.environment, options.fastcgi, options.scgi, options.pidfile,
options.imports)
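A hedged sketch of invoking the daemon helper above; file and module names are made up, the flags are the ones defined by the option parser:

# Command line:
#   python cherryd -c site.conf -d -e production -p /var/run/myapp.pid -i myapp.root
# Programmatic equivalent:
#   start(configfiles=['site.conf'], daemonize=True, environment='production',
#         pidfile='/var/run/myapp.pid', imports=['myapp.root'])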


@@ -1,106 +0,0 @@
"""The CherryPy daemon."""
import sys
import cherrypy
from cherrypy.process import plugins, servers
from cherrypy import Application
def start(configfiles=None, daemonize=False, environment=None,
fastcgi=False, scgi=False, pidfile=None, imports=None,
cgi=False):
"""Subscribe all engine plugins and start the engine."""
sys.path = [''] + sys.path
for i in imports or []:
exec('import %s' % i)
for c in configfiles or []:
cherrypy.config.update(c)
# If there's only one app mounted, merge config into it.
if len(cherrypy.tree.apps) == 1:
for app in cherrypy.tree.apps.values():
if isinstance(app, Application):
app.merge(c)
engine = cherrypy.engine
if environment is not None:
cherrypy.config.update({'environment': environment})
# Only daemonize if asked to.
if daemonize:
# Don't print anything to stdout/stderr.
cherrypy.config.update({'log.screen': False})
plugins.Daemonizer(engine).subscribe()
if pidfile:
plugins.PIDFile(engine, pidfile).subscribe()
if hasattr(engine, 'signal_handler'):
engine.signal_handler.subscribe()
if hasattr(engine, 'console_control_handler'):
engine.console_control_handler.subscribe()
if (fastcgi and (scgi or cgi)) or (scgi and cgi):
cherrypy.log.error('You may only specify one of the cgi, fastcgi, and '
'scgi options.', 'ENGINE')
sys.exit(1)
elif fastcgi or scgi or cgi:
# Turn off autoreload when using *cgi.
cherrypy.config.update({'engine.autoreload.on': False})
# Turn off the default HTTP server (which is subscribed by default).
cherrypy.server.unsubscribe()
addr = cherrypy.server.bind_addr
cls = (
servers.FlupFCGIServer if fastcgi else
servers.FlupSCGIServer if scgi else
servers.FlupCGIServer
)
f = cls(application=cherrypy.tree, bindAddress=addr)
s = servers.ServerAdapter(engine, httpserver=f, bind_addr=addr)
s.subscribe()
# Always start the engine; this will start all other services
try:
engine.start()
except:
# Assume the error has been logged already via bus.log.
sys.exit(1)
else:
engine.block()
def run():
from optparse import OptionParser
p = OptionParser()
p.add_option('-c', '--config', action='append', dest='config',
help='specify config file(s)')
p.add_option('-d', action='store_true', dest='daemonize',
help='run the server as a daemon')
p.add_option('-e', '--environment', dest='environment', default=None,
help='apply the given config environment')
p.add_option('-f', action='store_true', dest='fastcgi',
help='start a fastcgi server instead of the default HTTP '
'server')
p.add_option('-s', action='store_true', dest='scgi',
help='start a scgi server instead of the default HTTP server')
p.add_option('-x', action='store_true', dest='cgi',
help='start a cgi server instead of the default HTTP server')
p.add_option('-i', '--import', action='append', dest='imports',
help='specify modules to import')
p.add_option('-p', '--pidfile', dest='pidfile', default=None,
help='store the process id in the given file')
p.add_option('-P', '--Path', action='append', dest='Path',
help='add the given paths to sys.path')
options, args = p.parse_args()
if options.Path:
for p in options.Path:
sys.path.insert(0, p)
start(options.config, options.daemonize,
options.environment, options.fastcgi, options.scgi,
options.pidfile, options.imports, options.cgi)


Binary file (image) changed, not shown: 1.4 KiB before, 1.1 KiB after.


@@ -1,65 +1,156 @@
"""CherryPy Library"""
def is_iterator(obj):
'''Returns a boolean indicating if the object provided implements
the iterator protocol (i.e. like a generator). This will return
false for objects which are iterable, but not iterators themselves.'''
from types import GeneratorType
if isinstance(obj, GeneratorType):
return True
elif not hasattr(obj, '__iter__'):
return False
else:
# Types which implement the protocol must return themselves when
# invoking 'iter' upon them.
return iter(obj) is obj
import sys as _sys
def is_closable_iterator(obj):
# Not an iterator.
if not is_iterator(obj):
return False
# A generator - the easiest thing to deal with.
import inspect
if inspect.isgenerator(obj):
return True
# A custom iterator. Look for a close method...
if not (hasattr(obj, 'close') and callable(obj.close)):
return False
# ... which doesn't require any arguments.
def modules(modulePath):
"""Load a module and retrieve a reference to that module."""
try:
inspect.getcallargs(obj.close)
except TypeError:
return False
else:
return True
mod = _sys.modules[modulePath]
if mod is None:
raise KeyError()
except KeyError:
# The last [''] is important.
mod = __import__(modulePath, globals(), locals(), [''])
return mod
def attributes(full_attribute_name):
"""Load a module and retrieve an attribute of that module."""
# Parse out the path, module, and attribute
last_dot = full_attribute_name.rfind(u".")
attr_name = full_attribute_name[last_dot + 1:]
mod_path = full_attribute_name[:last_dot]
mod = modules(mod_path)
# Let an AttributeError propagate outward.
try:
attr = getattr(mod, attr_name)
except AttributeError:
raise AttributeError("'%s' object has no attribute '%s'"
% (mod_path, attr_name))
# Return a reference to the attribute.
return attr
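For instance (dotted names chosen only as examples), attributes() resolves the module portion with modules() and then looks up the final attribute:

# attributes('os.path.join')   -> the function os.path.join
# attributes('logging.Logger') -> the class logging.Logger
# attributes('os.path.nosuch') -> raises AttributeError("'os.path' object has no attribute 'nosuch'")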
# public domain "unrepr" implementation, found on the web and then improved.
class _Builder:
def build(self, o):
m = getattr(self, 'build_' + o.__class__.__name__, None)
if m is None:
raise TypeError("unrepr does not recognize %s" %
repr(o.__class__.__name__))
return m(o)
def build_Subscript(self, o):
expr, flags, subs = o.getChildren()
expr = self.build(expr)
subs = self.build(subs)
return expr[subs]
def build_CallFunc(self, o):
children = map(self.build, o.getChildren())
callee = children.pop(0)
kwargs = children.pop() or {}
starargs = children.pop() or ()
args = tuple(children) + tuple(starargs)
return callee(*args, **kwargs)
def build_List(self, o):
return map(self.build, o.getChildren())
def build_Const(self, o):
return o.value
def build_Dict(self, o):
d = {}
i = iter(map(self.build, o.getChildren()))
for el in i:
d[el] = i.next()
return d
def build_Tuple(self, o):
return tuple(self.build_List(o))
def build_Name(self, o):
if o.name == 'None':
return None
if o.name == 'True':
return True
if o.name == 'False':
return False
# See if the Name is a package or module. If it is, import it.
try:
return modules(o.name)
except ImportError:
pass
# See if the Name is in __builtin__.
try:
import __builtin__
return getattr(__builtin__, o.name)
except AttributeError:
pass
raise TypeError("unrepr could not resolve the name %s" % repr(o.name))
def build_Add(self, o):
left, right = map(self.build, o.getChildren())
return left + right
def build_Getattr(self, o):
parent = self.build(o.expr)
return getattr(parent, o.attrname)
def build_NoneType(self, o):
return None
def build_UnarySub(self, o):
return -self.build(o.getChildren()[0])
def build_UnaryAdd(self, o):
return self.build(o.getChildren()[0])
def unrepr(s):
"""Return a Python object compiled from a string."""
if not s:
return s
try:
import compiler
except ImportError:
# Fallback to eval when compiler package is not available,
# e.g. IronPython 1.0.
return eval(s)
p = compiler.parse("__tempvalue__ = " + s)
obj = p.getChildren()[1].getChildren()[0].getChildren()[1]
return _Builder().build(obj)
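A small illustration of unrepr() on the Python 2 compiler-based path above (values are arbitrary):

# unrepr("{'on': True, 'ports': [8080, 8081], 'factor': -1.5}")
#   -> {'on': True, 'ports': [8080, 8081], 'factor': -1.5}
# unrepr('')  -> ''   (falsy input is returned unchanged)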
class file_generator(object):
"""Yield the given input (a file object) in chunks (default 64k). (Core)"""
def __init__(self, input, chunkSize=65536):
self.input = input
self.chunkSize = chunkSize
def __iter__(self):
return self
def __next__(self):
chunk = self.input.read(self.chunkSize)
if chunk:
return chunk
else:
if hasattr(self.input, 'close'):
self.input.close()
raise StopIteration()
next = __next__
def file_generator_limited(fileobj, count, chunk_size=65536):
@@ -75,11 +166,3 @@ def file_generator_limited(fileobj, count, chunk_size=65536):
remaining -= chunklen
yield chunk
def set_vary_header(response, header_name):
'Add a Vary header to a response'
varies = response.headers.get('Vary', '')
varies = [x.strip() for x in varies.split(',') if x.strip()]
if header_name not in varies:
varies.append(header_name)
response.headers['Vary'] = ', '.join(varies)
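For example (a sketch), adding a second header name to an existing Vary value:

# response.headers['Vary'] == 'Accept-Encoding'
# set_vary_header(response, 'Cookie')
# response.headers['Vary'] == 'Accept-Encoding, Cookie'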


@@ -1,97 +1,79 @@
import logging
import cherrypy
from cherrypy.lib import httpauth
def check_auth(users, encrypt=None, realm=None):
"""If an authorization header contains credentials, return True or False.
"""
request = cherrypy.serving.request
if 'authorization' in request.headers:
"""If an authorization header contains credentials, return True, else False."""
if 'authorization' in cherrypy.request.headers:
# make sure the provided credentials are correctly set
ah = httpauth.parseAuthorization(request.headers['authorization'])
ah = httpauth.parseAuthorization(cherrypy.request.headers['authorization'])
if ah is None:
raise cherrypy.HTTPError(400, 'Bad Request')
if not encrypt:
encrypt = httpauth.DIGEST_AUTH_ENCODERS[httpauth.MD5]
if hasattr(users, '__call__'):
if callable(users):
try:
# backward compatibility
users = users() # expect it to return a dictionary
users = users() # expect it to return a dictionary
if not isinstance(users, dict):
raise ValueError(
'Authentication users must be a dictionary')
raise ValueError, "Authentication users must be a dictionary"
# fetch the user password
password = users.get(ah['username'], None)
password = users.get(ah["username"], None)
except TypeError:
# returns a password (encrypted or clear text)
password = users(ah['username'])
password = users(ah["username"])
else:
if not isinstance(users, dict):
raise ValueError('Authentication users must be a dictionary')
raise ValueError, "Authentication users must be a dictionary"
# fetch the user password
password = users.get(ah['username'], None)
password = users.get(ah["username"], None)
# validate the authorization by re-computing it here
# and compare it with what the user-agent provided
if httpauth.checkResponse(ah, password, method=request.method,
if httpauth.checkResponse(ah, password, method=cherrypy.request.method,
encrypt=encrypt, realm=realm):
request.login = ah['username']
cherrypy.request.login = ah["username"]
return True
request.login = False
if ah.get('username') or ah.get('password'):
logging.info('Attempt to login with wrong credentials from %s',
cherrypy.request.headers['Remote-Addr'])
cherrypy.request.login = False
return False
def basic_auth(realm, users, encrypt=None, debug=False):
"""If auth fails, raise 401 with a basic authentication header.
realm
A string containing the authentication realm.
users
A dict of the form: {username: password} or a callable returning
a dict.
encrypt
callable used to encrypt the password returned from the user-agent.
if None it defaults to a md5 encryption.
"""
if check_auth(users, encrypt):
if debug:
cherrypy.log('Auth successful', 'TOOLS.BASIC_AUTH')
return
# inform the user-agent this path is protected
cherrypy.serving.response.headers[
'www-authenticate'] = httpauth.basicAuth(realm)
raise cherrypy.HTTPError(
401, 'You are not authorized to access that resource')
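A hedged configuration sketch for the basic_auth tool above; realm, path and credentials are invented, and the stored password value is whatever form the encrypt callable produces (md5 hex by default):

# app_config = {'/admin': {
#     'tools.basic_auth.on': True,
#     'tools.basic_auth.realm': 'Some site',
#     'tools.basic_auth.users': {'admin': '<md5-hex-of-password>'},
# }}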
def digest_auth(realm, users, debug=False):
"""If auth fails, raise 401 with a digest authentication header.
realm
A string containing the authentication realm.
users
A dict of the form: {username: password} or a callable returning
a dict.
"""
if check_auth(users, realm=realm):
if debug:
cherrypy.log('Auth successful', 'TOOLS.DIGEST_AUTH')
return
# inform the user-agent this path is protected
cherrypy.serving.response.headers[
'www-authenticate'] = httpauth.digestAuth(realm)
raise cherrypy.HTTPError(
401, 'You are not authorized to access that resource')


@@ -1,90 +0,0 @@
# This file is part of CherryPy <http://www.cherrypy.org/>
# -*- coding: utf-8 -*-
# vim:ts=4:sw=4:expandtab:fileencoding=utf-8
import binascii
import cherrypy
from cherrypy._cpcompat import base64_decode
__doc__ = """This module provides a CherryPy 3.x tool which implements
the server-side of HTTP Basic Access Authentication, as described in
:rfc:`2617`.
Example usage, using the built-in checkpassword_dict function which uses a dict
as the credentials store::
userpassdict = {'bird' : 'bebop', 'ornette' : 'wayout'}
checkpassword = cherrypy.lib.auth_basic.checkpassword_dict(userpassdict)
basic_auth = {'tools.auth_basic.on': True,
'tools.auth_basic.realm': 'earth',
'tools.auth_basic.checkpassword': checkpassword,
}
app_config = { '/' : basic_auth }
"""
__author__ = 'visteya'
__date__ = 'April 2009'
def checkpassword_dict(user_password_dict):
"""Returns a checkpassword function which checks credentials
against a dictionary of the form: {username : password}.
If you want a simple dictionary-based authentication scheme, use
checkpassword_dict(my_credentials_dict) as the value for the
checkpassword argument to basic_auth().
"""
def checkpassword(realm, user, password):
p = user_password_dict.get(user)
return p and p == password or False
return checkpassword
def basic_auth(realm, checkpassword, debug=False):
"""A CherryPy tool which hooks at before_handler to perform
HTTP Basic Access Authentication, as specified in :rfc:`2617`.
If the request has an 'authorization' header with a 'Basic' scheme, this
tool attempts to authenticate the credentials supplied in that header. If
the request has no 'authorization' header, or if it does but the scheme is
not 'Basic', or if authentication fails, the tool sends a 401 response with
a 'WWW-Authenticate' Basic header.
realm
A string containing the authentication realm.
checkpassword
A callable which checks the authentication credentials.
Its signature is checkpassword(realm, username, password). where
username and password are the values obtained from the request's
'authorization' header. If authentication succeeds, checkpassword
returns True, else it returns False.
"""
if '"' in realm:
raise ValueError('Realm cannot contain the " (quote) character.')
request = cherrypy.serving.request
auth_header = request.headers.get('authorization')
if auth_header is not None:
# split() error, base64.decodestring() error
with cherrypy.HTTPError.handle((ValueError, binascii.Error), 400, 'Bad Request'):
scheme, params = auth_header.split(' ', 1)
if scheme.lower() == 'basic':
username, password = base64_decode(params).split(':', 1)
if checkpassword(realm, username, password):
if debug:
cherrypy.log('Auth succeeded', 'TOOLS.AUTH_BASIC')
request.login = username
return # successful authentication
# Respond with 401 status and a WWW-Authenticate header
cherrypy.serving.response.headers[
'www-authenticate'] = 'Basic realm="%s"' % realm
raise cherrypy.HTTPError(
401, 'You are not authorized to access that resource')


@@ -1,390 +0,0 @@
# This file is part of CherryPy <http://www.cherrypy.org/>
# -*- coding: utf-8 -*-
# vim:ts=4:sw=4:expandtab:fileencoding=utf-8
import time
from hashlib import md5
import cherrypy
from cherrypy._cpcompat import ntob, parse_http_list, parse_keqv_list
__doc__ = """An implementation of the server-side of HTTP Digest Access
Authentication, which is described in :rfc:`2617`.
Example usage, using the built-in get_ha1_dict_plain function which uses a dict
of plaintext passwords as the credentials store::
userpassdict = {'alice' : '4x5istwelve'}
get_ha1 = cherrypy.lib.auth_digest.get_ha1_dict_plain(userpassdict)
digest_auth = {'tools.auth_digest.on': True,
'tools.auth_digest.realm': 'wonderland',
'tools.auth_digest.get_ha1': get_ha1,
'tools.auth_digest.key': 'a565c27146791cfb',
}
app_config = { '/' : digest_auth }
"""
__author__ = 'visteya'
__date__ = 'April 2009'
md5_hex = lambda s: md5(ntob(s)).hexdigest()
qop_auth = 'auth'
qop_auth_int = 'auth-int'
valid_qops = (qop_auth, qop_auth_int)
valid_algorithms = ('MD5', 'MD5-sess')
def TRACE(msg):
cherrypy.log(msg, context='TOOLS.AUTH_DIGEST')
# Three helper functions for users of the tool, providing three variants
# of get_ha1() functions for three different kinds of credential stores.
def get_ha1_dict_plain(user_password_dict):
"""Returns a get_ha1 function which obtains a plaintext password from a
dictionary of the form: {username : password}.
If you want a simple dictionary-based authentication scheme, with plaintext
passwords, use get_ha1_dict_plain(my_userpass_dict) as the value for the
get_ha1 argument to digest_auth().
"""
def get_ha1(realm, username):
password = user_password_dict.get(username)
if password:
return md5_hex('%s:%s:%s' % (username, realm, password))
return None
return get_ha1
def get_ha1_dict(user_ha1_dict):
"""Returns a get_ha1 function which obtains a HA1 password hash from a
dictionary of the form: {username : HA1}.
If you want a dictionary-based authentication scheme, but with
pre-computed HA1 hashes instead of plain-text passwords, use
get_ha1_dict(my_userha1_dict) as the value for the get_ha1
argument to digest_auth().
"""
def get_ha1(realm, username):
return user_ha1_dict.get(username)
return get_ha1
def get_ha1_file_htdigest(filename):
"""Returns a get_ha1 function which obtains a HA1 password hash from a
flat file with lines of the same format as that produced by the Apache
htdigest utility. For example, for realm 'wonderland', username 'alice',
and password '4x5istwelve', the htdigest line would be::
alice:wonderland:3238cdfe91a8b2ed8e39646921a02d4c
If you want to use an Apache htdigest file as the credentials store,
then use get_ha1_file_htdigest(my_htdigest_file) as the value for the
get_ha1 argument to digest_auth(). It is recommended that the filename
argument be an absolute path, to avoid problems.
"""
def get_ha1(realm, username):
result = None
f = open(filename, 'r')
for line in f:
u, r, ha1 = line.rstrip().split(':')
if u == username and r == realm:
result = ha1
break
f.close()
return result
return get_ha1
def synthesize_nonce(s, key, timestamp=None):
"""Synthesize a nonce value which resists spoofing and can be checked
for staleness. Returns a string suitable as the value for 'nonce' in
the www-authenticate header.
s
A string related to the resource, such as the hostname of the server.
key
A secret string known only to the server.
timestamp
An integer seconds-since-the-epoch timestamp
"""
if timestamp is None:
timestamp = int(time.time())
h = md5_hex('%s:%s:%s' % (timestamp, s, key))
nonce = '%s:%s' % (timestamp, h)
return nonce
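A sketch of the nonce format produced above; the hash part is left symbolic rather than computed:

# synthesize_nonce('www.example.com', 'secret-key', timestamp=1414900000)
#   -> '1414900000:<md5 hex of "1414900000:www.example.com:secret-key">'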
def H(s):
"""The hash function H"""
return md5_hex(s)
class HttpDigestAuthorization (object):
"""Class to parse a Digest Authorization header and perform re-calculation
of the digest.
"""
def errmsg(self, s):
return 'Digest Authorization header: %s' % s
def __init__(self, auth_header, http_method, debug=False):
self.http_method = http_method
self.debug = debug
scheme, params = auth_header.split(' ', 1)
self.scheme = scheme.lower()
if self.scheme != 'digest':
raise ValueError('Authorization scheme is not "Digest"')
self.auth_header = auth_header
# make a dict of the params
items = parse_http_list(params)
paramsd = parse_keqv_list(items)
self.realm = paramsd.get('realm')
self.username = paramsd.get('username')
self.nonce = paramsd.get('nonce')
self.uri = paramsd.get('uri')
self.method = paramsd.get('method')
self.response = paramsd.get('response') # the response digest
self.algorithm = paramsd.get('algorithm', 'MD5').upper()
self.cnonce = paramsd.get('cnonce')
self.opaque = paramsd.get('opaque')
self.qop = paramsd.get('qop') # qop
self.nc = paramsd.get('nc') # nonce count
# perform some correctness checks
if self.algorithm not in valid_algorithms:
raise ValueError(
self.errmsg("Unsupported value for algorithm: '%s'" %
self.algorithm))
has_reqd = (
self.username and
self.realm and
self.nonce and
self.uri and
self.response
)
if not has_reqd:
raise ValueError(
self.errmsg('Not all required parameters are present.'))
if self.qop:
if self.qop not in valid_qops:
raise ValueError(
self.errmsg("Unsupported value for qop: '%s'" % self.qop))
if not (self.cnonce and self.nc):
raise ValueError(
self.errmsg('If qop is sent then '
'cnonce and nc MUST be present'))
else:
if self.cnonce or self.nc:
raise ValueError(
self.errmsg('If qop is not sent, '
'neither cnonce nor nc can be present'))
def __str__(self):
return 'authorization : %s' % self.auth_header
def validate_nonce(self, s, key):
"""Validate the nonce.
Returns True if nonce was generated by synthesize_nonce() and the
timestamp is not spoofed, else returns False.
s
A string related to the resource, such as the hostname of
the server.
key
A secret string known only to the server.
Both s and key must be the same values which were used to synthesize
the nonce we are trying to validate.
"""
try:
timestamp, hashpart = self.nonce.split(':', 1)
s_timestamp, s_hashpart = synthesize_nonce(
s, key, timestamp).split(':', 1)
is_valid = s_hashpart == hashpart
if self.debug:
TRACE('validate_nonce: %s' % is_valid)
return is_valid
except ValueError: # split() error
pass
return False
def is_nonce_stale(self, max_age_seconds=600):
"""Returns True if a validated nonce is stale. The nonce contains a
timestamp in plaintext and also a secure hash of the timestamp.
You should first validate the nonce to ensure the plaintext
timestamp is not spoofed.
"""
try:
timestamp, hashpart = self.nonce.split(':', 1)
if int(timestamp) + max_age_seconds > int(time.time()):
return False
except ValueError: # int() error
pass
if self.debug:
TRACE('nonce is stale')
return True
def HA2(self, entity_body=''):
"""Returns the H(A2) string. See :rfc:`2617` section 3.2.2.3."""
# RFC 2617 3.2.2.3
# If the "qop" directive's value is "auth" or is unspecified,
# then A2 is:
# A2 = method ":" digest-uri-value
#
# If the "qop" value is "auth-int", then A2 is:
# A2 = method ":" digest-uri-value ":" H(entity-body)
if self.qop is None or self.qop == 'auth':
a2 = '%s:%s' % (self.http_method, self.uri)
elif self.qop == 'auth-int':
a2 = '%s:%s:%s' % (self.http_method, self.uri, H(entity_body))
else:
# in theory, this should never happen, since I validate qop in
# __init__()
raise ValueError(self.errmsg('Unrecognized value for qop!'))
return H(a2)
def request_digest(self, ha1, entity_body=''):
"""Calculates the Request-Digest. See :rfc:`2617` section 3.2.2.1.
ha1
The HA1 string obtained from the credentials store.
entity_body
If 'qop' is set to 'auth-int', then A2 includes a hash
of the "entity body". The entity body is the part of the
message which follows the HTTP headers. See :rfc:`2617` section
4.3. This refers to the entity the user agent sent in the
request which has the Authorization header. Typically GET
requests don't have an entity, and POST requests do.
"""
ha2 = self.HA2(entity_body)
# Request-Digest -- RFC 2617 3.2.2.1
if self.qop:
req = '%s:%s:%s:%s:%s' % (
self.nonce, self.nc, self.cnonce, self.qop, ha2)
else:
req = '%s:%s' % (self.nonce, ha2)
# RFC 2617 3.2.2.2
#
# If the "algorithm" directive's value is "MD5" or is unspecified,
# then A1 is:
# A1 = unq(username-value) ":" unq(realm-value) ":" passwd
#
# If the "algorithm" directive's value is "MD5-sess", then A1 is
# calculated only once - on the first request by the client following
# receipt of a WWW-Authenticate challenge from the server.
# A1 = H( unq(username-value) ":" unq(realm-value) ":" passwd )
# ":" unq(nonce-value) ":" unq(cnonce-value)
if self.algorithm == 'MD5-sess':
ha1 = H('%s:%s:%s' % (ha1, self.nonce, self.cnonce))
digest = H('%s:%s' % (ha1, req))
return digest
def www_authenticate(realm, key, algorithm='MD5', nonce=None, qop=qop_auth,
stale=False):
"""Constructs a WWW-Authenticate header for Digest authentication."""
if qop not in valid_qops:
raise ValueError("Unsupported value for qop: '%s'" % qop)
if algorithm not in valid_algorithms:
raise ValueError("Unsupported value for algorithm: '%s'" % algorithm)
if nonce is None:
nonce = synthesize_nonce(realm, key)
s = 'Digest realm="%s", nonce="%s", algorithm="%s", qop="%s"' % (
realm, nonce, algorithm, qop)
if stale:
s += ', stale="true"'
return s
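A sketch of the challenge header this helper produces (realm and key are examples; the nonce part changes per call because it embeds the current time):
challenge = www_authenticate('Protected Area', 'server-secret')
# e.g. Digest realm="Protected Area", nonce="1400000000:ab12...",
#      algorithm="MD5", qop="auth"
assert challenge.startswith('Digest realm="Protected Area"')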
def digest_auth(realm, get_ha1, key, debug=False):
"""A CherryPy tool which hooks at before_handler to perform
HTTP Digest Access Authentication, as specified in :rfc:`2617`.
If the request has an 'authorization' header with a 'Digest' scheme,
this tool authenticates the credentials supplied in that header.
If the request has no 'authorization' header, or if it does but the
scheme is not "Digest", or if authentication fails, the tool sends
a 401 response with a 'WWW-Authenticate' Digest header.
realm
A string containing the authentication realm.
get_ha1
A callable which looks up a username in a credentials store
and returns the HA1 string, which is defined in the RFC to be
MD5(username : realm : password). The function's signature is:
``get_ha1(realm, username)``
where username is obtained from the request's 'authorization' header.
If username is not found in the credentials store, get_ha1() returns
None.
key
A secret string known only to the server, used in the synthesis
of nonces.
"""
request = cherrypy.serving.request
auth_header = request.headers.get('authorization')
nonce_is_stale = False
if auth_header is not None:
with cherrypy.HTTPError.handle(ValueError, 400,
'The Authorization header could not be parsed.'):
auth = HttpDigestAuthorization(
auth_header, request.method, debug=debug)
if debug:
TRACE(str(auth))
if auth.validate_nonce(realm, key):
ha1 = get_ha1(realm, auth.username)
if ha1 is not None:
# note that for request.body to be available we need to
# hook in at before_handler, not on_start_resource like
# 3.1.x digest_auth does.
digest = auth.request_digest(ha1, entity_body=request.body)
if digest == auth.response: # authenticated
if debug:
TRACE('digest matches auth.response')
# Now check if nonce is stale.
# The choice of ten minutes' lifetime for nonce is somewhat
# arbitrary
nonce_is_stale = auth.is_nonce_stale(max_age_seconds=600)
if not nonce_is_stale:
request.login = auth.username
if debug:
TRACE('authentication of %s successful' %
auth.username)
return
# Respond with 401 status and a WWW-Authenticate header
header = www_authenticate(realm, key, stale=nonce_is_stale)
if debug:
TRACE(header)
cherrypy.serving.response.headers['WWW-Authenticate'] = header
raise cherrypy.HTTPError(
401, 'You are not authorized to access that resource')
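A configuration sketch, assuming the tool above is registered as cherrypy.tools.auth_digest (as in stock CherryPy); the credential store, realm and key below are hypothetical.
import hashlib

def _get_ha1(realm, username):
    # Hypothetical in-memory credential store.
    users = {'alice': 'secret'}
    password = users.get(username)
    if password is None:
        return None
    return hashlib.md5(('%s:%s:%s' % (username, realm, password)).encode()).hexdigest()

app_conf = {'/': {
    'tools.auth_digest.on': True,
    'tools.auth_digest.realm': 'Protected Area',
    'tools.auth_digest.get_ha1': _get_ha1,
    'tools.auth_digest.key': 'a-random-per-deployment-secret',
}}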

View File

@@ -1,181 +1,32 @@
"""
CherryPy implements a simple caching system as a pluggable Tool. This tool
tries to be an (in-process) HTTP/1.1-compliant cache. It's not quite there
yet, but it's probably good enough for most sites.
In general, GET responses are cached (along with selecting headers) and, if
another request arrives for the same resource, the caching Tool will return 304
Not Modified if possible, or serve the cached response otherwise. It also sets
request.cached to True if serving a cached representation, and sets
request.cacheable to False (so it doesn't get cached again).
If POST, PUT, or DELETE requests are made for a cached resource, they
invalidate (delete) any cached response.
Usage
=====
Configuration file example::
[/]
tools.caching.on = True
tools.caching.delay = 3600
You may use a class other than the default
:class:`MemoryCache<cherrypy.lib.caching.MemoryCache>` by supplying the config
entry ``cache_class``; supply the full dotted name of the replacement class
as the config value. It must implement the basic methods ``get``, ``put``,
``delete``, and ``clear``.
You may set any attribute, including overriding methods, on the cache
instance by providing them in config. The above sets the
:attr:`delay<cherrypy.lib.caching.MemoryCache.delay>` attribute, for example.
"""
import datetime
import sys
import threading
import time
import cherrypy
from cherrypy.lib import cptools, httputil
from cherrypy._cpcompat import copyitems, ntob, sorted, Event
from cherrypy.lib import cptools, http
class Cache(object):
"""Base class for Cache implementations."""
def get(self):
"""Return the current variant if in the cache, else None."""
raise NotImplementedError
def put(self, obj, size):
"""Store the current variant in the cache."""
raise NotImplementedError
def delete(self):
"""Remove ALL cached variants of the current resource."""
raise NotImplementedError
def clear(self):
"""Reset the cache to its initial, empty state."""
raise NotImplementedError
# ------------------------------ Memory Cache ------------------------------- #
class AntiStampedeCache(dict):
"""A storage system for cached items which reduces stampede collisions."""
def wait(self, key, timeout=5, debug=False):
"""Return the cached value for the given key, or None.
If timeout is not None, and the value is already
being calculated by another thread, wait until the given timeout has
elapsed. If the value is available before the timeout expires, it is
returned. If not, None is returned, and a sentinel is placed in the cache
to signal other threads to wait.
If timeout is None, no waiting is performed nor sentinels used.
"""
value = self.get(key)
if isinstance(value, Event):
if timeout is None:
# Ignore the other thread and recalc it ourselves.
if debug:
cherrypy.log('No timeout', 'TOOLS.CACHING')
return None
# Wait until it's done or times out.
if debug:
cherrypy.log('Waiting up to %s seconds' %
timeout, 'TOOLS.CACHING')
value.wait(timeout)
if value.result is not None:
# The other thread finished its calculation. Use it.
if debug:
cherrypy.log('Result!', 'TOOLS.CACHING')
return value.result
# Timed out. Stick an Event in the slot so other threads wait
# on this one to finish calculating the value.
if debug:
cherrypy.log('Timed out', 'TOOLS.CACHING')
e = threading.Event()
e.result = None
dict.__setitem__(self, key, e)
return None
elif value is None:
# Stick an Event in the slot so other threads wait
# on this one to finish calculating the value.
if debug:
cherrypy.log('Timed out', 'TOOLS.CACHING')
e = threading.Event()
e.result = None
dict.__setitem__(self, key, e)
return value
def __setitem__(self, key, value):
"""Set the cached value for the given key."""
existing = self.get(key)
dict.__setitem__(self, key, value)
if isinstance(existing, Event):
# Set Event.result so other threads waiting on it have
# immediate access without needing to poll the cache again.
existing.result = value
existing.set()
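A minimal sketch of the anti-stampede handshake (key and value are made up): the first miss stores a sentinel Event and returns None so the caller computes the value; assigning the result wakes any threads waiting on that sentinel.
c = AntiStampedeCache()
assert c.wait('page', timeout=0.01) is None   # miss: sentinel stored, we compute
# ... this thread renders the page while other callers block in wait() ...
c['page'] = 'rendered body'                   # publish result, wake the waiters
assert c.wait('page') == 'rendered body'      # subsequent calls return it directly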
class MemoryCache(Cache):
"""An in-memory cache for varying response content.
Each key in self.store is a URI, and each value is an AntiStampedeCache.
The response for any given URI may vary based on the values of
"selecting request headers"; that is, those named in the Vary
response header. We assume the list of header names to be constant
for each URI throughout the lifetime of the application, and store
that list in ``self.store[uri].selecting_headers``.
The items contained in ``self.store[uri]`` have keys which are tuples of
request header values (in the same order as the names in its
selecting_headers), and values which are the actual responses.
"""
class MemoryCache:
maxobjects = 1000
"""The maximum number of cached objects; defaults to 1000."""
maxobj_size = 100000
"""The maximum size of each cached object in bytes; defaults to 100 KB."""
maxsize = 10000000
"""The maximum size of the entire cache in bytes; defaults to 10 MB."""
delay = 600
"""Seconds until the cached content expires; defaults to 600 (10 minutes).
"""
antistampede_timeout = 5
"""Seconds to wait for other threads to release a cache lock."""
expire_freq = 0.1
"""Seconds to sleep between cache expiration sweeps."""
debug = False
def __init__(self):
self.clear()
# Run self.expire_cache in a separate daemon thread.
t = threading.Thread(target=self.expire_cache, name='expire_cache')
self.expiration_thread = t
t.daemon = True
if hasattr(threading.Thread, "daemon"):
# Python 2.6+
t.daemon = True
else:
t.setDaemon(True)
t.start()
def clear(self):
"""Reset the cache to its initial, empty state."""
self.store = {}
self.cache = {}
self.expirations = {}
self.tot_puts = 0
self.tot_gets = 0
@@ -183,262 +34,191 @@ class MemoryCache(Cache):
self.tot_expires = 0
self.tot_non_modified = 0
self.cursize = 0
def key(self):
return cherrypy.url(qs=cherrypy.request.query_string)
def expire_cache(self):
"""Continuously examine cached objects, expiring stale ones.
This function is designed to be run in its own daemon thread,
referenced at ``self.expiration_thread``.
"""
# It's possible that "time" will be set to None
# expire_cache runs in a separate thread which the servers are
# not aware of. It's possible that "time" will be set to None
# arbitrarily, so we check "while time" to avoid exceptions.
# See tickets #99 and #180 for more information.
while time:
now = time.time()
# Must make a copy of expirations so it doesn't change size
# during iteration
for expiration_time, objects in copyitems(self.expirations):
for expiration_time, objects in self.expirations.items():
if expiration_time <= now:
for obj_size, uri, sel_header_values in objects:
for obj_size, obj_key in objects:
try:
del self.store[uri][tuple(sel_header_values)]
del self.cache[obj_key]
self.tot_expires += 1
self.cursize -= obj_size
except KeyError:
# the key may have been deleted elsewhere
pass
del self.expirations[expiration_time]
time.sleep(self.expire_freq)
time.sleep(0.1)
def get(self):
"""Return the current variant if in the cache, else None."""
request = cherrypy.serving.request
"""Return the object if in the cache, else None."""
self.tot_gets += 1
uri = cherrypy.url(qs=request.query_string)
uricache = self.store.get(uri)
if uricache is None:
return None
header_values = [request.headers.get(h, '')
for h in uricache.selecting_headers]
variant = uricache.wait(key=tuple(sorted(header_values)),
timeout=self.antistampede_timeout,
debug=self.debug)
if variant is not None:
cache_item = self.cache.get(self.key(), None)
if cache_item:
self.tot_hist += 1
return variant
def put(self, variant, size):
"""Store the current variant in the cache."""
request = cherrypy.serving.request
response = cherrypy.serving.response
uri = cherrypy.url(qs=request.query_string)
uricache = self.store.get(uri)
if uricache is None:
uricache = AntiStampedeCache()
uricache.selecting_headers = [
e.value for e in response.headers.elements('Vary')]
self.store[uri] = uricache
if len(self.store) < self.maxobjects:
total_size = self.cursize + size
return cache_item
else:
return None
def put(self, obj):
if len(self.cache) < self.maxobjects:
# Size check no longer includes header length
obj_size = len(obj[2])
total_size = self.cursize + obj_size
# checks if there's space for the object
if (size < self.maxobj_size and total_size < self.maxsize):
# add to the expirations list
expiration_time = response.time + self.delay
if (obj_size < self.maxobj_size and total_size < self.maxsize):
# add to the expirations list and cache
expiration_time = cherrypy.response.time + self.delay
obj_key = self.key()
bucket = self.expirations.setdefault(expiration_time, [])
bucket.append((size, uri, uricache.selecting_headers))
# add to the cache
header_values = [request.headers.get(h, '')
for h in uricache.selecting_headers]
uricache[tuple(sorted(header_values))] = variant
bucket.append((obj_size, obj_key))
self.cache[obj_key] = obj
self.tot_puts += 1
self.cursize = total_size
def delete(self):
"""Remove ALL cached variants of the current resource."""
uri = cherrypy.url(qs=cherrypy.serving.request.query_string)
self.store.pop(uri, None)
self.cache.pop(self.key(), None)
def get(invalid_methods=('POST', 'PUT', 'DELETE'), debug=False, **kwargs):
def get(invalid_methods=("POST", "PUT", "DELETE"), **kwargs):
"""Try to obtain cached output. If fresh enough, raise HTTPError(304).
If POST, PUT, or DELETE:
* invalidates (deletes) any cached response for this resource
* sets request.cached = False
* sets request.cacheable = False
else if a cached copy exists:
* sets request.cached = True
* sets request.cacheable = False
* sets response.headers to the cached values
* checks the cached Last-Modified response header against the
current If-(Un)Modified-Since request headers; raises 304
if necessary.
current If-(Un)Modified-Since request headers; raises 304
if necessary.
* sets response.status and response.body to the cached values
* returns True
otherwise:
* sets request.cached = False
* sets request.cacheable = True
* returns False
"""
request = cherrypy.serving.request
response = cherrypy.serving.response
if not hasattr(cherrypy, '_cache'):
# Make a process-wide Cache object.
cherrypy._cache = kwargs.pop('cache_class', MemoryCache)()
# Take all remaining kwargs and set them on the Cache object.
for k, v in kwargs.items():
setattr(cherrypy._cache, k, v)
cherrypy._cache.debug = debug
request = cherrypy.request
# POST, PUT, DELETE should invalidate (delete) the cached copy.
# See http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10.
if request.method in invalid_methods:
if debug:
cherrypy.log('request.method %r in invalid_methods %r' %
(request.method, invalid_methods), 'TOOLS.CACHING')
cherrypy._cache.delete()
request.cached = False
request.cacheable = False
return False
if 'no-cache' in [e.value for e in request.headers.elements('Pragma')]:
request.cached = False
request.cacheable = True
return False
cache_data = cherrypy._cache.get()
request.cached = bool(cache_data)
request.cacheable = not request.cached
if request.cached:
# Serve the cached copy.
max_age = cherrypy._cache.delay
for v in [e.value for e in request.headers.elements('Cache-Control')]:
atoms = v.split('=', 1)
directive = atoms.pop(0)
if directive == 'max-age':
if len(atoms) != 1 or not atoms[0].isdigit():
raise cherrypy.HTTPError(
400, 'Invalid Cache-Control header')
max_age = int(atoms[0])
break
elif directive == 'no-cache':
if debug:
cherrypy.log(
'Ignoring cache due to Cache-Control: no-cache',
'TOOLS.CACHING')
request.cached = c = bool(cache_data)
request.cacheable = not c
if c:
response = cherrypy.response
s, h, b, create_time, original_req_headers = cache_data
# Check 'Vary' selecting headers. If any headers mentioned in "Vary"
# differ between the cached and current request, bail out and
# let the rest of CP handle the request. This should properly
# mimic the behavior of isolated caches as RFC 2616 assumes:
# "If the selecting request header fields for the cached entry
# do not match the selecting request header fields of the new
# request, then the cache MUST NOT use a cached entry to satisfy
# the request unless it first relays the new request to the origin
# server in a conditional request and the server responds with
# 304 (Not Modified), including an entity tag or Content-Location
# that indicates the entity to be used.
# TODO: can we store multiple variants based on Vary'd headers?
for header_element in h.elements('Vary'):
key = header_element.value
if original_req_headers[key] != request.headers.get(key, 'missing'):
request.cached = False
request.cacheable = True
return False
if debug:
cherrypy.log('Reading response from cache', 'TOOLS.CACHING')
s, h, b, create_time = cache_data
age = int(response.time - create_time)
if (age > max_age):
if debug:
cherrypy.log('Ignoring cache due to age > %d' % max_age,
'TOOLS.CACHING')
request.cached = False
request.cacheable = True
return False
# Copy the response headers. See
# https://github.com/cherrypy/cherrypy/issues/721.
response.headers = rh = httputil.HeaderMap()
# Copy the response headers. See http://www.cherrypy.org/ticket/721.
response.headers = rh = http.HeaderMap()
for k in h:
dict.__setitem__(rh, k, dict.__getitem__(h, k))
# Add the required Age header
response.headers['Age'] = str(age)
response.headers["Age"] = str(int(response.time - create_time))
try:
# Note that validate_since depends on a Last-Modified header;
# this was put into the cached copy, and should have been
# resurrected just above (response.headers = cache_data[1]).
cptools.validate_since()
except cherrypy.HTTPRedirect:
x = sys.exc_info()[1]
except cherrypy.HTTPRedirect, x:
if x.status == 304:
cherrypy._cache.tot_non_modified += 1
raise
# serve it & get out from the request
response.status = s
response.body = b
else:
if debug:
cherrypy.log('request is not cached', 'TOOLS.CACHING')
return request.cached
return c
def tee_output():
"""Tee response output to cache storage. Internal."""
# Used by CachingTool by attaching to request.hooks
request = cherrypy.serving.request
if 'no-store' in request.headers.values('Cache-Control'):
return
def tee(body):
"""Tee response.body into a list."""
if ('no-cache' in response.headers.values('Pragma') or
'no-store' in response.headers.values('Cache-Control')):
for chunk in body:
yield chunk
return
output = []
for chunk in body:
output.append(chunk)
yield chunk
# save the cache data
body = ntob('').join(output)
cherrypy._cache.put((response.status, response.headers or {},
body, response.time), len(body))
response = cherrypy.serving.response
# Might as well do this here; why cache if the body isn't consumed?
if response.headers.get('Pragma', None) != 'no-cache':
# save the cache data
body = ''.join(output)
vary = [he.value for he in
cherrypy.response.headers.elements('Vary')]
if vary:
sel_headers = dict([(k, v) for k, v
in cherrypy.request.headers.iteritems()
if k in vary])
else:
sel_headers = {}
cherrypy._cache.put((response.status, response.headers or {},
body, response.time, sel_headers))
response = cherrypy.response
response.body = tee(response.body)
def expires(secs=0, force=False, debug=False):
def expires(secs=0, force=False):
"""Tool for influencing cache mechanisms using the 'Expires' header.
secs
Must be either an int or a datetime.timedelta, and indicates the
number of seconds between response.time and when the response should
expire. The 'Expires' header will be set to response.time + secs.
If secs is zero, the 'Expires' header is set one year in the past, and
the following "cache prevention" headers are also set:
* Pragma: no-cache
* Cache-Control': no-cache, must-revalidate
force
If False, the following headers are checked:
* Etag
* Last-Modified
* Age
* Expires
If any are already present, none of the above response headers are set.
'secs' must be either an int or a datetime.timedelta, and indicates the
number of seconds between response.time and when the response should
expire. The 'Expires' header will be set to (response.time + secs).
If 'secs' is zero, the 'Expires' header is set one year in the past, and
the following "cache prevention" headers are also set:
'Pragma': 'no-cache'
'Cache-Control': 'no-cache, must-revalidate'
If 'force' is False (the default), the following headers are checked:
'Etag', 'Last-Modified', 'Age', 'Expires'. If any are already present,
none of the above response headers are set.
"""
response = cherrypy.serving.response
response = cherrypy.response
headers = response.headers
cacheable = False
if not force:
# some header names that indicate that the response can be cached
@@ -446,25 +226,20 @@ def expires(secs=0, force=False, debug=False):
if indicator in headers:
cacheable = True
break
if not cacheable and not force:
if debug:
cherrypy.log('request is not cacheable', 'TOOLS.EXPIRES')
else:
if debug:
cherrypy.log('request is cacheable', 'TOOLS.EXPIRES')
if not cacheable:
if isinstance(secs, datetime.timedelta):
secs = (86400 * secs.days) + secs.seconds
if secs == 0:
if force or ('Pragma' not in headers):
headers['Pragma'] = 'no-cache'
if cherrypy.serving.request.protocol >= (1, 1):
if force or 'Cache-Control' not in headers:
headers['Cache-Control'] = 'no-cache, must-revalidate'
if force or "Pragma" not in headers:
headers["Pragma"] = "no-cache"
if cherrypy.request.protocol >= (1, 1):
if force or "Cache-Control" not in headers:
headers["Cache-Control"] = "no-cache, must-revalidate"
# Set an explicit Expires date in the past.
expiry = httputil.HTTPDate(1169942400.0)
expiry = http.HTTPDate(1169942400.0)
else:
expiry = httputil.HTTPDate(response.time + secs)
if force or 'Expires' not in headers:
headers['Expires'] = expiry
expiry = http.HTTPDate(response.time + secs)
if force or "Expires" not in headers:
headers["Expires"] = expiry

View File

@@ -1,58 +1,55 @@
"""Code-coverage tools for CherryPy.
To use this module, or the coverage tools in the test suite,
you need to download 'coverage.py', either Gareth Rees' `original
implementation <http://www.garethrees.org/2001/12/04/python-coverage/>`_
or Ned Batchelder's `enhanced version:
<http://www.nedbatchelder.com/code/modules/coverage.html>`_
you need to download 'coverage.py', either Gareth Rees' original
implementation:
http://www.garethrees.org/2001/12/04/python-coverage/
To turn on coverage tracing, use the following code::
or Ned Batchelder's enhanced version:
http://www.nedbatchelder.com/code/modules/coverage.html
To turn on coverage tracing, use the following code:
cherrypy.engine.subscribe('start', covercp.start)
cherrypy.engine.subscribe('start_thread', covercp.start)
DO NOT subscribe anything on the 'start_thread' channel, as previously
recommended. Calling start once in the main thread should be sufficient
to start coverage on all threads. Calling start again in each thread
effectively clears any coverage data gathered up to that point.
Run your code, then use the ``covercp.serve()`` function to browse the
Run your code, then use the covercp.serve() function to browse the
results in a web browser. If you run this module from the command line,
it will call ``serve()`` for you.
it will call serve() for you.
"""
import re
import sys
import cgi
import os
import os.path
import urllib
import os, os.path
localFile = os.path.join(os.path.dirname(__file__), "coverage.cache")
import cherrypy
from cherrypy._cpcompat import quote_plus
localFile = os.path.join(os.path.dirname(__file__), 'coverage.cache')
the_coverage = None
try:
from coverage import coverage
the_coverage = coverage(data_file=localFile)
def start():
the_coverage.start()
import cStringIO as StringIO
except ImportError:
# Setting the_coverage to None will raise errors
import StringIO
try:
from coverage import the_coverage as coverage
def start(threadid=None):
coverage.start()
except ImportError:
# Setting coverage to None will raise errors
# that need to be trapped downstream.
the_coverage = None
coverage = None
import warnings
warnings.warn(
'No code coverage will be performed; '
'coverage.py could not be imported.')
def start():
warnings.warn("No code coverage will be performed; coverage.py could not be imported.")
def start(threadid=None):
pass
start.priority = 20
# Guess initial depth to hide. FIXME: this doesn't work for non-cherrypy stuff
import cherrypy
initial_base = os.path.dirname(cherrypy.__file__)
TEMPLATE_MENU = """<html>
<head>
<title>CherryPy Coverage Menu</title>
@@ -77,7 +74,7 @@ TEMPLATE_MENU = """<html>
font-size: small;
font-weight: bold;
font-style: italic;
margin-top: 5px;
margin-top: 5px;
}
input { border: 1px solid #ccc; padding: 2px; }
.directory {
@@ -126,18 +123,15 @@ TEMPLATE_FORM = """
<div id="options">
<form action='menu' method=GET>
<input type='hidden' name='base' value='%(base)s' />
Show percentages
<input type='checkbox' %(showpct)s name='showpct' value='checked' /><br />
Hide files over
<input type='text' id='pct' name='pct' value='%(pct)s' size='3' />%%<br />
Show percentages <input type='checkbox' %(showpct)s name='showpct' value='checked' /><br />
Hide files over <input type='text' id='pct' name='pct' value='%(pct)s' size='3' />%%<br />
Exclude files matching<br />
<input type='text' id='exclude' name='exclude'
value='%(exclude)s' size='20' />
<input type='text' id='exclude' name='exclude' value='%(exclude)s' size='20' />
<br />
<input type='submit' value='Change view' id="submit"/>
</form>
</div>"""
</div>"""
TEMPLATE_FRAMESET = """<html>
<head><title>CherryPy coverage data</title></head>
@@ -146,7 +140,7 @@ TEMPLATE_FRAMESET = """<html>
<frame name='main' src='' />
</frameset>
</html>
"""
""" % initial_base.lower()
TEMPLATE_COVERAGE = """<html>
<head>
@@ -184,10 +178,7 @@ TEMPLATE_LOC_EXCLUDED = """<tr class="excluded">
<td>%s</td>
</tr>\n"""
TEMPLATE_ITEM = (
"%s%s<a class='file' href='report?name=%s' target='main'>%s</a>\n"
)
TEMPLATE_ITEM = "%s%s<a class='file' href='report?name=%s' target='main'>%s</a>\n"
def _percent(statements, missing):
s = len(statements)
@@ -196,40 +187,32 @@ def _percent(statements, missing):
return int(round(100.0 * e / s))
return 0
def _show_branch(root, base, path, pct=0, showpct=False, exclude='',
coverage=the_coverage):
def _show_branch(root, base, path, pct=0, showpct=False, exclude=""):
# Show the directory name and any of our children
dirs = [k for k, v in root.items() if v]
dirs = [k for k, v in root.iteritems() if v]
dirs.sort()
for name in dirs:
newpath = os.path.join(path, name)
if newpath.lower().startswith(base):
relpath = newpath[len(base):]
yield '| ' * relpath.count(os.sep)
yield (
"<a class='directory' "
"href='menu?base=%s&exclude=%s'>%s</a>\n" %
(newpath, quote_plus(exclude), name)
)
for chunk in _show_branch(
root[name], base, newpath, pct, showpct,
exclude, coverage=coverage
):
yield "| " * relpath.count(os.sep)
yield "<a class='directory' href='menu?base=%s&exclude=%s'>%s</a>\n" % \
(newpath, urllib.quote_plus(exclude), name)
for chunk in _show_branch(root[name], base, newpath, pct, showpct, exclude):
yield chunk
# Now list the files
if path.lower().startswith(base):
relpath = path[len(base):]
files = [k for k, v in root.items() if not v]
files = [k for k, v in root.iteritems() if not v]
files.sort()
for name in files:
newpath = os.path.join(path, name)
pc_str = ''
pc_str = ""
if showpct:
try:
_, statements, _, missing, _ = coverage.analysis2(newpath)
@@ -238,24 +221,22 @@ def _show_branch(root, base, path, pct=0, showpct=False, exclude='',
pass
else:
pc = _percent(statements, missing)
pc_str = ('%3d%% ' % pc).replace(' ', '&nbsp;')
pc_str = ("%3d%% " % pc).replace(' ','&nbsp;')
if pc < float(pct) or pc == -1:
pc_str = "<span class='fail'>%s</span>" % pc_str
else:
pc_str = "<span class='pass'>%s</span>" % pc_str
yield TEMPLATE_ITEM % ('| ' * (relpath.count(os.sep) + 1),
yield TEMPLATE_ITEM % ("| " * (relpath.count(os.sep) + 1),
pc_str, newpath, name)
def _skip_file(path, exclude):
if exclude:
return bool(re.search(exclude, path))
def _graft(path, tree):
d = tree
p = path
atoms = []
while True:
@@ -264,82 +245,72 @@ def _graft(path, tree):
break
atoms.append(tail)
atoms.append(p)
if p != '/':
atoms.append('/')
if p != "/":
atoms.append("/")
atoms.reverse()
for node in atoms:
if node:
d = d.setdefault(node, {})
def get_tree(base, exclude, coverage=the_coverage):
def get_tree(base, exclude):
"""Return covered module names as a nested dict."""
tree = {}
runs = coverage.data.executed_files()
for path in runs:
if not _skip_file(path, exclude) and not os.path.isdir(path):
_graft(path, tree)
coverage.get_ready()
runs = coverage.cexecuted.keys()
if runs:
for path in runs:
if not _skip_file(path, exclude) and not os.path.isdir(path):
_graft(path, tree)
return tree
class CoverStats(object):
def __init__(self, coverage, root=None):
self.coverage = coverage
if root is None:
# Guess initial depth. Files outside this path will not be
# reachable from the web interface.
import cherrypy
root = os.path.dirname(cherrypy.__file__)
self.root = root
@cherrypy.expose
def index(self):
return TEMPLATE_FRAMESET % self.root.lower()
@cherrypy.expose
def menu(self, base='/', pct='50', showpct='',
return TEMPLATE_FRAMESET
index.exposed = True
def menu(self, base="/", pct="50", showpct="",
exclude=r'python\d\.\d|test|tut\d|tutorial'):
# The coverage module uses all-lower-case names.
base = base.lower().rstrip(os.sep)
yield TEMPLATE_MENU
yield TEMPLATE_FORM % locals()
# Start by showing links for parent paths
yield "<div id='crumbs'>"
path = ''
path = ""
atoms = base.split(os.sep)
atoms.pop()
for atom in atoms:
path += atom + os.sep
yield ("<a href='menu?base=%s&exclude=%s'>%s</a> %s"
% (path, quote_plus(exclude), atom, os.sep))
yield '</div>'
% (path, urllib.quote_plus(exclude), atom, os.sep))
yield "</div>"
yield "<div id='tree'>"
# Then display the tree
tree = get_tree(base, exclude, self.coverage)
tree = get_tree(base, exclude)
if not tree:
yield '<p>No modules covered.</p>'
yield "<p>No modules covered.</p>"
else:
for chunk in _show_branch(tree, base, '/', pct,
showpct == 'checked', exclude,
coverage=self.coverage):
for chunk in _show_branch(tree, base, "/", pct,
showpct=='checked', exclude):
yield chunk
yield '</div>'
yield '</body></html>'
yield "</div>"
yield "</body></html>"
menu.exposed = True
def annotated_file(self, filename, statements, excluded, missing):
source = open(filename, 'r')
buffer = []
for lineno, line in enumerate(source.readlines()):
lineno += 1
line = line.strip('\n\r')
line = line.strip("\n\r")
empty_the_buffer = True
if lineno in excluded:
template = TEMPLATE_LOC_EXCLUDED
@@ -355,11 +326,10 @@ class CoverStats(object):
yield template % (lno, cgi.escape(pastline))
buffer = []
yield template % (lineno, cgi.escape(line))
@cherrypy.expose
def report(self, name):
filename, statements, excluded, missing, _ = self.coverage.analysis2(
name)
coverage.get_ready()
filename, statements, excluded, missing, _ = coverage.analysis2(name)
pc = _percent(statements, missing)
yield TEMPLATE_COVERAGE % dict(name=os.path.basename(name),
fullpath=name,
@@ -371,21 +341,21 @@ class CoverStats(object):
yield '</table>'
yield '</body>'
yield '</html>'
report.exposed = True
def serve(path=localFile, port=8080, root=None):
def serve(path=localFile, port=8080):
if coverage is None:
raise ImportError('The coverage module could not be imported.')
from coverage import coverage
cov = coverage(data_file=path)
cov.load()
raise ImportError("The coverage module could not be imported.")
coverage.cache_default = path
import cherrypy
cherrypy.config.update({'server.socket_port': int(port),
cherrypy.config.update({'server.socket_port': port,
'server.thread_pool': 10,
'environment': 'production',
'environment': "production",
})
cherrypy.quickstart(CoverStats(cov, root))
cherrypy.quickstart(CoverStats())
if __name__ == '__main__':
if __name__ == "__main__":
serve(*tuple(sys.argv[1:]))
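A usage sketch from an application's point of view, following the module docstring above (the import path and port are assumptions): subscribe start on the engine, exercise the app, then browse the results.
import cherrypy
from cherrypy.lib import covercp

cherrypy.engine.subscribe('start', covercp.start)
# ... mount and run the application under test, then stop the engine ...
# covercp.serve(port=8081)   # browse the gathered coverage data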

View File

@@ -1,690 +0,0 @@
"""CPStats, a package for collecting and reporting on program statistics.
Overview
========
Statistics about program operation are an invaluable monitoring and debugging
tool. Unfortunately, the gathering and reporting of these critical values is
usually ad-hoc. This package aims to add a centralized place for gathering
statistical performance data, a structure for recording that data which
provides for extrapolation of that data into more useful information,
and a method of serving that data to both human investigators and
monitoring software. Let's examine each of those in more detail.
Data Gathering
--------------
Just as Python's `logging` module provides a common importable for gathering
and sending messages, performance statistics would benefit from a similar
common mechanism, and one that does *not* require each package which wishes
to collect stats to import a third-party module. Therefore, we choose to
re-use the `logging` module by adding a `statistics` object to it.
That `logging.statistics` object is a nested dict. It is not a custom class,
because that would:
1. require libraries and applications to import a third-party module in
order to participate
2. inhibit innovation in extrapolation approaches and in reporting tools, and
3. be slow.
There are, however, some specifications regarding the structure of the dict.::
{
+----"SQLAlchemy": {
| "Inserts": 4389745,
| "Inserts per Second":
| lambda s: s["Inserts"] / (time() - s["Start"]),
| C +---"Table Statistics": {
| o | "widgets": {-----------+
N | l | "Rows": 1.3M, | Record
a | l | "Inserts": 400, |
m | e | },---------------------+
e | c | "froobles": {
s | t | "Rows": 7845,
p | i | "Inserts": 0,
a | o | },
c | n +---},
e | "Slow Queries":
| [{"Query": "SELECT * FROM widgets;",
| "Processing Time": 47.840923343,
| },
| ],
+----},
}
The `logging.statistics` dict has four levels. The topmost level is nothing
more than a set of names to introduce modularity, usually along the lines of
package names. If the SQLAlchemy project wanted to participate, for example,
it might populate the item `logging.statistics['SQLAlchemy']`, whose value
would be a second-layer dict we call a "namespace". Namespaces help multiple
packages to avoid collisions over key names, and make reports easier to read,
to boot. The maintainers of SQLAlchemy should feel free to use more than one
namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case
or other syntax constraints on the namespace names; they should be chosen
to be maximally readable by humans (neither too short nor too long).
Each namespace, then, is a dict of named statistical values, such as
'Requests/sec' or 'Uptime'. You should choose names which will look
good on a report: spaces and capitalization are just fine.
In addition to scalars, values in a namespace MAY be a (third-layer)
dict, or a list, called a "collection". For example, the CherryPy
:class:`StatsTool` keeps track of what each request is doing (or has most
recently done) in a 'Requests' collection, where each key is a thread ID; each
value in the subdict MUST be a fourth dict (whew!) of statistical data about
each thread. We call each subdict in the collection a "record". Similarly,
the :class:`StatsTool` also keeps a list of slow queries, where each record
contains data about each slow query, in order.
Values in a namespace or record may also be functions, which brings us to:
Extrapolation
-------------
The collection of statistical data needs to be fast, as close to unnoticeable
as possible to the host program. That requires us to minimize I/O, for example,
but in Python it also means we need to minimize function calls. So when you
are designing your namespace and record values, try to insert the most basic
scalar values you already have on hand.
When it comes time to report on the gathered data, however, we usually have
much more freedom in what we can calculate. Therefore, whenever reporting
tools (like the provided :class:`StatsPage` CherryPy class) fetch the contents
of `logging.statistics` for reporting, they first call
`extrapolate_statistics` (passing the whole `statistics` dict as the only
argument). This makes a deep copy of the statistics dict so that the
reporting tool can both iterate over it and even change it without harming
the original. But it also expands any functions in the dict by calling them.
For example, you might have a 'Current Time' entry in the namespace with the
value "lambda scope: time.time()". The "scope" parameter is the current
namespace dict (or record, if we're currently expanding one of those
instead), allowing you access to existing static entries. If you're truly
evil, you can even modify more than one entry at a time.
However, don't try to calculate an entry and then use its value in further
extrapolations; the order in which the functions are called is not guaranteed.
This can lead to a certain amount of duplicated work (or a redesign of your
schema), but that's better than complicating the spec.
After the whole thing has been extrapolated, it's time for:
Reporting
---------
The :class:`StatsPage` class grabs the `logging.statistics` dict, extrapolates
it all, and then transforms it to HTML for easy viewing. Each namespace gets
its own header and attribute table, plus an extra table for each collection.
This is NOT part of the statistics specification; other tools can format how
they like.
You can control which columns are output and how they are formatted by updating
StatsPage.formatting, which is a dict that mirrors the keys and nesting of
`logging.statistics`. The difference is that, instead of data values, it has
formatting values. Use None for a given key to indicate to the StatsPage that a
given column should not be output. Use a string with formatting
(such as '%.3f') to interpolate the value(s), or use a callable (such as
lambda v: v.isoformat()) for more advanced formatting. Any entry which is not
mentioned in the formatting dict is output unchanged.
Monitoring
----------
Although the HTML output takes pains to assign unique id's to each <td> with
statistical data, you're probably better off fetching /cpstats/data, which
outputs the whole (extrapolated) `logging.statistics` dict in JSON format.
That is probably easier to parse, and doesn't have any formatting controls,
so you get the "original" data in a consistently-serialized format.
Note: there's no treatment yet for datetime objects. Try time.time() instead
for now if you can. Nagios will probably thank you.
Turning Collection Off
----------------------
It is recommended each namespace have an "Enabled" item which, if False,
stops collection (but not reporting) of statistical data. Applications
SHOULD provide controls to pause and resume collection by setting these
entries to False or True, if present.
Usage
=====
To collect statistics on CherryPy applications::
from cherrypy.lib import cpstats
appconfig['/']['tools.cpstats.on'] = True
To collect statistics on your own code::
import logging
# Initialize the repository
if not hasattr(logging, 'statistics'): logging.statistics = {}
# Initialize my namespace
mystats = logging.statistics.setdefault('My Stuff', {})
# Initialize my namespace's scalars and collections
mystats.update({
'Enabled': True,
'Start Time': time.time(),
'Important Events': 0,
'Events/Second': lambda s: (
(s['Important Events'] / (time.time() - s['Start Time']))),
})
...
for event in events:
...
# Collect stats
if mystats.get('Enabled', False):
mystats['Important Events'] += 1
To report statistics::
root.cpstats = cpstats.StatsPage()
To format statistics reports::
See 'Reporting', above.
"""
import logging
import os
import sys
import threading
import time
import cherrypy
from cherrypy._cpcompat import json
# ------------------------------- Statistics -------------------------------- #
if not hasattr(logging, 'statistics'):
logging.statistics = {}
def extrapolate_statistics(scope):
"""Return an extrapolated copy of the given scope."""
c = {}
for k, v in list(scope.items()):
if isinstance(v, dict):
v = extrapolate_statistics(v)
elif isinstance(v, (list, tuple)):
v = [extrapolate_statistics(record) for record in v]
elif hasattr(v, '__call__'):
v = v(scope)
c[k] = v
return c
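A minimal sketch of extrapolation: callables in a namespace are replaced by their return value, computed against the enclosing dict (the names and numbers are invented).
def _demo_extrapolate():
    ns = {'Start Time': 0.0,
          'Requests': 10,
          'Requests/Second': lambda s: s['Requests'] / (time.time() - s['Start Time'])}
    flat = extrapolate_statistics({'My App': ns})
    # The copy carries the computed number; the original dict still holds the lambda.
    assert isinstance(flat['My App']['Requests/Second'], float)
    assert callable(ns['Requests/Second'])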
# -------------------- CherryPy Applications Statistics --------------------- #
appstats = logging.statistics.setdefault('CherryPy Applications', {})
appstats.update({
'Enabled': True,
'Bytes Read/Request': lambda s: (
s['Total Requests'] and
(s['Total Bytes Read'] / float(s['Total Requests'])) or
0.0
),
'Bytes Read/Second': lambda s: s['Total Bytes Read'] / s['Uptime'](s),
'Bytes Written/Request': lambda s: (
s['Total Requests'] and
(s['Total Bytes Written'] / float(s['Total Requests'])) or
0.0
),
'Bytes Written/Second': lambda s: (
s['Total Bytes Written'] / s['Uptime'](s)
),
'Current Time': lambda s: time.time(),
'Current Requests': 0,
'Requests/Second': lambda s: float(s['Total Requests']) / s['Uptime'](s),
'Server Version': cherrypy.__version__,
'Start Time': time.time(),
'Total Bytes Read': 0,
'Total Bytes Written': 0,
'Total Requests': 0,
'Total Time': 0,
'Uptime': lambda s: time.time() - s['Start Time'],
'Requests': {},
})
proc_time = lambda s: time.time() - s['Start Time']
class ByteCountWrapper(object):
"""Wraps a file-like object, counting the number of bytes read."""
def __init__(self, rfile):
self.rfile = rfile
self.bytes_read = 0
def read(self, size=-1):
data = self.rfile.read(size)
self.bytes_read += len(data)
return data
def readline(self, size=-1):
data = self.rfile.readline(size)
self.bytes_read += len(data)
return data
def readlines(self, sizehint=0):
# Shamelessly stolen from StringIO
total = 0
lines = []
line = self.readline()
while line:
lines.append(line)
total += len(line)
if 0 < sizehint <= total:
break
line = self.readline()
return lines
def close(self):
self.rfile.close()
def __iter__(self):
return self
def next(self):
data = self.rfile.next()
self.bytes_read += len(data)
return data
average_uriset_time = lambda s: s['Count'] and (s['Sum'] / s['Count']) or 0
def _get_threading_ident():
if sys.version_info >= (3, 3):
return threading.get_ident()
return threading._get_ident()
class StatsTool(cherrypy.Tool):
"""Record various information about the current request."""
def __init__(self):
cherrypy.Tool.__init__(self, 'on_end_request', self.record_stop)
def _setup(self):
"""Hook this tool into cherrypy.request.
The standard CherryPy request object will automatically call this
method when the tool is "turned on" in config.
"""
if appstats.get('Enabled', False):
cherrypy.Tool._setup(self)
self.record_start()
def record_start(self):
"""Record the beginning of a request."""
request = cherrypy.serving.request
if not hasattr(request.rfile, 'bytes_read'):
request.rfile = ByteCountWrapper(request.rfile)
request.body.fp = request.rfile
r = request.remote
appstats['Current Requests'] += 1
appstats['Total Requests'] += 1
appstats['Requests'][_get_threading_ident()] = {
'Bytes Read': None,
'Bytes Written': None,
# Use a lambda so the ip gets updated by tools.proxy later
'Client': lambda s: '%s:%s' % (r.ip, r.port),
'End Time': None,
'Processing Time': proc_time,
'Request-Line': request.request_line,
'Response Status': None,
'Start Time': time.time(),
}
def record_stop(
self, uriset=None, slow_queries=1.0, slow_queries_count=100,
debug=False, **kwargs):
"""Record the end of a request."""
resp = cherrypy.serving.response
w = appstats['Requests'][_get_threading_ident()]
r = cherrypy.request.rfile.bytes_read
w['Bytes Read'] = r
appstats['Total Bytes Read'] += r
if resp.stream:
w['Bytes Written'] = 'chunked'
else:
cl = int(resp.headers.get('Content-Length', 0))
w['Bytes Written'] = cl
appstats['Total Bytes Written'] += cl
w['Response Status'] = getattr(
resp, 'output_status', None) or resp.status
w['End Time'] = time.time()
p = w['End Time'] - w['Start Time']
w['Processing Time'] = p
appstats['Total Time'] += p
appstats['Current Requests'] -= 1
if debug:
cherrypy.log('Stats recorded: %s' % repr(w), 'TOOLS.CPSTATS')
if uriset:
rs = appstats.setdefault('URI Set Tracking', {})
r = rs.setdefault(uriset, {
'Min': None, 'Max': None, 'Count': 0, 'Sum': 0,
'Avg': average_uriset_time})
if r['Min'] is None or p < r['Min']:
r['Min'] = p
if r['Max'] is None or p > r['Max']:
r['Max'] = p
r['Count'] += 1
r['Sum'] += p
if slow_queries and p > slow_queries:
sq = appstats.setdefault('Slow Queries', [])
sq.append(w.copy())
if len(sq) > slow_queries_count:
sq.pop(0)
cherrypy.tools.cpstats = StatsTool()
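A configuration sketch for the tool just registered; the thresholds and the URI-set label are examples (they are forwarded to record_stop() above).
app_conf = {'/': {
    'tools.cpstats.on': True,
    'tools.cpstats.uriset': 'api',          # aggregate per-request timing under this label
    'tools.cpstats.slow_queries': 0.5,      # record requests slower than 0.5 s
    'tools.cpstats.slow_queries_count': 50,
}}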
# ---------------------- CherryPy Statistics Reporting ---------------------- #
thisdir = os.path.abspath(os.path.dirname(__file__))
missing = object()
locale_date = lambda v: time.strftime('%c', time.gmtime(v))
iso_format = lambda v: time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(v))
def pause_resume(ns):
def _pause_resume(enabled):
pause_disabled = ''
resume_disabled = ''
if enabled:
resume_disabled = 'disabled="disabled" '
else:
pause_disabled = 'disabled="disabled" '
return """
<form action="pause" method="POST" style="display:inline">
<input type="hidden" name="namespace" value="%s" />
<input type="submit" value="Pause" %s/>
</form>
<form action="resume" method="POST" style="display:inline">
<input type="hidden" name="namespace" value="%s" />
<input type="submit" value="Resume" %s/>
</form>
""" % (ns, pause_disabled, ns, resume_disabled)
return _pause_resume
class StatsPage(object):
formatting = {
'CherryPy Applications': {
'Enabled': pause_resume('CherryPy Applications'),
'Bytes Read/Request': '%.3f',
'Bytes Read/Second': '%.3f',
'Bytes Written/Request': '%.3f',
'Bytes Written/Second': '%.3f',
'Current Time': iso_format,
'Requests/Second': '%.3f',
'Start Time': iso_format,
'Total Time': '%.3f',
'Uptime': '%.3f',
'Slow Queries': {
'End Time': None,
'Processing Time': '%.3f',
'Start Time': iso_format,
},
'URI Set Tracking': {
'Avg': '%.3f',
'Max': '%.3f',
'Min': '%.3f',
'Sum': '%.3f',
},
'Requests': {
'Bytes Read': '%s',
'Bytes Written': '%s',
'End Time': None,
'Processing Time': '%.3f',
'Start Time': None,
},
},
'CherryPy WSGIServer': {
'Enabled': pause_resume('CherryPy WSGIServer'),
'Connections/second': '%.3f',
'Start time': iso_format,
},
}
@cherrypy.expose
def index(self):
# Transform the raw data into pretty output for HTML
yield """
<html>
<head>
<title>Statistics</title>
<style>
th, td {
padding: 0.25em 0.5em;
border: 1px solid #666699;
}
table {
border-collapse: collapse;
}
table.stats1 {
width: 100%;
}
table.stats1 th {
font-weight: bold;
text-align: right;
background-color: #CCD5DD;
}
table.stats2, h2 {
margin-left: 50px;
}
table.stats2 th {
font-weight: bold;
text-align: center;
background-color: #CCD5DD;
}
</style>
</head>
<body>
"""
for title, scalars, collections in self.get_namespaces():
yield """
<h1>%s</h1>
<table class='stats1'>
<tbody>
""" % title
for i, (key, value) in enumerate(scalars):
colnum = i % 3
if colnum == 0:
yield """
<tr>"""
yield (
"""
<th>%(key)s</th><td id='%(title)s-%(key)s'>%(value)s</td>""" %
vars()
)
if colnum == 2:
yield """
</tr>"""
if colnum == 0:
yield """
<th></th><td></td>
<th></th><td></td>
</tr>"""
elif colnum == 1:
yield """
<th></th><td></td>
</tr>"""
yield """
</tbody>
</table>"""
for subtitle, headers, subrows in collections:
yield """
<h2>%s</h2>
<table class='stats2'>
<thead>
<tr>""" % subtitle
for key in headers:
yield """
<th>%s</th>""" % key
yield """
</tr>
</thead>
<tbody>"""
for subrow in subrows:
yield """
<tr>"""
for value in subrow:
yield """
<td>%s</td>""" % value
yield """
</tr>"""
yield """
</tbody>
</table>"""
yield """
</body>
</html>
"""
def get_namespaces(self):
"""Yield (title, scalars, collections) for each namespace."""
s = extrapolate_statistics(logging.statistics)
for title, ns in sorted(s.items()):
scalars = []
collections = []
ns_fmt = self.formatting.get(title, {})
for k, v in sorted(ns.items()):
fmt = ns_fmt.get(k, {})
if isinstance(v, dict):
headers, subrows = self.get_dict_collection(v, fmt)
collections.append((k, ['ID'] + headers, subrows))
elif isinstance(v, (list, tuple)):
headers, subrows = self.get_list_collection(v, fmt)
collections.append((k, headers, subrows))
else:
format = ns_fmt.get(k, missing)
if format is None:
# Don't output this column.
continue
if hasattr(format, '__call__'):
v = format(v)
elif format is not missing:
v = format % v
scalars.append((k, v))
yield title, scalars, collections
def get_dict_collection(self, v, formatting):
"""Return ([headers], [rows]) for the given collection."""
# E.g., the 'Requests' dict.
headers = []
try:
# python2
vals = v.itervalues()
except AttributeError:
# python3
vals = v.values()
for record in vals:
for k3 in record:
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if k3 not in headers:
headers.append(k3)
headers.sort()
subrows = []
for k2, record in sorted(v.items()):
subrow = [k2]
for k3 in headers:
v3 = record.get(k3, '')
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if hasattr(format, '__call__'):
v3 = format(v3)
elif format is not missing:
v3 = format % v3
subrow.append(v3)
subrows.append(subrow)
return headers, subrows
def get_list_collection(self, v, formatting):
"""Return ([headers], [subrows]) for the given collection."""
# E.g., the 'Slow Queries' list.
headers = []
for record in v:
for k3 in record:
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if k3 not in headers:
headers.append(k3)
headers.sort()
subrows = []
for record in v:
subrow = []
for k3 in headers:
v3 = record.get(k3, '')
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if hasattr(format, '__call__'):
v3 = format(v3)
elif format is not missing:
v3 = format % v3
subrow.append(v3)
subrows.append(subrow)
return headers, subrows
if json is not None:
@cherrypy.expose
def data(self):
s = extrapolate_statistics(logging.statistics)
cherrypy.response.headers['Content-Type'] = 'application/json'
return json.dumps(s, sort_keys=True, indent=4)
@cherrypy.expose
def pause(self, namespace):
logging.statistics.get(namespace, {})['Enabled'] = False
raise cherrypy.HTTPRedirect('./')
pause.cp_config = {'tools.allow.on': True,
'tools.allow.methods': ['POST']}
@cherrypy.expose
def resume(self, namespace):
logging.statistics.get(namespace, {})['Enabled'] = True
raise cherrypy.HTTPRedirect('./')
resume.cp_config = {'tools.allow.on': True,
'tools.allow.methods': ['POST']}
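A mounting sketch matching the module docstring: attach a StatsPage instance to the application tree so the report page and its JSON /data endpoint are reachable (the Root class is hypothetical).
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return 'hello'

root = Root()
root.cpstats = StatsPage()
# cherrypy.quickstart(root, '/', {'/': {'tools.cpstats.on': True}})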

View File

@@ -1,119 +1,97 @@
"""Functions for builtin CherryPy tools."""
import logging
try:
# Python 2.5+
from hashlib import md5
except ImportError:
from md5 import new as md5
import re
from hashlib import md5
import six
import cherrypy
from cherrypy._cpcompat import text_or_bytes
from cherrypy.lib import httputil as _httputil
from cherrypy.lib import is_iterator
from cherrypy.lib import http as _http
# Conditional HTTP request support #
def validate_etags(autotags=False, debug=False):
def validate_etags(autotags=False):
"""Validate the current ETag against If-Match, If-None-Match headers.
If autotags is True, an ETag response-header value will be provided
from an MD5 hash of the response body (unless some other code has
already provided an ETag header). If False (the default), the ETag
will not be automatic.
WARNING: the autotags feature is not designed for URLs which allow
methods other than GET. For example, if a POST to the same URL returns
no content, the automatic ETag will be incorrect, breaking a fundamental
use for entity tags in a possibly destructive fashion. Likewise, if you
raise 304 Not Modified, the response body will be empty, the ETag hash
will be incorrect, and your application will break.
See :rfc:`2616` Section 14.24.
See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24
"""
response = cherrypy.serving.response
response = cherrypy.response
# Guard against being run twice.
if hasattr(response, 'ETag'):
if hasattr(response, "ETag"):
return
status, reason, msg = _httputil.valid_status(response.status)
status, reason, msg = _http.valid_status(response.status)
etag = response.headers.get('ETag')
# Automatic ETag generation. See warning in docstring.
if etag:
if debug:
cherrypy.log('ETag already set: %s' % etag, 'TOOLS.ETAGS')
elif not autotags:
if debug:
cherrypy.log('Autotags off', 'TOOLS.ETAGS')
elif status != 200:
if debug:
cherrypy.log('Status not 200', 'TOOLS.ETAGS')
else:
etag = response.collapse_body()
etag = '"%s"' % md5(etag).hexdigest()
if debug:
cherrypy.log('Setting ETag: %s' % etag, 'TOOLS.ETAGS')
response.headers['ETag'] = etag
if (not etag) and autotags:
if status == 200:
etag = response.collapse_body()
etag = '"%s"' % md5(etag).hexdigest()
response.headers['ETag'] = etag
response.ETag = etag
# "If the request would, without the If-Match header field, result in
# anything other than a 2xx or 412 status, then the If-Match header
# MUST be ignored."
if debug:
cherrypy.log('Status: %s' % status, 'TOOLS.ETAGS')
if status >= 200 and status <= 299:
request = cherrypy.serving.request
request = cherrypy.request
conditions = request.headers.elements('If-Match') or []
conditions = [str(x) for x in conditions]
if debug:
cherrypy.log('If-Match conditions: %s' % repr(conditions),
'TOOLS.ETAGS')
if conditions and not (conditions == ['*'] or etag in conditions):
raise cherrypy.HTTPError(412, 'If-Match failed: ETag %r did '
'not match %r' % (etag, conditions))
if conditions and not (conditions == ["*"] or etag in conditions):
raise cherrypy.HTTPError(412, "If-Match failed: ETag %r did "
"not match %r" % (etag, conditions))
conditions = request.headers.elements('If-None-Match') or []
conditions = [str(x) for x in conditions]
if debug:
cherrypy.log('If-None-Match conditions: %s' % repr(conditions),
'TOOLS.ETAGS')
if conditions == ['*'] or etag in conditions:
if debug:
cherrypy.log('request.method: %s' %
request.method, 'TOOLS.ETAGS')
if request.method in ('GET', 'HEAD'):
if conditions == ["*"] or etag in conditions:
if request.method in ("GET", "HEAD"):
raise cherrypy.HTTPRedirect([], 304)
else:
raise cherrypy.HTTPError(412, 'If-None-Match failed: ETag %r '
'matched %r' % (etag, conditions))
raise cherrypy.HTTPError(412, "If-None-Match failed: ETag %r "
"matched %r" % (etag, conditions))
def validate_since():
"""Validate the current Last-Modified against If-Modified-Since headers.
If no code has set the Last-Modified response header, then no validation
will be performed.
"""
response = cherrypy.serving.response
response = cherrypy.response
lastmod = response.headers.get('Last-Modified')
if lastmod:
status, reason, msg = _httputil.valid_status(response.status)
request = cherrypy.serving.request
status, reason, msg = _http.valid_status(response.status)
request = cherrypy.request
since = request.headers.get('If-Unmodified-Since')
if since and since != lastmod:
if (status >= 200 and status <= 299) or status == 412:
raise cherrypy.HTTPError(412)
since = request.headers.get('If-Modified-Since')
if since and since == lastmod:
if (status >= 200 and status <= 299) or status == 304:
if request.method in ('GET', 'HEAD'):
if request.method in ("GET", "HEAD"):
raise cherrypy.HTTPRedirect([], 304)
else:
raise cherrypy.HTTPError(412)
@@ -121,64 +99,28 @@ def validate_since():
# Tool code #
def allow(methods=None, debug=False):
"""Raise 405 if request.method not in methods (default ['GET', 'HEAD']).
The given methods are case-insensitive, and may be in any order.
If only one method is allowed, you may supply a single string;
if more than one, supply a list of strings.
Regardless of whether the current method is allowed or not, this
also emits an 'Allow' response header, containing the given methods.
"""
if not isinstance(methods, (tuple, list)):
methods = [methods]
methods = [m.upper() for m in methods if m]
if not methods:
methods = ['GET', 'HEAD']
elif 'GET' in methods and 'HEAD' not in methods:
methods.append('HEAD')
cherrypy.response.headers['Allow'] = ', '.join(methods)
if cherrypy.request.method not in methods:
if debug:
cherrypy.log('request.method %r not in methods %r' %
(cherrypy.request.method, methods), 'TOOLS.ALLOW')
raise cherrypy.HTTPError(405)
else:
if debug:
cherrypy.log('request.method %r in methods %r' %
(cherrypy.request.method, methods), 'TOOLS.ALLOW')
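A minimal usage sketch, assuming the standard CherryPy tool-config key style (the handler class and paths below are hypothetical): enabling tools.allow restricts a handler to the listed methods and emits the Allow header.
import cherrypy

class Widget(object):                     # hypothetical handler
    _cp_config = {
        'tools.allow.on': True,
        'tools.allow.methods': ['GET', 'POST'],  # HEAD is added automatically when GET is allowed
    }

    @cherrypy.expose
    def index(self):
        return 'only GET, HEAD and POST reach this handler'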
def proxy(base=None, local='X-Forwarded-Host', remote='X-Forwarded-For',
scheme='X-Forwarded-Proto', debug=False):
scheme='X-Forwarded-Proto'):
"""Change the base URL (scheme://host[:port][/path]).
For running a CP server behind Apache, lighttpd, or other HTTP server.
For Apache and lighttpd, you should leave the 'local' argument at the
default value of 'X-Forwarded-Host'. For Squid, you probably want to set
tools.proxy.local = 'Origin'.
If you want the new request.base to include path info (not just the host),
you must explicitly set base to the full base path, and ALSO set 'local'
to '', so that the X-Forwarded-Host request header (which never includes
path info) does not override it. Regardless, the value for 'base' MUST
NOT end in a slash.
cherrypy.request.remote.ip (the IP address of the client) will be
rewritten if the header specified by the 'remote' arg is valid.
By default, 'remote' is set to 'X-Forwarded-For'. If you do not
want to rewrite remote.ip, set the 'remote' arg to an empty string.
"""
request = cherrypy.serving.request
request = cherrypy.request
if scheme:
s = request.headers.get(scheme, None)
if debug:
cherrypy.log('Testing scheme %r:%r' % (scheme, s), 'TOOLS.PROXY')
if s == 'on' and 'ssl' in scheme.lower():
# This handles e.g. webfaction's 'X-Forwarded-Ssl: on' header
scheme = 'https'
@@ -186,227 +128,165 @@ def proxy(base=None, local='X-Forwarded-Host', remote='X-Forwarded-For',
# This is for lighttpd/pound/Mongrel's 'X-Forwarded-Proto: https'
scheme = s
if not scheme:
scheme = request.base[:request.base.find('://')]
scheme = request.base[:request.base.find("://")]
if local:
lbase = request.headers.get(local, None)
if debug:
cherrypy.log('Testing local %r:%r' % (local, lbase), 'TOOLS.PROXY')
if lbase is not None:
base = lbase.split(',')[0]
base = request.headers.get(local, base)
if not base:
base = request.headers.get('Host', '127.0.0.1')
port = request.local.port
if port != 80:
base += ':%s' % port
if base.find('://') == -1:
port = cherrypy.request.local.port
if port == 80:
base = '127.0.0.1'
else:
base = '127.0.0.1:%s' % port
if base.find("://") == -1:
# add http:// or https:// if needed
base = scheme + '://' + base
base = scheme + "://" + base
request.base = base
if remote:
xff = request.headers.get(remote)
if debug:
cherrypy.log('Testing remote %r:%r' % (remote, xff), 'TOOLS.PROXY')
if xff:
if remote == 'X-Forwarded-For':
# Bug #1268
xff = xff.split(',')[0].strip()
# See http://bob.pythonmac.org/archives/2005/09/23/apache-x-forwarded-for-caveat/
xff = xff.split(',')[-1].strip()
request.remote.ip = xff
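A hedged config sketch for running behind a reverse proxy; the host name and mount path are placeholders, and the keys simply mirror the tool's argument names.
config = {
    '/': {
        'tools.proxy.on': True,
        'tools.proxy.base': 'https://www.example.com',  # placeholder; must not end in a slash
        'tools.proxy.local': '',        # ignore X-Forwarded-Host so 'base' (with path info) wins
        'tools.proxy.remote': 'X-Forwarded-For',
    }
}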
def ignore_headers(headers=('Range',), debug=False):
def ignore_headers(headers=('Range',)):
"""Delete request headers whose field names are included in 'headers'.
This is a useful tool for working behind certain HTTP servers;
for example, Apache duplicates the work that CP does for 'Range'
headers, and will doubly-truncate the response.
"""
request = cherrypy.serving.request
request = cherrypy.request
for name in headers:
if name in request.headers:
if debug:
cherrypy.log('Ignoring request header %r' % name,
'TOOLS.IGNORE_HEADERS')
del request.headers[name]
def response_headers(headers=None, debug=False):
def response_headers(headers=None):
"""Set headers on the response."""
if debug:
cherrypy.log('Setting response headers: %s' % repr(headers),
'TOOLS.RESPONSE_HEADERS')
for name, value in (headers or []):
cherrypy.serving.response.headers[name] = value
cherrypy.response.headers[name] = value
response_headers.failsafe = True
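As a sketch, this tool is normally driven from config; the path and header values below are illustrative only.
config = {
    '/downloads': {
        'tools.response_headers.on': True,
        'tools.response_headers.headers': [
            ('Cache-Control', 'no-cache'),            # list of (name, value) tuples
            ('X-Content-Type-Options', 'nosniff'),
        ],
    }
}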
def referer(pattern, accept=True, accept_missing=False, error=403,
message='Forbidden Referer header.', debug=False):
message='Forbidden Referer header.'):
"""Raise HTTPError if Referer header does/does not match the given pattern.
pattern
A regular expression pattern to test against the Referer.
accept
If True, the Referer must match the pattern; if False,
pattern: a regular expression pattern to test against the Referer.
accept: if True, the Referer must match the pattern; if False,
the Referer must NOT match the pattern.
accept_missing
If True, permit requests with no Referer header.
error
The HTTP error code to return to the client on failure.
message
A string to include in the response body on failure.
accept_missing: if True, permit requests with no Referer header.
error: the HTTP error code to return to the client on failure.
message: a string to include in the response body on failure.
"""
try:
ref = cherrypy.serving.request.headers['Referer']
match = bool(re.match(pattern, ref))
if debug:
cherrypy.log('Referer %r matches %r' % (ref, pattern),
'TOOLS.REFERER')
match = bool(re.match(pattern, cherrypy.request.headers['Referer']))
if accept == match:
return
except KeyError:
if debug:
cherrypy.log('No Referer header', 'TOOLS.REFERER')
if accept_missing:
return
raise cherrypy.HTTPError(error, message)
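A hedged example: accept only requests whose Referer matches the site's own pages. The pattern and mount point are placeholders.
config = {
    '/api': {
        'tools.referer.on': True,
        'tools.referer.pattern': r'^https?://example\.com/',  # placeholder pattern
        'tools.referer.accept': True,           # the Referer must match the pattern
        'tools.referer.accept_missing': False,  # reject requests without a Referer header
    }
}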
class SessionAuth(object):
"""Assert that the user is logged in."""
session_key = 'username'
debug = False
session_key = "username"
def check_username_and_password(self, username, password):
pass
def anonymous(self):
"""Provide a temporary user name for anonymous users."""
pass
def on_login(self, username):
pass
def on_logout(self, username):
pass
def on_check(self, username):
pass
def login_screen(self, from_page='..', username='', error_msg='',
**kwargs):
return (six.text_type("""<html><body>
def login_screen(self, from_page='..', username='', error_msg='', **kwargs):
return """<html><body>
Message: %(error_msg)s
<form method="post" action="do_login">
Login: <input type="text" name="username" value="%(username)s" size="10" />
<br />
Password: <input type="password" name="password" size="10" />
<br />
<input type="hidden" name="from_page" value="%(from_page)s" />
<br />
Login: <input type="text" name="username" value="%(username)s" size="10" /><br />
Password: <input type="password" name="password" size="10" /><br />
<input type="hidden" name="from_page" value="%(from_page)s" /><br />
<input type="submit" />
</form>
</body></html>""") % vars()).encode('utf-8')
</body></html>""" % {'from_page': from_page, 'username': username,
'error_msg': error_msg}
def do_login(self, username, password, from_page='..', **kwargs):
"""Login. May raise redirect, or return True if request handled."""
response = cherrypy.serving.response
error_msg = self.check_username_and_password(username, password)
if error_msg:
body = self.login_screen(from_page, username, error_msg)
response.body = body
if 'Content-Length' in response.headers:
cherrypy.response.body = body
if cherrypy.response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers['Content-Length']
del cherrypy.response.headers["Content-Length"]
return True
else:
cherrypy.serving.request.login = username
cherrypy.session[self.session_key] = username
cherrypy.session[self.session_key] = cherrypy.request.login = username
self.on_login(username)
raise cherrypy.HTTPRedirect(from_page or '/')
raise cherrypy.HTTPRedirect(from_page or "/")
def do_logout(self, from_page='..', **kwargs):
"""Logout. May raise redirect, or return True if request handled."""
sess = cherrypy.session
username = sess.get(self.session_key)
sess[self.session_key] = None
if username:
cherrypy.serving.request.login = None
cherrypy.request.login = None
self.on_logout(username)
raise cherrypy.HTTPRedirect(from_page)
def do_check(self):
"""Assert username. Raise redirect, or return True if request handled.
"""
"""Assert username. May raise redirect, or return True if request handled."""
sess = cherrypy.session
request = cherrypy.serving.request
response = cherrypy.serving.response
request = cherrypy.request
username = sess.get(self.session_key)
if not username:
sess[self.session_key] = username = self.anonymous()
self._debug_message('No session[username], trying anonymous')
if not username:
url = cherrypy.url(qs=request.query_string)
self._debug_message(
'No username, routing to login_screen with from_page %(url)r',
locals(),
)
response.body = self.login_screen(url)
if 'Content-Length' in response.headers:
cherrypy.response.body = self.login_screen(cherrypy.url(qs=request.query_string))
if cherrypy.response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers['Content-Length']
del cherrypy.response.headers["Content-Length"]
return True
self._debug_message('Setting request.login to %(username)r', locals())
request.login = username
cherrypy.request.login = username
self.on_check(username)
def _debug_message(self, template, context={}):
if not self.debug:
return
cherrypy.log(template % context, 'TOOLS.SESSAUTH')
def run(self):
request = cherrypy.serving.request
response = cherrypy.serving.response
request = cherrypy.request
path = request.path_info
if path.endswith('login_screen'):
self._debug_message('routing %(path)r to login_screen', locals())
response.body = self.login_screen()
return True
return self.login_screen(**request.params)
elif path.endswith('do_login'):
if request.method != 'POST':
response.headers['Allow'] = 'POST'
self._debug_message('do_login requires POST')
raise cherrypy.HTTPError(405)
self._debug_message('routing %(path)r to do_login', locals())
return self.do_login(**request.params)
elif path.endswith('do_logout'):
if request.method != 'POST':
response.headers['Allow'] = 'POST'
raise cherrypy.HTTPError(405)
self._debug_message('routing %(path)r to do_logout', locals())
return self.do_logout(**request.params)
else:
self._debug_message('No special path, running do_check')
return self.do_check()
def session_auth(**kwargs):
sa = SessionAuth()
for k, v in kwargs.items():
for k, v in kwargs.iteritems():
setattr(sa, k, v)
return sa.run()
session_auth.__doc__ = """Session authentication hook.
@@ -414,235 +294,140 @@ session_auth.__doc__ = """Session authentication hook.
Any attribute of the SessionAuth class may be overridden via a keyword arg
to this function:
""" + '\n'.join(['%s: %s' % (k, type(getattr(SessionAuth, k)).__name__)
for k in dir(SessionAuth) if not k.startswith('__')])
""" + "\n".join(["%s: %s" % (k, type(getattr(SessionAuth, k)).__name__)
for k in dir(SessionAuth) if not k.startswith("__")])
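Since any SessionAuth attribute can be overridden via keyword args (or config keys of the same name), a minimal sketch could look like the following; the credential check and paths are purely illustrative.
def my_check(username, password):
    # return None on success, or an error-message string on failure
    if (username, password) != ('admin', 'secret'):   # placeholder credentials
        return 'Wrong login/password'

config = {
    '/admin': {
        'tools.sessions.on': True,          # session_auth relies on cherrypy.session
        'tools.session_auth.on': True,
        'tools.session_auth.check_username_and_password': my_check,
    }
}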
def log_traceback(severity=logging.ERROR, debug=False):
def log_traceback(severity=logging.DEBUG):
"""Write the last error's traceback to the cherrypy error log."""
cherrypy.log('', 'HTTP', severity=severity, traceback=True)
cherrypy.log("", "HTTP", severity=severity, traceback=True)
def log_request_headers(debug=False):
def log_request_headers():
"""Write request headers to the cherrypy error log."""
h = [' %s: %s' % (k, v) for k, v in cherrypy.serving.request.header_list]
cherrypy.log('\nRequest Headers:\n' + '\n'.join(h), 'HTTP')
h = [" %s: %s" % (k, v) for k, v in cherrypy.request.header_list]
cherrypy.log('\nRequest Headers:\n' + '\n'.join(h), "HTTP")
def log_hooks(debug=False):
def log_hooks():
"""Write request.hooks to the cherrypy error log."""
request = cherrypy.serving.request
msg = []
# Sort by the standard points if possible.
from cherrypy import _cprequest
points = _cprequest.hookpoints
for k in request.hooks.keys():
for k in cherrypy.request.hooks.keys():
if k not in points:
points.append(k)
for k in points:
msg.append(' %s:' % k)
v = request.hooks.get(k, [])
msg.append(" %s:" % k)
v = cherrypy.request.hooks.get(k, [])
v.sort()
for h in v:
msg.append(' %r' % h)
msg.append(" %r" % h)
cherrypy.log('\nRequest Hooks for ' + cherrypy.url() +
':\n' + '\n'.join(msg), 'HTTP')
':\n' + '\n'.join(msg), "HTTP")
def redirect(url='', internal=True, debug=False):
def redirect(url='', internal=True):
"""Raise InternalRedirect or HTTPRedirect to the given url."""
if debug:
cherrypy.log('Redirecting %sto: %s' %
({True: 'internal ', False: ''}[internal], url),
'TOOLS.REDIRECT')
if internal:
raise cherrypy.InternalRedirect(url)
else:
raise cherrypy.HTTPRedirect(url)
def trailing_slash(missing=True, extra=False, status=None, debug=False):
def trailing_slash(missing=True, extra=False):
"""Redirect if path_info has (missing|extra) trailing slash."""
request = cherrypy.serving.request
request = cherrypy.request
pi = request.path_info
if debug:
cherrypy.log('is_index: %r, missing: %r, extra: %r, path_info: %r' %
(request.is_index, missing, extra, pi),
'TOOLS.TRAILING_SLASH')
if request.is_index is True:
if missing:
if not pi.endswith('/'):
new_url = cherrypy.url(pi + '/', request.query_string)
raise cherrypy.HTTPRedirect(new_url, status=status or 301)
raise cherrypy.HTTPRedirect(new_url)
elif request.is_index is False:
if extra:
# If pi == '/', don't redirect to ''!
if pi.endswith('/') and pi != '/':
new_url = cherrypy.url(pi[:-1], request.query_string)
raise cherrypy.HTTPRedirect(new_url, status=status or 301)
raise cherrypy.HTTPRedirect(new_url)
def flatten(debug=False):
def flatten():
"""Wrap response.body in a generator that recursively iterates over body.
This allows cherrypy.response.body to consist of 'nested generators';
that is, a set of generators that yield generators.
"""
import types
def flattener(input):
numchunks = 0
for x in input:
if not is_iterator(x):
numchunks += 1
if not isinstance(x, types.GeneratorType):
yield x
else:
for y in flattener(x):
numchunks += 1
yield y
if debug:
cherrypy.log('Flattened %d chunks' % numchunks, 'TOOLS.FLATTEN')
response = cherrypy.serving.response
yield y
response = cherrypy.response
response.body = flattener(response.body)
def accept(media=None, debug=False):
def accept(media=None):
"""Return the client's preferred media-type (from the given Content-Types).
If 'media' is None (the default), no test will be performed.
If 'media' is provided, it should be the Content-Type value (as a string)
or values (as a list or tuple of strings) which the current resource
or values (as a list or tuple of strings) which the current request
can emit. The client's acceptable media ranges (as declared in the
Accept request header) will be matched in order to these Content-Type
values; the first such string is returned. That is, the return value
will always be one of the strings provided in the 'media' arg (or None
if 'media' is None).
If no match is found, then HTTPError 406 (Not Acceptable) is raised.
Note that most web browsers send */* as a (low-quality) acceptable
media range, which should match any Content-Type. In addition, "...if
no Accept header field is present, then it is assumed that the client
accepts all media types."
Matching types are checked in order of client preference first,
and then in the order of the given 'media' values.
Note that this function does not honor accept-params (other than "q").
"""
if not media:
return
if isinstance(media, text_or_bytes):
if isinstance(media, basestring):
media = [media]
request = cherrypy.serving.request
# Parse the Accept request header, and try to match one
# of the requested media-ranges (in order of preference).
ranges = request.headers.elements('Accept')
ranges = cherrypy.request.headers.elements('Accept')
if not ranges:
# Any media type is acceptable.
if debug:
cherrypy.log('No Accept header elements', 'TOOLS.ACCEPT')
return media[0]
else:
# Note that 'ranges' is sorted in order of preference
for element in ranges:
if element.qvalue > 0:
if element.value == '*/*':
if element.value == "*/*":
# Matches any type or subtype
if debug:
cherrypy.log('Match due to */*', 'TOOLS.ACCEPT')
return media[0]
elif element.value.endswith('/*'):
elif element.value.endswith("/*"):
# Matches any subtype
mtype = element.value[:-1] # Keep the slash
for m in media:
if m.startswith(mtype):
if debug:
cherrypy.log('Match due to %s' % element.value,
'TOOLS.ACCEPT')
return m
else:
# Matches exact value
if element.value in media:
if debug:
cherrypy.log('Match due to %s' % element.value,
'TOOLS.ACCEPT')
return element.value
# No suitable media-range found.
ah = request.headers.get('Accept')
ah = cherrypy.request.headers.get('Accept')
if ah is None:
msg = 'Your client did not send an Accept header.'
msg = "Your client did not send an Accept header."
else:
msg = 'Your client sent this Accept header: %s.' % ah
msg += (' But this resource only emits these media types: %s.' %
', '.join(media))
msg = "Your client sent this Accept header: %s." % ah
msg += (" But this resource only emits these media types: %s." %
", ".join(media))
raise cherrypy.HTTPError(406, msg)
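Called directly from a handler (assuming this module is importable as cherrypy.lib.cptools), the helper returns whichever of the given types the client prefers, or raises 406; the handler class is hypothetical.
import cherrypy
from cherrypy.lib import cptools   # assumption: this file lives at cherrypy/lib/cptools.py

class Report(object):
    @cherrypy.expose
    def index(self):
        best = cptools.accept(['text/html', 'application/json'])
        if best == 'application/json':
            cherrypy.response.headers['Content-Type'] = 'application/json'
            return '{"status": "ok"}'
        return '<p>ok</p>'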
class MonitoredHeaderMap(_httputil.HeaderMap):
def __init__(self):
self.accessed_headers = set()
def __getitem__(self, key):
self.accessed_headers.add(key)
return _httputil.HeaderMap.__getitem__(self, key)
def __contains__(self, key):
self.accessed_headers.add(key)
return _httputil.HeaderMap.__contains__(self, key)
def get(self, key, default=None):
self.accessed_headers.add(key)
return _httputil.HeaderMap.get(self, key, default=default)
if hasattr({}, 'has_key'):
# Python 2
def has_key(self, key):
self.accessed_headers.add(key)
return _httputil.HeaderMap.has_key(self, key)
def autovary(ignore=None, debug=False):
"""Auto-populate the Vary response header based on request.header access.
"""
request = cherrypy.serving.request
req_h = request.headers
request.headers = MonitoredHeaderMap()
request.headers.update(req_h)
if ignore is None:
ignore = set(['Content-Disposition', 'Content-Length', 'Content-Type'])
def set_response_header():
resp_h = cherrypy.serving.response.headers
v = set([e.value for e in resp_h.elements('Vary')])
if debug:
cherrypy.log(
'Accessed headers: %s' % request.headers.accessed_headers,
'TOOLS.AUTOVARY')
v = v.union(request.headers.accessed_headers)
v = v.difference(ignore)
v = list(v)
v.sort()
resp_h['Vary'] = ', '.join(v)
request.hooks.attach('before_finalize', set_response_header, 95)
def convert_params(exception=ValueError, error=400):
"""Convert request params based on function annotations, with error handling.
exception
Exception class to catch.
status
The HTTP error code to return to the client on failure.
"""
request = cherrypy.serving.request
types = request.handler.callable.__annotations__
with cherrypy.HTTPError.handle(exception, error):
for key in set(types).intersection(request.params):
request.params[key] = types[key](request.params[key])


@@ -1,290 +1,189 @@
import struct
import time
import io
import six
import cherrypy
from cherrypy._cpcompat import text_or_bytes, ntob
from cherrypy.lib import file_generator
from cherrypy.lib import is_closable_iterator
from cherrypy.lib import set_vary_header
def decode(encoding=None, default_encoding='utf-8'):
"""Replace or extend the list of charsets used to decode a request entity.
Either argument may be a single string or a list of strings.
encoding
If not None, restricts the set of charsets attempted while decoding
a request entity to the given set (even if a different charset is
given in the Content-Type request header).
default_encoding
Only in effect if the 'encoding' argument is not given.
If given, the set of charsets attempted while decoding a request
entity is *extended* with the given value(s).
"""
body = cherrypy.request.body
if encoding is not None:
if not isinstance(encoding, list):
encoding = [encoding]
body.attempt_charsets = encoding
elif default_encoding:
if not isinstance(default_encoding, list):
default_encoding = [default_encoding]
body.attempt_charsets = body.attempt_charsets + default_encoding
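A hedged config sketch: restrict request-parameter decoding to UTF-8 for one subtree (the path is a placeholder).
config = {
    '/upload': {
        'tools.decode.on': True,
        'tools.decode.encoding': 'utf-8',   # attempt only this charset, regardless of Content-Type
    }
}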
class UTF8StreamEncoder:
def __init__(self, iterator):
self._iterator = iterator
def __iter__(self):
return self
def next(self):
return self.__next__()
def __next__(self):
res = next(self._iterator)
if isinstance(res, six.text_type):
res = res.encode('utf-8')
return res
def close(self):
if is_closable_iterator(self._iterator):
self._iterator.close()
def __getattr__(self, attr):
if attr.startswith('__'):
raise AttributeError(self, attr)
return getattr(self._iterator, attr)
class ResponseEncoder:
default_encoding = 'utf-8'
failmsg = 'Response body could not be encoded with %r.'
encoding = None
errors = 'strict'
text_only = True
add_charset = True
debug = False
def __init__(self, **kwargs):
for k, v in kwargs.items():
setattr(self, k, v)
self.attempted_charsets = set()
request = cherrypy.serving.request
if request.handler is not None:
# Replace request.handler with self
if self.debug:
cherrypy.log('Replacing request.handler', 'TOOLS.ENCODE')
self.oldhandler = request.handler
request.handler = self
def encode_stream(self, encoding):
"""Encode a streaming response body.
Use a generator wrapper, and just pray it works as the stream is
being written out.
"""
if encoding in self.attempted_charsets:
return False
self.attempted_charsets.add(encoding)
def encoder(body):
for chunk in body:
if isinstance(chunk, six.text_type):
chunk = chunk.encode(encoding, self.errors)
yield chunk
self.body = encoder(self.body)
return True
def encode_string(self, encoding):
"""Encode a buffered response body."""
if encoding in self.attempted_charsets:
return False
self.attempted_charsets.add(encoding)
body = []
for chunk in self.body:
if isinstance(chunk, six.text_type):
try:
chunk = chunk.encode(encoding, self.errors)
except (LookupError, UnicodeError):
return False
body.append(chunk)
self.body = body
return True
def find_acceptable_charset(self):
request = cherrypy.serving.request
response = cherrypy.serving.response
if self.debug:
cherrypy.log('response.stream %r' %
response.stream, 'TOOLS.ENCODE')
if response.stream:
encoder = self.encode_stream
else:
encoder = self.encode_string
if 'Content-Length' in response.headers:
# Delete Content-Length header so finalize() recalcs it.
# Encoded strings may be of different lengths from their
# unicode equivalents, and even from each other. For example:
# >>> t = u"\u7007\u3040"
# >>> len(t)
# 2
# >>> len(t.encode("UTF-8"))
# 6
# >>> len(t.encode("utf7"))
# 8
del response.headers['Content-Length']
# Parse the Accept-Charset request header, and try to provide one
# of the requested charsets (in order of user preference).
encs = request.headers.elements('Accept-Charset')
charsets = [enc.value.lower() for enc in encs]
if self.debug:
cherrypy.log('charsets %s' % repr(charsets), 'TOOLS.ENCODE')
if self.encoding is not None:
# If specified, force this encoding to be used, or fail.
encoding = self.encoding.lower()
if self.debug:
cherrypy.log('Specified encoding %r' %
encoding, 'TOOLS.ENCODE')
if (not charsets) or '*' in charsets or encoding in charsets:
if self.debug:
cherrypy.log('Attempting encoding %r' %
encoding, 'TOOLS.ENCODE')
if encoder(encoding):
return encoding
else:
if not encs:
if self.debug:
cherrypy.log('Attempting default encoding %r' %
self.default_encoding, 'TOOLS.ENCODE')
# Any character-set is acceptable.
if encoder(self.default_encoding):
return self.default_encoding
else:
raise cherrypy.HTTPError(500, self.failmsg %
self.default_encoding)
else:
for element in encs:
if element.qvalue > 0:
if element.value == '*':
# Matches any charset. Try our default.
if self.debug:
cherrypy.log('Attempting default encoding due '
'to %r' % element, 'TOOLS.ENCODE')
if encoder(self.default_encoding):
return self.default_encoding
else:
encoding = element.value
if self.debug:
cherrypy.log('Attempting encoding %s (qvalue >'
'0)' % element, 'TOOLS.ENCODE')
if encoder(encoding):
return encoding
if '*' not in charsets:
# If no "*" is present in an Accept-Charset field, then all
# character sets not explicitly mentioned get a quality
# value of 0, except for ISO-8859-1, which gets a quality
# value of 1 if not explicitly mentioned.
iso = 'iso-8859-1'
if iso not in charsets:
if self.debug:
cherrypy.log('Attempting ISO-8859-1 encoding',
'TOOLS.ENCODE')
if encoder(iso):
return iso
# No suitable encoding found.
ac = request.headers.get('Accept-Charset')
if ac is None:
msg = 'Your client did not send an Accept-Charset header.'
else:
msg = 'Your client sent this Accept-Charset header: %s.' % ac
_charsets = ', '.join(sorted(self.attempted_charsets))
msg += ' We tried these charsets: %s.' % (_charsets,)
raise cherrypy.HTTPError(406, msg)
def __call__(self, *args, **kwargs):
response = cherrypy.serving.response
self.body = self.oldhandler(*args, **kwargs)
if isinstance(self.body, text_or_bytes):
# strings get wrapped in a list because iterating over a single
# item list is much faster than iterating over every character
# in a long string.
if self.body:
self.body = [self.body]
else:
# [''] doesn't evaluate to False, so replace it with [].
self.body = []
elif hasattr(self.body, 'read'):
self.body = file_generator(self.body)
elif self.body is None:
self.body = []
ct = response.headers.elements('Content-Type')
if self.debug:
cherrypy.log('Content-Type: %r' % [str(h)
for h in ct], 'TOOLS.ENCODE')
if ct and self.add_charset:
"""Decode cherrypy.request.params from str to unicode objects."""
if not encoding:
ct = cherrypy.request.headers.elements("Content-Type")
if ct:
ct = ct[0]
if self.text_only:
if ct.value.lower().startswith('text/'):
if self.debug:
cherrypy.log(
'Content-Type %s starts with "text/"' % ct,
'TOOLS.ENCODE')
do_find = True
else:
if self.debug:
cherrypy.log('Not finding because Content-Type %s '
'does not start with "text/"' % ct,
'TOOLS.ENCODE')
do_find = False
encoding = ct.params.get("charset", None)
if (not encoding) and ct.value.lower().startswith("text/"):
# http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1
# When no explicit charset parameter is provided by the
# sender, media subtypes of the "text" type are defined
# to have a default charset value of "ISO-8859-1" when
# received via HTTP.
encoding = "ISO-8859-1"
if not encoding:
encoding = default_encoding
try:
decode_params(encoding)
except UnicodeDecodeError:
# IE and Firefox don't supply a charset when submitting form
# params with a CT of application/x-www-form-urlencoded.
# So after all our guessing, it could *still* be wrong.
# Start over with ISO-8859-1, since that seems to be preferred.
decode_params("ISO-8859-1")
def decode_params(encoding):
decoded_params = {}
for key, value in cherrypy.request.params.items():
if not hasattr(value, 'file'):
# Skip the value if it is an uploaded file
if isinstance(value, list):
# value is a list: decode each element
value = [v.decode(encoding) for v in value]
elif isinstance(value, str):
# value is a regular string: decode it
value = value.decode(encoding)
decoded_params[key] = value
# Decode all or nothing, so we can try again on error.
cherrypy.request.params = decoded_params
# Encoding
def encode(encoding=None, errors='strict', text_only=True, add_charset=True):
# Guard against running twice
if getattr(cherrypy.request, "_encoding_attempted", False):
return
cherrypy.request._encoding_attempted = True
ct = cherrypy.response.headers.elements("Content-Type")
if ct:
ct = ct[0]
if (not text_only) or ct.value.lower().startswith("text/"):
# Set "charset=..." param on response Content-Type header
ct.params['charset'] = find_acceptable_charset(encoding, errors=errors)
if add_charset:
cherrypy.response.headers["Content-Type"] = str(ct)
def encode_stream(encoding, errors='strict'):
"""Encode a streaming response body.
Use a generator wrapper, and just pray it works as the stream is
being written out.
"""
def encoder(body):
for chunk in body:
if isinstance(chunk, unicode):
chunk = chunk.encode(encoding, errors)
yield chunk
cherrypy.response.body = encoder(cherrypy.response.body)
return True
def encode_string(encoding, errors='strict'):
"""Encode a buffered response body."""
try:
body = []
for chunk in cherrypy.response.body:
if isinstance(chunk, unicode):
chunk = chunk.encode(encoding, errors)
body.append(chunk)
cherrypy.response.body = body
except (LookupError, UnicodeError):
return False
else:
return True
def find_acceptable_charset(encoding=None, default_encoding='utf-8', errors='strict'):
response = cherrypy.response
if cherrypy.response.stream:
encoder = encode_stream
else:
response.collapse_body()
encoder = encode_string
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
# Encoded strings may be of different lengths from their
# unicode equivalents, and even from each other. For example:
# >>> t = u"\u7007\u3040"
# >>> len(t)
# 2
# >>> len(t.encode("UTF-8"))
# 6
# >>> len(t.encode("utf7"))
# 8
del response.headers["Content-Length"]
# Parse the Accept-Charset request header, and try to provide one
# of the requested charsets (in order of user preference).
encs = cherrypy.request.headers.elements('Accept-Charset')
charsets = [enc.value.lower() for enc in encs]
attempted_charsets = []
if encoding is not None:
# If specified, force this encoding to be used, or fail.
encoding = encoding.lower()
if (not charsets) or "*" in charsets or encoding in charsets:
if encoder(encoding, errors):
return encoding
else:
if not encs:
# Any character-set is acceptable.
if encoder(default_encoding, errors):
return default_encoding
else:
if self.debug:
cherrypy.log('Finding because not text_only',
'TOOLS.ENCODE')
do_find = True
raise cherrypy.HTTPError(500, failmsg % default_encoding)
else:
if "*" not in charsets:
# If no "*" is present in an Accept-Charset field, then all
# character sets not explicitly mentioned get a quality
# value of 0, except for ISO-8859-1, which gets a quality
# value of 1 if not explicitly mentioned.
iso = 'iso-8859-1'
if iso not in charsets:
attempted_charsets.append(iso)
if encoder(iso, errors):
return iso
for element in encs:
if element.qvalue > 0:
if element.value == "*":
# Matches any charset. Try our default.
if default_encoding not in attempted_charsets:
attempted_charsets.append(default_encoding)
if encoder(default_encoding, errors):
return default_encoding
else:
encoding = element.value
if encoding not in attempted_charsets:
attempted_charsets.append(encoding)
if encoder(encoding, errors):
return encoding
# No suitable encoding found.
ac = cherrypy.request.headers.get('Accept-Charset')
if ac is None:
msg = "Your client did not send an Accept-Charset header."
else:
msg = "Your client sent this Accept-Charset header: %s." % ac
msg += " We tried these charsets: %s." % ", ".join(attempted_charsets)
raise cherrypy.HTTPError(406, msg)
if do_find:
# Set "charset=..." param on response Content-Type header
ct.params['charset'] = self.find_acceptable_charset()
if self.debug:
cherrypy.log('Setting Content-Type %s' % ct,
'TOOLS.ENCODE')
response.headers['Content-Type'] = str(ct)
return self.body
# GZIP
def compress(body, compress_level):
"""Compress 'body' at the given compress_level."""
import zlib
# See http://www.gzip.org/zlib/rfc-gzip.html
yield ntob('\x1f\x8b') # ID1 and ID2: gzip marker
yield ntob('\x08') # CM: compression method
yield ntob('\x00') # FLG: none set
# MTIME: 4 bytes
yield struct.pack('<L', int(time.time()) & int('FFFFFFFF', 16))
yield ntob('\x02') # XFL: max compression, slowest algo
yield ntob('\xff') # OS: unknown
crc = zlib.crc32(ntob(''))
yield '\037\213' # magic header
yield '\010' # compression method
yield '\0'
yield struct.pack("<L", long(time.time()))
yield '\002'
yield '\377'
crc = zlib.crc32("")
size = 0
zobj = zlib.compressobj(compress_level,
zlib.DEFLATED, -zlib.MAX_WBITS,
@@ -294,17 +193,13 @@ def compress(body, compress_level):
crc = zlib.crc32(line, crc)
yield zobj.compress(line)
yield zobj.flush()
# CRC32: 4 bytes
yield struct.pack('<L', crc & int('FFFFFFFF', 16))
# ISIZE: 4 bytes
yield struct.pack('<L', size & int('FFFFFFFF', 16))
yield struct.pack("<l", crc)
yield struct.pack("<L", size & 0xFFFFFFFFL)
def decompress(body):
import gzip
zbuf = io.BytesIO()
import gzip, StringIO
zbuf = StringIO.StringIO()
zbuf.write(body)
zbuf.seek(0)
zfile = gzip.GzipFile(mode='rb', fileobj=zbuf)
@@ -313,44 +208,29 @@ def decompress(body):
return data
def gzip(compress_level=5, mime_types=['text/html', 'text/plain'],
debug=False):
def gzip(compress_level=9, mime_types=['text/html', 'text/plain']):
"""Try to gzip the response body if Content-Type in mime_types.
cherrypy.response.headers['Content-Type'] must be set to one of the
values in the mime_types arg before calling this function.
Each entry in the provided list of mime-types must take one of the following forms:
* type/subtype
* type/*
* type/*+subtype
No compression is performed if any of the following hold:
* The client sends no Accept-Encoding request header
* No 'gzip' or 'x-gzip' is present in the Accept-Encoding header
* No 'gzip' or 'x-gzip' with a qvalue > 0 is present
* The 'identity' value is given with a qvalue > 0.
"""
request = cherrypy.serving.request
response = cherrypy.serving.response
set_vary_header(response, 'Accept-Encoding')
response = cherrypy.response
if not response.body:
# Response body is empty (might be a 304 for instance)
if debug:
cherrypy.log('No response body', context='TOOLS.GZIP')
return
# If returning cached content (which should already have been gzipped),
# don't re-zip.
if getattr(request, 'cached', False):
if debug:
cherrypy.log('Not gzipping cached response', context='TOOLS.GZIP')
if getattr(cherrypy.request, "cached", False):
return
acceptable = request.headers.elements('Accept-Encoding')
acceptable = cherrypy.request.headers.elements('Accept-Encoding')
if not acceptable:
# If no Accept-Encoding field is present in a request,
# the server MAY assume that the client will accept any
@@ -359,66 +239,27 @@ def gzip(compress_level=5, mime_types=['text/html', 'text/plain'],
# the "identity" content-coding, unless it has additional
# information that a different content-coding is meaningful
# to the client.
if debug:
cherrypy.log('No Accept-Encoding', context='TOOLS.GZIP')
return
ct = response.headers.get('Content-Type', '').split(';')[0]
for coding in acceptable:
if coding.value == 'identity' and coding.qvalue != 0:
if debug:
cherrypy.log('Non-zero identity qvalue: %s' % coding,
context='TOOLS.GZIP')
return
if coding.value in ('gzip', 'x-gzip'):
if coding.qvalue == 0:
if debug:
cherrypy.log('Zero gzip qvalue: %s' % coding,
context='TOOLS.GZIP')
return
if ct not in mime_types:
# If the list of provided mime-types contains tokens
# such as 'text/*' or 'application/*+xml',
# we go through them and find the most appropriate one
# based on the given content-type.
# The pattern matching is only caring about the most
# common cases, as stated above, and doesn't support
# for extra parameters.
found = False
if '/' in ct:
ct_media_type, ct_sub_type = ct.split('/')
for mime_type in mime_types:
if '/' in mime_type:
media_type, sub_type = mime_type.split('/')
if ct_media_type == media_type:
if sub_type == '*':
found = True
break
elif '+' in sub_type and '+' in ct_sub_type:
ct_left, ct_right = ct_sub_type.split('+')
left, right = sub_type.split('+')
if left == '*' and ct_right == right:
found = True
break
if not found:
if debug:
cherrypy.log('Content-Type %s not in mime_types %r' %
(ct, mime_types), context='TOOLS.GZIP')
return
if debug:
cherrypy.log('Gzipping', context='TOOLS.GZIP')
# Return a generator that compresses the page
response.headers['Content-Encoding'] = 'gzip'
response.body = compress(response.body, compress_level)
if 'Content-Length' in response.headers:
# Delete Content-Length header so finalize() recalcs it.
del response.headers['Content-Length']
if ct in mime_types:
# Return a generator that compresses the page
varies = response.headers.get("Vary", "")
varies = [x.strip() for x in varies.split(",") if x.strip()]
if "Accept-Encoding" not in varies:
varies.append("Accept-Encoding")
response.headers['Vary'] = ", ".join(varies)
response.headers['Content-Encoding'] = 'gzip'
response.body = compress(response.body, compress_level)
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers["Content-Length"]
return
if debug:
cherrypy.log('No acceptable encoding found.', context='GZIP')
cherrypy.HTTPError(406, 'identity, gzip').set_response()
cherrypy.HTTPError(406, "identity, gzip").set_response()
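A typical, hedged config sketch enabling compression for text and JSON responses; the compression level and mime-type list are illustrative.
config = {
    '/': {
        'tools.gzip.on': True,
        'tools.gzip.compress_level': 5,
        'tools.gzip.mime_types': ['text/html', 'text/plain', 'application/json'],
    }
}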


@@ -1,216 +0,0 @@
import gc
import inspect
import sys
import time
try:
import objgraph
except ImportError:
objgraph = None
import cherrypy
from cherrypy import _cprequest, _cpwsgi
from cherrypy.process.plugins import SimplePlugin
class ReferrerTree(object):
"""An object which gathers all referrers of an object to a given depth."""
peek_length = 40
def __init__(self, ignore=None, maxdepth=2, maxparents=10):
self.ignore = ignore or []
self.ignore.append(inspect.currentframe().f_back)
self.maxdepth = maxdepth
self.maxparents = maxparents
def ascend(self, obj, depth=1):
"""Return a nested list containing referrers of the given object."""
depth += 1
parents = []
# Gather all referrers in one step to minimize
# cascading references due to repr() logic.
refs = gc.get_referrers(obj)
self.ignore.append(refs)
if len(refs) > self.maxparents:
return [('[%s referrers]' % len(refs), [])]
try:
ascendcode = self.ascend.__code__
except AttributeError:
ascendcode = self.ascend.im_func.func_code
for parent in refs:
if inspect.isframe(parent) and parent.f_code is ascendcode:
continue
if parent in self.ignore:
continue
if depth <= self.maxdepth:
parents.append((parent, self.ascend(parent, depth)))
else:
parents.append((parent, []))
return parents
def peek(self, s):
"""Return s, restricted to a sane length."""
if len(s) > (self.peek_length + 3):
half = self.peek_length // 2
return s[:half] + '...' + s[-half:]
else:
return s
def _format(self, obj, descend=True):
"""Return a string representation of a single object."""
if inspect.isframe(obj):
filename, lineno, func, context, index = inspect.getframeinfo(obj)
return "<frame of function '%s'>" % func
if not descend:
return self.peek(repr(obj))
if isinstance(obj, dict):
return '{' + ', '.join(['%s: %s' % (self._format(k, descend=False),
self._format(v, descend=False))
for k, v in obj.items()]) + '}'
elif isinstance(obj, list):
return '[' + ', '.join([self._format(item, descend=False)
for item in obj]) + ']'
elif isinstance(obj, tuple):
return '(' + ', '.join([self._format(item, descend=False)
for item in obj]) + ')'
r = self.peek(repr(obj))
if isinstance(obj, (str, int, float)):
return r
return '%s: %s' % (type(obj), r)
def format(self, tree):
"""Return a list of string reprs from a nested list of referrers."""
output = []
def ascend(branch, depth=1):
for parent, grandparents in branch:
output.append((' ' * depth) + self._format(parent))
if grandparents:
ascend(grandparents, depth + 1)
ascend(tree)
return output
def get_instances(cls):
return [x for x in gc.get_objects() if isinstance(x, cls)]
class RequestCounter(SimplePlugin):
def start(self):
self.count = 0
def before_request(self):
self.count += 1
def after_request(self):
self.count -= 1
request_counter = RequestCounter(cherrypy.engine)
request_counter.subscribe()
def get_context(obj):
if isinstance(obj, _cprequest.Request):
return 'path=%s;stage=%s' % (obj.path_info, obj.stage)
elif isinstance(obj, _cprequest.Response):
return 'status=%s' % obj.status
elif isinstance(obj, _cpwsgi.AppResponse):
return 'PATH_INFO=%s' % obj.environ.get('PATH_INFO', '')
elif hasattr(obj, 'tb_lineno'):
return 'tb_lineno=%s' % obj.tb_lineno
return ''
class GCRoot(object):
"""A CherryPy page handler for testing reference leaks."""
classes = [
(_cprequest.Request, 2, 2,
'Should be 1 in this request thread and 1 in the main thread.'),
(_cprequest.Response, 2, 2,
'Should be 1 in this request thread and 1 in the main thread.'),
(_cpwsgi.AppResponse, 1, 1,
'Should be 1 in this request thread only.'),
]
@cherrypy.expose
def index(self):
return 'Hello, world!'
@cherrypy.expose
def stats(self):
output = ['Statistics:']
for trial in range(10):
if request_counter.count > 0:
break
time.sleep(0.5)
else:
output.append('\nNot all requests closed properly.')
# gc_collect isn't perfectly synchronous, because it may
# break reference cycles that then take time to fully
# finalize. Call it thrice and hope for the best.
gc.collect()
gc.collect()
unreachable = gc.collect()
if unreachable:
if objgraph is not None:
final = objgraph.by_type('Nondestructible')
if final:
objgraph.show_backrefs(final, filename='finalizers.png')
trash = {}
for x in gc.garbage:
trash[type(x)] = trash.get(type(x), 0) + 1
if trash:
output.insert(0, '\n%s unreachable objects:' % unreachable)
trash = [(v, k) for k, v in trash.items()]
trash.sort()
for pair in trash:
output.append(' ' + repr(pair))
# Check declared classes to verify uncollected instances.
# These don't have to be part of a cycle; they can be
# any objects that have unanticipated referrers that keep
# them from being collected.
allobjs = {}
for cls, minobj, maxobj, msg in self.classes:
allobjs[cls] = get_instances(cls)
for cls, minobj, maxobj, msg in self.classes:
objs = allobjs[cls]
lenobj = len(objs)
if lenobj < minobj or lenobj > maxobj:
if minobj == maxobj:
output.append(
'\nExpected %s %r references, got %s.' %
(minobj, cls, lenobj))
else:
output.append(
'\nExpected %s to %s %r references, got %s.' %
(minobj, maxobj, cls, lenobj))
for obj in objs:
if objgraph is not None:
ig = [id(objs), id(inspect.currentframe())]
fname = 'graph_%s_%s.png' % (cls.__name__, id(obj))
objgraph.show_backrefs(
obj, extra_ignore=ig, max_depth=4, too_many=20,
filename=fname, extra_info=get_context)
output.append('\nReferrers for %s (refcount=%s):' %
(repr(obj), sys.getrefcount(obj)))
t = ReferrerTree(ignore=[objs], maxdepth=3)
tree = t.ascend(obj)
output.extend(t.format(tree))
return '\n'.join(output)

cherrypy/lib/http.py (new file, 410 lines)

@@ -0,0 +1,410 @@
"""HTTP library functions."""
# This module contains functions for building an HTTP application
# framework: any one, not just one whose name starts with "Ch". ;) If you
# reference any modules from some popular framework inside *this* module,
# FuManChu will personally hang you up by your thumbs and submit you
# to a public caning.
from BaseHTTPServer import BaseHTTPRequestHandler
response_codes = BaseHTTPRequestHandler.responses.copy()
# From http://www.cherrypy.org/ticket/361
response_codes[500] = ('Internal Server Error',
'The server encountered an unexpected condition '
'which prevented it from fulfilling the request.')
response_codes[503] = ('Service Unavailable',
'The server is currently unable to handle the '
'request due to a temporary overloading or '
'maintenance of the server.')
import cgi
import re
from rfc822 import formatdate as HTTPDate
def urljoin(*atoms):
"""Return the given path *atoms, joined into a single URL.
This will correctly join a SCRIPT_NAME and PATH_INFO into the
original URL, even if either atom is blank.
"""
url = "/".join([x for x in atoms if x])
while "//" in url:
url = url.replace("//", "/")
# Special-case the final url of "", and return "/" instead.
return url or "/"
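For example, assuming these calls run inside this module, the join-and-collapse logic above yields:
print(urljoin('/app', '/page/'))   # /app/page/
print(urljoin('', 'index'))        # index   (blank atoms are skipped)
print(urljoin('', ''))             # /       (empty result is special-cased)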
def protocol_from_http(protocol_str):
"""Return a protocol tuple from the given 'HTTP/x.y' string."""
return int(protocol_str[5]), int(protocol_str[7])
def get_ranges(headervalue, content_length):
"""Return a list of (start, stop) indices from a Range header, or None.
Each (start, stop) tuple will be composed of two ints, which are suitable
for use in a slicing operation. That is, the header "Range: bytes=3-6",
if applied against a Python string, is requesting resource[3:7]. This
function will return the list [(3, 7)].
If this function returns an empty list, you should return HTTP 416.
"""
if not headervalue:
return None
result = []
bytesunit, byteranges = headervalue.split("=", 1)
for brange in byteranges.split(","):
start, stop = [x.strip() for x in brange.split("-", 1)]
if start:
if not stop:
stop = content_length - 1
start, stop = map(int, (start, stop))
if start >= content_length:
# From rfc 2616 sec 14.16:
# "If the server receives a request (other than one
# including an If-Range request-header field) with an
# unsatisfiable Range request-header field (that is,
# all of whose byte-range-spec values have a first-byte-pos
# value greater than the current length of the selected
# resource), it SHOULD return a response code of 416
# (Requested range not satisfiable)."
continue
if stop < start:
# From rfc 2616 sec 14.16:
# "If the server ignores a byte-range-spec because it
# is syntactically invalid, the server SHOULD treat
# the request as if the invalid Range header field
# did not exist. (Normally, this means return a 200
# response containing the full entity)."
return None
result.append((start, stop + 1))
else:
if not stop:
# See rfc quote above.
return None
# Negative subscript (last N bytes)
result.append((content_length - int(stop), content_length))
return result
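Worked examples of the slicing semantics described above, run against the function defined here:
print(get_ranges('bytes=3-6', content_length=10))   # [(3, 7)]   -> resource[3:7]
print(get_ranges('bytes=-4', content_length=10))    # [(6, 10)]  -> the last 4 bytes
print(get_ranges('bytes=500-', content_length=10))  # []         -> caller should answer 416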
class HeaderElement(object):
"""An element (with parameters) from an HTTP header's element list."""
def __init__(self, value, params=None):
self.value = value
if params is None:
params = {}
self.params = params
def __unicode__(self):
p = [";%s=%s" % (k, v) for k, v in self.params.iteritems()]
return u"%s%s" % (self.value, "".join(p))
def __str__(self):
return str(self.__unicode__())
def parse(elementstr):
"""Transform 'token;key=val' to ('token', {'key': 'val'})."""
# Split the element into a value and parameters. The 'value' may
# be of the form, "token=token", but we don't split that here.
atoms = [x.strip() for x in elementstr.split(";") if x.strip()]
initial_value = atoms.pop(0).strip()
params = {}
for atom in atoms:
atom = [x.strip() for x in atom.split("=", 1) if x.strip()]
key = atom.pop(0)
if atom:
val = atom[0]
else:
val = ""
params[key] = val
return initial_value, params
parse = staticmethod(parse)
def from_str(cls, elementstr):
"""Construct an instance from a string of the form 'token;key=val'."""
ival, params = cls.parse(elementstr)
return cls(ival, params)
from_str = classmethod(from_str)
q_separator = re.compile(r'; *q *=')
class AcceptElement(HeaderElement):
"""An element (with parameters) from an Accept* header's element list.
AcceptElement objects are comparable; the more-preferred object will be
"less than" the less-preferred object. They are also therefore sortable;
if you sort a list of AcceptElement objects, they will be listed in
priority order; the most preferred value will be first. Yes, it should
have been the other way around, but it's too late to fix now.
"""
def from_str(cls, elementstr):
qvalue = None
# The first "q" parameter (if any) separates the initial
# media-range parameter(s) (if any) from the accept-params.
atoms = q_separator.split(elementstr, 1)
media_range = atoms.pop(0).strip()
if atoms:
# The qvalue for an Accept header can have extensions. The other
# headers cannot, but it's easier to parse them as if they did.
qvalue = HeaderElement.from_str(atoms[0].strip())
media_type, params = cls.parse(media_range)
if qvalue is not None:
params["q"] = qvalue
return cls(media_type, params)
from_str = classmethod(from_str)
def qvalue(self):
val = self.params.get("q", "1")
if isinstance(val, HeaderElement):
val = val.value
return float(val)
qvalue = property(qvalue, doc="The qvalue, or priority, of this value.")
def __cmp__(self, other):
diff = cmp(other.qvalue, self.qvalue)
if diff == 0:
diff = cmp(str(other), str(self))
return diff
def header_elements(fieldname, fieldvalue):
"""Return a HeaderElement list from a comma-separated header str."""
if not fieldvalue:
return None
headername = fieldname.lower()
result = []
for element in fieldvalue.split(","):
if headername.startswith("accept") or headername == 'te':
hv = AcceptElement.from_str(element)
else:
hv = HeaderElement.from_str(element)
result.append(hv)
result.sort()
return result
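A small illustration of the preference ordering (most-preferred element first), using the classes defined above:
elems = header_elements('Accept', 'text/html;q=0.5, application/json')
print([e.value for e in elems])   # ['application/json', 'text/html']
print(elems[0].qvalue)            # 1.0 (no explicit q means highest priority)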
def decode_TEXT(value):
"""Decode RFC-2047 TEXT (e.g. "=?utf-8?q?f=C3=BCr?=" -> u"f\xfcr")."""
from email.Header import decode_header
atoms = decode_header(value)
decodedvalue = ""
for atom, charset in atoms:
if charset is not None:
atom = atom.decode(charset)
decodedvalue += atom
return decodedvalue
def valid_status(status):
"""Return legal HTTP status Code, Reason-phrase and Message.
The status arg must be an int, or a str that begins with an int.
If status is an int, or a str and no reason-phrase is supplied,
a default reason-phrase will be provided.
"""
if not status:
status = 200
status = str(status)
parts = status.split(" ", 1)
if len(parts) == 1:
# No reason supplied.
code, = parts
reason = None
else:
code, reason = parts
reason = reason.strip()
try:
code = int(code)
except ValueError:
raise ValueError("Illegal response status from server "
"(%s is non-numeric)." % repr(code))
if code < 100 or code > 599:
raise ValueError("Illegal response status from server "
"(%s is out of range)." % repr(code))
if code not in response_codes:
# code is unknown but not illegal
default_reason, message = "", ""
else:
default_reason, message = response_codes[code]
if reason is None:
reason = default_reason
return code, reason, message
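For instance, reason phrases come from the response_codes table unless one is supplied:
print(valid_status(404))            # (404, 'Not Found', <default message>)
print(valid_status('599 Made Up'))  # (599, 'Made Up', '')  - unknown but legal code keeps its reason
print(valid_status(None))           # (200, 'OK', <default message>) - falsy status defaults to 200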
image_map_pattern = re.compile(r"[0-9]+,[0-9]+")
def parse_query_string(query_string, keep_blank_values=True):
"""Build a params dictionary from a query_string.
Duplicate key/value pairs in the provided query_string will be
returned as {'key': [val1, val2, ...]}. Single key/values will
be returned as strings: {'key': 'value'}.
"""
if image_map_pattern.match(query_string):
# Server-side image map. Map the coords to 'x' and 'y'
# (like CGI::Request does).
pm = query_string.split(",")
pm = {'x': int(pm[0]), 'y': int(pm[1])}
else:
pm = cgi.parse_qs(query_string, keep_blank_values)
for key, val in pm.items():
if len(val) == 1:
pm[key] = val[0]
return pm
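Examples of the two result shapes described in the docstring:
print(parse_query_string('a=1&a=2&b=3'))   # {'a': ['1', '2'], 'b': '3'}
print(parse_query_string('12,34'))         # {'x': 12, 'y': 34}  (server-side image map)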
def params_from_CGI_form(form):
params = {}
for key in form.keys():
value_list = form[key]
if key is None:
# multipart/* message parts that have no Content-Disposition
# have a .name of None, but Python kwarg keys must be strings.
# See http://www.cherrypy.org/ticket/890.
key = 'parts'
if isinstance(value_list, list):
params[key] = []
for item in value_list:
if item.filename is not None:
value = item # It's a file upload
else:
value = item.value # It's a regular field
params[key].append(value)
else:
if value_list.filename is not None:
value = value_list # It's a file upload
else:
value = value_list.value # It's a regular field
params[key] = value
return params
class CaseInsensitiveDict(dict):
"""A case-insensitive dict subclass.
Each key is changed on entry to str(key).title().
"""
def __getitem__(self, key):
return dict.__getitem__(self, str(key).title())
def __setitem__(self, key, value):
dict.__setitem__(self, str(key).title(), value)
def __delitem__(self, key):
dict.__delitem__(self, str(key).title())
def __contains__(self, key):
return dict.__contains__(self, str(key).title())
def get(self, key, default=None):
return dict.get(self, str(key).title(), default)
def has_key(self, key):
return dict.has_key(self, str(key).title())
def update(self, E):
for k in E.keys():
self[str(k).title()] = E[k]
def fromkeys(cls, seq, value=None):
newdict = cls()
for k in seq:
newdict[str(k).title()] = value
return newdict
fromkeys = classmethod(fromkeys)
def setdefault(self, key, x=None):
key = str(key).title()
try:
return self[key]
except KeyError:
self[key] = x
return x
def pop(self, key, default):
return dict.pop(self, str(key).title(), default)
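The title-casing makes lookups case-insensitive, for example:
d = CaseInsensitiveDict()
d['content-type'] = 'text/html'
print('CONTENT-TYPE' in d)     # True  - keys are normalized with str(key).title()
print(d.get('Content-Type'))   # text/html
print(list(d.keys()))          # ['Content-Type']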
class HeaderMap(CaseInsensitiveDict):
"""A dict subclass for HTTP request and response headers.
Each key is changed on entry to str(key).title(). This allows headers
to be case-insensitive and avoid duplicates.
Values are header values (decoded according to RFC 2047 if necessary).
"""
def elements(self, key):
"""Return a list of HeaderElements for the given header (or None)."""
key = str(key).title()
h = self.get(key)
if h is None:
return []
return header_elements(key, h)
def output(self, protocol=(1, 1)):
"""Transform self into a list of (name, value) tuples."""
header_list = []
for key, v in self.iteritems():
if isinstance(v, unicode):
# HTTP/1.0 says, "Words of *TEXT may contain octets
# from character sets other than US-ASCII." and
# "Recipients of header field TEXT containing octets
# outside the US-ASCII character set may assume that
# they represent ISO-8859-1 characters."
try:
v = v.encode("iso-8859-1")
except UnicodeEncodeError:
if protocol >= (1, 1):
# Encode RFC-2047 TEXT
# (e.g. u"\u8200" -> "=?utf-8?b?6IiA?=").
from email.Header import Header
v = Header(v, 'utf-8').encode()
else:
raise
else:
# This coercion should not take any time at all
# if value is already of type "str".
v = str(v)
header_list.append((key, v))
return header_list
class Host(object):
"""An internet address.
name should be the client's host name. If not available (because no DNS
lookup is performed), the IP address should be used instead.
"""
ip = "0.0.0.0"
port = 80
name = "unknown.tld"
def __init__(self, ip, port, name=None):
self.ip = ip
self.port = port
if name is None:
name = ip
self.name = name
def __repr__(self):
return "http.Host(%r, %r, %r)" % (self.ip, self.port, self.name)


@@ -1,40 +1,28 @@
"""
This module defines functions to implement HTTP Digest Authentication
(:rfc:`2617`).
The httpauth module defines functions to implement HTTP Digest Authentication (RFC 2617).
This has full compliance with 'Digest' and 'Basic' authentication methods. In
'Digest' it supports both MD5 and MD5-sess algorithms.
Usage:
First use 'doAuth' to request client authentication for a
certain resource. You should send an httplib.UNAUTHORIZED response to the
client so it knows it has to authenticate.
Then use 'parseAuthorization' to retrieve the 'auth_map' used in
'checkResponse'.
To use 'checkResponse' you must have already verified the password
associated with the 'username' key in 'auth_map' dict. Then you use the
'checkResponse' function to verify if the password matches the one sent
by the client.
To use 'checkResponse' you must have already verified the password associated
with the 'username' key in 'auth_map' dict. Then you use the 'checkResponse'
function to verify if the password matches the one sent by the client.
SUPPORTED_ALGORITHM - list of supported 'Digest' algorithms
SUPPORTED_QOP - list of supported 'Digest' 'qop'.
"""
import time
from hashlib import md5
from cherrypy._cpcompat import (
base64_decode, ntob,
parse_http_list, parse_keqv_list
)
__version__ = 1, 0, 1
__author__ = 'Tiago Cogumbreiro <cogumbreiro@users.sf.net>'
__author__ = "Tiago Cogumbreiro <cogumbreiro@users.sf.net>"
__credits__ = """
Peter van Kampen for its recipe which implement most of Digest
authentication:
Peter van Kampen for its recipe which implement most of Digest authentication:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/302378
"""
@@ -42,55 +30,62 @@ __license__ = """
Copyright (c) 2005, Tiago Cogumbreiro <cogumbreiro@users.sf.net>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of Sylvain Hellegouarch nor the names of his
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
* Neither the name of Sylvain Hellegouarch nor the names of his contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
__all__ = ('digestAuth', 'basicAuth', 'doAuth', 'checkResponse',
'parseAuthorization', 'SUPPORTED_ALGORITHM', 'md5SessionKey',
'calculateNonce', 'SUPPORTED_QOP')
__all__ = ("digestAuth", "basicAuth", "doAuth", "checkResponse",
"parseAuthorization", "SUPPORTED_ALGORITHM", "md5SessionKey",
"calculateNonce", "SUPPORTED_QOP")
##########################################################################
################################################################################
try:
# Python 2.5+
from hashlib import md5
except ImportError:
from md5 import new as md5
import time
import base64
import urllib2
MD5 = 'MD5'
MD5_SESS = 'MD5-sess'
AUTH = 'auth'
AUTH_INT = 'auth-int'
MD5 = "MD5"
MD5_SESS = "MD5-sess"
AUTH = "auth"
AUTH_INT = "auth-int"
SUPPORTED_ALGORITHM = (MD5, MD5_SESS)
SUPPORTED_QOP = (AUTH, AUTH_INT)
##########################################################################
################################################################################
# doAuth
#
DIGEST_AUTH_ENCODERS = {
MD5: lambda val: md5(ntob(val)).hexdigest(),
MD5_SESS: lambda val: md5(ntob(val)).hexdigest(),
# SHA: lambda val: sha.new(ntob(val)).hexdigest (),
MD5: lambda val: md5(val).hexdigest(),
MD5_SESS: lambda val: md5(val).hexdigest(),
# SHA: lambda val: sha.new (val).hexdigest (),
}
def calculateNonce(realm, algorithm=MD5):
def calculateNonce (realm, algorithm = MD5):
"""This is an auxaliary function that calculates 'nonce' value. It is used
to handle sessions."""
@@ -100,109 +95,105 @@ def calculateNonce(realm, algorithm=MD5):
try:
encoder = DIGEST_AUTH_ENCODERS[algorithm]
except KeyError:
raise NotImplementedError('The chosen algorithm (%s) does not have '
'an implementation yet' % algorithm)
raise NotImplementedError ("The chosen algorithm (%s) does not have "\
"an implementation yet" % algorithm)
return encoder('%d:%s' % (time.time(), realm))
return encoder ("%d:%s" % (time.time(), realm))
def digestAuth(realm, algorithm=MD5, nonce=None, qop=AUTH):
def digestAuth (realm, algorithm = MD5, nonce = None, qop = AUTH):
"""Challenges the client for a Digest authentication."""
global SUPPORTED_ALGORITHM, DIGEST_AUTH_ENCODERS, SUPPORTED_QOP
assert algorithm in SUPPORTED_ALGORITHM
assert qop in SUPPORTED_QOP
if nonce is None:
nonce = calculateNonce(realm, algorithm)
nonce = calculateNonce (realm, algorithm)
return 'Digest realm="%s", nonce="%s", algorithm="%s", qop="%s"' % (
realm, nonce, algorithm, qop
)
def basicAuth(realm):
def basicAuth (realm):
"""Challengenes the client for a Basic authentication."""
assert '"' not in realm, "Realms cannot contain the \" (quote) character."
return 'Basic realm="%s"' % realm
def doAuth(realm):
def doAuth (realm):
"""'doAuth' function returns the challenge string b giving priority over
Digest and fallback to Basic authentication when the browser doesn't
support the first one.
This should be set in the HTTP header under the key 'WWW-Authenticate'."""
return digestAuth(realm) + ' ' + basicAuth(realm)
return digestAuth (realm) + " " + basicAuth (realm)
##########################################################################
################################################################################
# Parse authorization parameters
#
def _parseDigestAuthorization(auth_params):
def _parseDigestAuthorization (auth_params):
# Convert the auth params to a dict
items = parse_http_list(auth_params)
params = parse_keqv_list(items)
items = urllib2.parse_http_list (auth_params)
params = urllib2.parse_keqv_list (items)
# Now validate the params
# Check for required parameters
required = ['username', 'realm', 'nonce', 'uri', 'response']
required = ["username", "realm", "nonce", "uri", "response"]
for k in required:
if k not in params:
if not params.has_key(k):
return None
# If qop is sent then cnonce and nc MUST be present
if 'qop' in params and not ('cnonce' in params
and 'nc' in params):
if params.has_key("qop") and not (params.has_key("cnonce") \
and params.has_key("nc")):
return None
# If qop is not sent, neither cnonce nor nc can be present
if ('cnonce' in params or 'nc' in params) and \
'qop' not in params:
if (params.has_key("cnonce") or params.has_key("nc")) and \
not params.has_key("qop"):
return None
return params
def _parseBasicAuthorization(auth_params):
username, password = base64_decode(auth_params).split(':', 1)
return {'username': username, 'password': password}
def _parseBasicAuthorization (auth_params):
username, password = base64.decodestring (auth_params).split (":", 1)
return {"username": username, "password": password}
AUTH_SCHEMES = {
'basic': _parseBasicAuthorization,
'digest': _parseDigestAuthorization,
"basic": _parseBasicAuthorization,
"digest": _parseDigestAuthorization,
}
def parseAuthorization(credentials):
def parseAuthorization (credentials):
"""parseAuthorization will convert the value of the 'Authorization' key in
the HTTP header to a map itself. If the parsing fails 'None' is returned.
"""
global AUTH_SCHEMES
auth_scheme, auth_params = credentials.split(' ', 1)
auth_scheme = auth_scheme.lower()
auth_scheme, auth_params = credentials.split(" ", 1)
auth_scheme = auth_scheme.lower ()
parser = AUTH_SCHEMES[auth_scheme]
params = parser(auth_params)
params = parser (auth_params)
if params is None:
return
assert 'auth_scheme' not in params
params['auth_scheme'] = auth_scheme
assert "auth_scheme" not in params
params["auth_scheme"] = auth_scheme
return params
##########################################################################
################################################################################
# Check provided response for a valid password
#
def md5SessionKey(params, password):
def md5SessionKey (params, password):
"""
If the "algorithm" directive's value is "MD5-sess", then A1
If the "algorithm" directive's value is "MD5-sess", then A1
[the session key] is calculated only once - on the first request by the
client following receipt of a WWW-Authenticate challenge from the server.
@@ -219,70 +210,67 @@ def md5SessionKey(params, password):
specification.
"""
keys = ('username', 'realm', 'nonce', 'cnonce')
keys = ("username", "realm", "nonce", "cnonce")
params_copy = {}
for key in keys:
params_copy[key] = params[key]
params_copy['algorithm'] = MD5_SESS
return _A1(params_copy, password)
params_copy["algorithm"] = MD5_SESS
return _A1 (params_copy, password)
def _A1(params, password):
algorithm = params.get('algorithm', MD5)
algorithm = params.get ("algorithm", MD5)
H = DIGEST_AUTH_ENCODERS[algorithm]
if algorithm == MD5:
# If the "algorithm" directive's value is "MD5" or is
# unspecified, then A1 is:
# A1 = unq(username-value) ":" unq(realm-value) ":" passwd
return '%s:%s:%s' % (params['username'], params['realm'], password)
return "%s:%s:%s" % (params["username"], params["realm"], password)
elif algorithm == MD5_SESS:
# This is A1 if qop is set
# A1 = H( unq(username-value) ":" unq(realm-value) ":" passwd )
# ":" unq(nonce-value) ":" unq(cnonce-value)
h_a1 = H('%s:%s:%s' % (params['username'], params['realm'], password))
return '%s:%s:%s' % (h_a1, params['nonce'], params['cnonce'])
h_a1 = H ("%s:%s:%s" % (params["username"], params["realm"], password))
return "%s:%s:%s" % (h_a1, params["nonce"], params["cnonce"])
def _A2(params, method, kwargs):
# If the "qop" directive's value is "auth" or is unspecified, then A2 is:
# A2 = Method ":" digest-uri-value
qop = params.get('qop', 'auth')
if qop == 'auth':
return method + ':' + params['uri']
elif qop == 'auth-int':
qop = params.get ("qop", "auth")
if qop == "auth":
return method + ":" + params["uri"]
elif qop == "auth-int":
# If the "qop" value is "auth-int", then A2 is:
# A2 = Method ":" digest-uri-value ":" H(entity-body)
entity_body = kwargs.get('entity_body', '')
H = kwargs['H']
entity_body = kwargs.get ("entity_body", "")
H = kwargs["H"]
return '%s:%s:%s' % (
return "%s:%s:%s" % (
method,
params['uri'],
params["uri"],
H(entity_body)
)
else:
raise NotImplementedError("The 'qop' method is unknown: %s" % qop)
raise NotImplementedError ("The 'qop' method is unknown: %s" % qop)
def _computeDigestResponse(auth_map, password, method='GET', A1=None,
**kwargs):
def _computeDigestResponse(auth_map, password, method = "GET", A1 = None,**kwargs):
"""
Generates a response respecting the algorithm defined in RFC 2617
"""
params = auth_map
algorithm = params.get('algorithm', MD5)
algorithm = params.get ("algorithm", MD5)
H = DIGEST_AUTH_ENCODERS[algorithm]
KD = lambda secret, data: H(secret + ':' + data)
KD = lambda secret, data: H(secret + ":" + data)
qop = params.get('qop', None)
qop = params.get ("qop", None)
H_A2 = H(_A2(params, method, kwargs))
@@ -291,7 +279,7 @@ def _computeDigestResponse(auth_map, password, method='GET', A1=None,
else:
H_A1 = H(_A1(params, password))
if qop in ('auth', 'auth-int'):
if qop in ("auth", "auth-int"):
# If the "qop" value is "auth" or "auth-int":
# request-digest = <"> < KD ( H(A1), unq(nonce-value)
# ":" nc-value
@@ -299,11 +287,11 @@ def _computeDigestResponse(auth_map, password, method='GET', A1=None,
# ":" unq(qop-value)
# ":" H(A2)
# ) <">
request = '%s:%s:%s:%s:%s' % (
params['nonce'],
params['nc'],
params['cnonce'],
params['qop'],
request = "%s:%s:%s:%s:%s" % (
params["nonce"],
params["nc"],
params["cnonce"],
params["qop"],
H_A2,
)
elif qop is None:
@@ -311,12 +299,11 @@ def _computeDigestResponse(auth_map, password, method='GET', A1=None,
# for compatibility with RFC 2069):
# request-digest =
# <"> < KD ( H(A1), unq(nonce-value) ":" H(A2) ) > <">
request = '%s:%s' % (params['nonce'], H_A2)
request = "%s:%s" % (params["nonce"], H_A2)
return KD(H_A1, request)
def _checkDigestResponse(auth_map, password, method='GET', A1=None, **kwargs):
def _checkDigestResponse(auth_map, password, method = "GET", A1 = None, **kwargs):
"""This function is used to verify the response given by the client when
he tries to authenticate.
Optional arguments:
@@ -331,48 +318,44 @@ def _checkDigestResponse(auth_map, password, method='GET', A1=None, **kwargs):
if auth_map['realm'] != kwargs.get('realm', None):
return False
response = _computeDigestResponse(
auth_map, password, method, A1, **kwargs)
response = _computeDigestResponse(auth_map, password, method, A1,**kwargs)
return response == auth_map['response']
return response == auth_map["response"]
def _checkBasicResponse(auth_map, password, method='GET', encrypt=None,
**kwargs):
def _checkBasicResponse (auth_map, password, method='GET', encrypt=None, **kwargs):
# Note that the Basic response doesn't provide the realm value so we cannot
# test it
pass_through = lambda password, username=None: password
encrypt = encrypt or pass_through
try:
candidate = encrypt(auth_map['password'], auth_map['username'])
return encrypt(auth_map["password"], auth_map["username"]) == password
except TypeError:
# if encrypt only takes one parameter, it's the password
candidate = encrypt(auth_map['password'])
return candidate == password
return encrypt(auth_map["password"]) == password
AUTH_RESPONSES = {
'basic': _checkBasicResponse,
'digest': _checkDigestResponse,
"basic": _checkBasicResponse,
"digest": _checkDigestResponse,
}
def checkResponse(auth_map, password, method='GET', encrypt=None, **kwargs):
def checkResponse (auth_map, password, method = "GET", encrypt=None, **kwargs):
"""'checkResponse' compares the auth_map with the password and optionally
other arguments that each implementation might need.
If the response is of type 'Basic' then the function has the following
signature::
checkBasicResponse(auth_map, password) -> bool
signature:
checkBasicResponse (auth_map, password) -> bool
If the response is of type 'Digest' then the function has the following
signature::
checkDigestResponse(auth_map, password, method='GET', A1=None) -> bool
signature:
checkDigestResponse (auth_map, password, method = 'GET', A1 = None) -> bool
The 'A1' argument is only used in MD5_SESS algorithm based responses.
Check md5SessionKey() for more info.
"""
checker = AUTH_RESPONSES[auth_map['auth_scheme']]
return checker(auth_map, password, method=method, encrypt=encrypt,
**kwargs)
global AUTH_RESPONSES
checker = AUTH_RESPONSES[auth_map["auth_scheme"]]
return checker (auth_map, password, method=method, encrypt=encrypt, **kwargs)
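Taken together, a server-side handler would use these functions roughly as in the sketch below. 'handle_request' and 'lookup_password' are hypothetical names standing in for the application's own request plumbing and credential store; only doAuth, parseAuthorization and checkResponse come from this module (importable as cherrypy.lib.httpauth in this tree).
from cherrypy.lib import httpauth
def handle_request(headers, method, realm, lookup_password):
    credentials = headers.get('Authorization')
    if not credentials:
        # No credentials yet: challenge the client (Digest first, Basic as fallback).
        return 401, {'WWW-Authenticate': httpauth.doAuth(realm)}
    # Turn the Authorization header into an auth_map dict.
    auth_map = httpauth.parseAuthorization(credentials)
    if auth_map is None:
        return 400, {}
    # The stored password for this username must be looked up by the caller.
    password = lookup_password(auth_map['username'])
    if httpauth.checkResponse(auth_map, password, method=method, realm=realm):
        return 200, {}
    return 401, {'WWW-Authenticate': httpauth.doAuth(realm)}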


@@ -1,530 +0,0 @@
"""HTTP library functions.
This module contains functions for building an HTTP application
framework: any one, not just one whose name starts with "Ch". ;) If you
reference any modules from some popular framework inside *this* module,
FuManChu will personally hang you up by your thumbs and submit you
to a public caning.
"""
import functools
import email.utils
import re
from binascii import b2a_base64
from cgi import parse_header
try:
# Python 3
from email.header import decode_header
except ImportError:
from email.Header import decode_header
import six
from cherrypy._cpcompat import BaseHTTPRequestHandler, ntob, ntou
from cherrypy._cpcompat import text_or_bytes, iteritems
from cherrypy._cpcompat import reversed, sorted, unquote_qs
response_codes = BaseHTTPRequestHandler.responses.copy()
# From https://github.com/cherrypy/cherrypy/issues/361
response_codes[500] = ('Internal Server Error',
'The server encountered an unexpected condition '
'which prevented it from fulfilling the request.')
response_codes[503] = ('Service Unavailable',
'The server is currently unable to handle the '
'request due to a temporary overloading or '
'maintenance of the server.')
HTTPDate = functools.partial(email.utils.formatdate, usegmt=True)
def urljoin(*atoms):
"""Return the given path \*atoms, joined into a single URL.
This will correctly join a SCRIPT_NAME and PATH_INFO into the
original URL, even if either atom is blank.
"""
url = '/'.join([x for x in atoms if x])
while '//' in url:
url = url.replace('//', '/')
# Special-case the final url of "", and return "/" instead.
return url or '/'
def urljoin_bytes(*atoms):
"""Return the given path *atoms, joined into a single URL.
This will correctly join a SCRIPT_NAME and PATH_INFO into the
original URL, even if either atom is blank.
"""
url = ntob('/').join([x for x in atoms if x])
while ntob('//') in url:
url = url.replace(ntob('//'), ntob('/'))
# Special-case the final url of "", and return "/" instead.
return url or ntob('/')
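A few illustrative calls for the joining behaviour described above (a sketch; the atoms are arbitrary examples):
# urljoin drops blank atoms and collapses duplicate slashes.
assert urljoin('/script', '/path/info') == '/script/path/info'
assert urljoin('', '/path') == '/path'
assert urljoin('', '') == '/'   # an empty result is special-cased to '/'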
def protocol_from_http(protocol_str):
"""Return a protocol tuple from the given 'HTTP/x.y' string."""
return int(protocol_str[5]), int(protocol_str[7])
def get_ranges(headervalue, content_length):
"""Return a list of (start, stop) indices from a Range header, or None.
Each (start, stop) tuple will be composed of two ints, which are suitable
for use in a slicing operation. That is, the header "Range: bytes=3-6",
if applied against a Python string, is requesting resource[3:7]. This
function will return the list [(3, 7)].
If this function returns an empty list, you should return HTTP 416.
"""
if not headervalue:
return None
result = []
bytesunit, byteranges = headervalue.split('=', 1)
for brange in byteranges.split(','):
start, stop = [x.strip() for x in brange.split('-', 1)]
if start:
if not stop:
stop = content_length - 1
start, stop = int(start), int(stop)
if start >= content_length:
# From rfc 2616 sec 14.16:
# "If the server receives a request (other than one
# including an If-Range request-header field) with an
# unsatisfiable Range request-header field (that is,
# all of whose byte-range-spec values have a first-byte-pos
# value greater than the current length of the selected
# resource), it SHOULD return a response code of 416
# (Requested range not satisfiable)."
continue
if stop < start:
# From rfc 2616 sec 14.16:
# "If the server ignores a byte-range-spec because it
# is syntactically invalid, the server SHOULD treat
# the request as if the invalid Range header field
# did not exist. (Normally, this means return a 200
# response containing the full entity)."
return None
result.append((start, stop + 1))
else:
if not stop:
# See rfc quote above.
return None
# Negative subscript (last N bytes)
#
# RFC 2616 Section 14.35.1:
# If the entity is shorter than the specified suffix-length,
# the entire entity-body is used.
if int(stop) > content_length:
result.append((0, content_length))
else:
result.append((content_length - int(stop), content_length))
return result
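For instance, against a 10-byte resource the parsing above behaves like this (a small sketch derived from the code, not an exhaustive test):
# "bytes=3-6" asks for resource[3:7]; "bytes=-4" asks for the last four bytes.
assert get_ranges('bytes=3-6', 10) == [(3, 7)]
assert get_ranges('bytes=-4', 10) == [(6, 10)]
assert get_ranges('bytes=20-30', 10) == []   # unsatisfiable: caller should answer 416
assert get_ranges(None, 10) is None          # no Range header at all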
class HeaderElement(object):
"""An element (with parameters) from an HTTP header's element list."""
def __init__(self, value, params=None):
self.value = value
if params is None:
params = {}
self.params = params
def __cmp__(self, other):
return cmp(self.value, other.value)
def __lt__(self, other):
return self.value < other.value
def __str__(self):
p = [';%s=%s' % (k, v) for k, v in iteritems(self.params)]
return str('%s%s' % (self.value, ''.join(p)))
def __bytes__(self):
return ntob(self.__str__())
def __unicode__(self):
return ntou(self.__str__())
@staticmethod
def parse(elementstr):
"""Transform 'token;key=val' to ('token', {'key': 'val'})."""
initial_value, params = parse_header(elementstr)
return initial_value, params
@classmethod
def from_str(cls, elementstr):
"""Construct an instance from a string of the form 'token;key=val'."""
ival, params = cls.parse(elementstr)
return cls(ival, params)
q_separator = re.compile(r'; *q *=')
class AcceptElement(HeaderElement):
"""An element (with parameters) from an Accept* header's element list.
AcceptElement objects are comparable; the more-preferred object will be
"less than" the less-preferred object. They are also therefore sortable;
if you sort a list of AcceptElement objects, they will be listed in
priority order; the most preferred value will be first. Yes, it should
have been the other way around, but it's too late to fix now.
"""
@classmethod
def from_str(cls, elementstr):
qvalue = None
# The first "q" parameter (if any) separates the initial
# media-range parameter(s) (if any) from the accept-params.
atoms = q_separator.split(elementstr, 1)
media_range = atoms.pop(0).strip()
if atoms:
# The qvalue for an Accept header can have extensions. The other
# headers cannot, but it's easier to parse them as if they did.
qvalue = HeaderElement.from_str(atoms[0].strip())
media_type, params = cls.parse(media_range)
if qvalue is not None:
params['q'] = qvalue
return cls(media_type, params)
@property
def qvalue(self):
'The qvalue, or priority, of this value.'
val = self.params.get('q', '1')
if isinstance(val, HeaderElement):
val = val.value
return float(val)
def __cmp__(self, other):
diff = cmp(self.qvalue, other.qvalue)
if diff == 0:
diff = cmp(str(self), str(other))
return diff
def __lt__(self, other):
if self.qvalue == other.qvalue:
return str(self) < str(other)
else:
return self.qvalue < other.qvalue
RE_HEADER_SPLIT = re.compile(',(?=(?:[^"]*"[^"]*")*[^"]*$)')
def header_elements(fieldname, fieldvalue):
"""Return a sorted HeaderElement list from a comma-separated header string.
"""
if not fieldvalue:
return []
result = []
for element in RE_HEADER_SPLIT.split(fieldvalue):
if fieldname.startswith('Accept') or fieldname == 'TE':
hv = AcceptElement.from_str(element)
else:
hv = HeaderElement.from_str(element)
result.append(hv)
return list(reversed(sorted(result)))
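A short sketch of the ordering this produces for an Accept header (the header value is illustrative):
# Elements come back most-preferred first, per the qvalue comparison above.
elems = header_elements('Accept', 'text/plain;q=0.4, application/json, text/html;q=0.9')
assert [e.value for e in elems] == ['application/json', 'text/html', 'text/plain']
assert elems[0].qvalue == 1.0   # a missing q parameter defaults to 1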
def decode_TEXT(value):
r"""Decode :rfc:`2047` TEXT (e.g. "=?utf-8?q?f=C3=BCr?=" -> "f\xfcr")."""
atoms = decode_header(value)
decodedvalue = ''
for atom, charset in atoms:
if charset is not None:
atom = atom.decode(charset)
decodedvalue += atom
return decodedvalue
def valid_status(status):
"""Return legal HTTP status Code, Reason-phrase and Message.
The status arg must be an int, or a str that begins with an int.
If status is an int, or a str and no reason-phrase is supplied,
a default reason-phrase will be provided.
"""
if not status:
status = 200
status = str(status)
parts = status.split(' ', 1)
if len(parts) == 1:
# No reason supplied.
code, = parts
reason = None
else:
code, reason = parts
reason = reason.strip()
try:
code = int(code)
except ValueError:
raise ValueError('Illegal response status from server '
'(%s is non-numeric).' % repr(code))
if code < 100 or code > 599:
raise ValueError('Illegal response status from server '
'(%s is out of range).' % repr(code))
if code not in response_codes:
# code is unknown but not illegal
default_reason, message = '', ''
else:
default_reason, message = response_codes[code]
if reason is None:
reason = default_reason
return code, reason, message
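For example (a sketch; the default reason-phrases come from the response_codes table above):
# An int or bare numeric string picks up the default reason-phrase;
# a full "code reason" string keeps the reason it was given.
assert valid_status(404)[:2] == (404, 'Not Found')
assert valid_status('503 Back Soon')[:2] == (503, 'Back Soon')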
# NOTE: the parse_qs functions that follow are modified version of those
# in the python3.0 source - we need to pass through an encoding to the unquote
# method, but the default parse_qs function doesn't allow us to. These do.
def _parse_qs(qs, keep_blank_values=0, strict_parsing=0, encoding='utf-8'):
"""Parse a query given as a string argument.
Arguments:
qs: URL-encoded query string to be parsed
keep_blank_values: flag indicating whether blank values in
URL encoded queries should be treated as blank strings. A
true value indicates that blanks should be retained as blank
strings. The default false value indicates that blank values
are to be ignored and treated as if they were not included.
strict_parsing: flag indicating what to do with parsing errors. If
false (the default), errors are silently ignored. If true,
errors raise a ValueError exception.
Returns a dict, as G-d intended.
"""
pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
d = {}
for name_value in pairs:
if not name_value and not strict_parsing:
continue
nv = name_value.split('=', 1)
if len(nv) != 2:
if strict_parsing:
raise ValueError('bad query field: %r' % (name_value,))
# Handle case of a control-name with no equal sign
if keep_blank_values:
nv.append('')
else:
continue
if len(nv[1]) or keep_blank_values:
name = unquote_qs(nv[0], encoding)
value = unquote_qs(nv[1], encoding)
if name in d:
if not isinstance(d[name], list):
d[name] = [d[name]]
d[name].append(value)
else:
d[name] = value
return d
image_map_pattern = re.compile(r'[0-9]+,[0-9]+')
def parse_query_string(query_string, keep_blank_values=True, encoding='utf-8'):
"""Build a params dictionary from a query_string.
Duplicate key/value pairs in the provided query_string will be
returned as {'key': [val1, val2, ...]}. Single key/values will
be returned as strings: {'key': 'value'}.
"""
if image_map_pattern.match(query_string):
# Server-side image map. Map the coords to 'x' and 'y'
# (like CGI::Request does).
pm = query_string.split(',')
pm = {'x': int(pm[0]), 'y': int(pm[1])}
else:
pm = _parse_qs(query_string, keep_blank_values, encoding=encoding)
return pm
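A quick sketch of the duplicate-key behaviour described in the docstring:
# Repeated names collapse into a list; single names stay plain strings.
assert parse_query_string('a=1&a=2&b=3') == {'a': ['1', '2'], 'b': '3'}
# A bare "x,y" query string is treated as a server-side image map.
assert parse_query_string('3,7') == {'x': 3, 'y': 7}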
class CaseInsensitiveDict(dict):
"""A case-insensitive dict subclass.
Each key is changed on entry to str(key).title().
"""
def __getitem__(self, key):
return dict.__getitem__(self, str(key).title())
def __setitem__(self, key, value):
dict.__setitem__(self, str(key).title(), value)
def __delitem__(self, key):
dict.__delitem__(self, str(key).title())
def __contains__(self, key):
return dict.__contains__(self, str(key).title())
def get(self, key, default=None):
return dict.get(self, str(key).title(), default)
if hasattr({}, 'has_key'):
def has_key(self, key):
return str(key).title() in self
def update(self, E):
for k in E.keys():
self[str(k).title()] = E[k]
@classmethod
def fromkeys(cls, seq, value=None):
newdict = cls()
for k in seq:
newdict[str(k).title()] = value
return newdict
def setdefault(self, key, x=None):
key = str(key).title()
try:
return self[key]
except KeyError:
self[key] = x
return x
def pop(self, key, default):
return dict.pop(self, str(key).title(), default)
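Illustrative use (a sketch): every key is normalised with str(key).title(), so lookups are case-insensitive and the stored key is the title-cased form.
headers = CaseInsensitiveDict()
headers['content-type'] = 'text/html'
assert headers['CONTENT-TYPE'] == 'text/html'
assert list(headers.keys()) == ['Content-Type']   # stored under the title-cased key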
# TEXT = <any OCTET except CTLs, but including LWS>
#
# A CRLF is allowed in the definition of TEXT only as part of a header
# field continuation. It is expected that the folding LWS will be
# replaced with a single SP before interpretation of the TEXT value."
if str == bytes:
header_translate_table = ''.join([chr(i) for i in xrange(256)])
header_translate_deletechars = ''.join(
[chr(i) for i in xrange(32)]) + chr(127)
else:
header_translate_table = None
header_translate_deletechars = bytes(range(32)) + bytes([127])
class HeaderMap(CaseInsensitiveDict):
"""A dict subclass for HTTP request and response headers.
Each key is changed on entry to str(key).title(). This allows headers
to be case-insensitive and avoid duplicates.
Values are header values (decoded according to :rfc:`2047` if necessary).
"""
protocol = (1, 1)
encodings = ['ISO-8859-1']
# Someday, when http-bis is done, this will probably get dropped
# since few servers, clients, or intermediaries do it. But until then,
# we're going to obey the spec as is.
# "Words of *TEXT MAY contain characters from character sets other than
# ISO-8859-1 only when encoded according to the rules of RFC 2047."
use_rfc_2047 = True
def elements(self, key):
"""Return a sorted list of HeaderElements for the given header."""
key = str(key).title()
value = self.get(key)
return header_elements(key, value)
def values(self, key):
"""Return a sorted list of HeaderElement.value for the given header."""
return [e.value for e in self.elements(key)]
def output(self):
"""Transform self into a list of (name, value) tuples."""
return list(self.encode_header_items(self.items()))
@classmethod
def encode_header_items(cls, header_items):
"""
Prepare the sequence of name, value tuples into a form suitable for
transmitting on the wire for HTTP.
"""
for k, v in header_items:
if isinstance(k, six.text_type):
k = cls.encode(k)
if not isinstance(v, text_or_bytes):
v = str(v)
if isinstance(v, six.text_type):
v = cls.encode(v)
# See header_translate_* constants above.
# Replace only if you really know what you're doing.
k = k.translate(header_translate_table,
header_translate_deletechars)
v = v.translate(header_translate_table,
header_translate_deletechars)
yield (k, v)
@classmethod
def encode(cls, v):
"""Return the given header name or value, encoded for HTTP output."""
for enc in cls.encodings:
try:
return v.encode(enc)
except UnicodeEncodeError:
continue
if cls.protocol == (1, 1) and cls.use_rfc_2047:
# Encode RFC-2047 TEXT
# (e.g. u"\u8200" -> "=?utf-8?b?6IiA?=").
# We do our own here instead of using the email module
# because we never want to fold lines--folding has
# been deprecated by the HTTP working group.
v = b2a_base64(v.encode('utf-8'))
return (ntob('=?utf-8?b?') + v.strip(ntob('\n')) + ntob('?='))
raise ValueError('Could not encode header part %r using '
'any of the encodings %r.' %
(v, cls.encodings))
class Host(object):
"""An internet address.
name
Should be the client's host name. If not available (because no DNS
lookup is performed), the IP address should be used instead.
"""
ip = '0.0.0.0'
port = 80
name = 'unknown.tld'
def __init__(self, ip, port, name=None):
self.ip = ip
self.port = port
if name is None:
name = ip
self.name = name
def __repr__(self):
return 'httputil.Host(%r, %r, %r)' % (self.ip, self.port, self.name)


@@ -1,94 +0,0 @@
import cherrypy
from cherrypy._cpcompat import text_or_bytes, ntou, json_encode, json_decode
def json_processor(entity):
"""Read application/json data into request.json."""
if not entity.headers.get(ntou('Content-Length'), ntou('')):
raise cherrypy.HTTPError(411)
body = entity.fp.read()
with cherrypy.HTTPError.handle(ValueError, 400, 'Invalid JSON document'):
cherrypy.serving.request.json = json_decode(body.decode('utf-8'))
def json_in(content_type=[ntou('application/json'), ntou('text/javascript')],
force=True, debug=False, processor=json_processor):
"""Add a processor to parse JSON request entities:
The default processor places the parsed data into request.json.
Incoming request entities which match the given content_type(s) will
be deserialized from JSON to the Python equivalent, and the result
stored at cherrypy.request.json. The 'content_type' argument may
be a Content-Type string or a list of allowable Content-Type strings.
If the 'force' argument is True (the default), then entities of other
content types will not be allowed; "415 Unsupported Media Type" is
raised instead.
Supply your own processor to use a custom decoder, or to handle the parsed
data differently. The processor can be configured via
tools.json_in.processor or via the decorator method.
Note that the deserializer requires the client send a Content-Length
request header, or it will raise "411 Length Required". If for any
other reason the request entity cannot be deserialized from JSON,
it will raise "400 Bad Request: Invalid JSON document".
You must be using Python 2.6 or greater, or have the 'simplejson'
package importable; otherwise, ValueError is raised during processing.
"""
request = cherrypy.serving.request
if isinstance(content_type, text_or_bytes):
content_type = [content_type]
if force:
if debug:
cherrypy.log('Removing body processors %s' %
repr(request.body.processors.keys()), 'TOOLS.JSON_IN')
request.body.processors.clear()
request.body.default_proc = cherrypy.HTTPError(
415, 'Expected an entity of content type %s' %
', '.join(content_type))
for ct in content_type:
if debug:
cherrypy.log('Adding body processor for %s' % ct, 'TOOLS.JSON_IN')
request.body.processors[ct] = processor
def json_handler(*args, **kwargs):
value = cherrypy.serving.request._json_inner_handler(*args, **kwargs)
return json_encode(value)
def json_out(content_type='application/json', debug=False,
handler=json_handler):
"""Wrap request.handler to serialize its output to JSON. Sets Content-Type.
If the given content_type is None, the Content-Type response header
is not set.
Provide your own handler to use a custom encoder. For example
cherrypy.config['tools.json_out.handler'] = <function>, or
@json_out(handler=function).
You must be using Python 2.6 or greater, or have the 'simplejson'
package importable; otherwise, ValueError is raised during processing.
"""
request = cherrypy.serving.request
# request.handler may be set to None by e.g. the caching tool
# to signal to all components that a response body has already
# been attached, in which case we don't need to wrap anything.
if request.handler is None:
return
if debug:
cherrypy.log('Replacing %s with JSON handler' % request.handler,
'TOOLS.JSON_OUT')
request._json_inner_handler = request.handler
request.handler = handler
if content_type is not None:
if debug:
cherrypy.log('Setting Content-Type to %s' %
content_type, 'TOOLS.JSON_OUT')
cherrypy.serving.response.headers['Content-Type'] = content_type
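A minimal sketch of wiring both tools onto a handler; Root is a hypothetical application class, and the decorators assume the standard registrations cherrypy.tools.json_in / cherrypy.tools.json_out:
import cherrypy
class Root(object):
    @cherrypy.expose
    @cherrypy.tools.json_in()    # parses the request body into cherrypy.request.json
    @cherrypy.tools.json_out()   # serialises the return value and sets Content-Type
    def echo(self):
        return {'received': cherrypy.request.json}
# cherrypy.quickstart(Root())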


@@ -1,142 +0,0 @@
"""
Platform-independent file locking. Inspired by and modeled after zc.lockfile.
"""
import os
try:
import msvcrt
except ImportError:
pass
try:
import fcntl
except ImportError:
pass
class LockError(Exception):
'Could not obtain a lock'
msg = 'Unable to lock %r'
def __init__(self, path):
super(LockError, self).__init__(self.msg % path)
class UnlockError(LockError):
'Could not release a lock'
msg = 'Unable to unlock %r'
# first, a default, naive locking implementation
class LockFile(object):
"""
A default, naive locking implementation. Always fails if the file
already exists.
"""
def __init__(self, path):
self.path = path
try:
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
except OSError:
raise LockError(self.path)
os.close(fd)
def release(self):
os.remove(self.path)
def remove(self):
pass
class SystemLockFile(object):
"""
An abstract base class for platform-specific locking.
"""
def __init__(self, path):
self.path = path
try:
# Open lockfile for writing without truncation:
self.fp = open(path, 'r+')
except IOError:
# If the file doesn't exist, IOError is raised; Use a+ instead.
# Note that there may be a race here. Multiple processes
# could fail on the r+ open and open the file a+, but only
# one will get the lock and write a pid.
self.fp = open(path, 'a+')
try:
self._lock_file()
except:
self.fp.seek(1)
self.fp.close()
del self.fp
raise
self.fp.write(' %s\n' % os.getpid())
self.fp.truncate()
self.fp.flush()
def release(self):
if not hasattr(self, 'fp'):
return
self._unlock_file()
self.fp.close()
del self.fp
def remove(self):
"""
Attempt to remove the file
"""
try:
os.remove(self.path)
except:
pass
def _unlock_file(self):
"""Attempt to obtain the lock on self.fp. Raise UnlockError if not
released."""
class WindowsLockFile(SystemLockFile):
def _lock_file(self):
# Lock just the first byte
try:
msvcrt.locking(self.fp.fileno(), msvcrt.LK_NBLCK, 1)
except IOError:
raise LockError(self.fp.name)
def _unlock_file(self):
try:
self.fp.seek(0)
msvcrt.locking(self.fp.fileno(), msvcrt.LK_UNLCK, 1)
except IOError:
raise UnlockError(self.fp.name)
if 'msvcrt' in globals():
LockFile = WindowsLockFile
class UnixLockFile(SystemLockFile):
def _lock_file(self):
flags = fcntl.LOCK_EX | fcntl.LOCK_NB
try:
fcntl.flock(self.fp.fileno(), flags)
except IOError:
raise LockError(self.fp.name)
# no need to implement _unlock_file, it will be unlocked on close()
if 'fcntl' in globals():
LockFile = UnixLockFile
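Whichever concrete class ends up bound to the LockFile name (Windows, Unix, or the naive fallback), the calling pattern is the same; a sketch with an illustrative path:
lock = LockFile('/tmp/session-0001.lock')   # raises LockError if the lock is already held
try:
    pass  # ... do the exclusive work here ...
finally:
    lock.release()   # release the lock (the naive variant deletes the file here)
    lock.remove()    # best-effort cleanup of the lock file (a no-op for the naive variant)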


@@ -1,47 +0,0 @@
import datetime
class NeverExpires(object):
def expired(self):
return False
class Timer(object):
"""
A simple timer that will indicate when an expiration time has passed.
"""
def __init__(self, expiration):
'Create a timer that expires at `expiration` (UTC datetime)'
self.expiration = expiration
@classmethod
def after(cls, elapsed):
"""
Return a timer that will expire after `elapsed` passes.
"""
return cls(datetime.datetime.utcnow() + elapsed)
def expired(self):
return datetime.datetime.utcnow() >= self.expiration
class LockTimeout(Exception):
'An exception when a lock could not be acquired before a timeout period'
class LockChecker(object):
"""
Keep track of the time and detect if a timeout has expired
"""
def __init__(self, session_id, timeout):
self.session_id = session_id
if timeout:
self.timer = Timer.after(timeout)
else:
self.timer = NeverExpires()
def expired(self):
if self.timer.expired():
raise LockTimeout(
'Timeout acquiring lock for %(session_id)s' % vars(self))
return False
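For instance (a sketch), a session-lock loop might poll like this; try_acquire_lock is a hypothetical acquisition attempt supplied by the caller:
import datetime
import time
checker = LockChecker(session_id='abc123', timeout=datetime.timedelta(seconds=5))
while not try_acquire_lock():   # hypothetical: returns True once the lock is obtained
    checker.expired()           # raises LockTimeout once the 5 seconds have elapsed
    time.sleep(0.1)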


@@ -3,95 +3,97 @@
CherryPy users
==============
You can profile any of your pages as follows::
You can profile any of your pages as follows:
from cherrypy.lib import profiler
class Root:
p = profiler.Profiler("/path/to/profile/dir")
@cherrypy.expose
p = profile.Profiler("/path/to/profile/dir")
def index(self):
self.p.run(self._index)
index.exposed = True
def _index(self):
return "Hello, world!"
cherrypy.tree.mount(Root())
You can also turn on profiling for all requests
using the ``make_app`` function as WSGI middleware.
using the make_app function as WSGI middleware.
CherryPy developers
===================
This module can be used whenever you make changes to CherryPy,
to get a quick sanity-check on overall CP performance. Use the
``--profile`` flag when running the test suite. Then, use the ``serve()``
"--profile" flag when running the test suite. Then, use the serve()
function to browse the results in a web browser. If you run this
module from the command line, it will call ``serve()`` for you.
module from the command line, it will call serve() for you.
"""
import io
import os
import os.path
import sys
import warnings
import cherrypy
# Make profiler output more readable by adding __init__ modules' parents.
def new_func_strip_path(func_name):
filename, line, name = func_name
if filename.endswith("__init__.py"):
return os.path.basename(filename[:-12]) + filename[-12:], line, name
return os.path.basename(filename), line, name
try:
import profile
import pstats
def new_func_strip_path(func_name):
"""Make profiler output more readable by adding `__init__` modules' parents
"""
filename, line, name = func_name
if filename.endswith('__init__.py'):
return os.path.basename(filename[:-12]) + filename[-12:], line, name
return os.path.basename(filename), line, name
pstats.func_strip_path = new_func_strip_path
except ImportError:
profile = None
pstats = None
import warnings
msg = ("Your installation of Python does not have a profile module. "
"If you're on Debian, try `sudo apt-get install python-profiler`. "
"See http://www.cherrypy.org/wiki/ProfilingOnDebian for details.")
warnings.warn(msg)
import os, os.path
import sys
try:
import cStringIO as StringIO
except ImportError:
import StringIO
_count = 0
class Profiler(object):
def __init__(self, path=None):
if not path:
path = os.path.join(os.path.dirname(__file__), 'profile')
path = os.path.join(os.path.dirname(__file__), "profile")
self.path = path
if not os.path.exists(path):
os.makedirs(path)
def run(self, func, *args, **params):
"""Dump profile data into self.path."""
global _count
c = _count = _count + 1
path = os.path.join(self.path, 'cp_%04d.prof' % c)
path = os.path.join(self.path, "cp_%04d.prof" % c)
prof = profile.Profile()
result = prof.runcall(func, *args, **params)
prof.dump_stats(path)
return result
def statfiles(self):
""":rtype: list of available profiles.
"""
"""statfiles() -> list of available profiles."""
return [f for f in os.listdir(self.path)
if f.startswith('cp_') and f.endswith('.prof')]
if f.startswith("cp_") and f.endswith(".prof")]
def stats(self, filename, sortby='cumulative'):
""":rtype stats(index): output of print_stats() for the given profile.
"""
sio = io.StringIO()
"""stats(index) -> output of print_stats() for the given profile."""
sio = StringIO.StringIO()
if sys.version_info >= (2, 5):
s = pstats.Stats(os.path.join(self.path, filename), stream=sio)
s.strip_dirs()
@@ -112,8 +114,7 @@ class Profiler(object):
response = sio.getvalue()
sio.close()
return response
@cherrypy.expose
def index(self):
return """<html>
<head><title>CherryPy profile data</title></head>
@@ -123,71 +124,57 @@ class Profiler(object):
</frameset>
</html>
"""
@cherrypy.expose
index.exposed = True
def menu(self):
yield '<h2>Profiling runs</h2>'
yield '<p>Click on one of the runs below to see profiling data.</p>'
yield "<h2>Profiling runs</h2>"
yield "<p>Click on one of the runs below to see profiling data.</p>"
runs = self.statfiles()
runs.sort()
for i in runs:
yield "<a href='report?filename=%s' target='main'>%s</a><br />" % (
i, i)
@cherrypy.expose
yield "<a href='report?filename=%s' target='main'>%s</a><br />" % (i, i)
menu.exposed = True
def report(self, filename):
import cherrypy
cherrypy.response.headers['Content-Type'] = 'text/plain'
return self.stats(filename)
report.exposed = True
class ProfileAggregator(Profiler):
def __init__(self, path=None):
Profiler.__init__(self, path)
global _count
self.count = _count = _count + 1
self.profiler = profile.Profile()
def run(self, func, *args, **params):
path = os.path.join(self.path, 'cp_%04d.prof' % self.count)
result = self.profiler.runcall(func, *args, **params)
def run(self, func, *args):
path = os.path.join(self.path, "cp_%04d.prof" % self.count)
result = self.profiler.runcall(func, *args)
self.profiler.dump_stats(path)
return result
class make_app:
def __init__(self, nextapp, path=None, aggregate=False):
"""Make a WSGI middleware app which wraps 'nextapp' with profiling.
nextapp
the WSGI application to wrap, usually an instance of
nextapp: the WSGI application to wrap, usually an instance of
cherrypy.Application.
path
where to dump the profiling output.
aggregate
if True, profile data for all HTTP requests will go in
path: where to dump the profiling output.
aggregate: if True, profile data for all HTTP requests will go in
a single file. If False (the default), each HTTP request will
dump its profile data into a separate file.
"""
if profile is None or pstats is None:
msg = ('Your installation of Python does not have a profile '
"module. If you're on Debian, try "
'`sudo apt-get install python-profiler`. '
'See http://www.cherrypy.org/wiki/ProfilingOnDebian '
'for details.')
warnings.warn(msg)
self.nextapp = nextapp
self.aggregate = aggregate
if aggregate:
self.profiler = ProfileAggregator(path)
else:
self.profiler = Profiler(path)
def __call__(self, environ, start_response):
def gather():
result = []
@@ -198,20 +185,14 @@ class make_app:
def serve(path=None, port=8080):
if profile is None or pstats is None:
msg = ('Your installation of Python does not have a profile module. '
"If you're on Debian, try "
'`sudo apt-get install python-profiler`. '
'See http://www.cherrypy.org/wiki/ProfilingOnDebian '
'for details.')
warnings.warn(msg)
import cherrypy
cherrypy.config.update({'server.socket_port': int(port),
'server.thread_pool': 10,
'environment': 'production',
'environment': "production",
})
cherrypy.quickstart(Profiler(path))
if __name__ == '__main__':
if __name__ == "__main__":
serve(*tuple(sys.argv[1:]))
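A sketch of the WSGI-middleware route mentioned above, assuming an existing CherryPy application whose root class is called Root here:
import cherrypy
from cherrypy.lib import profiler
app = cherrypy.Application(Root(), '/')
profiled_app = profiler.make_app(app, path='/tmp/profile', aggregate=False)
cherrypy.tree.graft(profiled_app, '/')
# Later, profiler.serve('/tmp/profile', port=8081) browses the dumped stats.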


@@ -1,534 +0,0 @@
"""Generic configuration system using unrepr.
Configuration data may be supplied as a Python dictionary, as a filename,
or as an open file object. When you supply a filename or file, Python's
builtin ConfigParser is used (with some extensions).
Namespaces
----------
Configuration keys are separated into namespaces by the first "." in the key.
The only key that cannot exist in a namespace is the "environment" entry.
This special entry 'imports' other config entries from a template stored in
the Config.environments dict.
You can define your own namespaces to be called when new config is merged
by adding a named handler to Config.namespaces. The name can be any string,
and the handler must be either a callable or a context manager.
"""
try:
# Python 3.0+
from configparser import ConfigParser
except ImportError:
from ConfigParser import ConfigParser
try:
text_or_bytes
except NameError:
text_or_bytes = str
try:
# Python 3
import builtins
except ImportError:
# Python 2
import __builtin__ as builtins
import operator as _operator
import sys
def as_dict(config):
"""Return a dict from 'config' whether it is a dict, file, or filename."""
if isinstance(config, text_or_bytes):
config = Parser().dict_from_file(config)
elif hasattr(config, 'read'):
config = Parser().dict_from_file(config)
return config
class NamespaceSet(dict):
"""A dict of config namespace names and handlers.
Each config entry should begin with a namespace name; the corresponding
namespace handler will be called once for each config entry in that
namespace, and will be passed two arguments: the config key (with the
namespace removed) and the config value.
Namespace handlers may be any Python callable; they may also be
Python 2.5-style 'context managers', in which case their __enter__
method should return a callable to be used as the handler.
See cherrypy.tools (the Toolbox class) for an example.
"""
def __call__(self, config):
"""Iterate through config and pass it to each namespace handler.
config
A flat dict, where keys use dots to separate
namespaces, and values are arbitrary.
The first name in each config key is used to look up the corresponding
namespace handler. For example, a config entry of {'tools.gzip.on': v}
will call the 'tools' namespace handler with the args: ('gzip.on', v)
"""
# Separate the given config into namespaces
ns_confs = {}
for k in config:
if '.' in k:
ns, name = k.split('.', 1)
bucket = ns_confs.setdefault(ns, {})
bucket[name] = config[k]
# I chose __enter__ and __exit__ so someday this could be
# rewritten using Python 2.5's 'with' statement:
# for ns, handler in self.iteritems():
# with handler as callable:
# for k, v in ns_confs.get(ns, {}).iteritems():
# callable(k, v)
for ns, handler in self.items():
exit = getattr(handler, '__exit__', None)
if exit:
callable = handler.__enter__()
no_exc = True
try:
try:
for k, v in ns_confs.get(ns, {}).items():
callable(k, v)
except:
# The exceptional case is handled here
no_exc = False
if exit is None:
raise
if not exit(*sys.exc_info()):
raise
# The exception is swallowed if exit() returns true
finally:
# The normal and non-local-goto cases are handled here
if no_exc and exit:
exit(None, None, None)
else:
for k, v in ns_confs.get(ns, {}).items():
handler(k, v)
def __repr__(self):
return '%s.%s(%s)' % (self.__module__, self.__class__.__name__,
dict.__repr__(self))
def __copy__(self):
newobj = self.__class__()
newobj.update(self)
return newobj
copy = __copy__
class Config(dict):
"""A dict-like set of configuration data, with defaults and namespaces.
May take a file, filename, or dict.
"""
defaults = {}
environments = {}
namespaces = NamespaceSet()
def __init__(self, file=None, **kwargs):
self.reset()
if file is not None:
self.update(file)
if kwargs:
self.update(kwargs)
def reset(self):
"""Reset self to default values."""
self.clear()
dict.update(self, self.defaults)
def update(self, config):
"""Update self from a dict, file or filename."""
if isinstance(config, text_or_bytes):
# Filename
config = Parser().dict_from_file(config)
elif hasattr(config, 'read'):
# Open file object
config = Parser().dict_from_file(config)
else:
config = config.copy()
self._apply(config)
def _apply(self, config):
"""Update self from a dict."""
which_env = config.get('environment')
if which_env:
env = self.environments[which_env]
for k in env:
if k not in config:
config[k] = env[k]
dict.update(self, config)
self.namespaces(config)
def __setitem__(self, k, v):
dict.__setitem__(self, k, v)
self.namespaces({k: v})
class Parser(ConfigParser):
"""Sub-class of ConfigParser that keeps the case of options and that
raises an exception if the file cannot be read.
"""
def optionxform(self, optionstr):
return optionstr
def read(self, filenames):
if isinstance(filenames, text_or_bytes):
filenames = [filenames]
for filename in filenames:
# try:
# fp = open(filename)
# except IOError:
# continue
fp = open(filename)
try:
self._read(fp, filename)
finally:
fp.close()
def as_dict(self, raw=False, vars=None):
"""Convert an INI file to a dictionary"""
# Load INI file into a dict
result = {}
for section in self.sections():
if section not in result:
result[section] = {}
for option in self.options(section):
value = self.get(section, option, raw=raw, vars=vars)
try:
value = unrepr(value)
except Exception:
x = sys.exc_info()[1]
msg = ('Config error in section: %r, option: %r, '
'value: %r. Config values must be valid Python.' %
(section, option, value))
raise ValueError(msg, x.__class__.__name__, x.args)
result[section][option] = value
return result
def dict_from_file(self, file):
if hasattr(file, 'read'):
self.readfp(file)
else:
self.read(file)
return self.as_dict()
# public domain "unrepr" implementation, found on the web and then improved.
class _Builder2:
def build(self, o):
m = getattr(self, 'build_' + o.__class__.__name__, None)
if m is None:
raise TypeError('unrepr does not recognize %s' %
repr(o.__class__.__name__))
return m(o)
def astnode(self, s):
"""Return a Python2 ast Node compiled from a string."""
try:
import compiler
except ImportError:
# Fallback to eval when compiler package is not available,
# e.g. IronPython 1.0.
return eval(s)
p = compiler.parse('__tempvalue__ = ' + s)
return p.getChildren()[1].getChildren()[0].getChildren()[1]
def build_Subscript(self, o):
expr, flags, subs = o.getChildren()
expr = self.build(expr)
subs = self.build(subs)
return expr[subs]
def build_CallFunc(self, o):
children = o.getChildren()
# Build callee from first child
callee = self.build(children[0])
# Build args and kwargs from remaining children
args = []
kwargs = {}
for child in children[1:]:
class_name = child.__class__.__name__
# None is ignored
if class_name == 'NoneType':
continue
# Keywords become kwargs
if class_name == 'Keyword':
kwargs.update(self.build(child))
# Everything else becomes args
else :
args.append(self.build(child))
return callee(*args, **kwargs)
def build_Keyword(self, o):
key, value_obj = o.getChildren()
value = self.build(value_obj)
kw_dict = {key: value}
return kw_dict
def build_List(self, o):
return map(self.build, o.getChildren())
def build_Const(self, o):
return o.value
def build_Dict(self, o):
d = {}
i = iter(map(self.build, o.getChildren()))
for el in i:
d[el] = i.next()
return d
def build_Tuple(self, o):
return tuple(self.build_List(o))
def build_Name(self, o):
name = o.name
if name == 'None':
return None
if name == 'True':
return True
if name == 'False':
return False
# See if the Name is a package or module. If it is, import it.
try:
return modules(name)
except ImportError:
pass
# See if the Name is in builtins.
try:
return getattr(builtins, name)
except AttributeError:
pass
raise TypeError('unrepr could not resolve the name %s' % repr(name))
def build_Add(self, o):
left, right = map(self.build, o.getChildren())
return left + right
def build_Mul(self, o):
left, right = map(self.build, o.getChildren())
return left * right
def build_Getattr(self, o):
parent = self.build(o.expr)
return getattr(parent, o.attrname)
def build_NoneType(self, o):
return None
def build_UnarySub(self, o):
return -self.build(o.getChildren()[0])
def build_UnaryAdd(self, o):
return self.build(o.getChildren()[0])
class _Builder3:
def build(self, o):
m = getattr(self, 'build_' + o.__class__.__name__, None)
if m is None:
raise TypeError('unrepr does not recognize %s' %
repr(o.__class__.__name__))
return m(o)
def astnode(self, s):
"""Return a Python3 ast Node compiled from a string."""
try:
import ast
except ImportError:
# Fallback to eval when ast package is not available,
# e.g. IronPython 1.0.
return eval(s)
p = ast.parse('__tempvalue__ = ' + s)
return p.body[0].value
def build_Subscript(self, o):
return self.build(o.value)[self.build(o.slice)]
def build_Index(self, o):
return self.build(o.value)
def _build_call35(self, o):
"""
Workaround for python 3.5 _ast.Call signature, docs found here
https://greentreesnakes.readthedocs.org/en/latest/nodes.html
"""
import ast
callee = self.build(o.func)
args = []
if o.args is not None:
for a in o.args:
if isinstance(a, ast.Starred):
args.append(self.build(a.value))
else:
args.append(self.build(a))
kwargs = {}
for kw in o.keywords:
if kw.arg is None: # double asterisk `**`
rst = self.build(kw.value)
if not isinstance(rst, dict):
raise TypeError('Invalid argument for call.'
'Must be a mapping object.')
# give preference to the keys set directly from arg=value
for k, v in rst.items():
if k not in kwargs:
kwargs[k] = v
else: # defined on the call as: arg=value
kwargs[kw.arg] = self.build(kw.value)
return callee(*args, **kwargs)
def build_Call(self, o):
if sys.version_info >= (3, 5):
return self._build_call35(o)
callee = self.build(o.func)
if o.args is None:
args = ()
else:
args = tuple([self.build(a) for a in o.args])
if o.starargs is None:
starargs = ()
else:
starargs = tuple(self.build(o.starargs))
if o.kwargs is None:
kwargs = {}
else:
kwargs = self.build(o.kwargs)
if o.keywords is not None: # direct a=b keywords
for kw in o.keywords:
# preference because is a direct keyword against **kwargs
kwargs[kw.arg] = self.build(kw.value)
return callee(*(args + starargs), **kwargs)
def build_List(self, o):
return list(map(self.build, o.elts))
def build_Str(self, o):
return o.s
def build_Num(self, o):
return o.n
def build_Dict(self, o):
return dict([(self.build(k), self.build(v))
for k, v in zip(o.keys, o.values)])
def build_Tuple(self, o):
return tuple(self.build_List(o))
def build_Name(self, o):
name = o.id
if name == 'None':
return None
if name == 'True':
return True
if name == 'False':
return False
# See if the Name is a package or module. If it is, import it.
try:
return modules(name)
except ImportError:
pass
# See if the Name is in builtins.
try:
import builtins
return getattr(builtins, name)
except AttributeError:
pass
raise TypeError('unrepr could not resolve the name %s' % repr(name))
def build_NameConstant(self, o):
return o.value
def build_UnaryOp(self, o):
op, operand = map(self.build, [o.op, o.operand])
return op(operand)
def build_BinOp(self, o):
left, op, right = map(self.build, [o.left, o.op, o.right])
return op(left, right)
def build_Add(self, o):
return _operator.add
def build_Mult(self, o):
return _operator.mul
def build_USub(self, o):
return _operator.neg
def build_Attribute(self, o):
parent = self.build(o.value)
return getattr(parent, o.attr)
def build_NoneType(self, o):
return None
def unrepr(s):
"""Return a Python object compiled from a string."""
if not s:
return s
if sys.version_info < (3, 0):
b = _Builder2()
else:
b = _Builder3()
obj = b.astnode(s)
return b.build(obj)
def modules(modulePath):
"""Load a module and retrieve a reference to that module."""
__import__(modulePath)
return sys.modules[modulePath]
def attributes(full_attribute_name):
"""Load a module and retrieve an attribute of that module."""
# Parse out the path, module, and attribute
last_dot = full_attribute_name.rfind('.')
attr_name = full_attribute_name[last_dot + 1:]
mod_path = full_attribute_name[:last_dot]
mod = modules(mod_path)
# Let an AttributeError propagate outward.
try:
attr = getattr(mod, attr_name)
except AttributeError:
raise AttributeError("'%s' object has no attribute '%s'"
% (mod_path, attr_name))
# Return a reference to the attribute.
return attr
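A sketch of what unrepr accepts and how Config consumes values, for the Python versions this module targets; the values are illustrative:
unrepr("{'a': 1 + 2}")          # -> {'a': 3}
unrepr("['x', None, True]")     # -> ['x', None, True]
# Config takes a dict, a filename, or an open file; INI values must be valid Python:
#
#   [global]
#   server.socket_port = 8080
#   tools.gzip.on = True
cfg = Config()
cfg.update({'server.socket_port': 8080})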

cherrypy/lib/safemime.py — new file, 128 lines

@@ -0,0 +1,128 @@
import cherrypy
class MultipartWrapper(object):
"""Wraps a file-like object, returning '' when Content-Length is reached.
The cgi module's logic for reading multipart MIME messages doesn't
allow the parts to know when the Content-Length for the entire message
has been reached, and doesn't allow for multipart-MIME messages that
omit the trailing CRLF (Flash 8's FileReference.upload(url), for example,
does this). The read_lines_to_outerboundary function gets stuck in a loop
until the socket times out.
This rfile wrapper simply monitors the incoming stream. When a read is
attempted past the Content-Length, it returns an empty string rather
than timing out (of course, if the last read *overlaps* the C-L, you'll
get the last bit of data up to C-L, and then the next read will return
an empty string).
"""
def __init__(self, rfile, clen):
self.rfile = rfile
self.clen = clen
self.bytes_read = 0
def read(self, size = None):
if self.clen:
# Return '' if we've read all the data.
if self.bytes_read >= self.clen:
return ''
# Reduce 'size' if it's over our limit.
new_bytes_read = self.bytes_read + size
if new_bytes_read > self.clen:
size = self.clen - self.bytes_read
data = self.rfile.read(size)
self.bytes_read += len(data)
return data
def readline(self, size = None):
if size is not None:
if self.clen:
# Return '' if we've read all the data.
if self.bytes_read >= self.clen:
return ''
# Reduce 'size' if it's over our limit.
new_bytes_read = self.bytes_read + size
if new_bytes_read > self.clen:
size = self.clen - self.bytes_read
data = self.rfile.readline(size)
self.bytes_read += len(data)
return data
# User didn't specify a size ...
# We read the line in chunks to make sure it's not a 100MB line !
res = []
size = 256
while True:
if self.clen:
# Return if we've read all the data.
if self.bytes_read >= self.clen:
return ''.join(res)
# Reduce 'size' if it's over our limit.
new_bytes_read = self.bytes_read + size
if new_bytes_read > self.clen:
size = self.clen - self.bytes_read
data = self.rfile.readline(size)
self.bytes_read += len(data)
res.append(data)
# See http://www.cherrypy.org/ticket/421
if len(data) < size or data[-1:] == "\n":
return ''.join(res)
def readlines(self, sizehint = 0):
# Shamelessly stolen from StringIO
total = 0
lines = []
line = self.readline()
while line:
lines.append(line)
total += len(line)
if 0 < sizehint <= total:
break
line = self.readline()
return lines
def close(self):
self.rfile.close()
def __iter__(self):
return self.rfile
def next(self):
if self.clen:
# Return '' if we've read all the data.
if self.bytes_read >= self.clen:
return ''
data = self.rfile.next()
self.bytes_read += len(data)
return data
def safe_multipart(flash_only=False):
"""Wrap request.rfile in a reader that won't crash on no trailing CRLF."""
h = cherrypy.request.headers
if not h.get('Content-Type','').startswith('multipart/'):
return
if flash_only and not 'Shockwave Flash' in h.get('User-Agent', ''):
return
clen = h.get('Content-Length', '0')
try:
clen = int(clen)
except ValueError:
return
cherrypy.request.rfile = MultipartWrapper(cherrypy.request.rfile, clen)
def init():
"""Create a Tool for safe_multipart and add it to cherrypy.tools."""
cherrypy.tools.safe_multipart = cherrypy.Tool('before_request_body',
safe_multipart)
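Once init() has registered the tool, it can be switched on per-path in the usual CherryPy way; a sketch in which '/upload' and Root are hypothetical:
from cherrypy.lib import safemime
safemime.init()   # registers cherrypy.tools.safe_multipart
config = {
    '/upload': {
        'tools.safe_multipart.on': True,
        # Restrict the workaround to Flash uploads only, as described above:
        'tools.safe_multipart.flash_only': True,
    },
}
# cherrypy.quickstart(Root(), '/', config=config)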


File diff suppressed because it is too large


@@ -1,256 +1,141 @@
import mimetypes
mimetypes.init()
mimetypes.types_map['.dwg']='image/x-dwg'
mimetypes.types_map['.ico']='image/x-icon'
import os
import re
import stat
import mimetypes
try:
from io import UnsupportedOperation
except ImportError:
UnsupportedOperation = object()
import time
import urllib
import cherrypy
from cherrypy._cpcompat import ntob, unquote
from cherrypy.lib import cptools, httputil, file_generator_limited
from cherrypy.lib import cptools, http, file_generator_limited
mimetypes.init()
mimetypes.types_map['.dwg'] = 'image/x-dwg'
mimetypes.types_map['.ico'] = 'image/x-icon'
mimetypes.types_map['.bz2'] = 'application/x-bzip2'
mimetypes.types_map['.gz'] = 'application/x-gzip'
def serve_file(path, content_type=None, disposition=None, name=None,
debug=False):
"""Set status, headers, and body in order to serve the given path.
def serve_file(path, content_type=None, disposition=None, name=None):
"""Set status, headers, and body in order to serve the given file.
The Content-Type header will be set to the content_type arg, if provided.
If not provided, the Content-Type will be guessed by the file extension
of the 'path' argument.
If disposition is not None, the Content-Disposition header will be set
to "<disposition>; filename=<name>". If name is None, it will be set
to the basename of path. If disposition is None, no Content-Disposition
header will be written.
"""
response = cherrypy.serving.response
response = cherrypy.response
# If path is relative, users should fix it by making path absolute.
# That is, CherryPy should not guess where the application root is.
# It certainly should *not* use cwd (since CP may be invoked from a
# variety of paths). If using tools.staticdir, you can make your relative
# paths become absolute by supplying a value for "tools.staticdir.root".
if not os.path.isabs(path):
msg = "'%s' is not an absolute path." % path
if debug:
cherrypy.log(msg, 'TOOLS.STATICFILE')
raise ValueError(msg)
raise ValueError("'%s' is not an absolute path." % path)
try:
st = os.stat(path)
except (OSError, TypeError, ValueError):
# OSError when file fails to stat
# TypeError on Python 2 when there's a null byte
# ValueError on Python 3 when there's a null byte
if debug:
cherrypy.log('os.stat(%r) failed' % path, 'TOOLS.STATIC')
except OSError:
raise cherrypy.NotFound()
# Check if path is a directory.
if stat.S_ISDIR(st.st_mode):
# Let the caller deal with it as they like.
if debug:
cherrypy.log('%r is a directory' % path, 'TOOLS.STATIC')
raise cherrypy.NotFound()
# Set the Last-Modified response header, so that
# modified-since validation code can work.
response.headers['Last-Modified'] = httputil.HTTPDate(st.st_mtime)
response.headers['Last-Modified'] = http.HTTPDate(st.st_mtime)
cptools.validate_since()
if content_type is None:
# Set content-type based on filename extension
ext = ''
ext = ""
i = path.rfind('.')
if i != -1:
ext = path[i:].lower()
content_type = mimetypes.types_map.get(ext, None)
if content_type is not None:
response.headers['Content-Type'] = content_type
if debug:
cherrypy.log('Content-Type: %r' % content_type, 'TOOLS.STATIC')
cd = None
content_type = mimetypes.types_map.get(ext, "text/plain")
response.headers['Content-Type'] = content_type
if disposition is not None:
if name is None:
name = os.path.basename(path)
cd = '%s; filename="%s"' % (disposition, name)
response.headers['Content-Disposition'] = cd
if debug:
cherrypy.log('Content-Disposition: %r' % cd, 'TOOLS.STATIC')
response.headers["Content-Disposition"] = cd
# Set Content-Length and use an iterable (file object)
# this way CP won't load the whole file in memory
content_length = st.st_size
fileobj = open(path, 'rb')
return _serve_fileobj(fileobj, content_type, content_length, debug=debug)
def serve_fileobj(fileobj, content_type=None, disposition=None, name=None,
debug=False):
"""Set status, headers, and body in order to serve the given file object.
The Content-Type header will be set to the content_type arg, if provided.
If disposition is not None, the Content-Disposition header will be set
to "<disposition>; filename=<name>". If name is None, 'filename' will
not be set. If disposition is None, no Content-Disposition header will
be written.
CAUTION: If the request contains a 'Range' header, one or more seek()s will
be performed on the file object. This may cause undesired behavior if
the file object is not seekable. It could also produce undesired results
if the caller set the read position of the file object prior to calling
serve_fileobj(), expecting that the data would be served starting from that
position.
"""
response = cherrypy.serving.response
try:
st = os.fstat(fileobj.fileno())
except AttributeError:
if debug:
cherrypy.log('os has no fstat attribute', 'TOOLS.STATIC')
content_length = None
except UnsupportedOperation:
content_length = None
else:
# Set the Last-Modified response header, so that
# modified-since validation code can work.
response.headers['Last-Modified'] = httputil.HTTPDate(st.st_mtime)
cptools.validate_since()
content_length = st.st_size
if content_type is not None:
response.headers['Content-Type'] = content_type
if debug:
cherrypy.log('Content-Type: %r' % content_type, 'TOOLS.STATIC')
cd = None
if disposition is not None:
if name is None:
cd = disposition
else:
cd = '%s; filename="%s"' % (disposition, name)
response.headers['Content-Disposition'] = cd
if debug:
cherrypy.log('Content-Disposition: %r' % cd, 'TOOLS.STATIC')
return _serve_fileobj(fileobj, content_type, content_length, debug=debug)
def _serve_fileobj(fileobj, content_type, content_length, debug=False):
"""Internal. Set response.body to the given file object, perhaps ranged."""
response = cherrypy.serving.response
c_len = st.st_size
bodyfile = open(path, 'rb')
# HTTP/1.0 didn't have Range/Accept-Ranges headers, or the 206 code
request = cherrypy.serving.request
if request.protocol >= (1, 1):
response.headers['Accept-Ranges'] = 'bytes'
r = httputil.get_ranges(request.headers.get('Range'), content_length)
if cherrypy.request.protocol >= (1, 1):
response.headers["Accept-Ranges"] = "bytes"
r = http.get_ranges(cherrypy.request.headers.get('Range'), c_len)
if r == []:
response.headers['Content-Range'] = 'bytes */%s' % content_length
message = ('Invalid Range (first-byte-pos greater than '
'Content-Length)')
if debug:
cherrypy.log(message, 'TOOLS.STATIC')
response.headers['Content-Range'] = "bytes */%s" % c_len
message = "Invalid Range (first-byte-pos greater than Content-Length)"
raise cherrypy.HTTPError(416, message)
if r:
if len(r) == 1:
# Return a single-part response.
start, stop = r[0]
if stop > content_length:
stop = content_length
if stop > c_len:
stop = c_len
r_len = stop - start
if debug:
cherrypy.log(
'Single part; start: %r, stop: %r' % (start, stop),
'TOOLS.STATIC')
response.status = '206 Partial Content'
response.headers['Content-Range'] = (
'bytes %s-%s/%s' % (start, stop - 1, content_length))
response.status = "206 Partial Content"
response.headers['Content-Range'] = ("bytes %s-%s/%s" %
(start, stop - 1, c_len))
response.headers['Content-Length'] = r_len
fileobj.seek(start)
response.body = file_generator_limited(fileobj, r_len)
bodyfile.seek(start)
response.body = file_generator_limited(bodyfile, r_len)
else:
# Return a multipart/byteranges response.
response.status = '206 Partial Content'
try:
# Python 3
from email.generator import _make_boundary as make_boundary
except ImportError:
# Python 2
from mimetools import choose_boundary as make_boundary
boundary = make_boundary()
ct = 'multipart/byteranges; boundary=%s' % boundary
response.status = "206 Partial Content"
import mimetools
boundary = mimetools.choose_boundary()
ct = "multipart/byteranges; boundary=%s" % boundary
response.headers['Content-Type'] = ct
if 'Content-Length' in response.headers:
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers['Content-Length']
del response.headers["Content-Length"]
def file_ranges():
# Apache compatibility:
yield ntob('\r\n')
yield "\r\n"
for start, stop in r:
if debug:
cherrypy.log(
'Multipart; start: %r, stop: %r' % (
start, stop),
'TOOLS.STATIC')
yield ntob('--' + boundary, 'ascii')
yield ntob('\r\nContent-type: %s' % content_type,
'ascii')
yield ntob(
'\r\nContent-range: bytes %s-%s/%s\r\n\r\n' % (
start, stop - 1, content_length),
'ascii')
fileobj.seek(start)
gen = file_generator_limited(fileobj, stop - start)
for chunk in gen:
yield "--" + boundary
yield "\r\nContent-type: %s" % content_type
yield ("\r\nContent-range: bytes %s-%s/%s\r\n\r\n"
% (start, stop - 1, c_len))
bodyfile.seek(start)
for chunk in file_generator_limited(bodyfile, stop-start):
yield chunk
yield ntob('\r\n')
yield "\r\n"
# Final boundary
yield ntob('--' + boundary + '--', 'ascii')
yield "--" + boundary + "--"
# Apache compatibility:
yield ntob('\r\n')
yield "\r\n"
response.body = file_ranges()
return response.body
else:
if debug:
cherrypy.log('No byteranges requested', 'TOOLS.STATIC')
# Set Content-Length and use an iterable (file object)
# this way CP won't load the whole file in memory
response.headers['Content-Length'] = content_length
response.body = fileobj
response.headers['Content-Length'] = c_len
response.body = bodyfile
return response.body
def serve_download(path, name=None):
"""Serve 'path' as an application/x-download attachment."""
# This is such a common idiom I felt it deserved its own wrapper.
return serve_file(path, 'application/x-download', 'attachment', name)
return serve_file(path, "application/x-download", "attachment", name)
def _attempt(filename, content_types, debug=False):
if debug:
cherrypy.log('Attempting %r (content_types %r)' %
(filename, content_types), 'TOOLS.STATICDIR')
def _attempt(filename, content_types):
try:
# you can set the content types for a
# complete directory per extension
@@ -258,124 +143,87 @@ def _attempt(filename, content_types, debug=False):
if content_types:
r, ext = os.path.splitext(filename)
content_type = content_types.get(ext[1:], None)
serve_file(filename, content_type=content_type, debug=debug)
serve_file(filename, content_type=content_type)
return True
except cherrypy.NotFound:
# If we didn't find the static file, continue handling the
# request. We might find a dynamic handler instead.
if debug:
cherrypy.log('NotFound', 'TOOLS.STATICFILE')
return False
def staticdir(section, dir, root='', match='', content_types=None, index='',
debug=False):
def staticdir(section, dir, root="", match="", content_types=None, index=""):
"""Serve a static resource from the given (root +) dir.
match
If given, request.path_info will be searched for the given
regular expression before attempting to serve static content.
content_types
If given, it should be a Python dictionary of
{file-extension: content-type} pairs, where 'file-extension' is
a string (e.g. "gif") and 'content-type' is the value to write
out in the Content-Type response header (e.g. "image/gif").
index
If provided, it should be the (relative) name of a file to
serve for directory requests. For example, if the dir argument is
'/home/me', the Request-URI is 'myapp', and the index arg is
'index.html', the file '/home/me/myapp/index.html' will be sought.
If 'match' is given, request.path_info will be searched for the given
regular expression before attempting to serve static content.
If content_types is given, it should be a Python dictionary of
{file-extension: content-type} pairs, where 'file-extension' is
a string (e.g. "gif") and 'content-type' is the value to write
out in the Content-Type response header (e.g. "image/gif").
If 'index' is provided, it should be the (relative) name of a file to
serve for directory requests. For example, if the dir argument is
'/home/me', the Request-URI is 'myapp', and the index arg is
'index.html', the file '/home/me/myapp/index.html' will be sought.
"""
request = cherrypy.serving.request
if request.method not in ('GET', 'HEAD'):
if debug:
cherrypy.log('request.method not GET or HEAD', 'TOOLS.STATICDIR')
if match and not re.search(match, cherrypy.request.path_info):
return False
if match and not re.search(match, request.path_info):
if debug:
cherrypy.log('request.path_info %r does not match pattern %r' %
(request.path_info, match), 'TOOLS.STATICDIR')
return False
# Allow the use of '~' to refer to a user's home directory.
dir = os.path.expanduser(dir)
# If dir is relative, make absolute using "root".
if not os.path.isabs(dir):
if not root:
msg = 'Static dir requires an absolute dir (or root).'
if debug:
cherrypy.log(msg, 'TOOLS.STATICDIR')
msg = "Static dir requires an absolute dir (or root)."
raise ValueError(msg)
dir = os.path.join(root, dir)
# Determine where we are in the object tree relative to 'section'
# (where the static tool was defined).
if section == 'global':
section = '/'
section = section.rstrip(r'\/')
branch = request.path_info[len(section) + 1:]
branch = unquote(branch.lstrip(r'\/'))
section = "/"
section = section.rstrip(r"\/")
branch = cherrypy.request.path_info[len(section) + 1:]
branch = urllib.unquote(branch.lstrip(r"\/"))
# If branch is "", filename will end in a slash
filename = os.path.join(dir, branch)
if debug:
cherrypy.log('Checking file %r to fulfill %r' %
(filename, request.path_info), 'TOOLS.STATICDIR')
# There's a chance that the branch pulled from the URL might
# have ".." or similar uplevel attacks in it. Check that the final
# filename is a child of dir.
if not os.path.normpath(filename).startswith(os.path.normpath(dir)):
raise cherrypy.HTTPError(403) # Forbidden
raise cherrypy.HTTPError(403) # Forbidden
handled = _attempt(filename, content_types)
if not handled:
# Check for an index file if a folder was requested.
if index:
handled = _attempt(os.path.join(filename, index), content_types)
if handled:
request.is_index = filename[-1] in (r'\/')
cherrypy.request.is_index = filename[-1] in (r"\/")
return handled
def staticfile(filename, root=None, match='', content_types=None, debug=False):
def staticfile(filename, root=None, match="", content_types=None):
"""Serve a static resource from the given (root +) filename.
match
If given, request.path_info will be searched for the given
regular expression before attempting to serve static content.
content_types
If given, it should be a Python dictionary of
{file-extension: content-type} pairs, where 'file-extension' is
a string (e.g. "gif") and 'content-type' is the value to write
out in the Content-Type response header (e.g. "image/gif").
If 'match' is given, request.path_info will be searched for the given
regular expression before attempting to serve static content.
If content_types is given, it should be a Python dictionary of
{file-extension: content-type} pairs, where 'file-extension' is
a string (e.g. "gif") and 'content-type' is the value to write
out in the Content-Type response header (e.g. "image/gif").
"""
request = cherrypy.serving.request
if request.method not in ('GET', 'HEAD'):
if debug:
cherrypy.log('request.method not GET or HEAD', 'TOOLS.STATICFILE')
if match and not re.search(match, cherrypy.request.path_info):
return False
if match and not re.search(match, request.path_info):
if debug:
cherrypy.log('request.path_info %r does not match pattern %r' %
(request.path_info, match), 'TOOLS.STATICFILE')
return False
# If filename is relative, make absolute using "root".
if not os.path.isabs(filename):
if not root:
msg = "Static tool requires an absolute filename (got '%s')." % (
filename,)
if debug:
cherrypy.log(msg, 'TOOLS.STATICFILE')
msg = "Static tool requires an absolute filename (got '%s')." % filename
raise ValueError(msg)
filename = os.path.join(root, filename)
return _attempt(filename, content_types, debug=debug)
return _attempt(filename, content_types)
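For orientation, a hedged sketch (not taken from the diff) of how the staticdir and staticfile tools above are usually wired up through application config; the folder and file names are made up.

import os.path
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return 'Static content lives under /static'

conf = {
    '/static': {
        'tools.staticdir.on': True,
        'tools.staticdir.root': os.path.abspath('.'),  # makes 'dir' absolute
        'tools.staticdir.dir': 'interfaces',           # hypothetical folder
        'tools.staticdir.index': 'index.html',
    },
    '/favicon.ico': {
        'tools.staticfile.on': True,
        'tools.staticfile.filename': os.path.abspath('interfaces/favicon.ico'),
    },
}
# cherrypy.quickstart(Root(), '/', conf)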

cherrypy/lib/tidy.py (new file, 184 lines)

@@ -0,0 +1,184 @@
"""Functions to run cherrypy.response through Tidy or NSGML."""
import cgi
import os
import StringIO
import traceback
import cherrypy
def tidy(temp_dir, tidy_path, strict_xml=False, errors_to_ignore=None,
indent=False, wrap=False, warnings=True):
"""Run cherrypy.response through Tidy.
If either 'indent' or 'wrap' are specified, then response.body will be
set to the output of tidy. Otherwise, only errors (including warnings,
if warnings is True) will change the body.
Note that we use the standalone Tidy tool rather than the Python
mxTidy module, because mxTidy does not seem to be stable and it
crashes on some HTML pages (which would crash the server as well).
"""
response = cherrypy.response
# The tidy tool is, by its very nature, not generator-friendly,
# so we just collapse the body and work with it.
orig_body = response.collapse_body()
fct = response.headers.get('Content-Type', '')
ct = fct.split(';')[0]
encoding = ''
i = fct.find('charset=')
if i != -1:
encoding = fct[i + 8:]
if ct == 'text/html':
page_file = os.path.join(temp_dir, 'page.html')
open(page_file, 'wb').write(orig_body)
out_file = os.path.join(temp_dir, 'tidy.out')
err_file = os.path.join(temp_dir, 'tidy.err')
tidy_enc = encoding.replace('-', '')
if tidy_enc:
tidy_enc = '-' + tidy_enc
strict_xml = ("", " -xml")[bool(strict_xml)]
if indent:
indent = ' -indent'
else:
indent = ''
if wrap is False:
wrap = ''
else:
try:
wrap = ' -wrap %d' % int(wrap)
except:
wrap = ''
result = os.system('"%s" %s%s%s%s -f %s -o %s %s' %
(tidy_path, tidy_enc, strict_xml, indent, wrap,
err_file, out_file, page_file))
use_output = bool(indent or wrap) and not result
if use_output:
output = open(out_file, 'rb').read()
new_errs = []
for err in open(err_file, 'rb').read().splitlines():
if (err.find('Error') != -1 or
(warnings and err.find('Warning') != -1)):
ignore = 0
for err_ign in errors_to_ignore or []:
if err.find(err_ign) != -1:
ignore = 1
break
if not ignore:
new_errs.append(err)
if new_errs:
response.body = wrong_content('<br />'.join(new_errs), orig_body)
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers["Content-Length"]
return
elif strict_xml:
# The HTML is OK, but is it valid XML?
# Use elementtree to parse XML
from elementtree.ElementTree import parse
tag_list = ['nbsp', 'quot']
for tag in tag_list:
orig_body = orig_body.replace('&' + tag + ';', tag.upper())
if encoding:
enctag = '<?xml version="1.0" encoding="%s"?>' % encoding
orig_body = enctag + orig_body
f = StringIO.StringIO(orig_body)
try:
tree = parse(f)
except:
# Wrong XML
body_file = StringIO.StringIO()
traceback.print_exc(file = body_file)
body_file = '<br />'.join(body_file.getvalue())
response.body = wrong_content(body_file, orig_body, "XML")
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers["Content-Length"]
return
if use_output:
response.body = [output]
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers["Content-Length"]
def html_space(text):
"""Escape text, replacing space with nbsp and tab with 4 nbsp's."""
return cgi.escape(text).replace('\t', ' ').replace(' ', '&nbsp;')
def html_break(text):
"""Escape text, replacing newline with HTML br element."""
return cgi.escape(text).replace('\n', '<br />')
def wrong_content(header, body, content_type="HTML"):
output = ["Wrong %s:<br />%s<br />" % (content_type, html_break(header))]
for i, line in enumerate(body.splitlines()):
output.append("%03d - %s" % (i + 1, html_space(line)))
return "<br />".join(output)
def nsgmls(temp_dir, nsgmls_path, catalog_path, errors_to_ignore=None):
response = cherrypy.response
# The nsgmls tool is, by its very nature, not generator-friendly,
# so we just collect the body and work with it.
orig_body = response.collapse_body()
fct = response.headers.get('Content-Type', '')
ct = fct.split(';')[0]
encoding = ''
i = fct.find('charset=')
if i != -1:
encoding = fct[i + 8:]
if ct == 'text/html':
# Remove bits of Javascript, since nsgmls doesn't seem to handle
# them correctly (for instance, if '<a' appears in your
# Javascript code, nsgmls complains about it).
while True:
i = orig_body.find('<script')
if i == -1:
break
j = orig_body.find('</script>', i)
if j == -1:
break
orig_body = orig_body[:i] + orig_body[j+9:]
page_file = os.path.join(temp_dir, 'page.html')
open(page_file, 'wb').write(orig_body)
err_file = os.path.join(temp_dir, 'nsgmls.err')
command = ('%s -c%s -f%s -s -E10 %s' %
(nsgmls_path, catalog_path, err_file, page_file))
command = command.replace('\\', '/')
os.system(command)
errs = open(err_file, 'rb').read()
new_errs = []
for err in errs.splitlines():
ignore = False
for err_ign in errors_to_ignore or []:
if err.find(err_ign) != -1:
ignore = True
break
if not ignore:
new_errs.append(err)
if new_errs:
response.body = wrong_content('<br />'.join(new_errs), orig_body)
if response.headers.has_key("Content-Length"):
# Delete Content-Length header so finalize() recalcs it.
del response.headers["Content-Length"]
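A hedged sketch (an assumption, not part of this changeset) of how the tidy() function above could be exposed as a before_finalize tool; the tidy binary path and temp directory are placeholders.

import cherrypy
from cherrypy.lib import tidy as tidy_lib

# Register the validator as a tool; tool config entries become keyword
# arguments of tidy_lib.tidy() at request time.
cherrypy.tools.tidy = cherrypy.Tool('before_finalize', tidy_lib.tidy)

conf = {'/': {'tools.tidy.on': True,
              'tools.tidy.temp_dir': '/tmp',            # placeholder
              'tools.tidy.tidy_path': '/usr/bin/tidy',  # placeholder
              'tools.tidy.warnings': False}}
# Mount an application with this config to run every text/html response
# through the external tidy binary.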

cherrypy/lib/wsgiapp.py (new file, 77 lines)

@@ -0,0 +1,77 @@
"""A CherryPy tool for hosting a foreign WSGI application."""
import sys
import warnings
import cherrypy
# is this sufficient for start_response?
def start_response(status, response_headers, exc_info=None):
cherrypy.response.status = status
headers_dict = dict(response_headers)
cherrypy.response.headers.update(headers_dict)
def make_environ():
"""grabbed some of below from wsgiserver.py
for hosting WSGI apps in non-WSGI environments (yikes!)
"""
request = cherrypy.request
# create and populate the wsgi environ
environ = dict()
environ["wsgi.version"] = (1,0)
environ["wsgi.url_scheme"] = request.scheme
environ["wsgi.input"] = request.rfile
environ["wsgi.errors"] = sys.stderr
environ["wsgi.multithread"] = True
environ["wsgi.multiprocess"] = False
environ["wsgi.run_once"] = False
environ["REQUEST_METHOD"] = request.method
environ["SCRIPT_NAME"] = request.script_name
environ["PATH_INFO"] = request.path_info
environ["QUERY_STRING"] = request.query_string
environ["SERVER_PROTOCOL"] = request.protocol
environ["SERVER_NAME"] = request.local.name
environ["SERVER_PORT"] = request.local.port
environ["REMOTE_HOST"] = request.remote.name
environ["REMOTE_ADDR"] = request.remote.ip
environ["REMOTE_PORT"] = request.remote.port
# then all the http headers
headers = request.headers
environ["CONTENT_TYPE"] = headers.get("Content-type", "")
environ["CONTENT_LENGTH"] = headers.get("Content-length", "")
for (k, v) in headers.iteritems():
envname = "HTTP_" + k.upper().replace("-","_")
environ[envname] = v
return environ
def run(app, env=None):
"""Run the given WSGI app and set response.body to its output."""
warnings.warn("This module is deprecated and will be removed in "
"Cherrypy 3.2. See http://www.cherrypy.org/ticket/700 "
"for more information.")
try:
environ = cherrypy.request.wsgi_environ.copy()
environ['SCRIPT_NAME'] = cherrypy.request.script_name
environ['PATH_INFO'] = cherrypy.request.path_info
except AttributeError:
environ = make_environ()
if env:
environ.update(env)
# run the wsgi app and have it set response.body
response = app(environ, start_response)
try:
cherrypy.response.body = [x for x in response]
finally:
if hasattr(response, "close"):
response.close()
return True
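A hedged usage sketch (not from the diff) of hosting a foreign WSGI application with the run() helper above, registered here as a HandlerTool; demo_app is only a stand-in and the tool name is an assumption.

import cherrypy
from cherrypy.lib import wsgiapp
from cherrypy._cptools import HandlerTool
from wsgiref.simple_server import demo_app  # stand-in WSGI application

# run() returns True once it has set response.body, which is exactly the
# contract a HandlerTool expects from its callable.
cherrypy.tools.wsgiapp = HandlerTool(wsgiapp.run)

class Root(object):
    @cherrypy.expose
    def index(self):
        return 'native CherryPy handler'

conf = {'/hosted': {'tools.wsgiapp.on': True,
                    'tools.wsgiapp.app': demo_app}}
# cherrypy.quickstart(Root(), '/', conf)  # /hosted is then served by demo_app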


@@ -1,21 +1,13 @@
import sys
import cherrypy
from cherrypy._cpcompat import ntob
def get_xmlrpclib():
try:
import xmlrpc.client as x
except ImportError:
import xmlrpclib as x
return x
def process_body():
"""Return (params, method) from request body."""
try:
return get_xmlrpclib().loads(cherrypy.request.body.read())
import xmlrpclib
return xmlrpclib.loads(cherrypy.request.body.read())
except Exception:
return ('ERROR PARAMS', ), 'ERRORMETHOD'
@@ -37,21 +29,21 @@ def _set_response(body):
# as a "Protocol Error", we'll just return 200 every time.
response = cherrypy.response
response.status = '200 OK'
response.body = ntob(body, 'utf-8')
response.body = body
response.headers['Content-Type'] = 'text/xml'
response.headers['Content-Length'] = len(body)
def respond(body, encoding='utf-8', allow_none=0):
xmlrpclib = get_xmlrpclib()
import xmlrpclib
if not isinstance(body, xmlrpclib.Fault):
body = (body,)
_set_response(xmlrpclib.dumps(body, methodresponse=1,
encoding=encoding,
allow_none=allow_none))
def on_error(*args, **kwargs):
body = str(sys.exc_info()[1])
xmlrpclib = get_xmlrpclib()
import xmlrpclib
_set_response(xmlrpclib.dumps(xmlrpclib.Fault(1, body)))
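A hedged usage sketch (not part of the diff): these helpers are normally driven through cherrypy._cptools.XMLRPCController, assumed present in the bundled CherryPy; the method below is arbitrary.

import cherrypy
from cherrypy._cptools import XMLRPCController

class Api(XMLRPCController):
    @cherrypy.expose
    def add(self, a, b):
        # respond() above serializes the return value with xmlrpclib.dumps().
        return int(a) + int(b)

# cherrypy.quickstart(Api(), '/xmlrpc')
# Client side:
#   import xmlrpclib
#   xmlrpclib.ServerProxy('http://127.0.0.1:8080/xmlrpc').add(2, 3)  # -> 5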


@@ -10,5 +10,5 @@ use with the bus. Some use tool-specific channels; see the documentation
for each class.
"""
from cherrypy.process.wspbus import bus # noqa
from cherrypy.process import plugins, servers # noqa
from cherrypy.process.wspbus import bus
from cherrypy.process import plugins, servers


@@ -2,44 +2,22 @@
import os
import re
try:
set
except NameError:
from sets import Set as set
import signal as _signal
import sys
import time
import threading
from cherrypy._cpcompat import text_or_bytes, get_thread_ident
from cherrypy._cpcompat import ntob, Timer
# _module__file__base is used by Autoreload to make
# absolute any filenames retrieved from sys.modules which are not
# already absolute paths. This is to work around Python's quirk
# of importing the startup script and using a relative filename
# for it in sys.modules.
#
# Autoreload examines sys.modules afresh every time it runs. If an application
# changes the current directory by executing os.chdir(), then the next time
# Autoreload runs, it will not be able to find any filenames which are
# not absolute paths, because the current directory is not the same as when the
# module was first imported. Autoreload will then wrongly conclude the file
# has "changed", and initiate the shutdown/re-exec sequence.
# See ticket #917.
# For this workaround to have a decent probability of success, this module
# needs to be imported as early as possible, before the app has much chance
# to change the working directory.
_module__file__base = os.getcwd()
class SimplePlugin(object):
"""Plugin base class which auto-subscribes methods for known channels."""
bus = None
"""A :class:`Bus <cherrypy.process.wspbus.Bus>`, usually cherrypy.engine.
"""
def __init__(self, bus):
self.bus = bus
def subscribe(self):
"""Register this object as a (multi-channel) listener on the bus."""
for channel in self.bus.listeners:
@@ -47,7 +25,7 @@ class SimplePlugin(object):
method = getattr(self, channel, None)
if method is not None:
self.bus.subscribe(channel, method)
def unsubscribe(self):
"""Unregister this object as a listener on the bus."""
for channel in self.bus.listeners:
@@ -57,42 +35,25 @@ class SimplePlugin(object):
self.bus.unsubscribe(channel, method)
class SignalHandler(object):
"""Register bus channels (and listeners) for system signals.
You can modify what signals your application listens for, and what it does
when it receives signals, by modifying :attr:`SignalHandler.handlers`,
a dict of {signal name: callback} pairs. The default set is::
handlers = {'SIGTERM': self.bus.exit,
'SIGHUP': self.handle_SIGHUP,
'SIGUSR1': self.bus.graceful,
}
The :func:`SignalHandler.handle_SIGHUP` method calls
:func:`bus.restart()<cherrypy.process.wspbus.Bus.restart>`
if the process is daemonized, but
:func:`bus.exit()<cherrypy.process.wspbus.Bus.exit>`
if the process is attached to a TTY. This is because Unix window
managers tend to send SIGHUP to terminal windows when the user closes them.
Feel free to add signals which are not available on every platform.
The :class:`SignalHandler` will ignore errors raised from attempting
to register handlers for unknown signals.
By default, instantiating this object subscribes the following signals
and listeners:
TERM: bus.exit
HUP : bus.restart
USR1: bus.graceful
"""
handlers = {}
"""A map from signal names (e.g. 'SIGTERM') to handlers (e.g. bus.exit)."""
# Map from signal numbers to names
signals = {}
"""A map from signal numbers to names."""
for k, v in vars(_signal).items():
if k.startswith('SIG') and not k.startswith('SIG_'):
signals[v] = k
del k, v
def __init__(self, bus):
self.bus = bus
# Set default handlers
@@ -100,191 +61,138 @@ class SignalHandler(object):
'SIGHUP': self.handle_SIGHUP,
'SIGUSR1': self.bus.graceful,
}
if sys.platform[:4] == 'java':
del self.handlers['SIGUSR1']
self.handlers['SIGUSR2'] = self.bus.graceful
self.bus.log('SIGUSR1 cannot be set on the JVM platform. '
'Using SIGUSR2 instead.')
self.handlers['SIGINT'] = self._jython_SIGINT_handler
self._previous_handlers = {}
# used to determine if the process is a daemon in `self._is_daemonized`
self._original_pid = os.getpid()
def _jython_SIGINT_handler(self, signum=None, frame=None):
# See http://bugs.jython.org/issue1313
self.bus.log('Keyboard Interrupt: shutting down bus')
self.bus.exit()
def _is_daemonized(self):
"""Return boolean indicating if the current process is
running as a daemon.
The process is considered daemonized when the current pid differs
from the one recorded when the plugin was constructed *and* stdin is
not connected to a terminal.
Checking the tty alone is not enough when the plugin runs inside
another process, such as a CI tool (Buildbot, Jenkins).
"""
if (self._original_pid != os.getpid() and
not os.isatty(sys.stdin.fileno())):
return True
else:
return False
def subscribe(self):
"""Subscribe self.handlers to signals."""
for sig, func in self.handlers.items():
for sig, func in self.handlers.iteritems():
try:
self.set_handler(sig, func)
except ValueError:
pass
def unsubscribe(self):
"""Unsubscribe self.handlers from signals."""
for signum, handler in self._previous_handlers.items():
for signum, handler in self._previous_handlers.iteritems():
signame = self.signals[signum]
if handler is None:
self.bus.log('Restoring %s handler to SIG_DFL.' % signame)
self.bus.log("Restoring %s handler to SIG_DFL." % signame)
handler = _signal.SIG_DFL
else:
self.bus.log('Restoring %s handler %r.' % (signame, handler))
self.bus.log("Restoring %s handler %r." % (signame, handler))
try:
our_handler = _signal.signal(signum, handler)
if our_handler is None:
self.bus.log('Restored old %s handler %r, but our '
'handler was not registered.' %
self.bus.log("Restored old %s handler %r, but our "
"handler was not registered." %
(signame, handler), level=30)
except ValueError:
self.bus.log('Unable to restore %s handler %r.' %
self.bus.log("Unable to restore %s handler %r." %
(signame, handler), level=40, traceback=True)
def set_handler(self, signal, listener=None):
"""Subscribe a handler for the given signal (number or name).
If the optional 'listener' argument is provided, it will be
subscribed as a listener for the given signal's channel.
If the given signal name or number is not available on the current
platform, ValueError is raised.
"""
if isinstance(signal, text_or_bytes):
if isinstance(signal, basestring):
signum = getattr(_signal, signal, None)
if signum is None:
raise ValueError('No such signal: %r' % signal)
raise ValueError("No such signal: %r" % signal)
signame = signal
else:
try:
signame = self.signals[signal]
except KeyError:
raise ValueError('No such signal: %r' % signal)
raise ValueError("No such signal: %r" % signal)
signum = signal
prev = _signal.signal(signum, self._handle_signal)
self._previous_handlers[signum] = prev
if listener is not None:
self.bus.log('Listening for %s.' % signame)
self.bus.log("Listening for %s." % signame)
self.bus.subscribe(signame, listener)
def _handle_signal(self, signum=None, frame=None):
"""Python signal handler (self.set_handler subscribes it for you)."""
signame = self.signals[signum]
self.bus.log('Caught signal %s.' % signame)
self.bus.log("Caught signal %s." % signame)
self.bus.publish(signame)
def handle_SIGHUP(self):
"""Restart if daemonized, else exit."""
if self._is_daemonized():
self.bus.log('SIGHUP caught while daemonized. Restarting.')
self.bus.restart()
else:
if os.isatty(sys.stdin.fileno()):
# not daemonized (may be foreground or background)
self.bus.log('SIGHUP caught but not daemonized. Exiting.')
self.bus.log("SIGHUP caught but not daemonized. Exiting.")
self.bus.exit()
else:
self.bus.log("SIGHUP caught while daemonized. Restarting.")
self.bus.restart()
try:
import pwd
import grp
import pwd, grp
except ImportError:
pwd, grp = None, None
class DropPrivileges(SimplePlugin):
"""Drop privileges. uid/gid arguments not available on Windows.
Special thanks to `Gavin Baker <http://antonym.org/2005/12/dropping-privileges-in-python.html>`_
Special thanks to Gavin Baker: http://antonym.org/node/100.
"""
def __init__(self, bus, umask=None, uid=None, gid=None):
SimplePlugin.__init__(self, bus)
self.finalized = False
self.uid = uid
self.gid = gid
self.umask = umask
def _get_uid(self):
return self._uid
def _set_uid(self, val):
if val is not None:
if pwd is None:
self.bus.log('pwd module not available; ignoring uid.',
self.bus.log("pwd module not available; ignoring uid.",
level=30)
val = None
elif isinstance(val, text_or_bytes):
elif isinstance(val, basestring):
val = pwd.getpwnam(val)[2]
self._uid = val
uid = property(_get_uid, _set_uid,
doc='The uid under which to run. Availability: Unix.')
uid = property(_get_uid, _set_uid, doc="The uid under which to run.")
def _get_gid(self):
return self._gid
def _set_gid(self, val):
if val is not None:
if grp is None:
self.bus.log('grp module not available; ignoring gid.',
self.bus.log("grp module not available; ignoring gid.",
level=30)
val = None
elif isinstance(val, text_or_bytes):
elif isinstance(val, basestring):
val = grp.getgrnam(val)[2]
self._gid = val
gid = property(_get_gid, _set_gid,
doc='The gid under which to run. Availability: Unix.')
gid = property(_get_gid, _set_gid, doc="The gid under which to run.")
def _get_umask(self):
return self._umask
def _set_umask(self, val):
if val is not None:
try:
os.umask
except AttributeError:
self.bus.log('umask function not available; ignoring umask.',
self.bus.log("umask function not available; ignoring umask.",
level=30)
val = None
self._umask = val
umask = property(
_get_umask,
_set_umask,
doc="""The default permission mode for newly created files and
directories.
Usually expressed in octal format, for example, ``0644``.
Availability: Unix, Windows.
""")
umask = property(_get_umask, _set_umask, doc="The umask under which to run.")
def start(self):
# uid/gid
def current_ids():
@@ -295,7 +203,7 @@ class DropPrivileges(SimplePlugin):
if grp:
group = grp.getgrgid(os.getgid())[0]
return name, group
if self.finalized:
if not (self.uid is None and self.gid is None):
self.bus.log('Already running as uid: %r gid: %r' %
@@ -308,11 +216,10 @@ class DropPrivileges(SimplePlugin):
self.bus.log('Started as uid: %r gid: %r' % current_ids())
if self.gid is not None:
os.setgid(self.gid)
os.setgroups([])
if self.uid is not None:
os.setuid(self.uid)
self.bus.log('Running as uid: %r gid: %r' % current_ids())
# umask
if self.finalized:
if self.umask is not None:
@@ -324,7 +231,7 @@ class DropPrivileges(SimplePlugin):
old_umask = os.umask(self.umask)
self.bus.log('umask old: %03o, new: %03o' %
(old_umask, self.umask))
self.finalized = True
# This is slightly higher than the priority for server.start
# in order to facilitate the most common use: starting on a low
@@ -333,13 +240,12 @@ class DropPrivileges(SimplePlugin):
class Daemonizer(SimplePlugin):
"""Daemonize the running script.
Use this with a Web Site Process Bus via::
Use this with a Web Site Process Bus via:
Daemonizer(bus).subscribe()
When this component finishes, the process is completely decoupled from
the parent environment. Please note that when this component is used,
the return code from the parent process will still be 0 if a startup
@@ -349,7 +255,7 @@ class Daemonizer(SimplePlugin):
of whether the process fully started. In fact, that return code only
indicates whether the process successfully finished the first fork.
"""
def __init__(self, bus, stdin='/dev/null', stdout='/dev/null',
stderr='/dev/null'):
SimplePlugin.__init__(self, bus)
@@ -357,11 +263,11 @@ class Daemonizer(SimplePlugin):
self.stdout = stdout
self.stderr = stderr
self.finalized = False
def start(self):
if self.finalized:
self.bus.log('Already deamonized.')
# forking has issues with threads:
# http://www.opengroup.org/onlinepubs/000095399/functions/fork.html
# "The general problem with making fork() work in a multi-threaded
@@ -371,15 +277,15 @@ class Daemonizer(SimplePlugin):
self.bus.log('There are %r active threads. '
'Daemonizing now may cause strange failures.' %
threading.enumerate(), level=30)
# See http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
# (or http://www.faqs.org/faqs/unix-faq/programmer/faq/ section 1.7)
# and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012
# Finish up with the current stdout/stderr
sys.stdout.flush()
sys.stderr.flush()
# Do first fork.
try:
pid = os.fork()
@@ -390,31 +296,29 @@ class Daemonizer(SimplePlugin):
# This is the first parent. Exit, now that we've forked.
self.bus.log('Forking once.')
os._exit(0)
except OSError:
except OSError, exc:
# Python raises OSError rather than returning negative numbers.
exc = sys.exc_info()[1]
sys.exit('%s: fork #1 failed: (%d) %s\n'
sys.exit("%s: fork #1 failed: (%d) %s\n"
% (sys.argv[0], exc.errno, exc.strerror))
os.setsid()
# Do second fork
try:
pid = os.fork()
if pid > 0:
self.bus.log('Forking twice.')
os._exit(0) # Exit second parent
except OSError:
exc = sys.exc_info()[1]
sys.exit('%s: fork #2 failed: (%d) %s\n'
os._exit(0) # Exit second parent
except OSError, exc:
sys.exit("%s: fork #2 failed: (%d) %s\n"
% (sys.argv[0], exc.errno, exc.strerror))
os.chdir('/')
os.chdir("/")
os.umask(0)
si = open(self.stdin, 'r')
so = open(self.stdout, 'a+')
se = open(self.stderr, 'a+')
si = open(self.stdin, "r")
so = open(self.stdout, "a+")
se = open(self.stderr, "a+", 0)
# os.dup2(fd, fd2) will close fd2 if necessary,
# so we don't explicitly close stdin/out/err.
@@ -422,31 +326,30 @@ class Daemonizer(SimplePlugin):
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
self.bus.log('Daemonized to PID: %s' % os.getpid())
self.finalized = True
start.priority = 65
class PIDFile(SimplePlugin):
"""Maintain a PID file via a WSPBus."""
def __init__(self, bus, pidfile):
SimplePlugin.__init__(self, bus)
self.pidfile = pidfile
self.finalized = False
def start(self):
pid = os.getpid()
if self.finalized:
self.bus.log('PID %r already written to %r.' % (pid, self.pidfile))
else:
open(self.pidfile, 'wb').write(ntob('%s\n' % pid, 'utf8'))
open(self.pidfile, "wb").write(str(pid))
self.bus.log('PID %r written to %r.' % (pid, self.pidfile))
self.finalized = True
start.priority = 70
def exit(self):
try:
os.remove(self.pidfile)
@@ -457,241 +360,131 @@ class PIDFile(SimplePlugin):
pass
class PerpetualTimer(Timer):
"""A responsive subclass of threading.Timer whose run() method repeats.
Use this timer only when you really need a very interruptible timer;
this checks its 'finished' condition up to 20 times a second, which can
result in pretty high CPU usage.
"""
def __init__(self, *args, **kwargs):
"Override parent constructor to allow 'bus' to be provided."
self.bus = kwargs.pop('bus', None)
super(PerpetualTimer, self).__init__(*args, **kwargs)
class PerpetualTimer(threading._Timer):
"""A subclass of threading._Timer whose run() method repeats."""
def run(self):
while True:
self.finished.wait(self.interval)
if self.finished.isSet():
return
try:
self.function(*self.args, **self.kwargs)
except Exception:
if self.bus:
self.bus.log(
'Error in perpetual timer thread function %r.' %
self.function, level=40, traceback=True)
# Quit on first error to avoid massive logs.
raise
class BackgroundTask(threading.Thread):
"""A subclass of threading.Thread whose run() method repeats.
Use this class for most repeating tasks. It uses time.sleep() to wait
for each interval, which isn't very responsive; that is, even if you call
self.cancel(), you'll have to wait until the sleep() call finishes before
the thread stops. To compensate, it defaults to being daemonic, which means
it won't delay stopping the whole process.
"""
def __init__(self, interval, function, args=[], kwargs={}, bus=None):
super(BackgroundTask, self).__init__()
self.interval = interval
self.function = function
self.args = args
self.kwargs = kwargs
self.running = False
self.bus = bus
# default to daemonic
self.daemon = True
def cancel(self):
self.running = False
def run(self):
self.running = True
while self.running:
time.sleep(self.interval)
if not self.running:
return
try:
self.function(*self.args, **self.kwargs)
except Exception:
if self.bus:
self.bus.log('Error in background task thread function %r.'
% self.function, level=40, traceback=True)
# Quit on first error to avoid massive logs.
raise
self.function(*self.args, **self.kwargs)
class Monitor(SimplePlugin):
"""WSPBus listener to periodically run a callback in its own thread."""
callback = None
"""The function to call at intervals."""
frequency = 60
"""The time in seconds between callback runs."""
thread = None
"""A :class:`BackgroundTask<cherrypy.process.plugins.BackgroundTask>`
thread.
"""WSPBus listener to periodically run a callback in its own thread.
bus: a Web Site Process Bus object.
callback: the function to call at intervals.
frequency: the time in seconds between callback runs.
"""
def __init__(self, bus, callback, frequency=60, name=None):
frequency = 60
def __init__(self, bus, callback, frequency=60):
SimplePlugin.__init__(self, bus)
self.callback = callback
self.frequency = frequency
self.thread = None
self.name = name
def start(self):
"""Start our callback in its own background thread."""
"""Start our callback in its own perpetual timer thread."""
if self.frequency > 0:
threadname = self.name or self.__class__.__name__
threadname = self.__class__.__name__
if self.thread is None:
self.thread = BackgroundTask(self.frequency, self.callback,
bus=self.bus)
self.thread = PerpetualTimer(self.frequency, self.callback)
self.thread.setName(threadname)
self.thread.start()
self.bus.log('Started monitor thread %r.' % threadname)
self.bus.log("Started monitor thread %r." % threadname)
else:
self.bus.log('Monitor thread %r already started.' % threadname)
self.bus.log("Monitor thread %r already started." % threadname)
start.priority = 70
def stop(self):
"""Stop our callback's background task thread."""
"""Stop our callback's perpetual timer thread."""
if self.thread is None:
self.bus.log('No thread running for %s.' %
self.name or self.__class__.__name__)
self.bus.log("No thread running for %s." % self.__class__.__name__)
else:
if self.thread is not threading.currentThread():
name = self.thread.getName()
self.thread.cancel()
if not self.thread.daemon:
self.bus.log('Joining %r' % name)
self.thread.join()
self.bus.log('Stopped thread %r.' % name)
self.thread.join()
self.bus.log("Stopped thread %r." % name)
self.thread = None
def graceful(self):
"""Stop the callback's background task thread and restart it."""
"""Stop the callback's perpetual timer thread and restart it."""
self.stop()
self.start()
class Autoreloader(Monitor):
"""Monitor which re-executes the process when files change.
This :ref:`plugin<plugins>` restarts the process (via :func:`os.execv`)
if any of the files it monitors change (or is deleted). By default, the
autoreloader monitors all imported modules; you can add to the
set by adding to ``autoreload.files``::
cherrypy.engine.autoreload.files.add(myFile)
If there are imported files you do *not* wish to monitor, you can
adjust the ``match`` attribute, a regular expression. For example,
to stop monitoring cherrypy itself::
cherrypy.engine.autoreload.match = r'^(?!cherrypy).+'
Like all :class:`Monitor<cherrypy.process.plugins.Monitor>` plugins,
the autoreload plugin takes a ``frequency`` argument. The default is
1 second; that is, the autoreloader will examine files once each second.
"""
files = None
"""The set of files to poll for modifications."""
"""Monitor which re-executes the process when files change."""
frequency = 1
"""The interval in seconds at which to poll for modified files."""
match = '.*'
"""A regular expression by which to match filenames."""
def __init__(self, bus, frequency=1, match='.*'):
self.mtimes = {}
self.files = set()
self.match = match
Monitor.__init__(self, bus, self.run, frequency)
def start(self):
"""Start our own background task thread for self.run."""
"""Start our own perpetual timer thread for self.run."""
if self.thread is None:
self.mtimes = {}
Monitor.start(self)
start.priority = 70
def sysfiles(self):
"""Return a Set of sys.modules filenames to monitor."""
files = set()
for k, m in list(sys.modules.items()):
if re.match(self.match, k):
if (
hasattr(m, '__loader__') and
hasattr(m.__loader__, 'archive')
):
f = m.__loader__.archive
else:
f = getattr(m, '__file__', None)
if f is not None and not os.path.isabs(f):
# ensure absolute paths so a os.chdir() in the app
# doesn't break me
f = os.path.normpath(
os.path.join(_module__file__base, f))
files.add(f)
return files
start.priority = 70
def run(self):
"""Reload the process if registered files have been modified."""
for filename in self.sysfiles() | self.files:
sysfiles = set()
for k, m in sys.modules.items():
if re.match(self.match, k):
if hasattr(m, '__loader__'):
if hasattr(m.__loader__, 'archive'):
k = m.__loader__.archive
k = getattr(m, '__file__', None)
sysfiles.add(k)
for filename in sysfiles | self.files:
if filename:
if filename.endswith('.pyc'):
filename = filename[:-1]
oldtime = self.mtimes.get(filename, 0)
if oldtime is None:
# Module with no .py file. Skip it.
continue
try:
mtime = os.stat(filename).st_mtime
except OSError:
# Either a module with no .py file, or it's been deleted.
mtime = None
if filename not in self.mtimes:
# If a module has no .py file, this will be None.
self.mtimes[filename] = mtime
else:
if mtime is None or mtime > oldtime:
# The file has been deleted or modified.
self.bus.log('Restarting because %s changed.' %
filename)
self.bus.log("Restarting because %s changed." % filename)
self.thread.cancel()
self.bus.log('Stopped thread %r.' %
self.thread.getName())
self.bus.log("Stopped thread %r." % self.thread.getName())
self.bus.restart()
return
class ThreadManager(SimplePlugin):
"""Manager for HTTP request threads.
If you have control over thread creation and destruction, publish to
the 'acquire_thread' and 'release_thread' channels (for each thread).
This will register/unregister the current thread and publish to
'start_thread' and 'stop_thread' listeners in the bus as needed.
If threads are created and destroyed by code you do not control
(e.g., Apache), then, at the beginning of every HTTP request,
publish to 'acquire_thread' only. You should not publish to
@@ -699,42 +492,38 @@ class ThreadManager(SimplePlugin):
the thread will be re-used or not. The bus will call
'stop_thread' listeners for you when it stops.
"""
threads = None
"""A map of {thread ident: index number} pairs."""
def __init__(self, bus):
self.threads = {}
SimplePlugin.__init__(self, bus)
self.bus.listeners.setdefault('acquire_thread', set())
self.bus.listeners.setdefault('start_thread', set())
self.bus.listeners.setdefault('release_thread', set())
self.bus.listeners.setdefault('stop_thread', set())
def acquire_thread(self):
"""Run 'start_thread' listeners for the current thread.
If the current thread has already been seen, any 'start_thread'
listeners will not be run again.
"""
thread_ident = get_thread_ident()
thread_ident = threading._get_ident()
if thread_ident not in self.threads:
# We can't just use get_ident as the thread ID
# We can't just use _get_ident as the thread ID
# because some platforms reuse thread ID's.
i = len(self.threads) + 1
self.threads[thread_ident] = i
self.bus.publish('start_thread', i)
def release_thread(self):
"""Release the current thread and run 'stop_thread' listeners."""
thread_ident = get_thread_ident()
thread_ident = threading._get_ident()
i = self.threads.pop(thread_ident, None)
if i is not None:
self.bus.publish('stop_thread', i)
def stop(self):
"""Release all threads and run all 'stop_thread' listeners."""
for thread_ident, i in self.threads.items():
for thread_ident, i in self.threads.iteritems():
self.bus.publish('stop_thread', i)
self.threads.clear()
graceful = stop
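A minimal sketch (an assumption, not taken from the changeset) of how the plugins above are typically subscribed to cherrypy.engine; the callback, PID path and uid/gid values are placeholders.

import cherrypy
from cherrypy.process.plugins import Monitor, PIDFile, DropPrivileges

def flush_stats():
    cherrypy.log('periodic maintenance ran')  # placeholder callback

# Run the callback every five minutes in its own background thread.
Monitor(cherrypy.engine, flush_stats, frequency=300).subscribe()
# Write a PID file on engine start and remove it again on exit.
PIDFile(cherrypy.engine, '/tmp/sabnzbd.pid').subscribe()
# Drop root privileges after the low ports have been bound (Unix only).
DropPrivileges(cherrypy.engine, uid='nobody', gid='nogroup').subscribe()
# cherrypy.engine.start(); cherrypy.engine.block()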


@@ -1,201 +1,69 @@
"""
Starting in CherryPy 3.1, cherrypy.server is implemented as an
:ref:`Engine Plugin<plugins>`. It's an instance of
:class:`cherrypy._cpserver.Server`, which is a subclass of
:class:`cherrypy.process.servers.ServerAdapter`. The ``ServerAdapter`` class
is designed to control other servers, as well.
"""Adapt an HTTP server."""
Multiple servers/ports
======================
If you need to start more than one HTTP server (to serve on multiple ports, or
protocols, etc.), you can manually register each one and then start them all
with engine.start::
s1 = ServerAdapter(cherrypy.engine, MyWSGIServer(host='0.0.0.0', port=80))
s2 = ServerAdapter(cherrypy.engine,
another.HTTPServer(host='127.0.0.1',
SSL=True))
s1.subscribe()
s2.subscribe()
cherrypy.engine.start()
.. index:: SCGI
FastCGI/SCGI
============
There are also Flup\ **F**\ CGIServer and Flup\ **S**\ CGIServer classes in
:mod:`cherrypy.process.servers`. To start an fcgi server, for example,
wrap an instance of it in a ServerAdapter::
addr = ('0.0.0.0', 4000)
f = servers.FlupFCGIServer(application=cherrypy.tree, bindAddress=addr)
s = servers.ServerAdapter(cherrypy.engine, httpserver=f, bind_addr=addr)
s.subscribe()
The :doc:`cherryd</deployguide/cherryd>` startup script will do the above for
you via its `-f` flag.
Note that you need to download and install `flup <http://trac.saddi.com/flup>`_
yourself, whether you use ``cherryd`` or not.
.. _fastcgi:
.. index:: FastCGI
FastCGI
-------
A very simple setup lets your CherryPy application run with FastCGI.
You just need the flup library,
plus a running Apache server (with ``mod_fastcgi``) or lighttpd server.
CherryPy code
^^^^^^^^^^^^^
hello.py::
#!/usr/bin/python
import cherrypy
class HelloWorld:
\"""Sample request handler class.\"""
@cherrypy.expose
def index(self):
return "Hello world!"
cherrypy.tree.mount(HelloWorld())
# CherryPy autoreload must be disabled for the flup server to work
cherrypy.config.update({'engine.autoreload.on':False})
Then run :doc:`/deployguide/cherryd` with the '-f' arg::
cherryd -c <myconfig> -d -f -i hello.py
Apache
^^^^^^
At the top level in httpd.conf::
FastCgiIpcDir /tmp
FastCgiServer /path/to/cherry.fcgi -idle-timeout 120 -processes 4
And inside the relevant VirtualHost section::
# FastCGI config
AddHandler fastcgi-script .fcgi
ScriptAliasMatch (.*$) /path/to/cherry.fcgi$1
Lighttpd
^^^^^^^^
For `Lighttpd <http://www.lighttpd.net/>`_ you can follow these
instructions. Within ``lighttpd.conf`` make sure ``mod_fastcgi`` is
active within ``server.modules``. Then, within your ``$HTTP["host"]``
directive, configure your fastcgi script like the following::
$HTTP["url"] =~ "" {
fastcgi.server = (
"/" => (
"script.fcgi" => (
"bin-path" => "/path/to/your/script.fcgi",
"socket" => "/tmp/script.sock",
"check-local" => "disable",
"disable-time" => 1,
"min-procs" => 1,
"max-procs" => 1, # adjust as needed
),
),
)
} # end of $HTTP["url"] =~ "^/"
Please see `Lighttpd FastCGI Docs
<http://redmine.lighttpd.net/wiki/lighttpd/Docs:ModFastCGI>`_ for
an explanation of the possible configuration options.
"""
import os
import sys
import time
import warnings
class ServerAdapter(object):
"""Adapter for an HTTP server.
If you need to start more than one HTTP server (to serve on multiple
ports, or protocols, etc.), you can manually register each one and then
start them all with bus.start::
start them all with bus.start:
s1 = ServerAdapter(bus, MyWSGIServer(host='0.0.0.0', port=80))
s2 = ServerAdapter(bus, another.HTTPServer(host='127.0.0.1', SSL=True))
s1.subscribe()
s2.subscribe()
bus.start()
"""
def __init__(self, bus, httpserver=None, bind_addr=None):
self.bus = bus
self.httpserver = httpserver
self.bind_addr = bind_addr
self.interrupt = None
self.running = False
def subscribe(self):
self.bus.subscribe('start', self.start)
self.bus.subscribe('stop', self.stop)
def unsubscribe(self):
self.bus.unsubscribe('start', self.start)
self.bus.unsubscribe('stop', self.stop)
def start(self):
"""Start the HTTP server."""
if self.bind_addr is None:
on_what = 'unknown interface (dynamic?)'
on_what = "unknown interface (dynamic?)"
elif isinstance(self.bind_addr, tuple):
on_what = self._get_base()
host, port = self.bind_addr
on_what = "%s:%s" % (host, port)
else:
on_what = 'socket file: %s' % self.bind_addr
on_what = "socket file: %s" % self.bind_addr
if self.running:
self.bus.log('Already serving on %s' % on_what)
self.bus.log("Already serving on %s" % on_what)
return
self.interrupt = None
if not self.httpserver:
raise ValueError('No HTTP server has been created.')
if not os.environ.get('LISTEN_PID', None):
# Start the httpserver in a new thread.
if isinstance(self.bind_addr, tuple):
wait_for_free_port(*self.bind_addr)
raise ValueError("No HTTP server has been created.")
# Start the httpserver in a new thread.
if isinstance(self.bind_addr, tuple):
wait_for_free_port(*self.bind_addr)
import threading
t = threading.Thread(target=self._start_http_thread)
t.setName('HTTPServer ' + t.getName())
t.setName("HTTPServer " + t.getName())
t.start()
self.wait()
self.running = True
self.bus.log('Serving on %s' % on_what)
self.bus.log("Serving on %s" % on_what)
start.priority = 75
def _get_base(self):
if not self.httpserver:
return ''
host, port = self.bind_addr
if getattr(self.httpserver, 'ssl_adapter', None):
scheme = 'https'
if port != 443:
host += ':%s' % port
else:
scheme = 'http'
if port != 80:
host += ':%s' % port
return '%s://%s' % (scheme, host)
def _start_http_thread(self):
"""HTTP servers MUST be running in new threads, so that the
main thread persists to receive KeyboardInterrupt's. If an
@@ -205,37 +73,35 @@ class ServerAdapter(object):
"""
try:
self.httpserver.start()
except KeyboardInterrupt:
self.bus.log('<Ctrl-C> hit: shutting down HTTP server')
self.interrupt = sys.exc_info()[1]
except KeyboardInterrupt, exc:
self.bus.log("<Ctrl-C> hit: shutting down HTTP server")
self.interrupt = exc
self.bus.exit()
except SystemExit:
self.bus.log('SystemExit raised: shutting down HTTP server')
self.interrupt = sys.exc_info()[1]
except SystemExit, exc:
self.bus.log("SystemExit raised: shutting down HTTP server")
self.interrupt = exc
self.bus.exit()
raise
except:
import sys
self.interrupt = sys.exc_info()[1]
self.bus.log('Error in HTTP server: shutting down',
self.bus.log("Error in HTTP server: shutting down",
traceback=True, level=40)
self.bus.exit()
raise
def wait(self):
"""Wait until the HTTP server is ready to receive requests."""
while not getattr(self.httpserver, 'ready', False):
while not getattr(self.httpserver, "ready", False):
if self.interrupt:
raise self.interrupt
time.sleep(.1)
# Wait for port to be occupied
if not os.environ.get('LISTEN_PID', None):
# Wait for the port to be occupied unless running via socket activation
# (with socket activation the port is managed by systemd).
if isinstance(self.bind_addr, tuple):
host, port = self.bind_addr
wait_for_occupied_port(host, port)
if isinstance(self.bind_addr, tuple):
host, port = self.bind_addr
wait_for_occupied_port(host, port)
def stop(self):
"""Stop the HTTP server."""
if self.running:
@@ -245,49 +111,24 @@ class ServerAdapter(object):
if isinstance(self.bind_addr, tuple):
wait_for_free_port(*self.bind_addr)
self.running = False
self.bus.log('HTTP Server %s shut down' % self.httpserver)
self.bus.log("HTTP Server %s shut down" % self.httpserver)
else:
self.bus.log('HTTP Server %s already shut down' % self.httpserver)
self.bus.log("HTTP Server %s already shut down" % self.httpserver)
stop.priority = 25
def restart(self):
"""Restart the HTTP server."""
self.stop()
self.start()
class FlupCGIServer(object):
"""Adapter for a flup.server.cgi.WSGIServer."""
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
self.ready = False
def start(self):
"""Start the CGI server."""
# We have to instantiate the server class here because its __init__
# starts a threadpool. If we do it too early, daemonize won't work.
from flup.server.cgi import WSGIServer
self.cgiserver = WSGIServer(*self.args, **self.kwargs)
self.ready = True
self.cgiserver.run()
def stop(self):
"""Stop the HTTP server."""
self.ready = False
class FlupFCGIServer(object):
"""Adapter for a flup.server.fcgi.WSGIServer."""
def __init__(self, *args, **kwargs):
if kwargs.get('bindAddress', None) is None:
import socket
if not hasattr(socket, 'fromfd'):
if not hasattr(socket.socket, 'fromfd'):
raise ValueError(
'Dynamic FCGI server not available on this platform. '
'You must use a static or external one by providing a '
@@ -295,7 +136,7 @@ class FlupFCGIServer(object):
self.args = args
self.kwargs = kwargs
self.ready = False
def start(self):
"""Start the FCGI server."""
# We have to instantiate the server class here because its __init__
@@ -315,26 +156,24 @@ class FlupFCGIServer(object):
self.fcgiserver._oldSIGs = []
self.ready = True
self.fcgiserver.run()
def stop(self):
"""Stop the HTTP server."""
# Forcibly stop the fcgi server main event loop.
self.fcgiserver._keepGoing = False
# Force all worker threads to die off.
self.fcgiserver._threadPool.maxSpare = (
self.fcgiserver._threadPool._idleCount)
self.fcgiserver._threadPool.maxSpare = self.fcgiserver._threadPool._idleCount
self.ready = False
class FlupSCGIServer(object):
"""Adapter for a flup.server.scgi.WSGIServer."""
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
self.ready = False
def start(self):
"""Start the SCGI server."""
# We have to instantiate the server class here because its __init__
@@ -354,7 +193,7 @@ class FlupSCGIServer(object):
self.scgiserver._oldSIGs = []
self.ready = True
self.scgiserver.run()
def stop(self):
"""Stop the HTTP server."""
self.ready = False
@@ -369,37 +208,24 @@ def client_host(server_host):
if server_host == '0.0.0.0':
# 0.0.0.0 is INADDR_ANY, which should answer on localhost.
return '127.0.0.1'
if server_host in ('::', '::0', '::0.0.0.0'):
if server_host == '::':
# :: is IN6ADDR_ANY, which should answer on localhost.
# ::0 and ::0.0.0.0 are non-canonical but common
# ways to write IN6ADDR_ANY.
return '::1'
return server_host
def check_port(host, port, timeout=1.0):
"""Raise an error if the given port is not free on the given host."""
if not host:
raise ValueError("Host values of '' or None are not allowed.")
host = client_host(host)
port = int(port)
import socket
# AF_INET or AF_INET6 socket
# Get the correct address family for our host (allows IPv6 addresses)
try:
info = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
socket.SOCK_STREAM)
except socket.gaierror:
if ':' in host:
info = [(
socket.AF_INET6, socket.SOCK_STREAM, 0, '', (host, port, 0, 0)
)]
else:
info = [(socket.AF_INET, socket.SOCK_STREAM, 0, '', (host, port))]
for res in info:
for res in socket.getaddrinfo(host, port, socket.AF_UNSPEC,
socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
s = None
try:
@@ -409,62 +235,41 @@ def check_port(host, port, timeout=1.0):
s.settimeout(timeout)
s.connect((host, port))
s.close()
raise IOError("Port %s is in use on %s; perhaps the previous "
"httpserver did not shut down properly." %
(repr(port), repr(host)))
except socket.error:
if s:
s.close()
else:
raise IOError('Port %s is in use on %s; perhaps the previous '
'httpserver did not shut down properly.' %
(repr(port), repr(host)))
# Feel free to increase these defaults on slow systems:
free_port_timeout = 0.1
occupied_port_timeout = 1.0
def wait_for_free_port(host, port, timeout=None):
def wait_for_free_port(host, port):
"""Wait for the specified port to become free (drop requests)."""
if not host:
raise ValueError("Host values of '' or None are not allowed.")
if timeout is None:
timeout = free_port_timeout
for trial in range(50):
for trial in xrange(50):
try:
# we are expecting a free port, so reduce the timeout
check_port(host, port, timeout=timeout)
check_port(host, port, timeout=0.1)
except IOError:
# Give the old server thread time to free the port.
time.sleep(timeout)
time.sleep(0.1)
else:
return
raise IOError("Port %r not free on %r" % (port, host))
raise IOError('Port %r not free on %r' % (port, host))
def wait_for_occupied_port(host, port, timeout=None):
def wait_for_occupied_port(host, port):
"""Wait for the specified port to become active (receive requests)."""
if not host:
raise ValueError("Host values of '' or None are not allowed.")
if timeout is None:
timeout = occupied_port_timeout
for trial in range(50):
for trial in xrange(50):
try:
check_port(host, port, timeout=timeout)
check_port(host, port)
except IOError:
# port is occupied
return
else:
time.sleep(timeout)
if host == client_host(host):
raise IOError('Port %r not bound on %r' % (port, host))
# On systems where a loopback interface is not available and the
# server is bound to all interfaces, it's difficult to determine
# whether the server is in fact occupying the port. In this case,
# just issue a warning and move on. See issue #1100.
msg = 'Unable to verify that the server is bound on %r' % port
warnings.warn(msg)
time.sleep(.1)
raise IOError("Port %r not bound on %r" % (port, host))

View File

@@ -1,6 +1,7 @@
"""Windows service. Requires pywin32."""
import os
import thread
import win32api
import win32con
import win32event
@@ -11,18 +12,17 @@ from cherrypy.process import wspbus, plugins
class ConsoleCtrlHandler(plugins.SimplePlugin):
"""A WSPBus plugin for handling Win32 console events (like Ctrl-C)."""
def __init__(self, bus):
self.is_set = False
plugins.SimplePlugin.__init__(self, bus)
def start(self):
if self.is_set:
self.bus.log('Handler for console events already set.', level=40)
return
result = win32api.SetConsoleCtrlHandler(self.handle, 1)
if result == 0:
self.bus.log('Could not SetConsoleCtrlHandler (error %r)' %
@@ -30,38 +30,38 @@ class ConsoleCtrlHandler(plugins.SimplePlugin):
else:
self.bus.log('Set handler for console events.', level=40)
self.is_set = True
def stop(self):
if not self.is_set:
self.bus.log('Handler for console events already off.', level=40)
return
try:
result = win32api.SetConsoleCtrlHandler(self.handle, 0)
except ValueError:
# "ValueError: The object has not been registered"
result = 1
if result == 0:
self.bus.log('Could not remove SetConsoleCtrlHandler (error %r)' %
win32api.GetLastError(), level=40)
else:
self.bus.log('Removed handler for console events.', level=40)
self.is_set = False
def handle(self, event):
"""Handle console control events (like Ctrl-C)."""
if event in (win32con.CTRL_C_EVENT, win32con.CTRL_LOGOFF_EVENT,
win32con.CTRL_BREAK_EVENT, win32con.CTRL_SHUTDOWN_EVENT,
win32con.CTRL_CLOSE_EVENT):
self.bus.log('Console event %s: shutting down bus' % event)
# Remove self immediately so repeated Ctrl-C doesn't re-call it.
try:
self.stop()
except ValueError:
pass
self.bus.exit()
# 'First to return True stops the calls'
return 1
@@ -69,39 +69,37 @@ class ConsoleCtrlHandler(plugins.SimplePlugin):
class Win32Bus(wspbus.Bus):
"""A Web Site Process Bus implementation for Win32.
Instead of time.sleep, this bus blocks using native win32event objects.
"""
def __init__(self):
self.events = {}
wspbus.Bus.__init__(self)
def _get_state_event(self, state):
"""Return a win32event for the given state (creating it if needed)."""
try:
return self.events[state]
except KeyError:
event = win32event.CreateEvent(None, 0, 0,
'WSPBus %s Event (pid=%r)' %
u"WSPBus %s Event (pid=%r)" %
(state.name, os.getpid()))
self.events[state] = event
return event
def _get_state(self):
return self._state
def _set_state(self, value):
self._state = value
event = self._get_state_event(value)
win32event.PulseEvent(event)
state = property(_get_state, _set_state)
def wait(self, state, interval=0.1, channel=None):
def wait(self, state, interval=0.1):
"""Wait for the given state(s), KeyboardInterrupt or SystemExit.
Since this class uses native win32event objects, the interval
argument is ignored.
"""
@@ -109,8 +107,7 @@ class Win32Bus(wspbus.Bus):
# Don't wait for an event that beat us to the punch ;)
if self.state not in state:
events = tuple([self._get_state_event(s) for s in state])
win32event.WaitForMultipleObjects(
events, 0, win32event.INFINITE)
win32event.WaitForMultipleObjects(events, 0, win32event.INFINITE)
else:
# Don't wait for an event that beat us to the punch ;)
if self.state != state:
@@ -119,23 +116,22 @@ class Win32Bus(wspbus.Bus):
class _ControlCodes(dict):
"""Control codes used to "signal" a service via ControlService.
User-defined control codes are in the range 128-255. We generally use
the standard Python value for the Linux signal and add 128. Example:
>>> signal.SIGUSR1
10
control_codes['graceful'] = 128 + 10
"""
def key_for(self, obj):
"""For the given value, return its corresponding key."""
for key, val in self.items():
for key, val in self.iteritems():
if val is obj:
return key
raise ValueError('The given object could not be found: %r' % obj)
raise ValueError("The given object could not be found: %r" % obj)
control_codes = _ControlCodes({'graceful': 138})
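The docstring above gives the convention: a user-defined Win32 service control code is 128 plus the Linux signal number, so 'graceful' is 128 + SIGUSR1 (10) = 138, and key_for() does the reverse lookup when the service receives a code. A small standalone illustration (SIGUSR1 is hard-coded because the signal module does not define it on Windows):

SIGUSR1 = 10                                   # value of signal.SIGUSR1 on Linux

control_codes = {'graceful': 128 + SIGUSR1}    # == 138, as above

def key_for(codes, value):
    """Return the name registered for a given control code."""
    for key, val in codes.items():
        if val == value:
            return key
    raise ValueError('The given object could not be found: %r' % value)

assert control_codes['graceful'] == 138
assert key_for(control_codes, 138) == 'graceful'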
@@ -150,28 +146,27 @@ def signal_child(service, command):
class PyWebService(win32serviceutil.ServiceFramework):
"""Python Web Service."""
_svc_name_ = 'Python Web Service'
_svc_display_name_ = 'Python Web Service'
_svc_name_ = "Python Web Service"
_svc_display_name_ = "Python Web Service"
_svc_deps_ = None # sequence of service names on which this depends
_exe_name_ = 'pywebsvc'
_exe_name_ = "pywebsvc"
_exe_args_ = None # Default to no arguments
# Only exists on Windows 2000 or later, ignored on windows NT
_svc_description_ = 'Python Web Service'
_svc_description_ = "Python Web Service"
def SvcDoRun(self):
from cherrypy import process
process.bus.start()
process.bus.block()
def SvcStop(self):
from cherrypy import process
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
process.bus.exit()
def SvcOther(self, control):
process.bus.publish(control_codes.key_for(control))

View File

@@ -20,24 +20,24 @@ autoreload component.
Ideally, a Bus object will be flexible enough to be useful in a variety
of invocation scenarios:
1. The deployer starts a site from the command line via a
framework-neutral deployment script; applications from multiple frameworks
are mixed in a single site. Command-line arguments and configuration
files are used to define site-wide components such as the HTTP server,
WSGI component graph, autoreload behavior, signal handling, etc.
1. The deployer starts a site from the command line via a framework-
neutral deployment script; applications from multiple frameworks
are mixed in a single site. Command-line arguments and configuration
files are used to define site-wide components such as the HTTP server,
WSGI component graph, autoreload behavior, signal handling, etc.
2. The deployer starts a site via some other process, such as Apache;
applications from multiple frameworks are mixed in a single site.
Autoreload and signal handling (from Python at least) are disabled.
applications from multiple frameworks are mixed in a single site.
Autoreload and signal handling (from Python at least) are disabled.
3. The deployer starts a site via a framework-specific mechanism;
for example, when running tests, exploring tutorials, or deploying
single applications from a single framework. The framework controls
which site-wide components are enabled as it sees fit.
for example, when running tests, exploring tutorials, or deploying
single applications from a single framework. The framework controls
which site-wide components are enabled as it sees fit.
The Bus object in this package uses topic-based publish-subscribe
messaging to accomplish all this. A few topic channels are built in
('start', 'stop', 'exit', 'graceful', 'log', and 'main'). Frameworks and
site containers are free to define their own. If a message is sent to a
channel that has not been defined or has no listeners, there is no effect.
('start', 'stop', 'exit', and 'graceful'). Frameworks and site containers
are free to define their own. If a message is sent to a channel that has
not been defined or has no listeners, there is no effect.
In general, there should only ever be a single Bus object per process.
Frameworks and site containers share a single Bus object by publishing
@@ -46,7 +46,7 @@ messages and subscribing listeners.
The Bus object works as a finite state machine which models the current
state of the process. Bus methods move it from one state to another;
those methods then publish to subscribed listeners on the channel for
the new state.::
the new state.
O
|
@@ -61,69 +61,25 @@ the new state.::
"""
import atexit
import ctypes
import operator
import os
import subprocess
try:
set
except NameError:
from sets import Set as set
import sys
import threading
import time
import traceback as _traceback
import warnings
import six
from cherrypy._cpcompat import _args_from_interpreter_flags
# Here I save the value of os.getcwd(), which, if I am imported early enough,
# will be the directory from which the startup script was run. This is needed
# by _do_execv(), to change back to the original directory before execv()ing a
# new process. This is a defense against the application having changed the
# current working directory (which could make sys.executable "not found" if
# sys.executable is a relative-path, and/or cause other problems).
_startup_cwd = os.getcwd()
class ChannelFailures(Exception):
"""Exception raised when errors occur in a listener during Bus.publish().
"""
delimiter = '\n'
def __init__(self, *args, **kwargs):
super(Exception, self).__init__(*args, **kwargs)
self._exceptions = list()
def handle_exception(self):
"""Append the current exception to self."""
self._exceptions.append(sys.exc_info()[1])
def get_instances(self):
"""Return a list of seen exception instances."""
return self._exceptions[:]
def __str__(self):
exception_strings = map(repr, self.get_instances())
return self.delimiter.join(exception_strings)
__repr__ = __str__
def __bool__(self):
return bool(self._exceptions)
__nonzero__ = __bool__
# Use a flag to indicate the state of the bus.
class _StateEnum(object):
class State(object):
name = None
def __repr__(self):
return 'states.%s' % self.name
return "states.%s" % self.name
def __setattr__(self, key, value):
if isinstance(value, self.State):
value.name = key
@@ -136,109 +92,92 @@ states.STOPPING = states.State()
states.EXITING = states.State()
try:
import fcntl
except ImportError:
max_files = 0
else:
try:
max_files = os.sysconf('SC_OPEN_MAX')
except AttributeError:
max_files = 1024
class Bus(object):
"""Process state-machine and messenger for HTTP site deployment.
All listeners for a given channel are guaranteed to be called even
if others at the same channel fail. Each failure is logged, but
execution proceeds on to the next listener. The only way to stop all
processing from inside a listener is to raise SystemExit and stop the
whole server.
"""
states = states
state = states.STOPPED
execv = False
max_cloexec_files = max_files
def __init__(self):
self.execv = False
self.state = states.STOPPED
channels = 'start', 'stop', 'exit', 'graceful', 'log', 'main'
self.listeners = dict(
(channel, set())
for channel in channels
)
[(channel, set()) for channel
in ('start', 'stop', 'exit', 'graceful', 'log')])
self._priorities = {}
def subscribe(self, channel, callback, priority=None):
"""Add the given callback at the given channel (if not present)."""
ch_listeners = self.listeners.setdefault(channel, set())
ch_listeners.add(callback)
if channel not in self.listeners:
self.listeners[channel] = set()
self.listeners[channel].add(callback)
if priority is None:
priority = getattr(callback, 'priority', 50)
self._priorities[(channel, callback)] = priority
def unsubscribe(self, channel, callback):
"""Discard the given callback (if present)."""
listeners = self.listeners.get(channel)
if listeners and callback in listeners:
listeners.discard(callback)
del self._priorities[(channel, callback)]
def publish(self, channel, *args, **kwargs):
"""Return output of all subscribers for the given channel."""
if channel not in self.listeners:
return []
exc = ChannelFailures()
exc = None
output = []
raw_items = (
(self._priorities[(channel, listener)], listener)
for listener in self.listeners[channel]
)
items = sorted(raw_items, key=operator.itemgetter(0))
items = [(self._priorities[(channel, listener)], listener)
for listener in self.listeners[channel]]
items.sort()
for priority, listener in items:
try:
output.append(listener(*args, **kwargs))
except KeyboardInterrupt:
raise
except SystemExit:
e = sys.exc_info()[1]
except SystemExit, e:
# If we have previous errors ensure the exit code is non-zero
if exc and e.code == 0:
e.code = 1
raise
except:
exc.handle_exception()
exc = sys.exc_info()[1]
if channel == 'log':
# Assume any further messages to 'log' will fail.
pass
else:
self.log('Error in %r listener %r' % (channel, listener),
self.log("Error in %r listener %r" % (channel, listener),
level=40, traceback=True)
if exc:
raise exc
raise
return output
def _clean_exit(self):
"""An atexit handler which asserts the Bus is not running."""
if self.state != states.EXITING:
warnings.warn(
'The main thread is exiting, but the Bus is in the %r state; '
'shutting it down automatically now. You must either call '
'bus.block() after start(), or call bus.exit() before the '
'main thread exits.' % self.state, RuntimeWarning)
"The main thread is exiting, but the Bus is in the %r state; "
"shutting it down automatically now. You must either call "
"bus.block() after start(), or call bus.exit() before the "
"main thread exits." % self.state, RuntimeWarning)
self.exit()
def start(self):
"""Start all services."""
atexit.register(self._clean_exit)
self.state = states.STARTING
self.log('Bus STARTING')
try:
@@ -248,23 +187,21 @@ class Bus(object):
except (KeyboardInterrupt, SystemExit):
raise
except:
self.log('Shutting down due to error in start listener:',
self.log("Shutting down due to error in start listener:",
level=40, traceback=True)
e_info = sys.exc_info()[1]
e_info = sys.exc_info()
try:
self.exit()
except:
# Any stop/exit errors will be logged inside publish().
pass
# Re-raise the original error
raise e_info
raise e_info[0], e_info[1], e_info[2]
def exit(self):
"""Stop all services and prepare to exit the process."""
exitstate = self.state
try:
self.stop()
self.state = states.EXITING
self.log('Bus EXITING')
self.publish('exit')
@@ -276,32 +213,25 @@ class Bus(object):
# signal handler, console handler, or atexit handler), so we
# can't just let exceptions propagate out unhandled.
# Assume it's been logged and just die.
os._exit(70) # EX_SOFTWARE
if exitstate == states.STARTING:
# exit() was called before start() finished, possibly due to
# Ctrl-C because a start listener got stuck. In this case,
# we could get stuck in a loop where Ctrl-C never exits the
# process, so we just call os.exit here.
os._exit(70) # EX_SOFTWARE
os._exit(70) # EX_SOFTWARE
def restart(self):
"""Restart the process (may close connections).
This method does not restart the process from the calling thread;
instead, it stops the bus and asks the main thread to call execv.
"""
self.execv = True
self.exit()
def graceful(self):
"""Advise all services to reload."""
self.log('Bus graceful')
self.publish('graceful')
def block(self, interval=0.1):
"""Wait for the EXITING state, KeyboardInterrupt or SystemExit.
This function is intended to be called only by the main thread.
After waiting for the EXITING state, it also waits for all threads
to terminate, and then calls os.execv if self.execv is True. This
@@ -309,7 +239,7 @@ class Bus(object):
thread perform the actual execv call (required on some platforms).
"""
try:
self.wait(states.EXITING, interval=interval, channel='main')
self.wait(states.EXITING, interval=interval)
except (KeyboardInterrupt, IOError):
# The time.sleep call might raise
# "IOError: [Errno 4] Interrupted function call" on KBInt.
@@ -319,48 +249,38 @@ class Bus(object):
self.log('SystemExit raised: shutting down bus')
self.exit()
raise
# Waiting for ALL child threads to finish is necessary on OS X.
# See https://github.com/cherrypy/cherrypy/issues/581.
# See http://www.cherrypy.org/ticket/581.
# It's also good to let them all shut down before allowing
# the main thread to call atexit handlers.
# See https://github.com/cherrypy/cherrypy/issues/751.
self.log('Waiting for child threads to terminate...')
# See http://www.cherrypy.org/ticket/751.
self.log("Waiting for child threads to terminate...")
for t in threading.enumerate():
# Validate that we're not trying to join the MainThread,
# which would cause a deadlock. This can happen when running
# as a Windows service, or whenever another thread executes
# cherrypy.engine.exit()
if (
t != threading.currentThread() and
t.isAlive() and
not isinstance(t, threading._MainThread)
):
if t != threading.currentThread() and t.isAlive():
# Note that any dummy (external) threads are always daemonic.
if hasattr(threading.Thread, 'daemon'):
if hasattr(threading.Thread, "daemon"):
# Python 2.6+
d = t.daemon
else:
d = t.isDaemon()
if not d:
self.log('Waiting for thread %s.' % t.getName())
t.join()
if self.execv:
self._do_execv()
def wait(self, state, interval=0.1, channel=None):
"""Poll for the given state(s) at intervals; publish to channel."""
def wait(self, state, interval=0.1):
"""Wait for the given state(s)."""
if isinstance(state, (tuple, list)):
states = state
else:
states = [state]
def _wait():
while self.state not in states:
time.sleep(interval)
self.publish(channel)
# From http://psyco.sourceforge.net/psycoguide/bugs.html:
# "The compiled machine code does not include the regular polling
# done by Python, meaning that a KeyboardInterrupt will not be
@@ -371,112 +291,23 @@ class Bus(object):
sys.modules['psyco'].cannotcompile(_wait)
except (KeyError, AttributeError):
pass
_wait()
def _do_execv(self):
"""Re-execute the current process.
This must be called from the main thread, because certain platforms
(OS X) don't allow execv to be called in a child thread very well.
"""
try:
args = self._get_true_argv()
except NotImplementedError:
"""It's probably win32"""
# For the SABnzbd.exe binary we don't want interpreter flags
# https://github.com/cherrypy/cherrypy/issues/1526
if getattr(sys, 'frozen', False):
args = [sys.executable] + sys.argv
else:
args = [sys.executable] + _args_from_interpreter_flags() + sys.argv
args = sys.argv[:]
self.log('Re-spawning %s' % ' '.join(args))
self._extend_pythonpath(os.environ)
if sys.platform[:4] == 'java':
from _systemrestart import SystemRestart
raise SystemRestart
else:
if sys.platform == 'win32':
args = ['"%s"' % arg for arg in args]
os.chdir(_startup_cwd)
if self.max_cloexec_files:
self._set_cloexec()
os.execv(sys.executable, args)
@staticmethod
def _get_true_argv():
"""Retrieves all real arguments of the python interpreter
...even those not listed in ``sys.argv``
:seealso: http://stackoverflow.com/a/28338254/595220
:seealso: http://stackoverflow.com/a/6683222/595220
:seealso: http://stackoverflow.com/a/28414807/595220
"""
try:
char_p = ctypes.c_char_p if six.PY2 else ctypes.c_wchar_p
argv = ctypes.POINTER(char_p)()
argc = ctypes.c_int()
ctypes.pythonapi.Py_GetArgcArgv(ctypes.byref(argc), ctypes.byref(argv))
except AttributeError:
"""It looks Py_GetArgcArgv is completely absent in MS Windows
:seealso: https://github.com/cherrypy/cherrypy/issues/1506
:ref: https://chromium.googlesource.com/infra/infra/+/69eb0279c12bcede5937ce9298020dd4581e38dd%5E!/
"""
raise NotImplementedError
else:
return argv[:argc.value]
@staticmethod
def _extend_pythonpath(env):
"""
If sys.path[0] is an empty string, the interpreter was likely
invoked with -m and the effective path is about to change on
re-exec. Add the current directory to $PYTHONPATH to ensure
that the new process sees the same path.
This issue cannot be addressed in the general case because
Python cannot reliably reconstruct the
original command line (http://bugs.python.org/issue14208).
(This idea filched from tornado.autoreload)
"""
path_prefix = '.' + os.pathsep
existing_path = env.get('PYTHONPATH', '')
needs_patch = (
sys.path[0] == '' and
not existing_path.startswith(path_prefix)
)
if needs_patch:
env['PYTHONPATH'] = path_prefix + existing_path
def _set_cloexec(self):
"""Set the CLOEXEC flag on all open files (except stdin/out/err).
If self.max_cloexec_files is an integer (the default), then on
platforms which support it, it represents the max open files setting
for the operating system. This function will be called just before
the process is restarted via os.execv() to prevent open files
from persisting into the new process.
Set self.max_cloexec_files to 0 to disable this behavior.
"""
for fd in range(3, self.max_cloexec_files): # skip stdin/out/err
try:
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
except IOError:
continue
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
args.insert(0, sys.executable)
if sys.platform == 'win32':
args = ['"%s"' % arg for arg in args]
os.execv(sys.executable, args)
def stop(self):
"""Stop all services."""
self.state = states.STOPPING
@@ -484,7 +315,7 @@ class Bus(object):
self.publish('stop')
self.state = states.STOPPED
self.log('Bus STOPPED')
def start_with_callback(self, func, args=None, kwargs=None):
"""Start 'func' in a new thread T, then start self (and return T)."""
if args is None:
@@ -492,28 +323,23 @@ class Bus(object):
if kwargs is None:
kwargs = {}
args = (func,) + args
def _callback(func, *a, **kw):
self.wait(states.STARTED)
func(*a, **kw)
t = threading.Thread(target=_callback, args=args, kwargs=kwargs)
t.setName('Bus Callback ' + t.getName())
t.start()
self.start()
return t
def log(self, msg='', level=20, traceback=False):
def log(self, msg="", level=20, traceback=False):
"""Log the given message. Append the last traceback if requested."""
if traceback:
# Work-around for bug in Python's traceback implementation
# which crashes when the error message contains %1, %2 etc.
errors = sys.exc_info()
if '%' in errors[1].message:
errors[1].message = errors[1].message.replace('%', '#')
errors[1].args = [item.replace('%', '#') for item in errors[1].args]
msg += "\n" + "".join(_traceback.format_exception(*errors))
exc = sys.exc_info()
msg += "\n" + "".join(_traceback.format_exception(*exc))
self.publish('log', msg, level)
bus = Bus()
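A minimal usage sketch of the bus defined above; the channel names and the subscribe/publish/start/exit calls come from the code, while the listener functions themselves are made up for illustration:

from cherrypy.process import wspbus

def on_start():
    print('engine started')

def on_log(msg, level):
    print('LOG %s: %s' % (level, msg))

engine = wspbus.Bus()
engine.subscribe('start', on_start)   # run when start() publishes to 'start'
engine.subscribe('log', on_log)       # Bus.log() publishes (msg, level) to 'log'
engine.start()                        # logs 'Bus STARTING' and fires 'start' listeners
engine.exit()                         # stops services, then fires 'stop' and 'exit'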

View File

File diff suppressed because it is too large

View File

@@ -1,124 +0,0 @@
"""A library for integrating Python's builtin ``ssl`` library with CherryPy.
The ssl module must be importable for SSL functionality.
To use this module, set ``CherryPyWSGIServer.ssl_adapter`` to an instance of
``BuiltinSSLAdapter``.
"""
try:
import ssl
except ImportError:
ssl = None
try:
from _pyio import DEFAULT_BUFFER_SIZE
except ImportError:
try:
from io import DEFAULT_BUFFER_SIZE
except ImportError:
DEFAULT_BUFFER_SIZE = -1
import sys
from cherrypy import wsgiserver
class BuiltinSSLAdapter(wsgiserver.SSLAdapter):
"""A wrapper for integrating Python's builtin ssl module with CherryPy."""
certificate = None
"""The filename of the server SSL certificate."""
private_key = None
"""The filename of the server's private key file."""
certificate_chain = None
"""The filename of the certificate chain file."""
"""The ssl.SSLContext that will be used to wrap sockets where available
(on Python > 2.7.9 / 3.3)
"""
context = None
def __init__(self, certificate, private_key, certificate_chain=None):
if ssl is None:
raise ImportError('You must install the ssl module to use HTTPS.')
self.certificate = certificate
self.private_key = private_key
self.certificate_chain = certificate_chain
if hasattr(ssl, 'create_default_context'):
self.context = ssl.create_default_context(
purpose=ssl.Purpose.CLIENT_AUTH,
cafile=certificate_chain
)
self.context.load_cert_chain(certificate, private_key)
def bind(self, sock):
"""Wrap and return the given socket."""
return sock
def wrap(self, sock):
"""Wrap and return the given socket, plus WSGI environ entries."""
try:
if self.context is not None:
s = self.context.wrap_socket(sock,do_handshake_on_connect=True,
server_side=True)
else:
s = ssl.wrap_socket(sock, do_handshake_on_connect=True,
server_side=True, certfile=self.certificate,
keyfile=self.private_key,
ssl_version=ssl.PROTOCOL_SSLv23,
ca_certs=self.certificate_chain)
except ssl.SSLError:
e = sys.exc_info()[1]
if e.errno == ssl.SSL_ERROR_EOF:
# This is almost certainly due to the cherrypy engine
# 'pinging' the socket to assert it's connectable;
# the 'ping' isn't SSL.
return None, {}
elif e.errno == ssl.SSL_ERROR_SSL:
if 'http request' in e.args[1]:
# The client is speaking HTTP to an HTTPS server.
raise wsgiserver.NoSSLError
# Check if it's one of the known errors
# Errors that are caught by PyOpenSSL, but thrown by built-in ssl
_block_errors = ('unknown protocol', 'unknown ca', 'unknown_ca', 'unknown error',
'https proxy request', 'inappropriate fallback', 'wrong version number',
'no shared cipher', 'certificate unknown', 'ccs received early')
for error_text in _block_errors:
if error_text in e.args[1].lower():
# Accepted error, let's pass
return None, {}
elif 'handshake operation timed out' in e.args[0]:
# This error is thrown by builtin SSL after a timeout
# when client is speaking HTTP to an HTTPS server.
# The connection can safely be dropped.
return None, {}
raise
except:
# Temporary fix for https://github.com/cherrypy/cherrypy/issues/1618
e = sys.exc_info()[1]
if e.args == (0, 'Error'):
return None, {}
raise
return s, self.get_environ(s)
# TODO: fill this out more with mod ssl env
def get_environ(self, sock):
"""Create WSGI environ entries to be merged into each request."""
cipher = sock.cipher()
ssl_environ = {
'wsgi.url_scheme': 'https',
'HTTPS': 'on',
'SSL_PROTOCOL': cipher[1],
'SSL_CIPHER': cipher[0]
# SSL_VERSION_INTERFACE string The mod_ssl program version
# SSL_VERSION_LIBRARY string The OpenSSL program version
}
return ssl_environ
def makefile(self, sock, mode='r', bufsize=DEFAULT_BUFFER_SIZE):
return wsgiserver.CP_makefile(sock, mode, bufsize)
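The module docstring above says to set CherryPyWSGIServer.ssl_adapter to a BuiltinSSLAdapter instance. A hedged sketch of that wiring, assuming the file lives at cherrypy/wsgiserver/ssl_builtin.py as in stock CherryPy (the certificate paths and the tiny WSGI app are placeholders):

from cherrypy import wsgiserver
from cherrypy.wsgiserver.ssl_builtin import BuiltinSSLAdapter

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello over https\n']

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8443), app)
server.ssl_adapter = BuiltinSSLAdapter(
    certificate='/path/to/server.crt',    # placeholder paths
    private_key='/path/to/server.key')
server.start()                            # blocks; call server.stop() to shut down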

View File

@@ -1,253 +0,0 @@
"""A library for integrating pyOpenSSL with CherryPy.
The OpenSSL module must be importable for SSL functionality.
You can obtain it from `here <https://launchpad.net/pyopenssl>`_.
To use this module, set CherryPyWSGIServer.ssl_adapter to an instance of
SSLAdapter. There are two ways to use SSL:
Method One
----------
* ``ssl_adapter.context``: an instance of SSL.Context.
If this is not None, it is assumed to be an SSL.Context instance,
and will be passed to SSL.Connection on bind(). The developer is
responsible for forming a valid Context object. This approach is
to be preferred for more flexibility, e.g. if the cert and key are
streams instead of files, or need decryption, or SSL.SSLv3_METHOD
is desired instead of the default SSL.SSLv23_METHOD, etc. Consult
the pyOpenSSL documentation for complete options.
Method Two (shortcut)
---------------------
* ``ssl_adapter.certificate``: the filename of the server SSL certificate.
* ``ssl_adapter.private_key``: the filename of the server's private key file.
Both are None by default. If ssl_adapter.context is None, but .private_key
and .certificate are both given and valid, they will be read, and the
context will be automatically created from them.
"""
import socket
import threading
import time
from cherrypy import wsgiserver
try:
from OpenSSL import SSL
from OpenSSL import crypto
except ImportError:
SSL = None
class SSL_fileobject(wsgiserver.CP_makefile):
"""SSL file object attached to a socket object."""
ssl_timeout = 3
ssl_retry = .01
def _safe_call(self, is_reader, call, *args, **kwargs):
"""Wrap the given call with SSL error-trapping.
is_reader: if False EOF errors will be raised. If True, EOF errors
will return "" (to emulate normal sockets).
"""
start = time.time()
while True:
try:
return call(*args, **kwargs)
except SSL.WantReadError:
# Sleep and try again. This is dangerous, because it means
# the rest of the stack has no way of differentiating
# between a "new handshake" error and "client dropped".
# Note this isn't an endless loop: there's a timeout below.
time.sleep(self.ssl_retry)
except SSL.WantWriteError:
time.sleep(self.ssl_retry)
except SSL.SysCallError as e:
if is_reader and e.args == (-1, 'Unexpected EOF'):
return ''
errnum = e.args[0]
if is_reader and errnum in wsgiserver.socket_errors_to_ignore:
return ''
raise socket.error(errnum)
except SSL.Error as e:
if is_reader and e.args == (-1, 'Unexpected EOF'):
return ''
thirdarg = None
try:
thirdarg = e.args[0][0][2]
except IndexError:
pass
if thirdarg == 'http request':
# The client is talking HTTP to an HTTPS server.
raise wsgiserver.NoSSLError()
raise wsgiserver.FatalSSLAlert(*e.args)
except:
raise
if time.time() - start > self.ssl_timeout:
raise socket.timeout('timed out')
def recv(self, size):
return self._safe_call(True, super(SSL_fileobject, self).recv, size)
def sendall(self, *args, **kwargs):
return self._safe_call(False, super(SSL_fileobject, self).sendall,
*args, **kwargs)
def send(self, *args, **kwargs):
return self._safe_call(False, super(SSL_fileobject, self).send,
*args, **kwargs)
class SSLConnection:
"""A thread-safe wrapper for an SSL.Connection.
``*args``: the arguments to create the wrapped ``SSL.Connection(*args)``.
"""
def __init__(self, *args):
self._ssl_conn = SSL.Connection(*args)
self._lock = threading.RLock()
for f in ('get_context', 'pending', 'send', 'write', 'recv', 'read',
'renegotiate', 'bind', 'listen', 'connect', 'accept',
'setblocking', 'fileno', 'close', 'get_cipher_list',
'getpeername', 'getsockname', 'getsockopt', 'setsockopt',
'makefile', 'get_app_data', 'set_app_data', 'state_string',
'sock_shutdown', 'get_peer_certificate', 'want_read',
'want_write', 'set_connect_state', 'set_accept_state',
'connect_ex', 'sendall', 'settimeout', 'gettimeout'):
exec("""def %s(self, *args):
self._lock.acquire()
try:
return self._ssl_conn.%s(*args)
finally:
self._lock.release()
""" % (f, f))
def shutdown(self, *args):
self._lock.acquire()
try:
# pyOpenSSL.socket.shutdown takes no args
return self._ssl_conn.shutdown()
finally:
self._lock.release()
class pyOpenSSLAdapter(wsgiserver.SSLAdapter):
"""A wrapper for integrating pyOpenSSL with CherryPy."""
context = None
"""An instance of SSL.Context."""
certificate = None
"""The filename of the server SSL certificate."""
private_key = None
"""The filename of the server's private key file."""
certificate_chain = None
"""Optional. The filename of CA's intermediate certificate bundle.
This is needed for cheaper "chained root" SSL certificates, and should be
left as None if not required."""
def __init__(self, certificate, private_key, certificate_chain=None):
if SSL is None:
raise ImportError('You must install pyOpenSSL to use HTTPS.')
self.context = None
self.certificate = certificate
self.private_key = private_key
self.certificate_chain = certificate_chain
self._environ = None
def bind(self, sock):
"""Wrap and return the given socket."""
if self.context is None:
self.context = self.get_context()
conn = SSLConnection(self.context, sock)
self._environ = self.get_environ()
return conn
def wrap(self, sock):
"""Wrap and return the given socket, plus WSGI environ entries."""
return sock, self._environ.copy()
def get_context(self):
"""Return an SSL.Context from self attributes."""
# See http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442473
c = SSL.Context(SSL.SSLv23_METHOD)
c.use_privatekey_file(self.private_key)
if self.certificate_chain:
c.load_verify_locations(self.certificate_chain)
c.use_certificate_file(self.certificate)
return c
def get_environ(self):
"""Return WSGI environ entries to be merged into each request."""
ssl_environ = {
'HTTPS': 'on',
# pyOpenSSL doesn't provide access to any of these AFAICT
# 'SSL_PROTOCOL': 'SSLv2',
# SSL_CIPHER string The cipher specification name
# SSL_VERSION_INTERFACE string The mod_ssl program version
# SSL_VERSION_LIBRARY string The OpenSSL program version
}
if self.certificate:
# Server certificate attributes
cert = open(self.certificate, 'rb').read()
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert)
ssl_environ.update({
'SSL_SERVER_M_VERSION': cert.get_version(),
'SSL_SERVER_M_SERIAL': cert.get_serial_number(),
# 'SSL_SERVER_V_START':
# Validity of server's certificate (start time),
# 'SSL_SERVER_V_END':
# Validity of server's certificate (end time),
})
for prefix, dn in [('I', cert.get_issuer()),
('S', cert.get_subject())]:
# X509Name objects don't seem to have a way to get the
# complete DN string. Use str() and slice it instead,
# because str(dn) == "<X509Name object '/C=US/ST=...'>"
dnstr = str(dn)[18:-2]
wsgikey = 'SSL_SERVER_%s_DN' % prefix
ssl_environ[wsgikey] = dnstr
# The DN should be of the form: /k1=v1/k2=v2, but we must allow
# for any value to contain slashes itself (in a URL).
while dnstr:
pos = dnstr.rfind('=')
dnstr, value = dnstr[:pos], dnstr[pos + 1:]
pos = dnstr.rfind('/')
dnstr, key = dnstr[:pos], dnstr[pos + 1:]
if key and value:
wsgikey = 'SSL_SERVER_%s_DN_%s' % (prefix, key)
ssl_environ[wsgikey] = value
return ssl_environ
def makefile(self, sock, mode='r', bufsize=-1):
if SSL and isinstance(sock, SSL.ConnectionType):
timeout = sock.gettimeout()
f = SSL_fileobject(sock, mode, bufsize)
f.ssl_timeout = timeout
return f
else:
return wsgiserver.CP_fileobject(sock, mode, bufsize)
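Method One from the module docstring above amounts to building the SSL.Context yourself and assigning it to the adapter before bind(). A sketch under those assumptions (pyOpenSSL installed, cherrypy/wsgiserver/ssl_pyopenssl.py as the module path, placeholder file names):

from OpenSSL import SSL
from cherrypy.wsgiserver.ssl_pyopenssl import pyOpenSSLAdapter

adapter = pyOpenSSLAdapter('/path/to/server.crt', '/path/to/server.key')
ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.use_privatekey_file('/path/to/server.key')
ctx.use_certificate_file('/path/to/server.crt')
adapter.context = ctx      # bind() uses this context instead of calling get_context()
# then: server.ssl_adapter = adapter, exactly as with BuiltinSSLAdapter above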

View File

@@ -1,16 +0,0 @@
import six
import mock
from cherrypy import wsgiserver
class TestWSGIGateway_u0:
@mock.patch('cherrypy.wsgiserver.WSGIGateway_10.get_environ',
lambda self: {'foo': 'bar'})
def test_decodes_items(self):
req = mock.MagicMock(path=b'/', qs=b'')
gw = wsgiserver.WSGIGateway_u0(req=req)
env = gw.get_environ()
assert env['foo'] == 'bar'
assert isinstance(env['foo'], six.text_type)

View File

@@ -0,0 +1,509 @@
import re
import hashlib
import time
import StringIO
__version__ = '0.8'
#GNTP/<version> <messagetype> <encryptionAlgorithmID>[:<ivValue>][ <keyHashAlgorithmID>:<keyHash>.<salt>]
GNTP_INFO_LINE = re.compile(
'GNTP/(?P<version>\d+\.\d+) (?P<messagetype>REGISTER|NOTIFY|SUBSCRIBE|\-OK|\-ERROR)' +
' (?P<encryptionAlgorithmID>[A-Z0-9]+(:(?P<ivValue>[A-F0-9]+))?) ?' +
'((?P<keyHashAlgorithmID>[A-Z0-9]+):(?P<keyHash>[A-F0-9]+).(?P<salt>[A-F0-9]+))?\r\n',
re.IGNORECASE
)
GNTP_INFO_LINE_SHORT = re.compile(
'GNTP/(?P<version>\d+\.\d+) (?P<messagetype>REGISTER|NOTIFY|SUBSCRIBE|\-OK|\-ERROR)',
re.IGNORECASE
)
GNTP_HEADER = re.compile('([\w-]+):(.+)')
GNTP_EOL = '\r\n'
class BaseError(Exception):
def gntp_error(self):
error = GNTPError(self.errorcode, self.errordesc)
return error.encode()
class ParseError(BaseError):
errorcode = 500
errordesc = 'Error parsing the message'
class AuthError(BaseError):
errorcode = 400
errordesc = 'Error with authorization'
class UnsupportedError(BaseError):
errorcode = 500
errordesc = 'Currently unsupported by gntp.py'
class _GNTPBuffer(StringIO.StringIO):
"""GNTP Buffer class"""
def writefmt(self, message="", *args):
"""Shortcut function for writing GNTP Headers"""
self.write((message % args).encode('utf8', 'replace'))
self.write(GNTP_EOL)
class _GNTPBase(object):
"""Base initilization
:param string messagetype: GNTP Message type
:param string version: GNTP Protocol version
:param string encription: Encryption protocol
"""
def __init__(self, messagetype=None, version='1.0', encryption=None):
self.info = {
'version': version,
'messagetype': messagetype,
'encryptionAlgorithmID': encryption
}
self.headers = {}
self.resources = {}
def __str__(self):
return self.encode()
def _parse_info(self, data):
"""Parse the first line of a GNTP message to get security and other info values
:param string data: GNTP Message
:return dict: Parsed GNTP Info line
"""
match = GNTP_INFO_LINE.match(data)
if not match:
raise ParseError('ERROR_PARSING_INFO_LINE')
info = match.groupdict()
if info['encryptionAlgorithmID'] == 'NONE':
info['encryptionAlgorithmID'] = None
return info
def set_password(self, password, encryptAlgo='MD5'):
"""Set a password for a GNTP Message
:param string password: Null to clear password
:param string encryptAlgo: Supports MD5, SHA1, SHA256, SHA512
"""
hash = {
'MD5': hashlib.md5,
'SHA1': hashlib.sha1,
'SHA256': hashlib.sha256,
'SHA512': hashlib.sha512,
}
self.password = password
self.encryptAlgo = encryptAlgo.upper()
if not password:
self.info['encryptionAlgorithmID'] = None
self.info['keyHashAlgorithm'] = None
return
if not self.encryptAlgo in hash.keys():
raise UnsupportedError('INVALID HASH "%s"' % self.encryptAlgo)
hashfunction = hash.get(self.encryptAlgo)
password = password.encode('utf8')
seed = time.ctime()
salt = hashfunction(seed).hexdigest()
saltHash = hashfunction(seed).digest()
keyBasis = password + saltHash
key = hashfunction(keyBasis).digest()
keyHash = hashfunction(key).hexdigest()
self.info['keyHashAlgorithmID'] = self.encryptAlgo
self.info['keyHash'] = keyHash.upper()
self.info['salt'] = salt.upper()
def _decode_hex(self, value):
"""Helper function to decode hex string to `proper` hex string
:param string value: Human readable hex string
:return string: Hex string
"""
result = ''
for i in range(0, len(value), 2):
tmp = int(value[i:i + 2], 16)
result += chr(tmp)
return result
def _decode_binary(self, rawIdentifier, identifier):
rawIdentifier += '\r\n\r\n'
dataLength = int(identifier['Length'])
pointerStart = self.raw.find(rawIdentifier) + len(rawIdentifier)
pointerEnd = pointerStart + dataLength
data = self.raw[pointerStart:pointerEnd]
if not len(data) == dataLength:
raise ParseError('INVALID_DATA_LENGTH Expected: %s Received %s' % (dataLength, len(data)))
return data
def _validate_password(self, password):
"""Validate GNTP Message against stored password"""
self.password = password
if password == None:
raise AuthError('Missing password')
keyHash = self.info.get('keyHash', None)
if keyHash is None and self.password is None:
return True
if keyHash is None:
raise AuthError('Invalid keyHash')
if self.password is None:
raise AuthError('Missing password')
password = self.password.encode('utf8')
saltHash = self._decode_hex(self.info['salt'])
keyBasis = password + saltHash
key = hashlib.md5(keyBasis).digest()
keyHash = hashlib.md5(key).hexdigest()
if not keyHash.upper() == self.info['keyHash'].upper():
raise AuthError('Invalid Hash')
return True
def validate(self):
"""Verify required headers"""
for header in self._requiredHeaders:
if not self.headers.get(header, False):
raise ParseError('Missing Notification Header: ' + header)
def _format_info(self):
"""Generate info line for GNTP Message
:return string:
"""
info = u'GNTP/%s %s' % (
self.info.get('version'),
self.info.get('messagetype'),
)
if self.info.get('encryptionAlgorithmID', None):
info += ' %s:%s' % (
self.info.get('encryptionAlgorithmID'),
self.info.get('ivValue'),
)
else:
info += ' NONE'
if self.info.get('keyHashAlgorithmID', None):
info += ' %s:%s.%s' % (
self.info.get('keyHashAlgorithmID'),
self.info.get('keyHash'),
self.info.get('salt')
)
return info
def _parse_dict(self, data):
"""Helper function to parse blocks of GNTP headers into a dictionary
:param string data:
:return dict:
"""
dict = {}
for line in data.split('\r\n'):
match = GNTP_HEADER.match(line)
if not match:
continue
key = unicode(match.group(1).strip(), 'utf8', 'replace')
val = unicode(match.group(2).strip(), 'utf8', 'replace')
dict[key] = val
return dict
def add_header(self, key, value):
if isinstance(value, unicode):
self.headers[key] = value
else:
self.headers[key] = unicode('%s' % value, 'utf8', 'replace')
def add_resource(self, data):
"""Add binary resource
:param string data: Binary Data
"""
identifier = hashlib.md5(data).hexdigest()
self.resources[identifier] = data
return 'x-growl-resource://%s' % identifier
def decode(self, data, password=None):
"""Decode GNTP Message
:param string data:
"""
self.password = password
self.raw = data
parts = self.raw.split('\r\n\r\n')
self.info = self._parse_info(data)
self.headers = self._parse_dict(parts[0])
def encode(self):
"""Encode a generic GNTP Message
:return string: GNTP Message ready to be sent
"""
buffer = _GNTPBuffer()
buffer.writefmt(self._format_info())
#Headers
for k, v in self.headers.iteritems():
buffer.writefmt('%s: %s', k, v)
buffer.writefmt()
#Resources
for resource, data in self.resources.iteritems():
buffer.writefmt('Identifier: %s', resource)
buffer.writefmt('Length: %d', len(data))
buffer.writefmt()
buffer.write(data)
buffer.writefmt()
buffer.writefmt()
return buffer.getvalue()
class GNTPRegister(_GNTPBase):
"""Represents a GNTP Registration Command
:param string data: (Optional) See decode()
:param string password: (Optional) Password to use while encoding/decoding messages
"""
_requiredHeaders = [
'Application-Name',
'Notifications-Count'
]
_requiredNotificationHeaders = ['Notification-Name']
def __init__(self, data=None, password=None):
_GNTPBase.__init__(self, 'REGISTER')
self.notifications = []
if data:
self.decode(data, password)
else:
self.set_password(password)
self.add_header('Application-Name', 'pygntp')
self.add_header('Notifications-Count', 0)
def validate(self):
'''Validate required headers and validate notification headers'''
for header in self._requiredHeaders:
if not self.headers.get(header, False):
raise ParseError('Missing Registration Header: ' + header)
for notice in self.notifications:
for header in self._requiredNotificationHeaders:
if not notice.get(header, False):
raise ParseError('Missing Notification Header: ' + header)
def decode(self, data, password):
"""Decode existing GNTP Registration message
:param string data: Message to decode
"""
self.raw = data
parts = self.raw.split('\r\n\r\n')
self.info = self._parse_info(data)
self._validate_password(password)
self.headers = self._parse_dict(parts[0])
for i, part in enumerate(parts):
if i == 0:
continue # Skip Header
if part.strip() == '':
continue
notice = self._parse_dict(part)
if notice.get('Notification-Name', False):
self.notifications.append(notice)
elif notice.get('Identifier', False):
notice['Data'] = self._decode_binary(part, notice)
#open('register.png','wblol').write(notice['Data'])
self.resources[notice.get('Identifier')] = notice
def add_notification(self, name, enabled=True):
"""Add new Notification to Registration message
:param string name: Notification Name
:param boolean enabled: Enable this notification by default
"""
notice = {}
notice['Notification-Name'] = u'%s' % name
notice['Notification-Enabled'] = u'%s' % enabled
self.notifications.append(notice)
self.add_header('Notifications-Count', len(self.notifications))
def encode(self):
"""Encode a GNTP Registration Message
:return string: Encoded GNTP Registration message
"""
buffer = _GNTPBuffer()
buffer.writefmt(self._format_info())
#Headers
for k, v in self.headers.iteritems():
buffer.writefmt('%s: %s', k, v)
buffer.writefmt()
#Notifications
if len(self.notifications) > 0:
for notice in self.notifications:
for k, v in notice.iteritems():
buffer.writefmt('%s: %s', k, v)
buffer.writefmt()
#Resources
for resource, data in self.resources.iteritems():
buffer.writefmt('Identifier: %s', resource)
buffer.writefmt('Length: %d', len(data))
buffer.writefmt()
buffer.write(data)
buffer.writefmt()
buffer.writefmt()
return buffer.getvalue()
class GNTPNotice(_GNTPBase):
"""Represents a GNTP Notification Command
:param string data: (Optional) See decode()
:param string app: (Optional) Set Application-Name
:param string name: (Optional) Set Notification-Name
:param string title: (Optional) Set Notification Title
:param string password: (Optional) Password to use while encoding/decoding messages
"""
_requiredHeaders = [
'Application-Name',
'Notification-Name',
'Notification-Title'
]
def __init__(self, data=None, app=None, name=None, title=None, password=None):
_GNTPBase.__init__(self, 'NOTIFY')
if data:
self.decode(data, password)
else:
self.set_password(password)
if app:
self.add_header('Application-Name', app)
if name:
self.add_header('Notification-Name', name)
if title:
self.add_header('Notification-Title', title)
def decode(self, data, password):
"""Decode existing GNTP Notification message
:param string data: Message to decode.
"""
self.raw = data
parts = self.raw.split('\r\n\r\n')
self.info = self._parse_info(data)
self._validate_password(password)
self.headers = self._parse_dict(parts[0])
for i, part in enumerate(parts):
if i == 0:
continue # Skip Header
if part.strip() == '':
continue
notice = self._parse_dict(part)
if notice.get('Identifier', False):
notice['Data'] = self._decode_binary(part, notice)
#open('notice.png','wblol').write(notice['Data'])
self.resources[notice.get('Identifier')] = notice
class GNTPSubscribe(_GNTPBase):
"""Represents a GNTP Subscribe Command
:param string data: (Optional) See decode()
:param string password: (Optional) Password to use while encoding/decoding messages
"""
_requiredHeaders = [
'Subscriber-ID',
'Subscriber-Name',
]
def __init__(self, data=None, password=None):
_GNTPBase.__init__(self, 'SUBSCRIBE')
if data:
self.decode(data, password)
else:
self.set_password(password)
class GNTPOK(_GNTPBase):
"""Represents a GNTP OK Response
:param string data: (Optional) See _GNTPResponse.decode()
:param string action: (Optional) Set type of action the OK Response is for
"""
_requiredHeaders = ['Response-Action']
def __init__(self, data=None, action=None):
_GNTPBase.__init__(self, '-OK')
if data:
self.decode(data)
if action:
self.add_header('Response-Action', action)
class GNTPError(_GNTPBase):
"""Represents a GNTP Error response
:param string data: (Optional) See _GNTPResponse.decode()
:param string errorcode: (Optional) Error code
:param string errordesc: (Optional) Error Description
"""
_requiredHeaders = ['Error-Code', 'Error-Description']
def __init__(self, data=None, errorcode=None, errordesc=None):
_GNTPBase.__init__(self, '-ERROR')
if data:
self.decode(data)
if errorcode:
self.add_header('Error-Code', errorcode)
self.add_header('Error-Description', errordesc)
def error(self):
return (self.headers.get('Error-Code', None),
self.headers.get('Error-Description', None))
def parse_gntp(data, password=None):
"""Attempt to parse a message as a GNTP message
:param string data: Message to be parsed
:param string password: Optional password to be used to verify the message
"""
match = GNTP_INFO_LINE_SHORT.match(data)
if not match:
raise ParseError('INVALID_GNTP_INFO')
info = match.groupdict()
if info['messagetype'] == 'REGISTER':
return GNTPRegister(data, password=password)
elif info['messagetype'] == 'NOTIFY':
return GNTPNotice(data, password=password)
elif info['messagetype'] == 'SUBSCRIBE':
return GNTPSubscribe(data, password=password)
elif info['messagetype'] == '-OK':
return GNTPOK(data)
elif info['messagetype'] == '-ERROR':
return GNTPError(data)
raise ParseError('INVALID_GNTP_MESSAGE')
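Taken together, the classes above let a caller build and encode GNTP messages and parse incoming ones. A short round-trip sketch, assuming this module is importable as `gntp` (the package name used by the client code further down); the application and notification names are placeholders:

import gntp

reg = gntp.GNTPRegister(password='secret')        # REGISTER with an MD5 key hash
reg.add_header('Application-Name', 'SABnzbd')
reg.add_notification('Download complete', enabled=True)
wire = reg.encode()                                # "GNTP/1.0 REGISTER NONE MD5:..." text

parsed = gntp.parse_gntp(wire, password='secret')  # dispatches on the message type
assert isinstance(parsed, gntp.GNTPRegister)

note = gntp.GNTPNotice(app='SABnzbd', name='Download complete',
                       title='Job finished', password='secret')
print(note.encode())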

View File

@@ -1,141 +0,0 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
import logging
import os
import sys
from optparse import OptionParser, OptionGroup
from gntp.notifier import GrowlNotifier
from gntp.shim import RawConfigParser
from gntp.version import __version__
DEFAULT_CONFIG = os.path.expanduser('~/.gntp')
config = RawConfigParser({
'hostname': 'localhost',
'password': None,
'port': 23053,
})
config.read([DEFAULT_CONFIG])
if not config.has_section('gntp'):
config.add_section('gntp')
class ClientParser(OptionParser):
def __init__(self):
OptionParser.__init__(self, version="%%prog %s" % __version__)
group = OptionGroup(self, "Network Options")
group.add_option("-H", "--host",
dest="host", default=config.get('gntp', 'hostname'),
help="Specify a hostname to which to send a remote notification. [%default]")
group.add_option("--port",
dest="port", default=config.getint('gntp', 'port'), type="int",
help="port to listen on [%default]")
group.add_option("-P", "--password",
dest='password', default=config.get('gntp', 'password'),
help="Network password")
self.add_option_group(group)
group = OptionGroup(self, "Notification Options")
group.add_option("-n", "--name",
dest="app", default='Python GNTP Test Client',
help="Set the name of the application [%default]")
group.add_option("-s", "--sticky",
dest='sticky', default=False, action="store_true",
help="Make the notification sticky [%default]")
group.add_option("--image",
dest="icon", default=None,
help="Icon for notification (URL or /path/to/file)")
group.add_option("-m", "--message",
dest="message", default=None,
help="Sets the message instead of using stdin")
group.add_option("-p", "--priority",
dest="priority", default=0, type="int",
help="-2 to 2 [%default]")
group.add_option("-d", "--identifier",
dest="identifier",
help="Identifier for coalescing")
group.add_option("-t", "--title",
dest="title", default=None,
help="Set the title of the notification [%default]")
group.add_option("-N", "--notification",
dest="name", default='Notification',
help="Set the notification name [%default]")
group.add_option("--callback",
dest="callback",
help="URL callback")
self.add_option_group(group)
# Extra Options
self.add_option('-v', '--verbose',
dest='verbose', default=0, action='count',
help="Verbosity levels")
def parse_args(self, args=None, values=None):
values, args = OptionParser.parse_args(self, args, values)
if values.message is None:
print('Enter a message followed by Ctrl-D')
try:
message = sys.stdin.read()
except KeyboardInterrupt:
exit()
else:
message = values.message
if values.title is None:
values.title = ' '.join(args)
# If we still have an empty title, use the
# first bit of the message as the title
if values.title == '':
values.title = message[:20]
values.verbose = logging.WARNING - values.verbose * 10
return values, message
def main():
(options, message) = ClientParser().parse_args()
logging.basicConfig(level=options.verbose)
if not os.path.exists(DEFAULT_CONFIG):
logging.info('No config file found at %s', DEFAULT_CONFIG)
growl = GrowlNotifier(
applicationName=options.app,
notifications=[options.name],
defaultNotifications=[options.name],
hostname=options.host,
password=options.password,
port=options.port,
)
result = growl.register()
if result is not True:
exit(result)
# This would likely be better placed within the growl notifier
# class but until I make _checkIcon smarter this is "easier"
if options.icon and growl._checkIcon(options.icon) is False:
logging.info('Loading image %s', options.icon)
f = open(options.icon, 'rb')
options.icon = f.read()
f.close()
result = growl.notify(
noteType=options.name,
title=options.title,
description=message,
icon=options.icon,
sticky=options.sticky,
priority=options.priority,
callback=options.callback,
identifier=options.identifier,
)
if result is not True:
exit(result)
if __name__ == "__main__":
main()
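The parser above makes this a one-shot notifier: the message comes from -m or stdin, the title from the positional arguments, and network settings from ~/.gntp unless overridden on the command line. A hypothetical invocation (the script name and host are placeholders; the flags are the ones defined above):

python gntp_cli.py -H growl.example.org -P secret \
    -n "SABnzbd" -N "Download complete" \
    -t "Job finished" -m "ubuntu.iso is ready" -p 1 --sticky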

View File

@@ -1,77 +0,0 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
"""
The gntp.config module is provided as an extended GrowlNotifier object that takes
advantage of the ConfigParser module to allow us to setup some default values
(such as hostname, password, and port) in a more global way to be shared among
programs using gntp
"""
import logging
import os
import gntp.notifier
import gntp.shim
__all__ = [
'mini',
'GrowlNotifier'
]
logger = logging.getLogger(__name__)
class GrowlNotifier(gntp.notifier.GrowlNotifier):
"""
ConfigParser enhanced GrowlNotifier object
For right now, we are only interested in letting users override certain
values from ~/.gntp
::
[gntp]
hostname = ?
password = ?
port = ?
"""
def __init__(self, *args, **kwargs):
config = gntp.shim.RawConfigParser({
'hostname': kwargs.get('hostname', 'localhost'),
'password': kwargs.get('password'),
'port': kwargs.get('port', 23053),
})
config.read([os.path.expanduser('~/.gntp')])
# If the file does not exist, then there will be no gntp section defined
# and the config.get() lines below will get confused. Since we are not
# saving the config, it should be safe to just add it here so the
# code below doesn't complain
if not config.has_section('gntp'):
logger.info('Error reading ~/.gntp config file')
config.add_section('gntp')
kwargs['password'] = config.get('gntp', 'password')
kwargs['hostname'] = config.get('gntp', 'hostname')
kwargs['port'] = config.getint('gntp', 'port')
super(GrowlNotifier, self).__init__(*args, **kwargs)
def mini(description, **kwargs):
"""Single notification function
Simple notification function in one line. Has only one required parameter
and attempts to use reasonable defaults for everything else
:param string description: Notification message
"""
kwargs['notifierFactory'] = GrowlNotifier
gntp.notifier.mini(description, **kwargs)
if __name__ == '__main__':
# If we're running this module directly we're likely running it as a test
# so extra debugging is useful
logging.basicConfig(level=logging.INFO)
mini('Testing mini notification')
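The docstring above says values in ~/.gntp override the defaults handed to GrowlNotifier, and mini() simply injects this config-aware class as the notifier factory. A sketch of the matching config file and call (hostname and password are placeholders):

# ~/.gntp
[gntp]
hostname = growl.example.org
password = secret
port = 23053

# and then, from any program:
import gntp.config
gntp.config.mini('Queue finished')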

View File

@@ -1,518 +0,0 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
import hashlib
import re
import time
import gntp.shim
import gntp.errors as errors
__all__ = [
'GNTPRegister',
'GNTPNotice',
'GNTPSubscribe',
'GNTPOK',
'GNTPError',
'parse_gntp',
]
#GNTP/<version> <messagetype> <encryptionAlgorithmID>[:<ivValue>][ <keyHashAlgorithmID>:<keyHash>.<salt>]
GNTP_INFO_LINE = re.compile(
'GNTP/(?P<version>\d+\.\d+) (?P<messagetype>REGISTER|NOTIFY|SUBSCRIBE|\-OK|\-ERROR)' +
' (?P<encryptionAlgorithmID>[A-Z0-9]+(:(?P<ivValue>[A-F0-9]+))?) ?' +
'((?P<keyHashAlgorithmID>[A-Z0-9]+):(?P<keyHash>[A-F0-9]+).(?P<salt>[A-F0-9]+))?\r\n',
re.IGNORECASE
)
GNTP_INFO_LINE_SHORT = re.compile(
'GNTP/(?P<version>\d+\.\d+) (?P<messagetype>REGISTER|NOTIFY|SUBSCRIBE|\-OK|\-ERROR)',
re.IGNORECASE
)
GNTP_HEADER = re.compile('([\w-]+):(.+)')
GNTP_EOL = gntp.shim.b('\r\n')
GNTP_SEP = gntp.shim.b(': ')
class _GNTPBuffer(gntp.shim.StringIO):
"""GNTP Buffer class"""
def writeln(self, value=None):
if value:
self.write(gntp.shim.b(value))
self.write(GNTP_EOL)
def writeheader(self, key, value):
if not isinstance(value, str):
value = str(value)
self.write(gntp.shim.b(key))
self.write(GNTP_SEP)
self.write(gntp.shim.b(value))
self.write(GNTP_EOL)
class _GNTPBase(object):
"""Base initilization
:param string messagetype: GNTP Message type
:param string version: GNTP Protocol version
:param string encription: Encryption protocol
"""
def __init__(self, messagetype=None, version='1.0', encryption=None):
self.info = {
'version': version,
'messagetype': messagetype,
'encryptionAlgorithmID': encryption
}
self.hash_algo = {
'MD5': hashlib.md5,
'SHA1': hashlib.sha1,
'SHA256': hashlib.sha256,
'SHA512': hashlib.sha512,
}
self.headers = {}
self.resources = {}
# For Python2 we can just return the bytes as is without worry
# but on Python3 we want to make sure we return the packet as
# a unicode string so that things like logging won't get confused
if gntp.shim.PY2:
def __str__(self):
return self.encode()
else:
def __str__(self):
return gntp.shim.u(self.encode())
def _parse_info(self, data):
"""Parse the first line of a GNTP message to get security and other info values
:param string data: GNTP Message
:return dict: Parsed GNTP Info line
"""
match = GNTP_INFO_LINE.match(data)
if not match:
raise errors.ParseError('ERROR_PARSING_INFO_LINE')
info = match.groupdict()
if info['encryptionAlgorithmID'] == 'NONE':
info['encryptionAlgorithmID'] = None
return info
def set_password(self, password, encryptAlgo='MD5'):
"""Set a password for a GNTP Message
:param string password: Null to clear password
:param string encryptAlgo: Supports MD5, SHA1, SHA256, SHA512
"""
if not password:
self.info['encryptionAlgorithmID'] = None
self.info['keyHashAlgorithm'] = None
return
self.password = gntp.shim.b(password)
self.encryptAlgo = encryptAlgo.upper()
if not self.encryptAlgo in self.hash_algo:
raise errors.UnsupportedError('INVALID HASH "%s"' % self.encryptAlgo)
hashfunction = self.hash_algo.get(self.encryptAlgo)
password = password.encode('utf8')
seed = time.ctime().encode('utf8')
salt = hashfunction(seed).hexdigest()
saltHash = hashfunction(seed).digest()
keyBasis = password + saltHash
key = hashfunction(keyBasis).digest()
keyHash = hashfunction(key).hexdigest()
self.info['keyHashAlgorithmID'] = self.encryptAlgo
self.info['keyHash'] = keyHash.upper()
self.info['salt'] = salt.upper()
def _decode_hex(self, value):
"""Helper function to decode hex string to `proper` hex string
:param string value: Human readable hex string
:return string: Hex string
"""
result = ''
for i in range(0, len(value), 2):
tmp = int(value[i:i + 2], 16)
result += chr(tmp)
return result
def _decode_binary(self, rawIdentifier, identifier):
rawIdentifier += '\r\n\r\n'
dataLength = int(identifier['Length'])
pointerStart = self.raw.find(rawIdentifier) + len(rawIdentifier)
pointerEnd = pointerStart + dataLength
data = self.raw[pointerStart:pointerEnd]
if not len(data) == dataLength:
raise errors.ParseError('INVALID_DATA_LENGTH Expected: %s Received %s' % (dataLength, len(data)))
return data
def _validate_password(self, password):
"""Validate GNTP Message against stored password"""
self.password = password
if password is None:
raise errors.AuthError('Missing password')
keyHash = self.info.get('keyHash', None)
if keyHash is None and self.password is None:
return True
if keyHash is None:
raise errors.AuthError('Invalid keyHash')
if self.password is None:
raise errors.AuthError('Missing password')
keyHashAlgorithmID = self.info.get('keyHashAlgorithmID','MD5')
password = self.password.encode('utf8')
saltHash = self._decode_hex(self.info['salt'])
keyBasis = password + saltHash
self.key = self.hash_algo[keyHashAlgorithmID](keyBasis).digest()
keyHash = self.hash_algo[keyHashAlgorithmID](self.key).hexdigest()
if not keyHash.upper() == self.info['keyHash'].upper():
raise errors.AuthError('Invalid Hash')
return True
def validate(self):
"""Verify required headers"""
for header in self._requiredHeaders:
if not self.headers.get(header, False):
raise errors.ParseError('Missing Notification Header: ' + header)
def _format_info(self):
"""Generate info line for GNTP Message
:return string:
"""
info = 'GNTP/%s %s' % (
self.info.get('version'),
self.info.get('messagetype'),
)
if self.info.get('encryptionAlgorithmID', None):
info += ' %s:%s' % (
self.info.get('encryptionAlgorithmID'),
self.info.get('ivValue'),
)
else:
info += ' NONE'
if self.info.get('keyHashAlgorithmID', None):
info += ' %s:%s.%s' % (
self.info.get('keyHashAlgorithmID'),
self.info.get('keyHash'),
self.info.get('salt')
)
return info
def _parse_dict(self, data):
"""Helper function to parse blocks of GNTP headers into a dictionary
:param string data:
:return dict: Dictionary of parsed GNTP Headers
"""
d = {}
for line in data.split('\r\n'):
match = GNTP_HEADER.match(line)
if not match:
continue
key = match.group(1).strip()
val = match.group(2).strip()
d[key] = val
return d
def add_header(self, key, value):
self.headers[key] = value
def add_resource(self, data):
"""Add binary resource
:param string data: Binary Data
"""
data = gntp.shim.b(data)
identifier = hashlib.md5(data).hexdigest()
self.resources[identifier] = data
return 'x-growl-resource://%s' % identifier
def decode(self, data, password=None):
"""Decode GNTP Message
:param string data:
"""
self.password = password
self.raw = gntp.shim.u(data)
parts = self.raw.split('\r\n\r\n')
self.info = self._parse_info(self.raw)
self.headers = self._parse_dict(parts[0])
def encode(self):
"""Encode a generic GNTP Message
:return string: GNTP Message ready to be sent. Returned as a byte string
"""
buff = _GNTPBuffer()
buff.writeln(self._format_info())
#Headers
for k, v in self.headers.items():
buff.writeheader(k, v)
buff.writeln()
#Resources
for resource, data in self.resources.items():
buff.writeheader('Identifier', resource)
buff.writeheader('Length', len(data))
buff.writeln()
buff.write(data)
buff.writeln()
buff.writeln()
return buff.getvalue()
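# --- Worked sketch (illustrative, not part of the file above) ---
# set_password()/_validate_password() implement the GNTP key-hash scheme:
# key = H(password + salt), keyHash = H(key); keyHash and salt travel
# hex-encoded on the info line. MD5 shown; the other entries in
# hash_algo work the same way. Values below are made up for illustration.
import binascii
import hashlib
import os

password = b'secret'                          # illustrative password
salt = os.urandom(16)                         # set_password() derives its salt from time.ctime()
key = hashlib.md5(password + salt).digest()
key_hash = hashlib.md5(key).hexdigest().upper()
salt_hex = binascii.hexlify(salt).decode('ascii').upper()
info_suffix = 'MD5:%s.%s' % (key_hash, salt_hex)   # appended to the GNTP info line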
class GNTPRegister(_GNTPBase):
"""Represents a GNTP Registration Command
:param string data: (Optional) See decode()
:param string password: (Optional) Password to use while encoding/decoding messages
"""
_requiredHeaders = [
'Application-Name',
'Notifications-Count'
]
_requiredNotificationHeaders = ['Notification-Name']
def __init__(self, data=None, password=None):
_GNTPBase.__init__(self, 'REGISTER')
self.notifications = []
if data:
self.decode(data, password)
else:
self.set_password(password)
self.add_header('Application-Name', 'pygntp')
self.add_header('Notifications-Count', 0)
def validate(self):
'''Validate required headers and validate notification headers'''
for header in self._requiredHeaders:
if not self.headers.get(header, False):
raise errors.ParseError('Missing Registration Header: ' + header)
for notice in self.notifications:
for header in self._requiredNotificationHeaders:
if not notice.get(header, False):
raise errors.ParseError('Missing Notification Header: ' + header)
def decode(self, data, password):
"""Decode existing GNTP Registration message
:param string data: Message to decode
"""
self.raw = gntp.shim.u(data)
parts = self.raw.split('\r\n\r\n')
self.info = self._parse_info(self.raw)
self._validate_password(password)
self.headers = self._parse_dict(parts[0])
for i, part in enumerate(parts):
if i == 0:
continue # Skip Header
if part.strip() == '':
continue
notice = self._parse_dict(part)
if notice.get('Notification-Name', False):
self.notifications.append(notice)
elif notice.get('Identifier', False):
notice['Data'] = self._decode_binary(part, notice)
#open('register.png','wblol').write(notice['Data'])
self.resources[notice.get('Identifier')] = notice
def add_notification(self, name, enabled=True):
"""Add new Notification to Registration message
:param string name: Notification Name
:param boolean enabled: Enable this notification by default
"""
notice = {}
notice['Notification-Name'] = name
notice['Notification-Enabled'] = enabled
self.notifications.append(notice)
self.add_header('Notifications-Count', len(self.notifications))
def encode(self):
"""Encode a GNTP Registration Message
:return string: Encoded GNTP Registration message. Returned as a byte string
"""
buff = _GNTPBuffer()
buff.writeln(self._format_info())
#Headers
for k, v in self.headers.items():
buff.writeheader(k, v)
buff.writeln()
#Notifications
if len(self.notifications) > 0:
for notice in self.notifications:
for k, v in notice.items():
buff.writeheader(k, v)
buff.writeln()
#Resources
for resource, data in self.resources.items():
buff.writeheader('Identifier', resource)
buff.writeheader('Length', len(data))
buff.writeln()
buff.write(data)
buff.writeln()
buff.writeln()
return buff.getvalue()
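# --- Usage sketch (illustrative, not part of the file above) ---
# Building a minimal registration packet with the class above;
# 'MyApp' and 'Download Complete' are example values.
reg = GNTPRegister()
reg.add_header('Application-Name', 'MyApp')
reg.add_notification('Download Complete', enabled=True)
wire_bytes = reg.encode()   # byte string ready to send to a GNTP server (TCP 23053)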
class GNTPNotice(_GNTPBase):
"""Represents a GNTP Notification Command
:param string data: (Optional) See decode()
:param string app: (Optional) Set Application-Name
:param string name: (Optional) Set Notification-Name
:param string title: (Optional) Set Notification Title
:param string password: (Optional) Password to use while encoding/decoding messages
"""
_requiredHeaders = [
'Application-Name',
'Notification-Name',
'Notification-Title'
]
def __init__(self, data=None, app=None, name=None, title=None, password=None):
_GNTPBase.__init__(self, 'NOTIFY')
if data:
self.decode(data, password)
else:
self.set_password(password)
if app:
self.add_header('Application-Name', app)
if name:
self.add_header('Notification-Name', name)
if title:
self.add_header('Notification-Title', title)
def decode(self, data, password):
"""Decode existing GNTP Notification message
:param string data: Message to decode.
"""
self.raw = gntp.shim.u(data)
parts = self.raw.split('\r\n\r\n')
self.info = self._parse_info(self.raw)
self._validate_password(password)
self.headers = self._parse_dict(parts[0])
for i, part in enumerate(parts):
if i == 0:
continue # Skip Header
if part.strip() == '':
continue
notice = self._parse_dict(part)
if notice.get('Identifier', False):
notice['Data'] = self._decode_binary(part, notice)
#open('notice.png','wblol').write(notice['Data'])
self.resources[notice.get('Identifier')] = notice
class GNTPSubscribe(_GNTPBase):
"""Represents a GNTP Subscribe Command
:param string data: (Optional) See decode()
:param string password: (Optional) Password to use while encoding/decoding messages
"""
_requiredHeaders = [
'Subscriber-ID',
'Subscriber-Name',
]
def __init__(self, data=None, password=None):
_GNTPBase.__init__(self, 'SUBSCRIBE')
if data:
self.decode(data, password)
else:
self.set_password(password)
class GNTPOK(_GNTPBase):
"""Represents a GNTP OK Response
:param string data: (Optional) See _GNTPResponse.decode()
:param string action: (Optional) Set type of action the OK Response is for
"""
_requiredHeaders = ['Response-Action']
def __init__(self, data=None, action=None):
_GNTPBase.__init__(self, '-OK')
if data:
self.decode(data)
if action:
self.add_header('Response-Action', action)
class GNTPError(_GNTPBase):
"""Represents a GNTP Error response
:param string data: (Optional) See _GNTPResponse.decode()
:param string errorcode: (Optional) Error code
:param string errordesc: (Optional) Error Description
"""
_requiredHeaders = ['Error-Code', 'Error-Description']
def __init__(self, data=None, errorcode=None, errordesc=None):
_GNTPBase.__init__(self, '-ERROR')
if data:
self.decode(data)
if errorcode:
self.add_header('Error-Code', errorcode)
self.add_header('Error-Description', errordesc)
def error(self):
return (self.headers.get('Error-Code', None),
self.headers.get('Error-Description', None))
def parse_gntp(data, password=None):
"""Attempt to parse a message as a GNTP message
:param string data: Message to be parsed
:param string password: Optional password to be used to verify the message
"""
data = gntp.shim.u(data)
match = GNTP_INFO_LINE_SHORT.match(data)
if not match:
raise errors.ParseError('INVALID_GNTP_INFO')
info = match.groupdict()
if info['messagetype'] == 'REGISTER':
return GNTPRegister(data, password=password)
elif info['messagetype'] == 'NOTIFY':
return GNTPNotice(data, password=password)
elif info['messagetype'] == 'SUBSCRIBE':
return GNTPSubscribe(data, password=password)
elif info['messagetype'] == '-OK':
return GNTPOK(data)
elif info['messagetype'] == '-ERROR':
return GNTPError(data)
raise errors.ParseError('INVALID_GNTP_MESSAGE')
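# --- Usage sketch (illustrative, not part of the file above) ---
# parse_gntp() dispatches on the message type found in the info line;
# an encoded -OK response comes back as a GNTPOK instance.
ok = GNTPOK(action='REGISTER')
parsed = parse_gntp(ok.encode())
assert isinstance(parsed, GNTPOK)
assert parsed.headers['Response-Action'] == 'REGISTER'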


@@ -1,25 +0,0 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
class BaseError(Exception):
pass
class ParseError(BaseError):
errorcode = 500
errordesc = 'Error parsing the message'
class AuthError(BaseError):
errorcode = 400
errordesc = 'Error with authorization'
class UnsupportedError(BaseError):
errorcode = 500
errordesc = 'Currently unsupported by gntp.py'
class NetworkError(BaseError):
errorcode = 500
errordesc = "Error connecting to growl server"


@@ -1,6 +1,3 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
"""
The gntp.notifier module is provided as a simple way to send notifications
using GNTP
@@ -12,15 +9,10 @@ using GNTP
`Original Python bindings <http://code.google.com/p/growl/source/browse/Bindings/python/Growl.py>`_
"""
import gntp
import socket
import logging
import platform
import socket
import sys
from gntp.version import __version__
import gntp.core
import gntp.errors as errors
import gntp.shim
__all__ = [
'mini',
@@ -30,6 +22,45 @@ __all__ = [
logger = logging.getLogger(__name__)
def mini(description, applicationName='PythonMini', noteType="Message",
title="Mini Message", applicationIcon=None, hostname='localhost',
password=None, port=23053, sticky=False, priority=None,
callback=None, notificationIcon=None, identifier=None):
"""Single notification function
Simple notification function in one line. Has only one required parameter
and attempts to use reasonable defaults for everything else
:param string description: Notification message
.. warning::
For now, only URL callbacks are supported. In the future, the
callback argument will also support a function
"""
growl = GrowlNotifier(
applicationName=applicationName,
notifications=[noteType],
defaultNotifications=[noteType],
applicationIcon=applicationIcon,
hostname=hostname,
password=password,
port=port,
)
result = growl.register()
if result is not True:
return result
return growl.notify(
noteType=noteType,
title=title,
description=description,
icon=notificationIcon,
sticky=sticky,
priority=priority,
callback=callback,
identifier=identifier,
)
class GrowlNotifier(object):
"""Helper class to simplfy sending Growl messages
@@ -68,9 +99,8 @@ class GrowlNotifier(object):
If it's a simple URL icon, then we return True. If it's a data icon
then we return False
'''
logger.info('Checking icon')
return gntp.shim.u(data)[:4] in ['http', 'file']
logger.debug('Checking icon')
return data.startswith('http')
def register(self):
"""Send GNTP Registration
@@ -79,8 +109,8 @@ class GrowlNotifier(object):
Before sending notifications to Growl, you need to have
sent a registration message at least once
"""
logger.info('Sending registration to %s:%s', self.hostname, self.port)
register = gntp.core.GNTPRegister()
logger.debug('Sending registration to %s:%s', self.hostname, self.port)
register = gntp.GNTPRegister()
register.add_header('Application-Name', self.applicationName)
for notification in self.notifications:
enabled = notification in self.defaultNotifications
@@ -89,8 +119,8 @@ class GrowlNotifier(object):
if self._checkIcon(self.applicationIcon):
register.add_header('Application-Icon', self.applicationIcon)
else:
resource = register.add_resource(self.applicationIcon)
register.add_header('Application-Icon', resource)
id = register.add_resource(self.applicationIcon)
register.add_header('Application-Icon', id)
if self.password:
register.set_password(self.password, self.passwordHash)
self.add_origin_info(register)
@@ -98,7 +128,7 @@ class GrowlNotifier(object):
return self._send('register', register)
def notify(self, noteType, title, description, icon=None, sticky=False,
priority=None, callback=None, identifier=None, custom={}):
priority=None, callback=None, identifier=None):
"""Send a GNTP notifications
.. warning::
@@ -111,16 +141,14 @@ class GrowlNotifier(object):
:param boolean sticky: Sticky notification
:param integer priority: Message priority level from -2 to 2
:param string callback: URL callback
:param dict custom: Custom attributes. Key names should be prefixed with X-
according to the spec but this is not enforced by this class
.. warning::
For now, only URL callbacks are supported. In the future, the
callback argument will also support a function
"""
logger.info('Sending notification [%s] to %s:%s', noteType, self.hostname, self.port)
logger.debug('Sending notification [%s] to %s:%s', noteType, self.hostname, self.port)
assert noteType in self.notifications
notice = gntp.core.GNTPNotice()
notice = gntp.GNTPNotice()
notice.add_header('Application-Name', self.applicationName)
notice.add_header('Notification-Name', noteType)
notice.add_header('Notification-Title', title)
@@ -134,8 +162,8 @@ class GrowlNotifier(object):
if self._checkIcon(icon):
notice.add_header('Notification-Icon', icon)
else:
resource = notice.add_resource(icon)
notice.add_header('Notification-Icon', resource)
id = notice.add_resource(icon)
notice.add_header('Notification-Icon', id)
if description:
notice.add_header('Notification-Text', description)
@@ -144,9 +172,6 @@ class GrowlNotifier(object):
if identifier:
notice.add_header('Notification-Coalescing-ID', identifier)
for key in custom:
notice.add_header(key, custom[key])
self.add_origin_info(notice)
self.notify_hook(notice)
@@ -154,7 +179,7 @@ class GrowlNotifier(object):
def subscribe(self, id, name, port):
"""Send a Subscribe request to a remote machine"""
sub = gntp.core.GNTPSubscribe()
sub = gntp.GNTPSubscribe()
sub.add_header('Subscriber-ID', id)
sub.add_header('Subscriber-Name', name)
sub.add_header('Subscriber-Port', port)
@@ -170,7 +195,7 @@ class GrowlNotifier(object):
"""Add optional Origin headers to message"""
packet.add_header('Origin-Machine-Name', platform.node())
packet.add_header('Origin-Software-Name', 'gntp.py')
packet.add_header('Origin-Software-Version', __version__)
packet.add_header('Origin-Software-Version', gntp.__version__)
packet.add_header('Origin-Platform-Name', platform.system())
packet.add_header('Origin-Platform-Version', platform.platform())
@@ -189,78 +214,34 @@ class GrowlNotifier(object):
packet.validate()
data = packet.encode()
logger.debug('To : %s:%s <%s>\n%s', self.hostname, self.port, packet.__class__, data)
#logger.debug('To : %s:%s <%s>\n%s', self.hostname, self.port, packet.__class__, data)
#Less verbose
logger.debug('To : %s:%s <%s>', self.hostname, self.port, packet.__class__)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(self.socketTimeout)
try:
s.connect((self.hostname, self.port))
s.send(data)
recv_data = s.recv(1024)
while not recv_data.endswith(gntp.shim.b("\r\n\r\n")):
recv_data += s.recv(1024)
except socket.error:
# Python2.5 and Python3 compatible exception
exc = sys.exc_info()[1]
raise errors.NetworkError(exc)
response = gntp.core.parse_gntp(recv_data)
s.connect((self.hostname, self.port))
s.send(data)
recv_data = s.recv(1024)
while not recv_data.endswith("\r\n\r\n"):
recv_data += s.recv(1024)
response = gntp.parse_gntp(recv_data)
s.close()
logger.debug('From : %s:%s <%s>\n%s', self.hostname, self.port, response.__class__, response)
#logger.debug('From : %s:%s <%s>\n%s', self.hostname, self.port, response.__class__, response)
#Less verbose
logger.debug('From : %s:%s <%s>', self.hostname, self.port, response.__class__)
if type(response) == gntp.core.GNTPOK:
if type(response) == gntp.GNTPOK:
return True
if response.error()[0] == '404' and 'disabled' in response.error()[1]:
# Ignore message saying that user has disabled this class
return True
logger.error('Invalid response: %s', response.error())
return response.error()
def mini(description, applicationName='PythonMini', noteType="Message",
title="Mini Message", applicationIcon=None, hostname='localhost',
password=None, port=23053, sticky=False, priority=None,
callback=None, notificationIcon=None, identifier=None,
notifierFactory=GrowlNotifier):
"""Single notification function
Simple notification function in one line. Has only one required parameter
and attempts to use reasonable defaults for everything else
:param string description: Notification message
.. warning::
For now, only URL callbacks are supported. In the future, the
callback argument will also support a function
"""
try:
growl = notifierFactory(
applicationName=applicationName,
notifications=[noteType],
defaultNotifications=[noteType],
applicationIcon=applicationIcon,
hostname=hostname,
password=password,
port=port,
)
result = growl.register()
if result is not True:
return result
return growl.notify(
noteType=noteType,
title=title,
description=description,
icon=notificationIcon,
sticky=sticky,
priority=priority,
callback=callback,
identifier=identifier,
)
except Exception:
# We want the "mini" function to be simple and swallow Exceptions
# in order to be less invasive
logger.exception("Growl error")
if __name__ == '__main__':
# If we're running this module directly we're likely running it as a test
# so extra debugging is useful
logging.basicConfig(level=logging.INFO)
logging.basicConfig(level=logging.DEBUG)
mini('Testing mini notification')


@@ -1,46 +0,0 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
"""
Python2.5 and Python3.3 compatibility shim
Heavily inspired by the "six" library.
https://pypi.python.org/pypi/six
"""
import sys
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
def b(s):
if isinstance(s, bytes):
return s
return s.encode('utf8', 'replace')
def u(s):
if isinstance(s, bytes):
return s.decode('utf8', 'replace')
return s
from io import BytesIO as StringIO
from configparser import RawConfigParser
else:
def b(s):
if isinstance(s, unicode):
return s.encode('utf8', 'replace')
return s
def u(s):
if isinstance(s, unicode):
return s
if isinstance(s, int):
s = str(s)
return unicode(s, "utf8", "replace")
from StringIO import StringIO
from ConfigParser import RawConfigParser
b.__doc__ = "Ensure we have a byte string"
u.__doc__ = "Ensure we have a unicode string"
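# --- Usage sketch (illustrative, not part of the file above) ---
# b() always yields a byte string (for the socket), u() always yields a
# text string (for parsing and logging), on both Python 2 and Python 3.
wire = b('NOTIFY')       # byte string
text = u(b'GNTP/1.0')    # text string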


@@ -1,4 +0,0 @@
# Copyright: 2013 Paul Traylor
# These sources are released under the terms of the MIT license: see LICENSE
__version__ = '1.0.3'

Binary image changed: 25 KiB -> 27 KiB
Binary image changed: 24 KiB -> 38 KiB
BIN icons/sabnzbd16.ico (new file): 1.1 KiB
Binary image removed: 5.3 KiB
Binary image removed: 5.3 KiB
Binary image removed: 5.3 KiB
BIN icons/sabnzbd16green.ico (new file): 1.1 KiB
BIN icons/sabnzbd16paused.ico (new file): 1.1 KiB


@@ -0,0 +1,22 @@
#
# Copyright 2008-2014 The SABnzbd-Team <team@sabnzbd.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
#
# This is the "Classic" web interface for SABnzbd
# Simple, but compatible with all popular browsers.
# We recommend use of the more advanced versions.
#


@@ -0,0 +1,66 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath=".."#-->
<!--#set global $helpsubject="Configure-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu=""#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('configuration')</h2>
<p>
<b>$T('confgFile'):</b> $configfn
</p>
$T('explain-Restart')<br/>
<form action="saveGeneral" method="post">
<input type="submit" onclick="this.form.action='restart?session=$session'; this.form.submit(); return false;" value="$T('button-restart')"/>
</form>
<br/><br/>
<hr/>
<!--#if $folders#-->
$T('explain-orphans')<br/>
<br/>
<table id="catTable">
<tr>
<th></th>
<th></th>
<th>$T('name')</th>
</tr>
<!--#set $odd = False#-->
<!--#for $folder in $folders#-->
<!--#set $odd = not $odd#-->
<tr class="<!--#if $odd then "odd" else "even"#-->">
<td>
<form action="delete" method="get">
<input type="hidden" value="$folder" name="name">
<input type="hidden" value="$session" name="session">
<input type="submit" value="$T('button-delCat')">
</form>
</td>
<td>
<form action="add" method="get">
<input type="hidden" value="$folder" name="name">
<input type="hidden" value="$session" name="session">
<input type="submit" value="$T('button-add')">
</form>
</td>
<td>
$folder
</td>
<!--#end for#-->
</table>
<hr/><br/>
<!--#end if#-->
$T('explain-Repair')<br/>
<form action="saveGeneral" method="post">
<input type="submit" onclick="this.form.action='repair?session=$session'; this.form.submit(); return false;" value="$T('button-repair')"/>
</form>
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,104 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="configure-categories-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="categories"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('configCat')</h2>
$T('explain-configCat')<br/>
$T('explain-catTags')<br/>
$T('explain-catTags2')<br/>
$T('explain-relFolder') $defdir<br/>
<br/>
<table id="catTable">
<tr>
<th></th>
<th>$T('category')</th>
<th>&nbsp;$T('mode')&nbsp;</th>
<th>&nbsp;$T('priority')&nbsp;</th>
<!--#if $script_list#--><th>$T('script')</th><!--#end if#-->
<th>$T('catFolderPath')</th>
<th>$T('catTags')</th>
<th></th>
</tr>
<!--#set $odd = False#-->
<!--#for $slot in $slotinfo#-->
<!--#set $odd = not $odd#-->
<tr class="<!--#if $odd then "odd" else "even"#-->">
<td><!--#if $slot.name and $slot.name != '*'#-->
<form action="delete" method="get">
<input type="hidden" value="$slot.name" name="name">
<input type="hidden" value="$session" name="session">
<input type="submit" value="$T('button-delCat')">
</form>
<!--#end if#-->
</td>
<td>
<form action="save" method="get">
<input type="hidden" value="$slot.name" name="name">
<input type="hidden" value="$session" name="session">
<!--#if $slot.name != '*'#-->
<input type="text" name="newname" value="$slot.name">
<!--#else#-->
$T('default')
<!--#end if#-->
</td>
<td>
<select name="pp">
<optgroup label="$T('pp')">
<!--#if $slot.name != '*'#-->
<option value="" <!--#if $slot.pp == "" then "selected" else ""#-->>$T('default')</option>
<!--#end if#-->
<option value="0" <!--#if $slot.pp == "0" then "selected" else ""#-->>$T('pp-none')</option>
<option value="1" <!--#if $slot.pp == "1" then "selected" else ""#-->>$T('pp-repair')</option>
<option value="2" <!--#if $slot.pp == "2" then "selected" else ""#-->>$T('pp-unpack')</option>
<option value="3" <!--#if $slot.pp == "3" then "selected" else ""#-->>$T('pp-delete')</option>
</optgroup>
</select>
</td>
<td>
<select name="priority">
<optgroup label="$T('priority')">
<!--#if $slot.name != '*'#-->
<option value="-100" <!--#if $slot.priority == -100 then 'selected' else ''#-->>$T('default')</option>
<!--#end if#-->
<option value="2" <!--#if $slot.priority == 2 then 'selected' else ''#-->>$T('pr-force')</option>
<option value="1" <!--#if $slot.priority == 1 then 'selected' else ''#-->>$T('pr-high')</option>
<option value="0" <!--#if $slot.priority == 0 then 'selected' else ''#-->>$T('pr-normal')</option>
<option value="-1" <!--#if $slot.priority == -1 then 'selected' else ''#-->>$T('pr-low')</option>
<option value="-2" <!--#if $slot.priority == -2 then 'selected' else ''#-->>$T('pr-paused')</option>
</optgroup>
</select>
</td>
<!--#if $script_list#-->
<td>
<select name="script">
<optgroup label="$T('script')">
<!--#for $sc in $script_list#-->
<!--#if not ($sc == 'Default' and $slot.name == '*')#-->
<option value="$sc" <!--#if $slot.script.lower() == $sc.lower() then "selected" else ""#-->>$Tspec($sc)</option>
<!--#end if#-->
<!--#end for#-->
</optgroup>
</select>
</td>
<!--#end if#-->
<td><input type="text" size=30 name="dir" value="$slot.dir"></td>
<td><input type="text" size=30 name="newzbin" value="$slot.newzbin"></td>
<td><input type="submit" value="$T('button-save')"></td>
</form>
<!--#end for#-->
</table>
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,90 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+Folders-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="directories"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('folderConfig')</h2>
<p><strong>
$T('explain-folderConfig')<br />
</strong></p>
<form action="saveDirectories" method="post">
<div class="EntryBlock">
<fieldset class="EntryFieldSet">
<legend>$T('userFolders')</legend>
<emp>$T('in') "$my_home"</emp><br><br>
<strong>$T('opt-download_dir'):</strong><br>
$T('explain-download_dir')<br/>
<input type="text" size="40" name="download_dir" value="$download_dir">
<br>
<br>
<strong>$T('opt-download_free'):</strong><br>
$T('explain-download_free')<br>
<input type="text" size="10" name="download_free" value="$download_free">
<br>
<br>
<strong>$T('opt-complete_dir'):</strong><br>
$T('explain-complete_dir')<br>
<input type="text" size="40" id="complete_dir" name="complete_dir" value="$complete_dir">
<!--#if not $nt#-->
<br>
<br>
<strong>$T('opt-permissions'):</strong><br>
$T('explain-permissions')<br>
<input type="text" size="10" name="permissions" value="$permissions">
<!--#end if#-->
<br>
<br>
<strong>$T('opt-dirscan_dir'):</strong><br>
$T('explain-dirscan_dir')<br>
<input type="text" size="40" name="dirscan_dir" value="$dirscan_dir">
<br>
<br>
<strong>$T('opt-dirscan_speed'):</strong><br>
$T('explain-dirscan_speed')<br>
<input type="text" size="10" name="dirscan_speed" value="$dirscan_speed">
<br>
<br>
<strong>$T('opt-script_dir'):</strong><br>
$T('explain-script_dir')<br>
<input type="text" size="40" name="script_dir" value="$script_dir">
<br>
<br>
<strong>$T('opt-email_dir'):</strong><br>
$T('explain-email_dir')<br>
<input type="text" size="40" name="email_dir" value="$email_dir">
<br>
<br>
<strong>$T('opt-password_file'):</strong><br>
$T('explain-password_file')<br>
<input type="text" size="40" name="password_file" value="$password_file">
</fieldset>
<fieldset class="EntryFieldSet">
<legend>$T('systemFolders')</legend>
<emp>$T('in') "$my_lcldata"</emp><br><br>
<strong>$T('opt-admin_dir'):</strong><br>
$T('explain-admin_dir1')<br/>$T('explain-admin_dir2')<br/>
<input type="text" size="40" name="admin_dir" value="$admin_dir">
<br>
<br>
<strong>$T('opt-log_dir'):</strong><br>
$T('explain-log_dir')<br/>
<input type="text" size="40" name="log_dir" value="$log_dir">
<br>
<br>
<strong>$T('opt-nzb_backup_dir'):</strong><br>
$T('explain-nzb_backup_dir')<br>
<input type="text" size="40" name="nzb_backup_dir" value="$nzb_backup_dir">
<input type="hidden" name="session" value="$session">
</fieldset>
</div><br>
<input type="submit" size="40" value="$T('button-saveChanges')">
<!--#if $restart_req#-->
<input type="submit" onclick="this.form.action='../restart'; this.form.submit(); return false;" value="$T('button-restart')"/>
<!--#end if#-->
</form>
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,145 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+General-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="general"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('generalConfig')</h2>
<form action="saveGeneral" method="post" autocomplete="off">
<div class="EntryBlock">
<fieldset class="EntryFieldSet">
<legend>$T('webServer')</legend>
<i>$T('restartRequired')</i><br/><br/>
<strong>$T('opt-host'):</strong><br>
$T('explain-host')<br>
<input type="text" name="host" value="$host">
<br>
<br>
<strong>$T('opt-port'):</strong><br>
$T('explain-port')<br>
<input type="text" name="port" value="$port">
<br>
<br>
<strong>$T('opt-web_dir'):</strong><br>
$T('explain-web_dir')<br>
<select name="web_dir">
<!--#for $webline in $web_list#-->
<!--#if $webline.lower() == $web_dir.lower()#-->
<option value="$webline" selected>$webline</option>
<!--#else#-->
<option value="$webline">$webline</option>
<!--#end if#-->
<!--#end for#-->
</select>
<br/><br/>
<strong>$T('opt-web_dir2'):</strong><br>
$T('explain-web_dir2')<br>
<select name="web_dir2">
<!--#for $webline in $web_list2#-->
<!--#if $webline.lower() == $web_dir2.lower()#-->
<option value="$webline" selected>$webline</option>
<!--#else#-->
<option value="$webline">$webline</option>
<!--#end if#-->
<!--#end for#-->
</select>
<br /><br /><strong>$T('opt-apikey'):</strong><br />
$T('explain-apikey')<br />
<input type="text" style="width:250px;border:none;" onclick="this.select()" id="apikey" value="$session">
<a href="generateAPIKey?session=$session">$T('button-apikey')</a>
<br /><br /><strong>$T('opt-nzbkey'):</strong><br />
$T('explain-nzbkey')<br />
<input type="text" style="width:250px;border:none;" onclick="this.select()" id="nzbkey" value="$nzb_key">
<a href="generateNzbKey?session=$session">$T('button-apikey')</a>
<br /><br />
<label><input type="checkbox" name="disable_api_key" value="1" <!--#if $disable_api_key > 0 then "checked=1" else ""#--> /> <strong>$T('opt-disableApikey')</strong></label><br>
$T('explain-disableApikey') <a href="${helpuri}cross-site-vulnerability/" target="_blank">$T('explain-disableApikeyWarn')</a>
<!--#if $lang_list#-->
<br/><br/>
<strong>$T('opt-language'):</strong><br/>
$T('explain-language')<br/>
<select name="language">
<!--#for $webline in $lang_list#-->
<!--#if $webline[0].lower() == $language.lower()#-->
<option value="$webline[0]" selected>$webline[1]</option>
<!--#else#-->
<option value="$webline[0]">$webline[1]</option>
<!--#end if#-->
<!--#end for#-->
</select>
<!--#end if#-->
</fieldset>
</div>
<fieldset class="EntryFieldSet">
<legend>$T('webAuth')</legend>
<strong>$T('opt-web_username'):</strong><br>
$T('explain-web_username')<br>
<input type="text" name="username" value="$username">
<br>
<br>
<strong>$T('opt-web_password')</strong><br>
$T('explain-web_password')<br>
<input type="password" name="password" value="$password">
</fieldset>
<div class="EntryBlock">
<fieldset class="EntryFieldSet">
<legend>$T('httpsSupport')</legend>
<i>$T('restartRequired')</i><br/><br/>
<label><input type="checkbox" name="enable_https" value="1" <!--#if $enable_https > 0 then 'checked="1"' else ""#--> <!--#if int($have_ssl) == 0 then "disabled" else ""#--> />
<strong>$T('opt-enable_https')<!--#if int($have_ssl) == 0 then " "+$T('opt-notInstalled') else ""#--></strong></label><br/>
$T('explain-enable_https')<br>
<br/>
<strong>$T('opt-https_port'):</strong><br>
$T('explain-https_port')<br>
<input type="text" name="https_port" value="$https_port">
<br/>
<br/>
<strong>$T('opt-https_cert'):</strong><br/>
$T('explain-https_cert')<br/>
<input type="text" name="https_cert" value="$https_cert">
<br/>
<br/>
<strong>$T('opt-https_key'):</strong><br/>
$T('explain-https_key')<br/>
<input type="text" name="https_key" value="$https_key">
</fieldset>
</div>
<div class="EntryBlock">
<fieldset class="EntryFieldSet">
<legend>$T('tuning')</legend>
<strong>$T('opt-refresh_rate'):</strong><br>
$T('explain-refresh_rate')<br>
<input type="text" name="refresh_rate" value="$refresh_rate">
<br>
<br>
<strong>$T('opt-bandwidth_limit'):</strong><br>
$T('explain-bandwidth_limit')<br>
<input type="text" name="bandwidth_limit" value="$bandwidth_limit">
<br>
<br>
<strong>$T('opt-cache_limitstr'):</strong><br>
$T('explain-cache_limitstr')<br>
<input type="text" name="cache_limit" value="$cache_limit">
<br>
<br>
<strong>$T('opt-cleanup_list'):</strong><br>
$T('explain-cleanup_list')<br><br>
<input type="text" name="cleanup_list" value="$cleanup_list">
<input type="hidden" name="session" value="$session">
</fieldset>
</div>
<p>
<input type="submit" value="$T('button-saveChanges')">
<!--#if $restart_req#-->
<input type="submit" onclick="this.form.action='../restart'; this.form.submit(); return false;" value="$T('button-restart')"/>
<!--#end if#-->
</p>
</form>
</table>
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,34 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+Indexers-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="newzbin"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>NzbMatrix</h2>
$T('explain-nzbmatrix')<br/><br/>
<form action="saveMatrix" method="post" autocomplete="off">
<div class="EntryBlock">
<fieldset class="EntryFieldSet">
<legend>$T('accountInfo')</legend>
<strong>$T('opt-username_matrix'):</strong><br>
$T('explain-username_matrix')<br>
<input type="text" name="matrix_username" value="$matrix_username">
<br>
<br>
<strong>$T('opt-apikey_matrix'):</strong><br>
$T('explain-apikey_matrix')<br>
<input type="text" name="matrix_apikey" value="$matrix_apikey">
<br/><br/>
<input type="checkbox" name="matrix_del_bookmark" value="1" <!--#if $matrix_del_bookmark > 0 then "checked=1" else ""#--> /> <strong>$T('opt-newzbin_unbookmark'):</strong><br>
$T('explain-newzbin_unbookmark')<br>
</fieldset>
</div>
<input type="hidden" name="session" value="$session">
<p><input type="submit" value="$T('button-saveChanges')"></p>
</form>
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,94 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+Notifications-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl" #-->
<!--#set global $submenu="email"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('configEmail')</h2>
<form action="saveEmail" method="post" autocomplete="off">
<div class="EntryBlock">
<fieldset class="EntryFieldSet">
<legend>$T('emailOptions')</legend>
<strong>$T('opt-email_endjob')</strong><br/>
<input type="radio" name="email_endjob" value="0" <!--#if $email_endjob == "0" then "checked=1" else ""#--> /> $T('email-never')
<input type="radio" name="email_endjob" value="1" <!--#if $email_endjob == "1" then "checked=1" else ""#--> /> $T('email-always')
<input type="radio" name="email_endjob" value="2" <!--#if $email_endjob == "2" then "checked=1" else ""#--> /> $T('email-errorOnly')
<br/><br/>
<label><input type="checkbox" name="email_full" value="1" <!--#if $email_full != "0" then "checked=1" else ""#--> /> <strong>$T('opt-email_full'):</strong></label><br>
$T('explain-email_full')<br/>
<br/>
<label><input type="checkbox" name="email_rss" value="1" <!--#if $email_rss != "0" then "checked=1" else ""#--> /> <strong>$T('opt-email_rss'):</strong></label><br>
$T('explain-email_rss')<br/>
<strong>$T('opt-email_dir'):</strong><br/>
$T('explain-email_dir')<br/>
<input type="text" size="40" name="email_dir" value="$email_dir">
</fieldset>
</div>
<fieldset class="EntryFieldSet">
<legend>$T('emailAccount')</legend>
<strong>$T('opt-email_server'):</strong><br>
$T('explain-email_server').<br>
<input type="text" size="35" name="email_server" value="$email_server">
<br>
<br>
<strong>$T('opt-email_to'):</strong><br>
$T('explain-email_to')<br>
<input type="text" size="35" name="email_to" value="$email_to">
<br>
<br>
<strong>$T('opt-email_from'):</strong><br>
$T('explain-email_from')<br>
<input type="text" size="35" name="email_from" value="$email_from">
<br>
<br>
<strong>$T('opt-email_account'):</strong><br>
$T('explain-email_account')<br>
<input type="text" size="35" name="email_account" value="$email_account">
<br>
<br>
<strong>$T('opt-email_pwd'):</strong><br>
$T('explain-email_pwd')<br>
<input type="password" size="35" name="email_pwd" value="$email_pwd">
</fieldset>
<!--#if $have_growl or $have_ntfosd#-->
<fieldset class="EntryFieldSet">
<legend>$T('growlSettings')</legend>
<!--#if $have_ntfosd#-->
<label><input type="checkbox" name="ntfosd_enable" value="1" <!--#if $ntfosd_enable != "0" then "checked=1" else ""#--> /> <strong>$T('opt-ntfosd_enable'):</strong></label><br>
$T('explain-ntfosd_enable')
<br/>
<br/>
<!--#end if#-->
<!--#if $have_growl#-->
<label><input type="checkbox" name="growl_enable" value="1" <!--#if $growl_enable != "0" then "checked=1" else ""#--> /> <strong>$T('opt-growl_enable'):</strong></label><br>
$T('explain-growl_enable')
<br/>
<br/>
<strong>$T('opt-growl_server'):</strong><br>
$T('explain-growl_server')<br>
<input type="text" size="35" name="growl_server" value="$growl_server">
<br>
<br>
<strong>$T('opt-growl_password'):</strong><br>
$T('explain-growl_password')<br>
<input type="password" size="35" name="growl_password" value="$growl_password">
</fieldset>
<!--#end if#-->
<!--#end if#-->
</div>
<input type="hidden" name="session" value="$session">
<p><input type="submit" value="$T('button-saveChanges')">&nbsp;&nbsp;
<input type="button" onclick="if (confirm('$T('askTestEmail').replace("'","`") ')) { this.form.action='testmail?session=$session&'; this.form.submit(); return false;}" value="$T('link-testEmail')"/>
<input type="button" onclick="this.form.action='testnotification?session=$session&'; this.form.submit(); return false;"value="$T('testNotify')"/>
</p>
</form>
<!--#if $lastmail#-->
$T('emailResult') = <b>$lastmail</b>
<!--#end if#-->
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,381 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+RSS-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="rss"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2><a href="../rss">$T('configRSS')</a></h2>
<!--#if $active_feed#-->
<!--#set $feed = $active_feed#-->
<div class="EntryBlock">
<form action="upd_rss_feed" method="post">
<fieldset class="EntryFieldSet">
<legend <!--#if $rss[$feed]['enable'] then 'class="feedEnabled"' else 'class="feedDisabled"'#-->><input type="checkbox" onclick="this.form.action='toggle_rss_feed?session=$session'; this.form.submit(); return false;" name="enable" <!--#if $rss[$feed]['enable'] then "CHECKED" else "" #-->/>
$T('feed') $feed</legend>
<input type="text" size="105" name="uri" value="$rss[$feed]['uri']"/>
<input type="button" onclick="if (confirm('$T('confirm').replace("'","`") ')) { this.form.action='del_rss_feed?session=$session&'; this.form.submit(); return false;}" value="$T('button-delFeed')"/>
<input type="button" onclick="this.form.action='test_rss_feed?session=$session&'; this.form.submit(); return false;" value="$T('button-preFeed')"/>
<input type="button" onclick="this.form.action='download_rss_feed?session=$session&'; this.form.submit(); return false;" value="$T('button-forceFeed')"/>
<br/><br/>
<!--#if $rss[$feed]['pick_cat']#-->
<select name="cat">
<optgroup label="$T('category')">
<!--#for $ct in $cat_list#-->
<option value="$ct" <!--#if $ct == $rss[$feed]['cat'] then "selected" else ""#-->>$Tspec($ct)</option>
<!--#end for#-->
</optgroup>
</select>
<!--#end if#-->
<select name="pp">
<optgroup label="$T('pp')">
<option value="" <!--#if $rss[$feed]['pp'] == "" then 'selected' else ''#-->>$T('default')</option>
<option value="0" <!--#if $rss[$feed]['pp'] == "0" then 'selected' else ''#-->>$T('pp-none')</option>
<option value="1" <!--#if $rss[$feed]['pp'] == "1" then 'selected' else ''#-->>$T('pp-repair')</option>
<option value="2" <!--#if $rss[$feed]['pp'] == "2" then 'selected' else ''#-->>$T('pp-unpack')</option>
<option value="3" <!--#if $rss[$feed]['pp'] == "3" then 'selected' else ''#-->>$T('pp-delete')</option>
</optgroup>
</select>
<!--#if $rss[$feed]['pick_script']#-->
<select name="script">
<optgroup label="$T('script')">
<!--#for $sc in $script_list#-->
<option value="$sc" <!--#if $sc == $rss[$feed]['script'] then "selected" else ""#-->>$Tspec($sc)</option>
<!--#end for#-->
</optgroup>
</select>
<!--#end if#-->
<select name="priority">
<optgroup label="$T('priority')">
<option value="-100" <!--#if $rss[$feed]['priority'] == -100 then 'selected' else ''#-->>$T('default')</option>
<option value="2" <!--#if $rss[$feed]['priority'] == 2 then 'selected' else ''#-->>$T('pr-force')</option>
<option value="1" <!--#if $rss[$feed]['priority'] == 1 then 'selected' else ''#-->>$T('pr-high')</option>
<option value="0" <!--#if $rss[$feed]['priority'] == 0 then 'selected' else ''#-->>$T('pr-normal')</option>
<option value="-1" <!--#if $rss[$feed]['priority'] == -1 then 'selected' else ''#-->>$T('pr-low')</option>
<option value="-2" <!--#if $rss[$feed]['priority'] == -2 then 'selected' else ''#-->>$T('pr-paused')</option>
</optgroup>
</select>
<input type="hidden" name="feed" value="$feed"/>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-save')"/>
<br />
</form>
<br/><br/>
<table>
<tr>
<th>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
<th>&nbsp;</th>
<th>$T('rss-order')</th>
<th>$T('rss-type')</th>
<th>$T('rss-filter')</th>
<!--#if $rss[$feed]['pick_cat']#--><th>$T('category')</th><!--#end if#-->
<th>Mode</th>
<!--#if $rss[$feed]['pick_script']#--><th>$T('script')</th><!--#end if#-->
<th>$T('priority')</th>
<th></th>
</tr>
<form action="upd_rss_filter" method="get">
<tr class="odd">
<td></td>
<td><input type="checkbox" name="enabled" value="1" checked="checked" /></td>
<td></td>
<td>
<select name="filter_type">
<option value="A" selected /> $T('rss-accept')</option>
<option value="M" /> $T('rss-must')</option>
<option value="R" /> $T('rss-reject')</option>
<option value="C" /> $T('rss-mustcat')</option>
</select>
</td>
<td><input type="text" size="60" name="filter_text" value=""></td>
<!--#if $rss[$feed]['pick_cat']#-->
<td>
<select name="cat">
<!--#for $ct in $cat_list#-->
<option value="$ct" <!--#if $ct == "Default" then "selected" else ""#-->>$Tspec($ct)</option>
<!--#end for#-->
</select>
</td>
<!--#end if#-->
<td>
<select name="pp">
<option value="" selected>$T('default')</option>
<option value="0">$T('pp-none')</option>
<option value="1">$T('pp-repair')</option>
<option value="2">$T('pp-unpack')</option>
<option value="3">$T('pp-delete')</option>
</select>
</td>
<!--#if $rss[$feed]['pick_script']#-->
<td>
<select name="script">
<!--#for $sc in $script_list#-->
<option value="$sc" <!--#if $sc == "Default" then "selected" else ""#-->>$Tspec($sc)</option>
<!--#end for#-->
</select>
</td>
<td>
<select name="priority">
<option value="-100" selected>$T('default')</option>
<option value="2">$T('pr-force')</option>
<option value="1">$T('pr-high')</option>
<option value="0">$T('pr-normal')</option>
<option value="-1">$T('pr-low')</option>
<option value="-2">$T('pr-paused')</option>
</select>
</td>
<!--#end if#-->
<input type="hidden" value="$rss[$feed]['filtercount']" name="index">
<input type="hidden" value="$feed" name="feed">
<input type="hidden" name="session" value="$session">
<td><input type="submit" value="$T('button-save')"></td>
</tr>
</form>
<!--#set $fnum = 0#-->
<!--#for $filter in $rss[$feed].filters#-->
<tr class="odd">
<td>
<form action="del_rss_filter" method="post">
<input type="hidden" value="$fnum" name="index">
<input type="hidden" value="$feed" name="feed">
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('rss-delFilter')"></form>
</td>
<form action="upd_rss_filter" method="get">
<td>
<input type="checkbox" name="enabled" value="1" <!--#if $filter[6] == '1' then 'checked="checked"' else ""#--> />
</td>
<td>
<input type="text" size="3" name="new_index" value=$fnum>
</td>
<td>
<select name="filter_type">
<option value="A" <!--#if $filter[3] == "A" then "selected" else ""#--> /> $T('rss-accept')</option>
<option value="M" <!--#if $filter[3] == "M" then "selected" else ""#--> /> $T('rss-must')</option>
<option value="R" <!--#if $filter[3] == "R" then "selected" else ""#--> /> $T('rss-reject')</option>
<option value="C" <!--#if $filter[3] == "C" then "selected" else ""#--> /> $T('rss-mustcat')</option>
</select>
</td>
<td><input type="text" size="60" name="filter_text" value="$filter[4]"/></td>
<!--#if $rss[$feed]['pick_cat']#-->
<td>
<select name="cat">
<!--#for $ct in $cat_list#-->
<option value="$ct" <!--#if $ct == $filter[0] then "selected" else ""#-->>$Tspec($ct)</option>
<!--#end for#-->
</select>
</td>
<!--#end if#-->
<td>
<select name="pp">
<option value="" <!--#if $filter[1] == "0" then 'selected' else ''#-->>$T('default')</option>
<option value="0" <!--#if $filter[1] == "0" then 'selected' else ''#-->>$T('pp-none')</option>
<option value="1" <!--#if $filter[1] == "1" then 'selected' else ''#-->>$T('pp-repair')</option>
<option value="2" <!--#if $filter[1] == "2" then 'selected' else ''#-->>$T('pp-unpack')</option>
<option value="3" <!--#if $filter[1] == "3" then 'selected' else ''#-->>$T('pp-delete')</option>
</select>
</td>
<!--#if $rss[$feed]['pick_script']#-->
<td>
<select name="script">
<!--#for $sc in $script_list#-->
<option value="$sc" <!--#if $sc == $filter[2] then "selected" else ""#-->>$Tspec($sc)</option>
<!--#end for#-->
</select>
</td>
<!--#end if#-->
<td>
<select name="priority">
<option value="-100" <!--#if $filter[5] == "-100" or $filter[4] == "" then 'selected' else ''#-->>$T('default')</option>
<option value="2" <!--#if $filter[5] == "2" then "selected" else ""#-->>$T('pr-force')</option>
<option value="1" <!--#if $filter[5] == "1" then "selected" else ""#-->>$T('pr-high')</option>
<option value="0" <!--#if $filter[5] == "0" then "selected" else ""#-->>$T('pr-normal')</option>
<option value="-1" <!--#if $filter[5] == "-1" then "selected" else ""#-->>$T('pr-low')</option>
<option value="-2" <!--#if $filter[5] == "-2" then "selected" else ""#-->>$T('pr-paused')</option>
</select>
</td>
<td>
<input type="hidden" name="index" value="$fnum"/>
<input type="hidden" name="feed" value="$feed"/>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-save')"/>
<!--#if not $rss[$feed].filter_states[$fnum]#-->&nbsp;&nbsp;$T('Incorrect filter')<!--#end if#-->
</td>
</form>
</tr>
<!--#set $fnum = $fnum+1#-->
<!--#end for#-->
</table>
</div>
</fieldset>
<!--#if $error#-->
<br/><br/><b>$error</b><br/><br/>
<!--#end if#-->
<h3>$T('rss-matched')</h3>
<table id="catTable">
<tr>
<th>&nbsp;</th>
<th>&nbsp;$T('rss-skip')&nbsp;</th>
<th>&nbsp;$T('rss-filter')&nbsp;</th>
<th>$T('sort-title')</th>
</tr>
<!--#set $odd = False#-->
<!--#for $job in $matched#-->
<!--#set $odd = not $odd#-->
<tr class="<!--#if $odd then "odd" else "even"#-->">
<td><form action="download" method="get">
<input type="hidden" name="url" value="$job[0]"/>
<input type="hidden" name="nzbname" value="$job[4]"/>
<input type="hidden" value="$feed" name="feed"/>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('link-download')">
</form>
</td>
<td>$job[2]</td>
<td>$job[3]</td>
<td>$job[1]</td>
</tr>
<!--#end for#-->
</table>
<h3>$T('rss-notMatched')</h3>
<table id="catTable">
<tr>
<th>&nbsp;</th>
<th>&nbsp;$T('rss-skip')&nbsp;</th>
<th>&nbsp;$T('rss-filter')&nbsp;</th>
<th>$T('sort-title')</th>
</tr>
<!--#set $odd = False#-->
<!--#for $job in $unmatched#-->
<!--#set $odd = not $odd#-->
<tr class="<!--#if $odd then "odd" else "even"#-->">
<td><form action="download" method="get">
<input type="hidden" name="url" value="$job[0]"/>
<input type="hidden" name="nzbname" value="$job[4]"/>
<input type="hidden" value="$feed" name="feed"/>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('link-download')">
</form>
</td>
<td>$job[2]</td>
<td>$job[3]</td>
<td>$job[1]</td>
</tr>
<!--#end for#-->
</table>
<h3>$T('rss-done')</h3>
<!--#if $downloaded#-->
<form action="clean_rss_jobs" method="get">
<input type="hidden" value="$feed" name="feed"/>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-clear')">
</form><br/>
<!--#end if#-->
<table id="catTable">
<tr>
<th>$T('sort-title')</th>
</tr>
<!--#set $odd = False#-->
<!--#for $job in $downloaded#-->
<!--#set $odd = not $odd#-->
<tr class="<!--#if $odd then "odd" else "even"#-->">
<td>$job</td>
</tr>
<!--#end for#-->
</table>
<!--#else#-->
<div class="EntryBlock">
<form action="add_rss_feed" method="post">
<fieldset class="EntryFieldSet">
<legend>$T('newFeedURI')</legend>
<input type="text" size="10" name="feed" value="$feed"/>&nbsp;
<input type="text" size="104" name="uri" value=""/><br/><br/>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-add')"/>
</fieldset>
</form>
</div>
<p>$T('explain-RSS')</p>
<div class="EntryBlock">
<form action="save_rss_rate" method="post">
<fieldset class="EntryFieldSet">
<legend>$T('opt-rss_rate')</legend>
<input type="text" name="rss_rate" value="$rss_rate">
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-save')"/>
<br/>
$T('explain-rss_rate')
</fieldset>
</form>
</div>
<div class="EntryBlock">
<form action="rss_now" method="post">
<fieldset class="EntryFieldSet">
<legend>RSS</legend>
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-rssNow')"/>
<br/>
&nbsp;
</fieldset>
</form>
</div>
<table id="catTable">
<tr>
<th></th>
<th>$T('enabled')</th>
<th>$T('feed')</th>
<th>URL</th>
</tr>
<!--#set $odd = False#-->
<!--#for $feed in sorted($rss.keys(), cmp=lambda x,y: cmp(x.lower(), y.lower()))#-->
<!--#set $odd = not $odd#-->
<tr class="<!--#if $odd then "odd" else "even"#-->">
<td><form action="del_rss_feed" method="get">
<input type="hidden" name="session" value="$session">
<input type="hidden" value="$feed" name="feed">
<input type="button" onclick="if (confirm('$T('confirm').replace("'","`") ')) { this.form.action='del_rss_feed?session=$session&'; this.form.submit(); return false;}" value="$T('button-del')"/>
</form>
</td>
<td><form action="upd_rss_feed" method="post">
<input type="hidden" name="session" value="$session">
<input type="hidden" value="$feed" name="feed">
<input type="hidden" value="1" name="table">
<input type="checkbox" onclick="this.form.action='toggle_rss_feed?session=$session'; this.form.submit(); return false;" name="enable" <!--#if $rss[$feed]['enable'] then "CHECKED" else "" #-->/>
</form>
</td>
<td><a href="?feed=$rss[$feed]['link']">$feed</a></td>
<td>$rss[$feed]['uri']</td>
</tr>
<!--#end for#-->
</table>
<!--#end if#-->
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,71 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+Scheduling-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="scheduling"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('configSchedule')</h2>
<div class="EntryBlock">
<form action="addSchedule" method="post">
<fieldset class="EntryFieldSet">
<legend>$T('addSchedule')</legend>
<%import time
t = time.localtime()
hour = t[3]
if hour != 23:
hour += 1
else:
hour = 0 %>
$T('hour'):<br>
<select name="hour">
<!--#for $i in range(24)#-->
<option value="$i" <!--#if hour == i then "selected=1" else ""#-->> $i</option>
<!--#end for#-->
</select>
:
<select name="minute">
<!--#for $i in range(60)#-->
<option value="$i">$i
<!--#end for#-->
</select>
<br>$T('sch-frequency'): <br>
<input type="checkbox" name="daysofweek" value="1">$T('monday')<br/>
<input type="checkbox" name="daysofweek" value="2">$T('tuesday')<br/>
<input type="checkbox" name="daysofweek" value="3">$T('wednesday')<br/>
<input type="checkbox" name="daysofweek" value="4">$T('thursday')<br/>
<input type="checkbox" name="daysofweek" value="5">$T('friday')<br/>
<input type="checkbox" name="daysofweek" value="6">$T('saturday')<br/>
<input type="checkbox" name="daysofweek" value="7">$T('sunday')<br/>
<br>$T('sch-action'):<br>
<select name="action">
<!--#for $action in $actions#-->
<option value="$action">$actions_lng[$action]
<!--#end for#-->
</select>
<br>$T('sch-arguments'):<br>
<input type="text" size="20" name="arguments" value="">
<input type="hidden" name="session" value="$session">
<p><input type="submit" value="$T('button-addSchedule')"></p>
</fieldset>
</form>
</div>
<h3>$T('currentSchedules'):</h3>
<div class="EntryBlock">
<!--#set $schednum = 0#-->
<!--#for $line in $schedlines#-->
<form action="delSchedule" method="post">
<fieldset class="EntryFieldSet">
$T('sch-task') $taskinfo[$schednum][0]: <strong>$taskinfo[$schednum][1]:$taskinfo[$schednum][2]</strong> - $taskinfo[$schednum][3] - $taskinfo[$schednum][4]
<!--#set $schednum += 1#-->
<input type="hidden" name="line" value="$line">
<input type="hidden" name="session" value="$session">
<input type="submit" value="$T('button-delSchedule')">
</fieldset>
</form><br />
<!--#end for#-->
</div>
<!--#include $webdir + "/inc_bottom.tmpl"#-->


@@ -0,0 +1,71 @@
<!--#set global $topmenu="config"#-->
<!--#set global $statpath="../.."#-->
<!--#set global $helpsubject="Configure+Servers-0-7"#-->
<!--#include $webdir + "/inc_top.tmpl"#-->
<!--#set global $submenu="servers"#-->
<!--#include $webdir + "/inc_cmenu.tmpl"#-->
<h2>$T('configServer')</h2>
<div class="EntryBlock">
<form action="addServer" method="post" autocomplete="off">
<fieldset class="EntryFieldSet">
<legend>$T('addServer')</legend>
$T('srv-host'):<br><input type="text" size="25" name="host"><br>
$T('srv-port'):<br><input type="text" size="25" name="port"><br>
$T('srv-username'):<br><input type="text" size="25" name="username"><br>
$T('srv-password'):<br><input type="password" size="25" name="password"><br>
$T('srv-timeout'):<br><input type="text" size="25" name="timeout" value="120"><br>
$T('srv-connections'):<br><input type="text" size="25" name="connections"><br>
$T('srv-retention') ($T('days')):<br><input type="text" size="25" name="retention"><br>
<!--#if int($have_ssl) == 0#-->
$T('srv-ssl') $T('opt-notInstalled')
<!--#else#-->
<input type="checkbox" name="ssl" value="1" <!--#if int($have_ssl) == 0 then "disabled" else ""#-->>&nbsp;$T('srv-ssl')<br/>
<!--#end if#-->
<input type="checkbox" name="fillserver" value="1">&nbsp;$T('srv-fillserver')<br>
<input type="checkbox" name="optional" value="1">&nbsp;$T('srv-optional')<br>
<input type="checkbox" name="enable" value="1" checked="1">&nbsp;$T('srv-enable')<br>
<input type="hidden" name="session" value="$session">
<p><input type="submit" value="$T('button-addServer')"></p>
</fieldset>
</form>
<!--#set $slist = $servers.keys()#-->
<!--#$slist.sort()#-->
<!--#for $server in $slist#-->
<form action="saveServer" method="post" autocomplete="off">
<fieldset class="EntryFieldSet">
<legend>$server</legend>
$T('srv-host'):<br><input type="text" size="25" name="host" value="$servers[$server]['host']"><br>
$T('srv-port'):<br><input type="text" size="25" name="port" value="$servers[$server]['port']"><br>
$T('srv-username'):<br><input type="text" size="25" name="username" value="$servers[$server]['username']"><br>
$T('srv-password'):<br><input type="password" size="25" name="password" value="$servers[$server]['password']"><br>
$T('srv-timeout'):<br><input type="text" size="25" name="timeout" value="$servers[$server]['timeout']"><br>
$T('srv-connections'):<br><input type="text" size="25" name="connections" value="$servers[$server]['connections']"><br>
$T('srv-retention'):<br><input type="text" size="25" name="retention" value="$servers[$server]['retention']"><br>
<!--#if int($have_ssl) == 0#-->
$T('srv-ssl') $T('opt-notInstalled')
<!--#else#-->
<input type="checkbox" name="ssl" value="1" <!--#if int($servers[$server]['ssl']) != 0 then "checked=1" else ""#-->/>&nbsp;$T('srv-ssl')<br/>
<!--#end if#-->
<input type="checkbox" name="fillserver" value="1" <!--#if int($servers[$server]['fillserver']) != 0 then "checked=1" else ""#--> />&nbsp;$T('srv-fillserver')<br/>
<input type="checkbox" name="optional" value="1" <!--#if int($servers[$server]['optional']) != 0 then "checked=1" else ""#--> />&nbsp;$T('srv-optional')<br/>
<input type="checkbox" name="enable" value="1" <!--#if int($servers[$server]['enable']) != 0 then "checked=1" else ""#--> />&nbsp;$T('srv-enable')<br/>
<input type="hidden" name="server" value="$server">
<input type="hidden" name="session" value="$session">
<p><input type="submit" value="$T('button-saveChanges')"></p>
<p><input type="submit" onclick="this.form.action='delServer'; this.form.submit(); return false;" value="$T('button-delServer')"></p>
<!--#if 'amounts' in $servers[$server]#-->
<table border="1">
<tr><td>$T('total')</td><td>$servers[$server]['amounts'][0]</td>
<td>$T('thisMonth')</td><td>$servers[$server]['amounts'][1]</td></tr>
<tr><td>$T('today')</td><td>$servers[$server]['amounts'][3]</td>
<td>$T('thisWeek')</td><td>$servers[$server]['amounts'][2]</td></tr>
</table>
<!--#end if#-->
</fieldset>
</form>
<!--#end for#-->
</div>
<!--#include $webdir + "/inc_bottom.tmpl"#-->
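For servers that carry an 'amounts' entry, the small table above reads the byte counters by position: index 0 is the total, 1 this month, 2 this week and 3 today. A short illustrative sketch of that ordering; the sample values are invented:

# Illustrative sketch: mirrors how the template indexes the per-server
# 'amounts' list (0=total, 1=this month, 2=this week, 3=today).
# The byte values below are made-up sample data.
LABELS = ("Total", "This month", "This week", "Today")
amounts = ("1.2 T", "85.4 G", "21.0 G", "3.1 G")

for label, value in zip(LABELS, amounts):
    print("%-10s %s" % (label, value))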

Some files were not shown because too many files have changed in this diff.