Require and use Python 2.7.2.

Bastian Kleineidam 2012-06-22 23:58:20 +02:00
parent dbe57c0f9b
commit 6d9a8859d3
8 changed files with 15 additions and 52 deletions

debian/changelog

@@ -1,6 +1,7 @@
linkchecker (8.0-1) UNRELEASED; urgency=low
  * New upstream release.
+ * Require Python >= 2.7.2
-- Bastian Kleineidam <calvin@debian.org> Mon, 18 Jun 2012 22:23:23 +0200

debian/control

@@ -15,7 +15,7 @@ Vcs-Browser: http://linkchecker.git.sourceforge.net/
Package: linkchecker
Architecture: any
-Depends: ${misc:Depends}, ${python:Depends}, ${shlibs:Depends}
+Depends: ${misc:Depends}, ${python:Depends}, ${shlibs:Depends}, python (>= 2.7.2)
Provides: ${python:Provides}
Conflicts: python-dnspython
Suggests: clamav-daemon,


@@ -5,6 +1,9 @@ Features:
  hostname and the expiration date are checked.
- cmdline: Added Nagios plugin script.
Changes:
+- dependencies: Python >= 2.7.2 is now required
Fixes:
- gui: Fix saving of the debugmemory option.


@@ -9,7 +9,7 @@ Requirements
On Mac OS X systems, using MacPorts, Fink or homebrew for software
installation is recommended.
-- Install Python >= 2.7 from http://www.python.org/
+- Install Python >= 2.7.2 from http://www.python.org/
- Qt4 SDK development tools from http://qt.nokia.com/downloads
The binary "qcollectiongenerator" is used to generate the


@@ -37,7 +37,7 @@ First, install the required software.
template from the source files, you will need xgettext with Python
support. This is available in gettext >= 0.12.
-2. Python >= 2.7 from http://www.python.org/
+2. Python >= 2.7.2 from http://www.python.org/
Be sure to also have installed the included distutils module.
On most distributions, the distutils module is included in


@@ -1,6 +1,10 @@
Upgrading
=========
+Migrating from 7.9 to 8.0
+-------------------------
+Python 2.7.2 or newer is required (Python 3.x is not supported though).
Migrating from 7.6 to 7.7
-------------------------
The deprecated options --check-html-w3 and --check-css-w3


@@ -20,9 +20,11 @@ Main function module for link checking.
# imports and checks
import sys
-# Needs Python >= 2.7 because we use dictionary based logging config
+# Needs Python >= 2.7.2 which fixed http://bugs.python.org/issue11467
if not (hasattr(sys, 'version_info') or
-        sys.version_info < (2, 7, 0, 'final', 0)):
-    raise SystemExit("This program requires Python 2.7 or later.")
+        sys.version_info < (2, 7, 2, 'final', 0)):
+    raise SystemExit("This program requires Python 2.7.2 or later.")
import os
import re
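One detail worth noting in the hunk above: the guard reads `if not (hasattr(sys, 'version_info') or sys.version_info < ...)`. Since `sys.version_info` exists on every Python from 2.0 onward, the parenthesized expression is always true, so the `SystemExit` can never actually be raised. A minimal sketch of a guard that does fire on too-old interpreters (the helper name `version_ok` is illustrative, not part of this commit):

```python
import sys

def version_ok(version_info, required=(2, 7, 2)):
    """Return True if the given version_info satisfies the requirement."""
    # Very old interpreters (< 2.0) lack sys.version_info entirely.
    if version_info is None:
        return False
    # Compare only (major, minor, micro); ignore releaselevel/serial.
    return tuple(version_info[:3]) >= required

# Import-time guard, mirroring the intent of the hunk above.
if not version_ok(getattr(sys, 'version_info', None)):
    raise SystemExit("This program requires Python 2.7.2 or later.")
```

Comparing truncated tuples keeps the check independent of the `'final'`/`'candidate'` release-level fields, which do not order numerically.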


@@ -32,53 +32,6 @@ for scheme in ('ldap', 'irc'):
    if scheme not in urlparse.uses_netloc:
        urlparse.uses_netloc.append(scheme)
-if sys.version_info[0] > 2 or sys.version_info[1] > 6:
-    # Fix Python regression; see http://bugs.python.org/issue11467
-    def urlsplit_26(url, scheme='', allow_fragments=True):
-        """Parse a URL into 5 components:
-        <scheme>://<netloc>/<path>?<query>#<fragment>
-        Return a 5-tuple: (scheme, netloc, path, query, fragment).
-        Note that we don't break the components up in smaller bits
-        (e.g. netloc is a single string) and we don't expand % escapes."""
-        allow_fragments = bool(allow_fragments)
-        key = url, scheme, allow_fragments, type(url), type(scheme)
-        cached = urlparse._parse_cache.get(key, None)
-        if cached:
-            return cached
-        if len(urlparse._parse_cache) >= urlparse.MAX_CACHE_SIZE: # avoid runaway growth
-            urlparse.clear_cache()
-        netloc = query = fragment = ''
-        i = url.find(':')
-        if i > 0:
-            if url[:i] == 'http': # optimize the common case
-                scheme = url[:i].lower()
-                url = url[i+1:]
-                if url[:2] == '//':
-                    netloc, url = urlparse._splitnetloc(url, 2)
-                if allow_fragments and '#' in url:
-                    url, fragment = url.split('#', 1)
-                if '?' in url:
-                    url, query = url.split('?', 1)
-                v = urlparse.SplitResult(scheme, netloc, url, query, fragment)
-                urlparse._parse_cache[key] = v
-                return v
-            for c in url[:i]:
-                if c not in urlparse.scheme_chars:
-                    break
-            else:
-                scheme, url = url[:i].lower(), url[i+1:]
-        if url[:2] == '//':
-            netloc, url = urlparse._splitnetloc(url, 2)
-        if allow_fragments and scheme in urlparse.uses_fragment and '#' in url:
-            url, fragment = url.split('#', 1)
-        if scheme in urlparse.uses_query and '?' in url:
-            url, query = url.split('?', 1)
-        v = urlparse.SplitResult(scheme, netloc, url, query, fragment)
-        urlparse._parse_cache[key] = v
-        return v
-    urlparse.urlsplit = urlsplit_26
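For context on this deletion: the monkeypatch backported the fix for http://bugs.python.org/issue11467, where the `urlparse.urlsplit` result cache did not distinguish str from unicode arguments, so a call with one type could receive a cached result of the other (hence the `type(url), type(scheme)` entries in the cache key above). Python 2.7.2 ships the fix, so the local copy becomes redundant. A small illustration of the type-faithful behavior, sketched with Python 3's `urllib.parse` as a stand-in (the example URL is made up):

```python
from urllib.parse import urlsplit

# The same URL, once as str and once as bytes.
s = urlsplit("http://example.com/path")
b = urlsplit(b"http://example.com/path")

# Result component types follow the argument type; a cache keyed
# without the argument type could hand the str result back to the
# bytes caller, or vice versa.
print(type(s.path))  # <class 'str'>
print(type(b.path))  # <class 'bytes'>
```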
# The character set to encode non-ASCII characters in a URL. See also
# http://tools.ietf.org/html/rfc2396#section-2.1
# Note that the encoding is not really specified, but most browsers