<li>try to log in with the given user and password. The default
user is <tt class="docutils literal"><span class="pre">anonymous</span></tt>, the default password is <tt class="docutils literal"><span class="pre">anonymous@</span></tt> (see the example below).</li>
</ul>
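<p>For example, FTP credentials can be supplied directly in the URL being
checked (a sketch with placeholder values; whether your server accepts
them depends on its configuration):</p>
<pre class="literal-block">
# Anonymous login, the default described above
ftp://ftp.example.com/pub/

# Explicit user and password embedded in the URL
ftp://user:secret@ftp.example.com/pub/
</pre>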
<p><strong>Q: LinkChecker produced an error, but my web page is ok with
Mozilla/IE/Opera/...
Is this a bug in LinkChecker?</strong></p>
<p>A: Please check your web pages first. Are they really ok?
Use the <tt class="docutils literal"><span class="pre">--check-html</span></tt> option, or check whether a proxy you are
using produces the error.</p>
<p><strong>Q: I still get an error, but the page is definitely ok.</strong></p>
<p>A: Some servers deny access to automated tools (also called robots)
like LinkChecker. This is not a bug in LinkChecker but rather a
policy of the webmaster running the website you are checking. Look
at the <tt class="docutils literal"><span class="pre">/robots.txt</span></tt> file, which follows the <a class="reference external" href="http://www.robotstxt.org/wc/norobots-rfc.html">robots.txt exclusion standard</a>.</p>
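<p>As an illustration, a <tt class="docutils literal"><span class="pre">/robots.txt</span></tt> file like the following blocks
robots from parts of a site; the robot name <tt class="docutils literal"><span class="pre">LinkChecker</span></tt> in the second
rule is an assumption about how the server identifies this tool:</p>
<pre class="literal-block">
# Block every robot from the private area
User-agent: *
Disallow: /private/

# Block a robot identifying itself as LinkChecker from the whole site
User-agent: LinkChecker
Disallow: /
</pre>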
<p><strong>Q: How can I tell LinkChecker which proxy to use?</strong></p>
<p>A: LinkChecker works transparently with proxies. In a Unix or Windows
environment, set the http_proxy, https_proxy and ftp_proxy environment
variables to a URL that identifies the proxy server before starting
LinkChecker.</p>
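<p>For example, in a Unix shell (the proxy address is a placeholder; on
Windows, use <tt class="docutils literal"><span class="pre">set</span></tt> instead of <tt class="docutils literal"><span class="pre">export</span></tt>):</p>
<pre class="literal-block">
# Route HTTP, HTTPS and FTP requests through a proxy on port 8080
export http_proxy="http://proxy.example.com:8080/"
export https_proxy="http://proxy.example.com:8080/"
export ftp_proxy="http://proxy.example.com:8080/"
linkchecker http://www.example.com/
</pre>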
<p><strong>Q: The link “mailto:john@company.com?subject=Hello John” is reported
as an error.</strong></p>
<p>A: You have to quote special characters (e.g. spaces) in the subject field.
The correct link should be &#8220;mailto:...?subject=Hello%20John&#8221;.
Unfortunately, browsers like IE and Netscape do not enforce this.</p>
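<p>For example, the link from the question above, with the space in the
subject percent-encoded:</p>
<pre class="literal-block">
# Broken: contains a literal space
mailto:john@company.com?subject=Hello John

# Correct: the space is quoted as %20
mailto:john@company.com?subject=Hello%20John
</pre>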
<p><strong>Q: Does LinkChecker support JavaScript?</strong></p>
<p>A: No, and it never will. If your page does not work without JavaScript,
it is better checked with a browser testing tool like <a class="reference external" href="http://seleniumhq.org/">Selenium</a>.</p>
<p><strong>Q: Is LinkChecker&#8217;s cookie feature insecure?</strong></p>
<p>A: If a cookie file is specified, the information in it will be sent
to the specified hosts.
The following restrictions apply to LinkChecker cookies (a usage sketch
follows the list):</p>
<ul class="simple">
<li>Cookies will only be sent to the originating server.</li>
<li>Cookies are only stored in memory. After LinkChecker finishes, they
are lost.</li>
<li>The cookie feature is disabled by default.</li>
</ul>
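<p>As a sketch, a cookie file can be passed on the command line; the option
name <tt class="docutils literal"><span class="pre">--cookiefile</span></tt> and the header-style file format shown here are
assumptions based on the LinkChecker manual, so check your installed
version for the exact syntax:</p>
<pre class="literal-block">
# cookies.txt: send these cookies to www.example.com only
Host: www.example.com
Path: /
Set-cookie: session="abc123"

$ linkchecker --cookiefile=cookies.txt http://www.example.com/
</pre>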
<p><strong>Q: I see LinkChecker gets a /robots.txt file for every site it
checks. What is that about?</strong></p>
<p>A: LinkChecker follows the <a class="reference external" href="http://www.robotstxt.org/wc/norobots-rfc.html">robots.txt exclusion standard</a>. To avoid
misuse of LinkChecker, you cannot turn this feature off.
See the <a class="reference external" href="http://www.robotstxt.org/wc/robots.html">Web Robot pages</a> and the <a class="reference external" href="http://www.w3.org/Search/9605-Indexing-Workshop/ReportOutcomes/Spidering.txt">Spidering report</a> for more info.</p>
<p><strong>Q: How do I print unreachable/dead documents of my website with
LinkChecker?</strong></p>
<p>A: That is not possible. Finding unreachable or dead documents would
require file system access to your web repository and access to your
web server configuration, neither of which LinkChecker has.</p>
<p><strong>Q: How do I check HTML/XML/CSS syntax with LinkChecker?</strong></p>
<p>A: Use the <tt class="docutils literal"><span class="pre">--check-html</span></tt> and <tt class="docutils literal"><span class="pre">--check-css</span></tt> options.</p>
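<p>For example (a sketch; both options are named above, and the URL is a
placeholder):</p>
<pre class="literal-block">
# Check links and additionally validate HTML and CSS syntax
linkchecker --check-html --check-css http://www.example.com/
</pre>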