wummel / linkchecker

check links in web documents or full websites
http://wummel.github.io/linkchecker/
GNU General Public License v2.0

How to check all subdomains? #477

Closed · bootstraponline closed this issue 10 years ago

bootstraponline commented 10 years ago

www.example.com
random.example.com
example.com

How do I check all *example.com without listing each one? Isn't there a regex feature for this somewhere?

bootstraponline commented 10 years ago

I'm expecting something like this:

--check-urls REGEX

--check-urls .*example.com

wummel commented 10 years ago

Yes, see the "internlinks" option of the configuration file.

bootstraponline commented 10 years ago

Is internlinks not supported on the command line?

$ ./linkchecker --help
usage: linkchecker [-h] [-f FILENAME] [-t NUMBER] [-V] [--list-plugins]
                   [--stdin] [-D STRING] [-F TYPE[/ENCODING[/FILENAME]]]
                   [--no-status] [--no-warnings] [-o TYPE[/ENCODING]] [-q]
                   [-v] [--cookiefile FILENAME] [--check-extern]
                   [--ignore-url REGEX] [--no-follow-url REGEX] [-N STRING]
                   [-p] [-r NUMBER] [--timeout NUMBER] [-u STRING]
                   [--user-agent STRING]
                   [url [url ...]]

positional arguments:
  url

optional arguments:
  -h, --help            show this help message and exit

General options:
  -f FILENAME, --config FILENAME
                        Use FILENAME as the configuration file. By default
                        LinkChecker uses ~/.linkchecker/linkcheckerrc (under
                        Windows %HOMEPATH%\.linkchecker\linkcheckerrc).
  -t NUMBER, --threads NUMBER
                        Generate no more than the given number of threads.
                        Default number of threads is 10. To disable threading
                        specify a non-positive number.
  -V, --version         Print version and exit.
  --list-plugins        Print available check plugins and exit.
  --stdin               Read list of white-space separated URLs to check from
                        stdin.

Output options:
  -D STRING, --debug STRING
                        Print debugging output for the given logger. Available
                        loggers are 'all', 'thread', 'plugin', 'checking',
                        'gui', 'cache', 'cmdline'. Specifying 'all' is an
                        alias for specifying all available loggers. The option
                        can be given multiple times to debug with more than
                        one logger. For accurate results, threading will be
                        disabled during debug runs.
  -F TYPE[/ENCODING[/FILENAME]], --file-output TYPE[/ENCODING[/FILENAME]]
                        Output to a file linkchecker-out.TYPE,
                        $HOME/.linkchecker/blacklist for 'blacklist' output,
                        or FILENAME if specified. The ENCODING specifies the
                        output encoding; the default is that of your locale.
                        Valid encodings are listed at
                        http://docs.python.org/lib/standard-encodings.html.
                        The FILENAME and ENCODING parts of the 'none' output
                        type are ignored; otherwise, if the file already
                        exists, it will be overwritten. Valid file output
                        types are 'blacklist', 'csv', 'xml', 'dot', 'gml',
                        'gxml', 'html', 'none', 'sitemap', 'sql', 'text'.
                        You can specify this option multiple times to output
                        to more than one file. Default is no file output.
                        Note that you can suppress all console output with
                        the option '-o none'.
  --no-status           Do not print check status messages.
  --no-warnings         Don't log warnings. Default is to log warnings.
  -o TYPE[/ENCODING], --output TYPE[/ENCODING]
                        Specify output as 'blacklist', 'csv', 'xml', 'dot',
                        'gml', 'gxml', 'html', 'none', 'sitemap', 'sql',
                        'text'. Default output type is text. The ENCODING
                        specifies the output encoding; the default is that of
                        your locale. Valid encodings are listed at
                        http://docs.python.org/lib/standard-encodings.html.
  -q, --quiet           Quiet operation, an alias for '-o none'. This is only
                        useful with -F.
  -v, --verbose         Log all URLs. Default is to log only errors and
                        warnings.

Checking options:
  --cookiefile FILENAME
                        Read a file with initial cookie data. The cookie data
                        format is explained below.
  --check-extern        Check external URLs.
  --ignore-url REGEX    Only check syntax of URLs matching the given regular
                        expression. This option can be given multiple times.
  --no-follow-url REGEX
                        Check but do not recurse into URLs matching the given
                        regular expression. This option can be given multiple
                        times.
  -N STRING, --nntp-server STRING
                        Specify an NNTP server for 'news:...' links. Default
                        is the environment variable NNTP_SERVER. If no host is
                        given, only the syntax of the link is checked.
  -p, --password        Read a password from console and use it for HTTP and
                        FTP authorization. For FTP the default password is
                        'anonymous@'. For HTTP there is no default password.
                        See also -u.
  -r NUMBER, --recursion-level NUMBER
                        Recursively check all links up to the given depth. A
                        negative depth enables infinite recursion. Default
                        depth is infinite.
  --timeout NUMBER      Set the timeout for connection attempts in seconds.
                        The default timeout is 60 seconds.
  -u STRING, --user STRING
                        Try the given username for HTTP and FTP authorization.
                        For FTP the default username is 'anonymous'. For HTTP
                        there is no default username. See also -p.
  --user-agent STRING   Specify the User-Agent string to send to the HTTP
                        server, for example "Mozilla/4.0". The default is
                        "LinkChecker/X.Y" where X.Y is the current version of
                        LinkChecker.

EXAMPLES
The most common use checks the given domain recursively, plus any
single URL pointing outside of the domain:
  linkchecker http://www.example.org/
Beware that this checks the whole site, which can have several hundred
thousand URLs. Use the -r option to restrict the recursion depth.

Don't connect to mailto: hosts; only check their URL syntax. All other
links are checked as usual:
  linkchecker --ignore-url=^mailto: www.example.org

Checking local HTML files on Unix:
  linkchecker ../bla.html subdir/blubber.html

Checking a local HTML file on Windows:
  linkchecker c:\temp\test.html

You can skip the "http://" url part if the domain starts with "www.":
  linkchecker www.example.de

You can skip the "ftp://" url part if the domain starts with "ftp.":
  linkchecker -r0 ftp.example.org

OUTPUT TYPES
Note that by default only errors and warnings are logged.
Use the --verbose option to see valid URLs; this is also
recommended when outputting a sitemap graph format.

text    Standard text output, logging URLs in keyword: argument fashion.
html    Log URLs in keyword: argument fashion, formatted as HTML.
        Additionally has links to the referenced pages. Invalid URLs have
        HTML and CSS syntax check links appended.
csv     Log check result in CSV format with one URL per line.
gml     Log parent-child relations between linked URLs as a GML sitemap
        graph.
dot     Log parent-child relations between linked URLs as a DOT sitemap
        graph.
gxml    Log check result as a GraphXML sitemap graph.
xml     Log check result as machine-readable XML.
sql     Log check result as SQL script with INSERT commands. An example
        script to create the initial SQL table is included as create.sql.
blacklist
        Suitable for cron jobs. Logs the check result into a file
        ~/.linkchecker/blacklist which only contains entries with invalid
        URLs and the number of times they have failed.
none    Logs nothing. Suitable for debugging or checking the exit code.
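
For instance, a DOT sitemap graph could be written to a file with the -F
syntax documented above (the filename and encoding here are illustrative):

  linkchecker --verbose -F dot/ascii/sitemap.dot http://www.example.org/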

REGULAR EXPRESSIONS
Only Python regular expressions are accepted by LinkChecker.
See http://www.amk.ca/python/howto/regex/ for an introduction to
regular expressions.

The only addition is that a leading exclamation mark negates
the regular expression.
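
For example, negation could be combined with --ignore-url so that only
URLs on the target site are fully checked and everything else is merely
syntax-checked (the pattern itself is illustrative):

  linkchecker --ignore-url='!^https?://www\.example\.org/' http://www.example.org/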

COOKIE FILES
A cookie file contains standard HTTP header (RFC 2616) data with the following
possible names:
Scheme (optional)
 Sets the scheme the cookies are valid for; default scheme is 'http'.
Host (required)
 Sets the domain the cookies are valid for.
Path (optional)
 Gives the path the cookies are valid for; default path is '/'.
Set-cookie (optional)
 Set cookie name/value. Can be given more than once.

Multiple entries are separated by a blank line.

The example below will send two cookies to all URLs starting with
'http://example.org/hello/' and one to all URLs starting
with 'https://example.com/':

Host: example.org
Path: /hello
Set-cookie: ID="smee"
Set-cookie: spam="egg"

Scheme: https
Host: example.com
Set-cookie: baggage="elitist"; comment="hologram"
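
Such a file could then be passed to the --cookiefile option shown above
(the filename is illustrative):

  linkchecker --cookiefile=cookies.txt http://example.org/hello/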

PROXY SUPPORT
To use a proxy on Unix or Windows set $http_proxy, $https_proxy or $ftp_proxy
to the proxy URL. The URL should be of the form
"http://[<user>:<pass>@]<host>[:<port>]".
LinkChecker also detects manual proxy settings of Internet Explorer under
Windows systems, and gconf or KDE on Linux systems.
On a Mac use the Internet Config to select a proxy.

LinkChecker honors the $no_proxy environment variable. It can be a list
of domain names for which no proxy will be used.
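
A sketch, assuming $no_proxy takes a comma-separated list:

  export no_proxy="localhost,example.com"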

For example, setting an HTTP proxy on Unix looks like this:

  export http_proxy="http://proxy.example.com:8080"

Proxy authentication is also supported:

  export http_proxy="http://user1:mypass@proxy.example.org:8081"

Setting a proxy on the Windows command prompt:

  set http_proxy=http://proxy.example.com:8080

NOTES
 o URLs on the command line starting with "ftp." are treated like
   "ftp://ftp.", URLs starting with "www." are treated like "http://www.".
   You can also give local files as arguments.
 o If you have your system configured to automatically establish a
   connection to the internet (e.g. with diald), it will connect when
   checking links not pointing to your local system.
   See the --ignore-url option on how to prevent this.
 o Javascript links are currently ignored.
 o If your platform does not support threading, LinkChecker disables it
   automatically.
 o You can supply multiple user/password pairs in a configuration file.
 o When checking 'news:' links the given NNTP host doesn't need to be the
   same as the host of the user browsing your pages.

RETURN VALUE
The return value is non-zero when
 o invalid links were found,
 o warnings were found and warnings are enabled, or
 o a program error occurred.
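
Because of this, the exit code can drive scripts directly; a minimal
sketch using the quiet mode described above:

  linkchecker -q http://www.example.org/ || echo "problems found"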

IGNORE WARNINGS
The following warnings are recognized in the 'ignorewarnings' config
file entry:
 o file-missing-slash - The file: URL is missing a trailing slash.
 o file-system-path - The file: path is not the same as the system specific path.
 o ftp-missing-slash - The ftp: URL is missing a trailing slash.
 o http-cookie-store-error - An error occurred while storing a cookie.
 o http-empty-content - The URL had no content.
 o ignore-url - The URL has been ignored.
 o mail-no-mx-host - The mail MX host could not be found.
 o nntp-no-newsgroup - The NNTP newsgroup could not be found.
 o nntp-no-server - No NNTP server was found.
 o url-content-size-zero - The URL content size is zero.
 o url-content-too-large - The URL content size is too large.
 o url-effective-url - The effective URL is different from the original.
 o url-error-getting-content - Could not get the content of the URL.
 o url-obfuscated-ip - The IP is obfuscated.
 o url-too-long - The URL is longer than the recommended size.
 o url-whitespace - The URL contains leading or trailing whitespace.
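
These names go into the configuration file; a minimal sketch, assuming
ignorewarnings lives in the [filtering] section and takes a
comma-separated list (see the linkcheckerrc man page):

  [filtering]
  ignorewarnings=url-whitespace,url-content-size-zero
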
wummel commented 10 years ago

No, you have to use a configuration file for that. See http://wummel.github.io/linkchecker/man5/linkcheckerrc.5.html
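
A minimal sketch of such a configuration for this issue's case (the
regex is illustrative, not taken from the man page):

  # ~/.linkchecker/linkcheckerrc
  [filtering]
  # treat every subdomain of example.com as internal, so recursion
  # follows www.example.com, random.example.com, etc.
  internlinks=^https?://([^/.]+\.)*example\.com

With that in the default config location, a plain run should then
recurse into the subdomains:

  linkchecker http://example.com/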

bootstraponline commented 10 years ago

Thanks.