t1gor / Robots.txt-Parser-Class

PHP class for robots.txt parsing
MIT License

Prettify function #67

Open szepeviktor opened 8 years ago

szepeviktor commented 8 years ago

It would be nice to output a nice and valid robots.txt.

fte378 commented 8 years ago

This already looks good, except that there is no ordering by match length and no special placement of the match-all clause.

I don't know if that's the case, but a parser might match "User-agent: *" if it is not the last entry, even when there is a more specific entry like "User-agent: GoogleBot". "User-agent: *" should always be the last entry in the generated file.

The same may hold true for the paths. I doubt that a parser matches "/themes/flowers" if "/themes" comes first. So the entries within the User-agent blocks should be ordered with the longest matching paths first and the shorter ones later. For example:

User-agent: *
Allow: /themes/flowers/roses
Disallow: /themes/flowers
Allow: /themes

fte378 commented 8 years ago

I quickly checked the Google and Yandex specs. They say that more specific paths override more general ones. However, there may be other, less capable parsers out there. So the ordering has two use cases: satisfying less capable parsers and making the file more readable for humans.

szepeviktor commented 8 years ago

Implemented. @fte378 Would you open a PR and solve the disabling of $this->convert('self::prepareRegexRule');?

// Disable
//$this->convert('self::prepareRegexRule');

$robots = $parser->getRules();
$output = array();

// Make match-all clause the last
if ( array_key_exists( '*', $robots ) ) {
    $match_all = $robots['*'];
    unset( $robots['*'] );
    $robots['*'] = $match_all;
}

foreach ( $robots as $ua => $rules ) {
    $output[] = 'User-agent: ' . $ua;
    foreach ( $rules as $name => $value ) {
        // ucfirst() is not multibyte-safe
        $nice_name = ucfirst( $name );
        if ( is_array( $value ) ) {
            // Longest paths first, shorter ones later
            usort( $value, function ( $a, $b ) {
                // usort() expects an integer comparison result, not a boolean
                return mb_strlen( $b ) - mb_strlen( $a );
            } );
            foreach ( $value as $elem ) {
                $output[] = $nice_name . ': ' . $elem;
            }
        } else {
            $output[] = $nice_name . ': ' . $value;
        }
    }
    $output[] = '';
}

$sitemaps = $parser->getSitemaps();

foreach ( $sitemaps as $sitemap ) {
    $output[] = 'Sitemap: ' . $sitemap;
}
$output[] = '';

print implode( PHP_EOL, $output );

fte378 commented 8 years ago

@szepeviktor I'm only an end user of your WordPress plugin :) I know what a pull request is, but I don't have anything to do with this library.

szepeviktor commented 8 years ago

@t1gor @JanPetterMG Could you add this as a new function and solve the disabling of $this->convert('self::prepareRegexRule');?

JanPetterMG commented 8 years ago

I'm quite busy right now, but give me a day or two and I'll come back to this. Stay tuned.

szepeviktor commented 8 years ago

OK!

JanPetterMG commented 8 years ago

@fte378 Just want to hear your opinion: why do the user agents really have to be sorted Z-A (which normally results in * being the last one)? I know there are a lot of pros/cons here, but if the file is parsed by this library, it shouldn't make any difference anyway. Do you plan to use this library to render an existing robots.txt and then publish it, or otherwise use the rendered result outside of this library?

Only one group of group-member records is valid for a particular crawler. The crawler must determine the correct group of records by finding the group with the most specific user-agent that still matches. All other groups of records are ignored by the crawler. The user-agent is non-case-sensitive. All non-matching text is ignored (for example, both googlebot/1.2 and googlebot* are equivalent to googlebot). The order of the groups within the robots.txt file is irrelevant.

Source: https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt#order-of-precedence-for-user-agents

I know the UA matching code in this library has a lot of bugs, but at least it selects the correct UA in most cases. I'll probably fix that within a couple of days...
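
For illustration only, here is a minimal sketch of the precedence rule quoted above. This is a hypothetical helper, not part of this library, and it assumes $groups is an array keyed by user-agent, roughly the shape getRules() returns:

// Pick the group with the most specific user-agent that still matches
// the crawler's product token; "*" only wins when nothing else matches.
function selectUserAgentGroup( array $groups, $productToken )
{
    $productToken = strtolower( $productToken ); // matching is case-insensitive
    $best = null;
    $bestLength = -1;
    foreach ( array_keys( $groups ) as $ua ) {
        // "googlebot*" is treated the same as "googlebot"; "*" becomes ""
        $candidate = strtolower( rtrim( $ua, '*' ) );
        if ( $candidate !== '' && strpos( $productToken, $candidate ) !== 0 ) {
            continue; // this group does not match the crawler at all
        }
        if ( strlen( $candidate ) > $bestLength ) {
            $best = $ua; // longest, i.e. most specific, match so far
            $bestLength = strlen( $candidate );
        }
    }
    return $best;
}

// selectUserAgentGroup( array( '*' => array(), 'googlebot' => array() ), 'Googlebot/2.1' )
// returns 'googlebot', not '*', regardless of the order of the groups.
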

szepeviktor commented 8 years ago

It is going into my plugin: https://wordpress.org/plugins/multipart-robotstxt-editor/

szepeviktor commented 8 years ago

*Added sitemaps.

szepeviktor commented 8 years ago

@JanPetterMG Could you guide me to disable $this->convert('self::prepareRegexRule');?

JanPetterMG commented 8 years ago

@szepeviktor sorry for the delay, I'll take a look at it

szepeviktor commented 8 years ago

Thank you for your help. I appreciate it.

JanPetterMG commented 8 years ago

How about now? $parser->render();
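
A rough usage sketch, assuming the constructor takes the raw robots.txt content as in the library's README examples:

$parser = new RobotsTxtParser( file_get_contents( 'http://example.com/robots.txt' ) );
echo $parser->render();
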

szepeviktor commented 8 years ago

Oh! Thanks. https://github.com/t1gor/Robots.txt-Parser-Class/commit/814737ccf1feb358e9beb4819e31b4b3bdc8e79d

JanPetterMG commented 8 years ago

It's PHP 5.4 compatible too. I pushed https://github.com/t1gor/Robots.txt-Parser-Class/commit/b74ad596468f4927e0de290f371eb5c27f0b9efb a few minutes later, with a build fix...

Still, there is work to do:

szepeviktor commented 8 years ago

ucfirst() is implemented above.

JanPetterMG commented 8 years ago

@szepeviktor I know, but that doesn't fix the inline directives. This is what it outputs:

Disallow: clean-param:token path // invalid path
Disallow: host:a.b // invalid host

It should be something like:

Disallow: Clean-param: token /path
Disallow: Host: example.com
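
A minimal sketch of the kind of normalization meant here. This is a hypothetical helper, not this library's code, and it only handles the capitalization and spacing of inline directive values:

function prettifyInlineDirective( $value )
{
    // Plain paths contain no inline directive and are returned unchanged
    if ( strpos( $value, ':' ) === false ) {
        return $value;
    }
    // Split "clean-param:token path" into the directive name and its payload
    list( $directive, $rest ) = explode( ':', $value, 2 );
    return ucfirst( strtolower( $directive ) ) . ': ' . ltrim( $rest );
}

// prettifyInlineDirective( 'clean-param:token /path' ); // "Clean-param: token /path"
// prettifyInlineDirective( 'host:example.com' );        // "Host: example.com"
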
szepeviktor commented 8 years ago

Doesn't Host belong on a separate line? https://yandex.com/support/webmaster/controlling-robot/robots-txt.xml#host

JanPetterMG commented 8 years ago

Rendering of the (regular) Host directive is now implemented: https://github.com/t1gor/Robots.txt-Parser-Class/commit/627082db6d76b1fd31890012d6a65d0e51369642

Host: example.com

I'm not sure where I saw it, but there were some robots.txt files where it was used as an "in-line" directive. It doesn't come from any of the Yandex examples, Google doesn't support them, and these two directives aren't listed in any draft either, so I don't have many references for it...

Anyway, Yandex states the following, whichever interpretation is right or wrong...

However, the Host directive is intersectional, so it will be used by the robot regardless of its location in robots.txt.

and

The Clean-Param directive is intersectional, so it can be indicated in any place within the robots.txt file.

There is also the Cache-delay directive (implemented). I don't know where it comes from and haven't found any links or references to date; still, I can list a couple of handfuls of robots.txt URLs where it's been used...
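
For example, an illustrative robots.txt (example.com as a placeholder) where both intersectional directives sit at the end of the file yet still apply globally, per the Yandex wording quoted above:

User-agent: Yandex
Disallow: /search/

# Intersectional: applies regardless of its position in the file
Clean-param: token /path
Host: example.com
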

szepeviktor commented 8 years ago

Do you plan to increase coverage? https://codeclimate.com/github/t1gor/Robots.txt-Parser-Class/source/robotstxtparser.php There is no direct link from Source -> Coverage.

szepeviktor commented 8 years ago

@fte378 It seems ready to be integrated into the robots.txt plugin.

szepeviktor commented 8 years ago

Reopening for the todos in https://github.com/t1gor/Robots.txt-Parser-Class/issues/67#issuecomment-238948354

JanPetterMG commented 8 years ago

There's a link on the front page, at the top of the README.md to be exact (the red box). From there it should be pretty easy to get an overview, and even view the coverage line by line...

There are a lot more bugs in this library than I expected, so I'm not sure how much time I'm going to spend fixing things... It should be refactored and rewritten, the sooner the better...

Take a look at VIPnytt/RobotsTxtParser. It's originally based on this library, but refactored and built from scratch as of version 2.0. Any bugs/issues from t1gor/Robots.txt-Parser-Class are cross-checked and eliminated in VIPnytt/RobotsTxtParser... Coverage: 96.5%. It even has an integrated render function, a little better than the one included in this library :smile:

szepeviktor commented 8 years ago

Why reinvent the wheel?

JanPetterMG commented 8 years ago

Yeah, why fix this library at all, when VIPnytt/RobotsTxtParser has everything this library is missing??

I needed to handle caching, proper rendering, automatic download, proper user-agent matching, and it had to be as bug-free as possible, for my spider, so that I can crawl up to 10k webpages a day without having to worry about anything... That's something this library will never be able to do :p That's why I've built it :D

szepeviktor commented 8 years ago

I do not care. I've just released this class: https://github.com/szepeviktor/multipart-robotstxt-editor/commit/71e0a877ddd15cda4f7e7d4ec6410ddbe4f2e791

t1gor commented 6 years ago

Yeah, why fix this library at all, when VIPnytt/RobotsTxtParser has everything this library is missing??

I needed to handle caching, proper rendering, automatic download, proper user-agent matching, and it had to be as bug-free as possible, for my spider, so that I can crawl up to 10k webpages a day without having to worry about anything... That's something this library will never be able to do :p That's why I've built it :D

Seems like you've just moved most of the code and the idea to your own library now :( I'm not really sure how to behave in this situation...

Taking a closer look at your lib, I have several suggestions. Maybe we could join efforts and make a better version of the parser together?

JanPetterMG commented 6 years ago

@t1gor You're using the MIT license, which allows me to do pretty much anything. I've moved the idea, but 95% of the code is unique. The only reasons behind this decision were too many bugs, lots of missing features, and that backwards compatibility would be really hard to maintain...

Anyways, I'm happy to join forces again; my library is rather over-engineered and could be simplified. I've included support for every rule that has ever existed, including drafts, but I'm open to dropping support for some of them, as they're practically never used on the internet anymore...