Currently we only check the meta tags to see whether a page can be indexed.
A better, more robust solution would also validate against the site's robots.txt file (to check for a matching Disallow rule) and inspect the X-Robots-Tag HTTP header. A sketch of both extra checks follows below.
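A minimal sketch of what those two checks could look like, using only the Python standard library. The `is_indexable` helper, the user agent string, and the HEAD-request approach are illustrative assumptions, not the project's existing code:

```python
from urllib import robotparser, request
from urllib.parse import urlsplit, urlunsplit

def is_indexable(url: str, user_agent: str = "MyCrawler") -> bool:
    """Check robots.txt and the X-Robots-Tag header (in addition to meta tags)."""
    parts = urlsplit(url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    # 1. robots.txt: is this path disallowed for our user agent?
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()
    if not rp.can_fetch(user_agent, url):
        return False

    # 2. X-Robots-Tag header: does it contain "noindex" or "none"?
    # (A HEAD request avoids downloading the body; note the header may
    # appear multiple times, this sketch only reads the first occurrence.)
    req = request.Request(url, method="HEAD", headers={"User-Agent": user_agent})
    with request.urlopen(req) as resp:
        x_robots = resp.headers.get("X-Robots-Tag", "")
    directives = {d.strip().lower() for d in x_robots.split(",")}
    if directives & {"noindex", "none"}:
        return False

    return True
```

In practice the robots.txt fetch and parse would presumably be cached per host rather than repeated for every page.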