This bug in robots.txt processing when the file contains multiple User-Agent groups has already been fixed in 9c2c989 and will be published in the next version in the next few days.
You can also use the --ignore-robots-txt flag. It is listed in the README.md but missing from the documentation website; it will be added there when the next version is released.
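Not this project's code, just a tool-independent sketch of the group matching involved, using Python's standard-library robots.txt parser: a Disallow group scoped to one user agent should not affect any other agent. The SomeBot/AnotherBot names and example.com URL are placeholders.

```python
from urllib.robotparser import RobotFileParser

# A robots.txt with two groups: a permissive catch-all group and a
# group that blocks only one specific user agent.
ROBOTS_TXT = """\
User-Agent: *
Disallow:

User-Agent: SomeBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Only the agent named in the restrictive group should be blocked.
print(parser.can_fetch("SomeBot", "https://example.com/page"))     # False
print(parser.can_fetch("AnotherBot", "https://example.com/page"))  # True
```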
In my robots.txt I have a directive that disallows crawling for a specific User-Agent.
The SEO tab explains that this is due to the robots.txt configuration.
If I remove the Bytespider directive, the site is crawled correctly.
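The exact file isn't included here, but the configuration described would look roughly like this, with a permissive catch-all group plus a group targeting Bytespider:

```
# Everyone else may crawl the whole site
User-Agent: *
Disallow:

# Block only Bytespider
User-Agent: Bytespider
Disallow: /
```

With rules like these, only requests identifying as Bytespider should be refused; any other crawler should still be allowed.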
Would it be possible to have a setting to ignore specific user-agent rules (or even to ignore the robots.txt file entirely)?