Robots.txt Generator

Generate robots.txt files to control how search engines crawl your website. Block specific directories, allow certain bots, and set crawl rules.

How to Use

  1. Select user agents to configure
  2. Add allow/disallow rules
  3. Set crawl delay if needed
  4. Add your sitemap URL
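
Following these steps, the generator produces a plain text file along these lines (the paths and sitemap URL are placeholders for your own values):

    User-agent: *
    Disallow: /admin/
    Disallow: /tmp/
    Crawl-delay: 10
    Sitemap: https://example.com/sitemap.xml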

About robots.txt

The robots.txt file tells search engine crawlers which pages or sections of your site they can or cannot access. It must be placed in the root directory of your website.

Use "Disallow" to block crawlers from specific paths like /admin/, /private/, or /tmp/. Use "Allow" to permit access to specific files within blocked directories.

The Crawl-delay directive tells bots to wait a specified number of seconds between requests, which can help reduce server load. However, Googlebot ignores this directive.
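
For example, to ask crawlers that honor the directive (such as Bingbot) to wait ten seconds between requests:

    User-agent: Bingbot
    Crawl-delay: 10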

Always reference your sitemap in robots.txt using the Sitemap directive. This helps search engines find your sitemap without needing to submit it manually.
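
The directive takes an absolute URL, can appear anywhere in the file, and may be repeated if you have more than one sitemap (both URLs below are placeholders):

    Sitemap: https://example.com/sitemap.xml
    Sitemap: https://example.com/news-sitemap.xml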

FAQ

What is robots.txt?
Robots.txt is a text file that tells search engine crawlers which pages they can or cannot crawl on your website.
Will robots.txt hide pages from Google?
No, it only prevents crawling. Pages can still appear in search results if linked from other sites. Use noindex for true exclusion.
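
To keep a page out of search results entirely, add a noindex rule to the page itself, for example with a robots meta tag, and make sure the page is not blocked in robots.txt (crawlers can only see the noindex if they are allowed to fetch the page):

    <meta name="robots" content="noindex">
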
Where do I put robots.txt?
Place it in the root directory of your website, for example: https://example.com/robots.txt
What is a crawl delay?
Crawl delay tells bots to wait a specified number of seconds between requests. Useful for reducing server load.
Should I block all bots?
No, blocking all bots prevents search engines from crawling your site. Only block the paths you actually need to protect.
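
For reference, these two lines block every compliant crawler from the entire site, which is normally only appropriate for staging or private sites:

    User-agent: *
    Disallow: /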