The robots.txt file is used to communicate with web robots (also known as web crawlers or spiders), the automated programs that crawl the web to index websites.
Search engines use these robots to discover pages to include in their search results.
You can use this site to learn what a robots.txt file is, how it works, how to create one, and how to use it to control the way robots interact with your website.
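As a rough sketch of how a well-behaved robot uses this file, the example below checks a URL against a site's robots.txt using Python's standard urllib.robotparser module. The site address and the crawler name "ExampleBot" are hypothetical placeholders, not anything defined by this site.

from urllib import robotparser

# Point the parser at the site's robots.txt and download it.
# (www.example.com is a placeholder; a real crawler would use
# the site it is about to visit.)
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# Before fetching a page, the robot asks whether the rules
# allow it for this crawler's name ("ExampleBot" is made up).
if rp.can_fetch("ExampleBot", "https://www.example.com/some/page.html"):
    print("robots.txt allows this page")
else:
    print("robots.txt disallows this page")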
What can you do with a robots.txt file?
You can tell robots that it's okay to crawl your website.
User-agent: *
Disallow:
You can tell robots not to crawl your website.
User-agent: *
Disallow: /

(The only difference from the previous example is the "/": an empty Disallow value blocks nothing, while "Disallow: /" matches every URL on the site.)
You can tell robots not to crawl certain parts of your website.
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~steve/
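To illustrate how robots interpret these rules, here is a short sketch using Python's standard urllib.robotparser module. The crawler name "AnyBot" and the example.com URLs are made-up placeholders; because the rules use the * wildcard, they apply to every robot.

from urllib import robotparser

# The exact rules from the example above, fed straight to the parser.
rules = [
    "User-agent: *",
    "Disallow: /cgi-bin/",
    "Disallow: /tmp/",
    "Disallow: /~steve/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Pages under the disallowed directories are blocked...
print(rp.can_fetch("AnyBot", "https://example.com/tmp/cache.html"))   # False
# ...while everything else remains crawlable.
print(rp.can_fetch("AnyBot", "https://example.com/about.html"))       # True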