OpenAI, the company behind ChatGPT, recently launched its own web crawler, GPTBot, to scrape websites for information. However, the company also released the crawler’s specifications so that website owners and publishers can block the bot from scraping their content.
In a technical document, OpenAI describes how to identify the crawler by its user agent token and full user-agent string. The document also explains how to block the crawler by adding an entry to a site’s robots.txt file.
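For reference, OpenAI’s documentation lists the token as GPTBot and a full user-agent string along the lines of the following (check the published specification for the current value):

    User agent token: GPTBot
    Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.0; +https://openai.com/gptbot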
What Does GPTBot Do and How to Block It?
Just like any other web crawler, GPTBot crawls websites, scanning pages and scraping information. What sets GPTBot apart from search engine indexing crawlers is the purpose of the scraped information – the gathered data is used to train the company’s AI models. This is part of OpenAI’s effort to develop its next generation of AI models, which reportedly includes GPT-5.
“Allowing GPTBot to access your site can help AI models become more accurate and improve their general capabilities and safety.” – OpenAI
OpenAI adds that pages crawled by the bot may be filtered to remove certain sources: those that require paywall access, are known to gather personally identifiable information, or contain text that violates its policies.
Of course, most website owners and publishers wouldn’t want to let the machine learning giant scrape their content and use it to train its AI models. The document published by OpenAI details how to block GPTBot, and the process is rather simple.
To disallow the web crawler from accessing a website entirely, all you have to do is add its user agent token to the site’s robots.txt file with a “Disallow: /” directive, as shown below.
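This is the entry OpenAI’s documentation gives for blocking the bot site-wide:

    User-agent: GPTBot
    Disallow: /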
The bot can also be blocked from certain pages of a website while being allowed access to the rest. For this, site owners combine “Allow” and “Disallow” directives and customize the paths as necessary, as in the example below.
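OpenAI’s documentation shows the following pattern, where directory-1 and directory-2 are placeholders to be replaced with a site’s own paths:

    User-agent: GPTBot
    Allow: /directory-1/
    Disallow: /directory-2/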
Growing Concerns Over AI Companies Scraping Information From the Internet
The web crawler is OpenAI’s latest acknowledgment that it trains its AI models on public information from the internet. Its launch coincides with growing efforts by different organizations to restrict automated access to their content.
Companies like OpenAI make millions of dollars in revenue by training their models on all sorts of information gathered from the internet. Frustrated at not getting a share of the profits earned by AI companies using their content, business owners are taking a stand by closing off access.
Twitter recently sued four unidentified entities to prevent data on its platform from being scraped and used to train AI models.
Reddit also changed its API terms, enabling the company to monetize the content its users create free of charge.
Not too long ago, OpenAI was also sued by award-winning comedian Sarah Silverman and other authors for training ChatGPT on their copyrighted works without consent. Other companies such as Microsoft, Google, and Google’s AI research arm DeepMind have faced similar lawsuits.
According to Israel Krush, CEO and co-founder of Hyro, the company behind an AI assistant used in the healthcare industry, the fact that publishers have to manually opt out of having their sites scraped by GPTBot is a major concern.
He added that while his own firm scrapes data from the internet, it does so only with explicit permission and ensures the appropriate handling of personal information.
Companies like Adobe have also suggested marking information as “not for AI training” through legal means. It remains to be seen whether any legal recourse will be taken to prevent GPTBot from scraping websites by default.