The Websites Used to Train AI Identified by The Washington Post

IBL News | New York

Chatbots mimic human speech because the AI that powers them has ingested a huge amount of text, mostly scraped from the Internet. If they can ace the bar exam, it's because their training data included thousands of practice sites.

The Washington Post analyzed the websites used to train AI, although companies like OpenAI haven't disclosed which datasets they used.

The newspaper worked with researchers at the Allen Institute for AI to categorize the websites, using data from analytics firm Similarweb, and organized them into a tree map of 11 categories.

The analysis looked inside Google's C4 data set, which includes 15 million websites from journalism, entertainment, software development, medicine, and content creation, among other industries. Facebook's LLaMA was trained on it.
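At its core, a ranking like the Post's comes down to counting how many documents each domain contributes to the dataset. A minimal sketch of such a tally, assuming a list of document URLs as input (the `tally_domains` helper and the sample URLs are illustrative, not part of any actual C4 tooling):

```python
from collections import Counter
from urllib.parse import urlparse

def tally_domains(urls):
    """Count how many documents each domain contributes."""
    counts = Counter()
    for url in urls:
        host = urlparse(url).netloc.lower()
        # Strip a leading "www." so www.example.com and example.com merge.
        if host.startswith("www."):
            host = host[4:]
        counts[host] += 1
    return counts

# Hypothetical sample of document URLs, for illustration only.
sample = [
    "https://patents.google.com/patent/US123",
    "https://en.wikipedia.org/wiki/C4",
    "https://www.scribd.com/doc/42",
    "https://patents.google.com/patent/US456",
]
print(tally_domains(sample).most_common(2))
```

Sorting the resulting counts from largest to smallest yields the kind of "biggest sites" ranking the article describes.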

The three biggest sites were patents.google.com (which contains text from patents issued around the world), wikipedia.org, and scribd.com (a subscription-only digital library). Also on the list: b-ok.org, a notorious market for pirated e-books, along with 27 other sites identified by the U.S. government as markets for piracy and counterfeits.

The analysis also ranked the top sites in the business & industrial category.

Top news sites included RT.com (the Russian state-backed propaganda site) and VDARE.com (an anti-immigration site), among others.

Top religious sites formed another category in the ranking.

Top technology sites included the blogging platforms WordPress, Tumblr, Blogspot, and LiveJournal, among others.

The data sets used to train AI could not draw on social networks like Facebook and Twitter, which prohibit scraping.

Search Engine Land: Search the 15.7 million websites in Google’s C4 dataset