It is critical to evaluate the sources you use in research for credibility, including accuracy and authority. Text-generating AI tools like ChatGPT, however, have limitations for research: they can present inaccurate or unverifiable claims as fact, and they can fabricate ("hallucinate") citations to sources that do not exist.
To address these limitations of generative AI tools, take these steps:
1. Consider how you will use the information learned from AI-generated text. If the information is less important or the application is low-impact, a quick check may be enough to verify a claim. If the information is important or the application will have real-world impact, take extra steps to verify its claims.
2. When evaluating the accuracy of AI-generated text, you are evaluating the claim rather than the source. To check a claim, locate it in a separate, trusted source. Rather than asking yourself "who is behind this information?", ask "who can verify this information?"* For low-stakes claims, a simple Google or Wikipedia search may suffice. For higher-stakes claims, think about who is likely to care enough about the topic to publish about it, and search for the topic on government websites, in trusted news sources, or through research databases and library search engines.
3. When AI-generated text provides a citation, search directly for the source. Use Google Scholar or the library search tool to look for the specific article or book. If you cannot locate the source anywhere, the citation may have been hallucinated.
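If you have many AI-supplied citations to check, you can also automate a first pass. The sketch below is one possible approach, not part of the original guide: it queries the public Crossref API (which indexes most scholarly works that have DOIs) for a title and lists the closest matches. The example title and the find_citation helper are hypothetical placeholders, and a citation missing from Crossref is not proof of fabrication, so follow up in Google Scholar or the library search tool before drawing conclusions.

```python
import requests


def find_citation(title, rows=5):
    """Ask the public Crossref API for works whose metadata resembles `title`.

    Returns a list of (title, DOI) pairs for the closest matches; an empty
    list means nothing similar is indexed in Crossref.
    """
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [(item.get("title", ["(untitled)"])[0], item.get("DOI", "")) for item in items]


if __name__ == "__main__":
    # Hypothetical citation from a chatbot; replace with the one you are checking.
    matches = find_citation("Example Study of Example Outcomes in Undergraduate Research")
    if matches:
        for found_title, doi in matches:
            print(f"{found_title}  https://doi.org/{doi}")
    else:
        print("No close matches in Crossref; verify further before trusting the citation.")
```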
*Some of the content on this page was adapted from the University of Maryland's Artificial Intelligence (AI) and Information Literacy guide.