Generative AI chat tools such as ChatGPT or Microsoft Copilot are built on Large Language Models (LLMs), which are trained on very large amounts of text. They look for patterns in their training materials and reproduce these patterns as responses to prompts.
AI-generated responses or works may not be accurate, as a predicted pattern in a given response may not be the correct answer. In some cases, AI responses have included biased, outdated, or fabricated information. Inaccurate AI results have become known as AI hallucinations. According to a New York Times article (2023), ChatGPT makes things up about 3% of the time, while one Google system's hallucination rate was as high as 27%. AI hallucinations may include incorrect or fabricated citations to authentic-sounding sources.
AI-generated summaries and any sources of information (links) they provide should be checked for accuracy, and even existence, before use.
There are a number of ways to check whether a citation generated by AI is correct:
Search the entire title of an article or book in Novanet, the library catalogue.
Check the authors' names; do they match the names in the AI citation?
If an article citation includes a DOI, check to make sure it leads to an article on a journal or publisher's platform.
Use the Crossref search tool: copy the DOI (without the https://doi.org/ prefix) into the search box. A scripted version of this check is sketched after this list.
Check the author and title to make sure it is the same article.
Copy the title of the item into Google Scholar or Google Books.
Verify that the item is the same, checking the author, title, date, etc.
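For those comfortable with a little scripting, the Crossref lookup can also be automated through the public Crossref REST API (api.crossref.org). The sketch below is a minimal Python example, assuming the third-party requests library is installed; it takes a DOI, asks Crossref for the record, and prints the title and authors on file so they can be compared against the AI-generated citation. The DOI in the example is only a placeholder.

```python
import sys
import requests  # third-party: pip install requests


def check_doi(doi: str) -> None:
    """Look up a DOI in the Crossref REST API and print the recorded
    title and authors for comparison with an AI-generated citation."""
    url = f"https://api.crossref.org/works/{doi}"
    response = requests.get(url, timeout=10)

    # Crossref returns 404 when the DOI is not registered at all,
    # which is a strong sign the citation may be fabricated.
    if response.status_code == 404:
        print(f"DOI not found in Crossref: {doi} (possibly a fabricated citation)")
        return
    response.raise_for_status()

    work = response.json()["message"]
    title = work.get("title", ["<no title recorded>"])[0]
    authors = ", ".join(
        f"{a.get('given', '')} {a.get('family', '')}".strip()
        for a in work.get("author", [])
    ) or "<no authors recorded>"

    print(f"DOI:     {doi}")
    print(f"Title:   {title}")
    print(f"Authors: {authors}")


if __name__ == "__main__":
    # Placeholder DOI; replace with the DOI from the citation being checked.
    check_doi(sys.argv[1] if len(sys.argv) > 1 else "10.1000/example")
```

A DOI that resolves is not by itself proof the citation is genuine: the title and author names printed by the script still need to match the ones in the AI-generated citation, just as in the manual check above.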