
Artificial Intelligence: AI Resources

Interested in AI? This guide is for you! Direct any questions to the UM Library's Artificial Intelligence Committee.

Free Tools

Many AI tools require subscriptions to access. Below are a few tools that do not require subscriptions or have free versions comparable to the premium versions. 

  • ChatGPT - A generative AI chatbot that uses natural language processing. It can be used (with limits) without an account or with a free account. 
  • Copilot - Microsoft's generative AI chatbot. UM affiliates have access to Copilot through their olemiss.edu email.
  • Gemini - Formerly known as Bard, Gemini is Google's generative AI chatbot. 
  • GDELT Project - The GDELT Project monitors worldwide news via print, broadcast, and web resources in 100 languages. It updates with current news every 15 minutes and has a historical archive that dates back to 1979. 
  • Semantic Scholar - An AI-powered research database. It generates short summaries of articles and personalized article recommendations. Search results can be sorted to show the most highly cited articles first.
  • Rayyan - An assistant for building systematic reviews. It can be accessed anywhere and worked on collaboratively. Visit the library's guide on building systematic reviews with Rayyan.

 

This list does not signify the library's endorsement. Individuals should use their best judgment when using any product. Check with your instructors if you are using AI programs to complete classroom assignments. 

Assessing Tools & Output

Vetting AI tools helps protect user privacy and data and improves the quality of the experience. Many new AI programs require subscriptions or other payment, which is another reason to assess a tool before adopting it. Review any product before sharing personal or sensitive information.

To begin assessing an AI tool, consider the following:

Performance 

  • How well does the tool achieve its intended outcome? See the output assessment criteria below for more details.

Usability 

  • How easy is the program to use? Does it require a lot of experience or practice? 

Bias

  • Who owns the program? Who created the AI model? What is the business model? 

Adaptability

  • Can the program adapt to new requests, additional parameters, feedback, etc.? 

Transparency 

  • Does the program provide information on how the model was built? Can you find the privacy policy? Is your usage data used to retrain the model? 

An important part of tool assessment is testing the output. Assess a program's output for the following criteria (a simple scoring sketch follows this list):

Accuracy 

  • How accurate is the information in the output? 

Relevance 

  • Is the output relevant to the search query? 

Completeness

  • Does the output cover all parts of the query? 

Efficiency 

  • How long did it take to generate the output, or to generate it properly?

Clarity 

  • Is the output clear and easy to understand? 
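To make these criteria easier to apply, they can be recorded as a simple scoring checklist. The short Python sketch below is purely illustrative: the 1-to-5 scale, the function name, and the example scores are assumptions for demonstration, not a standard rubric.

```python
# Hypothetical output-assessment checklist; the criteria mirror the list above.
# The 1-5 scale and the example scores are illustrative, not an official rubric.

criteria = ["accuracy", "relevance", "completeness", "efficiency", "clarity"]

def assess_output(scores: dict[str, int]) -> float:
    """Average the 1-5 scores, making sure every criterion was rated."""
    missing = [c for c in criteria if c not in scores]
    if missing:
        raise ValueError(f"Missing scores for: {', '.join(missing)}")
    return sum(scores[c] for c in criteria) / len(criteria)

# Example: output that was accurate and clear but slow to generate.
example = {"accuracy": 5, "relevance": 4, "completeness": 4, "efficiency": 2, "clarity": 5}
print(f"Average score: {assess_output(example):.1f} out of 5")
```

The same approach can be extended to the tool-assessment criteria above (performance, usability, bias, adaptability, and transparency).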

Subscription Tools

These tools require a subscription to access their full features. Costs are subject to change.

 

  • ChatGPT Plus - The premium version of ChatGPT that costs $20 per month.  
  • Otter - A note-taking and transcription tool. A basic version is free; the Pro version, with greater capabilities, costs $10 per month.
  • Scite - A research tool that uses AI to provide context for scholarly citations (e.g., whether the articles citing a paper support or dispute its claims).
  • Grammarly - A writing assistant that can identify misspellings, grammatical mistakes, tone inconsistencies, and more. Once identified, Grammarly will make suggestions for improvement.

ROBOT Test

Another way to assess an AI tool is to use the ROBOT Test. 

R.O.B.O.T.: Reliability

The R in the ROBOT test stands for reliability. Check how reliable the program seems to be.

  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To cite in APA: Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

R.O.B.O.T.: Objective

The O in the ROBOT test stands for objective. Investigate what the program's objective is.

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?

R.O.B.O.T.: Bias

The B in the ROBOT test stands for bias.

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?

R.O.B.O.T.: Ownership

The second O in the ROBOT test stands for ownership. Who owns the tool? Who owns the software? They may not be the same.

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?

R.O.B.O.T.: Type

The T in the ROBOT test stands for type. What type of AI is being used?

  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

Integrated AI Technology

More and more companies are introducing their own AI features or assistants on their platforms, including databases the library subscribes to. Some of these new "AI" updates are existing features rebranded as AI, such as "related articles" suggestions that point to other articles in the same database.

Platforms will sometimes use a RAG (Retrieval-Augmented Generation) model, which combines conventional database retrieval with a large language model (LLM) so that responses are grounded in content from the site itself. Features using RAG models can be more accurate than a standalone LLM, but no AI model is perfect, so users should always think critically about any output before using the information.
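To make the idea concrete, the sketch below shows a toy RAG pipeline in Python: a simple keyword search retrieves the most relevant passages, and those passages are folded into the prompt that would be sent to an LLM. The tiny corpus, the overlap-based scoring, and the prompt wording are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The tiny corpus, keyword-overlap scoring, and prompt format are illustrative
# assumptions; real platforms use full-text or vector search and a hosted LLM.

corpus = {
    "doc1": "GDELT monitors worldwide news in more than 100 languages.",
    "doc2": "Semantic Scholar generates short summaries of research articles.",
    "doc3": "Rayyan helps teams screen studies for systematic reviews.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by how many query words they share, then keep the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        corpus.values(),
        key=lambda text: len(query_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Combine the retrieved passages with the user's question."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )

# In a real RAG system, this prompt is sent to an LLM, which generates an
# answer grounded in the retrieved passages rather than in its training data alone.
print(build_prompt("Which tool helps with systematic reviews?"))
```

Because the model is asked to answer from retrieved sources rather than from memory alone, its responses are easier to check against the underlying database, which is why RAG features tend to be more accurate, though still not infallible.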