Like it or not, AI is the future—or at least, it’s an unavoidable part of it. From facial recognition on your phone and the recommendations in your streaming service to self-driving cars and diagnostics at your doctor’s office, AI is everywhere.
I recently finished the AI Literacy at NAU course in Canvas, designed for faculty and staff, so here’s a quick summary. The course, which takes about 90 minutes, is a guide for everyone who uses or is thinking about using AI. (Yes, that means you.) It covers the technical, practical, evaluative and ethical use of AI.
It starts by covering the different types of artificial intelligence out there, how they were designed and how humans can operate them as tools. It gives examples of how AI is used to complete tasks that might take humans a long time, like synthesizing and analyzing reports, translating documents and even coordinating changes in delivery systems in response to changing weather patterns.
The course later dives into the practical uses of AI by demonstrating how to use chatbots effectively and ethically. This is the part that new and current users of AI might find most helpful, since it gives tips on how to write prompts that draw out more comprehensive responses. It also goes over AI-generated images and how to identify them.
The evaluative module goes over the accuracy, bias and limitations of AI tools. It was useful to be reminded that even though AI has no consciousness, it can still be biased. Chatbot responses are based on human programming, and humans are fallible. That is why we always need to cross-reference AI responses to detect signs of misinformation and to maintain our own voice and accountability.
But don’t take my word for it. I asked AI.
“Yes, absolutely—you should always cross-check information from AI for accuracy and biases, especially in the following cases:
- All models like me can generate inaccurate or outdated information.
- AI is trained on large datasets, which can contain inherent biases—cultural, social, political or even harmful stereotypes.
- No context of your intentions. AI doesn’t always understand the full nuance or context of your needs, which means answers might not be fit for your purposes without human judgment.
- Some areas are high stakes. In fields like healthcare, law, finance or safety, relying solely on AI can lead to serious consequences if the information is wrong or incomplete.”
The last module goes over the importance of ethics when using AI and why disclosing its use matters for complying with NAU’s academic integrity and AI usage policies. It gives examples of how to cite the use of AI when analyzing and synthesizing research, writing papers or creating tests, and reminds us that AI is only a tool to aid our work, not a shortcut to do it for us.
The course ends with the do’s and don’ts of AI, the importance of laws to regulate it, and its environmental impacts, including water consumption and waste.
NAU’s AI Literacy course is helpful and informative for anyone who wants to dive a little deeper into the pros, cons and best uses of artificial intelligence. It provides a basic understanding of AI and information on how to make the most of it. And for those of you worried that it will take over the world, I will just say that it still relies heavily on human cognition to operate, so it is not ready to do that just yet.

(928) 523-5050 | mariana.laas@nau.edu


