When ChatGPT first appeared on the scene in 2022, there was a ripple effect in the world of library science. Because librarianship centers on evaluating and using information sources and systems, many librarians recognized right away both the power and the problems of generative AI as a source of information.
Humanities librarian Laurie Bridges said one of the first red flags for librarians came when patrons began asking for books and materials that they’d learned about from AI chatbots but that didn’t actually exist. Those ‘hallucinations’ generated by AI raised questions not only about the technology’s flaws, but about information literacy in general, and Bridges was quick to dive in.
She is now director of OSU’s brand-new AI Literacy Center, a collaborative effort between OSU Libraries and The Center for Teaching and Learning. The Center, which was approved in March, was created after Dean of Libraries Anne-Marie Deitering saw the need to help students, faculty and staff understand generative AI, and reached out to other campus entities to pull in expertise and create trainings and talks related to the topic.
“We are focused on educating everyone about how generative AI works, which is important to understand whether you use it or not,” Bridges said. “What data goes into genAI? How was the data obtained? How are humans involved in the process of cleaning the data? Who financially benefits from AI, and who is harmed? How does AI process that data? Is there a transparent process? When people are more informed, they can make more thoughtful choices about how and when to use AI, if they use it at all.”
Bridges considers herself an AI pragmatist, not a champion. Each question she’s raised about AI has created another, and the topic itself is complex and fascinating. So while she would never say that someone should or should not use generative AI, she wants people to learn as much as they can and make informed choices.
“There is a diversity of opinions and feelings about generative AI in the library and across campus,” she said. “The only way forward is to engage in more conversation and discussion among the diverse viewpoints and discipline-specific perspectives.”
One of the opportunities the AI Literacy Center provides is a place for researchers to explore these questions. For example, two AI Literacy Center fellows, Anna Guasco and Demian Hommel, are researching environmental concerns about AI usage, such as the large amount of power and cooling required to run data centers.
Bridges said different populations have different questions surrounding generative AI. Students most often want help identifying what tools are considered AI tools, and whether or not they’re allowed to use them in their work. Faculty want to know how they can teach with, or around, generative AI.
For Bridges, it’s crucial for AI users to understand that AI is a machine created by humans, even though many interact with generative AI as if it’s another being. And that machine is imperfect.
“AI is powered by algorithms, and those algorithms are not neutral; instead, they reflect the data and decisions made by humans,” she said. “This creates algorithmic bias, where historical or systemic biases in the data manifest in AI outputs. On top of that, there’s ‘automation bias,’ the human tendency to trust machine outputs even when contradictory information suggests otherwise. People often assume that because AI uses data, its answers are objective. In reality, AI can amplify existing biases.”
Bridges encourages the OSU community to attend the center’s free events and trainings, which are listed on its website. She also wants to connect with anyone at OSU who is conducting research on topics related to AI literacy, especially those concerned with the social, economic and political dimensions of generative AI.
“Generative AI is a complicated topic,” Bridges said, “which is why the AI Literacy Center exists: to help faculty and students understand these issues and make informed decisions.”
~ Theresa Hogue