Responsible AI: Building Tools and Frameworks for Transparent and Ethical AI Implementations
Artificial intelligence (AI) can be used in libraries and archives as a powerful tool for enhancing metadata, improving search and discovery, recommending resources, powering library chatbots, and more. However, AI systems may incorporate surveillance technologies that threaten user privacy, and AI often reflects the biases of our society due to biased training data—for example, facial recognition technology is worse at recognizing the faces of people of color when its training data is predominantly composed of white faces. This talk discusses the early activities of the IMLS-funded Responsible AI project, which examines this tension between innovating library services and protecting library communities. The Responsible AI team will present key takeaways from an environmental scan of AI projects in libraries and archives, along with our plans for an AI Harms Analysis tool that can ground AI software development and technology implementation. We’ll also discuss a call for case studies that illustrate ethical considerations and challenges encountered during AI project and tool implementation. Attendees will learn about the current state of AI implementation, how to think about AI transparency, and practical ideas for improving our services while building trust in our systems and mitigating the harms AI tools could inflict on our communities.