Library at Queen’s University Belfast: Partnering on Responsible and Effective AI
Executive Summary
Artificial Intelligence (AI) is reshaping discovery, research, and scholarly communication across Queen’s. This paper sets out the Library’s collaborative role as an essential partner—bringing expertise in information ethics, scholarly communication, rights and licensing, and critical AI literacy. It proposes a constructive programme that balances innovation with integrity, clarifies governance, and defines how the Library will coordinate activity with University partners (AI Hub, Digital & Information Services, Research, Schools, Graduate School, Ethics/Information Compliance).
Purpose & Scope
Purpose: articulate the Library’s position and priorities for responsible engagement with AI across discovery and scholarly workflows; provide guidance for literacy, ethical assurance, procurement, and cross-University collaboration.
Scope: AI-enabled discovery and summarisation resources, scholarly communication support, staff/student literacy, and governance processes overseen by the Library AI Strategy Group (reporting to the Library Leadership Team).
Strategic Principles
- Innovation with integrity: champion responsible, evidence-based adoption of AI across discovery and scholarly workflows.
- Alignment: ensure activity dovetails with institutional strategy, sector guidance, and legal obligations (GDPR, accessibility, EDI).
- Collaboration: recognise that AI's impacts and implications differ across disciplines, and that a single approach is not always appropriate for a diverse institution. Library AI programmes will be developed with discipline-specific needs in mind and in consultation with Schools.
- Equity and inclusion: design AI literacy programmes and services that are accessible to diverse disciplines and communities.
- Transparency and accountability: disclose AI use, teach citation ethics, and validate outputs.
- Sustainability: consider environmental, financial, and operational impacts when piloting and procuring AI-enabled resources.
Roles & Responsibilities (Aligned to AI Strategy Group ToR)
- Strategic direction for AI in the Library
Define priorities for AI-related services/resources; align with institutional strategy, sector practice, and legal obligations.
- Developing and overseeing AI literacy initiatives
Co-develop programmes and resources that promote critical AI literacy for staff and students; embed in Schools/Graduate School.
- Ethical and legal compliance
Advise on rights/IP, information ethics, accessibility, GDPR; apply assurance via the ARIA checklist (see Appendix A).
- Evaluating and guiding AI resource adoption
Assess AI-enabled discovery/summarisation resources; recommend pilots; monitor effectiveness; make evidence-based recommendations for ongoing use/investment.
- Coordinating cross-Library and University engagement
Coordinate Library teams and represent the Library with AI Hub, Digital & Information Services, Research, Schools, Graduate School, Ethics/Information Compliance.
- Monitoring trends and advising on future opportunities
Track developments and provide strategic advice on emerging resources, risks, and best practice.
The Library’s Unique Contribution
Information professionals in the Library bring a distinctive blend of skills—critical literacy, scholarly communication, rights/licensing, and long-term stewardship. The Library advises on policy and practice, supports discipline-sensitive adoption, and safeguards intellectual and cultural assets while expanding equitable access to knowledge.
Programme of Work (2025–2026)
1) Critical AI Literacy
- Teach not only ‘how’ but ‘whether, when, and why’: develop reflective, discipline-sensitive practice for using AI resources in research and teaching.
- Train staff/students to evaluate AI-generated outputs, identify algorithmic bias, and recognise misinformation/hallucinations; embed validation and source-tracing.
- Promote disclosure of AI use (prompt logging where appropriate) and good citation ethics in research and learning materials.
- Support responsible AI-assisted discovery and literature searching: strategies for locating authoritative sources while respecting licences and IP.
- Embed Library-led AI literacy programmes in at least three Schools and the Graduate School; co-deliver with Subject Librarians, Open Research, and Special Collections & Archives.
2) Ethical Stewardship
Co-develop University frameworks for responsible AI use with partners (AI Hub, D&IS, Research, Ethics/Information Compliance). The Library contributes specialist expertise in rights/IP, information ethics, accessibility, and open research. Adopt the AI Risk & Impact Assessment (ARIA) checklist for pilots/procurements covering accuracy, bias, traceability, data protection, accessibility, environmental impact, and sustainability.
3) Interdisciplinary Engagement
- Convene dialogue across disciplines—technologists, humanists, ethicists, practitioners—on AI’s social, philosophical, and epistemological dimensions.
- Host events blending critical perspectives with practical guidance (responsible discovery, summarisation literacy, citation practice) with AI Hub and Schools.
4) Academic Integrity & Scholarly Values
- Provide guidance on incorporating AI in scholarly work while upholding authorship norms, citation ethics, and academic honesty.
- Advocate for AI as a complement—not a replacement—for critical thinking, creativity, and human judgement.
- Assess AI-enabled discovery services for citation transparency and scholarly norms, especially in literature searching and summarisation workflows.
5) Copyright and Rights Expertise
Provide authoritative guidance on copyright, licensing, and IP implications of AI use, including risks of inputting third-party or sensitive content into AI services. Draw on relevant sector initiatives to inform policy/practice for ethical reuse in AI contexts.
6) Procurement & Evaluation of AI Resources (Condensed Statement)
Decisions to pilot or procure AI-enabled discovery/summarisation resources will be structured and evidence-based. The full position statement is maintained by the AI Strategy Group; this section summarises criteria and process.
- Academic need and stakeholder feedback across disciplines.
- Cost–benefit analysis and strategic alignment with Library/University priorities.
- Integration with existing systems and interoperability.
- Ethical compliance (GDPR, accessibility, EDI), algorithmic transparency, and citation traceability.
- Pilot-first approach with evaluation metrics (accuracy, bias, coverage, usability, environmental impact).
- Ongoing monitoring and periodic review to ensure continued value and alignment with research goals.
Collaboration & Governance
Partners: AI Hub (co-programming and shared resources), Digital & Information Services (infrastructure/security), Research (integrity/open research), Schools and Graduate School (curriculum/skills), Ethics/Information Compliance (policy/assurance). Coordination: the Library AI Strategy Group oversees delivery and reports to the Library Leadership Team.
AI Strategy Group: Operating Model & Membership
- Chair: University Librarian.
- Members (roles): Bibliographic Services Manager; Head of Digital, Content and Research Services; Faculty & Subject Support Manager; Open Research Services Manager; Digital Services Manager; Head of Customer Experience & Partnerships; Digital Scholarship Librarian.
- Cadence: meetings every two months; standing agenda: ethics/compliance; literacy initiatives; pilots/procurements; stakeholder engagement; risk review.
- Outputs: annual workplan; pilot evaluations; guidance/training materials; recommendations to Leadership Team.
- Reporting line: Library Leadership Team.
Implementation Timeline & Success Measures (2025–2026)
- Q1 2026: launch School-embedded AI literacy pilots (≥2 Schools); publish disclosure & citation guidance; agree ARIA checklist with stakeholders.
- Q2 2026: complete at least one pilot of AI-enabled discovery resource; publish evaluation; decision on continuation or procurement.
- Q3 2026: expand literacy to Graduate School; second resource pilot in a contrasting discipline; host interdisciplinary event with AI Hub.
- Q4 2026: consolidate guidance; annual review of AI resource portfolio; report to LLT with recommendations for 2027.
Success Measures (KPIs):
- ≥3 Schools and Graduate School engaged in Library-led AI literacy sessions.
- 2 pilots completed with published evaluations and decisions.
- ARIA checklist adopted in pilots and procurement workflows.
- Improved confidence in responsible AI-enabled discovery evidenced by training feedback and LibGuide analytics.
Risks & Mitigations
- Bias/accuracy issues in AI resources. Mitigation: apply the ARIA checklist; require citation traceability; teach validation and source-tracing.
- Licensing/IP risks when using AI services. Mitigation: provide rights guidance and staff/student training; avoid uploading sensitive or third-party content without permissions.
- Accessibility and EDI gaps. Mitigation: include accessibility review in procurement; design literacy materials inclusively.
- Sustainability and cost pressures. Mitigation: pilot-first approach; cost–benefit analysis; periodic portfolio review.
Appendix A: AI Risk & Impact Assessment (ARIA) Checklist
- Accuracy & Coverage: Does the resource reliably retrieve/summarise authoritative scholarly sources? What is its coverage bias?
- Bias & Fairness: What measures exist to mitigate algorithmic bias? How are models/data audited?
- Traceability & Citations: Are outputs transparent with citations/links to sources? Can users verify provenance?
- Data Protection & Privacy: What data are sent to the service? Is personal/sensitive data excluded? Is GDPR compliance documented?
- Accessibility & EDI: Does the resource meet accessibility standards? Is guidance inclusive across disciplines and user needs?
- Licensing & IP: Are licences respected? Are terms clear on reuse of outputs and input content?
- Security & Integration: How does the resource integrate with Library/D&IS systems? What security assurances exist?
- Sustainability & Environmental Impact: What is the environmental footprint? Are usage and costs justifiable?
- Usability & Support: Is the resource easy to use? What training/support is required?
- Evaluation Metrics: Define success measures (precision/recall proxies, user satisfaction, time saved, inclusivity indicators).
Appendix B: Current Library-Led AI Activities
- Internal training workshops on responsible AI use and service applications.
- Discipline-specific guest lectures on LLM implications for research and writing.
- AI Literacy LibGuides (e.g., ‘AI in the Library’; ‘AI in Library Resources’) for ethical use and discovery of subscription resources.
- Formation of the Library AI Strategy Group to coordinate initiatives, policy input, and staff development.
- Collaboration with University stakeholders to shape responsible AI adoption across research and teaching.