AI Professional Development Resources

On this page, you will find a curated selection of professional development tools and training resources focused on the use of AI in teaching and learning. These materials are designed to support educators, enrich teaching practices, and promote ongoing professional growth.
Explore the links below to engage with the tools and resources. Please also see the St Andrews staff blogs: https://celpie.wp.st-andrews.ac.uk/category/ai-and-resource-hub/
If you have suggestions for additional content, please contact the Associate Deans (Education).
Contents
Debate/Commentary/Critique of AI
- Dan McQuillan – “The role of the University is to resist AI” (CPCT seminar text)
- Alfie Kohn – “The Chatbot in the Classroom, the Forklift at the Gym”
- Alfie Kohn – A.I., as in Anti-Intellectual
- MIT GenAI – “The Climate and Sustainability Implications of Generative AI”
Learning from the Sector: Peer Institutions’ AI Resource Hubs and Approaches
- Queen’s University Belfast AI Training Hub
- UCL – Generative AI and education futures
- University of Warwick (Warwick International Higher Education Academy) – Artificial Intelligence (AI), Race and Racism and Critical Pedagogies (webinar hub)
- AI at the University of Oxford
- JISC
Frameworks for Teaching, Learning and Assessment in AI
- The Open University – A framework for the Learning and Teaching of Critical AI Literacy skills
- The Open University – Responsible by Design: GenAI & Ethics (Learning Design Team)
- Assessment and AI
- AI Risk Measure Scale (ARMS): Guidance & Resources
- University of Greenwich
- Perkins & Colleagues’ Assessment Scale
Events
- Artificial Intelligence Symposium 2026 | Advance HE
Debate/Commentary/Critique of AI
Dan McQuillan – “The role of the University is to resist AI” (CPCT seminar text)
https://danmcquillan.org/cpct_seminar.html
This resource provides commentary on how the role of universities should be to resist the uncritical uptake of generative AI. McQuillan critiques AI’s material infrastructures (scale, energy, data extraction), its “slopification” of knowledge work, and the managerial logics driving adoption, and proposes convivial criteria and “people’s councils” to subject technology to social determination. Useful for framing policy, pedagogy, and institutional strategy debates.
Alfie Kohn – “The Chatbot in the Classroom, the Forklift at the Gym”
https://www.alfiekohn.org/article/ai/
This is an extended essay critiquing generative AI in education. Kohn argues that schools are rushing to adopt LLMs amid corporate and managerial hype, despite environmental costs, data extraction, accuracy issues, and risks to democratic and relational aims of education. He contends that AI cannot think, tends toward banal consensus, may depress critical thinking, and can create a “machines on both sides” loop (AI-written tasks, AI-completed work, AI-graded responses). The piece urges universities and educators to question not just how to implement AI, but whether it serves educational purposes at all.
Alfie Kohn – A.I., as in Anti-Intellectual
https://www.alfiekohn.org/podcasts/ai-podcast/
This podcast episode page introduces Kohn’s critique of generative AI in education. In particular, the podcast points to the potential risks AI poses to learning processes, including thinking, reading, and writing. The page links to related research, activist resources, and his companion essay.
MIT GenAI – “The Climate and Sustainability Implications of Generative AI”
by Noman Bashir, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti
https://mit-genai.pubpub.org/pub/8ulgrckc/release/2
This piece (with an audio version) outlines AI’s environmental impacts: escalating computational demand, increased carbon emissions, and faster depletion of natural resources. It argues that “responsible” GenAI must look beyond efficiency gains, using benefit–cost frameworks that steer development towards social and environmental sustainability as well as economic opportunity.
Learning from the Sector: Peer Institutions’ AI Resource Hubs and Approaches
Queen’s University Belfast AI Training Hub
A curated hub for staff AI upskilling at Queen’s University Belfast, including:
- Bite-size AI Microlearning playlists (Copilot, ChatGPT, Gemini/NotebookLM, Claude, research tools, design/creativity);
- A self-paced AI for Educators Canvas course with hands-on skill-building activities (available via sign-up);
- AI Lightning Talks showcasing QUB case studies (accessibility, student voice analytics, quiz generation, CustomGPTs);
- Recordings and slides from the AI Building Blocks workshop series – Foundations, Ethics, Everyday Tasks, Accessibility, Teaching & Learning (including “Pedagogy over Tech”), and Research.
UCL – Generative AI and education futures
https://www.ucl.ac.uk/teaching-learning/case-studies/2023/aug/generative-ai-and-education-futures
Edited clips and summary from Professor Mike Sharples’ 2023 UCL Education Conference keynote. Topics include: what GPT-4 is and how reliable it is; practical roles for AI in learning (e.g., “Socratic opponent,” “guide on the side,” feedback support, maths assessment); accessibility and bias (including translanguaging/sign language); and responsibility, policy, and creativity. Links to the full recording and further UCL guidance/resources are provided.
University of Warwick (Warwick International Higher Education Academy) – Artificial Intelligence (AI), Race and Racism and Critical Pedagogies (webinar hub)
https://warwick.ac.uk/fac/cross_fac/academy/activities/seminar/ai-and-race-webinar
A page gathering talks and materials on AI’s intersections with race, racism and critical pedagogy. It offers recorded sessions such as “Is AI Racist?” by Dr Sanjay Sharma, “Race and AI: The Diversity Dilemma” by Dr Kanta Dihal, and “Developing a critical digital perspective on AI tools in Higher Education” by Chris Rowell.
AI at the University of Oxford
https://www.ox.ac.uk/ai-oxford
Information about how Oxford has become the first UK university to provide free ChatGPT Edu access – powered by OpenAI’s GPT-5 – to all staff and students.
JISC
https://www.jisc.ac.uk/training?categories=20
Jisc is the UK digital, data and technology agency focused on tertiary education, research and innovation. It offers a curated set of AI training courses for educators.
Frameworks for Teaching, Learning and Assessment in AI
The Open University – A framework for the Learning and Teaching of Critical AI Literacy skills
This resource provides a practical framework for teaching and learning Critical AI Literacy across the curriculum, with a strong EDIA lens. It defines domains (AI concepts & applications; learning/teaching with AI; AI creativity; AI ethics; AI in society; AI careers) and offers staged guidance for staff, from foundational to advanced use, stressing iterative, context-specific integration and reflective practice.
The Open University – Responsible by Design: GenAI & Ethics (Learning Design Team)
https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2024/12/RBD-Version-for-blog.pdf
A practical checklist framework to embed ethical AI use in learning materials. It organises prompts under key pillars including Bias & Sustainability, Exploitation & Digital Divide, and Opting Out. Each prompt includes a “check” and possible actions, plus a Solutions Bank with ready-to-adapt ideas for activities and guidance. Ideal for course teams seeking concrete, teachable interventions rather than abstract principles.
Assessment and AI
AI Risk Measure Scale (ARMS): Guidance & Resources
University of Greenwich
ARMS is a diagnostic tool that raises awareness of the risks and implications of generative AI for assessment design. It helps staff categorise assessments and build a shared understanding of the risks attached to different types of assessment, and it provides a basis for identifying and disseminating effective assessment practice, encouraging knowledge-sharing among staff and the optimisation of assessment approaches. By prompting staff to reflect on, discuss, and review assessment tasks, ARMS supports ongoing dialogue on assessment design as the AI landscape evolves.
Perkins & Colleagues’ Assessment Scale
This section gathers Perkins and colleagues’ publications on the AI Assessment Scale. The scale is a five-level framework for specifying and communicating how generative AI may be used in assessments, from “No AI” through graduated forms of AI support and use. It is designed to help educators align AI use with learning outcomes, make expectations transparent to students, and redesign assessment tasks accordingly.
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6), 49–66. https://search.informit.org/doi/10.3316/informit.T2024092900003300954126858
Perkins, M., Roe, J., & Furze, L. (2024). The AI Assessment Scale Revisited: A framework for educational assessment. arXiv preprint. https://arxiv.org/abs/2412.09029
Perkins, M., Roe, J., & Furze, L. (2025). Reimagining the Artificial Intelligence Assessment Scale (AIAS): A refined framework for educational assessment. Journal of University Teaching and Learning Practice, 22(7). https://doi.org/10.53761/rrm4y757
Perkins, M., Roe, J., & Furze, L. (2025). How (not) to use the AI Assessment Scale. Journal of Applied Learning and Teaching, 8(2). https://doi.org/10.37074/jalt.2025.8.2.15
Events
Artificial Intelligence Symposium 2026 | Advance HE
A presenter-led event focused on practical AI pedagogy in HE, with strands on:
- Using AI to provide more inclusive and personalised learning.
- Embedding Generative AI into curricula and assessment to prepare students for the future.
- Using AI to support the work of academic and professional services staff.
- Fostering a culture of responsible and ethical use of AI by staff and students.
The submission deadline is 14 November 2025, and early-booking discounts are available.