AI Professional Development Resources

On this page, you will find a curated selection of professional development tools and training resources focused on the use of AI in teaching and learning. These materials are designed to support educators, enrich teaching practices, and promote ongoing professional growth.
Explore the links below to engage with the tools and resources. They include:
- AI Podcast Series
- Using Copilot
- Debate/Commentary/Critique of AI
- Learning from the Sector: Peer Institutions’ AI Resource Hubs and Approaches
- Frameworks for Teaching, Learning and Assessment in AI
- Events
- CPD Prize for Educators
- St Andrews Staff Blogs
If you have suggestions for additional content, please contact the Associate Deans (Education).
AI Podcast Series
Introduction
Welcome to this podcast series – The Higher Education Educator’s AI Sandpit – where we create space to talk about the trials, tribulations and triumphs of Generative AI in teaching, learning and assessment.
Across an initial five episodes, we share colleagues’ accounts of experimenting with AI within their teaching practice. Episode 1 features Paloma Gay Blasco, who, with colleagues and students in social anthropology, uses disciplinary tools to surface and interrogate biases in generative AI models. In Episode 2, Luc Bridet explains how he has redesigned a master’s-level economics module to incorporate AI for coding whilst teaching students to scrutinise the limitations of AI-generated code. Episode 3 centres on Kirsty Duff, who describes the use of Perkins and colleagues’ AI Assessment Scale in her own teaching and assessment design. In Episode 4, John Mitchell reports on an inquiry he carried out into AI’s performance on undergraduate chemistry assessment tasks and his subsequent shift to assessments requiring students to critique AI-produced exam answers. Finally, in Episode 5 we listen in as Jenny Taylorson and Blair Matthews explore how to reconfigure assessment on their master’s-level module, Education and Researching, in the age of generative AI.
Looking back over these conversations, one theme rings out: none of us are so-called expert “AI pedagogues.” And yet, as the series unfolds, something else becomes equally clear. Whilst we may not be experts in AI pedagogy, we do hold a great deal of expertise that equips us with the skills and knowledge we need to inquire and learn together.
Each of us, for example, has practical pedagogical knowledge built through trial, error, reflection, and often, pedagogical enquiry. We each have institutional knowledge – of policy and localised practice, as well as of our students – which helps us to understand the supports and parameters of learning, teaching and assessment within our departments and within our institution. We also have extensive disciplinary knowledge, and with that, an extensive understanding of the tacit norms that shape assessment and supervision, as well as the practical constraints of time, curriculum, accreditation, and professional standards. And finally, as inquiry is our professional habitus as academics, each of us has the skills needed to pose questions, test assumptions, generate evidence, and refine practice. As the episodes reveal, each guest draws on these strengths in their exploratory endeavours regarding the pedagogical impacts and implications of Generative AI.
With this in mind, we present in these episodes instances of experimentation with, and critical evaluation of, AI, rather than case studies to be held up as exemplars of “good practice.” In this spirit, we ask you, the listener, to take an observer’s stance and to treat the conversations within each episode not as fixed exemplars but as situated narratives to be observed, interpreted, and interrogated. By hearing where things have worked and where they have not, our hope is that you will turn back to your own setting and apply the same observational lens as a tool for interpreting and interrogating your own approach to AI.
In short, none of us are experts, but rather than this being a weakness, we see it as an opportunity. By adopting a beginner’s mindset, we give ourselves permission to learn through doing, to proceed through trial and error, to make mistakes, and, in doing so, to be creative. This series is an invitation to think with us, to test ideas within your own context, and to shape practices that are not only informed by evidence, but guided by educational purpose.
We hope you enjoy these episodes.
Jenny and Amritesh.
Listen now: Anthropological Encounters with AI with Paloma Gay Blasco
Written Introduction to Episode 1
How do anthropologists actually work with AI? In this episode, Paloma Gay Blasco – Director of Teaching, Social Anthropology, St Andrews – shares a hands-on, student-partnered exploration of generative AI: posing concrete anthropological scenarios to ChatGPT, comparing refusals and outputs, and documenting where bias and contradiction appear. We talk about drawing on the tools of our disciplines to explore and interrogate AI, and discuss how adopting a beginner’s mindset can be useful to this task. We also talk about co-design with students, and the workload/sustainability trade-offs of fast-moving tools. Join us to hear more!
Follow up: Resources Discussed in the Podcast Episode
Biesta, G. J. (2010). Why ‘what works’ still won’t work: From evidence-based education to value-based education. Studies in Philosophy and Education, 29(5), 491–503. https://doi.org/10.1007/s11217-010-9191-x
Reflecting on your Own Practice
What does ethical use of AI in learning, teaching and assessment look like to me in my context?
What might inquiry into AI look like in my discipline or context? Would I consider partnering with students, as Paloma does?
Listen now: Teaching Master’s Students to use AI for Coding in Economics with Luc Bridet
Written Introduction to Episode 2
What happens when a postgraduate economics module moves from pen-and-paper proofs to Python and invites students to use AI as a coding partner? In this episode, economist Luc Bridet explains how he redesigned a master’s-level optional module to reflect an increasingly common workflow in parts of the discipline – AI drafts; humans verify, test, and refine. We discuss the pedagogy and the practical implications for sustainability, transferability, and workload. If you’re deciding when to allow AI and how to reshape assessment as its use grows, this conversation offers a candid account of the adaptations and their trade-offs. Join us to hear more!
Reflecting on your Own Practice
AI as first drafter, student as quality assurer:
- If you were so minded, in one of your assignments, where might you permit students to use AI to produce an initial draft/outline/solution, while requiring them to verify, test, and refine it?
Ethical Adoption of AI:
- If you introduced a task that allowed students to use AI for an initial draft/outline/solution, would it be important to design it so students can opt out of using AI? Why or why not?
Fairness across different starting routes:
- If students may begin with or without AI assistance, what common criteria (clarity of reasoning, quality of adaptation, robustness of tests) will let you judge both routes equitably?
Listen now: The Artificial Intelligence Assessment Scale with Kirsty Duff
Written Introduction to Episode 3
How can we guide students to use generative AI responsibly without banning it or letting it “write the assignment”? In this episode, Kirsty Duff (Director of Foundation Studies and Academic Misconduct Officer, University of St Andrews) introduces the Artificial Intelligence Assessment Scale (AIAS) developed by Mike Perkins and colleagues – a five-level framework from no AI to full AI. Kirsty shows how she embeds the AIAS in handbooks and induction activities so that staff and students know what’s acceptable, why, and how to evidence working with AI. You’ll hear concrete, classroom-ready ideas as well as discussion of workload, policy fit, disciplinary differences, and how to move beyond a “deficit” view of AI while keeping integrity central. If you want practical guidance you can adopt tomorrow, this one’s for you. Join us to hear more!
Follow up: Resources Discussed in the Podcast Episode
Perkins, M., Roe, J., & Furze, L. (2025). Reimagining the Artificial Intelligence Assessment Scale (AIAS): A refined framework for educational assessment. Journal of University Teaching and Learning Practice, 22(7). https://leonfurze.com/wp-content/uploads/2025/09/JUTLPFinalPerkins_JUTLP_2025.pdf
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6), 49–66. https://search.informit.org/doi/10.3316/informit.T2024092900003300954126858
Perkins, M., Roe, J., & Furze, L. (2025). How (not) to use the AI Assessment Scale. Journal of Applied Learning and Teaching, 8(2). https://doi.org/10.37074/jalt.2025.8.2.15
Reflecting on your Own Practice
Policy clarity:
- How will you signal, in handbooks and induction, exactly what AI use is acceptable in each assessment – e.g., mapping tasks to a level on the AI Assessment Scale (from “no AI” to “full AI”)?
Process over product:
- Where could you design activities that foreground how students work (brainstorming, structuring, editing) rather than the final output, to reduce misconduct and build judgement?
Open-book nuance:
- If students may consult notes in open-book exams, how will you address the risk that those notes are AI-generated and disconnected from taught material (e.g., require citation to lecture/seminar sources)?
Equity and choice:
- Given ethical, environmental, and access concerns, will you allow an opt-out pathway for students who prefer not to use AI, without disadvantage? How would you phrase that option?
Discipline fit:
- What adaptations would your discipline need (e.g., from essay-focused activities to code, lab, or design tasks) to keep the same principles but change the artefacts?
Listen now: Catalysing Change – Rethinking Chemistry Assessment with John Mitchell
Written Introduction to Episode 4
What happens when AI reshapes a discipline before pedagogy catches up, and how should assessment respond? In this episode, John Mitchell (School of Chemistry; Academic Misconduct Officer) and I consider AI’s purported “cognitive” capabilities, then turn to assessment. John shares findings from a pedagogical inquiry benchmarking AI answers to exam questions against undergraduate responses, and explains why he now asks students to critique AI-generated answers rather than write their own from scratch. Expect candid lessons on what worked for him and what didn’t, plus practical ideas for building adaptability into assessment in changing times. If you’re weighing assessment redesign in the age of AI and want honest trade-offs rather than hype, this conversation is for you. Join us to hear more!
Follow up: Resources Discussed in the Podcast Episode
University of Kent. (n.d.). Digitally enhanced education webinars [YouTube channel]. YouTube. https://www.youtube.com/@digitallyenhancededucation554
Krathwohl, D. R. (2002). A Revision of Bloom’s Taxonomy: An Overview. Theory Into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2
Reflecting on your Own Practice
AI’s impact on your discipline:
- In your field, has AI already changed research or professional practice before teaching caught up, and if so, which parts of your curriculum or assessment need to move first in response (if any)?
Benchmarking reality check:
- Could you run a small, ethical benchmarking exercise (AI answers vs. typical student answers, or AI answers vs. the criteria) on one existing task to reveal strengths and weaknesses? If you did this, how might you use the findings to brief students on pitfalls and good practice?
Marking that rewards insight:
- John found that an overly prescriptive marking scheme made it difficult to differentiate between stronger and weaker responses to his assessment questions. How might you design rubrics that recognise depth, nuance, and warranted judgement rather than tallying obvious points? How might you pilot any changes you plan to make?
Thinking levels as a lens:
- Would applying a cognitive framework (e.g., lower- vs higher-order demands) help you specify the kind of thinking you want students – not AI – to do on a given task? How might these cognitive skills align to external frameworks that guide our course outcomes, such as the Scottish Credit and Qualifications Framework?
Listen now: Colleagues in Dialogue – Re-Designing Assessment in the Age of GenAI with Jenny Taylorson and Blair Matthews
Written Introduction to Episode 5
What should assessment do when AI can already draft a passable answer, and when some students won’t use AI on principle while others will? In this episode, two colleagues think aloud about redesigning a master’s-level research-methods assessment for Teaching English to Speakers of Other Languages, Digital Education, and International Education students. We weigh trust and equity (opt-out pathways, transparency), purpose (what should this assignment be for in the age of Generative AI?), and practical constraints (large cohorts, time zones, workload). Rather than offering a finished fix, we discuss possibilities: shifting from production to critique tasks, tightening context-specificity to reduce “AI-ability,” changing criteria, and communicating clear expectations (e.g., traffic-light/scale approaches to permitted use). If you want an honest conversation about changing assessment in the age of AI – trade-offs, dead ends, and workable next steps – this episode will help you frame your own. Join us to hear more!
Follow up: Resources Discussed in the Podcast Episode
De Vita, K., & Brown, G. (n.d.). AI risk measure scale (ARMS): Guidance and resources [PDF]. University of Greenwich. https://www.gre.ac.uk/__data/assets/pdf_file/0022/323590/ai-risk-measure-scale-guidance-and-resources-website-version.pdf
The Open University Learning Design Team. (2024, December). Responsible by design (RBD) [PDF]. The Open University. https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2024/12/RBD-Version-for-blog.pdf
Perkins, M., Roe, J., & Furze, L. (2025). Reimagining the Artificial Intelligence Assessment Scale (AIAS): A refined framework for educational assessment. Journal of University Teaching and Learning Practice, 22(7). https://leonfurze.com/wp-content/uploads/2025/09/JUTLPFinalPerkins_JUTLP_2025.pdf
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6), 49–66. https://search.informit.org/doi/10.3316/informit.T2024092900003300954126858
Perkins, M., Roe, J., & Furze, L. (2025). How (not) to use the AI Assessment Scale. Journal of Applied Learning and Teaching, 8(2). https://doi.org/10.37074/jalt.2025.8.2.15
Reflecting on your Own Practice
AI risks (beyond a basic search):
- Where might AI’s capacity to collate information across sources pose a risk to learning? How would you make those risks explicit to students?
Trust:
- In the age of generative AI, how can we preserve trust between students and lecturers, and among students themselves?
Purpose of assessment:
- What, precisely, is your assessment for now? What kinds of judgement, integrity, and disciplinary thinking should it elicit in an AI-saturated context?
Workable formats of assessment at scale:
- Given cohort size, time zones, and workload, what is your lightest-touch mechanism to evidence understanding (e.g., short recorded rationale, annotated plan) when vivas or in-person exams are not viable?
Raising the bar (criteria):
- If AI can already achieve a pass on a current assessment brief, which elements of your criteria could you strengthen to reward human judgement? Or should we even be considering tinkering with criteria to solve this problem?
Using Copilot
The IT Services team is offering general training and information sessions on Copilot, the generative-AI assistant available to all staff. These sessions are designed to help you make the most of this tool while ensuring data protection.
Introduction to Copilot
Copilot can assist with drafting content, brainstorming ideas, summarising documents, and producing meeting minutes. In this session, you’ll learn about:
- Popular use cases for Copilot in your daily work
- Enabling enterprise data protection
When Copilot Should Take a Backseat
Copilot is widely used across the University, but it’s not perfect for every task. This session will cover:
- Tasks Copilot struggles with (e.g., image generation, creating PowerPoint presentations, branding)
- How to use Copilot for guidance while retaining control of the final version
Are you interested?
Please contact Bethany Reid (Bethany.reid@), Business Relationship Manager, IT Services, or her colleague Monica Cecil (mcc28@) to arrange a session for your team(s).
Staff who are concerned about how the University’s enterprise data protection for Copilot works can access the information below:
Enterprise data protection in Microsoft 365 Copilot and Microsoft 365 Copilot Chat
Debate/Commentary/Critique of AI
https://danmcquillan.org/cpct_seminar.html
In this piece, McQuillan argues that the role of universities should be to resist the uncritical uptake of generative AI. He critiques AI’s material infrastructures (scale, energy, data extraction), its “slopification” of knowledge work, and the managerial logics driving adoption, and proposes convivial criteria and “people’s councils” to subject technology to social determination. Useful for framing policy, pedagogy, and institutional strategy debates.
https://www.alfiekohn.org/article/ai/
This is an extended essay critiquing generative AI in education. Kohn argues that schools are rushing to adopt LLMs amid corporate and managerial hype, despite environmental costs, data extraction, accuracy issues, and risks to democratic and relational aims of education. He contends that AI cannot think, tends toward banal consensus, may depress critical thinking, and can create a “machines on both sides” loop (AI-written tasks, AI-completed work, AI-graded responses). The piece urges universities and educators to question not just how to implement AI, but whether it serves educational purposes at all.
https://www.alfiekohn.org/podcasts/ai-podcast/
This podcast episode page introduces Kohn’s critique of generative AI in education. In particular, the podcast points to the potential risks AI poses to learning processes, including thinking, reading, and writing. The page links to related research, activist resources, and his companion essay.
https://mit-genai.pubpub.org/pub/8ulgrckc/release/2
This piece (with an audio version) by Noman Bashir, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti outlines AI’s environmental impacts: escalating computational demand, increased carbon emissions, and faster depletion of natural resources. It argues that “responsible” GenAI must look beyond efficiency gains, using benefit–cost frameworks that steer development towards social and environmental sustainability as well as economic opportunity.
Learning from the Sector: Peer Institutions’ AI Resource Hubs and Approaches
Welcome to Queen’s DigiHub
A curated hub for staff AI upskilling at Queen’s University Belfast, including:
- Bite-size AI Microlearning playlists (Copilot, ChatGPT, Gemini/NotebookLM, Claude, research tools, design/creativity)
- A self-paced AI for Educators Canvas course with hands-on skills builds (available through signup)
- AI Lightning Talks showcasing QUB case studies (accessibility, student voice analytics, quiz generation, CustomGPTs)
- Recordings and slides from the AI Building Blocks workshop series – Foundations, Ethics, Everyday Tasks, Accessibility, Teaching & Learning (including “Pedagogy over Tech”), and Research
https://www.ucl.ac.uk/teaching-learning/case-studies/2023/aug/generative-ai-and-education-futures
Edited clips and summary from Professor Mike Sharples’ 2023 UCL Education Conference keynote. Topics include: what GPT-4 is and how reliable it is; practical roles for AI in learning (e.g., “Socratic opponent,” “guide on the side,” feedback support, maths assessment); accessibility and bias (including translanguaging/sign language); and responsibility, policy, and creativity. Links to the full recording and further UCL guidance/resources are provided.
https://warwick.ac.uk/fac/cross_fac/academy/activities/seminar/ai-and-race-webinar
A page gathering talks and materials on AI’s intersections with race, racism and critical pedagogy. It offers recorded sessions such as “Is AI Racist?” by Dr Sanjay Sharma, “Race and AI: The Diversity Dilemma” by Dr Kanta Dihal, and “Developing a critical digital perspective on AI tools in Higher Education” by Chris Rowell.
https://www.ox.ac.uk/ai-oxford
Information about how Oxford has become the first UK university to provide free ChatGPT Edu access – powered by OpenAI’s GPT-5 – to all staff and students.
https://www.jisc.ac.uk/training?categories=20
Jisc is the UK digital, data and technology agency focused on tertiary education, research and innovation. It offers a curated set of AI training courses for educators.
https://www.youtube.com/@digitallyenhancededucation554
These webinars from the University of Kent provide real-world examples of digitally enhanced learning, with lots of resources on generative AI in teaching and assessment.
Frameworks for Teaching, Learning and Assessment in AI
This resource provides a practical framework for teaching and learning Critical AI Literacy across the curriculum, with a strong EDIA lens. It defines domains (AI concepts & applications; learning/teaching with AI; AI creativity; AI ethics; AI in society; AI careers) and offers staged guidance for staff (from foundational to advanced use), stressing iterative, context-specific integration and reflective practice.
https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2024/12/RBD-Version-for-blog.pdf
A practical checklist framework to embed ethical AI use in learning materials. It organises prompts under key pillars including Bias & Sustainability, Exploitation & Digital Divide, and Opting Out. Each prompt includes a “check” and possible actions, plus a Solutions Bank with ready-to-adapt ideas for activities and guidance. Ideal for course teams seeking concrete, teachable interventions rather than abstract principles.
The purpose of ARMS is to create awareness regarding the potential risks and implications associated with generative AI in relation to assessment design. The diagnostic tool facilitates the categorisation of assessments, fostering a shared understanding among staff regarding the risks associated with different types of assessments. Furthermore, ARMS serves as a basis for identifying and disseminating effective assessment practices, creating a collaborative environment that encourages knowledge-sharing among staff and optimisation of assessment approaches. By prompting staff to engage in reflection, discussion, and review of assessment tasks, ARMS fosters ongoing dialogue on assessment design to align with the evolving AI landscape.
This section gathers a number of Perkins and colleagues’ Assessment Scale publications. The AI Assessment Scale (AIAS) is a five-level framework for specifying and communicating how generative AI may be used in assessments, from “No AI” through various graduated forms of AI support and use. It is designed to help educators align AI use with learning outcomes, make expectations transparent to students, and redesign assessment tasks accordingly.
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6), 49–66. https://search.informit.org/doi/10.3316/informit.T2024092900003300954126858
Perkins, M., Roe, J., & Furze, L. (2024). The AI Assessment Scale Revisited: A framework for educational assessment. https://arxiv.org/abs/2412.09029
Perkins, M., Roe, J., & Furze, L. (2025). Reimagining the Artificial Intelligence Assessment Scale (AIAS): A refined framework for educational assessment. Journal of University Teaching and Learning Practice, 22(7). https://doi.org/10.53761/rrm4y757
Perkins, M., Roe, J., & Furze, L. (2025). How (not) to use the AI Assessment Scale. Journal of Applied Learning and Teaching, 8(2). https://doi.org/10.37074/jalt.2025.8.2.15
Events
A presenter-led event focused on practical AI pedagogy in HE, with strands on:
- Using AI to provide more inclusive and personalised learning.
- Embedding Generative AI into curricula and assessment to prepare students for the future.
- Using AI to support the work of academic and professional services staff.
- Fostering a culture of responsible and ethical use of AI by staff and students.
The submission deadline is 14 November 2025, and early-booking discounts are available.
Date: Wednesday 18 February 2026
Times: 14:30 to 16:30
Location: Parliament Hall
Join colleagues for an interactive AI in Teaching & Learning Sandbox event. This informal, hands-on event offers 25-minute conversation slots at themed tables, where you can explore topics such as the AI Assessment Scale, harnessing AI for assessment, AI as a reasonable adjustment, experimenting with AI in Science and in Arts, and AI and ethics in teaching and assessment.
This is a formative and playful space for exploration. There are no experts in the room and no expectation to have all the answers. Instead, the focus is on curiosity, experimentation, and creating ideas that are meaningful for your own disciplinary context. It’s a safe space, where there is plenty of room for error, and lots of opportunities to learn from one another. Come to explore and question what AI could mean for your teaching practice. We encourage discussions around the challenges AI poses for us all and how to counteract these, as well as the opportunities it may bring in some contexts.
Sign up here – AI in Teaching & Learning Sandbox
Date: Wednesday 8 April 2026
Times: 14:00 to 16:00
In this workshop, we will examine the implications of generative artificial intelligence (genAI) for Higher Education, focusing on both the opportunities it presents and the AI Safety issues that raise critical challenges for students, educators, and institutions. Topics will include academic integrity, misuse and over-reliance, bias and fairness, and the role of policy in shaping responsible use.
Sign up here – AI Ethical Use/Safety in HE – PDMS – University of St Andrews
Date: Tuesday 28 April 2026
Times: 14:00 to 16:00
Artificial intelligence has arrived in our classrooms, whether we invited it or not. This hands-on workshop delivers on the promise of its title: cutting through sensational claims to reveal what AI actually does, why its limitations matter as much as its capabilities, and how we can harness both to strengthen education. Together, we will see how AI tools really work in plain language. Using concrete examples from language teaching, essay writing, and research skills, you will see first hand how students are already using these tools and discover practical strategies for responding constructively. No technical expertise is required.
Sign up here – AI in Education – PDMS – University of St Andrews
Dates: 22 April and 29 April 2026
Times: 14:00 to 16:00
This session will look at activities that can be embedded throughout a module to support students to use GenAI (with integrity) as part of the learning process and to avoid over-reliance in assessments. It focuses on coursework assessments, primarily essay-style assessments, and is linked to the AI Assessment Scale. We will workshop ideas for how GenAI could be used in your modules, using the AI Assessment Scale as a starting point.
Sign up coming soon – The Use of AI in Assessment – PDMS – University of St Andrews
The Annual CPD Award
This award recognises and celebrates excellence in teaching and learning practice that meaningfully incorporates an annually designated CPD theme. The 2026 theme is “AI in Teaching, Learning and Assessment”. The award aims to encourage creativity, innovation, and reflective practice in teaching, while promoting the integration of contemporary pedagogical ideas and learning experiences that enhance student engagement and success. As part of this recognition, the annual prize winner will receive £500, credited to their class grant account, which can be used to support a range of academic and professional development activities. Please note that the award text may also be repurposed as supporting evidence for an HEA Fellowship application, particularly in demonstrating your commitment to enhancing teaching practice and engaging in scholarly professional development.
Each year, the award will focus on a specific CPD pedagogical theme. Staff must demonstrate how they have integrated this theme into their teaching practice or curriculum.
This prize is open to all lecturing and teaching staff (individuals or teams), including PGRs who teach, who are involved in undergraduate or postgraduate teaching at St Andrews.
The award encourages novel approaches that demonstrate creativity and originality in embedding the annual theme into teaching and learning.
Emphasis should be placed on how the incorporation of the CPD theme has positively impacted students’ learning experiences, engagement, or outcomes.
Consideration should be given to how the approach, or innovation could be shared, adapted, or scaled across other disciplines or teaching contexts within or beyond the institution.
Applicants should provide evidence of the effectiveness of their approach, such as student feedback, peer review, reflective analysis, learning analytics data, or outputs.
The award should support and reflect St Andrews’ teaching and learning strategy, values, and overarching priorities.
- The extent to which the annual CPD theme has been meaningfully incorporated into curriculum design, teaching practice, or assessment.
- Evidence of creativity, innovation, or new approaches that enhance learning and teaching.
- Evidence of critical reflection on the process, challenges, and outcomes of implementing the theme.
- The potential for the innovation to be sustained, scaled, transferred to other disciplines, or adapted by others.
- Evidence of effectiveness and impact, for example, presenting appropriate data, evidence of collaboration, feedback, or transferability across or beyond your discipline.
Each question should be answered in 250 words or fewer.
- How have you meaningfully integrated this year’s CPD theme into your curriculum design, teaching practice, or assessment methods? (Please provide specific context and examples.)
- What creative, innovative, or new approaches have you implemented to enhance learning and teaching? (Describe the approach and explain how it differs from conventional practice.)
- Reflecting on your implementation of the CPD theme, what challenges did you encounter, and how did you address them? (Critically reflect on the insights you have gained from this process and how they have influenced your practice.)
- How could your innovation be sustained over time, scaled up, or adapted for use in other disciplines or contexts? (Include any evidence of or plans for broader applicability.)
- What evidence do you have of the effectiveness and impact of your initiative? (You may include data, feedback, collaboration outcomes, or examples of transferability across or beyond your discipline.)
Applicants may include:
- Examples of pedagogical research underpinning your design
- Examples of teaching materials or curriculum design
- Student testimonials or feedback summaries
- Peer observations or reviews
- Reflective statements on practice
- Evidence of dissemination, such as presentations, workshops, or publications
The judging panel will comprise CPD Team colleagues and the President of Education from the Student Union.
As this is a single-cycle award with no subsequent application years, the panel will not be providing individual feedback to unsuccessful applicants.
The award deadline is 8 May 2026 at 12 noon. The award will be presented at the CELPiE conference on 18 June 2026.
Please apply via the CPD Award Application Form 2026 – AI in Teaching, Learning & Assessment.
St Andrews Staff Blogs
AI × MedEd is led by Dr Andrew O’Malley, Senior Lecturer at the University of St Andrews Medical School. His work examines the role of generative artificial intelligence in medical education, with particular focus on AI safety, bias, assessment, and practical implementation.
Functioning as an independent academic service, AI × MedEd bridges the gap between rapid technological development and evidence‑based educational practice. In his blog, Dr O’Malley distils the fast‑growing body of AI research into clear, actionable insights for educators, policymakers, and clinicians. Although rooted in medical education, the principles explored here apply broadly across the wider educational landscape. https://andrewomalley.substack.com
University of St Andrews staff have written a series of blog posts about their use of AI in their teaching practice. These can be accessed through the link below.
AI Resource Hub – CELPiE: Community for Evidence-Led Practice in Education
These are updated on a regular basis. If you would like to contribute a blog, please reach out to the Associate Dean Education (Science) at [email protected].