Workshop 1: Designing Scenario-Based Language Assessments with the Assistance of AI Technologies
Soo Hyoung Joo, Daniel Eskin, James E. Purpura & Giulia Peri
Scenario-based language assessment (SBLA) is an assessment technique with the potential to measure 21st-century language-driven competencies by situating learners in “scenarios” with an overarching scenario goal that requires them to work collaboratively with simulated characters in a social context online. Recent advances in AI technologies offer the opportunity for SBLA test developers to propose scenarios and scenario narratives, create content, generate items, deliver assessments online, score performance, provide static and interactive feedback, and simulate real-life collaborative problem-solving in virtual environments. While the construction of SBLAs seems daunting, AI technologies have made these assessments much more feasible to develop while also expanding the possibilities for what knowledge, skills, and abilities can be assessed.
This two-day workshop first introduces participants to SBLA by discussing how SBLAs can be conceptualized, designed, and ultimately validated. A traditional classroom-based test (the sugarbeet activity) will be used as a basis for illustrating SBLA design principles within a learning-oriented language assessment framework. Following this discussion, participants will be guided through the process of developing an SBLA similar to the model, and AI technologies will be introduced to assist with each stage of development. Participants will then be asked to use AI tools to identify a scenario goal and problem-solving narrative. Next, they will generate a sequence of related tasks within the narrative (for reading, listening, writing, speaking, and topical understandings) to elicit performance indicators, again with AI assistance. Once the performance indicators are developed, participants will focus on incorporating other features of SBLA into the design that could moderate performance. For example, they will design chats that move the scenario from scene to scene, create static and dynamic feedback tied to the performance indicators, and possibly embed an explicit instruction component in the narrative. At the end of the workshop, participants will present their SBLAs in a PowerPoint presentation and engage in a discussion.
Intended Learning Outcomes
By the end of the workshop, participants will be able to:
- Develop a basic understanding of SBLA design principles and how AI technologies can be used to assist in the SBLA development process
- Use the model exam to specify the contextual dimension of the assessment event (purpose, target language use domain)
- Use AI tools (e.g., ChatGPT, Heygen, Synthesia, Cathoven) critically to assist with generating a scenario narrative along with a sequence of tasks to elicit performance indicators within the narrative
- Use AI technologies to engineer performance moderators (chat interactions, feedback, explicit instruction)
- Develop an SBLA similar to the model and present it to the group for feedback.
Workshop Content
Day 1: Foundations of SBLA
- Introduce SBLA design principles and practices using the sugarbeet activity as a means of anchoring the discussion
Day 2: Designing & Engineering Performance Indicators
- Introduce AI technologies to assist with the identification of a scenario goal and narrative, and the development of tasks to elicit performance indicators (listening, reading, writing & speaking sections; topical knowledge tasks)
- Introduce AI technologies to develop performance moderators by integrating chats, feedback, and explicit instruction into the narrative
Engagement Methods
- Presentation of the theory of SBLA, anchoring it to a classroom test (Sugarbeet activity)
- Interactive demonstrations of AI tools for several purposes (e.g., item generation)
- Small-group tasks for each component of the workshop, from discussing theory to developing the SBLA
- Collaborative critique and feedback sessions
- Live tool exploration (hands-on AI-assisted generation of narratives, inputs, and items)
- Small-group PowerPoint Presentation of final projects
Participant Background
Participants should have a background in language assessment design (e.g., task-based or performance-based assessment) and basic digital literacy (familiarity with Google Slides and PowerPoint). No prior coding or AI expertise is required.
Pre-Workshop Activities
To gain the most from this workshop, participants should come prepared by completing three sets of activities beforehand.
- Required Preliminary Reading
○ Purpura, J. E. (2021). A rationale for using a scenario-based assessment to measure competency-based, situated second and foreign language proficiency. In M. Masperi, C. Cervini, & Y. Bardière (Eds.), Évaluation des acquisitions langagières: Du formatif au certificatif. MediAzioni 32: A54-A96. http://www.mediazioni.sitlec.unibo.it. ISSN 1974-4382.
○ Purpura, J. E., & Liu Banerjee, H. (in press). Developing a scenario-based language assessment using a learning-oriented assessment framework as an approach to conceptual design and validation. Language Assessment Quarterly.
- Required Pre-Workshop SBLA Experience (15-20 minutes): You will be given a link to take a brief SBLA online before the workshop. This hands-on experience will help familiarize you with what an SBLA is.
- Reflect on the sugarbeet activity: Look carefully at the activity. As this activity forms the basis for creating an AI-enhanced SBLA, you should be familiar with the activity. To help you review it, we will provide a set of guiding questions to respond to before the workshop.
Soo Hyoung Joo
Soo Hyoung Joo is a doctoral candidate in Applied Linguistics (Second Language Assessment track) at Teachers College, Columbia University. Her research focuses on learning-oriented and scenario-based language assessment and the use of AI and technology for assessment design and validation. She leads several international SBLA projects investigating domain knowledge, collaboration, and feedback in AI-mediated performance assessment.
Daniel Eskin
Daniel Eskin is a doctoral student in Applied Linguistics (Second Language Assessment track) at Teachers College, Columbia University. His research centers on L2 acquisitional patterns in pragmatic development and the assessment of L2 pragmatics, with a particular interest in integrating conversational AI into interactive assessment contexts.
James E. Purpura
James E. Purpura (Ph.D., University of California, Los Angeles) is Professor of Language and Education at Teachers College, Columbia University, and the Director of the SBLA Research Lab. His work has advanced theoretical models of language ability, validity theory, and learning-oriented assessment, and he has authored numerous publications in language testing and assessment.
Giulia Peri
Giulia Peri (Ph.D., University for Foreigners of Siena) is a Research Fellow at the CILS Centre – Certification of Italian as a Foreign Language at the University for Foreigners of Siena, Italy. Her research focuses on L2 Italian teaching and learning, language testing and assessment, and technology-enhanced testing. She contributes to the SBLA Lab’s international collaborations on multilingual scenario-based assessments.