Are there any trends that can democratise these AIs such that students, teachers, or even educational institutions can create their own AI systems without the level of skill that companies like OpenAI can wield? Looking forward, do you see an environment where universities are building and fine-tuning custom models, or uses of models, for their own use cases – a kind of Cambrian explosion of gen AI solutions across the sector? Or instead will the sector gravitate to large providers with centralised solutions, like we have now with the Turnitins and ServiceNows of the world?

There are two levels here. One is creating the foundation models, the AI engines (like GPT-4 from OpenAI, or LLaMA from Meta), that power these generative AI tools. The other is creating a tool on top of foundation models that can be steered or fine-tuned to the needs of education. On the first (creating our own foundation models) – with current technology, it can take tens to hundreds of millions of dollars to train these more powerful models, so it's unlikely that we will be developing our own. On the second, the fine-tuning or steering of these models is much more accessible to education. Fine-tuning involves additional training to tweak the 'weights' (the strengths of the connections between the artificial neurons) in the base model – this is much more affordable and may be a good avenue to explore. The best approach for now for educational institutions might be to create AI systems that are augmented with strong prompts, designed by educators to steer the AI in certain directions, and with subject-specific resources that the AI can draw from – this is something we are exploring with an AI tool we are building called Cogniti (https://cogniti.ai/).
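To make the 'steering and resourcing' idea concrete, here is a minimal sketch of prompt-based steering: an educator-written system prompt plus subject-specific resources injected into the model's context. It uses the OpenAI Python SDK purely for illustration – the unit, the prompt wording, and the load_unit_resources helper are hypothetical stand-ins, and this is not how Cogniti itself is implemented.

```python
# Minimal sketch: steering a foundation model with an educator-designed
# system prompt and educator-curated resources. Assumes the OpenAI Python
# SDK (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Educator-designed prompt that steers the AI in a certain direction.
SYSTEM_PROMPT = (
    "You are a study assistant for BIOL1001 (hypothetical unit). Guide "
    "students towards understanding with questions and hints rather than "
    "giving answers outright, and ground your responses in the unit "
    "resources provided below."
)

def load_unit_resources() -> str:
    """Hypothetical helper: return educator-curated notes for the unit.
    A real system might retrieve these from a document store."""
    return "Week 3 notes: enzymes lower the activation energy of reactions..."

def ask(student_question: str) -> str:
    """Send one steered, resourced question to the model."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model
        messages=[
            {
                "role": "system",
                "content": SYSTEM_PROMPT
                + "\n\nUnit resources:\n"
                + load_unit_resources(),
            },
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Why do enzymes speed up reactions?"))
```

The key design point is that the steering lives in the system prompt and the context, not in the model's weights – no fine-tuning is required, which is what makes this approach so much more accessible than training or fine-tuning models.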
How might AI change teachers' pedagogical approaches (as opposed to relieving administrative workload)?

Our guidance for teachers (e.g. https://educational-innovation.sydney.edu.au/teaching@sydney/prompt-engineering-for-educators-making-generative-ai-work-for-you/) has a number of examples where teachers can work alongside AI to prepare for teaching and assessment. On top of this, if teachers can appropriately steer and resource AI, this could present new opportunities for pedagogy. We are exploring a new AI tool, Cogniti (https://cogniti.ai/), to do this. For example, using Cogniti, an occupational therapy teacher can design an AI chatbot that acts as a client and also knows about the requirements of the unit of study. Teachers can then ask students to converse with the chatbot, presenting their ideas to the 'client', who will discuss whether these fit their needs and pose further questions and contexts for students to address. As another example, a teacher could design an AI chatbot to provide specific formative feedback on student writing, based on the standards and criteria of the unit, and instruct the chatbot not to rewrite content on behalf of the student. In both these examples, generative AI enables educational workflows and activities that were not possible before.
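As a companion to the occupational therapy example above, here is a minimal sketch of how a role-play 'client' chatbot might be steered across a multi-turn conversation, including a guardrail instruction like the 'do not rewrite' rule mentioned for the feedback chatbot. Again, the persona and prompt wording are illustrative assumptions, not Cogniti's actual prompts.

```python
# Minimal sketch: a multi-turn role-play chatbot that stays in character as
# a 'client'. Assumes the OpenAI Python SDK and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()

# Educator-designed persona and guardrails (hypothetical wording).
CLIENT_PERSONA = (
    "Role-play as 'Alex', a 68-year-old recovering from a stroke, meeting an "
    "occupational therapy student. Stay in character: react to the student's "
    "proposals, say whether they fit Alex's needs, and raise complications "
    "(home layout, fatigue, cost) for the student to address. Never break "
    "character to give model answers or write the student's work for them."
)

# The conversation history carries the role-play state across turns.
messages = [{"role": "system", "content": CLIENT_PERSONA}]

print("You are meeting Alex. Type 'quit' to end the session.")
while True:
    student = input("Student: ")
    if student.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": student})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Alex:", answer)
```

The same pattern, with a different system prompt (the standards and criteria of the unit, plus an instruction not to rewrite the student's text), would give the formative-feedback chatbot described above.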
In the two-lane model that you spoke about, especially in a large cohort of 1000+, how do we assess students on the process of using AI so we can give them meaningful feedback?

This is definitely a challenge. Assessing the process of learning, or indeed of using AI, is not something we are typically good at doing. In our guidance to educators at Sydney (https://educational-innovation.sydney.edu.au/teaching@sydney/what-to-do-about-assessments-if-we-cant-out-design-or-out-run-ai/), appendix 3 has some ideas around rubrics that might help to assess the process of human-AI collaboration and provide meaningful feedback.
How does AI counter what Wyndham called 'pupil wastage'? Is it only in time saved? Is it about collaborating with AI outputs?

Wyndham's idea of pupil wastage was about students being placed into secondary schools that covered subjects not suited to them, because the decision was made too early and was based on flawed assessment. If we take the general theme that this educational paradigm placed students in demotivating and less relevant educational contexts, then generative AI might have significant roles to play. For example, generative AI may help teachers (and students themselves) find connections between subjects and students' interests. We have some examples of this in our guidance for teachers. Generative AI may also be able to help students better understand topics and give their studies a boost – our guidance for students has a few examples of this as well.
What has really changed? We currently have longer careers, later retirement, and discipline-specific skills with a shorter half-life, with the majority of knowledge required for practice being acquired after graduation – emphasising the importance of literacies (personal, learning, information, and feedback) that are often overlooked in assessment. My question is: do you think the disruption caused by AI will help us to address issues we should already have addressed? Or do you think that, like many previous disruptions, universities will get interested for a while, and then things will go back to normal?

My take is that we may see a resurgence of things like graduate qualities, which have been reviled for so long, as those seem to be one of the last bastions of humanity in higher education. It's the flexibility, adaptability, creativity, interdisciplinarity, and uncommon thinking that, as you say, are often overlooked in assessment – because we're so obsessed with content. There's a definite risk that universities will treat this as a fad, although as a general-purpose technology it's unlikely to be going away. Through COVID we saw that universities can be incredibly responsive and educators incredibly resourceful. I'm hoping we can draw on this again (keeping in mind everyone is exhausted) to fix the things that have needed fixing for years – assessment being a key issue.
When do you think we'll see AGI? Would you agree that it doesn't look like the current transformer architecture will get us there?

This working paper provides an interesting perspective, amongst others, around AGI. It may be 2 years away, or 50-100 years, or more. But we need to start preparing for this (probably) eventual horizon. It's likely that non-AGI AIs released over the next few years will be more capable than GPT-4 currently is, and so we need to prepare for what these AIs mean for education now. Not being in the field of computer science, I'm not qualified to say which architecture AGI will run on – although it's interesting that OpenAI thinks that AGI will only be invented once, which I read to mean that once AGI is invented, it will change humanity so dramatically, and be so capable of building more of itself, that it will never need to be invented again.
How do you think generative AI may impact language assessment, in particular reading and writing skills, and how should reading and writing assessments be modified to incorporate the use of AI productively?

This is a very important and context-sensitive question. In some subjects, expression is not as important a learning outcome as other disciplinary skills. For example, in software engineering it is perhaps more important for students to apply knowledge about how users perceive user interfaces to their designs than to be able to write at length about this. But in language subjects, core learning outcomes are around the ability to use language, to read, and to write. In these contexts, it's critical for us to be able to assuredly evaluate whether students can do this. But it's also important for us to motivate students to want to develop these skills, and to consider how these skills might look in society and workplaces where AI is so prevalent. Our two-lane approach tries to encapsulate this – lane 1 being the former, and lane 2 being the latter. For lane 2 assessments, perhaps we have (non-secure) assessments where students need to work alongside an AI to produce a piece of writing, and along the way document the process of this collaboration: critiquing, improving upon, and otherwise demonstrating critical engagement with the AI, in a manner that is authentic to their futures. For the corresponding lane 1 assessments, students would need to demonstrate, in a secured environment, that they have these skills. For language assessments, these might be short viva voces or interactive oral dialogues, or supervised reading and comprehension exercises.
Are there sites that are recommended and sites to avoid?

We've curated a highly selective set of sites and resources here: https://bit.ly/usyd-aied-links
Apart from adapting our assessments, how do teachers know how to prepare students to be successful in an AI world when, by the time they leave school, AI will be totally different from now?

AI is advancing at a breakneck pace. A large part of being successful in an AI world will probably involve fundamental AI literacies and strong principles around the use of AI. This might include a foundational understanding of how AIs work, which will help students appreciate how to productively interact with these tools. A strong set of principles will act as a compass to guide their responsible use, including an appreciation of the ethical, legal, privacy, and other considerations surrounding AI. The way to develop these will be to use these tools critically, discuss them with teachers and peers, and read widely. Many of these qualities are captured in the 'graduate attributes', 'graduate capabilities', or 'graduate qualities' statements that almost all universities publish and endeavour to build in all students. Sydney's graduate qualities include problem solving, communication, information and digital literacy, cultural competence, inventiveness, and influence – these, and others, will be critical in helping prepare our students to be successful in an AI world.
Your two-lane approach requires a greater investment of time and work from academic staff. How do we integrate this into the current business model of universities? Everyone is already overstretched and exhausted.

The response to AI will require a concerted and continued effort – so something else needs to give. Part of the solution here might be AI itself – there are many ways that AI could save time, such as helping with administrative or preparatory workloads. Instead of assessing the way we currently do, we need to change (and hopefully not add to) our current methods of assessment, shifting towards more process-oriented assessments. If we can slowly shift to program-level assessment, this may end up reducing the overall assessment (and marking!) load, because there would be fewer key assessments spread across units/subjects instead of high-stakes assessments in every unit/subject. Undoubtedly, the current business model is not friendly towards educational development, and university leaders need to address this urgently in the face of an exhausted workforce and significant challenges from the technological context. Part of this will involve helping everyone across the university realise and appreciate, through experience, the transformative nature of generative AI on all aspects of knowledge work.