ESA: Making AI safe for Australian classrooms

What does it mean to make generative AI ‘safe’ for education?

As the use of this emerging technology spreads throughout our classrooms, education leaders have been hard at work addressing this question.

The AI revolution in education

The launch of OpenAI’s ChatGPT in late 2022 and the subsequent media attention on its capabilities sparked widespread discussion about the dawn of a new era in technological innovation. From revolutionising the creation of teaching materials to increasing accessibility, AI’s potential for positive change in education seems infinite. However, delivering on that promise in our classrooms depends on addressing a number of challenges.

When considering AI in classrooms, a critical question arises:

“How can we effectively integrate AI in education while ensuring ethical and educational integrity?”

This query underscores the need to balance AI’s transformative potential with thoughtful, practical management of the numerous complexities it introduces into the educational landscape.

In 2023, ensuring academic integrity, protecting student privacy, and maintaining equity and access for all students dominated conversations about AI in education. Discussions covering how it will reshape the skills required for future careers also gained significant attention. This employment market shift was emphasised in the Productivity Commission’s 2023 Interim Report. As a result, educators are now faced with the challenge of modifying their teaching practice to accommodate the changing demands for skills.

With technology evolving at a rapid pace, AI-enabled education products will inevitably become widely available. In anticipation of the challenges they may present, the team at Australia’s national education technology organisation, Education Services Australia (ESA), has been proactively preparing.

Over the last year, ESA has been collaborating with key education bodies to start addressing the challenges introduced by AI and ensure that its deployment in classrooms safely enhances the educational landscape for students and teachers alike.

The ethical dilemma of AI in education

The integration of AI in education brings forth an array of ethical concerns, paramount among them being privacy, academic integrity, and equitable access. While it holds the potential to revolutionise the educational landscape, its implementation raises critical questions about how it aligns with the fundamental values of the educational system.

Ensuring student privacy in an age where AI technologies can process vast amounts of personal data is a key challenge. The concern is not just about unauthorised access to data, but also about how this data is used, potentially impacting student privacy and safety.

Significant debate also surrounds academic integrity. With AI’s capabilities in generating and processing information, there’s a strong concern about its potential misuse for plagiarism or cheating.

From a socioeconomic perspective, equitable access is a crucial issue potentially exacerbated by AI. The digital divide is real, with more affluent families being better placed to access more advanced tools compared to those who are less fortunate. This discrepancy could widen existing educational inequalities, counteracting the potential benefits that AI might bring.

Recognising these challenges, a report commissioned by ESA, entitled ‘AI in Australian Education Snapshot: Principles, Policy and Practice’, emphasises the importance of establishing a base level of safety in the short term. This foundational safety net is crucial to addressing immediate concerns around privacy, integrity, and equity before AI technologies are widely adopted in educational settings.

To achieve base-level safety, ESA’s 2023-24 National Schools Interoperability Program (NSIP) workplan will add a specialised workstream to its Safer Technologies 4 Schools (ST4S) initiative.

ST4S standardises the evaluation of digital products and services in Australian and New Zealand schools, ensuring consistent security and privacy controls. Education product developers can undertake a self-assessment and, when ready, submit their product for a free safety compliance assessment. Successful products display an ST4S badge on their website. The new workstream will examine how ST4S can be extended to cover AI-enabled educational technology.

The work will create a Privacy and Information Security Technical Standards Framework, updating the current ST4S initiative with specific privacy and security guidelines for AI-powered educational technology. Additionally, Human Rights and Wellbeing Standards will be established to verify edtech developers' claims of explainability, non-discrimination, and contestability in AI-enabled products.

Implementing AI in classrooms: the road to policy creation

Last year, Australia’s education ministers established a National Taskforce to provide advice on developing evidence-based, best practice frameworks to guide schools on the use of generative AI tools.

The taskforce included members from the Australian Government, states and territories, the Australian Education Research Organisation (AERO), the Australian Institute for Teaching and School Leadership (AITSL), and ESA.

Members from ESA included CEO Andrew Smith and General Manager of Assessment Systems and Data Standards, Stuart Mitchell. The taskforce consulted widely with government and non-government school sectors, education unions, AI and education experts and others, including First Nations people.

As part of this work, ESA commissioned the ‘AI in Australian Education Snapshot: Principles, Policy and Practice’ report, authored by Daniel Ingvarson and Beth Havinga (August 2023). The report outlines the challenges and strategies for implementing AI in Australian schools, emphasising safe, ethical principles. It segments actions into short, medium, and long-term categories to help plan a manageable integration of AI into classrooms.

The taskforce’s work resulted in the creation of the Australian Framework for Generative Artificial Intelligence in Schools, or ‘the Framework’ (released November 2023). The Framework was unanimously approved at an education ministers’ meeting on 5 October 2023, paving the way for the ban on the use of generative AI products in Australian schools to be lifted from Term 1, 2024.

Alongside the Framework’s approval, investment was committed to this work, and ESA will be responsible for setting ‘product expectations’ for generative AI technologies in education. ESA’s work will raise the standards for such technologies, establishing a technical framework to guide product development and enable risk assessments of artificial intelligence technologies.

The work will be split into three workstreams aimed at introducing AI ethically, effectively and safely into classrooms. Ethical challenges will be covered through ESA’s ST4S and NSIP 2023-24 workplan (mentioned above), while practical challenges will be addressed via a collaboration with AERO.

Practical challenges

Primary challenges in the practical application of AI in education include ensuring its efficacy and ongoing safety. ESA’s Snapshot report addresses these concerns in its medium-term actions.

While artificial intelligence technologies promise to revolutionise education, their actual impact on learning outcomes, teaching methodologies and workload is a subject of ongoing research and debate. Collaborating with AERO, ESA will work to ascertain the effectiveness of AI tools in enhancing the learning experience and ensure these technologies are safe and align with educational goals.

Another significant challenge is maintaining the ongoing safety of AI technologies in educational settings. With rapidly updated iterations being produced, ensuring that these tools remain secure and do not compromise student data or privacy is an ongoing concern. ESA’s Snapshot report recommends continuous monitoring and assessment of AI technologies to safeguard against potential risks and vulnerabilities.

Supporting an empowered future for educators

To ensure AI is safe for education, it’s essential to develop practical, ethical, and implementable policies and frameworks, along with taking a dynamic approach in the long-term. ESA’s Snapshot report highlights the need for policymakers to address complex ethical and jurisdictional issues dynamically as new technologies emerge. Adaptable, flexible strategies are crucial for effectively managing these evolving challenges.

However, continuing technological advancements require more than policies, standards and frameworks to support educators. Practical, accessible teaching and learning tools must be created alongside them — and these tools should be easy to use and quality-assured. Currently, ESA has resources available via its website, which can be shared across school communities to help support educators in adapting their practice to the rapid pace of change.

Tools such as Scootle offer a national online library of curriculum-aligned teaching and learning resources, free to all educators in Australia. With thousands of links to online resources, teachers can find lesson plans, create learning paths for students, and curate their own personalised libraries of teaching materials. The library is regularly updated, offering educators material to support teaching, including resources covering artificial intelligence.

As workforce needs evolve in response to the introduction of new technologies, classroom teaching will play an essential role in preparing students for a constantly changing environment. With freely available career planning tools and webinars covering industry profiles and emerging jobs, myfuture supports teachers and parents in preparing students for the transition to work.

Although the rapid and multifaceted changes heralded by generative AI can feel overwhelming, pragmatic steps are already being taken to ensure the safety, efficacy and equity of tools that will soon enter Australian schools. With the right protection and support, students and educators will stand to benefit enormously from the safe implementation of AI in classrooms.

