AI avatars are opening new possibilities for teaching and learning, for example in multilingual online courses. But as universities experiment with these technologies, we must urgently address the ethical, pedagogical and institutional implications of ‘avatarised’ educators.

AI technologies are changing the way we teach. One prominent example is the rise of synthetic video avatars that can speak multiple languages, mimic facial expressions and represent teachers in digital learning environments.

At first glance, these tools offer exciting opportunities. For example, in a Unite! European Universities alliance project, we were able to produce Massive Open Online Courses (MOOCs) with synchronised AI avatars in English, Spanish, Portuguese and more, without the need for each lecturer to film themselves again and again. In short, this technological leap has allowed us to scale and localise educational offerings more quickly and cost-effectively than ever before.

Interestingly, the avatars’ voices were so realistic that we received comments – some sincerely impressed, others slightly ironic – praising our language skills. Colleagues and students asked how we managed to speak such fluent Spanish or Portuguese. These reactions reveal just how convincingly synthetic avatars can simulate authenticity – and how easily human presence can be imitated.

But as we begin to use these tools in practice, and not just study them in theory, new questions are surfacing. Beyond the enthusiasm for innovation, we are entering ethically sensitive territory, and universities must start to prepare frameworks for how AI-generated representations of educators are to be created, used and governed.

Who owns your AI avatar?

As educators who have begun working with AI-generated avatars of ourselves, we find ourselves wondering: who owns these digital representations?

In many cases, universities provide the funding and infrastructure to generate the avatars – often using commercial tools. But does this give the institution a form of ownership over the avatar’s use? Could our digital selves be used, for instance, in other courses, without our active involvement? Would it be acceptable, or even expected, for an avatar to continue teaching after a lecturer leaves the university? Or retires? Or even dies?

These questions are not just hypothetical. They speak to a deeper concern: in a world where our image, voice and presence can be simulated and reused, what rights do educators retain over their identity? And how do we ensure that avatars are used with the consent and control of those they represent?

Teaching in the age of 24/7 AI support

At the same time, many educators are actively developing AI tools themselves – avatars, chatbots, or conversational assistants trained on course content or lecture transcripts. These can offer round-the-clock support to students, answer frequently asked questions, or simulate tutoring sessions.

But here too, ethical and institutional questions arise. If a teacher invests time and effort to create such a tool:

  • Is it considered part of their teaching workload?
  • Should it count towards their teaching credit or reduce other duties?
  • What happens if the chatbot gives incorrect or biased responses, or says something inappropriate?

These concerns go beyond technical issues. They reflect the evolving nature of academic responsibility. If a chatbot ‘speaks’ in my voice, who is accountable for its content? What happens if it contradicts what I teach in class – or what my institution stands for?

From innovation to regulation

AI avatars and related technologies can enrich education, especially in cross-border and multilingual contexts. But universities must move quickly from experimentation to the development of governance structures.

Ethical reflection must accompany innovation. This includes:

  • Informed consent and control: Educators should have clear rights over their avatars, including the ability to approve or withdraw usage.
  • Transparency: Students should always know when they are interacting with an AI-generated representation rather than a live teacher.
  • Attribution and responsibility: Institutions need to define who is accountable for AI-generated content, especially when things go wrong.
  • Recognition of effort: If educators contribute to AI-driven tools that extend their teaching presence, this should be acknowledged in workload and performance assessments.

This is not only a matter of fairness, but of viability. If educators fear that their digital selves can be repurposed without permission – or that their AI contributions are not valued – they will rightly hesitate to engage with these technologies.

Now is the time to act and build trust

AI avatars offer real potential to extend and diversify higher education. They allow for multilingual teaching, increased accessibility and new forms of learner engagement. But as we begin to work with them not just as tools, but as representations of ourselves, the lines between presence, identity and responsibility blur.

We must not postpone the ethical debate. Now is the time for universities to establish principles and policies around avatar-based teaching. As with all digital innovation, the goal must be to support, not undermine, academic integrity, autonomy and trust.

Authors

Martin Ebner
Graz University of Technology
Martin Ebner is Head of the Department of Educational Technology and Senior Researcher at the Institute of Human-Centred Computing at Graz University of Technology, Austria. He researches Technology Enhanced Learning, with a focus on MOOCs, AI in education and open educational resources.
Sandra Schön
Graz University of Technology
Sandra Schön is Senior Researcher at Graz University of Technology, Austria, where she focuses on open educational resources, AI-enhanced teaching and emerging practices in university alliances.