For Stefan Lang, we need a deeper understanding of the potential and limitations of AI in higher education that prioritises the ability of both humans and machines to think outside the box.

The use of generative artificial intelligence is fundamentally changing scientific work.

In recent years, a gradual digitalisation of established practices in academia (i.e. online access to publications, digital identifiers, plagiarism checks, monitoring of intellectual property, consistent bibliographies, etc.) has given way to a more abrupt transition to the use of generative tools. Ultimately, this is creating a new paradigm that may redefine what constitutes good scientific practice. As teachers in higher education, we must face this challenge by asking where to impose limits and controls on ourselves and our students, and even what to prohibit for academic and ethical reasons.

This is also a question of resources, which digital technology should help to conserve. Currently, we are observing the opposite trend, given the enormous energy consumption associated with the development and use of digital technologies. But just because something is technically feasible does not mean it has to be the first choice. Indeed, targeted use sometimes also means conscious renunciation.

Overall, we need a deeper understanding of the potential and limitations of AI in higher education. This understanding should inform comprehensive and inclusive skills development across the sector in the use of generative tools in academic practice.

Where does reproduction end and creativity begin?

With generative AI, machines now provide plausible and well-structured answers, seemingly generating new knowledge independently and with minimal human interaction, even on advanced topics and discourses. We know that what actually lies behind this is complex statistical processing performing probabilistic reasoning, but where does reproduction end and where does creativity begin? In other words, to what extent is material simply replicated, summarised or aggregated, and at what point is something new created?

In a common definition, originality and usefulness are the two essential components of creativity. Creativity can be understood as a process that combines what already exists in an unconventional way to generate something new. Isn’t this close to what AI does?

That said, while the outputs of today’s generative AI tools might be surprisingly convincing – and to some extent ‘creative’ – relying on them carries certain risks. The typical architecture of large language models (LLMs) enforces a gigantic averaging of the existing corpus of knowledge, which may also lead, in the longer term, to an averaging of education. With the AI solutions currently on the market, it is hard to imagine much room for creativity beyond what has been learned from the training data.

Can AI think outside the box?

Deep learning, which aims to simulate the complex way in which the human brain makes decisions, holds enormous power. Moreover, it extends the capacity of other, shallower machine learning techniques, but does not overcome their inherent limitations.

Considering the current architecture of artificial neural networks, can AI think outside the box? In deep learning, neurons are connected through weighted links, and the network is trained so that each input yields the output with minimal loss – in effect, the most probable one. Consequently, the output of an LLM aligns with what the broad majority of users would expect, the answer being the most likely one. Answers that have never been thought of before can hardly be generated.
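To make the mechanism concrete, here is a minimal sketch in Python. It is purely illustrative: the four-word vocabulary and the raw scores are invented for this example, not taken from any real model, but it shows why greedy decoding always surfaces the most expected continuation.

```python
import math

# Illustrative toy only: the vocabulary and raw scores (logits) below are
# invented for this sketch. A real LLM computes such scores for tens of
# thousands of tokens using billions of learned weights.
vocab = ["the", "university", "creativity", "banana"]
logits = [2.1, 1.4, 0.3, -1.7]  # hypothetical scores for the next word

def softmax(scores):
    """Convert raw scores into a probability distribution over the vocabulary."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
for word, p in zip(vocab, probs):
    print(f"{word:>12}: {p:.3f}")

# Greedy decoding: always emit the single most probable word. This is why the
# plain output aligns with what most users would expect - continuations that
# are improbable (ie, unconventional) are systematically passed over.
best_word = max(zip(probs, vocab))[1]
print("chosen next word:", best_word)
```

In practice, decoders add sampling with a ‘temperature’ parameter, which makes the choice less deterministic – but candidates are still drawn from the same learned distribution, so outputs become more varied rather than more original.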

Feeding neural networks a vast amount of technical and general knowledge enables extremely precise and well-formulated answers to virtually any question. However, it is hard to imagine this method inventing genuinely new things. The use of these generative AI tools in teaching and research therefore raises genuine concerns about an arguably inevitable averaging of knowledge and education.

We want students to learn to think

Our ability to distinguish between human-generated and machine-generated texts, images, poems, pieces of music, etc., is shrinking. So, we may also ask whether intellectual achievements can be attributed to generative AI, as well as whether its outputs can be claimed as intellectual property.

This is symptomatic of how quickly human intellectual achievement is being redefined and reduced to the act of prompting AI tools. It also raises the question of who makes the more significant contribution to AI products: those whose data has been incorporated into the models, those who develop the models, or those who come up with a prompt and generate results? Could it even be the AI tool itself?

In academia, the choice is ours to make. Do we really want students to just replicate what has come before? Probably not. We want students to learn to think.

What does thinking mean? The Latin verb cogitare provides a good hint as to the original meaning: derived from co-agitare, it literally means ‘to shake together’. Thinking does not necessarily mean producing something completely new, but rather being creative, i.e. combining or reworking things that we already know in an unconventional way. In that sense, let’s try to teach generative AI to start thinking outside the box too.

Author

Stefan Lang
University of Salzburg
Stefan Lang is Professor of Earth Observation and Geoinformatics at the University of Salzburg, Austria, where he is also Dean of the Faculty of Digital and Analytical Sciences. A former Vice-Rector for Digitalisation and International Affairs at the same university, he now serves as Advisor to the Rector for Digitalisation.