Deakin Pro Vice-Chancellor Matthew Clarke says universities need to provide more guidance for the use of AI in research

03 July 2023

This article originally appeared in The Australian on Monday 3 July 2023.

Late last year, public access to generative artificial intelligence was made easier with the launch of ChatGPT. Almost immediately, those involved in education began to consider the consequences of this tool for their students. Initial responses ranged from banning this and similar commercial tools to fully embracing generative AI as the vanguard of an education revolution.

While there is not yet uniformity across universities in how to approach generative AI in teaching and learning, the trend is towards acceptance that this tool presents opportunities and that universities have a responsibility to students to ensure it is used appropriately. This has invariably involved updating policies and procedures around plagiarism and contract cheating, as well as revising assessment tasks and modes of assessment.

But in addition to teaching, the other primary role of universities is to produce research. Universities have only more recently begun to focus their attention on the impact of generative AI on the research activities of their staff. In many universities these efforts have been a second-tier priority, with guidance for staff on the use of generative AI in research lagging behind the advice and support provided for teaching and learning.

Generative AI has equally powerful consequences – both positive and negative – for research.

Research undertaken at Australian universities is highly regulated, through external legislation as well as internal policies and procedures. These regulations ensure research is undertaken in an ethical and safe manner that protects not only those involved but also intellectual property and future commercialisation (where appropriate).

Generative AI can enhance research activities, but it can also impinge on the protections that support such research.

As with its impact on teaching and learning, the practice of using this new tool in research is still evolving. Disciplines differ as to what is considered appropriate and whether attribution to generative AI is required. Guidance provided to university researchers needs to be cognisant of these disciplinary differences while ensuring the tenets of responsible research are not forgotten.

At Deakin University we have prepared our generative AI research guidance by drawing together those with expertise in research integrity, early adopters of these tools, and those with discipline expertise in generative AI. This combination allowed us to determine high-level principles that provide a general direction for researchers.

  • Researchers should not upload into generative AI tools any data that it would not also be appropriate to share with external organisations, companies, and competitors. Once data is uploaded into commercial generative AI tools, control over that data risks being lost. This includes third-party copyrighted data that the researcher does not own, confidential or sensitive data, human research data, and private and personal information.
  • Existing research ethics and integrity obligations remain even when using generative AI. Researchers must take reasonable steps to ensure that their use of generative AI does not breach normal academic standards. This means acknowledging the use of generative AI and not using these tools to peer review others’ work, including the examination of higher degree by research theses.
  • Researchers using commercial generative AI tools should be aware that these tools may perpetuate existing social, racial, sexual, gender, and other biases. It is important, therefore, that researchers critically review generated results for such inherent biases.
  • Higher degree by research students must further limit their use of such tools to protect the integrity of their own original contribution to knowledge. Appropriate uses may include copyediting and proofreading, but it would be inappropriate to use generative AI to substantially write or redraft parts of the thesis.

Generative AI will provide enormous opportunities to enhance research within Australian universities. However, as with teaching and learning, the full extent of its impact on research is not yet understood. Researchers require guidance now on how best to use these tools so that they do not breach research integrity or ethics.

While Deakin, like other universities, has provided such guidance, we recognise that this document must remain dynamic, subject to revision and refinement as practices and standards evolve within different disciplines over time.

Matthew Clarke is Alfred Deakin Professor and Pro Vice-Chancellor of Researcher Development at Deakin University. Jeanette Fyffe is Director, Research Training and Development at Deakin University. Peter Murphy is Senior Advisor, Office of Deputy Vice-Chancellor Research at Deakin University. Kristy Fickinger is Manager, Research Integrity and Ethics at Deakin University.

Key Fact

ChatGPT was accessed by around 100 million users in just its first two months.
