Artificial Intelligence (AI)
Artificial Intelligence (AI) refers to computer systems that can perform tasks that usually require human intelligence, such as writing text, generating images, producing music, recognising speech, translating language, and making informed decisions. AI is based on the idea of creating machines that can "think" and "learn" like humans do, using algorithms to process and make sense of large amounts of data. AI is an area of active research and development, and has the potential to revolutionise the way we teach, learn, assess and access education.
AI at Newcastle University
Students and colleagues are already making use of generative AI tools such as ChatGPT, Google Gemini, Claude and DALL·E 3 to write text and create images, and AI-powered digital assistants are being used to simplify a wide variety of everyday academic tasks. It is our role as educators to explore the opportunities these AI tools offer for our students’ educational experience, as well as to consider any potential negative impacts.
How generative AI tools work
Artificial Intelligence tools have been around for a long time, with predictive AI and machine learning models used to power a wide variety of everyday applications: voice assistants such as Siri or Alexa, recommendation engines on Netflix or Amazon, customer service chatbots, image recognition photo apps, and even Google's search algorithm. By analysing large amounts of current and historical data, these tools seek to provide insights and make future predictions. Generative AI, however, is concerned with the creation of new, imaginative and unique material such as written text, pictures, video or audio. There has been a lot of media coverage of these tools lately (in particular ChatGPT) with concerns raised about the risk they present to academic integrity.
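Before moving on to generative tools, it may help to make the predictive kind of AI mentioned above concrete. The sketch below is a deliberately minimal "people who watched X also watched Y" recommender built from co-occurrence counts in historical viewing data. The titles and histories are invented for illustration, and real recommendation engines use far richer models, but the underlying principle of mining past data for patterns is the same.

```python
# A toy "viewers who watched X also watched Y" recommender, built by
# counting which titles co-occur in the same (invented) viewing history.
from collections import Counter
from itertools import combinations

# Hypothetical viewing histories (illustrative data only).
histories = [
    {"Drama A", "Comedy B", "Thriller C"},
    {"Drama A", "Thriller C"},
    {"Comedy B", "Documentary D"},
    {"Drama A", "Thriller C", "Documentary D"},
]

# Count how often each pair of titles appears in the same history.
pair_counts = Counter()
for history in histories:
    for pair in combinations(sorted(history), 2):
        pair_counts[pair] += 1

def recommend(title):
    """Suggest the title most often watched alongside the given one."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if title == a:
            scores[b] += n
        elif title == b:
            scores[a] += n
    return scores.most_common(1)[0][0]

print(recommend("Drama A"))  # -> "Thriller C" (co-occurs three times)
```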
AI text generation tools
AI text generation tools such as ChatGPT, Google Gemini, Claude and Llama are trained on vast datasets of reference text, sourced from online books, articles, social media posts and Wikipedia pages. When responding to a user’s request, they simply select the statistical "next best word" based on the words and sentences that have gone before – influenced by information supplied in the initial prompt. In many ways, they work like a far more powerful version of a mobile phone’s predictive text feature.
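To illustrate the "next best word" idea, here is a deliberately tiny statistical text generator: a bigram model that counts which word follows which in a small invented corpus, then repeatedly picks the most frequent successor. Large language models score every possible token with a neural network trained on billions of documents rather than using simple counts, but the generate-one-word-at-a-time principle is the same.

```python
# A minimal "pick the statistical next word" text generator (a bigram model).
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_best_word(word):
    """Return the most frequent word seen after the given word."""
    return follows[word].most_common(1)[0][0]

# Generate text one "next best word" at a time, starting from a prompt.
word, output = "the", ["the"]
for _ in range(8):
    word = next_best_word(word)
    output.append(word)
print(" ".join(output))  # -> "the cat sat on the cat sat on the"
```

Note how the toy model quickly falls into a repetitive loop: fluent-looking output is driven by statistics over past text, not by understanding.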
Even though text may appear well presented and convincing, generative AI tools can – and often do – get things wrong (see Limitations of text generation tools below). And since many third-party AI-powered websites and tools are built upon these technologies, they too inherit the same weaknesses.
AI image generation tools
AI image generation tools such as DALL·E, Midjourney and Stable Diffusion are trained on large datasets of digital imagery, including photographs and artwork (along with their text descriptions). The AI learns the underlying patterns and features present in the data and, with training and fine-tuning, can use what it has learned to produce new images from natural language text prompts. The better the dataset and training, the more realistic and original the output.
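The technique behind tools such as Stable Diffusion is denoising diffusion: the model learns to remove noise that has been added to its training images, and new images are then generated by starting from pure noise and removing it step by step, guided by the text prompt. The sketch below illustrates that idea on the simplest possible stand-in for images (single numbers clustered around a target value, with a linear noise predictor in place of a neural network); it is a conceptual illustration only, not how production systems are built.

```python
# A toy NumPy sketch of the "learn to remove noise" idea behind diffusion
# models. The "images" here are single numbers clustered near 3.0.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=0.1, size=10_000)  # stand-in for training images

# Forward process: corrupt the training data with noise, remembering the noise.
noise = rng.normal(size=data.shape)
noisy = data + noise

# "Training": fit the simplest possible noise predictor, noise ~ w*noisy + b.
# A real diffusion model learns this mapping with a large neural network.
A = np.vstack([noisy, np.ones_like(noisy)]).T
w, b = np.linalg.lstsq(A, noise, rcond=None)[0]

# Generation: start from pure noise and repeatedly subtract predicted noise.
x = rng.normal()
for _ in range(50):
    x -= 0.1 * (w * x + b)  # a small denoising step

print(f"generated sample: {x:.2f} (training data clustered near 3.0)")
```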
Video and audio generation
With AI-generated "deepfake" videos in the news and AI-generated songs climbing the music charts, AI video and audio will increasingly shape the way we produce and consume media. Both forms operate in much the same way as other generative tools, in that a simple text prompt will generate an output. Generated video is not yet consistently smooth or professional, but the technology is improving rapidly; AI-generated music and synthesised voices are already quite advanced.
Limitations of text generation tools
Understanding how AI text generation tools such as ChatGPT and Google Gemini work also highlights their limitations. When viewing the output of such tools, it is easy to think they have a level of understanding of the subject being written about, but that isn't the case. They are relatively poor at synthesising information in a critical way, reflecting on real-world experiences, or summarising content from multiple sources to build an argument. Due to the way text generation AI works, facts may also be out of date or entirely fabricated, with any referenced text potentially incorrect or misleading (even if apparently obtained via an internet search).
Therefore, always question the reliability of text generation tools, apply your judgment, and fact-check the information provided.
Limitations and constraints
We asked ChatGPT the following: "Please provide a bullet-point critique on the functionality of AI text generation technologies and their limitations." This is the response returned:
- Potential for Untruths. AI text generation technologies like ChatGPT, Claude and Google Gemini can produce false or misleading information. They rely on patterns in training data, which might contain inaccuracies, leading to the propagation of misinformation and reinforcing of untruths.
- Dataset Bias. The data used to train AI models may be biased, leading to biased outputs that reflect and potentially amplify societal prejudices or stereotypes.
- No External Source Referencing. Although this area of AI text generation has improved massively, AI tools can still lack the ability to reference external sources of information correctly. This limitation prevents them from fact-checking or verifying the information included within their generated text.
- Limited Contextual Understanding. While AI models have made significant progress in understanding context, they can still struggle with complex nuances, resulting in inappropriate or inaccurate responses in certain situations.
- Lack of Explainability. ChatGPT, Claude and Google Gemini all lack transparency in their decision-making processes, making it difficult to understand why certain responses are generated. This lack of explainability hinders accountability for any errors or biases in their outputs.
- Limited Generalisation. AI text generation technologies may struggle to generalise to new scenarios or handle inputs significantly different from their training data, resulting in unreliable outputs in unfamiliar situations.
- Imaginative but Unverified. AI text generation models can produce creative and imaginative content. However, this content is usually not fact-checked or verified (despite what the tools claim), potentially leading to the generation of inaccurate or fictional information.
- Ethical Concerns. AI-generated text can be used maliciously to spread misinformation, generate fake news, and propagate harmful content, raising serious ethical concerns. The sustainability of AI technology, the use of copyright-protected materials and intellectual property, and the labour used to build and refine datasets are also areas of concern.
- Potential for Manipulation. AI-generated text can be manipulated to suit specific agendas, as these models can be fine-tuned or biased during their development.
We are committed to the critical, ethical and responsible use of generative AI tools and to preparing our students and colleagues to work effectively in an increasingly AI-enabled world.
Principles for the use of AI
New and rapidly evolving AI tools will undoubtedly influence and change how our students approach their studies and research projects. To stay relevant, we need to change with them and reconsider the way we deliver teaching and assessment at Newcastle University. More importantly, we need to recognise the significant benefits of these tools – and how we can all use them to best effect – rather than seeking to restrict their use.
Newcastle University's position is therefore not to prohibit the use of AI tools, but rather to offer colleagues and students support and guidance on how to use these tools responsibly, critically and ethically. Here we present our five Principles for the use of AI, which align with the Russell Group principles and reflect a greater emphasis on our long-term approach to living with evolving AI tools.
1. Students and colleagues will be supported in developing their AI literacy
Principle 1: Students and colleagues will be supported in developing their AI literacy, enabling them to critically, effectively, responsibly, and ethically communicate with and use AI tools.
We will prioritise AI literacy to equip our students and colleagues with the knowledge and skills needed to use AI technologies effectively and responsibly. Understanding the potential uses of AI, as well as their limitations and ethical issues, will support us in using these tools effectively - as well as thinking critically about their output.
By increasing AI literacy, our students will develop the skills needed to use these tools appropriately throughout their studies and future careers. It will also ensure that colleagues have the necessary skills and knowledge to deploy these tools in support of student learning, and to adapt their teaching and assessment practices to include effective use of AI.
Developing AI literacy will involve consideration of the following:
- Privacy and data considerations: whether a generative AI tool is designed to learn directly from its users’ inputs or not, there are risks to privacy and intellectual property associated with the information that students and staff may enter into these tools.
- Potential for bias: generative AI tools produce answers based on information generated by humans, which may contain societal biases and stereotypes that, in turn, may be replicated in the generative AI tool’s responses.
- Inaccuracy and misinterpretation of information: the data and information within generative AI tools are drawn from a wide range of sources, including those that are poorly referenced or incorrect. Similarly, unclear prompts may be misinterpreted by generative AI tools, producing incorrect, irrelevant or out-of-date information.
- Accountability: The user is ultimately accountable for ensuring the accuracy of information generated by AI tools, whatever its final form or intended use. Thinking critically about input to and output from AI tools is therefore vital.
- Ethical considerations: users of generative AI tools should be aware that while ethics codes exist, they may not be embedded within all generative AI tools and that their incorporation, or otherwise, may not be something that users can easily verify.
- Plagiarism: generative AI tools re-present information developed by others, so there is a risk of plagiarised content and/or copyright infringement; artwork used to train image generators may have been included without the creator’s consent or licence.
- Exploitation: the process by which generative AI tools are built can present ethical issues. For example, some developers have outsourced data labelling to low-wage workers in poor conditions.
2. Teaching and assessment strategies will be adapted to incorporate AI
Principle 2: Teaching, assessment, and student experience strategies will be adapted to incorporate ethical use of AI tools.
We continually update and enhance our pedagogies and assessment methods in response to drivers including new research, technological developments and workforce needs. Adapting to the use of generative AI technology is no different. Incorporating the use of generative AI tools into teaching and assessment has the potential to enhance the student learning experience, improve critical reasoning skills and prepare students for the real-world applications of the generative AI technologies they will encounter beyond university.
We encourage colleagues to explore potential uses of AI technologies in the advancement of education and the student experience, aligning with our core values of excellence, creativity, and academic freedom. All colleagues who support student learning should be empowered to design teaching sessions, materials and assessments that incorporate the creative use of generative AI tools where appropriate. Professional bodies will also have an important role in supporting universities to adapt their practices, particularly in relation to accreditation.
The appropriate uses of generative AI tools are likely to differ between academic disciplines and will be informed by policies and guidance from subject associations, e.g. Professional, Statutory and Regulatory Bodies. We will therefore encourage Schools to apply institution-wide policies within their own context. We will also encourage consideration of how these tools might be applied appropriately for different student groups or those with specific learning needs.
Engagement and dialogue between academic staff and students will be important to establish a shared understanding of the appropriate use of generative AI tools. Ensuring this dialogue is regular and ongoing will be vital given the pace at which generative AI is evolving.
In line with our core values of equality, diversity, and inclusion, we should explore the potential of generative AI technologies in supporting students and colleagues with disabilities or those whose first language is not English. We must also consider the implications and mitigate issues of equity of access when AI tools are deployed behind paywalls and actively used for educational purposes.
3. Academic integrity and rigour in assessment will be upheld
Principle 3: Academic integrity and rigour in assessment will be upheld.
We have reviewed our academic misconduct policy to reflect the emergence of generative AI.
We will provide transparent information and guidance and make it clear to students and colleagues where the use of generative AI is appropriate and where it may constitute academic misconduct. The information and guidance provided is intended to support students in making informed decisions and to empower them to use these tools appropriately and acknowledge their use where necessary.
We will also promote academic integrity and the ethical use of generative AI by cultivating an environment where students can ask questions about specific cases of their use and discuss the associated challenges openly and without fear of penalisation.
We will maintain our policy of not defaulting to AI checkers for all assessed work. AI text checkers often lack accuracy, provide insufficient explanations about how they generate scores and what those scores mean, and are typically designed to detect specific versions of specific language models. Instead, our efforts should focus on developing students' understanding of academic integrity and improving our assessment methodologies.
As with our teaching, we will continually update and enhance our assessment methods and strategies in response to drivers including new research, technological developments and workforce needs. We will ensure assessments are rigorous and fair, assess the intended learning outcomes, and uphold academic integrity.
4. Innovation, collaboration and sharing best practice will be fostered
Principle 4: A culture of innovation, collaboration, and sharing of best practice in the application of AI tools will be fostered.
Navigating this ever-changing landscape will require collaboration between universities, students, schools, FE colleges, employers, and sector and professional bodies. This will also include the ongoing review and evaluation of policies, principles and their practical implementation.
As generative AI tools evolve, there will be opportunities to innovate and explore potential applications to improve teaching, learning, assessment and the wider student experience. We learn as much from failure as success, and thus we should feel safe sharing lessons learned from ineffective endeavours as much as sharing those that are successful.
We will regularly evaluate policies and guidance for staff and students relating to generative AI tools and their impact on teaching, learning, and assessment practices. This will include monitoring the effectiveness, fairness, and ethical implications of the integration of generative AI tools into academic life, and adapting policies and procedures to ensure they remain valid as generative AI technologies evolve.
It will be crucial to foster relationships between higher education institutions, schools, employers, the professional bodies that accredit degrees, AI experts, and leading academics and researchers, and to ensure an interdisciplinary approach to addressing emerging challenges and promoting the ethical use of generative AI. We recognise the challenges that lie ahead and will continue to value the input of others, while contributing our expertise to national and international discussions about generative AI and its applications within teaching, learning, assessment and support.
5. We will adapt as AI technologies evolve
Principle 5: We will maintain a dynamic position and adapt as AI technologies evolve.
As AI technologies evolve, the future opportunities and implications they could generate are difficult to predict. Acknowledging this, and recognising the need to be agile in our response, is critical to delivering effective teaching, learning and assessment, ensuring a good-quality student experience, and best preparing our graduates for work and life in an increasingly AI-enabled world.
Our Academic Skills Kit website and this Learning and Teaching @ Newcastle website have been recognised for their quality and depth of information and guidance. However, we must continuously evolve the information, resources and guidance within them to keep pace with the evolving AI technology landscape.
What do you need to do?
Colleagues are encouraged to consider and implement the five Principles for the use of AI at Newcastle University, as listed above. To help you do this, you will find advice and best-practice guidance for the use of AI in teaching and assessment below. To complement the information available on this site, colleagues in Newcastle University Library have also developed a range of AI information literacy resources for students, detailing how to critically evaluate, acknowledge and reference AI-generated content.
"AI really is going to revolutionise how we write and access information... If we understand more about the technology, and more about its limitations as well as capabilities, we’ll be in a good position to make the most of it."
Learning the basics
Colleagues who support student learning have a responsibility to learn about and teach with AI. Incorporating the creative use of generative AI tools into teaching sessions, materials and assessment practices has the potential to enhance the student learning experience, improve their critical reasoning skills, and prepare them for the real-world applications of AI that they will encounter beyond university.
To gain the basic skills and knowledge needed to start using AI in support of student learning, LTDS and Newcastle University’s Library have created a short AI for Educators Canvas Course.
Microsoft Copilot
Newcastle University colleagues and students now have access to Microsoft Copilot in Edge, which allows you to use the latest generative AI tools (GPT-4 for text generation and DALL·E 3 for image generation) for free with your University Microsoft account.
- Microsoft Copilot can help with tasks such as drafting written content, developing presentations, and creating images
- Copilot also comes with enterprise data protection so you and your students can use it safely and securely
- The information you enter into Copilot is not used to train the AI model
Remember: it's Copilot, not autopilot! Use the platform as a tool to help you and your students be more productive. As with all generative AI platforms, however, you will always need to sense-check and verify its outputs.