Artificial Intelligence (AI) and Content Production

The rapid rise of AI tools presents both opportunities and risks for content creators.

What is AI? 

AI refers to computer systems that can perform tasks that usually require human intelligence. This includes writing text, generating images, producing music, recognising speech, translating languages, and making informed decisions.

AI is based on the idea of creating machines that can 'think' and 'learn' like humans do, using algorithms to process and make sense of large amounts of data.  

Some people are concerned that AI is taking jobs, but we prefer to think that AI changes roles rather than replacing them. AI provides the technology to speed up elements of content creation, giving us more time to think about our messaging and product.

Students and colleagues are already making use of AI tools to write text, create images and more. These include ChatGPT, Google Bard and DALL·E 2. For example, when researching a topic, tools such as ChatGPT can be a useful starting point. But the human element is always needed.

Avoiding the risks 

AI is here to stay, and we all need to learn how best to use these tools in our work. However, we also need to be aware of the risks:

  • Privacy and data considerations: there are risks to privacy and intellectual property associated with the information that is entered into AI tools. This applies both to AI tools designed to learn from users' inputs and to those that are not.
  • Potential for bias: AI tools produce answers based on information input by humans. This information may contain biases and stereotypes which could be replicated in responses. 
  • Inaccuracy and misinterpretation of information: information from AI tools comes from a range of sources, including some that are poorly referenced or incorrect. Similarly, unclear inputs may be misinterpreted and produce incorrect, irrelevant or out-of-date information.
  • Accountability: the user is accountable for ensuring the accuracy of information generated by AI. Thinking critically about input to and output from AI tools is vital. 
  • Ethical considerations: users of AI should be aware that while ethics codes exist, they may not be embedded within all AI tools, and users may not be able to verify easily whether a given tool incorporates them.
  • Plagiarism: AI presents information developed by others, so there is a risk of plagiarised content and/or copyright infringement. For example, image generators may use artwork without the creator's consent or a licence.
  • Exploitation: the process by which AI tools are built can present ethical issues. For example, some developers have outsourced data labelling to low-wage workers in poor conditions. 

Data privacy

  • Data privacy and security: uploading sensitive or personal data to ChatGPT could lead to privacy breaches. If the uploaded data includes personal, confidential or proprietary information, there is a risk it could be accessed by unauthorised parties, particularly if the platform's security measures are not robust.
  • Data misuse and exploitation: there's a risk that the data uploaded could be misused. For example, personal information could be used for purposes other than those for which it was intended, such as training other AI models without consent.
  • Inadvertent data sharing: when you interact with AI, the data you provide can be used to improve the tool's performance. This means that any information, including sensitive data, could become part of the training set for future iterations of the AI, potentially leading to unintended sharing of information.
  • Lack of anonymity: in some cases, data uploaded to AI platforms may not be anonymised. If identifiable information is included, it could be traced back to the individual or organisation, raising privacy concerns.
  • Compliance with regulations: uploading data to AI may have implications for compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe. Non-compliance can result in legal issues and penalties.
  • Intellectual property risks: if the data uploaded contains intellectual property (IP), there's a risk that it could be replicated or used by AI in a way that infringes on IP rights. This could lead to legal challenges or loss of proprietary information.
  • Dependency on third-party platforms: relying on external AI platforms for data processing or storage creates dependency. Any changes to the platform's policies, pricing or availability can affect users who have uploaded data.
  • Algorithmic bias and errors: the AI's responses are based on its training data, which can include biases or inaccuracies. Uploading data and relying on the output without review can repeat biases or lead to decisions based on incorrect information. 

To mitigate these risks, it's advisable to avoid sharing sensitive, personal, or proprietary data with ChatGPT or similar platforms. It is also important to be aware of their terms of service and privacy policies.

Additionally, staying informed about data protection laws and regulations can help ensure compliance and safeguard against legal issues. 
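
As a purely illustrative sketch of the advice above about not sharing personal data, the short Python script below screens a draft for obvious personal data (an email address and a phone number) before it is pasted into an AI tool. The patterns and the redact_pii function are assumptions made for this example rather than an approved or exhaustive screening method: personal data takes many forms that simple patterns will miss, so a check like this supplements, rather than replaces, human judgement.

    import re

    # Illustrative patterns only: real personal data takes many forms
    # (names, addresses, student numbers) that simple patterns will miss.
    PII_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "UK phone number": re.compile(r"\b(?:\+44|0)\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    }

    def redact_pii(text):
        """Replace anything matching a known pattern with a [REDACTED: ...] marker."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub("[REDACTED: %s]" % label, text)
        return text

    draft = "Contact Jane on 0191 208 6000 or jane.smith@ncl.ac.uk about the report."
    print(redact_pii(draft))
    # Contact Jane on [REDACTED: UK phone number] or [REDACTED: email address] about the report.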

More information and learning

If in doubt, please contact rec-man@ncl.ac.uk for support and questions regarding information governance and data protection.