

 
Image: an AI-generated illustration of chapel spires drenched in warm sunlight, created with Canva's Magic Media tool from the prompt 'University of Cambridge'.

 

Whether it’s ChatGPT or DALL·E, generative AI tools are changing the communications sector fast.

As a team of writers, designers and strategists who communicate the work of the University of Cambridge to the world, we’re interested in how these tools could support our efforts to share ground-breaking research or create campaigns to attract students. From speeding up tasks like transcribing audio to sparking ideas for news features, we can see many opportunities for these tools to support our work.

But it’s also imperative that we acknowledge the risks of using generative AI tools and understand how to use them properly. Upholding factual accuracy, observing ethical usage and investing in our skills are all essential, and especially relevant for a University built on over 800 years of knowledge.

Through research and workshops, we have created these new guidelines to support our teams in using generative AI tools safely, ethically and effectively. 

The guidelines can be read alongside the Russell Group principles on the use of generative AI tools in education, which include supporting students and staff to become AI-literate, ensuring academic rigour and integrity are upheld, and working collaboratively to share best practice as the technology and its application in education evolve.

 

Using AI text generators like ChatGPT, Bard and LaMDA

We may use text generator tools to help us research a topic

Generative AI tools like ChatGPT are capable of processing vast amounts of information to quickly produce an easy-to-understand summary of a complex topic. This can help us work faster and understand new ideas.

For example, if a research communications manager is writing a feature on the discovery of DNA, they could ask ChatGPT to summarise the key moments in 500 words and use the answer generated as a starting point for deciding whom to interview or which journal articles to read next. We may use these tools in a similar way to how we use search engines for researching topics and will always carefully fact-check before publication.

We may use text generators to spark inspiration

The ability of generative AI tools to analyse huge datasets can also be used to help spark creative inspiration. This can help us if we’re struggling for time or battling writer’s block.

For example, if a social media manager is looking for ideas on how to engage alumni on Instagram, they could ask ChatGPT for suggestions based on recent popular content. They could then pick the best ideas from ChatGPT’s response and adapt them. We may use these tools in a similar way to how we ask a colleague for an idea on how to approach a creative task.

We do not publish any content that has been written 100% by text generators

We will be critical and responsible users of generative AI tools – and that means not publishing anything written completely by ChatGPT and being aware of the many reasons why we shouldn’t.

First, our experience with these tools so far is that the default written style and tone of content produced by generative AI is not appropriate for our audiences in its unedited form. We always need to apply the University’s brand and tone of voice guidelines to all content.

Second, generative AI tools do not produce neutral answers because the information sources they draw from have all been created by humans and contain our biases and stereotypes. These tools also often create content that contains errors and ‘hallucinations’. As a seat of learning, the University must only publish unbiased and factual written content.

Third, the risk of plagiarism is high when lifting content wholesale from an AI generator, and these tools are often opaque about their original sources and who owns the output. It is essential that our outputs are original and do not plagiarise.

For all of these reasons, we will not publish any press releases, articles, social media posts, blog posts, internal emails or other written content that is 100% produced by generative AI. We will always apply brand guidelines, fact-check responses, and re-write in our own words. The one exception to this is if we are publishing something about AI and would like to demonstrate what AI can do, and we will always make that clear to audiences.

 

Using AI photo editor tools and image generator tools like DALL·E and Midjourney

We may use image tools to correct and make minor edits

Image editing with AI tools can speed up work for several teams. We may use these tools to make minor changes to a photo to make it more usable without changing the subject matter or original essence.

For example, if a website manager needs a photo in a landscape ratio but only has one in a portrait ratio, they could use Photoshop’s inbuilt AI tools to extend the background of the photo to create an image with the correct dimensions for the website.

We will use image editing tools ethically

We do not use any image editing tools to change the essence of any original images on our website, social media, or anywhere else. For example, we do not change the expressions, appearance, ethnicity or any other core features of those captured in our photography.

Where we do use AI image editing tools for corrections and minor edits, we will work with the original photographer or designer so they are aware.

We do not publish any images that have been created from scratch by image generators

Just as we do not publish any text created completely by text generator tools, we will not publish any images created 100% by image generator tools like DALL·E and Midjourney.

Our experience with these tools shows that what they produce is not always appropriate for the University channels and needs heavy application of the University’s brand guidelines.

The only situation where we would use an AI generated image would be if we were publishing a news story on AI and wanted to show what AI can produce, and we will always make that clear to audiences.

 

Using AI audio and video tools like Adobe Podcast, Descript and Pictory

We may use audio and video tools to remove distractions for audiences

Just like image tools and text generators, AI audio and video tools can also help us save time with creative projects. For example, our video producers can save hours of editing time by using audio clean-up tools to remove unwanted background noise or using video clean-up apps to remove distractions in shots in seconds.

We do not use voice clone generators or create deepfake videos

We do not use AI tools to recreate the voice or appearance of anyone, as there are major ethical implications and risks attached to this.

Again, the exception is if we are publishing something about AI and want to demonstrate what these tools can do. In these cases, we will only ever work with the consent of the original speaker and we will always make that clear to audiences.

 

Managing privacy risks and being transparent

We do not input any sensitive, private or embargoed information

Text generators allow users to paste in articles or data and build a prompt around them, such as ‘write a social media post about this article’ or ‘analyse this data and tell me about the trends’.

However, there are risks to privacy and intellectual property associated with the information we enter into these tools. The terms of use of many AI tools are not clear about how inputs are stored or who may access them in the future. We must only input information that is already in the public domain. We will not input any confidential or restricted data, in the same way that we do not share it on social media or in an external email, or discuss it in public.

For example, if a media officer was working on an embargoed story about a new scholarship and wanted some inspiration for the copy, there are clear limits on how they could use AI tools. They could not paste their draft copy or web link into ChatGPT before the embargo lifted, paste in raw philanthropic data kept on a private SharePoint site, or type in text that included the unpublished names of students receiving the scholarship.

We always share with each other how we’re using AI tools in our daily work

When we use any generative AI tool, we will let our manager know and, when relevant, show them the before and after version. This could be in the form of a quick conversation over a laptop at the desk or sharing links over Teams. Some teams have a standard AI tools item in their regular meeting, to foster a culture of trust, transparency and learning.

We will create a training guide and invest in our teams

Our teams are made up of skilled writers, photographers, designers and strategists who want to use these tools to support – not replace – their creativity.

We will create a list of approved tools our teams can use, and create an internal training guide on SharePoint with examples to help new team members understand how we can and can’t use these tools. We will invest in training for our teams on writing prompts and fact-checking.

About the guidelines

These guidelines were created for the University of Cambridge's Office of External Affairs and Communications.

We will update this guidance regularly to keep pace with this rapidly moving sector.

For more information on these guidelines, please contact Amy Mollett, Head of Social Media.