
Authenticity in Design: why we don’t use GenAI


Cast your mind back: it’s 2022. Life after COVID is starting to feel more normal. NFTs have died out. Will Smith has just slapped Chris Rock. The world is healing.

Then ChatGPT was released. AI started getting integrated into everything. GenAI became the hottest buzz phrase since ‘non-fungible token’, and people were going insane. Society quickly self-categorised into one of three factions:

  • This is going to change the world and I’m going to benefit
  • This is going to take my job and I’m going to be on benefits
  • Oh look – Star Wars if it was directed by Wes Anderson, cool

The technology offered a way of generating text, images, videos and other content from training data via ‘prompts’. But when people dug deeper, it turned out the training data in question had been scraped from tens of thousands of artists, designers, writers, filmmakers and developers; the list goes on. This was done without permission.

An early example of this came on October 3rd, 2022, when Kim Jung Gi, the renowned South Korean illustrator, passed away suddenly, aged just 47. Famed for his ink and brushwork – a style developed over years of dedication to his craft – Kim would often perform live drawing sessions for audiences and was generally regarded as one of life’s good guys.

It took approximately two days for someone to start feeding Kim’s work into an AI model and share it on Twitter as a ‘tribute’, allowing anybody to create artwork in his style. It was described as an abhorrent act of theft (I’ll say this part again: just days after his death), and people were understandably furious.

But that is just one drop in an ocean of hundreds of thousands of stories. Midjourney was caught out in January 2024 when a database titled ‘Midjourney Style List’ leaked. It contained the names of over 16,000 artists whose work the model had allegedly been trained on, documenting everyone from Banksy to Disney. And Midjourney isn’t the only generative AI company in legal trouble: OpenAI, Microsoft, Google and Meta are all under investigation or have lawsuits underway.

So, what does this mean for designers or agencies using GenAI in their work? Well, at best, they are endorsing theft from other creatives and showing no remorse in using it themselves in a race to the bottom. On top of that, they could be landing their clients in hot water in two ways.

  • What happens if one of the companies loses a lawsuit – does the work produced with its models come under scrutiny?
  • AI-generated works are not eligible for copyright protection.

Authenticity is going to be the hottest commodity over the next few years. Genuine designers and artists have already been caught in the crossfire of false AI accusations. Documenting your process, proving its value, and demonstrating that you can work with honesty and integrity in a sea of mediocrity will automatically raise you above the bottom rungs.

So where does that leave us? We have yet to find a way of integrating GenAI into our workflow that is both morally sound and a genuine boost to creative output. GenAI is a fast track to the finish line for work that is expected or generic. You miss out on so much by skipping the most vital part: discovery and development. On top of this, I love what I do, and I’m not about to miss out on the most rewarding part of the design process. Artificial intelligence was supposed to give us the tools to free us from the boring tasks in life, not replace the enjoyable parts. Otherwise, what’s the point? Introduce universal basic income and let’s be done with the whole thing.

Ted Chiang published ‘Why A.I. Isn’t Going to Make Art’ in The New Yorker, which summarised perfectly how much of the creative industry feels. This particular quote stands out (and features a quote within it, making the whole paragraph feel like an ode to ‘Inception’).

“The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.”

Alongside this apathy, many are predicting the GenAI bubble will collapse within the next year. @GaryMarcus suggests that ‘the public will lose interest, and so will many corporate clients’ and that, although ‘the field of AI research will go on’, ‘Generative AI will move from the foreground to the background’. In real life, outside of online tech bubbles, it’s already happening.

If you are happy dealing with humans (that’s us) – drop us an email at projects@expconsultancy.com

