
The Most Common AI Mistakes Arts and Culture Organisations Make

Arts organisations are adopting AI tools without asking whether the tool is changing what they value about art itself. The biggest mistakes happen when AI efficiency replaces human judgment about what matters.

These are observations, not criticism. Recognising the pattern is the first step.


Mistakes in Artistic Practice

You generate dozens of variations quickly, pick the best one, and call it your work. This avoids the harder question of whether you are making the art or curating someone else's labour.

The fix

Before you use any generative tool, write down what you believe the artist's role is in your own work, then check whether the tool supports or contradicts that belief.

Midjourney and DALL-E produce polished results that feel finished rather than like a starting point. You stop thinking and iterating because the tool has already done the heavy visual work.

The fix

Spend the same amount of studio time on AI outputs as you would on hand-made sketches, or acknowledge that you are outsourcing the core creative decision-making.

Runway ML and Adobe Firefly carry the aesthetic choices of their training data into your work without you noticing. You inherit visual language you did not choose.

The fix

Study what images these tools consistently favour, then actively work against those patterns if your practice requires it.

AI writes cleaner, more confident prose than most artists produce. You use the generated version because it sounds more credible, even though it distances you from your own work.

The fix

Use ChatGPT only to tighten sentences you have already written, not to replace your own voice in explaining why your work matters.

Because generative tools work in seconds, you work faster and make more pieces. You confuse productivity with quality or depth of thinking.

The fix

Set your own timeline before you open the tool, and do not let rendering speed determine how much time you spend deciding what to make.

Mistakes in Curation and Programming

AI tools analyse which past exhibitions drew the biggest crowds and suggest similar programming. You end up with a calendar optimised for engagement rather than artistic risk or cultural breadth.

The fix

Set your curatorial goals first (artistic merit, community need, cultural representation), then use analytics only to understand who is not in the room.

Recommendation algorithms flag certain artists based on historical patterns and social media metrics. You unconsciously favour those recommendations when selecting work to exhibit.

The fix

Always view the work itself before checking any predictive data, and deliberately programme at least one artist per year whose metrics do not fit the algorithm.

Analytics platforms tell you your visitors skew toward a certain age, income level, or postcode. You unconsciously start curating for that group and exclude others.

The fix

Compare your actual visitor data to your stated mission and community needs, then decide whether you are serving the right people or just the easiest ones to reach.

AI-generated wall text sounds authoritative but often contains factual errors, misinterprets artist intent, or flattens complexity. Visitors read confident-sounding falsehoods.

The fix

Write your own first draft of any interpretive text, then use ChatGPT only to improve clarity and flow, not to generate the content itself.

Mistakes in Funding and Administration

Funders now see dozens of applications with the same polished phrasing, confident tone, and structural predictability. Yours looks professional but forgettable because it sounds like AI.

The fix

Write your first draft in your own voice, then ask ChatGPT to make only specific sentences clearer, not to rewrite whole sections.

A project idea is unclear or unconvincing, so you ask ChatGPT to make the case sound stronger. The tool polishes the thinking, but does not fix the underlying problem.

The fix

Use ChatGPT only on ideas you already believe in, and treat the AI's inability to explain your concept as a sign that the concept itself needs rethinking.

Budget-planning tools extrapolate your past spending and propose similar allocations. You skip the difficult conversation about whether those priorities still match your mission.

The fix

Before running any budget tool, list your current priorities, then ask the tool to map spending against those rather than past patterns.

AI generates standard evaluation templates and metrics that look professional but miss what truly mattered about this year's work. Your real impact disappears under standardised language.

The fix

Write the specific outcomes that matter to your organisation first, then use templates only as a structural guide, not as content.

Governance documents written by ChatGPT sound comprehensive but miss local context, your community's needs, and the actual culture you want to build.

The fix

Draft your own policy language based on real decisions you have made, then use AI to check for gaps or legal language only.
