For Data Analysts

How Data Analysts Can Use AI Without Losing Their Edge

When ChatGPT Code Interpreter writes your SQL or Tableau AI suggests a visualisation, you face a choice: trust the output or verify the logic underneath. Stakeholders now expect speed, but insight laundering happens when you present findings without checking whether the AI actually answered the right question. The skills that made you valuable as an analyst are the ones most at risk when automation feels faster than thinking.

These are suggestions. Your situation will differ. Use what is useful.


Verify the SQL before you trust the result

AI code generators excel at syntax but often miss business logic. When Databricks AI or Code Interpreter writes a query, read it line by line before execution. Check whether it filtered for the right date range, applied the correct join logic, and excluded test records or duplicate transactions that your organisation knows about. A query that runs without error is not the same as a query that answers the right question. Your job is to catch the gap between what the AI built and what your business actually needs.
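The gap between "runs without error" and "answers the right question" can be made concrete. This is a minimal sketch using an in-memory SQLite database; the table name, columns, and values are hypothetical, standing in for the test-record convention your own organisation would know about.

```python
import sqlite3

# Hypothetical transactions table -- schema and data are assumptions
# for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (
    txn_id   INTEGER PRIMARY KEY,
    customer TEXT,
    amount   REAL,
    txn_date TEXT,
    is_test  INTEGER
);
INSERT INTO transactions VALUES
    (1, 'acme',   120.0, '2024-01-15', 0),
    (2, 'acme',   120.0, '2024-01-16', 0),
    (3, 'globex',  80.0, '2023-12-30', 0),
    (4, 'qa-bot', 999.0, '2024-01-20', 1);
""")

# Suppose the AI generated this query. It is syntactically perfect,
# but it forgets the is_test filter your organisation relies on.
ai_query = """
    SELECT SUM(amount) FROM transactions
    WHERE txn_date >= '2024-01-01'
"""
naive_total = conn.execute(ai_query).fetchone()[0]

# A line-by-line read catches the gap before the number ships.
checked_total = conn.execute("""
    SELECT SUM(amount) FROM transactions
    WHERE txn_date >= '2024-01-01' AND is_test = 0
""").fetchone()[0]

print(naive_total, checked_total)  # the difference is the test record
```

Both queries execute cleanly; only the review of the WHERE clause reveals that the first one inflates January revenue with QA traffic.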

Question the visualisation choice, not just the visual design

Tableau AI and Julius will generate charts quickly, but they optimise for what looks good, not what tells the truth. A line chart showing month-on-month change might mask seasonal patterns that a proper trend analysis would reveal. Before you share the chart with stakeholders, ask whether the visualisation type actually fits the data distribution and the decision you are trying to support. If the AI chose a bar chart for comparing distributions across ten categories, that choice serves clarity. If it chose that same chart to show correlation, the choice serves confusion.
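The seasonal-masking point can be checked with arithmetic before any chart is drawn. A sketch with invented numbers, assuming a business that is flat year over year but spikes every December: the month-on-month figure a line chart would emphasise tells a very different story from the year-on-year comparison.

```python
# 24 months of hypothetical sales: flat at 100, December spikes to 150.
sales = [150 if m % 12 == 0 else 100 for m in range(1, 25)]

def pct_change(new, old):
    """Percentage change from old to new."""
    return 100 * (new - old) / old

# Month-on-month: January of year two (index 12) looks like a collapse...
mom_jan = pct_change(sales[12], sales[11])

# ...but year-on-year against last January shows the business is flat.
yoy_jan = pct_change(sales[12], sales[0])

print(round(mom_jan, 1), round(yoy_jan, 1))
```

A month-on-month line chart would show a 33% January drop; the year-on-year view shows nothing changed. Which comparison the AI happened to plot decides which story stakeholders hear.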

Stay sharp on the statistical questions AI skips over

When you let automation generate summaries and confidence intervals, your instinct for statistical reasoning decays without you noticing. Copilot will tell you the average customer lifetime value went up, but it will not flag that the sample size shrank or that a single large customer drove the result. Your role is to think about sample size, outliers, and whether a difference is real or noise before you tell the business what to do. If you stop asking these questions yourself, you will lose the ability to recognise when an AI-generated insight is technically correct but strategically misleading.
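The checks described above take a few lines to run yourself. A sketch with made-up customer lifetime values: the mean "goes up", exactly as a Copilot summary would report, while the sample size, the median, and a single large customer tell the real story.

```python
from statistics import mean, median

# Hypothetical customer lifetime values for two quarters (invented data).
q1 = [100, 110, 95, 105, 98, 102, 99, 101, 97, 103]  # n = 10
q2 = [100, 105, 98, 1200, 101]                        # n = 5, one whale

print(f"mean rose: {mean(q1):.1f} -> {mean(q2):.1f}")

# The questions the automated summary skips:
print("sample shrank:", len(q1), "->", len(q2))
print(f"median barely moved: {median(q1):.1f} -> {median(q2):.1f}")
top = max(q2)
rest = [v for v in q2 if v != top]
print(f"mean without the largest customer: {mean(rest):.1f}")
```

The headline mean tripled, but the median is flat and the entire increase disappears when one customer is removed. That is the difference between a technically correct summary and a strategically misleading one.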

Formulate the question yourself before you ask the AI

The clearest sign that AI is eroding your judgement is when you stop knowing what question to ask. If you open Code Interpreter and say "explore this dataset", the AI will find something. That something will feel like insight because it came from data. But it may not be the insight your organisation needs. Spend five minutes writing down what you actually need to understand before you ask the AI to generate a chart or run a query. This is not extra work. This is the work that separates analysis from data exploration.

Own your communication about AI limits with stakeholders

When you present an AI-generated insight, you inherit responsibility for its accuracy whether you wrote the code or not. Stakeholders expect analyst-quality judgement at AI speed, but that expectation is not realistic. You need to be explicit about what the AI did, what you verified, and what you did not check because time did not permit it. If you present a forecast from Julius without explaining that it assumed the past predicts the future, you have done the organisation a disservice. Clear communication about limits is not a sign of weakness. It is the only defence against insight laundering.

Key principles

  1. Verify the logic of every query and visualisation before you trust its output, because syntax correctness does not guarantee business correctness.
  2. Keep your own statistical reasoning sharp by questioning outliers, sample sizes, and causation claims before you present them as insight.
  3. Formulate your own question before you ask the AI to generate an answer, because exploratory results can feel like insight even when they answer no one's need.
  4. Own the accuracy of every finding you present, whether you built it yourself or the AI built it for you, because stakeholders cannot distinguish between the two.
  5. Communicate clearly about what you verified and what you did not, because silent acceptance of AI output is how organisations make decisions based on unexamined logic.


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
