For Data Analysts
How Data Analysts Can Use AI Without Losing Their Edge
When ChatGPT Code Interpreter writes your SQL or Tableau AI suggests a visualisation, you face a choice: trust the output or verify the logic underneath. Stakeholders now expect speed, but insight laundering happens when you present findings without checking whether the AI actually answered the right question. The skills that made you valuable as an analyst are the ones most at risk when automation feels faster than thinking.
These are suggestions. Your situation will differ. Use what is useful.
Verify the SQL before you trust the result
AI code generators excel at syntax but often miss business logic. When Databricks AI or Code Interpreter writes a query, read it line by line before execution. Check whether it filtered for the right date range, applied the correct join logic, and excluded test records or duplicate transactions that your organisation knows about. A query that runs without error is not the same as a query that answers the right question. Your job is to catch the gap between what the AI built and what your business actually needs.
- Read the WHERE clause first. This is where most business rules live and where AI most often gets it wrong.
- Check joins for correctness by manually tracing through a small sample. AI defaults are often left outer joins when you need inner joins.
- Ask yourself: what would this dataset look like if the logic were wrong? Then spot-check one obvious case before running it at scale.
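As a concrete sketch, here is what tracing join logic on a small sample might look like in pandas. The `orders` and `customers` tables, their columns, and the test-account rule are all invented for illustration; substitute your own known rows:

```python
import pandas as pd

# Hypothetical sample: a handful of rows you can verify by eye.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": [10, 10, 11, 99],   # 99 has no matching customer
    "amount": [50.0, 20.0, 35.0, 15.0],
})
customers = pd.DataFrame({
    "customer_id": [10, 11, 12],
    "is_test": [False, False, True],   # 12 is a known test account
})

# An AI-written query often defaults to a left join; trace what that keeps.
left = orders.merge(customers, on="customer_id", how="left", indicator=True)
unmatched = left[left["_merge"] == "left_only"]
print(f"rows with no matching customer: {len(unmatched)}")  # order 4 survives

# The inner join you probably wanted, with test accounts excluded.
inner = orders.merge(customers[~customers["is_test"]], on="customer_id", how="inner")
print(f"rows surviving the intended logic: {len(inner)}")  # orders 1, 2, 3
```

Running both versions on ten rows you can mentally verify makes the gap between the AI's default and your business rule visible before it silently distorts a million-row result.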
Question the visualisation choice, not just the visual design
Tableau AI and Julius will generate charts quickly, but they optimise for what looks good, not what tells the truth. A line chart showing month-on-month change might mask seasonal patterns that a proper trend analysis would reveal. Before you share the chart with stakeholders, ask whether the visualisation type actually fits the data distribution and the decision you are trying to support. If the AI chose a bar chart for comparing distributions across ten categories, that choice serves clarity. If it chose that same chart to show correlation, the choice serves confusion.
- Identify what story the chart is supposed to tell before you accept the AI's choice. Then check whether that visualisation type actually supports that story.
- Look at the axes. Are they zero-based when they should be, or scaled to dramatise small differences when context matters more?
- Ask: would a stakeholder make a different decision based on this chart than they would based on the raw numbers? If yes, the visualisation is doing real work. If no, question whether it adds clarity or just speed.
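One way to make the axis check concrete is to compare the real difference in the numbers with the difference a truncated axis puts on screen. The monthly values and the baseline below are invented for illustration:

```python
# Hypothetical monthly values; the real numbers differ by about 3%.
values = [96.0, 97.5, 99.0]

# With a zero baseline, the tallest bar is ~3% taller than the shortest.
true_ratio = max(values) / min(values)

# With the axis truncated at 95, the same data differs fourfold on screen.
baseline = 95.0
heights = [v - baseline for v in values]
visual_ratio = max(heights) / min(heights)

print(f"real difference: {true_ratio:.2f}x, on-screen difference: {visual_ratio:.2f}x")
```

If the on-screen ratio is far larger than the real ratio, the chart is dramatising noise, and a stakeholder reading it will overweight a small change.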
Stay sharp on the statistical questions AI skips over
When you let automation generate summaries and confidence intervals, your instinct for statistical reasoning decays without you noticing. Copilot will tell you the average customer lifetime value went up, but it will not flag that the sample size shrank or that a single large customer drove the result. Your role is to think about sample size, outliers, and whether a difference is real or noise before you tell the business what to do. If you stop asking these questions yourself, you will lose the ability to recognise when an AI-generated insight is technically correct but strategically misleading.
- Always ask: how many records sit behind this number? Then consider whether that sample is large enough to trust.
- Look for outliers yourself before you accept a summary statistic. One very large transaction can move an average without moving the median.
- Mentally reverse the finding. If the result were the opposite, would the same data support it? If yes, your confidence in the insight should drop.
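The outlier check above takes a few lines with Python's standard library. The customer lifetime values here are invented for illustration, with one whale account among ordinary ones:

```python
import statistics

# Hypothetical customer lifetime values: one whale among ordinary accounts.
clv = [120, 135, 110, 125, 130, 118, 122, 5000]

mean = statistics.mean(clv)
median = statistics.median(clv)
print(f"n={len(clv)}, mean={mean:.0f}, median={median:.0f}")

# An AI summary would report the mean went up; the median tells a calmer story.
without_whale = [v for v in clv if v < 1000]
print(f"mean without the outlier: {statistics.mean(without_whale):.1f}")
```

When the mean and the median disagree this sharply, the "average went up" headline is really a story about one customer, and the business decision it supports is a different one.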
Formulate the question yourself before you ask the AI
The clearest sign that AI is eroding your judgement is when you stop knowing what question to ask. If you open Code Interpreter and say "explore this dataset", the AI will find something. That something will feel like insight because it came from data. But it may not be the insight your organisation needs. Spend five minutes writing down what you actually need to understand before you ask the AI to generate a chart or run a query. This is not extra work. This is the work that separates analysis from data exploration.
- Write one sentence describing what decision a stakeholder needs to make. Then write one sentence describing what information would help them make it well.
- Avoid yes or no questions when you ask the AI. Instead ask: which segments show this pattern, or what changed between these two periods.
- If you cannot articulate the question, pause the AI work. Go back to the stakeholder and clarify what problem you are actually trying to solve.
Own your communication about AI limits with stakeholders
When you present an AI-generated insight, you inherit responsibility for its accuracy whether you wrote the code or not. Stakeholders expect analyst-quality judgement at AI speed, but that expectation is not realistic. You need to be explicit about what the AI did, what you verified, and what you did not check because time did not permit it. If you present a forecast from Julius without explaining that it assumed the past predicts the future, you have done the organisation a disservice. Clear communication about limits is not a sign of weakness. It is the only defence against insight laundering.
- When you present a finding, say how you verified it. Use language like: I checked the query logic and confirmed the sample includes only active accounts.
- Name what you did not verify. Say: this forecast assumes no major market disruption and I have not reviewed the model assumptions in detail.
- If a stakeholder pushes back, treat that pushback as valuable. It may reveal a business rule you missed or a pattern the AI smoothed over.
Key principles
1. Verify the logic of every query and visualisation before you trust its output, because syntax correctness does not guarantee business correctness.
2. Keep your own statistical reasoning sharp by questioning outliers, sample sizes, and causation claims before you present them as insight.
3. Formulate your own question before you ask the AI to generate an answer, because exploratory results can feel like insight even when they answer no one's need.
4. Own the accuracy of every finding you present, whether you built it yourself or the AI built it for you, because stakeholders cannot distinguish between the two.
5. Communicate clearly about what you verified and what you did not, because silent acceptance of AI output is how organisations make decisions based on unexamined logic.
Key reminders
- Before accepting a query result, manually trace through the logic using a small subset of known data. Run the query on ten rows you can mentally verify, not on a million.
- When Tableau AI suggests a visualisation, ask yourself: does this chart type match the data type and the decision? If you cannot answer clearly, choose a simpler chart.
- Keep a log of times AI got the logic wrong. Pattern these mistakes to learn what kinds of business rules the tools you use most often miss.
- When a stakeholder expects overnight analysis, tell them the trade-off upfront: we can get results fast, but I will verify only the core logic. Other claims I will flag as needing deeper checking.
- Spend the first week with any new AI tool actually reading its output instead of relying on it. This teaches you its blind spots faster than any tutorial.