For Data Analysts
Data analysts using AI tools often skip the verification step between the AI's answer and the stakeholder presentation. That gap makes insight laundering standard practice: no one in the chain has actually checked whether the logic is sound.
These are observations, not criticism. Recognising the pattern is the first step.
The tool generates working code so quickly that analysts paste results into dashboards without reading what the query actually does. A JOIN condition might be wrong, a filter might exclude valid rows, or an aggregation might double-count transactions, and you would not know from the output alone.
The fix
Read the generated SQL line by line before executing it, especially checking the WHERE clause, JOIN conditions, and GROUP BY logic against your mental model of the data.
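As a concrete sketch of the double-counting failure, here is the same mistake reproduced in pandas with made-up data (the table names and numbers are illustrative, not from any real schema):

```python
import pandas as pd

# Hypothetical data: one order worth 100, with two shipment rows attached.
orders = pd.DataFrame({"order_id": [1], "revenue": [100.0]})
shipments = pd.DataFrame({"order_id": [1, 1], "carrier": ["A", "B"]})

# Joining one-to-many before aggregating duplicates the revenue row,
# exactly like a SQL JOIN followed by SUM at the wrong grain.
joined = orders.merge(shipments, on="order_id")
inflated = joined["revenue"].sum()   # double-counted

# Aggregating at the grain of the revenue table gives the true figure.
correct = orders["revenue"].sum()
```

The output of both sums looks equally plausible on a dashboard, which is why reading the join logic matters more than eyeballing the result.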
These tools suggest visualisations based on your data, but they cannot know your business rules. A sum might need to be an average, a year-over-year calculation might need to exclude one product line, or a trend line might be fitted to too little data to be meaningful.
The fix
Hover over the metric definition in the tool, verify the formula matches your requirement, and manually spot-check the numbers against a known subset before sharing the chart.
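A quick way to spot-check a formula is to compute the plausible alternatives on a tiny subset you can verify by hand. This sketch uses invented order data to show how "average order value" can mean two different numbers:

```python
import pandas as pd

# Hypothetical subset: order 1 has two line items, order 2 has one.
df = pd.DataFrame({"order_id": [1, 1, 2],
                   "line_total": [40.0, 60.0, 50.0]})

# Average over line items — what a tool might silently compute.
mean_of_lines = df["line_total"].mean()

# Average over orders — what the business rule may actually require.
mean_of_orders = df.groupby("order_id")["line_total"].sum().mean()
```

If the chart's number matches the wrong formula on a subset this small, you have caught the mismatch before a stakeholder does.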
Julius and similar tools can generate written insights from your data very quickly, but they work from patterns in the numbers without understanding causation or business context. A correlation it highlights might be coincidence, seasonality, or a data quality issue.
The fix
Read the summary, identify which claims interest you, then manually investigate those specific claims using your domain knowledge before presenting them to stakeholders.
Copilot can write confident-sounding conclusions from small subsets of data. If you filtered for one region or one month, the AI will still generate a summary that reads as general insight, and your stakeholder will assume it applies organisation-wide.
The fix
Check the row count and date range in your data selection before running the AI summary, and always state those bounds explicitly when presenting the finding.
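Stating the bounds can be a two-line habit rather than a discipline you must remember. A minimal sketch, assuming a filtered pandas selection with invented values:

```python
import pandas as pd

# Hypothetical filtered selection: one region, one month.
df = pd.DataFrame({
    "date": pd.to_datetime(["2024-03-01", "2024-03-15", "2024-03-28"]),
    "region": ["EMEA", "EMEA", "EMEA"],
    "sales": [120, 95, 140],
})

# Capture the bounds of the selection before any summary is generated.
bounds = {
    "rows": len(df),
    "date_min": df["date"].min().date().isoformat(),
    "date_max": df["date"].max().date().isoformat(),
    "regions": sorted(df["region"].unique()),
}
caveat = (f"Based on {bounds['rows']} rows, "
          f"{bounds['date_min']} to {bounds['date_max']}, "
          f"regions: {', '.join(bounds['regions'])}")
```

Attaching that caveat string to the finding prevents a one-region, one-month result from being read as organisation-wide.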
AI tools often treat missing data as a true absence rather than an absence of recording. A NULL in a revenue field might mean the transaction was not captured, but the AI might show it as zero revenue, changing the story entirely.
The fix
Before sharing any AI-generated insight, manually check how nulls appear in your dataset and whether the AI's interpretation matches reality.
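The zero-versus-unrecorded distinction is easy to demonstrate with a three-row example (values invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical revenue column: NaN means "not recorded", not "zero sales".
revenue = pd.Series([100.0, np.nan, 300.0])

# Filling NaN with zero drags the average down and changes the story.
as_zero_mean = revenue.fillna(0).mean()

# pandas excludes NaN by default, averaging only recorded values.
recorded_mean = revenue.mean()

# Always report how much of the column is missing alongside the metric.
missing_share = revenue.isna().mean()
```

Neither number is automatically "right"; the point is to know which interpretation the tool applied and whether it matches what NULL means in your pipeline.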
When Tableau AI draws a trend line or Databricks identifies a pattern, analysts often present it as a finding without asking whether the change is larger than typical variation. A 2 per cent month-on-month shift might be within normal bounds.
The fix
Calculate or estimate the standard variation in your metric from historical data, then assess whether the AI-identified change is actually meaningful against that baseline.
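One rough baseline check, sketched with invented month-on-month history (this is a rule of thumb, not a formal significance test):

```python
import statistics

# Hypothetical history of month-on-month percentage changes in the metric.
history = [1.5, -0.8, 2.1, -1.2, 0.9, -1.7, 1.1, -0.4]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# The AI flagged a 2% shift: how far is that from typical movement?
observed = 2.0
z = (observed - mean) / stdev

# A crude threshold: more than two standard deviations from the mean.
meaningful = abs(z) > 2
```

Against this history, a 2 per cent shift sits comfortably inside normal variation, which is exactly the case where an AI-generated "trend" headline misleads.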
Code Interpreter can show you that two columns move together, but it cannot know whether a third variable is driving both. Higher marketing spend and higher sales might both be driven by season, not causation.
The fix
When AI highlights a correlation, list the obvious confounding variables and test whether the relationship holds when you control for them.
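One standard way to "control for" a confounder is a partial correlation: regress the suspected driver out of both variables and correlate the residuals. A minimal sketch with simulated data where season drives both marketing and sales:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: season drives both marketing spend and sales.
season = rng.normal(size=500)
marketing = season + rng.normal(scale=0.3, size=500)
sales = season + rng.normal(scale=0.3, size=500)

# The raw correlation looks compelling.
raw_corr = np.corrcoef(marketing, sales)[0, 1]

def residualise(y, x):
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# After controlling for season, the relationship largely disappears.
partial = np.corrcoef(residualise(marketing, season),
                      residualise(sales, season))[0, 1]
```

When the partial correlation collapses like this, the AI-highlighted relationship was the confounder's signature, not a causal link.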
Some tools output confidence bounds automatically, but if you do not know whether they come from a t-test, bootstrap, or simple percentile method, you cannot judge whether they are fit for your use case.
The fix
Ask the tool to show its method for calculating the interval, and verify that method matches the distribution of your data.
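Knowing the method means you could reproduce the interval yourself. Here is a minimal sketch of one of the methods mentioned, a percentile bootstrap, on invented sample data:

```python
import random
import statistics

random.seed(42)

# Hypothetical sample of a metric.
sample = [12, 15, 9, 22, 18, 14, 11, 25, 16, 13]

# Percentile bootstrap: resample with replacement many times,
# then take the 2.5th and 97.5th percentiles of the resampled means.
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(2000)
)
lower, upper = boot_means[50], boot_means[1949]
```

A t-based interval on the same data assumes roughly normal errors; the bootstrap does not, which is exactly the kind of difference that determines whether the tool's bounds fit your data's distribution.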
A metric might go up overall while going down in every subgroup, or vice versa. AI tools that generate summaries across the whole dataset can mask this reversal.
The fix
When presenting an overall trend, always break the metric down by your key segments to confirm the direction holds within groups.
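This reversal is Simpson's paradox, and it is easy to construct with a mix shift. A sketch with invented conversion data, where the overall rate rises while both segments fall:

```python
import pandas as pd

# Hypothetical data: traffic mix shifts toward the high-converting
# segment, so the overall rate rises even as every segment declines.
df = pd.DataFrame({
    "period":  ["before", "before", "after", "after"],
    "segment": ["web", "app", "web", "app"],
    "visits":  [900, 100, 100, 900],
    "sales":   [90, 50, 8, 430],
})

g = df.groupby("period")[["sales", "visits"]].sum()
overall = g["sales"] / g["visits"]          # rises: 0.14 -> 0.438

s = df.groupby(["period", "segment"])[["sales", "visits"]].sum()
by_seg = s["sales"] / s["visits"]           # falls in web AND app
```

A dataset-wide summary would report the improvement and miss that both segments got worse, which is why the segment breakdown is a non-negotiable step.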
When ChatGPT Code Interpreter or Copilot gives you a working query result, it is easy to assume the problem is solved. You may not probe whether that was the right question to ask.
The fix
Before accepting the AI's answer, state in writing what business question you wanted solved and check that the result actually addresses it.
Analysts who habitually hand vague requests to AI tools like Julius begin to lose the skill of forming testable predictions. You become dependent on the tool to suggest what might matter.
The fix
Before querying any AI tool, write down your hypothesis in one sentence, then ask the tool to test that specific claim rather than to explore open-endedly.
When you ask Tableau AI or Databricks AI to explore a dataset, it will suggest relevant dimensions and metrics. But those suggestions are generic. They may miss the specific business question your stakeholder actually needs answered.
The fix
Before accepting the AI's suggested analysis path, translate the business request into your own framing, then check whether the AI's exploration addresses that frame.
Tools like Copilot will summarise at whatever grain they choose. They might give you a company-level summary when you needed regional breakdown, or weekly when you needed daily to spot a specific issue.
The fix
Specify the required level of detail and time grain in your request to the AI tool before asking for summary or analysis.