By Steve Raju


Cognitive Sovereignty Checklist for Data Analysts

About 20 minutes · Last reviewed March 2026

When you use Code Interpreter or Copilot to generate charts and SQL queries, you risk presenting AI outputs as analyst judgement without actually checking the work. Your organisation expects speed from AI but quality from you. The biggest threat is insight laundering: AI generates a finding, you present it, stakeholders act on it, and no one has verified whether the underlying logic is correct.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Before You Run the Query

Write down what question you are actually trying to answer before asking the AI (beginner)
If you skip this step, you will use whatever answer the AI provides, even if it answers a different question. This is how you lose the ability to formulate the right question in the first place.
Specify the business definition of each metric before running the query (beginner)
Different organisations count revenue or customer churn in different ways. If you do not tell the AI your definition upfront, it will invent one. This is the starting point for insight laundering.
Identify which joins and filters will affect your row count and tell the AI explicitly (intermediate)
AI often proposes joins that introduce duplicates or filters that exclude valid records. State the expected impact on row count before you run anything.
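The fan-out problem above is easy to demonstrate. Here is a minimal sketch using Python's built-in sqlite3 module, with hypothetical `orders` and `contacts` tables: a one-to-many join silently inflates the row count, which is exactly why you should state the expected count before running anything.

```python
import sqlite3

# Hypothetical schema: one row per order, one row per customer contact.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 10, 100.0), (2, 10, 50.0), (3, 11, 75.0);
CREATE TABLE contacts (customer_id INTEGER, email TEXT);
-- Customer 10 has TWO contact rows: a classic source of join fan-out.
INSERT INTO contacts VALUES (10, 'a@x.com'), (10, 'b@x.com'), (11, 'c@x.com');
""")

rows_before = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
rows_after = conn.execute("""
    SELECT COUNT(*)
    FROM orders o JOIN contacts c USING (customer_id)
""").fetchone()[0]

# The join duplicated order rows: 3 orders became 5 joined rows,
# so any SUM(amount) over this result is silently overstated.
print(rows_before, rows_after)  # 3 5
```

If you had told the AI "this join must keep exactly one row per order", the duplication would be caught the moment the counts disagree.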
List any time periods, segments, or exclusions that matter to your stakeholders (beginner)
A chart that includes cancelled customers looks different from one that excludes them. AI will not know which one you need unless you say so first.
Check whether the data grain matches your question (intermediate)
If your table has one row per transaction but your question needs one row per customer per month, the AI may not catch this mismatch. Specify the grain you need.
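Re-graining is a one-line aggregation once you state it explicitly. A minimal sketch with sqlite3 and a hypothetical `txns` table, collapsing transaction-level rows to one row per customer per month:

```python
import sqlite3

# Hypothetical transactions table: one row per transaction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txns (customer_id INTEGER, txn_date TEXT, amount REAL);
INSERT INTO txns VALUES
  (10, '2026-01-05', 100.0), (10, '2026-01-20', 50.0),
  (10, '2026-02-03', 25.0),  (11, '2026-01-11', 75.0);
""")

# Re-grain explicitly: one row per customer per month.
monthly = conn.execute("""
    SELECT customer_id,
           strftime('%Y-%m', txn_date) AS month,
           SUM(amount) AS total
    FROM txns
    GROUP BY customer_id, month
    ORDER BY customer_id, month
""").fetchall()

print(monthly)
# [(10, '2026-01', 150.0), (10, '2026-02', 25.0), (11, '2026-01', 75.0)]
```

If you ask the AI for "monthly spend per customer" without stating the grain, it may hand you transaction-level rows or a single grand total, both of which look plausible in a chart.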
Ask the AI to show you the SQL before it runs anything (beginner)
Read the query. Do not let tools like Julius or Databricks execute code without your eyes on it first.

While You Review the Output

Manually verify the row count against what you expect (beginner)
A query that returns 50,000 rows when you expected 5,000 has a problem. This is often faster than reading the code.
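One way to make this check non-optional is to write the expectation as an assertion. A minimal sketch, assuming a hypothetical `jan_summary` table that should hold exactly one row per day of January:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jan_summary (sale_date TEXT PRIMARY KEY, total REAL)")
conn.executemany(
    "INSERT INTO jan_summary VALUES (?, ?)",
    [(f"2026-01-{d:02d}", 100.0) for d in range(1, 32)],
)

# One summary row per day of January: 31 rows expected, anything else fails loudly.
expected = 31
actual = conn.execute("SELECT COUNT(*) FROM jan_summary").fetchone()[0]
assert actual == expected, f"expected {expected} rows, got {actual}"
print(actual)  # 31
```

An assertion that fails before the chart is built is far cheaper than a stakeholder spotting the problem after.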
Spot-check three to five rows by hand against the source system (intermediate)
Open the raw data. Confirm that the values in your AI-generated chart actually match what is in the database. This catches logic errors that are invisible in summary tables.
Test the query on a known subset to see if it produces results you can verify (intermediate)
Run the same logic on a small date range or single customer segment where you already know the answer. This is how you catch off-by-one errors and filter mistakes.
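The known-subset test can be sketched as follows, using sqlite3 and a hypothetical `txns` table. The point is that the January total for one customer is a number you already know, so an off-by-one date filter is caught immediately:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txns (customer_id INTEGER, txn_date TEXT, amount REAL);
INSERT INTO txns VALUES
  (10, '2026-01-05', 100.0), (10, '2026-01-31', 50.0),
  (10, '2026-02-01', 25.0),  (11, '2026-01-11', 75.0);
""")

# AI-generated filter under review. BETWEEN on ISO date strings is inclusive,
# so the January 31st transaction must be counted; an off-by-one version
# using txn_date < '2026-01-31' would silently drop it.
jan_total_cust10 = conn.execute("""
    SELECT SUM(amount) FROM txns
    WHERE customer_id = 10
      AND txn_date BETWEEN '2026-01-01' AND '2026-01-31'
""").fetchone()[0]

# You already know customer 10 spent 150.0 in January; verify against that.
assert jan_total_cust10 == 150.0
print(jan_total_cust10)  # 150.0
```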
Look for any values that seem too round or too uniform across categories (intermediate)
If a metric shows exactly 25 per cent across every region, something is probably wrong. Real data is messier. AI sometimes manufactures clean patterns.
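A sniff test for suspicious uniformity takes a few lines. A minimal sketch, assuming hypothetical region shares pulled from an AI-generated summary:

```python
# Hypothetical region shares from an AI-generated summary table.
region_share = {"North": 0.25, "South": 0.25, "East": 0.25, "West": 0.25}

# Real category shares are rarely identical; flag output where the spread
# between the largest and smallest share is under one percentage point.
shares = list(region_share.values())
spread = max(shares) - min(shares)
suspicious = spread < 0.01

print(suspicious)  # True
```

The 0.01 threshold is an illustrative assumption; tune it to what "implausibly uniform" means for your data.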
Check the chart against the numbers in the underlying table (beginner)
Tableau AI and other tools sometimes render a chart that misrepresents the data. The numbers may be correct but the visual may mislead.
Identify which assumptions the AI made about missing or null data (advanced)
Did it exclude rows with nulls? Did it treat them as zero? As a separate category? Ask the AI to show you how it handled each assumption.
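These choices produce genuinely different numbers. A minimal sketch with sqlite3 and a hypothetical `scores` table, showing the gap between "skip nulls" and "treat nulls as zero":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (customer_id INTEGER, score REAL);
INSERT INTO scores VALUES (1, 80.0), (2, NULL), (3, 60.0);
""")

# SQL aggregates skip NULLs: AVG divides by the two non-null rows only.
avg_skip_nulls = conn.execute("SELECT AVG(score) FROM scores").fetchone()[0]

# Treating NULL as zero divides by all three rows and gives a lower answer;
# the AI must tell you which of these it chose.
avg_null_as_zero = conn.execute(
    "SELECT AVG(COALESCE(score, 0)) FROM scores"
).fetchone()[0]

print(avg_skip_nulls, avg_null_as_zero)  # 70.0 vs roughly 46.67
```

Neither answer is wrong in the abstract; what is wrong is presenting one without knowing it was chosen.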
Calculate one key metric by hand using the raw data to compare against the AI result (advanced)
Pick a single number that matters. Sum it yourself in a spreadsheet or simple query. If it does not match the AI output, you have found the problem before stakeholders see it.
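The hand check can be as crude as pulling the raw rows and summing them yourself. A minimal sketch, with a hypothetical `sales` table and a hypothetical AI-reported figure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('North', 120.0), ('North', 80.0), ('South', 50.0);
""")

# The number the AI-generated query reported for North (hypothetical).
ai_reported_north = 200.0

# Recompute the same figure yourself with the simplest possible query:
# fetch the raw rows and sum them in Python, bypassing the AI's logic entirely.
hand_total = sum(
    row[0] for row in
    conn.execute("SELECT amount FROM sales WHERE region = 'North'")
)

assert hand_total == ai_reported_north, "mismatch: investigate before presenting"
print(hand_total)  # 200.0
```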

Before You Present to Stakeholders

Write one sentence explaining how you verified this finding (beginner)
You need to be able to say what you actually checked. This forces you to think clearly about what you did and did not verify.
Prepare a list of the data sources, date ranges, and exclusions used (beginner)
If a stakeholder asks a follow-up question later and you cannot answer it because you did not record these details, you have lost control of the analysis.
Note any statistical limitations or edge cases in your insight (intermediate)
If the finding is based on ten records from a single region, say so. If there is a recent change in how the data is collected, mention it. Do not hide the boundaries of what you actually know.
Ask yourself whether this finding contradicts something you already believe and why (intermediate)
If an AI insight surprises you, that is usually good. But if it contradicts what you thought was true, dig deeper before presenting it. Your statistical instinct is worth listening to.
Decide what decision or action the stakeholder will make based on this finding (advanced)
If you cannot say what the analysis is for, you have not done the right analysis. Use this to decide what level of verification is actually required.
Prepare a statement about what would need to change for this finding to be wrong (advanced)
This forces you to think about the fragility of your conclusion. It also gives stakeholders a way to challenge your work that is more useful than just disagreeing.

