Protecting Your Judgement: A Guide for Economists Using AI Tools
AI tools can generate plausible-looking regression specifications, confidence intervals, and policy scenarios in seconds. This speed creates a real danger: you may find yourself defending a model you did not build from first principles, or presenting forecasts that carry false precision into decisions affecting millions of people. Your edge as an economist is not your ability to run code faster than an AI. It is your capacity to question assumptions, ground analysis in economic theory, and know when a model should not be trusted.
These are suggestions. Your situation will differ. Use what is useful.
Test every assumption the AI builds into your model
When you use Claude or ChatGPT to generate a regression specification or production function, you are outsourcing the assumption-building work. The AI will choose lag structures, functional forms, and variable transformations based on statistical convenience or patterns in its training data, not because economic theory supports those choices. Before you run the model, force yourself to write down each assumption the AI suggested and ask whether you would have made that choice independently. If you cannot defend it on economic grounds, change it.
- Ask the AI to list every assumption embedded in its proposed model specification, then evaluate each one against your own understanding of the economic mechanism you are studying
- When using Stata AI or Bloomberg AI to generate a forecast model, require yourself to check whether lag lengths match the timing of the economic adjustment process you expect, not just the sample size
- Keep a separate document recording which assumptions you changed and why. This becomes your audit trail if policy makers later question your model
Reject spurious precision in forecasts and confidence bands
AI tools generate forecast confidence intervals that look rigorous but often reflect only the statistical fit to historical data. They do not account for structural breaks, regime shifts, or the genuine uncertainty about future behaviour that comes from economic institutions changing shape. When Perplexity or ChatGPT presents you with a 95 per cent confidence interval for GDP growth three years ahead, recognise that number as a statistical artefact, not a reflection of actual future knowledge. Your judgement about the sources of real uncertainty must override the machine's arithmetic.
- Always widen AI-generated confidence intervals based on your own assessment of structural uncertainty. Document how much wider you made them and why
- When presenting forecasts, separate the statistical precision (what the model fit produces) from the genuine uncertainty (what you believe about institutional change, policy regime, or global shifts)
- Use AI-generated point forecasts as one input to your own scenario analysis, not as a central estimate to be reported with false certainty
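The widening discipline above can be sketched in a few lines. This is a minimal illustration assuming a normal approximation; the widening factor is a judgement input you must choose and document yourself, and the function name and figures are hypothetical:

```python
def widen_interval(point, stderr, widen_factor, z=1.96):
    """Widen a model-based 95 per cent interval by a judgement-based factor.

    widen_factor > 1 encodes your assessment of structural uncertainty the
    model cannot see (regime shifts, institutional change). It comes from
    you, not from the data.
    """
    half_width = z * stderr * widen_factor
    return point - half_width, point + half_width

# Illustrative only: model says GDP growth 2.1% with standard error 0.4;
# we judge structural uncertainty roughly doubles the plausible range.
lo, hi = widen_interval(2.1, 0.4, widen_factor=2.0)
```

Recording the `widen_factor` alongside the published band gives you the audit trail the previous section calls for.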
Keep theory before data fitting
The cognitive risk with AI tools is that they make data fitting so easy that you stop thinking like an economist and start thinking like a statistician. ChatGPT or Claude can generate dozens of candidate models, test them against your data, and hand you the best fit. That model will rarely correspond to the economic theory you believe in. You must work backwards: start with the economic mechanism you want to explain, specify that mechanism in equations, then use AI to help you test that specific model against data. Do not let the AI choose your model structure because it fits better.
- Write your economic model on paper or whiteboard before showing it to any AI tool. This forces you to own the theoretical structure before automation takes over
- When using Bloomberg AI or Stata AI, instruct the tool to test your specified model, not to find the best-fitting model among many candidates
- If AI-generated diagnostics suggest your theoretically motivated model has problems, investigate whether the problem reflects real economic behaviour or whether you need to revise your theory
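As a minimal illustration of estimating only a pre-specified equation rather than searching across candidates, here is a numpy-only sketch on synthetic data. The variables and the one-period lag are hypothetical stand-ins for whatever your theory specifies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 201

# Synthetic stand-in data: investment responds to last period's interest
# rate, the mechanism our (hypothetical) theory specifies.
rate = rng.normal(size=n)
noise = rng.normal(scale=0.2, size=n - 1)
invest = 1.0 - 0.5 * rate[:-1] + noise  # invest[t] depends on rate[t-1]

# Estimate ONLY the theory-specified equation invest_t = a + b * rate_{t-1},
# instead of letting a tool search over many lag structures for best fit.
X = np.column_stack([np.ones(n - 1), rate[:-1]])
beta, *_ = np.linalg.lstsq(X, invest, rcond=None)
a_hat, b_hat = beta
```

The point is the discipline, not the arithmetic: the specification is fixed before estimation, so a poor fit becomes evidence about the theory rather than a prompt to swap in a better-fitting but theory-free model.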
Make policy recommendations your own, not the model's
AI tools are effective at generating policy options that follow from a given model's output. They are not equipped to judge whether those policy options are politically viable, administratively feasible, or consistent with the broader economic evidence you know about from your field. When Claude suggests a policy scenario based on your model, treat it as a proposal you must interrogate, not as an implication you should adopt. Your responsibility to policy makers is to present what your model shows, what assumptions make it work, and what you believe on other grounds about which policy path is sound.
- Separate three things in every policy memo: what the model predicts, what economic theory suggests independent of your model, and what your judgement is about trade-offs the model does not capture
- Before using AI to generate policy scenarios, list the constraints (legal, administrative, political) that your model knows nothing about. These constrain which scenarios are real options
- Ask policy makers directly: do you want me to tell you what my model predicts, or what I think you should do? The answer changes how you present AI-assisted analysis
Document what you checked and what you did not
AI tools make it possible to produce economic analysis faster than you can verify it thoroughly. This creates an accountability problem. If a policy based on your AI-assisted model fails, you need to be able to show what you tested and where you made deliberate choices to override what the machine suggested. Build this discipline into your working practice: keep records of alternative specifications you considered and rejected, assumptions you questioned, and forecasts where you widened the confidence bands based on judgement. This is not burdensome; it is the professional standard you already apply to models you build by hand.
- Maintain a model specification log for each project showing the version history of your model structure and the reasons you changed it
- When you use ChatGPT or Perplexity to explore ideas, save the conversation but do not treat it as evidence of your thinking. Write your own summary of what you learned and why it changed your approach
- For any forecast or policy recommendation that uses AI-assisted tools, include a limitations section that names the assumptions you tested and the uncertainties you could not resolve
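A specification log need not be elaborate. One possible shape, sketched here with a hypothetical helper and file name, is an append-only record that captures each change, your reasoning, and whether it accepted or overrode an AI suggestion:

```python
import datetime
import json

def log_spec_change(path, model_version, change, reason, source):
    """Append one entry to the project's model specification log.

    `source` records whether the change came from an AI suggestion you
    accepted, modified, or overrode, which is exactly what you will need
    to show if the model is later questioned.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "change": change,
        "reason": reason,
        "source": source,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Illustrative entry: overriding an AI-proposed lag structure.
log_spec_change(
    "spec_log.jsonl", "v3",
    change="replaced AI-suggested 4-quarter lag with 2-quarter lag",
    reason="adjustment in this market completes within two quarters",
    source="AI suggestion overridden",
)
```

One JSON line per decision keeps the log diffable and trivially searchable when you assemble the limitations section of a memo.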
Key principles
1. An assumption the AI proposes is not valid until you have tested it against economic theory and your own understanding of the mechanism being modelled.
2. Confidence intervals generated by statistical models describe past fit, not future certainty. Your own judgement about structural breaks and institutional change must widen them.
3. Data fitting is not economic reasoning. Start with theory, use AI to test it, and reject the model if it contradicts your economic intuition without a sound explanation.
4. Policy recommendations from AI outputs are suggestions to interrogate, not conclusions to adopt. Your responsibility is to present the model's logic and your own independent assessment of what is sound.
5. Document the choices you made, the assumptions you questioned, and the paths you rejected. This record is how you remain accountable when AI allows you to produce analysis faster than you can verify it.
Key reminders
- Before running any model specification suggested by Claude or ChatGPT, write down on paper the economic theory that justifies each term and relationship. If you cannot do this, the specification is not ready.
- When a Bloomberg AI or Stata AI forecast produces a narrow confidence band, immediately ask yourself what structural uncertainty the model is ignoring. Widen the bands based on that assessment before presenting them.
- Keep one document for each project that lists alternative models you considered but rejected, with your reasoning. This discipline prevents you from drifting into pure data fitting.
- For policy analysis, require yourself to present three scenarios: what happens if the model is right, what happens if the key assumptions are wrong, and what your independent judgement suggests about which is more likely.
- Use AI to generate first drafts of technical appendices and sensitivity tables, but write the main findings and limitations sections yourself. This ensures your judgement, not the machine's confidence, comes through in the conclusions.