
Protecting Your Judgement: A Guide for Economists Using AI Tools

AI tools can generate plausible-looking regression specifications, confidence intervals, and policy scenarios in seconds. This speed creates a real danger: you may find yourself defending a model you did not build from first principles, or presenting forecasts that carry false precision into decisions affecting millions of people. Your edge as an economist is not your ability to run code faster than an AI. It is your capacity to question assumptions, ground analysis in economic theory, and know when a model should not be trusted.

These are suggestions. Your situation will differ. Use what is useful.


Test every assumption the AI builds into your model

When you use Claude or ChatGPT to generate a regression specification or production function, you are outsourcing the assumption-building work. The AI will choose lag structures, functional forms, and variable transformations based on statistical convenience or patterns in its training data, not because economic theory supports those choices. Before you run the model, force yourself to write down each assumption the AI suggested and ask whether you would have made that choice independently. If you cannot defend it on economic grounds, change it.
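One way to build this discipline into your workflow is to write the AI's choices down before running anything. The sketch below is a minimal, hypothetical illustration (the specification names and justifications are invented for the example): each modelling choice is paired with its economic justification, and empty entries flag choices you have not yet defended.

```python
# A minimal sketch of an assumption checklist (hypothetical spec names):
# before running an AI-suggested specification, record each modelling
# choice and whether you can defend it on economic grounds.

def undefended_assumptions(spec):
    """Return the modelling choices that carry no recorded economic
    justification and therefore need review before the model is run."""
    return [name for name, justification in spec.items()
            if not justification]

# An AI-suggested specification, annotated by hand. Empty strings mark
# choices you have not yet defended from theory.
ai_spec = {
    "log transform of wages": "standard for right-skewed earnings data",
    "four-quarter lag on investment": "",   # chosen by the AI for fit
    "linear trend term": "",                # chosen by the AI for fit
}

print(undefended_assumptions(ai_spec))
# -> ['four-quarter lag on investment', 'linear trend term']
```

The point is not the code but the habit: every choice the AI made for statistical convenience surfaces as an empty entry you must either justify or change.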

Reject spurious precision in forecasts and confidence bands

AI tools generate forecast confidence intervals that look rigorous but often reflect only the statistical fit to historical data. They do not account for structural breaks, regime shifts, or the genuine uncertainty about future behaviour that comes from economic institutions changing shape. When Perplexity or ChatGPT presents you with a 95 per cent confidence interval for GDP growth three years ahead, recognise that number as a statistical artefact, not a reflection of actual future knowledge. Your judgement about the sources of real uncertainty must override the machine's arithmetic.
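Overriding the machine's arithmetic can be made explicit rather than left implicit. The sketch below uses invented numbers and an illustrative widening factor; the only substantive idea is that a judgement-based adjustment to a model's band should be a visible, recorded step, not a silent one.

```python
# Sketch: a model's 95% band reflects only historical fit. Widening it
# by a judgement-based factor (the factor here is illustrative) makes
# the extra uncertainty from structural change explicit.

def widen_interval(point, lower, upper, judgement_factor):
    """Scale each side of an interval about its point forecast."""
    return (point - (point - lower) * judgement_factor,
            point + (upper - point) * judgement_factor)

# Hypothetical model output: GDP growth three years ahead, with a
# 95% band derived from historical fit alone.
point, lower, upper = 2.1, 1.3, 2.9

# Judgement: institutions may shift shape; double the band.
low_w, high_w = widen_interval(point, lower, upper, 2.0)
print(round(low_w, 2), round(high_w, 2))  # 0.5 3.7
```

The widened band is no more "correct" than the original; the difference is that the judgement behind it is now on the record and open to challenge.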

Keep theory before data fitting

The cognitive risk with AI tools is that they make data fitting so easy that you stop thinking like an economist and start thinking like a statistician. ChatGPT or Claude can generate dozens of candidate models, test them against your data, and hand you the best fit. That model will rarely correspond to the economic theory you believe in. You must work backwards: start with the economic mechanism you want to explain, specify that mechanism in equations, then use AI to help you test that specific model against data. Do not let the AI choose your model structure because it fits better.
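The working-backwards discipline can be sketched as a filter: candidates that contradict theory are excluded before fit is even compared. The example below uses invented candidate models and a single illustrative prior (demand slopes downward in price); in practice your theoretical constraints will be richer.

```python
# Sketch (hypothetical candidates and numbers): automated selection
# picks whichever model fits best, even when its coefficients
# contradict theory. A theory-first filter rejects such candidates
# before fit is compared.

candidates = [
    {"name": "AI best fit",  "r2": 0.94, "demand_price_coef": 0.3},
    {"name": "theory model", "r2": 0.81, "demand_price_coef": -0.6},
]

def theory_consistent(model):
    # Economic prior: demand falls as price rises, so the price
    # coefficient in a demand equation should be negative.
    return model["demand_price_coef"] < 0

admissible = [m for m in candidates if theory_consistent(m)]
best = max(admissible, key=lambda m: m["r2"])
print(best["name"])  # 'theory model' - the better fit was excluded
```

Note the order of operations: theory restricts the candidate set first, and goodness of fit only adjudicates among models you could defend on economic grounds.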

Make policy recommendations your own, not the model's

AI tools are effective at generating policy options that follow from a given model's output. They are not equipped to judge whether those policy options are politically viable, administratively feasible, or consistent with the broader economic evidence you know about from your field. When Claude suggests a policy scenario based on your model, treat it as a proposal you must interrogate, not as an implication you should adopt. Your responsibility to policy makers is to present what your model shows, what assumptions make it work, and what you believe on other grounds about which policy path is sound.

Document what you checked and what you did not

AI tools make it possible to produce economic analysis faster than you can verify it thoroughly. This creates an accountability problem. If a policy based on your AI-assisted model fails, you need to be able to show what you tested and where you made deliberate choices to override what the machine suggested. Build this discipline into your working practice: keep records of alternative specifications you considered and rejected, assumptions you questioned, and forecasts where you widened the confidence bands based on judgement. This is not burdensome; it is the professional standard you already apply to models you build by hand.
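The record-keeping described above need not be elaborate. The sketch below shows one lightweight, hypothetical structure (the entries are invented examples): each log entry names the choice, where it came from, what you decided, and why.

```python
# Sketch of a lightweight decision log (hypothetical entries): each
# record notes a modelling choice, whether it came from the AI or
# from your own judgement, and the reason for keeping, changing, or
# rejecting it. Written as you work, it remains an honest record.

import json

log = []

def record(choice, source, decision, reason):
    log.append({"choice": choice, "source": source,
                "decision": decision, "reason": reason})

record("four-quarter investment lag", "AI suggestion", "rejected",
       "no theoretical basis; replaced with one-quarter lag")
record("GDP forecast band widened x2", "own judgement", "kept",
       "historical fit understates institutional uncertainty")

print(json.dumps(log, indent=2))
```

If a policy built on the model later fails, this log is exactly the evidence of what you tested and where you deliberately overrode the machine.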

Key principles

  1. An assumption the AI proposes is not valid until you have tested it against economic theory and your own understanding of the mechanism being modelled.
  2. Confidence intervals generated by statistical models describe past fit, not future certainty. Your own judgement about structural breaks and institutional change must widen them.
  3. Data fitting is not economic reasoning. Start with theory, use AI to test it, and reject the model if it contradicts your economic intuition without a sound explanation.
  4. Policy recommendations from AI outputs are suggestions to interrogate, not conclusions to adopt. Your responsibility is to present the model's logic and your own independent assessment of what is sound.
  5. Document the choices you made, the assumptions you questioned, and the paths you rejected. This record is how you remain accountable when AI allows you to produce analysis faster than you can verify it.
