
30 Practical Ideas for Pharmaceutical and Life Sciences to Stay Cognitively Sovereign

Your researchers now rely on AI platforms such as Schrödinger and Insilico Medicine to design molecules and plan trials, but many cannot explain why these tools chose what they chose. When your team stops asking hard questions about AI recommendations, you lose the human judgement that catches safety risks and scientific blind spots. These 30 ideas help you keep control of your decisions.

These are suggestions. Take what fits, leave the rest.


Protecting Judgement in Drug Discovery

Train chemists to manually design 10 percent of lead candidates (beginner)
Before running Schrödinger predictions, have your medicinal chemists sketch molecules by hand based on their understanding of your target protein, then compare their work to the AI output.
Require written explanations of why Insilico Medicine ranked compounds the way it did (beginner)
Ask the AI tool to show its reasoning for its top three picks, then have a senior chemist assess whether that reasoning matches chemical logic or reflects gaps in the training data.
Run quarterly retrospectives on compounds the AI rejected but your team wanted to make (intermediate)
Pick five molecules your researchers pushed back on, synthesise them anyway, and test them. Track which ones the AI missed and why this matters for your discovery strategy.
Keep a physical laboratory notebook for each AI-designed series (beginner)
Alongside your computational work, keep hand-written notes on your hypotheses, predicted properties, and what you expected to find. This catches moments when you are following AI predictions without critical thought.
Assign one senior scientist to challenge every SAR conclusion from your AI platform (intermediate)
Designate someone with 15 years of chemistry experience to review structure-activity relationship claims before they guide your next round of synthesis. They should disagree at least once per week.
Test whether your AI model performs worse on underrepresented chemical scaffolds (intermediate)
Run a small experiment with molecules from underexplored chemistry space, then check if the AI ranks them lower than your chemists do. If it does, you have found a blind spot.
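One way to run this check is to score the same molecules twice, once by the model and once by blinded chemists, then compare rank agreement per scaffold class. A minimal sketch, assuming hypothetical 0-to-1 scores; the scaffold labels, field names, and the 0.5 agreement threshold are illustrative choices, not outputs of any real platform:

```python
from statistics import mean

def spearman(xs, ys):
    """Spearman rank correlation via Pearson on ranks (assumes no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: higher = better. ai_score from the model,
# chemist_score from a blinded manual review of the same structures.
compounds = [
    {"scaffold": "common", "ai_score": 0.9, "chemist_score": 0.8},
    {"scaffold": "common", "ai_score": 0.7, "chemist_score": 0.6},
    {"scaffold": "common", "ai_score": 0.4, "chemist_score": 0.5},
    {"scaffold": "rare",   "ai_score": 0.2, "chemist_score": 0.9},
    {"scaffold": "rare",   "ai_score": 0.3, "chemist_score": 0.4},
    {"scaffold": "rare",   "ai_score": 0.5, "chemist_score": 0.7},
]

for cls in sorted({c["scaffold"] for c in compounds}):
    subset = [c for c in compounds if c["scaffold"] == cls]
    rho = spearman([c["ai_score"] for c in subset],
                   [c["chemist_score"] for c in subset])
    flag = "  <- possible blind spot" if rho < 0.5 else ""
    print(f"{cls}: AI vs chemist rank agreement rho={rho:.2f}{flag}")
```

Low agreement on rare scaffolds relative to common ones is the signal to dig into: it suggests the model is extrapolating outside its training distribution there.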
Require newcomers to the team to solve one target without touching AI tools (beginner)
Before they learn your AI workflow, have junior chemists spend two weeks on classical library design and property prediction so they understand what the tools are doing.
Ask your AI vendor for documentation of where their training data came from (intermediate)
For BenevolentAI or similar platforms, request a report on which patents, papers, and experimental datasets taught the model. This tells you what kinds of chemistry it knows well and what it might miss.
Block your team from seeing AI-generated compound names until after they have read the structures (beginner)
Remove algorithmic labels and IDs, show only the molecular structure, and ask your chemist what they notice. Then reveal what the AI thought it was optimising for.
Run a monthly journal club focused on compounds your AI did not discover
Search recent literature for novel scaffolds and mechanisms that solve your target, then map them back to your AI training data. This keeps your team aware of what the model does not know.

Maintaining Clinical Judgement in Trial Design

Have your trial statistician manually design the primary endpoint before AI recommends one (beginner)
Work through your patient population, clinical history, and regulatory precedent without AI input. Only then compare your endpoint to what an AI platform suggested.
Create a one-page document of what your trial must detect that the AI optimisation might miss (beginner)
List patient-relevant safety signals, vulnerable populations, and secondary outcomes that matter to your disease area. Review this before accepting any AI-driven trial design.
Require a clinician to audit patient inclusion and exclusion criteria generated by AI (intermediate)
Ask a physician with 10 years of clinical experience to challenge each AI-recommended criterion. What populations does this exclude that should be included?
Run a pilot enrolment using your AI-designed protocol on 10 real patients (intermediate)
Before committing to your full AI-optimised design, recruit 10 patients and apply the criteria and assessments. Document where the protocol fails in real clinical practice.
Compare your AI-designed trial to one human-designed trial from your competitor or a published protocol (intermediate)
Take another company's trial for the same indication that was designed without AI. Lay them side by side. Where does the AI design look overfit or impractical?
Assign a trial nurse to flag any endpoint or assessment that seems difficult to perform consistently (beginner)
Ask the staff who will actually run your trial to read the AI protocol and highlight moments where patient variability or practical constraints might break the design.
Document which patient subgroups your AI trial design treats as noise versus signal (intermediate)
Ask your biostatistician to show you the heterogeneity analysis from your AI optimisation. Which patients is the model downweighting? Why?
Hold a meeting where your clinical operations team vetoes one aspect of the AI protocol (beginner)
Tell your ops team that their job is to find one element of the AI design that creates practical or safety risk. Empower them to reject it.
Write a regulatory strategy before you write the statistical analysis plan (intermediate)
Without AI tools, decide what the regulator needs to see to approve your drug. Then check whether your AI-designed trial collects that evidence.
Review patient outcome data from your most recent trial and ask what the AI would have missed
Look at adverse events, discontinuations, and secondary outcomes from a completed study. Identify patterns that your AI model would not have predicted. Use this to improve your next trial design.

Keeping Regulatory Dossiers Credible

Require a human toxicologist to review all AI-generated safety summaries before they go into your CTD (beginner)
Do not let ChatGPT or automated tools write your Common Technical Document sections. A qualified person must read every summary and confirm it reflects your actual data.
Create a log of every AI tool used in your dossier and what it analysed (beginner)
Document whether Schrödinger modelled your active pharmaceutical ingredient, whether Watson Health helped draft your clinical summary, or whether you used automated statistical outputs. Transparency builds regulator confidence.
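A log like this can start as a plain spreadsheet or CSV with one row per AI touchpoint. A minimal sketch, assuming illustrative field names and entries; the tools, versions, sections, and reviewer roles shown are placeholders to adapt to your own quality system:

```python
import csv
import io

# Illustrative column set: one row for every place AI touched the dossier.
FIELDS = ["date", "tool", "version", "dossier_section",
          "what_it_analysed", "human_reviewer", "review_outcome"]

entries = [
    {"date": "2024-05-02", "tool": "Schrödinger", "version": "2024-1",
     "dossier_section": "3.2.S", "what_it_analysed": "API modelling",
     "human_reviewer": "senior CMC scientist", "review_outcome": "accepted"},
    {"date": "2024-05-10", "tool": "automated stats pipeline", "version": "v3",
     "dossier_section": "2.7.3", "what_it_analysed": "efficacy summary tables",
     "human_reviewer": "biostatistician", "review_outcome": "revised then accepted"},
]

# Write the log to an in-memory buffer; swap for a real file in practice.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(entries)
print(buf.getvalue())
```

The point is less the tooling than the discipline: if a row cannot name a human reviewer and an outcome, that AI output has not actually been checked.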
Have a regulatory affairs manager manually write one module before AI drafts the rest (intermediate)
Write Module 2 or Section 2.3 by hand to establish your voice, structure, and standard of evidence. Use this as a template to review AI-generated sections.
Ask a regulator informally whether they are comfortable with the AI-generated content in your dossier (intermediate)
During a pre-submission meeting, be honest about what you generated with AI assistance. Ask whether they want you to rewrite sections with traditional methods.
Require two independent pharmacokineticists to review any AI modelling of your drug metabolism (intermediate)
If you used Watson Health or similar tools to predict PK properties, have two experienced scientists check the assumptions and predictions against your bench data.
Document the limitations of AI-generated analyses in your dossier itself (beginner)
If you used automated statistical methods or AI summarisation, add a note to your CTD explaining what the AI did, how it was validated, and where human review occurred.
Compare your AI-drafted clinical overview to one written without AI from a competitor's approved dossier (intermediate)
Read a publicly available EMA assessment report. Notice the structure, depth, and critical analysis. Does your AI version match this standard?
Have your quality assurance team verify that AI did not miss any out-of-specification batch or stability result (beginner)
Ask QA to manually check your chemistry and pharmaceutical development sections. AI tools sometimes skip or downplay inconvenient data points.
Require a clinical pharmacologist to sign off on any AI-generated pharmacology summary (beginner)
Do not let automated tools write claims about your mechanism of action without a qualified expert confirming the statements against your study reports.
Create a standard of evidence for when you will and will not use AI in regulatory writing
Decide in advance that AI can draft sections where data is routine and well-established, but may not write sections involving new mechanisms, safety signals, or regulatory precedent.
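A rule decided in advance can be written down as a simple gate that is consulted before any drafting starts. A minimal sketch, assuming illustrative criteria names; the actual categories and their wording should come from your regulatory team:

```python
# Hypothetical policy gate for AI drafting of a CTD section.
# The criteria keys below are assumptions for illustration only.

def ai_drafting_allowed(section):
    """Return (allowed, reason) under the pre-agreed standard of evidence."""
    if section.get("new_mechanism"):
        return False, "novel mechanism of action: human-written only"
    if section.get("safety_signal"):
        return False, "open safety signal: human-written only"
    if section.get("novel_regulatory_precedent"):
        return False, "no regulatory precedent: human-written only"
    return True, "routine, well-established data: AI draft permitted with human review"

# Example: a section touching an open safety signal is blocked.
allowed, reason = ai_drafting_allowed({"new_mechanism": False, "safety_signal": True})
print(allowed, "-", reason)
```

Encoding the rule this explicitly forces the team to agree on the categories up front, instead of arguing section by section after a draft already exists.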

