In one of our internal anxiety-drug workflows, the most useful result was not a single molecule. It was a recipe a team could actually act on.
The clearest case comes from a lorazepam-centered run we used as a stress test, including a rerun against a second receptor structure to make sure the result was not just a one-template accident.
That matters because the workflow is supposed to answer a commercial question, not just a scientific one. If LeadScope is going to help with next-compound decisions, it has to do more than identify a locally exciting molecule. It has to tell the team which coordinated change appears worth making and whether that recommendation survives contact with the rest of the objective bundle.
In that lorazepam run, the best direction was surprisingly unglamorous. It pushed the chemistry toward something a bit less extended, a bit less branched, and a bit less chain-heavy.
Why is that useful? Because the move is balanced. In that screen, predicted liver-safety risk improves by about +0.547, docking still nudges in the right direction by about +0.036, and no modeled endpoint gets worse. That is not a miracle-molecule claim. It is a decision-ready claim: there is a coordinated direction that appears to help safety while preserving the broader profile.
Now compare that with the more seductive alternative. A different shape-focused edit pushes docking harder, about +0.082, and still helps liver risk somewhat, about +0.124. But it also worsens cardiac-liability risk, with hERG moving by about -0.188.
The shape of that decision is simple:

- Balanced edit: liver safety +0.547, docking +0.036, no modeled endpoint worse.
- Shape-focused edit: docking +0.082, liver safety +0.124, but hERG -0.188.
That contrast is the whole point. A ranking-first workflow is tempted to celebrate the bigger docking move. A decision-oriented workflow has to ask the harder question: is this still the right move once the cardiac-liability bill shows up in the next column?
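That question can be made mechanical. Below is a minimal sketch of the non-regression filter described above, using the deltas quoted in this post. The candidate names, the dictionary layout, and the zero hERG delta for the balanced edit (the post says only that "no modeled endpoint gets worse") are illustrative assumptions, not LeadScope's actual data model.

```python
# Illustrative sketch: per-endpoint score deltas for two candidate edits.
# Positive = improvement. Names and structure are hypothetical; the
# numbers are the ones quoted in the post.
CANDIDATES = {
    "balanced edit": {"liver_safety": +0.547, "docking": +0.036, "herg": 0.0},
    "shape-focused edit": {"liver_safety": +0.124, "docking": +0.082, "herg": -0.188},
}

def survives_stack(deltas, tolerance=0.0):
    """A move is decision-ready only if no modeled endpoint regresses
    beyond the tolerance, regardless of how large its best gain is."""
    return all(d >= -tolerance for d in deltas.values())

decision_ready = [name for name, d in CANDIDATES.items() if survives_stack(d)]
print(decision_ready)  # only the balanced edit passes
```

The point of writing it this way is that the filter runs before any ranking: the shape-focused edit's larger docking gain never gets a chance to outvote its cardiac penalty.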
In that run, the better answer was not the flashiest answer. It was the durable one.
This is what the Design/Rescue Report is supposed to capture. Not “here is the prettiest metric.” Instead: here is the direction of change that best survives the full objective stack, here is the tradeoff that made the runner-up less trustworthy, and here is the reason the recommendation should be treated as a real next-step hypothesis rather than a score artifact.
That is a higher bar than ordinary optimization copy usually clears. It is also much closer to what a chemistry team actually needs.
Next in the series: why that same lorazepam-centered recommendation gets more convincing once you see that it already occupies a dense, repeatable region of the Pareto frontier.