The lever we don't pull hard enough
Why affinity continues to matter
Veteran drug hunter Mark Murcko, a colleague and friend, has a lesson for drug discovery scientists in an excellent Perspective in J. Med. Chem. that may seem deceptively simple and obvious: optimizing affinity matters, and it must remain, far and away, the leading objective in early-stage drug discovery.
Now many medicinal chemists will go “Yep, we’ve always known this,” while others will counter, “Let’s not put affinity on a pedestal, since we have found out the hard way that ADME properties are more important.” Mark anticipates and comprehensively answers both of these sentiments, but he starts by laying out seven advantages of affinity: achieving potent tool compounds more quickly; making compounds with increased potency; making more selective compounds; optimizing drug candidates more quickly; encouraging the pursuit of more synthetically challenging compounds; expanding chemical diversity during lead optimization; and minimizing interactions with “avoid-ome” targets that lead to poor ADME and tox properties.
As any medicinal chemist knows, these objectives pretty much span the gamut of goals in preclinical drug discovery. I particularly liked the emphasis on minimizing interactions with the “avoid-ome” and on being able to challenge synthetic chemistry efforts. Whether computational or experimental, new methods for calculating or measuring affinity more accurately will lead to greater confidence that efforts to make synthetically challenging molecules are worth it; this in turn will encourage chemists to step away from the standard hammer-and-tongs chemistry of Suzuki and amide couplings and take on newer, more exotic synthetic strategies. If you have a good sense that you are optimizing affinity at both the biochemical and the cellular level, you are more likely to expend the effort that challenging chemistry demands.
There’s also a subtle interplay between on- and off-target effects that operates through both direct binding and downstream PK/PD: the tighter and better a compound binds to and modulates its target, the lower the risk that it will measurably modulate an unwanted off-target. This is of course not an all-or-nothing proposition, since most compounds have some affinity for off-targets; it really boils down to a race between on- and off-target binding, one that can be biased toward the desired target by optimizing affinity. The point is that you stand a much better chance of not hitting unrelated targets if you optimize interactions with the desired one.
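To put a rough number on that race, here’s a toy back-of-the-envelope sketch (my own illustration, not from the paper, with entirely hypothetical Kd values) using simple 1:1 fractional occupancy, theta = C/(C + Kd). Tightening on-target affinity tenfold lets you reach the same on-target occupancy at a tenfold lower concentration, which collapses off-target occupancy:

```python
# Toy illustration (not from Murcko's paper): 1:1 binding occupancy,
# theta = C / (C + Kd). All Kd values below are hypothetical.
def occupancy(conc_nM, kd_nM):
    """Fractional occupancy of a site at a given free ligand concentration."""
    return conc_nM / (conc_nM + kd_nM)

kd_off = 1000.0  # hypothetical off-target Kd, nM

for kd_on in (50.0, 5.0):  # tightening on-target affinity tenfold
    conc = 9 * kd_on       # free conc needed for 90% on-target occupancy
    print(f"Kd(on) = {kd_on:>4.0f} nM -> dose to {conc:>3.0f} nM, "
          f"off-target occupancy = {occupancy(conc, kd_off):.0%}")
# Kd(on) =   50 nM -> dose to 450 nM, off-target occupancy = 31%
# Kd(on) =    5 nM -> dose to  45 nM, off-target occupancy = 4%
```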
Note that we are saying “optimizing,” not “maximizing,” as Mark rightly notes. Maximizing affinity is always tempting, especially since we know for a fact that affinity almost always drops once we start optimizing ADME properties. But the “right” affinity for a target depends on the modality and the desired pharmacological effect. As one example the piece anticipates and discusses in some detail, maximizing binary affinity in multi-body modalities like PROTACs and molecular glues can even be counterproductive for ternary complex formation. In fact, the reason these systems are fascinating is the non-linear relationship between the binary affinities (for both the effector protein and the target) and the stability of the ternary complex. We are finding out that “loose” interactions, which let the partners reorganize into productive binding conformations, can matter more than tighter ones that lock them into unproductive geometries; but that still means optimizing those loose interactions: affinity still matters.
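For the curious, here’s a minimal equilibrium sketch (mine, not Mark’s, with hypothetical numbers) of the familiar “hook effect” that makes these systems non-linear. Under a crude approximation in which free degrader is not depleted and the ternary species is small, the ternary complex scales roughly as alpha*T*E*D / ((K1 + D)(K2 + D)), so piling on more binary engagement past the optimum strands the partners in dead-end binary complexes:

```python
# A toy sketch of the ternary "hook effect" under a crude non-depleting
# approximation; K1 and K2 are the binary Kd's of the degrader for the
# target and the E3 ligase, alpha is the cooperativity. Numbers are
# hypothetical and chosen only to show the bell shape.
K1, K2 = 100.0, 100.0        # binary Kd's, nM
alpha = 1.0                  # cooperativity (alpha > 1 = positive)
T_tot, E_tot = 100.0, 100.0  # total target and E3 ligase, nM

def ternary(d_nM):
    """Approximate ternary complex concentration at degrader conc d."""
    return alpha * T_tot * E_tot * d_nM / ((K1 + d_nM) * (K2 + d_nM))

for d in (10, 100, 1000, 10000):  # degrader concentration, nM
    print(f"[degrader] = {d:>5} nM -> [ternary] ~ {ternary(d):5.1f} nM")
# Ternary peaks near sqrt(K1*K2) = 100 nM and then falls off: saturating
# each binary interaction separately is counterproductive.
```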
Conversely, as the piece points out, one should not stop at some magic single-digit nanomolar number when optimizing affinity. Chemists often operate with a mental model based on their reading of the data, saying to themselves something along the lines of, “Since we have enough compounds with a 5 nM IC50/Ki/EC50, let’s now focus on optimizing properties.” But absent data arguing otherwise, there’s no reason not to keep improving affinity in order to squeeze the most out of downstream properties. My recent blog post about the discovery of lenacapavir (a picomolar-affinity compound) makes it clear that having multiple, extremely potent compounds as starting points gives you much better odds in case one or more of them show poor clearance or idiosyncratic tox downstream. As Mark puts it, “additional affinity may dramatically simplify the multi-parameter optimization process, especially for challenging targets. PK challenges simply become more manageable if less circulating drug is needed at the site of action to achieve the desired effect. Lower total body dose compounds also tend to be safer.”
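To see why needing less circulating drug simplifies PK, consider the standard maintenance-dose relationship, dose rate = CL × Css / F. The sketch below is pure back-of-the-envelope arithmetic with made-up numbers (a 500 g/mol compound, an assumed efficacious exposure of ~10× Kd); the point is just the linear leverage that affinity exerts on dose:

```python
# Back-of-the-envelope dosing arithmetic (hypothetical numbers): if the
# efficacious average steady-state concentration scales with Kd, a tighter
# binder needs proportionately less drug in circulation.
CL = 20.0  # clearance, L/h (hypothetical)
F = 0.5    # oral bioavailability (hypothetical)

for kd_nM in (50.0, 5.0):
    css_nM = 10 * kd_nM                        # assume efficacy needs ~10x Kd
    css_mg_per_L = css_nM * 1e-9 * 500 * 1e3   # for a 500 g/mol compound
    dose_mg_per_day = CL * css_mg_per_L / F * 24
    print(f"Kd = {kd_nM:>4.0f} nM -> daily dose ~ {dose_mg_per_day:.0f} mg")
# Kd =   50 nM -> daily dose ~ 240 mg
# Kd =    5 nM -> daily dose ~ 24 mg
```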
The piece ends with an “affinity checklist” that should be useful to both novice and experienced practitioners; it lays out the advantages of optimal affinity across the pillars of binding, properties, and synthesis. While a relentless emphasis on properties as the driving force behind successful druglike compounds has immensely benefited the field, it seems to me that the field may have overcorrected, treating affinity as if it matters less than it does. Mark’s perspective is a valuable correction, showing us that affinity is important precisely because it bears on all the downstream properties that drug hunters care about.




Really thought-provoking piece. The insight about “loose” binding in PROTACs and glues is kinda fascinating, especially how maximizing binary affinity can actually backfire. I’ve worked on a few projects where we hit that magic nanomolar number and just stopped, assuming we were good enough to move on to other properties.