The Bioinformatics CRO Podcast

Episode 80 with Diane Shao

Dr. Diane Shao, an attending neurologist at Boston Children’s Hospital and instructor of neurology at Harvard Medical School, discusses her work as a physician scientist focusing on genetic causes of childhood neurodevelopmental conditions.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Diane Shao

Dr. Diane Shao is an attending neurologist at Boston Children’s Hospital, an instructor of neurology at Harvard Medical School, and an investor with Legacy Venture Capital.

Transcript of Episode 80: Diane Shao

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard, and today I’m joined by Dr. Diane Shao, an attending neurologist at Boston Children’s Hospital and instructor of neurology at Harvard Medical School. Dr. Shao is a physician scientist whose work focuses on understanding the genetic causes of childhood neurodevelopmental conditions, including how newer single-cell approaches can help answer questions we couldn’t address before. She’s also an investor. Diane, welcome to the show.

Diane Shao: Thank you so much, Grant. And it’s such an honor to be here and also reconnect with you after our long-time friendship from college.

Grant Belgard: Indeed. So what’s been most energizing for you lately in your work?

Diane Shao: Well, something I really like to think about in my work is how to span across disciplines. So, you know, based on your introduction, I think the listeners can understand I do some very fundamental research in genomics. I also really think about how that research applies to patient translation. I actually see the patients, and sometimes I have to weigh not-yet-solid data in making a firm clinical decision.

And then as an investor, thinking about how to assess that landscape. And so all of these, I would say, require a different vision and goal in mind. And so I think a lot about for any given application, what is my vision of translation or patient care or understanding fundamentals, et cetera, and how to generate the data, work with the data, apply that to really further that vision, kind of like big picture goals.

Grant Belgard: What’s a question you’re hearing more often now than you were a few years ago?

Diane Shao: The question I’m hearing more and more: people want to translate. From the time a PhD student starts working in the lab, to postdocs trying to think about what’s next in their careers, wondering about industry versus academia, there is such a strong focus on translation, on making that impact on humans. Versus, I think, 10 years ago, as I was still going through training, it was more that you were doing a PhD, and a lot more students were thinking about an academic path, fundamental biology.

And, you know, I don’t know if this shift is good or bad, but it certainly brings new questions to the table. And then academia in the way-far past had this idea that, you know, going to industry is maybe a sellout, that you’re not asking the interesting questions. But I think there is a growing realization that those questions are also extremely interesting, extremely impactful, and need really, really smart people to be involved.

Grant Belgard: What does a good week look like for you? What kinds of activities make you feel like you made progress?

Diane Shao: Yeah, I do think stepping back and seeing where things have gone. Maybe you get some data that is really uncertain and murky, and then this week it’s, hey, we can draw one small conclusion from that. Or it’s thinking, hey, this data, which is applied to fundamental biology, may be able to be reanalyzed in some small way to give us an insight with an actionable clinical impact. Or thinking about whether this could have implications for which companies we think could be really strong in their market spaces. And so even one small 1% insight, I think, is a good success, because it’s building that 1% in all different directions that ultimately leads to where we’re going.

Grant Belgard: So since you have this kind of dual role of physician scientist, how do you think about differentiating between this is something interesting and this is something actionable, right? Because for your patients in the real world, you have to make decisions now, you can’t wait five, 10 years for something to maybe be firmed up. So how do you approach that?

Diane Shao: Yeah, those are really, I think, questions that the field of, let’s say, genetics is constantly grappling with. So I’ll just give you an example from my clinic from this week, just to make it a little more pertinent.

So this week, I saw a case in my neurogenetics clinic at Boston Children’s Hospital: a two-year-old with a condition called lissencephaly, in which the brain is smooth. Their clinical sequencing came back with a rare variant that’s homozygous, meaning it’s in both the maternally inherited and paternally inherited alleles of a particular gene, and it reports the variant as a variant of uncertain significance, which means that on a clinical lab basis, they cannot provide a diagnosis. And if you search the literature for this gene, there are exactly four patients reported worldwide that have other variants, not this exact variant, in this new gene, but their features are all extremely similar to the given patient.

And so there is a practical matter of how many people need to exist in the world for you to have confidence that the fifth variant, in maternally and paternally inherited alleles of a gene, is now diagnostic. And so that’s a clinical lab question. And on their end, they said, hey, we can’t call this disease causing; we have to give it this variant of uncertain significance label. And then there’s a whole other decision to be made on a clinical level. So when the patient comes to see me, I’m like, okay, the lab is not gonna be able to change your classification, but I can tell you your brain looks almost exactly like the four other brains that are out there. Your kid is manifesting all the symptoms that those other four are. We have a very good sense that we should be worried about the things that those other patients have.

So, for example, those other patients had problems with their eyes, and so I’m saying, hey, we gotta check their eyes, we gotta check their hearing. These are the things I can worry about now. And those are really practical clinical decision-making matters. And then there’s a whole interesting aspect of, well, what do we do in this gray zone? And those are kind of boundary-pushing research questions now. So I then spoke with one of the residents, who was very excited about this potentially new gene and this new presentation that we’re seeing. And so they’re saying, hey, can we write this up? I’m like, yeah, it would be great.

And it would be great if we found 10 other people, so that we now have the statistical, informatic confidence to provide this diagnosis. We could then go back to the clinical lab and change their classification, which would therefore change the classification of other patients that come in with a similar presentation and new variants in that gene, which otherwise wouldn’t be diagnostic. And so from the research realm we can then go back and really make a difference. So I hope that showcases how the different elements fit together: clinical decision-making, gray zones in what is known in a diagnostic laboratory, and then what can be brought back into the research side.

Grant Belgard: Oh, that’s great. What kinds of uncertainty are you most comfortable with and which kinds do you work hardest to reduce?

Diane Shao: Yeah, so we can talk about a couple of different settings here. So maybe in the clinical setting, continuing with this example, the kinds of decisions that we can make are interventional. Does this child need therapy? That’s a pretty certain yes. And I can probably give a good sense of how much therapy they need. Do I need certain screening in certain organ systems based on what I know? The answer is yes. And the risk that I’m going to be wrong, or that even if they don’t totally need it, that screening will be negative. Okay, those are tolerable risks.

But other risks are not so tolerable. For example, if I am wrong about the variant interpretation and the family is doing prenatal genetic testing for embryo selection for their next child, at that level, I may stop short of a confident assessment that this is absolutely the disease-causing gene until I have more of the research statistical evidence that I’m going to gather with my resident, let’s say. And so those are various arenas where I may or may not be able to make a solid decision. And then, stepping back into the research space, the confidence in a research diagnosis is a little more clear.

Because on a research basis, you don’t need to have an assessment of every patient with any variant in that gene that comes in. You just need to have a sense of, is that particular variant causing a functional change? And on a research basis, there are a lot of other modalities that can give us confidence. You can look at the RNA changes. You can see how that variant affects gene function structurally. You can do other types of statistical testing if you enroll patient cohorts, for example, linkage analysis within your cohort, or other types of confidence-building metrics. And so in different settings, there are different ways to really increase confidence in different types of interpretations.

Grant Belgard: How do you think about measuring success when outcomes can take years to show up?

Diane Shao: Yeah, yeah, that’s a great question. You’re kind of thinking about as we, let’s say, push forward our research agenda on a given genetic condition, what the success is.

Grant Belgard: Well, or I guess, or in the case of investment, right?

Diane Shao: Yeah, okay, yeah, no, I think that’s a great question. So why don’t we jump to the investment side for just a moment? So right now, for example, not all rare disease genes are even good current targets for investment, to even embark on starting a company. You’ll see this mentality where people will invest in only a handful, let’s say, of rare diseases that are broadly of interest. And partly it’s because those are the diseases that we know the most about.

There are a lot more research dollars, there’s a lot more research interest. Maybe the patient advocacy groups have been really promoting or focused on getting a therapeutic out, and there’s enough support and interest that finally there’s enough data and understanding that an initial startup can even be conceptualized for investors to be interested. And so not every genetic condition, let’s say, at this moment in time is ready for research translation.

And so pushing that long-term envelope, from the fundamental discovery of a gene, to when it is ready to even be considered a therapeutic target, to actually pushing out the company, to then assessing that market landscape and seeing whether or not it’s worth funding, et cetera, is a really, really long pipeline, as you’re suggesting. And so at any given moment, there are many different people, from investigators pushing their visions and agendas, to the NIH pushing its research vision and agenda, to the business development people pushing their agendas, and the investors pushing theirs.

And I really see that progress for each individual needs to be unique. As an investor, I am really interested in pushing the investments that we make into rare diseases more broadly, but that doesn’t mean every rare disease that’s presented to me with a potential therapeutic target is a good investment to make. And so progress on the investment front means having grasped, let’s say, further the landscape of a particular genetic condition, grasped maybe the market space, what are the FDA regulations?

Those things are progress in an investment space, versus in a research setting, where I may be a little more agnostic to which disease I’m looking at and promoting. And that research space may be about, let’s say, progressing new techniques for gene discovery. It may be figuring out how I can collaborate better with others. And so for people in any given phase of all of these different intersecting sectors, I think progress at the end of the day is very, very individual. And I hope that collectively, across everyone, this will really push the boundaries of treatment for any disorders, you know, rare or common.

Grant Belgard: So across all the domains in which you operate, what is your expectation for the impacts AI will have in the near term, right? Looking out over the next one to two years?

Diane Shao: Yeah, after this conversation, I’d love to hear your thoughts on that too, Grant. But for me, I feel like AI has touched every aspect of both what I do and also how I assess both the research spaces I want to go into as well as investment spaces I am considering. At a high level right now, I would describe my usage of AI as increases in efficiency. So increases in data sourcing, let’s say to help me find relevant papers and subject matter and people and spaces, et cetera. I also feel it as efficiency in terms of helping me integrate across different, let’s say, perspectives.

Right now, I have all this data that describes this biology; now I want to understand how to change this into a clinical risk assessment model, et cetera. These I would still consider efficiency spaces. That being said, I know that the AI field really wants to do new discovery, pushing the envelope, idea creation from AI. I don’t feel that it’s there right now, and I’m not really engaged in tool development to know how close we are to that. I think that pushing efficiency and data interpretation, management, et cetera, is already a really, really large task.

It takes so much off my plate to be able to outsource a lot of those tasks to AI, and to have it also hold that information for me: let’s say, these are the grants I’m writing, and this is all the data that I need you to store for me, and as I re-synthesize into a new grant with a slightly different focus, how can you help me shape that? And so it lessens my work a lot, and I’ve found it tremendously beneficial. And to me, that’s really important, because it means I can leave my mind space open to big-vision problems. I can be the one leading idea generation and then using AI to kind of curate these spaces. So I would say that’s my perspective. I think AI is transformative, but I don’t feel like AI is transformative in the way of taking the place of human creativity and pushing the boundaries of, let’s say, the unknown unknowns in the world.

Grant Belgard: When you’re designing a study, what decisions early on have you found most impact downstream data quality and interpretability?

Diane Shao: Yeah, so the types of study that I design most are in the realm of human genetics. So I do some human gene discovery research, for which I would say the pipelines are probably pretty well described. And then I also do single cell technology development for the purpose of understanding how mutations arise, and in particular, understanding the variation in the DNA within or between individual cells of an individual, what we call somatic mosaicism.

I’m part of a pretty large consortium from the NIH called the Somatic Mosaicism across Human Tissues Network, where, analogous to other large consortium efforts, one of the most notable being the Human Genome Project back in the 2000s, the idea is that by characterizing the full intra-individual variability in genetics, that tool can be extremely useful across many, many different areas of biology and life sciences. And so for single cell technology development, that experimental design really affects the downstream.

So for example, I have been working on understanding human brain development and the single cell copy number landscape. Copy number changes are structural changes in the DNA where whole regions of chromosomes either get amplified or lost. And so detecting structural copy number changes uses fundamentally different techniques than detecting other types of variation, such as single nucleotide variation, where you’re just changing, let’s say, a C to a G, a single nucleotide, and also uses totally different techniques than identifying, let’s say, repeat expansions in single cells, which are also highly mosaic across an individual.

And so the study design choice of tool becomes really critical: is it even possible to analyze my genetic change of interest? And that decision comes down to a matter of, what is the goal of the project? It also has some practical considerations of cost, and some technical considerations of, do I have the informatic support to analyze the type of variation I’m interested in?

Grant Belgard: How do you communicate uncertainty to different audiences, scientists, clinicians, families, leadership?

Diane Shao: That’s a great question. Depending on the audience, I try to do things differently. Most people do a lot better with the things that are certain than the things that are uncertain. I think that, for me, always trying to portray first what we clearly do know can be really, really helpful, to then give a framework for all the things that we still don’t know or are still exploring.

So just to give a concrete example of that: in my work in mosaicism, I really think that there will be totally new possibilities for genomic biomarkers, or different possibilities for precision medicine, that are related to the genetic landscape when we look across all the cells in the body. But of course, we’re still in early days of that on a research basis, so I don’t know if that’s true. But what I do know is true is that we have, for example, in our brain, in our neurons, hundreds of single nucleotide variants per neuron, times six billion neurons in our brain by the time we are born.

So that is a biological fact. And so I can hang on that certainty, share that certainty with people, and then describe what I think we can do with that level of genomic data. And just so the audience understands where I’m going with this thought: think about, for example, the difference between when the Human Genome Project first came to light and they sequenced one human, versus what we can do with genetics now that we’re sequencing hundreds of thousands of humans across different countries and different disease modalities, et cetera. That type of data, while we don’t know yet what will be revealed about ourselves and our tissues and how they all work together from a DNA perspective, I think is also inevitably going to shift how we think about disease and how we think about diagnostic possibilities.

Grant Belgard: What is the current state of the evidence on the impact of mosaicism on clinically relevant phenotypes, and its prevalence?

Diane Shao: Yeah, that’s a great question, Grant. So in certain diseases, it is actually fairly commonplace now to think about mosaicism. There are a number of disorders where it’s pretty common to now look for mosaic genetic causes. For example, in epilepsy, there is a subset of patients who will get surgical removal of the epileptic lesion, and often somatic mutations are found in those lesions. They follow particular biological pathway principles, and so those are pretty clear.

Another realm where it is pretty common to think about now is vascular disorders. So for localized cavernous malformations, there’s a pretty common precedent. Vascular disorders like Sturge-Weber syndrome, which is a capillary malformation over just one part of the body, are now pretty commonplace. So there are certain disorders where it is common to now think about somatic mutations as the primary cause. For other disorders, it is coming to light that even while they can have causes both in the germline and at a mosaic level, many of those individuals actually are mosaics.

Just think about generating a human, how many cell divisions you went through to generate this entire person, from the time an egg and a sperm met each other to the huge five-to-seven-foot human being. There’s just a lot of mosaicism to be had, and that causes disease, and sometimes it looks like a germline presentation even if the person themselves is mixed genetically. And then there’s a whole realm of things that we don’t know, which is maybe a subject of research, things like diseases where certain cell types are lost.

So for example, in Hirschsprung’s disease, a very particular neuronal cell is lost from the gut. And so to me, that’s a place where there is likely a somatic, localized, cell-specific component, but when it’s lost, how do we use genetics to actually determine what it was that was lost to begin with? So a lot of questions, but I hope that answers your question on the areas that we do currently know, which I would say are the tip of the iceberg.

Grant Belgard: So how do you think about future development of precision medicine and so on in a mosaic condition?

Diane Shao: Yeah, so I’m really excited about a couple of different areas. One area is simply leveraging the power of essentially what I would describe as, let’s say, a saturating mutagenesis experiment within an individual. So think about what we’ve learned from human populations. When we sequence hundreds of thousands of people from human populations, we can see, hey, these genes never have a mutation, and the other genes have mutations that are just scattered all across the genome.

And those genes that never have a mutation are actually important to humans in some way. The reason we never see a mutation is usually that either it was embryonic lethal or it affected reproductive fitness in some way. And so a huge part of gene discovery currently is actually to compare to population databases and say, hey, those areas are constrained, this may be an important disease gene. And so similarly, you can imagine that there are lots and lots of disorders which don’t have a strong reproductive fitness component.

Think about cancer, for example, in old age; it’s not necessarily gonna be selected against at the population level. Think about eye conditions like strabismus, where you’re not really gonna have a strong reproductive fitness signal, or even autism; nowadays many people are getting diagnosed when they’ve already lived full lives. And so while some forms of autism will have reproductive fitness constraints, others will not.

And so then the question to me starts to be, well, if we can now get information on genomic constraint, so which areas of the genome are really, really important just in particular cell types, like in neurons or in the lung cells or something like that, is that now new information on what genes are really critical for biology, and does that tell us something about disease? So that’s one area I’m really excited about.

You can also think about that the same way in terms of modulation: how do individual genetics within a cell either drive a phenotype or get selected against by a phenotype? So for example, take a person with a neurodegenerative disorder, where some of their neurons will die with age. Well, it’s not that these neurons die uniformly; some will die earlier, others will die later, and there’s genetic variation behind that.

So can we leverage that to somehow understand what it is genetically about those individual cells that are surviving longer? And I think in the past, the view was just, oh, it’s stochastic. Some are just gonna die sooner, some are gonna die later. And yes, probably it is stochastic, but stochastic doesn’t necessarily mean random. Stochastic is a distribution that is related to some underlying biology. And so these are open questions as to the genetics that drives these stochastic processes. And so these are some of the areas I’m interested in, and I think they have really strong translational potential and therapeutic potential as well. Yeah.

Grant Belgard: When did you first realize you wanted a career at the intersection of medicine and research?

Diane Shao: This is a great question, Grant. I think it actually goes back to our college days. When we were at Rice University, I started working for a PI at Rice, who has now left that university, but he was my first significant research experience. And I realized he was kind of a remarkable person, in that he was a trained astrophysicist who then became an HHMI investigator, which is a very prestigious funded-investigator award, studying slime molds, Dictyostelium.

And then when I was in the lab, he was going into human immunology and had created a compound to treat fibrosis, which is what I was working on in the laboratory; he had one postdoc and, I guess, me, the undergraduate, working on this at the time. And then he turned that into a company that was sold for $1.4 billion, ultimately for trials in fibrosis. And so that mindset, that the fundamentals of science can be leveraged across astrophysics to slime molds, to human immunology, to translational medicine, I think already came to me, maybe by osmosis, as an undergraduate in this space.

And I think that mindset really resonates with me, in that at the core, science is science, and those principles apply no matter what realms you’re looking into. And also, as scientists, or as people engaging with life sciences in any way, as many people do, you don’t have to be limited to the one dimension that you’re trained in; all of these realms are possible. And so, for me, that is what also made me think the MD/PhD career path would be the one for me, because it was one where I got to see the research perspective, the translational perspective, the clinical perspective, and then, in the past couple of years, I have added this investment and market space perspective as well.

And while some people feel that they’re really disparate, and indeed they’re tackling very different problems, if you go to core principles, there’s a commonality.

Grant Belgard: What’s something you intentionally didn’t do or stopped doing that made your path a little easier?

Diane Shao: Oh, that’s interesting. Yeah, it does sound like I’m just accruing things, but to be honest, I drop things constantly. I actually think that’s a critical part of, maybe going back to your question on, what drives progress. Progress means constantly cutting out everything that is not leading to your vision of progress.

So even in, let’s say, my work on single cell technology: I was developing some technology, applying it to a number of different settings, and when I found one that seemed particularly interesting, in terms of its insight into the biology and our ability to really gain traction with the tool, it meant I just dropped everything else, and I don’t have any intention of picking them back up unless they further my vision in a given direction. And so I think there is always this fear, the sunk-cost fear of, oh, I’ve invested all this time, I gotta finish it, it’s gotta be a thing, but I don’t buy into that at all.

So I actually need to constantly drop things along the way and to me, that’s a huge driver of success because it means we focus on our energy, on the things that go toward a vision.

Grant Belgard: If you could go back and give your earlier self one piece of advice, what would it be?

Diane Shao: Oh, probably don’t stress so much. You know, I think that especially as a trainee or a student, it was so easy to worry about how things would unfold and try to control them, but honestly, that was simply because we didn’t know. For example, writing a first paper, you don’t actually know what it takes to even write a paper, or, you know, what are all the steps, what are all the pitfalls, what is everything you wouldn’t even know to think about?

And so I think I spent a lot of time stressing and let’s say strategizing and things like that, but the reality is you just gotta do it and then you’ll learn from it, as in nothing needs to be perfect that first time. And to allow for that, allow for the learning process, you’re gonna get more out of that than trying to make it go a particular way each time.

Grant Belgard: I guess on that note, how do you avoid burnout?

Diane Shao: Drop everything else that you don’t feel like doing. But in some ways, I really believe that burnout is a combination of what we’re holding, all the different aspects that we’re holding, and also how we feel about it. As in, if we’re aligned, like if I’m holding a lot of things that I’m doing and those are the things that get me up in the morning, I’m so excited about them, I can’t wait to discuss them with people and share them with the world, that’s not burnout.

That’s just me holding a lot of things that I like to do. But burnout is having a particular interest but also feeling like I’m obligated to do all these things I don’t wanna do, like I’m supposed to be finishing XYZ thing in this other realm that I’ve sunk all this time and effort into. And so to me, preventing burnout is a pretty continual, every-few-weeks renewal of: what is my actual vision, what is actually driving me, and am I doing the things aligned with that? Because if not, and you keep doing that and having that conflict internally long-term, that’s what burnout is. So yeah, if you are aligned, then that will feel good. Everything will feel like fun and flow.

Grant Belgard: What’s an effective way to build competence across disciplines?

Diane Shao: Competence or confidence?

Grant Belgard: Competence.

Diane Shao: Competence. Oh, that’s a great question, Grant. The biggest thing is to not be afraid, and to not be afraid to not know. There’s no reason you would know. And I find that what people really orient around is a strong vision. So for example, maybe with my own interest in mosaicism and thinking about how that can push our boundaries in precision medicine: I work in child neurology, I work in pediatrics, brain development, et cetera. I’m very interested in maternal influences on childhood brain development, but that’s not a space I know at all.

I don’t know a single OB. I don’t know nearly anything about obstetrics, about what happens, all the actual biological principles of pregnancy, et cetera. And so, as I delve into that space, it is a totally new space for me. But what I do orient around is how important I think understanding this phenomenon is. And if I can share with people my vision and what I know in a very clear way, others are going to wanna help me, and that will build my competence. As in, I don’t go in pretending I know anything about these other spaces where I don’t.

And that’s actually where true collaboration lives. It’s not, we both know everything about the other’s field, it’s knowing exactly what do I know that’s valuable between us and exactly what do you know that’s valuable between us and then putting those together. And competence is not always getting to know everything in a different space. Competence is sometimes being able to know where the gaps are and know how to ask questions and get help.

Grant Belgard: What mistakes do you see smart people make when they try to do interdisciplinary work?

Diane Shao: Oh, that’s a really good question, Grant. You’re full of good questions. So one thing I think is really important to recognize is that there’s always a difference in culture, no matter what. Research culture, medical culture; even between neurology research and obstetrical research, there’s a difference in culture. And if you’re not recognizing and respecting those cultures, it’s just not going to work out. So for example, in the biological space, samples are really critical. I work with post-mortem tissues; those are really important. And PhD scientists are also really interested in studying human tissues.

But why do PhD scientists have a lot of trouble integrating with MDs? It’s because they speak different languages, right? The way they talk about the samples is different. The PhDs are talking about the samples as a biological utility. The MDs are talking about them like the boy they took care of for 10 years who then passed away. So understanding that culture is critical. If you go to the MD and say, hey, I’m looking for samples for X, they might say, oh, okay, I have some. And then you’re going to say something like, okay, well, I want to study proteins A and B and how they interact, and blah, blah, blah.

The MD is not going to connect with that, right? So framing it as, well, protein A could be a therapeutic if it interacts with protein B in this way, is a much more viable start. It’s also easy to start thinking of the doctor as just the one who’s going to be retrieving the sample, et cetera. And the minute you reduce another person’s role to a task-oriented sample-retrieval role, you’ve totally lost the collaborative interdisciplinary engagement. So I think about these things a lot, and I encounter them constantly.

For example, even in my own case: what do I do as a neurologist who wants to think about obstetrical tissue? Well, when I started, I was used to paying $0 for my tissue, because I get it from biobanks, from patient groups that are really trying to get people to utilize the tissue for studies, et cetera. But obstetrical tissues are different. They pay healthy pregnant women a lot of money to collect samples and be part of studies. And so even engaging on costs: what is value?

I was running the risk of devaluing all of their tissues simply because I’m used to paying $0 for mine. These are all cultural nuances between disciplines, the same way that when you go to a different country, you really have to consider the cultural nuances. Understanding them is non-trivial. I do rely on saying things like, hey, I don’t know the typical way things are done in your field; this is what I’m used to. Having that humility upfront allows people to share their culture with you. And you have to be open to that culture, whatever it is, and not just judge it as unreasonable or too hard because it’s not the culture you’re used to.

Grant Belgard: What’s a good habit you find most strongly compounds over time?

Diane Shao: Oh, good habits. I think this may go back to your burnout question: finding the things that are going to make you feel passionate and excited every day. And they’re not always scientific questions. For example, a good habit I have is taking a break at 2:30 PM every day. That break could be taking my 2:30 meeting and asking the person if they’d rather take a walk and have the discussion instead of sitting at a Zoom screen, or it could be meditating for 10 minutes by myself in a quiet space.

And I mean that not to say that everyone needs to take a break at 2:30, but just that if it’s something you need and it will make you feel good about your day, it’s something you need to do for yourself. Similarly, if there’s a particular question you need to answer to feel excited and engaged in science, you just need to go down that route, regardless of whether it’s exactly the right time or you have 10 other things to finish first, because doing those things for yourself is really what’s going to make everything worthwhile.

Grant Belgard: And where can our listeners follow your various threads of work?

Diane Shao: Oh, that’s a wonderful question. I am in the middle of building my own lab website, but for now you can find me through Boston Children’s Hospital. I have a research page there. I’ll provide the link for your notes. And my venture capital firm is at LegacyVentureCapital.com.

Grant Belgard: Well, Diane, thank you so much for joining us. It’s been lovely.

Diane Shao: Thank you so much, Grant, so lovely to be here.

The Bioinformatics CRO Podcast

Episode 79 with Yang Li

Yang Li, an Associate Professor at the University of Chicago, discusses applying computational genomics to the intersection of genetics, gene regulation, and disease, as well as the impact of new AI tools.


Yang Li

Yang Li is an Associate Professor at the University of Chicago, where his lab investigates the genetics and genomics of RNA splicing.

Transcript of Episode 79: Yang Li

Disclaimer: Transcripts are automated and may contain errors.

Intro: We are conducting our first listener survey. If you enjoy the podcast, please follow the link in the description to a 60-second multiple choice survey. This helps us understand what kind of guests you’re most interested in and keep the podcast sustainable. The survey is anonymous, but you can choose to provide your email to receive a summary of the aggregate results after the survey period is over. Go take the survey at bioinformaticscro.com/survey.

Grant Belgard: Welcome back to the Bioinformatics CRO podcast. I’m your host, Grant Belgard. Today, we’re joined by Professor Yang Li from the University of Chicago, a computational genomics researcher working at the intersection of genetics, gene regulation and disease. Yang, welcome.

Yang Li: Hi, Grant. Nice to see you.

Grant Belgard: Good to see you again. So what’s been energizing you most recently in your work, scientifically or operationally?

Yang Li: Well, since the New Year, I’ve been playing a lot with Claude. I mean, everyone’s playing with Claude, I think. And both in terms of the science that he can help me produce and also, you know, just managing my schedule, it has been a game changer. I’m still exploring what he can do. But yeah, that’s basically what I’ve been thinking about most of the time.

Grant Belgard: What have you put into practice so far? Like what’s kind of, quote unquote, in production?

Yang Li: Yeah, we’ve been writing the revisions for one of our papers, and I’ve been using it extensively, both to help me write some of the responses, making them a little bit friendlier, and also to rewrite some of my old code and check for bugs and things like that. And it’s amazing. The number of things I can do in an hour far exceeds what I could do in a day at this point. Things like producing a plot in a slightly different way. As you know, it’s very difficult to rerun your code, especially if it’s not best practice in the software engineering sense. I’m mostly self-trained in programming, so the comments are not necessarily the best. But Claude helps me comment, it helps me name my variables, right?

Or at least improve the naming of my variables, and then produce plots very, very fast, right? As you know, a lot of the way we check that code is doing its job is to visualize the underlying data in many different ways. And Claude helps me do that. As soon as I have an idea, I can just ask it to do it. Then I see the visualization, and sometimes I find errors, but more often than not it gives me exactly what I expect.

Grant Belgard: When someone asks you what you do, what’s your favorite way to describe it without using jargon?

Yang Li: Well, lately I’ve been trying to steer away from that because I’ve been doing things that are pretty technical. But in just a few sentences, I would describe it as trying to understand how proteins are expressed. There are many different ways by which we can control the expression of these proteins, and I focus on a regulatory mechanism called RNA splicing. It’s highly regulated, and I want to understand what its function is in different systems and how to modulate it using drugs.

Grant Belgard: What makes this the right time for that?

Yang Li: Well, the reason I chose this, and I’ve stuck with it ever since grad school, really, is that almost nobody talks about genes in terms of how many proteins each gene can produce. And it was clear, even in the things I was researching in grad school, which, as you might remember, was the cichlids, that every single gene produces many proteins, many isoforms. And to me, it felt like this had to mean something, right? My perspective has changed slightly since then. But because of my earlier work, and the fact that almost no one was really researching it, I became really interested in that topic.

Grant Belgard: So what is your current perspective on splicing?

Yang Li: Well, when you read the textbook, it basically tells us that every single human gene can produce many different proteins, many different protein isoforms. These are isoforms that are essentially the same, but with slight differences. It could be one protein domain that is included in one isoform and excluded in another. And often, in textbooks or in the literature, this is described as something intentional, as in the two versions of the protein have very different functions. One performs function A and the other performs function B, and both are very important for the survival or proper function of the cell or the organism.

But what I think now is that the vast majority, and by vast majority I mean really over 90%, of these different isoforms exist not to have a different function, but as a regulatory sort of switch. Again, to fine-tune, very similar to gene expression levels, right? When you regulate gene expression through enhancers and promoters, you’re not changing the final output or the function of the gene. You’re just changing the activity by a little bit. And I think splicing, most of the time, is doing exactly the same thing. The regulatory input is a little bit different, but the outcome is very similar.

So splicing can change the protein and give it a different function, but those are really the minority of cases rather than the majority, as is taught in the literature or the textbooks.

Grant Belgard: How do you decide if a problem is method worthy or just something you’ll apply existing tools and move quickly on?

Yang Li: So do you mean in terms of developing a tool, or just using a tool to solve a problem? Right. It takes me a long time to convince myself that I need to develop a method for something, so in general I try to use methods that exist already, or previous methods that I or my lab have developed. In some very rare cases, I think, hey, we need to develop a method, because there’s really something that hasn’t been done, and we really need to do it, and we can do it. All of these checkboxes have to be checked for me to move on to method development.

And I should say that I don’t think my lab is particularly good at developing methods, but we’re pretty good at identifying problems that can be solved by an older method that wasn’t necessarily developed for that specific question.

Grant Belgard: What are the most common bottlenecks you run into today? Is it a matter of data, compute, annotations, study design, interpretation, something else?

Yang Li: Yeah, that’s a pretty good question. I would say for me, it’s my time, and getting a sense of what to focus on when there are just so many people and so many projects that need my attention. One example I often talk about, which a friend told me, is context-switching time. I’ve heard through the grapevine that Terence Tao, the famous mathematician, is extremely good at context switching; he can basically switch from one problem to the next within seconds. Others like me need more time to context switch. And when you become faculty, your schedule gets spread into blocks of one hour, and I find it pretty hard to switch context from one hour to the next.

So I try to block more time, but then there are fewer of those longer blocks. And I think that’s somewhat of a bottleneck for me: finding a longer block of time so I can context switch and do deep work instead of just trivial work in order to make progress. A lot of the time it feels like I’m just trying to keep afloat, and that doesn’t give me enough time to do the deep work, which is the thing I think I’m good at and am happiest doing.

Grant Belgard: Have you found using tools like Claude impacts that in any way?

Yang Li: Yeah, yeah. Previously, I had a lot of questions about a data set or some topic, and it just never felt like I had the time to get to them. With Claude, all of a sudden, things that would take a few hours just take a few minutes, because it remembers the context in which you’re asking, and then it just does it. So for example, plotting a figure about a data set: it remembers where the raw data file was. If I came back to a specific project after a week, it would have taken me maybe 10 or 20 minutes just to recall which file I was using and what exactly I was doing.

Now I can ask Claude to summarize what I was doing, or just scroll up a little and ask questions, and he gives me the answer within a few minutes. That gets me back on track much more rapidly than I would by looking at my own code and browsing and recalling. So that has been extremely useful. I think Claude might also be able to help me manage better. I haven’t implemented this, but I’ve joked around that I would have my trainees talk to an agent, or Claude, and then Claude would summarize all of their updates for me, and I would only have to read through the summarized version.

Grant Belgard: So you could be like the nurse at a doctor’s appointment before you see the doctor, right?

Yang Li: Yeah, exactly. And then five minutes before I meet them, I would review that and think a little bit, just to get into context. And it would be, I think, a lot more productive, right? I often tell my students to prepare some slides or notes before I meet with them, because it helps me get into context. Oftentimes I hear about their problem on the spot, when they come to me during the half-hour or hour period, and then I have to think about it. And when I think about it, it’s not really awkward, but there’s still some pressure to answer, right? I can’t just think in silence for five minutes, or even two minutes; that feels a little bit long, let alone 10 or 15 minutes.

But oftentimes that’s the time you need to bring yourself back into context, to recall all the different information and have a very effective conversation. The reality, though, is that it’s also hard on them to come up with a few bullet points every time. Or at least, I don’t know if it’s hard for them, but they don’t do it, essentially. And this would, I think, speed up our meetings tremendously, or at least make them extremely productive, because everyone’s on the same page.

Grant Belgard: What’s a recent result or direction that surprised you and how did you respond to the surprise?

Yang Li: I can’t say there’s a recent direction that really surprised me. I plan my projects long in advance, and I can see points of failure pretty early on. Oftentimes the project or direction does indeed fail, but then I often have a backup plan. So I don’t think there’s any direction that surprised me, and unfortunately there hasn’t been a sudden discovery that changed everything. So I’m either a very good planner or just not super lucky in terms of unexpected findings.

Grant Belgard: How do you think about reproducibility in practice? What’s good enough versus gold standard?

Yang Li: Yeah, I think there’s a lot that can be improved in terms of reproducibility. Unfortunately, I think there is some amount of pressure to understand the system, the biology. I mean, there’s a speed component, right? You want to dig into the biology more rapidly, and oftentimes the solution is to do what you know best. We’re not trained as software engineers; we don’t do these kinds of unit tests. And so there are reproducibility issues, and there are bugs, right? I developed LeafCutter many years ago and I still find bugs in it. In that sense, these things can be improved drastically. On the flip side, I don’t think any of these bugs or issues affect our results, our biological interpretation of things.

Very rarely would a very important result be affected by these. It does happen that a very minor result, or the interpretation of a minor result, is affected. To prevent these bugs, or this lack of reproducible findings, we essentially try to poke holes in what we call our major discoveries: the things that would, for example, break a paper, the main finding we think we made. We look at many different data sets, and we design tests that would essentially break the finding one way or another. So we have very orthogonal ways of trying to confirm a result. That includes, for example, looking at a completely different data set, or deriving some corollary: if this were true, then this other thing must be true.

And then we do more tests on whether this downstream result holds. We do a lot of this type of analysis, and at some point everything makes sense, and if something doesn’t make sense, then we have to explain it. Right. So I think this is the scientific process. I’m not going to claim it’s foolproof, as in I will never have anything that is later falsified, but from my track record, I think it has worked so far.

Grant Belgard: How do you decide what to delegate and what you personally stay close to?

Yang Li: Right. So I try to delegate as much as possible, anything that I think a trainee or a collaborator can do. But I obviously weigh by importance. For the things that are most important, even though I also try to delegate those, depending on whether I think they can do it, I pay attention to the outcome. For minor things, I just trust them to do the correct thing. And sometimes, you know, we have to backtrack when we find a problem later on.

Grant Belgard: How do you help your trainees develop taste, knowing what to do and what’s not worth the effort?

Yang Li: Yeah, that’s a very good question. It’s a little bit like asking me how I would teach someone to be creative. I hate to take this fixed-mindset view, but I think it’s something that’s very difficult to teach, right? We can encourage creativity, but it has a lot to do with personality. I’ve noticed some personality types, not just among my trainees but in general, that are, I would say, not as creative or don’t have as much taste, and rely more on other indicators. For example, if a paper is published in Nature, or in a high-impact-factor journal, they rely on that as a measure of what’s exciting and what’s good.

Others don’t rely on this; they have an internal perception of what’s exciting and what’s not. One way you can help is to read a lot. I always tell my trainees to read a lot. I don’t know if you remember, but in grad school, I tried to read at least one paper a day, and I would go through my RSS feed with hundreds of abstracts every day. Now it’s getting much harder, because there are just a lot more papers being published. But back then I had all the journals I generally read in an RSS feed, and I would go through all of the titles and abstracts every morning and read at least one paper that interested me.

And I think that helped a lot with creativity. I mean, creativity is not just whether you can come up with new things, right? You can come up with things that are new to you, but someone might have already done them. So you also have to know what’s out there. And taste, I think, is somewhat similar to creativity. If you like a paper or a project just because it sounds good to you, and you don’t know that much, then maybe someone wouldn’t call that good taste, right? So I think these are linked together.

The more you know, the more likely you are to have good taste. You have to have your own sense of what’s worthwhile and valuable, and not just rely on some external signal, on what someone tells you. Obviously, at some point you have to rely on someone, right? Someone you respect, someone you know has good taste; if they like something, you can maybe up-weight it a little. But at the end of the day, you need to build your own, you know, scoring function.

Grant Belgard: So let’s talk about your own career track. In your own words, how did you get here?

Yang Li: That’s a very interesting question. I did mention that I like to plan ahead in my research projects, but for my trajectory, I’m reminded of a quote by Bertrand Russell. I don’t remember it exactly, but it goes something like, my life has been like great winds that blow me here and there. And I really feel that way: there are a few periods of my life that changed me a lot, and those depended a lot on luck, or maybe we shouldn’t call it luck, just circumstances that I viewed favorably and therefore called luck. It could also have been misfortune if things hadn’t ended up well. And I think those few periods are what got me here.

The first period was during the last few years after high school, right before college. I grew up in Montreal, in Quebec, where there’s this period called CEGEP, which is two years after high school but before university. At that point, I met some very good friends who introduced me to coding, but also hacker culture, not in the black-hat sense, more just, you know, coding hackers. And I remember that I also became very interested in philosophy, asking bigger questions such as what is the meaning of life, obviously, what is consciousness, and all that. But really asking the question of, you know, what am I doing here, right? And at that point, I started to code and to read essays.

I think many of us have been influenced by Paul Graham’s essays about, you know, being a little more intentional about who your friends are and who you hang out with. And I think that really started this trajectory: being very humble, always trying to look for people who are smarter and more knowledgeable than me. Without that perspective, I don’t think I would be where I am right now. I mean, grades don’t really matter, but my grades were very average, and I wasn’t particularly interested in anything other than video games, obviously. But at that point, something switched in me. I became a lot more intentional about how I used my time and who I was friends with, who I hung out with. And I think that worked out, right?

Immediately during university, I identified people who were really excited about their work, excited about their craft. It didn’t have to be anything in particular. At that point, I majored in mathematics and computer science, and I met very good friends, again, who were really excited about the work they did. They were really passionate about something, right? You can be passionate about video games, which I was, but it’s very easy to be passionate about video games. It’s a lot harder to be passionate about something that’s very difficult and that no one cares about. And so I was looking for this sort of phenotype of people who really cared about mathematics, something that I enjoyed but didn’t particularly care about.

But then, you know, I started to model their intensity and apply it to the things I cared about, that passion, right? Just recognizing that really useful things like mathematics or coding, things that are hard and tedious, can be something you truly enjoy was quite new to me. And then, as you know, I got really interested, still through my philosophical angle, in aging, the aging process. So I followed these passions, and through all of that experience, changing from mathematics and computer science to biology, my main goal was to follow what I was passionate about and to do things as rigorously as I could.

And that essentially led me to where I am right now, which is not studying what I initially set out to study, which was aging. There are plenty of reasons for that. But essentially, it was following a passion that not everyone cares about, but that I care about, and applying the same fundamental values to these problems.

Grant Belgard: What’s something you learned early on that still pays dividends today?

Yang Li: Early on, as in how early on? One thing I mention a lot to people is doing a degree in mathematics. I don’t think you have to do a degree in mathematics to get this, but that’s where I got it: the sense that what you don’t understand is actually very important. It’s definitely not a muscle, right? But it does feel like a muscle you can use to spot gaps in logic. I think extremely good scientists have this muscle, but many, many trainees, and many faculty and leaders, I would say, still struggle with gaps in logic. It’s very easy to make these logical jumps.

And that impacts a lot of things. It impacts the science, but also the writing a lot; that’s what I observe. It’s the aspect that’s most apparent to me when I see a gap in logic in someone’s writing. First of all, you assume that everyone knows what you know, but you also assume that the next sentence follows from the previous one. And I often see a gap in reasoning that I think is pretty hard to fix, right? Essentially, you have to ask, well, why does it follow? And a student might say, well, it follows because it’s obvious, but it’s actually not obvious. But how do you know that something is not obvious?

How do you distinguish something that’s not obvious from something that’s obvious?

Grant Belgard: Especially when you’re talking about biology, for something to be obvious, there’s often a stack of unstated assumptions.

Yang Li: Exactly, exactly. And in mathematics, when you do a lot of proofs, you’re trained to always question every single step. Doing this has really taught me, or at least made me, extremely careful about these steps. And in biology especially, I found it extremely useful, because sometimes you just can’t overcome this, right? You cannot prove every single thing. In fact, the first few years when I transitioned from mathematics to biology were extremely difficult, because I would get hung up on the simplest things, right?

But then I found utility in this, because you can stash it, right? You can stash a gap in logic. You notice it, and then you have to convince yourself: well, it’s true that I can’t prove it, but it’s probably right in this system. And then you can move forward. At the same time, you understand what the gap is. And by understanding the gap, or the condition that something works only in one system, I think you start to understand the system a little bit better, and you start to understand how information that supposedly applies only to that specific system might also apply to another system. So it helps me transfer some of my understanding from one paper, for example, to another paper.

So, how might findings be similar across papers, across cell types, or across diseases, and transfer to another cell type or another disease? I think this helped me a lot in my thinking and in how I transfer knowledge across diseases, cell types, or anything, really. And I guess this was a little unexpected. I use very little mathematics right now, very little of what I actually learned during undergrad, but this has obviously stuck with me.

Grant Belgard: What are some things you had to unlearn when transitioning between stages, you know, student to postdoc to faculty?

Yang Li: Right. I wouldn’t say unlearn, but change, definitely very much so. When you’re a student and a postdoc, you’re very self-centered. You drive your project forward, and there’s some sense that the truth is the only thing that matters, the results are the only thing that matters. There’s much less of a personal touch. There’s some collaboration, obviously, but you’re really focused on your own project; at least that was my experience. And whatever I did was focused on obtaining the truth, on understanding the way things work. What I had to unlearn, I would say, is to be less obsessed with the truth and with how things should be done, versus how they will be done by someone else.

It’s hard to force your way of doing things, even if you still believe it’s correct, onto someone else who might not do things the same way as you. And as you know, we’re not taught to manage as faculty. This is something you learn because you see students struggle, or you see other people struggle, and then you notice, hey, this is not productive. You cannot tell someone to work the same way you did, even if you strongly believe that this is how you would do things, and even if you could prove it’s the more efficient or better way. So this is something I think about.

Everyone’s different and some personalities are more likely to accept some ways of doing things and some other personalities are unlikely to perform well if you tell them to do it in a certain way.

Grant Belgard: What’s something a great mentor did for you that you try to replicate for others?

Yang Li: Well, I think all my mentors have been extremely kind. At no point did I feel like a mentor was just using me in some way to get a paper out, for example. And I’m mentioning this because I have witnessed mentors who essentially treat trainees, even if they think it’s justified, as a means to an end. So I always think of the trainee as a person who is here to grow in terms of their ability and their knowledge. That’s something that I’m very careful about. I never try to have a student do something that is not beneficial to them.

Grant Belgard: Given the rapid changes in the field driven by AI, what advice do you typically give to early career bioinformaticians in navigating that?

Yang Li: Yeah, I think that’s a great question. And it really depends on your own personality. I think one aspect is to understand yourself, what kind of personality you have. And I truly believe that personality matters a lot. Some might say, oh, well, you have to change your personality, but I find that extremely hard. There are some personality traits that I know I should change, or that if I changed, I would be happier or even more productive. But it’s very difficult to change. And so the way that I try to guide my trainees, for example, is to first get a very broad sense of what type of personality that person has. I think it was Ray Dalio, in his book, who used to be the CEO of Bridgewater, who developed these tests.

I don’t exactly remember the specifics, but I think what he did was, for every member of the company, he had a test that would classify them by what they’re good at and what their personality is. And one thing that I keep thinking about is doers and thinkers. Oftentimes you can characterize someone as a doer or a thinker. The thinkers are those who like to think and are less inclined to do. And the doers have a higher affinity to just start doing things before even thinking. So one thing that’s helpful, and it’s at least partly related to personality, is to figure out if you’re more of a thinker or more of a doer. And maybe you’re both, and that’s great.

But then figuring out these sorts of traits will help you determine what you should focus on. One piece of advice: if you think you’re a doer, maybe you should team up with a thinker, and vice versa. If you’re a thinker, maybe you should team up with a doer, and be very intentional about this. Don’t let chance decide. If you have two doers, odds are you’re just going to build a lot of things, and it might not be very useful or very good. If you have two thinkers, nothing gets done. And with personality more broadly, it’s the same. Sometimes I also think about diversity. Lots of people say, oh, diversity is good, is good, is good.

But when pressed about exactly how diversity is good, they give a blanket statement like, oh, well, with diversity you have different ways of thinking about things. I agree with that, but I think you need a little bit more to really build a good diverse team. There’s this diversity in thinkers and doers, for example, and there are other personality traits I forgot to mention. There are also traits like being very pessimistic. I would classify myself as a very pessimistic person, though I’m trying to improve that, obviously. And then there are some people who are extremely optimistic. You have an idea, they’re on board, and they’re like, OK, yeah, it could work because X, Y, Z.

I’m more of the “but it won’t work because X, Y, Z” type. But you need both, I think, on a team. If everyone is optimistic, this is going to be an echo chamber of, well, yeah, it’s going to work, and everyone is just going to be hyped up. Everyone feels great, but then you’re not going to have a good product, because you don’t consider what the negatives are. And if you’re all pessimistic, like me, if you have a room full of me, then nothing’s going to work. So I think you need to figure out who you are and then team up with people who are diverse in that sense. And these are just two axes of variation.

There’s a lot more axes of variation that I think that you can optimize to build a very strong team.

Grant Belgard: What separates great collaboration partners from frustrating ones in CompBio projects?

Yang Li: As in, two CompBio teams, or one biological and one computational?

Grant Belgard: Yeah, probably a computational and a wet lab.

Yang Li: And a wet lab, I see. I think there needs to be respect for each other’s craft. If one side is using the other without any amount of respect, I mean, that seems obvious, but it’s actually not. And it can go both ways, and often does. As computational people, we can treat the experimentalists as just a pipette, like, oh, you’re going to be replaced by robots soon. In the same way, the wet lab experimentalists cannot treat us as, you know, Claude Code. And in fact, I see it happen. Not every day, but I know who these people are. So it’s a lot more prevalent than you might expect, I think.

Also, a little bit of effort in understanding the other side is, I think, the bare minimum, while obviously accepting the fact that you’re not going to be as good as your experimental or dry lab counterpart. Another thing that is extremely important is that you have to enjoy working with them. Sometimes it can be tempting to work with someone who’s just very good, and you just need the resource. But personally, I just don’t think it’s worth it if you really don’t enjoy working with someone. The other thing is energy level. I think it’s very important to have the same amount of energy. If one of you is just a lot more excited, you end up being really annoyed that the other one is slacking off, and vice versa.

They’re probably going to be annoyed at you, or you’re going to find them pushy, if you don’t have the same energy level. So I think those are the main things. I’ve had very good collaborators and pretty bad ones, and these three aspects always separate them perfectly.

Grant Belgard: What frameworks do you use when helping trainees decide on career paths?

Yang Li: Yeah, I think it also has to do with personality. Anyone who’s very curious, very open minded, and maybe very idealistic, I would try to push towards academia. Anyone who is very practical, and I don’t mean to say that one is better than the other, anyone who is very practical and has a very good sense of what they want in life and doesn’t want to deviate too much, I would steer towards industry. And I don’t tell them that. Everyone who goes through my lab, I tell them that I think they could become good academics. But the fact of the matter is that academia right now is not super welcoming, in the sense that it’s just very difficult to get a tenure track position.

That being said, there are a lot of positions that are not tenure track. And if you’re OK with that, and I think you should totally be OK with that, there are a lot of possibilities, and I would also encourage that. But obviously, if you’re a very creative, very idealistic kind of person and you really want to change the world, or research something that you’re deeply passionate about that not many people might care about, then I still think that academia, at least on the tenure track, having your own lab, is the right path.

Grant Belgard: Final question. If you could give just one piece of advice to your earlier self, what would it be and why?

Yang Li: Other than buy Bitcoin? Yeah, I think communication is very important to focus on, and being more open minded about what to improve. When I was young, I was really into doing hard things and technical things. You could call it hard skills versus soft skills. And I didn’t think at all about improving soft skills, or interpersonal skills. I would give that advice to my past self, even though I strongly suspect that I wouldn’t listen to myself. Interpersonal skills are, I think, more and more important, especially with AI, which I think can replace a lot of the hard skills, to be honest.

And the ones who I can see succeeding a lot more than I will are the ones who have the soft skills and know how to get AI to help them with the hard skills.

Grant Belgard: Well, Yang, this has been fantastic. Thank you so much for joining us.

Yang Li: Great. Thanks for having me, Grant.

The Bioinformatics CRO Podcast

Episode 78 with Sun-Gou Ji

Dr. Sun-Gou Ji, statistical geneticist and VP of Computational Genomics at BridgeBio, discusses his career in genetics and genomics and BridgeBio’s approach to target validation and novel target discovery.


Sun-Gou Ji

Sun-Gou Ji is VP of Computational Genomics at BridgeBio, supporting target validation and novel target discovery for drug development. 

Transcript of Episode 78: Sun-Gou Ji

Disclaimer: Transcript is automated and may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard, and joining us today is Sun-Gou Ji. Sun-Gou is a statistical geneticist at BridgeBio, where he drives scientific decision making based on human genetics. As VP of Computational Genomics, he leads a team of statistical geneticists and data engineers focused on target validation and novel target discovery. Previously, he was at Seven Bridges, where he collaborated with the Million Veteran Program to validate and uncover genetic factors influencing human traits in a highly diverse and admixed population. Welcome to the show.

Sun-Gou Ji: Thanks, Grant, for having me.

Grant Belgard: So how did you first become interested in genetics and drug development and what drew you into the field?

Sun-Gou Ji: Sure, sure. I’m sure everyone has this time when you think about what impact you want to make in this world while living here. And the type of lasting impact that really struck me was that a drug I could develop could help people even when I’m gone. It would stick around and still help people in perpetuity. So once I thought about those things, I was lucky enough to then do a Ph.D. at the Sanger Institute at a time when human genetics was showing a pretty meaningful impact on the success of drug programs. And here I am now. I feel like I just happened to be at the right place at the right time, things aligned, and I’m really happy to be contributing to something that will outlive me.

Grant Belgard: So the Sanger Institute, of course, is an epicenter of human genetics. How did your Ph.D. work there shape the way you think about it?

Sun-Gou Ji: I would say it basically shaped who I am now. If I had to choose one time in the past to go back to, it would be doing my Ph.D. at the Sanger, which I think is pretty rare for people that have done Ph.D.s. Its history started with sequencing the human genome, and for density of world-class human geneticists, there’s just no comparison out there. The scientific rigor and the collaborativeness I learned at Sanger, especially, are still the basis of how I operate today. And I would really strongly recommend it to anyone considering this field.

Sun-Gou Ji: And many of my friends remember the time at Sanger as the best time of our lives. Not only the scientific achievements, you know, people at Sanger do publish a lot, in pretty high impact journals, but also the diverse culture and its inclusiveness. Being part of the Cambridge culture is a very exceptional experience.

Grant Belgard: What did you take away from your time at Seven Bridges, especially working on the Million Veteran program and the Graph Genome Project?

Sun-Gou Ji: Yeah, sure, sure. I joined Seven Bridges around 2015, 2016. At that time, data science and big data were the hype, you know, before AI. And back then, it actually took ages to perform imputation on HPC clusters or run a GWAS using LMMs. I’m sure you remember that time too, Grant. Being able to run large compute jobs and know some stats qualified me as a data scientist. Seven Bridges had occupied this niche, because it was almost impossible to orchestrate complicated genomic workflows on AWS directly. And although everyone knew things would move from HPCs to the cloud, there was a time when people were scared of having their precious data in the cloud. I was in the R&D team working on the Graph Genome Project, and I met the smartest people I’ve ever met there.

Sun-Gou Ji: It was very different from the crew at Sanger, a completely different group of folks: PhDs in quantum physics and mathematics, and software engineers, some with 20-plus years of experience. And this focused team of a dozen-plus worked on a single project to create this graph genome ecosystem. If you know, the name Seven Bridges comes from the seven bridges of Königsberg, the problem solved by Euler that laid the foundation of graph theory, and from that you could understand what Seven Bridges was trying to do. They were trying to use graph genomes to actually revolutionize how we do genomic analysis. My experience there really opened my eyes to the difference between academia and industry, because usually when you have this type of project, you have one PhD student or postdoc working on it.
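The Königsberg problem Sun-Gou alludes to has a tidy computable statement: a walk crossing every bridge exactly once (an Eulerian walk) exists only if the graph is connected and has zero or two odd-degree vertices. As a small illustrative sketch (the bridge list below is the classic four-landmass, seven-bridge layout, labeled A–D here for convenience; it has nothing to do with Seven Bridges’ actual software):

```python
from collections import Counter

# The seven bridges of Königsberg: four land masses (A-D) joined by seven bridges.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:       # each bridge adds one to the degree of both endpoints
    degree[u] += 1
    degree[v] += 1

odd_vertices = [n for n in degree if degree[n] % 2 == 1]

# Euler's criterion: a walk using every edge once needs 0 or 2 odd-degree vertices.
walk_exists = len(odd_vertices) in (0, 2)
print(dict(degree), walk_exists)  # all four vertices have odd degree -> no walk
```

Since all four land masses have odd degree, no such walk exists, which is exactly what Euler proved.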

Sun-Gou Ji: Whereas here you had dozens of people with vast experience working on a single project to get one thing done. I mainly focused on the structural variant aspect of the project, which led to a nice paper back in 2018 or so. I believe it’s still part of the offering of Velsera, which absorbed Seven Bridges. It’s really great to see that these graph genome and pangenome approaches are picking up more recently. Actually, I feel this really shows how difficult it is to commercialize a completely novel bioinformatics tool, even one that could revolutionize the whole field. As for the MVP work, I was also working with many others at the VA to QC the initial tranche of the genotyping data and imputation. From all these experiences at Seven Bridges, I really learned a lot, especially being the only human geneticist in the group.

Sun-Gou Ji: It took me some time to understand that Sanger was sort of a bubble, right? Where everyone understands human genetics. But here I quickly had to get comfortable basically defending the whole field of human genetics in front of mathematicians and physicists and engineers, who would listen to you about how variant calling is done, alignment is done, association testing is done, and say, oh, this is irrational, this is inefficient, these are very old statistical tools, there are these novel things you could use, why are you using this? But actually, having to defend the field in front of these really smart people helped me explain concepts of human genetics from first principles.

Sun-Gou Ji: Why the human genetics field uses these kinds of older statistical techniques, a lot of the time, rather than very complicated non-linear models. And explaining from first principles the reasons human genetics is done the way it is turned out to be very useful at BridgeBio.

Grant Belgard: What do you consider to be the most impactful outcomes of the Million Veteran Program?

Sun-Gou Ji: Well, the data itself. With the Million Veteran Program, it’s very amazing that the veterans are actually contributing their health information and genomic information for research to advance veteran care. And this type of data, reaching toward a million people in a single hospital system, still has no comparison. The Million Veteran data is also really special in how the ancestry proportions are distributed within it: a much higher proportion of African Americans as well as Hispanic Americans compared to the other databases, which have larger European ancestry. So the type of analysis and knowledge that’s coming out of the MVP data is very orthogonal to what we get from other databases or biobanks.

Grant Belgard: So what led you to then join BridgeBio?

Sun-Gou Ji: Yeah, so honestly there was, of course, a lot of serendipity. Once I was working on these bioinformatics tools and QCing data for others to use, the only thing I was sure I wanted to do was move closer to patient impact through developing drugs. Like I said at the beginning, I felt I was ready to move closer to actually making a drug, a drug that I feel like I made, or contributed significantly to making. My choices back then were the big pharmas. Thanks to the Nelson et al. paper from GSK and the King et al. paper from AbbVie, many pharma companies were building huge genomics teams, so there were a lot of choices from a lot of these places. But looking back and trying to justify my choice to join BridgeBio instead, it was definitely the people I met during the interview.

Sun-Gou Ji: I was really impressed by the team. They were super smart in very different ways. A lot of people from Seven Bridges were really scientifically smart, very academic smart, whereas the BridgeBio folks felt a bit more street smart: they would just get things done somehow, without dwelling too much on the details, but going just deep enough to actually get things done in a very efficient way. And of course, the other part was the opportunity to be interviewed by world experts like Richard Scheller, as well as getting personal call-ups from the CEO. You wouldn’t really get that joining one of the big pharmas. It felt like these people could really do something. And this hub-and-spoke model for rare disease really resonated with me too.

Grant Belgard: So speaking of the hub-and-spoke model, that’s pretty uncommon in biotech. Can you explain how it works and why it’s effective in rare disease drug development?

Sun-Gou Ji: Yeah, so I’ll start with the ‘effective’ part, because I don’t think a lot of people appreciate it. One metric I really like to highlight about BridgeBio is that we’ve been around for 10 years now, and within that time, we’ve delivered 19 INDs and three NDAs. We had two positive phase three trials that just read out in the last year, and we’re waiting for one more that will read out within this quarter. This efficiency is really rare. And it starts with actually picking the right programs and having a balanced view of the portfolio. So how do we choose? The majority of rare diseases happen to be genetic, and we know that targets with genetic support have a higher chance of success. That’s why BridgeBio develops therapies that target the source of these genetic disorders, or something very close to it. All of our targets technically have genetic support.

Sun-Gou Ji: But, you know, everyone knows there’s a twofold increased success rate if you have genetic support. The chance of a single program succeeding is still very low if you think about a single program: a low probability of success, slightly increased because of genetic support. But if you bundle enough programs together, it just becomes a mathematical problem of how many programs you have to try to get a certain probability of the portfolio making it. There’s a paper from Andrew Lo, one of our founders, who actually came up with this concept, and our CEO Neil Kumar is executing on it. And it becomes a very mathematical problem that a lot of investors and bankers actually get.

Sun-Gou Ji: And it’s very hard to raise funding for a single rare disease program that has a low success rate, where the outcome would not be that huge. So it’s very difficult to raise for a single program. But if, thanks to the higher probability of success of individual rare disorder programs, you bundle them together, then your risk becomes really low. There are investors that have the appetite for low-risk investment under this model, so BridgeBio was able to raise from nontraditional investors in biotech. And it’s not only about how we raise funding; the model also allows funding towards the smaller indications with smaller upside, which would not be funded individually if this model were not there.
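The bundling arithmetic can be made concrete. As a sketch with made-up numbers (the 10% per-program success probability and 90% portfolio target below are illustrative placeholders, not BridgeBio’s actual figures), the question “how many programs do you need for a given portfolio probability” is one line of algebra:

```python
import math

p_program = 0.10  # illustrative per-program probability of success (with genetic support)
p_target = 0.90   # desired probability that at least one program in the portfolio succeeds

# P(at least one success out of n independent programs) = 1 - (1 - p)^n >= p_target
# Solving for n: n >= log(1 - p_target) / log(1 - p_program)
n = math.ceil(math.log(1 - p_target) / math.log(1 - p_program))

p_portfolio = 1 - (1 - p_program) ** n
print(n, round(p_portfolio, 3))  # 22 programs give a ~90% chance of at least one success
```

The key assumption, as Grant notes later, is that the programs’ risks are roughly uncorrelated; correlated failures would make the portfolio much riskier than this back-of-the-envelope suggests.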

Grant Belgard: So for a company, aside from a successful launch, the best outcome is failing as early as possible, not going as far as possible. What does that mean in practice to fail early in rare disease development? And how do you operationalize that mindset within BridgeBio where you have multiple shots on goal, you know, kind of in principle uncorrelated risk basket of programs?

Sun-Gou Ji: Yeah, that’s actually a very important aspect of our portfolio. We’re not trying to make every program a success; we try to optimize for the portfolio. Usually this is not possible, because if you have one company working on one program and that program fails, you’re done. Whereas at BridgeBio, if a program fails, there are always new programs that we are starting. So even if the program people are working on fails, it doesn’t mean that they’ll lose their job. They can actually be transferred over to newly created programs, or ones that need support, because everything moves, and all these uncorrelated programs are at different stages of development with different problems.

Sun-Gou Ji: That’s how you can at least incentivize people to make the right decision, rather than the decision that makes the program live longer. And these kinds of program shutdowns happen in very different circumstances. Sometimes it happens because of external factors, right? The market’s shrinking, and you have to figure out which programs you want to keep, which is similar to what all other biotechs and pharma companies go through. But we also do it very intentionally: we review our programs, especially the early stage programs, and make sure that when we start a program, we develop clear decision points. If we hit a milestone, then it’s a go. But we also very clearly lay out what a no-go would be for each milestone and try to make harsh decisions.

Sun-Gou Ji: These are definitely some of the hardest decisions that we have to make, but we always try to push ourselves to make them before the market makes us make them.

Grant Belgard: And how do you approach risk-adjusted net present value modeling in rare diseases? And why do you think that’s a better framework than focusing on peak sales?

Sun-Gou Ji: Yes. So we actually released a white paper on this last October, called ‘The Feasibility of Rare Disease Drug Development.’ Risk-adjusted NPV is the net present value of a program: what is the present value of a drug development program at this time, considering all the potential paths the program could take, aggregating across all the potential outcomes, the failure risk and the success risk, and taking cost and time into account, which is what makes it risk-adjusted. Then you have a single number on whether this program is actually positive, meaning it’s worth investing in because you’ll get something out of it, versus negative, meaning it’s just not economically or financially viable to invest in the program.

Sun-Gou Ji: And I’m sure people have heard of this herding in rare disease drug development, where everyone is working on a select few more common rare diseases, and most of the other rare diseases just get no interest. That, I think, is what happens if you focus on peak sales: there are just a few rare diseases that make sense if you only think about peak sales and the biology of the disorder is understood. If you focus only on peak sales, I feel there’s just no way to avoid herding on select rare diseases. Peak sales considers only the potential outcome and ignores the potential costs of getting there. In contrast, for common diseases like IBD, or, you know, autism, risk-adjusted NPV is not that relevant, because whatever you spend would actually be negligible in the context of the large outcome, the large fruit at the end.

Sun-Gou Ji: But for rare diseases, comparing the size of the fruit that the program will bear, with some probability, against the expected cost, and whether that is positive or not, is critical. A lot of our drugs would not have been interesting under the traditional way of just thinking about peak sales. But some of our teams are so lean and efficient that they have pulled off some of the cheapest drug development programs ever run to reach phase three. If you only focus on peak sales, none of that matters. So if anyone’s interested, I would really encourage people to check out our white paper. There is actually a toy model you can play with.

Sun-Gou Ji: You could kind of change how much you think you’re going to, this is going to cost, how long your trial is going to last and what are the things and try to figure out how, what you need to optimize in order to turn your program NPV positive.

Grant Belgard: In broad strokes, how would you define computational genetics for the work that you lead?

Sun-Gou Ji: In broad strokes, any analysis that cannot be done in an Excel spreadsheet and that is not directly related to clinical trials.

Grant Belgard: I like that definition. Yeah. I haven’t heard that before. That’s a good one. Where in the life cycle from target ID to validation, candidate selection, trial design, post-marketing, is your involvement the heaviest and why?

Sun-Gou Ji: It will be in the earlier stages. Once the target is selected and the drug program gets going, there’s not as much that computational genetics can do. It can help decision-making by generating different kinds of biological support for the pathway and the target, and we do all of that; we actually work across all the stages. But the heaviest effort goes into selecting the right target and actually validating it. That’s the type of decision where, once you make it, there’s no turning back. You only find out after phase three, after spending a lot of money, time, and resources that could have been spent trying to help other rare disease patients. If you don’t pick the right target, there’s no way to change that later.

Sun-Gou Ji: That’s where we put a lot of our effort, and that’s also where there is tried and tested proof that incorporating a lot of genetics data at that stage does significantly improve your success.

Grant Belgard: What data sets are most actionable for your work right now and what makes them actionable?

Sun-Gou Ji: There are multiple databases. Of course, like everyone, we work with the UK Biobank and All of Us. They’re very useful and somewhat actionable because of their general population representation, which you can learn from: if you go after a certain rare disorder, what are the more common expressions of that disorder that could be observed in more common patients?

Sun-Gou Ji: And can we actually build an allelic series around the target based on more common variants that are not directly causing the monogenic disorder? But because the UK Biobank and All of Us are usually devoid of a lot of severe rare monogenic disorders, you do have to complement them with other databases that have a higher enrichment of these more severe rare monogenic disorders. That includes databases like Genomics England, which we work closely with, and also a lot of the genetic testing providers like Invitae and GeneDx, where you get tested because you have a specific concern about a genetic disorder. Those are the databases enriched in the type of patients we are trying to treat. So in the end, there’s not a single database, because they all have different ascertainment biases.

Sun-Gou Ji: If you just keep sampling from the general population, you would basically have to sample the whole of the US to get a large enough sample size to do anything for any of these rare disorders. We’ll get there, but it’ll take too long. From the other end, because the genetic testing vendors are biased towards people who actually have a reason to be tested, you’re missing a lot of people with slightly less severe forms of the disorders who would not get tested. So a lot of the insights you get from those databases will be biased towards more severe expression of the phenotype.

Sun-Gou Ji: So in the end, you have to merge those two together and make sure that what we get from one database can be replicated, or, if it’s not replicated, that we can explain why you don’t see it in the other databases. And of course, it doesn’t end with just the genomics data. One of the best things about the UK Biobank is that they now provide all these proteomics data, and a lot of other multi-omics data sets are becoming more readily available, so layering those on top of the genetics is becoming more and more important. But again, a lot of these monogenic disorders don’t have a large enough sample size for multi-omics. So how you use multi-omics from a general population database to incorporate that layer of information to help de-risk our targets, or de-risk a program moving forward, is always case by case.

Grant Belgard: So the calcium-sensing receptor has been described as a system-level node for calcium homeostasis. Can you explain why it's an interesting target?

Sun-Gou Ji: Yeah, so CASR is, like you said, the gene for the calcium-sensing receptor. It senses the calcium level in your blood and keeps it in check. One of our programs that read out last year was an inhibitor of this receptor to treat autosomal dominant hypocalcemia, a monogenic disorder in which the calcium-sensing receptor is overactive, too sensitive to calcium. The receptor thinks the body has more calcium than it needs and keeps the calcium level low, so hypocalcemia is the symptom of the disorder. And why CaSR is such an important and interesting target is that it's a genetic target with an allelic series.

Sun-Gou Ji: An allelic series is, simply put, nature's dose-response curve: the dosage of the gene correlates with the disease outcome. If you have low dosage, meaning a loss-of-function CaSR, you have hypercalcemia, too much calcium. Wild type sits in the middle, where you're fine. And gain of function in CaSR causes the disorder we're trying to treat, autosomal dominant hypocalcemia. So you have a human phenotypic outcome that correlates with dose, and a dose-response curve is exactly what you want to see in a clinical trial, because it proves you're actually hitting the target correctly.

Sun-Gou Ji: And having that allelic series of different types of mutation, a severe loss of function, a weak loss of function, a strong gain of function, a weak gain of function, each correlating with a human phenotype, is the perfect genetic support for a target. When people talk about allelic series, everyone brings up PCSK9 for lipid metabolism. PCSK9 has been a beautiful story: there are gain-of-function and loss-of-function individuals, and the loss-of-function individuals are protected from high lipids and coronary artery disease. That's why PCSK9 inhibitors are not only used for monogenic hyperlipidemia; they're used in the general population. And that's the analogy we can draw for CaSR inhibitors: it's not just for the autosomal dominant hypocalcemia type 1 monogenic disorder.
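
The allelic-series logic described above can be sketched as a toy model. All activity values and thresholds below are hypothetical, chosen only to illustrate the monotonic mapping from gene dosage to phenotype; they are not clinical data.

```python
# Toy sketch of an allelic series: relative receptor activity (1.0 = wild
# type) maps monotonically onto the serum calcium phenotype. All numbers
# here are hypothetical illustrations, not measurements.

def phenotype(casr_activity: float) -> str:
    """Classify phenotype from relative CaSR activity (1.0 = wild type)."""
    if casr_activity < 0.8:
        return "hypercalcemia"   # loss of function: receptor under-senses calcium
    if casr_activity > 1.2:
        return "hypocalcemia"    # gain of function: receptor over-senses calcium (ADH1)
    return "normocalcemia"       # wild-type range

# Nature's dose-response curve: variants ordered by functional activity.
alleles = {
    "strong LoF": 0.3,
    "weak LoF":   0.7,
    "wild type":  1.0,
    "weak GoF":   1.4,
    "strong GoF": 2.0,
}

for name, activity in alleles.items():
    print(f"{name:>10}: {phenotype(activity)}")
```

The point of the sketch is only that phenotype varies monotonically with dosage across the series, which is what makes an allelic series behave like a built-in dose-response curve.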

Sun-Gou Ji: But if you have this imbalance in calcium, it also leads to an imbalance in parathyroid hormone. What usually happens is that you get prescribed calcium tablets to raise your blood calcium. That normalizes the blood calcium, so it gets rid of a lot of the brain fog, neurological effects, tingling, tetany, even seizures. But it also increases the amount of calcium that has to go through your kidneys, which ends up causing kidney damage. So a lot of ADH1 patients are struggling to control their serum calcium with calcium supplements while their kidneys break down. The same thing could happen to other people who use calcium supplements the wrong way.

Sun-Gou Ji: And there’s this kind of allelic series that we see in CasR actually indicates that this CasR inhibition as a therapeutic could be used for an other expansion from not just the rare CasR and ADH1 disorders to more complex phenotypes associated with the calcium sensing receptor, especially the anything influenced by calcium balance.

Grant Belgard: Many companies cluster around the same common rare diseases, while ultra-rare conditions are left to non-profits. How do you decide which diseases to pursue, especially when patient population size or trial feasibility is unknown?

Sun-Gou Ji: That’s always a moving target, as you can expect. But one of the things that we really focus on is really let the science speak. Meaning, can we really get into the science of understanding the patient beyond the need and the biology of the disorder? And we call that the connect the dots from the genetic perturbation to human phenotype. And where does the proposed treatment is intervening in that whole pathway? So as I alluded to for the CasR example, like for genetic support, the allelic series is the best. That’s the ultimate genetic support of those response curve, super rare. Interestingly, we either find things that obvious and everyone is working on or stumble upon ones that no one is working on. If the rare monogenic disorder is too hard to make a drug, it sometimes makes sense to go straight to the complex disorder. But usually that’s not for us.

Sun-Gou Ji: And we look for partners who are willing to take it on together for those larger indications that require significantly longer and more complicated trials.

Grant Belgard: So as we sequence more of the population, what are you seeing in terms of prevalence, penetrance, and variable expressivity of monogenic variants?

Sun-Gou Ji: Definitely a higher genetic prevalence, but lower penetrance and a wider phenotypic spectrum of expressivity. And this is definitely not new, right? Pathogenic variants were observed in ExAC a long time ago, in people who were called "superhumans" at some point, and that led to the search for modifiers in these pathogenic-variant carriers, which still goes on today. And preceding our work on ADH1, Hugh Markus's work on monogenic stroke, Karen Wright's work on neurodevelopmental disorders, and many others consistently show that many more people than expected carry pathogenic variants, but the penetrance is much lower than we traditionally thought.

Grant Belgard: How do those findings complicate the way we define patients and measure unmet need in rare diseases?

Sun-Gou Ji: Yes. Because of the much wider variability in expressivity we've been talking about, it's very important to capture all of the phenotypes, not just the classical ones. Treatment starts from diagnosis, but diagnosis is often based on genetic testing, and there are just too many rare diseases out there. If the symptoms observed in a patient don't align with the classical symptoms of the genetic disease, genetic testing often won't be recommended, and may only be considered when symptoms become too severe.

Sun-Gou Ji: That's why we're learning that the unmet need in rare diseases today is actually harder to quantify properly. It comes back to the ascertainment bias we were talking about: the testing vendors' databases are severely biased toward classical presentations with severe phenotypes, whereas the general-population databases just don't pick up enough of these rare, severe monogenic disorders to make sense of. Reconciling the two is still going to be hard.

Sun-Gou Ji: And because of the variable phenotypic expressivity, understanding its full spectrum matters. Meaning, we should actually start from the genetics: get everyone who carries a pathogenic variant, try to identify even new phenotypes that are not classically associated with the traditional monogenic disorder, and expand and define the phenotypic spectrum through a genetics-first approach.

Grant Belgard: So how do you think this will change the definition of a monogenic patient and impact clinical trial inclusion/exclusion criteria, for deciding who should be part of the trial and, later on, who should be treated?

Sun-Gou Ji: Well, it’s all going to be part of the continuum, right? You’ll have variants and that’s a very difficult line to draw, right? Because it’s pretty clear when you think about, okay, do you carry a variant in a gene that has been pathogenic before? And there are a bunch of VUSs, so whether you have a pathogenic, likely pathogenic or VUS carrier may actually tell you that you have a mutation, but whether you have the disorder, that may be a very different thing. You may be a monogenic patient because you have the pathogenic variant, but do you have the monogenic disease? Maybe no, but then how do you say no? Like in case of CasR, you have a monogenic variant in CasR that’s pathogenic. You have hypocalcemia, then you are technically an ADH1, but then when do you start treatment? It’s a different question too, right?

Sun-Gou Ji: Because when does it actually warrant treatment? It will be very different depending on the disorder and the safety profile of the drug. And that's sort of the start of personalized medicine, right? You start from the genetics, then the phenotype you're seeing in that patient, and then decide when you actually start treatment.

Grant Belgard: So you’ve talked about the importance of genetic support and drug development. What makes it such a powerful tool compared to other methods of validation?

Sun-Gou Ji: Yes, I would say genetic support is the only tool I know of with predictive validity for clinical success. Nothing else has reproducibly shown a two- to four-fold increase in success, replicated across so many different groups. But I wouldn't say it's more powerful than any other tool; rather, it provides an orthogonal validation of the therapeutic hypothesis that's just not possible through models. Even the best models are just models, right? Although we have to be careful: a lifelong perturbation, the variant you carry, is different from a therapeutic intervention, which is a sudden change. Still, it provides a completely different kind of validation for the target.

Sun-Gou Ji: However, despite genetic support showing roughly two-fold increased odds of success, whether genetic support alone provides any predictive validity is unclear. The increased odds are conditional: given that a target had been tested in the clinic, independent of any genetic support, the targets with genetic support succeed more often. A lot of these drugs were tested without anyone knowing there was genetic support, and when you condition on that set of tested genes, you see the increased odds. But if you only have genetic support, does it actually increase your odds? We just don't know, because no drug has been tested purely on the basis of genetic support.
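
The conditioning caveat can be made concrete with a toy 2x2 table. The counts below are hypothetical, chosen only to produce an odds ratio near the two-fold figure discussed; they are not real clinical-trial statistics.

```python
# Toy 2x2 table of clinical programs (hypothetical counts, for illustration
# only). Rows: genetic support found retrospectively or not; columns:
# program succeeded / failed. All programs here reached the clinic.
succ_gs, fail_gs = 20, 80   # targets with genetic support
succ_no, fail_no = 10, 90   # targets without genetic support

odds_gs = succ_gs / fail_gs        # odds of success given genetic support
odds_no = succ_no / fail_no        # odds of success without it
odds_ratio = odds_gs / odds_no
print(f"odds ratio = {odds_ratio:.2f}")  # 2.25 with these toy counts

# The caveat from the conversation: this ratio is conditional on the target
# having been tested in the clinic at all. It says nothing about the success
# odds of a target selected *only* because it has genetic support.
```

The comment at the end restates the speaker's point: the retrospective odds ratio and the prospective question ("should I pick this target because it has genetic support?") are different quantities.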

Sun-Gou Ji: And so it’s very powerful, and we are actively working on it, but that should not be a replacement of a target prioritization, target validation.

Grant Belgard: And final question on the future of precision medicine. So in what way would routine newborn sequencing transform precision medicine?

Sun-Gou Ji: Yeah, this comes with quite a personal story, because I have a one-year-old daughter who was recently diagnosed with a rare genetic disorder. We were lucky enough to be living in Boston, where our pediatrician knew to refer us to a specialist, who quickly sent us to Boston Children's; she was diagnosed within a couple of days and started treatment right away. The nurses and doctors were so helpful, so supportive, full of empathy, and we're so grateful for our care team. This is what US medical care should be, right? The best medical care. Of course, it's best not to have a rare disorder at all, but as it happened, we were lucky. There's one thing I regret, though: this is a genetic disorder.

Sun-Gou Ji: And I had actually convinced myself that I didn't want to get her sequenced when she was born. I used the exact arguments against newborn sequencing to convince myself: I'd be overwhelmed with the information, I'd find pathogenic variants and VUSs and worry about them without being able to act. Looking back, I feel that was laziness on my end. If I had looked at her genome and known the handful of genes with potentially bad variants, I would have reduced the search space for what to prioritize. Is it possible I would have picked up her symptoms earlier? With the benefit of hindsight, I do feel it would have been possible for me to catch this a bit earlier and get her treated sooner.

Sun-Gou Ji: Technically, this is as much as possible now, right? The technology is all there, like assays are as accurate as it can be. And the interpretation, although needs some improvement, but the only way to get better than interpretation is just by doing more. And those are various newborn sequencing efforts, of course, the UK leading and Guardian and Beacon studies along with others in the US.

Grant Belgard: Well, what are your thoughts on whole genome sequencing versus whole exome versus targeted sequencing for newborns?

Sun-Gou Ji: I feel we should future-proof ourselves. Even for the UK Biobank, which released its whole-genome set last year, they show an improvement in identifying pathogenic and likely pathogenic variants with whole genomes over whole exomes, even within coding exons. I just feel there's no reason to use targeted approaches, especially for data generation. For interpretation there may be a case to make, but we should do whole genomes to future-proof ourselves and get the highest yield; the interpretation can catch up later, and the data sets themselves could be very useful. It's the first step. It would really help cases like my daughter's early on, by reducing, or at least prioritizing, the search space. When you have a baby, you're worried about everything; but if you know she carries something and you see signs of it, you'll be more careful.

Sun-Gou Ji: And for that alone, I feel it would be worth it. But going back to your question about whole genomes, whole exomes, and targeted panels: in addition, the piece I find more exciting, thinking traditionally as a scientist, is the data generated. It will be hugely valuable for genetic research and drug discovery and development, because this is truly unbiased information about the population.

Sun-Gou Ji: I was telling you about the fascinating biases of the different biobanks and cohorts, but newborn sequencing would be the ultimate unbiased sampling of the population. It will open the first door for precision medicine and really help us understand differences not just in monogenic prevalence, penetrance, and expressivity, but even in common and complex disorders, and really expand how we think about human health through genetics. And you would carry that information throughout your life, so whenever something happens you have that background ready, rather than waiting until something goes wrong and then figuring it out.

Grant Belgard: Yeah, it’s interesting. You know, we’ve heard for years that this is coming and certainly at this point, it’s not a barrier of price, right? I mean, getting a whole genome sequence is a pretty negligible cost in the American healthcare system these days compared to everything else, but it’s still not routine. I wonder when that will finally flip.

Sun-Gou Ji: Yeah, it’s interesting. And also, I guess there’s questions about privacy and who owns the data and who actually gets to analyze the data and how do we make that equitable before and maximize patient benefit over anything else?

Grant Belgard: Well, I guess that’s another challenge, particularly in the US healthcare system, right, is although there’s a ton of money spent, it is very fragmented from a data perspective, many different systems, et cetera, right? So that will be a challenge.

Sun-Gou Ji: This is an operational problem now, rather than a technical or scientific one. And yes, there are a lot of sensitivities and issues around it, but there are pioneers trying to run pilots across different institutes in different countries, and hopefully those will change the minds of governments.

Grant Belgard: Thank you so much for joining us. It’s been great.

Sun-Gou Ji: Thank you for having me.

The Bioinformatics CRO Podcast

Episode 77 with Ewelina Kurtys

Dr. Ewelina Kurtys, a neuroscientist at FinalSpark, discusses her experience bridging AI, neurotech, and business development in industry, and FinalSpark’s mission to build a remotely accessible platform using living neural networks as a biocomputing substrate.


Ewelina Kurtys

Ewelina Kurtys is a neuroscientist at the biocomputing startup FinalSpark, which is working to create a bioprocessor from human neural organoids.

Transcript of Episode 77: Ewelina Kurtys

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m your host, Grant Belgard. Today we’re exploring wetware computing, living neural networks as computing substrates. Our guest, Dr. Ewelina Kurtys, works with FinalSpark, a Swiss biocomputing startup building a remotely accessible neural platform where researchers run experiments on human neural organoids connected to electronics and microfluidics. Ewelina’s background spans pharmacy, biotechnology, and a neuroscience PhD with postdoctoral work in brain imaging before moving into industry and startup work, bridging AI, neurotech, and business development. We’ll cover her current work, the path that led there, and advice for anyone curious about this new frontier. Welcome to the show.

Ewelina Kurtys: Thank you so much. Very happy to be here.

Grant Belgard: So for someone hearing about wetware computing for the first time, how do you explain what you work on and why it matters?

Ewelina Kurtys: So we are trying to build computers using living neurons, the same as we have in our heads. The reason we do this is that neurons are about a million times more energy efficient than digital computers. We want to address a problem that is now emerging: artificial intelligence, the silicon, digital kind, is using an exponentially increasing amount of energy. This problem is growing, and many people are searching for solutions. There are basically two ways: alternative energy sources or alternative computing, and we are working on the second option. So we try to program living neurons so that in the future we can build biocomputers whose heart, whose processor, is living neurons.

Grant Belgard: When you say programming living neural networks, what does that look like in practice today?

Ewelina Kurtys: We know that neurons produce spikes, which can be measured by electrodes as a current; this is how neurons communicate. In the lab, we can put them on electrodes, send them electrical signals, and measure their response. You can actually watch the response in real time on our website, finalspark.com; there is a "Live" section where you can see what this electrical spiking activity of neurons looks like. So we send them electrical signals, we measure the response, and we would like there to be a meaningful relationship between that input and output. We'd like to be able to program the neurons just by sending them signals and measuring what they answer.

Grant Belgard: So what elements of that are feasible with today’s technology and what still feels out of reach?

Ewelina Kurtys: Well, it’s relatively feasible to put neurons on electrodes and to measure the activity. Let’s say it’s something what is already established in the scientific world and technology. So technology is ready for this, but we don’t know how to program neurons. So we don’t know how to make sense of these signals, which we send to them and which we receive. So that’s the biggest challenge currently in biocomputing.

Grant Belgard: And so is there a way to tell if a neural culture has learned something or is that still in the future?

Ewelina Kurtys: Yes, it’s difficult. At the moment we do really simple experiments, the basics. For example, we just want that neurons increase the activity or decrease the number of spikes they produce. So this is the most simple task you can give to a living neuron. And yes, so this you can measure very easily. If they behave as you want it, that means they learned something, but this is still very difficult and not fully reproducible.

Grant Belgard: And as a readout, are you focused exclusively on spikes or other phenotypes?

Ewelina Kurtys: No, always on spikes. And you can actually measure them in several ways. You can measure just their occurrence, yes or no; this is called a spike train, where you have a series of dots over time and every dot represents one spike. Or you can measure the shape of the signal, in which case you sample much more data and get exactly how the voltage changes over time. But we only measure electrical signals from the neurons. We can also measure some other things, for example the color of the medium, the liquid the neurons are immersed in, but that's more for monitoring.
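
The two readouts described here can be sketched with synthetic numbers. The spike times, grid resolution, and sampling rate below are illustrative assumptions, not FinalSpark's actual parameters; the point is only the difference in data volume between the two representations.

```python
# Two readouts of the same neural activity (synthetic data, for illustration):
# (1) a spike train: just event times, one "dot" per spike;
# (2) a sampled waveform: the voltage trace itself, far more data.

# (1) Spike train: timestamps (seconds) of detected spikes in a 1 s window.
spike_times = [0.05, 0.12, 0.30, 0.31, 0.55, 0.70, 0.84, 0.99]

# The same train as a binary series on a 1 ms grid: 1 where a spike occurred.
grid = [0] * 1000
for t in spike_times:
    grid[round(t * 1000)] = 1

# (2) Sampled waveform: voltage at an assumed 20 kHz would give 20,000
# samples for the same one-second window, versus 8 timestamps above.
sampling_rate_hz = 20_000
waveform_samples = int(1.0 * sampling_rate_hz)

print(f"spike train events: {len(spike_times)}, grid bits set: {sum(grid)}")
print(f"waveform samples for the same window: {waveform_samples}")
```

Eight numbers versus twenty thousand for the same second of activity is why the spike-train readout is the lightweight default and the full waveform is sampled only when the signal shape matters.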

Grant Belgard: How do you structure input/output and what forms of reinforcement have you found meaningful so far?

Ewelina Kurtys: The simplest reinforcement is just sending an electrical impulse, but we've also developed other methods. We know that neurons in the brain also communicate via neurotransmitters, and we try to reproduce this in our lab. For example, today you can stimulate neurons with dopamine to reinforce a behavior; the dopamine signal acts as a reward. We do this by chemically inactivating the dopamine, putting it in the medium the neurons are immersed in, and then activating it with UV light. So the cells get an immediate dose of dopamine, and this is used to communicate with the neurons and reinforce a behavior when they do what we wanted.

Grant Belgard: What are the biggest problems you’re focused on solving right now?

Ewelina Kurtys: Yes, there are many problems. One big challenge is how to keep neurons alive for a very long time, because we want the biocomputer to be robust. We know from nature that neurons can live up to a hundred years, because the ones in our brains are mostly the same throughout our lifetime, especially during adulthood. For now we can keep them alive on electrodes for three months, which is quite a lot by industry standards, but still not enough for what we want. But the biggest challenge is actually programming the neurons: learning how to interact with them in a meaningful way. And the core problem is that nobody really knows how neurons encode information. We know quite a lot about how they produce and process spikes, but we do not know what the spikes really mean.

Grant Belgard: And is all this 2D or are you looking at 3D systems?

Ewelina Kurtys: The data is 2D, voltage over time, but the structure of the neurons is actually three dimensional, because we use neurospheres. These are roughly spherical structures of neurons, around half a millimeter in diameter, so the cultures are quite complex. However, the electrodes are only on the surface.

Grant Belgard: How do you think about reproducibility for something like this?

Ewelina Kurtys: Well, that’s quite simple. You just have to do experiments many times and then you have reproducibility if you get the same results over time. But this is very challenging because neurons are not stable system. They are dynamic. So that means that responses can change for the same signal. So this is still challenging. But every time we say we have some results, it’s only if we have repeated them many times. So for example, we managed to store one bit of information in neurons. So that means we have done this many times, but we have done also a lot of things which were working maybe one or twice, and then we don’t report them.

Grant Belgard: When you were starting out, you were comparing energy use and efficiency to digital systems. What’s a good apples to apples way to compare energy usage of biological neurons to artificial neural nets?

Ewelina Kurtys: Well, it’s still a bit tricky to compare, but we can have some ideas about the brain efficiency by neurons efficiency by looking at the human brain. So actually all of what we assume about biocomputers today is based on our observation of the human brain. And we can see that human brain can run on 20 watts is quite low energy consuming. But if you would like to reproduce the workings of the human brain with digital computing, you would need a small nuclear plant. So all these ideas about efficiency of neurons are based on what we see in the human brain.

Grant Belgard: What milestones would convince a skeptic that wetware’s more than a curiosity?

Ewelina Kurtys: Well, it’s not only for the skeptic. I think it’s also for us and for everyone who is following the field. So our milestones, first milestone, which is for the next two, three years after we receive investment, because we are currently considering accepting an investor, we are searching for 50 million Swiss francs, which is around $50 million, let’s say more or less. And with this investment, we can have tight timeline because for now we are self-funded. So everything can take longer, but assuming the investment, we would like to solve the problem of learning in vitro. So the problem I just described that nobody knows how to teach neurons something, how to encode information also. So we would like to do basic algorithm into three years. And after the next around three years would be advanced algorithm, because we would like to match the performance of digital computing.

Ewelina Kurtys: And the last milestone would be scaling, because we would of course like to build huge structures of neurons, much bigger than the human brain, whatever is technically possible. We assume the biocomputer will be ready in around ten years, as a so-called bioserver: a computer available remotely, the way you access cloud computing today. That's the idea we have in mind. The difference is that it will be much, much cheaper. For example, maybe you will be able to run ChatGPT or something similar on living neurons, but much more cheaply because of the lower energy consumption.

Grant Belgard: I’m just thinking about how you typically staff a data center and what very different skills might be required for a wetware data center, right? Your DevOps engineer role would look very different if you’re having to care for living cells. How might that look in practice from the perspective of the engineers running the data center?

Ewelina Kurtys: Well, yes, a biocomputer will need somewhat different expertise, but we hope everything will be automated. Now, of course, we do a lot of things by hand, but in the future we hope it will be a fully automated facility, and I'm sure that will happen. What you need to run a biocomputer is definitely biology knowledge: you have to know how to keep living neurons alive for a very long time. Coding for digital computers is important too, because everything is connected to digital computers, but you need to complement that with the biology of keeping living neurons in the proper conditions. They are very demanding, very fragile as living cells, so you have to keep temperature, pH, everything perfect for them.

Grant Belgard: Where might wetware make the earliest real world impact?

Ewelina Kurtys: We believe in generative AI, because it is very energy consuming and also because we believe the human brain is very good at solving complex problems and generating ideas. So if you use living neurons for that, it will work much better. That's what we believe.

Grant Belgard: Definitely more efficient. What collaborations are most valuable for you at this stage?

Ewelina Kurtys: For the moment, I don't know if you can call it collaboration. We do collaborate a little with some hardware providers, because we need, for example, systems of electrodes for living neurons. But what's most important is that we give access to our lab, free or paid. Free access goes to universities: we have accepted nine universities from 34 applications, prioritizing those with the biggest chance to publish. We also have, which was a surprise for us, we didn't plan for this, clients who pay a subscription for remote access to our lab, because everything in our lab can be done remotely. You don't have to be in the lab in Switzerland.

Ewelina Kurtys: We have this because during COVID our engineers developed the whole remote system to access the lab when they couldn't go physically. Later we decided to use that opportunity and invite universities to collaborate, and after getting a lot of requests we also opened paid subscriptions for private clients.

Grant Belgard: That’s really interesting. Yeah.

Ewelina Kurtys: So that’s very important for us because it gives us some revenue and also it gives us some kind of recognition, maybe appreciation to our work because this is emerging field. So still many people don’t know about the bio-computing.

Grant Belgard: What surprised you the most since you started working with neuronal cultures as computing elements?

Ewelina Kurtys: I think the most surprising thing is how difficult it is to program neurons. People have tried for many years to figure this out on many models, including a lot of physical models that don't use living cells, and still nobody knows how neurons encode information. That's amazing; it's so difficult.

Grant Belgard: What do people outside the field most often misunderstand and how do you correct it?

Ewelina Kurtys: I think what people sometimes misunderstand is they say we are building a human brain in the lab, and that's not what we do. This matters from an ethical perspective: we don't try to reproduce the human brain, we just use the same building blocks, living neurons, and that's a big difference. Because of anthropomorphic bias, people often see human traits in everything, so if we use human neurons, people ask: is it conscious, can it feel? These are actually important ethical questions, although I think they are raised more by the general public than by philosophers or ethicists, and they do require some thinking from philosophers. Of course, we are always happy to get suggestions, and we hope we can use the work of philosophers to help answer these difficult questions.

Ewelina Kurtys: But it’s normal that every new technology raises some concerns and some surprise in some people. So yes, it is important to address this, but I think philosophers can do it much better, and we actually try to encourage philosophers to work on biocomputing. We have put a lot of effort into this. Last year, I was at a conference in the Netherlands about ethics in technology, where we tried to reach out to philosophers who could be interested in working on these topics. I also think it doesn’t matter at this stage that we are using human neurons. We use them because they are the easiest to produce at the moment: today you can get commercially available stem cells derived from human skin, so we can produce huge amounts of neurons quite easily. We could also use animal neurons, absolutely. At this stage of the project, it doesn’t matter.

Grant Belgard: If you suddenly had a tenfold increase in stable high quality cultures, what would you do that you can’t do now?

Ewelina Kurtys: Well, we would run experiments longer. Our lab is fully automated, so we can run experiments 24/7, but because neurons usually live up to three months, you cannot really run them longer than that. So first, it would be easier to do long-term experiments. And second, maintenance of the lab would be easier, because every time neurons die, we have to replace them. It’s quite an efficient process, but it would still be easier if we didn’t have to do it so often.

Grant Belgard: How do you think about the balance between advancing the biology, so getting higher quality, more robust cultures and pushing the tooling that you’re using, electrodes and software and so on?

Ewelina Kurtys: I think both are important, but the second one is definitely much easier. Keeping cells alive, and all the questions we have about how to culture neurons, make the biology, I think, much more complex. Engineering is just a matter of time and, of course, resources. We are a very limited team, just six people, so we are also limited by that, but our engineers are so excellent that building things is a matter of time. Biology, however, is not only about being good or not. It’s complex, and sometimes you just have to do a lot of trial and error. So that is, I think, much more difficult.

Grant Belgard: So when did you first get interested in this interface of biology and computing?

Ewelina Kurtys: So I actually did my research in neuroscience. That was a totally different field, pure biology. But I also did research in medical imaging, because I was mainly doing brain imaging, and my first job in industry was at a medical imaging service, so I had a little experience there. In medical imaging, you use a lot of AI. At that time, it was a hot topic, a hype, so that’s how I learned about AI and got interested in it. At the time, I was living in London and had the chance to attend many different events and do networking. I was also doing business development, so I was interested in connecting with people. I attended the AI Summit in London, which was, I think, in 2019, and there I met the founders of FinalSpark. I got interested, because it’s not easy to combine fields or to go outside your own field.

Ewelina Kurtys: So I said, okay, if they are trying to build a computer from living neurons, but they are engineers, then that must be interesting. I decided that it’s a cool project. Generally, I always look at the people, because every topic can be interesting or not, but on a daily basis, it all depends on what kind of people you work with. So I think every topic can be good; it’s mostly about the people. And what I’ve noticed is that in very deep tech research, you usually have nice people to work with. That’s why I’m in this field.

Grant Belgard: Looking back at your own degrees and training, what experiences most uniquely shaped how you approach problems in this field, since it’s so multidisciplinary?

Ewelina Kurtys: Well, I have to say the PhD experience for sure, because it gives you a chance to do independent research. But before the PhD, I also did some projects, so it always depended on how much autonomy I had in the lab. I learned a lot from that, and I also gained confidence. That’s important, because I realized that I really can solve problems and that what I do works. That confidence boost is important. And then when I left academia, maybe setting up my own company in the UK, because I work with FinalSpark as a consultant. I think that gave me a lot of experience. It’s always an adventure when you can do things by yourself, even if they’re very small. Trying to organize life in your own way is the best you can do, at least in my experience.

Grant Belgard: What did you learn from the business facing roles that scientists often overlook?

Ewelina Kurtys: I think the biggest lesson I learned as a scientist who left academia is that it’s not so important to be smart; what matters most is likability, that people like you. Actually, every deal you make in your life depends on whether people like you, not on whether you are smart. I think this is a very big mistake that academics especially make, because they think it’s all about technical skills and being clever. Of course, there is some minimum threshold you need to pass, but the rest is all about, I would say, likability. It’s a lot about talking to people, and things usually work out if you build a good connection with the clients. So I think that’s extremely important.

Ewelina Kurtys: Let’s say the mental part of the work, not so much the technical part, because after a PhD the technical part is the easy part.

Grant Belgard: How do you evaluate opportunities in emerging fields with high uncertainty?

Ewelina Kurtys: Well, do you mean job opportunities, or opportunities for us as FinalSpark?

Grant Belgard: Either.

Ewelina Kurtys: Either. I would say the job opportunities are quite slim at the moment. So if I were an engineer thinking about biocomputing, I wouldn’t focus only on this. I would think more broadly about the emerging fields, because there are a lot of things growing at the intersection of neuroscience and engineering. There is a lot of stuff beyond biocomputing, for example brain-computer interfaces. So I think it’s good to look at this more broadly if someone is interested in combining biology and engineering. There are a lot of projects, but if you focus only on biocomputing, it’s quite difficult, because to our knowledge there are only three companies in the world doing this, and all of them have limited resources. So yeah, it’s quite difficult to work only in this.

Ewelina Kurtys: But I think if you like biology, if you are fascinated with biocomputing, you can also do something similar like brain-computer interfaces or maybe neuromorphic computing, depending on how much engineering and how much biology you prefer. So that’s the job opportunities, and we actually get a lot of questions from interested potential coworkers. Unfortunately, for the moment we aren’t hiring, but once we get an investor, we will for sure be searching for more people. When it comes to opportunities for us as FinalSpark, I think it’s quite interesting, because when you’re working on such a deep tech project, a lot of people are interested at least to hear what you do.

Ewelina Kurtys: So that makes the work easier, I think, because when we try to promote the topic, for example reaching out to journalists or podcasters like you, it’s not so difficult. I don’t know if easy is the right word, but the topic by itself is interesting, because it shows a totally different point of view on engineering, and I think it brings added value to many discussions. So it’s quite easy to promote, let’s say, if I can say so.

Grant Belgard: How do you maintain credibility while crossing disciplines?

Ewelina Kurtys: Well, do you mean myself, when I crossed disciplines from biology to engineering, or FinalSpark?

Grant Belgard: Well, for yourself. What general lessons are in there?

Ewelina Kurtys: Okay. I would say you always have to be prepared. I said that the mental part of the work is more important, but you still have to be technically prepared. You need to really know what you do. That gives you credibility, because you can easily answer questions, and I think it’s very important that you really know your topic upside down. As a company, I think it’s important to be transparent. That’s also why we collaborate with universities: we want them to publish. There is already one publication from our free users. It’s very important to be very transparent, so that people know exactly what we have, and that we are open and explain it. I also think scientific collaborations help in building this credibility.

Grant Belgard: For a grad student or postdoc intrigued by wetware computing, what should they learn first?

Ewelina Kurtys: It depends whether they’re coming from biology or from engineering. If they come from engineering, they should learn about biology, and if they come from biology, they should learn coding and engineering. So it depends where you’re from, but in biocomputing it’s very important to combine knowledge between biology and engineering. That’s the key.

Grant Belgard: So if someone is strong on the computing side but new to wet lab biology, what’s a realistic path for them to quickly get hands-on competence that’s relevant for this space?

Ewelina Kurtys: Oh, just read about neurons and how they process information; even some Wikipedia articles are usually enough for a start. I also highly recommend checking our website, FinalSpark.com. We have written a lot of blogs, and there is also our technical paper in Frontiers. It’s the only one we’ve published, so it’s easy to find. So yeah, I think checking our paper and our blog articles could be interesting and helpful for a beginner, just to see what is important. Yes.

Grant Belgard: When you do raise money and start hiring, what kinds of portfolio pieces or proofs of work would you be looking for from potential applicants?

Ewelina Kurtys: Well, for sure, most of the people we will hire will be on the engineering side, and maybe there will also be some biologists. Biologists will have to have extensive experience with in vitro cell culture and working with living neurons. For engineers, we look at the coding. They have to be people who like to code and who also like hardware, because in biocomputing you have both hardware and software, and we are changing this all the time. And also a lot of signal processing and data science, because we try to search for patterns in the signals. That’s also very important.
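As an illustration of the kind of signal processing mentioned here, below is a minimal, hypothetical sketch of threshold-based spike detection on a single electrode trace. All parameters and the demo data are invented for the example; this is not FinalSpark’s actual pipeline.

```python
import numpy as np

def detect_spikes(signal, fs, thresh_sd=5.0, refractory_ms=2.0):
    """Detect spike times in a single-electrode voltage trace by
    amplitude thresholding: flag samples whose absolute value exceeds
    thresh_sd robust standard deviations of the noise, enforcing a
    refractory period so one spike is not counted twice."""
    # Robust noise estimate via the median absolute deviation
    noise_sd = np.median(np.abs(signal)) / 0.6745
    threshold = thresh_sd * noise_sd
    refractory = int(fs * refractory_ms / 1000)
    spike_idx, last = [], -refractory
    for i in np.flatnonzero(np.abs(signal) > threshold):
        if i - last >= refractory:
            spike_idx.append(int(i))
            last = i
    return spike_idx

# Synthetic demo: 1 s of noise at 10 kHz with three injected spikes.
rng = np.random.default_rng(0)
fs = 10_000
trace = rng.normal(0.0, 1.0, fs)
for t in (1000, 5000, 9000):
    trace[t] += 40.0                 # large artificial spikes
spikes = detect_spikes(trace, fs)
print(spikes)                        # contains 1000, 5000, 9000
```

Real pattern search on multi-electrode arrays goes much further (spike sorting, burst detection, cross-electrode statistics), but thresholding against a robust noise estimate is a common first step.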

Grant Belgard: What underrated skill is a superpower in this area?

Ewelina Kurtys: Hard to say. I don’t know. It depends on the person, because the field is so diverse, so I wouldn’t say there is one thing for everyone. Maybe if you are coding, then knowing biology is underrated, for example, but it really depends where you come from.

Grant Belgard: What red flags should candidates watch for when they’re choosing a lab or startup in this field?

Ewelina Kurtys: Oh, a red flag. This is difficult. I don’t know. Maybe one thing you can look at (yes, this is something I’ve learned from life experience) is the people, for example the coworkers: whether they are happy and relaxed. If they are not, then you should escape, because in a nice environment people are happy and relaxed, and if they are not, that means there is some pressure and maybe not a very nice environment. I think this is important, although I have to say from my experience that it’s very, very difficult to spot at the beginning, during the interview. But yes, maybe this. And of course, in an interview you are also interviewing your future employer or project.

Ewelina Kurtys: So you have to look at it that way: it’s not only them checking you, but also you checking them. Another thing I’ve heard is that in an interview, people really want you to succeed, because they want to find someone. People are usually very stressed and think the interview is just a search for their weaknesses, but that’s not really true, because everyone wants to find a great person, so actually everyone wants it to be successful. I heard that from a friend who is a very experienced HR manager; she always told me that people usually misunderstand this. But that’s very generic advice, not only about this field.

Grant Belgard: If you could go back in time and give your earlier self one piece of advice, what would it be?

Ewelina Kurtys: Be more confident, because when I was young, I was not confident at all. I was always afraid that I would be wrong, which is not necessary. Yes.

Grant Belgard: Where can our listeners go to learn more about you and your work and about FinalSpark?

Ewelina Kurtys: So, we are very active on LinkedIn; we promote ourselves there as much as we can. And of course our website, finalspark.com, where you can send us a request if you are interested in the project, and we also send out some reading materials. So it’s very easy to get in touch with us. We are also on Discord, which is linked on our website, and we have a newsletter you can subscribe to on our website as well. So there are many ways to get in touch, learn more, and join the community, which is growing very fast.

Grant Belgard: Well, Ewelina, thank you so much for joining us. This was enlightening.

Ewelina Kurtys: Thank you so much. It was a pleasure.

The Bioinformatics CRO Podcast

Episode 76 with Christopher Woelk

Christopher Woelk, an External Innovation Partner at Astellas, discusses his background in multi-omics and AI/ML and what he looks for in his current search & evaluation role embedded within therapeutic oncology research.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Christopher Woelk

Christopher Woelk is an External Innovation Partner at Astellas, which focuses on developing and supporting transformative disease therapies.

Transcript of Episode 76: Christopher Woelk

Disclaimer: Transcripts may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard and joining me today is Christopher Woelk, aka Topher, from Astellas. We’ll explore what Topher is working on now, the path that led here, and practical advice for scientists and engineers charting their own course in biotech and pharma. Topher, thanks for joining us.

Christopher Woelk: Thanks, Grant. No, great intro. Thanks for pronouncing my nickname and my last name correctly. People stumble on that all the time.

Grant Belgard: What problems are you and your immediate team focused on solving right now?

Christopher Woelk: Yeah, so right now I work, as you mentioned, for a Japanese pharma called Astellas. I’ve had a bit of a career pivot, which I’m happy to explore, from running large technical groups at biotech and pharma companies into search and evaluation and BD. Right now, I’m embedded in the therapeutic area of oncology rather than in BD, so I’m really pushing the science first. I think the real sweet spot for me at the moment is trying to find interesting startups with a platform that preferably can spit out more than one asset, plus a preclinical data package around that asset showing some evidence that the therapy will be efficacious. So I’m using that template to search my network and meet new startups to figure out whether those assets will plug and play with Astellas programs.

Grant Belgard: What criteria do you use to triage those?

Christopher Woelk: Yeah, that’s a great question. The strategic part behind that, and again, I’m fairly new to this particular role, is coming up with a template. Obviously there are internal programs ongoing at Astellas, and we’re looking to use a template that lets us find backups for those programs out in today’s ecosystem of startups, hopefully things that don’t conflict with internal programs, so things that are novel. Then it’s going through that rubric, having worked with BD and Ventures arms in previous roles, and interviewing these startups: what is their problem statement? What are they actually doing to solve that problem? Why are they different from everybody else? The competitive intelligence piece, who your competitors are and why you are different from them, rounds out a series of questions that I like to work through when I’m chatting with startups.

Grant Belgard: What therapeutic areas or technology platforms do you come across in your work most often?

Christopher Woelk: Yeah, that’s a great question. I think, again, being embedded in oncology, my primary focus is oncology. Astellas is also working in ophthalmology, so I keep my ear out for those disease areas. Then in terms of platforms that I come across, really thinking about target identification, target validation, generative AI for small molecule and biologics design are all at the forefront. I think Perturb-Seq is something that I’m focused particularly on at the moment, and I know you and I have had conversations in other contexts to that regard. But building these models of the cell with Perturb-Seq, finding new targets, validating targets, finding biomarkers, I think this platform is really starting to come into its own with respect to those outputs.

Grant Belgard: What does success look like for your group over the next 6-12 months?

Christopher Woelk: Yeah, that’s a great question. So I’ve been wrestling with this a little bit because in a traditional BD role, of course, success is a transaction. So meaning that you find a company, a startup, they have an interesting platform or an asset, and there is a collaboration or a partnership, or maybe even a merger and acquisition, that type of transaction, of course, is success. I don’t have a budget myself to do transactions, and so I’m trying to figure out what success looks like to your point. And what I think it is, exactly what you brought up earlier, what template can I use to go out there and assess academics and startups? How many things can I feed in the top of that funnel? And it’s probably going to be in the hundreds so that the really good opportunities trickle down and BD gets to transact on them.

Christopher Woelk: So I think success for me is probably getting out there in the world, meeting hundreds of startups, whittling those through that filtering criteria we were talking about, and being able to trickle really high-class opportunities into BD.

Grant Belgard: What have you found to be the biggest differences in your current role versus your previous roles in R&D, and how have you adjusted to those?

Christopher Woelk: Yeah, no, that’s a great question. So previously, just to cover it briefly, I had a whole academic career at UCSD and at the University of Southampton in the UK, really using AI/ML and multiomics to get to target ID, biomarkers, and reverse translation of drug mechanism of action. I transitioned into industry after academia: I ran an exploratory science center for Merck and built up a systems biology group for them, then I went through a couple of startups, and I even had my own consultancy business for a while before this current role. So my old jobs were running technical groups of 15 to 20 people, really focused on things like target ID and reverse translation, as I mentioned. That meant getting into a lot of collaborations, bringing in a lot of data, and searching through that haystack for the needle that is really going to be a promising target.

Christopher Woelk: And then I shifted over to this new role, search and evaluation within a therapeutic area. I think one of the reasons I got hired was that I had that technical background. So when I’m out in the world talking to startups, I can actually evaluate what they’re doing from an AI/ML or technical standpoint, whether it’s causal inference, multiomics, or data integration; I can dig in and figure that out. The commonality is that I’m still using my technical background, but I’m using it now to evaluate companies as opposed to solving problems in technical groups. And that’s a lot of fun. It’s a lot of going to conferences, a lot of coffee chats with startups, and a really nice social aspect of this role.

Grant Belgard: So after the triage step for potential companies of interest, what questions do you ask as you get deeper with them and how does that process typically play out?

Christopher Woelk: Yeah, typically you’ll go under CDA so that you can have those deeper dive conversations. And normally at that stage, you’re pretty excited about the science. But as you go under CDA, presumably you get access to more data that’s not publicly available. So with the scientific hat on, you start to take a deeper dive into maybe its efficacy in a mouse model or a bunch of testing across in vitro cell lines that aren’t in the public domain. And so you can continue to convince yourself that the science is good at the particular startup that you’re vetting. But also going under CDA, you can start to explore what the company is looking for. So is it a fee for service type engagement? Is it a partnership with milestones? Is it more of a collaboration where you’re both going to put things into the pot and then maybe share the data at the end?

Christopher Woelk: And so you can start exploring what the relationship might look like, and then you can also start getting information around costs. And so is the startup just asking for too much and it’s never going to fit into the budget and what we’re trying to do? Or does it look like a good fit and quite a reasonable cost? And we can get a thumbs up from BD.

Grant Belgard: Maybe we can get into some questions that may draw more on some of your experience in some prior roles. But I hope will be interesting for our listeners. So how do you turn exploratory analyses into decision enabling work to inform programs?

Christopher Woelk: Yeah, I think that’s quite a challenge. So in previous roles, I’ve really been tasked with generating multiomics data sets, figuring out where the signal is in those data sets, and delivering targets. And so that sounds relatively easy, but in terms of generating the samples, you either need to find a biobank that has what you need or you need to work with your translational medicine colleagues, spin up a clinical study, which can take years, collect the right samples in the right way to get the data that you want. And then, of course, there’s the big question, well, what data am I going to generate from my samples? Maybe the question is disease versus health or treatment versus untreated. Which omics layers am I going to look at for that particular disease?

Christopher Woelk: The MRC, the Medical Research Council in the UK, always used to ask which tissue and which modality, meaning: are you sampling from the right place, and are you sure the omics modality you’re going to run on these samples is going to give you what you want? I had the privilege in some roles of not being limited by omics modality, so we ran four or five layers. I came up through transcriptomics, so I always have a slight bias toward transcriptomics, but I was often surprised in studies that the metabolomics layer, for example, had more signal. So it’s about keeping an open mind about those omics layers as you’re crunching the data in a data integration project to try to get to target ID. I spent a lot of time with my groups thinking about quality. The last thing I wanted to do was take one of the five omics layers and slam it together with the others regardless of quality.

Christopher Woelk: If it was a really noisy layer, it would just diminish the signal from the other layers. So make sure each individual layer is quality controlled, and if a layer is really noisy, it’s better to leave it out than smush it together with all the other omics layers. And then there are all these different ways to get to target ID, right, Grant, that I think a lot of places are wrestling with. Do you build some sort of correlation network across your modalities and then query that network for health and disease? Or do you query for health and disease first and then build a network to try to figure out what the biology looks like? And then, of course, we always sort of fall into this trap. I’m going to bring this up tonight.

Christopher Woelk: I’m actually teaching it at Northeastern, where we hear it all the time: correlation is not causality, right? Ice cream sales are correlated with shark attacks, so if you eat an ice cream, you’re going to get bitten by Jaws. So really, it’s trying to figure out which causal methodologies, as I’m combing through these multi-omics layers, can give me confidence that a target is involved in the disease and is not just responding to it. In that context, I’ve always loved the genomics layer. When you have a SNP or a mutation in the DNA, that’s something static and built in. If it’s in a gene related to that disease, or related to some co-expressed module in the protein or transcript space, then you’ve got a causal indicator pointing at some interesting pathway biology in the other layers.

Christopher Woelk: So that was a long answer, but hopefully what you’re looking for.
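The correlation-network idea described above (stack features from several omics layers, correlate them across samples, keep the strong edges, then query the network) can be sketched in a few lines. The layer names, threshold, and toy data below are invented for illustration; this is not Topher’s actual pipeline.

```python
import numpy as np
from itertools import combinations

def correlation_edges(layers, r_thresh=0.8):
    """Given omics layers as {name: features-by-samples array}, stack
    all features, compute pairwise Pearson correlations across samples,
    and keep feature pairs whose |r| reaches the threshold."""
    names, rows = [], []
    for layer, mat in layers.items():
        for i, row in enumerate(mat):
            names.append(f"{layer}:{i}")
            rows.append(row)
    r = np.corrcoef(np.array(rows))
    return [(names[a], names[b], float(r[a, b]))
            for a, b in combinations(range(len(names)), 2)
            if abs(r[a, b]) >= r_thresh]

# Toy demo: a transcript and a metabolite that track the same signal.
rng = np.random.default_rng(1)
driver = rng.normal(size=20)                    # shared "biology"
layers = {
    "rna": np.vstack([driver + 0.1 * rng.normal(size=20),
                      rng.normal(size=20)]),    # rna:1 is pure noise
    "met": np.vstack([driver + 0.1 * rng.normal(size=20)]),
}
edges = correlation_edges(layers)
print(edges)    # the rna:0 / met:0 pair shows a strong correlation
```

In practice one would use many more samples, partial correlations or regularized network inference rather than raw Pearson r, and per-layer quality control before stacking, exactly the point made about noisy layers.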

Grant Belgard: Yeah, well, what are your thoughts on methodologies like Mendelian randomization, structural equation modeling, and so on?

Christopher Woelk: Yes. I mean, I’m not an expert in the genetics and genomics space. I actually had a great colleague at a previous startup who used to spend a lot of time trying to explain Mendelian randomization to me. But I like the concept of these methodologies, where you can look at the data set in different ways and get outputs. The trick is always to look across those outputs and see whether they agree with each other. If a lot of different outputs are pointing at the same pathway or the same target, then I think you’re in good shape.
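As background to the Mendelian randomization concept raised here: the simplest estimator, the Wald ratio, uses a genetic variant as an instrument for an exposure. The simulation below is a textbook-style toy, not a method discussed in the episode; effect sizes and allele frequency are invented.

```python
import numpy as np

def wald_ratio(beta_gx, beta_gy):
    """Single-instrument Mendelian randomization: estimate the causal
    effect of exposure X on outcome Y as the SNP-to-outcome effect
    divided by the SNP-to-exposure effect, assuming the variant is a
    valid instrument (it affects Y only through X)."""
    return beta_gy / beta_gx

# Simulate: genotype G raises exposure X, and X raises outcome Y by 0.5.
rng = np.random.default_rng(2)
n = 100_000
g = rng.binomial(2, 0.3, n)           # allele count per person (0/1/2)
x = 0.4 * g + rng.normal(size=n)      # exposure
y = 0.5 * x + rng.normal(size=n)      # outcome; true causal effect 0.5
beta_gx = np.polyfit(g, x, 1)[0]      # regression slope of X on G
beta_gy = np.polyfit(g, y, 1)[0]      # regression slope of Y on G
est = wald_ratio(beta_gx, beta_gy)
print(round(est, 2))                  # close to the true effect of 0.5
```

Because the genotype is fixed at conception, it is not confounded by the disease process, which is the "static and built in" property of the genomics layer that Topher points to.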

Grant Belgard: How does effective cross-functional collaboration look to you?

Christopher Woelk: Yeah, that’s a great question. For me, it’s interesting. Biology has gotten very complex, right? There was a concept of a polymath, probably a century ago, where as a scientist you could be an expert in every domain. Even just within biology, that’s impossible now. So to tackle some of these really interesting questions, you need that diverse group: clinical, commercial, AI/ML, your software engineer, your bioinformatician, your biologist. I’ve been in several collaborations where, in bigger pharmas, these people live in different departments, so you have to bring them together cross-functionally. It’s a little bit easier at smaller companies, like startups, where you’re pretty much already all on the same team because the company is only 50 people.

Christopher Woelk: And so you can bring those folks together, build the psychological safety much faster, and tackle whatever the problem is. But at the end of the day, you want to bring those cross-functions together, again, build this environment of psychological safety where everybody feels heard, there are no stupid questions. And then I found it sometimes can take up to a year before everybody’s speaking everybody else’s language because the clinicians think one way, the software engineers think another way, the biologists think a third way. And I’ve been in rooms before where I’ve seen a clinician arguing with someone from IT. They’re actually agreeing, but because their terminology is so different, they think that they’re on different sides of the argument. And so I love being in those rooms and basically guiding the conversation to show that everybody’s in agreement.

Christopher Woelk: We’re just using different semantics.

Grant Belgard: What role, if any, do foundation models or LLMs play in your work right now?

Christopher Woelk: That’s a great question. I think, yeah, I mean, LLMs are becoming fairly pervasive. In my current role, search and evaluation, I’m starting to stumble across some interesting companies that have consolidated data across clinical trials, poster abstracts at conferences on those clinical trials, and patent information. And then once they’ve pulled all that information together, being able to search across it or ask questions through an LLM type interface is starting to look really powerful. So that’s my current role. In previous lives, I got pretty interested in foundational models. I worked with a great company. They were a client of mine when I was consulting called Imugene. And they had built foundational models of histology images from cancer patients.

Christopher Woelk: And to cut a long story short, what they had been able to do is normally when you get cancer, they take a sample of that tumor, and it gets sent off for sequencing to figure out which biomarkers you have. And based on that biomarker profile, it can dictate which therapy you get. And what Imugene had done is they’d gone into the software as a medical device field, and they’d used the image data along with this molecular biomarker data on a subset of patients to build a foundational model that was a neural network that could basically recognize in the image data whether someone was biomarker positive or biomarker negative. And of course, why that’s important is that cancer patient has to sit around for a month and wait for their molecular data to come back, which is a long time in a cancer patient’s life.

Christopher Woelk: And at the time, around diagnosis when these histology images are coming back, if you can make that biomarker call right there and get the patient on the right treatment, you’ve saved four weeks of them not being on a treatment, which is huge. And so that’s a place where I really thought foundational models were having a big effect and a big impact on oncology patients.

Grant Belgard: And on the flip side, where have you seen AI methods under deliver and what tends to make them succeed?

Christopher Woelk: Yeah, I think this is a fascinating space. I’ve spent a bit of time thinking about this. Again, as a consultant, I would help out with strategic plans and platform initiatives for a number of clients, and a component of that was AI. So the story I have in my head, and I’ve tested this a bit out in the real world and I think it’s holding up, is that if you rewind the clock five years and were able to sit in a couple of C-suites at a couple of large pharmas, I think you would get the impression from the conversation that they thought AI was going to be the silver bullet. So let’s get some AI in, whatever that is. It’s going to speed up our drug discovery pipeline, it’s going to reduce our clinical failures, and it’s magically going to increase profits, and everybody’s going to win. And I think there’s been a realization that it’s not a silver bullet, right?

Christopher Woelk: People have gotten educated in this domain over the last few years. And in fact, the way that I see AI/ML, especially around the drug discovery pipeline, is a series of accelerators, so modules that you can sort of plug in and they’ll speed up a bottleneck or a particular problem in that drug discovery pipeline. And so I think we’ve had big problems in implementation. You can imagine that if AI is a silver bullet and you’re just going to apply something everywhere regardless of whether it works or not, that’s a path to failure. Whereas I think people have gotten a lot smarter about how to implement AI.

Christopher Woelk: And again, the really successful templates I see are looking at the drug discovery pipeline, identifying a bottleneck in that pipeline, having a strong problem statement, ensuring it’s a fit for an AI/ML solution, building that solution and proving that use case on that single component in the drug discovery pipeline, and then figuring out where else it applies or building other AI/ML tools to accelerate different parts of the pipeline. And then, of course, when we put all of those things together and we’re not there yet, I think we’re still several years away, but you will start to see, especially in the larger companies that have the budget to do this, the ability to accelerate drug discovery, decrease clinical trial failures, and increase profits. But I think the implementation and approach is the real change that is happening right now.

Grant Belgard: What’s overhyped and what’s underhyped in your corner of R&D right now?

Christopher Woelk: Yes. That is a good question. I don’t know where you think we are on that hype cycle curve, but I feel like everything was overhyped a couple of years ago. I feel like we’ve come down the backslope and we’re in that little valley of death. We’re coming up the other side.

Grant Belgard: Trough of disillusionment.

Christopher Woelk: Is that what it is? Yeah. Valley of death might be a little dramatic, but we’re coming up that slope of where the hard work begins and these things might actually work. So I’ve been in meetings before where we’ve been trying to build an infrastructure to handle multi-omics data. And we start talking about patient privacy. We start talking about homogenizing across different array platforms for calling SNPs. And someone’s come along with a sticky note with AI written on it, sticking it on every problem that we have, saying it’s going to fix that. So the danger, the hype, is what we were talking about earlier: that AI is going to fix absolutely every problem. I don’t think that’s true. I think there are problems that are suitable and problems that aren’t suitable. So we’re moving away from that fix-all hype to asking what the specific problem is and what the right solution is.

Christopher Woelk: And the solution just might be a database as opposed to a whole AI ML approach. But really finding those good use cases, I think, is important.

Grant Belgard: And a question that’s especially topical in light of the continued financing troubles in biotech. How do you keep institutional knowledge from getting lost, especially in the context of layoffs, downsizing, restructuring, et cetera?

Christopher Woelk: Yeah. So that’s a fascinating question. And I’ve actually wrestled with that question and tried to run projects in that space before. So you’re referring to knowledge loss. So what is knowledge loss? You’re right. It’s when somebody leaves a company and they take critical pieces of information with them in their head. And you can no longer do that thing because that person has left. And so I used to think about that, especially, I think, to your point, in our field. Again, over the last few years, there seems to be this two-, three-, or four-year cycle of either companies going boom and bust or people moving to get to a better position at a different company. And that’s in stark contrast to the pharma companies of old, where people would go and spend their careers. They would work there for 25 or 30 years. And so if you’re in that environment, there is no knowledge loss.

Christopher Woelk: And you just go down the corridor and you ask the subject matter expert and you get your answer. But in the current landscape where people are cycling every three or four years, you’ve got to really think about how you mitigate that knowledge loss. So one of the things that I did at a company is we built what’s akin to a Stack Overflow system, where anybody across the company could answer a question. And then the answer that was the best got upvoted and locked in as the correct answer to that particular question. And then as that data accumulated, you could start moving it into wikis and information pages at the company. And so again, I really found that those types of initiatives helped capture the knowledge that was in people’s heads and get it into a searchable database, so that when those people left, you could still find the answer to that question.
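The upvote-and-lock system he describes could be sketched as a minimal in-memory model. The class names, fields, and search function here are invented for illustration and are not taken from any real implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Answer:
    author: str
    text: str
    votes: int = 0

@dataclass
class Question:
    text: str
    answers: List[Answer] = field(default_factory=list)
    accepted: Optional[Answer] = None

    def lock_best(self) -> None:
        # Lock in the highest-voted answer as the canonical one.
        if self.answers:
            self.accepted = max(self.answers, key=lambda a: a.votes)

def search(questions: List[Question], term: str) -> List[Question]:
    # Naive keyword search over questions with a locked answer, so the
    # knowledge is still findable after the original expert has left.
    term = term.lower()
    return [q for q in questions if q.accepted and term in q.text.lower()]

# Capture one piece of tribal knowledge.
q = Question("How do we normalize the legacy array data?")
q.answers.append(Answer("alice", "Quantile normalize against the reference batch", votes=3))
q.answers.append(Answer("bob", "Just log-transform it", votes=1))
q.lock_best()
hits = search([q], "normalize")
```

In a real deployment the same idea would sit behind a database and a web front end, but the essential mechanism, vote, lock, index, search, is no more than this.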

Grant Belgard: What first drew you into computational biology and translational questions?

Christopher Woelk: Oh, that’s a great question. I think the honest answer is I was horrible on the bench. So I think this goes all the way back to my undergrad. I did a biochemistry and genetics degree at the University of Nottingham in the UK. And we had organic chemistry. We had biology labs, waiting for things to change color or for the centrifuge to stop spinning. I enjoyed the coffee breaks, but I was always frustrated at how long things took. And so when I was at Nottingham, I did my third year project in an evolutionary lab under a gentleman called Paul Sharp. And I realized that pulling down sequence data, aligning it, drawing family trees of bacterial families, at this stage it was all quite immediate. You could write code. You could run the software. I could get my answer in a day as opposed to several weeks.

Christopher Woelk: And I guess that speaks to me being quite an impatient person that opened up a whole world of computational biology for me.

Grant Belgard: What career move changed the way you think about drug discovery the most?

Christopher Woelk: Yeah, I think it’s that academic to industry transition. So I love my academic career. I did a lot of great projects. I was part of clinical studies. But I think in academia, and it’s understandable, people haven’t been inside a pharma company, so they don’t fully understand the drug discovery pipeline and all the steps and all the types of data and all the checkpoints that are required. And so when I moved into Merck, it’s a different language. It’s a different way of operating. It took me about a year to really understand the vocabulary and all the checkpoints and how a target gets all the way through to become a drug. And so that was a big transition for me. But then I really enjoyed it because you’re moving away from sort of the theoretical in academia to the real practical in industry.

Grant Belgard: What did you keep doing the same across these different environments and sectors, and what did you have to relearn in those key transitions?

Christopher Woelk: Yeah, that’s a good question. I think it really goes back to this concept of building happy groups and psychological safety. So in academia, my groups were like extended family. They’d come over for Thanksgiving. We’d go out for meals. It was a very close-knit group. And so when I moved into industry, I recreated that. And it works well with small groups, I think 10 or 15 people. I think it’s hard if you’re managing a group of 50 or 100. But I really enjoyed taking that personal element into industry and building those tight-knit groups and forging those relationships with my colleagues. And I found that when groups are happy, they’re very productive. When they’re having fun, they’re very, very productive. And so I like that part. It’s much more effective than going in and screaming at everyone every day to do their job.

Christopher Woelk: So I’ve always tried to maintain that through the jobs that I’ve had.

Grant Belgard: When you’ve considered new roles, what signals told you a team or culture would be a good fit?

Christopher Woelk: Oh, yeah, that’s another good question. I think, yeah, so my approach to interviewing, hopefully this will get at your question, is, of course, asking the same question to many different people. And if I get the same answer, that tells me that that team or that group is all on the same page and the objectives are clear. If I ask the same question, I start getting vastly different answers, especially from people in leadership. That tells me that team is not on the same page and that that’s a bit of a red flag and I need to be careful.

Grant Belgard: Interesting to note, that was the same kind of answer we got from the NASA engineer turned organizational culture expert I was telling you about before we hit record. Whenever he goes in to assess an organization, that’s the first thing he does: ask the same questions to people across the organization and particularly look for differences between the leadership and the people on the ground.

Christopher Woelk: Yes, yeah, because ultimately, if the objectives aren’t clear from top to bottom, then you’re not going to be an effective organization. But now you’ve got me thinking I might have missed a career on the space frontier at NASA, but we’ll leave that for another day.

Grant Belgard: What kinds of challenges have you found consistently energizing?

Christopher Woelk: Yeah, that’s a good question. So I think I am quite challenge orientated. So often I’ve been told, you know, you can’t get an NIH R01 before the age of 45, or you’ll never become a full professor. These are sort of personal challenges that I’ve come across. I think from a scientific aspect, what I find quite motivating are these really complex questions. Like, you know, again, we’ve generated five layers of multi-omics data in a longitudinal study, and we want to understand the mechanism of vaccine response. How do you put all those layers together across time in order to answer that question? And I find that motivating because it’s complicated. There is a literature that needs to be dived into to figure out what the solutions are.

Christopher Woelk: There are teams that need to be brought together to brainstorm where the gaps in existing solutions are and what we would do differently. There’s a strategic plan and an operational plan that needs to be pulled together to get that analysis done. And at the end of the day, there are results that start falling out of these studies. Some of them are what is already known, but especially when you hit those novel nuggets that people haven’t discovered before, I find that very motivating.

Grant Belgard: Who shaped your approach to science or leadership and what did you take from them?

Christopher Woelk: Yes, so there’s been a few people, quite a few great mentors over the years. I mean, I can go all the way back to high school biology. I had a great biology teacher, Mr. Williams, at a boarding school in the UK who really excited me about biology and set me on a biology path. My PhD supervisor is a gentleman called Eddie Holmes, who’s down in Australia these days, but I met him at Oxford University, and he really taught me about managing groups. In an Oxford academic group, there were some very different personalities and traits, and I noticed that he didn’t have one management style. He would adapt his management style to each individual to get them what they needed. And I always took that away with me in the groups that I managed: not forcing my style on everyone, but adapting to what each individual needed.

Christopher Woelk: And then I had another great mentor at UCSD, Douglas Richman. He really sort of helped characterize HIV resistance and how to get over resistance with combination therapies. But he was a great academic mentor and sort of taught me about the HIV world and how to climb the academic ladder. And then transitioning into industry, there’s a wonderful scientist called Daria Hazuda, who was my boss when I was at the Exploratory Science Center, and she really helped me understand how industry functioned and educated me on the industry side.

Grant Belgard: What has changed most about the field since you started?

Christopher Woelk: Yeah, that’s great. So I started, you’re going to date me now. I started as a postdoc at UCSD in 2002, when U95A Version 2 Affymetrix arrays were in vogue and the latest array type. And so, again, I think sequencing technology has really opened up a lot of biology that we didn’t have, especially in the transcript arena. And then watching the Human Genome Project kick off, watching Craig Venter lambast academia that we should do this faster and better, and then proving that you could by parallelizing sequencers, seeing sequence technology get better and better in a way that, you know, I don’t know what the dollar amount is on a genome now, but it’s a lot less than back in the early 2000s.

Christopher Woelk: I think the technology and the amount of data that we can get out of a human sample these days provide an incredible microscope to look at disease, one that we didn’t have when I started my career.

Grant Belgard: Looking back, what did you underestimate about working at the interface of computational biology?

Christopher Woelk: Yeah, that’s a good question. I think you’re reminding me of a conversation I had with a machine learner at Southampton, [?]. And so it’s basically around this concept of trusting the data that you’re given and not being more curious and exploratory around it. And so, to give a very specific answer to your question: if you looked at the old Affymetrix array data for expression analysis, it came with 14 decimal points. And so [Neurangin?] sat me down one day and said, is this data accurate to 14 decimal points? And I said, what do you mean? And he goes, do we need them? And I said, well, of course we need them. It’s the data, it’s coming off the machine. And he goes, well, let me show you something.

Christopher Woelk: And he’d binarized the data, basically zeros and ones, and showed that he could get the same answer that I did when I was using, you know, 14 decimal points. And so that was a surprise to me, right? That, oh, okay, there are different ways to look at this data. I should be more curious about these 14 decimal points. And it always stuck in my head that he educated me that just because it’s coming off the machine doesn’t mean it’s useful.
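The binarization point can be illustrated with a toy sketch: simulated expression values ranked by a naive mean-difference score, once at full precision and once after reducing every value to 0 or 1. This is purely illustrative and not the original Southampton analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: 500 genes x 20 samples (columns 0-9 control,
# columns 10-19 treated), with the first 50 genes truly upregulated.
expr = rng.normal(8.0, 1.0, size=(500, 20))
expr[:50, 10:] += 2.0

def top_genes(matrix, k=50):
    # Rank genes by mean difference between treated and control samples.
    diff = matrix[:, 10:].mean(axis=1) - matrix[:, :10].mean(axis=1)
    return set(np.argsort(diff)[-k:])

# Binarize: 1 if a gene's value is above that gene's own median, else 0,
# discarding all of the apparent decimal precision.
binary = (expr > np.median(expr, axis=1, keepdims=True)).astype(float)

full_precision = top_genes(expr)
binarized = top_genes(binary)
overlap = len(full_precision & binarized) / 50
print(f"top-50 agreement between full-precision and binarized calls: {overlap:.0%}")
```

The two gene lists largely agree, which is the lesson: much of the trailing precision carried no extra information about which genes were differentially expressed.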

Grant Belgard: For someone just finishing a degree or fellowship, what skills would you prioritize in their first year on the job?

Christopher Woelk: Yeah, I think that’s another good question. I think it’s an interesting landscape right now. You know, my girls are 16, they’re heading into college in a couple of years, and they’re saying this generation is going to change jobs six or seven times in their lifetime. So, you know, I used to hate this phrase, thriving in ambiguity, but really getting used to change, right? Because it’s coming, with all the AI impact, greater efficiencies, new technologies. I think you’re going to have to be very flexible in your career. And then I went to a career advice workshop when I was an undergrad, and the gentleman got on stage and said, don’t stress too much about where you are today starting your career, because when you finish your career, you’re going to be in a completely different place. And that didn’t appeal to me at the time at all.

Christopher Woelk: I thought he was speaking rubbish, but as I’ve looked at my own career, that’s exactly what has happened: where you start and where you end up are completely different. I started in a very technical field, and now I’m in more of a research and evaluation role. Just being able to go with the flow of that career, and making sure that you’re always curious and you’re always doing something that you find interesting, is really rewarding.

Grant Belgard: How can scientists tell whether management is a good next step for them?

Christopher Woelk: Yes, I’ve had this conversation dozens of times in my career too, because there are these three tracks, right? There’s the management track, there’s the independent contributor track, and then there’s sort of a middle track where you’re an independent contributor, but you have a couple of reports. And I can tell you what really helped formulate my thinking in this space was the WorkLife podcast series by Adam Grant. Is it Adam Grant? Yeah, I think it is. And he’s this workplace psychologist who gets out into groups and really tries to understand what makes innovative groups tick. But he has a particular podcast exactly on your question of am I management or am I independent contributor?

Christopher Woelk: And the problem is that the management track is often the one that everybody thinks they should be going down, because it seems to come with titles and salaries and increased responsibility, but it’s not a good fit for everyone. So there are cases where people leaped into the management track, were absolutely miserable, and then ended up in the independent contributor track. And so I think what you really need to do is sit down with a mentor, or sit down with a whiteboard, and try to figure out the things that really motivate you. Do you like coding? Do you like working directly on the data? Do you like solving problems? That feels more independent contributor. Versus do you like mentoring people? Do you like helping other people solve their problems? That feels slightly more like going down the management track.

Christopher Woelk: And I think that, you know, to one of your earlier questions about how do you assess companies or organizations, this is another thing that you can do as you’re looking to onboard at a company. You know, what is their management track and what is their independent contributor track? And do they have an independent contributor track that has senior positions that are equivalent in status and in salary to the management track? And if that’s the case, then that company’s really thought about valuing both managers and independent contributors in a way that I would wanna work at that company.

Grant Belgard: What signs suggest it’s time to change roles?

Christopher Woelk: Yes, there’s a rubric that I work through for that. I’ve worked through it with myself and I’ve worked through it with mentees. And again, it came from this gentleman, Adam Grant, so I do encourage you to listen to that. The first season of that podcast is fantastic. So it’s voice, loyalty, and alternatives. And so if I’m at a job and there’s a problem or something that needs fixing, then the first thing I do is use my voice, right? So I highlight the problem, I talk to people, I try to make the change by following change management procedures and speaking up. Now that doesn’t always work. Sometimes you’re ignored. And so then you move on to this loyalty bucket. So you’re at a company: are you still loyal to the mission of the company? Are you still loyal to the objectives? Are you still loyal to the people that you work with and that team? And maybe that all still feels really strong.

Christopher Woelk: But if those loyalties start to get frayed, then I think you start looking at alternatives and those alternatives of course are, what else can I do with my skillset? Can I find a similar role at a company elsewhere? Could I find a different role with my skillset? And then you start exploring those alternatives. But I just found that quite a useful rubric, the voice loyalty alternative. You can work through that and it helps you sort of relax through a very stressful process.

Grant Belgard: What books, papers or resources would you suggest to someone entering this space today?

Christopher Woelk: That’s a good question. I think, again, scientifically, everybody’s pretty familiar with downloading and reading papers, staying up with the research. The thing, at least with my old manager hat on, that’s been harder to teach is soft skills. And so what I’ve often done, as I see people in my groups who could be going down that management track, or who are just really talented independent contributors, is give them some literature around soft skills. So I used to give out a book called The One Minute Manager, which is a great quick read. And the take-home message is one-minute objective setting: everybody should know the objectives. There’s one-minute praising: when people do something right, you should tell them they’re doing something right.

Christopher Woelk: And then one-minute course corrections: don’t wait for things to go completely off the rails, but get people back on track early on when you see problems. And that’s just a nice little template to run a group. I’ve transitioned recently, again sticking with soft skills, to a book by a friend of mine called Gwen Acton. I think it’s Leadership for Scientists and Engineers. And it’s a very comprehensive manual explaining the soft skills that are needed in STEM to be successful. She’s got some great examples and role-playing exercises in that book, and then a series of things that you can do when you find yourself in certain situations. And so I’ll often give that book out as well. But to wrap up the answer to this question, giving out these types of materials to help people develop their soft skills is something that I’ve found really important.

Grant Belgard: And last but not least, if you could go back and give just one piece of advice to your younger self, what would it be and why?

Christopher Woelk: Oh, wow. Yeah, I think there’s this phrase, this too shall pass. And so there’ve been fairly stressful parts of my career, trying to get grant funding, transitioning jobs in industry. And it sometimes feels like these periods are never going to end, but this too shall pass. Hang in there, get the work done, try and show some strong deliverables, and ultimately you’ll find yourself in a more productive place.

Grant Belgard: Great, Topher, thank you so much for joining us.

Christopher Woelk: Oh, it was my pleasure, great questions. You had me thinking there.

The Bioinformatics CRO Podcast

Episode 75 with Chris Yohn

Chris Yohn, leader of CompBio Bridge, discusses his current experience with computational biology contracting and consulting, what companies are doing with computational biology right now, and how to most effectively bridge the gap between data science and the wet lab. 

Chris Yohn

Dr. Chris Yohn is a computational biologist who currently leads CompBio Bridge, which provides a fractional strategy and management practice to help biotech teams bridge data science with the wet lab.

Transcript of Episode 75: Chris Yohn

Disclaimer: Transcripts may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard. Today we’re joined by Dr. Chris Yohn, a biotechnology leader and computational biologist. He currently leads CompBio Bridge, a fractional strategy and management practice that helps biotech teams bridge data science with the wet lab. Previously, he headed computational biology at TRexBio and held discovery leadership roles at Unity Biotechnology, with earlier industry experience spanning platform buildouts and translational programs. He trained at Scripps Research and later completed postdoctoral work at the Skirball Institute in New York. Chris, welcome.

Chris Yohn: Thanks, Grant. It’s great to be here.

Grant Belgard: How do you describe the work you’re focused on right now?

Chris Yohn: So currently, I do computational biology contracting and consulting. Think of it as a fractional head of computational biology, typically for small companies that maybe can’t afford or aren’t ready to bring on a full-time head of Comp Bio.

Grant Belgard: What kinds of problems are showing up most often in your engagements?

Chris Yohn: I’d say there are probably three main categories. First is early target identification and validation. Then, of course, once you have a program, there’s translational informatics. In that, I would include things like mechanism of action studies, biomarker selection and discovery, indication selection, even some tox flags that you might be able to point out for a program that’s headed towards the clinic. The third category that I think is important, and that comes up pretty frequently, is research informatics. So this is really, you know, essentially managing your data: making sure you capture your data well, and that once you capture it, you can use it and visualize it.

Grant Belgard: That’s been fun this week with the AWS outages.

Chris Yohn: Yeah, for sure. Yeah.

Grant Belgard: We’re recording this a good while before it comes out, just for our listeners. AWS hopefully did not go down the week you’re listening to this. So when a new group asks for help, what do you listen for in the first 15 minutes?

Chris Yohn: You know, so my original training is in molecular and cell biology. So, you know, I’m a biologist at heart. So really what I’m thinking about are what are the key biological questions that need to be answered? What’s going to help advance the company? What’s going to advance the programs they’re working on? What’s going to hit their goals? So what is the biology that’s underlying it and what are the questions that they need to really address for that?

Grant Belgard: And what does success look like in a typical project? How do you measure it?

Chris Yohn: Maybe it’s easiest if I give a couple of quick examples. So one company I’m working with, I’m helping them with some mechanism of action studies. And in this particular case, and this is not typical for a lot of companies, one of their major goals for this study is publication. You might think of that more for academics, but sometimes companies have that goal, too. So that’s a pretty concrete goal and metric that we can use: if the study helps lead to a publication, then that’s success. Another example is I’m working with a group basically to figure out, is there a company here? The company hasn’t even been formed yet. So is there enough here to actually get something off the ground? In that case, I guess getting the company started would be the measure of success.

Chris Yohn: And frankly, you know, I think in that case, making a decision not to start the company could be just as good an outcome. Right. So that’s a good decision, too.

Grant Belgard: Right. You have to know where to allocate resources.

Chris Yohn: That’s right.

Grant Belgard: So where do you see the biggest disconnects between data science and the bench today?

Chris Yohn: You know, many, including myself at some of my previous companies, would talk about this sort of design-build-test loop that really helps, you know, once you get data, to bring it back into your modeling. Unfortunately, in many cases, it’s not always a loop. It’s kind of a one-way trip, right? And I think that’s where we see some disconnects. You know, the vision is there, but sometimes the execution to bring the data back into your modeling doesn’t always happen.

Grant Belgard: If you had to pick one capability that most accelerates discovery for your clients, what is it and why?

Chris Yohn: You know, this might be a little bit related to the last question, and I’m not going to pick a technical capability. I’m going to say communication. Because I’ve had a pretty diverse background, I call myself a multilingual scientist. I’ve worked in a lot of different areas, and because of that, I’m able to really translate between different disciplines. And I think that’s what can really accelerate discovery: if you can increase communication, help different groups really understand each other and understand what they’re capable of, what their needs and goals are, and then how to move forward with that. I think that’s really what can help discovery move forward quickly.

Grant Belgard: When timelines are tight, how do you choose between depth of analysis and speed to decision?

Chris Yohn: You know, this is probably a common theme for our talk. You know, I really always go back to what are the key questions? Like, you really have to understand what’s the question that’s going to advance your program? What’s the question that when you get the answer, you’re going to make a decision based on it? And so if you can define what that key question is, then you go deep on that and you really dig in on that question. And kind of others that maybe are interesting but aren’t going to help you move forward fall by the wayside. At least when time and money is tight, you’ve got to do that.

Grant Belgard: What’s your framework for deciding build or buy?

Chris Yohn: I always lean towards buy, frankly. I want to rely on people who focus on building things; you know, focus on your expertise. Again, I’m going to focus on the biological questions, and if I need tools for that, I want to find somebody who focuses on building that tool and then use it, as opposed to trying to make it myself. Plus, frankly, software engineers are pretty expensive. So if you don’t really need to bring that capability in-house, then I’d rather rely on someone else who’s putting all their energy and effort into building a tool that I can then make use of.

Grant Belgard: Where do you see multi-omic analyses and single-cell or spatial data actually changing decisions?

Chris Yohn: Yeah, you know, sometimes you do see cases where it’s not peripheral, but it’s just not core to really making things move forward. I’ve seen a few. I helped build a target identification platform based primarily on single-cell data, and we used that for some of our translational work. But really, to have a big impact, it’s got to be baked into the core approach of what you’re doing. It can’t be kind of an add-on. I do think that one place you can have a big impact, especially as you move towards translation and getting things closer to the clinic, is certainly mechanism of action studies, right? That’s going to really get you a lot more insight.

Chris Yohn: And then I think we’re starting to see a little bit of traction even in biomarkers, where people are starting to bring more multi-omics technology later into the clinic. I think that’s going to start to really help us understand markers that we can use for things like pharmacodynamic readouts, as well as hopefully, eventually, even patient selection and stratification down the road.

Grant Belgard: How do you approach data readiness, metadata, QC and so on?

Chris Yohn: I think you really want to start with consistent, you know, semantics. You know, make sure your IDs, ontologies are all kind of in place. Make sure all parties both on the wet lab side and the dry side really agree ahead of time. And then, you know, I think including biological QC in addition to sort of statistical QC of your experiments, I think is important, like did the experiment even work, right? An example is recently I was working with a company and they did this in vivo experiment where we were doing, you know, some omics readouts on it and we were looking at the data and let’s just say we didn’t see the effect we expected. Some cases we did, so there was like some old and young animals and you could definitely see differences there, but they had a compound treatment and they just didn’t see anything.

Chris Yohn: And so I went back and we talked about the experiment and unfortunately in that case they didn’t have any biological readout from the animals that we used for that study. So we didn’t know like did they see the effect they normally would see with their drug? Maybe somebody misdosed them, maybe like somebody left the drug out on the bench the night before and it was no longer effective and we just had no information. So having that biological QC would have made a huge difference for that experiment.
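The biological QC Chris describes, confirming a known positive-control effect before interpreting the treatment arm, can be as simple as gating the analysis on a control contrast. Here is a minimal illustrative sketch with simulated data; the marker, effect sizes, and threshold are all made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated readout for a marker known to differ with age (the positive control)
young = rng.normal(10.0, 1.0, 8)
old = rng.normal(13.0, 1.0, 8)  # the known age effect is present here

def cohens_d(a, b):
    """Standardized difference between two groups."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

# Gate the analysis: only interpret the treatment arm if the control effect shows up
if abs(cohens_d(young, old)) > 0.8:
    print("positive control passed: the experiment 'worked', treatment results are interpretable")
else:
    print("positive control failed: question dosing or handling before trusting any treatment result")
```

In the scenario from the episode, a check like this on the drugged animals would have distinguished "the compound had no effect" from "the experiment itself failed."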

Grant Belgard: Yeah, that happens far too often and oftentimes, you know, people like you aren’t brought in until after the experiments run, right?

Chris Yohn: Exactly. I mean, that’s a huge point, right? I think that being involved early on as a computational biologist in experimental design is so important. And, you know, not to go off on a tangent here, but I think most computational biologists and bioinformaticians have experienced someone coming to them, giving them a pile of data and asking the question, what does it say, right? And that’s like the worst experience, I think. So, yeah, definitely getting involved early is critical.

Grant Belgard: Especially when it’s multimodal data, right?

Chris Yohn: Yes, even worse.

Grant Belgard: It does many things.

Chris Yohn: Yes, that’s right.

Grant Belgard: Next question: how do you pick evaluation metrics that matter to the biology?

Chris Yohn: You know, it has to fit the biology and the question and what the next testing step is. Like, you want to make sure that you’re getting an answer that’s going to help you make a decision, and also making sure your level of information fits your question. So, for example, let’s say we’re picking some targets and you have a screening platform you want to put the targets into, and you can fit maybe 20 things into your screening platform. What you want is what are the top 20, right? You don’t really care about the relative order of numbers two, three and four. You just want to know, am I accurately getting the top 20? So designing your experiment so that you get that answer, and not what is two versus three, is important.
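The top-20 idea here can be made concrete as a metric: score recovery of the top-k set rather than exact rank order. A small illustrative sketch (the gene names and scores are hypothetical):

```python
def top_k_recovery(predicted_scores, true_scores, k=20):
    """Fraction of the true top-k targets recovered by the predicted top-k.

    Ordering within the top k is ignored; only membership matters, which
    matches a screening platform that can test k candidates at once.
    """
    predicted_top = set(sorted(predicted_scores, key=predicted_scores.get, reverse=True)[:k])
    true_top = set(sorted(true_scores, key=true_scores.get, reverse=True)[:k])
    return len(predicted_top & true_top) / k

# Toy example: three candidate targets, screening capacity of two
pred = {"TP53": 0.9, "EGFR": 0.8, "KRAS": 0.1}
truth = {"TP53": 1.0, "KRAS": 0.9, "EGFR": 0.2}
print(top_k_recovery(pred, truth, k=2))  # 0.5: one of the two true hits recovered
```

A rank-correlation metric would penalize swapping targets two and three; this one only cares whether the screening slots are filled with the right candidates.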

Grant Belgard: What’s your process for closing the loop, turning predictions into testable decision-relevant hypotheses?

Chris Yohn: I think it’s kind of related to the last question, you know, about making sure that you fit the experiment to the biology. I think also really important here is making sure you have a really good collaboration between the wet and dry side. You need to kind of have buy-in ahead of time that you’re going to be able to test the predictions. You know, as computational biologists, almost everything we do is just a prediction, right? And in order to really show that this is truth, you need to go into the lab most of the time to prove it out. And so making sure that that’s in place ahead of time, I think, is important. Yeah.

Grant Belgard: In translational settings, what’s the most underrated biomarker characteristic to pressure test early?

Chris Yohn: For that, I would say one thing that I’ve seen is donor or patient variability. Often, especially when you’re doing multi-omics experiments early on, it’s hard to get a large N for your study. And you may not have fully looked at the amount of variability that you might be seeing once you move forward into a clinical setting. So as much as you can, paying attention to donor and patient variability and doing maybe follow-on experiments with larger numbers, where maybe you hone in on a particular set of biomarkers or assays versus, you know, maybe early discovery or kind of bigger experiments with smaller N. But that’s definitely something that I think you really have to pay attention to.
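Donor variability like Chris describes can be eyeballed early by partitioning variance into between-donor and within-donor components. A toy sketch with made-up biomarker values:

```python
import numpy as np

# Hypothetical biomarker measurements: four donors, three technical replicates each
values = np.array([
    [5.1, 5.3, 5.0],   # donor A
    [7.8, 8.1, 7.9],   # donor B
    [5.9, 6.2, 6.0],   # donor C
    [9.0, 8.7, 9.2],   # donor D
])

within = values.var(axis=1, ddof=1).mean()   # assay noise within each donor
between = values.mean(axis=1).var(ddof=1)    # donor-to-donor variability
print(f"between-donor variance {between:.2f} vs within-donor {within:.2f}")
# If between >> within, a small-N discovery study can badly
# underestimate the variability you'll face in a clinical setting
```

With only a handful of donors the between-donor estimate itself is noisy, which is exactly why follow-on experiments with larger N on a focused biomarker panel are worth the investment.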

Grant Belgard: I totally agree. How do you keep analyses reproducible without slowing teams?

Chris Yohn: That’s a tough one. You know, usually, you know, I’ve always been at small companies and, you know, you’re always moving fast. And I think one of the things that, you know, we talked about at one of the companies I worked at previously was everybody has to eat their vegetables, meaning that, you know, everybody wants to like do sort of the quote unquote fun analysis where you get to the interesting biological result. But in order to get there, you need to have like, you know, the infrastructure and the process in place. And so we used to say everybody has to eat their vegetables. Everybody has to do some of that as well as sort of more fun analysis. So spreading it out, I think, helps.

Grant Belgard: So on that note, what are your thoughts on, you know, the recent rise of bioinformatics agents? Because I have to say one concern I have is that a lot of the vegetable eating is skipped to some extent, right? So there may be confounds in how the data was produced that, you know, if you’re going through it properly eating your vegetables, you know, looking for all those things, you catch that early. And otherwise, you might get some really nice volcano plot, but it might be nonsense.

Chris Yohn: Yeah, yeah. No, I think it’s a great point. And, you know, I think it’s important to understand the fundamentals. And unfortunately, you know, some AI approaches are going to enable people to skip that. I even think back to like when I was working in the lab and a new cool kit would come out for, you know, doing some process, even, you know, like simple things like mini preps or whatever. And when I was in grad school, my advisor forced us to kind of do it the old school way first so that we really understood the process. And then you could go to like the fancy kit that did it really quick and fast and with simple steps. So I think the same thing applies here. Like I would hope that as we’re training people that we continue to make sure people understand the fundamentals before they jump to sort of the quick and easy path. It’s great to have those. Like I’m not discounting them, right?

Chris Yohn: Like I use them. And but I think knowing the fundamentals and how it actually works under the hood is key.

Grant Belgard: How do you handle batch effects and confounders when experiments are multisite or longitudinal?

Chris Yohn: That’s a tough one. I mean, it’s the one thing that, you know, kind of hits anybody who does these kind of analyses. You know, I think this also gets to what we touched on earlier about being involved in experimental design, because I think if you were involved in the experimental design, then you can help to try to minimize those variables as much as possible. And the other thing is, I think you need to make sure as you’re looking at the data, you model both technical variance as well as biological variance and have them both like distinct so that you can as much as possible understand like where things are, where the variance is coming from. And then if it’s the biological, then you can start to understand like what are your biological questions. I mean, I don’t have a great solution, right? That’s a tough one. And I think everybody struggles with that.
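Modeling technical and biological variance distinctly, as Chris suggests, often means putting both in the design matrix and asking how much each explains. Below is a minimal numpy sketch on simulated data; real pipelines would typically reach for dedicated tools (e.g. limma or ComBat, which the conversation doesn't name), so treat this purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
batch = np.repeat([0.0, 1.0], 6)     # technical covariate: two processing batches
condition = np.tile([0.0, 1.0], 6)   # biological covariate: control vs. treated
# Simulated expression: biology adds 2.0, batch adds 0.5, plus noise
y = 2.0 * condition + 0.5 * batch + rng.normal(0, 0.3, n)

def r2(y, X):
    """Variance explained by a least-squares fit of y on design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

intercept = np.ones(n)
r2_batch = r2(y, np.column_stack([intercept, batch]))
r2_full = r2(y, np.column_stack([intercept, batch, condition]))
print(f"batch alone explains {r2_batch:.2f} of the variance; adding biology brings it to {r2_full:.2f}")
```

Comparing the two fits tells you where the variance is coming from: if the batch term alone soaks up most of it, you have a technical problem before you have a biological question.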

Chris Yohn: So I don’t know if you have any like magic wand that you’ve used that you can help me and your listeners to deal with this.

Grant Belgard: Yeah, I mean, it’s a question we get a lot. And unfortunately, if it’s not baked into the design from the get go, it can be very difficult to do well. I mean, of course, there are approaches to try to mitigate it, but they introduce their own artifacts, right? Unless you have proper controls run everywhere. And ideally, you know, you’re not changing your array midstream or something, right? That causes huge problems. You can do things to try to get around it, but they’re not going to be perfect. They’d be far from perfect.

Chris Yohn: Yeah, yeah. I mean, and that’s a good point, too, right? It’s really making sure that you pick the right platform and approach at the beginning, so that you don’t realize halfway through that, oh, this is not really fitting my needs and you have to switch something. And obviously that throws in a whole other set of issues around batch. So, yeah.

Grant Belgard: So when a single cell or spatial data set underwhelms, what’s your troubleshooting playbook?

Chris Yohn: I think first you probably need to define, again, whether it’s a technical or a biological reason that you’re getting underwhelmed. Then you go back to your QC. And this is like that experiment I was mentioning earlier, where it turns out that we didn’t really understand if there was a biological effect. So, you know, talk to the experimentalist who produced the data: was there anything unusual? Sometimes you can talk to them and they mention, oh yeah, it so happens that these samples looked a little odd when I was processing them, but I just went ahead with it. And then that can maybe explain what you’re seeing in the data. So I think that’s an important thing to follow up on. So really, you know, trying to gather as much information as you can to try to explain why you’re not seeing the effects that you had hoped or expected to see.

Grant Belgard: Where does simulation or in silico perturbation add the most value in your experience?

Chris Yohn: For that, I would say if you have like a really big space that you want to explore, that is just impossible or intractable to approach from in the wet lab, then those simulation or in silico perturbation type approaches could help you then limit or focus your wet lab experiments. And again, I’m probably showing my biological and lab-based bias in that answer a little bit, right? Because I’m always headed back to how do you validate it in the lab, right? So for me, you know, doing simulations or predictions from models just helps you to be more efficient in your lab work, I think.

Grant Belgard: Yeah, totally agree. What’s one technical belief you’ve changed your mind about in the last two years?

Chris Yohn: Hmm, that’s interesting. Well, maybe I’m in the process of changing my mind on this one. I haven’t quite settled yet, but if you had asked me a year or two ago, I would have said that in order to build a good model, you really need highly structured, clean data. I think that’s still true. The thing I’m maybe changing a little bit on, and this is all driven by large language models and everything we’ve seen with ChatGPT, et cetera, is that the fact that they can make sense of sort of the messy data of language makes me reconsider: maybe we can get good value out of the corpus of messy data that we currently have in biology, right? So if I have a choice, I’m still going to go to well-structured, clean data as my go-to, but maybe there’s going to be more value out of the messy stuff than I first thought.

Grant Belgard: Switching to talking about building teams and operating models, what responsibilities do you believe belong inside computational biology versus in a central data organization?

Chris Yohn: So I’ve always been at small companies, so usually that’s one organization, usually not a separate group. But I think if you do have it split, certainly biological interpretation, right, lies in the computational biology group, whereas maybe more like infrastructure and enablement of being able to answer those questions, you know, data platforms, you know, shared services are going to be in that central data organization. But that’s, like I said, that’s not from personal experience because for me, it’s always been one and the same in a small group.

Grant Belgard: What competencies do you expect from computational biologists versus data scientists or machine learning engineers?

Chris Yohn: Again, probably my small company bias is showing, but I think there’s overlap. Like you need people who can do a little of a lot of things. But generally, I would say for computational biologists, it’s more about really understanding experimental design, getting to the biological results, sort of why things matter. Data science, for me, is more about modeling: really rigorous analysis, good statistical approaches to the work, model building, essentially. An ML engineer is more about scale, right, more systems-based. Then you’re talking about bigger data sets and really bringing a lot of things to bear and getting to, like I said, more scaled approaches.

Grant Belgard: How do you operationalize scientific prioritization when everything looks interesting?

Chris Yohn: I think the key thing is you need to look at an experiment you’re doing and then decide what decision am I going to make based on the result. So if the result of this experiment is X, I’m going to do this. And if it’s Y, I’m going to do something else. Right. So that really helps, I think, to prioritize what you move forward with.

Grant Belgard: How do you approach hiring in a market with both mass layoffs and at the same time intense competition for certain niche skills?

Chris Yohn: Yeah, it’s really an interesting market for sure on the hiring front lately. You know, I go back to something that I think is pretty critical, especially, again, at small companies: oftentimes it’s about culture and sort of mission alignment. I mean, certainly, obviously, you need to make sure that the skills you need are there. And there are a lot of people out there looking for jobs, so if you’re hiring, you kind of have your pick a little bit, but certain skills are still in high demand. So to me, whether you’re in that environment or in a different kind of hiring environment, it’s so important that the folks that you bring in are aligned with, you know, sort of the culture and what you’re doing in the company. You know, I’ve unfortunately had experiences where someone isn’t right and it just throws everything off.

Chris Yohn: So you’ve got to have the baseline of making sure, like the technical competencies are there. But then to me, getting that alignment is is really a critical part of hiring.

Grant Belgard: Yeah, we actually just recorded a podcast with an expert in organizational culture and kind of the emergent properties of individuals. Right. And how, you know, taking the most skilled, best and smartest people in every function and sticking them together rarely creates the most effective team.

Chris Yohn: That’s right. That’s right. We’ve probably all experienced examples of that, of dysfunctional teams. So then you kind of figure out from that maybe what the right approach is.

Grant Belgard: Yeah. So looking back, what were the pivotal decisions that led you into computational biology in your own career?

Chris Yohn: Oh, wow. You know, I was doing my postdoc in a fly lab doing developmental genetics. This was a while ago, like late 90s, early 2000s, when genomes were being sequenced and just a lot of great technology was coming out. And I think, you know, in my graduate and postdoc work, it was really still kind of a single gene focus. Like I literally worked on a couple of very specific genes in both my graduate work and postdoc. And seeing what was possible as genomes were being sequenced really inspired me, so I started getting into it in my postdoc and took some programming classes and started doing some work there. And then when I left, my first biotech job was as a bioinformatics scientist.

Chris Yohn: So, you know, I think just that timing, that time was really pivotal for, yeah, just the advances that we were seeing.

Grant Belgard: Yeah. And can you talk about how that transition was for you from academia to biotech?

Chris Yohn: Yeah, I think the way I like to talk about it is in academia, you have time, but no money. And in biotech, you have money, but no time. So that’s really the…

Grant Belgard: Except right now where you neither have time nor money.

Chris Yohn: That’s a good point. And I think along with that, like the willingness to take risks is much greater, right? Because you don’t have time. You’ve got to just try things and move forward. So that was a real difference. And that’s why whenever I talk to people who are kind of thinking about the transition, like that’s one of the things I really try to help them understand, because I’ve seen people make that transition well. And I’ve seen people struggle with it.

Grant Belgard: Yeah. I would say that that’s, I think, the most common answer we get from people and certainly an observation I’ve had. So what experience has prepared you to manage both bioinformatics platform buildouts and translational aspects of that?

Chris Yohn: When I was at Unity Biotechnology, we were working on diseases related to aging. We did a lot of early sort of discovery around new applications in different diseases. And at the same time, we had programs that were advancing into the clinic. So, for example, I helped design and execute a biomarker clinical trial for osteoarthritis while I was also working on exploring new indications that we could potentially get into. That really helped me to understand what was necessary to move things towards the clinic, but also the exploration that you have to do on those platform buildouts. So being able to do both at one time was really great.

Grant Belgard: What’s a fork-in-the-road moment where you’re glad you chose the path you did? And what’s one where, if you had to do it over again, you would make a different choice?

Chris Yohn: Probably, so I’ve spent a lot of my career in San Diego. And then about a decade ago, I moved up to the Bay Area, and I think that move was great. It really allowed me to expand my network, and there are a lot of opportunities here. I mean, San Diego is awesome. I love San Diego. It’s got a great biotech community, but the Bay Area is just another level. And that’s been really a great opportunity. And I’ve really enjoyed the work that I’ve been able to do here. In terms of something I would do differently, I’m not sure if there’s anything I would say. I mean, I don’t know, maybe I’d buy Nvidia stock 10 years ago. In terms of my career, I’ve kind of followed opportunities. That’s kind of been my path. It’s not like I’ve decided this is the thing I want to do and I’ve pursued it with passion. It’s more about seeing interesting opportunities and following up on them.

Chris Yohn: And so I don’t think there’s an opportunity that I chose that I would have preferred to have passed on at this point.

Grant Belgard: What habits or practices have been most durable across very different problem domains?

Chris Yohn: I think, and sorry if I’m being a little redundant, but I still go back to focusing on the key questions. That’s so important because I’ve worked in biofuels, in early stage, late stage clinical, across different therapeutic areas, different modalities. And no matter what, in order to really focus, you have to understand what is the question that’s going to help me move forward and do everything you can to get an answer to that question. So I would say, and there’s sort of two pieces in that answer where I say focus on the key questions. You know, certainly part of it is the key questions and the other part is that focus word, right? Because it’s so easy to get distracted. There’s so many things you can do. So making sure that you focus on what’s important has been so important to me.

Grant Belgard: So I’d like to get your thoughts on advice for people at different stages of their career, with a number of questions. Firstly, for grad students and postdocs, where do you think they should invest their time and focus in learning over the next year?

Chris Yohn: Well, at the risk of sounding like probably what many other people say, you know, I think the sort of obvious answer is to really understand how AI is going to impact what they’re studying, how it’s going to impact them. I think a really important aspect of that is what are the limits of what AI is going to be able to do for you and to you a little bit, but also like what are the opportunities that you can use, that you can follow up on in your studies or in your work. Like I said, it’s maybe an expected answer, but I think it’s super, super important today.

Grant Belgard: And for scientists moving from wet lab to dry lab, what’s your recommended on-ramp?

Chris Yohn: I would say, if you can, look at your own data. I mean, certainly there are a lot of tutorials and places where you can download data and learn on that. But if you can look at your own data, I think you’re going to be much better off. Like, you know the data, you know what the limitations are of the data, you know what makes sense in the data. So I think that’s going to help you a lot more than coursework or tutorials. And certainly, if you can find one, find a mentor who can kind of walk with you, just to keep you from making silly mistakes that a lot of people probably make when they’re just getting started.

Grant Belgard: For first time computational biology managers, what advice would you have?

Chris Yohn: I would say you really want to kind of understand the landscape. Like, what do you have? Do you have a team? What are the pipelines that are in place? What kind of data do you have? I think for new managers, usually the advice is, you know, don’t come in and start changing everything. You need to learn first, right? And I think that applies here as well. So understand the landscape. And I think out of that, most important is probably really understanding the data, both what you have currently and what’s planned. And then if there’s data being planned, get involved in planning those experiments, right? That’s really critical: plug in, get on program teams, get to the project managers, the people who are actually moving things forward, and get into the planning as soon as you can.

Grant Belgard: And for scientific founders or heads of R&D, how do they set problem statements that are tractable and can be decision driven?

Chris Yohn: I think you have to define the scale of the question or the problem statement so that you can get to a decision. I mean, maybe that’s kind of built into your question, but, you know, you don’t want your problem statement to be too big, right? Like, can we cure Alzheimer’s? Obviously, that’s way too broad. But if that’s your ultimate question, you need to break that down to the point where you get a question that has a clear go, no go at the end of it, right? You know, define your problems by what they allow you to decide next, not just by, oh, the data we’re going to generate or something, right? You want to be clear: I’m doing this experiment to get this data that’s going to enable me to make this decision.

Grant Belgard: What types of structured communication, for example, memos, dashboards, formal reviews, and so on, do you find most effectively inform and drive decisions?

Chris Yohn: It varies a lot. I mean, to me, the best tool is the one that actually gets used, whatever that is. You know, I’m actually starting an effort right now with a company to create some dashboards, and we’re figuring out what those use cases are. And it’s going to be different. Like, we actually kind of defined the two extremes. One is the person who is a little more data savvy and wants, basically, a big download dump of data that they can then play with, right? And then you have the other extreme, which is usually, you know, the senior management who wants a PowerPoint slide with a summary of the data.

Grant Belgard: And some nice colors.

Chris Yohn: With some nice colors, right? Exactly, exactly. Some red and green checkboxes and stuff, right? And that’s exactly what we’re doing, right? So I think, and probably what, you know, I think what we’re going to do is, you know, we’re going to create some drafts, we’re going to circulate them, and we’re going to kind of see like, where do we get traction, and then you just double down on those. So I think you have to try a few things and then see, like I said, whatever gets used, that’s the one that you want to focus on.

Grant Belgard: When budgets are tight, as they have been for many companies in recent years, what do you defend first? And how do you go about deciding what can be paused, what can’t be?

Chris Yohn: Yeah, I think you need to define your one-way doors. Like, what are the things that if you stop, it’s really difficult to start again? And what are the things that you can easily restart again, if you do pause them? And so obviously, the ones that are easier to restart, then those are, you know, pretty easy to say, well, we’re going to pause that if it’s not going to be critical to our next step. I think if it’s a one-way door, then that’s when you really have to look at it very carefully. Like, what are the implications of pausing or stopping this, and then base your decisions on that. Like, if it’s a, maybe it’s a collaboration, and if you pause it, then they’re going to go find somebody else to collaborate with, right? And you can’t come back, right?

Chris Yohn: So that might be something you think twice about, versus, you know, something that’s completely controlled internally, you could maybe be a little more flexible with how you prioritize it.

Grant Belgard: And if you could give advice to your younger self, maybe at different stages of your career, what would be the most impactful advice you would impart?

Chris Yohn: Hmm. I think I would probably encourage my younger self to take more risks, and to just go for it. I think that, and this is probably a little bit of my own personality, but you know, I am somewhat conservative and a little risk averse, and you know, that’s probably, you know, held me back a little bit in some cases. So I think, you know, just, you know, failure is not a bad thing. Failure is how you learn and how you learn how to be better. So I think just going for it is important sometimes.

Grant Belgard: And if someone wants to work with you in a fractional leadership capacity, how should they prepare? And what sets an engagement like that up for success?

Chris Yohn: You know, there’s probably two main ways that I work with people. One is where someone really knows what they want, right? Like, I need a mechanism of action study for my compound. Can you help me with experimental design and execution? I have one customer or client I’m working with where that’s what I’m doing. The second is probably a little more open, where you might have overall goals, and you really need to figure out, what is the strategy to help us find a solution to meet these goals? Like the company I mentioned earlier, where we’re really trying to figure out, is there a company here. That’s very open and broad: there’s an overarching goal, but then, together, we’re figuring out what that strategy is.

Chris Yohn: So understanding which of those two categories you’re in, and then helping to define that, I think is important. Yeah.

Grant Belgard: And where could our listeners follow your work or reach you?

Chris Yohn: So LinkedIn is probably a great place to reach me. My website is compbiobridge.com. And my, if you want to just reach me directly, my email is just chris@compbiobridge.com.

Grant Belgard: Great, Chris, thank you so much for your time.

Chris Yohn: Hey, this is great, Grant, I really appreciate the time.

The Bioinformatics CRO Podcast

Episode 74 with Phillip Meade

Dr. Phillip Meade, a leadership and culture advisor at Gallaher Edge, discusses his experience evaluating organizational culture and how to diagnose culture problems and build lasting habits for high-performance organizations.

Phillip Meade

Phillip Meade is a leadership and cultural advisor at Gallaher Edge, which provides executive coaching, leadership development, strategic guidance and culture management services for businesses and organizations.

Transcript of Episode 74: Phillip Meade

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome back to the Bioinformatics CRO podcast. Today I’m talking with Dr. Phillip Meade, a leadership and culture advisor at Gallaher Edge, whose career has included extensive work inside NASA, particularly around organizational culture and return-to-flight moments after major setbacks. He’s collaborated across public and private sectors and co-authored a book on building high-performing cultures. Today we’ll translate those lessons for labs, universities, biotechs, and pharma: how to evaluate the strength of a culture, diagnose problems, and build habits that last, plus common pitfalls to avoid. Dr. Meade, thanks for joining us.

Phillip Meade: Good morning. Thank you for having me. I’m happy to be here.

Grant Belgard: So we’ll cover three arcs today, your current work and lens, how you got there, including time with NASA, and practical advice for leaders and teams in the life sciences. So to kick us off, in your current work at Gallaher Edge, what kinds of culture or leadership challenges are you most often being asked to help with right now?

Phillip Meade: The thing that we see most often is companies asking us to come in and help them because either they are in the process of growing and scaling or they want to grow and scale and they’ve hit a ceiling and they’re having trouble doing that. And so culture typically is one of those things that either is an enabler for scaling or it ends up being a roadblock that keeps them from being able to do the scaling that they’re wanting to do.

Grant Belgard: When you first meet an executive team, what signals, good or bad, do you look for the first hour?

Phillip Meade: There are a few things that we typically see that demonstrate what we’re looking for in terms of a high-performing culture. Openness is one of them. Is every member of the executive team truly engaged and contributing, or are there one or two key members who are really the ones doing everything while everybody else is sort of sitting there, waiting and seeing what they do and hanging back? Another one is self-awareness. Are they really aware, when we’re talking about culture, that they’re a part of it, that culture starts with them, and so that this work is really about them and they’re a piece of it and they’re involved? Or are they talking about how everybody else needs to change and this culture is about out there? And then another piece of it that’s very important is a willingness to be vulnerable.

Phillip Meade: Do they show that and demonstrate that willingness to actually let the guard down and take the armor off and be vulnerable as human beings? Or are they armored up and trying to present themselves that way?

Grant Belgard: How do you decide whether a client needs structural changes, leadership, behavioral changes, or both?

Phillip Meade: You know, it’s usually all of the above. It’s just a question of how much of each and how do we set those dials in there. When we talk about organizational culture and how is that created, people take cues for how they behave and what they believe about how they should behave. They take that from the leaders and what the leaders do and what the leaders pay attention to and what the leaders say and do and all of that, as well as from the structure. And so we really want to be intentional about all of that and be intentional about how do we design the behaviors that we want from the leaders and what are the leaders saying and doing, as well as how are we creating the structures and the experiences within the organization that people are seeing and responding to. And so it’s really a total design that we’re looking for from that perspective.

Grant Belgard: Many leaders feel they already talk about culture. What separates talk from traction?

Phillip Meade: I just touched on it a little bit in my previous answer, but first and foremost, it’s an intentional design. I think a lot of people think they’re doing culture just because they do things that are culture-adjacent, things that are about, you know, employees being happy or feeling good in the workplace. But they haven’t done the work to intentionally design it. What is the culture that they want? How do they create that culture? What are the beliefs that they’re intentionally trying to create in their employees around that culture? And how are they creating those beliefs through specific experiences? What experiences are those, and how are they carrying them out? If you haven’t intentionally designed that, then it is kind of just talk.

Phillip Meade: And so you want to have that level of intentionality to the design of what you’re doing. Let’s just take the silly ping pong table in the break room. If you want to have a ping pong table in the break room, that’s great. But you should know exactly why you have that ping pong table there and what that experience is designed to do. What beliefs is it trying to create in your employees? What do those beliefs drive from a behavioral perspective? And how do those behaviors then help to create that culture and ultimately drive the strategy of your organization? That’s the whole flow that you want to have from a design perspective. And if you don’t have that level of understanding, then you haven’t really designed your culture.

Phillip Meade: You’ve just bought a ping pong table and put it into your break room. There’s nothing wrong with the ping pong table. It’s neither good nor bad, but you haven’t designed a culture around it.

Grant Belgard: What’s your go-to way to align executive intent with middle management behaviors?

Phillip Meade: First, you want the senior leaders to demonstrate those behaviors, because if the senior leaders aren’t truly living it, it’s going to be very difficult to just look at the middle managers and say, do what I say, not what I do. That never works. Secondly, you’re going to want to communicate those expectations clearly. It needs to be crystal clear so that they understand exactly what is expected of them. You’re going to want to align the systems and processes so that they have the ability to do what you’re asking them to do, so that it fits into how they do their jobs, and so that they’re rewarded for it. And then finally, if it’s skills-based, you’re going to provide them with training.

Phillip Meade: And if it really is behavioral, you’re going to provide them with behavioral change workshops that will support the behavioral change that you want from them.

Grant Belgard: If a team has strong technical results, but shows strain, missed handoffs, creeping burnout, how do you frame the problem without pathologizing people?

Phillip Meade: This is one of the things that we typically focus on with all of the organizations that we work with, because blame is actually one of the greatest drivers of organizational dysfunction. You see it in a lot of organizations, and it’s a huge waste of time and energy. We like to focus on contributions. Any time that there’s an issue, there are many things that contribute to it. Blame is typically a game that we play where we try to figure out who was mostly responsible, and then we assign blame to them so that we can say it was their fault. And from an organizational standpoint, if you’re trying to think about how we become most effective, that doesn’t make us most effective. We really want to figure out: how do we diagnose how this happened? How do we correct that?

Phillip Meade: And how do we move forward and prevent this from happening in the future? So the way that we do that is we try to identify all the contributors to the situation, and then we figure out how do we prevent those contributions or shift those contributions so that this doesn’t happen in the future. And so we want to approach it from that standpoint so that people aren’t afraid that if I admit that I contributed to this, either through my action or inaction in some way, I’m not going to be in danger of becoming the person who is blamed as a result. And so we come together and we look. Everybody contributed in multiple ways through action and inaction. The system contributed to it. There were environmental contributors. We really look at exactly all the things that contributed to it, and then we say, okay, how can we shift those contributions in the future and get a different result?

Phillip Meade: And so that’s the way we want to start approaching things differently from now on.

Grant Belgard: How do you design for sustainability so the work outlives the initial consulting period?

Phillip Meade: You really want to embed it within the fabric of the organization. And that’s why we say true culture change is not a short-term project: it can take a while to really go through the whole process of getting it embedded. But you want to build it into everything you’re doing.

Phillip Meade: Once you really understand the culture that you’re trying to create and what that looks like and have it well-defined, and you understand the behaviors that you’re looking for, and you understand the core values that you want, and what that really looks and feels like, and how to create this culture that you’re after, then you can build it into how you recruit, how you perform your interviews, how you onboard and introduce people into your organization so that they’re trained into your culture from the beginning. You can build it into your leadership development programs. You can build it into your executive development. You can build it into your performance management systems. You can build it into your succession management. You can build it into the language that you use in your organization and how you talk and speak and interact with each other.

Phillip Meade: And then, as I was saying earlier, you can build it into the experiences that you intentionally design into your organization as part of the way that you do things as a company. As you’re doing that over the course of the year and the life of the organization, you know what these different experiences are and why you’re doing them. You can change those out and tweak them over time, but as you do, you know what you’re updating and why.

Grant Belgard: So, shifting gears to talk about your own career trajectory, what early experiences pointed you towards organizational performance and culture as your focus?

Phillip Meade: Well, you touched on it in the introduction. It was an abrupt change for me. It wasn’t a subtle shift. In 2003, the space shuttle Columbia disintegrated on re-entry, killing all seven astronauts on board. And in the wake of that accident, the Columbia Accident Investigation Board found that NASA’s culture had as much to do with the accident as the piece of foam that hit the wing. And I was asked to lead all of the cultural and organizational changes for return to flight because they grounded the entire space shuttle fleet until we could fix the culture. And so, that really set me off on sort of a life-altering path where I began looking into organizational culture and really how that impacts organizations and how important that is to how they perform.

Grant Belgard: When did you realize engineering, as of course you originally came up as an engineer, right?

Phillip Meade: Yeah.

Grant Belgard: That systems thinking could be applied to human systems?

Phillip Meade: Well, I will say it was a lifeline to some extent. I was grasping for something to make sense of how to figure this out, how to solve this problem of organizational culture. And I realized that an organization is a system. But the thing I realized is that it’s not just any kind of system; it’s a complex adaptive system. And that’s where systems thinking came in. Because if you try to treat an organization like, you know, a car engine, you’re not going to get the right results. You have to treat it like the complex adaptive system it is. When you shift your thinking and begin analyzing it and diagnosing it and working with it in that way, you get different results. As for pivotal mentors, I worked with a couple of consultants very early on, Paul Gustafson and Shane Cragun.

Phillip Meade: They were very instrumental in helping me learn a lot about organizational behavior. And, of course, I read a ton of books that helped me come up to speed on all of this. I’ll say that one of the moments that helped shape my approach was the fact that, you know, I thought that NASA had a great culture. And that’s really part of what freaked me out when I was asked to lead this culture change, because I would have felt better if there were tons and tons of problems for me to solve, and I didn’t think that there were any. So one of the moments that shaped my approach was that the results of a study were released right after I was asked to lead this, and it named NASA as the best place in the federal government to work. And it was like, okay, this just confirmed what I thought.

Phillip Meade: And so, it really shaped my approach because it showed that the way we were looking at culture might not be correct. If culture caused this accident, and yet we’re the best place in the federal government to work, then what does culture really mean? And, you know, that’s where I realized that culture means more than just people being happy at work, right? It has to mean something more. And so, that really influenced my philosophy on organizational culture.

Grant Belgard: So, this might feed into the next question. What’s a belief you held earlier in your career that you’ve since updated?

Phillip Meade: So, beliefs that I held earlier in my career that I’ve since updated. I was very much an engineer in my early career; I was an electrical engineer. You know, they say you can’t spell geek without double E. One of my favorite ones to reminisce on is that I used to say, I can explain it to you, but I can’t understand it for you. I had philosophies on communication like, if I explained it and I was technically accurate and you didn’t get it, then that was your problem. And I grew a lot over my early career, realizing that being effective was more important than being right. And being effective meant learning how to work well with other people. And organizational culture, oddly enough, really is a lot about that.

Phillip Meade: Organizational culture is about how you help human beings to work together effectively as a group. A lot of the psychological underpinnings that we use in the work that we do actually come from work that was done with the Navy, because they were having challenges trying to figure out how to put the most effective teams together in the control centers of their ships. Their theory was, if we take the smartest, best performer at each position and put them together on these teams, we should get the best performance. And they weren’t getting that, and they were confused. You would think that that’s what you would get. But in reality, the best performance comes from the teams that work best together, not from putting the best performers together. And that’s what culture is all about.

Phillip Meade: Culture is about how you find people and put them together so that they actually work well together. And in an organization, that’s what you need. You need people who feel good about themselves and who, when you put them together with other people in that environment, all feel good working together. They have the ability to adapt and interact with each other in ways that make the whole team perform better, not each one of them trying to maximize how they work best individually while the team suffers as a result. That’s not what you want as an organization. And, you know, it’s ironic, but I was a part of that personally when I think back to how I performed individually as a young engineer.

Grant Belgard: So, diving a bit more into your learnings from your time at NASA, when people hear culture, they often picture perks, right? The ping pong table in the break room, as you mentioned. In mission-critical contexts, what does culture actually do?

Phillip Meade: Yeah, so this takes me back to the previous question, where I said that being named the best place to work in the federal government showed me that culture has to mean more than that, right? I define culture as doing three things. It has to drive employee engagement, because you get so many benefits from that. There was a 2020 Gallup poll that said disengaged employees have 37% higher absenteeism and 15% lower profitability. That drops down to the bottom line and translates into a cost of 34% of their salary. Engagement is huge; it’s a big deal. So having highly engaged employees is a big part of what culture does for you. And then, it also improves people’s lives.

Phillip Meade: And that’s a big part of what having an effective culture does. But the third thing that culture does is drive organizational performance and market success. And, you know, for a mission-critical organization like NASA, this meant it had to support mission success, which meant taking astronauts up to space and returning them back to Earth safely. Safety was a huge part of that. It’s like the three legs of a stool: if it doesn’t do all three, you don’t truly have an effective culture. I can think of examples of companies that have any two of those three, and I would argue they don’t have what I would call a truly effective culture; in some way, it’s falling short. It takes all three to truly have an effective culture, and that’s what you want to be shooting for.

Grant Belgard: What did you learn about surfacing dissent and bad news in environments where schedule pressure and hero narratives play a big role?

Phillip Meade: Yeah. You know, I learned that human psychology is complex. Even though NASA was an organization full of engineers, and we like to joke that engineers aren’t really human beings, they are human beings. And when you talk about organizational culture and what happens there, it all starts inside of the human being, and it really is driven by that human psychology. We don’t think about this, and we don’t talk about it very often in our daily lives, but we’re all actively deceiving ourselves, you know, on a daily basis. It’s just part of what our human psychology does to protect us.

Phillip Meade: And so, you know, when we are afraid of something, when we’re afraid that something’s going to make us feel uncomfortable, when we’re afraid that we’re going to be unpopular, when we’re afraid that this isn’t going to align with the identity that we’ve created for ourselves, all kinds of funny things happen in our psyche, and we get behavior that you wouldn’t expect. And so, you’ve got engineers that live in an environment where failure is not an option, who don’t want to be the one that says something’s impossible or can’t be done, who are tremendously committed to mission success, who love their jobs and love doing what they do, and who are working really, really hard and long hours to try and make something successful.

Phillip Meade: They don’t want to be the one that holds their hand up and says, hey, I don’t think we can do this, or this isn’t possible, or we can’t get this done. There’s a lot of silent peer pressure to be successful, to save the day, to make things work, and not to do that. It’s not overt; nobody’s saying anything, and nobody would call them a bad name if they did it, but it’s all below the surface, all in the subconscious. And so, it makes it very, very hard to identify and see, which is why it’s so deadly.

Grant Belgard: Many organizations talk about psychological safety. In practice, what behaviors from senior leaders create or destroy it?

Phillip Meade: It’s really about truly encouraging and rewarding feedback and dissenting opinions, normalizing dissent and healthy conflict, and helping individuals to increase self-awareness.

Phillip Meade: You know, that self-deception that I was talking about that’s happening on a daily basis, educating people that that’s going on, helping people to know that that’s a piece of what’s happening, and helping us all to know and be aware of what we’re doing and what’s going on so that we can recognize it and combat it. Because noticing is the first step. Until we notice, there’s nothing we can do.

Grant Belgard: Could you share an example of aligning structure, for example, reporting lines or decision rights with the desired cultural behaviors?

Phillip Meade: Yeah. So, there’s two I’d like to talk about. One is a large-scale one, and the other is a sneakier one that I like to use as an example. The larger one was with the Columbia accident. One of the challenges that was identified after the accident was the way we were structured: the engineering and technical side, as well as budget, schedule, and safety, all rolled up to the program manager. So there was a single point of accountability managing all of that, and there was a feeling among the engineers that they didn’t have their own voice. You had one human being who was having to juggle responsibility for budget pressure and schedule pressure, as well as technical decisions and safety.

Phillip Meade: So afterwards, we split that out into separate technical authority and safety authority. Again, we called it the three legs of the stool: we had a program manager that was responsible for budget and schedule, a safety organization that was responsible for safety, and a technical organization that was responsible for the engineering. So if engineering had a technical concern, they felt like they had a route to advocate it all the way up, and they didn’t have to go to a boss who was more concerned about budget impacts than technical concerns. And then the sneaky one that I want to talk about is an organization that had quality assurance technicians who were responsible for safety and speaking up about safety concerns.

Phillip Meade: And they had to punch a time clock on a daily basis coming in to work. The engineers working in this area didn’t have to punch a time clock; nobody else did. And for whatever reason, the story in the quality assurance technicians’ heads, as a result of punching the time clock, was that management didn’t trust them to keep their own time. And because they felt they weren’t trusted by management, they developed a similar distrust towards management, because trust is reciprocal. If you don’t trust me, I’m naturally not going to trust you; that’s just the way it works. And so speaking up and raising safety concerns becomes harder. If I don’t trust management, it’s going to be harder for me to raise a safety concern.

Phillip Meade: And so, it was creating a challenge with raising safety concerns because there was a trust issue, and one of the root causes of this trust issue was this silly time clock that they were having to punch in and out of work. So, it’s just weird structural stuff. It’s all about the beliefs that are created in people through the environment they live in and through the things that happen. And many times we create those unintentionally, in ways we never intended.

Grant Belgard: That’s interesting. Yeah. Because in the clinical trial arena, you do have this structural separation of the safety monitoring for the patients, but there’s typically not something like that in the earlier stages of drug development before patients get involved. So, for leaders inheriting legacy systems and histories, where do you begin?

Phillip Meade: I always like to begin by trying to learn as much as I can about why things are the way that they are. I don’t like to change things until I understand the reasoning behind how they got there. Usually, there are people and there’s inertia around the existing systems and processes. So honoring why it’s there, respecting that, taking the good for what it is, and then only changing the things that need to be changed, or building upon what’s there, usually helps minimize at least some of the resistance from the people who are involved in what exists already. And you can save time and energy too, because there probably are reasons why things are the way they are, so you’re not breaking things that don’t need to be broken or doing something that won’t work.

Grant Belgard: If you had a week inside a life sciences organization, how would you diagnose the culture quickly?

Phillip Meade: I would try to be as much of a fly on the wall as I could. I would just hang out, visit meetings, and listen. See how the meetings go. See how much actual discussion happens in meetings. Are people speaking up? Is there meaningful dialogue, and is there healthy conflict happening in those meetings? Follow people out into the hallway: is there more conversation after the meeting than there was in the meeting? Listen to the conversations happening in the executive meetings and what they’re asking to have happen. And then see what the managers at the middle level are telling their people. Are they telling their people the same things that the managers at the upper level are telling them, or does the message get distorted by the time it reaches that level?

Phillip Meade: And do the employees understand the things that the leaders want them to know? Do they even know why they’re doing what they’re doing? That kind of thing. What does the general vibe around the office feel like? Do employees seem like they’re happy and enjoy being there, or does it feel like it’s a drag hanging out at the office? You can learn a lot just by hanging around.

Grant Belgard: What questions would you ask at the bench level versus the executive level?

Phillip Meade: I’d probably ask a lot of the same questions. Honestly, I’d want to know if they understood what their strategy was. It might come out in different language, but I’d want to know: do you understand how you’re going to be successful as a company? What are the values here? How would you describe the culture? Do you know what it means to be an employee here? I’d probably also ask them questions about how they liked working there.

Grant Belgard: How do you tease apart performance issues that stem from process, structure or relationships?

Phillip Meade: You really just have to dive in, start asking questions, and figure it out. A lot of it is trying to work out, if there’s a challenge with the person doing the work, is it because they can’t do it, or is it because they won’t do it? Do they not have the ability because they don’t know how to do it, or because something is missing? There are just so many different ways it can go. You just have to dig in, start asking questions, and figure things out.

Grant Belgard: For regulated environments, and of course drug development is fairly regulated, what cultural strengths and blind spots tend to show up?

Phillip Meade: Well, sometimes you’ll have a strength that comes from a feeling of sameness. There can be a sense of community or camaraderie that comes with being part of a particular community there. But a blind spot can come along with that: maybe there’s an over-reliance on standards or regulations to protect you from things. And that can be dangerous because in all cases, those are only as effective as the people who are following them. So you really have to depend on people to do what those regulations say.

Grant Belgard: When publication pressure or go/no-go gates loom, how do you maintain integrity of decision-making?

Phillip Meade: First and foremost, I want to be honest: I haven’t dealt with this too much personally. But if I’m reading the question correctly, I would say that as an organization, you want to make sure that you are structuring your incentives correctly. You don’t want to create a situation where you’re putting your employees into a no-win position and putting them under undue pressure to do things in order to save their job. I think that’s what I would say there.

Grant Belgard: What are the telltale signs that a strong culture has drifted into groupthink?

Phillip Meade: I think, similar to what I said earlier about being a fly on the wall in a meeting, groupthink is obvious when everybody basically agrees to everything all the time. I look for healthy conflict as a sign of a strong culture in many cases. So I would be looking for that type of healthy dissent: not arguing or fighting, but questioning and challenging, and people with different ideas or different positions on things. That’s where you get the best decisions, the best ideas, and the best innovation. That’s what you want to see.

Grant Belgard: What’s your approach to decision rights clarity? Who decides, who’s consulted, who’s informed?

Phillip Meade: I don’t think there’s a single answer to this one, because there are lots of different types of decisions. The idealistic answer is that you want the people who are affected to be involved in the decision. That’s not realistic in a lot of cases, but I would lean as far towards that as is practical, because the more you can involve the people who are impacted in the decision, the more buy-in you’re going to get. One of the things that people don’t think about oftentimes is that they misinterpret what it means to make a decision quickly. They think of the time to make a decision as the time it takes to actually decide. I would argue that the time you want to look at is the total time from when you start to when you finish implementation.

Phillip Meade: So you may get from the beginning to making the decision quickly, but then your implementation may take three times as long if you don’t involve the right people. And sometimes it may take a little longer to get to the actual decision point, but then your implementation takes a third of the time. So the total time is actually shorter when you involve more people. Now, you’ve got to think through that. Obviously you can’t always involve all the people, and sometimes it does take too long and it doesn’t work out the way I just described. That’s the reason I said it depends and it’s not really super clear. But I would lean towards involving more people, trying to get implementation to go more smoothly, and getting greater buy-in when you can, because it really does help.

Phillip Meade: And I think that right now, in many cases, people lean too far towards trying to decrease the number of individuals involved because it makes the deciding part go faster. But then I think they’re underweighting how much it increases the implementation portion of it.

Grant Belgard: That’s a good point. How do you cultivate leader self-awareness?

Phillip Meade: Coaching is a great way to do that. We have some workshops that help to increase leader self-awareness. Reading helps. And once a leader decides that they want to start improving their self-awareness, just starting to pay attention and notice things can begin to be part of that process. But as with all self-improvement, it has to start with the desire of the individual themselves to improve.

Grant Belgard: So how do you adapt culture work as a company scales from 20 to 200 to 2,000, even 20,000? Life science organizations come in all shapes and sizes.

Phillip Meade: Yeah. You’re doing the same basic things; it’s just a matter of how you roll it out in tiers. We always like to start at the top and then roll it down. So you want to start with the executive team, then move down to the layer below that, and then the layer below that. You just have more tiers, and it takes a little bit more time. When you start to get up to around 2,000 and above, you’ve got more mature, more well-developed HR departments. So you begin to work with more well-developed HR systems and processes: you’ve got LMSs that you’re now integrating with, really well-developed performance management systems and tools that you’re integrating into, and internal HR teams that you begin to integrate into and work with.

Phillip Meade: And so the work that we do begins to integrate with the people they have and the work they’re already doing, and we begin to weave into that.

Grant Belgard: What’s the best small concrete habit a leader can start tomorrow?

Phillip Meade: For me, I would say it’s: learn something new every day. One of the commitments that I made a long time ago was that I was going to read every day, so I try to read something new every day. But more generically, I would say just learn something new every day. I think that’s a great habit.

Grant Belgard: What are the top three mistakes leaders make that quietly erode culture over six to 18 months?

Phillip Meade: I think the top three are not communicating, not admitting mistakes and tolerating bad behavior.

Grant Belgard: Where have you seen well-intentioned values backfire?

Phillip Meade: I think there are two ways that well-intentioned values backfire. The first is anytime the company or the leaders of the company don't actually live the values, or do something counter to the values, that kills it right there. Once people see that it's basically a lie, or that it's not true, it becomes immediately ignored or worthless to them. The other is when the values, as well-intentioned as they may be, are overly general. Patrick Lencioni refers to these as permission-to-play values. And I'm not opposed to them existing as permission-to-play values, but I would call them that and differentiate them from your true core values. These are things that almost every organization could claim they have, like integrity and respect and safety.

Phillip Meade: It just feels so vanilla that a lot of times employees will look at those and think, yeah, yeah, okay, I don't get it. It just feels like a platitude, or something that is being hung on the wall just to do it, because it doesn't seem like there's anything particularly special to it. Like, yeah, of course we don't want employees to steal from us, everybody should have some basic respect for each other, and you should expect not to die when you come to work. Those things make sense. And so people just sort of blow it off and don't pay attention to it. I think those values are very well intentioned and there's nothing bad about them, but it's also very difficult to really get a lot of traction with them, because they are, in most cases, so vanilla.

Phillip Meade: And what Patrick Lencioni says is that unless you can truly argue that you have more integrity than 99% of the other companies in your industry, it's not really your core value; it's not what defines you. So it's hard to say, this sets us apart, this is something that we're going to hang our hat on, and your employees see that. It's like, okay, yeah, we have integrity, but it doesn't really mean something special. And so it sort of just becomes this thing that we hang on the wall.

Grant Belgard: When culture change fails, what was the root cause of that failure most of the time?

Phillip Meade: Most of the time it comes down to a failure of leadership. Usually the most senior leaders haven't really, truly bought into it and committed to it.

Grant Belgard: How do you prevent hero culture from undermining redundancy and documentation?

Phillip Meade: This goes back to what we were talking about a little earlier. This is a self-awareness issue. Hero culture is about me not truly having the self-awareness to realize that I am trying to make myself feel better by becoming the hero. It's a defense mechanism kicking in, where I'm just trying to prevent myself from feeling bad; it's part of my identity that I'm trying to protect. And so we want to try to raise and increase that self-awareness so that it doesn't happen.

Grant Belgard: What’s the smallest viable step an individual contributor can take to strengthen culture?

Phillip Meade: The smallest viable step I would say is to increase your courage by 1%. If you increase your courage by 1%, then you're going to increase your openness by 1%, which means that you're going to increase the feedback that you give to others by 1%. And you're going to increase the self-accountability that you have by 1%. And you're going to increase the initiative that you take by 1%. You're going to increase the contributions that you make by 1%. You're going to increase your performance by 1%. I think if everybody in the organization were to do that, you'd start to see visible changes in the culture.

Grant Belgard: What book, practice, or question has stayed useful across contexts?

Phillip Meade: I think the thing that has stayed useful across contexts, and I'm going to go with a practice, is getting curious. It's something that I've had to learn, and I'm not necessarily proud of it, but one of my tendencies, and probably a reason why I'm sitting here answering all these questions really quickly for you on a podcast, is that I like being an answer guy. People come to me and ask me a question, and I'm really quick to have an answer. A practice that I started developing as a leader was to not answer the question immediately, to get curious, ask more questions, try to learn more, and say, okay, well, what's going on here?

Phillip Meade: Or when someone would say something and I thought that they were wrong, or I thought that I had the answer and they didn't understand, to get curious and figure out: well, why do I think that they're wrong and I'm right? That's been very useful to me across a lot of contexts, to just try to get more curious instead of assuming that I always know the right answer and that everybody else is wrong.

Grant Belgard: So what options do our listeners have to get more engaged with you through your work at Gallaher Edge? I know you have a book, you offer courses, you have consulting, and so on.

Phillip Meade: Yeah, absolutely. You pretty much summarized it. We have a book that they can get on Amazon. It's called The Missing Links: Launching a High-Performing Company Culture. You can also go to our website, Gallaheredge.com, and check us out. We offer individual workshops as well as consulting engagements. We have an on-demand leadership development course in a micro-learning format, and it's a great way to get introduced to us and see what we're all about. So there are a lot of different ways. We also do speaking, so if you're looking for a speaker for an event, that's another way that we can come and help you out.

Grant Belgard: Great. Dr. Meade, thank you so much for joining us.

Phillip Meade: Thank you, Grant. I really appreciate it.

The Bioinformatics CRO Podcast

Episode 73 with Nataraj Pagadala

Nataraj Pagadala, founder, president, and CEO of LigronBio, discusses his company’s goal of using molecular glues to target traditionally undruggable proteins as a route to new therapies for neurodegenerative diseases.


Nataraj Pagadala

Dr. Nataraj Pagadala is the founder, president, and CEO of LigronBio, which develops molecular glues to target traditionally undruggable proteins.

Transcript of Episode 73: Nataraj Pagadala

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast, where we talk to scientists, founders, and leaders at the intersection of computation and biology. I’m your host, Grant Belgard. I’m joined today by Dr. Nataraj Pagadala, founder, president, and CEO of LigronBio. LigronBio is a biotech company focused on molecular glue therapeutics, small molecules that co-opt the cell’s own protein degradation machinery to go after proteins that have traditionally been considered undruggable. The company is applying computational chemistry, bioinformatics, and AI-driven platforms like its TriMatrix Analyzer to design these glues and target neurodegenerative diseases and other serious conditions where new therapies are badly needed.

Grant Belgard: Nataraj has more than two decades of experience in computational drug discovery, spanning academia and industry from early work in biochemistry and bioinformatics through postdoctoral and research roles, modeling protein structures and aggregates, to senior positions in biotech and now founding his own company. Today, we’ll talk about what he’s working on now at LigronBio, how his career path led him into molecular glues and company building, and the advice he has for students, trainees, and scientists who are now thinking about careers in computational drug discovery, or even starting their own companies. Nataraj, thanks for joining us. Great to have you on the show.

Nataraj Pagadala: Thank you very much, Grant. Thanks a lot for giving me this great opportunity to speak to the molecular glue audience and also to the targeted protein degradation companies. This is Nataraj Pagadala, founder and CEO of LigronBio. LigronBio was incorporated in 2023 and works in the targeted protein degradation space, developing molecular glues for undruggable targets on the oncology side and also in neurodegenerative diseases, mainly focused on Alzheimer’s; later it will be extended to Parkinson’s and ALS therapeutics. Primarily, we are developing a platform called the AI TriMatrix Analyzer to rationalize and discover molecular glues for specific undruggable targets in the Alzheimer’s space, and this is linked with a diagnostic kit called the L-tag assay.

Nataraj Pagadala: This L-tag assay will help in the functional studies of these molecular glues, to take them further into preclinical studies and clinical trials. So this is a powerful engine, linked with generative AI, that will help in the discovery of these molecular glues within 36 months.

Grant Belgard: So, for members of the audience who have never heard of molecular glues, what are they?

Nataraj Pagadala: Molecular glues are small molecules whose medicinal chemistry properties are very similar to those of traditional drug molecules. The difference is that molecular glues degrade their target proteins, whereas traditional drug molecules inhibit proteins in the biological system. Undruggable targets basically have no binding pockets, yet these targets still help drive the progression of disease, unlike the proteins that can be inhibited by traditional drug molecules. That is the reason why molecular glues are designed especially for undruggable targets, for protein degradation.

Grant Belgard: When you explain your company’s mission to someone with biology background, what do you emphasize first, the disease areas, the modality, or the technology platform?

Nataraj Pagadala: So, basically, our mission is to design molecular glues for any disease-specific protein, mainly the undruggable ones. At the same time, our mission is to do targeted protein degradation for these diseases, helping to reduce these proteins in the biological system and also to slow disease progression. Our vision is very broad: to develop molecular glues for all the undruggable targets, and to save future generations from Alzheimer’s is our very big mission.

Grant Belgard: Are there any currently approved molecular glues?

Nataraj Pagadala: Yes, there are a couple of approved molecular glues. One is pomalidomide, and the other is lenalidomide, which is on the market as Revlimid for multiple myeloma. And it is a very big market for these molecular glues in multiple myeloma.

Grant Belgard: So, what convinced you that there is space for a new company in this area?

Nataraj Pagadala: Basically, if you look at the last 10 to 15 years, many companies have been developing molecular glues in targeted protein degradation, but unfortunately none of them were completely successful in developing molecular glues for a specific disease or a specific target, because discovery has depended on serendipity. This is the reason why LigronBio came into the picture. Rationalizing molecular glues and discovering molecular glues is a very difficult task, so we are developing it right from scratch. This is the primary reason why we are developing the TriMatrix Analyzer platform, which rationalizes molecular glues for a specific target using generative AI that will help in discovery, for one thing.

Nataraj Pagadala: And at the same time, this platform also finds all the off-target interactions. That way, we can eliminate all the serendipity problems within the biological system and develop a molecular glue for a specific target without any off-target interactions. That is the reason why LigronBio is novel compared to all the existing platforms worldwide, in terms of data integration with AI and also high selectivity and specificity.

Grant Belgard: Neurodegeneration is notoriously difficult. What aspects of those diseases make them feel particularly well-suited for a molecular glue approach?

Nataraj Pagadala: Basically, if you look at the biological system in neurodegenerative diseases like Alzheimer’s, there are many undruggable targets that help drive the progression of the disease, not only on the oncology side but also in the neurological space, in neurodegeneration. As long as these undruggable targets exist in the biological system, it is very difficult to inhibit the progression of Alzheimer’s, Parkinson’s, or ALS. Unfortunately, the targeted protein degradation space has not yet been introduced into this neurological space, and people have not been successful as of now. This is where we need to develop these molecular glues and eliminate these toxic, undruggable proteins from the biological system.

Nataraj Pagadala: That way, we can slow down the disease progression and, you know, restore the memory function and then also reduce the cognitive decline. So, this is where importance of molecular glues comes into picture with respect to neurodegenerative diseases.

Grant Belgard: How do you balance going deep on a few carefully chosen targets versus exploring widely across many possible targets with your platform?

Nataraj Pagadala: So, basically, this platform designs molecular glues for any specific target, even when there is no three-dimensional structure of the protein determined by crystallography or any other method. This platform designs the molecular glue just from the amino acid sequence. If you look at undruggable targets, there is a motif called a degron. A degron is six to seven amino acids, or a maximum of 10 amino acids. Based on that, the platform designs the molecular glue from the amino acid sequence. So even a layman who doesn’t know how to design molecular glues gets an opportunity with this platform: just by inputting an amino acid or peptide sequence, it will develop a molecular glue.

Nataraj Pagadala: That’s where this particular platform is completely different from all the existing platforms worldwide.

Grant Belgard: What kinds of collaborations or partnerships are most important for a company like yours at this stage?

Nataraj Pagadala: At this stage, particularly because the experiments for targeted protein degradation are different from the traditional way, we need partners who are well versed in the targeted protein degradation space. This is where we need partners like BMS, which is working on targeted protein degradation, or C4 Therapeutics or Kymera Therapeutics. These companies are working on protein degradation, but unfortunately they are not working especially on molecular glues; they are working on another modality called a PROTAC. There are some companies that are working especially on molecular glues, but they have not been successful as of now.

Nataraj Pagadala: So, we can help those kind of companies, you know, we can help, we can also partner with those companies to design the molecular glues with this particular platform and also help them to, you know, for the targeted protein degradation with the molecular glues with our platform. That’s where, you know, we can partner with those companies and we also, we can help those companies for developing a molecular glues.

Grant Belgard: When you think a few years ahead, what would success look like for LigronBio?

Nataraj Pagadala: A few years ago, to be honest, funding was much more flexible compared to this particular time, when funding is very hard because many of the companies have not been successful. Otherwise, by today, LigronBio might have developed the molecular glues for Alzheimer’s therapy, and by today we might have at least reached clinical trials for Alzheimer’s therapy and also reached the patients.

Grant Belgard: And is the vision to accomplish that through partnerships or are you planning on sponsoring trials as Ligron?

Nataraj Pagadala: Yeah, actually, we are trying to do our own clinical trials from our side. At the same time, we are also looking for big partners. Once we complete the initial phase of studies and file the IND, then we are looking for big partners to step in and do the clinical trials as a joint collaboration with LigronBio.

Grant Belgard: What do you see as the main advantages and disadvantages of molecular glues compared to more traditional small molecule approaches?

Nataraj Pagadala: The most important advantage of molecular glues is that, because this is an event-driven mechanism, degradation therapy is more effective for any disease compared to inhibition. That is a major difference between molecular glues and traditional inhibitors, because traditional inhibitors work by an occupancy-driven mechanism: as long as you take the drug molecule, the effect on the disease state will be there. But with molecular glues, even after the drug is eliminated from the biological system, the effect will still be there. That is the reason why the efficacy is very high compared to traditional molecules; the effect can be 100 times more than a traditional drug molecule.

Nataraj Pagadala: So, that is the reason why, and not only that, basically, the molecular glues are treat undruggable targets, which is notoriously undruggable in the biological system and helps the disease progression. As long as these, as I said, you know, earlier that these proteins are not eliminated from the biological system, the disease progression will still be there. That is the reason why we cannot stop oncology, we cannot, cancer progression, and also neurodegeneration. So, there actually, traditional methods cannot deal with those undruggable targets. Only molecular glues can help in that particular situation and, you know, help in the inhibition of disease progression.

Grant Belgard: What makes designing molecular glues hard, scientifically or computationally?

Nataraj Pagadala: Basically, as I said, molecular glues influence the target protein through a simple motif called a degron. A degron is a maximum of about 10 amino acids. This is not a catalytic site; the catalytic site used by traditional drug molecules is different from this motif, which is solvent-exposed. So the formation of the ternary complex is very, very difficult with molecular glues. This is where the difficulty comes in, for one thing. And because the degron is only six to 10 amino acids, there will be serendipity problems with molecular glues, because most kinases contain this kind of degron of six to seven amino acids.

Nataraj Pagadala: That is the reason why there is a high chances of off-target interactions with the molecular glues. That’s where we need to eliminate those molecular glues. And the AA TriMatrix Analyzer platform is the one that, you know, eliminates all these off-target interactions and gives them highly specific molecules for the time, you know, that shows a target protein degradation.

Grant Belgard: How do you think about modeling ternary complexes and cooperativity when you’re working with molecular glues?

Nataraj Pagadala: For the modeling, basically, as I said, we are training on a very big database of ternary complexes, from the literature and also from our own in-house experimental studies. We are also mapping the proteome in the biological system for all the undruggable targets. Using generative AI and large language models will help us see how the molecular glues behave, especially the off-target interactions. Once we eliminate those off-target interactions, it is easy to design molecular glues for a specific target. So this is how we are building the TriMatrix Analyzer platform.

Nataraj Pagadala: And also, because, you know, most of the targets doesn’t have a three-dimensional structure, this is where another advantage of this platform is that even though there is no three-dimensional structure, still we can develop a molecular glue for the particular target, you know, just based on amino acid as an input. So, this is where the advantage of this one, and also the difficulty that I said, you know, in most of the companies, they don’t have a three-dimensional structure, you know, for most of the targets, you know, unless there is no three-dimensional structure, there is no molecular glue. But a TriMatrix Analyzer platform can do this. And at the same time, most of the companies, to find out a ternary complex formation, they are using a diagnostic kits. Those diagnostic kits is based on the fluorescence.

Nataraj Pagadala: They only give indication about, you know, whether the ternary complex is formed or not. But when that is taken into experimental site, then it is not replicated. The diagnostic kit is not replicated. The results of the diagnostic kit is not replicated in the biological system in most of the cases. But we are developing a diagnostic kit in, which is called as an LTG assay, which gives information about, you know, how the ternary complex is formed, which is like an alternative to x-ray crystallography. That’s where we can clearly see that how the ternary complex is formed. So, this is where the difficulty from all the big companies are facing as of now. And that’s what we want to make it easier for all these companies, with our TriMatrix Analyzer platform, or also the diagnostic kit.

Grant Belgard: How do you decide which parts of the problem to treat with more traditional physics-based structural biology approaches versus more data-driven AI-ML approaches?

Nataraj Pagadala: So, basically, most physics-based approaches are for traditional therapy, for all the proteins that have a three-dimensional structure. With a catalytic site, it is easy for physics-based approaches to design the drug molecules. Data-driven approaches come into the picture where we don’t have proper three-dimensional structures of the protein. As I said, for all the undruggable targets, we need lots and lots of data to develop one molecular glue for a specific target.

Nataraj Pagadala: This is where AI and also machine learning and artificial intelligence comes into picture compared to, even though, basically, artificial intelligence and machine learning is also useful for traditional therapy, but especially because that even though artificial intelligence and machine learning is not needed, still we can develop a drug molecule for the proteins which have three-dimensional structures of the protein and also the catalytic pockets. But without the data-driven approaches and without AI and ML, it is very, very difficult to design molecular glue for undruggable targets.

Grant Belgard: How important is experimental feedback for your models and what does that loop look like in practice?

Nataraj Pagadala: Basically, the experimental studies are very important, because it is very, very rare to see effective targeted protein degradation by a molecular glue in the beginning. So the experimental side is very, very important, because there are many factors that we need to work out in the area of targeted protein degradation, especially with molecular glues. PROTAC development is completely different; it is easy to find targeted protein degradation with PROTACs. But a molecular glue is a small molecule, and it influences the target protein through a small motif. Sometimes we don’t know how the degradation is happening, or whether the ternary complex is formed. This is a very complex system with molecular glues.

Nataraj Pagadala: That is the reason why the experimental data, not only that, you know, it’s like, you know, if you check, you know, thousands of, hundreds of molecular glues, sometimes, you know, we end up with no molecular glue showing a targeted protein degradation. So, that is where experimental data, one experimental data, and one targeted protein degradation will give a clue for many, many stages of a molecular glue development in the biological system.

Grant Belgard: Where do you see the biggest gaps right now in this space? If you could choose one particular type of data to just have a lot more of, or better data of, what would that look like?

Nataraj Pagadala: So, basically, I see the main gap here, especially for molecular glues, is that we don’t have ternary complexes. Without them, we cannot design molecular glues. In particular, we lack x-ray crystallography showing how the ternary complexes form, except in five or six cases. Not only that: when these undruggable targets form a ternary complex, it is a very big complex, and it is sometimes very difficult to solve three-dimensional structures of these proteins through x-ray crystallography because of their complex nature. So this is where the difficulty is coming from in the area of molecular glues.

Nataraj Pagadala: That’s where we need to do some computational studies in the beginning with enormous, generate enormous amount of data, what the ternary complexes, you know, mapping of all the ternary complexes. That’s where we get some clues to do the experimental studies. If it is replicated, then we can say that, you know, yeah, this is what is happening from my computational studies, and this is also replicated in experimental studies. Then from that, you know, generate more, you know, molecular glues for other targets, you know, more data-driven through AI and ML.

Grant Belgard: So, to talk about your career, looking back, what were the big inflection points that shaped your career in computational drug discovery?

Nataraj Pagadala: Basically, I did my PhD in computational chemistry in 2007. After that, I did four years of postdoc at the University of Alberta and one year of postdoc in Belgium at KU Leuven. So I have a long career, 25 years of experience. But for all of my career, I worked in the traditional way, developing drug molecules for proteins that have binding pockets. I have a very good track record in computational drug discovery over the last 25 years, with many international publications, and I was also rated as one of the eminent scientists in computational chemistry by Carnegie Mellon University. But unfortunately, I never worked on targeted protein degradation before I started my career at [biotherics].

Nataraj Pagadala: There, my journey of a targeted protein degradation has changed, actually. Yeah. So, from there, you know, after going in-depth analysis, you know, then I realized that, you know, this is a, it’s not a simple thing, you know. I need to, I need to show to the world that, you know, with all my experience that, you know, how can we design the molecular glues? How can we not only molecular glues, you know, how can, I know, targeted protein degradation can be done easily? That is the reason why I started this particular career. That’s where the, I know, the inflection point has come in my career to show to the world that, you know, how can we do this? Not only that, with the doing of this, now, how can we, you know, reduce the progression of the Alzheimer’s or Parkinson’s and also ALS and also major this, this devastating diseases, you know.

Nataraj Pagadala: With this technology, we can definitely protect the future generations because we know that COVID-19 has, you know, pandemic has created, you know, havoc in entire world, right? You know, half of the world was got wiped off. So, that is the reason why I changed my career that I want to do something to this, you know, in the disease therapy and I want to show something to this, you know, how can we, you know, stop the diseases or also we can, we can inhibit the disease progression and, you know, protect the future generations for, for these devastating diseases.

Grant Belgard: What gave you the confidence to start your own company doing this?

Nataraj Pagadala: So, basically, it was my experience over the last 25 years. As I said, I have a great track record in computational drug discovery. I did a PhD and a full five years of postdoc, with publications, and I was rated as an eminent scientist by Carnegie Mellon University. Based on my career, my track record, and my way of doing drug discovery, which is a little bit different from other people’s in terms of thinking and implementation, that gave me the confidence that my approach will definitely help inhibit the progression of these diseases.

Nataraj Pagadala: So, that is the reason why with all my computational chemistry, because not only that, you know, my other confidence is because I’m a, I’m a biochemistry background. Mainly, my, my background is biochemistry with a genetics, you know, with a PhD genetics department. And also, I’m well-versed with molecular biology and all the biology aspects. So, that’s where actually, I can easily connect my biochemistry experience with a computational chemistry experience, with a drug discovery experience, and also experience in biophysics. So, with all these subjects, you know, great expertise, it is easy for me to design the molecular glues. Think about how the drug molecule works in the biological system. That’s where I can easily connect. That’s where my confidence has come that, you know, I can achieve, not only that, you know, I don’t need big laboratories to develop these drug molecules.

Nataraj Pagadala: I can sit at home and design molecular glues on the computer with all my expertise. So that is why I started this company: all of that expertise, and the ability to discover these drug molecules without having laboratory space.

Grant Belgard: Have there been any particularly helpful pieces of advice from other founders or mentors that have changed the way you run the company?

Nataraj Pagadala: Actually, there are very few people working on molecular glues. As of now, apart from the very big companies, like [?], C4 Therapeutics, Kymera Therapeutics, and BMS, I personally feel that I am the only one who has started a startup developing molecular glues and building a platform for them. To this day, I have not seen any other founder developing molecular glues.

Grant Belgard: What’s something about the founder-CEO role that you didn’t appreciate until you were actually doing it?

Nataraj Pagadala: Yeah. Earlier, when I was working at different companies, my ideas were not taken into consideration. But as CEO of the company, when I was developing the TriMatrix Analyzer platform and designing molecular glues along with a diagnostic kit, that is where people started seeing me as a different person. There are people with 10 to 15 years of experience who, even with all that experience, were unable to figure out how ternary complexes form and how targeted protein degradation happens in the biological system.

Nataraj Pagadala: But as CEO of LigronBio, within a short period of time, when I was doing this work, people started seeing me as an exceptional person, someone who can definitely deal with these particular problems and help the community, the society, and future generations affected by Alzheimer's and other devastating diseases.

Grant Belgard: From your perspective, what are the most underrated skills for computational scientists who want to work closely with wet lab teams?

Nataraj Pagadala: With wet lab teams, this is basically a different, complex biology. So I want to work with people who are well versed especially in neuroscience and in targeted protein degradation, people who have experience with targeted protein degradation in terms of molecular glues. Without that, it is very difficult to understand and do the experiments in the laboratory; without knowledge of molecular glues and targeted protein degradation, it doesn't work. So I prefer people from that particular background.

Grant Belgard: Where do you think molecular glues will realistically be in 10 years? A niche modality or something more mainstream?

Nataraj Pagadala: Yeah. As of now, molecular glues are a high priority for different companies, including bigger companies like J&J, because they are small molecules: as I said, they are brain penetrant, gut penetrant, and membrane permeable. So molecular glues are the first priority as of now, and to date, 24 billion dollars has been deployed into molecular glue development by different companies and by different VCs. In the next 10 years, molecular glues are going to occupy the number one place compared to traditional drug molecules, because, as I said, the effect of a molecular glue will be very high, 100 times more than a traditional drug molecule. So it is the number one priority for the next 10 years.

Nataraj Pagadala: And not only that, molecular glues are going to affect disease therapy, especially for Alzheimer's. In the next 10 years, there is a high chance that a molecular glue therapy will come into existence for Alzheimer's and inhibit its progression. That would be a stepping stone toward reversing Alzheimer's. If that happens in the next 10 years, trust me, molecular glue therapy will also reverse Parkinson's and ALS and all the devastating diseases, even cancer. We can definitely inhibit cancer progression by 30 to 40 percent. That increases the lifespan of patients and helps the families affected by these devastating diseases.

Grant Belgard: Is there a misconception about molecular glues that you wish you could correct for everyone listening?

Nataraj Pagadala: Actually, yes. People think that molecular glues are very difficult to design, and that they involve a lot of serendipity and off-target toxicity. That is what people think about molecular glues. But if you design properly, right from scratch, you can design a molecular glue with high target specificity. Over the last 10 years, this is what has been happening with molecular glues: whatever the intended target, people design glues but end up at the same targets repeatedly, showing degradation there, because there is a problem in how the molecular glues are designed. If you do it right from scratch, in a proper way, we can design molecular glues without off-target toxicity very easily.

Nataraj Pagadala: So the misconception is that molecular glues cannot be designed so easily. That misconception exists among different companies all over the world.

Grant Belgard: Finally, if listeners remember just one thing from this conversation, what would you want it to be?

Nataraj Pagadala: Yeah. At LigronBio, we are unlocking undruggable targets for Alzheimer's and other neurodegenerative diseases with molecular glues. That is where we are pioneers in molecular glue discovery.

Grant Belgard: And how can listeners or potential investors connect with you to learn more?

Nataraj Pagadala: So, basically, through email and through my website; all the information is given on the website. Please contact me if you want any kind of collaboration, or any help designing molecular glues with our TriMatrix Analyzer platform. I am here to help you in a very effective way: we can reduce the time and the cost of your research, and we can design a molecular glue for sure within less than 36 months. All the details are given on the website. Please contact me there, or my email is npagadala@ligronbio.com and my cell number is 412-863-3812. Reach out through any of these channels, and I will help you as much as I can. Thank you.

Grant Belgard: Nataraj, thank you for joining us.

The Bioinformatics CRO Podcast

Episode 72 with Sophia George

Sophia George, professor in the Division of Gynecological Oncology at the University of Miami Miller School of Medicine, discusses her research at the Sylvester Comprehensive Cancer Center investigating the genetics and biology of hereditary breast and ovarian cancer and working at the intersection of genomics, health equity, and cancer.


Sophia George

Sophia George is a professor in the Division of Gynecological Oncology at the University of Miami Miller School of Medicine and the principal investigator of the George Lab at the university’s Sylvester Comprehensive Cancer Center.

Transcript of Episode 72: Sophia George

Disclaimer: Transcripts may contain errors.

Coming Soon…

The Bioinformatics CRO Podcast

Episode 71 with Christiaan Engstrom

Christiaan Engstrom, founder and CEO of BLPN, discusses his experience building a space for authentic, non-transactional business networking in the life sciences.


Christiaan Engstrom

Christiaan Engstrom is founder and CEO of BLPN, an invite-only community for life science investors and senior executives to connect.

Transcript of Episode 71: Christiaan Engstrom

Disclaimer: Transcripts may contain errors.

Coming Soon…