F.D.A. to Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’. With a Trump-driven reduction of nearly 2,000 employees, agency officials view artificial intelligence as a way to speed drugs to the market. (Jun 2025)

EpilepsyFMTcure

F.D.A. to Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’

With a Trump-driven reduction of nearly 2,000 employees, agency officials view artificial intelligence as a way to speed drugs to the market.

https://www.nytimes.com/2025/06/10/health/fda-drug-approvals-artificial-intelligence.html

"For some cases, the F.D.A. officials proposed speeding major drug approvals by requiring only one major study in patients rather than two, a practice the agency has used in recent years. The pandemic provided a precedent, they said, for accelerating the process."



I'd be super curious whether this applies to biologics... Sorry if this article was already mentioned elsewhere, but I thought it was relevant. I saw a few other articles mention that this AI integration was also meant to bypass animal testing.

Anyways, we have more than enough studies showing FMT for many different diseases. This has the potential to benefit us. As far as I know there's two pharma companies that still offer FMT for C Diff, but if we think their goal would be to expand their range of diseases in order to sell their product, wouldn't they WANT FMT for off-label use? Is that a naive assumption? It's possible those companies only serve a gatekeeping role, I guess. Has anyone ever reached out to those two companies? For the record, I don't know what composition FMT they use or any specifics; just thought I'd ask.
 
F.D.A. to Use A.I. in Drug Approvals
I don't think this would have much of an impact on the FMT situation. The primary hurdles are here: https://forum.humanmicrobiome.info/threads/the-fda-and-fmt-regulation-mar-2024-humanmicrobes-org.303/post-1292

we have more than enough studies showing FMT for many different diseases
I think they require each "company" to run its own study, showing its product is safe and effective. So other FMT studies wouldn't have any impact on us.

As far as I know there's two pharma companies that still offer FMT for C Diff
What the Pharma companies sell is not FMT, it's derived from human stool. As noted on the blog, it's not even close to FMT, it's more like a probiotic that works to suppress C. diff.

UMN is the only place actually selling FMT, and as noted in other threads, it's one of the worst sources of FMT that exists.
 

F.D.A. to Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’

With a Trump-driven reduction of nearly 2,000 employees, agency officials view artificial intelligence as a way to speed drugs to the market.

https://www.nytimes.com/2025/06/10/health/fda-drug-approvals-artificial-intelligence.html

"For some cases, the F.D.A. officials proposed speeding major drug approvals by requiring only one major study in patients rather than two, a practice the agency has used in recent years. The pandemic provided a precedent, they said, for accelerating the process."



I'd be super curious whether this applies to biologics... Sorry if this article was already mentioned elsewhere, but I thought it was relevant. I saw a few other articles mention that this AI integration was also meant to bypass animal testing.

This sounds like complete empty hype. There have been data-driven, machine-learning-based predictors for toxicity, oral absorption, blood-brain barrier permeability, stability in blood, and similar ADME-type measures for years, and they haven't really made much of a difference in the amount of animal (and human) testing that needs to be done to prove a drug safe and effective.

A much more realistic innovation for making headway here is organoids and "organs on a chip" (this is not a microchip--i.e., it's not a computational technique--it's an actual experimental screen using real animal cells that are differentiated to mimic those in organs, rather than cells in conventional cell culture, which can behave quite differently). These actually have the advantage over animal models that, at least in principle, human cells can be used to make the organoids instead of testing on mouse or rat organs. The animal-to-human step still occasionally brings surprises--you can look up the Bial FAAH inhibitor trial in France for an example.

Almost all of this relates to classic small-molecule drugs, not to something like a microbiome therapy where the active ingredients are live organisms. Of course, getting bacteria in your cell culture will be detrimental, but they're not being injected into the blood; they stay in the gut. Gut organoids do exist; I once looked up whether anyone had tried to put a realistic artificial gut microbiome into them, and there are a few papers, but not many.

What the Pharma companies sell is not FMT, it's derived from human stool. As noted on the blog, it's not even close to FMT, it's more like a probiotic that works to suppress C. diff.
Why do you consider Rebyota "not FMT"? Is it simply the fact that it's pooled-donor and not single-donor? From everything I can find, it contains the full spectrum of bacteria from the stool; it's not treated in any way to remove or select out certain species to the exclusion of others--thus I would consider it FMT. Vowst, on the other hand, is definitely NOT a "real FMT", as it's spore-only.
 
This sounds like complete empty hype.
Agree. It's worrisome that they say things like that instead of focusing on concrete, major opportunities that they're already well aware of -- the gut microbiome and FMT.

Why do you consider Rebyota "not FMT"? Is it simply the fact that it's pooled-donor and not single-donor?
Responded here: https://forum.humanmicrobiome.info/threads/finch-wins-us-jury-trial-against-ferring-over-fecal-transplant-patents.539/post-2731
 
A much more realistic innovation for making headway here is organoids and "organs on a chip" (this is not a microchip--i.e., it's not a computational technique--it's an actual experimental screen using real animal cells that are differentiated to mimic those in organs, rather than cells in conventional cell culture, which can behave quite differently). These actually have the advantage over animal models that, at least in principle, human cells can be used to make the organoids instead of testing on mouse or rat organs. The animal-to-human step still occasionally brings surprises--you can look up the Bial FAAH inhibitor trial in France for an example.

I'll check this out; sounds interesting. Something I've come across a lot in studying how the gut biome affects the body/chronic illness is IL-6 (interleukin-6). My understanding is that some strains of bacteria stimulate this cytokine in the body, which leads to things like IBS, arthritis, and so on. What I'm hoping DOESN'T happen is that this small-molecule approach skips over the step where bacteria interact with the gut lining, in favor of a "treating the symptoms" approach where they just say, "well, yeah, we can block IL-6, no need to look into why it's elevated." If they use an "uncensored" version of AI and use the right prompting, they might get real answers, but who knows.
 
" If they use an "uncensored" version of AI and use the right prompting, they might get real answers, but who knows.

Why would this have anything to do with being "censored" or not? AI "censoring" refers to preventing the model from returning unethical, bigoted, or harmful outputs (e.g., advocacy of suicide and/or violence), or, in the case of image models, generating NSFW images. I have never heard of an AI model being censored to omit scientific research findings that the creator doesn't like, except possibly when this research has direct socio-political implications (like vaccine misinformation).

The real issue here is that the research that's out there is incredibly self-contradictory regarding the links between specific bacteria and diseases.
 
except possibly when this research has direct socio-political implications (like vaccine misinformation).

Whoever programs the AI, as well as how the prompting is phrased, will determine the output to a large extent. This could probably get its own entire thread, but for the sake of simplicity, I think the vaccine-misinformation censorship was a really good example of a well-meaning idea going off the rails. Censorship in any form is super risky and leads to a LOT of blind spots.


An applied example:

If I ask AI to give me the cure to epilepsy, it will return "There is no known cure to epilepsy, here is a range of treatment options."

If I ask AI "based on case studies of spontaneous remission, what types of treatments or circumstances have led to a cure of epilepsy diagnosis?" AI returns the following study:

Fecal microbiota transplantation cured epilepsy in a case with Crohn's disease: The first report


https://pubmed.ncbi.nlm.nih.gov/28596693/


So my point is that the way AI is asked will greatly determine the outcome, and while a ton of studies are contradictory, there are ways to sift through that, like telling AI to ignore studies with conflicts of interest via funding, or from scientists who have received payments in one form or another from Big Pharma, or who have worked too closely with organizations that benefit from treatments rather than cures.

As for studies that conflict with one another in terms of specific bacterial strains, I think that's slightly more granular than it will get, and it could lead to the same issue of "strain X causes disease Y, therefore ANTIBIOTICS!!" Luckily, I think everyone knows antibiotics are best avoided now and will buffer against that conclusion.
 
An applied example:

If I ask AI to give me the cure to epilepsy, it will return "There is no known cure to epilepsy, here is a range of treatment options."

If I ask AI "based on case studies of spontaneous remission, what types of treatments or circumstances have led to a cure of epilepsy diagnosis?" AI returns the following study:

Fecal microbiota transplantation cured epilepsy in a case with Crohn's disease: The first report

https://pubmed.ncbi.nlm.nih.gov/28596693/


So my point is that the way AI is asked will greatly determine the outcome, and while a ton of studies are contradictory, there are ways to sift through that, like telling AI to ignore studies with conflicts of interest via funding, or from scientists who have received payments in one form or another from Big Pharma, or who have worked too closely with organizations that benefit from treatments rather than cures.

Does it return ONLY that study though, or (as I would suspect) that study in a long list of other studies?

The factor at play here has nothing to do with censorship. It has to do with the models associating collections of words in a certain order. The model doesn't actually know what "curing epilepsy" is--it goes off of which texts match your query "better than chance". Many texts will mention a "cure for epilepsy", usually next to text stating that such a cure is not known. Whereas if you include the phrase "spontaneous remission", then presumably the study you mentioned here describes that, whereas those other texts that mention the lack of a cure DON'T mention it (or mention it "less prominently", by the model's own way of determining what the key phrases in a text are).

Someone mentioned having gotten different results from ChatGPT to the queries "fastest flying mammal", "what is the fastest flying mammal", and "tell me the fastest flying mammal please", only one of which was correct (the others were either birds rather than mammals, not the fastest, or both). There is no incentive for an AI developer to censor information on fast-flying mammals, it's just that articles on the web that gave the correct answer tended to use phrasing more like one of these examples than the ones that didn't.
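To make the word-matching point concrete, here's a deliberately crude sketch (hypothetical document snippets, and bag-of-words cosine similarity rather than the learned embeddings a real LLM or search engine uses) showing how changing the phrasing of a query flips which text "matches best":

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words representation: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two hypothetical texts standing in for the web corpus.
docs = {
    "no_cure": "there is no known cure for epilepsy only treatment options",
    "fmt_case": "fecal microbiota transplantation led to spontaneous "
                "remission of epilepsy in a case report",
}

def best_match(query):
    q = bow(query)
    return max(docs, key=lambda name: cosine(q, bow(docs[name])))

# A generic query overlaps most with the "no known cure" text;
# adding "spontaneous remission" shifts the best match to the case report.
print(best_match("cure for epilepsy"))                          # no_cure
print(best_match("spontaneous remission epilepsy treatments"))  # fmt_case
```

Nothing here is "censored"; the ranking flips purely because the second query shares more words with the case-report text.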
 
Does it return ONLY that study though, or (as I would suspect) that study in a long list of other studies?

It will return other studies (animal models) depending on how I phrase the question. But I like your thinking, and I'm going to try many variants of questions from now on to see what range of results I get. I do think a lot of models restrict saying anything is a "cure", specifically because of liability: if it tells someone that something is a cure and they go out and take a bunch of it and overdose, there's likely a lawsuit coming down the line. I used to think it was Big Pharma paying the models to hide cures, and I still believe that is entirely possible, but your perspective is the most likely one in an "Occam's razor" kind of way. I like to keep things on the "maybe shelf", in that I don't write it off as impossible that they'd go out of their way to hide certain info for long-term financial gain. This weekend I'll start a thread on AI research in FMT.
 
I don't see how using AI in this way would provide any more information than one would get by merely searching the literature using a conventional search engine. LLMs have pretty conclusively demonstrated their inability to rank the relative credibility of sources better than a human could.

Where an AI model might be able to provide some insight is to infer relationships between species in large-scale microbiome sequencing studies. For example, one could try to link shotgun metagenomic data with species abundances, in order to infer the missing gene expression of species in the studies that solely measured species counts. This might contribute to FMT in the future by allowing better sorting of donors into microbiome community types.
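The kind of relationship-inference described above usually starts with a co-occurrence screen across samples. This is a toy sketch with simulated data (the sample counts and "shared driver" are entirely made up): two species tracking a common ecological driver show up as correlated, while an independent species does not. Real microbiome analyses need compositionality-aware methods (e.g., SparCC or SPIEC-EASI) on top of this, since naive correlations on relative abundances are biased.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
n = 200  # hypothetical number of stool samples

# A shared driver (e.g., diet or pH) plus per-species noise.
driver = [rng.gauss(50, 7) for _ in range(n)]
species_a = [d + rng.gauss(0, 2) for d in driver]  # tracks the driver
species_b = [d + rng.gauss(0, 2) for d in driver]  # also tracks the driver
species_c = [rng.gauss(50, 7) for _ in range(n)]   # independent species

print(round(pearson(species_a, species_b), 2))  # high, close to 1
print(round(pearson(species_a, species_c), 2))  # near 0
```

With enough samples, this kind of screen is the raw material for sorting donors into microbiome community types; the hard part is the statistics (compositional bias, sparsity), not the correlation arithmetic.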

As far as analyzing the success of different types of FMT donors for different recipients and/or diseases, the amount of data is almost certainly far too low for AI models to offer much, if any, advantage over more traditional methods of analysis, and none of it is likely to be significantly more informative than the kind of connections we here on this forum can make from different people's experiences with different donors.
 