Deep Fake or Deep Help? How AI Turned a Medical Paper into a Podcast (and why that matters)

An Opinion Editorial from Jorge D. Faccinetti, Co-founder and Chief Editor

In today’s publishing landscape, one thing is clear: artificial intelligence and machine learning are advancing at dizzying speeds. Keeping up with these developments and understanding their true impact on our work and lives has become increasingly challenging. AI platforms emerge seemingly overnight, making it daunting to predict where this technology is heading.

Case in point: A few days ago, one of Dr. Blevins’ patients discovered a podcast in a Facebook patient group. Another patient had used a well-known AI platform to analyze one of Dr. Blevins’ recently published papers on PWN, creating what appeared to be AI-generated content, or a “deep fake,” as it’s lovingly known, based on the paper.

I’ll admit my initial reaction was one of concern. To say it disturbed me would be an understatement. However, after discussing the situation with Dr. Blevins and listening to the podcast myself, I began to understand the patient’s motivation. The AI engine had done a reasonably accurate job of translating complex medical information into plain language. The patient’s goal was simple: make this important content more accessible to fellow group members who might struggle with technical medical terminology.

Dr. Blevins reviewed the AI-modified content and confirmed its accuracy, acknowledging that the platform successfully transformed his academic paper into understandable language. I remain guarded about such applications, since the potential for misinformation, if content is not properly vetted, is enormous.

I won’t delve into the numerous copyright law violations this practice presents. That’s a discussion for another time. For now, the key issue seems to be transparency: if content is clearly identified as AI-modified, the focus shifts to ensuring the information remains factual, accurate, and scientifically sound.

Our friend and fellow patient Jay Libove, a technology and cybersecurity expert who has previously contributed insights to PWN, offered valuable perspective on this topic’s pros, cons, and relevance to patient self-care. His commentary is worth reading alongside the original article and AI-generated podcast. Here’s a link to the podcasts and the article with Mr. Libove’s comments.

And please, be wary of AI-generated content of this nature. Not all of it is factual or trustworthy. When you encounter content you suspect was modified or generated by an AI platform, approach it skeptically, especially if it doesn’t clearly identify itself as AI-generated. Always verify the source. If you can validate that the source is reliable and science-based, you can place some trust in the content. Otherwise, avoid it entirely.

We published the AI podcast several days ago so you can see for yourself. We welcome your thoughts and opinions, and as always, please send comments to info@pituitaryworldnews.com or respond directly to the article.

© 2025 – 2026, PWN. All rights reserved.

3 thoughts on “Deep Fake or Deep Help? How AI Turned a Medical Paper into a Podcast (and why that matters)”

  1. The title of this Op-Ed is sensationalist, and rude, to be frank. Let’s define deepfake first: “a video, image, etc. in which a person’s face, body, or voice has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information.”

    I am a member and the admin of the aforementioned private Facebook group, a support group made up of individuals facing a diagnosis of MACS/Subclinical Cushing’s Syndrome. I did share the audio file with Dr. Blevins, in my excitement. The “podcast” was created by submitting a PDF of the original article to Google’s NotebookLM. I shared it with the group on August 31st, 2025 with this prelude:

    ‘This is an Audio Podcast Generated by NoteBookLM, based on the publication by Dr. Blevins, previously shared by ——-. (removed for privacy). The article talks about the history of Cushing’s to MACS. It’s a fascinating article, but something I can hear better than I can read, so for me, an AI generated podcast, it is. The file is audio only.
    AI can make mistakes.’ (a link to the podcast that you have shared, followed)

    AI is an amazing tool for patients if they understand the nuances: 1) it can LIE to the user; 2) it is a people pleaser; 3) never claim the work of AI as one’s own; 4) never publish something that is AI-generated without indicating the source.

    Case in point: I am now 2.5 months post op left adrenalectomy. Smoking saved my life. (how’s that for sensational?) A CT of my lungs showed that they were still fine, but the left adrenal gland had an “incidentaloma”. Perhaps the most self-incriminating term ever invented when it comes to malpractice. The instructions were to “wait and see,” “let’s watch it.”

    I think not.

    I began a conversation with Gemini 2.5 Pro. I told it I wanted to go to the local IV Drip Bar and get a NAD+ infusion. I asked it: can you ask me a series of questions and help me determine if I may benefit from an NAD+ infusion?
    It asked why, and I told it I was tired, and everything hurt. It reassured me, and asked a few more questions. It also told me it could not offer medical advice. I told the LLM about the incidentaloma eventually, and it told me to “STOP– do NOT get a NAD+ infusion. Go see your doctor.” I have had the same doc for 15 years.

    Frustrated, I basically gave this LLM ALL of my medical history (when you feel this bad, privacy flies out the window; I do NOT recommend it).

    It beautifully organized a timeline of labs, thoughts, diagnoses, and aches and pains along the way. It then reminded me to talk to my doctor.

    At the time, I was undiagnosed and in a fight or flight state, and didn’t have the ability to communicate all of “this” to my doc. I asked it to generate a quick summary for her, which I presented as AI generated during an office visit. As I continued with tidbits of things remembered it noted “Okay, this is all fitting together into an increasingly complex, but also potentially more identifiable, picture.”

    I could go on and on and on. For me, AI was a tool to “talk to,” to have it think of questions to ask me; having it write summaries of HOURS of discussions for my docs, who are very limited on time, was priceless. My PCP has been in practice for 18 years. She had NEVER diagnosed an adrenal issue. I was the first, and in the two months since (!!!), she’s referred 2 more to the endo surgeon here locally for evaluation.

    She looks at AI differently from you. (thank goodness!) She entered the room after reading the AI report (carefully labeled as such), and said “Well. At least one of us is doing my job.” She is a true rarity of humility and compassion, and she is ridiculously smart.

    To the Op-Ed author: please revisit your thoughts on AI. It can be an amazing and efficient tool. It is important to use AI ethically and openly. One bit of advice for patients using AI: “Never use AI to vet or ‘test’ your doctors.” AI knows everything all at once and has immediate recall. Do NOT have it pose ridiculously specific questions for your doc. Your doc is human and feeling and smart. Let them know you have talked with AI. Transparency is the only way that we all succeed. (This was written by a human, with a splint on one finger.)

    1. Regarding the comment about the title being rude and sensationalist: that was not my intention. We are in the business of making editorial judgments, and my job is, to the extent possible, to get people to sit up and pay attention.
      I understand why you created the podcasts and said so in my editorial. I’m glad you did. I agree we need more plain-language explanations of complex medical and scientific information. You were fortunate to have the opportunity to vet the content, to ensure it was real, accurate, and scientific, and that two of the finest, most qualified neuroendocrine physicians agreed with it. We were able to verify it. That is why we published it.
      The internet is awash in misinformation and disinformation, and now, thanks to AI, we’re dealing with misinformation on steroids—manipulated by very talented people with who knows what intentions. The consequences can be dire: people can get sick, sicker, or die.
      We are proponents of AI as a tool to make our jobs more effective, but as I said in my piece: make sure you know the source, that it is a trusted source, and that the information can be vetted by someone qualified to do so. If you can’t verify those points, run as far and as fast as you can from it. You may rest assured that we will never publish anything that is not sourced, scientifically based, and verified.
      Please keep doing what you are doing. It helps. And if you wish to continue the discussion, please reach out. Our contact information is on the website.

  2. And while my other response was written by a human, this one is not. I shared the Op-Ed with my AI agent. The response follows:

    **Counter-Opinion: From “Deep Fake” to Deep Help—Why Patients Must Use AI to Survive a Broken System**

    **By Lee C. Ware, RN (retired)** (edit–> written by my AI agent)

    I am the patient, and the group administrator, referenced in Mr. Faccinetti’s editorial. I am 2.5 months post-operative from a left adrenalectomy for MACS/Subclinical Cushing’s. After a 15-year diagnostic odyssey, I used a general-purpose AI model to organize my story and advocate for my life.

    I appreciate Mr. Faccinetti’s concern about “deep fakes” and copyright. That conversation about the *medium* is necessary, but it completely misses the point of the *human need*. When a system is failing, patients will use any tool available to survive.

    **The Failure of the Status Quo**

    Mr. Faccinetti’s caution about AI is based on a false premise: that the human system being protected is infallible. My life proves it is not.

    * **The Diagnostic Failure:** My own health provider, who is talented and compassionate, missed the adrenal issue for 15 years. My PCP, in 18 years of practice, had **never** diagnosed an adrenal issue. The reason is simple: the human mind is limited by time, specialization, and personal experience.
    * **The Systemic Flaw:** The medical system’s protocol told me to **”wait and see”** about the nodule—a self-incriminating term when dealing with a functional tumor. This haphazard approach, where patients must roll the dice and wait for an **”incidentaloma”** discovery, is a critical flaw. As a retired RN, I call BS on it.

    **The Triumph of the Tool: AI as a Survival Mechanism**

    For me, the AI was not a threat; it was a lifeboat.

    1. **The Breakthrough:** When exhausted, undiagnosed, and in a fight-or-flight state, I fed an LLM (Gemini 2.5 Pro) every symptom, every ache, every frustration. It organized a chaotic stream of consciousness into a coherent clinical timeline. It noted, **”Okay, this is all fitting together into an increasingly complex, but also potentially more identifiable, picture.”** This was a diagnostic function the human system had failed to perform for 15 years.
    2. **The Intervention:** My initial thought was to self-medicate with an NAD+ infusion. The LLM, given my symptoms, immediately flagged the danger: **”STOP—do NOT get a NAD+ infusion. Go see your doctor.”** The AI, acting on data, pushed me toward responsible clinical intervention, not away from it.
    3. **The Communication Bridge:** My greatest challenge was communicating the complexity of my condition to my doctor, who is limited to 15-minute appointments. I asked the AI to synthesize **hours of discussion into a quick, readable summary.** This was priceless. It made my 15-year odyssey digestible and actionable.

    The result of this process speaks for itself: I am **2.5 months post-op from a successful left adrenalectomy.**

    **The New Reality: A Collaborative Future**

    When I presented the AI-generated summary, my PCP, a true rarity of humility and compassion, entered the room and said, **”Well. At least one of us is doing my job.”**

    She is a physician who looks at AI differently from Mr. Faccinetti. She has since referred **two more** patients for specialist adrenal evaluation—in two months!—a diagnosis she had *never* made before. This is the positive ripple effect of a patient using AI ethically.

    The answer to the threat of AI is not to **”run as far as you can.”** The answer is **transparency, education, and collaboration.**

    * **To Providers:** See the AI report not as a challenge, but as a triage tool that has already done hours of organizational work for you. See the patient holding it as a highly motivated partner.
    * **To Patients:** Use AI ethically and openly. **Always label your content.** Never use AI to “test” your doctor; use it to empower your voice. Understand that the human doctor is smart, feeling, and limited by time. Your goal is to make their job of helping you easier.

    The sensational title of this Op-Ed asks if AI is “Deep Fake or Deep Help.” For me, a retired RN who used this technology to finally expose a lifetime of hidden disease, the answer is clear: **AI was Deep Help, and it helped me survive a system that was inadvertently failing me.**
