Thursday, November 9, 2023

AI Corner: Wow, Now, Direct Upload of Science PDFs into ChatGPT 4

All year, I've been using ChatGPT 4 to summarize things.  The problem was, it was limited to about 2,000 words at a go.  (There are "splitters" that will, for example, divide an 8,000-word text into 4 chunks.)
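
(For the curious, a "splitter" is nothing fancy.  Here's a minimal sketch in Python, my own illustration rather than any particular tool, that just breaks a long text into roughly 2,000-word pieces you can paste in one at a time.)

# Minimal "splitter" sketch: break a long text into ~2,000-word chunks
# so each chunk fits within a single prompt.
def split_into_chunks(text: str, words_per_chunk: int = 2000) -> list[str]:
    words = text.split()
    return [
        " ".join(words[i:i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

# Example: an 8,000-word article comes back as 4 chunks.
# chunks = split_into_chunks(open("article.txt").read())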

OpenAI may not update all users simultaneously, but I just got a cool upgrade feature.

My ChatGPT subscription now comes with a little "paper clip" icon next to the prompt box.  For a while, this symbol was just for JPEG uploads [see my Oct 4 blog].  Now it's a paper clip and accepts PDFs (it still accepts JPEGs, too).  (See the paper clip illustration at the bottom of this blog.)


Short Example - NEJM FDA LDT Essay (Singhal)

Below is an example of GPT-4 "reading" a document: I gave it the new NEJM op-ed about FDA regulation of LDTs.


You

Please summarize this paper in 75 words or less.  The paper is about FDA regulation of lab developed tests and appears in NEJM.

 ChatGPT

The paper in NEJM addresses the modernization of federal oversight of lab-developed tests (LDTs). It discusses the evolution of LDTs due to advances in precision medicine and their increasing complexity and availability. The paper criticizes the lack of sufficient federal regulation, noting that many LDTs operate without FDA review, leading to inconsistent performance and potential safety issues. The authors advocate for a risk-based regulatory framework, such as the proposed VALID Act, to ensure LDTs' safety, validity, and utility, balancing the need for innovation against the risk of harm from unvalidated tests. [91 words]

You

While the authors are somewhat supportive of the FDA proposal to regulate LDTs, what are the authors' biggest concerns about harms or risks or ill-thought-out aspects of the FDA proposal?

ChatGPT

The authors express concerns about the proposed FDA regulation of lab-developed tests (LDTs), highlighting potential harms and risks. They point out that while the FDA's proposal to classify LDTs as medical devices could enhance safety and effectiveness, it faces a lengthy rulemaking process and potential legal challenges. The rule's limited scope means that future legislation would be required to strengthen test-performance standards and give the FDA greater authority, especially for high-risk LDTs. There is also an emphasis on the need for inclusion of marginalized and underserved communities in test development and validation to ensure generalizable results.

The authors also stress the importance of post-marketing review and robust postmarketing surveillance to facilitate the collection of adverse-event data, identify additional safety issues, and ensure that LDTs perform as intended. They worry that FDA regulation might impose a burden on laboratories, potentially decreasing innovation, but they argue that meaningful patient-centered innovation depends on consistency, accuracy, reliability, and reproducibility.

To address these concerns, they suggest increasing transparency through labeling and public listing of tests, creating public repositories with information on test performance, and implementing mechanisms for post-marketing review. They also propose allowing the FDA to partner with third-party quality-review programs for lower-risk LDTs and developing complementary state-level programs. Despite these challenges, the authors believe that FDA oversight is crucial and that a unified regulatory framework focused on safety, validity, and utility is required.

___

  • Ask questions!
    • Note that just asking it to "summarize" is a big plus.  However, GPT gets really interesting when you expand into a series of questions and "discuss" a document with it.
  • Free version still works too.
    • While I've focused on the super-swift "paper clip" PDF upload in the paid GPT-4 subscription, in free GPT-3.5 or other free AI tools you can still "dump" text into the question space (as I was doing until yesterday) and get on the same path.  (See the sketch just after this list.)
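
(For API-minded readers, here's a minimal sketch of that same "dump the text in" idea, scripted in Python with the OpenAI library and pypdf.  This is my own illustration, not an official recipe; the file name and model name are placeholders, and a very long paper may still need the splitter shown earlier.)

# Minimal sketch: pull the text out of a PDF and ask for a short summary.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI      # pip install openai
from pypdf import PdfReader    # pip install pypdf

reader = PdfReader("paper.pdf")   # placeholder file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",   # placeholder; use any chat model you have access to
    messages=[{
        "role": "user",
        "content": "Please summarize this paper in 75 words or less.\n\n" + text,
    }],
)
print(response.choices[0].message.content)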

___

The exciting new function is signaled by just the little paper clip symbol!




Bonus Example: Carobene, Labs, AI 2023

Here's a second, bonus example.

AI READS PDF:

The document is an opinion paper titled "Where is laboratory medicine headed in the next decade? Partnership model for efficient integration and adoption of artificial intelligence into medical laboratories." It appears in the Clinical Chemistry and Laboratory Medicine journal, 2023, volume 61, issue 4, pages 535–543.

The authors, Anna Carobene and colleagues, aim to address the integration of Artificial Intelligence (AI) into laboratory medicine, noting that while AI studies are prevalent in areas like surgery, radiology, and oncology, there's less attention to its application in laboratory medicine.

They summarize the content presented at the 3rd annual European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) strategic conference in 2022, dedicated to AI in future laboratories. The paper distills five key messages from the conference:

  1. Laboratory specialists and technicians are expected to continue enhancing diagnostic quality and laboratory turnaround times.
  2. The modular nature of laboratory processes is suitable for AI solutions.
  3. There's a growing complexity in laboratory sub-specialization, from test selection to interpretation tasks.
  4. Expertise in AI implementation and industry partnerships will become essential professional competencies, necessitating novel educational strategies.
  5. Regulatory frameworks must evolve to accommodate new computational paradigms in AI.

The paper suggests that the successful adoption of AI in laboratory medicine will depend on hands-on expertise and quality improvement initiatives within laboratories to improve patient care.

Sections in the paper include various perspectives on the impact of AI on laboratory diagnostics, the significance of tailored diagnostics, the importance of laboratory diagnostics in clinical decision-making, and the need for regulation in AI/ML applications in the laboratory setting. The paper concludes that laboratory specialists must adapt to and adopt AI due to its increasing relevance and potential to enhance diagnostic quality, necessitating collaboration between data scientists, clinicians, and the IVD industry.

The authors advocate for a careful and risk-aware approach towards AI's limitations to avoid compromising patient safety and emphasize the need for effective collaboration and regulation to ensure the safe and effective use of AI and ML models in patient care.

##

Extra-Long Example - Sidebar

I uploaded a recent, complex molecular medicine paper from Nature Medicine (Baca et al.).  If you'd like to see an extended "dialog" with ChatGPT about a medical article, see the link to the sidebar.

##

ChatGPT Nerd Corner: Avoiding Bing

Recently, ChatGPT added built-in web search, whether you want it or not.  I don't always want that; I liked getting a little "essay" from ChatGPT rather than just having GPT-4 go to Wikipedia and spit out the top fact.  I can do that myself.

Under my user name, found at the bottom of the left bar, there's an option for "My GPT," one function of which is to give you "GPT-4 Classic" (lacking Bing search).  I may use that sometimes.