Letter to the Editor

Accountability in AI medicine: A critical appraisal of ChatGPT in patient self-management and screening

Clinical and Molecular Hepatology 2025;31(1):e1-e2.
Published online: September 26, 2024

Jiawen Wang1, Tian Xia2

1Department of Urology, Fujian Provincial Hospital, Fuzhou University Affiliated Provincial Hospital, Shengli Clinical Medical College of Fujian Medical University, Fuzhou, Fujian, China

2Department of Urology, Xuzhou Central Hospital, Xuzhou, Jiangsu, China

Corresponding author : Tian Xia Department of Urology, Xuzhou Central Hospital, No. 199, Jiefang South Road, Quanshan District, Xuzhou, Jiangsu, 221009, China Tel: +86-0516-83956900, Fax: +86-010-83956365, E-mail: 542819434@qq.com
Jiawen Wang Department of Urology, Fujian Provincial Hospital, Fuzhou University Affiliated Provincial Hospital, Shengli Clinical Medical College of Fujian Medical University, No. 134, Dongjie Street, Fuzhou, Fujian, 350001, China Tel: +86-0591-87557768, Fax: +86-0591-87532356, E-mail: 1811210684@pku.edu.cn

Editor: Gi-Ae Kim, Kyung Hee University, Korea

• Received: September 9, 2024   • Accepted: September 24, 2024

Copyright © 2025 by The Korean Association for the Study of the Liver

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Dear Editor,
We have read with great interest the recent publication by Yeo et al. [1], which presents an optimistic view of the potential for ChatGPT in responding to patient queries about cirrhosis and hepatocellular carcinoma (HCC). The study suggests that ChatGPT exhibits promising capabilities, which could contribute to increased awareness and enhanced management efficacy of these conditions. The authors propose that, with adequate training, language generation systems like ChatGPT could be further optimized to improve their performance in the context of patient self-management.
While we commend the authors for their insightful work, we feel compelled to raise some pertinent concerns. Specifically, we question who is accountable for inaccuracies in the responses provided by ChatGPT. A previous study found that, when compared with clinical guidelines, only 26% of ChatGPT's responses to clinical questions were fully accurate, while 48% contained errors or were misleading [2]. This discrepancy could result in a substantial number of patients receiving misleading information, with potentially grave consequences.
Despite the significant advancements in ChatGPT’s conversational and interactive capabilities, the potential for factual inaccuracies remains [3]. It is imperative that human stakeholders take on the responsibility to ensure ChatGPT’s effectiveness and appropriate use in healthcare settings.
When medical decisions influenced by ChatGPT result in harm to individuals, the issue of accountability becomes particularly critical, and the principles of accountability must be clearly defined. However, the current legal frameworks in most jurisdictions offer little clarity on this issue, and it is uncertain whether traditional product liability theories apply to ChatGPT. Although medical software is classified as a medical device in courts across Europe and the United States, developers may categorize ChatGPT as a service rather than a product to evade liability, complicating the process for patients seeking redress. It must therefore be clarified who bears responsibility when ChatGPT is involved in medical decision-making and causes harm: the service provider, the technology developer, or the user. This determination may require a joint assessment of the roles of the algorithmic model, the training data, and the user input in the tortious act, based on the specific circumstances.
Moreover, it is essential to establish a risk transfer mechanism. Drawing on the insurance systems established in some countries, the socialized transfer of risks can be achieved through the purchase of commercial insurance, ensuring that victims receive timely compensation when liability is difficult to define.
Finally, ethical and legal training is indispensable. Medical professionals must be trained on the ethical and legal frameworks concerning artificial intelligence to ensure they can make responsible decisions when using ChatGPT.
We are supportive of the potential application of ChatGPT in the medical field. Nevertheless, we believe that until ethical concerns, such as accountability, are adequately addressed, clinicians should refrain from endorsing ChatGPT for patient self-management.

Authors’ contribution

Jiawen Wang: Writing - Original Draft. Tian Xia: Writing - Review & Editing.

Conflicts of Interest

The authors have no conflicts to disclose.

Abbreviations

HCC: hepatocellular carcinoma
References

1. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol 2023;29:721-732.
2. Lombardo R, Gallo G, Stira J, Turchi B, Santoro G, Riolo S, et al. Quality of information and appropriateness of Open AI outputs for prostate cancer. Prostate Cancer Prostatic Dis 2024. doi: 10.1038/s41391-024-00789-0.
3. Whiles BB, Bird VG, Canales BK, DiBianco JM, Terry RS. Caution! AI bot has entered the patient chat: ChatGPT has limitations in providing accurate urologic healthcare advice. Urology 2023;180:278-284.
