Letter 1 regarding “Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma”

Article information

Clin Mol Hepatol. 2024;30(1):111-112
Publication date (electronic) : 2023 October 13
doi : https://doi.org/10.3350/cmh.2023.0394
1Private Academic Consultant, Phonhong, Lao People’s Democratic Republic
2Research Center, Chandigarh University, Mohali, India
3Department of Biological Science, Joseph Ayobabalola University, Ikeji-Arakeji, Nigeria
Corresponding author : Hinpetch Daungsupawong Private Academic Consultant, Phonhong, 10000 Lao People’s Democratic Republic E-mail: hinpetchdaung@gmail.com
Viroj Wiwanitkit Research Center, Chandigarh University, Mohali, Punjab, 140413 India Tel: +91 1800 121 288 800, E-mail: wviroj@yahoo.com
Editor: Ji Won Han, Catholic University of Korea, Korea
Received 2023 October 5; Revised 2023 October 9; Accepted 2023 October 11.

Dear Editor,

Regarding the study “Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma” [1], the objective was to evaluate ChatGPT’s precision and consistency in providing information, offering management advice, and delivering emotional support to patients with cirrhosis and hepatocellular carcinoma (HCC). The researchers graded ChatGPT’s responses to 164 questions, evaluated its effectiveness using questionnaires and quality metrics, and assessed its capability to provide emotional support.

The study’s findings revealed that ChatGPT had a thorough understanding of cirrhosis (79.1% correct) and HCC (74.0% correct); however, only a small percentage of its responses were considered comprehensive. It performed better in the areas of fundamental knowledge, lifestyle, and treatment than in diagnosis and preventive medicine. ChatGPT successfully responded to 76.9% of the quality-measure queries; however, it failed to provide precise decision-making cutoffs and durations of treatment. Additionally, the model lacked understanding of regional variations in guidelines, such as HCC screening standards. Regarding subsequent steps and adapting to a new diagnosis, it did offer patients and caregivers useful guidance and support.

This study’s weakness is that it assessed ChatGPT’s effectiveness only in providing help and information regarding cirrhosis and HCC. Therefore, the generalizability of the study’s findings to other medical conditions or specialties is limited. The study also did not examine potential biases or limitations of ChatGPT’s answers, such as the reliability of the information it provides or the sources of its training data.

Further research is needed to assess ChatGPT’s effectiveness across different medical specialties and conditions and to address the identified limitations. For the model to be successfully integrated into patient care, it is essential to assess how well it can comprehend and interpret complicated medical information, fill in knowledge gaps, and guarantee the correctness and dependability of its responses.

Advanced algorithms and substantial training sets are essential to minimize biases and errors in chatbots [2], because relying on a single major data source can skew the model’s outputs. Chatbot use also presents ethical questions due to the potential for unexpected or undesirable outcomes. To prevent the spread of false information and harmful ideas, ethical standards and restrictions must be put in place as artificial intelligence language models continue to develop.


Authors’ contribution

Hinpetch Daungsupawong 50% ideas, writing, analyzing, approval. Viroj Wiwanitkit 50% ideas, supervision, approval.

Conflicts of Interest

The authors declare no conflicts of interest.


References


1. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol 2023;29:721–732.
2. Kleebayoon A, Wiwanitkit V. Artificial intelligence, chatbots, plagiarism and basic honesty: Comment. Cell Mol Bioeng 2023;16:173–174.
