Clinical and Molecular Hepatology



Original Article
Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma
Yee Hui Yeo1, Jamil S. Samaan1, Wee Han Ng2, Peng-Sheng Ting3, Hirsh Trivedi1,4, Aarshi Vipani1, Walid Ayoub1,4, Ju Dong Yang1,4,5, Omer Liran6,7, Brennan Spiegel1,7  , Alexander Kuo1,4 
1Karsh Division of Gastroenterology and Hepatology, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
2Bristol Medical School, University of Bristol, Bristol, UK
3School of Medicine, Tulane University, New Orleans, Louisiana, USA
4Comprehensive Transplant Center, Cedars-Sinai Medical Center, Los Angeles, California, USA
5Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
6Department of Psychiatry and Behavioral Sciences, Cedars-Sinai, Los Angeles, California, USA
7Division of Health Services Research, Department of Medicine, Cedars-Sinai, Los Angeles, California, USA
Corresponding authors: Brennan Spiegel, Email:
Alexander Kuo, Email:
Received: March 3, 2023; Revised: March 20, 2023; Accepted: March 21, 2023.
Background & Aims
Patients with cirrhosis and hepatocellular carcinoma (HCC) require extensive and personalized care to improve outcomes. ChatGPT (Generative Pre-trained Transformer), a large language model, holds the potential to provide professional yet patient-friendly support. We aimed to examine the accuracy and reproducibility of ChatGPT in answering questions regarding knowledge, management, and emotional support for cirrhosis and HCC.
Methods
ChatGPT’s responses to 164 questions were independently graded by two transplant hepatologists, with discrepancies resolved by a third reviewer. The performance of ChatGPT was also assessed using two published questionnaires and 26 questions formulated from the quality measures of cirrhosis management. Finally, its emotional support capacity was tested.
Results
ChatGPT demonstrated extensive knowledge of cirrhosis (79.1% correct) and HCC (74.0% correct), but only small proportions of responses (47.3% for cirrhosis, 41.1% for HCC) were graded as comprehensive. Performance was better in the domains of basic knowledge, lifestyle, and treatment than in diagnosis and preventive medicine. For the quality measures, the model answered 76.9% of questions correctly but failed to specify decision-making cut-offs and treatment durations. ChatGPT also lacked knowledge of regional guideline variations, such as HCC screening criteria. However, it provided practical and multifaceted advice to patients and caregivers regarding next steps and adjusting to a new diagnosis.
Conclusions
We characterized the strengths and limitations of ChatGPT’s responses regarding the management of cirrhosis and HCC and the provision of related emotional support. ChatGPT may have a role as an adjunct informational tool for patients and physicians to improve outcomes.

Keywords: artificial intelligence, accuracy, reproducibility, patient knowledge, health literacy
