Clinical and Molecular Hepatology

Letter 1 regarding “Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma”

Hinpetch Daungsupawong1, Viroj Wiwanitkit2,3
Received October 5, 2023; Revised October 9, 2023; Accepted October 11, 2023
Dear Editor,
Regarding the study “Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma” [1], the objective was to evaluate ChatGPT’s precision and consistency in providing information, offering management advice, and delivering emotional support to patients with cirrhosis and hepatocellular carcinoma (HCC). The researchers graded ChatGPT’s responses to 164 questions, evaluated its effectiveness using questionnaires and quality metrics, and assessed its capability for providing emotional support.
The study’s findings revealed that ChatGPT had a thorough understanding of cirrhosis (79.1% correct) and HCC (74.0% correct); however, only a small percentage of its responses were considered comprehensive. It performed better in the areas of fundamental knowledge, lifestyle, and treatment than in diagnosis and preventive medicine. ChatGPT successfully responded to 76.9% of the quality measure queries, but it failed to provide precise decision-making cutoffs and durations of treatment. Additionally, the model lacked understanding of regional variations in guidelines, such as HCC screening standards. It did, however, offer patients and caregivers useful guidance and support on next steps and on adjusting to a new diagnosis.
A weakness of this study is that it assessed ChatGPT’s effectiveness only in providing help and information regarding cirrhosis and HCC. Therefore, the generalizability of the findings to other medical conditions or specialties is limited. The study also did not examine potential biases or limitations of ChatGPT’s answers, such as the reliability of the information it provides or the sources of its training data.
Further research is needed to assess ChatGPT’s effectiveness across different medical specialties and conditions and to address the identified limitations. For the model to be successfully integrated into patient care, it is essential to assess how well it can comprehend and interpret complicated medical information, fill in knowledge gaps, and ensure the accuracy and reliability of its responses.
Advanced algorithms and substantial training sets are essential to minimize biases and errors in chatbots [2], because relying on a single dominant data source risks propagating that source’s biases. Chatbot use also raises ethical questions due to the potential for unexpected or undesirable outcomes. As artificial intelligence language models continue to develop, ethical standards and restrictions must be put in place to stop the spread of false information and harmful ideas.
FOOTNOTES

Authors’ contribution

Hinpetch Daungsupawong (50%): ideas, writing, analysis, approval. Viroj Wiwanitkit (50%): ideas, supervision, approval.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations
HCC

hepatocellular carcinoma

REFERENCES
1. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol 2023;29:721-732.
2. Kleebayoon A, Wiwanitkit V. Artificial intelligence, chatbots, plagiarism and basic honesty: Comment. Cell Mol Bioeng 2023;16:173-174.