Unveiling the effectiveness of Chat-GPT 4.0, an artificial intelligence conversational tool, for addressing common patient queries in gastrointestinal endoscopy

Maida, Marcello;
2025-01-01

Abstract

Background and Aims: Chat Generative Pre-Trained Transformer (Chat-GPT) has proven effective in addressing patient inquiries related to gastrointestinal (GI) disease. We aimed to assess the effectiveness and reliability of Chat-GPT in answering common patient queries on GI endoscopy. Methods: Eighteen selected patient queries regarding GI endoscopy were rated on a Likert-type scale by 10 health professionals and 2 non-health professionals on the following features: reliability (1-6), accuracy (1-3), and comprehensibility (1-3). Results: The mean reliability, accuracy, and comprehensibility values were 5.2 ± 1.7, 2.7 ± 0.4, and 2.9 ± 0.2, respectively. Overall, most answers were rated as having solid levels of reliability (94.4%) and accuracy (100%) and a fair level of comprehensibility (61.1%). The physicians considered the tool adequate for addressing questions related to clinical practice, except for inquiries regarding bowel prep solutions, medications, and pacemaker management. Conclusions: Chat-GPT 4.0 demonstrated effectiveness in providing patients with informative content about GI endoscopy, even though health professional support remains essential for a comprehensive approach.
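
The abstract reports per-dimension summary statistics (mean ± SD) and the share of answers reaching a given rating level. The sketch below is not the authors' analysis code; it only illustrates, with hypothetical rating values and assumed classification thresholds, how such summaries could be computed from the 12 raters' Likert scores.

```python
# Minimal sketch (hypothetical data, not the study's code) of aggregating Likert ratings:
# per-dimension mean ± SD across all scores, plus the share of answers whose average
# rating clears an assumed threshold.

from statistics import mean, stdev

# ratings[dimension] = one list of 12 rater scores per answer (placeholder values).
ratings = {
    "reliability":       [[6, 5, 6, 5, 6, 6, 5, 6, 6, 5, 6, 6]],   # scale 1-6
    "accuracy":          [[3, 3, 2, 3, 3, 3, 3, 2, 3, 3, 3, 3]],   # scale 1-3
    "comprehensibility": [[3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3]],   # scale 1-3
}

# Assumed cut-offs for classifying an answer as "solid"/"fair"; not taken from the paper.
thresholds = {"reliability": 4.5, "accuracy": 2.5, "comprehensibility": 2.5}

for dim, per_answer in ratings.items():
    all_scores = [s for answer in per_answer for s in answer]
    m, sd = mean(all_scores), stdev(all_scores)
    share = sum(mean(a) >= thresholds[dim] for a in per_answer) / len(per_answer)
    print(f"{dim}: {m:.1f} ± {sd:.1f}; answers above threshold: {share:.1%}")
```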


Use this identifier to cite or link to this document: https://hdl.handle.net/11387/194833