
Accuracy of ChatGPT responses on tracheotomy for patient education

Maniaci A.;
2024-01-01

Abstract

Objective: To investigate the accuracy of information provided by ChatGPT-4o to patients about tracheotomy. Methods: Twenty common patient questions about tracheotomy were presented to ChatGPT-4o twice, at a 7-day interval. The accuracy, clarity, relevance, completeness, referencing, and usefulness of the responses were assessed by a board-certified otolaryngologist and a board-certified intensive care unit practitioner using the Quality Analysis of Medical Artificial Intelligence (QAMAI) tool. The interrater reliability and the stability of the ChatGPT-4o responses were evaluated with the intraclass correlation coefficient (ICC) and Pearson correlation analysis. Results: The total QAMAI scores were 22.85 ± 4.75 for the intensive care practitioner and 21.45 ± 3.95 for the otolaryngologist, indicating moderate-to-high accuracy. The otolaryngologist and the ICU practitioner showed high interrater reliability (ICC = 0.807; 95% CI: 0.655–0.911). The highest QAMAI scores were found for the clarity and completeness of explanations, while the scores for accuracy of information and referencing were the lowest. The information related to post-laryngectomy tracheostomy remained incomplete or erroneous, and ChatGPT-4o did not provide references for its responses. The stability analysis showed high stability across regenerated questions. Conclusion: ChatGPT-4o provides moderately to highly accurate information about tracheotomy. However, patients using ChatGPT-4o should be cautious about information related to tracheotomy care, procedural steps, and the differences between temporary and permanent tracheotomies.


Use this identifier to cite or link to this document: https://hdl.handle.net/11387/185222