Can AI Answer My Questions? Utilizing Artificial Intelligence in the Perioperative Assessment for Abdominoplasty Patients.
Abstract
[BACKGROUND] Abdominoplasty is a common operation used to address a range of cosmetic and functional issues, often in the context of divarication of recti, significant weight loss, and pregnancy. Despite this, patient-surgeon communication gaps can hinder informed decision-making. The integration of large language models (LLMs) in healthcare offers potential for enhancing patient information. This study evaluated the feasibility of using LLMs to answer perioperative queries.
[METHODS] This study assessed the efficacy of four leading LLMs (OpenAI's ChatGPT-3.5, Anthropic's Claude, Google's Gemini, and Bing's CoPilot) using fifteen unique prompts. All outputs were evaluated for readability using the Flesch-Kincaid Grade Level, Flesch Reading Ease score, and Coleman-Liau index. The DISCERN score and a Likert scale were used to evaluate quality. Scores were assigned by two plastic surgery residents and then reviewed and discussed until a consensus was reached by five plastic surgeon specialists.
[RESULTS] ChatGPT-3.5 required the highest reading level for comprehension, followed by Gemini, Claude, then CoPilot. Claude provided the most appropriate and actionable advice. In terms of patient-friendliness, CoPilot outperformed the rest, enhancing engagement and information comprehensiveness. ChatGPT-3.5 and Gemini offered adequate, though unremarkable, advice, employing more professional language. CoPilot uniquely included visual aids and was the only model to use hyperlinks, although these were of limited helpfulness and acceptability, and it faced limitations in responding to certain queries.
[CONCLUSION] ChatGPT-3.5, Gemini, Claude, and Bing's CoPilot showcased differences in readability and reliability. LLMs offer unique advantages for patient care but require careful selection. Future research should integrate LLM strengths and address weaknesses for optimal patient education.
[LEVEL OF EVIDENCE V] This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
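The three readability metrics named in the Methods are standard published formulas. As a minimal sketch of how such scoring works, the snippet below implements them directly; the syllable counter is a rough vowel-group heuristic (an assumption for illustration — the study does not state its tooling, and real evaluations typically use dictionary-backed libraries).

```python
import re

def _count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups; never return zero.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    syllables = sum(_count_syllables(w) for w in words)
    n_s, n_w = len(sentences), len(words)
    return {
        # Flesch Reading Ease: higher scores indicate easier text.
        "flesch_reading_ease": 206.835 - 1.015 * n_w / n_s - 84.6 * syllables / n_w,
        # Flesch-Kincaid Grade Level: approximate US school grade.
        "fk_grade": 0.39 * n_w / n_s + 11.8 * syllables / n_w - 15.59,
        # Coleman-Liau index: letters and sentences per 100 words, no syllables.
        "coleman_liau": 0.0588 * (letters / n_w * 100)
                        - 0.296 * (n_s / n_w * 100) - 15.8,
    }
```

Running `readability(...)` over each model's answer to the same prompt yields comparable per-model scores, which is the basis for rankings like "ChatGPT-3.5 required the highest reading level."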
Extracted Medical Entities (NER)
| Type | English term | Korean gloss | UMLS CUI | Source | Count |
|---|---|---|---|---|---|
| Procedure | abdominoplasty | 복부성형술 | — | dict | 2 |
| Complication | recti | — | — | scispacy | 1 |
| Drug | ChatGPT-3.5 | — | — | scispacy | 1 |
| Drug | [BACKGROUND] Abdominoplasty | — | — | scispacy | 1 |
| Drug | [RESULTS] ChatGPT-3.5 | — | — | scispacy | 1 |
| Drug | Gemini | — | — | scispacy | 1 |
| Disease | weight loss | — | C1262477 (Weight Loss) | scispacy | 1 |
| Disease | LLM | — | — | scispacy | 1 |
| Other | Patients | — | — | scispacy | 1 |
| Other | patient | — | — | scispacy | 1 |
| Other | Gemini | — | — | scispacy | 1 |
MeSH Terms
Humans; Abdominoplasty; Artificial Intelligence; Female; Feasibility Studies; Physician-Patient Relations; Male; Perioperative Care
Related Papers
- Case report of a rare soft tissue tuberculosis in a patient undergoing lipoabdominoplasty.
- What is the potential role of the nonopioid suzetrigine in pain management?
- Ex Vivo and In Vivo Histological Evaluation of a 3-μm Wavelength, 40-μm Spot Size Fractional Laser System for Dermatology.
- Correspondence on "Lymphatic pathway remodeling in the supraumbilical region after abdominoplasty: A prospective cohort study".
- Sculpting Success-The TULUANHA: Modified TULUA Lipo-Abdominoplasty in Post-Bariatric Body Contouring.