Title: Language discrepancies in the performance of generative artificial intelligence models: an examination of infectious disease queries in English and Arabic
Authors: Sallam M; Al-Mahzoum K; Alshuaib O; Alhajri H; Alotaibi F; Alkhurainej D; Al-Balwah M; Barakat M; Egger J
Published: 08/2024; Volume 24; Pages 1-13
Keywords: AI chatbots; Digital health queries; Healthcare technology; Infectious Diseases; Language performance
URL: https://bmcinfectdis.biomedcentral.com/counter/pdf/10.1186/s12879-024-09725-y.pdf

Abstract:

Background: Assessment of artificial intelligence (AI)-based models across languages is crucial to ensure equitable access to, and accuracy of, information in multilingual contexts. This study aimed to compare the performance of AI models in English and Arabic for infectious disease queries.

Methods: The study employed the METRICS checklist for the design and reporting of AI-based studies in healthcare. The AI models tested included ChatGPT-3.5, ChatGPT-4, Bing, and Bard. The queries comprised 15 questions on HIV/AIDS, tuberculosis, malaria, COVID-19, and influenza. The AI-generated content was assessed by two bilingual experts using the validated CLEAR tool.
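The CLEAR assessment described above produces a per-answer score from expert ratings of individual components. A minimal sketch of that aggregation, assuming a 1-5 rating scale; the component names beyond those listed in the abstract, and all rating values, are illustrative assumptions, not the study's actual data:

```python
from statistics import mean

def clear_score(ratings):
    """Average the CLEAR component ratings (assumed 1-5 scale) into one score."""
    return mean(ratings.values())

# Hypothetical ratings for one model's answer to the same query in each
# language; component names partly assumed, values illustrative only.
english = {"completeness": 5, "accuracy": 5, "evidence": 4,
           "appropriateness": 5, "relevance": 5}
arabic  = {"completeness": 4, "accuracy": 4, "evidence": 3,
           "appropriateness": 4, "relevance": 4}

en_score = clear_score(english)
ar_score = clear_score(arabic)
print(en_score, ar_score)  # 4.8 3.8
```

Scores averaged this way per model and per language would then feed the between-language comparisons reported in the Results.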

Results: Variability was noted when comparing the AI models' performance in English and Arabic for infectious disease queries. English queries showed consistently superior performance, with Bard leading, followed by Bing, ChatGPT-4, and ChatGPT-3.5 (P = .012). The same trend was observed in Arabic, albeit without statistical significance (P = .082). Stratified analysis revealed higher scores for English in most CLEAR components, notably completeness, accuracy, appropriateness, and relevance, especially with ChatGPT-3.5 and Bard. Across the five infectious disease topics, English outperformed Arabic, except for influenza queries in Bing and Bard. The four AI models' performance in English was rated "excellent", significantly outperforming their "above-average" Arabic counterparts (P = .002).

Conclusions: A disparity in AI model performance between English and Arabic was observed in response to infectious disease queries. This language variation can negatively affect the quality of health content that AI models deliver to native speakers of Arabic. AI developers are encouraged to address this issue, with the ultimate goal of enhancing health outcomes.

ISSN: 1471-2334