Exploring the potential of artificial intelligence chatbots in prosthodontics education


Eraslan R., Ayata M., Yağcı F., Albayrak H.

BMC Medical Education, vol. 25, no. 1, pp. 321, 2025 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 25 Issue: 1
  • Publication Date: 2025
  • DOI Number: 10.1186/s12909-025-06849-w
  • Journal Name: BMC Medical Education
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Social Sciences Citation Index (SSCI), Scopus, Biotechnology Research Abstracts, EMBASE, MEDLINE, Veterinary Science Database, Directory of Open Access Journals
  • Page Numbers: pp.321
  • Keywords: Prosthodontics education, Artificial intelligence applications, Dentistry specialization, AI chatbot evaluation, Clinical decision-support systems
  • Erciyes University Affiliated: Yes

Abstract

BACKGROUND: The purpose of this study was to evaluate the performance of widely used artificial intelligence (AI) chatbots in answering prosthodontics questions from the Dentistry Specialization Residency Examination (DSRE).

METHODS: A total of 126 DSRE prosthodontics questions were divided into seven subtopics (dental morphology, materials science, fixed dentures, removable partial dentures, complete dentures, occlusion/temporomandibular joint, and dental implantology). The questions were translated into English by the authors, and this English version was posed to five chatbots (ChatGPT-3.5, Gemini Advanced, Claude Pro, Microsoft Copilot, and Perplexity) within a 7-day period. Statistical analyses, including chi-square and z-tests, were performed to compare accuracy rates across the chatbots and subtopics at a significance level of 0.05.

RESULTS: The overall accuracy rates for the chatbots were as follows: Copilot (73%), Gemini (63.5%), ChatGPT-3.5 (61.1%), Claude Pro (57.9%), and Perplexity (54.8%). Copilot significantly outperformed Perplexity (P = 0.035). However, no significant differences in accuracy were found across subtopics among the chatbots. Questions on dental implantology had the highest accuracy rate (75%), while questions on removable partial dentures had the lowest (50.8%).

CONCLUSION: Copilot showed the highest accuracy rate (73%), significantly outperforming Perplexity (54.8%). AI chatbots show potential as educational support tools, but their current limitations prevent them from serving as reliable references across all areas of prosthodontics. Future advances in AI may enable better integration and more effective use in dental education.
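
The comparison described in the Methods can be sketched in Python as follows. This is only an illustrative reconstruction, not the authors' analysis script: the correct-answer counts are inferred from the reported accuracy percentages (e.g. 73% of 126 questions ≈ 92 correct), and the unadjusted p-values it produces need not match the published value of P = 0.035, which may reflect the authors' specific test choices and corrections.

    # Illustrative sketch: chi-square across chatbots and a pairwise two-proportion z-test.
    # Counts are assumptions inferred from the reported percentages, not raw study data.
    from scipy.stats import chi2_contingency
    from statsmodels.stats.proportion import proportions_ztest

    N = 126  # DSRE prosthodontics questions posed to each chatbot
    correct = {
        "Copilot": 92,       # ~73.0%
        "Gemini": 80,        # ~63.5%
        "ChatGPT-3.5": 77,   # ~61.1%
        "Claude Pro": 73,    # ~57.9%
        "Perplexity": 69,    # ~54.8%
    }

    # Overall chi-square test: does accuracy differ among the five chatbots?
    table = [[c, N - c] for c in correct.values()]  # rows: correct / incorrect counts
    chi2, p_overall, dof, _ = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_overall:.3f}")

    # Pairwise two-proportion z-test, e.g. Copilot vs. Perplexity (unadjusted)
    z, p_pair = proportions_ztest([correct["Copilot"], correct["Perplexity"]], [N, N])
    print(f"Copilot vs. Perplexity: z = {z:.2f}, p = {p_pair:.3f}")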