This study evaluates whether providing clinicians with real-time diagnostic suggestions from a high-reasoning large language model (GPT-5) improves diagnostic accuracy, confidence, and efficiency when solving nephrology clinical vignettes. Prior to selecting the model for the trial, the research team benchmarked several state-of-the-art models on a pilot set of nephrology cases, including GPT-5, GPT-5-mini, O3, GPT-4o, Llama-4 Maverick-17B, Gemini-2.5-Pro, Qwen-3 VL-235B Thinking, DeepSeek-V3.2-Exp, MedGEMMA-27B, Claude Sonnet-4.5, and Magistral-Medium-2509. GPT-5 (high-reasoning) demonstrated the highest diagnostic performance, stability, and interpretability and was therefore selected as the AI system used in the intervention arm.

Participants include medical students, residents, fellows, and practicing physicians. After creating an account, participants complete a demographic questionnaire (specialty, years of experience, practice type, age category, AI familiarity) and must explicitly agree to the use of these data for research purposes before accessing the vignettes. No directly identifying information is collected.

Participants are randomized, with stratification by professional status, to either the AI-supported arm or the control arm. Each participant is assigned 10 nephrology vignettes in French or English and may complete them over multiple sessions. Once a vignette is submitted, it cannot be revisited ("no backtracking"). Completion time per vignette is recorded automatically.

Control Arm: Participants view each vignette and provide up to three diagnoses ("Top-3"), followed by a confidence rating (0-10).

AI-Supported Arm: Participants first enter their initial Top-3 diagnoses and confidence rating without AI assistance. The system then displays GPT-5's diagnostic suggestions, after which participants may revise their diagnoses once. The vignette is locked after submission.

The study collects:

* initial and final diagnoses,
* confidence ratings before and (if applicable) after AI suggestions,
* completion times,
* participant demographic variables,
* and the AI model's own diagnostic outputs.

Partial completion is permitted; all completed vignettes contribute to the analysis.

Primary and secondary outcomes include diagnostic accuracy (Top-3 and Top-1), accuracy improvement before versus after AI suggestions, changes in diagnostic confidence, AI-induced diagnostic errors, human-versus-AI benchmarking, completion-time efficiency metrics, and the proportion of assigned vignettes completed.

The primary analysis will compare diagnostic accuracy between the control arm (physicians alone) and the experimental arm (physicians assisted by the AI model). Accuracy is analyzed as a binary outcome (correct vs. incorrect diagnosis). Because each participant evaluates multiple clinical vignettes, accuracy will be modeled using mixed-effects logistic regression with a fixed effect for study arm and random intercepts for both participant and vignette; this accounts for clustering within participants and for varying difficulty across cases. The primary hypothesis test uses a two-sided α = 0.05, and effect sizes will be reported as odds ratios with 95% confidence intervals. Secondary analyses will explore whether accuracy varies by demographic factors (e.g., experience level, specialty) using interaction terms. A minimal model-fitting sketch is shown below.
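The mixed-effects logistic regression described above would typically be fit with a dedicated mixed-model package (e.g., lme4::glmer in R). As a rough Python illustration, the sketch below uses statsmodels' variational-Bayes mixed GLM, which supports crossed random intercepts, as a stand-in for the frequentist model; the file name and column names (vignette_responses.csv, correct, arm, participant_id, vignette_id) are hypothetical placeholders, not part of the protocol.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical analysis dataset: one row per completed vignette.
# Expected columns: correct (0/1), arm (0 = control, 1 = AI-supported),
# participant_id, vignette_id. Real export formats may differ.
df = pd.read_csv("vignette_responses.csv")

# Fixed effect for study arm; crossed random intercepts for participant and vignette.
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ arm",
    {"participant": "0 + C(participant_id)", "vignette": "0 + C(vignette_id)"},
    df,
)

result = model.fit_vb()   # variational Bayes approximation
print(result.summary())   # exp(arm coefficient) approximates the odds ratio for AI support
```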
Because each participant evaluates multiple vignettes, the team also performed simulation-based power analyses using mixed-effects logistic regression models with random intercepts for both participant and vignette, assuming an intra-participant ICC of 0.10. Under these assumptions, a total sample of 100 participants (50 per arm) with 10 vignettes per participant provides >99% power to detect a clinically meaningful improvement in diagnostic accuracy. The investigators therefore plan to enroll approximately 100 participants overall. This study aims to quantify whether AI-augmented reasoning meaningfully improves diagnostic performance and decision-making when clinicians evaluate complex nephrology cases.
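As a rough companion to the power analysis described above, the sketch below simulates the assumed data-generating process (random intercepts for participant and vignette on the logit scale, participant ICC of 0.10) and estimates power by refitting a participant-clustered GEE model on each simulated trial as a simpler stand-in for the full mixed model. Baseline accuracy, the size of the "clinically meaningful" improvement, and the vignette-level variance are illustrative assumptions not specified in the protocol.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2024)

# Design parameters taken from the protocol text.
N_PARTICIPANTS = 100      # 50 per arm
N_VIGNETTES = 10          # vignettes per participant
ICC = 0.10                # stated intra-participant ICC

# Illustrative assumptions (not specified in the protocol).
P_CONTROL = 0.50          # assumed Top-3 accuracy without AI support
P_AI = 0.65               # assumed "clinically meaningful" accuracy with AI support
N_SIMS = 200              # simulated trials; increase for a final run

# Convert the ICC to a random-intercept variance on the logit scale
# (latent-threshold formulation: ICC = s2 / (s2 + pi^2 / 3)).
s2_participant = ICC / (1 - ICC) * np.pi**2 / 3
s2_vignette = 0.5 * s2_participant   # assumed vignette-level variance

def simulate_trial():
    arm = np.repeat(np.arange(N_PARTICIPANTS) % 2, N_VIGNETTES)   # 0 = control, 1 = AI
    participant = np.repeat(np.arange(N_PARTICIPANTS), N_VIGNETTES)
    vignette = np.tile(np.arange(N_VIGNETTES), N_PARTICIPANTS)
    b_participant = rng.normal(0, np.sqrt(s2_participant), N_PARTICIPANTS)
    b_vignette = rng.normal(0, np.sqrt(s2_vignette), N_VIGNETTES)
    beta0 = np.log(P_CONTROL / (1 - P_CONTROL))
    beta1 = np.log(P_AI / (1 - P_AI)) - beta0
    logit = beta0 + beta1 * arm + b_participant[participant] + b_vignette[vignette]
    correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return pd.DataFrame({"correct": correct, "arm": arm, "participant": participant})

def arm_is_significant(df, alpha=0.05):
    # Participant-clustered GEE approximates the mixed model for this power
    # sketch; the crossed vignette effect is ignored in the analysis step.
    model = sm.GEE.from_formula(
        "correct ~ arm", groups="participant", data=df,
        family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit().pvalues["arm"] < alpha

power = np.mean([arm_is_significant(simulate_trial()) for _ in range(N_SIMS)])
print(f"Estimated power: {power:.2f}")
```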
Inclusion Criteria:

* Adults aged 18 years or older.
* Able to read and answer clinical vignettes in English or French.
* Access to a computer or smartphone with an internet connection.
* Provides informed consent online.
* Participants are expected to have at least basic medical training (e.g., medical students, residents, fellows, or practicing clinicians), although no formal verification is required.

Exclusion Criteria:

* Individuals under 18 years of age.
* Inability to complete online study procedures.
* Prior involvement in the design, development, or evaluation of the AI system used in this study.