Article: Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography
| Title | Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography |
|---|---|
| Authors | Shi, Zhao; Hu, Bin; Lu, Mengjie; Zhang, Manting; Yang, Haiting; He, Bo; Ma, Jiyao; Hu, Chunfeng; Lu, Li; Li, Sheng; Ren, Shiyu; Zhang, Yonggao; Li, Jun; Nijiati, Mayidili; Dong, Jiake; Wang, Hao; Zhou, Zhen; Zhang, Fandong; Pan, Chengwei; Yu, Yizhou; Chen, Zijian; Zhou, Chang Sheng; Wei, Yongyue; Zhou, Junlin; Zhang, Long Jiang |
| Keywords | CT Angiography; Intracranial Aneurysm; Sham AI; Vascular |
| Issue Date | 1-May-2025 |
| Publisher | Radiological Society of North America |
| Citation | Radiology: Artificial Intelligence, 2025, v. 7, n. 3 |
| Abstract | Purpose: To evaluate a sham–artificial intelligence (AI) model acting as a placebo control for a standard-AI model for diagnosis of intracranial aneurysm. Materials and Methods: This retrospective crossover, blinded, multireader, multicase study was conducted from November 2022 to March 2023. A sham-AI model with near-zero sensitivity and similar specificity to a standard AI model was developed using 16 422 CT angiography examinations. Digital subtraction angiography–verified CT angiographic examinations from four hospitals were collected, half of which were processed by standard AI and the others by sham AI to generate sequence A; sequence B was generated in the reverse order. Twenty-eight radiologists from seven hospitals were randomly assigned to either sequence and then assigned to the other sequence after a washout period. The diagnostic performances of radiologists alone, radiologists with standard-AI assistance, and radiologists with sham-AI assistance were compared using sensitivity and specificity, and radiologists’ susceptibility to sham-AI suggestions was assessed. Results: The testing dataset included 300 patients (median age, 61.0 years [IQR, 52.0–67.0]; 199 male), 50 of whom had aneurysms. Standard AI and sham AI performed as expected (sensitivity, 96.0% vs 0.0%; specificity, 82.0% vs 76.0%). The differences in sensitivity and specificity between standard AI–assisted and sham AI–assisted readings were 20.7% (95% CI: 15.8, 25.5 [superiority]) and 0.0% (95% CI: −2.0, 2.0 [noninferiority]), respectively. The difference between sham AI–assisted readings and radiologists alone was −2.6% (95% CI: −3.8, −1.4 [noninferiority]) for both sensitivity and specificity. After sham-AI suggestions, 5.3% (44 of 823) of true-positive and 1.2% (seven of 577) of false-negative results of radiologists alone were changed. Conclusion: Radiologists’ diagnostic performance was not compromised when aided by the proposed sham-AI model compared with their unassisted performance. |
| Persistent Identifier | http://hdl.handle.net/10722/362649 |
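The operating points reported in the abstract can be sanity-checked with a short sketch. The true-positive and true-negative counts below are back-calculated from the stated percentages and the 50/250 case split (300 patients, 50 with aneurysms), so they are illustrative assumptions rather than figures taken from the paper's tables; the suggestion-change fractions use the counts quoted in the Results.

```python
# Sensitivity and specificity as defined for a binary aneurysm-detection task.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of aneurysm-positive cases flagged by the model."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of aneurysm-negative cases correctly left unflagged."""
    return tn / (tn + fp)

# Test set: 50 aneurysm patients, 250 controls.
# Counts back-calculated from the reported percentages (illustrative).
print(f"standard AI sens = {sensitivity(48, 2):.1%}")    # 96.0%
print(f"standard AI spec = {specificity(205, 45):.1%}")  # 82.0%
print(f"sham AI sens     = {sensitivity(0, 50):.1%}")    # 0.0%
print(f"sham AI spec     = {specificity(190, 60):.1%}")  # 76.0%

# Susceptibility to sham-AI suggestions, from counts quoted in the Results:
print(f"TP reads changed = {44 / 823:.1%}")  # 5.3%
print(f"FN reads changed = {7 / 577:.1%}")   # 1.2%
```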
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Shi, Zhao | - |
| dc.contributor.author | Hu, Bin | - |
| dc.contributor.author | Lu, Mengjie | - |
| dc.contributor.author | Zhang, Manting | - |
| dc.contributor.author | Yang, Haiting | - |
| dc.contributor.author | He, Bo | - |
| dc.contributor.author | Ma, Jiyao | - |
| dc.contributor.author | Hu, Chunfeng | - |
| dc.contributor.author | Lu, Li | - |
| dc.contributor.author | Li, Sheng | - |
| dc.contributor.author | Ren, Shiyu | - |
| dc.contributor.author | Zhang, Yonggao | - |
| dc.contributor.author | Li, Jun | - |
| dc.contributor.author | Nijiati, Mayidili | - |
| dc.contributor.author | Dong, Jiake | - |
| dc.contributor.author | Wang, Hao | - |
| dc.contributor.author | Zhou, Zhen | - |
| dc.contributor.author | Zhang, Fandong | - |
| dc.contributor.author | Pan, Chengwei | - |
| dc.contributor.author | Yu, Yizhou | - |
| dc.contributor.author | Chen, Zijian | - |
| dc.contributor.author | Zhou, Chang Sheng | - |
| dc.contributor.author | Wei, Yongyue | - |
| dc.contributor.author | Zhou, Junlin | - |
| dc.contributor.author | Zhang, Long Jiang | - |
| dc.date.accessioned | 2025-09-26T00:36:43Z | - |
| dc.date.available | 2025-09-26T00:36:43Z | - |
| dc.date.issued | 2025-05-01 | - |
| dc.identifier.citation | Radiology: Artificial Intelligence, 2025, v. 7, n. 3 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362649 | - |
| dc.description.abstract | Purpose: To evaluate a sham–artificial intelligence (AI) model acting as a placebo control for a standard-AI model for diagnosis of intracranial aneurysm. Materials and Methods: This retrospective crossover, blinded, multireader, multicase study was conducted from November 2022 to March 2023. A sham-AI model with near-zero sensitivity and similar specificity to a standard AI model was developed using 16 422 CT angiography examinations. Digital subtraction angiography–verified CT angiographic examinations from four hospitals were collected, half of which were processed by standard AI and the others by sham AI to generate sequence A; sequence B was generated in the reverse order. Twenty-eight radiologists from seven hospitals were randomly assigned to either sequence and then assigned to the other sequence after a washout period. The diagnostic performances of radiologists alone, radiologists with standard-AI assistance, and radiologists with sham-AI assistance were compared using sensitivity and specificity, and radiologists’ susceptibility to sham-AI suggestions was assessed. Results: The testing dataset included 300 patients (median age, 61.0 years [IQR, 52.0–67.0]; 199 male), 50 of whom had aneurysms. Standard AI and sham AI performed as expected (sensitivity, 96.0% vs 0.0%; specificity, 82.0% vs 76.0%). The differences in sensitivity and specificity between standard AI–assisted and sham AI–assisted readings were 20.7% (95% CI: 15.8, 25.5 [superiority]) and 0.0% (95% CI: −2.0, 2.0 [noninferiority]), respectively. The difference between sham AI–assisted readings and radiologists alone was −2.6% (95% CI: −3.8, −1.4 [noninferiority]) for both sensitivity and specificity. After sham-AI suggestions, 5.3% (44 of 823) of true-positive and 1.2% (seven of 577) of false-negative results of radiologists alone were changed. Conclusion: Radiologists’ diagnostic performance was not compromised when aided by the proposed sham-AI model compared with their unassisted performance. | - |
| dc.language | eng | - |
| dc.publisher | Radiological Society of North America | - |
| dc.relation.ispartof | Radiology: Artificial Intelligence | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | CT Angiography | - |
| dc.subject | Intracranial Aneurysm | - |
| dc.subject | Sham AI | - |
| dc.subject | Vascular | - |
| dc.title | Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1148/ryai.240140 | - |
| dc.identifier.scopus | eid_2-s2.0-105007570052 | - |
| dc.identifier.volume | 7 | - |
| dc.identifier.issue | 3 | - |
| dc.identifier.eissn | 2638-6100 | - |
| dc.identifier.issnl | 2638-6100 | - |
