The Great Reality Check Part 2: Acute Stroke

Read the results of our new user studies – up-to-date and transparent!

Purpose:

The aim of the study was to prospectively determine the performance of a common AI assistant in acute stroke, validated against the first-read reports of radiologists specialized in emergency radiology as well as imaging and clinical follow-up.

Patients, Material and Methods:

In 2025, 88 patients (age: 18 to 89 years, mean: 52 years, standard deviation: ± 25 years) who had been referred to ERS Emergency Radiology Schueller, a provider of teleradiology services, for cranial CT scans with suspected acute stroke were randomly and prospectively enrolled in the study over three consecutive weeks. CT studies of these patients were evaluated by a common, commercially available AI assistant. Radiologists reported the CT studies without initial knowledge of the AI results and compared the radiological with the AI findings in a second step. The gold standard comprised the specialists' reports as well as clinical follow-up. In case of discrepancies between the radiologists' and the AI assistant's findings, CT studies were second-read within 30 minutes at the latest. The study was terminated prematurely due to the AI results.

Results:

Of 88 patients, 14 AI results could not be retrieved. Among the remaining 74 patients, radiologists and clinical follow-up diagnosed acute ischemia in 2 cases (2.7%). The AI assistant yielded 2 true positive (TP), 58 false positive (FP), 0 false negative (FN), and 14 true negative (TN) results; sensitivity 1.0; specificity 0.194; positive predictive value (PPV) 0.033; negative predictive value (NPV) 1.0. In a second step, the results of the AI assistant were calculated based on the clinically and therapeutically relevant threshold of an ASPECTS score of 7 or lower: the AI assistant yielded 2 TP, 32 FP, 0 FN, and 14 TN results; sensitivity 1.0; specificity 0.304; PPV 0.059; NPV 1.0.
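The figures above follow from the standard confusion-matrix definitions. As a minimal sketch (the function name `diagnostic_metrics` is ours, not part of any AI product), the reported values can be reproduced from the raw counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # share of actual positives detected
        "specificity": tn / (tn + fp),  # share of actual negatives cleared
        "ppv": tp / (tp + fp),          # reliability of a positive AI call
        "npv": tn / (tn + fn),          # reliability of a negative AI call
    }

# All 74 evaluable patients: TP=2, FP=58, FN=0, TN=14
overall = diagnostic_metrics(2, 58, 0, 14)
print({k: round(v, 3) for k, v in overall.items()})
# {'sensitivity': 1.0, 'specificity': 0.194, 'ppv': 0.033, 'npv': 1.0}

# Restricted to the ASPECTS score threshold of 7 or lower: TP=2, FP=32, FN=0, TN=14
aspects = diagnostic_metrics(2, 32, 0, 14)
print({k: round(v, 3) for k, v in aspects.items()})
# {'sensitivity': 1.0, 'specificity': 0.304, 'ppv': 0.059, 'npv': 1.0}
```

Note that a PPV of 0.033 means only about 1 in 30 positive AI calls corresponded to an actual acute ischemia in this cohort.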

Discussion:

The AI assistant yielded 58 FP out of 74 cases (78%) and 2 TP out of 74 cases (2.7%). This rate, together with the absence of FN, suggests that the software company accepts FP in favor of sensitivity. The calculated specificity is considerably lower than officially stated in the AI manufacturer's publications. Evaluation based on the clinically and therapeutically relevant threshold of an ASPECTS score of 7 or lower yielded a similar result. Data collection was terminated prematurely, and the low number of cases is certainly a limitation of our study. Based on the available data, it must be assumed that reporting CT scans for acute stroke, with all its complexity, especially in the presence of pre-existing, non-acute brain lesions in older patients, should for the time being remain entirely in the hands of experienced radiologists.

Gerd Schueller and the Radailogy Team

The Great Reality Check Part 1: Acute cerebral hemorrhage

Read the results of our new user studies – up-to-date and transparent!

Purpose:

The aim of the study was to prospectively determine the performance of common AI assistants in acute cerebral hemorrhage, validated against the first-read reports of radiologists specialized in emergency radiology as well as imaging and clinical follow-up.

Patients, Materials and Methods:

In 2025, 218 patients who had been referred to ERS Emergency Radiology Schueller, a provider of teleradiology services, for cranial CT scans following blunt head trauma were randomly and prospectively enrolled in the study over eight consecutive weeks. CT studies of these patients were randomly evaluated by one of two common, commercially available AI assistants. Radiologists reported the CT studies without initial knowledge of the AI results and compared the radiological with the AI findings in a second step. The gold standard comprised the specialists' reports as well as clinical follow-up. In case of discrepancies between the radiologists' and the AI assistants' findings, CT studies were second-read within 30 minutes at the latest.

Results:

Of 218 patients, 18 AI results could not be retrieved. Among the remaining 200 patients, radiologists and clinical follow-up diagnosed 58 acute intracranial hemorrhages (29%). The AI assistants yielded 58 true positive (TP), 0 false positive (FP), 40 false negative (FN), and 82 true negative (TN) results; sensitivity 0.592; specificity 1.0; positive predictive value (PPV) 1.0; negative predictive value (NPV) 0.672. No significant difference was found between the results of the two AI assistants. FN findings involved hemorrhages with a width of 5 mm or less (mean 3.5 mm, SE ± 1.9 mm). The minimum extent of a hemorrhage classified as TP by the AI assistants was 5 mm (range 5–15 mm; mean 9 mm; SE ± 7 mm).
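As in our stroke study, these figures follow from the standard confusion-matrix definitions. A minimal sketch (the function name `diagnostic_metrics` is ours, not a vendor API) reproduces the reported values from the raw counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # share of actual positives detected
        "specificity": tn / (tn + fp),  # share of actual negatives cleared
        "ppv": tp / (tp + fp),          # reliability of a positive AI call
        "npv": tn / (tn + fn),          # reliability of a negative AI call
    }

# 200 evaluable patients: TP=58, FP=0, FN=40, TN=82
m = diagnostic_metrics(58, 0, 40, 82)
print({k: round(v, 3) for k, v in m.items()})
# {'sensitivity': 0.592, 'specificity': 1.0, 'ppv': 1.0, 'npv': 0.672}
```

The zero-FP profile drives both specificity and PPV to 1.0, while the 40 missed hemorrhages pull sensitivity down to 0.592.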

Discussion:

Every hemorrhage flagged by the AI assistants was a true finding. The absence of FP results suggests that typical pitfalls, such as beam-hardening artifacts, bone margins, and calcifications along the inner table of the skull, have been addressed by the software companies. However, the surprisingly high FN rate suggests that AI assistants are currently not suitable for triaging patients with traumatic brain injury in the high-end setting of professional teleradiology. The high FN rate, particularly for smaller hemorrhages, also casts doubt on their use as a second look in the hectic daily routine of acute radiology. Our data are not comparable to the official figures provided by the AI manufacturers, who have published sensitivity and specificity figures of at least 90%. Certainly, the relatively small sample size of our study contributes to this discrepancy and is a limitation. Furthermore, future studies should test a larger number of AI assistants.

Gerd Schueller and the Radailogy Team

The Great Reality Check

March 2026: How suitable is Artificial Intelligence really in acute medicine?

Read the results of our extensive AI field test in time for ECR 2026. At Radailogy, we publish our cutting-edge empirical data with complete transparency and without influence from any stakeholders.

The aim of our study was to evaluate the practical applicability of common AI assistants for the detection of frequent pathologies in acute care. We generated our data prospectively using a randomized trial in collaboration with our sister company ERS Emergency Radiology Schueller, the market leader in teleradiology in Austria and Switzerland.

Publications:

February 27, 2026: Acute cerebral hemorrhage

February 28, 2026: Acute stroke

March 2, 2026: Urolithiasis

March 3, 2026: Acute abdominal organs

March 4, 2026: Fractures of the arms and legs

Book your free spot for ERS TV Live!

In addition, all data will be presented live in our ERS TV show on March 10, 2026. Secure your free spot! You’ll find the registration link on Monday, March 2, 2026, at www.emergencyradiology.ch

We look forward to seeing you there. It’s going to be exciting.

Yours sincerely,

Gerd Schueller