Demo: fSTG Toolkit – an Open-Source Software for Longitudinal Brain Connectivity Analysis with Spatio-Temporal Graphs
Julien Pontabry
ICube – University of Strasbourg
The fSTG Toolkit is a comprehensive pipeline for processing and analyzing longitudinal dynamics in sequences of brain connectivity matrices. It focuses on the longitudinal reorganization dynamics between brain regions, providing an effective set of tools for neuroscience research. Although primarily designed for fMRI data, the toolkit is extensible by design to other types of connectivity data, making it a versatile tool for researchers in the field.
Key features of the fSTG Toolkit include:
- Multi-platform software written in Python and JavaScript.
- A straightforward command-line interface (CLI) for:
  - building spatio-temporal graphs and
  - extracting relevant metrics.
- A web-based viewer for user-friendly visualization of results.
- A continuously evolving codebase with planned future updates, including:
  - frequent-pattern detection and analysis and
  - installation-free web-based serving to enhance accessibility and usability for end users.
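To illustrate the CLI's first step, here is a minimal, hypothetical sketch of how a spatio-temporal graph can be assembled from a sequence of connectivity matrices. This is not the fSTG Toolkit's actual API; the function name, the threshold parameter, and the edge encoding are illustrative assumptions. Nodes are (region, timepoint) pairs, spatial edges come from thresholded connectivity within each timepoint, and temporal edges link the same region across consecutive timepoints.

```python
# Hypothetical sketch (not the fSTG Toolkit API): build a spatio-temporal
# graph from a sequence of square connectivity matrices, one per timepoint.

def build_spatio_temporal_graph(matrices, threshold=0.5):
    """matrices: list of square lists-of-lists, one per timepoint."""
    nodes, edges = [], []
    n_regions = len(matrices[0])
    for t, mat in enumerate(matrices):
        for r in range(n_regions):
            nodes.append((r, t))          # one node per (region, timepoint)
        # spatial edges within timepoint t, kept if connectivity is strong
        for i in range(n_regions):
            for j in range(i + 1, n_regions):
                if abs(mat[i][j]) >= threshold:
                    edges.append(((i, t), (j, t), mat[i][j]))
        # temporal edges linking each region to itself at the previous timepoint
        if t > 0:
            for r in range(n_regions):
                edges.append(((r, t - 1), (r, t), None))
    return nodes, edges
```

Metrics such as node degree over time or temporal path lengths can then be extracted by walking this edge list.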
Demo: B-Guide – Breast Cancer Surgical Planning Tool
Felicia Alfano
Biomedical Image Technologies, Universidad Politécnica de Madrid; CIBER-BBN, ISCIII
Breast-conserving surgery is performed with the patient in supine position, while preoperative MRI is typically acquired in prone position. This discrepancy leads to significant tissue deformations, complicating tumor localization during surgery. B-Guide addresses this challenge by integrating prone MRI with intraoperative surface scans, predicting tumor displacement with a deep learning–based framework. Our demo will showcase the B-Guide system implemented in 3D Slicer, enabling interactive surgical planning and visualization of tumor position in real time.
Demo: Spatio-Temporal AI for Lung Cancer Screening Nodule Assessment
Benito Farina
Centro de Investigación Biomédica en la Red (CIBER) – Universidad Politécnica de Madrid – BIT
This demo presents an interactive software tool for predicting the malignancy probability of lung nodules in a lung cancer screening setting. The tool is open-source, freely available, and designed for hands-on exploration by attendees.
Early and accurate identification of malignant nodules is critical, as it can reduce unnecessary follow-up exams, lower patient anxiety, and accelerate treatment for high-risk cases. Our system analyzes up to three 3D CT scans acquired at different timepoints, capturing the temporal evolution of nodules — a key clinical factor often overlooked in routine screening assessments.
Unlike most existing approaches that rely on a single scan, this demo leverages spatio-temporal deep learning to model disease progression. It provides robust predictions even when one or more timepoints are missing. The system outputs malignancy probabilities together with attention weights (indicating the contribution of each timepoint) and saliency maps (highlighting regions driving the decision), offering clinicians both accuracy and interpretability.
The demo is designed for practical use: it runs efficiently on a CPU-only setup (e.g., Intel i7 with 32 GB RAM), and delivers predictions and visual explanations in about one minute. By enabling participants to explore how AI can assist in longitudinal nodule assessment, the demo highlights its potential to support clinical decision-making and improve the effectiveness of lung cancer screening programs.
Attendees can explore different longitudinal CT scenarios, adjust input data, and visualize how the model adapts, providing an interactive learning experience.
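The robustness to missing timepoints described above can be sketched with a softmax attention that masks absent scans. This is a hypothetical illustration, not the demo's actual model; the function name and the use of raw logits per timepoint are assumptions. Missing timepoints receive zero weight, so the prediction is still a proper weighted combination of whatever scans are available.

```python
import math

# Hypothetical sketch (not the demo's model): softmax attention over up to
# three timepoints, masking missing scans so their weight is exactly zero.

def attend_over_timepoints(scores, available):
    """scores: raw attention logits per timepoint; available: bool mask."""
    masked = [s for s, ok in zip(scores, available) if ok]
    m = max(masked)                              # subtract max for stability
    exps = [math.exp(s - m) for s in masked]
    total = sum(exps)
    weights_present = [e / total for e in exps]
    # scatter the normalized weights back, zero for missing timepoints
    weights, k = [], 0
    for ok in available:
        if ok:
            weights.append(weights_present[k])
            k += 1
        else:
            weights.append(0.0)
    return weights
```

The returned weights always sum to one over the available scans, which is what lets an attention-based model degrade gracefully from three timepoints to two or one.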
Demo: Hope4kids – AI-Powered Brain Tumor Segmenter
Daniel Capellán-Martín
Universidad Politécnica de Madrid
Abhijeet Parida
Children’s National Hospital
This demo presents our three-time-winning brain tumor segmentation AI algorithm. It processes four MRI modalities to generate masks for a wide range of tumor types and structures, including gliomas, meningiomas, metastases, sub-Saharan gliomas, and pediatric tumors. Key features include a user-friendly interface for non-deep learning experts, automatic segmentation with high accuracy, and versatile output in NIfTI format. The tool significantly reduces manual annotation workload and has potential applications in surgical planning, treatment optimization, and clinical research. Link: https://segmenter.hope4kids.io/.
Demo: Deep Learning for Pediatric TB Detection in Chest Radiographs
Daniel Capellán-Martín
Universidad Politécnica de Madrid
This demo highlights our AI algorithm for detecting pediatric tuberculosis (TB) from chest X-rays (CXR). It analyzes CXRs to identify potential TB-related abnormalities. Features include an intuitive interface for healthcare providers, high-accuracy detection (AUC 0.903), and standardized outputs. The tool supports diagnosis with a prediction score and Grad-CAMs highlighting compatible findings, aiding early detection and improving outcomes.
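The Grad-CAM visualizations mentioned above follow a standard recipe, sketched below with NumPy. This is a generic illustration of the technique, not the demo's implementation; the function name and array shapes are assumptions. Channel weights are the spatially pooled gradients of the class score, and the map is a rectified weighted sum of the feature maps.

```python
import numpy as np

# Generic Grad-CAM sketch (not the demo's implementation): given the last
# convolutional feature maps and the gradient of the class score w.r.t.
# those maps, produce a normalized saliency map in [0, 1].

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (channels, H, W) arrays."""
    weights = gradients.mean(axis=(1, 2))              # pool grads per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted channel sum
    cam = np.maximum(cam, 0.0)                         # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam
```

Upsampled to the input resolution and overlaid on the radiograph, such a map highlights the lung regions most compatible with the TB prediction.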
Demo: Visualizing Intelligence with ASCRIBE-VR for Granular, Data-Agnostic 3D Analysis of AI Results
Daniela Ushizima
Berkeley Lab, University of California San Francisco, University of California Berkeley
ASCRIBE-VR is a novel, data-agnostic virtual reality (VR) platform for immersive visualization and interactive manipulation of virtual structures, primarily those derived from brain data. Built with Unreal Engine 5 and running on the Meta Quest 3X, it provides joystick-controlled manipulation (pushing, pulling, scaling), object selection/instantiation, texture modification, and multiple locomotion modes. Key capabilities include direct import of common 3D file formats (FBX, STL, OBJ), granular interaction with complex multi-structured meshes, and visualization of 2D neuroimaging slices. A significant feature is its multiplayer connectivity, which supports collaborative research, remote consultations, and team-based educational sessions in a shared virtual environment.
Demo: A Reconfigurable High-Resolution Handheld Ultrasound Imaging System with Non-Linear Beamforming Capabilities
Banhimitra Kundu
Indian Institute of Science, Bangalore, India
The growing demand for affordable, portable, and accurate diagnostic tools has positioned handheld ultrasound systems as a transformative solution for healthcare delivery. Conventional handheld ultrasound devices, while compact and accessible, are often limited in resolution and depth performance compared to high-end cart-based systems. To bridge this gap, a reconfigurable high-resolution handheld ultrasound imaging system with non-linear beamforming capabilities is proposed.
The system leverages novel non-linear beamforming methods to significantly enhance image resolution and contrast beyond the limitations of delay-and-sum or other linear approaches. Unlike traditional architectures, the reconfigurable platform allows flexible adaptation of both beamforming algorithms and transducers, enabling clinicians and researchers to optimize imaging for diverse use cases such as breast cancer detection, lung tuberculosis screening, and general point-of-care applications. Notably, one such non-linear beamformer, DOPCON, was recently presented by our group at IEEE IUS 2025, demonstrating the feasibility and clinical potential of hardware-friendly non-linear formulations for handheld ultrasound systems. This work builds on that foundation to advance reconfigurable, high-resolution imaging in portable platforms.
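For reference, the linear delay-and-sum (DAS) baseline that the non-linear methods improve upon can be sketched as follows. The geometry, sampling parameters, and function signature here are illustrative assumptions, not those of the demo system: each pixel value is the coherent sum of received samples, each picked at the round-trip propagation delay for its element.

```python
import numpy as np

# Minimal delay-and-sum (DAS) sketch -- the linear baseline that non-linear
# beamformers improve upon. All parameters are illustrative.

def delay_and_sum(rf, elem_x, fs, c, px, pz):
    """Beamform one pixel at lateral px, depth pz (meters).

    rf: (n_elements, n_samples) array of received RF traces
    elem_x: (n_elements,) lateral element positions [m]
    fs: sampling rate [Hz]; c: speed of sound [m/s]
    """
    n_elem, n_samp = rf.shape
    out = 0.0
    for e in range(n_elem):
        # two-way path: transmit straight down to depth pz, back to element e
        dist = pz + np.sqrt((px - elem_x[e]) ** 2 + pz ** 2)
        idx = int(round(dist / c * fs))
        if 0 <= idx < n_samp:
            out += rf[e, idx]          # coherent sum across the aperture
    return out
```

Non-linear beamformers replace this plain sum with data-dependent combinations of the delayed samples, which is where the resolution and contrast gains come from.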
By combining low-power hardware with software-defined reconfigurability, the system scales across different clinical environments, including remote and resource-limited settings. Coupled with the portable design and modular reconfiguration, the non-linear beamforming capability brings high-end diagnostic precision to a compact, field-deployable form factor.