Personalization is the process of fitting a model to patient data, a critical step toward applying multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts perform it manually. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision-process model through exploration of the computational model: it learns how the model behaves under changes of its parameters. The agent then automatically derives an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was first tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. It was then applied to the inverse problem of cardiac electrophysiology and to the personalization of a whole-body circulation model. The results suggest that Vito can achieve equivalent, if not better, goodness of fit than standard methods, while being more robust and converging faster. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model.
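The reinforcement-learning reformulation can be illustrated on a deliberately tiny example. The sketch below is not Vito itself: it assumes a one-parameter, one-observable toy forward model, discretizes the misfit to the target measurement into states, and lets a tabular Q-learning agent learn (off-line) which parameter increments reduce the misfit, so that personalization (on-line) becomes a greedy rollout of the learned policy. All names and the model are hypothetical.

```python
import numpy as np

# Toy forward model (stand-in for the computational model): one parameter,
# one observable. The agent only ever queries it, never inspects it.
def forward_model(theta):
    return 2.0 * theta + 1.0

# Actions are fixed parameter increments; the state is the binned misfit
# (model output minus target measurement).
ACTIONS = np.array([-0.5, -0.1, 0.1, 0.5])
BINS = np.linspace(-5, 5, 21)

def state(theta, target):
    return int(np.digitize(forward_model(theta) - target, BINS))

def train(target=4.0, episodes=1000, alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    """Off-line phase: explore the model, learn a Q-table over misfit states."""
    rng = np.random.default_rng(seed)
    q = np.zeros((len(BINS) + 1, len(ACTIONS)))
    for _ in range(episodes):
        theta = rng.uniform(-3, 3)                 # random initial parameter
        for _ in range(30):
            s = state(theta, target)
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(q[s].argmax())
            theta_new = theta + ACTIONS[a]
            # Reward: negative misfit magnitude (closer to the data is better)
            r = -abs(forward_model(theta_new) - target)
            q[s, a] += alpha * (r + gamma * q[state(theta_new, target)].max() - q[s, a])
            theta = theta_new
    return q

def personalize(q, theta0, target=4.0, steps=30):
    """On-line phase: greedily follow the learned policy from theta0."""
    theta = theta0
    for _ in range(steps):
        theta += ACTIONS[int(q[state(theta, target)].argmax())]
    return theta

q = train()
theta = personalize(q, theta0=-2.0)
print(theta, forward_model(theta))  # output should approach the target 4.0
```

The table-based agent is only a sketch; the coarse misfit bins and fixed increments are the hyper-parameters that would need adjusting for a new model.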
Clinical applications of computational cardiac models require precise personalization, i.e. fitting model parameters to capture the patient's physiology. However, due to parameter non-identifiability, limited data, uncertainty in the clinical measurements, and modeling assumptions, various combinations of parameter values may exist that yield the same quality of fit. Hence, there is a need to quantify the uncertainty in the estimated parameters and to ascertain the uniqueness of the found solution. This paper presents a stochastic method to estimate the parameters of an image-based electromechanical model of the heart, along with their uncertainty due to noise in the measurements. First, Bayesian inference is applied to estimate the full posterior probability density function (PDF) of the model parameters. To that end, Markov Chain Monte Carlo sampling is used, made computationally tractable by employing a fast surrogate model based on Polynomial Chaos Expansion instead of the true forward model. Then, the mean-shift algorithm is used to automatically find the modes of the PDF and to select the most likely one while being robust to noise. The approach is used to estimate global active stress and passive stiffness from invasive pressure and image-based volume quantification. Experiments on eight patients showed that our approach not only yielded goodness of fit equivalent to a well-established deterministic method, but also demonstrated the non-uniqueness of the problem and provided uncertainty estimates, crucial information for subsequent clinical assessment of the personalized models.
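The pipeline (surrogate, MCMC sampling, mean-shift mode finding) can be sketched on a minimal example. This is not the actual electromechanical model or PCE implementation: it assumes a toy forward model y = θ² whose sign ambiguity creates a deliberately bimodal posterior, a plain polynomial fit as the surrogate, and a simple Metropolis-Hastings sampler; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model with a non-identifiable parameter: +theta and -theta
# give the same output, so the posterior has two modes.
def forward(theta):
    return theta ** 2

# 1) Cheap polynomial surrogate (stand-in for Polynomial Chaos Expansion),
#    fitted once from a handful of forward-model runs.
train_x = np.linspace(-3.0, 3.0, 25)
coeffs = np.polyfit(train_x, forward(train_x), deg=4)
surrogate = lambda t: np.polyval(coeffs, t)

# 2) Metropolis-Hastings sampling of the posterior using only the surrogate.
y_obs, sigma = 2.25, 0.2          # "measurement" (= forward(1.5)) and noise

def log_post(theta):              # Gaussian likelihood, flat prior
    return -0.5 * ((surrogate(theta) - y_obs) / sigma) ** 2

samples = []
for theta in np.linspace(-2.5, 2.5, 6):      # several chains cover both signs
    for i in range(4000):
        prop = theta + 0.3 * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        if i >= 1000:                        # discard burn-in
            samples.append(theta)
samples = np.array(samples)

# 3) Mean-shift to locate the posterior modes from the samples.
def mean_shift(x0, data, bandwidth=0.3, iters=50):
    x = x0
    for _ in range(iters):
        w = np.exp(-0.5 * ((data - x) / bandwidth) ** 2)
        x = np.sum(w * data) / np.sum(w)
    return x

modes = sorted({round(mean_shift(s, samples), 1) for s in samples[::500]})
print(modes)   # two well-separated modes expose the non-identifiability
```

The same structure carries over to the real setting: only the surrogate construction and the (multi-dimensional) likelihood change, while the sampler and the mode-finding step stay generic.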
A key requirement for recent advances in computational modeling to become clinically applicable is the ability to fit models to patient data. Various personalization techniques have been proposed for isolated sub-components of complex models of heart physiology. However, no work has yet personalized full electromechanical (EM) models in a streamlined, consistent and automatic fashion, evaluated on a large population.
The aim of this project is to develop an integrated system for full EM personalization from routinely acquired clinical data. The importance of the mechanical parameters is analyzed in a comprehensive sensitivity study. On a physiologically diverse set of 15 patients, we demonstrate the effectiveness of our framework by comparing measured and computed values, yielding low errors in left ventricular ejection fraction and stroke volume.
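A sensitivity study of this kind typically perturbs each parameter around many operating points and averages the induced output changes. The sketch below is purely illustrative: it assumes a made-up lumped surrogate mapping two mechanical parameters (passive stiffness, maximum active stress) to stroke volume and ejection fraction, not the actual EM model, and uses a simple Morris-style elementary-effects scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in lumped model mapping mechanical parameters to clinical indices.
# The saturating form is illustrative only; the real EM model is a 3-D
# simulation, not a formula.
def toy_em_model(p):
    stiffness, max_stress = p
    stroke_volume = 90.0 * max_stress / (max_stress + stiffness)   # mL
    ejection_fraction = stroke_volume / 120.0                      # EDV 120 mL
    return np.array([stroke_volume, ejection_fraction])

def elementary_effects(model, lo, hi, n_repeats=50, delta=0.05):
    """Morris-style one-at-a-time perturbations from random base points,
    averaged over repeats; rows = parameters, columns = outputs."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    effects = np.zeros((len(lo), 2))
    for _ in range(n_repeats):
        base = lo + rng.random(len(lo)) * (hi - lo)
        y0 = model(base)
        for j in range(len(lo)):
            pert = base.copy()
            pert[j] += delta * (hi[j] - lo[j])   # step = 5% of parameter range
            effects[j] += np.abs(model(pert) - y0) / delta
    return effects / n_repeats

# Hypothetical parameter ranges: stiffness in [1, 10], max stress in [5, 50].
ee = elementary_effects(toy_em_model, lo=[1.0, 5.0], hi=[10.0, 50.0])
for name, row in zip(["passive stiffness", "max active stress"], ee):
    print(f"{name}: |effect on SV| = {row[0]:.1f} mL, on EF = {row[1]:.3f}")
```

Ranking parameters by these averaged effects indicates which ones must be personalized carefully and which can be fixed to population defaults.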
This project focuses on advanced modeling of the heart in the setting of cardiology applications, with a special focus on the aortic and mitral valves. In current clinical practice, minimally invasive procedures are starting to replace open heart surgeries. As there is no direct view of or access to the affected anatomy, there is a high demand for predictive models, sophisticated planning and operating room (OR) integration tools. In the context of transcatheter therapies, such as Transcatheter Aortic Valve Implantation (TAVI), biomarkers of the entire aortic-mitral apparatus must be extracted in order to perform patient selection and risk assessment as well as device selection and operation optimization. Current methods can extract basic biomarkers such as diameters, area measurements and C-arm angulations, but they fail to model the entire left-heart system and neglect the true volumetric physiology of the aortic root and leaflets. These features are mandatory for extracting advanced biomarkers as well as for computational modeling of biomechanics and fluid dynamics.
The main research focus lies in the development of novel segmentation algorithms for volumetric models of the subvalvular apparatus, in particular the aortic and mitral valves. Imaging modalities that offer high spatial resolution will be utilized to extract these volumetric models. Furthermore, to improve tracking performance, we will explore the value of biomechanical models.
This project is in cooperation with Siemens AG, Corporate Technology, Erlangen.
The modeling of three-dimensional scene geometry from temporal point cloud streams is of particular interest for a variety of computer vision applications. With the advent of RGB-D imaging devices that deliver dense, metric and textured 6-D data in real-time, on-the-fly reconstruction of static environments has come into reach. In this paper, we propose a system for real-time point cloud mapping based on an efficient implementation of the iterative closest point (ICP) algorithm on the graphics processing unit (GPU). In order to achieve robust mappings at real-time performance, our nearest neighbor search evaluates both geometric and photometric information in a direct manner. For acceleration of the search space traversal, we exploit the inherent computing parallelism of GPUs. In this work, we have investigated the fitness of the random ball cover (RBC) data structure and search algorithm, originally proposed for high-dimensional problems, for 6-D data. In particular, we introduce a scheme that enables both fast RBC construction and queries. The proposed system is validated on an indoor scene modeling scenario. For dense data from the Microsoft Kinect sensor (640x480 px), our implementation achieved ICP runtimes of < 20 ms on an off-the-shelf consumer GPU.
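The core idea, nearest-neighbor matching in the combined geometric + photometric (6-D) space followed by a closed-form rigid update, can be sketched on the CPU. The example below is an assumption-laden stand-in: it uses a brute-force O(n²) search in place of the GPU-accelerated RBC, synthetic random points in place of Kinect data, and a Kabsch/SVD step for the rigid alignment; the color weight is a free parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 6-D point cloud: xyz position + rgb color (stand-in for RGB-D data).
n = 200
model = np.hstack([rng.random((n, 3)), rng.random((n, 3))])

# Ground-truth rigid motion, applied to the geometric part only.
a = 0.15
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.05, 0.02])
scene = model.copy()
scene[:, :3] = model[:, :3] @ R_true.T + t_true

def icp_6d(src, dst, iters=30, color_weight=1.0):
    """ICP with 6-D correspondences: brute-force stand-in for the RBC search."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src[:, :3] @ R.T + t
        feat = np.hstack([cur, color_weight * src[:, 3:]])
        ref = np.hstack([dst[:, :3], color_weight * dst[:, 3:]])
        # exhaustive nearest-neighbor query in geometric + photometric space
        d2 = ((feat[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1), :3]
        # closed-form rigid update (Kabsch / SVD)
        mu_s, mu_m = cur.mean(0), matches.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matches - mu_m))
        R_step = Vt.T @ np.diag([1, 1, np.linalg.det(Vt.T @ U.T)]) @ U.T
        R, t = R_step @ R, R_step @ t + (mu_m - R_step @ mu_s)
    return R, t

R_est, t_est = icp_6d(model, scene)
print(np.abs(R_est - R_true).max(), np.abs(t_est - t_true).max())
```

Replacing the exhaustive d2 computation with an RBC query (and mapping both construction and traversal onto the GPU) is what brings the per-frame cost down to the reported < 20 ms for dense sensor data.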