Dr.-Ing. Sven Haase
Alumnus of the Pattern Recognition Lab of the Friedrich-Alexander-Universität Erlangen-Nürnberg
Time-of-Flight for 3-D Endoscopy and Open Surgery
Patch Based Specular Reflection Removal for Range Images in Hybrid 3-D Endoscopy Sven Haase, Jakob Wasza, Mustafa Safak, Thomas Kilgus, Lena Maier-Hein, Hubertus Feußner, Joachim Hornegger
-
In minimally invasive surgery, hybrid 3-D endoscopy is an evolving field of research that aims to augment conventional video-based systems with metric 3-D measurements. One crucial issue with these systems arises from specular reflections that result in overexposed RGB values and invalid range measurements. In this paper, we address this problem by registering video and range information acquired from different viewpoints using a patch-based approach. This makes it possible to replace invalid measurements caused by specular reflections in one view with valid data from non-specular regions in the other view. In contrast to previous approaches that employ interpolation techniques, our method utilizes actual scene information, which is advantageous in a medical environment. In our experiments, we show that our method decreases the mean absolute error in common situations by more than 30% on average compared to conventional interpolation. Moreover, for challenging scenarios we outperform interpolation by more than 1 mm and reconstruct important structures that inherently could not be restored by conventional interpolation.
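To illustrate the cross-view replacement idea, here is a minimal numpy sketch, assuming both views are already registered onto a common image grid and per-pixel validity masks are given; all function and variable names are hypothetical, not from the paper:

```python
import numpy as np

def fuse_views(range_a, valid_a, range_b_warped, valid_b):
    """Fill invalid (specular) range pixels of view A with valid
    measurements warped over from view B; pixels invalid in both
    views remain NaN. A per-pixel simplification of the fusion idea."""
    return np.where(valid_a, range_a,
                    np.where(valid_b, range_b_warped, np.nan))

# Toy demo: the NaN pixel in view A is replaced by view B's measurement.
a = np.array([[1.0, np.nan], [2.0, 3.0]])
b = np.array([[9.0, 1.5], [9.0, 9.0]])
fused = fuse_views(a, ~np.isnan(a), b, np.ones_like(b, dtype=bool))
```

The actual method operates on patches rather than single pixels; this per-pixel fallback only conveys the fusion principle.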
Temporal Non-Local-Means Filtering in Hybrid 3-D Endoscopy Tobias Lindenberger, Sven Haase, Jakob Wasza, Thomas Kilgus, Lena Maier-Hein, Hubertus Feußner, Joachim Hornegger
-
Time-of-Flight (ToF) cameras are a novel and fast-developing technology for acquiring 3-D surfaces. In recent years they have gathered interest from many fields, including 3-D endoscopy. However, preprocessing of the obtained images is absolutely mandatory due to the low signal-to-noise ratio of current sensors. One possibility to increase image quality is the non-local-means (NLM) filter, which utilizes local neighborhoods for denoising. In this paper, we present an enhanced NLM filter for hybrid 3-D endoscopy. The introduced filter gathers structural information from an RGB image that shows the same scene as the range image. To cope with camera movements, we incorporate a temporal component by considering a sequence of frames. Evaluated on simulated data, the algorithm showed an improvement in range accuracy of 70% compared to the unfiltered image.
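The guided, temporal NLM idea can be sketched as follows: patch similarity is computed on a co-registered RGB (intensity) sequence, and the resulting weights average range values across all frames. This is an illustrative simplification with hypothetical names, not the paper's implementation:

```python
import numpy as np

def guided_nlm(range_stack, guide_stack, search=2, patch=1, h=10.0):
    """Denoise the last range frame of a (T, H, W) sequence.
    Weights come from guide-image patches (structural information),
    candidates are drawn from a spatial search window in every frame."""
    T, H, W = range_stack.shape
    pad = search + patch
    r = np.pad(range_stack, ((0, 0), (pad, pad), (pad, pad)), mode='edge')
    g = np.pad(guide_stack, ((0, 0), (pad, pad), (pad, pad)), mode='edge')
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            # Reference patch taken from the guide image of the current frame.
            ref = g[-1, cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
            acc, wsum = 0.0, 0.0
            for t in range(T):                       # temporal component
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        ny, nx = cy + dy, cx + dx
                        cand = g[t, ny - patch:ny + patch + 1,
                                    nx - patch:nx + patch + 1]
                        w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                        acc += w * r[t, ny, nx]
                        wsum += w
            out[y, x] = acc / wsum
    return out
```

A real implementation would vectorize or run on the GPU; the triple loop here only keeps the sketch readable.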
3-D Operation Situs Reconstruction with Time-of-Flight Satellite Cameras Using Photogeometric Data Fusion Sven Haase, Sebastian Bauer, Jakob Wasza, Thomas Kilgus, Lena Maier-Hein, Armin Schneider, Michael Kranzfelder, Hubertus Feußner, Joachim Hornegger
-
Minimally invasive procedures are of growing importance in modern surgery. Navigation and orientation are major issues during these interventions, as conventional endoscopes only cover a limited field of view. We propose the application of a Time-of-Flight (ToF) satellite camera at the zenith of the pneumoperitoneum to survey the operation situs. Due to its limited field of view we propose a fusion of different 3-D views to reconstruct the situs using photometric and geometric information provided by the ToF sensor. We were able to reconstruct the entire abdomen with a mean absolute mesh-to-mesh error of less than 5 mm compared to CT ground truth data, at a frame rate of 3 Hz. The framework was evaluated on real data from a miniature ToF camera in an open surgery pig study and, for quantitative evaluation, with a realistic human phantom. With the proposed approach to operation situs reconstruction we improve the surgeons' orientation and navigation and therefore increase safety and speed up surgical interventions.
Medical Image Computing and Computer-Assisted Intervention - MICCAI 2013, Lecture Notes in Computer Science, Nagoya, 23.09, vol. 8149, pp. 356-363, 2013
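The mean absolute mesh-to-mesh error reported above can be approximated by a nearest-neighbour point-to-point distance over the mesh vertices; a toy sketch (the exact metric used in the evaluation is an assumption):

```python
import numpy as np

def mean_mesh_to_mesh_error(verts_a, verts_b):
    """Mean absolute nearest-neighbour distance from each vertex of
    mesh A to mesh B -- a simple point-based stand-in for a full
    mesh-to-mesh (point-to-surface) metric."""
    d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

For large meshes a k-d tree would replace the quadratic distance matrix; the brute-force version keeps the definition explicit.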
Time-of-Flight Based Collision Avoidance for Robot Assisted Minimally Invasive Surgery Sven Haase, Armin Schneider, Michael Kranzfelder, Thomas Kilgus, Lena Maier-Hein, Joachim Hornegger, Hubertus Feußner
-
In minimally invasive surgery, navigation and orientation are major issues due to the limited field of view. To ensure safety while navigating through the patient's abdomen, it is highly important to avoid collisions with surrounding tissue and organs. Recently, intelligent assistance systems have been developed to eliminate the error-prone navigation of surgeons and replace it with indirect navigation using a robot. To ease navigation and guarantee that the endoscope always keeps a fixed distance to the operation site, we introduce a Time-of-Flight-based module for robotic assistance systems. Our module allows the endoscope to hover over the situs and avoid collisions with healthy tissue.
Evaluating effectiveness and acceptance of robots in surgery: user centered design and economic factors (ICRA Workshop 2013), Karlsruhe, 06.05, pp. n/a, 2013
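A hover behaviour of this kind can be sketched as a simple stand-off controller on the ToF depth map: find the closest tissue point and command the endoscope to hold, advance, or retract. Thresholds and names below are hypothetical, not from the paper:

```python
import numpy as np

def clearance_command(depth_mm, target_mm=20.0, deadband_mm=2.0):
    """Derive an axial command from a ToF depth map so the endoscope
    hovers at a fixed stand-off above the closest tissue point.
    Returns 0.0 (hold), a positive value (advance), or negative (retract)."""
    valid = np.isfinite(depth_mm) & (depth_mm > 0)
    if not valid.any():
        return 0.0                      # no measurement: hold position
    closest = depth_mm[valid].min()     # nearest tissue point in the map
    error = closest - target_mm
    if abs(error) <= deadband_mm:
        return 0.0                      # within tolerance: hold position
    return error
```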
GPU Accelerated Time-of-Flight Super-Resolution for Image-Guided Surgery Jens Wetzl, Oliver Taubmann, Sven Haase, Thomas Köhler, Martin Kraus, Joachim Hornegger
-
In the field of image-guided surgery, Time-of-Flight (ToF) sensors are of interest due to their fast acquisition of 3-D surfaces. However, the poor signal-to-noise ratio and low spatial resolution of today's ToF sensors require preprocessing of the acquired range data. Super-resolution is a technique for image restoration and resolution enhancement by utilizing information from successive raw frames of an image sequence. We propose a super-resolution framework using the graphics processing unit. Our framework enables interactive frame rates, computing an upsampled image from 10 noisy frames of 200x200 px with an upsampling factor of 2 in 109 ms. The root-mean-square error of the super-resolved surface with respect to ground truth data is improved by more than 20% relative to a single raw frame.
Computer- und Roboter Assistierte Chirurgie (CURAC 2013), Innsbruck, 28.11, pp. 194-197, 2013
Medical Image Computing and Computer-Assisted Intervention - MICCAI 2013, Lecture Notes in Computer Science (International Conference on Medical Image Computing and Computer-Assisted Intervention), Nagoya, Japan, 2013, vol. 8149, pp. 139-146, 2013, ISBN 978-3-642-40810-6
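Multi-frame super-resolution can be illustrated with a basic "shift and add" scheme: registered low-resolution frames are placed on a finer grid at their sub-pixel offsets and overlapping samples are averaged. This CPU sketch assumes the shifts are already known and rounds them to the high-resolution grid; it is not the paper's GPU implementation:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=2):
    """Place each (H, W) low-res frame onto a (H*factor, W*factor) grid
    at its sub-pixel offset (dy, dx) and average overlapping samples.
    Unobserved high-res cells stay zero."""
    H, W = frames[0].shape
    hi = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor))       # offset rounded to hi-res grid
        ox = int(round(dx * factor))
        ys = (np.arange(H) * factor + oy) % (H * factor)
        xs = (np.arange(W) * factor + ox) % (W * factor)
        hi[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    hi[cnt > 0] /= cnt[cnt > 0]            # average where samples landed
    return hi
```

Real super-resolution pipelines add robust registration, deblurring, and regularization; this sketch only shows how sub-pixel shifts yield extra spatial samples.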
Laparoscopic Instrument Localization using a 3-D Time-of-Flight/RGB Endoscope Sven Haase, Jakob Wasza, Thomas Kilgus, Joachim Hornegger
-
Minimally invasive procedures are of importance in modern surgery due to reduced operative trauma and recovery time. To enable robot-assisted interventions, automatic tracking of endoscopic tools is an essential task. State-of-the-art techniques rely on 2-D color information only, which is error-prone under varying illumination and the unpredictable color distribution within the human body. In this paper, we use a novel 3-D Time-of-Flight/RGB endoscope that allows us to use both color and range information to locate laparoscopic instruments in 3-D. The proposed technique calculates a score indicating which of the two modalities is more reliable and adapts the subsequent steps of the localization procedure based on this reliability. In experiments on real data, the tool tip is located with an average 3-D distance error of less than 4 mm compared to manually labeled ground truth data, at a frame rate of 10 fps.
Computer- und Roboter Assistierte Chirurgie (CURAC 2013), Innsbruck, 28.11, pp. 194-197, 2013
IEEE Workshop on Applications of Computer Vision (WACV), Clearwater, 18.01, pp. 449-454, 2013
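The reliability-scoring idea can be sketched as a simple per-frame coverage comparison between the two modalities; the actual score in the paper is more involved, and all names here are hypothetical:

```python
import numpy as np

def pick_modality(color_mask, range_valid):
    """Decide whether the color or range channel is the more reliable
    cue for this frame, via a naive coverage score: the fraction of
    pixels matching the tool color model vs. the fraction of valid
    range measurements."""
    color_score = color_mask.mean()
    range_score = range_valid.mean()
    if color_score >= range_score:
        return 'color', color_score
    return 'range', range_score
```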
ToF/RGB Sensor Fusion for Augmented 3D Endoscopy using a Fully Automatic Calibration Scheme S. Haase, C. Forman, T. Kilgus, R. Bammer, L. Maier-Hein, J. Hornegger
-
3-D endoscopy is an evolving field of research and offers great benefits for minimally invasive procedures. Besides the pure topology, color texture is an essential feature for optimal visualization. Therefore, in this paper, we propose a sensor fusion of a Time-of-Flight (ToF) and an RGB sensor. This requires an intrinsic and extrinsic calibration of both cameras. In particular, the low resolution of the ToF camera (64x50 px) and inhomogeneous illumination preclude the use of standard calibration techniques. By enhancing the image data and using self-encoded markers for automatic checkerboard detection, a re-projection error of less than 0.23 px was achieved for the ToF camera. The relative transformation of both sensors for data fusion was calculated in an automatic manner.
Bildverarbeitung für die Medizin (Bildverarbeitung für die Medizin 2012), Berlin, 19.03, pp. 111-116, 2012, ISBN 978-3-642-28501-1
Optimized Endoscopic Time-of-Flight Surface Reconstruction by Integrating a Structure-from-Motion Approach A. Groch, S. Haase, M. Wagner, T. Kilgus, H. Kenngott, H.-P. Schlemmer, J. Hornegger, H.-P. Meinzer, L. Maier-Hein
-
One of the greatest challenges for computer-assisted systems in laparoscopic interventions is the accurate and fast intraoperative reconstruction of the organ surface. While reconstruction techniques based on multiple-view methods, such as stereo reconstruction, have been the subject of research for some time, the world's first Time-of-Flight (ToF) endoscope was presented only recently. Its advantages over stereo are the high update rate and the dense depth image, independent of the observed scene. These are offset, however, by drawbacks such as poor accuracy caused by high noise levels and systematic errors. To combine the advantages of both approaches, we develop a concept that fuses the ToF endoscopy technique with a stereo-like multiple-view approach (structure from motion). The approach requires no additional imaging modality such as a stereoscope, but instead uses the (monocular) color data already acquired by the ToF endoscope. Initial results show that the accuracy of the surface reconstruction can be improved with this approach.
Bildverarbeitung für die Medizin (Bildverarbeitung für die Medizin 2012), Berlin, 19.03, pp. 39-44, 2012, ISBN 978-3-642-28501-1
Sparse Principal Axes Statistical Surface Deformation Models for Respiration Analysis and Classification J. Wasza, S. Bauer, S. Haase, J. Hornegger
-
Detection, analysis and compensation of respiratory motion are key issues for a variety of medical applications, such as tumor tracking in fractionated radiotherapy. One class of approaches aims to predict the internal target movement by correlating intra-operatively captured body surface deformations with a preoperatively learned deformable model. Here, range imaging (RI) devices assume a prominent role for dense and real-time surface acquisition due to their non-intrusive and markerless nature.
In this work we present an RI based statistical model built upon sparse principal axes for body surface deformations induced by respiratory motion. In contrast to commonly employed global models based on principal component analysis, we exploit orthomax rotations in order to enable the differentiation between distinctive and local respiratory motion patterns such as thoracic and abdominal breathing. In a case study, we demonstrate our model's capability to capture dense respiration curves and the usage of our model for simulating realistic distinctive respiratory motion patterns.
Bildverarbeitung für die Medizin 2012 (Bildverarbeitung für die Medizin), Berlin, Germany, 20.03.2012, pp. 316-321, 2012, ISBN 978-3-642-28501-1
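The orthomax rotation at the core of the model can be illustrated with a standard varimax iteration, which rotates PCA loadings toward sparser, more local axes while keeping them orthogonal (a generic textbook sketch, not the paper's code):

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Varimax (orthomax with gamma=1) rotation of a (n_features,
    n_components) loading matrix. The orthogonal rotation R is found
    by iterated SVD of the varimax-criterion gradient."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        # Gradient of the varimax criterion w.r.t. the rotation.
        grad = loadings.T @ (L ** 3 - L @ np.diag(np.sum(L ** 2, axis=0)) / p)
        U, s, Vt = np.linalg.svd(grad)
        R = U @ Vt                       # nearest orthogonal matrix
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R
```

Because R is orthogonal, the rotated loadings span the same subspace as the input; only the axes are redistributed toward sparse, distinctive patterns (e.g., thoracic vs. abdominal breathing).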
Multi-modal Surface Registration for Markerless Initial Patient Setup in Radiation Therapy using Microsoft's Kinect Sensor S. Bauer, J. Wasza, S. Haase, N. Marosi, J. Hornegger
-
In radiation therapy, prior to each treatment fraction, the patient must be aligned to computed tomography (CT) data. Patient setup verification systems based on range imaging (RI) can accurately verify the patient position and adjust the treatment table at a fine scale, but require an initial manual setup using lasers and skin markers. We propose a novel markerless solution that enables a fully-automatic initial coarse patient setup. The table transformation that brings template and reference data in congruence is estimated from point correspondences based on matching local surface descriptors. Inherently, this point-based registration approach is capable of coping with gross initial misalignments and partial matching. Facing the challenge of multi-modal surface registration (RI/CT), we have adapted state-of-the-art descriptors to achieve invariance to mesh resolution and robustness to variations in topology. In a case study on real data from a low-cost RI device (Microsoft Kinect), the performance of different descriptors is evaluated on anthropomorphic phantoms. Furthermore, we have investigated the system's resilience to deformations for mono-modal RI/RI registration of data from healthy volunteers. Under gross initial misalignments, our method resulted in an average angular error of 1.5° and an average translational error of 13.4 mm in RI/CT registration. This coarse patient setup provides a feasible initialization for subsequent refinement with verification systems.
IEEE International Conference on Computer Vision (ICCV) Workshops (IEEE Workshop on Consumer Depth Cameras for Computer Vision (CDC4CV)), Barcelona, Spain, 12.11.2011, pp. 1175-1181, 2011
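Estimating a rigid table transformation from point correspondences can be sketched with the classic Kabsch/Umeyama least-squares alignment (reflection-safe, no scaling). This is a generic building block for the registration step, not the descriptor matching itself:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform (R, t) with dst_i ~ R @ src_i + t,
    for matched (N, 3) point sets. Kabsch algorithm: SVD of the
    cross-covariance of the centered point sets."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice the correspondences from descriptor matching contain outliers, so this estimator would be wrapped in a robust scheme such as RANSAC.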
RITK: The Range Imaging Toolkit - A Framework for 3-D Range Image Stream Processing J. Wasza, S. Bauer, S. Haase, M. Schmid, S. Reichert, J. Hornegger
-
The recent introduction of low-cost devices for real-time acquisition of dense 3-D range imaging (RI) streams has attracted a great deal of attention. Many research communities will benefit from this evolution. However, to date, there exists no open source framework that is explicitly dedicated to real-time processing of RI streams.
In this paper, we present the Range Imaging Toolkit (RITK). The goal is to provide a powerful yet intuitive software platform that facilitates the development of range image stream applications. In addition to its usage as a library within existing software, the toolkit supports developers with an easy-to-use development and rapid prototyping infrastructure for the creation of application-specific RITK modules or standalone solutions. Putting a strong focus on modularity, this application-layer allows for distribution and reuse of existing modules. In particular, a pool of dynamically loadable plugins can be assembled into tailored RI processing pipelines during runtime. RITK puts emphasis on real-time processing of range image streams and proposes the use of a dedicated pipeline mechanism. Furthermore, we introduce a powerful and convenient interface for range image processing on the graphics processing unit (GPU). Being designed thoroughly and in a generic manner, the toolkit is able to cope with the broad diversity of data streams provided by available RI devices and can easily be extended by custom range imaging sensors or processing modules.
RITK is an open source project and is publicly available at www.cs.fau.de/ritk.
VMV 2011: Vision, Modeling & Visualization, Berlin, Germany, 04.10.2011, pp. 57-64, 2011, ISBN 978-3-905673-85-2
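The plugin-pipeline concept can be conveyed with a minimal sketch: processing modules are assembled at runtime and each frame is pushed through them in order. This is purely illustrative and not the actual RITK API:

```python
class RangeImagePipeline:
    """Minimal plugin-style stream pipeline in the spirit of RITK:
    modules (any callables mapping frame -> frame) are chained at
    runtime and applied in order to each incoming frame."""

    def __init__(self):
        self.modules = []

    def add(self, module):
        self.modules.append(module)
        return self                      # allow fluent chaining

    def process(self, frame):
        for module in self.modules:
            frame = module(frame)
        return frame

# Usage: assemble a tailored pipeline from two toy modules.
pipeline = RangeImagePipeline().add(lambda f: f + 1).add(lambda f: f * 2)
```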