192 research outputs found
Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation
In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is
required for subsurface visualisation to characterise the state of the tissue.
However, scanning of large tissue surfaces in the presence of deformation is a
challenging task for the surgeon. Recently, robot-assisted local tissue
scanning has been investigated for motion stabilisation of imaging probes to
facilitate the capture of good-quality images and reduce the surgeon's
cognitive load. Nonetheless, these approaches require the tissue surface to be
static or deform with periodic motion. To eliminate these assumptions, we
propose a visual servoing framework for autonomous tissue scanning, able to
deal with free-form tissue deformation. The 3D structure of the surgical scene
is recovered and a feature-based method is proposed to estimate the motion of
the tissue in real-time. A desired scanning trajectory is manually defined on a
reference frame and continuously updated using projective geometry to follow
the tissue motion and control the movement of the robotic arm. The advantage of
the proposed method is that it does not require the learning of the tissue
motion prior to scanning and can deal with free-form deformation. We deployed
this framework on the da Vinci surgical robot using the da Vinci Research Kit
(dVRK) for ultrasound tissue scanning. Since the framework does not rely on
information from the ultrasound data, it can be easily extended to other
probe-based imaging modalities.
Comment: 7 pages, 5 figures, ICRA 202
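The continuous trajectory update described above can be pictured with a small sketch. This is not the authors' implementation: it assumes matched 2D surface features between the reference frame and the current frame, and fits a least-squares affine transform (a simplification of the projective model) to warp the scanning waypoints onto the deformed tissue; `fit_affine` and `warp_trajectory` are illustrative names.

```python
import numpy as np

def fit_affine(ref_pts, cur_pts):
    """Least-squares 2D affine transform mapping ref_pts -> cur_pts.
    ref_pts, cur_pts: (N, 2) arrays of matched feature locations."""
    n = ref_pts.shape[0]
    A = np.hstack([ref_pts, np.ones((n, 1))])        # (N, 3)
    X, *_ = np.linalg.lstsq(A, cur_pts, rcond=None)  # (3, 2)
    return X

def warp_trajectory(traj, X):
    """Apply the fitted transform to scanning waypoints (M, 2)."""
    m = traj.shape[0]
    return np.hstack([traj, np.ones((m, 1))]) @ X

# Toy case: the tissue translated by (5, -3); the trajectory follows.
ref = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
cur = ref + np.array([5., -3.])
X = fit_affine(ref, cur)
traj = np.array([[2., 2.], [4., 6.]])
print(warp_trajectory(traj, X))   # ≈ [[7., -1.], [9., 3.]]
```

A full homography (8 degrees of freedom) would handle perspective effects as well; the affine fit keeps the sketch linear and dependency-free.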
Pulmonary vasospasm in systemic sclerosis: noninvasive techniques for detection
In a subgroup of patients with systemic sclerosis (SSc), vasospasm affecting the pulmonary circulation may contribute to worsening respiratory symptoms, including dyspnea. Noninvasive assessment of pulmonary blood flow (PBF), utilizing inert-gas rebreathing (IGR) and dual-energy computed-tomography pulmonary angiography (DE-CTPA), may be useful for identifying pulmonary vasospasm. Thirty-one participants (22 SSc patients and 9 healthy volunteers) underwent PBF assessment with IGR and DE-CTPA at baseline and after provocation with a cold-air inhalation challenge (CACh). Before the study investigations, participants were assigned to subgroups: group A included SSc patients who reported increased breathlessness after exposure to cold air (n = 11), group B included SSc patients without cold-air sensitivity (n = 11), and group C comprised the 9 healthy volunteers. Median change in PBF from baseline was compared between groups A, B, and C after CACh. Compared with groups B and C, group A showed a significant decline in median PBF from baseline at 10 minutes (−10%; range: −52.2% to 4.0%; P < 0.01), 20 minutes (−17.4%; −27.9% to 0.0%; P < 0.01), and 30 minutes (−8.5%; −34.4% to 2.0%; P < 0.01) after CACh. There was no significant difference in median PBF change in groups B or C at any time point and no change in pulmonary perfusion on DE-CTPA. The reduction in pulmonary blood flow following CACh suggests that pulmonary vasospasm may be present in a subgroup of patients with SSc and may contribute to worsening dyspnea on exposure to cold air.
Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations
With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques in providing online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras have only a limited field-of-view, and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed in which a random binary descriptor based on Haar-like features is combined with a random forest classifier. For robust retargeting, we also propose a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.
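As a rough illustration of the descriptor stage, the sketch below computes Haar-like box responses via an integral image and thresholds pairs of them into a binary code, in the spirit of randomised binary tests; the box pairs, the `binary_descriptor` helper, and the toy patch are invented for illustration and do not reproduce the paper's detection cascade.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column for easy box sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def binary_descriptor(img, pairs):
    """Compare sums of box pairs (Haar-like responses) to produce
    one bit per pair, as in a randomised binary test."""
    ii = integral_image(img)
    return np.array([int(box_sum(ii, *a) > box_sum(ii, *b))
                     for a, b in pairs])

patch = np.zeros((8, 8)); patch[:, 4:] = 1.0   # bright right half
pairs = [((0, 4, 8, 8), (0, 0, 8, 4)),         # right vs left box
         ((0, 0, 4, 8), (4, 0, 8, 8))]         # top vs bottom box
print(binary_descriptor(patch, pairs))          # [1 0]
```

Because each bit costs only a handful of table lookups, such descriptors can be evaluated densely in real time, which is what makes them attractive for online detection.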
SCEM+: Real-Time Robust Simultaneous Catheter and Environment Modeling for Endovascular Navigation
© 2016 IEEE. Endovascular procedures are characterised by significant challenges, mainly due to the complexity of catheter control and navigation. Real-time recovery of the 3-D structure of the vasculature is necessary to visualise the interaction between the catheter and its surrounding environment to facilitate catheter manipulations. State-of-the-art intraoperative vessel reconstruction approaches are increasingly relying on nonionising imaging techniques such as optical coherence tomography (OCT) and intravascular ultrasound (IVUS). To enable accurate recovery of vessel structures and to deal with sensing errors and abrupt catheter motions, this letter presents a robust and real-time vessel reconstruction scheme for endovascular navigation based on IVUS and electromagnetic (EM) tracking. It is formulated as a nonlinear optimisation problem, which considers the uncertainty in both the IVUS contour and the EM pose, as well as vessel morphology provided by preoperative data. Detailed phantom validation is performed and the results demonstrate the potential clinical value of the technique.
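One building block of such an uncertainty-aware formulation can be sketched as inverse-covariance (information-form) fusion of two noisy estimates of the same vessel cross-section centre. The toy `fuse` function below is an assumption for illustration, not the letter's actual nonlinear optimisation, and the covariances are invented.

```python
import numpy as np

def fuse(x_ivus, P_ivus, x_em, P_em):
    """Information-filter style fusion of two noisy estimates of the
    same vessel cross-section centre, each with covariance P. The
    fused covariance is smaller than either input's."""
    W1, W2 = np.linalg.inv(P_ivus), np.linalg.inv(P_em)
    P = np.linalg.inv(W1 + W2)
    return P @ (W1 @ x_ivus + W2 @ x_em), P

x_ivus = np.array([1.0, 0.0, 0.0])   # centre from the IVUS contour
x_em   = np.array([0.0, 0.0, 0.0])   # centre from EM tracking
P_ivus = np.eye(3)                    # equal trust in this toy case
P_em   = np.eye(3)
x, P = fuse(x_ivus, P_ivus, x_em, P_em)
print(x)   # [0.5 0. 0.] — equal trust gives the midpoint
```

Inflating one sensor's covariance (e.g. after an abrupt catheter motion) automatically shifts the fused estimate toward the other sensor, which is the intuition behind handling sensing errors in the optimisation.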
Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions
PURPOSE: Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. METHODS: The review focuses on ML techniques directly applied to surgery, surgical robotics, and surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of the keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis, and learning to perceive. RESULTS: Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools for understanding and modelling surgical skill and competence or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. CONCLUSION: ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is expected to also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would assist the surgical team on a cognitive level as well, for example by lowering the mental load of the team. ML could, for instance, help extract surgical skills learned through demonstration by human experts and transfer them to robotic skills.
Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics: current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.
Registration-free simultaneous catheter and environment modelling
Endovascular procedures are challenging to perform due to the complexity and difficulty of catheter manipulation. The simultaneous recovery of the 3D structure of the vasculature and the catheter position and orientation intra-operatively is necessary for catheter control and navigation. State-of-the-art Simultaneous Catheter and Environment Modelling provides robust and real-time 3D vessel reconstruction based on real-time intravascular ultrasound (IVUS) imaging and electromagnetic (EM) sensing, but still relies on accurate registration between EM and pre-operative data. In this paper, a registration-free vessel reconstruction method is proposed for endovascular navigation. In the optimisation framework, the EM-CT registration is estimated and updated intra-operatively together with the 3D vessel reconstruction from IVUS, EM and pre-operative data, so no explicit registration step is required. The proposed algorithm can also deal with global (patient) motion and periodic deformation caused by cardiac motion. Phantom and in-vivo experiments validate the accuracy of the algorithm and the results demonstrate the potential clinical value of the technique.
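The intra-operative registration update can be illustrated with a standard rigid alignment step. The sketch below uses the Kabsch/Procrustes algorithm on matched 3D points; this is a generic building block, not the paper's joint optimisation framework, and the point sets are synthetic.

```python
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) minimising ||R @ p + t - q|| over
    matched 3D point sets src, dst of shape (N, 3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard vs reflection
    R = Vt.T @ np.diag([1., 1., d]) @ U.T
    return R, mu_d - R @ mu_s

# Synthetic EM samples: the CT centreline rotated 90° about z, shifted.
ct = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
em = ct @ Rz.T + np.array([2., 0., 0.])
R, t = kabsch(ct, em)
print(np.allclose(R, Rz), np.allclose(t, [2., 0., 0.]))  # True True
```

Re-running such an alignment as new IVUS/EM samples arrive is one simple way to keep a registration current without a separate pre-operative registration session.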
Caveats on the first-generation da Vinci Research Kit: latent technical constraints and essential calibrations
Telesurgical robotic systems provide a well established form of assistance in
the operating theater, with evidence of growing uptake in recent years. Until
now, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale,
California) has been the most widely adopted robot of this kind, with more than
6,700 systems in current clinical use worldwide [1]. To accelerate research on
robotic-assisted surgery, the retired first-generation da Vinci robots have
been redeployed for research use as "da Vinci Research Kits" (dVRKs), which
have been distributed to research institutions around the world to support both
training and research in the sector. In the past ten years, a great amount of
research on the dVRK has been carried out across a vast range of research
topics. During this extensive and distributed process, common technical issues
buried deep within the dVRK research and development architecture have been
identified, recurring across dVRK user feedback regardless of the breadth and
disparity of the research directions pursued. This
paper gathers and analyzes the most significant of these, with a focus on the
technical constraints of the first-generation dVRK, which both existing and
prospective users should be aware of before embarking on dVRK-related
research. The hope is that this review will aid users in identifying and
addressing common limitations of the systems promptly, thus helping to
accelerate progress in the field.
Comment: 15 pages, 7 figures
H-Net: unsupervised attention-based stereo depth estimation leveraging epipolar geometry
Depth estimation from a stereo image pair has become one of the most explored applications in computer vision, with most previous methods relying on fully supervised learning settings. However, due to the difficulty of acquiring accurate and scalable ground truth data, training fully supervised methods is challenging. As an alternative, self-supervised methods are becoming more popular as a way to mitigate this challenge. In this paper, we introduce H-Net, a deep-learning framework for unsupervised stereo depth estimation that leverages epipolar geometry to refine stereo matching. For the first time, a Siamese autoencoder architecture is used for depth estimation, which allows mutual information between rectified stereo images to be extracted. To enforce the epipolar constraint, a mutual epipolar attention mechanism is designed that gives more emphasis to correspondences of features lying on the same epipolar line while learning mutual information between the input stereo pair. Stereo correspondences are further enhanced by incorporating semantic information into the proposed attention mechanism. More specifically, the optimal transport algorithm is used to suppress attention and eliminate outliers in areas not visible in both cameras. Extensive experiments on KITTI2015 and Cityscapes show that the proposed modules improve the performance of unsupervised stereo depth estimation methods while closing the gap with fully supervised approaches.
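The epipolar constraint for a rectified pair is easy to illustrate: corresponding features share an image row, so cross-view attention between different rows can be masked out. The sketch below is a plain NumPy mock-up with invented shapes and a hard mask, not the paper's learned mutual attention or its optimal-transport filtering.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def epipolar_attention(f_left, f_right, off_line_penalty=1e9):
    """Attention between left/right feature maps (H, W, C). For a
    rectified pair, correspondences share a row, so scores between
    different rows are suppressed before the softmax."""
    H, W, C = f_left.shape
    q = f_left.reshape(H * W, C)
    k = f_right.reshape(H * W, C)
    scores = q @ k.T / np.sqrt(C)                  # (HW, HW)
    rows = np.repeat(np.arange(H), W)
    mask = rows[:, None] != rows[None, :]          # off-epipolar pairs
    scores[mask] -= off_line_penalty
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
A = epipolar_attention(rng.normal(size=(4, 5, 8)),
                       rng.normal(size=(4, 5, 8)))
# All attention mass stays on the matching epipolar line (row):
rows = np.repeat(np.arange(4), 5)
print(np.allclose(A[rows[:, None] != rows[None, :]], 0.0, atol=1e-6))
```

In a learned variant the hard mask would typically be a soft bias, so the network can still recover from imperfect rectification.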
Towards autonomous control of surgical instruments using adaptive-fusion tracking and robot self-calibration
The ability to track surgical instruments in real-time is crucial for autonomous Robot-Assisted Surgery (RAS). Recently, the fusion of visual and kinematic data has been proposed for tracking surgical instruments. However, these methods assume that both sensors are equally reliable and cannot successfully handle cases where there are significant perturbations in one of the sensors' data. In this paper, we address this problem by proposing an enhanced fusion-based method. The main advantage of our method is that it can adjust the fusion weights to adapt to sensor perturbations and failures. Another problem is that, before performing an autonomous task, these robots have to be repetitively recalibrated by a human for each new patient to estimate the transformations between the different robotic arms. To address this problem, we propose a self-calibration algorithm that empowers the robot to autonomously calibrate these transformations by itself at the beginning of the surgery. We applied our fusion and self-calibration algorithms to autonomous ultrasound tissue scanning and showed that the robot achieved stable ultrasound imaging when using our method. Our performance evaluation shows that the proposed method outperforms the state of the art in both normal and challenging situations.
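A minimal sketch of residual-driven fusion weights, assuming both sensors report a 3D tip position and using the previous fused estimate as a crude consistency reference; the `adaptive_fuse` helper and its weighting rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def adaptive_fuse(visual, kinematic, prev, eps=1e-6):
    """Fuse two instrument-tip estimates, down-weighting whichever
    sensor jumps furthest from the previous fused estimate (a crude
    perturbation signal). Weights are inverse squared residuals."""
    r_v = np.linalg.norm(visual - prev) ** 2 + eps
    r_k = np.linalg.norm(kinematic - prev) ** 2 + eps
    w_v, w_k = 1.0 / r_v, 1.0 / r_k
    return (w_v * visual + w_k * kinematic) / (w_v + w_k)

prev      = np.array([0.0, 0.0, 0.0])    # last fused tip position
visual    = np.array([0.1, 0.0, 0.0])    # small, plausible motion
kinematic = np.array([5.0, 0.0, 0.0])    # large jump: likely a fault
fused = adaptive_fuse(visual, kinematic, prev)
print(fused[0])   # close to the visual estimate, not the outlier
```

The same logic applies symmetrically: if the visual tracker loses the instrument and jumps, the kinematic channel dominates instead, which is the behaviour equal-weight fusion cannot provide.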
