The parasternal long axis (PLAX) is a routine imaging plane used by clinicians to assess the overall function of the left ventricle during an echocardiogram. Measurements from the PLAX view, in particular the left ventricular internal dimension at end-diastole (LVIDd) and end-systole (LVIDs), are significant markers used to identify cardiovascular disease and can provide an estimate of ejection fraction (EF). However, due to the user-dependent nature of echocardiography, these measurements suffer from large inter-observer variability, which greatly affects the sensitive formula used to calculate PLAX EF. While a few previous works have attempted to reduce this variability by automating LVID measurements, their models not only lack reliable accuracy and precision but are also generally unsuited to point-of-care ultrasound (POCUS), where computing resources are limited. In this paper, we propose a fully automatic, lightweight landmark detection network for detecting LVID and rapidly estimating PLAX EF. Our model builds on recent advances in deep video landmark tracking with extremely sparse annotations.1 The model is trained on only the two frames in each cardiac cine that contain the clinician-labeled LVIDd or LVIDs measurement. Using data from 34,305 patients for our experiments, the proposed model accurately tracks the contraction of the left ventricular walls. Our model achieves a mean absolute error and standard deviation of 2.65 ± 2.36 mm, 2.77 ± 2.58 mm, and 8.45 ± 7.43% for predicting LVIDd length, LVIDs length, and PLAX EF, respectively. As a lightweight network with fewer than 125,000 parameters, our model is well suited to POCUS applications.
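The abstract does not state which volume formula underlies its PLAX EF estimate; a common choice in practice is the Teichholz-corrected cube formula, in which each internal dimension is effectively cubed, which is why small measurement errors propagate so strongly into EF. The Python sketch below illustrates that calculation under this assumption (`teichholz_volume` and `plax_ef` are illustrative names, not the authors' code).

```python
def teichholz_volume(lvid_mm: float) -> float:
    """Left-ventricular volume (mL) from a single internal dimension (mm),
    using the Teichholz correction of the cube formula."""
    lvid_cm = lvid_mm / 10.0  # the formula is conventionally stated in cm
    return (7.0 / (2.4 + lvid_cm)) * lvid_cm ** 3

def plax_ef(lvidd_mm: float, lvids_mm: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic dimensions."""
    edv = teichholz_volume(lvidd_mm)  # end-diastolic volume
    esv = teichholz_volume(lvids_mm)  # end-systolic volume
    return 100.0 * (edv - esv) / edv

# Example: LVIDd = 48 mm, LVIDs = 32 mm gives an EF of roughly 62%.
print(f"EF = {plax_ef(48.0, 32.0):.1f}%")
```

Because the dimensions enter cubically, a few millimeters of inter-observer disagreement in LVIDd or LVIDs can shift the estimated EF by several percentage points, which motivates automating the landmark measurements.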
Echocardiography (echo) is one of the most widely used imaging techniques for evaluating cardiac function. Left ventricular ejection fraction (EF) is a commonly assessed echocardiographic measurement for studying systolic function and is a primary index of cardiac contractility. EF indicates the percentage of blood volume ejected from the left ventricle in a cardiac cycle. Several deep learning (DL) works have contributed to the automatic measurement of EF in echo via LV segmentation and visual assessment,1-8 but the design of a lightweight, robust video-based model for EF estimation in portable mobile environments remains a challenge. To overcome this limitation, we propose a modified Tiny Video Network (TVN) with sampling-free uncertainty estimation for video-based EF measurement in echo. Our key contribution is achieving accuracy comparable to the contemporary state-of-the-art video-based model, the Echonet-Dynamic approach,1 with a much smaller model size. Moreover, we model aleatoric uncertainty in our network to capture the inherent noise and ambiguity of EF labels in echo data and to improve prediction robustness. The proposed network is suitable for real-time video-based EF estimation on portable mobile devices. For experiments, we use the publicly available Echonet-Dynamic dataset1 with 10,030 four-chamber echo videos and their corresponding EF labels. The experiments show the advantages of the proposed method in performance and robustness.
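The abstract does not spell out its loss function; a standard sampling-free formulation of aleatoric uncertainty (in the style of Kendall and Gal) has the regression head predict both an EF mean and a log-variance per clip and minimizes the Gaussian negative log-likelihood, so that clips with noisy or ambiguous labels are down-weighted. The PyTorch sketch below is one such formulation under that assumption; all names are illustrative.

```python
import torch
import torch.nn as nn

class HeteroscedasticEFLoss(nn.Module):
    """Gaussian negative log-likelihood for EF regression.

    The network predicts the EF mean and the log-variance of the label
    noise, so the uncertainty comes from a single forward pass with no
    sampling required."""

    def forward(self, mean: torch.Tensor, log_var: torch.Tensor,
                target: torch.Tensor) -> torch.Tensor:
        # 0.5 * exp(-s) * (y - mu)^2 + 0.5 * s, with s = log sigma^2
        precision = torch.exp(-log_var)
        return (0.5 * precision * (target - mean) ** 2 + 0.5 * log_var).mean()

# Usage: the video backbone emits two scalars per clip.
loss_fn = HeteroscedasticEFLoss()
mean, log_var = torch.randn(8, 1), torch.randn(8, 1)  # stand-ins for model outputs
target = torch.rand(8, 1) * 100                       # EF labels in percent
loss = loss_fn(mean, log_var, target)
```

At inference time, exp(log_var) can be reported alongside the EF prediction as a per-clip uncertainty estimate, which is what makes the approach compatible with real-time use on mobile devices.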
The utilization of point-of-care ultrasound (POCUS) has been on the rise in recent years, accompanied by a growing need for comprehensive, compact obstetric analysis software systems. Accurate computerized assessment of obstetric ultrasound (US) is a challenging task due to the noisy nature of US images and the presence of complex anatomies. In this work, we propose a multi-branch deep learning architecture to identify multiple anatomies in obstetric sonography through segmentation and landmark detection. The multi-task deep model is trained to segment the uterus and gestational sac regions and to localize landmark points denoting the crown and rump of the fetus. We conduct experiments with varying model sizes, presenting a trade-off between accuracy and efficiency. Our larger models reach average Dice scores of 90% and 91% for segmenting the uterus and gestational sac, respectively, and achieve a 1.64 mm average length error for the fetal crown-rump length (CRL) measurement. Furthermore, we present smaller model configurations suitable for integration into portable POCUS devices with limited computing capacity. The smallest model, with the most efficient run time, has only about 167.3k parameters; compared to the larger models, its Dice score is reduced by 4.1% and 6.7% for uterus and gestational sac segmentation, respectively, and its CRL length error is increased by 0.42 mm. At the same time, the smallest model is 98.8% smaller and runs smoothly on native mobile devices.
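For context, once the crown and rump landmarks are detected, the CRL measurement reduces to a Euclidean distance converted to physical units. The abstract does not describe its exact post-processing; the minimal Python sketch below assumes per-axis pixel spacing is available from the scanner, and the function name is illustrative.

```python
import math

def crl_mm(crown_px, rump_px, pixel_spacing_mm):
    """Crown-rump length in mm from two detected landmark points.

    crown_px, rump_px: (row, col) landmark coordinates in pixels.
    pixel_spacing_mm: (row, col) physical spacing of the US image in mm/px."""
    dy = (crown_px[0] - rump_px[0]) * pixel_spacing_mm[0]
    dx = (crown_px[1] - rump_px[1]) * pixel_spacing_mm[1]
    return math.hypot(dy, dx)

# Example: landmarks 210 px apart at 0.3 mm/px spacing -> CRL = 63.0 mm
print(f"CRL = {crl_mm((120, 80), (120, 290), (0.3, 0.3)):.1f} mm")
```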