Dual-energy X-ray absorptiometry (DXA) is the primary modality for measuring bone mineral density (BMD), offering a reliable clinical indicator for orthopedic diagnosis and treatment. In practical applications, several factors can cause variations in the grayscale distribution of dual-energy X-ray images. The resulting domain shift often renders neural networks trained for ulna and radius segmentation ineffective, significantly degrading bone density measurement. In this paper, we propose an unsupervised Global Contextual Enhanced Attention-guided Domain Adaptation (GCEADA) framework to improve the network's performance on the ulna and radius segmentation task in the target domain. The proposed method extracts global context information to enhance attention expression, obtaining more representative domain-invariant features. We evaluate GCEADA on two cross-domain tasks and conduct ablation experiments to assess the contribution of each component. The results indicate that the proposed attention module effectively improves the feature extraction capability of the discriminator and the discriminability and transferability of the framework.
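The abstract does not specify how the global context is computed, but a common parameter-free formulation pools features over all spatial positions with softmax attention weights and broadcasts the resulting context vector back to every position. The sketch below illustrates that idea on plain Python lists; the function names (`global_context`, `enhance`) and the additive fusion are assumptions for illustration, not the paper's actual learned attention module.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of attention logits."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def global_context(feat, scores):
    """feat: C x N feature map (channels x spatial positions).
    scores: N attention logits, one per position.
    Returns a C-dim context vector: softmax-weighted sum over positions."""
    w = softmax(scores)
    return [sum(feat[c][n] * w[n] for n in range(len(w)))
            for c in range(len(feat))]

def enhance(feat, scores):
    """Broadcast-add the global context back to every position,
    a simple way to inject global information into local features."""
    ctx = global_context(feat, scores)
    return [[feat[c][n] + ctx[c] for n in range(len(feat[0]))]
            for c in range(len(feat))]
```

With uniform logits the context reduces to a global average, so `enhance` simply adds each channel's mean to every position.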
We combine a posture recognition algorithm with Taiji to recognize its movements. Key frames are first acquired from the video, and standard Taiji postures are detected with a deep learning method. The posture recognition algorithm OpenPose extracts the coordinates of human-body keypoints, and Part Affinity Fields (PAFs) are used to compute the connections between keypoints. Through a double judgment based on both angle and distance, the action with the smallest error relative to the standard action is selected. Compared with the previous single judgment, the average recognition accuracy of each movement increases by about 4%, achieving the expected effect.
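The angle-and-distance double judgment described above can be sketched as follows: compute joint angles from keypoint triples, combine the angle error with the mean keypoint distance, and pick the standard action with the smallest combined error. This is an illustrative reconstruction under assumed names (`joint_angle`, `pose_error`, `match_action`) and assumed equal weighting; the paper's exact error weighting is not given in the abstract.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b, formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def pose_error(pose, standard, angle_triples, w_angle=1.0, w_dist=1.0):
    """Double judgment: mean joint-angle error plus mean keypoint distance."""
    angle_err = sum(abs(joint_angle(*(pose[i] for i in t)) -
                        joint_angle(*(standard[i] for i in t)))
                    for t in angle_triples) / len(angle_triples)
    dist_err = sum(math.hypot(p[0] - s[0], p[1] - s[1])
                   for p, s in zip(pose, standard)) / len(pose)
    return w_angle * angle_err + w_dist * dist_err

def match_action(pose, standard_actions, angle_triples):
    """Select the standard action with the smallest combined error."""
    errors = [pose_error(pose, s, angle_triples) for s in standard_actions]
    return min(range(len(errors)), key=errors.__getitem__)
```

Each pose is a list of (x, y) keypoints as extracted by OpenPose, and `angle_triples` names which keypoint triples (e.g. shoulder-elbow-wrist) define the judged joint angles.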
Brain structure segmentation from 3D magnetic resonance (MR) images is a prerequisite for quantifying brain morphology. Since typical 3D whole-brain deep learning models demand large GPU memory, 3D image patch-based deep learning methods are favored for their GPU memory efficiency. However, existing 3D image patch-based methods are not well equipped to capture the spatial and anatomical contextual information necessary for accurate brain structure segmentation. To overcome this limitation, we develop a spatial and anatomical context-aware network that integrates spatial and anatomical contextual information for accurate brain structure segmentation from MR images. Specifically, a spatial attention block encodes spatial context information of the 3D patches, an anatomical attention block aggregates image information across channels of the 3D patches, and the two attention blocks are adaptively fused by an element-wise convolution operation. Moreover, an online patch sampling strategy is utilized to train the deep neural network with all available patches of the training MR images, facilitating accurate segmentation of brain structures. Ablation and comparison results demonstrate that our method achieves promising segmentation performance, outperforming state-of-the-art alternative methods by 3.30% in terms of Dice score.