This paper describes the application of biologically inspired algorithms and concepts to the design of wideband antenna
arrays. In particular, we address two specific design problems. The first involves the design of a constrained-feed
network for a Rotman-lens beamformer. We implemented two evolutionary optimization (EO) approaches, namely a
simple genetic algorithm (SGA) and a competent genetic algorithm. We conducted simulations based on experimental
data, which effectively demonstrate that the competent GA outperforms the SGA (i.e., finds a better design solution) as
the objective function becomes less specific and more "general." The second design problem involves the
implementation of polyomino-shaped subarrays for sidelobe suppression of large, wideband planar arrays. We use a
modified screen-saver code to generate random polyomino tilings. A separate code assigns array values to each element
of the tiling (i.e., amplitude, phase, time delay, etc.) and computes the corresponding far-field radiation pattern. In order
to conduct a statistical analysis of pattern characteristics vs. tiling geometry, we needed a way to measure the
"similarity" between two arbitrary tilings to ensure that our sampling of the tiling space was somewhat uniformly
distributed. We ultimately borrowed a concept from neural network theory, which we refer to as the "dot-product
metric," to effectively categorize tilings based on their degree of similarity.
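The abstract does not spell out the dot-product metric, but one plausible reading, sketched below, represents each tiling by a binary vector marking the boundaries between polyominoes and takes the normalized (cosine-style) dot product of two such vectors. The grid encoding, the `boundary_map` helper, and the normalization are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def boundary_map(tiling):
    """Binary vector marking cell edges where adjacent cells belong
    to different polyominoes (tiling is a 2-D array of piece IDs)."""
    t = np.asarray(tiling)
    h = (t[:, 1:] != t[:, :-1])          # vertical boundary edges
    v = (t[1:, :] != t[:-1, :])          # horizontal boundary edges
    return np.concatenate([h.ravel(), v.ravel()]).astype(float)

def dot_product_metric(t1, t2):
    """Cosine-style similarity between two same-sized tilings:
    1.0 for identical boundary structure, near 0 for dissimilar."""
    a, b = boundary_map(t1), boundary_map(t2)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

grid_a = [[0, 0, 1], [0, 1, 1], [2, 2, 2]]
grid_b = [[0, 0, 1], [0, 1, 1], [2, 2, 2]]
print(dot_product_metric(grid_a, grid_b))  # identical tilings -> 1.0
```

Thresholding this similarity would let one bin random tilings into equivalence classes and check that samples are spread across the tiling space rather than clustered.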
We present a method to incorporate nonlinear shape prior constraints into the segmentation of different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and to enable building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector through the shape space, in a gradient-descent fashion, toward the probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished by modifying the search space of the DP: the modified search space allows only a limited amount of local deformation from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. We applied our algorithm to two different segmentation tasks for radiographic images: lung field and clavicle segmentation.
Both applications show that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints, and that it is robust to noise and to the local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe the proposed algorithm represents a significant step toward object segmentation under nonlinear shape constraints.
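The DP contour search described above can be sketched as a standard dynamic program over per-point search lines. The sketch below is a simplification under stated assumptions: an open-chain smoothness penalty (a closed contour would need an extra pass over the first point), and a precomputed cost matrix standing in for the paper's interior/exterior difference term:

```python
import numpy as np

def dp_contour(cost, smooth=1.0):
    """Dynamic-programming contour search (open-chain approximation).

    cost[i, j]: image-based score of placing contour point i at
    position j along its search line (higher = better boundary).
    A smoothness penalty discourages jumps between neighbours.
    Returns the optimal position index for each contour point.
    """
    n, m = cost.shape
    pos = np.arange(m)
    score = cost[0].copy()                 # best score ending at (0, j)
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        # transition cost: previous position k -> current position j
        trans = score[:, None] - smooth * np.abs(pos[:, None] - pos[None, :])
        back[i] = trans.argmax(axis=0)
        score = trans.max(axis=0) + cost[i]
    path = np.empty(n, dtype=int)
    path[-1] = int(score.argmax())
    for i in range(n - 1, 0, -1):          # backtrack the optimal path
        path[i - 1] = back[i, path[i]]
    return path

# Toy cost: a strong "edge" at column 2 for every contour point.
c = np.zeros((5, 6)); c[:, 2] = 10.0
print(dp_contour(c).tolist())  # -> [2, 2, 2, 2, 2]
```

Local regularization as described in the abstract would correspond to restricting each point's admissible positions `j` to a band around the previous iteration's shape before running this search.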
A method for summarizing the information of a video clip into a single image is proposed. Two kinds of information can be distinguished in video clips: one related to the background content, the other to foreground object motion. To condense foreground object motion, a new technique based on edge overlap is described. The edges of moving objects in a set of consecutive frames are detected and then suitably overlaid on a composite background to show the movement of objects along the time axis. Background variation caused by camera motion is captured, and different scenes are connected, using a video mosaic technique. By combining the video mosaic and edge overlap techniques, a VM&EO frame is generated that sums up both the background content and the object motion information. Such a frame can therefore represent the video clip in a compact and meaningful way, sparing users from viewing the whole clip to grasp its motion content in video browsing and retrieval systems, as well as in other applications.
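The edge-overlap idea can be sketched in a few lines. The sketch below substitutes a crude gradient-magnitude detector for the paper's edge detector, assumes the composite background is already given (the mosaic step is omitted), and encodes the time axis by drawing later frames' edges brighter; all of these are illustrative simplifications:

```python
import numpy as np

def edges(frame, thresh=0.2):
    """Crude gradient-magnitude edge map (stand-in for the paper's
    edge detector); frame is a 2-D grayscale array in [0, 1]."""
    gx = np.abs(np.diff(frame, axis=1, prepend=frame[:, :1]))
    gy = np.abs(np.diff(frame, axis=0, prepend=frame[:1, :]))
    return (gx + gy) > thresh

def edge_overlap_summary(background, frames):
    """Overlay moving-object edges from consecutive frames onto the
    composite background, with later frames drawn brighter so the
    direction of motion along the time axis stays readable."""
    summary = background.astype(float).copy()
    n = len(frames)
    for t, f in enumerate(frames):
        moving = edges(f) & ~edges(background)     # drop static edges
        summary[moving] = 0.3 + 0.7 * (t + 1) / n  # time-coded intensity
    return summary
```

A real implementation would first compensate camera motion so that edges from different frames land in mosaic coordinates rather than frame coordinates.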
How to quickly and effectively convey video information to the user is a major task for a video search engine's user interface. In this paper, we propose using a Moving Edge Overlaid Frame (MEOF) image to summarize both the local object motion and the global camera motion of a video clip in a single image. MEOF supplements the motion information that is generally dropped by key-frame representations, and it enables faster perception for the user than viewing the actual video. The key technology of our MEOF generation algorithm is global motion estimation (GME). To extract a precise global motion model from general video, our GME module operates in two stages: match-based initial GME and gradient-based GME refinement. The GME module also maintains a sprite image that is aligned with each new input frame after the global motion compensation transform. The difference between the aligned sprite and the new frame is used to extract masks that help pick out the moving objects' edges. The sprite is updated with each input frame, and moving edges are extracted at a constant interval. After all frames are processed, the extracted moving edges are overlaid on the sprite according to their global motion displacement relative to the sprite and their temporal distance from the last frame, creating our MEOF image. Experiments show that the MEOF representation of a video clip helps the user acquire the motion knowledge much faster, while remaining compact enough to serve the needs of online applications.
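The match-based GME stage and the sprite-difference masking step can be illustrated with a deliberately reduced model. The sketch below restricts global motion to integer translation (the paper's model is more general and is further refined by the gradient-based stage) and uses wrap-around shifts via `np.roll` purely for brevity:

```python
import numpy as np

def match_based_gme(sprite, frame, max_shift=4):
    """Stage 1: exhaustive match-based global motion estimate,
    restricted here to integer translation for illustration.
    Returns the (dy, dx) that best aligns sprite to frame."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(sprite, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - frame) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def moving_object_mask(sprite, frame, motion, thresh=0.1):
    """Align the sprite with the new frame using the estimated global
    motion, then threshold the residual to mask moving objects
    (whose edges are later overlaid to build the MEOF image)."""
    dy, dx = motion
    aligned = np.roll(np.roll(sprite, dy, axis=0), dx, axis=1)
    return np.abs(aligned - frame) > thresh
```

In the full pipeline the mask would gate an edge detector, and the sprite would be blended with each compensated frame rather than kept fixed.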
In this paper we present a region-based approach for short-term motion analysis and retrieval of video sequences. Our feature extraction scheme converts the motion information of a video frame pair into a combination of symbols. First, the system analyzes the global and local motion to obtain a dense optical flow field for every frame pair. The local optical flow field is segmented using an affine-model-based region-growing method. The affine model parameters of each segmented region, together with the region size, form a 7-dimensional feature space, which is partitioned by a vector quantizer. Each region is then mapped to a codebook symbol of the quantizer. With a group of symbols representing each frame pair, we borrow the vector space model and TF*IDF scoring from text document retrieval to index and retrieve their motion information. Preliminary experimental results are shown in the paper. Our approach is able to retrieve complex combinations of different motions in the video, and it can easily be scaled up to form a shot-level descriptor as well as integrated with other video features.
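Once each frame pair is reduced to a bag of codebook symbols, TF*IDF retrieval works exactly as in text search. The sketch below uses a plain log-IDF weighting and made-up symbol names ("pan", "zoom", "tilt") as stand-ins for quantizer codewords; the paper's exact weighting variant is not specified in the abstract:

```python
import math
from collections import Counter

def tfidf_score(query_symbols, doc_symbols, corpus):
    """TF*IDF relevance of one video's motion-symbol 'document' to a
    query, in the style of text retrieval. corpus is a list of
    symbol lists, one per indexed video (or frame pair)."""
    n = len(corpus)
    doc_tf = Counter(doc_symbols)
    score = 0.0
    for sym, q_count in Counter(query_symbols).items():
        df = sum(1 for d in corpus if sym in d)   # document frequency
        if df == 0:
            continue
        idf = math.log(n / df)                    # rare symbols weigh more
        score += q_count * doc_tf[sym] * idf
    return score

corpus = [["pan", "zoom", "pan"], ["zoom", "zoom"], ["tilt", "pan"]]
# Rank indexed clips against a query of motion symbols.
ranked = sorted(corpus, key=lambda d: tfidf_score(["zoom"], d, corpus),
                reverse=True)
print(ranked[0])  # the clip richest in 'zoom' symbols ranks first
```

Scaling this to a shot-level descriptor, as the abstract suggests, would simply mean pooling the per-frame-pair symbol bags over the whole shot before scoring.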