Two-dimensional periodic structures with nanometer feature sizes have been widely used in many photonic devices. The profile and size of the nanostructure elements can greatly affect optical performance. Various practical subwavelength structures operating in the visible and near-infrared regions have been fabricated using electron-beam writing or laser-interference techniques. In this paper, we present a new technique that uses nanosphere lithography (NSL) and reactive-ion etching to fabricate two-dimensional nanostructures with tunable nanoelement size and profile.
KEYWORDS: Video, Cameras, Motion estimation, Motion models, Video compression, Video processing, Detection and tracking algorithms, 3D modeling, Error analysis, Digital video discs
In this paper, we address the problem of video frame rate up-conversion (FRC) in the compressed domain. FRC is often described as video temporal interpolation. The problem is particularly challenging for video sequences with inconsistent camera and object motion. A novel compressed-domain motion compensation scheme is presented and applied in this paper. The proposed algorithm applies a cumulative spatiotemporal interpolation to MPEG-2 compressed motion vectors over a temporal sliding window of frames. An iterative rejection scheme based on the affine motion model is exploited to detect the global camera motion. Subsequently, foreground object separation is performed by examining the temporal consistency of the output of the iterative rejection. This consistency check helps to conglomerate the resulting foreground macroblocks and weed out unqualified blocks, thus further refining the crude segmentation results. Finally, different strategies for compensating the camera motion and the object motion are applied to interpolate the new frames. Illustrative examples are provided to demonstrate the efficacy of the proposed approach. Experimental results are compared with the popular block-based frame interpolation approach.
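For concreteness, the sketch below shows one way the iterative rejection against an affine global-motion model could look: a six-parameter affine model is fitted by least squares to the macroblock motion vectors, blocks whose residuals exceed a threshold are rejected, and the fit is repeated on the survivors. The function name, the least-squares solver, and the standard-deviation-based threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def estimate_global_motion(positions, motion_vectors, n_iter=5, thresh_scale=1.0):
    """Iteratively fit a 6-parameter affine model to macroblock motion vectors.

    positions      : (N, 2) array of macroblock centre coordinates (x, y).
    motion_vectors : (N, 2) array of decoded MPEG-2 motion vectors (dx, dy).
    Returns the affine parameters and a mask of rejected (foreground-candidate)
    blocks.  Illustrative sketch, not the paper's exact scheme.
    """
    inliers = np.ones(len(positions), dtype=bool)
    params = np.zeros(6)
    for _ in range(n_iter):
        x, y = positions[inliers, 0], positions[inliers, 1]
        # Affine model: dx = a1*x + a2*y + a3,  dy = a4*x + a5*y + a6.
        A = np.zeros((2 * inliers.sum(), 6))
        A[0::2, 0], A[0::2, 1], A[0::2, 2] = x, y, 1.0
        A[1::2, 3], A[1::2, 4], A[1::2, 5] = x, y, 1.0
        b = motion_vectors[inliers].reshape(-1)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)

        # Residual of every block against the current global-motion estimate.
        pred_dx = params[0] * positions[:, 0] + params[1] * positions[:, 1] + params[2]
        pred_dy = params[3] * positions[:, 0] + params[4] * positions[:, 1] + params[5]
        residual = np.hypot(motion_vectors[:, 0] - pred_dx,
                            motion_vectors[:, 1] - pred_dy)
        new_inliers = residual < thresh_scale * residual[inliers].std() + 1e-6
        if np.array_equal(new_inliers, inliers):
            break
        inliers = new_inliers
    return params, ~inliers  # camera-motion parameters, foreground-candidate mask
```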
In this paper, we address the problem of camera and object motion detection in the compressed domain. Camera motion estimation and moving object segmentation have been widely studied in a variety of video-analysis contexts, because they provide essential clues for interpreting the high-level semantics of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. The proposed algorithm uses MPEG-2 compressed motion vectors to perform a spatial and temporal interpolation over several adjacent frames. An iterative rejection scheme based on the affine model is exploited to detect global camera motion. The foreground spatiotemporal objects are then separated from the background by applying a temporal consistency check to the output of the iterative rejection. This consistency check helps conglomerate the resulting foreground blocks and weed out unqualified blocks. Illustrative examples are provided to demonstrate the efficacy of the proposed approach.
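A minimal sketch of the temporal consistency check is given below, assuming the per-frame rejection results are available as boolean macroblock grids. The voting threshold and the morphological clean-up with SciPy are assumptions made for illustration; they stand in for the conglomeration and weeding-out steps described above.

```python
import numpy as np
from scipy import ndimage

def consolidate_foreground(outlier_masks, min_hits=3):
    """Temporal consistency check over a sliding window of frames.

    outlier_masks : list of (H, W) boolean macroblock grids, one per frame,
                    where True marks blocks rejected by the affine fit.
    min_hits      : a block must be flagged in at least this many frames
                    to be kept as foreground (assumed voting rule).
    """
    votes = np.sum(np.stack(outlier_masks, axis=0), axis=0)
    foreground = votes >= min_hits
    # Conglomerate neighbouring foreground blocks and weed out isolated ones.
    foreground = ndimage.binary_closing(foreground)
    foreground = ndimage.binary_opening(foreground)
    return foreground
```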
We report the first results on self-assembled colloidal nanostructures for antireflection optical coatings. Two-dimensional (2D) periodic nanostructures were made from self-assembled 2D colloidal crystals on top of a transparent substrate. An atomic force microscope was used to evaluate the quality of the nanostructures. The feature size of the structures was around 105 nm. This sub-wavelength structure is equivalent to an artificial film on top of the substrate, and the effective refractive index of the film is found to be around 1.3. Such a low-index material is desirable for antireflection coatings to reduce Fresnel reflection. We have observed reduced reflection from glass surfaces as well as enhanced transmission. Our calculated results agree well with the experimental measurements.
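As a back-of-the-envelope check of why an effective index near 1.3 helps, the sketch below compares the normal-incidence Fresnel reflectance of a bare glass surface with that of a single quarter-wave layer of the effective medium. The glass index of 1.52 and the quarter-wave thickness are assumptions for illustration; the standard single-layer formula R = ((n0*ns - nf^2) / (n0*ns + nf^2))^2 is used.

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance at a single n1 -> n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def quarter_wave_reflectance(n0, n_film, n_sub):
    """Normal-incidence reflectance of one quarter-wave layer on a substrate."""
    return ((n0 * n_sub - n_film ** 2) / (n0 * n_sub + n_film ** 2)) ** 2

# Bare glass surface in air (assumed n = 1.52): roughly 4.3 % reflectance.
print(fresnel_reflectance(1.0, 1.52))
# Quarter-wave layer with the reported effective index of ~1.3:
# reflectance drops to a few tenths of a percent.
print(quarter_wave_reflectance(1.0, 1.3, 1.52))
```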
In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, gray-scale images can be considered topographic reliefs, in which the numerical value of each pixel represents the elevation at that point. The watershed method segments an image by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame of the video sequence is first transformed from the feature space into a topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is then treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at each point, so that the highest density values become local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuition behind our method is that frames within a shot are highly agglomerative in the feature space and are more likely to be merged together, whereas the frames between shots, which represent shot changes, are not: they have lower density values and, with carefully extracted markers and a suitable stopping criterion, are less likely to be clustered.
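To make the density-to-height transformation concrete, the sketch below computes each frame's density as a sum of Gaussian influence functions over its feature-space neighbors, inverts it into a height signal, and assigns frames to basins by steepest descent along the time axis; a change of basin marks a candidate shot boundary. The Gaussian influence function, the 1-D temporal flooding, and the function names are simplifying assumptions; the full method performs watershed segmentation with explicit marker extraction and a stopping criterion.

```python
import numpy as np

def frame_density(features, sigma=1.0):
    """Density of each frame: sum of Gaussian influence functions of all
    other frames in the feature space (features is an (N, D) array)."""
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

def shot_boundaries(features, sigma=1.0):
    """Invert the density into a height function and flood 1-D basins;
    a change of basin between consecutive frames is a candidate boundary."""
    height = -frame_density(features, sigma)  # high density -> local minimum
    n = len(height)

    def descend(i):
        # Steepest descent to the nearest local minimum of the height signal.
        while True:
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            if not nbrs:
                return i
            best = min(nbrs, key=lambda j: height[j])
            if height[best] >= height[i]:
                return i
            i = best

    minima, labels = {}, np.empty(n, dtype=int)
    for i in range(n):
        labels[i] = minima.setdefault(descend(i), len(minima))
    return [i for i in range(1, n) if labels[i] != labels[i - 1]]
```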