The wireless industry has seen a surge of interest in upcoming broadband wireless access (BWA) networks
like WiMAX that are based on orthogonal frequency division multiplexing (OFDM). These wireless access
technologies have several key features, such as centralized scheduling, fine-grained allocation of transmission slots, adaptation of the modulation and coding scheme (MCS) to the SNR variations of the wireless channel, a flexible, connection-oriented MAC layer, and QoS awareness and
differentiation for applications. As a result, such architectures provide new opportunities for cross-layer
optimization, particularly for applications that can tolerate some bit errors. In this paper, we describe a
multi-channel protocol for video streaming over such networks. In addition, we propose a
new combined channel coding and proportional share allocation scheme for multicast video distribution
based upon a video's popularity. Our results show that we can more efficiently allocate network
bandwidth while providing high-quality video to the application.
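The abstract does not spell out the allocation rule; as a minimal sketch, assume the share of multicast transmission slots grows linearly with each video's popularity (all names and numbers below are illustrative, not taken from the paper):

```python
# Hypothetical sketch of popularity-weighted proportional-share allocation.
# The paper's scheme also combines channel coding; only the
# proportional-share step is illustrated here.

def proportional_share(total_slots, popularity):
    """Split a frame's transmission slots among multicast video sessions
    in proportion to each video's popularity (e.g., viewer counts)."""
    total = sum(popularity.values())
    return {video: total_slots * count / total
            for video, count in popularity.items()}

# Example: 600 slots per frame shared by three multicast sessions.
print(proportional_share(600, {"news": 50, "sports": 30, "movie": 20}))
# -> {'news': 300.0, 'sports': 180.0, 'movie': 120.0}
```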
This paper describes the design and implementation of a multi-modal, multimedia-capable sensor networking framework called SenseTK. SenseTK allows application writers to easily construct multi-modal, multimedia sensor networks that include both traditional scalar sensors and sensors capable of recording sound and video. The distinguishing features of such systems include the need to push application processing deep within the sensor network, the need to bridge extremely low-power, low-computation devices, and the need to distribute and manage such systems. We present several diverse examples to show the flexibility and unique aspects of SenseTK, and finally we experimentally measure several aspects of its performance.
This paper describes the design and implementation of Cascades, a scalable, flexible, and composable middleware platform for multi-modal sensor networking applications. The middleware is designed to let application writers use pre-packaged routines as well as incorporate their own application-tailored code when necessary. As sensor systems become more diverse in both hardware and sensing modalities, such system-level support will become critical. Furthermore, the systems software must not only be flexible but also efficient and high-performance. Experimentation in this paper compares and contrasts several possible implementations based upon testbed measurements on embedded devices. Our experimentation shows that such a system can indeed be constructed.
This paper presents the architectural trade-offs in supporting fine-grained multi-resolution video over a wide range of resolutions. In the future, video streaming systems will have to support video adaptation over an extremely large range of display requirements (e.g., 90x60 to 1920x1080). While several techniques have been proposed for multi-resolution video adaptation, also known as spatial scalability, they have focused mainly on a limited range of spatial resolutions. In this paper, we examine the ability of current techniques to support wide-range spatial scalability. Based upon experiments with real video, we propose an architecture that can support wide-range adaptation more effectively. Our results indicate that using multiple encodings, each with limited spatial adaptation, provides the best trade-off between coding efficiency and the ability to adapt the stream to various resolutions.
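As a rough illustration of the proposed trade-off (multiple encodings, each adapted over a limited spatial range), a player might select the smallest stored encoding that covers the display and downscale from there; the encoding ladder below is hypothetical:

```python
# Illustrative sketch: serve each client from the smallest stored encoding
# whose resolution is at least the display's, then downscale over a limited
# range. This ladder is hypothetical, not the paper's.

LADDER = [(90, 60), (360, 240), (960, 540), (1920, 1080)]

def pick_encoding(target_w, target_h):
    """Return the smallest encoding that covers the target resolution."""
    for w, h in LADDER:
        if w >= target_w and h >= target_h:
            return (w, h)
    return LADDER[-1]  # largest available; upscale if the display is bigger

print(pick_encoding(640, 360))  # -> (960, 540): decode, then downscale
```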
This paper discusses a technique that converts a variable-bit-rate (VBR) MPEG stream at an input bit rate into a constant-bit-rate (CBR) MPEG stream at a specified target bit rate. This allows researchers to use real compressed VBR data to pseudo-generate a CBR stream without the overhead of re-capturing the video sequence. Thus, using our technique, researchers can use a relatively inexpensive video capture board to digitize video data and then statistically generate a variety of frame patterns and bit rates for CBR streams. Our results show that one can capture the long-term behavior of the video data while generating a CBR stream at a specified bit rate. We present experiments applying our approach to bandwidth smoothing and autocorrelation functions.
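The paper's statistical generation procedure is not reproduced here; a minimal sketch of the underlying idea, assuming a simple window-by-window rescaling of a captured VBR frame-size trace so each window meets the target rate while preserving the relative frame pattern:

```python
# Minimal sketch (not the paper's exact method): rescale captured VBR frame
# sizes window-by-window so each window matches the target bit rate, keeping
# the relative frame pattern (I/P/B size ratios) of the real trace.

def vbr_to_cbr(frame_bits, fps, target_bps, window=12):
    """Rescale a VBR frame-size trace into a pseudo-CBR trace."""
    out = []
    budget = target_bps * window / fps           # bit budget per full window
    for i in range(0, len(frame_bits), window):
        chunk = frame_bits[i:i + window]
        scale = budget * len(chunk) / window / sum(chunk)
        out.extend(b * scale for b in chunk)
    return out

trace = [80e3, 30e3, 20e3, 35e3] * 3             # toy VBR trace, bits/frame
cbr = vbr_to_cbr(trace, fps=30, target_bps=1.5e6)
```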
KEYWORDS: Video, Video compression, Video coding, Image segmentation, Computer programming, Cameras, Optical tracking, Image processing algorithms and systems, Detection and tracking algorithms, Computing systems
This paper introduces a multi-differential video coding algorithm for the efficient transmission of video conference streams. Because the camera is typically stationary during a video conference, we can take advantage of this to aid in the robust transmission of the video. Specifically, we describe two simple techniques that can be used to segment the video into background and foreground information. Using this information, we can then efficiently transmit the video stream while reducing the burstiness seen at the network layer. This technique does not require any changes to the video compression standard (typically H.261 or H.263 for video conferencing) but can reduce the bandwidth required for transmission by 20%. Experimental results are shown with actual video stream traces.
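The two segmentation techniques are not detailed in the abstract; one common minimal approach under a stationary camera, shown purely as an assumed illustration, is per-macroblock frame differencing:

```python
# A sketch of stationary-camera segmentation (not necessarily either of the
# paper's two techniques): mark 16x16 blocks whose pixels changed little
# since the previous frame as background.

import numpy as np

def segment_blocks(prev, curr, block=16, thresh=8.0):
    """Return a boolean map: True where a block is foreground."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    h, w = diff.shape
    fg = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = diff[by*block:(by+1)*block, bx*block:(bx+1)*block]
            fg[by, bx] = tile.mean() > thresh
    return fg

prev = np.zeros((240, 352), dtype=np.uint8)       # toy grayscale frames
curr = prev.copy(); curr[100:140, 160:220] = 200  # a "speaker" moves
print(segment_blocks(prev, curr).sum(), "foreground blocks")
```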
KEYWORDS: Video, Video compression, Quantization, Computer programming, Data modeling, Video coding, Systems modeling, Mathematical modeling, Statistical modeling, Motion models
In this paper, we introduce a novel technique for modeling variable-bit-rate MPEG video streams. This technique is designed to aid multimedia systems researchers interested in using real compressed video data to study the systems they are designing, without the overhead of having to digitize and compress the video stream multiple times. Our approach differs from current approaches in that it uses an inexpensive video capture board that produces non-inter-coded (intra-only) video to digitize the movie data and then statistically generates MPEG data from the captured movie itself. Our model uses a subsample of the digitized data to create a linear regression model of the target video (both quality and frame pattern). Using this linear regression model, our approach statistically generates the target video. We have digitized and compressed 29 hours of constant-quality MPEG video to verify our pseudo-modeling approach and to allow other researchers to verify this model as well.
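As a sketch of the modeling idea, assuming a simple one-variable least-squares fit (the paper's regression over both quality and frame pattern is richer, and frame types would be fitted separately):

```python
# Rough sketch: fit a linear regression from a subsample of captured frame
# sizes to the corresponding MPEG sizes, then statistically generate a full
# synthetic trace. The data below is toy data, not from the paper.

import numpy as np

def fit_model(capture_sizes, mpeg_sizes):
    """Least-squares fit: predicted MPEG size = a * captured size + b."""
    a, b = np.polyfit(capture_sizes, mpeg_sizes, 1)
    return a, b

def generate(capture_sizes, model, noise_std=0.0):
    """Generate synthetic MPEG frame sizes from captured sizes."""
    a, b = model
    rng = np.random.default_rng(0)
    sizes = a * np.asarray(capture_sizes) + b
    return sizes + rng.normal(0, noise_std, len(capture_sizes))

sub_capture = [90e3, 120e3, 75e3, 140e3]   # intra-only capture sizes (bits)
sub_mpeg = [30e3, 42e3, 26e3, 50e3]        # MPEG sizes of the same frames
synthetic = generate([100e3, 110e3, 80e3], fit_model(sub_capture, sub_mpeg))
```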
In this paper, we introduce a priority-based technique for the delivery of compressed prerecorded video streams across best-effort networks. This technique uses a multi-level priority queue in conjunction with a delivery window to help smooth the video frame rate delivered to the end user while allowing delivery to adapt easily to changing network conditions. Compared with current approaches, our priority-based approach has several advantages. First, it acts more globally by ensuring that a minimum frame rate for the window interval has been delivered before sending enhancement layers. Second, this approach is much simpler to implement than other frame smoothing algorithms that have been presented for the delivery of stored video across best-effort networks. Finally, this approach is directly applicable to the shaping of MPEG-based video encodings with frame dependencies.
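A minimal sketch of the delivery idea, under the assumption that base-layer frames occupy the highest priority level so the window's minimum frame rate is served before any enhancement data:

```python
# Sketch: within one delivery window, send frames in priority order until
# the window's byte budget runs out; lower numbers are higher priority.

import heapq

def schedule_window(frames, budget_bytes):
    """frames: (priority, frame_id, size) tuples; returns ids actually sent."""
    heap = list(frames)
    heapq.heapify(heap)
    sent = []
    while heap and budget_bytes > 0:
        prio, fid, size = heapq.heappop(heap)
        if size <= budget_bytes:       # frames that no longer fit are skipped
            sent.append(fid)
            budget_bytes -= size
    return sent

window = [(0, "I0", 8000), (1, "P3", 3000), (2, "B1", 2000), (2, "B2", 2000)]
print(schedule_window(window, budget_bytes=11000))  # -> ['I0', 'P3']
```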
KEYWORDS: Video, Data modeling, Video compression, Systems modeling, Performance modeling, Error analysis, Motion models, Computing systems, Data storage, Time metrology
Bandwidth smoothing algorithms can effectively reduce the network resource requirements for the delivery of compressed video streams. For stored video, a large number of bandwidth smoothing algorithms have been introduced that are optimal under certain constraints but require access to all the frame size data in order to achieve their optimal properties. This requirement, however, can be expensive in both memory and computation, especially for moderately priced set-top boxes. In this paper, we introduce a movie approximation technique for representing the frame sizes of a video, reducing both the complexity of the bandwidth smoothing algorithms and the amount of frame data that must be transmitted prior to the start of playback. Our results show that the proposed technique can accurately approximate the frame data with a small number of piece-wise linear segments while changing the performance measures that the bandwidth smoothing algorithms attempt to optimize by no more than 1%. In addition, we show that implementations of this technique can speed up execution times by 100 to 400 times, reducing bandwidth plan calculation times to tens of milliseconds. Evaluation using a compressed full-length motion-JPEG video is provided.
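One plausible way to build the piece-wise linear approximation, sketched under the assumption of a greedy fit with a fixed error tolerance (the paper's exact fitting procedure may differ):

```python
# Greedy sketch: extend a segment over the cumulative frame-size curve until
# the worst-case deviation at any interior frame exceeds `tol`, then break.

def piecewise_linear(cum_sizes, tol):
    """cum_sizes[i] = total bytes through frame i; returns breakpoint indices."""
    breaks, start = [0], 0
    for end in range(2, len(cum_sizes)):
        slope = (cum_sizes[end] - cum_sizes[start]) / (end - start)
        err = max(abs(cum_sizes[k] - (cum_sizes[start] + slope * (k - start)))
                  for k in range(start + 1, end))
        if err > tol:
            breaks.append(end - 1)
            start = end - 1
    if breaks[-1] != len(cum_sizes) - 1:
        breaks.append(len(cum_sizes) - 1)
    return breaks

# Smoothing algorithms can then run on the few (breakpoint, size) pairs
# instead of every frame, which is where the reported speedups come from.
```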
KEYWORDS: Video, Video compression, Information science, Multimedia, Digital libraries, Network architectures, Data storage, Data communications, Time metrology, Image segmentation
Bandwidth smoothing techniques for the delivery of compressed prerecorded video have been shown to be effective in removing the burstiness required for the continuous playback of stored video. Given a fixed client-side buffer, several bandwidth smoothing algorithms have been introduced that are provably optimal under certain constraints. These algorithms, however, may be too aggressive in the amount of data that they prefetch, making it more difficult to support the VCR functions required for interactive video-on-demand systems. In this paper, we introduce a rate-constrained bandwidth smoothing algorithm for the delivery of stored video that, given a fixed maximum bandwidth rate, minimizes both the smoothing buffer requirements and the buffer residency requirements. By minimizing buffer residency times, the clients and servers can remain more tightly coupled, making VCR functions easier to support. A comparison between the rate-constrained bandwidth smoothing algorithm and other bandwidth smoothing algorithms is presented using a compressed full-length movie.
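A sketch of the rate-constrained formulation as it is commonly posed (assumed here, not quoted from the paper): with a fixed peak rate, transmitting as late as possible avoids underflow while keeping prefetched data, and hence buffer residency, minimal:

```python
# Backward recurrence: A[i] is the fewest bytes that must be delivered by
# frame i when sending at most `rate` bytes per frame slot as late as
# possible. cum[k] = bytes the decoder has consumed through frame k.

def latest_schedule(cum, rate):
    n = len(cum)
    A = [0] * n
    A[-1] = cum[-1]
    for i in range(n - 2, -1, -1):
        # cover playback now, but never defer more than the rate allows
        A[i] = max(cum[i], A[i + 1] - rate)
    return A  # if A[0] > 0, that much data must be prefetched before playback
```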
Software implementations of MPEG decompression provide flexibility at low cost but suffer performance problems, including poor cache behavior. For MPEG video, decompressing the video in the implied order does not take advantage of the coherence generated by dependent macroblocks and therefore undermines the effectiveness of processor caching. In this paper, we investigate the cache performance gains available to decoders that use different traversal orders to decompress MPEG streams. We have found that the total cache miss rate can be reduced considerably at the expense of a small increase in instructions. To show the potential gains available, we implemented the different traversal orders in the standard Berkeley MPEG player. Without optimizing the MPEG decompression code itself, we obtain better cache performance for the traversal orders examined. In one case, faster decompression rates are achieved by making better use of processor caching, even though additional overhead is introduced to implement the alternative traversal order. With better instruction-level support in future architectures, low cache miss rates will be crucial to the overall performance of software MPEG video decompression.
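The specific traversal orders studied are not given in the abstract; the sketch below only illustrates why traversal order matters for locality, contrasting the implied scanline order with a hypothetical tiled order that revisits nearby macroblocks while their reference data is still cached:

```python
# Two ways to visit a frame's macroblock grid. A tiled order keeps
# neighboring (and often dependent) macroblocks close together in time,
# which improves cache reuse at the cost of a little index arithmetic.

def scanline_order(mb_w, mb_h):
    return [(x, y) for y in range(mb_h) for x in range(mb_w)]

def tiled_order(mb_w, mb_h, tile=4):
    order = []
    for ty in range(0, mb_h, tile):
        for tx in range(0, mb_w, tile):
            for y in range(ty, min(ty + tile, mb_h)):
                for x in range(tx, min(tx + tile, mb_w)):
                    order.append((x, y))
    return order

# e.g. a 352x240 frame has a 22x15 macroblock grid
assert sorted(tiled_order(22, 15)) == sorted(scanline_order(22, 15))
```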
The transport of compressed video data generally requires the network to adapt to large fluctuations in bandwidth requirements if the quality of the video is to remain constant. Techniques that smooth video data by averaging, such as those found in video conferencing, allow some smoothing at the expense of delay. With video-on-demand systems on the horizon, smoothing techniques for prerecorded video data are necessary for the efficient use of network resources. Simply extending algorithms that smooth via averaging to video playback cannot remove the burstiness in bandwidth requirements without using large amounts of buffering. In this paper, we introduce the notion of critical bandwidth allocation, which makes the most effective use of buffering while ensuring long durations between bandwidth increase requests. A comparison between critical bandwidth allocation algorithms and other smoothing algorithms is presented.
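A sketch of the critical-bandwidth idea as commonly formulated (an assumption, not text from the paper): the smallest constant rate from frame i onward that never underflows the client buffer is the maximum slope of the remaining cumulative-consumption curve:

```python
# cum[k] = bytes consumed by playback through frame k; `delivered` = bytes
# already sent when frame i is played. The critical bandwidth is the peak
# per-frame rate needed to stay ahead of consumption from i onward.

def critical_bandwidth(cum, i, delivered):
    """Requires i < len(cum) - 1."""
    return max((cum[k] - delivered) / (k - i)
               for k in range(i + 1, len(cum)))

# A full plan runs at this rate until the consumption curve is met, then
# recomputes, deferring each bandwidth increase as long as possible.
```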