The difficulty of parallelizing entropy coding is increasingly limiting the data throughputs achievable in media compression. In this work we analyze the fundamental limitations, using finite-state-machine models to identify the best way of separating tasks that can be processed independently while minimizing compression losses. This analysis confirms previous work showing that effective parallelization is feasible only if the compressed data is organized in a proper way, which is quite different from conventional formats. The proposed new formats exploit the fact that optimal compression is not affected by the arrangement of the coded bits, and go further by exploiting the decreasing cost of data processing and memory. Additional advantages include the ability to use increasingly complex data modeling techniques within this framework, and the freedom to mix different types of coding. We confirm the effectiveness of the parallelization using coding simulations that run on multi-core processors, show how throughput scales with the number of cores, and analyze the additional bit-rate overhead.
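A minimal sketch of the kind of data organization this implies: the symbol stream is split into segments that are entropy-coded independently and stored behind a small chunk index, so a decoder can hand the segments to separate cores. Here zlib stands in for the entropy coder and the length-prefixed index is an illustrative choice, not the format proposed in the paper.

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    CHUNK = 1 << 16  # bytes per independently coded segment

    def encode(data: bytes):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        coded = [zlib.compress(c) for c in chunks]
        index = [len(c) for c in coded]      # lets the decoder split the stream
        return index, b"".join(coded)

    def decode(index, payload: bytes):
        parts, pos = [], 0
        for n in index:
            parts.append(payload[pos:pos + n])
            pos += n
        with ProcessPoolExecutor() as pool:  # segments decode independently
            return b"".join(pool.map(zlib.decompress, parts))

In such a scheme the chunk index and the loss of cross-segment context are the sources of the bit-rate overhead analyzed above; larger segments reduce both, but also reduce the available parallelism.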
KEYWORDS: Computer programming, Berkelium, Data modeling, Electronics, Data processing, Binary data, Signal attenuation, Video coding, Visual information processing, Electronic imaging
Buffer- or counter-based techniques are adequate for dealing with carry propagation in software implementations of arithmetic coding, but they create problems in hardware implementations due to the difficulty of handling worst-case scenarios involving very long propagations. We propose a new technique for constraining carry propagation, similar to “bit-stuffing” but designed for encoders that generate data as bytes instead of individual bits. It is based on the fact that the encoder and decoder can maintain the same state, so both can identify the situations in which it is desirable to limit carry propagation. The new technique adjusts the coding interval in a way that corresponds to coding an unused data symbol, selected to minimize overhead. Our experimental results demonstrate that the loss in compression can be made very small using regular precision for arithmetic operations.
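The sketch below illustrates the underlying idea in a byte-wise range coder: when the byte about to be emitted is not yet settled, the interval is shrunk up to the next byte boundary, i.e., a small unused part of the code space is given up so the emitted byte can never be altered by a later carry. This simplified version blocks every potential carry rather than only the bounded worst-case runs addressed in the paper, so its overhead is larger than what the proposed technique actually incurs.

    TOP = 1 << 24          # renormalize when the range drops below one output byte
    MASK = (1 << 32) - 1   # 32-bit coder state

    def renormalize(low, rng, out: bytearray):
        while rng < TOP:
            if (low ^ (low + rng)) >= TOP:
                # Top byte not yet settled: shrink the interval so it ends at the
                # next byte boundary (discarding an unused sliver of code space);
                # every value still inside the interval now shares low's top byte.
                rng = (-low) & (TOP - 1)
            out.append((low >> 24) & 0xFF)   # byte is final; no carry can reach it
            low = (low << 8) & MASK
            rng <<= 8
        return low, rng

In an encoder this would be called after each interval subdivision; a matching decoder applies the same test, so both sides stay synchronized on the adjusted interval.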
There is significant industry activity on the delivery of 3D video to the home. It is expected that 3D-capable devices will be able to provide consumers with the ability to adjust the perceived depth of stereo content. This paper provides an overview of related techniques and evaluates the effectiveness of several approaches. Practical considerations are also discussed.
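One of the simplest depth-adjustment approaches in this family is a horizontal image translation: shifting the two views relative to each other (and cropping the columns that no longer overlap) changes every pixel's screen disparity by a constant, moving the whole scene toward or away from the screen plane. The sketch below assumes numpy arrays of shape (H, W, 3); the sign convention for shift_px depends on the display and is an assumption here.

    import numpy as np

    def adjust_depth(left: np.ndarray, right: np.ndarray, shift_px: int):
        # Drop |shift_px| columns from opposite edges of the two views, so both
        # outputs keep the same size while every disparity changes by the same
        # constant amount.
        if shift_px == 0:
            return left, right
        s = abs(shift_px)
        w = left.shape[1]
        if shift_px > 0:
            return left[:, s:], right[:, : w - s]
        return left[:, : w - s], right[:, s:]

Shift-based adjustment only translates the depth range; approaches that re-synthesize one or both views from depth can also rescale it, at higher computational cost.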
KEYWORDS: Video, 3D displays, Video coding, 3D image processing, Video compression, Glasses, 3D video compression, Multiplexing, Computer programming, Eye
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
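The kind of local view creation such a format is meant to enable can be illustrated with a very small depth-image-based rendering sketch: one reference view is forward-warped to a nearby virtual viewpoint using its per-pixel depth. The pure horizontal-shift camera model, the disparity = scale / depth conversion, and the crude hole filling below are simplifying assumptions for illustration, not the format or renderer described in the paper.

    import numpy as np

    def synthesize_view(ref: np.ndarray, depth: np.ndarray, baseline_scale: float):
        # ref: (H, W, 3) color view; depth: (H, W) per-pixel depth.
        h, w = depth.shape
        out = np.zeros_like(ref)
        filled = np.zeros((h, w), dtype=bool)
        disp = (baseline_scale / np.maximum(depth, 1e-6)).round().astype(int)
        for y in range(h):
            for x in range(w):            # a real renderer would resolve
                xt = x + disp[y, x]       # occlusions by depth ordering
                if 0 <= xt < w:
                    out[y, xt] = ref[y, x]
                    filled[y, xt] = True
        for y in range(h):                # crude hole filling: propagate the
            for x in range(1, w):         # nearest rendered pixel from the left
                if not filled[y, x]:
                    out[y, x] = out[y, x - 1]
        return out

In practice, disocclusion holes are usually filled from a second reference view, which is one reason multiview-plus-depth formats carry more than one view.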
A 3D fuzzy-filtering scheme is proposed for the reduction of compression artifacts such as blocking and ringing noise. The proposed scheme incorporates information from temporally neighboring frames as well as from spatially neighboring pixels by accounting for the spatio-temporal relationships in the definitions of the spatial-rank order and spread information for the fuzzy filter. The extra information from a 3D set of pixels in the surrounding frames helps enhance the clustering characteristic of the fuzzy filter while preserving the edges in each frame. The proposed scheme also exploits the chroma components from neighboring frames to reconstruct the color of the current frame more faithfully. Experimental results show that both the subjective and the objective quality of the post-processed video are significantly improved.
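The core of such a filter can be sketched as a membership-weighted average over a small window spanning the previous, current, and next frames, where the weights fall off with intensity difference from the center pixel so that edges are preserved. The Gaussian membership function and fixed spread used below are illustrative stand-ins; the paper's spatial-rank-order and spread definitions, and its chroma handling, are not reproduced.

    import numpy as np

    def fuzzy_filter_3d(frames: np.ndarray, t: int, radius: int = 1, sigma: float = 10.0):
        # frames: (T, H, W) decoded luma; returns the filtered frame at time t.
        T, H, W = frames.shape
        out = np.empty((H, W), dtype=np.float64)
        t0, t1 = max(0, t - 1), min(T, t + 2)            # temporal neighbors
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                win = frames[t0:t1, y0:y1, x0:x1].astype(np.float64)
                center = float(frames[t, y, x])
                wgt = np.exp(-((win - center) ** 2) / (2.0 * sigma ** 2))  # membership
                out[y, x] = float((wgt * win).sum() / wgt.sum())
        return out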