Multimedia quality assessment has been an important research topic in recent decades. The original focus on artifact visibility has been extended over the years to aspects such as image aesthetics, interestingness and memorability. More recently, Fedorovskaya proposed the concept of 'image psychology', which focuses on additional quality dimensions related to human content processing. While these additional dimensions are very valuable for understanding preferences, their effect on quality is hard to define, isolate and measure. In this paper we continue our research on face pictures, investigating which image factors influence context perception. We collected the perceived fit of a set of images to various content categories, selected based on current typologies in social networks. Logistic regression was adopted to model category fit based on image features. The model uses both low-level and high-level features, the latter capturing complex properties of image content. To extract these high-level features we relied on crowdsourcing, since computer vision algorithms are not yet sufficiently accurate for the features we needed. Our results underline the importance of some high-level content features, e.g. the dress of the portrayed person and the scene setting, in categorizing images.
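The category-fit model described above can be sketched as a standard logistic regression over a feature vector mixing low-level and crowdsourced high-level features. The feature names, the synthetic data, and the 'professional' category used below are illustrative assumptions, not the study's actual dataset or variables:

```python
# Minimal sketch of a category-fit model: logistic regression predicting
# whether an image fits a content category from low-level features
# (brightness, colorfulness) and crowdsourced binary high-level features
# (formal dress, indoor setting). All names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Feature columns: brightness, colorfulness (low-level);
# formal_dress, indoor_setting (high-level, 0/1 crowdsourced labels).
X = np.column_stack([
    rng.uniform(0, 1, n),    # brightness
    rng.uniform(0, 1, n),    # colorfulness
    rng.integers(0, 2, n),   # formal_dress
    rng.integers(0, 2, n),   # indoor_setting
])

# Synthetic "fits the 'professional' category" labels, driven mainly by the
# high-level features, mirroring the finding that dress and setting matter.
logits = -1.0 + 2.5 * X[:, 2] + 1.5 * X[:, 3] + 0.3 * X[:, 0]
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]  # per-image category-fit probability
```

Fitted coefficients then give a direct readout of each feature's contribution to category fit, which is how the relative importance of high-level versus low-level features can be compared.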
Human content perception has been shown to be important in multimedia quality evaluation. Recently, aesthetic considerations have become a subject of research in this field. First attempts at modeling aesthetics took into account perceived low-level features, especially those drawn from photography theory. However, these proved insufficient to characterize human content perception. More recently, image psychology has come to be considered a higher cognitive factor impacting user perception. In this paper we follow this idea by introducing social-cognitive elements. Our experiments focus on the influence of different versions of portrait pictures in contexts where they are shown alongside completely unrelated information; this happens, for example, in social network interactions, where profile pictures appear next to almost every user action. In particular, we tested this impact on résumés, comparing professional portraits with self-shot pictures. Moreover, since we ran the tests in crowdsourcing, we discuss the use of this methodology for such experiments. Our final aim is to analyse the impact of social biases on multimedia aesthetics evaluation and how these biases influence the messages that accompany pictures, as on public online platforms and social networks.
KEYWORDS: Video, Error analysis, Electroencephalography, Analytical research, Multimedia, Electronic imaging, Video processing, Video coding, Digital video discs, Electrodes
Evaluating (audio)visual quality and Quality of Experience (QoE) from the user's perspective has become a key element
in optimizing users' experiences and their quality. Traditionally, the focus lies on how multi-level quality features are
perceived by a human user. The interest has, however, gradually expanded towards human cognitive, affective and
behavioral processes that may impact, be an element of, or be influenced by QoE, and which have been underinvestigated
so far. In addition, there is a major discrepancy between the new, broadly supported and more holistic
conceptualization of QoE proposed by Le Callet et al. (2012) and traditional, standardized QoE assessment. This paper
explores ways to tackle this discrepancy by means of a multi-instrumental approach. More concretely, it presents results
from a lab study on video quality (N=27), aimed at going beyond the dominant QoE assessment paradigm and at
exploring affective aspects in relation to QoE and in relation to perceived overall quality. Four types of data were
collected: ‘traditional’ QoE self-report measures were complemented with ‘alternative’, emotional state- and user
engagement-related self-report measures to evaluate QoE. In addition, we collected EEG (physiological) data, gaze-tracking data and facial expression (behavioral) data. The video samples used in the test were longer than is common in standardized tests, allowing us to study, e.g., more realistic experiences and deeper user engagement. Our findings support the claim that traditional QoE measures need to be reconsidered and extended with additional, affective state-related measures.