Deep learning models for organ contouring in radiotherapy are poised for clinical use, but there currently exist few tools for automated quality assessment (QA) of the predicted contours. Bayesian models and their associated uncertainty can potentially automate the process of detecting inaccurate predictions. We investigate two Bayesian models for auto-contouring, DropOut and FlipOut, using a quantitative measure, expected calibration error (ECE), and a qualitative measure, region-based accuracy-vs-uncertainty (R-AvU) graphs. It is well understood that a model should have low ECE to be considered trustworthy. However, in a QA context, a model should also have high uncertainty in inaccurate regions and low uncertainty in accurate regions. Such behaviour could direct the visual attention of expert users to potentially inaccurate regions, leading to a speed-up in the QA process. Using R-AvU graphs, we qualitatively compare the behaviour of different models in accurate and inaccurate regions. Experiments are conducted on the MICCAI 2015 Head and Neck Segmentation Challenge dataset and on the DeepMindTCIA CT dataset using three models: DropOut-DICE, DropOut-CE (Cross Entropy) and FlipOut-CE. Quantitative results show that DropOut-DICE has the highest ECE, while DropOut-CE and FlipOut-CE have the lowest. To better understand the difference between DropOut-CE and FlipOut-CE, we use the R-AvU graph, which shows that FlipOut-CE has better uncertainty coverage in inaccurate regions than DropOut-CE. Such a combination of quantitative and qualitative metrics offers a new approach for selecting which model can be deployed as a QA tool in clinical settings.
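As background, ECE is typically computed by binning predictions by their confidence and taking a weighted average of the gap between accuracy and mean confidence per bin. A minimal sketch of that standard binned estimator (the function name and binning scheme are illustrative, not the paper's implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: weighted mean of |accuracy - confidence| per bin.

    confidences: predicted probabilities in [0, 1]
    correct:     1 if the prediction was right, 0 otherwise
    """
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in the bin
            conf = confidences[mask].mean()   # mean confidence in the bin
            ece += mask.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model (confidence matching accuracy in every bin) yields an ECE of zero; larger values indicate over- or under-confidence.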
KEYWORDS: Image contrast enhancement, Visualization, Databases, Electronic imaging, Light sources and illumination, Analytical research, Human vision and color perception, Current controlled current source, Human-machine interfaces, Calibration
We investigate the effect of contrast enhancement on the subjective roughness of visual textures. Our analysis
is based on subjective experiments with seventeen images from the CUReT database in three variants: original,
synthesized, and contrast-enhanced synthesized textures. In Experiment 1, participants were asked
to adjust the contrast of a synthesized image so that it became similar in roughness to the original image. A
new adaptive procedure that extends the staircase paradigm was used for efficient placement of the stimuli. In
Experiment 2, the subjective roughness and the subjective contrast of the original, synthesized, and contrast-enhanced
synthesized images were determined using a pairwise comparison paradigm. The results of the two
experiments show that although contrast enhancement of a synthesized image results in a similar subjective
roughness as the original, the subjective contrast of that image is considerably higher than that of the original
image. Future research should give more insights in the interaction between roughness and contrast.
The use of sound was explored as a means of expressing perceptual attributes of visual textures. Two sets of 17 visual
textures were prepared: one set taken from the CUReT database, and one set synthesized to replicate the former set.
Participants were instructed to match a sound texture with a visual texture displayed onscreen. A modified version of a
Product Sound Sketching Tool was provided, in which an interactive physical interface was coupled to a frequency
modulation synthesizer. Rather than selecting from a pre-defined set of sound samples, continuous exploration of the
auditory space allowed for an increased freedom of expression. While doing so, participants were asked to describe what
auditory and visual qualities they were paying attention to. It was found that participants were able to create sounds that
matched visual textures. Based on differences in diversity of descriptions, synthetic textures were found to have less
salient perceptual attributes than their original counterparts. Finally, three interesting sound synthesis clusters were
found, corresponding to mutually exclusive description vocabularies.
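As context for the synthesizer mentioned above, two-operator frequency modulation generates a tone by modulating the phase of a carrier sine with a second sine; the modulation index controls the brightness and roughness of the result. A minimal sketch (the parameter names and defaults are illustrative, not the tool's actual interface):

```python
import numpy as np

def fm_tone(carrier_hz=440.0, mod_hz=70.0, mod_index=2.0,
            duration_s=1.0, sample_rate=44100):
    """Two-operator FM: a carrier sine whose phase is modulated by
    another sine. mod_index = 0 reduces to a pure carrier tone."""
    n = int(sample_rate * duration_s)
    t = np.linspace(0.0, duration_s, n, endpoint=False)
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))
```

Sweeping `mod_hz` and `mod_index` continuously, as the physical interface allowed, moves the sound between smooth tonal and rough, sideband-rich timbres.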
In a previous study we investigated the roughness of real world textures taken from the CUReT database. We showed
that people could systematically judge the subjective roughness of these textures. However, we did not determine which
objective factors relate to these perceptual judgments of roughness. In the present study we take the first step in this
direction using a subband decomposition of the CUReT textures. This subband decomposition is used to predict the
subjective roughness judgments of the previous study. We also generated synthetic textures with uniformly distributed
white noise of the same variance in each subband, and conducted a perceptual experiment to determine the perceived
roughness of both the original and synthesized texture images. The participants were asked to rank-order the images
based on the degree of perceived roughness. It was found that the synthesis method produces images that are similar in
roughness to the original ones except for a small but systematic deviation.
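The synthesis idea described above, white noise matched to the variance of each subband of the original texture, can be sketched with a simple FFT-based radial band-pass decomposition. This is an illustrative reconstruction under stated assumptions, not the authors' code; their decomposition may differ (e.g. oriented filters):

```python
import numpy as np

def match_subband_variance(texture, n_bands=4, seed=0):
    """Synthesize noise whose radial-frequency subbands have the same
    variance as the corresponding subbands of the input texture."""
    h, w = texture.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy**2 + fx**2)                     # radial frequency
    edges = np.linspace(0.0, radius.max() + 1e-9, n_bands + 1)

    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((h, w))
    F_tex = np.fft.fft2(texture - texture.mean())
    F_noise = np.fft.fft2(noise)

    out = np.zeros((h, w))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi)           # one annular band
        band_tex = np.real(np.fft.ifft2(F_tex * mask))
        band_noise = np.real(np.fft.ifft2(F_noise * mask))
        s = band_noise.std()
        if s > 0:                                       # rescale the noise band
            out += band_noise * (band_tex.std() / s)    # to the texture's std
    return out
```

Because the bands have disjoint frequency support, matching each band's standard deviation also matches the overall variance of the synthesized image to that of the original.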
Consistent product experience requires congruity between product properties such as visual appearance and sound.
Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds
should preferably not be considered in isolation but as an integral part of the main product concept. Because visual
aspects of a product are considered to dominate the communication of the desired product concept, sound is usually
expected to fit the visual character of a product. We argue that this can be accomplished successfully only on the basis of a
thorough understanding of the impact of audio-visual interactions on product sounds. Two experimental studies are
reviewed to show audio-visual interactions on both perceptual and cognitive levels influencing the way people encode,
recall, and attribute meaning to product sounds. Implications for sound design are discussed, challenging the natural
tendency of product designers to analyze the "sound problem" in isolation from other product properties.
In three experiments the perceived roughness of visual and of auditory materials was investigated. In Experiment 1, the
roughness of frequency-modulated tones was determined using a paired-comparison paradigm. The results obtained with
this paradigm were similar to those reported in the literature. In Experiment 2, the perceived visual roughness of
textures drawn from the CUReT database was determined. It was found that participants could systematically judge the
roughness of the textures. In Experiment 3 the perceived pleasantness for the textures used in Experiment 2 was
determined. It was found that two groups of participants could be distinguished. One group found rough textures
unpleasant and smooth textures pleasant; the other found rough textures pleasant and smooth textures unpleasant,
although for the latter group the relation between relative roughness and perceived pleasantness was weaker.
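Paired-comparison judgments like these are commonly converted into a one-dimensional roughness scale with a choice model such as Bradley-Terry; this specific model is an assumption here, since the abstracts do not state which scaling was used (Thurstone's law of comparative judgment is an equally common choice). A minimal sketch:

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Estimate latent scale values from a paired-comparison win matrix.

    wins[i, j] = number of times stimulus i was judged rougher than j.
    Uses Zermelo's iterative minorization-maximization updates.
    """
    n_ij = wins + wins.T                  # comparisons per pair
    w = wins.sum(axis=1)                  # total wins per stimulus
    p = np.ones(len(w))                   # initial strengths
    for _ in range(n_iter):
        denom = (n_ij / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()                      # fix the arbitrary scale
    return np.log(p)                      # log-strengths; differences matter
```

Differences between the returned log-strengths give an interval-like roughness scale that can then be related to stimulus parameters.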
In two experiments the effect of sound on visual information was investigated. In Experiment 1 the effect of the visual
appearance of product types with an expensive design and with an inexpensive design on the experience of the sound
recordings of these products was investigated. Recordings and pictures were systematically interchanged. Thus, for
example, the visual image of an expensive design was combined with a recording of the sound of an inexpensive and of
an expensive design. It was found that product appearance did not affect judgments of luxury, pleasantness, quality,
and ease-of-use, but that the experience of the sound dominated over the visual experience. In Experiment 2, pictures
from the international affective pictures set were combined with frequency-modulated tones that varied in the amount of
sensory pleasantness by manipulating the amount of roughness. The combinations of sounds and pictures were rated on
the valence and arousal dimensions of the circumplex model of core affect. It was found that the sounds only negatively
affected the experience of the pictures on the valence dimension. The arousal level was not affected by the sounds. Both
experiments show that sound can affect the perception and experience of pictures.