Published in

ACM Transactions on Applied Perception, 8(1), pp. 1-19, 2010

DOI: 10.1145/1857893.1857899

Perceptually guided high-fidelity rendering exploiting movement bias in visual attention

Journal article published in 2010 by Jasminka Hasic, Alan Chalmers, Elena Sikudova
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of “realism in real time” in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritized based on the saliency of the objects in the scene or the task the user is performing. Such “glimpses” of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may simply go unnoticed. Visual science research has shown that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics work has shown that both fixed-viewpoint and dynamic scenes can be selectively rendered, without any perceptual loss of quality and in significantly reduced time, by exploiting knowledge of any high-saliency movement that may be present. Such high-saliency movement can be generated in a scene when an otherwise static object starts moving. In this article, we investigate, through psychophysical experiments including eye tracking, how rendering quality is perceived in dynamic complex scenes when a moving object is introduced. Two types of object movement are investigated: (i) rotation in place and (ii) rotation combined with translation. These were chosen as the simplest movement types; future studies may include movement with varied acceleration. The object's geometry and location in the scene are not salient. We then use this information to guide our high-fidelity selective renderer to produce perceptually high-quality images at significantly reduced computation times. We also show how these results have important implications for virtual environment and computer game applications.
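
The abstract gives no implementation details, but the core idea of a saliency-guided selective renderer can be sketched in a few lines: pixels covered by the salient moving object receive the full per-pixel sample budget, while the static remainder of the frame is rendered with far fewer samples. The Python sketch below is illustrative only; the saliency map, the min_spp/max_spp budgets, and the allocate_samples function are assumptions for this example, not the authors' implementation.

import numpy as np

# Map per-pixel saliency in [0, 1] to a samples-per-pixel (spp)
# budget: highly salient pixels get the full budget, the rest
# fall back to the minimum. Budgets here are illustrative.
def allocate_samples(saliency, min_spp=1, max_spp=64):
    s = np.clip(saliency, 0.0, 1.0)
    return np.rint(min_spp + s * (max_spp - min_spp)).astype(int)

# Toy 4x6 saliency map: the region covered by a moving object is
# highly salient; the static background is not.
saliency = np.zeros((4, 6))
saliency[1:3, 2:4] = 0.9

spp = allocate_samples(saliency)
print(spp)                                 # object ~58 spp, background 1 spp
print("selective:", spp.sum(), "samples; uniform:", spp.size * 64)

In practice, the saliency map would combine a computational saliency model with the known screen-space footprint of the moving object; the samples saved on the inattended background are what yield the significantly reduced rendering times the paper reports.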