Allgemeine Psychologie II

Project 1: Can semantic object information be extracted from parafoveal vision?

Vision is best in the foveal region (the central 2 degrees of the visual field), but some information can also be extracted from the parafoveal region (extending from the edge of the fovea to about 5 degrees on either side of fixation). This project will investigate whether semantic information about objects can be extracted from parafoveal vision. Participants will view pairs of photographs of objects taken from the POPORO database. The second object will be either semantically related or unrelated to the first object (e.g., strawberry—mango vs. strawberry—football). To vary the visibility of the second object while the first one is being processed, two spatial distances between the objects will be tested (close vs. far). Will viewing times on the first object, measured with eye tracking, be modulated by these factors?


  • Malpass, D., & Meyer, A. S. (2010). The time course of name retrieval during multiple-object naming: Evidence from extrafoveal-on-foveal effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(2), 523-537. (But note that these authors tested object naming rather than object recognition.)
  • Kovalenko, L. Y., Chaumon, M., & Busch, N. A. (2012). A pool of pairs of related objects (POPORO) for investigating visual semantic integration: Behavioral and electrophysiological validation. Brain Topography, 25(3), 272-284.

Project 2: Low-level scene features: human judgements vs. algorithmic predictions

In natural scenes, visual information can be described at a variety of levels, ranging from basic low-level image features to higher-level semantic understanding. Low-level image properties comprise individual features (e.g., luminance, edge density) or feature constellations (e.g., clutter, visual salience). Typically, such properties are determined algorithmically, but it remains an open question how closely these algorithmic estimates match human perception. The goal of this project is to collect empirical data on this question by asking human participants to rate images of naturalistic scenes on these dimensions (one feature and/or feature constellation per project). This will allow us to determine how well human judgements and algorithmic predictions coincide.
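To illustrate what "determined algorithmically" means for the individual features mentioned above, the following is a minimal sketch of how mean luminance and edge density could be computed for a grayscale image. The function names and the gradient-threshold definition of edge density are illustrative assumptions for this sketch, not the specific algorithms used in the cited papers.

```python
import numpy as np

def mean_luminance(img):
    """Mean pixel intensity of a grayscale image scaled to [0, 1]."""
    return float(img.mean())

def edge_density(img, threshold=0.1):
    """Fraction of pixels whose gradient magnitude exceeds a threshold.

    Note: the threshold value is an illustrative assumption; published
    measures often use dedicated edge detectors (e.g., Canny) instead.
    """
    gy, gx = np.gradient(img.astype(float))  # central differences
    magnitude = np.hypot(gx, gy)
    return float((magnitude > threshold).mean())

# Toy example: a synthetic 100x100 "scene" with a bright square,
# which creates edges along the square's border.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.2, size=(100, 100))
img[30:70, 30:70] += 0.6

print(mean_luminance(img))
print(edge_density(img))
```

Human raters in the project would judge the same images on these dimensions (e.g., "how cluttered is this scene?"), and their ratings could then be correlated with such algorithmic scores.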


  • Elazary, L., & Itti, L. (2008). Interesting objects are visually salient. Journal of Vision, 8(3), 1-15.
  • Nuthmann, A., & Einhäuser, W. (2015). A new approach to modeling the influence of image features on fixation selection in scenes. Annals of the New York Academy of Sciences, 1339(1), 82-96.