Candace Peacock
Vision Scientist, Snap Inc.
Verified email at ucdavis.edu - Homepage
Title · Cited by · Year
Towards gaze-based prediction of the intent to interact in virtual reality
B David-John, CE Peacock, T Zhang, TS Murdison, H Benko, TR Jonker
ACM Symposium on Eye Tracking Research & Applications, 7, 2021
68 · 2021
Meaning guides attention during scene viewing, even when it is irrelevant
CE Peacock, TR Hayes, JM Henderson
Attention, Perception, & Psychophysics, 2019
68 · 2019
Meaning and Attentional Guidance in Scenes: A Review of the Meaning Map Approach
JM Henderson, TR Hayes, CE Peacock, G Rehrig
Vision 3 (9), 1-10, 2019
54 · 2019
The role of meaning in attentional guidance during free viewing of real-world scenes
CE Peacock, TR Hayes, JM Henderson
Acta Psychologica 198, 102889, 2019
43 · 2019
Where the action could be: Speakers look more at meaningful scene regions than graspable objects when describing potential actions
G Rehrig, C Peacock, T Hayes, F Ferreira, JM Henderson
Journal of Experimental Psychology: Learning, Memory, and Cognition, 2019
26* · 2019
Visual and verbal working memory loads interfere with scene-viewing
DA Cronin, CE Peacock, JM Henderson
Attention, Perception, & Psychophysics 82 (6), 2814-2820, 2020
20 · 2020
Center bias does not account for the advantage of meaning over salience in attentional guidance during scene viewing
CE Peacock, TR Hayes, JM Henderson
Frontiers in Psychology 11, 557752, 2020
19 · 2020
Meaning maps capture the density of local semantic features in scenes: A reply to Pedziwiatr, Kümmerer, Wallis, Bethge & Teufel (2021)
JM Henderson, TR Hayes, CE Peacock, G Rehrig
Cognition 214, 104742, 2021
17 · 2021
Meaning and expected surfaces combine to guide attention during visual search in scenes
C Peacock, DA Cronin, TR Hayes, JM Henderson
Journal of Vision 21 (1), 2021
10 · 2021
Verbal cues flexibly transform spatial representations in human memory
CE Peacock, AD Ekstrom
Memory 27 (4), 465-479, 2019
9 · 2019
Electrophysiological correlates of encoding processes in a full-report visual working memory paradigm
KW Killebrew, G Gurariy, CE Peacock, ME Berryhill, GP Caplovitz
Cognitive, Affective, & Behavioral Neuroscience 18, 353-365, 2018
8 · 2018
Gaze as an indicator of input recognition errors
CE Peacock, B Lafreniere, T Zhang, S Santosa, H Benko, TR Jonker
Proceedings of the ACM on Human-Computer Interaction 6 (ETRA), 1-18, 2022
7 · 2022
Gaze Signatures Decode the Onset of Working Memory Encoding
CE Peacock, B David-John, T Zhang, TS Murdison, MJ Boring, H Benko, ...
Eye Movements as an Interface to Cognitive State '21 at CHI '21, 4, 2021
7 · 2021
Eye to eye: gaze patterns predict remote collaborative problem solving behaviors in triads
A Abitino, SL Pugh, CE Peacock, SK D’Mello
International Conference on Artificial Intelligence in Education, 378-389, 2022
5 · 2022
Look at what I can do: Object affordances guide visual attention while speakers describe potential actions
G Rehrig, M Barker, CE Peacock, TR Hayes, JM Henderson, F Ferreira
Attention, Perception, & Psychophysics 84 (5), 1583-1610, 2022
4 · 2022
Gaze dynamics are sensitive to target orienting for working memory encoding in virtual reality
CE Peacock, T Zhang, B David-John, TS Murdison, MJ Boring, H Benko, ...
Journal of Vision 22 (1), 2-2, 2022
2 · 2022
Searching for meaning: Local scene semantics guide attention during natural visual search in scenes
CE Peacock, P Singh, TR Hayes, G Rehrig, JM Henderson
Quarterly Journal of Experimental Psychology 76 (3), 632-648, 2023
1 · 2023
Encoding-stage adaptation effects: long-term memory
CE Peacock, F Gözenman
Perception 47 (2), 216-224, 2018
1 · 2018
Objects are selected for attention based upon meaning during passive scene viewing
CE Peacock, EH Hall, JM Henderson
Psychonomic Bulletin & Review 30 (5), 1874-1886, 2023
2023
Getting the Wiggles Out: Movement Between Tasks Predicts Future Mind Wandering During Learning Activities
R Southwell, CE Peacock, SK D’Mello
International Conference on Artificial Intelligence in Education, 489-501, 2023
2023