Talk Abstracts

Memory in Clinical Contexts

Thursday 9:00 – 10:20

Prospective Memory and Posttraumatic Stress Disorder

Taylor Swain | Melanie Takarangi

Flinders University

Posttraumatic Stress Disorder (PTSD) is widely conceptualised as a problem of memory, yet most prior research has focused on retrospective errors rather than memory for future intentions, or prospective memory. This trend is surprising, given that people with impaired prospective memory encounter problems in daily functioning similar to those that PTSD sufferers experience. The extant research using veteran samples has generally shown that people with PTSD have poorer prospective memory than those without PTSD. However, we don't know the nature and extent of prospective memory deficits among adults with varying PTSD symptomology in the general population. We asked Mechanical Turk workers to self-report PTSD symptoms in relation to a stressful or traumatic event, prospective memory failures, and a number of belief/appraisal measures. We found a moderate correlation (r = .42–.49) between prospective memory failures and PTSD symptom severity. We also found mediation effects suggesting that PTSD symptoms cause prospective memory deficits via their influence on cognitive confidence, negative cognitions about the self, beliefs about memory, thought suppression tendency, and willingness to plan for the future. Our data suggest that people with PTSD may have more difficulty in performing everyday prospective memory tasks such as remembering to attend appointments or return phone calls.

PTSD-like symptoms are similarly correlated with characteristics of remembered and imagined stressful events

Mevagh Sanson | Holly Wilson | Maryanne Garry

The University of Waikato

After a stressful event, people may develop symptoms of Posttraumatic Stress Disorder (PTSD). We know that characteristics of people's memory for a stressful event are related to the PTSD symptoms they develop. We also know that remembering past events and imagining future ones are closely related abilities. Could people develop these same symptoms before a stressful event has occurred, one they have merely imagined? And if so, what is the relationship between characteristics of an imagined stressful event and the PreTSD symptoms people experience? To investigate, we asked subjects in two experiments to describe a deeply stressful event, either one they had imagined or one they remembered, to report the severity of PTSD-like symptoms they had experienced in relation to that event, and to rate the event's characteristics. Subjects reported PreTSD symptoms to a similar degree as PTSD symptoms. Moreover, many characteristics were correlated similarly with PreTSD symptoms and PTSD symptoms. These experiments suggest that people experience PreTSD symptoms about stressful future events, and that the more intense the characteristics of these imagined events, the more severe people's symptoms, just as is the case for past events. These findings fit with the idea that how people remember a stressful event, not just that it happened, is central to the occurrence of PTSD symptoms.

Everyday Hassles, Locus of Control, General Self-Efficacy and Prospective Memory Performance among Adolescents

Azizuddin Khan

Indian Institute of Technology Bombay

Everyday stress is a potential modulator of working and long-term memory. However, little is known about the role of stress in prospective memory performance. The present study explored the effect of everyday stress and locus of control on prospective memory among adolescents. The research also investigated the relationship between general self-efficacy and prospective memory. Two hundred and forty-four adolescents (134 male, 110 female; mean age = 16.54, SD = 0.76) participated in the study. Everyday hassles were assessed with the Hassles Assessment Scale; the Locus of Control Scale, General Self-Efficacy Scale and Prospective and Retrospective Memory Questionnaire were also administered to the adolescent participants. Analysis of variance and correlational analysis were used to analyse the data. All the main effects were significant, and some interactions were also significant. Results suggest that everyday hassles affect prospective memory performance. Significant differential effects were observed among low, medium and high everyday hassles for memory, cue and term, as well as various interactions between these variables. There was a significant difference between low and high locus of control. Further, there was a significant negative correlation between retrospective memory and general self-efficacy. The findings of the study are discussed in the light of the existing literature.

Spontaneous memory encoding, but not perceptual prioritization, of emotional distractors predicts intrusive memories from a trauma analogue

Steven B. Most | Wing Yuen | Sandersan Onie | Angela Nickerson

UNSW Sydney

Evidence suggests links between strong encoding of trauma-related perceptual information and "re-experiencing" symptoms characteristic of PTSD. For example, perceptual priming (that is, facilitated identification of previously seen stimuli) is particularly strong for trauma-related material. But perceptual priming conflates perceptual prioritization and memory encoding. Which drives the link with re-experiencing symptoms? Fifty-seven participants completed an emotion-induced blindness (EIB) task to index perceptual prioritization: they searched for single targets embedded in rapid streams of images and were impaired when targets were preceded by an emotional distractor. They then received a surprise memory test for the distractors to index spontaneous encoding of them into memory. Finally, in a lab analogue of trauma exposure, they watched graphic footage of the aftermath of a car accident and were probed for three days to assess the occurrence of intrusive memories of the footage (i.e., memories that spontaneously entered awareness). Only spontaneous memory for the emotional distractors from the EIB task, not EIB itself, predicted intrusive memories of the footage, suggesting that re-experiencing of trauma-related material may stem from a general inability to filter non-target emotional material from memory.

Reading 1

Thursday 9:00 – 10:20

Do nonword reading tests for children measure what we want them to? An analysis of Year 2 error responses

Anne Castles | Vince Polito | Stephen Pritchard | Thushara Anandakumar | Max Coltheart

Macquarie University

A key foundational skill in learning to read is the acquisition of phonological decoding ability. Children's progress in acquiring this skill is typically measured using tests of nonword reading (e.g., gop, flarm), as these are thought to index knowledge of grapheme-phoneme correspondences independently of word-level knowledge. However, some critics have argued that nonword reading measures disadvantage good readers, as these children are influenced by their strong lexical knowledge and so err by making word responses (e.g., reading flarm as farm). We tested this claim by examining the errors made by 64 Year 2 children when reading aloud a set of 20 simple nonwords. We found a large amount of variability in error responses. However, stronger word readers were overall less likely to make a word error response than weaker word readers, with their most prevalent error type being another nonword that was highly similar to the target. We conclude that nonword reading measures are a valid index of phonological decoding skill in developing readers, and do not disadvantage children who are already reading words well.

Vocabulary development from reading under error-free and trial-and-error treatments

Irina Elgort | Natalia Beliaeva | Frank Boers

Victoria University of Wellington | Western University

Reading is a source of knowledge of subject-specific vocabulary. Students commonly make an effort to derive meanings of unfamiliar words from context, engaging in a trial-and-error procedure in which readers may initially infer word meanings incorrectly. Should these incorrect inferences be avoided, for example, by providing definitions of unfamiliar words prior to reading? Our research investigates whether errorless or trial-and-error treatments lead to better word knowledge in contextual learning. Participants (n = 55) were presented with 90 novel words and their definitions either prior to reading (errorless learning) or after contextual encounters (trial-and-error learning). In the control condition, no definitions were presented. In all treatments, the readers were also instructed to infer the meanings of the novel words from context. To evaluate word learning after the treatment, we administered a meaning generation (recall) task (as a measure of declarative knowledge) and a self-paced reading task (as a measure of nondeclarative knowledge). We found that trial-and-error learning followed by definitions resulted in superior declarative and nondeclarative knowledge compared to errorless learning. Inference errors affected the development of declarative but not nondeclarative knowledge. We interpret these findings in terms of the declarative and nondeclarative memory processes underpinning contextual word learning.

Contextual plausibility effects on word skipping during reading

Aaron Veldre | Roslyn Wong | Sally Andrews

University of Sydney

Recent eye-movement evidence suggests readers are more likely to skip high-frequency words than low-frequency words regardless of the contextual fit of the word in the sentence. This is argued to be consistent with the assumptions of models in which the decision to skip a word is based on the completion of a preliminary stage of lexical processing prior to any assessment of sentential fit. The present large-scale study was designed to reconcile these findings with recent demonstrations of the plausibility preview effect: skipping and first-pass fixation duration benefits for words that are parafoveally previewed by contextually plausible sentence continuations. Participants' eye movements were recorded as they read sentences containing a short (3-4 letters) or long (6 letters) critical target word. The boundary paradigm was used to factorially manipulate the parafoveal preview, which was either higher or lower frequency than the target, and plausible or implausible in the sentence. The results revealed strong, independent effects of all three factors on skipping and early measures of target reading time. In contrast, the effects of preview frequency and plausibility interacted on later measures of target fixation duration. The data are inconsistent with the assumption that higher-level contextual information only affects post-lexical integration processes.

Towards a Model of Reading

Erik D. Reichle

Macquarie University

Computational models have been developed to explain the cognitive processes involved in reading (Reichle, 2015), including word identification (e.g., Perry et al., 2007), sentence processing (e.g., Lewis & Vasishth, 2005), the representation of discourse (e.g., Frank et al., 2003), and how the systems that mediate these processes interact with the systems that guide the eyes and attention during reading (e.g., Reichle et al., 2012). In this talk, I will describe my efforts to develop a more comprehensive description of reading by embedding models of word identification, sentence processing, and discourse representation within the framework of the E-Z Reader model of eye-movement control during reading (Reichle, 2019). The goal of this work is to develop a framework for simulating both on- and off-line behaviours during reading (e.g., the patterns of eye movements made during reading, the content of the text later remembered, etc.), and by doing so, to gain a better understanding of reading in its entirety.

Emotion Regulation

Thursday 11:00 – 12:40

Social Sharing of Emotion on Twitter: A case study of 2018 US mid-term elections

Meng-Jie Wang

University of Canterbury

Emotions are often shared quickly and on a huge scale through the connected world of social media. Rather than merely venting personal feelings, politicians or their champions share emotion in such a manner as to invite subsequent feedback, holding the potential to attract those most likely to support them and share their messages with others. While previous studies have identified politicians' preference for Twitter, the impact on their followers is less apparent. By tracking verified Twitter users, including prominent American politicians and their champions (i.e., Donald Trump, Hillary Clinton, Nancy Pelosi, Barack Obama, Paul Ryan, Mike Pence, Joe Biden, Nikki Haley, Steve Scalise, The Democrats, and the GOP, the Republican National Committee), and 95 declared candidates across 9 weeks around the 2018 US midterm elections (4 weeks prior to election week and 4 weeks thereafter), this study examines whether and to what extent individuals will engage or disengage with the target of the message. Through the analysis of text-based tweets and retweet behavior, the current study is expected to identify the features of emotional posts that elicit social sharing on Twitter.

Media multitasking as an avoidance coping strategy for emotionally stressful events

Jay Shin | Eva Kemps

Charles Sturt University | Flinders University

Emotion regulation refers to the way individuals manage and regulate their own emotions in response to negative emotional experiences. This study investigated the possibility that media multitasking serves as a maladaptive coping strategy (e.g., suppression, avoidance) for managing emotionally stressful events, using self-report measures of emotion regulation (the Difficulties in Emotion Regulation Scale) and cognitive measures that assess attentional bias for emotional stimuli (the dot probe and the emotional Stroop task). Results showed that media multitasking was associated with difficulties in accepting emotional responses for participants who selectively attended to anxiety words, as well as for participants who showed attentional avoidance of such words. Further, there was a particularly strong association between media multitasking and avoidance of anxiety words for participants who successfully ignored the meaning of anxiety words in the emotional Stroop task. These results support the idea that media multitasking serves as an avoidance coping strategy whereby one deliberately directs attention away from negative stimuli to prevent their further processing. The findings have real-life implications for managing and treating anxiety and depression, as media multitasking may be used as a maladaptive coping strategy that further increases feelings of anxiety and depression.

Rituals, Repetitiveness and Cognitive Load: A Competitive Test of Ritual Benefits for Stress

Johannes Karl | Ronald Fischer

Victoria University of Wellington

A central hypothesis to account for the ubiquity of rituals across cultures is their supposed anxiolytic effect: rituals are maintained because they reduce existential anxiety and uncertainty. We aimed to test the anxiolytic effects of rituals by investigating two possible underlying mechanisms: cognitive load and repetitive movement. In our pre-registered experiment (osf.io/rsu9x), 180 undergraduates took part in either a stress or a control condition and were subsequently assigned to either a control, cognitive load, undirected movement, combined undirected movement and cognitive load, or ritualistic intervention. Using both repeated self-report measures and continuous physiological indicators of anxiety, we failed to find direct support for a cognitive suppression effect of anxiety through ritualistic behavior. Nevertheless, we found that induced stress increased participants' subsequent repetitive behavior, which in turn reduced physiological arousal. This study provides novel evidence for a plausible mechanism underlying the proposed anxiolytic effect of rituals: repetitive behavior, but not cognitive load, may decrease physiological stress responses during ritual.

Towards a non-invasive music-based assessment of depression

Eline A. Smit | Felix Dobrowohl | Nora K. Schaal | Kat R. Agres | Steffen A. Herff

Western Sydney University | Heinrich-Heine University | National University of Singapore

Statistical Learning (SL) is the ability to track probabilistic contingencies in the environment and constitutes a fundamental mechanism of learning. Interestingly, patients suffering from depression show reduced learning of positively compared to negatively valenced information. This suggests that an SL paradigm deploying valenced stimuli may function as a marker of depression. Identifying suitable stimuli, however, is a daunting task. Here, we aim to explore music's remarkable capacity to convey affect, and take a first step towards a non-invasive music-based SL task to diagnose depression. To identify the suitability of different potential stimuli, we explored participants' affective judgements of chord sequences derived from four different chord-sets. We tested the influence of mode (major vs. minor) and number of tones (triads vs. tetrachords) on affective judgements. Twelve participants listened to 150 sequences and were asked to rate each sequence on valence and arousal. Triad sequences varied in arousal ratings and received higher valence ratings than tetrachords, whereas tetrachord sequences received higher arousal ratings. Valence was generally lower for minor triads and higher for major triads. Implications are discussed in the context of musicology, as well as the chords' suitability for a music-based SL task relevant to the diagnosis of depression.

Effects of parenting styles and delay of gratification on cognitive abilities of preschool children

Dr. Santha Kumari | Prerna Jain

Thapar Institute of Engineering & Technology, Patiala, India

The capacity to delay gratification, to refrain from immediately available rewards in pursuit of a delayed but more valuable outcome, is a hallmark of self-regulation and of cognitive and social competence across the life span. The present study attempted to investigate the effect of parenting styles on delay of gratification and its impact on cognitive abilities. Thirty preschool children in the age range of 3-4 years and their mothers participated in the study. The extent of delay of gratification was measured using an experiment similar to Walter Mischel's Marshmallow experiment, and cognitive abilities were assessed using the Seguin Form Board Test. Mothers of these children completed a parenting style questionnaire. Regression analysis and mediation analysis were used to analyse the data. Results indicated that authoritative parenting style had a significant impact on the cognitive abilities of children. In addition, children with high delay of gratification scored better on the Seguin Form Board Test. The findings are explained in terms of the role of parenting styles in developing delay of gratification in children, promoting goal-directed behaviour and fostering cognitive and social competence in later life.

Perceptual Illusions

Thursday 11:00 – 12:40

Sensory processes play a more crucial role than cognitive ones in the size-weight illusion

Philippe Chouinard | Cody Freeman | Elizabeth Saccone

La Trobe University

In the size-weight illusion (SWI), people perceive the smaller of two equally weighted objects as heavier. We examined the relative importance of cognitive and sensory processes by varying cognitive load in a dual-task and the quality of haptic feedback from wearing gloves. On each trial, participants (N = 25) hefted an object without receiving visual feedback. The objects varied in two sizes and six weights. Participants provided perceptual estimates of each object's weight under four conditions: no-load with gloves, no-load without gloves, low-load without gloves, and high-load without gloves. Differences in perceptual estimates were calculated between the small and large objects for each weight. The SWI diminished when participants wore the gloves but did not change as cognitive load increased on the dual-task. Specifically, a main effect of condition was found, F(3,72) = 6.50, p = .001. Bonferroni pairwise comparisons revealed that the illusion when wearing gloves was weaker than in the three gloveless conditions (all p < .027), while the three gloveless conditions did not differ from each other (all p = 1). These findings indicate that the illusion is more strongly influenced by sensory than cognitive processes. This has implications for understanding the SWI and weight perception more generally.

Gender and the Body Size and Shape Aftereffect: Implications for neural processing and body image distortion

A/Prof. Kevin R. Brooks | Ms. Evelyn Baldry | Dr. Jonathan Mond | Professor Richard Stevenson | Dr. Ian Stephen

Macquarie University | Western Sydney University | University of Tasmania

Extended viewing of thin (wide) bodies causes a perceptual aftereffect such that bodies appear wider (thinner) than they actually are. This phenomenon is known as visual adaptation. We used the adaptation paradigm to examine the gender selectivity of the neural mechanisms encoding body size and shape. Observers adjusted male and female test bodies to appear normal sized both before and after adaptation to extreme bodies. In Experiment 1, observers adapted simultaneously to bodies of each gender distorted in opposite directions (e.g., wide males and thin females). The direction of resultant aftereffects for male and female test bodies was contingent on the adaptor, indicating at least some separation of the neural mechanisms processing body size and shape for the two genders. In Experiment 2, adaptation involved either wide males, thin males, wide females or thin females. Aftereffects were present in all conditions, but were stronger when adaptation and test genders matched, suggesting some overlap in the tuning of gender-selective neural mechanisms. Given that visual adaptation has been implicated in real-world examples of body image distortion (e.g., in anorexia nervosa or obesity), these results may have implications for the development of body size and shape misperception therapies based on the adaptation model.

The perception and misperception of opacity in opaque and sub-surface scattering materials

Dr Phillip J Marlow | Prof. Barton L Anderson

The University of Sydney

The visual system has the remarkable ability to determine whether it is viewing an opaque or translucent material in contexts where both generate identical images. In these contexts, it is the 3D shape interpretation of an image that is somehow being used to disambiguate whether the material is opaque or translucent. Here, we offer an explanation for how 3D shape information might be used to compute opacity/translucency. We explain why the luminance of opaque materials nearly always covaries with the surface normal field, whereas the luminance of translucent materials covaries with the distribution of peaks and valleys across a 3D shape. We show that these two forms of covariation between luminance and 3D shape predict the perception and misperception of opacity/translucency. We identify specific combinations of 3D shape and illumination where opaque materials mimic forms of luminance-shape covariation that are characteristic of translucent materials (and vice versa), resulting in illusory percepts of translucency (or opacity). These results provide new insights into the computations underlying material perception, and raise questions about how accurately we perceive the reflectance and transmittance functions of real-world materials.

Flowers in the Attic: Is there a hemispheric asymmetry to seeing meaning in noise?

Simon Cropper | Ashlan McCauley | Owen Gwynn | Megan Bartlett | Mike Nicholls

University of Melbourne | Flinders University

The brain is a slave to sense; we see and hear things that are not there and engage in ongoing correction of these false-positive percepts. The current study investigates whether the predisposition to see meaning in noise is biased toward the right cerebral hemisphere, as would be suggested by the literature. It was predicted that the right hemisphere would be more prone to false positives than the left hemisphere. Right-handed undergraduates participated in a forced-choice signal-detection task where they determined whether a face or flower was present in visual noise. Information was presented to either the left or right hemisphere using a divided visual field paradigm. Experiment 1 involved an equal ratio of signal to noise trials; Experiment 2 provided more opportunity for illusory perception with 25% signal and 75% noise trials. There was no asymmetry in the ability to discriminate signal from noise trials for either faces or flowers. Response criterion was conservative for both stimuli, and the avoidance of false positives was stronger in the left than the right visual field. These results were the opposite of those predicted, and it is suggested that the asymmetry is the result of a left hemisphere advantage for rapid evidence accumulation.
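For readers unfamiliar with the signal-detection measures referred to above, the sketch below (a minimal Python illustration, not the authors' analysis code; the trial counts are hypothetical) shows how sensitivity (d') and response criterion (c) can be computed from hit and false-alarm counts for stimuli presented to one visual field. Positive criterion values correspond to the conservative responding reported here.

    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # d' (sensitivity) and c (criterion) for a yes/no signal-detection design,
        # with a simple correction so hit/false-alarm rates never equal 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa
        criterion = -0.5 * (z_hit + z_fa)  # positive = conservative (biased toward "noise")
        return d_prime, criterion

    # Hypothetical counts for face stimuli presented to the left visual field:
    print(sdt_measures(hits=70, misses=30, false_alarms=20, correct_rejections=80))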

Altered visual sensitivity in zebrafish knockout of autism and schizophrenia risk gene pdzk1

Patrick T. Goodbourn | Jiaheng Xie | Bang V. Bui | Patricia R. Jusuf

The University of Melbourne

Abnormal visual contrast sensitivity has been demonstrated in schizophrenia and autism spectrum disorder (ASD). Because the neurobiology of visual function is relatively well characterised, measures of visual performance are promising endophenotypes for investigating the genetic basis of these disorders. A previous genome-wide association study identified a polymorphism in the PDZK1 gene, in a risk region for both schizophrenia and ASD, as a novel candidate affecting contrast sensitivity at low spatial frequencies. We aimed to verify this association by generating pdzk1-knockout zebrafish (Danio rerio) using CRISPR-Cas9 genome editing. At the larval stage, we measured behavioural contrast-sensitivity functions in knockout animals and wild-type controls by exploiting an innate, visually guided stabilisation response. We also measured the scotopic electroretinogram to assess retinal integrity. Knockouts showed significantly enhanced behavioural contrast sensitivity relative to controls, specific to low spatial frequencies. Electroretinography revealed increased a-wave (photoreceptor) and decreased b-wave (bipolar cell) amplitude in knockouts compared to controls, with delayed peaks in both components. These findings corroborate the role of PDZK1 in visual contrast sensitivity. Ongoing neurodevelopmental characterisation of this model will reveal the biological consequences of pdzk1 disruption, and may provide a new window into the neurobiology of schizophrenia and ASD.

Language and Learning

Thursday 11:00 – 12:40

Rhyme competition in spoken word recognition depends on the availability of cognitive resources

Laurence Bruggeman | Anne Cutler

Western Sydney University

When we listen to speech, multiple candidate words are activated which then compete with one another for recognition. In young adult native listeners, activated words typically include onset competitors (sharing a beginning; button/butter) and rhyme competitors (sharing an ending; bumper/jumper). Traditional models of speech comprehension assume stronger competition for onset than rhyme, as this matches the left-to-right direction of speech input. The fact that rhyme overlap evokes competition at all may even be called counterintuitive. Here, we examined the strength of this competition, focusing on more effortful listening. Using eyetracking, we assessed word recognition by 1) late bilinguals listening to speech in their first and second language, and 2) older adults listening to speech interrupted by bursts of noise. Eye gaze was recorded as listeners heard sentences and viewed sets of four drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns showed considerable onset but no rhyme competition, for either bilinguals (in either language) or older adults in noise. Rhyme competition did emerge for older adults listening to clear speech. Rhyme competition may thus be a luxury feature of maximally efficient listening, to be abandoned when resources are scarcer.

Eye movements down the garden-path: The role of semantic persistence on processing routes in second-language comprehension

Roslyn Wong | Dr Aaron Veldre | Professor Sally Andrews

University of Sydney

Consistent evidence from the second-language (L2) processing literature suggests that non-native speakers depend on shallow, semantic-based processing to compute representations of linguistic text because of an inherent difficulty with acquiring native-like syntactic competence for a second learned language. However, similar patterns of shallow processing observed in native (L1) speakers, demonstrated by semantic persistence effects, have been attributed to a preference for semantic-based processing, which outputs good-enough representations of linguistic text faster than syntax-based processing. The current study was thus designed to investigate whether shallow processing in L2 speakers is due to inadequate syntactic knowledge or simply to a preference for semantic-based processing. An eye-tracking paradigm was used to compare how native English speakers and native Chinese L2 English speakers processed two-sentence texts in which the syntactic and semantic properties of the first sentence had been manipulated. The results were consistent with the view that L2 speakers build linguistic representations based on semantic information even when the available syntactic information does not necessarily support this interpretation.

Grounding Semantic Representations: A Multimodal Approach Using Visual and Affective Features

Simon De Deyne | Danielle Navarro | Guillem Collell | Amy Perfors

University of Melbourne | University of New South Wales | KU Leuven

One of the main limitations of natural language-based approaches to meaning is that they are not grounded. In this study, we evaluate how well different kinds of models account for people's representations of both concrete and abstract concepts. The models are unimodal (language-based only) models and multimodal distributional semantic models (which additionally incorporate perceptual and/or affective information). The language-based models include both external (based on text corpora) and internal (derived from word associations) language. We present two new studies and a re-analysis of a series of previous studies demonstrating that unimodal performance is substantially higher for internal models, especially when comparisons at the basic level are considered. For multimodal models, our findings suggest that additional visual and affective features lead to only slightly more accurate mental representations of word meaning than what is already encoded in internal language models; however, for abstract concepts, visual and affective features improve the predictions of external text-based models. Our work presents new evidence that the grounding problem includes abstract words as well and is therefore more widespread than previously suggested. Implications for both embodied and distributional views are discussed.

Active Tasks and Test-Potentiated Learning: What Works and Why?

Shaun Boustani

University of Sydney

Active learning is a popular and effective teaching method which is thought to increase students' interactivity with materials. Often juxtaposed against passive styles of teaching, such as lecturing, the benefits of active tasks are robust and reliable (Freeman et al., 2014). Despite this, little is known about why and how these tasks produce their pedagogical benefits. The current research reviews one subset of active tasks, intermittent testing, and investigates how retrieval practice might enhance memory and potentiate new learning. Testing involves the retrieval of previously learned information from memory, and has been shown to improve performance, relative to restudy, on a range of memory tasks using many different material types (Rowland, 2014). In a series of experiments, interpolated retrieval was compared to restudy and elaboration tasks using a novel experimental design. Learning materials were first introduced within an original study block, after which half of the original items were either retrieved, restudied, or elaborated upon, whilst the other half were not re-exposed. Ultimately, memory for all materials was superior in retrieval conditions, suggesting the existence of both direct and indirect benefits of retrieval. The educational implications of these findings are discussed.

Which cognitive skills predict success in an undergraduate programming course?

Irene Graafsma | Serje Robidoux | Matthew Roberts | Vince Polito | Lyndsey Nickels | Judy Zhu | Eva Marinus

Macquarie University | Pädagogische Hochschule Schwyz

Governments are increasingly focussed on improving the digital literacy of the next generation. Despite the increase in demand for programming education, little is understood about how programming skills are best acquired and how to teach them optimally. An important first step in addressing these gaps is to determine which cognitive skills play a role in programming. We used a model of the cognition of coding to predict potentially important cognitive skills, and assessed both these skills and programming ability in 123 undergraduate students at the start and end of an introductory programming unit. Initial skills in algebra, logical reasoning, pattern recognition and grammar learning predicted performance at the end of the semester of programming tuition, both for the final unit grade and on an independent programming task. Students' vocabulary learning skills, previous programming experience, self-rated programming skills and number of natural languages spoken did not predict programming performance. In addition, students' logical reasoning and pattern recognition skills improved between the start and end of semester, suggesting that learning to program may improve those cognitive skills. The results will form a basis for future studies that may contribute to the development of better teaching methods.

Face Processing

Thursday 1:30 – 3:30

Perception of unfamiliar faces is sensitive to vertical image ratio

Ella Macaskill | Tirta Susilo

Victoria University of Wellington

Recognition of familiar faces shows remarkable resilience when famous face images are globally stretched to twice their original height, but not when only half of the image is stretched (Hole et al., 2002). This finding suggests familiar face recognition relies on maintaining the image ratio along the vertical axis of the face. Here we explore whether vertical image ratio is also important for perception of unfamiliar faces, and whether the effect is specific to upright faces. We apply global and half stretching to images of upright faces, inverted faces, and cars in a 3AFC match-to-sample task. Global image stretching has little effect on all stimuli, but half image stretching impacts upright faces more than inverted faces and cars. These results suggest that, like familiar face recognition, unfamiliar face perception is sensitive to vertical image ratio. Our data also show that resilience to global image stretching is not specific to upright faces. We discuss our findings in terms of several models of the image stretching effect.

Viewer-centred biases in perceiving fixation distance from eye vergence

Alysha Nguyen | Colin Clifford

UNSW Sydney

The eyes of others play a crucial role in social interactions, providing information such as the focus of another's attention and their current thoughts and emotions. Although much research has focussed on understanding how we perceive gaze direction, little has been done on gaze vergence, i.e., the angle between the two eyes. The vergence of the eyes yields potential information about the distance of another's fixation: the more converged someone's eyes are, the closer their object of fixation. In the present study, we aimed to determine the precise fixation distance at which participants perceive a face to be gazing, using synthetic faces in a stereoscopically simulated three-dimensional environment. The data revealed a systematic underestimation of fixation distance for downwards-averted gaze, as well as a limit in discrimination of gaze vergence beyond 35 cm. When the faces were inverted, fixation distance was again underestimated for gaze vergence in the observer's lower visual field, this time corresponding to the avatar's upwards gaze. This pattern of results indicates that our bias to underestimate others' fixation distance relies primarily on a viewer-centred, egocentric representation of interpersonal space.

Holistic integration of symmetry cues across different face regions

Brooke Ward | Robert McGuire | Tim Burrey

Charles Sturt University

More asymmetrical faces are judged to be less trustworthy. Paradoxically, individuals high on socially aversive personality traits, including psychopaths, have highly symmetrical faces. Here we investigated whether personality judgements are affected by symmetry cues in the internal, expressive region of the face only. Faces were manipulated to remove the asymmetry in the internal region, the external region, the whole face, or not at all. In Part 1, faces were rated for trustworthiness and attractiveness. Theories of the effects of asymmetry on attractiveness predict that asymmetry of both the internal and external face should influence attractiveness judgements. In Part 2, we subjected the same faces to four personality judgements (selfish, aggressive, responsible, and empathic). Results confirmed that, as predicted, only asymmetry cues in the internal region affected judgements of trustworthiness (Part 1) and three of the four personality judgements (Part 2), while attractiveness judgements (Part 1) were influenced by integration of asymmetry cues across the entire face. Contrary to expectations, selfishness judgements were affected only by asymmetry cues in the external region of the face. Data from the current study support the idea that (with the exception of selfishness) personality judgements are influenced by asymmetry cues in the internal face region only.

Body inversion effects: the role of heads

Emma Axelsson | Rachel Robbins | Tharindi Buddhadasa

University of Newcastle | The Australian National University

Body inversion effects (BIEs) are typically reliable except in the case of headless bodies. Yovel et al. (2010) failed to find a BIE in a posture discrimination task with headless bodies, while Robbins and Coltheart (2012) found a BIE in an identity discrimination task, and Minnebusch found a reversed BIE. The aim here was to test whether these differences are due to task type (identity versus posture discrimination) or to study design. If heads are key to a BIE, then there may be carryover effects from trials containing images with heads. Using a within-groups design, participants completed both task types in separate blocks of headless images and whole figures (bodies with heads) in upright and inverted orientations. The headless images always appeared first. A match-to-sample method was used: after presentation of a sample image (250 ms), the same image appeared with a different posture or identity. A BIE was found with whole figures and headless bodies in both the posture and identity discrimination tasks. Participants were more efficient at discriminating upright compared to inverted bodies regardless of the presence of heads and task type. Study design and eye tracking data will also be discussed.

The Viewpoint Illusion: A Spatial Analogue to Facial Viewpoint Adaptation

Mr Kieran J. Pang | Prof Colin C.W. Clifford

University of New South Wales

Contextual modulation, the shift in a target's perceptual properties as a function of its surroundings, has been demonstrated to be analogous across temporal and spatial dimensions. However, while this relationship holds true in low-level vision, the effects of spatial context remain unexplored in its high-level counterpart. The present study series utilized an unbiased 2AFC paradigm to measure a novel viewpoint illusion, a spatial analogue to facial viewpoint adaptation. Participants were tasked to select from two face sets, each consisting of one centred target identity surrounded by flankers, depending on which target was oriented more directly forward. One set of flankers faced left, while the other set mirrored this, facing right. Target orientations deviated equally from a base orientation selected by a Bayesian staircase. A robust repulsive effect of approximately 0.5° was observed and subsequently replicated; judgements of target viewpoints were shifted from their true orientation in a direction opposite to the surround. This effect was eliminated following surround inversion, indicating that the result was unlikely to be driven by low-level factors. This study provides evidence that spatial contextual modulation is present beyond low-level vision, and opens the field to investigating spatial interactions in other high-level features with established adaptation effects.

Distribution analysis reveals a variable right-hemisphere contribution to the N170

Paul M. Corballis | Haiyang Jin | William G. Hayward | Kara D. Federmeier

The University of Auckland, Auckland, New Zealand | The University of Hong Kong | University of Illinois

Event-related potentials (ERPs) are usually derived by averaging similar trials, with the implicit, but mostly untested, assumptions that ERPs are similar from trial to trial and that the distribution of amplitudes is normal. Violations of these assumptions could have significant implications for the interpretation of ERP data. We explored these assumptions by investigating the amplitude distributions of the face-specific N170 component. It is commonly noted that the N170 evoked by faces is larger in amplitude than for non-face stimuli, and that this difference is larger over the right hemisphere, a pattern we replicated across three experiments. We fit the single-trial amplitudes with ex-Gaussian functions. Plotting the distributions of N170 evoked by face and house stimuli revealed significant skew when faces were presented. The ex-Gaussian fits generated parameters (mu and tau) that we subjected to linear mixed-model (LMM) analysis. The mu parameter was larger for faces than for houses, reflecting a larger overall N170, while tau confirmed that the distribution of N170 from right-hemisphere electrodes was negatively skewed. This implies that the rightward lateralization of N170 amplitudes is probably due to a response that occurs only on some trials, a finding that is obscured by signal-averaging approaches to ERP analysis.
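To make the single-trial fitting step concrete, the sketch below (a minimal Python illustration, not the authors' pipeline; the simulated amplitudes are invented for demonstration) fits an ex-Gaussian to single-trial amplitudes with SciPy. Because the N170 is negative-going, amplitudes are sign-flipped before fitting so that the exponential tail captures the negative skew; the resulting mu and tau per participant and condition could then be entered into a linear mixed model.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulated single-trial N170 amplitudes in microvolts (negative-going component).
    amps = -(rng.normal(loc=5.0, scale=2.0, size=300) + rng.exponential(scale=3.0, size=300))

    # Fit an exponentially modified Gaussian to the sign-flipped amplitudes.
    K, loc, scale = stats.exponnorm.fit(-amps)
    mu, sigma, tau = loc, scale, K * scale  # conventional ex-Gaussian parameters

    print(f"mu = {mu:.2f} uV, sigma = {sigma:.2f} uV, tau = {tau:.2f} uV")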

Working Memory

Thursday 1:30 – 3:30

The integration of stimulus information in visual short-term memory

Simon D. Lilburn | Philip L. Smith

The University of Melbourne

Multiple observation or "multiple look" procedures have been used to examine the efficiency of information integration within the visual system. The integration of information involves some form of visual memory, but investigations of visual short-term memory have largely focused on capacity limits across manipulations of memory load rather than on memory dynamics across time. We present a near-threshold, post-stimulus probed version of the standard multiple look procedure with multiple stimuli across two intervals and compare observer performance to a version of the attention-weighted sample size model. In modelling the different rates of accumulation for each presentation interval across different stimulus exposure durations, this model provides a precise characterisation of information integration, as well as memory load, within the task. A parsimonious model, in which the rate at which information is sampled into memory is constant and information is efficiently integrated for decision making, captures the complicated pattern of observer behaviour in this task. We also demonstrate that this model can be extended into the response time domain by using a modified diffusion model.

Examination of doubly stochastic processes in a neural model of visual working memory

Robert Taylor

University of Newcastle

A popular method for probing the precision of visual working memory is to ask individuals to reproduce encoded stimulus features using a continuous response scale. A neural resource model based on storing information in a noisy population of idealized neurons provides a very parsimonious account of continuous report recall errors while also having some correspondence with neurophysiological principles. In this version of the model, random spiking in the neural population is typically approximated using a homogeneous Poisson process. However, population activity is considered to be purely stimulus driven and thus ignores the possible contribution of non-sensory factors, such as arousal, attention, and adaptation, to cortical excitability. Here I provide a preliminary examination of a doubly stochastic neural resource model for the continuous report task. To account for non-sensory factors, the model generalizes the Poisson process by introducing trial-by-trial fluctuations in neural gain. Initial assessment suggests that while the doubly stochastic model might provide an improved fit to behavioral data in some cases, model evaluation is complicated by the unrealistically low firing rates generated by the models.
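As an illustration of the kind of generative process described above (a minimal Python sketch under assumed tuning and gain parameters, not the model implementation reported in the talk), the code below contrasts a homogeneous Poisson population with a doubly stochastic one in which a gamma-distributed gain fluctuates from trial to trial; decoding the stimulus from the resulting spike counts on each trial would then yield a distribution of recall errors.

    import numpy as np

    rng = np.random.default_rng(1)

    n_neurons = 100
    pref = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)  # preferred feature values

    def tuning(stimulus, gain, kappa=2.0):
        # Von Mises-like tuning curves scaled by a population gain.
        return gain * np.exp(kappa * (np.cos(stimulus - pref) - 1.0))

    def simulate_trial(stimulus, mean_gain=10.0, gain_var=None):
        # gain_var=None -> homogeneous Poisson (fixed gain, purely stimulus driven).
        # gain_var set  -> doubly stochastic: gain drawn from a gamma distribution each trial.
        if gain_var is None:
            gain = mean_gain
        else:
            gain = rng.gamma(mean_gain**2 / gain_var, gain_var / mean_gain)
        return rng.poisson(tuning(stimulus, gain))

    spikes_fixed = simulate_trial(0.5)                  # stimulus-driven spiking only
    spikes_fluct = simulate_trial(0.5, gain_var=25.0)   # adds non-sensory gain fluctuations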

Using eye gaze data to examine the flexibility of resource allocation in visual working memory

Ed Stewart | Chris Donkin | Mike Le Pelley

UNSW

Computational models of visual working memory (VWM) generally fall into two categories: slots-based and resources-based models. On the surface, these models appear to make distinct predictions. However, as these models have expanded to capture empirical data, they have begun to mimic each other. Further complicating matters, Donkin, Kary, Tahir and Taylor (2016) proposed that observers were capable of using either slot- or resource-based encoding strategies. In the current experiment, we aimed to test the claim that observers adapt their encoding strategies depending on the task environment by observing how participants move their eyes in a VWM experiment. We ran participants on a standard colour recall task (Zhang and Luck, 2008) while tracking their eye movements. Trials consisted of 3 or 6 items. We manipulated whether the number of items was held constant for a block of trials, or varied randomly. We expected to see participants use more resource-like encoding when the number of items was predictable. Contrary to these expectations, we observed no difference between blocked and unblocked conditions. Further, the eye gaze data was only very weakly related to behaviour in the task. We conclude that caution should be taken in interpreting eye gaze data in VWM experiments.

Task Switching and Working Memory Capacity: The impact of cue switch costs on this elusive relationship

Katie Knapp | Stephen Hill | Michael Philipp

Massey University

Task switching and working memory capacity (WMC) are both purported to measure executive attentional control processes, and are widely used to measure executive functions. A surprising finding in the literature has been the lack of relationship between performance on these tasks. These findings may be explained by a task switching theory which argues that task switching performance does not actually measure executive attentional control processes. This controversial claim suggests that the structure of the traditional switching task with one cue assigned to each task (e.g., colour task cued by Colour) confounds cue switching and task switching. Merely having to encode a new cue leads to cue switch costs, which may explain performance decrements on task switch trials. In the present experiment, participants completed a WMC task and a switching task which used two cues per task. This design allowed for the separation of cue switch costs from true task switch costs. When true task switch costs were isolated, a relationship between WMC and task switching performance was found. These results suggest that the task switching paradigm does index attentional control. The findings also raise a number of methodological considerations when using the task switching paradigm in individual differences research.

Media-multitasking, Working Memory and Response Inhibition

Dr Karen Murphy | Olivia Creux

Griffith University

Media-multitasking (using multiple forms of media or devices simultaneously, or swapping between media quickly) utilizes executive functions (EFs) for successful task performance. This study examined the link between media-multitasking and the EFs of working memory and response inhibition. Participants completed the Digit Ordering Task (working memory), a Spatial Stroop Task (response inhibition) and a Go-No-Go task (involving both working memory and response inhibition). Higher media-multitasking scores were associated with better working memory scores and more efficient performance on the Go-No-Go task. Response inhibition (No-Go trial performance in the Go-No-Go task) was not related to media-multitasking behaviour. Higher levels of media-multitasking were associated with poorer Spatial Stroop task accuracy, indicating poorer response inhibition for those who frequently media-multitask. The Spatial Stroop task used locational arrows, whereas the Digit Ordering and Go-No-Go tasks used alphanumeric characters as stimuli. Hence, it is possible that the superior performance of high media-multitaskers in the Digit Ordering and Go-No-Go tasks arises from participants' training with these types of stimuli. That is, the more efficient operation of these EFs by those who engage in more media-multitasking might be linked to the frequency with which they encounter this type of information during media-multitasking.

Is There Something Special About Eyes? Investigating the Effects of a Concurrent Working Memory Load on Gaze- and Arrow-Induced Attentional Orienting

Louisa A. Talipski | Stephanie C. Goodhew | Mark Edwards

The Australian National University

As social animals, we regularly make use of a variety of social cues in order to navigate our environment. One such cue is gaze direction, which appears to compel our attentional resources to the gazed-at location. A recent debate concerns whether orienting in response to familiar nonsocial cues, such as arrows, is as reflexive as that in response to a socially relevant stimulus such as gaze direction. In this study, we examined the automaticity of arrow-induced orienting versus that of gaze-following in terms of the load-insensitivity criterion of automaticity. Across three experiments, we found that orienting in response to arrows, like gaze-following, is resistant to a verbal working memory load (WML). In two subsequent experiments, we found that a visuospatial WML, which is known to share mechanisms with spatial attention, abolishes arrow-induced orienting but leaves gaze-following intact. This indicates that gaze-following may draw on a different pool of resources to those involved in nonsocial orienting, or requires fewer visuospatial resources. In a further experiment, we determine whether this intact social orienting effect can be attributed to the apparent motion of the pupils, rather than gaze direction per se, eliciting a reflexive orienting response to the gazed-at location.

Attentional Control

Thursday 4:00 – 5:40

Feature-based attention is not confined by object boundaries: spatially global enhancement of irrelevant features

Angus F. Chapman | Viola S. Störmer

University of California San Diego

Theories of visual attention differ in what they define as the core unit of selection. Feature-based theories emphasize the importance of visual features (e.g., color, size, motion), demonstrated through enhancement of attended features across the visual field, while object-based theories propose that attention enhances all features belonging to the same object. Here we test how within-object enhancement of features interacts with spatially global effects of feature-based attention. Participants attended a set of colored dots (moving coherently upwards or downwards) to detect brief luminance decreases, while simultaneously detecting speed changes in another set of dots in the opposite visual field. In the first experiment (N = 15), participants had higher speed detection rates for the dot array that matched the motion direction of the attended color array, although motion direction was entirely task-irrelevant (11.5% difference in hit rate, p < .001). In a second experiment (N = 48), we manipulated the proportion of match and non-match trials (80% vs 20%), and found the effect persisted even when spreading attention to the task-irrelevant feature was detrimental for performance (8.2% vs 6.1% difference in hit rate, p = .512). Overall, these results indicate that task-irrelevant object features are enhanced globally, surpassing object boundaries.

Flexibility in resizing attentional breadth: Asymmetrical versus symmetrical attentional contraction and expansion costs depends on context

Stephanie Goodhew | Ann Plummer

The Australian National University

A core way that attentional resources can be regulated is the breadth of attention: the tendency to concentrate one's attentional resources over a small region of space (narrow scope), or to spread them over a larger region of space (broad scope). It has long been understood that humans have a preference toward the broad or global level of processing. More recently, beyond any static preference, researchers have increasingly appreciated the importance of rapid rescaling of attentional breadth to meet task demands, especially for real-world tasks such as driving. Here we examined whether there was any asymmetry in the human capacity to resize attention from a narrow to a broad scale (expansion) versus a broad to a narrow scale (contraction). In Experiment 1, we found remarkable symmetry in expansion and contraction efficiency, even under conditions where the global stimuli were demonstrably more salient. This indicates that humans can flexibly adapt to the attentional demands of the context. However, in Experiment 2, an asymmetry was revealed, whereby attentional expansion was more efficient than contraction. The key difference between Experiment 1 and Experiment 2 was whether or not the initial baseline block demanded frequent attentional re-sizing, suggesting that recent experience can impact attentional flexibility.

Should we use unfilled shapes to manipulate spatial attention in cognitive psychology experiments?

Rebecca K. Lawrence | Mark Edwards | Stephanie C. Goodhew

The Australian National University

Humans scale the region over which spatial attention is deployed for effective visual processing of the world. Typically, attentional scale is manipulated using unfilled shapes of various sizes. Specifically, participants might be asked to respond to the presentation of small or large rings, which are assumed to narrow or broaden spatial attention respectively (e.g., Goodhew et al., 2016, 2017). Using such stimuli, previous research has found attentional scaling to influence spatial, but not temporal, acuity (the selective spatial enhancement model). However, rather than scaling attention, it is possible that these ring stimuli encourage participants to attend to the outline of the shapes, subsequently deploying attention as an annulus. To address this, we developed two new methods of attentional manipulation. In Experiment 1, we used small and large global motion stimuli to narrow or broaden attention respectively. In Experiment 2, we used Glass pattern stimuli. Both of these methods encourage participants to spread attention across the entire area occupied by the stimulus. Critically, compared to past work, we found that narrowing attention improved both spatial and temporal acuity. This contests the selective spatial enhancement model. As such, we recommend that future research move away from using unfilled shapes to manipulate spatial attention.

Selection history guides attention during voluntary task selection

Dion Henare | Hanna Kadel | Anna Schubö

Philipps University of Marburg

Traditionally, selective attention has been understood in terms of an interaction between top-down and bottom-up processing. More recently a third factor, selection history, has been proposed to play a role in directing attention. Selection history’s effects on attention have been demonstrated experimentally using paradigms in which participants learn an association in one task, while the effect of that learning is observed in the other task. In this study, we examined whether increased top-down control processes could be used to override the effects of selection history in a visual search task. To do this, we employed a voluntary task selection paradigm in which the participant chose which of the two tasks they would perform on any given trial. Voluntary task selection has been shown to increase top-down control and reduce switch costs. We measured behavioural performance while using EEG to record lateralised event-related potentials indexing attentional processing. Results showed that selection history continued to guide attention in a visual search task, even when top-down control was maximised. This suggests that the effects of selection history on attention are enduring, and that top-down control processes cannot be used to override their influence.

Attentional control of conflict in the Stroop task

Luke Mills | Sachiko Kinoshita

Macquarie University

In the Stroop task, there is a conflict between reading the word and naming the display colour. There are a number of potential mechanisms for controlling this conflict, including inhibitory control of reading and disengagement of attention from the word. The present study aimed to determine which mechanism is responsible for attentional control of the conflict produced by word distractors. Attentional control of the conflict in vocal and manual Stroop tasks was manipulated by altering the proportion of non-linguistic neutral trials. In both versions of the task, the interference for words was magnified when there was a high proportion of neutral trials. This was taken as evidence that a high proportion of neutral trials reduced attentional control of the conflict produced by the word distractors. Analysis of the RT distribution showed that neutral proportion shifted the RT distribution without changing its shape. The RT distribution analysis was supported by diffusion modelling, which showed that the effect of neutral proportion was manifested solely in the non-decision time parameter. We use these results to argue that the mechanism responsible for the attentional control of the conflict caused by word distractors operates at an early stage: disengagement of attention from the word stimulus.
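For readers unfamiliar with this diagnostic, the signature of a pure non-decision-time effect can be illustrated with a small simulation: a minimal Python sketch, assuming an illustrative one-boundary Wiener diffusion with made-up parameter values rather than the authors' fitted model, in which lengthening non-decision time shifts every RT quantile by roughly the same amount while leaving the distribution's shape unchanged.

```python
# Minimal sketch (not the authors' model): a pure change in non-decision time (Ter)
# shifts the whole RT distribution without changing its shape.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rts(n, drift=2.0, boundary=1.0, ter=0.35, dt=0.001, noise=1.0):
    """Simulate RTs for upper-boundary responses of a simple diffusion process."""
    rts = []
    while len(rts) < n:
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if x >= boundary:                      # keep upper-boundary ("correct") responses
            rts.append(ter + t)
    return np.array(rts)

low_neutral = simulate_rts(500, ter=0.35)      # e.g., one neutral-proportion condition
high_neutral = simulate_rts(500, ter=0.41)     # same decision parameters, longer Ter

quantiles = [0.1, 0.3, 0.5, 0.7, 0.9]
print(np.round(np.quantile(low_neutral, quantiles), 3))
print(np.round(np.quantile(high_neutral, quantiles), 3))   # each quantile shifts by ~0.06 s
```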

Memory Processes

Thursday 4:00 – 5:40

Is it smart to read on your phone? Does reading format affect susceptibility to misinformation?

Sally Andrews | Yi Xu | Roslyn Wong | Shuhan He | Aaron Veldre

University of Sydney | Shanghai Jiao Tong University

The increased accessibility and mobility of digital technology is changing the way that people access information, including the format in which they read it. This research is designed to investigate whether the format in which people read information influences their memory, comprehension and interpretation of what they read. To address these questions, we adapted a method used by Ecker et al. (2017) to assess susceptibility to misinformation. Participants read pairs of short newspaper-like passages describing an event (e.g., a bushfire) in which the first passage suggested a possible cause for the event (e.g., arson) that was sometimes replaced with an alternative cause (e.g., lightning strike), with or without an explicit retraction of the original cause. Open-ended and multiple-choice questions assessed comprehension and interpretation of the passages. To assess the impact of reading format, we manipulated whether participants read the passage pairs in print, on a computer, or on their own mobile phone. The results confirmed Ecker et al.'s findings that explicit retraction strengthened resistance to misinformation, across all reading formats. The only effect of reading format was a reduction in sensitivity to misinformation on inferential/attitudinal responses that was replicated in both English and Chinese readers.

Fake news! Could overtly-doctored photographs have long-term sleeper effects?

Andrew Mills | Robert Nash | Kimberley Wade | Rachel Zajac

University of Otago | Aston University | University of Warwick

We know that a well-photoshopped photo can cultivate memories for events that never happened. But could an overtly fake photograph also lead people to remember a false news event? Building on previous research, we investigated whether people retain fragments of information presented in a poorly photoshopped photograph, even when they were able to reject this information outright when they first encountered it. In this study, participants read a series of true and false news headlines. Half of the true headlines were accompanied by a genuine photo from the event. Half of the false headlines were accompanied by an overtly-doctored photo of the event in question, which was explicitly labelled as fake. We asked participants to rate each headline based on how likely it was that the event actually happened, and how well they could remember the event taking place. One week later, participants were asked to read and make ratings on the news headlines again, this time without any accompanying photos. Consistent with our predictions, it appears that initially rejected information in the fake photos re-emerged over time. Participants' ratings indicated the presence of source monitoring errors; that is, initially rejected information was mistaken for an old memory of a true event.

Taking stock of the production effect in memory

Glen E. Bodner | Jonathan M. Fawcett | Colin M. MacLeod

Flinders University | Memorial University of Newfoundland | University of Waterloo

Production refers to any method a learner can use to produce information at study to make it more memorable, such as saying it aloud, or writing or typing it out. People commonly report using production strategies in daily life, but only in the last decade have studies established that production can offer a simple, effective study strategy. This talk takes stock of the recent surge in research on the production effect. To this end, key findings and knowledge gaps are summarised and then considered in light of potential accounts. Production has been shown to enhance both the distinctiveness and familiarity of items in memory, but additional mechanisms may be at work. For example, production improves recognition but does not improve free recall. This dissociation might suggest that production can enhance perceptual processing. Some attempts to evaluate this possibility are discussed. Given its ease of implementation and effectiveness, researchers may wish to explore the utility of production strategies for improving student learning, for offsetting memory deficits in ageing, and for the rehabilitation of neurological and memory disorders.

Global semantic similarity effects in recognition memory: Insights from BEAGLE representations and the diffusion decision model

Adam F. Osth | Douglas Mewhort | Andrew Heathcote

University of Melbourne | Queen's University | University of Tasmania

Recognition memory models posit that performance is impaired as the similarity between the probe cue and the contents of memory is increased (global similarity). These predictions have commonly been tested using category length designs, in which the number of items from a common taxonomic or associative category is manipulated. Prior work has demonstrated that increases in the length of associative categories show clear detriments to performance, a result that is inconsistently found for taxonomic categories. In this work, we explored global similarity predictions using representations from the BEAGLE model. BEAGLE's two types of word representations, item and order vectors, exhibit similarity relations that resemble relations among associative and taxonomic category members. Global similarity among item and order vectors was regressed onto drift rates in the diffusion decision model, which leverages both response times and accuracy. Results from seven datasets indicated clear detrimental effects of global similarity among item vectors, suggesting that lists of unrelated words exhibit semantic structure that impairs performance. However, there were relatively small influences of global similarity among the order vectors. These results are consistent with prior work suggesting associative similarity exhibits stronger impairments on performance than taxonomic similarity.
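As a rough illustration of what a global-similarity predictor looks like, the sketch below computes summed similarity between a probe vector and a study list. The random unit vectors are stand-ins for BEAGLE item vectors (real BEAGLE representations are learned from a text corpus), and the exponential similarity transform and the drift-rate regression noted in the comment are illustrative assumptions, not the authors' exact specification.

```python
# Hedged sketch of a "global similarity" predictor: summed (nonlinearly scaled)
# match between a probe's semantic vector and the vectors of all studied items.
import numpy as np

rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v)

dim = 256
study_list = [unit(rng.standard_normal(dim)) for _ in range(20)]   # studied items
probe = unit(rng.standard_normal(dim))                              # test probe

cosines = np.array([probe @ item for item in study_list])
global_similarity = np.sum(np.exp(3.0 * cosines))   # one common similarity transform

print(round(float(global_similarity), 3))
# In a regression of the kind described above, a predictor like this would enter
# the diffusion model's drift rate, e.g. v = b0 + b1 * global_similarity.
```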

The song that never ends: The effect of familiarity with a song on earworm incidence

Callula Killingly | Philippe Lacherez

Queensland University of Technology

Having music stuck in the head (an ‘earworm’) is a common experience, yet relatively little is known about how and why this curious phenomenon occurs. Previous research using a dual-task paradigm demonstrates that earworms for vocal music engage phonological resources, manifesting as ‘inner singing’. The present study investigated whether this effect is moderated by familiarity with the music. In one session, participants (N = 48) were presented with four novel songs. To manipulate familiarity, each song was presented between 1 and 4 times, counterbalanced across participants. The following day, participants undertook a serial recall task during and following presentation of each song. They rated the music on familiarity, enjoyment, their desire to sing along, and perceived catchiness, before and following the experiment. Increased familiarity with a song tended to result in greater phonological interference following song presentation, yet the effect varied depending on the song. Ratings of the desire to sing along and perceived familiarity increased significantly between the sessions for all songs. These findings are important in understanding the relative influence of familiarity and song-level characteristics in the development of an earworm.

Face Perception

Friday 9:00-10:20

Early neural processing of tearful faces

Sarah Krivan | David Cottrell | Nerina Caltabiano | Nicole Thomas

James Cook University | Monash University

Facial expressions are a critical component of social communication. Expressions of joy are readily shared, whereas expressions of sadness are often costly to respond to. Despite this, research employing self-report methodologies has identified that tears elicit greater empathic and caregiving responses compared to other expressions. Understanding whether these differences also modulate face-specific, event-related potential waveforms will allow for a unique understanding of the way that tears are processed. Fifty participants completed an emotional discrimination task of images depicting happy, sad, and neutral faces, both with and without tears. Tearful faces were found to produce a stronger central negativity and posterior positivity at 130 ms (N1/P1) than tear-free faces, whereas the emotional content of the images modulated the VPP/N170, with a larger negative mean amplitude response to sad faces compared to happy faces. Tears modulated the early neural components typically associated with the structure of faces, whereas emotion modulated later components. We interpret these results to mean that tears are processed independently of emotional expression at the early neural level. This early neural processing may be the basis for further understanding their communicative function at an evolutionary level.

The lens of attitudes and personality when reading emotions expressed in the face

Hedwig Eisenbarth | Martin Sellbom | Ted Ruffman

Victoria University of Wellington | University of Otago

Both psychopathic personality traits and right-wing attitudes (RWA) have been linked to the ability to categorize facial expressions by emotion. In addition, while RWA increases with age, emotion recognition has been found to decline. This study investigated whether emotion recognition is related to psychopathic traits, independent of age-recognition ability, and whether emotion recognition is related to RWA across ages. A sample of 399 participants (184 male, 213 female, 2 other; mean age = 42.80 ± 14.81 years, range = 18-85) completed an online emotion categorization task and an age categorization task using facial expressions presented for 3 s each. Participants chose the emotion category (sad, happy, afraid, surprise, angry, disgust) or the age category (20-25, 30-35, 40-45, 50-55, 60-65, 70-75 years) and filled in psychopathic personality and RWA questionnaires. Results show that psychopathic traits were related to reduced categorisation ability for happy facial expressions but better categorisation ability for disgust and anger. RWA was related to poorer emotion recognition ability for disgust, surprise, sad and afraid facial expressions. These effects were stable when controlling for the ability to categorize faces by age.

Children’s Judgements of Facial Hair are Influenced by Biological Development and Experience

Nicole Nelson | Barnaby Dixson

University of Queensland

Adults use others' facial features to judge that person's social dominance and mate value. Facial hair heavily influences these judgments in male faces. The association between facial hair and judgments of dominance and mate value is underpinned by biological and social factors, and how these judgments develop is a fundamental yet unanswered question. We sought to determine when these associations develop, which associations develop first, and whether they are associated with early exposure to bearded faces. We presented pairs of bearded and clean-shaven faces to children ranging from 2-17 years of age (N = 470) and adults (18-22 years; N = 164) and asked them to judge dominance, age, masculinity, attractiveness, and parenting ability. We found that children as young as 2-5 years associated beardedness with dominance traits like being strong and old. Conversely, children did not associate beardedness with traits associated with mate choice: children avoided bearded faces when judging attractiveness and showed little preference for bearded vs. clean-shaven faces when judging who looked like a parental figure. However, children who had bearded fathers judged bearded faces more positively in terms of attractiveness and looking like a parental figure.

Exploring the effects of salience on early visual signals using faces

Manuela Francesca Russo | Philippe Lacherez | Patrick Johnston

Queensland University of Technology

The N170 is an electrophysiological brain response indexing higher-level visual processing. It is commonly considered a neural marker for faces because a larger amplitude is consistently found in response to faces than to other objects. This is known as the Face Specificity Hypothesis. Conversely, there is evidence suggesting that non-face objects for which participants are visually expert produce N170 peaks of similar amplitude (Visual Expertise Hypothesis). Faces, and important non-face stimuli, may be considered salient objects, and N170 responses to these categories should be interpreted in light of this. Our study investigated the contribution of salience during the acquisition of Visual Expertise (VE). We first used classical conditioning, pairing images of unknown Cartographic Contours (CC; the CS+) with emotionally significant images of faces (i.e., one's own face; the US), and other CC with images of unknown faces (NS). Participants were then exposed to degraded images of previously seen CC and foils. We hypothesized that CC previously paired with own faces would elicit a larger N170 than CC paired with unknown faces and than CC foils. Supporting our hypothesis, preliminary EEG data showed that N170 responses elicited by CC paired with the US were larger than those elicited by CC paired with the NS.

Decision-making 1

Friday 9:00-10:20

Evidence for a Fixed-Point Property in Categorization Response Time Data

Daniel Little | Xue Jun Cheng | Sarah Moneer

The University of Melbourne

Many different attentional and decision-making strategies can be conceptualized as mixture models. For instance, shifts between automatic and controlled processes could be considered mixtures of parallel and serial processing. This explanation has been favoured in recent investigations of categorization. One caveat to this conclusion is that mixtures of parallel and serial processing can often be mimicked by parallel models with inhibitory interactions. To tell these explanations apart, we take advantage of the "fixed-point" property of mixture models: that is, there is a single point of cross-over in the probability density functions generated from different mixture proportions. This property can be used to distinguish mixture models from other explanations if one can selectively manipulate the mixture proportion across conditions. This talk will present evidence from two experiments for this fixed-point property in feature processing during categorization.
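The fixed-point property itself is easy to demonstrate numerically. In the sketch below, two arbitrary lognormal distributions stand in for the component RT distributions (they are not fitted to any data); every mixture of them, whatever the mixing proportion, passes through the single point where the two base densities are equal.

```python
# Illustration of the fixed-point property of binary mixture densities.
import numpy as np
from scipy.stats import lognorm

t = np.linspace(0.2, 3.0, 2000)
serial   = lognorm(s=0.4, scale=1.2).pdf(t)   # slower stand-in component
parallel = lognorm(s=0.4, scale=0.7).pdf(t)   # faster stand-in component

# Mixtures with different mixing proportions p.
densities = [p * parallel + (1 - p) * serial for p in (0.25, 0.5, 0.75)]

# The crossing point is where the two base densities are equal; every mixture
# passes through it regardless of p.
cross = t[np.argmin(np.abs(parallel - serial))]
print([round(float(np.interp(cross, t, d)), 4) for d in densities])  # approximately equal values
```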

New Insights into Decisions from Experience: Using Cognitive Models to Understand How Value Information, Outcome Order, and Salience Drive Risk Taking

Jared M. Hotaling | Chris Donkin | Ben R. Newell | Andreas Jarvstad

University of New South Wales | University of London

Many decisions are based on experienced outcomes. However, little is known about the mechanism underlying these decisions. Previous research has focused on contrasting these decisions with those based on described alternatives. Observations of a description-experience gap (D-E gap) led Hotaling, Jarvstad, Donkin, and Newell (under review) to investigate various factors influencing decisions from experience. Critically, they found that the juncture at which value and probability information is provided has a fundamental effect on choice. They also found evidence for the impact of perceptual salience and outcome recency on choice. To better understand these results, we developed an exemplar-based cognitive model. It uses a noisy, error-prone memory mechanism to explain how confusions between events give rise to various behavioral patterns. According to the model, each time an outcome is experienced, a record is laid down in memory. However, memory traces can be disturbed in several ways as new information enters the system. We tested several versions of the model within this basic framework, and found that one with mechanisms for value-assignment confusions and risk bias provided the best account. We discuss the implications of these findings for our understanding of the interplay between attention, memory, and choice.

Replicating the disaster information effect: knowing more about knowing more about disasters

Garston Liang

UNSW Sydney

How do people react to reports of disasters? We examined this question using a geographical decisions-from-experience task. On each round, participants chose a house to live in from one of three villages, each with different exposure to disasters: no exposure, frequent occurrence with limited exposure, and infrequent but catastrophic disasters. Additionally, participants received either (a) forgone feedback across all three villages, or (b) forgone feedback for their own village. The original finding in Newell, Rakow, Yechiam & Sambur (2016) was that additional forgone feedback increased people's tolerance for risk-taking, i.e. knowing more about disasters led to riskier choices. The explanation for this result was that participants underweighted the probability of rare disasters. This disaster-information effect motivated four follow-up experiments; however, the failure to replicate the original finding led to a registered replication (N = 242) which investigated reactionary behaviours to rare disasters at the trial level. Sequential analyses revealed evidence of (a) differences in choices between witnessing a disaster and experiencing its effects, and (b) a gambler's fallacy in moving into a disaster-affected region after witnessing the disaster from safety. Together, the findings shed light on how people react to large losses and clarify the role of feedback in this decisions-from-experience task.

The limited role of pre-registration and Bayes factors in fixing our science

Chris Donkin | Aba Szollosi

UNSW Sydney

In this talk, we’ll ask what the most popular reactions to the replication crisis – pre-registration and better out-of-the-box statistical methods – do to improve our understanding of psychological phenomena. We start with the assumption that scientific progress is made through good explanations, and then look at the contribution of pre-registration and "improved" statistics to that end. The aim of the talk isn’t to discourage the use and adoption of such methods, but rather to pre-empt the potential consequences of widespread and thoughtless application of such approaches, and to dispel the notion that, if we adopt new ‘best practices’, we have made considerable progress towards fixing psychological science. The inconvenient truth is that we have only begun our journey to improve psychological science, and the aim of this talk is to draw attention to some of the issues that remain despite innovations such as pre-registration and better statistics.

Emotion and Motivation

Friday 11:00 – 12:40

Evaluative conditioning affects subsequent fear learning

Ottmar Lipp | Camilla Luck | Alana Muir

Curtin University

It is currently unknown whether the acquisition of negative valence in evaluative and fear conditioning reflects a single learning mechanism. The current study used a transfer paradigm to address this question. Three groups of participants (N = 85) were trained in differential fear conditioning after completing a picture-picture evaluative conditioning paradigm. In group Congruent, the to-be-CS+ was paired with negative pictures whereas the to-be-CS- was paired with positive pictures; in group Incongruent the to-be-CS+ was paired with positive pictures whereas the to-be-CS- was paired with negative pictures, and different CSs were used in evaluative and fear conditioning in group Different. An online measure of CS evaluation indicated that CS valence acquired during evaluative conditioning affected valence acquisition during fear conditioning with CS+ being less pleasant than CS- in groups Congruent and Different, but not in group Incongruent. Differential electrodermal responses emerged within fewer trials in groups Congruent and Different than in group Incongruent and there was a trend towards faster extinction in group Incongruent. The current research indicates that CS valence acquired during evaluative conditioning transfers across conditioning paradigms and will affect the acquisition of fear learning as indexed by subjective evaluations and electrodermal responses.

Startle modulation in backward evaluative conditioning is not affected by type of instructions or concurrent forward conditioning

Luke Green | Camilla Luck | Ottmar Lipp

Curtin University

Backward conditioning results in a conditional stimulus (CS) either acquiring the valence of the unconditional stimulus (US; assimilation effect), or valence opposite to that of the US (contrast effect). In two experiments, startle modulation (startle is larger during negative than positive stimuli) was employed to determine whether instructions or the conditioning design used affect the nature of backward conditioning. In Experiment 1, different CSs were presented after positive, neutral, or negative sound USs and participants were asked to learn which CSs stopped the respective USs. Smaller startles were found during CSs following negative USs than neutral and positive USs, indicating a contrast effect. In Experiment 2, concurrent forward and backward conditioning was performed with positive, neutral, and negative sound USs. One group received the instructions from Experiment 1, while a second group were asked to pay attention to the stimuli. Results indicated contrast effects for backward conditioning in both groups. These findings suggest that backward CSs acquire valence opposite to the US in simple backward conditioning and concurrent forward and backward conditioning designs, and with and without instructions highlighting the roles of the CSs. These findings are most consistent with a relief-learning account of backward conditioning.

Social context influences dynamic facial affective responses to food stimuli

Elizabeth Nath | Peter Cannon | Michael Philipp

Massey University

In recent years, there has been growing interest in emotion measurement in the field of consumer and sensory science. However, current emotion measurement methods fail to consider contextual variables and fail to capture the temporal nature of the emotion process. In two related studies, we explored how social context influences the temporal dynamics of facial expression and evaluated the utility of facial EMG in predicting food acceptability. In the first study, 70 participants viewed food images either alone, or in the presence of the researcher. In the second, 87 participants viewed the same food images either alone, with a stranger, or with a friend. Subjective liking ratings were measured using a labelled affective magnitude scale, and facial muscle activity was measured with an EMG recording system. Results showed that social context influenced the pattern of temporal responding to food images, with increased facial muscle activity in the presence of others. These findings suggest that social context influences implicit emotional responses to food, and that it is vital to consider the relationship between the expresser and their audience. The findings also indicate that facial EMG may be a useful dynamic measure of emotion in certain social environments.

The Role of Inhibitory Learning Processes in Callous Unemotional Traits

Lindsay J. Kemp | Caroline Moul

The University of Sydney

Emotional or behavioural responses frequently return (relapse) after an individual learns to inhibit these responses (i.e. experiences extinction learning). While there is considerable research into the prevention of relapse in the context of anxiety and addiction, it is unknown whether deficits in the relapse of behaviour may cause any psychopathology or particular trait profile. One profile that may be linked to this process is Callous-Unemotional (CU) traits. These traits describe individuals with callous disregard for others' emotions, and individuals with CU traits show inflexibility in learned reward-seeking behaviour. It is not known whether this inflexibility extends to the learned inhibition of reward-seeking behaviour, making it less prone to relapse effects. We therefore hypothesise that psychopathic traits will be correlated with reductions in the relapse of responding after extinction learning, indicating inflexible inhibitory learning. We have developed a measure of the extinction of instrumental reward-seeking behaviour, and of the later relapse of extinguished responding. Results indicate that CU traits in an undergraduate sample (N = 48) are associated with significant reductions in extinction learning. Furthermore, collection of relapse behaviour data is underway and will soon be complete. This study constitutes a novel examination of the role of relapse processes in personality.

Affect perception of unfamiliar musical chords

Eline A. Smit | Andrew J. Milne | Roger T. Dean | Gabrielle Weidemann

Western Sydney University

It is well established that musical compositions can induce strong emotional responses, but it has also been found that even small musical events such as two- or three-tone combinations (i.e. dyads or triads) can convey affect. In order to explain the origin of these affective responses, this study investigates the role of extrinsic and intrinsic predictors in the perception of affect in triads from an unfamiliar musical system. Intrinsic predictors include psychoacoustic features inherent in the acoustic signal, whereas extrinsic predictors are derived from long-term statistical regularities of the environment. In two experiments, two slightly different tunings of unfamiliar chords and two types of perceived affect (pleasantness and happiness) were tested for each chord. Bayesian generalised linear multivariate modelling was used to model the data with a number of intrinsic predictors (roughness, harmonicity, spectral entropy and average pitch height), one extrinsic predictor (unfamiliarity), and musical sophistication as a potential moderator of any of the other predictors. All the tested predictors were found to have consistent influences across both tunings and both affective responses. This study highlights the importance of both intrinsic features of the music itself and extrinsic factors for affect perception in music.
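A minimal sketch of this kind of regression follows, assuming simulated placeholder data, a single Gaussian affect rating rather than the study's multivariate outcome, and PyMC rather than the authors' software. The predictor names follow the abstract; letting sophistication moderate only unfamiliarity is an illustrative choice.

```python
# Hedged sketch of a Bayesian regression of affect ratings on intrinsic and
# extrinsic predictors; all data below are random placeholders.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(2)
n = 200
roughness, harmonicity, entropy, pitch_height, unfamiliarity, sophistication = (
    rng.standard_normal((6, n))
)
pleasantness = rng.standard_normal(n)          # placeholder ratings

with pm.Model():
    b = pm.Normal("b", 0.0, 1.0, shape=7)
    sigma = pm.HalfNormal("sigma", 1.0)
    mu = (b[0]
          + b[1] * roughness + b[2] * harmonicity + b[3] * entropy
          + b[4] * pitch_height + b[5] * unfamiliarity
          + b[6] * sophistication * unfamiliarity)   # sophistication as a moderator
    pm.Normal("rating", mu=mu, sigma=sigma, observed=pleasantness)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(idata, var_names=["b"]))
```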

Perceptual Expertise

Friday 11:00 – 12:40

Can we train deep learning models to identify hip fractures as efficiently as humans?

Jessica Marris | Amy Perfors | Piers Howe

University of Melbourne

With perceptual training, humans can rapidly improve their performance on visual tasks involving complex images (e.g., identifying whether a hip fracture is present in an X-ray image).  How do they achieve this?  Current models of perceptual learning are typically applied to simple images, rather than the complex images that are involved in real-world tasks.  We present a new perceptual learning model based on pre-trained deep convolutional neural networks (DCNNs).  We find that pre-trained DCNNs eventually learn to identify hip fractures in X-ray images with the same level of accuracy as humans, although DCNNs do not learn as rapidly as people do; they require larger training datasets.  We hypothesise that one reason for this may be that humans draw on pre-existing visual representations that are related to a fracture, such as ‘broken’ and ‘unbroken’.  The pre-trained DCNNs that we used did not have these visual representations, as these DCNNs had not been trained to recognise broken images.  We therefore investigate whether training DCNNs to differentiate between broken and unbroken objects allows them to learn to identify hip fractures in X-ray images more rapidly, thus better simulating human perceptual learning.  Our method and preliminary findings will be discussed.
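The transfer-learning setup described can be sketched as follows: an ImageNet pre-trained network whose final layer is replaced and retrained for a two-class decision. The architecture, optimiser, hyperparameters, and dummy batch are illustrative choices, not the authors' model or X-ray dataset.

```python
# Hedged sketch of fine-tuning a pre-trained DCNN for a two-class task
# (e.g., fracture vs. no fracture); the data here are random stand-ins.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                    # freeze the pre-trained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: fracture / no fracture

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for X-ray images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
print(float(loss))
```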

Perceptual experts can rely on stylistic gist information to discriminate naturalistic visual stimuli

Rachel A. Searston | Brooklyn J. Corbett | Samuel G. Robson | Matthew B. Thompson | Jason M. Tangen

The University of Adelaide | The University of Queensland | Murdoch University

Perceptual experts in forensic science, security, and health professions have demonstrated an astonishing ability to rapidly and accurately discriminate complex, naturalistic visual stimuli (e.g., mammograms, faces, fingerprints). But we still don't know what visual cues these perceptual experts are relying on to get the job done. For instance, do experts need the pieces, the particular features local to the images, to perceive the whole category? We present two experiments that probe experts' ability to detect the gist or style of a fingerprint when the local detail (the minutiae) cannot be relied on as a cue to the correct response. The first experiment involves a fill-in-the-blank task, where experts select one of seven fingerprint keyholes to fill in the blank of a print with a hole in it, much like a puzzle. The second experiment involves an identification task where examiners select one of four keyholes that matches a reference print taken from a different portion of the finger. We find that experts are able to infer what's missing in a print reliably above chance. They can also detect impressions from the same finger without corresponding local features to guide their response. Our findings are consistent with a coarse-to-fine account of visual recognition.

Adapting visual search tasks to investigate the analytic components of perceptual expertise

Samuel G Robson | Brooklyn J Corbett | Rachel A Searston | Jason M Tangen | Matthew B Thompson

The University of Queensland | The University of Adelaide | Murdoch University

Experts across several fields (e.g., chess, sport and diagnostic medicine) demonstrate clear intuitive abilities that novices do not. Diagnosticians, for instance, can detect abnormalities after only a short glance of a mammogram or X-ray. The more analytic components of visual expertise, however, remain under-explored. We used two variants of a visual search task to test the analytic perceptual skill of one group of experts: fingerprint examiners. One task assesses their ability to find a circular fragment in a whole print (Find-the-Fragment) and the other measures their ability to spot a circular fragment that has been changed between two prints (Spot-the-Difference). Participants, in addition, located either diagnostic fragments or non-diagnostic fragments within either intact fingerprint images or fingerprint images scrambled via Photoshop. In both tasks, experts performed best when locating diagnostic targets within intact images. When finding fragments, however, performance depended more on target diagnosticity, whereas performance depended more on image structure when spotting differences. These findings demonstrate that the visual search abilities of experienced examiners are affected by the local visual information they are looking for, as well as the global structure of the prints they are searching in.

Modelling the dynamics of perceptual discrimination with complex naturalistic stimuli

Matthew B. Thompson | Hector Palada | Rachel A. Searston | Annabel Persson | Timothy Ballard

Murdoch University | The University of Queensland | The University of Adelaide

Evidence accumulation models have been used to describe the cognitive processes underlying performance across a number of domains. Previous applications of these models have typically involved decisions about basic perceptual stimuli (e.g., motion discrimination). Applied perceptual domains, such as fingerprint discrimination, face recognition or medical image interpretation, however, require the processing of more complex visual information. The ability of evidence accumulation models to account for these more complex decisions is unknown. We apply a dynamic decision-making model – the linear ballistic accumulator (LBA) – to fingerprint discrimination decisions in order to gain insight into the cognitive processes underlying these complex perceptual judgments. We will present data from three experiments showing that the LBA provides an accurate description of the fingerprint discrimination decision processes with manipulations in visual noise, speed-accuracy emphasis, and training. We will argue that our results demonstrate that the LBA is a promising model for furthering our understanding of complex perceptual decisions, and close by contrasting the LBA model with the signal detection model.
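For readers unfamiliar with the LBA, the sketch below simulates its core decision process: each accumulator starts at a random point, rises linearly at a randomly drawn rate, and the first to reach threshold determines the choice and response time. The parameter values are arbitrary illustrations, not estimates from the fingerprint data.

```python
# Hedged sketch of the linear ballistic accumulator (LBA) decision process.
import numpy as np

rng = np.random.default_rng(3)

def lba_trial(drifts=(0.9, 0.6), A=0.5, b=1.0, s=0.3, t0=0.25):
    """Simulate one LBA trial; returns (choice index, response time in seconds)."""
    starts = rng.uniform(0.0, A, size=len(drifts))   # random start points
    rates = rng.normal(drifts, s)                     # trial-to-trial drift variability
    rates = np.where(rates > 0, rates, 1e-6)          # simple guard against negative rates
    finish = (b - starts) / rates                     # time for each accumulator to hit b
    winner = int(np.argmin(finish))
    return winner, t0 + float(finish[winner])

trials = [lba_trial() for _ in range(5000)]
choices, rts = zip(*trials)
print(np.mean(np.array(choices) == 0), np.mean(rts))  # choice proportion and mean RT
```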

Fractal-scaling statistics in the preference and perceived complexity of musical melodies

Catherine Viengkham | Branka Spehar

UNSW Sydney

Fractal-scaling (or 1/f) statistics are a ubiquitous part of our visual and auditory aesthetic experiences. Early studies showed that musical melodies with notes that follow an intermediate 1/f pattern are perceived as the most 'music-like' in comparison to random melodies (no structure between notes) or melodies with steep 1/f^a slopes (highly predictable structure between adjacent notes). Our study attempted to replicate these findings over a series of experiments utilising wide ranges of a values, different measurement procedures (two-alternative forced choice and Likert scales) and melody lengths (8 and 26 seconds). Despite the consistent findings of earlier studies, we did not observe the same preference pattern for melodies with intermediate 1/f statistics. Across our studies, preference either exhibited a flat distribution across slope variations or a slight preference towards less complex melodies (greater a). On the other hand, the perceived complexity of melodies increased as a approached 0 (random). This is consistent with previous findings and an indication that our experimental manipulation of a did effectively control musical complexity. Planned experiments to further explore the impact of 1/f statistics on auditory aesthetics and its congruence with current research in the visual domain are discussed.
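One common way to construct such stimuli is to shape the spectrum of a random sequence so that its power falls off as 1/f^a, then map the resulting contour onto musical pitches. The sketch below (where alpha corresponds to the exponent a above) is for illustration only; the study's actual stimulus-generation procedure may differ, and the function name and pitch range are assumptions.

```python
# Hedged sketch: generate a pitch contour whose power spectrum follows ~1/f^alpha.
# alpha = 0 gives a random (white) contour; larger alpha gives smoother, more
# predictable contours.
import numpy as np

rng = np.random.default_rng(4)

def fractal_pitch_contour(n_notes=32, alpha=1.0, pitch_range=(60, 84)):
    """Return MIDI note numbers whose contour has an approximately 1/f^alpha spectrum."""
    freqs = np.fft.rfftfreq(n_notes)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-alpha / 2.0)        # power ~ 1/f^alpha; DC component removed
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    contour = np.fft.irfft(amps * np.exp(1j * phases), n=n_notes)
    lo, hi = pitch_range
    scaled = lo + (contour - contour.min()) / np.ptp(contour) * (hi - lo)
    return np.round(scaled).astype(int)

print(fractal_pitch_contour(alpha=0.0))   # random melody
print(fractal_pitch_contour(alpha=2.0))   # smooth, highly predictable melody
```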

Emotional Attention

Friday 1:30 – 3:30

A vigilance-avoidance account of dual-stream emotion induced blindness

Mark Edwards | Matthew Proud | Stephanie Goodhew

The Australian National University

The attentional-bottleneck theory states there is a cognitive processing limit to the amount of information that we can simultaneously process. The high-level locus of this bottleneck has been questioned by dual-stream (DS) emotion-induced-blindness (EIB) studies.

A standard EIB stimulus consists of a rapid serial presentation of images in which a target briefly lags a neutral or emotive distractor. Target performance is worse for the emotive distractor. DS EIB consists of two spatially-offset streams with the target and distractor being in either the same or different streams. DS EIB is only obtained when they are in the same stream. This finding led to the proposal of an early bottleneck that is specific to location in space. Here we propose an alternative account that is consistent with the high-level locus of the bottleneck: vigilance avoidance. That is, when anxious observers are confronted with a negative distractor, they avoid it by shifting their attention to the other stream, hence decreasing DS EIB. We tested this theory by using both negatively and, critically, positively valenced stimuli, for which vigilance avoidance would not occur. Consistent with our theory, we obtained DS EIB for the negative but not the positively valenced distractor.

Does hemispheric processing influence emotion induced blindness?

Ella Moeck | Melanie Takarangi | Steven Most | Jenna Zhao | Nicole Thomas

Flinders University | University of New South Wales | James Cook University

Emotion induced blindness (EIB) occurs when an emotional distractor impairs people’s ability to detect a subsequent neutral target in a rapidly presented image stream (100 ms per image). EIB relies on visuospatial attention and the automatic processing of emotion, which are both right hemisphere functions. Given these hemispheric specialisations, we wondered: does hemispheric processing influence EIB? In three experiments, we presented image pairs horizontally whilst participants maintained central fixation. In Experiment 1, the distractor (negative, neutral) and the target always appeared in the same stream (left, right). In Experiments 2 and 3, the distractor and target appeared in the same, or the opposite, stream. Participants were better at detecting the target when it appeared on the left vs. right, because of the right hemisphere’s responsibility for visuospatial attention. We found a large EIB effect, but the size of this effect did not differ with right or left hemisphere processing. Contrary to vertical stream EIB experiments, we found EIB when the distractor and target appeared in the same, and the opposite, stream. We conclude that horizontal streams may give rise to non-spatially localised EIB, which advances our understanding of when graphic material decreases awareness of important, but neutral, material.

Context matters: Multiple emotional distractors improve target perception

Jenna L. Zhao | Adrian R. Walker | Steven B. Most

UNSW Sydney

Emotional distractors can impair our visual perception of goal-relevant items, a robust phenomenon known as emotion-induced blindness (EIB; Most et al., 2005). However, the world is full of distractions and few studies have examined the effect of multiple emotional distractors on target perception. Across three experiments, we presented participants with streams of rapidly presented upright images and investigated what happens when multiple distractors appear in an EIB trial. In Experiment 1, participants searched for a rotated target image within the stream that was preceded by either one distractor or multiple distractors of matching valence (negative or neutral). Robust EIB emerged in both conditions but multiple distractors led to higher accuracy than a single distractor. Experiment 2 showed that reduced distraction was only observed when all distractors in a single trial were matched on valence and not when they were mismatched. Experiment 3 explored whether pairing a neutral category, such as bicycles and watches, with electrical stimulation on a full reinforcement schedule would lead to similar effects. Preliminary results revealed that the negative distractor categories (natural and conditioned) produced more distraction than the neutral distractor categories. There was also an effect of exposure where multiple distractors led to reduced distraction.

Motivational Magnets: delayed attentional disengagement from stimuli predictive of reward

Poppy Watson | Daniel Pearson | Mike Le Pelley

UNSW Sydney

Salient stimuli in the visual scene tend to draw attention. These stimuli might be salient due to their physical characteristics (e.g., a brighter colour) or due to their motivational significance, having previously been paired with rewards such as money or tasty foods. Once our attention has been captured by such stimuli, attention also tends to linger longer at their location. Previous research has demonstrated delayed disengagement of attention from physically salient stimuli compared to neutral stimuli. The evidence for delayed disengagement from stimuli previously associated with high reward relative to low reward is, however, mixed. In the current study we used a novel task to investigate whether participants were slower to move their eyes away from a stimulus that signalled whether a high or low monetary reward was available on that trial. Participants needed to move their eyes quickly in order to receive the signalled financial reward; nonetheless, we found that participants were slower to begin moving their eyes away from the high-reward stimulus. These results demonstrate that even when motivated to move their eyes quickly, participants found it difficult to disengage attention from a stimulus signalling high reward, demonstrating that signals of reward are potent motivational magnets.

Approach bias and inhibitory control moderate the effect of television advertising on soft drink consumption

Eva Kemps | Marika Tiggemann | Amber Tuscharski

Flinders University

Exposure to environmental soft drink cues is a major contributor to soft drink consumption. This study investigated the effect of one such cue, television advertising, on soft drink choice and intake. We further examined whether effects would be stronger for individuals with an automatic tendency to reach for soft drinks (approach bias) or a difficulty resisting soft drinks (poor inhibitory control). Participants (N=127; 18-25 years) viewed television advertisements of soft drinks or other beverages. Soft drink choice was assessed by a choice task and intake by a taste test. Approach bias and inhibitory control were assessed by the approach-avoidance and go/no-go tasks, respectively. Participants who had viewed soft drink advertisements were more likely to choose a soft drink as their first drink, an effect that was stronger for those with an approach bias for soft drinks. Participants with poorer inhibitory control chose more soft drinks overall when they had viewed the soft drink advertisements. Exposure to soft drink advertisements did not affect intake. In line with dual-process models, individuals with strong automatic tendencies or poor self-regulatory control were more responsive to television advertising for soft drinks. These cognitive vulnerabilities provide potential targets for intervention to help resist soft drink cues.

Does attentional bias attenuate boundary extension?

Deanne Green | Melanie Takarangi

Flinders University

Ordinarily, people remember the boundaries of a scene as being more expansive than they actually were (boundary extension). Conversely, for emotional scenes, the boundaries are often narrowed at retrieval (boundary restriction). One explanation for boundary restriction is that negative scenes capture attention. However, evidence for this pattern is mixed. It is possible that negative scenes do not induce boundary restriction per se, but that their tendency to capture attention affects how people remember them. Evidently, divided attention can increase boundary extension (Intraub et al., 2008), so increased attention may attenuate boundary extension, or induce boundary restriction. We used a dot-probe paradigm to attract participants' attention to (congruent trials), or away from (incongruent trials), neutral images. After a delay, we tested subjects' memory but told them the camera distance of some images had changed. Subjects judged whether the "camera distance" matched that of the image they saw earlier, from "much farther" to "much closer" than the original, for congruent and incongruent trials. Subjects made more boundary-extension errors for incongruent trials than for congruent trials. These results support the assertion that boundary extension results from thinking and imagining beyond the boundary of an image (see Hubbard et al., 2010), and that attention capture reduces the opportunity for extrapolation beyond the boundaries of a view.

Social/Group Decision Making

Friday 1:30 – 3:30

Exemplary leadership: A cognitive modeling perspective on leadership judgments.

David K Sewell | Timothy Ballard | Niklas Steffens

The University of Queensland

How do people judge whether someone will make a good leader? Current theory suggests that leadership judgments are based on a potential leader's perceived ability to support a shared group identity (i.e., a sense of "we" and "us" among those they seek to lead). For example, the effectiveness of a potential leader has been shown to derive from the individual's capacity to create, embody, promote, and embed a shared group identity. However, the relative importance of these attributes remains unclear. We consider an exemplar-based perspective on leadership, where effectiveness judgments are based on how similar a prospective leader is to other individuals who are viewed as effective leaders. Using rating data for 80 highly recognizable Americans (e.g., celebrities, politicians, and other public figures), we found that leadership judgments were overwhelmingly based on attributes relating to actively promoting the interests of the group and embedding group identity. Attributes relating to facilitating in-group cohesion and embodying the ideals of the in-group did not appear to be relevant. These attribute weightings were consistent for judgments of general leadership (i.e., whether one is perceived as a competent leader) and judgments for a specific leadership role (i.e., whether one would make an effective US president).

Larger groups are better at category learning so long as they are organized right

Bradley Walker | Nicolas Fay

The University of Western Australia

Cultural evolutionary theory highlights the benefit of large populations for cumulative cultural evolution. By contrast, psychological research indicates that group decision-making often leads to poorer outcomes than individual decision-making, so the potential benefits of increased group size may depend on other factors. We conducted an experiment to test whether group structure is one such factor. Participants (N = 160) played a social categorisation game where they described alien creatures to each other and chose to stun and/or capture those creatures based on an underlying category structure learned through corrective feedback. The category structure was formed by the combination of a simple rule (e.g., long tentacles = stun) and a complex rule (e.g., two or more of: round heart, spiky fins and diamond eyes = capture). Participants played in dyads, or groups of four that were centralised (one individual connected to three others) or decentralised (four interconnected individuals) in structure. Task performance improved over trials as participants uncovered the two rules. However, participants in decentralised groups showed superior learning of the complex rule and the best overall performance. This result indicates that increasing group size can benefit performance when group structure is decentralised.
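The two-part category structure described above can be restated compactly in code. The feature names come from the abstract; the Boolean encoding of stimuli is ours.

```python
# Restatement of the category structure: a simple one-feature rule for "stun"
# and a complex two-of-three rule for "capture".
def classify_creature(long_tentacles, round_heart, spiky_fins, diamond_eyes):
    stun = long_tentacles                                        # simple rule
    capture = (round_heart + spiky_fins + diamond_eyes) >= 2     # complex rule
    return stun, capture

print(classify_creature(True, True, False, True))    # (stun, capture) = (True, True)
print(classify_creature(False, True, False, False))  # (False, False)
```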

When do Larger Populations Enhance Cumulative Cultural Evolution?

Nicolas Fay | Bradley Walker

University of Western Australia

The extent to which large populations enhance cumulative cultural evolution (CCE) is contentious. The present study (N = 407) tested whether the ability to selectively filter variants is necessary for larger populations to enhance CCE. Participants repeatedly built virtual arrowheads over 15 trials in populations of 3 or 6. Three social learning conditions were compared: View 1-Model, View All-Models (Select Order), and View All-Models (Random Order). Participants were told the score associated with each of the arrowheads produced by the other members of their population. In the View 1-Model condition they could choose one arrowhead to learn from, i.e., they could apply a selective filter. In the View All-Models conditions participants viewed all the arrowheads produced by the other members of their group, i.e., there was no selective filter. In the Select Order condition they could choose the order in which they viewed the arrowheads, and in the Random Order condition they viewed the arrowheads in a prescribed random order. Performance improved over trials in all conditions. The rate of improvement was boosted by population size only in the View 1-Model condition, indicating that a selective filter is critical if larger populations are to enhance CCE.

Blood is thicker than water: The effects of relationship type and knowledge of guilt on willingness to corroborate a false alibi

Melissa Boyce | Rita Diaz

University of Calgary

Alibi corroboration by family and friends of an accused is perceived with skepticism (Olson & Wells, 2004). But is this skepticism justified? Although Kienzie and Levett (2018) found greater willingness to corroborate a false alibi for a friend than a stranger, learning the accused had confessed led most participants to refuse to corroborate the false alibi regardless of the relationship between parties. We employed a 2 (confession: yes, no) x 3 (relationship: stranger, friend, family member) between-subjects factorial design to determine whether individuals are more willing to protect family members even after a confession. Subjects imagined participating in a study with a partner (either a stranger, friend, or relative) during which the partner sometimes confessed to stealing money. Participants were asked whether they would confirm their partner's false claim to have remained in the room. We found that participants were significantly less likely to confirm a false alibi for a stranger than for a friend or relative. Furthermore, although a confession significantly reduced participants' willingness to corroborate a false alibi for a friend, participants' willingness to corroborate a false alibi for a relative was unaffected. These findings highlight the importance of physical evidence whenever possible in cases involving alibi corroboration by a relative.

Evidence Against The Self-Categorization Account Of The Descriptive Norm Effect

Piers Howe | Campbell Pryor | Amy Perfors

University of Melbourne

The descriptive norm effect is the phenomenon that people are more likely to perform a particular action or hold a particular opinion when they know that other people have performed similar actions or hold similar opinions. Our previous work provided strong evidence against two accounts of this phenomenon, the information and social sanctions accounts, and argued in favour of a third account, proposed by self-categorization theory (Pryor, Perfors, & Howe, 2019, Nature Human Behaviour, 57-62). Self-categorization theory makes the intuitive prediction that people will actively avoid conforming to the norms of an outgroup in an effort to remain distinct from that outgroup. We tested this prediction in a series of experiments and found that, contrary to this prediction, people conformed to descriptive norms even when those norms came from the outgroup. This result was replicated across multiple settings, thereby ruling out a number of alternative explanations. These results argue against self-categorization theory and demonstrate that, in a broad range of circumstances, a general desire to conform with others can overpower the common ingroup vs outgroup mentality. We discuss possible practical applications of this and make suggestions as to how this general conformity mechanism may operate in practice.

Compensation and Theory of Mind assessments: An investigation into the value of multiple answer time limitations.

Liam Allen

Massey University

Over the last two decades there has been an increased interest in Theory of Mind (ToM), leading to the development of an array of ToM assessments. However, significant problems have been identified with assessing ToM. One such problem is that many ToM assessments are easily passable by individuals with ToM deficits through compensation. Using compensatory strategies, individuals are able to use alternative physical or neurological processes to superficially improve their performance on a ToM measure, rendering the assessment invalid. One way to improve the validity of ToM assessments may lie in the addition of multiple answer time limits to ToM tasks, because higher ToM assessment scores in compensating individuals come at the cost of slower answer times and more fragile abilities. This study used an adapted version of the Reading the Mind in the Eyes Test (RMET) to examine whether reducing the time people have to respond to each trial improves the measure's convergent validity. Results of a pilot study suggest that RMET performance is better correlated with an empathy measure when response times to RMET trials are shorter. This approach provides an easy method for improving existing assessments of ToM in older populations and for assessing individual differences in compensation on ToM measures.

Perception: Time and Motion

Friday 1:30 – 3:30

Rapid recalibration of temporal order judgements: Response bias accounts for contradictory results

Brendan Keane | Nicholas Bland | Natasha Matthews | Timothy J Carroll | Guy Wallis

The University of Queensland

Recent findings indicate that timing perception is systematically changed after only a single presentation of temporal asynchrony. This effect is known as rapid recalibration. In the synchrony judgement task, similar timing relationships seem more synchronous in consecutive trials (positive rapid recalibration; Van der Burg et al., 2013, 2015). Interestingly, the direction of this effect is apparently reversed (negative rapid recalibration; Roseboom, 2019) for temporal order judgements (TOJs). We aimed to determine whether this negative effect reflects genuine rapid recalibration, or instead reflects a response bias unrelated to timing perception. In our first experiment we found no evidence of rapid recalibration of TOJs, but positive rapid recalibration of associated confidence. This indicates timing perception had rapidly recalibrated, but this was undetectable in TOJs, plausibly because it was obfuscated by a large negative response bias effect. In our second experiment, we dissociated participants' previous response from the previous timing relationship, mitigating the predicted response bias effect, and found evidence of positive rapid recalibration of TOJs. We propose that timing perception is positively rapidly recalibrated in synchrony and temporal order judgement tasks, and that the discrepancy between these two was due to the susceptibility of the TOJ task to response bias effects.

Neural correlates of subjective timing precision

Derek H. Arnold | Wiremu Hohaia | Kielan Yarrow

The University of Queensland | City University of London

There can be marked individual differences in subjective timing precision. The causes for this are unclear. One possibility is that timing precision scales with variance in the dynamics of evoked brain activity. Another possibility is that evoked patterns of brain activity differ systematically, in a manner that makes timing judgments more difficult for imprecise judges. We assessed these possibilities by conducting audio-visual (AV) timing judgments, in conjunction with electroencephalography (EEG) and multivariate pattern classification analyses. We found we could decode, on a trial-by-trial basis from measures of individual brain activity, both the timing of physical AV stimulations, and timing decisions. There was, however, no relationship between timing precision and decoding success rates for physical stimulation. As decoding success relies on similar activation patterns on repeated trials, this suggests imprecise timing does not result from more variable brain dynamics. Instead, we find evidence that AV stimulation evokes distinct patterns of activity in the brains of precise and imprecise judges of timing. Precision was associated with exaggerated responses ~300ms after initial physical stimulation. We suggest temporal excitatory and inhibitory interactions are exaggerated in the brains of precise judges of timing, which more clearly distinguishes patterns of activity evoked by distinct physical inputs.
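Trial-by-trial decoding of this kind can be sketched with a generic cross-validated classifier, as below. Random numbers stand in for the single-trial EEG features, and the scikit-learn pipeline is an illustrative choice rather than the authors' analysis.

```python
# Hedged sketch of trial-by-trial multivariate decoding of a stimulation
# condition from single-trial EEG features (placeholder data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials = 200
X = rng.standard_normal((n_trials, 64 * 40))   # e.g., 64 channels x 40 time points per trial
y = rng.integers(0, 2, n_trials)               # physical AV timing condition on each trial

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())   # ~0.5 here, because the placeholder data carry no signal
```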

Effects of self-initiation and temporal prediction on motor-evoked potentials.

Evan Livesey | Dominic Tran | Nicolas McNair

The University of Sydney

The brain’s response to sensory input is modulated by prediction. For instance, sounds that are produced by one’s own actions or that are strongly predicted by other environmental cues are accompanied by an attenuated N1 component in auditory evoked potentials and perceived as being less salient. Here we tested whether the neural response to direct stimulation of the brain is attenuated by prediction in a similar way. Transcranial magnetic stimulation (TMS) applied over primary motor cortex is often used to gauge the excitability of the motor system. Motor-evoked potentials (MEPs), elicited by TMS and measured in peripheral muscles, tend to be larger when actions are being prepared and smaller when actions are voluntarily suppressed. We tested whether the magnitude of the MEP was attenuated under circumstances where the TMS pulse can be reliably predicted, even though control of the relevant motor effector was not required. Self-initiation of the TMS pulse was associated with reduced MEP magnitude. We will discuss how this relates to domain-general effects of prediction on neural systems.

Extracting information about one’s curved path through the world from 2-D video sequences.

John A. Perrone | Michael J. Cree | M. Hedayati

The University of Waikato

A common activity for humans involves movement along curved paths (e.g., when driving around a curve). We still do not have a good understanding of how the visual system carries out this task. A model has been proposed for how humans are able to recover information about their curved path through the world (curvilinear motion) from a combination of visual motion information on the retina and information from the vestibular system (Perrone, Journal of Vision, 18, 2018). However, the model was only tested with theoretical visual motion inputs (optical flow fields). We have now carried out a series of tests using brief video sequences of natural scenes. A video camera was moved along curved paths using a computer-controlled X-Y table at a range of rotation rates and radii. The 2-D visual motion in eight frames of the video was measured using a model of flow field extraction based on primate middle temporal (MT/V5) neurone properties (Perrone, Journal of Vision, 12, 2012). This flow field was then analysed using the curvilinear motion model. The curvilinear rotation estimates from the model were accurate, but the image motion information needed to be supplemented with camera rotation rate data from an inertial measurement unit (IMU).
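
As an illustration only, the pipeline described above can be approximated in a few lines: dense optical flow (here OpenCV's Farneback method, standing in for the MT-based flow extraction model) yields a frame-to-frame rotation estimate that can be compared with, or supplemented by, an IMU yaw-rate reading. The clip name, focal length, frame rate, and IMU value are placeholder assumptions.

    # Sketch: frame-to-frame yaw estimate from dense optical flow, compared with an
    # IMU yaw-rate reading. Farneback flow stands in for the MT-based flow model;
    # the clip name, focal length, frame rate, and IMU value are placeholders.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("curved_path_clip.mp4")    # hypothetical video of the X-Y table run
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    fps = 30.0
    focal_px = 800.0                                  # assumed focal length in pixels
    imu_yaw_rate_dps = 12.0                           # placeholder IMU reading (deg/s)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean horizontal flow approximates the rotational (yaw) component of camera motion.
        mean_dx = float(np.mean(flow[..., 0]))        # pixels per frame
        yaw_from_flow_dps = np.degrees(mean_dx / focal_px) * fps
        print(f"flow-based yaw: {yaw_from_flow_dps:5.1f} deg/s vs IMU: {imu_yaw_rate_dps:.1f} deg/s")
        prev_gray = gray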

Direction discrimination when multiple objects are in motion

David R. Badcock | Julia C. Haile | Nateesha Tuckett | Mark Edwards

The University of Western Australia | Australian National University

Motion signals are extracted by receptive fields that are typically smaller than the images of objects of interest. This necessitates pooling of motion signals to determine an object's motion. However, scenes containing multiple objects also require segmentation; that is, signals from the same object must be pooled while signals from other objects are segmented out. We used psychophysical methods to examine how motion pooling and segmentation are affected by form information, specifically how the spatial structure and feature similarity of the motion signals affect pooling and segmentation when processing direction. We found that an identical number of motion signals leads to better direction discrimination when presented as a familiar shape, provided they are presented within a larger field of random-direction motion signals; and that the perceived direction of a circle of elements is altered by a second, concentric circle drifting in a different direction, regardless of whether it is inside or outside of the test circle.

Neuromuscular Coupling in Sensorimotor Synchronisation

Patti Nijhuis | Peter E. Keller | Sylvie Nozaradan | Manuel Varlet

MARCS Institute for Brain, Behaviour, and Development, The University of Western Sydney

Synchronised movements such as dancing or side-by-side walking are often encountered in daily life. However, their underlying neural mechanisms remain largely unknown. Recent studies have found neural oscillations in the beta-band frequencies (12-40 Hz), commonly associated with movement production, to resonate with environmental rhythms. To investigate whether neural resonance extends to sensorimotor synchronisation, we examined the cortico-muscular coupling (CMC) underlying movement synchronisation to auditory and visual rhythms. Participants either produced or imagined finger tapping guided by visual or auditory stimuli presented in 1.25 Hz bimodal syncopated sequences. In the imagined and control conditions, participants maintained a constant pressure with their index finger during stimulus presentation. Cortico-muscular coherence was computed between the electroencephalogram (EEG) and the electromyogram (EMG) of the flexor digitorum superficialis muscle, using time-frequency analysis to assess dynamic changes in the synchronisation of the signals in the beta range. The results show amplitude modulations of beta-band CMC that are time-locked to the visual or auditory sequences. Such modulations occurred for both produced and imagined tapping but not in the control condition, and they were of larger magnitude for produced than imagined tapping. These findings suggest that cortical and muscular activity become more synchronised during movement and movement-oriented attention, allowing efficient movement-stimulus coordination.
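
For readers unfamiliar with the measure, a minimal sketch of sliding-window cortico-muscular coherence in the beta band is given below; the sampling rate, window length, and synthetic signals are assumptions, not the authors' analysis settings.

    # Sketch: beta-band (12-40 Hz) coherence between one EEG channel and the rectified
    # EMG, computed in sliding windows. Signals and parameters are placeholders.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000                                   # sampling rate in Hz (assumed)
    n = 60 * fs                                 # 60 s of data
    rng = np.random.default_rng(1)
    eeg = rng.standard_normal(n)                # placeholder EEG channel
    emg = np.abs(rng.standard_normal(n))        # placeholder rectified EMG

    win, step = 2 * fs, fs // 2                 # 2 s windows, 0.5 s steps
    beta_cmc = []
    for start in range(0, n - win, step):
        f, cxy = coherence(eeg[start:start + win], emg[start:start + win],
                           fs=fs, nperseg=win // 4)
        band = (f >= 12) & (f <= 40)
        beta_cmc.append(cxy[band].mean())       # mean coherence in the beta band
    print(np.round(beta_cmc[:5], 3))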

Faces: Individual Differences

Saturday 9:00 – 10:20

Broader object recognition is normal in developmental prosopagnosia

Hazel K. Godfrey | Tirta Susilo

Victoria University of Wellington

A fundamental issue in developmental prosopagnosia (DP) research is whether DP recognition deficits are specific to faces or extend to other objects. Past studies found mixed results, likely due to small sample sizes and the limited types of objects tested. To resolve these issues, we ran a large-scale study in which 100+ DPs and 100+ controls completed recognition tests of faces, bicycles, bodies, cars, hair, and houses, using the well-validated format of the Cambridge Memory Test. Group analyses show that DPs had the most trouble with faces and minor trouble with cars, but they were no worse than controls on the four remaining object categories. Our findings provide strong evidence that broader object recognition is not impaired in DP, indicating that the core deficit in DP spares domain-general recognition mechanisms that apply to all objects. We discuss what our results mean for various hypotheses of DP deficits, describe follow-up experiments exploring the subtle deficits with cars, and provide the first estimate of the prevalence of pure prosopagnosic individuals impaired with faces but normal with all other objects in the DP population.

Individual Differences in Serial Dependence of Facial Identity Contribute to Variation in Face Recognition Abilities

Kaitlyn Turbett | Romina Palermo | Jason Bell | Jessamy Burton | Linda Jeffery

University of Western Australia

Serial dependence is a perceptual bias where current perception is biased towards prior visual input. This bias occurs when perceiving visual attributes, such as facial identity, and has been argued to play an important functional role in vision, stabilising the perception of objects by integrating visual information. The strength of serial dependence varies individually, but it is unknown whether this variation is related to face recognition, as has been found for other perceptual biases. We measured face recognition ability in 219 adults and developed a new measure of serial dependence of facial identity. Participants sequentially viewed two faces, which varied in similarity, and were asked to identify the second face. We found that stronger serial dependence for more similar faces was associated with better face recognition abilities, whereas stronger serial dependence for more dissimilar faces was associated with poorer face recognition. This suggests that it is the extent to which an individual selectively uses serial dependence that is important to face recognition, rather than the overall strength of the bias. This association between serial dependence and face recognition ability is consistent with the view that serial dependence plays a functional role in face recognition.

Let's split hairs: Examination of face learning strategies in developmental prosopagnosia

Morgan Reedy | Hazel K. Godfrey | Tirta Susilo | Christel Devue

Victoria University of Wellington

Developmental prosopagnosics (DPs) have debilitating deficits in face recognition. Some DPs report that they recognize people through peripheral information like clothing or hair. However, most studies test DPs with cropped images, neglecting the potential contribution of extra-facial information (hair, ears, jawline), and perhaps exaggerating their deficits. Moreover, no research has systematically investigated how changes in appearance might affect DPs' recognition. In this study, DPs (N = 30) studied videos of three identities and then performed a recognition test in which images were either similar (i.e., same hairstyle and makeup) or dissimilar in appearance to learning. To assess the contribution of extra-facial features, half of the images showed only inner features (eyes, nose, mouth) and half included extra-facial features. Consistent with the deficits they report, DPs' recognition was impaired when extra-facial information was concealed or had changed. However, controls also showed impairments in these conditions, and DPs were as accurate as controls when extra-facial information similar to learning was present. Therefore, the learning strategies used by DPs and controls may not be as different as currently thought. Both use extra-facial features as identity cues and, under the right circumstances, DPs can learn new faces as efficiently as controls.

Facial expression recognition difficulty in the autism phenotype reflects both alexithymia and perceptual differences

Ellen Bothe | Romina Palermo | Gillian Rhodes | Nichola Burton | Linda Jeffery

University of Western Australia

Autistic people often show impairment in facial expression recognition, an important social ability. However, the extent of impairment varies widely, with some individuals showing profound difficulty and others performing no differently to controls. This variation might be a function of differing autistic symptom profiles. In a typical sample (N = 145), we found that more autistic-like social communication was associated with more difficulty labelling and perceptually discriminating between expressions, and more autistic-like social skills were associated with poorer labelling of expressions. We found evidence of two independent sources of these difficulties. The first, alexithymia, describes diminished processing of internal emotional experience, with high levels common but not universal in autistic people. Alexithymia mediated the associations of autistic-like social skills and communication with expression labelling. The second, adaptive norm-based coding of expression, is a perceptual process that facilitates expression recognition by calibrating perception to current demands. Weakened adaptive norm-based coding mediated the associations between autistic-like social communication and poorer labelling and perceptual discrimination of expressions. Overall, these results suggest expression recognition varies meaningfully with the phenotypic expression of autistic symptom domains, with difficulties reflecting unique contributions from both alexithymia and a key perceptual mechanism, adaptive norm-based coding.

Consciousness

Saturday 9:00 – 10:20

Memories and dreams of a blind mind: The cognitive impact of visual imagery deficits in Aphantasia

Alexei Dawes

University of New South Wales

Visual imagery is thought to play a critical role in supporting naturalistic cognitive processes such as episodic memory, future event prospection, dreaming, spatial navigation and emotional regulation. Some individuals, however, lack the ability to voluntarily generate visual imagery altogether, a condition termed aphantasia. Recent research suggests that aphantasia is a condition defined by the veritable absence of visual imagery, rather than a lack of metacognitive awareness of imagery. Here we elucidate a cognitive fingerprint of aphantasia, demonstrating that compared to participants with normal visual imagery ability, individuals with aphantasia also describe imagery deficits in other sensory domains, report less vivid and phenomenologically rich episodic memories and imagined events, and experience less qualitatively rich night dreams. Interestingly, individuals with aphantasia show normal spatial imagery ability, and are not less likely to report memory intrusions of the kind consistent with the re-experiencing of past traumatic events. Implications for our understanding of visual imagery are discussed, and it is argued that visual imagery may act as a normative but ultimately non-essential representational format for wider cognitive processes.

Top-down signals of a huntsman spider on a rubber hand do not dampen subsequent bottom-up signals from inducing a rubber-hand illusion.

Philippe A Chouinard | Rachel Stewart

La Trobe University

Rubber-hand illusion studies typically threaten the false hand for the purposes of verifying embodiment. For the first time, we tested if embodiment could occur with a threat in place prior to elicitation of the illusion by having a live huntsman spider on the false hand. There were three conditions: synchronous movements with a live huntsman spider (n = 17), synchronous movements with a fake spider (n = 17), and asynchronous movements with a fake spider (n = 17). The results revealed that the embodiment of the false hand under threat occurred as strongly as when there was no threat. Responses on ownership questionnaires (p = .432) and perceptual drift measures (p = .991) did not differ between the huntsman and fake spiders during synchronous conditions. On the other hand, a greater transfer in ownership occurred in the synchronous compared to the asynchronous conditions as assessed by the same measurements (all p < .002). The results suggest that any top-down influence of the threat was not strong enough to interfere with bottom-up processes driving the illusion. Implicit attitudes towards spiders were also assessed. No differences were observed when the huntsman spider was present.

What limits visual awareness in the context of hand and tool interactions? – An investigation using the continuous flash suppression paradigm

Regine Zopf | Stefan R. Schweinberger | Anina N. Rich

Macquarie University | Friedrich Schiller University

Interacting with the world requires processing visual information about different types of object categories, including effectors such as hands and tools. We asked if there are hand/tool-specific limits to visual awareness and tested if category-specific limits can be predicted by the similarity of cortical high-level object representations. This model is based on the finding that in the continuous flash suppression (CFS) task certain object categories (e.g., faces) are more effective in blocking awareness of other categories (e.g., buildings) than other combinations (e.g., cars/chairs), an effect that was found to be related to category-pair representational similarity in higher visual cortex. Because cortical hand and tool representations are known to overlap, we predicted longer breakthrough times for hands/tools compared to other pairs. In contrast, across three experiments participants were generally faster at detecting targets masked by hands or tools compared to other mask categories. Exploring potential low-level explanations, we found that category averages for edges (e.g. hands have less detail compared to cars) were the best predictor for the data but could not completely account for the hand/tool effects. Overall, our findings provide evidence for a category-specific low-level limit for hands and tools, and potential high-level bottlenecks for visual awareness require more testing.

Holistic Processing of Conscious and Unconscious Faces

Haiyang Jin | Paul Corballis | Matt Oxner | William G. Hayward

The University of Auckland | The University of Hong Kong

Previous research suggests that holistic face processing is implicated in face recognition. However, little is known about the role of consciousness in holistic face processing. The present study explores the holistic processing of conscious and unconscious faces. Holistic processing was measured by the composite task and stimulus components were presented unconsciously with continuous flash suppression (CFS). In the first two experiments, participants performed the composite task with the irrelevant bottom halves of both study and test faces presented consciously (monocular) or unconsciously (CFS). Results showed the composite effect was only found in the conscious, but not in the unconscious, condition. However, a marginally significant composite effect was observed when only the bottom halves of test, but not study, faces were presented unconsciously. Moreover, catch trials were embedded in the second experiment to test whether identity information of the face components could be processed at all in the unconscious condition. Performance on the catch trials was not significantly above chance suggesting that component identity could not be processed in the unconscious condition. Taken together, these results show that unconscious face information does not appear to be processed holistically.

Reading 2

Saturday 9:00 – 10:20

The Written-Order of Strokes Influences Chinese Character Identification: Evidence from A Variant of RSVP Task

Lili Yu | Jiakun Liu | Qiaoming Zhang | Sachiko Kinoshita

Macquarie University | Ludong University

Readers' knowledge of a language is known to influence letter (or character) perception beyond the visual features. However, a recent study by Zhai & Fischer-Baum (2018) showed a null effect of stroke motoric knowledge on Chinese character perception for both naive and native Chinese readers in a same-different task. In the current study, we employed a variant of the RSVP (Rapid Serial Visual Presentation) task, presenting each stroke of Chinese characters rapidly one after another, with participants' task being to judge whether the character presented at the end of the trial matched that formed by the strokes. The critical manipulation was whether the strokes of each character were presented in the correct or reversed written order, with each stroke being presented in its correct spatial location in relation to the character in Experiment 1, and centrally in Experiment 2. Both experiments revealed a positive stroke written-order effect: it was easier to recognize a character when its strokes were presented in the correct, as compared to the reversed, written order. Discrepancies between the current study and Zhai & Fischer-Baum's, and the theoretical implications for the role of stroke written order in Chinese character identification, will be discussed.

Orthographic learning in Chinese: a role for semantic decoding?

Luan Li | Hua-Chen Wang | Anne Castles | Miao-Ling Hsieh | Eva Marinus

Macquarie University | National Taiwan Normal University | The Schwyz University of Teacher Education

Children's word-reading ability is closely related to their vocabulary knowledge. Yet it is not clear how knowledge about a word's meaning contributes to learning to read it. Most Chinese characters have a semantic radical providing cues to the meaning (e.g., ? means rice, and the semantic radical ? means food). In this study, we investigated the role of building the print-meaning link via the semantic radical – a mechanism we call semantic decoding – in orthographic learning of Chinese characters. Ninety-two Grade 4 children were taught the pronunciations and meanings of 16 novel characters. They were then exposed to the written characters in stories. Half the characters contained semantic radicals related to the taught meaning; the other half were unrelated. Half of the children learned the characters' regular pronunciations; the other half learned irregular pronunciations. There was better orthographic learning of regular characters across measures of spelling, orthographic choice and word naming, replicating our previous finding that phonological decoding supports orthographic learning in Chinese. However, semantic decoding had no impact on orthographic learning. The results provide evidence that even in Chinese, where print is considered closely linked to meaning, word semantics makes a limited contribution to orthographic learning.

What do Artificial Orthography Learning tasks actually measure?

Xenia Schmalz | Kristina Moll | Gerd Schulte-Korne

University of Munich

We aimed to test whether Artificial Orthography Learning (AOL) is a viable experimental paradigm to model reading acquisition in children. First, to measure participant-level differences, learning performance should capture a stable participant characteristic: when the same participants learn two orthographies, their performance should correlate. Second, if AOL mimics the process of reading acquisition, we expect a correlation between learning performance and reading ability. Third, performance should not merely reflect the ability to memorise arbitrary symbol-sound associations, resulting in weak correlations between performance on the AOL and Paired Associate Learning (PAL) tasks. We tested 70 adult participants on two AOL tasks, reading ability, and a PAL task. We found high correlations between learning of the two AOL tasks, suggesting that performance captures a stable individual characteristic. Correlations with reading ability and PAL were low, suggesting that AOL is dissociable from reading ability and from the ability to memorise arbitrary symbol-sound associations.

Orthographic skeletons: What form do they take?

Signy Wegener | Hua-Chen Wang | Kate Nation | Anne Castles

Macquarie University | University of Oxford

When children know a spoken word and understand phoneme-to-grapheme mappings, they form orthographic skeletons, or expectations about the likely spellings of words they have previously heard but never seen (Wegener et al., 2018). Here, we asked whether skeletons are built around a word's consonants, its vowels, or both. Forty-one Grade 4 children received oral vocabulary training on one set of 18 novel words (e.g. desh, taff, jorv) over four sessions, while another set were untrained. Spellings were either predictable from their phonology (e.g. desh), or included an unpredictable consonant (e.g. taph) or vowel (e.g. jauv). Trained and untrained items were shown in print for the first time, embedded in sentences, and children's eye movements were monitored. Trained items with predictable spellings were consistently fixated for shorter periods than untrained predictable spellings. Early processing measures (first fixation and gaze duration) showed that this benefit of oral training for predictable spellings was significantly larger than for unpredictable consonant and unpredictable vowel spellings. Late in processing (total reading time), this pattern persisted only for unpredictable consonants. These results suggest that orthographic skeletons initially involve both consonants and vowels, with unpredictable vowel spellings possibly being more rapidly resolved online than unpredictable consonant spellings.

Face Recognition

Saturday 11:20 – 1:00

Recognition for Pairs of Unfamiliar Faces and Recall of Ethnicity and Gender Information

Todd C. Jones

Victoria University of Wellington

Prior research on episodic memory of unfamiliar faces indicates that a single, relatively automatic memory retrieval process can provide a good account of recognition performance for single items.  In contrast, there is some evidence that, after studying pairs of faces (A-B, C-D, E-F,  . . . ), people may be able to use a controlled retrieval process to avoid judging rearranged pairs (C-F) as having been seen together.  What information people retrieve in these cases is unclear. In addition, verification of what information may have been retrieved is a big challenge.  In three experiments we manipulated whether the ethnic and gender combination in the rearranged pairs changed or stayed the same relative to the studied combination, and we asked participants to give reasons for correct rejections.  This procedure meant that (a) there was the opportunity to judge a rearranged pair as new by remembering something obvious about an absent studied face (ethnicity or gender) and (b) this type of recall could be checked objectively.  Nevertheless, retrieval of ethnicity, gender, or other information to avoid an error was surprisingly low.

Deciding you don’t know in face recognition memory.

Andrew Heathcote |Angus Reynolds | Rod Garton | Valera Griffin | Peter Kvam | James Sauer | Adam Osth

University of Tasmania | The Ohio State University

In two experiments we studied yes/no and two alternative forced choice (2AFC) recognition memory decisions where participants were also able to give a don’t-know (DK) response. Both experiments used the same set of highly similar face pairs that enabled a manipulation of similarity by morphing structurally corresponding faces together to different degrees. DK usage was manipulated by instructions emphasising either speed or accuracy, and by penalizing errors in definitive (i.e., non-DK) responses to varying degrees. We successfully fit the data with a new model of both response choices and response time (RT), the Multiple Threshold Race, which keeps the same tractable architecture as standard evidence-accumulation models but adds extra thresholds to account for DK responses. Model parameters provided insights into the psychological causes of the complex and interacting effects of choice format, similarity, speed vs. accuracy emphasis and penalties on both definitive and DK response probabilities and corresponding RT distributions.
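
To make the race idea concrete, here is a toy simulation with an extra threshold that yields don't-know responses; it is a simplified illustration under assumed parameters, not the Multiple Threshold Race model itself.

    # Toy race with two thresholds: if the winning accumulator reaches the higher
    # "definitive" threshold before a deadline, a definitive old/new response is made;
    # otherwise a don't-know response is given. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_trial(drift_old=1.2, drift_new=0.8, b_definitive=1.0, b_dk=0.6, deadline=1.5):
        v = np.maximum(rng.normal([drift_old, drift_new], 0.3), 0.05)  # noisy accumulation rates
        t_definitive = b_definitive / v        # time to reach the higher threshold
        t_dk = b_dk / v                        # time to reach the lower threshold
        winner = int(np.argmin(t_definitive))
        if t_definitive[winner] <= deadline:
            return ("old" if winner == 0 else "new", t_definitive[winner])
        return ("dont_know", min(t_dk.min(), deadline))

    responses = [simulate_trial()[0] for _ in range(1000)]
    print({r: responses.count(r) for r in set(responses)})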

Strong role for image information in axis and view dependent face recognition: evidence from a same view task

Simone Favelle | Angela Anchor

University of Wollongong

Face recognition across a change in view is both axis and view dependent. Specifically, viewpoint-dependent declines have been found to be steeper across views of faces rotated in the pitch axis compared to yaw. To date, this has only been demonstrated in view generalisation tasks, confounding the roles of intrinsic image information and the mechanism(s) used to compare that information. The relatively poor face recognition performance for pitch views could be a consequence of: (a) pitch views containing inadequate visual face information, or (b) the mechanism used to compare views (e.g., transformation, interpolation, or rotation) being more difficult to execute in the pitch than yaw axis. Here we used a same view task to remove the need to compare different views and test the role of image information in face recognition. Faces were shown upright and inverted in views rotated in 15° increments in the yaw and pitch axes. Consistent with generalisation tasks, performance was significantly poorer, with a steeper viewpoint decline, when recognising faces rotated in pitch compared to yaw. These results suggest that axis and view dependent effects in face recognition are determined by the quality of the face information that can be extracted from an image.

Why is Jared Leto more refined than Tom Cruise? The role of stability in developing parsimonious facial representations

Christel Devue

Victoria University of Wellington

The facial information we use to memorise large numbers of faces remains unclear despite decades of experimentation. We developed a theory that assumes representations are cost-efficient and include different diagnostic features in different faces, regardless of familiarity. Features that remain stable over encounters are diagnostic, and so they receive more representational weight. Importantly, to decrease storage demands, coarse information is privileged over fine details. This creates parsimonious facial representations that refine over time if appearance changes. The theory predicts that representations of people with a consistent appearance (e.g., Tom Cruise) include stable coarse extra-facial features, and so their inner features need not be encoded with the same high resolution as those of equally famous people who change appearance frequently (e.g., Jared Leto). In three preregistered experiments, participants performed a recognition task in which we controlled actors' appearance (variable, consistent) and popularity (higher, lower). In line with our theory, for less popular actors, stable extra-facial features helped participants remember consistent faces compared to variable ones. However, among more popular actors, variable actors were better recognised than consistent ones, suggesting representations of the former had refined over time. We will discuss broader implications of our theory for the field.

Decomposing the composite effect in face perception

William G. Hayward | Luyan Ji | Haiyang Jin

University of Hong Kong | University of Auckland

The composite face effect occurs when the top half of one face is combined with the bottom half of another face, changing the perceived appearance of each half as they appear to form an entirely new face (Hay, Young, and Hellawell, 1987). The composite task is often used as a strong test of holistic face processing.  However, in recent years a debate has emerged about the best way to conduct the composite task; in addition, other tests of holistic processing correlate poorly, if at all, with the composite task. In this study we tested judgments of component identity while manipulating component size within the composite face task, and then examined subcomponents of the task to look for both interference and facilitation effects from distractor information. Overall we found strong evidence for interference and weaker evidence for facilitation, and negligible effects of size differences between target and distractor. These results allow us to develop a more comprehensive understanding of the composite task and its relationship with other measures of holistic processing.

Attention in Real World Contexts

Saturday 11:20 – 1:00

Inattentional blindness to medically-relevant stimuli in radiologists

Lauren H. Williams | Ann Carrigan | William F. Auffermann | Megan Mills | Anina N. Rich | Trafton Drew

University of Utah| Macquarie University

Attention allows us to focus on task-relevant information without being overwhelmed by the constant barrage of sensory input. However, this ability comes with a cost: salient events are sometimes missed when attention is directed elsewhere. In a famous example of this inattentional blindness, ~50% of individuals failed to notice a person in a gorilla suit when performing another task (Simons and Chabris, 1999). This effect has been replicated in radiologists, who failed to notice an image of a gorilla when performing lung-cancer screening (Drew, Võ, & Wolfe, 2013). This phenomenon could help explain a common source of error in radiology: missing incidental but clinically-important findings. Here, we tested whether inattentional blindness occurs when the unexpected stimulus is medically-relevant. Radiologists (n=45) performed a lung-cancer screening task. In addition to nodules, the final case contained a large breast mass. 66.7% reported there were no signs of breast cancer in the final case. Of these radiologists, 20% looked directly at the mass. Neither years of experience nor the number of chest CTs evaluated per week predicted which radiologists detected the breast cancer, ps > .05. These results highlight the influence of goal-directed attention on the evaluation of medical images, which may contribute to errors in radiology.

Visual exploration and attentional control in pedestrian safety

Sebastien Miellet | Victoria Nicholls

University of Wollongong | Bournemouth University

Each year 270,000 pedestrians die in road-traffic accidents and millions are injured worldwide. In a series of recent studies, we investigated how, when, and where pedestrians, in particular vulnerable pedestrians (children and the elderly), extract visual information and make road-crossing decisions. We tailored a range of novel eye-movement, image-processing and EEG techniques to offer fine-grained, robust results. Our results revealed that the decisional age bias in road-crossing decisions is linked to specific gaze patterns. Young children's attention is attracted towards task-irrelevant distractors, and they have difficulty disengaging overt attention under high cognitive load. In a follow-up study, we confirmed the involvement of executive functioning and showed that, in contrast to explicit knowledge, performance on the anti-saccade task was linked to safer decisions. In older adults, we showed that the decline of executive functions impacts on visual exploration and decision latencies. Finally, we developed a new approach using eye-tracking with EEG and steady-state visually evoked potentials during smooth pursuit. Our results revealed a unique neural signature of divided attention in road-crossing contexts. Future work will involve the use of VR to allow for 3D oculomotor mapping, with the aim of designing effective training programs.

What are you looking at? How learner drivers scan their visual environment while driving

Rachael A. Wynne | Vanessa Beanland | Gemma J. M. Read | Paul M. Salmon

University of the Sunshine Coast | University of Otago

Research has consistently demonstrated differences in novice and experienced drivers' visual search patterns. Despite the learner period representing a critical developmental phase for drivers' schemata, most research has been cross-sectional, comparing learners or newly-licensed drivers with much more experienced drivers, and little longitudinal research covers the evolution of scanning patterns during the learner phase. As part of a program of research investigating visual attention in driving, we are monitoring a group of learner drivers as they complete their required training via a longitudinal study. Thirty learner drivers viewed short videos of driving scenes while their eye movements were tracked using a non-invasive remote eye tracker. Videos were road scenes of various road types from a driver's perspective. Footage was filmed across familiar (local to participants) and unfamiliar (foreign) roadways. This presentation focuses on findings from the first time of testing, when participants had logged fewer than 15 hours of supervised driving practice. A significant main effect was found for roadway type, but not location familiarity, in the percentage of identified hazards. Additionally, the results reveal the scanning patterns of learner drivers within the early stages of driving. The implications for driver training and road safety initiatives are discussed.

Rest in peace: Effects of roadside memorials on drivers' risk perception and eye movements

Vanessa Beanland | Rachael Wynne | Paul Salmon

University of Otago | University of the Sunshine Coast

In many countries, it is common to see spontaneous roadside memorials constructed in response to road fatalities. These memorials are controversial and are explicitly banned in many jurisdictions. Advocates argue that memorials improve safety, by making other drivers aware of an especially dangerous road, whereas opponents argue that they are distracting and decrease safety by diverting drivers' attention away from the road. However, almost no previous research has examined this empirically. We conducted a preregistered experiment in which 40 fully-licensed adult drivers viewed videos of road scenes with and without memorials, to examine how the presence of roadside memorials influences drivers' attentional allocation (indicated by eye movements to the roadside area) and safety-related behaviours (indicated by perceived risk ratings and preferred travel speeds for the road). Overall, it appears memorials frequently capture visual attention, with drivers making more fixations on the roadside and being more likely to fixate on memorials than on other roadside objects (e.g. a traffic cone). However, glances to the memorials were relatively short (~400 ms) and there were no clear positive or negative effects on safety-related behaviours. Nearly all drivers supported permitting roadside memorials, with a small number strongly opposed because they are distracting and/or distressing.

Schemas in motion

Samuel G. Charlton | Nicola J. Starkey

University of Waikato

This paper will describe several experiments demonstrating that driving a car becomes an automatic skill that we accomplish with very little effort or deliberation. We argue that these patterns result from the evolution and continuous refinement of schemata for familiar roads and routes. The implications of the automaticity and expectancies resulting from the development of driving schemata are increased change blindness and inattentional blindness while driving. Further, these expectancies can override correct recollection of objects and events from a recent drive, and result in schema-consistent false memories of the drive. These findings, obtained across a range of on-road and laboratory experiments, paint a consistent picture of how we form schemata about our everyday journeys, and use these schemata to guide attention, action, and subsequent memory retrieval.

Decision-making 2

Saturday 11:20 – 1:00

Inhibiting Responses to Difficult Choices

Dora Matzke | Samuel Curley | Andrew Heathcote

University of Amsterdam | University of Newcastle | University of Tasmania

Response inhibition is frequently investigated using the stop-signal paradigm where participants perform an easy response time task, such as responding to the direction of an arrow. Occasionally, this go task is interrupted by a stop signal that instructs participants to withhold their response. Stop-signal performance is typically formalized as a race between an independent go and stop process. The race model allows for the estimation of the latency of the unobservable stop response. It does so, however, without accounting for accuracy on the go task. This restriction means that the race model may not be used to investigate response inhibition in the full range of tasks used in experimental psychology, which can involve difficulties that result in non-negligible levels of errors, or where it is theoretically important to manipulate error rates. We propose a Bayesian framework that addresses this limitation, and hence expands the scope of the stop-signal paradigm to the study of response inhibition in the context of difficult as well as easy choices. We show using novel stop-signal data that our model has good measurement properties, and so can be practically applied in the broad range of tasks and populations studied in experimental psychology.
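
As background for the race architecture discussed above, the sketch below simulates stop-signal trials in which the go process is itself a two-choice race, so go errors can occur; the parameters and distributions are assumptions, and this is not the authors' Bayesian measurement model.

    # Toy stop-signal simulation: two go accumulators (correct, error) race against an
    # independent stop accumulator started after the stop-signal delay (SSD).
    # All parameters are placeholders for illustration.
    import numpy as np

    rng = np.random.default_rng(3)

    def stop_trial(ssd=0.25, v_correct=2.0, v_error=1.0, v_stop=4.0, b=1.0, t0=0.2):
        go_rates = np.maximum(rng.normal([v_correct, v_error], 0.5), 0.05)
        go_times = b / go_rates + t0                       # finishing times of the go racers
        stop_time = ssd + b / max(rng.normal(v_stop, 0.5), 0.05) + t0
        if stop_time < go_times.min():
            return "inhibited"                             # stop runner won: response withheld
        return "correct_go" if np.argmin(go_times) == 0 else "error_go"

    results = [stop_trial() for _ in range(2000)]
    print({k: results.count(k) for k in ("inhibited", "correct_go", "error_go")})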

The dynamics of decision making during goal pursuit

Timothy Ballard | Andrew Neal | Simon Farrell | Andrew Heathcote

University of Queensland | University of Western Australia | University of Tasmania

Goal pursuit can be thought of as a series of interdependent decisions made in an attempt to progress towards a performance target. Whilst much is known about the intra-decision dynamics of one-shot decisions, far less is known about how this process changes over time as people work toward a goal. We have developed an extended version of the LBA model that accounts for the effects that the dynamics of goal pursuit exert on the decision process. We test the model using a paradigm in which participants perform a random dot motion discrimination task, gaining one point for correct responses and losing one point for incorrect responses. The objective is to achieve a certain number of points within a certain timeframe. Preliminary results suggest that decision thresholds were highly sensitive to the deadline. The decision process was also sensitive to the amount of progress that remained before the goal was achieved, the incentive for goal achievement, and whether the goal was represented as an approach goal or an avoidance goal. These findings illustrate the sensitivity of decision making to the higher-order goals of the individual, and provide an initial step towards a formal theory of how these higher-level dynamics play out.
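
One way to picture the kind of dynamics described above is a decision threshold that loosens as deadline pressure mounts; the functional form and constants below are purely illustrative assumptions, not the authors' extended LBA model.

    # Toy illustration: a decision threshold that decreases as the points still needed
    # grow large relative to the time remaining. Functional form and constants are assumed.
    import numpy as np

    def threshold(points_remaining, time_remaining_s, b0=1.5, k=0.4, floor=0.2):
        pressure = points_remaining / max(time_remaining_s, 1e-3)   # points needed per second
        return max(b0 - k * np.log1p(pressure), floor)

    for time_left, points_needed in [(60, 20), (30, 20), (10, 20), (10, 5)]:
        b = threshold(points_needed, time_left)
        print(f"time left {time_left:3d} s, points needed {points_needed:2d} -> threshold {b:.2f}")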

Exploring for the sake of it: Directed exploration with minimal feedback.

Adrian R. Walker | Ben R. Newell | Danielle J. Navarro

UNSW Sydney

The exploration/exploitation trade-off describes the tension that decision makers face when selecting between alternatives with known good outcomes and alternatives with unknown, but potentially better, outcomes. Normally, this trade-off is examined using so-called bandit tasks. In these tasks, participants are presented with a number of alternatives with differing, unknown values. Participants then choose between these alternatives with the goal of maximizing reward, requiring that they balance choosing the best known option (exploitation) against sampling novel options (exploration). One key issue with this task is that it is not always possible to tell whether a participant's choice reflects an exploratory choice, an exploitative choice, or indeed some combination of the two. The current talk presents a novel task in which we distinctly separate exploration and exploitation into discrete phases. In this task, feedback is delayed until the end of each phase. We show that participants are able to strategically explore the environment in the absence of feedback. We argue that this shows that participants can direct their exploration to reduce uncertainty, even when feedback is delayed.
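
To illustrate the standard bandit setting that the abstract contrasts with (not the authors' phased task), here is a minimal epsilon-greedy chooser on a three-armed bandit; the arm values and epsilon are assumptions.

    # Minimal epsilon-greedy agent on a three-armed bandit, illustrating the tension
    # between exploiting the best-known option and exploring alternatives.
    import numpy as np

    rng = np.random.default_rng(4)
    true_means = np.array([0.2, 0.5, 0.8])   # unknown to the agent
    estimates = np.zeros(3)                  # running mean reward per arm
    counts = np.zeros(3)
    epsilon = 0.1

    for trial in range(500):
        if rng.random() < epsilon:
            arm = int(rng.integers(3))       # explore: pick a random option
        else:
            arm = int(np.argmax(estimates))  # exploit: pick the best-known option
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    print(np.round(estimates, 2), counts.astype(int))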

What’s Lagging in our Understanding of Interruptions?: Effects of Interruption Lags in Sequential Decision-Making

Jennifer Sloane | Garston Liang | Chris Donkin | Ben Newell

UNSW

Interruptions are a common occurrence in daily life and can often lead to errors, especially if they occur during sequential decisions. Previous research suggests that interruptions can decrease performance and increase errors and response time. Additionally, there is evidence that providing a lag time prior to an interruption can mitigate some of the interruption costs. We use a novel binary decision tree paradigm to investigate the effects of interruptions and interruption lags in sequential decision-making. We manipulate the difficulty of the task and type of interruption and predict that interruptions will decrease performance when the task is difficult, but that this will be attenuated by interruption lag. The results indicate that there is a potential benefit to including a lag time when presented with an interruption.

Learning outcome sequences under uncertainty

Aba Szollosi | Chris Donkin | Ben Newell

UNSW

When making decisions under uncertainty, people need to learn about the sequential distribution of the outcomes. This can help them identify and exploit potential environmental regularities, for example the regularity in the way trees produce fruit. Here we investigated how good people are at learning such distributions. Participants first made decisions in an uncertain decision task in which we manipulated outcome probabilities across three between-subjects conditions. We then asked participants to generate a sequence of outcomes that was representative of what they had observed in the decision-making task. The sequences of outcomes the participants generated did not have the same distributional properties as the sequences they observed. Instead, we found that although the sequences that participants produced had similar average lengths, their variance was substantially smaller and their shape was often different compared to the observed distribution. Based on these results, we argue that some deviations from normative choice behavior under uncertainty (e.g., underweighting rare events) can be partly attributed to people's ineffective learning of the sequential distribution of the outcomes. The results also pose a problem for some current models of learning, which assume that people focus only on the frequency of probabilistic outcomes.