Abstracts: Posters



Posters should be up by Morning Tea on Friday. Presenters should be at their posters on Friday from 3:30 – 5:00.


Detuning of chords is less noticeable in just intonation

Philippe Lacherez | Callula Killingly | Lalitha Newman


Throughout history, and across different cultures, musical tuning systems have evolved to meet varying goals. For instance, 12-tone equal temperament allows transposition of any melody or harmony into any western key. At the other extreme, just intonation allows maximum consonance for any given harmony, but restricts flexibility of transposition. Although just intonation leads to objectively more consonant sounds, there is evidence that individual preferences may differ depending on prior exposure to each system. The present study explored people’s judgments of in-tuneness when presented with either correctly tuned or detuned chords from both equal tempered and just intonation systems. Participants (N = 46) were also asked to tune the notes of two standard diatonic chords (V7 and I). To assess whether immediate exposure to music from a given system moderated the perception of in-tuneness, these tasks were completed before and after exposure to either barbershop quartet (just intonation) or Korean popular (equal temperament) music. Participants were very sensitive to detuning in equal tempered chords; however, the perception of just intonation music was highly robust to detuning, with chords judged as generally in-tune even when detuned. Exposure to music did not influence participants’ judgments in either task.


Phase specific shape after-effects explained by the tilt after-effect

Vanessa K. Bowden | J. Edwin Dickinson | Robert Green | David Badcock

The University of Western Australia | UC Berkeley

After-effects (AEs) of adaptation are frequently used to infer mechanisms of human visual perception. Adaptation across radial frequency (RF) patterns, patterns deformed from circular by a sinusoidal modulation of radius, causes repulsive after-effects that are sensitive to the relative amplitudes and orientations of the patterns. It has previously been suggested that these after-effects may be the result of global adaptation mechanisms. Here, we present a series of four experiments, including psychophysics and eye-tracking, which make the case for local orientation AEs being primarily responsible for the apparent shape AE observed following adaptation to RF patterns. The local tilt AE model can comfortably account for the variation in the magnitude of the RF shape AE associated with rotational phase difference across two very different experimental paradigms which produce very different patterns of results. It also accounts for bidirectional AEs for test patterns with differing amplitudes. Explaining shape adaptation as a collection of local orientation AEs provides an alternative to the global models conventionally used to address this issue. Our results suggest that there are some limitations to using adaptation to investigate the properties of global shape coding.


Investigating the dynamics of perceptual predictions across the visual hierarchy

Matt Oxner | Eric T. Rosentreter | William G. Hayward | Paul M. Corballis

Victoria University of Wellington | University of Auckland | The University of Hong Kong

The visual system quickly registers perceptual regularities in the environment and responds to violations in these patterns. We have recently shown that prediction error signals, specifically the visual mismatch negativity (vMMN), reflect expectancy violations of the relative occlusion of visual objects. This evidence of prediction violation for surface segmentation, a "high-level" visual process, extends similar findings for elementary perceptual regularities like orientation and colour. One surprising result of this study was that the mismatch response to deviants was not graded by the strength of previous evidence, as would be expected under predictive coding theory and previous findings. This suggests that the dynamics of Bayesian evidence accumulation and model updating may differ for different perceptual features across the visual hierarchy. We propose to independently investigate mismatch responses for three distinct features of a shared object stimulus: form, occlusion, and colour. Importantly, we wish to compare the sensitivity of these responses to the strength of previous perceptual evidence for a given feature. The findings will provide insight into whether perceptual predictions occur for multiple aspects of object perception, and whether these are driven by shared or dissociable implementations across the visual system.


Individual differences in the time course of subjective experience during active and sham protocols for transcranial direct current stimulation (tDCS)

Jason Forte | Olivia Carter

The University of Melbourne

The efficacy of tDCS is inferred from differences between active and sham (presumably inert) electrical stimulation. A sham condition consisting of a shortened active stimulation protocol is typically used for inert stimulation, based on the assumption that the somatosensory experience of participants is indistinguishable from active stimulation. However, some studies have shown that the number of participants reporting a given experience differs between active and sham tDCS. Furthermore, it is not known whether changes in experience enable individual participants to distinguish active stimulation from sham. Twenty-eight participants used a continuous report paradigm to record their experiences during 1mA and 2mA active and sham tDCS protocols. Participants reported few experiences during the 1mA active and sham conditions. For 2mA stimulation trials, itching, prickling and burning were experienced for much longer durations in the active stimulation protocol. Our results indicate that a proportion of participants can reliably distinguish 2mA active tDCS stimulation from sham stimulation. We are currently investigating whether differences in experience between sham and active stimulation influence behaviour. However, future research will need to question the assumption that a sham condition is indistinguishable from active tDCS.


Multisensory integration outside conscious awareness

Daniel Jenkins | Gina Grimshaw

Victoria University of Wellington

Perception can occur without conscious awareness of the perceived stimulus. But can unconsciously perceived stimuli be integrated across sensory modalities to influence downstream cognitive processes, such as the allocation of spatial attention? Palmer & Ramsey (2012) reported that participants could integrate auditory spoken syllables with faces that were rendered subjectively invisible with continuous flash suppression. When face/voice pairings (either congruent or incongruent) reliably predicted the target location, participants spontaneously allocated attention towards the cued location despite being unaware of the visual stimuli – and therefore also of the cue itself – suggesting that unconsciously processed visual stimuli can be integrated with consciously processed auditory information. Here, we report a replication and extension of this study in which cues depended on the integration of auditory and visual emotional expression. In two experiments, we refined the original paradigm to include awareness checks and a perceptually aware control group, and found evidence supporting the predicted cueing effect only in aware controls using emotional stimuli.


The minimal duration required for face perception

Renzo C. Lanfranco | Dalila Achoui | Axel Cleeremans | Hugh Rabagliati | David Carmel

University of Edinburgh | Université Libre de Bruxelles | Victoria University of Wellington

What is the minimal time it takes to perceive a face? Do upright faces enjoy a processing advantage over inverted faces? And are emotional faces perceived faster than non-emotional faces? Due to hardware limitations, studies examining fast visual processing typically present stimuli for suprathreshold durations of 10-20 milliseconds, and disrupt processing with a mask. Here, we use a newly-developed LCD tachistoscope which enables sub-millisecond presentation durations. Observers had to discriminate the location of a face image from that of a scrambled face, in extremely brief presentations (without a mask) ranging in duration from under a millisecond to 6 milliseconds. We found that above-chance face perception requires about 2.5 milliseconds, with an advantage for upright over inverted faces seen for durations of around 3.5 milliseconds and above. We did not, however, find any processing advantage for emotional over neutral faces. A control experiment ruled out attribution of the findings to afterimage processing. These findings clarify the minimal duration required to perceive a face and suggest that while holistic processing (i.e., of upright vs inverted faces) provides a perceptual advantage, any influence of emotional content may be restricted to processes that come after perception of the face itself.


Does Sex Modulate the Cheerleader Effect?

Daniel Carragher | Nicole Thomas | Mike Nicholls

Flinders University | Monash University

Perceptions of facial attractiveness can be modulated by cues that are external to the face, such as social context. The cheerleader effect is a robust visual phenomenon whereby the same face is perceived to be significantly more attractive when it is seen in a group with other faces, compared to when it is seen alone. To date, the cheerleader effect has been demonstrated for male and female faces that are presented in groups of same-sex distractor faces. However, the influence that the sex of the observer has on the effect has not been considered. Evolutionary psychologists have previously reported that the effect of social context on ratings of facial attractiveness is modulated by the sex of the observer, the sex of the face being evaluated, and the sex of the distractor faces in the group. To investigate whether the cheerleader effect is also modulated by these sex effects, we asked both male and female observers to rate the attractiveness of male and female target faces that were presented three times; once alone, once in a group with same-sex distractors, and once in a group with opposite-sex distractors. Preliminary data suggest that both male and female observers show the cheerleader effect.


Face perception and face detection deficits in developmental prosopagnosia

Stephanie Huang | Hazel K. Godfrey | Tirta Susilo

Victoria University of Wellington

Developmental prosopagnosia (DP) is traditionally defined by problems in memorising and recognising familiar faces. Whether DP also impairs the ability to distinguish unfamiliar faces (face perception) and detect the presence of faces (face detection) is not well understood. Here we assess face perception and face detection in 60+ DPs and 60+ controls. Both the perception and detection tasks presented faces and cars (as control stimuli), and both tasks were run with upright and inverted presentation. For the perception task, DPs as a group showed deficits with both faces and cars on the upright trials, but they were no different from controls on the inverted trials. This finding suggests that DP is associated with perceptual deficits that are orientation-sensitive but not specific to faces. For the detection task, DPs as a group showed subtle deficits only with faces on the upright trials, but these deficits were driven by a small subset of DPs. This result suggests that face detection problems occur in a minority of DPs but are not a primary feature of the condition. Overall, our study shows that DP deficits can manifest at multiple stages of visual processing with varying degrees of face-specificity.


Exploring the time course of direct gaze processing

Zoe Little | Tirta Susilo

Victoria University of Wellington

People are highly sensitive to direct gaze: they perceive faces to be looking at them even when the eyes are actually averted away. The angle within which a person will classify averted eyes as looking directly at them is their cone of direct gaze, and previous lab-based studies report the average width of this cone is around 10 degrees. However, little is known about the spatiotemporal properties of the cone of direct gaze. Here we report an internet-based study of the time course of the cone of direct gaze. We first show that internet-based samples yield an average cone of gaze of about 10-15 degrees, comparable to those found in the lab. We next manipulate the amount of time participants have to perceive each gaze by presenting face stimuli for shorter intervals followed by a mask. We find that the cone of direct gaze gradually narrows as face stimuli are visible for longer, from about 50 degrees at 100ms to about 15 degrees at 800ms. Our results begin to explore the spatiotemporal dynamics of the cone of direct gaze, and they show that internet-based samples can be used to investigate gaze processing.


No right-hemisphere advantage for holistic detection of Mooney faces

Jaiden Cancian | Ella Macaskill | Stephanie Huang | Hazel K. Godfrey | Tirta Susilo

Victoria University of Wellington

Face processing is thought to be lateralised to the right-hemisphere, but only lateralisation at the recognition stage has been closely tested. Here we report three experiments investigating whether there is a right-hemisphere advantage for face detection using Mooney faces. Mooney faces allow us to examine face detection based on holistic processing of the whole image in the absence of explicit cues about facial features. All experiments rely on the logic of the visual half-field presentation method (i.e., inferring a right-hemisphere advantage from enhanced performance in the left visual-field). Experiments 1 and 2 use different versions of the three-alternative forced-choice Mooney tests (Verhallen et al., 2014; Verhallen & Mollon, 2016). Experiment 3 is a yes/no task involving unilateral presentation of single Mooney images. No experiment shows a right-hemisphere advantage for detecting Mooney faces. Our results suggest that the lateralisation of face processing to the right-hemisphere occurs after the holistic face detection stage.


Recognition of Compound Expressions of Emotion: An Expression Rating Study

Emily Keough | Simone Favelle | Steven Roodenrys

University of Wollongong

Facial expression recognition ability is important for interpersonal communication and social functioning. Most research exploring expression recognition uses only the 6 basic expressions, although humans are capable of producing a much wider range that is used more often in social interaction. The Compound Facial Expressions of Emotion database (CFEE) is a large database of 22 expressions developed and validated using computer algorithms; however, the stimuli have yet to be validated by human observers.

The current study aimed to test whether all 22 expressions were perceivable by humans, and to identify the stimuli most consistently regarded as expressing the intended emotion, for inclusion in a high-quality set of stimuli that can be used for future expression processing research. Participants rated how well expressions matched the intended emotion label on a Likert scale. There were 14 sets of stimuli for a total of 3212 faces. The 11 highest rated images of each expression were extracted and analysed. Results showed that while there was overall good agreement between the expressions and the emotion labels, the level of agreement varied significantly across the 22 expressions. Further testing is needed to assess the perceptual discriminability (amongst other things) of expressions in this set.


Emotion recognition in faces, working memory and schizotypy

Leonie Miller | Simone Favelle | Emma Barkus | Steven Roodenrys | Tracey Woolrych | Emily Keough

University of Wollongong

Utilising a dual-task approach, research has found that emotion recognition (ER) in faces is dependent upon working memory (WM) resources. Given that ER is fundamental to social communication, this finding is also compatible with the observation that individuals with social processing deficits, including those with schizophrenia, demonstrate limitations in WM. The purpose of the current work was twofold: first, to address the possibility that the aforementioned experimental evidence for the ER-WM association is an artefact of modality-specific interference, with an experiment combining a non-verbal WM task and an emotion labelling task; and second, to examine whether performance differs as a function of self-reported schizotypy, the personality organisation associated with vulnerability to schizophrenia. Results indicate that the emotion labelling task draws on WM resources, with poorer WM performance in the dual- than single-task conditions. Performance in general did not differ by level of schizotypy, although the dual-task cost to WM performance was smaller for high schizotypy participants. However, high schizotypy individuals did report higher levels of loneliness than controls (d = 1.67). These results show that high schizotypy individuals may not exhibit cognitive impairment in ER in faces, but nonetheless experience greater subjective difficulties in the social domain.


What are you looking at? Investigating the interaction of facial expression, eye-gaze and the detection of threat.

Karen Griffith | Danielle Sulikowski

Charles Sturt University

While the effects of eye-gaze and facial expression have been extensively studied, few have considered the influence of these factors in detecting emotionally relevant stimuli. This study investigated whether congruency between the emotional relevance of target stimuli and facial expression affects detection time, and investigated the relationship between threat vigilance, facial cues and environmental context. Using a novel adaptation of the Posner cueing paradigm, 181 participants were presented with gazing angry, disgusted, fearful, happy and neutral face-cues prior to the onset of two flanking target stimuli. In a mixed repeated measures design participants detected either emotionally relevant or irrelevant targets. Target pairs were presented in blocks, creating threatening, pleasant/opportunity and benign contexts. Results revealed congruency effects in straight-gaze trials when emotionally relevant stimuli were distractors. There was also clear evidence of threat vigilance when searching for emotionally relevant targets. Angry, disgusted and fearful faces resulted in faster responses than happy faces in threat contexts, and neutral faces in opportunity and benign contexts. These findings support theories positing dynamic, competitive interaction of stimulus-driven and observer-dependent inputs in determining relevance for selective attention, and suggest these inputs have cumulative effects to the extent each input is useful within the specific context.


Why do we see what we see? The influence of context and stimulus features on the rapid detection of threat

Miriam Wilkinson | Simon Wilkinson | Danielle Sulikowski

Charles Sturt University

The visual prioritisation of threatening stimuli is called the threat-superiority effect. While this effect has been extensively studied, few have considered the influence of contextual cues and specific stimulus features on the rapid detection of threat. Utilising visual search tasks, three studies presented 200 participants with images of threatening and nonthreatening stimuli within arrays of distractor images. Target images were presented in aquatic, bushland, and urban context conditions, and reaction time and caution indices were measured. It was hypothesised that threatening stimuli (sharks, spiders and snakes) would be detected faster and with more caution than nonthreatening stimuli; that coiled and sinusoidal snake postures and snakelike shapes would be detected more rapidly and with greater caution than other snake postures and non-snakelike shapes; and that context congruence would modulate visual priority within threatening categories. Results revealed a threat-superiority effect for all threatening stimuli, and confirmed the importance of coiled body shape in rapid snake detection. There was strong support for caution indices as measures of implied threat-relevance, and context effects emerged for threatening stimuli and caution responses. These results suggest that visual attention responds to contextual cues, and that visual sensitivities not directly linked to threat-awareness facilitate the rapid detection of snakes.


Training a Machine Learning Model to Recognise Arousal and Valence

Caitlin Heesterman | Tim Gastrell | Bing Xue | Hedwig Eisenbarth

Victoria University of Wellington

This study aimed to develop a machine learning model to predict individuals’ subjective ratings of arousal and valence based on their electroencephalographic (EEG) and peripheral physiological activity. The model was trained on data from 32 participants who rated their valence and arousal levels after watching emotional music videos (Koelstra et al., 2012). To optimise the root mean square error (RMSE) of the model’s predictions, we systematically varied the data processing, feature selection, regressors and train/test split method. We consistently found lower RMSE scores with valence than with arousal. We also noted that training the model across the whole sample compared to solely on the test individual decreased the RMSE for arousal but increased the RMSE for valence. This might suggest that the (neuro-)physiological correlates of subjective arousal are more consistent across a sample but that these correlates of subjective valence are more consistent within an individual. Thus, it appears that the subjective experience of emotional states can be explained by neurophysiological activity with some accuracy but that individual differences in the relationship between experience and this activity decrease accuracy. Further investigation will involve collecting a larger dataset of EEG, physiological and emotion recordings to train the model on.
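As an illustrative aside (this is not the authors’ actual pipeline), the RMSE metric that the model was tuned against can be sketched in a few lines of Python; the predictions and ratings below are hypothetical values on a 1–9 rating scale:

```python
import math

def rmse(predictions, ratings):
    """Root mean square error between model predictions and subjective ratings."""
    if len(predictions) != len(ratings):
        raise ValueError("predictions and ratings must be the same length")
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predictions, ratings)) / len(predictions)
    )

# Hypothetical arousal predictions vs. participant ratings (1-9 scale)
print(rmse([5.0, 6.5, 3.0, 7.0], [5.0, 7.0, 4.0, 6.0]))  # -> 0.75
```

Lower values indicate predictions closer to the subjective ratings, which is why the abstract reports model variants by comparing their RMSE scores.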


Physiological and subjective emotional reactivity and recovery among young people who self-injure

Kealagh Robinson | Marc S. Wilson | Gina Grimshaw

Victoria University of Wellington

Although people who self-injure report experiencing greater emotion dysregulation, little is known about how they actually respond during emotional challenge. Experimental work is needed to understand if differences found in questionnaire measures of emotional experiences are due to irregularities in the physiological generation of an emotional response, the subjective interpretation of that physiological response, or how this emotional experience is later remembered. In a pre-registered study, we recruited young women who had self-injured in the past 12 months and those without a history of self-injury. We measured subjective feelings, heart rate, and electrodermal responding while participants completed a baseline task, followed by a stress induction, and a post-test resting phase. Two weeks later, participants recalled their subjective feelings during the stress induction. No group difference in reactivity to, or recovery from, the laboratory stressor was found across subjective feelings, heart rate, or electrodermal responding. A trend was found where the Self-Injury group recalled experiencing greater negative feelings during the stress-induction than did the Control group, suggesting that people who self-injure may amplify emotional challenges in memory. Findings underscore the need to better distinguish the underlying cause(s) of emotion dysregulation experienced by people who self-injure in order to better tailor therapeutic interventions.


Mood, optimism and prospective memory in Medical Adherence: An experimental investigation

Azizuddin Khan

Indian Institute of Technology Bombay, India

Medical adherence is the ability of a person to take medicine as prescribed by the doctor, at the appropriate time and in the appropriate amount. Many people find it difficult to stick to a prescribed medication plan, for reasons ranging from mood and optimism to the ability to remember to do something in the future. The present study aimed to understand the relationship of medical adherence to memory, level of optimism and mood. One hundred and seven subjects between 55 and 85 years of age were recruited from a community-dwelling population. Medical adherence was measured using Morisky’s six-item medical adherence scale; the Prospective and Retrospective Memory Questionnaire (PRMQ), the Life Orientation Test-Revised (LOT-R) and the Geriatric Depression Scale were also employed. A univariate analysis revealed a significant relationship between motivation to adhere to medication and mood. A significant relationship between optimism and knowledge of medical adherence was also obtained by means of ANOVA. Regression analysis showed that prospective memory significantly predicted motivation to adhere to medication but did not predict knowledge about the benefits of adherence to medication. Retrospective memory predicted neither motivation nor knowledge towards medication adherence.


Unprepared: Thinking of a trigger warning does not prompt adaptive preparation for trauma-related content

Victoria Bridgland | Jorja Barnard | Melanie Takarangi

Flinders University

Trigger warnings purportedly enable people to ‘prepare’ for upcoming trauma-related material via ‘coping strategies’ that mitigate negative affect (Lockhart, 2015). However, in the face of potentially distressing content, does the addition of a trigger warning prompt people to engage helpful coping strategies, compared to the same situation without a warning? No research has examined this claim. We asked participants from Amazon’s Mechanical Turk (n = 260) to complete one of two different future thinking tasks; half were asked to think about coming across a warning related to their most stressful/traumatic experience; the others thought about content related to their most stressful/traumatic experience. Regardless of task, participants described similar feelings of anxiety, PTSD symptomology, and perceptions of event centrality, and reported that they would use the same kinds of coping strategies. Thus, the idea that warning messages prompt the use of adaptive coping strategies is questionable; trigger warnings may, in themselves, act as ‘triggering reminders.’


Testing the effects of a brief mindfulness intervention on cognitive control of emotional distraction

Justin Murphy | Gina Grimshaw | David Carmel

Victoria University of Wellington

Mindfulness practice involves paying attention intentionally, non-judgementally, and in the present moment. Such practice has been claimed to reduce emotional reactivity by increasing cognitive control: a constellation of mechanisms that enhance focus on goal-relevant information while ignoring goal-irrelevant distractions. However, empirical evidence for the influence of mindfulness on cognitive control is weak. In recent years, our lab has used an irrelevant distraction paradigm to examine how various manipulations of cognitive control influence emotional distraction. Participants are typically more distracted by emotional than neutral images; however, emotional distraction can be reduced by manipulations such as reward, or increasing distractor frequency, that are known to enhance cognitive control. Here, we tested whether mindfulness can similarly enhance cognitive control to reduce emotional distraction. In our preregistered study, participants performed a simple letter discrimination task, while attempting to ignore emotional and neutral distractor images that sometimes appeared. Halfway through the distraction task, they listened to either a 10-minute guided meditation or a 10-minute talk about mindfulness. Preliminary results (N = 59) show less emotional distraction following the mindfulness than control intervention, consistent with mindfulness enhancing cognitive control. Results based on the entire preregistered sample (N = 96) will be presented.


Are there visual and cognitive aftereffects of using virtual reality head-mounted displays?

Ancret Szpak

University of South Australia

A high-quality stereoscopic head-mounted display (HMD) can simulate depth in a virtual environment that resembles the spatial properties of the real world. However, current technology is not capable of exactly replicating how humans see and perceive depth in the real world. The aim of this study was to investigate the visual and cognitive aftereffects of using HMDs and the relationship of these aftereffects to virtual reality sickness (VRS) symptoms. A high-quality, off-the-shelf game was selected to test the hypothesis that commercially sold HMDs may lead to visual and cognitive aftereffects. Standardised visual and cognitive assessments were employed before and after participants engaged in a 30-minute table tennis game (VR group) or went about their daily activities (control group). We observed changes in accommodation but not in vergence, which possibly stems from the aftereffects of decoupling of accommodation and vergence in virtual reality. Participants in the VR group also demonstrated a decrease in reaction times after using an HMD. Objective measures of the visual and cognitive aftereffects of using HMDs may provide greater insight beyond self-report measures of VR sickness. These measures may also be valuable to obtain a better understanding of user issues and safety around VR usage.


Walk the Plank! Fear induction in Virtual Reality

Christopher Maymon | Jeremy Meier | Kealagh Robinson | Michael Tooley | Justin Murphy | Lauren Liao | Matt Crawford | Gina M. Grimshaw

Victoria University of Wellington

It has long been established that emotions comprise behavioural, physiological, and subjective responses. In order to better understand relationships amongst these components of emotion, it is necessary to reliably induce specific emotional experiences in the lab. Here, we aimed to induce fear by exposing adults to extreme heights in Virtual Reality. Participants explored a virtual city street before being transported to the ‘height challenge’: walking along a plank suspended 80 stories above the ground. Subjective emotional experience was measured at specific points during the simulation. Heart rate (HR) and skin conductance level (SCL) were recorded continuously as a measure of physiological response. Mean HR and SCL significantly increased during the height challenge (ps < .001) relative to street level. Results from emotion ratings obtained during the simulation indicated that participants experienced significant increases in fear, anxiety, and presence, along with significant decreases in relaxation, happiness, and desire. Taken together, these results suggest that the height challenge specifically induced fear. Next steps for this paradigm involve incorporating behavioural measures (e.g., full body movements and phonetic speech analyses) in order to more fully capture emotional responses.


Motion increases recognition of spontaneous postures but not facial expressions

Tamara Van Der Zant | Nicole Nelson

University of Queensland

Most emotion recognition research uses static, posed facial expressions of emotion. In this study we examine how the use of more ecologically valid stimuli – including dynamic, spontaneous whole person expressions – improves recognition. Expressions were drawn from professional tennis matches following an important win or loss within the match. Participants judged the expression shown in the face, body or whole person on whether the player had won or lost the prior point as well as the valence and arousal depicted in the expression. Recognition of wins and losses was improved when stimuli were presented dynamically and when the face and body were presented as a whole. Though recognition was better for the more ecologically valid stimulus types (i.e., dynamic stimuli and whole person stimuli), overall recognition of emotion was poor, with emotion recognition for isolated face stimuli being poorer than chance in some conditions. Emotion recognition for ecologically valid expressions can differ greatly from the posed and less ecologically valid stimuli typically used in the field. We highlight the need for research further examining ecologically valid stimuli, including dynamic, spontaneous and full person stimuli, as well as stimuli incorporating contextual features.


Size Does Matter: Accuracy in Detecting Digitally Altered Images

Nicole A. Thomas | Ellie Aniulis | Alessia Mattia | Elizabeth Matthews

Monash University | James Cook University (Cairns)

The prevalence of social media is undeniable; indeed, most people check social media every day. Furthermore, it is increasingly simple to modify photographs prior to posting them on social media. Given that higher-level cognitive factors influence our perception, does repeated exposure to unrealistically thin, idealised pictures of women influence our ability to detect digitally altered images? Across three experiments, female participants viewed an unaltered image, followed by a noise mask, then an image of the same female model that had been modified (in increments of 5%) to be larger or smaller than the original. Participants underestimated change levels for thin models, and overestimated change levels for plus-size models. Although participants were accurate in determining whether two images of plus-size models were the same or different, the second image of thin models had to be significantly smaller than the first for participants to report they were the same. Overall, participants believed photographs had been modified to a lesser degree than they actually had, particularly for thin models. We suggest that regular exposure to unrealistically thin, idealised images on social media has changed our perception of “normal”, leading to the belief that the average body is larger than it truly is.


The Effectiveness of Short-format Refutational Fact-checks

Assoc/Prof Ullrich Ecker | Ms Ziggy O’Reilly | Mr Jesse S. Reid | Ms Ee Pin Chang

University of Western Australia

Fact-checking is an increasingly important feature of the modern media landscape. However, the most effective format of fact-checks remains unclear. Some have argued that simple retractions that repeat a false claim and tag it as false may backfire because they boost the claim’s familiarity. More detailed refutations may provide a more promising approach, but may not be feasible under the severe space constraints associated with social-media communication. In two experiments, we tested (1) whether simple false-tag retractions can indeed be ineffective or harmful, and (2) whether short-format (140-character) refutations are more effective than simple retractions. Regarding (1), simple retractions reduced belief in false claims, and we found no evidence for a familiarity-driven backfire effect. Regarding (2), short-format refutations were found to be more effective than simple retractions after a one-week delay but not a one-day delay. At both delays, however, they were associated with reduced misinformation-congruent reasoning. This means that embedding a rebuttal in a facts-oriented context has beneficial implications beyond specific belief reduction.


Thinking more does not protect people from truthiness

Deva Paramitta Ly | Eryn Newman

The Australian National University

Truth comes from the gut and not from books: when people are deciding what is real, they draw on feelings, rather than on facts. Consequently, people are swayed by peripheral details that have nothing to do with truth. Research shows that people are more likely to believe claims that appear with decorative photos, even when the photo does not bear on the truth of a claim: truthiness (Newman et al., 2012). That is, when people see the claim, "Turtles are deaf", they are more likely to believe that claim when it is paired with a photo of a turtle. My honours research investigated whether instructions to think analytically had converging effects on people’s assessments of truth: reducing people’s susceptibility to the photo-bias. Across two experiments, participants completed a survey containing true-false trivia claims. Half of the claims appeared with a photograph and half appeared alone. Before the trivia task, some participants were instructed to think deeply [or critically] before responding. In a control condition, they received standard truthiness instructions. While instructions encouraging analytical processing significantly increased accuracy, there was no decrease in truthiness across conditions. These results fit with the theoretical literature on cognitive fluency and have implications for dealing with misinformation.


Isolating the time of choice challenges the postdictive paradigm

Konstantina Vasileva

Victoria University of Wellington

Research on conscious choice repeatedly challenges the phenomenology of agency. Recently, Bear & Bloom (Psychological Science, 2015) demonstrated that when asked to choose between two options, participants were biased by a low-level perceptual cue appearing shortly after they should have completed their choice.

The postdictive effect occurred when the cue circle appeared at a sufficiently short delay, when unconscious visual processing and the subjective experience of conscious choice overlap. However, the original method failed to capture precisely the timing of choice, as the cue circle appeared alongside the choice options within the same screen frame. After successfully replicating the original experiment, I attempted to isolate the timing of choice by separating the choice options from the cued stimulus into two consecutive frames. Contrary to Bear & Bloom, I found that choice of the cued item increased with greater stimulus-cue onset differences, which directly challenges the postdictive model.


Do contingency estimates inform our causal judgments? A survey of controversial health-related beliefs.

Julie Chow | Ben Colagiuri | Micah Goldwater | Benjamin Rottman | Evan Livesey

University of Sydney | University of Pittsburgh

Estimating the contingency between events seems to be the logical basis for making informed judgments about causal relationships (e.g., whether a treatment effectively improves health). However, there are consistent asymmetries in how sensitive contingency estimates and causal judgments are to illusory causation manipulations. Illusory causation refers to the overestimation of causal relationships when there is no objective contingency between events (the patient is just as likely to recover with or without the treatment). We conducted a survey of Australian adults to investigate relationships between causal judgments and contingency estimates in several real-life controversial health-related beliefs. We found that individuals’ contingency estimates reflected their beliefs about treatment efficacy, but dissociations between causal judgments and these contingency estimates persisted. Similar to causal judgment in laboratory experiments, endorsements of health-related causal relationships appear to be influenced by more than simple contingency estimates, which has important implications for attempting to correct erroneous beliefs.


What makes an outcome extreme? Refining the definition of extremity and its influence in risky choice

Joel Holwerda | Prof. Ben Newell

UNSW Sydney

Outcomes that are extreme within a given context are more influential than intermediate outcomes in decisions involving risk. People tend to accept risks that allow the possibility of acquiring the best outcome and reject risks that could lead to the worst outcome. But this pattern of behaviour, referred to as the extreme-outcome effect, could be explained by numerous theories about what constitutes an extreme outcome. Across three experiments, we provide evidence that the disproportionate influence of extreme outcomes cannot be accurately described as being driven solely by the best and worst outcomes, as is often suggested, or as increasing with distance from the average experienced outcome. Instead, our results were more consistent with an account in which the influence of outcomes is determined by their rank relative to other experienced outcomes. These findings have implications for the relationship between the extreme-outcome effect in risky choice and analogous effects across numerous cognitive domains, notably the peak-end effect and bow (or edge) effects.


Oral Vocabulary and Novel Word Reading: An Eye Movement Study

Lyndall Murray | Rauno Parrila | Signy Wegener | Hua-Chen Wang | Anne Castles

Macquarie University

The association between oral vocabulary and word reading has been tied to a skill termed mispronunciation correction that enables readers to adjust the pronunciation of unfamiliar irregular words (e.g. correcting "wazz" to "wozz" when reading "was"). We aimed to provide evidence for mispronunciation correction processes operating during reading. Year 5 children were orally trained on a set of novel words but received no training on a second set. Half the words were designated an irregular spelling and half a regular spelling. Children later silently read both trained and untrained words in sentences that provided a supportive or neutral context while their eye movements were monitored. Fixations on regular words were significantly shorter than on irregular words, and fixations on trained words were shorter than on untrained words. Children then read the target words aloud and demonstrated a greater likelihood of reading regular and irregular words accurately when they had been trained. Reading words in contextually supportive sentences boosted accuracy for irregular but not for regular words. These results suggest that when words are present in oral vocabulary and irregular, they undergo additional processing when viewed in text for the first time. This additional processing may reflect mispronunciation correction processes.


Orthographic knowledge predicts reading performance at word and sentence level in German third-graders with reading difficulties

Jelena Zaric | Marcus Hasselhorn | Telse Nagler

Goethe University Frankfurt a. M.

Reading is among the most important competencies. It comprises hierarchically lower processing (i.e., word identification) and hierarchically higher processing (i.e., understanding the relationships between words and integrating sentences for successful text comprehension). Difficulties in these processes can lead to reading difficulties. Well-established predictors of successful reading performance are phonological awareness and naming speed. Recent research has also focused on the role of orthographic knowledge (i.e., knowledge of the conventions of a writing system); however, previous studies have only examined its role in basic reading processes, such as word reading. The aim of this study was to examine the role of orthographic knowledge in hierarchically higher reading processes (sentence and text comprehension), in addition to phonological awareness and naming speed. For this purpose, data from 102 German third-graders with reading difficulties (age M = 8.86, SD = 0.50) were analyzed via multiple linear regression analysis. Analyses revealed that orthographic knowledge is a significant predictor of word and sentence reading, but not of text comprehension. These results indicate that knowledge of the conventions of a writing system contributes to word identification as well as to the integration of words into coherent sentences, but not to higher-level text comprehension processes.