Laterality
Asymmetries of Brain, Behaviour, and Cognition
Research Article

Opposite perceptual biases in analogous auditory and visual tasks are unique to consonant–vowel strings and are unlikely a consequence of repetition

Received 20 Sep 2023, Accepted 23 Apr 2024, Published online: 03 May 2024

ABSTRACT

Despite wide reporting of a right ear (RE) advantage on dichotic listening tasks and a right visual field (RVF) advantage on visual half-field tasks, we know very little about the relationship between these perceptual biases. Previous studies that have investigated perceptual asymmetries for analogous auditory and visual consonant–vowel tasks have indicated a serendipitous finding: a RE advantage and a left visual field (LVF) advantage with poor cross-modal correlations. In this study, we examined the possibility that this LVF advantage for visual processing of consonant–vowel strings may be a consequence of repetition by examining perceptual biases in analogous auditory and visual tasks for both consonant–vowel strings and words. We replicated opposite perceptual biases for consonant–vowel strings (RE and LVF advantages). This did not extend to word stimuli where we found RE and RVF advantages. Furthermore, these perceptual biases did not differ across the three experimental blocks. Thus, we can firmly conclude that this LVF advantage is unique to consonant–vowel strings and is not a consequence of the repetition of a relatively limited number of stimuli. Finally, a test of covariances indicated no cross-modal relationships between laterality indices suggesting that perceptual biases are dissociable within individuals and cluster on mode of presentation.

Behavioural tasks that measure perceptual asymmetries for linguistic stimuli in the auditory and visual domains are often used to index hemispheric dominance for language. In the auditory domain, dichotic listening paradigms are typically used to assess hemispheric dominance (see Kimura, Citation2011; and Hugdahl, Citation2011, for reviews) whereas, in the visual domain, visual half-field paradigms are used (see Bourne, Citation2006, for a review). These paradigms have produced two of the most widely replicated results in the field of cognitive neuroscience: a right ear (RE) advantage and a right visual field (RVF) advantage for the processing of linguistic stimuli, which is interpreted as evidence for left hemisphere dominance for language at a population level. While these effects are robust, few studies have compared perceptual asymmetries on analogous auditory and visual tasks that use the same stimuli. Those that have, have used consonant–vowel strings (e.g., ba, ga) as stimuli and reported opposite perceptual asymmetries: a RE/left hemisphere advantage on the dichotic listening task but a LVF/right hemisphere advantage on the visual half-field task (Oltedal & Hugdahl, Citation2017; Voyer & Boudreau, Citation2003). One potential explanation for this opposite pattern of results is that the auditory stimuli are processed as linguistic units while the visual letter strings are processed spatially. Here we return to the issue of opposite laterality for analogous auditory and visual tasks and seek to draw direct comparisons between the lateralization of consonant–vowel strings and 5-letter words (e.g., power, tower). This enabled us to assess whether this opposite pattern of asymmetry is unique to consonant–vowel strings or whether it could be observed for a list of frequently repeated words presented under the same conditions. We then used the data from these tasks to test the dissociable language laterality hypothesis (COLA consortium, Citation2022), which suggests that language functions are independently lateralized within individuals (that is, individuals are not consistently right or left lateralized within themselves), via an exploratory analysis that accounted for each behavioural task's reliability.

In dichotic listening paradigms, different stimuli are presented simultaneously to each ear. In non-forced versions, which are commonly used to assess hemispheric dominance, participants then report the sound that they heard most clearly. While input to both ears projects bilaterally, the ipsilateral pathway is suppressed when the two sides compete under simultaneous presentation, meaning that there are stronger contralateral projections during the task. As such, stimuli presented to the right ear, with a contralateral projection to the left hemisphere, will have an advantage in tasks that involve language processing (see Hugdahl, Citation2011, for a review). Indeed, evidence from large-scale studies indicates a strong RE advantage when consonant–vowel syllables are used (Bless et al., Citation2013; COLA consortium, Citation2022; Karlsson et al., Citation2019; Parker, Woodhead, et al., Citation2021) that is reliable within individuals when the task is administered twice (e.g., both Bless et al., Citation2015, and Parker, Woodhead, et al., Citation2021, reported a retest correlation of r = 0.78). Other versions of the task that have required participants to make judgements about digits (e.g., Kimura, Citation1961) and words (e.g., Bryden, Citation1964; Bryden & Macrae, Citation1988; Wexler & Halwes, Citation1983) have revealed a strong RE advantage at the population level. Furthermore, dichotic listening paradigms are sensitive to the subtle differences in hemispheric dominance that are typically observed between left- and right-handers, where left-handers show more variable laterality at a group level (Bethmann et al., Citation2007; Bryden, Citation1970; Karlsson et al., Citation2019; Parker, Woodhead, et al., Citation2021).

Like dichotic listening paradigms, visual half-field paradigms make use of contralateral projections from sensory input. In visual half-field paradigms, participants are required to respond to visual stimuli shown in the left visual field (LVF) and RVF. Because the visual pathways from the LVF and RVF are crossed for non-foveal viewing, information presented in the RVF projects directly to the left hemisphere and information in the LVF to the right hemisphere. While the corpus callosum does allow for the transfer of information between the hemispheres, processing is more efficient for linguistic stimuli arriving at the left hemisphere, as there are no transfer/processing costs from the non-dominant to the language-dominant hemisphere (Bonandrini, Paulesu, et al., Citation2023; Bourne, Citation2006; Hellige, Citation1993; Hunter & Brysbaert, Citation2008). It is not surprising, then, that many studies have produced a strong RVF advantage for the processing of written words (e.g., Bonandrini, Paulesu, et al., Citation2023; Brederoo et al., Citation2019, Citation2020; Hausmann et al., Citation2019; Mills et al., Citation2022; Parker, Egan, et al., Citation2021; Parker, Woodhead, et al., Citation2021; Perea et al., Citation2008; Willemin et al., Citation2016) despite the use of very different stimuli across studies. Compared to auditory tasks, visual half-field paradigms have lower reliability (Voyer, Citation1998), but there is evidence to suggest that these paradigms are sensitive to the increased variability in hemispheric dominance in left- relative to right-handers (e.g., Brederoo et al., Citation2020).

Given the similarities between dichotic listening and visual half-field paradigms, it is surprising that few studies have examined co-lateralization between the two paradigms. The few studies that have looked at co-lateralization have tended to report poor correlations between laterality indices on these tasks (e.g., Bryden, Citation1965; Hines & Satz, Citation1974; Wexler & King, Citation1990). This is illustrated in a study conducted by Van der Haegen and Brysbaert (Citation2018). Van der Haegen and Brysbaert administered a battery of behavioural laterality tasks containing a consonant–vowel dichotic listening task, a visual half-field task with word stimuli, and an optimal viewing position (OVP) variation of the visual half-field paradigm to 100 left-handers. The correlation of laterality indices between the dichotic listening and word visual half-field task was r = 0.31 and the correlation between laterality indices on the dichotic listening and OVP task was r = 0.26, indicating that there are medium correlations at best between laterality indices from dichotic listening and visual half-field paradigms. It is, however, possible that the relatively weak correlations between laterality indices for dichotic listening and visual half-field paradigms are a consequence of a lack of comparability of the stimuli used across sensory domains.

Voyer and Boudreau (Citation2003) were the first to compare perceptual asymmetries on analogous auditory and visual tasks that used the same stimuli. In their first experiment, 48 right-handers completed a consonant–vowel dichotic listening task and a consonant–vowel visual half-field task with bilateral presentation. The goal of both tasks was to monitor for a specific syllable and report when it was presented. For the dichotic listening task, as expected, accuracy was higher for stimuli presented to the RE. For the visual half-field task, however, there was an unexpected LVF advantage with greater accuracy for stimuli presented in the LVF. Furthermore, Voyer and Boudreau reported a weak, non-significant cross-modal correlation (r = −.09) between laterality indices on these tasks. This was interpreted as strong evidence that the participants recruited two distinct cognitive systems when completing each task.

The most surprising feature of the data reported by Voyer and Boudreau (Citation2003) is the LVF advantage for the visual half-field task for the processing of syllables. This was contrary to the prediction that there would be an RVF advantage for the processing of these stimuli given that sensory information presented to the RVF is initially projected to the language-dominant left hemisphere. This led Voyer and Boudreau to conduct a second experiment to replicate this finding in a separate group of 23 right-handers. Again, Voyer and Boudreau reported a clear LVF advantage for the visual processing of syllables, confirming that this effect was not the result of experimental error. Voyer and Boudreau hence suggested that this opposite pattern of laterality between the two tasks is a result of auditory stimuli being processed as linguistic units while the visual letter strings are processed spatially.

The findings of Voyer and Boudreau (Citation2003) have since been replicated in two experiments conducted by Oltedal and Hugdahl (Citation2017). In Experiment 1, 12 right-handed males completed a consonant–vowel dichotic listening task and a modified consonant–vowel visual half-field task under non-forced, forced-left, and forced-right conditions. Under non-forced conditions, Oltedal and Hugdahl replicated the RE and LVF advantages. Furthermore, there was evidence to suggest that the RE advantage could be reversed under forced-left attention and the LVF advantage could be reversed under forced-right conditions. Together, Oltedal and Hugdahl replicated opposite patterns of laterality for analogous tasks and extended these findings by showing that these asymmetries are independent, given that forced attention conditions independently influenced outcome measures on each task. In Experiment 2, Oltedal and Hugdahl replicated these results during fMRI and additionally noted different patterns of BOLD activation across the two tasks. During the auditory, but not the visual, task there was symmetrically distributed activation in the superior posterior temporal lobe, corresponding to the auditory cortex. Together, the robustness of the LVF advantage for the visual processing of consonant–vowel strings and the presence of auditory cortex activation for the auditory but not the visual task support the idea that different cognitive systems are used to process stimuli in each modality. Like Voyer and Boudreau, Oltedal and Hugdahl concluded that participants process auditory consonant–vowel strings as linguistic units while visual consonant–vowel strings are processed spatially or as objects.

The finding of opposite perceptual biases on analogous auditory and visual tasks, particularly the LVF advantage for consonant–vowel strings, is surprising. During these auditory and visual tasks, only six stimuli (ba, da, ga, ka, pa, ta) are typically presented, with each string being repeated numerous times under rapid presentation. Under these conditions, it seems reasonable that visual syllables may have been learned and processed as spatial units. Indeed, studies of lateralized word naming have shown smaller RVF advantages for stimuli shown 16 times throughout an experiment compared to those shown only once (Sullivan & McKeever, Citation1985). Although we cannot tease apart whether the results of Sullivan and McKeever are a consequence of recent lexical processing or a switch from linguistic to spatial processing in the visual domain, they do suggest that the opposite pattern of laterality for visually presented consonant–vowel strings may be a consequence of repetition. In other words, the task could be completed without the need to access the linguistic processing system and, as such, stimuli would represent visual objects on opposite sides of space that compete for processing resources. This would likely engage the right hemisphere, by virtue of its advantage over the left hemisphere for spatial selective attention (Becker & Karnath, Citation2007; Bowen et al., Citation1999; Heilman et al., Citation1985; Ringman et al., Citation2004). We therefore set out to examine this possibility in an experiment where participants completed analogous dichotic listening and visual half-field tasks for six consonant–vowel strings and four five-letter words. We reasoned that if the same LVF advantage could be observed for a list of frequently repeated words, then this would clearly indicate that the presentation conditions in previous studies led to the LVF advantage for visually presented consonant–vowel strings, rather than these stimuli being processed as spatial units per se. On the contrary, finding a RVF advantage for visually presented words even under frequent repetition would suggest that consonant–vowel strings are processed differently from words. To this end, we pre-registered four predictions: (1) that we would replicate the RE advantage for the processing of aurally presented consonant–vowel strings; (2) that we would replicate the LVF advantage for the processing of visually presented consonant–vowel strings; (3) that we would replicate the RE advantage for the processing of aurally presented words; and (4) that, if opposite laterality in analogous tasks is a consequence of repetition and, as a consequence, spatial processing, we would observe a LVF advantage for the processing of visually presented word strings. If, however, opposite laterality for consonant–vowel strings reflects something unique about how consonant–vowel strings are processed under visual presentation, then we would expect a RVF advantage on the word visual half-field task. Following our pre-registered analysis of the data, we conducted an unplanned exploratory analysis to look at how performance on each task changed across subsequent blocks of stimuli. We reasoned that if the LVF advantage for consonant–vowel strings was a consequence of repetition, then we might expect the LVF advantage to be stronger in later relative to earlier blocks.

While the primary aim of our experiment was to further investigate opposite laterality during analogous auditory and visual tasks, our experimental approach provided a unique opportunity to re-examine the dissociable language laterality hypothesis (COLA consortium, Citation2022; Woodhead et al., Citation2019, Citation2021), which specifies that language lateralization is not unitary and that language functions are independently lateralized within individuals. The strongest test of this hypothesis to date comes from a study involving both behavioural (N = 621) and functional transcranial Doppler ultrasound (N = 209) data. Within their study, the COLA consortium (Citation2022) reported that the three behavioural laterality tasks were only weakly correlated and that a two-factor model was a better fit to lateralized blood flow data from six tasks than a one-factor unitary laterality model. This finding of dissociable language laterality in blood flow measures could not be attributed to measurement error, as each task showed good split-half reliability. This led the authors to conclude that language laterality is not a unitary construct. The finding of dissociable language laterality, however, is by no means novel. Zurif and Bryden (Citation1969) reported that, while laterality indices on three visual half-field tasks correlated, as did laterality indices on two dichotic listening tasks, there was no cross-modal correlation of laterality indices. Thus, the findings of Zurif and Bryden (Citation1969) point towards the idea that language functions, or at the very least perceptual biases, are lateralized based upon modality, despite spoken and written language both recruiting the left hemisphere. We therefore further examined the dissociable language laterality hypothesis via an exploratory covariance modelling approach (Parker, Woodhead, et al., Citation2021) in which we pitted four models against each other: (1) a dissociable laterality model where all laterality indices were unrelated; (2) a modality laterality model where laterality indices for tasks within the same modality were related but there were no cross-modality relationships; (3) a stimuli laterality model where laterality indices for the same stimulus type were related across modalities but there were no relationships between consonant–vowel strings and words; and (4) a unitary language laterality model where all laterality indices were related. The four models are shown visually in Figure 1. To our knowledge, this was the first formal modelling approach to assess the evidence for a modality-specific lateralization model against a completely dissociable model. Importantly, we considered the reliability of each task because, without accounting for reliability, it is difficult to differentiate an absence of a relationship from measurement error (Parsons et al., Citation2019).

Figure 1. A schematic illustration of the four laterality models: (A) dissociable laterality, (B) modality laterality, (C) stimuli laterality, and (D) unitary laterality.


Method

This experiment was pre-registered on the Open Science Framework (OSF) before data collection. The registration form, alongside task materials, analysis scripts, and anonymized data, is available on the OSF: https://osf.io/72sk6/.

Participants

The sample size for the current study was determined via power simulations using the simr package (version 1.0.7; Green et al., Citation2016) within R (version 4.3.3; R Development Core Team, Citation2020). First, we conducted a small-scale pilot study where 10 right-handers (3 male, 7 female), recruited via Prolific (https://www.prolific.co/), completed each of the four laterality tasks, which are described in detail below. Results from the initial pilot study aligned with our pre-registered predictions. We observed a RE advantage for both listening tasks. The counts of correctly identified stimuli presented to the left and right ears were 26.6 and 45.7 for consonant–vowel strings and 24.5 and 40.7 for words. There was a LVF advantage for consonant–vowel strings, with 73.9 correct responses in the LVF and 40.9 correct responses in the RVF, and a RVF advantage for words, with 40.1 correct responses in the LVF and 51.1 correct responses in the RVF. We then used the powerCurve() function to determine a sample size at which we would have 90% power at an alpha level of .05 to detect the 33-unit LVF advantage for consonant–vowel strings using a generalized Poisson linear mixed-effects model optimized for the processing of count data: glmer(dv ~ VF + (1 + VF | participant), family = poisson(link = "log")),Footnote1 where participant was a random factor. One thousand simulations under varying sample sizes indicated that 50 participants would provide 92.4% power to detect the LVF advantage for consonant–vowel strings.
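For transparency, a minimal sketch of this power simulation is given below. It assumes a pilot data frame named pilot with columns participant, VF (coded −0.5 for LVF and +0.5 for RVF), and dv (count of correct responses); these object names are ours and not necessarily those of the pre-registered scripts on the OSF.

library(lme4)
library(simr)

# Fit the pilot model described above
m <- glmer(dv ~ VF + (1 + VF | participant),
           family = poisson(link = "log"), data = pilot)

# Extend the pilot model to a candidate sample size and simulate power for VF
m_ext <- extend(m, along = "participant", n = 50)
pc <- powerCurve(m_ext, test = fixed("VF", "z"), along = "participant", nsim = 1000)
print(pc)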

To reach our pre-registered sample size, we initially recruited 68 participants via Prolific. Participants were native English speakers aged 18–45 years old, had normal or corrected-to-normal vision, no hearing or visual impairments, and reported no history of neurological disease (e.g., brain tumour, stroke, head injury). We additionally oversampled left-handers at this point (33 left-handers; 48.5% of the sample). The decision to oversample left-handers followed previous work (COLA Consortium, Citation2022; Parker, Woodhead, et al., Citation2021). In contrast to right-handers, left-handers are characterized by greater variability in language lateralization (e.g., Bruckert et al., Citation2021; Mazoyer et al., Citation2014) and offer the opportunity to adopt statistical models with greater explanatory power. Indeed, Van der Haegen and Brysbaert (Citation2018) have argued that left-handers should be a first choice when studying the (co-)lateralization of linguistic functions. Given that we imposed several cleaning procedures on the data, we report our final sample size under Data Cleaning and Final Sample.

Online behavioural laterality battery

Our online battery of tasks included three measures of background information and four experimental assessments of behavioural laterality that were built and administered using the Gorilla Experiment Builder (http://www.gorilla.sc/; Anwyl-Irvine et al., Citation2020). Each task is described below.

Basic demographics questionnaire

Participants were asked to report their age, gender, years in education, and whether they were bilingual. We also asked participants to answer the questions Which is your writing hand? and Which is your preferred foot to kick a ball?Footnote2 The available options for both questions were “left”, “right”, or “no preference”.

Edinburgh handedness inventory

The Edinburgh Handedness Inventory (Oldfield, Citation1971) was used to quantify handedness on a continuous scale. Participants were required to indicate a preference for the left or right hand on a 5-point scale for 10 everyday activities. The Edinburgh Handedness Inventory was scored in the standard way so that scores ranged from −100 to 100, where −100 indicates exclusive left-handedness, zero indicates no preference, and 100 indicates exclusive right-handedness.
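For illustration, the standard scoring reduces to a laterality quotient over the 10 items. A minimal sketch, assuming the 5-point responses have already been converted to per-item left- and right-hand preference points (this coding step is an assumption, not the exact Gorilla output):

# Hypothetical EHI scoring: left/right are per-item preference points
ehi_score <- function(left, right) {
  100 * (sum(right) - sum(left)) / (sum(right) + sum(left))
}
ehi_score(left = rep(0, 10), right = rep(2, 10))  # 100 = exclusive right-handedness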

Lexical test for advanced learners of English (LexTALE)

The Lexical Test for Advanced Learners of English (LexTALE; Lemhöfer & Broersma, Citation2012) was used to screen for normal-range language ability. Participants had to indicate whether they knew 40 English words and 20 non-words. LexTALE scores were corrected for the unequal proportion of words and non-words by averaging the percentages correct for these two item types (i.e., ((number of words correct/40 × 100) + (number of non-words correct/20 × 100))/2). Accordingly, LexTALE scores ranged from 0 to 100.
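This correction can be sketched as one small function implementing the formula above (argument names are ours):

# LexTALE score corrected for the unequal number of words and non-words
lextale_score <- function(words_correct, nonwords_correct) {
  ((words_correct / 40 * 100) + (nonwords_correct / 20 * 100)) / 2
}
lextale_score(35, 16)  # 83.75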

Dichotic listening tasks

Consonant–Vowel dichotic listening task

The consonant–vowel dichotic listening task was used to measure lateralized auditory syllable perception and was closely matched to Oltedal and Hugdahl (Citation2017).

Materials

Stimuli for the consonant–vowel dichotic listening task followed the standard Bergen dichotic listening paradigm, which has previously been used in an online format (Bless et al., Citation2013; Parker, Woodhead, et al., Citation2021). The CV syllables were created by pairing six stop-consonants (/b/, /d/, /g/, /p/, /t/, /k/) with the vowel /a/. Each of the consonant–vowel strings (/ba/, /da/, /ga/, /ta/, /ka/, and /pa/) were then paired and played in each sound channel (e.g., /ba/–/da/). This yielded a total of 36 possible pairwise combinations, including 6 homonyms (e.g., /ba/–/ba/).
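For illustration, the full pairing set can be generated as follows (a sketch of the pairing design only; the stimuli themselves were recorded audio):

# All 36 left/right channel pairings, including the 6 homonyms
cv <- c("ba", "da", "ga", "pa", "ta", "ka")
pairings <- expand.grid(left_channel = cv, right_channel = cv)
nrow(pairings)                                        # 36
sum(pairings$left_channel == pairings$right_channel)  # 6 homonym pairs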

Procedure

Participants completed three blocks of the consonant–vowel dichotic listening task. In each block, participants heard each of the 36 pairings once (108 pairings across the three blocks). At the start of a trial, participants saw an asterisk for 2000 ms, after which a stimulus was played and the string <> was shown on the screen until a response was made or 6000 ms had elapsed. Participants were instructed to report the syllable that they heard most clearly by clicking on one of six buttons (each corresponding to a single consonant–vowel string). A schematic illustration of the dichotic listening task (and all other behavioural laterality tasks) is shown in Figure 2.

Figure 2. A schematic illustration of each of the four behavioural laterality tasks (not to scale).


Word dichotic listening task

The word dichotic listening task was used to measure lateralized auditory word perception.

Stimuli

Stimuli for the word dichotic listening task came from an emotional prosody task reported by Godfrey and Grimshaw (Citation2016), where we presented neutral-toned stimuli. The words were /bower/, /dower/, /power/ and /tower/. Each of the word strings were then paired and played in each sound channel (e.g., /bower/–/dower/). This yielded a total of 16 possible pairwise combinations, including 4 homographs (e.g., /bower/–/bower/). It should be noted that the structure of word stimuli (i.e., differing only by the first letter) mirrors that of the consonant–vowel stimuli.

Procedure

Participants completed three blocks of the word dichotic listening task. In each block, participants heard each of the 16 pairings twice (96 pairings across the three blocks). At the start of a trial, participants saw an asterisk for 2000 ms, after which a stimulus was played and the string <> was shown on the screen until a response was made or 6000 ms had elapsed. Participants were instructed to report the word that they heard most clearly by clicking on one of four buttons (each corresponding to a single word string).

Visual half-field tasks

We employed two visual half-field tasks with bilateral presentation of consonant–vowel strings and words. The use of bilateral presentation had several advantages in the context of the current experiment compared to unilateral presentation: (1) it enabled direct comparison with Oltedal and Hugdahl (Citation2017) who also employed bilateral presentation for the visual presentation of consonant–vowel strings; (2) its adoption allows a more direct parallel (compared to unilateral visual presentation) between the visual and the auditory modality, as bilateral visual presentation elicits competition between LVF and RVF (Hausmann et al., Citation2019; Hunter & Brysbaert, Citation2008); (3) it followed from Hunter and Brysbaert’s (Citation2008) observation that divided visual field reading effects are larger and more stable when bilateral presentation is adopted instead of unilateral presentation (Boles, Citation1987, Citation1990, Citation1994; see also Iacoboni & Zaidel, Citation1996; for more recent applications of this technique, see Hausmann et al., Citation2019; Willemin et al., Citation2016; and Mills et al., Citation2022); and (4) in the absence of explicit fixation control via eye tracking, the use of bilateral presentation can (by virtue of competition for attentional resources) reduce the probability of eye movements (Hunter & Brysbaert, Citation2008).

Consonant–vowel visual half-field task

The consonant–vowel visual half-field task was used to measure lateralized visual syllable recognition and was closely matched to Oltedal and Hugdahl (Citation2017).

Stimuli

We used the 30 non-homonym pairings from the consonant–vowel dichotic listening task for the analogous visual half-field task.

Procedure

Participants completed three blocks of the consonant–vowel visual half-field task. In each block, participants saw each of the 30 pairings twice (180 pairings across the three blocks). At the start of a trial, participants saw an asterisk for 2000 ms, after which pairs of syllables (e.g., /ba/–/ka/) were presented simultaneously (one in each visual field) for 80 ms at an eccentricity of approximately 2.5 degrees of visual angle; eccentricity was maintained using Gorilla’s scaling tool and by asking participants to sit 50 cm from the screen. At the same time, an arrow was presented to indicate which stimulus participants were required to identify: < indicated that participants should respond to the stimulus in the LVF and > indicated that participants should respond to the stimulus in the RVF. Participants indicated the stimulus they saw using one of six buttons. Within each block, participants responded once to the LVF and once to the RVF for each pairing. At stimulus offset, the stimuli were masked with #####. The arrow and masks were shown on the screen until a response was made or 6000 ms had elapsed.

Word visual half-field task

The word visual half-field task was used to measure lateralized visual word recognition.

Stimuli

We used the 12 non-homograph pairings from the word dichotic listening task for the analogous visual half-field task.

Procedure

Participants completed three blocks of the word visual half-field task. In each block, participants saw each of the 12 pairings four times (144 pairings across the three blocks). At the start of a trial, participants saw an asterisk for 2000 ms, after which pairs of words (e.g., /bower/–/dower/) were presented simultaneously (one in each visual field) for 150 msFootnote3 at an eccentricity of approximately 2.5 degrees of visual angle. At the same time, an arrow was presented to indicate which stimulus participants were required to identify: < indicated that participants should respond to the stimulus in the LVF and > indicated that participants should respond to the stimulus in the RVF. Participants indicated the stimulus they saw using one of four buttons. Within each block, participants responded once to the LVF and once to the RVF for each pairing. At stimulus offset, the stimuli were masked with #####. The arrow and masks were shown on the screen until a response was made or 6000 ms had elapsed.

General procedure

A within-participants design was utilized so that participants completed all tasks and responded to stimuli presented to both ears and in both visual fields. After providing informed consent, Gorilla’s in-house scaling tool, which involved holding a credit card up to the screen and adjusting the size of an image until it matched the size of the credit card, was used to ensure that stimulus size and eccentricity were maintained across all tasks and participants. This was followed by two headphone screens: a dichotic pitch test (Milne et al., Citation2021) to check adherence to headphone use and a stereo headphone check (Parker, Woodhead, et al., Citation2021) to check for two-channel headphone use. Participants who scored 100% on each of these tasks completed the behavioural laterality battery. All participants scoring less than 100% were rejected from the experiment.

Eligible participants completed the basic demographics questionnaire, the Edinburgh Handedness Inventory, and the LexTALE task. They were then randomly assigned to complete three blocks of the behavioural laterality tasks in one of the following orders: ABCD, BCDA, CDAB, DABC, where A, B, C, and D correspond to the consonant–vowel dichotic listening, consonant–vowel visual half-field, word dichotic listening, and word visual half-field tasks respectively. The order of tasks remained consistent across blocks for each participant. For each task, the trial order was randomized.

Data analysis

Here we describe our pre-registered data cleaning procedures, report our final sample size, and describe our approach to statistical analysis. All departures from the pre-registration are described below.

Data cleaning

For each task, we pre-registered the following data cleaning procedures: (1) we would remove participants who scored below 80 on the LexTALE; (2) we would remove dichotic listening trials where the reaction time was less than 200 ms or greater than 6000 ms; (3) we would remove participants who scored less than 50% on the non-homonym/non-homograph trials on the dichotic listening task; (4) we would remove visual half-field trials where the reaction time was less than 200 ms or greater than 6000 ms; and (5) we would remove participants who scored less than 50%Footnote4 on the homonym/homograph trials on the visual half-field task. The reaction time cleaning procedures led to the removal of zero trials for the consonant–vowel dichotic listening task, zero trials for the word dichotic listening task, 0.25% of trials for the consonant–vowel visual half-field task, and 0.30% of trials for the word visual half-field task. A sketch of this reaction-time filter follows below.
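As a minimal sketch, the reaction-time criteria (2) and (4) amount to a simple trial-level filter, assuming a data frame trials with an rt column in milliseconds (column names are ours):

# Remove trials with reaction times below 200 ms or above 6000 ms
trials_clean <- subset(trials, rt >= 200 & rt <= 6000)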

Final sample

The final sample consisted of 52 participants. Participants had a mean age of 31.92 years (SD = 6.81) and included 26 right-handers (11 female, 15 male) and 26 left-handers (15 female, 11 male). The left-handers had a mean handedness index of −80.19 (SD = 23.13); 13 were left-footers, 10 were right-footers, and 3 had no foot preference. The right-handers had a mean handedness index of 94.22 (SD = 9.85); 2 were left-footers and 24 were right-footers. Three left-handers and one right-hander were bilingual.

Statistical models

Perceptual biases

For each behavioural task, a generalized Poisson linear mixed-effects model was fitted to count data using the glmer() function from the lme4 package (version 1.1.35.3; Bates et al., Citation2015) within R: dv ~ ear/visual field + (1 + ear/visual field | participant), family = poisson(link = "log"), where participant is a random factor. Ear/visual field was coded as a categorical variable using −0.5 and +0.5 such that the intercept in each model corresponded to the grand mean. To estimate the best-fitting random structure for each model, the buildmer() function from the buildmer package (version 2.11; Voeten, Citation2023) was used. First, a maximal structure was fitted to the data before applying a backwards elimination process based on the significance of the change in log-likelihood between models. The most basic possible model retained the categorical fixed-effect coding for ear/visual field.
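A sketch of this model-fitting pipeline is given below, assuming a data frame d with columns dv (count of correct responses), ear (coded −0.5 = left, +0.5 = right), and participant; as elsewhere, the object names are assumptions rather than the published scripts.

library(lme4)
library(buildmer)

# Backward elimination from the maximal random-effects structure
m <- buildmer(dv ~ ear + (1 + ear | participant),
              data = d, family = poisson(link = "log"),
              buildmerControl = buildmerControl(direction = "backward"))
summary(m)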

To evaluate the evidence for potential null effects, we supplemented our analyses with Bayes Factor analysis (for a review see Wagenmakers, Citation2007). Bayes Factors were computed by fitting Bayesian linear mixed-effects models using the brm() function from the brms package (version 2.21.0; Bürkner, Citation2017). All Bayesian models included the same fixed effects as the glmer models. The models had 12,000 iterations (with the first 2,000 discarded as warm-up) and assumed non-informative priors (cauchy(0, 1)) for each fixed effect. The hypothesis() function was then used to estimate the evidence for the alternative hypothesis (BF10) for each fixed effect.
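A sketch of one such Bayesian model is given below, using the same assumed data frame d. Prior samples are required for hypothesis() to return a Savage-Dickey evidence ratio, which for a point hypothesis is BF01 and is inverted to obtain BF10.

library(brms)

bm <- brm(dv ~ ear + (1 + ear | participant),
          data = d, family = poisson(link = "log"),
          prior = set_prior("cauchy(0, 1)", class = "b"),
          iter = 12000, warmup = 2000,
          sample_prior = "yes")  # prior samples needed by hypothesis()

h <- hypothesis(bm, "ear = 0")       # Evid.Ratio is BF01
bf10 <- 1 / h$hypothesis$Evid.Ratio  # evidence for the alternative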

The combination of frequentist and Bayesian analysis enabled us to take a two-stage approach to inference. We considered results to be statistically significant where |z| > 2. If |z| < 2 and BF10  > 1/3, we considered there to be insufficient evidence. If |z| < 2 and BF10 < 1/3, we concluded that there was evidence in favour of the null hypothesis.
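This decision rule can be made explicit as a small helper function (a sketch of the rule as stated, not code from the analysis scripts):

# Two-stage inference: frequentist significance first, then Bayesian evidence
decide <- function(z, bf10) {
  if (abs(z) > 2) "statistically significant"
  else if (bf10 > 1/3) "insufficient evidence"
  else "evidence for the null"
}
decide(z = 1.56, bf10 = 1.160)  # "insufficient evidence"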

Perceptual biases and repetition

For each behavioural task, we fitted an unplanned exploratory model to examine how the ear/visual field advantages changed across each of the three blocks. This provided an additional test of the hypothesis that the LVF advantage for visual consonant–vowel strings is the result of repetition. Again, generalized Poisson linear mixed-effects models were fitted to count data using the glmer() function, but this time they included a categorical fixed effect of block: dv ~ ear/visual field × block + (1 + ear/visual field × block | participant), family = poisson(link = "log"), where participant is a random factor. Ear/visual field was coded as a categorical variable using −0.5 and +0.5, and successive difference contrasts were implemented for the fixed effect of block using the contr.sdif() function from the MASS package (version 7.3.60.0.1; Venables & Ripley, Citation2002), such that the intercept in each model corresponded to the grand mean. This coding scheme meant that the first contrast for block compared the mean accuracy between blocks 2 and 1 and the second compared the mean accuracy between blocks 3 and 2. The critical interactions indicated whether the ear/visual field advantages differed between successive blocks of trials. Our approach to modelling the random effects, Bayes Factor calculation, and statistical inference was identical to that for our analysis of perceptual biases.
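A sketch of the block model, continuing with the assumed data frame d and adding a block column with values 1–3; the maximal random structure named above would be reduced via buildmer as before, so a simplified random slope is shown here for illustration only.

library(lme4)
library(MASS)

d$block <- factor(d$block, levels = 1:3)
contrasts(d$block) <- contr.sdif(3)  # contrast 1: block 2 - 1; contrast 2: block 3 - 2

m_block <- glmer(dv ~ ear * block + (1 + ear | participant),
                 data = d, family = poisson(link = "log"))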

Dissociable language laterality

For our planned exploratory analysis of the dissociable language laterality hypothesis, we first calculated laterality indices for each task: 100 × (Right − Left)/(Right + Left). We then derived the split-half reliability for each task by correlating laterality indices from odd and even trials. This meant that we could report both attenuated and disattenuated correlation coefficients between laterality indices, with the disattenuated coefficients corrected for split-half reliability (Spearman, Citation2010): rxy/sqrt(rxx × ryy). In a formal analysis, we then compared four theoretical models by modelling covariances using a simplified version of Structural Equation Modelling (SEM). In this approach, we constrain covariance patterns and report the “best” model fit according to Akaike weights (Wagenmakers & Farrell, Citation2004). Akaike weights are transformations of AIC values that can be directly interpreted as conditional probabilities for each model, with the best-fitting model being selected. The four models were: (1) a dissociable laterality model where all laterality indices are unrelated; (2) a modality laterality model where laterality indices for tasks within the same modality are related but there are no cross-modality relationships; (3) a stimuli laterality model where laterality indices for the same stimulus type are related regardless of modality but there are no relationships between consonant–vowel strings and words; and (4) a unitary language laterality model where all laterality indices are related.
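The key computations here are simple enough to sketch directly. Assuming per-participant counts of correct left and right responses, laterality indices computed separately from odd and even trials (li_odd, li_even), and a data frame li_all of full-task indices, with task_x and task_y standing in for any two tasks (all names are ours):

# Laterality index: positive values = right ear / right visual field advantage
li <- function(right, left) 100 * (right - left) / (right + left)

# Split-half reliability: correlate indices from odd- and even-numbered trials
r_xx <- cor(li_odd$task_x, li_even$task_x)
r_yy <- cor(li_odd$task_y, li_even$task_y)

# Disattenuated correlation between two tasks (Spearman's correction)
r_xy <- cor(li_all$task_x, li_all$task_y)
r_disattenuated <- r_xy / sqrt(r_xx * r_yy)

# Akaike weights for the four covariance models (Wagenmakers & Farrell, 2004)
akaike_weights <- function(aic) {
  delta <- aic - min(aic)
  exp(-0.5 * delta) / sum(exp(-0.5 * delta))
}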

Results

Below we report our confirmatory analysis of perceptual biases on each behavioural laterality task, our exploratory analysis of perceptual biases across blocks, and our exploratory analysis of the dissociable language laterality hypothesis.

Perceptual biases

The number of correctly identified stimuli in the left and right ear/visual field is shown in Figure 3. For the consonant–vowel dichotic listening task, the model fitted to count data for each ear (glm(dv ~ ear, family = poisson(link = "log"))) indicated a RE advantage, where the count of correctly reported consonant–vowel strings was greater for stimuli presented to the RE, b = 0.534, SE = 0.033, z = 16.09, BF10 = 1.228e+16. Similarly, the model fitted to count data for the word dichotic listening task (glm(dv ~ ear, family = poisson(link = "log"))) indicated a RE advantage, b = 0.734, SE = 0.036, z = 20.31, BF10 = 2.093e+16. For the consonant–vowel visual half-field task, the model fitted to count data (glmer(dv ~ VF + (1 + VF | participant), family = poisson(link = "log"))) indicated a LVF advantage, where the count of correctly reported consonant–vowel strings was greater for stimuli presented in the LVF, b = −0.292, SE = 0.035, z = −8.25, BF10 = 7.027e+75. Finally, the model fitted to count data for the word visual half-field task (glmer(dv ~ VF + (1 + VF | participant), family = poisson(link = "log"))) indicated a RVF advantage, where the count of correctly reported words was greater for stimuli presented in the RVF, b = 0.450, SE = 0.037, z = 12.15, BF10 = 3.702e+16.

Figure 3. Count of correctly identified stimuli presented to the left and right ear/visual field across each behavioural task: (A) consonant-vowel dichotic listening, (B) word dichotic listening, (C) consonant-vowel visual half-field, and (D) word visual half-field. Dots represent the count of correct answers for each participant.


Perceptual biases and repetition

The number of correctly identified stimuli in the left and right ear/visual field across the three blocks is shown in Figure 4, which clearly illustrates that performance was similar across blocks for each task, with perceptual biases also being relatively stable. This was confirmed via mixed-effects analysis.

Figure 4. Count of correctly identified stimuli presented to the left and right ear/visual field across each experimental block for the four behavioural laterality tasks: (A) consonant-vowel dichotic listening, (B) word dichotic listening, (C) consonant-vowel visual half-field, and (D) word visual half-field. Dots represent the count of correct answers for each participant.


For the consonant–vowel dichotic listening task, the model fitted to count data for each ear (glm(dv ~ ear × block, family = poisson(link = "log"))) indicated a RE advantage, where the count of correctly reported consonant–vowel strings was greater for stimuli presented to the RE, b = 0.534, SE = 0.033, z = 16.10, BF10 = 1.064e+17. The contrasts comparing the mean count of correctly identified stimuli indicated that the count of correctly identified stimuli did not differ between blocks 2 and 1, b = −0.020, SE = 0.041, z = −0.49, BF10 = 0.203, or blocks 3 and 2, b = 0.001, SE = 0.041, z = 0.02, BF10 = 0.184. Furthermore, null hypothesis testing indicated that the RE advantage did not differ between blocks 2 and 1, b = 0.127, SE = 0.081, z = 1.56, BF10 = 1.160, or blocks 3 and 2, b = −0.092, SE = 0.082, z = −1.12, BF10 = 0.666. It is, however, important to note that the Bayes Factors here provided equivocal evidence for the null hypothesis. Similarly, for the word dichotic listening task, the model fitted to count data for each ear (glm(dv ~ ear × block, family = poisson(link = "log"))) indicated a RE advantage, b = 0.695, SE = 0.036, z = 19.21, BF10 = 1.941e+16. The contrasts comparing the mean count of correctly identified stimuli indicated that the count of correctly identified stimuli did not differ between blocks 2 and 1, b = −0.020, SE = 0.044, z = −0.46, BF10 = 0.290, or blocks 3 and 2, b = 0.027, SE = 0.044, z = 0.61, BF10 = 0.318. Null hypothesis testing indicated that the RE advantage did not differ between blocks 2 and 1, b = 0.017, SE = 0.088, z = 0.20, BF10 = 0.543, or blocks 3 and 2, b = −0.029, SE = 0.089, z = −0.33, BF10 = 0.558. As with the consonant–vowel dichotic listening task, Bayes Factors provided equivocal evidence for the null hypothesis that the RE advantage did not differ across blocks. For the consonant–vowel visual half-field task, the model fitted to count data for each visual field (glmer(dv ~ VF × block + (1 + VF | participant), family = poisson(link = "log"))) indicated a LVF advantage, b = −0.292, SE = 0.035, z = −8.26, BF10 = 1.218e+23. The contrasts comparing the mean count of correctly identified stimuli indicated that the count of correctly identified stimuli did not differ between blocks 2 and 1, b = −0.032, SE = 0.030, z = −1.09, BF10 = 0.224, or blocks 3 and 2, b = 2.292e-05, SE = 0.030, z = 0.00, BF10 = 0.126. Likewise, the LVF advantage did not differ between blocks 2 and 1, b = −0.016, SE = 0.059, z = −0.27, BF10 = 0.261, or blocks 3 and 2, b = 0.012, SE = 0.060, z = 0.21, BF10 = 0.262. Finally, for the word visual half-field task, the model fitted to count data for each visual field (glmer(dv ~ VF × block + (1 + VF | participant), family = poisson(link = "log"))) indicated a RVF advantage, b = 0.450, SE = 0.037, z = 12.15, BF10 = 1.883e+15. The contrasts comparing the mean count of correctly identified stimuli indicated that the count of correctly identified stimuli did not differ between blocks 2 and 1, b = 0.014, SE = 0.033, z = 0.43, BF10 = 0.153, or blocks 3 and 2, b = −2.739e-02, SE = 0.034, z = −0.82, BF10 = 0.191. Furthermore, the RVF advantage did not differ between blocks 2 and 1, b = 0.026, SE = 0.067, z = 0.39, BF10 = 0.298, or blocks 3 and 2, b = 0.000, SE = 0.067, z = 0.01, BF10 = 0.281.

Dissociable language laterality

The laterality indices for each task are reported in Table 1, alongside the split-half reliability for each task. As can be gleaned from the table, the split-half reliabilities for each task were satisfactory and above the arbitrary r = 0.65 threshold that Parker, Woodhead, et al. (Citation2021) selected to represent a lower bound of reliability for a measure to be useful for analysing individual differences.

Table 1. Laterality indices, split-half reliability, and disattenuated and attenuated correlation coefficients.

Initial scrutiny of the Spearman correlation coefficients between laterality indices suggests strong associations between (1) the two dichotic listening tasks and (2) the two visual half-field tasks. The relationships between laterality indices on each task are visualized in Figure 5. There were only weak associations between all other laterality indices. This pattern holds for both disattenuated and attenuated correlations.

Figure 5. Scatter plots visualizing the relationships between laterality indices across each of the four behavioural tasks: (A) consonant-vowel dichotic listening and word dichotic listening, (B) consonant-vowel dichotic listening and consonant-vowel visual half-field, (C) consonant-vowel dichotic listening and word visual half-field, (D) word dichotic listening and consonant-vowel visual half-field, (E) word dichotic listening and word visual half-field, and (F) consonant-vowel visual half-field and word visual half-field. Dots represent laterality indices for each participant. Left-handed participants are shown in pink and right-handed participants are shown in blue.


As a formal test of the dissociable language laterality hypothesis, we compared four models: (1) a dissociable laterality model where all laterality indices are unrelated; (2) a modality laterality model where laterality indices for tasks within the same modality are related but there are no cross-modality relationships; (3) a stimuli laterality model where laterality indices for the same stimulus type are related across modalities but there are no relationships between consonant–vowel strings and words; and (4) a unitary language laterality model where all laterality indices are related. The Akaike weights for the four models were 7.62e-16, 0.932, 1.272e-16, and 0.068 respectively. Thus, the Akaike weights favoured the modality model, confirming that laterality indices are associated when stimuli are presented within the same (auditory or visual) modality.

Discussion

Dichotic listening and visual half-field paradigms have produced two of the most widely replicated results in the field of cognitive neuroscience: a RE advantage and a RVF advantage for the processing of linguistic stimuli, which is interpreted as evidence for left hemisphere dominance for language at a population level. While the RE and RVF advantages are widely replicated, there has been little research comparing perceptual biases in analogous auditory and visual tasks. The studies that have done so examined the processing of auditory and visual consonant–vowel strings and reported opposite patterns of laterality: a RE and a LVF advantage (Oltedal & Hugdahl, Citation2017; Voyer & Boudreau, Citation2003). The current experiment returned to this serendipitous finding and looked to (1) replicate opposite perceptual biases for the processing of auditory (i.e., RE advantage) and visual (i.e., LVF advantage) consonant–vowel strings in a sample containing both left- and right-handers to increase variability in the data set; (2) examine whether the same pattern of opposite perceptual biases could be observed for a select set of word stimuli presented under near-identical conditions to the consonant–vowel strings; and (3) test the dissociable language laterality hypothesis via an exploratory covariance modelling technique that could compare different predictions about the relationships between laterality indices on analogous auditory and visual consonant–vowel and word tasks. The novel contributions of our work can be summarized in three general points. First, we replicated the LVF advantage for consonant–vowel strings in a sample of left- and right-handers recruited from outside a university population. A RE advantage was observed for the auditory processing of consonant–vowel strings. Second, we found that even when word stimuli were presented under similar conditions to consonant–vowel strings, there was a RVF advantage for words. Third, we provide clear evidence against a unitary laterality hypothesis and instead show that perceptual biases cluster on the mode of presentation (i.e., auditory vs visual). We discuss each point in turn.

The first contribution of this study is that it provides a conceptual replication of Voyer and Boudreau (Citation2003) and Oltedal and Hugdahl (Citation2017) using online data collection from a non-university sample. Both Voyer and Boudreau and Oltedal and Hugdahl reported a robust RE advantage for the processing of consonant–vowel strings during an auditory task. They then reported a LVF advantage when the same stimuli were presented in an analogous visual half-field paradigm. Here we looked to replicate these findings for consonant–vowel strings via a well-powered study that utilized a simplified experimental design, advanced statistical analysis techniques, and a non-university sample in which left-handers were oversampled to increase variability within the dataset. Despite our departures from previous studies, we replicated a RE advantage on the consonant–vowel dichotic listening task and a LVF advantage on the consonant–vowel visual half-field task. This confirms the robustness of the effect and suggests that it extends beyond the right-handed, university-based populations that have been the focus of prior research. That said, the level of overall accuracy on these tasks differed somewhat from estimates in prior research. Oltedal and Hugdahl reported that right-handed participants correctly identified 37.8% and 50.5% of syllables presented to the left and right ears in their unforced condition and correctly identified 66.0% and 25.8% of consonant–vowel strings in the left and right visual fields. When converting our scores into percentages, we find that, on average, our group of left- and right-handed participants correctly identified 31.3% and 54.9% of syllables presented to the left and right ears and 38.4% and 27.5% of consonant–vowel strings presented in the left and right visual fields. While the estimates for the dichotic listening task are in a similar ballpark, average performance was considerably lower on our visual half-field task. While it is plausible that our larger sample (50 participants vs 12 participants) was better suited to detecting a true estimate of these differences in the general population, it is also possible that environmental factors in the online setting, such as background noise or viewing conditions, made this task more difficult to complete (an issue we return to in due course). Alternatively, this could reflect in part the increased variability in perceptual asymmetries in our more diverse sample that included both left- and right-handers.

The second contribution of this study is that we can rule out the possibility that the LVF advantage for the visual processing of consonant–vowel strings is a consequence of repetition and stimuli being over-learned. Previous studies have relied on six unique consonant–vowel strings to investigate perceptual biases/laterality on auditory and visual tasks, and this repetition may have produced the LVF advantage for the visual processing of consonant–vowel strings. This suggestion stems from the observation that participants show a substantially reduced RVF advantage for word stimuli that are shown 16 times throughout the course of an experiment compared to a condition where the word is shown only once (Sullivan & McKeever, Citation1985). To test this possibility, we presented participants with a restricted set of word stimuli that were repeated under similar conditions to the consonant–vowel strings. We reasoned that if opposite laterality in analogous tasks is a consequence of repetition, then we would expect a RE advantage on the word dichotic listening task and a LVF advantage for the processing of visually presented word strings. Contrary to this suggestion, we found a RE and a RVF advantage for the processing of words, which is in keeping with a large body of existing evidence. This enables us to conclude that the presentation conditions are not responsible for the opposite patterns of perceptual biases observed for dichotic listening and visual half-field consonant–vowel tasks. Additionally, an unplanned exploratory analysis comparing perceptual biases across experimental blocks provided further evidence against the idea that the LVF advantage for consonant–vowel strings is a consequence of repetition, as there was no statistically reliable evidence of a switch from a right to a left visual field advantage across the three blocks of each task.

So why might we have observed a LVF advantage for consonant–vowel strings? There are several potential explanations. One suggestion is that although consonant–vowel strings carry meaning (e.g., Bonandrini, Amenta, et al., Citation2023; Hendrix & Sun, Citation2021), they carry less information than words. This may result in reduced left hemisphere involvement for the visual processing of consonant–vowel strings, although this alone does not explain the LVF advantage: while reduced left hemisphere involvement would fit a scenario of bilateral activation during the visual processing of consonant–vowel strings, we in fact observed a clear LVF advantage.

A second explanation relates to word length. Young and Ellis (Citation1985) reported that when 3- to 6-letter words were presented in the left or right visual fields, the percentage of correctly reported words decreased with increasing word length in the LVF, while accuracy in the RVF did not vary as a function of word length. It remains plausible, then, that very short, visually presented two-letter stimuli may be preferentially processed in the LVF. Following from this, the differential results for visual consonant–vowel strings and visual words could also potentially be explained by low-level differences in visual presentation. In the current work, both the consonant–vowel and word visual half-field tasks required the identification of a single letter at the start of a string to successfully identify the letter string. As such, it is entirely possible that participants engaged solely in letter identification rather than processing consonant–vowel strings as whole units. In the LVF, the critical letter in the word task was presented much further to the left than in the consonant–vowel task, which may have caused the LVF advantage to dissipate for words as the critical letter fell further from foveal vision. Such a possibility is in keeping with the results reported by Young and Ellis (Citation1985), that there is a stronger RVF advantage with increasing word length, as initial letters would fall further from foveal vision. Future work could test this possibility by either ensuring that the critical letters are presented in an identical position across stimuli or presenting stimuli so that they subtend the same degree of visual angle on the retina.

A third explanation, discussed by both Voyer and Boudreau (Citation2003) and Oltedal and Hugdahl (Citation2017), is the idea that auditory consonant–vowel strings are processed as linguistic units whereas visual consonant–vowel strings are processed spatially. In other words, in the case of visually presented lateralized consonant–vowel stimuli, the task could be solved without the need to access the linguistic processing system. Stimuli would simply be visual objects on opposite sides of space that compete for processing resources, and the competition would be won by the right hemisphere, by virtue of its advantage over the left hemisphere for spatial selective attention (Becker & Karnath, Citation2007; Bowen et al., Citation1999; Heilman et al., Citation2000; Ringman et al., Citation2004). Considering only prior research, this argument seems the most parsimonious given Voyer and Boudreau's observation of a small negative correlation between laterality indices on auditory and visual consonant–vowel tasks, and Geffen et al.'s (Citation1972) observation of a LVF advantage on a physical letter-matching task thought to demand spatial processing, compared with a RVF advantage when matching the same stimuli by name (thought to demand linguistic processing). Further consistent with this interpretation, tasks that demand spatial processing, such as bilateral presentation of bar graph stimuli (e.g., Berryman & Kennelly, Citation1992; Boles, Citation1986), colour scales (e.g., Karlsson et al., Citation2019; Mattingley et al., Citation1994), and line bisection (e.g., Jewell & McCourt, Citation2000), all show a clear LVF advantage. What is novel within our work is that instead of a small negative correlation between laterality indices on auditory and visual consonant–vowel tasks, we report a positive relationship between the laterality indices for visual consonant–vowel strings and words: the greater the RVF advantage for visual word processing, the greater the RVF advantage (or rather, the smaller the LVF advantage) for written consonant–vowel string processing. Although the present evidence is not conclusive, it suggests that the LVF advantage for visual consonant–vowel string processing does not reflect a specific role for the right hemisphere in orthographic processing (i.e., the two are not processed by independently lateralized functions); if this were the case, we would expect a negative correlation between laterality indices for written consonant–vowel stimuli and words. Rather, the data suggest that the lateralization of consonant–vowel and word processing is underpinned by the same process, and that the LVF advantage for written consonant–vowel strings is a by-product of how the domain-general visual-attention system and the specific neurocognitive mechanisms involved in word recognition interact: consonant–vowel stimuli fail to engage the linguistic processing system in the same way as words, and as a consequence they are processed as mere visual entities, that is, spatially.
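For the reader's reference in what follows, laterality indices in this literature are conventionally computed from the counts of correctly reported right-side (R) and left-side (L) stimuli; we state the standard convention here, which should not be read as a restatement of our methods:

$$\mathrm{LI} = \frac{R - L}{R + L} \times 100$$

Positive values indicate a RE or RVF advantage, consistent with left hemisphere dominance, and negative values a LE or LVF advantage.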

Our experimental design enabled us to test several questions about the nature of co-lateralization for perceptual biases. Recent evidence involving lateralized blood flow (COLA Consortium, Citation2022; Woodhead et al., Citation2019, Citation2021) has indicated that language lateralization is not unitary; that is, language functions are not consistently left or right lateralized within an individual. In line with this suggestion, laterality indices from the same participants completing multiple perceptual tasks are often poorly correlated (e.g., Bryden, Citation1965; Hines & Satz, Citation1974; Wexler & King, Citation1990; Parker, Woodhead, et al., Citation2021; Van der Haegen & Brysbaert, Citation2018). Furthermore, laterality indices for visual half-field word recognition tasks are poorly correlated with laterality indices on dichotic listening tasks, indicating independence (cf. Zurif & Bryden, Citation1969; see also COLA Consortium, Citation2022, for evidence that these correlations extend to visual half-field tasks with pictures). What these studies all have in common is that they used different stimuli, failed to control specific psycholinguistic features of the stimuli (leading to different outcomes in lateralization; Bonandrini, Paulesu, et al., Citation2023), and failed to account for task reliability, so it is perhaps not surprising that laterality indices on dichotic listening and visual half-field tasks are often poorly correlated. Therefore, in a planned exploratory analysis that accounted for these limitations, we pitted four models against each other, which assumed, respectively, complete independence of perceptual biases, within-modality relationships, stimulus-type relationships, or unitary laterality. When we compared the four models, the best-fitting model was a modality-specific model in which laterality indices for tasks within the same modality were related and there were no cross-modality relationships. These results are consistent with Voyer and Boudreau's (Citation2003) observation that performance on a consonant–vowel dichotic listening task was poorly correlated with performance on a consonant–vowel visual half-field task, which they suggested reflects the recruitment of functionally independent cognitive systems across modalities. The dissociation between laterality indices across modalities could therefore be due to greater variability and/or complexity in the neurocognitive operations involved in lateralized word recognition than in dichotic listening; future research is needed to shed light on this possibility. Notably, the present dissociation between functional lateralization as measured with dichotic listening paradigms and with visual half-field tasks suggests that behavioural tasks measuring functional language lateralization are not equivalent to one another: visual half-field tasks could fall short of predicting lateralization for receptive language, whereas lateralization metrics obtained through dichotic listening may inaccurately predict functional lateralization for reading. This advocates for the use, in the context of measuring the functional lateralization of language, of a battery of tests tapping the lateralization of different language sub-functions rather than a single test.
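As an illustration of how such a model comparison can be summarized, the sketch below converts AIC values into Akaike weights (Wagenmakers & Farrell, Citation2004), which can be read as the relative support for each candidate model. This is a minimal sketch in R: the numeric AIC values are hypothetical placeholders, not the values from our analysis (the actual scripts are available in the OSF repository).

    # Hypothetical AIC values for the four candidate models; in practice
    # these would come from AIC() applied to the fitted model objects.
    aics <- c(independence = 1023.4, modality = 1011.2,
              stimulus     = 1020.8, unitary  = 1018.5)
    delta    <- aics - min(aics)                        # AIC differences
    akaike_w <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights
    round(akaike_w, 3)  # weights sum to 1; larger = more support

Under these placeholder values, the modality-specific model would carry the bulk of the weight; the sketch shows only the arithmetic of the comparison, not our reported results.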

Regarding the dissociation between laterality indices across modalities, it is not immediately obvious how these findings fit with the COLA Consortium's (Citation2022) argument that there are two language centres in the brain: one that is left lateralized and one that is bilateral and centred on zero. One possibility is that the dichotic listening task taps the left lateralized language centre, while word recognition engages the bilateral language centre or, rather, a centre that is less left lateralized than the complementary auditory language centre. An alternative account of these modality differences that involves two brain centres is that the recognition of visual language activates two different systems: a left lateralized language system and a right lateralized visual-attention system. On this account, the processing of orthography could be viewed as the summation of the visual-attention system acting on top of more general language processing. By comparison, consonant–vowel strings may only weakly engage the language system and rely more heavily on the visual-attention system. Such an account would explain both the positive correlation between laterality indices on the two visual tasks, as both recruit the visual-attention system, and the reduced left hemisphere processing for visual consonant–vowel strings. Future research will be needed to adjudicate between these possibilities.

The current work oversampled left-handers to increase variability in laterality measures for our analysis of covariances. While we see this as a major advantage of the research, the degree to which the findings would replicate in a more typical right-handed-only sample remains to be established, as a wealth of research has indicated that left-handers show more variability in lateralization (e.g., Bruckert et al., Citation2021; Mazoyer et al., Citation2014). Given that both Voyer and Boudreau (Citation2003) and Oltedal and Hugdahl (Citation2017) reported LVF advantages for the visual consonant–vowel task, we do not anticipate major differences between the current study and one recruiting only right-handers, although the visual asymmetries may be more strongly biased. For the modelling of covariances, it is entirely possible that slight differences may be observed between groups. In a study of 31 left-handers and 43 right-handers, Woodhead et al. (Citation2021) reported not only significant differences in laterality between groups for four out of six laterality tasks, but also stronger evidence for dissociable lateralization in left-handers. Based on this evidence, a study involving only right-handers may show weaker evidence for the suggestion that laterality clusters on sensory domain. Addressing this question would require a much larger sample than that reported here to ensure sufficient variability when conducting group-level analyses.

Before our concluding remarks, it is important to reflect on the use of online behavioural research methods. As this study was conducted online, we could not control a number of environmental factors that may have influenced performance. While we used screening techniques for appropriate headphone use and a credit card scaling tool to standardize the size of stimuli across displays, we could not control contrast, luminance, or the participants' distance from the screen. The last of these is perhaps the most problematic, as subtle head movements between trials may have resulted in visual stimuli subtending different degrees of visual angle on the retina between participants, or between trials for the same participant. It is reassuring nonetheless that we replicated the previous literature, despite this potential for increased noise, whilst maintaining reasonable split-half reliability for each task. One way to add credibility to our novel findings would be to repeat these tasks within a laboratory setting and compare the results with those from the same participants completing the tasks online.
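For readers unfamiliar with the metric, the sketch below illustrates one common way of estimating split-half reliability: correlating accuracy on odd- and even-numbered trials and applying the Spearman–Brown correction (Spearman, Citation2010; see Parsons et al., Citation2019). The data are simulated for illustration only; this is not the code behind our reported reliabilities, which is available in the OSF repository.

    # Illustrative split-half reliability on simulated accuracy data.
    set.seed(1)
    n_subj <- 50; n_trials <- 60
    ability <- plogis(rnorm(n_subj, mean = 1, sd = 0.8))  # per-subject accuracy
    acc  <- t(sapply(ability, function(p) rbinom(n_trials, 1, p)))
    odd  <- rowMeans(acc[, seq(1, n_trials, 2)])  # odd-numbered trials
    even <- rowMeans(acc[, seq(2, n_trials, 2)])  # even-numbered trials
    r_half <- cor(odd, even)                      # half-test correlation
    r_full <- 2 * r_half / (1 + r_half)           # Spearman-Brown estimate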

To conclude, we recruited a diverse sample of left- and right-handed participants to (1) further examine the serendipitous finding of a LVF advantage for the processing of consonant–vowel strings and (2) conduct a test of the dissociable language laterality hypothesis. While we cannot be sure of the mechanisms underlying the LVF advantage, we can confirm that it is unlikely to be a consequence of the stimulus presentation conditions, as it did not extend to words, and that it is not a consequence of repetition, because the LVF advantage did not become more extreme with exposure to the stimuli. Regarding our second purpose, via a formal modelling approach, we showed that language laterality, as indexed by perceptual biases, is not a unitary construct and instead clusters on the modality in which stimuli are presented. This study therefore represents a step forward not only in understanding perceptual biases in analogous auditory and visual tasks, but also in understanding how perceptual asymmetries vary within individuals.

Ethical approval statement

The experimental procedure was granted ethical approval by the UCL Department of Experimental Psychology’s Ethics Chair, ethics application number: EP/2021/013.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The materials and the data sets generated and analysed are available in the Open Science Framework (OSF) repository, https://osf.io/72sk6/. This repository also includes an R Markdown script to reproduce all analyses and generate the manuscript.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Notes

1 A Poisson linear mixed-effects model was selected because the dependent variable is an integer count of correct responses (the number of correctly identified stimuli summed within each ear/visual field), which is not appropriate for ordinary least squares regression. Treating the data as counts, as opposed to continuous, also aided interpretation of an unplanned exploratory analysis comparing counts of correct answers across blocks.
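As an illustration of the model class described in this note, the sketch below fits a Poisson mixed-effects model with lme4 (Bates et al., Citation2015) to simulated count data. The data frame, variable names, and effect sizes are hypothetical and do not reproduce our analysis; the actual scripts are available in the OSF repository.

    library(lme4)
    # Simulated stand-in data: one row per participant x side x block,
    # with n_correct = count of correctly identified stimuli in that cell.
    set.seed(2)
    dat <- expand.grid(participant = factor(1:40),
                       side  = factor(c("left", "right")),
                       block = factor(1:3))
    subj <- rnorm(40, 0, 0.1)                    # small participant effects
    base <- ifelse(dat$side == "right", 14, 11)  # higher counts on the right
    dat$n_correct <- rpois(nrow(dat),
                           lambda = base * exp(subj[as.integer(dat$participant)]))
    # Poisson GLMM: fixed effects of side, block, and their interaction,
    # with a random intercept per participant.
    m <- glmer(n_correct ~ side * block + (1 | participant),
               data = dat, family = poisson)
    summary(m)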

2 While we had no a priori predictions regarding handedness and footedness, we decided to gather this information, given the ease of collecting it, in case it proves useful for generating novel hypotheses in future exploratory work.

3 An initial pilot study with an 80 ms duration indicated that participants were at floor performance and were unable to recognise word stimuli with sufficient accuracy.

4 We initially pre-registered that we would remove participants who scored less than 16.7% on these tasks, but ultimately opted for a more stringent inclusion criterion to filter out poor performers. The pattern of results was not affected by this decision and our conclusions remained unchanged.

References

  • Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N., & Evershed, J. K. (2020). Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods, 52(1), 388–407. https://doi.org/10.3758/s13428-019-01237-x
  • Bates, D., Mächler, M., Bolker, B. M., & Walker, S. C. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
  • Becker, E., & Karnath, H.-O. (2007). Incidence of visual extinction after left versus right hemisphere stroke. Stroke, 38(12), 3172–3174. https://doi.org/10.1161/STROKEAHA.107.489096
  • Berryman, M. L., & Kennelly, K. J. (1992). Letter memory loads change more than visual-field advantage: Interhemispheric coupling effects. Brain and Cognition, 18(2), 152–168. https://doi.org/10.1016/0278-2626(92)90076-X
  • Bethmann, A., Tempelmann, C., De Bleser, R., Scheich, H., & Brechmann, A. (2007). Determining language laterality by fMRI and dichotic listening. Brain Research, 1133(1), 145–157. https://doi.org/10.1016/j.brainres.2006.11.057
  • Bless, J. J., Westerhausen, R., Arciuli, J., Kompus, K., Gudmundsen, M., & Hugdahl, K. (2013). “Right on all occasions?” On the feasibility of laterality research using a smartphone dichotic listening application. Frontiers in Psychology, 4, 42. https://doi.org/10.3389/fpsyg.2013.00042
  • Bless, J. J., Westerhausen, R., Torkildsen, J. V. K., Gudmundsen, M., Kompus, K., & Hugdahl, K. (2015). Laterality across languages: Results from a global dichotic listening study using a smartphone application. Laterality: Asymmetries of Body, Brain and Cognition, 20(4), 434–452. https://doi.org/10.1080/1357650X.2014.997245
  • Boles, D. B. (1986). Hemispheric differences in the judgment of number. Neuropsychologia, 24(4), 511–519. https://doi.org/10.1016/0028-3932(86)90095-3
  • Boles, D. B. (1987). Reaction time asymmetry through bilateral versus unilateral stimulus presentation. Brain and Cognition, 6(3), 321–333. https://doi.org/10.1016/0278-2626(87)90129-1
  • Boles, D. B. (1990). What bilateral displays do. Brain and Cognition, 12(2), 205–228. https://doi.org/10.1016/0278-2626(90)90016-H
  • Boles, D. B. (1994). An experimental comparison of stimulus type, display type, and input variable contributions to visual field asymmetry. Brain and Cognition, 24(2), 184–197. https://doi.org/10.1006/brcg.1994.1010
  • Bonandrini, R., Amenta, S., Sulpizio, S., Tettamanti, M., Mazzucchelli, A., & Marelli, M. (2023). Form to meaning mapping and the impact of explicit morpheme combination in novel word processing. Cognitive Psychology, 145, 101594. https://doi.org/10.1016/j.cogpsych.2023.101594
  • Bonandrini, R., Paulesu, E., Traficante, D., Capelli, E., Marelli, M., & Luzzatti, C. (2023). Lateralized reading in the healthy brain: A behavioral and computational study on the nature of the visual field effect. Neuropsychologia, 180, 108468. https://doi.org/10.1016/j.neuropsychologia.2023.108468
  • Bourne, V. J. (2006). The divided visual field paradigm: Methodological considerations. Laterality, 11(4), 373–393. https://doi.org/10.1080/13576500600633982
  • Bowen, A., McKenna, K., & Tallis, R. C. (1999). Reasons for variability in the reported rate of occurrence of unilateral spatial neglect after stroke. Stroke, 30(6), 1196–1202. https://doi.org/10.1161/01.STR.30.6.1196
  • Brederoo, S. G., Nieuwenstein, M. R., Cornelissen, F. W., & Lorist, M. M. (2019). Reproducibility of visual-field asymmetries: Nine replication studies investigating lateralization of visual information processing. Cortex, 111, 100–126. https://doi.org/10.1016/j.cortex.2018.10.021
  • Brederoo, S. G., Van der Haegen, L., Brysbaert, M., Nieuwenstein, M. R., Cornelissen, F. W., & Lorist, M. M. (2020). Towards a unified understanding of lateralized vision: A large-scale study investigating principles governing patterns of lateralization using a heterogeneous sample. Cortex, 133, 201–214. https://doi.org/10.1016/j.cortex.2020.08.029
  • Bruckert, L., Thompson, P. A., Watkins, K., Bishop, D. V., & Woodhead, Z. (2021). Investigating the effects of handedness on the consistency of lateralization for speech production and semantic processing tasks using functional transcranial Doppler sonography. Laterality, 26(6), 680–705. https://doi.org/10.1080/1357650X.2021.1898416
  • Bryden, M. P. (1964). The manipulation of strategies of report in dichotic listening. Canadian Journal of Psychology, 18(2), 126–138. https://doi.org/10.1037/h0083290
  • Bryden, M. P. (1965). Tachistoscopic recognition, handedness, and cerebral dominance. Neuropsychologia, 3(1), 1–8. https://doi.org/10.1016/0028-3932(65)90015-1
  • Bryden, M. P. (1970). Left-right differences in tachistoscopic recognition as a function of familiarity and pattern orientation. Journal of Experimental Psychology, 84(1), 120–122. https://doi.org/10.1037/h0028927
  • Bryden, M. P., & Macrae, L. (1988). Dichotic laterality effects obtained with emotional words. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 1(3), 171–176.
  • Bürkner, P.-C. (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80, 1–28. https://doi.org/10.18637/jss.v080.i01
  • COLA Consortium (2022). Inconsistent language lateralisation – testing the dissociable language laterality hypothesis using behaviour and lateralised cerebral blood flow. Cortex, 154, 105–134. https://doi.org/10.1016/j.cortex.2022.05.013
  • Geffen, G., Bradshaw, J. L., & Nettleton, N. C. (1972). Hemispheric asymmetry: Verbal and spatial encoding of visual stimuli. Journal of Experimental Psychology, 95(1), 25–31. https://doi.org/10.1037/h0033265
  • Godfrey, H. K., & Grimshaw, G. M. (2016). Emotional language is all right: Emotional prosody reduces hemispheric asymmetry for linguistic processing. Laterality, 21(4–6), 568–584. https://doi.org/10.1080/1357650X.2015.1096940
  • Green, P., MacLeod, C. J., & Nakagawa, S. (2016). SIMR: An r package for power analysis of generalized linear mixed models by simulation. Methods in Ecology and Evolution, 7(4), 493–498. https://doi.org/10.1111/2041-210X.12504
  • Hausmann, M., Brysbaert, M., van der Haegen, L., Lewald, J., Specht, K., Hirnstein, M., Willemin, J., Barton, J., Buchilly, D., Chmetz, F., Roch, M., Brederoo, S., Dael, N., & Mohr, C. (2019). Language lateralisation measured across linguistic and national boundaries. Cortex, 111, 134–147. https://doi.org/10.1016/j.cortex.2018.10.020
  • Heilman, K. M., Bowers, D., Coslett, H. B., Whelan, H., & Watson, R. T. (1985). Directional hypokinesia: Prolonged reaction times for leftward movements in patients with right hemisphere lesions and neglect. Neurology, 35(6), 855. https://doi.org/10.1212/WNL.35.6.855
  • Heilman, K. M., Valenstein, E., & Watson, R. T. (2000). Neglect and related disorders. Seminars in Neurology, 20(4), 463–470. https://doi.org/10.1055/s-2000-13179
  • Hellige, J. B. (1993). Hemispheric asymmetry: What’s right and what’s left. Harvard University Press.
  • Hendrix, P., & Sun, C. C. (2021). A word or two about nonwords: Frequency, semantic neighborhood density, and orthography-to-semantics consistency effects for nonwords in the lexical decision task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(1), 157. https://doi.org/10.1037/xlm0000819
  • Hines, D., & Satz, P. (1974). Cross-modal asymmetries in perception related to asymmetry in cerebral function. Neuropsychologia, 12(2), 239–247. https://doi.org/10.1016/0028-3932(74)90009-8
  • Hugdahl, K. (2011). Fifty years of dichotic listening research - still going and going and … introduction. Brain and Cognition, 76(2), 211–213. https://doi.org/10.1016/j.bandc.2011.03.006
  • Hunter, Z. R., & Brysbaert, M. (2008). Visual half-field experiments are a good measure of cerebral language dominance if used properly: Evidence from fMRI. Neuropsychologia, 46(1), 316–325. https://doi.org/10.1016/j.neuropsychologia.2007.07.007
  • Iacoboni, M., & Zaidel, E. (1996). Hemispheric independence in word recognition: Evidence from unilateral and bilateral presentations. Brain and Language, 53(1), 121–140. https://doi.org/10.1006/brln.1996.0040
  • Jewell, G., & McCourt, M. E. (2000). Pseudoneglect: A review and meta-analysis of performance factors in line bisection tasks. Neuropsychologia, 38(1), 93–110. https://doi.org/10.1016/S0028-3932(99)00045-7
  • Karlsson, E. M., Johnstone, L. T., & Carey, D. P. (2019). The depth and breadth of multiple perceptual asymmetries in right handers and non-right handers. Laterality, 24(6), 707–739. https://doi.org/10.1080/1357650X.2019.1652308
  • Kimura, D. (1961). Some effects of temporal-lobe damage on auditory perception. Canadian Journal of Psychology, 15(3), 156–165. https://doi.org/10.1037/h0083218
  • Kimura, D. (2011). From ear to brain. Brain and Cognition, 76(2), 214–217. https://doi.org/10.1016/j.bandc.2010.11.009
  • Lemhöfer, K. M., & Broersma, M. (2012). Introducing LexTALE: A quick and valid lexical test for advanced learners of English. Behavior Research Methods, 44(2), 325–343. https://doi.org/10.3758/s13428-011-0146-0
  • Mattingley, J. B., Bradshaw, J. L., Nettleton, N. C., & Bradshaw, J. A. (1994). Can task specific perceptual bias be distinguished from unilateral neglect? Neuropsychologia, 32(7), 805–817. https://doi.org/10.1016/0028-3932(94)90019-1
  • Mazoyer, B., Zago, L., Jobard, G., Crivello, F., Joliot, M., Perchey, G., Mellet, E., Petit, L., & Tzourio-Mazoyer, N. (2014). Gaussian mixture modeling of hemispheric lateralization for language in a large sample of healthy individuals balanced for handedness. PLoS One, 9(6), e101165. https://doi.org/10.1371/journal.pone.0101165
  • Mills, R., Woodhead, Z. V. J., & Parker, A. J. (2022). Orthographic neighborhood effects during lateralized lexical decision are abolished with bilateral presentation. Journal of Experimental Psychology: Human Perception and Performance, 48(5), 481–496. https://doi.org/10.1037/xhp0000997
  • Milne, A., Bianco, R., Poole, K., Zhao, S., Oxenham, A., Billig, A., & Chait, M. (2021). An online headphone screening test based on dichotic pitch. Behavior Research Methods, 53(4), 1551–1562. https://doi.org/10.3758/s13428-020-01514-0
  • Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. https://doi.org/10.1016/0028-3932(71)90067-4
  • Oltedal, L., & Hugdahl, K. (2017). Opposite brain laterality in analogous auditory and visual tests. Laterality, 22(6), 690–702. https://doi.org/10.1080/1357650X.2016.1269335
  • Parker, A., Egan, C., Grant, J., Harte, S., Hudson, B., & Woodhead, Z. (2021, April 28). The role of orthographic neighbourhood effects in lateralized lexical decision: A replication study and meta-analysis.
  • Parker, A. J., Woodhead, Z. V. J., Thompson, P. A., & Bishop, D. V. M. (2021). Assessing the reliability of an online behavioural laterality battery: A pre-registered study. Laterality, 26(4), 359–397. https://doi.org/10.1080/1357650X.2020.1859526
  • Parsons, S., Kruijt, A.-W., & Fox, E. (2019). Psychological science needs a standard practice of reporting the reliability of cognitive-behavioral measurements. Advances in Methods and Practices in Psychological Science, 2(4), 378–395. https://doi.org/10.1177/2515245919879695
  • Perea, M., Acha, J., & Fraga, I. (2008). Lexical competition is enhanced in the left hemisphere: Evidence from different types of orthographic neighbors. Brain and Language, 105(3), 199–210. https://doi.org/10.1016/j.bandl.2007.08.005
  • R Development Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing.
  • Ringman, J. M., Saver, J. L., Woolson, R. F., Clarke, W. R., & Adams, H. P. (2004). Frequency, risk factors, anatomy, and course of unilateral neglect in an acute stroke cohort. Neurology, 63(3), 468–474. https://doi.org/10.1212/01.WNL.0000133011.10689.CE
  • Spearman, C. (2010). The proof and measurement of association between two things. International Journal of Epidemiology, 39(5), 1137–1150. https://doi.org/10.1093/ije/dyq191
  • Sullivan, K. F., & McKeever, W. F. (1985). The roles of stimulus repetition and hemispheric activation in visual half-field asymmetries. Brain and Cognition, 4(4), 413–429. https://doi.org/10.1016/0278-2626(85)90030-2
  • Van der Haegen, L., & Brysbaert, M. (2018). The relationship between behavioral language laterality, face laterality and language performance in left-handers. PLoS One, 13(12), e0208696. https://doi.org/10.1371/journal.pone.0208696
  • Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with s (4th ed.). Springer. https://www.stats.ox.ac.uk/pub/MASS4/.
  • Voeten, C. C. (2023). buildmer: Stepwise elimination and term reordering for mixed-effects regression. https://CRAN.R-project.org/package=buildmer.
  • Voyer, D. (1998). On the reliability and validity of noninvasive laterality measures. Brain and Cognition, 36(2), 209–236. https://doi.org/10.1006/brcg.1997.0953
  • Voyer, D., & Boudreau, V. G. (2003). Cross-modal correlation of auditory and visual language laterality tasks: A serendipitous finding. Brain and Cognition, 53(2), 393–397. https://doi.org/10.1016/S0278-2626(03)00152-0
  • Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779–804. https://doi.org/10.3758/BF03194105
  • Wagenmakers, E.-J., & Farrell, S. (2004). AIC model selection using Akaike weights. Psychonomic Bulletin & Review, 11(1), 192–196. https://doi.org/10.3758/BF03206482
  • Wexler, B. E., & Halwes, T. (1983). Increasing the power of dichotic methods: The fused rhymed words test. Neuropsychologia, 21(1), 59–66. https://doi.org/10.1016/0028-3932(83)90100-8
  • Wexler, B. E., & King, G. P. (1990). Within-modal and cross-modal consistency in the direction and magnitude of perceptual asymmetry. Neuropsychologia, 28(1), 71–80. https://doi.org/10.1016/0028-3932(90)90087-5
  • Willemin, J., Hausmann, M., Brysbaert, M., Dael, N., Chmetz, F., Fioravera, A., Gieruc, K., & Mohr, C. (2016). Stability of right visual field advantage in an international lateralized lexical decision task irrespective of participants’ sex, handedness or bilingualism. Laterality, 21(4–6), 502–524. https://doi.org/10.1080/1357650X.2015.1130716
  • Woodhead, Z. V. J., Bradshaw, A. R., Wilson, A. C., Thompson, P. A., & Bishop, D. V. M. (2019). Testing the unitary theory of language lateralization using functional transcranial Doppler sonography in adults. Royal Society Open Science, 6(3), 181801. https://doi.org/10.1098/rsos.181801
  • Woodhead, Z. V. J., Thompson, P. A., Karlsson, E. M., & Bishop, D. V. M. (2021). An updated investigation of the multidimensional structure of language lateralization in left- and right-handed adults: A test–retest functional transcranial Doppler sonography study with six language tasks. Royal Society Open Science, 8(2), 200696. https://doi.org/10.1098/rsos.200696
  • Young, A. W., & Ellis, A. W. (1985). Different methods of lexical access for words presented in the left and right visual hemifields. Brain and Language, 24(2), 326–358. https://doi.org/10.1016/0093-934X(85)90139-7
  • Zurif, E. B., & Bryden, M. P. (1969). Familial handedness and left-right differences in auditory and visual perception. Neuropsychologia, 7(2), 179–187. https://doi.org/10.1016/0028-3932(69)90015-3