Research Article

Real-time feedback improves imagined 3D primitive object classification from EEG

Received 21 Feb 2022, Accepted 21 Mar 2024, Published online: 27 Mar 2024

ABSTRACT

Brain-computer interfaces (BCIs) enable movement-independent information transfer from humans to computers. Decoding imagined 3D objects from electroencephalography (EEG) may support design ideation in engineering design and image reconstruction from EEG, with applications in brain-computer interfaces, neuro-prosthetics, and cognitive neuroscience research. Object-imagery decoding studies to date predominantly employ functional magnetic resonance imaging (fMRI) and do not provide real-time feedback. We present four linked studies investigating: (1) whether five imagined 3D primitive objects (sphere, cone, pyramid, cylinder, and cube) can be decoded from EEG; and (2) the influence of real-time feedback on decoding accuracy. Studies 1 (N = 10) and 2 (N = 3) involved a single-session and a multi-session design, respectively, without real-time feedback. Studies 3 (N = 2) and 4 (N = 4) involved multiple sessions, without and with real-time feedback. The four studies comprised 69 sessions in total, of which 26 were online with real-time feedback (15,480 trials for offline sessions and at least 6,840 trials for online sessions). We demonstrate that decoding accuracy over multiple sessions improves significantly with biased feedback (p = 0.004) compared to performance without feedback. This is the first study to show the effect of real-time feedback on the performance of a primitive object-imagery BCI.

1. Introduction

Brain-computer interface (BCI) research aims to develop systems that enable movement-independent communication between the user and a computer/device, using information encoded in neural signals [Citation1]. BCIs have been investigated across a variety of application areas such as classifying the semantic and emotional content of imagined representations [Citation2], monitoring cognitive state for lie detection [Citation3], written communication using BCI spellers [Citation4], wheelchair control [Citation5], controlling objects in real-world situations [Citation6, Citation7] or virtual spaces [Citation8–10], neurogaming [Citation11], enabling assessment in prolonged disorders of consciousness (PDoC) [Citation12] and enhancing recovery following stroke [Citation13], to name just a few. However, only a few studies have investigated the detection of visual imagery and working memory [Citation14], the classification of mentally imagined real-world objects [Citation15, Citation16], the shape of imagined 3D primitive objects [Citation17–19] or different image categories [Citation20]. For example, imagined object classification could be a precursor for BCI applications in computer-aided design (CAD), computer-aided manufacturing (CAM) [Citation21] and computer-aided engineering design (CAED) [Citation22, Citation23], along with augmented virtual reality (AVR) [Citation24, Citation25], to inform alternative, neurally informed design ideation and visual creativity [Citation25, Citation26]. In this work, we focus on the state-of-the-art in decoding shape/object imagery from electroencephalography (EEG).

A noninvasive BCI system commonly uses voluntary modulation of electroencephalography (EEG) signals to control an electronic device. However, to date, most studies investigating the relationship between brain activity and visual object imagery tasks rely on functional magnetic resonance imaging (fMRI), which has a lower temporal resolution than EEG, making it less suitable for a BCI [Citation27]. fMRI does enable measuring activity from deep brain structures and provides enhanced spatial resolution compared to EEG. Existing fMRI studies therefore underpin the rationale for investigating various neuroimaging modalities to understand neural modulations in object imagery tasks, and are reviewed here.

1.1. fMRI studies

Visual object imagery is related to several brain functions, such as working memory [Citation28–31], shape-specific processing in the visual cortex [Citation32], imagined and perceptual scene-specific brain activity [Citation33], mental imagery during dreaming [Citation34], visual search [Citation35], and the relationship between mental imagery and emotions [Citation36]. Visual perception and mental imagery activate similar brain patterns [Citation37–42]. Although the primary visual cortex has an important role in mental imagery and perception [Citation43, Citation44], the occipitotemporal cortex has been shown to encode sensory, semantic, and emotional properties that are important for both [Citation2]. The relationship between working memory and long-term memory is reviewed by Bradly and colleagues [Citation45], highlighting that connectivity between short-term and long-term memory is important for a better understanding of the mechanisms underlying mental imagery and perceptual processes. Furthermore, the similarity of fMRI patterns obtained during the perception of objects and their equivalent word representations has been demonstrated [Citation46]. Mitchell and Cusack (2008) found that the limited capacity of visual short-term memory for attended objects is correlated with neural activity in the posterior parietal cortex [Citation47]. Moreover, the occipitotemporal cortex is important not only in mental imagery and visual perception but also in object-related identification [Citation48]. The hippocampus may also affect these processes [Citation49], as may the frontal and parietal cortex [Citation50]. These findings suggest that different spatio-temporal patterns, at various levels of abstraction in terms of neural signaling, should be evaluated to determine if BCIs can exploit the associated features to enable direct movement-independent interaction between the user and a computer or device.

Color, size, and rotation of perceived or imagined 3D objects may also prove useful for developing a BCI that aims to decode imagined 3D objects. It has been suggested that the physical size of visualized objects might be linked with the occipitotemporal cortex and represented in the ventral stream [Citation51, Citation52]. A recent study [Citation53] demonstrated that object size-related neural responses are organized in bilateral topographic maps, with similar cortical extents responding to large and small objects. The importance of the visual cortex in color representation is highlighted in several papers [Citation54–56]. Bird et al. [Citation57] showed that the visual cortex responded only to the size of color differences, while color categories, such as blue and green, are encoded by regions in the frontal lobe. Another important property of mental imagery and visual perception is mental rotation. The rotation of imagined objects (object-rotation) and rotation of the viewpoint of the subjects (self-rotation) have been studied [Citation58]. The results show that the primary motor cortex (M1) plays an important role in the object-rotation imagery task, whereas the supplementary motor area (SMA) is important for the self-rotation imagery task.

Charest et al. [Citation59] indicated that individual differences in the early visual cortex and human inferior temporal cortex were involved in the visual detection of particular objects. With this observation, they emphasized that an individual's specific sensation of the environment might be reflected in an individually unique neural pattern in visual cortical areas. Another fMRI study [Citation34] demonstrated that perceived or visualized objects could be classified using hierarchical visual features, and that objects could be categorized based on the similarity between their properties and those of objects viewed in a previous training session. As shown, several properties are involved in mental imagery and visual perception [Citation34], which might relate to different types of brain activity, such as shape-specific visual memory [Citation32] or object size-specific information processing [Citation51, Citation53]. Due to the variety of properties involved in the mental imagery of real-world objects, a comprehensive feature selection strategy is likely required to enable accurate decoding of 3D object imagery from noninvasive neural recordings in practical end-user BCI applications.

1.2. EEG studies

As discussed above, the majority of mental imagery studies employ fMRI techniques [Citation28–59], and only a very limited number of studies have focused on decoding mentally imagined real-world objects [Citation15, Citation16], the shape of primitive objects [Citation17–19], or different image categories [Citation20] from EEG.

Kosmyna et al. [Citation15] recruited twenty-six participants for the offline classification of visual observation and imagery involving two real-world objects (flower and hammer), reporting a decoding accuracy (DA) of 61.7 ± 10.5% (M±SD) for visual observation and 55.7 ± 6.8% (M±SD) for visual imagery (theoretical chance level 50.0%). Llorella et al. [Citation16] reported a DA of 60.5 ± 13.3% (M±SD) for four participants in offline classification of four real-world objects (tree, house, plane, and dog) plus the relaxation state from EEG (theoretical chance level 20.0%). In [Citation16], the offline decoder involved a convolutional neural network (CNN) to reconstruct images of the imagined real-world objects and a genetic algorithm (GA) to find the optimal hyperparameters of the CNN. Regarding shape classification, Esfahani and Sundararajan [Citation17] focused on the offline classification of five primitive objects (sphere, cone, pyramid, cylinder, and cube) from EEG, using an Emotiv 14-channel EEG neuroheadset [Citation60]. They achieved an offline DA of 44.6 ± 6.6% (M±SD) for ten participants (theoretical chance level 20.0%). Bang et al. [Citation18], with four participants, achieved a DA of 32.6 ± 7.1% (M±SD) for offline classification of six colored primitive geometric symbols (red ‘O’, white ‘X’, yellow ‘-’, blue ‘Δ’, light blue ‘+’ and green ‘|’) using a CNN (theoretical chance level 16.7%). Llorella et al. [Citation19], using a CNN and the black hole search algorithm, obtained an offline DA of 69.6 ± 8.4% (M±SD) for the classification of two simple 2D geometric objects with eighteen participants (theoretical chance level 50.0%), and an offline DA of 35.1 ± 7.0% (M±SD) for the classification of seven simple 2D geometric objects with seven participants (theoretical chance level 14.3%). Lee et al. [Citation20] investigated classification accuracy during visual perception and visual imagination in three image categories using three different images per class (i.e. real-world objects: airplane, cup, tree; numeric digits: monochrome one, three, five; colored 2D shapes: red heart, yellow star, white triangle). They compared five classifiers: EEGNet, CNN, MultiRocket, MobileNet, and support vector machine (SVM). The highest DA was obtained with the MultiRocket framework: with seven participants, they achieved a DA of 57.0% for perception in the three categories and a DA of 46.4% for visual imagery (theoretical chance level 33.3%). A shape imagery detection application, for example a BCI-controlled CAD or CAED application, requires the brain response to be classified online in real time. In contrast, all the studies reviewed above involve single-session offline assessment without real-time feedback to the participant (and/or a controlled BCI application) regarding the decoded object.

To address this shortcoming in our understanding of the effect of online classification and feedback when decoding shape imagery, as an extension of our pilot study [Citation61], we developed an online EEG-based BCI to investigate decoding five imagined 3D primitive objects (sphere, cone, pyramid, cylinder and cube) from EEG to determine if the separability of shape-specific EEG modulations is enhanced by real-time feedback to participants. We carried out our research using a four-study series wherein the paradigm was improved between each study in the series. The offline pilot paradigm was tested and evaluated in studies 1 and 2 involving a single-session and a multi-session scenario, respectively, in which no feedback was applied. The pilot version of the online paradigm was introduced in study 3 and, based on the experience gained, it was refined and gamified in study 4. In addition to presenting an investigation involving the classification of imagined objects online in real time using BCI, we provide a comprehensive analysis for the identification of frequency bands and cortical areas engaged in the visual imagery of primitive objects. The results serve as a basis for enabling further investigation into the decoding of imagined objects for applications in CAD, CAM, CAED, or AVR systems.

2. Material and methods

2.1. Participants

Ten volunteers (male (n = 7) and female (n = 3), aged 26–44 years) participated in the first offline study (study 1), three male volunteers (aged 30–44 years) participated in the second offline study (study 2), two volunteers (one male aged 21 and one female aged 20) participated in the first online study (study 3), and four male volunteers (aged 23–34 years) participated in the second online study (study 4). There were sixteen participants in total, of whom three participated in more than one study (Supplementary Table 1). The experiments were conducted in the Spatial Computing and Neurotechnology Innovation Hub (SCANi-hub) at the Intelligent Systems Research Centre (ISRC), Ulster University, United Kingdom. Before the beginning of the first session, participants were presented with information about the experimental protocol, which they were asked to read. Those who wished to participate gave consent by signing an informed consent form approved by the Ulster University research ethics committee (UREC). All participants were healthy and had normal, or corrected-to-normal, vision. Participants were recruited for each study separately and were informed about the number of sessions and the time requirements of each session. Based on discussions with participants, we believe that each participant was motivated to provide their best performance during each session. Supplementary Table 1 provides information about the dominant hand, gender, age, and BCI experience of the participants in the study series.

2.2. Experimental paradigm

Study 1 (N = 10) comprised one offline session. Study 2 (N = 3) comprised three offline sessions. Study 3 (N = 2) comprised eight offline sessions and seven online sessions. Finally, study 4 (N = 4) comprised two offline sessions and three online sessions. Table 1 summarizes the number and duration of sessions performed in studies 1–4.

Table 1. The number and duration of sessions performed in studies 1–4.

In each study, each session lasted approximately two hours, including EEG preparation time. Before the experiments began, participants were asked to look forward and maintain a constant head position, avoid teeth grinding, minimize unnecessary movements during task performance, fixate on the middle of the screen (indicated by a fixation cross during the resting period before each task), and avoid eye blinks during object imagery tasks; where possible, participants were asked to blink only after the task-end indicator cue. In each session, the participant was seated in an armchair positioned 1.5 m in front of a Fujitsu Siemens B22W-5 ECO 22” LCD monitor. For task performance, the participant was asked to perform visual mental imagery of the current target object in 3D (i.e. to mentally project the 3D shape of the target object onto the middle of the screen, as it would be seen there). Participants who reported difficulty visualizing the object in 3D were asked to imagine the object in 2D. The offline datasets recorded in studies 3 and 4 were used to prepare an initial calibration of the BCI setup for the online sessions in the associated study. The impact of feedback on participants’ performance across multiple sessions was a central research focus of the current study.

The structure of the paradigm was similar for studies 1–4. However, some elements of the paradigm evolved from study to study. In the following section, we describe the experimental paradigm that was applied to the final study (study 4). The differences between study 4 and the previous three studies are summarized in Section 2.2.2.

2.2.1. Timing of the experimental paradigm for study 4

The experimental paradigm comprised three runs, each comprising four blocks, and was presented in a gamified format as described below. Ten seconds before each block commenced, a white fixation cross was presented in the center of the screen, and a voice message informed the participant that the block was about to begin. Each block comprised the following sub-blocks: one block-initialization sub-block (involving one trial triplet, i.e. three trials) and ten further sub-blocks (involving ten trial triplets, i.e. thirty trials). In the block-initialization sub-block, three of the five 3D primitive objects (sphere, cone, pyramid, cylinder, and cube; Figure 1) were used as target objects in randomized order. The paradigm was designed using eleven trial triplets to maintain the participant’s attention using a gamified scenario that enhances engagement and motivation [Citation62], rather than presenting a monotonous series of thirty-three single trials. The ten trial triplets, comprising six repetitions of each of the five 3D primitive objects in randomized order, were used for the main analysis. The block-initialization trials were not used in the main analysis because, at the beginning of a block after a long resting period, the participant’s task-related EEG pattern may differ from the patterns generated during continuous object imagery task performance.
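To make the block structure concrete, the following Python sketch generates a randomized trial schedule for one block under the constraints described above (one discarded initialization triplet plus ten analysis triplets containing six repetitions of each object). The exact randomization procedure used in the study is not specified, so this is an illustration only.

```python
import random

OBJECTS = ["sphere", "cone", "pyramid", "cylinder", "cube"]

def block_schedule(rng=random):
    """Build the trial list for one block: an initialization triplet
    (three of the five objects, excluded from analysis) followed by ten
    analysis triplets containing six repetitions of each object."""
    init_triplet = rng.sample(OBJECTS, 3)        # block-initialization trials
    pool = OBJECTS * 6                           # 30 analysis trials in total
    rng.shuffle(pool)                            # randomized order
    analysis_triplets = [pool[i:i + 3] for i in range(0, len(pool), 3)]
    return init_triplet, analysis_triplets

init, triplets = block_schedule()
print(init)           # e.g. ['cube', 'sphere', 'cone']
print(len(triplets))  # 10
```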

Figure 1. Illustration of five 3D primitive objects displayed in studies presented in this paper.


The timing of a trial, and an example of how the screen content varied during the trial, are presented for the offline and online paradigms in Figures 2 and 3, respectively. At the beginning of each sub-block, a white fixation cross (in the middle) and three gray-colored 3D primitive objects (on the left side) were displayed on the screen. The gray-colored objects illustrated the target triplet for the current sub-block. After a 2s pause, the fixation cross was replaced with a blue replicate of the first (bottom-most) target object for a duration of 1s, indicating the target for the upcoming task; the object then disappeared, indicating the beginning of the object imagery task. During the 3s task period, the middle of the screen remained empty. The end of the task period was indicated by a 200 ms auditory tone (6 kHz beep). In parallel with the onset of the auditory tone, in the offline paradigm, the target object was displayed once again in the middle of the screen for 1s. In the online paradigm, this second appearance of the target object was replaced with the decoded object to provide visual feedback. After a 1s delay, the target object was replaced with the fixation cross and, in the offline paradigm, the color of the corresponding target on the left side of the screen changed to blue, indicating the trial had been completed. In the online paradigm, the color of the corresponding target changed to blue only if the actual task was successful; otherwise, the corresponding target changed to yellow and the incorrectly decoded object was moved from the middle to the right side of the screen. Gamification was achieved through this stacking of correctly identified objects with the same color. All trials in each trial triplet were executed as described above.

Figure 2. The offline experimental paradigm. (a) An example of the screen layout during offline task performance. (b) The timing of an offline trial. (c) An example of how the screen content varied during the second offline trial of a sub-block.


Figure 3. The online experimental paradigm. (a) An example of the screen layout during online task performance. (b) The timing of an online trial. (c) An example of how the screen content changed during the second online trial of a sub-block. In this example, the first trial was successful, as the color of the bottom-most object (cube) on the left side of the screenshots is blue. The second trial was unsuccessful, as the decoded object (pyramid) differs from the target object (cylinder), and the color of the middle object on the left side of (c) (cylinder) changed to pale yellow.


Each 23s sub-block (Figure 4(a)) comprised a sub-block initialization pause and three trials. Each 260s block (Figure 4(b)) comprised a block-initialization voice message, a block-initialization sub-block, and ten sub-blocks used for the analysis. Each 20-minute run (Figure 4(c)) comprised four blocks and three inter-block resting periods (IBR; 50s each). During IBRs, the participants were asked to relax and not to move or talk. A session comprised three runs, separated by inter-run resting (IRR) periods (Figure 4(d)). The length of the IRRs was determined by the participant (typically 5 minutes). Thus, the total duration of an offline session, comprising three runs and two inter-run resting periods, was around 70 minutes, involving 72 trials for each class (i.e. 360 trials in total).

Figure 4. The timing of the experiment in a session. (a) Timing of a sub-block. (b) Timing of a block. (c) Timing of a run. (d) Timing of the experiment.


2.2.2. Differences in the experimental paradigms used for studies 1–4

Although the paradigms were largely consistent, certain elements evolved during the research from study 1 to study 4, as described below and summarized in Table 2.

  • The appearance of the objects displayed on the screen was refined after study 2. The 3D primitive objects used in studies 1–2 and studies 3–4 are presented in Figure 1.

  • In studies 1 and 2, the thirty trials were presented as a continuous series (i.e. the trial triplet structure was not used). Therefore, the target triplet (presented on the left side of the screen in Figures 2 and 3) and the 2s trial triplet initialization pause were not applied in studies 1 and 2. The triplet structure was introduced in studies 3 and 4 to engage participants through gamification of the task.

  • The block initialization sub-block (i.e. the extra trial triplet) was added to the paradigm only in studies 3 and 4.

  • When a participant failed a task in the online sessions of study 3, the task was repeated once to give the participant a second attempt at achieving the correct response. As repeated tasks increased the duration of the blocks significantly, the number of trials in each block was reduced from thirty to fifteen in the online sessions of study 3. To avoid this reduction in trials, the repetition of failed tasks was not applied in study 4.

Table 2. Differences in the experimental paradigms used for studies 1–4.

2.3. Data acquisition

EEG was recorded from 30 channels, and electrooculography (EOG) was recorded from two channels, using 32 active EEG sensors (g.LADYbird) with two cross-linked 16-channel g.BSamp bipolar EEG amplifiers and two AC-type g.GAMMAboxes. The EEG reference electrode was positioned on the left earlobe. The EEG was amplified (gain: 20,000), filtered (Butterworth, 0.5-100 Hz, eighth order), and sampled (A/D resolution: 24 bits, sampling rate: 250 samples/s). The ground electrode was positioned at AFz according to the international 10/20 EEG standard. The EEG montage is illustrated in Figure 5.

Figure 5. Placement of the EEG and ground electrodes (the reference electrode was placed on the right earlobe).


Communication between the Simulink [Citation63] module used for EEG data acquisition and online signal processing and the experimental protocol application, implemented in the Unity 3D game engine [Citation64], was managed using the user datagram protocol (UDP).
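The paper does not specify the message format exchanged over UDP, so the following Python sketch only illustrates the general pattern; the host, port, and single-byte label encoding are assumptions of ours.

```python
import socket

# Assumed host/port and message format (a single byte carrying the
# decoded class label, 1-5); the actual protocol is not specified.
UNITY_ADDR = ("127.0.0.1", 5005)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_decoded_class(label: int) -> None:
    """Send the decoded object class to the presentation application."""
    sock.sendto(bytes([label]), UNITY_ADDR)

send_decoded_class(3)  # e.g. 'pyramid'
```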

2.4. Offline signal processing

2.4.1. Multi-class classification using FBCSP

2.4.1.1. EEG signal processing and trial validation

The quality of the recorded EEG was inspected manually, and EEG channels with high-level noise (>200 mV) were removed from further processing. Recorded signals were band-pass filtered into six non-overlapping EEG bands (0.5-4 Hz (delta), 4-8 Hz (theta), 8-12 Hz (mu), 12-18 Hz (low beta), 18-28 Hz (high beta), and 28-40 Hz (low gamma)) in Simulink [Citation63] using high-pass and low-pass finite impulse response (FIR) filters (pass-band attenuation 0 dB, stop-band attenuation 60 dB). To reduce the size of the EEG dataset, the preprocessed EEG was downsampled from 250 Hz to 125 Hz. Reference (baseline) and task-related time intervals between 4s before and 5s after the onset of the object imagery task were epoched from the frequency-filtered EEG dataset for each EEG channel and stored. The quality of the EEG was inspected manually for each trial, and trials containing visually obvious artifacts overlapping the task period (i.e. between 2s before and 3s after the onset of the object imagery task) were removed.

Spatial filtering: EEG decoding was performed using filter-bank common spatial patterns (FBCSP) [Citation65], a well-established classification technique that enables discrimination between different types of imagined movements [Citation66]. FBCSP was used to create spatial filters that maximize the discriminability of two classes by maximizing the variance of band-pass filtered EEG signals from one class while minimizing their variance for the other class [Citation67, Citation68]. A maximum of three CSP filter pairs was used for each 2-class classifier in each frequency band.

Feature extraction: for studies 1 and 2, the time-varying log-variance of the CSP-filtered EEG was calculated, in three separate analyses, using a 500 ms, 1s, or 2s sliding window over the epochs with a 200 ms lag between consecutive windows. Based on experience gained from studies 1 and 2, the 500 ms option was omitted in studies 3 and 4.

Feature selection: the mutual information (MI) between features and the associated target class was estimated using a quantized feature space [Citation69] to identify a subset of features that maximizes classification accuracy.

2-class classification: a regularized LDA (RLDA) algorithm from the RCSP toolbox [Citation68] was used to create a linear hyperplane separating data from two classes, where the class assigned to an unseen feature vector depends on the polarity of the classifier output, determined by the feature vector's position relative to the hyperplane [Citation70].

Multi-class classification: the multi-class classification module involves multiple 2-class classifiers (target vs non-target classes) to separate each target class from the other (non-target) classes. Thus, the number of 2-class classifiers equaled the number of classes. The class label was determined by the class associated with the classifier that produced the largest signed distance on the task-class side of the hyperplane. A general overview of the applied FBCSP method is presented in Figure 6.
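For illustration, the Python sketch below outlines the core computations of one band of an FBCSP pipeline (CSP spatial filtering, log-variance features, MI-based feature selection, and a regularized LDA). It is a condensed approximation, not the authors' Simulink/RCSP implementation: the kNN-based MI estimator stands in for the quantized-feature-space MI, shrinkage LDA stands in for the RLDA toolbox, and the band list shown corresponds to studies 3–4.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from scipy.linalg import eigh
from sklearn.feature_selection import mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 125                                         # Hz, after downsampling
BANDS = [(0.5, 4), (4, 8), (8, 12), (12, 18)]    # filter bank (studies 3-4)

def bandpass(x, lo, hi, fs=FS, ntaps=251):
    """Zero-phase FIR band-pass filter; x has shape (channels, samples)."""
    b = firwin(ntaps, [lo, hi], pass_zero="bandpass", fs=fs)
    return filtfilt(b, 1.0, x, axis=-1)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters for two classes; trials_* has shape
    (n_trials, channels, samples). Returns (2*n_pairs, channels)."""
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    ca, cb = cov(trials_a), cov(trials_b)
    evals, evecs = eigh(ca, ca + cb)             # generalized eigenproblem
    order = np.argsort(evals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]   # extreme filter pairs
    return evecs[:, picks].T

def log_var_features(trials, W):
    """Normalized log-variance of the CSP-filtered signals."""
    z = np.einsum("fc,ncs->nfs", W, trials)
    v = z.var(axis=-1)
    return np.log(v / v.sum(axis=1, keepdims=True))

def train_one_vs_rest(trials, labels, target, band, n_feat=6):
    """Train one band of the target-vs-rest classifier."""
    xf = np.stack([bandpass(t, *band) for t in trials])
    y = (labels == target).astype(int)
    W = csp_filters(xf[y == 1], xf[y == 0])
    F = log_var_features(xf, W)
    sel = np.argsort(mutual_info_classif(F, y))[-n_feat:]  # top-MI features
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf.fit(F[:, sel], y)            # shrinkage LDA as a stand-in for RLDA
    return W, sel, clf

# Five-class decision: train one model per class (one-vs-rest) and assign
# the label of the classifier with the largest signed decision value, e.g.
#   label = argmax over k of clf_k.decision_function(features_k)
```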

Figure 6. Filter-bank common spatial patterns (FBCSP) based multi-class classification method. The block diagram illustrates the structure of the FBCSP-based multi-class classification method using mutual information (MI) based feature selection and a linear discriminant analysis (LDA) based classifier. The number of bands and selected features differed between the offline studies (1–2) and online studies (3–4), as described in the main text.


2.4.2. Decoding accuracy calculation with cross-validation for the offline studies

DA for the offline studies (studies 1 and 2) was calculated using an inner-outer (nested) cross-validation (CV) (Supplementary Figure 1). The inner-outer CV guarantees that the test data used for the outer level CV were not used in the inner level for hyperparameter optimization. Further details of the inner-outer CV are described in [Citation71]. All DA values were compared to the real (empirical) chance level [Citation72] which was calculated using a significance level of p < 0.01.
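Assuming [Citation72] follows the common binomial approach to empirical chance levels, the threshold can be computed as below; the function name and defaults are ours.

```python
from scipy.stats import binom

def empirical_chance(n_trials, n_classes=5, alpha=0.01):
    """DA threshold (%) that exceeds chance at significance level alpha,
    derived from the binomial cumulative distribution."""
    k = binom.ppf(1 - alpha, n_trials, 1.0 / n_classes)
    return 100.0 * k / n_trials

print(empirical_chance(360))   # ~25% for 360 five-class trials
```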

For studies 1 and 2, six outer folds and five inner folds were used. During the inner-fold CV, the optimal architecture (i.e. the configuration yielding the highest DA) was defined by the number of selected CSP filter pairs (2, 3, or 4), the number of quantization levels for the mutual information (MI) feature selection module (2, 3, or 6), the number of features selected at the output of the MI module (6, 10, 14, or 18), and the width of the classification window (500 ms, 1s, or 2s).
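A schematic of the inner-outer CV over this hyperparameter grid might look as follows; `fit_eval` stands in for the full FBCSP training/evaluation routine and is assumed to be supplied by the user.

```python
from itertools import product
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hyperparameter grid searched in the inner folds (studies 1-2 values).
GRID = list(product([2, 3, 4],          # CSP filter pairs
                    [2, 3, 6],          # MI quantization levels
                    [6, 10, 14, 18],    # selected features
                    [0.5, 1.0, 2.0]))   # classification window (s)

def nested_cv(X, y, fit_eval, outer=6, inner=5, seed=0):
    """fit_eval(params, X_tr, y_tr, X_te, y_te) -> accuracy (user-supplied)."""
    outer_cv = StratifiedKFold(outer, shuffle=True, random_state=seed)
    scores = []
    for tr, te in outer_cv.split(X, y):
        inner_cv = StratifiedKFold(inner, shuffle=True, random_state=seed)
        best = max(GRID, key=lambda p: np.mean(   # best mean inner accuracy
            [fit_eval(p, X[tr][i], y[tr][i], X[tr][v], y[tr][v])
             for i, v in inner_cv.split(X[tr], y[tr])]))
        scores.append(fit_eval(best, X[tr], y[tr], X[te], y[te]))
    return float(np.mean(scores)), float(np.std(scores))
```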

The cross-participant/session averaged time-varying DA for both offline studies was calculated and plotted using outer-level test results obtained from multiple single-session analyses.

A Wilcoxon non-parametric test was performed to assess the significance of the difference between the DA peaks obtained in the task period and in the reference (baseline) period, i.e. the pause period before the target object was displayed on the screen.
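In Python, this paired comparison can be expressed as follows; the DA values below are illustrative placeholders, not data from the study.

```python
from scipy.stats import wilcoxon

# Paired comparison of peak DA (%) in the task period vs the baseline
# period across folds; the values below are illustrative only.
task_peaks = [33, 31, 35, 30, 34, 32]
base_peaks = [22, 25, 21, 24, 23, 22]
stat, p = wilcoxon(task_peaks, base_peaks)
print(p < 0.05)   # True -> significantly higher task-period peak
```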

2.5. BCI calibration for the online studies

The BCI configurations used in the online sessions (studies 3 and 4) were calibrated using multi-session datasets recorded in sessions conducted before the calibration. Results from studies 1 and 2 showed that the four lower EEG bands (delta, theta, mu, and low beta) contributed more to the DA than the high-beta and low-gamma bands. Therefore, in studies 3 and 4, the EEG bands used in the FBCSP module were limited to these four lower bands. Moreover, based on experience from studies 1 and 2 regarding which hyperparameters produced maximal DA, in studies 3 and 4 the number of selected CSP filter pairs was set to 2 and the number of quantization levels was set to 3. The number of features selected by the MI module was chosen from 6, 8, and 10, and the width of the classification window was chosen from 1s and 2s.

To improve the cross-session stability of the calibrated BCI, the single-session-based FBCSP calibration (used in studies 1 and 2) was replaced with a cross-session-test-based FBCSP calibration. In the first step, the BCI was calibrated on a single-session dataset using each combination of the denoted hyperparameter options, separately, with six-fold CV (Supplementary Figure 2), which is equivalent to a simple outer-level CV. The time-varying DA graphs resulting from the single-session six-fold CV were plotted for each BCI configuration and compared by visual inspection. BCI configurations producing a reasonably high DA peak in the task interval (compared to the DA peaks obtained from other BCI configurations) were noted for the cross-session test. In the cross-session test, the DA was calculated for each session that was not used for calibrating the tested BCI configuration. Thus, in studies 3 and 4, the six-fold CV-based BCI calibration formed the inner level of the CV, and the cross-session test formed the outer level. BCI configurations were ranked by visual inspection for each participant separately, comparing the DA peaks obtained from multiple sessions in the cross-session test. The best-ranked BCI configuration was used in the participant's first online session. In the online BCI, the delay between the onset of the task and the classification time was set to the time between the onset of the task and the DA peak obtained in the cross-session test.

Table 3 summarizes the sessions used to calibrate and test the classifiers.

Table 3. Sessions used for calibration, stability test, application, and re-calibration of the online BCI.

2.6. Online signal processing

The online multi-class classification was performed in Simulink [Citation63] using the calibrated BCI. Studies investigating the impact of real-time feedback show that negative feedback significantly affects accuracy during online task performance. The influence of positive and negative visual feedback on motor imagery task performance has been studied using EEG and electrocardiography (ECG) [Citation73]. The findings suggest that over-biased negative feedback causes mental stress, detected in the form of significantly higher heart rate variability compared to sessions with over-biased positive feedback, and that accuracies correlate with the polarity (-/+) of the biased feedback. Alimardani et al. [Citation74] studied EEG-based BCI-operated human-like robotic hands using imagined grasp or squeeze motions. They evaluated participants' performance under different presentations of feedback, including: (1) non-biased direct feedback, (2) biased feedback corrected to a fake positive 90% accuracy, and (3) biased feedback corrected to a fake negative 20% accuracy. Participants achieved better accuracy when they received fake positive feedback, while fake negative feedback resulted in decreased performance. These results informed the online visual feedback used in the present study. When the classification was successful, the decoded (correct) object was displayed during the feedback period. However, if the classification was incorrect, there was a 33% chance (biased-positive feedback) that the correct object would be displayed rather than the decoded object. It is important to note that the DA values presented in this paper were calculated from the actual classifications rather than the displayed (positively biased) feedback.
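A minimal sketch of this biased-feedback rule, assuming the 33% correction probability described above (the function and parameter names are ours):

```python
import random

def feedback_object(decoded, target, bias=1/3, rng=random):
    """Object shown as feedback: the decoded object, except that an
    incorrect classification is replaced by the target with probability
    `bias` (positively biased feedback). DA is still scored on `decoded`."""
    if decoded != target and rng.random() < bias:
        return target
    return decoded
```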

2.7. Temporospatial spectral analysis

To identify the frequency bands and cortical areas that provided the most separable features, an analysis was performed using the multi-session datasets, involving the CSP filters and MI weights of the calibrated FBCSP classifiers. This analysis was performed separately for every session and participant. For the time-varying frequency analysis, the mean values of the MI weights (which weight the DA contribution of the features of the 2-class classifiers) were calculated for each analyzed frequency band and time point separately, and plotted as participant-specific frequency maps. For the topographical analysis, all transformation values in each CSP filter were multiplied by the MI weights obtained for the corresponding CSP filter at the time of maximal DA.
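One plausible reading of this weighting step, sketched in Python; the aggregation over filters (sum of absolute values) is our assumption, as the paper does not specify it.

```python
import numpy as np

def topographic_contribution(csp_filters, mi_weights):
    """Per-channel contribution map: scale each CSP spatial filter
    (rows of csp_filters, shape (n_filters, n_channels)) by the MI
    weight of its feature at the time of peak DA, then sum magnitudes
    over filters."""
    weighted = np.abs(csp_filters) * np.asarray(mi_weights)[:, None]
    return weighted.sum(axis=0)   # shape (n_channels,)
```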

2.8. Cross-study statistical analysis

A cross-study analysis was performed to examine differences between the DA values achieved in each session of studies 1 and 2 and those achieved in studies 3 and 4. This analysis was performed to establish whether there was a statistically significant improvement in DA scores when feedback was included in the paradigm. The Mann-Whitney U test was chosen to compare mean ranks, due to the small and unequal sample sizes. Only sessions where the maximum DA obtained in the task period differed significantly from the DA in the reference (baseline) period were included in the analysis. Furthermore, to ensure independence of observations, one participant's dataset was excluded from the cross-study analysis, as the participant had completed both study 2 and study 4, which were assigned to separate independent groups in the analyses.
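The comparison can be reproduced schematically as follows; the DA values are illustrative placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Independent-samples comparison of session-level DA (%) between the
# no-feedback studies (1-2) and studies 3-4; values are illustrative.
da_studies_1_2 = [27, 29, 31, 33, 28, 30, 35, 37]
da_studies_3_4 = [29, 32, 34, 33, 35, 30, 36, 38]
u, p = mannwhitneyu(da_studies_1_2, da_studies_3_4, alternative="two-sided")
print(u, p)
```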

3. Methods summary

An overview of the calibration, cross-validation, and evaluation methods applied in studies 1–4 is presented in Figure 7.

Figure 7. Overview of calibration, cross-validation, and evaluation methods applied to studies 1–4.


4. Results

4.1. Results of studies 1–2

Figure 8 provides an overview of participants' performance by presenting time-varying DA plots and significant peak DA values obtained in a single offline session for study 1 and in three offline sessions for study 2. The cross-participant/session averaged time-varying DA for both studies is presented in Figure 8(c), while participant/session-specific differences in the time-varying DA plots for study 2 are presented in Figure 8(d). As indicated in Figure 8(b), seven of the ten participants in study 1 and all three participants in study 2 achieved DA peaks during the task period that were significantly higher than the DA peak obtained during the corresponding pause period (Wilcoxon non-parametric test, p < 0.05). The maximum peak DA in study 1 was achieved by participant 6 (33 ± 4%), and in study 2 by participant 3 (37 ± 3%). Cross-participant/session averaged frequency maps and object-specific topographical maps (Figure 8(e)) indicate that, for participants who achieved a DA > 30% (empirical chance level = 20 ± 6%), the 0.5-4 Hz (delta) and 4-8 Hz (theta) oscillations in frontal, posterior parietal, and occipitotemporal cortical areas provided the highest contribution to the offline classification of the five imagined 3D primitive objects.

Figure 8. Results of studies 1–2. (a): grand average (thick curve) and cross-fold standard deviation (STD) (shaded area) of time-varying DA calculated in studies 1 and 2 using six folds. (b): peak DA values (thick black lines in green columns) and the corresponding cross-fold STDs (green columns) for each subject in studies 1 and 2. Subjects/sessions whose DA peak was in a similar range to the DA during the pause (Wilcoxon non-parametric test, p > 0.05) are indicated with 'N/A'. The bottom panel of (b) displays DA values for each subject and session of study 2 separately. (c): grand average (thick curve) and cross-subject STD (shaded area) of time-varying DA calculated in studies 1 and 2. (d): subject-specific time-varying DA plots from each session of study 2. (e): cross-subject averaged frequency and topographical maps indicating the frequency bands and cortical areas providing the highest contribution to DA. These maps are derived using the CSP filters and MI weights of subject-specifically calibrated BCIs, using only those subject/session combinations for which the DA peak exceeded 30% (i.e. those for which the thick black lines in the green columns of (b) indicate a DA above 30%).


4.2. Results of study 3

Figure 9 provides an overview of significant DA peak values obtained using datasets acquired for two participants in (1) seven offline sessions (used for BCI calibration), (2) one additional offline session recorded two weeks after the seventh offline session (used for an offline DA stability check), and (3) seven online sessions (of which the first five were used for BCI recalibration). Figure 9(a1) presents the mean values and standard deviations of the cross-session DA peak values obtained for BCI configurations calibrated using the datasets acquired in offline sessions 1–7 (initial calibration) and online sessions 1–5 (recalibration). Furthermore, Figure 9(a2) presents the DA peak values obtained from the cross-session stability test using the initial and recalibrated online BCI configurations selected from the single-session-based calibration presented in Figure 9(a1).

Figure 9. Results of study 3. (a): significant DA values from study 3. (a1): cross-session CV results. The mean value (thick black lines in green columns) and STD (green columns) of peak DA rates obtained from cross-session CV are presented for each session separately. The session ID selected for calibrating the final BCI is marked with a rectangle below the DA chart. (a2): detailed results of the offline cross-session stability test (i.e. DA rates obtained in test sessions of the best-performing BCI configuration selected based on (a1)), together with the DA obtained in offline session 8 and online sessions 1–7 using the BCI configuration selected based on offline sessions 1–7. (b) and (d): results of an analysis investigating the subject-specifically calibrated BCI, calibrated based on offline sessions 1–7 and online sessions 1–5, respectively. (b1) and (d1): time-varying DA plots (averaged curve (thick blue curve) and STD (shaded area)) resulting from cross-session CV during BCI calibration. (b2) and (d2): frequency bands and cortical areas with the highest DA contribution based on the CSP filters and MI weights of the calibrated BCI. (c) and (e): subject-specific time-varying DA plots obtained using the BCI calibrated based on offline sessions 1–7 and online sessions 1–5, respectively (c1: long-term stability test results from offline session 8; c2: the average and standard deviation of time-varying DA from online sessions 1–5; e1 and e2: time-varying DA from online sessions 6 and 7). The expected position of the peak DA is indicated with a black vertical solid line in the task interval of the time-varying DA plots and frequency maps.


The cross-session stability test, using datasets acquired in offline sessions 1–7, indicates an increasing trend in DA peak values over sessions 1 to 7, ranging from 25% to 34% for participant 1 and from 25% to 35% for participant 2. The long-term cross-session stability test, using data acquired in offline session 8, shows a slightly decreased DA peak (30% for participant 1 and 33% for participant 2) compared to that achieved two weeks earlier in offline session 7. However, the DA in session 8 was higher than that achieved in the first two offline sessions (≈26%).

During the first online session, both participants failed to achieve above chance level performance (DA peak in the task period was similar to the DA peak obtained in the pause period; Wilcoxon non-parametric test, p > 0.05). However, during the last two online sessions, using the recalibrated BCI, the participants reached a personal online DA maximum of 29% and 32% (participants 1 and 2, respectively) (empirical chance level 20 ± 6%).

The participant-specific frequency maps of CSP-MI weights at the corresponding DA peak (Figure 9(b2) and (d2)) for both participants indicate that the 0.5-4 Hz (delta) band provided the maximal contribution for encoding the imagined objects. It is worth noting that, as expected, the highest values of the CSP-MI weights were obtained at times corresponding to the peak DA.

The participant-specific topographical maps of MI-weighted CSP patterns (Figure 9(b2) and (d2)) indicate that frontal, posterior parietal, and occipitotemporal cortical areas provided the highest contribution for both offline and online trials. As expected, the MI-weighted CSP patterns show higher DA contributions in task-related cortical areas compared to patterns obtained during pause periods.

Finally, participant-specific time-varying cross-session DA plots were obtained from: (1) the long-term stability test (Figure 9(c1)), (2) the averaged curves and standard deviations of the time-varying DA graphs obtained from online sessions 1–5 (Figure 9(c2)), and (3) participant/session-specific time-varying DA obtained from online sessions 6 and 7 (Figure 9(e1) and (e2)). These plots indicate that the maximal DA (peak DA) for both participants was achieved with a latency matching that observed during BCI calibration in the cross-session CV.

4.3. Results of study 4

The results obtained in study 4 (Figures 10 and 11) are similar to those obtained in the pilot online study (study 3), even though the number of both offline and online sessions in study 4 was only half of that completed in study 3.

Figure 10. Overview of decoding accuracy achieved in study 4. (A): significant DA values from study 4. (A1): single-session CV results of the subject-specifically calibrated BCIs providing the highest DA in cross-session CV. The mean value (thick black lines in green columns) and STD (green columns) of peak DA rates obtained from the single-session CV are presented for each subject separately. (A2): cross-session stability test results of the subject-specifically calibrated BCIs providing the highest DA in cross-session CV. (A3): DA rates achieved by subjects 1–4 in online sessions 1–3. (B): subject-specific time-varying DA plots from online sessions 1–3. The expected position of the peak DA is indicated with a black vertical solid line in the task interval of the time-varying DA plots. DA peaks obtained in time intervals matching the visual perception and mental imagery periods are indicated with VP and MI labels, respectively. (DA values plotted in (B) are calculated using a 1s classification window preceding the plotted time point; this causes an approximately +500 ms shift in the peak DA relative to the mid-point of the classification window.)


Figure 11. Results of the subject-specific frequency and topographical analyses for study 4. The frequency and topographical maps, derived using the CSP filters and MI weights of the calibrated BCIs, indicate the frequency bands and cortical areas providing the highest DA contribution. The DA from the single-session CV of the analyzed BCI configuration is indicated below the topographical maps. Panels presenting results of a BCI configuration that provided DA > 30% in single-session CV (Figure 10(A1)) are highlighted with a bold frame.


Figure 10 presents an overview of significant DA peak values obtained for four participants in offline sessions 1–2 (used for the initial calibration of the BCI), online sessions 1–2 (using the BCI calibrated on offline sessions 1–2), and online session 3 (using the BCI recalibrated on online sessions 1–2).

The single-session CV results presented in Figure 10(A1) summarize the DA rates from the sessions selected for calibrating/recalibrating the BCI (based on cross-session CV results). Cross-session CV results of the calibrated/recalibrated BCIs are presented in Figure 10(A2). The DA values presented in Figure 10(A3) indicate that the online DA of each participant increased over the three online sessions. For each participant, the DA in online session 1 peaked within the range of the empirical chance level (20 ± 6%), while in online session 3 it reached 28%, 32%, 23%, and 32% for participants 1–4, respectively. The participant/session-specific time-varying DA plots (Figure 10(B)) indicate that, in the online sessions, the DA peak for each participant occurred near the expected time following the onset of the task. However, the DA peak was slightly higher (30%, 33%, 25%, and 34% for participants 1–4, respectively) than that obtained at the denoted time of the online classification.

The participant/session-specific frequency maps of CSP-MI weights (Figure 11) were calculated based on the results of single-session analyses for each online session. The results confirm that the 0.5-4 Hz (delta) band (in some cases along with the 4-8 Hz (theta) band) provided the highest contribution to the DA of the imagined object classification.

The participant/session-specific topographical maps of MI-weighted CSP patterns (Figure 11) confirm that frontal, posterior parietal, and occipitotemporal cortical areas provided the greatest contribution to the online classification of the five imagined 3D primitive objects from EEG.

4.4. Results of the cross-study statistical analysis and offline vs. online scenarios

Feedback was not provided in studies 1 and 2, while the initial sessions without feedback in studies 3 and 4 were followed by sessions that provided online feedback. A preliminary comparison was made between the combined first offline (no-feedback) sessions of studies 3 and 4 and the combined first sessions of studies 1 and 2, to determine whether initial differences in performance existed that could be attributed to group assignment. The analysis did not yield a significant difference, indicating homogeneity of initial performance across studies (U = 18, Z = −0.29, p = 0.77; Figure 12).

Figure 12. Comparison of significant DA values used for the cross-study analysis. Colored dots in the boxplots indicate DA peaks in the task period that were significantly higher than the DA obtained in the corresponding reference (baseline) period. Sessions without feedback are indicated as offline sessions; sessions with feedback are indicated as online sessions. Each box extends from the lower to the upper quartile, with a line at the median; the whiskers extend from the box to show the range of the displayed DA values. The p values obtained from the Mann-Whitney U tests are also presented.


Subsequently, to determine the impact of an increased number of sessions with and without feedback, the combined DA scores of studies 1 and 2 were compared against the combined DA scores of studies 3 and 4 (with and without feedback). The mean rank of the DA values for studies 3 and 4 was significantly greater than that for studies 1 and 2 (U = 121, Z = −2.73, p = 0.006; Figure 12).

Given this significant difference, a follow-up analysis compared the DA scores of studies 1 and 2 against the DA scores of the combined sessions of studies 3 and 4 in two ways: (1) without feedback only, and (2) with feedback only, to determine whether the provision of feedback significantly improved performance. The alpha level was adjusted to 0.025 (Bonferroni correction) to account for these two post hoc comparisons. For the comparison of sessions without feedback, the mean rank of the DA scores achieved in studies 3 and 4 was not significantly greater than that achieved in studies 1 and 2 (U = 68, Z = −2.02, p = 0.043 (>0.025); Figure 12). However, for the comparison of sessions with feedback, the mean rank of the DA values in the feedback sessions of studies 3 and 4 was significantly greater than that achieved in studies 1 and 2 (U = 53, Z = −2.85, p = 0.004 (<0.025); Figure 12).

A comparison of the DA values used for the cross-study analysis is presented in Figure 12.

5. Discussion

To date, only a limited number of offline studies have focused on decoding mentally imagined real-world objects [Citation16] or the shape of primitive objects [Citation17–19] from EEG, and none of these studies used an online scenario providing real-time feedback based on the decoded output.

The studies presented in this paper were designed not only to evaluate the separability of five imagined 3D primitive objects (sphere, cone, pyramid, cylinder, and cube) from EEG using an offline scenario (studies 1–2), but also to evaluate whether closed-loop BCI training could improve separability using a multi-session experimental paradigm with gamified feedback (studies 3–4), and to identify the frequency bands and cortical areas providing the maximal contribution to decoding imagined objects from EEG. Our results show that:

  • DA significantly above the empirical chance level is achievable.

  • The addition of feedback to the experimental paradigm, over multiple sessions, enhances performance.

  • The most informative frequency bands are primarily the 0–4 Hz (delta) and secondarily the 4–8 Hz (theta) oscillations.

  • Prominent activations during shape imagery are observed in the frontal, posterior parietal, and occipitotemporal cortex.

5.1. Decoding accuracy and multi-session learning process

In our offline pilot studies (studies 1 and 2), ten of thirteen participants achieved a DA peak during the task period which was significantly higher than the DA peak obtained in the pause period (Wilcoxon non-parametric test, p < 0.05). The significant DA peak for these ten participants ranged between 27.1% and 37.1%. In study 3, an increasing trend in cross-session DA values was detected for both participants when the BCI calibrated using data acquired from an early session was compared with one calibrated on a later session (offline sessions 1–7). It is important to note that the peak DA during online sessions was not only significantly higher than the empirical chance level (20 ± 6%), but also occurred with the same latency following task onset as observed in the cross-session CV tests performed during the BCI calibration process (see the relationship between the DA peak and the denoted classification time, indicated with a solid vertical black line during the task period, in the time-varying DA plots). The DA for both the offline and online session groups increased over sessions. Despite the stability of the BCI (confirmed in the long-term stability test) and the fact that the offline and online paradigms followed the same scenario, the DA for all participants in studies 3 and 4 dropped significantly in the first online session compared with the DA obtained in the last offline session, which took place some days earlier. This observation may relate to an initial adaptation to the feedback and/or frustration caused by misclassification.
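
The empirical chance level quoted above can be estimated, for instance, by scoring random predictions against the true labels. The short sketch below illustrates the idea; the trial count (and therefore the width of the interval) is illustrative, and the procedure used in the paper may differ in detail.

```python
# Sketch: estimating an empirical chance level for a 5-class decoder by
# scoring random predictions against the labels. The trial count is
# illustrative; fewer trials per evaluation widen the interval.
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_classes, n_reps = 45, 5, 10_000

labels = rng.integers(0, n_classes, size=n_trials)
chance = [100 * np.mean(rng.integers(0, n_classes, size=n_trials) == labels)
          for _ in range(n_reps)]

print(f"empirical chance level: {np.mean(chance):.0f} +/- {np.std(chance):.0f}%")
# with 45 trials per evaluation this prints roughly 20 +/- 6%
```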

The DA trends achieved in the online sessions of studies 3 and 4 show a positive learning process for all participants, during which they learned to use a participant-specifically calibrated BCI more effectively. Although the overall performance is relatively low compared with other types of imagery (e.g., motor imagery), the results indicate that a multi-session learning process using a closed-loop scenario provides an opportunity for the user to improve performance.

It is important to note that the distribution of the data and features commonly changes significantly over the participant’s learning process, for two reasons. First, the user’s strategy for attempting to control the BCI commonly changes during a multi-session learning period. Second, the task-specific neural activity pattern changes significantly as the participant learns to control the BCI [Citation74]. The results of the online experiments also call attention to the importance of adequately scheduled recalibration of the BCI: in both online studies (studies 3 and 4), the online DA increased significantly for most participants after BCI recalibration (for study 3, between online sessions 5 and 6; for study 4, between online sessions 2 and 3).

It should be noted that the highest single-session CV accuracy presented in this paper (DA = 51 ± 7%) was achieved by participant 2 in study 4, i.e., the participant who had completed the most sessions with biased feedback. However, as this accuracy was achieved during offline recalibration of the BCI using a dataset recorded in this participant’s final online session, the online performance of this BCI configuration was not tested. Nevertheless, this result illustrates the improvement possible over time as the user and the BCI learn mutually, as well as the potential of the paradigm to enable primitive 3D object decoding from EEG.

5.2. Cross-study statistical analysis and offline vs online scenarios

The cross-study statistical analysis first established that initial performance ability was homogeneous, as indicated by the non-significant comparison (p = 0.77) between the combined first offline (no-feedback) sessions from studies 3 and 4 and the combined first sessions from studies 1 and 2 (all offline). Following this, we determined that performance in studies 3 and 4 was generally improved compared with performance in studies 1 and 2 (p = 0.006). Considering that studies 3 and 4 involved several sessions for each participant, both offline (without feedback) and online (with feedback), as opposed to studies 1 and 2 (which involved one and three sessions, respectively), it was important to analyze the impact of (1) an increased number of sessions and (2) feedback sessions separately, to determine the effect of feedback sessions alone.

Regarding the former effect, the mean rank of DA scores achieved in the offline (no-feedback) sessions of studies 3 and 4 was not significantly greater than that achieved in studies 1 and 2 at the Bonferroni-adjusted alpha level of 0.025 for two post hoc tests (U = 68, Z = −2.02, p = 0.043). Therefore, despite an increase in the number of offline sessions, performance did not improve significantly. In contrast, the comparison of study 3 and 4 feedback sessions with study 1 and 2 sessions (without feedback) revealed that the mean rank of DA values achieved in the feedback sessions was significantly greater than that achieved in the no-feedback studies 1 and 2 (U = 53, Z = −2.85, p = 0.004, below the Bonferroni-adjusted alpha level of 0.025). This significant improvement attributable to feedback is a strong indicator that real-time feedback during shape imagery improves the separability of neural modulations and enhances decoding accuracy, and that participants can learn to modulate brain activity to improve shape-imagery performance.

5.3. Visual perception vs mental imagery

fMRI studies show that visual perception and mental imagery are associated with similar activity patterns [Citation37–42]. In our experimental paradigms, the target object is presented on the screen prior to the object imagery task. It is therefore important to investigate whether the object classification results were linked to neural activity involving perception (prior to the imagery task) or to the object imagery task itself. Time-varying DA plots with a reasonably high DA sometimes show two DA peaks: a smaller (non-dominant) peak at the end of the 1 s period during which the target object was displayed on the screen, and a significantly higher (dominant) peak matching the time interval of the task period (indicated with VP and MI labels, respectively). Assuming that visual perception and mental imagery rely on similar patterns, it is reasonable to suppose that the smaller peak at the end of the display period reflects visual perception of the displayed target object, while the dominant peak in the task interval results from the mental imagery task. The delay between the onset of perception and the VP peak, and between the onset of the mental imagery task and the MI peak, originates not only from biological factors such as reaction time but also from the size of the classification window, which was optimized participant-specifically (i.e., 1 s or 2 s).
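
The time-varying DA curves referred to throughout this discussion are produced by evaluating a classifier on a window slid across each trial. The sketch below shows a schematic version of that computation on synthetic data, using simple log-variance features and LDA; these are illustrative stand-ins for the participant-specific FBCSP configurations and window lengths used in the study.

```python
# Schematic time-varying DA computation: cross-validate a classifier on
# short sliding windows across the trial. Data, features and classifier
# are illustrative stand-ins for the participant-specific FBCSP pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                 # sampling rate (Hz), assumed
win, step = fs, fs // 4                  # 1 s window, 0.25 s step
X = np.random.randn(200, 30, 5 * fs)     # trials x channels x samples (synthetic)
y = np.random.randint(0, 5, 200)         # five object classes

da_curve = []
for start in range(0, X.shape[2] - win + 1, step):
    feats = np.log(X[:, :, start:start + win].var(axis=2))  # per-channel log power
    da_curve.append(100 * cross_val_score(
        LinearDiscriminantAnalysis(), feats, y, cv=5).mean())

peak = int(np.argmax(da_curve))
print(f"peak DA {da_curve[peak]:.1f}% at window starting {peak * step / fs:.2f} s")
```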

5.4. Frequency and topographical analysis

The frequency and topographical analyses aimed to identify frequency band(s) and cortical area(s) involved primarily in mental imagery (imagined visual representation) of 3D primitive objects.

The frequency analysis performed for studies 1–4 showed clear evidence that the 0–4 Hz (delta) oscillations (for some participants along with the 4–8 Hz (theta) oscillations) provided the highest contribution to the classification of the five primitive objects from the EEG recorded during both offline and online sessions. Furthermore, the topographical analysis indicated that the frontal, posterior parietal, and occipitotemporal cortical areas play an important role in object imagery. It is important to highlight that BCI configurations which provided the highest accuracy in single-session CV (DA > 30%, empirical chance level 20 ± 6%) also provided a sharper separation of the cortical areas involved in object imagery task performance than BCI configurations which achieved a lower DA. The object-specific similarities and differences of the topographical maps were analyzed using a dataset from studies 1–2, indicating that imagery of the five analyzed objects generates similar, or overlapping, cortical activity patterns. As individual brain activity has a wide range of variability [Citation75], an analysis of participant-specific variability in the object-specific cortical activity patterns generated during imagery of different 3D primitive objects may be an objective of future work. The results obtained from the topographical analyses are in line with fMRI studies. For example, Stokes et al. [Citation32] show an important role of the visual cortex in shape-specific mental imagery. Furthermore, in line with our results, contributions of the occipitotemporal cortex [Citation48] as well as the frontal and parietal cortex [Citation50] to object-related mental imagery have been reported.
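
A band-wise analysis of this kind can be approximated as sketched below, which filters synthetic EEG into canonical bands and scores CSP features per band. The example is two-class for simplicity (the five-class pipeline extends this one-vs-rest), the delta band starts at 0.5 Hz because a band-pass filter cannot reach 0 Hz, and MNE's CSP implementation stands in for the FBCSP stage.

```python
# Sketch of a per-band contribution analysis in the spirit of FBCSP:
# band-pass filter the EEG, extract CSP features, and compare
# cross-validated DA per band. Data are synthetic; the example is
# two-class (e.g. sphere vs cube) for simplicity.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

fs = 250
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
X = np.random.randn(120, 30, 2 * fs)  # trials x channels x samples
y = np.random.randint(0, 2, 120)      # two object classes

for name, (lo, hi) in bands.items():
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    Xf = sosfiltfilt(sos, X, axis=2)                    # zero-phase band-pass
    clf = make_pipeline(CSP(n_components=4, log=True),  # spatial filtering
                        LinearDiscriminantAnalysis())
    da = 100 * cross_val_score(clf, Xf, y, cv=5).mean()
    print(f"{name:5s} ({lo}-{hi} Hz): DA = {da:.1f}%")
```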

Our result regarding the importance of delta oscillations in decoding the shape of imagined objects is supported by a recent study by Sburlea et al. [Citation76], whose results indicate that low-frequency EEG encodes information not only about the properties of grasping movements but also about the shape and size of the grasped objects. Regarding object imagery EEG studies, Chew et al. [Citation77] report a maximal decoding accuracy of 80% (theoretical chance level 50%) for a binary classification based on whether the user aesthetically liked or disliked the presented object. The features for their KNN-based classifier were extracted from the 1–4 Hz (delta), 4–8 Hz (theta), and 8–13 Hz (alpha) bands, while features from the 13–30 Hz (beta) and 30–49 Hz (low gamma) bands were omitted. Although this result supports our finding that low-frequency EEG oscillations (from the delta and, in some cases, the theta band) encode maximal information about the shape of an imagined 3D primitive object, aesthetic perception might not rely on the same neural circuits as imagery of object shape.

5.5. Limitations

Our research aimed to develop an online BCI to decode five imagined 3D primitive objects and to show that real-time feedback enhances decoding accuracy. The online DA in the final session reached an average of 35%, which is significantly above the empirical chance level (20 ± 6%), and the performance of participants who received multiple feedback sessions was significantly higher (p = 0.004) than the performance of those who received no feedback (studies 1 and 2). However, the decoding accuracy values are not sufficiently high to enable reliable real-time intended shape selection using a BCI. Nevertheless, in studies 3 and 4 performance improved with feedback for all participants. Moreover, the highest offline result across the study (DA = 51 ± 7%) was achieved by participant 2 in the final session of study 4. This observation again suggests that the gamified paradigm and feedback have influenced performance. Further training, and gamification to enhance training, may therefore improve the results and produce a BCI which could be used functionally with shape imagery alone. The results also suggest that hybridizing imagery strategies, for example combining motor and shape imagery, may increase the practical potential of shape imagery.

For the first time, our results show that DA can be enhanced with real-time feedback in a multi-session scenario and a gamified paradigm. We can report that multiple feedback sessions enhance performance. However, we cannot conclude that positively biased feedback and/or gamification improved performance any more than unbiased feedback and/or a non-gamified paradigm would have, as we did not have control groups for the latter. Future work should consider controlling for these factors to gain a better understanding of the effects of various types of feedback on shape-imagery BCI performance.

In an offline study, Llorella et al. [Citation19] classified seven simple 2D geometric objects, achieving an average offline DA of 35.1 ± 7.0% (theoretical chance level 14.3%) using a convolutional neural network (CNN) for feature selection, which would indicate slightly better average performance with the CNN (7 shapes in [Citation19] vs 5 in this study). However, the stimuli in [Citation19] are different (sample line drawings of shapes in [Citation19] vs 3D shape imagery in this study), which could account for the observed differences in accuracy. Additionally, a 2-class analysis in [Citation19] shows that maximal DA is achieved with line vs parallelogram, the two shapes with the greatest visual distinction, suggesting that the types of stimuli/cues for shape imagery significantly impact results. This observation is further supported by another study by Llorella et al. [Citation16] showing that offline classification of four real-world objects (tree, house, plane, and dog) plus the relaxation state obtained a DA of 60.5% (theoretical chance level 20.0%), again using a CNN. Further extensive research is needed to determine optimal combinations of shapes and the influence of the shape and signal processing strategy. A global search of the parameter space using advanced data-driven deep learning approaches may indeed find optimal features for shape imagery classification, as suggested by the results in Llorella et al. [Citation16]. The machine learning approach applied in this study is constrained in terms of its search space (optimal frequency band and number of spatial filters) and its relatively simple classifier.

We recently demonstrated in Cooney et al. [Citation78] that classification of six imagined words (theoretical chance level 16.7%) and five imagined vowels (theoretical chance level 20.0%) was enhanced by CNN frameworks (Shallow, Deep, EEGNet), achieving significantly higher DA (p < 0.0001) than an FBCSP-RLDA framework (words: 21 ± 2%, vowels: 26 ± 2%) similar to that applied in this study. These results were further improved using EEG and fNIRS fusion or alternative words and word-pairing arrangements. Therefore, in future work, we will investigate replacing our FBCSP-based classifier with a CNN-based framework for imagined object classification and optimizing the type of images.
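
To give a concrete sense of this direction, the sketch below defines a deliberately simplified, EEGNet-inspired convolutional classifier in PyTorch. The layer sizes and kernel lengths are illustrative assumptions and do not reproduce the published Shallow, Deep, or EEGNet architectures.

```python
# A deliberately simplified, EEGNet-inspired CNN for multi-channel EEG
# classification. Layer sizes are illustrative, not the published EEGNet.
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=30, n_samples=500, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, (1, 64), padding=(0, 32)),    # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, (n_channels, 1), groups=8),  # depthwise spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer the flattened feature size
            n_feats = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feats, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = TinyEEGNet()
logits = model(torch.randn(4, 1, 30, 500))  # 4 trials of 30-channel, 2 s EEG at 250 Hz
print(logits.shape)  # torch.Size([4, 5])
```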

Here, we also note that although the number of participants in each study presented in this paper was relatively low: study 1 (N = 10), study 2 (N = 3), study 3 (N = 2), and study 4 (N = 4), the overall number of participants involved in the four interdependent studies was sixteen (of whom three participated in more than one study, Supplementary Table 1). There were 69 sessions in total, of which 26 were online with real-time feedback (involving 15,480 trials for offline and at least 6,840 trials for online sessions in total), which constitutes a relatively comprehensive assessment of the paradigm and is sufficient to demonstrate statistically the impact of feedback on BCI performance. This approach of adapting the study design and evaluating each new study with a limited number of participants was efficient and effective in testing our hypothesis. However, in future studies, we shall undertake a single experimental protocol with many participants rather than mixing participation across multiple interdependent studies. We note that the number of trials in the online sessions of study 3 was not fixed because participants were permitted a second attempt at failed tasks (more details in Section 2.2.2). Topographical and frequency maps obtained in studies 1–4 demonstrated similar brain activity patterns across participants during object imagery task performance, suggesting that the combined results obtained from the four interdependent studies in the series can be taken as a whole. With the various observations enabled by modifications across the study series, significant progress has been made toward designing a larger trial with optimal stimuli, gamification, and signal processing strategies to determine whether participants can learn to modulate brain activity through shape/object imagery sufficiently to achieve accuracies that are possible in other imagery paradigms, as shown in our recent study in 2022 [Citation79] and by Bigirimana et al., 2020 [Citation80].

Notably, the DA in the final experimental paradigm (study 4) for each of the four participants showed an incremental increase over three online sessions using visual feedback, reaching the highest online accuracy (DA = 35%) during the last session. However, as these results were obtained for only four participants over three online sessions, the increasing trend in DA should be confirmed in future work with more participants using a longitudinal multi-session scenario, as has similarly been demonstrated in a longitudinal study based on Cybathlon results in our recent publication [Citation79]. This work builds on studies by Pidgeon et al. [Citation25], Hay et al. [Citation26], and Duffy et al. [Citation81] investigating design ideation and the potential for future BCI technologies to support it, for example by providing neurofeedback to allow designers to moderate their thought processes or by allowing them to realize their imagination seamlessly in digital environments. Recent results by Campbell et al. [Citation82], involving designers ideating on complex design tasks during fMRI, show that various brain regions are activated and may be associated with memory access and visual and motor imagery. For example: (1) a region of interest (ROI) in the parahippocampal gyrus (−27, −34, −13) revealed significant design-ideation-related coactivations with the left fusiform gyrus, lingual gyrus, inferior temporal gyrus, and right cerebellum; and (2) the left lingual gyrus ROI (−15, −43, −10) was found to have significant ideation-related functional connectivity with clusters in the right lingual gyrus, as well as in the left superior frontal gyrus and bilateral cerebellum, indicating significant connectivity with visual processing regions (lingual gyrus and fusiform gyrus). These results possibly reflect the interplay between long-term memory processes and visual and motor imagery during design ideation. Our work provides evidence that we can classify shape imagery when weighting CSP features across a number of those regions. Ongoing work focuses on a detailed functional connectivity analysis to determine the regions of activation and connectivity more specifically, although this is limited by the spatial resolution of our EEG montage.

Finally, we note that BCI calibration trials containing artifacts (identified via visual inspection) were removed during the offline calibration procedure. However, the online frameworks applied in the present studies did not involve online artifact removal; therefore, the results of the online sessions are representative of what would occur in a real online setting. Automated artifact removal may provide further enhancements to online object/shape imagery classification.
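
As a minimal illustration of what such automation could look like online, the sketch below implements a simple amplitude-based artifact gate; the peak-to-peak threshold is an assumed illustrative value, not a parameter from this study.

```python
# Sketch of a simple online artifact gate: flag windows whose per-channel
# peak-to-peak amplitude exceeds a threshold so they can be skipped or
# down-weighted before classification. The threshold is illustrative.
import numpy as np

def is_artifact(window_uv: np.ndarray, ptp_limit_uv: float = 100.0) -> bool:
    """window_uv: channels x samples array in microvolts."""
    ptp = window_uv.max(axis=1) - window_uv.min(axis=1)
    return bool((ptp > ptp_limit_uv).any())

rng = np.random.default_rng(7)
clean = rng.normal(0.0, 10.0, size=(30, 250))  # ~10 uV background activity
blink = clean.copy()
blink[0, 100:140] += 150.0                     # large frontal deflection
print(is_artifact(clean), is_artifact(blink))  # False True
```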

6. Conclusion

The research presented in this paper, involving ten participants in a single offline session (study 1), three participants in three offline sessions (study 2), two participants in eight offline and seven online sessions (study 3), and four participants in two offline and three online sessions (study 4), provides evidence that distinguishing the neural correlates of imagined sphere, cone, pyramid, cylinder, and cube objects in EEG is feasible, and that participants can improve shape imagery to modulate brain activity and enhance BCI performance when real-time feedback is provided.

Thirteen of sixteen participants achieved a DA of 30 ± 5% during the mental imagery task, significantly higher than the DA obtained during the corresponding pause period (Wilcoxon non-parametric test, p < 0.05; empirical chance level 20 ± 6%). The performance of all participants improved with online feedback. To the best of the authors’ knowledge, this is the first study to provide real-time feedback across multiple sessions involving mental imagery of five 3D primitive objects. The best single-session CV test accuracy was achieved by participant 2 of study 4, when the classifier was trained and tested using a dataset recorded in the last (third) online session (DA = 51 ± 7%, empirical chance level 20 ± 6%). This result suggests that, with extensive training, it may be feasible to reach accuracy levels that would enable functional use of this type of BCI. The evolution of the paradigm involving gamification and biased feedback may also have influenced engagement over sessions. We also showed that the features are stable in inter-session tests, where peak accuracy levels and the time point of peak accuracy were consistent when classifiers were trained on one session and applied to later sessions. Recalibrating the BCI within the session may enhance the results. The improvement in online DA over sessions indicates a mutual learning capability between the user and the BCI. An appropriately scheduled BCI recalibration regime and a more advanced signal processing pipeline, together with a longitudinal multi-session scenario, may lead to improved accuracy.

Results of the frequency and topographical analyses indicate that the 0–4 Hz (delta) oscillations (for some participants along with the 4–8 Hz (theta) oscillations) in the frontal, posterior parietal, and occipitotemporal cortex play an important role in the mental imagery of 3D primitive objects.

In conclusion, although the performance of this BCI for 3D object classification from EEG is likely too low to provide a feeling of reliable control or interaction, the results are a positive indication that, with learning and real-time feedback, these mental tasks, or a combination of these and other mental tasks, could be used to perform mental-task-based operations in virtual spaces or in computer-aided engineering design using an online BCI. The low number of participants does, however, prevent us from assessing how generalizable these results are, and further work is required to confirm this preliminary evidence.

Authors’ contribution

Attila Korik: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing – Original Draft, Visualization. Naomi du Bois: Validation, Writing – Review & Editing. Gerard Campbell and Eamonn O’Neill: Writing – Review & Editing. Laura Hay, Sam Gilbert, and Madeleine Grealy: Conceptualization, Validation, Writing – Review & Editing, Project administration. Damien Coyle: Conceptualization, Methodology, Validation, Resources, Writing – Review & Editing, Supervision, Project administration, Funding acquisition.

Supplemental material

Korik et al - Online img objects (BCI) 02 suplements.docx

Download MS Word (191 KB)

Acknowledgements

We would like to extend our sincere thanks to Prof Alex Duffy, for his foundational input toward the conceptualization and design of this research project, and his oversight of the process. Prof Duffy also applied his expertise to the review and editing stages – for which we are very grateful. Furthermore, we would like to acknowledge the effort of all subjects who participated in the studies presented in this paper. Without their commitment, we would not have had the learning and BCI development experience outlined in this manuscript. We would also like to thank Ulster University for financially supporting our research.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Disclosure statement

Damien Coyle is Founder and CEO of Neurotech company, NeuroCONCISE Ltd.

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/2326263X.2024.2334558

Additional information

Funding

This research has been supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under Grant numbers EP/M01214X/1 and EP/M012123/1; The access to the Tier 2 High Performance Computing resources provided by the Northern Ireland High Performance Computing (NI-HPC) facility funded by the UK EPSRC under Grant number EP/T022175; the UKRI Turing AI Fellowship 2021-2025 funded by the EPSRC under Grant number EP/V025724/1; and the Spatial Computing and Neurotechnology Innovation Hub, funded by The Department for the Economy, Northern Ireland.

References

  • J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002;113(6):767–791. doi:10.1016/S1388-2457(02)00057-3
  • D. J. Mitchell and R. Cusack. Semantic and emotional content of imagined representations in human occipitotemporal cortex. Sci Rep. 2015;6(December):20232. doi: 10.1038/srep20232
  • J. M. Nuñez, B. J. Casey, T. Egner, T. Hare, and J. Hirsch. Intentional false responding shares neural substrates with response conflict and cognitive control. Neuroimage. 2005;25(1):267–277. doi:10.1016/j.neuroimage.2004.10.041
  • L. A. Farwell and E. Donchin. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr Clin Neurophysiol. 1988;70(6):510–523. doi:10.1016/0013-4694(88)90149-6
  • Rebsamen B, Guan C, Zhang H, et al. A brain controlled wheelchair to navigate in familiar environments. IEEE Trans Neural Syst Rehabil Eng. 2010 Dec;18(6):590–598.
  • Y. J. Kim, et al. A study on a robot arm driven by three-dimensional trajectories predicted from non-invasive neural signals. Biomed Eng Online. 2015;14:81. doi: 10.1186/s12938-015-0075-8
  • K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. J Neural Eng. 2013 Aug;10(4):046003. doi: 10.1088/1741-2560/10/4/046003
  • McFarland DJ, Sarnacki WA, Wolpaw JR. Electroencephalographic (EEG) control of three-dimensional movement. J Neural Eng. 2010 Jun;7(3):9. doi: 10.1088/1741-2560/7/3/036007
  • Royer AS, Doud AJ, Rose ML, et al. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Trans Neural Syst Rehabil Eng. 2010 Dec;18(6):581–589. doi: 10.1109/TNSRE.2010.2077654
  • Korik A, Sosnik R, Siddique N, et al. Decoding imagined 3D hand movement trajectories from EEG: evidence to support the Use of Mu, Beta, and Low Gamma Oscillations. Front Neurosci. 2018;12(March):1–16. doi: 10.3389/fnins.2018.00130
  • Beveridge R, Wilson S, Callaghan M, et al. Neurogaming with motion-onset visual evoked potentials (mVeps): adults versus teenagers. IEEE Trans Neural Syst Rehabil Eng. 2019;27(4):572–581. doi: 10.1109/tnsre.2019.2904260
  • Coyle D, Stow J, McCreadie K, et al. Sensorimotor modulation assessment and brain-computer interface training in disorders of consciousness. Arch Phys Med Rehabil. 2015;96(3):S62–S70. doi: 10.1016/j.apmr.2014.08.024
  • Prasad G, Herman P, Coyle D, et al. Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: a feasibility study. J Neuroeng Rehabil. 2010 Jan;7(1):60. doi: 10.1186/1743-0003-7-60
  • C. M. Hamamé, et al. Reading the mind’s eye: online detection of visuo-spatial working memory and visual imagery in the inferior temporal lobe. Neuroimage. 2012 Jan;59(1):872–879. doi: 10.1016/j.neuroimage.2011.07.087
  • Kosmyna N, Lindgren JT, Lécuyer A. Attending to visual stimuli versus performing visual imagery as a control strategy for EEG-based Brain-Computer Interfaces. Sci Rep. 2018;8(1):1–14. doi: 10.1038/s41598-018-31472-9
  • F. R. Llorella, E. Iáñez, J. M. Azorín, and G. Patow. Classify four imagined objects with EEG signals. Evol Intell. 2021;1(1):1–10.
  • E. T. Esfahani and V. Sundararajan. Classification of primitive shapes using brain-computer interfaces. CAD CAD Comput Aided Des. 2012;44(10):1011–1019. doi:10.1016/j.cad.2011.04.008
  • J. S. Bang, J. H. Jeong, and D. O. Won, “Classification of visual perception and imagery based EEG signals using convolutional neural networks,” In 9th IEEE International Winter Conference on Brain-Computer Interface, BCI 2021, 2021. p. 2–7. doi: 10.1109/BCI51272.2021.9385367.
  • F. R. Llorella, E. Iáñez, J. M. Azorín, and G. Patow. Classification of imagined geometric shapes using EEG signals and convolutional neural networks. Neurosci Inform. 2021;1(4):1–8. doi: 10.1016/j.neuri.2021.100029
  • S. Lee, S. Jang, and S. C. Jun. Exploring the ability to classify visual perception and visual imagery EEG data: toward an Intuitive BCI System. Electronics. 2022 Sep;11(17):2706. doi: 10.3390/electronics11172706
  • Bilgin MS, Baytaroğlu EN, Erdem A, et al. A review of computer-aided design/computer-aided manufacture techniques for removable denture fabrication. Eur J Dent. 2016;10(2):286–291. doi: 10.4103/1305-7456.178304
  • A. Kolbasin and O. Husu. Computer-aided design and computer-aided engineering. Intern Sci Confer. 2017;1–6. doi: 10.1051/matecconf/201817001115
  • Vuletic T, Duffy A, Hay L, et al. The challenges in computer supported conceptual engineering design. Comput Ind. 2018;95:22–37. doi: 10.1016/j.compind.2017.11.003
  • S. Kaji, H. Kolivand, R. Madani, and M. Salehinia. Utilizing ‘augmented reality’ technology to illustrate residential open space greenery. Int J Appl Eng Res. 2017;12(16):6022–6028.
  • L. M. Pidgeon, Grealy, M., Duffy, Alex H. B., et al. Functional neuroimaging of visual creativity: a systematic review and meta-analysis. Brain Behav. 2016;6(10):1–26. doi: 10.1002/brb3.540
  • L. Hay, Duffy, A. H. B., Gilbert, S J., et al. The neural correlates of ideation in product design engineering practitioners. Design Science. 2019;5(November):1–23. doi: 10.1017/dsj.2019.27
  • D. Coyle and R. Sosnik. Neuroengineering (sensorimotor-computer interfaces). Berlin, Heidelberg: Springer; 2015. doi: 10.1007/978-3-662-43505-2
  • Harrison SA, Tong F. Decoding reveals the contents of visual working memory in early visual areas. Nature. 2009;458(7238):632–635. doi:10.1038/nature07832
  • J. Serences, E. Ester, E. Vogel, and E. Awh. Stimulus-specific delay activity in human primary visual cortex. Psychol Sci. 2009;20(2):207–214. doi:10.1111/j.1467-9280.2009.02276.x
  • Albers AM, Kok P, Toni I, et al. Shared representations for working memory and mental imagery in early visual cortex. Curr Biol. 2013;23(15):1427–1431. doi: 10.1016/j.cub.2013.05.065
  • Xing Y, Ledgeway T, McGraw PV, et al. Decoding working memory of stimulus contrast in early visual cortex. J Neurosci. 2013;33(25):10301–10311. doi: 10.1523/jneurosci.3754-12.2013
  • Stokes M, Thompson R, Nobre AC, et al. Shape-specific preparatory activity mediates attention to targets in human visual cortex. Proc Natl Acad Sci U S A. 2009;106(46):19569–19574. doi: 10.1073/pnas.0905306106
  • Johnson M, Johnson M. Decoding individual natural scene representations during perception and imagery. Front Hum Neurosci. 2014;8(7):59. doi:10.3389/fnhum.2014.00059
  • T. Horikawa and Y. Kamitani. Generic decoding of seen and imagined objects using hierarchical visual features. Nat Commun. 2015;8(May):1–15. doi:10.1038/ncomms15037
  • Peelen MV, Kastner S. A neural basis for real-world visual search in human occipitotemporal cortex. Proc Natl Acad Sci U S A. 2011;108(29):12125–12130. doi: 10.1073/pnas.1101042108
  • E. K. Diekhof, H. E. Kipshagen, P. Falkai, P. Dechent, J. Baudewig, and O. Gruber. The power of imagination - how anticipatory mental imagery alters perceptual processing of fearful facial expressions. Neuroimage. 2011;54(2):1703–1714. doi: 10.1016/j.neuroimage.2010.08.034
  • Ganis G, Thompson WL, Kosslyn SM. Brain areas underlying visual mental imagery and visual perception: an fMRI study. Cognit Brain Res. 2004;20(2):226–241. doi: 10.1016/j.cogbrainres.2004.02.012
  • O’Craven KM, Kanwisher N. Mental imagery of faces and places activates corresponding stimulus-specific brain regions. J Cogn Neurosci. 2000;12(6):1013–1023. doi: 10.1162/08989290051137549
  • L. Reddy, N. Tsuchiya, and T. Serre. Reading the mind’s eye: decoding category information during mental imagery. Neuroimage. 2011;50(2):818–825. doi: 10.1016/j.neuroimage.2009.11.084
  • M. Stokes, R. Thompson, R. Cusack, and J. Duncan. Top-down activation of shape-specific population codes in visual cortex during mental imagery. J Neurosci. 2009;29:1565–1572. doi: 10.1523/JNEUROSCI.4657-08.2009
  • Cichy RM, Heinzle J, Haynes JD. Imagery and perception share cortical representations of content and location. Cerebral Cortex. 2012;22(2):372–380. doi: 10.1093/cercor/bhr106
  • S. Lee, D. J. Kravitz, and C. I. Baker. Disentangling visual imagery and perception of real-world objects. Neuroimage. 2013;59(4):4064–4073. doi:10.1016/j.neuroimage.2011.10.055
  • Cusack R, Veldsman M, Naci L, et al. Seeing different objects in different ways: measuring ventral visual tuning to sensory and semantic features with dynamically adaptive imaging. Hum Brain Mapp. 2012;33(2):387–397. doi: 10.1002/hbm.21219
  • Yue X, Cassidy BS, Devaney KJ, et al. Lower-level stimulus features strongly influence responses in the fusiform face area. Cerebral Cortex. 2011;21(1):35–47. doi: 10.1093/cercor/bhq050
  • T. F. Brady, T. Konkle, and G. A. Alvarez. A review of visual memory capacity: beyond individual items and toward structured representations. J Vis. 2011;11(5):4. doi: 10.1167/11.5.4
  • S. V. Shinkareva, V. L. Malave, R. A. Mason, T. M. Mitchell, and M. A. Just. Commonality of neural representations of words and pictures. Neuroimage. 2011;54(3):2418–2425. doi: 10.1016/j.neuroimage.2010.10.042
  • Mitchell DJ, Cusack R. Flexible, capacity-limited activity of posterior parietal cortex in perceptual as well as visual short-term memory tasks. Cerebral Cortex. 2008;18(8):1788–1798. doi: 10.1093/cercor/bhm205
  • Malach R, Reppas JB, Benson RR, et al. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci U S A. 1995;92(18):8135–8139. doi: 10.1073/pnas.92.18.8135
  • Chadwick MJ, Hassabis D, Weiskopf N, et al. Decoding individual episodic memory traces in the human hippocampus. Curr Biol. 2010;20(6):544–547. doi: 10.1016/j.cub.2010.01.053
  • A. Ishai, L. G. Ungerleider, and J. V. Haxby. Distributed neural systems for the generation of visual images. Neuron. 2000;28(3):979–990. doi: 10.1016/S0896-6273(00)00168-9
  • T. Konkle and A. Oliva. A real-world size organization of object responses in occipitotemporal cortex. Neuron. 2012;74(6):1114–1124. doi: 10.1016/j.neuron.2012.04.036
  • Konkle T, Caramazza A. Tripartite organization of the ventral stream by animacy and object size. J Neurosci. 2013;33(25):10235–10242. doi: 10.1523/JNEUROSCI.0983-13.2013
  • Harvey BM, Fracasso A, Petridou N, et al. Topographic representations of object size and relationships with numerosity reveal generalized quantity processing in human parietal cortex. Proc Natl Acad Sci U S A. 2015;112(44):13525–13530. doi: 10.1073/pnas.1515414112
  • G. J. Brouwer and D. J. Heeger. Decoding and reconstructing color from responses in human visual cortex. J Neurosci. 2009;29(44):13992–14003. doi:10.1523/JNEUROSCI.3577-09.2009
  • P. Sumner, E. J. Anderson, R. Sylvester, J. D. Haynes, and G. Rees. Combined orientation and colour information in human V1 for both L-M and S-cone chromatic axes. Neuroimage. 2008;39(2):814–824. doi:10.1016/j.neuroimage.2007.09.013
  • T. M. Van Leeuwen, K. M. Petersson, O. Langner, M. Rijpkema, and P. Hagoort. Color specificity in the human V4 complex – an fMRI repetition suppression study. T. D. Papageorgiou, G. I. Christopoulos, and S. M. Smirnakis, Editors. Advanced Brain Neuroimaging Topics in health and disease - methods and applications; 2014. p. 284–302. doi: 10.5772/58278
  • Bird CM, Berens SC, Horner AJ, et al. Categorical encoding of color in the brain. Proc Natl Acad Sci U S A. 2014;111(12):4590–4595. doi: 10.1073/pnas.1315275111
  • M. Wraga, J. M. Shephard, J. A. Church, S. Inati, and S. M. Kosslyn. Imagined rotations of self versus objects: an fMRI study. Neuropsychologia. 2005;43(9):1351–1361. doi: 10.1016/j.neuropsychologia.2004.11.028
  • Charest I, Kievit RA, Schmitz TW, et al. Unique semantic space in the brain of each beholder predicts perceived similarity. Proc Nat Acad Sci. 2014;111(40):14565–14570. doi: 10.1073/pnas.1402594111
  • Emotiv, ‘Emotiv neuroheadset.’ [Online]. Available: https://www.emotiv.com/
  • A. Korik et al., “Primitive shape imagery classification from EEG,” in 7th International BCI Meeting, Monterey, California, US, 2018, pp. 64–65. [Online]. Available: https://bcisociety.org/wp-content/uploads/2019/03/2018AbstractBook.pdf
  • D. Marshall, D. Coyle, S. Wilson, and M. Callaghan. Games, gameplay, and BCI: the state of the art. IEEE Trans Comput Intell AI Games. 2013 Jun;5(2):82–99. doi: 10.1109/TCIAIG.2013.2263555
  • “Simulink for Matlab (the MathWorks, inc.).” [Online]. Available: http://www.mathworks.co.uk/products/simulink/
  • “Unity 3D game engine.” [Online]. Available: https://unity3d.com/
  • Kai Keng Ang, Zheng Yang Chin, Haihong Zhang, and Cuntai Guan, “Filter bank common spatial pattern (FBCSP) in brain-computer interface,” In 2008 IEEE International Joint Conference on Neural Networks, Hong Kong, 2008. pp. 2390–2397. doi: 10.1109/IJCNN.2008.4634130.
  • K. K. Ang, Z. Y. Chin, C. Wang, C. Guan, and H. Zhang. Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Front Neurosci. 2012;6(MAR):1–9. doi: 10.3389/fnins.2012.00039
  • Lotte F, Guan C. Regularizing common spatial patterns to improve BCI designs: unified theory and new algorithms. IEEE Trans Biomed Eng. 2011 Feb;58(2):355–362. doi: 10.1109/TBME.2010.2082539
  • F. Lotte and C. Guan, ‘Regularized common spatial patterns (RCSP) toolbox.’ [Online]. Available: https://sites.google.com/site/fabienlotte/code-and-softwares
  • J. Pohjalainen, O. Räsänen, and S. Kadioglu. Feature selection methods and their combinations in high-dimensional classification of speaker likability, intelligibility and personality traits. Comput Speech Lang. 2015;29(1):145–171. doi:10.1016/j.csl.2013.11.004
  • F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi. A review of classification algorithms for EEG-based brain-computer interfaces. J Neural Eng. 2007 Jun;4(2):R1–R13. doi:10.1088/1741-2560/4/2/R01
  • A. Korik, R. Sosnik, N. Siddique, and D. Coyle. EEG mu and beta bandpower encodes information for 3D hand motion trajectory prediction. In: D. Coyle, Ed. PBR: brain-computer interfaces: lab experiments to real-world applications. Vol. 228. UK: Elsevier Inc; 2016. 71–105.doi:10.1016/bs.pbr.2016.05.001
  • G. R. Müller-Putz, R. Scherer, C. Brunner, R. Leeb, and G. Pfurtscheller. Better than random? A closer look on BCI results. Int J Bioelectromagn. 2008;10(1):52–55. [Online]. Available: http://www.ijbem.org
  • M. González-Franco, P. Yuan, D. Zhang, B. Hong, and S. Gao. Motor imagery based brain-computer interface: a study of the effect of positive and negative feedback. In: Proceedings of the annual international conference of the IEEE Engineering in Medicine and Biology Society, EMBS; 2011. p. 6323–6326. doi:10.1109/IEMBS.2011.6091560
  • J. D. Wander, et al. Distributed cortical adaptation during learning of a brain–computer interface task. Proc Nat Acad Sci. 2013 Jun;110(26). doi: 10.1073/pnas.1221127110
  • Hassabis D, Spreng RN, Rusu AA, et al. Imagine all the people: how the brain creates and uses personality models to predict behavior. Cerebral Cortex. 2014;24(8):1979–1987. doi: 10.1093/cercor/bht042
  • A. I. Sburlea, M. Wilding, and G. R. Müller-Putz. Disentangling human grasping type from the object’s intrinsic properties using low-frequency EEG signals. Neuroimage Rep. 2021 Jun;1(2):100012. doi: 10.1016/j.ynirp.2021.100012
  • L. H. Chew, J. Teo, and J. Mountstephens. Aesthetic preference recognition of 3D shapes using EEG. Cogn Neurodyn. 2016;10(2):165–173. doi: 10.1007/s11571-015-9363-z
  • C. Cooney, A. Korik, R. Folli, and D. Coyle. Evaluation of hyperparameter optimization in machine and deep learning methods for decoding imagined speech EEG. Sensors. 2020;20(16):4629. doi:10.3390/s20164629
  • Korik A, McCreadie K, McShane N, et al. Competing at the Cybathlon championship for people with disabilities: long-term motor imagery brain–computer interface training of a cybathlete who has tetraplegia. J Neuroeng Rehabil. 2022 Sep;19(1):1–22.
  • Bigirimana AD, Siddique N, Coyle D. Emotion-Inducing Imagery versus Motor Imagery for a brain-computer interface. IEEE Trans Neural Syst Rehabil Eng. 2020 Apr;28(4):850–859. doi: 10.1109/TNSRE.2020.2978951
  • A. Duffy, L. Hay, M. Grealy, and T. Vuletic, “A vision for cognitive driven creative design,” in 30th Anniversary Heron Island Conference on Computational and Cognitive Models of Creativity, Gladstone, 2019, pp. 1–22. Accessed: Feb. 26, 2023. [Online]. Available: http://dccconferences.org/hi19/index.html
  • Gerard Campbell et al., “Functional activity and connectivity during creative ideation in product design engineers,” in 6th Meeting of the Society for the Neuroscience of Creativity, Boston, United States, Mar. 2020, pp. 1–1.