Research Article

Development and evaluation of verbal fluency cut-off scores for the Brisbane Evidence-Based Language Test (EBLT) for use in clinical practice

Received 12 Dec 2023, Accepted 03 Apr 2024, Published online: 22 Apr 2024

ABSTRACT

The Brisbane Evidence-Based Language Test (EBLT) incorporates verbal fluency tasks (animals and words starting with ‘F’), although it does not currently provide cut-off scores for the verbal fluency subtest. The aims were to gain speech pathologists’ perceptions of the current scoring of the verbal fluency subtest (i.e., initial survey) and to develop new verbal fluency cut-offs. An online survey study design was implemented, involving three stages: (1) initial survey, (2) weekly surveys and (3) final survey. Forty-two speech pathologists (age range: 23–66 years) participated in the initial survey and 23 clinicians participated in the weekly surveys. The verbal fluency cut-offs were calculated through diagnostic receiver operating characteristic (ROC) sensitivity and specificity analysis comparing verbal fluency test scores with the binary (yes/no) language reference standard result. A cut-off score of ≤6 words on each task (animal task and words starting with ‘F’ task) was indicative of impaired performance. When not stratified by age, the combined animal and words starting with ‘F’ task had a lower cut-off of ≤15 indicative of impairment. Overall diagnostic accuracy was greater when the animal and ‘F’ words tasks were combined. Participants reported that the new cut-offs were helpful, appropriate, in line with their clinical impression and patient presentation, and met their needs as clinicians. The findings indicate that speech pathologists found the new verbal fluency cut-offs a useful addition informing their decision-making in clinical language assessments.

Introduction

Stroke is a leading cause of mortality and morbidity globally and often results in ongoing impairments (Strilciuc et al., Citation2021). Approximately 30% of stroke survivors experience aphasia, which may affect the production and comprehension of written, spoken and/or signed language (Berg et al., Citation2020; Engelter et al., Citation2006; Grönberg, Henriksson, Stenman, & Lindgren, Citation2022). Language production difficulties, specifically word retrieval issues, commonly occur in people with aphasia (PWA) and are predominantly assessed through picture naming, narrative production, and/or verbal fluency (Faroqi-Shah & Milman, Citation2018; Schuchard & Middleton, Citation2018). Verbal fluency tests require the participant to produce as many words relevant to a pre-determined category as possible in a set time frame, typically one minute (Kim, Kim, Kim, & Heo, Citation2011). These verbal fluency measures are often incorporated within aphasia assessments as they are a quick and efficient method of providing clinically relevant information about linguistic and executive functions (Bolla, Lindgren, Bonaccorsy, & Bleecker, Citation1990). Verbal fluency is incorporated in the Brisbane Evidence-Based Language Test (EBLT), an assessment that can be used to diagnose the presence or absence of post-stroke aphasia (Rohde et al., Citation2020). The EBLT assesses acquired language disorders in adult populations and is used in speech pathology clinical practice (Arnold, Wallace, Ryan, Finch, & Shrubsole, Citation2020). Currently, the EBLT does not provide cut-off scores for the verbal fluency subtest; only total scores (/45 for animals and /35 for /f/ words) are provided. These total scores are maximum scores only, not cut-off scores indicating impaired/non-impaired language. The maximum scores were selected with the specific intention that they would not be achieved by any patient.
These maximum scores were determined from the Brisbane EBLT psychometric dataset (n = 100 patients), in which the highest patient score achieved was 38 for animals and 29 for /f/ words. The maximum scores (45 and 35) are therefore comfortably above what any patient achieved. If the maximum scores were lower (e.g., 25) and a patient could name 28 animals, they would be awarded 25 and three animal responses would essentially be ‘lost’. The current absence of clinical cut-off scores limits the clinical applicability of test results. Therefore, the purpose of this study was to develop and evaluate verbal fluency cut-off scores for the EBLT according to speech pathologists.

Word retrieval difficulties in PWA may present as slow responses, semantic and/or phonological errors and/or an inability to produce the target word (Galletta & Goral, Citation2018). Difficulties in word retrieval are theorised to result from damage to cognitive or language processes, which causes impaired lexical-semantic representations (Riès, Dronkers, & Knight, Citation2016). Lexical activation and selection are the overarching processes for word retrieval, where lexical activation is subconsciously achieved by identifying features that correlate to both semantic memory and the intended word (Riès et al., Citation2016). This process allows the formulation of lexical representations through semantic feature mapping (Faroqi-Shah & Gehman, Citation2021). Lexical selection occurs when the target word’s phonological form and articulatory plan are retrieved (Riès et al., Citation2016). Verbal fluency tasks are completed via this word retrieval process, although verbal fluency measures provide clinically relevant information on additional cognitive processes (Barry, Bates, & Labouvie, Citation2008).

Verbal fluency measures typically involve two tasks: phonemic fluency (i.e., words starting with the letter ‘F’) and semantic categories (i.e., animals) (Sarno, Postman, Cho, & Norman, Citation2005). The EBLT incorporates both of these tasks. Several cognitive processes are involved in verbal fluency tasks, including semantic memory, selective inhibition, attention, self-monitoring, processing speed, effort and atypical word searches dictated by phonemic and or semantic categories (Barry et al., Citation2008; Bolla et al., Citation1990; Gladsjo et al., Citation1999). Verbal fluency measures have been found to have clinically significant implications for measuring patient outcomes and determining intervention targets. A study by Sarno and Levita (Citation1979) found that phonemic fluency tasks were the most sensitive measure of determining a change in patients in their first-year post-stroke. Four tasks were implemented with the study’s participants: visual naming, the Token Test, sentence repetition and verbal fluency, where the verbal fluency task was most successful at distinguishing the most and least improved patients (Sarno & Levita, Citation1979). Similarly, a study by Adamovich and Henderson (Citation1984) found verbal fluency measures to be clinically significant in creating an intervention for PWA when patients’ verbal fluency responses were analysed. Responses on verbal fluency tasks indicated that PWA used semantic association strategies significantly less than language-intact and healthy control groups, whereas PWA instead relied on phonemic associations (Adamovich & Henderson, Citation1984). Verbal fluency assessments are reported to be widely used in clinical practice and useful not only in identifying lexical difficulties but also examining multiple elements associated with executive function (Jansson, Ortiz, & Barreto, Citation2020). 
The development of new cut-off scores specifically for this verbal fluency subtest will therefore assist in the clinical interpretation of these executive features, which few other language subtests focus on specifically. The large number of responses obtained in this subtest (often 20–25 responses) also enables the calculation of clinically useful cut-offs. Other subtests (e.g., naming objects) often have limited response types (e.g., single words or phrases), and clinically useful cut-offs therefore cannot be similarly calculated.

Verbal fluency performance is influenced by demographic factors including age, education, sex and ethnicity (Peña-Casanova et al., Citation2009). Age and education have been found to significantly impact verbal fluency performance (Acevedo et al., Citation2000; Barry et al., Citation2008; Kave, Citation2005). Higher age is associated with poorer performance on verbal fluency tasks, and higher education is associated with improved performance (Barry et al., Citation2008). Due to cerebral changes that occur with ageing, verbal fluency performance typically declines (Barnes & Burke, Citation2006). This decline occurs because verbal fluency tasks involve executive ability in conjunction with effort and the ability to accurately retrieve and organise information, all of which are affected by age-related cerebral changes (Barnes & Burke, Citation2006; Bryan & Luszcz, Citation2000; Plumet, Gil, & Gaonac'h, Citation2005). A study by Barry et al. (Citation2008) reported that neurologically intact older adults perform more poorly and more variably on verbal fluency tasks than younger populations. As that study was conducted on healthy, neurologically intact participants, its results are not directly clinically relevant to stroke survivors. Tombaugh, Kozak, and Rees (Citation1999), also using neurologically intact participants, found that age had a greater impact than education on semantic fluency tasks, whereas education had a greater impact on phonemic fluency tasks. Age was the factor used in the present study to create more specific and clinically relevant cut-off scores, due to this reported impact of age on verbal fluency performance. Stroke patients are typically older adults; hence age-related reductions in verbal fluency performance may occur in conjunction with potential stroke-related language and executive function difficulties.

While verbal fluency tasks are accurate indicators of linguistic and executive functions in PWA, there is limited research exploring the diagnostic accuracy of verbal fluency tasks in distinguishing between PWA and language-intact stroke patients. A longitudinal study by Sarno et al. (Citation2005) examined changes in phonemic verbal fluency performance in 18 stroke survivors with aphasia. These participants were assessed on the letters ‘F’, ‘A’, and ‘S’ at three-month intervals while receiving post-stroke intervention from three months to one year post-stroke (Sarno et al., Citation2005). Their verbal fluency results demonstrated improved access to grammatical categories (e.g., nouns, verbs, modifiers, and function words) and a significantly improved frequency of phonemic clusters when generating word lists at the end of the intervention (Sarno et al., Citation2005). The verbal fluency task results were not used for diagnostic purposes but rather to recognise the importance of qualitative aspects of verbal fluency (Sarno et al., Citation2005). The study also compared phonemic versus semantic tasks and performance in fluent versus non-fluent PWA (Sarno et al., Citation2005). Sarno et al. (Citation2005) attempted to establish prevailing norms for an F-A-S verbal fluency task, reporting that a typical cut-off score would be 36 words, given that 12 words would, on average, be produced per letter. The use of neurotypical controls limited this study, as it reduced the clinical application of the results within stroke populations. Another limitation was that the effect of age on verbal fluency performance was not examined; hence the likely age-related decline in performance was not accounted for in the cut-offs.

The Western Aphasia Battery-Revised (WAB-R) incorporates semantic verbal fluency in the assessment battery (termed ‘Word Fluency’); however, the verbal fluency task does not provide cut-off scores that distinguish between PWA and language-intact stroke patients (Kertesz, Citation2006). A maximum score of 20 is provided in the scoring guidelines of the WAB-R (Kertesz, Citation2006). The WAB-R incorporates a section detailing that semantic fluency is more important in assessing aphasia than phonemic fluency as the semantic fluency task allows further exploration of lexical access (Bolla et al., Citation1990). Furthermore, phonemic fluency tasks involve more executive functioning and are typically more difficult for neurotypical people and PWA. This task is deemed more sensitive for frontal deficits or dementia; however, a limitation of the study was that only neurologically intact participants were involved (Bolla et al., Citation1990).

A case control study by Faroqi-Shah and Milman (Citation2018) comparing animal, phonemic, and action fluency in aphasia identified the relevance of qualitative analysis on verbal fluency and provided data for interpreting verbal fluency scores. This study involved 28 PWA and 40 age-matched neurotypical adults, all of whom were administered one phonemic (the letters ‘F’, ‘A’, ‘S’) and two semantic (actions and animals) verbal fluency tasks (Faroqi-Shah & Milman, Citation2018). Faroqi-Shah and Milman (Citation2018) clearly outlined their aims and the methods for describing the implementation of the verbal fluency task and included a script for administering the task to the participant. This study found that verbal fluency tasks are impaired in PWA, particularly the animal verbal fluency task, and PWA produced lexically simpler answers compared to their neurotypical peers (Faroqi-Shah & Milman, Citation2018). Despite these findings, the study used a neurotypical control group; hence the results cannot be applied to identifying aphasia in stroke populations in clinical settings. Another limitation of this study was that sensitivity and specificity scores were not presented and verbal fluency performance was not compared with age.

A study by Kim et al. (Citation2011) differentiated PWA and language intact stroke patients through a semantic verbal fluency task and created cut-off scores between the patient groups. It was determined that a 30 s administration time has discriminative validity to differentiate between the 53 stroke patients and 28 neurotypical control participants (Kim et al., Citation2011). This finding is important to the population as a reduced assessment time, from 60 to 30 s, could reduce frustration levels and administration time whilst gaining diagnostically valuable information (Kim et al., Citation2011). The PWA were compared to neurotypical control peers in conjunction with language-intact stroke patients, improving the study results’ clinical application (Kim et al., Citation2011). The cut-offs between the PWA and language-intact groups were 7.0 and 6.0 at the 60 and 30 s periods respectively (Kim et al., Citation2011). Similar to the previously discussed studies, this study did not consider age when creating the cut-offs, although the mean ages and years of education between the patient and control groups did not significantly differ (Kim et al., Citation2011). This study found that PWA performed poorer than language-intact stroke patients, and language-intact stroke patients performed poorer than the neurotypical controls (Kim et al., Citation2011). The authors attributed this pattern of performance to the involvement of executive functions and memory in verbal fluency tasks (Kim et al., Citation2011). Overall, there is a lack of research examining the verbal fluency cut-off scores stratified by age in conjunction with a lack of use of language-intact stroke survivors as controls.

Verbal fluency cut-off scores need to be developed for the EBLT to improve the clinical applicability of the verbal fluency subtest, thereby enhancing assessment and intervention planning post-stroke. The study’s first aim was to gain speech pathologists’ perceptions of the current scoring of the verbal fluency subtest and recommended areas for improvement (i.e., initial survey). The second aim was to develop new verbal fluency cut-offs that included age stratification and severity ratings. The final aim was to pilot and review the feedback to determine if the new cut-offs were appropriate for use in practice (i.e., weekly surveys).

Materials and methods

Study design

A repeated online survey study design was used to explore participant perspectives of the verbal fluency cut-off scores. A co-running study used the same surveys but focussed on the overall severity scores of the EBLT; those results are reported elsewhere. The Checklist for Reporting of Survey Studies (CROSS) and the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) reporting guidelines were used to guide the content and format of information (Eysenbach, Citation2004; Sharma et al., Citation2021).

Survey administration and study preparation

Participants were recruited through snowball sampling of the speech pathology email network: Speech Pathology Email Chats (SPECS). Clinicians registered with the Brisbane Test (brisbanetest.org) website were also informed of the study and potential participants were asked to circulate the study within their networks. The researchers involved in the study also contacted their professional contacts whom they thought might be interested in study participation.

Ethical considerations

The study was approved by a research ethics committee (de-identified).

Data collection methods

Data for participant experience was collected through three online survey stages, (1) initial survey, (2) weekly surveys and (3) final survey, using de-identified survey links through Qualtrics from May to July 2022. Survey administration involved a three-week recruitment period where the initial survey was sent to potential participants, followed by the first weekly survey sent three weeks later.

Participants were required to input a unique code into the initial survey to label their subsequent surveys. This technique preserved participant anonymity while simultaneously allowing researchers to link their responses over time.

Stage 1: initial survey

Participants were sent an initial 5-to-10-min online customised mixed methods survey. This survey had 13 questions with multiple-choice options or free-text boxes. The questions included general demographic questions such as gender, age, clinical caseload and work environment, the number of years working as a speech pathologist and participants’ general usage and perceptions of the scoring of the EBLT and verbal fluency subtest (see Appendix 1). Verbal fluency cut-off scores were provided at the end of the survey.

Calculation of the verbal fluency diagnostic accuracy cut-off scores. The original diagnostic-accuracy data set from the Rohde et al. (Citation2020) cross-sectional validation study was used to calculate the verbal fluency cut-off scores. The EBLT, including the verbal fluency subtest, was conducted with 100 consecutive acute stroke patients (mean age of 66.49 years) admitted to two tertiary hospitals in Brisbane, Australia. Each eligible stroke patient was administered two assessments, randomised in order of administration. The first assessment was the EBLT (including the verbal fluency subtest) and the second was a reference standard measure, a composite of different language and other clinical measures which together formed a binary yes/no result determining whether or not the patient presented with aphasia (Rohde et al., Citation2020). The 100 stroke patients were then categorised into two groups depending upon the results of this reference measure: (1) stroke patients with aphasia and (2) stroke patients without aphasia. These clinical groups were selected as they reflect the populations seen in stroke care and assist in differentiating verbal fluency impairment from the performance of language-intact stroke populations. Healthy (neurologically intact) participants were not included as the control group because comparing healthy populations against PWA post-stroke also evaluates the impact of stroke in general (and not aphasia specifically). The performance of such healthy control groups does not assist in identifying language impairment in stroke (Rohde et al., Citation2018).

The verbal fluency cut-off scores were calculated to determine the presence or absence of impairment (i.e., diagnostic accuracy cut-off scores) based on performance on this language task. These cut-off scores were calculated for the semantic task (i.e., animal naming), the phonemic task (i.e., words starting with the letter ‘F’) and the combined semantic and phonemic tasks. Additional age-specific cut-offs were also calculated by stratifying the combined ‘animals’ and ‘F’ words scores into age categories. This was achieved by dividing the data from the 100 stroke participants in the Rohde et al. (Citation2020) study into three distinct age categories (25–50, 51–75 and 76–100 years). Severity categories indicating whether a verbal fluency impairment is ‘mild’, ‘moderate’ or ‘severe’ were also developed for these age-stratified scores. The limited sample size (100 stroke patients) meant the data could be stratified according to only one variable (age); stratification according to additional variables (e.g., sex, education) would have resulted in too few datapoints within each category. Age was selected as the variable of interest due to the body of research indicating age to be a strong predictor of verbal fluency performance (Barnes & Burke, Citation2006; Barry et al., Citation2008).
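
The age stratification described above can be sketched as follows. This is an illustration only: the patient records are invented, and only the three age bands come from the text.

```python
# Sketch of stratifying combined verbal fluency scores into the three age
# categories named above (25-50, 51-75, 76-100 years). The patient records
# are invented; only the age bands come from the study description.

def age_band(age):
    """Return the age category a patient falls into, or None if out of range."""
    if 25 <= age <= 50:
        return "25-50"
    if 51 <= age <= 75:
        return "51-75"
    if 76 <= age <= 100:
        return "76-100"
    return None

# Hypothetical (age, combined animals + 'F' words score) records
patients = [(48, 31), (66, 14), (59, 22), (82, 9), (77, 12)]

strata = {}
for age, combined_score in patients:
    strata.setdefault(age_band(age), []).append(combined_score)
# strata now maps each age band to the combined scores observed within it
```

Each band's scores would then feed a separate ROC analysis to yield age-specific cut-offs.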

Diagnostic cut-off scores were created through sensitivity and specificity receiver operating characteristic (ROC) curve analysis. The method of analysis for diagnostic accuracy in this study was the same method used by the Rohde et al. (Citation2020) study. As per Rohde et al. (Citation2020), Stata/IC 13.0 was used for diagnostic analysis. Verbal fluency test scores were used to calculate diagnostic accuracy by comparing these scores with the binary (yes/no) language reference standard result (composite reference of multiple language tests and clinical opinion) determining whether or not the patient presented with aphasia (Rohde et al., Citation2020). This comparison was used to determine test specificity and sensitivity and positive (+LR) and negative likelihood ratios (−LR) (Rohde et al., Citation2020). The specificity and sensitivity scores were evaluated at each cut-off threshold to determine the diagnostic verbal fluency cut-off scores (i.e., scores used to discriminate between the presence or absence of impairment) (Rohde et al., Citation2020).

As per the full version of the EBLT, two cut-off scores were provided, which helped optimise the test's clinical utility and assisted in determining the presence or absence of a language condition. The lower verbal fluency threshold scores were determined from the cut-off score which produced the highest sensitivity for a specificity ≥90%. A score at or below these cut-offs indicated the likely presence of impairment (Rohde et al., Citation2020). Higher cut-offs were created to determine the likely absence of impairment (Rohde et al., Citation2020). The higher cut-offs were created from threshold scores that yielded sensitivities of ≥90% (Rohde et al., Citation2020). A score at or above these cut-offs indicated the likely absence of impairment. At-risk scores were those that fell in the range between the higher and lower cut-offs and indicated a risk of impairment.
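
Under one plausible reading of the selection rules above, and given a table of per-threshold values (invented here, with assumed tie-breaking), the two cut-offs could be picked as:

```python
# Sketch of the two-threshold selection rule described above. The
# (cutoff, sensitivity, specificity) rows are invented for illustration,
# and the exact tie-breaking used by the authors is an assumption.

def select_cutoffs(rows):
    """Lower cut-off: the cutoff with the highest sensitivity among those
    with specificity >= 0.90 (score <= lower -> likely impairment).
    Higher cut-off: the lowest cutoff whose sensitivity is >= 0.90
    (score >= higher -> likely absence of impairment)."""
    eligible = [r for r in rows if r[2] >= 0.90]
    lower = max(eligible, key=lambda r: r[1])[0]
    higher = min(c for c, sens, _ in rows if sens >= 0.90)
    return lower, higher

rows = [
    (5,  0.55, 0.97),
    (6,  0.60, 0.96),
    (10, 0.75, 0.92),
    (15, 0.85, 0.90),
    (18, 0.92, 0.80),
    (20, 0.96, 0.70),
]
```

With these hypothetical rows, scores between the two selected cut-offs would fall into the at-risk zone.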

Calculation of the severity range verbal fluency cut-off scores. Severity range cut-off scores were created to quantify verbal fluency scores into mild, moderate and severe categories. These severity ranges were calculated by dividing the score range at or below the lower cut-off (discriminating between impaired and non-impaired language performance) into three equal categories to achieve the severe, moderate and mild severity ranges. The scores were rounded to whole numbers to ensure they were clinically relevant and that the score categories did not overlap.
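
As a hedged sketch of this division, the rounding below is one plausible implementation; the exact boundaries the authors used may differ.

```python
# Sketch of splitting the scores at or below the lower cut-off into three
# non-overlapping whole-number severity bands. The rounding scheme is an
# assumption; the authors' exact boundaries may differ.

def severity_ranges(lower_cutoff):
    span = (lower_cutoff + 1) / 3          # scores per band before rounding
    top_severe = round(span) - 1           # highest score in the 'severe' band
    top_moderate = round(2 * span) - 1     # highest score in the 'moderate' band
    return {
        "severe": (0, top_severe),
        "moderate": (top_severe + 1, top_moderate),
        "mild": (top_moderate + 1, lower_cutoff),
    }
```

For a lower cut-off of 15 (the combined-task figure reported later), this scheme yields three contiguous whole-number bands covering 0 to 15 with no overlap.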

Stage 2: weekly surveys

The aim of the weekly surveys was to obtain feedback on participants’ experiences with the newly developed verbal fluency cut-off scores in practice. The weekly survey contained 15 questions, which remained the same over the six weeks. Seven of the 15 questions specifically concerned verbal fluency, including the frequency of use of the verbal fluency subtest, the populations the subtest was used with, perceptions of the cut-off scores, whether they were meeting the needs of the clinician, and whether any concerns had arisen or changes could be made. The remaining eight questions focussed on the severity guidelines for the whole EBLT and asked how participants were finding the overall guidelines, whether the guidelines were in line with their clinical impression and meeting their needs as clinicians, and whether there were any concerns or changes to be made. The questions were a combination of short-answer and multi-choice formats (see Appendix 2).

Stage 3: final survey

In the final week of data collection, participants were asked to complete a 5-minute survey to give their final perceptions of the verbal fluency cut-off scores and their intended use in clinical practice. Five questions were specific to the verbal fluency subtest in multi-choice and open-ended questions (see Appendix 3).

Survey development and piloting

The three surveys were pretested twice each by the research team and feedback was used to ensure the functionality and usability of the surveys to maximise completion rates before distribution to the participants. The research team included final year speech pathology honours students and registered speech pathologists. The participants used for pretesting were employed in metropolitan hospitals in Australia.

Sample characteristics of survey responses

To be eligible for inclusion in the study, participants had to be practicing speech pathologists who worked in an adult caseload and used the EBLT in their clinical practice. Participants were informed that there would be weekly surveys as part of the study, although they were able to withdraw at any point. One female participant withdrew from the study following the release of the first weekly survey as a result of infrequent use of the EBLT. Exclusion criteria included people who were not speech pathologists and speech pathologists who did not use the EBLT in their clinical practice.

Statistical analysis

The survey data were screened for complete and incomplete responses, where nine respondents with one or fewer questions answered were removed from the data set. Furthermore, only logical and interpretable data was included. An additional entry was excluded from the study as the participant did not meet inclusion criteria (i.e., they were a student). Data analysis of quantitative survey data was completed using Microsoft Excel and SPSS software. Quantitative data was analysed using descriptive statistics, specifically counts and means. Qualitative survey data was analysed using Graneheim and Lundman’s (Citation2004) qualitative content analysis to determine categories and sub-categories. This was achieved by firstly reading through the responses several times to gain an understanding of potential sub-categories. Responses were divided into condensed meaning units and were labelled into codes. The codes were sorted into sub-categories and then categories which held the overall meaning of the responses. All researchers discussed the codes and sub-categories to maximise the study's rigour.

Results

Respondent characteristics

Initial survey

Forty-two speech pathologists (41 females and one male, aged 23–66 years) participated in the initial survey. Participants had variable years of experience in the profession, ranging from 2 to 43 years, and worked in countries including Australia, New Zealand, the United States of America, Canada, India, and the United Kingdom. Participants worked in numerous workplaces including acute and rehabilitation hospital settings in metropolitan areas, private practices, rural community hospitals and outpatient settings. The participant demographics for all participants who responded to the initial survey are reported in Table 1.

Table 1. Initial and weekly survey – participant demographics.

Weekly surveys

Of the 42 participants who completed the initial survey, 23 clinicians (22 females and 1 male, aged 23–66 years) participated in the weekly surveys. These 23 participants had a range of work experience as speech pathologists, from 2 to 43 years. Participants reported that they worked in Australia, the United States of America, Canada, England, the United Kingdom and India (Table 1). Clinician participation was variable for each weekly survey (Figure 1). Participation declined overall from the initial survey to the final survey. The lowest participation rate was for the week 6 survey, in which two clinicians participated.

Figure 1. Flow chart of the number of participants for each survey.


Main findings

Current use and perceptions of the verbal fluency subtest

Initial survey. Twenty-three of the 42 participants reported their use and perceptions of the verbal fluency subtest. The verbal fluency subtest was used at varying frequencies in a typical week, ranging from no use to daily use. Of the 23 participants, 19 reported how often they used the verbal fluency subtest in a typical week; 17 used it on a weekly basis, with the majority using the subtest approximately one to two times per week (range: 1–7). Three participants did not provide numerical equivalents to their weekly use: one reported they did not use the test ‘often’, another reported using the test ‘almost daily’ and another stated they were unaware of a separate verbal fluency component. These frequencies are summarised in Figure 2.

Figure 2. Initial survey: frequency of use of verbal fluency subtest in a typical week.


Survey questions which sought qualitative information on participants’ experiences and perceptions of the verbal fluency scores were analysed using content analysis. Each participant was assigned a number from 1 to 23 to ensure consistency in reporting throughout the weekly surveys. Table 2 reports the results from the initial survey question: ‘What are your current thoughts on scoring the verbal fluency component of the EBLT? Is it specific enough for your needs?’. Four participants (P4, P12, P19, P22) reported that the scoring was appropriate; however, the majority (n = 18) reported that the current scores should be modified for various reasons: norms would be beneficial (n = 2), the scores are not specific enough (n = 2), scores are too high (n = 3), scores underestimate functional abilities (n = 3), are a poor measure of verbal output (n = 1) or do not capture strengths (n = 1). One participant (P16) provided a perspective which was not relevant to the verbal fluency subtest and instead commented on the picture description subtest; this comment was removed from content analysis.

Table 2. Initial survey: content analysis of current perceptions of scoring the verbal fluency subtest with categories, subcategories and quotes.

Verbal fluency cut-off scores

New verbal fluency scores were created for the participants to pilot in their clinical practices. To reflect the existing format of the diagnostic cut-offs for the EBLT, two cut-off scores were developed for the animal task and the words starting with ‘F’ task: a lower cut-off (indicating likely presence of impairment) and a higher cut-off (indicating likely absence of impairment). An ‘at-risk zone’ between these two cut-offs indicates possible impairment (Table 3). Where possible, severity ranges (indicating mild, moderate or severe impairment) were also calculated.

Table 3. Diagnostic accuracy characteristics for animals and words starting with ‘F’ (separate task analysis).

Animal and ‘F’ word separate task analysis. Cut-offs indicating the presence of verbal fluency impairment were created by determining the score with the highest sensitivity at a specificity of ≥90%. These parameters were selected to achieve the highest possible sensitivity (optimising the test’s ability to accurately identify aphasia) while maintaining a high level of specificity (≥90%), ensuring that the test can also accurately identify when a person does not present with language impairment (Rohde et al., Citation2020). This method was used for both the animal verbal fluency task and the words starting with ‘F’ task. A cut-off score of ≤6 words for the animal task yielded a sensitivity of 60.27% and a specificity of 96.30%, with LR+ 16.27 and LR− 0.41. A cut-off of ≤6 words for the words starting with ‘F’ task had a sensitivity of 71.23% and a specificity of 92.59%, with LR+ 9.62 and LR− 0.31. Severity analysis (the creation of ‘mild’, ‘moderate’ and ‘severe’ score ranges) was also attempted for both tasks; however, as both generated a lower cut-off score of 6, the range between 0 and 6 was too small to be meaningfully divided into three separate severities. Consequently, no mild, moderate or severe ranges could be provided for the separate tasks. The separate cut-off scores for animals and ‘F’ words are reported in Table 3.

Combined animal and ‘F’ word analysis. Cut-off scores were also created for the combined animal and ‘F’ word total task scores. This analysis combined scores across the two tasks to create a single verbal fluency score, and the combined score was separated into mild, moderate and severe categories. Cut-offs were calculated both when not stratified by age and when stratified by age (Table 4). When not stratified by age, the combined task had a lower cut-off of ≤15 indicative of impairment, with a sensitivity of 68.49% and a specificity of 92.59% (LR+ 9.25 and LR− 0.34).

Table 4. Diagnostic accuracy characteristics – combined task analysis (animals and ‘F’ words).

Cut-offs were stratified into three age categories spanning 25 to 100 years, with approximately 25 years in each. For the 25–50 years age range, the combined task had a lower cut-off of ≤29, with a sensitivity of 84.62% and a specificity of 100% (LR− 0.15). A lower cut-off of ≤24 was determined for the 51–75 years age range, which yielded a sensitivity of 86.49% and a specificity of 100% (LR− 0.13). For the 76–100 years group, the lower cut-off of ≤7 was too small to be divided into severity categories; this age range had a sensitivity of 65.22% and a specificity of 100% (LR− 0.35). Overall, combining the animal and words starting with ‘F’ tasks produced marginally greater overall diagnostic accuracy, as shown by a greater AUC (0.897) compared with the individual tasks (Tables 3 and 4).
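
The AUC comparison above can be illustrated with a short sketch. One standard way to compute the area under the ROC curve without tracing it point by point is the rank (Mann–Whitney) formulation: the AUC equals the probability that a randomly chosen non-impaired patient scores higher than a randomly chosen impaired patient, with ties counting one half. This is a generic illustration under that formulation, not the study’s analysis code, and the scores used below are hypothetical.

```python
def auc(impaired_scores, intact_scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the proportion of (impaired, intact) score pairs in which the
    language-intact patient produces more words, ties counted as half."""
    wins = 0.0
    for i in impaired_scores:
        for n in intact_scores:
            if n > i:
                wins += 1.0
            elif n == i:
                wins += 0.5
    return wins / (len(impaired_scores) * len(intact_scores))
```

An AUC of 1.0 means the two groups’ scores separate perfectly, 0.5 means no discrimination; an AUC of about 0.897, as reported for the combined task, sits between those extremes and exceeds the individual-task values.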

Results and perceptions of the verbal fluency cut-off scores

Week 1 survey. The verbal fluency subtest was used by participants between 0 and 5 times during the first week (Appendix 4). Five participants (P7, P9, P10, P19, P23) reported they did not use the cut-off scores, two (P16, P17) implemented them twice, one (P4) implemented them three times, and another (P1) used them five times. Participants perceived the cut-offs to be ‘very helpful’ (P1, P4, P16, P17) (Appendix 5). Participants who used the cut-offs found the scores beneficial and reported that the scores met their needs (Appendix 6). Relative to their clinical impression and the presentation of their clients, 33.3% of participants (n = 3) found the cut-offs strongly in line with their clinical impression, 11.1% (n = 1) found them somewhat in line, and 55.6% (n = 5) neither agreed nor disagreed (Appendix 6). Three participants (P4, P7, P16) had no concerns with the cut-offs, and one participant (P1) suggested that it would be beneficial to separate the severity scores and commented that more information about the ‘at-risk’ category would be helpful (Appendix 5). Participants (P1, P16, P17) commented that no changes were necessary (Appendix 5).

Week 2 survey. The cut-off scores were implemented from 0 to 3 times during the week. Five participants (P2, P4, P16, P19, P23) did not use the cut-off scores, one participant (P1) used the scores once, and two (P17, P11) used the scores three times. Three participants (P1, P4, P17) reported that the cut-offs were appropriate and useful in practice, and one (P11) raised a concern that the scores were ‘a bit high’ (Appendix 5). The cut-off scores met the needs of five participants (62.5%), while 37.5% (n = 3) were unsure whether the cut-offs met their needs as clinicians (Appendix 6). Three participants (42.9%) strongly agreed that the cut-offs were in line with their clinical impression and client presentation, 28.6% (n = 2) somewhat agreed, and 28.6% (n = 2) neither agreed nor disagreed (Appendix 6).

Week 3 survey. The verbal fluency cut-offs were used from 0 to 3 times during week 3. Two participants (P2, P16) did not implement the cut-offs over the week, one (P21) used them once, one (P1) used them twice, and one (P17) used them three times. Two participants (P1, P17) reported that the cut-offs were appropriate and useful in clinical practice (Appendix 5). All participants believed that the cut-offs met their needs; 75% (n = 3) strongly agreed that the cut-offs were in line with their clinical impression and client presentation, and 25% (n = 1) somewhat agreed (Appendix 6). No changes were suggested by participants (Appendix 5).

Week 4 survey. Participants implemented the cut-off scores between 0 and 3 times, with three participants (P1, P4, P16) not implementing the cut-offs in the past week and two participants (P17, P21) implementing them three times. The two participants (P17, P21) who implemented the scores reported that the cut-offs were appropriate and good for documentation (Appendix 5). One participant (P17) reported that the cut-offs met their needs and strongly agreed that they were in line with their clinical impression and client presentation; one (P21) was unsure whether the cut-offs met their needs and somewhat agreed that they were in line with their clinical impression (Appendix 6). No changes were reported (Appendix 5).

Week 5 survey. The participants (P4, P17, P21) did not implement the cut-off scores in week 5. There were no reports of perceptions of the cut-offs, although one participant (P17) believed the cut-offs met their needs as a clinician and neither agreed nor disagreed that the cut-offs were in line with their clinical impression and client presentation (Appendix 6). No changes were reported (Appendix 5).

Week 6 survey. Two participants (P16, P17) completed the survey with one (P16) not implementing the cut-offs and one (P17) implementing the cut-offs once. One participant (P17) reported that the cut-offs were appropriate, believed they were meeting their needs as a clinician, and strongly agreed that the cut-offs were in line with their clinical interpretation and client presentation (Appendix 6). There were no reported changes (Appendix 5).

Final survey. Two participants (P4, P17) reported that the verbal fluency cut-offs were useful in clinical practice, and one participant (P19) did not implement the cut-offs over the week due to an inappropriate caseload. Two participants (P4, P17) reported that they would use the new cut-offs routinely in clinical practice and perceived the cut-offs to help with clinical decision-making (Table 5). One participant (P19) was unsure whether they would routinely use the cut-offs and whether the new cut-offs helped with clinical decision-making (Table 5). All participants (P4, P17, P19) would encourage other speech pathologists to use the cut-offs in their clinical practice (Table 5). There was no further relevant feedback on the verbal fluency cut-off scores, and one response was removed from the data as it was not relevant to the verbal fluency subtest.

Table 5. Final survey: analysis of quantitative data for questions 3, 4 and 5.

Discussion

This study was the first to evaluate current and newly developed cut-off scores for the verbal fluency subtest of the EBLT. The majority of participants perceived the current scoring as not specific enough, too high, underestimating functional abilities and providing a poor measure of verbal output. In response to participant feedback, new verbal fluency cut-off scores were developed to provide clinically relevant information to speech pathologists and multidisciplinary teams that may assist with intervention planning and decision-making. This study also evaluated the newly developed cut-off scores through clinician perspectives. The majority of those who implemented the new cut-off scores reported that they were helpful, appropriate, in line with their clinical impression and the presentation of the patient, and met their needs as clinicians.

A finding of this study was that speech pathologists wanted more specific information from the verbal fluency cut-off scores in the EBLT. While this is the first study to find that speech pathologists are seeking more specific cut-offs for the EBLT, similar findings were reported by Sarno et al. (Citation2005), who determined that qualitative and quantitative information derived from verbal fluency tasks can help clinicians form definite and individualised hypotheses about prognosis in PWA. This finding suggests that clinicians are seeking more specific and definite information on recovery from language impairments and highlights the importance of clinicians gaining in-depth information on patient abilities. Hersh, Wood, and Armstrong (Citation2018) determined that a comprehensive language assessment is crucial in post-stroke rehabilitation, as it facilitates intervention through language baselines, diagnosis, monitoring of progress, goal setting, and intervention and discharge planning. The study by Hersh et al. (Citation2018) suggests potential reasons why speech pathologists want more specific cut-offs to inform their clinical decision-making.

In response to speech pathologists’ perceptions of the current verbal fluency cut-offs, new cut-off scores were created for the animal and words starting with ‘F’ tasks, with the combined task scores divided into severity categories. The at-risk and higher cut-off scores were lower for the words starting with ‘F’ task, suggesting that this phonemic task was more difficult. Various studies have calculated average scores for phonemic fluency tasks, although these were conducted with neurologically intact adults and hence did not provide cut-offs indicating the presence or absence of impairment (Ruff, Light, Parker, & Levin, Citation1996; Spreen & Risser, Citation2003). Kim et al. (Citation2011) compared stroke patients with aphasia to language-intact stroke patients on a semantic fluency task and created a cut-off score of 7.0 indicating impairment. This is similar to the cut-off created for the semantic verbal fluency task in the present study (≤6 indicating impairment). In the current study, combined animal and words starting with ‘F’ cut-offs were created and stratified by age and severity categories. The cut-off scores had varying degrees of diagnostic accuracy across tasks and age categories. Overall diagnostic accuracy was greater when the animal and words starting with ‘F’ tasks were combined, as determined by a greater AUC. These cut-offs can be implemented in speech pathology practice, particularly since they were created using data from stroke patients as opposed to neurologically intact participants.

Another study finding was that the phonemic verbal fluency task was likely more difficult than the semantic fluency task, as the at-risk and higher cut-off scores were lower for the words starting with ‘F’ task. This finding has been reported in other studies. Shao, Janse, Visser, and Meyer (Citation2014) noted that the task demands differ: semantic fluency tasks allow the participant to use existing links between concepts to elicit responses, whereas phonemic fluency tasks require the participant to retrieve words from a phonemic category, which is rarely done in everyday speech production and is hence a novel retrieval strategy (i.e., more a measure of executive ability) (Luo, Luk, & Bialystok, Citation2010; Shao et al., Citation2014). Sarno et al. (Citation2005) similarly found that phonemic fluency tasks are unnatural linguistic tasks, as words are typically retrieved through semantic association rather than through sounds. The finding that phonemic verbal fluency tasks are more difficult and index executive abilities, as opposed to the verbal ability more associated with semantic tasks, may affect clinicians’ qualitative and quantitative interpretations of the individual task scores (Shao et al., Citation2014).

This study stratified the verbal fluency cut-off scores by age due to the influence age has on verbal fluency performance. Increased age is associated with poorer performance on verbal fluency tasks; stratifying the new cut-off scores by age therefore makes them more specific and clinically relevant (Barry et al., Citation2008). This allows the cut-off scores to be integrated into speech pathology clinical practice, providing clinicians with a more specific diagnostic measure of impairment. The majority of participants in this study found the cut-offs helpful (e.g., P1: ‘Very helpful in clearly documenting the severity of aphasia associated with stroke’) and appropriate and useful in practice (e.g., P21: ‘Good descriptors/guide for documentation’). One participant (P11) was concerned that the cut-offs were too high, and another (P1) suggested that more information about the ‘at-risk’ category would be beneficial. As with verbal fluency tasks, age effects have been reported in other post-stroke language tasks, including picture naming, where reaction times to name pictures were notably slower with increased age (Gordon & Cheimariou, Citation2014). Simos, Kasselimis, Potagas, and Evdokimidis (Citation2014) reported significant effects of age on sentence-level auditory comprehension, where performance decreased as age increased.

The cut-off scores for the combined animal and words starting with ‘F’ task were divided into severity categories (mild, moderate and severe). Dividing the scores into severity categories increased their clinical applicability, as clinicians gain more specific diagnostic information that may guide further assessment or intervention planning. Incorporating severity categories in the cut-offs is useful for clinicians as part of their assessments, as it provides additional information that may be used in documentation or in describing assessment findings (Khadka, Gothwal, McAlinden, Lamoureux, & Pesudovs, Citation2012). Severity categories may also be used to plan additional assessment items or to select intervention targets (Khadka et al., Citation2012). The severity cut-off scores for the combined animal and words starting with ‘F’ task may be implemented in clinical practice, where they may benefit clinical documentation, further assessment and intervention planning, and provide a method for clinicians to describe results to patients effectively.

Strengths and limitations

A key strength of this study was participant diversity. Participants ranged in age, gender, race, country (across six countries) and area of work (from acute to residential care facilities). This enabled feedback from clinicians practising in a wide range of settings. This diversity contributed to another key strength: the study’s generalisability. Because the cut-off scores were implemented across a range of settings and populations and by multiple clinicians, the findings could be applied to a broader clinical context. Another strength was the number of opportunities participants had to provide feedback: six weekly surveys over two months. This enabled participants to share their perceptions at any point if concerns arose following clinical implementation.

A study limitation was the low response rate and high attrition rate. Forty-two clinicians participated in the initial survey, but only 23 of these participated in the weekly surveys. The highest response rate for a weekly survey was the first (10 participants), and responses decreased over the weeks to a low of two participants for the week 6 survey. The high attrition reduced the number of perspectives received on the cut-off scores, particularly as the study progressed. While this attrition may be associated with the high frequency of the surveys (weekly), it may also indicate that participants did not have concerns, or that their concerns were resolved, given the numerous opportunities to provide feedback throughout the study. Further research is warranted, including re-testing of the cut-off scores in a larger sample. Another limitation was that the cut-off scores were stratified by age only and did not consider other factors that may influence performance on verbal fluency tasks, such as education and ethnicity.

Potential areas for future research

Future research could explore the effect that different dialects (e.g., Aboriginal English) may have on the interpretation of verbal fluency cut-off scores, and could analyse other factors influencing verbal fluency performance, such as education, which this age-focused study did not examine.

Verbal fluency tasks can be used post-stroke to provide qualitative and quantitative information on a patient’s language abilities. The existing EBLT cut-off scores provided clinicians with diagnostically accurate scores to determine the likely presence or absence, and severity, of language impairment; however, this study found that clinicians wanted verbal fluency-specific cut-off scores to inform clinical decision-making for their patient populations. The new cut-off scores were created to address this gap. The combined animal and words starting with ‘F’ verbal fluency tasks were found to be slightly more diagnostically accurate than the individual tasks. Following implementation of the new scores in clinical practice, study participants reported that the new cut-offs were appropriate, met their needs as clinicians, and aligned with their clinical interpretation and client presentation. The findings of this study could be used by speech pathologists to guide clinical decision-making and intervention planning post-stroke.

Acknowledgements

The authors thank the study participants for their contribution to this research and are grateful to (De-identified for review) for helping create and distribute the surveys.

Disclosure statement

Alexia Rohde is the creator of the Brisbane Evidence-Based Language Test.

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References

  • Acevedo, A., Loewenstein, D. A., Barker, W. W., Harwood, D. G., Luis, C., Bravo, M., … Duara, R. (2000). Category fluency test: Normative data for English- and Spanish-speaking elderly. Journal of the International Neuropsychological Society, 6(7), 760–769. doi:10.1017/S1355617700677032
  • Adamovich, B. L. B., & Henderson, J. A. (1984). Can we learn more from word fluency measures with aphasic, right brain injured, and closed head trauma patients? Clinical Aphasiology, 124–131.
  • Arnold, H., Wallace, S. J., Ryan, B., Finch, E., & Shrubsole, K. (2020). Current practice and barriers and facilitators to outcome measurement in aphasia rehabilitation: A cross-sectional study using the theoretical domains framework. Aphasiology, 34(1), 47–69. doi:10.1080/02687038.2019.1678090
  • Barnes, C. A., & Burke, S. N. (2006). Neural plasticity in the ageing brain. Nature Reviews. Neuroscience, 7(1), 30–40. doi:10.1038/nrn1809
  • Barry, D., Bates, M. E., & Labouvie, E. (2008). Fas and CFL forms of verbal fluency differ in difficulty: A meta-analytic study. Applied Neuropsychology, 15(2), 97–106. doi:10.1080/09084280802083863
  • Berg, K., Isaksen, J., Wallace, S. J., Cruice, M., Simmons-Mackie, N., & Worrall, L. (2020). Establishing consensus on a definition of aphasia: An e-Delphi study of international aphasia researchers. Aphasiology, 36(4), 385–400. doi:10.1080/02687038.2020.1852003
  • Bolla, K. I., Lindgren, K. N., Bonaccorsy, C., & Bleecker, M. L. (1990). Predictors of verbal fluency (FAS) in the healthy elderly. Journal of Clinical Psychology, 46(5), 623–628. doi:10.1002/1097-4679(199009)46:5<623::AID-JCLP2270460513>3.0.CO;2-C
  • Bryan, J., & Luszcz, M. A. (2000). Measurement of executive function: Considerations for detecting adult age differences. Journal of Clinical and Experimental Neuropsychology, 22(1), 40–55. doi:10.1076/1380-3395(200002)22:1;1-8;FT040
  • Engelter, S. T., Gostynski, M., Papa, S., Frei, M., Born, C., Ajdacic-Gross, V., … Lyrer, P. A. (2006). Epidemiology of aphasia attributable to first ischemic stroke: Incidence, severity, fluency, etiology, and thrombolysis. Stroke (1970), 37(6), 1379–1384. doi:10.1161/01.STR.0000221815.64093.8c
  • Eysenbach, G. (2004). Improving the quality of web surveys: The checklist for reporting results of internet E-surveys (CHERRIES). Journal of Medical Internet Research, 6(3), 34–16. doi:10.2196/jmir.6.3.e34
  • Faroqi-Shah, Y., & Gehman, M. (2021). The role of processing speed and cognitive control on word retrieval in ageing and aphasia. Journal of Speech, Language, and Hearing Research, 64(3), 949–964. doi:10.1044/2020_JSLHR-20-00326
  • Faroqi-Shah, Y., & Milman, L. (2018). Comparison of animal, action and phonemic fluency in aphasia. International Journal of Language & Communication Disorders, 53(2), 370–384. doi:10.1111/1460-6984.12354
  • Galletta, E. E., & Goral, M. (2018). Response time inconsistencies in object and action naming in anomic aphasia. American Journal of Speech-Language Pathology, 27(1S), 477–484. doi:10.1044/2017_AJSLP-16-0168
  • Gladsjo, J. A., Schuman, C. C., Evans, J. D., Peavy, G. M., Miller, S. W., & Heaton, R. K. (1999). Norms for letter and category fluency: Demographic corrections for age, education, and ethnicity. Psychological Assessment Resources, 6(2), 147–178. doi:10.1177/107319119900600204
  • Gordon, J. K., & Cheimariou, S. (2014). Semantic interference in a randomised naming task: Effects of age, order, and category. Cognitive Neuropsychology, 30(7-8), 476–494. doi:10.1080/02643294.2013.877437
  • Graneheim, U., & Lundman, B. (2004). Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today, 24(2), 105–112. doi:10.1016/j.nedt.2003.10.001
  • Grönberg, A., Henriksson, I., Stenman, M., & Lindgren, A. G. (2022). Incidence of aphasia in ischemic stroke. Neuroepidemiology, 56(3), 174–182. doi:10.1159/000524206
  • Hersh, D., Wood, P., & Armstrong, E. (2018). Informal aphasia assessment, interaction and the development of the therapeutic relationship in the early period after stroke. Aphasiology, 32(8), 876–901. doi:10.1080/02687038.2017.1381878
  • Jansson, I. J., Ortiz, K. Z., & Barreto, S. S. (2020). Qualitative and quantitative aspects of the FAS fluency test in people with aphasia. Dementia & Neuropsychologia, 14(4), 412–418.
  • Kave, G. (2005). Phonemic fluency, semantic fluency, and difference scores: Normative data for adult Hebrew speakers. Journal of Clinical and Experimental Neuropsychology, 27(6), 690–699. doi:10.1080/13803390490918499
  • Kertesz, A. (2006). The western aphasia battery (revised). Washington, DC: PsychCorp.
  • Khadka, J., Gothwal, V. K., McAlinden, C., Lamoureux, E. L., & Pesudovs, K. (2012). The importance of rating scales in measuring patient-reported outcomes. Health and Quality of Life Outcomes, 10(1), 80–80. doi:10.1186/1477-7525-10-80
  • Kim, H., Kim, J., Kim, D. Y., & Heo, J. (2011). Differentiating between aphasic and nonaphasic stroke patients using semantic verbal fluency measures with administration time of 30 seconds. European Neurology, 65(2), 113–117. doi:10.1159/000324036
  • Luo, L., Luk, G., & Bialystok, E. (2010). Effect of language proficiency and executive control on verbal fluency performance in bilinguals. Cognition, 114(1), 29–41. doi:10.1016/j.cognition.2009.08.014
  • Peña-Casanova, J., Quiñones-Úbeda, S., Gramunt-Fombuena, N., Quintana-Aparicio, M., Aguilar, M., Badenes, D., … Blesa, R. (2009). Spanish multicenter normative studies (NEURONORMA project): Norms for verbal fluency tests. Archives of Clinical Neuropsychology, 24(4), 395–411. doi:10.1093/arclin/acp042
  • Plumet, J., Gil, R., & Gaonac'h, D. (2005). Neuropsychological assessment of executive functions in women. Neuropsychology, 19(5), 566–577. doi:10.1037/0894-4105.19.5.566
  • Riès, S. K., Dronkers, N. F., & Knight, R. T. (2016). Choosing words: Left hemisphere, right hemisphere, or both? Perspective on the lateralisation of word retrieval. Annals of the New York Academy of Sciences, 1369(1), 111–131. doi:10.1111/nyas.12993
  • Rohde, A., Doi, S. A., Worrall, L., Godecke, E., Farrell, A., O'Halloran, R., … Wong, A. (2020). Development and diagnostic validation of the Brisbane evidence-based language test. Disability and Rehabilitation, 44(4), 625–636. doi:10.1080/09638288.2020.1773547
  • Rohde, A., Worrall, L., Godecke, E., O’Halloran, R., Farrell, A., & Massey, M. (2018). Diagnosis of aphasia in stroke populations: A systematic review of language tests. PLoS One, 13(3), e0194143. doi:10.1371/journal.pone.0194143
  • Ruff, R. M., Light, R. H., Parker, S. B., & Levin, H. S. (1996). Benton controlled oral word association test: Reliability and updated norms. Archives of Clinical Neuropsychology, 11(4), 329–338. doi:10.1016/0887-6177(95)00033-X
  • Sarno, M. T., & Levita, E. (1979). Recovery in treated aphasia in the first year post-stroke. Stroke (1970), 10(6), 663–670. doi:10.1161/01.STR.10.6.663
  • Sarno, M. T., Postman, W. A., Cho, Y. S., & Norman, R. G. (2005). Evolution of phonemic word fluency performance in post-stroke aphasia. Journal of Communication Disorders, 38(2), 83–107. doi:10.1016/j.jcomdis.2004.05.001
  • Schuchard, J., & Middleton, E. L. (2018). Word repetition and retrieval practice effects in aphasia: Evidence for use-dependent learning in lexical access. Cognitive Neuropsychology, 35(5-6), 271–287. doi:10.1080/02643294.2018.1461615
  • Shao, Z., Janse, E., Visser, K., & Meyer, A. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5, 772–772. doi:10.3389/fpsyg.2014.00772
  • Sharma, A., Minh Duc, N. T., Luu Lam Thang, T., Nam, N. H., Ng, S. J., Abbas, K. S., … Karamouzian, M. (2021). A consensus-based checklist for reporting of survey studies (CROSS). Journal of General Internal Medicine: JGIM, 36(10), 3179–3187. doi:10.1007/s11606-021-06737-1
  • Simos, P. G., Kasselimis, D., Potagas, C., & Evdokimidis, I. (2014). Verbal comprehension ability in aphasia: Demographic and lexical knowledge effects. Behavioural Neurology, 2014, 258303–258308. doi:10.1155/2014/258303
  • Spreen, O., & Risser, A. H. (2003). Assessment of aphasia. Oxford: Oxford University Press.
  • Strilciuc, S., Grad, D. A., Radu, C., Chira, D., Stan, A., Ungureanu, M., … Muresanu, F.-D. (2021). The economic burden of stroke: A systematic review of cost of illness studies. Journal of Medicine and Life, 14(5), 606–619. doi:10.25122/jml-2021-0361
  • Tombaugh, T. N., Kozak, J., & Rees, L. (1999). Normative data stratified by age and education for two measures of verbal fluency: FAS and animal naming. Archives of Clinical Neuropsychology, 14(2), 167–177. doi:10.1016/S0887-6177(97)00095-4

Appendices

Appendix 1. Initial survey questions and response options

Appendix 2. Weekly survey questions and response options

Appendix 3. Final survey questions and response options

Appendix 4. Weekly surveys: number of times the verbal fluency subtest was used in the past week for each week

Appendix 5. Content analysis of qualitative questions from weekly surveys with categories, subcategories and quotes

Appendix 6. Analysis of quantitative data for questions 7 and 8 for the weekly surveys