Research Article

Assessment patterns in teacher education programmes: content analysis of course syllabi


Abstract

Given the crucial role that teacher education programmes play in developing teacher-students’ assessment literacy, this study investigates all the course syllabi (n = 278) in primary and secondary-school teacher education programmes offered by three universities in Norway. This study aimed at identifying: 1) the assessment patterns, 2) the relationship between coursework and examination assessment within the course syllabi, and 3) the similarities (or differences) of assessment in teacher education programmes. The results showed a large variation in assessment patterns, especially in the two categories of coursework and examinations. A large percentage of the course syllabi required a form of written, and to a lesser degree, oral assessment. The results did not show a statistically significant relationship between the two modes of assessment in the course syllabi, i.e. coursework and examination assessments. Further, the course syllabi in the three universities were not similar in many assessment-related categories. Our study reveals that formative assessment patterns are rare, and approaches to train teacher-students in developing assessment literacy for future school practices might be missing.

Introduction

Research on assessment literacy in teacher education has traditionally focused on in-service teachers; interest in studying pre-service teachers is more recent (Fountain Citation2021). This study focuses on assessment in the courses developed for pre-service teachers (referred to here as teacher-students). Recent research has shown that the curriculum design of teacher education programmes does not equip teacher-students with the required theoretical knowledge and practical skills in assessment (e.g. Oo, Alonzo, and Asih Citation2022).

To encourage teachers to adopt formative assessment in their teaching practice, they must be given opportunities to gain experience with assessment within their teacher education programme (Brevik, Blikstad-Balas, and Engelien Citation2017; Smith Citation2021). This is what Goos and Moni (Citation2001, p. 73) described as ‘practice what we preach’: a modelling-focused instructional approach whereby teacher educators in universities prepare students, as prospective teachers, to implement criteria- and standards-based assessment in schools. In line with this need, this study explores how teacher education course syllabi at three universities in Norway prepare teacher-students in assessment literacy for their future careers.

Assessment in teacher education programmes

In the last two decades, there has been an increasing focus on school-based assessment and educational curriculum reforms, and a growing body of research has examined assessment and the evaluation of teacher education programmes at national and international levels (e.g. DeLuca and Bellara Citation2013 and Noell, Burns, and Gansle Citation2019 in the US; Volante and Xavier Citation2007, Poth Citation2013 and DeLuca, Chapman-Chin, and Klinger Citation2019 in Canada; Rezaee and Ghanbarpour Citation2016 in Iran; Ruiz and Panadero Citation2023 in Spain; Smith Citation2021 and Vattøy, Gamlem, and Rogne Citation2021 in Norway).

Research has a consensus on the importance of developing assessment literacy standards within teacher education programmes as it provides a strong foundation for future professional development (DeLuca and Klinger Citation2010). Teacher education programmes utilise a variety of approaches, such as ‘explicit, integrated, and blended assessment education models’ to meet these standards (DeLuca and Volante Citation2016, p. 20). While the explicit model introduces discrete coursework in assessment and has proven the most valuable in promoting teacher-students’ assessment literacy according to Brevik, Blikstad-Balas, and Engelien (Citation2017), the integrated approach incorporates assessment into the broader curriculum and study courses. A blended model provides both direct instruction in assessment and additional learning opportunities in general education courses.

Teacher education programmes in Norway

Teacher education in Norway has gone through five major reforms in 25 years (1992, 1999, 2003, 2010 and 2017) (Askling et al. Citation2016). Since the quality reform of 2003, higher education institutions in Norway have followed a 3 + 2 model offering 3-year bachelor’s and 2-year master’s degrees. The reform also highlighted students’ active role in teaching and learning as well as a closer connection between teaching and assessment (Dysthe and Engelsen Citation2004). Teacher education in Norway is referred to as initial teacher education (ITE): degree programmes run in partnership with schools, in which teachers are trained for primary and lower secondary schools (Poth Citation2013). In this article, however, the term teacher education is used, following Smith (Citation2021). There are three main teacher education programmes in Norway: two covering primary and lower secondary schools (Grades 1–10) and one targeting upper secondary schools (Grades 11–13). The first two are labelled Grunnskolelærarutdanning (Primary and Secondary Teacher Education, also known as GLU) and are the focus of this study. They are known, respectively, as Master in Primary Teacher Education (Grades 1–7) and Master in Lower Secondary Teacher Education (Grades 5–10).

Both are 5-year integrated master’s programmes, practice-oriented and based on research and professional knowledge. The lower-secondary programme places more emphasis on in-depth competence in fewer subjects, while the primary programme prepares teacher-students for beginner instruction and for teaching many subjects in Grades 1–7 (Education Association/Utdanningsforbundet Citation2021). Both programmes encompass 300 ECTS credits (one credit usually equates to 28 h of study) and are similar in many ways. Some subjects are mandatory in both programmes, and all teacher-students need to pass them (e.g. 60 ECTS for pedagogy and student knowledge; school practicum; 90 ECTS for the master’s thesis). Several subjects (3–4 for primary and 2–3 for lower secondary) are selected by teacher-students and are mutually exclusive (e.g. mathematics or Norwegian; English or social studies). These alternatives may vary across universities depending on a university’s finances and number of teacher-students; however, shared requirements keep the subjects similar: in both programmes, for instance, Norwegian and mathematics are obligatory.

Both education programmes encompass at least 110 days of practicum experience in schools. The practicum can be integrated in a variety of ways by the higher education institutions; thus, there is some variation in organisation, but all institutions must meet the required number of days in schools. In the practicum, the students are introduced to primary and secondary school teachers’ practices, including their assessment practices (Ministry of Education and Research Citation2018).

Formative assessment is incorporated in the national curriculum of both primary and lower-secondary school teacher education programmes. This is because assessment in primary and lower secondary schools is mainly formative, with grades only provided in secondary school. Thus, assessment in lower secondary schools requires teacher-students in this programme to gain competences related to both teacher assessment and examinations (Universitets og Høgskolerådet [University and College Council, UHR] Citation2023).

Norway, like the UK, USA and Australia, has discipline-based curricula where knowledge and content are discipline/subject specific while the surrounding framework, learning outcome descriptions, assessment regulations and validation of courses are centralised (Shulman Citation1993). UHR-Teacher Education is responsible for the national guidelines in all teacher education programmes. It gives a suggested national framework for all courses in the teacher education programmes and requires the course syllabi to be clear on ‘scope and purposes’, ‘learning outcomes’, ‘skills’, ‘general competence’, ‘work requirements’ and ‘assessment’ (Ministry of Education and Research Citation2018; University and College Council [UHR] 2023). The regulations (Ministry of Education and Research Citation2018) aim to ensure that teacher education institutions provide integrated, professionally oriented and research-based primary and lower secondary teacher education programmes of high academic quality. The education programmes must comply with the Norwegian Education Act (Act of 17 July 1998 no. 61 relating to Primary and Secondary Education and Training) and the prevailing curriculum for primary and lower secondary education and training. More specifically, courses are expected to have knowledge, skills, competence aims and learning outcomes; conditions for taking the examination including attendance and work requirements; and final examination(s) which specify mode, proportion (if there is more than one examination), duration, scope, grading, aids and assessment criteria.

Assessment in Norway’s teacher education

Assessment components of the courses in Norway’s teacher education adhere to the Norwegian Education Acts of 2006 and 2013 (Brevik, Blikstad-Balas, and Engelien Citation2017). Assessment practice in Norway is based on an integrated model in which teacher-students are expected to learn about assessment through the general national curriculum and the various study courses introduced over the 5-year integrated master’s programmes.

Brevik, Blikstad-Balas, and Engelien (Citation2017), for instance, showed how formative assessment principles were integrated into the teacher education master’s programme at the University of Oslo. Their study showed how teacher educators gradually released assessment responsibility to teacher-students in four sequences. In the first, the teacher educators integrated a variety of assessment principles and practices into their lectures, workshops and didactics seminars through guided instruction. This continued in the next two stages through tightly structured tasks and then open tasks. In the final stage, the teacher educators (along with the school supervisors) assessed teacher-students’ use of assessment in their school teaching, examination tasks and research reports. The majority of assessment was formative and took place at schools, while the university assessment components were more summative. Teacher-students used many of the principles to design subject lessons in their school practice. However, they were more concerned with assessing the school students than with using those principles to improve their own instruction and self-assessment.

The emphasis on developing assessment literacy in teacher education through both universities and schools in Norway was reinforced in 2006 with the implementation of the educational reform ‘Knowledge Promotion’ (Tveit Citation2014). The new framework plan required assessment for learning (AfL) as one of the competencies the teachers were expected to have gained upon graduation (Nusche et al. Citation2011). The Norwegian Directorate for Education and Training (NDET) assigned and funded (until 2011) a national group representing teachers and researchers from higher education (the Norwegian Network for Student and Apprentice Assessment [NELVU]) to build capacity regarding student assessment within schools and universities.

An output was that teacher training institutions were able to assign assessment experts within the institution to work with faculty and enhance assessment literacy (Nusche et al. Citation2011). This strategy, however, seemed to fail, since a planned cascade model with an increasing number of experts did not grow, and NELVU ended in 2012 when the funding ended. Two main professional development projects (along with many local projects) were implemented to contribute to an assessment culture in schools: the Better Assessment Practices project (2007–2009), and its follow-up project AfL (2010–2018) (Nusche et al. Citation2011; NDET Citation2018). The national development programme on AfL included schools from 1st to 13th grade and involved a selected group of researchers and staff from institutions with teacher education programmes in Norway, in addition to international experts, who were invited to contribute as lecturers and process supervisors (NDET Citation2018). While these projects demonstrated a move towards an enhanced AfL culture in Norway’s schools, a lack of assessment literacy among teacher education academic staff has been problematised in policy documents (e.g. Blömeke and Olsen Citation2019) and confirmed by research. Despite teacher-students’ reported formative assessment practices, an ‘underlying summative assessment culture’ seems to be dominant (e.g. Brevik, Blikstad-Balas, and Engelien Citation2017; Vattøy, Gamlem, and Rogne Citation2021, p. 2334).

This study aims to identify assessment patterns in Norway’s teacher education programmes. We conduct a content analysis of the course syllabi that teacher-students are introduced to at three higher education institutions. This study answers the following research questions (RQs):

  1. What are common assessment patterns (examinations and coursework assignments) in Norway’s teacher-education programmes?

  2. What is the relationship between coursework and examinations in the course syllabi?

  3. Are course syllabi in Norway’s teacher-education programmes similar in terms of assessment across different higher education institutions?

Methodology

Methods

This study aimed to identify assessment patterns through content analysis of all the course syllabi (N = 278) offered in two teacher education programmes (primary and lower-secondary school, Grunnskolelærarutdanning) at three universities in Norway in the academic year 2022–2023. This project is in line with the programme approach of the Transforming the Experience of Students through Assessment project (TESTA, 2009–12 Citation2023), which evaluated programmes through triangulated data and by attending to the ‘sequence, timing, proportions and variety of assessment tasks across modules’ (Jessop, Hakim, and Gibbs Citation2014, p. 74).

Recruitment – sample

To explore the assessment patterns in Norway’s teacher education programmes this study was conducted in three phases: data collection, content analysis and finally inter-rater reliability and re-coding.

In the first phase, in September 2022, all the universities and university colleges in Norway with GLU programmes (Grades 1–7 and 5–10) were identified. From the fifteen higher education institutions that offered GLU study programmes (Finne et al. Citation2017), three were selected through purposeful criterion sampling. Table 1 provides an overview of these programmes. The aim was to choose three programmes that were geographically spread across Norway and representative of both universities and university colleges. The links to all the course syllabi offered by both ITE programmes at these institutions were collected in an Excel file, into which the researchers entered emerging codes simultaneously during Phase 2.

Table 1. Descriptive statistics of the sample.

In the second phase, conventional content analysis was employed: a qualitative document analysis approach where coding categories are derived from the text data instead of theories or other researchers’ findings (Armstrong Citation2021). The key categories related to assessment from course syllabi in three higher education contexts were extracted. They consisted of course code, university, subject, study level, examination semester, credits, type of coursework assessment, type of examination, grading scale, examination pre-requisite conditions and type of support material for the examinations. The constant comparative method (Bogdan and Biklen Citation2007) was employed since analysis of more course syllabi resulted in constantly emerging codes under these categories. We chose this open coding approach due to the large variation we encountered in the type of coursework assessments and examinations. Coding and analysis of syllabi were simultaneously conducted by both researchers in a shared Excel document and codebook. This stage took three months and the researchers met four times to discuss the codes and agree on the code book.

In the third stage, we estimated the inter-rater reliability of the coding. A 10% sample of the courses (n = 28; every 10th course syllabus) was selected and re-coded by the other researcher. The Kappa measure of agreement was statistically significant for most of the categories, such as subject, study level, examination semester, credits and support materials for examinations. The researchers discovered that the non-significant Kappa index for coursework assessment types and examination patterns was not due to inconsistent coding but mainly because the online course syllabi at one of the universities had been revised during the three-month coding phase. The researchers therefore re-coded the three categories in all the syllabi at the end of the semester.
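The article does not state which software computed the Kappa agreement (SPSS is named only for the main analyses), but the statistic itself is simple enough to sketch. The following minimal Python sketch computes Cohen’s kappa for two raters; the examination-type codes for ten syllabi are invented purely for illustration.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    # Expected agreement if both raters coded independently at their base rates
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical examination-type codes for ten re-coded syllabi
rater_1 = ["WE", "OE", "WE", "home", "portfolio", "WE", "OE", "home", "WE", "portfolio"]
rater_2 = ["WE", "OE", "WE", "home", "portfolio", "WE", "WE", "home", "WE", "portfolio"]
print(round(cohen_kappa(rater_1, rater_2), 3))  # → 0.857
```

With nine agreements out of ten and an expected chance agreement of 0.3, kappa comes to roughly 0.86; a revision of the underlying syllabi mid-coding, as happened in this study, would depress agreement for reasons unrelated to rater consistency.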

Data analysis

SPSS version 29 (IBM SPSS Statistics Citation2022) was used to analyse the data. To answer RQ1, the frequencies of all the codes extracted from the content analysis of the syllabi were used (Tables 2 and 3). For RQ2, exploring the association between two categorical variables (i.e. coursework assessment and examination patterns), the Chi-square test for independence and Cramér’s V were calculated (Table 4). For RQ3, the multivariate Kruskal–Wallis test was used (Tables 5 and 6).

Table 2. Coursework assessment patterns derived from course plans (n = 278).

Table 3. Assessment/examination patterns.

Table 4. Symmetric measures, Chi-square test of independence.

Table 5. Kruskal–Wallis test.

Table 6. Test statistics Kruskal–Wallis H across the three universities.

Table 7. Mann–Whitney U-test statistics.
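The RQ2 statistics were run in SPSS; as a rough sketch of what they compute, both the Chi-square statistic and Cramér’s V can be derived directly from a contingency table. The 2 × 2 table below is invented for illustration only; the study’s actual cross-tabulation of coursework and examination patterns is far larger (df = 323).

```python
import math

def chi_square(table):
    """Pearson chi-square statistic for a contingency table (rows of counts)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

def cramers_v(table):
    """Cramér's V: chi-square rescaled to a 0-1 measure of association."""
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0])) - 1
    return math.sqrt(chi_square(table) / (n * k))

# Hypothetical counts: rows = coursework pattern (WA vs. other),
# columns = examination pattern (written vs. other)
table = [[9, 4], [6, 11]]
print(round(cramers_v(table), 3))  # → 0.336
```

A Cramér’s V around 0.3, as reported in this study, indicates only a moderate association, which here did not reach statistical significance.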

Results

The analysis of 278 syllabi resulted in a long list of codes. Appendix I (Online Supplementary Materials) provides an overview of both primary and lower-secondary programmes, showing the frequency of codes. A large variety of subjects (n = 21) were introduced in both programmes; mathematics, Norwegian, English and pedagogy (44.8% in total) were the most frequent. No courses taught assessment purely and explicitly; instead, assessment was integrated into different subjects, and teacher-students’ assessment literacy is expected to develop over the five years through all subjects. Most subjects ran in one semester (43.5% in autumn, 52.5% in spring and only 4% over the whole year): 70.9% of them were 15-credit courses, with 30-credit courses (master’s thesis courses) the next most common. Most of the courses (78.4%) required the teacher-students to have their work requirements approved by the course leader, in addition to attending a specified percentage (usually 70–80%) of the organised activities and lectures; 93.5% of the course syllabi used an A–F grading scale and only 2.5% assigned a binary complete/incomplete grading; 4% of the syllabi had missing grading information and 77.7% lacked information about the support materials allowed for the examinations.

Tables 2 and 3 illustrate the common coursework and examination patterns (RQ1). The results revealed a large variation in the two categories of coursework assessment patterns (N = 77) and examination types (N = 34): 26.7% of coursework assessment patterns and 8.8% of examination types appeared in only one or two course syllabi. These patterns are labelled as others. While some syllabi required only one coursework assignment (e.g. 2.2% of syllabi required an individual written assignment [WA]), most required two (indicated by the plural s, as in WAs in Table 2) or more (indicated by several). The majority of course syllabi employed a combination of several assessment types, indicated by + in Tables 2 and 3.

Missing information was the most noticeable finding in three categories: support materials for the examinations, coursework assessment and examination patterns. 77.7% of the syllabi lacked information about which support materials were allowed in the examinations; 29.1% (Table 2) did not reveal any information on the expected coursework assessments; and 2.9% (Table 3) lacked information about the examination at the end of the course. Missing information was also registered for examination patterns and was the most frequent pattern in coursework assessments: 81 of the 278 syllabi either gave no explicit information about the expected coursework assessments or referred the students to the semester plan for more specific information. This finding applied especially to the syllabi at university 3, where information about the coursework was missing in 87 of 138 syllabi. One of its syllabi in the subject English, for instance, read ‘Approved mandatory work assignments that appear from the overview announced at the start of the semester’. When re-coding the course syllabi, the researchers noticed other changes. For example, a history course, ‘Norway and the world in the twentieth century’, changed the prerequisite conditions for taking the final examination: it replaced several WAs during the course with an entrance examination towards the end. The students were only allowed to sit the 6-h written final examination if they passed that entrance examination.

From Table 2, it is apparent that the WA is a frequently used pattern in Norway’s teacher education. It is usually followed by an oral presentation, either as one written task (n = 24) or a set of written tasks (n = 8); 19 courses required two or more written tasks and 13 course syllabi required WAs as a portfolio submission.

The frequency analysis of examination patterns in the course syllabi of the three academic contexts showed results similar to the coursework assessment patterns. A wide range of examination patterns was observed (n = 34); however, the most frequent were written (12.9%), oral (18%) and home examinations (12.6%) (Table 3). Portfolio examinations were also frequently used, either independently or along with an oral adjusting examination (7.6% and 4.7%, respectively). In an oral adjusting examination, a grade is assigned to the student’s written work before the student is called in for an oral examination (OE). The written and oral performance together count as the student’s overall assessment; in other words, the student’s performance in the OE can adjust the portfolio grade to a lower or higher grade. While some courses utilised written and oral assignments together (8.6% for WA + OP and 2.9% for WAs + OP in Table 2), fewer course syllabi had final examinations encompassing both modes (4.7% for OAE and 4% for WE + OE in Table 3).

A significant observation in this study was course syllabi that gave students a choice regarding the mode and use of multimodal texts (i.e. digital, oral or written) or the number of students involved (i.e. group, paired or individual). Although the number of such courses was too small to be statistically meaningful, and they are not listed in the tables here, this was observed in both categories of coursework assessment and examination patterns. For instance, ‘Norwegian Didactic: Aesthetic learning processes in Norwegian’ was a course that required the students, for the coursework assignments, to plan and conduct a lesson based on existing research and literature. It required the students to accompany it with a reflection note and a written text, which could be multimodal, digital or even in the form of an oral presentation.

As another example of creative choice in examination patterns, a master’s thesis syllabus in ‘arts and crafts didactics’ allowed the students to choose between two main examination forms with various modes: either an individual written thesis plus a practical examination, or an individual or paired written thesis. In the first form, every student was assessed in three stages: a written reflection text (16,000 words), submitted three weeks before the display of the artwork, counted for 30%; the artwork display counted for 40%; and a 15-min public oral and visual presentation counted for the remaining 30%. In the second examination form, the students could choose to write their thesis individually or in pairs, with a corresponding rise in page numbers and word count (70%), followed by a similar public oral and visual presentation (30%). In both examination forms, the final presentation was followed by a conversation with the examination committee. While this syllabus, for a master’s thesis course, gave detailed and specific information about the coursework assessment and various forms of examination, the information in many syllabi for similar subjects (N = 19) was broad and inadequate (see Patterns 5 and 6 in Table 3).

For RQ2, the results of the Chi-square test of independence showed that the assumption of minimum expected cell frequency was violated, even though patterns with frequencies of less than 1% were grouped under one code. The results of Cramér’s V and the contingency coefficient test (Cramér’s V = 0.365, N = 278, df = 323) revealed that the association between the examination forms and coursework assessment patterns was not statistically significant (Table 4).

Appendix II (Online Supplementary Materials) illustrates cross-tabulation of coursework assessment and the examination patterns in the syllabi analysed in this study. The highest frequency is 9 and relates to the number of syllabi with WAs for coursework assessment and a written examination at the end of the semester. Master thesis courses show more associations between coursework and final assessment. Appendix II also illustrates many course syllabi that did not show a similar assessment pattern during and at the end of the course. For instance, four syllabi with WAs during the course assessed the students’ oral skills through an OE at the end.

For RQ3, the results (Tables 5 and 6) revealed a statistically significant difference in all four categories (coursework assessment, examination patterns, support materials for examinations and examination prerequisite conditions) across the three institutions (Uni 1, n = 73; Uni 2, n = 67; Uni 3, n = 138). The lowest mean rank in all the categories belonged to the first institution (78.26, 104.01, 111.53 and 90.01, respectively), and the highest mean rank in coursework assessment and support materials for examinations belonged to Uni 3.

The Mann–Whitney U-test was used for pairwise comparisons to identify the sources of these differences (Table 7). It revealed a significant difference in coursework assessment among all three universities. The difference was similarly observed in six comparisons for the other three categories. As Table 7 shows, in the categories of examination patterns, support materials for examinations and examination prerequisite conditions, only three comparisons showed similarities across the three universities: university 1 allowed examination support materials similar to those of university 2 (sig. 0.711 at the 0.001 level), and university 2 syllabi were similar to university 3 in both examination patterns and examination prerequisite conditions (sig. 0.060 and 0.347 at the 0.001 level).
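The mean-rank figures reported above come from SPSS’s rank-based tests. To make the underlying computation concrete, here is a minimal pure-Python sketch of the Kruskal–Wallis H statistic (without tie correction); the three small groups of ordinal codes are invented for illustration, and real use would then compare H against a chi-square distribution with k − 1 degrees of freedom.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H (no tie correction) for a list of samples."""
    pooled = sorted(v for g in groups for v in g)
    # Average rank for each value (ties share the mean of their rank run)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    h = sum(sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

# Three perfectly separated groups give the maximum H for n = 9
print(round(kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), 3))  # → 7.2
```

The follow-up Mann–Whitney U-tests reported in Table 7 apply the same ranking idea to one pair of institutions at a time.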

Overall, the results revealed huge variations in the assessment patterns across the course syllabi in the three universities’ teacher education programmes. While this variation seemed more significant during the course and in the coursework and assignments, some similarities were observed in final examinations, support materials and examination prerequisite conditions.

Discussion

This study set out to explore the assessment patterns in Norway’s teacher education programmes based on the content analysis of all the syllabi in three universities (RQ1). The relationship between coursework and examination assessment and the comparison of the three universities with respect to assessment categories were the aims of the next two RQs. The results for RQ1 showed a large variation in assessment patterns, especially in coursework and examinations. A substantial percentage of syllabi required a form of written, and to a lesser degree oral, assessment in coursework assignments and final examinations.

Group-based coursework assignments and examinations, in which, according to Gibson et al. (Citation2018), teacher-students have more chances to peer-assess and to develop professional skills such as teamwork and leadership, were a noticeably missing pattern in the courses analysed in this study. Portfolio assessment and adjusting oral or written examinations were, on the other hand, visible patterns, particularly at the end of the courses. The rapid spread of ‘learning and assessment portfolios’ in Norway’s higher education and teacher education programmes is what Dysthe and Engelsen (Citation2004, p. 240) referred to two decades ago. It seems that portfolios are still a common assessment pattern in courses; however, they are assessed more linearly. Even though portfolios may be generated through process writing of coursework assignments, the teacher-students submit them as the final product of the course. They seem to be graded with no formative, continuous and iterative feedback linking them to other courses or to the programme. This is what Brevik, Blikstad-Balas, and Engelien (Citation2017) reported when analysing assessment practices in the teacher education programme at the University of Oslo: assessment of finished products (examinations, texts and oral presentations) was overemphasised, while learning to give feedback at different stages of an activity and to use self-assessment to improve one’s teaching was missing. Dysthe and Engelsen (Citation2004) argued that digital portfolio assessment, when used along with reflective texts and self-assessment, has immense potential to develop teacher-students’ professional identity; however, their theory-based study of two universities in Norway showed that the potential of these modes of assessment was not fully utilised.

The enormous variation of assessment patterns found in this study is in line with Jessop, Hakim, and Gibbs (Citation2014). Although their primary aim was to evaluate students’ learning from assessment, and their findings emerged from the analysis of survey data and interviews/audits with students and programme leaders rather than course syllabi, their results also showed huge variation in assessment patterns and feedback practices both within and across disciplines/subjects. Consistent with the findings of this study, they reported ‘patterns of high summative and low formative assessment’ in a proportion of three to two (p. 79), and a high quantity of written feedback versus a moderate quantity of oral feedback. The researchers attributed the variation to the wide range of programmes; however, considering the centralised quality assurance frameworks in the UK since the 1980s (Becher and Trowler Citation2001) and Norway’s quality reform since 2003, such variation remains a significant finding in both studies.

The argument that huge variations in assessment patterns impede students’ learning when there is no programme-wide balance is supported by researchers such as Boud and Falchikov (Citation2006), Richardson (Citation2015) and Gibson et al. (Citation2018). The prevalent practice of modular approaches to assessment in higher education adds to the need for constructive alignment between coursework assessment and end-of-term examinations. Norway’s teacher education programmes similarly need a shift away from the traditional regime of examination and coursework assessment (Dysthe and Engelsen Citation2004). Our study reveals that end-of-course examinations are the main pattern in the teacher education courses at the studied universities.

The results for RQ2 revealed no statistically significant association between the examination forms and coursework assessment patterns within the courses. This echoes what TESTA’s reports (Jessop et al. Citation2012, p. 5) described as the lack of ‘clear sequencing and progression of assessment’. Assessment in Norway’s teacher education is compartmentalised, with summative grades at the end of the courses and (formative) feedback on specific coursework assignments within the courses.

Our results confirmed what Boud (Citation2000, p. 151) found to be missing in higher education, namely ‘sustainable assessment’. Current assessment practices do not prepare students for their long-term learning requirements and future life. Higher education places more weight on the examination than on coursework assessment, and high-quality formative assessment practices and life-long assessment are not adequately utilised. The different patterns of assessment in the examinations and coursework assessments in this study constitute a significant finding when research on life-long learning is reviewed. The aim in primary schools is to provide formative assessment as ‘the basis for adapted education’, while higher education does not equip teacher-students with these skills and uses end-of-course examinations as ‘a means for certification or selection’ (Nusche et al. Citation2011, p. 50).

Compared with end-of-module examination-based assessment, coursework assessment has been found to target a wider range and a different set of cognitive skills (Johnston Citation1994). Coursework marks have also shown a higher correlation with students’ long-term learning (Bridges et al. Citation2002). When these findings are set against the findings of this study, we see a crucial need in Norway to move towards long-term assessment in which formative assessment of coursework assignments is viewed as an integral part of the process and is aligned with the other forms of assessment in the whole course. This is the shift that Richardson (Citation2015) observed in UK higher education, where end-of-module assessment by examinations shifted to end-of-module assessment by coursework.

Our study revealed that the teacher education course syllabi in the three universities were not similar in many assessment-related categories (RQ3). Students do not have the chance to master a particular pattern before encountering new and varied forms of assessment in their programme. The need for teacher-students’ assessment literacy in Norway is thus still significant. Opportunities to strengthen this literacy through obligatory and/or elective assessment courses appear to be missing in Norway’s teacher education programmes. Fountain (Citation2021) highlighted this need and showed how introducing a 45-day practical course could develop classroom assessment for learning (AfL) skills. Assessment literacy is enhanced, according to DeLuca and Klinger (Citation2010), when the course syllabi in the programmes are more specific and clearer in terms of assessment.

The results of the second round of coding in this study, however, revealed that information was missing, particularly in one of the universities and in the two main categories of coursework assignments and examination prerequisite conditions. Some syllabi omitted the detailed information about these two categories and referred students to local documents, i.e. the semester plan, which would be published only once the semester started. This change gave the teacher more power as a gatekeeper in assessing students’ performance in the final examination. Many of the syllabi in this university had become more general documents, giving course coordinators more autonomy and flexibility in assessment decisions.

The national regulations relating to the curriculum (The Higher Education Act Citation2005, § 4–2) explicitly state that teacher education programmes must prepare their course syllabi in collaboration with the students, and that assessment schemes, among other aspects, are part of this. In addition, it is stated that the syllabi must be adopted by the institution’s board. At the same time, the Act gives each teacher education institution a certain level of autonomy over how assessment should be conducted. National guidelines on coursework assignments, examinations and examination prerequisite conditions might give course developers, coordinators and teachers more flexibility and autonomy in running courses. However, clarity of assessment procedures in teacher education courses is a necessity and an approach that research (e.g. DeLuca and Klinger Citation2010; DeLuca and Volante Citation2016) has shown to contribute to teachers’ assessment literacy.

Conclusion and implications

The huge variations identified across the three universities in the coursework and examination assessment patterns in the syllabi of teacher education programmes indicate the crucial role that explicit and blended models of assessment education play in enhancing the assessment literacy of course developers, teacher educators and teacher-students. An AfL culture in primary schools will be enhanced on a large scale only when all involved in teacher education programmes and schools (i.e. programme principals, teacher educators, teacher-students, schoolteachers and students) become assessment literate (Engelsen and Smith Citation2014). This culture is not easy to achieve when teacher-students are expected to become assessment literate through courses that assess them in a fragmented, compartmentalised way; courses that, according to Brevik, Blikstad-Balas, and Engelien (Citation2017), do not adequately integrate assessment at the university with what teacher-students are expected to assess in practice and in their future careers at schools; and courses that Jessop, El Hakim, and Gibbs (Citation2014) and Jessop and Maleckar (Citation2016) see as modular, resulting in students’ fragmented and disconnected experience of assessment. One implication of this study is the need for explicit assessment courses in which assessment literacy is developed with the contribution of all the agents in teacher education and at the programme level.

In addition to the variations across universities, assessment patterns varied considerably both within and across courses. The analysis of course syllabi in this study showed differences in coursework and examination assessments. Variation within a course can result in a one-off occurrence of each pattern and reduce students’ chances of experiencing an assessment pattern in depth, formatively, and in both theory and practice. Teacher-students’ assessment at the end of a course appeared to cover competencies that they had not mastered during the course, and the mode or pattern of assessment in the final examination was not always one they had experienced before.

Variation in assessment patterns is not necessarily destructive if assessment practices are well integrated in the whole programme and the various courses link them together towards more life-long, authentic learning. What currently seems to be a challenge for teacher-students is that they encounter various patterns within and across different courses. The findings suggest the need for explicit assessment courses that educate teacher-students about how these patterns should be utilised formatively in practice and in the lessons and tasks that they design. Through these courses, teacher-students would have a chance to share their experiences and internalise what they did not practise adequately in their own courses. Such courses could also provide an opportunity for teacher educators to collaboratively enhance their assessment literacy.

Another area that needs improvement in teacher education courses is the assessment of all the competencies that teacher-students will need in schools. Current courses seem to focus more on written and spoken skills in assessment and neglect many other skills, such as peer feedback, collaborative assessment, classroom management and teachers’ emotional support. The new school curriculum (NDET Citation2020) places more focus on formative assessment (self-regulation and self-assessment in particular) along with cross-disciplinary subjects; however, the analysed course syllabi in teacher education do not clearly reflect these shifts.

While there is more focus on formative assessment in some policy documents, others do not align with this. For instance, Norway’s Ministry of Education and Research (Citation2018) has developed a long list of competences that teacher-students must document at the end of the primary and lower-secondary teacher education programmes. Smith (Citation2013, p. 230) warns of the danger of assessing teachers against standardised criteria in this way, whereby ‘assessment becomes technical, and pedagogical aspects of assessment are replaced by an increasing demand for external control’. Some policy documents thus seem to contribute to modular assessment rather than programme coherence. While shifting back to the traditional standardised practice of assessment is not the solution, raising the awareness of policy makers, teacher educators and course developers about the threats posed by current trends would pave the way for formative assessment to be more fully integrated into courses and teacher education programmes.

The findings reveal a gap in assessment literacy in teacher education, one that can be remedied by building the knowledge and awareness of policy makers, programme principals, course developers and teacher educators.

Supplemental material


Disclosure statement

The authors report there are no competing interests to declare.

Additional information

Funding

This work was not supported by any funding agencies or grants.

Notes on contributors

Elaheh Tavakoli

Elaheh Tavakoli is Associate Professor in Teacher Education and Teaching of English at the Faculty of Humanities and Teacher Education, Volda University College, Norway. Her research interests include assessment and feedback, the teaching and learning of language skills, and questionnaire development and validation studies.

Siv M. Gamlem

Siv M. Gamlem is Professor in Pedagogy at the Faculty of Humanities and Teacher Education, Volda University College, Norway. Her research interests include assessment, feedback, learning processes, professional development, systematic observation, and learning and teaching in digital environments and AIEd.

References

  • Armstrong, C. 2021. “Key methods used in qualitative document analysis.” (December 29, 2021). doi:10.2139/ssrn.3996213.
  • Askling, B., T. Dahl, K. Heggen, L. I. Kulbrandstad, T. Lauvdal, S. Mausethagen, L. Qvortrup, et al. 2016. Om Lærerrollen. Et Kunnskapsgrunnlag. Bergen: Fagbokforlaget. https://oda.oslomet.no/oda-xmlui/bitstream/handle/20.500.12199/3010/Om%20laererrollen.pdf?sequence=6&isAllowed=y.
  • Becher, T., and P. Trowler. 2001. Academic Tribes and Territories. England: McGraw-Hill Education.
  • Blömeke, S., and R. V. Olsen. 2019. “Consistency of Results regarding Teacher Effects across Subjects, School Levels, Outcomes and Countries.” Teaching and Teacher Education 77: 170–182. doi:10.1016/j.tate.2018.09.018.
  • Bogdan, R., and S. K. Biklen. 2007. Qualitative Research for Education: An Introduction to Theory and Methods. Hoboken, NJ: Pearson A & B.
  • Boud, D. 2000. “Sustainable Assessment: Rethinking Assessment for the Learning Society.” Studies in Continuing Education 22 (2): 151–167. doi:10.1080/713695728.
  • Boud, D., and N. Falchikov. 2006. “Aligning Assessment with Long‐Term Learning.” Assessment & Evaluation in Higher Education 31 (4): 399–413. doi:10.1080/02602930600679050.
  • Brevik, L. M., M. Blikstad-Balas, and K. L. Engelien. 2017. “Integrating Assessment for Learning in the Teacher Education Programme at the University of Oslo.” Assessment in Education: Principles, Policy & Practice 24 (2): 164–184. doi:10.1080/0969594X.2016.1239611.
  • Bridges, P., A. Cooper, P. Evanson, C. Haines, D. Jenkins, D. Scurry, H. Woolf, and M. Yorke. 2002. “Coursework Marks High, Examination Marks Low: Discuss.” Assessment & Evaluation in Higher Education 27 (1): 35–48. doi:10.1080/02602930120105045.
  • DeLuca, C., and A. Bellara. 2013. “The Current State of Assessment Education: Aligning Policy, Standards, and Teacher Education Curriculum.” Journal of Teacher Education 64 (4): 356–372. doi:10.1177/002248711348814.
  • DeLuca, C., and D. A. Klinger. 2010. “Assessment Literacy Development: Identifying Gaps in Teacher Candidates’ Learning.” Assessment in Education: Principles, Policy & Practice 17 (4): 419–438. doi:10.1080/0969594X.2010.516643.
  • DeLuca, C., and L. Volante. 2016. “Assessment for Learning in Teacher Education Programs: Navigating the Juxtaposition of Theory and Praxis.” Journal of the International Society for Teacher Education 20 (1): 19–31. https://eric.ed.gov/?id=EJ1177153
  • DeLuca, C., A. Chapman-Chin, and D. A. Klinger. 2019. “Toward a Teacher Professional Learning Continuum in Assessment for Learning.” Educational Assessment 24 (4): 267–285. doi:10.1080/10627197.2019.1670056.
  • Dysthe, O., and K. S. Engelsen. 2004. “Portfolios and Assessment in Teacher Education in Norway: A Theory‐Based Discussion of Different Models in Two Sites.” Assessment & Evaluation in Higher Education 29 (2): 239–258. doi:10.1080/0260293042000188500.
  • Education Association/Utdanningsforbundet. 2021. “Five-year teacher training and teaching competence.” Accessed February 20, 2024. https://www.utdanningsforbundet.no/medlemsgrupper/universitet-og-hogskole/ny-master-i-grunnskolelarerutdanning-og-undervisningskompetanse/
  • Engelsen, K. S., and K. Smith. 2014. “Assessment literacy.” Designing Assessment for Quality Learning, 91–107. Dordrecht, the Netherlands: Springer Netherlands. 10.1007/978-94-007-5902-2_6.
  • Finne, H., A. D. Landmark, S. Mordal, and E. F. Ullern. 2017. “R&D in teacher education milieus. A descriptive mapping of research and development in milieus that educate teachers for primary and lower secondary schools in Norway (GLU).” SINTEF report A28156. http://hdl.handle.net/11250/2446882
  • Fountain, J. R. 2021. “Pre-service English Teachers’ Assessment Literacy: The Influence of a Practical Work Placement.” Master’s thesis. University of Oslo. http://urn.nb.no/URN:NBN:no-90965.
  • Gibson, A., R. Yerworth, M. D. P. Garcia Souto, J. Griffiths, and G. Hughes. 2018. “Co-ordinating assessment across a programme.” International Symposium of Engineering Education 2018. https://discovery.ucl.ac.uk/id/eprint/10062995/
  • Goos, M., and K. Moni. 2001. “Modelling Professional Practice: A Collaborative Approach to Developing Criteria and Standards-Based Assessment in Pre-Service Teacher Education Courses.” Assessment & Evaluation in Higher Education 26 (1): 73–88. doi:10.1080/02602930020022291a.
  • IBM SPSS Statistics. 2022. Macintosh, Version 29.0. [Computer software]. IBM SPSS Statistics.
  • Jessop, T., Y. El Hakim, and G. Gibbs. 2014. “The Whole is Greater than the Sum of Its Parts: A Large-Scale Study of Students’ Learning in Response to Different Programme Assessment Patterns.” Assessment & Evaluation in Higher Education 39 (1): 73–88. doi:10.1080/02602938.2013.792108.
  • Jessop, T., Y. El Hakim, G. Gibbs, P. Hyland, D. Reavey, and I. Scott. 2012. “NTFS project final report, TESTA (2009–2012).” https://www.academia.edu/15601506/NTFS_Project_Final_Report_TESTA_2009_12_
  • Jessop, T., and B. Maleckar. 2016. “The Influence of Disciplinary Assessment Patterns on Student Learning: A Comparative Study.” Studies in Higher Education 41 (4): 696–711. doi:10.1080/03075079.2014.943170.
  • Johnston, R. J. 1994. “Resources, Student: Staff Ratios and Teaching Quality in British Higher Education: Some Speculations Aroused by Jenkins and Smith.” Transactions of the Institute of British Geographers 19 (3): 359–365. doi:10.2307/622328.
  • Ministry of Education and Research. 2018. “Nasjonale retningslinjer for grunnskolelærerutdanningen, trinn 1–7 [National guidelines for primary and lower secondary teacher education, years 1–7].” https://www.uhr.no/_f/p1/ibda59a76-750c-43f2-b95a-a7690820ccf4/revidert-171018-nasjonale-retningslinjer-for-grunnskolelarerutdanning-trinn-1-7_fin.pdf
  • Noell, G. H., J. M. Burns, and K. A. Gansle. 2019. “Linking Student Achievement to Teacher Preparation: Emergent Challenges in Implementing Value Added Assessment.” Journal of Teacher Education 70 (2): 128–138. doi:10.1177/0022487118800708.
  • NDET (Norwegian Directorate for Education and Training). 2020. Læreplan, [Curriculum]. Established as regulations. The National curriculum for the Knowledge Promotion 2020. https://sokeresultat.udir.no/finn-lareplan.html?fltypefiltermulti=Kunnskapsl%C3%B8ftet%202020&spraakmaalform=Engelsk
  • NDET (The Norwegian Directorate for Education and Training). 2018. “Observations on the national assessment for learning programme (2010–2018). Skills development in networks, final report.” https://www.udir.no/tall-og-forskning/finn-forskning/rapporter/Kunnskapsgrunnlag-for-evaluering-av-eksamensordningen/.
  • Nusche, D., L. Earl, W. Maxwell, and C. Shewbridge. 2011. “OECD reviews of evaluation and assessment in education, Norway.” https://www.oecd.org/norway/48632032.pdf.
  • Oo, C. Z., D. Alonzo, and R. Asih. 2022. “Acquisition of Teacher Assessment Literacy by Pre-Service Teachers: A Review of Practices and Program Designs.” Issues in Educational Research 32 (1): 352–373. doi:10.3316/informit.475861785399488.
  • Poth, C. 2013. “What Assessment Knowledge and Skills Do Initial Teacher Education Programs Address? A Western Canadian Perspective.” Alberta Journal of Educational Research 58 (4): 634–656. doi:10.11575/ajer.v58i4.55670.
  • Rezaee, A. A., and M. Ghanbarpour. 2016. “The Status Quo of Teacher-Training Courses in the Iranian EFL Context: A Focus on Models of Professional Education and Dynamic Assessment.” International Journal for 21st Century Education 3: 89–120. doi:10.21071/ij21ce.v3iSpecial.5710.
  • Richardson, J. T. E. 2015. “Coursework versus Examinations in End-of-Module Assessment: A Literature Review.” Assessment & Evaluation in Higher Education 40 (3): 439–455. doi:10.1080/02602938.2014.919628.
  • Ruiz, J. F., and E. Panadero. 2023. “Assessment Professional Development Courses for University Teachers: A Nationwide Analysis Exploring Length, Evaluation and Content.” Assessment & Evaluation in Higher Education 48 (4): 485–501. doi:10.1080/02602938.2022.2099811.
  • Shulman, L. S. 1993. “Teaching as Community Property: Putting an End to Pedagogical Solitude.” Change: The Magazine of Higher Learning 25 (6): 6–7. doi:10.1080/00091383.1993.9938465.
  • Smith, K. 2013. “Formative Assessment of Teacher Learning: Issues about Quality, Design Characteristics and Impact on Teacher Learning.” Teachers and Teaching 19 (2): 228–234. doi:10.1080/13540602.2013.741835.
  • Smith, K. 2021. “Educating Teachers for the Future School-the Challenge of Bridging between Perceptions of Quality Teaching and Policy Decisions: Reflections from Norway.” European Journal of Teacher Education 44 (3): 383–398. doi:10.1080/02619768.2021.1901077.
  • TESTA 2009–12. 2023. “Transforming the experience of students through assessment.” Higher Education Academy, National Teaching Fellowship Project. Accessed 20 July 2023. www.testa.ac.uk
  • The Higher Education Act. 2005. “Act on Universities and Colleges.” (LOV-2005-04-01-15). https://lovdata.no/dokument/NL/lov/2005-04-01-15/KAPITTEL_1-4#KAPITTEL_1-4
  • Tveit, S. 2014. “Educational Assessment in Norway.” Assessment in Education: Principles, Policy & Practice 21 (2): 221–237. doi:10.1080/0969594X.2013.830079.
  • University and College Council (UHR). 2023. “National Guidelines for Teacher Education.” Accessed September 2023. https://www.uhr.no/temasider/nasjonale-retningslinjer/nasjonale-retningslinjer-for-larerutdanningene/
  • Vattøy, K., S. M. Gamlem, and W. M. Rogne. 2021. “Examining Students’ Feedback Engagement and Assessment Experiences: A Mixed Study.” Studies in Higher Education 46 (11): 2325–2337. doi:10.1080/03075079.2020.1723523
  • Volante, L., and F. Xavier. 2007. “Exploring Teacher Candidates’ Assessment Literacy: Implications for Teacher Education Reform and Professional Development.” Canadian Journal of Education / Revue Canadienne de L’éducation 30 (3): 749–770. https://eric.ed.gov/?id=EJ780818. doi:10.2307/20466661.