Marks, and their proportioning between examinations, tutorials and an assignment, as determinants of the performance of nursing students in a pharmacology course

Abstract

Background
Students in programs with higher proportions of marks allocated to ongoing assessment have higher overall marks than students in programs with a lower proportion allocated to ongoing assessment. Little or no attention has been paid to how this allocation affects the academic success of students in individual courses.
The purpose of this study was to determine how the allocation of marks to examinations, tutorials and an assignment affects the performance of nursing students in a pharmacology course.

Methods
For students who passed a pharmacology course, (i) the marks for examinations and ongoing assessment (tutorials and/or an assignment) were compared, and (ii) regression line and correlation analyses were undertaken to determine any association between these marks. In addition, for completing students, modelling was undertaken to determine the effects of changing the allocation of marks on passing and failing rates.

Results
Nursing students who passed a pharmacology course obtained significantly lower marks in examinations than in ongoing assessment and, within the ongoing assessment, lower marks in the assignment than in tutorials. Regression line analysis showed that marks in ongoing assessment (tutorials and/or the assignment) plotted against examination marks gave a poor line fit. The correlation coefficients between ongoing assessment and examinations were weak to moderate. A high percentage of students passed the course (> 90%), and modelling for completing students showed that decreasing the marks allocated to examinations would have led to slightly more students passing the pharmacology course, with higher grades. In contrast, increasing the marks allocated to examinations would have dramatically decreased the number of students passing the course, and their grades.

Conclusions
The allocation of marks can have a major effect on student performance. As ongoing assessment is not a good predictor of performance in examinations, this has implications for students who rely on passing examinations for their advancement. For instance, nursing students in some countries (e.g. the USA) are required to pass examinations prior to registration, whereas in others (e.g. Australia) they are not. Consideration needs to be given as to whether it is appropriate for nursing students who fail examinations to pass courses/programs.

Background
Historically, unseen examinations were the most common way to determine the academic performance of students. However, over the last 40 years, ongoing assessment (coursework) has been introduced into many degrees [1], so that most courses have become a mixture of examination and ongoing assessment. Presently, examinations are often used to test the assimilation of knowledge and to ensure that students complete the work themselves [1]. However, due to time pressures, examinations do not allow students to demonstrate academic excellence [1]. In contrast, ongoing assessment can be used to teach (formative) as well as to test (summative), allows the use of resources, allows students to produce their best work, does not rely on memory, and often includes team-work components [1].
There are no rules about the proportional allocation of marks between ongoing assessment and examinations, and the allocation is often made on a seemingly arbitrary basis and is not justified. For instance, in pharmacology courses for nurses in Australia, the proportion of marks for examinations ranges from 40 to 80% [2][3][4][5][6]. The consequences of the proportional allocation of marks in courses are often not considered. If the marks for examinations are greater than those for ongoing assessment, the overall grade is likely to be predominantly determined by the examinations, and vice-versa.
Assessment can be either summative, which evaluates student learning at the end of a component or course, or formative, which monitors students' learning to provide ongoing feedback. Whereas unseen examinations are clearly summative, ongoing assessment can be either summative or formative. One of the reasons for this is that ongoing assessment takes many forms, including weekly quizzes, homework, tutorials, laboratory work, oral or poster presentations, and assignments/research projects [7]. Some of these are formative activities, e.g. weekly quizzes and homework, whereas others are summative, e.g. final presentations and final reports [8].
There is evidence that marks for ongoing assessment are higher than those for examinations, and this has various consequences. For instance, it was suggested as long ago as 1987 that weaker students in a BA (Hons) Business Studies degree benefited, in the class of degree obtained, by achieving higher marks in ongoing assessment than in examinations [9]. This finding was broadened to show that, across UK universities, students in programs with higher proportions of ongoing assessment had higher overall marks, and consequently better degrees, than those in programs with a lower proportion of ongoing assessment [10,11]. This included students in biology/molecular sciences having higher marks in courses with 100% ongoing assessment than in courses with mixed assessment [12].
There have been few studies of the association between marks in ongoing assessments and examination in single programs or courses. Studies have shown that the marks for ongoing assessment were higher than examination marks in a pharmacy program [13] and in a bioscience course [14]. However, it is not known whether this applies to all kinds of ongoing assessment versus examinations, and to all students and courses/programs.
The relationship between marked examinations and unmarked formative ongoing assessment has been considered in a meta-analysis of the effect of active learning interventions on examination outcomes in the STEM disciplines. The interventions, which were unmarked formative activities such as worksheets or tutorials completed during class, improved examination marks by 6% and reduced failure rates compared with traditional lecturing [15]. Notably, marked formative activities, such as worksheets/homework completed prior to tutorials/workshops, were not included in this meta-analysis.
There have been few studies of the relationship between marked formative or summative ongoing assessment and marks in examinations, and these have had varying outcomes. In a pharmacy program, there was only a weak correlation between the marks for ongoing assessment and examinations [13]. Only in one course of this study were the ongoing assessment marks separated, and this showed no correlation between the marks for a practical write-up and an aligned examination question [13]. In contrast, marks for home assignments were a strong predictor of examination performance in courses in calculus and macroeconomics [16], and education [17]. Marks for home assignments in statistics were shown to predict examination performance in one study [18], but not in another [16]. Other studies have shown that marked tutorial-based assessments have a significant positive association with examination performance in finance [19] and law [20] courses. Marked online quizzes were also associated with better performance in examinations for education students [17]. To our knowledge, there are no studies determining the relationship between marked ongoing assessment and examinations for nursing students.
Being able to perform well in examinations is especially important for nursing students, as it may determine whether they can practice clinically. For nursing students in the USA, it is the mark in an examination taken after completing their studies, the National Council Licensure Examination-Registered Nurse (NCLEX-RN), that determines whether they can practice. The NCLEX-RN is also used to register nurses in Canada, as are the Canadian Registered Nurse Examination (CRNE) and the examination of the Ordre des infirmières du Québec (OIIQ). At present, Australia, the UK, the Republic of Ireland, and New Zealand are among the countries that do not require a national examination prior to registration of nursing students, relying instead on graduation from programs that combine examinations and ongoing assessment. Undergraduate performance may predict success in the NCLEX-RN: students with more grades of C or below are less likely to pass the NCLEX-RN (reviewed by [21]). That review did not separate examinations from ongoing assessment as predictors of success in the NCLEX-RN [21]. Other studies have shown that examinations are a predictor of success in the NCLEX-RN. Thus, prior to sitting the NCLEX-RN, final-year nursing students often sit a commercially-prepared examination based on the NCLEX-RN, such as the HESI (Health Education Systems, Inc.) Exit Exam, and performance in the HESI is a good predictor of success in the NCLEX-RN [22,23]. Given the differences in how nursing registration is achieved between countries, and in the allocation of marks to ongoing assessment and examinations between universities and countries, it was of interest to consider how the proportioning of marks between ongoing assessment and examinations affects marks and pass rates.
This study was performed at an Australian university and was of nursing students in a pharmacology course. The hypotheses and objectives were:
(i) The hypothesis was that nursing students had higher marks in ongoing assessment than in examinations. The objective was to compare the performance in ongoing assessment and examinations of students who passed the course.
(ii) The hypothesis was that marks in ongoing assessment are not strong predictors of marks in examinations. The objective was to use regression line analysis for the passing students to determine whether performance in ongoing assessment was a predictor of performance in examinations.
(iii) The hypothesis was that allocating higher proportions of marks to ongoing assessment was associated with higher marks and pass rates, and vice-versa. The objective was to model how the proportioning of marks between ongoing assessment and examinations affected marks and pass rates for the passing and failing students who completed the pharmacology course.

Methods
This is a descriptive study of the relationship between mark allocation to examinations and ongoing assessment (an assignment and/or tutorials) and the academic performance of nursing students in a pharmacology course. Ethical approval was obtained for this project from the Human Research Ethics Committee at Queensland University of Technology; Ethics Approval Number 1900000541. Student anonymity was achieved by removing names and students' IDs from the marks data prior to the study.
For the students who passed the course, average grades ± SEM were determined.
In the pharmacology course, 40% of the total marks were allocated to ongoing assessment, which had two components, tutorials and an assignment, each allocated 20% of the marks. The tutorials were both formative and summative and were held weekly in classes of 25 students divided into groups of 5. Half of the tutorial marks were given for preparation, which was unsupervised and could be undertaken alone or in groups. The other half was a group mark for performance at the tutorial, which included questioning by the tutor of individuals and the group about the content of the student preparation. The second 20% of the ongoing assessment was a summative case-study assignment undertaken outside of class. The other 60% of the marks was made up of two examinations: a 25% examination covering the principles of pharmacology (20% multiple-choice questions (MCQs) and 5% short-answer questions), and a 35% MCQ examination on systematic pharmacology.
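As an illustration of this mark structure, the following minimal sketch (in Python) computes an overall course mark from the component weights given above; the student marks and the names used are hypothetical, chosen to show how a student can fail the examination component yet pass the course overall.

```python
# Minimal sketch of the course mark structure described above.
# Weights are from the text; the student marks are hypothetical.

WEIGHTS = {
    "tutorials": 0.20,        # weekly tutorials (half preparation, half group mark)
    "assignment": 0.20,       # summative case-study assignment
    "exam_principles": 0.25,  # principles of pharmacology (MCQs + short answers)
    "exam_systematic": 0.35,  # systematic pharmacology (MCQs)
}

def overall_mark(marks: dict) -> float:
    """Weighted overall course mark (0-100) from component marks in percent."""
    return sum(WEIGHTS[name] * pct for name, pct in marks.items())

# A hypothetical student who fails the examinations but passes overall:
student = {"tutorials": 80, "assignment": 60,
           "exam_principles": 60, "exam_systematic": 40}
exam_pct = (0.25 * student["exam_principles"]
            + 0.35 * student["exam_systematic"]) / 0.60

print(round(overall_mark(student), 1))  # 57.0 -> course passed (>= 50%)
print(round(exam_pct, 1))               # 48.3 -> examination component failed (< 50%)
```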
For the successful students, the outcomes in ongoing assessment (combined, and separated into tutorials and assignment) and examinations were compared by expressing the marks for each component as percentages. The percentages for individuals were compared by Student's paired t-test, and the percentages for different cohorts were compared by Student's unpaired t-test. Mean values were also determined.
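These comparisons can be reproduced with standard tools; the sketch below uses SciPy on simulated marks. All data here are hypothetical, generated only to illustrate the two tests.

```python
# Sketch of the paired and unpaired comparisons, on simulated (hypothetical) marks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ongoing = rng.normal(75, 8, size=100).clip(0, 100)                 # ongoing-assessment %
exam = (ongoing - 17 + rng.normal(0, 10, size=100)).clip(0, 100)   # ~17 points lower exam %

# Paired t-test: examination vs ongoing assessment for the same individuals
t_paired, p_paired = stats.ttest_rel(exam, ongoing)

# Unpaired t-test: e.g. semester 1 vs semester 2 cohorts (two halves, for illustration)
t_unpaired, p_unpaired = stats.ttest_ind(exam[:50], exam[50:])

print(f"paired: t={t_paired:.2f}, p={p_paired:.3g}")
print(f"unpaired: t={t_unpaired:.2f}, p={p_unpaired:.3g}")
```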
Students who achieved less than 50% in the ongoing assessment or the examinations were considered to have failed that component. Failure rates for each component were compared by odds ratio, using the online odds ratio calculator at https://www.medcalc.org/calc/odds_ratio.php. P ≤ 0.05 was considered significant for both Student's t-tests and odds ratios.
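For reference, the odds ratio and its p-value can also be computed directly; the sketch below mirrors the standard log-odds-ratio calculation used by such calculators. The counts are hypothetical, and the sketch assumes no zero cells.

```python
# Odds ratio of failing component A vs component B, with a z-based two-sided p-value.
import math

def odds_ratio(fail_a, n_a, fail_b, n_b):
    a, b = fail_a, n_a - fail_a   # fails / passes in component A
    c, d = fail_b, n_b - fail_b   # fails / passes in component B (no zero cells assumed)
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    z = math.log(or_) / se_log_or
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal-approximation p
    return or_, p

# e.g. 50/230 students failing the examinations vs 10/230 failing ongoing assessment
print(odds_ratio(50, 230, 10, 230))
```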
In order to determine whether performance in ongoing assessment was a predictor of performance in examinations for the successful students, regression line analysis was undertaken using Microsoft Excel. The marks for individual students in examinations were plotted against their marks in ongoing assessment (combined, and separated into tutorials and assignment). The equation of the regression line (y = ax + b, where a is the slope and b the intercept) and the R² value were determined. In regression, the R² coefficient of determination is a statistical measure of how well the regression line approximates the real data points, with an R² of 1 indicating that the regression line perfectly fits the data.
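The same analysis can be performed outside Excel; the following sketch uses scipy.stats.linregress on hypothetical marks to obtain the slope, intercept and R² described above.

```python
# Sketch of the regression line analysis (equivalent to the Excel fit), on simulated marks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ongoing = rng.normal(75, 8, size=100).clip(0, 100)                 # x: ongoing assessment %
exam = (ongoing - 17 + rng.normal(0, 12, size=100)).clip(0, 100)   # y: examination %

fit = stats.linregress(ongoing, exam)
print(f"y = {fit.slope:.2f}x + {fit.intercept:.2f}")
print(f"R^2 = {fit.rvalue**2:.2f}")  # 1 would mean the line fits the data perfectly
```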
For all the students who completed the course (i.e. successful and failing students), modelling was undertaken to determine the effect that changing the marking proportions from 40% ongoing assessment/60% examinations had on the pass/failure rates and overall grades. The proportions modelled were 60% ongoing assessment/40% examinations, 80%/20%, 100%/0%, 20%/80% and 0%/100%. Mean values ± SEM were determined. Students who achieved less than 50% in the ongoing assessment or the examinations were considered to have failed that component, for both the actual and the modelled data.
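The modelling amounts to re-weighting each student's two component marks and re-counting passes; the sketch below shows this calculation for the proportions listed. The marks are simulated for illustration, not the study data.

```python
# Sketch of the re-weighting model: recompute each student's overall mark under
# alternative ongoing/examination splits and count passes (data are hypothetical).
import numpy as np

rng = np.random.default_rng(2)
ongoing = rng.normal(75, 8, size=300).clip(0, 100)                 # ongoing assessment %
exam = (ongoing - 17 + rng.normal(0, 12, size=300)).clip(0, 100)   # examination %

for w_ongoing in (1.0, 0.8, 0.6, 0.4, 0.2, 0.0):   # 0.4/0.6 is the actual allocation
    overall = w_ongoing * ongoing + (1 - w_ongoing) * exam
    pass_rate = np.mean(overall >= 50) * 100
    print(f"{w_ongoing:.0%} ongoing / {1 - w_ongoing:.0%} exams: {pass_rate:.1f}% pass")
```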

Results
The pharmacology course, taken by nursing students in semesters 1 and 2 of 2014 and 2015, had the same content and teacher in both years. In both years, the course enrolled ~ 250 students in semester 1 and ~ 360 students in semester 2, and some of these students withdrew or did not complete. For completing students, the passing rate was > 90% and the failure rate was < 10% (Table 1).

Comparison of marks for examinations and ongoing assessment for passing students
The average grade, at ~ 4.8 (Table 1), and examination marks, at 58-60% (Table 2), were similar between years and cohorts, and there were only small variations between semesters for ongoing assessment (Table 2). Students in each cohort obtained significantly lower marks in examinations than in ongoing assessment, a difference of ~ 15-20 percentage points (Table 2). Dividing the ongoing assessment showed that students obtained significantly lower marks, by 9-19 percentage points, in the assignment than in tutorials (Table 2). (In Table 2, each value is the mean ± SEM (number of students); unpaired t-tests are between semesters 1 and 2; paired t-tests are between examination and ongoing assessment marks, and between tutorial and assignment marks.)
Despite passing the pharmacology course overall by obtaining ≥ 50% of the total marks available, some of these students failed individual components. The failure rates for the examinations ranged from 19 to 26%, which was much higher than for the ongoing assessment, at 0-1.6% (Table 3). None of the students who passed the pharmacology course failed the tutorial component; thus, the failures in ongoing assessment were due to failure in the assignment component, for which failure rates ranged from 3 to 6% (Table 3). (In Table 3, failure rates are the number of students with less than 50%/total number of students who passed the unit (percentages); * P < 0.05 by odds ratio between ongoing assessment and examinations.)

Regression line analysis and Pearson's correlation coefficients for the passing students
Regression line analysis was undertaken to determine whether performance in ongoing assessment was a good predictor of performance in the examinations. A good correlation would be indicated by slopes of ~ 1 and R² values of ~ 1. However, as students obtained significantly lower marks in examinations than in ongoing assessment (Table 2), it was predicted that regression line analysis would show a poor fit to the data, and this was the case (Fig. 1, Table 4). Pearson's correlation coefficients showed a weak correlation between the marks for examinations and ongoing assessment for three of the four semesters, and a moderate correlation for the other semester (semester 1 in 2014) (Table 4). Dividing the ongoing assessment into tutorial and assignment marks also showed a poor fit of the examination data to a line (Table 4). The correlations between tutorial and examination marks were weak, and those between assignment and examination marks were very weak to moderate (Table 4).

Modelling changes in the proportional allocation of marks between ongoing assessment and examinations
Modelling changes in the allocation of marks, from ongoing assessment to examinations and vice-versa, gave consistent results for all four cohorts of nursing students. Decreasing the allocation of marks to examinations increased the number of students who would have passed the course (Table 1). As the passing rates in the course were already high (≥ 92%), there was little scope for increasing them, and the modelling produced a maximum increase of only 2-6 percentage points (Table 1). Conversely, increasing the allocation of marks to examinations would have dramatically increased the number of students who failed the course (Table 1). The actual failure rates were low (≤ 8%), and the modelling increased them by up to 12-17 percentage points (Table 1).

Discussion
The three major findings of this study of nursing students in a pharmacology course are that, for the passing students, (i) marks were higher for ongoing assessment than for examinations and (ii) there were very weak to moderate relationships between marks obtained in examinations and ongoing assessment, and, for completing students, (iii) increasing the marks allocated to examinations would have decreased the number of students who passed the course. This is the first study to show that marks for ongoing assessment are higher than for examinations for nursing students in a pharmacology course. Similar findings have been made previously for bioscience courses undertaken by nursing students [25] or science students [14], and they confirm previous findings of higher marks for ongoing assessment at the program level [9][10][11][12].
In this study, we showed that for nursing students in pharmacology, marks in a written assignment were weak to moderate predictors of performance in examinations. A previous study showed a weak correlation (like us, using Pearson's coefficient) between marks in a research project and the final examination in a pharmacy course [13]. It would be of interest to know whether this finding relating to assignments/projects applies to students in other disciplines.
In addition, the present study showed that marks in tutorials, which included a homework component, were not good predictors of academic performance in examinations. This is the first time that this has been shown for nursing students or in a pharmacology course. However, this finding is not consistent across disciplines or students. Marked tutorials or homework improved marks in examinations for courses in calculus and macroeconomics [16], finance [19], and law [20]. Marked homework in statistics has variously been shown to improve examination performance [18] or to have no effect [16]. One possible explanation for this discrepancy between studies may relate to discipline, with marked homework/tutorials being better predictors of examination results in mathematics, economics, education and law than in a pharmacology course.
With the allocation of 60% of marks to examinations and 40% to ongoing assessment in the present study, the number of students who failed the pharmacology course was low (5-8%). With this low failure rate, the likelihood of increasing the passing rate by changing the allocation of marks was small, and our modelling confirmed this by showing that the passing rate could only be increased by 2-6 percentage points by increasing the marks allocated to ongoing assessment. With this allocation, the passing rate was high (92-95%), and this occurred despite 20-26% of students failing the examination component of the course.
The major finding of the modelling part of our study was that increasing the marks allocated to examinations would have decreased the number of students who passed the pharmacology course, with 19-25% failing overall if all the marks had been allocated to the examinations. In Australia, the allocation of marks to examinations in pharmacology or pharmacology-related courses in nursing programs is variable (85%, University of Adelaide; 70%, University of Queensland; 50%, Edith Cowan University and RMIT University; 40%, University of Tasmania [2][3][4][5][6]). Thus, if the usual pattern of higher marks in ongoing assessment than in examinations occurs in these courses, then, for the same marks in ongoing assessment and examinations, a smaller percentage of students would have been successful at Adelaide, where examination marks predominate, than at Tasmania, where marks for ongoing assessment predominate.
Although our modelling was done for a pharmacology course, the findings will apply to any course where students have weaker outcomes in examinations than in ongoing assessment, which is common [10][11][12][13]. As, to our knowledge, there are no previous studies of either the relationship between marks in examinations and ongoing assessment in an individual course, or of modelling the effect of changing the allocation of marks, for nursing or other students, these are novel findings.
In the pharmacology course, 55 of the 60 marks (out of 100) allocated to examinations were in the form of MCQs. When MCQs are used, the fairest option is to mark on the number of questions attempted and to penalize wrong answers, as with this option blind guessing will, on average, not help the student [26]. Many universities, including the one at which this study was undertaken, do not deduct marks for incorrectly answered MCQs, and this inflates the MCQ marks [26]. In the pharmacology course studied, this could have inflated the marks for MCQs by ~ 20%, and thereby the overall examination mark by 11 of the 60 marks available. Thus, students who fail the examinations in pharmacology by achieving less than 30% of the 60% of marks available are clearly demonstrating a poor knowledge of pharmacology, especially as some of their marks may be due to blind guessing.
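For clarity, the guessing arithmetic is worked through below. The five-option MCQ format is our assumption, chosen because it reproduces the ~ 20% figure quoted above; a four-option format would give 25% instead.

```python
# Worked version of the guessing arithmetic above. The five-option MCQ format
# is an assumption made to reproduce the ~20% inflation quoted in the text.
mcq_marks = 55        # course marks (out of 100) awarded via MCQs
exam_marks = 60       # course marks (out of 100) allocated to examinations
guess_rate = 1 / 5    # expected score from blind guessing with five options

inflation = guess_rate * mcq_marks   # expected marks gained by pure guessing
print(inflation)                     # 11.0 -> 11 of the 60 examination marks
print(f"{inflation / exam_marks:.0%} of the examination component")  # 18%
```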
The concern is that nursing students who pass the ongoing assessment, but not the examinations, may not have assimilated the knowledge of pharmacology, or of other courses, necessary to continue their program of study. Thus, the disparity between marks in examinations and ongoing assessment needs to be considered, and methods introduced to overcome it. One possible practical solution to the dilemma of whether students who pass ongoing assessment but fail examinations should be allowed to pass courses and progress in their studies would be to make it compulsory for students to pass the examination component of the course.
There are several possible reasons for the disparity between marks in examinations and ongoing assessment. The most obvious is that examination results represent the work of the individual student, whereas ongoing assessment marks may represent the work of individuals or of groups of students. In the present study, the tutorial mark of 20% is partly a group mark, being composed of 10% for unsupervised preparation/homework, which can be individual or group work, and 10% for participation, which is a group mark. This makes it possible for the performance of weak students, and their marks in tutorials, to be artificially enhanced by better students in the group. The assignment component of the ongoing assessment (20%) should represent work undertaken by the individual student, but as it was unsupervised, there was nothing to prevent students colluding.
One way to overcome this would be to remove group work from courses. However, group work is well known to be a very important skill for nursing students. Thus, we need either to overcome the ongoing problem of assessing individuals within group work [27,28] or to use an alternative approach to ensure that students do not pass courses based on work done by others in ongoing assessment.
For group assignments, self- and peer-rating has been used to address varying contributions by students in the humanities [29] and in postgraduate nursing/midwifery studies [30]. However, this method is not usually applied to weekly tutorials, including those for nursing students. When it was applied to problem-based learning tutorials for medical students, self-ratings did not correlate, and peer-ratings only weakly correlated, with tutor-ratings of the students [31]. Thus, it is not proven that this method gives a reliable measure of a student's achievement in weekly tutorials. Furthermore, it would be very time-consuming and expensive to undertake such assessment for weekly hour-long tutorials in a large cohort; for instance, the pharmacology tutorials for nursing students in the present study were held weekly over 13 weeks, in classes of 25, for cohorts of 250 or 360 students.
First-year examination results have been shown to be a predictor of later success in clinical assessment for medical students (e.g. [32]). For nursing students, it is not known whether results in undergraduate examinations or in ongoing assessment are good predictors of success in clinical practice. Thus, a 2012 review of the factors influencing nursing students' academic and clinical performance did not find any studies of academic factors (e.g. examination or ongoing assessment outcomes) as predictors of clinical performance [33].
In the USA, overall undergraduate performance may be a predictor of success in the NCLEX-RN, as GPA predicts success in the NCLEX-RN [21,34]. Also, performance in the HESI predicts performance in the NCLEX-RN [22,23]. However, to our knowledge, there are no studies determining whether performance in ongoing assessment is a predictor of success in the NCLEX-RN.
For nursing students in Australia, assessment is commonly a mixture of ongoing assessment and examinations contributing to a GPA, and for many nursing courses/programs, most marks are from ongoing assessment. Thus, in the nursing program at the university where the present study was undertaken, there are 23 compulsory courses and one elective. Seven of the courses are off-campus practicums and are marked as satisfactory or unsatisfactory. Of the remaining 16 compulsory courses, 8 have no examinations; overall, 78% of marks are allocated to ongoing assessment and only 22% to examinations. It seems likely that the students who failed the examination components at our Australian university, but passed the program overall, would have failed the examinations in the USA system and not have been registered. Further consideration needs to be given as to whether students in Australia who do not undertake, or who fail, examinations are fit to practice. In addition, studies need to be undertaken of the relationship between success in undergraduate courses and clinical practice.
The major limitation of this study is that it is of a single course in pharmacology, and some of the findings may not apply to other courses undertaken by nursing or non-nursing students. However, we have previously shown a similar reliance on marks in ongoing assessment for the overall success of nursing students in a bioscience course [25]. Also, the findings of the present study may apply to any course where students obtain significantly lower marks in examinations than in ongoing assessment. However, for many courses, it is not known whether marks are lower for examinations than for ongoing assessment, for nursing or non-nursing students. Thus, similar analyses need to be undertaken of other courses to determine whether the findings are specific to science courses for nursing students or can be related to other courses for nursing and non-nursing students.

Conclusions
More attention needs to be given to the allocation of marks between ongoing assessment and examinations. Marks in ongoing assessment may be a poor indicator of success in examinations.
Students can fail the examination component but still pass the course, and increasing the marks allocated to ongoing assessment accentuates this. Students who pass the course but not the examinations may not have assimilated the knowledge necessary to continue in their program.
Additionally, some of the passing students may have passed overall due to work done by others in ongoing assessment.