3. TU Delft assessment agreements
This chapter describes TU Delft agreements on assessment. Some of these agreements are specific to TU Delft, so this chapter is relevant for new teaching staff and other employees, especially those who have previously worked at other universities. Most agreements originate from the models for the Teaching and Education Regulations (model TER, Onderwijs en Examenregeling in Dutch) and from the model for the Rules and Guidelines of the Board of Examiners (model R&G, Regels en Richtlijnen van de Examencommissie in Dutch). If operationalisations differ per faculty or if there are exceptions, this is indicated in the text.
3.1 TU Delft assessment terminology
In this assessment framework, we use the terminology that lecturers and programme management use, to optimise the readability of this document. However, some terminology differs in laws, regulations, and tools. These differences are discussed below, together with an overview of assessment characteristics. In addition, Appendix F contains the following tools:
-
In the TER and WHW, ‘examination’ (Dutch: tentamen) is defined as the assessment of an entire course. At the TU Delft and other universities, as well as in this framework, ‘exam’ (same Dutch word tentamen) is reserved for a written or oral exam that takes place in a scheduled time slot; it excludes other forms of assessment and resulting products like reports, presentations, lab work, and projects. In addition, ‘an assessment’ means a single assessment of any form, while ‘course assessment’ indicates the total of assessments in a course.
To complicate things, the Dutch word ‘examen’ translates as ‘degree audit’, which is the final check by the board of examiners (or those mandated by the board) whether a student has passed their individual exam programme, which is the student’s individual approved list of courses.
-
Assessments can be described based on their characteristics. Table 2 lists the main characteristics of assessments and their possible values. The list of assessment types/methods in particular, and their definitions, can differ between faculties.
Table 2: Main assessment characteristics and values.
Assessment type/method: presentation, report, essay, written exam, oral exam, portfolio, skill assessment, assignment, take-home exam, video, prototypet
Assessment category: ‘oral, written, or in another way’ (‘mondeling, schriftelijk of op een andere wijze’, art. 7.13 sub 2.l). TU Delft terminology: oral, written, assignment/project
Assessment mode: on-campus, remoteu, hybrid (simultaneous on-campus and remote assessment)
Assessment schedule: timeslot (as in written and oral exams), or deadline
(initial) Input format: paper, digital, 3D, combination
Question type: open-ended, closed-ended (whether students can type their own answer, or have to choose from listed answers)
Answer type: open-ended, numeric, multiple choice, multiple selectv, true/false, document
Grading method: manual on paper, manual digitally, fully automated, automated suggestions
For the most common assessment types, Table 3 shows their main timing, and main fraud prevention/detection characteristics.
Table 3: Main timing and fraud prevention/detection characteristics per assessment type.
Presentation (scheduled timeslot): plagiarism scan for slides, identity check
Report (deliverable with deadline): plagiarism scanw
Essay (deliverable with deadline): plagiarism scan
Written exam (scheduled timeslot): on-campus: invigilators, identity check; remote: different exam versions, parameterisation, (random) oral checks, see 3.9
Oral exam (scheduled timeslot): identity check
Project (deliverable with deadline): supervision, original assignments/research questions
Portfolio (deliverable with deadline): plagiarism scan (on written deliverables), reverse image search (on digital graphical deliverables)
Skill assessment (during a practical lab session, or at a specific moment): identity check, supervision
Assignment (deliverable with deadline): plagiarism scan
Video (deliverable with deadline): reverse image search on video stills
-
A project can be considered both an educational method and an assessment method, and it typically contains several assessment methods. Some programmes consider a project to be an assessment method in its own right, while other programmes consider projects to include multiple assessment methodsx.
3.2 Planning of assessments
To ensure studyability for students (Quality requirement 6) and feasibility for teaching staff in terms of time and resources (Quality requirement 8), the model TER contains the following agreements on the planning of assessment:
-
Programmes offer two opportunities to take an assessment per academic year. This holds for any type of assessment, unless this is unfeasible for the programme. TER article 17 indicates when (e.g. in which weeks) these two opportunities take place. In practice, offering two opportunities per year is only considered feasible for written exams and oral exams, and not for other assessment forms like projects, practicals, and field trips. In the latter cases, programmes offer repair options to minimize study delay (Quality requirement 6; for repair options see 4.2), if this is feasible. The programme can set additional requirements to participate in the repair option, but these requirements must be reasonable for the student.
-
The planning of assessments and deadlinesy is feasible for students (Quality requirement 6) and focusses on learning (Quality requirement 4). Therefore, the standard for bachelor programmes is to schedule no more than two summative assessments/deadlines per five EC8,21, and no more than two summative assessments/deadlines per week across courses8,21, for students who take the nominal course programme. The faculty or programme can deviate from these standards based on their assessment vision, in which case they substantiate this in their assessment policy. For master programmes, programmes create their own policy on the number of assessments and deadlines per week and per EC (see 4.2).
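As an illustration only, the two bachelor planning standards above can be expressed as simple checks. The function names and data layout below are hypothetical, not part of any official TU Delft tooling:

```python
# Hypothetical sketch of the two bachelor planning standards above; the
# function names and data layout are illustrative, not an official tool.

def within_course_standard(num_summative: int, ec: float) -> bool:
    """At most two summative assessments/deadlines per five EC."""
    return num_summative <= 2 * (ec / 5)

def within_weekly_standard(deadlines_per_week: dict) -> bool:
    """At most two summative assessments/deadlines per week across courses,
    for students taking the nominal course programme."""
    return all(n <= 2 for n in deadlines_per_week.values())

# A 5 EC course with three summative deadlines exceeds the standard:
print(within_course_standard(3, 5.0))                # False
print(within_weekly_standard({"1.1": 2, "1.2": 1}))  # True
```

A faculty deviating from these standards per its assessment vision would simply use different bounds.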
-
The TER of a programme specifies when the exam periods are and whether exams can be planned outside the regular exam weeks. Most programmes (but not all) divide the academic year into ‘quarters’ that last around 10 weeks (depending on holidays). For programmes with quarters, regular exam weeks are the last one or two weeks of the education period (weeksz x.9 and x.10; see art. 17.1 of the TER of each programme, which lists the periods in which exams can be taken). Depending on the programme’s policy, an educational period may have a midterm week (for quarters: week x.5) during which midterm exams and sometimes resits from the previous period are planned. The summer resit week is scheduled in the third week of the summer period (week 5.3).
During the regular exam weeks (regular teaching weeks x.5, x.9 and x.10, and summer resit week 5.3aa), exams are scheduled on weekdays within the following 3-hour timeslots: 9.00-12.00, 13.30-16.30, and 18.30-21.30. The evening timeslots are only used if no other timeslots are availablebb. If evening timeslots are inevitable, they are preferably not used for first year bachelor students. Exams can be shorter than these three-hour timeslots, but not longer (except for students who were granted extra exam time).
-
Deadlines of deliverables (e.g. reports) need to be published in the LMS (Brightspace) by the start of the course (R&G 15.2).
3.3 Written exam specific information: registration
To enable the scheduling of resources that are required for written exams (Quality requirement 8), the following agreements exist in the model TER:
-
Students must register for each written exam (via Osiris, the student information system / SIS) in order to be allowed to participate. They will receive an ‘examination ticket’ by email. Registration opens 56 days before the exam. Students can register up to and including 14 calendar days before the day the written exam takes place (TER art. 13.1), except for the summer resit, for which registration is possible up to and including 6 calendar days before the written exam. The board of examiners may allow students to participate in the exam if they failed to register in time due to exceptional circumstances (TER art. 13.3).
-
After the deadline, students can register for a waiting list (via Osiris) until 6 days before the exam date. The student will receive an ‘examination ticket’ if there are still places available.
-
If students do not register in time, they can report to an invigilator at the entrance of the exam room before the start of the exam. If there are places left 30 minutes after the start of the examcc, these students are granted late access to the exam.
Registered students who arrive late are allowed to enter the exam hall until 30 minutes after the start of the exam18. That is why the 30-minute waiting period exists, and why students cannot use the bathroom during the first 30 minutes of an exam.
-
In case digital exams need to be administered remotely during a lockdown, standard exam registration will be possible until six calendar days before the exam. Since the deadline for the waiting list procedure has already passed by then, there will be no waiting list procedure. Due to practical constraints in carrying out the late access procedure remotely, and the increased chance of fraud during the 30-minute wait period, late access is not possible either.
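The registration windows described in this section are plain date arithmetic. As a hedged sketch (the helper names are illustrative; Osiris is the authoritative system):

```python
# Hypothetical date helpers for the written-exam registration windows
# described above; an illustration only, Osiris is the authoritative system.
from datetime import date, timedelta

def registration_window(exam_date: date, summer_resit: bool = False):
    """Return (opens, closes): registration opens 56 days before the exam
    and closes 14 calendar days before it (6 for the summer resit)."""
    close_days = 6 if summer_resit else 14
    opens = exam_date - timedelta(days=56)
    closes = exam_date - timedelta(days=close_days)
    return opens, closes

def waiting_list_deadline(exam_date: date) -> date:
    """Waiting-list registration closes 6 days before the exam date."""
    return exam_date - timedelta(days=6)

opens, closes = registration_window(date(2024, 4, 15))
print(opens, closes)  # 2024-02-19 2024-04-01
```

Note that for the summer resit the standard close date and the waiting-list deadline coincide, which is why no waiting list is offered there.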
3.4 Fraud prevention and detection
Fraud is defined as ‘any act or omission by a student that makes it fully or partially impossible to properly assess the knowledge, insight and skill of that student or another student’.22
Fraud prevention and detection help to ensure that an assessment result correctly reflects how well an individual student masters the learning objectives (Quality requirement 3, leading to fairness). In order to create fair assessment, the TU Delft takes fraud prevention and fraud detection measures23 that facilitate catching students who commit fraud. This has a preventative effect.24
-
Fraud prevention starts before the assessment, during the course, using the following principles:
- Students are much less likely to commit fraud if they feel capable of doing the assessment. TU Delft programmes enable students to be prepared for assessments by offering constructively aligned courses and studyable programmes.
- Students will not commit fraud unintentionally if they are aware of what is and is not allowed and what the consequences are. Therefore, programmes inform students about fraud and its consequences. They also teach students how to reference (and, if applicable in the field, cite) properly. Lecturers communicate clearly on which aids, tools, communication, and level of cooperation between students are and are not allowed during an assessment. In case of group work, lecturers inform students about the expected individual contribution and facilitate groups to discuss this regularly to prevent free riding.
-
The TU Delft chooses its fraud prevention and detection measures carefully, to optimise student performance. Fraud prevention and detection measures may hinder students from demonstrating their abilities, for example by causing stressdd (e.g. preventing students from accessing previous questions, or online proctoring). Therefore, these measures are only applied if necessary. During written exams, the focus lies on preventing the use of unauthorised sources and communication between students, using invigilators and the secure digital exam environment.
In case of unsupervised assessments, oral checks may be carried out in order to:
- ensure that students created their deliverables themselves, by showing that they understand the content of what they delivered;
- ensure that students created the output of the tools (like AI tools) themselves.
-
It is not viable to make assessment fully fraud-proof. That is why fraud detection remains equally important. During grading, assessors are vigilant about similarities between student answers and scores that can be an indicator of fraud in any assessment form, and about the use of unwanted tools or help. Fraud detection is especially relevant for take-home assignments like reports, presentations, essays, and code. Examiners use the available fraud detection tools to detect suspicions of fraud. Written student work that students produced outside an invigilated exam setting is checked for plagiarism. This includes BSc and MSc graduation work (model R&G art. 7b.1). However, plagiarism scans cannot prove that it was the student who submitted the work.
If examiners detect a suspicion of fraud, they must follow the procedure that their board of examiners has described in the applicable R&G.
3.5 Scoring and grading
For transparent (Quality requirement 1) and reliable (Quality requirement 3) grades/results, as well as a feasible process for examiners (Quality requirement 8) the following agreements are in the model TER:
-
Lecturers score student work (e.g. exam questions or assignments) not only based on whether the final answer is correct, but also take the underlying calculations or reasoning that lead to this final answer into account. Partially correct student work is awarded partial scores, unless this is not feasible, or if this conflicts with the learning objectives or the nature of the course. All-or-nothing / binary scoring (full score or no score) results in less precise grades.
-
The examiner determines and publishes the result (grades or pass/fail) of written exams as quickly as possible but no later than 15 working days after the examination (TER art. 19.1). For other assessment forms (oral exams, assignments/projects), TER art. 19 indicates the grading deadlines, which are usually 15 working days.
There are two exceptions to this rule of 15 working days:
- Examiners need to communicate all Q4 grades before the Friday of week 5.1 (TER art. 19.5).
- Examiners need to communicate all Q5 grades of first year bachelor courses before the Friday of week 5.4 due to BSA deadlines (TER art. 19.5).
If unforeseen circumstances lead to not meeting the grading deadline, the examiner discusses this with the board of examiners as soon as possible, and communicates the new publication date to the students.
-
After completing the assessments of a course, each student will receive a result for this course. Partial results determine the course result. These partial results are the most detailed level that the Rules and Guidelines of the Board of Examiners (R&G) describe. Course and partial results can be grades (numerals), pass (V, voldaan) or fail (O, onvoldoende) (model R&G art. 14.3).
If a student does not show up for an exam or other assessed activity, or does not hand in a deliverable, the examiner must register NVee for this assessment (which is not a result and therefore cannot be appealed):
- NV (no show, niet verschenen): the student registered for an assessment but did not show up / did not hand in their work (in time)ff.
If the grades or pass/fail results are too low to pass the course (or an NV was registered), Osiris will automatically determine the following course result:
- NVD (did not pass, niet voldaan): the course result if a student did not receive sufficiently high assessment results, or received an NV.
The board of examiners can decide to grant an exemption for certain courses, in which case they register VR at course level in Osiris:
- VR (exemption, vrijstelling): the board of examiners granted an exemption for a course.
-
Grades are expressed on a scale from 1-10 (model R&G art. 14.4). The meaning of grades is (model R&G art. 14.4):
9.5-10.0: excellent (uitmuntend)
8.5-9.0: very good (zeer goed)
7.5-8.0: good (goed)
6.5-7.0: more than satisfactory (ruim voldoende)
6.0: satisfactory (voldoende)
4.5-5.5: nearly satisfactory (onvoldoende)
3.5-4.0: poor (slecht)
1.0-3.0: very poor (zeer slecht)
-
The faculty policy contains guidelines on grade calculation (see 4.1).
-
Course grades are calculated with a precision of .5 (model R&G art. 14.4 and 14.5) on a scale of 1-10 (R&G art. 14.4). The minimum pass grade of a course is 6.0 (model R&G art. 14.4). This implies that the minimum pass grade before rounding is 5.75gg.
A course grade can either consist of the result of a single or multiple assessments. In the case of multiple assessments, the resulting individual grades are called partial gradeshh. Partial grades are calculated with the precision of one decimal (model R&G art. 14.6, first bullet).
Each board of examiners decides what the minimum partial grade is to receive a course grade in their Rules and Guidelines. Typically, the minimum partial grade is 5.00 (model R&G art. 14.6, second bullet), but it can be higher. In some cases, there is no minimum partial grade.
If students do not reach the minimum partial grade, or did not receive a partial grade, the course result will be ‘NVD’ (did not pass, niet voldaan).
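The grade arithmetic above can be sketched as follows. This is an illustration, not the official Osiris implementation, and the half-up rounding convention is an assumption:

```python
import math

# Illustrative sketch of the grade rules above; not the official Osiris
# implementation. Rounding halves upward is an assumption.

def round_partial(grade: float) -> float:
    """Partial grades are calculated with a precision of one decimal."""
    return math.floor(grade * 10 + 0.5) / 10

def round_course(grade: float) -> float:
    """Course grades are expressed with a precision of .5 on a 1-10 scale."""
    return math.floor(grade * 2 + 0.5) / 2

def passed(course_grade: float) -> bool:
    """The minimum pass grade of a course is 6.0."""
    return round_course(course_grade) >= 6.0

print(round_course(5.75))  # 6.0: the minimum pass grade before rounding
print(passed(5.74))        # False
```

This makes the 5.75 threshold visible: 5.75 is the smallest unrounded grade that rounds to 6.0 under half-up rounding.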
-
In case of multiple attempts for a course or an assessmentii, the highest result countsjj (model R&G art. 14.8) and is used for the degree audit. In the rare case that students receive a (numerical) grade one year, and a pass/fail result in another year, passing grades (≥ 6.0) are regarded as ‘higher’ than pass/fail resultskk.
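The ‘highest result counts’ rule can be sketched as a small comparison. This is a hypothetical helper (result registration actually happens in Osiris), and the ranking of ‘V’ above failing grades is an assumption beyond what the source states:

```python
# Hypothetical helper for the 'highest result counts' rule; results are
# actually registered in Osiris. Assumed ordering: passing grades (>= 6.0)
# rank above 'V' (pass), 'V' above failing grades, 'O' (fail) lowest.

def best_result(attempts):
    """Pick the result used for the degree audit from multiple attempts.
    Attempts are numeric grades, or the strings 'V' (pass) / 'O' (fail)."""
    def rank(result):
        if isinstance(result, (int, float)):
            return (3, result) if result >= 6.0 else (1, result)
        return (2, 0) if result == "V" else (0, 0)
    return max(attempts, key=rank)

print(best_result([5.5, "V", 6.5]))  # 6.5
print(best_result(["V", 5.5]))       # V
```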
-
Course results are registered and published in Osiris. It depends on the programme whether partial results (i.e. result of single assessments) are registered in Osiris. If not, they are published on Brightspace (model TER art. 19.3-4 and model R&G art. 15.1).
3.6 Test result analysis to adjust grading guide & grades
The faculty’s assessment policy (see 4.1) describes how lecturers are advised/required to do a test result analysis to check for the need to adjust the answer model or grading guide/assessment sheet (see 1.3). The policy also describes in what case what methods examiners should use to adapt the grade calculation and/or the cut-off scorell. This is to ensure the reliability (Quality requirement 3) of grades/results.
3.7 Communicating grades & feedback, student review & discussion of assessed work, appeals and the validity of results
The model TER includes processes that stimulate learning from assessments (Quality requirement 4) by giving students the right to review. In addition, it includes processes to ensure fairness of grading by giving students insight in their scored and graded work (Quality requirement 1) and by the appeal process in case students do not agree with their result.
-
Students have, for a period of 20 working days after notification of the results, the right to review (‘inspect’ in the TER) their marked work. The course examiner can schedule a (group) meeting during which the review will take place.
-
Within 20 working days after notification of the results, students can ask the examiner to discuss a (partial) result (grade or pass/fail). The course examiner can schedule a (group) meeting during which the discussion will take place, after which students may still request an individual discussion of the motivation of the grade.
-
If students do not agree with the final course result, they can lodge an appeal with the Examination Appeals Boardmm,25 (EAB, College van beroep voor de examens, Cbe). The student can only lodge an appeal after the examiner has determined the entire course grade and announced this (via Osiris). The student can lodge an appeal within 6 weeks after announcement of the course result.
In case time is of the essence, students can ask for an emergency procedure at the EAB. This is for example the case if students are excluded from a second partial assessment (i.e. failed the entire course) based on the results of a first partial assessment.
-
Assessment results are valid indefinitely (TER art. 22.1), unless the dean decides that the assessed knowledge, insights, or skills are proven to be outdated. The validity of partial grades is limited in the programme’s TER (model TER art. 22.4 limits the validity of partial grades). Not all TERs determine a maximum period of validity, in which case partial results are valid indefinitely.
3.8 Ownership and archiving of assessments and student work
-
The binding guidelines for archiving and destroying education-related student data26 are published here, including retention periods and how to archive and destroy data. The guidelines apply to educational data produced by teaching staff or students, and therefore include formative and summative assessment, like graduation work, feedback on student work, completed assessment forms, and test result analyses.
The main guidelines are summarised here:
- Store data in the format in which the student originally handed it in:
- Digital-born data
- Paper-born data: even after scanning, the original paper should be stored
- 3D-data
- Adhere to the minimum and maximum retention period (see the four main periods below, and the teaching support site for a complete overview).
- TU Delft archives student work in centrally supported educational tools (including assessment tools, see 1.4):
- If students deliver work there, the application owner (ESA-IM or IT) is responsible for archiving and destruction
- If students deliver work elsewhere, the examiner is responsible for archiving and destruction
- All student work and feedback/grades are considered sensitive personal data.
- Store sensitive personal data in a secure place (see Teaching Support site for concrete advice).
- Anonymize data exports from educational tools (e.g. test result analyses).
- The student has the right of access to their work due to the GDPR and archive law and therefore can request to see the work during the retention period.
- Summative (3D) work should not be returned to the student before the end of the appeal period (6 weeks after the student has received the course result), because returning the work earlier hinders the appeal processnn.
- Physical formative student work (on paper and 3D) is returned to students together with feedback. If this feedback is written on paper (paper-born), the feedback is given to the student, while digital-born feedback is communicated via and stored in the LMS (Brightspace) or a centrally supported (assessment) tool.
The retention and destruction periods of assessment related documents that examiners and students produced are published on the teaching support site, together with background information. This information complies with the following regulations from the legal framework:
- Examiners archive the assessment, corresponding grading guideoo and score-grade transformation for seven years after the assessment date or deadline (SL proc. 5427).
- Examiners archive student work and assessor annotations/feedback on student work for two years after the exam date (model R&G art. 16.1, SL proc. 54) and destroy it within three years after the exam date. This includes completed rubrics and assessment forms.
- Examiners must retain three-dimensional projects for six weeks after the examination date or the date on which the results were published (model R&G art. 16.3, SL proc. 54) and destroy them within three months, after enabling students to pick up their work.
- The faculty archives students’ graduation work for at least seven years after graduation (model R&G art. 16.2, SL proc. 54), including the completed grading guide/rubric and assessment form. The faculty archives the underlying graduation manual, rubrics/grading guide and assessment form for at least seven years, too.
-
Examiners are responsible for adequate and restricted access to all assessment data in their course. Within a course, the examiner gives course staff members access to assessment data that is relevant for their assessment related tasks. If the board of examiners (WHW art. 7.12c, sub 2) requests data on assessments, the examiner will provide the requested information. In case the examiner asks advice from an assessment advisor for which student data is needed, they preferably only share anonymized student data, or use screen-sharing to minimize the distribution of personal data. In case of changes in teaching staff, functional admins or exam support officers give the new teaching staff access to course data.
-
Any student products and the resulting feedback, scores and grades contain sensitive personal information. It is therefore important to save this data securely and prevent the data from being available to unauthorized people, and to destroy the data after the legal retention period. This is the case for summative assessments, but also for formative assessments and learning activities. Students and others should not be able to see each other’s assessment results.
In addition, data on special provisions for individual students during assessments (e.g. extra time) are considered health-related data and are therefore ‘special category of personal data’ according to the GDPR Implementation Act. These data should be handled with extra care.
3.9 Guidelines remote assessment & fraud prevention
In case remote assessment is needed, the TU Delft delivers good quality assessment (the quality requirements in 1.2) and specifically strikes a healthy balance between 1) quality assurance (fraud prevention measures, Quality requirement 3), 2) privacy concerns, 3) enabling students to demonstrate how they master the learning objectives (Quality requirement 2 & Quality requirement 7), and 4) limiting stress for students (Quality requirement 6). This has resulted in limiting the use of online proctoring in remote assessment, as laid down in the online proctoring regulations19. Online proctoring can only be used in exceptional cases where other remote fraud prevention measures are insufficient and where remote assessment is the only option, and only after approval from the concerned board of examiners. This can be the case in specific individual situations, like students who cannot come to the exam hall due to chronic health issues (Quality requirement 7).
For remote assessments, the following guidelines for remote assessment28 apply:
- The assessment assesses all learning objectives in a reliable way (Quality requirement 1 & Quality requirement 2).
- Fraud prevention measures do not hinder student performance (Quality requirement 3 & Quality requirement 6); i.e., they aim to limit stress for students in these assessments.
- Helpdesk: the examiner is available for students during the assessment.
- Practice exam: the examiner provides a practice exam to enable students to practice with the setting, questions & tools (Quality requirement 1 & Quality requirement 3).
- Feasible: The assessment is feasible for both students and lecturers (Quality requirement 6, Quality requirement 8 & Quality requirement 9).
- Extra time: Students with disabilities receive the required extra time (Quality requirement 7).
- Privacy: The assessment complies with the privacy regulations.
- Transparency: The examiner communicates assessment details to the students via the LMS (Brightspace) and email (Quality requirement 1).
3.10 Guideline for use of (AI) tools in assessment
Below, the initial eight guidelines for teaching staff (June 2023) on the use of (AI) tools by students in non-invigilated assessments are listed. The guidelines are in development; a more extensive and recent version can be found on this page.
These guidelines are only relevant outside exam-like environments, in which students will likely use available (AI) tools.
- Discover the possibilities and limitations of (AI) tools and discuss them with the students.
- Promote safe use of AI tools and plugins and do not reveal personal, internal or confidential information.
- Be transparent and explain choices. Discuss with students how they can follow the Code of Conduct29 in the context of AI tools. Communicate changed expectations to students.
- Attribute correctly: Inform students on how they should correctly attribute the use of AI-tools.
- Reduce the need of students to rely on AI tools by making them feel confident: Have sufficient feedback moments and regularly check the progress of individual students.
- Focus on the students’ process if the course is heavily influenced by AI tool use: shift assessment criteria towards the process, and track progress using version control.
- Take fraud detection measures & report suspicions of fraud to the board of examiners: Consider doing oral authenticity checks to check if it is likely that the student produced the deliverable by themselves.
- Rethink the course, including learning objectives and course assessment planpp.
3.11 Assessment adaptations for students with a support need
Students who encounter obstacles during assessments due to e.g. a functional limitation, disability, chronic illness, psychological problems, young parenthood, gender transition, or special family circumstances may request adjustments of assessment (TER art. 25, Quality requirement 7), after consulting Horizon (the desk for studying with a disability or extra support question) for standard support facilities, or their academic counsellor for customised adjustments.
Standard assessment support facilities include 10 minutes per hour extra exam time for students with e.g. dyslexia. Customised assessment adjustments depend on the individual situation of the student and may include changes in assessment type, timing, permitted aids (e.g. dictionaries) and location (TER art. 25.4). This includes exemptions from attendance requirements.
For customised assessment adjustments, the board of examiners (or the mandated academic counsellors in some faculties) will evaluate the student requests on the following criteria30:
- If possible, the adjustment must still allow assessment of the learning objectives of the course at the required level. If this is not possible, individual degree programmes of students should still cover and assess all final attainment levels of their degree programme (TER art. 25.1).
- The adjustment must be efficacious for the student: it should be suitable and necessary (Wgbh/cz31 art. 2.1).
- The adjustment should not place a disproportionate burden on the faculty / TU Delft (Wgbh/cz art. 2.1), in terms of time and money32.
3.12 Composition of assessment committees for graduation projects
The board of examiners establishes rules on the composition of the assessment committee for the graduation project in order to secure assessment competence (see 6.3 for examples & guidelines, model R&G art. 25).
3.13 Graduating with honours or cum laude
-
The board of examiners publishes guidelines for granting an honours certificate to students who participate in an honours programme in the Rules and Guidelines (model R&G art. 29).
-
The board of examiners publishes guidelines for granting the predicate ‘cum laude’ in the Rules and Guidelines (model R&G art. 30) on three criteria: average grade, study duration and grade for graduation project. In addition, the R&G may limit the number of ECs for which students may receive an exemption or ‘pass’ (model R&G art. 30.1.b).
t Each faculty will have different values.
u Remote and hybrid exams are normally not allowed by boards of examiners, except in lockdown situations or for students with specific support needs.
v Multiple select: a multiple choice type where multiple options can be selected. Should only be used in specific situations. See here.
w ‘plagiarism scan’ can be digital in case of digital work, but also manual. Assessors always need to be vigilant about fraud by detecting suspiciously similar work or mistakes.
x Example: If a programme prepares students for writing a thesis in different courses in which students write a report that is assessed on writing skills, they can define ‘report’ as a separate assessment method that is explicitly mentioned in the assessment programme. This can help to make learning lines more explicit.
y Mandatory deadlines are considered summative assessments (or ‘examinations’ in terms of the TER).
z Teaching weeks are numbered p.w where p is the period number (1-4 are the regular periods, 5 is the summer period that is only used for resits), and w is the week number (1-10). September 1st typically falls in week 1.1. See https://www.tudelft.nl/en/student/education/academic-calendar
aa There are some exceptions. Examples: EE has two resit weeks, in weeks 5.2 and 5.3; ME uses eight ‘octals’ instead of four ‘quarters’.
bb Because not all first year bachelor students can be expected to find a room before the start of their first year.
cc Students who registered for the exam can enter the exam until 30 minutes after the start of the exam.
dd Examples: during lockdowns, 1) some lecturers administered digital exams in which students could answer one question at a time, without the possibility to access previous questions; 2) during online proctored digital exams, stress was caused because students e.g. feared that roommates or family members would walk into the room, or that a technical error would occur.
ee At IDE, a ‘NI’ (niet ingeleverd, not delivered) is chosen if a deliverable was not delivered (in time).
ff Example: if students participate in a project/computer lab, but do not hand in the summative assignments (in time).
gg This has at least been the case since 2006. In other Dutch universities, a (rounded) 5.5 is considered a pass grade.
hh In the model TER and model R&G, the term ‘interim examination’ is used. The term ‘interim’ is very appropriate for midterm exams that test the first half of a course, as opposed to the final exam that tests the entire course. However, for courses that consist of an exam and a practical, the term ‘interim’ is not appropriate. Therefore, we use the more neutral term ‘partial’ here.
ii In some programmes, the regular assessment of a course consists of a midterm and final exam, while the resit consists of one large exam that covers both regular exams. This is typical for BSc year 1 courses.
jj This has at least been the case since 2006. Other Dutch universities keep the last grade.
kk During the pandemic, the results of some online exams were changed into pass/fail instead of grades because of the lower reliability. This raised the question of whether a ‘pass’ should be considered higher than a 6.0 or not.
ll The Cohen-Schotanus procedure is advised for score-grade calculation adjustments, but not in resits (because the student population is not representative). ME uses Angoff cut-off score calculation beforehand combined with Hofstee cut-off score adjustments afterwards. See TU Delft assessment manual.16
mm In case students appeal against course results to their BoEx, the BoEx is legally obliged to forward the appeal to the EAB. After receiving the appeal, the EAB will request the BoEx to mediate between the student and the examiner.
nn Student work could be altered after returning it to the student, which hinders e.g. a second opinion.
oo Examples per assessment category:
o exam: exam, answer model (including grading guide)
o oral exam: used questions, cases, scenarios, etc.; grading guide/rubric/assessment form
o project/assignment: manual, mandatory template, rubric, assessment form
pp For group work / projects, consider e.g. checking the transfer of skills & knowledge by adding an individual exam on project related cases. However, consider study load as well.