Assessment:1 the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development.
Assessment for accountability:2 assessment of some unit (a department, program, or entire institution) to satisfy stakeholders external to the unit itself. Results are often compared across units. Always summative. Example: a school of education achieving a pass rate of 90 percent or better by its graduates on teacher certification tests in order to retain state approval.
Assessment for improvement:3 assessment that feeds directly, and often immediately, back into revising the course, program or institution to improve student learning results. Can be formative or summative (see "formative assessment" for an example).
Benchmark:4 a description or example of candidate or institutional performance that serves as a standard of comparison for evaluation or judging quality.
Bloom’s taxonomy of cognitive objectives:5 six levels arranged in order of increasing complexity (1=low, 6=high):
1. Knowledge: recalling or remembering information without necessarily understanding it. Includes behaviors such as describing, listing, identifying, and labeling.
2. Comprehension: understanding learned material; includes behaviors such as explaining, discussing, and interpreting.
3. Application: the ability to put ideas and concepts to work in solving problems. It includes behaviors such as demonstrating, showing, and making use of information.
4. Analysis: breaking down information into its component parts to see interrelationships and ideas. Related behaviors include differentiating, comparing, and categorizing.
5. Synthesis: the ability to put parts together to form something original. It involves using creativity to compose or design something new.
6. Evaluation: judging the value of evidence based on definite criteria. Behaviors related to evaluation include: concluding, criticizing, prioritizing, and recommending.
Course-embedded assessment:6 an assessment that allows faculty to evaluate and improve approaches to instruction and course design in a way that is built into, and a natural part of, the teaching-learning process. Classroom assignments that are evaluated to assign students a grade are often used for assessment purposes. Can assess individual student performance or aggregate the information to provide information about the course or program; can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).
Direct assessment of learning:7 gathers evidence, based on student performance, that demonstrates the learning itself. Can be value added, related to standards, qualitative or quantitative, embedded or not, using local or external criteria. Examples: most classroom testing for grades is direct assessment (in this instance within the confines of a course), as is the evaluation of a research paper in terms of the discriminating use of sources. The latter example could assess learning accomplished within a single course or, if part of a senior requirement, could also assess cumulative learning.
Direct measures of learning:8 students (learners) display knowledge and skills as they respond directly to the instrument itself. Examples include: objective tests, essays, presentations, and classroom assignments.
External assessment:9 use of criteria (a rubric) or an instrument developed by an individual or organization external to the one being assessed. Usually summative, quantitative, and often high-stakes (see below). Example: GRE exams.
Formative assessment:10 the gathering of information about student learning, during the progression of a course or program and usually repeatedly, to improve the learning of those students. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative.
Goals for learning:11 goals are used to express intended results in general terms. The term describes broad learning concepts, for example: clear communication, problem solving, and ethical awareness.
"High stakes" use of assessment:12 the decision to use the results of assessment to set a hurdle that needs to be cleared for completing a program of study, receiving certification, or moving to the next level. Most often the assessment so used is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. Examples: at the secondary school level, statewide exams required for graduation; in postgraduate education, the bar exam.
Indirect assessment of learning:13 gathers student reflection about the learning or secondary evidence of its existence. Examples: a student survey about whether a course or program helped develop a greater sensitivity to issues of diversity, exit surveys, student interviews (e.g. of graduating seniors), and alumni surveys.
Individual assessment:14 uses the individual student, and his/her learning, as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement. Would need to be aggregated if used for accountability purposes. Examples: improvement in student knowledge of a subject during a single course; improved ability of a student to build cogent arguments over the course of an undergraduate career.
Institutional assessment:15 uses the institution as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally institution-wide goals and objectives would serve as a basis for the assessment. Example: how well students across the institution can work in multi-cultural teams as sophomores and seniors.
Institutional effectiveness:16 the measure of what an institution actually achieves.
Learning objectives:17 objectives are used to express intended results in precise terms. Further, objectives are more specific as to what needs to be assessed and thus are a more accurate guide in selecting appropriate assessment tools. Example: Graduates in Speech Communication will be able to interpret non-verbal behavior and to support arguments with credible evidence.
Learning outcomes (Outcome Behaviors):18 observable behaviors or actions on the part of students that demonstrate that the intended learning objective has occurred.
Local assessment:19 means and methods that are developed by an institution's faculty based on their teaching approaches, students, and learning goals. Can fall into any of the definitions here except "external assessment," of which it is the antonym. Example: one college's use of nursing students' writing about the "universal precautions" at multiple points in their undergraduate program as an assessment of the development of writing competence.
Measurements or methods of assessment:20 design of strategies, techniques and instruments for collecting feedback data that evidence the extent to which students demonstrate the desired behaviors.
Modifications:21 recommended actions or changes for improving student learning, service delivery, etc. that respond to the respective measurement evaluation.
Performance assessment:22 the process of using student activities or products, as opposed to tests or surveys, to evaluate students’ knowledge, skills, and development. Methods include: essays, oral presentations, exhibitions, performances, and demonstrations. Examples include: reflective journals (daily/weekly); capstone experiences; demonstrations of student work (e.g. acting in a theatrical production, playing an instrument, observing a student teaching a lesson); products of student work (e.g. Art students produce paintings/drawings, Journalism students write newspaper articles, Geography students create maps, Computer Science students generate computer programs, etc.).
Portfolio:23 an accumulation of evidence about individual proficiencies, especially in relation to learning standards. Examples include, but are not limited to, samples of student work such as projects, journals, exams, papers, presentations, and videos of speeches and performances.
Program assessment:24 uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally program goals and objectives would serve as a basis for the assessment. Example: how sophisticated a close reading of texts senior English majors can accomplish (if used to determine value added, would be compared to the ability of newly declared majors).
Qualitative assessment:25 collects data that does not lend itself to quantitative methods; results take the form of interpretive criteria or descriptions rather than numbers. Examples: ethnographic field studies, logs, journals, participant observation, and open-ended questions on interviews and surveys.
Quantitative assessment:26 collects data that can be analyzed using quantitative methods, such as numerical scores or ratings. Examples: surveys, inventories, institutional/departmental data, and departmental/course-level exams (locally constructed, standardized, etc.).
Reliability:27 reliable measures are those that produce consistent responses over time.
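The definition above speaks of "consistent responses over time." One common way to quantify that consistency (not specified in this glossary) is a test-retest correlation: administer the same instrument twice and correlate the two sets of scores. A minimal sketch, using hypothetical scores:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five students taking the same test twice,
# two weeks apart (illustrative values, not real data).
first_administration = [78, 85, 62, 90, 71]
second_administration = [80, 83, 65, 92, 69]

r = pearson_r(first_administration, second_administration)
# An r near 1.0 suggests the measure yields consistent responses over time.
```

A test-retest correlation is only one reliability index; internal-consistency measures such as Cronbach's alpha address a different kind of consistency and are not shown here.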
Rubrics (Scoring Guidelines):28 written and shared guidelines for judging performance that indicate the qualities by which levels of performance can be differentiated, and that anchor judgments about the degree of achievement.
Standards:29 sets a level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways. Examples: carrying on a conversation about daily activities in a foreign language using correct grammar and comprehensible pronunciation; achieving a certain score on a standardized test.
Summative assessment:30 the gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.
Student outcomes assessment:31 the act of assembling, analyzing and using both quantitative and qualitative evidence of teaching and learning outcomes, in order to examine their congruence with stated purposes and educational objectives and to provide meaningful feedback that will stimulate self-renewal.
Teaching-improvement loop:32 teaching, learning, outcomes assessment, and improvement may be defined as elements of a feedback loop in which teaching influences learning, and the assessment of learning outcomes is used to improve teaching and learning.
Validity:33 as applied to a test refers to a judgment concerning how well a test does in fact measure what it purports to measure.
Value added: the increase in learning that occurs during a course, program, or undergraduate education. Can either focus on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate, in the aggregate, more sophisticated writing skills than freshman papers). Requires a baseline measurement for comparison.
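The cohort version of the comparison above is simple arithmetic: subtract the cohort's baseline average from its later average. A sketch with hypothetical rubric scores (the 1-6 scale and the values are assumptions for illustration only):

```python
def mean(scores):
    """Average of a list of scores."""
    return sum(scores) / len(scores)

# Hypothetical writing-rubric scores (1-6 scale) for one cohort of five
# students, measured at program entry (baseline) and again at graduation.
pre_scores = [2, 3, 2, 4, 3]
post_scores = [4, 5, 3, 5, 5]

value_added = mean(post_scores) - mean(pre_scores)  # cohort-level gain
# Here the cohort's average rubric score rose by 1.6 points over the baseline.
```

Without the baseline (`pre_scores`) measurement, only the final level of performance is known, not the increase in learning.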
1. AAC&U Peer Review, Winter/Spring 2002, Beyond Confusion: An Assessment Glossary, http://www.aacu.org/peerreview/pr-sp02/pr-sp02reality.cfm, accessed July 2, 2004.
2. Glossary of Assessment Terms, http://www.potsdam.edu/IR/Assessment%20Glossary%20of%20Terms.pdf
3. Bloom, B.S. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. White Plains, N.Y.: Longman.
4. Commission on Higher Education. Framework for Outcomes Assessment. (1996) Philadelphia: Commission on Higher Education of the Middle States Association of Colleges and Schools.
5. Palomba, C.A., & Banta, T.W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.
6. National Council for Accreditation of Teacher Education. (2000). NCATE 2000 Standards: Glossary of NCATE Terms. Washington, DC: Author.
©2004 The George Washington University