Cerro Coso Community College is committed to the ongoing assessment of student learning in academic programs and student services through a systematic, college-wide assessment plan. The results of assessment provide clear evidence of student learning and are used to make further improvements to instruction and services.
Student learning outcome assessment is an activity in which institutional and instructional effectiveness is certified by evidence of student learning. Specific measurable learning behaviors are identified and assessed, and the results of the assessment are used to improve programs, courses, and services. Assessment, in this context, is not an evaluation of individual students or faculty.
There are several other concepts implicit in assessment:
It is a process by which individual student learning outcomes are defined at the institutional, program, and course level. For a particular outcome, expected student achievement is compared with actual outcomes, using predetermined benchmarks. If the results are lower than what has been determined to be acceptable, a plan to improve student learning is developed and implemented.
Assessment, in this context, is not related to grades or faculty evaluation. Although students provide evidence of learning, this is not an assessment of individuals, but an assessment of curriculum design and institutional best practices, with the goal of ensuring that students are learning successfully.
Self-assessment is a natural extension of instruction and student services, and all members of the College share in this responsibility. It is a means to an end, with the result being continuous improvement in student learning. Student populations are becoming more diverse, and a rapidly changing employment economy makes it challenging to meet all students’ needs effectively. Consequently, the teaching methods of today may not work as well for tomorrow’s learners, and we need to continually assess what is working and what requires improvement. Another trend that makes self-assessment a natural academic activity is that the culture of teaching and learning is shifting from independence and autonomy to interdependence and collaboration; intra-departmental, collaborative assessment is a natural extension of this culture. We want to ensure that students are learning, so we should be interested in verifying this. Finally, we are accountable to external organizations and to students, as consumers, for our learning effectiveness. Assessment certifies the quality of the education we offer.
Some people fear that student learning outcome assessment is a slippery slope, and that if we "give in" to it, instruction will eventually become heavily regulated by the Department of Education. In truth, if we do nothing or are slow to implement assessment, such regulation will be a foregone conclusion. Rather, it is by thorough, self-initiated assessment that we will retain our autonomy and, thus, the quality of our instruction. Regional accrediting agencies are our allies in this effort and are working diligently on our behalf. The Accrediting Commission for Community and Junior Colleges (ACCJC) requests reports about our progress, not to micro-manage us, but to make the case to the Department of Education that we are conducting assessment effectively and that intervention by the DoE is not necessary. The more we accomplish, the more progress we can report to the ACCJC, and the better protected we are against regulation.
Several models and approaches to assessment have been discussed during faculty flex days, faculty chair meetings, and learning outcome workshops. Several major themes have emerged from these discussions:
Faculty or Department Chairs assume primary responsibility for all aspects of student learning outcome or administrative unit outcome assessment, although the process should be collaborative within departments and/or programs, and it may be necessary to rely more heavily on particular faculty members who have more expertise in a course's subject matter. The ACCJC is interested in seeing evidence of collaboration and dialog, so it is important to maintain detailed department meeting minutes evidencing this discussion.
The Student Learning Outcome Coordinator (SLOC) provides college-wide leadership in the implementation of student learning outcome assessment. This includes the following:
The SLOC should have a strong understanding of curriculum, program review, and accreditation standards and is a member of the Curriculum and Instruction Council. The following is a list of other skills identified as necessary for SLOCs, based on input from SLOCs, curriculum chairs, and administrators throughout California:
Cerro Coso has access to the District Institutional Researcher 2 days per month for support with unit plan, program review, and student learning outcome assessment data. However, this support is not adequate, especially with respect to the need for a researcher's guidance on the design of effective assessment studies. Although many faculty are familiar with research practices, few have any experience with educational research. We need a researcher who is dedicated to our campus to provide guidance in crafting assessments that are valid and reliable and to assist in the collection of data that is not easily attainable through classroom-embedded assessments or through Oracle Discoverer. It is important that we have a researcher who is a member of our college culture and understands the complexities of serving students across multiple sites over a large geographic area.
There are 3 primary phases to outcome assessment:
Student Learning Outcomes identify what students can DO to demonstrate that they are learning. There should be clear linkages between student behavior, the production of a learning artifact, and assessment of that artifact. Other characteristics of student learning outcomes include:
Student learning outcomes should be defined for:
Ideally, program learning outcomes should be defined first, resulting from input from advisory committees or academic organizations for the discipline. Course learning outcomes should emerge from program learning outcomes. A matrix is useful in presenting how courses align or map to program learning outcomes.
Administrative Unit Outcomes (AUOs) identify what students (or clients) will experience or receive as a result of a given service. AUOs may also be business related, identifying particular goals related to efficiency or achievement.
To be fully descriptive and useful, the structure of a student learning outcome includes:
Conditions. In our course outlines of record and program documents, the condition is either "upon successful completion of the program" or "upon successful completion of the course."
Outcomes. We refer to Bloom's Taxonomy of Educational Objectives (see Appendix A) for suggestions about appropriate observable outcomes (although Bloom's is not an exhaustive list). Bloom organized outcomes into three domains: cognitive, psychomotor, and affective. The cognitive domain relates to knowledge, the psychomotor domain relates to skills, and the affective domain relates to attitudes and values. If possible, we favor a set of outcomes that draws from each domain, although the psychomotor domain may not be appropriate for all programs or courses. Each of these domains has outcomes further organized according to depth of processing. We favor higher level outcomes that demonstrate critical thinking, a high degree of skill mastery, or personal integration of attitudes and values. Such higher level outcomes are listed in the right columns of the outcome tables.
Acceptable Results. It is also useful to determine what the acceptable benchmark of student achievement will be. This has nothing to do with students passing courses or obtaining credit. Although we are measuring student learning in assessment, the objective is to determine how well we are doing with respect to instruction or student services. The question to be considered is: at what level would we determine that there is nothing more we can do to improve student learning? Some student success factors are outside of our control, so 100% student success is not realistic. Instead, some level below 100% will be appropriate, perhaps 90%, 85%, or 80%.
The determination of what will be acceptable is dependent upon many factors and, at first, may have to be a best guess among departments and program areas. That benchmark may differ from department to department, and it may differ between courses within a department. It may even differ between outcomes within a single course. An illustration of why this may differ is the following:
In some programs, entry level courses may have greater attrition than advanced courses because some students likely discover sooner rather than later that the program is not a good fit for their interests or aptitudes. This is a factor over which we have no control. Defining 75% as an acceptable result for assessment may be appropriate for an entry level course, during which many students are determining whether they are really interested in that program of study, whereas 95% might be appropriate for the capstone course of the same program because presumably by that time, students are confident about their academic goals. We would expect greater success, given the same quality of instruction.
Again, you are determining the point at which you believe institutional enhancements will no longer improve the results. This benchmark will inform you about what to do with the assessment data—make improvements or congratulate yourselves. There isn't a science to this. Determining appropriate levels is best achieved through continuous dialog within your department, as well as reassessment of the criteria after an assessment cycle.
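The benchmark logic described above amounts to a simple comparison between an assessed success rate and the level a department has agreed upon. The following is a minimal sketch, with the course types and benchmark percentages invented purely for illustration; real benchmarks emerge from departmental dialog:

```python
# Hypothetical benchmarks, invented for this example; in practice these
# are set through departmental discussion, as described above.
BENCHMARKS = {"entry-level course": 0.75, "capstone course": 0.95}

def interpret_result(course_type, success_rate):
    """Compare an assessed success rate against the benchmark for the
    course type and return the recommended next step."""
    benchmark = BENCHMARKS[course_type]
    if success_rate >= benchmark:
        return "met benchmark: discuss whether the criterion was set too low"
    return "below benchmark: develop and implement an improvement plan"

interpret_result("entry-level course", 0.80)  # meets the 75% benchmark
interpret_result("capstone course", 0.90)     # falls short of the 95% benchmark
```

Note that the same 90% result would meet the entry-level benchmark but fall short of the capstone benchmark, which is exactly why the acceptable level may differ between courses.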
Assessment Artifacts. Finally, the student learning outcome assessment definition needs to specify how the outcome will be measured. This includes an artifact and a method for scoring the quality of that artifact. Examples of common assessment artifacts include:
The artifact(s) chosen for the assessment should be appropriate for the outcome verb. For example, a learning outcome of describe is better measured by an essay than by a multiple choice exam. Another consideration in selecting an artifact is the relative ease or difficulty with which the assessment can be conducted. Exams and surveys are easier to administer than portfolio assessments that are scored with a rubric. Departments should give careful thought to choosing an assessment that effectively measures the learning outcome but is also reasonable to administer. An ideal assessment definition that is never implemented has little value.
Assessment Scoring. Some of the above artifacts can be simply scored for correctness, as is the case with multiple choice exams. Rubrics are appropriate for scoring projects, portfolios, essays, speeches, performances, skill demonstrations, critiques, or essay exams. Response scales, such as Likert (respondents choose Strongly Agree, Somewhat Agree, Neutral, Somewhat Disagree, Strongly Disagree) may be useful in scoring surveys, interviews, or critiques. A scale might also be used to score an artifact holistically.
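As a simple illustration of scoring holistically with a Likert-style scale, the five responses named above can be mapped to numbers and averaged. The numeric mapping and the sample responses below are invented for the example:

```python
# Invented numeric mapping for the five Likert responses named above.
LIKERT = {
    "Strongly Agree": 5, "Somewhat Agree": 4, "Neutral": 3,
    "Somewhat Disagree": 2, "Strongly Disagree": 1,
}

def holistic_score(responses):
    """Average a set of Likert responses into one holistic score (1 to 5)."""
    return sum(LIKERT[r] for r in responses) / len(responses)

holistic_score(["Strongly Agree", "Somewhat Agree", "Neutral"])  # (5 + 4 + 3) / 3 = 4.0
```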
Assignment or course grades are not a valid means of assessing student learning outcomes for the following reasons. Course grades and many assignments reflect multiple skills and outcomes; we need to tease out a specific outcome for measurement. Course grades may also reflect criteria that have nothing to do with course learning outcomes but are imposed within a course to motivate participation and the development of a learning community. Finally, grades are an individual evaluation, whereas outcome assessment is collaborative and its results are generalized.
However, certain types of course assignments can be leveraged for student assessment AND course assessment. To do so, you would need to ensure that the same assignments and measuring tools are used in every single section of a course over multiple semesters and among all faculty. There must be a way to tease out a specific outcome and assess only that outcome.
The following are examples of several complete outcome statements, where purple is the condition, green is the outcome, blue is the acceptable result, and orange is the assessment tool and method:
Upon successful completion of the course, students will be able to:
Upon successful completion of the program, students will be able to:
Upon graduation from the institution, students will be able to:
Student learning outcomes are identified in the appropriate curriculum documents, such as the program curriculum form or the course outline of record and are approved by the Curriculum and Instruction Council (CIC) via the approval of those documents. CIC and the SLOC are faculty resources to provide input and guidance on the crafting of outcomes so that the outcomes are observable, measurable, and use higher order learning domains (critical thinking) whenever possible. Bloom's Taxonomy is recommended as a resource for the selection of outcome verbs.
CIC requires that ALL student learning outcomes have assessment statements included in all new or revised course outlines of record and program documents. Assessment statements simply follow as a second sentence in the Student Learning Outcome Assessment sections of the CORs and program documents (see the above examples).
The Assessment Study is the process by which a learning outcome is actually measured and the results analyzed. It is important to understand that only 1 outcome is assessed in a particular study.
This phase occurs over an appropriate period of time, to allow data to be collected from a sufficient sample. For the assessment of course student learning outcomes, this is usually 2-3 semesters. For program learning outcomes, it could be 2-3 years. There are 3 steps to the Assessment Study phase:
In the previous phase, the assessment artifact, scoring method, and possibly the criteria for success will have already been defined. At this point, however, departments or program areas will need to work out the details of how the assessment will be conducted. The following issues/questions should be considered:
With thorough planning, the data collection process is fairly straightforward. There are a few points of note, however:
After tabulating the results and having already determined a benchmark of success, it will be clear whether students are achieving the outcome above, at, or below the expected level. If the result is at or above the expected level, congratulations are in order! This implies that there is nothing department faculty can do to improve the result. However, it may be worthwhile to discuss whether the criterion was set too low. This may be obvious if the department faculty can identify practices that could improve the result further.
If the result is lower than expected, there should be discussion about why that is the case and what can be done to improve the result. This is where the identification of other data in association with the outcome data is useful. If on-site courses have a better result than online courses, what can be done to improve student learning in online sections? If results are better for 16-week semester courses than for 8-week summer courses, is there a way to improve the outcome for summer courses? Perhaps the solution is that the course should not be offered during the summer because there is not enough time on task. If one instructor produced better results than others, what is that instructor doing that should be replicated throughout the department?
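The subgroup comparisons described above (on-site versus online, 16-week versus 8-week) amount to tabulating the outcome result separately for each group. A minimal sketch follows, with the per-student records invented for illustration:

```python
# Hypothetical per-student records (delivery mode, outcome met?),
# invented for this example.
records = [
    ("on-site", True), ("on-site", True), ("on-site", True), ("on-site", False),
    ("online", True), ("online", False), ("online", False), ("online", False),
]

def success_by_group(records):
    """Return the share of students meeting the outcome in each subgroup."""
    totals, met = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        met[group] = met.get(group, 0) + int(ok)
    return {group: met[group] / totals[group] for group in totals}

rates = success_by_group(records)   # {'on-site': 0.75, 'online': 0.25}
# Flag subgroups that trail the strongest one by a wide margin (threshold invented).
gaps = [g for g, r in rates.items() if max(rates.values()) - r > 0.20]
```

A gap like the one flagged here would prompt the departmental discussion described above, not a conclusion on its own.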
Please note that this data should not be used to penalize faculty or to point out failures. It should only be used to identify best practices and to implement what works well more consistently. This is a constructive process, and faculty should approach it in that spirit. (This is also a good time to point out that while faculty are asked to discuss student learning outcome assessment as part of the Faculty Evaluation process, this should simply be a discussion of the instructor's involvement in the process. The results of assessment are not included in faculty evaluation.)
Based on a collaborative departmental process, the results should be analyzed and a plan for improvement developed. Be sure to take detailed minutes of all meetings in order to provide evidence of collegial dialogue.
If a plan to improve student learning was developed, it should be implemented and reassessed in a new Assessment Study to verify that student learning has, indeed, improved. As previously mentioned, assessment is an ongoing and cyclical activity. If the results of the previous study were acceptable, the next Assessment Study should focus on a different outcome.
Assessment results and plans for improvement must be integrated into our other institutional plans and processes. Because Cerro Coso Community College exists so that students may learn, there must be a link between the results of Assessment Studies and everything else that we do at Cerro Coso.
Assessment results and plans should be included in the Department Unit Plan and in Program Review. The Unit Plan is included in the Educational Master Plan, which drives the College's Technology Plan, the Staffing Plan, the Facilities Master Plan, and the College's budget. Some improvements to student learning can be made with instructional practices, but sometimes institutional support is needed, and this process accomplishes that.