

Assessment System Description

This document describes the unit’s assessment system in detail, including assessment of candidate performance and evaluation of unit operations.

Assessment System Introduction

Consistent with the conceptual framework's constructivist philosophy, the Center for Teaching and Learning (CTL) assessment system has evolved dynamically over the past decade as a result of relevant and meaningful experiences, including constituent and community feedback. The latest edition of the system is comprehensively designed to be purposely redundant in the measurement of standards, flexible enough to meet specific program requirements, and robust enough to provide unit-wide analyses for the purpose of improving program and unit (CTL) operations, including the evaluation of the system itself. Multiple assessment measures are collected on a transition timeline. Data are collected, aggregated, and reported at both the program level and the unit level for both initial and advanced programs, using CTL, state, and national standards as the criteria for measurement. Data reports and summaries are used by individual faculty to assess their own instruction; by candidates to assess their own learning and development; by program faculty and coordinators to assess program effectiveness; and by the Office of Research, Evaluation, and Assessment (OREA) to assess unit operations. Summaries are also shared, discussed, and reported in numerous ways at each level of the unit's governance organization, and are made public through the CTL/OREA websites. Feedback is sought at every level: candidates, faculty, the Professional Educators Advisory Board (PEAB), administration, and the public. The latest edition now includes a relational database, and the title has changed to the Comprehensive Data Management System (CDMS).

History of Assessment System Development

The CTL has been using assessment data to advise programs on candidates' performance and satisfaction since the late 1990s. Early on, grades and the results of first-year and third-year teacher and principal surveys were the primary sources of data. Summaries of these data were regularly shared with faculty and the professional community through PEAB meetings and periodically through faculty/principal/superintendent summit meetings.

The CTL began working on a more comprehensive electronic assessment system in May 2002, as it was revising and updating its conceptual framework and standards. Following a year of meetings, the CTL faculty and administration formally approved a new system in June 2003, including the use of LiveText (LT) within that system as an additional means of collecting electronic data on candidate performance. Although electronic portfolios were the focus at that time, what resulted was an electronic, course-based, standards-embedded assessment initiative. [Since that time, some programs have migrated toward an end-of-program assessment process, and portfolios have emerged as a means for documenting candidate standard compliance.] Several training workshops for faculty were conducted throughout August and September 2003, and an implementation plan was initiated. It was determined that volunteer faculty teaching EDF 301 "An Orientation to Teaching" would introduce LT to candidates from October 2003 through June 2004. A study was conducted to determine candidate acceptance of LT, as well as what additional knowledge and skills candidates had acquired as a result of its use. Candidate survey data indicated neither strong acceptance of nor strong dislike for the new electronic assessment requirements. Comparisons between candidates using LT and candidates not yet using it revealed only one significant difference: candidates using LT had a better understanding of the use of advanced multimedia tools (Northwest Regional Program Report, 2005, p. 84). Prior to publishing this report, data were reported to all faculty during a CTL meeting on February 8, 2004. At the same meeting, the science programs demonstrated the use of LiveText program data for program and course improvement, citing how the data reports in LiveText illustrated a need for course instructors to pay more attention to safety standards.

In February 2004, a full-time technology trainer was hired to assist faculty and candidates in designing and sharing artifacts using LT. During the fall quarter of 2004, twenty-three program evaluations were individually conducted to measure the extent to which each program was including electronic data. These data were evaluated at the same time that the Educational Benchmarking Inventory (EBI) and the results of the first- and third-year teacher surveys were presented. As a result, the Teacher PEAB, as well as the Advisory Council and Executive Board, suggested the CTL needed to increase productivity and improve satisfaction. Therefore, in February 2005 the CTL bought new technologies, and each program was assigned computers, scanners, and digital cameras specifically designated for candidate electronic artifact production. Over the next four months, implementation studies were conducted and meetings were held with each program, as well as with the CTL governing levels, to report on the comprehensiveness of the data being shared and reported. Further, on June 10, 2005 the CWU Board of Trustees approved a resolution demonstrating support for the full implementation of the electronic data initiative. In October 2005, the CTL hired and trained five student workers as "The CTL's LiveText Rescue Team" and established a LiveText assistance website to provide focused help to candidates having trouble navigating the LiveText system.

About the same time, a new disposition inventory was being developed and integrated into the LiveText system. The new disposition inventory was distributed to CTL faculty, deans, department chairs, program coordinators, and the advisory council. A pilot test to validate the inventory was conducted using EDF 301 candidates, EDCS 311 candidates, and student teaching candidates. In September 2005, validation of the disposition inventory was complete and results were shared with faculty. In addition, the reliability and validity results were presented in a disposition manuscript that was submitted for national peer review and later published.

Of course, the assessment system consisted of much more than LT candidate performance data and dispositional data. A graphic of the comprehensive assessment system was presented to and accepted by the CTL faculty on December 5, 2005. Other measurements used for program and unit operation assessments included trend analyses of the first- and third-year teacher and principal survey data and trend analyses of the EBI data for principals and teachers. Program completer data were collected and reported by Career Services for the years 2002-2012.

After receiving State and NCATE feedback in May 2007, a unit work plan was developed in partnership with OSPI and shared with all CTL faculty and governance committees. The Office of Research and Evaluation was renamed the Office of Research, Evaluation, and Assessment (OREA) and assigned the additional responsibility of assessment, with the immediate task of evaluating the current assessment system with respect to the feedback provided during the State and NCATE reviews. Moreover, OREA was allocated additional support positions by the Office of the Provost. As a result, data collected are now analyzed, summarized, and shared electronically using the website and, initially, a desktop version of the relational database.

The Assessment System

During Fall 2007, the CTL Assessment Committee approved a new web-based concept designed by OREA for displaying and managing data in the assessment system. The new system includes all advanced programs as well as the initial programs. All assessment information is displayed under the following categories: programs, standards, measurements, aggregated data & graphic summaries, CDMS, program reports, and unit reports. The change was made to more clearly illustrate the comprehensiveness of the CTL assessment system.

The CTL assessment system is based upon a formative and summative evaluation model specifically designed to systematically and redundantly assess standards relative to candidate knowledge, skills, dispositions, and program design. The system is redundant because standards are measured in multiple ways, at multiple times across transition levels, within programs, and across programs. This allows candidates to assess their competency development; faculty to assess their candidates' learning as well as their own teaching effectiveness (in relation to personal Student Evaluation of Instruction [SEOI] data); programs to assess the teaching and learning experience; and the unit to assess operations (which includes a summative evaluation of the assessment system).

The assessment system website graphic was designed to simplify the multidimensionality of the system by illustrating its categories in a linear fashion. The categories are transparently displayed in the graphic for easy access: anyone wishing to review a component of the system simply clicks on its category, and each category displays the system's internal mechanisms.

For example, under initial programs, the transition table found within the "measurement" category of the assessment system graphic illustrates when, where, and how standards are being measured. For the entry-level assessment process, the table shows the formative and summative measures used to determine admission to a program. Summative measures used to assess content knowledge standards during Transition I include GPA analyses and WEST B results. In addition, an entry data spreadsheet is used to measure admission demographics, and an admissions checklist is used to formatively track candidates' progress toward entry.

A formative measure of dispositions is also conducted during this admission transition. Summaries of these data, found in the graphic summaries category, are used at several levels: a) by the Admissions/Certification Office to effectively monitor candidate admission, b) by program faculty in making predictions about candidate success (found in program reports), and c) by the OREA in making inferences for the unit report. The aggregated data category offers reviewers (candidates, board members, faculty, administrators, the public, and external reviewers) open access to all of the data collected. If someone has a question that the OREA has not previously summarized, the data remain accessible for unique analyses; offices on campus and agencies off campus can formally request that the OREA analyze and summarize these data to answer queries not previously considered. Each transition level works in the same manner. All of the data collected for each level can be found in the "aggregated data" and "graphic data" categories, consistently recognizable by the measures used to collect the data and organized by transition. This method is used across the system so that it is easier to view how data are used to interpret standard acquisition, as described in each program report and ultimately in the unit report. Tabs placed at the top of each page allow reviewers to move from transition to transition.

The system is built using the common denominator of standards. Standards are measured in a formative fashion within courses, within programs, and again across programs. This redundancy is purposely designed into the system because standard acquisition is believed to be developmental, which is congruent with our conceptual framework. "Within" program measures include those data specific to content knowledge, pedagogical knowledge and skills, professional knowledge and skills, and student learning standards, which differ for each content discipline (e.g., elementary, reading, early childhood, math, theatre, health/fitness). This is why each program has a specific assessment process within the larger assessment system. This is also where state competencies are measured (within programs, using LiveText rubrics and grades). "Across" program measures include data typically collected on all residency teacher candidates and post-graduates (e.g., WEST B and follow-up surveys). In addition to these typical measures, standards are measured across programs using data collected in the Professional Education Program (PEP) and student teaching. PEP data include LT reports for all courses that all candidates are required to complete. Similarly, a Final Student Teaching Evaluation is used to formatively assess all candidates, particularly addressing elements of state standard V. The WA Teacher Performance Assessment (WTPA) is a summative measure used to ensure that candidates who pass student teaching meet all standards. At the advanced level, programs generally use within-program formative and summative evaluation techniques. In 2007, the Diversity and Equity Committee proposed a common course for all graduate programs, a survey, and a matrix. Using these instruments, advanced programs produced data by the end of the spring 2009 quarter that were comparable across programs.

Guided by the conceptual framework's 5 standards and 22 competencies, a matrix analysis was designed to align CTL, state, and national standards. Because most standards are related to the INTASC standards, the matrix was created to demonstrate the common linkages among institutional, state, and national standards. To simplify the standards analyses, most assessments refer to the institutional (CTL) standards, although some programs also reference state standards. Programs also reference state competencies as specified by program endorsements. These standards are cited and referenced in each program and across programs through course syllabi, program handbooks, LT artifacts, rubrics, and reports. In addition, OREA has examined how frequently and where standards are measured in the LiveText portion of the system, as sketched below. Taking the standards analysis one step further, OREA has summarized the standards measured in PEP core courses. Using standards as the criteria, analyses drill into data that are formatively assessed through course work and summarily interpreted by the program. The purpose is to assess how well candidates are formatively and summatively meeting standards; standards, after all, are the primary targets of measurement. Because there are so many different standards, the system keeps things manageable by systematically addressing all standards at all levels, using an assortment of redundant analyses and formative summaries to ensure that all candidates are being measured and are meeting all standards. This analysis can be found on the Assessment site under Aggregated Data & Graphic Summaries on the Endorsement Preparation tab.
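As a rough illustration of what such a frequency-and-location analysis involves, the Python sketch below counts how often each standard is cited across course rubrics and lists where each is measured. The course names and standard labels are hypothetical examples, not actual CTL data, and this is a generic sketch rather than OREA's actual procedure.

```python
# Hedged sketch of a standards-frequency analysis: count how often each
# standard is cited across course rubrics and record where it is measured.
# Course names and standard labels below are hypothetical illustrations.

from collections import Counter

# Hypothetical mapping of courses to the standards their rubrics cite.
rubric_citations = {
    "EDF 301": ["CTL-1", "CTL-2"],
    "EDCS 311": ["CTL-2", "CTL-3", "CTL-5"],
    "Student Teaching": ["CTL-1", "CTL-2", "CTL-3", "CTL-4", "CTL-5"],
}

# How frequently each standard is measured across all rubrics.
frequency = Counter(std for stds in rubric_citations.values() for std in stds)

# Where each standard is measured.
locations = {
    std: [course for course, stds in rubric_citations.items() if std in stds]
    for std in frequency
}

for std in sorted(frequency):
    print(f"{std}: cited {frequency[std]} time(s) in {', '.join(locations[std])}")
```

An analysis along these lines makes gaps visible at a glance: a standard cited in only one course, or not at all, signals where a program may want to redistribute its citations.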

Evaluation of the Assessment System

To be completely comprehensive, the assessment system must have a mechanism with which to evaluate the categories and components of the system itself. This examination is divided into four levels: a) the accuracy, fairness, and consistency of the measures; b) standard, artifact, and rubric analyses; c) technology effectiveness; and d) the usefulness of data presentation for program reporting. Evaluation of the assessment system is conducted annually and reported in the unit report, which is published and shared publicly on the assessment system website.

Accuracy, Fairness, and Consistency
The WEST B, WEST E, and WTPA are state-supported assessments that have been tested for validity and reliability by the agencies contracted to create the instruments. The data reported by the state for the new teacher assessment are pilot data being used to validate the new instrument. The Disposition Inventory was validated in September 2005 (see manuscript). LiveText inter-rater summary (accuracy) data are examined electronically and automatically on a continuous basis and updated as each new set of data is entered into the system. Inter-rater data can be found at the bottom of each report in the LT Exhibit Room; under each assessment report is an inter-rater summary report. Content validity of the LT data was established by each program as faculty discussed and agreed on the content to be assessed and how it is to be assessed. Efficacy of agreement is reviewed in section 3 of the annual program reports. To ensure fairness and consistency, faculty within programs have met and agreed on the type of artifact candidates must submit. No matter who teaches a particular course requiring an artifact, the artifact will be the same for all candidates. In addition, the rubric assessing a particular artifact will be the same regardless of which faculty member is assessing the artifact. This ensures that all candidates are assessed consistently and fairly on the standards associated with each artifact. The next section reveals how accuracy, consistency, and fairness are reflected in program reports. The final measurement examined is grades. The CTL has been able to disaggregate grades and WEST E scores by program, and the OREA has computed descriptive statistics to show how closely the scores align. This is used to estimate a prediction of candidate success, although program reports and the unit report also predict candidate success through the analysis of standard compliance (see section 4, Program Reports).
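To illustrate the kind of calculation an inter-rater summary performs, the sketch below computes simple percent agreement and chance-corrected agreement (Cohen's kappa) for two raters scoring the same artifacts. The rubric scale, scores, and function names are hypothetical assumptions; LiveText's actual computation is not documented here.

```python
# Minimal sketch of an inter-rater agreement summary, assuming two raters
# score the same artifacts on a 4-level rubric (1 = unsatisfactory ... 4 = exemplary).
# All scores below are hypothetical illustrations, not CTL data.

from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Exact-match agreement between two raters over the same artifacts."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores for ten artifacts, scored by two faculty raters.
rater_a = [4, 3, 3, 2, 4, 4, 3, 2, 3, 4]
rater_b = [4, 3, 2, 2, 4, 3, 3, 2, 3, 4]

print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.0%}")
print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")
```

With the hypothetical scores above, the raters agree on 8 of 10 artifacts (80%), and kappa is about 0.70, illustrating how a summary can flag rubrics or artifacts where raters diverge.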

Standards, Artifacts, Rubrics
The assessment system is evaluated annually with regard to how the standards are being assessed. A standards analysis using continuous LT data has been reported and shared with programs (see levels 2, 3, and 4 of the program reports). Prompts provided to programs, to which programs have provided responses, include:
1) Are the standards dispersed appropriately in your program?
2) Are all the standards represented as you wish them to be?
3) After reviewing this analysis, are there changes your program would recommend making to the way you cite standards or assess your candidates using LiveText?
4) Please examine all of your reports in the LiveText exhibit area and discuss the accuracy, consistency, and fairness of the data, as well as what improvements could be made in the program assessment rubrics, courses, artifacts, or reporting.
5) Using these data, please reflect upon your candidates' success in meeting standards.
6) Compare these data to the data provided in the WEST B and E charts that follow. Is there consistency in the rates of success?

An external consultant evaluated the LT standards, artifacts, and rubrics during the summer of 2007 and again in March 2008; the resulting evaluations are found under the unit report category of the assessment system (2007 Assessment System Evaluation; 2008 Assessment System Evaluation/LiveText Standards Assessment (C, P, PP, SL)).

Technology Effectiveness and Program Reporting
The unit report is a synthesis of the conclusions and interpretations written by each program, as represented by each of the program reports. Because one of the functions of the assessment system is the integrated use of technology, program reports have included statements of strengths and challenges relative to using the LT electronic system, as well as the web-based reporting system. In determining how to improve unit operations, the governance system continuously discusses recommendations from faculty, students, and our professional community regarding better ways to collect, analyze, and report accountability information. To promote a professional culture in which data are a regular part of the faculty conversation, effective as well as efficient use of technology is continuously being discussed and enhanced. Conversations about exploring a better electronic system have reached a point of consensus across the unit. As the university also grapples with this issue, representatives of the CTL assessment committee discuss information learned from program reports with the university's assessment committee. On February 6, 2008, the University's Information Technology Services unit issued a request for information (RFI) inviting vendors to submit information about new technologies that may meet university-wide accountability needs. The RFI calls for a comprehensive academic assessment system that will enhance academic program planning, tracking, and assessment, as well as academic student learning/outcomes assessment. Please see the unit report for specific information regarding the evaluation of our current technology and program report effectiveness.

Use of Data for Program Improvement
An examination of the assessment system category entitled "program reports" demonstrates how the CTL regularly and systematically uses data to evaluate the efficacy of its courses, programs, clinical experiences, and unit operations. Although faculty have been examining LiveText and WEST E data quarterly, and providing evidence of monitoring these data through program minutes and notes, an annual reporting system was approved by the CTL and implemented in AY 2007/08 for the purpose of improving the unit's systematic analyses and reporting efforts. Immediately following summer quarter, programs are now provided graphic summaries and short interpretations of both unit-wide and program-specific candidate and post-graduate performance data. The timeline will change after 2012, when data will be available on the new CDMS.

The summaries include: a Standard Citation Analysis, LiveText Rubric Reports, Candidate Success on Standards Across Courses or Dimensions, WEST B Trends, WEST E Trends, EBI Teacher and Principal Trends, First and Third Year Teacher Trends, Dispositional Analyses, a Final Student Teaching Evaluation LT Report, and Career Services Program Completer data. In return, program coordinators are asked to examine these summaries, summarize their own program data, and share these summaries with their respective faculties to facilitate discussions that ultimately culminate in responses to each set of summarized data. The culminating responses are syntheses of those conversations, which become evidence that the unit analyzes program data and performance data to initiate improvements where necessary. Programs are given the entire fall quarter to edit or update their reports as necessary. At the end of the spring quarter, program reports are locked and archived. At that time a final unit report is also conducted in LiveText.

The primary purpose of the program reports is to create a professional culture in which evidence and data are a regular part of faculty conversation, as well as to demonstrate concretely how the CTL uses data to update courses, improve programs, and evaluate unit operations, including the assessment system. The latter is completed after all of the program reports have been submitted: each program report is read, and a content analysis is conducted across all program reports.

The unit report is the last category of the Assessment System Graphic. The graphic depicts the unit report as the culminating analysis and report, which is fed back to programs, CTL governing committees, and PEABs, and is open to the public for comment. PEAB minutes verify that data are regularly shared and evaluated, resulting in recommendations for program and unit improvement. Please refer to the Unit Report category for details of the content analysis findings. A public survey has been developed using Qualtrics. This survey provides the academic community (candidates, faculty outside of the professional unit, and administrators), the professional community, and the interested public with an opportunity to comment in a systematic way and to assess the data, summaries, programs, and unit reports published on the web. As the graphic illustrates, the assessment system is an open process of continuous improvement: closed-loop, multilayered, comprehensive, efficient, reflective, and now relational.

Comprehensive Data Management System (CDMS): The Next Generation

Central Washington University's Office of Research, Evaluation, and Assessment (OREA) has designed a new relational database that focuses on individual students as they progress academically from high school student to teacher. Currently in a desktop version, the CDMS has over 5,600 candidate records. After we receive feedback from faculty and administrators during the Fall Quarter of 2012, we plan to place the CDMS within a new enterprise system being developed by the University's new Organizational Effectiveness Unit. This work has been supported by the University at the highest levels because of the comprehensive aggregate analyses we have produced to date, which demonstrate CWU's effectiveness in preparing the next generation of high-quality teachers for our state's and nation's workforce.

National and Washington State policymakers have called for fundamental changes to the system of educator preparation. The OREA plans to conduct research on teacher effectiveness now that we have a good-sized and growing data set, and because teacher education is being challenged by the public for greater efficacy in its research, accountability, and reporting. Our first challenge in meeting these increased demands is to ensure greater validity of our measurements and transparency of our results. Central Washington University, in partnership with the Professional Educators Standards Board (PESB) and the Washington Association of Colleges of Teacher Education, has designed a taxonomy that will guide one portion of our data efforts. The partnership expects every preparation program in the state to have its own system for collecting and reporting candidate records. To ensure output comparability, the partnership will need all state programs to produce identical data inputs, both in the use of definitions and in the structural variables to be evaluated. The result of this work thus far has been the design and implementation of a PESB annual Completer Table.

In preparation for the required Completer Table assignment, OREA has designed the new CDMS to furnish accurate and efficient internal and external data reporting. While the OREA staff and CTL faculty have successfully used a well-designed, standards-based assessment system that reports aggregate candidate performances on state tests and through LiveText for program preparation, the faculty have urged us in their annual reports to design a system that is user friendly and candidate based. Therefore, the new system records candidates' baseline performances before they enter our program, examines changes as they progress through the program, and associates what they learned with what they do during their careers as teachers; a schematic sketch follows below. In addition, we are exploring the means necessary to access the standardized exam scores of the students they teach. The CDMS will help us provide clearer evidence of which experiences result in better P-12 student learning.
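As a loose illustration of what a candidate-based relational design makes possible, the sketch below builds a toy schema in SQLite that links baseline data, in-program measures, and career outcomes through a shared candidate identifier. Every table, column, and record here is a hypothetical example; the actual CDMS schema is not described in this document.

```python
# Hedged sketch of a candidate-based relational database. All table and
# column names are hypothetical illustrations, not the real CDMS schema.

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the sketch
conn.executescript("""
CREATE TABLE candidate (
    candidate_id INTEGER PRIMARY KEY,
    name         TEXT,
    hs_gpa       REAL                -- baseline performance before program entry
);
CREATE TABLE program_measure (       -- one row per assessment event
    candidate_id INTEGER REFERENCES candidate(candidate_id),
    transition   TEXT,               -- e.g., 'I' (admission) through student teaching
    instrument   TEXT,               -- e.g., 'WEST B', 'LiveText rubric', 'WTPA'
    score        REAL
);
CREATE TABLE career_outcome (        -- post-graduate follow-up data
    candidate_id INTEGER REFERENCES candidate(candidate_id),
    survey       TEXT,               -- e.g., 'first-year teacher survey'
    rating       REAL
);
""")

# Hypothetical records for a single candidate.
conn.execute("INSERT INTO candidate VALUES (1, 'Hypothetical Candidate', 3.4)")
conn.execute("INSERT INTO program_measure VALUES (1, 'I', 'WEST B', 275)")
conn.execute("INSERT INTO career_outcome VALUES (1, 'first-year teacher survey', 4.2)")

# Because every table shares candidate_id, one query can follow a candidate
# from baseline, through program measures, to career outcomes.
query = """
SELECT c.name, c.hs_gpa, m.instrument, m.score, o.survey, o.rating
FROM candidate c
JOIN program_measure m ON m.candidate_id = c.candidate_id
JOIN career_outcome  o ON o.candidate_id = c.candidate_id;
"""
for row in conn.execute(query):
    print(row)
```

The design choice the faculty requested, candidate-based rather than aggregate-based, amounts to keying every table on the candidate rather than on the program, so aggregate reports become queries over individual records instead of the primary unit of storage.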


