
How Technology Supports Better Assessment

Adaptive assessment facilitates differentiated learning


As we move to a model in which learners have options in how they learn, assessment takes on a new role: diagnosing how best to support an individual learner. This new role should not be confused with computerized adaptive testing, which has been used for years to present examinees with different items depending on their responses to previous items, yielding more precise estimates of ability from fewer test items.
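
To make the mechanics concrete, the following sketch shows the select-respond-update loop at the heart of computerized adaptive testing. It assumes a one-parameter (Rasch) response model, a maximum-information selection rule, and a simple Newton-step ability update; these are illustrative simplifications, not a description of any particular state's test.

    import math

    def p_correct(theta, difficulty):
        # Rasch model: probability that a student with ability theta
        # answers an item of the given difficulty correctly.
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    def next_item(theta, unused):
        # Under the Rasch model, the most informative item is the one
        # whose difficulty is closest to the current ability estimate.
        return min(unused, key=lambda b: abs(b - theta))

    def update_theta(theta, responses):
        # One Newton-Raphson step toward the maximum-likelihood ability
        # estimate, given the (difficulty, 0/1-correct) pairs seen so far.
        residual = sum(u - p_correct(theta, b) for b, u in responses)
        information = sum(p_correct(theta, b) * (1 - p_correct(theta, b))
                          for b, _ in responses)
        theta += residual / information
        return max(-4.0, min(4.0, theta))  # clamp while all answers agree

    def adaptive_test(item_bank, answer, max_items=10, target_se=0.4):
        theta, responses, unused = 0.0, [], list(item_bank)
        while unused and len(responses) < max_items:
            b = next_item(theta, unused)
            unused.remove(b)
            responses.append((b, answer(b)))  # 1 if correct, 0 if not
            theta = update_theta(theta, responses)
            info = sum(p_correct(theta, d) * (1 - p_correct(theta, d))
                       for d, _ in responses)
            if 1.0 / math.sqrt(info) < target_se:  # precise enough: stop
                break
        return theta, len(responses)

    # Example: simulate an examinee whose true ability is 1.0.
    import random
    random.seed(0)
    bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
    theta, items_used = adaptive_test(
        bank, lambda b: int(random.random() < p_correct(1.0, b)))

Because each item is chosen to be maximally informative at the current estimate, the test homes in on a student's ability with fewer items than a fixed-form test would need.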

Adaptive assessment has a different goal. It is designed to identify the next kind of learning experience that will most benefit the particular learner. The School of One demonstration project (see the sidebar on the School of One in the Learning section) used adaptive assessment to differentiate learning by combining information from inventories that students completed on how they like to learn with information on students' actual learning gains after different types of experiences (working with a tutor, small-group instruction, learning online, learning through games). This information was used to generate individual "playlists" of customized learning activities for every student.
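
The blending of stated preferences with observed gains can be pictured in a few lines of code. The modality names, 0-to-1 scales, and weights below are hypothetical illustrations, not the School of One's actual algorithm.

    # Hypothetical sketch: rank learning modalities for one student by
    # blending a self-reported preference inventory with observed gains.
    # Modality names, 0-1 scales, and the 0.3/0.7 weights are assumptions.

    def build_playlist(preferences, observed_gains, weight_on_gains=0.7):
        scores = {}
        for modality, gain in observed_gains.items():
            pref = preferences.get(modality, 0.5)  # neutral default
            scores[modality] = ((1 - weight_on_gains) * pref
                                + weight_on_gains * gain)
        # Highest-scoring modalities lead the student's playlist.
        return sorted(scores, key=scores.get, reverse=True)

    playlist = build_playlist(
        preferences={"tutor": 0.9, "small_group": 0.6,
                     "online": 0.4, "games": 0.8},
        observed_gains={"tutor": 0.55, "small_group": 0.40,
                        "online": 0.70, "games": 0.35},
    )
    # -> ['tutor', 'online', 'games', 'small_group']: here the measured
    #    gains outweigh the student's stated preference for games.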

An example of adaptive assessment in higher education is Carnegie Mellon's Open Learning Initiative (OLI), described in the sidebar on Meshing Learning and Assessment in Online and Blended Instruction.

Universal Design for Learning improves accessibility

Technology allows the development of assessments designed according to Universal Design for Learning (UDL) principles, making them more accessible, effective, and valid for a more diverse population of students, including students with disabilities and English learners. (See the sidebar on Universal Design for Learning in the Learning section.)

Most traditional tests are written in English and can be taken only by sighted learners who are fluent in that language. Technology allows presentation and assessment through alternative representations of the same concept or skill and can accommodate a range of student disabilities and strengths. Moreover, the option of presenting information through multiple modalities enlarges the proportion of the population that can be assessed fairly.
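
One way to picture this is a single assessment item stored with several equivalent representations, from which the delivery system chooses one that matches the student's profile. The representation labels and profile fields below are hypothetical; they sketch the data structure, not any particular testing platform.

    # Hypothetical sketch: one item, several equivalent representations;
    # delivery chooses the first representation the student can use.

    ITEM = {
        "id": "sci-042",
        "construct": "interpreting a food web",
        "representations": {
            "text_en": "Which organism in this food web is a producer?",
            "audio_en": "audio/sci-042.mp3",        # spoken version
            "tactile_graphic": "braille/sci-042",   # for blind learners
            "text_simplified": "Which living thing makes its own food?",
        },
    }

    def choose_representation(item, profile):
        # Preference order depends on the student's documented needs.
        if profile.get("blind"):
            order = ["tactile_graphic", "audio_en"]
        elif profile.get("english_learner"):
            order = ["text_simplified", "audio_en", "text_en"]
        else:
            order = ["text_en", "audio_en"]
        for key in order:
            if key in item["representations"]:
                return item["representations"][key]
        raise LookupError("no accessible representation for this student")

    print(choose_representation(ITEM, {"english_learner": True}))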

Technology also can support the application of UDL principles to assessment design. For example, the Principled-Assessment Designs for Inquiry (PADI) system developed by Geneva Haertel, Robert Mislevy, and associates (Zhang et al., 2010) is being used to help states develop science assessment items that tap the science concepts the states want to measure while minimizing the influence of extraneous factors such as general English vocabulary or vision. Technology can make this labor-intensive work more efficient and provides a record of all the steps taken to make each assessment item accessible and fair for the broadest possible range of students.

Technology speeds development and testing of new assessments

One challenge associated with new technology-based assessments is the time and cost of developing them, testing them for validity and reliability, and implementing them. Here, too, technology can help. When an assessment item is developed, it can be field-tested automatically by placing it in a web-based learning environment where thousands of students respond to it in the course of their online learning. Data collected in this way can help clarify the inferences that can be drawn from student performance and can be used to improve features of the assessment task prior to large-scale use.
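
The statistics computed from such field-test data are often the standard classical-test-theory measures of item difficulty and discrimination, sketched below. The data layout is an assumption for this sketch; the formulas themselves are standard.

    import math

    def item_statistics(responses, item_id):
        # responses: one dict per examinee, mapping item id -> 1 (correct)
        # or 0 (incorrect).
        scores = [r[item_id] for r in responses]
        n = len(scores)

        # Difficulty (p-value): the share of examinees answering correctly.
        p_value = sum(scores) / n

        # Discrimination: point-biserial correlation between the item score
        # and the rest of the test (total minus this item), so the item is
        # not correlated with itself.
        rest = [sum(r.values()) - r[item_id] for r in responses]
        mean_s, mean_r = sum(scores) / n, sum(rest) / n
        cov = sum((s - mean_s) * (t - mean_r)
                  for s, t in zip(scores, rest)) / n
        sd_s = math.sqrt(sum((s - mean_s) ** 2 for s in scores) / n)
        sd_r = math.sqrt(sum((t - mean_r) ** 2 for t in rest) / n)
        discrimination = cov / (sd_s * sd_r) if sd_s and sd_r else 0.0
        return p_value, discrimination

Items with extreme difficulty values or near-zero discrimination would be flagged for revision before large-scale use.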

Technology enables broader involvement in providing feedback

Some performances are so complex and varied that we do not have automated scoring options at present. In such cases, technology makes it possible for experts located thousands of miles away to provide students with authentic feedback. This is especially useful as educators work to incorporate authentic problems and access to experts into their instruction. The expectation of having an audience outside the classroom is highly motivating for many students. Students can post their poems to a social networking site or make videotaped public service announcements for posting on video-sharing sites and get comments and critiques. Students who are developing design skills by writing mobile device applications can share their code with others, creating communities of application developers who provide feedback on each other's applications. Ultimately, their success can be measured by the number of downloads of their finished applications.

For many academic efforts, the free-for-all of the Internet would not provide a meaningful assessment of student work, but technology can support connections with online communities of individuals who do have the expertise and interest to judge students' work. Practicing scientists can respond to student projects in online science fairs. Readers of online literary magazines can review student writing. Professional animators can judge online filmmaking competitions. Especially in contests and competitions, rubrics are useful in communicating expectations to participants and external judges and in promoting consistent judgment.

Technology also has the potential to make both the assessment process and the resulting data more transparent and inclusive. Currently, only average scores and proficiency levels on state assessments are widely available through public and private systems. Even so, parents, policymakers, and the public at large can see schools' and districts' test scores and, in some instances, test items. This transparency increases public understanding of the current assessment system.

Technology could reduce testing done solely for accountability

Many educators, parents, and students are concerned about the amount of class time devoted to taking tests for accountability purposes. Students not only complete the tests their states require every year but also take tests of the same content throughout the year to predict how well they will perform on the end-of-year state assessment (Perie, Marion, & Gong, 2009).

When teaching and learning are mediated through technology, it is possible to reduce the number of external assessments needed to audit the education system's quality. Data streams captured by an online learning system can provide the information needed to make judgments about students' competencies. These data-based judgments about individual students could then be aggregated to generate judgments about classes, schools, districts, and states.
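
The aggregation step itself is simple. A minimal sketch, assuming each record already carries a mastery judgment produced by the learning system along with the student's school and district:

    from collections import defaultdict

    def proportion_proficient(records, unit):
        # records: one dict per student, e.g.
        #   {"student": "s1", "school": "A",
        #    "district": "D1", "proficient": True}
        # unit: "school" or "district" -- the level to roll judgments up to.
        totals = defaultdict(int)
        proficient = defaultdict(int)
        for r in records:
            totals[r[unit]] += 1
            proficient[r[unit]] += int(r["proficient"])
        return {u: proficient[u] / totals[u] for u in totals}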

West Virginia uses this strategy in its assessment of students' technology skills. As the example described in the sidebar on Moving Assessment Data from the Classroom to the State illustrates, the need for year-end summative tests can be reduced if the student data collected, analyzed, and recorded by formative, embedded assessments are valid, reliable, and of a manageable and actionable level of detail.
