As Minstrell's and others' work shows, through multimedia, interactivity, and connectivity it is possible to assess competencies that we believe are important and that are aspects of thinking highlighted in cognitive research. It also is possible to directly assess problem-solving skills; make visible sequences of actions taken by learners in simulated environments; model complex reasoning tasks; and do it all within the contexts of relevant societal issues and problems that people care about in everyday life (Vendlinski & Stevens, 2002).
Other technologies enable us to assess how well students communicate for a variety of purposes and in a variety of ways, including in virtual environments. An example of this is River City, a decade-long effort at Harvard University funded by the NSF. River City is a multi-user virtual environment designed by researchers to study how students learn through using it (Dede, 2009). This virtual environment was built as a context in which middle school students could acquire concepts in biology, ecology, and epidemiology while planning and implementing scientific investigations in a virtual world.
River City takes students into an industrial city in the late 19th century, when scientists were just beginning to discover bacteria. Each student is represented as an avatar and communicates with other student avatars through chat and gestures. Students work in teams of three, moving through River City to collect data and run tests in response to the mayor's challenge to find out why River City residents are falling ill. The student teams form and test hypotheses within the virtual city, analyze data, and write up their research in a report they deliver to the mayor.
Student performance in River City can be assessed by analyzing the reports that are the culmination of their experiences, and also by looking at the kinds of information each student and each student team chose to examine and their moment-to-moment movements, actions, and utterances. On the basis of student actions in River City, researchers developed measures of students' science inquiry skills, sense of efficacy as a scientist, and science concept knowledge (Dede, 2009; Dieterle, 2009). Materials and other resources have been developed to support educators in implementing River City in their classrooms.
As the River City example illustrates, just as technology has changed the nature of inquiry among professionals, it can change how the corresponding academic subjects can be taught and tested. Technology allows representation of domains, systems, models, data, and their manipulation in ways that previously were not possible. Technology enables the use of dynamic models of systems, such as an energy-efficient car, a recycling program, or a molecular structure. Technology makes it possible to assess students by asking them to design products or experiments, to manipulate parameters, run tests, record data, and graph and describe their results.
Another advantage of technology-based assessments is that we can use them to assess what students learn outside school walls and hours as well as inside. Assuming that we have standards for the competencies students must have and valid, reliable techniques for measuring these competencies, technology can help us assess (and reward) learning regardless of when and where it takes place.
The National Assessment of Educational Progress (NAEP) has designed and fielded several technology-based assessments involving complex tasks and problem situations (Bennett, Persky, Weiss, & Jenkins, 2007). One of these calls on students to interact with a simulation of a hot-air balloon (see sidebar).
Using Technology to Assess in Ways That Improve Learning
There is a difference between using assessments to determine what students have learned for grading and accountability purposes (summative uses) and using assessments to diagnose and modify the conditions of learning and instruction (formative uses). Both uses are important, but the latter can improve student learning in the moment (Black & Wiliam, 1998; Black et al., 2004). Concepts that are widely misunderstood can be explained and demonstrated in a way that directly addresses students' misconceptions. Strategic pairing of students who think about a concept in different ways can lead to conceptual growth for both of them as a result of experiences trying to communicate and support their ideas.
Assessing in the classroom
Educators routinely try to gather information about their students' learning on the basis of what students do in class. But for any question posed in the classroom, only a few students respond. Educators' insight into what the remaining students do and do not understand rests only on those students' facial expressions of interest, boredom, or puzzlement.
To solve this problem, a number of groups are exploring the use of various technologies to "instrument" the classroom in an attempt to find out what students are thinking. One example is the use of simple response devices designed to work with multiple-choice and true/false questions. Useful information can be gained from answers to these types of questions if they are carefully designed and used in meaningful ways. Physics professor Eric Mazur poses multiple-choice physics problems to his college classes, has the students use response devices to answer questions, and then has them discuss the problem with a peer who gave a different answer. Mazur reports much higher levels of engagement and better student learning from this combination of a classroom response system and peer instruction (Mazur, 1997).
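The response-system-plus-peer-instruction cycle Mazur describes can be pictured as a small algorithm: tally the votes for each answer option, then pair students who chose different options so that each pair must reconcile two viewpoints. The sketch below is purely illustrative (the student names, answers, and function names are invented); it is not the software behind any particular classroom response system.

```python
import random
from collections import Counter

def tally_responses(responses):
    """Count how many students chose each answer option."""
    return Counter(responses.values())

def pair_for_peer_instruction(responses):
    """Pair students who gave different answers; students who cannot
    be matched with a differing partner are returned separately."""
    students = list(responses)
    random.shuffle(students)  # avoid always pairing the same neighbors
    pairs, unpaired = [], []
    for s in students:
        match = next((u for u in unpaired if responses[u] != responses[s]), None)
        if match is not None:
            unpaired.remove(match)
            pairs.append((match, s))
        else:
            unpaired.append(s)
    return pairs, unpaired

# Hypothetical class of five students answering one multiple-choice item.
responses = {"Ana": "B", "Ben": "A", "Caro": "B", "Dev": "C", "Eli": "B"}
tally = tally_responses(responses)
pairs, leftover = pair_for_peer_instruction(responses)
```

A real system would also need to handle the case where nearly everyone agrees (few productive pairs exist), which is one reason instructors like Mazur reserve peer discussion for questions that split the class.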
Science educators in Singapore have adopted a more sophisticated system that supports peer instruction by capturing more complex kinds of student responses. Called Group Scribbles, the system allows every student to contribute to a classroom discussion by placing and arranging sketches or small notes (drawn with a stylus on a tablet or handheld computer) on an electronic whiteboard. One educator using Group Scribbles asked groups of students to sketch different ways of forming an electric circuit with a light bulb and to share them by placing them on a whiteboard. Students learned by explaining their work to others, and through providing and receiving feedback (Looi, Chen, & Ng, 2010).
Assessing during online learning
When students are learning online, there are multiple opportunities to exploit the power of technology for formative assessment. The same technology that supports learning activities gathers data in the course of learning that can be used for assessment (Lovett, Meyer, & Thille, 2008). An online system can collect far more, and far more detailed, information about how students are learning than manual methods can. As students work, the system can capture their inputs and collect evidence of their problem-solving sequences, knowledge, and strategy use, as reflected in the information each student selects or inputs, the number of attempts they make, the number of hints and feedback messages they receive, and how they allocate time across parts of the problem.

The ASSISTment system, currently used by more than 4,000 students in Worcester County Public Schools in Massachusetts, is an example of a web-based tutoring system that combines online learning and assessment activities (Feng, Heffernan, & Koedinger, 2009). The name "ASSISTment" blends tutoring "assistance" with "assessment" reporting to educators. The system was designed by researchers at Worcester Polytechnic Institute and Carnegie Mellon University to teach middle school math concepts and to provide educators with a detailed assessment of students' developing math skills and their skills as learners. It gives educators detailed reports of students' mastery of 100 math skills, as well as their accuracy, speed, help-seeking behavior, and number of problem-solving attempts. The ASSISTment system can identify the difficulties that individual students are having and the weaknesses demonstrated by the class as a whole so that educators can tailor the focus of their upcoming instruction.
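The kinds of interaction data described above — attempts, hints requested, correctness, and time on task — can be pictured as a simple per-problem log that the learning system fills in as the student works. The structure below is a hypothetical sketch for illustration only; it is not the actual ASSISTment data schema, and all names are invented.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProblemLog:
    """Illustrative record of one student's work on one problem."""
    student_id: str
    problem_id: str
    attempts: int = 0
    hints: int = 0
    correct: bool = False
    started_at: float = field(default_factory=time.time)
    finished_at: Optional[float] = None

    def record_hint(self):
        """The tutor gave the student one hint."""
        self.hints += 1

    def record_attempt(self, is_correct: bool):
        """The student submitted an answer."""
        self.attempts += 1
        self.correct = is_correct
        if is_correct:
            self.finished_at = time.time()

    def time_on_task(self) -> float:
        """Seconds spent so far (or total, once solved)."""
        end = self.finished_at if self.finished_at is not None else time.time()
        return end - self.started_at

# A student asks for one hint, answers wrong once, then answers correctly.
log = ProblemLog(student_id="s42", problem_id="fractions-07")
log.record_hint()
log.record_attempt(False)
log.record_attempt(True)
```

Aggregating such logs per skill and per class is what lets a system report accuracy, speed, help-seeking behavior, and attempt counts to educators.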
When students respond to ASSISTment problems, they receive hints and tutoring to the extent they need them. At the same time, how individual students respond to the problems and how much support they need from the system to generate correct responses constitute valuable assessment information. Each week, when students work on the ASSISTment website, the system "learns" more about the students' abilities and thus can provide increasingly appropriate tutoring and generate increasingly accurate predictions of how well the students will do on the end-of-year standardized test. In fact, the ASSISTment system has been found to be more accurate at predicting students' performance on the state examination than the pen-and-paper benchmark tests developed for that purpose (Feng, Heffernan, & Koedinger, 2009).
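One widely studied way for a tutoring system to "learn" about a student's abilities from a stream of responses is Bayesian knowledge tracing, which revises a running estimate of skill mastery after each answer. The sketch below shows one update step under that general approach; the parameter values (guess, slip, and learning rates) are invented for illustration and do not describe ASSISTment's actual model.

```python
def bkt_update(p_known: float, correct: bool,
               p_guess: float = 0.2,   # chance of answering right without the skill
               p_slip: float = 0.1,    # chance of answering wrong despite the skill
               p_learn: float = 0.15   # chance of acquiring the skill between items
               ) -> float:
    """One Bayesian-knowledge-tracing step: given the prior probability
    that the student has mastered a skill, return the revised probability
    after observing one correct or incorrect response."""
    if correct:
        evidence = p_known * (1 - p_slip)
        total = evidence + (1 - p_known) * p_guess
    else:
        evidence = p_known * p_slip
        total = evidence + (1 - p_known) * (1 - p_guess)
    posterior = evidence / total
    # Account for learning that may occur before the next opportunity.
    return posterior + (1 - posterior) * p_learn

# Start pessimistic, then observe a short answer sequence.
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

Because the estimate rises with correct answers and falls with errors, it can drive both the tutoring (when to stop giving hints) and the predictions of end-of-year test performance that the text describes.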