This webpage is part of the Evaluating State Accountability Systems Under ESEA tool, which is designed to help state educational agency (SEA) staff reflect on how the state's accountability system achieves its intended purposes and build confidence in the state's accountability system design decisions and implementation activities. Please visit the tool landing page to learn more about this tool and how to navigate these modules.
A key part of validating a theory of action is determining whether evidence confirms the assumptions and links between components that are designed to yield intended outcomes. A state's accountability system is both a measurement instrument that helps the public understand the degree to which schools meet the state's educational objectives and priorities and a policy lever to incentivize actions that help achieve those same objectives and priorities.1 To what degree is that happening? If a state can identify sufficient evidence that upholds the assumptions associated with indicator interactions, then it can more effectively argue that the results of the state's system of annual meaningful differentiation (AMD) are valid for identifying schools. The following self-reflection prompts provide an opportunity for a state to consider whether the interactions among the indicators, or the decision rules, of the state's system of AMD uphold the underlying rationale, and to determine whether the SEA can be sufficiently confident that the elements of the state's system of AMD (i.e., indicator interactions and decision rules) are working as intended.
Respond to the following prompts to engage in reflection on the way indicators interact:
- Read the claim, consideration, and key evidence checks; then examine the specific evidence available in your state.
- Reflect on whether you believe you have collected enough evidence to be confident in the claim stated or whether there is a need for further examination.
- Finally, respond to questions asking whether you have sufficiently explored the confidence claims below and believe that you have collected enough evidence to confirm those claims.
Some questions may be answered on the basis of opinion, whereas others will require an examination of data, supplemental analyses, or conversations with other members of your SEA.
You may print this webpage and use it as a template for note-taking if working with colleagues.
For summative rating systems (e.g., index-based systems), please see the reflection prompts below in Table 6.
Table 6. Confidence in the Operations and Results of Combining Indicators for Summative Rating Systems Reflection

**Claim 1: The indicator weights reflect the state's theory of action and stakeholder vision, as appropriate.**

- Consideration 1.1: Indicator weights or decision rules reflect appropriate stakeholder and constituent input.
- Consideration 1.2: Indicator weights are coherent with the policy intent and intended incentivized behaviors for the state's accountability system.

The Elementary and Secondary Education Act of 1965 (ESEA), as amended by the Every Student Succeeds Act (ESSA), requires that each state consult with key stakeholder groups that represent the range of constituents across the state when developing its state plan, which includes a description of the state's accountability system. However, some stakeholders' recommendations may not be appropriate to implement as is, given constraints such as high-stakes use, corruptibility, data access, and data collection. It is important that SEA staff supporting accountability have a clear understanding of how the indicator weights and interactions reflect the state's theory of action and the stakeholder feedback gathered as part of the ESEA consolidated state plan development process. For each set of claims, consider the following statements and explore the suggested evidence for index-based systems or non-summative systems.

| Reflection Prompts | Notes |
|---|---|
| **Key questions for the indicator:** How were stakeholder groups solicited for feedback? To what extent was this feedback incorporated when developing policy weights for the system? | |
| **Why is this important?** Soliciting stakeholder feedback and input is an important design step to ensure representative viewpoints are included. It is important to blend this feedback with the overall policy objective and theory of action for how indicators are weighted. This feedback can also be incorporated to improve the system over time. | |
| **Key evidence checks:** | |
| **Recommended next steps:** | |

| Reflection Prompts | Notes |
|---|---|
| **Key questions for the indicator:** What behaviors or next steps did you intend to promote based on the way in which indicators are weighted? | |
| **Why is this important?** An important aspect of a state's accountability system design is considering how people will interpret, use, and act upon data from the system. It is important to consider how weighting decisions influence changes in awareness and behavior. | |
| **Key evidence checks:** | |
| **Recommended next steps:** | |

Reflecting on your notes above, consider your confidence in responding to the reflection prompts below. If you answer "no" or are not confident in your response, use the notes from your discussion to determine next steps.

| Claim 1 Reflection Questions | Claim 1 Response |
|---|---|
| We have sufficiently explored the confidence claims above to understand how our indicator weights reflect the state's theory of action and stakeholder vision. | Yes / No |
| We have collected enough evidence to sufficiently address key questions and can confirm that the considerations associated with Claim 1 reflect the state's theory of action and stakeholder vision, as appropriate. | Yes / No |

**Claim 2: The empirical indicator weights reflect the intended state priorities and promote valid, fair, and reliable school ratings.**

- Consideration 2.1: Indicators can be compared and contrasted based on technical characteristics of the data.
- Consideration 2.2: Empirical indicator weights reflect intended policy weights and result in accountability signals as designed and intended.

How these indicators are combined in the form of weighting or decision rules plays a major role in how schools are differentiated and identified. Because of the variety of indicators and the diversity of data that may be used in systems of AMD, several technical considerations should be addressed. These might include examining the appropriateness of comparing and contrasting measures, ensuring policy weights are reflected in system operations, or identifying sources of volatility and error within and across indicators. For each set of claims, consider the following statements and explore the suggested evidence for summative rating systems or non-summative systems.

| Reflection Prompts | Notes |
|---|---|
| **Key questions for the indicator:** Are the measures and data that comprise each indicator functioning appropriately for use? Are the measures across indicators appropriate for use as part of a state's system of AMD? | |
| **Why is this important?** These technical characteristics include both the process- and outcome-related characteristics of the data. Process-related characteristics may include things like policies related to data collection, data collection processes, ownership of data, corruptibility of data, and cleanliness of data. Outcome-related characteristics may include things like the shape, skew, range, mean, and mode associated with measures after data are collected and cleaned (see the sketch following this table). Identified concerns associated with process-related characteristics can introduce uncertainty when trying to compare schools in the aggregate. Increasing consistency in policy interpretation, establishing systematic mechanisms for data collection, or increasing training in collaboration with districts can help address data entry and process concerns. | |
| **Key evidence checks:** | |
| **Recommended next steps:** | |

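Where a state keeps school-by-indicator data in tabular form, a quick distributional profile can make the outcome-related checks above concrete. The following is a minimal sketch, assuming a pandas DataFrame with one row per school; the indicator column names (e.g., `achievement`, `growth`) are hypothetical placeholders, not actual state data elements.

```python
# Minimal sketch: profile the outcome-related characteristics named above
# (shape via skewness/kurtosis, range, mean, and mode) for each indicator.
# Column names are illustrative placeholders only.
import pandas as pd
from scipy import stats

INDICATOR_COLS = ["achievement", "growth", "el_progress", "grad_rate", "sqss"]

def indicator_diagnostics(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in INDICATOR_COLS:
        values = df[col].dropna()
        rows.append({
            "indicator": col,
            "n_schools": len(values),
            "mean": values.mean(),
            "mode": values.mode().iloc[0],       # first mode if multimodal
            "min": values.min(),
            "max": values.max(),
            "range": values.max() - values.min(),
            "skew": stats.skew(values),          # asymmetry of the distribution
            "kurtosis": stats.kurtosis(values),  # excess kurtosis (tail weight)
        })
    return pd.DataFrame(rows).set_index("indicator")

# Example usage with a hypothetical school-level file:
# diagnostics = indicator_diagnostics(pd.read_csv("school_indicators.csv"))
# print(diagnostics.round(2))
```

Indicators with sharply restricted ranges or heavy skew may carry less effective weight in a compensatory index than their policy weight suggests, which connects directly to Consideration 2.2 below.
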
| Reflection Prompts | Notes |
|---|---|
| **Key questions for the indicator:** What empirical evidence is available to show that design decisions for policy weights are reflected in operational weights? | |
| **Why is this important?** The ways in which indicators interact affect how the overall state's system of AMD differentiates and identifies schools. It is important to have a clear understanding of which indicators are most influential in the state's system of AMD, how changes within indicators affect differentiation over time, and whether any indicators have an unexpected amount of influence on the system (see the sketch following this table). | |
| **Key evidence checks:** | |
| **Recommended next steps:** | |

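One common way to check whether policy weights survive into operation is to estimate empirical (effective) weights from the data, for example as standardized regression coefficients of the summative score on its component indicators. The sketch below illustrates this under assumed, hypothetical column names and design weights; it is one heuristic among several, not a prescribed method.

```python
# Minimal sketch: compare intended policy weights with empirical (effective)
# weights, estimated as standardized least-squares coefficients of the
# summative score on the component indicators. All names are hypothetical.
import numpy as np
import pandas as pd

POLICY_WEIGHTS = {  # illustrative design weights, not from any actual plan
    "achievement": 0.40, "growth": 0.35, "grad_rate": 0.15, "sqss": 0.10,
}

def empirical_weights(df: pd.DataFrame,
                      summative_col: str = "summative_score") -> pd.DataFrame:
    cols = list(POLICY_WEIGHTS)
    data = df[cols + [summative_col]].dropna()
    # z-score everything so the coefficients are on a comparable scale
    z = (data - data.mean()) / data.std(ddof=0)
    beta, *_ = np.linalg.lstsq(z[cols].to_numpy(),
                               z[summative_col].to_numpy(), rcond=None)
    share = np.abs(beta) / np.abs(beta).sum()  # normalize to sum to 1
    return pd.DataFrame(
        {"policy_weight": [POLICY_WEIGHTS[c] for c in cols],
         "empirical_share": share},
        index=cols,
    )

# Large gaps between policy_weight and empirical_share can flag indicators
# whose operational influence departs from design intent (e.g., because of
# restricted range or strong correlation with another indicator).
```
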
Reflecting on your notes above, consider your confidence in responding to the reflection prompts below. If you answer "no" or are not confident in your response, use the notes from your discussion to determine next steps.

| Claim 2 Reflection Questions | Claim 2 Response |
|---|---|
| We have sufficiently explored the confidence claims above to understand whether indicator weights promote valid, fair, and reliable results. | Yes / No |
| We have collected enough evidence to sufficiently address key questions and can confirm that the considerations associated with Claim 2 reflect the intended policy weights to promote valid, fair, and reliable results. | Yes / No |

[If you wish to explore specific indicators in more depth, click here to continue on to Modules 3A–3E: Indicators.]
[If you do not wish to further explore specific indicators, click here to continue on to Module 6: Reporting.]
[Click here to go back to the tool home page.]

For non-summative rating systems (e.g., decision rule-based systems), please see the reflection prompts below in Table 7.

Table 7. Confidence in the Operations and Results of Combining Indicators for Non-Summative Rating Systems Reflection

**Claim 1: The decision rules reflect the state's theory of action and stakeholder vision, as appropriate.**

- Consideration 1.1: Decision rules reflect appropriate stakeholder and constituent input.
- Consideration 1.2: Decision rules are coherent with the policy intent and intended incentivized behaviors for the state's accountability system.

The ESEA requires that each state consult with key stakeholder groups that represent the range of constituents across the state when developing the consolidated state plan, which includes a description of the state's accountability system. However, some stakeholders' recommendations may not be appropriate to implement as is, given constraints such as high-stakes use, corruptibility, data access, and data collection. It is important for SEA staff supporting accountability to have a clear understanding of how the sequence of decision rules and interactions reflects the state's theory of action and the stakeholder feedback gathered as part of the ESEA consolidated state plan development process. For each set of claims, consider the following statements and explore the suggested evidence for index-based systems or non-summative systems.

| Reflection Prompts | Notes |
|---|---|
| **Key questions for the indicator:** How were stakeholder groups solicited for feedback? To what extent was this feedback incorporated when developing decision rules for the system? | |
| **Why is this important?** Soliciting stakeholder feedback and input is an important design step to ensure that representative viewpoints are included. It is important to blend this feedback with the overall policy objective and theory of action for how decision rules are designed and ordered. This feedback can also be incorporated to improve the system over time. | |
| **Key evidence checks:** | |
| **Recommended next steps:** | |

| Reflection Prompts | Notes |
|---|---|
| **Key questions for the indicator:** What behaviors or next steps did you intend to promote based on the way in which decision rules are ordered? | |
| **Why is this important?** An important aspect of accountability-system design is considering how people will interpret, use, and act upon data from the system. It is important to consider how design decisions around the order of decision rules facilitate changes in awareness and behavior. | |
| **Key evidence checks:** | |
| **Potential next steps:** | |

Reflecting on your notes above, consider your confidence in responding to the reflection prompts below. If you answer "no" or are not confident in your response, use the notes from your discussion to determine next steps.

| Claim 1 Reflection Questions | Claim 1 Response |
|---|---|
| We have sufficiently explored the confidence claims above to understand how our decision rules reflect the state's theory of action and stakeholder vision. | Yes / No |
| We have collected enough evidence to sufficiently address key questions and can confirm that the considerations associated with Claim 1 reflect the state's theory of action and stakeholder vision, as appropriate. | Yes / No |

**Claim 2: The empirical results of decision-rule implementation reflect the intended sequencing of decision rules to promote valid, fair, and reliable school results.**

- Consideration 2.1: Indicators can be compared and contrasted based on technical characteristics of the data.
- Consideration 2.2: Empirical decision rules reflect intended policy weights and result in accountability signals as designed and intended.

How these indicators are combined in the form of decision rules plays a major role in how schools are differentiated and identified. Because of the variety of indicators and the diversity of data that may be used in systems of AMD, several technical considerations should be addressed. These might include examining the appropriateness of comparing and contrasting measures, ensuring that decision rules are appropriately reflected in system operations, or identifying sources of volatility and error within and across indicators. For each set of claims, consider the following statements and explore the suggested evidence for summative rating systems or non-summative systems.

| Reflection Prompts | Notes |
|---|---|
| **Key questions:** Are the measures that comprise the indicator appropriate for use? Are the measures across indicators functioning appropriately for use as part of a state's system of AMD? | |
| **Why is this important?** The technical characteristics of measures reflected within and across indicators are important to consider. These technical characteristics include both the process- and outcome-related characteristics of the data. Process-related characteristics may include things like policies related to data collection, data collection processes, ownership of data, corruptibility of data, and cleanliness of data. Outcome-related characteristics may include things like the shape, skew, range, mean, and mode associated with measures after data are collected and cleaned. Volatility within and across indicators over time is also worth checking (see the stability sketch following this table). | |
| **Key evidence checks:** | |
| **Potential next steps:** | |

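To probe the volatility concern raised in the introduction to this claim, a state might examine how stable each indicator is from one year to the next. Below is a minimal sketch assuming long-format data with hypothetical `school_id`, `year`, and indicator columns.

```python
# Minimal sketch: year-over-year stability of a single indicator, one way to
# surface volatility within an indicator. Assumes one row per school per year
# in long format; all column names are hypothetical placeholders.
import pandas as pd

def year_to_year_stability(df: pd.DataFrame, indicator: str) -> pd.Series:
    """Correlate each school's value with its value in the prior year.
    Low correlations can indicate volatility from small n-sizes, measurement
    error, or real change, and warrant a closer look before high-stakes use."""
    wide = df.pivot(index="school_id", columns="year", values=indicator)
    years = sorted(wide.columns)
    return pd.Series(
        {f"{y0}->{y1}": wide[y0].corr(wide[y1])
         for y0, y1 in zip(years, years[1:])},
        name=indicator,
    )

# Example usage:
# long_df = pd.read_csv("school_indicators_by_year.csv")
# print(year_to_year_stability(long_df, "growth"))
```
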
| Reflection Prompts | Notes |
|---|---|
| **Key questions:** What empirical evidence is available to show that design decisions for decision rules correspond to the predictive power of indicators? | |
| **Why is this important?** The way in which indicators interact affects how the overall state's system of AMD identifies schools. It is important to have a clear understanding of which indicators contribute the most influence in the state's system of AMD, how changes within indicators affect differentiation over time, and whether any indicators have an unexpected amount of influence on the system (see the sketch following this table). | |
| **Key evidence checks:** | |
| **Potential next steps:** For examples, please see State Systems of Identification and Support under ESSA: Evaluating Identification Methods and Results in an Accountability System (link is external) from the Council of Chief State School Officers. | |

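For a non-summative system, the operational influence of each rule can be audited directly by recording which rule in the ordered sequence first identifies each school. The sketch below illustrates the idea with invented thresholds and column names; they stand in for a state's actual business rules and are not drawn from this tool.

```python
# Minimal sketch: audit an ordered sequence of decision rules by tallying the
# first rule (if any) that identifies each school. Thresholds and column names
# are invented placeholders, not actual state business rules.
import pandas as pd

# Ordered (name, predicate) pairs; earlier rules take precedence.
DECISION_RULES = [
    ("low_grad_rate", lambda r: r["grad_rate"] < 67.0),
    ("bottom_5pct_achievement", lambda r: r["achievement_pctile"] <= 5),
    ("low_growth", lambda r: r["growth_pctile"] <= 5),
]

def first_triggering_rule(school: pd.Series) -> str:
    for name, predicate in DECISION_RULES:
        if predicate(school):
            return name
    return "not_identified"

def rule_influence(df: pd.DataFrame) -> pd.Series:
    """Count schools by the first rule that identifies them. A rule that
    dominates the counts, or one that never fires, may signal an unexpected
    amount of influence relative to the intended sequencing."""
    return df.apply(first_triggering_rule, axis=1).value_counts()

# Example usage:
# print(rule_influence(pd.read_csv("school_indicators.csv")))
```

Re-running the tally with the rules reordered shows how sensitive identification results are to sequencing.
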
Reflecting on your notes above, consider your confidence in responding to the reflection prompts below. If you answer "no" or are not confident in your response, use the notes from your discussion to determine next steps.

| Claim 2 Reflection Questions | Claim 2 Response |
|---|---|
| We have sufficiently explored the confidence claims above to understand whether the order of decision rules promotes valid, fair, and reliable school results. | Yes / No |
| We have collected enough evidence to sufficiently address key questions and can confirm that the state's system of AMD reflects the intended policy weights or intended sequencing of decision rules to promote valid, fair, and reliable school results. | Yes / No |

[If you would like to further explore the state's system of AMD through reflection on specific indicators, please click here to continue on to Modules 3A–3E: Indicators.]
[If you would like to further explore the state's system of AMD through reflection on identified schools, please click here to continue on to Module 4: Comprehensive Support and Improvement (CSI) Schools.]
[If you do not wish to further explore specific indicators or identified schools, click here to continue on to Module 6: Reporting.]
1 For more information, please see Accountability Identification Is Only the Beginning: Monitoring and Evaluating Accountability Results and Implementation (link is external) from the Council of Chief State School Officers. Note: The inclusion of links to resources and examples does not reflect their importance, nor is it intended to represent or be an endorsement by the Department of any views expressed or materials provided. The U.S. Department of Education does not control or guarantee the accuracy, relevance, timeliness, or completeness of any outside information included in this document.
2 Relevant inferential analyses might include variability, commonality, principal components, or discriminant analyses. For examples, please see State Systems of Identification and Support under ESSA: Evaluating Identification Methods and Results in an Accountability System (link is external) from the Council of Chief State School Officers. Note: The inclusion of links to resources and examples does not reflect their importance, nor is it intended to represent or be an endorsement by the Department of any views expressed or materials provided. The U.S. Department of Education does not control or guarantee the accuracy, relevance, timeliness, or completeness of any outside information included in this document.