Data Informs Effective Implementation

Have you ever tackled a home improvement project, such as assembling a piece of furniture, where the directions left you totally confused? You read and reread the directions but can’t figure out why the pieces won’t fit together. Or, just as frustrating, you finally get one side assembled but have no idea how you did it, so you can’t repeat the process on the other side. Now, let’s think of these examples as they relate to education. Have you struggled to determine why students are not achieving expected outcomes, even with all the programs and supports that have been put in place? Or have students achieved outcomes in “pockets of greatness” at a limited number of schools, but the district cannot replicate those processes or practices successfully districtwide? Effectively using data to inform implementation could be the key.
The Active Implementation Formula identifies the factors needed to achieve improved student outcomes. Improved student outcomes are the end result, achieved only when all of the formula’s components are effectively in place. The formula is a multiplication equation in which all of the components, with their corresponding frameworks, need to be in place to achieve these outcomes. How do we know these components are effectively in place? DATA! Let’s take a look at the different types of data and how data can be used effectively to inform implementation.
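As a quick reference, the formula is often written as a simple product (the exact component labels vary slightly across sources):

Effective Practices × Effective Implementation × Enabling Contexts = Improved Student Outcomes

Because the components multiply rather than add, strength in one component cannot make up for a missing or weak one; if any factor is close to zero, so are the outcomes.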

Types of data needed for implementation
Fidelity data measure the extent to which each core component of the practice or program is being implemented, examining content, competency, and context. Content data assess adherence to program guidelines, such as providing the required number of sessions. Competency data evaluate practitioners’ skills in delivering the intervention, using methods such as observation to assess skill and provide responsive feedback. Context data identify whether the conditions required to administer the program, such as small group instruction, are in place and whether program adaptations are responsive to the needs of practitioners and students. Fidelity data inform needed improvements in implementing the core components of the practice or program and help identify barriers to be removed. Fidelity data can be collected using a practical fidelity measure, such as the Observation Tool for Instructional Supports and Systems (OTISS), during classroom walkthroughs and observations.
Scaling data are specific to the activities and resources needed to develop competencies and organizational systems to implement and scale a program or practice, such as costs, training, and staffing. Implementation teams need scaling data to develop and refine implementation plans and to align resources with the intended outcomes. These data help identify barriers that arise during implementation and clarify whether a barrier is a program issue or a process issue, so teams can engage in effective action planning. One example of scaling data that can inform implementation is a staff post-training survey, which can identify gaps in the training and skills that need further support through coaching.
Capacity data evaluate the infrastructure needed to adopt, use, and sustain the practice or program, identifying strengths and areas for improvement related to leadership, teaming structures, competency development, and data use and sharing. Capacity data are used to assess, monitor, and strengthen efforts to adopt and maintain evidence-based practices, align resources, develop competencies, and engage linked teams within the system. Capacity data can be collected through state, regional, and district capacity assessments. These assessments need to be used at the organizational level and across the system to continue to build capacity for improved and sustained implementation.
Outcome data are results data used to measure the impact of the program or practice. Outcome data support fidelity and scaling by verifying that implementation is occurring as designed, such as training the intended number of teachers. Outcome data also assess the impact on those involved in implementation, such as changes in teacher instructional practices or student growth in reading, using quantitative and qualitative measures to identify the extent to which these changes are occurring. Outcome data serve as an accountability measure, providing evidence of the effectiveness of implementation. Returning to the example of student growth in reading, benchmark data serve as a measure of whether students are making adequate progress toward their yearly goal, which informs whether adjustments need to be made in implementation.
Building a data system
“All too often projects don’t move forward if the right data aren’t being collected and used, and this happens in teams from the school level to the state.”
John Gimpl, Minnesota Department of Education
Just as important as the types of data we collect is how we use the data to inform implementation. This extends beyond having a data platform, such as PowerSchool or another state data management system. Data must be useful and usable, meaning the right data are available at the right time, in the right format, and at all levels of the system. This requires building an effective data system to ensure data are identified, collected, and analyzed for effective and sustained implementation. Within the data system, someone needs to be accountable for providing the data and for ensuring that the right team members are trained and have access to it. Processes need to be established so that teams engage in data-based discussions, review the data, and develop formalized action steps based on the data. Teams throughout the linked system should communicate and share data to celebrate successes, identify barriers, and engage in continuous quality improvement.
Implementing an effective data system is an iterative process. Components within the data system that need to be regularly reviewed across all levels include data collection, data quality and usability, data analysis, and data use. These reviews should be shared across the system to identify refinements that need to be made to the data system to support continuous improvement in implementation.
“Never put something in place that won’t endure for a minimum of 10 years. That means you’re going to do it with a level of precision, with a level of fidelity, and with the organizational systems that will go beyond just you getting good at it.”
Rob Horner, Association for Positive Behavior Support Conference
Let’s explore how Minnesota has developed its data system to support effective and sustained implementation.
Minnesota’s data story
Minnesota has several projects based on the Active Implementation Frameworks and the Active Implementation Formula, the Formula for Success. For each of them, we regularly collect implementation data: process (e.g., scaling), fidelity, capacity, and outcome data. These data are critical in informing improvement cycles that are both scheduled and continuous and that are aligned with specific linked team levels.
When work begins, we make sure teams don’t skip the stages of Exploration and Installation, and we also watch that they don’t get stuck there. Common pitfalls include not building readiness well or not accurately assessing it, an ineffective training plan, a lack of evaluation data (how well) or scaling data (how much), and not beginning to measure capacity once a practice has been chosen so that teams can function in a way that facilitates Initial Implementation. Teams, from the school level to the state, need to use the right data at the right level at the right time to create the right next steps; without that, decisions can hinder goal achievement. For example, some teams might believe they are in Full Implementation without any fidelity data as evidence.
For a Structured Literacy implementation project, Minnesota relies on a linked team structure: a State Management Team supports a Regional Implementation Team, which in turn supports four District Implementation Teams. Each district began implementing the practice in one to three building sites. Multiple types of data currently flow at each level. Scaling data track changes such as the large numbers of reading teachers required to complete many hours of training, the number of staff to be coached and the capacity to do it, the number of observers available to measure fidelity compared to what is needed, and the budget supporting the implementation work. Training evaluation data showed which skills teachers felt they could put into practice and which they were less confident in transferring to daily use. This informed how to improve training, when to retrain staff, and what coaching services to prioritize.
Once trained, all buildings started strong in measuring fidelity with regular classroom observations using the Observation Tool for Instructional Supports and Systems (OTISS). The region and all four districts share observers to sustain the process and achieve robust interrater reliability. These data inform system changes, adjustments to instructional materials, and overall training and coaching plans. Other data are collected over time to show increases in required structures, such as expected minutes of Word Study and the number of teachers using materials aligned to the skills learned in training; these are scaling data used to help reach fidelity. Coaching data indicate that more coaching capacity is needed to serve the large number of teachers implementing such complex practices. All of these data serve as evidence when the district and regional teams take the capacity assessments, which they use to create action plans and make improvements every year.
Minnesota sustains implementation by committing, beginning in Exploration, to resources beyond short-term funding so that the project can stay in focus long enough to see consistent outcomes. That timeframe should be at least 10 years. We see pockets of excellence where fidelity is high, and we expect outcomes to increase as fidelity becomes more widespread.
Putting it all together
If we want to successfully assemble that piece of furniture, it starts with clear directions and with evaluating each step as we go to ensure we have a usable piece of furniture at the end. The Active Implementation Formula provides the components needed in implementation to achieve improved outcomes for students. Just as with furniture assembly, effective implementation of a program or practice requires a well-designed plan for what we hope to accomplish, with ongoing data collection and use to inform how we are doing and what needs to change to continuously improve and sustain implementation.
Resources
- Implementation Drivers Overview (Module 2)
- Drivers Tip Sheets: Decision Support Data System
- Drivers Ed Short Series: Handout: Decision Support Data System
- Drivers Ed Short Series: Handout: DSDS – Examples in Practice
- Tool: Observation Tool for Instructional Supports and Systems (OTISS)
- State Capacity Assessment (SCA)
- Regional Capacity Assessment (RCA)
- District Capacity Assessment (DCA)
References
Horner, R. (2018). Association for Positive Behavior Support Conference. Session A1. Retrieved from https://apbs.org/members-home/apbs-videos/
State Implementation and Scaling-up of Evidence-based Practices Center. (2015). Implementation Drivers Overview PDF. Active Implementation Hub. https://implementation.fpg.unc.edu/wp-content/uploads/Implementation-Drivers-Overview.pdf
State Implementation and Scaling-up of Evidence-based Practices Center. (2018). Drivers Ed Short Series: Decision Support Data Systems (DSDS) Lesson. National Implementation Research Network, University of North Carolina at Chapel Hill. Retrieved from https://modules.fpg.unc.edu/sisep/de-dsds/story.html
Ward, C. (2019, November 5). Exploring connections between implementation capacity and fidelity [Practicing Implementation Blog]. National Implementation Research Network, University of North Carolina at Chapel Hill.
Ward, C., St. Martin, K., Horner, R., Duda, M., Ingram-West, K., Tedesco, M., Putnam, D., Buenrostro, M., & Chaparro, E. (2015). District Capacity Assessment. National Implementation Research Network, University of North Carolina at Chapel Hill.
Watkins, C., & Hornak, R. (2022). What is fidelity? State Implementation and Scaling-up of Evidence-based Practices Center. Retrieved from https://implementation.fpg.unc.edu/resource/what-is-fidelity/