What is Efficacy Research in Education and How Do I Know if Edtech is Really Working?
June 3, 2024
By: Sierra Noakes, Kip Glazer, and Pati Ruiz
“Does this edtech tool work for my students, and in my classroom?”
It’s a question many have asked, though the answer isn’t always easy to find. As ESSER and other stimulus funding come to an end, district leaders are suddenly tasked with deciding which of a record number of edtech tools to keep investing in; on average, each school district accesses 2,591 edtech tools over a school year. To measure effectiveness, researchers have traditionally treated randomized controlled trials (RCTs) as the gold standard. However, RCTs are time-consuming and expensive, and their results arrive only after a long delay. We need to consider the value additional measurement methods can provide: authentic evaluation of edtech tools at a pace that supports the decisions districts face now.
RCTs are carefully designed studies that can yield causal findings: they promise to determine whether an edtech tool directly increases student learning. The methodology originated in medicine, where researchers test whether a drug produces its intended outcome in patients, and its appeal lies in being able to pinpoint a cause. However, applying this model to edtech research presents several challenges:
- Control over variables: The control an RCT demands over variables is simply unrealistic in a school setting. Unlike in medical research, where subjects take the medicine or placebo consistently at regular intervals, students can be absent from school, WiFi or devices can fail, or schools may close due to a global pandemic. Any interruption that is routine in a school setting can disqualify a study from counting as an RCT.
- Pace of change: Even after a study meets the minimum requirements for the Every Student Succeeds Act (ESSA) Tiers of Evidence by running across multiple sites with over 350 students, the post-study process of data analysis, peer review, and publication can take years. By the time the study is publicly available, the technology it examined may already be a year or two old. Given how quickly technology changes, we need a method that matches that pace.
- Metrics of success: Unlike a medicine, which typically targets a single disease, the metrics of success for edtech tools vary widely. Improved test scores, educator satisfaction, or even rate of adoption can all count as indicators of success.
- Practicality: Edtech Impact found in 2021 that only seven percent of edtech suppliers use RCTs to measure impact. Relying exclusively on RCTs is clearly impractical.
RCTs are not the right method when we need to determine the effectiveness of edtech tools rapidly. Instead, we should reexamine what success with an edtech tool looks like. The education field has often treated increased test scores alone as the metric of success; however, we believe learning is more than acquiring discrete pieces of knowledge. It is fundamentally a human experience that requires social and cultural interaction. Expanding the research base we use to inform decisions is essential in this next phase of decision-making, especially by including qualitative studies that help us understand an experience holistically. Many students face unprecedented challenges and world events that have contributed to rising suicide rates, depression, and chronic absenteeism. Now more than ever, we need to elevate the significance of learners’ experiences: their sense of belonging, engagement, interest, and excitement about learning and being at school. The question we must ask is whether a tool has created a greater sense of community for students or further alienated learners. As such, student experience should be considered a success indicator.
To accomplish this goal, researchers need to elevate the status of qualitative research in edtech by using mixed methods whenever they evaluate the effectiveness of an edtech tool. Doing so lets us ask much more nuanced questions. For example, rather than asking, “Did a tool work?” we can ask, “Why did a tool not work for all students?” With qualitative data, such as student focus groups and classroom observations, we can surface deeper insights; for instance, students of color might share that they did not feel represented in the math problems a product used, which left them disengaged from the learning.
To authentically measure the effectiveness of edtech tools, learning scientists at Digital Promise have collaborated with multiple organizations and a variety of practitioners, including district leaders. As a result, Digital Promise has launched the Evidence-Based Edtech product certification to operationalize this effort. The certification welcomes submitted studies spanning correlational, quasi-experimental, and randomized controlled trial research, and requires findings to be fully reported, whether positive or negative, and disaggregated by learner subpopulations.
Our goal is to assess the quality of research that falls outside of ESSA Tier 1, which exclusively represents RCTs. We aim to support education leaders with information about the reliability of evidence that vendors share and increase the amount of evidence available to the field by recognizing the quality of non-RCT edtech research.
The Evidence-Based Edtech product certification enables Digital Promise to evaluate the reliability of a product’s evidence base as well as its theory of change. Our assessors also evaluate the quality and relevance of the learning sciences research used to drive specific, distinct design decisions within a product, and ensure the product’s research basis is easily accessible to the public.
Most importantly, the Evidence-Based Edtech product certification lets those who select and purchase edtech know with confidence that a product has been vetted through a learning sciences lens. Our team has worked with district leaders to develop these district resources to support the integration of evidence into edtech evaluation and decision-making.
District leaders have fewer dollars to spend on edtech products, and they deserve access to quality information about the potential impact a tool can have on their community. Mixed-methods, correlational, and quasi-experimental research can deliver findings on a timeline that supports evidence-informed decision-making. That evidence can also help leaders justify decisions to teachers, school boards, and communities as they make significant cuts to the number of tools available across their districts.
Sierra Noakes is the Director of Edtech Evaluation and Contracting at Digital Promise.
Kip Glazer is Principal at Mountain View High School.
Pati Ruiz is the Senior Director of Edtech and Emerging Technologies at Digital Promise.