
How to Improve Children’s Educational Trajectories

Updated: Jan 2, 2018

Of late, I’ve been pondering how we can do work that has more staying power, that will equip students and adults to be more successful in the long run. As education research professionals, we often find ourselves studying interventions that are narrowly focused, such as reform programs designed for specific ages, grade bands, student populations, and school contexts. As a result, we see the “fade-out effect” when those interventions are no longer supported.



Although these narrow approaches facilitate rigorous research (by allowing us to more easily manipulate the variables of interest and estimate the impacts of specific interventions on short-term outcomes), they miss the broader picture: an understanding of processes across time, transitions, and contexts of influence. In turn, educational policy generated from such initiatives has a smaller sphere of influence (Morris & Reardon, 2017).


How might we alter this situation?

The Institute of Education Sciences has long supported the notion of multidisciplinarity – approaching problems from different perspectives and combining those perspectives into proposed research studies – but this hasn’t gained enough traction in the field. Drawing from fields such as developmental psychology, public health, and prevention science, here’s what we can do.

  1. Identify and teach skills that have long-term consequences, such as self-efficacy and cognitive and emotional self-regulation. A credible body of research demonstrates that these skills can be taught (Durlak, Weissberg, Dymnicki, Taylor & Schellinger, 2011; Kenthirarajah & Walton, 2015; Wilson & Linville, 1985). When combined with the primary treatment of interest (e.g., reading or math programs), these universal interventions can propel students toward life-long learning, insulating them against otherwise difficult life circumstances.

  2. Consider the timing of interventions relative to the life course and key transitions. Interventions might equip students to seize opportunities at the right time, such as entry into honors classes. Or pregnancy prevention programs might focus on delaying the onset of sexual activity beyond the teen years rather than trying to eliminate it altogether (Bailey, Duncan, Odgers & Yu, 2017).

  3. Understand environments as dynamic systems across time. Taking a page from Bronfenbrenner’s “ecology of human development” (1979), we should let a longitudinal focus guide our work.

  4. Expand our toolkit. Standard linear models estimate point-in-time impacts; they don’t actually examine children’s developmental growth. Growth curve models are promising alternatives (Greenberg & Abenavoli, 2017). And metrics like relative risk reduction and the improvement index, used in combination with the traditional effect size, can strengthen the interpretation of our analyses (Wang & Ware, 2013; What Works Clearinghouse, 2014). All these methodologies are readily accessible to us; a brief sketch follows this list.
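To make the fourth recommendation concrete, here is a minimal sketch in Python of what these tools look like in practice. The dataset, variable names (student_id, time, treated, score), and the illustrative outcome rates are all hypothetical; the growth curve model is fit with statsmodels’ mixed-effects routine, and the improvement index follows the What Works Clearinghouse definition (the percentile shift implied by a standardized effect size).

```python
# Minimal sketch (hypothetical data): a growth curve model plus the
# relative risk reduction and improvement index metrics noted above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

# Simulate longitudinal data: four waves of 'score' per student, with
# treated students gaining a little more per wave than controls.
rng = np.random.default_rng(0)
n_students, n_waves = 200, 4
df = pd.DataFrame({
    "student_id": np.repeat(np.arange(n_students), n_waves),
    "time": np.tile(np.arange(n_waves), n_students),
    "treated": np.repeat(rng.integers(0, 2, n_students), n_waves),
})
df["score"] = (50 + 2.0 * df["time"] + 1.5 * df["treated"] * df["time"]
               + rng.normal(0, 5, len(df)))

# Growth curve model: random intercepts and slopes for each student.
# The time-by-treatment interaction captures differences in growth
# over time, not just a point-in-time impact.
growth = smf.mixedlm("score ~ time * treated", df,
                     groups=df["student_id"], re_formula="~time").fit()
print(growth.summary())

# Relative risk reduction for an adverse outcome (illustrative rates):
# the share of baseline risk removed by the intervention.
p_control, p_treated = 0.20, 0.12
rrr = (p_control - p_treated) / p_control  # 0.40, a 40% risk reduction

# Improvement index (What Works Clearinghouse): expected percentile-rank
# change for an average control-group student, given an effect size.
effect_size = 0.25
improvement_index = norm.cdf(effect_size) * 100 - 50  # about +10 points
print(f"RRR = {rrr:.0%}; improvement index = {improvement_index:.1f}")
```

The point is not the specific numbers but the shift in question they allow: from “did scores differ at posttest?” to “did the trajectory of growth change?”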

Our immediate reaction to these recommendations might be, “We don’t have control over the content to train or evaluate; it is defined by the client’s contract with us.” True enough. But we are now seeing windows of opportunity to play a role in upstream solutions (e.g., policy development).


Given this evolving reality, what are the implications for us?

When training teachers, professional development providers should keep an eye toward “post-program” educational supports for students. In school safety professional development, for example, we might incorporate train-the-trainer modules on protective and resilience-building skills for students that go above and beyond the array of institutional safety prescriptions. Evaluators can impress upon clients the importance of ensuring that interventions include skills like those listed above and are timed to roll out at critical developmental stages. And wherever possible, we should look for opportunities to extend evaluations beyond the fade-out window.

_________________________________

References

Bailey, D., Duncan, G., Odgers, C., & Yu, W. (2017). Persistence and fadeout in the impacts of child and adolescent interventions. Journal of Research on Educational Effectiveness, 10, 7–39. doi:10.1080/19345747.2016.1232459


Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Cambridge, MA: Harvard University Press.


Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82, 405–432. doi:10.1111/j.1467-8624.2010.01564.x


Greenberg, M. T., & Abenavoli, R. (2017). Universal interventions: Fully exploring their impacts and potential to produce population-level impacts. Journal of Research on Educational Effectiveness, 10, 40–67. doi:10.1080/19345747.2016.1246632


Kenthirarajah, D. T., & Walton, G. M. (2015). How brief social psychological interventions can cause enduring effects. Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource. doi:10.1002/9781118900772.etrds0026


Morris, P. A., & Reardon, S. F. (2017). Moving education science forward by leaps and bounds: The need for interdisciplinary approaches to improving children’s educational trajectories. Journal of Research on Educational Effectiveness, 10, 1–6. doi:10.1080/19345747.2016.1254466


Wang, R., & Ware, J. H. (2013). Detecting moderator effects using subgroup analyses. Prevention Science, 14, 193–198.


What Works Clearinghouse. (2014). Procedures and standards handbook, Version 3.0. Retrieved from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_procedures_v3_0_standards_handbook.pdf


Wilson, T. D., & Linville, P. W. (1985). Improving the performance of college freshmen with attributional techniques. Journal of Personality and Social Psychology, 49, 287–293. doi:10.1037/0022-3514.49.1.287

________________________________________________________





Sara Silver is a Senior Research Associate for the Program Evaluation and School Improvement Services Division at Measurement Incorporated.






Please learn more about our program evaluation and professional development services on this website.
