Project Details
Description
Implementation of Evidence-Based Preventive Parenting Programs

ABSTRACT

Poor implementation explains significant reductions in effect sizes when Evidence-Based Programs (EBPs) are translated to real-world settings (Henggeler, 2004). Multiple dimensions of implementation influence outcomes (Durlak & DuPre, 2008), but only four occur within the delivery of program sessions and, as a result, are potentially modifiable sources of disconnect between the program as designed and as implemented: 1) fidelity (the amount of curriculum providers deliver), 2) positive adaptations (the quality of unscripted additions providers make during delivery), 3) quality of delivery (the skill with which providers deliver material and interact with participants), and 4) participant responsiveness (the level of program participation, such as participant attendance and program skill use). Identifying efficient methods to monitor these four dimensions (Schoenwald et al., 2011), as well as understanding how they interrelate to influence outcomes (Berkel, Mauricio, et al., 2011) and change across sessions, has important implications for maintaining EBP effects in real-world settings.

The proposed research uses data from the NIDA-funded effectiveness trial of the New Beginnings Program (NBP; R01DA026874) and independent observer data that we will collect. The NBP is a parenting intervention that improves youth substance use and mental health outcomes in response to parental divorce (Wolchik et al., 2002) through effects on intervention-targeted parenting mediators (McClain et al., 2010). Community providers will deliver the NBP to approximately 550 parents across 75 intervention groups through a partnership with 5 county-level courts in Arizona. The effectiveness trial will collect independent observer data on only 10% of sessions and will examine only simple, direct effects of implementation on outcomes. This study builds on the NBP trial by collecting independent observer data for 100% of sessions, providing the data required to address three research aims that will inform critical questions in implementation measurement and theory.

In Aim 1, we compare the reliability and predictive validity of fidelity, positive adaptation, and quality across independent observer ratings, provider self-report, and supervisor ratings. We also examine sampling strategies to identify the optimal quantity of data (e.g., 10%, 20%) that must be sampled to reliably assess fidelity, positive adaptations, and quality, and whether this is contingent on program activity type (e.g., didactic versus skills practice) or provider characteristics (e.g., experience, attitudes toward EBPs). In Aim 2, we test theoretical frameworks explaining implementation effects on outcomes and examine the role of parent gender and ethnicity within these frameworks. Aim 2 builds on extant studies by testing: a) moderational and mediational hypotheses about the effects of provider behaviors (i.e., fidelity, adaptation, quality) on responsiveness and outcomes, b) the temporal precedence of provider behavior effects on responsiveness and outcomes, and c) the moderating effects of parent gender and ethnicity on model pathways. In Aim 3, we examine changes in fidelity, adaptation, and quality over time and the influence of provider characteristics on these changes.
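The sampling question in Aim 1 can be made concrete with a small simulation. The Python sketch below is purely illustrative and not the study's analysis plan: it generates hypothetical observer-rated fidelity scores for 75 intervention groups and checks how well group means estimated from a 10%, 20%, or 50% sample of sessions track the full-sample means. All parameter values (10 sessions per group, a mean fidelity of 0.80, the noise levels) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_sessions = 75, 10  # hypothetical: 75 groups, 10 sessions each

# Simulated observer-rated fidelity (proportion of curriculum delivered):
# stable group-level differences plus session-to-session noise.
group_means = rng.normal(0.80, 0.08, size=n_groups)
fidelity = np.clip(
    group_means[:, None] + rng.normal(0.0, 0.10, size=(n_groups, n_sessions)),
    0.0, 1.0,
)

for frac in (0.1, 0.2, 0.5):
    k = max(1, round(frac * n_sessions))  # sessions sampled per group
    sampled_means = np.array([
        fidelity[g, rng.choice(n_sessions, size=k, replace=False)].mean()
        for g in range(n_groups)
    ])
    # Correlation between the sampled estimate and the full-sample mean
    # summarizes how reliably each sampling fraction recovers group fidelity.
    r = np.corrcoef(sampled_means, fidelity.mean(axis=1))[0, 1]
    print(f"sampling {frac:.0%} of sessions: r with full-sample mean = {r:.2f}")
```

A more complete analysis would stratify the draw by activity type and model provider characteristics, but the same logic applies: increase the sampled fraction until the sampled estimate tracks the full record closely enough for monitoring purposes.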
Implementation of Evidence-Based Preventive Parenting Programs

Although Evidence-Based Programs (EBPs) have demonstrated that they can reduce youth substance use in a variety of contexts, the public health impact of these programs has not kept pace with their potential.15 Poor implementation explains significant reductions in effect sizes when EBPs are translated to real-world settings.11 Multiple dimensions of implementation influence outcomes,5 but the most critical to study are those that occur within the delivery of program sessions, as these are potentially modifiable sources of disconnect between the program as designed and as implemented, including providers' fidelity to the content in the manual and the quality with which providers deliver the material and interact with participants.2 Although considered the gold standard, independent observation can be cost-prohibitive and impractical in community settings.19 Findings from the proposed supplement can facilitate the efficient monitoring of implementation and thus support high-quality delivery and reductions in substance use.

Using transcripts of 470 session tapes from the New Beginnings Program effectiveness trial, human qualitative coding and ratings of those tapes, and pre- and post-test interview data along with attendance and home practice data from the trial itself, the proposed supplement will develop machine-based ratings linked to fidelity and quality, following computational linguistic methods developed by our own group3,9 as well as others.1 This will be accomplished by empirically identifying components of fidelity and quality with high human interrater reliability and predictive validity (Aim 1), developing machine-generated codes for all sessions and testing their reliability against human codes (Aim 2), and testing the validity of the machine-generated codes against participant attendance, home practice, and changes in program-targeted parenting associated with substance use (Aim 3).

A strong mentoring team will mentor Dr. Gallo in several areas that will contribute to his independence as a translational researcher, including the theory of evidence-based parenting programs, the assessment of dimensions of implementation, statistical analyses and implementation methodology, and systems engineering and machine learning. He will also be mentored in crafting manuscripts and grant proposals, which will allow him to take a productive leadership role in implementation research throughout his career.
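To make the machine-coding idea concrete, here is a minimal sketch of the general approach: a text classifier trained to reproduce a binary human fidelity code from transcript text, with machine-human agreement summarized by Cohen's kappa. This is an assumption-laden illustration, not the team's actual computational linguistic pipeline; it uses scikit-learn, and the toy transcript segments and labels are placeholders standing in for the coded transcripts of the 470 session tapes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

# Hypothetical placeholders: one transcript segment per coded activity and
# the human observer's code (1 = program component delivered, 0 = not).
transcripts = [
    "provider reviews the one-on-one time homework with the group",
    "group practices active listening skills in a role play",
    "provider introduces the catch-them-being-good praise skill",
    "conversation drifts to scheduling and parking logistics",
    "provider chats about the weather before starting late",
    "participants discuss unrelated weekend plans at length",
]
human_codes = [1, 1, 1, 0, 0, 0]

# Word and bigram features, then a simple linear classifier.
X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(transcripts)
model = LogisticRegression(max_iter=1000)

# Cross-validated machine codes: each segment is scored by a model that
# never saw its own human code during training.
machine_codes = cross_val_predict(model, X, human_codes, cv=3)
print("machine-human agreement (Cohen's kappa):",
      cohen_kappa_score(human_codes, machine_codes))
```

With a handful of toy segments the kappa is meaningless; the point is the workflow: human codes on a subset train a model, cross-validation estimates machine-human reliability, and validated machine codes can then score every session at low cost.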
| Status | Finished |
| --- | --- |
| Effective start/end date | 6/15/13 → 3/31/16 |
Funding
- HHS: National Institutes of Health (NIH): $1,602,491.00