Research Watch

A formative stage evaluation of a birth parent behavioural training program and foster care children’s well-being and placement stability

Year of Publication
2015
Reviewed By
Carolyn O’Connor, Tara Black, & Barbara Fallon
Citation

Akin, B. A., Byers, K. D., Lloyd, M. H., & McDonald, T. P. (2015). Joining formative evaluation with translational science to assess an EBI in foster care: Examining social-emotional well-being and placement stability. Children and Youth Services Review, 58, 253-264.

Summary

The current study examined the effect of a birth parent behavioural training program (Parent Management Training-Oregon, or PMTO) on children’s well-being (measured by social-emotional functioning, problem behaviours, and social skills) and placement stability in a foster care setting. Participants were drawn from the private foster care system of a Midwestern state and were asked to take part in a pre-test, post-test randomized consent trial. At this formative stage, the researchers were interested in whether their outcomes were measurable and whether the selected instruments were sensitive enough to detect change. They also wanted to determine whether differences could be detected between the intervention and comparison groups.

Akin and colleagues employed a multi-stage evaluation process as a way to save time and resources while ensuring the reliability and validity of the study’s measures and constructs; this approach also helps expedite the implementation of evidence-based programs. Specifically, this article represents the formative stage of the evaluation, which is used to establish a program’s initial effectiveness and confirm its pathways of change before progressing to more expensive and involved phases such as summative evaluation and dissemination. The researchers also applied the translational science concept of cultural exchange, in which agency practitioners, administrators, and researchers actively collaborate and exchange input to reduce practice-based barriers.

Findings

Well-being was measured by three variables: social-emotional functioning, problem behaviour, and social skills. Social-emotional functioning was measured using the Child and Adolescent Functional Assessment Scale (CAFAS), which showed satisfactory concurrent and predictive validity; its internal reliability was .63 at Time 1 and .85 at Time 2. Problem behaviour and social skills were assessed together using the Social Skills Improvement System (SSIS), a measure with moderately high validity indices for both scales. Cronbach’s alpha was calculated to determine internal reliability: the problem behaviour scale showed .89 at Time 1 and .82 at Time 2, while the social skills scale showed .92 at both time points. Placement stability was measured using administrative data and calculated as the total number of placements divided by the number of days in foster care, multiplied by 365 to yield an annualized rate.
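The annualized placement rate described above is simple arithmetic and can be sketched in a few lines. The function name and the example values below are illustrative only and do not come from the study:

```python
def annualized_placement_rate(num_placements: int, days_in_care: int) -> float:
    """Annualized placement rate: total placements scaled to a per-year rate.

    Computed as (placements / days in foster care) * 365, matching the
    calculation described in the article.
    """
    if days_in_care <= 0:
        raise ValueError("days_in_care must be positive")
    return num_placements / days_in_care * 365


# Hypothetical example: a child with 3 placements over 500 days in care
# has an annualized rate of 3 / 500 * 365 = 2.19 placements per year.
rate = annualized_placement_rate(3, 500)
```

Note that, as the Methodological Notes observe, this rate counts moves without regard to their reason, so a positive, planned move and a disruption contribute identically.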

Both groups improved on all three measures of well-being, with the intervention group showing greater improvement; the treatment group had 22% fewer annual placements than the comparison group. These preliminary results demonstrate the measures’ ability to function properly and detect change. All three well-being measures at Time 1 significantly predicted well-being at Time 2 (six months later). However, no relationship in either direction was found between well-being and placement stability, with one exception: the intervention group’s Time 2 social skills and placement instability were related. The authors anticipate that, with a larger sample size and the resulting increase in power, these relationships are likely to emerge, and they recommend that a next-stage summative evaluation commence.

Methodological Notes

Some issues with the study sample are apparent. Statistical power for the sample (n = 121) was assessed with two simulations: the test of not-acceptable fit found the study underpowered overall (.40), while the Monte Carlo test found it sufficiently powered for many variables (.89 and above) and slightly underpowered for others (.65). The researchers deemed these levels acceptable for the formative purposes of this stage but noted that the results should not be used to draw summative conclusions. Ideally, power calculations would have been performed prior to recruitment to ensure an adequate sample size.
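For readers unfamiliar with Monte Carlo power analysis, the general idea is to simulate many datasets under an assumed effect and count how often the planned test rejects the null. The sketch below is a generic illustration, not the authors’ simulation: it assumes a simple two-group mean comparison, uses a normal approximation to the t-test, and all parameter values are hypothetical.

```python
import math
import random
import statistics


def monte_carlo_power(n_per_group: int, effect_size: float,
                      n_sims: int = 2000, seed: int = 1) -> float:
    """Estimate power by simulation: repeatedly draw two groups under an
    assumed standardized effect, run an approximate two-sample test at
    alpha = .05, and return the proportion of rejections."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        # Welch-style test statistic
        se = math.sqrt(statistics.variance(a) / n_per_group
                       + statistics.variance(b) / n_per_group)
        t = (statistics.mean(b) - statistics.mean(a)) / se
        # 1.96 is the two-sided normal critical value for alpha = .05
        if abs(t) > 1.96:
            rejections += 1
    return rejections / n_sims
```

Run with a small assumed effect and a sample split like the study’s, such a simulation makes the underpowering concrete: the rejection rate falls well short of the conventional .80 target.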

The measure of annualized placement rate is a highly simplified outcome intended to capture potentially complex situations. It is therefore limited in that it does not describe the reasons for a placement change; it simply indicates that a move occurred. Some placement changes can be positive and adaptive, but the current analysis assumes that a higher number of placements reflects poorer outcomes.

The researchers utilized a pre-test, post-test randomized consent trial, meaning that participants were randomly assigned to a condition and informed of their assignment prior to giving consent. Some researchers indicate that this inverted randomization-and-consent sequence can reduce participation rates and, thus, statistical power (Homer, 2002). The article did not report the initial sample values needed to investigate whether this occurred, but it remains possible that the acknowledged underpowering was influenced by this method. Also notable is the much larger number of participants in the intervention group (n = 78) versus the comparison group (n = 43), despite even assignment, suggesting that several families declined to consent after learning they had been placed in the control group. In addition, the practitioners administering the treatment were in the midst of PMTO certification and required a minimum number of cases for completion, which also inflated the size of the treatment group. Further research with a larger sample should resolve some of these statistical limitations.

References

Homer, C. (2002). Using the Zelen design in randomized controlled trials: Debates and controversies. Journal of Advanced Nursing, 38(2), 200-207.