Please read

SOCIAL WORK CASE STUDIES: CONCENTRATION YEAR

Social Work Research:
Qualitative Groups

A focus group was conducted to explore the application of
cross-system collaboration and its effect on service delivery
outcomes among social service agencies in a large urban county
on the West Coast. The focus group consisted of 10 social workers
and was led by a facilitator from the local office of a major commu-
nity support organization (the organization). Participants in the
focus group had diverse experiences working with children, youth,
adults, older adults, and families. They represented agencies that
addressed child welfare, family services, and community mental
health issues. The group included five men and five women of
diverse ethnicities.

The focus group was conducted in a conference room at the
organization’s headquarters. The organization was interested in
exploring options for greater collaboration and less fragmentation
of social services in the local area. Participants in the group were
recruited from local agencies that were either already receiving
or were applying for funding from the organization. The 2-hour
focus group was recorded.

The facilitator explained the objective of the focus group and
encouraged each participant to share personal experiences and
perspectives regarding cross-system collaboration. Eight ques-
tions were asked that explored local examples of cross-system
collaboration and the strengths and barriers found in using the
model. The facilitator tried to achieve maximum participation by
reflecting the answers back to the participants and maintaining
eye contact.

To analyze the data, the researchers carefully transcribed the
entire recorded discussion and analyzed it with Verbatim Blaster,
a qualitative data analysis software package from StatPac. This
software focuses on content coding and word counting to identify
the most salient themes and patterns.
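
The kind of word counting such packages perform is simple to
illustrate. The sketch below is a minimal, hypothetical Python
example of frequency-based content coding; it is not the Verbatim
Blaster product, and the transcript fragment, stop-word list, and
salient_terms function are all invented for illustration.

    from collections import Counter
    import re

    # A tiny invented transcript fragment, for illustration only.
    transcript = """
    Collaboration across agencies helped our clients get referrals faster.
    Funding barriers made collaboration harder, and referrals still got lost.
    """

    # A minimal stop-word list; real packages ship much fuller lists.
    STOP_WORDS = {"a", "and", "across", "our", "get", "got", "made", "still", "the"}

    def salient_terms(text, top_n=5):
        """Count content words as a crude proxy for salient themes."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return counts.most_common(top_n)

    print(salient_terms(transcript))
    # e.g., [('collaboration', 2), ('referrals', 2), ...]

A real qualitative analysis would go further, mapping counted terms
onto analyst-defined codes, but the frequency step works as shown.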

The focus group was seen by the sponsoring entity as
successful because every participant eventually provided feed-
back to the facilitator about cross-system collaboration. It was also
seen as a success because the facilitator remained engaged and
nonjudgmental and strived to have each participant share their
experiences.

In terms of outcomes, the facilitator said that the feedback
obtained was useful in exploring new ways of delivering services
and encouraging greater cooperation. As a result of this process,
the organization decided to add a component to all agency annual
plans and reports that asked them to describe what types of cross-
agency collaboration were occurring and what additional efforts
were planned.

Please read

Journal of Social Work Practice in the Addictions, 9:55–70, 2009
Copyright © Taylor & Francis Group, LLC
ISSN: 1533-256X print/1533-2578 online
DOI: 10.1080/15332560802533612

Collocation: Integrating Child Welfare
and Substance Abuse Services

EUNJU LEE, PHD
Senior Research Scientist, Center for Human Services Research, School of Social Welfare, University at Albany, New York, USA

NINA ESAKI, PHD
Research Scientist, Center for Human Services Research, School of Social Welfare, University at Albany, New York, USA

ROSE GREENE, MA
Associate Director, Center for Human Services Research, School of Social Welfare, University at Albany, New York, USA

This article presents findings from a process evaluation of a pilot
program to address parental substance abuse in the child welfare
system. By placing substance abuse counselors in a local child
welfare office, the collocation program was designed to facilitate
early identification, timely referral to treatment, and improved
treatment engagement of substance-abusing parents. Frontline
child welfare workers in 6 of the 7 pilot sites endorsed the program
as they found that the collocated substance abuse counselors pro-
vided additional resources and facilitated case processing. Findings
suggest that clearly defined procedures and sufficient staffing of
qualified substance abuse counselors could lead to better programs.

KEYWORDS child welfare, parental substance abuse, service
integration

Received May 16, 2007; accepted February 8, 2008.
This research was funded by the Children’s Bureau (#90CW-1111), Administration for Children and Families, U.S. Department of Health and Human Services.
Address correspondence to Eunju Lee, Center for Human Services Research, School of Social Welfare, University at Albany, 135 Western Ave., Albany, NY 12222, USA. E-mail: Elee@uamail.albany.edu

Parental substance abuse is a well-known risk factor affecting families in the
child welfare system. However, both the child welfare and substance abuse
service systems have faced challenges in identifying, engaging, and provid-
ing effective treatment to substance-abusing parents investigated for child
maltreatment.

Challenges include differences in goals, legal mandates, and practices
between the child welfare and substance abuse fields. As a result of the
Adoption and Safe Families Act of 1997, the timelines for placement deci-
sions and family reunification were shortened, placing unrealistic demands
on substance-abusing parents to make significant life changes. Additionally,
the child welfare system seeks to protect children and, whenever possible,
to keep families together. Substance abuse treatment providers view addic-
tion as a chronic, relapsing condition and traditionally place primary focus
on the individual client.

In response to these challenges, policymakers and administrators have
invested in service integration models. Promising results from the Illinois
Title IV-E demonstration program are likely to generate further interest in
service integration (Marsh, Ryan, Choi, & Testa, 2006; Ryan, Marsh, Testa, &
Louderman, 2006). Despite this recent advance, empirical research on service integration
models is still limited and few studies have rigorously examined the imple-
mentation issues of these models.

A collocation program piloted in a northeastern state is a service integra-
tion model designed to address parental substance abuse in the child welfare
system. The program consists of placing credentialed alcoholism and sub-
stance abuse counselors (CASACs) in local child welfare offices to work with
frontline child welfare workers to increase the level of substance abuse iden-
tification, treatment referral, and treatment engagement. This article presents
the results of a process evaluation of the collocation model using data from
interviews, focus groups, and administrative records. The program model,
implementation process, implementation challenges, perceived effects, and
suggestions for future service integration models are examined.

LITERATURE REVIEW

Prevalence and Risk of Substance Abuse in the
Child Welfare System

Although substance abuse is considered a serious risk factor for child
maltreatment, current prevalence rates of parental substance abuse in child
welfare cases vary widely due to differences in definitions and methodology
(Besinger, Garland, Litrownik, & Landsverk, 1999; Semidei, Radel, & Nolan,
2001; Young, Boles, & Otero, 2007). For example, the Child Welfare League
of America (1998) estimated that at least 50% of confirmed cases of child
maltreatment involve parents with substance abuse problems. Semidei et al.
(2001) found substance abuse contributed to child maltreatment for one
third to two thirds of the families involved with child welfare agencies.
Parental alcohol or drug use has also been strongly associated with the sub-
stantiation of abuse or neglect allegations (Sun, Shillington, Hohman, & Jones,
2001; Wolock, Sherman, Feldman, & Metzger, 2001). Estimates of parental
substance abuse for children entering foster care have been even more stag-
gering: About 80% of children placed out of home due to maltreatment have
parents with substance abuse issues (Besinger et al., 1999; U.S. Department of
Health and Human Services [USDHHS], 1999). The prognosis for families with
substance abuse problems in the child welfare system is dismal. Child mal-
treatment cases involving parental substance abuse often result in recurring
maltreatment allegations, longer stays in foster care, and reduced likelihood
of family reunification (Ryan et al., 2006; Smith & Testa, 2002; USDHHS, 1999;
U.S. General Accounting Office, 1998; Wolock & Magura, 1996).

Barriers to Service and Treatment

Unfortunately, less than half of all parents with substance abuse issues in the
child welfare system enter and complete necessary alcohol and drug services
(Young, Gardner, & Dennis, 1998). Gregoire and Schultz (2001) found that few
parents complete assessment or treatment. Engaging and retaining these clients
in treatment has been a critical problem (Choi & Ryan, 2006; USDHHS, 1999).
There have been clinical and systemic barriers for engagement and retention of
parents in treatment (McAlpine, Marshall, & Doran, 2001). These issues revolve
around the nature of the child welfare job, the types of substance abuse treat-
ment services readily available in communities, federal and state policies, and
the differing perspectives of the child welfare and substance abuse fields.

First, child welfare staff lacks the training and experience to accurately
assess the extent of substance abuse problems of parents investigated for
child maltreatment (Semidei et al., 2001; Tracy, 1994; Young et al., 1998).
Parents in the child welfare system are likely to deny their alcohol and other
drug problems as well as their need for help, in part, because they fear
removal of their children (Dore, Doris, & Wright, 1995; Jessup, Humphreys,
Brindis, & Lee, 2003). Child welfare workers whose primary focus is the
safety of children are also not experienced in helping parents with
substance problems (Marsh & Cao, 2005; Tracy & Farkas, 1994) and view
substance-abusing parents as difficult to treat (Semidei et al., 2001).

Effective treatment designed for parents, especially women with young
children, is not easily available in many communities. Many providers are
not prepared or equipped to address the complex physical, mental, social,
and economic issues facing these women and their children (USDHHS,
1999). In addition, these parents, particularly mothers, often lack critical
concrete supports (e.g., child care, transportation) necessary to begin and
complete treatment (Azzi-Lessing & Olsen, 1996; Carlson, 2006).

Despite a lengthy recovery process and the need for concrete services
to enter and complete treatment, federal and state policies place demanding
timelines on such families. Under the Adoption and Safe Families Act
(ASFA) of 1997, parents must resolve their problem within a 12-month
period or risk permanent loss of their children (Green, Rockhill, & Furrer,
2006; Smith, 2001). These policies not only place demands on substance-
abusing parents to make significant life changes in relatively brief periods of
time, but also place undue burdens on child welfare services to accelerate
accurate assessment, referral, and case management services (McAlpine
et al., 2001).

Finally, the child welfare and substance abuse treatment systems have
different perspectives (Feig, 1998; Young & Gardner, 1998). Substance
abuse treatment staff members who are knowledgeable about addiction
focus almost exclusively on the drug abuser. In contrast, child welfare
workers who are more knowledgeable about the consequences of addiction
on the other family members might have a punitive attitude toward sub-
stance abusers and focus on the maltreated child. In addition, given the
often different background and training experiences of workers in these
two fields, child welfare workers and substance abuse treatment providers
typically know very little about the other area (Carlson, 2006).

Need for Collaboration Between the Two Systems

To address the challenges associated with substance abuse in child welfare,
strategies for integrating substance abuse treatment and child welfare
services have gained increased popularity (Horwath & Morrison, 2007; Ryan
et al., 2006). Historically, the implicit model in child welfare depended on
the child welfare worker acting in isolation to motivate the substance-abusing
client to seek treatment. However, more recently, policymakers, practitio-
ners, and scholars have come to believe that collaboration between sub-
stance abuse and child welfare systems can be more effective in engaging
the parents in treatment (Colby & Murrell, 1998; Cornerstone Consulting
Group, 2002; McAlpine et al., 2001; Peterson, Gable, & Saldana, 1996; Ryan
et al., 2006; Semidei et al., 2001; Young & Gardner, 2002).

Some research suggests collaboration between substance abuse treatment
and other social service systems improves treatment outcomes, especially for
women (Dore & Doris, 1998; Kraft & Dickinson, 1997; Marsh, D’Aunno, &
Smith, 2000; Randolph & Sherman, 1993; Walsh & Young, 1998; Young &
Gardner, 1998). Dore and Doris (1998) found that nearly half of the women in
their study were able to complete treatment through a placement prevention
initiative staffed by both child welfare workers and substance abuse specialists.
For women with children, improved access to treatment, specifically the provi-
sion of transportation, outreach, and child-care services, showed a negative
relationship with continued substance abuse (Marsh et al., 2000).

A number of states have initiated collaborative efforts between the
child welfare and substance abuse systems to build effective new partner-
ships. Although some show promising results (Cornerstone Consulting
Group, 2002; Maluccio & Ainsworth, 2003; Young & Gardner, 2002), there
has been limited empirical evidence to demonstrate the impact of these
collaborative efforts on child welfare outcomes (Barth, Gibbons, & Guo,
2006; Marsh et al., 2006). One exception has been a recent study (Ryan et
al., 2006) that demonstrated positive results after provision of intensive
case management to link substance abuse services and child welfare ser-
vices in Illinois.

Collocation: A Service Integration Model

Collocation refers to strategies that place multiple services in the same
physical space (Ginsburg, 2008). It has been suggested as a strategy for
integrating different service systems for clients with multiple service needs
(Agranoff, 1991; Austin, 1997). Clients with multiple needs face difficulties
in navigating fractured systems with different sets of rules and expecta-
tions. As a result, they are less likely to receive needed services and more
likely to experience poor outcomes (Marsh et al., 2006). A recent study
indicated that child welfare outcomes are substantially enhanced when
families receive appropriate substance abuse services (Green, Rockhill, &
Furrer, 2007).

A collocation model, which places substance abuse counselors at local
child welfare agencies, serves as a simple, concrete, and straightforward
mechanism for facilitating collaboration between the two systems. The
model has the potential to increase early identification of substance-abusing
parents in the child welfare system. It could also address some of the barri-
ers to treatment, thereby engaging and retaining substance-abusing parents
in treatment that might, in turn, lead to improved child welfare outcomes.
Substance abuse specialists are trained to utilize empirically based tech-
niques, such as the transtheoretical model of change (Prochaska &
DiClemente, 1984; Prochaska & Norcross, 1999) and motivational interview-
ing (Miller & Rollnick, 2002), a process of engagement that is designed to
overcome child welfare clients’ denial of abuse and to motivate them to
enter treatment. These specialists, working in concert with child welfare
workers, can address the logistical and psychosocial barriers to treatment,
can build a trusting relationship during the “window of opportunity” when
parents feel highly vulnerable, and can successfully obtain the parents’
acceptance of care plan goals within federal and state time constraints.

Unfortunately, literature specific to the topic of collocation is limited.
Several descriptive studies regarding collocation have been conducted in
such venues as human services in schools (Briar-Lawson, Lawson, Collier, &
Joseph, 1997; Tapper, Kleinman, & Nakashian, 1997), mental health service
providers in buildings of primary care physicians for the treatment of
depressed patients (Valenstein et al., 1999), and substance abuse providers
in departments of social services for the assessment of Temporary Assistance
for Needy Families (TANF) recipients (Center on Addiction and Sub-
stance Abuse, 1999). Similarly, research regarding the collocation of
substance abuse specialists in child protective services (CPS) is sparse, and
although encouraging regarding intermediate outcomes (McAlpine et al.,
2001), remains inconclusive regarding longer term child welfare outcomes
(Marsh et al., 2006). McAlpine and colleagues (2001) examined a program
that included collocating substance abuse specialists in child welfare offices.
They found a substantial increase in use of the substance abuse specialist by
the child welfare office in less than 1 year—from an initial 10 staff
members making requests for 169 investigations to 32 staff members
making requests for 282 investigations. A recent evaluation of the Illinois
Title IV-E demonstration program showed promise of service integration for
substance-abusing parents whose children were removed from their care
(Ryan et al., 2006).

Despite encouraging outcomes, additional research is needed regarding
service integration models for child welfare clients. Particularly useful
would be studies examining implementation issues. The Maryland Title IV-E
demonstration was terminated due to several factors, but some were related
to program implementation (USDHHS, 2005), indicating the difficulties of
service integration despite its promise.

METHODOLOGY

To address the issue of substance abuse in families involved in the child
welfare system, the child welfare and substance abuse state agencies in a
northeastern state issued a request for proposals (RFP). Collocation was one
of the suggested models funded under this RFP, using TANF prevention
funds. For this model, CASACs were to be collocated in child welfare offices
to identify and assist parents with substance abuse problems. Treatment
agencies were eligible to apply for the funding in partnership with child
welfare offices in their region. In 2001, nine programs began to serve child
welfare clients and the pilot programs ended in most sites by 2004.

Study Design

From 2004 to 2005, the authors conducted a process study as part of an
evaluation of the pilot collocation program. The study included seven sites:
four programs in primarily rural locations and three programs in primarily
metropolitan areas. Two of the original sites were eliminated from the
study. One site was defunded in the first year due to the inability of the
substance abuse treatment agency to establish a working relationship with
the local child welfare office. The second site adopted a blended interven-
tion model combining the collocation and family drug court programs, which
made it unsuitable for an evaluation of the collocation model.

The study’s goal was to examine the implementation processes and to
assess whether program sites varied in implementation success. Specifically,
the authors were interested in examining the following questions: 1) Were
the target populations served? 2) Did collocation increase collaboration and
understanding between the child welfare and substance abuse agencies?
3) Was the program implemented as intended? and 4) What were the barri-
ers to successful implementation?

Data and Analysis

Data were collected from focus groups and individual interviews at each of
the seven collocation sites, as well as from interviews with stakeholders at
the state agencies. Information gathered from stakeholders included the
planning and startup of the program, the operations, processes for case
identification and referrals, the relationship between the child welfare and
substance abuse fields, and administrative procedures and protocols. In
each collocation site, a focus group consisting of 10 to 15 child welfare
workers and a separate focus group for 6 to 12 child welfare supervisors
were conducted. Interviews were also held with at least one key child
welfare administrator, often the individual with responsibility for overseeing
the program at each program site. Separate interviews were conducted with
each CASAC and his or her supervisor from the treatment agency. To reduce
bias, two investigators were present at each of the focus groups and
interviews, and sessions were tape-recorded. In total, 14 focus groups and
18 interviews were conducted. Additionally, progress reports and other
administrative records, such as the original contracts, were reviewed.

After each site visit, the tapes from the interviews and focus groups
were transcribed and categorized. To ensure accuracy and reduce bias,
the transcribed notes were compared with the notes taken by the two
authors. Data were then analyzed using the constant comparison method
(Glaser, 1978) by writing down emerging themes and by comparing similar-
ities and differences within and across sites (Miles & Huberman, 1994;
Patton, 2002).

RESULTS

Despite initial start-up difficulties, all but one of the seven sites succeeded
in implementing the collocation model. At the one site where implementa-
tion did not occur, staff at the child welfare office and at the treatment
agency disagreed on program goals and operating procedures and could
not establish a strong working relationship.

In general, child welfare workers who admitted to being initially skep-
tical about yet another new initiative ended up embracing the program.
Similarly, substance abuse counselors who typically provide services within
their clinics grew to realize the benefits of home visits as a way to identify
and assess substance abuse issues and to elicit greater awareness of client
needs. Both agreed that the collocation program improved their under-
standing of each other’s system and perceived that the program improved
early identification, timely referral to treatment, and treatment outcomes of
substance-abusing parents in the child welfare system.

Challenges

ACCEPTANCE BY CHILD WELFARE STAFF

Although frontline child welfare workers were advised of the new initiative,
specific mechanisms were not established about how to work with the
collocated substance abuse worker. In addition, many of the child welfare
workers were skeptical about the introduction of yet another new program
in their offices. As a result, the burden of implementation fell heavily on the
CASACs and their supervisors.

The lack of established procedures made implementation difficult,
especially in the first year. All of the collocated counselors encountered a
number of startup difficulties, particularly in obtaining acceptance from the
child welfare workers and in achieving an adequate number of case refer-
rals. Although the concept of collocation implies an egalitarian partnership,
it was the CASACs who had to make an extra effort to ingratiate themselves
with the child welfare staff and to make personal appeals for case referrals.
Two CASACs were replaced early on because they were unable to develop
close working relationships with child welfare workers.

MODEL VARIATIONS

Although the program framework was identified in the RFP, the design of
the program mechanisms was determined by the localities. At six out of the
seven sites, the collocated counselors consistently provided two core
services: assessment of substance abuse and referral to treatment services.
However, the programs varied on how the counselors provided these ser-
vices and whether they provided additional services beyond these two core
activities.

Two basic variations of the program emerged: one in the metropolitan
sites and one in the rural sites. In the metropolitan programs, the client
interviews, assessments, and referrals were conducted in the child welfare
office. In the rural programs, the counselors conducted home visits and their
services were not physically limited to the child welfare offices. Additionally,
in the rural sites, the CASACs continued to work with the client over a longer
period of time than in the metropolitan programs by providing case manage-
ment services, such as transportation, for the duration of their treatment.

Similarly, there were two different processes for how the case was
referred to the collocated counselors. Identification of substance abuse cases
occurred either through a call to the child abuse hotline or after the initial
investigation. In some sites, the hotline call that identified parental substance
abuse was forwarded directly to the substance abuse counselor, although
this represented a minority of referrals to the program. Most often, cases
were referred to the collocated counselor after the investigation was initiated
by the child welfare worker. Child welfare workers were generally willing to
involve the CASACs in such cases to obtain additional assistance and coun-
sel. However, they were inconsistent regarding the types of cases that were
referred and when the referrals were made. No consistent rules were estab-
lished, resulting in individual child welfare workers using their own discretion.

TARGET POPULATIONS AND CAPACITY

Overall, the collocation programs served the intended population: TANF
parents affected by substance abuse. In most cases, the CASACs served
mothers who were being investigated for child maltreatment. However, on
occasion, the counselors would provide services to other family members.
In some of the smaller rural counties, the collocated counselors worked
with a significant number of adolescents with substance abuse issues
involved in persons in need of supervision (PINS) cases, who were neither
the perpetrators nor victims of the CPS reports.

As for capacity, even in the smallest county, a single CASAC could not
serve all eligible clients, especially when the CASAC was both conducting
home visits and providing case management services. Due to the level of
funding, the sites were limited to hiring one or two CASACs. Although child
welfare workers generally respected the collocated counselors for their
ability to engage the clients as well as for their knowledge of appropriate
treatment services, they expressed frustration about the limited service
capacity that could be offered by one or two CASACs. Child welfare work-
ers in one focus group expressed a desire for 10 substance abuse counse-
lors to be assigned to their local program.

CONFIDENTIALITY

At a number of sites, there was confusion and apprehension among the child
welfare workers about sharing information. Child welfare workers felt that
they had to obtain consent forms from their clients to share information with
the CASACs. This process slowed down the CASACs’ effort to quickly engage
clients and provide them with appropriate assessments and treatment referrals
during the short investigation period. Eventually, some sites developed mem-
oranda of understanding (MOUs) between the two agencies that addressed
this issue. In compliance with the Health Insurance Portability and
Accountability Act (HIPAA), CASACs obtained a signed consent form from clients
to share client information with child welfare workers. Addressing the issues
of information sharing and confidentiality prior to implementation is impor-
tant to reduce confusion and difficulties for workers on both sides.

Benefits

IMPROVED COORDINATION OF SERVICES

At the programmatic level, there was an improved relationship between the
child welfare and substance abuse fields as demonstrated by the enhanced
coordination of service delivery. This could be partly attributed to an
increased awareness on both sides of the goals, objectives, and challenges
of each other’s field. Similarly, the physical proximity of the CASAC made a
difference for child welfare workers and their clients. Child welfare workers
were able to contact the CASAC immediately and have the client meet with
the substance abuse specialist in a timely fashion, which was extremely
important due to policies imposing time limitations in case determination.

The child welfare workers believed the program led to less recurrence
of child maltreatment and consequently fewer subsequent CPS reports.
However, this impression has yet to be verified by a comprehensive review
of the administrative data.

INCREASED SUBSTANCE ABUSE IDENTIFICATION AND BETTER REFERRAL

The child welfare workers agreed that the substance abuse counselors were
better equipped to persuade child welfare clients to admit to substance
abuse problems. Two possible explanations can be offered. First, unlike the
child welfare workers, the counselors were trained specifically in tech-
niques for engaging clients with substance abuse problems. Second, the cli-
ents were not as threatened by the counselors as they were by the child
welfare workers, who could ultimately remove their children from the
home. Therefore, they were more willing to be honest about their substance
abuse issues and were more motivated to resolve their problems with assis-
tance from an experienced substance abuse counselor.

Some counselors helped clients access treatment services and worked
with them to remain in treatment. In the rural sites, the counselors followed
the clients beyond the referral stage by providing additional case management
services, such as arranging transportation and removing other barriers that
might impede clients from obtaining treatment. In all of the sites, the coun-
selors had discretionary funds to assist clients in this capacity.

DISCUSSION

Findings from this study offer insight into the challenges and potential
benefits of implementing a program to collocate substance abuse counse-
lors in child welfare offices. The collocation programs faced issues similar to
those that plague many new initiatives. Suggestions for successful implemen-
tation of a collocation program include careful planning, engaging child
welfare workers, standardizing procedures, and providing strong leadership.

Planning

To facilitate communication and processing of cases between child welfare
workers and counselors, child welfare offices and collaborating treatment
agencies would benefit from detailing policies on confidentiality in MOUs.
Similarly, providing adequate physical facilities for collocated counselors
should be planned to enhance their integration into the child welfare offices.

In the planning phase, administrators might want to consider the specific
qualities that would maximize the acceptance of the collocated counselor by
the child welfare office. Early on, it needs to be recognized that the collocated
substance abuse counselors are entering a potentially unwelcoming culture.
Although good clinical skills are important, the collocated substance abuse
counselor also needs a flexible personality, as demonstrated by a willingness
to work with child welfare workers, an aptitude for learning new rules, and
an open-mindedness toward the culture of child welfare offices.

Engaging Child Welfare Workers

Programs that engage both child welfare workers and substance abuse coun-
selors in advance of program implementation are likely to experience greater
success. Informing workers of the program and soliciting their feedback
beforehand will lead to easier program implementation when formally intro-
duced. Providing the workers with information regarding the program, espe-
cially the benefits to both them and their clients, is essential. Child welfare
workers are often wary of new initiatives that tend to add more work to their
already heavy caseloads. The successful implementation of the collocation
program was partly due to the fact that the CASACs provided additional
resources to child welfare workers, thus lessening some of their burden.

Similarly, substance abuse counselors need to understand that their role is
to be complementary to that of the child welfare workers. They need to be
trained on the policies and practices of the child welfare system from the
beginning, especially the laws, requirements, and timelines pertinent to the child
welfare system. To be accepted and effective, they need to overcome precon-
ceived notions about the child welfare system and adapt to the agency’s culture.

Standardizing Procedures

Collocation programs would benefit from clearly stated procedures outlining
the program model, program eligibility, and the process for identification,
referral, and follow-up of clients. The lack of such procedures is not condu-
cive to collaboration, as workers from the two systems could be left with
differing expectations.

Standardization may include the identification and referral of all child
welfare cases with parental substance abuse issues directly to the collocated
substance abuse counselors as soon as possible. Specifically, cases with
substance abuse issues identified in the initial hotline call may be auto-
matically referred to the counselors. Similarly, all other cases that are inves-
tigated by child welfare workers should be screened, if possible, using a
brief standardized tool. The earlier the intervention, the better the potential
outcomes for the families. The CPS investigation provides a window of
opportunity to engage child welfare clients when they are feeling vulnerable
and perhaps more receptive to treatment services.

In addition, it might be advantageous to implement an automated
information system to track cases that are referred to the CASAC. By so
doing, both the child welfare workers a

Please read

Social Work Evaluation
Enhancing What We Do


THIRD EDITION

JAMES R. DUDLEY
University of North Carolina at Charlotte

Oxford University Press is a department of the University of Oxford. It furthers the University’s
objective of excellence in research, scholarship, and education by publishing worldwide.
Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2020

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, without the prior permission in writing of Oxford
University Press, or as expressly permitted by law, by license, or under terms agreed with the
appropriate reproduction rights organization. Inquiries concerning reproduction outside the
scope of the above should be sent to the Rights Department, Oxford University Press, at the
address above.

You must not circulate this work in any other form
and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Dudley, James R., author.
Title: Social work evaluation : enhancing what we do / James R. Dudley.
Description: Third Edition. | New York : Oxford University Press, 2020. |
Revised edition of the author’s Social work evaluation, [2014] |
Includes bibliographical references and index.
Identifiers: LCCN 2019032564 (print) | LCCN 2019032565 (ebook) |
ISBN 9780190916657 (paperback) | ISBN 9780190916671 (epub) | ISBN 9780190916664 (updf)
Subjects: LCSH: Social service—Evaluation. | Evaluation research (Social action programs)
Classification: LCC HV41.D83 2019 (print) | LCC HV41 (ebook) | DDC 361.3072—dc23
LC record available at https://lccn.loc.gov/2019032564
LC ebook record available at https://lccn.loc.gov/2019032565

1 3 5 7 9 8 6 4 2

Printed by Marquis, Canada

I dedicate this book to my students, who have inspired and encouraged me
over many years. I am deeply grateful to them!

CONTENTS

CSWE’s Core Competency Fulfillment Guide: 
How It Is Covered in the Book xiii

Preface xvii
New to this Edition xviii
Other Special Features xix
Organization of the Book xxi

Acknowledgments xxiii

part i INTRODUCTION

Chapter 1 Evaluation and Social Work: Making the
Connection 3

A Focus on Both Programs and Practice 4
Practice is Embedded in a Program 5
Introduction to Evaluation 7
A Three-Stage Approach 7
Different Purposes of Evaluations 7
Common Characteristics of Evaluations 10
Seven Steps in Conducting an Evaluation 20
Defining and Clarifying Important Terms 23
Summary 28
Key Terms 29
Discussion Questions and Assignments 29
References 30

part ii ORIENTATION TO THE BIGGER PICTURE
OF EVALUATIONS: WHAT’S NEXT?

Chapter 2 The Influence of History and Varying Theoretical
Views on Evaluations 35

Relevant Events in History 36
Varying Views on Theoretical Approaches 40
Synthesis of These Evaluation Perspectives 44
Key Perspectives for the Book 50
Three-Stage Approach 50
Summary 52
Key Terms 53
Discussion Questions and Assignments 53
References 54

Chapter 3 The Role of Ethics in Evaluations 56
Ethics for Conducting Evaluations 58
Diversity and Social Justice 67
Summary 74
Key Terms 74
Discussion Questions and Assignments 74
References 76

Chapter 4 Common Types of Evaluations 78
Common Program Evaluations 78
Common Practice Evaluations 89
Common Evaluations and the Three-Stage Approach 93
Summary 94
Key Terms 94
Discussion Questions and Assignments 94
References 95

Chapter 5 Focusing an Evaluation 96
Important Initial Questions 96
Crafting Good Study Questions for an Evaluation
as the Focus 99
Guidelines for Focusing an Evaluation 100
A Practical Tool 106
Summary 110
Key Terms 110
Discussion Questions and Assignments 110
References 111

part iii THE PLANNING OR INPUT STAGE

Chapter 6 Needs Assessments 115
The Logic Model 116
The Link Between Problems and Needs 118
The Underlying Causes 120
Input Stage and Planning a Proposed Program 121
Why Conduct a Needs Assessment? 122
Some Purposes of Needs Assessments 122
Methods of Conducting Needs Assessments 125
Needs Assessments and Practice Interventions 140
Suggestions for How to Conduct a Needs Assessment 141
Summary 143
Key Terms 144
Discussion Questions and Assignments 144
References 146

Chapter 7 Crafting Goals and Objectives 149
Goals for Program and Practice Interventions 150
Characteristics of Goals 151
Limitations of Goals 154
Crafting Measurable Objectives 156
Three Properties: Performance, Conditions, and Criteria 160
Differences Between Measurable Objectives of Programs
and Practice 164
Summary 166
Key Terms 166
Discussion Questions and Assignments 167
References 168

part iv THE IMPLEMENTATION STAGE

Chapter 8 Improving How Programs and Practice Work 171
James R. Dudley and Robert Herman-Smith

Link the Intervention to the Clients’ Problems 172
Implement the Intervention as Proposed 175
Adopt and Promote Evidence-Based Interventions 179
Focus on Staff Members 184
Accessibility of the Intervention 189
Program Quality 194
Client Satisfaction 196
Evaluating Practice Processes: Some Additional Thoughts 202
Summary 207

Key Terms 207
Discussion Questions and Assignments 207
References 208

part v THE OU TC OME STAGE

Chapter 9 Is the Intervention Effective? 215
The Nature of Outcomes 216
Varied Ways to Measure Outcomes 219
Criteria for Choosing Outcome Measures 222
Outcomes and Program Costs 223
Evidence-Based Interventions 224
Determining a Causal Relationship 227
Group Designs for Programs 229
Outcome Evaluations for Practice 236
Summary 247
Key Terms 247
Discussion Questions and Assignments 248
References 250

part vi FINAL STEPS IN COMPLETING AN
EVALUATION

Chapter 10 Analyzing Evaluation Data 255
James R. Dudley and Jeffrey Shears

Formative or Summative Evaluations and Data Analysis 255
Stages of Interventions and Data Analysis 257
Summary of Pertinent Tools for Qualitative Data
Analysis 260
Summary of Pertinent Tools for Quantitative Data
Analysis 264
Mixed Methods and Data Analysis 271
Summary 274
Key Terms 274
Discussion Questions and Assignments 275
References 275

Chapter 11 Preparing and Disseminating a Report of
Findings 276

Considering the Input of Stakeholders 277
Format of the Report 278

Strategies for Preparing a Report 283
Strategies for Disseminating Reports 287
Summary 289
Key Terms 290
Discussion Questions and Assignments 290
References 291

part vii CONSUMING EVALUATION REPORTS

Chapter 12 Becoming Critical Consumers
of Evaluations 295
Daniel Freedman and James R. Dudley

Stakeholders Who Consume Evaluation Reports 296
Critical Consumption of an Evaluation Report 299
The Need for Multiple Strategies on Reports 310
Helping Clients Become Critical Consumers 311
Summary 313
Key Terms 313
Discussion Questions and Assignments 313
References 314

Appendix A: American Evaluation Association
Guiding Principles for Evaluators:
2018 Updated Guiding Principles 317
A. Systematic Inquiry: Evaluators Conduct Data-Based
Inquiries That Are Thorough, Methodical, and
Contextually Relevant 317

B. Competence: Evaluators Provide Skilled Professional
Services to Stakeholders 317

C. Integrity: Evaluators Behave With Honesty and
Transparency in Order to Ensure the Integrity of the
Evaluation 318

D. Respect for People: Evaluators Honor the Dignity,
Well-being, and Self-Worth of Individuals and
Acknowledge the Influence of Culture Within
and Across Groups 319

E. Common Good and Equity: Evaluators Strive to
Contribute to the Common Good and Advancement
of an Equitable and Just Society 319

Appendix B: Glossary 321

Index 329

CSWE’S CORE COMPETENCY FULFILLMENT GUIDE: HOW IT IS COVERED IN THE BOOK

CSWE’S NINE SOCIAL WORK COMPETENCIES COVERED IN THE BOOK

Competency 1: Demonstrate Ethical and Professional Behavior
• Make ethical decisions by applying the standards of the NASW Code of Ethics, relevant laws and regulations, models for ethical decision-making, ethical conduct of research, and additional codes of ethics as appropriate to context (Chapters 1, 2, 3, 9, 10, 11, 12);
• Use reflection and self-regulation to manage personal values and maintain professionalism in practice situations (Chapters 2, 3, 12);
• Demonstrate professional demeanor in behavior; appearance; and oral, written, and electronic communication (Chapters 1, 3, 5, 8, 10, 11);
• Use technology ethically and appropriately to facilitate practice outcomes (Chapters 3, 6, 10, 11); and
• Use supervision and consultation to guide professional judgment and behavior (Chapters 3, 4, 5, 8).

Competency 2: Engage Diversity and Difference in Practice
• Apply and communicate understanding of the importance of diversity and difference in shaping life experiences in practice at the micro, mezzo, and macro levels (Chapters 2, 3, 5, 7, 8);
• Present themselves as learners and engage clients and constituencies as experts of their own experiences (Chapters 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12); and
• Apply self-awareness and self-regulation to manage the influence of personal biases and values in working with diverse clients and constituencies (Chapters 2, 3, 7, 8, 10, 12).

Competency 3: Advance Human Rights and Social, Economic, and Environmental Justice
• Apply their understanding of social, economic, and environmental justice to advocate for human rights at the individual and system levels (Chapters 1, 2, 3, 5, 6, 8, 10, 11);
• Engage in practices that advance social, economic, and environmental justice (Chapters 1, 2, 3, 6, 7, 8, 9, 11, 12).

Competency 4: Engage in Practice-informed Research and Research-informed Practice
• Use practice experience and theory to inform scientific inquiry and research (Chapters 1, 2, 4, 5, 11);
• Apply critical thinking to engage in analysis of quantitative and qualitative research methods and research findings (Chapters 2, 4, 6, 7, 9, 10, 11, 12);
• Use and translate research evidence to inform and improve practice, policy, and service delivery (Chapters 1, 2, 4, 6, 9, 10, 11, 12).

Competency 5: Engage in Policy Practice
• Identify social policy at the local, state, and federal level that impacts well-being, service delivery, and access to social services (Chapters 2, 5, 6, 11);
• Assess how social welfare and economic policies impact the delivery of and access to social services (Chapters 4, 6, 8, 11);
• Apply critical thinking to analyze, formulate, and advocate for policies that advance human rights and social, economic, and environmental justice (Chapters 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12).

Competency 6: Engage with Individuals, Families, Groups, Organizations, and Communities
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks to engage with clients and constituencies (Chapters 1, 2, 3, 4, 6, 7, 8, 9);
• Use empathy, reflection, and interpersonal skills to effectively engage diverse clients and constituencies (Chapters 2, 3, 4, 5, 6, 8, 12).

Competency 7: Assess Individuals, Families, Groups, Organizations, and Communities
• Collect and organize data, and apply critical thinking to interpret information from clients and constituencies (Chapters 1, 3, 4, 6, 10, 11);
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks in the analysis of assessment data from clients and constituencies (Chapters 1, 2, 4, 5, 6, 7, 8, 10, 11, 12);
• Develop mutually agreed-on intervention goals and objectives based on the critical assessment of strengths, needs, and challenges within clients and constituencies (Chapters 1, 2, 3, 4, 5, 7, 11);
• Select appropriate intervention strategies based on the assessment, research knowledge, and values and preferences of clients and constituencies (Chapters 1, 2, 4, 5, 6, 7, 8, 11, 12).

Competency 8: Intervene with Individuals, Families, Groups, Organizations, and Communities
• Critically choose and implement interventions to achieve practice goals and enhance capacities of clients and constituencies (Chapters 1, 2, 3, 4, 5, 7, 8, 9, 11, 12);
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks in interventions with clients and constituencies (Chapters 1, 2, 3, 4, 6, 7, 8, 9, 11, 12);
• Use interprofessional collaboration as appropriate to achieve beneficial practice outcomes (Chapters 1, 2, 4, 6, 8, 11);
• Negotiate, mediate, and advocate with and on behalf of diverse clients and constituencies (Chapters 1, 2, 3, 4, 5, 7, 8); and
• Facilitate effective transitions and endings that advance mutually agreed-on goals (Chapters 1, 4, 5, 7, 9, 11, 12).

Competency 9: Evaluate Practice with Individuals, Families, Groups, Organizations, and Communities
• Select and use appropriate methods for evaluation of outcomes (Chapters 1, 2, 4, 5, 7, 9, 10, 11);
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks in the evaluation of outcomes (Chapters 1, 2, 3, 4, 6, 7, 9, 10, 11, 12);
• Critically analyze, monitor, and evaluate intervention and program processes and outcomes (Chapters 1, 2, 4, 5, 7, 8, 9, 10, 11, 12);
• Apply evaluation findings to improve practice effectiveness at the micro, mezzo, and macro levels (Chapters 1, 2, 4, 6, 7, 10, 11, 12).

Note. CSWE = Council on Social Work Education; NASW = National Association of Social Workers.

PREFACE

Every social worker is expected to know how to conduct evaluations of his or her practice. In addition, growing numbers of social workers will also be assuming
a program evaluator role at some time in their careers because of the increasing
demands for program accountability. Yet, many social workers are still inadequately
prepared to design and implement evaluations. Social Work Evaluation: Enhancing
What We Do introduces social workers and other human service workers to a broad
array of knowledge, ethics, and skills on how to conduct evaluations. The book
prepares you to conduct evaluations at both the program and practice levels.

The book presents evaluation material in a form that is easily understood and
especially relevant to social work students. Research is among the most difficult con-
tent areas for social work students to comprehend. This is partially because it is dif-
ficult to see the applicability of research to social work practice. The statistical and
other technical aspects of research content also tend to be unfamiliar to students
and difficult to comprehend. This book is designed to overcome these and
other barriers more effectively than other social work evaluation texts by
continually discussing evaluation in the context of social work programs and
practice and by using numerous pertinent examples.

The book is organized around a three-stage approach to evaluation. The stages
divide evaluation into activities during the planning of an intervention, its
implementation, and, afterward, the measurement of its impact on the recipients. In addition, the text
describes seven general steps to follow in conducting evaluations. These steps offer
a flexible set of guidelines to follow in implementing an evaluation with all its prac-
ticalities. The book also gives significant attention to evidence-based interventions
and how evaluations can generate evidence as a central goal. Readers are also given
several specific suggestions for how to promote evidence-based practice.

This book can be used for several research and practice courses in both Bachelor
of Social Work (BSW) and Master of Social Work (MSW) programs. It is designed
for primary use in a one- semester evaluation course in MSW programs. It can also
be a primary text along with a research methods text for a two- course research
sequence in BSW programs. The book can also be very useful as a secondary text

xviii P R E FA C E

in BSW and MSW practice courses at all system levels and policy courses. In add-
ition, it is an excellent handbook for the helping professions in other fields such as
counseling, psychology, and gerontology.

NEW TO THIS EDITION

The entire book has been carefully reviewed, revised, and updated, and summaries
have been added to each chapter. Also, new material has been added in several sections. A strength
of the book is that it covers both program and practice evaluations. In the new ed-
ition, greater attention is now given to programs and practice as key concepts and
how the evaluation process offers more understanding of each of them and their
relationship to each other. Evaluations at both levels have much in common. In add-
ition, there is frequently a need to distinguish between these two levels of evaluation.
In the new edition, separate sections are provided for both program and practice
evaluations when there is a need to explain their differences and how each can
be implemented. A symbol has been added to the text to let you know when the
material following the symbol covers only programs or practice.

Accreditation standards of social work mandated by the Council on Social Work
Education (CSWE) are updated and highlighted in a “Core Competency Fulfillment
Guide” at the beginning of the text. These standards are frequently addressed in the
content of every chapter. Content on the six core social work values of the National
Association of Social Workers (NASW) Code of Ethics is also added in the new
edition and elaborated on in the ethics chapter to highlight how they provide the
foundation for the ethics used in evaluations.

Content on using the logic model as an analytic tool in conducting
evaluations is expanded. This gives practitioners the capacity to have continual oversight of
evaluation concerns. Most important, this tool helps remind social workers of the
importance of the logical links among the clients’ problems, needs, and their causes,
their goals, and the interventions chosen to reach their goals. The logic model is also
useful for supporting evidence-based practice and giving clients greater assurance
that they will be successful in reaching their goals.

The seven steps for conducting an evaluation are emphasized throughout the
book and provide a helpful guide for the readers to follow. An emphasis on client-
centered change highlighted in earlier editions is strengthened in this edition in
these seven steps. Client-centered change is promoted through innovative ways of
assisting clients, staff members, and community groups in becoming more actively
involved in the evaluation process. Ultimately, these changes are intended to help
clients succeed as recipients of these interventions. Clients are presented throughout
the book as a key group of stakeholders who are often overlooked in other texts.

A new Teacher and Student Resource website has been added and is available
from Oxford University Press. It will contain all the resources provided with the
book in earlier editions along with some new helpful aids for both teachers and
students.

OTHER SPECIAL FEATURES

Both qualitative and quantitative methods of evaluation are described and
highlighted throughout the book. While quantitative methods are pertinent to both
summative and formative evaluations, qualitative methods are presented as espe-
cially relevant to many types of formative evaluations. Criteria are offered for when
to use qualitative methods and when to use quantitative ones, and examples of both
are provided. Mixed methods are also encouraged and often suggested as the best
option.

Many efforts have been made throughout the book to help students and
practitioners view evaluation as being helpful and relevant not only to programs but
also to their own practice. Throughout the book, the evaluation content on practice
interventions offers the readers practical insights and tools for enhancing their own
practice and increasing their capacity to impact their clients’ well- being.

The planning stage for new programs and practice interventions is presented
as perhaps the most critical stage before new programs and practice interventions
are implemented. Unfortunately, most agencies do not invest nearly enough time,
thought, and resources in the tasks of this critical planning period. The tasks of
planning include clearly identifying and describing the clients’ problems and needs to
be addressed, along with the goals for resolving them. In addition, the proposed
interventions need to be carefully developed to uniquely fit the problems and needs
of their clients. Moreover, evidence that these interventions can be effective is para-
mount to develop and emphasize.

The evaluation process is described as a collaborative effort that encourages the
participation of the clients and other important stakeholders in some of the steps.
A periodic focus on the principles of participant action research is highlighted in
some sections to emphasize how evaluation can be used to promote client involve-
ment, empowerment, and social change. Also, special emphasis is placed on staff
and client involvement in consuming evaluation findings and becoming more active
gatekeepers.

As mentioned earlier, another feature of the text is that it directly addresses all
the current accreditation standards of the CSWE, the national accrediting organ-
ization for social workers. The CSWE promulgates minimum curriculum standards
for all BSW and MSW programs, including research and evaluation content. This
book devotes extensive attention to several competencies related to evaluation with
a special focus on three areas: ethics, diversity, and social and economic justice.
Because of the importance of these three competency areas, they are highlighted
in numerous examples and exercises throughout the book. In addition, practice, an
overall competency of the social work curriculum, is often highlighted as it relates
to evaluation. Evaluation is described throughout the book as a vital and necessary
component of practice at both the MSW and the BSW levels.

While a social work perspective is emphasized that helps in understanding
the connections of evaluation with practice, ethics, diversity issues, and social
justice, other human service professionals will also find these topics pertinent.


Professionals in disciplines such as psychology, family and individual therapy, public health, nursing, mental health, criminal justice, school counseling, special education, addictions, sociology, and others will find this text to be a very useful handbook.

Technology skills are infused in different parts of the text. Social work practitioners must know how to use various electronic tools like Google, e-mail, electronic discussion lists, and data analysis programs like SPSS (Statistical
Package for the Social Sciences). The book includes electronic exercises and other
assignments that involve students using such tools. Emphasis is given to electronic
skills that help students obtain access to the latest information on client populations,
practice and program interventions, information from professional organizations,
relevant articles, and helpful discussion lists.

Another distinguishing aspect of this book is the extensive use of case examples.
It has been the author’s experience that students’ learning is enhanced when they can
immediately see the application of abstract concepts to human service situations.
Specific evaluation studies from professional journals, websites, and books are fre-
quently highlighted to illustrate concepts, findings, data analyses, and other issues.
Numerous examples of evaluations that Dudley has conducted are frequently used.
Exemplary evaluation activities of social work students and practitioners are also
generously included. These illustrations reflect what students will often find in field
placement agencies and social agencies where they are hired. Figures and graphs
are also used and designed to appeal to students with a range of learning styles. The
book also contains a glossary of terms.

In addition, the book is user- friendly for faculty who teach evaluation courses.
Sometimes research courses are taught by social work educators who do not have the time or interest to conduct their own evaluations. Such faculty may often feel less qualified to teach an evaluation course. This text is understandable to both inexperienced
and experienced faculty. Also, discussion questions included at the end of each
chapter can serve as a focus for class discussions, quizzes, and tests.

A chapter, “Becoming Critical Consumers of Evaluations,” is also included
to stress the importance of the consumer role in reading and utilizing evaluation
studies of other researchers. The chapter walks the readers through each of the
seven steps of conducting an evaluation, pointing out strengths and weaknesses of
evaluation reports using a recently published evaluation report as an illustration.
This chapter and others provide guidelines for cautiously and tentatively applying the findings of someone else’s evaluation to your own practice with clients.

In addition, a Teacher and Student Resource website is an online ancillary resource available from Oxford University Press with the purchase of the book. It elaborates on how the content of the book can be used and suggests helpful ways to involve students in understanding and using it. The teacher’s guide includes a sample syllabus, PowerPoint presentations for each chapter, and a test bank of multiple-choice exam questions for each chapter.


ORGANIZATION OF THE BOOK

The book is organized into seven parts. Part I, the first chapter, introduces evalu-
ation and how it is described and defined in the book. The chapter begins with a
persuasive rationale for why social workers should be proficient in evaluation. The
concepts of program and practice are introduced along with how they are similar
and different. Definitions of program and practice evaluations, their characteristics
and aims, and the larger social contexts for evaluations are introduced. The misuses
of the term evaluation are also pointed out. Also, evidence-based interventions are
introduced as an indispensable concept in the context of evaluation.

Part II is an orientation to the bigger picture about evaluations. Chapter 2
highlights key historical events that have helped to shape current public policies and
stresses the importance of conducting evaluations. Also, five different theoretical
perspectives on evaluation are introduced to remind readers that evaluation is not a
monolithic enterprise; to the contrary, its purposes vary widely depending on who
is conducting the evaluation and what they are attempting to accomplish. Aspects of
all these theoretical perspectives contribute to the concept of evaluation adopted in
the book. Chapter 3 focuses on the ethics of evaluation, drawing on the NASW Code
of Ethics and the ethical principles of the American Evaluation Association. The
chapter explains how the accreditation standards of the CSWE can be implemented,
including the ethics of social work and the importance of diversity and social and
economic justice. Chapter 4 introduces readers to several types of program and prac-
tice evaluation that are commonly practiced in the settings in which social workers
and other human service workers are employed. They are introduced in this chapter
to help readers be able to identify them in various field settings. These common
evaluations range from client satisfaction studies to outcome studies, licensing
of professionals and programs, quality assurance, and judicial decisions. Finally,
Chapter 5 offers guidelines for focusing an evaluation and presents a tool that can be
used to craft a focus for any evaluation.

Part III covers the first of three stages of evaluation activities, the planning stage,
when a program or practice intervention is being conceptualized and important
details are being worked out. The planning stage is presented as a critical time
for evaluation activities, especially to document the need for a new intervention.
Chapter 6 is devoted to conducting needs assessments, especially during the plan-
ning stage. The chapter explains why needs assessments are so important, highlights
a variety of assessment tools, and describes the steps involved in conducting a needs
assessment. Crafting goals and objectives for a new program or practice intervention
is highlighted in Chapter 7. Characteristics

Please read

Workbook

for
Designing
a Process
Evaluation

Produced for the

Georgia Department of Human
Resources

Division of Public Health

By

Melanie J. Bliss, M.A.
James G. Emshoff, Ph.D.

Department of Psychology
Georgia State University

July 2002


What is process evaluation?

Process evaluation uses empirical data to assess the delivery of
programs. In contrast to outcome evaluation, which assesses the
impact of the program, process evaluation verifies what the
program is and whether it is being implemented as designed. Thus,
process evaluation asks “what,” and outcome evaluation asks, “so
what?”

When conducting a process evaluation, keep in mind these three
questions:

1. What is the program intended to be?
2. What is delivered, in reality?
3. Where are the gaps between program design and delivery?
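
To make question 3 concrete: if you can list the components the program was designed to include and the components you actually observe, the gaps fall out mechanically. The Python fragment below is an illustrative sketch only (it is not part of this workbook), and all component names in it are hypothetical.

# Illustrative sketch: comparing intended program components with those
# actually delivered. All component names are hypothetical examples.
intended = {"intake interview", "weekly group session", "home visit", "exit survey"}
delivered = {"intake interview", "weekly group session", "exit survey"}

print("Gaps (designed but not delivered):", sorted(intended - delivered))
print("Additions (delivered but not designed):", sorted(delivered - intended))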

This workbook will serve as a guide for designing your own process
evaluation for a program of your choosing. There are many steps involved
in the implementation of a process evaluation, and this workbook will
attempt to direct you through some of the main stages. It will be helpful to
think of a delivery service program that you can use as your example as
you complete these activities.

Why is process evaluation important?

1. To determine the extent to which the program is being implemented according to plan
2. To assess and document the degree of fidelity and variability in program implementation, expected or unexpected, planned or unplanned
3. To compare multiple sites with respect to fidelity
4. To provide validity for the relationship between the intervention and the outcomes
5. To provide information on what components of the intervention are responsible for outcomes
6. To understand the relationship between program context (i.e., setting characteristics) and program processes (i.e., levels of implementation)
7. To provide managers feedback on the quality of implementation
8. To refine delivery components
9. To provide program accountability to sponsors, the public, clients, and funders
10. To improve the quality of the program, as the act of evaluating is an intervention


Stages of Process Evaluation

1. Form Collaborative Relationships
2. Determine Program Components
3. Develop Logic Model*
4. Determine Evaluation Questions
5. Determine Methodology
6. Consider a Management Information System
7. Implement Data Collection and Analysis
8. Write Report**

Also included in this workbook:

a. Logic Model Template
b. Pitfalls to avoid
c. References

Evaluation can be an exciting,
challenging, and fun experience

Enjoy!

* Previously covered in Evaluation Planning Workshops.
** Will not be covered in this expert session. Please refer to the Evaluation Framework and Evaluation Module of the FHB Best Practice Manual for more details.


Forming collaborative relationships

A strong, collaborative relationship with program delivery staff and management will
likely result in the following:

Feedback regarding evaluation design and implementation
Ease in conducting the evaluation due to increased cooperation
Participation in interviews, panel discussion, meetings, etc.
Increased utilization of findings

Seek to establish a mutually respectful relationship characterized by trust, commitment,
and flexibility.

Key points in establishing a collaborative
relationship:

Start early. Introduce yourself and the evaluation team to as many delivery staff and management personnel as early as possible.

Emphasize that THEY are the experts, and you will be utilizing their knowledge and information to inform your evaluation development and implementation.

Be respectful of their time both in person and on the telephone. Set up meeting places that are geographically accessible to all parties involved in the evaluation process.

Remain aware that, even if they have requested the evaluation, it may often appear as an intrusion upon their daily activities. Attempt to be as unobtrusive as possible and request their feedback regarding appropriate times for on-site data collection.

Involve key policy makers, managers, and staff in a series of meetings throughout the evaluation process. The evaluation should be driven by the questions that are of greatest interest to the stakeholders. Set agendas for meetings and provide an overview of the goals of the meeting before beginning. Obtain their feedback and provide them with updates regarding the evaluation process. You may wish to obtain structured feedback; sample feedback forms appear throughout the workbook.

Provide feedback regarding evaluation findings to the key policy makers, managers, and staff when and as appropriate. Use visual aids and handouts. Tabulate and summarize information. Make it as interesting as possible.

Consider establishing a resource or expert “panel” or advisory board that is an official group of people willing to be contacted when you need feedback or have questions.


Determining Program Components

Program components are identified by answering the questions who, what, when, where,
and how as they pertain to your program.

Who: the program clients/recipients and staff
What: activities, behaviors, materials
When: frequency and length of the contact or intervention
Where: the community context and physical setting
How: strategies for operating the program or intervention

BRIEF EXAMPLE:

Who: elementary school students
What: fire safety intervention
When: 2 times per year
Where: in students’ classrooms
How: group-administered intervention, small group practice

1. Instruct students what to do in case of fire (stop, drop, and roll).
2. Educate students on calling 911 and have them practice on play telephones.
3. Educate students on how to pull a fire alarm, how to test a home fire alarm, and how to change batteries in a home fire alarm. Have students practice each of these activities.
4. Provide students with written information and have them take it home to share with their parents. Request a parental signature to indicate compliance and target a 75% return rate.
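
If it helps to keep the five dimensions together while you work, they can be held in one small record. The sketch below is illustrative only (not part of the workbook); it restates the fire safety example and assumes Python 3.9 or later.

from dataclasses import dataclass, field

@dataclass
class ProgramComponents:
    """A who/what/when/where/how description of one program."""
    who: str
    what: str
    when: str
    where: str
    how: list[str] = field(default_factory=list)

# The fire safety example from above, restated as data.
fire_safety = ProgramComponents(
    who="elementary school students",
    what="fire safety intervention",
    when="2 times per year",
    where="in students' classrooms",
    how=["group-administered intervention", "small group practice"],
)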

Points to keep in mind when determining program
components

Specify activities as behaviors that can be observed.

If you have a logic model, use the “activities” column as a starting point.

Ensure that each component is separate and distinguishable from others.

Include all activities and materials intended for use in the intervention.

Identify the aspects of the intervention that may need to be adapted, and those that should always be delivered as designed.

Consult with program staff, mission statements, and program materials as needed.


Your Program Components

After you have identified your program components, create a logic model that graphically
portrays the link between program components and outcomes expected from these
components.

Now, write out a succinct list of the components of your program.

WHO:

WHAT:

WHEN:

WHERE:

HOW:


What is a Logic Model?

A logical series of statements that link the problems your program is attempting to address (conditions), how it will address them (activities), and the expected results (immediate and intermediate outcomes, long-term goals).

Benefits of the logic model include:

helps develop clarity about a project or program,
helps to develop consensus among people,
helps to identify gaps or redundancies in a plan,
helps to identify core hypotheses,
helps to succinctly communicate what your project or program is about.

When do you use a logic model?

Use…

– During any work to clarify what is being done, why, and with what intended results

– During project or program planning to make sure that the project or program is logical and
complete

– During evaluation planning to focus the evaluation

– During project or program implementation as a template for comparing to the actual program
and as a filter to determine whether proposed changes fit or not.

This information was extracted from the Logic Models: A Multi-Purpose Tool materials developed by Wellsys
Corporation for the Evaluation Planning Workshop Training. Please see the Evaluation Planning Workshop
materials for more information. Appendix A has a sample template of the tabular format.
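
If you keep your logic model electronically, representing its columns as a simple mapping makes gaps and redundancies easy to scan. This is an illustrative Python sketch only (not part of the workshop materials); every entry in it is hypothetical.

# Illustrative sketch: a logic model as a mapping from column to entries,
# linking conditions -> activities -> outcomes -> long-term goals.
logic_model = {
    "conditions": ["students are unprepared for home fires"],
    "activities": ["classroom fire-safety training", "take-home materials"],
    "immediate_outcomes": ["students demonstrate stop, drop, and roll"],
    "intermediate_outcomes": ["families review fire-safety information"],
    "long_term_goals": ["fewer fire-related injuries among students"],
}

# Reading the model column by column helps spot gaps or redundancies.
for column, entries in logic_model.items():
    print(column, "->", "; ".join(entries))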


Determining Evaluation Questions

As you design your process evaluation, consider what questions you would like to answer. It is only after
your questions are specified that you can begin to develop your methodology. Considering the importance
and purpose of each question is critical.

BROADLY….

What questions do you hope to answer? You may wish to turn the program components that you have just identified
into questions assessing:

Was the component completed as indicated?
What were the strengths in implementation?
What were the barriers or challenges in implementation?
What were the apparent strengths and weaknesses of each step of the intervention?
Did the recipient understand the intervention?
Were resources available to sustain project activities?
What were staff perceptions?
What were community perceptions?
What was the nature of the interaction between staff and clients?

These are examples. Check off what is applicable to you, and use the space below to write additional broad,
overarching questions that you wish to answer.


SPECIFICALLY …

Now, make a list of all the specific questions you wish to answer, and organize your questions categorically. Your
list of questions will likely be much longer than your list of program components. This step of developing your
evaluation will inform your methodologies and instrument choice.

Remember that you must collect information on what the program is intended to be and what it is in reality, so you may need to ask some questions in two formats.

For example:

“How many people are intended to complete this intervention per week?”
“How many actually go through the intervention during an average week?”
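
Once you have an “intended” and an “actual” answer to the same question, a simple fidelity ratio falls out of the pair. The numbers in this illustrative Python sketch are hypothetical.

# Illustrative sketch: design versus delivery for one paired question.
intended_per_week = 25  # people the program is designed to serve weekly (hypothetical)
actual_per_week = 18    # people who actually complete the intervention weekly (hypothetical)

fidelity = actual_per_week / intended_per_week
print(f"Delivery fidelity: {fidelity:.0%} of the intended weekly volume")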

Consider what specific questions you have. The questions below are only examples! Some may not be appropriate
for your evaluation, and you will most likely need to add additional questions. Check off the questions that are
applicable to you, and add your own questions in the space provided.

WHO (regarding client):
Who is the target audience, client, or recipient?
How many people have participated?
How many people have dropped out?
How many people have declined participation?
What are the demographic characteristics of clients?

Race
Ethnicity
National Origin
Age
Gender
Sexual Orientation
Religion
Marital Status
Employment
Income Sources
Education
Socio-Economic Status

What factors do the clients have in common?
What risk factors do clients have?
Who is eligible for participation?
How are people referred to the program? How are they screened?
How satisfied are the clients?

YOUR QUESTIONS:


WHO (Regarding staff):
Who delivers the services?
How are they hired?
How supportive are staff and management of each other?
What qualifications do staff have?
How are staff trained?
How congruent are staff and recipients with one another?
What are staff demographics? (see client demographic list for specifics.)

YOUR QUESTIONS:

WHAT:

What happens during the intervention?
What is being delivered?
What are the methods of delivery for each service (e.g., one-on-one, group session, didactic instruction, etc.)?
What are the standard operating procedures?
What technologies are in use?
What types of communication techniques are implemented?
What type of organization delivers the program?
How many years has the organization existed? How many years has the program been operating?
What type of reputation does the agency have in the community? What about the program?
What are the methods of service delivery?
How is the intervention structured?
How is confidentiality maintained?
YOUR QUESTIONS:

WHEN:
When is the intervention conducted?
How frequently is the intervention conducted?
At what intervals?
At what time of day, week, month, year?
What is the length and/or duration of each service?


YOUR QUESTIONS:

WHERE:

Where does the intervention occur?
What type of facility is used?
What is the age and condition of the facility?
In what part of town is the facility? Is it accessible to the target audience? Does public transportation access the facility? Is parking available?
Is child care provided on site?

WHY:

Why are these activities or strategies implemented and why not others?
Why has the intervention varied in ability to maintain interest?
Why are clients not participating?
Why is the intervention conducted at a certain time or at a certain frequency?

YOUR QUESTIONS:


Validating Your Evaluation Questions

Even though all of your questions may be interesting, it is important to narrow your list to questions that
will be particularly helpful to the evaluation and that can be answered given your specific resources, staff,
and time.

Go through each of your questions and consider it with respect to the questions below, which may be helpful in
streamlining your final list of questions.

Revise your worksheet/list of questions until you can answer “yes” to all of these questions. If you cannot answer
“yes” to your question, consider omitting the question from your evaluation.

Validation checklist (mark Yes or No for each):

Will I use the data that will stem from these questions?

Do I know why each question is important and/or valuable?

Is someone interested in each of these questions?

Have I ensured that no questions are omitted that may be important to someone else?

Is the wording of each question sufficiently clear and unambiguous?

Do I have a hypothesis about what the “correct” answer will be for each question?

Is each question specific without inappropriately limiting the scope of the evaluation or probing for a specific response?

Do they constitute a sufficient set of questions to achieve the purpose(s) of the evaluation?

Is it feasible to answer the question, given what I know about the resources for evaluation?

Is each question worth the expense of answering it?

Derived from “A Design Manual” Checklist, page 51.
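
If your candidate questions are kept in electronic form, the checklist can be applied mechanically: a question survives only if every validation item is answered “yes.” The Python sketch below is illustrative; the sample questions and their check results are hypothetical.

# Illustrative sketch: screening candidate evaluation questions against the
# validation checklist. Each question maps to its yes/no checklist answers.
candidate_questions = {
    "How many clients complete the program each month?": [True, True, True],
    "What color are the office walls?": [False, True, False],
}

def passes_validation(checks):
    """Keep a question only when every checklist item is answered 'yes'."""
    return all(checks)

final_list = [q for q, checks in candidate_questions.items() if passes_validation(checks)]
print(final_list)  # only the first question survives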


Determining Methodology

Process evaluation is characterized by collection of data primarily through two formats:

1) Quantitative, archival, recorded data that may be managed by a computerized tracking or management system, and

2) Qualitative data that may be obtained through a variety of formats, such as surveys or focus groups.

When considering what methods to use, it is critical to have a thorough
understanding and knowledge of the questions you want answered. Your
questions will inform your choice of methods. After this section on types of
methodologies, you will complete an exercise in which you consider what method
of data collection is most appropriate for each question.

Do you have a thorough understanding of your
questions?

Furthermore, it is essential to consider what data the organization you are
evaluating already has. Data may exist in the form of an existing computerized
management information system, records, or a tracking system of some other
sort. Using this data may provide the best reflection of what is “going on,” and it
will also save you time, money, and energy because you will not have to devise
your own data collection method! However, keep in mind that you may have to
adapt this data to meet your own needs – you may need to add or replace fields,
records, or variables.

What data does your organization already have?

Will you need to adapt it?

If the organization does not already have existing data, consider devising a
method for the organizational staff to collect their own data. This process will
ultimately be helpful for them so that they can continue to self-evaluate, track
their activities, and assess progress and change. It will be helpful for the
evaluation process because, again, it will save you time, money, and energy that
you can better devote towards other aspects of the evaluation. Management
information systems will be described more fully in a later section of this
workbook.

Do you have the capacity and resources to devise
such a system? (You may need to refer to a later
section of this workbook before answering.)
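
Such a system can be very modest. The sketch below is illustrative only, using Python’s built-in sqlite3 module to create a small service-contact table that program staff could maintain; the table and field names are hypothetical, not prescribed by this workbook.

import sqlite3

# Illustrative sketch: a minimal tracking table for service contacts.
conn = sqlite3.connect("program_tracking.db")  # hypothetical file name
conn.execute(
    """CREATE TABLE IF NOT EXISTS service_contacts (
           contact_id INTEGER PRIMARY KEY,
           client_id TEXT NOT NULL,
           service TEXT NOT NULL,       -- e.g., 'intake', 'group session'
           contact_date TEXT NOT NULL,  -- ISO date string
           hours REAL,
           notes TEXT
       )"""
)
conn.execute(
    "INSERT INTO service_contacts (client_id, service, contact_date, hours) "
    "VALUES (?, ?, ?, ?)",
    ("C-001", "group session", "2002-07-16", 1.5),  # hypothetical entry
)
conn.commit()
conn.close()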


Who should collect the data?

Given all of this, what thoughts do you have on who should collect data for your
evaluation? Program staff, evaluation staff, or some combination?

Program staff: May collect data from activities such as attendance, demographics, participation, characteristics of participants, dispositions, etc.; may conduct intake interviews, note changes regarding service delivery, and monitor program implementation.

Advantages: Cost-efficient, accessible, resourceful, available, time-efficient, and increased understanding of the program.

Disadvantages: May exhibit bias and/or social desirability, may use data for critical judgment, may compromise the validity of the program; may put staff in an uncomfortable or inappropriate position; also, if staff collect data, they may have an increased burden and responsibility placed upon them outside of their usual or typical job responsibilities. If you utilize staff for data collection, provide frequent reminders as well as messages of gratitude.

Evaluation staff: May collect qualitative information regarding implementation, general characteristics of program participants, and other information that may otherwise be subject to bias or distortion.

Advantages: Data collected in a manner consistent with the overall goals and timeline of the evaluation; prevents bias and inappropriate use of information; promotes overall fidelity and validity of data.

Disadvantages: May be costly and take extensive time; may require additional training on the part of the evaluator; the presence of the evaluator in the organization may be intrusive, inconvenient, or burdensome.


When should data be collected?

Conducting the evaluation according to your timeline can be challenging. Consider how
much time you have for data collection, and make decisions regarding what to collect
and how much based on your timeline.

In many cases, outcome evaluation is not considered appropriate until the program has
stabilized. However, when conducting a process evaluation, it can be important to start
the evaluation at the beginning so that a story may be told regarding how the program
was developed, information may be provided on refinements, and program growth and
progress may be noted.

If you have the luxury of collecting data from the start of the intervention to the end of
the intervention, space out data collection as appropriate. If you are evaluating an
ongoing intervention that is fairly quick (e.g., an 8-week educational group), you may
choose to evaluate one or more “cycles.”

How much time do you have to conduct your evaluation?

How much time do you have for data collection (as opposed to designing the evaluation, training, organizing and analyzing results, and writing the report)?

Is the program you are evaluating time specific?

How long does the program or intervention last?

At what stages do you think you will most likely collect data?

Soon after a program has begun: descriptive information on program characteristics that will not change; information requiring baseline information

During the intervention: ongoing process information such as recruitment, program implementation

After the intervention: demographics, attendance ratings, satisfaction ratings


Before you consider methods

A list of various methods follows this section. Before choosing what methods are
most appropriate for your evaluation, review the following questions. (Some may
already be answered in another section of this workbook.)

What questions do I want answered? (See previous section.)

Does the organization already have existing data, and if so, what kind?

Does the organization have staff to collect data?

What data can the organization staff collect?

Must I maintain anonymity (participant is not identified at all) or confidentiality (participant is identified but responses remain private)? This consideration pertains to existing archival data as well as original data collection.

How much time do I have to conduct the evaluation?

How much money do I have in my budget?

How many evaluation staff do I have to manage the data collection activities?

Can I (and/or members of my evaluation staff) travel on site?

What time of day is best for collecting data? For example, if you plan to conduct focus groups or interviews, remember that your population may work during the day and need evening times.


Types of methods

A number of different methods exist that can be used to collect process
information. Consider each of the following, and check those that you think would
be helpful in addressing the specific questions in your evaluation. When “see
sample” is indicated, refer to the pages that follow this table.

Method (check √ those that apply): Description

Activity, participation, or client tracking log: Brief record completed on site at frequent intervals by participant or deliverer. May use a form developed by the evaluator if none previously exists. Examples: sign-in log, daily records of food consumption, medication management.

Case studies: Collection of in-depth information regarding a small number of intervention recipients; use multiple methods of data collection.

Ethnographic analysis: Obtain in-depth information regarding the experience of the recipient by partaking in the intervention, attending meetings, and talking with delivery staff and recipients.

Expert judgment: Convene a panel of experts or conduct individual interviews to obtain their understanding of and reaction to program delivery.

Focus groups: Small group discussion among program delivery staff or recipients. Focus on their thoughts and opinions regarding their experiences with the intervention.

Meeting minutes (see sample): Qualitative information regarding agendas, tasks assigned, and coordination and implementation of the intervention as recorded on a consistent basis.

Observation (see sample): Observe actual delivery in vivo or on video; record findings using a check sheet or make qualitative observations.

Open-ended interviews, telephone or in person: Evaluator asks open questions (i.e., who, what, when, where, why, how) of delivery staff or recipients. Use an interview protocol without preset response options.

Questionnaire: Written survey with structured questions. May administer in individual, group, or mail format. May be anonymous or confidential.

Record review: Obtain indicators from intervention records such as patient files, time sheets, telephone logs, registration forms, student charts, sales records, or records specific to the service delivery.

Structured interviews, telephone or in person: Interviewer asks direct questions using an interview protocol with preset response options.


Sample activity log

This is a common process evaluation methodology because it systematically records exactly what is happening during
implementation. You may wish to devise a log such as the one below and alter it to meet your specific needs. Consider
computerizing such a log for efficiency. Your program may already have existing logs that you can utilize and adapt for your
evaluation purposes.

Site: ______________  Recorder: ______________

Code | Service | Date | Location | # People | # Hours | Notes
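
If you computerize the log, even a plain CSV file with the same fields will do. The Python sketch below is illustrative only; the file name and the single entry it appends are hypothetical.

import csv
from datetime import date

# Illustrative sketch: appending one entry to a computerized activity log
# that mirrors the fields above.
FIELDS = ["site", "recorder", "code", "service", "date", "location",
          "num_people", "num_hours", "notes"]

with open("activity_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:  # a brand-new file still needs its header row
        writer.writeheader()
    writer.writerow({
        "site": "Site A", "recorder": "MB", "code": "FS-1",
        "service": "fire safety class", "date": date(2002, 7, 16).isoformat(),
        "location": "classroom", "num_people": 24, "num_hours": 1.0,
        "notes": "two students absent",
    })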


Meeting Minutes

Taking notes at meetings may provide extensive and invaluable process information that
can later be organized and structured into a comprehensive report. Minutes may be taken
by program staff or by the evaluator if necessary. You may find it helpful to use a
structured form, such as the one below that is derived from Evaluating Collaboratives,
University of Wisconsin-Cooperative Extension, 1998.

Meeting Place: __________________ Start time: ____________
Date: _____________________________ End time: ____________

Attendance (names):

Agenda topic: _________________________________________________

Discussion: _____________________________________________________

Decision | Related Tasks | Who responsible | Deadline

1.

2.

3.

Agenda topic: _________________________________________________

Discussion: _____________________________________________________

Decision | Related Tasks | Who responsible | Deadline

1.

2.

3.
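
Where minutes are kept electronically, the same structure can be mirrored in code so that each decision stays linked to its tasks, owner, and deadline. This is an illustrative sketch with hypothetical entries, not part of the form cited above; it assumes Python 3.9 or later.

from dataclasses import dataclass

@dataclass
class DecisionItem:
    decision: str
    related_tasks: str
    who_responsible: str
    deadline: str

@dataclass
class AgendaTopic:
    topic: str
    discussion: str
    decisions: list[DecisionItem]

# One hypothetical agenda topic recorded from a meeting.
minutes = AgendaTopic(
    topic="Recruitment progress",
    discussion="Enrollment is below target at two sites.",
    decisions=[
        DecisionItem("Expand referral outreach", "Contact partner agencies",
                     "Site coordinator", "2002-08-01"),
    ],
)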

Sample observation log


Observation may occur in various ways, but one of the most common is hand-recording specific details during a small time period. The following shows several rows from an observation log utilized during an evaluation examining school classrooms.

CLASSROOM OBSERVATIONS (School Environment Scale)
Classroom 1: Grade level _________________ (Goal: 30 minutes of observation)

Time began observation: _________Time ended observation:_________
Subjects were taught during observation period: ___________________

PHYSICAL ENVIRONMENT

Question | Answer

1. Number of students: ______
2. Number of adults in room: a. Teachers ______ b. Para-pros ______ c. Parents ______ Total: ______
3. Desks/Tables: a. Number of desks ______ b. Number of tables for students’ use ______ c. Any other furniture (include number) ______ (Note the arrangement of desks/tables/other furniture)

4. Number of computers, type

5. How are computers being used?

6. What is the general classroom setup? (are there walls, windows, mirrors,
carpet, rugs, cabinets, curtains, etc.)

7. Other technology (overhead projector, power point, VCR, etc.)

8. Are books and other materials accessible for students?

9. Is there adequate space for whole-class instruction?

12. What type of lighting is used?

13. Are there animals or fish in the room?

14. Is there background music playing?

15. Rate the classroom condition
Poor Average Excellent

16. Are rules/discipline procedures posted? If so, where?

17. Is the classroom Noisy or Quiet?
Very Quiet Very Noisy

Choosing or designing measurement instruments
Consider using a resource panel, advisory panel, or focus group to offer feedback


regarding your instrument. This group may be composed of any of the people listed
below. You may also wish to consult with one or more of these individuals throughout

Please read

Social Work Evaluation


Social Work Evaluation
Enhancing What We Do


T H I R D E D I T I O N

JAMES R. DUDLEY
University of North Carolina at Charlotte

Oxford University Press is a department of the University of Oxford. It furthers the University’s
objective of excellence in research, scholarship, and education by publishing worldwide.
Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2020

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, without the prior permission in writing of Oxford
University Press, or as expressly permitted by law, by license, or under terms agreed with the
appropriate reproduction rights organization. Inquiries concerning reproduction outside the
scope of the above should be sent to the Rights Department, Oxford University Press, at the
address above.

You must not circulate this work in any other form
and you must impose this same condition on any acquirer.

Library of Congress Cataloging- in- Publication Data
Names: Dudley, James R., author.
Title: Social work evaluation : enhancing what we do / James R. Dudley.
Description: Third Edition. | New York : Oxford University Press, 2020. |
Revised edition of the author’s Social work evaluation, [2014] |
Includes bibliographical references and index.
Identifiers: LCCN 2019032564 (print) | LCCN 2019032565 (ebook) |
ISBN 9780190916657 (paperback) | ISBN 9780190916671 (epub) | ISBN 9780190916664 (updf)
Subjects: LCSH: Social service—Evaluation. | Evaluation research (Social action programs)
Classification: LCC HV41.D83 2019 (print) | LCC HV41 (ebook) | DDC 361.3072—dc23
LC record available at https://lccn.loc.gov/2019032564
LC ebook record available at https://lccn.loc.gov/2019032565

1 3 5 7 9 8 6 4 2

Printed by Marquis, Canada

I dedicate this book to my students, who have inspired and encouraged me
over many years. I am deeply grateful to them!


C O N T E N T S

CSWE’s Core Competency Fulfillment Guide: 
How It Is Covered in the Book xiii

Preface xvii
New to this Edition xviii
Other Special Features xix
Organization of the Book xxi

Acknowledgments xxiii

part i INTRODUCTION

Chapter 1 Evaluation and Social Work: Making the
Connection 3

A Focus on Both Programs and Practice 4
Practice is Embedded in a Program 5
Introduction to Evaluation 7
A Three-Stage Approach 7
Different Purposes of Evaluations 7
Common Characteristics of Evaluations 10
Seven Steps in Conducting an Evaluation 20
Defining and Clarifying Important Terms 23
Summary 28
Key Terms 29
Discussion Questions and Assignments 29
References 30


part ii ORIENTATION TO THE BIGGER PICTURE
OF EVALUATIONS: WHAT ’ S NEXT?

Chapter 2 The Influence of History and Varying Theoretical
Views on Evaluations 35

Relevant Events in History 36
Varying Views on Theoretical Approaches 40
Synthesis of These Evaluation Perspectives 44
Key Perspectives for the Book 50
Three-Stage Approach 50
Summary 52
Key Terms 53
Discussion Questions and Assignments 53
References 54

Chapter 3 The Role of Ethics in Evaluations 56
Ethics for Conducting Evaluations 58
Diversity and Social Justice 67
Summary 74
Key Terms 74
Discussion Questions and Assignments 74
References 76

Chapter 4 Common Types of Evaluations 78
Common Program Evaluations 78
Common Practice Evaluations 89
Common Evaluations and the Three-Stage Approach 93
Summary 94
Key Terms 94
Discussion Questions and Assignments 94
References 95

Chapter 5 Focusing an Evaluation 96
Important Initial Questions 96
Crafting Good Study Questions for an Evaluation
as the Focus 99
Guidelines for Focusing an Evaluation 100
A Practical Tool 106
Summary 110
Key Terms 110
Discussion Questions and Assignments 110
References 111


part iii THE PLANNING OR INPUT STAGE

Chapter 6 Needs Assessments 115
The Logic Model 116
The Link Between Problems and Needs 118
The Underlying Causes 120
Input Stage and Planning a Proposed Program 121
Why Conduct a Needs Assessment? 122
Some Purposes of Needs Assessments 122
Methods of Conducting Needs Assessments 125
Needs Assessments and Practice Interventions 140
Suggestions for How to Conduct a Needs Assessment 141
Summary 143
Key Terms 144
Discussion Questions and Assignments 144
References 146

Chapter 7 Crafting Goals and Objectives 149
Goals for Program and Practice Interventions 150
Characteristics of Goals 151
Limitations of Goals 154
Crafting Measurable Objectives 156
Three Properties: Performance, Conditions, and Criteria 160
Differences Between Measurable Objectives of Programs
and Practice 164
Summary 166
Key Terms 166
Discussion Questions and Assignments 167
References 168

part iv THE IMPLEMENTATION STAGE

Chapter 8 Improving How Programs and Practice Work 171
James R. Dudley and Robert Herman-Smith

Link the Intervention to the Clients’ Problems 172
Implement the Intervention as Proposed 175
Adopt and Promote Evidence-Based Interventions 179
Focus on Staff Members 184
Accessibility of the Intervention 189
Program Quality 194
Client Satisfaction 196
Evaluating Practice Processes: Some Additional Thoughts 202
Summary 207


Key Terms 207
Discussion Questions and Assignments 207
References 208

part v THE OUTCOME STAGE

Chapter 9 Is the Intervention Effective? 215
The Nature of Outcomes 216
Varied Ways to Measure Outcomes 219
Criteria for Choosing Outcome Measures 222
Outcomes and Program Costs 223
Evidence-Based Interventions 224
Determining a Causal Relationship 227
Group Designs for Programs 229
Outcome Evaluations for Practice 236
Summary 247
Key Terms 247
Discussion Questions and Assignments 248
References 250

part vi FINAL STEPS IN COMPLETING AN EVALUATION

Chapter 10 Analyzing Evaluation Data 255
James R. Dudley and Jeffrey Shears

Formative or Summative Evaluations and Data Analysis 255
Stages of Interventions and Data Analysis 257
Summary of Pertinent Tools for Qualitative Data
Analysis 260
Summary of Pertinent Tools for Quantitative Data
Analysis 264
Mixed Methods and Data Analysis 271
Summary 274
Key Terms 274
Discussion Questions and Assignments 275
References 275

Chapter 11 Preparing and Disseminating a Report of
Findings 276

Considering the Input of Stakeholders 277
Format of the Report 278


Strategies for Preparing a Report 283
Strategies for Disseminating Reports 287
Summary 289
Key Terms 290
Discussion Questions and Assignments 290
References 291

part vii CONSUMING EVALUATION REPORTS

Chapter 12 Becoming Critical Consumers
of Evaluations 295
Daniel Freedman and James R. Dudley

Stakeholders Who Consume Evaluation Reports 296
Critical Consumption of an Evaluation Report 299
The Need for Multiple Strategies on Reports 310
Helping Clients Become Critical Consumers 311
Summary 313
Key Terms 313
Discussion Questions and Assignments 313
References 314

Appendix A: American Evaluation Association
Guiding Principles for Evaluators:
2018 Updated Guiding Principles 317
A. Systematic Inquiry: Evaluators Conduct Data-Based

Inquiries That Are Thorough, Methodical, and
Contextually Relevant 317

B. Competence: Evaluators Provide Skilled Professional
Services to Stakeholders 317

C. Integrity: Evaluators Behave With Honesty and
Transparency in Order to Ensure the Integrity of the
Evaluation 318

D. Respect for People: Evaluators Honor the Dignity,
Well-being, and Self-Worth of Individuals and
Acknowledge the Influence of Culture Within
and Across Groups 319

E. Common Good and Equity: Evaluators Strive to
Contribute to the Common Good and Advancement
of an Equitable and Just Society 319

Appendix B: Glossary 321

Index 329


C S W E ’ S C O R E C O M P E T E N C Y F U L F I L L M E N T
G U I D E :   H O W I T I S C O V E R E D I N   T H E B O O K

CSWE’ S NINE SO CIAL WORK C OMPETENCIES
C OVERED IN THE B O OK

Competency 1: Demonstrate Ethical and Professional Behavior
• Make ethical decisions by applying the standards of the NASW Code of Ethics, relevant laws and regulations, models for ethical decision-making, ethical conduct of research, and additional codes of ethics as appropriate to context (Chapters 1, 2, 3, 9, 10, 11, 12);
• Use reflection and self-regulation to manage personal values and maintain professionalism in practice situations (Chapters 2, 3, 12);
• Demonstrate professional demeanor in behavior; appearance; and oral, written, and electronic communication (Chapters 1, 3, 5, 8, 10, 11);
• Use technology ethically and appropriately to facilitate practice outcomes (Chapters 3, 6, 10, 11); and
• Use supervision and consultation to guide professional judgment and behavior (Chapters 3, 4, 5, 8).

Competency 2: Engage Diversity and Difference in Practice
• Apply and communicate understanding of the importance of diversity and difference in shaping life experiences in practice at the micro, mezzo, and macro levels (Chapters 2, 3, 5, 7, 8);
• Present themselves as learners and engage clients and constituencies as experts of their own experiences (Chapters 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12); and
• Apply self-awareness and self-regulation to manage the influence of personal biases and values in working with diverse clients and constituencies (Chapters 2, 3, 7, 8, 10, 12).

Competency 3: Advance Human Rights and Social, Economic, and Environmental Justice
• Apply their understanding of social, economic, and environmental justice to advocate for human rights at the individual and system levels (Chapters 1, 2, 3, 5, 6, 8, 10, 11);
• Engage in practices that advance social, economic, and environmental justice (Chapters 1, 2, 3, 6, 7, 8, 9, 11, 12).

Competency 4: Engage in Practice-informed Research and Research-informed Practice
• Use practice experience and theory to inform scientific inquiry and research (Chapters 1, 2, 4, 5, 11);
• Apply critical thinking to engage in analysis of quantitative and qualitative research methods and research findings (Chapters 2, 4, 6, 7, 9, 10, 11, 12);
• Use and translate research evidence to inform and improve practice, policy, and service delivery (Chapters 1, 2, 4, 6, 9, 10, 11, 12).

Competency 5: Engage in Policy Practice
• Identify social policy at the local, state, and federal level that impacts well-being, service delivery, and access to social services (Chapters 2, 5, 6, 11);
• Assess how social welfare and economic policies impact the delivery of and access to social services (Chapters 4, 6, 8, 11);
• Apply critical thinking to analyze, formulate, and advocate for policies that advance human rights and social, economic, and environmental justice (Chapters 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12).

Competency 6: Engage with Individuals, Families, Groups, Organizations, and Communities
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks to engage with clients and constituencies (Chapters 1, 2, 3, 4, 6, 7, 8, 9);
• Use empathy, reflection, and interpersonal skills to effectively engage diverse clients and constituencies (Chapters 2, 3, 4, 5, 6, 8, 12).

Competency 7: Assess Individuals, Families, Groups, Organizations, and Communities
• Collect and organize data, and apply critical thinking to interpret information from clients and constituencies (Chapters 1, 3, 4, 6, 10, 11);
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks in the analysis of assessment data from clients and constituencies (Chapters 1, 2, 4, 5, 6, 7, 8, 10, 11, 12);
• Develop mutually agreed-on intervention goals and objectives based on the critical assessment of strengths, needs, and challenges within clients and constituencies (Chapters 1, 2, 3, 4, 5, 7, 11);
• Select appropriate intervention strategies based on the assessment, research knowledge, and values and preferences of clients and constituencies (Chapters 1, 2, 4, 5, 6, 7, 8, 11, 12).

Competency 8: Intervene with Individuals, Families, Groups, Organizations, and Communities
• Critically choose and implement interventions to achieve practice goals and enhance capacities of clients and constituencies (Chapters 1, 2, 3, 4, 5, 7, 8, 9, 11, 12);
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks in interventions with clients and constituencies (Chapters 1, 2, 3, 4, 6, 7, 8, 9, 11, 12);
• Use interprofessional collaboration as appropriate to achieve beneficial practice outcomes (Chapters 1, 2, 4, 6, 8, 11);
• Negotiate, mediate, and advocate with and on behalf of diverse clients and constituencies (Chapters 1, 2, 3, 4, 5, 7, 8); and
• Facilitate effective transitions and endings that advance mutually agreed-on goals (Chapters 1, 4, 5, 7, 9, 11, 12).

Competency 9: Evaluate Practice with Individuals, Families, Groups, Organizations, and Communities
• Select and use appropriate methods for evaluation of outcomes (Chapters 1, 2, 4, 5, 7, 9, 10, 11);
• Apply knowledge of human behavior and the social environment, person-in-environment, and other multidisciplinary theoretical frameworks in the evaluation of outcomes (Chapters 1, 2, 3, 4, 6, 7, 9, 10, 11, 12);
• Critically analyze, monitor, and evaluate intervention and program processes and outcomes (Chapters 1, 2, 4, 5, 7, 8, 9, 10, 11, 12);
• Apply evaluation findings to improve practice effectiveness at the micro, mezzo, and macro levels (Chapters 1, 2, 4, 6, 7, 10, 11, 12).

Note. CSWE = Council on Social Work Education; NASW = National Association of Social Workers.


P R E FA C E

Every social worker is expected to know how to conduct evaluations of his or her practice. In addition, growing numbers of social workers will also be assuming
a program evaluator role at some time in their careers because of the increasing
demands for program accountability. Yet, many social workers are still inadequately
prepared to design and implement evaluations. Social Work Evaluation: Enhancing
What We Do introduces social workers and other human service workers to a broad
array of knowledge, ethics, and skills on how to conduct evaluations. The book
prepares you to conduct evaluations at both the program and practice levels.

The book presents evaluation material in a form that is easily understood and
especially relevant to social work students. Research is among the most difficult con-
tent areas for social work students to comprehend. This is partially because it is dif-
ficult to see the applicability of research to social work practice. The statistical and
other technical aspects of research content also tend to be unfamiliar to students
and difficult to comprehend. This book is especially designed to overcome these and
other types of barriers more than other social work evaluation texts do because it
continually discusses evaluation in the context of social work programs and practice
and uses numerous pertinent examples.

The book is organized around a three-stage approach to evaluation. The stages divide evaluation into activities during the planning of an intervention, during its implementation, and, afterward, in measuring its impact on the recipients. In addition, the text
describes seven general steps to follow in conducting evaluations. These steps offer
a flexible set of guidelines to follow in implementing an evaluation with all its prac-
ticalities. The book also gives significant attention to evidence- based interventions
and how evaluations can generate evidence as a central goal. Readers are also given
several specific suggestions for how to promote evidence- based practice.

This book can be used for several research and practice courses in both Bachelor
of Social Work (BSW) and Master of Social Work (MSW) programs. It is designed
for primary use in a one- semester evaluation course in MSW programs. It can also
be a primary text along with a research methods text for a two- course research
sequence in BSW programs. The book can also be very useful as a secondary text


in BSW and MSW practice courses at all system levels and policy courses. In add-
ition, it is an excellent handbook for the helping professions in other fields such as
counseling, psychology, and gerontology.

NEW TO THIS EDITION

The entire book has been carefully reviewed, revised, and updated, and summaries have been added to each chapter. Also, new material has been added in several sections. A strength
of the book is that it covers both program and practice evaluations. In the new ed-
ition, greater attention is now given to programs and practice as key concepts and
how the evaluation process offers more understanding of each of them and their
relationship to each other. Evaluations at both levels have much in common. In add-
ition, there is frequently a need to distinguish between these two levels of evaluation.
In the new edition, separate sections are provided for both program and practice
evaluations when there is a need to explain their differences and how each can
be implemented. A symbol has been added to the text to let you know when the
material following the symbol covers only programs or practice.

Accreditation standards of social work mandated by the Council on Social Work
Education (CSWE) are updated and highlighted in a “Core Competency Fulfillment
Guide” at the beginning of the text. These standards are frequently addressed in the
content of every chapter. Content on the six core social work values of the National Association of Social Workers (NASW) Code of Ethics is also added in the new edition and elaborated on in the ethics chapter to highlight how they provide the foundation for the ethics used in evaluations.

Content is expanded on using the logic model as an analytic tool in conducting
evaluations. This gives practitioners the capacity to have continual oversight of
evaluation concerns. Most important, this tool helps remind social workers of the
importance of the logical links among the clients’ problems, needs, and their causes,
their goals, and the interventions chosen to reach their goals. The logic model is also
useful for supporting evidence-based practice and giving clients greater assurance that they will be successful in reaching their goals.

The seven steps for conducting an evaluation are emphasized throughout the
book and provide a helpful guide for the readers to follow. An emphasis on client-
centered change highlighted in earlier editions is strengthened in this edition in
these seven steps. Client- centered change is promoted through innovative ways of
assisting clients, staff members, and community groups in becoming more actively
involved in the evaluation process. Ultimately, these changes are intended to help
clients succeed as recipients of these interventions. Clients are presented throughout
the book as a key group of stakeholders who are often overlooked in other texts.

A new Teacher and Student Resource website has been added and is available
from Oxford University Press. It will contain all the resources provided with the
book in earlier editions along with some new helpful aids for both teachers and
students.


OTHER SPECIAL FEATURES

Both qualitative and quantitative methods of evaluation are described and
highlighted throughout the book. While quantitative methods are pertinent to both
summative and formative evaluations, qualitative methods are presented as espe-
cially relevant to many types of formative evaluations. Criteria are offered for when
to use qualitative methods and when to use quantitative ones, and examples of both
are provided. Mixed methods are also encouraged and often suggested as the best
option.

Many efforts have been made throughout the book to help students and
practitioners view evaluation as being helpful and relevant not only to programs but
also to their own practice. Throughout the book, the evaluation content on practice
interventions offers the readers practical insights and tools for enhancing their own
practice and increasing their capacity to impact their clients’ well-being.

The planning stage is presented as perhaps the most critical stage before new programs and practice interventions are implemented. Unfortunately, most agencies do not invest nearly enough time, thought, and resources in the tasks of this critical planning period. The tasks of planning include clearly identifying and describing the clients’ problems and needs to be addressed, along with the goals for resolving them. In addition, the proposed interventions need to be carefully developed to uniquely fit the problems and needs of their clients. Moreover, evidence that these interventions can be effective is paramount to develop and emphasize.

The evaluation process is described as a collaborative effort that encourages the
participation of the clients and other important stakeholders in some of the steps.
A periodic focus on the principles of participant action research is highlighted in
some sections to emphasize how evaluation can be used to promote client involve-
ment, empowerment, and social change. Also, special emphasis is placed on staff
and client involvement in consuming evaluation findings and becoming more active
gatekeepers.

As mentioned earlier, another feature of the text is that it directly addresses all
the current accreditation standards of the CSWE, the national accrediting organ-
ization for social workers. The CSWE promulgates minimum curriculum standards
for all BSW and MSW programs, including research and evaluation content. This
book devotes extensive attention to several competencies related to evaluation with
a special focus on three areas: ethics, diversity, and social and economic justice.
Because of the importance of these three competency areas, they are highlighted
in numerous examples and exercises throughout the book. In addition, practice, an
overall competency of the social work curriculum, is often highlighted as it relates
to evaluation. Evaluation is described throughout the book as a vital and necessary
component of practice at both the MSW and the BSW levels.

While a social work perspective is emphasized that helps in understanding
the connections of evaluation with practice, ethics, diversity issues, and social
justice, other human service professionals will also find these topics pertinent.


Professionals in disciplines such as psychology, family and individual therapy, public health, nursing, mental health, criminal justice, school counseling, special education, addictions, sociology, and others will find this text to be a very useful handbook.

Technology skills are infused in different parts of the text. Social work practitioners must know how to use various electronic tools like Google, e-mail, electronic discussion lists, and data analysis programs like SPSS (Statistical
Package for the Social Sciences). The book includes electronic exercises and other
assignments that involve students using such tools. Emphasis is given to electronic
skills that help students obtain access to the latest information on client populations,
practice and program interventions, information from professional organizations,
relevant articles, and helpful discussion lists.

Another distinguishing aspect of this book is the extensive use of case examples.
It has been the author’s experience that students’ learning is enhanced when they can
immediately see the application of abstract concepts to human service situations.
Specific evaluation studies from professional journals, websites, and books are fre-
quently highlighted to illustrate concepts, findings, data analyses, and other issues.
Numerous evaluations that Dudley has conducted are used as examples.
Exemplary evaluation activities of social work students and practitioners are also
generously included. These illustrations reflect what students will often find in field
placement agencies and social agencies where they are hired. Figures and graphs
are also used and designed to appeal to students with a range of learning styles. The
book also contains a glossary of terms.

In addition, the book is user-friendly for faculty who teach evaluation courses.
Research courses are sometimes taught by social work educators who do not have
the time or interest to conduct their own evaluations, and such faculty may feel less
qualified to teach an evaluation course. This text is understandable to both inexperienced
and experienced faculty. Also, discussion questions included at the end of each
chapter can serve as a focus for class discussions, quizzes, and tests.

A chapter, “Becoming Critical Consumers of Evaluations,” is also included
to stress the importance of the consumer role in reading and utilizing the
evaluation studies of other researchers. The chapter walks readers through each
of the seven steps of conducting an evaluation, pointing out strengths and
weaknesses of evaluation reports using a recently published evaluation report as
an illustration. This chapter and others provide guidelines for cautiously and
tentatively applying the findings of someone else’s evaluation to your own
practice with clients.

In addition, a Teacher and Student Resource website, an online ancillary
resource from Oxford University Press, is available with the purchase of the
book. It elaborates on how the content of the book can be used and
suggests helpful ways to involve students in understanding and using it. The
teacher’s guide includes a sample syllabus, PowerPoint presentations for each
chapter, and a test bank of multiple- choice exam questions that includes questions
for each chapter.


ORGANIZATION OF THE BOOK

The book is organized into seven parts. Part I, the first chapter, introduces evalu-
ation and how it is described and defined in the book. The chapter begins with a
persuasive rationale for why social workers should be proficient in evaluation. The
concepts of program and practice are introduced along with how they are similar
and different. Definitions of program and practice evaluations, their characteristics
and aims, and the larger social contexts for evaluations are introduced. The misuses
of the term evaluation are also pointed out. Also, evidence- based interventions are
introduced as an indispensable concept in the context of evaluation.

Part II is an orientation to the bigger picture about evaluations. Chapter 2
highlights key historical events that have helped to shape current public policies and
stresses the importance of conducting evaluations. Also, five different theoretical
perspectives on evaluation are introduced to remind readers that evaluation is not a
monolithic enterprise; to the contrary, its purposes vary widely depending on who
is conducting the evaluation and what they are attempting to accomplish. Aspects of
all these theoretical perspectives contribute to the concept of evaluation adopted in
the book. Chapter 3 focuses on the ethics of evaluation, drawing on the NASW Code
of Ethics and the ethical principles of the American Evaluation Association. The
chapter explains how the accreditation standards of the CSWE can be implemented,
including the ethics of social work and the importance of diversity and social and
economic justice. Chapter 4 introduces readers to several types of program and prac-
tice evaluation that are commonly practiced in the settings in which social workers
and other human service workers are employed. They are introduced in this chapter
to help readers be able to identify them in various field settings. These common
evaluations include client satisfaction studies, outcome studies, licensing
of professionals and programs, quality assurance, and judicial decisions. Finally,
Chapter 5 offers guidelines for focusing an evaluation and presents a tool that can be
used to craft a focus for any evaluation.

Part III covers the first of three stages of evaluation activities, the planning stage,
when a program or practice intervention is being conceptualized and important
details are being worked out. The planning stage is presented as a critical time
for evaluation activities, especially to document the need for a new intervention.
Chapter 6 is devoted to conducting needs assessments, especially during the plan-
ning stage. The chapter explains why needs assessments are so important, highlights
a variety of assessment tools, and describes the steps involved in conducting a needs
assessment. Crafting goals and objectives for a new program or practice intervention
is highlighted in Chapter 7. Characteristics

Please read

Assignment: Designing a Plan for Outcome Evaluation

 

Social workers can apply knowledge and skills learned from conducting one type of evaluation to others. Moreover, evaluations themselves can inform and complement each other throughout the life of a program. This week, you apply all that you have learned about program evaluation throughout this course to plan a program evaluation.

To prepare for this Assignment, review “Basic Guide to Program Evaluation (Including Outcomes Evaluation)” from this week’s resources and Plummer, S.-B., Makris, S., & Brocksen, S. (Eds.). (2014b). Social work case studies: Concentration year. Retrieved from http://www.vitalsource.com, especially the sections titled “Outcomes-Based Evaluation” and “Contents of an Evaluation Plan.” Then, select a program that you would like to evaluate. You should build on work that you have done in previous assignments, but be sure to self-cite any written work that you have already submitted. Complete as many areas of the “Contents of an Evaluation Plan” as possible, leaving out items that assume you have already collected and analyzed the data.

 

By Day 7

Submit a 4- to 5-page paper that outlines a plan for a program evaluation focused on outcomes. Be specific and elaborate. Include the following information:

• The purpose of the evaluation, including specific questions to be answered

• The outcomes to be evaluated

• The indicators or instruments to be used to measure those outcomes, including the strengths and limitations of those measures

• A rationale for selecting among the six group research designs

• The methods for collecting, organizing, and analyzing data

 

 

Resources

 


http://managementhelp.org/evaluation/program-evaluation-guide.htm#anchor1586742

 


http://managementhelp.org/evaluation/outcomes-evaluation-guide.htm#anchor30249


Foster-parent training and foster-child outcomes: An exploratory cross-sectional analysis. Vulnerable Children and Youth Studies, 4(2). Available from tandfonline.com

 


Evaluation of foster parent training programs: A critical review. Child & Family Behavior Therapy, 33(2). Available from tandfonline.com


Please read

SOCIAL WORK CASE STUDIES: CONCENTRATION YEAR

Social Work Research:
Planning a Program Evaluation

Joan is a social worker who is currently enrolled in a social
work PhD program. She is planning to conduct her dissertation
research project with a large nonprofit child welfare organization
where she has worked as a site coordinator for many years. She
has already approached the agency director with her interest, and
the leadership team of the agency stated that they would like to
collaborate on the research project.

The child welfare organization at the center of the planned
study has seven regional centers that operate fairly independently.
The primary focus of work is on foster care; that is, recruiting and
training foster parents and running a regular foster care program
with an emphasis on family foster care. The agency has a residen-
tial program as well, but it will not participate in the study. Each
of the regional centers serves about 45–50 foster parents and
approximately 100 foster children. On average, five to six new
foster families are recruited at each center on a quarterly basis.
This number has been consistent over the past 2 years.

Recently it was decided that a new training program for
incoming foster parents would be used by the organization. The
primary goals of this new training program include reducing foster
placement disruptions, improving the quality of services delivered,
and increasing child well-being through better trained and skilled
foster families. Each of the regional centers will participate and
implement the new training program. Three of the sites will start
the program immediately, while the other four centers will not start
until 12 months from now. The new training program consists of
six separate 3-hour training sessions that are typically conducted
in a biweekly format. It is a fairly proceduralized training program;
that is, a very detailed set of manuals and training materials exists.
All trainings will be conducted by the same two instructors. The
current training program that it will replace differs considerably in
its focus, but it also uses a 6-week, 3-hour format. It will be used
by those sites not immediately participating until the new program
is implemented.


Joan has done a thorough review of the foster care literature
and has found that there has been no research on the training
program to date, even though it is being used by a growing
number of agencies. She also found that there are some stan-
dardized instruments that she could use for her study. In addition,
she would need to create a set of Likert-type scales for the study.
She will be able to use a group design because all seven regional
centers are interested in participating and they are starting the
training at different times.
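
The staggered rollout described above creates a natural comparison: the three
centers that adopt the new training immediately and the four that continue the
old training for 12 months. As a minimal sketch only, assuming SciPy is
available, the Python example below shows how scores from a standardized
instrument might be compared across the two groups. Every number in it is
invented; the case study itself reports no data.

    # A hypothetical sketch of the group comparison Joan's staggered
    # design allows. All scores are invented; the case reports no data.
    from statistics import mean
    from scipy import stats

    # Invented scores on a standardized foster-parenting skills instrument
    # for families trained at the three immediate-start centers...
    immediate_start = [72, 68, 75, 80, 77, 70, 74, 69, 78, 73]
    # ...and for families at the four centers still using the old training.
    delayed_start = [65, 70, 62, 68, 64, 71, 66, 63, 69, 67]

    print(f"Immediate-start mean: {mean(immediate_start):.1f}")
    print(f"Delayed-start mean:   {mean(delayed_start):.1f}")

    # Welch's t-test, which does not assume equal variances between groups
    t_stat, p_value = stats.ttest_ind(immediate_start, delayed_start,
                                      equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")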