Review Article
Austin Tuberc Res Treat. 2016; 1(1): 1001.
National Tuberculosis Prevention and Control Program Evaluation in the United States of America: From Concept to Practice
Awal Khan*
Division of Tuberculosis Elimination, Centers for Disease Control and Prevention, USA
*Corresponding author: Awal Khan, Centers for Disease Control and Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention, Division of Tuberculosis Elimination, Atlanta, USA
Received: May 03, 2016; Accepted: June 15, 2016; Published: June 17, 2016
Abstract
Within public health, Program Evaluation (PE) concepts, methodologies, and practices need further development so that U.S. Tuberculosis (TB) control programs can develop and implement PE in an atmosphere of greater transparency, accountability, and return on investment. TB PE efforts focus detailed analyses on meeting national TB program objectives and sustaining achievements. Three topics are essential for sustainable PE practices: specifying related concepts; specifying and understanding causal pathways in evaluation; and meeting the demand for collaboration among multiple stakeholders. This moves beyond current efforts and focuses evaluation on the range and distribution of the direct impacts of interventions on programs for greater return on investment. Those involved in TB control and prevention efforts may benefit from focusing on both shared decision processes and greater collaboration and engagement in PE as a way to achieve the desired accountability “from and among all sectors of the public health system” [1].
Keywords: Program evaluation; Tuberculosis; Goals and objectives; Model; Implementation gap
Introduction
Background and objectives: the broadening of the program evaluation agenda
Public health programs in the United States have the goal of preventing and controlling disease, injury, disability, and death. Program evaluation is an essential program management tool in public health [2]. Program evaluation is also a necessary part of the operating principles of the Centers for Disease Control and Prevention (CDC) for guiding public health activities, which include: using science as a basis for decision making and public health action; expanding the quest for social equity through public health action; performing effectively as a service agency; making efforts outcome-oriented; and being accountable for program activities achieving intended outcomes efficiently [3]. These operating principles guide public health programs to develop clear goals and objectives with expected measures of success and to work collaboratively with relevant stakeholders for ongoing improvement and learning through shared experiences.
As cooperative partners in the U.S. national TB control program, state and local Tuberculosis (TB) control programs are at the frontline of disease control and prevention. TB programs in state and local health departments were successful in halting and reversing the TB resurgence in the United States in 1985-1992 [4]. The continued downward trend in U.S. TB incidence during the past two decades attests to the success of sustained program efforts. Despite substantial declines in the United States, TB remains among the leading infectious causes of illness and death worldwide.
In 2006, a collaborative TB Evaluation Workgroup comprising representatives of state and local health departments, the National TB Controllers Association (NTCA), the Association of Public Health Laboratories, CDC’s Division of Tuberculosis Elimination (DTBE), and CDC’s Division of Global Migration and Quarantine selected 15 national TB program objectives with performance targets for 2015 [5]. DTBE’s program evaluation activities have focused on 12 of these 15 objectives to make quantitatively based judgments about programs’ progress toward achieving their targets. The 12 objective categories are Completion of Treatment; TB Case Rates; Contact Investigation; Laboratory Reporting; Treatment Initiation; Sputum Culture Conversion; Sputum Culture Reported; Data Reporting; Evaluations of Immigrants and Refugees; Recommended Initial Therapy; Universal Genotyping; and Known Human Immunodeficiency Virus (HIV) Status [5,6]. The remaining three objectives focus on capacity building, which already has the necessary commitment and support at the program level. These judgments may take a number of forms, including decisions on program performance, initiatives for focused evaluation, highlighting of programmatic challenges and opportunities, experience in implementation, and judgments about cost-effectiveness and efficiency. The focus might be on the program overall or on particular aspects of it, such as its implementation or its impact on program performance and accountability. In general, CDC’s TB evaluation efforts aim to address three gaps: implementation, knowledge, and ambition. Efforts to bridge these gaps are intended to be “mutually reinforcing to achieve desired results” [7].
The importance of monitoring program performance against the targeted goals of the national TB program objectives is well recognized. However, there is a clear need for an updated conceptual model that helps organize information and makes use of program evaluation efforts to achieve continuous improvement. A conceptual model, grounded in practical realities, is important throughout all phases of developing, monitoring, and evaluating a given plan.
The purpose of this article is to present three fundamental activities that support monitoring and evaluation efforts for improving TB program performance on the national TB program objectives:
1. To describe key concepts of program evaluation, both in general terms and in reference to TB control and prevention;
2. To develop an improvement-oriented model for program evaluation; and
3. To provide a collaborative process that will help TB program staff in developing effective program evaluation activities and ways to improve program accountability in a transparent and productive manner.
Key concepts for program evaluation
The terms “evaluation” and “program evaluation” have been used interchangeably. Additional terms such as “monitoring” and “assessment” are also linked with evaluation as concepts of program evaluation. Evaluation is best defined as a process of decision making about an objective and how it compares to some standard of acceptability [8]. When planning and conducting an evaluation study, it is important to take account of the likely uses of the information provided by the study. This helps to ensure that the scope and quality of program information are appropriate to the nature and significance of the judgments and decisions to be made.
Monitoring: It is the ongoing and systematic collection and analysis of information (i.e., data) to determine the extent of progress towards the stated goals and objectives. It has been described as “continuing collection of data on specified indicators to provide management and the main stakeholders of an ongoing decision making with indications of the extent of progress and achievement of objectives and progress in the use of allocated resources” [9]. Monitoring involves the routine tracking of progress with respect to previously defined objectives using surveillance data; an unexpected or abrupt change in monitored data may signal a need for a more formal evaluation of the program activities that would identify both problems and practical solutions.
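To make the monitoring concept concrete, the sketch below shows, in Python, how progress toward a performance target might be tracked and how an abrupt change could be flagged as a signal for a more formal evaluation. The indicator name, target, readings, and change threshold are hypothetical illustrations and are not actual National TB Indicator Project values or methods.

```python
# Minimal sketch (not an actual National TB Indicator Project calculation):
# routine monitoring compares an indicator's observed value against its
# national target and flags abrupt changes that may warrant a formal evaluation.
# The indicator name, target, readings, and change threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class IndicatorReading:
    period: str       # reporting period, e.g., a year
    observed: float   # observed performance, as a percentage

def monitor(indicator: str, target: float, readings: list[IndicatorReading],
            abrupt_change: float = 10.0) -> None:
    """Print progress toward the target and flag large period-to-period shifts."""
    for prev, curr in zip(readings, readings[1:]):
        delta = curr.observed - prev.observed
        status = "meets target" if curr.observed >= target else "below target"
        print(f"{indicator} {curr.period}: {curr.observed:.1f}% ({status}, target {target}%)")
        if abs(delta) >= abrupt_change:
            print(f"  NOTE: change of {delta:+.1f} points since {prev.period}; "
                  f"consider a focused program evaluation.")

# Hypothetical example: a completion-of-treatment indicator with a 93% target.
monitor("Completion of Treatment", target=93.0, readings=[
    IndicatorReading("2013", 88.5),
    IndicatorReading("2014", 90.1),
    IndicatorReading("2015", 78.9),   # abrupt drop would be flagged for evaluation
])
```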
Assessment: It is defined as the estimation of the nature, quality, or ability of someone or something at a given point in time. Assessment helps one to understand or judge an event or situation in which meaningful application of essential knowledge and skills is demonstrated. Assessment has been described as the “systematic collection, review, and use of information about educational programs undertaken for the purpose of improving learning and development” [10]. Assessment is an ongoing process aimed at understanding and improving program performance.
Evaluation: It is the collection and analysis of information in an attempt to understand whether and how a program is meeting its stated objectives. Evaluation builds on: the monitoring process; identifying the level of impacts achieved; identifying the intended and unintended effects of these achievements; identifying approaches that did or did not work well; and identifying the reason(s) for success or failure and learning from both. The evaluation process also provides a level of judgment as to overall merit, worth, and value of the program [11-13]. Evaluation is defined as learning, analyzing, and discussing what has happened during a period of time, and how these lessons can help to improve actions for a similar period in the future [14].
Having performance data or other monitoring information about a program’s performance progress towards meeting targets of the national TB program objectives is not sufficient for making sound judgments and decisions about problem identification and improvement in program practices. Program evaluation, as a concept, is the process of weighing, interpreting, and subsequently making judgments about program effectiveness [15].
Program evaluation includes both monitoring and evaluation concepts and focuses on the systematic review of program operations and outcomes as a means of gathering data that can be used in future program improvement processes. This ensures a continuation of benefits from an intervention after it has been completed. It is about demonstrating that something is working, is needed, or is improving practices [16]. As DTBE Director Dr. Kenneth Castro (personal communication, 2010) has explained, the purpose of program evaluation is to improve accountability of program activities; identify problems with potential solutions for performance of program activities; and have a learning agenda that focuses on the continuous improvements of program practice and performance.
Improvement-oriented model for TB program evaluation
The tangible contribution of conceptual models varies with the maturity of the program. A simple model provides a framework that reflects monitoring in a broad context: improvement, understanding, and communication. For TB programs, the model needs to be easily understood by scientists, TB program managers, and frontline TB patient care and management staff. The model should include the information needed to make educated choices about what might serve as a focus of the evaluation of national TB program objectives, and it should provide a context for organizing information and understanding gaps in implementation and knowledge of specific focus areas. This model encompasses the concept of the improvement-oriented model outlined in Michael Quinn Patton’s book, Essentials of Utilization-Focused Evaluation. The book begins with the premise that evaluations should be judged by their utility and actual use, and should be based on an organization’s needs, wants, and logistical realities [17]. Ideally, the data and analyses should inform decision making.
The national TB program objectives provide the implicit linkage for developing the model, and their articulation is essential to justify the program evaluation initiative. Our model for TB program evaluation articulates the monitoring of all national TB program objectives in achieving their targeted goals; contributes to understanding of the current status of implementing the objectives, including the gaps; identifies key focus areas for evaluation based on program priority areas; facilitates selection and justification of evaluation focus areas (objectives); and clearly communicates the dynamic processes of program evaluation to relevant stakeholders.
The model’s development considers three important criteria: assessment, alignment, and involvement. Assessment considers the current implementation and achievement of the 12 national TB program objectives, comprising 24 indicators [6]. Alignment ensures that the identification of an evaluation focus area is grounded in the national TB program objectives. Involvement ensures that all local, state, and national TB program stakeholders are engaged in developing the evaluation plan.
Figure 1 depicts the improvement-oriented model for developing evaluation plans within a TB control program. The goals of this model are to review the current status of implementing the national TB program objectives, to identify the focus area of evaluation, and to determine why the evaluation should be done (i.e., to address “implementation gaps”). Three key ideas guide the model for TB program evaluation and its activities: assessing the implementation gaps in meeting the performance targets of the national TB program objectives; identifying the program evaluation objective of focus based on current knowledge of program performance and priorities; and identifying the tools and systems to be used to conduct the evaluation.
Figure 1: Model for Tuberculosis (TB) program evaluation.
Several questions need to be posed to address the implementation gap properly, including: Did the TB control program achieve its intended objectives, and to what extent? Program evaluation is validated by gathering information about program achievements in meeting the targets of its objectives, learning about the inherent challenges and opportunities (knowledge gaps), and identifying the tools and systems to be used to overcome the challenges. The proposed approach to addressing implementation gaps reflects the After Action Review (AAR) model, which centers on four questions: What was expected to happen? What actually occurred? What went well, and why? What can be improved, and how? The AAR model features a structured approach for reflecting on the work of a group and identifying strengths, weaknesses, and areas for improvement, and it requires open and honest professional discourse [18].
Table 1 provides a brief outline of the application of the conceptual model for TB program evaluation. The six steps assess the current implementation status and gaps, identify the objectives or areas of focus, and outline the steps involved in planning and carrying out program evaluation activities, consistent with evaluation lessons learned and best practices.
Step / Justification

1. Determine the evaluation need
Review the current performance of the national TB program objectives in meeting their targets, using the National TB Indicator Project¹ system or TB surveillance data. Acknowledge that surveillance data alone provide little or no information on program activities.
Should an evaluation be undertaken? Why should an evaluation be undertaken?
How do the designated program evaluation focal points assess the need in conjunction with TB program staff?

2. Identify the evaluation focus area
What national TB program objective(s) or area(s) should be the focus?
How is the area of focus aligned with TB prevention and control efforts?

3. Determine the evaluation methodology and data needs
What are the objectives of the evaluation?
What are the activities related to each evaluation objective?
When is the evaluation to be conducted?
What types of data are being collected?
What are the data sources?
How will the data be collected (e.g., survey, document review)?

4. Present the evaluation findings
What types of data analyses will be needed?
Who will be responsible for summarizing and presenting evaluation findings?
What is the role of the TB Program Evaluation Network (TB PEN²) and the designated PE focal point person (EFP³) from each program?
How will the data be presented and reported (e.g., NTCA⁴ or TB PEN meeting)?

5. Document the evaluation results
How are the evaluation results documented?
Who are the key stakeholders who must review the results (TB PEN focal point, TB program manager, and program staff)?
Present the evaluation results in interim and annual TB progress reports.

6. Determine the sustainability plan
How are the evaluation results used for program improvement?
What lessons were learned, and how are they used for program improvement?
How are evidence-based “best practices” identified for program improvement?

¹The National TB Indicator Project is a secure, web-based monitoring system that uses routinely collected surveillance data to track progress toward the national TB program objectives.
²TB PEN is designed to develop and strengthen the capacity of state and local TB programs to monitor and evaluate their programs and to use findings to enhance the effectiveness of prevention and control activities.
³The EFP is a designated individual, required of each TB cooperative agreement recipient, who serves as the point of contact for evaluation activities and shares program evaluation experiences and lessons learned.
⁴NTCA: National Tuberculosis Controllers Association (https://www.tbcontrollers.org).
Table 1: Steps in application of the model for TB program evaluation.
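As a complementary illustration (not a CDC tool or a required format), the sketch below shows how a program evaluation focal point might capture the six steps of Table 1 as a simple structured record; all field names and example entries are hypothetical.

```python
# Minimal sketch of how a program might document an evaluation plan along the
# six steps in Table 1. The field names and example entries are hypothetical;
# this is not a CDC or National TB Indicator Project data structure.

from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    need: str                       # Step 1: why an evaluation is needed
    focus_area: str                 # Step 2: national TB program objective of focus
    methods: list[str] = field(default_factory=list)  # Step 3: data sources and methods
    presentation: str = ""          # Step 4: how findings will be analyzed and presented
    documentation: str = ""         # Step 5: where results are documented
    sustainability: str = ""        # Step 6: how results feed program improvement

# Hypothetical example entry.
plan = EvaluationPlan(
    need="Contact investigation indicator below the national target for two years",
    focus_area="Contact Investigation",
    methods=["surveillance data review", "staff survey", "document review"],
    presentation="Summary presented by the EFP at a TB PEN meeting",
    documentation="Interim and annual TB progress reports",
    sustainability="Lessons learned carried into the next evaluation cycle",
)
print(plan.focus_area, "->", plan.need)
```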
Collaborative process model for effective TB program evaluation
The model requires interaction among the local, state, and national TB programs to create an easily understood, and often visual, format that simplifies the process of identifying appropriate evaluation focus areas and determining what to evaluate and how. In addition, the model is useful in identifying key stakeholders and the linkages among them in order to organize programmatic actions to monitor and evaluate program performance on the national TB program objectives.
Collaboration has become vital to the implementation of this model for TB program evaluation (Figure 2). Designated state- and local-level TB program staff members serve as Evaluation Focal Points (EFPs), each a member of the TB Program Evaluation Network (TB PEN). The EFPs and DTBE program and evaluation consultants are empowered by the national TB program objectives to collaborate in implementing a program evaluation initiative. The operational aspects of the conceptual model can be divided into three areas: the program (state/local) level, the CDC level, and the evaluation knowledge base. At the program level, the TB program EFP and the TB program manager review current performance with respect to the national TB program objectives. Based on the state’s TB prevention and control priorities, they identify the program evaluation focus area and develop the program evaluation plan following the CDC evaluation framework [19] and Program Evaluation Team (PET) guidance. At the CDC level, DTBE program and evaluation consultants review the interim and annual progress reports submitted by the program and provide feedback on the program evaluation plan and reports using a clearly laid-out form. After the feedback is shared, a follow-up conference call is held with program and CDC staff. Ideally, a comprehensive program evaluation effort involves all relevant stakeholders and incorporates all the steps shown in Table 1.
Figure 2: Operational aspects of dynamic model for TB program evaluation.
This collaborative approach focuses on increasing awareness of and engagement in program evaluation among TB program stakeholders, facilitating partnerships, and promoting co-learning and capacity building among all stakeholders. We propose applying the Plan, Do, Check, Act (PDCA) methodology, which is used to identify improvement opportunities and to create a systematic approach to implementing changes [20]. This method supports learning by doing: experimenting with improvements, examining what is learned, and incorporating what was learned into further improvement efforts. It embeds a culture of ongoing learning and establishes the responsibility of all stakeholders to retain overall accountability for programmatic improvements. The principle of the PDCA cycle is to establish a collaborative understanding of issues, to discuss improvements with all stakeholders in a structured way, and to continue to practice the strategies with the greatest effects on program performance and practices.
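A minimal sketch of the PDCA cycle as an iterative loop is shown below; the step functions and example values are hypothetical placeholders, since in practice each step is carried out collaboratively by program staff and stakeholders rather than by code.

```python
# Minimal sketch of the Plan-Do-Check-Act (PDCA) cycle as an iterative loop.
# The step functions and example values below are hypothetical placeholders.

from typing import Callable

def pdca_cycle(plan: Callable[[], dict],
               do: Callable[[dict], dict],
               check: Callable[[dict], bool],
               act: Callable[[dict, bool], None],
               max_cycles: int = 3) -> None:
    """Repeat Plan -> Do -> Check -> Act until the expected improvement is seen."""
    for i in range(1, max_cycles + 1):
        change = plan()               # Plan: identify an improvement opportunity
        results = do(change)          # Do: implement the change on a small scale
        improved = check(results)     # Check: compare results against expectations
        act(results, improved)        # Act: adopt and spread, or adjust for the next cycle
        if improved:
            print(f"Cycle {i}: improvement adopted and standardized.")
            return
        print(f"Cycle {i}: expected effect not seen; revising the plan.")
    print("Maximum cycles reached without the expected improvement.")

# Hypothetical usage with placeholder steps and values.
pdca_cycle(
    plan=lambda: {"change": "reminder calls before directly observed therapy visits"},
    do=lambda change: {"completion_rate": 91.0, **change},
    check=lambda results: results["completion_rate"] >= 93.0,
    act=lambda results, improved: None,
)
```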
Conclusion
Steps forward: Program evaluation addresses two different but interrelated topics. The first is the implementation of strategic evaluation initiatives; the second is progress on achievements. If program evaluation is to remain relevant to improving program performance by meeting program and national goals and objectives, and to strengthening public health systems in the 21st century, greater engagement of all relevant TB stakeholders in understanding, developing, and implementing evaluation is required. Programs must learn to develop a culture of continuous improvement in program performance and practices. This means that rigorous analysis of outcomes must be shared among all stakeholders and beneficiaries in a timely manner [21]. Our model moves from concept to practice and is important throughout all phases of developing a program evaluation initiative. It is also useful in identifying “implementation gaps” in order to improve program performance and maintain a shared understanding of the program evaluation process. Initially, program evaluation efforts help the program determine whether it is achieving the national TB program objectives. Next, the process provides momentum for program improvement by promoting a collaborative environment in which to identify challenges, barriers, opportunities, and knowledge gaps for program improvement. Lastly, there is a basis for identifying lessons learned and evidence-based best practices for reporting to all stakeholders (local, state, and national staff, as well as the general public) for greater transparency and accountability. This approach lends itself to a greater understanding of TB control and prevention efforts through long-term cooperative engagement and the integration of shared decision-making processes for the mutual benefit of all stakeholders. In principle, this collaborative approach embraces the concepts of greater accountability, working together, collaborative learning, and empowerment of the stakeholders involved.
Program evaluation, with regard to TB control and prevention efforts, requires the concerted engagement of local, state, and national health department program staff. Analysis of the national TB program objectives, of state and local TB program evaluation capacity, and of the challenges that programs face in implementing an evaluation plan is more likely to support the effectiveness of TB control and prevention operations and to help improve and maintain program performance. Creating accountable TB control and prevention programs requires understanding the “programmatic determinants of evidence”: the aspects of operation, function, science, and accountability that are the foundations of TB control and prevention efforts.
These models provide a framework for moving from concept to practice by supporting strong collaboration among all TB program staff (CDC, state, and local programs) and by representing the full cycle of evaluation, including communication with state and local staff to examine program achievements in meeting performance targets and in implementing the national TB program objectives.
Acknowledgement
The paper has benefited from the attention of many Division of Tuberculosis Elimination (DTBE) colleagues over the course of its preparation, beginning with a presentation at the first Tuberculosis Program Evaluation Network (TB PEN) conference. We would especially like to acknowledge the support and commitment of DTBE program consultants and then program evaluation team members in developing the collaborative approach needed to operationalize the conceptual model. We thank Dr. Kashef Ijaz for his guidance in revitalizing program evaluation and we thank Dr. Kenneth Castro for his vision and guidance in program evaluation.
References
1. Institute of Medicine. The Future of the Public’s Health in the 21st Century. Washington, DC: National Academies Press. 2002.
2. Dyal WW. Ten organizational practices of public health: a historical perspective. American Journal of Preventive Medicine. 1995; 11: 6-8.
3. Koplan JP. CDC sets millennium priorities. US Medicine. 1999; 4-7.
4. Cantwell MF, Snider DE, Cauthen GM, Onorato IM. Epidemiology of tuberculosis in the United States, 1985 through 1992. Journal of the American Medical Association. 1994; 272: 535-539.
5. Centers for Disease Control and Prevention. National TB Program Objectives and Performance Targets for 2015. 2009.
6. Centers for Disease Control and Prevention. Monitoring tuberculosis programs - National Tuberculosis Indicator Project, United States, 2002-2008. Morbidity and Mortality Weekly Report. 2010; 59: 295-298.
7. Castro KG, LoBue P. Bridging implementation, knowledge, and ambition gaps to eliminate tuberculosis in the United States and globally. Emerging Infectious Diseases. 2011; 17: 337-342.
8. Green LW, Lewis FM. Measurement and evaluation in health education and health promotion. Palo Alto, CA: Mayfield Publishing. 1986; 8: 428-430.
9. World Bank Independent Evaluation Group (IEG). 10 Steps to a results-based monitoring and evaluation system. 2007.
10. Palomba CA, Banta TW. Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass. 1999.
11. Rossi P, Freeman H. Evaluation: a systematic approach. Newbury Park, CA: Sage Publications. 1993.
12. Scriven M. Evaluation Thesaurus. 4th edn. Newbury Park, CA: Sage. 1991.
13. Billings DM, Halstead JA. Teaching in nursing: a guide for faculty. 2nd edn. St. Louis: Elsevier Saunders. 2005.
14. Mahanty S, Stacey N, Holland A, Menzies S. Learning to learn: designing monitoring plans in the Pacific Islands International Waters Project. Ocean and Coastal Management. 2007; 50: 392-410.
15. Boulmetis J, Dutwin P. The ABCs of evaluation: timeless techniques for program and project managers. San Francisco: Jossey-Bass. 2000.
16. Rogers A, Smith MK. Evaluation: learning what matters. London: Rank Foundation/YMCA George Williams College. 2006.
17. Patton MQ. Utilization-focused evaluation: the new century text. 3rd edn. Thousand Oaks, CA: Sage. 1997.
18. Salem-Schatz S, Ordin D, Mittman B. Guide to the After Action Review. In: Using Evaluation to Improve Our Work: A Resource Guide, Version 1.1. 2010.
19. Centers for Disease Control and Prevention. Framework for program evaluation in public health. Morbidity and Mortality Weekly Report. 1999; 48 (No. RR-11). Atlanta, Georgia.
20. Deming WE. Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Studies. 1986.
21. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. British Medical Journal. 2008; 337: a1655.