Explain the process for planning and evaluating work-family programs
Within the categories of formative and summative, there are different types of evaluation. Which of these is most appropriate depends on the stage of your program:

Formative
1. Needs Assessment: Determines who needs the program, how great the need is, and what can be done to best meet the need. For more information, Needs Assessment Training uses a practical training module to lead you through a series of interactive pages about needs assessment.
2. Process or Implementation Evaluation: Examines the process of implementing the program and determines whether the program is operating as planned. Can be done continuously or as a one-time assessment. Results are used to improve the program.

Summative
1. Outcome Evaluation: Investigates to what extent the program is achieving its outcomes. These outcomes are the short-term and medium-term changes in program participants that result directly from the program.
2. Impact Evaluation: Determines any broader, longer-term changes that have occurred as a result of the program. These impacts are the net effects, typically on the entire school, community, organization, society or environment. EE (environmental education) impact evaluations may focus on the educational, environmental quality or human health impacts of EE programs.

These summative evaluations build on data collected in the earlier stages. Each type of evaluation answers different questions, for example:
- Needs Assessment (before the program begins): To what extent is the need being met? What can be done to address this need?
- Outcome Evaluation and Impact Evaluation: What predicted and unpredicted impacts has the program had?

Evans' Short Course on Evaluation Basics:
- Good evaluation is tailored to your program and builds on existing evaluation knowledge and resources.
- Good evaluation is inclusive.
- Good evaluation is honest.

Whatever program components have been included in the evaluation, it is important that they be both measurable and realistic. Finally, once the key outcomes have been identified and written as measurable and realistic, identify when each will be measured. Some outcomes may be measured only early or late in the program, while others may be measured several times while the program is active.
The need for baseline measures is one key reason for designing the evaluation plan before implementation begins: baseline measures establish a starting place and frame of reference for the workplace health program. These can usually be developed from data collected during the initial assessment activities and summarized in the assessment final report. Baseline measures determine where the organization currently is on a given health problem. The evaluation guidance so far has been general and can apply to any outcome. The evaluation module has been organized by the specific health topics listed above, and for each one, potential measures have been developed for four main outcome categories of interest to employers and employees.
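As a minimal sketch of how baseline measures give a frame of reference for later follow-up, the example below compares hypothetical participant data for a single assumed measure (weekly minutes of physical activity); the measure, values and participant IDs are all illustrative assumptions, not part of any real assessment.

```python
# A minimal sketch (hypothetical data and measure) of comparing a baseline
# measurement, taken before the program begins, with a follow-up measurement.
from statistics import mean

# Hypothetical baseline survey: minutes of physical activity per week
# reported by participants during the initial assessment activities.
baseline = {"P01": 60, "P02": 30, "P03": 0, "P04": 90, "P05": 45}

# The same measure collected again after the program has run.
follow_up = {"P01": 90, "P02": 60, "P03": 30, "P04": 90, "P05": 75}

baseline_mean = mean(baseline.values())
follow_up_mean = mean(follow_up.values())
change = follow_up_mean - baseline_mean

print(f"Baseline mean: {baseline_mean:.0f} min/week")
print(f"Follow-up mean: {follow_up_mean:.0f} min/week")
print(f"Average change from baseline: {change:+.0f} min/week")
```

Without the baseline figures recorded first, the follow-up numbers alone could not show whether anything changed, which is why the plan must be designed before implementation begins.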
Centers for Disease Control and Prevention. Framework for program evaluation in public health. Morbidity and Mortality Weekly Report 1999;48(No. RR-11).

The CFCA practice resource Dissemination of Evaluation Findings discusses ways of writing an evaluation report and sharing the findings.
Appendices A and B can be downloaded separately below. This resource helps FaC Activity service providers to build capacity to plan and implement programs, evaluate outcomes, and share the results with others. It has been designed for Communities for Children service providers, but may be used by anyone who is interested in developing a program.
This short article is for people new to evaluation who are planning to conduct or commission an evaluation.
Planning an evaluation: step by step
Jessica Smart

About this guide
This resource is for people who are new to evaluation and need some help with developing an evaluation plan for a program, project or service for children and families.
Developing an evaluation plan
An evaluation plan is used to outline what you want to evaluate, what information you need to collect, who will collect it and how it will be done. Figure 1: Key steps in an evaluation.

Why do I need to evaluate? Identifying evaluation purpose and audience
The essential first step in an evaluation is identifying the purpose of the evaluation and who the main audience will be.
What do I need to find out? Identifying evaluation questions
Evaluation questions are high-level questions that guide an evaluation.

Deciding on evaluation design
Different evaluation designs serve different purposes and can answer different types of evaluation questions. Some factors to consider are:
- the type of program or project you are seeking to evaluate
- the questions you want to answer
- your target group
- the purpose of your evaluation
- your resources
- whether you will conduct the evaluation internally or contract an evaluator.
What will I measure? Outcomes and outputs
Outcomes are the changes that are experienced by children and families as a result of your program or service.
The following questions, adapted from the Ontario Centre for Excellence in Child and Youth Mental Health, will help you to make these decisions: Is this outcome important to your stakeholders? Different outcomes may have different levels of importance to different stakeholders.
It will be important to arrive at some consensus. Is this outcome within your sphere of influence? If a program works to build parenting skills but also provides some referrals to an employment program, measuring changes to parental employment would not be a priority as these are outside of the influence of the program.
Is this a core outcome to your program? A program may have a range of outcomes but you should aim to measure those which are directly related to your goal and objectives. Will the program be at the right stage of delivery to produce the particular outcome? Ensure that the outcomes are achievable within the timelines of the evaluation. For example, it may not be appropriate to measure a long-term outcome immediately after the end of the program.
Will you be able to measure the outcome? There are many standardised measures with strong validity and reliability that are designed to measure specific outcomes (see the Outcomes measurement matrix). The challenge is to ensure that the selected measure is appropriate for, and easy to undertake with, the target population.
Will measuring this outcome give you useful information about whether the program is effective or not? Evaluation findings should help you to make decisions about the program, so if measuring an outcome gives you interesting, but not useful, information, it is probably not a priority. For example, if your program is designed to improve parenting skills, measuring changes in physical activity will not tell you whether or not your program is effective.
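Once a priority outcome is chosen, it is usually reduced to something countable. As a minimal sketch, the indicator, rating scale and response data below are all hypothetical: an indicator such as "proportion of parents rating their parenting confidence at 4 or more on a 5-point scale" could be computed like this.

```python
# A minimal sketch (hypothetical indicator, scale and data) of turning raw
# survey responses into an indicator value: the proportion of participants
# who rate their parenting confidence at 4 or above on a 5-point scale.
responses = [5, 4, 3, 4, 2, 5, 4]  # post-program confidence ratings

threshold = 4
met = sum(1 for r in responses if r >= threshold)
indicator = met / len(responses)

print(f"{met} of {len(responses)} participants met the threshold "
      f"({indicator:.0%})")
```

A single outcome may need more than one indicator; the point of the sketch is that each indicator must be something you can actually observe or count in the data you collect.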
Selecting indicators
An indicator is the thing that you need to measure or observe to see if you have achieved your outcomes.

How will I measure it? Types of data
Data are the information you collect to answer your evaluation questions.

Selecting data collection methods
Interviews, observation, surveys, analysis of administrative data, and focus groups are common data collection methods used in the evaluation of community services and programs.
- The needs of the target group: For example, a written survey may not be suitable for groups with English as an additional language or low literacy. It is best to check with the target group about what methods they prefer.
- Cultural appropriateness: If you are working with Aboriginal and Torres Strait Islander people, or people from culturally and linguistically diverse (CALD) backgrounds, make sure your evaluation methods and the tools you are using are relevant and appropriate. The best way to do this is through partnering with Aboriginal and Torres Strait Islander people or members of CALD communities to design and conduct the evaluation. At a minimum, you should discuss the evaluation with participants and pilot any tools or questionnaires that you will use.
- The type of question you want to answer: For example, questions about how participants experienced a program might be best answered by an interview or focus group, whereas questions about how much change occurred might be best answered by a survey.
- Timing: Consider the time that you and your team have to collect and analyse the data, and also the amount of time your participants will have to participate in the evaluation.
- Skills: Developing surveys or observation checklists, conducting interviews and analysing data are all specialist skills. If you don't have these skills, you may wish to undertake some training or contract an external evaluator.
- Access to tools: Developing data collection tools, such as your own survey or observation checklist, that collect good-quality data is difficult. Using validated outcomes measurement tools is often preferable, but these may not be suitable for your group or may need to be purchased. For more information see this article on how to select an outcomes measurement tool and this resource to assist you to find an appropriate outcomes measurement tool.
- Practicality: There is no point collecting lots of valuable data if you do not have the time or skills to analyse them. In fact, this would be unethical (see ethics below) because it would be an unnecessary invasion of your participants' privacy and a waste of their time.
You need to make sure that the amount of data you're collecting, and the methods you're using to collect it, are proportionate to the requirements of the evaluation and the needs of the target group.

Understanding the causes of a social issue and the way it is experienced by different groups and individuals will enable you to identify and implement a more effective program or service.
Most projects, programs or services are designed to meet the needs of a group of people who share certain characteristics. This section includes resources that describe how to ensure your program meets the needs of your target group, as well as resources on how to work effectively with specific groups such as Aboriginal and Torres Strait Islander people. Effective programs and services are those that have been planned and designed using evidence and a clear program theory.
There are many different programs and strategies to address social issues, so it is important to consider what will be most effective in the context in which you are working.
There are usually many different options for programs, services or projects to address a social problem. This section includes resources that can help you to identify and understand different approaches, different types of programs and activities, and the evidence behind them. Central to successful program design, implementation and evaluation is being clear about how a program should work and what outcomes it is intended to produce. This is often done with a program logic. An instructional video steps through how to develop a program logic model.
It outlines common terminology and uses examples to explore practical considerations for child and family practitioners. To achieve their intended outcomes, programs or services must be implemented in line with a plan.
This is true whether you have designed your own program or you have selected an evidence-based program. However sometimes programs need to be adapted to suit the context or needs of the participants. Knowing how to adapt a program and having a clear plan for its implementation will help ensure that it meets the needs of participants while still achieving its intended outcomes.
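A program logic of the kind described above can be sketched as a simple data structure. The program, stages and entries below are hypothetical, assuming a parenting-skills program, and only illustrate the chain from inputs through activities and outputs to intended outcomes.

```python
# A minimal sketch (hypothetical program content) of a program logic model
# represented as a simple data structure: each stage maps to the items a
# service provider would list when being clear about how the program works.
program_logic = {
    "inputs": ["facilitators", "venue", "funding"],
    "activities": ["weekly parenting group sessions"],
    "outputs": ["8 sessions delivered", "20 parents attending"],
    "short_term_outcomes": ["improved parenting knowledge"],
    "medium_term_outcomes": ["improved parenting skills"],
    "long_term_outcomes": ["improved child wellbeing"],
}

# Print the logic model one stage per line, in order.
for stage, items in program_logic.items():
    print(f"{stage}: {', '.join(items)}")
```

Writing the logic out this explicitly makes it obvious which outcomes the evaluation should measure and at which stage of delivery each becomes observable.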
In social services, evaluation is usually undertaken to find out whether a program or service was delivered the way it was planned, and to examine its effects on program participants. This page will give you the basics about evaluating programs and services, from developing evaluation questions to interpreting data.
You will also find guidance on selecting an evaluation approach that is suited to your program participants, objectives and resource requirements. Evaluation can be complex and confusing, and it can be hard to know where to start.
It is essential that evaluations are conducted ethically and with minimal risk to participants. This section provides an overview of ethical standards in evaluation (see the National Health and Medical Research Council) and explains the ethics approval process. Evaluation approaches can help to guide the design and implementation of your evaluation and are particularly useful if there are specific ways you want to work with participants. The following resources provide information on selected approaches to evaluation.
Program outcomes are what you anticipate will happen as a direct result of the program or service you are delivering. Most program evaluations will target short- and medium-term outcomes. This section provides guidance on choosing specific, realistic and measurable outcomes that can be used to demonstrate program effects. A guided tour through: Measuring outcomes is an instructional video, produced by AIFS, that steps the audience through how to measure outcomes.
It outlines common terminology and uses examples to explore practical considerations for child and family practitioners when measuring outcomes. It includes a focus on standardised outcome measures. The video is designed to be accessible to practitioners with limited evaluation experience. You might choose to use a standardised survey, observation checklist or an interview guide — or a combination of all three.
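As a minimal sketch of how a standardised survey turns responses into comparable scores, the example below sums five items rated on a 1-5 scale into a total scale score per participant; the scale, items, scoring rule and responses are all hypothetical assumptions, not a real standardised measure.

```python
# A minimal sketch (hypothetical scale, items and responses) of scoring a
# standardised survey: each participant rates five items on a 1-5 scale and
# the item ratings are summed into a total scale score (possible range 5-25).
responses = {
    "P01": [4, 5, 3, 4, 4],
    "P02": [2, 3, 2, 3, 2],
}

scores = {pid: sum(items) for pid, items in responses.items()}

for pid, score in scores.items():
    print(f"{pid}: total score {score} (range 5-25)")
```

Real standardised measures publish their own scoring rules and cut-offs, so always score according to the instrument's manual rather than an ad hoc sum like this one.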