Using data from a group
There are different ways of analysing outcomes data from a group of people. Common approaches include looking at the change in the average score between two time points, and looking at how many people improved, stayed the same, or deteriorated between those time points.
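As a rough illustration of these two approaches, the sketch below uses hypothetical paired scores and assumes a measure where lower scores indicate fewer difficulties; it is not tied to any particular questionnaire.

```python
import pandas as pd

# Hypothetical scores: one row per young person, with the same measure
# completed at Time 1 and Time 2 (lower score = fewer difficulties here).
scores = pd.DataFrame({
    "time1": [18, 22, 15, 30, 25, 19],
    "time2": [14, 23, 10, 28, 25, 12],
})

# Approach 1: change in the average score between the two time points.
mean_change = scores["time2"].mean() - scores["time1"].mean()

# Approach 2: how many people improved, stayed the same, or deteriorated.
change = scores["time2"] - scores["time1"]
improved = (change < 0).sum()
stayed_same = (change == 0).sum()
deteriorated = (change > 0).sum()

print(f"Mean change: {mean_change:.2f}")
print(f"Improved: {improved}, same: {stayed_same}, deteriorated: {deteriorated}")
```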
At CORC we think it is helpful to consider the changes in a group of people in a range of ways:
- Real-life and practical significance of the change for the group
- Generalisability to the young people supported by your service overall
- Statistical significance
Where relevant, additional insight may be gained by comparing the change in your service with that seen in other services. More details are given below.
For more information about our data analysis support, please visit here.
Real-life and practical significance of the change for the group
Here are a couple of ways to reflect on how the change might relate to children and young people’s experience:
- Look at the specific questions that are asked by the questionnaire: how much movement in the question responses would you expect, given the children and young people you see, and the support you offer?
- If a person’s change was more than the average, would this correspond to a noticeable change in their life?
Generalisability to the young people supported by your service overall
Any changes in score can only be seen for those children and young people who completed a measure twice. Their responses might not be reflective of all service-users’ experiences. Consider:
- What proportion of the people you support completed a measure twice?
- What are the possible similarities and differences between them and your service-users as a whole?
Statistical significance
In the analysis we offer to members, we run a statistical test that helps us to consider whether the change could simply be due to chance (an illustrative sketch follows this list). We take into account:
- that the change in score can be influenced by factors other than the support received
- that how confidently we can interpret the change depends on the number of people in the analysis, which may be too small to distinguish a real change (whether from the support or from other factors) from random fluctuation in the data
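The exact test depends on the measure and the data, but as a minimal sketch, a paired-samples t-test on hypothetical Time 1 and Time 2 scores looks like this (illustrative data only, not CORC's precise method):

```python
from scipy import stats

# Hypothetical Time 1 / Time 2 scores for young people who completed the
# measure twice (illustrative values only).
time1 = [18, 22, 15, 30, 25, 19, 27, 21]
time2 = [14, 23, 10, 28, 25, 12, 20, 18]

# A paired-samples t-test asks whether the average change between the two
# time points is larger than chance alone would plausibly produce.
t_stat, p_value = stats.ttest_rel(time1, time2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value suggests the change is unlikely to be chance; a
# non-significant result from a small group may simply mean there were too
# few people to distinguish real change from random fluctuation.
```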
Comparing change in your service to other services
Drawing from the data that members have shared through CORC, we have developed a set of CORC Comparators for commonly used outcome measures. These tools provide the change reported by children and young people in the CORC dataset for each measure, allowing you to compare this with your own organisation’s data. This can help you to think about whether the outcomes in your service are in line with your expectations.
We used an approach that shows both how many people improve, stay the same or get worse, and how much of that improvement or deterioration is 'reliable' (beyond what could be explained by measurement error for that questionnaire).
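One widely used way to define 'reliable' change in this sense is the Reliable Change Index of Jacobson and Truax (1991). The sketch below shows that formulation with illustrative values for a measure's standard deviation and reliability, which will not necessarily match the parameters used in the CORC Comparators.

```python
import math

def reliable_change_index(score1, score2, sd_baseline, reliability):
    """Jacobson & Truax (1991) Reliable Change Index for one person.

    sd_baseline and reliability describe the questionnaire (e.g. taken from
    a reference sample); the values used below are illustrative only.
    """
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * se_measurement ** 2)
    return (score2 - score1) / s_diff

# Hypothetical example: an 8-point drop on a measure with SD 6.5 and
# reliability 0.85. |RCI| > 1.96 is conventionally taken as change beyond
# what measurement error alone would plausibly produce.
rci = reliable_change_index(score1=24, score2=16, sd_baseline=6.5, reliability=0.85)
print(f"RCI = {rci:.2f}, reliable change: {abs(rci) > 1.96}")
```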
CORC Comparators for the following measures can be downloaded from the members area of our website:
- RCADS subscales
- SDQ subscales
- SDQ impact supplement
- Child Outcome Rating Scale (CORS)
- Outcome Rating Scale (ORS)
- Clinical Outcomes in Routine Evaluation 10 (CORE-10)
- Young Person’s CORE (YP-CORE)
- Generalised Anxiety Disorder 7 (GAD-7)
- Patient Health Questionnaire 9 (PHQ-9)
If you are a member, please sign in here.