The Mölnlycke Health Care blog
Grading the evidence - Practical tips on grading your own
In this third post in a series on evidence-based medicine, Hayley Hughes, a member of the clinical research team at Mölnlycke Health Care, will look at how she critically appraises the different types of evidence and grades them accordingly.
In my role as Clinical Evaluation Specialist for surgical products, I review the different types of published evidence available in order to assess the ongoing safety and efficacy of our medical devices. I also look for publications that highlight potential issues found in our own and equivalent products, so that we can mitigate the risk of those issues affecting our customers. I then grade the evidence to establish whether some articles provide a higher level of evidence than others.
In this blog post, I would like to share some specific questions as practical tips on how to critically analyze articles across the entire hierarchy of evidence.
Where to start?
I use a template containing specific questions that are asked of each article in order to minimize bias. Perhaps you can create a personal template to ensure you cover all the questions relevant to your own situation and do not have to start from scratch with each paper that you assess.
It is always a good idea to add a comments box to record anything that either was well carried out or caused you concern while reading the article, e.g. something particularly relevant to your own way of working or, conversely, something that you found strange or irrelevant to it.
It is good practice first to establish which type of study is being evaluated, so that the appropriate questions, such as those below, can be asked of it:
- Is it a randomized controlled trial, cohort study, case-control study, case series, or another design?
- Was an explicit description of the intervention provided?
- Was the study carried out on your device or an equivalent one?
- Did the study test equivalent devices/interventions for comparison?
- Did the study include a placebo or control group?
- Is the device/intervention suitable for your particular needs?
- Is the method of intervention carried out as it was intended to be?
- Were trial participants selected prospectively or retrospectively?
- Were the subjects truly randomized, or allocated by day of the week or type of surgery?
- Were the inclusion and exclusion criteria specified?
- Were the trial participants picked from a representative sample selected from a relevant population?
- Was the disease state of the trial participants reliably assessed and validated at baseline and throughout the study?
- Was assignment to the treatment or control groups random?
- Were all randomized participants included in the analysis?
- Was the number of subjects required for significance given at the beginning of the study?
- Was the treatment allocation concealed from those responsible for recruiting subjects?
- Were investigators and/or microbiologists blinded to the treatment allocation?
- Were the care providers blinded?
- Were the subjects blinded?
- Did the study adequately control for potential confounding factors in the design or analysis?
- Were the techniques/setting adequately described?
- Was patient follow-up long enough for outcomes to be thoroughly documented?
- Were any exclusions related to the intervention or other factors?
- Were drop-out rates/reasons similar across the intervention groups?
- Are the results of each arm suitably documented?
- If comparisons of sub-series were made, were there sufficient descriptions of the series and reasons given?
- Was an appropriate statistical analysis used?
- Are there any limitations documented in the discussion?
- Are all the results documented in the conclusion or do some results take precedence?
- Is there any conflict of interest stated by the author(s)?
- Was the study funded in part or in whole by any biased organization?
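A personal template like the one described above can be sketched in a few lines of code. This is only an illustration of the idea, not a real tool; the class and function names are my own, and the questions are taken from the checklist above.

```python
# A minimal sketch of a personal appraisal template: the same questions are
# asked of every article, each with room for a score and a free-text comment.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AppraisalItem:
    question: str                 # one checklist question, e.g. on randomization
    score: Optional[int] = None   # filled in during appraisal (1 best, 3 worst)
    comment: str = ""             # comments box: anything well done or concerning


def make_template(questions: List[str]) -> List[AppraisalItem]:
    """Build a fresh, unscored checklist so every article gets the same questions."""
    return [AppraisalItem(q) for q in questions]


template = make_template([
    "Was an explicit description of the intervention provided?",
    "Were the inclusion and exclusion criteria specified?",
    "Was assignment to the treatment or control groups random?",
])
```

Because the template is rebuilt for each article, earlier answers and comments cannot leak into the next appraisal.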
Grading the evidence
I personally use a scale of 1-3, where:
- 1 is given for a high-quality attribute
- 2 is given for minor deficiencies
- 3 is given for major deficiencies
After each answer has been allocated a score (along with comments if necessary), the values are added together to give a score for the whole article, and the lower the score you give it, the higher graded the evidence (think of first class as being better than second or third class!). In some cases, I have even discarded the data completely because it scored a 3 in a certain area, e.g. the measurement taken was too vague and could be explained by external variables that had not been factored in.
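The scoring arithmetic above is simple enough to sketch. This is an illustrative example only: the function name is my own, and treating certain questions as "critical" discard triggers is one possible reading of the discard rule, shown here as an optional flag.

```python
# A minimal sketch of the grading step: each answer scores 1 (high quality),
# 2 (minor deficiencies) or 3 (major deficiencies); the scores are summed,
# and a LOWER total means HIGHER-graded evidence.
from typing import List, Optional, Sequence


def grade_article(scores: List[int], critical: Sequence[int] = ()) -> Optional[int]:
    """Return the article's total score, or None if it should be discarded.

    `critical` is an assumed extra: indices of questions where a score of 3
    (e.g. a measurement too vague to trust) discards the data outright.
    """
    for idx in critical:
        if scores[idx] == 3:
            return None  # discard: major deficiency in a critical area
    return sum(scores)


paper_a = grade_article([1, 1, 2, 1])  # mostly high quality, so a low total
paper_b = grade_article([2, 3, 2, 2])  # more deficiencies, so a higher total
```

With these numbers, paper_a ends up with the lower total and therefore the higher grade, which matches the "first class beats third class" intuition.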
Finally, I make a list of all the papers I have included in an evaluation and write down reasons for those I have excluded. Then taking into consideration the grades I have given each paper, along with any pertinent comments, I make the decision as to whether it supports or contradicts the performance or safety of an intervention, and write the report accordingly.
Now what are you waiting for…?
Go make your own template so you can critically analyze some articles yourself and impress all your colleagues!
In the next post, we will consider how posing a different research question may provide a more positive outcome for your intervention of choice.
Previous posts in this series:
Grading the evidence
Grading the evidence - Efficacy vs effectiveness