Selecting evidence of known quality to provide insights 

This section is about measuring the strength of available evidence (appraisal) and what to do when the evidence is uncertain. Even if relevant clinical guidelines are available, you will still need to appraise their quality.


Here you will learn:

  • How to appraise a research study
  • The place of checklists and where to find them
  • The influence of study type on appraisal
  • How to practise when there is no evidence or the evidence is contested.



Getting started

Following your search for literature, the next step is to apply clearly defined eligibility criteria to decide which articles you will keep. This helps to keep your review focused and the workload manageable.


Some of the criteria will come from your PICO (or similar) elements as this ensures article relevance to your question. For example, population characteristics might include ‘older people with dementia’. In this case, younger people with dementia might be excluded. Your criteria might also include or exclude certain types of publications, languages, recency, and contexts. Once you have selected articles that meet your criteria you are ready to appraise their quality.
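As an illustration only, screening against eligibility criteria can be thought of as applying the same filter to every article record. The minimal Python sketch below uses hypothetical articles and invented field names; it is not a real screening tool.

  # A minimal sketch of screening articles against eligibility criteria.
  # All article records, field names, and criteria are hypothetical.
  articles = [
      {"title": "Study A", "population": "older people with dementia", "language": "en", "year": 2019},
      {"title": "Study B", "population": "younger people with dementia", "language": "en", "year": 2021},
      {"title": "Study C", "population": "older people with dementia", "language": "de", "year": 2016},
  ]

  def meets_criteria(article):
      # Inclusion: older people with dementia, English, published 2015 or later
      return (
          article["population"] == "older people with dementia"
          and article["language"] == "en"
          and article["year"] >= 2015
      )

  included = [a["title"] for a in articles if meets_criteria(a)]
  print(included)  # ['Study A']: the only record meeting all criteria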


Video from CEP Guidelines

All studies, including meta-analyses and systematic reviews, need to be appraised to ensure that they are of high quality. This might begin with the study design or methodology, but it also includes consistency of or confidence in the findings, generalisability, and applicability of the evidence.


An introduction to Critical Appraisal

Video from The University of Queensland Library


Appraisal of quantitative research helps you to determine evidence validity (closeness to the truth), impact (size of the effect), and applicability (usefulness in your clinical practice). [1]


Appraisal of qualitative research helps you to assess the appropriateness of the approach taken, the sampling strategy and data collection methods, and whether the conclusions are justified by the results. [2]


At this step it is important that you understand different study types, the evidence hierarchy and what it means, and the different ways findings should be reported to enable appraisal (what information should be available). This will help you to select and appraise the most robust evidence. There are many online courses available about appraising literature.


Remember when appraising a study for use in your practice, the study should: [3]

  • address a clearly focused question relevant to your care issue
  • have used valid methods to address this question
  • have clear and clinically meaningful findings
  • be applicable to your patient, population, or context of care.



Appraisal checklists

There are many tools or checklists available to help you assess the quality of research evidence. Using checklists helps to standardise the approach across different articles and improves objectivity during appraisal. They also help you to summarise your findings, including relevance to your patient.
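As an illustration only, recording the same checklist answers for every article keeps the appraisal consistent and easy to summarise. The minimal sketch below reuses the four criteria listed earlier on this page as checklist items; the studies and answers are invented, and this is not a validated tool.

  # A minimal sketch of recording checklist answers consistently across articles.
  # Checklist items and answers are illustrative, not a validated tool.
  checklist = [
      "Clearly focused question?",
      "Valid methods?",
      "Clear, clinically meaningful findings?",
      "Applicable to my patient or context?",
  ]

  appraisals = {
      "Study A": ["yes", "yes", "unclear", "yes"],
      "Study B": ["yes", "no", "yes", "no"],
  }

  for study, answers in appraisals.items():
      summary = "; ".join(f"{item} {answer}" for item, answer in zip(checklist, answers))
      print(f"{study}: {summary}")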


The most appropriate appraisal checklist to use will depend on the study type. For example, AGREE II is used for assessing the quality of guidelines, AMSTAR for systematic reviews, and AACODS for grey literature. If no checklist is available, then reporting guidelines for that study type can be used to informally assess study quality, since their purpose is to ensure that sufficient information is provided to enable assessment. CareSearch Evidence Tools provides access to appraisal checklists and reporting guidelines for some of the study types relevant in palliative care. The EQUATOR network has more.



Cochrane has developed a video series Critical Appraisal Modules 2019 on appraisal of different study types.


The use of checklists helps you to take a systematic approach to appraisal that is consistent across all of the studies you examine. As a general introduction to appraisal and what is important, the NHMRC suggests the following examples of key quality criteria according to study type (this list is supplemented with criteria from other sources as indicated): [4]



Randomised controlled trials
  • Was the study double blinded?
  • Was allocation to treatment groups concealed from those responsible for recruiting the subjects?
  • Were all randomised participants included in the analysis?


Cohort studies
  • How were subjects selected for the ‘new intervention’?
  • How were subjects selected for the comparison or control group?
  • Does the study adequately control for demographic characteristics, clinical features and other potential confounding variables in the design or analysis?
  • Was the measurement of outcomes unbiased (i.e. blinded to treatment group and comparable across groups)?
  • Was follow-up long enough for outcomes to occur?
  • Was follow-up complete and were there exclusions from the analysis?

Case-control studies
  • How were cases defined and selected?
  • How were controls defined and selected?
  • Does the study adequately control for demographic characteristics and important potential confounders in the design or analysis?
  • Was measurement of exposure to the factor of interest (e.g. the new intervention) adequate and kept blinded to case/control status?
  • Were all selected subjects included in the analysis?


Systematic reviews
  • Was an adequate search strategy used?
  • Were the inclusion criteria appropriate and applied in an unbiased way?
  • Was a quality assessment of included studies undertaken?
  • Were the characteristics and results of the individual studies appropriately summarised?
  • Were the methods for pooling the data appropriate?
  • Were sources of heterogeneity explored?

For appraisal of guidelines consider: [5]
  • Was the guideline systematically developed?
  • Are the recommendations clearly linked to the evidence?
  • Are all guideline developers named?
  • Is a statement of goals provided?
  • Is the guideline organised for ease of use?
  • Are recommendations made and clearly identified?


Appraisal of qualitative studies: [6]

Use of appraisal checklists for qualitative studies is contested. However, appraisal is a necessary step of EBP and there are checklists such as the CASP Qualitative Studies Checklist (456kb pdf) available. Qualitative studies might use approaches including observation, interviews, focus groups and surveys. Some concepts that you might refer to when assessing qualitative studies include:

  • Transferability: is sufficient study information provided for you to make connections between the study’s data and broader community settings?
  • Credibility: to what extent is the research account believable and appropriate, particularly in relation to what participants say and the interpretations made by the researcher?
  • Reflexivity: do the researchers examine and explain how they have influenced the research project including question development, sampling, and data collection, analysis and interpretation?
  • Transparency: how well are the research sampling strategies, data collection and analysis described? Where decisions affecting the study are made are these adequately explained and justified?

Part of the role of appraisal is to help users feel more certain about the findings. Bias can introduce uncertainty into the interpretation of findings.


Bias has been defined in research as ‘any process at any stage of inference that tends to produce results or conclusions that differ systematically from the truth’. [7] In other words, any process during a research project that leads to results or conclusions that are not true or that are uncertain.



Some of the common sources of bias relevant to palliative care research are defined below; the Oxford University CEBM Catalogue of Bias lists more: [8]

  • Selection bias: systematic differences between the baseline characteristics of the groups that are compared. Randomisation of participants to study groups can be used to overcome this.
  • Performance bias: systematic differences between groups in the care that is provided, or in exposure to factors other than the interventions of interest. Blinding of participants and/or study staff can help to overcome this.
  • Attrition bias: systematic differences between groups in withdrawals from a study. Withdrawals lead to incomplete outcome data. In palliative care, where the person is expected to die, this is a common issue.
  • Detection bias: systematic differences between groups in how outcomes are determined. Blinding of outcome assessors can help.
  • Reporting bias: systematic differences between reported and unreported findings. Reporting and discussing both statistically significant and non-significant differences reduces this type of bias.

For an introduction to some of the common statistical approaches used in quantitative research to assess certainty visit the CareSearch page: Core Concepts in Assessing Statistics. You can also read the Brief Overview of Statistics from Winters et al. 2010.

As noted above, the use of appraisal checklists for qualitative studies is contested, and while checklists such as the CASP Qualitative Studies Checklist (456kb pdf) are available, newer approaches are emerging.


In assessing qualitative research, the focus is on whether the findings are a reasonable representation of the phenomenon of interest. So, rather than ‘bias’ you might look at the level of confidence in findings as done in the GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) measure for assessing syntheses of qualitative research [9], or study ‘rigour’ as done by the Cochrane group.


While there are some tools available, formal appraisal of qualitative research is relatively new, with tool development ongoing. Here we look at approaches taken by the Cochrane and GRADE groups.


Cochrane

The Cochrane handbook provides examples of study domains that can impact on qualitative study rigour: [10]

  • Clear aims and research question
  • Congruence between the research aims/question and research design/method(s)
  • Rigour of case and or participant identification, sampling and data collection to address the question
  • Appropriate application of the method
  • Richness/conceptual depth of findings
  • Exploration of deviant cases and alternative explanations
  • Reflexivity of the researchers (where they examine their own potential influence on the study)

GRADE

According to the GRADE-CERQual appraisal system, confidence in findings from reviews or syntheses of qualitative research will be influenced by four key components: [11]

  • Methodological limitations,
  • Coherence,
  • Adequacy of data, and
  • Relevance.

Your initial assumption should be that there are no concerns; appraisal of the report is then used to identify where this may not be the case.


Methodological limitations refer to how well the study was done. While the appraisal of methodology in qualitative research also continues to be debated, GRADE-CERQual suggests this step is equivalent to examining ‘bias’ in quantitative research. [11] Consideration of some of the following for data collection relating to each review finding is suggested:

  • Privacy/sensitivity issues
  • Risk to participants
  • Social desirability
  • Presence of observation that might affect ‘authentic’ behaviour



Determining the potential benefits versus potential harms of research outcomes helps you to assess suitability for an individual patient. When comparing two groups, statistical significance tells you whether an effect exists (certainty of outcome) but not whether it is important. This is where the following concepts are useful.


Quantitative research

Van den Block highlights two approaches that are often reported and of relevance when appraising quantitative research for use with an individual patient in the palliative care context. [12]


Standardised mean difference

In palliative care, different studies examining the same issue often measure the same outcomes in different ways, e.g. using different scales. Combining these heterogeneous studies to arrive at an overall measure of outcome can result in a high degree of variance that often lacks significance. Small study sizes will also affect significance.


The standardised mean difference or SMD (often referred to as the ‘effect size’) can be used to standardise the studies to a uniform scale before they are combined. It quantifies the difference between two groups and is reported as the SMD or effect size. The other advantage of this approach is that the SMD does not depend on sample size (p-values do), so small studies as often reported in palliative care can be compared. To help you interpret effect sizes, use the classification defined by Cohen (known as Cohen’s d), with a worked sketch after the list below: [13]


  • An effect size of around 0.2 is small
  • An effect size of around 0.5 is moderate
  • An effect size of 0.8 or greater is large.
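To make the calculation concrete, here is a minimal Python sketch of Cohen’s d: the difference in group means divided by the pooled standard deviation. The pain scores are hypothetical values invented for this example.

  # A minimal sketch of Cohen's d (standardised mean difference) for two
  # independent groups. The pain scores (0-10) below are hypothetical.
  import statistics

  def cohens_d(group_a, group_b):
      # Difference in means divided by the pooled standard deviation
      n_a, n_b = len(group_a), len(group_b)
      var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
      var_b = statistics.variance(group_b)
      pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
      return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

  intervention = [3, 4, 2, 5, 3, 4]  # hypothetical pain scores after treatment
  control = [5, 6, 5, 7, 6, 5]
  print(round(cohens_d(intervention, control), 2))  # about -2.31

In this toy example d is about -2.3: the intervention group’s scores are much lower, which would count as a large effect on Cohen’s classification.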


Number needed to treat (NNT) to get benefit

NNT is the number of patients that need to be treated, for a duration equal to the study period, in order for one additional person to experience benefit. [12] You will also see the number needed to harm (NNH) reported. Ideally, NNT should be small and NNH large.
These measures help you to balance the benefits and risks of interventions for patients; a worked example follows below. The video from NCCMT helps you to understand NNT and how to derive it from the often reported statistic of absolute risk reduction.
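To make the arithmetic concrete, NNT is the reciprocal of the absolute risk reduction (ARR), the difference in event rates between the control and treatment groups. The minimal sketch below uses hypothetical event rates invented for this example.

  # A minimal sketch of deriving NNT from the absolute risk reduction (ARR).
  # The event rates below are hypothetical illustrative values.
  control_event_rate = 0.30    # 30% of control patients experience the outcome
  treatment_event_rate = 0.20  # 20% of treated patients experience the outcome

  arr = control_event_rate - treatment_event_rate  # ARR = 0.10
  nnt = 1 / arr                                    # NNT = 1 / ARR = 10
  print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")
  # Treat 10 patients for the study duration for one extra patient to benefit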




Sometimes the evidence you find is weak or varies across studies. Other times there may not be any evidence available. While evidence is just one part of EBP and decision-making, knowing how to respond to this situation is important.


No evidence

Evidence-based practice is based on the best available evidence relevant to the clinical issue.

When there is no research evidence, the evidence hierarchy directs us to expert opinion. While a panel of experts will provide a more objective set of expertise and opinions, you may need to rely on a smaller number of local experts and others with experience.


Inconclusive evidence

Often clinical trial outcomes fail to reach statistical significance. This can be because the study did not have sufficient power to support a conclusion. Participants withdrawing from the study or difficulty recruiting enough participants are common reasons for this in palliative care. Pooling of studies in a systematic review or meta-analysis can provide greater certainty in guiding practice; a sketch of the weighting idea behind pooling follows this paragraph. However, for meta-analysis the studies need to be similar: if there is too much variation or heterogeneity, then the studies cannot be combined. Standardised research protocols, including approaches to the assessment of outcomes, are one way of facilitating future pooling of small studies.
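Here is a minimal sketch of fixed-effect inverse-variance pooling, one common way study results are weighted and combined in a meta-analysis. The effect estimates and standard errors are hypothetical; real reviews use dedicated software and must also assess heterogeneity.

  # A minimal sketch of fixed-effect inverse-variance pooling.
  # Each study is weighted by its precision (1 / variance); all values
  # below are hypothetical.
  studies = [
      {"effect": -0.40, "se": 0.25},  # small study, wide uncertainty
      {"effect": -0.20, "se": 0.10},  # larger study, narrower uncertainty
      {"effect": -0.30, "se": 0.15},
  ]

  weights = [1 / (s["se"] ** 2) for s in studies]
  pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
  pooled_se = (1 / sum(weights)) ** 0.5
  print(f"Pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")  # -0.25 (SE 0.08)

Note how the larger, more precise study dominates the pooled estimate.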

It is important to remember that ‘no evidence of effect does not provide evidence of no effect.’ Failure to reach significance does not always mean two treatments are equivalent in effect. [14]


Conflicting evidence

Sometimes outcomes from a higher-order study such as a systematic review or meta-analysis disagree with a large RCT. [14] Where the RCT has been appraised as high quality, some issues to explore include:

  • Are the review and RCT examining the same question in terms of PICO?
  • Was the RCT adequately powered to answer the question?
  • What is the methodological quality of the review?
  • What is the methodological quality of the studies included in the review?
  • If small studies have been included, have the contributions of each study been weighted (small RCTs can be a source of bias)?
  • What level of between-study heterogeneity has been reported?
  • Does the review conduct sensitivity analyses to explore the influence of between-study heterogeneity, e.g. outcomes for published versus unpublished data, small versus large cohort studies, or high quality versus low quality studies?

Practising in uncertainty

Many issues in palliative care involve complexity and uncertainty, such as prognostication, multimorbidity, or family and cultural contexts. The best available research may not relate to your specific question about your clinical practice or patient needs. The roles of patient preferences and clinical judgement are critical alongside the guidance from research evidence. Practising in uncertainty can be challenging. To assist in managing the issues, the clinician can look at uncertainty from four perspectives: [15]

  • uncertainty about the evidence (e.g. what do the guidelines show?),
  • clarifying the narrative (what is the patient’s story?),
  • using case-based reasoning (what is best to do in the circumstances?), and
  • drawing on multi-professional clinical expertise (how best to communicate and collaborate?).

A related approach was examined in a systematic review of how to manage clinical uncertainty in older people, which developed a logic model of person-centred, evidence-based tools. The logic model addresses clinical uncertainty by applying evidence-based tools to optimise person-centred management and improve patient outcomes. [16]


Such strategies can support health professionals in balancing the patient's needs and their own clinical judgement (and clinical team expertise) alongside current areas of practice uncertainty.

  1. Straus S, Glasziou P, Richardson WS, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 5th ed. London: Elsevier; 2018.
  2. Greenhalgh T, Taylor R. Papers that go beyond numbers (qualitative research). BMJ. 1997 Sep 20;315(7110):740-3. doi: 10.1136/bmj.315.7110.740.
  3. Centre for Evidence-Based Medicine. Critical Appraisal Tools [Internet]. Oxford (UK): University of Oxford; 2021. [cited 2021 Nov 16].
  4. National Health and Medical Research Council (NHMRC). How to use the evidence: assessment and application of scientific evidence. Canberra: NHMRC; 2000.
  5. Semlitsch T, Blank WA, Kopp IB, Siering U, Siebenhofer A. Evaluating Guidelines: A Review of Key Quality Criteria. Dtsch Arztebl Int. 2015 Jul 6;112(27-28):471-8. doi: 10.3238/arztebl.2015.0471.
  6. Williams V, Boylan AM, Nunan D. Critical appraisal of qualitative research: necessity, partialities and the issue of bias. BMJ Evid Based Med. 2020 Feb;25(1):9-11. doi: 10.1136/bmjebm-2018-111132. Epub 2019 Mar 12.
  7. Sackett DL. Bias in analytic research. J Chronic Dis. 1979;32(1-2):51-63. doi: 10.1016/0021-9681(79)90012-2.
  8. Catalogue of Bias Collaboration, Bankhead C, Aronson JK, Nunan D. Attrition bias. In: Catalogue Of Bias. Oxford, UK: University of Oxford; 2017.
  9. Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018 Jan 25;13(Suppl 1):2. doi: 10.1186/s13012-017-0688-3.
  10. Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, et al. Chapter 21: Qualitative evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. editors. Cochrane Handbook for Systematic Reviews of Interventions. version 6.3. London: Cochrane; 2022.
  11. Munthe-Kaas H, Bohren MA, Glenton C, Lewin S, Noyes J, Tunçalp Ö, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings-paper 3: how to assess methodological limitations. Implement Sci. 2018 Jan 25;13(Suppl 1):9. doi: 10.1186/s13012-017-0690-9.
  12. Van den Block L, Vandevoorde J. Evidence-Based Practice in Palliative Care. In: MacLeod R, Van den Block L. editors. Textbook of Palliative Care. Cham, CH: Springer International Publishing; 2019.
  13. Cohen J. A power primer. Psychol Bull. 1992 Jul;112(1):155-9. doi: 10.1037//0033-2909.112.1.155.
  14. Sylvester RJ, Canfield SE, Lam TB, Marconi L, MacLennan S, Yuan Y, et al. Conflict of Evidence: Resolving Discrepancies When Findings from Randomized Controlled Trials and Meta-analyses Disagree. Eur Urol. 2017 May;71(5):811-819. doi: 10.1016/j.eururo.2016.11.023. Epub 2016 Nov 30.
  15. Engebretsen E, Heggen K, Wieringa S, Greenhalgh T. Uncertainty and objectivity in clinical decision making: a clinical case in emergency medicine. Med Health Care Philos. 2016 Dec;19(4):595-603. doi: 10.1007/s11019-016-9714-5.
  16. Ellis-Smith C, Tunnard I, Dawkins M, Gao W, Higginson IJ, Evans CJ; SPACE. Managing clinical uncertainty in older people towards the end of life: a systematic review of person-centred tools. BMC Palliat Care. 2021 Oct 22;20(1):168. doi: 10.1186/s12904-021-00845-9.

Page created 28 March 2022