Following your search for literature, the next step is to apply clearly defined eligibility criteria to decide which articles you will keep. This keeps your review focused and the workload manageable.
Some of the criteria will come from your PICO (or similar) elements as this ensures article relevance to your question. For example, population characteristics might include ‘older people with dementia’. In this case, younger people with dementia might be excluded. Your criteria might also include or exclude certain types of publications, languages, recency, and contexts. Once you have selected articles that meet your criteria you are ready to appraise their quality.
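The screening step described above can be sketched in code. The following is a minimal illustration only: the record fields, criteria, and example titles are hypothetical, not a prescribed schema.

```python
# Hypothetical search results; real screening would use exported database records.
records = [
    {"title": "Pain management in older people with dementia",
     "population": "older people with dementia", "language": "English", "year": 2021},
    {"title": "Dementia care in young-onset populations",
     "population": "younger people with dementia", "language": "English", "year": 2019},
    {"title": "Nursing home dementia care (non-English report)",
     "population": "older people with dementia", "language": "German", "year": 2020},
]

def eligible(record):
    """Apply example eligibility criteria drawn from PICO elements."""
    return (record["population"] == "older people with dementia"  # population criterion
            and record["language"] == "English"                   # language criterion
            and record["year"] >= 2015)                           # recency criterion

included = [r for r in records if eligible(r)]
excluded = [r for r in records if not eligible(r)]
```

In practice the same logic is usually applied by hand (or in screening software) rather than scripted, but making each criterion explicit in this way is what allows two reviewers to reach the same include/exclude decisions.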
All studies, including meta-analyses and systematic reviews, need to be reviewed to ensure that they are of high quality. This might begin with the study design or methodology, but it also includes consistency of, or confidence in, findings, generalisability, and applicability of the evidence.
Visit the CiAP NSW online course: Clinical Information Portal
Video from The University of Queensland Library
Appraisal of quantitative research helps you to determine evidence validity (closeness to the truth), impact (size of the effect), and applicability (usefulness in your clinical practice). 
Appraisal of qualitative research helps us assess the appropriateness of the approach taken, sampling strategy and data collection methods and whether the conclusions are justified by the results. 
At this step it is important that you understand different study types, the evidence hierarchy and what it means, and the different ways findings should be reported to enable appraisal (what information should be available). This will help you to select and appraise the most robust evidence. There are many online courses available about appraising literature.
Remember, when appraising a study for use in your practice, the study should: 
Visit BMJ for a series of papers on reading and interpreting a research paper.
There are many tools or checklists available to help you assess the quality of research evidence. Using checklists helps to standardise the approach across different articles and improves objectivity when appraising research articles. They also help you to summarise your findings including relevance to your patient.
The most appropriate appraisal checklist to use will depend on the study type. For example, AGREE II is used for assessing the quality of guidelines, AMSTAR for systematic reviews, and AACODS for grey literature. If no checklist is available, then the reporting guidelines for that study type might be used to informally assess study quality, since their purpose is to ensure that sufficient information is provided to enable assessment. CareSearch Evidence Tools provides access to appraisal checklists and reporting guidelines for some of the different study types relevant in palliative care. The EQUATOR Network has more.
Visit CareSearch Evidence Tools
Cochrane has developed a video series Critical Appraisal Modules 2019 on appraisal of different study types.
The use of checklists helps you to take a systematic approach to appraisal that is consistent across all of the studies you examine. However, as a general introduction to appraisal and what is important, the NHMRC suggests the following examples of key quality criteria according to study type (this list is supplemented with criteria from other sources as indicated). 
Use of appraisal checklists for qualitative studies is contested. However, appraisal is a necessary step of EBP and there are checklists such as CASP-qualitative available. Qualitative studies might use approaches including observation, interview, focus groups and surveys. Some concepts that you might refer to when assessing qualitative studies include:
Part of the role of appraisal is to help users feel more certain about the findings. Bias can introduce uncertainty into the interpretation of findings.
Bias has been defined in research as ‘any process at any stage of inference that tends to produce results or conclusions that differ systematically from the truth’. In other words, any process during a research project that leads to results or conclusions that are untrue or uncertain.
Visit Cochrane collection training on the risk-of-bias tool for RCTs
Some of the common sources of bias relevant to palliative care research are defined below; the Oxford University CEBM Catalogue of Bias lists more: 
Visit Oxford University CEBM Catalogue of Bias
For an introduction to some of the common statistical approaches used in quantitative research to assess certainty visit the CareSearch page:
Use of appraisal checklists for qualitative studies is contested. However, appraisal is a necessary step of EBP, and while checklists such as CASP-qualitative are available, newer approaches are emerging.
In assessing qualitative research, the focus is on whether the findings are a reasonable representation of the phenomenon of interest. So, rather than ‘bias’, you might look at the level of confidence in findings, as done in the GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) measure for assessing syntheses of qualitative research, or at study ‘rigour’, as done by the Cochrane group.
Watch - Cochrane collection training on GRADE-CERQual
While there are some tools available, formal appraisal of qualitative research is relatively new with tool development ongoing. Here we look at approaches taken by the Cochrane and GRADE groups.
The Cochrane handbook provides examples of study domains that can impact on qualitative study rigour: 
According to the GRADE-CERQual appraisal system, confidence in findings from reviews or syntheses of qualitative research will be influenced by four key components: 
Your initial assumption should be that there are no concerns, with appraisal of the report then used to identify where this may not be the case.
Methodology refers to how well the study is done. While the appraisal of methodology in qualitative research also continues to be debated, GRADE-CERQual suggests this step is equivalent to examining ‘bias’ in quantitative research. Consideration of some of the following for data collection relating to each review finding is suggested:
Determining the potential benefits versus potential harms of research outcomes helps you to assess suitability for an individual patient. When comparing two groups, statistical significance tells you whether an observed effect is unlikely to be due to chance (certainty of outcome) but not whether it is important. This is where the following concepts are useful.
Van den Block highlights two approaches that are often reported and of relevance when appraising quantitative research for use with an individual patient in the palliative care context. 
In palliative care it is often found that different studies examining the same issue measure the same outcomes but in different ways, e.g. using different scales. Combining these heterogeneous studies to arrive at an overall measure of outcome can result in a high degree of variance that often lacks significance. Small study sizes will also impact on significance.
The standardised mean difference or SMD (often referred to as the ‘effect size’) can be used to standardise the studies to a uniform scale before they are combined. It quantifies the difference between two groups and is reported as the SMD or effect size. The other advantage of this approach is that the SMD, unlike a p-value, does not depend on sample size, so the small studies often reported in palliative care can be compared.
To help you interpret effect sizes, use the classification defined by Cohen for his d statistic (known as Cohen’s d), which conventionally labels an effect size of around 0.2 as small, 0.5 as medium, and 0.8 or more as large. 
Watch NCCMT Making sense of a standardised mean difference
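To make the calculation concrete, the sketch below computes Cohen’s d (the standardised mean difference with a pooled standard deviation) and applies Cohen’s conventional thresholds. The outcome scores are invented for illustration; they are not from any study.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pooled (weighted) variance across both groups
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

def classify(d):
    """Cohen's conventional thresholds: 0.2 small, 0.5 medium, 0.8 large."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical pain scores (0-10 scale) for illustration only
intervention = [3, 4, 2, 5, 3, 4]
control = [5, 6, 4, 7, 5, 6]
d = cohens_d(intervention, control)
```

Because d is expressed in standard-deviation units rather than scale points, the same calculation can be applied to studies that measured pain on different instruments, which is what allows them to be pooled.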
The number needed to treat (NNT) is the number of patients who need to be treated for a duration equal to the study period in order for one additional person to experience benefit. You will also see the number needed to harm (NNH) reported. Ideally, NNT should be small and NNH large.
These measures help you to balance the benefits and risks of interventions for patients. The video from NCCMT helps you to understand NNT and how to derive it from the commonly reported absolute risk reduction.
Watch NCCMT Effectiveness of Interventions - Understanding NNT
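The arithmetic behind NNT is simple: it is the reciprocal of the absolute risk reduction (ARR), i.e. the difference in event rates between the control and treatment groups. A minimal sketch, with hypothetical event rates:

```python
def nnt(control_event_rate, treatment_event_rate):
    """NNT = 1 / ARR, where ARR is the absolute risk reduction.

    Rates are proportions between 0 and 1. In practice the result is
    rounded up to a whole number of patients when reported.
    """
    arr = control_event_rate - treatment_event_rate  # absolute risk reduction
    if arr <= 0:
        raise ValueError("No absolute risk reduction: NNT is undefined")
    return 1 / arr

# Hypothetical: 30% of controls vs 20% of treated patients experience the event,
# so ARR = 0.10 and roughly 10 patients must be treated for one to benefit.
value = nnt(0.30, 0.20)
```

The same formula with the signs reversed (treatment rate of harm minus control rate) gives the NNH, which is why a small NNT and a large NNH together indicate a favourable benefit-risk balance.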
Sometimes the evidence you find is weak or varies across studies. Other times there may not be any evidence available. While evidence is just one part of EBP and decision-making, knowing how to respond to this situation is important.
Evidence based practice is based on the best available evidence relevant to the clinical issue.
When there is no research evidence the evidence hierarchy directs us to expert opinion. While a panel of experts will provide a more objective set of expertise and opinions, you may need to rely on a smaller number of local experts and others with experience.
Often clinical trial outcomes fail to reach significance. This can be because the study did not have sufficient power to reach significance or a conclusion. Participants withdrawing from the study or difficulty recruiting enough participants are common reasons for this in palliative care. Pooling of studies in a systematic review or meta-analysis can provide greater confidence in guiding practice. However, for meta-analysis the studies need to be similar; if there is too much variation or heterogeneity, then the studies cannot be combined. Standardised research protocols, including approaches to the assessment of outcomes, are one way of facilitating future pooling of small studies. It is important to remember that ‘no evidence of effect does not provide evidence of no effect.’ Failure to reach significance does not always mean two treatments are equivalent in effect. 
Sometimes outcomes from a higher-order study such as a systematic review or meta-analysis disagree with a large RCT. Where the RCT has been appraised as high quality, some issues to explore include:
Many issues in palliative care involve complexity and uncertainty, such as prognostication, multimorbidity, or family and cultural contexts. The best available research may not relate to your specific question about your clinical practice/patient needs. The role of patient preferences and clinical judgement is critical alongside the guidance from research evidence. Practising in uncertainty can be challenging. To assist in managing the issues, the clinician can look at uncertainty from four perspectives: 
A related approach was examined in a systematic review looking at how to manage clinical uncertainty in older people by developing a logic model of person-centred evidence-based tools. The identified logic model addresses clinical uncertainty by applying evidence-based tools to optimise person-centred management and improve patient outcomes. 
Such strategies can support health professionals to consider how to balance the patient’s need, their clinical judgement (and clinical team expertise) alongside current areas of practice uncertainty.
Page created 28 March 2022