Clinical audit has been defined as a quality improvement strategy that seeks to measure and improve the care and outcomes that patients experience. It is a proven, effective process for measuring quality and driving improvement.
An audit is conducted to evaluate how closely local practice matches best practice and to identify gaps: it is a snapshot of current practice measured against best practice or a target level of performance. This is done by selecting aspects of patient care and evaluating the performance of a service against an agreed set of criteria or standards to answer the following questions:
- What is happening now? (baseline)
- What should be happening? (according to evidence, best practice, other agreed standards, which provide the criteria for the audit)
- How can we improve? (changes and interventions required)
- Have our improvements resulted in a change? (repeated audits as part of the cycle to close the gap).
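The cycle described by these four questions can be illustrated with a minimal sketch. The function names, case counts, and target figure below are hypothetical examples, not drawn from any particular audit toolkit:

```python
# Hypothetical sketch of one pass through the audit cycle: measure baseline
# compliance against an agreed standard, then check whether a re-audit has
# closed the gap. All figures are illustrative only.

def compliance_rate(cases_meeting_standard, total_cases):
    """Percentage of audited cases that met the agreed criterion."""
    return 100.0 * cases_meeting_standard / total_cases

def audit_gap(current_rate, target_rate):
    """How far current practice falls short of the agreed standard."""
    return max(0.0, target_rate - current_rate)

# What is happening now? (baseline audit of, say, 40 case notes)
baseline = compliance_rate(28, 40)
# What should be happening? (the agreed standard for this criterion)
target = 90.0
print(f"Baseline {baseline:.1f}%, gap to target {audit_gap(baseline, target):.1f}%")

# Have our improvements resulted in a change? (re-audit after interventions)
reaudit = compliance_rate(35, 40)
print(f"Re-audit {reaudit:.1f}%, remaining gap {audit_gap(reaudit, target):.1f}%")
```

In practice the criteria, sample sizes, and targets would come from the agreed standards for the audit, and the cycle would be repeated on a schedule until the gap is closed.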
Clinical audit is a fundamental component of maintaining high standards of clinical excellence. It is therefore essential that healthcare professionals at every level understand what clinical audits are, how they should work, how their results contribute to excellent care, and how a working knowledge of clinical audit can advance their careers. Audits provide information about processes and, importantly, outcomes for patients and families. These outcomes need to be monitored continuously through an audit schedule, with results reported to relevant committees such as a Patient Safety and Quality committee.
A Cochrane systematic review of audit and feedback, defined as 'a summary of clinical performance over a specified period of time', shows that audit generally leads to small but potentially important improvements in professional practice. The effectiveness of audit and feedback appears to depend on baseline performance and on how the feedback is provided. A review of the 2012 Cochrane review and a 2017 systematic review of electronic A&F found that the choice of comparator affected quality improvement outcomes, with no single comparator suited to all recipients and contexts. Four suggestions were proposed for choosing comparators that maximise feedback acceptance:
- Step away from benchmarking against the mean and consider tailored performance comparisons. Comparators need to be relevant and comparable while avoiding unachievable benchmarks that are too high for low performers.
- Balance the credibility and actionability of the feedback message. Using trends and multiple comparators is more complex but can assist recipients to gauge whether a low (or high) score is credible.
- Provide performance trends, but not trends alone. While trends support quality improvement cycles, the rate of performance change can be a greater motivator for improvement.
- Encourage feedback recipients to set personal, explicit targets guided by relevant information. Targets not linked to a credible authority or incentives/accreditation/penalties may be ignored as irrelevant. This might be addressed through support from feedback providers to set individual targets based on evidence, data and expert opinion.
Development of audit and feedback (A&F) needs to consider the audit component (how and what data are collected, and whether audit cycles are repeated) and the feedback component (multi-modal methods of providing feedback, the nature of the behaviour change required, and the inclusion of explicit goals and action plans as part of the feedback).
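The first two comparator suggestions above can be sketched in a few lines. This is an illustration of the idea only: the use of the peer median rather than the mean, and the 10-point "reachable step" cap for low performers, are invented here for demonstration and are not prescribed by the reviews:

```python
# Illustrative sketch of tailored comparator selection: benchmark against
# the peer median rather than the mean, and cap the comparator so the gap
# stays achievable for low performers. The cap size is an assumption.
from statistics import median

def tailored_comparator(own_score, peer_scores, max_step=10.0):
    """Pick a relevant, achievable comparator for one feedback recipient."""
    peer_benchmark = median(peer_scores)  # peer median, not the mean
    # Cap the comparator so low performers are not shown an
    # unachievable benchmark far above their current performance.
    return min(peer_benchmark, own_score + max_step)

peers = [55.0, 62.0, 70.0, 88.0, 91.0]
print(tailored_comparator(45.0, peers))  # low performer: capped comparator
print(tailored_comparator(85.0, peers))  # near the top: peer median
```

For recipients already near the top, a static comparator adds little, which is why the third suggestion pairs comparators with performance trends.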
Audit in the Australian context
Many programs and toolkits have been developed within the Australian setting to support A&F in the context of end-of-life care, including:
These audit toolkits include both instructions and templates that can be used to establish local auditing programs.