Real world evidence (RWE) took centre stage at this year’s ISPOR International, with a packed program of sessions covering everything from artificial intelligence (AI)1 to Z-codes.2 There was a noticeable shift from previous years: the overall acceptability of RWE has now been established, and the conversation is moving on to detailed discussions of best practice, as well as cutting-edge methods and technology.
Before the main conference started, we were delighted to attend the pre-conference RWE summit. A clear theme from the summit was that if it’s worth doing RWE, then it’s worth doing it properly. Unsurprisingly, there remains uncertainty about what that means in practice, although regulatory and health technology assessment (HTA) bodies are working hard to firm up their guidance on suitable RWE for their own contexts. Sebastian Schneeweiss, Professor of Medicine and Epidemiology at Harvard Medical School, noted that “everybody wants the most accurate evidence, but some have to make more compromises than others” – a welcome dose of reality!3
Full results from the RCT-DUPLICATE study were published shortly before ISPOR and were discussed at the RWE summit.4 The study found that if you can closely emulate both the study design and the measurement of variables from a randomised controlled trial (RCT), then the results of RWE often match those of the RCT very closely. However, available datasets still vary widely in quality and suitability for decision-making, and this is often the limiting factor in achieving high-quality RWE. Better documentation from data owners on data provenance, curation methods and the accuracy of key variables would help researchers understand the intrinsic quality of a dataset; however, this will always need to be supplemented with a thorough, study-specific assessment. Despite some attendees pleading for more alignment between agencies on issues such as which specific datasets are fit for purpose, there was pushback from the agency representatives. For example, John Concato from the US Food and Drug Administration (FDA) confirmed that the agency will not certify any dataset as “good enough”, because fitness for purpose depends on the research question and context of the individual study, and Laurie Lambert from the Canadian Agency for Drugs and Technologies in Health (CADTH) pointed out that a study can have an excellent design and use high-quality data, yet still not be fit for Canadian decision-making.
With this in mind, another strong focus of the discussion was on keeping thorough documentation of the decisions made throughout the whole process of an RWE study. The framework for any study should be built around envisaging the “perfect” data and methods, then acknowledging (a) how far away you are from that target study, and (b) how much it matters: if the effect size of a study is large and there are only a handful of expected confounders, then more uncertainty in the data may be tolerated.
Pharmaceutical companies are keen to include data on social determinants of health (SDoH) in RWE studies but need better guidance on where to find, and how to use, suitable data. Some datasets extract SDoH from “Z-codes”, which are codes within the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) that capture non-medical factors influencing a patient’s health; however, these codes are not yet used routinely by health care providers. There are a number of publicly available datasets in the US with SDoH, but these only report aggregate data (e.g. at state or zip code level) so researchers should be cautious about using these to infer characteristics of individual patients in a study.2
Methodology for external control arms is advancing quickly, with three new pieces of guidance recently released or soon to be released from the European Medicines Agency (EMA), the FDA and CADTH.5-8
When using data from one country to infer what the results would have been in another country, researchers need to consider the transferability of the data; factors to consider for a treatment effectiveness study include the similarity of treatment patterns, the healthcare system, the prevalence and incidence of the indication, and the patient demographics between the two countries. The UK’s National Institute for Health and Care Excellence (NICE) and other non-US decision makers have shown willingness to accept studies using high-quality US data, provided that the rationale for using that dataset is justified.9
The use of quantitative bias assessment (QBA) can increase confidence in the results of an RWE study. Although the gold standard method is to find data (from published literature or expert opinion) on key sources of bias and explicitly adjust for these in your analyses, a more pragmatic approach can be to conduct a tipping point analysis quantifying the confounding that would be required to “tip” your treatment effect away from being statistically significant.10
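The specific tipping point method used was not described in the sessions summarised here, but one widely used formulation of this idea is the E-value (VanderWeele & Ding), which expresses the minimum strength of unmeasured confounding that would be needed to fully explain away an observed treatment effect. As a minimal illustrative sketch only, assuming the effect is reported as a risk ratio:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (VanderWeele & Ding):
    the minimum strength of association, on the risk ratio scale,
    that an unmeasured confounder would need to have with both the
    treatment and the outcome to fully explain away the observed effect."""
    if rr < 1:
        rr = 1 / rr  # for protective effects, invert the risk ratio first
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical example: observed RR = 1.8 with lower 95% CI bound of 1.2
print(round(e_value(1.8), 2))  # confounding needed to explain away the point estimate
print(round(e_value(1.2), 2))  # confounding needed to shift the CI bound to the null
```

Applying the E-value to the lower confidence limit, rather than the point estimate, corresponds to the tipping point notion above: it quantifies how strong a confounder would have to be before the result was no longer statistically significant.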
References
If you would like any further information on the themes presented above, please do not hesitate to contact Amy Buchanan-Hughes, US Head of Real World Evidence (LinkedIn). Amy Buchanan-Hughes is an employee at Costello Medical. The views/opinions expressed are her own and do not necessarily reflect those of Costello Medical’s clients/affiliated partners.