What did the CDISC Standards Change?

Sometime in the spring of 2008, when I first heard about CDISC, my head started swirling with all the jargon. After some introspection, I realized that very soon pharmaceutical companies all over the world would have almost the same structure for their datasets for standard safety reporting. The novice SDTM learner within me thought: oh boy, if this concept is realized in all its fullness, they surely won’t need a department full of programmers, and I may lose my job and have to consider an alternate career.

It’s been nearly 9 years since that thought, and today I realize those were just fears; my career certainly did not take the downhill turn I feared. What I had not realized was that, back then, much of a clinical programmer’s energy was being wasted going back and forth to produce standard safety datasets (AE, DM, VS, LB, etc.). Quite some time was spent correcting the manual CRFs, and an equal amount of energy went into annotating them. Every trial from the same sponsor had a variety of datasets of the same kind. There was absolutely no assurance that, if programming the demographics analysis dataset for trial X took 5 days, trial Y would also need only 5 days for the same dataset.

CDISC did not just standardize the entire process of analyzing clinical trials; it also lifted the great distress caused by clinical programmers having to sort out these rudimentary processes. Finally, clinical programmers could focus on analysis.

Today, submitting your datasets to the FDA in SDTM format is not just a guideline, it is a mandate, and the ADaM guideline may soon fall into this category as well. CDISC standards have managed to take the stress out of pooled and exploratory analyses and, overall, have sped up the creation of analysis datasets and, subsequently, the tables, figures, and listings.

And how exactly are we using this great amount of time that we have saved by implementing CDISC Standards?

Here is what some of us programmers have increasingly engaged in over the recent past; the rest of us are slowly getting there.

Exploratory analysis: Sponsors are increasingly engaging in various exploratory analyses of their completed trials, primarily as input to their medical affairs teams, to discover new indications for a drug, or simply to monitor ongoing safety and efficacy.

Site monitoring analytics: Traditionally, clinical trials were conducted and monitored through the various sites the sponsor selected for a trial, using a mix of paper-based and computerized monitoring systems. It has since been recognized that maintaining these sites can carry significant overhead costs, especially when patient recruitment is lower than expected. Machine learning and predictive analytics algorithms are now being used to enable real-time decision making, opening a new avenue of site monitoring analytics for programmers working in the clinical trials domain. Site monitoring analytics aims to save costs, conduct trials efficiently, and, overall, improve the quality of data collected from a trial.
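As a toy illustration of the kind of rule that can feed such real-time decisions (not a real predictive model), consider flagging sites whose actual enrollment falls well below plan. The site names, targets, and the 50% threshold here are illustrative assumptions:

```python
# Toy site-monitoring rule: flag sites enrolling well below their planned
# target. A real system would use richer data and predictive models; this
# only shows the shape of such a check.

def flag_underperforming(sites, threshold=0.5):
    """Return site IDs whose enrollment is below `threshold` of their target."""
    return [s["site"] for s in sites if s["enrolled"] < threshold * s["target"]]

sites = [
    {"site": "S001", "target": 40, "enrolled": 35},
    {"site": "S002", "target": 40, "enrolled": 12},
]

flagged = flag_underperforming(sites)  # only S002 falls below half its target
```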

Data anonymization: Since the EU Clinical Trials Regulation was passed in 2014, many trials submitted to the EudraCT database must be made part of a public database. Before trial datasets are posted to this public database, they must be anonymized. This is a new challenge for the clinical programmer.
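A minimal Python sketch of two common anonymization techniques, pseudonymizing identifiers and generalizing dates, might look like the following. The variable names and the salt are illustrative assumptions; real submissions follow a documented anonymization plan:

```python
import hashlib

def pseudonymize(subject_id, salt="study-secret"):
    """Replace a subject ID with a stable, non-reversible pseudonym."""
    digest = hashlib.sha256((salt + subject_id).encode()).hexdigest()
    return "ANON-" + digest[:8]

def generalize_date(iso_date):
    """Truncate an ISO 8601 date to year only, a common generalization step."""
    return iso_date[:4] if iso_date else iso_date

record = {"USUBJID": "XYZ-001-0042", "BRTHDTC": "1975-06-14"}
anon = {
    "USUBJID": pseudonymize(record["USUBJID"]),
    "BRTHDTC": generalize_date(record["BRTHDTC"]),
}
```

Hashing with a salt keeps the pseudonym stable across datasets of the same study while making the original ID unrecoverable from the published data.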

Pharmacovigilance Analytics: This involves analyzing trends in the adverse events (AEs) reported to the pharmacovigilance team, predicting the occurrence of AEs, and measuring the timeliness and efficiency of AE reporting.
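One of those metrics, reporting timeliness, can be sketched as follows. The field names and the 15-day window are illustrative assumptions, not taken from any specific regulation:

```python
from datetime import date

def reporting_lag_days(onset, reported):
    """Days elapsed between AE onset and its report to pharmacovigilance."""
    return (reported - onset).days

# Illustrative AE records (assumed structure, not a real SDTM AE domain).
aes = [
    {"term": "Headache", "onset": date(2017, 3, 1), "reported": date(2017, 3, 4)},
    {"term": "Nausea",   "onset": date(2017, 3, 2), "reported": date(2017, 3, 20)},
]

lags = [reporting_lag_days(ae["onset"], ae["reported"]) for ae in aes]
# Flag events reported later than an assumed 15-day window.
late = [ae["term"] for ae, lag in zip(aes, lags) if lag > 15]
```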

Health Economics and Patient Outcomes: This could easily be called the big data of the clinical space. Increasingly, pharmaceutical companies are investing in health economics and patient outcomes to support better health care decisions. The data here typically comes in large volumes from patient registries, compelling the clinical programmer to write smarter code and use better tools to navigate and analyze it.

Standardization: This is a blanket term covering many activities currently under way, each opening fresh avenues for a clinical programmer, viz.:

  • Creating a macro library to map SDTM datasets (preferably at a project level).
  • Creating a macro library to generate standard tables, figures, and listings (preferably at a project level).
  • Creating edit checks for data managers that would make SDTM datasets compliant with the implementation guide followed.
  • Automating OpenCDISC checks for the correctness of SDTM datasets.
  • Creating define.xml for electronic submissions.
  • Creating tools to review the SDTM datasets that have been mapped.
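In the spirit of the edit checks and review tools listed above, a compliance check can be as simple as verifying required variables in a domain. The required-variable list and record structure here are simplified assumptions; real checks follow the SDTM Implementation Guide and tools such as OpenCDISC (now Pinnacle 21):

```python
# Simplified sketch of an automated SDTM edit check for the DM domain.
# REQUIRED_DM_VARS is an illustrative subset, not the full IG requirement.
REQUIRED_DM_VARS = {"STUDYID", "DOMAIN", "USUBJID", "SUBJID"}

def check_dm(records):
    """Return a list of issue strings found in a DM dataset."""
    issues = []
    for i, rec in enumerate(records):
        # A variable counts as missing if absent or empty.
        missing = REQUIRED_DM_VARS - {k for k, v in rec.items() if v}
        if missing:
            issues.append(f"record {i}: missing {sorted(missing)}")
        if rec.get("DOMAIN") and rec["DOMAIN"] != "DM":
            issues.append(f"record {i}: DOMAIN is {rec['DOMAIN']!r}, expected 'DM'")
    return issues

sample = [
    {"STUDYID": "XYZ", "DOMAIN": "DM", "USUBJID": "XYZ-001", "SUBJID": "001"},
    {"STUDYID": "XYZ", "DOMAIN": "AE", "USUBJID": "XYZ-002", "SUBJID": ""},
]
issues = check_dm(sample)  # the second record fails both checks
```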

The above list is not exhaustive, and challenging days are here to stay. So, you see, life did not stop when SDTM and the other CDISC standards were introduced; it has become more challenging, informative, and wholesome. It has challenged us clinical programmers to look beyond the traditional tables, figures, and listings rigmarole. Fortunately, many of us have taken this challenge head on and upskilled ourselves. There is more analytics to be done, more standardization to be achieved, and more regulations to abide by (as they are introduced in the future). The future is bright, and exciting times lie ahead. So keep learning, growing, and getting closer to being the best clinical programmer you ever wanted to be.


Written by Lovita Fernandes, Principal Programmer at GCE Solutions