How getting statistics right in a clinical trial can lead to prestigious publications
The EPISTOP project has transformed the prevention of epilepsy in newborns. See how we supported it with our scientific approach and clinical data science expertise.
The commitment of the analysts from Transition Technologies Science to the project is extremely important. It is not only the prompt and proper execution of tasks under the consortium agreement, but also their constant, fruitful and inspiring participation in planning and discussions.
The EPISTOP project was a large international clinical trial coordinated by the Children’s Memorial Health Institute (IPCZD) in Warsaw. Alongside IPCZD, the project involved ten clinical centers from Europe and Australia, as well as five research laboratories from Europe and the U.S.
The project has completely changed the way epilepsy is prevented in newborns with tuberous sclerosis: its results have been incorporated into treatment guidelines worldwide and described in journals such as Annals of Neurology, Genetics in Medicine and Epilepsia.
Yet the research was initially rejected by one of the most important journals in the field – Neurology.
In this case study, we’ll show you how well-planned analysis of clinical data allowed researchers to operate freely and contributed to the success and publication of the study.
Medical background
Tuberous sclerosis is a fairly rare disease; one in six thousand children is born with it. It causes benign tumors in various organs: the brain, heart, lungs, kidneys, skin, and the fundus of the eye.
The disease is accompanied by epilepsy in 90 percent of cases. In 70 percent of young patients, epilepsy appears before the age of two, significantly inhibiting mental development.
The severity of the disease often correlates with the timing of the first symptoms: the earlier the onset, the more sensitive and fragile the brain, and the greater the resulting damage. Epilepsy appearing in the first year of life leads to intellectual disability in 82 percent of cases.
- Tuberous sclerosis is accompanied by epilepsy in 90% of cases.
- Epilepsy appears before the age of 2 in 70% of young patients with tuberous sclerosis.
- Epilepsy appearing in the 1st year of life leads to intellectual disabilities in 82% of cases.
The paradox in (not) treating epilepsy
Tuberous sclerosis can be diagnosed very early – it is already possible to detect tumors in the fetus during pregnancy.
However, even in children diagnosed this early, treatment for epilepsy has typically been delayed until the first seizures occur.
After all, how can one treat a condition that hasn’t yet manifested?
We delayed treatment even though we knew that 70 percent of children born with tuberous sclerosis would develop brain-damaging epileptic seizures within 24 months.
Prof. Jóźwiak, MD, PhD, began treating differently
In 2007, Dr. Jóźwiak began conducting regular EEG tests on newborns before any seizures occurred. Seventy percent showed small changes in their EEG in the second or third month of life and had seizures months later. In 2011, a team led by Dr. Jóźwiak began treating 14 young patients with tuberous sclerosis differently: the children had EEGs every four weeks from birth to 24 months of age.
Four children (30%) showed no changes in their EEG and did not develop epilepsy. The remaining ten (70%) did. When changes appeared in the EEG, they were given antiepileptic drugs.
In four of these children, EEG readings normalized over time, and they exhibited no clinical symptoms of epilepsy. In six, seizures occurred but were easier to control.
Epilepsia, one of the most important journals in the field, rejected their article on the subject.
Why? The study did not meet the standards of scientific research – it lacked randomization and a control group.
The first ideas in the course of a study are often worked out in Excel, and that is where initial feasibility is checked. At this first stage the goal is to confirm the researcher’s intuition, and for that purpose the software is sufficient. In the later stages of a study, however, Excel’s very flexibility becomes a liability: it does not permit adequate quality control of the data. When things start to get serious, it makes sense to switch to professional tools.
EPISTOP was an international project funded by the European Commission. It aimed to prove that preventive treatment is effective and to investigate the origins of epilepsy.
Experts from 16 European research centers and clinical hospitals took part, including Tor Vergata University Hospital, Necker-Enfants Malades Hospital, Vrije Universiteit Brussel and Charité – Universitätsmedizin Berlin. The genetic part was coordinated by Prof. David Kwiatkowski from Brigham and Women’s Hospital, Harvard Medical School. Prof. Jóźwiak led the project.
So how did the study go from initial rejection to ultimately being published in the pages of Epilepsia and Annals of Neurology?
Prof. Kotulska-Jóźwiak, MD, PhD, Principal Investigator
We made it possible for researchers to count on their data.
The Statistical Blueprint: Ensuring Success in Clinical Trials
Embarking on a clinical trial? As an academic, you recognize the intricacies involved.
Before we go deeper into the risks and solutions in clinical data analytics, let’s explore how we can partner on your next project.
Stage 1: Initial Research (Needs Assessment & Budget Planning)
First, we’ll engage with you to understand your unique research landscape.
Stage 2: Protocol Development (Methodology Selection)
Next, we’ll co-create a robust protocol that aligns with your vision.
Stage 3: Study Go Live (Data Collection & Management)
As the study launches, we’ll be at your side, ensuring a seamless process.
Stage 4: Data Analysis and Reporting (Analysis & Interpretation)
When it’s time to delve into the data, we’ll bring our expertise to the forefront.
Stage 5: Secondary Data Analysis
Seeking deeper insights? We’ll explore the uncharted territories together.
Stage 6: Publication & Impact
Together, we’ll celebrate the successful publication and far-reaching impact of your project.
Now let’s discuss in detail what risks are associated with each step of the process and how we minimize them.
Stage 1: Initial Research (Needs Assessment & Budget Planning)
Potential Risks: Misguided assumptions about the effects, Budget constraints, Misalignment with research goals, and more.
Our Approach: Together, we’ll leverage diverse strategies to mitigate these risks:
- Real-World Data/Real-World Evidence (RWD/RWE): By analyzing data collected from actual patient experiences and clinical practices (RWD), and the evidence derived from this data (RWE), we can validate assumptions about effects, ensuring that the research is grounded in reality.
- Meta-Analyses: Aggregating results from multiple independent studies provides a broader perspective, reducing the risk of misguided assumptions and aligning the research with existing evidence (see the sketch after this list).
- Qualitative Studies: Engaging with qualitative insights, such as interviews and focus groups, allows for a deeper understanding of the context, ensuring that the research goals are relevant and achievable.
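To make the meta-analysis point concrete, here is a minimal sketch in R using the metafor package. The study names, effect sizes, and sampling variances are invented purely for illustration:

```r
# Illustrative only: all study data below are invented.
library(metafor)

dat <- data.frame(
  study = c("Trial A", "Trial B", "Trial C", "Trial D"),
  yi    = c(-0.35, -0.20, -0.55, -0.10),  # log risk ratio per study
  vi    = c(0.040, 0.055, 0.090, 0.030)   # sampling variance per study
)

# Random-effects model: allows the true effect to vary between studies
res <- rma(yi = yi, vi = vi, data = dat, method = "REML", slab = study)
summary(res)  # pooled estimate plus heterogeneity statistics (tau^2, I^2)
forest(res)   # forest plot of the individual and pooled effects
```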
Stage 2: Protocol Development
Potential Risks: Flawed study design, Inappropriate study size, Methodological inconsistencies, etc.
Our Approach: Together, we’ll employ targeted services to address these challenges:
- Sample Size Estimation (with Clinical Trial Simulations): By carefully estimating the study size and utilizing simulations, we’ll ensure that the design is statistically sound and adequately powered to detect meaningful effects, reducing the risk of an inappropriate study size (see the simulation sketch after this list).
- Statistical Analysis Plan Development: Crafting a comprehensive statistical analysis plan will guide the methodology, minimizing inconsistencies and aligning with best practices.
- Design Optimization: Working closely with you, we’ll review and refine the study design, ensuring that it aligns with your research goals and adheres to academic standards. Utilizing computational methods like optimization, we can compare different options, including adaptive designs, which allow for flexibility and adjustments during the trial, to find the best fit for your project.
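As an illustration of simulation-based sample size estimation, here is a minimal R sketch for a two-arm trial with a binary outcome. The response rates, sample sizes, and significance level are placeholders, not parameters of any real trial:

```r
# Simulation-based power for a two-arm trial with a binary outcome.
set.seed(42)

power_sim <- function(n_per_arm, p_control, p_treated,
                      n_sims = 2000, alpha = 0.05) {
  rejections <- replicate(n_sims, {
    control <- rbinom(1, n_per_arm, p_control)
    treated <- rbinom(1, n_per_arm, p_treated)
    # Two-sample test for equality of proportions
    prop.test(c(treated, control), c(n_per_arm, n_per_arm))$p.value < alpha
  })
  mean(rejections)  # share of simulated trials that detect the effect
}

# How does power grow with the per-arm sample size?
sapply(c(20, 40, 60, 80), power_sim, p_control = 0.7, p_treated = 0.4)
```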
Stage 3: Study Go Live
Potential Risks: Human error, Data inconsistency, Unforeseen obstacles, and more.
Our Approach: Our innovative and adaptive approach includes:
- Utilizing ML-Based Classifiers: By employing machine learning classifiers, we’ll create algorithms that detect challenges early in the data collection process. These classifiers can identify patterns and anomalies before they grow severe, protecting the quality and integrity of the collected data (a toy sketch follows this list). We’ll continuously monitor the study, ready to respond to any challenges that arise, keeping the project on track and aligned with your goals.
- Leveraging Previously Created Tools (Simulators, Optimizers): Should the unexpected occur, we’ll draw on our collection of specialized tools, including simulators and optimizers, to make informed decisions on how to proceed. These tools have been developed to handle various scenarios in clinical trials, optimizing the study’s progress.
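A toy sketch of the classifier idea follows; the features, labels, and coefficients are all invented, and a production system would use richer data and models:

```r
# Toy sketch: score data-entry records by their risk of being erroneous.
set.seed(1)

n <- 500
records <- data.frame(
  value_dev       = abs(rnorm(n)),     # deviation from the patient's history
  days_since_prev = rpois(n, 28),      # gap since the previous visit
  manually_edited = rbinom(n, 1, 0.1)  # was the field edited after entry?
)
# Hypothetical labels: which past records turned out to be entry errors
records$is_error <- rbinom(n, 1, plogis(-3 + 1.5 * records$value_dev +
                                          2 * records$manually_edited))

# Logistic regression as a simple, interpretable classifier
fit <- glm(is_error ~ value_dev + days_since_prev + manually_edited,
           data = records, family = binomial)

# Score records and send the riskiest ones to a human for review
records$error_risk <- predict(fit, type = "response")
head(records[order(-records$error_risk), ])
```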
Stage 4: Data Analysis and Reporting (Analysis & Interpretation)
Potential Risks: Lack of documentation, Analytical errors, Misinterpretation, and beyond.
Our Approach: Our comprehensive and transparent approach includes:
- Crafting Reproducible Reports and Sharing the Code: By creating clear and reproducible documentation, we’ll ensure that the analysis is transparent and verifiable, reducing the risk of misinterpretation and aligning with academic standards.
- Employing Automated Data Tests: Utilizing automated tests, we’ll minimize analytical errors, ensuring that the data is accurate and consistent (see the sketch after this list).
- Collaborative Interpretation: Working closely with you, we’ll interpret the results, ensuring that they align with your research goals and adhere to the highest standards of academic rigor.
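To illustrate the automated data tests mentioned above, here is a minimal sketch using R’s testthat package; the dataset and column names (patient_id, age_months, eeg_score) are hypothetical:

```r
# A sketch of automated data checks; run them on every data refresh.
library(testthat)

check_dataset <- function(df) {
  test_that("identifiers are complete and unique", {
    expect_false(any(is.na(df$patient_id)))
    expect_equal(length(unique(df$patient_id)), nrow(df))
  })
  test_that("values fall in plausible ranges", {
    expect_true(all(df$age_months >= 0 & df$age_months <= 24))
    expect_true(all(df$eeg_score >= 0, na.rm = TRUE))
  })
}

visits <- data.frame(
  patient_id = c("P01", "P02", "P03"),
  age_months = c(3, 7, 12),
  eeg_score  = c(0.2, 0.5, 0.1)
)
check_dataset(visits)
```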
Stage 5: Secondary Data Analysis
Potential Risks: Missed opportunities, Overlooked connections, Unexplored methodologies, etc.
Our Approach: Our innovative and collaborative approach includes utilizing Explainable AI (XAI):
By employing XAI – a branch of artificial intelligence that makes the reasoning behind complex, AI-driven findings clear and understandable – we’ll demystify complex patterns and uncover insights that might otherwise be missed. It ensures that the model’s decision-making process can be transparently understood, in keeping with the principles of scientific inquiry.
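As one concrete example of an XAI technique, here is a minimal permutation-importance sketch in base R; the features, model, and data are stand-ins, not the project’s actual pipeline:

```r
# Permutation feature importance: shuffle each feature in turn and see
# how much the model's accuracy drops. All data below are simulated.
set.seed(7)

n <- 300
d <- data.frame(
  eeg_abnormality = rnorm(n),
  mutation_type   = rbinom(n, 1, 0.5),
  age_at_dx       = runif(n, 0, 12)
)
d$outcome <- rbinom(n, 1, plogis(1.2 * d$eeg_abnormality + 0.8 * d$mutation_type))

model <- glm(outcome ~ ., data = d, family = binomial)
base_acc <- mean((predict(model, type = "response") > 0.5) == d$outcome)

# The larger the accuracy drop, the more the model relies on the feature
sapply(setdiff(names(d), "outcome"), function(col) {
  shuffled <- d
  shuffled[[col]] <- sample(shuffled[[col]])
  acc <- mean((predict(model, newdata = shuffled,
                       type = "response") > 0.5) == d$outcome)
  base_acc - acc
})
```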
Stage 6: Publication & Impact
Our Approach: Our comprehensive and transparent approach includes:
- Understanding Outcomes: We’ll work with you to interpret and articulate the results, ensuring that the findings are presented with clarity and academic rigor.
- Preparing the Paper: Collaborating closely, we’ll assist in crafting the manuscript, aligning it with publication standards, and enhancing its potential for acceptance in high-impact journals.
- Addressing Reviewer Concerns: Should any questions or doubts arise during peer review, we’ll be at your side, providing expert guidance and support in addressing them, ensuring a smooth publication process.
Our role in the project
We enabled researchers to explore the data freely by providing flexible data infrastructure in R.
Without proper tools, it’s hard to consider different scenarios and hypotheses, especially with limited time and budget.
In the EPISTOP project, we faced the challenge of drug-resistant epilepsy, for which there is no single agreed definition.
Some scientists defined it as not responding to drugs after 30 days; others said 45 days is a better measure. Instead of just “running a calculation”, we prepared infrastructure for generating reproducible reports in which changing one parameter recalculates everything within seconds and without the risk of error.
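One way to build such infrastructure is with parameterized R Markdown reports, where the contested definition enters as a single parameter. A hedged sketch follows; the file name and parameter are illustrative, not the project’s actual code:

```r
# report.Rmd (illustrative) declares the contested definition as a parameter:
#
# ---
# title: "Drug-resistant epilepsy analysis"
# params:
#   resistance_days: 30   # days without response that define resistance
# ---
#
# Every table and figure in the report derives from params$resistance_days.

# Regenerate the full report under both competing definitions:
rmarkdown::render("report.Rmd",
                  params = list(resistance_days = 30),
                  output_file = "report_30_days.html")
rmarkdown::render("report.Rmd",
                  params = list(resistance_days = 45),
                  output_file = "report_45_days.html")
```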
Even the smartest people are prone to human error, and a single undetected mistake can call an entire study into question. So… we made the trial more credible with software that helped flag mistakes and suspicious data values.
Automatic tests
The first was automated tests, a good practice borrowed from software development. We wrote code to find errors such as a study date earlier than the patient’s birth date, or a result that deviates sharply from previous ones. Such discrepancies are often due to human error when entering data, and these tests detect them quickly.
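In R, such checks might look like the following sketch (the column names patient_id, visit_date, birth_date, and result are hypothetical):

```r
# Flag study dates that precede the birth date, and results that jump
# sharply from the patient's previous value.
library(dplyr)

flag_suspicious <- function(visits) {
  visits %>%
    group_by(patient_id) %>%
    arrange(visit_date, .by_group = TRUE) %>%
    mutate(
      date_before_birth  = visit_date < birth_date,
      jump_from_previous = abs(result - lag(result)) >
        3 * sd(result, na.rm = TRUE)
    ) %>%
    ungroup() %>%
    filter(date_before_birth | coalesce(jump_from_previous, FALSE))
}
```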
Visualizations
The second was visualizations. We prepared a visual summary of the treatment course for each patient, allowing scientists to easily verify each participant’s path and confirm that the data contained no errors.
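A per-patient treatment timeline of this kind can be drawn with ggplot2; a sketch with invented data:

```r
# Sketch of a per-patient treatment timeline; the data are invented.
library(ggplot2)

timeline <- data.frame(
  patient = rep(c("P01", "P02"), each = 2),
  phase   = rep(c("EEG surveillance", "Antiepileptic treatment"), times = 2),
  start   = as.Date(c("2015-01-10", "2015-03-02", "2015-02-01", "2015-06-15")),
  end     = as.Date(c("2015-03-02", "2016-01-10", "2015-06-15", "2016-02-01"))
)

ggplot(timeline) +
  geom_segment(aes(x = start, xend = end, y = patient, yend = patient,
                   colour = phase), linewidth = 4) +
  labs(x = "Date", y = "Patient", colour = "Phase",
       title = "Treatment course per patient")
```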
We helped scientists find anomalies and ensure the study’s results are reliable.
We documented the data workflow in such a way that it was transparent and easy to explain to reviewers.
To understand the data and effectively answer questions at the end of a study, which can take years, we introduced versioning of generated reports in a Git repository, where we also conducted all project-related communication. Scientists could easily trace all report versions and add their comments. Months after generating a report, recalculations may be needed due to reviewers’ questions or follow-up projects, and changes to external tools and libraries can lead to variations in outcomes. We therefore used Packrat, a tool for R that snapshots the package ecosystem as of a given day, allowing recalculations to be performed in the same environment.
Alongside each report we also recorded:
- a technical description of the statistical methods used, e.g., the handling of missing data and variable transformations;
- the software used, e.g., the versions of the operating system, R, and system libraries.
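The core Packrat workflow, together with recording the session details listed above, looks roughly like this (Packrat has since been superseded by renv, but the idea carries over):

```r
# Freeze the package ecosystem so a report can be recalculated months
# later in the same environment.
packrat::init()      # create a project-private package library
packrat::snapshot()  # record the exact package versions in use today

# Later, on another machine or after upstream libraries change:
packrat::restore()   # reinstall precisely the recorded versions

# Record OS, R, and library versions alongside each generated report
writeLines(capture.output(sessionInfo()), "report_session_info.txt")
```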
In the EPISTOP project, we took care of the data so that researchers could complete the clinical trial freely and with confidence in the numbers.
Effects of the project: nearly 1,200 citations.