PREMIER Evaluation

From PREMIER QMS - Wiki
Revision as of 21:34, 2 December 2021 by Admin (talk | contribs)


Objectives

The aim is to find the most suitable statistical method for delivering a meaningful result from a well-planned experiment or study.

Background

In an experiment, the credibility of the inferred causal relationship between treatment and outcome depends on the statistical power and internal validity of the study (Ref. 1).

A given experiment or data set can be analyzed with different statistical methods. Both the choice of method and the way a particular method is implemented can significantly influence the conclusions drawn from the experiment.

The principal features of the eventual statistical analysis should be described before analyzing the data, and ideally even before performing the experiments.

The Institute of Biometry and Clinical Epidemiology at the Charité has developed a series of easily understandable, freely available presentations in English and German covering the most important topics for correct statistical planning and evaluation in biomedical research.

HARKing

At PREMIER, the focus is on transparent scientific work. It is therefore essential that the initial working hypothesis is documented and included in the study protocol or pre-registration. In this way, so-called HARKing can be prevented.

"HARKing" stands for "Hypothesizing After the Results are Known": a hypothesis based on the interpretation of the data is presented as if it had existed before the data were collected. HARKing can also occur when researchers test an a-priori hypothesis but then omit it from their research report after learning the results.

The protocol or pre-registration is preferably stored in an electronic laboratory notebook or on a platform (e.g. OSF), so that all steps are traceable at any time (see also module “Planning of Experiments” – Pre-registration).

Tasks / Actions

In order to create a lab-specific action plan, the first step is an assessment carried out by the PREMIER team. The assessment determines the status quo of the laboratory with regard to existing quality tools. Here you will find the general tasks / actions that are necessary to implement the module.

Primary Analysis and Evaluation of Raw Data

Primary analysis of raw data is the data processing necessary to derive (secondary) data that are shared, presented and/or subjected to statistical analysis (Ref. 2).

Information about the primary analysis of raw data is crucial for establishing a link between raw data and published results and is therefore an essential part of data traceability.

Primary analysis of raw data should:

  • be performed blindly (e.g. by an experimenter who does not know the pharmacological treatment); for confirmatory research this is a requirement
  • retain the original randomization scheme (if applicable)
  • follow a pre-defined analysis plan, which can be part of the study protocol
  • include data verification (even for data generated by automatic systems, there is usually additional data that is generated manually; examples include body weight, volume of medication administered, and unplanned observations during an experiment, such as abnormal behavior)
  • include a data validity check, i.e. against the acceptance criteria pre-defined in the study protocol
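A blinded validity check along the lines of the last point can be sketched in a few lines of Python. All names here (`ACCEPTANCE_CRITERIA`, `check_validity`, the sample codes) are illustrative assumptions, not part of any PREMIER tooling:

```python
# Hypothetical sketch: a blinded validity check of primary data against
# acceptance criteria pre-defined in a study protocol. The criterion,
# the function name, and the sample codes are illustrative only.

ACCEPTANCE_CRITERIA = {
    "body_weight_g": (15.0, 45.0),  # example acceptance range
}

def check_validity(records):
    """Flag records outside the pre-defined acceptance range.

    `records` maps a blinded sample code (the experimenter never sees
    the treatment group) to a measured value.
    """
    low, high = ACCEPTANCE_CRITERIA["body_weight_g"]
    flagged = {}
    for code, value in records.items():
        if not (low <= value <= high):
            flagged[code] = value  # record the deviation, do not silently drop it
    return flagged

data = {"A1": 27.3, "A2": 12.1, "A3": 31.0}
print(check_validity(data))  # {'A2': 12.1}
```

Because the experimenter only sees blinded codes, flagging (and any resulting exclusion decision) happens before unblinding, as required above.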

Statistical Analysis

The following recommendations are based on Motulsky (Ref. 2):

  • Statistical analysis should be performed exactly as described in the study plan.
  • Any changes (e.g. in the steps used to process and analyze the data, or changes to the study hypothesis) must be documented; the reason for a change must be explained, and the study conclusion may need to be labeled as “preliminary”.
  • As the p-value provides no information about the actual size of the observed effect, it is recommended to calculate, document and present the effect size as difference, percent difference, ratio, or correlation coefficient along with its confidence interval.
  • It is strongly recommended to report statistical hypothesis testing (and place significance asterisks on figures) only if a decision is to be based on that one analysis.
  • It is strongly advised against using the word “significant” in a report or publication: in plain English, “significant” means “relevant” or “important”, but a p-value provides no basis for judging the importance of a finding. If statistical hypothesis testing is used to reach a decision, it is recommended to state the p-value, the preset p-value threshold (statistical alpha), and the decision.
  • Once the statistical analysis is conducted, it is recommended to plot figures that show the distribution of the data (scatter plot; box &amp; whiskers; violin plot). However, if the data have to be presented numerically (e.g. in a table), display results as mean ± SD (or as median with interquartile range if a normal distribution is not assumed).
  • It is recommended not to plot the mean with error bars that represent the standard error (mean ± SEM) because SEM is not an indicator of variability but of precision and as such less informative than confidence intervals.
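As a minimal sketch of the effect-size recommendation above, the following computes a difference of means with a confidence interval using only the Python standard library. The data and function name are invented for illustration; the normal approximation is used for simplicity, and for small samples the t-distribution (e.g. via scipy.stats) would be more appropriate:

```python
# Illustrative sketch: report an effect size (difference of means) with a
# confidence interval instead of a bare p-value. Uses a large-sample
# normal approximation; for small samples, prefer the t-distribution.
from statistics import mean, stdev, NormalDist
from math import sqrt

def diff_ci(a, b, level=0.95):
    d = mean(a) - mean(b)                        # effect size as a difference
    se = sqrt(stdev(a)**2 / len(a) + stdev(b)**2 / len(b))
    z = NormalDist().inv_cdf((1 + level) / 2)    # ~1.96 for a 95% interval
    return d, (d - z * se, d + z * se)

treated = [5.1, 4.8, 5.6, 5.0, 5.3]  # invented example values
control = [4.2, 4.5, 4.1, 4.4, 4.0]
effect, (lo, hi) = diff_ci(treated, control)
print(f"difference = {effect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval `[lo, hi]` alongside the difference conveys both the size of the effect and the precision of its estimate, which a p-value alone does not.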

It is strongly recommended to report all details when describing statistical methods.

Visualization of Data

The implementation of projects concentrates on the steps after data collection, especially statistical analysis and visualization.

Several templates and instructions are provided here for the easy creation of scatterplots (Ref. 3, 4):

  1. Excel template for creating univariate scatterplots for independent data: https://doi.org/10.1371/journal.pbio.1002128.s002
  2. Excel templates for creating univariate scatterplots for paired or matched data: https://doi.org/10.1371/journal.pbio.1002128.s003
  3. GraphPad PRISM instruction for creating univariate scatterplots for independent data: https://doi.org/10.1371/journal.pbio.1002128.s004
  4. GraphPad PRISM instruction for creating univariate scatterplots for paired or matched data (one group, two conditions): https://doi.org/10.1371/journal.pbio.1002128.s005
  5. GraphPad PRISM instruction for creating univariate scatterplots for paired or matched data (two groups, two conditions): https://doi.org/10.1371/journal.pbio.1002128.s006

Online interactive tools for creating better data presentations

  1. Interactive Dotblot with instruction video
  2. Interactive line graphs
  3. Plots of Data – an interactive web app for visualizing data together with their summaries
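The idea behind such univariate scatterplots can be sketched without any particular plotting tool: each group's raw values are spread over a narrow horizontal band ("jitter") so every data point stays visible next to a summary line. The function and group names below are invented for illustration; the resulting (x, y) pairs could be fed to any plotting library:

```python
# Illustrative sketch: jittered x-positions for a univariate scatterplot,
# so every raw data point is visible next to its group's median.
# The jitter width of 0.15 is an arbitrary illustrative choice.
import random
from statistics import median

def jittered_points(groups, width=0.15, seed=42):
    random.seed(seed)  # fixed seed -> reproducible jitter
    points, medians = [], []
    for center, (name, values) in enumerate(groups.items(), start=1):
        for y in values:
            x = center + random.uniform(-width, width)
            points.append((name, x, y))
        medians.append((name, center, median(values)))
    return points, medians

groups = {"vehicle": [4.0, 4.4, 4.1], "drug": [5.0, 5.6, 5.3]}
pts, meds = jittered_points(groups)
print(meds)  # one (name, x-center, median) triple per group
```

Showing the raw points plus a median line communicates the distribution directly, in line with the plotting recommendations above.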

Analysis Code

  • Git
  • Notebooks with R and RStudio
  • Jupyter Notebooks

Data generated by the primary analysis of raw data should be securely stored. Alternatively, the tools, algorithms, scripts and related analysis information sufficient to restore the analysis may be stored. If the latter approach is chosen, two requirements apply:

  • It should be possible for any researcher with the necessary qualifications to repeat the analysis.
  • The technical feasibility of such reanalysis should be ensured for the entire period during which the raw data are stored (e.g. the ability to reanalyze should not be compromised by software updates or by the stored information becoming unreadable).
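One simple way to support both requirements is to store a small manifest alongside the analysis script, recording a checksum and the software version used. The manifest layout and file names below are assumptions for illustration, not a PREMIER requirement:

```python
# Hypothetical sketch: fingerprint an analysis script and its environment
# so that a stored analysis can later be verified and repeated.
import hashlib
import platform
import tempfile
from pathlib import Path

def analysis_manifest(script_path):
    data = Path(script_path).read_bytes()
    return {
        "script": Path(script_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),  # detects any later change
        "python": platform.python_version(),         # records the environment
    }

# Demo with a throwaway file standing in for a real analysis script
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('analysis')\n")
manifest = analysis_manifest(f.name)
print(manifest["sha256"][:12], "...")
```

If the checksum recomputed years later still matches the manifest, the stored script is byte-identical to the one used for the original analysis; the recorded Python version indicates which environment is needed to rerun it.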

Important:

  • Label and store all primary analysis files in a way that ensures traceability of the data.
  • Beyond the pre-specified criteria, data points and observations may be excluded only while the primary analysis is still performed blind (i.e. before unblinding).
  • All decisions on the exclusion of data must be transparent.
  • Consider including this topic in a training programme for new staff or in a refresher training course (if applicable).
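To make exclusion decisions transparent, each excluded data point can be recorded with a reason and a timestamp in a log stored alongside the primary analysis files. The structure and field names below are assumptions for illustration only:

```python
# Illustrative sketch: a transparent exclusion log, so every excluded
# data point carries a documented reason and a timestamp.
import csv
import io
from datetime import datetime, timezone

def log_exclusion(log, sample_id, reason):
    log.append({
        "sample_id": sample_id,
        "reason": reason,  # should cite a pre-defined acceptance criterion
        "excluded_at": datetime.now(timezone.utc).isoformat(),
    })

exclusions = []
log_exclusion(exclusions, "A2", "body weight below protocol acceptance range")

# Serialize the log as CSV for storage with the primary analysis files
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["sample_id", "reason", "excluded_at"])
writer.writeheader()
writer.writerows(exclusions)
print(buf.getvalue())
```

Because the log is append-only and every entry names its criterion, anyone reviewing the study can trace exactly which observations were excluded, when, and why.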

References