Statistical Considerations for LB (Laboratory) Data

Introduction

Laboratory specimens are integral to clinical trials, serving as primary indicators of systemic toxicities and providing crucial safety information. Their objective nature and direct relation to organ function make laboratory data indispensable for early detection of adverse effects, often before clinical symptoms become apparent. This routine collection and analysis of laboratory data is fundamental in clinical trials aimed at evaluating new therapies, where safety monitoring is a critical component.

Key Points:
  1. Routine Collection: Lab data are routinely collected according to prespecified schedules to monitor patient safety throughout clinical trials.
  2. Unscheduled Tests: Investigators might order unscheduled tests to follow up on identified toxicities or to investigate suspicious clinical observations.

Common Laboratory Tests

The most common categories of laboratory tests in human clinical trials are Hematology, Blood (or Serum) Chemistry, Urinalysis, and Coagulation. Less common categories are Microbiology, stool specimens, and testing for specific drugs. While the grouping of specific tests sometimes varies from case report form to case report form, in general the Hematology category comprises tests involving the cellular components of the blood, while Blood Chemistry focuses on the plasma components. The Urinalysis group comprises tests on the components of urine. Coagulation tests are those that involve blood clotting; these are sometimes grouped into the Hematology category. Microbiology tests check for the presence of specific bacteria, viruses, and fungi.

Which tests are actually collected during a clinical trial is determined to some degree by the drug being studied and is generally defined in the protocol. Below is a list of commonly collected laboratory tests grouped by category. The groupings are not hard and fast, and different studies sometimes group the lab tests differently; for example, combining coagulation and hematology into one group is common practice.

  1. Hematology
    • Definition: Hematology tests focus on the cellular components of blood.
    • Common Tests:
      • RBC (Red Blood Cell Count)
      • Hemoglobin
      • Hematocrit
      • MCV (Mean Corpuscular Volume)
      • MCH (Mean Corpuscular Hemoglobin)
      • MCHC (Mean Corpuscular Hemoglobin Concentration)
      • WBC (White Blood Cell Count)
      • White cell differential (Neutrophils, Lymphocytes, Monocytes, Eosinophils, Basophils)
  2. Blood (or Serum) Chemistry
    • Definition: These tests measure the plasma components of blood, often reflecting the function of various organs.
    • Common Tests:
      • Albumin
      • LDH (Lactate Dehydrogenase)
      • Alkaline Phosphatase
      • Electrolytes (Sodium, Potassium, Chloride, Bicarbonate)
      • Kidney Function Markers (BUN - Blood Urea Nitrogen, Creatinine)
      • Liver Enzymes (SGPT/ALT, SGOT/AST, GGT)
      • Lipid Profile (Cholesterol, Triglycerides)
      • Others (Phosphorus, Calcium, Total Bilirubin, Total Protein, Glucose, Uric Acid)
  3. Coagulation
    • Definition: Coagulation tests assess the blood’s ability to clot, which is crucial for diagnosing bleeding disorders.
    • Common Tests:
      • Prothrombin Time (PT)
      • Partial Thromboplastin Time (PTT)
      • International Normalized Ratio (INR)
  4. Urinalysis
    • Definition: Tests involving components of urine, used to detect diseases of the urinary system as well as systemic conditions.
    • Common Tests:
      • Urine pH
      • Urine Specific Gravity
  5. Microbiology
    • Definition: These tests identify the presence of specific bacteria, viruses, and fungi, useful in diagnosing infections.
    • Typical Tests:
      • Cultures from various samples (blood, urine, throat swabs, etc.)
      • Sensitivity tests to determine the effectiveness of antibiotics
  6. Functional Tests
    • Definition: These are grouped analyses of blood chemistry tests that provide insights into the function of specific organs or biological systems.
    • Application: Commonly used to ensure that a drug is not adversely affecting a particular organ system.
    • Examples by System or Organ:
      • Liver: ALT, AST, Alkaline Phosphatase, GGT, LDH, Albumin, Bilirubin
      • Kidney: BUN, Creatinine
      • Pancreas: Amylase
      • Electrolytes: Sodium, Potassium, Chloride
      • Nutritional: Glucose, Fats
      • Lipids: Triglycerides, Cholesterol (LDL, HDL)
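
To make these organ-system groupings usable in programming, a simple lookup can map each collected test to a functional panel. The sketch below is illustrative only; the test codes, the `lb` data frame, and the panel assignments are assumptions rather than a standard.

library(dplyr)
library(tibble)

# Illustrative lookup of lab test codes to functional (organ-system) panels
panel_map <- tribble(
  ~LBTESTCD, ~PANEL,
  "ALT",     "Liver",
  "AST",     "Liver",
  "BILI",    "Liver",
  "BUN",     "Kidney",
  "CREAT",   "Kidney",
  "AMYLASE", "Pancreas",
  "SODIUM",  "Electrolytes",
  "K",       "Electrolytes",
  "GLUC",    "Nutritional",
  "TRIG",    "Lipids",
  "CHOL",    "Lipids"
)

# Attach the panel to a lab dataset (assumed to have an LBTESTCD column)
# so that later summaries can be produced per organ system
lb_by_panel <- lb %>% left_join(panel_map, by = "LBTESTCD")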

Central Labs vs. Local Labs

  1. Central Labs
    • Overview: Central labs, such as Covance, CCLS, and Quintiles, manage laboratory data for clinical trials on a larger scale, often globally.
    • Advantages:
      • Standardization: Tests are performed using uniform procedures across all samples, enhancing the comparability of data across different patients and sites.
      • Data Handling: Results are typically delivered electronically and include application of normal ranges and unit standardization as per the trial sponsor’s specifications.
      • Consistency: A common unit is used for reporting results, which can be essential for multi-center international trials.
    • Challenges:
      • Delay in Results: Samples must be shipped to the central lab, introducing potential delays in data availability to investigators.

  2. Local Labs
    • Overview: Local labs are usually situated within the hospital or medical facility where the clinical trial is conducted.
    • Advantages:
      • Speed: Ideal for urgent testing where immediate results are necessary for patient dosing or trial enrollment decisions.
      • Accessibility: Convenient for quick data capture directly into case report forms.
    • Challenges:
      • Lack of Standardization: Each local lab may use different units of measurement and have different normal ranges, complicating data consistency and comparability.
      • Data Integration: Results from local labs often require additional effort to standardize and integrate into the broader trial data set.

Units of Measurements in Clinical Trials

  1. Conventional Units
    • Usage: Predominantly used in the United States.
    • Characteristics: These units are familiar to U.S. healthcare providers and are often derived from historical laboratory practices.

  2. SI Units (Système International d’Unités)
    • Usage: Commonly used in international settings and preferred for global clinical trials.
    • Benefits: Facilitates easier comparison and interpretation of clinical trial data across different countries.

Adaptation for Trials:
  • Single Country Trials: It may be simpler to use conventional units if the trial is confined to the U.S. to align with local clinical practices.
  • International Trials: SI units are generally preferred to maintain consistency and ensure clear communication among diverse international teams.

Dual Reporting:
  • In some cases, particularly in trials spanning multiple countries, results might be reported in both conventional and SI units to accommodate the preferences and regulatory requirements of different regions.
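
As a minimal sketch of unit standardization and dual reporting (in the spirit of the SAS conversion approaches cited in the references), conventional results can be multiplied by analyte-specific factors to obtain SI values. The factor table below covers only a few analytes, and the dataset and column names (`lb`, LBTESTCD, LBORRES, LBORRESU) are assumptions.

library(dplyr)

# Multiplicative conversion factors from conventional to SI units (a few analytes only)
si_factors <- data.frame(
  LBTESTCD = c("GLUC",   "CREAT",  "BILI",   "CHOL"),
  CONVU    = c("mg/dL",  "mg/dL",  "mg/dL",  "mg/dL"),
  SIU      = c("mmol/L", "umol/L", "umol/L", "mmol/L"),
  FACTOR   = c(0.0555,   88.4,     17.1,     0.0259)
)

# Standardize a lab dataset reported in conventional units
# (assumed columns: LBTESTCD, LBORRES = result, LBORRESU = reported unit)
lb_si <- lb %>%
  left_join(si_factors, by = "LBTESTCD") %>%
  mutate(
    LBSTRESN = ifelse(!is.na(FACTOR) & LBORRESU == CONVU, LBORRES * FACTOR, LBORRES),
    LBSTRESU = ifelse(!is.na(FACTOR) & LBORRESU == CONVU, SIU, LBORRESU)
  )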

Interpretation and Use of Reference Ranges

  1. Purpose of Reference Ranges
    • Diagnostic Guidance: Reference ranges help clinicians discern normal from potentially abnormal results, aiding in diagnosis and treatment decisions.
    • Safety Monitoring: In clinical trials, these ranges are used to flag abnormalities and help grade laboratory toxicities.

  2. Challenges with Reference Ranges
    • Variability in Establishment: Labs frequently adopt reference ranges provided by equipment manufacturers or develop their own based on internal criteria, which may not be consistent across different settings.
    • Analytical Variability: Different labs might optimize assays to varying points within the reference population range, affecting the sensitivity and specificity of results.
    • Clinical Considerations: The overlap between disease and non-disease states in analyte levels can make it challenging to set definitive range limits without compromising on either sensitivity or specificity.

  3. Determination of Reference Ranges
    • Analytical Decisions: Assay precision must accommodate the clinical decisions at critical points, which are often at the extreme ends of the reference range.
    • Clinical Decisions: Setting the range involves balancing the probability of correctly identifying an abnormal result (sensitivity) against that of correctly identifying a normal result (specificity).
    • Mathematical Decisions: Traditional methods might include using a small sample of presumed healthy individuals to establish a 95% reference range. Advanced statistical methods like bootstrapping or transformations for skewed data are sometimes applied to refine these ranges.
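
A minimal sketch of these mathematical decisions: given measurements from presumed healthy individuals, a 95% reference range can be taken parametrically (mean plus or minus 1.96 standard deviations), nonparametrically (2.5th and 97.5th percentiles), or the percentile limits can be bootstrapped to gauge their stability. The simulated data are illustrative only.

set.seed(101)
ref <- rnorm(120, mean = 25, sd = 8)      # measurements from presumed healthy individuals

# Parametric 95% reference range (assumes approximate normality)
mean(ref) + c(-1.96, 1.96) * sd(ref)

# Nonparametric 95% reference range (2.5th and 97.5th percentiles)
quantile(ref, probs = c(0.025, 0.975))

# Bootstrap the percentile limits to gauge their sampling variability
B <- 2000
boot_limits <- t(replicate(B, quantile(sample(ref, replace = TRUE),
                                       probs = c(0.025, 0.975))))
apply(boot_limits, 2, quantile, probs = c(0.025, 0.975))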

Implications for Clinical Trials

In the context of clinical trials, especially those involving multiple laboratories, the use of standard reference ranges can be problematic:
  • Multi-lab Variability: Different labs may produce variable results due to differing assay calibrations and reference standards.
  • Normalization Needs: It has been suggested that results from different labs should be normalized through proficiency surveys, where all participating labs analyze a common set of specimens to ensure compatibility of results.
  • Central Laboratory Use: Employing a central lab can reduce the need for adjustments across sites, but does not fully mitigate the issues associated with reference range construction.

Tabulation Summaries for Laboratory Results

In clinical trials, lab summaries play a crucial role in interpreting and communicating the effects of interventions. These summaries are tailored based on the objectives of the trial and are designed to provide clear and concise data analysis to support study findings.

Routine analyses of laboratory data include constructing tables that show the percentage of patients with each specific type of laboratory abnormality, tables that contain the frequencies of subjects experiencing a change from normal to abnormal status (or from abnormal to normal status) for each selected body function, tables that contain the frequencies of subjects experiencing a change in their pretreatment laboratory toxicity grade, and tables that present summary statistics on the amount of change. Frequently, graphs depict the amount of change in relation to the pretreatment value, or the average group change over time. For statistical inference, nonparametric tests are frequently appropriate, even though p-values from the inferential procedures are typically used for descriptive purposes only. Recently, some authors have discussed more informative graphical displays. In addition, multiple small figures that display laboratory data can also be quite informative.

Typical Summaries

Descriptive Statistics Table

  • Purpose: To provide a statistical summary of laboratory values at various time points throughout the trial.
  • Common Statistics:
    • Mean and Median: Indicate the central tendency of the data.
    • Standard Deviation: Measures the dispersion or variability of the lab values from the mean.
    • Minimum and Maximum: Show the range of the data.
  • Usage: This table is often repeated for each lab test conducted during the trial, offering a snapshot of changes or percent changes from baseline at predefined visits.
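
A minimal dplyr sketch of such a by-visit summary, assuming an analysis dataset `adlb` with columns PARAM, TRTA, AVISIT, and CHG (change from baseline); all names are assumptions.

library(dplyr)

# Descriptive statistics of change from baseline, by parameter, treatment, and visit
desc_stats <- adlb %>%
  group_by(PARAM, TRTA, AVISIT) %>%
  summarise(
    n      = sum(!is.na(CHG)),
    mean   = mean(CHG, na.rm = TRUE),
    median = median(CHG, na.rm = TRUE),
    sd     = sd(CHG, na.rm = TRUE),
    min    = min(CHG, na.rm = TRUE),
    max    = max(CHG, na.rm = TRUE),
    .groups = "drop"
  )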

Shift Tables

  • Purpose: Used when variations in lab results complicate direct comparison, often due to the use of local labs.
  • Content:
    • Classification: Lab values are categorized as “Above Normal Range,” “Within Normal Range,” or “Below Normal Range.”
    • Summary: These categories are analyzed for shifts from baseline to subsequent time points, often focusing on the last visit.
  • Usage: Particularly useful in studies with high variability in data or when comparing results across multiple sites that might not use standardized measurement units.
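
A minimal sketch of a shift table, assuming derived normal-range categories in ANRIND (post-baseline) and BNRIND (baseline) and a "Last Visit" record flagged in AVISIT; the dataset and column names are assumptions about the analysis data.

library(dplyr)
library(tidyr)

# Counts of subjects by baseline category vs. category at the last visit
shift_tab <- adlb %>%
  filter(AVISIT == "Last Visit") %>%          # or derive the last on-treatment record
  count(PARAM, TRTA, BNRIND, ANRIND) %>%
  pivot_wider(names_from = ANRIND, values_from = n, values_fill = 0)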

Clinically Significant Categories Table

  • Purpose: To categorize lab values into clinically significant categories based on predefined critical values.
  • Content:
    • Categories: Lab results are segmented into critical levels, such as “Below a certain threshold” which might indicate a risk or require intervention.
    • Patient Count: Reports the number of patients falling into each category.
  • Usage: Helps in assessing the clinical relevance of lab results, particularly in safety analyses.

Standard CTC Toxicity Grading Table

  • Purpose: To group lab values based on established toxicity grades, typically using the Common Terminology Criteria for Adverse Events (CTCAE).
  • Content:
    • Grading: Each lab value is assigned a grade (e.g., Grade 1, Grade 2) based on severity according to toxicity criteria.
    • Summary: Number of patients corresponding to each toxicity grade at different time points.
  • Usage: Crucial for evaluating drug safety and managing patient care, particularly in oncology trials where toxicity monitoring is integral.
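
A minimal sketch of threshold-based grading. The cut-offs below follow the familiar pattern for ALT elevation expressed in multiples of the upper limit of normal (>1–3x, >3–5x, >5–20x, >20x), but the CTCAE version specified in the protocol should always be the authoritative source.

# Assign a toxicity grade from a result expressed as a multiple of the upper
# limit of normal (ULN); breaks mirror the usual ALT-elevation pattern and
# should be replaced by the protocol-specified criteria
grade_alt <- function(uln_multiple) {
  cut(uln_multiple,
      breaks = c(-Inf, 1, 3, 5, 20, Inf),
      labels = c("Grade 0", "Grade 1", "Grade 2", "Grade 3", "Grade 4"))
}

grade_alt(c(0.8, 2.5, 4.2, 12, 30))
# Grade 0, Grade 1, Grade 2, Grade 3, Grade 4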

Multivariate Analysis

The analysis of laboratory data in clinical trials often extends beyond individual test results to include comprehensive evaluations of multiple parameters simultaneously. This approach recognizes the interrelated nature of various biomarkers and their collective impact on a patient’s health. The multivariate analysis of lab data can enhance the understanding of a drug’s safety profile and provide a more accurate representation of clinical outcomes.

1. Preference for Multivariate Analysis

  • Comprehensive Safety Profiles: Clinicians and researchers increasingly prefer to evaluate a patient’s overall safety profile using multivariate laboratory data rather than isolating single parameters. This holistic view helps in making more informed clinical decisions as it incorporates the complex interactions among various biomarkers.

  • Limitations of Univariate Analysis: Analyzing lab results one parameter at a time can lead to a fragmented understanding of patient health and may overlook the interactions between different body functions.

2. Methods and Approaches

  • Ranking and Pair Analysis: Brown et al. demonstrated a method involving the selection of tolerable limits for lab results and ranking abnormalities by the frequency of abnormal values detected. They then analyzed related test results concurrently to assess the safety of two antibiotics, leveraging the relationship between tests that assess similar functions.

  • Score Construction: Sogliero-Gilbert et al. and Gilbert et al. introduced the concept of constructing scores, such as the Genie score, which aggregates results from multiple related assays into a single score for each patient. This score can then be used to compare the effects of different treatments within a clinical trial, offering a consolidated view of the impact on patient health.

  • Incorporating Various Data Types: Chuang-Stein et al. suggested a more inclusive approach that combines laboratory results with other types of safety data, including clinical signs and symptoms. This method provides a more comprehensive safety profile of patients by integrating various dimensions of health data.

3. Benefits of Multivariate Analysis

  • Enhanced Data Interpretation: By aggregating related lab tests, researchers can achieve a more nuanced understanding of treatment effects on specific body functions.

  • Efficiency in Comparisons: Collapsing multivariate data into univariate scores simplifies the statistical analysis and can lead to more efficient comparisons among different treatment groups.
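
The published scores (such as the Genie score) have their own specific definitions; the sketch below is only a simplified composite in the same spirit, standardizing each liver-related test against its reference range and summing the per-patient excess above the upper limit. The dataset `adlb` and its columns (USUBJID, TRT01A, LBTESTCD, AVAL, LBORNRLO, LBORNRHI) are assumptions.

library(dplyr)

liver_tests <- c("ALT", "AST", "ALP", "GGT", "BILI")

liver_score <- adlb %>%
  filter(LBTESTCD %in% liver_tests) %>%
  # standardized excess above the upper limit, scaled by the reference-range width
  mutate(excess = pmax(0, (AVAL - LBORNRHI) / (LBORNRHI - LBORNRLO))) %>%
  group_by(USUBJID, TRT01A) %>%
  summarise(score = sum(excess, na.rm = TRUE), .groups = "drop")

# Compare the composite score between two treatment arms, e.g. with a rank test
wilcox.test(score ~ TRT01A, data = liver_score)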

Regression to the Mean

  • Eligibility Criteria Impact: In many clinical trials, subjects are selected based on specific eligibility criteria, such as normal ranges for certain laboratory tests like SGOT and SGPT. Subjects with values too high or too low are typically excluded.

  • Baseline Measurements: The initial measurement (screen value) is used as a baseline to compare subsequent laboratory results. If the initial screening selects subjects based on extreme values, any natural variation towards the average (mean) can appear as a change due to the treatment, rather than simple statistical regression.

  • Mean Change Due to Regression: Chuang-Stein discussed how the mean change observed in a clinical trial can be influenced significantly by regression to the mean, especially if the correlation between the repeated measures is low.

  • Impact of Exclusion Criteria: The stricter the exclusion criteria (e.g., only allowing subjects with values within a narrow range), the more pronounced the regression effect can be. This can erroneously suggest that treatment has affected parameters like SGOT and SGPT when, in fact, it has not.

Source: Chuang-Stein C. The regression fallacy. Drug Info J 1993;27:1213–1220.
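
A minimal simulation of the regression effect described above: screening and follow-up values are correlated measurements around the same true mean, no treatment effect is simulated, yet subjects selected on the screening value show a nonzero mean "change" at follow-up. All numbers are illustrative.

set.seed(42)
n    <- 10000
rho  <- 0.5                        # correlation between screening and follow-up values
mu   <- 30; sigma <- 10            # SGPT/ALT-like values (illustrative)

true     <- rnorm(n, mu, sigma * sqrt(rho))           # stable subject-level component
screen   <- true + rnorm(n, 0, sigma * sqrt(1 - rho))
followup <- true + rnorm(n, 0, sigma * sqrt(1 - rho))

eligible <- screen >= 15 & screen <= 35                # eligibility window on the screening value
mean(followup - screen)                                # ~0 in the unselected population
mean(followup[eligible] - screen[eligible])            # nonzero apparent "change" due to selection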

Adjustment Procedures

  • Proposed Adjustments: Chuang-Stein proposed specific adjustments to account for the regression effect, aiming to provide a more accurate assessment of how treatments impact laboratory parameters.
  • Selection of Baseline Values: Another approach to mitigate this effect is the selection of baseline values. Using values from a pretreatment period closer to the start of dosing rather than initial screening values can reduce the impact of regression to the mean.
  • Controlled Study Settings: In controlled trials where subjects are randomized into treatment groups, regression to the mean is less likely to skew comparisons among groups if the randomization effectively balances baseline characteristics across these groups.
  • Use of Multiple Baseline Measures: Comparing changes from a baseline established by multiple pre-treatment measures can also help mitigate this effect.

Mixture Response Distributions

In clinical trials, the response of patients to a treatment is not always uniform, leading to variations in how different subjects react. This variation can often be described using mixture distributions, which provide a statistical framework for modeling the differential response to treatment across a population.

  • Basic Idea: The response distribution for a laboratory parameter in a treatment group might not be homogenous but could represent a mixture of different distributions. For instance, part of the treatment group might follow the same distribution as the control group, while another part might show a significant shift in the response due to the treatment.
  • Realistic Modeling of Patient Responses: The mixture model approach acknowledges that some patients might not exhibit any treatment effects (hence the control-like distribution), while others might experience significant changes in lab parameters due to the treatment. This dual response can be crucial for understanding the full impact of a treatment, especially when considering side effects or therapeutic effectiveness.
  • Formulation: Assume G(x) as the response distribution for the treatment group and F(x) for the control. G(x) can be modeled as a mixture of F(x) and a transformed version of F(x) that includes a location shift D and a scale change l, formulated as: \[ G(x) = pF(x) + (1-p)F\left(\frac{x-D}{l}\right) \] Here, p represents the proportion of the treatment group that responds similarly to the control group.
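
A minimal sketch that generates data under this mixture formulation, taking F to be a normal distribution and using arbitrary illustrative values of p, D, and l:

set.seed(7)
n <- 200
p <- 0.7; D <- 15; l <- 1.5               # mixture proportion, location shift, scale change

control <- rnorm(n, mean = 40, sd = 10)   # draws from F(x)

# Treatment group: with probability p the value follows F(x);
# otherwise it follows F((x - D) / l), i.e., Y = D + l * X with X ~ F
nonresp <- rbinom(n, 1, p) == 1
treated <- ifelse(nonresp,
                  rnorm(n, mean = 40, sd = 10),
                  D + l * rnorm(n, mean = 40, sd = 10))

# Estimating p and testing H0: p = 1 would use procedures such as those of
# Conover and Salsburg or O'Brien cited in the text
summary(control); summary(treated)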

Statistical Challenges and Solutions

  • Estimation and Hypothesis Testing: The primary statistical task is to estimate the mixture proportion p and test the null hypothesis \(H_0: p = 1\) against the alternative \(H_1: p \neq 1\). This tests whether the treatment has a different effect compared to the control.
  • Methodologies Applied:
    • Cherng et al. [16] discussed testing procedures specifically designed for this scenario, applying methods proposed by Conover and Salsburg and O’Brien. These methods help in identifying whether a significant portion of the treatment group exhibits a response different from the control group.

Correlating LB Data with Pharmacokinetic Parameters

Pharmacokinetic parameters describe how a drug is absorbed, distributed, metabolized, and excreted in the body. Laboratory data, on the other hand, can indicate how these processes affect the body’s normal functions, revealing potential toxicities or adverse effects.

In the context of drug development and clinical trials, the correlation between pharmacokinetic (PK) parameters and laboratory data is crucial for understanding how a drug behaves in the body and its potential toxic effects. This correlation helps in optimizing drug dosing, monitoring therapeutic levels, and enhancing overall treatment safety and efficacy.

Methodological Approaches

  • Toxicity and Drug Concentrations: Observing severe toxicities at high plasma drug concentrations provides a direct rationale for therapeutic drug monitoring. This approach ensures that drug levels remain within a therapeutic range that maximizes efficacy while minimizing harm.
  • Complexity of Analyses: The relationships between PK parameters and laboratory data are often complex and require sophisticated statistical tools to decipher.
  • Variability: High inter-individual variability in how patients process drugs can complicate these analyses, making universal conclusions difficult.
  • Statistical Modeling: Techniques such as regression analysis, mixed-effects models, and machine learning algorithms can be used to identify and quantify relationships between PK parameters and lab data.
  • Biomarker Development: Identifying biomarkers that predict drug response or toxicity can significantly enhance the utility of PK data, guiding drug dosing and monitoring strategies.
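
As a minimal sketch of the regression-based modeling mentioned above, the code below relates a lab safety endpoint (maximum on-treatment ALT change from baseline) to an exposure metric such as AUC; the datasets `adpp` and `adlb`, the parameter codes, and the column names are assumptions.

library(dplyr)

# One record per subject: an exposure metric joined to a lab safety endpoint
pk_lab <- adpp %>%                                     # assumed PK-parameter dataset
  filter(PARAMCD == "AUCLST") %>%                      # illustrative exposure parameter
  select(USUBJID, auc = AVAL) %>%
  inner_join(
    adlb %>%
      filter(PARAMCD == "ALT") %>%
      group_by(USUBJID) %>%
      summarise(max_chg = max(CHG, na.rm = TRUE), .groups = "drop"),
    by = "USUBJID"
  )

# Simple linear model of maximum ALT change on exposure; mixed-effects or
# nonlinear exposure-response models may be more appropriate in practice
summary(lm(max_chg ~ auc, data = pk_lab))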

Applications of Correlating PK Parameters with Lab Data

  1. Therapeutic Drug Monitoring (TDM):
    • Purpose: To adjust drug dosages based on individual pharmacokinetic profiles and laboratory markers of toxicity or efficacy.
    • Example: If high drug plasma levels are correlated with laboratory signs of liver toxicity (e.g., elevated liver enzymes), dosing adjustments might be necessary.
  2. Effect of Baseline Conditions:
    • Impact on Drug Absorption and Effectiveness: Pre-existing conditions, such as hepatitis B, can influence how a drug is metabolized and absorbed. Understanding these effects through PK and laboratory data correlation can guide dosage adjustments and treatment planning.
    • Clinical Decision Making: Analyzing how baseline health conditions affect drug kinetics can help in predicting treatment outcomes and personalizing therapies.
  3. Pharmacodynamic Insights:
    • Mechanism of Action: Correlating lab data with PK parameters can elucidate the pharmacodynamic properties of a drug, revealing how it exerts its effects at the cellular or systemic level.
    • Safety and Efficacy: This correlation aids in identifying which pharmacokinetic profiles correspond to optimal therapeutic outcomes and acceptable safety profiles.

Patients at Risk for Laboratory Toxicity

Understanding and predicting laboratory toxicities in various patient subgroups is crucial for developing safer and more effective treatment regimens. Traditional clinical trials often exclude patients with major organ impairments, which limits the understanding of how these populations might respond to new treatments. However, regulatory bodies are increasingly requiring the inclusion of diverse patient subgroups to ensure a comprehensive safety profile.

Challenges in Current Clinical Trials

  • Exclusion of Vulnerable Populations: Initial phases of clinical trials commonly exclude patients with significant organ impairments (like hepatic or renal dysfunctions), which can lead to a lack of data about how these populations react to the treatments.
  • Post-Approval Risks: Once a treatment is approved and enters the market, it becomes accessible to a broader population, including those previously excluded from trials, potentially leading to unforeseen adverse reactions.

Regulatory Requirements

  • Inclusive Study Designs: Regulators are mandating the inclusion of specific subgroups—such as pediatric, geriatric, and patients with existing organ dysfunctions—in clinical trials to evaluate the safety profiles more comprehensively.
  • Exploratory Analyses: These are conducted to identify risk factors associated with the onset, presence, or severity of laboratory toxicities.

Statistical Methods for Safety Analysis

  • Logistic Regression: Useful for identifying predictors of binary outcomes, such as the occurrence of a specific toxicity. It helps in understanding which factors (e.g., age, baseline organ function) increase the likelihood of adverse reactions.
  • Survival Analysis: Applied to examine the time to onset of adverse reactions. This method can handle censored data (where patients drop out or the study ends before an event occurs) and can provide insights into the timing of toxicities.
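
A minimal sketch of both approaches, assuming a one-record-per-subject dataset `adsafety` with an event flag for laboratory toxicity, a time to first toxicity with a censoring indicator, and baseline covariates; all names are illustrative.

library(survival)

# Logistic regression: which baseline factors predict occurrence of toxicity?
logit_fit <- glm(tox_flag ~ AGE + SEX + BASE_CREAT + TRT,
                 family = binomial, data = adsafety)
summary(logit_fit)

# Survival analysis: time to first laboratory toxicity, handling censoring
km_fit  <- survfit(Surv(tox_time, tox_event) ~ TRT, data = adsafety)
cox_fit <- coxph(Surv(tox_time, tox_event) ~ AGE + SEX + BASE_CREAT + TRT,
                 data = adsafety)
summary(cox_fit)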

Graphical Summaries for Laboratory Results

Graphical summaries are invaluable in clinical trials for visually representing laboratory data over time, comparing treatment groups, and identifying outliers that may require further investigation.

Typical Summaries

Box and Whisker Plot

  • Description: This plot is highly effective for depicting the distribution of lab data across different visits within a treatment group.
  • Components:
    • Median: The line in the middle of the box indicates the median value of the dataset.
    • Interquartile Range (IQR): The box represents the 25th to 75th percentile.
    • Whiskers: Extend from the box to 1.5 times the IQR, indicating the range of most data points.
    • Outliers: Data points that fall outside the whiskers are plotted individually, often marked with a specific symbol.
  • Usage: Ideal for showing the central tendency (median) and variability (IQR) of data over time, and for spotting outliers at each visit.

SAS Implementation

* Vertical and horizontal axis definitions;
axis1 label=(angle=90 'Parameter Name');
axis2 label=('WEEK OF STUDY') offset=(0.5 in);
* INTERPOL=BOXT draws box-and-whisker plots with tops on the whiskers;
symbol1 interpol=boxt value=x;
proc gplot data=plotds;
  plot &var*visn / noframe vaxis=axis1 haxis=axis2;
run;
quit;

Mean and Standard Deviation Plot

  • Description: This type of plot compares the mean lab values across treatment groups over time, incorporating error bars to represent one standard deviation above and below the mean.
  • Steps:
    • Data Preparation: Calculate the mean and standard deviation for each treatment group at each visit.
    • Plotting: Display mean values connected by lines, with error bars showing the range of one standard deviation.
  • Usage: Useful for directly comparing the mean responses of treatment groups over time and visualizing the data spread.

SAS Implementation

* Per-parameter, per-treatment, per-visit summary statistics;
proc univariate data=deff_n noprint;
  by parm trtname visit;
  var lbstresn;
  output out=deff_v n=n1 mean=mean1 std=sd1;
run;

* Three records per group (mean, mean + SD, mean - SD) for INTERPOL=HILOJ;
data deff_v;
  set deff_v;
  by parm trtname visit;
  val=mean1;      output;
  val=mean1+sd1;  output;
  val=mean1-sd1;  output;
run;

symbol1 l=1 value=diamond width=3 interpol=hiloj line=1;
symbol2 l=1 value=square  width=3 interpol=hiloj line=5;
legend1 across=2 cborder=black label=none position=(top inside center)
        value=("TRTA(1-4 MG QID)" "&ztrtb&ztrtbb");

* The plot dataset PS (with a windowed visit variable) is assumed to be derived from DEFF_V;
proc gplot data=ps;
  plot val*window=trtname / vaxis=axis1 haxis=axis2 legend=legend1;
run;
quit;

eDISH

The eDISH (evaluation of Drug-Induced Serious Hepatotoxicity) display plots each subject’s peak ALT against peak total bilirubin, both expressed as multiples of the upper limit of normal on logarithmic axes, so that potential Hy’s Law cases stand out in the upper-right quadrant.
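
A minimal ggplot2 sketch of such a display, assuming a per-subject dataset `edish` with peak values already expressed as multiples of the upper limit of normal (max_alt_uln, max_bili_uln) and a treatment column TRT; the 3x ULN and 2x ULN reference lines are the conventional quadrant boundaries.

library(ggplot2)

# Peak post-baseline ALT vs. peak total bilirubin, each as a multiple of ULN
ggplot(edish, aes(x = max_alt_uln, y = max_bili_uln, color = TRT)) +
  geom_point(alpha = 0.7) +
  scale_x_log10() +
  scale_y_log10() +
  geom_vline(xintercept = 3, linetype = "dashed") +   # 3x ULN for ALT
  geom_hline(yintercept = 2, linetype = "dashed") +   # 2x ULN for total bilirubin
  labs(x = "Peak ALT (x ULN)", y = "Peak Total Bilirubin (x ULN)",
       color = "Treatment", title = "eDISH plot")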

Presentation and Handling of Clinical Laboratory Data

The following figure is provided as an example of displaying mean changes in laboratory values over time, presented in clinically relevant groupings based on the template.

Jagadish K., https://www.linkedin.com/posts/jagadishkatam_r-rstudio-datascience-activity-7132144850254123008-yylO/?utm_source=share&utm_medium=member_desktop

# Assumes an ADaM-style lab analysis dataset (e.g., ADLBC) with columns
# PARAMCD, AVISIT, AVISITN, TRTA and a numeric analysis-value column
library(dplyr)
library(ggplot2)
library(patchwork)

# Named to avoid masking ggplot2::labs(); `val` is the analysis-value column name
summarize_lab <- function(data, param, val) {
  data %>%
    # Filter to the requested parameter and non-missing visits
    filter(PARAMCD == param, AVISIT != "") %>%
    # Mean, SD, SE, and 95% CI by treatment and visit
    group_by(TRTA, AVISIT, AVISITN) %>%
    summarise(
      n        = n(),
      mean     = mean(.data[[val]], na.rm = TRUE),
      sd       = sd(.data[[val]], na.rm = TRUE),
      se       = sd / sqrt(n),
      CI_lower = mean - 1.96 * se,
      CI_upper = mean + 1.96 * se,
      .groups  = "drop"
    ) %>%
    mutate(across(c(mean, sd, se, CI_lower, CI_upper), ~ round(.x, digits = 1)))
}

# Example call (dataset name, parameter code, and value column are illustrative)
adlbcn <- summarize_lab(adlbc, param = "ALT", val = "CHG")

# Mean change from baseline with 95% CI, by visit and treatment
g1 <- ggplot(adlbcn, aes(x = reorder(AVISIT, AVISITN), y = mean, group = TRTA)) +
  geom_point(aes(color = TRTA), size = 1.5, position = position_dodge(width = 0.7)) +
  geom_line(aes(color = TRTA, linetype = TRTA), position = position_dodge(width = 0.7)) +
  geom_errorbar(aes(ymin = CI_lower, ymax = CI_upper, color = TRTA),
                position = position_dodge(width = 0.7)) +
  theme_classic() +
  theme(legend.position = "bottom") +
  scale_linetype(guide = "none") +
  guides(color = guide_legend(title = "Treatment")) +
  labs(x = "Visit", y = "Mean Change from Baseline (95% CI)") +
  geom_hline(yintercept = 0, linetype = "dashed")

# Number of patients contributing at each visit, by treatment
t2 <- ggplot(adlbcn, aes(x = reorder(AVISIT, AVISITN), y = TRTA, label = n)) +
  geom_text(size = 3) +
  ggtitle("Number of Patients") +
  theme_bw() +
  theme(
    axis.line  = element_blank(),
    panel.grid = element_blank(),
    axis.ticks = element_blank(),
    axis.title = element_blank(),
    plot.title = element_text(size = 11, hjust = 0)
  )

# Stack the mean plot over the patient-count panel
combined_plot <- g1 + t2 + plot_layout(ncol = 1, heights = c(2, 1))

References

General

  • Chuang-Stein C. Laboratory data in clinical trials: a statistician’s perspective. Controlled Clinical Trials 1998;19(2):167–177. doi:10.1016/S0197-2456(97)00123-2
  • Carlson, Randall K. “Approaches For Using SAS for Conversion of Pharmaceutical Safety Laboratory Values to Standard Units”, 19th Annual Conference Proceedings of the Northeast SAS® Users Group, September 2006
  • Carlson, R. K., & Freimark, N. (n.d.). Presentation and Handling of Clinical Laboratory Data: From Test Tube to Table. Wilmington, DE & Lakewood, NJ: Omnicare Clinical Research, Inc. https://www.lexjansen.com/nesug/nesug07/np/np02.pdf

Multivariate Analysis

  • Brown KR, Getson AJ, Gould AL, et al. Safety of cefoxitin: an approach to the analysis of laboratory data. Rev Infectious Dis 1979;1:228–231.
  • Sogliero-Gilbert G, Mosher K, Zubkoff L. A procedure for the simplification and assessment of lab parameters in clinical trials. Drug Info J 1986;20:279–296.
  • Gilbert GS, Ting N, Zubkoff L. A statistical comparison of drug safety in controlled clinical trials: the Genie score as an objective measure of lab abnormalities. Drug Info J 1991;25:81–96.
  • Chuang-Stein C, Mohberg NR, Musselman DM. The organization and analysis of safety data using a multivariate approach. Stat Med 1992;11:1075–1089.

Mixture Response Distributions

  • Conover WJ, Salsburg DS. Locally most powerful tests for detecting treatment effects when only a subset of patients can be expected to “respond” to treatment. Biometrics 1988;44:189–196.
  • O’Brien PC. Comparing two samples: Extensions of the t, rank-sum, and log-rank tests. JASA 1988;83:52–61.