Accelerating a random forest classifier: multi-core, GP-GPU, or FPGA?
Abstract: Random forest classification is a well-known machine learning technique that generates classifiers in the form of an ensemble ("forest") of decision trees. The classification of an input sample is determined by the majority vote of the ensemble. Traditional random forest classifiers can be highly effective, but classification using a random forest is memory bound and not typically suitable for acceleration on FPGAs or GP-GPUs because of the need to traverse large, possibly irregular decision trees. Recent work at Lawrence Livermore National Laboratory has developed several variants of random forest classifiers, including the Compact Random Forest (CRF), that can generate decision trees more suitable for acceleration than traditional decision trees. Our paper compares and contrasts the effectiveness of FPGAs, GP-GPUs, and multi-core CPUs for accelerating classification using models generated by compact random forest machine learning classifiers. Taking advantage of training algorithms that can produce compact random forests composed of many small trees rather than fewer, deeper trees, we are able to regularize the forest such that the classification of any sample takes a deterministic amount of time. This optimization then allows us to execute the classifier in a pipelined or single-instruction, multiple-thread (SIMT) fashion. We show that FPGAs provide the highest-performance solution, but require a multi-chip / multi-board system to execute even modest-sized forests. GP-GPUs offer a more flexible solution with reasonably high performance that scales with forest size. Finally, multi-threading via OpenMP on a shared-memory system was the simplest solution and provided near-linear performance that scaled with core count, but was still significantly slower than the GP-GPU and FPGA solutions.
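The property the abstract exploits is that a compact forest of small, fixed-depth trees classifies every sample in the same number of steps, so traversal can be pipelined on an FPGA or mapped one thread per (sample, tree) pair on a GP-GPU. The minimal sketch below is illustrative only, not the LLNL CRF implementation; the flat breadth-first array layout and the function names are assumptions.

import numpy as np

def classify_one_tree(sample, feature_idx, thresholds, leaf_labels, depth):
    # feature_idx, thresholds: length 2**depth - 1 internal nodes stored in
    # breadth-first order; leaf_labels: length 2**depth integer class labels.
    node = 0
    for _ in range(depth):                       # deterministic trip count
        go_right = sample[feature_idx[node]] > thresholds[node]
        node = 2 * node + 1 + int(go_right)      # breadth-first child index
    return leaf_labels[node - (2 ** depth - 1)]  # map node id to leaf slot

def classify_forest(sample, trees, depth):
    # Majority vote over an ensemble of fixed-depth trees.
    votes = [classify_one_tree(sample, f, t, l, depth) for (f, t, l) in trees]
    return np.bincount(votes).argmax()

Because the loop bound is the tree depth rather than data-dependent, the same structure maps naturally onto a fixed-latency hardware pipeline or SIMT threads without divergence.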
Serum sample stability in ligand-binding assays: challenges in assessments of long-term, bench-top and multiple freeze–thaw
Applications of a planar electrochemiluminescence platform to support regulated studies of macromolecules: Benefits and limitations in assay range
In Silico Evaluation of the Potential Impact of Bioanalytical Bias Difference between Two Therapeutic Protein Formulations for Pharmacokinetic Assessment in a Biocomparability Study
Specific Method Validation and Sample Analysis Approaches for Biocomparability Studies of Denosumab Addressing Method and Manufacture Site Changes
Manufacturing changes occur often during a biological drug product's life cycle; one common change is the manufacturing site. Comparability studies may be required to ensure that such changes do not affect the pharmacokinetic properties of the drug. In addition, the bioanalytical method for sample analysis may evolve during the course of drug development. This paper illustrates the scenario of both manufacturing and bioanalytical method changes encountered during the development of denosumab, a fully human monoclonal antibody that inhibits bone resorption by targeting RANK ligand. Here, we present a rational approach to address the bioanalytical method changes and provide considerations for method validation and sample analysis in support of biocomparability studies. An updated and improved ELISA method was validated, and its performance was compared to that of the existing method. The analytical performance, i.e., the accuracy and precision of standards and validation samples prepared from both manufacturing formulation lots, was evaluated and found to be equivalent. One of the lots was used as the reference standard for sample analysis in the biocomparability study. The study was sufficiently powered and used a parallel design, and the bioequivalence acceptance criteria for small-molecule drugs were adopted. The pharmacokinetic parameters of the subjects dosed with the two formulation lots were found to be comparable. Electronic supplementary material: the online version of this article (doi:10.1208/s12248-012-9414-x) contains supplementary material, which is available to authorized users.
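For context on the criterion mentioned above, the sketch below shows how the standard small-molecule bioequivalence check, a 90% confidence interval for the ratio of geometric means falling within 80-125%, can be computed for a parallel-design study. It is a hypothetical illustration under simple assumptions (log-normal PK parameters, no covariate adjustment); the function name and data arrays are not taken from the study.

import numpy as np
from scipy import stats

def geometric_mean_ratio_ci(test_vals, ref_vals, alpha=0.10):
    # Log-transform, take the difference of means, build a t-based interval,
    # and exponentiate back to the ratio scale.
    log_t, log_r = np.log(test_vals), np.log(ref_vals)
    diff = log_t.mean() - log_r.mean()
    se = np.sqrt(log_t.var(ddof=1) / log_t.size + log_r.var(ddof=1) / log_r.size)
    df = log_t.size + log_r.size - 2              # simple pooled-df approximation
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    lower, upper = np.exp(diff - tcrit * se), np.exp(diff + tcrit * se)
    return lower, upper, (lower >= 0.80 and upper <= 1.25)   # 80-125% criterion

# Example (hypothetical arrays): geometric_mean_ratio_ci(np.array(auc_lot_a), np.array(auc_lot_b))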
Validation of a microfluidic platform to measure total therapeutic antibodies and incurred sample reanalysis performance
Bioanalytical method requirements and statistical considerations in incurred sample reanalysis for macromolecules
Assessment of Incurred Sample Reanalysis for Macromolecules to Evaluate Bioanalytical Method Robustness: Effects from Imprecision
Incurred sample reanalysis (ISR) is recommended by regulatory agencies to demonstrate the reproducibility of validated methods and to provide confidence that methods used in pharmacokinetic and toxicokinetic assessments give reproducible results. For macromolecules to pass ISR, regulatory recommendations require that two-thirds of ISR samples be within 30% of the average of the original and reanalyzed values. A modified Bland–Altman (mBA) analysis was used to evaluate whether total error (TE), the sum of imprecision and inaccuracy, was predictive of a method's passing ISR and to identify potential parameters contributing to ISR success. Simulated studies determined the minimum precision requirements for methods to pass ISR and evaluated the relationship between precision and the probability of meeting the ISR acceptance criteria. The present analysis evaluated ISRs conducted for 37 studies involving ligand-binding assays (LBAs), with TEs ranging from 15% to 30%. An mBA approach was used to assess the accuracy and precision of ISR, each with a threshold of 30%. All ISR studies met current regulatory criteria; using mBA, all studies met the accuracy threshold of 30% or less, but two studies (5%) failed to meet the 30% precision threshold. Simulation results showed that when an LBA has ≤15% imprecision, the ISR criteria for both the regulatory recommendation and mBA would be met in 99.9% of studies. Approximately 71% of samples are expected to be within 1.5 times the method imprecision. Precision therefore appears to be a critical parameter in LBA reproducibility and may be useful in identifying methods likely to have difficulty passing ISR.
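The acceptance rule and the simulation idea described in this abstract can be made concrete with a short sketch: a study passes ISR when at least two-thirds of repeats differ from the original result by no more than 30% of the mean of the two values, and a Monte Carlo loop estimates how often that criterion is met for a given assay imprecision. The error model, sample sizes, and CV values below are illustrative assumptions, not the paper's.

import numpy as np

def isr_passes(original, repeat, limit=0.30, required_fraction=2 / 3):
    # Percent difference is taken relative to the mean of the two measurements.
    original, repeat = np.asarray(original, float), np.asarray(repeat, float)
    pct_diff = np.abs(repeat - original) / ((repeat + original) / 2)
    return np.mean(pct_diff <= limit) >= required_fraction

def simulated_pass_rate(cv, n_samples=50, n_studies=10_000, seed=0):
    # Probability of passing ISR when both measurements carry lognormal error
    # with coefficient of variation `cv` and no systematic bias.
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1 + cv ** 2))          # lognormal sigma for a given CV
    true = rng.uniform(1.0, 100.0, (n_studies, n_samples))
    orig = true * rng.lognormal(0.0, sigma, true.shape)
    rept = true * rng.lognormal(0.0, sigma, true.shape)
    pct = np.abs(rept - orig) / ((rept + orig) / 2)
    return np.mean(np.mean(pct <= 0.30, axis=1) >= 2 / 3)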
