High-content screening in a high-throughput fashion is routinely limited to cell lines and other explant model systems; however, these may not be fully representative of the in vivo environment due to culture adaptation or the lack of multi-lineage cell types. Gathering high-content data directly from primary samples (blood and bone marrow, metastasized cancers, and dissociated solid tumors), without cell outgrowth or selection and in a method amenable to laboratory automation, offers a more direct system. Further, combining imaging of these primary samples with adaptable analysis pipelines, robust to the micro-aggregates formed especially in solid tumor biopsy homogenates and to vastly different cell shapes and sizes, and ultimately able to harness the features of each cell, becomes a powerful means to study drug response in a variety of indications using model systems derived directly from the patient. This methodology has been used to prioritize therapy for late-stage patients with hematological cancers in a basket trial (Snijder & Vladimer et al. 2017, Lancet Haematology) and has been integrated with genetic data to further uncover biological understanding and clinical synergy options (Schmidl & Vladimer et al. 2019, Nat Chem Biol).
Here, this talk will focus specifically on the details of the computational framework, including supervised and unsupervised machine learning approaches for cell identification and feature extraction, as well as other necessary infrastructure, including cloud deployment, used to quantify, at very high throughput, single-cell phenotypes from primary material of cancer patients for drug discovery. Further, the use case of understanding single-cell phenotypes after drug screening, both in single-cell suspensions and in micro-aggregate multi-cell / 3D environments, will be highlighted.
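To make the cell-identification and feature-extraction stage concrete, the sketch below shows a deliberately minimal version of such a step on a synthetic image, using only numpy and scipy. This is an illustrative assumption, not the actual framework from the talk: the real pipeline uses machine-learning-based identification robust to micro-aggregates, whereas here a simple global threshold and connected-component labeling stand in for it, and all function names and parameters are invented for the example.

```python
# Minimal illustrative sketch (NOT the actual pipeline): threshold-based
# cell identification plus per-cell feature extraction on a synthetic image.
import numpy as np
from scipy import ndimage as ndi

def segment_cells(image, threshold=None):
    """Label connected bright regions as candidate 'cells' (toy stand-in
    for the learned cell-identification step described in the talk)."""
    if threshold is None:
        # Simple global threshold: mean + 2 standard deviations.
        threshold = image.mean() + 2 * image.std()
    mask = image > threshold
    labels, n_cells = ndi.label(mask)
    return labels, n_cells

def extract_features(image, labels, n_cells):
    """Per-cell feature table: area (pixels) and mean intensity."""
    idx = np.arange(1, n_cells + 1)
    areas = ndi.sum_labels(np.ones_like(image), labels, idx)
    intensities = ndi.mean(image, labels, idx)
    return np.column_stack([areas, intensities])

# Synthetic 'image': noisy background plus two bright square cells.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, size=(64, 64))
img[10:16, 10:16] += 50.0   # small, bright cell
img[40:50, 40:50] += 30.0   # larger, dimmer cell

labels, n = segment_cells(img)
features = extract_features(img, labels, n)
print(n, features.shape)  # two cells, each with (area, mean intensity)
```

The resulting per-cell feature matrix is the kind of input that downstream supervised classifiers or unsupervised clustering would consume to quantify single-cell drug-response phenotypes.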