BACKGROUND: Over the years, models that assess combined genetic burden have been refined as more putative risk SNPs have been identified and with the adoption of logistic regression genetic risk score (logitGRS) models. However, these models do not account for potential interaction effects between loci. Our OBJECTIVE was to apply a logitGRS model to a novel cohort from the southeast U.S. and to compare its performance to a machine learning neural network genetic risk (nnGR) model. We hypothesized that the nnGR model could learn to account for locus interaction effects and could therefore more accurately classify subjects as patients or controls. METHODS: A cohort consisting of 325 controls, 543 first-degree relatives, and 519 T1D subjects was genotyped on a custom SNP array for 3 SNPs to impute HLA DR3 and DR4 and for 30 additional T1D-risk loci. The logitGRS calculation incorporated the odds ratio and the number of risk alleles carried for each locus. The nnGR model input was the number of minor alleles carried at each SNP. RESULTS: The logitGRS yielded a T1D vs. control ROC-AUC of 0.844, and a logitGRS above the 50th T1D-centile was indicative of T1D with 92.9% specificity. Interestingly, the logitGRS correlated negatively with age at diagnosis (Pearson P=0.0002). The logitGRS reached a peak balanced accuracy of 63.5% at classifying T1D and control subjects, while the nnGR model peaked at 65.8%. This modest improvement may aid in cohort stratification, which will improve functional studies, biomarker identification, and subject selection for interventional and natural history trials.
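The logitGRS construction described in METHODS can be sketched as a weighted allele count, where each locus contributes its risk-allele dosage multiplied by the log of its odds ratio. The odds ratios, loci, and genotypes below are purely illustrative placeholders, not values from the study:

```python
# Minimal sketch of a logistic-regression genetic risk score (logitGRS):
# each locus contributes (risk-allele dosage, 0/1/2) x ln(odds ratio).
# All numbers here are hypothetical examples, not study estimates.
import math

def logit_grs(risk_allele_counts, odds_ratios):
    """Sum of per-locus log-odds-ratio weights times risk-allele dosage."""
    return sum(n * math.log(or_) for n, or_ in zip(risk_allele_counts, odds_ratios))

# Illustrative subject: three loci with assumed ORs of 3.5 (e.g. an
# HLA haplotype), 1.8, and 1.3, carrying 2, 1, and 0 risk alleles.
example_counts = [2, 1, 0]
example_ors = [3.5, 1.8, 1.3]
score = logit_grs(example_counts, example_ors)
```

A score computed this way is additive on the log-odds scale, which is precisely why it cannot capture interaction effects between loci; the nnGR model's hidden layers are what allow such interactions to be learned from the same allele-count inputs.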