Security Congress Abstract
We don't know how AI-based security systems think or arrive at decisions. We will examine bias in big data as a root cause of errors in security systems. Well-balanced data sets are scarce, making AI outputs unreliable, and embedded biases are hard to detect. Machine learning models depend entirely on the data sets fed to them to learn their task. High-profile scandals have arisen when systems misclassified racial groups because implementers neglected to quantify bias in the training data. We show how cultural, class, and societal biases play an unexpectedly strong role in AI-based security systems, and how they correlate directly with the error rates of such systems. Finally, we propose an AI-security development framework for minimizing errors through bias reduction.
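As one illustration of the kind of bias quantification the abstract calls for, the sketch below (using fabricated toy data and a hypothetical helper name) measures the positive-label rate per demographic group in a labeled data set; a large gap between groups is a simple warning sign worth auditing before training.

```python
# Minimal sketch (fabricated data): quantify one form of dataset bias by
# measuring the positive-label rate per demographic group. A large gap
# between groups flags an imbalance that should be audited before training.

from collections import defaultdict

def positive_rate_by_group(records):
    """Return {group: fraction of positive labels} for (group, label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Toy, fabricated sample of (group, label) pairs.
sample = [("A", 1), ("A", 1), ("A", 0),
          ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

rates = positive_rate_by_group(sample)
gap = max(rates.values()) - min(rates.values())  # inter-group disparity
```

This is only a first-pass check; a full bias audit would also examine feature distributions and error rates per group, not just label rates.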