Common sources of bias: historical bias, proxy variables, unbalanced datasets, algorithm choices and user interaction

On the surface, machine learning looks impartial. Algorithms have no concept of sensitive characteristics such as race, ethnicity, gender or religion, so it should not be possible for them to make biased decisions against certain groups. Yet, if left unchecked, they will do exactly that. To correct biased decisions, we first need to understand where the bias comes from.
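To make the proxy-variable point concrete, here is a minimal synthetic sketch (not drawn from any real dataset or from this article's examples): a logistic regression is trained without ever seeing the sensitive attribute, yet a correlated proxy feature reproduces the historical disparity in the labels. The feature names (`neighbourhood`, `income`) and the data-generating assumptions are purely illustrative.

```python
# Illustrative sketch: a proxy variable reintroduces bias even though
# the sensitive attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute: two groups, never shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy variable: "neighbourhood" is strongly correlated with group
# (think of a zip code shaped by historical segregation).
neighbourhood = group + rng.normal(0, 0.3, size=n)

# A genuinely relevant feature, independent of group.
income = rng.normal(0, 1, size=n)

# Historical labels: past decisions favoured group 0, so the recorded
# outcomes already encode bias.
label = ((income + 0.8 * (1 - group) + rng.normal(0, 0.5, size=n)) > 0.5).astype(int)

# Train only on the "neutral-looking" features: no group column at all.
X = np.column_stack([neighbourhood, income])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The positive-prediction rate still differs sharply between groups,
# because the proxy carries the group information into the model.
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```

Running this shows a markedly higher positive rate for one group, even though the group column was dropped before training: the model simply learned the disparity back through the proxy.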