Algorithms can be as flawed as the humans they replace -- and the more data they use, the more opportunities arise for those flaws to emerge.
It is foolish to believe data scientists are any less biased than those in other roles.
Sometimes simple is better, and bias, whether cognitive or algorithmic, is a constant threat to better decision making.
Algorithms will tend to discriminate against attributes that, though beyond people's control, have historically been correlated with a lack of success. A marker of poverty or race, for example, can translate into a demerit even when the person is eminently qualified, thus reinforcing the very pattern the algorithm finds in its training data. In short, handing decisions over to machine learning models trained on historical data isn't likely to improve on prejudiced humans. And more complex models, such as neural networks, require far more data, which is why they tend to be reserved for problems like self-driving cars.

The Harvard Business Review study concludes that using less data would be better: remove the information on gender and clubs altogether, and focus on performance in law school. It's a lesson that should be valuable for big-data modelers, too.
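To make the mechanism concrete, here is a minimal sketch in Python. It is not the HBR study's analysis; the data is synthetic and the feature names are illustrative assumptions. A classifier trained on historical decisions that penalized one group reproduces that penalty for equally qualified candidates, while a model restricted to the performance signal does not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" data (hypothetical): a protected attribute that past
# decision-makers penalized independently of merit.
group = rng.integers(0, 2, n)    # 0/1 protected attribute
merit = rng.normal(0.0, 1.0, n)  # true performance signal

# Historical outcomes encode both merit AND a penalty on group == 1.
past_hired = ((merit - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0).astype(int)

X_full = np.column_stack([merit, group])  # model sees the biased attribute
X_merit = merit.reshape(-1, 1)            # model sees performance only

full = LogisticRegression().fit(X_full, past_hired)
lean = LogisticRegression().fit(X_merit, past_hired)

# Score two equally qualified candidates who differ only in group membership:
# the full model inherits the historical penalty; the merit-only model cannot.
p_group0 = full.predict_proba([[1.0, 0]])[0, 1]
p_group1 = full.predict_proba([[1.0, 1]])[0, 1]
p_merit = lean.predict_proba([[1.0]])[0, 1]
print(f"full model:  group 0 -> {p_group0:.2f}, group 1 -> {p_group1:.2f}")
print(f"merit-only model: either group -> {p_merit:.2f}")
```

Note that dropping the explicit attribute alone is not a complete fix: any remaining feature that correlates with it, like club membership in the study's data, can act as a proxy and smuggle the same penalty back in. That is why the recommendation removes both gender and clubs rather than gender alone.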