A random piece generated the following:
We know, though we try not to believe it, that bias cannot be erased from anything created by a human; human bias is inevitable. Some engineers (social, civil, name your kind) will offer to detect bias and correct for it article by article, act by act. Yet to solve for evidence of bias in how people choose articles, acts, words, or deeds is to solve a math problem without first assembling the set of laws or hypotheses that govern the problem at large, i.e. without encompassing the problem holistically.
The only way to control for bias in human-made products is a series of corrective and counter-corrective procedures, each refining the last without ever terminating, somewhat like computing successive approximations of pi. The right way to run that ‘evidence of bias and correction for it’ experiment would be to solve the hypothesis and the application problems simultaneously. Otherwise, the solution would amount to examining already-biased data and piling up evidence of certain degrees of bias without asking why those biases exist.
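The pi analogy can be made concrete: each corrective step improves the estimate, yet no finite number of steps finishes the job. A minimal sketch, using the Leibniz series (an illustration chosen here, not anything specified in the original text):

```python
def leibniz_pi(terms: int) -> float:
    """Approximate pi via the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ...

    Each added term is a small correction; the estimate converges on pi
    but never reaches it exactly -- like bias correction applied
    procedure by procedure, fading into multiplicity.
    """
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1000))  # close to pi, but never exact
```

The point of the analogy survives in the code: adding another corrective term always helps a little, and the process has no natural stopping point.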
The missing problem, across the whole field of machine learning, deep learning, AI, and the cognitive manipulation of political-economic populations, socio-political power, and attribute-signalling furore, is why, given a choice (and not merely a yes/no choice, as in business or consumer-choice models or controlled experiments), individuals and groups choose one way, toward one type of eventual result, rather than others.
The calculation of outcome is not quite as cerebral as in chess. People often choose to keep, or return to, homeostasis: to maintain what keeps them happy and fulfilled, or to engineer outcomes in the common world that would restore the lost homeostasis to them (as the preferred group, not merely one participating group). Rarely do people choose an outcome not yet imagined, and rarely are they willing to follow along without a cognitive picture of what the future will look like.
Bias is linked to payoff; ethically, it is impossible to remove human bias as long as we cannot untangle the gnarls of human motivation.
It would perhaps be more desirable, and more fruitful, to control for bias and payoff than to attempt to remove bias outright (even by cancelling out a negative with a positive). The calculation of payoff and satisfaction is a long derivation, and only a long game will yield evidence that stands up to scrutiny, as well as the means to manage that evidence.
The real game in the digital world now is the vast experiment in controlled human emotion and behavior. It can be played by anyone who wants to ‘make an impact’ in the worlds linked to social media, but it is especially visible, and truly formidable, on platforms under names such as Google and Facebook. Data-driven regimes are perfect for both (1) convincing digitally docile populations to believe in the veracity of small steps, changes, and choices as narrated within a framework of justice (righting wrongs, reversing damage), and (2) gradually manipulating them to think and believe in digitally calculable ways. The Panopticon meets the Foundation trilogy.