Post by account_disabled on Mar 5, 2024 1:21:51 GMT -5
Beauty is a concept as variable as the eye that sees it. So the saying goes, even though the standards promoted by this multimillion-dollar industry reflect a tiny percentage of real society. For Plato, physical perfection could not depend on the tastes of men. But what about robots? The scientists behind the Beauty.AI project wanted to test whether Artificial Intelligence is capable of perceiving beauty through objective factors such as symmetry and health. More than six thousand people from a hundred countries took part in this robotic Miss Universe. Contestants had to upload a selfie to the project's website so that their faces could be analyzed, supposedly without bias, by a programmed algorithm. "Beauty.AI aims to evolve into an objective personal assistant (robot) that advises us on our best look so we always appear young," explained the founder of the laboratory that promoted the contest. Beyond these lighthearted intentions, the project aimed to accumulate data so that machines could eventually evaluate a person's long-term health based on their age, skin color or wrinkles.
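To make the premise concrete, here is a minimal sketch, in Python with NumPy, of what one of those "objective" factors might look like: a left-right symmetry score for a face image. Beauty.AI never published its actual algorithms, so the function name and scoring scheme below are illustrative assumptions, not the contest's real method.

```python
# Illustrative sketch only: Beauty.AI's real scoring was not published.
# Scores an image's left-right symmetry: 1.0 means it equals its mirror image.
import numpy as np

def symmetry_score(face: np.ndarray) -> float:
    """Return a 0..1 score for an 8-bit grayscale image."""
    mirrored = face[:, ::-1]  # flip the image left-right
    diff = np.abs(face.astype(float) - mirrored.astype(float))
    return 1.0 - diff.mean() / 255.0  # normalize mean pixel error to 0..1

# Toy example: a perfectly symmetric 4x4 "face" scores 1.0
face = np.array([[10, 20, 20, 10]] * 4, dtype=np.uint8)
print(symmetry_score(face))  # -> 1.0
```

Even a factor this mechanical is not value-free: someone chose that symmetry should count at all, and how much.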
The results of the contest, however, erased in one fell swoop all the objectivity it boasted about. Of the 44 winners, virtually all were light-skinned blondes, a few were Asian, and only one was black. The controversy would have remained an anecdote had it not reignited the debate about technology that is sold as impartial when in reality it helps perpetuate traditional prejudices. Along these lines, several projects sponsored by Microsoft that show the same flaws, with even greater consequences, were brought back into the spotlight. "The degree of subjectivity of the person who programs that device is crucial in this type of application. Of course it influences the result," explains the general secretary of the Spanish Association of Artificial Intelligence.

Terror of machine learning

The benefits of AI are many and, as the researcher recalls, most of them receive no space in the media. "The projects with the most media impact are the ones that have failed, although that is not the usual trend in the field of research." But a single error in the code is enough for machine learning to be presented once again as the catastrophe of the future.
In that sense, the hand behind software that generates wayward responses in a chat does not matter as much as the one that decides which prisoner should receive parole. The first aimed to reproduce the fantasy devised by Spike Jonze in Her and build a teenage bot that would learn automatically from conversations with its interlocutors. But when Tay began making racist statements on Twitter and even asking its followers for sex, Microsoft had to cancel the project. The accusations of xenophobia leveled at Bill Gates' company were nothing compared to the controversy aroused by the following software: an application programmed to predict future criminals. The ProPublica website stated in its article Machine Bias that this computer program, used in several jurisdictions in the United States, has a racist bias. Researchers took data from offenders in Florida and found that 60 percent did, in fact, reoffend within two years, and that group contained as many whites as blacks. In the remaining 40 percent, the wrong predictions, they observed a significant racial disparity: black defendants were twice as likely to be classified as "high risk" even though they were not arrested again, while white defendants, who topped almost all of the "low risk" lists, went on to reoffend.
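For readers who want to see how that kind of disparity is measured, here is a minimal sketch of a group-wise false positive check, the sort of comparison behind ProPublica's analysis. The records and numbers below are invented for illustration; the real study used thousands of Florida court records, and this is not ProPublica's actual code.

```python
# Sketch of a group-wise false positive rate check (invented toy data).
# A "false positive" here is someone flagged high risk who did not reoffend.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, True),
    ("white", False, False), ("white", True,  True),
    ("white", False, True),  ("white", False, True),
]

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if high_risk:
            false_pos[group] += 1

for group in negatives:
    print(f"{group}: false positive rate = {false_pos[group] / negatives[group]:.0%}")
```

A model can be right 60 percent of the time overall and still distribute its mistakes very unevenly, which is exactly the pattern the article describes.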