With the rapid expansion of large language models and artificial intelligence (AI), gender and racial bias and discrimination are becoming more visible. Research has shown that large language models such as ChatGPT and LLaMA respond more accurately to inputs about men than to inputs about women, and facial recognition has been reported to be more accurate for lighter-skinned people than for darker-skinned people. A well-known example is a beauty contest. In 2016, a beauty contest judged by AI was held, and more than 6,000 people from around the world applied. Of the finalists, 37 were white, 6 were Asian, and 1 was neither. The cause lay in the training data: the photos available for the AI to learn from were overwhelmingly of light-skinned faces, so it could only choose from those. Biased or unreliable data distorts an AI's learning, with the result that it gives people incorrect information.
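The mechanism behind the contest example can be illustrated with a minimal, entirely synthetic sketch. Everything below is invented for illustration: a one-parameter "model" is trained on data in which one group vastly outnumbers another, and it ends up accurate for the majority group but noticeably worse for the minority group, just as a face dataset dominated by light-skinned photos would.

```python
import random

random.seed(0)

def make_group(n, boundary):
    """Synthetic samples (feature, label); label is 1 iff feature > boundary."""
    return [(f, 1 if f > boundary else 0) for f in (random.random() for _ in range(n))]

# Hypothetical skewed training set: group A (true boundary 0.5) dominates,
# group B (true boundary 0.3) is underrepresented -- analogous to a photo
# dataset containing mostly light-skinned faces.
train = make_group(950, 0.5) + make_group(50, 0.3)

def accuracy(data, t):
    """Fraction of samples correctly classified by the threshold t."""
    return sum((f > t) == bool(y) for f, y in data) / len(data)

# "Training": pick the single threshold that best fits the mixed data.
# Because group A dominates, the learned threshold sits near A's boundary.
best_t = max((t / 100 for t in range(101)), key=lambda t: accuracy(train, t))

test_a = make_group(500, 0.5)
test_b = make_group(500, 0.3)
print(f"learned threshold: {best_t:.2f}")
print(f"accuracy on majority group A: {accuracy(test_a, best_t):.2f}")
print(f"accuracy on minority group B: {accuracy(test_b, best_t):.2f}")
```

The model is not "wrong" on its own terms; it simply fits the data it was given, and the data underrepresents group B.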
Why does prejudice become amplified in fields such as artificial intelligence (AI)? In the 1980s, there were very few female scientists at Harvard, and perhaps because of this, a prejudice arose that women's brains are not suited to science. The same thing is still said in high schools and universities in Japan. It has also been found that the phrase "he said" appears far more often on the Internet than "she said." ChatGPT and similar systems learn from this text as training data. Because machine learning algorithms are trained on past data, such skews are absorbed as stereotypes and then spread further. AI development therefore needs to incorporate social analysis at an early stage: no matter how much the makers of ChatGPT and Google aim for a bias-free world, once a biased pattern is built into a model, it is difficult to correct.
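The "he said" / "she said" point can be sketched with a toy frequency count. The mini-corpus below is invented, but the mechanism is the one a purely statistical predictor relies on: emit whatever was most common in the training text, reproducing its skew as if it were a fact.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for web text in which
# "he said" appears far more often than "she said".
corpus = (
    "he said the plan would work . she said the data was noisy . "
    "he said costs would fall . he said the model converged . "
    "he said the launch was on track ."
).split()

# Training statistic: which pronoun comes right before "said"?
counts = Counter(prev for prev, word in zip(corpus, corpus[1:]) if word == "said")
print(counts["he"], counts["she"])   # 4 1

# A frequency-based predictor emits the most common option, so the
# imbalance in the corpus becomes the model's default answer.
print(counts.most_common(1)[0][0])   # he
```

Real language models are far more sophisticated than a bigram count, but they are still fit to text statistics, which is why imbalances in the source text surface in their output.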
In future society, information from social media (SNS) will be indispensable, and it is expected that fakes will be mixed into it. The key to defense seems to lie in productive teams. Productive teams have a culture in which members respect each other's ideas, and a sense of security that allows them to admit mistakes and take risks to challenge themselves. We will place importance on this kind of information sharing within the team. However, that alone would turn the team into an information "Galapagos," cut off from outside perspectives, so we will also exchange information with trusted external teams. Academic conferences and major newspapers rarely spread fake news, and it seems wise to use such sources as a guide when choosing what to read on social media.