"The release of ChatGPT at the end of 2022 met with fears and optimism. One particularly important avenue of research that is emerging revolves around ChatGPT's ability to provide accurate and unbiased information on a variety of topics. Given the interest that Google and Microsoft have shown in similar technologies, it is likely that Large Language Models such as ChatGPT could become new gateways to information, and if this is the case, what kind of information this technology provides needs to be investigated. The current study examines the usefulness of ChatGPT as a source of information in a South African context by first investigating ChatGPT's responses to ten South African conspiracy theories in terms of truthfulness, before employing bias classification as well as sentiment analysis to evaluate whether ChatGPT exhibits bias when presenting eight South African political topics. We found that, overall, ChatGPT did not spread conspiracy theories. However, the tool generated falsehoods around one conspiracy theory and generally presented a left bias, albeit not to the extreme. Sentiment analysis showed that ChatGPT's responses were mostly neutral and, when more emotive, were more often positive than negative. The implications of the findings for academics and students are discussed, as are a number of recommendations for future research." (Abstract)
"[...] ChatGPT’s creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world’s most valuable AI companies. But the success story is not one of Silicon Valley genius alone. In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found. [...]
OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty. The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods. The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs."
"AI tools, from ChatGPT to Google Translate, are useless to billions of people in the Global South who don't work in western languages. Researchers and startups from Africa and other parts of the world are changing that." (Introduction)