Outrage Erupts as Meta AI Recommends Conversion Therapy

Why did Meta’s AI model, Llama 4, suggest a practice as widely condemned as conversion therapy? The recommendation ignited significant backlash because conversion therapy has been deemed harmful by major medical organizations and the United Nations. In GLAAD’s testing, the AI advised users to “explore conversion therapy,” a suggestion that sparked outrage in the LGBTQ+ community, which sees it as legitimizing a practice that has been equated to torture.
Critics argue that the recommendation may reflect a political bias, one that aligns with right-leaning ideology and contradicts the established scientific consensus firmly opposing conversion therapy.
The controversy has fueled existing concerns over Meta’s recent rollback of anti-hate-speech policies and fact-checking, raising alarms about potential harm to LGBTQ+ users on its platforms. Advocacy groups are emphasizing the urgent need for education and awareness about the harms of conversion therapy, and they are calling for accountability in AI development, stressing the importance of including and representing queer voices in these processes.
Bias in AI systems like Llama 4 underscores the broader consequences of ideological influence in technology, especially when it can perpetuate harmful practices. Meta now faces the challenge of addressing these criticisms and ensuring its AI models do not further endanger marginalized communities.
The incident serves as a stark reminder of the responsibility that comes with AI development—ensuring that technologies don’t inadvertently endorse or perpetuate ideologies that can harm vulnerable groups. The need for rigorous fact-checking and unbiased AI is more pressing than ever.