Talk:Wikimedia Foundation Artificial Intelligence and Machine Learning Human Rights Impact Assessment
Feedback
My feedback about this report is that having (only) a report about human rights risks and threats is somewhat biased and flawed, even though producing and prioritizing such a report first could be a good idea.
It misses an analysis of opportunities and of the technical aspects of potential uses, and when it comes to human rights it also seems to miss the positive, human-rights-relevant uses of AI/ML. I'm not certain of this since I haven't read the full document yet, but I think it at most touches the topic tangentially, by looking at where AI is already used extensively rather than where it could be used.
I'll be concrete. AI can be used to make information relevant to, or directly about, human rights more accessible: for example, available in additional languages, in better quality, or in other media formats. To give concrete examples: I turned the long English Wikipedia article about the effects of climate change, as of 2024 (not some version from ten years ago), into audio of above-human quality (in some cases with one or a few fixable minor pronunciation issues); this relates to the human right to a clean, healthy and sustainable environment, among other human rights – see on the right. Moreover, I visualized the genre of solarpunk and a certain concept of sustainable, livable cities, filling a gap in available free media. Lastly, I redubbed some human-rights-relevant videos into other languages, such as the ones on the right, from German to Spanish and from English to Spanish. The community is largely asleep on AI, and all it ever hears or talks about are possibly exaggerated risks and threats of AI, mainly text-generating chatbot LLMs (even though those could well be the least useful kind of AI for Wikimedia).
Prototyperspective (talk) 12:49, 1 October 2025 (UTC)