#DemocraticPoliticians Should Leave #ElonMusk’s #XTwitter.
Undermining Democratic Discourse: Since #Musk’s takeover, #Twitter has become a platform for #misinformation, #hatespeech, and #algorithmicbias, threatening democratic values.
Amplification of Extremism: By removing #contentmoderation safeguards, #X allows #farright voices and #conspiracymyths to spread unchecked, distorting public debate. (1/3)
"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."
https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509
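To make the briefing’s point concrete, here is a minimal sketch of the kind of bias check that processing “special categories of personal data” enables: comparing a high-risk system’s positive-decision rates across groups defined by a protected attribute. Everything here is an illustrative assumption (the toy data, the function name, the 0.1 tolerance); neither the AI Act nor the briefing prescribes this particular metric or threshold.

```python
# Hypothetical sketch: using a protected attribute (special-category data
# under the GDPR) to audit a high-risk system's decisions for group bias,
# roughly the processing the AI Act conditionally permits for this purpose.
# Data, names, and the 0.1 tolerance are illustrative assumptions.
from collections import defaultdict

def demographic_parity_gap(decisions, protected_attribute):
    """Largest difference in positive-decision rates between any
    two groups defined by the protected attribute."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for decision, group in zip(decisions, protected_attribute):
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy audit: loan approvals (1 = approved) tagged with a protected group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a legal standard
    print("Potential disparate impact - flag for human review.")
```

The tension the briefing flags is visible even in this sketch: the audit only works because the protected attribute is collected and processed, which is exactly what the GDPR restricts for special categories of data.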
The Conversation: Unrest in Bangladesh is revealing the bias at the heart of Google’s search engine. “…while Google’s search results are shaped by ostensibly neutral rules and processes, research has shown these algorithms often produce biased results. This problem of algorithmic bias is again being highlighted by recent escalating tensions between India and Bangladesh and cases of […]”
As JD Vance criticizes EU's AI regulation, 12+ US states are considering algorithmic discrimination bills strikingly similar to the EU's AI Act. #AIRegulation #AlgorithmicBias #TechPolicy #JDVance #USStates #AIAct #Discrimination #GovTech #ArtificialIntelligence
#epartheid: Social engineering under the guise of curating discourse.
#epartheid: Echo Chambers labeled as "Safe Spaces."
#epartheid: Moderation policies designed to silence dissent.
#epartheid: Censorship rebranded as Community Guidelines.
@Mastodon #Mastodon
#epartheid: Digital isolation disguised as moderation.
Beyond The Illusion – The Real Threat Of AI: WEF Global Risks Report 2025 https://www.byteseu.com/666199/ #AI #AIHallucinations #AIHealthcare #AIRegulation #AIRisks #AlgorithmicBias #ArtificialIntelligence #EthicalAIFrameworks #GenerativeAI #ResponsibleAI #TechnologicalAcceleration #WorldEconomicForum
"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.
Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.
Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."
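A minimal sketch of how the proposed opt-in regime for facial recognition might look as a consent gate in code; the quoted piece argues for the legal rule, not this design, and every identifier below is a hypothetical illustration.

```python
# Hypothetical sketch of the opt-in/opt-out regime the quoted proposal
# describes: facial recognition runs only for users who have explicitly
# opted in, and an opt-out takes effect immediately.
# Class and method names are illustrative, not a real API.
class ConsentRegistry:
    def __init__(self):
        self._opted_in: set[str] = set()  # opt-in: absence means "no"

    def opt_in(self, user_id: str) -> None:
        self._opted_in.add(user_id)

    def opt_out(self, user_id: str) -> None:
        self._opted_in.discard(user_id)  # revocable at any time

    def allows(self, user_id: str) -> bool:
        return user_id in self._opted_in

def process_face_image(user_id: str, image: bytes, consent: ConsentRegistry):
    if not consent.allows(user_id):
        # Default is no processing: consent was never given or was withdrawn.
        raise PermissionError(f"No facial-recognition consent for {user_id}")
    ...  # the recognition pipeline would run only past this gate

consent = ConsentRegistry()
consent.opt_in("alice")
print(consent.allows("alice"))  # True: processing permitted
consent.opt_out("alice")
print(consent.allows("alice"))  # False: processing must stop
```

The design choice that matches the proposal is the default: the absence of a consent record means no processing, so the system is opt-in rather than opt-out by construction.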
"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.
Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.
Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."