mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog

Server statistics:

1.4K active profiles

#algorithmicbias

0 posts · 0 participants · 0 posts today
Debby
@ErikJonker @geopolitics This is such an important observation about how platform design shapes public discourse!
It’s both fascinating and concerning to see how the same factual information can spark constructive conversation on one platform and devolve into disinformation on another. The contrast between Bluesky and X really underscores how algorithms and moderation policies influence the quality of dialogue.

For me, the fediverse—especially Mastodon—has been a breath of fresh air in this regard. It feels like a space where facts and evidence-based discussions can thrive, rooted in a shared reality rather than outrage or misinformation. But I’m curious: Is this just my personal experience, or do others share the impression that Mastodon fosters a more fact-based discussion environment? Have you explored other platforms beyond Bluesky that prioritize constructive dialogue?

It’s disheartening to see how platforms that prioritize engagement over accuracy can drown out meaningful conversations. While I’m fortunate enough to avoid Twitter/X and Meta, I recognize that transitioning to open-source, decentralized social networks isn’t feasible for everyone. This makes me wonder: How can we encourage more platforms to adopt models that foster informed debate rather than outrage? Supporting not-for-profit or decentralized alternatives might be part of the solution, but it’s a challenge that requires broader awareness and action.

Thanks for sharing this—it’s a powerful reminder of how critical platform design is to the health of our digital public spaces!

#DigitalLiteracy #PlatformDesign #EvidenceBasedDiscourse #Fediverse #Mastodon #TechEthics #ConstructiveDialogue #AlgorithmicBias #FactOverFiction #twitter #Bluesky
SRF IRIS
What does it actually mean when we say that generative AI raises ethical questions?
🔵 Dr. Thilo Hagendorff, our research group leader at IRIS3D, has taken this question seriously and systematically. With his interactive Ethics Tree, he has created one of the most comprehensive overviews of ethical problem areas in generative AI: https://lnkd.in/ebzZYaU7
More than 300 clearly defined issues – ranging from discrimination and disinformation to ecological impacts – demonstrate the depth and scope of the ethical landscape. This “tree” does not merely highlight risks, but structures a field that is increasingly under pressure politically, technologically, and socially.
Mapping these questions so systematically underlines the need for ethical reflection as a core competence in AI research – not after the fact, but as part of the epistemic and technical process.

#GenerativeAI #AIethics #ResponsibleAI #EthicsInAI #TechEthics #AIresearch #MachineLearning #AIgovernance #DigitalEthics #AlgorithmicBias #Disinformation #SustainableAI #InterdisciplinaryResearch #ScienceAndSociety #IRIS3D
The-14
Here’s why the public needs to challenge the ‘good AI’ myth pushed by tech companies
#Tech #AI #GoodAI #AIMyth #TechEthics #DataBias #PrivacyRights #SurveillanceCapitalism #AlgorithmicBias #ResistAI #CriticalTech #EthicalAI #DigitalRights #TechJustice
https://the-14.com/heres-why-the-public-needs-to-challenge-the-good-ai-myth-pushed-by-tech-companies/
BiyteLüm
🔒 Myth-busting: AI isn’t always intelligent—it reflects the data it’s fed. Biased training data can lead to discriminatory outcomes in hiring, policing, even credit scores.

🧠 Always ask: Who trained the AI? On what data?

Transparency & accountability matter.
#PrivacyAware #AIethics #DataProtection #TechJustice #AlgorithmicBias
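The claim in this post, that biased training data yields biased decisions, is easy to demonstrate end to end. Below is a minimal, self-contained sketch (synthetic data, hypothetical groups and thresholds, nothing from any real system) in which a model "trained" on historically skewed approvals reproduces the skew for identical applicants, and a simple group-wise audit makes it visible.

```python
# Minimal illustration of "biased data in, biased decisions out".
# All data here is synthetic and hypothetical; the audit is the point.
import random

random.seed(0)

def make_history(group, n, approval_bias):
    # Historical decisions: identical underlying skill distributions,
    # but one group was approved less often -- the bias we train on.
    rows = []
    for _ in range(n):
        skill = random.random()
        approved = skill + approval_bias > 0.5
        rows.append((group, skill, approved))
    return rows

history = make_history("A", 1000, 0.10) + make_history("B", 1000, -0.10)

# "Training": learn one approval threshold per group from the labels,
# which simply bakes the historical skew into the model.
def learned_threshold(rows, group):
    approved = [skill for g, skill, ok in rows if g == group and ok]
    return min(approved)  # lowest skill that was ever approved

model = {g: learned_threshold(history, g) for g in ("A", "B")}

# Audit: apply the model to identical applicant pools per group
# and compare approval rates.
applicants = [random.random() for _ in range(10_000)]
for g in ("A", "B"):
    rate = sum(s >= model[g] for s in applicants) / len(applicants)
    print(f"group {g}: threshold {model[g]:.2f}, approval rate {rate:.1%}")
```

The audit at the end is the practical takeaway: comparing outcome rates across groups on identical inputs is one of the simplest checks for this kind of learned bias.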
Dr Robert N. Winter
In the final instalment of this edition of the Talent Aperture Series, I continue the case that hiring isn't procurement—it's stewardship—and explore:

🧠 How we reclaim human judgement in hiring
📈 Why blind recruitment and contextual interviews are gaining ground
💎 What good decision-making really demands in a world drunk on metrics.

https://robert.winter.ink/the-talent-aperture-reopened/

#Discernment #EthicalHiring #AlgorithmicBias #HumanJudgement #ResponsibleAI #TalentEthics #StrategicRecruitment #HiringPractices
Nebraska.Code
Heather Hartman, MS, PMP presents 'Technoethics, AI Dilemmas' July 24th at Nebraska.Code().

https://nebraskacode.amegala.com/

#Technoethics #ArtificialIntelligence #AI #MachineLearning #Automation #BigData #Privacy #SurveillancePractices #AlgorithmicBias #AIOmaha #Nebraska #lincolnne #TechConference #TechnologyTrends #LincolnNE #Programming #softwaredevelopment #softwareengineering #EmergingTechnologies #TechTalk
ResearchBuzz: Firehose
The Conversation: Women’s sports are fighting an uphill battle against our social media algorithms. “Algorithms, trained to maximise engagement and profits, are deciding what appears in your feed, which video auto-plays next, and which highlights are pushed to the top of your screen. But here is the problem: algorithms prioritise content that is already popular. That usually means men’s […]”

https://rbfirehose.com/2025/05/12/the-conversation-womens-sports-are-fighting-an-uphill-battle-against-our-social-media-algorithms/
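The feedback loop the quoted article describes ("algorithms prioritise content that is already popular") can be reproduced in a few lines. The toy simulation below is a sketch under stated assumptions, not any platform's real ranker: two items start nearly equal, the ranker shows the current leader 90% of the time, and users click whatever they are shown at the same per-impression rate.

```python
# Toy rich-get-richer simulation of engagement-based ranking.
# All numbers are hypothetical; the feedback loop is the point.
import random

random.seed(1)

clicks = {"mens_highlights": 10, "womens_highlights": 9}  # near-tie at start

for step in range(1000):
    # The ranker shows the currently most-clicked item far more often.
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    shown = ranked[0] if random.random() < 0.9 else ranked[1]
    # Users click what they see at the same per-impression rate, so the
    # only asymmetry is the exposure the ranker itself created.
    if random.random() < 0.5:
        clicks[shown] += 1

print(clicks)  # the tiny initial lead compounds into a large gap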
Michaël | HouseStationLive.com
THE ALGORITHM VS. THE HUMAN MIND: A LOSING BATTLE

NO RECOGNITION FOR THE AUTHOR
YouTube does not reward consistency, insight, or author reputation. A comment may become a “top comment” for a day, only to vanish the next. There’s no memory, no history of editorial value. The platform doesn’t surface authors who contribute regularly with structured, relevant input. There’s no path for authorship to emerge or be noticed. The “like” system favors early commenters — the infamous firsts — who write “first,” “early,” or “30 seconds in” just after a video drops. These are the comments that rise to the top. Readers interact with the text, not the person behind it. This is by design. YouTube wants engagement to stay contained within the content creator’s channel, not spread toward the audience. A well-written comment should not amplify a small creator’s reach — that would disrupt the platform’s control over audience flow.

USERS WHO’VE STOPPED THINKING
The algorithm trains people to wait for suggestions. Most users no longer take the initiative to explore or support anyone unless pushed by the system. Even when someone says something exceptional, the response remains cold. The author is just a font — not a presence. A familiar avatar doesn’t trigger curiosity. On these platforms, people follow only the already-famous. Anonymity is devalued by default. Most users would rather post their own comment (that no one will ever read) than reply to others. Interaction is solitary. YouTube, by design, encourages people to think only about themselves.

ZERO MODERATION FOR SMALL CREATORS
Small creators have no support when it comes to moderation. In low-traffic streams, there’s no way to filter harassment or mockery. Trolls can show up just to enjoy someone else’s failure — and nothing stops them. Unlike big streamers who can appoint moderators, smaller channels lack both the tools and the visibility to protect themselves. YouTube provides no built-in safety net, even though these creators are often the most exposed.

EXTERNAL LINKS ARE SABOTAGED
Trying to drive traffic to your own website? In the “About” section, YouTube adds a warning label to every external link: “You’re about to leave YouTube. This site may be unsafe.” It looks like an antivirus alert — not a routine redirect. It scares away casual users. And even if someone knows better, they still have to click again to confirm. That’s not protection — it’s manufactured discouragement. This cheap shot, disguised as safety, serves a single purpose: preventing viewers from leaving the ecosystem. YouTube has no authority to determine what is or isn’t a “safe” site beyond its own platform.

HUMANS CAN’T OUTPERFORM THE MACHINE
At every level, the human loses. You can’t outsmart an algorithm that filters, sorts, buries. You can’t even decide who you want to support: the system always intervenes. Talent alone isn’t enough. Courage isn’t enough. You need to break through a machine built to elevate the dominant and bury the rest. YouTube claims to be a platform for expression. But what it really offers is a simulated discovery engine — locked down and heavily policed.

#HSLdiary #HSLmichael #YouTubeCritique #AlgorithmicBias #DigitalLabour #IndieCreators #Shadowbanning #ContentModeration #PlatformJustice #AudienceManipulation

"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."

europarl.europa.eu/thinktank/e

www.europarl.europa.eu · Algorithmic discrimination under the AI Act and the GDPR | Think Tank | European Parliament
#EU #AI #AIAct

ResearchBuzz: Firehose
The Conversation: Unrest in Bangladesh is revealing the bias at the heart of Google’s search engine. “…while Google’s search results are shaped by ostensibly neutral rules and processes, research has shown these algorithms often produce biased results. This problem of algorithmic bias is again being highlighted by recent escalating tensions between India and Bangladesh and cases of […]”

https://rbfirehose.com/2025/02/17/the-conversation-unrest-in-bangladesh-is-revealing-the-bias-at-the-heart-of-googles-search-engine/


"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

theconversation.com/ai-harm-is

The Conversation · AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond
The damage AI algorithms cause is not easily remedied. Breaking algorithmic harms into four categories results in pieces that better align with the law and points the way to better regulation.
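To make the quoted opt-in proposal concrete, here is a hedged sketch of what consent gating could look like in code. Everything in it (the ConsentRegistry, run_face_recognition, the user IDs) is hypothetical and illustrative of the proposal, not an existing API: processing runs only for users with a recorded opt-in, consent is revocable at any time, and refusals are disclosed rather than silent.

```python
# Hypothetical sketch of an opt-in regime for an AI feature such as
# facial recognition: no recorded consent, no processing, and consent
# is revocable at any time. Names are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    opted_in: set = field(default_factory=set)

    def opt_in(self, user_id: str) -> None:
        self.opted_in.add(user_id)

    def opt_out(self, user_id: str) -> None:  # revocable at any time
        self.opted_in.discard(user_id)

    def allows(self, user_id: str) -> bool:
        return user_id in self.opted_in  # opt-in default: the answer is "no"

def run_face_recognition(user_id: str, registry: ConsentRegistry) -> str:
    if not registry.allows(user_id):
        # Disclosure and refusal instead of silent processing.
        return f"{user_id}: skipped (no opt-in consent for facial recognition)"
    return f"{user_id}: processed (consent on record; use logged for audit)"

registry = ConsentRegistry()
registry.opt_in("alice")
print(run_face_recognition("alice", registry))
registry.opt_out("alice")                      # user withdraws consent
print(run_face_recognition("alice", registry))
print(run_face_recognition("bob", registry))   # never opted in
```

The design point is that the default answers "no": absent an affirmative, still-valid opt-in, the system refuses and says why, which is the inversion of today's opt-out norms that the author argues for.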

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

theconversation.com/ai-harm-is

The ConversationAI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respondThe damage AI algorithms cause is not easily remedied. Breaking algorithmic harms into four categories results in pieces that better align with the law and points the way to better regulation.