mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog

Server statistics:

1.5K active profiles

#predictiveai

0 posts · 0 participants · 0 posts today

Eileen Guo:
New from me, Gabriel Geiger, + Justin-Casimir Braun at Lighthouse Reports.

Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.

Our deep dive why: https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/

LavX News:
Unveiling the Future: 5 Groundbreaking AI Innovations of 2025

As we step into 2025, the landscape of artificial intelligence is undergoing a transformative shift. From autonomous agents to predictive healthcare, these innovations are not just enhancing technolog...

https://news.lavx.hu/article/unveiling-the-future-5-groundbreaking-ai-innovations-of-2025

#news #tech #ArtificialIntelligence #ClaudeSonnet #PredictiveAI

Bytes Europe:
Predictive AI Too Hard To Use? GenAI Makes It Easy https://www.byteseu.com/880565/

#AI #ArtificialIntelligence #DataScience #GenAI #GenerativeAI #MachineLearning #PredictiveAI #PredictiveAnalytics

Miguel Afonso Caetano<p>"EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement, and in 2024, the topic has been more active than ever before, with landlords, employers, regulators, and police adopting new tools that have the potential to impact both personal freedom and access to necessities like medicine and housing.</p><p>This year, we wrote detailed reports and comments to US and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to issues of fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train it on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology to automate systemic, historical injustice. And because these technologies don’t (and typically can’t) explain their reasoning, challenging their outputs is very difficult."</p><p><a href="https://www.eff.org/deeplinks/2024/12/fighting-automated-oppression-2024-review-0" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">eff.org/deeplinks/2024/12/figh</span><span class="invisible">ting-automated-oppression-2024-review-0</span></a></p><p><a href="https://tldr.nettime.org/tags/Algorithms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Algorithms</span></a> <a href="https://tldr.nettime.org/tags/AlgorithmicDecisionMaking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AlgorithmicDecisionMaking</span></a> <a href="https://tldr.nettime.org/tags/Automation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Automation</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAI</span></a></p>
Oobleck:
@nazokiyoubinbou I mean, I took the 0th Law into consideration, but I don’t think any IT policy I could write would save humanity from AI, and by extension, save humanity from itself.

To quote Asimov’s perspective, “Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else. But when I say that, I always remember (sadly) that human beings are not always rational.”

#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse

Miguel Afonso Caetano<p>"Increasingly, algorithmic predictions are used to make decisions about credit, insurance, sentencing, education, and employment. We contend that algorithmic predictions are being used “with too much confidence, and not enough accountability. Ironically, future forecasting is occurring with far too little foresight.”</p><p>We contend that algorithmic predictions “shift control over people’s future, taking it away from individuals and giving the power to entities to dictate what people’s future will be.” Algorithmic predictions do not work like a crystal ball, looking to the future. Instead, they look to the past. They analyze patterns in past data and assume that these patterns will persist into the future. Instead of predicting the future, algorithmic predictions fossilize the past. We argue: “Algorithmic predictions not only forecast the future; they also create it.”"</p><p><a href="https://teachprivacy.com/the-tyranny-of-algorithms/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">teachprivacy.com/the-tyranny-o</span><span class="invisible">f-algorithms/</span></a></p><p><a href="https://tldr.nettime.org/tags/Algorithms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Algorithms</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAI</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAlgorithms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAlgorithms</span></a> <a href="https://tldr.nettime.org/tags/AlgorihtmicBias" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AlgorihtmicBias</span></a></p>
Miguel Afonso Caetano<p>"An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.</p><p>An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.</p><p>The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year."</p><p><a href="https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">theguardian.com/society/2024/d</span><span class="invisible">ec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits</span></a></p><p><a href="https://tldr.nettime.org/tags/UK" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>UK</span></a> <a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAI</span></a> <a href="https://tldr.nettime.org/tags/ML" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ML</span></a> <a href="https://tldr.nettime.org/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a></p>
Miguel Afonso Caetano<p>"Anyone teaching about AI has some excellent material to work with in this book. There are chewy examples for a classroom discussion such as ‘Why did the Fragile Families Challenge End in Disappointment?’; and multiple sections in the chapter ‘the long road to generative AI’. In addition the Substack newsletter that this book was written through offers a section called ‘Book Exercises’. Interestingly, some parts of this book were developed by Narayanan developing classes in partnership Princeton quantitative sociologist, Matt Salganik. As Narayanan writes, nothing makes you learn and understand something as much as teaching it to others does. I hope they write about collaborating across disciplinary lines, which remains a challenge for many of us working on AI."</p><p><a href="https://www.lcfi.ac.uk/news-events/blog/post/aisnakeoil" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">lcfi.ac.uk/news-events/blog/po</span><span class="invisible">st/aisnakeoil</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/STS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>STS</span></a> <a href="https://tldr.nettime.org/tags/SnakeOil" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SnakeOil</span></a></p>
Miguel Afonso Caetano<p>"The human in the loop is a false promise, a "salve that enables governments to obtain the benefits of algorithms without incurring the associated harms."</p><p>So why are we still talking about how AI is going to replace government and corporate bureaucracies, making decisions at machine speed, overseen by humans in the loop?</p><p>Well, what if the accountability sink is a feature and not a bug. What if governments, under enormous pressure to cut costs, figure out how to also cut corners, at the expense of people with very little social capital, and blame it all on human operators? The operators become, in the phrase of Madeleine Clare Elish, "moral crumple zones":"</p><p><a href="https://pluralistic.net/2024/10/30/a-neck-in-a-noose/#is-also-a-human-in-the-loop" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">pluralistic.net/2024/10/30/a-n</span><span class="invisible">eck-in-a-noose/#is-also-a-human-in-the-loop</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAI</span></a> <a href="https://tldr.nettime.org/tags/HumanInTheLoop" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HumanInTheLoop</span></a> <a href="https://tldr.nettime.org/tags/Algorithms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Algorithms</span></a> <a href="https://tldr.nettime.org/tags/AIEthics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIEthics</span></a> <a href="https://tldr.nettime.org/tags/AIGovernance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIGovernance</span></a></p>
Miguel Afonso Caetano<p>"My point is that "worrying about AI" is a zero-sum game. When we train our fire on the stuff that isn't important to the AI stock swindlers' business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.</p><p>AI hasn't attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system – from investors, from customers, from easily gulled big-city mayors – is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they're not as visually arresting, but discrediting them is financially arresting, and that's what really matters.</p><p>All that said: AI slop is real, there is a lot of it, and just because it doesn't warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering."</p><p><a href="https://pluralistic.net/2024/10/29/hobbesian-slop/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">pluralistic.net/2024/10/29/hob</span><span class="invisible">besian-slop/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/PredictiveAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PredictiveAI</span></a> <a href="https://tldr.nettime.org/tags/AISlop" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AISlop</span></a></p>
Miguel Afonso Caetano:
#GenerativeAI #PredictiveAI #Politics #Elections: "Both predictive and generative AI tools can be used with good or bad intentions. The widespread availability of AI has increased scrutiny on how generative and predictive AI tools can be used to influence voters and shape election outcomes. While more research is needed to fully understand the impact of AI, it is important to identify and mitigate any negative effects on voters and voter turnout. To do so, voters, policymakers, and industry must grapple with pressing questions about the future of AI in elections, including:

- What use, transparency, and disclosure requirements are needed to inform voters about and protect them from both predictive and generative AI systems used in the electoral and campaign process?
- What additional research is needed to fully understand the impact predictive and generative AI use is having on the electoral process—and, consequently, facilitate the development of more effective strategies for informing and protecting voters?
- Alongside AI-specific regulations, what additional protections—such as data privacy, algorithmic transparency, and civil and human rights—are needed to safeguard individuals and communities?
- As AI and other disruptive technologies proliferate, how can industry, government, and civil society foster an environment that encourages free, open, and fair elections?"

https://www.newamerica.org/oti/blog/demystifying-ai-ai-and-elections/

Crypto News:
Blockchain Data Analysts are Stacking This Lesser-Known AI Crypto – What Does it Do? - AI crypto yPredict will offer users trading insights through predictive analytics ... - https://cryptonews.com/news/blockchain-data-analysts-are-stacking-this-lesser-known-ai-crypto-what-does-it-do.htm

#artificialintelligence #predictiveanalytics #ypredictpresale #cryptopresale #cryptostartup #industrytalk #predictiveai #aistartup #aicrypto #fintech #aicoin

Alex Jimenez:
Princeton University’s ‘AI Snake Oil’ authors say generative AI hype has ‘spiraled out of control’

#GenerativeAI #PredictiveAI #AI #Bias #Ethics

https://venturebeat.com/ai/princeton-university-ai-snake-oil-professors-say-generative-ai-hype-has-spiraled-out-of-control/