#aicommons

Blacker than mirrors

Morning read. Dogger, gale force 5 :-)

#ai #aiethics #aicommons #blackerthanmirrors

https://open.substack.com/pub/blackerthanmirrors/p/sounding-for-tallow-the-weight-of?r=5v4urt&utm_medium=ios&utm_source=post-publish

🗞️"Transparency around the content used to train AI and information about how it was processed can support the legitimate interests of preventing discrimination and respecting cultural diversity." Learn more in this article by Maximilian Gahntz & Zuzanna Warso, published in Tech Policy Press.

techpolicy.press/how-the-eu-ai

Tech Policy Press · How the EU AI Act Can Increase Transparency Around AI Training Data
Trade secrets can’t serve as a blanket excuse for intransparency, write Zuzanna Warso & Maximilian Gahntz.

"The Open Source #AI Definition is an important step in defining the standard of openness in AI development. Still, it should be seen as just one position in a broader debate that needs to bridge positions of AI developers with those of other stakeholders."

– read our new analysis of the Open Source Initiative (OSI)'s definition of open source AI, by @tarkowski and @paulk

openfuture.eu/blog/the-open-so

Open Future · The Open Source AI Definition is a step forward in defining openness in AI
This week, the Open Source Initiative released its definition of open source AI. This analysis considers its significance as a standard, its limitations, and the need for a broader community norm.

Dan Cohen and Dave Hansen recently wrote a really good piece on books, libraries, and AI training (the piece refers to the paper on Books Data Commons that I co-authored).

They start with a well-known argument about levelling the playing field: without public access to training resources, AI monopolies will benefit from information asymmetries. Google already has access to 40 million scanned books.

They add to this a key point about libraries' public interest stance - and suggest that libraries could actively govern / gatekeep access to books.

This reminds me of the recent paper by Melanie Dulong de Rosnay and Yaniv Benhamou, which for me is groundbreaking - it proposes combining license-based approaches to sharing with trusted institutions that offer more fine-grained access governance.

So it's good to see that this line of thinking is getting traction.

authorsalliance.org/2024/05/13

Authors Alliance · Books are Big AI’s Achilles Heel
By Dave Hansen and Dan Cohen. Image of the Rijksmuseum by Michael D Beckwith, dedicated to the Public Domain. Rapidly advancing artificial intelligence is remaking how we work and live, a revo…

Interesting data from the new edition of the Foundation Model Transparency Index, collected six months after the initial index was released.

Overall, there's big improvement, with the average score jumping from 37 to 58 points (out of 100). That's a lot!

One interesting fact: the researchers contacted developers and solicited the data directly - interactions count.

More importantly, there is little improvement, and little overall transparency, in the category that the researchers describe as "upstream": the data, labour, and compute that go into training. And "data access" gets the lowest score of all the indicators.
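
For intuition on the arithmetic behind these scores, here is a minimal sketch of how an index like this can be aggregated, assuming binary pass/fail indicators grouped into domains. The domain names, indicator names, and values below are hypothetical placeholders, not the researchers' actual data.

```python
# Hypothetical sketch of transparency-index aggregation.
# The real index scores 100 binary indicators per developer;
# the indicators and values here are made up for illustration.
indicators = {
    "upstream":   {"data_sources": 1, "data_access": 0, "labour": 0, "compute": 1},
    "model":      {"capabilities": 1, "limitations": 1, "risks": 0},
    "downstream": {"usage_policy": 1, "affected_users": 0},
}

def domain_scores(ind):
    """Per-domain transparency: share of satisfied indicators, scaled to 100."""
    return {d: round(100 * sum(s.values()) / len(s), 1) for d, s in ind.items()}

def overall_score(ind):
    """Overall score: satisfied indicators out of all indicators, scaled to 100."""
    satisfied = sum(sum(s.values()) for s in ind.values())
    total = sum(len(s) for s in ind.values())
    return round(100 * satisfied / total, 1)

print(domain_scores(indicators))  # {'upstream': 50.0, 'model': 66.7, 'downstream': 50.0}
print(overall_score(indicators))  # 55.6 - one headline number out of 100
```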

More at Tech Policy Press: techpolicy.press/the-foundatio

Tech Policy Press · The Foundation Model Transparency Index: What Changed in 6 Months?
Fourteen model developers provided transparency reports on each of 100 indicators devised by Stanford, Princeton, and Harvard researchers.

The Think7 Italy Summit is happening this week, with the theme “The G7 and the World: Rebuilding Bridges”.

We have been invited to write a brief on “Democratic governance of AI systems and datasets”, which will be presented tomorrow by @tarkowski.

The brief has been a joint effort of three organizations: Open Future Foundation, Centro Politiche Europee and MicroSave Consulting (MSC), with contributions from Renata Avila, Lea Gimpel, and @savi.

#publicAI #AIcommons

think7.org/event/t7-italy-summ

think7.org · T7 Italy Summit – The G7 and the World: Rebuilding Bridges

Open Future's newest white paper, authored by @zwarso and myself, addresses the governance of data sets used for #AI training.

Over the past two years, it has become evident that shared datasets are necessary to create a level playing field and support AI solutions in the public interest. Without these shared datasets, companies with vast proprietary data reserves will always have the winning hand.

However, data sharing in the era of AI poses new challenges. Thus, we need to build upon established methods like #opendata, refining them and integrating innovative ideas for data governance.

Our white paper proposes that data sets should be governed as commons, shared and responsibly managed collectively. We outline six principles for commons-based governance, complemented by real-life examples of these principles in action.

openfuture.eu/publication/comm

Open Future · Commons-based Data Set Governance for AI
In this white paper, we propose an approach to sharing data sets for AI training as a public good governed as a commons.

I participated yesterday in an expert workshop on Public-Private Partnerships in Global Data Governance, organized by the United Nations University Centre for Policy Research (UNU-CPR) and the International Chamber of Commerce (ICC).

I was also invited to prepare a policy brief presenting how the Public Data Commons model, which we have been advocating for, could be applied at the global level for dealing with emergencies and the broader poly-crisis.

It is exciting to see UNU explore data sharing policies within the context of the policy debate on the UN Global Digital Compact.

Also worth noting is the recent report of the High-Level Advisory Board on Effective Multilateralism, "A Breakthrough for People and Planet". One of its transformative shifts, "the just digital transition", includes a recommendation for a global data impact hub.

In my brief, I show how this impact hub could be designed as a Public Data Commons. I also highly recommend other briefs presented at the event, by Alex Novikau, Isabel Rocha de Siqueira, Michael Stampfer and Stefaan Verhulst.

#aicommons #datacommons #datagovernance #ai

You can find the report and all the briefs on the UNU webpage: unu.edu/cpr/project/breakthrou

In a month (7-8 December) I will be speaking at a conference on data governance and AI, organized in Washington, DC by the Digital Trade and Data Governance Hub. I am excited about this for two reasons:

first of all, we need to connect the policy debates on data governance and AI governance. The space of AI development offers new opportunities to develop, at scale, commons-based approaches that have been much theorized and advocated for, but not yet implemented.

and secondly, I am a deep believer in dialogue between the US and the EU. The US is leading in terms of AI development itself, while the EU will most probably be the first jurisdiction to innovate in terms of AI regulation.

Please consider joining, either in-person or remotely (it's a hybrid event).

#aicommons #datacommons #datagovernance #ai

linkedin.com/events/datagovern

www.linkedin.com · Data Governance in the Age of Generative AI
The global popularity and use of large language models for generative AI have revealed enforcement problems as well as gaps in the governance of data at the national and international levels. GWU’s Digital Trade and Data Governance Hub and the NIST-NSF Trustworthy AI Institute, along with several partners, are hosting a two-day conference to discuss these issues. At this free, hybrid event, speakers and participants will:

• Identify data governance gaps for large language models (LLMs).
• Propose and discuss solutions for these gaps.
• Promote understanding of data governance as a key component of AI governance.

The conference will feature two days of panel discussions, focusing on how firms acquire data, whether firms choose to make their LLMs open, partially open, or closed to outside review, and the implications of these choices for democracy, human rights, and trust. We will explore new ideas for how to govern the data underpinning generative AI while promoting broader understanding and engagement in the governance of data. Additionally, we will examine the actions governments are taking to bridge data governance gaps. The conference will include consensus-building exercises to attempt to arrive at common policy proposals. Register now to stay updated on the schedule and discussions.

Audience and Format: The event is free and open to all interested in discussing the data that is used to build generative AI systems. The program will consist of five panels, two keynotes, and a fireside chat, each designed to encourage audience questions and suggestions. Lunch and snacks will be provided. The conference will be livestreamed (link provided after registering) and posted on our YouTube page after the event has ended.

The conference will include computer and data scientists from companies such as Microsoft, Hugging Face, IBM, and EleutherAI; researchers from the University of Maryland, Stanford, Princeton, the Distributed AI Research Institute, and the AFL-CIO; and policymakers from Germany, the EU, the UK, and the US.

Panels will focus on:
• The Sources of LLM Data
• The Continuum of Closed and Open LLMs and their Implications for Data Governance
• Data Openness and Society
• What are Governments Doing to Close the Data Governance Gaps
• New Ideas for Shared Data Governance

The Chan Zuckerberg Initiative announced that, in order to support non-profit medical research, it is building "computing infrastructure" - that is, purchasing over 1,000 state-of-the-art GPUs.

This is super interesting: in an AI-powered world, compute is not a commodity but a currency.

So if a private foundation can do it, why can't governments do the same? Providing public interest compute infrastructure seems like one of the simpler moves that can be made, while the more complex governance issues are solved in parallel.

#aicommons #publicai

archive.ph/DL0PO

A new piece from @halcyene and Michael Birtwistle of the Ada Lovelace Institute argues for a more inclusive UK #AI Safety Summit.

adalovelaceinstitute.org/blog/

The reason, they argue, is that "AI safety" is a very broad category. And since many risks are socio-technical, the governance debate needs to include society, especially those affected by the risks. "Nothing about us without us."

It's interesting to observe how UK-based civic actors are attempting to pry open a policy platform that is currently designed as a conversation between business and the state (with a sprinkling of just a few selected civic / academic actors). I hope it's successful and sets a precedent.

And I like the way Ada Lovelace frames risks, highlighting that there are structural harms, the risk of market concentration in particular.

This risk is often ignored, and it's the one that can be addressed by policies that support open, commons-based governance of AI.

Also, it's a risk that - since it's structural - affects the policy debate itself: there is a risk of regulatory capture by the largest players, in whose corporate hands power is concentrated. One more reason to make the AI policy debate more inclusive.

www.adalovelaceinstitute.org · Seizing the ‘AI moment’: making a success of the AI Safety Summit
Reaching consensus at the AI Safety Summit will not be easy – so what can the Government do to improve its chances of success?