mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog

Server stats:

1.5K
active profiles

#generativeai

60 posts · 45 participants · 11 posts today

"China’s ambition to turn its open-source artificial-intelligence models into a global standard has jolted American companies and policymakers, who fear U.S. models could be eclipsed and are mobilizing their responses to the threat.

Chinese advances in AI have come one after another this year, starting with the widely heralded DeepSeek and its R1 reasoning model in January. This was followed by Alibaba’s Qwen and a flurry of others since July, with names such as Moonshot, Z.ai and MiniMax.

The models all have versions that are free for users to download and modify. This approach, commonly referred to as open source or open weight, is driving global adoption of Chinese AI technology.

American companies that have kept their models proprietary are feeling the pressure. In early August, ChatGPT maker OpenAI released its first open-source model, called gpt-oss.
The history of technology offers many examples where a welter of competitors in an industry’s infancy eventually evolved into a monopoly or oligopoly of a few players. Microsoft’s Windows operating system for desktops, Google’s search engine, and the iOS and Android operating systems for smartphones are just a few of the examples.

History also teaches that the battle to become an industry standard isn’t necessarily won by the most technologically advanced player. Easy availability and flexibility play a role, which is why China’s advances in open-source AI worry many in Washington and Silicon Valley."

wsj.com/tech/ai/chinas-lead-in

#China #AI #GenerativeAI

🔥 Data Science August 2025 Trends Alert! Generative AI is reshaping analytics workflows, while edge AI brings intelligence closer to data sources. The rise of synthetic data and privacy-enhancing technologies is solving real-world data challenges. Automation is set to handle 50%+ of data tasks this year! #DataScience #GenerativeAI #EdgeAI #SyntheticData #Analytics

For services and consulting, see my Upwork profile: upwork.com/freelancers/~01d316

Suzanne Srdarov and I have a new publication out in the Oxford Intersection on AI and Society:

Generative Imaginaries of Australia: How Generative AI Tools Visualize Australia and Australianness
doi.org/10.1093/9780198945215.

It's paywalled, so we've also got a summary piece in The Conversation:

‘Australiana’ images made by AI are racist and full of tired cliches, new study shows theconversation.com/australian

but please ping me if you need a PDF of the main piece! #generativeAI #Australia #racism #auspol

"In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate. Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which A.I. might not get much better than this.

OpenAI didn’t want to wait nearly two and a half years to release GPT-5. According to The Information, by the spring of 2024, Altman was telling employees that their next major model, code-named Orion, would be significantly better than GPT-4. By the fall, however, it became clear that the results were disappointing. “While Orion’s performance ended up exceeding that of prior models,” The Information reported in November, “the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4.”

Orion’s failure helped cement the creeping fear within the industry that the A.I. scaling law wasn’t a law after all. If building ever-bigger models was yielding diminishing returns, the tech companies would need a new strategy to strengthen their A.I. products. They soon settled on what could be described as “post-training improvements.” The leading large language models all go through a process called pre-training in which they essentially digest the entire internet to become smart. But it is also possible to refine models later, to help them better make use of the knowledge and abilities they have absorbed. One post-training technique is to apply a machine-learning tool, reinforcement learning, to teach a pre-trained model to behave better on specific types of tasks. Another enables a model to spend more computing time generating responses to demanding queries."
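The two post-training techniques the excerpt describes can be illustrated loosely. Below is a purely conceptual Python sketch of the rejection-sampling flavour of reward-driven post-training: sample several candidate responses and keep the highest-scoring one as a fine-tuning target. `model_respond` and `reward` are invented stand-ins, not any lab's real API.

```python
def best_of_n(model_respond, reward, prompt, n=4):
    """Sample n candidate responses and keep the highest-reward one.

    In real post-training, the preferred response would then be used
    as a fine-tuning target; this sketch only selects the winner.
    """
    candidates = [model_respond(prompt) for _ in range(n)]
    return max(candidates, key=reward)

# Toy stand-ins: the "model" echoes increasingly long answers,
# the "reward" prefers longer ones.
answers = iter(["ok", "sure", "a fuller answer"])
toy_model = lambda prompt: next(answers)
toy_reward = lambda response: len(response)
winner = best_of_n(toy_model, toy_reward, "demo prompt", n=3)
```

The second technique the article mentions (spending more compute at inference time) would, in this framing, simply mean raising `n` or letting each candidate run longer before scoring.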

newyorker.com/culture/open-que

The New Yorker · What If A.I. Doesn’t Get Much Better Than This? By Cal Newport
#AI #GenerativeAI #OpenAI

"Using multiple AI agents in tandem opens up impressive possibilities. “AI agents encode the wisdom of senior engineers and apply it universally,” Yahav says.

Looking to the future, Digital.ai’s To anticipates productivity gains with fewer errors and reduced cognitive load, as developers tap various agents for lower-level details. “As this space matures, multi-agent workflows will increase velocity by significantly reducing toil,” he says.

But doing this well will require clear boundaries around product requirements, coding standards, security policies, and more.

In short, AI tools require intention. “An agentic software development life cycle needs the same pillars that a high-performing human team does: a clear mission, a code of conduct, and shared knowledge,” adds Wang.

So, although we’re heading toward a future where developers manage a fleet of agents, early testers should prepare for a lot of trial and error. As Roeck puts it, “Get ready to fail. This isn’t baked yet.”"
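The "clear boundaries" the article calls for can be pictured as a toy pipeline: a planner agent breaks work into tasks, a coder agent produces artifacts, and a reviewer agent enforces explicit standards before anything ships. All three roles and the single standard below are invented for illustration; this is not any vendor's framework.

```python
def plan(requirement):
    # "Planner" agent: break a requirement into tasks (stubbed).
    return [f"implement: {requirement}", f"test: {requirement}"]

def code(task):
    # "Coder" agent: produce an artifact for one task (stubbed).
    return {"task": task, "artifact": f"code for {task}"}

def review(artifact, standards):
    # "Reviewer" agent: enforce the team's explicit rules
    # (coding standards, security policy, and so on).
    return all(rule(artifact) for rule in standards)

def pipeline(requirement, standards):
    shipped = []
    for task in plan(requirement):
        artifact = code(task)
        if review(artifact, standards):
            shipped.append(artifact)
    return shipped

# One invented standard: every artifact must reference its task.
standards = [lambda a: a["task"] in a["artifact"]]
shipped = pipeline("parse config file", standards)
```

The point of the sketch is the shape, not the stubs: the "mission" lives in `plan`, the "code of conduct" in `standards`, and trial-and-error shows up as artifacts the reviewer rejects.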

infoworld.com/article/4035926/

InfoWorld · Multi-agent AI workflows: The next evolution of AI coding. Instead of working with a single coding agent, developers will soon realize gains by guiding a team of them.
#AI #GenerativeAI #AIAgents

"My gloss is that GPT-5 had become something of an albatross around OpenAI’s neck. And at this particular juncture, not long after inking big deals with Softbank et al. and riding as high on its cultural and political trajectory as it’s likely to get—and perhaps seeing declining rates of progress on model improvement in the labs—a calculated decision was made to pull the trigger on releasing the long-awaited model. People were going to be disappointed no matter what; let them be disappointed now, while the wind is still at OpenAI’s back, and it can credibly make a claim to providing hyper-advanced worker automation.

I don’t think the GPT-5 flop ultimately matters all that much to most folks, and it can certainly be papered over well enough by a skilled salesman in an enterprise pitch meeting. Again, all this is clarifying: OpenAI is again centering workplace automation, while retreating from messianic AGI talk."

bloodinthemachine.com/p/gpt-5-

Blood in the Machine · GPT-5 is a joke. Will it matter? By Brian Merchant
#AI #GenerativeAI #AGI

"GitHub Codespaces provides full development environments in your browser, and is free to use for anyone with a GitHub account. Each environment has a full Linux container and a browser-based UI using VS Code.

I found out today that GitHub Codespaces come with a GITHUB_TOKEN environment variable... and that token works as an API key for accessing LLMs in the GitHub Models collection, which includes dozens of models from OpenAI, Microsoft, Mistral, xAI, DeepSeek, Meta and more.

Anthony Shaw's llm-github-models plugin for my LLM tool allows it to talk directly to GitHub Models. I filed a suggestion that it could pick up that GITHUB_TOKEN variable automatically and Anthony shipped v0.18.0 with that feature a few hours later.

... which means you can now run the following in any Python-enabled Codespaces container and get a working llm command:"
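The token pickup the post describes could work along these lines. This is a hypothetical helper sketched for illustration, not the actual llm-github-models code, and `GITHUB_MODELS_KEY` is an invented fallback name: prefer an explicitly configured key, otherwise fall back to the `GITHUB_TOKEN` that Codespaces injects into every container.

```python
import os

def resolve_github_models_key(env=None):
    """Return an API key for GitHub Models, if one is available.

    Checks an explicit key first (invented name for this sketch),
    then the GITHUB_TOKEN that Codespaces sets automatically.
    """
    env = os.environ if env is None else env
    for var in ("GITHUB_MODELS_KEY", "GITHUB_TOKEN"):
        value = env.get(var)
        if value:
            return value
    return None
```

Outside a Codespace, the same lookup simply returns `None` and the caller can fall back to whatever key management it normally uses.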

simonwillison.net/2025/Aug/13/

Simon Willison’s Weblog · simonw/codespaces-llm
#GitHub #Codespaces #LLMs

As long as #GDPR is taken into account, I think that pay-per-crawl might be a positive and constructive solution from companies like #Cloudflare to the growing problem of uncompensated, for-profit web scraping in the era of generative AI.
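Cloudflare's announcement frames pay-per-crawl around the long-dormant HTTP 402 Payment Required status code. A toy sketch of that gating decision, where the crawler list and matching logic are illustrative only and not Cloudflare's implementation:

```python
AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}  # illustrative list

def gate_request(user_agent, has_paid):
    """Decide an HTTP status for an incoming request.

    A known AI crawler that hasn't paid gets 402 Payment Required;
    paid crawlers and ordinary visitors get the content (200).
    """
    is_ai_crawler = any(bot in user_agent for bot in AI_CRAWLERS)
    if is_ai_crawler and not has_paid:
        return 402
    return 200
```

In practice the hard parts are the ones this sketch skips: reliably identifying crawlers (user-agent strings are trivially spoofed) and settling payment, which is where an intermediary like Cloudflare comes in.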

Introducing pay per crawl: Enabling content owners to charge AI crawlers for access
blog.cloudflare.com/introducin