mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog


#phi4


A "Hi", and Microsoft's new AI Phi-4 answers with 56 sentences. Brilliant deep analysis, or just overthinking? When language models overcomplicate simple tasks, the question becomes: how practical is "reasoning" really? Read for yourself. #Microsoft #Phi4 #KI 👇
all-ai.de/news/news24/microsof

All-AI.de: Microsoft's AI thinks itself to death: Phi-4 analyzes a "Hi". 56 sentences for a greeting: does Phi-4-reasoning show genius, or does it lose itself in its own depth?

Phi-4-reasoning and Phi-4-reasoning-plus are a proof of concept for how smaller, smarter models can compete with giants: arxiv.org/abs/2504.21318

arXiv.org: Phi-4-reasoning Technical Report. We introduce Phi-4-reasoning, a 14-billion parameter reasoning model that achieves strong performance on complex reasoning tasks. Trained via supervised fine-tuning of Phi-4 on a carefully curated set of "teachable" prompts, selected for the right level of complexity and diversity, and reasoning demonstrations generated using o3-mini, Phi-4-reasoning generates detailed reasoning chains that effectively leverage inference-time compute. We further develop Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. Across a wide range of reasoning tasks, both models outperform significantly larger open-weight models such as the DeepSeek-R1-Distill-Llama-70B model and approach the performance levels of the full DeepSeek-R1 model. Our comprehensive evaluations span benchmarks in math and scientific reasoning, coding, algorithmic problem solving, planning, and spatial understanding. Interestingly, we observe a non-trivial transfer of improvements to general-purpose benchmarks as well. In this report, we provide insights into our training data, our training methodologies, and our evaluations. We show that the benefit of careful data curation for supervised fine-tuning (SFT) extends to reasoning language models, and can be further amplified by reinforcement learning (RL). Finally, our evaluation points to opportunities for improving how we assess the performance and robustness of reasoning models.
#LLM #SLM #AI

Small beats big? Microsoft's new language model Phi-4-mini shows that less can be more: 3.8 billion parameters, yet better than many giants, and running on devices that fit in your pocket. Is this the beginning of the end for massive AI models? Find out: #Microsoft #Phi4 #KI 👇
all-ai.de/news/top-news24/ki-s

All-AI.de: AI sensation at Microsoft! Mini model beats the mega competition. With only 3.8 billion parameters, Phi-4-mini outclasses established top models. Is this the end of the giant AIs?

Microsoft Unveils Powerful, Compact Phi-4 AI Models.

Microsoft's new Phi-4 AI models offer advanced reasoning with fewer resources, rivaling much larger systems. Designed for efficiency and real-world application, these compact models bring high-performance AI to smartphones, IoT devices, and education, pushing forward the democratization of AI technology.

#MicrosoftAI #Phi4 #AIInnovation #EdgeComputing #TechNews #TECHi

Read Full Article Here :- techi.com/microsoft-phi-4-ai-m

yoo #mastodon #community we rolled back to the #pixtral #vision #AI #model over #microsoft #phi4

we know #startrek @startrek, and #music @music are areas needing immediate improvement so we are trying to conjure up a "community notes" style approach to what the community thinks is bad #alttext

this will be done through responding publicly and direct messaging

This is still a work in progress, so we shall see how it goes as we test.

This way we'll have a mechanism for potentially useful #contribution to the AI #dataset rather than mindless bitching about how bad and evil AI is.

#AITraining #RAG #LoRA

good grief, we broke our #ai #alttext script. for a while we were running the latest #pixtral #vision #model, but we wanted to try the #Microsoft #PHi4 #vision model; the catch is we use #conda for our python environment management.

this is a huge mistake #lol #tech #aidev #thestruggle

i need a #venv for both pixtral and phi4 in one script's runtime.

suggestions anyone to work this mess?

#python #development #fail
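One common way out of the "two venvs, one runtime" bind is to not activate either environment at all: a small orchestrator calls each venv's interpreter directly by path. A minimal sketch, assuming hypothetical per-model scripts `caption_pixtral.py` and `caption_phi4.py`, each installed into its own venv (the `envs/...` paths and script names below are assumptions, not the poster's actual layout):

```python
# Orchestrator sketch: run each captioning model under its own
# virtual environment by invoking that venv's interpreter directly.
# No "source activate" is needed; the interpreter path is the venv.
import subprocess
from pathlib import Path

# Assumed venv locations -- adjust to your actual conda/venv paths.
PIXTRAL_PY = Path("envs/pixtral/bin/python")
PHI4_PY = Path("envs/phi4/bin/python")

def caption(image_path: str, interpreter: Path, script: str) -> str:
    """Run a captioning script under a specific interpreter and
    return whatever it printed to stdout."""
    result = subprocess.run(
        [str(interpreter), script, image_path],
        capture_output=True,
        text=True,
        check=True,  # raise if the child script fails
    )
    return result.stdout.strip()

# Usage (assuming the scripts and venvs exist):
#   alt_text = caption("photo.jpg", PHI4_PY, "caption_phi4.py")
#   fallback = caption("photo.jpg", PIXTRAL_PY, "caption_pixtral.py")
```

The design choice here is isolation by process boundary: pixtral's and phi4's conflicting dependencies never load into the same interpreter, and the orchestrator itself can live in a third, minimal environment.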

obviously we are running the phi4v model now, but we were testing across all the accounts when we realized we broke the production scripts.

same login, supposedly different virtual environments.
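When "supposedly different" environments behave identically, a quick sanity check is to have each script report which interpreter it is actually running under. A small helper like this (my sketch, not part of the poster's scripts) dropped into both scripts makes the mix-up visible immediately:

```python
# Sanity check: report which interpreter/environment this script
# is really running under. If two "different" venvs print the same
# prefix, they are actually the same environment.
import sys

def env_info() -> dict:
    return {
        "executable": sys.executable,   # path to the running python
        "prefix": sys.prefix,           # the (v)env root in use
        # True when running inside a venv (prefix differs from base)
        "in_venv": sys.prefix != sys.base_prefix,
    }

if __name__ == "__main__":
    for key, value in env_info().items():
        print(f"{key}: {value}")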

blah. perhaps this is what #uv is meant to fix?

Not really sure, I guess I could talk to #ChatGPT about it lol