mastodontech.de is one of many independent Mastodon servers you can use to take part in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog


#nlp


Data Annotation vs. Data Labeling: Find the Right One for You

Key takeaways:

• Understand the core difference between annotation and labeling
• Explore use cases across NLP, computer vision & more
• Learn how each process impacts model training and accuracy
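One common way to draw the distinction (terminology varies between teams; this is just one convention, not the article's own definition): labeling assigns a single class to a whole example, while annotation adds finer-grained structure such as entity spans. A minimal sketch with made-up data:

```python
# Labeling: one class per whole example (e.g. sentiment classification).
labeled = {"text": "Great battery life!", "label": "positive"}

# Annotation: structured markup inside the example (e.g. named-entity
# spans, given as character offsets into the text).
annotated = {
    "text": "Apple released the iPhone in 2007.",
    "entities": [
        {"span": (0, 5), "type": "ORG"},        # "Apple"
        {"span": (19, 25), "type": "PRODUCT"},  # "iPhone"
        {"span": (29, 33), "type": "DATE"},     # "2007"
    ],
}

# A span annotation can be recovered by slicing the original text.
first = annotated["entities"][0]
assert annotated["text"][slice(*first["span"])] == "Apple"
```

The extra structure in the annotated record is what span-level NLP and computer-vision tasks consume during training, whereas a plain label is enough for whole-example classification.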

Read now to make smarter data decisions:

hitechbpo.com/blog/data-annota

📢 Thrilled to share that, through a collaborative effort between the @OpenSearchProject and @huggingface, neural sparse models are now available in the Sentence Transformers library. 🤗

The Sentence Transformers (a.k.a. SBERT) library, developed by @UKPLab and maintained by #HuggingFace, is a Python framework designed to generate semantically meaningful embeddings for sentences, paragraphs, and images.
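For intuition on what a neural sparse model produces: instead of a dense vector, the text is mapped to a very high-dimensional vector in which almost all entries are zero, so only the non-zero token weights need to be stored and a query can be scored against a document via a dot product over the overlapping terms. A toy illustration in plain Python, with invented token weights rather than real model output (the actual API lives in the Sentence Transformers library and is described in the linked blog post):

```python
# Toy sparse vectors: token -> learned weight. Every vocabulary entry not
# listed is implicitly zero, so a dict is a natural representation.
query = {"neural": 1.2, "sparse": 0.9, "search": 0.7}
doc_a = {"neural": 0.8, "sparse": 1.1, "retrieval": 0.5}
doc_b = {"dense": 1.0, "embedding": 0.6, "search": 0.4}

def sparse_dot(q: dict[str, float], d: dict[str, float]) -> float:
    """Relevance score: dot product over the tokens both vectors share."""
    return sum(w * d[t] for t, w in q.items() if t in d)

# doc_a shares "neural" and "sparse" with the query, so it scores higher.
scores = {"doc_a": sparse_dot(query, doc_a),
          "doc_b": sparse_dot(query, doc_b)}
```

This sparsity is what lets such embeddings plug into inverted-index infrastructure like OpenSearch while still carrying learned semantic weights.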

opensearch.org/blog/neural-spa

Congrats to all involved! 👏

#opensearch #vectorSearch #NLP

VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification.

This paper presents VLAI, a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open-source and integrated into the Vulnerability-Lookup service.
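For context on the label space: severity categories for vulnerabilities conventionally follow the CVSS v3.1 qualitative rating scale, which buckets the numeric base score as sketched below (the exact label set VLAI predicts is documented in the paper itself):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

A model like VLAI predicts such a category directly from the vulnerability description, before any manual CVSS vector has been assigned.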

We (@cedric and I) decided to write a paper to better document how VLAI is implemented. We hope it gives others ideas for improving models like this.

#vulnerability #cybersecurity #vulnerabilitymanagement #ai #nlp #opensource

@circl

🔗 arxiv.org/abs/2507.03607


Happening Today at the University of Stuttgart: The IRIS Summer Symposium 2025!
We're delving into the complexities of intelligent systems and their societal impacts through a series of insightful talks and discussions. Key topics include:
🔵 Computational Digital Psychology – Exploring how digital environments influence human behavior.
🔵 Ethics of Generative AI – Examining the moral considerations surrounding AI-generated content.
🔵 Diversity-Aware NLP Intelligent Systems – Developing language processing tools that respect and reflect societal diversity.
🔵 Human-Intelligent Systems Interaction and Teaming – Investigating collaborative dynamics between humans and intelligent systems.
🔵 Teaching and Learning Forum RISING – Innovating educational approaches to prepare students for an AI-integrated future.
🔵 IRIS Public Engagement – Bridging the gap between scientific research and public discourse.
🔵 Poster Session Highlights – Showcasing interdisciplinary research on topics such as:
Trust dynamics in human-chatbot interactions.
Societal and organizational impacts of AI.
Literary narratives as tools for critically reflecting on intelligent systems.
Human-AI teaming in flight operations.
📍 Venue: University of Stuttgart, Campus Vaihingen
🕘 Time: 9:00 a.m. – 4:30 p.m. (CEST)
📌 Poster Session: 2:30 p.m. – 4:30 p.m., Room UN34.150
This event is free and open to all interested individuals. If you're passionate about the intersection of technology and society, there's still time to join us!
🔗 Learn more: www.iris.uni-stuttgart.de

#IRISStuttgart #IRISSummerSymposium #AIandSociety
#EthicsInAI #HumanAIInteraction
#NLP
#PublicEngagement #InterdisciplinaryResearch #UniversityOfStuttgart

How big of a deal would it be if someone developed a language model (kind of like ChatGPT) which didn't hallucinate, didn't use prodigious amounts of electricity/water/compute/memory, which ran locally or on a distributed user mesh instead of corporate server farms, and which remembered and learned from what you say if you want it to? Something which was reliable and testable and even interpretable -- meaning you could pop the hood and see what it's really doing. Would you be inclined to use a system like this? Are there other things you'd still take issue with?

#LLM
#ChatGPT
#NLP
#NLU