mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to all (over 16) and provided by Markus'Blog

#machinevision

✅ Machine Vision Market Insight 👁️🤖

The global machine vision market was USD 10.75B (2023) ➡️ USD 11.61B (2024) ➡️ projected USD 22.59B (2032) with a CAGR of 8.7% 🔍.

🌏 Asia Pacific led with 31.44% share (2023).

U.S. market to hit USD 3.46B (2032), boosted by demand for quality control & automated inspection in manufacturing 🏭.

👉 Explore more: fortunebusinessinsights.com/ma

What if your manufacturing line could see every defect before it became a costly problem?

With AI-powered machine vision, factories can now detect quality issues in real time using smart cameras + deep learning—no extra manpower needed.

📉 90% fewer defects
⚙️ Fully automated QC
📈 Better decisions, faster
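The pitch above boils down to one loop: score each camera frame for anomalies and flag it when the score crosses a threshold. A minimal sketch of that loop follows; it is an illustration only, with the deep-learning model a real smart camera would run replaced by a toy pixel-difference score, and the image size, reference, and 0.1 threshold all invented for the example:

```python
import numpy as np

def defect_score(frame, reference):
    """Toy anomaly score: mean absolute pixel deviation from a known-good
    reference image, normalised to [0, 1]. A production system would swap
    this for a trained model (e.g., a CNN classifier or autoencoder)."""
    diff = np.abs(frame.astype(float) - reference.astype(float)) / 255.0
    return float(diff.mean())

def inspect(frame, reference, threshold=0.1):
    """Return the score and a pass/fail decision for one frame."""
    score = defect_score(frame, reference)
    return {"score": score, "defect": score > threshold}

# Known-good reference frame (64x64 grayscale, 8-bit).
reference = np.full((64, 64), 128, dtype=np.uint8)

good = reference.copy()
bad = reference.copy()
bad[16:48, 16:48] = 255  # simulated blemish covering a quarter of the frame

print(inspect(good, reference))  # score 0.0, no defect
print(inspect(bad, reference))   # score ~0.12 > threshold, defect flagged
```

In a real deployment the reference comparison would be replaced by model inference, but the surrounding control flow (score, threshold, flag) stays the same.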

Curious how it works?
🔗 softwebsolutions.com/resources

Softweb Solutions · How AI-Powered Machine Vision Transforms Manufacturing
Explore how AI-powered machine vision improves manufacturing efficiency, strengthens quality control, and drives smart automation.

The really interesting thing is that such a system could have been built more successfully using non #AI tech like #RFID, auto-zooming #QRCode scanners, #Bluetooth or #NFC apps, or even swipe cards. AI and #machinevision are useful, but hardly necessary for this use case. h/t @arstechnica
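To make the comparison concrete, here is a minimal, hypothetical sketch of the non-AI alternative: a checkout ledger driven purely by tag scans (QR/RFID/NFC), with no camera or model in the loop. Item codes and prices are invented for illustration:

```python
# Toy cashierless-checkout ledger: every shelf pick or return is a tag scan
# (QR/RFID/NFC), so no computer vision is needed to track the cart.
PRICES = {"MILK-1L": 1.29, "BREAD": 2.49, "EGGS-12": 3.99}

class Cart:
    def __init__(self):
        self.items = []

    def scan_pick(self, code):
        """Shopper takes an item off the shelf; its tag is scanned."""
        self.items.append(code)

    def scan_return(self, code):
        """Shopper puts an item back; the scan reverses the pick."""
        self.items.remove(code)

    def total(self):
        """Charge on exit: sum of everything still in the cart."""
        return round(sum(PRICES[c] for c in self.items), 2)

cart = Cart()
cart.scan_pick("MILK-1L")
cart.scan_pick("BREAD")
cart.scan_pick("EGGS-12")
cart.scan_return("BREAD")
print(cart.total())  # 5.28
```

The point of the sketch is that the hard part of "Just Walk Out" was attribution (who took what), which explicit scans solve by construction rather than by inference.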

arstechnica.com/gadgets/2024/0

Ars Technica · Amazon Fresh kills “Just Walk Out” shopping tech—it never really worked
By Ron Amadeo

I was interviewed last month for The Economist's Babbage podcast series, "The science that built AI". My hour-long conversation was edited down to about six minutes!

I am glad they edited my conversation to keep the perspective that this big-data, big-compute, deep-net approach is orthogonal to human/biological vision, and that, without incorporating biological principles (in this case, vision), autonomous visual navigation systems (i.e., self-driving cars) are unlikely to succeed, or at best limited.

Unfortunately, the podcast requires a subscription to The Economist (I too had to access it from my university account!). But if you do have access, let me know what you think!

open.spotify.com/episode/4adN2

Spotify · Babbage: The science that built the AI revolution—part three
What made AI take off? A decade ago many computer scientists were focused on building algorithms that would allow machines to see and recognise objects. In doing so they hit upon two innovations—big datasets and specialised computer chips—that quickly transformed the potential of artificial intelligence. How did the growth of the world wide web and the design of 3D arcade games create a turning point for AI? This is the third episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT? Host: Alok Jha, The Economist’s science and technology editor. Contributors: Fei-Fei Li of Stanford University; Robert Ajemian and Karthik Srinivasan of MIT; Kelly Clancy, author of “Playing with Reality”; Pietro Perona of the California Institute of Technology; Tom Standage, The Economist’s deputy editor.
#Neuroscience #History #AI
Continued thread

This paper explores 'tagging aesthetics' in new media art, blending machine vision and social media to analyze the assembly of socio-technical subjects through AI. Discussing naturalized machine vision, it delves into various subject conflicts (human-machine, classifier-classified, tech worker-data cleaner, AI-viewing public).
olh.openlibhums.org/article/id
#NewMediaArt #MachineVision #TaggingAesthetics #TechnoSocial

Open Library of Humanities · Machine Vision and Tagging Aesthetics: Assembling Socio-Technical Subjects through New Media Art
This paper builds on the concept of ‘tagging aesthetics’ (Bozzi, 2020b) to discuss new media art projects that combine machine vision and social media to address how different kinds of socio-technical subjects are assembled through AI. The premise outlines how the naturalisation of machine vision involves a range of subjects, juxtaposed along different conflictual lines: ontological (human-machine), biopolitical (classifier-classified), socio-technical (tech worker-data cleaner), political (AI-viewing public). Embracing the ambiguity inherent in the shifting boundaries of these subjects, I analyse works by different new media artists who approach one or more of these juxtapositions by engaging with diverse forms of tagging. The practice of tagging is often discussed through data-driven analyses of hashtags and how related publics can be mapped, but in my framework, tagging can encompass a wider spectrum of techno-social practices of connection (e.g. geotagging, tagging users). I discuss artworks by Kate Crawford and Trevor Paglen, Dries Depoorter and Max Dovey to illustrate how these practices can be leveraged artistically to make visible and even ‘stitch together’ the manifold subjects of machine vision. I explain how those taggings denaturalise processes of socio-technical classification by activating awareness, if not agency, through the sheer proximity they enact. Far from being a tool to map knowledge and essentialised identities, tagging aesthetics are ways to perform the techno-social and shape future cultural encounters with various forms of others.
By exploring different approaches to tagging aesthetics – (dis)identification, semi-automated assembly and embodied encounter – this paper illustrates how tagging can be used to culturally negotiate the impact of machine vision in terms of issues such as surveillance and the performance of digital identity.
Continued thread

This article delves into the challenges of troublesome machine behavior in AI, particularly concerning externalized governance. Focusing on machine vision, it explores the potential of hacking as a concept, method, and ethic in resisting surveillant vision. The 'intuition machine shift' is discussed, emphasizing a move from hacking sensorial devices to tricking intellectual seeing.
olh.openlibhums.org/article/id
#AI #MachineVision #ArtHacks #TechnologyEthics

Open Library of Humanities · Hacking Surveillance Cameras, Tricking AI and Disputing Biases: Artistic Critiques of Machine Vision
In the field of AI, troublesome machine behaviour is a recurring problem, and is particularly worrying when the governance of populations is externalised to machines. This article will focus on machine vision and explore whether hacking as a concept, a method and an ethic, as it has been appropriated by artists, makers and designers, offers ways for citizens to resist surveillant vision. By combining distant and close readings of art hacks in the ‘Database of Machine Vision in Art, Games and Narratives’ this article demonstrates a shift in resisting machine vision from hacking sensorial devices to tricking intellectual seeing. I call it the ‘intuition machine shift’ and argue that emergent with this shift is an art hacking strategy which specifically challenges biased machine vision. Drawing from critical making, tactical media and feminist theorisation of hacking, and adopting Mareille Kaufmann’s understanding of hacking as a form of disputing surveillance, this article outlines three artistic approaches to hacking machine vision: hacking surveillance cameras, tricking AI and disputing biases. The conceptual contribution of disputing biases is developed further to offer new nuanced understandings of risks and potentials of art hacks to resist biased machine vision.
Continued thread

This article delves into the intersection of machine vision, face recognition, and affect in Kazuo Ishiguro's novel "Klara and the Sun" (2021). It explores how the novel portrays cognitive and emotional acts through 'face reading' and examines the affective dilemmas of technological face recognition using Bolens's 'kinesic imagination' and Ngai's 'ugly feelings.'
olh.openlibhums.org/article/id
#Literature #MachineVision #TechnologyEthics

Open Library of Humanities · Pixel, Partition, Persona: Machine Vision and Face Recognition in Kazuo Ishiguro’s Klara and the Sun
This article examines the relationship between machine vision, face recognition and affect in Kazuo Ishiguro’s speculative fiction novel Klara and the Sun (2021). It explores the ways that Ishiguro’s novel enacts, stages, and dramatises cognitive and emotional acts of comprehension and empathy through ‘face reading’. The article takes up Guillemette Bolens’s theorisation of ‘kinesic imagination’ and Sianne Ngai’s concept of ‘ugly feelings’ to investigate the affective and representational dilemmas of technological face recognition in speculative fiction. Through the careful treatment of literary language, itself a complex response to rapidly evolving technology, Klara and the Sun presents instances of affective subtlety, hesitation, ambiguity, mutability, confusion and deficit to solicit an emotional response in the reader concerning the sociotechnical reception and future possibilities of machine vision and facial recognition technologies. In this way, Ishiguro’s novel offers a timely challenge to the algorithmic design principles of face-recognition technology due to its complex affective (rather than purely categorical) treatment of both human and non-human faces.