#ExLibris

Here's one last little one for you. I'll linger on this ex-libris stamp, because I've made quite a few of them. The request was this: "the image of a young brunette woman wearing a bucket hat and reading a book, in reference to Fragonard's painting La liseuse".
These are always a real joy to make, and always a challenge to take on. The illustration has to please, but I also have to manage to carve it at what are sometimes quite small sizes. Here the design measures 5 x 5 cm.

Welcome to my page!

While I find my feet on this new platform, I'll start by sharing a few hand-carved custom stamps with you. ("Creator of hand-carved custom stamps" is one of my two professional activities, but it's not only that.) Feel free to visit my website and contact me for any information: www.aurelies.fr

#wikidata #library #ExLibris #metadata people:

I have opened a ticket proposing an #ALMA sets and collections integration with #wikidata. Check it out and consider voting for it at ideas.exlibrisgroup.com/forums

linked-data-powered sets and collections

TL;DR: populate the existing ALMA sets / collections infrastructure by retrieving a list of LCCNs from an external URL, allowing linked-data-powered sets and collections.

Problem

We've all seen the book displays that most physical libraries have in their high-traffic areas. These might be works by the in-house press, topical events, anniversaries of authors' births, etc. In the digital space, there are shallow-functionality RSS feeds, and then a yawning gap out to things typically called 'digital exhibitions', which are resource-intensive glossy displays. ALMA Collections can be used for digital exhibitions, but populating them is labour-intensive and slow.

Concept

It is relatively easy to use wikidata queries to return lists of LCCNs of authors who meet particular criteria. See https://www.wikidata.org/wiki/User:Stuartyeates/PeopleForBookDisplays for queries that display "people associated with an institution", "writers of a particular genre", "authors of a particular subject area", "New Zealand LGBTI+ people", "Māori people" and "New Zealand people born today / this month". The result set of these queries can be downloaded as a TSV (Tab-Separated Values) file, which for a single-column result set is isomorphic with a whitespace-separated list of LCCNs (see the first sketch after this post). A method of populating ALMA bibliographic sets from the holdings matching LCCNs downloaded from an external source would hugely speed up the making of these kinds of collections.

Technical implementation options

The simplest implementation would be a dialog to create a set from a query URL, either an itemised set or a logical set (which would reload from the URL monthly or on demand). All the same set types and functionality should be available. You'd need a drop-down for which field to match against. A more sophisticated implementation might be uploading a CSV with the fields ID, CollectID, ParentCollectID, CollectName, CollectDesc, CollectionImage, LCCNsURL, ... and having ALMA create a set for each row, retrieve the LCCNs and match them to BIBs, load the BIBs into the set, and create a collection (or reuse an existing collection if a CollectID has been specified) and populate it with the set and the metadata provided (this batch workflow is sketched after this post).

Possible complications (technical / ALMA related)

* Very frequent retrieval of sets from an external source is likely to result in ALMA instances being blocked as bad netizens and needs to be avoided (a simple caching guard is sketched after this post). In the very long term, if widely used, a clone of wikidata may be required; several organisations already maintain such things, refreshed weekly from the daily wikidata dumps.
* The ability of anonymous third parties to update wikidata means that human oversight of public-facing changes in ALMA is needed.
* There should not be any non-public or financial information involved, but the connection probably still needs to be HTTPS, not HTTP.
* These URLs are VERY long, because the entire query is in the URL; some pieces of infrastructure may not be prepared for such long URLs.
* This being corporate software, you'll probably want a whitelist of hosts that can be connected to, similar to the FTP server list.

Possible complications (wikidata / representation related)

* Some concepts (LGBTI+, ethnicity, etc.) are represented cumbersomely in wikidata, but by allowing any query this approach lets institutions make their own choice among the options.
* Some fields (dates of birth for living people, ethnicity, etc.) are sparse in wikidata. Individual libraries are free to add / correct details for individuals they have a sufficiently strong interest in and suitable sources for.
* In theory it is possible to translate Library of Congress Subject Headings (P244) to wikidata main subject (P921) to bibliographic items. In my experience this does not work in practice, due to excessive sparseness.
* Wikidata representation of book banning is a work in progress.
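
To make the Concept concrete, here is a minimal sketch, assuming Python 3.9+ and the public Wikidata Query Service endpoint, of pulling a single-column LCCN result set as TSV. The SPARQL query (New Zealand writers with a Library of Congress authority ID) is an illustrative stand-in, not one of the queries from the wiki page above.

```python
# Minimal sketch: fetch a one-column LCCN result set from WDQS as TSV.
import urllib.parse
import urllib.request

WDQS = "https://query.wikidata.org/sparql"

# Illustrative query only: New Zealand writers who have a Library of
# Congress authority ID (P244). Q664 = New Zealand, Q36180 = writer.
QUERY = """
SELECT ?lccn WHERE {
  ?person wdt:P27 wd:Q664 ;
          wdt:P106 wd:Q36180 ;
          wdt:P244 ?lccn .
}
"""

def fetch_lccns(query: str = QUERY) -> list[str]:
    """Run the query and return the single result column as a list of LCCNs."""
    url = WDQS + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(url, headers={
        "Accept": "text/tab-separated-values",       # ask WDQS for TSV
        "User-Agent": "lccn-set-sketch/0.1 (demo)",  # WDQS asks for a real UA
    })
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # The first line is the TSV header (?lccn); the SPARQL TSV format
    # serialises literals Turtle-style, so strip the surrounding quotes.
    return [line.strip().strip('"') for line in body.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    print("\n".join(fetch_lccns()[:10]))
```

Because a single-column result is one value per line, the output is exactly the whitespace-separated list of LCCNs the proposal describes.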
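The "more sophisticated implementation" does not exist in ALMA today; that is what the idea is asking for. The following is therefore a hypothetical mock of the proposed CSV-driven batch workflow, not a real ALMA API: the column names come from the proposal, while the three helper functions are stand-ins for ALMA internals, stubbed so the skeleton runs.

```python
# Hypothetical mock of the proposed CSV-driven batch workflow.
# All helpers below are stubs standing in for ALMA internals.
import csv

def fetch_lccn_list(url: str) -> list[str]:
    print(f"  would fetch LCCN list from {url}")
    return []  # stub: see the WDQS fetch sketched above

def match_lccns_to_bibs(lccns: list[str]) -> list[str]:
    print(f"  would match {len(lccns)} LCCNs against held BIBs")
    return []  # stub: no real catalogue to search

def create_or_update_set(set_id: str, bibs: list[str]) -> str:
    print(f"  would load {len(bibs)} BIBs into set {set_id}")
    return set_id

def populate_collection(*, collect_id, parent_id, name, description, image, set_id):
    verb = "reuse" if collect_id else "create"
    print(f"  would {verb} collection {collect_id or '(new)'} '{name}' from set {set_id}")

def load_collections(csv_path: str) -> None:
    """One row per collection: ID, CollectID, ParentCollectID, CollectName,
    CollectDesc, CollectionImage, LCCNsURL (fields from the proposal)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # 1. Retrieve the whitespace-separated LCCN list from the row's URL.
            lccns = fetch_lccn_list(row["LCCNsURL"])
            # 2. Match LCCNs to held BIB records.
            bibs = match_lccns_to_bibs(lccns)
            # 3. Create or refresh a set from the matched BIBs.
            set_id = create_or_update_set(row["ID"], bibs)
            # 4. Create or reuse a collection; attach the set and metadata.
            populate_collection(
                collect_id=row.get("CollectID"),
                parent_id=row.get("ParentCollectID"),
                name=row["CollectName"],
                description=row["CollectDesc"],
                image=row.get("CollectionImage"),
                set_id=set_id,
            )
```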
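For the first technical complication (hammering the external source), a simple guard is to cache each query URL's TSV on disk and refetch only after the reload interval has elapsed. A minimal sketch, with an arbitrary cache layout and the proposal's "monthly" interval as assumptions:

```python
# Minimal caching guard: serve a cached TSV unless it is older than the
# refresh interval, so ALMA instances are not "bad netizens" toward WDQS.
import hashlib
import os
import time
import urllib.request

CACHE_DIR = "lccn_cache"
REFRESH_SECONDS = 30 * 24 * 3600  # the proposal's "monthly" reload

def cached_fetch(url: str) -> str:
    os.makedirs(CACHE_DIR, exist_ok=True)
    # Stable file name per query URL (the URLs themselves are very long).
    path = os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest() + ".tsv")
    fresh = os.path.exists(path) and (time.time() - os.path.getmtime(path)) < REFRESH_SECONDS
    if not fresh:
        req = urllib.request.Request(url, headers={"User-Agent": "lccn-set-sketch/0.1"})
        with urllib.request.urlopen(req) as resp:
            data = resp.read().decode("utf-8")
        with open(path, "w", encoding="utf-8") as f:
            f.write(data)
    with open(path, encoding="utf-8") as f:
        return f.read()
```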