{{DISPLAYTITLE:FAQ}}<br />
<br />
'''THIS PAGE IS A WORK IN PROGRESS'''<br />
<br />
<br />
<br />
Can't find what you are looking for? Read this page, and if you still can't find it, e-mail us at [mailto:helpdesk@svde.org helpdesk@svde.org].<br />
<br />
__FORCETOC__<br />
<br />
==Frequently Asked Questions==<br />
<br />
Coming soon<br />
<br />
==Glossary==<br />
<br />
'''LOD Platform''': the framework that manages diverse catalogues and converts them into linked data. It automates the creation and publication of linked open data across different LAM domains and source formats. <br />
<br />
'''Tenant''': a separate instance of the LOD Platform in which the data of a library or network of libraries is stored within the shared infrastructure. Each branch operates independently within its designated tenant, with its own Cluster Knowledge Base and dedicated web entity discovery portal. Tenants compartmentalize and manage distinct branches under the overarching LOD Platform technology. <br />
<br />
'''Discovery Portal''': every Share Family website is referred to as a “discovery portal” or “portal”. It is an advanced entity discovery system built on the BIBFRAME framework: it features a customized user interface (UI), integrates with local APIs, and serves as the entity discovery tool of a tenant.<br />
<br />
'''Portal group''': every portal is part of one “portal group”. Portals are grouped around a shared database. For example, https://svde.org is a portal group collecting the data of [[ShareVDE:Main Page/SVDE institutions|several institutions]] that are members of the Share-VDE initiative.<br />
<br />
While the portal group shows the data of all the institutions feeding the database, an institutional portal shows only the data of the institution for which it has been designed (see below).<br />
<br />
The portal URL is configurable and also serves as the base URI, i.e. the prefix of the URIs created by the system to uniquely identify its entities. For example, the Share-VDE portal base URI is https://svde.org.<br />
<br />
'''Primary and secondary/institutional/skin portal''': each portal group has one “primary” portal, which is the main entity discovery portal of a tenant. This is the non-branded version of the portal that represents the project as a whole, rather than a particular institution or variation of the website. All other portals in a group are '''“secondary” or institutional portals''' (also informally called “'''skin portals'''” in Share Family jargon). The primary portal has certain minor differences compared to secondary portals. “Secondary” is the term used to configure features in the back office, and in this context it is synonymous with “institutional”.<br />
<br />
For example, the primary portal of the Share-VDE portal group is https://svde.org; secondary institutional portals are available for member institutions.<br />
<br />
'''Local system''': the local systems environment at the library, referring generically to any of its components (e.g. ILS/LSP, local OPAC, etc.). <br />
<br />
'''Cluster / Entity:''' consolidated grouping of metadata representing various kinds of linked data descriptions about agents/contributors, intellectual works, and specific publications, created through a process that reconciles variations and assigns unique identifiers. This helps in managing and presenting data from different sources in a coherent and organized manner. Each cluster is identified by a URI. <br />
<br />
'''Cluster Knowledge Base – CKB''' (also named '''Entity Knowledge Base'''): a collaborative source of high quality data, the CKB includes the clusters of entities created in the reconciliation and conversion of bibliographic and authority data to linked data. It’s the LOD Platform database of linked data entities. <br />
<br />
'''Entity registry:''' tool in which the association between clusters and the URIs that identify such clusters is registered, and where all the changes affecting this association are reported. <br />
<br />
'''JCricket''': the tool for collaborative linked data entity management. It enables entity curation according to the BIBFRAME ontology (e.g. creation of new entities, entity modification, and the application of entity merge and split functions). Its aim is to improve the quality of the Entity Knowledge Base, which is a live source produced through clustering and daily update processes. JCricket is a manual application that makes it possible to manage bibliographic data in the form of entities. It empowers professional librarians to enhance the quality of the machine-generated data output of the MARC to BIBFRAME transformation.<br />
<br />
'''Clusterization tool''': the tool that applies clustering logic to data coming from different, often non-homogeneous sources in order to create each entity as a Real-World Object (RWO) and assign it a unique identifier. By clustering we mean the identification of entities with large-scale fuzzy name matching techniques, using different text analysis methods.<br />
<br />
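As a rough illustration of the fuzzy name matching mentioned above (an illustration only: the names, normalisation rules and threshold below are hypothetical and do not reflect the actual Share Family clustering logic), a minimal sketch in Python might look like this:<br />
<syntaxhighlight lang="python">
# Illustrative only: a toy fuzzy name-matching check, NOT the actual
# Share Family clustering algorithm. Names, threshold and normalisation
# rules below are hypothetical.
from difflib import SequenceMatcher
import unicodedata

def normalise(name: str) -> str:
    """Lower-case, strip accents and punctuation so name variants compare equally."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    return "".join(c for c in name.lower() if c.isalnum() or c.isspace()).strip()

def same_agent(a: str, b: str, threshold: float = 0.9) -> bool:
    """Return True when two name strings are similar enough to be clustered together."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(same_agent("Calvino, Italo, 1923-1985", "Calvino, Italo (1923-1985)"))  # True
print(same_agent("Calvino, Italo", "Caproni, Giorgio"))                       # False
</syntaxhighlight>
<br />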
'''Authify''': a RESTful module that provides search and full-text services over external datasets (downloaded, stored and indexed in the system), mainly authority files (VIAF, the Library of Congress Name Authority File, etc.), and extensible to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and the related search services, and a logical layer that orchestrates these services to find a match within the clusters of entities.<br />
<br />
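A minimal sketch, assuming Python with the requests library, of how a client might query an Authify-style search service; the endpoint path, parameter names and dataset codes below are assumptions made for illustration, not the documented Authify API:<br />
<syntaxhighlight lang="python">
# Hypothetical sketch of querying an Authify-style search service.
# The endpoint, parameters and dataset codes are assumptions made for
# illustration; consult the actual Authify documentation for the real API.
import requests

AUTHIFY_URL = "https://example.org/authify/search"  # placeholder host and path

def search_authority(name: str, dataset: str = "lcnaf"):
    """Look up a name heading in an indexed authority dataset (hypothetical call)."""
    response = requests.get(
        AUTHIFY_URL,
        params={"q": name, "dataset": dataset, "rows": 5},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example call (the response shape is likewise hypothetical):
# search_authority("Calvino, Italo", dataset="viaf")
</syntaxhighlight>
<br />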
'''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can therefore manage the classes and properties not only of BIBFRAME but also of other ontologies as needed.<br />
<br />
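A minimal sketch, assuming Python with rdflib, of the kind of BIBFRAME 2.0 Work/Instance description that a converter such as RDFizer publishes; the entity URIs are made-up examples and the mapping is drastically simplified compared to a real MARC conversion:<br />
<syntaxhighlight lang="python">
# Minimal sketch of a BIBFRAME 2.0 Work/Instance description of the kind
# RDFizer publishes. The entity URIs below are made-up examples; a real
# conversion covers far more of the source MARC record.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
g.bind("bf", BF)

work = URIRef("https://svde.org/example/work/123")          # hypothetical local URI
instance = URIRef("https://svde.org/example/instance/456")  # hypothetical local URI

g.add((work, RDF.type, BF.Work))
g.add((work, RDFS.label, Literal("Le città invisibili")))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))

print(g.serialize(format="turtle"))
</syntaxhighlight>
<br />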
'''URI''': a URI is a unique identifier for an entity (author, resource, etc.):<br />
<br />
* ''Local URI'': assigned exclusively to entities within a specific tenant (i.e. SVDE URIs for entities within the SVDE tenant);<br />
<br />
* ''External URI'': a URI from an external authoritative source, added in the data processing phase (e.g. VIAF, ISNI, etc.).<br />
<br />
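A minimal sketch, assuming Python with rdflib, of how a local cluster URI can be related to external URIs such as VIAF and ISNI; the identifiers below are placeholders and the use of owl:sameAs as the linking predicate is purely illustrative:<br />
<syntaxhighlight lang="python">
# Illustrative only: relating a hypothetical local cluster URI to external
# authority URIs. The identifiers are placeholders, and owl:sameAs is used
# here purely as an example linking predicate.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
local_uri = URIRef("https://svde.org/example/agent/789")      # hypothetical local URI
viaf_uri = URIRef("http://viaf.org/viaf/000000000")           # placeholder VIAF URI
isni_uri = URIRef("https://isni.org/isni/0000000000000000")   # placeholder ISNI URI

g.add((local_uri, OWL.sameAs, viaf_uri))
g.add((local_uri, OWL.sameAs, isni_uri))

print(g.serialize(format="turtle"))
</syntaxhighlight>
<br />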
'''SVDE Ontology''': an extension to BIBFRAME modelled within the Share Family initiative. While the ontology supports the discovery functionality of the Share Family search systems, it may be re-used in any system requiring a bridge among BIBFRAME, IFLA LRM and RDA. The preliminary version of the SVDE Ontology has been published and can be consulted at https://doi.org/10.5281/zenodo.8332350.
<hr />
{{DISPLAYTITLE:Share Family Linked Data Ecosystem}}<br />
<br />
The '''[http://www.share-family.org/ Share Family linked data ecosystem]''' comprises several collaborative LOD (Linked Open Data) environments:<br />
* '''[https://www.svde.org/ Share-VDE]''' (Virtual Discovery Environment);<br />
* '''[http://catalogo.share-cat.unina.it/sharecat/clusters Share-Catalogue]''' - the Italian network of university libraries;<br />
* [https://pcc-lod.org '''the PCC data pool'''] - the Program for Cooperative Cataloging (PCC) Catalogue in Linked Open Data;<br />
* [https://natbib-lod.org '''National Bibliographies'''] in Linked Open Data;<br />
* [https://parsifal.urbe.it/parsifal '''Parsifal'''] - the LOD portal of the URBE consortium (Roman Union of Ecclesiastical Libraries);<br />
* '''[https://www.kubikat-lod.org/ Kubikat-LOD]''' pilot project - the LOD portal for the catalogues of Kubikat Art History libraries;<br />
* three pilot projects, Share-Art, Share-Music and Share-MIA (Manuscripts, Incunabula, Ancient books), in the Art, Music and Ancient book domains respectively;<br />
* [https://lillit.share-family.org/lillit/ '''LILLIT''']: portal for Italian illustrated books 1501-1800.<br />
<br />
The different characteristics of each field are a useful asset that benefits not only the Share Family as a whole, but also each individual discipline.<br />
<br />
Being part of the Share Family linked data ecosystem facilitates the cataloguing and publication of bibliographic records as linked data. It supports the transition from the traditional cataloguing environment to innovative models that apply the linked data paradigm, and provides the LAM (Libraries, Archives, Museums) domain and information professionals with a more comprehensive suite of tools.<br />
<br />
The platform [https://www.svde.org/ www.svde.org] and the other dedicated environments that are part of the Share Family enhance the discovery potential of library resources and unveil information that would otherwise remain hidden in archives, enabling access to a wealth of data that can be exported and re-imported by the participating institutions.<br />
<br />
The output common to all the branches of the Share Family includes:<br />
<br />
*the enrichment of original MARC data, and of the records converted to linked data, with identifiers from external sources (e.g. ISNI, VIAF) and original Share identifiers;<br />
*the reconciliation and clusterization of entities;<br />
*the indexing of records in the Cluster Knowledge Base, the authoritative linked data environment;<br />
*the conversion of library catalogues from MARC to linked data;<br />
*the delivery of converted and enriched data to libraries for reuse in their systems;<br />
*the publication of library records in linked data on the respective Share discovery platform.<br />
<br />
Download the '''[https://bit.ly/Share-Family_brochure_2023-June Share Family brochure]''' for public distribution.<br />
<br />
==The Share Family technology: the LOD Platform==<br />
The technology underlying the Share Family systems is based on the LOD Platform, a highly innovative technological framework: an integrated ecosystem for the management of bibliographic, archival and museum catalogues and their conversion to linked data, extensible as needed for specific purposes.<br />
<br />
For more details, see a summary of the [https://wiki.share-vde.org/w/images/5/54/share_components_EN.pdf '''main components of the LOD Platform'''] and an [https://wiki.share-vde.org/w/images/a/ae/LOD_Platform_2021-02_ENG.pdf '''extensive description'''] of the framework.<br />
<br />
The Share Family technology relies on a '''[https://wiki.share-vde.org/w/images/1/1d/Schema_Share_family_tenant.png tenant infrastructure]'''. In the system architecture, a tenant is a pool of institutions contributing to the same Cluster Knowledge Base. Multiple tenants form the Share Family. Family members can interoperate among their respective Cluster Knowledge Bases through a centralized registry. <br />
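<br />
The following data-model sketch (in Python, with invented class and field names that are not part of the LOD Platform code) summarises how tenants, Cluster Knowledge Bases and portals relate in this architecture:<br />
<syntaxhighlight lang="python">
# Illustrative data model of the tenant architecture described above.
# Class and field names are invented for this sketch and do not reflect
# the actual LOD Platform implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Portal:
    url: str
    primary: bool = False   # one primary portal per tenant; the rest are institutional

@dataclass
class Tenant:
    name: str
    institutions: List[str]                  # institutions feeding the same CKB
    ckb: str                                 # the tenant's Cluster Knowledge Base
    portals: List[Portal] = field(default_factory=list)

share_vde = Tenant(
    name="Share-VDE",
    institutions=["Stanford University", "Yale University"],  # excerpt of members
    ckb="Share-VDE Cluster Knowledge Base",
    portals=[Portal("https://svde.org", primary=True),
             Portal("https://stanford.svde.org")],
)
</syntaxhighlight>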
==The Share Family branches (tenants)==<br />
<br />
The Share Family of initiatives includes different branches and sister projects, supported by the same LOD Platform technology. Each branch or project is hosted in a specific tenant of the system architecture with a corresponding specific Cluster Knowledge Base and a dedicated web entity discovery portal. For more details on how the Share Family tenant infrastructure is designed see the [https://wiki.share-vde.org/w/images/5/51/Schema_Share_Family_tenant_slides.pdf '''Summary of Share Family tenants''']. <br />
<br />
In some cases, within a single tenant a customised skin (i.e. a sub-portal of the main entity discovery portal) can be created to address the ad hoc needs of an institution, or group of institutions, that wishes to expose only its own data or to integrate local services into the Share Family environment. For example, the entity discovery portal at svde.org is the discovery portal corresponding to the Share-VDE tenant, which includes a pool of data from several institutions, together with the respective skin portals / institutional portals. <br />
<br />
While the main entity discovery portal of a tenant shows the data of all the institutions feeding the tenant's Cluster Knowledge Base, a skin portal / institutional portal shows only the data of the institution or group of institutions for which it has been designed.<br />
{| class="wikitable" style=""<br />
|+<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" |'''Tenant name'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Share-VDE'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" class="" |'''Share-Catalogue'''<br />
! style="border-left-width:1px;" class="" |'''PCC data pool'''<br />
! style="" class="" |'''National Bibliographies'''<br />
! style="" class="" |'''Parsifal'''<br />
! style="vertical-align:middle;" |'''LILLIT'''<br />
! style="" class="" |'''Kubikat-LOD pilot project'''<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Tenant discovery portal URL'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |https://svde.org<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |http://catalogo.share-cat.unina.it/<br />
| style="border-left-width:1px;" |https://pcc-lod.org<br />
|https://natbib-lod.org<br />
|https://parsifal.urbe.it/parsifal/<br />
|https://lillit.share-family.org/lillit/<br />
| style="background-color:#f8f9fa;" class="" |https://kubikat-lod.org<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-bottom-style:solid;" class="" |'''Data hosted in tenant CKB'''<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;" |catalogues of SVDE member libraries converted to linked data<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |catalogues of Share-Catalogue member libraries converted to linked data<br />
| style="border-left-width:1px;" |records of PCC members converted to linked data<br />
|national bibliographies of institutions participating in this branch, converted to linked data<br />
|catalogues of Parsifal member libraries (libraries of the URBE consortium - Roman Union of Ecclesiastical Libraries) converted to linked data<br />
|Linked Open Data descriptions and illustrations of Italian editions printed in the 16th-18th centuries<br />
|catalogues of Kubikat art history libraries converted to linked data<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;border-top-style:solid;" class="" |'''Institutional portals within the same tenant'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;" class="" |one skin portal for each member institution<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;" |not available yet<br />
| style="border-left-width:1px;" |not foreseen<br />
|British National Bibliography (others TBD)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|-<br />
| style="background-color:#eaecf0;border-top-width:1px;" class="" |'''Institutional portal URL'''<br />
| style="background-color:#f8f9fa;border-top-width:1px;" class="" |initial version of skin portals (others in progress): <br />
<br />
https://duke.svde.org/<br />
<br />
https://loc.svde.org/ <br />
<br />
https://natlibfi.svde.org/ <br />
<br />
https://nln.svde.org/ <br />
<br />
https://nyu.svde.org/<br />
<br />
https://penn.svde.org/<br />
<br />
https://smithsonian.svde.org/<br />
<br />
https://stanford.svde.org/ <br />
<br />
https://ualberta.svde.org/ <br />
<br />
https://uchicago.svde.org<br />
<br />
https://yale.svde.org<br />
| style="border-top-width:1px;" |not available yet<br />
| style="width:15%;" |not foreseen<br />
|https://bl.natbib-lod.org/ (preview of a beta site)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|}<br />
==The Share Family institutions and map==<br />
Share Family institutions can participate in one or more tenants. Below is a summary of the institutions that are part of the Share Family tenants and their network. <br />
{| class="sortable mw-collapsible casablanca" style="width:100%;"<br />
|-<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-VDE tenant''' <br />
'''full members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-Catalogue Institutions'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''PCC tenant members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''LD4P cohort members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''National Bibliographies''' <br />
'''tenant'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#d2d2d2;border-top-color:#acacac;border-bottom-color:#acacac;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Parsifal'''<br />
! style="border-left-color:#d2d2d2;border-right-color:#d2d2d2;border-top-color:#d2d2d2;border-bottom-color:#d2d2d2;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="col-blue-dark-bg" |LILLIT<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#d2d2d2;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''Kubikat-LOD''' <br />
'''pilot project'''<br />
|-<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Berkeley Law Library<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Università Degli Studi di Napoli “Federico II”<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |PCC member libraries<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Cornell University<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |The British Library* - British National Bibliography<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Accademia Alfonsiana<br />
| style="border-top-color:#d2d2d2;border-top-style:init;" |La Sapienza Università di Roma<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Kunsthistorisches <br />
Institut in Florenz<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Duke University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Basilicata<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Frick Art Reference Library<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Centro Pro Unione<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Biblioteca Hertziana, <br />
Rome<br />
|-<br />
|Lehigh University<br />
|Università degli Studi di Cassino<br />
|<br />
|Texas A&M University<br />
|<br />
|Pontificium Institutum Patristicum Augustinianum<br />
|<br />
|Deutsches Forum für<br />
Kunstgeschichte, Paris<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |New York University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università Degli Studi di Napoli L’Orientale<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harry Ransom Center Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà di Scienze dell'Educazione "Auxilium"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Central Institute of Art <br />
History, Munich<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Stanford University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Napoli Parthenope<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harvard University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà Teologica "Marianum"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Alberta - NEOS consortium<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università del Salento<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Medicine<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Antonianum<br />
|<br />
| style="border-bottom-width:2px;border-bottom-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:solid;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Chicago<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Salerno<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Northwestern University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università della Santa Croce<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''<span class="col-white">Share-Art pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Michigan at Ann Arbor<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi del Sannio <br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Princeton University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università di San Tommaso d'Aquino (Angelicum)<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-MIA (Manuscripts, Incunabula and Ancient books) pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Pennsylvania<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Campania “Luigi Vanvitelli”<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California Davis<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università Gregoriana<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-Music pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Toronto<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi Suor Orsola Benincasa<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California San Diego<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Lateranense<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
|Yale University<br />
|<br />
|<br />
|University of Washington<br />
|<br />
|Università Pontificia Salesiana<br />
|<br />
|<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Library of Congress*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University Colorado at Boulder<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Urbaniana<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Finland*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Minnesota<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Ateneo Sant'Anselmo<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="vertical-align:middle;text-align:left;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="" |National Library of Norway*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Biblico<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Smithsonian Institution*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Teologico "Giovanni Paolo II" per le Scienze del Matrimonio e della Famiglia<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|}<br />
<br />
<nowiki>*</nowiki>National Libraries<br />
<br />
<br />
The Share Family map can also be consulted on a [http://bit.ly/Share_map_2019 dedicated web page].<br />
<br />
[[File:Share Family 2024.png|976x976px]]<br />
<br />
__FORCETOC__<br />
__NEWSECTIONLINK__
<hr />
<div><br />
<br />
<br />
{{DISPLAYTITLE:Share Family Linked Data Ecosystem}}<br />
<br />
The '''[http://www.share-family.org/ Share Family linked data ecosystem]''' comprises several collaborative LOD - Linked Open Data environments:<br />
* '''[https://www.svde.org/ Share-VDE]''' (Virtual Discovery Environment);<br />
* '''[http://catalogo.share-cat.unina.it/sharecat/clusters Share-Catalogue]''' - the Italian network of university libraries;<br />
* [https://pcc-lod.org '''the PCC data pool'''] - the Program for Cooperative Cataloging (PCC) Catalogue in Linked Open Data;<br />
* [https://natbib-lod.org '''National Bibliographies'''] in Linked Open Data;<br />
* [https://parsifal.urbe.it/parsifal '''Parsifal'''] - the LOD portal of the URBE consortium (Roman Union of Ecclesiastical Libraries);<br />
* '''[https://www.kubikat-lod.org/ Kubikat-LOD]''' pilot project - the LOD portal for the catalogues of Kubikat Art History libraries;<br />
* three pilot projects Share-Art, Share-Music, Share-MIA (Manuscripts, Incunabula, Ancient books) respectively in the Art, Music and Ancient book domains;<br />
* [https://lillit.share-family.org/lillit/ '''LILLIT''']: portal for Italian illustrated books 1501-1800.<br />
<br />
The different characteristics of each field are a useful asset that can be used to the advantage not only of the Share Family as a whole, but for each single discipline.<br />
<br />
Being part of the Share Family linked data ecosystem means facilitating cataloguing and exposition of bibliographic records in linked data, thus supporting the transition from the traditional cataloguing environment to innovative models applying the linked data paradigm, and providing the LAM - Libraries, Archives, Museums domain and information professionals with a more comprehensive suite of tools at their disposal.<br />
<br />
The platform [https://www.svde.org/ www.svde.org] and the other dedicated environments part of the Share Family enhance the discovery potential of library resources and unveil information that would otherwise have been hidden in archives, thus enabling the access to a rich amount of data that can be exported and re-imported by the participating institutions.<br />
<br />
The output common to all the branches of the Share Family foresees:<br />
<br />
*the enrichment of original MARC data and of the records converted in linked data with identifiers from external sources (e.g. ISNI, VIAF) and original Share identifiers;<br />
*the reconciliation and clusterization of entities;<br />
*the indexing of records in the Cluster Knowledge Base, authoritative environment in linked data;<br />
*the conversion of library catalogues from MARC to linked data;<br />
*delivery of converted and enriched data to libraries for reuse in their systems;<br />
*the publication of library records in linked data on the relative Share discovery platform.<br />
<br />
Download the '''[https://bit.ly/Share-Family_brochure_2023-June Share Family brochure]''' for public distribution.<br />
<br />
==The Share Family technology: the LOD Platform==<br />
The technology underlying the systems part of the Share Family is based on the LOD Platform, that is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data, extensible as needed for specific purposes.<br />
<br />
For more details, see a summary of the [https://wiki.share-vde.org/w/images/5/54/share_components_EN.pdf '''main components of the LOD Platform'''] and an [https://wiki.share-vde.org/w/images/a/ae/LOD_Platform_2021-02_ENG.pdf '''extensive description'''] of the framework.<br />
<br />
The Share Family technology relies on a '''[https://wiki.share-vde.org/w/images/1/1d/Schema_Share_family_tenant.png tenant infrastructure]'''. In the system architecture, a tenant is a pool of institutions contributing to the same Cluster Knowledge Base. Multiple tenants form the Share Family. Family members can interoperate among respective Cluster Knowledge Bases through a centralized registry. <br />
==The Share Family branches (tenants)==<br />
<br />
The Share Family of initiatives includes different branches and sister projects, supported by the same LOD Platform technology. Each branch or project is hosted in a specific tenant of the system architecture with a corresponding specific Cluster Knowledge Base and a dedicated web entity discovery portal. For more details on how the Share Family tenant infrastructure is designed see the [https://wiki.share-vde.org/w/images/5/51/Schema_Share_Family_tenant_slides.pdf '''Summary of Share Family tenants''']. <br />
<br />
In some cases, within a single tenant a customised skin (ie. a sub-portal of the main entity discovery) can be created to address ad hoc needs of an institution, or group of institutions, willing to expose only their own data or to integrate local services in the Share Family environment. For example, the entity discovery portal at svde.org is the discovery corresponding to Share-VDE tenant, including a pool of data from several institutions, and the respective skin portals / institutional portals. <br />
<br />
While the main entity discovery portal of a tenant shows the data of all the institutions feeding the tenant's Cluster Knowledge Base, the skin portal / institutional portal gives the ability to filter only the data of the institution or group of institutions that the skin/institutional portal has been designed for.<br />
{| class="wikitable" style=""<br />
|+<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" |'''Tenant name'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Share-VDE'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" class="" |'''Share-Catalogue'''<br />
! style="border-left-width:1px;" class="" |'''PCC data pool'''<br />
! style="" class="" |'''National Bibliographies'''<br />
! style="" class="" |'''Parsifal'''<br />
! style="vertical-align:middle;" |'''LILLIT'''<br />
! style="" class="" |'''Kubikat-LOD pilot project'''<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Tenant discovery portal URL'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |https://svde.org<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |http://catalogo.share-cat.unina.it/<br />
| style="border-left-width:1px;" |https://pcc-lod.org<br />
|https://natbib-lod.org<br />
|https://parsifal.urbe.it/parsifal/<br />
|https://lillit.share-family.org/lillit/<br />
| style="background-color:#f8f9fa;" class="" |https://kubikat-lod.org<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-bottom-style:solid;" class="" |'''Data hosted in tenant CKB'''<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;" |catalogues of SVDE member libraries converted to linked data<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |catalogues of Share-Catalogue member libraries converted to linked data<br />
| style="border-left-width:1px;" |records of PCC members converted to linked data<br />
|national bibliographies of institutions participating to this branch converted to linked data<br />
|catalogues of Parsifal member libraries (libraries of the URBE consortium - Roman Union of Ecclesiastical Libraries) converted to linked data<br />
|Linked Open Data descriptions and illustrations of Italian editions printed in the 16th-18th centuries<br />
|catalogues of Kubikat art history libraries converted to linked data<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;border-top-style:solid;" class="" |'''Institutional portals within the same tenant'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;" class="" |one skin portal for each member institution<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;" |not available yet<br />
| style="border-left-width:1px;" |not foreseen<br />
|British National Bibliography (others TBD)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|-<br />
| style="background-color:#eaecf0;border-top-width:1px;" class="" |'''Institutional portal URL'''<br />
| style="background-color:#f8f9fa;border-top-width:1px;" class="" |initial version of skin portals (others in progress): <br />
<br />
<span><br /></span><nowiki>https://duke.svde.org</nowiki><span><span /></span><br />
<br />
<nowiki>https://loc.svde.org/</nowiki> <br />
<br />
<nowiki>https://natlibfi.svde.org/</nowiki> <br />
<br />
<nowiki>https://nln.svde.org/</nowiki> <br />
<br />
<nowiki>https://nyu.svde.org/</nowiki><br />
<br />
<nowiki>https://penn.svde.org</nowiki><br />
<br />
<nowiki>https://smithsonian.svde.org/</nowiki> <br />
<br />
<nowiki>https://stanford.svde.org</nowiki><br />
<br />
<nowiki>https://ualberta.svde.org</nowiki><br />
| style="border-top-width:1px;" |not available yet<br />
| style="width:15%;" |not foreseen<br />
|https://bl.natbib-lod.org/ (preview of a beta site)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|}<br />
==The Share Family institutions and map==<br />
The Share Family institutions can participate in one or more tenants. Here follows the summary of the institutions part of the Share Family tenants and its network. <br />
{| class="sortable mw-collapsible casablanca" style="width:100%;"<br />
|-<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-VDE tenant''' <br />
'''full members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-Catalogue Institutions'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''PCC tenant members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''LD4P cohort members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''National Bibliographies''' <br />
'''tenant'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#d2d2d2;border-top-color:#acacac;border-bottom-color:#acacac;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Parsifal'''<br />
! style="border-left-color:#d2d2d2;border-right-color:#d2d2d2;border-top-color:#d2d2d2;border-bottom-color:#d2d2d2;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="col-blue-dark-bg" |LILLIT<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#d2d2d2;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''Kubikat-LOD''' <br />
'''pilot project'''<br />
|-<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Berkeley Law Library<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Università Degli Studi di Napoli “Federico II”<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |PCC member libraries<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Cornell University<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |The British Library* - British National Bibliography<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Accademia Alfonsiana<br />
| style="border-top-color:#d2d2d2;border-top-style:init;" |La Sapienza Università di Roma<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Kunsthistorisches <br />
Institut in Florenz<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Duke University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Basilicata<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Frick Art Reference Library<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Centro Pro Unione<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Biblioteca Hertziana, <br />
Rome<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |New York University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università Degli Studi di Napoli L’Orientale<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harry Ransom Center Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà di Scienze dell'Educazione "Auxilium"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Central Institute of Art <br />
History, Munich<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Stanford University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Napoli Parthenope<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harvard University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà Teologica "Marianum"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Deutsches Forum für <br />
Kunstgeschichte, Paris<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Alberta - NEOS consortium<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università del Salento<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Medicine<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Antonianum<br />
|<br />
| style="border-bottom-width:2px;border-bottom-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:solid;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Chicago<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Salerno<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Northwestern University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università della Santa Croce<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''<span class="col-white">Share-Art pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Michigan at Ann Arbor<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi del Sannio <br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Princeton University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università di San Tommaso d'Aquino (Angelicum)<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-MIA (Manuscripts, Incunabula and Ancient books) pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Pennsylvania<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Campania “Luigi Vanvitelli”<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California Davis<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università Gregoriana<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-Music pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Toronto<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi Suor Orsola Benincasa<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California San Diego<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Lateranense<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
|Yale University<br />
|Università degli Studi di Cassino<br />
|<br />
|University of Washington<br />
|<br />
|Università Pontificia Salesiana<br />
|<br />
|<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Library of Congress*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University Colorado at Boulder<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Urbaniana<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Finland*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Minnesota<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Ateneo Sant'Anselmo<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="vertical-align:middle;text-align:left;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="" |National Library of Norway*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Biblico<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Smithsonian Institution*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Teologico "Giovanni Paolo II" per le Scienze del Matrimonio e della Famiglia<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificium Institutum Patristicum Augustinianum<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|}<br />
<br />
<nowiki>*</nowiki>National Libraries<br />
<br />
<br />
The Share Family map can also be consulted on a [http://bit.ly/Share_map_2019 dedicated web page].<br />
<br />
[[File:Share Family map 2022.png|999x999px]]<br />
<br />
__FORCETOC__<br />
__NEWSECTIONLINK__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareVDE:Main_Page/SVDE_institutions&diff=2239ShareVDE:Main Page/SVDE institutions2024-03-08T12:47:00Z<p>Anna: </p>
<hr />
<div><br />
<br />
{{DISPLAYTITLE:Share-VDE institutions}}<br />
<br />
Share-VDE currently connects the catalogues of libraries worldwide, and includes an ad hoc pool of data from the '''PCC - Program for Cooperative Cataloging''' member libraries, in the framework of the '''[https://wiki.lyrasis.org/pages/viewpage.action?pageId=187176106 LD4P3 project]'''.<br />
<br />
The '''founding members''' of the initiative are the Library of Congress, the National Library of Norway, Stanford University, the University of Alberta, the University of Chicago, and the University of Pennsylvania.<br />
<br />
All members contribute to the initiative through the '''[https://wiki.share-vde.org/wiki/ShareVDE:Members/Share-VDE_working_groups Advisory Council and the working groups]''' devoted to the various work streams.<br />
<br />
==Share-VDE full members==<br />
[https://svde.org/ Share-VDE platform] hosts the data of the full member institutions:<br />
<br />
*Berkeley Law Library [https://www.law.berkeley.edu/library/ www.law.berkeley.edu/library/]<br />
*Duke University [http://www.duke.edu/ www.duke.edu]<br />
*Lehigh University [https://www1.lehigh.edu/ www1.lehigh.edu]<br />
*Library of Congress [http://www.loc.gov www.loc.gov]<br />
*National Library of Finland [https://www.kansalliskirjasto.fi/en www.kansalliskirjasto.fi/en]<br />
*National Library of Norway [https://www.nb.no/en/the-national-library-of-norway/ www.nb.no/en/the-national-library-of-norway/]<br />
*National Taiwan University Library [https://www.lib.ntu.edu.tw/en www.lib.ntu.edu.tw/en]<br />
*New York University [https://www.nyu.edu/ www.nyu.edu/]<br />
*Smithsonian Institution [https://www.si.edu/ www.si.edu/]<br />
*Stanford University [https://www.stanford.edu/ www.stanford.edu]<br />
*The British Library, [https://wiki.share-vde.org/wiki/ShareFamily:Main_Page#The_Share_Family_branches_(tenants) National Bibliographies tenant] [http://www.bl.uk/ www.bl.uk]<br />
*University of Alberta / NEOS Library Consortium [http://www.ualberta.ca/ www.ualberta.ca]<br />
*University of Chicago [http://www.uchicago.edu/ www.uchicago.edu]<br />
*University of Michigan Ann Arbor [https://umich.edu/ www.umich.edu]<br />
*University of Pennsylvania [http://www.upenn.edu/ www.upenn.edu]<br />
*University of Toronto [https://www.utoronto.ca/ www.utoronto.ca] <br />
*Yale University - level 1, former Cohort member, [http://www.yale.edu/ www.yale.edu]<br />
<br />
==Cohort libraries==<br />
The institutions that have taken advantage of the Share-VDE Linked Data Lifecycle Support are: <br />
<br />
*Cornell University [http://www.cornell.edu/ www.cornell.edu]<br />
*Frick Art Reference Library [http://www.frick.org/ www.frick.org]<br />
*Harry Ransom Center [http://www.hrc.utexas.edu/ www.hrc.utexas.edu]<br />
*Harvard University [http://www.harvard.edu/ www.harvard.edu]<br />
*National Library of Medicine [https://www.nlm.nih.gov/ www.nlm.nih.gov]<br />
*Northwestern University [http://www.northwestern.edu/ www.northwestern.edu]<br />
*Princeton University [http://www.princeton.edu/ www.princeton.edu]<br />
*Texas A&M University [https://www.tamu.edu/ www.tamu.edu]<br />
*University of California, Davis [http://www.ucdavis.edu/ www.ucdavis.edu]<br />
*University of California, San Diego [http://ucsd.edu/ ucsd.edu]<br />
*University of Colorado Boulder [http://www.colorado.edu/ www.colorado.edu]<br />
*University of Minnesota [https://twin-cities.umn.edu/ twin-cities.umn.edu]<br />
*University of Washington [https://www.washington.edu/ www.washington.edu]<br />
<br />
==R&D phases==<br />
The institutions that participated in the R&D phases are:<br />
<br />
*University of California, Berkeley [http://www.berkeley.edu/ www.berkeley.edu]<br />
*Massachusetts Institute of Technology [http://www.mit.edu/ www.mit.edu]<br />
*Columbia University [http://www.columbia.edu/ www.columbia.edu]<br />
*Pennsylvania State University [http://www.psu.edu/ www.psu.edu]<br />
*University of Toronto [http://www.utoronto.ca/ www.utoronto.ca]<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:Main_Page&diff=2238ShareFamily:Main Page2024-03-08T12:45:06Z<p>Anna: </p>
<hr />
<div><br />
<br />
<br />
{{DISPLAYTITLE:Share Family Linked Data Ecosystem}}<br />
<br />
The '''[http://www.share-family.org/ Share Family linked data ecosystem]''' comprises several collaborative LOD - Linked Open Data environments:<br />
* '''[https://www.svde.org/ Share-VDE]''' (Virtual Discovery Environment);<br />
* '''[http://catalogo.share-cat.unina.it/sharecat/clusters Share-Catalogue]''' - the Italian network of university libraries;<br />
* [https://pcc-lod.org '''the PCC data pool'''] - the Program for Cooperative Cataloging (PCC) Catalogue in Linked Open Data;<br />
* [https://natbib-lod.org '''National Bibliographies'''] in Linked Open Data;<br />
* [https://parsifal.urbe.it/parsifal '''Parsifal'''] - the LOD portal of the URBE consortium (Roman Union of Ecclesiastical Libraries);<br />
* '''[https://www.kubikat-lod.org/ Kubikat-LOD]''' pilot project - the LOD portal for the catalogues of Kubikat Art History libraries;<br />
* three pilot projects Share-Art, Share-Music, Share-MIA (Manuscripts, Incunabula, Ancient books) respectively in the Art, Music and Ancient book domains;<br />
* [https://lillit.share-family.org/lillit/ '''LILLIT''']: portal for Italian illustrated books 1501-1800.<br />
<br />
The different characteristics of each field are a useful asset that benefits not only the Share Family as a whole, but also each individual discipline.<br />
<br />
Being part of the Share Family linked data ecosystem facilitates the cataloguing and publication of bibliographic records as linked data. It supports the transition from the traditional cataloguing environment to innovative models based on the linked data paradigm, and provides the LAM (Libraries, Archives, Museums) domain and information professionals with a more comprehensive suite of tools.<br />
<br />
The platform [https://www.svde.org/ www.svde.org] and the other dedicated environments that are part of the Share Family enhance the discovery potential of library resources and unveil information that would otherwise remain hidden in archives, enabling access to a wealth of data that can be exported and re-imported by the participating institutions.<br />
<br />
The output common to all the branches of the Share Family includes (a minimal sketch of the enrichment step follows the list):<br />
<br />
*the enrichment of original MARC data and of the records converted into linked data with identifiers from external sources (e.g. ISNI, VIAF) and original Share identifiers;<br />
*the reconciliation and clusterization of entities;<br />
*the indexing of records in the Cluster Knowledge Base, an authoritative linked data environment;<br />
*the conversion of library catalogues from MARC to linked data;<br />
*the delivery of converted and enriched data to libraries for reuse in their systems;<br />
*the publication of library records in linked data on the relevant Share discovery platform.<br />
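<br />
As a minimal sketch only (not the LOD Platform code), the enrichment step can be pictured as expressing a MARC-derived agent as linked data under a Share cluster URI and linking it to external identifiers. The example assumes the Python rdflib library and the BIBFRAME vocabulary; the cluster URI prefix and identifier values are placeholders, not actual Share-VDE data.<br />
<syntaxhighlight lang="python">
# Minimal sketch: a MARC-derived agent expressed as linked data under a Share
# cluster URI and enriched with external identifiers (placeholder values).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")    # BIBFRAME vocabulary
SVDE = Namespace("https://svde.org/agents/")                 # hypothetical cluster URI prefix

g = Graph()
agent = SVDE["123456"]                                       # hypothetical Share cluster URI
g.add((agent, RDF.type, BF.Agent))
g.add((agent, RDFS.label, Literal("Example Author, 1900-1980")))

# Enrichment with identifiers from external sources (illustrative placeholders)
g.add((agent, OWL.sameAs, URIRef("https://isni.org/isni/0000000000000000")))
g.add((agent, OWL.sameAs, URIRef("https://viaf.org/viaf/000000000")))

print(g.serialize(format="turtle"))
</syntaxhighlight>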
<br />
Download the '''[https://bit.ly/Share-Family_brochure_2023-June Share Family brochure]''' for public distribution.<br />
<br />
==The Share Family technology: the LOD Platform==<br />
The technology underlying the Share Family systems is the LOD Platform, a highly innovative technological framework: an integrated ecosystem for the management of bibliographic, archival and museum catalogues and their conversion to linked data, extensible as needed for specific purposes.<br />
<br />
For more details, see a summary of the [https://wiki.share-vde.org/w/images/5/54/share_components_EN.pdf '''main components of the LOD Platform'''] and an [https://wiki.share-vde.org/w/images/a/ae/LOD_Platform_2021-02_ENG.pdf '''extensive description'''] of the framework.<br />
<br />
The Share Family technology relies on a '''[https://wiki.share-vde.org/w/images/1/1d/Schema_Share_family_tenant.png tenant infrastructure]'''. In the system architecture, a tenant is a pool of institutions contributing to the same Cluster Knowledge Base. Multiple tenants form the Share Family, and family members can interoperate among their respective Cluster Knowledge Bases through a centralized registry (a hypothetical sketch of such a registry is shown below). <br />
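<br />
The wiki does not specify how the centralized registry is implemented; the following is a purely hypothetical sketch of how such a registry could map each tenant to its Cluster Knowledge Base and portals. All endpoint names are assumptions made for illustration.<br />
<syntaxhighlight lang="python">
# Hypothetical sketch of a tenant registry: each tenant has its own
# Cluster Knowledge Base (CKB), one primary portal and optional institutional portals.
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    primary_portal: str                       # base URI of the main discovery portal
    ckb_endpoint: str                         # assumed CKB query endpoint (illustrative)
    institutional_portals: list[str] = field(default_factory=list)

REGISTRY = [
    Tenant("Share-VDE", "https://svde.org", "https://svde.org/sparql",
           ["https://duke.svde.org", "https://stanford.svde.org"]),
    Tenant("National Bibliographies", "https://natbib-lod.org",
           "https://natbib-lod.org/sparql", ["https://bl.natbib-lod.org/"]),
]

def tenant_for_portal(portal_url: str) -> Tenant | None:
    """Resolve which tenant a given (primary or institutional) portal belongs to."""
    for tenant in REGISTRY:
        if portal_url == tenant.primary_portal or portal_url in tenant.institutional_portals:
            return tenant
    return None
</syntaxhighlight>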
==The Share Family branches (tenants)==<br />
<br />
The Share Family of initiatives includes different branches and sister projects, supported by the same LOD Platform technology. Each branch or project is hosted in a specific tenant of the system architecture with a corresponding specific Cluster Knowledge Base and a dedicated web entity discovery portal. For more details on how the Share Family tenant infrastructure is designed see the [https://wiki.share-vde.org/w/images/5/51/Schema_Share_Family_tenant_slides.pdf '''Summary of Share Family tenants''']. <br />
<br />
In some cases, within a single tenant a customised skin (i.e. a sub-portal of the main entity discovery portal) can be created to address the ad hoc needs of an institution, or group of institutions, that wishes to expose only its own data or to integrate local services in the Share Family environment. For example, the entity discovery portal at svde.org is the discovery portal corresponding to the Share-VDE tenant, including a pool of data from several institutions, and the respective skin portals / institutional portals. <br />
<br />
While the main entity discovery portal of a tenant shows the data of all the institutions feeding the tenant's Cluster Knowledge Base, the skin/institutional portal filters the view down to the data of the institution or group of institutions that it has been designed for (an illustrative sketch of this filtering is shown below).<br />
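<br />
As an illustration only, the filtering performed by an institutional portal can be pictured as a query against the tenant's Cluster Knowledge Base restricted to one institution's holdings. The endpoint URL, the institution URI and the use of the BIBFRAME bf:heldBy property below are assumptions for the sketch, not the documented Share-VDE API.<br />
<syntaxhighlight lang="python">
# Hypothetical sketch: query a tenant's CKB and keep only items held by one institution.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://svde.org/sparql"                     # assumed endpoint
INSTITUTION = "https://svde.org/agents/example-library"  # hypothetical institution URI

query = f"""
PREFIX bf:   <http://id.loc.gov/ontologies/bibframe/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE {{
  ?item a bf:Item ;
        bf:heldBy <{INSTITUTION}> ;
        rdfs:label ?label .
}} LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["item"]["value"], "-", binding["label"]["value"])
</syntaxhighlight>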
{| class="wikitable" style=""<br />
|+<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" |'''Tenant name'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Share-VDE'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" class="" |'''Share-Catalogue'''<br />
! style="border-left-width:1px;" class="" |'''PCC data pool'''<br />
! style="" class="" |'''National Bibliographies'''<br />
! style="" class="" |'''Parsifal'''<br />
! style="vertical-align:middle;" |'''LILLIT'''<br />
! style="" class="" |'''Kubikat-LOD pilot project'''<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Tenant web portal url'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |https://svde.org<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |http://catalogo.share-cat.unina.it/<br />
| style="border-left-width:1px;" |https://pcc-lod.org<br />
|https://natbib-lod.org<br />
|https://parsifal.urbe.it/parsifal/<br />
|https://lillit.share-family.org/lillit/<br />
| style="background-color:#f8f9fa;" class="" |https://kubikat-lod.org<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-bottom-style:solid;" class="" |'''Data hosted in tenant CKB'''<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;" |catalogues of SVDE member libraries converted to linked data<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |catalogues of Share-Catalogue member libraries converted to linked data<br />
| style="border-left-width:1px;" |records of PCC members converted to linked data<br />
|national bibliographies of institutions participating in this branch converted to linked data<br />
|catalogues of Parsifal member libraries (libraries of the URBE consortium - Roman Union of Ecclesiastical Libraries) converted to linked data<br />
|Linked Open Data descriptions and illustrations of Italian editions printed in the 16th-18th centuries<br />
|catalogues of Kubikat art history libraries converted to linked data<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;border-top-style:solid;" class="" |'''Institutional portals within the same tenant'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;" class="" |one skin portal for each member institution<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;" |not available yet<br />
| style="border-left-width:1px;" |not foreseen<br />
|British National Bibliography (others TBD)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|-<br />
| style="background-color:#eaecf0;border-top-width:1px;" class="" |'''Institutional portal URL'''<br />
| style="background-color:#f8f9fa;border-top-width:1px;" class="" |initial version of skin portals (others in progress): <br />
<br />
<nowiki>https://duke.svde.org</nowiki><br />
<br />
<nowiki>https://loc.svde.org/</nowiki> <br />
<br />
<nowiki>https://natlibfi.svde.org/</nowiki> <br />
<br />
<nowiki>https://nln.svde.org/</nowiki> <br />
<br />
<nowiki>https://nyu.svde.org/</nowiki><br />
<br />
<nowiki>https://penn.svde.org</nowiki><br />
<br />
<nowiki>https://smithsonian.svde.org/</nowiki> <br />
<br />
<nowiki>https://stanford.svde.org</nowiki><br />
<br />
<nowiki>https://ualberta.svde.org</nowiki><br />
| style="border-top-width:1px;" |not available yet<br />
| style="width:15%;" |not foreseen<br />
|https://bl.natbib-lod.org/ (preview of a beta site)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|}<br />
==The Share Family institutions and map==<br />
The Share Family institutions can participate in one or more tenants. The following table summarizes the institutions that are part of each Share Family tenant and its network. <br />
{| class="sortable mw-collapsible casablanca" style="width:100%;"<br />
|-<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-VDE tenant''' <br />
'''full members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-Catalogue Institutions'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''PCC tenant members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''LD4P cohort members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''National Bibliographies''' <br />
'''tenant'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#d2d2d2;border-top-color:#acacac;border-bottom-color:#acacac;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Parsifal'''<br />
! style="border-left-color:#d2d2d2;border-right-color:#d2d2d2;border-top-color:#d2d2d2;border-bottom-color:#d2d2d2;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="col-blue-dark-bg" |LILLIT<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#d2d2d2;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''Kubikat-LOD''' <br />
'''pilot project'''<br />
|-<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Berkeley Law Library<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Università Degli Studi di Napoli “Federico II”<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |PCC member libraries<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Cornell University<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |The British Library* - British National Bibliography<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Accademia Alfonsiana<br />
| style="border-top-color:#d2d2d2;border-top-style:init;" |La Sapienza Università di Roma<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Kunsthistorisches <br />
Institut in Florenz<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Duke University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Basilicata<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Frick Art Reference Library<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Centro Pro Unione<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Biblioteca Hertziana, <br />
Rome<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |New York University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università Degli Studi di Napoli L’Orientale<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harry Ransom Center Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà di Scienze dell'Educazione "Auxilium"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Central Institute of Art <br />
History, Munich<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Stanford University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Napoli Parthenope<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harvard University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà Teologica "Marianum"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Deutsches Forum für <br />
Kunstgeschichte, Paris<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Alberta - NEOS consortium<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università del Salento<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Medicine<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Antonianum<br />
|<br />
| style="border-bottom-width:2px;border-bottom-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:solid;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Chicago<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Salerno<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Northwestern University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università della Santa Croce<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''<span class="col-white">Share-Art pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Michigan at Ann Arbor<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi del Sannio <br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Princeton University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università di San Tommaso d'Aquino (Angelicum)<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-MIA (Manuscripts, Incunabula and Ancient books) pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Pennsylvania<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Campania “Luigi Vanvitelli”<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California Davis<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università Gregoriana<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-Music pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Toronto<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi Suor Orsola Benincasa<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California San Diego<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Lateranense<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
|Yale University<br />
|Università degli Studi di Cassino<br />
|<br />
|University of Washington<br />
|<br />
|Università Pontificia Salesiana<br />
|<br />
|<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Library of Congress*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University Colorado at Boulder<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Urbaniana<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Finland*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Minnesota<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Ateneo Sant'Anselmo<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="vertical-align:middle;text-align:left;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="" |National Library of Norway*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Biblico<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Smithsonian Institution*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Teologico "Giovanni Paolo II" per le Scienze del Matrimonio e della Famiglia<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificium Institutum Patristicum Augustinianum<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|}<br />
<br />
<nowiki>*</nowiki>National Libraries<br />
<br />
<br />
The Share Family map can also be consulted on a [http://bit.ly/Share_map_2019 dedicated web page].<br />
<br />
[[File:Share Family map 2022.png|999x999px]]<br />
<br />
__FORCETOC__<br />
__NEWSECTIONLINK__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:Main_Page&diff=2237ShareFamily:Main Page2024-03-08T12:44:43Z<p>Anna: </p>
<hr />
<div><br />
<br />
<br />
{{DISPLAYTITLE:Share Family Linked Data Ecosystem}}<br />
<br />
The '''[http://www.share-family.org/ Share Family linked data ecosystem]''' comprises several collaborative LOD - Linked Open Data environments:<br />
* '''[https://www.svde.org/ Share-VDE]''' (Virtual Discovery Environment);<br />
* '''[http://catalogo.share-cat.unina.it/sharecat/clusters Share-Catalogue]''' - the Italian network of university libraries;<br />
* [https://pcc-lod.org '''the PCC data pool'''] - the Program for Cooperative Cataloging (PCC) Catalogue in Linked Open Data;<br />
* [https://natbib-lod.org '''National Bibliographies'''] in Linked Open Data;<br />
* [https://parsifal.urbe.it/parsifal '''Parsifal'''] - the LOD portal of the URBE consortium (Roman Union of Ecclesiastical Libraries);<br />
* '''[https://www.kubikat-lod.org/ Kubikat-LOD]''' pilot project - the LOD portal for the catalogues of Kubikat Art History libraries;<br />
* three pilot projects Share-Art, Share-Music, Share-MIA (Manuscripts, Incunabula, Ancient books) respectively in the Art, Music and Ancient book domains;<br />
* [https://lillit.share-family.org/lillit/ '''LILLIT''']: portal for Italian illustrated books 1501-1800.<br />
<br />
The different characteristics of each field are a useful asset that can be used to the advantage not only of the Share Family as a whole, but for each single discipline.<br />
<br />
Being part of the Share Family linked data ecosystem means facilitating cataloguing and exposition of bibliographic records in linked data, thus supporting the transition from the traditional cataloguing environment to innovative models applying the linked data paradigm, and providing the LAM - Libraries, Archives, Museums domain and information professionals with a more comprehensive suite of tools at their disposal.<br />
<br />
The platform [https://www.svde.org/ www.svde.org] and the other dedicated environments part of the Share Family enhance the discovery potential of library resources and unveil information that would otherwise have been hidden in archives, thus enabling the access to a rich amount of data that can be exported and re-imported by the participating institutions.<br />
<br />
The output common to all the branches of the Share Family foresees:<br />
<br />
*the enrichment of original MARC data and of the records converted in linked data with identifiers from external sources (e.g. ISNI, VIAF) and original Share identifiers;<br />
*the reconciliation and clusterization of entities;<br />
*the indexing of records in the Cluster Knowledge Base, authoritative environment in linked data;<br />
*the conversion of library catalogues from MARC to linked data;<br />
*delivery of converted and enriched data to libraries for reuse in their systems;<br />
*the publication of library records in linked data on the relative Share discovery platform.<br />
<br />
Download the '''[https://bit.ly/Share-Family_brochure_2023-June Share Family brochure]''' for public distribution.<br />
<br />
==The Share Family technology: the LOD Platform==<br />
The technology underlying the systems part of the Share Family is based on the LOD Platform, that is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data, extensible as needed for specific purposes.<br />
<br />
For more details, see a summary of the [https://wiki.share-vde.org/w/images/5/54/share_components_EN.pdf '''main components of the LOD Platform'''] and an [https://wiki.share-vde.org/w/images/a/ae/LOD_Platform_2021-02_ENG.pdf '''extensive description'''] of the framework.<br />
<br />
The Share Family technology relies on a '''[https://wiki.share-vde.org/w/images/1/1d/Schema_Share_family_tenant.png tenant infrastructure]'''. In the system architecture, a tenant is a pool of institutions contributing to the same Cluster Knowledge Base. Multiple tenants form the Share Family. Family members can interoperate among respective Cluster Knowledge Bases through a centralized registry. <br />
==The Share Family branches (tenants)==<br />
<br />
The Share Family of initiatives includes different branches and sister projects, supported by the same LOD Platform technology. Each branch or project is hosted in a specific tenant of the system architecture with a corresponding specific Cluster Knowledge Base and a dedicated web entity discovery portal. For more details on how the Share Family tenant infrastructure is designed see the [https://wiki.share-vde.org/w/images/5/51/Schema_Share_Family_tenant_slides.pdf '''Summary of Share Family tenants''']. <br />
<br />
In some cases, within a single tenant a customised skin (ie. a sub-portal of the main entity discovery) can be created to address ad hoc needs of an institution, or group of institutions, willing to expose only their own data or to integrate local services in the Share Family environment. For example, the entity discovery portal at svde.org is the discovery corresponding to Share-VDE tenant, including a pool of data from several institutions, and the respective skin portals / institutional portals. <br />
<br />
While the main entity discovery portal of a tenant shows the data of all the institutions feeding the tenant's Cluster Knowledge Base, the skin portal / institutional portal gives the ability to filter only the data of the institution or group of institutions that the skin/institutional portal has been designed for.<br />
{| class="wikitable" style=""<br />
|+<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" |'''Tenant name'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Share-VDE'''<br />
! style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" class="" |'''Share-Catalogue'''<br />
! style="border-left-width:1px;" class="" |'''PCC data pool'''<br />
! style="" class="" |'''National Bibliographies'''<br />
! style="" class="" |'''Parsifal'''<br />
! style="vertical-align:middle;" |'''LILLIT'''<br />
! style="" class="" |'''Kubikat-LOD pilot project'''<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |'''Tenant web portal url'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;" class="" |https://svde.org<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |http://catalogo.share-cat.unina.it/<br />
| style="border-left-width:1px;" |https://pcc-lod.org<br />
|https://natbib-lod.org<br />
|https://parsifal.urbe.it/parsifal/<br />
|https://lillit.share-family.org/lillit/<br />
| style="background-color:#f8f9fa;" class="" |https://kubikat-lod.org<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-bottom-style:solid;" class="" |'''Data hosted in tenant CKB'''<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;" |catalogues of SVDE member libraries converted to linked data<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-left-color:#acacac;" |catalogues of Share-Catalogue member libraries converted to linked data<br />
| style="border-left-width:1px;" |records of PCC members converted to linked data<br />
|national bibliographies of institutions participating to this branch converted to linked data<br />
|catalogues of Parsifal member libraries (libraries of the URBE consortium - Roman Union of Ecclesiastical Libraries) converted to linked data<br />
|Linked Open Data descriptions and illustrations of Italian editions printed in the 16th-18th centuries<br />
|catalogues of Kubikat art history libraries converted to linked data<br />
|-<br />
| style="background-color:#eaecf0;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;border-top-style:solid;" class="" |'''Institutional portals within the same tenant'''<br />
| style="background-color:#f8f9fa;border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;border-top-color:#acacac;" class="" |one skin portal for each member institution<br />
| style="border-left-width:1px;border-right-width:1px;border-top-width:1px;border-bottom-width:1px;" |not available yet<br />
| style="border-left-width:1px;" |not foreseen<br />
|British National Bibliography (others TBD)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|-<br />
| style="background-color:#eaecf0;border-top-width:1px;" class="" |'''Institutional portal URL'''<br />
| style="background-color:#f8f9fa;border-top-width:1px;" class="" |initial version of skin portals (others in progress): <br />
<br />
<nowiki>https://duke.svde.org</nowiki><br />
<br />
<nowiki>https://loc.svde.org/</nowiki> <br />
<br />
<nowiki>https://natlibfi.svde.org/</nowiki> <br />
<br />
<nowiki>https://nln.svde.org/</nowiki> <br />
<br />
<nowiki>https://nyu.svde.org/</nowiki><br />
<br />
<nowiki>https://penn.svde.org</nowiki><br />
<br />
<nowiki>https://smithsonian.svde.org/</nowiki> <br />
<br />
<nowiki>https://stanford.svde.org</nowiki><br />
<br />
<nowiki>https://ualberta.svde.org</nowiki><br />
| style="border-top-width:1px;" |not available yet<br />
| style="width:15%;" |not foreseen<br />
|https://bl.natbib-lod.org/ (preview of a beta site)<br />
|not available yet<br />
|not foreseen<br />
|not foreseen<br />
|}<br />
==The Share Family institutions and map==<br />
The Share Family institutions can participate in one or more tenants. The following is a summary of the institutions that are part of the Share Family tenants and network. <br />
{| class="sortable mw-collapsible casablanca" style="width:100%;"<br />
|-<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-VDE tenant''' <br />
'''full members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Share-Catalogue Institutions'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''PCC tenant members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''LD4P cohort members'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''National Bibliographies''' <br />
'''tenant'''<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#d2d2d2;border-top-color:#acacac;border-bottom-color:#acacac;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''Parsifal'''<br />
! style="border-left-color:#d2d2d2;border-right-color:#d2d2d2;border-top-color:#d2d2d2;border-bottom-color:#d2d2d2;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="col-blue-dark-bg" |LILLIT<br />
! style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#d2d2d2;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''Kubikat-LOD''' <br />
'''pilot project'''<br />
|-<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Berkeley Law Library<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Università Degli Studi di Napoli “Federico II”<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |PCC member libraries<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Cornell University<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |The British Library* - British National Bibliography<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" |Accademia Alfonsiana<br />
| style="border-top-color:#d2d2d2;border-top-style:init;" |La Sapienza Università di Roma<br />
| style="border-top-width:2px;border-top-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:solid;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Kunsthistorisches <br />
Institut in Florenz<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Duke University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Basilicata<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Frick Art Reference Library<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Centro Pro Unione<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Biblioteca Hertziana, <br />
Rome<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |New York University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università Degli Studi di Napoli L’Orientale<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harry Ransom Center Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà di Scienze dell'Educazione "Auxilium"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Central Institute of Art <br />
History, Munich<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Stanford University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Napoli Parthenope<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Harvard University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Facoltà Teologica "Marianum"<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |Deutsches Forum für <br />
Kunstgeschichte, Paris<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Alberta - NEOS consortium<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università del Salento<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Medicine<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Antonianum<br />
|<br />
| style="border-bottom-width:2px;border-bottom-color:#acacac;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:solid;" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Chicago<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi di Salerno<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Northwestern University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università della Santa Croce<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" data-ve-attributes="{&quot;style&quot;:&quot;width:25%;&quot;}" |'''<span class="col-white">Share-Art pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Michigan at Ann Arbor<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi del Sannio <br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Princeton University<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università di San Tommaso d'Aquino (Angelicum)<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-MIA (Manuscripts, Incunabula and Ancient books) pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Pennsylvania<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi della Campania “Luigi Vanvitelli”<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California Davis<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-right-width:2px;border-right-color:#acacac;border-left-style:init;border-right-style:solid;border-top-style:init;border-bottom-style:init;" |Pontificia Università Gregoriana<br />
|<br />
| style="border-left-width:2px;border-right-width:2px;border-top-width:2px;border-bottom-width:2px;border-left-color:#acacac;border-right-color:#acacac;border-top-color:#acacac;border-bottom-color:#acacac;border-left-style:solid;border-right-style:solid;border-top-style:solid;border-bottom-style:solid;" class="col-blue-dark-bg" |'''<span class="col-white">Share-Music pilot project</span>'''<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Toronto<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Università degli Studi Suor Orsola Benincasa<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of California San Diego<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Lateranense<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
|Yale University<br />
|Università degli Studi di Cassino<br />
|<br />
|University of Washington<br />
|<br />
|Università Pontificia Salesiana<br />
|<br />
|<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Library of Congress*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University Colorado at Boulder<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificia Università Urbaniana<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |National Library of Finland*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Minnesota<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Ateneo Sant'Anselmo<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="vertical-align:middle;text-align:left;border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" class="" |National Library of Norway*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |University of Texas A&M<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Biblico<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Smithsonian Institution*<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificio Istituto Teologico "Giovanni Paolo II" per le Scienze del Matrimonio e della Famiglia<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |Pontificium Institutum Patristicum Augustinianum<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|-<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|<br />
| style="border-left-style:init;border-right-style:init;border-top-style:init;border-bottom-style:init;" |<br />
|}<br />
<br />
<nowiki>*</nowiki>National Libraries<br />
<br />
The Share Family map can also be consulted on a [http://bit.ly/Share_map_2019 dedicated web page].<br />
<br />
[[File:Share Family map 2022.png|999x999px]]<br />
<br />
__FORCETOC__<br />
__NEWSECTIONLINK__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2236ShareFamily:LODPlatform2024-03-08T12:33:07Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
'''THIS PAGE IS A WORK IN PROGRESS''' <br />
<br />
The LOD (Linked Open Data) Platform is a highly innovative technological framework: an integrated ecosystem for the management of bibliographic, archive and museum catalogues and for their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform requires careful analysis of the standards, formats and models used by the target institution. Its coverage, based on BIBFRAME 2.0 as the core ontology, can be enriched with a suite of additional ontologies such as Schema.org, PROV-O, MADS, RDFS, LC vocabularies and RDA vocabularies; the framework is extremely flexible and allows additional ontologies, vocabularies and modelling to be implemented according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data to RDF (Resource Description Framework), the standard model recommended by the W3C for linked open data (see the sketch after this list); <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
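<br />
As a purely illustrative example, the following minimal sketch (assuming the Python rdflib library, with invented example URIs in the style of a tenant base URI) shows how a Work, an Instance and an Agent could be expressed as BIBFRAME 2.0 entities in RDF; it is a simplified illustration, not the actual output of the LOD Platform conversion, which is performed by the RDFizer module described below. <br />
<syntaxhighlight lang="python">
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# BIBFRAME 2.0 namespace published by the Library of Congress
BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
# Hypothetical entity namespace, in the style of a Share Family tenant base URI
EX = Namespace("https://svde.org/entity/")

g = Graph()
g.bind("bf", BF)

work = EX["work/123"]
instance = EX["instance/456"]
agent = EX["agent/789"]

# A Work, its Instance and a contributing Agent, linked as in the BIBFRAME model
# (simplified: BIBFRAME normally interposes a bf:Contribution node between Work and Agent)
g.add((work, RDF.type, BF.Work))
g.add((work, RDFS.label, Literal("Example title of a work")))
g.add((work, BF.contribution, agent))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))
g.add((agent, RDF.type, BF.Agent))
g.add((agent, RDFS.label, Literal("Example, Author, 1900-1980")))

print(g.serialize(format="turtle"))
</syntaxhighlight>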
<br />
'''For more details on the specific terminology used, refer to [https://wiki.share-vde.org/wiki/ShareFamily:FAQ ShareFamily:FAQ]'''.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Entities Knowledge Base in RDF. <br />
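<br />
As a simplified illustration of this idea only (not the platform's actual clustering algorithm), the sketch below groups variant name forms into clusters using a plain string-similarity score; the names and the threshold are invented for the example. <br />
<syntaxhighlight lang="python">
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity score between two normalised name strings (0.0 - 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_variants(names, threshold=0.6):
    """Greedy clustering: each name joins the first cluster containing a similar enough member."""
    clusters = []
    for name in names:
        for cluster in clusters:
            if any(similarity(name, member) >= threshold for member in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

variant_forms = [
    "Eco, Umberto, 1932-2016",
    "Eco, Umberto",
    "Umberto Eco",
    "Calvino, Italo, 1923-1985",
    "Calvino, Italo",
]
for cluster in cluster_variants(variant_forms):
    print(cluster)
</syntaxhighlight>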
<br />
The database uses semantic web paradigms while allowing the target institution to manage its data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries; <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* creation of an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* wider and more direct interaction with, and editing of, linked data entities by librarians through the entity editor (more details in the next section);<br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Entities Knowledge Base. This entity editor has been named JCricket and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, agents, etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edits of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the entity editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the entity editor itself, allowing searches of Wikidata from within the editor and the enrichment of LOD Platform entities with information from Wikidata, and vice versa. In this way the editor will allow new identifiers to be created both in the external data sources (where possible or applicable) and in the Entities Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entities Knowledge Base.]]<br />
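<br />
As an indicative sketch only (the actual editor integration is still under analysis), the example below queries the public Wikidata API with the standard wbsearchentities action to retrieve candidate matches for an agent label; the endpoint and parameters are those of the public Wikidata API, while the search string is just an example. <br />
<syntaxhighlight lang="python">
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def search_wikidata(label: str, language: str = "en", limit: int = 5):
    """Return candidate Wikidata entities (id, label, description) for a name string."""
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "format": "json",
        "limit": limit,
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    return [
        (hit["id"], hit.get("label", ""), hit.get("description", ""))
        for hit in response.json().get("search", [])
    ]

# Example: candidate Wikidata identifiers for an agent cluster label
for qid, label, description in search_wikidata("Umberto Eco"):
    print(qid, "-", label, "-", description)
</syntaxhighlight>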
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTful module that provides search and full-text services over external datasets (downloaded, stored and indexed in the system), mainly related to authority files (VIAF, Library of Congress Name Authority File, etc.), and extensible to other types of datasets. It consists of two main parts: a Solr infrastructure for indexing the datasets and the related search services, and a logical layer that orchestrates these services to find matches within the clusters of entities. <br />
* '''Entities Knowledge Base''': a PostgreSQL database holding the result of the data processing and enrichment procedures with external data sources for each entity; typically clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies and Families) and clusters of titles (authorized access points and variant forms of the titles of Works). The Entities Knowledge Base also contains other entities produced through the identification and clustering processes (such as places, roles, languages, etc.).<br />
* '''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can therefore manage the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple store: an open-source one (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one (Amazon Neptune), more suitable for larger datasets, supporting RDF and SPARQL. The latter is a valid alternative since it is integrated with the Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; this solution can therefore provide better performance (a query sketch follows this list). <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
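<br />
A minimal sketch of how the published data could be consumed, assuming the SPARQLWrapper Python library and a hypothetical SPARQL endpoint URL (the real endpoint depends on the tenant); the query simply counts the BIBFRAME Works available in the triplestore. <br />
<syntaxhighlight lang="python">
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint: replace with the SPARQL endpoint of the relevant tenant
endpoint = SPARQLWrapper("https://example.org/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX bf: <http://id.loc.gov/ontologies/bibframe/>
    SELECT (COUNT(?work) AS ?works)
    WHERE { ?work a bf:Work . }
""")

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print("Number of bf:Work entities:", binding["works"]["value"])
</syntaxhighlight>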
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of the original records to the publication on the web portal. The workflow diagrams are for demonstration purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the target institution (libraries, archives, museums, etc.) in different formats (e.g. MARC, Dublin Core, XML, etc.). The data can be bibliographic and authority data. <br />
<br />
The data received are processed through text analysis and string-matching processes (represented in the "Similarity's score" box) to identify the entities included in the 'flat' texts (records) and to prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar text analysis and string-matching processes run against external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario, etc.) through the Authify framework. These processes enrich the data with further variant forms coming from the external sources and with the URIs through which the same entity is identified in these sources (reconciliation). The original cluster is thus enriched and, in the process of conversion to linked data, enables the interlinking function that is essential for sharing and reusing data on the web. <br />
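<br />
The following is only a schematic illustration of this kind of lookup, assuming an external authority dataset indexed locally and exposed through Solr's standard select API; the host, core name and field name are invented for the example and do not correspond to the real Authify configuration. <br />
<syntaxhighlight lang="python">
import requests

# Hypothetical Solr core holding an indexed copy of an external authority file
SOLR_SELECT = "http://localhost:8983/solr/authority_names/select"

def lookup_candidates(name: str, rows: int = 5):
    """Query the Solr index for authority records whose heading matches the given name."""
    params = {
        "q": f'heading:"{name}"',  # 'heading' is an invented field name for this sketch
        "rows": rows,
        "wt": "json",
    }
    response = requests.get(SOLR_SELECT, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["response"]["docs"]

# Each candidate document would carry the external URI used for reconciliation
for doc in lookup_candidates("Eco, Umberto, 1932-2016"):
    print(doc)
</syntaxhighlight>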
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Entities Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an endpoint for SPARQL and API queries;<br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data is ready to be indexed on the discovery portal and published on various sites, in RDF. It’s also available to be enhanced and extended through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show in more detail some of the steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|thumb|439x439px|none|Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables]]<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of the dump available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized persons to manage entity data manually, using a friendly user interface to enhance data quality. <br />
<br />
The following is an outline of the automated processes that have already been implemented and are available to any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to their institution. The system expects to receive from each library only the delta of their records, i.e. only those records that have been changed or added or deleted, compared to the previous dispatch;<br />
# a running script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats can be included in the workflow if the library manages them in its regular/daily data handling; <br />
# ingestion of library MARC records in the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded to the SFTP and downloads the MARC records into the system (a minimal polling sketch is given after this list). The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the institution that uploaded them to the corresponding sub-directory of the Share internal repository; <br />
# MARC records processing: the delta update MARC records are processed according to the LOD Platform procedures. This process includes enriching MARC records by incorporating various URIs: the Share tenant entity identifiers (e.g. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant's Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing before populating the tenant. Among the processes involved, data from library records are processed and indexed so that the autocomplete function in the search fields on the discovery portal displays the indexed data (e.g. author, title) as suggested results to the user searching for a resource; <br />
# updated data online: after the indexing phase, the information processed is ready to go live on the discovery portal. <br />
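<br />
As an indicative example of step 3, the sketch below polls an institution's SFTP sub-directory for newly uploaded .mrc and .txt files and downloads them for processing; it assumes the paramiko library, and the host, credentials and paths are placeholders rather than the real Share Family configuration. <br />
<syntaxhighlight lang="python">
import os
import paramiko

# Placeholder connection details: the real values are agreed with each library
HOST, PORT = "sftp.example.org", 22
USERNAME, PASSWORD = "library-user", "secret"
REMOTE_DIR = "/delta-updates/example-institution"
LOCAL_DIR = "/data/incoming/example-institution"

transport = paramiko.Transport((HOST, PORT))
transport.connect(username=USERNAME, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)

os.makedirs(LOCAL_DIR, exist_ok=True)
already_seen = set(os.listdir(LOCAL_DIR))

# Download, in file-name order, any new .mrc (adds/changes) or .txt (deletions) files
for filename in sorted(sftp.listdir(REMOTE_DIR)):
    if filename in already_seen:
        continue
    if filename.endswith((".mrc", ".txt")):
        sftp.get(f"{REMOTE_DIR}/{filename}", os.path.join(LOCAL_DIR, filename))
        print("downloaded", filename)

sftp.close()
transport.close()
</syntaxhighlight>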
<br />
<br />
<br />
References to MARC records here should be understood as also covering the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta updates process triggers: the update of the clustered entities on the library tenant portal; the update of the data available on the triplestore; and the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
Anna's note: report here the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/DiscoveryInterface LOD Platform discovery interface] ==<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/EntityEditor Entity editor and shared cataloguing tool] ==<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform/EntityEditor&diff=2235ShareFamily:LODPlatform/EntityEditor2024-03-08T12:31:37Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:LOD Platform Entity Editor}}<br />
<br />
Anna's note: establish what the definitive name of JCricket is and report here the content of:<br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/10zSn5iStTOmJRtiM71BIk-CAxTWHxJjiBp5D5joL6hk/edit#slide=id.g2011945aadc_0_105 <br />
<br />
- explain the concept of the prism from some slides<br />
<br />
- add a reference to Filip's JCricket UX guide and to the user manual for librarians<br />
<br />
__FORCETOC__</div>Anna
<hr />
<div>{{DISPLAYTITLE:LOD Platform Entity Editor}}<br />
<br />
Appunto di Anna: capire qual è il nome definitivo di jcricket e riportare qui contenuto di:<br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- <nowiki>https://docs.google.com/presentation/d/10zSn5iStTOmJRtiM71BIk-CAxTWHxJjiBp5D5joL6hk/edit#slide=id.g2011945aadc_0_105</nowiki><br />
<br />
- spiegare il concetto di prisma da qualche slide<br />
<br />
- mettere riferimento a jcricket ux guide di filip e a manuale d'uso per bibliotecari<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2233ShareFamily:LODPlatform2024-03-08T12:30:00Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
'''THIS PAGE IS A WORK IN PROGRESS''' <br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
'''For more details on the specific terminology used, refer to [https://wiki.share-vde.org/wiki/ShareFamily:FAQ ShareFamily:FAQ]'''.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Entities Knowledge Base in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries. <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
<br />
* standards and infrastructures for "future-proof" data, ie ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* create an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* allow librarians a wider and direct interaction with and editing of linked data entities through the entity editor (more details in the next section);<br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to edit and update entities in the Entities Knowledge Base. This entity editor has been named JCricket, and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including creation, modification, merge of clusters of works, of agents etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edit of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the entity editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the entity editor itself, allowing the search from the editor into Wikidata and the enrichment of the LOD Platform entities with information from Wikidata and vice versa. This way the editor will allow for the creation of new identifiers both in the external data sources (where possible or applicable) and in the Entities Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entities Knowledge Base.]]<br />
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTFul module that provides search and full-text services of external datasets (downloaded, stored and indexed in the system), mainly related to Authority files (VIAF, Library of Congress Name Authority files etc.) that can also be extended to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and related search services, and a logical level that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entities Knowledge Base''', on PostgreSQL database, is the result of the data processing and enrichment procedures with external data sources for each entity; typically: clusters of Agents (authorized and variant forms of the names of Persons, (Corporate Bodies, Families) and clusters of titles (authorized access points and variant forms for the titles of the Works). The Entities Knowledge Base, contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.)<br />
* '''RDFizer''': a RESTFul module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it allows, therefore, to manage the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple stores: one open source (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one, more suitable for larger datasets, such as Neptune, supporting RDF and SPARQL. The latter can be considered a valid alternative since it is integrated with Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated Neptune; therefore this solution can provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of original records to the publication on the web portal. The workflow diagrams have demonstrative purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the library/the target institution (libraries, archives, museums etc.) in different formats (eg. MARC, Dublin Core, xml etc.). The type of data can be bibliographic and authority. <br />
<br />
The data received are processed according to Text analysis and String-matching processes (represented in the "Similarity's score" box), to identify the Entities included in the 'flat' texts (records), and prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar Text analysis and String matching processes launched on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario etc.), through the Authify framework: these processes generate the enrichment of the data with other variant forms coming from external sources and with the URIs through which the same entity is identified on these sources (reconciliation): the original cluster is enriched and will allow, in the process of conversion to linked data, to activate the function of interlinking, essential for sharing and reusing data on the web. <br />
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
<br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Entities Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an endpoint for SPARQL and API queries (see the query sketch after this list);<br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
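<br />
The SPARQL endpoint mentioned above can be queried programmatically. The following is a minimal sketch assuming a standards-compliant SPARQL endpoint; the endpoint URL is a placeholder, and the exact data shape (here, bf:Work resources with rdfs:label) should be checked against the tenant documentation. <br />
<syntaxhighlight lang="python">
# Illustrative sketch: querying a SPARQL endpoint of the Entities Knowledge Base.
# The endpoint URL and the use of rdfs:label are assumptions for the example.
import requests

ENDPOINT = "https://example-tenant.org/sparql"  # hypothetical endpoint

QUERY = """
PREFIX bf: <http://id.loc.gov/ontologies/bibframe/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?work ?label WHERE {
  ?work a bf:Work ;
        rdfs:label ?label .
} LIMIT 10
"""

response = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["work"]["value"], "-", binding["label"]["value"])
</syntaxhighlight>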
<br />
<br />
<br />
At the end of these processes, the data are ready to be indexed on the discovery portal and published on various sites in RDF. They are also available for further enhancement through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show more in detail some steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|thumb|439x439px|none]]<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of the dump available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
Below is an outline of the automated processes that have already been implemented for any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to their institution. The system expects to receive from each library only the delta of their records, i.e. only those records that have been changed or added or deleted, compared to the previous dispatch;<br />
# a running script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats are included in the workflow if the library uses them in its regular/daily data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded and downloads the MARC records into the system (a sketch of this step follows the list). The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the institution that uploaded them to the corresponding sub-directory of the Share internal repository; <br />
# MARC records processing: the delta update MARC records are processed according to LOD Platform procedures. This process includes enriching MARC records with various URIs: the Share tenant entity identifiers (e.g. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF, and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant. Among the processes involved, data from library records are processed and indexed so that the autocomplete function in the search fields of the discovery portal displays the indexed data (e.g. author, title) as suggested results to the user searching for a resource; <br />
# updated data online: after the indexing phase, the information processed is ready to go live on the discovery portal. <br />
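<br />
The polling step (step 3 above) can be pictured with the following minimal sketch; the host, credentials and directory layout are placeholders, not the actual Share Family configuration. <br />
<syntaxhighlight lang="python">
# Illustrative sketch of the SFTP polling step: host, credentials and directory
# names are placeholders, not the real Share Family configuration.
import os
import paramiko

HOST = "sftp.example.org"               # hypothetical SFTP host
REMOTE_DIR = "/deliveries/my-library"   # institution sub-directory (assumed layout)
LOCAL_DIR = "./incoming"

transport = paramiko.Transport((HOST, 22))
transport.connect(username="library-user", password=os.environ["SFTP_PASSWORD"])
sftp = paramiko.SFTPClient.from_transport(transport)

os.makedirs(LOCAL_DIR, exist_ok=True)
# Process files in sequential order by file name, as described above.
for name in sorted(sftp.listdir(REMOTE_DIR)):
    if name.endswith((".mrc", ".txt")):  # .mrc = new/modified, .txt = deletions
        sftp.get(f"{REMOTE_DIR}/{name}", os.path.join(LOCAL_DIR, name))
        print(f"downloaded {name}")

sftp.close()
transport.close()
</syntaxhighlight>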
<br />
<br />
<br />
References to MARC records should be understood as also applying to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta update process triggers: the update of clustered entities on the library tenant portal; the update of the data available on the triplestore; and the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
Anna's note: include the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/DiscoveryInterface LOD Platform discovery interface] ==<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/EntityEditor Entity editor and shared cataloguing tool] ==<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform/EntityEditor&diff=2232ShareFamily:LODPlatform/EntityEditor2024-03-08T12:29:32Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:LOD Platform Entity Editor}}<br />
<br />
Anna's note: determine the definitive name of JCricket and include here the content of:<br />
<br />
- <nowiki>https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true</nowiki><br />
<br />
- <nowiki>https://docs.google.com/presentation/d/10zSn5iStTOmJRtiM71BIk-CAxTWHxJjiBp5D5joL6hk/edit#slide=id.g2011945aadc_0_105</nowiki><br />
<br />
- explain the concept of the prism from some slides<br />
<br />
- add a reference to Filip's JCricket UX guide and to the user manual for librarians<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2231ShareFamily:LODPlatform2024-03-08T12:29:14Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
The LOD (Linked Open Data) Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data to RDF (Resource Description Framework), the standard model indicated by the W3C for LOD; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
'''For more details on the specific terminology used, refer to [https://wiki.share-vde.org/wiki/ShareFamily:FAQ ShareFamily:FAQ]'''.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Entities Knowledge Base in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF and Wikidata (an illustrative MARC enrichment sketch follows this list); <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries; <br />
* progressive implementation of additional workflows such as APIs for ILS, back-conversion for local acquisition and administration systems, and reporting. <br />
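<br />
As a small illustration of URI enrichment of library records, the sketch below adds an entity URI to the 100 field of MARC records using pymarc; the file names and the URI are placeholders invented for the example, not the output of the real enrichment pipeline. <br />
<syntaxhighlight lang="python">
# Illustrative sketch of URI enrichment of MARC records with pymarc.
# The input/output file names and the entity URI are placeholders.
from pymarc import MARCReader

ENTITY_URI = "https://svde.org/agents/123456"  # placeholder cluster URI

with open("records.mrc", "rb") as fh, open("records-enriched.mrc", "wb") as out:
    for record in MARCReader(fh):
        for field in record.get_fields("100"):    # main-entry personal name
            field.add_subfield("0", ENTITY_URI)   # $0 conventionally carries an entity URI
        out.write(record.as_marc())
</syntaxhighlight>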
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* creation of an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* wider and more direct interaction with, and editing of, linked data entities by librarians through the entity editor (more details in the next section);<br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* disclosure of data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Entities Knowledge Base. This entity editor has been named JCricket and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, agents, etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and updates of the data performed by the LOD system;<br />
# manual checks and edits of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the entity editor]]<br />
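<br />
To make the Entity Registry concept more concrete, the following sketch shows the kind of information a registry entry might carry when a cluster is created, modified or merged; the field names and structure are assumptions for illustration, not the actual registry schema. <br />
<syntaxhighlight lang="python">
# Purely illustrative sketch of an entity-registry entry; the field names are
# assumptions, not the real Entity Registry data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegistryEntry:
    entity_uri: str                   # persistent URI of the cluster
    action: str                       # e.g. "created", "modified", "merged"
    performed_by: str                 # "system" for automatic checks, or a user id
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    superseded_by: str | None = None  # set when a merge redirects this URI

entry = RegistryEntry(
    entity_uri="https://svde.org/agents/123456",   # placeholder URIs
    action="merged",
    performed_by="system",
    superseded_by="https://svde.org/agents/654321",
)
print(entry)
</syntaxhighlight>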
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the entity editor itself, allowing the search from the editor into Wikidata and the enrichment of the LOD Platform entities with information from Wikidata and vice versa. This way the editor will allow for the creation of new identifiers both in the external data sources (where possible or applicable) and in the Entities Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entities Knowledge Base.]]<br />
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTful module that provides search and full-text services over external datasets (downloaded, stored and indexed in the system), mainly related to authority files (VIAF, Library of Congress Name Authority File, etc.), and that can also be extended to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and the related search services, and a logical layer that orchestrates these services to find a match within the clusters of entities. <br />
* '''Entities Knowledge Base''': stored in a PostgreSQL database, it is the result of the data processing and enrichment procedures with external data sources for each entity; typically, clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies and Families) and clusters of titles (authorized access points and variant forms of the titles of Works). The Entities Knowledge Base also contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.).<br />
* '''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can therefore manage the classes and properties not only of BIBFRAME but also of other ontologies as needed (a minimal sketch of this kind of output follows this list). <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple store: an open-source one (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one supporting RDF and SPARQL (Neptune), more suitable for larger datasets. The latter is a valid alternative since it is integrated with the Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; this solution can therefore provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
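<br />
As a minimal sketch of the kind of BIBFRAME 2.0 description the RDFizer produces, the example below builds a tiny Work/Instance graph with rdflib; the URIs and labels are invented for the example, and the real output is far richer. <br />
<syntaxhighlight lang="python">
# Illustrative sketch of a minimal BIBFRAME 2.0 Work/Instance description,
# built with rdflib; URIs and labels are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
g.bind("bf", BF)

work = URIRef("https://svde.org/works/123")          # placeholder cluster URIs
instance = URIRef("https://svde.org/instances/456")

g.add((work, RDF.type, BF.Work))
g.add((work, RDFS.label, Literal("Il nome della rosa")))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))               # bf:instanceOf links Instance to Work

print(g.serialize(format="turtle"))
</syntaxhighlight>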
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow of the data processing cycle in the LOD Platform, from the delivery of original records to publication on the web portal. The workflow diagrams are for demonstration purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the library or target institution (libraries, archives, museums, etc.) in different formats (e.g. MARC, Dublin Core, XML, etc.). The data can be bibliographic and authority records. <br />
<br />
The data received are processed through text analysis and string-matching processes (represented in the "Similarity's score" box) to identify the entities contained in the 'flat' texts (records) and to prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar text analysis and string-matching processes launched on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario, etc.) through the Authify framework. These processes enrich the data with other variant forms coming from external sources and with the URIs through which the same entity is identified on those sources (reconciliation). The original cluster is thus enriched and will allow, in the process of conversion to linked data, the activation of the interlinking function, essential for sharing and reusing data on the web. <br />
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Entities Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an endpoint for SPARQL and API queries;<br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data are ready to be indexed on the discovery portal and published on various sites in RDF. They are also available for further enhancement through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show more in detail some steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|thumb|439x439px|none]]<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of the dump available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
Below is an outline of the automated processes that have already been implemented for any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to their institution. The system expects to receive from each library only the delta of their records, i.e. only those records that have been changed or added or deleted, compared to the previous dispatch;<br />
# a running script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats are included in the workflow if the library uses them in its regular/daily data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded and downloads the MARC records into the system. The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the institution that uploaded them to the corresponding sub-directory of the Share internal repository; <br />
# MARC records processing: the delta update MARC records are processed according to LOD Platform procedures. This process includes enriching MARC records with various URIs: the Share tenant entity identifiers (e.g. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF, and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant (an illustrative sketch follows this list). Among the processes involved, data from library records are processed and indexed so that the autocomplete function in the search fields of the discovery portal displays the indexed data (e.g. author, title) as suggested results to the user searching for a resource; <br />
# updated data online: after the indexing phase, the information processed is ready to go live on the discovery portal. <br />
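<br />
The Solr indexing step above, which backs the autocomplete suggestions, can be pictured with the following sketch; the Solr URL, core name and field names are assumptions and do not reflect the actual tenant schema. <br />
<syntaxhighlight lang="python">
# Illustrative sketch: indexing processed records into Solr so that author/title
# suggestions can be served. URL, core and field names are placeholders.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/portal", always_commit=True)

solr.add([
    {"id": "work-123", "title": "Il nome della rosa", "author": "Eco, Umberto"},
    {"id": "work-456", "title": "Il pendolo di Foucault", "author": "Eco, Umberto"},
])

# A prefix query of this kind could back an autocomplete box on the portal.
for doc in solr.search("author:Eco*", rows=5):
    print(doc["title"])
</syntaxhighlight>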
<br />
<br />
<br />
References to MARC records should be understood as also applying to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta update process triggers: the update of clustered entities on the library tenant portal; the update of the data available on the triplestore; and the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
Anna's note: include the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/DiscoveryInterface LOD Platform discovery interface] ==<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/EntityEditor Entity editor and shared cataloguing tool] ==<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform/EntityEditor&diff=2227ShareFamily:LODPlatform/EntityEditor2024-02-29T15:53:25Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:LOD Platform Entity Editor}}<br />
<br />
Anna's note: determine the definitive name of JCricket and include here the content of:<br />
<br />
- <nowiki>https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true</nowiki><br />
<br />
- <nowiki>https://docs.google.com/presentation/d/10zSn5iStTOmJRtiM71BIk-CAxTWHxJjiBp5D5joL6hk/edit#slide=id.g2011945aadc_0_105</nowiki><br />
<br />
- explain the concept of the prism from some slides<br />
<br />
- add a reference to Filip's JCricket UX guide and to the manual for librarians<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareDoc:PublicDocumentation&diff=2226ShareDoc:PublicDocumentation2024-02-29T15:30:34Z<p>Anna: Protected "ShareDoc:PublicDocumentation": null ([Edit=Allow only editors and administrators] (expires 15:30, 1 March 2025 (UTC)) [Move=Allow only editors and administrators] (expires 15:30, 1 March 2025 (UTC)))</p>
<hr />
<div>{{DISPLAYTITLE:Public Documentation}}<br />
<br />
== [[ShareDoc:PublicDocumentation/Main technological components|Main technological components]] ==<br />
This section serves as a gateway to a comprehensive breakdown of the key technological components that form the backbone of the LOD Platform, the technology that powers the Share Family system, and provides an overview of crucial components and concepts integral to understanding its inner workings.<br />
<br />
== [[ShareDoc:PublicDocumentation/User guides|User guides]] ==<br />
Here you'll find user guides describing the LOD Platform and JCricket entity editor.<br />
<br />
== [[ShareDoc:PublicDocumentation/Release notes|Release notes]] ==<br />
This page gathers the release notes of the technical implementations.<br />
<br />
== [[ShareDoc:PublicDocumentation/APIs|Share Family API: Technical documentation]] ==<br />
This is a collection of Share Family APIs.<br />
<br />
This technical documentation is valid for all tenants of the Share Family. However, in the documentation pages you will find examples showing only the namespace "svde.org". If your institution belongs to a tenant that is not svde.org, you have to run queries using the specific namespace of your tenant (e.g. natbib-lod.org, or pcc-lod.org, or kubikat-lod.org). Please remember to replace the namespace "svde.org" that appears in the examples with the namespace of your tenant (both for entity URIs and for attributes/properties URIs).</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2225ShareFamily:LODPlatform2024-02-29T15:30:18Z<p>Anna: Protected "ShareFamily:LODPlatform": [object Object] ([Edit=Allow only editors and administrators] (expires 15:30, 1 March 2025 (UTC)) [Move=Allow only editors and administrators] (expires 15:30, 1 March 2025 (UTC)))</p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
'''For more details on the specific terminology used, refer to [https://wiki.share-vde.org/wiki/ShareFamily:FAQ ShareFamily:FAQ]'''.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Entities Knowledge Base in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries. <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
<br />
* standards and infrastructures for "future-proof" data, ie ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* create an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* allow librarians a wider and direct interaction with and editing of linked data entities through the entity editor (more details in the next section);<br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Entities Knowledge Base. This entity editor has been named JCricket and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, agents, etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edit of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the entity editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the entity editor itself, allowing the search from the editor into Wikidata and the enrichment of the LOD Platform entities with information from Wikidata and vice versa. This way the editor will allow for the creation of new identifiers both in the external data sources (where possible or applicable) and in the Entities Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entities Knowledge Base.]]<br />
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTFul module that provides search and full-text services of external datasets (downloaded, stored and indexed in the system), mainly related to Authority files (VIAF, Library of Congress Name Authority files etc.) that can also be extended to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and related search services, and a logical level that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entities Knowledge Base''': stored in a PostgreSQL database, it is the result of the data processing and enrichment procedures with external data sources for each entity; typically, clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies and Families) and clusters of titles (authorized access points and variant forms of the titles of Works). The Entities Knowledge Base also contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.).<br />
* '''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can therefore manage the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple store: an open-source one (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one supporting RDF and SPARQL (Neptune), more suitable for larger datasets. The latter is a valid alternative since it is integrated with the Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; this solution can therefore provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of original records to the publication on the web portal. The workflow diagrams have demonstrative purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the library/the target institution (libraries, archives, museums etc.) in different formats (eg. MARC, Dublin Core, xml etc.). The type of data can be bibliographic and authority. <br />
<br />
The data received are processed according to Text analysis and String-matching processes (represented in the "Similarity's score" box), to identify the Entities included in the 'flat' texts (records), and prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar Text analysis and String matching processes launched on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario etc.), through the Authify framework: these processes generate the enrichment of the data with other variant forms coming from external sources and with the URIs through which the same entity is identified on these sources (reconciliation): the original cluster is enriched and will allow, in the process of conversion to linked data, to activate the function of interlinking, essential for sharing and reusing data on the web. <br />
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes (a simplified sketch follows this list). <br />
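<br />
A very simplified sketch of cluster/entity creation through reconciliation: variant forms that reconcile to the same external identifier are grouped under one cluster. The sample data and identifiers are invented, and real clustering combines many more signals than a single identifier. <br />
<syntaxhighlight lang="python">
# Very simplified sketch of cluster creation via reconciliation; the forms and
# identifiers are invented placeholders.
from collections import defaultdict

# (variant form, external identifier found during reconciliation)
reconciled = [
    ("Eco, Umberto, 1932-2016", "viaf:0000001"),
    ("Eco, Umberto",            "viaf:0000001"),
    ("Calvino, Italo",          "viaf:0000002"),
]

clusters: dict[str, list[str]] = defaultdict(list)
for form, identifier in reconciled:
    clusters[identifier].append(form)

for n, (identifier, forms) in enumerate(clusters.items(), start=1):
    print(f"cluster https://svde.org/agents/{n}: {identifier} -> {forms}")
</syntaxhighlight>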
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Entities Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an end point for SPARQL and API queries;<br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data is ready to be indexed on the discovery portal and published on various sites, in RDF. It’s also available to be enhanced/increased through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show more in detail some steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|thumb|439x439px|none]]<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of the dump available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
Here follows an outline of the automated processes that have already been implemented for any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to their institution. The system expects to receive from each library only the delta of their records, i.e. only those records that have been changed or added or deleted, compared to the previous dispatch;<br />
# a running script processes the records in sequential order, by file name, and accepts in input .mrc (for new and modified records) and .txt files (for deleted records). Additional input file formats are included in the workflow in case the library manages them in the regular/daily data handling; <br />
# ingestion of library MARC records in the system: after original records are uploaded from the library to the SFTP server, a script running regularly connects the Share internal system to the individual SFTP folders of the library, checks if a new file has been uploaded to the SFTP and downloads the MARC records in the system. Therefore, the files submitted by the library are automatically transferred from the SFTP sub-directory of the institution that has uploaded the files to the corresponding sub-directory of the Share internal repository; <br />
# MARC records processing: the delta updates MARC records are processed according to LOD Platform procedures. This process includes enriching MARC records by incorporating various URIs: the Share tenant entity identifier (eg. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF, and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant Postgres database; <br />
# upload to Solr: the records processed are uploaded to Solr platform for indexing, before populating the tenant. Among the processes involved, data from library records are processed and indexed so that the autocomplete function in the search fields on the discovery portal displays the indexed data (e.g. author, title) as suggested results to the user performing a search for a resource; <br />
# updated data online: after the indexing phase, the information processed is ready to go live on the discovery portal. <br />
<br />
<br />
<br />
References to MARC records should be understood as also applying to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta updates process triggers: the update of clustered entities on the library tenant portal; the update of the data available on the triplestore, the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
Anna's note: include the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/DiscoveryInterface LOD Platform discovery interface] ==<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/EntityEditor Entity editor and shared cataloguing tool] ==<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2224ShareFamily:LODPlatform2024-02-29T15:03:34Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
'''For more details on the specific terminology used, refer to [https://wiki.share-vde.org/wiki/ShareFamily:FAQ ShareFamily:FAQ]'''.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Entities Knowledge Base in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries. <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* wider and more direct interaction with, and editing of, linked data entities by librarians through the entity editor (more details in the next section);<br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* the revelation of data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Entities Knowledge Base. This entity editor has been named JCricket and is conceived as a collaborative environment with different levels of access to and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, agents, etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and updates of the data, performed by the LOD system;<br />
# manual checks and edits of the data, performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are recorded in the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when a change has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the entity editor]]<br />
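<br />
The internal structure of the Entity Registry is not detailed here; purely as an illustration of the kind of change record it could hold, the sketch below models a hypothetical registry entry for a cluster merge. The field names and URIs are assumptions for the example, not the actual Registry schema.<br />
<syntaxhighlight lang="python">
# Hypothetical sketch of an Entity Registry change record.
# Field names and URIs are illustrative assumptions, not the real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class RegistryEntry:
    entity_uri: str                    # persistent URI of the cluster/entity
    action: str                        # e.g. "create", "modify", "merge"
    source: str                        # "automatic" or "manual"
    merged_into: Optional[str] = None  # set when a merge redirects this URI
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A manual merge of two agent clusters could then be tracked as:
entry = RegistryEntry(
    entity_uri="https://example.org/agents/123",
    action="merge",
    source="manual",
    merged_into="https://example.org/agents/456",
)
print(entry)
</syntaxhighlight>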
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the entity editor itself, allowing searches of Wikidata from within the editor and the enrichment of LOD Platform entities with information from Wikidata, and vice versa. In this way the editor will allow the creation of new identifiers both in the external data sources (where possible or applicable) and in the Entities Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entities Knowledge Base.]]<br />
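<br />
As a minimal sketch of the kind of lookup shown in the screenshot above, the code below searches Wikidata with the public wbsearchentities API action. How the editor will actually integrate this lookup is still under analysis; the function name and the search term are illustrative.<br />
<syntaxhighlight lang="python">
# Minimal sketch: searching Wikidata entities by label, as the editor
# could do when proposing enrichment candidates. Uses the public
# wbsearchentities action of the Wikidata API.
import requests

def search_wikidata(term: str, language: str = "en", limit: int = 5):
    """Return Wikidata entities whose label matches the given term."""
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": term,
            "language": language,
            "format": "json",
            "limit": limit,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("search", [])

for hit in search_wikidata("Umberto Eco"):
    print(hit["id"], "|", hit.get("label"), "|", hit.get("description", ""))
</syntaxhighlight>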
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces that improve the user experience and provide broader search results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTful module that provides search and full-text services over external datasets (downloaded, stored and indexed in the system), mainly related to authority files (VIAF, Library of Congress Name Authority File, etc.) and extensible to other types of datasets. It consists of two main parts: a Solr infrastructure for indexing the datasets and the related search services, and a logical layer that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entities Knowledge Base''': a PostgreSQL database holding the result of the data processing and enrichment procedures with external data sources for each entity; typically clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies and Families) and clusters of titles (authorized access points and variant forms for the titles of Works). The Entities Knowledge Base also contains other entities produced through the identification and clustering processes (such as places, roles, languages, etc.).<br />
* '''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can therefore manage the classes and properties not only of BIBFRAME but also of other ontologies as needed (a minimal output sketch follows this list). <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple store: an open-source one (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one (Amazon Neptune), more suitable for larger datasets and supporting RDF and SPARQL. The latter is a valid alternative since it is integrated with the Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; this solution can therefore provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
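<br />
As a minimal sketch of the kind of BIBFRAME 2.0 statements produced by the conversion step, the code below builds a Work/Instance pair with rdflib and serializes it as Turtle. The URIs are placeholders, not identifiers minted by the platform, and the real RDFizer output is considerably richer.<br />
<syntaxhighlight lang="python">
# Minimal sketch of BIBFRAME 2.0 output for a Work/Instance pair,
# built with rdflib. URIs are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
g.bind("bf", BF)

work = URIRef("https://example.org/works/1")
instance = URIRef("https://example.org/instances/1")

g.add((work, RDF.type, BF.Work))
g.add((work, RDFS.label, Literal("Il nome della rosa")))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))

print(g.serialize(format="turtle"))
</syntaxhighlight>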
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of the original records to the publication on the web portal. The workflow diagrams are for demonstration purposes, but they convey the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the target institution (libraries, archives, museums, etc.) in different formats (e.g. MARC, Dublin Core, XML). The data can be bibliographic and authority records. <br />
<br />
The data received are processed through text analysis and string-matching (represented in the "Similarity's score" box) to identify the entities contained in the 'flat' texts (records) and to prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar text analysis and string-matching processes run on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario, etc.) through the Authify framework. These processes enrich the data with further variant forms coming from the external sources and with the URIs under which the same entity is identified in those sources (reconciliation). The enriched cluster will make it possible, during the conversion to linked data, to activate the interlinking function, which is essential for sharing and reusing data on the web. <br />
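<br />
The following deliberately simplified sketch illustrates the underlying idea of the similarity score: variant access points whose normalized similarity exceeds a threshold are grouped into the same candidate cluster. The threshold, the sample names and the use of a plain string ratio are assumptions for the example; the production text-analysis processes are far more sophisticated.<br />
<syntaxhighlight lang="python">
# Deliberately simplified sketch of similarity-based grouping of
# variant name forms into candidate clusters. Threshold and sample
# data are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized similarity between two access points (0.0 - 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

names = [
    "Eco, Umberto, 1932-2016",
    "Eco, Umberto",
    "Eco, Umberto, 1932-",
    "Calvino, Italo",
]

THRESHOLD = 0.6
clusters: list[list[str]] = []

for name in names:
    for cluster in clusters:
        if similarity(name, cluster[0]) >= THRESHOLD:
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # the three "Eco" forms group together, "Calvino" stays apart
</syntaxhighlight>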
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality checks (in the event that the library requests a specific service from external agencies - such as Casalini libri - or manages the enriched data received internally); <br />
* extraction of “hidden” relations to generate and feed a database of relations (which can be reused in subsequent steps to enrich the data and, in the publication stages, to extend the links between data); <br />
* creation of the Entities Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an endpoint for SPARQL and API queries;<br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data are ready to be indexed on the discovery portal and published on various sites in RDF. They are also available to be enhanced and extended through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show some steps of the overall workflow of graph 1 in more detail. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|thumb|439x439px|none|Graph 3 – The Cluster Knowledge Base, RDF conversion and different deliverables]]<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodic update of the dump available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
The following is an outline of the automated processes that have already been implemented for Share Family projects.<br />
<br />
==== Delta updates ====<br />
A “delta” update consists of the changes to the library records that are periodically pushed to the LOD Platform for publication on the discovery portal. Automating the ingestion of updated library records into the LOD Platform keeps the data available through the discovery interface and the other endpoints of the workflow regularly up to date. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, at the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to its institution. The system expects to receive from each library only the delta of its records, i.e. only those records that have been changed, added or deleted compared to the previous dispatch;<br />
# a scheduled script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats can be included in the workflow if the library uses them in its regular data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the individual SFTP folders of the library, checks whether a new file has been uploaded and downloads the MARC records into the system (see the sketch after these steps). The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the institution that uploaded them to the corresponding sub-directory of the Share internal repository; <br />
# MARC records processing: the delta update MARC records are processed according to LOD Platform procedures. This includes enriching the MARC records with various URIs: the Share tenant entity identifiers (e.g. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF and LC NAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant. Among the processes involved, data from library records are processed and indexed so that the autocomplete function in the search fields on the discovery portal displays the indexed data (e.g. author, title) as suggested results to the user searching for a resource; <br />
# updated data online: after the indexing phase, the information processed is ready to go live on the discovery portal. <br />
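<br />
A minimal sketch of the polling step in point 3 is shown below, assuming hypothetical host, credential and path values: it connects to a library's SFTP sub-directory, picks up new .mrc and .txt files and downloads them in file-name order. It is an illustration of the mechanism, not the actual ingestion script.<br />
<syntaxhighlight lang="python">
# Minimal sketch of the SFTP polling/download step. Host, credentials and
# directory names are assumptions for the example.
import os
import paramiko

HOST = "sftp.example.org"          # assumed SFTP host
USERNAME = "library-tenant"        # assumed per-institution account
PASSWORD = "***"                   # assumed credential (key auth is also possible)
REMOTE_DIR = "/deltas/library-x"   # assumed institution sub-directory

os.makedirs("incoming", exist_ok=True)

transport = paramiko.Transport((HOST, 22))
transport.connect(username=USERNAME, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)

try:
    # Only .mrc (new/modified records) and .txt (deleted records) files are
    # expected; they are processed in sequential order by file name.
    for name in sorted(sftp.listdir(REMOTE_DIR)):
        if name.endswith((".mrc", ".txt")):
            sftp.get(f"{REMOTE_DIR}/{name}", os.path.join("incoming", name))
            print("downloaded", name)
finally:
    sftp.close()
    transport.close()
</syntaxhighlight>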
<br />
<br />
<br />
References to MARC records above also apply to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta update process triggers: the update of the clustered entities on the library tenant portal; the update of the data available in the triplestore; and the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
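<br />
As a minimal sketch of what the enriched MARC records delivered back to libraries carry, the code below reads a MARC file with pymarc and lists the URIs found in $0/$1 of the main access-point fields. The file name and the choice of fields are assumptions for the example, not the exact enrichment profile of the platform.<br />
<syntaxhighlight lang="python">
# Minimal sketch: listing the URIs carried by enriched MARC records.
# File name and field selection are illustrative assumptions.
from pymarc import MARCReader

with open("enriched-delta.mrc", "rb") as fh:  # assumed file name
    for record in MARCReader(fh):
        if record is None:  # skip records that failed to parse
            continue
        ids = record.get_fields("001")
        label = ids[0].value() if ids else "(no control number)"
        for access_point in record.get_fields("100", "110", "700", "710"):
            uris = access_point.get_subfields("0", "1")
            if uris:
                print(label, "|", access_point.format_field(), "|", ", ".join(uris))
</syntaxhighlight>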
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
Note (Anna): to be populated with the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/DiscoveryInterface LOD Platform discovery interface] ==<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/EntityEditor Entity editor and shared cataloguing tool] ==<br />
__FORCETOC__</div>Anna
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Cluster Knowledge Base (CKB) in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries; <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed with functionality as its guiding principle, provides various environments and interfaces for the creation and enrichment of data, and offers workflows capable of responding to the different needs of librarians, archivists, museum operators, professionals, scholars, researchers and students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, ie ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* the creation of an environment that is useful to both end users and professionals (librarians, archivists, museum operators); <br />
* wider and more direct interaction with, and editing of, linked data entities by librarians through the Cluster Knowledge Base Editor (more details in the next section); <br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* the exposure of data that would otherwise remain hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Cluster Knowledge Base (CKB). This Cluster Knowledge Base editor has been named JCricket and is conceived as a collaborative environment with different levels of access to and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, agents, etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and updates of the data, performed by the LOD system;<br />
# manual checks and edits of the data, performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are recorded in the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when they have an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the Cluster Knowledge Base editor]]<br />
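<br />
Purely as an illustration, an Entity Registry record for a change to a cluster might carry information along the following lines; the field names below are hypothetical and do not reflect the actual registry schema.<br />
<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class RegistryEntry:
    """Hypothetical shape of an Entity Registry record (illustrative only)."""
    entity_uri: str                        # persistent URI of the cluster/entity
    change_type: str                       # e.g. "created", "modified", "merged"
    source: str                            # "automatic" (LOD system) or "manual" (editor user)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    superseded_by: Optional[str] = None    # set when a merge redirects the persistent URI

# Example: a manual merge that redirects one agent URI to another (URIs are invented).
entry = RegistryEntry(
    entity_uri="https://svde.org/agent/789",
    change_type="merged",
    source="manual",
    superseded_by="https://svde.org/agent/123",
)
print(entry)
</syntaxhighlight>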
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the CKB editor itself, allowing Wikidata to be searched from the editor and the LOD Platform entities to be enriched with information from Wikidata, and vice versa. In this way the editor will allow new identifiers to be created both in the external data sources (where possible or applicable) and in the Cluster Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entity Knowledge Base.]]<br />
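<br />
As a rough sketch of the kind of lookup described above (not the actual JCricket implementation), a search against the public Wikidata API from Python could look like this; the search term is just an example.<br />
<syntaxhighlight lang="python">
import requests

def search_wikidata(term: str, limit: int = 5) -> list[dict]:
    """Search Wikidata entities by label/alias via the public wbsearchentities API."""
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": term,
            "language": "en",
            "format": "json",
            "limit": limit,
        },
        timeout=30,
    )
    response.raise_for_status()
    # Each hit carries the Wikidata identifier (QID), label and description,
    # which could be used to enrich a cluster with a link to the external source.
    return [
        {"qid": hit["id"], "label": hit.get("label"), "description": hit.get("description")}
        for hit in response.json().get("search", [])
    ]

print(search_wikidata("Umberto Eco"))
</syntaxhighlight>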
<br />
== How the LOD Platform works ==<br />
The components and tools developed aim to create a useful environment for knowledge management, with advanced search interfaces that improve the user experience and provide broader results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTful module that provides search and full-text services over external datasets (downloaded, stored and indexed in the system), mainly authority files (VIAF, Library of Congress Name Authority File, etc.), and that can also be extended to other types of datasets. It consists of two main parts: a Solr infrastructure for indexing the datasets, with the related search services, and a logical layer that orchestrates these services to find matches within the clusters of entities. <br />
* '''Entity Knowledge Base''': a PostgreSQL database holding the result of the data processing and enrichment procedures with external data sources for each entity; typically clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies and Families) and clusters of titles (authorized access points and variant forms for the titles of Works). The Cluster Knowledge Base, also called the Entity Knowledge Base, contains other entities produced through the identification and clustering processes (such as places, roles, languages, etc.). <br />
* '''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can therefore handle the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple store: an open-source one (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one (Amazon Neptune), more suitable for larger datasets, supporting RDF and SPARQL. The latter is a valid alternative since it is integrated with the Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; this solution can therefore provide better performance (a sketch of a SPARQL query against such an endpoint follows this list). <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
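<br />
Since the Cluster Knowledge Base is exposed through a SPARQL endpoint, a client could, for example, list some works and their labels with a query along the following lines. This is only a sketch: the endpoint URL is a placeholder and the real address depends on the tenant.<br />
<syntaxhighlight lang="python">
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint URL: replace with the tenant's actual SPARQL endpoint.
sparql = SPARQLWrapper("https://example.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX bf: <http://id.loc.gov/ontologies/bibframe/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?work ?label WHERE {
        ?work a bf:Work ;
              rdfs:label ?label .
    }
    LIMIT 10
""")

for row in sparql.queryAndConvert()["results"]["bindings"]:
    print(row["work"]["value"], row["label"]["value"])
</syntaxhighlight>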
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next section illustrate the high-level workflow of the data processing cycle in the LOD Platform, from the delivery of original records to publication on the web portal. The workflow diagrams are for demonstration purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the target institution (library, archive, museum, etc.) in different formats (e.g. MARC, Dublin Core, XML, etc.). The data can be both bibliographic and authority records. <br />
<br />
The data received are processed through text analysis and string-matching processes (represented in the "Similarity's score" box) to identify the entities included in the 'flat' texts (records) and to prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar text analysis and string-matching processes run on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario, etc.) through the Authify framework. These processes enrich the data with further variant forms coming from the external sources and with the URIs by which the same entity is identified in those sources (reconciliation). The original cluster is thus enriched and, during the conversion to linked data, enables interlinking, which is essential for sharing and reusing data on the web. <br />
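<br />
The following much-simplified sketch illustrates the string-matching idea behind the "Similarity's score" step; it is not the actual LOD Platform algorithm, and real reconciliation also relies on dates, roles and external-source URIs rather than string similarity alone. Variant name forms whose normalized similarity exceeds a threshold are grouped into the same candidate cluster.<br />
<syntaxhighlight lang="python">
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crude normalization: lowercase, keep only letters, digits and spaces."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def cluster_names(names: list[str], threshold: float = 0.8) -> list[list[str]]:
    """Greedy single-pass grouping of variant forms (illustrative only)."""
    clusters: list[list[str]] = []
    for name in names:
        for cluster in clusters:
            if similarity(name, cluster[0]) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

variants = ["Eco, Umberto", "Eco, Umberto, 1932-", "ECO, UMBERTO.", "Calvino, Italo"]
print(cluster_names(variants))
# [['Eco, Umberto', 'Eco, Umberto, 1932-', 'ECO, UMBERTO.'], ['Calvino, Italo']]
</syntaxhighlight>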
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Cluster Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an endpoint for SPARQL and API queries; <br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data are ready to be indexed on the discovery portal and published on various sites in RDF. They are also available to be enhanced and extended through the JCricket entity editor.<br />
<br />
Graphs 2 and 3 show in more detail some steps of the overall workflow presented in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|left|thumb|439x439px|Graph 3 – The Cluster Knowledge Base, RDF conversion and different deliverables]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of the dump made available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
The following is an outline of the automated processes that have already been implemented for Share Family projects.<br />
<br />
==== Delta updates ====<br />
A “delta” update is the set of changes to the library records that are periodically pushed to the LOD Platform for publication on the discovery portal. Automating the ingestion of updated library records into the LOD Platform serves to keep the data regularly up to date on the discovery interface and on the other endpoints of the workflow where the data are exposed. This means updating the data of the clustered entities and of the related resources searchable on the discovery interface and in the triplestore, at the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to their institution. The system expects to receive from each library only the delta of their records, i.e. only those records that have been changed or added or deleted, compared to the previous dispatch;<br />
# a script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats can be included in the workflow if the library manages them in its regular/daily data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded and downloads the MARC records into the system. In this way the files submitted by the library are automatically transferred from the SFTP sub-directory of the uploading institution to the corresponding sub-directory of the Share internal repository; <br />
# MARC records processing: the delta update MARC records are processed according to the LOD Platform procedures. This process includes enriching the MARC records by incorporating various URIs: the Share tenant entity identifiers (e.g. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF, and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant. Among the processes involved, data from the library records are processed and indexed so that the autocomplete function in the search fields of the discovery portal displays the indexed data (e.g. author, title) as suggested results to the user searching for a resource; <br />
# updated data online: after the indexing phase, the information processed is ready to go live on the discovery portal. <br />
<br />
<br />
<br />
References to MARC records here should be understood as also covering the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta updates process triggers: the update of the clustered entities on the library tenant portal; the update of the data available in the triplestore; and the delivery of enriched MARC records to the libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
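<br />
To make the steps above more concrete, the following sketch shows how a delta batch could be read and separated into added/modified records and deleted record identifiers. It uses the Python pymarc library; the directory layout, file naming and the one-identifier-per-line format of the deletion files are assumptions made for the example, not the actual Share scripts.<br />
<syntaxhighlight lang="python">
from pathlib import Path
from pymarc import MARCReader

# Assumed local staging directory mirroring an institution's SFTP sub-directory.
DELTA_DIR = Path("/data/delta/example-institution")

def read_delta(delta_dir: Path) -> tuple[list, list[str]]:
    """Return (added/modified MARC records, deleted record identifiers) for a delta batch."""
    upserts, deletions = [], []
    for path in sorted(delta_dir.iterdir()):        # processed in file-name order
        if path.suffix == ".mrc":                   # new and modified records
            with path.open("rb") as handle:
                upserts.extend(record for record in MARCReader(handle) if record)
        elif path.suffix == ".txt":                 # deleted records, one identifier per line (assumed)
            deletions.extend(line.strip() for line in path.read_text().splitlines() if line.strip())
    return upserts, deletions

upserts, deletions = read_delta(DELTA_DIR)
print(f"{len(upserts)} records to enrich and index, {len(deletions)} records to remove")
</syntaxhighlight>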
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
Anna's note: copy here the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/DiscoveryInterface LOD Platform discovery interface] ==<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/EntityEditor Entity editor and shared cataloguing tool] ==<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform/DiscoveryInterface&diff=2215ShareFamily:LODPlatform/DiscoveryInterface2024-02-22T15:54:33Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:LOD Platform Discovery Interface}}<br />
<br />
Anna's note: decide whether this should be included and, if so, copy here the content of:<br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- check the nomenclature (portal / interface / skin or institutional portal, etc.)<br />
<br />
- link the portal guide <br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform/EntityEditor&diff=2212ShareFamily:LODPlatform/EntityEditor2024-02-22T15:52:20Z<p>Anna: Created page with "{{DISPLAYTITLE:LOD Platform Entity Editor}} Anna's note: determine the final name of JCricket and copy here the content of: - <nowiki>https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true</nowiki>..."</p>
<hr />
<div>{{DISPLAYTITLE:LOD Platform Entity Editor}}<br />
<br />
Anna's note: determine the final name of JCricket and copy here the content of:<br />
<br />
- <nowiki>https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true</nowiki><br />
<br />
- <nowiki>https://docs.google.com/presentation/d/10zSn5iStTOmJRtiM71BIk-CAxTWHxJjiBp5D5joL6hk/edit#slide=id.g2011945aadc_0_105</nowiki><br />
<br />
- add a reference to Filip's JCricket UX guide and to the manual for librarians<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2211ShareFamily:LODPlatform2024-02-22T15:50:59Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Cluster Knowledge Base (CKB) in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries. <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
<br />
* standards and infrastructures for "future-proof" data, ie ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* create an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* allow librarians a wider and direct interaction with and editing of linked data entities through the Cluster Knowledge Base Editor (more details in the next section); <br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to edit and update entities in the Cluster Knowledge Base (CKB). This Cluster Knowledge Base editor has been named JCricket, and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including creation, modification, merge of clusters of works, of agents etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edit of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the Cluster Knowledge Base editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the CKB editor itself, allowing the search from the editor into Wikidata and the enrichment of the LOD Platform entities with information from Wikidata and vice versa. This way the editor will allow for the creation of new identifiers both in the external data sources (where possible or applicable) and in the Cluster Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entity Knowledge Base.]]<br />
<br />
== How theLOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTFul module that provides search and full-text services of external datasets (downloaded, stored and indexed in the system), mainly related to Authority files (VIAF, Library of Congress Name Authority files etc.) that can also be extended to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and related search services, and a logical level that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entity Knowledge Base''', on PostgreSQL database, is the result of the data processing and enrichment procedures with external data sources for each entity; typically: clusters of Agents (authorized and variant forms of the names of Persons, (Corporate Bodies, Families) and clusters of titles (authorized access points and variant forms for the titles of the Works). The Cluster Knowledge Base, also called Entities Knowledge Base, contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.) <br />
* '''RDFizer''': a RESTFul module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it allows, therefore, to manage the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple stores: one open source (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one, more suitable for larger datasets, such as Neptune, supporting RDF and SPARQL. The latter can be considered a valid alternative since it is integrated with Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated Neptune; therefore this solution can provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of original records to the publication on the web portal. The workflow diagrams have demonstrative purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the library/the target institution (libraries, archives, museums etc.) in different formats (eg. MARC, Dublin Core, xml etc.). The type of data can be bibliographic and authority. <br />
<br />
The data received are processed according to Text analysis and String-matching processes (represented in the "Similarity's score" box), to identify the Entities included in the 'flat' texts (records), and prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar Text analysis and String matching processes launched on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario etc.), through the Authify framework: these processes generate the enrichment of the data with other variant forms coming from external sources and with the URIs through which the same entity is identified on these sources (reconciliation): the original cluster is enriched and will allow, in the process of conversion to linked data, to activate the function of interlinking, essential for sharing and reusing data on the web. <br />
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
<br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Cluster Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an end point for SPARQL and API queries; <br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data is ready to be indexed on the discovery portal and published on various sites, in RDF. It’s also available to be enhanced/increased through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show more in detail some steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|left|thumb|439x439px]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
=== Data updates ===<br />
The LOD Platform is able to manage entities created with data from internal and external sources, using different approaches that depend on the data source (update/change management processes via SFTP file exchange, availability of OAI-PMH or other protocols, periodically update of the dump available for the web community etc.). The choice of the approach depend on the use case. In addition to this, JCricket editor will allow to manage entities data manually, by authorized persons, using a friendly user interface to enhance the data quality. <br />
<br />
Here follows an outline of the automated processes that have already been implemented for any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to its institution. The system expects to receive from each library only the delta of its records, i.e. only those records that have been added, changed or deleted since the previous dispatch;<br />
# a scheduled script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats can be included in the workflow if the library uses them in its regular/daily data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded and downloads the MARC records into the system (a minimal sketch of such a polling step follows this list). The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the uploading institution to the corresponding sub-directory of the Share internal repository; <br />
# MARC record processing: the delta update MARC records are processed according to LOD Platform procedures. This includes enriching the MARC records with various URIs: the Share tenant entity identifiers (eg. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant's Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant. Among other things, data from library records are processed and indexed so that the autocomplete function in the search fields of the discovery portal can display the indexed data (e.g. author, title) as suggestions to the user searching for a resource; <br />
# updated data online: after the indexing phase, the processed information is ready to go live on the discovery portal. <br />
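<br />
A minimal sketch of the SFTP polling step (step 3 above); the host, credentials, directory layout and the idea of tracking already-processed file names in a set are illustrative assumptions, not the actual Share ingestion script.<br />
<syntaxhighlight lang="python">
import paramiko
from pathlib import Path

# Hypothetical connection details and directory layout.
HOST, PORT = "sftp.example.org", 22
USER, PASSWORD = "library_user", "secret"
REMOTE_DIR = "/upload/library-x"                 # sub-directory dedicated to the institution
LOCAL_DIR = Path("/data/share/incoming/library-x")


def fetch_new_files(seen: set[str]) -> list[Path]:
    """Download files not seen in previous runs and return their local paths."""
    transport = paramiko.Transport((HOST, PORT))
    transport.connect(username=USER, password=PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    downloaded = []
    try:
        # Process files in sequential order, by file name (cf. step 2).
        for name in sorted(sftp.listdir(REMOTE_DIR)):
            if name in seen or not name.endswith((".mrc", ".txt")):
                continue
            target = LOCAL_DIR / name
            sftp.get(f"{REMOTE_DIR}/{name}", str(target))
            seen.add(name)
            downloaded.append(target)
    finally:
        sftp.close()
        transport.close()
    return downloaded
</syntaxhighlight>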
<br />
<br />
<br />
References to MARC records should be understood as applying equally to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta update process triggers: the update of the clustered entities on the library tenant portal; the update of the data available in the triplestore; and the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
== [https://wiki.share-vde.org/wiki/ShareFamily:LODPlatform/Workflow LOD Platform workflow components] ==<br />
<br />
== LOD Platform technological stack ==<br />
To do: add the content of <br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wtSKHu15jzrKSzadkcc9PxcTwui0npCIioltRt4D11k/edit<br />
<br />
== LOD Platform discovery interface ==<br />
To do: decide whether this section should be included and, if so, create a subpage with the content of:<br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
== Entity editor and shared cataloguing tool ==<br />
To do: confirm the definitive name of JCricket and create a subpage with the content of:<br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/10zSn5iStTOmJRtiM71BIk-CAxTWHxJjiBp5D5joL6hk/edit#slide=id.g2011945aadc_0_105<br />
<br />
- add a reference to Filip's JCricket UX guide and to the manual for librarians<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform/Workflow&diff=2210ShareFamily:LODPlatform/Workflow2024-02-22T15:41:28Z<p>Anna: Created page with "{{DISPLAYTITLE:LOD Platform workflow}} Appunto di Anna: chiedere a Tiziana se descrivere in dettaglio il flusso a partire da e/o combinando le seguenti fonti: - https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=t..."</p>
<hr />
<div>{{DISPLAYTITLE:LOD Platform workflow}}<br />
<br />
Note from Anna: ask Tiziana whether to describe the flow in detail starting from and/or combining the following sources:<br />
<br />
- https://casalinigroup.sharepoint.com/:w:/r/sites/CasaliniLAB/_layouts/15/Doc.aspx?sourcedoc=%7BCBE626A0-14A1-4FB1-BBF3-9D62DE43FDF3%7D&file=Copia%20di%20lavoro_Technical%20proposal%20-%20Identities%20And%20Vocabularies%20Software%20Solution%20for%20Qatar%20National%20Library.docx&action=default&mobileredirect=true<br />
<br />
- https://docs.google.com/presentation/d/1wepkkKKOvKrvUCHZoN4EdIlZ628s5IDQcqhefZok9mA/edit<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2209ShareFamily:LODPlatform2024-02-22T15:39:28Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
The LOD (Linked Open Data) Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues and for their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Cluster Knowledge Base (CKB) in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata (an illustrative record-enrichment snippet follows this list); <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries; <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
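<br />
As an illustration of the URI enrichment mentioned above, the following sketch adds a reconciled URI to the $0 subfield of a MARC 100 field using pymarc; the heading-to-URI mapping, the placeholder URI and the file paths are assumptions for the example, and the production enrichment logic is more elaborate than this.<br />
<syntaxhighlight lang="python">
from pymarc import MARCReader

# Hypothetical mapping from an established heading to a reconciled URI (placeholder).
URI_BY_HEADING = {
    "Calvino, Italo,": "http://viaf.org/viaf/0000000",
}


def enrich(path_in: str, path_out: str) -> None:
    """Copy a MARC file, adding a $0 URI to 100 fields whose $a has been reconciled."""
    with open(path_in, "rb") as fh, open(path_out, "wb") as out:
        for record in MARCReader(fh):
            for field in record.get_fields("100"):
                heading = field.get_subfields("a")
                if heading and heading[0] in URI_BY_HEADING:
                    field.add_subfield("0", URI_BY_HEADING[heading[0]])
            out.write(record.as_marc())


# Usage with placeholder paths.
enrich("incoming.mrc", "enriched.mrc")
</syntaxhighlight>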
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* create an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* allow librarians a wider and direct interaction with and editing of linked data entities through the Cluster Knowledge Base Editor (more details in the next section); <br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Cluster Knowledge Base (CKB). This Cluster Knowledge Base editor has been named JCricket and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, of agents etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edit of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the Cluster Knowledge Base editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the CKB editor itself, allowing searches from the editor into Wikidata and the enrichment of LOD Platform entities with information from Wikidata and vice versa. In this way the editor will allow the creation of new identifiers both in the external data sources (where possible or applicable) and in the Cluster Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entity Knowledge Base.]]<br />
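<br />
Since the Wikidata integration is still under analysis, the following is only a sketch of the public Wikidata search API call that such an integration might rely on; the helper name and the example search term are illustrative.<br />
<syntaxhighlight lang="python">
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"


def search_wikidata(term: str, limit: int = 5) -> list[dict]:
    """Search Wikidata entities by label, as the editor might do behind the scenes."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": "en",
        "format": "json",
        "limit": limit,
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=30)
    return response.json().get("search", [])


# Print identifier, label and description for a few candidate matches.
for hit in search_wikidata("Italo Calvino"):
    print(hit["id"], "-", hit.get("label"), "-", hit.get("description", ""))
</syntaxhighlight>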
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTful module that provides search and full-text services over external datasets (downloaded, stored and indexed in the system), mainly related to authority files (VIAF, Library of Congress Name Authority File etc.) and extensible to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and the related search services, and a logical layer that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entity Knowledge Base''': a PostgreSQL database holding the result of the data processing and enrichment procedures with external data sources for each entity; typically clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies and Families) and clusters of titles (authorized access points and variant forms of the titles of Works). The Cluster Knowledge Base, also called Entity Knowledge Base, contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.). <br />
* '''RDFizer''': a RESTful module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it can handle not only BIBFRAME classes and properties but also those of other ontologies, as needed (a minimal conversion sketch follows this list). <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple store: an open-source one (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one supporting RDF and SPARQL (Neptune), more suitable for larger datasets. The latter is a valid alternative since it is integrated with the Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; this solution can therefore provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
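<br />
As a rough illustration of the kind of output RDFizer produces, the following sketch builds a couple of BIBFRAME triples with rdflib; the entity URIs minted in the svde.org namespace are invented for the example, and the real conversion covers far more classes and properties.<br />
<syntaxhighlight lang="python">
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
g.bind("bf", BF)

# Hypothetical entity URIs in the tenant namespace.
work = URIRef("https://svde.org/work/example-work")
instance = URIRef("https://svde.org/instance/example-instance")

# A Work with a label, and an Instance linked to it via bf:instanceOf.
g.add((work, RDF.type, BF.Work))
g.add((work, RDFS.label, Literal("Example title")))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))

print(g.serialize(format="turtle"))
</syntaxhighlight>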
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of original records to the publication on the web portal. The workflow diagrams are for demonstration purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the library/the target institution (libraries, archives, museums etc.) in different formats (eg. MARC, Dublin Core, xml etc.). The type of data can be bibliographic and authority. <br />
<br />
The data received are processed according to Text analysis and String-matching processes (represented in the "Similarity's score" box), to identify the Entities included in the 'flat' texts (records), and prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended by running similar text analysis and string-matching processes against external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario etc.) through the Authify framework. These processes enrich the data with further variant forms coming from the external sources and with the URIs under which the same entity is identified in those sources (reconciliation). The enriched cluster then makes it possible, during the conversion to linked data, to activate interlinking, which is essential for sharing and reusing data on the web. <br />
<br />
The result of these processes is threefold:<br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
The data thus obtained are ready to be processed again, through different channels:<br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Cluster Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an end point for SPARQL and API queries; <br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
<br />
At the end of these processes, the data is ready to be indexed on the discovery portal and published on various sites, in RDF. It’s also available to be enhanced/increased through the entity editor JCricket.<br />
<br />
Graphs 2 and 3 show more in detail some steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|left|thumb|439x439px|Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
=== Data updates ===<br />
The LOD Platform can manage entities created from data in internal and external sources, using different approaches that depend on the data source (update/change management via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of a dump made available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
Here follows an outline of the automated processes that have already been implemented for any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to its institution. The system expects to receive from each library only the delta of its records, i.e. only those records that have been added, changed or deleted since the previous dispatch;<br />
# a scheduled script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats can be included in the workflow if the library uses them in its regular/daily data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded and downloads the MARC records into the system. The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the uploading institution to the corresponding sub-directory of the Share internal repository; <br />
# MARC record processing: the delta update MARC records are processed according to LOD Platform procedures. This includes enriching the MARC records with various URIs: the Share tenant entity identifiers (eg. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant's Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant. Among other things, data from library records are processed and indexed so that the autocomplete function in the search fields of the discovery portal can display the indexed data (e.g. author, title) as suggestions to the user searching for a resource; <br />
# updated data online: after the indexing phase, the processed information is ready to go live on the discovery portal. <br />
<br />
<br />
<br />
References to MARC records should be understood as applying equally to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.).<br />
<br />
The delta updates process triggers: the update of clustered entities on the library tenant portal; the update of the data available on the triplestore, the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
== LOD Platform workflow components ==<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2208ShareFamily:LODPlatform2024-02-22T15:36:23Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Cluster Knowledge Base (CKB) in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries. <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* create an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* allow librarians a wider and direct interaction with and editing of linked data entities through the Cluster Knowledge Base Editor (more details in the next section); <br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to edit and update entities in the Cluster Knowledge Base (CKB). This Cluster Knowledge Base editor has been named JCricket, and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including creation, modification, merge of clusters of works, of agents etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edit of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the Cluster Knowledge Base editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the CKB editor itself, allowing the search from the editor into Wikidata and the enrichment of the LOD Platform entities with information from Wikidata and vice versa. This way the editor will allow for the creation of new identifiers both in the external data sources (where possible or applicable) and in the Cluster Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entity Knowledge Base.]]<br />
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTFul module that provides search and full-text services of external datasets (downloaded, stored and indexed in the system), mainly related to Authority files (VIAF, Library of Congress Name Authority files etc.) that can also be extended to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and related search services, and a logical level that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entity Knowledge Base''', on PostgreSQL database, is the result of the data processing and enrichment procedures with external data sources for each entity; typically: clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies, Families) and clusters of titles (authorized access points and variant forms for the titles of the Works). The Cluster Knowledge Base, also called Entities Knowledge Base, contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.) <br />
* '''RDFizer''': a RESTFul module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it allows, therefore, to manage the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple stores: one open source (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one, more suitable for larger datasets, such as Neptune, supporting RDF and SPARQL. The latter can be considered a valid alternative since it is integrated with Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; therefore this solution can provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
<br />
== The data processing pipeline in a system using the LOD Platform ==<br />
The diagrams in the next paragraph illustrate the high-level workflow for the data processing cycle in the LOD Platform, from the delivery of original records to the publication on the web portal. The workflow diagrams have demonstrative purposes, but they express the overall steps of the process. <br />
<br />
Starting from the left of '''graph 1''', the data are imported from the library/the target institution (libraries, archives, museums etc.) in different formats (eg. MARC, Dublin Core, xml etc.). The type of data can be bibliographic and authority. <br />
<br />
The data received are processed according to Text analysis and String-matching processes (represented in the "Similarity's score" box), to identify the Entities included in the 'flat' texts (records), and prepare the creation of clusters of entities. <br />
<br />
This entity identification process is enhanced and extended through similar Text analysis and String matching processes launched on external sources (VIAF, ISNI, LC-NAF, GND, LCSH, Nuovo soggettario etc.), through the Authify framework: these processes generate the enrichment of the data with other variant forms coming from external sources and with the URIs through which the same entity is identified on these sources (reconciliation): the original cluster is enriched and will allow, in the process of conversion to linked data, to activate the function of interlinking, essential for sharing and reusing data on the web. <br />
<br />
<br />
The result of these processes is threefold: <br />
<br />
* Identification of entities; <br />
* Data enrichment; <br />
* Cluster/entity creation through reconciliation processes. <br />
<br />
<br />
The data thus obtained are ready to be processed again, through different channels: <br />
<br />
* manual enrichment and quality check (in the event that the library requests a specific service from external agencies - such as Casalini libri - or internally manages the enriched data received); <br />
* extraction of “hidden” relations for the generation and feeding of a database of relations (which will be reused in possible subsequent steps to enrich the data and in the publication stages, to extend the links between data); <br />
* creation of the Cluster Knowledge Base, available in RDF (therefore as Linked Open Data) and accessible via an end point for SPARQL and API queries; <br />
* processing and conversion to RDF, following the BIBFRAME model with extensions provided by the Share community (SVDE ontology, see below) and/or other ontologies and schemas proposed in the specific project. <br />
<br />
<br />
At the end of these processes, the data is ready to be indexed on the discovery portal and published on various sites, in RDF. It’s also available to be enhanced/increased through the entity editor JCricket. <br />
<br />
Graphs 2 and 3 show more in detail some steps of the overall workflow shown in graph 1. <br />
<br />
=== The LOD Platform workflow ===<br />
[[File:LODPlatform Graph1 Overall workflow.png|left|thumb|438x438px|Graph 1 - Overall workflow]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 2 – From records to entities through entity identification processes.png|left|thumb|439x439px|Graph 2 – From records to entities through entity identification processes]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png|left|thumb|439x439px]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
=== Data updates ===<br />
The LOD Platform can manage entities created from data in internal and external sources, using different approaches that depend on the data source (update/change management via SFTP file exchange, availability of OAI-PMH or other protocols, periodic updates of a dump made available to the web community, etc.). The choice of approach depends on the use case. In addition, the JCricket editor will allow authorized users to manage entity data manually, through a user-friendly interface, to enhance data quality. <br />
<br />
Here follows an outline of the automated processes that have already been implemented for any Share Family project.<br />
<br />
==== Delta updates ====<br />
By “delta” update we mean the changes that occur to the library records that are periodically pushed to the LOD Platform, to be published on the discovery portal. The automation of the ingestion in the LOD Platform of updated library records has the purpose of regularly updating the data available through the discovery interface and the other endpoints of the workflow where the data are available. This means updating the data of the clustered entities and the related resources searchable on the discovery interface and in the triplestore, according to the frequency requested by the library.<br />
<br />
Steps of the process for update/change management via SFTP file exchange: <br />
<br />
# the library delivers bibliographic and authority records to the agreed SFTP directory, in the sub-directory dedicated to its institution. The system expects to receive from each library only the delta of its records, i.e. only those records that have been added, changed or deleted since the previous dispatch;<br />
# a scheduled script processes the records in sequential order, by file name, and accepts as input .mrc files (for new and modified records) and .txt files (for deleted records). Additional input file formats can be included in the workflow if the library uses them in its regular/daily data handling; <br />
# ingestion of library MARC records into the system: after the original records are uploaded by the library to the SFTP server, a regularly running script connects the Share internal system to the library's individual SFTP folders, checks whether a new file has been uploaded and downloads the MARC records into the system. The files submitted by the library are thus automatically transferred from the SFTP sub-directory of the uploading institution to the corresponding sub-directory of the Share internal repository; <br />
# MARC record processing: the delta update MARC records are processed according to LOD Platform procedures. This includes enriching the MARC records with various URIs: the Share tenant entity identifiers (eg. URIs from the <nowiki>https://svde.org</nowiki> namespace) and URIs from external authoritative sources such as ISNI, VIAF and LCNAF. Upon request, MARC records can also be enriched with URIs from other tenants of the Share Family. The data are saved in the library tenant's Postgres database; <br />
# upload to Solr: the processed records are uploaded to the Solr platform for indexing, before populating the tenant. Among other things, data from library records are processed and indexed so that the autocomplete function in the search fields of the discovery portal can display the indexed data (e.g. author, title) as suggestions to the user searching for a resource; <br />
# updated data online: after the indexing phase, the processed information is ready to go live on the discovery portal. <br />
<br />
<br />
References to MARC records should be understood as applying equally to the other input formats included in the clustering and indexing processes (MODS, METS, Dublin Core, etc.). <br />
<br />
The delta updates process triggers: the update of clustered entities on the library tenant portal; the update of the data available on the triplestore, the delivery of enriched MARC records to libraries. <br />
<br />
[[File:LODPlatform Data flow for the elaboration of the delta updates.png|none|thumb|429x429px|Data flow for the elaboration of the delta update records in the LOD Platform system (this diagram specifically refers to the Share-VDE tenant flow).]]<br />
<br />
<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=File:LODPlatform_Data_flow_for_the_elaboration_of_the_delta_updates.png&diff=2207File:LODPlatform Data flow for the elaboration of the delta updates.png2024-02-22T15:34:55Z<p>Anna: </p>
<hr />
<div></div>Annahttps://wiki.share-vde.org/w/index.php?title=File:LODPlatform_Graph_3_%E2%80%93_The_Cluster_Knowledge_Base_RDF_conversion_and_different_deliverables.png&diff=2206File:LODPlatform Graph 3 – The Cluster Knowledge Base RDF conversion and different deliverables.png2024-02-22T15:26:22Z<p>Anna: </p>
<hr />
<div></div>Annahttps://wiki.share-vde.org/w/index.php?title=File:LODPlatform_Graph_2_%E2%80%93_From_records_to_entities_through_entity_identification_processes.png&diff=2205File:LODPlatform Graph 2 – From records to entities through entity identification processes.png2024-02-22T15:25:06Z<p>Anna: </p>
<hr />
<div></div>Annahttps://wiki.share-vde.org/w/index.php?title=File:LODPlatform_Graph1_Overall_workflow.png&diff=2204File:LODPlatform Graph1 Overall workflow.png2024-02-22T15:23:20Z<p>Anna: </p>
<hr />
<div></div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2203ShareFamily:LODPlatform2024-02-22T15:20:23Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, with variant forms reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources, to create a network of information and resources. The result is an open relationship database and Cluster Knowledge Base (CKB) in RDF. <br />
<br />
The database uses the semantic web paradigms but allows the target institution to manage their data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata; <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries. <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections.<br />
<br />
== Benefits ==<br />
The LOD Platform, developed according to the principle of functionality, provides various environments and interfaces for the creation and enrichment of data and offers workflows capable of responding to the different needs of librarians / archivists / museum operators, professionals, scholars, researchers and participating students. <br />
<br />
There are several advantages:<br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* create an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* allow librarians a wider and direct interaction with and editing of linked data entities through the Cluster Knowledge Base Editor (more details in the next section); <br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* reveal data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge.<br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to edit and update entities in the Cluster Knowledge Base (CKB). This Cluster Knowledge Base editor has been named JCricket, and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including creation, modification, merge of clusters of works, of agents etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and update of the data performed by the LOD system;<br />
# manual checks and edit of the data performed by the user through a web interface. <br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.<br />
[[File:high-level-CKB-editor-flow.png|none|thumb|432x432px|High-level workflow of the Cluster Knowledge Base editor]]<br />
<br />
A further added value is the ability of the LOD Platform to interact directly with external data sources such as ISNI and Wikidata. The interaction with Wikidata is currently under analysis and will be triggered from the CKB editor itself, allowing the search from the editor into Wikidata and the enrichment of the LOD Platform entities with information from Wikidata and vice versa. This way the editor will allow for the creation of new identifiers both in the external data sources (where possible or applicable) and in the Cluster Knowledge Base. <br />
[[File:Wikidata-query-jcricket.png|none|thumb|469x469px|Results from a query on Wikidata displayed on the editor interface: the editor is ready to enrich the entity with Wikidata information that will be saved in the Entity Knowledge Base.]]<br />
<br />
== How the LOD Platform works ==<br />
The developed components and tools aim to create a useful environment for knowledge management, with advanced search interfaces to improve the user experience and provide wider results to libraries, archives, museums and their users: <br />
<br />
* '''Authify''': a RESTFul module that provides search and full-text services of external datasets (downloaded, stored and indexed in the system), mainly related to Authority files (VIAF, Library of Congress Name Authority files etc.) that can also be extended to other types of datasets. It consists of two main parts: a SOLR infrastructure for indexing the datasets and related search services, and a logical level that orchestrates these services to find a match within the clusters of the entities. <br />
* '''Entity Knowledge Base''', on PostgreSQL database, is the result of the data processing and enrichment procedures with external data sources for each entity; typically: clusters of Agents (authorized and variant forms of the names of Persons, Corporate Bodies, Families) and clusters of titles (authorized access points and variant forms for the titles of the Works). The Cluster Knowledge Base, also called Entities Knowledge Base, contains other entities produced through identification and clustering processes (such as places, roles, languages, etc.) <br />
* '''RDFizer''': a RESTFul module that automates the entire process of converting and publishing data in RDF according to the BIBFRAME 2.0 ontology in a linear and scalable way. It is flexible and can be adapted to multiple situations: it allows, therefore, to manage the classes and properties not only of BIBFRAME but also of other ontologies as needed. <br />
* '''Triple store''': the LOD Platform can currently be integrated with two different types of triple stores: one open source (Blazegraph), more suitable for small or medium-sized projects (up to about 2,000,000 bibliographic records), and a commercial one, more suitable for larger datasets, such as Neptune, supporting RDF and SPARQL. The latter can be considered a valid alternative since it is integrated with Amazon Web Services infrastructure already in use for the whole system, and the whole LOD Platform has already migrated to Neptune; therefore this solution can provide better performance. <br />
* '''Discovery portal''': data presentation portal, for retrieving and browsing data in a user-friendly discovery interface.<br />
<br />
__FORCETOC__</div>Annahttps://wiki.share-vde.org/w/index.php?title=File:Wikidata-query-jcricket.png&diff=2202File:Wikidata-query-jcricket.png2024-02-22T15:16:18Z<p>Anna: </p>
<hr />
<div></div>Annahttps://wiki.share-vde.org/w/index.php?title=File:high-level-CKB-editor-flow.png&diff=2201File:high-level-CKB-editor-flow.png2024-02-22T15:13:39Z<p>Anna: </p>
<hr />
<div></div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2200ShareFamily:LODPlatform2024-02-22T15:12:04Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
The LOD (Linked Open Data) Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform requires careful analysis of the standards, formats and models used by the target institution; however, its coverage, based on BIBFRAME 2.0 as the core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, PROV-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows:<br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data to RDF (Resource Description Framework), the standard model indicated by the W3C for linked open data; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.<br />
<br />
== High level steps ==<br />
In the implementation of a system that uses the LOD Platform, data from libraries, archives and museums are transformed into linked data through entity identification, reconciliation and enrichment processes. <br />
<br />
Attributes are used to uniquely identify a person, work or other entity, and variant forms are reconciled to form a cluster of data referring to the same entity. The data are subsequently reconciled and enriched with further external sources to create a network of information and resources. The result is an open database of relationships and a Cluster Knowledge Base (CKB) in RDF. <br />
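As a highly simplified illustration of the reconciliation idea (and not of the platform's actual matching algorithm), the sketch below groups variant name forms under a normalized key and assigns each resulting cluster a hypothetical identifier:<br />
<br />
<syntaxhighlight lang="python">
import unicodedata
from collections import defaultdict
from itertools import count

def normalize(name: str) -> str:
    """Crude normalization: strip accents, punctuation and case differences."""
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    cleaned = "".join(c for c in stripped.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def cluster_names(names: list[str]) -> dict[str, list[str]]:
    """Group variant forms sharing the same normalized key into one cluster."""
    ids = count(1)
    cluster_of_key: dict[str, str] = {}
    clusters: dict[str, list[str]] = defaultdict(list)
    for name in names:
        key = normalize(name)
        if key not in cluster_of_key:
            cluster_of_key[key] = f"cluster-{next(ids)}"  # hypothetical identifier
        clusters[cluster_of_key[key]].append(name)
    return dict(clusters)

variants = ["García Márquez, Gabriel", "Garcia Marquez, Gabriel", "Calvino, Italo", "Calvino, Italo."]
print(cluster_names(variants))
</syntaxhighlight>
<br />
Real reconciliation in the LOD Platform relies on richer attributes (dates, identifiers, relationships) rather than on names alone.<br />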
<br />
The database uses semantic web paradigms but allows the target institution to manage its data independently, and is able to provide: <br />
<br />
* enrichment of data with URIs, both for the original library records and for the output linked data entities; examples of sources for URI enrichment are ISNI, VIAF, FAST, GeoNames, LC Classification, LCSH, LC NAF, Wikidata (see the sketch after this list); <br />
* conversion of data to RDF using the BIBFRAME vocabulary and other ontologies; <br />
* creation of a virtual discovery platform with web user interface; <br />
* creation of a database of relationships and clusters accessible in RDF through a triplestore; <br />
* implementation of tools for direct interaction with the data, permitting the validation, update, long-term control and maintenance of the clusters and of the URIs identifying the entities (see below); <br />
* batch/automated data updating procedures; <br />
* batch/automated data dissemination to libraries; <br />
* progressive implementation of additional workflows such as API for ILS, back-conversion for local acquisition and administration systems, reporting. <br />
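As an example of the URI enrichment step (see the first bullet in the list above), the sketch below queries VIAF's public AutoSuggest service for candidate identifiers of a name; the exact endpoint behaviour and response fields are assumptions here and should be checked against the current VIAF documentation:<br />
<br />
<syntaxhighlight lang="python">
import json
import urllib.parse
import urllib.request

# Assumption: VIAF exposes a public AutoSuggest endpoint that returns JSON with a
# "result" list whose entries carry a "viafid"; field names may differ in practice.
VIAF_SUGGEST = "https://viaf.org/viaf/AutoSuggest"

def suggest_viaf_uris(name: str, limit: int = 5) -> list[str]:
    """Return candidate VIAF URIs for a personal or corporate name."""
    url = f"{VIAF_SUGGEST}?{urllib.parse.urlencode({'query': name})}"
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    hits = data.get("result") or []
    return [f"https://viaf.org/viaf/{hit['viafid']}" for hit in hits[:limit] if "viafid" in hit]

if __name__ == "__main__":
    for uri in suggest_viaf_uris("Calvino, Italo"):
        print(uri)
</syntaxhighlight>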
<br />
<br />
The goal is to ensure that a large amount of data, which often remains hidden or unexpressed in closed silos (“containers”), finally reveals its richness within existing collections. <br />
<br />
== Benefits ==<br />
The LOD Platform, developed with a focus on functionality, provides various environments and interfaces for the creation and enrichment of data, and offers workflows capable of responding to the different needs of librarians, archivists and museum operators, professionals, scholars, researchers and students of participating institutions. <br />
<br />
<br />
There are several advantages: <br />
<br />
* integration of the processes of a collaborative environment with local systems and tools; <br />
* integration into the semantic web while maintaining ownership and control of the data, benefiting from the simplified administration of the environment and a large pool of data; <br />
* integration of library/archive/museum data into the collaborative environment and pool of data; <br />
<br />
* standards and infrastructures for "future-proof" data, i.e. ensuring that they are compatible with the structure of linked data and the semantic web; <br />
* enrichment of data with further information and relationships not previously expressed in the established metadata formats in use (e.g. MARC), increasing the possibilities of discovery for all types of resources; <br />
* an environment that is useful for both end users and professionals (librarians, archivists, museum operators); <br />
* wider and more direct interaction with, and editing of, linked data entities by librarians through the Cluster Knowledge Base Editor (more details in the next section); <br />
* advanced search interfaces to improve the user experience and provide broader search results to users; <br />
* disclosure of data that would otherwise have remained hidden in silos, allowing end users to access a large amount of information that can be both imported and exported by the library. <br />
<br />
<br />
This approach fully harnesses the potential of linked data, connecting library information to the advantage of scholars, patrons and all library users in a dynamic research environment that unlocks new ways of accessing knowledge. <br />
<br />
== Added values ==<br />
It’s particularly relevant to highlight that the LOD Platform is currently being enhanced with a module dedicated to editing and updating entities in the Cluster Knowledge Base (CKB). This Cluster Knowledge Base editor has been named JCricket and is conceived as a collaborative environment with different levels of access and interaction with the data, enabling several manual and automatic actions on the clusters of entities saved in the database, including the creation, modification and merging of clusters of works, agents, etc. <br />
<br />
JCricket consists of two main layers: <br />
<br />
# automatic checks and updates of the data, performed by the LOD system;<br />
# manual checks and edits of the data, performed by the user through a web interface. <br />
<br />
<br />
All changes to entities, both automatic and manual, are reported on the Entity Registry, a source (also available in RDF) that tracks the updates of each entity, especially when this has an impact on the persistent entity URI.</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:LODPlatform&diff=2199ShareFamily:LODPlatform2024-02-22T15:09:03Z<p>Anna: Created page with "{{DISPLAYTITLE:The LOD Platform Technology}} LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. The core of the LOD Platform was designed in the EU-funded project ALI..."</p>
<hr />
<div>{{DISPLAYTITLE:The LOD Platform Technology}}<br />
<br />
LOD - Linked Open Data Platform is a highly innovative technological framework, an integrated ecosystem for the management of bibliographic, archive and museum catalogues, and their conversion to linked data according to the BIBFRAME ontology version 2.0 (https://www.loc.gov/bibframe/docs/bibframe2-model.html), extensible as needed for specific purposes. <br />
<br />
The core of the LOD Platform was designed in the EU-funded project ALIADA, with the idea of creating a scalable and configurable framework able to adapt to ontologies from different domains, capable of automating the entire process of creating and publishing linked open data, regardless of the data source format. <br />
<br />
The aim of this framework is to open the possibilities offered by linked data to libraries, archives and museums by providing greater interoperability, visibility and availability for all types of resources. <br />
<br />
The application of the LOD Platform obviously requires the careful analysis of the standards, formats and models used in the institution addressed; however, its coverage, based on BIBFRAME 2.0 as core ontology, can be enriched with a suite of additional ontologies, such as Schema.org, Prov-O, MADS, RDFS, LC vocabularies, RDA vocabularies and so on; it’s extremely flexible and allows for the implementation of additional ontologies, vocabularies and modelling according to specific needs. <br />
<br />
<br />
By incorporating standards, models and technologies recognized as key elements for the creation of new processes of management and use of knowledge, the LOD Platform allows: <br />
<br />
* the creation of a data structure based on Agent, Work, Instance, Item, Place entities, as defined by BIBFRAME, and extensible to reconcile other entities; <br />
* data enrichment through the connection with external data sources; <br />
* reconciliation and clusterization of entities created from the original data; <br />
* the conversion of data according to the standard model indicated by the W3C for the LOD, RDF - Resource Description Framework; <br />
* delivery of converted and enriched data to the target institution for reuse in their systems; <br />
* the publication of the dataset in linked data on RDF storage (triplestore); <br />
* the creation of a discovery portal with a web user interface based on BIBFRAME or other ontologies defined in specific projects.</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareFamily:FAQ&diff=2198ShareFamily:FAQ2024-02-22T15:04:21Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:FAQ}}<br />
<br />
You can't find what you are looking for? Read this page, and if you still haven't found it, e-mail us at [mailto:helpdesk@svde.org helpdesk@svde.org].<br />
<br />
__FORCETOC__<br />
<br />
==Frequently Asked Questions==<br />
<br />
Coming soon<br />
<br />
==Glossary==<br />
<br />
Coming soon</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareDoc:PublicDocumentation/APIs&diff=2197ShareDoc:PublicDocumentation/APIs2024-02-22T13:38:35Z<p>Anna: </p>
<hr />
<div>This section provides a comprehensive collection of documents detailing the various APIs integral to the Share Family ecosystem. These APIs serve as the backbone of our platform, facilitating seamless integration, data retrieval, and interaction with Share Family services and resources. <br />
<br />
Please contact [mailto:info@svde.org info@svde.org] for any additional information.<br />
<br />
== General ==<br />
<br />
* [[ShareDoc:Domain Model|The Share-VDE Domain Model]]<br />
* [[ShareDoc:The PostMan Collection|How to import the Share-VDE API Collection]]<br />
* [[ShareDoc:API documentation|Share-VDE API: cross-cutting concepts]]<br />
* [[ShareDoc:Auth|Share-VDE API: Authentication]]<br />
* [[ShareDoc:Query Languages|Share-VDE API: Query Languages]]<br />
* [[ShareDoc:Simple Search|Share-VDE API: Simple Search]]<br />
<br />
== REST ==<br />
<br />
* [[ShareDoc:RESTFul API|Share-VDE REST API]]<br />
* [[ShareDoc:ShareVDEAndTheSemanticWeb|Share-VDE and the Semantic Web]]<br />
* [[ShareDoc:Content Negotiation|Share-VDE REST API: Content Negotiation]]<br />
<br />
== GraphQL ==<br />
*[[ShareDoc:GraphQL API|Share-VDE GraphQL API]]<br />
*[[ShareDoc:Subject API|Share-VDE GraphQL API: Subjects]]<br />
<br />
== SPARQL ==<br />
Share Family data are also exposed via a SPARQL endpoint that is available for queries. The current set-up includes two different methods for searching and viewing the data, i.e.:<br />
<br />
* the SPARQL UI Console at https://data-staging.svde.org: a graphical end-user interface where the dataset loaded into the triple store can be selected; selecting "Query" in the user menu opens the query interface;<br />
* direct access to the SPARQL endpoint at https://data-staging.svde.org/sparql: the HTTP endpoint for running queries directly on the dataset (see the sketch below).<br />
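As an illustration, the sketch below sends a simple triple-count query to the endpoint over HTTP; it assumes the endpoint accepts standard SPARQL 1.1 Protocol POST requests and returns JSON results, and that access has been granted as described below:<br />
<br />
<syntaxhighlight lang="python">
import json
import urllib.request

# Assumption: the endpoint accepts a standard SPARQL Protocol POST
# (Content-Type: application/sparql-query) and can return JSON results.
ENDPOINT = "https://data-staging.svde.org/sparql"

QUERY = """
SELECT (COUNT(*) AS ?triples)
WHERE { ?s ?p ?o }
"""

def run_query(endpoint: str, query: str) -> dict:
    """POST a SPARQL query and return the parsed JSON results document."""
    request = urllib.request.Request(
        endpoint,
        data=query.encode("utf-8"),
        headers={
            "Content-Type": "application/sparql-query",
            "Accept": "application/sparql-results+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    results = run_query(ENDPOINT, QUERY)
    for binding in results["results"]["bindings"]:
        print(binding["triples"]["value"])
</syntaxhighlight>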
<br />
To access the triple store and consult the data contact [mailto:info@svde.org info@svde.org].</div>Annahttps://wiki.share-vde.org/w/index.php?title=ShareDoc:PublicDocumentation&diff=2196ShareDoc:PublicDocumentation2024-02-22T13:37:19Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:Public Documentation}}<br />
<br />
== [[ShareDoc:PublicDocumentation/Main technological components|Main technological components]] ==<br />
This section serves as a gateway to a comprehensive breakdown of the key technological components that form the backbone of the LOD Platform, the technology that powers the Share Family system. It provides an overview of crucial components and concepts integral to understanding the inner workings of our platform.<br />
<br />
== [[ShareDoc:PublicDocumentation/User guides|User guides]] ==<br />
Here you'll find user guides describing the LOD Platform and the JCricket entity editor.<br />
<br />
== [[ShareDoc:PublicDocumentation/Release notes|Release notes]] ==<br />
This page gathers the release notes of the technical implementations.<br />
<br />
== [[ShareDoc:PublicDocumentation/APIs|Share Family API: Technical documentation]] ==<br />
This is a collection of Share Family APIs.<br />
<br />
This technical documentation is valid for all tenants of the Share Family. However, in the documentation pages you will find examples showing only the namespace "svde.org". If your institution belongs to a tenant that is not svde.org, you have to run queries using the specific namespace of your tenant (e.g. natbib-lod.org, or pcc-lod.org, or kubikat-lod.org). Please remember to replace the namespace "svde.org" that appears in the examples with the namespace of your tenant (both for entity URIs and for attributes/properties URIs).</div>Annahttps://wiki.share-vde.org/w/index.php?title=MediaWiki:Sidebar.json&diff=2195MediaWiki:Sidebar.json2024-02-22T10:51:24Z<p>Anna: </p>
<hr />
<div>[<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Highlights",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Share-VDE: linked data for libraries",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "Main_Page"<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "All wiki pages",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "Special:AllPages",<br />
"depth": 1<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Share Family",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Share Family Linked Data Ecosystem",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:Main_Page"<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "News And Updates",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:NewsAndUpdates",<br />
"depth": 1<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Resources",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:Resources",<br />
"depth": 1<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "FAQ",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:FAQ",<br />
"depth": 1<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Share-VDE",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "About Share-VDE",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Main Page"<br />
},<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Share-VDE institutions",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Main Page/SVDE institutions"<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Activities",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Working groups",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Members/Share-VDE_working_groups",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Strands of work",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Activities",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Workshops",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Workshops",<br />
"depth": 2<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Documentation",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Public documentation and APIs",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareDoc:PublicDocumentation",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Technical doc for Share Staff",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDEmembers:TechnicalDocumentation"<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Members' area",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Community work",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDEmembers:MembersArea",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Access members' area",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "Main_Page/AccessMembersArea"<br />
}<br />
]<br />
}<br />
]</div>Annahttps://wiki.share-vde.org/w/index.php?title=MediaWiki:Sidebar.json&diff=2194MediaWiki:Sidebar.json2024-02-22T10:09:35Z<p>Anna: </p>
<hr />
<div>[<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Highlights",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Share-VDE: linked data for libraries",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "Main_Page",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "All wiki pages",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "Special:AllPages",<br />
"depth": 1<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Share Family",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Share Family Linked Data Ecosystem",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:Main_Page"<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "News And Updates",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:NewsAndUpdates",<br />
"depth": 1<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Resources",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:Resources",<br />
"depth": 1<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "FAQ",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareFamily:FAQ",<br />
"depth": 1<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Share-VDE",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "About Share-VDE",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Main Page"<br />
},<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Share-VDE institutions",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Main Page/SVDE institutions"<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Activities",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Working groups",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Members/Share-VDE_working_groups",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Strands of work",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Activities",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Workshops",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDE:Workshops",<br />
"depth": 2<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Documentation",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Public documentation and APIs",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareDoc:PublicDocumentation",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Technical doc for Share Staff",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDEmembers:TechnicalDocumentation"<br />
}<br />
]<br />
},<br />
{<br />
"type": "enhanced-sidebar-panel-heading",<br />
"text": "Members' area",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"children": [<br />
{<br />
"type": "enhanced-sidebar-subpage-tree",<br />
"text": "Community work",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "ShareVDEmembers:MembersArea",<br />
"depth": 2<br />
},<br />
{<br />
"type": "enhanced-sidebar-internal-link",<br />
"text": "Access members' area",<br />
"hidden": "",<br />
"classes": [],<br />
"icon-cls": "",<br />
"page": "Main_Page/AccessMembersArea"<br />
}<br />
]<br />
}<br />
]</div>Annahttps://wiki.share-vde.org/w/index.php?title=Main_Page/AccessMembersArea&diff=2193Main Page/AccessMembersArea2024-02-22T10:07:38Z<p>Anna: </p>
<hr />
<div>{{DISPLAYTITLE:Access Members' Area}}<br />
<br />
Authenticate in the wiki using your account credentials to access the [https://wiki.share-vde.org/wiki/ShareVDEmembers:MembersArea Share-VDE members' area] including dedicated content to foster community work.</div>Annahttps://wiki.share-vde.org/w/index.php?title=Main_Page/AccessMembersArea&diff=2192Main Page/AccessMembersArea2024-02-22T10:07:18Z<p>Anna: Created page with "{{DISPLAYTITLE:Access Members Area}} Authenticate in the wiki using your account credentials to access the [https://wiki.share-vde.org/wiki/ShareVDEmembers:MembersArea Share-VDE members' area] including dedicated content to foster community work."</p>
<hr />
<div>{{DISPLAYTITLE:Access Members Area}}<br />
<br />
Authenticate in the wiki using your account credentials to access the [https://wiki.share-vde.org/wiki/ShareVDEmembers:MembersArea Share-VDE members' area] including dedicated content to foster community work.</div>Annahttps://wiki.share-vde.org/w/index.php?title=MediaWiki:FooterLinks&diff=2191MediaWiki:FooterLinks2024-02-22T10:02:05Z<p>Anna: </p>
<hr />
<div><br />
* Site:About|About this wiki<br />
* https://wiki.share-vde.org/wiki/Main_Page/Contacts|Contact us<br />
* Site:Privacy_policy|Privacy policy<br />
* Site:Terms_of_service|Terms of service</div>Annahttps://wiki.share-vde.org/w/index.php?title=MediaWiki:FooterLinks&diff=2190MediaWiki:FooterLinks2024-02-22T10:01:41Z<p>Anna: </p>
<hr />
<div><br />
* Site:Privacy_policy|Privacy policy<br />
* Site:About|About this wiki<br />
* https://wiki.share-vde.org/wiki/Main_Page/Contacts|Contact us<br />
* Site:Terms_of_service|Terms of service</div>Annahttps://wiki.share-vde.org/w/index.php?title=MediaWiki:FooterLinks&diff=2189MediaWiki:FooterLinks2024-02-22T10:00:39Z<p>Anna: </p>
<hr />
<div><br />
* Site:Privacy_policy|Privacy policy<br />
* Site:Terms_of_service|Terms of service<br />
* Site:About|About</div>Annahttps://wiki.share-vde.org/w/index.php?title=MediaWiki:FooterLinks&diff=2188MediaWiki:FooterLinks2024-02-22T09:59:18Z<p>Anna: </p>
<hr />
<div><br />
* Site:Privacy_policy|Privacy policy<br />
* Site:Terms_of_service|Terms of service<br />
* Site:Imprint|Imprint<br />
* Site:General_disclaimer|Disclaimer<br />
* Site:About|About</div>Anna