ELAG 2018
Browsing ELAG 2018 by subject "application programming interface"
Now showing 1 - 5 of 5
Publication (Open Access): A Lightning Talk on Manuscripts and IIIF (2018)
Seige, Leander
Lightning Talks (June 8); a video recording is available at: http://repozitar.techlib.cz/record/1275

Publication (Open Access): Enriching Library Metadata with APIs (2018)
Mak, Lucas
Given the ever-dwindling resources assigned to metadata creation, individual libraries are hard-pressed to create and maintain high-quality traditional metadata across the board, let alone to prepare and transform legacy data into linked data. With no additional support to be found internally, libraries should look outside for resources that can help mitigate the situation. Nowadays, libraries no longer monopolize metadata creation. More and more special-domain communities have set up Wikipedia-like crowd-sourced portals to serve the information needs of their members. At the same time, there are international initiatives in the library community to set up data stores for linked data sets. Can the library tap into these rich information resources, in an efficient way, to enrich library metadata in the traditional way as well as prepare the legacy data for the big migration? This presentation will discuss how Michigan State University Libraries is able to harvest selected metadata from various library and non-library community-based portals through APIs (Application Programming Interfaces) in a batched, automated fashion to enrich the existing metadata of a popular music collection and enhance it with URIs for linked data conversion down the road.
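The abstract above describes a batched, API-driven enrichment workflow but names no specific tooling. As a purely illustrative sketch of that pattern, the snippet below looks up a performer name against the public MusicBrainz web service and attaches the returned identifier as a URI; the record structure and field names ("performer", "performer_uri") are assumptions made for the example, not MSU Libraries' actual schema or workflow.

```python
# Illustrative only: batch enrichment of local records with URIs fetched from a
# community API (MusicBrainz here). Field names are hypothetical, not MSU's schema.
import time
import requests

MB_ENDPOINT = "https://musicbrainz.org/ws/2/artist/"
HEADERS = {"User-Agent": "metadata-enrichment-demo/0.1 (contact@example.org)"}

def lookup_artist_uri(name):
    """Return a MusicBrainz URI for the best-scoring artist match, or None."""
    resp = requests.get(
        MB_ENDPOINT,
        params={"query": f'artist:"{name}"', "fmt": "json", "limit": 1},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    artists = resp.json().get("artists", [])
    if artists and artists[0].get("score", 0) >= 90:  # keep confident matches only
        return f"https://musicbrainz.org/artist/{artists[0]['id']}"
    return None

def enrich(records):
    """Add a performer_uri to every record whose performer can be matched."""
    for record in records:
        uri = lookup_artist_uri(record["performer"])
        if uri:
            record["performer_uri"] = uri
        time.sleep(1)  # respect the service's one-request-per-second guideline
    return records

if __name__ == "__main__":
    print(enrich([{"title": "Kind of Blue", "performer": "Miles Davis"}]))
```

The same loop generalizes to any portal that exposes a search API returning stable identifiers; the harvested URIs can later serve as anchors when legacy records are converted to linked data.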
Publication (Open Access): From Hydras to TACOs: Evolving the Stanford Digital Repository (2018)
Harlow, Christina; Fahy, Erin
Stanford University Library has a robust digital library system called the Stanford Digital Repository. This repository holds a little under 500 TB of materials, in preservation and online, for researchers, capture of scholarly output, and digitized cultural heritage. These materials are managed across 90+ codebases serving a variety of functions, from self-deposit web applications, to a nearly 10-year-old parallel processing framework, to a digital repository assets publication mechanism leading into our Blacklight, Spotlight, and GeoBlacklight applications, among other services and needs. At the core of this system is a Fedora 3 store. With Fedora 3 now end-of-lifed, and our system offering limited to no horizontal scalability, we are revisiting our system and architecture. We are rewriting it from the start with the goal of data-forward, distributed microservices and some event-driven processing components. TACO, our new core management API, is the heart of this new architecture and is currently being developed as a prototype. This talk will walk through the process of analysing our current system via a dataflow analysis; designing a new architecture for our digital library against a wide-ranging set of requirements and users; prototyping a core component of the new architecture to be horizontally scalable as well as data- and specification-driven; and then planning how to create 'seams' in our current system so that we can migrate towards the new system in an evolutionary fashion instead of a turn-key migration.

Publication (Open Access): In Out, In Out, And Shake It All About: A Moving Story of Data (2018)
Stevenson, Jane
The Archives Hub blends data. We bring together descriptions of archives, archival resources and repositories in a way that enables us to present an effective and valuable service through our website. We spent two years creating an entirely new system built upon the principle of bringing in data from different sources and providing that data for different purposes. I would like to give some insights from our experience of doing this, and to consider whether we have created something innovative, with inherent potential for future development. I will talk about the architecture that we wanted to create, the workflow that we believed to be essential to our aims, and the challenges that we faced in creating a blend of data that could successfully be deblended in different ways. It required a great deal of thought and planning in terms of what we wanted to achieve, how we should process the data to fulfil those aims, and how we would work with data contributors, who were essential to our success. Over a year after going live with the new service, have we achieved our aim of more consistent, standardised data, and have we provided realistic potential for the data to be re-used? I will give examples of where I think we have fulfilled our aims and where we still have issues. I will argue that the ability to blend and deblend relies upon systems and technology, but it also relies upon people and their habits, expectations, understanding and ambitions.

Publication (Open Access): The Delicate Dance of Decentralization and Aggregation: keynote speech (2018)
Verborgh, Ruben
Ruben Verborgh is a professor of Semantic Web technology at Ghent University – imec and a research affiliate at the Decentralized Information Group at MIT. He aims to build a more intelligent generation of clients for a decentralized Web, at the intersection of Linked Data and hypermedia-driven Web APIs. Through the creation of Linked Data Fragments, he introduced a new paradigm for query execution at Web scale. He has co-authored two books on Linked Data and contributed to more than 200 publications for international conferences and journals on Web-related topics.
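Linked Data Fragments, mentioned in the keynote bio, move part of query execution to the client: a Triple Pattern Fragments server answers only single triple-pattern requests over HTTP, and the client combines the results itself. The sketch below issues one such request; the endpoint URL is an assumption (the public DBpedia fragments service has historically been published at fragments.dbpedia.org), while the subject/predicate/object parameters follow the published Triple Pattern Fragments interface.

```python
# Illustrative sketch of a single Triple Pattern Fragments request.
# The endpoint URL is assumed; availability of the public DBpedia service may vary.
import requests

FRAGMENTS_ENDPOINT = "https://fragments.dbpedia.org/2016-04/en"

def fetch_fragment(subject=None, predicate=None, obj=None):
    """Fetch the first page of triples matching a single triple pattern."""
    params = {}
    if subject:
        params["subject"] = subject
    if predicate:
        params["predicate"] = predicate
    if obj:
        params["object"] = obj
    resp = requests.get(
        FRAGMENTS_ENDPOINT,
        params=params,
        headers={"Accept": "text/turtle"},
        timeout=10,
    )
    resp.raise_for_status()
    # The response contains the matching triples plus hypermedia controls
    # (paging links and estimated counts) that a client uses to plan its query.
    return resp.text

if __name__ == "__main__":
    print(fetch_fragment(predicate="http://dbpedia.org/ontology/birthPlace")[:1000])
```

Because the server's interface stays this simple, it is cheap to host; the heavier query logic (joins, filters) lives in the client, which is the trade-off the Linked Data Fragments work explores.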