The invitation to present a keynote at the VIVO Conference, and the goal of the VIVO platform, as stated on the DuraSpace site, to create an integrated record of the scholarly work of an organisation, reminded me of various efforts that I have been involved in over the past years that had similar goals. EgoSystem (2014) attempted to gather information about postdocs who had left the organisation, leaving little or no contact details behind. Autoload (2017), an operational service, discovers papers by organisational researchers in order to upload them to the institutional repository. myresearch.institute (2018), an experiment that is still in progress, discovers artefacts that researchers deposit in web productivity portals and subsequently archives them. More recently, I have been involved in thinking about the future of NARCIS, a portal that provides an overview of research productivity in The Netherlands. The approach taken in all these efforts shares a characteristic motivated by a desire to devise scalable and sustainable solutions: let machines rather than humans do the work. In this talk, I will provide an overview of these efforts, their motivations, the challenges involved, and the nature of success (if any).
For more than a decade, VIVO sites have been creating semantic data regarding scholarship that could be used to change how scholarly work is found and how expertise is assessed and compared. Previous work has attempted to centrally collect and normalize semantic data for search purposes. Other efforts have used federated search across sites to provide simple access to profiles. Can we now consider how best to create a semantic cross-site search capability? Panelists will discuss the following questions: What is semantic search, and how might it differ from other search paradigms? Should the approach be centralized, in which semantic data is brought together to a single provider of search functionality, or decentralized, in which data remains at rest and search functionality is localized, or should other approaches be considered? What are the roles of ontology, data, and software provisioning in semantic search? How might technologies such as TPF, GraphQL, Schema.org, Solid, and others be leveraged? What is needed to create a semantic cross-site search capability for VIVO?
Herbert Van de Sompel, Chief Innovation Officer, Data Archiving and Networked Services
Dr. Herbert Van de Sompel graduated in Mathematics and Computer Science at Ghent University (Belgium), and in 2000 obtained a Ph.D. in Communication Science there. He is currently Chief Innovation Officer at Data Archiving and Networked Services (DANS) in The Netherlands. He has previously held positions as head of Library Automation at Ghent University, Visiting Professor in Computer Science at Cornell University, Director of e-Strategy and Programmes at the British Library, and information scientist at the Research Library of the Los Alamos National Laboratory where he was the team leader of the Prototyping Team. Herbert has played a major role in creating the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), the Open Archives Initiative Object Reuse & Exchange specifications (OAI-ORE), the OpenURL Framework for Context-Sensitive Services (ANSI/NISO Z39.88-2004), the SFX linking server, the bX scholarly recommender service, info URI (RFC 4452), Open Annotation (W3C Community Group specification), ResourceSync (ANSI/NISO Z39.99-2014), Memento "time travel for the Web" (RFC 7089), Robust Links, and Signposting the Scholarly Web.
The Semantic Web captures knowledge by making, among other things, research data discoverable, accessible, and understandable in the long term, increasing its shareability, extensibility, and reusability. However, the process of extracting, structuring, and organizing knowledge from one or multiple heterogeneous data sources to construct knowledge-intensive systems has proven to be easier said than done. During this talk, I will elaborate on knowledge graph generation from heterogeneous data sources, on the assessment of both the knowledge graphs and the rules that generate them, and on the refinement of rules and knowledge graphs alike.
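As an editorial illustration of what such generation rules can look like, here is a minimal sketch of an RML-style mapping (the rule language behind the RML tool chain mentioned in the speaker bio below), parsed with rdflib only to check that it is well-formed Turtle; the file name, template URIs, and column names are invented for illustration and are not taken from the talk.

```python
# Minimal sketch of an RML-style mapping rule, parsed with rdflib to check
# that it is well-formed Turtle. File name, URIs, and columns are hypothetical.
from rdflib import Graph

RULES = """
@prefix rr:   <http://www.w3.org/ns/r2rml#> .
@prefix rml:  <http://semweb.mmlab.be/ns/rml#> .
@prefix ql:   <http://semweb.mmlab.be/ns/ql#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:PersonMap a rr:TriplesMap ;
    rml:logicalSource [
        rml:source "people.csv" ;              # hypothetical input file
        rml:referenceFormulation ql:CSV
    ] ;
    rr:subjectMap [
        rr:template "http://example.org/person/{id}"   # one IRI per row
    ] ;
    rr:predicateObjectMap [
        rr:predicate foaf:name ;
        rr:objectMap [ rml:reference "name" ]  # take the value of the 'name' column
    ] .
"""

g = Graph()
g.parse(data=RULES, format="turtle")
print(f"Mapping parses as Turtle: {len(g)} triples")
```

Assessing and refining such rules, rather than only the graphs they produce, is precisely the theme of the talk.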
Anastasia Dimou, Senior post-doctoral researcher, IDLab, Ghent University
Dr. Anastasia Dimou is a senior post-doctoral researcher at imec and IDLab, Ghent University. Anastasia joined the IDLab research group in February 2013. Her research interests include Linked Data generation and publication, data quality and integration, and knowledge representation and management. As part of her research, she investigated a uniform language for describing rules that generate high-quality Linked Data from multiple heterogeneous data sources. Anastasia currently conducts research on automated Linked Data generation and publication workflows, data validation and quality assessment, and query answering and knowledge integration over big streaming data. Her research activities are applied in different domains, such as the Internet of Things (IoT), manufacturing, media, and advertising, and have led to the development of the RML tool chain. She is involved in various national, bilateral, and EU projects, has authored several peer-reviewed publications presented at prominent conferences and in journals such as ESWC, ISWC, JWS, and SWJ, has served on several programme committees, and has co-organized tutorials and workshops.
What are some of the sociotechnical constraints and the effects of contemporary scholarly communication? How can we appropriate the Open Web Platform to facilitate an actor-centric scholarly ecosystem? In this talk, we discuss designing decentralised and socially-aware systems as well as their effects and artifacts.
Sarven Capadisli, Researcher, University of Bonn and TIB, Hannover
Sarven Capadisli is currently writing his PhD thesis at the University of Bonn and is a researcher at TIB, Hannover. His research involves the Linked Research initiative and dokieli, a client-side editor for decentralised article publishing, annotations, and social interactions.
Staff at medical institutions are regularly called upon to produce and maintain lists of scholarly publications authored by individuals ranging from NIH-funded principal investigators to people affiliated with other institutions, such as alumni and residents. This work tends to be done on an ad hoc basis and is time-consuming, especially when profiled individuals have common names. Often, feedback from the authors themselves is not adequately captured in a central location and repurposed for future requests. ReCiter is a highly accurate, rule-based system for inferring which publications in PubMed a given person has authored. ReCiter includes a Java application, a DynamoDB-hosted database, and a set of RESTful microservices, which collectively allow institutions to maintain accurate and up-to-date author publication lists for thousands of people. This software is optimized for disambiguating authorship in PubMed and, optionally, Scopus. ReCiter rapidly and accurately identifies articles by a given person, including those written at previous affiliations. It does this by leveraging institutionally maintained identity data (e.g., departments, relationships, email addresses, year of degree). With the more complete and efficient searches that result from combining these types of data, individuals at institutions can save time and be more productive. Running ReCiter daily, one can ensure that the desired users are the first to learn when a new publication has appeared in PubMed. ReCiter is freely available and open source under the Apache 2.0 license: https://github.com/wcmc-its/ReCiter For our presentation, we will demonstrate:
* How to install ReCiter
* How to load ReCiter with identity data
* How to run ReCiter
* Its API outputs
* How ReCiter integrates with a third-party interface for capturing feedback, which is fed back into ReCiter to further improve accuracy
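To make the rule-based idea concrete, the following deliberately simplified sketch shows how institutionally maintained identity data can be combined into an authorship score for a candidate PubMed article; the features, weights, and threshold are illustrative assumptions, not ReCiter's actual rules.

```python
# Toy rule-based authorship scorer: combines identity-data evidence into a score.
# Features, weights, and threshold are illustrative, not ReCiter's actual rules.
from dataclasses import dataclass, field

@dataclass
class Identity:
    last_name: str
    first_initial: str
    emails: set = field(default_factory=set)
    departments: set = field(default_factory=set)

def score_candidate(identity: Identity, article: dict) -> float:
    score = 0.0
    for author in article["authors"]:
        if (author["last_name"].lower() == identity.last_name.lower()
                and author["first_name"][:1].lower() == identity.first_initial.lower()):
            score += 1.0                      # name match alone is weak evidence
            if author.get("email") in identity.emails:
                score += 5.0                  # an email match is strong evidence
            if author.get("affiliation") and any(
                    dept.lower() in author["affiliation"].lower()
                    for dept in identity.departments):
                score += 2.0                  # department/affiliation match
    return score

person = Identity("Doe", "J", {"jdo2001@med.example.edu"}, {"Radiology"})
article = {"authors": [{"last_name": "Doe", "first_name": "Jane",
                        "email": "jdo2001@med.example.edu",
                        "affiliation": "Department of Radiology, Example Medical College"}]}
accepted = score_candidate(person, article) >= 6.0   # illustrative threshold
print(accepted)  # True: name + email + department evidence clears the bar
```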
The VIVO Scholar Task Force is creating a new, read-only front-end for VIVO. Come hear an update about the work on VIVO Scholar so far. Task force representatives will demo components and answer questions.
* Learn how five universities have worked together to reach the current stage of VIVO Scholar.
* Review the new profile and search pages.
* Watch the quick and easy GraphQL queries (a hedged example follows below).
* See how sharing data makes your VIVO essential.
For more info on VIVO Scholar, see the Task Force page on the VIVO wiki.
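As a hedged taste of what such GraphQL queries might look like from a client, consider this sketch; the endpoint URL and the schema fields in the query are assumptions for illustration, not the actual VIVO Scholar schema.

```python
# Hedged sketch: POST a GraphQL query to a VIVO Scholar-style endpoint.
# The endpoint URL and the schema fields below are illustrative assumptions.
import requests

ENDPOINT = "https://scholars.example.edu/api/graphql"  # hypothetical endpoint

QUERY = """
query PersonPublications($id: String!) {
  person(id: $id) {            # field names are assumed, not the real schema
    name
    publications { title year doi }
  }
}
"""

resp = requests.post(ENDPOINT, json={"query": QUERY, "variables": {"id": "per123"}})
resp.raise_for_status()
for pub in resp.json()["data"]["person"]["publications"]:
    print(pub["year"], pub["title"])
```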
Paul Albert, Weill Cornell Medicine
Sarbajit Dutta, Weill Cornell Medicine
Michael Bales, Weill Cornell Medicine
Jie Lin, Weill Cornell Medicine
The Web of Science (WoS) is a trusted source for publication and citation metadata of scholarly works dating back to 1900. The multidisciplinary database covers all areas of science, as well as the social sciences, arts, and humanities. WoS comprises works published in over 20,000 journals, as well as books and conferences. In 2019, the Web of Science Group will release a new RESTful API that makes accessing and reusing citation metadata easier than ever. In this workshop, participants will be introduced to the new WoS APIs, the metadata available, and the new API registration process. Workshop participants will also gain hands-on experience using two Python script libraries, wos2vivo and incites2vivo. wos2vivo is an open source Python library for easily querying the Web of Science for your institution’s publications in bulk and transforming the data into VIVO-compatible linked data. incites2vivo will add indicator flags from InCites, such as whether a publication is a Hot Paper, an Industry Collaboration, an International Collaboration, or Open Access. Lastly, workshop attendees will learn how to embed dynamically updated citation counts for their publications on their VIVO site. This technical workshop is appropriate for both beginning and advanced users. Please bring a laptop with Python installed. While a subscription is required for access to the Web of Science, all participants will be provided with temporary API credentials for the workshop.
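For a flavor of the kind of transformation involved, here is a hedged sketch that fetches publication metadata over HTTP and emits VIVO-compatible triples with rdflib; the endpoint URL, authentication header, and response shape are placeholders and do not describe the actual Web of Science API contract or the wos2vivo interface.

```python
# Hedged sketch: turn publication metadata into VIVO-compatible RDF with rdflib.
# The API URL, auth header, and response shape are placeholders, not the real
# Web of Science API contract.
import requests
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BIBO = Namespace("http://purl.org/ontology/bibo/")  # VIVO models documents with bibo

resp = requests.get(
    "https://api.example.com/wos/publications",      # hypothetical endpoint
    headers={"X-ApiKey": "YOUR-KEY"},                # hypothetical auth header
    params={"institution": "Example University"},
)
resp.raise_for_status()

g = Graph()
g.bind("bibo", BIBO)
for rec in resp.json()["records"]:                   # assumed response shape
    pub = URIRef(f"https://vivo.example.edu/individual/pub{rec['id']}")
    g.add((pub, RDF.type, BIBO.AcademicArticle))
    g.add((pub, RDFS.label, Literal(rec["title"])))
    g.add((pub, BIBO.doi, Literal(rec["doi"])))

g.serialize(destination="wos_publications.ttl", format="turtle")
```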
VIVO is an open source tool based on linked open data concepts for connecting and publishing a wide range of research information within and across institutions. The goal of this workshop is to introduce new community members to the VIVO project and software. The workshop will consist of three sections: 1. A summary of VIVO’s history and what it does, 2. How it works, and 3. Where the project is heading. Part 1 will include a background of the VIVO project, how institutions and organizations are currently using it, how institutional stakeholders are involved, and what benefits it offers to researchers, to institutions, and to the global community. Part 2 will include a high-level discussion of how VIVO works and introduce the concepts of the Resource Description Framework (RDF), ontologies, Vitro, and Triple Pattern Fragments, how VIVO is managed, and how to feed downstream systems. Finally, part 3 will introduce you to next-gen VIVO initiatives such as decoupling of the architecture, the next version of the ontology, VIVO Scholar, VIVO Combine, the internationalization efforts, and VIVO Search. You’ll learn how to find the right resources as a new VIVO implementer, including data sources, team members, governance models, and support structures. This workshop brings best practices and “lessons learned” from mature VIVO projects to new implementations. We’ll help you craft your messages to different stakeholders, so you’ll leave this workshop knowing how to talk about VIVO to everyone from your provost to faculty members to web developers.
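To ground the RDF and ontology concepts of Part 2, here is a hedged sketch of a SPARQL query that lists a researcher's publications from a VIVO endpoint; the endpoint URL and individual URI are placeholders, and the exact triple patterns may vary between VIVO versions.

```python
# Hedged sketch: list a researcher's publications from a VIVO SPARQL endpoint.
# The endpoint URL and person URI are placeholders; the authorship pattern
# shown follows the VIVO 1.6+ ontology but may differ across versions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://vivo.example.edu/api/sparqlQuery")  # hypothetical
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX bibo: <http://purl.org/ontology/bibo/>

SELECT ?title WHERE {
  <https://vivo.example.edu/individual/n1234> vivo:relatedBy ?authorship .
  ?authorship a vivo:Authorship ;
              vivo:relates ?work .
  ?work a bibo:Document ;
        rdfs:label ?title .
}
""")
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["title"]["value"])
```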
Converis supports comprehensive tracking and support across the entire research lifecycle, including grant, award, and publication management. Built-in integrations with the Web of Science and InCites make it easy to manage your organization’s research activity. Converis is a powerful upstream tool for managing a confluence of data for a VIVO site. In this presentation, we will showcase improvements made to the Converis application in Converis 6, as well as improvements in the works. We will also discuss enhancements to the Converis-to-VIVO connector that allows VIVO to be automatically fed by an upstream Converis data source.
At the Technical University of Denmark (DTU), VIVO is used as an internal research analytics platform (RAP) providing the university’s researchers and research managers with easy-to-use analytical views on research output, impact, collaboration, etc. VIVO, combined with a good data source, provides a platform for high-quality, high-specificity, and high-integrity analytics under the university’s own control. In the latest release, Web of Science data is supplemented with advanced indicators from InCites, and the triple store is complemented by a relational database management system and an elaborate caching system to achieve attractive response times. The WoS data is enhanced by local mapping/normalization of research department names, enabling department-level analytics. The DTU RAP is developed in collaboration between the university, the IT consultancies Vox Novitas and Ontocale, and Clarivate Analytics, producer of WoS and InCites. All software is available as open source for anyone with similar needs and similar access to data.
The diversity, energy, and innovation in the VIVO community are inspirational. Community initiatives are strong, as are contributions back to the core application. One of the VIVO project's primary objectives is to channel, where appropriate, community effort into the core application towards agreed-upon outcomes. We are delighted to say that this has been happening! This presentation will not attempt to detail all of the technical efforts over the past year, but will highlight a range of advancements and milestones accomplished since the previous VIVO conference. In the context of reviewing the year’s activity, this session is also intended to solicit feedback from attendees on technical and community initiatives and processes. At the end of 2018, the VIVO Leadership Group collected input from the community and created a "Statement of VIVO's Product Direction for 2019". This statement details four strategic initiatives:
* Modernize the presentation layer of VIVO
* Decouple the architecture
* VIVO Combine
* VIVO Search
Following the publication of this statement, an architecturally-minded team representing distinct VIVO stakeholder constituencies was gathered for the purpose of developing the architectural approaches required to address the direction of the project. The primary goal of the two-day face-to-face meeting was to assess and document a plan for improving the VIVO application architecture towards enabling and realizing the technical efforts defined in the "Statement of VIVO's Product Direction for 2019". This presentation will include a detailed status of the completed as well as planned development activities following from the decisions of the architectural meeting.
Benjamin Gross, Web of Science Group
What makes current research information systems (CRISs) different from other research information systems used around the world? And what does this mean for VIVO? This workshop introduces CRISs to VIVO users in the US, Europe, and beyond. We’ll compare institutional goals for managing research information and show how they drive the selection of tools, platforms, and systems, as well as public implementations of these platforms. We’ll explore a number of VIVO implementations that consume and display CRIS data brilliantly. And we’ll get an update on the collaboration between euroCRIS and the VIVO community to map and load CERIF data to VIVO. Workshop agenda:
1. Introduction to CRISs:
 * How CRISs differ from other systems like repositories and grants systems (pre-award and post-award)
 * CRIS use cases in Europe (for reporting, for management support, for generating profiles and CVs, for managing and archiving research data, and more)
2. Pairing VIVO and CRISs: the benefits and challenges of creating and maintaining VIVOs based on institutional CRIS implementations
3. Promoting and facilitating interoperability: using standards to make the exchange of data between CRIS and VIVO easier and more efficient, with an update on the CERIF2VIVO mapping project
Pablo de Castro, University of Strathclyde
The growing complexity of digital research environments, especially with the advance of Open Science (which includes not only Open Access but also Open Data) and the need to make data FAIR, has highlighted the importance of evolving the way we search and visualize research results with powerful tools. When scientific information is correctly stored in the CRIS, it must also be easy to search and find, and must guide the discovery of information. This is one of the goals of the SIGMA Strategic Plan for the research area which, since the beginning of 2018, has been working to improve the search tools and results for the scientific production of an institution's researchers. The project began by evaluating the best ontology for translating the current data model of SIGMA CRIS into a semantic model. To do this, we analyzed some of the existing semantic engines and finally decided to test the VIVO solution. We found that the VIVO ontology fits more than 80% of the Spanish model for research. We also took into account that VIVO is open source software supported by members, that its ontology for representing scholarship is used by relevant universities positioned highly in the international rankings, and that a large community of members stands behind VIVO and DuraSpace. For these reasons, SIGMA decided to join the DuraSpace community at the beginning of 2018, participating in the governance of the VIVO project. During this time, SIGMA has collaborated with VIVO to define the 2019 roadmap of the product, and a calendar of sprints is now scheduled to carry out what was agreed in the product direction. On the SIGMA side, this calendar is aligned with its strategic plan. As a result, we now have an adapted SIGMA ontology based on the VIVO ontology, and some tools that we will show in this presentation as examples of the VIVO ontology and tools adapted to the Spanish science model.
Anna Guillaumet, SIGMA
Michele Minnielli, DuraSpace
Ed Simons, Radboud University Nijmegen
In early 2018, the VIVO Leadership Group brought together parties from across the broader VIVO community at Duke University to discuss critical aspects of VIVO as both a product and a community. At the meeting, a number of working groups were created to do deeper work on a set of focus areas to help inform VIVO leadership in taking steps toward the future growth of VIVO. One group was tasked with understanding the current perception of VIVO's governance and structure, from effectiveness to openness and inclusivity, and with making recommendations to the VIVO Leadership Group concerning key strengths to preserve and challenges that need to be addressed. This session will report on the results of a survey run by the Governance and Structure Working Group in late 2018. We will engage with the audience to get reactions to the results, ensure that interpretations make sense to the wider group, and discuss the next stages in finalising publication of the overall report of the committee.
Your VIVO needs updated information.
* Updated data means more traffic: visits and pageviews.
* More traffic encourages faculty to update their information.
Does this sound familiar? If you’re wondering how to build up this positive feedback loop for your institution’s VIVO, join us to review Duke University’s best practices for creating a “buzz” around Scholars@Duke. We’ll talk about:
* Crafting effective messages
* Choosing the right communications channels
* Hosting events that attract faculty
* Boosting SEO (or trying to)
* Learning from users and measuring success
And we'll give updates on our plans to launch a redesigned site, create a video series to build awareness, and improve strategic planning through analytics.
Julia Trimmer, Duke University
A second version of the VIVO ontology. We propose to develop a consistent, sufficient, BFO-based ontology for representing scholarship. By consistent, we mean the ontology uses a single approach to representation. Using a single approach, we expect to simplify the ontology: patterns are reused and complexity is reduced. By sufficient, we mean we cover the domain of scholarship at the level necessary to represent and use information about scholarship. The ontology is informed by its applications. By BFO-based, we mean we commit to an approach to representation based on the Basic Formal Ontology. The approach is well understood and well adopted in the ontology community. The domain of the ontology is well-defined and stable. Why a new ontology, and why now? The original work on the VIVO ontology began in 2007 at Cornell. The 2009 NIH grant significantly expanded the ontology. The 2013 CTSA Connect effort significantly re-engineered the ontology, attempting to bring it to standards current at the time and introducing BFO as an upper-level ontology, but the effort was never completed, and the introduction of the new ontology (VIVO version 1.6) was not accompanied by sufficient tooling, training, and time to manage the community change. Since 2013, the ontology has essentially been frozen. As development seeks to create an interface between the ontology and the presentation software, there is an opportunity to create an ontology that is independent of the software and can be mapped to it. Benefits of a new ontology. The new ontology will:
* Add to our ability to represent all of scholarship, including the arts, peer review, new research outputs, research impact, and global needs
* Adopt current ontological best practice, including tooling, OBO Principles, a focus on the domain of scholarship and expertise, and use of only those ontologies that are aligned
* Use simple, consistent representations supporting ontological reasoning
* Be appropriate for use by any project seeking to build and use research graphs
How and when. A new ontology could be developed in three phases by the Ontology Interest Group of the VIVO Project, working in collaboration with other projects, ontologists, developers, and community members. All are welcome to join the effort. A second phase would be necessary for refinement, testing, and tooling for adoption. A third phase is needed for community change management, mapping to presentation data structures, testing, and training. The existing ontology (version 1.x) will continue to be supported indefinitely.
Violeta Ilik, Columbia University
Michael Conlon, VIVO Project
Nataša Popović, University of Montenegro
Ruben Verborgh, Ghent University
In this talk we discuss the characteristics of Linked Data-based resource discovery and its limitations in finding content by direct processing of information resources, in comparison to the solution provided through content metadata supported by knowledge organization systems (KOSs) such as thesauri, classifications, and subject descriptors. KOSs are traditional information discovery tools that determine the meaning and control the ambiguities of language; hence they are often referred to as controlled vocabularies. They are used by libraries as well as by publishers and bookshops. However, most KOSs are designed for and used in traditional information environments and are often not readily accessible by programs. Semantic technologies such as linked data offer solutions for expressing KOSs in a more formalized and machine-understandable way. They provide a way of uniquely identifying and contextualizing semantically meaningful units irrespective of their possible linguistic or symbolic representations. This “unique identification” (the URI) is the key element of linked data technology: anything that can be identified can be linked. The publishing of KOSs as linked data has become the most important form of sharing and using controlled vocabularies in the Web environment. This is also a solution for accessing the meaning and knowledge stored in the collections indexed by KOSs (both directly and indirectly). As more and more KOSs are published as linked data, and more and more collection metadata containing KOS concepts join the linked data cloud, some obstacles to linking collection metadata and KOSs have become more obvious. Human knowledge is in constant flux, and KOSs develop over time to embrace new terminology and new fields of knowledge. These changes affect the unique identifiers used in KOSs and, consequently, all links between KOSs and resource collections.
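As a concrete illustration of publishing a KOS as linked data, the sketch below expresses a single classification concept in SKOS (the W3C vocabulary commonly used for this purpose) with rdflib; the URIs, notation, and labels are invented for illustration and do not belong to any real vocabulary.

```python
# Hedged sketch: one KOS concept expressed as SKOS linked data with rdflib.
# The URIs, notation, and labels are illustrative, not a real vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://vocab.example.org/")

g = Graph()
g.bind("skos", SKOS)

concept = EX["025.4"]                       # hypothetical class for subject indexing
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.notation, Literal("025.4")))
g.add((concept, SKOS.prefLabel, Literal("Subject indexing", lang="en")))
g.add((concept, SKOS.broader, EX["025"]))   # link upward in the hierarchy
g.add((concept, SKOS.inScheme, EX["scheme"]))

print(g.serialize(format="turtle"))
```

Because the concept now has a stable URI, collection metadata anywhere on the Web can link to it; the talk's point is that when such URIs change as the KOS evolves, those links are what break.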
Ronald Siebes, DANS
Aida Slavic, UDC Consortium
Andrea Scharnhorst, Royal Netherlands Academy of Arts and Sciences
Conferences are an essential part of scholarly communication. However, like researchers and organizations, they suffer from an ambiguity problem: the same acronym or conference name can refer to very different conferences. In 2017, Crossref and DataCite started a working group on conference and project identifiers. The group includes various publishers, A&I service providers, and other interested stakeholders. The group participants have drafted a metadata specification and gathered feedback from the community. In this talk, we will update VIVO participants on where we stand with PIDs for conferences and conference series, and with Crossmark for proceedings, and we invite the broader community to comment. Read the Crossref post for more info about the group: https://www.crossref.org/working-groups/conferences-projects/
Aliaksandr Birukou, Springer Nature
Patricia Feeney, Crossref
Greg Burton, Duke University
Don Elsborg, University of Colorado Boulder
Hans Harlacher, Duke University
Duke University's VIVO-based implementation, Scholars@Duke, has become an essential tool for the maintenance and dissemination of scholarly work within our institution. In addition to VIVO, Scholars@Duke includes user-friendly editing options that make it easy for collaborators to co-maintain a single scholarly public record. Before any records are sent to VIVO, our in-house editor, Profile Manager, and Symplectic Elements work together to create workflows that manage the rights and display preferences of multiple collaborators on a single record. In this presentation, I'll go through some of the solutions we've implemented regarding attribution, individual privacy concerns, conflicts in display preferences, and representing a project over time. I'll give examples from publication, artistic work, course, grant, and advisee records. I'll also give suggestions for weighing the benefits of shared records against their complexities.
Damaris Murry, Duke University
Robert Nelson, Duke University
Ralph O'Flinn, The University of Alabama at Birmingham
Richard Outten, Duke University
Harry Thakkar, Duke University
Jim Wood, Duke University
Alex Viggio, University of Colorado Boulder
There are numerous sources of metadata regarding research activity that Clinical and Translational Science Award (CTSA) hubs currently duplicate effort in acquiring, linking, and analyzing. The Science of Translational Science (SciTS) project provides a shared data platform for hubs to collaboratively manage these resources and avoid redundant effort. In addition to the shared resources, participating CTSA hubs are provided private schemas for their own use, as well as support in integrating these resources into their local environments. This project builds upon multiple components completed in the first phase of the Center for Data to Health (CD2H), specifically: a) data aggregation and indexing work on research profiles, and their ingest into and improvements to CTSAsearch by Iowa (http://labs.cd2h.org/search/facetSearch.jsp); b) NCATS 4DM, a map of translational science; and c) metadata requirements analysis and ingest from a number of other CD2H and CTSA projects, including educational resources from DIAMOND and N-lighten, development resources from GitHub, and data resources from DataMed (bioCADDIE) and DataCite. This work also builds on other related work on data sources, workflows, and reporting from the SciTS team, including entity extraction from the acknowledgement sections of PubMed Central papers, disambiguated PubMed authorship, ORCID data and integrations, NIH RePORT, Federal RePORTER, and other data sources and tools. Early activities for this project include:
* Configuration of a core warehouse instance
* Population of the warehouse from the above-mentioned sources
* Configuration of local schemas for each CTSA hub and other interested parties
* Creation of example solutions for ingest/extraction using JDBC, GraphQL, SPARQL, and tools such as Teiid (an open source data federation platform); a sketch of one such extraction appears below
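To make the extraction side concrete, here is a minimal sketch of a SPARQL extraction such an example solution might perform. The endpoint URL is invented, and the use of the VIVO core FacultyMember class is only a plausible choice for profile data, not the project's actual schema.

```python
# Sketch: extract person records from a hypothetical SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX vivo: <http://vivoweb.org/ontology/core#>
SELECT ?person ?name WHERE {
  ?person a vivo:FacultyMember ;
          rdfs:label ?name .
} LIMIT 10
"""

def extract_people(endpoint_url: str) -> list:
    """Run the query and return the JSON result bindings."""
    client = SPARQLWrapper(endpoint_url)
    client.setQuery(QUERY)
    client.setReturnFormat(JSON)
    results = client.query().convert()
    return results["results"]["bindings"]

if __name__ == "__main__":
    # Hypothetical local warehouse endpoint.
    for row in extract_people("http://localhost:3030/warehouse/sparql"):
        print(row["name"]["value"], row["person"]["value"])
```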
David Eichmann, University of Iowa
Prizes are important indicators of esteem in research, and they deserve a persistent primary record of their own.
* Award citation information is needed throughout the sector, all the time. To name a few examples, institutions aggregate prizes from their alumni over time to build a story about the minds they have educated and how welcoming their research environment is to creativity. Prizes are built into university rankings and accreditation processes. To tell these stories easily, award citation information needs to be easily available.
* Award citations should be richly described records. An award citation is more than just a date, an award, and a link to a person and an awarding body. A citation links to the research that it acknowledges. When an award is accepted, an acceptance speech is often recorded. The best way to capture award citations in all of the richness they deserve is to establish normative metadata practices based around the minting of a persistent identifier.
* Award citations are the historical signposts through which society understands research progress. These signposts deserve a permanent digital record.
* Creating transparency around prizes can help improve research culture. At their best, prizes recognise a diversity of research achievement in society, from literature to physics and everything in between. It has also been observed that prizes are being awarded to a concentrated set of elite researchers. By making prize awardee information more discoverable, more informed decisions can be made about which prizes to award and to whom.
* The flow of prize information through research systems is currently significantly hampered. It needs fixing. Wikidata is perhaps the best secondary source of prize information. Consider how it gets there. What information does it lose along the way? A significant amount of work could be avoided by building information flows around the authority that persistent records provide.
To begin to address these issues, we have built an open source awards publishing reference implementation. This implementation is based on xPub, a journal submission and peer review platform from the Collaborative Knowledge Foundation. Finalised award records are published to figshare, with associated metadata pushed to Wikidata; a sketch of the figshare step appears below.
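As a hedged sketch of the publishing step, the snippet below creates a draft figshare record for an award citation via figshare's public REST API. The metadata fields are minimal placeholders, the token comes from the environment, and the Wikidata push is omitted; this illustrates the workflow, it is not the reference implementation's actual code.

```python
# Sketch: publish an award citation record to figshare as a draft article.
import os
import requests

API = "https://api.figshare.com/v2"

def publish_award_record(token: str, title: str, description: str) -> dict:
    """Create a draft figshare article holding award citation metadata."""
    response = requests.post(
        f"{API}/account/articles",
        headers={"Authorization": f"token {token}"},
        json={"title": title, "description": description},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = publish_award_record(
        os.environ["FIGSHARE_TOKEN"],  # personal API token (placeholder)
        "Example Prize 2019 awarded to Jane Doe",
        "Award citation record linking the prize to acknowledged research.",
    )
    print(result)  # the response includes the location of the new draft
```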
The OPERA project (Open Research Analytics) is developing a pilot VIVO with data for all 8 Danish universities. The key data come from the Dimensions database, but the Danish national bibliometric and open access indicators are also integrated. This pilot VIVO will facilitate national research analytics, including its dimensions of openness, using open concepts, open software, and data that are as open as possible. This joint experimental platform will be used to explore aspects such as publication output and impact, Open Science efforts, funding patterns, subject fields, gender patterns, and both established and potential collaboration patterns within and outside Denmark as well as with partners in industry. Network analyses and visualizations will be integrated in the VIVO platform to complement more traditional analytics and visual elements and to support new ways of perceiving numbers, patterns, and potentials. The Dimensions VIVO is a collaboration between the Danish universities, the IT consultants Vox Novitas and Ontocale, and Digital Science, the producer of Dimensions.
Simon Porter, Digital Science
CRIS systems are becoming a mandatory element of university information ecosystems. A CRIS provides significant support for reporting routines for research councils, national authorities, employee evaluation, and academic degree communities, and provides valuable insights for university management. Properly implemented (in organizational terms), a CRIS becomes a source of complete and reliable data going far beyond projects and publications, thus providing more profiling capability than, for instance, crowd-based systems like ResearchGate. In developing the OMEGA-PSIR CRIS system at Warsaw University of Technology, now a free and open system used at 20+ universities in Poland, we gave research visibility the same priority as reporting. By applying text mining, artificial intelligence, external ontologies, and also organizational regimes, we obtained a robust profiling system that allows searching for experts, modelling and visualising research teams, intelligent discipline matching, multi-purpose and multi-source researcher rankings, and much more. We would like to share our experiences in building a robust research profiling system (RPS) that relies on complete and reliable data, based on 20+ deployments of the OMEGA-PSIR open software in Poland. A toy sketch of text-mining-based expert search appears below.
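The sketch below illustrates one text-mining technique for expert search, TF-IDF with cosine similarity. It is an illustration only, not OMEGA-PSIR's actual pipeline, and the researcher profiles are invented.

```python
# Toy expert search: rank researchers by TF-IDF similarity to a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented profiles built from publication titles/keywords.
profiles = {
    "Researcher A": "graph databases semantic web ontologies RDF",
    "Researcher B": "machine learning text mining document classification",
    "Researcher C": "bibliometrics citation analysis research evaluation",
}

def find_experts(query: str, top_n: int = 2):
    """Return the top_n researchers most similar to the query text."""
    names = list(profiles)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(list(profiles.values()) + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    print(find_experts("text mining for research information"))
```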
Jakub Koperwas, Warsaw University of Technology
Łukasz Skonieczny, Warsaw University of Technology
Wacław Struk, Warsaw University of Technology
Henryk Rybiński, Warsaw University of Technology
In early 2018, the VIVO Leadership Group brought together parties from across the broader VIVO community at Duke University to discuss critical aspects of VIVO as both a product and a community. At the meeting, a number of working groups were created to do deeper work on a set of focus areas to help inform the VIVO leadership in taking steps toward the future growth of VIVO. One group was tasked with understanding current perceptions of VIVO's governance and structure, from effectiveness to openness and inclusivity, and with making recommendations to the VIVO Leadership Group concerning key strengths to preserve and challenges that need to be addressed. This session will report on the results of a survey run by the Governance and Structure Working Group in late 2018. We will engage with the audience to get reactions to the results, ensure that interpretations make sense to the wider group, and discuss the next stages in finalising publication of the overall report of the committee.
Kristi Holmes, Northwestern University
Daniel W Hook, Digital Science
Dean B Krafft, Cornell University
Mark P Newton, Boston University
Converis supports comprehensive tracking and support across the entire research lifecycle, including grant, award, and publication management. Built-in integrations with the Web of Science and InCites make it easy to manage your organization’s research activity. Converis is a powerful upstream tool to manage a confluence of data for a VIVO site. In this presentation, we will present improvements made to the Converis application in Converis 6, as well as improvements in the works. We will also discuss enhancements to the Converis to VIVO connector that allows VIVO to be automatically fed by an upstream Converis data source.
Miguel Garcia, Web of Science Group
Lamont Cannon, Duke University
At the Technical University of Denmark, VIVO is used as an internal research analytics platform providing the university’s researchers and research managers with easy-to-use analytical views on research output, impact, collaboration, etc. VIVO, combined with a good data source, provides a platform for high-quality, high-specificity, and high-integrity analytics under the university’s own control. In the latest release, the Web of Science data is supplemented with advanced indicators from InCites, and the triple store is supplemented with a relational database management system and an elaborate caching system to achieve attractive response times. The WoS data is enhanced by local mapping/normalization of research department names, enabling department-level analytics; a sketch of such normalization appears below. The DTU RAP is developed in collaboration between the university, the IT consultants Vox Novitas and Ontocale, and Clarivate Analytics, the producer of WoS and InCites. All software is available as open source for anyone with similar needs and similar access to data.
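The sketch below shows the kind of local department-name normalization the abstract describes: raw affiliation strings are cleaned and looked up in a curated mapping table. The table entries and name variants are invented for illustration; a production version would be driven by curated institutional data.

```python
# Sketch: normalize raw affiliation strings to canonical department names.
import re

# Curated mapping from cleaned affiliation strings to canonical names
# (entries invented for illustration).
DEPARTMENT_MAP = {
    "dtu compute": "Department of Applied Mathematics and Computer Science",
    "dtu physics": "Department of Physics",
}

def normalize_department(raw_affiliation: str):
    """Return the canonical department name, or None if unmapped."""
    # Lowercase, drop punctuation, and collapse whitespace before lookup.
    key = re.sub(r"[^a-z0-9 ]", "", raw_affiliation.lower())
    key = re.sub(r"\s+", " ", key).strip()
    return DEPARTMENT_MAP.get(key)

if __name__ == "__main__":
    print(normalize_department("DTU  Compute,"))  # canonical name
    print(normalize_department("Unknown Dept"))   # None (needs curation)
```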
Christina Steensboe, Technical University of Denmark
Karen Hytteballe Ibanez, Technical University of Denmark
Mogens Sandfaer, Technical University of Denmark
Franck Falcoz
The diversity, energy, and innovation in the VIVO community are inspirational. Community initiatives are strong, as are contributions back to the core application. One of the VIVO project's primary objectives is to channel, where appropriate, community effort into the core application towards agreed-upon outcomes. We are delighted to say that this has been happening! This presentation will not attempt to detail all of the technical efforts over the past year, but will highlight a range of advancements and milestones accomplished since the previous VIVO conference. In the context of reviewing the year’s activity, this session is also intended to solicit feedback from attendees on technical and community initiatives and processes. At the end of 2018, the VIVO Leadership Group collected input from the community and created a "Statement of VIVO's Product Direction for 2019". This statement details four strategic initiatives:
* Modernize the presentation layer of VIVO
* Decouple the architecture
* VIVO Combine
* VIVO Search
Following the publication of this statement, an architecturally-minded team representing distinct VIVO stakeholder constituencies was gathered for the purpose of developing the architectural approaches required to address the direction of the project. The primary goal of the two-day face-to-face meeting was to assess and document a plan for improving the VIVO application architecture towards enabling and realizing the technical efforts defined in the "Statement of VIVO's Product Direction for 2019". This presentation will include a detailed status of completed as well as planned development activities following from the decisions of the architectural meeting.
A second version of the VIVO ontology: we propose to develop a consistent, sufficient, BFO-based ontology for representing scholarship. By consistent, we mean the ontology uses a single approach to representation. Using a single approach, we expect to simplify the ontology: patterns are reused and complexity is reduced. By sufficient, we mean we cover the domain of scholarship at the level necessary to represent and use information about scholarship; the ontology is informed by its applications. By BFO-based, we mean we commit to an approach to representation based on the Basic Formal Ontology. The approach is well understood and well adopted in the ontology community. The domain of the ontology is well defined and stable.
Why a new ontology, and why now? The original work on the VIVO ontology began in 2007 at Cornell. The 2009 NIH grant significantly expanded the ontology. The 2013 CTSA Connect effort significantly re-engineered the ontology, attempting to bring it up to the standards of the time and introducing BFO as an upper-level ontology, but the effort was never completed, and the introduction of the new ontology (VIVO version 1.6) was not accompanied by sufficient tooling, training, and time to manage the community change. Since 2013, the ontology has essentially been frozen. As development seeks to create an interface between the ontology and the presentation software, there is an opportunity to create an ontology that is independent of the software and can be mapped to it.
Benefits of a new ontology. The new ontology will:
* Add to our ability to represent all of scholarship, including the arts, peer review, new research outputs, research impact, and global needs
* Adopt current ontological best practice, including tooling, the OBO Principles, a focus on the domain of scholarship and expertise, and use of only those ontologies that are aligned
* Use simple, consistent representations supporting ontological reasoning
* Be appropriate for use by any project seeking to build and use research graphs
How and when. A new ontology could be developed in three phases by the Ontology Interest Group of the VIVO Project, working in collaboration with other projects, ontologists, developers, and community members; all are welcome to join the effort. A second phase would be necessary for refinement, testing, and tooling for adoption. A third phase is needed for community change management, mapping to presentation data structures, testing, and training. The existing ontology (version 1.x) will continue to be supported indefinitely.
Brian Lowe, Ontocale
Texas A&M University Libraries has been using VIVO in production since 2015. In that time, we have come up with many creative solutions to meet the needs of our users. In early 2019, we began developing a replacement front end for the VIVO interface to formally address campus demands here at Texas A&M University. Initial requirements:
* Align the technology stack as much as possible with the existing VIVO stack, to assist with implementation by others if they choose, especially smaller libraries.
* The majority of the front end is customizable by others.
* Read-only UI: no updating back to the triple store.
* All data is retrieved via a REST API endpoint using Spring Data for Apache Solr (a hypothetical client call is sketched below).
* 100% search engine optimization, i.e., a person or crawler can disable JavaScript and still have the same experience, with server-side and client-side rendering as needed.
A demo of the current work can be found at https://demos.library.tamu.edu/scholars-ui/
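To show what consuming such an endpoint might look like, here is a minimal client sketch. The path /api/people/search, the query parameters, and the "content" key of the response (a common Spring Data pagination convention) are all assumptions, not the actual Scholars UI API.

```python
# Sketch: query a hypothetical Solr-backed people search REST endpoint.
import requests

def search_people(base_url: str, query: str, page: int = 0) -> dict:
    """GET a page of search results from the hypothetical endpoint."""
    response = requests.get(
        f"{base_url}/api/people/search",
        params={"q": query, "page": page, "size": 10},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    results = search_people("http://localhost:8080", "machine learning")
    for doc in results.get("content", []):  # "content" per Spring Data pages
        print(doc.get("name"))
```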
William S Welling, Texas A&M University
Texas A&M has strategic initiatives focused on academic reputation and interdisciplinary research. The Scholars@TAMU team used campus needs associated with these initiatives to drive the evolution of the Texas A&M implementation of VIVO and associated library services. Our focus on meeting campus needs has led to strong faculty engagement with Scholars@TAMU as well as Provost and Dean buy-in for our programs. We discuss the strategic framework that guided Scholars@TAMU development and implementation. This framework may help provide a roadmap for future VIVO development as well as VIVO implementations at diverse universities.
Research information management systems (RIMSs) use different approaches to collecting and curating research identity information: manual curation by information professionals or users; automated data mining and curation scripts (aka bots); or some combination of the above. Assuring the quality of information is one of the critical ethical issues of information systems. Although data curation by professionals usually produces the highest quality results, it is costly and may not be scalable. RIMSs may not have enough resources to control the quality of large-scale information, often batch harvested and aggregated from the Web and various databases of different scope and coverage. RIMSs are in great need of researchers to contribute and curate their research identity data. This presentation reports the findings of a collaborative study about researcher participation in RIMSs. The presenters of this study developed a theoretical framework for researcher participation in RIMSs (Stvilia, Wu, & Lee, 2019). The framework is grounded in empirical research and can guide the design of RIMSs by defining typologies of researcher activities in RIMSs, related motivations, levels of participation, and metadata profiles. RIMS managers and scholarly communications librarians can use the framework to assemble RIMS service and metadata profiles that are tailored to the researcher’s context. Likewise, the framework can guide the construction of communication messages personalized to the researcher’s priorities and her or his motivations for engaging in a specific activity, which will enhance the researcher’s engagement with the RIMS. In addition, this presentation discusses how the framework can be operationalized in practice using the case of Scholars@TAMU, a VIVO-based RIMS at Texas A&M University.
Reference: Stvilia, B., Wu, S., & Lee, D. J. (2019). A framework for researcher participation in research information management systems. The Journal of Academic Librarianship, 45(3), 195-202. doi:10.1016/j.acalib.2019.02.014
Dong Joon Lee, Texas A&M University
Douglas Hahn, Texas A&M University
Jason Savell, Texas A&M University
Kevin A Day, Texas A&M University
Ethel Mejia, Texas A&M University
Bruce E Herbert, Texas A&M University
VIVO at Osnabrück University is used to present scholarly activity, especially third-party funded research projects, to the interested public in order to improve transparency in research funding. In the early stage of development, the VIVO content was limited to research project key facts (e.g. abstract, related/selected publications, external project links, keywords), mainly extracted from operational finance systems that were not intended to be used for presentation purposes. As VIVO's strengths lie in searching, browsing, and visualizing scholarly activity for the wider public, the searchable content should not be limited to the complex academic language used in project abstracts or publications, which a nonacademic person would not search for. Science goes public: for more than 10 years, the event series "Osnabrück Knowledge Forum" has tried to make research more accessible. Every year, the public is called upon to challenge Osnabrück University professors by sending in questions about any kind of science topic, to be answered in four-minute lightning talks in a precise, clear, and entertaining way in front of the interested public. This "evening of knowledge" and its findings are showcased in a variety of media, such as film sequences on YouTube or an image brochure for university guests (PDF). None of these formats connects content to researchers the way VIVO connects researchers, projects, and organizations with each other. In this presentation, we will show how social media content could be used to enrich research projects, researcher profiles, and research topics to make VIVO (even more) vivid!
Sonja Schulze, Osnabrück University
Dominik Feldschnieders, Osnabrück University
Kathrin Schnieders, Osnabrück University
Manuel Schwarz, Osnabrück University
Marco Seegers, Osnabrück University
How best to disseminate one’s research and get credit for one’s work? How best, and how fairly, to assess the quality and impact of a given individual’s, group’s, or institution’s research? These are questions with which many are struggling, from individual researchers to departments to a global world of research institutions. Recently, the Faculty Senate and University Libraries surveyed the faculty of our large, public research university to explore their perspectives on these questions and more. In this presentation we summarize results from 501 respondents (out of 4,451 faculty in total) representing different types of faculty (both within and outside of tenured and tenure-track positions), at different ranks, and from different disciplines. Results shared will indicate trends within the faculty on topics such as: the most commonly used profile systems (top 5: Google Scholar, ORCID ID, LinkedIn, Elements (internal system), and ResearchGate); which profile systems are used most for networking and connecting with colleagues (top 3: LinkedIn, Twitter, and ResearchGate), for tracking research impact metrics (top 3: Google Scholar, ORCID, Elements (internal system)), and for showcasing one’s work to increase visibility (top 3: Google Scholar, ResearchGate, self-published sites); which types of research metrics are relied on (top 3: journal reputation (separate from impact factor), number of publications, and citation counts to individual works); the perceived fairness of evaluation by level of review (e.g., department, college, and university levels) and how these differ; and summaries of qualitative responses to questions such as why faculty rely on certain profile systems or research metrics, and perspectives on how fair research evaluation could be accomplished, within or across disciplines. Results will be summarized at the institutional level, with breakout analysis of results from some disciplinary fields or other subsets. For us, these results from faculty across a range of disciplines will help inform institutional policy and practice discussions about research tracking and evaluation, such as a responsible research assessment policy. Results will also inform our in-progress implementation of an institutional researcher profile system, as well as training offerings on disseminating research and assessing its impact. As movements such as DORA (the Declaration on Research Assessment) and the Leiden Manifesto for Research Metrics demonstrate, faculty, institutions, and funders are re-examining the way metrics are used and methods for demonstrating impact. This presentation on a university-wide survey, including summary data and the survey questions used, offers an example that could be adapted and repeated elsewhere to gauge current practices and faculty perspectives on how to change or move forward with research assessment across a range of disciplines and levels within a large research institution.
Rachel A Miles, Virginia Tech
Amanda Mac Donald, Virginia Tech
Nathaniel D Porter, Virginia Tech
Virginia Pannabecker, Virginia Tech
Jim A Kuypers, Virginia Tech
The OPERA project (Open Research Analytics) is developing a pilot VIVO with data for all 8 Danish universities. The key data come from the Dimensions database, but data from the Danish national bibliometric and open access indicators are also integrated. This pilot VIVO will facilitate national research analytics, including its dimensions of openness – using open concepts, open software, and data that are as open as possible. This joint experimental platform will be used to explore aspects such as publication output and impact, Open Science efforts, funding patterns, subject fields, gender patterns, and both established and potential collaboration patterns within and outside Denmark, as well as with partners in industry. Network analyses and visualizations will be integrated in the VIVO platform to complement more traditional analytics and visual elements and to support new ways of perceiving numbers, patterns, and potentials. The Dimensions VIVO is a collaboration between the Danish universities, the IT consultants Vox Novitas and Ontocale, and Digital Science, the producer of Dimensions.
Karen H Ibanez, Technical University of Denmark
Nikoline D Lauridsen, Technical University of Denmark
Marianne Gauffriau, Technical University of Denmark
Adrian Price, Technical University of Denmark
Anne L Høj, Technical University of Denmark
Kirsten K Kruuse, Technical University of Denmark
The Smithsonian Institution began planning a VIVO implementation in 2015 and, in August 2018, launched Smithsonian Profiles to the public. The Smithsonian is a research institution that comprises a network of 19 museums, 8 research centers, and the National Zoo. In addition to public programs and exhibits, staff conduct research in a wide range of domains, with over 700 scholars and their sponsored fellows authoring over 2,500 publications every year. For over 10 years, the Smithsonian Libraries has been collecting and managing these publications using a home-grown system called Smithsonian Research Online, which now feeds into Smithsonian Profiles. This presentation will discuss Smithsonian Profiles and its role at the Smithsonian Institution, and will touch on the data systems it interacts with both internally and externally. There will also be a focus on the challenges presented by implementing VIVO in a non-university setting, including policy and accessibility considerations, and issues with defining and identifying eligible researchers from a pool of hundreds.
Kristina Heinricy, Smithsonian Libraries
Alvin Hutchinson, Smithsonian Libraries
Suzanne Pilsk, Smithsonian Libraries
By default, VIVO offers some data visualizations, but these can be limited or fall short of user needs for analytics and reporting. Institutions using VIVO often have to rely on manual information gathering for basic analytics and reporting, despite the required information being present in their VIVO instance. The inclusion of an Elasticsearch driver for indexing in VIVO enables institutions to use the data from VIVO directly in custom applications. The open-source tool Kibana, a visualization frontend for Elasticsearch, is a platform for building curated visualizations and dashboards on the data in Elasticsearch indexes. This presentation highlights how custom fields from VIVO can be indexed in Elasticsearch and how interactive dashboards can be created in Kibana for data analytics and reporting purposes. Furthermore, the possibility of including curated dashboards from Kibana in VIVO will also be discussed.
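As a concrete illustration of the indexing step described above, here is a minimal sketch in Python, assuming the elasticsearch-py 8.x client and a local Elasticsearch instance; the index name and field layout are invented for illustration, not VIVO's actual schema.

```python
# A sketch of feeding VIVO profile data into Elasticsearch for Kibana dashboards.
# Assumes the elasticsearch-py 8.x client and a local Elasticsearch instance;
# the index name and field layout are invented for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One flattened document per researcher, derived from the VIVO graph.
profile = {
    "uri": "http://vivo.example.edu/individual/n1234",  # hypothetical VIVO URI
    "name": "Jane Doe",
    "department": "Computer Science",
    "publication_count": 42,
    "keywords": ["semantic web", "research analytics"],
}
es.index(index="vivo-profiles", id=profile["uri"], document=profile)

# The kind of terms aggregation a Kibana dashboard panel would issue:
resp = es.search(
    index="vivo-profiles",
    size=0,
    aggs={"per_department": {"terms": {"field": "department.keyword"}}},
)
for bucket in resp["aggregations"]["per_department"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```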
The rich semantic data captured in a VIVO instance (or any other application built on a Vitro core) presents a fantastic opportunity to surface knowledge about connected resources. However, the query interfaces within VIVO (or Vitro) are quite limited - other than a full text search, information is presented only in ways that have been baked into the UI. A SPARQL endpoint can be enabled for richer semantic queries, but this comes at a cost: the user needs to understand SPARQL or have access to a library of pre-written queries; queries cannot be combined (e.g. a SELECT over a CONSTRUCTed source); and badly written SPARQL queries risk impacting system performance. With the Vitro Query Tool, we build upon the work of Cornell, using their DataDistributor as an API for storing and running a library of queries. To make this more accessible, we initially created a user interface allowing authorised users to view and create queries using the DataDistributor building blocks, before extending this with the ability to schedule the execution of queries and distribute the results (e.g. via email). In utilising the existing DataDistributor, we not only provide a means of creating a library of queries and reports for users to execute or receive, but also allow the data to be exposed via API endpoints that can be ingested by other applications or used by visualisations within the application.
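To make the "SELECT over a CONSTRUCTed source" limitation concrete, here is a minimal sketch using rdflib against invented VIVO-style triples; the Vitro Query Tool itself achieves this through DataDistributor configurations, and rdflib is used here only to show the idea.

```python
# A sketch of a SELECT over a CONSTRUCTed source using rdflib with local,
# invented VIVO-style data.
from rdflib import Graph

data = """
@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<http://example.org/n1> a vivo:FacultyMember ; rdfs:label "Jane Doe" .
<http://example.org/n2> a vivo:FacultyMember ; rdfs:label "John Smith" .
"""
g = Graph()
g.parse(data=data, format="turtle")

# Step 1: CONSTRUCT a derived graph (faculty relabelled as foaf:Person).
construct = """
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
CONSTRUCT { ?p a foaf:Person ; foaf:name ?label }
WHERE { ?p a vivo:FacultyMember ; rdfs:label ?label }
"""
derived = Graph()
for triple in g.query(construct):
    derived.add(triple)

# Step 2: SELECT over the constructed graph -- the combination a bare
# SPARQL endpoint does not offer in a single request.
select = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE { ?p a foaf:Person ; foaf:name ?name } ORDER BY ?name
"""
for (name,) in derived.query(select):
    print(name)
```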
Qazi Asim Ijaz Ahmad, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
Graham Triggs, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
Christian Hauschke, Technische Informationsbibliothek (TIB) – German National Library of Science and Technology
A couple of years ago at Brown University we wrote a new frontend for the public interface of our VIVO installation. This new frontend provided a modern look and feel for searching and displaying information on Researchers at Brown[1] and was well received by researchers and the public in general. This year we are focusing on adding an editing interface to this application to allow researchers to easily add or update the information on their VIVO profile, including publications, research statements, collaborators, appointments, and so on. In this presentation we'll show the general architecture of our VIVO installation, including the different services and applications that interact with it, discuss the challenges encountered during the development of the new editing features, and highlight some of the gains that our approach has given us. The new editing interface is built on top of the Ruby on Rails application that we built for the public interface a couple of years ago. This application interacts with a typical REST API service that communicates with VIVO to submit the changes to the triplestore. One of the lessons that we've learned during our years using VIVO is that we can preserve Linked Data in the backend (as VIVO natively does) while at the same time providing traditional REST API endpoints that allow other applications, written in a variety of programming languages, to consume and in this case update the information. This approach has the advantage that we can run sophisticated SPARQL queries against the triplestore (for example, to generate network graphs of collaborators to power visualizations) while at the same time isolating client applications written in Python and Ruby from the triplestore and RDF complexities, and instead exposing the data to those applications in ways that they can easily consume, for example via a REST API passing JSON back and forth. [1] https://vivo.brown.edu/
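Here is a minimal sketch of the pattern described above: a web endpoint that runs a SPARQL query and hands clients plain JSON. Flask and SPARQLWrapper stand in for Brown's actual Rails-based stack, and the endpoint URL, URIs, and query shape are illustrative assumptions (the query loosely follows the VIVO 1.x relatedBy/relates authorship pattern).

```python
# A sketch of a REST endpoint that hides SPARQL and RDF behind plain JSON.
# Flask and SPARQLWrapper stand in for the actual stack; URL and URIs are
# hypothetical.
from flask import Flask, jsonify
from SPARQLWrapper import SPARQLWrapper, JSON

app = Flask(__name__)
SPARQL_ENDPOINT = "http://localhost:8080/vivo/api/sparqlQuery"  # hypothetical

@app.route("/api/faculty/<faculty_id>/publications")
def publications(faculty_id):
    # Real code should validate faculty_id before interpolating it into SPARQL.
    sparql = SPARQLWrapper(SPARQL_ENDPOINT)
    sparql.setQuery(f"""
        PREFIX vivo: <http://vivoweb.org/ontology/core#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?title WHERE {{
            <http://vivo.example.edu/individual/{faculty_id}>
                vivo:relatedBy ?authorship .
            ?authorship vivo:relates ?pub .
            ?pub rdfs:label ?title .
        }}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    titles = [b["title"]["value"] for b in results["results"]["bindings"]]
    # Clients get plain JSON; the triplestore stays behind this endpoint.
    return jsonify({"faculty": faculty_id, "publications": titles})
```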
Hector Correa, Brown University
Steven Mc Cauley, Brown University
The diversity, energy, and innovation in the VIVO community are inspirational. Community initiatives are strong, as are contributions back to the core application. One of the VIVO project's primary objectives is to channel, where appropriate, community effort into the core application towards agreed-upon outcomes. We are delighted to say that this has been happening! This presentation will not attempt to detail all of the technical efforts over the past year, but will highlight a range of advancements and milestones accomplished since the previous VIVO conference. In the context of reviewing the year's activity, this session is also intended to solicit feedback from attendees on technical and community initiatives and processes. At the end of 2018, the VIVO Leadership Group collected input from the community and created a "Statement of VIVO's Product Direction for 2019". This statement details four strategic initiatives:
* Modernize the presentation layer of VIVO
* Decouple the architecture
* VIVO Combine
* VIVO Search
Following the publication of this statement, an architecturally-minded team representing distinct VIVO stakeholder constituencies was gathered for the purpose of developing architectural approaches required to address the direction of the project. The primary goal of the two-day face-to-face meeting was to assess and document a plan for improving the VIVO application architecture towards enabling and realizing the technical efforts defined in the "Statement of VIVO's Product Direction for 2019". This presentation will include a detailed status of the completed as well as planned development activities following from the decisions of the architectural meeting.
Andrew Woods, DuraSpace
The VIVO community has long promoted the value of modeling the domain of scholarship in a form that is independent of a particular software implementation. The VIVO ontology is the product of a collaborative effort to define a shared understanding of the semantics behind the complex graphs of scholarly activity that different research networking systems might choose to process internally in different ways. As research networking software projects such as VIVO begin to develop the next generation of decoupled, dynamic and responsive user interfaces, there is an opportunity to consider a similar kind of collaborative modeling approach to define robust UIs whose behavior can be reasoned about and tested separately from a concrete software implementation. Statecharts were first described by David Harel in 1987 as an extension of finite state machines and state diagrams [1], but have recently gained traction in the web UI development community [2]. With UI statecharts, the representation of the features, behavior, and possible effects of different UI interactions is decoupled from the code that actually implements the behavior. This can lead to a number of potential benefits, such as validating that software properly implements community requirements, opening up aspects of development to contributors who may not be experts in particular web UI frameworks, and automatically generating more robust, testable and bug-free code. In this presentation we will examine the principles of UI statecharts and consider their application to community development of research networking software. REFERENCES: [1] Harel, David (1987). Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, 8(3), 231-274. DOI: 10.1016/0167-6423(87)90035-9. [2] "Welcome to the world of statecharts." https://statecharts.github.io/. Retrieved 29 April 2019.
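To make the decoupling idea concrete, here is a minimal sketch of a statechart expressed as plain data with a pure transition function; the states and events are invented for illustration and imply nothing about any particular VIVO UI.

```python
# A sketch of the statechart idea: the UI's behavior is declared as data,
# separate from any rendering code, so it can be tested and reasoned about
# on its own. States and events here are invented for illustration.
STATECHART = {
    "initial": "idle",
    "states": {
        "idle":    {"SEARCH": "loading"},
        "loading": {"RESOLVE": "results", "REJECT": "error", "CANCEL": "idle"},
        "results": {"SEARCH": "loading"},
        "error":   {"SEARCH": "loading", "RETRY": "loading"},
    },
}

def transition(state: str, event: str) -> str:
    """Pure transition function: same inputs always give the same next state."""
    return STATECHART["states"][state].get(event, state)

# The chart can be exercised without any UI framework at all:
state = STATECHART["initial"]
for event in ["SEARCH", "REJECT", "RETRY", "RESOLVE"]:
    state = transition(state, event)
    print(event, "->", state)  # ends in "results"
```

Because the chart is just data, it can be checked against community requirements or used to generate tests before any rendering code exists.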
Andrei Tudor, Ontocale
Research information management systems (RIMSs) use different approaches to collecting and curating research identity information: manual curation by information professionals or users; automated data mining and curation scripts (aka bots); or some combination of the above. Assuring the quality of information is one of the critical ethical issues of information systems. Although data curation by professionals usually produces the highest quality results, it is costly and may not be scalable. RIMSs may not have enough resources to control the quality of large-scale information, often batch harvested and aggregated from the Web and various databases of differing scope and coverage. RIMSs are in great need of researchers to contribute and curate their research identity data. This presentation reports the findings of a collaborative study about researcher participation in RIMSs. The presenters developed a theoretical framework for researcher participation in RIMSs (Stvilia, Wu, & Lee, 2019). The framework is grounded in empirical research and can guide the design of RIMSs by defining typologies of researcher activities in RIMSs, related motivations, levels of participation, and metadata profiles. RIMS managers and scholarly communications librarians can use the framework to assemble RIMS service and metadata profiles that are tailored to the researcher's context. Likewise, the framework can guide the construction of communication messages personalized to the researcher's priorities and her or his motivations for engaging in a specific activity, which will enhance the researcher's engagement with the RIMS. In addition, this presentation discusses how the framework can be operationalized in practice using the case of Scholars@TAMU, a VIVO-based RIMS at Texas A&M University. Reference: Stvilia, B., Wu, S., & Lee, D. J. (2019). A framework for researcher participation in research information management systems. The Journal of Academic Librarianship, 45(3), 195-202. doi:10.1016/j.acalib.2019.02.014
Besiki Stvilia, Florida State University
Shuheng Wu, City University of New York
We see the first VIVO conference to be held in Europe as a very positive sign of the worldwide attention the VIVO platform has attained. In this regard, we strongly believe complete VIVO internationalization (i18n) is critical to reaching major adoption at an international level. We are certain it is a major driver of growth for VIVO, as many institutions need support for languages other than English. The VIVO i18n task force has established a roadmap to achieve this goal. However, a critical mass of stakeholders is essential for it to become a development priority. This presentation is meant as an exchange to gather the community and create synergistic involvement in the internationalization of the VIVO platform.
Rachid Belkouch, Université du Québec à Montréal
Pierre Roberge, Université du Québec à Montréal
The EU-funded COURAGE project collected the methods and memories of cultural opposition in the socialist era (ca. 1950-1990) and built a registry for preserving data about culture as a form of opposition. The outcomes of the project include learning material, an online exhibition, a public browsable interface, and an open SPARQL endpoint. The heart of the software environment is a Vitro instance, which provides easily re-usable and connected data for the specific extensions implementing the virtual exhibition, the learning platform, and other satellite services. Data was input and curated by historians, social scientists, and other researchers from the humanities. The Vitro code base had to be extended and sometimes overridden to comply with the requirements of the project. For example, editing rights had to be granted based on the context of the edited data, and a workflow for quality management had to be added to the system. The analysis of the collected data has been aided by various statistical pages built on SPARQL queries. Although the project has ended, we keep coming up with ideas to connect our registry with more and more services in the field.
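As an illustration of the kind of statistical page the abstract mentions, here is a minimal sketch of a SPARQL aggregation run against an open endpoint via SPARQLWrapper; the endpoint URL and the inCollection property are placeholders, not the actual COURAGE vocabulary.

```python
# A sketch of a statistics-page query: a SPARQL aggregation counting registry
# items per collection. Endpoint URL and property are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/courage/sparql")  # hypothetical
sparql.setQuery("""
    SELECT ?collection (COUNT(?item) AS ?items)
    WHERE { ?item <http://example.org/courage#inCollection> ?collection . }
    GROUP BY ?collection
    ORDER BY DESC(?items)
""")
sparql.setReturnFormat(JSON)
for b in sparql.query().convert()["results"]["bindings"]:
    print(b["collection"]["value"], b["items"]["value"])
```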
András Micsik, Institute for Computer Science and Control, Hungarian Academy of Sciences
Tamás Felker, Institute for Computer Science and Control, Hungarian Academy of Sciences
A second version of the VIVO ontology. We propose to develop a consistent, sufficient, BFO-based ontology for representing scholarship. By consistent, we mean the ontology uses a single approach to representation; using a single approach, we expect to simplify the ontology -- patterns are reused and complexity reduced. By sufficient, we mean we cover the domain of scholarship at the level necessary to represent and use information about scholarship; the ontology is informed by its applications. By BFO-based, we mean we commit to an approach to representation based on the Basic Formal Ontology, an approach that is well understood and well adopted in the ontology community. The domain of the ontology is well-defined and stable.
Why a new ontology, and why now? The original work on the VIVO ontology began in 2007 at Cornell. The 2009 NIH grant significantly expanded the ontology. The 2013 CTSA Connect effort significantly re-engineered the ontology, attempting to bring it up to the standards of the time and introducing BFO as an upper-level ontology, but the effort was never completed, and the introduction of the new ontology (VIVO version 1.6) was not accompanied by sufficient tooling, training, and time to manage the community change. Since 2013, the ontology has essentially been frozen. As development seeks to create an interface between the ontology and the presentation software, there is an opportunity to create an ontology that is independent of the software and can be mapped to it.
Benefits of a new ontology. The new ontology will:
* Add to our ability to represent all of scholarship, including the arts, peer review, new research outputs, research impact, and global needs
* Adopt current ontological best practice, including tooling, OBO Principles, a focus on the domain of scholarship and expertise, and use of only those ontologies that are aligned
* Use simple, consistent representations supporting ontological reasoning
* Be appropriate for use by any project seeking to build and use research graphs
How and when. A new ontology could be developed in three phases by the Ontology Interest Group of the VIVO Project, working in collaboration with other projects, ontologists, developers, and community members. All are welcome to join the effort. A second phase would be necessary for refinement, testing, and tooling for adoption. A third phase is needed for community change management, mapping to presentation data structures, testing, and training. The existing ontology (version 1.x) will continue to be supported indefinitely.
Marijane White, Oregon Health and Science University
Muhammad Javed, Mastercard
Naomi Braun, University of Florida