We are pleased to announce the free workshops for VIVO Conference 2018. Workshops will be approximately three hours each and cover topics such as VIVO development, data integration, working with stakeholders, and complementary projects and technologies. While some workshops may be more technical in nature, they are open to all conference attendees, and hands-on participation is optional!
Morning Workshops: 8:30 am – 11:30 am
Learning VIVO Development
Don Elsborg, Jim Blake, Huda Khan and M. Benjamin Gross
Keywords: development, contribution, process, VIVO code, technology, Java, Vitro
Much of the VIVO code base is understood only by a few original core developers. For VIVO to grow and thrive, we need new developers with new energy and new ideas. But where will they come from?
The goal of this workshop is to provide developers with a practical, hands-on introduction to selected VIVO development approaches and debugging methods. We also hope to highlight the process for contributing back to the VIVO core code. The workshop is intended to encourage further developer engagement in the VIVO community, to begin training new developers, and to build a broader base of VIVO committers.
Attendees will be provided with a virtual machine containing VIVO and the Eclipse IDE prior to the workshop, and will be led through the exercises by VIVO developers.
Hands On with the Dimensions API
Simon John Porter
Keywords: Dimensions API, Python, data, transformation, data analysis
This workshop will offer a hands-on opportunity to work with the newly released Dimensions API. Dimensions is a unique linked research knowledge system that standardizes and connects metadata for publications, clinical trials, patents, and funded grants across hundreds of data sources globally.
Working with Jupyter notebooks, workshop participants will be given the opportunity to work through a number of use cases including:
- How to produce VIVO RDF from the Dimensions API (a brief sketch appears after this list)
- How to create collaboration diagrams based on Dimensions API searches
- Approaches to creating your own metrics with multiple research sources
- An approach to research demographic analysis
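To give a flavor of the first use case, here is a minimal sketch of turning a single publication record into VIVO RDF with the rdflib Python library. The record shape, field names, and the vivo.example.edu URI base are illustrative assumptions, not the workshop's actual materials:

```python
# Minimal sketch: map one publication record (shaped loosely like a
# Dimensions API result; field names here are illustrative) to VIVO RDF.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BIBO = Namespace("http://purl.org/ontology/bibo/")

# A hypothetical record, as one might receive from a Dimensions API search.
record = {
    "id": "pub.1234567890",
    "title": "Linked Data for Research Profiles",
    "doi": "10.1000/example.doi",
}

g = Graph()
g.bind("bibo", BIBO)

# Mint a local URI for the publication; the base URL is an assumption.
pub = URIRef("https://vivo.example.edu/individual/" + record["id"])
g.add((pub, RDF.type, BIBO.AcademicArticle))
g.add((pub, RDFS.label, Literal(record["title"])))
g.add((pub, BIBO.doi, Literal(record["doi"])))

print(g.serialize(format="turtle"))
```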
The workshop will be limited to 20 participants.
This workshop is intended for:
- Researchers intending to use the Dimensions API as part of their research
- Institutions using or considering a Dimensions subscription
Afternoon Workshops: 1:00 pm – 4:00 pm
Exploring Research Information Citizenship in an Institutional Context
Simon John Porter, Brian Turner
Keywords: Research Information Citizenship, Data Integration, Implementing Research Profiling Systems
Creating an institutional research profiling system requires herculean acts of data corralling from multiple internal institutional systems. Within an institution, the knowledge you need resides in HR, finance, grants, publication management, and student systems, to name a few. Getting access to this information requires multiple negotiations with institutional stakeholders, and extreme patience on your part in explaining why information collected for one purpose can also be used for another.
When you do get access to the information you need, in many cases it is not in the format you would ideally like: project titles can arrive in ALL CAPS, HR position titles can be truncated, and grant dollar amounts may not reflect the external amount of the award.
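As a small, concrete illustration of the kind of cleanup this involves, here is a sketch of one normalization step in Python; the function and its rules are our own invention, not part of the workshop materials:

```python
# Sketch of one common cleanup step: normalizing project titles that
# arrive in ALL CAPS from a source system. The rules are illustrative only.
def clean_title(raw: str) -> str:
    """Title-case a project title if it appears to be all upper case."""
    stripped = raw.strip()
    if not stripped.isupper():
        return stripped
    # Keep short connecting words lower case, except at the start.
    small = {"a", "an", "and", "for", "in", "of", "on", "the", "to"}
    words = [w.capitalize() for w in stripped.lower().split()]
    words = [w.lower() if i > 0 and w.lower() in small else w
             for i, w in enumerate(words)]
    return " ".join(words)

print(clean_title("ANALYSIS OF GENE EXPRESSION IN THE MOUSE BRAIN"))
# -> Analysis of Gene Expression in the Mouse Brain
```

Every source system tends to need a handful of rules like this, and the rules themselves become part of the conversation with the data stewards.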
How do we turn this situation around? How do we increase the awareness of the data stewards of HR, Finance, and Student Systems that they are research information stewards too? Within an institutional context, how can we define what it is to be a research information citizen?
Building on a similar exercise held at Pidapalooza earlier this year, participants in this workshop will aim to identify the shared understanding and norms that should govern how research information is handled and communicated within an institution. In doing so, we hope to take a first step towards baking research profiling into the reasons institutions collect information in the first place.
VIVO Product Evolution: Exploring New Technologies
Alex Viggio, Paul Albert, Richard Outten
Keywords: technology, VIVO code, ontology, data, user experience
The VIVO code base has grown over 15 years to more than half a million lines of Enterprise Java code. Experience has shown a steep learning curve for new developers, especially front-end developers, and challenges in integrating newer web development technologies and approaches. Is there an opportunity to experiment with new technologies and techniques that are easier for new developers to dive into? The VIVO Product Evolution group is leading an effort to turn this opportunity into a reality. The vision is to prototype an agile web/mobile application that showcases the researchers, units, and scholarly works of an institution with its own branding.
This workshop will be a working session for the VIVO Product Evolution group, but also an occasion to engage with other interested VIVO community members. The workshop will include updates and discussion involving the group leadership and current subgroups, lightning talks exploring new technologies and methods, and breakout sessions for the subgroups.
The current subgroups:
- Representing VIVO Data
- Functional Requirements
- Implementing the Presentation Layer
Technologies, standards, and approaches under evaluation (a brief sketch follows the list):
- JSON and JSON-LD
- Solr and Elasticsearch
- GraphQL
- Schema.org as well as other scholarly information and data models (CERIF, CASRAI, COAR, CD2H)
- JavaScript frameworks (React, Angular, Vue, etc.)
- Modern Agile principles
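To give a concrete flavor of the JSON-LD and Schema.org items above, here is a minimal sketch of a researcher profile expressed as Schema.org JSON-LD. The names, values, and URI are invented for illustration; the actual representation is precisely what the subgroups are working out:

```python
import json

# Hypothetical researcher profile expressed as Schema.org JSON-LD.
# All values, and the vivo.example.edu URI, are invented for illustration.
profile = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://vivo.example.edu/individual/n1234",
    "name": "Ada Researcher",
    "jobTitle": "Associate Professor",
    "affiliation": {
        "@type": "EducationalOrganization",
        "name": "Example University",
    },
}

print(json.dumps(profile, indent=2))
```

A representation along these lines is one way search engines and other consumers could discover researcher profiles, which is part of why Schema.org appears on the list.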