Excludes the first 17 minutes.
Marc-Antoine Parent: Glossary entries need a type or relation.
Gyuri Lajos: Knowledge enrichment is a separate concern, a separate tool. On the other hand, while writing, autocomplete or a search for terms could be offered.
Marc-Antoine: Types could have attributes, but then everything becomes more complex; even topic maps with different roles. As for other term suggestions: should we only look at internal data, or also at other, external data?
Gyuri: Suggestions could come first from WikiData and then Federation Servers.
Marc-Antoine: That would be overwhelming. When it comes to server vs. client, I don’t expect the client to have all the data; instead it should query for auto-completion, so that should be server-side logic. Furthermore: what about merging suggestions? What if equal terms come from different sources?
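A minimal sketch of what such server-side merging could look like, assuming each source returns candidates as dicts with an identifier and a label (the function name and data shapes are hypothetical, not from the discussion):

```python
def merge_suggestions(*sources):
    """Merge autocomplete candidates from several sources.

    Each source is a list of {'id', 'label'} dicts. Earlier sources
    take precedence; later duplicates (same id) are dropped, which
    answers the "equal terms from different sources" case.
    """
    seen = set()
    merged = []
    for source in sources:
        for item in source:
            if item["id"] not in seen:
                seen.add(item["id"])
                merged.append(item)
    return merged

# Example: a local glossary and a WikiData result that overlap on Q42.
local = [{"id": "Q42", "label": "Douglas Adams"}]
wikidata = [{"id": "Q42", "label": "Douglas Adams"},
            {"id": "Q5", "label": "human"}]
combined = merge_suggestions(local, wikidata)
```

Source order here encodes the preference Gyuri mentions next: whichever source is listed first wins on conflicts.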
Gyuri: That needs to be decided by preference; it needs to be federated.
Frode Hegland: Let’s just build that system, practically.
Marc-Antoine: For the first bootstrapped version, autocomplete is not strictly necessary. The server should already do the merging. The server needs to make the WikiData API call, which makes it a 2-hop call for the client.
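For reference, the WikiData call the server would proxy could use the public `wbsearchentities` endpoint of the MediaWiki API. This sketch only builds the request URL the Federation Server would issue on the client's behalf (hop 2 of the 2-hop call); it makes no network request:

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def autocomplete_url(prefix, language="en", limit=10):
    """Build a wbsearchentities request URL for a typed prefix."""
    params = {
        "action": "wbsearchentities",
        "search": prefix,
        "language": language,
        "limit": limit,
        "format": "json",
    }
    return WIKIDATA_API + "?" + urlencode(params)

url = autocomplete_url("hyper")
```

The client would only ever talk to the Federation Server, which fetches this URL, merges the results with its own data, and returns them.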
Gyuri: I don’t want the Federation Server to be overwhelmed. If the system takes off, every query by everybody would go through the server.
Gyuri: I want to provide my solution to the ecosystem. If a word is selected, its glossary data is either already in local storage, or a query should be sent off; that would constitute an automatic concordance.
Marc-Antoine: If there are Federation Server(s) and an identifier is received, the servers would be asked whether they have data for it; if not, it would be imported and thereby made available to the Federation Servers. Ontology should be another library.
Gyuri: I have reservations; it’s hard to do for WikiData.
Marc-Antoine: Wrote down the fields that are needed.
Frode: Let’s try to get glossary into document. It would be interesting to learn from the knowledge guys which interactions glossary could offer at the point of authoring.
Gyuri: I suggest not bringing ontology into writing glossary entries at this early stage; that’s too confusing and a separate step for later. They are two different activities.
Marc-Antoine: The relation “related” is almost just noise, not usable, so there’s a need for better typed relations, which in turn depend on concepts. Ontologies, even multiple ones, should provide their relation types. If the target term comes first, that alone would usefully restrict the available relations.
Gyuri: Biomedical semantics are very hard to understand, it’s a heavily cultivated domain.
Marc-Antoine: It’s the only place where RDF survived.
Gyuri: WikiData allows users to create new entries.
Frode: Specific suggestions are needed for a document-centric dialog to add a glossary entry.
Marc-Antoine: To add a related term, should it just be added plainly, or with a role?
Frode: The glossary relation, is that only for other terms in the same glossary?
Marc-Antoine: It’s interesting that Gyuri immediately jumped on external glossaries for autocomplete.
Frode: Let’s draw an empty box, so the knowledge representation guys can add in there what they want.
Gyuri: And there can be cross-domain …
Marc-Antoine: We just need to standardize on the interpretation of the messages.
Gyuri: The interpreter code can come along with the data.
Marc-Antoine: Isn’t CORS a problem?
Gyuri: No, an …
Frode: Let’s just write it down, hack it together, quick and dirty.
Gyuri: We don’t have a standard for glossary yet, but WebAnnotation can do that.
Marc-Antoine: That’s more for the indexation part than for annotation, but it could work. WebAnnotation allows tagging existing entries, but that isn’t adding to the definition of the term. What’s the format for publishing the entries? It should be very minimalistic; I can provide a JSON-LD spec.
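A minimalistic JSON-LD glossary entry might look like the following sketch, built with Python’s `json` module. The choice of the SKOS vocabulary and the example IRIs are assumptions for illustration, not the actual spec Marc-Antoine offered to draft:

```python
import json

# Hypothetical minimal glossary entry, modeled as a SKOS concept.
entry = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "@id": "https://example.org/glossary/hypertext",  # placeholder IRI
    "@type": "skos:Concept",
    "skos:prefLabel": "hypertext",
    "skos:definition": "Text with references (hyperlinks) to other text.",
    "skos:related": [{"@id": "https://example.org/glossary/hypermedia"}],
}

serialized = json.dumps(entry, indent=2)
```

Keeping the payload this small leaves room for the knowledge-representation “empty box” to add typed relations later without breaking consumers.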
Gyuri: We can replace WebAnnotation with whatever we’ll use.
Marc-Antoine: I can make a draft by today, but using WebAnnotation for it would be dangerous because the server would bear the burden of doing the replacement. It would be the archetype of technical debt.
Gyuri: Still, Federation Servers should be able to do it.
Marc-Antoine: No, servers shouldn’t handle stuff that’s disguised as WebAnnotation.
Gyuri and Marc-Antoine agree that still, WebAnnotation should be used to annotate existing entries.
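Annotating an existing entry with the W3C Web Annotation Data Model, as agreed here, could look like this sketch (the target URL and tag value are placeholders):

```python
import json

# A Web Annotation that tags an existing glossary entry without
# modifying the entry's own definition.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "related: hypermedia",
        "purpose": "tagging",
    },
    "target": "https://example.org/glossary/hypertext",  # placeholder
}

payload = json.dumps(annotation, indent=2)
```

This keeps the division of labor discussed above: the entry itself stays minimal JSON-LD, and commentary on it travels separately as annotations.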
Frode: To clarify, is it for posting new entries to WordPress, to define the model?
Marc-Antoine: For a start, it’ll be for WordPress but will soon change to a Federation Server; we’ll see how far we can get, and then do the next step.
Frode: With the end of the demo in mind, we would want to say: by the way, you can use it immediately, no need to set up your own federated server.
Marc-Antoine: On IdeaLoom, there will be an interface to configure the data source for a given conversation, to say, basically, that this RSS feed is for the glossary, and from the URL I generate a data feed. Then Gyuri’s machinery can come in, and I will provide API endpoints to do things on the existing conversation.
Frode: Where will it live?
Marc-Antoine: On IdeaLoom as a Federation Server; TopicQuest might have their own. So we’ll standardize on an API? There’s still the requirement of an agreement with a server.
Frode: Doug’s demo was very different from what we have today. It could either be as native as possible, using an existing WordPress installation and off you go, plus, for more advanced knowledge graph work, go talk to the knowledge people, or use Google Drive.
Marc-Antoine: Will try to come up with a generic Linked Data Platform solution, Apache Marmotta eventually.
Frode: How is that going to work for a person who’s a blogger?
Marc-Antoine: That’s a constraint on the architecture. The notion that it’ll work without a server is wrong: somebody has to get a server and pay for it; there’s no data that just lives “in the network”. What we can try is IPFS, but distributed storage is a different story again. I’m interested in solutions on the server because there I can do computation. With the so-called “server-less” architectures, one has to have an account to do lambda computation there. There is no free lunch.
Frode: There needs to be a conversation about caching when it comes to autocomplete. In the case of a student as a user, they might use their own server, on which most material will be static, frozen. Documents will stay there online. If they get connected in a knowledge graph, then it is another story.
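That caching conversation might start from something as simple as a time-based cache in front of the upstream source, so a mostly static server doesn’t re-query for every keystroke. A sketch (the class and the TTL value are illustrative assumptions):

```python
import time

class TTLCache:
    """Tiny expiring cache for autocomplete results."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, stored_at)

    def get(self, key):
        hit = self._data.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            del self._data[key]  # expired; force a fresh fetch
            return None
        return value

    def put(self, key, value):
        self._data[key] = (value, time.monotonic())

cache = TTLCache(ttl_seconds=300)
cache.put("hyper", ["hypertext", "hypermedia"])
```

For genuinely frozen documents the TTL could be very long or infinite; the knowledge-graph case, where entries change, is where expiry starts to matter.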
Gyuri: The data could be in the local storage of the client, or on the Open Science Framework. One can get storage there, and they guarantee that the data will be there for a number of years. The content there will be static, but locally it can become alive. It could be Dropbox, as the Open Science Framework has connectors to different storage services. That would be part of OHS Core.
Frode: To wrap up, how to give people a few days to send in a list of things they want to see in the empty box? Also, how to make it work for a non-technical student?
Marc-Antoine: I wasn’t aware of this focus.
Whuffie (?): No, not everybody needs to be able to set it up themselves; a friend could do it for them.
Frode: It’s not so much about the technical difficulty of setting it up, but the accessibility/availability to get it started.
Marc-Antoine: That was a non-goal for me previously.
Frode: You’re a cloud guy, but your work should get a wide audience, so how to give people an easy click-click way to start it up?
Marc-Antoine: I see where you’re coming from, and by connecting things between different servers it could become relatively “serverless”. There’s the Beaker browser; we might look into it, as it’s like IPFS. One gets a URL for something that was created locally, and then it becomes accessible to other Beaker browsers, so there could be a Beaker proxy or something.