Re: Frode’s notes on Story for Knowledge Graph Interaction

Frode made a video titled “Notes on Story for Knowledge Graph Interaction”. I continue to see things differently. This is my reply.

0:55 to me describes a generic capability to link text portions, with the drop-down/context/popup menu as a way to specify some (meta)data for the link. Whether drag & drop is a very helpful mechanism here, I don’t know, because the source text could be huge and the target text portion not anywhere near it (unless you type the positions/statements onto an empty, plain canvas as a lot of atomic pieces that are not narrated into a consecutive story but are more like argument mapping – which also wouldn’t be hypertext, because hypertext allows each and every possible constellation and isn’t limited to a single knowledge/concept model). So if the source is selected and the non-selected text gets faded out, suggestions/picks for the target seem to be necessary because the complete source document is faded/hidden, and then I wouldn’t know what suggestions to make, because that’s some knowledge feature and doesn’t make a lot of sense to me. Auto-suggest to find similar or different positions, or randomly pick what has similar words in it, or what? Sure, those things can be supported in specialized applications, but I want interactions that give users the power to decide for themselves what to do, across the entire text corpora available to them. Where do the suggested target positions come from? Are they generated by permutation? Do they come from every statement ever made that was processed by some federation hub, or provided by a knowledge server, or constructed from knowledge nodes (data pieces that know what they are, so the client can determine whether or not to use them)?
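
To make this concrete, here is a minimal sketch, assuming nothing about how Author/Liquid actually works (all names below are invented): a link between two text portions that carries its own (meta)data, plus exactly the crude “similar words” kind of target suggestion questioned above.

```python
# Hypothetical sketch only, not Frode's design: a link between two text
# portions carries its own (meta)data, and target candidates are suggested
# by naive word overlap with the selected source passage.
from dataclasses import dataclass, field

@dataclass
class Link:
    source: str                              # selected source passage
    target: str                              # chosen target passage
    relation: str                            # picked from the popup menu, e.g. "similar"
    metadata: dict = field(default_factory=dict)

def suggest_targets(source: str, corpus: list[str], limit: int = 5) -> list[str]:
    """Rank candidate targets by shared words: the crude heuristic questioned above."""
    source_words = set(source.lower().split())
    scored = [(len(source_words & set(p.lower().split())), p)
              for p in corpus if p != source]
    return [p for score, p in sorted(scored, reverse=True) if score > 0][:limit]

corpus = [
    "Chocolate is lovely for dessert",
    "Chocolate is lovely for after dinner",
    "Humans like food",
]
print(suggest_targets("Chocolate is lovely for dessert", corpus))
# -> ['Chocolate is lovely for after dinner']
```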

1:52 NLS commands are for text manipulation. I guess the way “similarity” would be expressed in NLS is to put/link the two chocolate positions together on a page as a list of similar chocolate positions, plus a page for mentions of food, a page of things humans like, etc. It’s probably not very semantic; the actual knowledge stays in the human mind and isn’t available to the machine for AI or reasoning.

2:35 The interaction style is separate from the interaction/manipulation capabilities (how you do something vs. what you do), except that the interaction style (how) is itself a capability. No matter whether you interact via mouse, keys, knee, data glove, joystick or speech (all of them capabilities in their own right), those inputs can be mapped/configured to the intended actual interactions like select, show, or mark something on the screen (also a capability in the tool capability infrastructure; it could just as well be markings on a printout, a highlight in your fighter-jet AR-HUD glasses, or text-to-speech).
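
As a hedged illustration of that separation (every name is made up; this is not any existing capability infrastructure): input events from any device map to abstract actions, and an action could render through whatever output capability happens to be available.

```python
# Sketch of the indirection only; every name here is invented for illustration.
# Input events (key, mouse, speech, ...) are mapped to abstract actions,
# and each action renders through whatever output capability is configured.

actions = {
    "select":    lambda target: print(f"select {target}"),
    "highlight": lambda target: print(f"highlight {target}"),  # could be a screen highlight,
                                                                # a printout marking, AR-HUD or text-to-speech
}

bindings = {
    ("key",    "ctrl+h"):       "highlight",
    ("mouse",  "double-click"): "select",
    ("speech", "mark this"):    "highlight",
}

def handle(device: str, event: str, target: str) -> None:
    """Dispatch an input event from any device to the action it is bound to."""
    action_name = bindings.get((device, event))
    if action_name is not None:
        actions[action_name](target)

handle("speech", "mark this", "the chocolate passage")  # prints: highlight the chocolate passage
```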

4:12 In my world, the commands don’t matter that much; I can just make them up as I go, so when I implement a new interaction/manipulation tool capability that does stuff, I can bind keys, mouse events or whatever to it. And it’s especially not very relevant to me, because I don’t want to start making formal statements about positions/knowledge etc. as long as I can’t do the most simple things as demonstrated in NLS and other hypertext systems.

4:33 I don’t understand any of this; it must be from the irregular calls or one of the many Google Docs I didn’t read.

There is an easy way to handle it and to enable Author/Liquid for the “web of knowledge”: the application could allow configuring the available entries for the drop-down/context/popup menu, so knowledge workers would set up their node relationships, import predefined ones (for their group, globally, or from their own, individual toolbelt) and maybe even switch between them. Regardless of which research/project they’re working on, they could use the corresponding relations, and it’s up to them to determine what makes sense in their context. For the demo, a particular example could be chosen for which Doug@50 would select the material/body/corpus (text, data, conversation) of a certain problem space, so reasonable relation types and node connections can be demonstrated in action. While Author/Liquid could be a generic tool that allows augmenting work on all kinds of different source materials, it would be easier for Frode to evolve the application based on a concrete case. On the other hand, he could just make some relations up and change them into the actually useful types later.

The next step seems to be to export the data from the current document/view in the interoperability format and to post/send/publish it somewhere, so the other components can pull it from there or get it pushed to them, if Frode implements a connector (which of course isn’t part of a capability infrastructure, because for some reason Doug@50 decided not to do any of that). It’s probably not my job to apply the interoperability format to this specific export example; I guess everybody gets the idea if we imagine the pseudo-result to be "Chocolate is lovely for dessert" ← similar to → "Chocolate is lovely for after dinner". This might be perfectly valid as a “concept”/position, but it’s not very useful for reasoning, machine learning or other methods of automatically computing insights and gaining knowledge (isn’t that the point of a web of knowledge after all, that humans don’t have to read so much, not all the similar and different statements/positions about what people like to eat?).

Instead, Author/Liquid would need to be made to work on words, identifiers or even symbols found in or associated with a portion of text, so a user can declare that “chocolate” is a thing and “after dinner” a time/event, that both happen to coincide (or at least more often than not), or that it would be “lovely” to have them in relation. If I’m not mistaken, the result is supposed to be more like "chocolate" ← thing, "dessert" ← thing, "after dinner" ← time, time ← thing, "lovely" ← relation, "similar" ← relation, (chocolate ← lovely → dessert) ← similar → (chocolate ← lovely → after dinner). This way, tools built around RDF can expose the connections so humans don’t need to read all the nuancing and narrating text clutter in between to get the main point; another hope might be that an AI could reason over this to gain some form of “understanding” in order to answer questions or help with navigation. Sure, RDF won’t be the thing any more and a more modern “linked data” approach is used instead, and yes, the amount of concepts and relations in such a web of knowledge would be massive, which is why they’re supposed to be collected on servers: what client machine would ever be able to do all the global sense-making, and more importantly, should it be allowed to own a copy of the data or reap the benefits?
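
To make that decomposition tangible, here is one possible reading of the pseudo-example as bare triples: just the shape of the data, not an actual interoperability format (nested tuples stand in for statements about statements).

```python
# One possible reading of the pseudo-example as bare triples; not an actual
# interoperability format. Nested tuples stand in for statements about statements.

triples = [
    ("chocolate",    "is_a", "thing"),
    ("dessert",      "is_a", "thing"),
    ("after dinner", "is_a", "time"),
    ("time",         "is_a", "thing"),
    ("lovely",       "is_a", "relation"),
    ("similar",      "is_a", "relation"),
    # the statement-level link: two whole statements declared similar
    (("chocolate", "lovely", "dessert"),
     "similar",
     ("chocolate", "lovely", "after dinner")),
]

def declared_as(graph, kind):
    """Everything declared to be of the given kind, so a client (or an AI)
    could filter the graph without reading any narrating prose in between."""
    return [s for s, p, o in graph if p == "is_a" and o == kind]

print(declared_as(triples, "thing"))     # ['chocolate', 'dessert', 'time']
print(declared_as(triples, "relation"))  # ['lovely', 'similar']
```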

It would be great if you guys could pull this off, as it is one of the most complex unsolved tasks in information theory, but I guess the difficulty would be to determine what a “position” is (how it is expressed in data and how one could operate on it – there are serious, fundamental problems with all of it, and I don’t see them addressed), what “similarity” is, and what insights one can obtain without knowing about dinner situations or what the chocolate is used for (the statements/positions might be very different from each other depending on circumstances that such a simplifying sentence leaves out), and in the end we arrive at what humans are, physics, the world and the universe. I would need to be convinced that the web of knowledge is significantly different from what expert systems or the “Semantic Web” got wrong. In the end, everything is deeply intertwingled, of infinite complexity.
