Liquid Scenario



Jane is about to author a document. She needs to first gather relevant information from the available literature; read, annotate and critically analyse it; save the citations she might need; organize her thoughts and material; re-organize based on her own edits and those of colleagues; and finally integrate her new work for the next person to read by publishing it, in effect contributing to a Dynamic Knowledge Repository, as Doug Engelbart would have called it.


This scenario does not cover all aspects of the workflow; it focuses primarily on what the Liquid Information Environment (overview) augments. It is in the spirit of Joe from Doug Engelbart's Augmenting Human Intellect paper. Please note that most of what is described here is already possible with the tools we have built, with Liquid | Space being the new project to invest in and build:


Gather & Learn


Jane decides to start looking at the research and specifies that she wants to see only the literature which contains specific keywords and links to, or cites, specific other documents.


This produces a screen with too many documents, so she puts a few keywords on the screen and lines instantly jump out to the documents where the text is found, with their thickness indicating how many occurrences there are.
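As a toy illustration of how occurrence counts might drive line thickness (every name here is hypothetical, not part of any Liquid API), one could sketch it like this:

```python
# Hypothetical sketch: map keyword occurrence counts in each document
# to the thickness of the line drawn from keyword to document icon.

def occurrence_counts(keyword: str, documents: dict[str, str]) -> dict[str, int]:
    """Count case-insensitive occurrences of `keyword` in each document's text."""
    kw = keyword.lower()
    return {doc_id: text.lower().count(kw) for doc_id, text in documents.items()}

def line_thickness(count: int, max_px: int = 12) -> int:
    """Scale a count into a pixel thickness; zero occurrences draw no line."""
    return 0 if count == 0 else min(1 + count, max_px)

docs = {
    "engelbart62": "augmenting human intellect ... intellect ... intellect",
    "bush45": "as we may think",
}
counts = occurrence_counts("intellect", docs)
lines = {doc_id: line_thickness(c) for doc_id, c in counts.items()}
```

Here documents with no match draw no line at all, and thickness is capped so a single very repetitive document cannot dominate the view.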


Still too many, so she instantly changes the layout to be time-based on the horizontal axis and popularity (based on citations and reviews) on the vertical axis.


Summing Up


She also decides to play around with the different 'summaries' the system can show of the articles. In addition to traditional summaries she looks at the last sentences of the abstracts, since years of experience have taught her that this is often where the key description is.


She can also choose to see the documents in predominant colours based on her custom colour glossary (she can also use this when reading a single document to scroll through it to see what sections cover what: green for companies, blue for people, red for tech and so on).


Documents can be shown as collages of the images they contain, sized by the amount of text they contain or the number of times they have been cited (or any other criteria); they can jiggle if still actively cited, fade if not accessed for a while, and so on. There is a myriad of summarised views to help her get to grips with the knowledge space.


{ An Aside: A wise man* once told me, as a systems and tools designer, that “I can do anything”. He is right, within the constraints of importing information from the world as it exists (and hopefully, with the Journal Article Tag Suite, which Author will support, this will become easier) and integrating it back into the world in a legacy-compatible ‘document’ which retains as much context and richness as the author wants to include.


The visualisation in Liquid | Space is where we can really look at this. We really can do anything, so we need to sit back and think and go for long walks and discuss over coffee and blog and argue and smile and build–build–build and start again, learning ever more about the needs of man and machine, while continuously releasing useful software tools and maintaining the open environment for innovation. }


Jane decides she is more interested in some documents than others and turns their icons into little boxes with their abstracts or automatic summaries, reads one, then clicks a handy button to skip to the next, all in beautifully large frames, not as tiny, hard-to-read icon/screenshot images. A few get tagged as being useful to her current theme and a few are rejected.


That was while working on her 30-inch-or-so, high-resolution desktop machine.




Jane then sits down with a tablet of some sort, perhaps an iPad, perhaps an electronic-ink system like the reMarkable, or even a book or document printed on paper, in a comfortable chair with a nice, fresh espresso, and reads.


She comes across sections of interest which spark her curiosity; she uses Liquid | Flow to look the text up. Some she posts to forums to discuss with colleagues, some she emails.


She annotates with underlines and notes as she wishes. Freeform, boxed, whatever she wants. Later on, when she needs to find something she has read, she can choose to search only the text she has underlined or highlighted, on a specific device or on any device, at a specific time and place or any. This is in addition to any advanced searching Machine Learning can provide for her.
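Such a search over marked-up text only, filtered by device and date, might look like the following sketch (the Annotation record and all its fields are assumptions, not a real Liquid data model):

```python
# Illustrative sketch of searching only highlighted or underlined text,
# optionally restricted to one device and to annotations after a date.
from dataclasses import dataclass
from datetime import date

@dataclass
class Annotation:
    text: str
    kind: str      # "highlight", "underline" or "note"
    device: str    # e.g. "tablet", "desktop"
    when: date

def search_annotations(annotations, query, kinds=("highlight", "underline"),
                       device=None, after=None):
    """Return annotation texts matching `query`, restricted to marked text."""
    q = query.lower()
    return [a.text for a in annotations
            if a.kind in kinds
            and q in a.text.lower()
            and (device is None or a.device == device)
            and (after is None or a.when >= after)]

notes = [
    Annotation("Vint Cerf on TCP/IP", "highlight", "tablet", date(2020, 3, 1)),
    Annotation("Vint Cerf interview", "note", "desktop", date(2020, 3, 2)),
]
hits = search_annotations(notes, "vint cerf")
```

By default only highlights and underlines are searched, matching the scenario; widening `kinds` brings plain notes back into scope.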


Some of her reading is in Liquid | Author. It is dynamic; she can pinch to compress the document into an outline and jump around at leisure. She can instantly see sentences with the text she is interested in, and she can scroll through the document in a magical compressed-scrolling way where the headings stay prominent but the body text fades, apart from keywords, which become colour-coded to help her understand the structure of the document, using the same colour glossary she used when going through large numbers of documents.


She comes across something she is not sure about and, with a flick of her wrist, she instantly looks it up, searches for it or brings it into her Liquid | Space so that she can see it in context with her other literature-review documents.




Back on her main computer, the Liquid | Space has all of her literature-review sources at hand, as well as all her own writing and glossaries, and a myriad of other data sources in saved views she can go back to, in a completely dynamic environment, similar to how she did her research; but now she is more interested in developing a shape for her document than in the field she is researching.




As Jane gets a better view of her material and her own thoughts, she starts putting together a document in Liquid | Author. From a richly interactive, liquid environment, she now needs to put her argument into linear form in a credible way to help the eventual reader understand her intention.


She types or dictates her thoughts, and she pastes from her other documents or from sources, which then have their citation information automatically brought across. She edits and copies and cuts, but anything she cuts and does not paste is stored with her document; she simply uses a modified keyboard shortcut to see a list of everything she has cut so far.


She remembers she read something on ‘Vint Cerf’ which she would now like to cite, so she does an instant search across all her literature reviews for any document which has the text ‘Vint Cerf’ highlighted, and drops it in as a citation.

She defines an important phrase she just typed by launching Liquid | Flow in an instant and filling in the glossary form. Her readers can now choose to read her document in a compressed form or expanded to include any of her definitions, or in other combinations of text. She does not bother with defining terms which are already well defined, as these are available instantly through Liquid | Flow anyway.


She decides to continue writing a section while on the tube on her smartphone and she adds some notes about changes by speaking to her watch. She sketches while stopped at a coffee shop for a moment, using pen and paper or her tablet.


Back at her desk she can see all the new layers created while on the go and integrates some and hides others to view in the document or in the Liquid | Space later.


It's a complicated document, so she decides to view it in Liquid | Space, where she pulls it apart into headings which appear as nodes, and draws lines and notes connecting them to show how they relate. She chooses to have citations link out from the headings they are cited in, and she moves the sources to a nice spot on the screen. She then copies this as a Quine, titled ‘Save All, Including Software’. She pastes it into Liquid | Author, where it appears as a still image with a title. When she distributes her document, the reader can click on this image and the Quine will open in a browser with full interactivity intact.




When she is done, she publishes her document on the web as a jrnl-enhanced blog post, as a Word document if required, or as a Rich PDF (where the original document and an XML version of it are attached for future extraction by capable software), with all meta-information surfaced to make the document easy to find and make use of.


The publishing process takes a few steps to make sure she is happy with the document and that it will be most useful to her future readers. First the document is checked for unintentional plagiarism, then for writing level, and to make sure any cited material is available; all cited documents are copied into the document (subject to copyright restrictions) to secure it against link rot. An automatic summary is created, and she can choose to click on any sentence in the summary to see which sentences in the document contributed to it, to help her correct any confusion or wrong summarisation.
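The summary-with-provenance step can be sketched as an extractive summariser that keeps, for each summary sentence, the index of the source sentence it came from; the word-frequency scoring below is a toy stand-in for whatever the real summariser would use:

```python
# Hypothetical sketch of an extractive summary that keeps provenance:
# each summary sentence carries the index of the source sentence it was
# taken from, so clicking a summary sentence can reveal its origin.
from collections import Counter

def summarise_with_provenance(sentences: list[str], n: int = 2):
    """Pick the n highest-scoring sentences, returned with source indices."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    def score(s: str) -> float:
        words = s.split()
        return sum(freq[w.lower()] for w in words) / len(words)
    top = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                 reverse=True)[:n]
    # Return (summary sentence, index of contributing source sentence) pairs,
    # in document order.
    return [(sentences[i], i) for i in sorted(top)]

sentences = [
    "hypertext links documents",
    "hypertext augments human intellect greatly",
    "coffee is nice",
]
summary = summarise_with_provenance(sentences)
```

Because every summary line carries its source index, the "click to see what contributed" interaction is a simple lookup rather than a second analysis pass.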


Over Time


Over time it is expected that more and more documents will be created in such rich ways and shared without having so much of their richness stripped away, as happens with PDFs today. It is expected that more intelligence will be built in over time, along with richer means of accessing the information.


The Result


The journey leads towards us getting a better handle on our information, to a stage where we have the same virtual control over our information as a world-class sculptor has over their clay; and then we surpass it, achieving a magic of visualisation and interaction that only powerful networked computers with high-resolution displays can provide. We move at the speed of thought, we change our view at the speed of inspiration, to change our perspective and broaden our minds.










(*= Alan Kay)