Over the past couple of months I've been experimenting intensely with using wiki as a federated data store. That is, rather than treating a wiki page as text, I've been trying to think of it as much as possible as data - we could call this Literate Data.
I have never liked databases. They appear to me as ugly: magically powerful, perhaps, but intrinsically dodgy, as we say in the UK. Being in a database does not sound as good as being in a book. It was with a great deal of pleasure that I followed the progress of NoSQL and document-oriented databases. They were still databases, however.
This work is about trying to define where that feeling comes from, and to explore whether there are ideas of merit within it that we can use to define a better formal structure - one that helps relate human beings to code and data in an evolutionary context.
# Servers and Authors
To serve an author we should not construct a database - we should create curated literate data that an author and her public can interact with. While this thought alludes to hypertext and multimedia theory, we aim for something more, or at least different to this. We look for elements of structure that we could call metasemantics.
So we code a wiki for servers as a Site of Servers, and another as a Site of Authors. Within these sites we have wiki-pages that describe each site and author, and the page-json is structured in such a way that software can read these pages and extract useful information.
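To make the shape of page-json concrete, here is a minimal sketch in Python. The title-plus-story structure follows federated wiki's page format, but the page content, domains, and item ids below are invented for illustration.

```python
# A minimal sketch of a wiki page as data: a title and a "story" list
# of typed items, in the shape used by federated wiki page-json.
# All content here (domains, ids, text) is hypothetical.
page = {
    "title": "Atopia Server",
    "story": [
        {"type": "paragraph", "id": "a1b2",
         "text": "This page describes the server."},
        {"type": "roster", "id": "c3d4",
         "text": "atopia.example.com\nwiki.example.org"},
    ],
}

def items_of_type(page, item_type):
    """Read a page with software: pull out every item of a given type."""
    return [item for item in page["story"] if item["type"] == item_type]

rosters = items_of_type(page, "roster")
```

With helpers like `items_of_type`, a script can treat any page it fetches as a small typed record store rather than as text.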
We have found that rosters are good places to store lists of domains. The text element of the json-item is simply an index of domains, easy to fetch. We provide additional tools to author wiki pages in the absence of a proper REST API for wiki. In this way we begin to provide code libraries that treat wiki simply as a data model.
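Reading a roster then reduces to splitting lines, assuming each domain sits on its own line of the item's text. The sample item below is hypothetical.

```python
def roster_domains(roster_item):
    """A roster's text element is simply an index of domains, one per line."""
    return [line.strip()
            for line in roster_item["text"].splitlines()
            if line.strip()]

# A hypothetical roster item as it might appear in page-json.
roster_item = {
    "type": "roster",
    "id": "c3d4",
    "text": "atopia.example.com\nwiki.example.org\n",
}
```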
For instance, the Atopia Server page would contain a roster of all the wiki-sites on that server. We then have a function and a transporter that use json-rpc to fetch and update this roster:
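Here is a sketch of the read-only half of such a function, using a plain HTTP GET rather than json-rpc. The slug rule and the `http://domain/slug.json` URL pattern are assumptions modelled on common wiki conventions, and the update half of the transporter is not shown.

```python
import json
import re
import urllib.request

def as_slug(title):
    """An assumed convention for turning a page title into a URL slug:
    drop punctuation, join words with hyphens, lowercase."""
    return re.sub(r"\s+", "-", re.sub(r"[^A-Za-z0-9 ]", "", title)).lower()

def fetch_roster(domain, title):
    """Fetch a page's json over plain REST and return the domains listed
    in its roster items. URL pattern is an assumption for this sketch."""
    url = f"http://{domain}/{as_slug(title)}.json"
    with urllib.request.urlopen(url) as response:
        page = json.load(response)
    domains = []
    for item in page.get("story", []):
        if item.get("type") == "roster":
            domains.extend(line.strip()
                           for line in item["text"].splitlines()
                           if line.strip())
    return domains
```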
We also find that recursing over the Roster DSL enables us to fetch complex lists of domains from wiki, effectively using the Roster DSL within our own code. This aspect of linked-data semantics is what we have called the tangled web.
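One illustrative way such recursion might look. The convention that a line beginning with `roster` names another roster is an assumption invented for this sketch, not the actual Roster DSL, and the `rosters` dictionary stands in for pages already fetched from wiki.

```python
def expand_roster(name, rosters, seen=None):
    """Recursively expand a roster whose lines either name a domain or
    (in this hypothetical convention) reference another roster via a
    line starting with "roster "."""
    if seen is None:
        seen = set()
    if name in seen:            # guard against circular references
        return []
    seen.add(name)
    domains = []
    for line in rosters[name].splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("roster "):
            domains.extend(expand_roster(line[7:], rosters, seen))
        else:
            domains.append(line)
    return domains

# Hypothetical roster texts, as fetched from two wiki pages.
rosters = {
    "friends": "alpha.example.com\nroster region",
    "region": "beta.example.com\ngamma.example.com",
}
```

The `seen` set matters: in a tangled web of sites referencing each other, rosters can easily form cycles.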
Arrays and more complex data structures can utilise the About JSON Plugin. Actually, the plugin is only strictly needed for authoring new json content - a simple REST call will return json containing the data needed even without the plugin.
I've started to use these arrays for storing dictionaries of information (models) about wiki. So we have:
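A hedged sketch of what one of these model items might look like. The `data` list of records, the field names, and the repository names are all illustrative assumptions, not an established schema.

```python
# A sketch of storing a model (a dictionary of information about wiki)
# inside a page item. Everything here is hypothetical example content.
model_item = {
    "type": "data",
    "id": "e5f6",
    "text": "Known wiki plugins and where they live.",
    "data": [
        {"plugin": "roster", "repo": "wiki-plugin-roster"},
        {"plugin": "reference", "repo": "wiki-plugin-reference"},
    ],
}

def column(item, key):
    """Treat the item as a tiny table: pull one column out of its data."""
    return [row[key] for row in item["data"]]
```

Note that the same item remains readable prose on the wiki page (via its `text`) while the `data` array is what scripts consume - which is the point of Literate Data.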
The aim is for these wiki pages to be maintained with the aid of scripts, while at the same time serving as descriptive wiki pages about these concepts. This is Literate Data.
# Elements of wiki
Over time we anticipate creating, or otherwise beginning to use, other elements of wiki for data storage. A reference-item, for instance, is a robust way to store a URI to another piece of linked data in wiki.
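As a sketch, a reference-item might carry the site, slug, and title of the page it points at. The field names follow the convention of wiki's reference plugin, but the example content and the URL derivation are our own assumptions.

```python
# A hypothetical reference-item: a typed way to store a pointer to
# another piece of linked data in wiki.
reference = {
    "type": "reference",
    "id": "g7h8",
    "site": "atopia.example.com",
    "slug": "atopia-server",
    "title": "Atopia Server",
    "text": "The server this site runs on.",
}

def reference_url(item):
    """Resolve a reference-item into the URI of the page-json it names.
    The domain/slug.json pattern is an assumption for this sketch."""
    return f"http://{item['site']}/{item['slug']}.json"
```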
In crafting a mini-language for an item, and specifying the json structure of the element in wiki, we are considering how to make such an element social. We are crafting a new category of literate data.
Our aim when considering this is to figure out the sustainable social costs of maintaining such a data category. If it is worth crafting code to do this job, and if this code can be maintained and eventually translated into the range of languages and implementations needed, then we should consider adding such a plugin.
We have begun to outline the utility of making a distinction between authoring plugins and core wiki plugins. The former would fall back to the latter for viewing in circumstances where the authoring plugin is not available.
Whether or not this is a good architecture, the aim is to structure social processes of code and data curation that stay as close to core W3C standards as possible, while enabling a social evolution of sustainable experiments around more complex structures.
# See also