Content from 2014-11

IPC and knowledge store programs
posted on 2014-11-30 15:32:23

What kinds of applications aren't easily possible without a central message bus / knowledge store? Or, rephrased: what kinds of applications would benefit greatly from the use of such a system?

To answer this, let's see how IPC usually works between applications: if a program wants to communicate with other parts of the system, there are several options, including writing data to a regular file, a pipe/FIFO, or a socket, as well as directly executing another program. In some cases the communication style is uni-directional, in others bi-directional.
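
To make the point-to-point nature of this concrete, here is a minimal sketch of pipe-based IPC (Python used purely for illustration, with the standard Unix tool tr as the peer program):

```python
import subprocess

# Uni-directional IPC over a pipe: send text to a child process and
# read its transformed output back. The coupling is direct: only the
# parent and child see these messages, nothing else on the system can.
proc = subprocess.run(
    ["tr", "a-z", "A-Z"],     # the peer program
    input="hello bus\n",      # written to the child's stdin via a pipe
    capture_output=True, text=True,
)
print(proc.stdout, end="")    # -> HELLO BUS
```

Note how this exchange is invisible to every other process, which is exactly the interception problem discussed below.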

So depending on the setup you can define pretty much exactly how messages are routed between components. However, the details of this communication are hidden from the outside. If you wanted to react to a message from another part of the system, you're out of luck. This direct coupling between components doesn't lend itself very well to interception and rerouting.

Unless the program of your choice is very scriptable, you have no good way to, e.g., run a different editor for a file to be opened. Since programs typically don't advertise their internal capabilities for outside use (like CocoaScript (?) allows you to a degree), you also have no chance to react to internal events of a program.

Proposed changes to browsers would include decoupling of bookmark handling, cookies/session state, notifications and password management. Additionally it would be useful to expose all of the scripting interface to allow for external control of tabs and windows, as well as possibly hooking into website updates, but I think that part is just a side-effect of doing the rest.

Proposed changes to IRC clients / instant messengers would include decoupling of password management and notifications. Additionally, the same argument for exposing channels/contacts/servers to external applications applies.

Now let's take a look at the knowledge store. In the past I've used a similar blackboard system to store sensor data and aggregate knowledge from reasoners. The idea behind it is to decouple the different parts of the program from the data they work on, reacting to input data where necessary and outputting results for other programs to work on.

I imagine that this kind of system relieves programs from inventing their own formats for storing data, as well as from the need to explicitly specify where to get data from. Compared to an RDBMS, the downside is obviously the lack of a hard schema, so the same problems as with document-based data stores apply here. Additionally, the requirement to have triggers in order to satisfy the subscriptions of clients makes the overall model more complex and harder to optimise.
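
As a concrete illustration, here is a minimal in-process sketch of the put/subscribe model with triggers (Python, with invented names such as Blackboard and the email.newest-subject key; a real system would add persistence, queries and access control):

```python
from collections import defaultdict

class Blackboard:
    """Minimal shared knowledge store: clients put facts under a key,
    and subscriptions act as the triggers mentioned above."""
    def __init__(self):
        self.facts = {}
        self.subscribers = defaultdict(list)

    def put(self, key, value):
        self.facts[key] = value
        for callback in self.subscribers[key]:   # fire the "triggers"
            callback(key, value)

    def subscribe(self, key, callback):
        self.subscribers[key].append(callback)

board = Blackboard()
seen = []
board.subscribe("email.newest-subject", lambda k, v: seen.append(v))
board.put("email.newest-subject", "Meeting at 10")
print(seen)   # -> ['Meeting at 10']
```

Even in this toy form, the producer of the fact doesn't know or care which programs react to it, which is the decoupling being argued for.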

What is then possible with such a system? Imagine having a single command to switch to a specific buffer regardless of how many programs are open and whether they use an MDI or just a single window. In general, scripting all running programs would become easier.

The knowledge store on the other hand could be used to hold small amounts of data like contacts, the subjects of the newest emails, and notifications from people and websites. All of that context data is then available for other programs to use.

Assuming such data was readily available, using ML to process at least some of the incoming data, looking for important bits of information (emails/messages from friends/colleagues, news stories), could then create an additional database of "current events". How this is displayed is again a different problem: the simplest approach would be a ticker listening on a specific query, the most complex maybe a whole graphical dashboard.

Security is obviously a problem in such a share-all approach. It should be possible, though, to restrict access to data similarly to how user accounts work in regular DBMS systems. For scripting interactions the system would still have to implement restrictions based on the originating user and group on a single system, as well as on the host in a distributed environment.

Tumblelog design
posted on 2014-11-28 21:30

Yay, I found time to give my tumblelog a little theme upgrade. And while I'm doing that, I thought I could write down some thoughts on the current design. I might even update it later.

In contrast to this Coleslaw blog I've written a small static renderer to HTML/ATOM for the tumblelog, so the interface is mostly console Vim on a text file with S-expressions. Since at the moment there is only a single file, I might just show the syntax of that one:

;; -*- mode: lisp; coding: utf-8-unix; -*-

(video :id 1 :date @2013-05-23T00:00:00+01:00 :tags (music) :url #U"" :title "I Charleston Tel Aviv" :via #U"")
(image :id 3 :date @2014-08-16T01:01:00+01:00 :url #U"" :src #P"data/tumblr_n17lclGKp51qdewwao1_1280.jpg" :alt "Scene from the manga BLAME!")
(text :id 22 :date @2014-08-28T21:22:58.047809+01:00 :tags (emacs lisp) :text #"`(global-set-key (kbd "H-1") ...)`  Hyper Hyper!  In other words, bind `Caps-Lock` to `Hyper` and rejoice!"#)
(link :id 88 :date @2014-11-20T17:27:05.107413+01:00 :url #U"" :title "HTSQL")

Clearly this would be viable to store in a DBMS as well, but this is "good enough", as they say. It features PURI, a few other syntax extensions, (my fork of) CL-WHO (for XML extensions) and CL-MARKDOWN for text rendering. The Markdown rendering could use a few fixes for edge cases, but in general it works pretty well and fast. Tags are also written to separate files (both HTML and ATOM), so readers can actually restrict themselves to a few subsets, even though it's not by any stretch a full-fledged query system.

Still, by rendering to static files a lot of problems go away and the whole site can be served very efficiently via a small web server (I'm leaving Hunchentoot/teepeedee2 for more involved projects).

Integrated tools
posted on 2014-11-27 23:30:22

Continuing that thought, I feel that the tools we use are very much disconnected from each other. Of course that is the basic Unix tenet: tools which do one thing well. However, tools serve a specific purpose, which in itself can be characterised as providing a specific service. It seems strange that basically the only widely applicable way to connect very different tools is the shell, that is, calling out to other programs.

I would argue that using some other paradigms could help to make the interplay between services in this sense much easier. For example, if we were to use not only a message bus, like in Plan 9, but also a shared blackboard-like database to exchange messages in a common format, then having different daemons react to state changes would be much easier.

Of course, this would already be possible if one point, a common exchange format, was satisfied. In that case, using file system notifications together with files, pipes, FIFOs etc. would probably be enough to build largely the same system, but based on the file system abstraction.
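
As a sketch of that file-system-based variant, simply diffing directory listings already approximates notifications (a hypothetical "spool" directory of message files; a real daemon would use inotify or a similar mechanism instead of polling):

```python
import os
import tempfile

# A poor man's notification scheme: diff directory listings to detect
# new "message" files dropped into a shared spool directory.
def new_entries(directory, seen):
    current = set(os.listdir(directory))
    fresh = sorted(current - seen)
    seen |= current
    return fresh

with tempfile.TemporaryDirectory() as spool:
    seen = set()
    new_entries(spool, seen)                        # initial scan: empty
    open(os.path.join(spool, "msg-1"), "w").close() # another program "sends"
    fresh = new_entries(spool, seen)
    print(fresh)                                    # -> ['msg-1']
```

The common exchange format would then be whatever convention governs the contents of those message files.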

Imagine now that programs were actually reusing functionality from other services. My favourite pet peeve is the shared bookmark list, cookie store, and whatever else browsers typically reimplement. It should be no surprise that very focused browsers like uzbl actually do just that, allowing for the combination of different programs rather than the reimplementation of basic components.

It's unfortunate that tools like mailcap and xdg-open aren't very integrated into the shell workflow. Even though names like vim or feh are muscle memory for me, having a unified, generic nomenclature for actions like open or edit would be very helpful as a first step towards a more programmable environment. Plan 9-like decision-making for how to respond to these requests, or the aforementioned blackboard/messaging architecture, all go in the same direction.

If we now compare how command-line tools respond to queries (for an abstract command like help it would naturally be foo --help, and shame on you if your tool then outputs "Use man foo."; git fortunately does it right), then we can see how to proceed from here. Not only should we adopt these kinds of queries, but reflectivity is absolutely necessary to provide a more organised system.

Even though "worse is better", it hinders the tighter linking of tools, as everything first goes into serialised form and then back through the OS layers. Despite dreaming of a more civilised age (you know, Lisp Machines), a compromise based on light-weight message passing in a service-based approach, relying on daemons as the implementation detail that provides the functionality, could be viable. Essentially, we emulate what can't be done in a single global memory space.

Sounds awfully like CORBA and D-Bus, right? Well, I think an important consideration is the complexity of the protocol versus the effort spent on integration. If it is very easy to link existing applications up to the message bus, and if that system allows not only message passing but also provides style guides and an implementation of the basic interaction schemes (RPC call, two-way communication, ...), then what otherwise sounds like a lot of reimplementation might actually be more like leveraging existing infrastructure.
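
A toy sketch of what such a bus with a built-in RPC interaction scheme might look like (all names here, such as Bus and the open topic, are invented for illustration; a real bus would of course cross process boundaries):

```python
# A toy message bus that itself provides the RPC interaction scheme,
# so applications only register handlers instead of reimplementing it.
class Bus:
    def __init__(self):
        self.handlers = {}

    def register(self, topic, handler):
        self.handlers[topic] = handler

    def call(self, topic, payload):
        # RPC scheme: one request, one reply, mediated by the bus
        return self.handlers[topic](payload)

bus = Bus()
bus.register("open", lambda path: "opening " + path)
result = bus.call("open", "notes.txt")
print(result)   # -> opening notes.txt
```

The point is that the caller names an abstract action, not a concrete program, so swapping the handler reroutes every client at once.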

Lisp layered on Unix
posted on 2014-11-27 21:21:10

This blog post on LispCast is a pretty good start to get thinking about the intricacies of the interaction between Lisp (Machine) ideas and the current Unix environment. Of course that includes Plan 9 on the one side and Emacs on the other.

Lisp shell

There is scsh, but it's not really what I'm looking for. Using Emacs as a login shell (with the eshell package) comes closest regarding both integration with existing commands and integration of Lisp-based ones. However, while pipes work as expected with eshell, data is still passed around as (formatted) text. There doesn't seem to be an easy way to pass around in-memory objects, at least while staying within Emacs itself. That would of course mean reimplementing some (larger?) parts of that system.

This all ties in to the idea that unstructured text isn't the best way to represent data between processes. Even though Unix pipes are extremely useful, the ecosystem of shell and C conventions means that the obvious way isn't completely correct and there are edge cases to consider. The best example is something as innocent as ls | wc -l, which will break, depending on the shell settings, with some (unlikely) characters in filenames, e.g. newlines.
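
The breakage is easy to demonstrate; the following sketch counts "lines" the way ls | wc -l effectively does and compares that against the real number of directory entries:

```python
import os
import tempfile

# A newline in a filename adds an extra "line" to the textual listing,
# even though there is only one directory entry.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "evil\nname"), "w").close()
    listing = "\n".join(os.listdir(d))    # what `ls` effectively emits
    line_count = listing.count("\n") + 1  # what `wc -l` effectively counts
    entry_count = len(os.listdir(d))      # the actual number of entries
print(line_count, entry_count)            # -> 2 1
```

With structured records instead of newline-delimited text, the ambiguity simply cannot arise.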

Common formats

One of the problems is obviously that in order to pass around structured data, i.e. objects, all participants have to understand their format. Passing references won't work without OS support though.

Instead of having unstructured streams, use streams of (data) objects. The distinction here is Plain Old Data objects (PODs) rather than objects with associated behaviour.

Let's take a look at the standard Unix command-line tools (I'm using the GNU Coreutils here) and try to reproduce the behaviour and/or intent behind them:

Output of entire files

The first command here is cat. Although GNU cat includes additional transformations, this command concatenates files. Similar to that description, we can imagine a CAT that performs the analogous operation on streams of objects.

It doesn't make much sense to concatenate an HTML document and an MP3 file (hence you won't do it in most cases anyway). However, since files are unstructured, cat can still work on them.
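
Such an object-stream CAT can be sketched in a few lines, assuming records are plain data objects (dicts here; the names cat, inbox and feed are made up for illustration):

```python
from itertools import chain

# An object-stream CAT: instead of concatenating unstructured bytes,
# chain streams of structured records (plain data objects, i.e. PODs).
def cat(*streams):
    return chain(*streams)

inbox = [{"type": "mail", "subject": "Hi"}]
feed = [{"type": "news", "title": "Release"}]
records = list(cat(inbox, feed))
print(records)
```

Unlike its byte-oriented counterpart, this CAT preserves record boundaries, so a downstream consumer never has to re-parse the data.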

Registering functionality

Although you can call commands individually on files, some of them already form an ad-hoc service interface: the C compiler, along with the toolchain, forms one such interface, where you're required to use the same interface if you want to seamlessly replace one part of the toolchain.

The same goes for the Coreutils: as long as you honour the interface, programs can be replaced with different implementations.

Interactive commands

Emacs has a special form, interactive, to indicate that a command can be called directly via the command prompt. There is also special handling there to accommodate the use of interactive arguments. This is something that could be generalised to the OS. Examples of where this already happens are the .mailcap file and the .desktop files of desktop suites.

Threading and job control

Unfortunately, getting proper job control to work will be a bit of a problem in any Lisp implementation, since the best way to implement concurrent jobs is using threads, which are not particularly suited to handling the multitude of signals and the other problems associated with them. Even without job control, pipelines implemented in Lisp require shared in-memory communication channels, so something like (object-based) streams, mailboxes, or queues is necessary to move IO between different threads.
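
Such an in-memory pipeline can be sketched with a queue moving objects between threads (Python instead of Lisp here, purely to illustrate the shape; the queue plays the role of the pipe, but carries records rather than text):

```python
import queue
import threading

# Two-stage pipeline: a producer thread emits records, a consumer
# thread transforms them; a queue is the in-memory channel between them.
def producer(out_q):
    for record in ({"n": 1}, {"n": 2}, {"n": 3}):
        out_q.put(record)
    out_q.put(None)                       # end-of-stream marker

def consumer(in_q, results):
    while (record := in_q.get()) is not None:
        results.append(record["n"] * 10)

q, results = queue.Queue(), []
threads = [threading.Thread(target=producer, args=(q,)),
           threading.Thread(target=consumer, args=(q, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # -> [10, 20, 30]
```

The end-of-stream marker stands in for the EOF a real pipe would deliver; signal handling and suspension are exactly the parts this sketch leaves out.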

Yes Minister
posted on 2014-11-13 21:05:24

It's interesting to see that although the series was aired some thirty years ago, most of the themes, probably even the specific dialogue, are still very current. For example "The Death List" sounds awfully familiar given how rampant mass surveillance now is in the UK.

Not that it's not equally applicable to other countries.



Unless otherwise credited, all material is licensed under a Creative Commons License by Olof-Joachim Frahm.