Content tagged tachikoma

Command Line Completion
posted on 2018-07-17 21:50:04

One aspect of the lack of cohesion on *nix systems is the hack that is command line completion via e.g. Bash or ZSH modules. I can think of roughly three ways within the *nix paradigm to achieve a more cohesive pattern here:

  1. Give full control to the command line program and invoke it with a special handler to parse and generate appropriate output for command line completion.

  2. Give partial control and generate a script-like description of the completion, e.g. as a Bash- or ZSH-specific script.

  3. Similarly to the previous option, generate a static description on request.

All of these use the program, the thing that's being invoked, as the central mechanism for completion. No extra files need to be installed anywhere, and the description can always stay in sync with what the program actually does. The downside may well be that the program is invoked multiple times.
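To make variant 1 concrete, here is a minimal sketch in Python, relying on Bash's `complete -C` protocol: once registered via `complete -C mytool mytool`, Bash invokes the program with COMP_LINE and COMP_POINT set in the environment and reads one candidate per output line. The tool name and its subcommands are of course made up:

```python
#!/usr/bin/env python3
"""Variant 1 sketch: the program answers its own completion requests.

Registered with `complete -C mytool mytool`, Bash invokes this program
with COMP_LINE/COMP_POINT in the environment and expects one candidate
per output line.
"""
import os
import sys

SUBCOMMANDS = ["fetch", "push", "status"]  # hypothetical DSL-like arguments

def complete(line, point):
    # Only look at the text up to the cursor and complete the last word.
    before_cursor = line[:point]
    words = before_cursor.split()
    current = "" if before_cursor.endswith(" ") or len(words) < 2 else words[-1]
    for candidate in SUBCOMMANDS:
        if candidate.startswith(current):
            print(candidate)

if __name__ == "__main__":
    if "COMP_LINE" in os.environ:  # invoked as a completion handler
        line = os.environ["COMP_LINE"]
        complete(line, int(os.environ.get("COMP_POINT", len(line))))
    else:  # the normal program behaviour continues here
        print("running with arguments:", sys.argv[1:])
```

Since the same binary both runs and completes itself, the completion logic cannot drift out of sync with the program; the price is one extra process invocation per completion request.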

Therefore perhaps also:

  4. Invoke the program in a special mode and keep it alive while it handles command line completion. Once the user actually wants to invoke it, it would simply use the already existing in-memory structures and continue.

Of course I can see the downsides here too, though in the interest of having a more interwoven system I'd argue that the benefit might be worth it overall, in particular for programs whose arguments are a little more like a DSL than just simple file arguments. Notably though, shell features would not be available in variant 4; instead the program would have to call back or replicate the required functionality, perhaps making it somewhat less appealing.
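A sketch of what variant 4 could look like, under an entirely hypothetical protocol (no shell currently speaks it): the shell side would send `complete <partial>` lines while the user edits, and a final `run <args>` on invocation, at which point the program carries on with its already built state:

```python
#!/usr/bin/env python3
"""Variant 4 sketch: stay alive across completion and invocation.

The socket path and the line protocol ("complete <partial>" /
"run <args>") are assumptions; no shell currently speaks this.
"""
import os
import socket

SOCKET_PATH = "/tmp/mytool.sock"  # hypothetical rendezvous point

def serve():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(1)
    # Expensive in-memory state is built once and reused for every query.
    subcommands = ["fetch", "push", "status"]
    args = []
    while True:
        conn, _ = server.accept()
        with conn:
            verb, _, rest = conn.recv(4096).decode().strip().partition(" ")
            if verb == "complete":
                matches = [s for s in subcommands if s.startswith(rest)]
                conn.sendall(("\n".join(matches) + "\n").encode())
            elif verb == "run":
                args = rest.split()
                conn.sendall(b"ok\n")
                break
    server.close()
    os.unlink(SOCKET_PATH)
    print("running with arguments:", args)  # reusing the in-memory state

if __name__ == "__main__":
    serve()
```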

Integrated tools #2
posted on 2016-10-15 23:53:14

In order to integrate any (interactive) application, the following three parts need to be available; otherwise the integrator will have trouble one way or another:

  • objects, representing the domain model; these need to be inspectable in order to get at the contained information,
  • behaviours, which an external user can invoke,
  • notifications, to react to state changes.

E.g. not only does an application need to expose its windows, it also needs to allow a window to be opened or closed, and finally to send updates when a window was opened or closed. Notifications in particular need to arrive on a channel that allows polling several of them at the same time: if communication is done over a socket, select and friends allow the other side to control and react to multiple applications at once.
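As a sketch of the notification side, assuming each application exposes a Unix domain socket and a line-based notification format (both assumptions on my part), a single select-style loop is enough for one integrator to react to all of them:

```python
#!/usr/bin/env python3
"""Sketch: one integrator polling notifications from several apps.

The socket paths and the line-based payloads are assumptions; what
matters is that select/poll multiplexes all applications at once.
"""
import selectors
import socket

APP_SOCKETS = ["/tmp/editor.sock", "/tmp/browser.sock"]  # hypothetical

selector = selectors.DefaultSelector()
for path in APP_SOCKETS:
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    sock.setblocking(False)
    selector.register(sock, selectors.EVENT_READ, data=path)

while selector.get_map():
    for key, _ in selector.select():  # blocks until any app notifies us
        payload = key.fileobj.recv(4096)
        if not payload:  # the application went away
            selector.unregister(key.fileobj)
            key.fileobj.close()
            continue
        # e.g. b"window-closed 42\n": react, reroute, or just log it
        print(key.data, payload.decode().strip())
```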

Image viewers and other systems tools
posted on 2015-06-12 16:20:23

After all the trouble I went through trying to get an image viewer to work in CL (due to the GTK+ 3 library in use being problematic, that is, unmaintained), maybe a different approach is more viable. It would be possible to use an existing program as a front end by controlling it via IPC. E.g. feh, my go-to program for that task, already has configurable keybindings; it should be a smaller problem to remote control it (even if that means adding some code).
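As a rough sketch of that remote-control idea: feh's slideshow mode reacts to SIGUSR1/SIGUSR2 by switching to the next/previous image, so a wrapper process can already drive it without any patches (the file names here are placeholders):

```python
#!/usr/bin/env python3
"""Sketch: driving feh externally via signals.

feh's slideshow advances on SIGUSR1 and rewinds on SIGUSR2; anything
richer (menus, input widgets) would need patches or another viewer.
"""
import signal
import subprocess
import time

viewer = subprocess.Popen(["feh", "a.png", "b.png", "c.png"])
time.sleep(2)
viewer.send_signal(signal.SIGUSR1)  # next image
time.sleep(2)
viewer.send_signal(signal.SIGUSR2)  # previous image
viewer.terminate()
```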

However, as with all these *nix combinators, it feels like a mish-mash of intertwined tools and not quite the optimal solution.

Consider what happens if you want to add new functionality, e.g. new widgets. In that case composability breaks down, since feh is relatively minimal and therefore doesn't have many options in terms of providing different menus, input widgets, etc. You'd therefore have to either find a different viewer with more scripting capabilities (which runs counter to the "one tool" mantra), or switch to a more integrated approach and make this component an internal part of your environment.

Obviously now would be the time for either components/CORBA, or a Lisp Machine to hack up other programs.

Or switch to Qt. It seems the bindings for that framework are more stable than the GTK ones, and additionally Qt simply has more people working on the framework.

Since one of the problems with the GTK bindings is the relatively recent upgrade to GTK+ 3, there seems to be a point in using the previous version 2 instead, considering that even GIMP hasn't updated yet.

Server interface
posted on 2014-12-02 13:40:42

What different components would a daemon expose via the message bus and the knowledge store, respectively?

For one, methods for remote control. Since RPCs aren't very flexible, the communication style should use message passing. Waiting for a response needs to be aware of both possible timeouts and communication failures.

Passed objects can't be too complex, since every interacting language/environment needs to be able to access, create and possibly manipulate them, and since the serialisation overhead should be kept to a minimum. At the same time the schema of messages isn't fixed, which is an obvious problem for very strict type systems: either a fixed-schema wrapper has to be used, which would need to be updated at some point or kept backwards-compatible, or a very dynamic representation with runtime checks has to be implemented.
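A sketch of the dynamic end of that trade-off, with the field names and JSON wire format being my assumptions rather than a proposal: only a small envelope is checked at runtime, while the body stays free-form:

```python
#!/usr/bin/env python3
"""Sketch: fixed envelope, free-form body, checked at runtime.

The field names and JSON as the wire format are assumptions; only the
envelope has a fixed schema, the body is left to the participants.
"""
import json

ENVELOPE_FIELDS = {"id", "sender", "topic", "body"}

def parse_message(raw):
    message = json.loads(raw)
    missing = ENVELOPE_FIELDS - message.keys()
    if missing:  # runtime check instead of a static schema
        raise ValueError("missing envelope fields: %s" % missing)
    return message

# The body can carry whatever a receiver knows how to interpret.
wire = json.dumps({
    "id": 1, "sender": "mail-daemon", "topic": "mail.new",
    "body": {"subject": "hello", "from": "alice@example.org"},
}).encode()
print(parse_message(wire)["body"]["subject"])
```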

Comparable string-based messages as in e.g. Plan9 are too simple on the one hand, whereas a protocol like DBus might be overengineered (?) on the other.

An important point to consider is the in(tro)spection features of the global system. It should absolutely be possible to easily see and edit messages as well as stored data in either a text-based or a convertible format. Also, commands and objects should have runtime documentation in the system, so that a hypothetical call like describe object would display text-based documentation (in the terminal) or hypertext-based documentation (in the browser).
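A toy sketch of such runtime documentation (the registry and the describe call are hypothetical): the same docstring can be rendered as plain text for the terminal or as hypertext for the browser:

```python
#!/usr/bin/env python3
"""Sketch: runtime documentation for exposed commands.

The registry and the describe verb are hypothetical; the point is that
documentation lives in the running system, not only in man pages.
"""
COMMANDS = {}

def command(name, doc):
    def register(fn):
        COMMANDS[name] = {"fn": fn, "doc": doc}
        return fn
    return register

@command("window.close", "Close the window with the given id.")
def close_window(window_id):
    print("closing window", window_id)

def describe(name, hypertext=False):
    doc = COMMANDS[name]["doc"]
    if hypertext:  # for the browser
        return "<p><code>%s</code>: %s</p>" % (name, doc)
    return "%s: %s" % (name, doc)  # for the terminal

print(describe("window.close"))
print(describe("window.close", hypertext=True))
```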

Naming clashes have to be solved by a one- or multi-level package system. Since the naming is both shared and global, typical approaches include reversed domain names, UUIDs and prefixing of identifiers.

Session management would include saving and reloading of state, even with different session names. A separate session manager would take care of initialising applications based on the saved state.

For both text and graphical UIs, methods to switch to the active screen/window need to be provided. Copy & paste functionality might still be done via the window system, or additionally via messaging, which would allow connected systems, as well as text and graphical applications, to exchange content without any problems.

IPC and knowledge store programs
posted on 2014-11-30 15:32:23

What kind of applications aren't easily possible without a central message bus / knowledge store? Or rephrased: what kind of applications would benefit greatly from the use of a central message bus / knowledge store?

To answer this, let's see how IPC usually works between applications: if a program wants to communicate with other parts of the system, there are several options, including writing data to a regular file, pipe/fifo, or socket, as well as directly executing another program. In some cases the communication style is uni-directional, in others bi-directional.

So depending on the setup you can define pretty much exactly how messages are routed between components. However, the details of this communication are hidden from the outside. If you wanted to react to a message from another part of the system, you're out of luck. This direct coupling between components doesn't lend itself well to interception and rerouting.

Unless the program of your choice is very scriptable, you have no good way to e.g. run a different editor for a file that is about to be opened. Since programs typically don't advertise their internal capabilities for outside use (like CocoaScript (?) allows you to, to a degree), you also have no chance to react to internal events of a program.
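To make the coupling problem concrete, here is a sketch of an interceptable open request; the handler table and rule mechanism are invented for illustration. With a central dispatch, rerouting a file type to a different editor needs no cooperation from the program that asked:

```python
#!/usr/bin/env python3
"""Sketch: an interceptable "open" request.

The handler table and the rule mechanism are invented; the point is
that rerouting happens in the dispatcher, not in the caller.
"""
import fnmatch
import subprocess

DEFAULT_HANDLERS = [("*.png", ["feh"]), ("*", ["vim"])]
rules = []  # user-installed interceptors, consulted first

def intercept(pattern, command):
    rules.append((pattern, command))

def open_file(path):
    for pattern, command in rules + DEFAULT_HANDLERS:
        if fnmatch.fnmatch(path, pattern):
            return subprocess.run(command + [path])

# Reroute text files to a different editor without touching the caller.
intercept("*.txt", ["emacsclient", "-n"])
open_file("notes.txt")
```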

Proposed changes to browsers would include decoupling bookmark handling, cookies/session state, notifications and password management. Additionally it would be useful to expose all of the scripting interface to allow for external control of tabs and windows, as well as possibly hooking into website updates, but I think that part is just a side effect of doing the rest.

Proposed changes to IRC clients / instant messengers would include decoupling password management and notifications. Additionally, the same argument for exposing channels/contacts/servers to external applications applies.

Now let's take a look at the knowledge store. In the past I've used a similar blackboard system to store sensor data and aggregate knowledge from reasoners. The idea behind it is to decouple the different parts of the program from the data they work on, reacting to input data if necessary and outputting results for other programs to work on.

I imagine that this kind of system relieves programs from creating their own formats for storing data, as well as from the need to explicitly specify where to get data from. Compared to an RDBMS the downside is obviously the lack of a hard schema, so the same problems as with document-based data stores apply here. Additionally, the requirement to have triggers in order to satisfy client subscriptions makes the overall model more complex and harder to optimise.
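A toy version of that model (in-process only, where a real knowledge store would sit behind IPC), showing both the decoupling and why triggers make optimisation harder: every write has to be matched against all subscriptions:

```python
#!/usr/bin/env python3
"""Sketch: a tiny in-process blackboard with subscriptions.

A real knowledge store would live behind IPC; this only shows the
model: writers put facts, readers subscribe and get triggered.
"""
from collections import defaultdict

class Blackboard:
    def __init__(self):
        self.facts = {}
        self.subscribers = defaultdict(list)

    def subscribe(self, prefix, callback):
        # Deliberately naive pattern language: match on key prefix.
        self.subscribers[prefix].append(callback)

    def put(self, key, value):
        self.facts[key] = value
        # The "trigger" part: every write is matched against all
        # subscriptions, which is what makes optimisation harder.
        for prefix, callbacks in self.subscribers.items():
            if key.startswith(prefix):
                for callback in callbacks:
                    callback(key, value)

board = Blackboard()
board.subscribe("mail.", lambda key, value: print("ticker:", key, value))
board.put("mail.newest-subject", "Meeting moved to Friday")
```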

What is then possible with such a system? Imagine having a single command to switch to a specific buffer, regardless of how many programs are open and whether they use MDI or just a single window. In general, scripting of all running programs becomes easier.

The knowledge store, on the other hand, could be used to hold small amounts of data like contacts, the subjects of the newest emails, or notifications from people and websites. All of that context data is then available for other programs to use.

Assuming such data was readily available, using ML to process at least some of the incoming data to look for important bits of information (emails/messages from friends/colleagues, news stories) could then create an additional database of "current events". How this is displayed is again a different problem. The simplest approach would be a ticker listening on a specific query; the most complex would maybe consist of a whole graphical dashboard.

Security is obviously a problem in such a share-all approach. It should be possible, though, to restrict access to data similarly to how user accounts work in regular DBMSs; for scripting interactions the system would still have to implement restrictions based on the originating user and group on a single system, as well as on the host in a distributed environment.

Integrated tools
posted on 2014-11-27 23:30:22

Continuing that thought, I feel that the tools we use are very much disconnected from each other. Of course that is the basic tenet: tools which do one thing well. However, tools serve a specific purpose, which in itself can be characterised as providing a specific service. It seems strange that basically the only widely applicable way to connect very different tools is the shell, that is, calling out to other programs.

I would argue that using some other paradigms could help make the interplay between services in this sense much easier. For example, if we were to use not only a message bus, like in Plan9, but also a shared blackboard-like database to exchange messages in a common format, then having different daemons react to state changes would be much easier.

Of course, this would already be possible if that one point, a common exchange format, were satisfied. In that case, using file system notifications together with files, pipes, fifos, etc. would probably be enough to build largely the same system, but based on the file system abstraction.
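As a sketch of that file-system variant, with a fifo as the transport and JSON lines as the assumed common exchange format (both placeholders): any program can emit a state change with a plain shell redirect, and a daemon reacts by reading the fifo:

```python
#!/usr/bin/env python3
"""Sketch: the same bus idea built only on file system primitives.

A fifo carries JSON lines; any number of writers can emit state
changes with plain `echo '{"topic": ...}' > /tmp/bus.fifo`.
"""
import json
import os

FIFO = "/tmp/bus.fifo"  # hypothetical well-known path
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

with open(FIFO) as bus:  # blocks until a writer connects
    for line in bus:
        event = json.loads(line)
        print("state change:", event["topic"], event.get("body"))
```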

Imagine now that programs actually reused functionality from other services. My favourite pet peeve would be a shared bookmark list, cookie store, whatever browsers typically reimplement. It should be no surprise that very focused browsers like uzbl actually do just that, allowing for the combination of different programs rather than the reimplementation of basic components.

It's unfortunate that tools like mailcap and xdg-open aren't very integrated into the shell workflow. Even though names like vim or feh are muscle memory for me, having a unified generic nomenclature for actions like open and edit would be very helpful as a first step towards a more programmable environment. Plan9-like decision making for how to respond to these requests, or the aforementioned blackboard/messaging architecture, all go in the same direction.

If we now compare how command-line tools respond to queries (for an abstract command like help it would naturally be foo --help, and shame on you if your tool then outputs "Use man foo."; git fortunately does it right), then we can see how to proceed from here. Not only should we adopt these kinds of queries, but reflectivity is absolutely necessary to provide a more organised system.

Even though "worse is better", it hinders the tighter linking of tools, as everything first goes to serialised form, then back through the OS layers. Despite dreaming of a more civilised age (you know, Lisp Machines), a compromise based on light-weight message-passing in a service-based approach, relying on deamons as the implementation detail to provide functionality could be viable. Essentially, what can't be done in a single global memory space we emulate.

Sounds awfully like CORBA and DBUS, right? Well, I think an important consideration is the complexity of the protocol versus the effort spent on integration. If it is very easy to link existing applications up to the message bus, and if that system not only allows message passing but also provides style guides and an implementation of the basic interaction schemes (RPC call, two-way communication, ...), then what otherwise sounds like a lot of reimplementation might actually be more like leveraging existing infrastructure.



Unless otherwise credited, all material is licensed under a Creative Commons License by Olof-Joachim Frahm.