Recent Content

ELS 2015
posted on 2015-04-22 21:35:23+01:00

Yesterday the 8th European Lisp Symposium finished. In short it was a great experience (it was my first time there, but hopefully not the last). The variety and quality of the talks was great, and a good number of people attended both the talks and both(!) dinners, so there were lots of opportunities to exchange thoughts and quiz people, including on Lisp. Except for one talk I believe all talks took place, which is also a very good ratio.

For the talks I still have to go through the proceedings a bit for details, but obviously the talk about Lisp/C++ interoperability with Clasp was (at least for me) long awaited and very well executed. Both the background information on the origins and the technical description of the use of LLVM and the integration of multiple other projects (ECL, SICL, Cleavir) were very interesting and informative.

There were also quite a number of Racket talks, which surprised me, but given the source of these projects it makes sense, since the GUI support is pretty good. VIGRA, although it has a bit of an unfortunate name, looks pretty nice. It's good to see that bindings to a number of languages are available and that, in the case of the Lisps, they make the interaction a lot easier, so it might be a good alternative to OpenCV. It's also encouraging that students enjoy this approach and are, as it seems, productive with the library.

P2R, the Processing implementation in Racket, is similarly interesting: there is a huge community using Processing, and making the programming of CAD applications easier via a known environment is obviously nice and should give users more opportunities in that area.

If I remember correctly the final Racket talk was about constraining application behaviour; it was, I guess, more of a sketch of how application modularity and user-understandable permissions could be both implemented and enforced. I still wonder about the applicability in e.g. a Lisp or a regular *nix OS.

The more deeply technical talks about garbage collectors (be it in SBCL or Allegro CL) were both very interesting, in that normally I (and, I imagine, lots of people) don't get a chance to go down to that level, so learning some details about those things is much appreciated.

The same goes for the first talk by Robert Strandh, Processing List Elements in Reverse Order, which was really great to hear: I usually appreciate the :from-end parameter of all the sequence functions, but had never read the details of the interaction between the actual order of iteration and the final result of the function. Then again, the question persists whether any programs actually process really long lists in reverse in production. Somehow the thought that even this case is optimised would make me sleep easier, but then again, the tradeoff between maintainable code and performance improvements remains (though I don't think the presented code was very unreadable).
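
As a quick refresher on the semantics (this is just standard Common Lisp, nothing specific to the talk):

(find-if #'evenp '(1 2 3 4 5 6))             ; => 2, the first match
(find-if #'evenp '(1 2 3 4 5 6) :from-end t) ; => 6, the last match
(reduce #'cons '(1 2 3))                     ; => ((1 . 2) . 3)
(reduce #'cons '(1 2 3) :from-end t)         ; => (1 2 . 3)

The catch for lists is that :from-end only specifies the result as if the sequence were processed backwards; a singly linked list still has to be traversed from the front one way or another, which is exactly where the interesting implementation choices come from.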

Escaping the Heap was nice and it'll be great to see an open-sourced library for shared memory and off-heap data structures, even if only for special cases.

Lots of content, so I doubt I'll get to the lightning talks; it'll be just this for now then. Hopefully I'll have the time and opportunity to go to the next ELS or another Lisp conference; I can only recommend going.

Extensions, extensions, extensions
posted on 2015-04-17 10:15:23

For the longest time I've been using quite a number of Firefox extensions. The known problem with that is a steady slowdown, which is only amplified by my habit of soft bookmarks, i.e. keeping hundreds of open tabs with their corresponding state (which is the whole reason to do that).

However, a lot of that state is captured in a very inconvenient form, that is, it's hard to modify a long list of tabs, so I want to make both this and, incidentally, the sharing of state between sessions and even browsers much easier.

The idea is to split part of the browser state out into a separate component, namely a database server for cookies (and other local storage), tabs, sessions and bookmarks. This way, and by having coarse control over the loading of sessions, migrating state between sessions and browsers should become much easier.

Fortunately most of the browser extension APIs seem usable enough to make this work for at least Firefox and Chrome, so at the moment I'm prototyping the data exchange. Weirdly, for Chrome you have to jump through some conversion hoops (aka native extensions, i.e. a local process exchanging data with the browser via stdio), so the Firefox APIs, which allow socket connections, seem a bit friendlier to use. That said, the exchange format for Chrome, Pascal-string encoded JSON, seems like a good idea, with the exception of forcing local endianness, which is completely out of the question for a possibly network-enabled system (which is to say, I'm definitely going to force network byte order instead).
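
A minimal sketch of that framing in Common Lisp, assuming the Babel library for the UTF-8 encoding (the function name is made up for illustration):

(defun write-framed-message (json-string stream)
  ;; STREAM is a binary stream with element type (unsigned-byte 8)
  (let* ((octets (babel:string-to-octets json-string :encoding :utf-8))
         (length (length octets)))
    ;; 32-bit length prefix, most significant byte first, i.e. network
    ;; byte order, unlike Chrome's native-endian framing
    (loop for shift from 24 downto 0 by 8
          do (write-byte (ldb (byte 8 shift) length) stream))
    (write-sequence octets stream)
    (finish-output stream)))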

Reifying Hidden Code
posted on 2015-04-09 13:17:46

The title sounds a bit too majestic to be true. However, the thought just occurred to me that much of what I've been doing over the last months often involved taking "dev ops" code, e.g. configuration code that lives in deployment scripts, and putting it into a reusable format, i.e. an actual (Python) module.

That usually happens because what was once a dependency of an application (or service, if you like) now needs to be accessible from (Python) code for testing purposes, or because the setup routine isn't actually just setup any more, but runs more regularly as part of the application cycle.

Of course doing this has some major downsides, as the way such scripts are written, using a specific library to access remote hosts and without much error handling, is fundamentally different from how application code works: usually with a more powerful database interface, without any shell scripting underlying the commands (which are instead replaced by "native" file manipulation calls) and with more proper data structures.

That leaves me with some really annoying busy work just to transform code from one way of writing it to another. Maybe the thing to take away here is that configuration code isn't special, and application code will sooner or later become library code as well; in other words, build everything as reusable modules. This of course means that using certain configuration (dev ops) frameworks is prohibited, as they work from the outside in, e.g. by providing a wrapper application (for Fabric that would be fab) that invokes "tasks" written in a certain way.

Properly done, configuration code would be a wrapper so thin that the actual code could still be reused from application code and in different projects later on. The difference between configuration and business logic would then be more a distinction of where code is used, not how or against which framework it was written.

X11 keybindings for easier terminal clipboard handling
posted on 2015-02-12 15:35:15+01:00

After years of annoyance with the X11 behaviour for clipboard and selection handling with regard to terminal applications, I managed to find a good compromise via some additional shortcuts in my window manager of choice, dwm, and my terminal multiplexer, tmux.

Now, pressing C-o p (in tmux) pastes from the X11 primary selection, C-o P pastes from the clipboard, and Linux-z (meaning the key formerly known as the Windows key) exchanges clipboard and primary selection, so no more awkward pasting and selecting with the mouse in order to get the correct string into the correct location.

# to switch clipboard and primary selection
PRIMARY=`xsel -op`; xsel -ob | xsel -ip; echo "$PRIMARY" | xsel -ib

# to paste from primary selection in tmux
bind p run "tmux set-buffer \"$(xsel -op)\"; tmux paste-buffer"
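
# to paste from the clipboard (the C-o P binding mentioned above;
# same pattern, just reading the clipboard instead)
bind P run "tmux set-buffer \"$(xsel -ob)\"; tmux paste-buffer"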

Other useful keybindings in tmux would include copying into the clipboard etc., and there is a useful SO post explaining that.

As before this is public at the customisations branch on Github; I still have to upload the tmux part though.

A mocking library for Common Lisp
posted on 2015-01-06 14:52:47+01:00

After some time thinking about the library and rewriting it with subtly different approaches, CL-MOCK now looks good to me as a version one.

I've removed all mentions of generic functions for now: first of all I'm unsure whether the functionality to dynamically rebind methods is even necessary, and second, doing so is complicated by the details of that protocol. That means specifying which method to override is a bit hairy, and I really want a good syntax before I let that stuff loose. So it'll have to wait until I figure out a good way to do it. Since it should be easy to add to the existing frontend, it will very probably be done by overloading the existing functions / macros (e.g. with a :method specifier or so).

I'm hoping to test all of this and possibly investigate the generic function issue with some other library. At the moment my one more complex example is a replacement for the DRAKMA HTTP-REQUEST call, which worked surprisingly well and might even make it into a new test suite. The benefit is obviously improved reliability, since no running network connection is needed to test libraries against an (HTTP) server.
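
For a flavour of what that looks like, here is a minimal sketch, assuming the WITH-MOCKS/ANSWER interface; the URL, the canned body and the test entry point are made up:

(with-mocks ()
  ;; return a canned response instead of touching the network; the
  ;; real HTTP-REQUEST returns more values (status code, headers, ...)
  ;; which a more faithful mock would have to reproduce
  (answer (drakma:http-request "http://example.com/")
          "<html>Hello!</html>")
  (run-http-client-tests))  ; hypothetical test entry point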

Tiling WMs and multiplexing
posted on 2014-12-18 15:04:53

Since it came up on Hacker News I thought I can write a little bit about that topic as well.

I started to look for alternatives to the distribution's default desktop environment relatively soon after arriving on Linux (Fedora, if I remember correctly). At that point the options included Fluxbox and a couple of smaller ones like i3 and wmii. I also tried twm, but honestly, without any effort spent on theming it was basically not viable.

So after Fluxbox, which was great but still leaves you with too much to do with your mouse, my conclusion was that I basically don't need a regular desktop. Having all those messy icons, menus and widgets lying around the screen is just way too distracting for me.

If you then remove all that decoration, you are left with a very bare bones look. Still, after starting to get the hang of Vim (with which I started) and later Emacs, the disadvantage of constantly having to deal with window positions became apparent.

I think the next step was to use wmii or one variant of that. Tiling leaves your mouse free to interact with the main point, your running program. No more juggling windows, aligning borders and so on. For me this isn't about a pretty and flashy screen, it's about the most comfortable environment to work in.

To the present day: I'm now converted to dwm from the awesome people of suckless.org. It's basically a single C file; you configure it with a header and additionally with a custom patch set and that's it. You'd be hard pressed to find a smaller, less resource-intensive window manager. And on top of that it has many amazing features which just work really well together.

Combined with tmux for terminal multiplexing, Emacs buffers for editing multiplexing and dwm for desktop and screen multiplexing this is just the right amount of flexibility to arrange and move around a lot of context.

Obviously this depends on each person, but since you can (and frankly, should) configure every aspect of this, with just a few keypresses you can switch to every part of your running programs and back, be it in the terminal, on a remote system, or graphical.

To be honest, until there is a better alternative to keyboards, I think I'll keep using this approach, maybe adding more scripting capabilities along the same lines as in previous blog posts.

Server interface
posted on 2014-12-02 13:40:42

What different components would a daemon expose via the message bus and the knowledge store, respectively?

For one, methods for remote control. Since RPCs aren't very flexible, the communication style should use message passing instead. Waiting for a response needs to be aware of both possible timeouts and communication failures.

Passed objects can't be too complex, since every interacting language/environment needs to be able to access, create and possibly manipulate them, and since the serialisation overhead should be minimal. At the same time the schema of messages isn't fixed, so there is an obvious problem for very strict type systems: either a fixed-schema wrapper can be used, which would need to be updated at some point or be kept backwards-compatible, or a very dynamic representation with runtime checks would have to be implemented.
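
As a sketch of the fixed-schema wrapper option (all names and fields here are made up for illustration), the envelope stays small and versioned while the payload remains free-form:

(defparameter *example-message*
  '(:version 1                      ; versioned, backwards-compatible envelope
    :sender "org.example.editor"    ; see the note on naming below
    :reply-to nil                   ; used for the message-passing style
    :payload (:action "open"        ; free-form part, checked at runtime
              :file "/tmp/notes.txt")))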

String-based messages comparable to e.g. Plan 9's are too simple on the one hand, whereas a protocol like D-Bus might be over-engineered(?) on the other.

An important point to consider is the in(tro)spection features of the global system. It should absolutely be possible to easily see and edit messages as well as stored data in a text-based or text-convertible format. Also, commands and objects should have runtime documentation in the system, so that a hypothetical call like describe object would display text-based documentation (in the terminal) or hypertext-based documentation (in the browser).

Naming clashes have to be solved by a one- or multi-level package system. Since the naming is both shared and global, typical approaches include reversed domain names, UUIDs and prefixed identifiers.

Session management would include saving and reloading of state, even with different session names. A separate session manager would take care of initialising applications based on the saved state.

For both text-based and graphical UIs, methods to switch to the active screen/window need to be provided. Copy & paste functionality might still be handled via the window system, or additionally via messaging, which would allow connected systems, as well as text and graphical applications, to exchange content without any problems.

IPC and knowledge store programs
posted on 2014-11-30 15:32:23

What kind of applications aren't easily possible without a central message bus / knowledge store? Or, rephrased: what kind of applications would benefit greatly from the use of a central message bus / knowledge store?

To answer this, let's see how IPC usually works between applications: if a program wants to communicate with other parts of the system, there are several options, including writing data to a regular file, pipe/fifo or socket, as well as directly executing another program. In some cases the communication style is uni-directional, in others bi-directional.

So depending on the setup you can pretty much exactly define how messages are routed between components. However, the details of this communication are hidden from the outside. If you wanted to react to a message between other parts of the system, you're out of luck. This direct coupling between components doesn't lend itself very well to interception and rerouting.

Unless the program of your choice is very scriptable, you have no good way to e.g. run a different editor for a file that is about to be opened. Since programs typically don't advertise their internal capabilities for outside use (like CocoaScript (?) allows you to do to a degree), you also don't have a chance to react to internal events of a program.

Proposed changes to browsers would include decoupling of bookmark handling, cookies/session state, notifications and password management. Additionally it would be useful to expose all of the scripting interface to allow for external control of tabs and windows, as well as possible hooking into website updates, but I think that part is just a side-effect of doing the rest.

Proposed changes to IRC clients / instant messengers would include decoupling of password management and notifications. Additionally the same argument to expose channels/contacts/servers to external applications applies.

Now let's take a look at the knowledge store. In the past I've used a similar blackboard system to store sensor data and aggregate knowledge from reasoners. The idea behind it is the decoupling of the different parts of the program from the data they work on, reacting to input data if necessary and outputting results for other programs to work on.

I imagine that this kind of system relieves programs from creating their own formats for storing data, as well as from the need to explicitly specify where to get data from. Compared to an RDBMS the downside is obviously the lack of a hard schema, so the same problems as with document-based data stores apply here. Additionally, the requirement to have triggers in order to satisfy the subscriptions of clients makes the overall model more complex and harder to optimise.

What is then possible with such a system? Imagine having a single command to switch to a specific buffer regardless of how many programs are open and whether they use MDI or just a single window. In general, scripting of all running programs becomes easier.

The knowledge store on the other hand could be used to hold small amounts of data like contacts, the subjects of the newest emails, notifications from people and websites. All of that context data is then available for other programs to use.

Assuming such data was readily available, using ML to process at least some of the incoming data to look for important bits of information (emails/messages from friends/colleagues, news stories) could then create an additional database of "current events". How this is displayed is again a different problem. The simplest approach would be a ticker listening on a specific query, the most complex would maybe consist of a whole graphical dashboard.
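
The ticker variant could be as simple as registering a query with the store; SUBSCRIBE and the fact format are hypothetical here, just to make the idea concrete:

(subscribe '(:type :message :importance :high)  ; hypothetical query pattern
           (lambda (fact)
             ;; print one line per matching fact as it arrives
             (format t "~&ticker: ~A~%" (getf fact :subject))))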

Security is obviously a problem in such a share-all approach. It should be possible, though, to restrict access to data similarly to how user accounts in regular DBMS work; for scripting interactions the system would still have to implement restrictions based on the originating user and group on a single system, as well as on the host in a distributed environment.

Tumblelog design
posted on 2014-11-28 21:30

Yay, found time to give my tumblelog a little theme upgrade. And while I'm doing that, I thought I could write down some thoughts on the current design. I might even update it later.

In contrast to this Coleslaw blog, I've written a small static renderer to HTML/Atom for the tumblelog, so the interface is mostly console Vim on a text file of S-expressions. Since at the moment there is only a single file, I might just show the syntax of that one:

;; -*- mode: lisp; coding: utf-8-unix; -*-

(video :id 1 :date @2013-05-23T00:00:00+01:00 :tags (music) :url #U"https://www.youtube.com/watch?v=QNppCfa_oRs" :title "I Charleston Tel Aviv" :via #U"http://n0p.tumblr.com/post/51100501191/i-charleston-tel-aviv")
...
(image :id 3 :date @2014-08-16T01:01:00+01:00 :url #U"http://n0p.tumblr.com/post/94859150275" :src #P"data/tumblr_n17lclGKp51qdewwao1_1280.jpg" :alt "Scene from the manga BLAME!")
...
(text :id 22 :date @2014-08-28T21:22:58.047809+01:00 :tags (emacs lisp) :text #"`(global-set-key (kbd "H-1") ...)`  Hyper Hyper!  In other words, bind `Caps-Lock` to `Hyper` and rejoice!"#)
...
(link :id 88 :date @2014-11-20T17:27:05.107413+01:00 :url #U"http://htsql.org/" :title "HTSQL")

Clearly this would be viable to store in a DBMS as well, but this is good enough, as they say. It features PURI, a few other syntax extensions, (my fork of) CL-WHO (for XML extensions) and CL-MARKDOWN for text rendering. The Markdown rendering could use a few fixes for edge cases, but in general it works pretty well and fast. Tags are also written to separate files (both HTML and Atom), so readers could actually restrict themselves to a few subsets, even though it's not by any stretch a full-fledged query system.

Still, by rendering to static files a lot of problems go away and the whole site can be served very efficiently via a small web server (I'm leaving Hunchentoot/teepeedee2 for more involved projects).

Integrated tools
posted on 2014-11-27 23:30:22

Continuing that thought, I feel that the tools we use are very much disconnected from each other. Of course that is the basic tenet: tools which do one thing well. However, tools serve a specific purpose, which in itself can be characterised as providing a specific service. It seems strange that basically the only widely applicable way to connect very different tools is the shell, that is, calling out to other programs.

I would argue that using some other paradigms could help to make the interplay between services in this sense much easier. For example, if we were to use not only a message bus, like in Plan 9, but also a shared blackboard-like database to exchange messages in a common format, then having different daemons react to state changes would be much easier.

Of course, this would already be possible if the one requirement, a common exchange format, was satisfied. In that case, using file system notifications together with files, pipes, fifos etc. would probably be enough to build largely the same system, but based on the file system abstraction.

Imagine now that programs actually reused functionality from other services. My favourite pet peeve is the shared bookmark list, cookie store, whatever browsers typically reimplement. It should be no surprise that very focused browsers like uzbl actually do just that, allowing for the combination of different programs rather than the reimplementation of basic components.

It's unfortunate that tools like mailcap and xdg-open aren't very integrated into the shell workflow. Even though names like vim or feh are muscle memory for me, having a unified generic nomenclature for actions like open and edit would be very helpful as a first step towards a more programmable environment. Plan 9-like decision making for how to respond to these requests, or the aforementioned blackboard/messaging architecture, all go in the same direction.

If we now compare how command-line tools respond to queries, where for an abstract command like help the concrete form would naturally be foo --help (and shame on you if your tool then outputs "Use man foo."; git fortunately does it right), then we can see how to proceed from here. Not only should we adopt these kinds of queries, but reflectivity is absolutely necessary to provide a more organised system.

Even though "worse is better", it hinders the tighter linking of tools, as everything first goes to serialised form, then back through the OS layers. Despite dreaming of a more civilised age (you know, Lisp Machines), a compromise based on light-weight message-passing in a service-based approach, relying on deamons as the implementation detail to provide functionality could be viable. Essentially, what can't be done in a single global memory space we emulate.

Sounds awfully like CORBA and D-Bus, right? Well, I think an important consideration is the complexity of the protocol versus the effort spent on integration. If it is very easy to link existing applications up to the message bus, and if that system not only allows message passing but also provides style guides and an implementation of the basic interaction schemes (RPC call, two-way communication, ...), then what otherwise sounds like a lot of reimplementation might actually be more like leveraging existing infrastructure.



Unless otherwise credited all material Creative Commons License by Olof-Joachim Frahm