Recent Content

Image viewers and other systems tools
posted on 2015-06-12 16:20:23

After all the trouble I went through trying to get an image viewer to work in CL (due to the GTK+ 3 library used for it being problematic, that is, unmaintained), maybe a different approach is more viable. It would be possible to use an existing program as a front end by calling it via IPC. For example feh, my go-to program for that task, already has configurable keybindings; it should be a smaller problem to remote control it (even if that means adding some code).

However, as with all these *nix combinators, it feels like a mish-mash of intertwined tools and not quite the optimal solution.

Consider what happens if you want to add new functionality, i.e. new widgets. In that case composability breaks down, since feh is relatively minimal and therefore doesn't have many options in terms of providing different menus, input widgets, etc. You'd therefore have to either find a different viewer with more scripting capabilities (which runs counter to the "one tool" mantra), or switch to a more integrated approach and make this component an internal part of your environment.

Obviously now would be the time for either components/CORBA, or a Lisp Machine to hack up other programs.

Or switch to Qt. It seems that the bindings for that framework are more stable than the GTK bindings, and additionally Qt simply has more people working on the framework.

Since one of the problems with the GTK bindings is the relatively recent upgrade to GTK+ 3, there seems to be a point in using the previous version 2 instead, considering that even GIMP hasn't made that switch yet.

PostgreSQL insights I
posted on 2015-04-25 13:47:23+01:00

After working a lot more with PostgreSQL, I happened to stumble upon a few things, mostly related to query optimisation, that at least for me weren't readily apparent. Please note that this is all with regard to PostgreSQL 9.3, but unless otherwise noted it should still be the case for 9.4 as well.

Slow DELETEs

Obviously, indexes are key to good query performance. If you're already using EXPLAIN, with or without ANALYZE, chances are good you know how and why your queries perform the way they do. However, I encountered a problem with a DELETE query where the output from EXPLAIN was as expected, i.e. it was using the optimal plan, but performance was still abysmal: a query to delete about 300 elements in bulk, like DELETE FROM table WHERE id IN (...);, was quite fast to remove the elements (as tested from a separately running psql), but the query still took about three minutes(!) to complete.

In this scenario the table in question was about a million rows long, had a primary index on the id column and was referenced from three other tables, which also had foreign key constraints set up; no other triggers were running on any of the tables involved.

The postgres server process in question wasn't doing anything interesting: it was basically taking up one core with semop calls as reported by strace, though no other I/O was observed.

At that point I finally turned to #postgresql. Since the information I could share wasn't very helpful, there were no immediate replies, but one hint finally helped me fix this problem. It turns out that the (kind of obvious) solution was to check for missing indexes on foreign key constraints from the other tables. With two out of three indexes missing, I proceeded to CREATE INDEX CONCURRENTLY ...; and about five minutes later the DELETE was running in a few milliseconds.
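To make that concrete, here is a minimal sketch with hypothetical table names (the real tables and columns were of course application specific): the table being deleted from is parent, and child_a and child_b reference parent(id) via a parent_id column that had no index.

-- index the referencing columns of the foreign key constraints;
-- CONCURRENTLY avoids taking an exclusive lock on the child tables
CREATE INDEX CONCURRENTLY child_a_parent_id_idx ON child_a (parent_id);
CREATE INDEX CONCURRENTLY child_b_parent_id_idx ON child_b (parent_id);

Without these indexes, every deleted row in parent forces a scan of each referencing table while the foreign key constraints are checked, which is where the three minutes went.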

The part that really frustrated me here is that none of the available statistics guided me in this search; EXPLAIN ANALYZE apparently doesn't include the runtime of the foreign key constraint checks, and they don't show up in other places either. In hindsight this is something I should have checked earlier (and from now on I will), but it's also a weakness of the analysis tooling not to help the developer see the slowdown involved in a situation like this.

Common Table Expressions

Refactoring queries to reuse results via WITH queries is absolutely worth it and improved the runtime of rather large queries by a large factor. This is something that can be seen from the query plan, so when you're using the same expressions twice, start looking into this and see if it helps both readability (don't repeat yourself) and performance.
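A minimal sketch of the pattern, with hypothetical tables (in 9.3 a WITH query is evaluated only once per statement, so referencing it twice avoids recomputing the expression):

-- compute the per-customer totals once, then reuse them twice
WITH totals AS (
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT * FROM totals WHERE total > 1000
UNION ALL
SELECT * FROM totals WHERE customer_id IN (SELECT customer_id FROM flagged_customers);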

JSON result construction

We need nested JSON output in a few cases. This means (in 9.3; there are some better functions available in 9.4) that a combination of row_to_json(row(json_agg(...))) was necessary to get proper nesting of aggregated sub-objects, as well as wrapping the end result in another object, because the output had to be formatted as a JSON object (with curly brackets) instead of a JSON array (with square brackets).

Technicalities aside, the JSON support is pretty good, and since the initial code was written I've discovered that in many cases we don't actually have multiple results (for json_agg), so not using that method will again significantly improve performance.

That means instead of something like the following:

SELECT row_to_json(row(json_agg(...))) FROM ... JOIN ... GROUP BY id;

where the input to json_agg is a single row from the JOIN, we can write the following instead:

SELECT row_to_json(row(ARRAY[...])) FROM ... JOIN ...;

which, if you examine the output of EXPLAIN, avoids the sort caused by the GROUP BY clause. The convenience of json_agg here doesn't really justify the significant slowdown caused by the aggregate function.

Note that the array constructed via ARRAY[] is properly converted to JSON, so the end result is again proper JSON.
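To make the difference concrete, here is a sketch with hypothetical tables, assuming the JOIN yields exactly one child row per parent row:

-- aggregated variant: the GROUP BY showed up as a sort in the plans above
SELECT row_to_json(row(p.id, json_agg(c.name)))
FROM parent p JOIN child c ON c.parent_id = p.id
GROUP BY p.id;

-- array-constructor variant: same JSON shape, but no GROUP BY and no sort
SELECT row_to_json(row(p.id, ARRAY[c.name]))
FROM parent p JOIN child c ON c.parent_id = p.id;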

ELS 2015
posted on 2015-04-22 21:35:23+01:00

Yesterday the 8th European Lisp Symposium finished. In short it was a great experience (it was my first time there, but hopefully not my last). The variety and quality of the talks was great, and a good number of people attended both the actual talks and both(!) dinners, so there were lots of opportunities to exchange thoughts and quiz people, including on Lisp. Also, except for one talk I believe all talks happened, which is a very good ratio as well.

For the talks I still have to go through the proceedings a bit for details, but obviously the talk about Lisp/C++ interoperability with Clasp was (at least for me) long awaited and very well executed. Both the background information on its origins and the technical description of the use of LLVM and the integration of multiple other projects (ECL, SICL, Cleavir) were very interesting and informative.

There were also quite a number of Racket talks, which surprised me, but given the source of these projects it makes sense, since the GUI support is pretty good. VIGRA, despite its somewhat unfortunate name, looks pretty nice. The fact that bindings for a number of languages are available, and in the case of the Lisps make the interaction a lot easier, is good to see, so it might be a good alternative to OpenCV. It's also encouraging that students enjoy this approach and seem to be productive with the library.

P2R, the Processing implementation in Racket, is similarly interesting, as there is a huge community using Processing, and making CAD programming easier via a familiar environment is obviously nice and should give users more opportunities in that area.

If I remember correctly the final Racket talk was about constraining application behaviour; it was, I guess, more of a sketch of how application modularity and user-understandable permissions could be both implemented and enforced. I still wonder about the applicability to e.g. a Lisp or a regular *nix OS.

The more deeply technical talks about the garbage collectors (be it in SBCL or Allegro CL) were both very interesting, in that normally I (and I imagine lots of people) don't get a chance to go down to that level, so learning some of the details of those systems is appreciated.

The same goes for the first talk by Robert Strandh, Processing List Elements in Reverse Order, which was really great to hear, in the sense that I regularly appreciate the :from-end parameter of all the sequence functions yet never read up on the details of the interaction between the actual order of iteration and the final result of the function. Then again, the question remains whether any programs actually process really long lists in reverse in production. Somehow the thought that even this case is optimised would make me sleep easier, but then again, the tradeoff between maintainable code and performance improvements remains (though I don't think the presented code was particularly unreadable).

Escaping the Heap was nice as well, and it'll be great to see an open-sourced library for shared memory and off-heap data structures, even if only for special cases.

Lots of content, so I doubt I'll get to the lightning talks; it'll be just this for now. Hopefully I'll have the time and opportunity to go to the next ELS or another Lisp conference; I can only recommend going.

Extensions, extensions, extensions
posted on 2015-04-17 10:15:23

For the longest time I've been using quite a number of Firefox extensions. The known problem with that is a steady slowdown, which is only amplified by my habit of soft bookmarks, i.e. having hundreds of open tabs with their corresponding state (which is the whole reason to do that).

However, seeing that a lot of that state is captured in a very inconvenient form (it's hard to modify a long list of tabs), I want to make both this and, incidentally, the sharing of state between sessions and even browsers much easier.

The idea is to move part of the browser state into a separate component, namely a database server for cookies (and other local storage), tabs, sessions and bookmarks. This way, and by having coarse control over the loading of sessions, migrating state between sessions and browsers should become much easier.

Fortunately most of the browser extension APIs seem to be usable enough to make this work for at least Firefox and Chrome, so at the moment I'm prototyping the data exchange. Weird as it is, for Chrome you have to jump through some conversion hoops (i.e. local native extensions, a local process exchanging data via stdio), so the Firefox APIs, which allow socket connections, seem a bit friendlier to use. That said, the exchange format for Chrome, Pascal-string-encoded JSON, seems like a good idea, with the exception of forcing local endianness, which is completely out of the question for a possibly network-enabled system (which is to say, I'm definitely going to force network byte order instead).

Reifying Hidden Code
posted on 2015-04-09 13:17:46

The title sounds a bit too majestic to be true. However, the thought just occurred to me that much of what I've been doing over the last months often involved taking "dev ops" code, e.g. configuration code that lives in deployment scripts, and putting it into a reusable format, i.e. an actual (Python) module.

That usually happens because what was once a dependency of an application (or service, if you like) now needs to be accessible from (Python) code for testing purposes, or because the setup routine isn't actually just setup any more, but happens more regularly as part of the application cycle.

Of course doing this has some major downsides: the way such scripts are written, using a specific library to access remote hosts and without much error handling, is fundamentally different from how application code works, that is, usually with a more powerful database interface, without any shell scripting underlying the commands (which are instead replaced by "native" file manipulation calls) and with more proper data structures.

That leaves me with some really annoying busy work just to transform code from one way of writing it to the other. Maybe the thing to take away here is that configuration code isn't special, and application code will sooner or later become library code as well -- in other words, build everything as reusable modules. This of course rules out certain configuration (dev ops) frameworks, as they work from the outside in, e.g. by providing a wrapper application (for Fabric that would be fab) that invokes "tasks" written in a certain way.

Done properly, configuration code would be a wrapper so thin that the actual code could still be reused from application code and from different projects later on. The difference between configuration and business logic would then be more a distinction of where code is used, not of how and against which framework it was written.

X11 keybindings for easier terminal clipboard handling
posted on 2015-02-12 15:35:15+01:00

After years of annoyance with the X11 behaviour for clipboard and selection handling with regard to terminal applications, I managed to find a good compromise via some additional shortcuts in my window manager of choice, dwm, and my terminal multiplexer, tmux.

Now, pressing C-o p (in tmux) pastes from the X11 primary selection, C-o P pastes from the clipboard, and Linux-z (meaning the key formerly known as the Windows key) exchanges clipboard and primary selection, so no more awkward pasting and selecting with the mouse in order to get the correct string into the correct location.

# to switch clipboard and primary selection
PRIMARY=`xsel -op`; xsel -ob | xsel -ip; echo "$PRIMARY" | xsel -ib

# to paste from primary selection in tmux
bind p run "tmux set-buffer \"$(xsel -op)\"; tmux paste-buffer"

Other useful keybindings in tmux would include copying into the clipboard etc.; there is a useful SO post explaining that.

As before this is public at the customisations branch on Github; I still have to upload the tmux part though.

A mocking library for Common Lisp
posted on 2015-01-06 14:52:47+01:00

After some time spent thinking about and rewriting the library with subtly different approaches, CL-MOCK now looks good to me as a version one.

I've removed all mention of generic functions for now, as first of all I'm unsure whether functionality to dynamically rebind methods is even necessary, and second, doing that is complicated by the details of that protocol, which means that specifying which method to override is a bit hairy, and I really want a good syntax before I let that stuff loose. So it'll have to wait until I figure out a good way to do it. Since it should be easy to add to the existing frontend, it will very probably be done with some overloading of existing functions / macros (e.g. with a :method specifier or so).

I'm hoping to test all of this and possibly investigate the generic function issue with some other library. At the moment my single more complex example is a replacement for the DRAKMA HTTP-REQUEST call, which worked surprisingly well and might even make it into a new test suite. The benefit is obviously the improved reliability of not having to have a running network connection when testing libraries against an (HTTP) server.

Tiling WMs and multiplexing
posted on 2014-12-18 15:04:53

Since it came up on Hacker News, I thought I could write a little bit about that topic as well.

I started looking for alternatives to the distribution's default desktop environment relatively soon after arriving on Linux (Fedora, if I remember correctly). At that point the options included Fluxbox and a couple of smaller ones like i3 and wmii. I also tried twm, but honestly, without any effort spent on theming it was basically not viable.

So after Fluxbox, which was great but still leaves you with too much to do with your mouse, my conclusion was that I basically don't need a regular desktop. Having all those messy icons, menus and widgets lying around the screen is just way too distracting for me.

If you then remove all that decoration, you are left with a very bare bones look. Still, after starting to get the hang of Vim (with which I started) and later Emacs, the disadvantage of constantly having to deal with window positions became apparent.

I think the next step was to use wmii or one of its variants. Tiling leaves your mouse free to interact with the main point: your running program. No more juggling windows, aligning borders and so on. For me this isn't about a pretty and flashy screen, it's about the most comfortable environment to work in.

To the present day: I'm now converted to dwm from the awesome people at suckless.org. It's basically a single C file; you configure it with a header and additionally with a custom patch set, and that's it. You'd be hard pressed to find a smaller, less resource-intensive window manager. And on the flip side it has many amazing features which just work really well together.

Combined with tmux for terminal multiplexing, Emacs buffers for editing multiplexing and dwm for desktop and screen multiplexing this is just the right amount of flexibility to arrange and move around a lot of context.

Obviously this depends on each person, but since you can (and frankly, should) configure every aspect of this, with just a few keypresses you can switch to every part of your running programs and back, be it in the terminal, on a remote system, or graphical.

To be honest, until there is a better alternative to keyboards, I think I'll keep using this approach, maybe adding more scripting capabilities in the same line as in previous blog posts.

Server interface
posted on 2014-12-02 13:40:42

What different components would a daemon expose via the message bus, or respectively the knowledge store?

For one, methods for remote control. Since RPCs aren't very flexible, the communication style should use message passing. Waiting for a response needs to be aware of both possible timeouts and communication failures.

Passed objects can't be too complex, since every interacting language/environment needs to be able to access, create and possibly manipulate them, and also to minimise the amount of overhead during serialisation. At the same time the schema of messages isn't fixed, so there is an obvious problem for very strict type systems: either a fixed-schema wrapper can be used, which would need to be updated at some point or kept backwards-compatible, or a very dynamic representation with runtime checks would have to be implemented.

Comparable string-based messages from e.g. Plan 9 are too simple on the one hand, whereas a protocol like DBus might be overengineered(?).

An important point to consider is the in(tro)spection features of the global system. It should be absolutely possible to easily see and edit messages as well as stored data in either a text-based or convertible format. Also, commands and objects should have runtime documentation in the system, so that a hypothetical call like describe object would display text-based documentation (in the terminal) or hypertext-based documentation (in the browser).

Naming clashes have to be solved by a one- or multi-level package system. Since the naming is both shared and global, typical approaches include reversed domain names, UUIDs and prefixing of identifiers.

Session management would include saving and reloading of state, even with different session names. A separate session manager would take care of initialising applications based on the saved state.

For both text and graphical UIs, methods to switch to the active screen/window need to be provided. Copy & paste functionality might still be done via the window system, or additionally via messaging, which would allow connected systems, as well as text and graphical applications, to exchange content without any problems.

IPC and knowledge store programs
posted on 2014-11-30 15:32:23

What kind of applications aren't easily possible without a central message bus / knowledge store? Or, rephrased: what kind of applications would benefit greatly from the use of a central message bus / knowledge store?

To answer this, let's see how IPC usually works between applications: If a program wants to communicate with other parts of the system, there are several options, including writing data to a regular file, pipe/fifo, or socket, as well as directly executing another program. In some cases the communication style is uni-directional, in some bi-directional.

So depending on the setup you can define pretty much exactly how messages are routed between components. However, the details of this communication are hidden from the outside. If you wanted to react to one message from another part of the system, you're out of luck. This direct coupling between components doesn't lend itself very well to interception and rerouting.

Unless the program of your choice is very scriptable, you have no good way to e.g. run a different editor for a file to be opened. Since programs typically don't advertise their internal capabilities for outside use (like CocoaScript(?) allows you to, to a degree), you also don't have a chance to react to internal events of a program.

Proposed changes to browsers would include decoupling of bookmark handling, cookies/session state, notifications and password management. Additionally it would be useful to expose all of the scripting interface to allow for external control of tabs and windows, as well as possibly hooking into website updates, but I think that part is just a side effect of doing the rest.

Proposed changes to IRC clients / instant messengers would include decoupling of password management and notifications. Additionally the same argument to expose channels/contacts/servers to external applications applies.

Now let's take a look at the knowledge store. In the past I've used a similar blackboard system to store sensor data and aggregate knowledge from reasoners. The idea behind it is the decoupling of the different parts of the program from the data they work on, reacting to input data if necessary and outputting results for other programs to work on.

I imagine that this kind of system relieves programs from creating their own formats for storing data, as well as from the need to explicitly specify where to get data from. Compared to an RDBMS the downside is obviously the lack of a hard schema, so the same problems as with document-based data stores apply here. Additionally, the requirement to have triggers in order to satisfy the subscriptions of clients makes the overall model more complex and harder to optimise.
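Purely as an illustration of that trigger/subscription requirement (a sketch only, assuming the store were backed by PostgreSQL with a hypothetical facts table; nothing here is tied to a specific database):

-- hypothetical store table plus a trigger notifying subscribed clients;
-- clients LISTEN on the topic channel and fetch the new row on notification
CREATE TABLE facts (
    id serial PRIMARY KEY,
    topic text NOT NULL,
    payload json NOT NULL
);

CREATE FUNCTION notify_fact() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify(NEW.topic, NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER facts_notify AFTER INSERT ON facts
    FOR EACH ROW EXECUTE PROCEDURE notify_fact();

Every new piece of data then fans out to whichever clients subscribed to its topic, which is exactly the part that makes the model harder to optimise than plain queries.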

What is then possible with such a system? Imagine having a single command to switch to a specific buffer, regardless of how many programs are open and whether they use an MDI or just a single window. In general, scripting of all running programs will be easier.

The knowledge store on the other hand could be used to hold small amounts of data like contacts, the subjects of the newest emails, notifications from people and websites. All of that context data is then available for other programs to use.

Assuming such data was readily available, using ML to process at least some of the incoming data to look for important bits of information (emails/messages from friends/colleagues, news stories) could then create an additional database of "current events". How this is displayed is again a different problem: the simplest approach would be a ticker listening on a specific query, the most complex would maybe consist of a whole graphical dashboard.

Security is obviously a problem in such a share-all approach. It should be possible, though, to restrict access to data similarly to how user accounts in regular DBMSs work, and for scripting interactions the system would still have to implement restrictions based on the originating user and group on a single system, as well as on the host in a distributed environment.





Unless otherwise credited all material Creative Commons License by Olof-Joachim Frahm