Weekly Head Voices #132: Potato deadline.

Fragment of potato skin, taken with phone camera through GOU#2’s microscope at 100x.

We have a serious deadline coming up on Tuesday, so I’m going to make these few WHV minutes count.


  • Day Zero has again been postponed, this time to June 4. We continue with our water-saving efforts.
  • That unexpected side-project I mentioned in last week’s post did end up going live that very night. Armed with Django REST framework and plenty of battle scars, it took about 17 hours from idea to fully deployed REST API, a large part of which was spent debugging the paper’s math and spreadsheets. (There’s a minimal DRF sketch just after this list.)
    • Django might be a slow runner relative to some of the other kids on the block (Go with any of its web frameworks, nginx with OpenResty (Lua right in your web server!), even apistar with uvicorn), but the completeness and maturity of Django and its ginormous ecosystem are hard to beat when it comes to development velocity.
  • There’s a whole blog on the nature of note-taking. I arrived there via interleave and org-noter, both Emacs packages for keeping text (orgmode) notes synchronised with PDFs, both found via Irreal, a great Emacs blog.
  • In the extra lessons I have with GOU#1, we studied electrical current from basic (atomic) principles. As I was getting all excited about the outer electrons being passed on from copper atom to copper atom (Khan Academy and I tag-team these lessons), GOU#1 had to laugh at the goose flesh on my arms.
    • The Khan Academy lecture seemed to imply that Benjamin Franklin started us down the not-quite-correct path of conventional current (flowing from positive to negative), whereas the electrons being passed along imply current flowing from negative to positive, a.k.a. electron current. However, this physics StackOverflow answer explains more completely that current is defined as the flow of electric charge, of which electron flow is one example, and hence both directions are correct.
  • To be honest, I became ever so slightly irritated with an episode of one of my favourite podcasts, CppCast, as the guest said “like” so often that I had trouble following what he was actually like trying to say. This like led me to using Google’s machine-learning-based speech-to-text API one night to like transcribe the audio of the podcast to text so that I could like count the number of like utterances. There were not as many as I thought, but still a whole lot. If you’re curious as to the stats, I wrote everything up in this nerdy vxlabs blog post. (A sketch of the transcribe-and-count approach follows this list.)
    • On the topic of note-taking: Because I make lab notes of everything in my Emacs, including late-night speech recognition experiments, publishing a blog post is a question of some copy-pasting, and then telling Emacs to publish to the blog.
  • On Thursday, some dudes came to my house and, after somehow switching seamlessly from pick-axe to optic fibre splicer and back several times, left me with this (and more):
Two fibre strands into my house. They tell me one is for backup.
  • These are strange Gibson-esque times when there’s now permanently a laser transmitting all of these packets to you via the network of glass strands encircling the Earth, whilst many of us are still struggling to grasp the difference between fact and fiction.
    • “The future is already here — it’s just not very evenly distributed”, William Gibson, probably ’93.
  • We have a new president: President Cyril Ramaphosa! He was Mandela’s choice to become president of this country, but it was Thabo’s turn, and then things went pear-shaped with Zuma. Years later, the situation is quite dire, but so far there are many indications that Ramaphosa has the makings of a great leader (I have become convinced that we humans, all of us, need great leaders to advance as humanity; I hope to write a post about that some day). After Friday’s State of the Nation (SONA) address by President Ramaphosa, I, along with many fellow South Africans, am hopeful for our future.
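
As promised above, here is a minimal sketch of the kind of Django REST framework setup that makes a 17-hour idea-to-API sprint possible. The app, model and field names are hypothetical stand-ins, not taken from the actual project:

```python
# Hedged sketch of a Django REST framework API; the "measurements" app
# and its model are made-up illustrations.
from rest_framework import routers, serializers, viewsets

from measurements.models import Measurement  # hypothetical Django model


class MeasurementSerializer(serializers.ModelSerializer):
    class Meta:
        model = Measurement
        fields = ("id", "label", "value", "created")


class MeasurementViewSet(viewsets.ModelViewSet):
    # ModelViewSet provides list/retrieve/create/update/destroy for free.
    queryset = Measurement.objects.all()
    serializer_class = MeasurementSerializer


# In urls.py: the router generates the list and detail routes automatically.
router = routers.DefaultRouter()
router.register(r"measurements", MeasurementViewSet)
urlpatterns = router.urls
```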
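And the like-counting experiment boils down to something like the following sketch, assuming a current google-cloud-speech client; the bucket and file names are invented, and the vxlabs post has the real details:

```python
# Hedged sketch: transcribe a podcast episode with Google's Speech-to-Text
# API and count the "like" utterances. Bucket and file names are made up.
import re

from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/cppcast-episode.wav")

# Podcast-length audio requires the asynchronous long-running API.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=3600)

transcript = " ".join(r.alternatives[0].transcript for r in response.results)
likes = len(re.findall(r"\blike\b", transcript, flags=re.IGNORECASE))
print(f"'like' was uttered {likes} times")
```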

Ok peeps, have a wonderful week! I’ll see you NEXT TIME!

Weekly Head Voices #130: TTAGGG.

Lovely summer’s day. Not much rain.


On the water front (I see what I did there): Day Zero, that is the day on which the whole of Cape Town’s municipal water will be cut off, has been brought further forward to April 12. Citizens will be able to fetch drinking water every day from 200 collection points. Judging by how quickly shelves of bottled water are currently disappearing from the shops and by panicky Facebook posts, people are stocking up in advance.

The immortality of lobsters

Continuing with our watery theme, this past week I learned the very surprising fact that lobsters are sort of biologically immortal. In short, lobsters produce more of the enzyme telomerase than humans and other animals do; the enzyme rejuvenates their telomeres, which means that their cells can in theory keep on dividing forever.

The telomeres are the genetic bits (feeling quite punny today; nucleotides TTAGGG in vertebrates, apparently) protecting the ends of your chromosomes. Every time a cell divides, the daughter cells have slightly shortened telomeres. At some point, the telomere becomes too short, and that line of cells can’t divide anymore.

This is a large part of how most animals finally die: Our cells can only divide so many times, and then the telomere ends, and then someone switches on the bright lights, and then the whole party is over.

However, the enzyme telomerase is able to repair telomeres, thus extending the lifetime of the organism.

Lobsters naturally produce so much telomerase that their cells can keep on dividing forever. In practice, lobsters apparently only grow in size, strength and reproductive ability as they age.

Unfortunately, their party also eventually ends. As they grow, they have to molt their suddenly too-small exoskeletons. As they get bigger, this process takes more and more energy, until the day comes that they have grown so large (12 kg in one instance) that the attempted molting, or the disease that follows when they can no longer manage it, proves fatal.

Intriguingly, a 2013 study showed that lifestyle changes such as diet, exercise, stress reduction (including meditation) and social support boosted telomerase activity and significantly increased telomere length in human subjects.

Your brain (not) at work

On the recommendation of a colleague who is most versed in these things, I am currently reading the book “Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long” by David Rock.

While the author clearly has not yet read any books on Coming Up With Shorter Book Titles, he has put together a compelling piece on the extreme limitations of the human prefrontal cortex. These are the bits that we use for important thoughts and for solving tricky technical puzzles.

I thought that I just naturally had the attention span of a budgie (which I continuously try my best to compensate for with the gnashing of teeth, willpower, and various other tricks), but it turns out it’s a basic human limitation.

A pretty budgie which will probably distract you from the contents of this post. FOCUS!

If all of the neuroscientists he has interviewed can be believed, we are severely limited both in terms of the number of thoughts / ideas we can handle at any one time, and, to me far more frighteningly, in terms of the total time we have available for this sort of complex work.

The prefrontal cortex is, relatively speaking, quite inefficient, and gets exhausted really quickly. Remember the last time you spent the evening trying to figure out how to get all of your children to their various activities during the week, and how unexpectedly difficult that was? (If no children, please replace this example with something more familiar to you. :) Your prefrontal cortex was probably already exhausted by 15:00 (if not earlier), and you were in effect beating a dead neural horse.

Sometimes you wake up the next morning, and you solve that exact same puzzle in 3 minutes, at which point you might have already exhausted your cognition quota, and might as well stay at home for the rest of the day.

Because the capacity and bandwidth of the prefrontal cortex can’t (yet?) be significantly improved, the book recommends carefully monitoring oneself, taking breaks when necessary, single-tasking, and practising any often-occurring tasks until they become automatic, at which point the much more efficient basal ganglia take over.

Apart from this, the prefrontal cortex works at its best when you are slightly stressed, but not too much, and when you are slightly happy (with novelty and dopamine), but not too much. Too stressed, and it freezes up like a deer in the headlights of a rapidly approaching car. Too happy, and it just hangs around enjoying the vibes, not really producing anything.

I still have to finish the book, but it has already motivated me to continue on my quest to automate and script as much of my life and work as possible. For example, for the daily goals list mentioned in pro-tip #1 of WHV #126 I have a keyboard shortcut in Emacs which creates the relevant section in the correct part of my journal, correctly timestamped, and pre-filled with one or two habits I am trying to form, ready to accept the rest of the goals for the day. I used to think examples like this were perhaps going a little too far, but I now keep my eyes open for any task or activity that can be partially or fully automated. (Some even refer to Emacs Orgmode as their exocortex.)

On the topic of lists, the book mentions prioritisation as one of the more cognitively taxing activities we can engage in, so it makes even more sense to take care of it first thing in the morning, and to do this as efficiently as possible.

More broadly speaking, I think having instant access to documented and executable conventions for most of one’s tasks and projects would help greatly to free up the precious little prefrontal quality time we are allotted.

Even more broadly speaking, it seems we need to practise listening more carefully to our brain, so that we are able to guide it through the treacherous waters of exhaustion, stress and happiness.

The part where I wish you a good journey

Thank you very much for reading this post. I hope you have a week filled with learning, challenges surmounted and a solid dose of contentment.

See you next time!

Weekly Head Voices #123: A semblance of a cadence.

Yes, we ended up in the mountains again.

In the period from Monday June 12 to Sunday June 25 we were mostly trying to get through the winter, fighting off a virus or three (the kind that invades biological organisms, you nerd) and generally nerding out.

One more of my org2blog pull requests was merged in: you can now configure the thumbnail sizes your blog will automatically show of your uploaded images. Getting my own itch-scratches merged into open source projects never fails to make me happy, even though in this case there can’t be more than 5 other people who will ever use this particular functionality.



For a work project I was encouraged to explore Microsoft’s brand new ASP.NET Core. While on the one hand I remain wary of Microsoft (IE6 anyone?), I am an absolute sucker for new technology on the other.

You may colour me impressed.

If I had to describe it in one sentence, I would call ASP.NET Core Django done in C#. You can develop and deploy it on Windows, Mac or Linux. You model and query your data using Entity Framework Core and LINQ for example, or Dapper if you prefer performance and don’t mind the SQL (I don’t), or both. You write controller classes and view templates using the Razor templating language.

C# 7.0 looks like it could be a high-velocity development language. It has modern features such as lambdas with what look like real closures (unlike C++ variable capturing), as well as the null coalescing operator (??) and the null conditional operator (?.), the latter of which looks superbly useful. Between Visual Studio on Windows and the Mac, the new IntelliJ Rider IDE (all platforms) and Visual Studio Code (all platforms), the tooling is top notch.

Time will have to tell how it compares to Python with respect to development velocity, a competition in which Python traditionally fares extremely well.

Where ASP.NET Core wins hands down is in the memory usage department: by default you deploy using the Kestrel web server, which runs your C# code on multiple libuv (yeah, of lightning-fast Node.js event loop fame) event loops, all in threads.

With Django I usually deploy as many processes as I can behind uwsgi, itself behind nginx. The problem is that with Python’s garbage collector, these processes end up sharing very little memory, and so one has to take into account memory limits as well as CPU count on servers when considering concurrency.
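
For reference, such a deployment has roughly the following shape. This is a hedged sketch, and the module name, socket path and process count are illustrative:

```ini
# uwsgi.ini -- hedged sketch of a typical Django-behind-nginx deployment;
# module name, socket path and process count are made up.
[uwsgi]
# WSGI entry point of a hypothetical Django project
module = myproject.wsgi:application
master = true
# Each worker is a full interpreter with its own copy of the app, so
# memory, not CPU count, often ends up being the limiting factor.
processes = 8
# nginx talks to uwsgi over this unix socket
socket = /run/myproject.sock
chmod-socket = 660
# clean up the socket on exit
vacuum = true
```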

The long and the short of this is that one will probably be able to process many more requests in parallel with ASP.NET Core than with Django. With Django I have experimented with gevent and monkey-patching in uwsgi, but this does not work as well as it does in ASP.NET Core, which has been designed with this concurrency model in mind from the get-go. My first memory usage and performance experiments have shown compelling results.

Hopefully more later!

A cadence of accountability

Lately my Deep Work habits have taken a bit of a hit. At first I could not understand how to address this, until I remembered mention of a cadence of accountability in The Book.

Taking a quick look at that post, I understood what I had forgotten to integrate with my habits. Besides just doing the deep work, it’s important to “keep a compelling scoreboard” and to “create a cadence of accountability”.

Although I was tracking my deep work time using the orgmode clocking commands (when I start “deep working” on anything, I make an orgmode heading for it in my journal and clock in; when I’m done I clock out; orgmode remembers all durations) I was not regularly reviewing my performance.

With orgmode’s org-clock-report command (C-c C-x C-r), I can easily create or update a little table, embedded in my monthly journal orgfile, with all of my deep work clocked time tallied by day. This “compelling scoreboard” gives me instant insight into my weekly and monthly performance, and gives me either a mental kick in the behind or pat on the shoulder, depending on how many deep work hours I’ve been able to squeeze in that day and the days before it.
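
For the curious, the clock report is just a dynamic block in the orgfile, and it looks roughly like this (headings and times invented for illustration):

```
#+BEGIN: clocktable :scope file :maxlevel 2 :block thismonth
#+CAPTION: Clock summary at [2018-02-17 Sat]
| Headline             | Time    |
|----------------------+---------|
| *Total time*         | *12:30* |
| Deep work: project X | 8:00    |
| Deep work: writing   | 4:30    |
#+END:
```

Re-running org-clock-report (or pressing C-c C-c on the #+BEGIN line) regenerates the table from the clocked entries, so updating the scoreboard costs essentially nothing.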

The moment I started doing this at regular intervals, “creating a cadence of accountability” in other words, I was able to swat distractions out of the way and get my zone back.

There is an interesting similarity with GTD here (which I don’t do so much anymore, because focus is far more important to me than taking care of sometimes arbitrary and fragmentary tasks): GTD also has the regular review as a core principle.

That we humans are so dependent on habits to make real progress in life leads me to the conclusion that this is a clever trick for acquiring behaviour that is not habitual: work on an auxiliary behaviour that is habitual, e.g. the regular review, and let it encourage / reinforce the behaviour that is perhaps not, e.g. taking care of randomly scheduled heterogeneous tasks (GTD) or fitting in randomly scheduled focus periods (Deep Work of the journalistic variant).

As an aside, cadence in this context is just a really elegant synonym for habit. I suggest we use it more, especially at cocktail parties.


Weekly Head Voices #118: Accelerando.

Too much nerdery took place from Monday February 20 to Sunday March 5. Fortunately, by the end of that period, we found ourselves here:

The view from the shark lookout all the way to Hangklip.

BibTeX references in orgmode

For a technical report, I thought it would be handy to go from Emacs orgmode (where all my lab notes live in any case) to PDF via LaTeX.

This transformation is more or less built-in, but the whole machinery does not work out of the box with citations from a local BibTeX export of my main Zotero database.

I wrote a post on my other even-more-nerdy blog showing the extra steps needed to turn this into an easy-peasy 38-shortcut-key-combo affair.

Google GCE K80 GPUs available, cheap(ish)!

I’ve been using a cloud-hosted NVIDIA Tesla GPU from Nimbix for my small-scale deep learning experiments with TensorFlow. This has also helped me to resist the temptation of buying an expensive new GPU for my workstation.

However, Google Compute Engine has finally shipped (in beta) their cloud-based GPU product. Using their pricing calculator, it turns out I can get a virtual machine with 8 CPU cores, 30 GB of RAM, 375 GB of local SSD and a whole NVIDIA Tesla K80 GPU (12 GB of memory) in their EU data centre for a paltry $1.32 / hour.

This is significantly less than half of what I paid Nimbix!

(That resistance is going to crumble, the question is just when. Having your stuff run locally and interactively for small experiments still beats the 150ms latency from this here tip of the African continent to the EU.)

nvpy leaves the nest :`(

My most successful open source project to date is probably nvpy, the cross-platform (Linux, macOS, Windows) Simplenote client. 600+ stars on GitHub is not A-list, but it’s definitely also nothing to sneeze at.

nvpy stats right before the hand-over

Anyways, I wrote nvpy in 2012 when I was still a heavy Simplenote user and there was no good client for Linux.

In the meantime, Emacs had started taking over my note-taking life and so in October of 2014, I made the decision to start looking for a new maintainer for my open-source baby nvpy.

That attempt was not successful.

By the end of 2015 / early 2016 I had a bit of a Simplenote / nvpy revival, as I was using the official client on my phone, and hence nvpy on the desktop.

Emacs put a stop to that revival too, by magically becoming available on my phone as well. I have to add that the Android Simplenote client also seems to have become quite sluggish.

I really was not using nvpy anymore, but I had to make plans for the users who did.

On Saturday March 4, I approached GitHub user yuuki0xff, who had prepared a pretty impressive background-syncing PR for nvpy, about the possibility of becoming the new owner and maintainer of nvpy.

To my pleasant surprise, he was happy to do so!

It is a strange new world that we live in where you create a useful artifact from scratch, make it available for free to anyone who would like to use it, and continue working on improving that artifact for a few years, only to hand the whole thing over to someone else for caretaking.

The handing-over brought with it mixed feelings, but overall I am super happy that my little creation is now in capable and more active hands.

Navel Gaze

Fortunately, there’s a handy Twitter account reminding us regularly how much of 2017 we have already put behind us (thanks G-J van Rooyen for the tip):

That slowly advancing progress bar seems to be very effective at getting me to take stock of the year so far.

Am I spending time on the right things? Am I spending just the right amount of effort on prioritising without this cogitation eating into the very resource it’s supposed to be optimising? Are my hobbies optimal?

I think the answer is: One deliberate step after the other is best.

Weekly Head Voices #117: Dissimilar.

The week of Monday February 13 to Sunday February 19, 2017 might have appeared to be really pretty boring to any inter-dimensional and also more mundane onlookers.

(I mention both groups, because I’m almost sure I would have detected the second group watching, whereas the first group, being interdimensional, would probably have been able to escape detection. As far as I know, nobody watched.)

I just went through my orgmode journals. They are filled with a mix of notes on the following mostly very nerdy and quite boring topics.

Warning: If you’re not an Emacs, Python or machine learning nerd, there is a high probability that you will not enjoy this post. Please feel free to skip to the pretty mountain at the end!

Advanced Emacs configuration

I finally migrated my whole configuration away from Emacs Prelude.

Prelude is a fantastic Emacs “distribution” (it’s a simple git clone away!) that truly upgrades one’s Emacs experience in terms of look and feel, and functionality. It played a central role in my return to the Emacs fold after a decade-long hiatus spent with JED, VIM (there was more really weird stuff going on during that time…) and Sublime.

However, constructing one’s own Emacs configuration from scratch is a sort of rite of passage, and my time had come.

In parallel with Day Job, I extricated Prelude from my configuration, and filled up the gaps it left with my own constructs. There is something quite addictive about using Emacs Lisp to weave together whatever you need in your computing environment.

To celebrate, I decided that it was also time to move my todo system away from todoist (a really great ecosystem) and into Emacs orgmode.

From this… (beautiful multi-platform graphical app)

I had sort of settled with todoist for the past few years. However, my yearly subscription is about to end on March 5, and I’ve realised that, with the above-mentioned Emacs Lisp weaving and orgmode, there is almost unlimited flexibility in managing my todo list as well.

Anyways, I have it set up so that tasks are extracted right from their context in various orgfiles, including my current monthly journal, and shown in a special view. I can add arbitrary metadata, such as attachments and just plain text, and more esoteric tidbits such as live queries into my email database.

The advantage of having the bulk of the tasks in my monthly journal is that I am forced to review all of the remaining tasks at the end of the month, before transferring them to the new month’s journal.

We’ll see how this goes!

Jupyter Notebook usage

Due to an interesting machine learning project at work, I had a great excuse to spend some quality time with the Jupyter Notebook (formerly known as IPython Notebook) and the scipy family of packages.

Because Far Too Much Muscle-Memory, I tried interfacing to my notebook server using Emacs IPython Notebook (EIN), which looked like this:

However, the initial exhilaration quickly fizzled out as EIN exhibited some flakiness (primarily broken indentation in cells, which made them hard to interact with), and I had no time to try to fix or work around it, because day job deadlines. (When I have a little more time, I will have to get back to the EIN! Apparently they were planning to call this new fork Zwei. Now that would have been awesome.)

So it was back to the Jupyter Notebook. This time I made an effort to learn all of the new hotkeys. (Things have gone modal since I last used this intensively.)

The Notebook is an awe-inspiringly useful tool.

However, the cell-based execution model definitely has its drawbacks. I often wish to re-execute a single line or a few lines after changing something. With the notebook, I have to split the cell at the very least once to do this, resulting in multiple cells that I now have to manage.

In certain other languages, which I cannot mention anymore because I have utterly exhausted my monthly quota, you can easily re-execute any sub-expression interactively, which makes for a more effective interactive coding experience.

The notebook is a good and practical way to document one’s analytical path. However, I sometimes wonder if there are less linear (graph-oriented?) ways of representing the often branching routes one follows during an analysis session.

Dissimilarity representation

Some years ago, I attended a fantastic talk by Prof. Robert P.W. Duin about the history and future of pattern recognition.

In this talk, he introduced the idea of dissimilarity representation.

In much of pattern recognition, it was pretty much the norm that you had to reduce your training samples (and later unseen samples) to feature vectors. The core idea of building a classifier is constructing hypersurfaces that divide the high-dimensional feature space into classes. An unseen sample can then be positioned in feature space, and its class simply determined by checking on which side of the hypersurface(s) it finds itself.
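
To make that concrete, here is a tiny illustration (scikit-learn is not mentioned in the post, but it is a close friend of SciPy); the data is obviously invented:

```python
# Minimal sketch of the feature-vector paradigm: a linear SVM learns a
# hyperplane that separates the classes in feature space.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])  # feature vectors
y = np.array([0, 0, 1, 1])                                      # class labels

clf = SVC(kernel="linear").fit(X, y)
# Unseen samples are classified by which side of the hyperplane they land on.
print(clf.predict(np.array([[0.15, 0.15], [0.85, 0.85]])))  # -> [0 1]
```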

However, for many types of (heterogeneous) data, determining these feature vectors can be prohibitively difficult.

With the dissimilarity representation, one only has to define a suitable function for calculating the dissimilarity between any two samples in the population. Especially for heterogeneous data, or data such as geometric shapes, this is a much more tractable exercise.

More importantly, it’s often easier to talk with domain experts about similarity than it is to talk about feature spaces.

Due to the machine learning project mentioned above, I had to work with categorical data that will probably later also prove to be of heterogeneous modality. This was of course the best (THE BEST) excuse to get out the old dissimilarity toolbox (in my case, that’s SciPy and friends), and to read a bunch of dissimilarity papers that were still on my list.

Besides the fact that much fun was had by all (me), I am cautiously optimistic, based on first experiments, that this approach might be a good one. I was especially impressed by how much I could put together in a relatively short time with the SciPy ecosystem.
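
As an illustration of how little code this needs, here is a hedged sketch with SciPy; the integer-encoded categorical data and the choice of the Hamming dissimilarity are mine, not the actual project’s:

```python
# Dissimilarity matrix over categorical samples with SciPy; the data and
# the Hamming measure are illustrative stand-ins.
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Four samples, three integer-encoded categorical attributes.
X = np.array([
    [0, 2, 1],
    [0, 2, 0],
    [1, 0, 1],
    [1, 0, 0],
])
labels = np.array([0, 0, 1, 1])

# Hamming dissimilarity: fraction of attributes on which two samples differ.
D = squareform(pdist(X, metric="hamming"))

# One-nearest-neighbour prediction for sample 3, using only D (no feature
# space needed), as a toy stand-in for a dissimilarity-based classifier.
query = 3
others = [i for i in range(len(X)) if i != query]
nearest = min(others, key=lambda i: D[query, i])
print("predicted label:", labels[nearest])  # -> 1, its nearest neighbour's label
```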

Machine learning peeps in the audience, what is your experience with the dissimilarity representation?

A mountain at the end

By the end of a week filled with nerdery, it was high time to walk up a mountain(ish), and so I did, in the sun and the wind, up a piece of the Kogelberg in Betty’s Bay.

At the top, I made you this panorama of the view:

Click for the 7738 x 2067 full resolution panorama!

At that point, the wind was doing its best to blow me off the mountain, which served as a visceral reminder of my mortality, and thus also kept the big M (for mindfulness) dial turned up to 11.

I was really only planning to go up and down in brisk hike mode due to a whiny knee, but I could not help turning parts of the up and the largest part of the down into an exhilarating lope.

When I grow up, I’m going to be a trail runner.

Have fun (nerdy) kids, I hope to see you soon!