Weekly Head Voices #119: Snowcrash.

This edition of the not-quite-Weekly Head Voices covers the period of time from Monday March 6 to Sunday March 26, 2017.

As is becoming sort of a tradition around these parts, I get to show you a photo or two of our over-nerding antidote trips into the wild.

This is the path to the Koppie Alleen beach in the De Hoop Nature Reserve. Paths and photos of paths make me all pensive:

The following impression is of the De Hoopvlei. The short hiking trail along the vlei turned my morning run into an epic one.

The weekend before the De Hoop one, I spent at least half an hour sitting on the sofa just thinking.

Two things are noteworthy about this event.

  1. I had half an hour of pure, uncut, interruption-free idle time.
  2. During this time, I resisted the urge to flip out or open any information consumption device, instead electing to have my thoughts explore my internal landscape, like people used to do in the old days.

I was trying to come up with better ways to keep track of this landscape. The Deep Work phase I’m going through currently applies to work time (doh), but somehow not to free time, where I’m very much prone to latch on securely to the internet firehose for a mind-blasting gulp-athon.

Coincidentally, and perhaps even slightly ironically, the firehose deposited this insightful Scientific American article titled Warning: Your New Digital World Is Highly Addictive.

Psychology and marketing professor Adam Alter argues that the companies behind the apps and the media that we consume are naturally applying various advanced tools to ensure that we remain engaged for as long as possible.

In short, you spend so much time on Facebook (I don’t anymore haha) because Facebook employs really clever people who build systems that run continuous experiments on your viewing behaviour and figure out what to show you so that your eyeballs remain glued to that little display.

This makes absolute sense of course.

If your company’s lifeblood was advertising revenue, and/or user engagement was important to your company’s bottom line for some or other reason, of course you would analyse and A/B test the living daylights out of your users in order to keep them in your app / media stream for that much longer.

The following day, the firehose of faux-wisdom gave me this: Tim Berners-Lee: I invented the web. Here are three things we need to change to save it.

Sir Tim Berners-Lee argues that we’ve lost control of our data (“your” data belongs to Facebook, Instagram, Twitter, <insert your thing here>; you should blog more, it’s better) and, more importantly, that we’re being manipulated by less-than-benevolent actors (pronounce with melodramatic accent on the “tors”) who, again based on advanced analytics tools, manufacture and modify news in order to sway public opinion. (Read my other recent post Fake News is Just the Beginning if your tin foil hat is just getting warmed up.)

In 2011, 2012 and 2014, I wrote briefly about our innate weakness for anything new. Social media is almost the perfect poison in that respect. You keep on scrolling down, because what if, what if there’s something new that’s going to answer that question you have not even formulated yet.

Putting all of this together: The internet is a beautiful thing. However, we have evolved parts of it to target an insidious psychological weakness in ourselves. Furthermore, there is a massive commercial incentive to keep on tweaking the distractions so that they become even more addictive, negating many of the information-related advantages they might have offered.

As if that was not sufficient, there is political and commercial incentive to develop techniques that essentially subvert our cognition, thereby fairly effectively misleading us to make decisions that satisfy somebody else’s agenda.

All is not lost.

Making time to think is great. Set aside as much as you can. Stare into space. Resist the urge to check the little magic window.

Personally, I like to form habits as much as possible.

Deep Work has become a habit for me. I am going to bring in old-school glassy-eyed staring-into-space thinking as a habit also. Complementary to that, and as an answer to my search for ways of keeping track of my mental landscape, I have resolved to write / draw as much of that mental landscape as I can, at regular intervals.

More generally, I wonder if, and how, we as humankind are going to address the issue of news and opinion manipulation.

Do we have to resign ourselves to this future? Is it going to become a question of which side employs the most expensive analytics and data science, and can hence out-manipulate its opponents? Or are we humans somehow going to develop systemic immunity?

In any case, I wish you all great success and happiness staring into space!

P.S. I nominated Alexandra Elbakyan for the brand-new MIT Media Lab Disobedience Prize. If you have some time, and you feel that publicly funded research results should be available to the public, please add your nomination.


Fake news is just the beginning.

Intrigued by the trailers of the movie Arrival, which I have not seen yet, I read the short story collection Stories of Your Life and Others by Ted Chiang.

The short story Story of Your Life, on which the movie is based, has a fascinating premise. However, I do hear that the film does not spend that much time on the protagonist’s discovery of and assimilation by the alien writing system “Heptapod B”, which to me was one of the more interesting threads.

Anyways, this post is not about that story.

It’s about another story in the collection called Liking What You See: A Documentary, written way back in 2002.

Close to the end, an initially strong campaign for calliagnosia (a non-invasive and highly selective procedure to remove humans’ innate appreciation of attractiveness in others, already a great science-fiction philosophy prop) at the fictitious but influential university Pembleton is lost at the last minute due to a highly persuasive televised speech by the leader of the opposition.

Read the following extract for how exactly this was done:

In the latest on the Pembleton calliagnosia initiative, EduNews has learned that a new form of digital manipulation was used on the broadcast of PEN spokesperson Rebecca Boyer’s speech. EduNews has received files from the SemioTech Warriors that contain what appear to be two recorded versions of the speech: an original — acquired from the Wyatt/Hayes (ed: PR firm) computers — and the broadcast version. The files also include the SemioTech Warriors’ analysis of the difference between the two versions.

The discrepancies are primarily enhancements to Ms. Boyer’s voice intonation, facial expressions and body language. Viewers who watch the original version rate Ms. Boyer’s performance as good, while those who watch the edited version rate her performance as excellent, describing her as extraordinarily dynamic and persuasive. The SemioTech Warriors conclude that Wyatt/Hayes has developed new software capable of fine-tuning paralinguistic cues in order to maximize the emotional response evoked in viewers. This dramatically increases the effectiveness of recorded presentations … and its use in the PEN broadcast is likely what caused many supporters of the calliagnosia initiative to change their votes.

I would like to remind you that Ted Chiang wrote that story in 2002.

When I read this, I had to sit still for a moment, thinking about all of the advanced techniques that are currently being used to analyse us all, and then to manipulate or even manufacture news with the explicit purpose of swaying public opinion in a specific direction.

My thoughts moved on to the great advances being made with deep learning based human speech synthesis, facial re-enactment (see youtube at the top of this post), cut-and-paste voice editing (Adobe’s VoCo) and much much more.

Up to now, the major mechanism of influence has been curating which packets of information people consume. However, the tools for also modifying the contents of those packets seem to be ready.

We’ve been photoshopping photos of people to make those people appear more attractive since forever.

How long before we start seriously “videoshopping” televised speeches to make them more persuasive? With people consuming ever-increasing amounts of potentially personalised video, there’s an even bigger opportunity for those who desire to influence us for reasons less pure than just edification.

Weekly Head Voices #118: Accelerando.

Too much nerdery took place from Monday February 20 to Sunday March 5. Fortunately, by the end of that period, we found ourselves here:

The view from the shark lookout all the way to Hangklip.

bibtex references in orgmode

For a technical report, I thought it would be handy to go from Emacs orgmode (where all my lab notes live in any case) to PDF via LaTeX.

This transformation is more or less built-in, but the whole machinery does not work out of the box with citations from a local BibTeX export of my main Zotero database.

I wrote a post on my other even-more-nerdy blog showing the extra steps needed to turn this into an easy-peasy 38-shortcut-key-combo affair.

Google GCE K80 GPUs available, cheap(ish)!

I’ve been using a cloud-hosted NVIDIA Tesla from Nimbix for my small-scale deep learning experiments with TensorFlow. This has also helped me to resist the temptation of buying an expensive new GPU for my workstation.

However, Google Compute Engine has finally shipped (in beta) their cloud-based GPU product. Using their pricing calculator, it turns out I can get a virtual machine with 8 CPU cores, 30G of RAM, 375GB of local SSD and a whole NVIDIA Tesla K80 GPU (12GB of memory) in their EU data centre for a paltry $1.32 / hour.
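To get a feel for what that hourly rate means in practice, here is a back-of-the-envelope cost sketch; the $1.32/hour is the rate quoted above, but the monthly usage hours are an assumed workload, purely for illustration:

```python
# The GCE rate is from the pricing calculator mentioned above;
# the hours per month are a made-up example workload.
gce_rate_usd_per_hour = 1.32
hours_per_month = 40  # assumed: roughly ten hours of experiments per week

monthly_cost = gce_rate_usd_per_hour * hours_per_month
print(f"${monthly_cost:.2f} per month")  # $52.80 per month
```

The nice property of per-hour cloud billing is that the machine costs nothing while it is shut down, which suits bursty small-scale experimentation.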

This is significantly less than half of what I paid Nimbix!

(That resistance is going to crumble, the question is just when. Having your stuff run locally and interactively for small experiments still beats the 150ms latency from this here tip of the African continent to the EU.)

nvpy leaves the nest :`(

My most successful open source project to date is probably nvpy, the cross-platform (Linux, macOS, Windows) Simplenote client. 600+ stars on github is not A-list, but it’s definitely also nothing to sneeze at.

nvpy stats right before the hand-over

Anyways, I wrote nvpy in 2012 when I was still a heavy Simplenote user and there was no good client for Linux.

In the meantime, Emacs had started taking over my note-taking life and so in October of 2014, I made the decision to start looking for a new maintainer for my open-source baby nvpy.

That attempt was not successful.

By the end of 2015 / early 2016 I had a bit of a Simplenote / nvpy revival, as I was using the official client on my phone, and hence nvpy on the desktop.

Emacs put a stop to that revival also by magically becoming available on my phone as well. I have to add that the Android Simplenote client also seems to have become quite sluggish.

I really was not using nvpy anymore, but I had to make plans for the users who did.

On Saturday March 4, I approached github user yuuki0xff, who had prepared a pretty impressive background-syncing PR for nvpy, about the possibility of becoming the new owner and maintainer of nvpy.

To my pleasant surprise, he was happy to do so!

It is a strange new world that we live in where you create a useful artifact from scratch, make it available for free to anyone who would like to use it, and continue working on improving that artifact for a few years, only to hand the whole thing over to someone else for caretaking.

The handing-over brought with it mixed feelings, but overall I am super happy that my little creation is now in capable and more active hands.

Navel Gaze

Fortunately, there’s a handy twitter account reminding us regularly how much of 2017 we have already put behind us (thanks G-J van Rooyen for the tip):

That slowly advancing progress bar seems to be very effective at getting me to take stock of the year so far.

Am I spending time on the right things? Am I spending just the right amount of effort on prioritising without this cogitation eating into the very resource it’s supposed to be optimising? Are my hobbies optimal?

I think the answer is: One deliberate step after the other is best.

Weekly Head Voices #117: Dissimilar.

The week of Monday February 13 to Sunday February 19, 2017 might have appeared to be really pretty boring to any inter-dimensional and also more mundane onlookers.

(I mention both groups, because I’m almost sure I would have detected the second group watching, whereas the first group, being interdimensional, would probably have been able to escape detection. As far as I know, nobody watched.)

I just went through my orgmode journals. They are filled with a mix of notes on the following mostly very nerdy and quite boring topics.

Warning: If you’re not an emacs, python or machine learning nerd, there is a high probability that you might not enjoy this post. Please feel free to skip to the pretty mountain at the end!

Advanced Emacs configuration

I finally migrated my whole configuration away from Emacs Prelude.

Prelude is a fantastic Emacs “distribution” (it’s a simple git clone away!) that truly upgrades one’s Emacs experience in terms of look and feel, and functionality. It played a central role in my return to the Emacs fold after a decade-long hiatus spent with JED, VIM (there was more really weird stuff going on during that time…) and Sublime.

However, it’s a sort of rite of passage constructing one’s own Emacs configuration from scratch, and my time had come.

In parallel with Day Job, I extricated Prelude from my configuration, and filled up the gaps it left with my own constructs. There is something quite addictive about using emacs-lisp to weave together whatever you need in your computing environment.

To celebrate, I decided that it was also time to move my todo system away from todoist (a really great ecosystem) and into Emacs orgmode.

From this… (beautiful multi-platform graphical app)

I had sort of settled with todoist for the past few years. However, my yearly subscription is about to end on March 5, and I’ve realised that with the above-mentioned Emacs-lisp weaving and orgmode, there is almost unlimited flexibility also in managing my todo list.

Anyways, I have it set up so that tasks are extracted right from their context in various orgfiles, including my current monthly journal, and shown in a special view. I can add arbitrary metadata, such as attachments and just plain text, and more esoteric tidbits such as live queries into my email database.

The advantage of having the bulk of the tasks in my monthly journal is that I am forced to review all of the remaining tasks at the end of the month before transferring them to the new month’s journal.

We’ll see how this goes!

Jupyter Notebook usage

Due to an interesting machine learning project at work, I had a great excuse to spend some quality time with the Jupyter Notebook (formerly known as IPython Notebook) and the scipy family of packages.

Because Far Too Much Muscle-Memory, I tried interfacing to my notebook server using Emacs IPython Notebook (EIN), which looked like this:

However, the initial exhilaration quickly fizzled out as EIN exhibits some flakiness (primarily broken indentation in cells, which makes it hard to interact with), and I had no time to try to fix or work around this, because day job deadlines. (When I have a little more time, I will have to get back to the EIN! Apparently they were planning to call this new fork Zwei. Now that would have been awesome.)

So it was back to the Jupyter Notebook. This time I made an effort to learn all of the new hotkeys. (Things have gone modal since I last used this intensively.)

The Notebook is an awe-inspiringly useful tool.

However, the cell-based execution model definitely has its drawbacks. I often wish to re-execute a single line or a few lines after changing something. With the notebook, I have to split the cell at least once to do this, resulting in multiple cells that I now have to manage.

In certain other languages, which I cannot mention anymore because I have utterly exhausted my monthly quota, you can easily re-execute any sub-expression interactively, which makes for a more effective interactive coding experience.

The notebook is a good and practical way to document one’s analytical path. However, I sometimes wonder if there are less linear (graph-oriented?) ways of representing the often branching routes one follows during an analysis session.

Dissimilarity representation

Some years ago, I attended a fantastic talk by Prof. Robert P.W. Duin about the history and future of pattern recognition.

In this talk, he introduced the idea of dissimilarity representation.

In much of pattern recognition, it was pretty much the norm that you had to reduce your training samples (and later unseen samples) to feature vectors. The core idea of building a classifier is constructing hyper-surfaces that divide the high-dimensional feature space into classes. An unseen sample can then be positioned in feature space, and its class simply determined by checking on which side of the hypersurface(s) it finds itself.
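The hypersurface idea can be sketched in a few lines of NumPy. This is a minimal illustration, not a real classifier: the 2-D feature vectors are made up, and the separating hyperplane is simply placed halfway between the two class means instead of being trained:

```python
import numpy as np

# Two tiny classes of made-up 2-D feature vectors (one row per sample).
class_a = np.array([[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]])
class_b = np.array([[3.0, 3.1], [2.9, 3.3], [3.2, 2.8]])

# A linear decision surface w . x + b = 0, placed halfway between
# the class means; its normal w points from class A towards class B.
mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
w = mean_b - mean_a
b = -w @ ((mean_a + mean_b) / 2)

def side(x):
    """Return the class label based on which side of the surface x lies."""
    return 'B' if w @ x + b > 0 else 'A'

print(side(np.array([1.0, 1.0])))  # A
print(side(np.array([3.0, 3.0])))  # B
```

Real classifiers differ in how they construct the surface (and it need not be linear), but classification of an unseen sample is always this kind of "which side am I on" test.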

However, for many types of (heterogeneous) data, determining these feature vectors can be prohibitively difficult.

With the dissimilarity representation, one only has to determine a suitable function that can be used to calculate the dissimilarity between any two samples in the population. Especially for heterogeneous data, or data such as geometric shapes, this is a much more tractable exercise.

More importantly, it’s often easier to talk with domain experts about similarity than it is to talk about feature spaces.

Due to the machine learning project mentioned above, I had to work with categorical data that will probably later also prove to be of heterogeneous modality. This was of course the best (THE BEST) excuse to get out the old dissimilarity toolbox (in my case, that’s SciPy and friends), and to read a bunch of dissimilarity papers that were still on my list.
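As a minimal sketch of what this looks like with SciPy: the categorical samples below are made up, and the choice of Hamming dissimilarity (fraction of mismatching attributes) is just one plausible option, not the actual setup of the project mentioned above. Each sample is represented by its dissimilarities to a set of labelled prototypes, and an unseen sample is classified by its nearest prototype:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Made-up integer-encoded categorical samples, one row per prototype.
prototypes = np.array([
    [0, 1, 2],   # class 0
    [0, 1, 1],   # class 0
    [2, 0, 0],   # class 1
    [2, 2, 0],   # class 1
])
labels = np.array([0, 0, 1, 1])

# Dissimilarity representation: instead of hand-crafting feature vectors,
# represent each unseen sample by its dissimilarity to every prototype.
unseen = np.array([[0, 1, 0],
                   [2, 2, 1]])
D = cdist(unseen, prototypes, metric='hamming')  # shape (2, 4)

# Classify each unseen sample by its nearest prototype in dissimilarity space.
predicted = labels[D.argmin(axis=1)]
print(predicted)  # [0 1]
```

The same pattern extends to any pairwise dissimilarity function you can write down, which is exactly what makes the representation attractive for heterogeneous data.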

Besides the fact that much fun was had by all (me), I am cautiously optimistic, based on first experiments, that this approach might be a good one. I was especially impressed by how much I could put together in a relatively short time with the SciPy ecosystem.

Machine learning peeps in the audience, what is your experience with the dissimilarity representation?

A mountain at the end

By the end of a week filled with nerdery, it was high time to walk up a mountain(ish), and so I did, in the sun and the wind, up a piece of the Kogelberg in Betty’s Bay.

At the top, I made you this panorama of the view:

Click for the 7738 x 2067 full resolution panorama!

At that point, the wind was doing its best to blow me off the mountain, which served as a visceral reminder of my mortality, and thus also kept the big M (for mindfulness) dial turned up to 11.

I was really only planning to go up and down in brisk hike mode due to a whiny knee, but I could not help turning parts of the up and the largest part of the down into an exhilarating lope.

When I grow up, I’m going to be a trail runner.

Have fun (nerdy) kids, I hope to see you soon!

The Apple TV 4 Remote, nickname “Achilles”!

It turns out that when you, or one of your offspring, accidentally drop an Apple TV 4 remote from about a metre, the lovely touch surface shatters almost exactly like the screen of a smartphone:

Unfortunately, you now have to purchase a new remote, which over here is going to cost more than half of what the whole Apple TV unit, including remote, cost initially.