(Warning: Due to the momentous events of the past week, this post is 80% South African politics. If this is not your thing, eject right before the large warning further down! Maybe next week we’ll be back to our usual programming, all depending on how above-mentioned momentous events play out.)
Between all of the retinal excitement of the (gorgeous) new Justice League trailer and the release of the new Ghost in the Shell movie (pretty but soulless, according to the reviews), you might have missed the new Valerian trailer.
This would be a shame, because it looks mind-expandingly stunning:
On Tuesday, March 28, 2017, when I arrived home and Genetic Offspring Unit #3 (GOU#3 for short) saw me, she all-matter-of-factly stood up and walked towards me.
My heart burst.
WARNING: What’s happening in SA politics at the moment, in 60 seconds.
One would think that one bursting heart per week is sufficient, but on Friday morning it happened again, unfortunately for far less positive reasons.
Capable ministers, especially Pravin Gordhan, the minister of finance (who was doing a great job trying to steer our fragile economy in the right direction, often opposing deals that were bad for the economy but great for the pockets of Zuma and his bosses, the Gupta brothers), were replaced with ministers who might indeed be capable, but have clearly been selected not for that but rather for their loyalty to the Zuma network.
Amongst many other ramifications, this also means that the 1 trillion Rand nuclear deal with Russia is officially back on the table!
This was one of the many unhealthy projects blocked by Pravin Gordhan.
Nuclear power certainly has its place, but under these circumstances definitely not.
This cabinet reshuffle is the most recent in a long line of increasingly destructive moves by president Zuma and his supporters. The country is still reeling from the billions of Rands that have been and are still being extorted from the poorest of citizens by a foreign company, all right under the nose of minister Bathabile Dlamini.
Guess who did NOT get fired on Friday?
Loyalty seems to be sufficient to cancel out any amount of malice, greed and/or incompetence.
I am hoping that they are right, and that these will be the right sort of fireworks. With a bit of luck and the willingness of many good-hearted and talented people who have started speaking out and coming into action, our country could soon find itself with the kind of leadership that it desperately needs.
Keep your fingers crossed. I look forward to seeing you on the other side!
This edition of the not-quite-Weekly Head Voices covers the period of time from Monday March 6 to Sunday March 26, 2017.
As is becoming sort of a tradition around these parts, I get to show you a photo or two of our over-nerding antidote trips into the wild.
This is the path to the Koppie Alleen beach in the De Hoop Nature Reserve. Paths and photos of paths make me all pensive:
The following impression is of the De Hoopvlei. The short hiking trail along the vlei turned my morning run into an epic one.
The weekend before the De Hoop one, I spent at least half an hour sitting on the sofa just thinking.
Two things are noteworthy about this event.
I had half an hour of pure, uncut, interruption-free idle time.
During this time, I resisted the urge to flip out or open any information consumption device, instead electing to have my thoughts explore my internal landscape, like people used to do in the old days.
I was trying to come up with better ways to keep track of this landscape. The Deep Work phase I’m going through currently applies to work time (doh), but somehow not to free time, where I’m very much prone to latch on securely to the internet firehose for a mind-blasting gulp-athon.
Psychology and marketing professor Adam Alter argues that the companies behind the apps and the media that we consume are naturally applying various advanced tools to ensure that we remain engaged for as long as possible.
In short, you spend so much time in Facebook (I don’t anymore haha) because Facebook employs really clever people who build systems that run continuous experiments on your viewing behaviour and figure out what to show you so that your eyeballs remain glued to that little display.
This makes absolute sense of course.
If your company’s lifeblood was advertising revenue, and/or user engagement was important to your company’s bottom line for some or other reason, of course you would analyse and A/B test the living daylights out of your users in order to keep them in your app / media stream for that much longer.
Sir Tim Berners-Lee is arguing that we’ve lost control of our data (“your” data belongs to Facebook, Instagram, Twitter, <insert your thing here>; you should blog more, it’s better) and, more importantly, that we’re being manipulated by less than benevolent actors (pronounce with melodramatic accent on the “tors”) who, again based on advanced analytics tools, manufacture and modify news in order to sway public opinion. (Read my other recent post Fake News is Just the Beginning if your tin foil hat is just getting warmed up.)
In 2011, 2012 and 2014, I wrote briefly about our innate weakness for anything new. Social media is almost the perfect poison in that respect. You keep on scrolling down, because what if, what if there’s something new that’s going to answer that question you have not even formulated yet.
Putting all of this together: The internet is a beautiful thing. However, we have evolved parts of it to target an insidious psychological weakness in ourselves. Furthermore, there is a massive commercial incentive to keep on tweaking the distractions so that they become even more addictive, negating many of the information-related advantages they might have offered.
As if that was not sufficient, there is political and commercial incentive to develop techniques that essentially subvert our cognition, thereby fairly effectively misleading us to make decisions that satisfy somebody else’s agenda.
All is not lost.
Making time to think is great. Set aside as much as you can. Stare into space. Resist the urge to check the little magic window.
Personally, I like to form habits as much as possible.
Deep Work has become a habit for me. I am going to bring in old-school glassy-eyed staring-into-space thinking as a habit also. Complementary to that, and as an answer to my search for ways of keeping track of my mental landscape, I have resolved to write / draw as much of that mental landscape as I can, at regular intervals.
More generally, I wonder if, and how, we as humankind are going to address the issue of news and opinion manipulation.
Do we have to resign ourselves to this future? Is it going to become a question of which side employs the most expensive analytics and data science, and can hence out-manipulate its opponents? Or are we humans somehow going to develop systemic immunity?
In any case, I wish you all great success and happiness staring into space!
Intrigued by the trailers of the movie Arrival, which I have not yet seen, I read the short story collection Stories of Your Life and Others by Ted Chiang.
The short story Story of Your Life, on which the movie is based, has a fascinating premise. However, I do hear that the film does not spend much time on her discovery of and assimilation by the alien writing system “Heptapod B”, which to me was one of the more interesting threads.
Close to the end of the collection, in the story Liking What You See: A Documentary, an initially strong campaign for calliagnosia (a non-invasive and highly selective procedure to remove humans’ innate appreciation of attractiveness in others, already a great science-fiction philosophy prop) at the fictitious but influential university Pembleton is lost at the last minute due to a highly persuasive televised speech by the leader of the opposition.
Read the following extract for how exactly this was done:
In the latest on the Pembleton calliagnosia initiative, EduNews has learned that a new form of digital manipulation was used on the broadcast of PEN spokesperson Rebbecca Boyer’s speech. EduNews has received files from the SemioTech Warriors that contain what appear to be two recorded versions of the speech: an original — acquired from the Wyatt/Hayes (ed: PR firm) computers — and the broadcast version. The files also include the SemioTech Warriors’ analysis of the difference between the two versions.
The discrepancies are primarily enhancements to Ms. Boyer’s voice intonation, facial expressions and body language. Viewers who watch the original version rate Ms. Boyer’s performance as good, while those who watch the edited version rate her performance as excellent, describing her as extraordinarily dynamic and persuasive. The SemioTech Warriors conclude that Wyatt/Hayes has developed new software capable of fine-tuning paralinguistic cues in order to maximize the emotional response evoked in viewers. This dramatically increases the effectiveness of recorded presentations … and its use in the PEN broadcast is likely what caused many supporters of the calliagnosia initiative to change their votes.
I would like to remind you that Ted Chiang wrote that story in 2002.
Up to now, the major mechanism of influence has been curating which packets of information people consume. However, the tools for also modifying the contents of those packets seem to be ready.
We’ve been photoshopping photos of people to make those people appear more attractive since forever.
How long before we start seriously “videoshopping” televised speeches to make them more persuasive? With people consuming ever-increasing amounts of potentially personalised video, there’s an even bigger opportunity for those who desire to influence us for reasons less pure than just edification.
I’ve been using a cloud-hosted NVIDIA Tesla from Nimbix for my small-scale deep learning experiments with TensorFlow. This has also helped me to resist the temptation of buying an expensive new GPU for my workstation.
This is significantly less than half of what I paid Nimbix!
(That resistance is going to crumble, the question is just when. Having your stuff run locally and interactively for small experiments still beats the 150ms latency from this here tip of the African continent to the EU.)
nvpy leaves the nest :`(
My most successful open source project to date is probably nvpy, the cross-platform (Linux, macOS, Windows) Simplenote client. 600+ stars on GitHub is not A-list, but it’s definitely nothing to sneeze at.
Anyways, I wrote nvpy in 2012 when I was still a heavy Simplenote user and there was no good client for Linux.
In the meantime, Emacs had started taking over my note-taking life and so in October of 2014, I made the decision to start looking for a new maintainer for my open-source baby nvpy.
Emacs put a stop to that revival also by magically becoming available on my phone as well. I have to add that the Android Simplenote client also seems to have become quite sluggish.
I really was not using nvpy anymore, but I had to make plans for the users who did.
On Saturday March 4, I approached GitHub user yuuki0xff, who had prepared a pretty impressive background-syncing PR for nvpy, about the possibility of becoming the new owner and maintainer of nvpy.
To my pleasant surprise, he was happy to do so!
It is a strange new world that we live in where you create a useful artifact from scratch, make it available for free to anyone that would like to use it, and continue working on improving that artifact for a few years, only to hand the whole thing over to someone else for caretaking.
The handing-over brought with it mixed feelings, but overall I am super happy that my little creation is now in capable and more active hands.
Fortunately, there’s a handy twitter account reminding us regularly how much of 2017 we have already put behind us (thanks G-J van Rooyen for the tip):
That slowly advancing progress bar seems to be very effective at getting me to take stock of the year so far.
Am I spending time on the right things? Am I spending just the right amount of effort on prioritising without this cogitation eating into the very resource it’s supposed to be optimising? Are my hobbies optimal?
I think the answer is: One deliberate step after the other is best.
The week of Monday February 13 to Sunday February 19, 2017 might have appeared to be really pretty boring to any inter-dimensional and also more mundane onlookers.
(I mention both groups, because I’m almost sure I would have detected the second group watching, whereas the first group, being interdimensional, would probably have been able to escape detection. As far as I know, nobody watched.)
I just went through my orgmode journals. They are filled with a mix of notes on the following mostly very nerdy and quite boring topics.
Warning: If you’re not an emacs, python or machine learning nerd, there is a high probability that you might not enjoy this post. Please feel free to skip to the pretty mountain at the end!
Advanced Emacs configuration
I finally migrated my whole configuration away from Emacs Prelude.
Prelude is a fantastic Emacs “distribution” (it’s a simple git clone away!) that truly upgrades one’s Emacs experience in terms of look and feel, and functionality. It played a central role in my return to the Emacs fold after a decade-long hiatus spent with JED, VIM (there was more really weird stuff going on during that time…) and Sublime.
However, it’s a sort of rite of passage constructing one’s own Emacs configuration from scratch, and my time had come.
In parallel with Day Job, I extricated Prelude from my configuration, and filled up the gaps it left with my own constructs. There is something quite addictive about using Emacs Lisp to weave together whatever you need in your computing environment.
To celebrate, I decided that it was also time to move my todo system away from todoist (a really great ecosystem) and into Emacs orgmode.
I had sort of settled with todoist for the past few years. However, my yearly subscription is about to end on March 5, and I’ve realised that with the above-mentioned Emacs-lisp weaving and orgmode, there is almost unlimited flexibility also in managing my todo list.
Anyways, I have it set up so that tasks are extracted right from their context in various orgfiles, including my current monthly journal, and shown in a special view. I can add arbitrary metadata, such as attachments and just plain text, and more esoteric tidbits such as live queries into my email database.
Having the bulk of the tasks in my monthly journal means I am forced to review all of the remaining tasks at the end of the month before transferring them to the new month’s journal.
We’ll see how this goes!
Jupyter Notebook usage
Due to an interesting machine learning project at work, I had a great excuse to spend some quality time with the Jupyter Notebook (formerly known as IPython Notebook) and the SciPy family of packages.
However, the initial exhilaration quickly fizzled out, as EIN (the Emacs IPython Notebook client) exhibits some flakiness (primarily broken indentation in cells, which makes them hard to interact with), and I had no time to fix or work around this, because day job deadlines. (When I have a little more time, I will have to get back to EIN! Apparently they were planning to call the new fork Zwei. Now that would have been awesome.)
So it was back to the Jupyter Notebook. This time I made an effort to learn all of the new hotkeys. (Things have gone modal since I last used this intensively.)
The Notebook is an awe-inspiringly useful tool.
However, the cell-based execution model definitely has its drawbacks. I often wish to re-execute a single line or a few lines after changing something. With the notebook, I have to split the cell at least once to do this, resulting in multiple cells that I then have to manage.
In certain other languages, which I cannot mention anymore because I have utterly exhausted my monthly quota, you can easily re-execute any sub-expression interactively, which makes for a more effective interactive coding experience.
The notebook is a good and practical way to document one’s analytical path. However, I sometimes wonder if there are less linear (graph-oriented?) ways of representing the often branching routes one follows during an analysis session.
Some years ago, I attended a fantastic talk by Prof. Robert P.W. Duin about the history and future of pattern recognition.
In this talk, he introduced the idea of dissimilarity representation.
In much of pattern recognition, it was pretty much the norm that you had to reduce your training samples (and later unseen samples) to feature vectors. The core idea of building a classifier is constructing hypersurfaces that divide the high-dimensional feature space into classes. An unseen sample can then be positioned in feature space, and its class simply determined by checking on which side of the hypersurface(s) it finds itself.
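To make the feature-space picture concrete, here is a minimal sketch of that decision procedure for the simplest possible hypersurface, a linear one (the weights are assumed for illustration, not learned by any training procedure):

```python
import numpy as np

# Toy 2-D feature space with a linear separating hypersurface (here, a line).
# w and b are assumed values, standing in for whatever training would produce.
w = np.array([1.0, -1.0])   # normal vector of the separating hyperplane
b = 0.0                     # offset of the hyperplane from the origin

def classify(x):
    """Return class 1 or 0 depending on which side of the hyperplane x lies."""
    return int(np.dot(w, x) + b > 0)

print(classify(np.array([2.0, 1.0])))  # positive side of the plane -> 1
print(classify(np.array([1.0, 2.0])))  # negative side of the plane -> 0
```

Everything else in classical feature-based pattern recognition (kernels, neural networks, and so on) is in some sense about constructing more flexible versions of that dividing surface.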
However, for many types of (heterogeneous) data, determining these feature vectors can be prohibitively difficult.
With the dissimilarity representation, one only has to determine a suitable function that can be used to calculate the dissimilarity between any two samples in the population. Especially for heterogeneous data, or data such as geometric shapes for example, this is a much more tractable exercise.
More importantly, it’s often easier to discuss with domain experts about similarity than it is to talk about feature spaces.
Due to the machine learning project mentioned above, I had to work with categorical data that will probably later also prove to be of heterogeneous modality. This was of course the best (THE BEST) excuse to get out the old dissimilarity toolbox (in my case, that’s SciPy and friends), and to read a bunch of dissimilarity papers that were still on my list.
Besides the fact that much fun was had by all (me), I am cautiously optimistic, based on first experiments, that this approach might be a good one. I was especially impressed by how much I could put together in a relatively short time with the SciPy ecosystem.
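As an illustration of how little machinery this takes with SciPy, here is a sketch (the toy data and the choice of Hamming dissimilarity are my assumptions, not the actual project's) of a nearest-neighbour classifier that works purely on a dissimilarity matrix over categorical samples:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical integer-encoded categorical samples (e.g. colour, shape, size).
train = np.array([[0, 1, 2],
                  [0, 1, 0],
                  [2, 2, 1]])
train_labels = np.array([0, 0, 1])

test = np.array([[0, 1, 1],
                 [2, 2, 2]])

# Dissimilarity of two categorical vectors: the fraction of attributes that
# differ. cdist with metric='hamming' computes exactly this for every pair,
# giving a (n_test, n_train) dissimilarity matrix -- no feature engineering.
D = cdist(test, train, metric='hamming')

# Simplest classifier in the dissimilarity representation: 1-nearest neighbour.
pred = train_labels[D.argmin(axis=1)]
print(pred)  # -> [0 1]
```

The entire "representation" is the matrix D; swapping in a domain-specific dissimilarity function (edit distance, shape matching, whatever the experts agree on) changes nothing else in the pipeline.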
Machine learning peeps in the audience, what is your experience with the dissimilarity representation?
A mountain at the end
By the end of a week filled with nerdery, it was high time to walk up a mountain(ish), and so I did, in the sun and the wind, up a piece of the Kogelberg in Betty’s Bay.
At the top, I made you this panorama of the view:
At that point, the wind was doing its best to blow me off the mountain, which served as a visceral reminder of my mortality, and thus also kept the big M (for mindfulness) dial turned up to 11.
I was really only planning to go up and down in brisk hike mode due to a whiny knee, but I could not help turning parts of the up and the largest part of the down into an exhilarating lope.