Weekly Head Voices #123: A semblance of a cadence.

Yes, we ended up in the mountains again.

In the period from Monday June 12 to Sunday June 25 we were mostly trying to get through the winter, fighting off a virus or three (the kind that invades biological organisms, you nerd) and generally nerding out.

One more of my org2blog pull requests was merged in: You can now configure the thumbnail sizes your blog will automatically show of your uploaded images. Getting my own itch-scratches merged into open source projects never fails to make me happy, even though in this case there can’t be more than 5 other people who will ever use this particular functionality.

Anyways.

ASP.NET Core SURPRISE!

For a work project I was encouraged to explore Microsoft’s brand new ASP.NET Core. While I remain wary of Microsoft on the one hand (IE6, anyone?), on the other I am an absolute sucker for new technology.

You may colour me impressed.

If I had to describe it in one sentence, I would call ASP.NET Core Django done in C#. You can develop and deploy it on Windows, Mac or Linux. You model and query your data using Entity Framework Core and LINQ, or Dapper if you prefer performance and don’t mind writing the SQL (I don’t), or both. You write controller classes and view templates using the Razor templating language.

C# 7.0 looks like it could be a high-development-velocity language. It has modern features such as lambdas with what look like real closures (unlike C++ variable capturing), as well as the null-coalescing operator (??) and the null-conditional operator (?.), the latter of which looks superbly useful. Between Visual Studio on Windows and the Mac, the new IntelliJ Rider IDE (all platforms) and Visual Studio Code (all platforms), the tooling is top notch.
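
To make it concrete why that last pair of operators looks so useful, here is roughly the defensive plumbing they absorb, sketched in Python with a completely hypothetical person/address object chain (the C# one-liner is in the comment):

```python
# C#:  var city = person?.Address?.City ?? "unknown";
# A rough Python equivalent of that single line (all names hypothetical):
def city_or_unknown(person):
    # ?. short-circuits the whole chain to null as soon as one link is null
    if person is None or person.address is None or person.address.city is None:
        # ?? then supplies a fallback for the null result
        return "unknown"
    return person.address.city
```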

Time will have to tell how it compares to Python with respect to development velocity, a competition in which Python traditionally fares extremely well.

Where ASP.NET Core wins hands down is in the memory usage department: By default you deploy using the Kestrel web server, which runs your C# code on multiple libuv (yes, of lightning-fast node.js event-loop fame) event loops, each on its own thread.

With Django I usually deploy as many processes as I can behind uwsgi, itself behind nginx. The problem is that, because Python’s reference-counting garbage collector writes to objects all over the heap, these forked processes end up sharing very little copy-on-write memory, and so one has to take into account memory limits as well as CPU count on servers when considering concurrency.

The long and the short of this is that one will probably be able to process many more requests in parallel with ASP.NET Core than with Django. With Django I have experimented with uwsgi’s gevent mode and monkey patching, but this does not work as well as it does in ASP.NET Core, which has been designed with this concurrency model in mind from the get-go. My first memory usage and performance experiments have shown compelling results.
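
For reference, the gevent experiment mentioned above boils down to something like this minimal standalone sketch (the URL and counts are arbitrary; under uwsgi the server drives the event loop, but the monkey-patching caveat is the same):

```python
# Monkey-patching must happen before anything else imports the blocking
# stdlib primitives; this ordering requirement is part of the fragility.
from gevent import monkey
monkey.patch_all()  # swaps socket, ssl, sleep, etc. for cooperative versions

import gevent
import urllib.request

def fetch(url):
    # while this greenlet waits on network I/O, the event loop runs the others
    with urllib.request.urlopen(url) as resp:
        return resp.status

# a single process multiplexes all of these requests on one event loop
jobs = [gevent.spawn(fetch, "https://example.com/") for _ in range(100)]
gevent.joinall(jobs)
print(sum(job.value == 200 for job in jobs), "of 100 succeeded")
```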

Hopefully more later!

A cadence of accountability

Lately my Deep Work habits have taken a bit of a hit. At first I could not understand how to address this, until I remembered mention of a cadence of accountability in The Book.

Taking a quick look at that post, I understood what I had forgotten to integrate with my habits. Besides just doing the deep work, it’s important to “keep a compelling scoreboard” and to “create a cadence of accountability”.

Although I was tracking my deep work time using the orgmode clocking commands (when I start “deep working” on anything, I make an orgmode heading for it in my journal and clock in; when I’m done I clock out; orgmode remembers all durations), I was not regularly reviewing my performance.

With orgmode’s org-clock-report command (C-c C-x C-r), I can easily create or update a little table, embedded in my monthly journal orgfile, with all of my deep work clocked time tallied by day. This “compelling scoreboard” gives me instant insight into my weekly and monthly performance, and gives me either a mental kick in the behind or a pat on the shoulder, depending on how many deep work hours I’ve been able to squeeze in that day and the days before it.
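
For the org-curious: the clock report is just a dynamic block in the journal file. The exact parameters below are an assumption about my setup, but something along these lines produces the per-day tallies described above:

```org
#+BEGIN: clocktable :scope file :maxlevel 2 :block thismonth :step day
(org regenerates the per-day duration tables in here on C-c C-x C-r)
#+END: clocktable
```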

The moment I started doing this at regular intervals, “creating a cadence of accountability” in other words, I was able to swat distractions out of the way and get my zone back.

There is an interesting similarity with GTD here (which I don’t do so much anymore, because focus is far more important to me than taking care of sometimes arbitrary and fragmentary tasks): GTD also has the regular review as a core principle.

We humans are so dependent on habits to make real progress in life that I have come to see this as a clever trick for acquiring behaviour that is not habitual: Work on an auxiliary behaviour that is habitual, e.g. the regular review, and let it encourage and reinforce the behaviour that is not, e.g. taking care of randomly scheduled heterogeneous tasks (GTD) or fitting in randomly scheduled focus periods (Deep Work of the journalistic variant).

As an aside, cadence in this context is just a really elegant synonym for habit. I suggest we use it more, especially at cocktail parties.

 

Weekly Head Voices #118: Accelerando.

Too much nerdery took place from Monday February 20 to Sunday March 5. Fortunately, by the end of that period, we found ourselves here:

The view from the shark lookout all the way to Hangklip.

BibTeX references in orgmode

For a technical report, I thought it would be handy to go from Emacs orgmode (where all my lab notes live in any case) to PDF via LaTeX.

This transformation is more or less built-in, but the whole machinery does not work out of the box with citations from a local BibTeX export of my main Zotero database.

I wrote a post on my other even-more-nerdy blog showing the extra steps needed to turn this into an easy-peasy 38-shortcut-key-combo affair.
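
Heavily condensed, the trick hinges on the fact that raw LaTeX passes straight through org’s LaTeX exporter, so with a hypothetical refs.bib standing in for the Zotero export, the orgfile ends in something like this (the linked post covers the remaining glue, such as getting bibtex to run during export):

```org
#+TITLE: Hypothetical technical report

As previously demonstrated \cite{botha2014}, nerdery compounds.

\bibliographystyle{plain}
\bibliography{refs}
```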

Google GCE K80 GPUs available, cheap(ish)!

I’ve been using a cloud-hosted NVIDIA Tesla from Nimbix for my small-scale deep learning experiments with TensorFlow. This has also helped me to resist the temptation of buying an expensive new GPU for my workstation.

However, Google Compute Engine has finally shipped (in beta) their cloud-based GPU product. Using their pricing calculator, it turns out I can get a virtual machine with 8 CPU cores, 30GB of RAM, 375GB of local SSD and a whole NVIDIA Tesla K80 GPU (12GB of memory) in their EU data centre for a paltry $1.32 / hour.

This is significantly less than half of what I paid Nimbix!

(That resistance is going to crumble, the question is just when. Having your stuff run locally and interactively for small experiments still beats the 150ms latency from this here tip of the African continent to the EU.)

nvpy leaves the nest :`(

My most successful open source project to date is probably nvpy, the cross-platform (Linux, macOS, Windows) Simplenote client. 600+ stars on github is not A-list, but it’s definitely nothing to sneeze at.

nvpy stats right before the hand-over

Anyways, I wrote nvpy in 2012 when I was still a heavy Simplenote user and there was no good client for Linux.

In the meantime, Emacs had started taking over my note-taking life and so in October of 2014, I made the decision to start looking for a new maintainer for my open-source baby nvpy.

That attempt was not successful.

By the end of 2015 / early 2016 I had a bit of a Simplenote / nvpy revival, as I was using the official client on my phone, and hence nvpy on the desktop.

Emacs put a stop to that revival as well, by magically becoming available on my phone. I have to add that the Android Simplenote client also seems to have become quite sluggish.

I really was not using nvpy anymore, but I had to make plans for the users who did.

On Saturday March 4, I approached github user yuuki0xff, who had prepared a pretty impressive background-syncing PR for nvpy, about the possibility of becoming the new owner and maintainer of nvpy.

To my pleasant surprise, he was happy to do so!

It is a strange new world we live in, where you create a useful artifact from scratch, make it available for free to anyone who would like to use it, and continue working on improving that artifact for a few years, only to hand the whole thing over to someone else for caretaking.

The handing-over brought with it mixed feelings, but overall I am super happy that my little creation is now in capable and more active hands.

Navel Gaze

Fortunately, there’s a handy twitter account reminding us regularly how much of 2017 we have already put behind us (thanks G-J van Rooyen for the tip):

That slowly advancing progress bar seems to be very effective at getting me to take stock of the year so far.

Am I spending time on the right things? Am I spending just the right amount of effort on prioritising without this cogitation eating into the very resource it’s supposed to be optimising? Are my hobbies optimal?

I think the answer is: One deliberate step after the other is best.

Weekly Head Voices #117: Dissimilar.

The week of Monday February 13 to Sunday February 19, 2017 might have appeared to be really pretty boring to any inter-dimensional and also more mundane onlookers.

(I mention both groups, because I’m almost sure I would have detected the second group watching, whereas the first group, being interdimensional, would probably have been able to escape detection. As far as I know, nobody watched.)

I just went through my orgmode journals. They are filled with a mix of notes on the following mostly very nerdy and quite boring topics.

Warning: If you’re not an emacs, python or machine learning nerd, there is a high probability that you might not enjoy this post. Please feel free to skip to the pretty mountain at the end!

Advanced Emacs configuration

I finally migrated my whole configuration away from Emacs Prelude.

Prelude is a fantastic Emacs “distribution” (it’s a simple git clone away!) that truly upgrades one’s Emacs experience in terms of look and feel, and functionality. It played a central role in my return to the Emacs fold after a decade-long hiatus spent with JED, VIM (there was more really weird stuff going on during that time…) and Sublime.

However, it’s a sort of rite of passage to construct one’s own Emacs configuration from scratch, and my time had come.

In parallel with Day Job, I extricated Prelude from my configuration, and filled the gaps it left with my own constructs. There is something quite addictive about using emacs-lisp to weave together whatever you need in your computing environment.

To celebrate, I decided that it was also time to move my todo system away from todoist (a really great ecosystem) and into Emacs orgmode.

From this… (beautiful multi-platform graphical app)
… to this!! (YOUR LIFE IN PLAINTEXT. DUN DUN DUUUUUN!)

I had sort of settled with todoist for the past few years. However, my yearly subscription is about to end on March 5, and I’ve realised that with the above-mentioned Emacs-lisp weaving and orgmode, there is almost unlimited flexibility also in managing my todo list.

Anyways, I have it set up so that tasks are extracted right from their context in various orgfiles, including my current monthly journal, and shown in a special view. I can add arbitrary metadata, such as attachments and just plain text, and more esoteric tidbits such as live queries into my email database.
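
As a purely hypothetical illustration, an entry in the monthly journal might look like this; the special view then lifts the TODO out of exactly this context:

```org
* Monday 2017-02-20
** TODO follow up on the EIN indentation flakiness          :emacs:
   SCHEDULED: <2017-02-22 Wed>
   Notes, attachments and links simply live here, in context,
   underneath the task heading.
```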

The advantage of having the bulk of the tasks in my monthly journal is that I am forced to review all of the remaining tasks at the end of the month, before transferring them to the new month’s journal.

We’ll see how this goes!

Jupyter Notebook usage

Due to an interesting machine learning project at work, I had a great excuse to spend some quality time with the Jupyter Notebook (formerly known as IPython Notebook) and the scipy family of packages.

Because Far Too Much Muscle-Memory, I tried interfacing to my notebook server using Emacs IPython Notebook (EIN), which looked like this:

However, the initial exhilaration quickly fizzled out as EIN exhibits some flakiness (primarily broken indentation in cells, which makes them hard to interact with), and I had no time to try to fix or work around this, because day job deadlines. (When I have a little more time, I will have to get back to EIN! Apparently they were planning to call the new fork Zwei. Now that would have been awesome.)

So it was back to the Jupyter Notebook. This time I made an effort to learn all of the new hotkeys. (Things have gone modal since I last used this intensively.)

The Notebook is an awe-inspiringly useful tool.

However, the cell-based execution model definitely has its drawbacks. I often wish to re-execute a single line or a few lines after changing something. With the notebook, I have to split the cell at least once to do this, resulting in multiple cells that I now have to manage.

In certain other languages, which I cannot mention anymore because I have utterly exhausted my monthly quota, you can easily re-execute any sub-expression interactively, which makes for a more effective interactive coding experience.

The notebook is a good and practical way to document one’s analytical path. However, I sometimes wonder if there are less linear (graph-oriented?) ways of representing the often branching routes one follows during an analysis session.

Dissimilarity representation

Some years ago, I attended a fantastic talk by Prof. Robert P.W. Duin about the history and future of pattern recognition.

In this talk, he introduced the idea of dissimilarity representation.

In much of pattern recognition, it was the norm that you had to reduce your training samples (and later the unseen samples) to feature vectors. The core idea of building a classifier is constructing hyper-surfaces that divide the high-dimensional feature space into classes. An unseen sample can then be positioned in feature space, and its class simply determined by checking on which side of the hyper-surface(s) it finds itself.

However, for many types of (heterogeneous) data, determining these feature vectors can be prohibitively difficult.

With the dissimilarity representation, one only has to determine a suitable function that can be used to calculate the dissimilarity between any two samples in the population. Especially for heterogeneous data, or data such as geometric shapes, this is a much more tractable exercise.

More importantly, it’s often easier to talk to domain experts about similarity than about feature spaces.

Due to the machine learning project mentioned above, I had to work with categorical data that will probably later also prove to be of heterogeneous modality. This was of course the best (THE BEST) excuse to get out the old dissimilarity toolbox (in my case, that’s SciPy and friends), and to read a bunch of dissimilarity papers that were still on my list.

Besides the fact that much fun was had by all (me), I am cautiously optimistic, based on first experiments, that this approach might be a good one. I was especially impressed by how much I could put together in a relatively short time with the SciPy ecosystem.
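
To give a flavour of those first experiments, here is a minimal sketch of the dissimilarity approach on synthetic categorical data; the simple matching (Hamming) dissimilarity and all of the specifics below are stand-ins for the real, domain-specific function:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(100, 12))  # 100 samples, 12 categorical attributes
y = (X[:, 0] == X[:, 1]).astype(int)    # toy labels

# pairwise dissimilarity: fraction of attributes on which two samples disagree
D = squareform(pdist(X, metric="hamming"))

# classify directly in dissimilarity space, no feature vectors required
clf = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
clf.fit(D, y)
print(clf.predict(D[:3]))  # predictions for the first three samples
```

Part of the appeal is that swapping in a domain expert’s hand-crafted dissimilarity function is a one-line change.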

Machine learning peeps in the audience, what is your experience with the dissimilarity representation?

A mountain at the end

By the end of a week filled with nerdery, it was high time to walk up a mountain(ish), and so I did, in the sun and the wind, up a piece of the Kogelberg in Betty’s Bay.

At the top, I made you this panorama of the view:

Click for the 7738 x 2067 full resolution panorama!

At that point, the wind was doing its best to blow me off the mountain, which served as a visceral reminder of my mortality, and thus also kept the big M (for mindfulness) dial turned up to 11.

I was really only planning to go up and down in brisk hike mode due to a whiny knee, but I could not help turning parts of the up and the largest part of the down into an exhilarating lope.

When I grow up, I’m going to be a trail runner.

Have fun (nerdy) kids, I hope to see you soon!

Weekly Head Voices #116: Nothing much to see here, please move on.

This WHV is all about the weeks from Monday January 30 to Sunday February 12, 2017. I’ve mostly been in heads-down mode on two projects, so this post will be shorter than is usually the case.

I had my very first beer after the 30-day long Experiment Alcohol Zero (EAZ) on Friday, February 3. It was a good one:

EAZ has taught me that it would not be the worst idea to limit alcohol consumption slightly more.

As with many other enjoyable things, there is a price to be paid for this enjoyment. If paying that price interferes with the other enjoyable habits in your collection, it makes sense to evaluate and adjust the balance.

That reminds me of one of my favourite electronic music productions of all time: Balance 014 by Joris Voorn. He blew everyone’s minds when he decided to paint these fantastic soundscapes by mixing together more than 100 tracks, often 5 or 6 at a time.

Right at this point, just after that not-quite non sequitur, I wrote a far too long section on the relative performance of Android and iPhone, with a big “nerd content ahead” warning on it. Fortunately for you, I came to my senses before publishing and copied it out into its own little blog post: Android vs iPhone performance: A quick note.

Last weekend I dusted off my trusty old mu4e, an unbelievably attractive email client, again. This means I’m reading your mail and sending you beautifully UNformatted plain text emails right from my Emacs. As an additional bonus, I put all of the boring details about my configuration into a completely separate blog post, which you don’t have to read, titled: mu4e 0.9.18: E-Mailing with Emacs now even better.

What I will mention here and not in the other post is that the current situation is subtly different from my previous adventure with mu4e: Where I previously synchronised all 60 thousand emails to process locally with mu4e, I am now following a more mellow approach where I’m only synchronising the current year of email (and perhaps the previous year as well, I’m still considering). I use the fastmail web-app for searching through longer-term storage.

I’m happy to report that so far it’s working out even better than the previous time. I really enjoy converting HTML emails (that’s what everyone sends, thanks GMail!) to well-behaved plain text when I reply.

Finally, after the Nth time that someone shared a clearly bogus science news post on Facebook, instantly bringing my blood to the boil, I decided to write a handy guide titled: Critical Thinking 101: Three super easy steps to spot poppycock on the internet.

This guide is 100% free, and really easy to send to your friends when you think this is necessary. Please let me know if you have any suggestions for improvement.

Ok kids, that’s it from me for now. I wish you a great week ahead. In the words of Yo-Play: Come on Mitch, don’t give up. Please try again.

Weekly Head Voices #115: So much Dutch.

Monday January 16 to Sunday January 29 of the year 2017 yielded the following possibly mention-worthy tidbits:

On Saturday, January 21, we had the privilege of seeing Herman van Veen perform live at the Oude Libertas Theatre. The previous time was a magical night many years ago in the Royal Theatre Carré in Amsterdam.

Herman van Veen is a living, extremely active and up-to-date legend. To most Dutch people you’ll ever meet, he is a formidable part of their rich cultural landscape.

That evening, we heard so much Dutch spoken in the audience around us, it was easy to imagine that we had been teleported to a strange midsummer night’s performance, all the way back in The Netherlands.

Whatever the case may be, at 72 this artist and superb human being seems to have energy and magic flowing from every limb.

Things which running nerds might find interesting

The Dutch Watch

I had to start facing facts.

The Samsung Gear Fit 2 and I were not going to make a success of our relationship. The GF2 (haha) is great if you’re looking for a hybrid smart-fitness-watch. However, I was using it primarily for running, and then one tends to run (I’m on a roll here) into its limitations.

My inner engineer, the same guy who has a thing for hiking shoes, as they are the couture epitome of function over form, made the call and selected the TomTom Runner 3 Cardio+Music watch (the Runner 3 and the Spark 3 are identical except for styling) to replace my GF2.

Hidden in the name, there’s a subtle hint as to the focus of this wearable.

It has a less pretty monochrome display that manages to be highly visible even in direct sunlight. It does not have a touch screen, instead opting for a less pretty directional control beneath the screen that always manages to select the correct menu option. The menu options remind me of the first TomTom car navigation we bought years ago: Not pretty, but with exactly the right functions, in this case for runs and hikes.

Most importantly, the watch has an explicit function for syncing so-called QuickGPSFix data, so that when you want to start running, it is able to acquire a GPS lock almost immediately. Helpfully, the device keeps you informed of its progress via the ugly user interface.

Also, I am now able to pre-load GPX routes. Below you can see me navigating my local mountain like a pro with a sense of direction, when in reality I am an amateur with pathological absence of sense of direction:

That’s me in the corner, losing my Re-Samsung.

Anyways, after being initially quite happy with the GF2, I am now more careful with my first judgement of the Runner 3. What I can say is that the first 40km with it on my arm has been a delight of function-over-form.

P.S. Well done Dutchies. The optical heart rate sensor in the previous Spark was based on technology by South African company LifeQ. I have not been able to find a good reference for the situation in the Spark 3 / Runner 3.

Experiment Alcohol Zero early results: Not what I was hoping

The completely subjective Experiment Alcohol Zero (EAZ) I announced in my 2016 to 2017 transition post has almost run (err… too soon?) to completion.

November of 2016 was my best running month of that year: I clocked in at 80km.

EAZ started on January 4 and will probably conclude on Friday February 3.

Although I was a much more boring person in January of 2017, I did manage to run 110 km. The runs were all longer and substantially faster than my best runs of 2016.

Subjectively, there was just always energy (and the will) available to go running, and subjectively there was more energy available during the runs. This is probably in large part due to the virtuous upward spiral of better glucose processing, better sleep, hence better exercise, rinse, repeat.

I am planning to use some of this extra energy to sweep these results right under the proverbial carpet in order to try and limit the suffering that it might lead to.

(Seriously speaking, I will have to apply these findings to my pre-EAZ habits in a reasonable fashion. :)

Things which Linux nerds might find interesting

My whole web-empire, including this blog, my serious nerd business blog, and a number of websites I host for friends and family, has been migrated by the wonderful webfaction support to a new much faster shared server in London.

The new server sports 32 Intel Xeon cores, is SSD based and has a newer Linux distribution, so I was able to move over all of my wordpress instances to PHP 7.

Upshot: This blog might feel microscopically quicker! (I am a bit worried with my empire now being stuck in the heart of Article 50. I worry slightly more about a great deal of my data that lives on servers in the USA however. Probably more about that in a future post.)

On the topic of going around the bend, I now have emacs running on my phone, and I’m able to access all of my orgmode notes from there. It looks like this:

One might now ask a pertinent question like: “So Charl, how often do you make use of this wonderful functionality?”

To which I would currently have to answer: “Including showing the screenshot on my blog? Once.”

I’m convinced that it’s going to come in handy at some point.

Things which backyard philosophy nerds might find interesting

With what’s happening in the US at the moment, which is actually just one nasty infestation of the political climate around the globe, I really appreciate coming across more positive messages with advice on how we can move forward as a human race in spite of the efforts of the (libertarian) right.

The World Economic Forum’s Inclusive Growth and Development Report 2017 is one such message. As summarised in this WEF blog post, it tries to answer the question:

How can we increase not just GDP but the extent to which this top-line performance of a country cascades down to benefit society as a whole?

In other words, they present approaches for making our economies more inclusive, thus helping to mitigate the huge gap between rich and poor.

According to the report, the answer entails that national and international economic policies should focus primarily on people and living standards. In order to do this, each country will have to work on a different mix of education, infrastructure, ethics, investment, entrepreneurship and social protection.

The countries that are currently doing the best in terms of having inclusive economies, and are generally shining examples of socialism working extremely well thank you very much, are Norway, Luxembourg, Switzerland, Iceland, Denmark, Sweden, Netherlands, Australia, New Zealand and Austria. See the blog post for the specific different factors helping each of these countries to perform so well on the Inclusive Development Index (IDI).

Although the countries in the top 10 list all still have room for improvement, it’s great to see that combining socialism (which is actually just another word for being further along the human development dimension) with economic survival, and even success, in today’s world is actually quite a good idea.

(I am still hopeful that one day Gene Roddenberry’s dream of the United Federation of Planets will be realised.

LLAP!)