HyperNormalisation (2016) by Adam Curtis

I’ve watched a few BBC documentaries by Adam Curtis now, and I have a lot of respect for his ability to articulate ideas, and delve into history. A friend sent HyperNormalisation my way a little while ago. I watched it last night, and this is my initial impression.



Curtis starts in the mid-70s and moves up to the present day, bouncing back and forth between economic changes in New York and political changes in the Middle East. The two seem completely unrelated, until he starts to develop his thesis: that those in power made a conscious decision to move away from the old political norms to a mode of “managed perception”, in which they represent reality as something other than it really is, to keep those under their power in a constant state of unease and disorientation (which of course prevents them from actually questioning or challenging power). It’s a fascinating thesis, and he does a good job of presenting it.

I wrote my friend:

I watched HyperNormalisation last night. Jesus there’s a LOT to unpack. I’m going to have to watch it again and jot down some notes. It REALLY helped fill in my knowledge of Middle East politics and American involvement. I knew bits and pieces of some things, but not how it all fit together. Curtis does a great job connecting dots. While I’m not sure about his interpretation of all the details, the general premise of “managed perception” is something that’s been fomenting in the back of my mind for a while, I just didn’t have a term for it, much less a theory of how it came about or was being systematically leveraged, but it makes total sense. Especially in light of the 9/11 and Iraq War years. How many WTF moments I had during that period where things just did not line up, but everyone in power just pretended they did. Makes total sense now.
His interpretation of Trump’s campaign was interesting too (and the history of Trump’s dealings in NY real estate). I always just assumed Trump lied because, well that’s just what he did (and it was obvious that he did). Thinking of it as a genuine, concrete strategy though is very interesting. Another thing that occurred to me is that manipulating a certain group of people by giving them a specific impression of the world does not mean that the opposite impression of the world is correct. In fact, the opposite is probably just as bad as the intended. Curtis made a point to bring up the complexity of the real world, and how people just don’t know how (or want) to deal with it. So taking that as a given, we could say that if an idea or concept seems “too simple” or “too straightforward”, there’s a good reason to be suspicious of it – it is either an over-generalization, or calculated, managed perception. Very interesting. I’ll write you again after I watch it a second time. Thanks again for recommending!

One piece of advice: take the time to watch the whole thing (almost 3 hours) in one sitting. Trust me, it won’t seem like 3 hours. But you need to carry his continuity of thought without interruption or you’ll probably lose the plot and have to backtrack.

If you are interested in history (in this case, America’s history in the Middle East), politics, and culture, I highly recommend it.

As a follow-up, his documentary The Century of the Self is also excellent.

Goodbye Evernote

I’ve been a paying Evernote user for years; and before that, a “free” Evernote user for even longer. Evernote has some seriously powerful features, among which are the Evernote Web Clipper extension (available on all major browsers), excellent PDF markup features, flawless sync across devices, etc. It is solid software, backed by a solid service.

But I’ve left Evernote, likely for good.

In the wake of efforts by “Big Tech” companies to censor, deplatform, or control the data that belongs to customers, I’ve been cutting ties with as many Big Tech companies as I’m able. And though I have never experienced any negative service from Evernote, it does not offer end-to-end encryption for its services (meaning notes stored on Evernote servers are accessible to Evernote employees), and that has become a deal-breaker for me. I value my privacy, and my personal notes are where I can explore ideas and record my thoughts. And I don’t want to use any service that doesn’t respect and protect that.

But what to do with GIGABYTES of notes, clipped articles, recipes, photos, and annotated PDFs?

When I ditched GMail for ProtonMail, it took me months to dig through all my archived mail. I deleted, exported, or printed each piece, then had to update the sender’s settings with my new email address (or unsubscribe, if it was a newsletter). I transferred all of my contacts to ProtonMail, then reviewed them all to put the exported information into ProtonMail’s custom contact fields. It was a long, tedious, tiresome process, but I did it.

I expected my transition away from Evernote to be just as challenging. I came up with a list of goals for my new notebook scheme:

  • My notes should be plain text (well, technically Markdown) files that link to any relevant external assets, such as images, PDF files, etc.
  • Markdown files and assets should be organized in a uniform way.
  • Markdown files should have a uniform naming convention (all lower-case words, separated by hyphens).
  • I should be able to easily search for note content.
  • My notes should be available on multiple devices.
  • I should be able to clip content from the web and easily add it as a Markdown file to my notebook.

Step 1: Get my notes out of Evernote

There are two Evernote applications you can use on your device: the slick, newer version, and the old, legacy version. The new version looks nice, but it dropped a significant feature I had been counting on: the ability to export all notes at once in HTML format. Even the legacy version seems to lack this feature (maybe just on OSX?), though in the legacy version you can still export individual notebooks as HTML collections. In the newer version you can only export notebooks in Evernote’s own ENEX file format (a kind of XML archive of notebook content). This seemed like it was going to be a show-stopper, since I had no clue how I would convert ENEX files into Markdown files, but a friend pointed me to the excellent utility evernote2md, which does exactly that. Since I had 132 individual notebooks, it took me a while to export them all and then convert them to Markdown with this utility, but once done, I had all of my saved notes in Markdown format (along with their attachments).
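
The conversion step can be scripted. Here's a minimal sketch, assuming a directory of exported .enex files and evernote2md's basic invocation of an input file plus an output directory (the export path is just an example; check the project's README for the exact options it supports):

cd ~/evernote-exports    # wherever the exported .enex files live (path assumed)
for nb in *.enex; do
  # convert each notebook export into a directory of Markdown files and assets
  evernote2md "$nb" "${nb%.enex}"
done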

The total size of my exported Markdown notes and attachments is around 2GB.

Step 2: Fixing file names

I noticed a few things pretty quickly after my initial export:

  1. Most Markdown file names were derived from the Evernote note title, which means they were typically in Title Case with underscores for space separators.
  2. MANY Markdown files, for one reason or another, had leading or trailing underscores.
  3. Some Markdown files – mostly ones clipped from Reddit threads – had strange naming conventions, e.g., _r_<subreddit-name>_<super-long-post-title>. This is because Evernote automatically uses the webpage title tag as the title of a note imported with its web clipper.
  4. MANY Markdown files were named “untitled-XX.md” (where XX is some number). Did these notes not have titles?
  5. MANY Markdown files had the word undefined randomly peppered through the file name, e.g., A_Historyundefined_and_Timeline_of_the_World.md. (I later realized this only occurred immediately preceding the word and; I speculate that an ampersand was used in the original title and that evernote2md has a bug that does not translate it correctly.)

So I had my work cut out for me. The first thing I decided to tackle was normalizing the file name case and space separator concerns. I prefer all lower-case file names, with hyphens for space separators. I hacked together a simple node.js script to traverse all of my exported notebooks and make this change.

// npm install globby
const globby = require("globby");
const path = require("path");
const renameSync = require("fs").renameSync;

// Notebook directories to process, expanded into glob patterns for their Markdown files.
const dirs = [
  // top-level notebook names omitted for REASONS
].map(dir => path.join(__dirname, dir, "**/*.md"));

(async () => {
  const filePaths = await globby(dirs);

  // Build the old-path -> new-path mapping: strip odd characters,
  // lower-case everything, and swap underscores for hyphens.
  const fixedPaths = filePaths.map(filePath => {
    const pathDir = path.dirname(filePath);
    const pathName = path.basename(filePath, ".md");
    let newPathName = (pathName.replace(/[^\w\d-]/g, "") + ".md").toLowerCase();
    newPathName = newPathName.replace(/_/g, "-");
    const newPath = path.join(pathDir, newPathName);
    return {
      oldPath: filePath,
      newPath,
    };
  });

  // Apply the renames, bailing out on the first error.
  fixedPaths.forEach(fixedPath => {
    console.info(`fixing path ${fixedPath.oldPath}...`);
    try {
      renameSync(fixedPath.oldPath, fixedPath.newPath);
    } catch (e) {
      console.error(e);
      process.exit(1);
    }
  });

  console.info("all done.");
  process.exit(0);
})();

This script worked well and addressed the first naming problem – all file names are lowercase, and hyphens delimit words instead of underscores – but there were still problems to address.

My initial gut instinct was to begin modifying my script to handle the remaining naming problems, but that just made me tired, so I turned to the INTERNET to figure out if there was a better way to do this.

Sweet Baby Jesus there is.

There is a wonderful utility called perl-rename that uses Perl’s regular expression engine to bulk rename files in-place. It’s very similar to how Vim performs find/replace, and it helped me solve several of my other problems in short order.

Getting rid of undefined

To get rid of the pesky word undefined in my note file names, I used the find command to traverse my entire notebook structure and find all the Markdown files containing that word, then passed those file paths along to the perl-rename utility, which renamed each file without its troublesome intruder.

cd $NOTEBOOK
find . -iname "*undefined*.md" -exec perl-rename --verbose --dry-run -- 's/undefined//g' '{}' \;

The actual heavy lifting is done in the substitution string: s/undefined//g, which reads like this: <substitute>/<the word undefined>/<with nothing>/<anywhere in the file name (globally)>.

(Note that the --dry-run flag will only show you what the perl-rename command would do; to actually make the changes permanent the flag must be removed from the command.)
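
For example, once the dry-run output looks right, the same command without the flag performs the rename for real:

cd $NOTEBOOK
find . -iname "*undefined*.md" -exec perl-rename --verbose -- 's/undefined//g' '{}' \;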

So far so good – no more undefined in file names. What about leading and trailing hyphens? Easy peasy.

cd $NOTEBOOK
find . -iname "*.md" -exec perl-rename --verbose --dry-run -- 's/^-//' '{}' \;
find . -iname "*.md" -exec perl-rename --verbose --dry-run -- 's/-$//' '{}' \;

Again, the magic is in the substitution.

  • In the first command, the substitution reads: <substitute>/<a dash at the beginning of the file name>/<with nothing>. (The caret ^ symbol represents the beginning of a series of characters.)
  • In the second command, the substitution reads: <substitute>/<a dash at the end of the file name>/<with nothing>. (The dollar sign $ symbol represents the end of a series of characters.)

Now for those pesky Reddit notes. Since I’d eliminated leading dashes in file names, clipped notes from Reddit would now have a file name like r-<subreddit>-<note-title>. I still wanted to know these notes were from Reddit, so I decided the following substitution was best.

cd $NOTEBOOK
find . -iname "r-*.md" -exec perl-rename --verbose --dry-run -- 's/^r-/reddit-/' '{}' \;

The substitution reads (as you probably know by now): <substitute>/<an r- at the beginning of the file name>/<with reddit- >.

Perfection.

But Nick, what about all those untitled-XX.md notes?

I’m glad you asked. There’s nothing to do with those notes but manually examine them and rename them according to their content. Which would absolutely be a pain in the ass if not for the terminal file manager ranger.

Step 3: Renaming untitled notes

Ever since I watched Luke Smith demonstrate the ranger file manager I’ve had a major boner for it, and wanted a real chance to kick its tires. The challenge of renaming all these untitled files gave me the opportunity.

Briefly, ranger is a terminal file manager that emulates some of Vim’s modal editor behavior. For example, to move through directories you use the home-row keys h, j, k, and l. To run commands you press the colon key, then enter the command name. It’s both sexy and dangerous, and since I’m a kinky guy it was love at first sight.

Since I had never used ranger for any serious file system work before, this was a great way to get used to its navigation controls and command capabilities. I quickly figured out that the home row was my navigation center, but ALSO that I wanted to move through pages of files at a time rather than just hitting j and k repeatedly. Turns out if you hold shift and hit those same keys ranger will move you half-page at a time. Excellent.

I traversed each notebook and used ranger’s find command – hitting / followed by a file name string – to quickly jump to the first instance of a file named untitled.... Ranger has a great file preview pane that immediately let me inspect the contents of each file, from which I could easily determine what the real file name should be. Renaming each file was easy enough – I typed the command :rename <new-file-name> and that did the trick.

If I perchance needed to edit the file, I simply hit the l key to enter the file itself, which opened my default text editor (set by the EDITOR and VISUAL environment variables) for immediate access. Quitting the editor returned me immediately to ranger. Hitting the n key repeated my search. And so it went, until I had renamed all untitled-XX.md files in each notebook directory.
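
To recap, these are the ranger keys and commands this workflow leaned on:

h / j / k / l        navigate directories (l also opens the selected file in your editor)
J / K                move half a page down / up
/<file-name>         jump to the first file whose name matches; n repeats the search
:rename <new-name>   rename the selected file
! (or :shell)        run an external shell command without leaving ranger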

Occasionally I realized that a note I was viewing in ranger really needed to be in another directory (notebook). So I initiated an external shell command by typing ! (alternatively I could have typed :shell) and then typed my typical shell command: mv <file-name> <other-directory>/.

All without leaving ranger.

Gandalf weeping

Step 4: Prune unused assets

By far the bulk of the disk space in each notebook is allotted to assets attached to notes – be they images, or PDFs, or audio files. Markdown files, being plain text, require little space to store – but assets, being binary, are pigs.

When I exported my notes to Markdown, evernote2md created two directories in each notebook for assets: file and image. This was uniform across notebooks, which worked to my advantage. After I exported my notes I started rummaging through each notebook directory, purging notes that were either no longer important, or too badly mangled by the export process to be of any value. But how to remove their assets as well? I hacked together another node.js script to help me find assets that were no longer referenced by any notes in a given notebook.

#!/usr/bin/env node
const path = require("path");
const execSync = require("child_process").execSync;

const args = process.argv.slice(2);
const assetDir = path.resolve(args[0] || "."); // e.g., file, or image -- assume this script is executed in an asset directory
const mdDir = path.resolve(args[1] || "..");

// Every file in the asset directory.
const lsResults = execSync(`ls ${ assetDir }`).toString().split("\n").filter(n => !!n);

const noResults = [];
lsResults.forEach(a => {
  const cmd = `grep -c -H -l "${ a }" ${ mdDir }/*.md`;
  let grepResults = [];
  try {
    grepResults = execSync( cmd ).toString().split("\n").filter(n => !!n);
  } catch ( e ) {
    // grep exits with a non-zero status when there are no matches; treat that as "no results"
  }
  console.info(">>", a);
  console.info(grepResults);
  if ( grepResults.length === 0 ) { // does not appear in any note
    noResults.push( a );
  }
});

console.info( noResults );
if ( noResults.length > 0 ) {
  console.info(`to remove - rm ${ noResults.join(" ") }`);
}

This script uses the grep command to determine whether an asset’s filename appears in the text content of any note; if it does not, the asset is included in the output at the end, which builds up a long rm command string that can be copied and run to eliminate unused assets for a given notebook directory.

The grep command flags are important here:

  • -c means to generate a count of the matching lines in a file for the given search string (in this case, the asset file name)
  • -H means to print the file name in which the match occurred
  • -l means to restrict output to matching file names only (instead of matching lines within a file)

This combination of flags produces one line per searched file, present only if the asset name is found within that file, which lets the script know how many notes reference the asset. If it isn’t referenced at all, it’s safe to delete. And so it goes.
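
As a usage sketch (the script path and notebook name here are hypothetical), I run it from inside a notebook’s asset directory and let the arguments default to the current directory for assets and the parent directory for notes:

cd $NOTEBOOK/economics/image
node ~/scripts/find-unused-assets.js        # same as: node ~/scripts/find-unused-assets.js . ..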

This process is still a work in progress. As I review each notebook, I’m pruning its assets, and keeping track of those I’ve completed.

Step 5: Add front-matter to notebooks

Several Evernote alternatives (e.g., Boostnote) and many static website generators use YAML metadata markup in Markdown files to render them appropriately. This front-matter appears at the top of the file, and follows a schema similar to the following:

---
link:
title:
description:
keywords:
author:
date:
publisher:
stats:
tags:
---

My exported Evernote notes do not have this front-matter, but as will be demonstrated later, it is critically important for targeted note searches.

So since you’re wondering, yes, I did hack together another script to inject this front-matter into every existing Markdown note within a notebook directory.

#!/usr/bin/env node
const path = require("path");
const execSync = require("child_process").execSync;
const { writeFileSync, readFileSync } = require("fs");

const args = process.argv.slice(2);
const mdDir = path.resolve(args[0] || "."); // assume this command is run in a notebook directory

// Front-matter template; <title> is replaced per note below.
const frontMatter = `
---
link:
title: <title>
description:
keywords:
author:
date:
publisher:
stats:
tags:
---
`.trim();

const capitalize = (s) => {
  return s.charAt(0).toUpperCase() + s.slice(1);
};

// Every Markdown note in the notebook directory.
const lsResults = execSync(`ls ${ mdDir }/*.md`).toString().split("\n").filter(n => !!n);
console.info(lsResults);

lsResults.forEach(m => {
  const fileContent = readFileSync( m ).toString();
  if ( fileContent.startsWith("---\n") ) {
    console.info(`${ m } has front matter, skipping...`);
    return;
  }

  // Derive a rough title from the file name: drop the extension,
  // turn hyphens into spaces, and capitalize the first letter.
  const fileName = path.basename( m );
  const formattedFrontMatter = frontMatter.replace(
    '<title>',
    capitalize(fileName.replace(/-/g, " ").replace(".md", ""))
  );

  // Prepend the front-matter to the existing note content.
  const newContent = `${ formattedFrontMatter }\n${ fileContent }`;

  writeFileSync( m, newContent );
});

Since most notes already had filenames derived from their Evernote titles, I took advantage of that fact and turned those filenames into the note’s front-matter title – sans hyphens, and with sentence case. It’s rough, I know, but better than nothing. The rest of the information I will have to add manually. The most important fields to me are title, author, and tags. (Are tags different than keywords? I don’t know.) On these fields – and the note’s file name – I will most frequently perform targeted searches.

Step 6: Searching for notes

Searching for files by name is easy. If I want to search for a file with the word taxes in it, I simply use the find command:

cd $NOTEBOOK
find . -iname "*taxes*"

This will give me a list of file paths in which the word taxes appears. I try to name my notes intelligently so this kind of search can be productive. But sometimes I want to be more specific.

In that case I can rely on the tags front-matter that I’ve added to each note. For example, I have a recipe for a mixed drink that my brother recommends. I’ve tagged this mixed drink with alcohol, and can quickly find it using the ack command (you could use grep as well, but I prefer ack):

$ ack '^tags.*alcohol.*'
alcohol/super-complex-highly-rewarding-concotion-to-drink.md
10:tags: alcohol, don-julio, grand-marnier, kombucha, cocktail

This simple command reveals that the file I’m looking for is alcohol/super-complex-highly-rewarding-concotion-to-drink.md.

In fact, I could use ack to search for any front-matter field, simply by using the correct search expression. (The observant reader will notice that the expression resembles those I used when renaming files with perl-rename. The syntax is very similar.) In this case, the search expression reads: files that contain a line that begins with 'tags' (^tags) followed by any other characters (.*) but ALSO has the word 'alcohol' in it, followed by any other characters (.*).
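
For example, to find every note attributed to a particular author (the name here is just for illustration), the same pattern works against the author field:

$ ack '^author.*Sowell.*'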

If I want to cast a wider net, I can also use ack to search for any term that occurs in any of my notes with the simple command: ack <search-term>.

Step 7: Creating new notes

Now that my notes are exported, cleaned, organized, and “front-mattered”, how do I add new notes to my notebooks?

Adding a new Markdown file is as simple as using your favorite text editor to save a file with the .md extension. Because the evernote2md export favored file and image directories for external assets, I use those same conventions for my own notes. If I had a notebook directory called economics, for example, and I had a note called the-history-of-economics.md, I might reference assets like this:

The author Adam Smith wrote the seminal work, The Wealth of Nations.

<!-- this is an image of Adam Smith -->
![Picture of Adam Smith](image/adam-smith.png)

<!-- this is a link to the das-kapital.pdf file -->
Later, Karl Marx challenged Adam Smith's ideas in his work, [Das Kapital](file/das-kapital.pdf).

Now, the vast majority of my notes are articles clipped from the Internet with the Evernote Web Clipper. As I’ve stated, this is one of the strongest features of Evernote, and the one I’ll probably miss the most.

However, I’ve since discovered the clean-mark npm package, which does much the same thing: a) it exports a web page as well-formatted Markdown, and b) it adds front-matter by default. This is now my go-to method of snipping articles from the Internet. The only caveat is that assets referenced by an article are not downloaded; they remain referenced by their individual Internet URLs. If images or external files are an integral part of an article, it will be up to me to download them manually and adjust the links accordingly.
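
A minimal usage sketch (assuming a global npm install and the CLI's basic form of passing it an article URL; the URL below is a placeholder, and the package's README documents the exact flags for output naming and format):

npm install -g clean-mark
clean-mark "https://example.com/some-interesting-article"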

Step 8: Accessing notes from multiple devices

So far I’ve entertained two methods for accessing my notes on multiple devices.

I use the MegaSync cloud storage service to back up and synchronize files across devices. It is, by far, the best cloud storage service I’ve used. Mega has clients that work on Windows, OSX, Linux, and Android – which is awesome since I have devices that run each of those. (It also supports iOS, but I have an Android phone so I don’t care :D ). Synchronizing notes and files is flawless – the only downside is that the Android client does not render Markdown files, or show their plain text content, which obviously makes it a non-ideal mobile client solution. This is my only real gripe about Mega’s mobile offering. It’s 2021: everything should render Markdown.

As an alternative, I’ve also contemplated using Github to manage all of my notes. I am very familiar with Git, and letting it manage versions of my files and track individual commits is very appealing to me. Synchronizing across devices is trivial, and Github’s web interface will render Markdown files (including embedded images) in any web browser – mobile or not. My only hesitation is that Github (unlike Mega) does not offer end-to-end encryption (my original issue with Evernote), so it does not offer me the measure of privacy I desire.

This is the last big issue I need to solve before I have a complete Evernote replacement that meets all of my needs.

Conclusion

Leaving Evernote has been an adventure, but I’ve learned a lot along the way – mostly that the tools I need to achieve my values are already within my reach, and they demand nothing but the time to learn. It’s amazing how much of our personal lives we heft into the “cloud”, to Big Tech services that don’t actually give a crap about our privacy, and will use our own data against us when we don’t bring our thoughts in line with whatever pre-established narrative to which they beat their drums. If you don’t control your data, you roll the dice on ever more tenuous odds.

Reclaiming my data – and making it my own again – has been one of the most humanizing experiences I’ve had in a long time. I hope this inspires others to embark on a similar quest, for freedom – for knowledge – for autonomy.

Black Rednecks and White Liberals by Thomas Sowell

This morning I finished reading Thomas Sowell’s book Black Rednecks and White Liberals.

Sowell has spent a lifetime (he’s 90 years old) studying the root causes of racial tension throughout the world, especially in the United States. Written in 2005, this book is far more relevant today – fifteen years later – than when he first wrote it. But Sowell saw the fomenting racial tensions in this country and vocally protested the policies and ideas that have continued to push us all into corners defined by the past, instead of securing solutions in the present. He offers solutions – not based on popular slogans or emotional sentiment – but based on the hard facts of history; based on how other peoples in other times successfully adapted to each other to make the lives of each more prosperous. If we can’t do that, he warns, the result will be pain and misery for all.

“While the lessons of history can be valuable, the twisting of history and the mining of the past for grievances can tear a society apart. Past grievances, real or imaginary, are equally irremediable in the present, for nothing that is done among living contemporaries can change in the slightest the sins and the sufferings of generations who took those sins and sufferings to the grave with them in centuries past. Galling as it may be to be helpless to redress the crying injustices of the past, symbolic expiation in the present can only create new injustices among the living and new problems for the future, when newborn babies enter the world with pre-packaged grievances against other babies born the same day. Both have their futures jeopardized, not only by their internal strife but also by the increased vulnerability of a disunited society to external dangers… To be relevant to our times, history must not be controlled by our times. Its integrity as a record of the past is what allows us to draw lessons from it.”

Scott Hanselman is Wrong

If you’ve done any Microsoft development in the last two decades you probably know the name Scott Hanselman, and are probably familiar with his blog at hanselman.com. I used to enjoy reading Hanselman’s articles, back when I wrote code for the Microsoft platform, and generally considered him to be a pretty even-keeled individual with generally insightful thoughts and balanced opinions.

That was back before virtue signaling was all the rage of course. Now it appears that he’s fallen into the trap of proving how woke he is by showing us how to change our initial git repository name from master to main.

Why make this change, you ask?

I’ll quote from his blog:

The Internet Engineering Task Force (IETF) points out that “Master-slave is an oppressive metaphor that will and should never become fully detached from history” as well as “In addition to being inappropriate and arcane, the master-slave metaphor is both technically and historically inaccurate.” There’s lots of more accurate options depending on context and it costs me nothing to change my vocabulary, especially if it is one less little speed bump to getting a new person excited about tech.

Aside from completely misunderstanding the meaning of master in git (think: master copy, not slave-driver), Scott has managed to lower himself to the point of sweating over a metaphor that encompasses all of human history, not just that of the American Antebellum South. (Check out Thomas Sowell’s chapter The Real History of Slavery in his book Black Rednecks and White Liberals. Sowell is a black, award-winning economist who grew up in the Bronx.)

The word master has quite a few other definitions that would be blotted out if we just tossed it to the side because wokeness. For example:

  • having complete control over a situation
  • learning to do something properly
  • the title of famous painters
  • the first and original copy of a recording
  • the head of a ship that carries passengers or goods
  • a college degree
  • a revered religious teacher
  • an abstract thing that has power or influence
  • a main or principal thing
  • etc.

Should we expunge these parts of our language because Scott feels bad about things he didn’t do, hundreds of years before he was born? Should we ditch any word that acts as a trigger mechanism for someone else’s discomfort?

I can tell you right now, moist isn’t making the keep list.

It makes me sad to see Hanselman become a useful idiot, and it makes me sad that, hundreds of years after the destruction of the Western slave trade (by the West!), white people are stooping to this level of bullshit because they’ve adopted some perverse doctrine of racial original sin, and are constantly trying to atone for it.

If Scott thinks self-flagellation will fix any actual racial problems in this country, he’s deluded. Changing a git repo name won’t eliminate qualified immunity, bring corrupt cops to justice, or stop our elected officials from shredding the Bill of Rights. It won’t fix fatherlessness in black America, or the blight of narcissistic, checked-out parents in white. It won’t provide healthcare, it won’t feed the poor, it won’t address mental health issues. It won’t protect us from white school shooters, and it won’t save us from black gang bangers. It’s a token gesture without teeth; emotions on meth.

People like this black American nationalist give me hope, though. He knows that real change will come through brutal honesty and personal responsibility, from all sides, and that’s the only way America can get over the blood feud we’ve been nursing for roughly two hundred years. He gets it, and he doesn’t need the contrition of false white guilt to satisfy a grudge. It’s a breath of fresh air, hearing this man speak, and I hope for all our sakes – even for the sake of Scott Hanselman – that many more like him will bring sanity to the sanitarium that is present-day America.

Bye Bye Google Play Music

This is a post I uploaded to Facebook on March 9.

In my quest to remove my personal data from google services and quit giving them my money, I’ve been searching for a service that would transfer my entire Google Music library (which is substantial) to another service. I don’t mind paying a subscription fee, just don’t want to pay it to Google, but I’ve used Music for so long that transferring my library manually would be monumental.

Over the weekend, however, I discovered Soundiiz.com, which automates the transfer of pretty much any music service to any other. They charge a monthly fee of $4.50 for the service, but I only had one transfer to conduct, and for $4.50 it’s a no-brainer (I will cancel my Soundiiz subscription before I have to pay another month).

In about 30 minutes Soundiiz managed to transfer my entire Google Music library to Spotify with a ~90% success rate (not all music on GM is available on Spotify, and I had a lot of custom uploads on GM which, to my knowledge, Spotify does not support). I was blown away at how smooth the process was.

So if you need to get away from a Big Tech music service, I highly recommend checking out Soundiiz. They support pretty much every major music service, and many minor ones.

The only thing Soundiiz didn’t sync was my podcasts, but I only had about a dozen or so on GM, so manually setting them up in Spotify was a cinch.

UPDATE: March 19

I am completely happy with my move to Spotify. Not only can I use it on all of my devices, but I’ve also noticed a few immediate benefits:

  • I can connect remotely to devices in my house that also run Spotify software, and play my music through those devices. So I can control my desktop computer with my phone, which is a huge feature that Google Play lacks.
  • Spotify’s podcast library is bigger than Google Play’s. Well, at least for the podcasts that pique my interest.
  • If you want to listen to your own local music on Google Play, you have to upload it, and you’re limited to a certain number of songs. (To be fair, it is a generous amount.) Spotify takes a different approach: you configure it to look for music in a directory on your device, and it will play whatever it finds there. If you have multiple devices and want your local library available on each, you will need to sync your local library with a cloud service (I recommend MegaSync).
  • Spotify’s music recommendations seem better. Google Play is decent, but Spotify seems to have more relevant recommendations.

import facepalm;

Sometimes bugs can be particularly evasive, and today I had such a one.

A module deep in our codebase was throwing an Error, but only in Mozilla’s Firefox browser.

The error was NS_ERROR_ILLEGAL_VALUE.

I did some quick DuckDuckGoing and found that the error occurs when a native DOM function in Firefox is passed a value of a type it does not expect.

The stack trace led back to this line in our application code:

const hit = find( cache, c => c.original === obj );
if ( hit ) {
  return hit.copy;
}
// ...some time later...
return someUncachedObject;

“@-E$!&&@#”, I thought. “Why is lodash’s find() function passing a bad value to a native function?”

You see, I use lodash all the time. So much, in fact, that I made one fatal error in my diagnosis.

I assumed that because the find() function was defined, lodash had indeed been imported.

How. Wrong. I. Was.

It turns out that window.find() is, in fact, a non-standard, but nevertheless nearly omnipresent function that is designed to search a DOM document for a particular string. And since any function attached to window is global, a missing import of the same name – say, a missing lodash/find import – would not raise any alarms. The code built. The code ran. And it ran without error in every browser but Firefox. Why?

The window.find() function expects a first argument of type String. In modern browsers other than Firefox (pretty much all the Chromium-based browsers), passing a non-String argument to window.find() will simply cause the function to return false. As you can see in the snippet above, this rendered the cache useless, but the application nevertheless continued to work. In Firefox, however, window.find() will throw if its first argument is not of type String. Thus my bug.
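
Here is a minimal sketch of the trap, assuming no lodash import is in scope (the cache and obj shapes are hypothetical):

// No `import find from "lodash/find";` here, so `find` resolves to the
// global window.find(), which expects a String as its first argument.
const cache = [{ original: { id: 1 }, copy: { id: 1 } }];
const obj = { id: 1 };

// Per the behavior described above: Chromium-based browsers simply return
// false from window.find() here (quietly defeating the cache), while
// Firefox throws NS_ERROR_ILLEGAL_VALUE.
const hit = find(cache, c => c.original === obj);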

I am ashamed to say how long it took me to realize lodash/find was not the function being called.

In the end I applied the great wisdom of Spock’s ancestors, and started considering things that could not possibly be the case, until it dawned on me that perhaps – just perhaps – find() was not what it appeared to be after all.



And a single import find from "lodash/find"; statement fixed the bug.

Fun with Homebrew casks

One of my favorite utilities for OSX is Homebrew, a package manager that lets you easily install programs from the terminal.

One of my favorite pastimes is thumbing through the Homebrew Cask recipes to find new programs and utilities to install. Some are pretty nifty, like Zotero, which manages research bibliographies. Or Electric Sheep, which harnesses the power of your “sleeping” devices to crowdsource digital artwork. Or Finicky, which lets you specify which of your web browsers should open specific links. (Maybe you use Brave for normal browsing but want to open all google.com links in Chrome.)

Unfortunately the Cask recipe files have no real descriptions, so I usually just fish through them and dig out the homepage links of any Cask recipe file that has an interesting name. It’s kind of like a digital treasure hunt.

To make things even more fun, I cloned the homebrew-cask repo and came up with a simple shell script that will randomly choose a recipe and open its homepage for me.

find ~/projects/homebrew-cask/Casks -type f | \
shuf -n 1 | \
xargs cat | \
ack homepage | \
awk '{print $2}' | \
xargs open

Joker Movie Review

I finally watched Joker last night.

It is dark. But the movie earns all of its praise. It really is an extremely well done film. Everything works: cinematography, acting, music. It is very well crafted.

There are several major intertwined themes running through the film. A lot of people have interpreted it in a lot of ways, and they aren’t wrong; there’s a lot to unpack.

The biggest theme is probably “rich, powerful people are extremely out of touch with average, or below average people”, which we see every day now (Epstein, Weinstein, Clintons, etc.). This is the big Wayne connection to Joker. The movie tips the whole Batman story on its head; you actually feel sympathy for Arthur Fleck, and while you don’t think the Waynes are necessarily “bad”, the film portrays them as very out of touch, and condescending to the plight of the less fortunate. It’s actually well done; I didn’t feel that I was being preached at, like “look at the bad rich people!”. The message was more, “see the chasm here – these people have no idea what it’s like to live in normal society”.

The second is probably the damage that single mothers can do to sons. The void of an absent father can have a shattering impact on a child. This was probably the most gut-wrenching part of the film for me, and I’m surprised the writers tackled it. Well done though.

The next is probably mental health, and how we deal with disturbed people in society. Although I’m not sure I’d call this a “theme” because it’s more of a given in the film, that society handles this poorly. It’s not making a statement about it per se, it’s just assuming it. But the portrayal is well done, and thought provoking.

A tangential theme is that people need some degree of power in their lives – to feel they control something – and when that is taken away, the hopelessness they experience can cause them to seek power in other, socially taboo ways (e.g., violence), not because they would have preferred that, but because they psychologically have no alternative. This theme is more subtle, but probably the biggest statement the movie makes. The progressive criticism (“this movie is just about white, violent, incels!”) targets this theme, because progressives don’t believe that white males can ever be powerless.

Overall I highly recommend the film. As I said, it earns all the praise it gets. It’s one of the few films I’d consider a modern masterpiece.

Book Review - The Mythical Man-Month

The Mythical Man-Month is one of those books that is, well, mythical in the circles to which it pertains – that is, the software engineering and computer science fields. It is mythical because many people have heard of it, most agree that it is “classic”, but not many remember exactly why. Perhaps they have never read it, only absorbed its ideas through hearsay. Or perhaps they did read it, but so long ago that its principles have been lost in the tide of time.

Either way, I have finally finished the book in what I consider to be an unreasonable amount of time. It’s not overly long, or overly verbose, but I have a bad habit of reading a little from a lot of books at the same time, which means I don’t finish a book for a while. I took notes as I went so that hopefully time will be more gracious to my mind when someone asks me, in the years to come, if I’ve read Frederick Brooks.

Widely considered the central theme of the book, Brooks’s Law, in summary, is that adding programmers to a late software project will not make it go faster, but rather slower. This was a pattern Brooks saw during his years as a manager in larger companies that needed many engineers to write software either for internal use, or eventually for sale as products. Managers assumed that the central problem of software development – why projects did not finish on time or on budget – was a mechanical one that could be resolved with mechanical solutions. Maybe there just wasn’t enough manpower. Or maybe the tooling was inferior, and retarded progress. Or maybe the wrong language had been chosen for the task. While all of these things can, and do, affect software engineering endeavors, Brooks’s major insight was that they were the accidents of software engineering, and not its essence; and that the essence is what causes projects to fail.

The essence of “systems programming” (as Brooks called it) is one of complexity – an irreducible complexity1 – that of itself cannot be fixed by mechanical solutions. This complexity arises from the fact that software development is a creative, human process. The engineer must, to write a program, conceptualize the problem space correctly and then use the tools at his disposal to create a solution. As projects grow, engineers are added, the consequence of which, as Brooks keenly observed, tends to make the project slower because it increases the number of communication pathways among team members (with every addition), and the conceptual foundation of the project becomes spread among many minds, in ways that are often fragmented and incorrect. This, Brooks argues, is the core problem, and the solution to the problem is to adapt to it rather than try to conquer it.

“Complexity is the business we are in, and complexity is what limits us.”

How does one adapt to the problem of conceptual complexity in software engineering? Brooks proposed a number of solutions.

Conceptual integrity and communication

Brooks proposed that the conceptual integrity of a project – the core ideas about what the problems are and the models used to represent those problems – is of primary importance and must be safeguarded. The most efficient way to ensure this happens is to entrust that integrity to one, or at most a couple, of individuals, who are responsible for enforcing it by vetting the work of other team members on the project. They become the source of conceptual truth.

Communication and team structure

Because complexity scales with the number of communication pathways in a team, Brooks proposed that “surgical teams” be used in most software projects. These teams will be composed of the conceptual guardian(s) (the “surgeon”), and as few people as possible to get the work done. These teams are part of an organization as a whole, however, and there is always a management structure with which they must integrate. The key to good management, according to Brooks, is to realize that management is about action and communication. The person at the top should rely on his subordinate program managers to take action when needed, and he should give them the authority to do so. He should never, ever demand action when reviewing a general status report, however, because this will debilitate his program managers, and move the decision-making power further from the decisions that need to be made. Project managers should be concerned almost exclusively with managing the lines of communication in the team, and not with making decisions at all. The whole process of pushing decision making “down” to the program manager is effective because it gives program managers a stake in the total authority of the company, and therefore preserves the total authority of the company.

“The purpose of organization is to reduce the amount of communication and coordination necessary…”

“…the center gains in real authority by delegating power, and the organization as a whole is happier and more prosperous.”

Complexity in code

Complexity can be addressed in the code itself by reducing the mental burden a programmer has to carry while implementing code that has conceptual integrity. In the first edition of Brooks’s book, he insisted that all programmers on a team be familiar with all modules (or entities) within a software project. In Brooks’s mind, this was a good way to safeguard the integrity of the system, because everyone would have a working understanding of all code. In a subsequent edition of the book, he backtracked on this position, because it essentially suffered from the mental equivalent of the communication problem. Code changes over time; no programmer ever has a complete and accurate understanding of the system because it is not static. Brooks eventually came around to a view promoted by Canadian engineer David Parnas:

“[David] Parnas argues strongly that the goal of everyone seeing everything is totally wrong; parts should be encapsulated so that no one needs to or is allowed to see the internals of any parts other than his own, but should see only the interfaces… [I initially proposed that] Parnas’s proposal is a recipe for disaster [but] I have been quite convinced otherwise by Parnas, and totally changed my mind.”

Information hiding, or encapsulation, allows a programmer to “use”, or take advantage of, code without having to know how it works internally – only how to ask it for something. The mental footprint of understanding an interface (the way to ask code for something) is orders of magnitude smaller than the mental footprint required to understand the implementation behind the interface. And interfaces don’t change nearly as often as implementations (in a well-designed system).

Side-effects (changes created by a piece of code that affect things beyond that code, or even the system), likewise, should all be identified, well understood, and encapsulated (or eliminated) to reduce the mental burden of worrying about tangential consequences of implementations, which are often causes of bugs and project delays.

Documentation

Documentation is central to adapting to complexity. Documenting decisions made in a software project is part of fostering the creative process itself:

“…writing the decisions down is essential. Only when one writes do the gaps appear and the inconsistencies protrude. The act of writing turns out to require hundreds of mini-decisions, and it is the existence of these that distinguishes clear, exact policies from fuzzy ones.”

Documentation need not be overly verbose, however; although overly verbose documentation is better than no documentation, Brooks believed. And the documentation regarding technical decisions – design and implementation – should be as close to the code itself as possible (even within the same code files) to ensure the documentation will be maintained and updated as the code itself changes. The goal of documentation should be twofold: 1) to create an overview of the particular system concern the documentation addresses, and 2) to identify the purpose (the why) of the decisions made in regard to that concern. Documentation is not only for other programmers to read; it is often to benefit the original author as well.

“Even for the most private of programs, prose documentation is necessary, for memory will fail the user-author.”

Conclusion

There are many more things I could say about Mythical Man-Month. More subtle insights, longer expositions on points articulated above. But these points are the ones that stuck with me most, the ones I felt were immediately relevant to my own work and career. Mythical Man-Month is one part history, one part knowledge, and one part wisdom. It lets us know that great minds in the past struggled with, devised solutions to, and retracted and revised solutions to, the problems that make programming a struggle of genuine creativity.

This last quote is taken from a critique of Brooks, one that he found to be excellent, and expresses the same conclusion (albeit maybe a bit more pessimistic) at which Brooks himself arrived: the central problem of software is complexity, not tools or processes, and that will never change.

“Turski, in his excellent response paper [to No Silver Bullet] at the IFIP Conference, said eloquently: ‘Of all misguided scientific endeavors, none are more pathetic than the search for the philosopher’s stone, a substance supposed to change base metals into gold. The supreme object of alchemy, ardently pursued by generations of researchers generously funded by secular and spiritual rulers, is an undiluted extract of wishful thinking, of the common assumption that things are as we would like them to be. It is a very human belief. It takes a lot of effort to accept the existence of insoluble problems. The wish to see a way out, against all odds, even when it is proven that it does not exist, is very, very strong. And most of us have a lot of sympathy for those courageous souls who try to achieve the impossible. And so it continues. Dissertations on squaring a circle are being written. Lotions to restore lost hair are concocted and sell well. Methods to improve software productivity are hatched and sell very well. All too often we are inclined to follow our own optimism (or exploit the hopes of our sponsors). All too often we are willing to disregard the voice of reason and heed the siren calls of panacea pushers.’”


  1. Not to be confused with the same term used by some creationists to defend their particular ideas about the origin of the universe.

Author’s note: I have removed the two articles I posted previously with extensive quotes from The Mythical Man-Month, because I was unsure whether the number of quotes posted constitutes “fair use” or not, and I wish to respect the copyright holder’s interests. I have not been contacted by the copyright owner or received any kind of DMCA letter; this is entirely my own decision. I have used those quotes to formulate the article above.

Protip - return from exceptional conditions early

During a recent code interview, I noticed a React component with a render method written in the following (abbreviated) form,

render() {
  return this.state.items.length > 0 ? (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  ) : (
    ''
  );
}

where ComponentWithLotsOfProps had at least a dozen props, some of which were not simple primitive values.

While there is nothing technically wrong with this render method, it could be better. It suffers from a few deficiencies.

First, ternaries are objectively difficult to read when they are not short. It is difficult to grok what the method actually produces because the whole ternary is returned, requiring the reader to do double work to find the “implicit” returns (there are two) rather than looking for the easily identifiable return keyword.

Second, one must read the entire method to know what gets returned if there are no items in state. Is it a component? Is it null? Is it an empty string? That is unknown until the whole method has been read.

Third, if additional conditions are required in future work to determine what will be rendered, they cannot easily be introduced in this method.

A better alternative is to omit the ternary, and explicitly return the exceptional condition values first.

render() {
  if (this.state.items.length === 0) {
    return '';
  }

  return (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  );
}

Due to reduced nesting, this is far easier to read, and return values are also easily identifiable. If additional conditions must be evaluated in the future, modifying this method becomes much simpler:

render() {
  if (this.state.items.length === 0) {
    return '';
  }

  if (this.state.items.length === 1) {
    return (<SingleItemComponent item={this.state.items[0]} />);
  }

  return (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  );
}

As with most things in programming: the simpler, more explicit, the better.